paparazzi-devel

[Paparazzi-devel] Vision assisted landing


From: Paulo Neves
Subject: [Paparazzi-devel] Vision assisted landing
Date: Tue, 3 Jun 2014 14:52:52 +0100

Hi,
I am doing my master's thesis on a system that lands a quad-rotor automatically at a station, swaps its battery and takes off again.

I already have my station visual recognition algorithm ready to test, but I would like to know the best way to communicate the landing navigation procedure to Paparazzi. The visual recognition algorithm would run on a device like a Gumstix and communicate through a serial interface.
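To make the serial side concrete, this is roughly what I have in mind on the Gumstix: once the vision algorithm has a pad position, pack it into a small frame and write it to the serial port. This is only a sketch; the device name, baud rate and message layout are my own placeholders, not an existing Paparazzi protocol.

/* Gumstix side sketch: send the detected pad offset over the serial link. */
#include <fcntl.h>
#include <stdint.h>
#include <termios.h>
#include <unistd.h>

struct __attribute__((packed)) pad_msg {
  uint8_t start;               /* framing byte, placeholder value */
  float dx_m, dy_m, dz_m;      /* pad offset in the camera/body frame [m] */
  uint8_t checksum;            /* XOR of the payload bytes */
};

int open_serial(const char *dev)
{
  int fd = open(dev, O_RDWR | O_NOCTTY);
  if (fd < 0) return -1;
  struct termios tio;
  tcgetattr(fd, &tio);
  cfmakeraw(&tio);
  cfsetspeed(&tio, B57600);    /* baud rate is an assumption */
  tcsetattr(fd, TCSANOW, &tio);
  return fd;
}

void send_pad_offset(int fd, float dx, float dy, float dz)
{
  struct pad_msg m = { .start = 0x99, .dx_m = dx, .dy_m = dy, .dz_m = dz };
  uint8_t *p = (uint8_t *)&m;
  uint8_t ck = 0;
  for (size_t i = 1; i < sizeof(m) - 1; i++) ck ^= p[i];  /* skip start and checksum */
  m.checksum = ck;
  (void)write(fd, &m, sizeof(m));
}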

I see two ways I could communicate the navigation procedure:

Option 1:
1. Gather flight data (attitude/position) from Paparazzi.
2. Gather the computed position from the visual algorithm.
3. Set the quad-rotor to some stabilized mode (in Paparazzi I think it is called hold).
4. With the above, roll my own Kalman filter.
5. Create a PID controller that outputs movement on the three axes x, y, z (see the sketch after this list); the attitude control would still be done by Paparazzi.
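For step 5, something along these lines is what I picture for the outer loop. It is a minimal sketch: the gains, the 20 Hz update rate and the way the setpoint would be handed to Paparazzi's guidance are all assumptions on my part.

/* Outer-loop sketch: pad offset in local NED in, velocity setpoint out. */
#include <stdio.h>

struct axis_pid { float kp, ki, kd, i, prev_err; };

static float pid_step(struct axis_pid *c, float err, float dt)
{
  c->i += err * dt;
  float d = (err - c->prev_err) / dt;
  c->prev_err = err;
  return c->kp * err + c->ki * c->i + c->kd * d;
}

int main(void)
{
  struct axis_pid px = { .kp = 0.8f, .ki = 0.05f, .kd = 0.3f };
  struct axis_pid py = px;
  struct axis_pid pz = { .kp = 0.5f, .ki = 0.02f, .kd = 0.2f };
  float dt = 0.05f;                         /* 20 Hz vision/filter update, assumed */
  float ex = 1.2f, ey = -0.4f, ez = 2.0f;   /* example pad offsets in NED [m] */
  printf("velocity setpoint: vx=%.2f vy=%.2f vz=%.2f m/s\n",
         pid_step(&px, ex, dt), pid_step(&py, ey, dt), pid_step(&pz, ez, dt));
  return 0;
}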

Option 2:
1. Gather flight data (attitude/position) from Paparazzi.
2. Compute the position offset from the visual algorithm's measurement and the gathered flight data (see the sketch after this list).
3. Feed the offset to the flight data estimator.
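For step 2 of this option, the core would be rotating the offset measured in the camera/body frame into the local NED frame using the attitude gathered from Paparazzi, roughly like this (the frame conventions, ZYX Euler angles and body FRD axes, are my assumption):

/* Rotate a body-frame pad offset into NED using roll/pitch/yaw. */
#include <math.h>
#include <stdio.h>

static void body_to_ned(float phi, float theta, float psi,
                        const float b[3], float n[3])
{
  float cphi = cosf(phi),   sphi = sinf(phi);
  float cth  = cosf(theta), sth  = sinf(theta);
  float cpsi = cosf(psi),   spsi = sinf(psi);
  /* rows of the body-to-NED rotation matrix, R = Rz(psi)*Ry(theta)*Rx(phi) */
  n[0] = cth*cpsi*b[0] + (sphi*sth*cpsi - cphi*spsi)*b[1] + (cphi*sth*cpsi + sphi*spsi)*b[2];
  n[1] = cth*spsi*b[0] + (sphi*sth*spsi + cphi*cpsi)*b[1] + (cphi*sth*spsi - sphi*cpsi)*b[2];
  n[2] = -sth*b[0] + sphi*cth*b[1] + cphi*cth*b[2];
}

int main(void)
{
  float body_offset[3] = { 0.5f, -0.2f, 3.0f };  /* example: pad 3 m below the camera */
  float ned_offset[3];
  body_to_ned(0.02f, -0.05f, 1.57f, body_offset, ned_offset);
  printf("offset NED: %.2f %.2f %.2f m\n", ned_offset[0], ned_offset[1], ned_offset[2]);
  return 0;
}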

What would be the best approach to follow?

Either of these two approaches requires me to write an Ivy parser. Unfortunately, I had already started writing a C++ MAVLink message library that would allow me to execute the navigation procedure of option 1. I understand Paparazzi uses the Ivy message system, and it looks like it has a much higher-level API with most of the event-handling work already included.
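For reference, this is roughly how I picture the Ivy side. It is only a sketch, assuming the ivy-c API (IvyInit/IvyBindMsg/IvyStart/IvyMainLoop) and the ROTORCRAFT_FP telemetry message; the regexp, field order and units would still need to be checked against conf/messages.xml.

/* Minimal Ivy subscriber sketch: listen for rotorcraft position telemetry. */
#include <stdio.h>
#include <stdlib.h>
#include <Ivy/ivy.h>
#include <Ivy/ivyloop.h>

static void on_rotorcraft_fp(IvyClientPtr app, void *user_data,
                             int argc, char *argv[])
{
  (void)app; (void)user_data;
  if (argc < 4) return;
  /* argv[0] is the aircraft id; the next fields are assumed to be
   * east/north/up in Paparazzi's fixed-point units (to be verified). */
  long east  = atol(argv[1]);
  long north = atol(argv[2]);
  long up    = atol(argv[3]);
  printf("AC %s pos: east=%ld north=%ld up=%ld\n", argv[0], east, north, up);
}

int main(void)
{
  IvyInit("vision_landing", "vision_landing READY", NULL, NULL, NULL, NULL);
  IvyBindMsg(on_rotorcraft_fp, NULL,
             "^(\\S*) ROTORCRAFT_FP (\\S*) (\\S*) (\\S*)");
  IvyStart("127.255.255.255:2010");   /* default Paparazzi Ivy bus address */
  IvyMainLoop();
  return 0;
}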

Would it be much effort to implement the messaging required for either of the two navigation procedures?

Thank you
Paulo Neves

