From: Reto Büttner
Subject: Re: [Paparazzi-devel] Vision assisted landing
Date: Wed, 4 Jun 2014 08:30:00 +0200
Hi,

For my master thesis I am building a system to automatically land a quad-rotor at a station, swap its battery, and take off again. I already have the station's visual-recognition algorithm ready to test, but I would like to know the best way to communicate the landing navigation procedure to Paparazzi. The visual-recognition algorithm would run on a device like a Gumstix and communicate over a serial interface.

I see two ways I could communicate the navigation procedure.

Approach 1:
1. Gather flight data (attitude/position) from Paparazzi.
2. Gather the computed position from the visual algorithm.
3. Put the quad-rotor in a stabilized mode (in Paparazzi I think it is called hold).
4. With the above, roll my own Kalman filter.
5. Create a PID controller that outputs 3-axis movement commands (x, y, z); attitude control would still be done by Paparazzi.
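The outer-loop controller in step 5 could look something like the following minimal sketch: one PID per axis, turning the position error (target minus estimate) into a velocity-style command, while the inner attitude loop stays in Paparazzi. All names here (`AxisPID`, `landing_controller`, the gains) are illustrative, not part of any Paparazzi API.

```python
# Hypothetical 3-axis outer-loop PID sketch; gains and names are made up.
class AxisPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        # Standard PID: proportional + integral + derivative terms.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def landing_controller(target, estimate, pids):
    """Return (vx, vy, vz) commands from the per-axis position error."""
    return tuple(pid.step(t - e) for pid, t, e in zip(pids, target, estimate))

# Drive the vehicle toward the station at the origin from (1.0, -0.5, 2.0) m.
pids = [AxisPID(kp=0.8, ki=0.05, kd=0.2, dt=0.02) for _ in range(3)]
cmd = landing_controller((0.0, 0.0, 0.0), (1.0, -0.5, 2.0), pids)
```

Each command pushes opposite to the offset: negative x and z commands, positive y, as expected for converging on the target.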
Approach 2:
1. Gather flight data (attitude/position).
2. Compute the position offset from the visual algorithm's output and the gathered flight data.
3. Feed the offset into Paparazzi's flight-data estimator.

Which of these would be the best approach to follow? Either one requires me to write an Ivy parser. Unfortunately, I had already started creating a C++ MAVLink message library that would let me execute the navigation procedure of approach 1. I understand Paparazzi uses the Ivy message system instead, and it looks like Ivy has a much higher-level API with most of the event-handling work already included. Would it be much effort to implement the messaging required for either of the two navigation procedures?

Thank you,
Paulo Neves
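Since Ivy subscriptions are plain regular expressions over whole message strings, the parsing side of such an "Ivy parser" can be prototyped with nothing but the standard `re` module. The sketch below matches a Paparazzi-style telemetry line of the form `<ac_id> ROTORCRAFT_FP <fields...>`; the field order assumed here (east, north, up first) is illustrative only and should be checked against the actual message definition.

```python
import re

# A subscription-style regex, as one would hand to an Ivy bind call.
# Assumed layout: "<ac_id> ROTORCRAFT_FP <east> <north> <up> ..." (not verified).
FP_RE = re.compile(r"^(\d+) ROTORCRAFT_FP (-?\d+) (-?\d+) (-?\d+)")

def parse_position(msg):
    """Return (ac_id, east, north, up) or None if the message doesn't match."""
    m = FP_RE.match(msg)
    if m is None:
        return None
    ac_id, east, north, up = (int(g) for g in m.groups())
    return ac_id, east, north, up

pos = parse_position("42 ROTORCRAFT_FP 256 -128 512 0 0 0")
```

With a real Ivy binding, the same regex would be registered once and the callback would receive the captured groups directly, so the event-handling work the library advertises largely replaces this manual matching loop.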
_______________________________________________
Paparazzi-devel mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/paparazzi-devel