

Re: [Paparazzi-devel] FPV app on AppStore for H.264 wifi video links

From: Gerard Toonstra
Subject: Re: [Paparazzi-devel] FPV app on AppStore for H.264 wifi video links
Date: Wed, 20 Mar 2013 13:26:35 -0300

I actually migrated from Linux to the Mac. My initial idea was to run the HD display on a device like the Pandaboard or
a Raspberry Pi. These are light enough to carry into the field and have HDMI or DVI outputs. (The objective of all this was to
get streaming HD video on the ground, so HD goggles are an assumption.)

I moved from Linux to the Mac primarily for two reasons:
- On those embedded platforms, H.264 decoding is done by totally closed DSP chips, so there's nothing you can do if latency is not up to par.
- The display drivers for Linux didn't work with my HMZ-T1 goggles after 10.10 or so, so the drivers are an issue.

On the Mac everything just works and I'm also using it as a home theater and company administration machine.

Well, as a quick workaround… I believe my research thread also contains information on how I did the decoding back then.
If you have GStreamer installed, you can construct a simple GStreamer command line to put the video on the desktop.
This won't give you the OSD or any other functionality, but it's a start.
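For anyone wanting to try that, a receive pipeline along these lines should work. The port, payload type and element names are assumptions to adapt to your own link (these are GStreamer 1.x names; the 0.10 series that was current back then used e.g. ffdec_h264 instead of avdec_h264):

```shell
# Receive an RTP/H.264 stream over UDP and render it in a desktop window.
# sync=false disables clock synchronisation in the sink, which helps latency.
gst-launch-1.0 udpsrc port=5600 \
    caps="application/x-rtp,media=video,encoding-name=H264,payload=96" \
  ! rtph264depay ! avdec_h264 ! videoconvert \
  ! autovideosink sync=false
```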

I'm impressed with the visual performance of the HMZ-T1, but not so much with its ergonomics. There's another goggle set whose
performance I'm curious about, the ST-1080. That one is much more portable in the field and runs off a 5V battery pack, so it could be
a better choice, but I haven't used it yet. The Rift has been mentioned by some, but that definitely needs an SDK.
It won't display the desktop correctly without one (see the FAQ), so you'd end up creating a 3D environment with texture-mapped quads or something similar
to be able to see the entire screen (its FOV is too wide, and the pixel projection density apparently isn't constant, with fewer pixels per degree in peripheral vision).

If anyone wants to know how I created a very low-latency video stream from a desktop running the flight simulator, let me know. 
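As a sketch of how the sending side of such a desktop stream might look with GStreamer (the host, port and bitrate are placeholders, and x264's zero-latency tuning does most of the work; this is my guess at a setup, not necessarily the one I used back then):

```shell
# Capture the X11 desktop, encode with x264 tuned for low latency,
# and send it out as RTP/H.264 over UDP.
gst-launch-1.0 ximagesrc use-damage=false \
  ! videoconvert \
  ! x264enc tune=zerolatency speed-preset=ultrafast bitrate=4000 key-int-max=30 \
  ! rtph264pay config-interval=1 pt=96 \
  ! udpsink host=192.168.1.50 port=5600
```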



On Mar 20, 2013, at 1:04 PM, Felix Ruess <address@hidden> wrote:

Hi Gerard,

Very cool stuff!! Makes me want to get a nice video link as well ;-)
Will your app only be available for the Mac, or will it somehow run on, say, Linux as well at some point?

Cheers, Felix

On Mon, Mar 18, 2013 at 10:03 PM, Gerard Toonstra <address@hidden> wrote:

Hi there,

I'd like to announce something not immediately related to Paparazzi *now*, although this may change in the future…

I've been working on an affordable, accessible and low-latency (HD) video downlink for consumers and hobbyists for a year or so,
using my own resources. Software-wise things have been going smoothly, but hardware-wise there are still issues to resolve.
There's a forum post where a lot of the research so far is documented:

The ground station software that decodes and shows the transmitted video is now stable enough that I've released it as
an app on the AppStore. It's called "FPV" (First Person View) as it mostly targets hobbyists, but newer features that I have
in the pipeline may make it very interesting for attempts to integrate it with some Paparazzi ground station modules or
the autopilot itself. The FPV app decodes H.264 video over wifi and uses some long-range wifi modules to get all the
data across. (Actually, in theory, Paparazzi could already use a wifi link for telemetry using the w5100 module that I created, but that still
needs some work to remove the wait loops and other cleanups.) There has been some other work done with Gumstix, I believe.
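To illustrate what telemetry over a wifi link amounts to, here is a minimal sketch of Pprz-style transport framing (start byte, length, payload, two running-sum checksum bytes) carried over a UDP socket. The framing details are my reading of the Paparazzi transport and should be checked against the protocol description before relying on them:

```python
import socket

STX = 0x99  # Paparazzi transport start byte (assumed from the Pprz spec)

def pprz_frame(payload: bytes) -> bytes:
    """Frame a payload: STX, length, payload, two checksum bytes.
    The length byte counts the whole frame (STX + length + payload + checksums)."""
    length = len(payload) + 4
    ck_a = ck_b = 0
    for b in bytes([length]) + payload:   # checksum covers length + payload
        ck_a = (ck_a + b) & 0xFF
        ck_b = (ck_b + ck_a) & 0xFF
    return bytes([STX, length]) + payload + bytes([ck_a, ck_b])

def pprz_unframe(frame: bytes) -> bytes:
    """Validate framing and checksums; return the payload or raise ValueError."""
    if len(frame) < 4 or frame[0] != STX or frame[1] != len(frame):
        raise ValueError("malformed frame")
    ck_a = ck_b = 0
    for b in frame[1:-2]:
        ck_a = (ck_a + b) & 0xFF
        ck_b = (ck_b + ck_a) & 0xFF
    if (ck_a, ck_b) != (frame[-2], frame[-1]):
        raise ValueError("checksum mismatch")
    return frame[2:-2]

# Send one framed telemetry datagram over UDP loopback and read it back.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(pprz_frame(b"\x01\x02ATTITUDE"), rx.getsockname())
data, _ = rx.recvfrom(256)
assert pprz_unframe(data) == b"\x01\x02ATTITUDE"
```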

The link to the App in the AppStore is here:

A video showing the app with a video stream from a simulator is here:

(The video stream is created by running the FlightGear simulator on Linux, recording the desktop in real time and transferring it over
my private network. The telemetry is taken from FlightGear as well and used to paint the OSD.) It's also interesting to note that the OSD
at the top shows the "technical" info you'd typically find on an OSD designed by an engineer. The bottom view is more of a
view designed after Cognitive Work Analysis.

The manual, protocol description and more information on the hardware that I used for field testing is here:

The new features I mentioned are all about showing virtual cues in the streamed image. Think of a "virtual tunnel" for flying a specific trajectory manually,
or points of interest and how these POIs could be generated by touching a display showing the same streamed image.
The idea behind this is that spectators who are watching along with an FPV flight might want to request flying over
a specific area or point, and need a way to communicate that exact point to the pilot. In more serious applications
you'd probably have a vehicle with an AP, and then it's necessary to communicate effectively between team members on the ground:
one person is scanning a camera image for POIs and at some point needs to communicate his perceptions to the
flight operator. How do you quickly get a POI seen in a camera image onto a 2D map so that navigation changes
can be made effectively?
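One way to think about that last question: with a calibrated camera and a known aircraft pose, the touched pixel defines a ray that can be intersected with a ground model to get a map coordinate. A minimal sketch under deliberately simple assumptions (pinhole camera, pitch-only attitude, flat ground; the function name and parameters are illustrative, not from the app):

```python
import math

def pixel_to_ground(px, py, f, pitch_deg, alt):
    """Project an image pixel onto flat ground.
    px, py: pixel offsets from the principal point (right, down)
    f: focal length in pixels; pitch_deg: camera tilt below horizontal
    alt: camera height above ground (m).
    Returns (forward, right) ground offsets in metres, or None if the
    ray points at or above the horizon and never hits the ground."""
    th = math.radians(pitch_deg)
    denom = f * math.sin(th) + py * math.cos(th)
    if denom <= 0:
        return None
    t = alt / denom                                   # ray scale to ground
    forward = t * (f * math.cos(th) - py * math.sin(th))
    right = t * px
    return forward, right
```

For example, a camera at 100 m altitude pitched 45 degrees down sees its image centre on the ground 100 m ahead; adding the aircraft's own position and heading to such an offset would give the map point to hand to the navigation side.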


