From: Marcus D. Leech
Subject: Re: [Discuss-gnuradio] FIFO latency
Date: Sat, 28 May 2011 14:50:40 -0400
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.1.10-1.fc14 Thunderbird/3.1.10
On Sat, May 28, 2011 at 22:06, Marcus D. Leech <address@hidden> wrote:

> I evaluated the latency of a FIFO (actually an ordinary pipe, but the
> kernel mechanisms are identical) and measured 30 usec average on my
> 1.2 GHz AMD Phenom system with plenty o' memory. I sent timestamps
> (struct timeval) across the FIFO, and the reader grabbed the local
> time of day and computed the difference. There's a fair amount of
> uncertainty on the reader side due to gettimeofday() call overhead,
> but 30 usec on a wimpy CPU is certainly comfortably below 1 msec.

gettimeofday() is a fast function. But if you want really high fidelity, read the CPU clock counter. Just make sure your app runs on one selected core. Could you post your app and raw results? I'm interested in min/mean/max values and distribution graphs, because the max values do play a role when working with real-time constraints.
====== latency_writer.c ========

#include <stdio.h>
#include <time.h>
#include <sys/time.h>
#include <unistd.h>   /* for usleep() */

int main (void)
{
    struct timeval tv;

    while (1) {
        gettimeofday (&tv, NULL);
        fwrite (&tv, sizeof(tv), 1, stdout);
        fflush (stdout);
        usleep (250000);
    }
}

============ latency_reader.c ==============

#include <stdio.h>
#include <time.h>
#include <sys/time.h>

int main (void)
{
    struct timeval now;
    struct timeval sender;
    long long int t1, t2;

    while (fread (&sender, sizeof(sender), 1, stdin) == 1) {
        gettimeofday (&now, NULL);
        t1 = sender.tv_sec * 1000000LL + sender.tv_usec;  /* LL avoids 32-bit overflow */
        t2 = now.tv_sec * 1000000LL + now.tv_usec;
        fprintf (stderr, "%lld\n", t2 - t1);
    }
    return 0;
}

I just run it like:

    ./latency_writer | ./latency_reader

--
Marcus Leech
Principal Investigator
Shirleys Bay Radio Astronomy Consortium
http://www.sbrac.org