From: David Winter
Subject: Re: Resampling radio data
Date: Wed, 17 Feb 2021 17:18:28 +0100
User-agent: Microsoft-MacOutlook/16.45.21011103
Hey,

> Larger n yields greater efficiencies, right? Doing lots of small calls isn't necessarily as efficient as doing fewer large calls, so long as you can handle the latency and the processor can stay fed with data. I figure ~4000 samples is a good compromise, but maybe larger ones work out better, too.

Keep in mind that the FFT has an algorithmic complexity of O(n log n), so the cost per sample grows with the transform size; past some point, larger transforms stop paying off. Keeping your FFT size near the capacity of your registers / L1 cache also can't hurt ^^

Obviously you can still wait until you have accumulated a large batch of samples.

Ultimately you might just have to benchmark the two approaches and compare.

Should fftw not work out, you could also try writing a small fixed-size FFT yourself and inlining that; the code isn't too bad (there are enough C examples online).

David

From: Discuss-gnuradio <discuss-gnuradio-bounces+dastw=gmx.net@gnu.org> on behalf of Brian Padalino <bpadalino@gmail.com>

On Wed, Feb 17, 2021 at 10:05 AM Marcus Müller <mueller@kit.edu> wrote:
> If the bandwidth is already constrained to 20 MHz/23 MHz, then there would be no sidelobes - correct?
Larger n yields greater efficiencies, right? Doing lots of small calls isn't necessarily as efficient as doing fewer large calls, so long as you can handle the latency and the processor can stay fed with data. I figure ~4000 samples is a good compromise, but maybe larger ones work out better, too.

For reference, here are some performance benchmarks for FFTW (randomly chosen processor). Single-precision, non-power-of-2, 1d transforms are some good benchmarks to look at for something like this. Obviously some numbers are better than others, but the general trend seems to skew towards the mid-thousands as being a decent number to target for maximizing throughput.

Brian