From: Marcus Müller
Subject: Re: [Discuss-gnuradio] Streaming IQ File Compression
Date: Sat, 16 Jul 2016 11:04:00 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.1.1
If there's a lot of white noise, you won't get much compression; alas, entropy is killing compression. So yeah, if you can at all, try to reduce bandwidth by filtering and decimating.

You can also just "round" or even "throw away" bits. At 100 MS/s (presumably coming from an X310 running at a 200 MS/s physical sampling rate), oversampling by only a factor of 2 will not have filled up all the bits in your digital numbers with meaningful signal – in fact, you basically gain 1 bit per factor-of-2 of oversampling, so that's only 15 bits (the X310's ADC is 14 bit). Now, buffering sixteen shorts and repacking only their upper 15 bits into a 30-byte chunk of data, instead of a 32-byte chunk, might not really be the performance leap you're looking for.

But the effective number of bits you'll get from that ADC might be even lower (see the ADS62P48 datasheet), and that's not even accounting for the noise floor. So if you keep only 12 of the 16 bits, you could actually get a "free" storage bandwidth reduction of 4/16 = 25%; better than nothing (and much nicer to implement on a CPU than 15 out of 16), and, for all we can say about representing the original analog signal, still lossless. Every further bit you save adds quantization noise – around 6 dB per bit.

I think the truth here is that Dan is absolutely right: you need to store the information contained in a signal, not necessarily the signal itself (because the signal might contain noise and things that are irrelevant). Now, if you've got lots of CPU to spare, and knowledge of what you're looking for, one could go for things like sparse representations of your signal (compressive sensing), or a different quantization of your signal's coefficients under a different basis (e.g. after a DFT or wavelet transform) – which would then be a lossy representation of your signal (and very close to what lossy audio codecs essentially do).
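To make the 12-of-16-bits idea concrete, here is a minimal sketch in plain Python. The function names are my own invention, not any existing GNU Radio block; it keeps the upper 12 bits of each 16-bit sample and packs two samples into 3 bytes, giving exactly the 25% reduction:

```python
# Hypothetical sketch: keep the top 12 bits of each 16-bit signed
# sample and pack pairs of 12-bit values into 3 bytes (24 bits),
# i.e. a 25% storage reduction. Not an existing GNU Radio API.

def pack12(samples):
    """Pack an even-length list of 16-bit signed ints into bytes,
    keeping only the upper 12 bits of each sample."""
    out = bytearray()
    for a, b in zip(samples[0::2], samples[1::2]):
        a12 = (a >> 4) & 0xFFF  # drop the 4 noise-floor LSBs
        b12 = (b >> 4) & 0xFFF
        out.append(a12 >> 4)                          # a[11:4]
        out.append(((a12 & 0xF) << 4) | (b12 >> 8))   # a[3:0] | b[11:8]
        out.append(b12 & 0xFF)                        # b[7:0]
    return bytes(out)

def unpack12(data):
    """Inverse: recover 16-bit-scaled samples; the low 4 bits are
    lost, which is the ~24 dB of added quantization noise."""
    samples = []
    for i in range(0, len(data), 3):
        a12 = (data[i] << 4) | (data[i + 1] >> 4)
        b12 = ((data[i + 1] & 0xF) << 8) | data[i + 2]
        # sign-extend the 12-bit values, then rescale to 16 bit
        if a12 & 0x800:
            a12 -= 0x1000
        if b12 & 0x800:
            b12 -= 0x1000
        samples += [a12 << 4, b12 << 4]
    return samples
```

In production you'd do this vectorized (VOLK or NumPy) rather than per-sample in Python, but byte-aligned 12-bit packing like this is exactly why 12-of-16 is so much friendlier on a CPU than 15-of-16.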
Speaking of audio codecs: Maybe the lossless FLAC algorithm is
something for you; I haven't used libFLAC in what seems to be
forever, but in general, it has a linear predictor for "upcoming
samples", and stores the differences between actual samples and
predicted samples, compressing these with a Huffmanesque code.
This works well because audio samples tend to be quite correlated.
But maybe the majority of your sample data is, too.

Best regards,
Marcus

On 16.07.2016 06:34, Dan CaJacob wrote: