Subject: Re: Coding gain through orthogonal transforms & practical low rate codes (was: Re: QAM constellation script)
From: Daniel Estévez
Date: Sat, 6 May 2023 20:52:34 +0200
On 06/05/2023 16:40, Marcus Müller wrote:
> Yep, that'd be rate-1; you don't need to squint much to see that the
> N-point-DFT has exactly N (=often 2ᵐ) such entries, and when you send N
> symbols in parallel, you get a DSSS-single-user-CDMA system called OFDM :D
> But that's not on bit-level.
Hi Marcus,
I wasn't speaking of sending 2^n codes simultaneously, but rather
choosing one code out of a set of 2^n (our alphabet) to encode n bits
and transmit that. Afterwards, choose another code out of the same set
to encode the next n bits and transmit it. And so on.
In your OFDM example (really good point!), instead of OFDM (where you
transmit on all subcarriers simultaneously), we would get 2^n-FSK (where
we choose one subcarrier each time), with the possibility of having each
tone be QAM-modulated, if you wish to complicate things further by
analogy with OFDM.
While OFDM has the same uncoded BER as the QAM modulation you use in
each subcarrier (it's really 2^n parallel and independent channels),
this kind of 2^n-FSK+QAM doesn't have the same BER, I believe.
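To make the idea concrete, here is a quick numpy sketch of this kind of
2^n-FSK (without the QAM part). Everything here is made up just for
illustration: the alphabet is the set of columns of a DFT matrix, used
one at a time.

import numpy as np

n = 4                    # bits per symbol
N = 2 ** n               # alphabet size = number of orthogonal tones
# Column k of this matrix is the complex exponential at frequency k:
tones = np.exp(2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)

def encode(bits):
    # Map each group of n bits to one tone (one "code" out of 2^n)
    symbols = []
    for j in range(0, len(bits), n):
        k = int(''.join(str(b) for b in bits[j:j+n]), 2)
        symbols.append(tones[:, k])
    return np.concatenate(symbols)

def decode(samples):
    # Coherent decoding: correlate against all tones (an FFT) and
    # pick the strongest one in each N-sample block
    bits = []
    for j in range(0, len(samples), N):
        k = int(np.argmax(np.abs(np.fft.fft(samples[j:j+N]))))
        bits.extend(int(b) for b in format(k, f'0{n}b'))
    return bits

rng = np.random.default_rng(seed=1)
tx_bits = [int(b) for b in rng.integers(0, 2, 100 * n)]
noise = 0.1 * (rng.normal(size=100 * N) + 1j * rng.normal(size=100 * N))
print(decode(encode(tx_bits) + noise) == tx_bits)  # True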
> The (Fast) Hadamard transform is one of
> these things to decompose an arbitrary 2ᵐ-length binary vector into
> orthogonal binary vectors of the same length (and I was pointed to it
> when a student of mine, someone else whose name I forget, and I were
> discussing how to make two uncorrelated random vectors from a single
> 64-bit random vector in the context of WGN generation).
Unfortunately the Hadamard transform is not so good for spread spectrum.
Some of the vectors you get have really ugly, non-spread spectra (for
example, one of the vectors is all 1's). The Hadamard transform has many
other advantages, though.
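This is easy to see numerically; a quick scipy check (my own
illustration):

import numpy as np
from scipy.linalg import hadamard

H = hadamard(64)                   # 64 orthogonal +/-1 sequences (the rows)
print(H[0])                        # all 1's: all its energy is in the DC bin
print(np.abs(np.fft.fft(H[0])))    # impulse at bin 0, zero elsewhere
print(np.abs(np.fft.fft(H[1])))    # alternating row: impulse at bin 32
# Compare with a random +/-1 (PN-like) sequence, whose energy is
# spread over all the bins:
pn = np.sign(np.random.default_rng(0).normal(size=64))
print(np.round(np.abs(np.fft.fft(pn)), 1))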
>> In this sense, using different DSSS codes actually achieves some
>> coding gain that justifies bandwidth expansion, whereas the
>> alternative approach of using a single PN sequence modulated in BPSK
>> is as you say a repetition code and doesn't provide any coding gain.
> Yep, in the end this is finding a constellation in a higher-dimensional
> space that has better distance properties than N copies of the
> embedding in 1D space. It's probably sphere packing all the way down.
An interesting problem is to consider coherent decoding of an N-symbol
alphabet chosen from the vector space \mathbb{C}^n, where you are free
to choose n as large as you want (or you can also do it in continuous
time, but it doesn't change what happens).
The decision error probability basically depends on the angles between
the N symbols, so it is advantageous to make these angles as large as
possible.
The optimal solution that maximizes the minimum angle between the
vectors is pretty neat. For N=2 you choose two opposite symbols (+1 and
-1), so the angle is 180deg. For N=3 you choose the vertices of an
equilateral triangle, so the angle is 120deg. For N=4 you choose the
vertices of a regular tetrahedron, so the angle is ~109deg. In higher
dimensions we get a regular (N-1)-simplex; the angle between its
vertices is arccos(-1/(N-1)), which indeed tends to 90deg as
N->infty.
Choosing vectors following this solution is usually not convenient
(people often want each component of the vectors to have the same
amplitude, etc.). So orthogonal codes are a good and more flexible
alternative, which is almost optimal for large N.
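A quick numerical check of these angles (my own sketch), which also
shows why orthogonal codes, at exactly 90deg, are almost optimal for
large N:

import numpy as np

def simplex_angle(N):
    # Vertices of a regular (N-1)-simplex: an orthonormal basis of R^N
    # with the centroid subtracted out, then normalized
    v = np.eye(N) - 1 / N
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    cos = v @ v.T                  # Gram matrix of the unit vertex vectors
    return np.degrees(np.arccos(np.clip(cos[0, 1], -1, 1)))

for N in (2, 3, 4, 10, 100):
    print(N, round(simplex_angle(N), 2))
# prints 180.0, 120.0, 109.47, 96.38, 90.58, matching arccos(-1/(N-1))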
> Now, of course this utterly disagrees with my gut feeling. We're doing a
> base transform, keeping the bit energy constant. We invert the process
> under white noise and are supposed to get a gain?!
> But that's the difference between "classical" DSSS (and the OFDM up
> there) and the consideration that N-FSK might be a better embedding in
> N-dimensional complex space than m BPSK symbols would be: we're not
> necessarily "going back" before making the decision. Some
> representations are smarter than others for making decisions.
Yup. How you map your bits into symbols, and especially the geometry of
those symbols (understood as vectors in C^n or C^[0,Ts]), matters. In
fact there is a Shannon capacity for a binary-input channel and another
(larger) Shannon capacity for an unconstrained-input channel.
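For a real AWGN channel these two capacities are easy to evaluate. A
quick Monte Carlo sketch using the standard formulas (the code and
numbers are mine, just for illustration):

import numpy as np

rng = np.random.default_rng(0)
for snr_db in (-10, 0, 10):
    snr = 10 ** (snr_db / 10)                 # Es/N0, real channel
    sigma = 1 / np.sqrt(snr)                  # noise std for unit-energy BPSK
    y = 1 + sigma * rng.normal(size=200_000)  # receive x = +1 plus noise
    # Binary (BPSK) input capacity: C = 1 - E[log2(1 + exp(-2y/sigma^2))]
    c_bpsk = 1 - np.mean(np.log2(1 + np.exp(-2 * y / sigma**2)))
    # Unconstrained (Gaussian) input capacity:
    c_gauss = 0.5 * np.log2(1 + snr)
    print(f'{snr_db:+3d} dB: binary {c_bpsk:.3f}, unconstrained {c_gauss:.3f} bits/use')

At low SNR the two agree; at high SNR the binary input saturates at 1
bit/use while the unconstrained capacity keeps growing.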
>> Surely this idea is very far from getting us a capacity-achieving code
>> (any simple FEC beats uncoded m-FSK even if m is pretty large). But on
>> the other hand, the typical bandwidth expansions of a DSSS system are
>> on the order of 100x or more. I would be curious to know if there's an
>> r=1/100 code that is good and can be implemented in practice.
> There's a former colleague of mine who did[1] error-correcting codes for
> quantum key exchange reconciliation, if you're interested in low-rate
> codes. The idea is that you like LDPC codes very much, and wonder how
> you can construct them for low rates so that you don't go directly
> insane while encoding, and even more importantly while decoding.
That's an interesting idea! My main experience designing my own LDPC
codes is with high-rate codes. My understanding from the literature is
that for high-rate codes it's relatively easy to pick a
near-capacity-achieving code (the degree distribution doesn't matter so
much, random codes from the ensemble are very likely to be good, etc.).
For lower-rate codes (and I'm talking about r=1/6, not something crazy
like r=1/100) I remember that things get much more subtle and people
come up with crazy ideas for degree distributions.
> You then decide to doodle a small/moderate LDPC-typical Tanner graph,
> where you draw a couple of edges with more than one line. Call that a
> protograph.
> [...]
> That way, you can construct a long low-rate code from a smaller
> Tanner-like graph, drastically reducing the number of short cycles.
I'm quite familiar with the protograph construction. This is what the
CCSDS LDPC codes use.
If I remember correctly, one of the properties of a protograph code is
that the rate of the (expanded) code is the same as the rate of the
protograph. So for a rate r=1/100 you need to put in at least 100
variable nodes and 99 check nodes in your protograph. Already this looks
like a tough problem! It's certainly more complex than the CCSDS codes
I'm used to (Figure 8-4 in https://public.ccsds.org/Pubs/130x1g3.pdf).
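To make this concrete, here's a toy sketch of the lifting construction
with a made-up base matrix (nothing to do with the actual CCSDS
protographs):

import numpy as np

# Made-up protograph base matrix: entry (i, j) is the number of edges
# between check node i and variable node j (a 2 is a double edge)
B = np.array([[1, 2, 1, 1],
              [2, 1, 1, 1]])
Z = 8                              # lifting (expansion) factor

def lift(B, Z, rng):
    # Replace each base edge by a random Z x Z circulant permutation.
    # (A real design would choose the shifts carefully, and in
    # particular avoid coinciding shifts on double edges.)
    mc, mv = B.shape
    H = np.zeros((mc * Z, mv * Z), dtype=int)
    for i in range(mc):
        for j in range(mv):
            for _ in range(B[i, j]):
                s = rng.integers(Z)
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] += np.roll(np.eye(Z, dtype=int), s, axis=1)
    return H

H = lift(B, Z, np.random.default_rng(0))
# The rate of the expanded code is the rate of the protograph:
print(1 - B.shape[0] / B.shape[1], 1 - H.shape[0] / H.shape[1])  # 0.5 0.5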
> Things get a bit unhandy even at the abstraction level "protograph" for
> more extreme rates, so Kadir came up with an abstraction level on top of
> that (where you stop caring about which node of a multi-edge degree
> ("type") is which; you just keep track of how many nodes of which types
> are connected to which during design).
Ah, so you fix a degree distribution of sorts, if I understood
correctly, and then build the protograph according to this distribution
"procedurally".
> For decoding, standard sum-product algorithm should do, so, "reasonably
> fast".
I can believe this. Compared to your typical r=1/2 code, in a very
low-rate code you just have almost twice as many check nodes, so
belief propagation would probably be only about twice as expensive.
Probably there are diminishing returns as r -> 0, though, and the
decoding cost per information bit blows up. Hence the usual idea of
concatenating a DSSS repetition code with an outer conventional FEC
(maybe not an r=4/5 Turbo code, but an r=1/6 one).
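Back-of-the-envelope (my own rough model, taking the cost per iteration
as proportional to the number of Tanner graph edges):

k = 1000                 # information bits, held fixed
dv = 3                   # average variable node degree (assumed)
for r in (4/5, 1/2, 1/6, 1/100):
    n = round(k / r)     # codeword length grows as the rate drops
    checks = n - k       # number of check nodes
    edges = n * dv       # ~ messages per decoder iteration
    print(f'r = {r:.3g}: n = {n}, {checks} checks, {edges} edges')

So for the same payload, going from r=1/2 to r=1/100 costs about 50x
more work per iteration.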
> Well, the keywords "concatenating", "low-rate" and "outer FEC" combine
> in my head to suggest one looks into three things:
> 1. Turbo codes with something like a mother code rate of 1/3 (I think
> LTE has such a thing?) and a lot of repetitions. I think such things
> were used in deep space missions (?),
Not that I have heard of. Current Turbo codes for deep-space missions do
not use repetitions (reference: the same CCSDS document I linked above).
I think the current choice of codes dates back to the first time they
were introduced in deep space after their invention. On the other hand,
in the past people played with more exotic things, such as long
constraint-length convolutional codes. Maybe this is what you had in
mind.
Best,
Daniel.