Error-Correcting Codes: Channel Decoding
Because the idea of channel coding has merit (so long as the code is efficient), let's develop a systematic procedure for performing channel decoding. One way of checking for errors is to try recreating the error correction bits from the data portion of the received block $\hat{c}$. Using matrix notation, we make this calculation by multiplying the received block $\hat{c}$ by the matrix $H$ known as the parity check matrix. It is formed from the generator matrix $G$ by taking the bottom, error-correction portion of $G$ and attaching to it an identity matrix. For our (7,4) code,

$$H = \begin{pmatrix} 1 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 \end{pmatrix}$$

The parity check matrix thus has size $(N-K) \times N$, and the result of multiplying this matrix with a received word is a length-$(N-K)$ binary vector. If no digital channel errors occur (we receive a codeword, so that $\hat{c} = c$), then $H\hat{c} = 0$. For example, the first column of $G$, $(1\ 0\ 0\ 0\ 1\ 0\ 1)^T$, is a codeword. Simple calculations show that multiplying this vector by $H$ results in a length-$(N-K)$ zero-valued vector.
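As a concrete check, here is a minimal sketch (Python with NumPy, not part of the original text) that builds the generator and parity check matrices described above, assuming the generator matrix whose lower rows appear as the left block of $H$, and verifies that the first column of $G$ gives a zero syndrome:

```python
import numpy as np

# Generator matrix G for the (7,4) code: identity on top, parity rows below.
G = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 1, 0],   # lower, error-correction portion of G
    [0, 1, 1, 1],
    [1, 1, 0, 1],
])

# Parity check matrix H: lower portion of G with a 3x3 identity attached.
H = np.hstack([G[4:, :], np.eye(3, dtype=int)])

c = G[:, 0]              # first column of G is a codeword
syndrome = H @ c % 2     # all arithmetic is mod 2 (binary)
print(syndrome)          # -> [0 0 0]
```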
Exercise
Show that $Hc = 0$ for all the columns of $G$. In other words, show that $HG = 0$, an $(N-K) \times K$ matrix of zeroes. Does this property guarantee that all codewords also satisfy $Hc = 0$?
When we multiply the parity-check matrix times any codeword equal to a column of $G$, the result consists of the sum of an entry from the lower portion of $G$ with itself, which, by the laws of binary arithmetic, is always zero. Because the code is linear (the sum of any two codewords is a codeword), we can generate all codewords as sums of columns of $G$. Since multiplying by $H$ is also linear, $Hc = 0$ for every codeword $c$.
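Continuing the sketch above (the same $G$ and $H$ arrays), this property can be checked directly:

```python
# Every column of G is a codeword, so H @ G must be the all-zeros matrix mod 2.
print(H @ G % 2)   # -> 3x4 matrix of zeros
```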
When the received bits $\hat{c}$ do not form a codeword, $H\hat{c}$ does not equal zero, indicating the presence of one or more errors induced by the digital channel. The presence of an error can be written mathematically as $\hat{c} = c \oplus e$, with $e$ a vector of binary values having a 1 in those positions where a bit error occurred.
Exercise
Show that adding the error vector $(1, 0, \dots, 0)^T$ to a codeword flips the codeword's leading bit and leaves the rest unaffected.
In binary arithmetic, adding 0 to a binary value results in that binary value, while adding 1 results in the opposite binary value. Adding this error vector therefore flips only the leading bit of the codeword.
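A one-line check, reusing the codeword $c$ from the first sketch:

```python
# Flip only the leading bit of the codeword c = [1 0 0 0 1 0 1].
e = np.zeros(7, dtype=int); e[0] = 1
print((c + e) % 2)   # -> [0 0 0 0 1 0 1]: leading bit flipped, rest unchanged
```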
Consequently, $H\hat{c} = H(c \oplus e) = He$. Because the result of the product is a length-$(N-K)$ vector of binary values, we can have $2^{N-K} - 1$ non-zero values that correspond to non-zero error patterns $e$. To perform our channel decoding (a code sketch of the full procedure follows this list),
- compute (conceptually at least) $H\hat{c}$;
- if this result is zero, no detectable or correctable error occurred;
- if non-zero, consult a table of length-$(N-K)$ binary vectors to associate them with the minimal error pattern that could have resulted in the non-zero result; then
- add the error vector thus obtained to the received vector $\hat{c}$ to correct the error (because $c \oplus e \oplus e = c$);
- select the data bits from the corrected word to produce the received bit sequence $\hat{b}(n)$.
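The sketch below carries out these steps for the example (7,4) code; the syndrome table, function name, and test vector are illustrative choices, not part of the original text.

```python
import numpy as np

G = np.array([[1,0,0,0],[0,1,0,0],[0,0,1,0],[0,0,0,1],
              [1,1,1,0],[0,1,1,1],[1,1,0,1]])
H = np.hstack([G[4:, :], np.eye(3, dtype=int)])

# Table mapping each non-zero syndrome to its minimal (single-bit) error pattern.
syndrome_table = {}
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    syndrome_table[tuple(H @ e % 2)] = e

def decode(c_hat):
    """Correct at most one bit error, then return the four data bits."""
    syndrome = tuple(H @ c_hat % 2)
    if any(syndrome):                      # non-zero syndrome: apply the table
        c_hat = (c_hat + syndrome_table[syndrome]) % 2
    return c_hat[:4]                       # data bits occupy the top of the codeword

# Transmit the codeword for data bits (1,0,0,0) and flip its third bit.
c = G @ np.array([1, 0, 0, 0]) % 2         # -> [1 0 0 0 1 0 1]
received = c.copy(); received[2] ^= 1
print(decode(received))                    # -> [1 0 0 0]
```

For this code every non-zero syndrome matches exactly one single-bit error pattern (the columns of $H$ are distinct and non-zero), which is why the table in the sketch covers all seven non-zero syndromes.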
The phrase "minimal" in the third item raises the point that a double (or triple or quadruple ...) error occurring during the transmission/reception of one codeword can create the same received word as a single-bit error or no error in another codeword. For example, $(1\ 0\ 0\ 0\ 1\ 0\ 1)^T$ and $(0\ 1\ 0\ 0\ 1\ 1\ 1)^T$ are both codewords in the example (7,4) code. The second results when the first one experiences three bit errors (first, second, and sixth bits). Such an error pattern cannot be detected by our coding strategy, but such multiple error patterns are very unlikely to occur. Our receiver uses the principle of maximum probability: an error-free transmission is much more likely than one with three errors if the bit-error probability $p_e$ is small enough.
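With the same arrays as in the decoding sketch, this undetectable pattern can be verified:

```python
# The triple error (bits 1, 2, and 6) turns codeword c1 into another codeword c2,
# so the syndrome stays zero and the error goes undetected.
c1 = G @ np.array([1, 0, 0, 0]) % 2
c2 = (c1 + np.array([1, 1, 0, 0, 0, 1, 0])) % 2
print(c2, H @ c2 % 2)   # -> [0 1 0 0 1 1 1] [0 0 0]
```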
Exercise
How small must $p_e$ be so that a single-bit error is more likely to occur than a triple-bit error?
The probability of a single-bit error in a length-$N$ block is $N p_e (1-p_e)^{N-1}$ and a triple-bit error has probability $\binom{N}{3} p_e^3 (1-p_e)^{N-3}$. For the first to be greater than the second, we must have $N p_e (1-p_e)^{N-1} > \binom{N}{3} p_e^3 (1-p_e)^{N-3}$. Canceling the common factor $N p_e (1-p_e)^{N-3}$ gives $(1-p_e)^2 > \frac{(N-1)(N-2)}{6} p_e^2$, which is equivalent to

$$p_e < \frac{1}{1 + \sqrt{\frac{(N-1)(N-2)}{6}}}$$

For $N = 7$, $p_e < 0.31$.
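As a numerical sanity check of this bound (a short sketch using Python's standard library, with the probability expressions from the solution above):

```python
from math import comb, sqrt

N = 7
threshold = 1 / (1 + sqrt((N - 1) * (N - 2) / 6))
print(round(threshold, 3))               # -> 0.309

def p_single(pe): return N * pe * (1 - pe)**(N - 1)
def p_triple(pe): return comb(N, 3) * pe**3 * (1 - pe)**(N - 3)

# Just below the threshold a single-bit error is more likely; just above, it is not.
print(p_single(0.30) > p_triple(0.30))   # -> True
print(p_single(0.32) > p_triple(0.32))   # -> False
```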
This textbook is open source. Download for free at http://cnx.org/contents/778e36af-4c21-4ef7-9c02-dae860eb7d14@9.72.