Perhaps the simplest error correcting code is the repetition code.
Here, the transmitter sends the data bit several times, an odd number of times in fact. Because the error probability $p_e$ is always less than $\frac{1}{2}$, we know that more of the received bits should be correct than in error. Simple majority voting of the received bits (hence the reason for the odd number) determines the transmitted bit more accurately than sending it alone. For example, let's consider the three-fold repetition code: for every bit $b(n)$
emerging from the source coder, the channel coder produces three. Thus, the bit stream $c(l)$ emerging from the channel coder has a data rate three times higher than that of the original bit stream $b(n)$. The coding table illustrates when errors can be corrected and when they can't by the majority-vote decoder. Thus, if one of the three bits is received in error, the receiver can correct the error; if more than one error occurs, the channel decoder announces the bit is 1 instead of the transmitted value of 0. Using this repetition code, the probability of $\hat{b}(n) \ne 0$ equals $3p_e^2(1-p_e) + p_e^3$. This probability of a decoding error is always less than $p_e$, the uncoded value, so long as $p_e < \frac{1}{2}$.
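To make the majority-vote decoder concrete, here is a minimal simulation sketch; the function names and the independent bit-flip channel model are illustrative assumptions, not part of the text:

```python
import random

def encode(bit):
    # Three-fold repetition: the channel coder emits each data bit three times.
    return [bit, bit, bit]

def transmit(bits, p_e, rng):
    # Flip each bit independently with probability p_e.
    return [b ^ (rng.random() < p_e) for b in bits]

def decode(received):
    # Majority vote over the three received bits.
    return 1 if sum(received) >= 2 else 0

# The empirical decoded error rate for a transmitted 0 should approach
# 3*p_e**2*(1-p_e) + p_e**3 (about 0.028 for p_e = 0.1), well below p_e itself.
rng = random.Random(0)
p_e, trials = 0.1, 100_000
errors = sum(decode(transmit(encode(0), p_e, rng)) != 0 for _ in range(trials))
print(errors / trials, 3*p_e**2*(1 - p_e) + p_e**3)
```

The lower error probability comes at a price: sending each bit three times triples the data rate.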
Demonstrate mathematically that this claim is indeed true. Is $3p_e^2(1-p_e) + p_e^3 \le p_e$?
This question is equivalent to asking whether $3p_e^2(1-p_e) + p_e^3 \le p_e$; dividing both sides by $p_e$ and simplifying reduces it to $2p_e^2 - 3p_e + 1 \ge 0$. Because the left side is an upward-going parabola, we need only check where its roots are. Using the quadratic formula, we find that they are located at $p_e = \frac{1}{2}$ and $p_e = 1$. Consequently, in the range $0 \le p_e < \frac{1}{2}$, the error rate produced by coding is smaller.
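The algebra can be spot-checked numerically; this sketch evaluates both sides of the inequality on a grid over $[0, \frac{1}{2}]$ and recomputes the roots of the quadratic:

```python
import math

# Check 3p^2(1-p) + p^3 <= p for p on a fine grid covering [0, 1/2].
for k in range(501):
    p = k / 1000.0
    assert 3*p**2*(1 - p) + p**3 <= p

# Roots of the equivalent quadratic 2p^2 - 3p + 1, via the quadratic formula.
a, b, c = 2.0, -3.0, 1.0
disc = math.sqrt(b*b - 4*a*c)
roots = sorted(((-b - disc) / (2*a), (-b + disc) / (2*a)))
print(roots)  # [0.5, 1.0]
```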