
Subtleties of Coding

In the Huffman code, the bit sequences that represent individual symbols can have differing lengths, so the bitstream index $m$ does not increase in lock step with the symbol-valued signal's index $n$. To capture how often bits must be transmitted to keep up with the source's production of symbols, we can only compute averages. If our source code averages $\bar{B}(A)$ bits/symbol and symbols are produced at a rate $R$, the average bit rate equals $\bar{B}(A)R$, and this quantity determines the bit interval duration $T$.

Exercise

Calculate the relation between $T$ and the average bit rate $\bar{B}(A)R$.

Each bit must be sent within the time available per bit, so the bit interval duration is the reciprocal of the average bit rate: $T = \frac{1}{\bar{B}(A)R}$.
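These relations are easy to make concrete. The sketch below uses a hypothetical four-symbol source and prefix code (the alphabet, probabilities, and symbol rate are illustrative, not from the text) to compute the average code length $\bar{B}(A)$, the average bit rate, and the bit interval $T$:

```python
# Hypothetical four-symbol source with a prefix code (illustrative values).
probs = {"a0": 0.5, "a1": 0.25, "a2": 0.125, "a3": 0.125}
code = {"a0": "0", "a1": "10", "a2": "110", "a3": "111"}

# Average code length B(A): expected number of bits per symbol.
avg_bits = sum(probs[s] * len(code[s]) for s in probs)  # 1.75 bits/symbol

R = 1000.0               # assumed symbol rate, symbols/second
bit_rate = avg_bits * R  # average bit rate B(A)*R, bits/second
T = 1.0 / bit_rate       # bit interval duration, seconds

print(avg_bits, bit_rate, T)
```

For this code, the average is 1.75 bits/symbol, so at 1000 symbols/second the bitstream runs at 1750 bits/second and each bit interval lasts 1/1750 of a second.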

A subtlety of source coding is whether we need "commas" in the bitstream. When we use an unequal number of bits to represent symbols, how does the receiver determine when symbols begin and end? If you created a source code that required a separation marker in the bitstream between symbols, it would be very inefficient since you are essentially requiring an extra symbol in the transmission stream.

Point of Interest:

A good example of this need is the Morse Code: Between each letter, the telegrapher needs to insert a pause to inform the receiver when letter boundaries occur.

As shown in this example, no commas are placed in the bitstream, but you can unambiguously decode the sequence of symbols from the bitstream. Huffman showed that his (maximally efficient) code had the prefix property: No code for a symbol began another symbol's code. Once you have the prefix property, the bitstream is partially self-synchronizing: Once the receiver knows where the bitstream starts, we can assign a unique and correct symbol sequence to the bitstream.
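The prefix property can be checked mechanically. A minimal sketch (the code tables passed in are illustrative examples, not from the text):

```python
def is_prefix_code(codewords):
    """Return True if no codeword is a prefix of another codeword."""
    for a in codewords:
        for b in codewords:
            if a != b and b.startswith(a):
                return False
    return True

print(is_prefix_code(["0", "10", "110", "111"]))  # True: no codeword starts another
print(is_prefix_code(["0", "01", "11"]))          # False: "0" begins "01"
```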

Exercise

Sketch an argument that prefix coding, whether derived from a Huffman code or not, will provide unique decoding when an unequal number of bits/symbol are used in the code.

Because no codeword begins with another's codeword, the first codeword encountered in a bit stream must be the right one. Note that we must start at the beginning of the bit stream; jumping into the middle does not guarantee perfect decoding. The end of one codeword and the beginning of another could be a codeword, and we would get lost.
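This argument can be sketched in code: a greedy decoder that accepts the first codeword it recognizes recovers the symbol sequence uniquely, provided it starts at the beginning of the stream. The code table and symbol names below are illustrative:

```python
def decode(bits, code):
    """Greedily match codewords from the start of the bitstream.

    With a prefix code, the first codeword recognized must be the
    correct one, so no backtracking is ever needed.
    """
    inverse = {w: s for s, w in code.items()}
    symbols, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:
            symbols.append(inverse[buf])
            buf = ""
    return symbols

code = {"a0": "0", "a1": "10", "a2": "110", "a3": "111"}
print(decode("010110110111", code))  # ['a0', 'a1', 'a2', 'a2', 'a3']
```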

However, having a prefix code does not guarantee total synchronization: after hopping into the middle of a bitstream, can we always find the correct symbol boundaries? This lack of self-synchronization weighs against the use of efficient source coding algorithms.

Exercise

Show by example that a bitstream produced by a Huffman code is not necessarily self-synchronizing. Are fixed-length codes self synchronizing?

Consider the bitstream …0110111… taken from the bitstream 0|10|110|110|111|…. We would decode the initial part incorrectly, then would synchronize. If we had a fixed-length code (say 00,01,10,11), the situation is much worse. Jumping into the middle leads to no synchronization at all!
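This example can be reproduced with a small greedy decoder (a sketch; the code table matches the example above):

```python
def decode(bits, code):
    """Greedily match prefix codewords; return the codewords found."""
    inverse = {w: s for s, w in code.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:
            out.append(buf)
            buf = ""
    return out

code = {"a0": "0", "a1": "10", "a2": "110", "a3": "111"}
stream = "010110110111"       # intended parsing: 0|10|110|110|111

print(decode(stream, code))   # from the start: ['0', '10', '110', '110', '111']
print(decode(stream[5:], code))  # jump into the middle at "0110111..."
# -> ['0', '110', '111']: the first codeword is wrong, then decoding resynchronizes
```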

Another issue is bit errors induced by the digital channel; if they occur (and they will), synchronization can easily be lost even if the receiver started "in synch" with the source. Despite the small probabilities of error offered by good signal set design and the matched filter, an infrequent error can devastate the ability to translate a bitstream into a symbolic signal. We need ways of reducing reception errors without demanding that the error probability $p_e$ be smaller.

The first electrical communications system—the telegraph—was digital. When first deployed in 1844, it communicated text over wireline connections using a binary code—the Morse code—to represent individual letters. To send a message from one place to another, telegraph operators would tap the message using a telegraph key to another operator, who would relay the message on to the next operator, presumably getting the message closer to its destination. In short, the telegraph relied on a network not unlike the basics of modern computer networks. To say it presaged modern communications would be an understatement. It was also far ahead of some needed technologies, namely the Source Coding Theorem. The Morse code, shown in the table below, was not a prefix code. To separate codes for each letter, Morse code required that a space—a pause—be inserted between each letter. In information theory, that space counts as another code letter, which means that the Morse code encoded text with a three-letter source code: dots, dashes, and space. The resulting source code is not within a bit of entropy, and is grossly inefficient (about 25%). The table below also shows a Huffman code for English text, which as we know is efficient.

Letter   %       Morse Code   Huffman Code
A        6.22    .-           1011
B        1.32    -...         010100
C        3.11    -.-.         10101
D        2.97    -..          01011
E        10.53   .            001
F        1.68    ..-.         110001
G        1.65    --.          110000
H        3.63    ....         11001
I        6.14    ..           1001
J        0.06    .---         01010111011
K        0.31    -.-          01010110
L        3.07    .-..         10100
M        2.48    --           00011
N        5.73    -.           0100
O        6.06    ---          1000
P        1.87    .--.         00000
Q        0.10    --.-         0101011100
R        5.87    .-.          0111
S        5.81    ...          0110
T        7.68    -            1101
U        2.27    ..-          00010
V        0.70    ...-         0101010
W        1.13    .--          000011
X        0.25    -..-         010101111
Y        1.07    -.--         000010
Z        0.06    --..         010101110101
 

This textbook is open source. Download for free at http://cnx.org/contents/778e36af-4c21-4ef7-9c02-dae860eb7d14@9.72.

 