**3. The channel concept**

When information is moved from one form to another, a number of issues must be considered. Consider an example. An opera is being performed live at the Metropolitan Opera in New York. The information contained in the music being produced by the orchestra and the words coming from the mouths of the singers must be "transformed" so that it can be shown on digital or analog television at a future time. To accomplish this, many steps must be taken. The orchestral music and singing must be captured by microphones, and the acting and motion of the singers must be captured by a camera. The microphones transform the sound energy into electrical energy. Similarly, the camera must capture the images of the opera singers (and perhaps shots of the audience) in a different form. Rather than chart exactly what the steps in this transformation process are, notice that at each stage of information transformation there is a chance that "mistakes" may creep in. Often such mistakes may be described by the generic term "noise." Noise is a random or systematic distortion of the relation between the input to a transformation step and the output from that step. Following the pioneering perspective of the mathematician Claude Shannon, we will use the term *channel* for the medium over which information passes from sender to receiver.

One increasingly important aspect of sending information over a channel, omitted from the diagram above, is security. What good is it that one can accurately transfer money electronically between two banks if the message can be altered on the fly to divert the funds to some other bank or to a thief? Maintaining secure communications means that one has to use an encryption system of some kind along with all the other factors!

There are a variety of types of channels that can be considered. The information that is to be sent over a channel involves messages that are built from the letters of an alphabet. In the simplest case this alphabet consists of the symbols 0 and 1, the binary digits. The binary digits can be thought of as an algebraic structure known as a finite field of 2 elements. In fact, there are finite fields whose size is any power of a prime number (e.g. 2^{5}, 3^{8}, 11^{15}, etc.). Finite fields are often used as the alphabet in thinking about how to build up messages to be sent over a channel, and thus play an important role in the theory of error-correcting codes. Here, however, for simplicity we will restrict our discussion to codes whose messages consist of binary digits. Such a message may represent an image that has been changed to binary form, sounds that have been changed to binary form, or information that enables one to synchronize the sound and the picture, which would have to be accomplished to make it possible to show an opera on television.
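As a concrete sketch of the field of 2 elements (the function names here are our own, not from any standard library), arithmetic can be carried out with ordinary integer operations taken modulo 2, so that 1 + 1 = 0 with no carry:

```python
# Arithmetic in the finite field of 2 elements, whose members are 0 and 1.
# Addition is "exclusive or" (1 + 1 = 0, no carry); multiplication is "and".

def gf2_add(a, b):
    return (a + b) % 2  # same result as a ^ b

def gf2_mul(a, b):
    return (a * b) % 2  # same result as a & b

def add_words(u, v):
    """Add two binary strings position by position, with no carries."""
    return [gf2_add(a, b) for a, b in zip(u, v)]

print(add_words([0, 0, 1, 1], [0, 1, 0, 1]))  # [0, 1, 1, 0]
```

This position-by-position addition without carries is exactly the operation used later when linear codes are discussed.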

The simplest form of channel is known as the *binary symmetric channel*. This refers to the fact that one can input either a 0 or a 1 and there is a fixed probability that a 0 will be changed to a 1 or a 1 changed to a 0. (The binary asymmetric channel handles the case where it is more likely that a 0 will be changed to a 1 than a 1 to a 0, or conversely.) We will denote the probability that a bit is unchanged by p, a real number between 0 and 1. (A perfect channel would have p equal to 1.) If a bit is sent accurately with probability p, it arrives changed with probability 1-p. We will assume that bits are not created or destroyed and that what happens in each position of a binary string is independent of what happens in every other position. (This is not always realistic. Sometimes errors occur in bursts, which means that neighboring bits are more likely to have errors than bits far apart. Techniques have been developed to deal with the burst error problem.)
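A binary symmetric channel is easy to simulate. The following sketch is our own illustration (the value p = 0.9 and the fixed random seed are arbitrary choices): each bit is flipped independently with probability 1 - p.

```python
import random

def bsc_transmit(bits, p, rng=random.Random(0)):
    """Pass a list of bits through a binary symmetric channel.

    Each bit arrives unchanged with probability p and flipped with
    probability 1 - p, independently of every other bit.
    """
    return [b if rng.random() < p else 1 - b for b in bits]

sent = [0, 0, 1, 1] * 2500          # 10,000 bits
received = bsc_transmit(sent, p=0.9)
errors = sum(s != r for s, r in zip(sent, received))
print(errors / len(sent))           # close to 1 - p = 0.1
```

Over many bits the observed error fraction clusters tightly around 1 - p, which is what the independence assumption predicts.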

Using this approach, given that the string 0011 is sent, one can compute the probability that 1111 will be received, under the assumption that the transmission is modeled by a binary symmetric channel. Another model for a channel allows a sent symbol to be erased (to arrive as an unreadable symbol) as a form of error. This is often referred to as the binary erasure channel.
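For the example above, the probability can be computed by multiplying one factor per position: p for each position received correctly and 1 - p for each position received in error. A small sketch (the function name is our own):

```python
def receive_probability(sent, received, p):
    """Probability of receiving `received` given that `sent` was
    transmitted over a binary symmetric channel in which each bit
    survives unchanged with probability p, independently."""
    prob = 1.0
    for s, r in zip(sent, received):
        prob *= p if s == r else (1 - p)
    return prob

# Sending 0011 and receiving 1111: the first two bits flipped and the
# last two did not, so the probability is (1-p)^2 * p^2.
print(receive_probability("0011", "1111", 0.9))  # approximately 0.0081
```

With p = 0.9 this gives (1-p)^2 p^2 = (0.1)^2 (0.9)^2 = 0.0081, a direct consequence of the independence assumption for the positions.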

If one has code words which are binary strings of the same length, it may happen that whenever one adds two of the code words without carry digits (i.e. adds as if one were doing addition in the field of 2 elements), the result is another code word. Such codes are known as *linear codes*, and they have especially nice properties. This is because the code words form a group under this addition, and the powerful tools of modern algebra can be applied to such codes. Although linear codes are among the most studied and best understood codes, many codes used in practice are not linear.
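As a small illustration (this particular code is our own example, not one from the text), the set of all length-3 binary strings with an even number of 1s is a linear code: the carry-free sum of any two of its code words is again a code word.

```python
from itertools import product

def add_mod2(u, v):
    """Add two binary words coordinate-wise in the field of 2 elements."""
    return tuple((a + b) % 2 for a, b in zip(u, v))

# The length-3 even-weight code: all words with an even number of 1s.
code = {w for w in product((0, 1), repeat=3) if sum(w) % 2 == 0}
print(sorted(code))  # the four words (0,0,0), (0,1,1), (1,0,1), (1,1,0)

# Linearity: the sum of any two code words is again a code word.
assert all(add_mod2(u, v) in code for u in code for v in code)
```

The all-zeros word acts as the identity and every word is its own inverse, so the code words do indeed form a group under this addition.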