This textbook is aimed at graduate students and upper level undergraduates in mathematics, engineering, and computer science. The material and the approach of the text were developed over several years at Auburn University in two independent courses, Information Theory and Data Compression. Although the material in the two courses is related, we think it unwise for information theory to be a prerequisite for data compression, and have written the data compression section of the text so that it can be read by or presented to students with no prior knowledge of information theory. There are references in the data compression part to results and proofs in the information theory part of the text, and those who are interested may browse over those references, but it is not absolutely necessary to do so. In fact, perhaps the best pedagogical order of approach to these subjects is the reverse of the apparent logical order: students will come to information theory curious and better prepared for having seen some of the definitions and theorems of that subject playing a role in data compression.
Our main aim in the data compression part of the text, as well as in the course it grew from, is to acquaint the students with a number of significant lossless compression techniques, and to discuss two lossy compression methods.
Our aim is for the students to emerge competent in and broadly conversant with a large range of techniques. We have striven for a “practical” style of presentation: here is what you do and here is what it is good for. Nonetheless, proofs are provided, sometimes in the text, sometimes in the exercises, so that the instructor can have the option of emphasizing the mathematics of data compression to some degree.
Information theory is of a more theoretical nature than data compression. It provides a vocabulary and a certain abstraction that can bring the power of simplification to many different situations. We thought it reasonable to treat it as a mathematical theory and to present the fundamental definitions and elementary results of that theory in utter abstraction from the particular problems of communication through noisy channels, which inspired the theory in the first place. We bring the theory to bear on noisy channels in Chapters 3 and 4.
The treatment of information theory given here is extremely elementary. The channels are memoryless and discrete, and the sources are all “zeroth-order,” one-state sources (although more complicated source models are discussed in Chapter 7). We feel that this elementary approach is appropriate for the target audience, and that, by leaving more complicated sources and channels out of the picture, we more effectively impart the grasp of information theory that we hope our students will take with them.
The exercises range from the routine to somewhat lengthy problems that introduce additional material or establish more difficult results. An asterisk by an exercise or section indicates that the material is off the main road, so to speak, and might reasonably be skipped. In the case of exercises, it may also indicate that the problem is hard and/or unusual.
In the data compression portion of the book, a number of projects require the use of a computer. Appendix A documents Octave and Matlab scripts written by the authors that can be used on some of the exercises and projects involving transform methods and images, and that can also serve as building blocks for other explorations. The software can be obtained from the authors’ site, listed in Appendix C. In addition, the site contains information about the book, an online version of Appendix A, and links to other sites of interest.