Preprint Article, Version 1 (this version is not peer-reviewed)

Mutual Information, the Linear Prediction Model, and CELP Voice Codecs

Version 1 : Received: 14 April 2019 / Approved: 16 April 2019 / Online: 16 April 2019 (11:08:04 CEST)

A peer-reviewed article based on this preprint has also been published.

Gibson, J. Mutual Information, the Linear Prediction Model, and CELP Voice Codecs. Information 2019, 10, 179.

Journal reference: Information 2019, 10, 179
DOI: 10.3390/info10050179

Abstract

We write the mutual information between an input speech utterance and its reconstruction by a Code-Excited Linear Prediction (CELP) codec in terms of the mutual information between the input speech and the contributions due to the short-term predictor, the adaptive codebook, and the fixed codebook. We then show that a recently introduced quantity, the log ratio of entropy powers, can be used to estimate these mutual informations in bits/sample. A key result is that, for many common distributions and for Gaussian autoregressive processes, the entropy powers in the ratio can be replaced by the corresponding minimum mean squared errors. We provide examples of estimating CELP codec performance using the new results and compare the estimates with the performance of the AMR codec and other CELP codecs. As in rate distortion theory, this method requires only the input source model and the appropriate distortion measure.
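As a minimal sketch of the relation the abstract invokes, using standard definitions (the notation below is illustrative and not necessarily the paper's): with differential entropy h(X) in bits, the entropy power is Q(X) = \frac{1}{2\pi e}\, 2^{2h(X)}, so the log ratio of entropy powers recovers a mutual information directly,

\[
I(X;Y) \;=\; h(X) - h(X \mid Y) \;=\; \tfrac{1}{2}\log_2\frac{Q(X)}{Q(X \mid Y)} \quad \text{bits/sample}.
\]

For a Gaussian autoregressive source, each entropy power equals the corresponding minimum mean squared prediction error, so the estimate reduces to \tfrac{1}{2}\log_2\!\left(\sigma_X^2 / \mathrm{MMSE}\right), which is the substitution the abstract describes.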

Subject Areas

autoregressive models; entropy power; linear prediction model; CELP voice codecs; mutual information
