Submitted: 22 April 2024
Posted: 23 April 2024
Abstract
Keywords:
1. Introduction
2. Theoretical Framework
2.1. Motivation: Cognitive Modelling
2.2. Predictive Cognition
2.3. The Information Dynamics of Music and Language
2.4. Sequence Segmentation and Boundary Entropy
2.5. The Information Dynamics of Thinking
3. Information Dynamics in Continuous Systems
3.1. Motivation and Structure of the Article
3.2. Coordinate Invariance
3.3. A Continuous Alternative for Information Content
4. Contrast Information
4.1. Contrast Information
4.2. Expected Contrast Information



4.3. Relationship with Information Content
Therefore, the mean absolute error between the information content and the contrast information is equal to the entropy $H(\cdot)$. □

5. Temporal Variants of Contrast Information
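The variants that follow differ in which temporal regime supplies the conditioning information. As a hedged orientation only, assuming a pointwise log-density-ratio form for contrast information (our reading for this sketch, not a restatement of the definitions of Section 4), write, for a present value $y$ and a source $x$:

$$
\mathrm{CI}(y; x) \;=\; \log \frac{p(y \mid x)}{p(y)} \;=\; \mathrm{IC}(y) - \mathrm{IC}(y \mid x).
$$

Under a change of variables the Jacobian cancels between numerator and denominator, so this ratio is coordinate invariant (cf. Section 3.2); and in the discrete case, where $\mathrm{IC}(y \mid x) \ge 0$, the expectation of $\lvert \mathrm{IC}(y) - \mathrm{CI}(y; x) \rvert = \mathrm{IC}(y \mid x)$ is a conditional entropy, consistent with the mean-absolute-error relationship of Section 4.3. On this reading, the variants below correspond to drawing $x$, and any additional context, from the temporal regimes tabulated in Section 5.5.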
5.1. Predictive Contrast Information
5.2. Connective Contrast Information
5.3. Reflective Contrast Information
5.4. Backward Temporal Variants
- Backward Predictive Contrast Information

- Backward Connective Contrast Information

- Backward Reflective Contrast Information

5.5. Terminology
6. Contrast Information of Some Stochastic Processes
6.1. Contrast Information of a Discrete-Time Markov Process
6.2. Contrast Information of a Continuous-Time Markov Process

6.3. Discrete-Time Gaussian Process
where $S$ is partitioned into target $A$, source $B$, and context $C$, with associated mean vector and block covariance matrix

$$
\mu = \begin{pmatrix} \mu_A \\ \mu_B \\ \mu_C \end{pmatrix},
\qquad
\Sigma = \begin{pmatrix}
\Sigma_{AA} & \Sigma_{AB} & \Sigma_{AC} \\
\Sigma_{BA} & \Sigma_{BB} & \Sigma_{BC} \\
\Sigma_{CA} & \Sigma_{CB} & \Sigma_{CC}
\end{pmatrix}.
$$
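A minimal numerical sketch of the bookkeeping this partition implies, assuming for illustration that the contrast information of the target is a conditional log-density ratio, $\log p(a \mid b, c) - \log p(a \mid c)$ (our assumed form; the function names and toy numbers below are likewise illustrative). The conditioning uses the standard Schur-complement formulas for a partitioned Gaussian:

```python
import numpy as np
from scipy.stats import multivariate_normal


def conditional_gaussian(mu, Sigma, keep, cond, x_cond):
    """Mean and covariance of p(x_keep | x_cond) for a joint Gaussian
    N(mu, Sigma), via the Schur-complement formulas."""
    kk, kc, cc = np.ix_(keep, keep), np.ix_(keep, cond), np.ix_(cond, cond)
    Scc_inv = np.linalg.inv(Sigma[cc])
    mu_cond = mu[keep] + Sigma[kc] @ Scc_inv @ (x_cond - mu[cond])
    Sigma_cond = Sigma[kk] - Sigma[kc] @ Scc_inv @ Sigma[kc].T
    return mu_cond, Sigma_cond


def contrast_information(mu, Sigma, A, B, C, a, b, c):
    """log p(a | b, c) - log p(a | c): how much the source B changes the
    density assigned to the target A, beyond the context C alone
    (assumed log-density-ratio form)."""
    m_full, S_full = conditional_gaussian(mu, Sigma, A, B + C,
                                          np.concatenate([b, c]))
    m_ctx, S_ctx = conditional_gaussian(mu, Sigma, A, C, c)
    return (multivariate_normal.logpdf(a, m_full, S_full)
            - multivariate_normal.logpdf(a, m_ctx, S_ctx))


# Toy example: scalar target A, source B, and context C.
mu = np.zeros(3)
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 1.0]])
print(contrast_information(mu, Sigma, A=[0], B=[1], C=[2],
                           a=np.array([0.5]), b=np.array([1.0]),
                           c=np.array([-0.2])))
```

Here `A`, `B`, and `C` are index lists selecting the target, source, and context blocks of $\mu$ and $\Sigma$; being a ratio of densities over the same target, the quantity is unchanged by a smooth reparametrisation of $A$.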
6.4. Continuous-Time Gaussian Process

7. Contrast Information in IDyOMS
7.1. Information Dynamics of Multidimensional Sequences (IDyOMS)
7.2. Results
8. Discussion
8.1. Contributions
8.2. Future Work
- Boundary Entropy
- The first application of the new measures will be in replicating prior segmentation work in music. This will require adapting the information-profile peak-picking algorithms for use on contrast information; a generic sketch follows this list. Subsequently, we will test the continuous measures on speech, using the TIMIT dataset.⁴
- Continuous-state IDyOMS
- We will extend our IDyOMS software to allow for continuous states, thus extending its representational reach in music and other domains. This will require substituting models of continuous feature dimensions for the PPM algorithm, as well as methods for combining viewpoint models based on contrast information rather than entropy; a hedged sketch of the current entropy-based style of combination also follows this list.
- Neural correlates
- We aim to collaborate with colleagues in neuroscience to investigate whether and how the neural correlates of perceived sound correspond with IDyOT representations, with a view to making the system more human-like.
- Spectral Knowledge Representation
- We will further develop the idea of Spectral Knowledge Representation [36] to allow our system to reason using the symbols identified by segmentation.
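As mentioned in the first item above, here is a generic sketch of information-profile peak picking for boundary detection. The local-peak-plus-threshold rule stands in for the published peak-picking algorithms, and `pick_boundaries` and its sensitivity parameter `k` are illustrative names, not an existing interface:

```python
import numpy as np


def pick_boundaries(profile, k=1.0):
    """Flag position n as a segment boundary when profile[n] is a local
    peak that exceeds the mean of the preceding values by k standard
    deviations. An illustrative rule, not the published algorithm."""
    profile = np.asarray(profile, dtype=float)
    boundaries = []
    for n in range(1, len(profile) - 1):
        prev = profile[:n]
        is_peak = profile[n] > profile[n - 1] and profile[n] >= profile[n + 1]
        if is_peak and profile[n] > prev.mean() + k * prev.std():
            boundaries.append(n)
    return boundaries


# e.g. boundaries in a contrast-information profile of a melody:
# pick_boundaries(ci_profile, k=1.5)
```

One adaptation question for contrast information is whether boundaries should be sought at peaks, at troughs, or at sign changes of the profile.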
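For the second item, a hedged sketch of the entropy-based style of viewpoint combination that contrast information would replace. The weighted geometric form follows the multiple-viewpoint literature in spirit, but `combine_viewpoints`, the exponent `b`, and the normalisation are illustrative choices, not IDyOMS internals:

```python
import numpy as np


def combine_viewpoints(dists, b=1.0):
    """Weighted geometric combination of per-viewpoint predictive
    distributions over the same alphabet (size > 1), weighting each model
    by its normalised Shannon entropy: lower entropy, i.e. a more
    confident model, earns a larger weight."""
    dists = [np.asarray(p, dtype=float) for p in dists]
    n = len(dists[0])
    combined = np.ones(n)
    for p in dists:
        pz = p[p > 0]
        h_rel = -np.sum(pz * np.log2(pz)) / np.log2(n)  # entropy in [0, 1]
        w = h_rel ** -b if h_rel > 0 else 1.0
        combined *= p ** w
    return combined / combined.sum()


# e.g. combining pitch and duration predictions for the next event:
# p = combine_viewpoints([p_cpitch, p_dur], b=2.0)
```

A contrast-information-based combination would substitute, for `h_rel`, a weight derived from each model's expected contrast information.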
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Acknowledgments
Conflicts of Interest
References
- Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; John Wiley & Sons, Inc., 2006.
- Pearce, M.T. The Construction and Evaluation of Statistical Models of Melodic Structure in Music Perception and Composition. PhD thesis, Department of Computing, City University, London, UK, 2005.
- Pearce, M.T.; Wiggins, G.A. Auditory Expectation: The Information Dynamics of Music Perception and Cognition. Topics in Cognitive Science 2012, 4, 625–652. [Google Scholar] [CrossRef]
- Abdallah, S.A.; Plumbley, M.D. Information dynamics: Patterns of expectation and surprise in the perception of music. Connection Science 2009, 21, 89–117.
- Agres, K.; Abdallah, S.; Pearce, M. Information-Theoretic Properties of Auditory Sequences Dynamically Influence Expectation and Memory. Cognitive Science 2017, 1–34. [CrossRef]
- MacKay, D.J.C. Information Theory, Inference, and Learning Algorithms; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
- Friston, K. The free-energy principle: A unified brain theory? Nature Reviews Neuroscience 2010, 11, 127–138. [Google Scholar] [CrossRef] [PubMed]
- Schmidhuber, J. Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010). Autonomous Mental Development, IEEE Transactions on 2010, 2, 230–247. [Google Scholar] [CrossRef]
- Clark, A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 2013, 36, 181–204. [Google Scholar] [CrossRef] [PubMed]
- Wiggins, G.A. Creativity, Information, and Consciousness: the Information Dynamics of Thinking. Physics of Life Reviews 2020, 34–35, 1–39. [Google Scholar] [CrossRef] [PubMed]
- Wiggins, G.A. Artificial Musical Intelligence: computational creativity in a closed cognitive world. In Artificial Intelligence and the Arts: Computational Creativity in the Visual Arts, Music, 3D, Games, and Artistic Perspectives; Computational Synthesis and Creative Systems, Springer International Publishing, 2021.
- Huron, D. Sweet Anticipation: Music and the Psychology of Expectation; Bradford Books, MIT Press: Cambridge, MA, 2006. [Google Scholar]
- Conklin, D. Prediction and Entropy of Music. Master’s thesis, Department of Computer Science, University of Calgary, Canada, 1990.
- Conklin, D.; Witten, I.H. Multiple Viewpoint Systems For Music Prediction. Journal of New Music Research 1995, 24, 51–73. [Google Scholar] [CrossRef]
- Pearce, M.T. Statistical learning and probabilistic prediction in music cognition: mechanisms of stylistic enculturation. Annals of the New York Academy of Sciences 2018, 1423, 378–395. [Google Scholar] [CrossRef] [PubMed]
- Moffat, A. Implementing the PPM data compression scheme. IEEE Transactions on Communications 1990, 38, 1917–1921. [Google Scholar] [CrossRef]
- Bunton, S. Semantically Motivated Improvements for PPM Variants. The Computer Journal 1997, 40, 76–93. [Google Scholar] [CrossRef]
- Pearce, M.T.; Conklin, D.; Wiggins, G.A. Methods for Combining Statistical Models of Music. In Computer Music Modelling and Retrieval; Wiil, U.K., Ed.; Springer Verlag: Heidelberg, Germany, 2005; pp. 295–312. [Google Scholar]
- Pearce, M.T.; Wiggins, G.A. Expectation in Melody: The Influence of Context and Learning. Music Perception 2006, 23, 377–405. [Google Scholar] [CrossRef]
- Pearce, M.T.; Herrojo Ruiz, M.; Kapasi, S.; Wiggins, G.A.; Bhattacharya, J. Unsupervised Statistical Learning Underpins Computational, Behavioural and Neural Manifestations of Musical Expectation. NeuroImage 2010, 50, 303–314. [Google Scholar] [CrossRef] [PubMed]
- Hansen, N.C.; Pearce, M.T. Predictive uncertainty in auditory sequence processing. Frontiers in Psychology 2014, 5. [Google Scholar] [CrossRef] [PubMed]
- Pearce, M.T.; Wiggins, G.A. Evaluating cognitive models of musical composition. Proceedings of the 4th International Joint Workshop on Computational Creativity; Cardoso, A., Wiggins, G.A., Eds.; Goldsmiths, University of London: London, 2007; pp. 73–80. [Google Scholar]
- Pearce, M.T.; Müllensiefen, D.; Wiggins, G.A. The role of expectation and probabilistic learning in auditory boundary perception: A model comparison. Perception 2010, 39, 1367–1391. [Google Scholar] [CrossRef] [PubMed]
- Wiggins, G.A. Cue Abstraction, Paradigmatic Analysis and Information Dynamics: Towards Music Analysis by Cognitive Model. Musicae Scientiae 2010, Special Issue: Understanding musical structure and form: papers in honour of Irène Deliège, 307–322.
- Wiggins, G.A. “I let the music speak”: cross-domain application of a cognitive model of musical learning. In Statistical Learning and Language Acquisition; Rebuschat, P., Williams, J., Eds.; Mouton De Gruyter: Amsterdam, NL, 2012; pp. 463–495. [Google Scholar]
- Griffiths, S.S.; McGinity, M.M.; Forth, J.; Purver, M.; Wiggins, G.A. Information-Theoretic Segmentation of Natural Language. Proceedings of the 2nd Workshop on AI and Cognition, 2015.
- Tan, N.; Aiello, R.; Bever, T.G. Harmonic structure as a determinant of melodic organization. Memory and Cognition 1981, 9, 533–9. [Google Scholar] [CrossRef] [PubMed]
- Chiappe, P.; Schmuckler, M.A. Phrasing influences the recognition of melodies. Psychonomic Bulletin & Review 1997, 4, 254–259. [Google Scholar]
- Wiggins, G.A.; Sanjekdar, A. Learning and consolidation as re-representation: revising the meaning of memory. Frontiers in Psychology: Cognitive Science 2019, 10. [Google Scholar] [CrossRef] [PubMed]
- Shannon, C. A mathematical theory of communication. Bell System Technical Journal 1948, 27, 379–423. [Google Scholar] [CrossRef]
- Large, E.W. A generic nonlinear model for auditory perception. In Auditory Mechanisms: Processes and Models; Nuttall, A.L., Ren, T., Gillespie, P., Grosh, K., de Boer, E., Eds.; World Scientific: Singapore, 2006; pp. 516–517. [Google Scholar]
- Spivey, M. The Continuity of Mind; Oxford University Press, 2008.
- Kraus, N.; Nicol, T. Brainstem Encoding of Speech and Music Sounds in Humans. In The Oxford Handbook of the Auditory Brainstem; Oxford University Press, 2019; (online version). [CrossRef]
- Bellier, L.; Llorens, A.; Marciano, D.; Gunduz, A.; Schalk, G.; Brunner, P.; Knight, R.T. Music can be reconstructed from human auditory cortex activity using nonlinear decoding models. PLoS Biol 2023, 21, e3002176. [Google Scholar] [CrossRef]
- Pasley, B.N.; David, S.V.; Mesgarani, N.; Flinker, A.; Shamma, S.A.; Crone, N.E.; Knight, R.T.; Chang, E.F. Reconstructing Speech from Human Auditory Cortex. PLoS Biol 2012, 10, e1001251. [Google Scholar] [CrossRef]
- Homer, S.T.; Harley, N.; Wiggins, G.A. The Discrete Resonance Spectrogram: a novel method for precise determination of spectral content. In preparation.
- Caticha, A. Relative Entropy and Inductive Inference. AIP Conference Proceedings, 2004, Vol. 707, pp. 75–96, [physics/0311093]. [CrossRef]
- Caticha, A. Entropic Inference and the Foundations of Physics; University of Albany – SUNY, 2012.
- Eckhorn, R.; Pöpel, B. Rigorous and Extended Application of Information Theory to the Afferent Visual System of the Cat. I. Basic Concepts. Kybernetik 1974, 16, 191–200. [CrossRef]
- DeWeese, M.R.; Meister, M. How to Measure the Information Gained from One Symbol. Network: Computation in Neural Systems 1999, 10, 325–340.
- Good, I.J. Good Thinking: The Foundations of Probability and Its Applications; University of Minnesota Press, 1983.
- Braverman, M.; Chen, X.; Kakade, S.; Narasimhan, K.; Zhang, C.; Zhang, Y. Calibration, Entropy Rates, and Memory in Language Models. Proceedings of the 37th International Conference on Machine Learning, 2020.
- Anderson, W.J. Continuous-Time Markov Chains; Springer Series in Statistics, Springer New York, 1991. [CrossRef]
- Brockwell, P.J.; Davis, R.A. Time Series: Theory and Methods; Springer Series in Statistics, Springer New York, 1991. [CrossRef]
- Soch, J.; Monticone, P.; Faulkenberry, T.J.; Kipnis, A.; Petrykowski, K.; Allefeld, C.; Atze, H.; Knapp, A.; McInerney, C.D.; et al. The Book of Statistical Proofs (Version 2023), 2024. [CrossRef]
- Brockwell, P.; Davis, R.; Yang, Y. Continuous-Time Gaussian Autoregression. Statistica Sinica 2007, 17, 63–80.
- Cleary, J.; Witten, I. Data Compression Using Adaptive Coding and Partial String Matching. IEEE Transactions on Communications 1984, 32, 396–402. [Google Scholar] [CrossRef]
- Hedges, T.; Wiggins, G.A. The Prediction of Merged Attributes with Multiple Viewpoint Systems. Journal of New Music Research 2016, 45, 314–332. [Google Scholar] [CrossRef]
1. We use this joint term because it is not yet clear to us where the boundary between the mind and the brain lies, if indeed there is one.
2. Here, we represent the transitions using a right stochastic matrix, though it is also common to see the transition matrix written as a left stochastic matrix. We use the right stochastic representation so that we can use Dirac notation with its standard semantics, in which a column vector corresponds to a state, rather than the row vector that a left stochastic matrix would require; an illustration follows these notes.
3.
4.
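To illustrate the convention in note 2 (our rendering, with one-hot basis vectors for the states):

$$
\langle i \rvert\, P \,\lvert j \rangle \;=\; P_{ij} \;=\; \Pr\!\left(s_{t+1} = j \mid s_t = i\right),
\qquad
\sum_{j} P_{ij} = 1 .
$$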




| Regime | Continuous | Discrete |
|---|---|---|
| Past | X | X |
| Near Past |  |  |
| Point Past |  |  |
| Present | Y | Y |
| Point Future |  |  |
| Near Future |  |  |
| Future | Z | Z |
| Order | CPITCH |  | DUR |  | CPITCH × DUR |  |
|---|---|---|---|---|---|---|
| 0 | 0.86 | 0.88 | 0.89 | 1.00 | -0.83 | -0.82 |
| 1 | 0.85 | 0.86 | 0.72 | 0.51 | 0.34 | 0.38 |
| 2 | 0.76 | 0.77 | 0.70 | 0.66 | 0.45 | 0.47 |
| 3 | 0.73 | 0.76 | 0.70 | 0.69 | 0.23 | 0.28 |
| 4 | 0.75 | 0.81 | 0.71 | 0.72 | 0.19 | 0.33 |
| 5 | 0.77 | 0.86 | 0.73 | 0.78 | 0.23 | 0.39 |
| 6 | 0.78 | 0.88 | 0.74 | 0.80 | 0.32 | 0.45 |
| 7 | 0.80 | 0.90 | 0.75 | 0.82 | 0.45 | 0.50 |
| 8 | 0.82 | 0.91 | 0.75 | 0.84 | 0.58 | 0.54 |
| 9 | 0.84 | 0.92 | 0.76 | 0.86 | 0.66 | 0.57 |
| 10 | 0.85 | 0.92 | 0.77 | 0.87 | 0.69 | 0.60 |
| Order | CPITCH |  | DUR |  | CPITCH × DUR |  |
|---|---|---|---|---|---|---|
| 0 | 1.00 | - | 1.00 | - | 1.00 | - |
| 1 | 0.20 | -0.06 | 0.36 | -0.48 | -0.28 | -0.23 |
| 2 | 0.24 | 0.28 | 0.31 | -0.14 | 0.06 | 0.33 |
| 3 | 0.51 | 0.50 | 0.40 | 0.29 | 0.27 | 0.36 |
| 4 | 0.67 | 0.68 | 0.44 | 0.29 | 0.34 | 0.46 |
| 5 | 0.73 | 0.81 | 0.50 | 0.42 | 0.37 | 0.53 |
| 6 | 0.75 | 0.89 | 0.55 | 0.49 | 0.42 | 0.59 |
| 7 | 0.77 | 0.93 | 0.60 | 0.60 | 0.54 | 0.64 |
| 8 | 0.80 | 0.95 | 0.64 | 0.70 | 0.68 | 0.68 |
| 9 | 0.83 | 0.96 | 0.66 | 0.76 | 0.77 | 0.71 |
| 10 | 0.85 | 0.97 | 0.67 | 0.80 | 0.81 | 0.73 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).