Submitted:
17 July 2025
Posted:
18 July 2025
Abstract
Keywords:
1. Introduction
2. Brief Review of the Mathematics of Coincidences
2.1. Index of Coincidence
2.2. The Birthday Problem
3. Kolmogorov Complexity and Algorithmic Probability
3.1. Kolmogorov Complexity
3.2. Algorithmic Probability
4. Algorithmic Probability and Index of Coincidence
4.1. IC for Algorithmic Probability
4.2. Curious Coincidences and Uncomputability
4.3. Real-World Data Examples
5. Simplicity Bias
5.1. Upper Bound
- (3) Tooth shapes from intricate and realistic computational models of dental morphology [52].
- (4) Plant shapes in computational simulations of plant growth [49].
- (5) Output binary strings from finite state transducers [53].
- (8) Natural time series data [57], in particular digitised up-down series patterns.
5.2. IC and Simplicity Bias
5.3. Convergent Evolution
6. Coincidences in Time Series
6.1. Time Series Patterns
6.2. What’s Next?
6.3. Solomonoff Induction
6.4. Coincidence in Time Series Anomalies
6.5. Near Matches
7. Discussion
Acknowledgments
References
- Jung, C.G.; Hull, R. Synchronicity: An Acausal Connecting Principle. (From Vol. 8. of the Collected Works of C. G. Jung), revised ed.; Princeton University Press, 1960.
- Diaconis, P.; Mosteller, F. Methods for studying coincidences. Journal of the American Statistical Association 1989, 84, 853–861. [Google Scholar] [CrossRef]
- Friedman, W.F. The index of coincidence and its applications in cryptography; Aegean Park Press, 1922.
- Hofert, M. Random number generators produce collisions: Why, how many and more. The American Statistician 2021, 75, 394–402. [Google Scholar] [CrossRef]
- Pollanen, M. A Double Birthday Paradox in the Study of Coincidences. Mathematics 2024, 12, 3882. [Google Scholar] [CrossRef]
- Johansen, M.K.; Osman, M. Coincidences: A fundamental consequence of rational cognition. New Ideas in Psychology 2015, 39, 34–44. [Google Scholar] [CrossRef]
- Hand, D.J. The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day; Macmillan, 2014.
- Fisher, R. The design of experiments; Springer, 1971.
- Graham, R.L.; Rothschild, B.L.; Spencer, J.H. Ramsey theory; John Wiley & Sons, 1991.
- Li, M.; Vitanyi, P. An introduction to Kolmogorov complexity and its applications; Springer-Verlag New York Inc, 2008.
- Cover, T.; Thomas, J. Elements of information theory; John Wiley and Sons, 2006.
- MacKay, D.J. Information theory, inference and learning algorithms; Cambridge university press, 2003.
- Holst, L. The general birthday problem. Random Structures & Algorithms 1995, 6, 201–208. [Google Scholar] [CrossRef]
- Henze, N. A Poisson limit law for a generalized birthday problem. Statistics & Probability Letters 1998, 39, 333–336. [Google Scholar] [CrossRef]
- Camarri, M.; Pitman, J. Limit distributions and random trees derived from the birthday problem with unequal probabilities. Electronic Journal of Probability 2000, 5. [Google Scholar] [CrossRef]
- Zhou, Q. Birth, Death, Coincidences and Occupancies: Solutions and Applications of Generalized Birthday and Occupancy Problems. Methodology and Computing in Applied Probability 2023, 25, 53. [Google Scholar] [CrossRef]
- Mase, S. Approximations to the birthday problem with unequal occurrence probabilities and their application to the surname problem in Japan. Annals of the Institute of Statistical Mathematics 1992, 44, 479–499. [Google Scholar] [CrossRef]
- Mezard, M.; Montanari, A. Information, physics, and computation; Oxford University Press, USA, 2009.
- Solomonoff, R.J. A Preliminary Report on a General Theory of Inductive Inference (Revision of Report V-131); Report ZTB-138; Zator Company: Cambridge, MA, 1960. [Google Scholar]
- Kolmogorov, A. Three approaches to the quantitative definition of information. Problems of information transmission 1965, 1, 1–7. [Google Scholar] [CrossRef]
- Chaitin, G.J. A theory of program size formally identical to information theory. Journal of the ACM (JACM) 1975, 22, 329–340. [Google Scholar] [CrossRef]
- Bédard, C.A. Lecture Notes on Algorithmic Information Theory. arXiv preprint arXiv:2504.18568, 2025.
- Turing, A.M. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 1936, s2-42, 230–265. [Google Scholar]
- Vitányi, P.M. Similarity and denoising. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 2013, 371, 20120091. [Google Scholar] [CrossRef] [PubMed]
- Vitányi, P. How incomputable is Kolmogorov complexity? Entropy 2020, 22, 408. [Google Scholar] [CrossRef] [PubMed]
- Veness, J.; Ng, K.S.; Hutter, M.; Uther, W.; Silver, D. A monte-carlo aixi approximation. Journal of Artificial Intelligence Research 2011, 40, 95–142. [Google Scholar] [CrossRef]
- Delahaye, J.; Zenil, H. Numerical Evaluation of Algorithmic Complexity for Short Strings: A Glance into the Innermost Structure of Algorithmic Randomness. Appl. Math. Comput. 2012, 219, 63–77. [Google Scholar] [CrossRef]
- Soler-Toscano, F.; Zenil, H.; Delahaye, J.P.; Gauvrit, N. Calculating Kolmogorov complexity from the output frequency distributions of small Turing machines. PloS one 2014, 9, e96223. [Google Scholar] [CrossRef] [PubMed]
- Bennett, C. The thermodynamics of computation – a review. International Journal of Theoretical Physics 1982, 21, 905–940. [Google Scholar] [CrossRef]
- Kolchinsky, A.; Wolpert, D.H. Thermodynamic costs of Turing machines. Physical Review Research 2020, 2, 033312. [Google Scholar] [CrossRef]
- Zurek, W. Algorithmic randomness and physical entropy. Physical Review A 1989, 40, 4731. [Google Scholar] [CrossRef] [PubMed]
- Avinery, R.; Kornreich, M.; Beck, R. Universal and accessible entropy estimation using a compression algorithm. Physical review letters 2019, 123, 178102. [Google Scholar] [CrossRef] [PubMed]
- Martiniani, S.; Chaikin, P.M.; Levine, D. Quantifying hidden order out of equilibrium. Physical Review X 2019, 9, 011031. [Google Scholar] [CrossRef]
- Ferragina, P.; Giancarlo, R.; Greco, V.; Manzini, G.; Valiente, G. Compression-based classification of biological sequences and structures via the Universal Similarity Metric: experimental assessment. BMC bioinformatics 2007, 8, 252. [Google Scholar] [CrossRef] [PubMed]
- Johnston, I.G.; Dingle, K.; Greenbury, S.F.; Camargo, C.Q.; Doye, J.P.; Ahnert, S.E.; Louis, A.A. Symmetry and simplicity spontaneously emerge from the algorithmic nature of evolution. Proceedings of the National Academy of Sciences 2022, 119, e2113883119. [Google Scholar] [CrossRef] [PubMed]
- Adams, A.; Zenil, H.; Davies, P.C.; Walker, S.I. Formal definitions of unbounded evolution and innovation reveal universal mechanisms for open-ended evolution in dynamical systems. Scientific reports 2017, 7, 1–15. [Google Scholar] [CrossRef] [PubMed]
- Devine, S.D. Algorithmic Information Theory for Physicists and Natural Scientists; IOP Publishing, 2020.
- Cilibrasi, R.; Vitányi, P. Clustering by compression. IEEE Transactions on Information Theory 2005, 51, 1523–1545. [Google Scholar] [CrossRef]
- Hutter, M. On universal prediction and Bayesian confirmation. Theoretical Computer Science 2007, 384, 33–48. [Google Scholar] [CrossRef]
- Levin, L. Laws of information conservation (nongrowth) and aspects of the foundation of probability theory. Problemy Peredachi Informatsii 1974, 10, 30–35. [Google Scholar]
- Zenil, H.; Badillo, L.; Hernández-Orozco, S.; Hernández-Quiroz, F. Coding-theorem like behaviour and emergence of the universal distribution from resource-bounded algorithmic probability. International Journal of Parallel, Emergent and Distributed Systems 2019, 34, 161–180. [Google Scholar] [CrossRef]
- Kirchherr, W.; Li, M.; Vitányi, P. The miraculous universal distribution. The Mathematical Intelligencer 1997, 19, 7–15. [Google Scholar] [CrossRef]
- Yanofsky, N.S. Kolmogorov Complexity and Our Search for Meaning: What Math Can Teach Us about Finding Order in Our Chaotic Lives. The Best Writing on Mathematics 2019 2019, 8, 208. [Google Scholar]
- Cilibrasi, R.; Vitanyi, P. Automatic meaning discovery using Google. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, 2006.
- Fink, T.M. Recursively divisible numbers. Journal of Number Theory 2024, 256, 37–54. [Google Scholar] [CrossRef]
- Berger, A.; Hill, T.P. An introduction to Benford’s law; Princeton University Press, 2015.
- Wolfram, S. A New Kind of Science; Wolfram Media, 2002.
- Alaskandarani, M.; Dingle, K. Low complexity, low probability patterns and consequences for algorithmic probability applications. Complexity 2023, 2023, 9696075. [Google Scholar] [CrossRef]
- Dingle, K.; Camargo, C.Q.; Louis, A.A. Input–output maps are strongly biased towards simple outputs. Nature communications 2018, 9, 761. [Google Scholar] [CrossRef] [PubMed]
- Lempel, A.; Ziv, J. On the complexity of finite sequences. IEEE Transactions on Information Theory 1976, 22, 75–81. [Google Scholar] [CrossRef]
- Dingle, K.; Batlle, P.; Owhadi, H. Multiclass classification utilising an estimated algorithmic probability prior. Physica D: Nonlinear Phenomena 2023, 448, 133713. [Google Scholar] [CrossRef]
- Dingle, K.; Hagolani, P.; Zimm, R.; Umar, M.; O’Sullivan, S.; Louis, A.A. Bounding phenotype transition probabilities via conditional complexity. bioRxiv, 2024. [Google Scholar]
- Dingle, K.; Pérez, G.V.; Louis, A.A. Generic predictions of output probability based on complexities of inputs and outputs. Scientific reports 2020, 10, 1–9. [Google Scholar] [CrossRef] [PubMed]
- Valle-Pérez, G.; Camargo, C.Q.; Louis, A.A. Deep learning generalizes because the parameter-function map is biased towards simple functions. arXiv preprint arXiv:1805.08522, 2018.
- Mingard, C.; Skalse, J.; Valle-Pérez, G.; Martínez-Rubio, D.; Mikulik, V.; Louis, A.A. Neural networks are a priori biased towards Boolean functions with low entropy. arXiv preprint arXiv:1909.11522, 2019.
- Mingard, C.; Rees, H.; Valle-Pérez, G.; Louis, A.A. Deep neural networks have an inbuilt Occam’s razor. Nature Communications 2025, 16, 220. [Google Scholar] [CrossRef] [PubMed]
- Dingle, K.; Kamal, R.; Hamzi, B. A note on a priori forecasting and simplicity bias in time series. Physica A: Statistical Mechanics and its Applications 2023, 609, 128339. [Google Scholar] [CrossRef]
- Dingle, K.; Alaskandarani, M.; Hamzi, B.; Louis, A.A. Exploring simplicity bias in 1D dynamical systems. Entropy 2024, 26, 426. [Google Scholar] [CrossRef] [PubMed]
- Hamzi, B.; Dingle, K. Simplicity bias, algorithmic probability, and the random logistic map. Physica D: Nonlinear Phenomena 2024, 463, 134160. [Google Scholar] [CrossRef]
- Conway-Morris, S. Evolution: like any other science it is predictable. Philosophical Transactions of the Royal Society B: Biological Sciences 2010, 365, 133. [Google Scholar] [CrossRef] [PubMed]
- McGhee, G.R. Convergent evolution: limited forms most beautiful; MIT press, 2011.
- Conway Morris, S. Life’s solution: inevitable humans in a lonely universe; Cambridge University Press, 2003.
- Conway Morris, S. The deep structure of biology: is convergence sufficiently ubiquitous to give a directional signal?; Templeton Foundation Press, 2008.
- Dingle, K.; Schaper, S.; Louis, A.A. The structure of the genotype–phenotype map strongly constrains the evolution of non-coding RNA. Interface focus 2015, 5, 20150053. [Google Scholar] [CrossRef] [PubMed]
- Louis, A.A. Contingency, convergence and hyper-astronomical numbers in biological evolution. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 2016, 58, 107–116. [Google Scholar] [CrossRef] [PubMed]
- Wright, E.S. Tandem repeats provide evidence for convergent evolution to similar protein structures. Genome Biology and Evolution 2025, 17, evaf013. [Google Scholar] [CrossRef] [PubMed]
- Chatfield, C. The analysis of time series: theory and practice; Springer, 2013.
- Zenil, H.; Delahaye, J. An algorithmic information theoretic approach to the behaviour of financial markets. Journal of Economic Surveys 2011, 25, 431–463. [Google Scholar] [CrossRef]
- Rathmanner, S.; Hutter, M. A philosophical treatise of universal induction. Entropy 2011, 13, 1076–1136. [Google Scholar] [CrossRef]
- Merhav, N.; Feder, M. Universal prediction. IEEE Transactions on Information Theory 1998, 44, 2124–2147. [Google Scholar] [CrossRef]
- Willems, F.M.; Shtarkov, Y.M.; Tjalkens, T.J. The context-tree weighting method: Basic properties. IEEE transactions on information theory 1995, 41, 653–664. [Google Scholar] [CrossRef]
- Hutter, M.; Legg, S.; Vitanyi, P.M. Algorithmic probability. Scholarpedia 2007, 2, 2572. [Google Scholar] [CrossRef]
- Solomonoff, R.J. A formal theory of inductive inference. Part I. Information and control 1964, 7, 1–22. [Google Scholar] [CrossRef]
- Hutter, M. Universal artificial intelligence: Sequential decisions based on algorithmic probability; Springer Science & Business Media, 2004.
- Ryabko, B.; Astola, J.; Malyutov, M. Compression-based methods of statistical analysis and prediction of time series; Springer, 2016.
- Blázquez-García, A.; Conde, A.; Mori, U.; Lozano, J.A. A review on outlier/anomaly detection in time series data. ACM computing surveys (CSUR) 2021, 54, 1–33. [Google Scholar] [CrossRef]
- Dasgupta, A.; Li, B. Detection and analysis of spikes in a random sequence. Methodology and Computing in Applied Probability 2018, 20, 1429–1451. [Google Scholar] [CrossRef]
- Horváth, L.; Rice, G. Change Point Analysis for Time Series; Springer, 2024.
- Ishkuvatov, R.; Musatov, D. On approximate uncomputability of the Kolmogorov complexity function. In Proceedings of the Computing with Foresight and Industry: 15th Conference on Computability in Europe, CiE 2019, Durham, UK, 15–19 July 2019; Springer, 2019; pp. 230–239. [Google Scholar]
- Ishkuvatov, R.; Musatov, D.; Shen, A. Approximating Kolmogorov complexity. Computability 2023, 12, 283–297. [Google Scholar] [CrossRef]
- Dingle, K. Optima and simplicity in nature. arXiv preprint arXiv:2210.02564, 2022.
- Dingle, K. Fitness, optima, and simplicity. Preprints, 2022; 2022080402. [Google Scholar]
- Wyeth, C.; Bu, D.; Yu, Q.; Gao, W.; Liu, X.; Li, M. Lossless data compression by large models. Nature Machine Intelligence 2025. [Google Scholar] [CrossRef]
2. After this chapter was reviewed and accepted for publication, I learned that Noson Yanofsky had made essentially the same point earlier [43].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
