Preprint Article (this version is not peer-reviewed)

Radioactive Information: How Uncomputability Ensures O(1) Precision for Non-Shannon Inequalities

Submitted: 25 December 2025
Posted: 26 December 2025


Abstract
Shannon entropy and Kolmogorov complexity describe complementary facets of information. We revisit Q2 from 27 Open Problems in Kolmogorov Complexity: whether all linear information inequalities including non‑Shannon‑type ones admit $$\mathcal{O}(1)$$-precision analogues for prefix‑free Kolmogorov complexity. We answer in the affirmative via two independent arguments. First, a contradiction proof leverages the uncomputability of $$K$$ to show that genuine algorithmic dependencies underlying non‑Shannon‑type constraints cannot incur length‑dependent overheads. Second, a coding‑theoretic construction treats the copy lemma as a bounded‑overhead coding mechanism and couples prefix‑free coding (Kraft's inequality) with typicality (Shannon-McMillan-Breiman) to establish $$\mathcal{O}(1)$$ precision; we illustrate the method on the Zhang-Yeung (ZY98) inequality and extend to all known non‑Shannon‑type inequalities derived through a finite number of copy operations. These results clarify the structural bridge between Shannon‑type linear inequalities and their Kolmogorov counterparts, and formalize artificial independence as the algorithmic analogue of copying in entropy proofs. Collectively, they indicate that the apparent discrepancy between statistical and algorithmic information manifests only as constant‑order effects under prefix complexity, thereby resolving a fundamental question about the relationship between statistical and algorithmic information structure.

1. Introduction

Since their respective introductions, Shannon entropy (Information Theory) and Kolmogorov complexity (Algorithmic Information Theory) have followed two distinct trajectories. Shannon's foundational work [1], initially developed for telecommunication problems, has permeated numerous disciplines and found widespread practical application. In contrast, Kolmogorov complexity1 emerged from independent contributions by Solomonoff [2,3], Kolmogorov [4] and Chaitin [5] as a framework for algorithmic probability and a foundation for statistics. Despite its theoretical elegance, Kolmogorov complexity has remained largely confined to the theoretical realm due to its uncomputability.
This uncomputability, often viewed as a limitation, may instead represent a powerful feature that reveals deeper information structure beyond what statistical methods can capture. The existence of non-Shannon-type inequalities (constraints that cannot be derived from Shannon's basic inequalities) provides compelling evidence of this richer structure. These inequalities have practical implications in areas such as network coding [6] [Section 3.5], demonstrating that the gap between statistical and algorithmic information has real-world consequences.
Problem Q2 from the "27 Open Problems in Kolmogorov Complexity" directly addresses this gap: Do all linear inequalities for entropies (or complexities), or at least the known non-Shannon-type inequalities, hold with precision $O(1)$ for prefix complexity? Our work provides a definitive answer by leveraging the very uncomputability that has historically constrained Algorithmic Information Theory, transforming it from a perceived limitation into the key insight that resolves this open problem.

1.1. 27 Open Problems in Kolmogorov Complexity: Q2

The work [7] identifies and lists 27 problems in the field of Kolmogorov complexity.
We recall/reproduce the Q2 ([7] [p. 5]):
Do all linear inequalities for entropies (or complexities, since they are the same), or at least the known non-Shannon inequalities, hold with precision $O(1)$ for the case of prefix complexity?
This question actually embeds two questions, which can be divided and formulated as follows:
  • Do all linear Shannon-type inequalities for entropies (or complexities) hold with precision $O(1)$ in the case of prefix-free complexity?
  • Do all the known non-Shannon-type inequalities hold with precision $O(1)$ in the case of prefix complexity?

1.2. Paper Organization

As an attempt to answer Q2 (1.1), this work is structured as follows:
  • We start by recalling an earlier result on the equivalence between Shannon entropy inequalities and their Kolmogorov complexity counterparts,
  • We recall the main proof technique (the copy lemma) used for proving non-Shannon-type inequalities, together with its Kolmogorov complexity counterpart (artificial independence),
  • We propose a first proof by contradiction, answering Q2 positively,
  • We follow with a second proof based on coding theory,
  • We close this work by recalling our salient points and concluding.

2. Preliminary Results

Earlier work has shown that all the linear inequalities that hold for Shannon entropy also hold for Kolmogorov complexity [8]. In other words, there is a one-to-one mapping between non-Shannon-type inequalities and their Kolmogorov complexity counterparts [8].
Nevertheless, we are interested in the precision to which they hold; Q2 asks whether the non-Shannon-type inequalities hold with $O(1)$ precision.

3. Information Theory Recalls

3.1. Elemental Inequalities

The basic linear inequalities can be expressed in a reduced form called the elemental form. A number of resources cover this point: [6,9,10].
Theorem 1
(II.1 [11], [12]). Any Shannon information measure can be expressed as a conic combination of the following two elemental forms of Shannon's information measures:
i) $H(X_i \mid X_{\mathcal{N}_n \setminus \{i\}})$
ii) $I(X_i ; X_j \mid X_K)$, where $i \neq j$ and $K \subseteq \mathcal{N}_n \setminus \{i, j\}$.
These are the basic building blocks, i.e., Shannon-type inequalities are fully characterized by these inequalities.
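To make the elemental forms concrete, the following minimal Python sketch (our own illustration; the subset-indexed representation of an entropy vector is an assumption, not taken from the cited works) enumerates both elemental forms for $n$ random variables and checks a given entropy vector against them.

```python
from itertools import combinations

def elemental_inequalities(n):
    """Yield the elemental forms of Theorem 1 for n random variables.

    Each item is a function of an entropy vector h, where h maps a frozenset
    of variable indices to the corresponding joint entropy (the empty set is
    treated as having entropy 0). Each yielded value must be >= 0.
    """
    ground = frozenset(range(1, n + 1))

    def H(h, s):
        return 0.0 if not s else h[frozenset(s)]

    # (i)  H(X_i | X_{N \ {i}}) >= 0
    for i in sorted(ground):
        yield lambda h, i=i: H(h, ground) - H(h, ground - {i})

    # (ii) I(X_i ; X_j | X_K) >= 0,  i != j,  K subset of N \ {i, j}
    for i, j in combinations(sorted(ground), 2):
        others = sorted(ground - {i, j})
        for r in range(len(others) + 1):
            for K in combinations(others, r):
                K = frozenset(K)
                yield lambda h, i=i, j=j, K=K: (
                    H(h, K | {i}) + H(h, K | {j}) - H(h, K | {i, j}) - H(h, K)
                )

def is_shannon_consistent(h, n, tol=1e-9):
    """True iff the entropy vector h satisfies every elemental inequality."""
    return all(ineq(h) >= -tol for ineq in elemental_inequalities(n))

# Example: two independent fair bits (entropies in bits).
h = {frozenset({1}): 1.0, frozenset({2}): 1.0, frozenset({1, 2}): 2.0}
print(is_shannon_consistent(h, 2))  # True
```

For $n = 3$ this enumerates $3 + 3 \cdot 2 = 9$ elemental inequalities, matching the count $n + \binom{n}{2} 2^{n-2}$.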

3.2. Copy Lemma

The copy lemma is a proof technique initially proposed in [13]. Note that this designation was adopted after the technique was introduced; it was not originally defined under the name copy lemma. It constitutes a major tool in the arsenal for proving non-Shannon-type inequalities, although it is unknown whether it is sufficient to derive all such inequalities for $n$ random variables.
The main idea of the copy lemma involves the construction of a Markov chain $(X, Y) \rightarrow (Z, T) \rightarrow U$, where $U$ is a copy of the random variable $X$ [14]. Its name in the Kolmogorov complexity setting may (or may not) convey a more intuitive sense: artificial independence (4.5).
Formally, we use the following formulation:
Lemma 1
([15], Lemma III.3). Given a jointly distributed random variable $X$ and collections of random variables $Y$ and $Z$, there exists a random variable $U$, jointly distributed with $X$, $Y$, and $Z$, such that $U$ is a $Z$-copy of $X$ over $Y$, with the following conditions:
(C1) $H(U \mid X, Y, Z) = 0$ (conditional determinism)
(C2) The joint probability distributions of $(X, Y, Z)$ and $(U, Y, Z)$ are equal.

Copy Lemma: A Mechanism for Revealing Hidden Structure

Intuitively, the copy lemma can be seen as a structure-revealing construction that exposes hidden constraints in the entropy space. Indeed, for $n \leq 3$ random variables, all the constraints on the entropy vectors come from the non-negativity of the basic information inequalities, or linear inequalities satisfying the polymatroid axioms [6,10]. However, for $n \geq 4$, there are "hidden" constraints that are not captured by the basic information inequalities. These information structures can be revealed by the copy lemma, which works through the following process (a numerical sketch follows the list):
- It preserves valid dependencies: the construction ensures that $U$ has the same relationship with $(Y, Z, T)$ as $X$ does. Specifically, $H(U, Y, Z, T) = H(X, Y, Z, T) + O(1)$.
- It splits only the necessary dependencies: it breaks the direct connection between $X$ and $U$ given $(Z, T)$ while imposing conditional independence. This means $X$ and $U$ are conditionally independent given $(Z, T)$: $I(X; U \mid Z, T) = 0$.
- It unearths genuine constraints: the obtained inequality is not artificial; it reflects real constraints in the entropy space.
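As a concrete companion to this mechanism, the following Python sketch (our own illustration) realizes one standard way to build such a copy for three variables $X, Y, Z$: set $q(x, y, z, u) = p(x, y, z)\, p_{X \mid Y Z}(u \mid y, z)$. It then checks numerically that $(U, Y, Z)$ has the same distribution as $(X, Y, Z)$ (condition C2) and that $I(X; U \mid Y, Z) = 0$ (the artificial independence). The fourth variable $T$ of the text is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# A strictly positive random joint distribution p(x, y, z) over binary X, Y, Z.
p_xyz = rng.random((2, 2, 2)) + 0.01
p_xyz /= p_xyz.sum()

# Conditional law p(x | y, z), reused as the law of the copy U given (Y, Z).
p_yz = p_xyz.sum(axis=0)
p_x_given_yz = p_xyz / p_yz[None, :, :]

# Copy construction: q(x, y, z, u) = p(x, y, z) * p(u | y, z).
q = np.einsum('xyz,uyz->xyzu', p_xyz, p_x_given_yz)

def H(p):
    """Shannon entropy (bits) of a pmf given as an array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# (U, Y, Z) has the same joint distribution as (X, Y, Z): condition C2.
q_uyz = np.einsum('xyzu->uyz', q)
print("C2 holds:", np.allclose(q_uyz, p_xyz))

# I(X; U | Y, Z) = H(X,Y,Z) + H(U,Y,Z) - H(X,Y,Z,U) - H(Y,Z) = 0.
i_x_u_given_yz = (H(np.einsum('xyzu->xyz', q)) + H(q_uyz)
                  - H(q) - H(np.einsum('xyzu->yz', q)))
print("I(X;U|Y,Z) vanishes:", abs(i_x_u_given_yz) < 1e-9)
```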

4. Algorithmic Information Theory Recalls

4.1. Kolmogorov Complexity

We start by recalling the definition of plain Kolmogorov complexity (C). Then, we introduce its prefix version.
The plain Kolmogorov complexity of a string σ is defined as:
Definition 1
(Plain Kolmogorov Complexity, [16]).
$$C_f(\sigma) = \min\{ |\tau| : f(\tau) = \sigma \}$$
That is, the (plain) Kolmogorov complexity ($C$) of a string $\sigma$ with respect to a reference Turing machine2 $f$ is the length of the shortest input $\tau$ on which $f$ halts and outputs $\sigma$.
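Because $C$ (like $K$) is uncomputable, in practice one can only bound it from above. As a hedged illustration (the use of zlib is our choice, not part of the cited definition), the sketch below reports the length of a compressed description of a string, which upper-bounds its Kolmogorov complexity up to the constant-size decompressor.

```python
import os
import zlib

def description_length_bits(s: bytes) -> int:
    """Length (in bits) of a zlib description of s; an upper bound on C(s)/K(s)
    up to an additive constant (the fixed decompression program)."""
    return 8 * len(zlib.compress(s, 9))

print(description_length_bits(b"01" * 5000))        # highly regular: short description
print(description_length_bits(os.urandom(10_000)))  # typically close to 8 * 10_000 bits
```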

4.2. Prefix-Free Machines

We now proceed with the prefix Kolmogorov complexity, commonly designated as K.
First, we recall the description of a prefix-machine:
Proposition 1
([16], Prop. 3.5.1). (i) If $\Phi$ is a prefix-free partial computable function, then there is a prefix-free (self-delimiting) machine $M$ such that $M$ computes $\Phi$.
(ii) There is a universal (self-delimiting) prefix-free machine.
Secondly, we proceed with a slightly adapted definition of a universal prefix machine:
Definition 2
([16], Def. 3.5.2). The following universal prefix-free oracle machine $U$ is fixed. We write $U(\sigma)$ for $U^{\emptyset}(\sigma)$.
The machine $U$ is minimal, which means that for any prefix-free machine $M$:
$$C_U(\sigma) \leq C_M(\sigma) + O(1)$$
Using that, we can define the prefix-free Kolmogorov complexity ($K$) of a string $\sigma$ as:
$$K(\sigma) = C_U(\sigma)$$

Note:

In [16], and more generally, the terms Turing machine and function can be used interchangeably. As formulated by Chaitin, algorithmic information theory is a blend of recursive function theory and program size [17].

4.3. Kraft Inequality

Theorem 2
(Kraft Inequality, [18] (p. 76)). There exists a prefix-free $D$-ary code with codeword lengths $l_1, l_2, \ldots, l_k$ if and only if
$$\sum_{i=1}^{k} D^{-l_i} \leq 1$$
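The following sketch (function names are ours) computes the Kraft sum for given codeword lengths and, when the bound holds, assigns binary prefix-free codewords by the usual canonical construction, illustrating that any lengths satisfying the inequality are realizable by an actual prefix code.

```python
from fractions import Fraction

def kraft_sum(lengths, D=2):
    """Kraft sum: sum_i D^{-l_i}. A prefix D-ary code with these lengths exists iff <= 1."""
    return sum(Fraction(1, D ** l) for l in lengths)

def canonical_prefix_code(lengths):
    """Assign binary prefix-free codewords for the given lengths (requires Kraft sum <= 1)."""
    assert kraft_sum(lengths) <= 1, "Kraft inequality violated"
    codewords, acc = [], Fraction(0)
    for l in sorted(lengths):
        # The codeword is the first l bits of the binary expansion of the running sum.
        codewords.append(format(int(acc * 2 ** l), "b").zfill(l))
        acc += Fraction(1, 2 ** l)
    return codewords

print(kraft_sum([1, 2, 3, 3]))              # 1: the bound is met with equality
print(canonical_prefix_code([1, 2, 3, 3]))  # ['0', '10', '110', '111']
```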

4.4. Shannon-McMillan-Breiman Theorem

The Shannon-McMillan-Breiman Theorem, also known as the weak asymptotic equipartition property (AEP), is stated as follows:
Definition 3
(Shannon-McMillan-Breiman, [9] 5.4). For an i.i.d. information source $\{X_k\}$ with generic random variable $X$ and generic distribution $p(x)$:
$$-\frac{1}{n} \log p(X_1, \ldots, X_n) \rightarrow H(X) \quad \text{in probability}$$
That is, the weak AEP states that as the number of random variables grows ($n \rightarrow \infty$), the normalized negative log-probability of the sequence approaches the entropy.
Additionally, if the source is stationary and $\{X_k\}$ is also ergodic, we have the following property, where $H$ denotes the entropy rate of the source [9] [5.52]:
$$-\frac{1}{n} \log p(X_1, \ldots, X_n) \rightarrow H \quad \text{a.s.}$$
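A minimal numerical sketch of the weak AEP for an i.i.d. Bernoulli source (parameter values are illustrative): for a long block, the normalized negative log-probability concentrates around the per-symbol entropy.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 0.3, 100_000                                   # Bernoulli(p) source, block length n

H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))      # entropy in bits per symbol

x = rng.random(n) < p                                 # one block X_1, ..., X_n
ones = int(x.sum())
neg_log_p = -(ones * np.log2(p) + (n - ones) * np.log2(1 - p))

print(f"H(X)             = {H:.4f} bits")
print(f"-(1/n) log2 p(X) = {neg_log_p / n:.4f} bits")  # close to H(X) for large n
```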

4.5. Artificial Independence

Artificial independence mirrors the copy lemma (3.2) in the Kolmogorov complexity setting [19] [p. 346].
Following our discussion of the copy lemma (3.2), we adapt and define artificial independence as follows:
Lemma 2
(Artificial Independence). Given strings $x, y, z, t$, there exists a string $u$ (a "copy" of $x$ given $z, t$) such that the following conditions hold with $O(1)$ precision for prefix complexity $K$:
$$K(u \mid x, y, z, t) = O(1) \quad \text{(Conditional Determinism)} \tag{7}$$
$$K(u, y, z, t) = K(x, y, z, t) + O(1) \quad \text{(Joint Complexity Preservation)} \tag{8}$$
$$K(u \mid y, z, t) = K(x \mid y, z, t) + O(1) \quad \text{(Conditional Complexity Preservation)} \tag{9}$$
$$K(u, z, t) = K(x, z, t) + O(1) \quad \text{(Markov Property)} \tag{10}$$
Where:
(7) corresponds to condition (C1) of the copy lemma: $H(U \mid X, Y, Z) = 0$.
(8) ensures that $u$ has the same joint complexity with $y, z, t$ as $x$ does.
(9) preserves the conditional complexity relationship.
(10) corresponds to the Markov chain structure $(X, Y) \rightarrow (Z, T) \rightarrow U$.

Why the $O(1)$ precision holds for K:

  • The prefix machine doesn't need explicit delimiters to specify when we have reached the program's end,
  • As stated in [20], the use of Kolmogorov complexity (prefix) allows for the logarithmic overhead to be dropped,
  • The program that computes $u$ from $(x, y, z, t)$ is of fixed length: it doesn't grow as the strings get longer; it is independent of string lengths (see the sketch below).
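As a toy illustration of the last point (entirely our own), the procedure producing the copy is itself a constant-size object: its description length does not change when the strings it is applied to grow.

```python
import inspect

# A fixed "copy program": prefix complexity charges a constant for this program
# on top of the inputs' descriptions, and that constant never grows with |x|.
def copy_program(x: str, y: str, z: str, t: str) -> str:
    return x  # u is produced from x by a constant-size procedure

print(len(inspect.getsource(copy_program)))  # the program's size: a constant
print(len(copy_program("0" * 10, "", "", "")),
      len(copy_program("0" * 10**6, "", "", "")))  # outputs grow, the program does not
```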

5. Proof 1 by Contradiction—Why Should Q2 Hold or Not?

In a first phase, we present a proof by contradiction for Q2 (1.1) that exploits the equivalence between entropy-based inequalities and their Kolmogorov counterparts, that is, the bridge from Shannon-type to non-Shannon-type inequalities.
To construct our proof by contradiction, we need something that can deal with the depth and intricacy exhibited by the non-Shannon-type inequalities. Indeed, there are infinitely many of these inequalities [21]. In our case, that something is the uncomputability of Kolmogorov complexity, which can be seen as a double-edged feature.

5.1. Leveraging K’s Uncomputability

Furthermore, our appeal to uncomputability is substantiated by the following known correspondences between the Shannon and Kolmogorov frameworks:
  • Shannon entropy ($H$) is computable: given a probability distribution, we can compute its entropy,
  • Kolmogorov complexity ($K$) is uncomputable: given an input string $x$, we cannot compute its shortest description,
  • We assume the existence of non-Shannon-type inequalities to be linked to $K$'s uncomputable nature. Whereas Shannon entropy captures statistical regularities, $K$ has more expressive power, i.e., it is able to unravel deeper, algorithmic patterns [22,23]. In a sense, it is like trying to detect codebooks versus recipes.
The critical threshold ($n = 4$) in the spectrum of non-Shannon-type inequalities is an embodiment of this richness [10]. It tells us that there are informational structures that escape detection by the statistical framework.
In the non-Shannon-type informational landscape, this is characterized as follows [10]:

Case: $n \leq 3$:

For $n = 2$, $\Gamma_2^* = \Gamma_2$: we observe an overlap between the entropic region and the Shannon region.
For $n = 3$, $\bar{\Gamma}_3^* = \Gamma_3$: the closure of the entropic region equals the Shannon region.
Statistical dependencies (or elemental inequalities) are sufficient to capture the description of the information landscape.

Case: $n \geq 4$:

$\bar{\Gamma}_4^* \subsetneq \Gamma_4$: not all points in the Shannon region are necessarily entropic.
This suggests that there are algorithmic dependencies beyond statistical ones and that statistical pattern mining falls short in extracting them.
Based on these elements, we formulate the following proposition:
Proposition 2
(Entropy vs. Expected Complexity). For any computable distribution P:
$$H(P) \leq \mathbb{E}_{x \sim P}[K(x)] + O(1)$$
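A hedged numerical illustration (the distribution and the use of Shannon code lengths as a computable stand-in for $K$ are our assumptions): since the lengths $\lceil -\log_2 P(x) \rceil$ satisfy the Kraft inequality, their expectation cannot fall below $H(P)$, and it exceeds it by less than one bit, consistent with a constant-order gap.

```python
import math

P = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}  # an arbitrary computable distribution

H = -sum(p * math.log2(p) for p in P.values())

# Shannon code lengths: computable, Kraft-satisfying stand-ins for K(x).
lengths = {x: math.ceil(-math.log2(p)) for x, p in P.items()}
kraft = sum(2 ** -l for l in lengths.values())
expected_len = sum(P[x] * lengths[x] for x in P)

print(f"H(P) = {H:.3f} bits, E[l] = {expected_len:.3f} bits, Kraft sum = {kraft}")
# H(P) <= E[l] <= H(P) + 1: the gap is of constant order, as in the proposition.
```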

5.2. Proof 1—Construction By Contradiction

We now proceed with our first proof concerning Q2 (1.1).
Theorem 3.
All non-Shannon-type inequalities hold with $O(1)$ precision for prefix complexity.
Proof. 
Assume for contradiction that there exists a non-Shannon-type inequality that does not hold with $O(1)$ precision for prefix complexity.
1. By definition of non-Shannon-type inequalities, this inequality describes constraints that cannot be derived from the elemental inequalities.
2. The existence of such inequalities implies that $\bar{\Gamma}_n^* \subsetneq \Gamma_n$ for $n \geq 4$; that is, there are information structures beyond what statistical dependencies can capture.
3. These structures are what $K$ captures but $H$ misses; they represent algorithmic structures, not just statistical regularities.
4. The uncomputability of $K$ means that these structures cannot be fully described by any finite set of statistical constraints.
5. If the inequality held only with $O(\log n)$ precision, it would imply that the structure could be "approximated" by encoding it in a way that depends on the string length $n$.
6. However, genuine algorithmic structure, as captured by $K$, must be independent of string length; it is an intrinsic property of the information itself.
7. The $O(1)$ requirement, achieved through the prefix machinery, ensures that the structure is genuinely algorithmic, not just a statistical artifact that scales with $n$.
8. Therefore, any inequality describing these uncomputable structures must hold with $O(1)$ precision for prefix complexity.
9. We arrive at a contradiction; our initial assumption must be false. □
Corollary 1.
All non-Shannon-type inequalities hold with $O(1)$ precision for prefix complexity.

6. Proof 2—A Coding Theory Perspective

Hereafter, we propose a second, more technical proof of Theorem 3 than the previous one (5.2).
It is articulated around the interplay between:
- Kolmogorov complexity (4.1),
- the copy lemma (3.2),
- the Kraft inequality (4.3),
- the Shannon-McMillan-Breiman theorem (4.4), or weak AEP.
As a pivot, we use the first non-Shannon-type inequality to be studied: ZY98 inequality [24].
Our proof requires the codes to be expressed over the binary alphabet, i.e., codewords in $\Sigma = \{0, 1\}^*$. This bounds the information measures as follows:
- $H(X) \in [0, 1]$,
- $H(X \mid Y) \in [0, 1]$,
- $I(X; Y) \in [0, 1]$.
For $n = 4$, the entropy space remains 15-dimensional [25], but the vector coordinates are bounded in $[0, 1]$.
Theorem 4
(ZY98, [14]). For any four random variables $X_1, X_2, X_3$, and $X_4$,
$$2I(X_3; X_4) \leq I(X_1; X_2) + I(X_1; X_3, X_4) + 3I(X_3; X_4 \mid X_1) + I(X_3; X_4 \mid X_2)$$
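To make the statement concrete, the sketch below (our own check; the axis convention, in which axes 0..3 correspond to $X_1, \ldots, X_4$, and the sampling scheme are assumptions) draws random joint distributions of four binary variables and verifies both sides of ZY98 numerically.

```python
import numpy as np

rng = np.random.default_rng(2025)

def H(p, keep):
    """Entropy (bits) of the marginal of the joint pmf p on the axes in `keep`."""
    drop = tuple(i for i in range(p.ndim) if i not in keep)
    m = p.sum(axis=drop) if drop else p
    m = m[m > 0]
    return float(-(m * np.log2(m)).sum())

def I(p, A, B, C=()):
    """Conditional mutual information I(X_A ; X_B | X_C) from the joint pmf p."""
    A, B, C = set(A), set(B), set(C)
    return H(p, A | C) + H(p, B | C) - H(p, A | B | C) - (H(p, C) if C else 0.0)

for _ in range(1000):
    p = rng.random((2, 2, 2, 2))
    p /= p.sum()
    lhs = 2 * I(p, {2}, {3})                                    # 2 I(X3; X4)
    rhs = (I(p, {0}, {1}) + I(p, {0}, {2, 3})
           + 3 * I(p, {2}, {3}, {0}) + I(p, {2}, {3}, {1}))
    assert lhs <= rhs + 1e-9, (lhs, rhs)

print("ZY98 held on all sampled distributions")
```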

Proof 2—Coding-Theoretic Approach

In this section, we focus on the Kolmogorov complexity translation of the ZY98 inequality.

Prefix ∧ Copy Lemma ∧ Kraft

With prefix-free codes, we have the Kraft inequality; that is, instantaneous codes satisfy the Kraft inequality (whether binary or $D$-ary [18]). In combination with the copy lemma (3.2) construction, the additional coding overhead is bounded by a constant. Moreover, we need a guarantee that typical sequences behave this way, which is provided by the Shannon-McMillan-Breiman theorem.
Proof:
1. Assumption of binary variables: let $X_1, X_2, X_3, X_4$ be binary random variables.
2. Typical set construction: by the weak AEP, for large $N$ we have:
- most sequences of length $N$ fall in the typical set $A_\epsilon^N$,
- for $(X_1^N, X_2^N, X_3^N, X_4^N) \in A_\epsilon^N$:
$$2^{-N(H(X_1, X_2, X_3, X_4) + \epsilon)} \leq p(x_1^N, x_2^N, x_3^N, x_4^N) \leq 2^{-N(H(X_1, X_2, X_3, X_4) - \epsilon)}$$
3. Prefix-free code construction: we define a prefix code where:
- the codeword length for a sequence $x^N$ is approximately $N \cdot H(X) + O(1)$,
- by Kraft's inequality: $\sum 2^{-l(x^N)} \leq 1$.
4. Copy lemma as a coding scheme: the copy lemma corresponds to a specific coding strategy:
- encode $X_1^N$ using approximately $N \cdot H(X_1)$ bits,
- encode the copy $U^N$ using the same codebook as $X_1^N$ but with a fixed overhead,
- the Kraft inequality ensures this overhead is $O(1)$.
5. Information inequalities as coding constraints: the ZY98 inequality can be seen as a constraint on the achievable coding rates, $2R_{34} \leq R_{12} + R_{1,34} + 3R_{34|1} + R_{34|2}$, where each $R$ denotes the coding rate corresponding to the respective information quantity.
6. Bounded error: since all information measures are bounded in $[0, 1]$, the difference between the left and right sides of ZY98 is bounded; this ensures that the use of the copy lemma introduces only an $O(1)$ overhead.
7. Prefix complexity precision: the coding overhead of the copy lemma construction is $O(1)$ due to Kraft's inequality.
8. Conclusion: the ZY98 inequality holds with $O(1)$ precision. □
In order to obtain ZY98’s Kolmogorov counterpart, we apply the same modus operandi used in [20].
Equipped with the prefix-machine (4.2) properties, we obtain $O(1)$ precision, yielding:
Theorem 5
(ZY98 inequality under K).
$$2K(X_3; X_4) \leq K(X_1; X_2) + K(X_1; X_3, X_4) + 3K(X_3; X_4 \mid X_1) + K(X_3; X_4 \mid X_2) + O(1)$$
Combined with our proof of the ZY98 inequality, it follows that all known non-Shannon-type inequalities map to their Kolmogorov complexity counterparts, under prefix properties, with $O(1)$ precision. This is because all known non-Shannon-type inequalities were proven using a finite number of applications of the copy lemma. Indeed, a single application of the copy lemma yields $O(1)$ precision; similarly, $k$ applications yield $k \cdot O(1) = O(1)$ precision for fixed $k$. Only if the number of applications grew with the string length would the precision degrade.

7. Discussion and Conclusion

In this work, we were interested in Q2 of [7]. In substance, this question asks whether the non-Shannon-type inequalities hold with $O(1)$ precision for Kolmogorov complexity.
In the process of answering this question, we tried to shed some light on the copy lemma construction (artificial independence in its Kolmogorov complexity counterpart), which is the main tool used for proving non-Shannon-type inequalities, and to argue that it is more than a proof technique or mathematical trick.
As for the results, we answer Q2 in the positive by proposing two proofs. The first proof is constructed by contradiction and exploits the uncomputable nature of Kolmogorov complexity: we assume that there is an inequality that does not hold with $O(1)$ precision and, after logical derivation, arrive at a contradiction, implying that no such inequality exists.
Furthermore, we proposed a second proof rooted in coding theory that connects prefix complexity, the copy lemma, the Kraft inequality and the weak AEP. The key insight is to consider the copy lemma as a coding strategy with a bounded overhead (Kraft inequality). In the second proof, we assume the codewords are encoded over a binary alphabet; however, we believe this requirement can be relaxed.

Author Contributions

The author conducted the study and wrote the manuscript. The author has read and agreed to the published version of the manuscript.

Acknowledgments

The author conducted this research as an independent scholarly project outside the scope of their primary research duties at King Abdullah University of Science and Technology.

Conflicts of Interest

The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
IT Information Theory
AIT Algorithmic Information Theory
C Plain Kolmogorov Complexity
K Kolmogorov Complexity (prefix)
AEP Asymptotic Equipartition Property

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 623–656. [Google Scholar] [CrossRef]
  2. Solomonoff, R.J. A Formal Theory of Inductive Inference. Part I. Information and Control 1964, 7, 1–22. [Google Scholar] [CrossRef]
  3. Solomonoff, R.J. A Formal Theory of Inductive Inference. Part II. Information and Control 1964, 7, 224–254. [Google Scholar] [CrossRef]
  4. Kolmogorov, A.N. Three approaches to the quantitative definition of information. Problems in Information Transmission 1965, 1, 1–7. [Google Scholar]
  5. Chaitin, G.J. On the Length of Programs for Computing Finite Binary Sequences. J. ACM 1966, 13, 547–569. [Google Scholar] [CrossRef]
  6. Yeung, R.W. Facets of entropy. Commun. Inf. Syst. 2015, 15, 87–117. [Google Scholar] [CrossRef]
  7. Romashchenko, A.; Shen, A.; Zimand, M. 27 Open Problems in Kolmogorov Complexity. arXiv 2022, arXiv:2203.15109. [Google Scholar] [CrossRef]
  8. Hammer, D.; Romashchenko, A.; Shen, A.; Vereshchagin, N. Inequalities for Shannon Entropy and Kolmogorov Complexity. Journal of Computer and System Sciences 2000, 60, 442–464. [Google Scholar] [CrossRef]
  9. Yeung, R.W. Information theory and network coding. In Information Technology: Transmission, Processing and Storage, 2008 ed.; Springer: New York, NY, 2008. [Google Scholar]
  10. Topal, T. Information Theory Laws: A Recollection. Preprints 2025. [Google Scholar] [CrossRef]
  11. Guo, L.; Yeung, R.W.; Gao, X.S. Proving Information Inequalities and Identities With Symbolic Computation. IEEE Trans. Inf. Theor. 2023, 69, 4799–4811. [Google Scholar] [CrossRef]
  12. Yeung, R.W. A framework for linear information inequalities. IEEE Trans. Inf. Theory 1997, 43, 1924–1934. [Google Scholar] [CrossRef]
  13. Zhang, Z.; Yeung, R. A non-Shannon-type conditional inequality of information quantities. IEEE Transactions on Information Theory 1997, 43, 1982–1986. [Google Scholar] [CrossRef]
  14. Yeung, R.W.; Li, C.T. Machine-Proving of Entropy Inequalities. IEEE BITS the Information Theory Magazine 2021, 1, 12–22. [Google Scholar] [CrossRef]
  15. Dougherty, R.; Freiling, C.F.; Zeger, K. Six New Non-Shannon Information Inequalities. In Proceedings of the Proceedings 2006 IEEE International Symposium on Information Theory, ISIT 2006, The Westin Seattle, Seattle, Washington, USA, July 9-14, 2006; IEEE; 2006, pp. 233–236. [Google Scholar] [CrossRef]
  16. Downey, R.G.; Hirschfeldt, D.R. Algorithmic Randomness and Complexity. In Theory and Applications of Computability; Springer, 2010. [Google Scholar] [CrossRef]
  17. Chaitin, G.J. How to Run Algorithmic Information Theory on a Computer. arXiv 1995. [Google Scholar] [CrossRef]
  18. Höst, Stefan. Information and Communication Theory; IEEE Press, 2019. [Google Scholar] [CrossRef]
  19. Shen, A.; Uspensky, V.A.; Vereshchagin, N. Kolmogorov Complexity and Algorithmic Randomness. In Mathematical Surveys and Monographs; American Mathematical Society, 2017; Vol. 220. [Google Scholar]
  20. Hammer, D.; Shen, A. A Strange Application of Kolmogorov Complexity. Theory Comput. Syst. 1998, 31, 1–4. [Google Scholar] [CrossRef]
  21. Matús, F. Infinitely Many Information Inequalities. In Proceedings of the IEEE International Symposium on Information Theory, ISIT 2007, Nice, France, June 24-29, 2007; IEEE; 2007, pp. 41–44. [Google Scholar] [CrossRef]
  22. Zenil, H.; Hernández-Orozco, S.; Kiani, N.A.; Soler-Toscano, F.; Rueda-Toicen, A.; Tegnér, J. A Decomposition Method for Global Evaluation of Shannon Entropy and Local Estimations of Algorithmic Complexity. Entropy 2018, 20, 605. [Google Scholar] [CrossRef] [PubMed]
  23. Hernández-Espinosa, A.; Ozelim, L.; Abrahão, F.S.; Zenil, H. SuperARC: A Test for General and Super Intelligence Based on First Principles of Recursion Theory and Algorithmic Probability. arXiv 2025, arXiv:2503.16743. [Google Scholar] [CrossRef]
  24. Zhang, Z.; Yeung, R. On characterization of entropy function via information inequalities. IEEE Transactions on Information Theory 1998, 44, 1440–1452. [Google Scholar] [CrossRef]
  25. Tiwari, H.; Thakor, S. On Characterization of Entropic Vectors at the Boundary of Almost Entropic Cones. In Proceedings of the 2019 IEEE Information Theory Workshop, ITW 2019, Visby, Sweden, August 25-28, 2019; IEEE; 2019, pp. 1–5. [Google Scholar] [CrossRef]
1. Also called algorithmic complexity or Solomonoff-Kolmogorov-Chaitin complexity.
2. It is commonly assumed to be a universal one.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.