Appendix 1 A Parable About a Physicist, Seeds, and Total Darkness
Appendix 1.1 Problem Statement
Imagine that you are sitting in a room in absolute darkness. Your eyes see nothing, even after long adaptation. At hand is a large bowl of seeds or nuts. You are well-fed and unhurried, and your memory works normally. You are sitting on an office chair that can rotate. You have two ears, and they work normally.
The only thing you can do is throw seeds somewhere into the darkness and listen to the response: a thud against a wall, a clang against a metal plate, or complete silence (the seed flew out an open window).
Task: understand the structure of the surrounding space using only these data—what you threw and what you heard in response.
What you have:
Seeds—unlimited supply. This is your input signal $u(t)$.
Ability to throw—you can control where and with what force to throw. This is the ability to influence the system.
Two ears—you hear in stereo. Left and right ears receive the signal with different delays and amplitudes. This is two-channel observation.
Rotating chair—you can turn in place, changing orientation relative to sound sources. This is controlled modulation of the observation channel.
Memory—you remember what you threw a second ago, two seconds ago. The system does not reset after each throw. There are temporal correlations.
Pulse—you can count your own pulse as a rough internal rhythm. This is analogous to discrete time (pulse beats).
What you do NOT have:
Vision—no direct access to "space geometry".
Coordinate grid—no predefined axes. Where is up, where is down, where is left, where is right—unknown.
Uniform clock—pulse exists, but it is uneven and not global (this is your internal rhythm, not "world time").
Notion of "force"—you simply throw seeds somehow, without a theory of "gravity" or "inertia".
This is the basic situation of system identification theory: there is a channel (room), there is input (seeds), there is output (sound), there is no a priori ontology (space, time, force).
Appendix 1.2. Experiment 1: Not Throwing Seeds—Learning Nothing
I decided to first just sit quietly and listen. Maybe the room itself will "say" something?
Result: silence. No information.
Formal interpretation: This is Newton's first law. When there is no input excitation ($u(t) = 0$, not throwing seeds), the output signal contains no information about the system structure. The Fisher information matrix is degenerate:
$$\mathcal{I}(\theta) = 0.$$
The data are uninformative (Definition 8.2 from Section 2).
Conclusion: Passive observation is useless. To obtain information, active excitation of the system is necessary.
Appendix 1.3. Experiment 2: Throwing Uniformly—Learning Little
I started throwing seeds strictly rhythmically: once per second (by pulse), in the same direction, with the same force. I hear a regular thud against the wall—thud, thud, thud, thud...
What can be learned? Something exists in that direction at a certain distance (from sound delay). But nothing more.
Formal interpretation: The input signal contains only one frequency $\omega_0$. This is insufficient persistent excitation. According to Lemma 13.1, the Toeplitz matrix is degenerate for order $n > 2$. The system response can be determined only at the frequency $\omega_0$, but not the entire transfer function $H(\omega)$.
Physics analogy: If a force is applied at only one frequency to a "mass on spring" system ($m\ddot{x} + kx = F$), one can measure only the resonant frequency $\omega_0 = \sqrt{k/m}$, but not the mass $m$ and stiffness $k$ separately.
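This compensation can be sketched numerically. A minimal sketch (all numbers are illustrative assumptions, not from the text): two different mass-stiffness pairs produce identical steady-state amplitudes at one driving frequency, and a second frequency separates them.

```python
import numpy as np

def steady_amplitude(m, k, omega, F=1.0):
    # Signed steady-state amplitude of m*x'' + k*x = F*sin(omega*t)
    # (undamped, away from resonance): X = F / (k - m*omega**2)
    return F / (k - m * omega**2)

omega1 = 2.0
# Two different systems tuned so that k - m*omega1**2 coincides (assumed values):
m_a, k_a = 1.0, 5.0          # 5 - 1*4 = 1
m_b, k_b = 2.0, 9.0          # 9 - 2*4 = 1
same = steady_amplitude(m_a, k_a, omega1), steady_amplitude(m_b, k_b, omega1)
# At a single frequency the two systems are indistinguishable:
assert np.isclose(same[0], same[1])

# A second frequency breaks the degeneracy:
omega2 = 1.0
diff = steady_amplitude(m_a, k_a, omega2), steady_amplitude(m_b, k_b, omega2)
print(same, diff)   # equal pair, then a clearly different pair
```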
Conclusion: Variety in excitation is needed—throw seeds at different intervals, in different directions, with different force.
Appendix 1.4. Experiment 3: Role of Two Ears
I threw a seed forward and heard a response. Interesting: the left ear heard the sound slightly earlier than the right, and louder. This means the reflection came from left-front, not right-front.
What does the second ear provide?
Appendix 1.4.1. Interaural Time Difference
If a sound source is located at angle $\theta$ relative to the head's axis of symmetry, the interaural delay is
$$\Delta t = \frac{d}{c}\,\sin\theta,$$
where $d \approx 20$ cm is the distance between the ears and $c \approx 343$ m/s is the speed of sound.
For $\theta = 90°$ (source directly to the left) the delay is $\Delta t \approx 0.6$ ms. This is audibly distinguishable.
Appendix 1.4.2. Phase Difference
For a sinusoidal signal of frequency $f$, the interaural phase difference is
$$\Delta\phi = 2\pi f\,\Delta t.$$
At $f = 1000$ Hz and $\Delta t \approx 0.6$ ms we get $\Delta\phi \approx 3.7$ rad—a well-distinguishable phase difference.
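A quick back-of-envelope check of both formulas, with assumed values $d = 20$ cm and $c = 343$ m/s:

```python
import math

d = 0.20      # assumed inter-ear distance, m
c = 343.0     # speed of sound in air, m/s

def itd(theta_deg):
    # Interaural time difference for a source at angle theta
    # from the sagittal plane: dt = (d / c) * sin(theta)
    return d / c * math.sin(math.radians(theta_deg))

dt = itd(90.0)                    # source directly to the left
dphi = 2 * math.pi * 1000.0 * dt  # phase difference at f = 1000 Hz

print(f"ITD = {dt*1e3:.2f} ms, phase difference = {dphi:.2f} rad")
```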
Formal interpretation: Two ears provide two-channel observation:
$$y_L(t) = (h_L * u)(t) + \xi_L(t), \qquad y_R(t) = (h_R * u)(t) + \xi_R(t),$$
where $h_L, h_R$ are the impulse responses of the left and right channels, depending on the angle $\theta$ and the distance $r$ to the source.
The Fisher information matrix for the angle $\theta$ is additive over channels:
$$\mathcal{I}_{L+R}(\theta) = \mathcal{I}_L(\theta) + \mathcal{I}_R(\theta).$$
The total information is greater than from one ear: $\mathcal{I}_{L+R}(\theta) > \mathcal{I}_L(\theta)$.
Appendix 1.4.3. Experiment: Plugging One Ear
Now I plug my right ear with a finger and repeat the experiment. I throw a seed to the left.
Result: I hear a response, but cannot precisely say where it came from—left-front or left-back? Mirror configurations became indistinguishable.
Formally: The system lost chirality (the ability to distinguish left from right). The information matrix over the angular parameters has a zero eigenvalue for reflections relative to the sagittal plane.
The effective observation dimension dropped: $\sim 2\mathrm{D} \to \sim 1{-}1.5\mathrm{D}$ (non-integer due to partial information from head diffraction).
Conclusion: The second ear is not simply a "backup channel" but a symmetry breaking that makes angular parameters identifiable.
Appendix 1.5. Experiment 4: Rotation on Chair
Now a different experiment. I sit motionless and throw a seed strictly forward. I hear a thud. But if I don’t move, I cannot distinguish: is the wall directly in front of me, or am I slightly turned left/right?
Solution: I start slowly rotating on the office chair (one full rotation per minute) and continue throwing seeds in a fixed direction relative to the chair.
Appendix 1.5.1. What Happens During Rotation
As I rotate, the angle between the throw direction (in the chair frame) and the reflecting surface (in the room frame) changes: $\alpha(t) = \alpha_0 - \Omega t$, where $\Omega$ is the angular velocity of rotation.
The observed signal becomes modulated:
$$y(t) = \big(h(\alpha(t)) * u\big)(t) + \xi(t).$$
If $h$ has angular dependence (reflection directionality), then the signal contains information about $\alpha_0$.
Example: If the wall is located at angle $\alpha_0$ relative to the initial throw direction, then as I rotate I will hear:
At $\Omega t = \alpha_0$: maximum response (throw perpendicular to the wall)
At $\Omega t = \alpha_0 \pm 45°$: weaker response (throw at an angle)
At $\Omega t = \alpha_0 \pm 90°$: almost no response (throw almost parallel to the wall)
This modulation allows determining $\alpha_0$—the angular position of the wall.
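The localization-by-rotation idea can be sketched with a toy directivity model; the cosine-squared response profile and the 37° wall angle are illustrative assumptions:

```python
import numpy as np

alpha0 = np.deg2rad(37.0)   # true (hidden) angular position of the wall, assumed

def response(throw_angle):
    # Toy directivity: strongest when the throw is perpendicular
    # to the wall, fading to silence when nearly parallel.
    return max(0.0, np.cos(throw_angle - alpha0)) ** 2

# Rotate the chair through a full turn, throwing "forward" each time:
angles = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)
amps = np.array([response(a) for a in angles])

alpha_est = angles[np.argmax(amps)]   # angle of the loudest echo
print(np.rad2deg(alpha_est))          # close to 37 degrees
```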
Appendix 1.5.2. Formal Interpretation
Rotation adds controlled phase modulation to the observation channel. Spectral decomposition:
$$y(t) = \sum_{n=-\infty}^{\infty} c_n\, e^{i n \Omega t},$$
where the Fourier coefficients $c_n$ depend on the angular position of the reflector.
The Fisher information over $\alpha_0$ collects contributions from all harmonics:
$$\mathcal{I}(\alpha_0) \propto \sum_{n} n^2\, |c_n|^2.$$
Without rotation ($\Omega = 0$), all harmonics with $n \neq 0$ vanish: $c_n = 0$ for $n \neq 0$. The information over the angle is $\mathcal{I}(\alpha_0) = 0$.
Critical observation: Rotation transforms angular parameters from latent (hidden, unidentifiable) to observable (identifiable).
Appendix 1.5.3. Experiment: Not Rotating
Now I sit motionless and throw seeds in all directions (turning my arm but not my body). I hear responses with different delays.
What can I learn? Distribution of distances: at what radii reflective surfaces exist. But all objects at the same distance are indistinguishable by angle.
Formally: The system becomes radially symmetric. Angular modes degenerate. The Hankel matrix loses rank: a transition from a rank$(\mathcal{H})$ describing the 2D scene to a rank$(\mathcal{H})$ describing only a 1D radial profile.
Intuition: Without rotation, the world for me is a set of concentric circles (acoustic rings). I know distances but not angles.
Appendix 1.6. Experiment 5: Most Critical—One Ear + No Rotation
Now I combine both limitations: plug right ear AND sit motionless.
I throw seeds. I hear responses.
Result: I know only:
When the response arrived (delay $\tau$)
How loud the response was (amplitude)
Statistics of hits (how often I hear responses to random throws)
But I do not know:
Where the sound came from (direction)
How many objects are at the same distance (they all merge into one "ring")
Object shapes (only radial profiles are distinguishable)
Formally: The Fisher information matrix over the angular parameters $\theta$ is degenerate:
$$\mathcal{I}(\theta) = 0.$$
The Hankel matrix has rank 1: rank$(\mathcal{H}) = 1$. The effective observation dimension dropped below one (sub-one-dimensional identifiability).
Intuition: The world turned into a one-dimensional "bag of reflections"—a set of delays without angular structure. It’s as if instead of a 2D map, only a 1D histogram of distances remained.
Appendix 1.7. Experiment 6: Throwing in Different Directions—Structure Emerges
I return to full configuration: two ears, rotating chair. I throw seeds in a fan: forward, left, right, up, down. I hear different responses:
Forward—dull thud (soft wall, far away)
Left—ringing clang (metal plate, close)
Right—nothing (the window is open)
Up—thud with delay (the ceiling is high)
Down—almost instant thud (the floor is close)
Now structure emerges. I begin building a mental map: "in this direction something ringing and close, in that—soft and far".
Formal interpretation: I identify directional dependence of response. Thanks to two ears (phase difference) and rotation (modulation) I can distinguish angular modes.
Key observation: "Directions" (forward, left, right) are not coordinates in a predefined space. These are simply indices with which I number distinguishable response modes. If there were ringing in one direction and also ringing in the opposite (indistinguishable), I would not distinguish these directions.
Connection to coordinates: Coordinates in this interpretation are labels for distinguishable directions (modes), not points in absolute space.
Appendix 1.8. Experiment 7: Echolocator—Complete Picture
I decide to switch to a radical method: instead of seeds I use an echolocator (imagine I have one at hand). It sends a short broadband pulse—a click containing all frequencies at once.
Result: I hear a complex echo—a mixture of reflections from all objects in the room with different delays. By turning the echolocator in different directions and rotating on the chair, after several pulses I "see" the entire room: where the walls are, where the plate is, where the window is, where the ceiling is.
Formal interpretation: Echolocator sends white noise—signal with uniform spectrum at all frequencies. This is maximally persistent excitation (p.e. of order ∞). Such signal excites all system modes simultaneously.
From the response, the complete impulse response $h(\theta, r, \tau)$ can be recovered—how the system responds to a unit impulse in direction $\theta$ at distance $r$ with delay $\tau$. Knowing $h$, the transfer function can be recovered and all system parameters identified (object positions, their acoustic properties).
Physical analogy: This is as if, in mechanics, one applied a $\delta$-function force to the system (an instantaneous impact) and observed the free oscillations. From the oscillation frequencies all parameters are determined.
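The recovery of an impulse response from broadband excitation can be sketched as follows; the two-echo "room" is an assumed toy example, and cross-correlation with white noise stands in for full deconvolution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "room": a short impulse response with two echoes (assumed toy values)
h_true = np.zeros(64)
h_true[5], h_true[20] = 1.0, 0.4     # a near wall and a farther plate

# Broadband excitation: white noise plays the role of the echolocator's click
u = rng.standard_normal(4096)
y = np.convolve(u, h_true)[:len(u)]  # what we hear

# For white noise, the cross-correlation E[y(t+tau) u(t)] ~ h(tau)
h_est = np.array([np.dot(y[k:], u[:len(u) - k]) for k in range(64)]) / len(u)

print(np.argmax(h_est))  # 5: the strongest echo delay is recovered
```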
Conclusion: For complete system identification, maximally broadband excitation AND breaking of all symmetries (two ears + rotation) are needed.
Appendix 1.9. Identifiability Conditions Table
| Conditions | Eff. dimension | What is lost |
| --- | --- | --- |
| Rotation + 2 ears | ∼2D (non-integer) | — |
| No rotation | ∼1D | angles |
| One ear | ∼1–1.5D | chirality |
| No rotation + 1 ear | ∼1D → <1D | almost everything |
Formal interpretation via Hankel rank:
Rotation + 2 ears: rank$(\mathcal{H}) \geq 2$, system fully identifiable
No rotation: rank$(\mathcal{H})$ drops, angular modes degenerate
One ear: loss of the antisymmetric part of the observation operator
No rotation + one ear: rank$(\mathcal{H}) = 1$, system reduces to a radial profile
Appendix 1.10. Where Do "Coordinates" and "Space" Come From?
After a series of experiments with echolocator, two ears and rotation on chair, I accumulated data: a map of reflection delays depending on emission direction. For convenience I decide to parameterize these directions.
For example:
Direction 1 (forward) → call it "axis x"
Direction 2 (left) → call it "axis y"
Direction 3 (up) → call it "axis z"
Reflection delays can be recalculated into "distances" (multiplying by sound speed c). This yields a triplet of numbers for each object.
Critical moment: These "coordinates" are not ontological entities existing before experiment. This is a construction—a convenient parameterization of distinguishable response modes. If I had chosen different directions for axes, I would get different coordinates (different mode numbering).
Formal interpretation (Section 6): Coordinates are indices in the spectral decomposition of the state:
$$x(t) = \sum_k c_k(t)\, \psi_k,$$
where $\psi_k$ are eigenfunctions (modes) and $k$ is the index (coordinate).
A coordinate transformation is simply a renumbering of modes. There is no "space geometry" as an a priori entity.
Appendix 1.11. Where Does "Mass" Come From?
Suppose there is a massive object in the room (say, a piano). I throw seeds at it and listen to the response. The sound is dull, with a long decay—the piano slowly damps its oscillations.
I try to build a model: how much energy needs to be invested (number of seeds) to hear a response of a certain amplitude?
It turns out that for heavy objects the response is weak—many seeds are needed to "shake" the piano. For light objects (the plate) the response is strong—one seed suffices.
Formal interpretation: I try to identify the parameter $m$ (mass) in the model $m\ddot{x} = F$. The identification accuracy is determined by the Fisher information matrix:
$$\mathcal{I}(m) = \frac{1}{\sigma^2} \int_0^T \left( \frac{\partial y(t)}{\partial m} \right)^2 dt.$$
The larger the mass, the harder it is to determine: the response is weaker and the information is less. Mass is not "quantity of matter" (an ontological property) but a conditioning parameter of the identification problem.
Physical intuition: Try determining the mass of a tanker by pushing it with your hand and measuring the displacement with a ruler. The problem is ill-conditioned—the slightest measurement error gives a huge error in the mass estimate. For an accurate estimate, either large forces (a powerful tugboat instead of a hand) or precision measurements (a laser interferometer instead of a ruler) are needed.
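A minimal Monte Carlo sketch of this conditioning argument, with assumed push, duration and ruler noise: the spread of the mass estimate grows rapidly with the mass itself.

```python
import numpy as np

rng = np.random.default_rng(1)
F, t, sigma = 1.0, 1.0, 1e-3   # push, duration, ruler noise (assumed units)

def mass_estimate_std(m, trials=20000):
    x_true = F * t**2 / (2 * m)          # displacement under constant force
    x_meas = x_true + sigma * rng.standard_normal(trials)
    m_hat = F * t**2 / (2 * x_meas)      # invert the model x = F t^2 / (2m)
    return m_hat.std()

light, heavy = mass_estimate_std(1.0), mass_estimate_std(10.0)
print(light, heavy)   # the error grows roughly as m**2
assert heavy > 10 * light
```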
Appendix 1.12. Where Does "Time" Come From?
In darkness there are no clocks uniformly ticking "seconds". There is only my pulse—rough and uneven rhythm. How then to speak of "time"?
Answer: time is an event ordering index. I remember that I threw a seed, then heard response, then threw another. This is sequence (indices).
But the real information is contained not in the sequence itself but in correlations between events:
$$R(\tau) = \mathbb{E}\big[\,y(t)\,y(t+\tau)\,\big].$$
If the response at moment $t$ correlates with the response at moment $t+\tau$ (for example, an echo from a distant wall arrives with delay $\tau$), this means the system possesses memory—it preserves traces of past states.
Frequency domain is primary: Instead of analyzing the sequence $y(t)$, it is more convenient to pass to the spectrum—a decomposition by frequencies (Khinchin's theorem):
$$S(\omega) = \int_{-\infty}^{\infty} R(\tau)\, e^{-i\omega\tau}\, d\tau.$$
The spectral density $S(\omega)$ contains complete information about the statistical properties of the signal. "Time" appears as a shift parameter in the phase $e^{i\omega t}$, not as a fundamental entity.
In simple terms: Instead of the question "how does the system evolve in time $t$?" I pose the question "what is the frequency response $H(\omega)$ of the system?" The former is a construction derived from the latter.
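The Khinchin relation can be verified numerically for a discrete signal: the FFT of the circular autocorrelation reproduces the periodogram exactly. The 50-cycle test tone and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1024
y = np.sin(2 * np.pi * 50 * np.arange(N) / N) + 0.1 * rng.standard_normal(N)

# Power spectral density (periodogram)
S = np.abs(np.fft.fft(y)) ** 2 / N

# Circular autocorrelation R(tau) = (1/N) sum_t y(t) y(t+tau)
R = np.array([np.dot(y, np.roll(y, -tau)) for tau in range(N)]) / N

# Khinchin's theorem: the spectrum is the Fourier transform of R
S_from_R = np.fft.fft(R).real
assert np.allclose(S, S_from_R)

print(int(np.argmax(S[:N // 2])))   # 50: the dominant frequency bin
```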
Appendix 1.13. Three Newton’s Laws in Darkness
Now I reformulate three Newton’s laws in terms of my seed experiment:
Appendix 1.13.1. First Law: Not Throwing Seeds—Learning Nothing
If $u(t) = 0$ (not throwing seeds), there is no response and zero information. It is impossible to distinguish whether I am sitting in a room with a piano or with a plate—the data are uninformative.
Formally: rank$(\mathcal{I}) = 0$, the data are uninformative (Definition 8.2).
Physical analog: In the absence of external force ($F = 0$) it is impossible to determine a body's mass—it does not affect the trajectory (uniform motion).
Appendix 1.13.2. Second Law: Need Minimum Two Types of Throws
To distinguish a piano (heavy) from a plate (light), seeds must be thrown in at least two different ways—for example, with different force or at different frequencies (rhythmically and chaotically).
Why second order? If the system were first order, the plate would teleport instantly from the seed impact—force would directly set velocity, with no intermediate stage. Second order means: the seed first changes the acceleration (how fast the plate gains velocity), then the plate gains velocity, then the position changes. Two integration stages: $F \to \ddot{x} \to \dot{x} \to x$. This is the minimal structure with memory (two states: where the plate is and how fast it moves), ensuring strict causality—there is a delay between the throw and the displacement.
Formally: For identifying the second-order model $m\ddot{x} = F$, persistent excitation of order $\geq 2$ is necessary (Theorem 13.1). The system's Hankel rank is rank$(\mathcal{H}) = 2$—the minimal state-space dimension for nontrivial dynamics.
Physical analog: The minimal nontrivial identifiable model of a mechanical system has order 2 (coordinate + velocity). Mass $m$ is a conditioning parameter: the larger $m$, the harder the identification.
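The rank-2 claim can be checked directly on a discrete double integrator, an assumed minimal realization of $m\ddot{x} = F$ with unit mass and unit time step: the Hankel matrix built from its impulse response has rank exactly 2.

```python
import numpy as np

# Discrete double integrator: force -> velocity -> position
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Markov parameters (impulse response) h_k = C A^k B
h = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(12)]

# Hankel matrix built from the impulse response
H = np.array([[h[i + j] for j in range(6)] for i in range(6)])

print(np.linalg.matrix_rank(H))  # 2: the minimal state dimension (x, v)
```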
Appendix 1.13.3. Third Law: Echo Must Be Symmetric
If I throw a seed at the wall and hear an echo, then the wall "throws" the seed back at me; the responses must be symmetric. If the wall responds more strongly than I "hit" it, energy comes from nowhere (the wall is a generator). If more weakly, energy disappears (the wall is an absorber).
For a closed system (room isolated, windows closed) energy is conserved, therefore the interaction is symmetric:
$$F_{12} = -F_{21}.$$
Formally: For consistent identification of interacting subsystems, operator adjointness is necessary. This ensures finiteness of Hankel rank of combined system.
Physical analog: Third law is condition of energy closure and identifiability of isolated system.
Appendix 1.14. Fundamental Conclusion
Identification is not "how good are sensors" but "how many symmetries you can break".
Rotation and second ear:
Without them, the environment remains an unidentifiable background. With them, it transforms into a system with finite Hankel rank.
Formally: Controlled modulation (rotation) and multichannel observation (two ears) are minimal conditions under which angular parameters transition from latent to observable, and Fisher information matrix becomes non-degenerate.
Appendix 1.15. Moral of Parable
This parable shows that to build a model of physical system, a priori ontological concepts are not needed:
"Absolute space" is not needed—mode indices suffice
"Mass as an object property" is not needed—a conditioning parameter suffices
"Force as an entity" is not needed—an input signal suffices
"Absolute time" is not needed—an ordering index suffices
Only needed:
Observation channel (hearing in darkness = electromagnetic spectrum in reality)
Ability to influence input (throw seeds = apply forces)
Response observation (hear sound = measure trajectories)
Memory (system does not reset = temporal correlations)
Symmetry breaking (two ears + rotation = multichannel observation with controlled modulation)
From this minimal set, through identification theory, all "laws of nature" are derived—not as ontological statements but as boundaries of knowability.
Physics in darkness is not a metaphor. This is a literal description of scientific inquiry procedure: we sit in "total darkness" of ignorance, throw "seeds" of experiments and listen to "echoes" of results. No direct access to "reality" exists. There is only observation channel and its identifiability.
Newton’s laws in this parable are not revelations about "nature of things" but operating instructions for echolocator in darkness.
Appendix 2 The Dzhanibekov Effect: Information Loss and Orientation Identifiability
In this appendix we propose an alternative interpretation of the Dzhanibekov effect (tennis racket theorem) that goes beyond the canonical explanation via linear instability of rotation around the intermediate principal axis. The main idea is that the observed flip is not only a dynamical but also an informational event, associated with temporary loss and subsequent restoration of orientation identifiability.
Appendix 2.1. Canonical Explanation and Its Limitations
Rotation of a free rigid body is described by Euler's equations in the principal axes:
$$I_1\dot\omega_1 = (I_2 - I_3)\,\omega_2\omega_3, \qquad I_2\dot\omega_2 = (I_3 - I_1)\,\omega_3\omega_1, \qquad I_3\dot\omega_3 = (I_1 - I_2)\,\omega_1\omega_2,$$
where $I_1, I_2, I_3$ are the principal moments of inertia and $\omega_1, \omega_2, \omega_3$ are the components of the angular velocity in the body frame.
For rotation around the intermediate axis ($I_1 < I_2 < I_3$, spin about axis 2), linearization shows exponential instability with increment
$$\lambda = \omega_2 \sqrt{\frac{(I_2 - I_1)(I_3 - I_2)}{I_1 I_3}}.$$
A small perturbation grows as $e^{\lambda t}$, leading to the characteristic flip of the body through angle $\pi$ over a time of order $\lambda^{-1}$.
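The instability and flip can be reproduced by direct numerical integration of Euler's equations; the moments of inertia, perturbation size and time step below are assumed toy values:

```python
import numpy as np

I1, I2, I3 = 1.0, 2.0, 3.0   # assumed principal moments, I1 < I2 < I3

def euler_rhs(w):
    w1, w2, w3 = w
    return np.array([(I2 - I3) / I1 * w2 * w3,
                     (I3 - I1) / I2 * w3 * w1,
                     (I1 - I2) / I3 * w1 * w2])

def rk4_step(w, dt):
    k1 = euler_rhs(w)
    k2 = euler_rhs(w + dt / 2 * k1)
    k3 = euler_rhs(w + dt / 2 * k2)
    k4 = euler_rhs(w + dt * k3)
    return w + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Spin about the intermediate axis with a tiny perturbation
w = np.array([1e-4, 1.0, 0.0])
history = []
for _ in range(40000):            # t = 0 .. 40 with dt = 1e-3
    w = rk4_step(w, 1e-3)
    history.append(w[1])

# The intermediate-axis component reverses sign: the flip
print(min(history))   # close to -1
assert min(history) < -0.9
```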
While such a description is mathematically correct, it leaves fundamental questions unanswered: why the flip has an almost universal character (angle $\pi$, not arbitrary), why the effect manifests especially clearly in microgravity, and why orientation restoration occurs abruptly rather than as continuous precession. Most importantly: the canonical explanation does not predict any dependence of the effect on the observation channel.
Appendix 2.2 Orientation as Hidden Identification Parameter
In the absence of external references (microgravity, isolated system), rigid body orientation is not directly observed—it is a hidden parameter that must be identified from the available signal $y(t)$.
The observer measures some projection of the rotational motion (for example, the body's shadow on a detector, or the signal from markers on its surface):
$$y(t) = h(\theta(t)) + \xi(t),$$
where $h$ is the observation function and $\xi(t)$ is the measurement noise.
The accuracy of identifying the orientation $\theta$ is characterized by the Fisher information matrix (Section 2.3):
$$\mathcal{I}(\theta) = \frac{1}{\sigma^2} \int_0^T \left( \frac{\partial h(\theta(t))}{\partial \theta} \right)^2 dt.$$
When $\mathcal{I}(\theta) \to 0$, the asymptotic variance of the estimate $\hat\theta$ diverges—identification becomes impossible (the Cramér-Rao bound).
Appendix 2.3 Spectral Decomposition and Bessel Functions
For periodic rotation with angular velocity $\omega$, the observed signal admits a spectral decomposition:
$$y(t) = \sum_{n=-\infty}^{\infty} c_n(\theta_0)\, e^{i n \omega t},$$
where the Fourier coefficients $c_n$ depend on the initial orientation $\theta_0$.
In the spectral decomposition of rotational dynamics, the coefficients are naturally expressed through Bessel functions of the first kind $J_n(z)$. The parameter $z$ is determined by the observation geometry (the angle between the rotation axis and the detector direction), the characteristic body dimensions and the angular velocity.
Critical property of Bessel functions. Each function $J_n$ has a countable set of zeros $j_{n,m}$ ($m = 1, 2, \dots$):
$$J_n(j_{n,m}) = 0.$$
At the points $z = j_{n,m}$, the corresponding spectral component vanishes: $c_n = 0$. The phase information encoded in the $n$-th harmonic disappears—different values of $\theta_0$ give an identical observable signal.
The signal's sensitivity to orientation near a zero:
$$\frac{\partial c_n}{\partial \theta_0} \propto J_n(z) \to 0.$$
The Fisher information matrix degenerates:
$$\mathcal{I}(\theta_0) \to 0.$$
In simple terms: When passing through a Bessel function zero, the system loses the ability to distinguish different orientations—the data become uninformative regarding the parameter $\theta_0$ (Definition 8.2 from Section 2).
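The location of such a blind zone can be found numerically from the integral representation of $J_n$ (pure NumPy, no special-function library; the scan range is an assumption):

```python
import numpy as np

def bessel_J(n, z, K=4000):
    # J_n(z) = (1/pi) * int_0^pi cos(n*t - z*sin(t)) dt  (integer n),
    # evaluated by the midpoint rule = mean over [0, pi]
    t = (np.arange(K) + 0.5) * np.pi / K
    return np.cos(n * t - z * np.sin(t)).mean()

# The first zero of J_1 lies near z = 3.8317: there the n = 1 harmonic
# of the observed signal vanishes, and with it the phase information
# carried by that harmonic.
z = np.linspace(3.0, 4.5, 1501)
vals = np.array([bessel_J(1, zi) for zi in z])
z_zero = z[np.argmin(np.abs(vals))]

print(round(z_zero, 2))   # ~3.83
```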
Appendix 2.4 Evolution Through Zero-Identifiability Region
During the evolution of the Euler instability, the rotation axis deviates from its initial direction. Perturbations grow exponentially with increment $\lambda$, changing the system geometry relative to the fixed observer.
The parameter $z$, which determines the argument of the Bessel functions in the spectral decomposition, changes during the evolution. Qualitatively: if the initial configuration corresponds to some $z_0$, then the exponential growth of perturbations changes $z(t)$ over a range sufficient to pass through the zeros $j_{n,m}$.
Key observation: Euler dynamical instability is the mechanism that brings system into vicinity of Bessel function zeros, where orientation identifiability is lost.
The exact form of $z(t)$ is determined by the solution of the full (nonlinear) Euler equations and requires either analytical treatment using elliptic functions or numerical modeling. This is a direction for further research.
Appendix 2.5 Topology of SO(3) and Flip as Branch Choice
The rotation group $SO(3)$ has nontrivial topology: its fundamental group is $\pi_1(SO(3)) = \mathbb{Z}_2$. There exists a double covering by the group of unit quaternions:
$$SU(2) \cong S^3 \to SO(3).$$
Physically: a rotation through $2\pi$ is not equivalent to the identity transformation in quaternion space ($q \to -q$), although the rotation matrix remains the same.
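The double cover can be demonstrated with a few lines of quaternion algebra: composing any orientation with a full $2\pi$ turn negates the quaternion while leaving the observable rotation matrix untouched.

```python
import numpy as np

def quat_mul(q, p):
    # Hamilton product of quaternions (w, x, y, z)
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = p
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rot_quat(axis, angle):
    axis = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def to_matrix(q):
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

q0 = rot_quat([0, 0, 1], 0.7)            # some initial orientation (assumed)
full_turn = rot_quat([1, 0, 0], 2 * np.pi)

q1 = quat_mul(full_turn, q0)
# A 2*pi rotation flips the quaternion sign ...
assert np.allclose(q1, -q0)
# ... but leaves the rotation matrix (the observable) unchanged:
assert np.allclose(to_matrix(q1), to_matrix(q0))
print("q and -q describe the same observable rotation")
```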
When passing through the zero-identifiability region ($\mathcal{I} \to 0$), the phase information is temporarily lost. Upon exiting this region, the phase must be restored from the available observations.
Due to the topology of $SO(3)$, the restoration is ambiguous: two equivalent branches exist, differing by a rotation through $2\pi$:
$$q \quad \text{and} \quad -q.$$
Flip as informational event. Observed body flip is interpreted as spontaneous choice of one of two topologically equivalent branches during orientation restoration after passing through blind zone.
Choice is determined stochastically by:
Measurement noise at moment of exit from zero-information region
Small fluctuations of initial conditions
Structure of information matrix in vicinity of zero
Appendix 2.6 Role of Observers: Experimentally Testable Prediction
Key consequence of the proposed interpretation: the Dzhanibekov effect ceases to be a purely "internal" property of the rotating body and becomes dependent on the observation channel.
Suppose there are $K$ independent observers, each measuring their own projection of the rotation:
$$y_k(t) = h_k(\theta(t)) + \xi_k(t), \qquad k = 1, \dots, K.$$
The total Fisher information matrix is additive:
$$\mathcal{I}_{\text{total}}(\theta) = \sum_{k=1}^{K} \mathcal{I}_k(\theta).$$
If the Bessel function zeros for different channels do not coincide (different observation angles, different detector types), then at a point where one channel loses information ($\mathcal{I}_1 = 0$), the other channels preserve nonzero sensitivity ($\mathcal{I}_k > 0$ for $k \neq 1$).
The total information remains finite:
$$\mathcal{I}_{\text{total}}(\theta) > 0.$$
The region of complete identifiability loss narrows or disappears.
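A toy numerical version of this argument, with an assumed cosine-squared sensitivity profile per detector: one camera has orientations it cannot sense at all, while two cameras at staggered angles have none.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 1000)

def fisher(theta, view_angle):
    # Toy single-channel Fisher information: squared sensitivity of one
    # detector's projection to the orientation parameter theta
    return np.cos(theta - view_angle) ** 2

I_total_1 = fisher(theta, 0.0)
# One camera: blind spots at theta = pi/2 and 3*pi/2
I_total_2 = I_total_1 + fisher(theta, np.pi / 2)   # second camera at 90 degrees

print(I_total_1.min(), I_total_2.min())
# One channel has zeros of information; two staggered channels do not:
assert I_total_1.min() < 1e-4
assert I_total_2.min() > 0.9
```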
Experimental predictions, absent in the canonical explanation via Euler instability:
1. Flip suppression by multiple detectors. Increasing the number of independent observation channels (video cameras at different angles, gyroscopic sensors, optical markers with different orientations, a weak external field as an additional reference) should make the flip less abrupt, reduce its amplitude or completely suppress it.
2. Dependence on observation geometry. The flip probability and characteristic time should depend on the observation angle and detector placement. Configurations where the Bessel function zeros for different channels coincide give maximum prominence of the effect.
3. Role of microgravity. On Earth, the gravitational field breaks rotational symmetry, acting as an additional observation channel (via precession and libration). In microgravity this channel is absent—hence the clearer manifestation of the effect in space experiments.
Practical implementation. An experiment with a wing nut on the ISS or in parabolic flight, with simultaneous video recording from several angles. Comparison of flip statistics (amplitude, time to flip, probability) when using:
One camera (baseline configuration)
Two cameras at an angle
Three cameras (tetrahedral configuration)
Additional references (weak magnetic field, luminous markers)
If proposed interpretation is correct, flip statistics should systematically change with increasing number of independent observation channels.
Appendix 2.7 Connection to Main Article Concept
The Dzhanibekov effect in this interpretation serves as an illustration of the work's central idea: classical "laws of nature" are not ontological statements about reality but boundaries of information extraction from observations.
Parallel with Newton’s laws:
First law: Without excitation ($u = 0$) the data are uninformative ⇒ without an observer, orientation is unidentifiable
Second law: Mass as a conditioning parameter of identification ⇒ moment of inertia as a conditioning parameter of orientation identification
Third law: Symmetry of interactions ⇒ symmetry of information channels
Hankel rank of rotational dynamics. For free rotation around a single axis, the system reduces to the pair $(\phi, \dot\phi)$, giving Hankel rank 2 (minimal realization, Section 2.4). Upon loss of identifiability, the effective rank drops to 1—only the angular velocity $\dot\phi$ is observable, not the orientation $\phi$ itself.
Energy as norm. The rotational kinetic energy
$$E = \tfrac{1}{2}\left(I_1\omega_1^2 + I_2\omega_2^2 + I_3\omega_3^2\right)$$
is a quadratic norm in the state space (Section 8). Energy conservation ensures norm-preservation of the evolution operator but does not exclude the flip: the states $q$ and $-q$ have the same energy and are topologically equivalent.
Appendix 2.8 Conclusion
The proposed interpretation does not contradict the canonical explanation via Euler instability but complements it with an informational perspective. Dynamical instability is the mechanism that brings the system into the vicinity of critical configurations (Bessel function zeros) where identifiability is lost. The flip itself is a phase restoration event upon exit from the blind zone, determined by the topology of $SO(3)$.
The main difference from the canonical interpretation: the prediction that the effect depends on the observation channels. If microgravity experiments show that adding independent detectors systematically affects flip statistics, this will be direct confirmation of the informational nature of the effect and will demonstrate the observer's role in classical mechanics—a theme traditionally considered the prerogative of quantum theory.
The Dzhanibekov effect transforms from "microgravity curiosity" into fundamental phenomenon at boundary of dynamics and information theory, accessible for experimental verification on existing space platforms.
Appendix 3 Flickering Lighthouses in Darkness: Insights from the Frugal Devil’s Wife of Ancient Times
Appendix 3.1 The Curious Title: Why the Devil’s Wife?
The title of this appendix refers to two historical mathematical figures that illuminate our investigation. The "ancient frugal devil's wife" is a whimsical historical nickname for Maria Gaetana Agnesi (1718–1799), the Italian mathematician who studied the curve now known as the "Witch of Agnesi" in 1748. The Italian name versiera (meaning "she who turns") was mistranslated into Latin as avversiera ("devil's wife" or "female adversary"). The curve itself is defined as
$$y(x) = \frac{8a^3}{x^2 + 4a^2},$$
which, after normalization, is mathematically identical to the probability density function of the Cauchy distribution—precisely the structure emerging from the projection of uniform circular motion onto a straight line, the heart of the lighthouse problem.
The epithet “frugal” reflects the remarkable efficiency of this curve: a simple rational function encoding heavy-tailed probabilistic behavior with infinite information content packed into a finite spatial domain. The “flickering lighthouses in darkness” evoke our epistemic situation from the parable in the main text — we observe the world through limited sensory channels (like the physicist in total darkness), and the mathematical structure of rotating sources reveals fundamental boundaries of what can be known.
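The identification of the two curves is easy to verify numerically (the parameter value is an arbitrary assumption): dividing the witch by its total area $4\pi a^2$ yields exactly the Cauchy density with scale $b = 2a$.

```python
import numpy as np

a = 0.5   # Agnesi parameter (assumed); the curve has height 2a at x = 0

def witch_of_agnesi(x):
    # y(x) = 8 a^3 / (x^2 + 4 a^2)
    return 8 * a**3 / (x**2 + 4 * a**2)

def cauchy_pdf(x, loc=0.0, scale=1.0):
    return scale / (np.pi * ((x - loc)**2 + scale**2))

x = np.linspace(-10, 10, 201)
# The area under the witch is 4*pi*a^2; normalizing it yields exactly
# the Cauchy density with scale parameter b = 2a:
normalized = witch_of_agnesi(x) / (4 * np.pi * a**2)
assert np.allclose(normalized, cauchy_pdf(x, scale=2 * a))
print("Witch of Agnesi = Cauchy density with b = 2a")
```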
This appendix extends the discrete lighthouse problem to continuous configurations, revealing that differential rotation on logarithmic spirals is the solution to a well-posed optimization problem: how to pack the maximum number of distinguishable rotating sources within finite observation constraints while maintaining complete parameter identifiability.
Appendix 3.2 The Classical Lighthouse Problem and the Degeneracy
The classical lighthouse problem describes a uniformly rotating light source positioned at distance $b$ from a straight shoreline. The angular position of the light beam as a function of time is $\theta(t) = \omega t + \phi_0$, where $\omega$ is the angular velocity. The projection of this rotational motion onto the linear coordinate $x$ of the shoreline yields the well-known result: the distribution of illumination events follows a Cauchy (Lorentzian) distribution
$$p(x) = \frac{1}{\pi}\,\frac{b}{(x - a)^2 + b^2},$$
where $a$ is the position of the foot of the perpendicular from the source to the shoreline, and $b$ is the scale parameter.
This problem encapsulates a fundamental situation in system identification where rotational symmetry induces heavy-tailed probability distributions with non-existent moments. From spatial observations alone, the characteristic function of the distribution is
$$\varphi(k) = e^{i a k - b|k|},$$
which depends only on $a$ and $b$ (at the single-source level, since we normalize $\int p(x)\,dx = 1$ for the distribution). For finite observation time $T$, the effective coverage length is $L_{\text{eff}} \approx b\,\omega T$ for small $\omega T$. The fundamental identifiability boundary emerges: only the product $\omega b$ is observable from spatial data alone. The parameters $b$ and $\omega$ cannot be independently determined—this is the compensation effect.
For $K$ lighthouses, the mixture characteristic function is
$$\varphi(k) = \sum_{j=1}^{K} w_j\, e^{i a_j k - b_j |k|}, \qquad \sum_{j} w_j = 1.$$
From spatial observations alone, we can identify: (1) the number of sources $K$ (Hankel rank), (2) the positions $a_j$, (3) the scale parameters $b_j$. We cannot recover the temporal parameters $\omega_j$ and $\phi_j$.
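The projection mechanism and the failure of moments can be illustrated by simulation (source parameters assumed): uniform beam angles map to Cauchy-distributed shore positions, and robust statistics recover location and scale where sample means would not.

```python
import numpy as np

rng = np.random.default_rng(3)
a, b = 2.0, 1.5    # lateral offset and distance from the shoreline (assumed)

# Uniform beam angles in (-pi/2, pi/2) hit the shore at x = a + b*tan(phi)
phi = rng.uniform(-np.pi / 2, np.pi / 2, 200000)
x = a + b * np.tan(phi)

# Moments of the Cauchy law do not exist, but robust statistics
# recover the parameters:
a_hat = np.median(x)                                        # location
b_hat = (np.quantile(x, 0.75) - np.quantile(x, 0.25)) / 2   # scale = IQR/2

print(round(a_hat, 2), round(b_hat, 2))   # close to (2.0, 1.5)
```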
Appendix 3.3 Extended Sensor Array: Physical Setup and Finite Constraints
The transition from the classical problem to the optimization problem requires introducing realistic physical constraints. We consider a planar observation domain with a linear sensor array positioned along the x-axis.
Appendix 3.3.1. Physical Parameters (Given)
Sensor length: $L$ (finite spatial extent), sensor domain $x \in [-L/2, L/2]$
Observation time: $T$ (finite temporal extent)
Signal propagation speed: $c$ (speed of signal propagation along the sensor wire from detection point to readout)
Frequency resolution: $\Delta f = 1/T$ (Fourier limit)
Spatial resolution: $\Delta x$ (sensor element spacing or pixel size)
Temporal resolution: $\Delta t$ (sampling rate of the detection electronics)
Appendix 3.3.2. Source Geometry
A rotating source (lighthouse) $k$ is positioned at coordinates $(a_k, b_k)$, where:
$b_k$: perpendicular distance from the source to the sensor line
$a_k$: lateral offset along the sensor line (projection of the source onto the x-axis)
$\omega_k$: angular velocity of rotation
$\phi_k$: initial phase
The beam from source $k$ strikes the sensor at position $x$ at times determined by the geometric condition $\tan(\omega_k t + \phi_k) = (x - a_k)/b_k$, giving
$$t_n(x) = \frac{1}{\omega_k}\left[\arctan\!\left(\frac{x - a_k}{b_k}\right) - \phi_k + \pi n\right], \qquad n \in \mathbb{Z}.$$
Appendix 3.3.3. Signal Propagation Delay
When the beam strikes the sensor at position x, the electrical signal must propagate along the sensor wire to a central readout point (assume readout at x = 0). This introduces an additional delay
τ_prop(x) = |x|/c.
The observed detection time at the readout is therefore
t_obs = t_{k,n}(x) + |x|/c.
This propagation delay creates an additional information channel for determining the spatial position x of each detection independently.
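A minimal numerical sketch of this detection-time model (all parameter values hypothetical; readout assumed at x = 0) also confirms a property used later: the readout-time difference between two sensor positions does not depend on the beam index n.

```python
import numpy as np

# Hypothetical source parameters (x_k, y_k, omega_k, phi_k) and propagation speed.
x_k, y_k, omega_k, phi_k = 0.3, 2.0, 5.0, 0.7
c = 40.0

def t_obs(x, n):
    """Readout time of the n-th beam passage over sensor position x."""
    t_hit = (np.arctan((x - x_k) / y_k) - phi_k + 2 * np.pi * n) / omega_k
    return t_hit + abs(x) / c        # geometric hit time + wire delay |x|/c

x1, x2 = -0.5, 1.0
delays = [t_obs(x2, n) - t_obs(x1, n) for n in range(5)]
# All entries of `delays` coincide: the beam index n cancels in the difference.
```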
Appendix 3.3.4. Complete Spatio-Temporal Signal
The observed signal at the readout is
s(t) = Σ_k Σ_x Σ_n A_k δ(t − t_{k,n}(x) − |x|/c) + ξ(t),
where A_k is the amplitude of source k and ξ(t) is measurement noise. For a continuous sensor, the sum over x becomes an integral.
Appendix 3.4 Three Independent Information Channels and Channel Independence
The extended sensor geometry with finite L, finite T, and signal propagation creates three mathematically independent information channels that break the degeneracy.
Appendix 3.4.1. Channel 1: Spectral Frequencies
For any fixed sensor position x, the signal from source k is periodic with fundamental frequency ω_k. The Fourier spectrum of s(t) contains discrete lines at the integer multiples nω_k for n = 1, 2, …. Standard frequency estimation theory gives the Cramér–Rao bound
Var(ω̂_k) ≥ C/(SNR · T³),
where SNR is the signal-to-noise ratio and C is a numerical constant. This channel provides ω_k independent of x_k and y_k.
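A quick spectral sketch (hypothetical values: ω_k = 5 rad/s, Gaussian flash pulses of width 0.2 s) shows the fundamental line being recovered from the flash train at a fixed sensor position:

```python
import numpy as np

omega_k = 5.0                          # hypothetical angular velocity (rad/s)
T, dt = 200.0, 0.01                    # observation window and sampling step
t = np.arange(0.0, T, dt)

# Flash train at a fixed sensor position: one Gaussian pulse per rotation.
period = 2 * np.pi / omega_k
hits = np.arange(0.3, T, period)       # hit times (arbitrary offset 0.3 s)
s = np.zeros_like(t)
for th in hits:
    s += np.exp(-0.5 * ((t - th) / 0.2) ** 2)
s -= s.mean()                          # remove the DC line before peak picking

spec = np.abs(np.fft.rfft(s))
freqs = np.fft.rfftfreq(t.size, dt)    # Hz
f_est = freqs[np.argmax(spec)]         # fundamental spectral line
omega_est = 2 * np.pi * f_est
```

The harmonics nω_k are attenuated by the finite pulse width, so the fundamental dominates the spectrum and the estimate lands within the Fourier resolution 2π/T.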
Appendix 3.4.2. Channel 2: Spatio-Temporal Delays
Consider two sensor positions x₁ ≠ x₂. The time delay between detections of the same beam (same n) from source k is
Δτ_k(x₁, x₂) = (1/ω_k)[arctan((x₂ − x_k)/y_k) − arctan((x₁ − x_k)/y_k)] + (|x₂| − |x₁|)/c.
Critically, this delay is independent of the beam index n, depending only on geometry. The first term encodes the geometry modulated by ω_k, while the second term (propagation delay) is independent of source parameters. If ω_k is known from Channel 1, the geometric term determines x_k and y_k uniquely through the functional form of the arctan. The propagation term provides an independent check.
For small angular spans, the delay can be linearized:
Δτ_k ≈ (x₂ − x₁) · y_k / (ω_k [(x̄ − x_k)² + y_k²]) + (|x₂| − |x₁|)/c,
where x̄ = (x₁ + x₂)/2. Knowing ω_k from Channel 1, we can solve for x_k and y_k.
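Given ω_k from Channel 1, the inversion for (x_k, y_k) can be sketched with the exact arctan delay model and a brute-force grid search (my own illustrative choice; any nonlinear least-squares routine would do):

```python
import numpy as np

# Hypothetical ground truth; omega_k is assumed already known from Channel 1.
x_true, y_true, omega_k = 0.4, 1.5, 5.0
c = 40.0
xs = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # sensor positions

def delay(x, xk, yk):
    # Geometric sweep time to reach x, plus wire propagation to readout at 0.
    return np.arctan((x - xk) / yk) / omega_k + np.abs(x) / c

tau_obs = delay(xs, x_true, y_true)

# Invert by brute-force grid search over candidate (x_k, y_k).
xg = np.linspace(-2.0, 2.0, 401)
yg = np.linspace(0.1, 3.0, 291)
best, best_err = None, np.inf
for xk in xg:
    res = np.arctan((xs[None, :] - xk) / yg[:, None]) / omega_k + np.abs(xs)[None, :] / c
    err = np.sum((res - tau_obs) ** 2, axis=1)
    i = int(np.argmin(err))
    if err[i] < best_err:
        best_err, best = err[i], (xk, yg[i])
x_hat, y_hat = best
```

With noiseless delays the minimizer coincides with the true geometry, illustrating that the arctan delay pattern is not degenerate in (x_k, y_k).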
Appendix 3.4.3. Channel 3: Spatial Distribution
The spatial distribution of detection events along the sensor encodes the lateral offset x_k. For source k, the intensity as a function of position follows (approximately, for ω_k T ≫ 1) the Cauchy profile
I_k(x) ∝ y_k / [(x − x_k)² + y_k²].
The peak position is at x = x_k, and the width is determined by y_k. Given ω_k from Channel 1 and y_k from Channel 2, this channel provides x_k.
Appendix 3.4.4. Formal Proof of Channel Independence
Theorem A12 (Channel Independence).
The Fisher information matrix for the parameter vector θ_k = (ω_k, x_k, y_k) is block-diagonal:
F = diag(F_spec, F_delay, F_spat),
where each block corresponds to one of the three channels.
Proof. The Fisher information matrix is defined as
F_{ij} = E[(∂ ln L/∂θ_i)(∂ ln L/∂θ_j)].
We decompose the signal into three statistically independent components:
- 1. temporal spectrum at fixed x (depends only on ω_k)
- 2. cross-correlation between sensor positions (depends on ω_k, x_k, y_k)
- 3. spatial intensity distribution (depends on x_k, y_k)
The log-likelihood factorizes:
ln L = ln L_spec + ln L_delay + ln L_spat.
For the spectral component,
∂ ln L_spec/∂x_k = 0 and ∂ ln L_spec/∂y_k = 0,
since the spectrum depends only on frequencies. Thus F_spec contributes only to the ω-block.
For the delay component, once ω_k is known, the delay Δτ_k is a function of x_k and y_k alone:
Δτ_k = f(x_k, y_k; ω_k) + g(x₁, x₂),
where g is the propagation term independent of source parameters. The functional form f involves the arctan, which is not degenerate: different (x_k, y_k) pairs produce measurably different delay patterns. Given that ω_k is already known from the spectral channel, the Fisher information from the delay channel is a block F_delay acting on (x_k, y_k). This block is independent of the spatial Fisher information F_spat, which comes from the intensity distribution.
The three channels therefore contribute independent information, and the total Fisher matrix is the sum
F = F_spec + F_delay + F_spat.
□
Corollary A1 (Breaking the Degeneracy). With access to all three channels, the parameters (ω_k, x_k, y_k) are completely identifiable. The degeneracy of the classical (single-channel) problem is eliminated.
Appendix 3.5. The Optimization Problem: Packing Lighthouses into Finite Constraints
Having established that extended sensor arrays enable complete parameter identification, we now pose the central optimization problem: How should we configure rotating sources to maximize the number of distinguishable lighthouses within finite observation constraints?
Appendix 3.5.1. Formal Problem Statement
Problem A1 (Optimal Lighthouse Configuration). Given:
Sensor length L (finite spatial domain)
Observation time T (finite temporal window)
Signal propagation speed c along the sensor
Frequency resolution Δω = 2π/T
Spatial resolution Δx (sensor element spacing)
Temporal resolution Δt (electronics sampling interval)
Operating frequency range [ω_min, ω_max]
Objective: Maximize the effective number of distinguishable sources K that can be identified by the sensor array.
Decision Variables: the number of sources K and their parameters (x_k, y_k, ω_k, φ_k), k = 1, …, K.
Constraints:
- 1.
Spectral separation: for any two sources i ≠ j with angular velocities ω_i, ω_j, harmonics must not overlap:
|n₁ω_i − n₂ω_j| ≥ Δω for all 1 ≤ n₁, n₂ ≤ n_max,
where n_max is the maximum observable harmonic order, determined by the signal-to-noise ratio and the harmonic amplitude decay.
- 2.
Spatial separation: effective coverage regions must not completely overlap.
- 3.
Delay resolvability: time delays must be measurable: |Δτ| ≥ Δt.
- 4.
Geometric confinement: all sources lie within a bounded domain: x_k² + y_k² ≤ R².
- 5.
Effective coverage: sources must be detectable by the sensor (each beam must sweep part of the interval [−L/2, L/2]).
Appendix 3.5.2. Intuition for the Optimization
The key insight is that the optimization problem has two competing demands:
- 1.
Spectral efficiency: to maximize K, we want to pack as many distinct frequencies into [ω_min, ω_max] as possible.
- 2.
Spatial efficiency: to maintain identifiability, sources must be spatially separated enough that their Cauchy distributions don't completely merge on the sensor of length L.
For discrete configurations, these demands conflict, leading to a hard limit on K (as we prove below). The resolution is to transition to a continuous distribution in which each infinitesimal spatial element has a unique frequency, eliminating spectral interference entirely.
Appendix 3.6. Fundamental Limitations of Discrete Configurations
Theorem A13 (Spectral Crowding Limit for Discrete Sources).
For K discrete sources with angular velocities in the range [ω_min, ω_max], the maximum number of identifiable sources is
K_max = ⌊1 + ln(ω_max/ω_min) / ln(n_max)⌋,
where n_max is the maximum harmonic order that can be reliably distinguished.
Proof. The spectral separation constraint requires
|n₁ω_i − n₂ω_j| ≥ Δω
for all i ≠ j and all n₁, n₂ ≤ n_max. The worst case occurs when harmonics of adjacent sources are as close as possible while still satisfying the constraint: the fundamental of source k + 1 must clear the highest observable harmonic of source k,
ω_{k+1} ≥ n_max · ω_k.
The most efficient packing (minimizing wasted spectral space) occurs when this is an equality. Taking the ratio:
ω_{k+1}/ω_k = n_max.
Applying this recursively from k = 1 to k = K − 1:
ω_max/ω_min ≥ (n_max)^{K−1}.
Taking logarithms:
ln(ω_max/ω_min) ≥ (K − 1) ln(n_max).
Solving for K:
K ≤ 1 + ln(ω_max/ω_min)/ln(n_max).
Taking the floor gives the maximum integer number of sources. □
Corollary A2 (Fundamental Discrete Limit). The maximum number of distinguishable discrete sources grows only logarithmically with the frequency range and is strongly limited by the maximum harmonic order. This represents a fundamental bottleneck: even with ideal sensors (Δx → 0, Δt → 0), the spectral crowding constraint imposes a finite K_max.
Theorem A14 (Hankel Rank Limitation).
For a discrete system of K sources, the observed temporal signal has the form
s(t) = Σ_{k=1}^{K} A_k e^{iω_k t}.
The Hankel matrix constructed from uniformly sampled data s_m = s(mΔt) has rank exactly K:
rank H = K.
Proof. The Hankel matrix is defined as
H_{ij} = s_{i+j}, i, j = 0, 1, …, N − 1.
This can be factorized as
H = V D Vᵀ,
where
V is the Vandermonde matrix with entries V_{jk} = e^{iω_k jΔt}
and
D is diagonal with entries A_k. For distinct frequencies
ω_k, the Vandermonde matrix has rank
K, thus rank H = K. □
This finite-rank limitation means the information capacity is bounded by rank H = K — there is no advantage to increasing K beyond the spectral crowding limit.
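The rank statement is easy to verify numerically (illustrative frequencies and amplitudes): a Hankel matrix built from a sum of K = 3 complex exponentials has numerical rank exactly 3, regardless of the matrix size.

```python
import numpy as np

K = 3
omegas = np.array([1.0, 2.3, 4.1])     # distinct frequencies (rad/s)
amps = np.array([1.0, 0.5, 2.0])
dt = 0.1
N = 20                                 # Hankel size (any N >= K works)

# Uniformly sampled signal s_m = sum_k A_k exp(i*omega_k*m*dt).
m = np.arange(2 * N - 1)
s = (amps[None, :] * np.exp(1j * omegas[None, :] * m[:, None] * dt)).sum(axis=1)

# Hankel matrix H[i, j] = s[i + j]; numerical rank via singular values.
H = np.array([[s[i + j] for j in range(N)] for i in range(N)])
sv = np.linalg.svd(H, compute_uv=False)
rank = int(np.sum(sv > sv[0] * 1e-8))  # == K
```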
Appendix 3.7. Continuous Formulation: The Way Out
Appendix 3.7.1. From Discrete to Continuous
The resolution of the discrete limitation is to replace the discrete set {ω_k} with a continuous distribution. Instead of K distinct sources, we consider a continuum of sources parameterized by a spatial coordinate.
Let r be a radial coordinate (distance from some reference point). Define a rotation law ω = ω(r) and a linear source density n(r). The joint density in
(r, ω) space is
ρ(r, ω) = n(r) δ(ω − ω(r)).
The key property: if ω(r) is a monotonic function, then each frequency ω corresponds to exactly one radius r. There is no spectral overlap between different radial elements.
Appendix 3.7.2. Spectral Density
The spectral density (number of sources per unit frequency) is
ρ(ω) = n(r(ω)) |dr/dω|,
where
r(ω) is the inverse function of
ω(r).
Appendix 3.7.3. Power-Law Differential Rotation
We consider the family of power-law rotation laws:
ω(r) = ω₀ (r/r₀)^{−α}.
Special cases:
α = 0: solid-body rotation (all radii rotate together)
α = 3/2: Keplerian rotation (gravitationally bound orbits)
α > 3/2: super-Keplerian (some accretion disk models)
Thus the spectral density is
ρ(ω) = n(r(ω)) · (r₀/(α ω₀)) (ω/ω₀)^{−(1+1/α)}.
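The power-law spectral density can be checked by sampling (a sketch with the hypothetical choices α = 2, n(r) uniform on [1, 2], and ω₀ = r₀ = 1): a histogram of ω(r) should follow ω^{−(1+1/α)}.

```python
import numpy as np

alpha = 2.0
rng = np.random.default_rng(1)
r = rng.uniform(1.0, 2.0, 1_000_000)   # uniform radial density n(r)
w = r ** (-alpha)                      # omega(r) = (r/r0)^(-alpha)

# Histogram of frequencies; predicted density rho(omega) ~ omega^{-(1 + 1/alpha)}.
hist, edges = np.histogram(w, bins=30, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
slope = np.polyfit(np.log(centers), np.log(hist), 1)[0]   # ~ -(1 + 1/alpha)
```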
Appendix 3.7.4. Phase Averaging and the Emergence of Bessel Functions
For lighthouses rotating rapidly, with ω_k T ≫ 2π (many complete rotations during the observation time T), the detector observes many flashes, and the temporal signal becomes effectively averaged over the rotation phase φ₀.
The critical physical effect is the propagation delay combined with continued rotation. When the lighthouse beam strikes the sensor at position
x at time
t, the electrical signal must propagate to the central readout at x = 0 with delay
τ_prop = |x|/c.
During this propagation time, the lighthouse continues to rotate. The additional phase accumulated is
Δφ = ω_k |x|/c.
This propagation-induced phase shift is mathematically equivalent to wave propagation with an effective wave vector
k_eff = ω_k/c.
For a sensor point at position
x, the total observed phase (combining geometric and propagation contributions) averaged over the initial rotation phase
φ₀ involves integrals of the form
(1/2π) ∫₀^{2π} e^{i z sin φ} dφ = J₀(z),
where
J₀ is the Bessel function of the first kind, order zero. This is the standard integral representation of the Bessel function arising from circular or rotational averaging.
The appearance of Bessel functions in this context is thus a direct consequence of the interplay between rotational motion and finite signal propagation speed. The “radius” in the 1D sensor geometry is |x| (the distance from the readout point), and the effective wave vector ω_k/c encodes the phase accumulated during signal propagation.
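The rotational-averaging identity can be verified directly, with no special-function library: the uniform phase average of exp(iz sin φ) reproduces J₀, including its first zero.

```python
import numpy as np

def j0_rot(z, n=20001):
    """J0 via rotational phase averaging: (1/2pi) * integral of exp(i z sin(phi))."""
    phi = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return np.mean(np.exp(1j * z * np.sin(phi))).real   # uniform average over phase

print(j0_rot(0.0))          # 1.0
print(j0_rot(2.404825558))  # ~0: first zero of J0 (an identifiability boundary)
```

The uniform average over a full period is spectrally accurate here, so even this naive quadrature reproduces the Bessel zero to machine precision.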
Appendix 3.7.5. Hankel Matrix for Continuous Systems
For a continuous distribution, the temporal signal is
s(t) = ∫ n(r) A(r) e^{iω(r)t} dr.
The Hankel matrix elements, incorporating the phase-averaging effect for rapidly rotating sources, acquire a Bessel factor:
H_{ij} = ∫ n(r) A(r) J₀(z(r)) e^{iω(r)(i+j)Δt} dr.
For the logarithmic spiral configuration with differential rotation, the spatial coordinate along the spiral is r(θ) = r₀e^{βθ} and the angular velocity is ω(r) = ω₀(r/r₀)^{−α}. The effective argument of the Bessel function, arising from the propagation-delay physics, is
z(r) = n r ω(r)/c ∝ n r^{1−α},
where we used ω = ω(r) and n is the angular mode number. For α = 1 (the marginal law with rω(r) constant), this simplifies to
z = n ω₀ r₀/c,
independent of radius, explaining why Bessel zeros occur at regular intervals in this case. The Bessel-function argument is thus explicitly determined by the propagation delay τ_prop = |x|/c and the rotation frequency ω(r).
Appendix 3.8. Variational Derivation of Logarithmic Spiral Geometry
We have established that continuous distributions with monotonic ω(r) eliminate spectral crowding. The question remains: what spatial distribution maximizes the Fisher information subject to geometric constraints?
Appendix 3.8.1. Fisher Information Functional
The total Fisher information is the sum of contributions from the three channels. For the continuous case:
Spectral channel: I_spec = ∫ ρ(ω) I_ω(ω) dω.
This is maximized when ρ(ω) > 0 for all ω ∈ [ω_min, ω_max], i.e., sources are distributed across the entire frequency range.
Spatial channel: I_spat = ∫ (n′(x))²/n(x) dx.
This is the Fisher information for density estimation. It is maximized when sources are spread out (large gradients).
Delay channel: I_delay = ∫∫ K(x₁, x₂) dx₁ dx₂,
where the kernel K encodes spatio-temporal delay information.
Appendix 3.8.2. Geometric Constraint
We require that sources fit within a bounded domain with characteristic size R. Additionally, to maintain resolvability on a sensor of length L, the effective coverage lengths must satisfy ℓ_k ≲ L.
Appendix 3.8.3. Variational Argument
Consider sources distributed along a curve in polar coordinates, parameterized by the angle θ. The radial position is r(θ), and the angular velocity is ω(r(θ)). We seek the curve that maximizes information subject to:
- 1.
The frequency range is covered: ω(r(θ)) spans [ω_min, ω_max] as θ varies.
- 2.
Geometric confinement: r(θ) ≤ R for all θ.
- 3.
Spatial separation: adjacent sources (infinitesimal increments dθ) must be spatially separated.
For the spectral constraint with the power law
ω(r) = ω₀(r/r₀)^{−α}, we have
ln ω = ln ω₀ − α ln(r/r₀).
To span the full frequency range
[ω_min, ω_max] as θ
varies from 0 to
Θ, we need
α [ln r(Θ) − ln r(0)] = ln(ω_max/ω_min).
The most efficient spatial distribution is one where the logarithmic radial increment is proportional to the angular increment:
d(ln r)/dθ = β = const.
This is the logarithmic spiral r(θ) = r₀ e^{βθ}. The parameter β determines the tightness of the spiral winding.
Appendix 3.8.4. Combined Parameter
For sources on a logarithmic spiral with differential rotation:
r(θ) = r₀ e^{βθ}, ω(θ) = ω₀ (r(θ)/r₀)^{−α} = ω₀ e^{−αβθ}.
Define the combined parameter λ = αβ. Then:
ω(θ) = ω₀ e^{−λθ}.
The frequency decreases exponentially with angular position. The parameter
λ is determined by the requirement that the spiral spans the full frequency range in a given angular extent
Θ:
λ = ln(ω_max/ω_min)/Θ.
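A small sketch confirms the exponential frequency allocation along the spiral (hypothetical band ω ∈ [1, 100] and angular extent Θ = 4π; the split of λ into α and β is arbitrary here, since only the product matters):

```python
import numpy as np

omega_max, omega_min = 100.0, 1.0
Theta = 4 * np.pi                          # total angular extent of the spiral
lam = np.log(omega_max / omega_min) / Theta  # combined parameter lambda = alpha*beta

theta = np.linspace(0.0, Theta, 1000)
r0, beta = 1.0, 0.2                        # hypothetical spiral parameters
alpha = lam / beta                         # any split with alpha*beta = lam works
r = r0 * np.exp(beta * theta)              # logarithmic spiral r(theta)
w = omega_max * (r / r0) ** (-alpha)       # differential rotation => w = w0*exp(-lam*theta)
```

The frequency profile decreases monotonically from ω_max at θ = 0 to exactly ω_min at θ = Θ, i.e., the spiral covers the whole operating band with no spectral overlap.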
Appendix 3.8.5. Why Logarithmic Spirals Are Optimal
The logarithmic spiral geometry achieves the following:
- 1.
Maximal spectral coverage: the exponential mapping ω(θ) = ω₀e^{−λθ} provides uniform coverage in logarithmic frequency space, which is the natural scale for the spectral separation constraint.
- 2.
Spatial efficiency: the self-similar structure of the logarithmic spiral means that the spatial separation between adjacent angular elements is proportional to the radius, matching the scaling of the effective coverage length ℓ.
- 3.
Constant information density: each angular increment dθ contributes the same amount to the Fisher information, avoiding wasted capacity.
Appendix 3.9. Main Optimality Theorem
Theorem A15 (Information-Theoretic Optimality of Differential Rotation on Logarithmic Spirals).
Consider the optimization problem of Section 4.1. Among all configurations of rotating sources satisfying the stated constraints, the configuration that maximizes total Fisher information is:
- 1.
Spatial distribution: sources distributed along logarithmic spiral(s) r(θ) = r₀e^{βθ} (a single spiral or a superposition of multiple spirals)
- 2.
Rotation law: power-law differential rotation ω(r) = ω₀(r/r₀)^{−α}
- 3.
Combined parameter: λ = αβ = ln(ω_max/ω_min)/Θ
This configuration achieves:
Infinite Hankel rank (in the noise-free limit)
Zero spectral interference between radial elements
Complete parameter separability via three independent channels
Optimal scaling of Fisher information with the number of sources
Proof. The proof proceeds in three parts corresponding to the three components of Fisher information.
Part 1 (Spectral optimality): The spectral Fisher information is maximized when ρ(ω) > 0 for all ω ∈ [ω_min, ω_max] and when the spectral density is as uniform as possible in logarithmic frequency space. The power-law differential rotation maps the radial domain bijectively onto the frequency domain with spectral density ρ(ω) = n(r(ω))|dr/dω|. This provides coverage of the entire operating band without spectral overlap, which is the necessary condition for maximizing I_spec.
Part 2 (Spatial optimality): Given the spectral constraint from Part 1, we must choose the spatial distribution to maximize I_spat and I_delay. The spatial Fisher information is maximized when sources are distributed to create maximum spatial gradients. The logarithmic spiral provides the optimal balance: it packs sources into a compact region (satisfying the geometric constraint r ≤ R) while maintaining sufficient spatial separation to avoid complete overlap of the Cauchy tails.
The key insight is that the effective coverage length ℓ_k scales with the product r ω(r). For α = 1 (constant rω), ℓ_k is constant for all k, meaning all sources have comparable spatial footprints on the sensor. For shallower rotation laws, outer sources have larger footprints; for steeper laws, inner sources dominate. The logarithmic spiral geometry ensures that adjacent sources (in angle θ) are spatially separated by a distance that scales with their coverage length, maintaining a constant overlap ratio.
Part 3 (Coupling and Hankel rank): The combined parameter λ = αβ links the spatial geometry (spiral parameter β) and the spectral allocation (rotation index α). The constraint that the spiral must span the full frequency range determines λ:
λ = ln(ω_max/ω_min)/Θ,
which gives the stated result.
For this configuration, the Hankel matrix elements (incorporating phase averaging as derived in Section 6.4) are
H_{ij} = ∫ n(r) A(r) J₀(z(r)) e^{iω(r)(i+j)Δt} dr.
The kernel is a smooth function of (i + j)Δt. The operator is a Hankel integral operator with continuous kernel, which has a continuous spectrum and therefore infinite rank (in the noise-free case where the kernel is not truncated).
Combining Parts 1–3, the logarithmic spiral with differential rotation achieves the maximum possible Fisher information subject to all constraints. □
Remark A1 (Role of α vs. β). The choice of how to split the combined parameter λ between the rotation index α and the spiral parameter β is not unique from a pure information-theoretic standpoint (only the product λ = αβ matters). However, physical considerations may favor certain values. For example:
In gravitational systems, Keplerian rotation is dictated by Newton's law.
In accretion disks with viscosity, the rotation law emerges from angular momentum transport.
In designed sensor networks, β might be chosen to fit geometric constraints of the deployment region.
Appendix 3.10. Connection to Bessel Functions and Identifiability Boundaries
The appearance of Bessel functions in both the Airy disk analysis (Section on diffraction) and the lighthouse problem has a common mathematical origin: rotational averaging, whether in space (circular aperture) or in time (rapid rotation).
Appendix 3.10.1. Bessel Zeros as Identifiability Boundaries
Zeros of Bessel functions,
J_n(z_{n,m}) = 0, mark points where rotational averaging causes complete destructive interference of the
n-th angular mode. At these points, the Fisher information for parameters related to that mode vanishes:
I_n ∝ J_n²(z) = 0 at z = z_{n,m}.
For the Airy disk in diffraction optics, the first zero of J₁ at z ≈ 3.83 defines the Rayleigh criterion: two point sources separated by an angular distance less than θ ≈ 1.22 λ/D become unresolvable, because the maximum of one source's diffraction pattern falls on the first dark ring of the other.
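The Rayleigh constant itself can be recovered from the first zero of J₁, using only its rotational-averaging integral representation (a self-contained sketch; trapezoid quadrature plus bisection):

```python
import numpy as np

def j1(z, n=20001):
    """J1 via its averaging integral: (1/pi) * integral of cos(phi - z sin(phi))."""
    phi = np.linspace(0.0, np.pi, n)
    f = np.cos(phi - z * np.sin(phi))
    h = phi[1] - phi[0]
    return (f[0] / 2 + f[1:-1].sum() + f[-1] / 2) * h / np.pi  # trapezoid rule

# Bisect for the first positive zero of J1 (the Airy first dark ring).
lo, hi = 3.0, 4.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if j1(lo) * j1(mid) <= 0:
        hi = mid
    else:
        lo = mid
z1 = 0.5 * (lo + hi)          # ~3.8317
rayleigh = z1 / np.pi         # the familiar 1.22 in theta = 1.22 * lambda / D
```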
For the lighthouse problem with differential rotation, the effective argument of the Bessel functions (arising from phase averaging combined with propagation delay) is
z_n = n r ω(r)/c,
where
n is the angular mode number,
r is the radial coordinate,
ω(r) is the angular velocity, and
c is the signal propagation speed. For power-law differential rotation
ω(r) = ω₀(r/r₀)^{−α}, this becomes
z_n = n (ω₀ r₀^α/c) r^{1−α}.
For α = 1 (constant rω(r)):
z_n = n ω₀ r₀/c.
The Bessel zeros are regularly spaced in the mode number n, independent of radius. However, the continuous distribution of sources across all radii means that while individual angular modes n may vanish at Bessel zeros for specific combinations of r and ω, the system as a whole retains information through the redundancy of the continuous spectrum. No single radial element is permanently lost to a Bessel zero: the infinite Hankel rank provides resilience against these identifiability boundaries.
Appendix 3.10.2. Hankel Matrix Structure and Logarithmic Self-Similarity
For the logarithmic spiral configuration, the Hankel matrix elements are
H_{ij} = ∫ n(r) J₀(z(r)) e^{iω(r)(i+j)Δt} dr.
Substituting the explicit forms r(θ) = r₀e^{βθ} and ω(θ) = ω₀e^{−λθ} expresses H_{ij} as a single integral over the spiral angle θ. The exponential weight comes from the frequency dependence of the Fourier transform. The Bessel function encodes the rotational phase averaging combined with the propagation-delay physics. The argument depends on θ through both the spatial coordinate (spiral geometry) and the frequency (differential rotation).
The kernel exhibits logarithmic scaling self-similarity: under the transformation
θ → θ + δθ, the radius scales as
r → r e^{βδθ}
and the frequency scales as
ω → ω e^{−λδθ}. The product rω/c
appearing in the Bessel argument therefore transforms as
rω/c → (rω/c) e^{β(1−α)δθ},
since λ = αβ. For α = 1 (constant rω), the product
rω(r) is invariant under logarithmic scaling, maintaining perfect self-similarity of the identifiability structure.
Appendix 3.11. Physical Manifestations and Concluding Remarks
The information-theoretic optimality of differential rotation on logarithmic spirals explains their ubiquity in natural systems:
Spiral galaxies: exhibit differential rotation ω(r) ∝ r^{−α} (approximately Keplerian in the outer regions). The spiral-arm structure maximizes information transmission about the mass distribution to external observers.
Accretion disks: around compact objects (black holes, neutron stars), differential rotation emerges from angular momentum transport. The logarithmic spiral structure optimizes radiative energy transfer.
Hurricanes and cyclones: Logarithmic spiral cloud bands with differential rotation optimize energy and momentum transport in atmospheric/oceanic vortices.
Biological structures: DNA double helix, nautilus shells, and other biological spirals provide optimal packing of genetic/structural information.
This analysis completes the trilogy of identifiability boundaries presented in the main text:
- 1.
First Law (u ≡ 0): no excitation ⇒ rank H = 0 (parable: physicist in darkness)
- 2.
Second Law (mass): conditioning parameter ⇒ the variance of the mass estimate grows with m (parable: heavy seeds are harder to identify)
- 3.
Rotation + Bessel zeros: angular averaging ⇒ Fisher information vanishes at J_n(z_{n,m}) = 0 (parable: lighthouse at a Bessel zero, observer blind)
The logarithmic spiral with differential rotation represents the optimal escape from the discrete limitations. By transforming the discrete optimization (K sources) into a continuous functional optimization (over n(r) and ω(r)), we achieve qualitative advantages: infinite Hankel rank, elimination of spectral interference, and complete parameter separability.
This is the deeper meaning of the “boundary of identifiability” as the central organizing principle: Physical laws describe not reality itself but the limits of what can be learned about reality from observations. The ubiquity of spiral structures in nature is not a statement about “how things are” but about “what can be known” — systems have evolved to maximize their informational capacity, and differential rotation on logarithmic spirals is the mathematically optimal solution to this evolutionary pressure.
The “frugal Devil’s wife” — Maria Agnesi’s mistranslated epithet — captures this insight perfectly: with finite resources (finite sensor length L, finite observation time T), the optimal strategy is not to deploy a handful of discrete lighthouses but to weave a continuous tapestry of rotating sources along nature’s most efficient curve, the logarithmic spiral. This is the ancient wisdom, rediscovered through modern information theory.
Appendix 4 Spectral Topology of Irreversibility: Information-Theoretic Foundation for Mass Anomalies in Non-Equilibrium Systems
Appendix 4.1. Introduction: Beyond Thermodynamic Irreversibility
Traditional explanations of irreversibility rely on statistical mechanics concepts — entropy growth, increase of accessible microstates, and the psychological arrow of time. While phenomenologically successful, this approach treats irreversibility as emergent from reversible microscopic dynamics, rather than as a fundamental information-theoretic phenomenon. This section develops an alternative framework in which irreversibility arises from the fundamental limits of spectral resolvability: when spectral components of a system overlap in such a way that their separation becomes fundamentally impossible, the information about initial conditions is lost not statistically, but geometrically.
The connection between spectral structure and physical parameters runs deeper than mere metaphor. As established in
Section 4, mass functions as a conditioning parameter: the variance of its estimate grows with m, indicating that heavier objects possess spectrally compressed information channels that are more susceptible to overlap. This section extends this insight to show that non-equilibrium processes (deformation, rotation, heating) modify the spectral topology of a system in ways that can be understood as transitions between discrete informational states with different effective masses.
The experimental foundations of this theory trace to the pioneering work of N.A. Kozyrev [
3,
4,
5], whose investigations of rotating mechanical systems revealed anomalous mass-dependent effects that defied conventional explanation. Kozyrev’s observations, met with skepticism due to inconsistent reproduction attempts in civilian laboratories, gain coherence when interpreted through the lens of spectral topology and persistent excitation requirements. His insistence on observing for integer numbers of rotational periods, previously dismissed as an experimental artifact, emerges as a precise formulation of the phase coherence condition necessary to avoid information loss through spectral leakage.
The theoretical framework presented here unifies these empirical observations with the mathematical apparatus of system identification, demonstrating that the boundary of identifiability — the horizon beyond which system parameters cannot be resolved — is identical to the boundary of informational irreversibility. This equivalence provides a rigorous foundation for understanding mass anomalies in non-equilibrium systems and suggests experimental protocols optimized for their detection.
Appendix 4.2. Spectral Overlap as the Fundamental Mechanism of Irreversibility
Consider a dynamical system characterized by a set of natural frequencies {ω_i} corresponding to its normal modes. In the absence of external perturbations, these modes are orthogonal in the spectral domain and can be uniquely identified from the system's response. The Fisher information matrix F, which quantifies the distinguishability of system parameters, is diagonal in this basis, and its determinant attains its maximum value, indicating full identifiability.
When the system enters a non-equilibrium state (through rotation, deformation, or thermal excitation), the spectral structure becomes perturbed. For a rotating system, this perturbation can be modeled as a phase modulation induced by the angular velocity
Ω. The accumulated phase during the characteristic propagation time
h is Δφ = Ωh. For a phase-modulated signal with modulation index
β = Δφ, Carson's rule gives the effective bandwidth:
B ≈ 2(β + 1) f_m,
where f_m is the modulation frequency. For systems with cylindrical symmetry, the angular dependence of dynamical modes is described by Bessel functions
J_n. When the accumulated phase
Δφ approaches a zero of
J_n, the corresponding mode becomes unobservable: its amplitude vanishes. The effective spectral line width therefore broadens proportionally to the accumulated phase:
Δω ≈ k ω₀ · Ωh,
where
k is a dimensionless constant determined by the system geometry, and ω₀ is the carrier frequency. This linear dependence follows directly from phase modulation theory and connects naturally to the discrete structure of Bessel function zeros, which determine the positions of informational minima in the parameter space. The condition for irreversible information loss occurs when the broadened lines begin to overlap:
Δω_i + Δω_j ≥ |ω_i − ω_j|.
At this point, the modes i and j become indistinguishable in the spectral domain. The Fisher information matrix acquires off-diagonal elements, and its determinant begins to decrease. When the overlap becomes complete — when det F → 0 — the system has crossed the information-theoretic horizon. No measurement, however precise, can recover the original parameters; the information about initial conditions has been lost not through statistical averaging, but through the geometry of spectral overlap.
This mechanism of irreversibility differs fundamentally from the thermodynamic entropy increase. Thermodynamic irreversibility emerges from the practical impossibility of tracking degrees of freedom; informational irreversibility arises from the mathematical impossibility of separating components that have merged in the spectral domain. The former is epistemic; the latter is ontological.
The phase coherence condition in system identification requires that observations be made over an integer number of rotational periods:
T = 2πN/Ω, N ∈ ℕ.
This condition ensures that the accumulated phase is an integer multiple of
2π, eliminating phase ambiguity in the spectral analysis. When observations are made over non-integer periods, the accumulated phase takes arbitrary values, and the spectral decomposition becomes contaminated by leakage artifacts. Kozyrev's empirical observation [
4] that integer rotational periods are essential for reproducible results finds rigorous justification through this formalism.
In the language of system identification, the phase coherence condition manifests through the Hankel matrix structure. The Hankel matrix H, formed from correlation functions of input and output signals, has rank equal to the number of identifiable modes. When the coherence condition is violated, the effective rank decreases as information from different modes becomes mixed in the Hankel singular value spectrum, and the system approaches the boundary of identifiability.
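The leakage mechanism behind the phase coherence condition is easy to sketch (a hypothetical 1024-sample window): with an integer number of periods, all spectral energy falls in a single FFT bin; with half a period extra, the energy smears across the spectrum.

```python
import numpy as np

N = 1024
n = np.arange(N)

def peak_energy_fraction(cycles):
    """Fraction of total spectral energy captured by the strongest FFT bin."""
    s = np.exp(2j * np.pi * cycles * n / N)    # 'cycles' rotations in the window
    p = np.abs(np.fft.fft(s)) ** 2
    return p.max() / p.sum()

coherent = peak_energy_fraction(37)     # integer number of periods: no leakage
leaky = peak_energy_fraction(37.5)      # half a period extra: strong leakage
```

The coherent case concentrates essentially all energy in one line, while the half-period offset leaves the peak bin with well under half of the energy, the rest spread over leakage sidelobes.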
Appendix 4.3. 9.3. Angular Velocity as an Information-Theoretic Control Parameter
The angular velocity Ω of a rotating system functions as a control parameter governing the distance from the information-theoretic horizon. In the language of phase transitions, Ω is the order parameter that drives the system through a continuous phase transition at the critical value Ω_c, where det F → 0.
The critical angular velocity is determined by the characteristic time
h:
Ω_c ≈ 1/h,
or equivalently,
Ω_c ≈ v/R,
where
v is the characteristic propagation velocity within the system and
R is its characteristic dimension. This relationship reveals that
Ω_c represents the frequency at which the rotational dynamics match the internal dynamical frequency of the system. Below
Ω_c, the system remains in the spectrally resolvable regime; above
Ω_c, spectral overlap dominates and informational irreversibility sets in.
The chirality of rotational effects, i.e., the dependence on the sign of
Ω, emerges naturally from the phase modulation formalism. The transformation
Ω → −Ω changes the sign of the accumulated phase
Δφ, which manifests as a mirror reflection of the spectral structure in the complex plane. For physical observables like energy and momentum, this reflection is invisible because these quantities depend on
Ω². For informational characteristics (spectral structure, correlation functions, the Fisher matrix itself), the sign of
Ω is critical. This distinction explains the chiral asymmetry observed in Kozyrev's experiments [
5,
6] and confirmed in subsequent investigations, including the cryogenic experiments of Tajmar and collaborators [
8,
10].
The angular velocity therefore controls not merely the magnitude of an effect, but its very nature. At low Ω, the system behaves classically, with parameters well-defined and distinguishable. At high Ω, the system enters the regime of spectral overlap, where parameters become uncertain and discrete transitions between informational states become possible. The transition is continuous in the mathematical sense, but the change in observable phenomenology is dramatic.
This framework provides a rigorous foundation for Kozyrev's qualitative observation [
7] that rotation creates what he termed a "flow of time." The "flow" is not a literal substance but the rate of information loss through spectral broadening, proportional to
Ω in the linear regime and diverging as the horizon is approached.
Appendix 4.4. Persistent Excitation and the Requirement of White Noise
The observation of informational effects in rotating systems requires more than mere rotation; it requires active probing through persistent excitation. This requirement was intuitively understood by Kozyrev, who employed mechanical vibrators in his experiments, but its theoretical justification emerges only from the information-theoretic framework.
For a system in the spectral overlap regime, the distinguishability of its modes depends on the spectral content of the probing signal. A monochromatic excitation at frequency ω₀ will couple strongly to modes near ω₀ and weakly to modes at other frequencies. The resulting measurement provides information only about the excited modes, leaving the unexcited modes unconstrained. This is the principle of persistent excitation: to identify a system fully, the probing signal must contain energy at all frequencies of interest.
White noise, with its flat power spectral density
S(ω) = const, provides optimal persistent excitation. As formulated in the classical system identification literature [
1], a signal u is persistently exciting of order
n if its spectral density is nonzero at no fewer than n distinct frequencies:
Φ_u(ω) > 0 at at least n points ω.
The uncorrelated samples of white noise ensure statistical independence between measurement instants, and its uniform spectral coverage excites all modes of the system. The information gained about each mode is maximized, and the covariance of parameter estimates is minimized — precisely the condition for optimal identifiability.
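The order of persistent excitation can be illustrated with a small sketch (a hypothetical 4-tap FIR identification setup): a single sinusoid is persistently exciting of order 2 only, so its lagged-regressor matrix has rank 2, while white noise excites all four directions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
p = 4                                    # number of FIR taps to identify

def regressor_rank(u, tol=1e-8):
    # Rows are lagged input windows [u(t), u(t-1), ..., u(t-p+1)].
    Phi = np.array([u[t - np.arange(p)] for t in range(p, T)])
    sv = np.linalg.svd(Phi, compute_uv=False)
    return int(np.sum(sv > sv[0] * tol))

white = rng.standard_normal(T)           # persistently exciting of every order
sine = np.sin(0.7 * np.arange(T))        # single sinusoid: order 2 only

rank_white = regressor_rank(white)       # 4: all taps identifiable
rank_sine = regressor_rank(sine)         # 2: only two directions excited
```

This is exactly the failure mode described below for narrowband excitation: the unexcited directions of parameter space remain unconstrained no matter how long the record is.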
Many attempts to reproduce Kozyrev’s results failed because they employed deterministic or narrowband excitation rather than white noise. Without persistent excitation, the spectral overlap could not be fully probed, and the characteristic signatures of informational effects remained below the detection threshold. This explains the inconsistent literature on Kozyrev replication: successful experiments employed adequate excitation, while unsuccessful experiments did not.
White noise excitation corresponds to the most mixed quantum state, the thermal state at infinite temperature. This maximally mixed state maximizes the entropy of the probing field while minimizing its correlation with any particular system mode. The information gained through such excitation is therefore the most general and least biased possible.
Cryogenic temperatures enhance the observability of informational effects through multiple mechanisms. First, thermal fluctuations are suppressed exponentially according to the Boltzmann distribution, reducing the "informational noise" that masks weak effects. Second, the material parameters $v$ and $R$ change with temperature, modifying the characteristic time of the system and shifting the critical angular velocity $\Omega_c$. Third, detector noise decreases, improving the signal-to-noise ratio for the weak signals associated with spectral overlap. These factors combine to explain the enhanced reproducibility of cryogenic experiments, including those of Tajmar and collaborators [8, 9] who observed anomalous signals up to 18 orders of magnitude larger than classical gravitomagnetic predictions at temperatures near 5 Kelvin.
Appendix 4.5. Discrete Transitions and the Information Potential
In the vicinity of the information-theoretic horizon, the system does not exhibit continuous variation of its effective parameters. Instead, discrete transitions between distinct informational states are observed. These transitions manifest experimentally as sudden jumps in the inferred mass $m$, occurring at seemingly random intervals and with amplitudes drawn from a discrete set.
The discreteness of these transitions finds explanation through the concept of an information potential $\Phi$, which can be rigorously defined in terms of Hankel singular values. The Hankel singular values (HSV) of a system, obtained from the singular value decomposition of the Hankel matrix $H$, characterize the importance and strength of controllability and observability of each mode [1]. Arranged in decreasing order,

$$\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_n > 0,$$

the HSV define an information-theoretic landscape in parameter space. The information potential can be defined as

$$\Phi(\theta) = -\sum_{i} \ln \sigma_i(\theta),$$

where $\theta$ represents the system parameters including mass. Local minima of this potential correspond to configurations with maximal Hankel singular values, i.e., with maximal identifiability.
The ratio of consecutive HSV,

$$\rho_i = \frac{\sigma_i}{\sigma_{i+1}},$$

characterizes the "depth" of the potential landscape. Large values ($\rho_i \gg 1$) indicate the presence of pronounced local minima — valleys in the information landscape between which the system can become trapped.
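A minimal numerical sketch of this construction, using a hypothetical two-mode state-space model in which the second mode barely reaches the output (all matrices below are illustrative choices, not values from the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def hankel_singular_values(A, B, C):
    """sigma_i = sqrt(eig(P @ Q)) with controllability Gramian P and
    observability Gramian Q from the continuous Lyapunov equations."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    return np.sort(np.sqrt(np.linalg.eigvals(P @ Q).real))[::-1]

def information_potential(A, B, C):
    """Phi = -sum_i ln sigma_i: lower Phi means higher identifiability."""
    return -np.sum(np.log(hankel_singular_values(A, B, C)))

# Two lightly damped modes; the second plays the role of a "shadow" mode.
A = np.array([[-0.1,  1.0,  0.0,  0.0],
              [-1.0, -0.1,  0.0,  0.0],
              [ 0.0,  0.0, -0.1,  2.0],
              [ 0.0,  0.0, -2.0, -0.1]])
B = np.array([[1.0], [0.0], [1.0], [0.0]])
C = np.array([[1.0, 0.0, 0.01, 0.0]])    # weak projection of mode 2

s = hankel_singular_values(A, B, C)
print(s)                              # large gap sigma_2 >> sigma_3
print(information_potential(A, B, C))
```

The pronounced ratio $\sigma_2/\sigma_3$ is exactly the "depth" diagnostic described above: the weakly observable mode contributes small HSV and hence a high contribution to $\Phi$.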
The effective mass of the system is determined by its position in this landscape:

$$m_{\mathrm{eff}} = m_0 + \sum_{k} p_k \, \Delta m_k,$$

where $m_0$ is the baseline mass of the non-rotating system, $\Delta m_k$ are the discrete mass shifts corresponding to transitions between minima, and $p_k$ are the occupation probabilities of these metastable states.
At low temperatures and high angular velocities, the system becomes trapped in individual local minima, exhibiting hysteresis and history dependence. Transitions between minima occur when external perturbations — mechanical vibrations, thermal fluctuations, or quantum tunneling events — provide sufficient energy to overcome the barriers separating the minima. The amplitudes are determined by the topology of the information potential and are therefore universal for a given class of systems, depending only on the geometry and material properties, not on the detailed experimental conditions.
This framework explains both the discreteness of mass jumps (the system moves between discrete minima) and their bidirectional nature (jumps can be positive or negative depending on the relative depths of the minima and the direction of perturbation). The information potential replaces the thermodynamic free energy as the relevant potential function, reflecting the information-theoretic rather than energetic nature of the transitions.
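The trapping-and-hopping picture can be caricatured with a toy double-well potential and Metropolis dynamics; the potential shape, temperature, and step size below are arbitrary illustrative choices, not quantities from the theory:

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(x):
    """Toy double-well 'information potential' with minima at x = -1, +1."""
    return (x**2 - 1.0)**2

# Metropolis dynamics: perturbations occasionally carry the system over
# the barrier at x = 0, producing discrete, bidirectional jumps between
# two metastable states (a stand-in for the discrete mass shifts).
x, temp = 1.0, 0.3
traj = np.empty(20000)
for i in range(traj.size):
    x_new = x + 0.3 * rng.standard_normal()
    if rng.random() < np.exp(-(phi(x_new) - phi(x)) / temp):
        x = x_new
    traj[i] = x

occupancy_plus = np.mean(traj > 0.0)   # time share in the +1 well
print(occupancy_plus)                  # both wells visited
```

The trajectory dwells near the minima and crosses the barrier only sporadically, reproducing the hysteresis-like residence in discrete states described above.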
The condition number of the transfer function matrix, $\kappa(G)$, diverges as the system approaches the information-theoretic horizon. The three conditions — the divergence of $\kappa(G)$, the vanishing of the smallest Hankel singular value $\sigma_n$, and the degeneracy of the Fisher information matrix — are equivalent signatures of the identifiability boundary, all indicating the same fundamental limit of distinguishability between system parameters. The observability index, which quantifies the rate at which information about system states appears at the outputs, provides additional characterization of the potential landscape structure.
Kozyrev’s observations [5] of "stepwise" changes in system weight, his noting of "capture" in certain states, and his documentation of history dependence are all consistent with this picture of an information potential with multiple local minima. His qualitative descriptions, formulated without the mathematical apparatus of system identification, nonetheless captured the essential phenomenology of discrete informational transitions.
Appendix 4.6. Historical Context: Kozyrev’s Experiments and the Reproduction Question
The experimental work of N.A. Kozyrev (1908-1983) on rotating mechanical systems remains one of the most intriguing yet controversial episodes in the history of unconventional physics. Kozyrev, an accomplished astrophysicist recognized for his pioneering work on lunar volcanism and stellar spectroscopy, turned in his later career to investigations he termed "causal mechanics" [3, 7] — an attempt to establish a physics of irreversible time.
His experiments with rotating gyroscopes, suspended from torsion balances and subjected to mechanical vibration, revealed apparent anomalies: changes in effective weight that depended on the angular velocity and direction of rotation. These observations were reported with impressive consistency over several decades of research [5], yet independent reproduction attempts yielded mixed results. American researchers using precision gyroscopes found no weight changes [12, 13]; a French group recorded anomalies; Japanese investigators at cryogenic temperatures reported positive results [11]. The pattern of success and failure correlates strongly with experimental conditions, particularly the quality of vibration excitation and the temperature of the sample.
A hypothesis that coherently explains these observations involves Kozyrev’s access to Soviet military technology during the Cold War era. High-precision gyroscopes for aviation and navigation, and random noise generators for cryptographic applications, were classified technologies of that period. Kozyrev’s institutional position may have provided access to such equipment, giving his experiments capabilities unavailable to civilian laboratories. The absence of technical details in his publications, often attributed to incomplete understanding, may alternatively reflect classification constraints on sensitive equipment specifications.
This hypothesis explains both the consistency of Kozyrev’s results and the difficulties of civilian reproduction attempts. White noise excitation with controlled spectral density, integer-period synchronization, and cryogenic operation — conditions we now understand as essential — required technologies that were unavailable outside military contexts. The modern availability of these technologies democratizes the experimental study of informational effects, enabling systematic investigation that was impossible in Kozyrev’s era.
The theoretical framework developed in this section provides a unified interpretation of Kozyrev’s observations. His "time flow" is the informational loss rate through spectral broadening. His insistence on integer rotational periods is the phase coherence condition $T_{\mathrm{obs}} = n\,T_{\mathrm{rot}}$. His discrete jumps are transitions between minima of the information potential defined through Hankel singular values. His chiral effects are manifestations of the odd parity of phase modulation under $\Omega \to -\Omega$. The empirical content of Kozyrev’s work survives the transition to modern information-theoretic language, while his speculative interpretations are clarified and, where necessary, corrected.
Future experimental programs should incorporate the lessons of this historical analysis. White noise excitation with verified spectral flatness, precise synchronization to integer rotational periods, cryogenic operation to suppress thermal noise, and chiral discrimination between clockwise and counterclockwise rotation constitute the optimal protocol for investigating informational effects in rotating systems. The theoretical framework predicts specific experimental signatures that distinguish this interpretation from alternatives, enabling critical testing and further development of the theory.
Appendix 5 Where the Celestial Beacons Lead: Shadow Modes, Information Echoes, and the Fractal Topology of Pulsar Dynamics
Appendix 5.1. Observational Evidence for Quasi-Periodic Structures
Pulsars, as natural laboratories with rotation frequencies spanning from millisecond to several-second regimes, provide unique opportunities to test predictions of spectral irreversibility theory. Recent observational campaigns have revealed a rich structure of quasi-periodic oscillations (QPOs) in pulsar timing residuals, particularly in post-glitch recovery phases, which may represent direct manifestations of shadow modes and information echoes predicted by the theoretical framework.
The analysis of fourteen-year timing residual data from the Vela pulsar using correlation sum techniques revealed a low, non-integer fractal dimension, suggesting underlying dynamical structure that could indicate a chaotic attractor or, alternatively, the projection of higher-dimensional dynamics onto the observable subspace [14]. This finding established an important precedent: pulsar timing noise is not purely stochastic but contains structured components amenable to systematic analysis.
More recent work on post-glitch recovery of the Vela pulsar has uncovered quasi-periodic oscillations at three characteristic periods of several hundred days, each with a statistically significant Z-score, in the vortex residuals [15]. These damped sinusoidal-like oscillations in the spin-down rate are interpreted within the vortex bending model as arising from the collective response of the superfluid interior to glitch-induced perturbations. Crucially, these oscillations are decisively associated with the triggering glitch rather than accumulated history, indicating their transient nature consistent with information echo phenomena.
Systematic monitoring of 259 isolated radio pulsars between 2007 and 2023 revealed that 238 displayed significant variability in their spin-down rates, with quasi-periodic oscillations identified in 45 pulsars through visual inspection and Lomb-Scargle periodogram analysis [16]. Notably, some pulsars exhibit both long and short modulation timescales that may be harmonically related, while others show dual modulation timescales with approximate fractional relations. An empirical power-law relation between modulation period and spin period holds across the population, suggesting a universal mechanism underlying the observed QPO hierarchy. Importantly, the observed scaling exponent differs from the Tkachenko prediction of $1/2$. Within the spectral irreversibility framework, this discrepancy arises naturally from the fractal effective dimension of the information potential, which deviates from ideal geometric predictions due to partial observability of shadow modes and the discrete structure of Hankel singular values.
Appendix 5.2. Theoretical Interpretation: Shadow Modes and Information Echoes
Within the framework of spectral irreversibility theory, the observed quasi-periodic oscillations can be interpreted as manifestations of shadow modes — dynamical components that exist in the full multidimensional system but project weakly or not at all onto the observable electromagnetic channel. Rotation creates coupling between previously independent modes through Coriolis and centrifugal terms in the equations of motion, partially illuminating these shadow components and making them accessible to observation.
The information echo concept provides a natural explanation for the characteristic timescales and statistical properties of observed QPOs. To strictly enforce chirality and energy conservation, the interaction is modeled via an anti-Hermitian coupling matrix. The evolution of the observable mode $x_1$ and the hidden shadow mode $x_2$ is given by

$$\frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} i\omega_1 & g(\Omega) \\ -g^{*}(\Omega) & i\omega_2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} + \begin{pmatrix} \xi(t) \\ 0 \end{pmatrix},$$

where $g(\Omega)$ is an odd function of angular velocity, satisfying $g(-\Omega) = -g(\Omega)$ and ensuring odd symmetry under the time-reversal parity $\Omega \to -\Omega$. The asterisk denotes complex conjugation, making the off-diagonal elements conjugate antisymmetric. The anti-Hermitian structure reflects information rather than energy flow: the measurement process selects a projection that breaks reciprocity between observable and shadow modes, allowing information to leak from the observable to the shadow channel but preventing the reverse. Note that the stochastic driving force $\xi(t)$ acts solely on the observable channel, reflecting the physical reality that measurement noise and external excitation enter through the accessible electromagnetic channel rather than directly perturbing the shadow mode. The eigenfrequencies of the coupled system are $\omega_{\pm} = \bar{\omega} \pm \Delta$, where $\bar{\omega} = (\omega_1 + \omega_2)/2$ and $\Delta = \sqrt{\delta^2 + |g|^2}$ with detuning $\delta = (\omega_1 - \omega_2)/2$. The beat frequency $2\Delta$ determines the information echo period $T_{\mathrm{echo}} = \pi/\Delta$.
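A short numerical check of the coupled-mode picture, with illustrative values for $\omega_1$, $\omega_2$, and a real coupling $g$ (taken real purely for simplicity), showing that an excitation placed in the observable channel leaks into the shadow mode and revives on the echo timescale:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative parameters: nearby mode frequencies, weak real coupling.
w1, w2, g = 1.00, 1.10, 0.03
M = np.array([[1j * w1,       g],
              [      -g, 1j * w2]])      # anti-Hermitian coupling block

delta = 0.5 * (w1 - w2)
Delta = np.sqrt(delta**2 + g**2)
T_echo = np.pi / Delta                   # beat (information echo) period

# Start with all excitation in the observable channel x1.
ts = np.linspace(0.0, T_echo, 400)
amp1 = np.array([abs((expm(M * t) @ np.array([1.0, 0.0]))[0]) for t in ts])

t_min = ts[np.argmin(amp1)]
print(T_echo)
print(t_min)   # deepest leakage into the shadow mode near T_echo / 2
```

Because the coupling block is anti-Hermitian, the evolution is unitary: the total norm is conserved, and $|x_1|$ dips halfway through the echo period before reviving fully at $T_{\mathrm{echo}}$.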
Critical predictions follow from this model. First, the information echo is maximized at a critical frequency where coupling and detuning are comparable, and vanishes both in the weak-coupling limit and at mode fusion, where the two eigenmodes become indistinguishable. Second, the echo amplitude scales with the excitation level of the system, being most pronounced during glitch recovery when the system passes through the parameter regime of enhanced coupling. Third, the coupling structure with $g(\Omega)$ in the off-diagonal implies chirality: the phase and potentially the amplitude of information echoes should depend on the direction of rotation relative to other axes (e.g., magnetic axis), with opposite signs for opposite rotation directions.
The fractal hierarchy of quasi-periodicities observed in pulsar timing data finds a natural explanation in the structure of the information potential $\Phi = -\sum_i \ln \sigma_i$, where $\sigma_i$ are the Hankel singular values of the system. The minima of this potential correspond to states of enhanced identifiability, and transitions between minima during glitch events generate the observed QPO spectrum. If the system dimension is non-integer, as the present framework suggests, the spectral indices follow non-integer relations, producing the observed fractional period ratios.
The recoverability of the signal amplitude $A$ from the timing residuals scales with the singular values of the Hankel matrix according to $\sigma \propto A^{\gamma}$, with the empirical exponent $\gamma$ falling below the ideal value of 2. This scaling arises from the asymptotic covariance of the parameter estimates, governed by the Fisher Information Matrix $F$:

$$F_{ij} = \frac{1}{\sigma_n^2} \sum_{t} \frac{\partial s(t;\theta)}{\partial \theta_i}\, \frac{\partial s(t;\theta)}{\partial \theta_j},$$

where $s(t;\theta)$ is the deterministic signal model and $\sigma_n^2$ the noise variance. For a harmonic oscillator embedded in noise, the frequency information term scales as $F_{\omega\omega} \propto A^2 T^3$, linking the Cramér-Rao bound directly to signal strength. When coupled modes are present, the effective information about the shadow mode grows with the square of the coupling coefficient, which itself may depend on the excitation level. The deviation from the ideal value $\gamma = 2$ arises from partial observability: if shadow modes project onto the observable channel with efficiency $\eta < 1$, the effective Fisher information scales as $\eta^2 A^2$, yielding $\gamma = 2\eta$. Empirically inferred efficiencies $\eta < 1$ thus account for the observed sub-quadratic range of $\gamma$.
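The quoted $A^2 T^3$ scaling of the frequency information can be verified numerically for a pure sinusoid in white noise; the amplitude, frequency, and duration below are arbitrary illustrative values:

```python
import numpy as np

def fisher_freq(A, omega, T, dt=0.01, sigma=1.0, phase=0.3):
    """Fisher information about omega for s(t) = A sin(omega t + phase)
    in white Gaussian noise: F = (1/sigma^2) sum_t (ds/domega)^2."""
    t = np.arange(0.0, T, dt)
    ds_domega = A * t * np.cos(omega * t + phase)
    return np.sum(ds_domega**2) / sigma**2

F0 = fisher_freq(1.0, 2.0, 50.0)
ratio_A = fisher_freq(2.0, 2.0, 50.0) / F0    # doubling the amplitude
ratio_T = fisher_freq(1.0, 2.0, 100.0) / F0   # doubling the duration
print(ratio_A)   # 4: quadratic in amplitude
print(ratio_T)   # ~8: cubic in duration
```

Doubling $A$ multiplies the information by exactly 4, while doubling $T$ multiplies it by approximately 8, up to small oscillatory corrections of order $1/(\omega T)$.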
Appendix 5.3. Tkachenko Oscillations and Vortex Lattice Dynamics
The standard interpretation of quasi-periodic structures in pulsar timing connects them to Tkachenko oscillations — collective elastic oscillations of the triangular vortex lattice formed in the superfluid interior of neutron stars due to rotation [17]. These oscillations occur in planes orthogonal to the rotation axis and generate transverse sound waves through the vortex lattice, causing periodic variations in the angular momentum of the superfluid component.
The observed quasi-periodicities, particularly the 256-day and 511-day oscillations in PSR B1828-11, have been modeled within the Tkachenko oscillation framework as manifestations of a combined superfluid vortex lattice [17]. A characteristic relation between oscillation period $T$, rotation period $P$, and superfluid region radius $R$ can be derived from the dispersion relation for Tkachenko waves, yielding an approximate scaling $T \propto \sqrt{P}$ for fixed wavenumber.
Within the spectral irreversibility framework, Tkachenko oscillations represent one specific realization of the coupled-mode dynamics. The two-dimensional vortex lattice naturally produces Bessel function eigenmodes, and the coupling between these modes under rotation creates the hierarchical structure observed in timing data. The empirical scaling relation agrees quantitatively with the Tkachenko model for ideal geometric configurations, providing a baseline for understanding deviations observed in real systems.
The framework extends the standard Tkachenko interpretation by adding several testable elements. First, the anti-Hermitian coupling structure predicts chirality effects absent from the standard model. Second, the discrete topology of the information potential predicts an Integer Period Effect: detection significance peaks when observation windows contain integer numbers of echo periods, beyond mere spectral leakage artifacts. Third, the fractal effective dimension predicts hierarchical period ratios following continued fraction expansions of specific irrational numbers, rather than simple harmonic relationships. These additions provide distinctive signatures that distinguish the framework from standard vortex physics interpretations.
Table A1.
Distinguishing predictions between standard vortex physics and spectral irreversibility framework
| Observable | Tkachenko/Vortex Model | Spectral Irreversibility |
| --- | --- | --- |
| Period scaling | $T \propto \sqrt{P}$ (geometric) | Exponent modified by fractal dimension |
| Chirality | No prediction | Odd in $\Omega$: $g(-\Omega) = -g(\Omega)$ |
| Integer period effect | Not predicted | Critical for detection |
| Fractal hierarchy | Discrete spectrum | Irrational period ratios |
| Partial observability | Implicit | Explicit via efficiency $\eta$ |
Within this framework, vortex lattice eigenmodes are proportional to $J_m(kr)$, where $J_m$ are Bessel functions of the first kind. At radii where $J_m(kr)$ vanishes, the corresponding mode has zero projection onto the observable electromagnetic channel — these are precisely the shadow modes whose signatures appear as quasi-periodic oscillations in timing residuals. The spacing between successive Bessel zeros determines the hierarchical structure of observable periods, connecting directly to the information potential’s discrete topology and the lighthouse section’s analysis of mode identifiability.
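The radii of vanishing projection can be sketched directly from the zeros of $J_0$, taking the $m = 0$ mode and unit wavenumber purely for illustration:

```python
import numpy as np
from scipy.special import j0, jn_zeros

# First five radii (in units where k = 1) at which the m = 0 lattice
# eigenmode J_0(r) has zero projection on the observation channel.
zeros = jn_zeros(0, 5)
print(zeros)                        # 2.405, 5.520, 8.654, 11.792, 14.931
print(np.diff(zeros))               # spacing tends to pi for high zeros
print(np.abs(j0(zeros)).max())      # numerically zero at each root
```

The near-constant spacing of successive zeros (approaching $\pi$) is what fixes the asymptotically regular hierarchy of observable periods mentioned above.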
Appendix 5.4. Predictions for System Identification Analysis
Spectral irreversibility theory generates specific, testable predictions that distinguish it from standard interpretations of pulsar timing structures. These predictions derive from fundamental principles of system identification rather than specific physical models of neutron star interiors.
Prediction 1: Excitation-Dependent Amplitude. The amplitude $A$ of information echoes should scale with the excitation level of the system, approximately as $A \propto \mathcal{E}^{\gamma}$ with $\gamma$ somewhat below 2, where $\mathcal{E}$ represents the timing noise amplitude or other indicators of internal activity. This prediction follows from information theory: the accessible information about coupled modes increases with the signal-to-noise ratio, and the Fisher Information Matrix elements scale quadratically with signal amplitude. The deviation of $\gamma$ from the ideal value of 2 arises from partial observability of shadow modes, parameterized by efficiency $\eta < 1$, yielding $\gamma = 2\eta$. Glitches, representing extreme excitation events, should produce the largest and most detectable echoes, consistent with observations of prominent QPOs in post-glitch data.
Prediction 2: Integer Period Effect. The discrete nature of observation introduces the Integer Period Effect. For a pure tone $\sin(2\pi t/T_0)$ observed over a finite duration $T$, the windowed Fourier transform yields a spectrum proportional to $\operatorname{sinc}\big((f - f_0)\,T\big)$ around the tone frequency $f_0 = 1/T_0$. The spectral power at $f_0$ is maximal if and only if:

$$T = n\,T_0, \qquad n \in \mathbb{Z}^{+}.$$

Deviations from this condition result in spectral leakage, where power disperses into sidelobes, potentially masking weak beacons beneath the noise floor. Following the system identification principle of spectral leakage, quasi-periodic structures should be most detectable when the observation window contains an integer number of echo periods. This predicts periodic modulation of statistical significance with observation duration: for an echo with period $T_0$, significance should peak at window lengths $T = nT_0$ and be suppressed at $T = (n + \tfrac{1}{2})T_0$. The Vela pulsar observations, spanning approximately 100 months, fall near maxima for periods of 314 and 344 days, explaining the high significance of these detections. Importantly, this is not merely a windowing artifact: the information potential $\Phi$ possesses additional structure at integer periods due to the discrete spectrum of Hankel singular values, maximizing identifiability when the system’s observable states align with the measurement grid.
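The windowing effect invoked here can be reproduced in a few lines: a unit-amplitude tone is recovered in full only when the rectangular window spans an integer number of periods (tone period and sampling step below are arbitrary):

```python
import numpy as np

def max_bin_amplitude(T_window, T0=10.0, dt=0.05):
    """Largest single-sided DFT amplitude for a unit sine of period T0
    observed through a rectangular window of length T_window."""
    t = np.arange(0.0, T_window, dt)
    x = np.sin(2.0 * np.pi * t / T0)
    spec = 2.0 * np.abs(np.fft.rfft(x)) / len(t)
    return spec.max()

a_on = max_bin_amplitude(50.0)    # 5 full periods: tone sits on a DFT bin
a_off = max_bin_amplitude(55.0)   # 5.5 periods: worst-case leakage
print(a_on)    # ~1.0, full recovery
print(a_off)   # ~0.64, power dispersed into sidelobes
```

At a half-integer window the tone falls exactly between DFT bins, and the strongest bin retains only about $2/\pi \approx 0.64$ of the true amplitude, which is the suppression mechanism the prediction relies on.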
Prediction 3: Aliasing Patterns. True information echoes with frequencies above the Nyquist frequency of the observing cadence will produce aliasing patterns following specific rules. If the true period $T_{\mathrm{true}}$ is shorter than twice the sampling interval $\Delta t$, the observed period $T_{\mathrm{obs}}$ will satisfy:

$$\frac{1}{T_{\mathrm{obs}}} = \left| \frac{1}{T_{\mathrm{true}}} - \frac{n}{\Delta t} \right|$$

for some integer $n$. Cross-validation between datasets with different sampling frequencies should reveal these aliasing signatures, distinguishing true high-frequency echoes from artifacts.
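The aliasing rule can be illustrated with a hypothetical cadence (the 42-day signal and 30-day sampling are invented for the example): a 42-day oscillation observed every 30 days folds to an apparent 105-day period, since $|1/42 - 1/30| = 1/105$.

```python
def alias_frequency(f_true, f_samp):
    """Fold a true frequency into the observable band [0, f_samp / 2]."""
    f = f_true % f_samp
    return min(f, f_samp - f)

# Hypothetical cadence: a 42-day oscillation observed every 30 days.
f_obs = alias_frequency(1.0 / 42.0, 1.0 / 30.0)
print(1.0 / f_obs)   # apparent period ~105 days
```

Re-running the same fold with a different cadence moves the apparent period, which is exactly the cross-validation signature the prediction calls for.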
Prediction 4: Fractal Hierarchy of Period Ratios. The ratios of observed quasi-periodicities within individual pulsars should form a fractal set containing infinitely many rational approximations to irrational numbers. This follows from the non-integer dimension hypothesis and the structure of the information potential. A quantitative test distinguishes the prediction from random sampling: period ratios should follow the continued fraction expansion of specific irrational numbers predicted by the information potential’s fractal dimension. For a pulsar with detected QPOs, compute all pairwise ratios and compare their continued fraction convergents against the predicted sequence. Systematic agreement, rather than coincidental approximations, would strongly support the fractal hierarchy prediction.
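A sketch of the proposed convergent comparison, using the golden ratio purely as a stand-in for whatever irrational number the information potential would actually predict:

```python
def continued_fraction(x, depth=6):
    """Leading terms [a0; a1, a2, ...] of the expansion of x."""
    terms = []
    for _ in range(depth):
        a = int(x)
        terms.append(a)
        frac = x - a
        if frac < 1e-12:
            break
        x = 1.0 / frac
    return terms

def convergents(terms):
    """Rational convergents p/q via the standard recurrence."""
    p_prev, q_prev, p, q = 1, 0, terms[0], 1
    out = [(p, q)]
    for a in terms[1:]:
        p_prev, q_prev, p, q = p, q, a * p + p_prev, a * q + q_prev
        out.append((p, q))
    return out

golden = (1.0 + 5.0 ** 0.5) / 2.0
print(continued_fraction(golden))               # [1, 1, 1, 1, 1, 1]
print(convergents(continued_fraction(golden)))  # Fibonacci ratios to 13/8
```

In the proposed test, each measured period ratio would be expanded the same way and its convergents compared against the sequence predicted for the target irrational; systematic agreement across pairs, not isolated coincidences, is the criterion.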
Prediction 5: Chirality Signatures. Since the coupling coefficient is odd in angular velocity, information echoes should exhibit asymmetry with respect to rotation direction. Pulsars with measured precession axes should show correlations between echo phase and precession phase, with the sign of correlation determined by rotation chirality. Testing this prediction requires population-level statistical analysis but provides a distinctive signature of the rotational coupling mechanism.
These predictions are not mutually exclusive with the Tkachenko or vortex pinning interpretations but rather provide a complementary perspective emphasizing the information-theoretic rather than purely mechanical nature of the phenomena. Positive tests of multiple predictions would strongly support the spectral irreversibility framework while also constraining the physical parameters of neutron star interiors.
Appendix 5.5. Methodology for Observational Verification
Successful testing of the predictions outlined above requires a systematic, multi-stage approach to pulsar timing analysis. The following methodology provides a structured framework for observers seeking to search for shadow mode signatures and information echoes in pulsar timing data.
Step 1: Sample Selection. The choice of target pulsars significantly impacts the ability to detect and characterize quasi-periodic structures. Optimal targets satisfy the following criteria: (i) well-characterized glitch history with precise measurements of glitch epochs, sizes, and recovery parameters (examples include the Vela pulsar, the Crab pulsar, and PSR B1931+24); (ii) diversity in rotation frequency, spanning from millisecond pulsars ($\Omega \sim 10^3$–$10^4$ rad/s) to slowly rotating pulsars ($\Omega \lesssim 1$ rad/s), to test the frequency dependence of coupling; (iii) multiple observing campaigns with different cadences (e.g., daily observations with CHIME, weekly observations with Parkes or MeerKAT) to enable cross-validation for aliasing tests. A minimum sample of 10–15 pulsars spanning this parameter space provides sufficient statistical power for population-level tests.
Step 2: Pre-processing and Residual Extraction. The raw timing data must be carefully processed to isolate intrinsic quasi-periodic variations from instrumental and propagation effects. The procedure includes: (i) fitting and removal of the pulsar timing model (spin frequency, position, proper motion, dispersion measure variations) using standard timing software such as TEMPO or TEMPO2; (ii) identification and excision of glitch epochs and their immediate aftermath; (iii) whitening of the residual time series to reduce red noise power, either through autoregressive modeling or by differencing; (iv) segmentation into post-glitch intervals for individual analysis. The resulting timing residuals should be approximately white noise with known variance for subsequent spectral analysis.
Step 3: Quasi-Periodic Oscillation Detection. Multiple complementary methods should be applied to maximize detection probability and characterize the detected signals. (i) Spectral methods: the Lomb-Scargle periodogram provides robust periodogram estimation for unevenly sampled data, while the Generalized Lomb-Scargle variant properly handles weighted data. The significance of detected peaks should be assessed against false alarm probability thresholds derived from extensive Monte Carlo simulations of the noise background. (ii) Wavelet analysis: continuous wavelet transforms (e.g., with the Morlet wavelet) provide the time-frequency resolution necessary to track QPO evolution through glitch recovery phases and identify transient structures. (iii) Recurrence quantification analysis (RQA): phase space reconstruction via the Takens embedding method ($d$-dimensional delay vectors) followed by construction of the recurrence matrix reveals diagonal structures corresponding to quasi-periodic trajectories. Key RQA metrics include the determinism (DET), derived from the distribution of diagonal line lengths, and the recurrence rate. (iv) Sliding window analysis: to test Prediction 2 (Integer Period Effect), the significance of detected periods should be computed as a function of the analysis window length $T_w$. Peaks at $T_w = nT$ for integer $n$, where $T$ is the candidate period, provide strong evidence for the spectral leakage mechanism.
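A compressed sketch of the spectral step (i), applied to synthetic, unevenly sampled residuals with an injected 314-day signal; the amplitudes, time span, and sample count are invented for the demonstration:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)

# Synthetic unevenly sampled residuals: weak 314-day signal in unit noise.
t = np.sort(rng.uniform(0.0, 3000.0, 400))            # epochs, days
y = 0.5 * np.sin(2.0 * np.pi * t / 314.0) + rng.standard_normal(t.size)
y -= y.mean()                                          # pre-center

periods = np.linspace(50.0, 1000.0, 2000)              # search grid, days
power = lombscargle(t, y, 2.0 * np.pi / periods)       # angular frequencies
best_period = periods[np.argmax(power)]
print(best_period)   # recovered near the injected 314 days
```

In a real analysis the peak would then be referred to a Monte Carlo false-alarm threshold, as described in step (i), rather than accepted from the raw periodogram.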
Step 4: Cross-Validation and Prediction Testing. The final stage involves systematic testing of the five predictions using the detected QPO sample. (i) For Prediction 1 (Excitation-Dependent Amplitude), construct a scatter plot of $\ln A$ versus $\ln \mathcal{E}$ for the ensemble of detected signals and perform linear regression to estimate the exponent $\gamma$. (ii) For Prediction 2 (Integer Period Effect), verify that detection significance peaks when the observation window contains an integer number of echo periods, using the sliding window analysis from Step 3. (iii) For Prediction 3 (Aliasing Patterns), compare period measurements from different telescopes with different sampling cadences and verify that the differences satisfy the aliasing equation. (iv) For Prediction 4 (Fractal Hierarchy), for pulsars with three or more detected QPOs, compute all pairwise period ratios and test whether they form a dense set approximating irrational numbers through continued fraction analysis. (v) For Prediction 5 (Chirality), subset the sample by rotation direction (inferred from geometry or precession measurements) and test for asymmetry in QPO properties between subsets.
Successful implementation of this methodology will either validate the spectral irreversibility framework or constrain its parameters, contributing to the understanding of both neutron star physics and the fundamental limits of identifiability in dynamical systems.
Appendix 5.6. Methodological Considerations for Future Analysis
Successful testing of the predictions outlined above requires careful attention to methodological issues that have complicated previous analyses. The finding that random walks with steep power spectra can mimic strange attractors in correlation dimension analysis [14] demonstrates that distinguishing chaotic dynamics from projection effects requires sophisticated statistical tests.
A multi-pronged approach to pulsar timing analysis is recommended. First, wavelet transform analysis should complement Fourier-based methods, providing time-frequency resolution necessary to track QPO evolution through glitch recovery phases. Second, recurrence quantification analysis (RQA) of reconstructed phase space should reveal diagonal structures corresponding to quasi-periodic trajectories, with characteristic lengths proportional to echo periods. Third, the integer period effect can be tested by systematic variation of analysis window length and measurement of statistical significance as a function of window duration. Fourth, cross-validation between data from different telescopes and observing campaigns should identify aliasing patterns and eliminate instrumental systematic effects.
The discovery of quasi-periodic oscillations in 45 out of 259 monitored pulsars, with many more expected to reveal structure in longer datasets, provides a growing sample for statistical analysis. The planned expansion of pulsar timing arrays with next-generation facilities will further increase sensitivity to subtle timing structures, potentially revealing shadow mode dynamics in previously inaccessible parameter regimes.
In conclusion, pulsar astrophysics offers a unique testing ground for spectral irreversibility theory. The observed quasi-periodic structures in pulsar timing data, their hierarchical organization, and their dependence on rotation parameters find natural explanations within the framework of shadow modes and information echoes. Systematic testing of the predictions outlined above will either validate the theory or constrain its parameters, contributing to the understanding of both neutron star physics and the fundamental limits of identifiability in dynamical systems.
Thus, pulsar timing emerges as a natural experiment where the abstract boundaries of identifiability materialize as specific, testable patterns in the data. The “shadow modes” are not merely unobservable degrees of freedom — they are parameters whose Fisher information vanishes under normal conditions but becomes temporarily measurable during glitch-induced transitions between minima of the information potential $\Phi$. These minima, in turn, correspond to the zeros of Bessel functions that characterize the eigenmodes of the rotating vortex lattice. The predicted scaling laws, integer-period effects, and aliasing patterns are direct consequences of the Cramér-Rao bound acting on the coupled-oscillator system that describes the neutron star’s interior. In this view, the “laws” of neutron star seismology (Tkachenko oscillations, vortex pinning) are not fundamental ontological statements but efficient parameterizations of the identifiable dynamics within the constraints of the electromagnetic observation channel.