1. Introduction
Gaussian Boson Sampling (GBS) produces photon-count patterns whose probabilities are governed by hafnian functionals of a covariance encoded in a Gaussian photonic state. Beyond its foundational role in quantum advantage, GBS has recently been positioned as a practical engine for approximating Gaussian expectation problems (GEPs), where one seeks $\mathbb{E}[f(x)]$ for $x\sim\mathcal{N}(0,B)$ and analytic or polynomial f. Wick’s theorem expresses this expectation as a hafnian–weighted sum over multi-indices drawn from the covariance B, aligning the target with the native GBS statistics. Building on this structure, Andersen and Shan introduced degree-resolved GBS estimators and proved open, nonempty regions in which GBS achieves sample–complexity advantages over Monte Carlo (MC), and further showed that tuning the mean photon number to the task increases the percentage of instances with advantage [1,2].
1.1. Calibrated Residuals, Principal–Angle Conditioning, and Predict–Then–Prove Guarantees
This work advances three aspects that are often treated separately. First, the residual is not merely a distance: it is a calibrated, minimal, and dynamical quantity. We prove that every admissible tuning step is nonexpansive in the comparison norm, so the residual cannot inflate under retuning or composition. Second, conditioning is captured quantitatively by the principal angle (via its cosine $c_F$), and this geometric control enters directly into the variance and MSE formulae, providing sharp two–sided bounds; this bridges statistical estimation, photonic physics, and numerical analysis in a single comparison geometry. Third, the framework is predict–then–prove: before sampling, it specifies where to operate (photon–number matching), how conditioning will affect dispersion (principal angles), and what decay rate to expect (Lyapunov envelope), and then establishes these prescriptions as theorems.
1.2. Problem
Practitioners need to foresee, before allocating shots, when GBS will outperform MC and how to tune the device to realize that advantage. Concretely: given a covariance B and an integrand f, can we predict in advance whether calibrated GBS will require asymptotically fewer samples than MC to reach a prescribed accuracy, and can we prescribe a robust one-parameter tuning rule that maximizes the gain? We seek design-time criteria (not post hoc curve fits) that tell us when to use GBS, when to defer to MC, and how to switch between them by degree.
1.3. DSFL Intuition: Forecasting When GBS Beats MC
The DSFL framework predicts, a priori, when and how GBS will outperform Monte Carlo (MC) by placing the target statistics and the sampler in a single Hilbert geometry and tracking one quantity—the residual. This yields the following design-time principles:
Single yardstick. Target and sampler are embedded in the same space; a single residual provides a quantitative knob to optimize and monitor.
No inflation under tuning. Retuning, counting, and block composition act as nonexpansive maps; the residual cannot increase under admissible operations.
Unique, certifiable optimum. The dominant variance/MSE factor is strictly convex in the scale and attains its unique minimum at the mean–photon–number match $m_{tB}=2K$; this yields a robust one-line tuning rule.
Predictable conditioning. Alignment between sampling and blueprint features is quantified by the principal angle (cosine $c_F$); a small $c_F$ certifies low dispersion. A computable surrogate from warm-up features allows pre-run assessment.
Rates, not just limits. If each update removes a fixed fraction of the residual, the residual obeys a non-asymptotic exponential envelope, providing a predicted semi-log MSE slope before data collection.
Regime map and hybrid policy. The geometry identifies regimes where GBS is inherently favorable (high degrees) and where MC suffices (low degrees), and prescribes a degree-wise hybrid switch using the same residual/angle diagnostics.
Robustness. Composition and small hardware drifts degrade angles and rates smoothly; re-matching restores near-optimality.
In summary, DSFL turns GBS from a black-box sampler into a calibrated, predictable engine: it specifies where to operate, how conditioning impacts dispersion, and what decay rate to expect, enabling an informed, pre-run decision on when—and by how much—GBS will outperform MC.
1.4. Thesis (DSFL View)
We show that GBS is a calibrated variance-reduction engine inside a single Hilbert comparison geometry governed by a Deterministic Statistical Feedback Law (DSFL). This provides one residual to minimize and four provable pillars: (i) Admissibility & DPI: counting and retuning are nonexpansive and intertwine with the estimator, so the calibrated residual cannot inflate; (ii) Calibration optimality: the unique minimizer of the leading variance/MSE factor is the photon-number match $m_{tB}=2K$, which implements the tuning advocated and empirically optimized in [1,2]; (iii) Angle conditioning: principal (Friedrichs) angles between the sampling and blueprint subspaces yield sharp two-sided dispersion bounds with factors $1\pm c_F$, explaining conditioning and leakage; (iv) Lyapunov envelope: a one-step contraction turns into a non-asymptotic exponential error decay with rate $\gamma$ maximized at the photon match.
1.5. Contributions
A unified geometry that proves a data–processing inequality (no residual inflation) for tuning and composition, and recovers the $(\varepsilon,\delta)$ envelopes of [1].
A one-line strict-convexity proof that predicts the optimal tuning rule $m_{tB}=2K$, matching the optimized estimators in [3].
Principal-angle bounds that forecast dispersion and a Lyapunov inequality that delivers exponential, non-asymptotic decay—together forming practical, pre-run design rules to decide GBS vs. MC and to set t.
A regime map that explains, a priori, when calibrated GBS is polynomial while MC is exponential (large degree), and a hybrid, degree-wise policy for the small-degree regime.
In short, DSFL turns “folk rules” into theorems: match $m_{tB}=2K$, measure angles, and expect exponential envelopes—thereby foreseeing and tuning the regimes where GBS outperforms MC, in line with and extending the results of [1,2].
2. What GBS Does (Succinct but Precise)
GBS distribution and hafnians
Let $B\in\mathbb{R}^{N\times N}$ be symmetric with eigenvalues $\lambda_1,\dots,\lambda_N$ of modulus strictly less than one. The GBS probability of a pattern $\bar n=(n_1,\dots,n_N)$ is
$$\Pr(\bar n)\;=\;\frac{\operatorname{haf}(B_{\bar n})^{2}}{\bar n!}\,\prod_{j=1}^{N}\sqrt{1-\lambda_j^{2}},$$
where $B_{\bar n}$ repeats row and column $j$ of B exactly $n_j$ times and the $\lambda_j$ are eigenvalues of B. The mean photon number is
$$m_{B}\;=\;\sum_{j=1}^{N}\frac{\lambda_j^{2}}{1-\lambda_j^{2}}.$$
Gaussian expectations via Wick’s theorem
For $x\sim\mathcal{N}(0,B)$ and a polynomial $f(x)=\sum_I a_I\,x^{I}$,
$$\mathbb{E}\!\left[x^{I}\right]\;=\;\operatorname{haf}(B_I),$$
where $B_I$ is the submatrix of B indexed (with repetition) by I. A degree partition gives $\mathbb{E}[f(x)]=\sum_{k}E_k$ with $E_k=\sum_{|I|=2k}a_I\operatorname{haf}(B_I)$.
Two practical GBS estimators
GBS-I (importance). For the degree-K task, the estimator that reweights GBS draws by the ratio of target coefficients to sampling probabilities is unbiased.
GBS-P (probability). For the degree-K task, estimate the pattern probabilities by empirical counts and average them with hafnian-derived weights. The bias vanishes as the sample size grows; the leading term is explicit.
Tuning knob: photon-number matching
Scale the covariance to $tB$ with $t\in(0,1/\max_j|\lambda_j|)$ and choose t so that
$$m_{tB}\;=\;\sum_{j=1}^{N}\frac{(t\lambda_j)^{2}}{1-(t\lambda_j)^{2}}\;=\;2K,$$
i.e. match the sampler’s mean photon number to the target degree K.
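A minimal sketch of this matching step, assuming the mean-photon-number expression reconstructed above; the function names are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def mean_photon_number(t, lam):
    """m_{tB} = sum_j (t*lam_j)^2 / (1 - (t*lam_j)^2), assuming |t*lam_j| < 1."""
    s = (t * lam) ** 2
    return np.sum(s / (1.0 - s))

def matched_scale(lam, K, tol=1e-12):
    """Unique t* in (0, 1/max|lam|) with m_{t*B} = 2K (the left side is increasing in t)."""
    t_max = 1.0 / np.max(np.abs(lam))
    f = lambda t: mean_photon_number(t, lam) - 2.0 * K
    return brentq(f, 1e-12, t_max * (1.0 - 1e-9), xtol=tol)

lam = np.array([0.9, 0.7, 0.5, 0.3])     # eigenvalues of B, strictly inside (-1, 1)
K = 2
t_star = matched_scale(lam, K)
print(t_star, mean_photon_number(t_star, lam))   # second value is approximately 2K = 4
```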
3. Related Work
This section situates our DSFL-based treatment of Gaussian Boson Sampling (GBS) within three intertwined literatures: (i) the foundations of GBS, including sampling distributions, computational hardness, and experimental progress; (ii) GBS as a vehicle for numerical estimation, in particular its use for Gaussian expectation problems together with formal $(\varepsilon,\delta)$ sample–complexity claims and percentage-of-advantage quantification; and (iii) control-theoretic and geometric principles that justify our residual-contraction, angle-conditioned bounds, and data–processing inequalities in a single Hilbertian comparison geometry. We use these threads to explain how the “folk rules” of GBS operation—photon-number tuning, conditioning via feature alignment, and non–asymptotic error envelopes—can be elevated to theorems, and to place our results alongside prior theory and practice.
3.1. Foundations of GBS and Hardness
The modern formulation of GBS as a sampling task driven by hafnian-valued amplitudes is due to Hamilton et al. [4], with detailed modeling and refinements in [5]. Complexity-theoretic hardness follows the linear-optical lineage of Aaronson–Arkhipov [6] and extends to experimentally informed, high-dimensional variants [7,8]. Algorithmic primitives for hafnian evaluation and classical simulation—including supercomputer-grade implementations and the thewalrus library—are documented in [9,10]. Photonic demonstrations approaching or attaining quantum advantage are reported in [11,12]. Related Gaussian models include loop-hafnians for displaced states [13] and threshold-detection (Torontonian) statistics [14], which frame hardware-relevant variants. On the application side, GBS has been explored for vibronic spectra [15,16], molecular docking [17], dense subgraph discovery and graph kernels [18,19,20,21], random point processes [22,23], and scalable time-bin/loop architectures [24,25]. These works supply the physical and computational substrate on which our estimation-theoretic analysis is built.
3.2. GBS Estimators for Gaussian Expectations
The use of GBS samples to approximate Gaussian expectation problems (GEPs)—via Wick’s theorem [26] to express Gaussian moments as hafnian-weighted functionals—was developed in [1], which introduced two concrete estimators (GBS-I and GBS-P) and proved non-asymptotic $(\varepsilon,\delta)$ guarantees on an open, nonempty subset of instances. An optimized variant matching the device’s mean photon number to the target degree and quantifying the percentage of the problem space with GBS advantage appears in [2]. Our DSFL framework recovers these bounds within a single comparison geometry, explains the tuning law ($m_{tB}=2K$), and adds angle-conditioned rate controls. Complementary analytic identities (e.g., the hafnian master theorem [27]) and approximation tools [28,29] delineate tractable regimes and provide context for the sampling-based estimators considered here.
3.3. Hilbertian Residual Control and Data Processing
Our analysis leverages standard Hilbert-space contractivity of orthogonal projections (counting) and parameter changes (retuning) to derive a data–processing inequality (DPI) for the DSFL residual, together with principal-angle (Friedrichs) bounds for conditioning and Lyapunov-type decay envelopes. While these tools are classical (see, e.g., two-subspace/Friedrichs-angle inequalities and discrete Grönwall arguments), their joint instantiation for GBS estimation and calibration appears to be new in this context. For Monte Carlo baselines and variance-reduction perspectives that we compare against, see [30,31,32,33].
Gaussian Boson Sampling (GBS) was introduced as a photonic sampling model whose output probabilities are governed by hafnians of submatrices associated with a Gaussian covariance [4,5]. In analogy with (standard) Boson Sampling [6], a growing body of work establishes classical intractability (under plausible conjectures) not only for exact sampling but also for sampling from suitable approximations to the ideal distribution [7,8]. Algorithmic progress on hafnian evaluation—both general-purpose [9] and software implementations (e.g. The Walrus) [10]—has enabled faithful classical simulations at nontrivial scales, supporting experimental validation [11,12] and methodological development.
3.4. Applications of GBS
Proposed applications of GBS span vibronic spectroscopy [15,16], molecular docking [17], dense subgraph discovery [18], perfect matchings [19], graph similarity and kernels [20,21], point processes [22], and photonic simulations of molecular dynamics [23]. These lines have been complemented by platform-level studies of time-bin/loop architectures that aim at scalable implementations [24,25].
3.5. GBS Estimators for Gaussian Expectations
The specific connection between GBS and Gaussian expectation problems (GEPs) is made precise by Wick’s theorem [26], which expresses Gaussian moments as hafnians. Building on this structure, Andersen and Shan (i) introduced degree-resolved GBS estimators (GBS-P and GBS-I) for GEPs and proved open, nonempty regions of exponential advantage over plain Monte Carlo (MC), measured in guaranteed sample sizes [1], and (ii) optimized the mean photon number to the task and quantified the fraction of the problem space displaying advantage, with large observed percentages and sizeable efficiency ratios [2]. Related algorithmic components include hafnian bounds (via Stirling-type inequalities) and polylogarithmic envelopes that deliver computable, nonasymptotic sample-complexity expressions.
3.6. Variance-Reduction and MC Baselines
From a classical perspective, GBS-I can be seen as an importance sampler whose proposal is the physical GBS law; GBS-P corresponds to a (biased-but-consistent) probability estimator with explicit leading error terms. These sit alongside standard MC variance-reduction techniques (importance, stratification, antithetic sampling, quasi-MC), whose strengths and limitations in high dimensions are well documented [30,31,32,33]. The present analysis adopts plain MC as the baseline, following [1,2].
3.7. Hafnian Identities and Tractable Limits
Structural identities such as the hafnian master theorem [27] provide tractable regimes for particular coefficient ensembles, while general approximation results for hafnians and permanents (e.g. Barvinok-type bounds) furnish algorithmic surrogates outside the photonic setting [28,29]. These are complementary to the sampling-based estimators considered here.
3.8. Angles, Data Processing, and Residual Control
The present DSFL view organizes the GBS estimation task in one Hilbert geometry and leverages two ingredients that are ubiquitous in information processing and operator theory: (i) data–processing inequalities (DPI) for nonexpansive, intertwining updates (counting maps, retuning), ensuring no residual inflation under admissible operations; and (ii) principal-angle (Friedrichs) bounds that control conditioning of two intertwined subspaces (blueprint vs sampling features) and yield two-sided variance inequalities with factors $1\pm c_F$. The Lyapunov envelope for the residual—a discrete coercivity statement converting one-step contraction into an exponential decay rate—is classical in stability theory and is instantiated here by the t-matched (photon-number–matched) tuning law. While the DSFL terminology is specific to this work, the underlying tools are aligned with standard Hilbertian projection theory and CS-decompositions.
3.9. Positioning
Relative to [3], the present manuscript provides a unifying explanation of when/why/how GBS outperforms MC by: (a) proving that mean–photon–number matching is the unique minimizer of the leading residual/variance factor; (b) quantifying conditioning via principal angles with explicit two-sided bounds; and (c) deriving nonasymptotic Lyapunov envelopes for error decay that recover the $(\varepsilon,\delta)$ sample-size laws. This framework is compatible with current hardware models [34,35,36] and classical tooling for hafnians [9,10], and it complements domain applications [15,17,18,19,20,22,23].
4. Limitations and Scope
The guarantees developed here are tailored to Gaussian expectation problems in which Wick reduction expresses the target as hafnian–weighted functionals of a real, symmetric covariance with spectrum strictly inside $(-1,1)$. This setting aligns the photonic sampler’s distribution with the blueprint geometry in a way that makes DSFL calibration, principal-angle conditioning, and Lyapunov envelopes directly applicable. Extending the framework beyond this regime entails additional structure. For non-Gaussian observables or general boson sampling models (e.g., loop-hafnians with coherent displacements, threshold detection/Torontonian statistics, or complex-valued amplitudes with sign patterns), one must re-specify the feature map, the admissible updates, and the calibration functional so that (i) an appropriate inner-product geometry captures the relevant moments, (ii) the data–processing inequality remains valid under the physical update semigroup, and (iii) bias terms induced by nonlinearity or signs are explicitly controlled. A second limitation concerns conditioning proxies: our use of principal-angle surrogates (e.g., cross-Gram norms between sampling and blueprint features) is computationally light but approximate; computing exact projectors or Friedrichs angles may be prohibitive for very large feature dimensions. Finally, while our analysis covers noiseless sampling and block composition via nonexpansive maps, hardware noise and calibration drift require model-specific contractions; the DSFL conclusions persist under such perturbations only insofar as a positive coercivity margin can be certified after re-matching the mean photon number.
4.1. DSFL Foresight for GBS: Verbal Logic → Mathematical Predictors
We give a from-first-principles account of how the Deterministic Statistical Feedback Law (DSFL) predicts (before any run) the behavior of Gaussian Boson Sampling (GBS): how tuning affects error, when/why it beats Monte Carlo (MC), how variance/bias scales, why mean-photon-number matching is optimal, and how composition/noise impact performance. Each verbal claim is paired with an explicit, checkable mathematical statement.
5. Stage 1: Put GBS in One Hilbert Geometry (the DSFL Stage)
DSFL places the statistical blueprint (the quantity to estimate) and the physical sampler (the device) in the same comparison geometry and measures a single misfit (residual). In our setting, GBS turns a scale parameter t (the photon-number knob) into a sampling law. The estimator (GBS-I or GBS-P restricted to degree K) is a calibration map that reads off the target contribution from that law. Alignment ⇒ small residual; misalignment ⇒ large residual.
Mathematics. Fix a degree K. Let the blueprint vector collect the degree-K coefficients of the integrand, and let the physical law at scale t be the degree-2K marginal of the GBS distribution. Work in the finite-dimensional space indexed by multi-indices of weight 2K, with the standard inner product. Define the readout (calibration) by the chosen estimator, and define the Residual of Sameness as the squared distance between the calibrated response and the blueprint. This single scalar is the DSFL misfit we will minimize, bound, and track.
Stage 2: Admissibility ⇒ No-Inflation (DPI)
Changing t and counting more samples is nonexpansive in the comparison geometry; residuals cannot inflate just by tuning/concatenation. Any sensible schedule is safe: at worst neutral, typically improving.
Mathematics (DPI)
Let $(U,\Phi)$ be the blueprint/physical update pair. Here U acts on the blueprint side, while Φ transports the scale t and applies the counting map. The intertwining with the calibration holds by construction. If $\|\Phi\|\le1$ (counting is nonexpansive), then the residual cannot increase under the update.
Prediction #1 (robustness). Any well-behaved change of t, any device concatenation (gluing), and any extra sampling cannot increase the DSFL residual.
Stage 3: Optimal Tuning = Mean-Photon Matching
Both the GBS distribution and the estimator operate on multi-indices with $|I|=2K$. If the device emits photon numbers far from 2K, mass is spent in the wrong degree, creating misalignment/variance. Matching the mean photon number to 2K centers mass where the estimator reads, minimizing misfit and MSE.
Mathematics (Calibration Optimality)
For GBS-P the leading MSE and for GBS-I the variance share the same multiplicative factor in t (the exact formulas appear in Section 8). This factor is strictly convex in t and its derivative vanishes exactly when $m_{tB}=2K$; therefore the matched scale $t^{\star}$ with $m_{t^{\star}B}=2K$ is the unique strict minimizer.
Prediction #2 (tuning law). Set t so that $m_{tB}=2K$; this minimizes estimator MSE/variance and the DSFL residual.
Stage 4: Conditioning and Leakage via Principal Angles
If the sampling and blueprint subspaces are aligned, readout is clean; if skew, mass leaks. DSFL quantifies this by the Friedrichs angle $\theta_F$; its cosine $c_F$ acts like a conditioning factor: $c_F$ near 1 (worse), $c_F$ near 0 (better).
Mathematics (Two-Subspace Inequality)
Let one projector select the sampling features at scale t and another the degree-K blueprint. With $c_F$ the Friedrichs cosine between the two subspaces, the two-subspace inequality bounds the split energy between multiples $1-c_F$ and $1+c_F$. Since the empirical MSE is proportional to this energy (up to constants), the same factors bound the estimator dispersion.
Prediction #3 (angle effect). As t approaches the match with $m_{tB}\to2K$, $c_F$ decreases and the bounds tighten; mis-tuning increases $c_F$ and hurts conditioning.
Stage 5: Rates via a Lyapunov Envelope
Near the matched regime, the error behaves like a dissipative system: each sample reduces a fixed fraction of residual. DSFL encodes this with a Lyapunov inequality, giving a non-asymptotic exponential envelope.
Mathematics:
Let $R_n$ denote the residual after n updates. Under admissibility (DPI) and a coercivity margin $\gamma>0$ (monotone in alignment/angle), each step satisfies $\mathbb{E}[R_{n+1}]\le(1-\gamma)\,\mathbb{E}[R_n]$. Thus $\mathbb{E}[R_n]\le(1-\gamma)^{n}R_0$.
Prediction #4 (rate). The decay slope $\log(1-\gamma)$ is steepest at the match; mis-tuning reduces $\gamma$.
Stage 6: When GBS Beats MC (Regime Maps)
MC at degree K inherits all cross-moment mixings, exploding with effective dimension; calibrated GBS concentrates on the right degree. DSFL converts this into sample-complexity comparisons.
Mathematics (Complexity Ratio)
Chebyshev/Markov give guaranteed sizes $n_{\mathrm{GBS}}\gtrsim\operatorname{Var}_{\mathrm{GBS}}/(\varepsilon^{2}\delta)$ versus $n_{\mathrm{MC}}\gtrsim\operatorname{Var}_{\mathrm{MC}}/(\varepsilon^{2}\delta)$. For large K or correlated growth of the effective dimension, the MC numerator inflates exponentially (many cross indices), while GBS stays calibrated, so the ratio $n_{\mathrm{GBS}}/n_{\mathrm{MC}}$ collapses.
Prediction #5 (regime map). Large K ⇒ GBS wins; small K ⇒ use the DSFL angle proxy to switch (hybrid GBS/MC).
Stage 7: What Happens When You Sweep t
MSE versus t is U-shaped with the minimum exactly at $m_{tB}=2K$; sharp spectra create narrower minima (more sensitive tuning).
Mathematics:
Let $g(t)$ denote the logarithm of the leading t-dependent factor of the MSE. Then g is strictly convex and $g'(t)=0$ iff $m_{tB}=2K$ (Lemma 7.4).
Prediction #6 (t-sweep). A unique convex minimum at $m_{tB}=2K$; sharper eigenvalue spectra yield narrower minima.
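A small numerical sketch of the predicted sweep, assuming the log-convex form $g(t)=-2K\log t-\tfrac12\sum_j\log(1-t^{2}\lambda_j^{2})$ used for the leading factor in this reconstruction (Lemma 7.4); it checks that the minimizer of the swept factor coincides with the photon match.

```python
import numpy as np

lam = np.array([0.9, 0.7, 0.5, 0.3])                 # spectrum of B (illustrative)
K = 2
t_grid = np.linspace(0.05, 0.999 / lam.max(), 2000)

def g(t):
    s = (t * lam[:, None]) ** 2                       # broadcast over the t grid
    return -2 * K * np.log(t) - 0.5 * np.sum(np.log1p(-s), axis=0)

def mean_photon(t):
    s = (t * lam[:, None]) ** 2
    return np.sum(s / (1 - s), axis=0)

vals = g(t_grid)
i_min = np.argmin(vals)
print("argmin t       :", t_grid[i_min])
print("m_tB at argmin :", mean_photon(t_grid)[i_min])   # approximately 2K = 4
print("U-shape check  :", vals[0] > vals[i_min] < vals[-1])
```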
Stage 8: Composition and Noise
Composition: concatenating blocks is admissible; residuals cannot inflate.
Noise: small Gaussian perturbations contract in the comparison norm; the same predictions hold with slightly reduced rate/angle, and re-matching t restores optimality [37,38].
Mathematics:
If each block satisfies the intertwining relation and is nonexpansive, then the composition of the blocks is again admissible and the composed pipeline obeys the same no-inflation bound on the residual. For a perturbed covariance $B+\Delta$ with $\|\Delta\|$ small, Davis–Kahan yields a rotation of the sampling subspace of order $\|\Delta\|$. Hence the angle margin decreases by at most $O(\|\Delta\|)$; the matching condition $m_{tB}=2K$ still centers the optimum.
Stage 9: Actionable Tests
The DSFL analysis yields a concrete testing protocol that can be implemented prior to data collection and updated on-line with negligible overhead. First, the sampler must be calibrated by matching its mean photon number to the target degree. Concretely, given a covariance B with spectrum $\{\lambda_j\}$ and a degree index K, the scale $t^{\star}$ is defined as the unique solution of the monotone equation
$$m_{tB}\;=\;\sum_{j=1}^{N}\frac{(t\lambda_j)^{2}}{1-(t\lambda_j)^{2}}\;=\;2K.\qquad(26)$$
Setting $t=t^{\star}$ centers the sampling law on multi-indices of weight 2K and, by the variance formulas derived above, minimizes the mean–square error for both GBS-I and GBS-P.
Second, conditioning is certified by a computable principal-angle proxy. Let $F_t$ denote a (finite) feature matrix obtained from M synthetic or warm-up GBS feature evaluations at scale t, and let $G_K$ encode a basis for the degree-K blueprint subspace (e.g. normalized monomials indexed by multi-indices of weight 2K). The quantity
$$\hat c_K(t)\;=\;\big\|F_t^{\top}G_K\big\|_2\qquad(27)$$
upper bounds the Friedrichs cosine $c_F$ up to discretization error. Small values of $\hat c_K(t)$ certify good alignment (large principal angle $\theta_F$) and hence favorable two-sided variance bounds, whereas large values indicate ill-conditioning and trigger either a re-tuning of t toward (26) or a temporary deferral to the Monte Carlo channel for the corresponding K. A minimal computation of this proxy is sketched below.
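A minimal sketch of the proxy (27), assuming the warm-up features and the blueprint basis are supplied as matrices; the QR orthonormalization and the helper names are our choices, not part of the protocol above.

```python
import numpy as np

def angle_surrogate(F_t, G_K):
    """Cross-Gram surrogate: largest singular value of Qf^T Qg, where Qf and Qg
    orthonormalize the columns of the feature and blueprint matrices."""
    Qf, _ = np.linalg.qr(F_t)
    Qg, _ = np.linalg.qr(G_K)
    return np.linalg.norm(Qf.T @ Qg, 2)

rng = np.random.default_rng(1)
dim, M, r = 50, 8, 5
G_K = rng.standard_normal((dim, r))                      # stand-in blueprint basis
F_overlap = G_K[:, :3] + 0.05 * rng.standard_normal((dim, 3))   # strongly overlapping features
F_generic = rng.standard_normal((dim, M))                # generic features, weak overlap

print("overlapping surrogate:", angle_surrogate(F_overlap, G_K))  # near 1 (heavy leakage)
print("generic surrogate    :", angle_surrogate(F_generic, G_K))  # smaller (favorable, per the text)
```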
Third, channel selection can be automated by an empirical residual criterion. Denote by $\hat R$ the empirical DSFL residual estimated from a fixed budget of warm-up samples (identical across candidates). Choosing the channel with smaller $\hat R$ minimizes the predicted MSE under the nonexpansive dynamics that define DSFL and is therefore statistically safe. Since both GBS and MC updates are nonexpansive in the comparison geometry, this selection step does not increase the residual in expectation.
Finally, under architectural changes (e.g. insertion of an additional optical block or concatenation in a time-bin loop), admissibility guarantees a data–processing inequality for the composite map, hence no inflation of the residual; only the calibration Equation (26) must be re-checked for the updated effective covariance and scale. In particular, the composition of nonexpansive, intertwining maps remains nonexpansive and intertwining, so the DSFL guarantees persist verbatim after gluing.
Executive Summary of Foresight
Within a single Hilbert geometry, DSFL certifies that tuning and concatenation steps are nonexpansive; hence, the residual cannot deteriorate under admissible manipulation of the sampler or of the pipeline. The unique operating point that minimizes the mean–square error and the estimator variance is characterized by the mean-photon-number matching condition $m_{tB}=2K$, whose defining factor is strictly convex in t and therefore algorithmically well-posed. Conditioning is governed by the principal angle $\theta_F$ between the sampling subspace at scale t and the degree-K blueprint; the associated cosine $c_F$ controls sharp two-sided variance bounds and decreases as the calibration improves. In the matched regime, the expected residual satisfies a discrete Lyapunov inequality and thus admits a non-asymptotic exponential envelope $(1-\gamma)^{n}$ with decay rate maximized at the calibration point. Consequently, for high degrees (in particular when K grows with the system size N), the calibrated GBS estimators exhibit polynomial sample complexity in N, while the plain Monte Carlo baseline inherits an exponential dependence through cross-moment couplings; for small K, the angle proxy (27) recommends a hybrid policy that defers to MC until alignment becomes favorable. The dependence of the MSE on the scale t is convex with a unique minimum at $t^{\star}$; perturbations of B (hardware noise) or blockwise composition merely reduce the alignment and rate by controllable amounts, and re-matching via (26) restores near-optimal performance. Throughout, no inflation occurs under gluing by the data–processing inequality.
Bottom Line
The preceding statements are a priori consequences of the DSFL axioms (intertwining and nonexpansive updates) in conjunction with the Wick/GBS representation of Gaussian expectations. They condense into operational equations (the matching law $m_{tB}=2K$, the angle-conditioned bounds with factors $1\pm c_F$, and the envelope $\mathbb{E}[R_n]\le(1-\gamma)^{n}R_0$) which together predict the behavior of GBS before sampling and specify the tuning required to achieve optimal performance in practice.
6. DSFL Primer (Minimal Machinery, with Proofs)
6.1. Comparison Geometry
Let $\mathcal{H}$ be a (real or complex) Hilbert space equipped with its norm $\|\cdot\|$. We fix two closed subspaces (the “statistical” and “physical” channels) and the corresponding orthogonal projectors onto them.
6.2. Calibration (Interchangeability)
Definition 6.1 (Calibration pair). A calibration is a pair of bounded operators between the statistical and physical channels satisfying the interchangeability identities (30).
Remark 6.1. The relations (30) imply that is a left inverse of on , and is a right inverse of on . In particular, is an isomorphism of onto , and restricts to the identity on .
6.3. Residual of Sameness
Definition 6.2 (Residual). Given a blueprint element and a response element, the Residual of Sameness is their squared distance in the comparison norm.
Remark 6.2. By construction, the residual vanishes iff the response equals the blueprint. Moreover, the residual is invariant under any isometry U on the comparison space that preserves the blueprint.
6.4. Admissibility and the Data–Processing Inequality
Definition 6.3 (Admissible update). A pair of bounded operators $(U,\Phi)$ is admissible if:
1. Intertwining (calibration consistency): the update commutes with the calibration pair in the sense of (33).
2. Nonexpansiveness: $\|\Phi\|\le1$.
Theorem 6.4 (Data–processing inequality (DPI)). If $(U,\Phi)$ is admissible, then for every blueprint and response the residual after the update is no larger than before, which is inequality (34).
Proof. Using (33) and $\|\Phi\|\le1$, the updated residual is bounded by the original, which is (34). □
Corollary 6.5 (Iterated DPI). If finitely many updates are admissible, then so is their composition, and the residual after the composite update is bounded by the initial residual.
Proof. Intertwining and nonexpansiveness are preserved under composition. Apply Theorem 6.4. □
6.5. Angles and Conditioning
Let $M,N\subseteq\mathcal{H}$ be closed subspaces with orthogonal projectors $P_M$, $P_N$.
Definition 6.6 (Friedrichs angle). The Friedrichs cosine between M and N is
$$c_F(M,N)\;=\;\sup\Big\{\,|\langle x,y\rangle|\;:\;x\in M\cap(M\cap N)^{\perp},\;y\in N\cap(M\cap N)^{\perp},\;\|x\|\le1,\;\|y\|\le1\Big\},$$
and the Friedrichs angle is defined by $\theta_F(M,N)=\arccos c_F(M,N)$.
We recall a standard two–subspace inequality; we include a proof for completeness.
Proposition 6.7 (Two–subspace bounds). Let M, N be closed subspaces and set $c=c_F(M,N)$. For any $x\in M$, $y\in N$, both orthogonal to $M\cap N$,
$$(1-c)\big(\|x\|^{2}+\|y\|^{2}\big)\;\le\;\|x+y\|^{2}\;\le\;(1+c)\big(\|x\|^{2}+\|y\|^{2}\big).\qquad(38)$$
Proof. By polarization, $\|x+y\|^{2}=\|x\|^{2}+\|y\|^{2}+2\langle x,y\rangle$. Since $x\perp M\cap N$ and $y\perp M\cap N$, we have $|\langle x,y\rangle|\le c\,\|x\|\,\|y\|$. Thus $\big|\,\|x+y\|^{2}-\|x\|^{2}-\|y\|^{2}\big|\le2c\,\|x\|\,\|y\|$. Using $2\|x\|\,\|y\|\le\|x\|^{2}+\|y\|^{2}$ yields (38). □
Proposition 6.8 (Through–subspace contraction). Let M, N be closed subspaces with $M\cap N=\{0\}$ and $c=c_F(M,N)$. Then for any bounded operator T with range contained in N,
$$\|P_M T\|\;\le\;c\,\|T\|.\qquad(41)$$
In particular, if $\|T\|\le1$, then $\|P_M T\|\le c$.
Proof. By the dual characterization of the operator norm, $\|P_M T\|\le\|P_M P_N\|\,\|T\|$. But $\|P_M P_N\|=c$ (take the singular value decomposition of $P_M P_N$). Hence (41). □
Remark 6.3 (Interpretation). In applications, M and N are the “pointer” subspaces representing, respectively, the sampling polarization and the blueprint (degree–K) polarization. The factor $c_F$ controls leakage and conditioning across the seam: estimates that pass through a contractive kernel across the seam are suppressed by the factor $c_F$.
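Where the subspaces are available as explicit bases (small problems), the principal angles can be computed directly rather than through a surrogate. A minimal check using scipy.linalg.subspace_angles with illustrative random bases:

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(2)
dim = 30
M_basis = rng.standard_normal((dim, 4))     # columns span the subspace M
N_basis = rng.standard_normal((dim, 6))     # columns span the subspace N

angles = subspace_angles(M_basis, N_basis)  # principal angles, largest first
c = np.cos(angles.min())                    # cosine of the smallest principal angle
print("principal angles (rad):", angles)
print("cos of smallest angle :", c)
# When M and N intersect only in {0}, this cosine coincides with the Friedrichs cosine c_F(M, N).
```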
6.6. Lyapunov Envelopes (Continuous and Discrete)
We formalize a standard energy–dissipation mechanism that will underlie non–asymptotic error envelopes.
Lemma 6.9 (Differential energy inequality). Let the trajectory be (weakly) differentiable and let E denote its squared-norm energy. Assume the energy balance is driven by a bounded self–adjoint dissipative part together with a controlled remainder. If the dissipative part admits a positive coercivity margin and the remainder is suitably bounded, then the differential inequality (45) holds.
Theorem 6.10 (Continuous Lyapunov envelope). Under the hypotheses of Lemma 6.9, the energy decays exponentially up to the remainder contribution.
Proof. Apply Grönwall’s inequality to (45). □
A discrete analogue covers sampling iterations.
Proposition 6.11 (Discrete Lyapunov envelope). Let $(R_n)_{n\ge0}$ be nonnegative numbers such that, for some $\gamma\in(0,1]$, the one-step contraction (48) holds: $R_{n+1}\le(1-\gamma)\,R_n$. Then $R_n\le(1-\gamma)^{n}R_0$ for all n.
Proof. Iterate (48) and use nonnegativity of the $R_n$. □
Remark 6.4 (Connection to admissibility). In many applications (e.g. residuals computed after one sampling/update step), admissibility of the update and a sectorial coercivity provide (48) in expectation, yielding an expected exponential envelope of the form $\mathbb{E}[R_n]\le(1-\gamma)^{n}R_0$.
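A toy numerical illustration of Proposition 6.11 and Remark 6.4 (all numbers synthetic): residuals generated by one-step factors of at most $1-\gamma$ stay below the envelope $(1-\gamma)^{n}R_0$, and the fitted semi-log slope is at least as steep as the predicted $\log(1-\gamma)$.

```python
import numpy as np

rng = np.random.default_rng(3)
gamma, R0, steps = 0.15, 1.0, 40

# Each step removes at least a gamma-fraction of the residual (with random extra progress).
R = [R0]
for _ in range(steps):
    shrink = 1.0 - gamma - 0.05 * rng.random()      # one-step factor <= 1 - gamma
    R.append(R[-1] * shrink)

R = np.array(R)
envelope = R0 * (1.0 - gamma) ** np.arange(steps + 1)
assert np.all(R <= envelope + 1e-12)                # non-asymptotic envelope holds

# Predicted semi-log slope log(1 - gamma) versus empirical least-squares slope.
emp_slope = np.polyfit(np.arange(steps + 1), np.log(R), 1)[0]
print("predicted slope:", np.log(1.0 - gamma), " empirical slope:", emp_slope)
```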
7. Mapping GBS ⟺ DSFL
Here we place the GBS estimation problem and the DSFL control law into a single, explicit Hilbert–space model and show that the main “folk rules” for operating a Gaussian Boson Sampler—how to tune the mean photon number, when advantage over Monte Carlo (MC) appears, how conditioning depends on feature alignment, and why errors decay at a fixed rate—follow as theorems from one comparison geometry. The construction has three ingredients. First, for a fixed target degree K, we embed both the statistical blueprint (the degree–K coefficient vector of the integrand) and the physical sampling law induced by the device at scale t into the same finite–dimensional Hilbert space indexed by multi–indices of total weight 2K. This makes the GBS law and the blueprint comparable as vectors and allows us to measure a single misfit, the DSFL residual.
Second, we model the estimators (GBS–P and GBS–I restricted to degree K) as bounded linear functionals on this space, so that “reading off” the degree–K contribution to the Gaussian expectation is nothing but a linear evaluation. In this form, the basic DSFL admissibility hypotheses hold for the two primitive operations available on hardware or in simulation: (a) changing the scale t (retuning the source) and (b) replacing the law by its empirical counterpart (counting). Both operations act by contractions and intertwine with the calibration. Consequently, the DSFL data–processing inequality applies directly: no admissible update can inflate the degree–K residual.
Third, we extract quantitative laws from the geometry. A short calculus identity shows that the leading t–dependence of the residual and of the estimator variance is governed by the convex factor $e^{g(t)}$ of Lemma 7.4, so the unique minimizer occurs at the photon–number match $m_{tB}=2K$. This is the precise DSFL formulation of “mean–photon–number matching is optimal.” Conditioning and leakage across degrees are controlled by the Friedrichs principal angle between the blueprint subspace and the sampling feature subspace; standard two–subspace estimates yield two–sided variance (or squared–bias) bounds with multiplicative factors $1\pm c_F$. Finally, a one–step conditional contraction of the residual—guaranteed by admissibility plus a positive angle margin—produces a Lyapunov inequality and hence a non–asymptotic exponential envelope $(1-\gamma)^{n}$ with the rate $\gamma$ maximal at the photon–match.
The subsections below make this scheme explicit. We first specify the blueprint, the physical channel, and the linear estimator inside one comparison space. We then verify admissibility (intertwining and nonexpansiveness) for counting and retuning, yielding a degree–K data–processing inequality. Next, we prove calibration optimality—photon–number matching minimizes the residual—and derive principal–angle bounds for the estimator’s dispersion. We conclude with a Lyapunov–type result that converts the one–step contraction into an exponential decay rate, thereby recovering and clarifying the sample–complexity laws for GBS and the observed advantage regimes over MC.
7.1. Blueprint, Physical Channel, and Calibration in One Geometry
Fix an integer $K\ge1$ and a symmetric covariance B with spectrum strictly inside $(-1,1)$.
7.1.1. Hilbert Model
We work on the comparison Hilbert space of real vectors indexed by the multi-indices of total weight 2K, equipped with the standard inner product.
7.1.2. Blueprint
Let f be the target integrand. The degree-K blueprint is the coefficient vector $(a_I)_{|I|=2K}$ of f. Wick’s theorem yields the degree-K contribution to the Gaussian expectation,
$$E_K\;=\;\sum_{|I|=2K}a_I\,\operatorname{haf}(B_I),$$
where $B_I$ denotes the (multi-)submatrix of B indexed by I.
7.1.3. Physical Channel
For any admissible scale t, the GBS distribution on photon patterns is the hafnian-weighted law $p_{tB}$ induced by the scaled covariance tB (Section 2). We regard $p_{tB}$ as the “physical state” at scale t. The (global) mean photon number of $p_{tB}$ is
$$m_{tB}\;=\;\sum_{j=1}^{N}\frac{(t\lambda_j)^{2}}{1-(t\lambda_j)^{2}}.$$
7.1.4. Calibration (Estimator Readout)
We model the degree-K estimators as linear functionals on the comparison space; each is bounded (by Cauchy–Schwarz). For empirical laws obtained by counting, we use the same functionals; admissibility will follow from linearity and nonexpansiveness of orthogonal projections.
7.1.5. Residual
With this embedding, the DSFL residual for the degree-K task at scale t is the squared distance between the calibrated physical law and the blueprint, where the blueprint is viewed, via the canonical identification, as an element of the comparison space (we suppress the statistics subspace).
7.2. Admissibility of GBS Tuning and Counting
Lemma 7.1 (Intertwining). Let U and Φ be either: (i) the identity (fixed t) followed by the orthogonal projection onto the simplex of probability vectors (counting), or (ii) the change of scale followed by the counting projection. Then the update intertwines with the degree-K calibration.
Proof. U is the identity on the blueprint space; Φ acts on physical vectors only and preserves the ambient space (projection and change of parameters do not move the blueprint coordinates). Therefore both sides of the intertwining identity agree, yielding the claim. □
Lemma 7.2 (Nonexpansiveness). The operators Φ of the previous lemma satisfy $\|\Phi\|\le1$.
Proof. Orthogonal projection onto a closed convex set is a contraction in the Hilbert norm; parameter change is the identity on the ambient linear space (we reparametrize the same coordinates). Hence the composition has operator norm at most 1. □
Proposition 7.3 (DPI for GBS tuning and counting). Let U and Φ be as above. Then the DPI holds for the degree-K residual: no admissible tuning or counting step can increase it.
Proof. Apply the abstract DPI (Theorem 6.4) with U and Φ as specified. □
7.3. Calibration Optimality: Photon-Number Matching
Lemma 7.4 (Scale factor calculus). Define
$$g(t)\;=\;-2K\log t\;-\;\tfrac12\sum_{j=1}^{N}\log\!\big(1-t^{2}\lambda_j^{2}\big).$$
Then
$$g'(t)\;=\;\frac{1}{t}\big(m_{tB}-2K\big),$$
where $m_{tB}=\sum_{j}(t\lambda_j)^{2}/(1-(t\lambda_j)^{2})$ is the mean photon number. In particular, $g'(t)=0$ iff $m_{tB}=2K$, and $g''(t)>0$, so g is strictly convex on the admissible interval.
Proof. Differentiate:
$$g''(t)\;=\;\frac{2K}{t^{2}}\;+\;\sum_{j=1}^{N}\frac{\lambda_j^{2}\big(1+t^{2}\lambda_j^{2}\big)}{\big(1-t^{2}\lambda_j^{2}\big)^{2}}.$$
The second derivative is the sum of a positive term and strictly positive terms, proving strict convexity. □
Theorem 7.5 (Calibration Optimality). Assume the degree-K residual admits a representation $R_K(t)=A\,e^{g(t)}+C$ with constants $A>0$, C independent of t. Then the unique minimizer of $R_K$ on the admissible interval is characterized by the photon-number matching law $m_{t^{\star}B}=2K$.
Proof. By the assumed form, $R_K'(t)=A\,g'(t)\,e^{g(t)}$; hence critical points of $R_K$ are those of g. Combine with the previous lemma and strict convexity to obtain uniqueness and $m_{t^{\star}B}=2K$. □
Remark 7.1. For both GBS-P and GBS-I, the leading t–dependence of the MSE/variance and thus of the residual is multiplicative by $e^{g(t)}$, while the coefficient depends on the blueprint and B but not on t. This yields the displayed form up to a harmless additive constant (see the variance formulæ below).
7.4. Angles Quantify Conditioning: Two-Sided MSE Bounds
Let $P_K$ denote the orthogonal projector onto the “blueprint” subspace spanned by the canonical basis vectors weighted by the coefficients $a_I$ (precisely, the range of the adjoint of the degree-K readout), and let $Q_t$ denote the projector onto the sampling feature subspace at scale t. Set $c_F(t)$ to be the Friedrichs cosine between these two ranges.
Proposition 7.6 (Angle–variance link). Let the estimator be either the GBS-P or GBS-I degree-K estimator from n samples at fixed t. Then there exist t–independent constants $0<c_-\le c_+$ such that the estimator dispersion is bounded below by $c_-\,(1-c_F(t))$ and above by $c_+\,(1+c_F(t))$ times the split energy of the readout weights, divided by n.
Equivalently, the MSE (for GBS-P, replacing Var by the leading squared bias) is sandwiched between two multiples of the split energy, with constants modulated by $1\pm c_F(t)$.
Proof sketch with precise ingredients. Both estimators are linear statistics of the empirical law: the estimate is a fixed weight vector paired with the empirical frequencies. The variance therefore equals the quadratic form of the one-draw covariance operator evaluated at the weight vector. Since the weights live in the degree-K feature subspace, one has (up to a bounded t–independent similarity determined by the coefficients and B) a congruence with a positive definite diagonal matrix D encoding the t–independent weights. Two applications of standard two-subspace estimates (Friedrichs/CS-decomposition) then yield the claimed bounds: decompose the weight vector relative to the blueprint and sampling subspaces, invoke the sharp two-subspace inequalities to bound the quadratic form above and below by $(1\pm c_F(t))$ times the split energy, and absorb t–independent factors into $c_\pm$ to conclude. □
7.5. Lyapunov Envelope and Non-Asymptotic Rates
Theorem 7.7 (DSFL Lyapunov for GBS). Let one update step consist of a blueprint-side map U and a physical-side map Φ (counting and/or retuning). Fix a scale t and a degree K. Assume there exists $\gamma\in(0,1]$ such that, conditionally on the past, each step removes at least a $\gamma$-fraction of the degree-K residual. Then the mean residual decays exponentially: $\mathbb{E}[R_n]\le(1-\gamma)^{n}R_0$. Moreover, $\gamma$ is monotone improving as the principal-angle factor $c_F$ decreases, and it is maximized at the photon-number match $m_{tB}=2K$.
Proof. Taking total expectation in the conditional contraction yields $\mathbb{E}[R_{n+1}]\le(1-\gamma)\,\mathbb{E}[R_n]$. Iterating and using the initial value $R_0$ gives $\mathbb{E}[R_n]\le(1-\gamma)^{n}R_0$. The dependence of $\gamma$ on t follows from the angle–variance control; its maximizer coincides with the minimizer of the leading residual factor, attained at $m_{tB}=2K$ by calibration optimality. □
Corollary 7.8. If $\mathbb{E}[R_n]\le(1-\gamma)^{n}R_0$, then $n\ge\log(R_0/\varepsilon)\big/\log\!\big(1/(1-\gamma)\big)$ updates suffice to drive the expected residual below $\varepsilon$. In particular, the right-hand side is minimized in t at $m_{tB}=2K$.
Remark 7.2. The hypothesis is a sectorial coercivity statement: in expectation, one update removes a fixed fraction of the degree-K residual. Admissibility (data–processing) and a positive principal-angle margin guarantee such a contraction. Empirically this appears as a straight line of slope $\log(1-\gamma)$ in semi-log MSE plots.
8. Sample-Complexity Consequences
In this section we derive, from first principles in the comparison geometry, (i) exact variance formulae for the unbiased importance estimator (GBS-I), (ii) an asymptotic mean–square error (MSE) for the probability/square–root estimator (GBS-P) under the usual sign convention, and (iii) sample–size envelopes in the $(\varepsilon,\delta)$ sense via Chebyshev/Markov bounds. Throughout we fix a degree $K\ge1$, a covariance B with spectrum strictly inside $(-1,1)$, and a scale t in the admissible interval. We also use the hafnian homogeneity $\operatorname{haf}\big((tB)_I\big)=t^{K}\operatorname{haf}(B_I)$ for $|I|=2K$, and the GBS law
$$p_{tB}(I)\;=\;\frac{\operatorname{haf}\big((tB)_I\big)^{2}}{I!}\;\prod_{j=1}^{N}\sqrt{1-t^{2}\lambda_j^{2}}\,.$$
8.1. GBS-I (Unbiased): Exact Variance and Guarantee
Define the unbiased importance estimator for $E_K$ as the empirical mean $\widehat E_K^{\,\mathrm I}$ of n i.i.d. copies of the importance weight
$$Y\;=\;\frac{a_I\,\operatorname{haf}(B_I)}{p_{tB}(I)},\qquad I\sim p_{tB},\ |I|=2K$$
(with $Y=0$ on patterns of other degrees). Then $\mathbb{E}[Y]=E_K$ by direct substitution. A single-line computation using the law $p_{tB}$ and the homogeneity gives
$$\mathbb{E}[Y^{2}]\;=\;\frac{S_K(a)}{t^{2K}\prod_{j}\sqrt{1-t^{2}\lambda_j^{2}}},$$
where $S_K(a)$ is a t–independent coefficient determined by the degree-K blueprint alone.
Proposition 8.1 (Variance of GBS-I). For the unbiased estimator above,
$$\operatorname{Var}\big[\widehat E_K^{\,\mathrm I}\big]\;=\;\frac1n\Bigg(\frac{S_K(a)}{t^{2K}\prod_{j}\sqrt{1-t^{2}\lambda_j^{2}}}\;-\;E_K^{2}\Bigg).\qquad(81)$$
Proof. Since $\widehat E_K^{\,\mathrm I}$ is the empirical mean of i.i.d. Y with $\mathbb{E}[Y]=E_K$, we have $\operatorname{Var}[\widehat E_K^{\,\mathrm I}]=\tfrac1n\big(\mathbb{E}[Y^{2}]-E_K^{2}\big)$, which after the computation of $\mathbb{E}[Y^{2}]$ above gives (81). □
Corollary 8.2 ($(\varepsilon,\delta)$ sample size for GBS-I). By Chebyshev, $\Pr\big(|\widehat E_K^{\,\mathrm I}-E_K|\ge\varepsilon\big)\le\operatorname{Var}[\widehat E_K^{\,\mathrm I}]/\varepsilon^{2}$. Hence it suffices to take
$$n\;\ge\;\frac{1}{\varepsilon^{2}\delta}\Bigg(\frac{S_K(a)}{t^{2K}\prod_{j}\sqrt{1-t^{2}\lambda_j^{2}}}\;-\;E_K^{2}\Bigg).$$
In particular, at the photon-match $m_{tB}=2K$ (Theorem 7.5, Calibration Optimality), the leading factor is minimized, thus minimizing the right-hand side.
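A minimal helper implementing this Chebyshev envelope, with the GBS-I second-moment term in the reconstructed form used in Corollary 8.2; the blueprint coefficient $S_K(a)$ and the target value are stand-ins.

```python
import math
import numpy as np

def gbsi_second_moment(S_K, lam, t, K):
    """Per-sample second moment S_K / (t^{2K} * prod_j sqrt(1 - t^2 lam_j^2));
    S_K is the t-independent blueprint coefficient (stand-in value here)."""
    Z_t = np.prod(np.sqrt(1.0 - (t * lam) ** 2))
    return S_K / (t ** (2 * K) * Z_t)

def chebyshev_sample_size(variance_per_sample, eps, delta):
    """Smallest n with Var/(n eps^2) <= delta, i.e. n >= Var / (eps^2 * delta)."""
    return math.ceil(variance_per_sample / (eps ** 2 * delta))

lam = np.array([0.9, 0.7, 0.5, 0.3])
K, t_star = 2, 0.95                      # t_star would come from the matching equation m_{tB} = 2K
E_K = 0.2                                # stand-in value of the degree-K target
var_per_sample = gbsi_second_moment(S_K=1.0, lam=lam, t=t_star, K=K) - E_K ** 2
print(chebyshev_sample_size(var_per_sample, eps=1e-2, delta=0.05))
```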
8.2. GBS-P (Bias Vanishes): Asymptotic MSE via Delta Method
For the (degree-K) probability/square–root estimator we assume the standard sign convention for the hafnians is available.¹ Define the GBS-P estimator by plugging the empirical pattern frequencies into the square–root readout. Since the empirical frequencies are unbiased for the pattern probabilities, the plug-in estimator is consistent and its bias vanishes as $n\to\infty$. The multinomial delta method with the empirical-frequency covariance therefore yields an asymptotic normal approximation; a short calculation identifies the leading variance term, which carries the same t–dependent factor $t^{-2K}\prod_{j}(1-t^{2}\lambda_j^{2})^{-1/2}$ as GBS-I. We conclude:
Proposition 8.3 (Asymptotic MSE of GBS-P). As $n\to\infty$, the MSE of the degree-K GBS-P estimator is $O(1/n)$ with a leading coefficient equal to a t–independent blueprint functional times $t^{-2K}\prod_{j}(1-t^{2}\lambda_j^{2})^{-1/2}$.
Corollary 8.4 ($(\varepsilon,\delta)$ sample size for GBS-P). By Markov’s inequality applied to the centered square, an asymptotic sufficient sample size is the leading per-sample MSE coefficient of Proposition 8.3 divided by $\varepsilon^{2}\delta$. As in Corollary 8.2, the right-hand side is minimized at the photon match $m_{tB}=2K$.
8.3. When GBS Outperforms Monte Carlo
Let the plain Monte Carlo estimator for the degree-K contribution be the empirical mean of $f_K(x)=\sum_{|I|=2K}a_I\,x^{I}$ over n i.i.d. draws $x\sim\mathcal{N}(0,B)$. Then the MC estimator is unbiased, and a standard Wick expansion gives its variance as a quadratic form over pairs of degree-2K multi–indices (90).
The quadratic form in (90) mixes all degree-2K multi–indices pairwise via mixed Gaussian moments and therefore grows combinatorially in K (and, under mild spectral hypotheses, exponentially in an effective dimension), whereas the GBS prefactors (81)–(87) are controlled by the single convex factor $t^{-2K}\prod_{j}(1-t^{2}\lambda_j^{2})^{-1/2}$ after photon–number matching.
Theorem 8.5 (Regime separation: polynomial vs exponential). Fix B with eigenvalues bounded away from 1 and coefficients on the unit sphere of the degree-2K coefficient space. Then, for sequences of instances in which K grows with N, one has the complexity separation: $n_{\mathrm{GBS}}$ grows polynomially in N while $n_{\mathrm{MC}}$ grows exponentially, for achieving the same tolerance, provided the GBS sampler is photon–matched ($m_{tB}=2K$). For fixed small K the two complexities are of the same order, and hybrid policies (MC for small K, GBS beyond a data–driven crossover) are optimal.
Proof sketch. Under the matching law $m_{tB}=2K$, the GBS prefactors are minimized and remain polynomial in N, because the matched scale keeps $t\lambda_j$ bounded away from 1 uniformly in N at a fixed spectral class, so the factor $t^{-2K}\prod_{j}(1-t^{2}\lambda_j^{2})^{-1/2}$ stays controlled. In contrast, (90) sums combinatorially many terms over pairs of degree-2K multi–indices whose mixed moments are (for generic B) bounded away from zero on a positive fraction of pairs, whence an exponential lower bound in K. Careful bookkeeping of constants gives the stated separation. □
8.4. Theory.
The DPI, calibration optimality ($m_{tB}=2K$), the angle bounds, the exact GBS–I variance, the large-n GBS–P MSE, and the $(\varepsilon,\delta)$ envelopes are established above in full.
Consequences. The exact/algebraic forms (81)–(87) and the convexity identity for $g(t)$ imply that photon–number matching $m_{tB}=2K$ simultaneously (i) minimizes the leading factors in the GBS variance/MSE, (ii) sharpens the principal–angle bounds (conditioning), and (iii) maximizes the Lyapunov rate in the residual contraction. These yield non–asymptotic $(\varepsilon,\delta)$ envelopes for the GBS estimators and the regime map against plain MC, as asserted in Theorem 8.5.
9. Design Rules from DSFL (Actionable)
This section collects operational prescriptions that follow from the comparison geometry developed above and states them as verifiable statements with proofs or proof sketches. Throughout we fix a degree K, a covariance B, and a scale t. The Hilbert space, blueprint, physical law, residual and principal-angle factor are as in the previous sections.
9.1. Calibration by Photon-Number Matching
Theorem 9.1 (Calibration rule). Let $R_K(t)$ denote the degree-K DSFL residual at scale t. Assume $R_K(t)=A\,e^{g(t)}+C$ with $A>0$ and C independent of t. Then $t^{\star}$ uniquely minimizes $R_K$ on the admissible interval if and only if $m_{t^{\star}B}=2K$. Moreover, at $t^{\star}$ the leading factors in the variance/MSE bounds of the degree-K GBS estimators are minimized.
Proof. The identity $g'(t)=\tfrac1t(m_{tB}-2K)$ and the strict convexity of g prove existence and uniqueness of the minimizer with stationarity condition $m_{t^{\star}B}=2K$. The leading t–dependence of the residual and of the large-n MSE of both estimators is proportional to $e^{g(t)}$ by the variance/MSE formulæ in the sample-complexity section; hence the same minimizer applies. □
9.2. Angle Check and Channel Selection
Define $c_F(t)$ as in Section 7.4 and let $c_-\le c_+$ be the t–independent constants from the angle–variance proposition.
Proposition 9.2 (Angle gate for estimator dispersion). For the degree-K estimators at fixed t and n samples, the dispersion is sandwiched between $c_-(1-c_F(t))$ and $c_+(1+c_F(t))$ times the split energy divided by n. Consequently, if $c_F(t)$ is small the dispersion is essentially at its aligned value, and if $c_F(t)$ is bounded away from 1 the dispersion remains within a constant factor of that value.
Proof. The two-sided bound is the angle–variance link. The corollaries follow by monotonicity in $c_F(t)$. □
Theorem 9.3 (Angle threshold ⇒ GBS vs MC decision). Let $n_{\mathrm{GBS}}$ and $n_{\mathrm{MC}}$ be guaranteed sample sizes (for a fixed tolerance pair $(\varepsilon,\delta)$) derived from Chebyshev/Markov bounds for the degree-K task. There exists a computable threshold $\eta^{\star}$ such that $c_F(t)\le\eta^{\star}$ implies $n_{\mathrm{GBS}}\le n_{\mathrm{MC}}$. Conversely, there is a computable level such that if the alignment margin η falls below it, then $n_{\mathrm{MC}}\le n_{\mathrm{GBS}}$.
Proof sketch. Insert the angle bounds into the Chebyshev/Markov forms for $n_{\mathrm{GBS}}$ (resp. the corresponding quantity for MC). The MC variance involves the full mixed moment structure, while the GBS variance is degree-filtered; both are bounded above/below by $(1\pm c_F(t))$ times t–independent constants. Solving the resulting inequality for $c_F(t)$ yields the thresholds (explicit in terms of the constants and the MC dispersion proxies). □
9.2.1. Practical Surrogate
Let $F_t$ be the feature matrix whose columns are the (normalized) sampling features at scale t and let $G_K$ be a basis for the degree-K blueprint range. Then the cross-Gram norm $\hat c_K(t)=\|F_t^{\top}G_K\|_2$ upper bounds $c_F(t)$ up to discretization error, so $\hat c_K(t)\le\eta^{\star}$ is a conservative sufficient condition for choosing GBS.
9.3. Degree Decomposition and Hybrid Allocation
Let $f=\sum_k f_k$ be the degree decomposition and write $E_k$ for the degree-k contribution. Suppose each degree-k component is estimated either by GBS (at its matched $t_k^{\star}$) or by MC, yielding per-degree guaranteed sample sizes $n_k^{\mathrm{GBS}}$ and $n_k^{\mathrm{MC}}$ under a common budget with per-degree tolerances $\varepsilon_k$ satisfying $\sum_k\varepsilon_k\le\varepsilon$.
Proposition 9.4 (Greedy degree-wise policy is optimal under monotone margins). Assume $n_k^{\mathrm{GBS}}$ and $n_k^{\mathrm{MC}}$ are positive and the advantage margins $\Delta_k=n_k^{\mathrm{MC}}-n_k^{\mathrm{GBS}}$ are nondecreasing in k. Then the policy that assigns GBS to every k with $\Delta_k>0$ (i.e. to the largest degrees) and MC otherwise minimizes the total sample size among all degree-wise assignments obeying the same tolerance split.
Proof. Under the monotonicity hypothesis, the sequence of savings $\Delta_k$ is ordered; an exchange argument shows any assignment deviating from the “assign GBS to the largest positive $\Delta_k$” rule can be improved by swapping one GBS and one MC choice, strictly reducing total cost. Iterating yields the stated greedy optimum. □
Corollary 9.5 (A posteriori hybrid switch). If an a priori monotonicity check is unavailable, an empirical DSFL criterion suffices: after a warm-up with a fixed number of samples per degree, assign each degree to the channel with the smaller empirical residual (equivalently, the smaller estimated guaranteed sample size); a minimal sketch follows. By the Lyapunov contraction and angle bounds, this rule is asymptotically consistent.
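A sketch of this degree-wise routing, assuming warm-up variance estimates per degree and a Chebyshev-style envelope; the numbers and names are illustrative only.

```python
import numpy as np

def guaranteed_n(var_estimate, eps, delta):
    # Chebyshev-style envelope: n >= Var / (eps^2 * delta)
    return int(np.ceil(var_estimate / (eps ** 2 * delta)))

def hybrid_assignment(warmup_var_gbs, warmup_var_mc, eps_per_degree, delta):
    """Return {degree: 'GBS' or 'MC'}, choosing the channel with the smaller guaranteed n."""
    policy = {}
    for k in warmup_var_gbs:
        n_gbs = guaranteed_n(warmup_var_gbs[k], eps_per_degree[k], delta)
        n_mc = guaranteed_n(warmup_var_mc[k], eps_per_degree[k], delta)
        policy[k] = "GBS" if n_gbs <= n_mc else "MC"
    return policy

# Toy warm-up variances per degree k (illustrative numbers only).
var_gbs = {1: 0.8, 2: 1.0, 3: 1.5, 4: 2.0}
var_mc = {1: 0.5, 2: 1.2, 3: 6.0, 4: 40.0}
eps_k = {k: 0.01 for k in var_gbs}
print(hybrid_assignment(var_gbs, var_mc, eps_k, delta=0.05))
# MC for the lowest degree; GBS once the MC dispersion blows up at higher degrees.
```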
9.4. Seam Stability Under Composition (Gluing) and Leakage Quantification
Consider a pipeline of L updates $(U_\ell,\Phi_\ell)$, $\ell=1,\dots,L$, where each $U_\ell$ acts on the blueprint side and each $\Phi_\ell$ on the physical side (retuning, counting, linear optical blocks, etc.).
Proposition 9.6 (No inflation under gluing). If every step is DSFL-admissible, i.e. it intertwines with the calibration and satisfies $\|\Phi_\ell\|\le1$, then the composite update satisfies the same data–processing inequality: the final residual is bounded by the initial one.
Proof. The data–processing inequality holds at each step; induction on ℓ yields the composite bound. □
Proposition 9.7 (Angle leakage bound). Let $\theta_\ell$ be the Friedrichs angle between the degree-K blueprint subspace and the sampling feature subspace after step ℓ. If each $\Phi_\ell$ is a contraction onto a feature subspace, then the cosines degrade at most multiplicatively across the seams. In particular, if the per-seam misalignments are summable, then $\cos\theta_\ell$ stays bounded away from 1 and the angle–variance bounds remain uniformly coercive along the pipeline.
Proof sketch. For orthogonal projectors the norm of a product of cross-projections is sub-multiplicative. The CS-decomposition bounds each factor by the corresponding seam cosine; the asserted bound follows by applying the resulting product inequality to the composite projection. □
9.4.1. Consequence
Under calibrated retuning at each seam (so that $m_{t_\ell B_\ell}=2K$ locally), the principal angle cannot collapse in finite-length pipelines formed by admissible blocks whose individual misalignments are summable; thus residuals do not inflate and the Lyapunov envelope persists.
10. Experiments
This section describes synthetic and hardware-aligned numerical studies designed to validate the DSFL predictions. We specify the experimental design, the estimators and metrics, and we state formal consistency properties that connect empirical readouts (MSE curves, angle surrogates, advantage percentages) to the theoretical quantities in the preceding sections. Throughout, degree-K quantities are taken at the photon-match unless otherwise indicated; empirical laws are formed from n samples.
10.1. Simulated GBS and Synthetic
10.1.1. Design.
For each configuration:
Draw eigenvalue spectra from prescribed spectral classes, with amplitude and decay parameters chosen so that the spectrum stays strictly inside $(-1,1)$.
Draw a random Haar orthogonal matrix Q and set $B=Q\,\mathrm{diag}(\lambda)\,Q^{\top}$.
Sample the degree-K blueprint coefficients uniformly on the unit sphere.
Compute the matched scale $t^{\star}$ by solving $m_{t^{\star}B}=2K$, i.e. find the unique root of the monotone calibration Equation (26); a minimal configuration-generation sketch is given after this list.
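A minimal configuration-generation sketch for the design above. The spectrum parameterization is one plausible choice, the blueprint size is a stand-in for the degree-K index set, and the matched scale is obtained from the same monotone equation as in Section 2.

```python
import numpy as np
from scipy.stats import ortho_group
from scipy.optimize import brentq

def draw_spectrum(N, amplitude=0.9, decay=0.5):
    """Polynomially decaying spectrum strictly inside (0, 1); one illustrative class."""
    j = np.arange(1, N + 1)
    return amplitude / j ** decay

def draw_configuration(N, K, rng):
    lam = draw_spectrum(N)
    Q = ortho_group.rvs(N, random_state=42)              # Haar-random orthogonal matrix
    B = Q @ np.diag(lam) @ Q.T
    n_coeffs = 64                                        # stand-in size of the degree-K index set
    a = rng.standard_normal(n_coeffs)
    a /= np.linalg.norm(a)                               # blueprint on the unit sphere
    m = lambda t: np.sum((t * lam) ** 2 / (1 - (t * lam) ** 2)) - 2 * K
    t_star = brentq(m, 1e-12, (1 - 1e-9) / lam.max())    # photon-number match m_{tB} = 2K
    return B, a, t_star

rng = np.random.default_rng(7)
B, a, t_star = draw_configuration(N=8, K=2, rng=rng)
print("t* =", t_star)
```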
10.1.2. Estimators and Metrics
For each configuration and for a range of sample sizes n:
Form the GBS-I and GBS-P estimates from n i.i.d. synthetic GBS draws at $t^{\star}$ (via the hafnian-defined pmf).
Form the plain Monte Carlo estimator using n i.i.d. Gaussian draws $x\sim\mathcal{N}(0,B)$ and Wick reconstruction to isolate degree K if desired.
Estimate MSEs and empirical residuals over independent repetitions.
Report guaranteed sample sizes by inverting Chebyshev/Markov envelopes calibrated with empirical variances.
Compute efficiency ratios $n_{\mathrm{MC}}/n_{\mathrm{GBS}}$ and advantage percentages (the fraction of configurations with ratio above one); a small post-processing sketch follows this list.
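A short post-processing sketch for these metrics, computing efficiency ratios and the advantage percentage over a configuration ensemble from arrays of guaranteed sample sizes (illustrative data).

```python
import numpy as np

def advantage_summary(n_mc, n_gbs):
    """Efficiency ratios n_MC / n_GBS and the percentage of configurations with ratio > 1."""
    n_mc, n_gbs = np.asarray(n_mc, float), np.asarray(n_gbs, float)
    ratios = n_mc / n_gbs
    pct_advantage = 100.0 * np.mean(ratios > 1.0)
    return ratios, pct_advantage

# Guaranteed sample sizes per configuration (stand-in numbers).
n_mc = [5.0e6, 2.0e5, 9.0e6, 4.0e4, 7.0e7]
n_gbs = [3.0e5, 2.5e5, 1.0e6, 6.0e4, 2.0e6]
ratios, pct = advantage_summary(n_mc, n_gbs)
print("efficiency ratios :", np.round(ratios, 2))
print("advantage percent :", pct)
```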
10.1.3. Theoretical Guarantees
Proposition 10.1 (Consistency of advantage estimators). Fix a degree K and a spectrum class bounded away from 1. Let the empirical guaranteed sample sizes be defined from empirical variances with r repetitions and n samples per repetition at matched $t^{\star}$. Then, as r and n grow, they converge almost surely to the population Chebyshev/Markov sample sizes. Consequently, the empirical advantage percentage converges almost surely to the true percentage under the same configuration ensemble.
Proof sketch. Strong laws for empirical second moments of linear statistics of i.i.d. draws imply almost-sure convergence of variance estimators. Continuity of the inversion map yields the stated limits; the law of large numbers over configuration draws implies convergence of frequencies. □
10.2. Ablations Testing DSFL Predictions
10.2.1. T-Sweep (U-Shape)
For each fixed configuration and degree K, sweep t in a neighborhood of $t^{\star}$ and evaluate the empirical MSE and residual.
Proposition 10.2 (Empirical convexity and minimum at the match). Under the simulators above, for any fixed configuration and sufficiently large n, the empirical MSE and residual curves in t are approximately convex with a unique minimum at $t^{\star}$ (the solution of $m_{tB}=2K$). Moreover, their second derivatives with respect to t are strictly positive up to sampling errors.
Proof sketch. The residual and variance/MSE leading factors are proportional to $e^{g(t)}$; the scale-calculus lemma showed $g''>0$ and stationarity iff $m_{tB}=2K$. Empirical averages converge to their expectations by the SLLN, hence the observed convex U-shape and unique minimum at $t^{\star}$ up to sampling error. □
10.2.2. Angle Surrogate vs Variance
Construct feature matrices $F_t$ and the blueprint basis $G_K$, compute the surrogate $\hat c_K(t)=\|F_t^{\top}G_K\|_2$, and correlate it with the empirical variance across t.
Proposition 10.3 (Monotone trend). For a fixed configuration and large n, the map $t\mapsto n\,\widehat{\operatorname{Var}}$ is bounded between two affine functions of $\hat c_K(t)$, with the approximation error vanishing in probability as n and the number of warm-up features grow.
Proof sketch. Replace the projector norm by the computable surrogate $\hat c_K(t)$; the angle–variance proposition yields the population inequalities. Empirical variances converge to their population values, and the Gram-based surrogate is continuous in sample averages of features, giving the displayed bounds up to vanishing errors. □
10.2.3. Hybrid Crossover
Partition the blueprint across degrees and estimate the total samples needed by degree-wise GBS vs MC. Plot the crossover index where the hybrid policy switches.
Proposition 10.4 (Crossover index decreases with N and increases with spectral decay). Let $k^{\star}$ be the smallest degree at which the hybrid policy switches to GBS, under a polynomial (or exponential) eigen-decay parameter β. Then, for sufficiently large N and fixed tolerance, $k^{\star}$ is nonincreasing in N and nondecreasing in β, i.e. larger systems favor GBS at lower degrees, while stronger decay (smaller effective dimension) delays the crossover.
Proof sketch. The MC dispersion includes mixed-moment growth with the effective dimension and increases rapidly with N when decay is weak; GBS dispersion at the matched degree depends primarily on degree-k features. Comparing the two sample-size envelopes and differentiating w.r.t. the structural parameters yields the stated monotonicities. □
10.3. Hardware-Aligned Kernels (Optional)
10.3.1. Models
Implement time-bin/loop architectures by stochastic kernels that compose (i) squeezing maps defining the covariance, (ii) interferometer unitary draws, and (iii) photon-number-resolving measurements (or threshold versions). Insert calibrated loss channels modeled as contractions in the comparison geometry.
10.3.2. Stress Tests
For each pipeline, concatenate L calibrated blocks and measure the empirical residual after each seam, as well as the angle surrogate.
Proposition 10.5 (No inflation under concatenation with calibrated retuning). Assume each block is DSFL-admissible and retuned to its local match $m_{t_\ell B_\ell}=2K$. Then the empirical residual is nonincreasing along the pipeline up to sampling error.
Proof. Apply the DPI for each admissible block (projection/contraction and intertwining), and use convergence of empirical residuals to their expectations. □
Proposition 10.6 (Robustness to small losses). Let a loss channel of strength ε act between calibrated blocks. Then the matched scale solves the correspondingly perturbed matching equation, and the contraction rate degrades by at most $O(\varepsilon)$; consequently, the empirical semi-log MSE slope remains within $O(\varepsilon)$ of the noiseless value for fixed n.
Proof sketch. A lossy Gaussian channel induces a perturbed covariance $B+\Delta$ with $\|\Delta\|=O(\varepsilon)$; Davis–Kahan perturbation gives an $O(\varepsilon)$ rotation of the sampling subspace, while the scale calculus shows the matched scale shifts continuously in ε. The Lyapunov rate is monotone in the angle margin, hence perturbs by $O(\varepsilon)$. □
10.3.3. Reporting
For each setting we present: (i) MSE and residual vs n (semi-log slopes), (ii) MSE vs t (U-shapes with minimum at $t^{\star}$), (iii) variance vs angle-surrogate curves, (iv) advantage heatmaps over the configuration grid, and (v) pipeline residual traces across seams with/without loss.
11. Concluding Discussion
This work reframes Gaussian Boson Sampling (GBS) as a calibrated variance–reduction engine operating in a single comparison geometry. Within that geometry, the Deterministic Statistical Feedback Law (DSFL) supplies three pillars that, taken together, elevate the “folk rules” of GBS into theorems: (i) calibration optimality, which identifies photon–number matching $m_{tB}=2K$ as the unique minimizer of the leading variance/MSE factor; (ii) principal–angle conditioning, which yields sharp two–sided dispersion bounds with multiplicative constants $1\pm c_F$; and (iii) a Lyapunov envelope that converts one–step contraction into a non–asymptotic exponential error decay with rate $\gamma$ maximal at the photon match. These statements are not merely sufficient conditions; they jointly explain why GBS beats Monte Carlo (MC) in high–degree regimes, how to tune the device to realize the advantage, and what rates to expect in finite samples.
On the algorithmic side, the DSFL lens unifies unbiased (GBS–I) and biased–but–consistent (GBS–P) estimators, shows that both inherit the same convex t–dependence, and clarifies the role of degree partitioning: once the target is decomposed into degree components, GBS naturally concentrates effort where MC pays an exponential price through cross–moment couplings, while a hybrid policy remains optimal for small degrees. On the systems side, DSFL’s data–processing inequality guarantees no residual inflation under counting, retuning, or block concatenation; principal–angle control quantifies leakage across interfaces; and perturbation arguments (Davis–Kahan type) explain the graceful degradation under small hardware noise together with recovery by re–matching t.
The implications are practical. First, calibration reduces to solving a strictly convex, one–dimensional equation for t, hence is robust and fast; this makes photon–number matching a pre–run, certifiable step. Second, a computable angle surrogate (e.g., a cross–Gram norm between sampling features and blueprint basis) provides an a priori gate for channel selection (GBS vs MC), enabling hybrid deployment without post–hoc tuning. Third, the Lyapunov envelope gives finite–n guarantees that can be monitored online by empirical DSFL residuals, turning semi–log MSE plots into quantitative health checks.
There are natural extensions. For non–Gaussian observables or generalized boson–sampling models (loop–hafnians, threshold detection), one must redesign the feature map and admissible updates so that the DPI persists and bias terms remain controlled; the same Hilbertian template applies. Angle surrogates can be refined—e.g. via randomized subspace embeddings—to scale to very large feature dimensions while preserving principal–angle fidelity. Finally, the experimental program suggested here—t–sweeps for convex U–shapes, variance–vs–angle curves, and block–wise residual traces—offers falsifiable diagnostics for hardware pipelines, and a path to controller–in–the–loop calibration where t and the interferometer are adjusted to maximize the Lyapunov rate subject to device constraints.
In sum, DSFL furnishes a predict–then–prove methodology for GBS: it predicts the optimal operating point ($m_{tB}=2K$), the conditioning ($c_F$), and the rate ($\gamma$) before data are collected; it proves that admissible updates cannot worsen the calibrated misfit; and it turns those predictions into actionable rules that persist under composition and mild noise. This synthesis clarifies when and why GBS delivers sample–complexity advantages over MC, and how to tune photonic pipelines to realize them in practice.
Funding
No external funding was received for this work.
Data and code availability
This work uses only synthetic data produced by the algorithms described in the paper; no external datasets were used or analyzed. A fully reproducible package containing (i) source code, (ii) parameter files for every figure, (iii) exact random seeds, and (iv) an exportable runtime environment (both environment.yml and requirements.txt) will be deposited at Zenodo under an archival DOI (to be added at acceptance) and mirrored on GitHub (read-only tag matching the DOI). The repository includes a one-click reproducibility script that regenerates all tables and figures in the manuscript. Tested platforms: Linux (x86_64) and macOS (arm64) with Python ≥ 3.10; core dependencies: NumPy, SciPy, Matplotlib, and JAX/Numba (optional, for accelerated runs). Code will be released under an OSI-approved license (MIT).
Acknowledgments
The author affirms sole authorship and received no external funding.
Conflicts of Interest
The author declares no competing interests.
Appendix A. Crisp Theorem Statements (Ready to Formalize)
Appendix A.1. Standing Setup
Let $\mathcal{H}$ be a Hilbert space with norm $\|\cdot\|$, and let two closed subspaces (the statistical and physical channels) be given. A calibration is a pair of bounded maps between the channels satisfying the interchangeability identities of Definition 6.1. For a blueprint element and a response element, the Residual of Sameness is their squared distance in $\mathcal{H}$. An admissible update is a pair $(U,\Phi)$ with $\|\Phi\|\le1$ such that the update intertwines with the calibration. For angle bounds, let M, N be closed subspaces with orthogonal projectors $P_M$, $P_N$, and Friedrichs cosine $c_F(M,N)$ as in Definition 6.6.
DPI (GBS Admissibility)
Theorem A.1 (Data–processing inequality). If $(U,\Phi)$ is admissible, then for every blueprint and response the residual after the update is no larger than before.
Corollary A.2 (Composition). If two updates are admissible, then their composition is admissible and satisfies the same inequality.
Proof. Intertwining and nonexpansiveness are preserved under composition; apply Theorem A.1. □
Calibration Optimality ⇒ $m_{tB}=2K$
Appendix A.2. GBS Specialization
Let B be symmetric with spectrum strictly inside $(-1,1)$, and fix a degree $K\ge1$. For a scale t in the admissible interval, define
$$g(t)\;=\;-2K\log t\;-\;\tfrac12\sum_{j=1}^{N}\log\!\big(1-t^{2}\lambda_j^{2}\big),\qquad m_{tB}\;=\;\sum_{j=1}^{N}\frac{(t\lambda_j)^{2}}{1-(t\lambda_j)^{2}}.$$
Assume the degree-K residual admits the leading form $R_K(t)=A\,e^{g(t)}+C$ with constants $A>0$, C independent of t.
Lemma A.3 (Scale calculus). $g'(t)=\tfrac1t\,(m_{tB}-2K)$, and $g''(t)>0$ on the admissible interval.
Proof. Differentiate termwise. □
Theorem A.4 (Calibration optimality). The unique minimizer $t^{\star}$ of $R_K$ is characterized by the photon–number match $m_{t^{\star}B}=2K$. Equivalently, $t^{\star}$ minimizes the leading t–factors in the variance/MSE of the degree-K GBS estimators.
Proof. By Lemma A.3, critical points of g satisfy $m_{tB}=2K$ and g is strictly convex, hence the minimizer is unique. The variance/MSE leading factors for GBS-I and GBS-P are proportional to $e^{g(t)}$; thus they share the same minimizer. □
What is provable once clearly stated assumptions are added (action items)
Angle ⇒ rate monotonicity. We state that the Lyapunov rate $\gamma(t)$ improves as the Friedrichs cosine $c_F(t)$ decreases. Action: Include Lemma B.2, which bounds the rate between linear functions of the angle margin $1-c_F(t)$,
$$ \underline c\,\big(1-c_F(t)\big)\ \le\ \gamma(t)\ \le\ \overline c\,\big(1-c_F(t)\big), $$
with $t$–independent constants $0<\underline c\le\overline c<\infty$. The proof is a two–subspace argument using the through–subspace contraction (Proposition 6.8) and the sharp bounds in Proposition A.7.
Residual factorization. We use the representation $R_K(t)=a_K+b_K\,\varphi_K(t)$ to map calibration optimality to residual minimization. Action: Add Lemma A.5 (Degree–K residual factorization) showing that all $t$–dependence factors through $t^{2K}$ and $\prod_j(1-(t\lambda_j)^2)^{1/2}$ for the degree–$K$ laws. A concise proof sketch is included in the appendix, relying on hafnian homogeneity and linearity of the readouts.
Lemma A.5 (Residual factorization for degree-$K$). Fix a degree $K\in\mathbb N$, a covariance $B$ with spectrum in $(-1,1)$, and fixed readout coefficients. With $m_t(B)$ and $\varphi_K(t)$ as in Appendix A.2, consider the degree-$K$ DSFL residual $R_K(t)$ built from either readout (GBS-I or GBS-P). Then there exist $t$–independent constants $a_K\ge 0$ and $b_K\ge 0$ (depending only on the coefficients and $B$) such that
$$ R_K(t)=a_K+b_K\,\varphi_K(t). $$
Sketch. For a multi-index $\alpha$ with $|\alpha|=2K$, hafnian homogeneity gives $\operatorname{haf}\big((tB)_{\alpha}\big)=t^{K}\operatorname{haf}(B_{\alpha})$, whence $|\operatorname{haf}((tB)_{\alpha})|^{2}=t^{2K}|\operatorname{haf}(B_{\alpha})|^{2}$. Thus every coordinate of the degree-$K$ feature vector carries the common scalar factor $t^{2K}$. The degree-$K$ readouts introduce only the normalization $\prod_{j}(1-(t\lambda_j)^{2})^{1/2}$ (and the outcome probabilities in GBS-I), which does not depend on $t$ beyond $\varphi_K(t)$. Expanding $R_K(t)$ shows the $t$–dependence factors through $\varphi_K(t)$, with all inner products collected into $t$–independent constants $a_K$ and $b_K$. A fully expanded computation (including the multinomial covariance form for GBS-P and the exact second moment for GBS-I) is provided in Appendix A. □
Corollary A.6 (Calibration ⇒ residual minimization). Under the hypotheses of Lemma A.5 (with $b_K>0$), the unique minimizer of $R_K(t)$ on $(0,\|B\|^{-1})$ is characterized by $m_t(B)=2K$.
Proof. By Lemma A.5, $R_K(t)=a_K+b_K\,\varphi_K(t)$ with $b_K>0$. The scale calculus (Lemma A.3) shows that critical points of $\varphi_K$ satisfy $m_t(B)=2K$ and that $\log\varphi_K$ is strictly convex on $(0,\|B\|^{-1})$. Hence the unique minimizer of $R_K$ satisfies $m_t(B)=2K$. □
Proof. By hafnian homogeneity, $\operatorname{haf}((tB)_{\alpha})=t^{K}\operatorname{haf}(B_{\alpha})$ for every multi-index $\alpha$ with $|\alpha|=2K$. Hence each degree-$K$ term splits into a common $t$–dependent scalar times a $t$–independent quantity: the parenthesized factor is independent of $t$ after regrouping; similarly, the cross term is proportional to $\varphi_K(t)$ times a $t$–independent sum, and the remaining term is $t$–independent. Expanding $R_K(t)$ yields the claimed affine dependence on $\varphi_K(t)$ with $a_K,b_K\ge 0$. □
Angle bounds
Proposition A.7 (Two-subspace inequality). Let $\mathcal A,\mathcal B\subseteq\mathcal H$ be closed subspaces with $\mathcal A\cap\mathcal B=\{0\}$ and Friedrichs cosine $c:=c_F(\mathcal A,\mathcal B)<1$. For any $a\in\mathcal A$ and $b\in\mathcal B$,
$$ (1-c)\big(\|a\|^{2}+\|b\|^{2}\big)\ \le\ \|a+b\|^{2}\ \le\ (1+c)\big(\|a\|^{2}+\|b\|^{2}\big). $$
Proof. Expand $\|a+b\|^{2}=\|a\|^{2}+\|b\|^{2}+2\langle a,b\rangle$, and use $|\langle a,b\rangle|\le c\,\|a\|\,\|b\|$, then $2\|a\|\,\|b\|\le\|a\|^{2}+\|b\|^{2}$. □
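As a numerical sanity check (Python with NumPy/SciPy; the dimensions, subspaces, and sample counts are arbitrary illustration choices), one can draw random $a\in\mathcal A$, $b\in\mathcal B$ and confirm that the ratio $\|a+b\|^2/(\|a\|^2+\|b\|^2)$ stays inside $[1-c,\,1+c]$, with $c$ obtained from principal angles:

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(1)
d, k1, k2 = 12, 3, 4                                # ambient dim and subspace dims (k1 + k2 < d)
A = np.linalg.qr(rng.standard_normal((d, k1)))[0]   # orthonormal basis of subspace A
B = np.linalg.qr(rng.standard_normal((d, k2)))[0]   # orthonormal basis of subspace B

# Friedrichs cosine = largest principal-angle cosine (the intersection is trivial a.s.).
c = np.max(np.cos(subspace_angles(A, B)))

worst_low, worst_high = np.inf, 0.0
for _ in range(10_000):
    a = A @ rng.standard_normal(k1)                 # random vector in subspace A
    b = B @ rng.standard_normal(k2)                 # random vector in subspace B
    ratio = np.dot(a + b, a + b) / (np.dot(a, a) + np.dot(b, b))
    worst_low, worst_high = min(worst_low, ratio), max(worst_high, ratio)

print(f"Friedrichs cosine c = {c:.4f}")
print(f"observed ratio range [{worst_low:.4f}, {worst_high:.4f}]  vs  bounds [1-c, 1+c] = [{1-c:.4f}, {1+c:.4f}]")
```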
Appendix A.3. GBS specialization (degree-K).
Let $\Pi_{\mathrm{bp}}$ be the blueprint projector and $\Pi_{\mathrm{sf}}(t)$ the sampling-feature projector at scale $t$, and define $c_F(t):=c_F\big(\operatorname{ran}\Pi_{\mathrm{bp}},\operatorname{ran}\Pi_{\mathrm{sf}}(t)\big)$.
Theorem A.8 (Angle–dispersion link). For the degree-$K$ estimators from $n$ samples at fixed $t$, there exist $t$–independent constants $0<c_-\le c_+<\infty$ such that the variance is bounded below by $(c_-/n)\,(1-c_F(t))$ times the split energy of the readout vector, and above by $(c_+/n)\,(1+c_F(t))$ times the same split energy,
with Var understood as the variance for GBS-I and as the leading asymptotic variance (or squared-bias term) for GBS-P. Equivalently, the MSE is sandwiched between two multiples of the split energy, with multiplicative factors $(1\mp c_F(t))$.
Proof sketch. Write the estimator as $\langle w,\hat p_n\rangle$ with $w$ fixed and $\hat p_n$ the empirical law. Then $\operatorname{Var}=n^{-1}\,w^{\top}\Sigma\,w$, where $\Sigma$ is the single-sample covariance. The range of $\Sigma$ lies in the sampling-feature subspace, while $w$ decomposes into its blueprint and complementary components. Apply Proposition A.7 twice (once to control leakage into the orthogonal complement, once for the split relative to the blueprint subspace) and absorb $t$–independent weights into $c_\pm$. □
Lyapunov bound
Theorem A.9 (Discrete Lyapunov envelope). Let $R_n$ be the degree-$K$ residual and suppose there exists $\gamma(t)\in(0,1]$ such that, conditionally on the past,
$$ \mathbb E\big[R_{n+1}\mid\mathcal F_n\big]\ \le\ \big(1-\gamma(t)\big)\,R_n $$
for an admissible one-step update $(T_{\mathcal P},T_{\mathcal S})$. Then
$$ \mathbb E[R_n]\ \le\ \big(1-\gamma(t)\big)^{n}\,R_0. $$
Moreover, $\gamma(t)$ increases as $c_F(t)$ decreases, and is maximized at the calibration match $m_t(B)=2K$.
Proof. Taking total expectation yields $\mathbb E[R_{n+1}]\le(1-\gamma)\,\mathbb E[R_n]$ with $\gamma=\gamma(t)$. Iteration gives $\mathbb E[R_n]\le(1-\gamma)^{n}R_0$. By Theorem A.8, improved alignment (smaller $c_F(t)$) strengthens the contraction; by Theorem A.4, the match $m_t(B)=2K$ maximizes this gain. □
Corollary A.10 ($(\epsilon,\delta)$ sample-size envelope). If $\mathrm{MSE}_n(t)\le C(t)/n$, then $\Pr\big(|\hat\mu_n-\mu|\ge\epsilon\big)\le\delta$ whenever $n\ge C(t)/(\epsilon^{2}\delta)$. At the match $m_t(B)=2K$, the right-hand side is minimized in $t$.
Proof. Solve for $n$ in the Chebyshev/Markov bound. □
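Both statements are easy to exercise numerically. The sketch below (Python with NumPy) uses a synthetic one-step update that removes, in conditional expectation, a fixed fraction $\gamma$ of the residual, as a stand-in for the hypothesis of Theorem A.9 rather than the paper's device model, and then inverts a variance bound in the spirit of Corollary A.10.

```python
import numpy as np

rng = np.random.default_rng(2)
gamma = 0.15        # one-step contraction rate gamma(t) (synthetic)
R0 = 1.0            # initial residual
n_steps, n_paths = 50, 5000

# Each step multiplies the residual by (1 - gamma * U) with U ~ Uniform(0, 2),
# so E[R_{k+1} | R_k] = (1 - gamma) R_k and R stays nonnegative (gamma * U <= 0.3 < 1).
R = np.full(n_paths, R0)
mean_R = [R0]
for _ in range(n_steps):
    R = R * (1.0 - gamma * rng.uniform(0.0, 2.0, size=n_paths))
    mean_R.append(R.mean())

envelope = R0 * (1.0 - gamma) ** np.arange(n_steps + 1)
print(f"E[R_n] at n={n_steps}: {mean_R[-1]:.3e}   envelope (1-gamma)^n R0: {envelope[-1]:.3e}")

# (eps, delta) sample-size envelope: if MSE <= C/n, Chebyshev gives n >= C / (eps^2 * delta).
def chebyshev_sample_size(C, eps, delta):
    return int(np.ceil(C / (eps ** 2 * delta)))

print("samples needed for eps=0.01, delta=0.05, C=2.5:", chebyshev_sample_size(2.5, 0.01, 0.05))
```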
Appendix B. Explaining the Elements: sDoF/pDoF, Interchangeability, DSFL and the Residual of Sameness R
This section fixes the primitives used throughout. We first embed the statistical degrees of freedom (sDoF) and physical degrees of freedom (pDoF) in one comparison Hilbert space; then we define the single observable (Residual of Sameness); finally we state the admissibility gate that makes the residual monotone and, when dynamics are present, yields a Lyapunov envelope.
Appendix B.1. Interchangeability (Calibration)
Let $\mathcal H$ be a Hilbert space carrying two closed subspaces $\mathcal S$ (statistical channel) and $\mathcal P$ (physical channel). A calibration is a pair of bounded maps $\Phi:\mathcal H\to\mathcal S$ and $\Psi:\mathcal H\to\mathcal P$ satisfying
$$ \Phi\Psi=\Pi_{\mathcal S},\qquad \Psi\Phi=\Pi_{\mathcal P}, $$
with $\Pi_{\mathcal S}$ (resp. $\Pi_{\mathcal P}$) the orthogonal projector onto $\mathcal S$ (resp. $\mathcal P$). Thus $\Phi$ provides calibrated statistical representatives of $\mathcal P$, $\Psi$ provides calibrated physical images of $\mathcal S$, and composing in either channel returns the original.
Appendix B.2. One Observable: the Residual of Sameness
For $p\in\mathcal P$ and $s\in\mathcal S$,
$$ R(p,s)=\|p-\Psi s\|^{2} $$
is the squared distance from $p$ to the calibrated image $\Psi s$. It vanishes iff $p=\Psi s$ and is invariant under isometries preserving the calibration.
Appendix B.3. Admissibility and the Data–Processing Inequality
A pair $(T_{\mathcal P},T_{\mathcal S})$ with $\|T_{\mathcal P}\|\le 1$ and $\|T_{\mathcal S}\|\le 1$ is admissible if
$$ T_{\mathcal P}\Psi=\Psi T_{\mathcal S}. $$
Then the Hilbertian data–processing inequality holds:
$$ R\big(T_{\mathcal P}p,\,T_{\mathcal S}s\big)\ \le\ R(p,s). $$
Hence no admissible update inflates the calibrated mismatch. In operator–algebraic models, $\Phi$ is an orthogonal conditional expectation, and (34) follows from Kadison–Schwarz.
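The gate is easy to exhibit in finite dimensions. In the sketch below (Python with NumPy), the calibration $\Psi$ is taken to be an isometry and the pair $(T_{\mathcal P},T_{\mathcal S})$ is constructed so that intertwining and nonexpansiveness hold by design; this illustrates the inequality and is not the GBS update used elsewhere in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 10, 4

# Psi: isometric calibration map from the statistical channel R^k into R^d.
Psi = np.linalg.qr(rng.standard_normal((d, k)))[0]

# T_S: an arbitrary contraction on the statistical side (singular values clipped to <= 1).
U, s, Vt = np.linalg.svd(rng.standard_normal((k, k)))
T_S = U @ np.diag(np.minimum(s, 1.0)) @ Vt

# T_P: acts as Psi T_S Psi^T on range(Psi) and as a mild contraction on its complement;
# by construction T_P Psi = Psi T_S (intertwining) and ||T_P|| <= 1 (nonexpansive).
alpha = 0.7
T_P = Psi @ T_S @ Psi.T + alpha * (np.eye(d) - Psi @ Psi.T)

def residual(p, s_vec):
    """Residual of Sameness R(p, s) = ||p - Psi s||^2."""
    return np.sum((p - Psi @ s_vec) ** 2)

for _ in range(5):
    p = rng.standard_normal(d)
    s_vec = rng.standard_normal(k)
    print(f"R before = {residual(p, s_vec):8.4f}   R after = {residual(T_P @ p, T_S @ s_vec):8.4f}")
```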
Appendix B.4. Propagation with a Gap: the DSFL Lyapunov Envelope
Assume a residual identity $R_{n+1}=R_n-D_n$ with dissipation $D_n\ge 0$, and bounds $D_n\ge\gamma\,R_n$ for some $\gamma\in(0,1]$. Then
$$ R_n\ \le\ (1-\gamma)^{n}\,R_0. $$
Thus $R_n$ decays at a single rate $\gamma$ fixed by the sector's coercivity margin.
Appendix B.5. Operational Arrow (Accumulated Improvement)
Define the accumulated improvement
$$ A_n:=\sum_{k=0}^{n-1}D_k\ =\ R_0-R_n. $$
It is nondecreasing and quantifies total calibrated mismatch removed, independently of the choice of clock.
Appendix B.6. Minimal Checklist for Applications
- Fix $\mathcal H$, $\mathcal S$, $\mathcal P$; choose a calibration $(\Phi,\Psi)$ satisfying (30).
- Verify (A19) for each step; (34) then guarantees monotonicity.
- If dynamics are available, identify the gap $\gamma$ in (A22) to obtain the envelope in (A23).
Appendix B.7. Generic Two–Channel Application Template
Let $\mathcal S\subseteq\mathcal H$ collect the statistical degrees of freedom and $\mathcal P\subseteq\mathcal H$ the physical ones, with calibration $(\Phi,\Psi)$. Set $e_n:=p_n-\Psi s_n$, $R_n:=\|e_n\|^{2}$. If each step applies an admissible update with a gap $\gamma\in(0,1]$,
$$ R_{n+1}\ \le\ (1-\gamma)\,R_n, $$
then $R_n\le(1-\gamma)^{n}R_0$ and $A_n=R_0-R_n$ is nondecreasing. This dynamic envelope pairs with the static monotonicity $R(T_{\mathcal P}p,T_{\mathcal S}s)\le R(p,s)$ whenever $(T_{\mathcal P},T_{\mathcal S})$ is admissible.
Appendix B.8. Terminology and Mechanism: sDoF/pDoF, DSFL, and the Dual Feedback Loop
Appendix B.8.1. What are sDoF and pDoF?
The statistical degrees of freedom (sDoF) are the blueprint variables $s\in\mathcal S$ that encode the model, prior, or control law; the physical degrees of freedom (pDoF) are the measured/realized responses $p\in\mathcal P$. Both live in the same comparison Hilbert space $\mathcal H$ via a calibration $(\Phi,\Psi)$ obeying
$$ \Phi\Psi=\Pi_{\mathcal S},\qquad \Psi\Phi=\Pi_{\mathcal P}. \tag{A26} $$
Equation (A26) ensures two-way coherence: $\Phi$ reads a calibrated blueprint from a physical state, and $\Psi$ writes a calibrated physical image of a statistical state. The Residual of Sameness
$$ R(p,s)=\|p-\Psi s\|^{2} $$
measures the mismatch between pDoF and the calibrated sDoF within $\mathcal H$.
Appendix B.8.2. Why “DSFL”
The name Deterministic Statistical Feedback Law is shortened to DSFL because updates are deterministic (linear/contractive), the driver is statistical (sDoF), the mechanism is feedback (via the residual $R$), and it functions as a law (one inequality governs all steps). For any admissible update $(T_{\mathcal P},T_{\mathcal S})$ ($\|T_{\mathcal P}\|\le 1$, $\|T_{\mathcal S}\|\le 1$, intertwining with $\Psi$),
$$ R\big(T_{\mathcal P}p,\,T_{\mathcal S}s\big)\ \le\ R(p,s), \tag{A28} $$
so the residual is monotone under admissible updates.
Appendix B.8.3. Dual Feedback Loop (Discrete Time)
There are two tightly coupled feedback paths in the same geometry: a P-loop $p_{n+1}=T_{\mathcal P}\,p_n$ and an S-loop $s_{n+1}=T_{\mathcal S}\,s_n$, with $e_n:=p_n-\Psi s_n$ and $R_n:=\|e_n\|^{2}$. By (A28), $R_{n+1}\le R_n$ sample–wise. If, in addition, a Lyapunov identity with a gap holds, $\mathbb E[R_{n+1}\mid\mathcal F_n]\le(1-\gamma)R_n$, then $\mathbb E[R_n]\le(1-\gamma)^{n}R_0$ (exponential envelope).
Appendix B.8.4. Flow form (Continuous Time)
Writing $R(\tau)$ for the residual in continuous time, assume
$$ \frac{d}{d\tau}R(\tau)\ \le\ -\gamma\,R(\tau) $$
with $\gamma>0$. Then $R(\tau)\le e^{-\gamma\tau}R(0)$ and the decay is exponential. The accumulated drop $A(\tau)=R(0)-R(\tau)$ is nondecreasing and quantifies the total calibrated mismatch removed.
Appendix B.8.5. Practical Interpretation
The P-loop evolves pDoF through a nonexpansive ($\|\cdot\|$–contractive) map that respects calibration, reducing $e$ on the physical side; the S-loop refines sDoF so that $\Psi s$ tracks the evolving $p$. Because both loops live in $\mathcal H$ and intertwine with the calibration, they cannot fabricate residual—only dissipate or transport it. In applications with slow drift, one may update the calibration on a coarser timescale as long as the refreshed maps remain bounded by one and intertwining; between updates the inner dual loop remains admissible and residual–monotone.
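A toy discrete-time instance of the two loops (Python with NumPy) makes the mechanism concrete: the S-loop refits the blueprint by least squares and the P-loop contracts the physical state toward its calibrated image. Both maps are simplifications chosen only so that admissibility holds by construction; the printed decay is a property of this toy, not of any hardware.

```python
import numpy as np

rng = np.random.default_rng(4)
d, k = 12, 3
Psi = np.linalg.qr(rng.standard_normal((d, k)))[0]   # calibration: isometry from sDoF into pDoF space

p = rng.standard_normal(d)      # physical state (pDoF)
s = rng.standard_normal(k)      # blueprint state (sDoF)
beta = 0.6                      # P-loop step size (contraction toward Psi s)

print("step   R = ||p - Psi s||^2")
for n in range(12):
    R = np.sum((p - Psi @ s) ** 2)
    print(f"{n:4d}   {R:.6e}")
    # S-loop: refit the blueprint so that Psi s tracks p (least squares; Psi has orthonormal columns).
    s = Psi.T @ p
    # P-loop: contract the physical state toward its calibrated image.
    p = p + beta * (Psi @ s - p)
```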
What Is Fully Proved (or Reducible to Standard Facts)
Appendix B.8.6. Data–Processing Inequality (DPI) in the DSFL Geometry
Claim. Admissible updates (intertwining with the calibration and nonexpansive) cannot inflate the residual.
Status. Proved in the text (Theorem “DPI”) by a one–line Hilbert–space contraction; stability under composition is immediate.
Appendix B.8.7. Calibration Optimality $m_t(B) = 2K$
Claim. The unique minimizer of the leading variance/MSE factor is the photon–number match.
Status. Proved by strict convexity on $(0,\|B\|^{-1})$ of $\log\varphi_K$.
Scope. Optimal for any quantity whose $t$–dependence factors through $\varphi_K(t)$: the leading terms of the GBS-I variance and GBS-P MSE, and the DSFL residual representation $R_K(t)=a_K+b_K\,\varphi_K(t)$ (when that representation holds).
Appendix B.8.8. Angle (Friedrichs) Bounds
Claim. Two–sided MSE/variance control with factors $(1\pm c_F)$.
Status. Proved via the two–subspace inequalities (Proposition A.7) and the projector bounds; explicit derivations included.
Appendix B.8.9. Exact Variance for GBS-I; Asymptotic MSE for GBS-P
GBS-I. Exact variance formula proved from the pmf and hafnian homogeneity.
GBS-P. Large-n leading term proved by the multinomial delta method (gradient and covariance computed).
Scope/assumptions. For GBS-P we assume the usual sign-known convention and interior probabilities/smoothness so the delta method applies.
Appendix B.8.10. Chebyshev/Markov (ε,δ) Guarantees
Claim. Invert variance/MSE to obtain guaranteed sample sizes.
Status. Proved, once the variance/MSE statements hold, by direct Chebyshev/Markov bounds.
Appendix B.8.11. Lyapunov Envelopes (Discrete/Continuous)
Claim. If a one–step conditional contraction holds, the expected residual decays exponentially.
Status. Proved as an implication via discrete (and continuous) Grönwall.
Scope. The contraction is the hypothesis; the envelope follows rigorously.
What Is Provable Once Clearly Stated Assumptions Are Added
Appendix B.8.12. Residual Representation
Where used. To map calibration optimality to residual minimization.
Status. Provable for degree-K laws and the two estimators, since the only nontrivial $t$–dependence enters through $t^{2K}$ and $\prod_j(1-(t\lambda_j)^2)^{1/2}$. One factors $t$ out of (i) the multinomial covariance and (ii) the linear readout; a short lemma closes the argument.
Appendix B.8.13. “Angle ⇒ rate” Linkage
Claim. The Lyapunov rate $\gamma(t)$ improves as $c_F(t)$ decreases.
Status. Provable once the conditional contraction constant is written as a monotone functional of the split energy controlled by $c_F(t)$. With the projector bounds in hand, a brief comparison lemma formalizes this.
Appendix B.8.14. Composition/Gluing Stability and Small-Noise Robustness
Claim. No inflation under concatenation; small perturbations degrade angles/rates by $O(\|E\|)$.
Status. Concatenation: already follows from the DPI under composition (proved). Robustness: Davis–Kahan/Wedin gives
$$ \|\sin\Theta\|\ \le\ \frac{\|E\|}{\mathrm{gap}}, $$
hence $c_F$ perturbs by $O(\|E\|)$, and rates follow from the previous item.
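The perturbation claim can be spot-checked numerically (Python with NumPy/SciPy). The bound used below is a common Davis–Kahan variant with constant $2\|E\|/\mathrm{gap}$; the matrix sizes, eigengap, and perturbation scale are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(5)
d, k = 30, 5

# Symmetric matrix with a clear eigengap after the top-k eigenvalues.
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
evals = np.concatenate([np.linspace(3.0, 2.0, k), np.linspace(0.8, 0.1, d - k)])
B = Q @ np.diag(evals) @ Q.T

# Small symmetric perturbation E.
E = rng.standard_normal((d, d))
E = 1e-2 * (E + E.T) / 2

def top_k_space(M, k):
    w, V = np.linalg.eigh(M)
    return V[:, np.argsort(w)[::-1][:k]]        # eigenvectors of the k largest eigenvalues

U0, U1 = top_k_space(B, k), top_k_space(B + E, k)
sin_theta = np.sin(np.max(subspace_angles(U0, U1)))
gap = evals[k - 1] - evals[k]                   # eigengap lambda_k - lambda_{k+1}
bound = 2 * np.linalg.norm(E, 2) / gap          # Davis-Kahan sin-theta bound (one standard variant)

print(f"sin(largest principal angle) = {sin_theta:.3e}   O(||E||) bound = {bound:.3e}")
```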
Appendix B.8.15. Asymptotic Normality (CLT) for GBS-P
Claim. Beyond the leading MSE constant, a full CLT holds.
Status. Provable by standard smooth-functionals-of-the-multinomial arguments under mild regularity (already used for the delta-method step).
What Is Solid but We Cite Rather than Re-Prove Here
Corollary B.1 (CLT for GBS-P). Let $K\in\mathbb N$ and write $\Omega_K$ for the set of degree-$K$ outcomes. Fix a scale $t$ and assume the GBS law $p(t)=(p_\alpha(t))_{\alpha\in\Omega_K}$ satisfies
$$ \min_{\alpha\in\Omega_K}p_\alpha(t)>0, $$
i.e. all degree-K outcome probabilities are strictly interior. Let the degree-K GBS-P functional be
$$ g(p)=\sum_{\alpha\in\Omega_K}w_\alpha\sqrt{p_\alpha}, $$
with fixed weights $w_\alpha$ (or signs absorbed into $w$ when they are known). Let $\hat p_n$ be the empirical law based on n i.i.d. samples from $p(t)$, and $\hat\mu_n=g(\hat p_n)$, $\mu=g(p(t))$. Then
$$ \sqrt n\,\big(\hat\mu_n-\mu\big)\ \xrightarrow{\ d\ }\ \mathcal N\big(0,\sigma^{2}(t)\big), $$
where
$$ \sigma^{2}(t)=\nabla g(p)^{\top}\big(\operatorname{diag}(p)-pp^{\top}\big)\nabla g(p),\qquad \nabla g(p)=\Big(\frac{w_\alpha}{2\sqrt{p_\alpha}}\Big)_{\alpha\in\Omega_K}. $$
In particular, the asymptotic variance simplifies to
$$ \sigma^{2}(t)=\tfrac14\big(\|w\|^{2}-\mu^{2}\big), $$
which matches the leading constant in Proposition 8.3.
Proof. Write $p=p(t)$ and $\hat p=\hat p_n$. The interior condition $\min_\alpha p_\alpha>0$ implies the classical multinomial central limit theorem:
$$ \sqrt n\,(\hat p-p)\ \xrightarrow{\ d\ }\ \mathcal N\big(0,\ \operatorname{diag}(p)-pp^{\top}\big). $$
Since $g$ is $C^1$ on the open simplex (each coordinate appears under a square root and $p_\alpha>0$), the delta method yields
$$ \sqrt n\,\big(g(\hat p)-g(p)\big)\ \xrightarrow{\ d\ }\ \mathcal N\big(0,\ \nabla g(p)^{\top}(\operatorname{diag}(p)-pp^{\top})\nabla g(p)\big). $$
The displayed gradient follows from $\partial g/\partial p_\alpha=w_\alpha/(2\sqrt{p_\alpha})$, so $\nabla g(p)=\big(w_\alpha/(2\sqrt{p_\alpha})\big)_\alpha$. Finally,
$$ \sigma^{2}(t)=\tfrac14\sum_\alpha w_\alpha^{2}-\tfrac14\Big(\sum_\alpha w_\alpha\sqrt{p_\alpha}\Big)^{2}=\tfrac14\big(\|w\|^{2}-\mu^{2}\big), $$
because $\sum_\alpha w_\alpha\sqrt{p_\alpha(t)}=\mu$ by construction (Wick + homogeneity). Hence the claim. □
Remark B.1 (Boundary cases). If some coordinates satisfy $p_\alpha=0$, the map $g$ is not differentiable at $p$; one may either (i) remove those indices from $\Omega_K$ (they are $p$-null) and re-normalize, or (ii) work with a directional delta method under sparsity assumptions. The interior condition in the corollary avoids these technicalities.
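A direct simulation of the delta-method step (Python with NumPy) checks the closed-form variance $\sigma^2=\tfrac14(\|w\|^2-\mu^2)$ on a synthetic interior law; the outcome distribution and weights are stand-ins for the degree-K GBS-P law, used only to exercise the formula.

```python
import numpy as np

rng = np.random.default_rng(6)
m = 8                                   # number of degree-K outcomes (synthetic)
p = rng.dirichlet(np.ones(m) * 3.0)     # strictly interior outcome law
w = rng.standard_normal(m)              # fixed readout weights

mu = np.sum(w * np.sqrt(p))
sigma2 = 0.25 * (np.sum(w ** 2) - mu ** 2)       # predicted asymptotic variance

n, reps = 20_000, 4000
g_hat = np.empty(reps)
for r in range(reps):
    counts = rng.multinomial(n, p)
    g_hat[r] = np.sum(w * np.sqrt(counts / n))   # plug-in functional g(p_hat)

emp_var = n * np.var(g_hat)             # should approach sigma2 for large n
print(f"delta-method variance {sigma2:.5f}   empirical n*Var(g_hat) {emp_var:.5f}")
```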
Appendix B.8.16. Open, Nonempty Regions of Advantage; Percentage-of-Advantage
Content (Andersen–Shan 2025). Large-n leading term for GBS-P (matching ours), improved tuning via mean–photon–number matching, sample-size envelopes for both estimators, existence of nonempty open subsets with advantage over MC, and percentage-of-advantage values over instance ensembles under Haar–Vandermonde measures.
Status. We rely on these theorems and cite them explicitly.
What Is Still Heuristic/Needs Additional Work
Appendix B.8.17. Principal-angle surrogate as a certified upper bound.
Claim. The computable Gram surrogate upper-bounds the true Friedrichs cosine.
Status. Heuristic but tighten-able: if $X$ spans the sampling-feature subspace and $Y$ spans the blueprint subspace (properly normalized), then $c_F\le\sigma_{\max}(Q_X^{\top}Q_Y)$, where $Q_X,Q_Y$ are orthonormalized bases obtained from $X,Y$. A short approximation lemma with sampling-error bounds can make this rigorous.
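A sketch of the surrogate computation (Python with NumPy/SciPy): the raw feature matrices below stand in for warm-up sampling features and blueprint features, and the surrogate is the largest singular value of the whitened cross-Gram, which upper-bounds the Friedrichs cosine (with equality when the two subspaces intersect trivially, as in this generic example).

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(7)
d, ks, kb = 40, 6, 5

# Raw "warm-up" sampling features X and blueprint features Y (columns span the two subspaces).
X = rng.standard_normal((d, ks)) @ rng.standard_normal((ks, ks))
Y = rng.standard_normal((d, kb))

def gram_surrogate(X, Y):
    """sigma_max of the whitened cross-Gram: an upper bound for the Friedrichs cosine."""
    Qx = np.linalg.qr(X)[0]
    Qy = np.linalg.qr(Y)[0]
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

c_hat = gram_surrogate(X, Y)
c_angles = np.max(np.cos(subspace_angles(X, Y)))   # reference value from principal angles

print(f"Gram surrogate  : {c_hat:.6f}")
print(f"principal angles: {c_angles:.6f}   (equal here; surrogate >= Friedrichs cosine in general)")
```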
Appendix B.8.18. Regime Separation (Poly vs. Exp).
Claim. For fixed degree $K$ and suitable instance ensembles, the GBS sample complexity grows at most polynomially in the problem size, while the MC sample complexity grows exponentially.
Status. Proof sketch given; a fully rigorous version needs fixed ensembles (eigenvalues in a fixed compact subinterval of $(0,1)$; coefficients on the unit sphere) and a positive-mass lower bound for the MC mixed-hafnian quadratic form. With these in place, the exponential lower bound follows by counting/Stirling estimates (in line with Andersen–Shan open-set results).
Appendix B.8.19. One-Step Conditional Contraction (Hypothesis in Lyapunov Theorem)
Claim. Each update removes a fixed fraction of the residual in expectation.
Status. Currently an assumption (empirically supported by semi-log linearity). A device-level proof would require an explicit Markov kernel and a uniform spectral gap; feasible but model-specific.
Lemma B.2 (Rate improves as angle opens). Let $R_n$ be the degree-$K$ residual at scale $t$ and let $(T_{\mathcal P},T_{\mathcal S})$ be an admissible one-step update (intertwining and nonexpansive). Assume the conditional one-step contraction hypothesis of Theorem A.9: there exists $\gamma(t)\in(0,1]$ such that
$$ \mathbb E\big[R_{n+1}\mid\mathcal F_n\big]\ \le\ \big(1-\gamma(t)\big)\,R_n. $$
Then there exist $t$–independent constants $0<\underline c\le\overline c<\infty$ (depending only on the fixed weights and $B$ through bounded positive operators) such that
$$ \underline c\,\big(1-c_F(t)\big)\ \le\ \gamma(t)\ \le\ \overline c\,\big(1-c_F(t)\big). $$
In particular, $\gamma(t)$ increases as $c_F(t)$ decreases and is maximized at the photon–number match $m_t(B)=2K$.
Proof. Write the residual as $R=\|e\|^{2}$ with $e=p-\Psi s$. One step gives $e^{+}=T_{\mathcal P}e$ (using the intertwining $T_{\mathcal P}\Psi=\Psi T_{\mathcal S}$), hence $R^{+}=\|T_{\mathcal P}e\|^{2}$. Taking conditional expectation and comparing to $(1-\gamma(t))R$,
$$ \gamma(t)\,\|e\|^{2}\ \le\ \|e\|^{2}-\mathbb E\big[\|T_{\mathcal P}e\|^{2}\mid e\big]. $$
Decompose $e=e_{\mathrm{sf}}+e_{\perp}$ with respect to the blueprint/sampling pair, and note that $T_{\mathcal P}$ acts through the sampling-feature subspace (by admissibility) so that the "through–subspace'' contraction applies: $e_{\mathrm{sf}}$ is contracted by a fixed factor while $e_{\perp}$ is not expanded. Using this split and the two–subspace bounds (Proposition A.7), the dissipated energy $\|e\|^{2}-\mathbb E[\|T_{\mathcal P}e\|^{2}\mid e]$ is comparable to $\|e_{\mathrm{sf}}\|^{2}$ up to fixed constants; squaring and taking expectation (conditionally on $e$) preserves this comparison. By the two–subspace lower bound,
$$ \|e_{\mathrm{sf}}\|^{2}\ \ge\ \tfrac12\,\big(1-c_F(t)\big)\,\|e\|^{2}, $$
where the constant $\tfrac12$ is an absolute (non-optimized) choice; any fixed constant in $(0,1)$ would do after renormalizing constants. Hence
$$ \gamma(t)\ \ge\ \underline c\,\big(1-c_F(t)\big) $$
for some $\underline c>0$ independent of $t$ (absorbing the numerical constants and the fixed weights). A matching upper bound follows by reversing the roles of the projectors in the two–subspace inequalities, which controls $\|e_{\mathrm{sf}}\|^{2}$ from above by a multiple of $(1-c_F(t))\,\|e\|^{2}$ on the relevant residual sector, and using the complementary estimate (another consequence of Proposition A.7) gives
$$ \gamma(t)\ \le\ \overline c\,\big(1-c_F(t)\big) $$
for some $\overline c<\infty$ independent of $t$. The monotonicity of $\gamma(t)$ in $c_F(t)$ is immediate, and maximization at $m_t(B)=2K$ follows since $c_F(t)$ is minimized (best alignment) at the photon–match by the calibration optimality and angle–conditioning discussion. □
Appendix B.9. Bottom Line
Rock solid now: DPI; calibration optimality $m_t(B)=2K$; angle bounds; exact GBS-I variance; large-n GBS-P MSE; $(\epsilon,\delta)$ inversions; Lyapunov envelope (conditional on a one-step contraction); composition stability; small-noise robustness (Davis–Kahan).
Provable with short add-ons: residual factorization $R_K(t)=a_K+b_K\,\varphi_K(t)$; monotone "angle ⇒ rate" linkage; CLT for GBS-P; certified Gram surrogate bound.
Backed by the literature: nonempty open regions with advantage and percentage-of-advantage computations (Andersen–Shan 2025).
Still to complete: a model-independent proof of the one-step contraction and a fully quantified “poly vs. exp” lower bound with explicit constants under clear ensembles.
References
- Andersen, J.E.; Shan, S. Using Gaussian Boson Samplers to Approximate Gaussian Expectation Problems, 2025, [arXiv:quant-ph/2502.19336].
- Andersen, J.E.; Shan, S. Estimating the Percentage of GBS Advantage in Gaussian Expectation Problems, 2025, [arXiv:quant-ph/2502.19362].
- Hamilton, C.S.; Kruse, R.; Sansoni, L.; Barkhofen, S.; Silberhorn, C.; Jex, I. Gaussian Boson Sampling. Phys. Rev. Lett. 2017, 119, 170501.
- Kruse, R.; Hamilton, C.S.; Sansoni, L.; Barkhofen, S.; Silberhorn, C.; Jex, I. Detailed study of Gaussian Boson Sampling. Phys. Rev. A 2019, 100, 032326.
- Aaronson, S.; Arkhipov, A. The Computational Complexity of Linear Optics. In Proceedings of the STOC ’11, 2011, pp. 333–342.
- Deshpande, A.; Mehta, A.; Vincent, T.; Quesada, N.; Hinsche, M.; Ioannou, M.; Madsen, L.; Lavoie, J.; Qi, H.; Eisert, J.; et al. Quantum computational advantage via high-dimensional Gaussian boson sampling. Science Advances 2022, 8, eabi7894. [CrossRef]
- Bulmer, J.F.F.; Bell, B.A.; Chadwick, R.S.; Jones, A.E.; Moise, D.; Rigazzi, A.; Thorbecke, J.; Haus, U.U.; Vaerenbergh, T.V.; Patel, R.B.; et al. The boundary for quantum advantage in Gaussian Boson Sampling. Science Advances 2022, 8, eabl9236. [CrossRef]
- Björklund, A.; Gupt, B.; Quesada, N. A faster hafnian formula for complex matrices and its benchmarking on a supercomputer. Journal of Experimental Algorithmics 2019, 24, 1–19. [CrossRef]
- Gupt, B.; Izaac, J.; Quesada, N. The Walrus: a library for hafnians, Hermite polynomials and Gaussian Boson Sampling. J. Open Source Software 2019, 4, 1705.
- Zhong, H.S.; Wang, H.; Deng, Y.H.; Chen, M.C.; Peng, L.C.; Luo, Y.H.; Qin, J.; Wu, D.; Gong, S.Q.; Su, H.; et al. Quantum computational advantage using photons. Science 2020, 370, 1460–1463. [CrossRef]
- Madsen, L.S.; Laudenbach, F.; Falamarzi Askarani, M.; Rortais, F.; Vincent, T.; Bulmer, J.F.F.; Miatto, F.M.; Neuhaus, L.; Helt, L.G.; Collins, M.J.; et al. Quantum computational advantage with a programmable photonic processor. Nature 2022, 606, 75–81. [CrossRef]
- Quesada, N.; Helt, L.G.; Izaac, J.; Arrazola, J.M.; Shahrokhshahi, R.; Myers, C.R.; Sabapathy, K.K. Simulating realistic non-Gaussian state preparation. Physical Review A 2019, 100, 022341, [arXiv:quant-ph/1810.03706]. [CrossRef]
- Quesada, N.; Arrazola, J.M.; Killoran, N. Gaussian boson sampling using threshold detectors. Physical Review A 2018, 98, 062322, [arXiv:quant-ph/1807.01639]. [CrossRef]
- Huh, J.; Guerreschi, G.G.; Peropadre, B.; McClean, J.R.; Aspuru-Guzik, A. Boson sampling for molecular vibronic spectra. Nature Photonics 2015, 9, 615–620. [CrossRef]
- Huh, J.; Yung, M.H. Vibronic boson sampling: Generalized Gaussian boson sampling for molecular vibronic spectra at finite temperature. Scientific Reports 2017, 7, 7462. [CrossRef]
- Banchi, L.; Fingerhuth, M.; Babej, T.; Ing, C.; Arrazola, J.M. Molecular docking with Gaussian Boson Sampling. Science Advances 2020, 6, eaax1950. [CrossRef]
- Arrazola, J.M.; Bromley, T.R. Using Gaussian boson sampling to find dense subgraphs. Physical Review Letters 2018, 121, 030503. [CrossRef]
- Brádler, K.; Dallaire-Demers, P.L.; Rebentrost, P.; Su, D.; Weedbrook, C. Gaussian boson sampling for perfect matchings of arbitrary graphs. Physical Review A 2018, 98, 032310. [CrossRef]
- Brádler, K.; Friedland, S.; Izaac, J.; Killoran, N.; Su, D. Graph isomorphism and gaussian boson sampling. Special Matrices 2021, 9, 166–196. [CrossRef]
- Schuld, M.; Brádler, K.; Israel, R.; Su, D.; Gupt, B. A quantum hardware-induced graph kernel based on GBS, 2019, [arXiv:quant-ph/1905.12646].
- Jahangiri, S.; Arrazola, J.M.; Quesada, N.; Killoran, N. Point processes with Gaussian boson sampling. Physical Review E 2020, 101, 022134. [CrossRef]
- Sparrow, C.; Martín-López, E.; Maraviglia, N.; Neville, A.; Harrold, C.; Carolan, J.; Joglekar, Y.N.; Hashimoto, T.; Matsuda, N.; O’Brien, J.L.; et al. Simulating the vibrational quantum dynamics of molecules using photonics. Nature 2018, 557, 660–667. [CrossRef]
- He, Y.; Ding, X.; Su, Z.E.; Huang, H.L.; Qin, J.; Wang, C.; Unsleber, S.; Chen, C.; Wang, H.; He, Y.M.; et al. Time-bin-encoded boson sampling with a single-photon device. Physical Review Letters 2017, 118, 190501. [CrossRef]
- Motes, K.R.; Gilchrist, A.; Dowling, J.P.; Rohde, P.P. Scalable boson sampling with time-bin encoding using a loop-based architecture. Physical Review Letters 2014, 113, 120501. [CrossRef]
- Wick, G.C. The evaluation of the collision matrix. Physical Review 1950, 80, 268–272. [CrossRef]
- Kocharovsky, V.V.; Kocharovsky, V.V.; Tarasov, S.V. The hafnian master theorem. Linear Algebra and its Applications 2022, 651, 144–161. [CrossRef]
- Barvinok, A. Approximating Permanents and Hafnians. Discrete Analysis 2017, 2017, 1–34. [CrossRef]
- Barvinok, A. Polynomial time algorithms to approximate permanents and mixed discriminants within a simply exponential factor. Random Structures & Algorithms 1999, 14, 29–61. [CrossRef]
- Robert, C.P.; Casella, G. Monte Carlo Statistical Methods; Springer, 1999. [CrossRef]
- Rubinstein, R.Y.; Kroese, D.P. Simulation and the Monte Carlo Method, 3 ed.; Wiley, 2016. [CrossRef]
- Owen, A.B. Monte Carlo Theory, Methods and Examples. https://artowen.su.domains/mc/, 2013. Accessed 2025-02-28.
- Owen, A.B. Practical Quasi-Monte Carlo Integration. https://artowen.su.domains/mc/practicalqmc.pdf, 2023. Accessed 2025-02-28.
- Deutsch, F. Best Approximation in Inner Product Spaces; Springer, 2001.
- Davis, C.; Kahan, W.M. The Rotation of Eigenvectors by a Perturbation. SIAM Journal on Numerical Analysis 1970, 7, 1–46. [CrossRef]
- Wedin, P.Å. Perturbation bounds in connection with singular value decomposition. BIT Numerical Mathematics 1972, 12, 99–111. [CrossRef]
- van der Vaart, A.W. Asymptotic Statistics; Cambridge Univ. Press, 1998.
- Billingsley, P. Probability and Measure, 3rd ed.; Wiley, 1995.
1. When all signs are positive, the convention is trivial; otherwise this corresponds to the usual loop-hafnian or sign–augmented readout.
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).