Degeneracy of the Operator-Valued Poisson Kernel Near the Numerical Range Boundary

Preprint article; this version is not peer-reviewed. Submitted: 05 February 2026. Posted: 06 February 2026.


Abstract
Let $A\in\C^{d\times d}$ and let $W(A)$ denote its numerical range. For a bounded convex domain $\Omega\subset\C$ with $C^1$ boundary containing $\spec(A)$, consider the operator-valued boundary kernel \[ P_{\Omega}(\sigma,A)\;:=\;\Real\!\Bigl(n_{\Omega}(\sigma)\,(\sigma\Id-A)^{-1}\Bigr), \qquad \sigma\in\partial\Omega, \] where $n_{\Omega}(\sigma)$ is the outward unit normal at $\sigma$. For convex $\Omega$ with $W(A)\subset\Omega$ this kernel is strictly positive definite on $\partial\Omega$ and underlies boundary-integral functional calculi on convex domains. We analyze the opposite limiting regime $\Omega\downarrow W(A)$. Along any $C^1$ convex exhaustion $\Omega_\varepsilon\downarrow W(A)$, if $\sigma_\varepsilon\in\partial\Omega_\varepsilon$ approaches $\sigma_0\in\partial W(A)$ with convergent outward normals and $\sigma_0\notin\spec(A)$, then $\lambda_{\min}(P_{\Omega_\varepsilon}(\sigma_\varepsilon,A))\to 0$ and the corresponding min-eigenvectors converge (up to subsequences and phases) to the canonical subspace $(\sigma_0\Id-A)\mathcal M(n)$ determined by the maximal eigenspace of $H(n)=\Real(\overline{n}A)$. Quantitatively, we obtain two-sided bounds in terms of an explicit support-gap scalar, yielding a linear degeneracy rate under bounded-resolvent hypotheses and an explicit rate for outer offsets $W(A)+\varepsilon\mathbb{D}$. For normal matrices we compute the eigenvalues of $P_{\Omega}(\sigma,A)$ explicitly, showing that degeneracy may fail at spectral support points unless the supporting face contains multiple eigenvalues.

1. Introduction

Let $A\in\C^{d\times d}$ and denote its numerical range
\[ W(A) := \{\, x^*Ax : x\in\C^d,\ \|x\|=1 \,\}. \]
It is a compact convex subset of $\C$ (Toeplitz–Hausdorff theorem; see, e.g., [1,2]). A central open problem due to Crouzeix asks whether $W(A)$ is a $2$-spectral set for $A$, i.e.,
\[ \|p(A)\| \le 2\,\max_{z\in W(A)}|p(z)| \quad\text{for every polynomial } p. \]
See [3,4] for the formulation and [5] for the best known universal constant $1+\sqrt{2}$.
Background and relation to the convex-domain functional calculus. Up to harmless normalization conventions, a recurring tool in the convex-domain approach of Delyon–Delyon and Crouzeix is the operator-valued boundary kernel
\[ P_{\Omega}(\sigma,A) := \Real\bigl(n_{\Omega}(\sigma)\,(\sigma\Id-A)^{-1}\bigr), \qquad \sigma\in\partial\Omega, \tag{1.2} \]
defined for a bounded convex domain $\Omega\subset\C$ with $C^1$ boundary containing $\spec(A)$, where $n_{\Omega}(\sigma)$ is the outward unit normal at $\sigma$. This kernel appears in double-layer potential representations and boundary integral operators used to obtain functional calculus bounds on convex domains [4,5,6,7,8]. For convex $\Omega$ with $W(A)\subset\Omega$, positivity/coercivity of $\sigma\mapsto P_{\Omega}(\sigma,A)$ on $\partial\Omega$ encodes strict separation of supporting half-planes and serves as a key structural input in such estimates [7,8,9].
Motivation: loss of coercivity near $\partial W(A)$. In applications and numerical implementations of boundary-integral calculi, one often approximates $W(A)$ by $C^1$ convex supersets $\Omega_\varepsilon\supset W(A)$. It is therefore natural to ask whether coercivity of the pointwise kernel $P_{\Omega_\varepsilon}(\sigma,A)$ can remain uniform as $\varepsilon\to 0$. The results below show that this is impossible in general: even when the resolvent stays bounded (i.e. at non-spectral boundary points $\sigma_0\in\partial W(A)\setminus\spec(A)$), the smallest eigenvalue of $P_{\Omega_\varepsilon}(\sigma,A)$ must deteriorate at boundary points $\sigma\in\partial\Omega_\varepsilon$ approaching $\partial W(A)$ in a fixed supporting direction.
What is new in this paper. The existing convex-domain literature primarily exploits positivity of (1.2) for fixed domains $\Omega\supset W(A)$ [4,7,8,9]. Here we analyze the complementary limiting regime in which $\Omega$ shrinks to $W(A)$, and we make explicit the resulting loss of coercivity of the pointwise kernel. The analysis is driven by a congruence identity and by a scalar support gap $\delta(\sigma,n) = \Real(\overline n\sigma) - \lambda_{\max}\bigl(\Real(\overline nA)\bigr)$, which admits a support-function interpretation in standard convex-geometry terminology.
  • We prove a qualitative degeneracy theorem (Theorem 1): along any $C^1$ convex exhaustion $\Omega_\varepsilon\downarrow W(A)$, if $\sigma_\varepsilon\in\partial\Omega_\varepsilon$ approaches a non-spectral boundary point $\sigma_0\in\partial W(A)\setminus\spec(A)$ with convergent outward normals $n_{\Omega_\varepsilon}(\sigma_\varepsilon)\to n$, then $\lambda_{\min}\bigl(P_{\Omega_\varepsilon}(\sigma_\varepsilon,A)\bigr)\to 0$ and the limiting min-eigenvector directions lie in $(\sigma_0\Id-A)\,\mathcal M(n)$, where $\mathcal M(n)$ is the maximal eigenspace of $H(n)=\Real(\overline nA)$.
  • We establish two-sided bounds for $\lambda_{\min}\bigl(P_\Omega(\sigma,A)\bigr)$ in terms of the support gap $\delta(\sigma,n)$, yielding a linear degeneracy rate under bounded-resolvent hypotheses (Lemma 3 and Corollary 3), and compute $\delta$ explicitly for standard outer offsets $W(A)+\varepsilon\mathbb D$ (Proposition 2).
  • Under a spectral-isolation hypothesis for $\lambda_{\max}(H(n))$, we obtain convergence of the entire near-kernel invariant subspace (spectral projector) along the exhaustion (Proposition 3).
  • We analyze the contrasting spectral-support regime $\sigma_0\in\spec(A)\cap\partial W(A)$ for normal matrices via an explicit eigenvalue formula for $P_\Omega(\sigma,A)$, showing that degeneracy may fail at a spectral support point unless the supporting face contains multiple eigenvalues (Proposition 4 and Examples 1–2).
Organization. Section 2 fixes notation and recalls support-function identities. Section 3 introduces $P_\Omega(\sigma,A)$, proves the key congruence identity, and establishes quantitative support-gap bounds together with a geometric interpretation of $\delta$. Section 4 contains the degeneracy theorem, quantitative corollaries, subspace convergence, and explicit examples, followed by a brief discussion of open problems.

2. Preliminaries

We use the standard notation for disks:
\[ \mathbb D := \{z\in\C : |z|<1\}, \qquad \overline{\mathbb D} := \{z\in\C : |z|\le 1\}. \]
Throughout, $A\in\C^{d\times d}$ is fixed. For vectors $x\in\C^d$ we use $\|x\| := (x^*x)^{1/2}$. For matrices $B\in\C^{d\times d}$ we use the induced operator norm $\|B\| := \sup_{\|x\|=1}\|Bx\|$. We write $B^*$ for the conjugate transpose and $\Real(B) := (B+B^*)/2$.
For a Hermitian matrix $B$, we write its eigenvalues in nondecreasing order as
\[ \lambda_1^{\uparrow}(B) \le \cdots \le \lambda_d^{\uparrow}(B), \]
and in nonincreasing order as $\lambda_1^{\downarrow}(B) \ge \cdots \ge \lambda_d^{\downarrow}(B)$. In particular, $\lambda_{\min}(B) = \lambda_1^{\uparrow}(B)$ and $\lambda_{\max}(B) = \lambda_1^{\downarrow}(B)$.
Remark 1 
(Spectrum is contained in the numerical range). One has $\spec(A)\subset W(A)$. Indeed, if $Ax=\lambda x$ with $\|x\|=1$, then $x^*Ax=\lambda\in W(A)$. Consequently, $W(A)\subset\Omega$ implies $\spec(A)\subset\Omega$ for any open set $\Omega\subset\C$.

2.1. Support Functions and the Hermitian Pencil

For unimodular $\omega\in\C$ (i.e. $|\omega|=1$), define the Hermitian matrix
\[ H(\omega) := \Real(\overline\omega A) = \tfrac12\bigl(\overline\omega A + \omega A^*\bigr). \]
We will later write $n\in\C$ (with $|n|=1$) for outward unit normals on $\partial\Omega$; in the support-function identities below and throughout, such an $n$ simply plays the role of the unimodular direction $\omega$.
Let $\lambda_{\max}(H(\omega))$ denote its largest eigenvalue and let
\[ \mathcal M(\omega) := \Ker\bigl(\lambda_{\max}(H(\omega))\,\Id - H(\omega)\bigr) \]
denote the corresponding maximal eigenspace.
Lemma 1 
(Support function of the numerical range). For every unimodular $\omega\in\C$,
\[ \max_{z\in W(A)} \Real(\overline\omega z) = \lambda_{\max}(H(\omega)). \tag{2.2} \]
Moreover, if $x\in\C^d$ is a unit eigenvector of $H(\omega)$ associated with $\lambda_{\max}(H(\omega))$, then $x^*Ax\in\partial W(A)$ and
\[ \Real\bigl(\overline\omega\,x^*Ax\bigr) = \lambda_{\max}(H(\omega)). \]
Proof. 
For $\|x\|=1$,
\[ \Real\bigl(\overline\omega\,x^*Ax\bigr) = \Real\bigl(x^*(\overline\omega A)x\bigr) = x^*\,\Real(\overline\omega A)\,x = x^*H(\omega)x. \]
Taking the maximum over $\|x\|=1$ yields (2.2) by Rayleigh–Ritz. If $x$ is a maximizing unit vector, then $x^*Ax\in W(A)$ attains the support functional in direction $\omega$, hence lies on $\partial W(A)$ and satisfies the stated identity.    □
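As a quick numerical sanity check of Lemma 1 (a sketch assuming only NumPy; the matrix $A$, the direction $\omega$, and the sample count are arbitrary choices, not from the paper), randomly sampled points of $W(A)$ never exceed $\lambda_{\max}(H(\omega))$ in the direction $\omega$, while a top eigenvector of $H(\omega)$ attains it:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
omega = np.exp(1j * 0.7)  # arbitrary unimodular direction

# Hermitian pencil H(omega) = Re(conj(omega) A) = (conj(omega) A + omega A^*) / 2
H = (np.conj(omega) * A + omega * A.conj().T) / 2
w, V = np.linalg.eigh(H)              # eigenvalues in ascending order
lam_max = w[-1]

# sample W(A) with random unit vectors: Re(conj(omega) x^* A x) <= lam_max
X = rng.standard_normal((20000, d)) + 1j * rng.standard_normal((20000, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
vals = np.real(np.conj(omega) * np.einsum("ij,jk,ik->i", X.conj(), A, X))

# a maximizing unit eigenvector attains the support value exactly
x = V[:, -1]
attained = np.real(np.conj(omega) * (x.conj() @ A @ x))

assert vals.max() <= lam_max + 1e-8
assert abs(attained - lam_max) < 1e-8
```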

2.2. Convex Domains with $C^1$ Boundary and Normals

We identify $\C$ with $\mathbb R^2$ in the usual way. Let $\Omega\subset\C$ be a bounded open convex set with $C^1$ boundary. Then for each $\sigma\in\partial\Omega$ there is a unique outward unit normal vector. This $C^1$ assumption is used only to guarantee that the outward unit normal $n_\Omega(\sigma)$ exists and is unique at every boundary point, ensuring that $P_\Omega(\sigma,A)$ is well-defined; no higher regularity (e.g. curvature bounds) is used. We represent the normal as a unimodular complex number $n_\Omega(\sigma)\in\C$ with $|n_\Omega(\sigma)|=1$, so that the supporting half-plane at $\sigma$ is
\[ \Pi_\Omega(\sigma) = \bigl\{ z\in\C : \Real\bigl(\overline{n_\Omega(\sigma)}\,(z-\sigma)\bigr)\le 0 \bigr\}. \tag{2.3} \]
Equivalently, by convexity one has $\overline\Omega\subset\Pi_\Omega(\sigma)$ and $\Omega\subset\{z\in\C : \Real(\overline{n_\Omega(\sigma)}(z-\sigma))<0\}$. Under the identification $\C\cong\mathbb R^2$, the functional $z\mapsto\Real(\overline nz)$ is the Euclidean inner product with the unit vector corresponding to $n$.
Definition 1 
($C^1$ convex exhaustion). A family $\{\Omega_\varepsilon\}_{\varepsilon>0}$ is called a $C^1$ convex exhaustion of a compact convex set $K\subset\C$ if:
(i) each $\Omega_\varepsilon\subset\C$ is a bounded open convex set with $C^1$ boundary;
(ii) $\Omega_{\varepsilon'}\subset\Omega_\varepsilon$ for $0<\varepsilon'<\varepsilon$;
(iii) $K\subset\Omega_\varepsilon$ for all $\varepsilon>0$;
(iv) $\bigcap_{\varepsilon>0}\overline{\Omega_\varepsilon} = K$.
Remark 2 
(Subsequence selection for convergent normals). Let $\varepsilon_k\downarrow 0$ and $\sigma_k\in\partial\Omega_{\varepsilon_k}$ be any sequence. Since each outward normal $n_k := n_{\Omega_{\varepsilon_k}}(\sigma_k)$ is unimodular, the sequence $\{n_k\}\subset\{z\in\C:|z|=1\}$ lies in a compact set. Hence there is always a subsequence (not relabeled) such that $n_k\to n$ for some unimodular $n$. In particular, the normal convergence hypothesis in Theorem 1 can always be arranged by passing to a subsequence.

3. The Operator-Valued Poisson Kernel

Let $\Omega\subset\C$ be a bounded open convex set with $C^1$ boundary and assume $\spec(A)\subset\Omega$. Then $(\sigma\Id-A)^{-1}$ exists for all $\sigma\in\partial\Omega$.
Definition 2 
(Operator-valued Poisson kernel). For $\sigma\in\partial\Omega$, define
\[ P_\Omega(\sigma,A) := \Real\bigl(n_\Omega(\sigma)\,(\sigma\Id-A)^{-1}\bigr). \tag{3.1} \]

3.1. A Congruence Identity

Lemma 2 
(Congruence identity). Let $\sigma\notin\spec(A)$ and let $n\in\C$ be unimodular. Then
\[ (\sigma\Id-A)^*\,\Real\bigl(n(\sigma\Id-A)^{-1}\bigr)\,(\sigma\Id-A) = \Real\bigl(\overline n(\sigma\Id-A)\bigr) = \Real(\overline n\sigma)\,\Id - \Real(\overline nA). \tag{3.2} \]
Proof. 
Write $R := (\sigma\Id-A)^{-1}$. Then $R(\sigma\Id-A)=\Id$ and $(\sigma\Id-A)^*R^*=\Id$. Using $\Real(X)=\tfrac12(X+X^*)$,
\[ (\sigma\Id-A)^*\,\Real(nR)\,(\sigma\Id-A) = \tfrac12\Bigl( (\sigma\Id-A)^*(nR)(\sigma\Id-A) + (\sigma\Id-A)^*(\overline nR^*)(\sigma\Id-A) \Bigr) = \tfrac12\Bigl( n(\sigma\Id-A)^* + \overline n(\sigma\Id-A) \Bigr) = \Real\bigl(\overline n(\sigma\Id-A)\bigr). \]
Expanding gives (3.2).    □
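The congruence identity is easy to test numerically; a minimal sketch (assuming NumPy; the matrix, the point $\sigma$ away from the spectrum, and the direction $n$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
sigma = 10.0 + 3.0j          # chosen far from spec(A) so sigma*I - A is invertible
n = np.exp(1j * 1.2)         # arbitrary unimodular direction

def herm_part(X):
    # Re(X) := (X + X^*) / 2, the Hermitian part
    return (X + X.conj().T) / 2

B = sigma * np.eye(d) - A
R = np.linalg.inv(B)

lhs = B.conj().T @ herm_part(n * R) @ B    # B^* Re(n R) B
rhs = herm_part(np.conj(n) * B)            # Re(conj(n) B) = Re(conj(n) sigma) I - Re(conj(n) A)

assert np.allclose(lhs, rhs)
```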

3.2. Support-Gap Bounds

For unimodular $n\in\C$ define the support gap
\[ \delta(\sigma,n) := \Real(\overline n\sigma) - \lambda_{\max}(H(n)), \qquad H(n)=\Real(\overline nA). \]
Lemma 3 
(Support-gap characterization and quantitative bounds). Let $A\in\C^{d\times d}$, let $\sigma\notin\spec(A)$, and let $n\in\C$ be unimodular. Set
\[ P(\sigma,n) := \Real\bigl(n(\sigma\Id-A)^{-1}\bigr), \qquad \alpha := \Real(\overline n\sigma), \qquad \delta := \alpha - \lambda_{\max}(H(n)). \]
(This notation emphasizes dependence on the prescribed direction $n$; when $n=n_\Omega(\sigma)$ one has $P(\sigma,n)=P_\Omega(\sigma,A)$.) Then:
(a) $P(\sigma,n)\succeq 0$ if and only if $\delta\ge 0$, and $P(\sigma,n)\succ 0$ if and only if $\delta>0$.
(b) If $\delta=0$, then $P(\sigma,n)$ is singular and
\[ \Ker\bigl(P(\sigma,n)\bigr) = (\sigma\Id-A)\,\mathcal M(n), \qquad \mathcal M(n) = \Ker\bigl(\lambda_{\max}(H(n))\,\Id - H(n)\bigr). \]
(c) If $\delta>0$, then
\[ \frac{\delta}{\|\sigma\Id-A\|^2} \;\le\; \lambda_{\min}\bigl(P(\sigma,n)\bigr) \;\le\; \delta\,\bigl\|(\sigma\Id-A)^{-1}\bigr\|^2. \tag{3.3} \]
Proof. 
Let $B := \sigma\Id-A$ and $P := P(\sigma,n)$. By Lemma 2,
\[ B^*PB = \Real(\overline nB) = \alpha\Id - \Real(\overline nA) = \alpha\Id - H(n) =: Q. \]
Since $B$ is invertible, congruence by $B$ preserves (semi)definiteness, so $P\succeq 0\iff Q\succeq 0$ and $P\succ 0\iff Q\succ 0$. As $Q$ is Hermitian with $\lambda_{\min}(Q) = \alpha - \lambda_{\max}(H(n)) = \delta$, this proves (a).
If $\delta=0$, then $Q\succeq 0$ is singular with $\Ker(Q)=\mathcal M(n)$, and $P\succeq 0$ by (a). For $P\succeq 0$, $x\in\Ker(P)\iff x^*Px=0$. Writing $x=By$,
\[ x^*Px = y^*Qy, \]
so $x\in\Ker(P)\iff y\in\Ker(Q)=\mathcal M(n)$, proving (b).
If $\delta>0$, then $Q\succ 0$ and $P=B^{-*}QB^{-1}\succ 0$. For $\|x\|=1$ and $y=B^{-1}x$, one has $x=By$ and hence $\|y\|\ge 1/\|B\|$; thus
\[ x^*Px = y^*Qy \ge \lambda_{\min}(Q)\,\|y\|^2 = \delta\,\|y\|^2 \ge \delta/\|B\|^2, \]
giving the lower bound in (3.3). For the upper bound, take $y$ a unit eigenvector of $Q$ for $\lambda_{\min}(Q)=\delta$ and set $x=By/\|By\|$; then
\[ x^*Px = \frac{y^*Qy}{\|By\|^2} = \frac{\delta}{\|By\|^2} \le \delta\,\|B^{-1}\|^2. \]
   □
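The sandwich in part (c) can be verified directly: place $\sigma$ at a prescribed support gap $\delta$ beyond the supporting line of $W(A)$ and compare $\lambda_{\min}(P(\sigma,n))$ with both bounds (a sketch assuming NumPy; $A$, $n$, and $\delta$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
n = np.exp(1j * 0.3)                      # arbitrary unimodular direction

Hn = (np.conj(n) * A + n * A.conj().T) / 2
lam_max = np.linalg.eigvalsh(Hn)[-1]      # support value of W(A) in direction n

# place sigma so that Re(conj(n) sigma) = lam_max + delta, i.e. support gap delta > 0
delta = 0.25
sigma = n * (lam_max + delta)

B = sigma * np.eye(d) - A                 # invertible, since sigma lies outside W(A)
R = np.linalg.inv(B)
P = (n * R + (n * R).conj().T) / 2        # P(sigma, n) = Re(n (sigma I - A)^{-1})
lam_min = np.linalg.eigvalsh(P)[0]

lower = delta / np.linalg.norm(B, 2) ** 2          # delta / ||B||^2
upper = delta * np.linalg.norm(R, 2) ** 2          # delta * ||B^{-1}||^2
assert lower - 1e-12 <= lam_min <= upper + 1e-12
```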
Remark 3 
(Connection with the convex-domain Poisson kernel literature). Up to normalization conventions, $P_\Omega(\sigma,A)$ is the operator-valued boundary kernel appearing in the Carl Neumann double-layer potential framework for convex domains; see, e.g., [7,8,9]. Lemma 3 isolates the dependence of $\lambda_{\min}(P(\sigma,n))$ on the scalar support gap $\delta(\sigma,n)$.

3.3. Strict positivity when $W(A)\subset\Omega$

Lemma 4 
(Strict separation at a supporting line). Let $\Omega\subset\C$ be a bounded open convex set with $C^1$ boundary and let $K\subset\Omega$ be compact. Fix $\sigma\in\partial\Omega$ and let $n=n_\Omega(\sigma)$ be the outward unit normal. Then
\[ \max_{z\in K}\Real(\overline nz) < \Real(\overline n\sigma). \]
Proof. 
By (2.3), $\Omega\subset\{z : \Real(\overline n(z-\sigma))<0\}$, hence $K\subset\{z : \Real(\overline n(z-\sigma))<0\}$. The continuous function $z\mapsto\Real(\overline n(z-\sigma))$ attains its maximum on the compact set $K$, and this maximum is strictly negative. Rearranging yields the claim.    □
Proposition 1 
(Positivity of the Poisson kernel). Assume $W(A)\subset\Omega$. Then for every $\sigma\in\partial\Omega$,
\[ P_\Omega(\sigma,A) \succ 0. \]
Proof. 
Fix $\sigma\in\partial\Omega$ and set $n := n_\Omega(\sigma)$ and $\alpha := \Real(\overline n\sigma)$. By Lemma 4 with $K=W(A)$ and Lemma 1,
\[ \lambda_{\max}(H(n)) = \max_{z\in W(A)}\Real(\overline nz) < \alpha, \]
so $\delta(\sigma,n) = \alpha - \lambda_{\max}(H(n)) > 0$. Now apply Lemma 3 (a).    □

3.4. Geometric Meaning of the Support Gap and Offset Exhaustions

For a compact convex set $K\subset\C$ and unimodular $n\in\C$, define its support function
\[ h_K(n) := \max_{z\in K}\Real(\overline nz). \]
If $\Omega\subset\C$ is a bounded open convex set with $C^1$ boundary and $\sigma\in\partial\Omega$ has outward normal $n=n_\Omega(\sigma)$, then necessarily $\Real(\overline n\sigma) = h_{\overline\Omega}(n)$, i.e. the boundary point lies on the supporting line in direction $n$.
Lemma 5 
(Support gap as a support-function difference). Let $\Omega\subset\C$ be a bounded open convex set with $C^1$ boundary and $\sigma\in\partial\Omega$. Let $n := n_\Omega(\sigma)$. Then
\[ \delta(\sigma,n) = \Real(\overline n\sigma) - \lambda_{\max}(H(n)) = h_{\overline\Omega}(n) - h_{W(A)}(n). \]
In particular, $\delta(\sigma,n)$ measures the separation between the supporting line of $\overline\Omega$ in direction $n$ and the corresponding supporting line of $W(A)$.
Proof. 
Since $n$ is the outward unit normal at $\sigma\in\partial\Omega$, the supporting half-plane characterization implies $\Real(\overline nz)\le\Real(\overline n\sigma)$ for all $z\in\overline\Omega$, hence $h_{\overline\Omega}(n) = \Real(\overline n\sigma)$. By Lemma 1, $h_{W(A)}(n) = \lambda_{\max}(H(n))$. Combining gives the claim.    □
Proposition 2 
(Outer offsets: $\delta$ is explicit). Let $K\subset\C$ be compact and convex and fix $\varepsilon>0$. Define the outer offset (outer parallel set)
\[ K_\varepsilon := K + \varepsilon\mathbb D = \{\, z+w : z\in K,\ w\in\C,\ |w|<\varepsilon \,\}. \]
Then for every unimodular $n\in\C$,
\[ h_{\overline{K_\varepsilon}}(n) = h_K(n) + \varepsilon. \]
In particular, taking $K=W(A)$ and $\Omega_\varepsilon := W(A)+\varepsilon\mathbb D$, for any boundary point $\sigma\in\partial\Omega_\varepsilon$ with outward normal $n=n_{\Omega_\varepsilon}(\sigma)$ (whenever defined) one has
\[ \delta(\sigma,n) = \varepsilon. \]
Consequently, since $\sigma\in\partial\Omega_\varepsilon$ implies $\sigma\notin W(A)$ and hence $\sigma\notin\spec(A)$ (Remark 1), Lemma 3 (c) yields
\[ \frac{\varepsilon}{\|\sigma\Id-A\|^2} \;\le\; \lambda_{\min}\bigl(P_{\Omega_\varepsilon}(\sigma,A)\bigr) \;\le\; \varepsilon\,\bigl\|(\sigma\Id-A)^{-1}\bigr\|^2. \]
Proof. 
Fix unimodular $n$. For any $z\in K$ and $w\in\C$ with $|w|\le\varepsilon$,
\[ \Real\bigl(\overline n(z+w)\bigr) = \Real(\overline nz) + \Real(\overline nw) \le h_K(n) + |w| \le h_K(n) + \varepsilon, \]
so $h_{\overline{K_\varepsilon}}(n)\le h_K(n)+\varepsilon$. On the other hand, choosing $z\in K$ with $\Real(\overline nz)=h_K(n)$ and $w=\varepsilon n$ gives $|w|=\varepsilon$ and
\[ \Real\bigl(\overline n(z+w)\bigr) = h_K(n) + \varepsilon, \]
so $h_{\overline{K_\varepsilon}}(n)\ge h_K(n)+\varepsilon$. This proves the support-function identity, and the displayed formula for $\delta$ follows from Lemma 5.
The final eigenvalue bounds are an immediate substitution of $\delta=\varepsilon$ into (3.3).    □
Remark 4 
(Smoothness versus offsets). If $K$ has flat faces, then $\partial(K+\varepsilon\mathbb D)$ is typically only $C^{1,1}$ (curvature may jump at transitions between translated faces and rounded arcs). Proposition 2 is therefore best viewed as a geometric model illustrating how the support gap scales with the outer distance parameter $\varepsilon$. For the purposes of Definition 1, one may replace $K+\varepsilon\mathbb D$ by any convex domain with $C^1$ boundary whose support function differs from $h_K$ by a quantity comparable to $\varepsilon$; the same interpretation of $\delta$ then applies. For example, one may take Minkowski sums with a fixed smooth strictly convex unit ball (instead of $\mathbb D$), or smooth the support function, to obtain a genuine $C^1$ (indeed smooth) convex exhaustion with the same first-order support-gap scaling.

3.5. Hausdorff distance and support-function control of the support gap

For a nonempty compact set $K\subset\C$ and $z\in\C$, write
\[ \operatorname{dist}(z,K) := \inf_{w\in K}|z-w|. \]
For nonempty compact sets $K,L\subset\C$, define the (Euclidean) Hausdorff distance
\[ d_H(K,L) := \max\Bigl\{ \sup_{z\in K}\operatorname{dist}(z,L),\ \sup_{w\in L}\operatorname{dist}(w,K) \Bigr\}. \]
Lemma 6 
(Hausdorff distance via support functions). Let $K,L\subset\C$ be nonempty compact convex sets and let $\overline{\mathbb D} := \{z\in\C : |z|\le 1\}$. Then
\[ d_H(K,L) = \sup_{|n|=1}\bigl| h_K(n) - h_L(n) \bigr|. \]
If moreover $K\subset L$, then $h_K(n)\le h_L(n)$ for all $|n|=1$ and hence
\[ d_H(L,K) = \sup_{|n|=1}\bigl( h_L(n) - h_K(n) \bigr). \]
Proof. 
For $t\ge 0$ and a nonempty compact set $K$, the Minkowski sum
\[ K + t\overline{\mathbb D} = \{\, z+w : z\in K,\ |w|\le t \,\} \]
is the closed $t$-neighborhood of $K$, i.e. $K+t\overline{\mathbb D} = \{u\in\C : \operatorname{dist}(u,K)\le t\}$. Consequently,
\[ d_H(K,L) = \inf\bigl\{\, t\ge 0 : K\subset L+t\overline{\mathbb D} \text{ and } L\subset K+t\overline{\mathbb D} \,\bigr\}. \]
For compact convex sets $M,N\subset\C$ one has $M\subset N$ if and only if $h_M(n)\le h_N(n)$ for all $|n|=1$. (Indeed, the forward direction is immediate; conversely, if $x\in M\setminus N$, a separating supporting line for the convex compact set $N$ yields a unimodular $n$ with $\Real(\overline nx) > \max_{z\in N}\Real(\overline nz) = h_N(n)$, hence $h_M(n)\ge\Real(\overline nx) > h_N(n)$.)
Moreover, support functions add under Minkowski sums, and $h_{t\overline{\mathbb D}}(n) = t$ for $|n|=1$; hence
\[ h_{K+t\overline{\mathbb D}}(n) = h_K(n) + t. \]
Therefore, $K\subset L+t\overline{\mathbb D}$ is equivalent to $h_K(n)\le h_L(n)+t$ for all $|n|=1$, and similarly $L\subset K+t\overline{\mathbb D}$ is equivalent to $h_L(n)\le h_K(n)+t$ for all $|n|=1$. Thus $d_H(K,L)$ is the smallest $t$ such that $|h_K(n)-h_L(n)|\le t$ for all $|n|=1$, i.e.
\[ d_H(K,L) = \sup_{|n|=1}\bigl| h_K(n) - h_L(n) \bigr|. \]
If $K\subset L$, then $h_K\le h_L$, so the absolute value may be dropped, giving the second identity.    □
Corollary 1 
(Support gap bounded by the Hausdorff approximation error). Assume $W(A)\subset\Omega$, and set
\[ \Delta(\Omega) := d_H\bigl(\overline\Omega,\,W(A)\bigr) = \sup_{|n|=1}\bigl( h_{\overline\Omega}(n) - h_{W(A)}(n) \bigr). \]
Then for every $\sigma\in\partial\Omega$ with outward normal $n=n_\Omega(\sigma)$,
\[ \delta(\sigma,n) = \Real(\overline n\sigma) - \lambda_{\max}(H(n)) = h_{\overline\Omega}(n) - h_{W(A)}(n) \;\le\; \Delta(\Omega). \]
Consequently, since $\sigma\notin\spec(A)$ for $\sigma\in\partial\Omega$, Lemma 3 (c) yields
\[ \lambda_{\min}\bigl(P_\Omega(\sigma,A)\bigr) \;\le\; \delta(\sigma,n)\,\|(\sigma\Id-A)^{-1}\|^2 \;\le\; \Delta(\Omega)\,\|(\sigma\Id-A)^{-1}\|^2. \]
Moreover, there exists $\sigma_\star\in\partial\Omega$ such that
\[ \delta\bigl(\sigma_\star, n_\Omega(\sigma_\star)\bigr) = \Delta(\Omega), \]
and for this point one has the two-sided estimate
\[ \frac{\Delta(\Omega)}{\|\sigma_\star\Id-A\|^2} \;\le\; \lambda_{\min}\bigl(P_\Omega(\sigma_\star,A)\bigr) \;\le\; \Delta(\Omega)\,\|(\sigma_\star\Id-A)^{-1}\|^2. \]
Proof. 
The identity $\delta(\sigma,n) = h_{\overline\Omega}(n) - h_{W(A)}(n)$ is Lemma 5, and the bound $\delta(\sigma,n)\le\Delta(\Omega)$ follows from the definition of $\Delta(\Omega)$. The eigenvalue bounds are then immediate from Lemma 3 (c).
Finally, the function $n\mapsto h_{\overline\Omega}(n) - h_{W(A)}(n)$ is continuous on the unit circle, so it attains its maximum at some unimodular $n_\star$. Choose $\sigma_\star\in\overline\Omega$ such that $\Real(\overline{n_\star}\sigma_\star) = h_{\overline\Omega}(n_\star)$; then $\sigma_\star\in\partial\Omega$ and $\{z : \Real(\overline{n_\star}z) = \Real(\overline{n_\star}\sigma_\star)\}$ is a supporting line for $\overline\Omega$ at $\sigma_\star$. Since $\partial\Omega$ is $C^1$, the outward unit normal at $\sigma_\star$ is uniquely defined and equals $n_\star$, and hence
\[ \delta\bigl(\sigma_\star, n_\Omega(\sigma_\star)\bigr) = h_{\overline\Omega}(n_\star) - h_{W(A)}(n_\star) = \Delta(\Omega). \]
   □

4. Degeneracy Along a $C^1$ Convex Exhaustion

4.1. Qualitative Degeneracy and Limiting Kernel Directions

Theorem 1 
(Degeneracy of the Operator-Valued Poisson Kernel). Let $A\in\C^{d\times d}$ and let $\{\Omega_\varepsilon\}_{\varepsilon>0}$ be a $C^1$ convex exhaustion of $W(A)$ (Definition 1). For $\sigma\in\partial\Omega_\varepsilon$, set
\[ P_{\Omega_\varepsilon}(\sigma,A) := \Real\bigl(n_{\Omega_\varepsilon}(\sigma)\,(\sigma\Id-A)^{-1}\bigr). \]
Fix any sequence $\varepsilon_k\downarrow 0$ and points $\sigma_k\in\partial\Omega_{\varepsilon_k}$ such that
\[ \sigma_k\to\sigma_0\in\partial W(A), \qquad n_k := n_{\Omega_{\varepsilon_k}}(\sigma_k)\to n\in\C,\quad |n|=1. \]
(After passing to a subsequence, the convergence $n_k\to n$ is automatic; see Remark 2.) Assume $\sigma_0\notin\spec(A)$. Let $H(n)=\Real(\overline nA)$ and $\mathcal M(n)=\Ker\bigl(\lambda_{\max}(H(n))\,\Id-H(n)\bigr)$.
Then:
(1) (Vanishing) $\lambda_{\min}\bigl(P_{\Omega_{\varepsilon_k}}(\sigma_k,A)\bigr)\to 0$ as $k\to\infty$.
(2) (Limiting directions) If $u_k$ is any unit eigenvector of $P_{\Omega_{\varepsilon_k}}(\sigma_k,A)$ for $\lambda_{\min}\bigl(P_{\Omega_{\varepsilon_k}}(\sigma_k,A)\bigr)$, then every accumulation point $u_0$ of $\{u_k\}$ satisfies
\[ u_0 \in (\sigma_0\Id-A)\,\mathcal M(n). \]
(3) (One-dimensional case) If $\dim\mathcal M(n)=1$, then there exist phases $\theta_k\in\mathbb R$ such that
\[ e^{i\theta_k}u_k \;\to\; \frac{(\sigma_0\Id-A)v}{\|(\sigma_0\Id-A)v\|} \qquad (k\to\infty), \]
where $v$ is any unit vector spanning $\mathcal M(n)$.
Proof. 
Set $B_k := \sigma_k\Id-A$ and $R_k := B_k^{-1}$, and define
\[ P_k := \Real(n_kR_k), \qquad \alpha_k := \Real(\overline{n_k}\sigma_k). \]
Define also $B_0 := \sigma_0\Id-A$, $R_0 := B_0^{-1}$, $P_0 := \Real(nR_0)$, $\alpha_0 := \Real(\overline n\sigma_0)$.
Step 1: Congruence identities. By Lemma 2,
\[ B_k^*P_kB_k = \alpha_k\Id - H(n_k), \qquad B_0^*P_0B_0 = \alpha_0\Id - H(n), \tag{4.1} \]
where $H(n_k)=\Real(\overline{n_k}A)$.
Step 2: $\alpha_0 = \lambda_{\max}(H(n))$. Since $n_k$ is the outward normal at $\sigma_k\in\partial\Omega_{\varepsilon_k}$, the supporting half-plane property gives $\Real(\overline{n_k}z)\le\alpha_k$ for all $z\in\Omega_{\varepsilon_k}$ and hence for all $z\in W(A)$. Passing to the limit yields $\Real(\overline nz)\le\alpha_0$ for all $z\in W(A)$. Because $\sigma_0\in\partial W(A)$, equality holds at $z=\sigma_0$, so $\alpha_0 = \max_{z\in W(A)}\Real(\overline nz)$. Lemma 1 now gives
\[ \alpha_0 = \lambda_{\max}(H(n)), \qquad \Ker(\alpha_0\Id - H(n)) = \mathcal M(n), \]
so $\alpha_0\Id - H(n)\succeq 0$ is singular.
Step 3: $P_k\to P_0$ in operator norm. Since $\sigma_0\notin\spec(A)$, $B_0$ is invertible. Write
\[ B_k = B_0 + (\sigma_k-\sigma_0)\Id = B_0(\Id + E_k), \qquad E_k := (\sigma_k-\sigma_0)R_0. \]
Then $\|E_k\|\to 0$, so for large $k$, $\Id+E_k$ is invertible and
\[ R_k = B_k^{-1} = (\Id+E_k)^{-1}R_0, \qquad \|R_k - R_0\|\to 0. \]
Therefore,
\[ \|P_k - P_0\| = \bigl\|\Real(n_kR_k - nR_0)\bigr\| \le |n_k-n|\,\|R_k\| + \|R_k-R_0\| \to 0. \]
Step 4: $\lambda_{\min}(P_0)=0$ and $\lambda_{\min}(P_k)\to 0$. Since $P_k, P_0$ are Hermitian, Weyl's inequality yields
\[ \bigl|\lambda_{\min}(P_k) - \lambda_{\min}(P_0)\bigr| \le \|P_k - P_0\| \to 0, \]
so $\lambda_{\min}(P_k)\to\lambda_{\min}(P_0)$. By (4.1) and Step 2,
\[ B_0^*P_0B_0 = \alpha_0\Id - H(n) \succeq 0 \text{ and is singular.} \]
Since $B_0$ is invertible, $P_0\succeq 0$ is singular, hence $\lambda_{\min}(P_0)=0$, proving (1). Moreover,
\[ \Ker(P_0) = B_0\,\Ker(\alpha_0\Id-H(n)) = (\sigma_0\Id-A)\,\mathcal M(n) \]
by Lemma 3 (b) (with $\delta=0$).
Step 5: Limiting eigenvectors. Let $u_k$ be unit min-eigenvectors: $P_ku_k = \lambda_{\min}(P_k)u_k$. Along a convergent subsequence, $u_k\to u_0$. Then
\[ u_0^*P_0u_0 = \lim_k u_k^*P_0u_k = \lim_k\bigl( u_k^*P_ku_k + u_k^*(P_0-P_k)u_k \bigr) = \lim_k\bigl( \lambda_{\min}(P_k) + o(1) \bigr) = 0. \]
Since $P_0\succeq 0$, this implies $u_0\in\Ker(P_0) = (\sigma_0\Id-A)\,\mathcal M(n)$, proving (2).
Step 6: One-dimensional case. If $\dim\mathcal M(n)=1$, then $\dim\Ker(P_0)=1$, so the smallest eigenvalue of $P_0$ is simple. By the Davis–Kahan $\sin\Theta$ theorem for invariant subspaces (see [10]), the corresponding one-dimensional eigenspaces of $P_k$ converge to $\Ker(P_0)$ in the gap metric, hence there exist phases $\theta_k$ such that $e^{i\theta_k}u_k\to u_\infty$, where $u_\infty$ spans $\Ker(P_0) = (\sigma_0\Id-A)\,\mathcal M(n)$. This gives (3).    □
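The degeneracy is easy to observe numerically. The sketch below (assuming NumPy, and assuming the generic situation in which the support point $z_0(n)$ of the random matrix used here is not an eigenvalue) approaches $\partial W(A)$ along a fixed supporting direction and watches $\lambda_{\min}$ collapse:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 5
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
n = np.exp(1j * 0.9)                      # fixed supporting direction

Hn = (np.conj(n) * A + n * A.conj().T) / 2
v = np.linalg.eigh(Hn)[1][:, -1]          # unit vector in M(n)
z0 = v.conj() @ A @ v                     # support point of W(A) in direction n

def lam_min_P(eps):
    # boundary point at support gap eps in the direction n
    sigma = z0 + eps * n
    R = np.linalg.inv(sigma * np.eye(d) - A)
    P = (n * R + (n * R).conj().T) / 2
    return np.linalg.eigvalsh(P)[0]

vals = [lam_min_P(10.0 ** (-k)) for k in range(1, 6)]
# smallest eigenvalue stays positive but collapses (linearly, by Corollary 3)
assert all(v > 0 for v in vals)
assert vals[-1] < 1e-3 * vals[0]
```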
Remark 5 
(Why $\sigma_0\notin\spec(A)$ is essential). The hypothesis $\sigma_0\notin\spec(A)$ ensures that $\|(\sigma\Id-A)^{-1}\|$ remains bounded near $\sigma_0$, so $P_0$ is a well-defined Hermitian matrix. When $\sigma_0\in\spec(A)$, the resolvent diverges and the behavior of $\lambda_{\min}(P_{\Omega_\varepsilon}(\sigma_\varepsilon,A))$ depends on the spectral geometry; see Proposition 4 below and Section 4.7.
Corollary 2 
(Global coercivity collapse along a $C^1$ convex exhaustion). Assume that $A$ is not a scalar multiple of the identity (equivalently, $W(A)$ is not a singleton). Let $\{\Omega_\varepsilon\}_{\varepsilon>0}$ be a $C^1$ convex exhaustion of $W(A)$ and define the global coercivity constant
\[ c(\varepsilon) := \inf_{\sigma\in\partial\Omega_\varepsilon}\lambda_{\min}\bigl(P_{\Omega_\varepsilon}(\sigma,A)\bigr), \qquad P_{\Omega_\varepsilon}(\sigma,A) = \Real\bigl(n_{\Omega_\varepsilon}(\sigma)\,(\sigma\Id-A)^{-1}\bigr). \]
Then
\[ \liminf_{\varepsilon\to 0} c(\varepsilon) = 0. \]
In particular, there do not exist $\varepsilon_0>0$ and $c_0>0$ such that $P_{\Omega_\varepsilon}(\sigma,A)\succeq c_0\Id$ for all $0<\varepsilon<\varepsilon_0$ and all $\sigma\in\partial\Omega_\varepsilon$.
Proof. 
Since $A$ is not scalar, the compact convex set $W(A)$ contains more than one point, hence $\partial W(A)$ is infinite, whereas $\spec(A)$ is finite. Choose $\sigma_0\in\partial W(A)\setminus\spec(A)$.
Fix any sequence $\varepsilon_k\downarrow 0$. We claim that $\operatorname{dist}(\sigma_0,\partial\Omega_{\varepsilon_k})\to 0$. Indeed, if not, then there exist $\delta>0$ and a subsequence (not relabeled) such that $\operatorname{dist}(\sigma_0,\partial\Omega_{\varepsilon_k})\ge\delta$ for all $k$; since $\sigma_0\in W(A)\subset\Omega_{\varepsilon_k}$, the open ball $B(\sigma_0,\delta)$ is then contained in $\Omega_{\varepsilon_k}$ for all $k$. Taking closures and intersecting over $k$ yields $B(\sigma_0,\delta)\subset\bigcap_k\overline{\Omega_{\varepsilon_k}} = W(A)$, contradicting $\sigma_0\in\partial W(A)$.
Therefore we may choose $\sigma_k\in\partial\Omega_{\varepsilon_k}$ with $\sigma_k\to\sigma_0$. By compactness of the unit circle, after passing to a subsequence we have $n_{\Omega_{\varepsilon_k}}(\sigma_k)\to n$ for some unimodular $n$. Theorem 1 then gives
\[ \lambda_{\min}\bigl(P_{\Omega_{\varepsilon_k}}(\sigma_k,A)\bigr)\to 0. \]
Since $c(\varepsilon_k)\le\lambda_{\min}\bigl(P_{\Omega_{\varepsilon_k}}(\sigma_k,A)\bigr)$, it follows that $\liminf_{\varepsilon\to 0}c(\varepsilon)=0$.    □

4.2. Quantitative Degeneracy Rate

Corollary 3 
(Linear rate in terms of the support gap). In the setting of Theorem 1, define
\[ \delta_k := \Real(\overline{n_k}\sigma_k) - \lambda_{\max}(H(n_k)), \qquad H(n_k)=\Real(\overline{n_k}A). \]
Then $\delta_k>0$ for each $k$ and $\delta_k\to 0$. Moreover, for all sufficiently large $k$,
\[ \frac{\delta_k}{4\,\|\sigma_0\Id-A\|^2} \;\le\; \lambda_{\min}\bigl(P_{\Omega_{\varepsilon_k}}(\sigma_k,A)\bigr) \;\le\; 4\,\delta_k\,\bigl\|(\sigma_0\Id-A)^{-1}\bigr\|^2. \]
In particular, $\lambda_{\min}\bigl(P_{\Omega_{\varepsilon_k}}(\sigma_k,A)\bigr) = \Theta(\delta_k)$.
Proof. 
Since $W(A)\subset\Omega_{\varepsilon_k}$ and $\sigma_k\in\partial\Omega_{\varepsilon_k}$ with normal $n_k$, Lemma 4 and Lemma 1 imply $\lambda_{\max}(H(n_k)) < \Real(\overline{n_k}\sigma_k)$, so $\delta_k>0$.
As $n_k\to n$ and $\sigma_k\to\sigma_0$, $\Real(\overline{n_k}\sigma_k)\to\Real(\overline n\sigma_0)$. Also $H(n_k)\to H(n)$ in operator norm, hence $\lambda_{\max}(H(n_k))\to\lambda_{\max}(H(n))$. By Step 2 in the proof of Theorem 1, $\Real(\overline n\sigma_0) = \lambda_{\max}(H(n))$, so $\delta_k\to 0$.
Set $B_k = \sigma_k\Id-A$ and $B_0 = \sigma_0\Id-A$. Since $B_k\to B_0$ and $B_0$ is invertible, for large $k$ one has $\|B_k\|\le 2\|B_0\|$ and $\|B_k^{-1}\|\le 2\|B_0^{-1}\|$. Applying Lemma 3 (c) to $(\sigma,n)=(\sigma_k,n_k)$ gives
\[ \frac{\delta_k}{\|B_k\|^2} \;\le\; \lambda_{\min}\bigl(P_{\Omega_{\varepsilon_k}}(\sigma_k,A)\bigr) \;\le\; \delta_k\,\|B_k^{-1}\|^2, \]
and the stated constants follow.    □

4.3. Convergence of the Near-Kernel Subspace

Proposition 3 
(Convergence of the near-kernel spectral projector). Assume the setting of Theorem 1 and set $m := \dim\mathcal M(n)$. Assume that $\lambda_{\max}(H(n))$ is isolated with multiplicity $m$, i.e.
\[ \gamma_H := \lambda_{\max}(H(n)) - \lambda_{m+1}^{\downarrow}(H(n)) > 0. \]
Let
\[ P_0 := \Real\bigl(n(\sigma_0\Id-A)^{-1}\bigr), \qquad \mathcal K_0 := \Ker(P_0) = (\sigma_0\Id-A)\,\mathcal M(n), \]
and let $\Pi_0 : \C^d\to\mathcal K_0$ be the orthogonal projector onto $\mathcal K_0$.
For each $k$, let $P_k := P_{\Omega_{\varepsilon_k}}(\sigma_k,A)$ and let $\Pi_k$ be the orthogonal projector onto the direct sum of the eigenspaces of $P_k$ corresponding to its $m$ smallest eigenvalues. Then $\|\Pi_k-\Pi_0\|\to 0$ as $k\to\infty$.
Moreover, writing $B_0 = \sigma_0\Id-A$, one has the explicit spectral-gap bound
\[ \lambda_{m+1}^{\uparrow}(P_0) \;\ge\; \frac{\gamma_H}{\|B_0\|^2}, \tag{4.4} \]
and consequently, for all sufficiently large $k$,
\[ \|\Pi_k-\Pi_0\| \;\le\; \frac{2\,\|P_k-P_0\|}{\lambda_{m+1}^{\uparrow}(P_0)} \;\le\; \frac{2\,\|B_0\|^2}{\gamma_H}\,\|P_k-P_0\|. \tag{4.5} \]
Proof. 
By Lemma 2 and Step 2 of Theorem 1,
\[ B_0^*P_0B_0 = Q_0 := \lambda_{\max}(H(n))\,\Id - H(n) \succeq 0. \]
The eigenvalues of $Q_0$ are $0$ with multiplicity $m$ and at least $\gamma_H$ on $\mathcal M(n)^{\perp}$, so $\lambda_{m+1}^{\uparrow}(Q_0) = \gamma_H$.
Using the Courant–Fischer characterization with the change of variables $x=B_0y$, one obtains for every $j$
\[ \lambda_j^{\uparrow}(P_0) = \min_{\dim S=j}\,\max_{\substack{x\in S\\ \|x\|=1}} x^*P_0x = \min_{\dim S=j}\,\max_{\substack{y\in B_0^{-1}S\\ y\neq 0}} \frac{y^*Q_0y}{\|B_0y\|^2} \;\ge\; \frac{\lambda_j^{\uparrow}(Q_0)}{\|B_0\|^2}. \]
Taking $j=m+1$ gives (4.4).
Next, Theorem 1 gives $\|P_k-P_0\|\to 0$. Since $P_0$ has an isolated cluster of $m$ eigenvalues at $0$ separated by the gap $\lambda_{m+1}^{\uparrow}(P_0)>0$, the Davis–Kahan $\sin\Theta$ theorem for invariant subspaces [10] yields (4.5), and hence $\|\Pi_k-\Pi_0\|\to 0$.    □
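A numerical sketch of the projector convergence (assuming NumPy; the random matrix is assumed generic, so that $\lambda_{\max}(H(n))$ is simple, i.e. $m=1$, and its support point avoids $\spec(A)$):

```python
import numpy as np

rng = np.random.default_rng(6)
d = 5
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
n = np.exp(1j * 0.2)

Hn = (np.conj(n) * A + n * A.conj().T) / 2
v = np.linalg.eigh(Hn)[1][:, -1]          # M(n) = span{v}, generically one-dimensional
m = 1
z0 = v.conj() @ A @ v                     # sigma_0: support point of W(A)

B0 = z0 * np.eye(d) - A
u0 = B0 @ v
u0 /= np.linalg.norm(u0)
Pi0 = np.outer(u0, u0.conj())             # projector onto Ker(P_0) = (sigma_0 I - A) M(n)

def proj_smallest(eps):
    # projector onto the eigenspace of the m smallest eigenvalues of P at offset eps
    sigma = z0 + eps * n
    R = np.linalg.inv(sigma * np.eye(d) - A)
    P = (n * R + (n * R).conj().T) / 2
    U = np.linalg.eigh(P)[1][:, :m]       # eigh returns ascending eigenvalues
    return U @ U.conj().T

errs = [np.linalg.norm(proj_smallest(e) - Pi0, 2) for e in (1e-1, 1e-3, 1e-5)]
assert errs[-1] < errs[0]
assert errs[-1] < 0.1
```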

4.4. The Spectral-Support Regime for Normal Matrices

Proposition 4 
(Normal matrices: explicit eigenvalues near a spectral support point). Let $A$ be normal with eigenvalues $\lambda_1,\dots,\lambda_d$ (listed with algebraic multiplicity). Fix $\sigma\notin\spec(A)$ and unimodular $n\in\C$. Then
\[ P(\sigma,n) := \Real\bigl(n(\sigma\Id-A)^{-1}\bigr) \]
is unitarily diagonalizable and its eigenvalues are the scalars
\[ p_j(\sigma,n) := \Real\Bigl(\frac{n}{\sigma-\lambda_j}\Bigr) = \frac{\Real\bigl(\overline n(\sigma-\lambda_j)\bigr)}{|\sigma-\lambda_j|^2}, \qquad j=1,\dots,d. \tag{4.6} \]
Now fix $\sigma_0\in\spec(A)$ and let
\[ J_0 := \bigl\{\, j\in\{1,\dots,d\} : \lambda_j = \sigma_0 \,\bigr\}. \]
Let $\sigma_k\notin\spec(A)$ and unimodular $n_k$ satisfy
\[ \sigma_k\to\sigma_0, \qquad n_k\to n. \]
Write $p_{k,j} := p_j(\sigma_k,n_k)$. Then:
(i) For every $j\notin J_0$,
\[ p_{k,j} \to p_j(\sigma_0,n) = \Real\Bigl(\frac{n}{\sigma_0-\lambda_j}\Bigr). \]
(ii) For every $j\in J_0$ one has the exact identity
\[ p_{k,j} = \Real\Bigl(\frac{n_k}{\sigma_k-\sigma_0}\Bigr) = \frac{\Real\bigl(\overline{n_k}(\sigma_k-\sigma_0)\bigr)}{|\sigma_k-\sigma_0|^2}. \]
In particular, if there exists $c>0$ such that
\[ \Real\bigl(\overline{n_k}(\sigma_k-\sigma_0)\bigr) \ge c\,|\sigma_k-\sigma_0| \quad\text{for all sufficiently large } k, \tag{4.7} \]
then $p_{k,j}\to+\infty$ for every $j\in J_0$.
(iii) Assume in addition that $n$ is a supporting direction for $W(A)=\operatorname{conv}\{\lambda_1,\dots,\lambda_d\}$ at $\sigma_0$, i.e.
\[ \Real(\overline n\lambda_j) \le \Real(\overline n\sigma_0), \qquad j=1,\dots,d. \tag{4.8} \]
Then for every $j\notin J_0$,
\[ p_j(\sigma_0,n) = \frac{\Real\bigl(\overline n(\sigma_0-\lambda_j)\bigr)}{|\sigma_0-\lambda_j|^2} = \frac{\Real(\overline n\sigma_0) - \Real(\overline n\lambda_j)}{|\sigma_0-\lambda_j|^2} \;\ge\; 0, \]
and $p_j(\sigma_0,n)=0$ if and only if $\lambda_j$ lies on the same supporting line $\{z : \Real(\overline nz) = \Real(\overline n\sigma_0)\}$. If moreover (4.7) holds (so that all $p_{k,j}\to+\infty$ for $j\in J_0$), then
\[ \lambda_{\min}\bigl(P(\sigma_k,n_k)\bigr) \to \min_{j\notin J_0} p_j(\sigma_0,n), \]
which is strictly positive if and only if no eigenvalue $\lambda_j\ne\sigma_0$ lies on the supporting line $\{z : \Real(\overline nz) = \Real(\overline n\sigma_0)\}$.
Proof. 
Since $A$ is normal, $A = U\,\mathrm{diag}(\lambda_1,\dots,\lambda_d)\,U^*$ for some unitary $U$, hence
\[ (\sigma\Id-A)^{-1} = U\,\mathrm{diag}\Bigl(\frac{1}{\sigma-\lambda_1},\dots,\frac{1}{\sigma-\lambda_d}\Bigr)U^*. \]
Therefore,
\[ P(\sigma,n) = \Real\bigl(n(\sigma\Id-A)^{-1}\bigr) = U\,\Real\Bigl(\mathrm{diag}\Bigl(\frac{n}{\sigma-\lambda_1},\dots,\frac{n}{\sigma-\lambda_d}\Bigr)\Bigr)U^* = U\,\mathrm{diag}\Bigl(\Real\Bigl(\frac{n}{\sigma-\lambda_1}\Bigr),\dots,\Real\Bigl(\frac{n}{\sigma-\lambda_d}\Bigr)\Bigr)U^*, \]
which proves (4.6). The limit in (i) follows by continuity of the map $(\sigma,n)\mapsto\Real\bigl(n/(\sigma-\lambda_j)\bigr)$ when $\sigma_0\ne\lambda_j$. For (ii), if $\lambda_j=\sigma_0$ then
\[ \Real\Bigl(\frac{n_k}{\sigma_k-\sigma_0}\Bigr) = \Real\Bigl(\frac{n_k\,\overline{(\sigma_k-\sigma_0)}}{|\sigma_k-\sigma_0|^2}\Bigr) = \frac{\Real\bigl(\overline{n_k}(\sigma_k-\sigma_0)\bigr)}{|\sigma_k-\sigma_0|^2}, \]
and (4.7) implies $p_{k,j}\ge c/|\sigma_k-\sigma_0|\to+\infty$.
Finally, (4.8) implies $\Real\bigl(\overline n(\sigma_0-\lambda_j)\bigr)\ge 0$ for all $j$, giving the nonnegativity (and the characterization of equality) in (iii). If additionally (4.7) holds, then $p_{k,j}\to+\infty$ for all $j\in J_0$ while $p_{k,j}\to p_j(\sigma_0,n)\in[0,\infty)$ for $j\notin J_0$, so for large $k$ the minimum eigenvalue is attained among indices $j\notin J_0$, yielding the stated limit and positivity criterion.    □
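The explicit eigenvalue formula (4.6) can be confirmed for a diagonal (hence normal) matrix (a sketch assuming NumPy; the eigenvalues, $\sigma$, and $n$ are arbitrary choices):

```python
import numpy as np

# verify p_j(sigma, n) = Re(n / (sigma - lambda_j)) for a diagonal normal matrix
lams = np.array([0.0, 1.0, 0.5 + 0.5j])
A = np.diag(lams)
sigma, n = 1.3 + 0.2j, np.exp(1j * 0.4)   # sigma avoids spec(A)

R = np.linalg.inv(sigma * np.eye(3) - A)
P = (n * R + (n * R).conj().T) / 2        # P(sigma, n)
eigs = np.sort(np.linalg.eigvalsh(P))

p = np.sort(np.real(n / (sigma - lams)))  # the scalars of formula (4.6)
assert np.allclose(eigs, p)
```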
Example 1 
(Nondegeneracy at a spectral support point). Let $A=\mathrm{diag}(0,1)$, so $W(A)=[0,1]$. Take $\sigma=1+\varepsilon$ with $\varepsilon>0$ and $n=1$. Then
\[ P(\sigma,n) = \Real\bigl((\sigma\Id-A)^{-1}\bigr) = \mathrm{diag}\Bigl(\frac{1}{1+\varepsilon},\ \frac{1}{\varepsilon}\Bigr), \]
so $\lambda_{\min}\bigl(P(\sigma,n)\bigr) = \frac{1}{1+\varepsilon}\to 1$ as $\varepsilon\to 0$. Thus the smallest eigenvalue does not degenerate when the limiting support point is spectral and unique on the support face.
Example 2 
(Degeneracy at a spectral point with a flat support face). Let $A=\mathrm{diag}(1,\,1+i)$ and take $\sigma=1+\varepsilon$, $n=1$. Then
\[ P(\sigma,n) = \mathrm{diag}\Bigl(\frac{1}{\varepsilon},\ \Real\Bigl(\frac{1}{\varepsilon-i}\Bigr)\Bigr) = \mathrm{diag}\Bigl(\frac{1}{\varepsilon},\ \frac{\varepsilon}{\varepsilon^2+1}\Bigr), \]
so $\lambda_{\min}\bigl(P(\sigma,n)\bigr) = \frac{\varepsilon}{\varepsilon^2+1}\to 0$ as $\varepsilon\to 0$. Here the supporting functional $\Real(z)$ is maximized by more than one eigenvalue, and degeneracy persists at $\sigma_0=1\in\spec(A)$.
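Both examples can be checked in a few lines (a sketch assuming NumPy):

```python
import numpy as np

def lam_min(A, sigma, n=1.0):
    # smallest eigenvalue of P(sigma, n) = Re(n (sigma I - A)^{-1})
    R = np.linalg.inv(sigma * np.eye(A.shape[0]) - A)
    P = (n * R + np.conj(n) * R.conj().T) / 2
    return np.linalg.eigvalsh(P)[0]

eps = 1e-3
# Example 1: unique eigenvalue on the support face -> no degeneracy
A1 = np.diag([0.0, 1.0])
assert abs(lam_min(A1, 1 + eps) - 1 / (1 + eps)) < 1e-12
# Example 2: flat face with two maximizing eigenvalues -> linear degeneracy
A2 = np.diag([1.0, 1.0 + 1j])
assert abs(lam_min(A2, 1 + eps) - eps / (eps**2 + 1)) < 1e-12
```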

4.5. A fully explicit $2\times 2$ example: a nilpotent Jordan block

Example 3 
(Exact Poisson kernel and exact degeneracy rate for a disk exhaustion). Let
\[ A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}. \]
Then $W(A)=\{z\in\C : |z|\le\tfrac12\}$. For $r>\tfrac12$, let $\Omega_r := \{z\in\C : |z|<r\}$ and choose $\sigma = re^{it}\in\partial\Omega_r$. The outward normal at $\sigma$ is $n_{\Omega_r}(\sigma)=e^{it}$ and
\[ (\sigma\Id-A)^{-1} = \begin{pmatrix} \frac{1}{\sigma} & \frac{1}{\sigma^2} \\ 0 & \frac{1}{\sigma} \end{pmatrix}. \]
Hence
\[ P_{\Omega_r}(\sigma,A) = \Real\bigl(e^{it}(\sigma\Id-A)^{-1}\bigr) = \begin{pmatrix} \frac{1}{r} & \frac{e^{-it}}{2r^2} \\[2pt] \frac{e^{it}}{2r^2} & \frac{1}{r} \end{pmatrix}, \]
whose eigenvalues are $\lambda_\pm(r) = \frac{1}{r}\pm\frac{1}{2r^2}$. In particular,
\[ \lambda_{\min}\bigl(P_{\Omega_r}(\sigma,A)\bigr) = \frac{1}{r} - \frac{1}{2r^2} = \frac{r-\tfrac12}{r^2}, \]
so the degeneracy is linear as $r\downarrow\tfrac12$.
Moreover, a min-eigenvector is $u(r,t)\propto(-e^{-it},\,1)$ (independent of $r$). For the support direction $n=e^{it}$,
\[ H(n) = \Real(\overline nA) = \frac12\begin{pmatrix} 0 & e^{-it} \\ e^{it} & 0 \end{pmatrix}, \qquad \mathcal M(n) = \mathrm{span}\bigl\{(e^{-it},\,1)\bigr\}. \]
At $\sigma_0 = \tfrac12 e^{it}\in\partial W(A)$,
\[ (\sigma_0\Id-A)\begin{pmatrix} e^{-it} \\ 1 \end{pmatrix} = \frac12\begin{pmatrix} -1 \\ e^{it} \end{pmatrix} \propto \begin{pmatrix} -e^{-it} \\ 1 \end{pmatrix}, \]
in agreement with Theorem 1.
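The closed-form rate of this example is reproduced numerically by the following sketch (assuming NumPy; $r$ and $t$ are arbitrary choices with $r>\tfrac12$):

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
r, t = 0.7, 0.9                        # arbitrary radius r > 1/2 and angle t
sigma = r * np.exp(1j * t)             # boundary point of Omega_r
n = np.exp(1j * t)                     # outward normal of the disk at sigma

R = np.linalg.inv(sigma * np.eye(2) - A)
P = (n * R + (n * R).conj().T) / 2     # P = Re(n (sigma I - A)^{-1})
lam_min = np.linalg.eigvalsh(P)[0]

# exact value from Example 3: (r - 1/2) / r^2
assert abs(lam_min - (r - 0.5) / r**2) < 1e-12
```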

4.6. Numerical Experiments

This section provides numerical illustrations of: (i) the linear degeneracy predicted by Corollary 3 (and, in offset form, Proposition 2), (ii) the global coercivity collapse of Corollary 2, and (iii) the contrasting behavior at spectral support points for normal matrices (Proposition 4 and Examples 1–2).
Sampling model for an “outer offset” exhaustion. Fix a unimodular direction $n\in\C$. Let $H(n)=\Real(\overline nA)$ and let $v\in\mathcal M(n)$ be a unit vector in the maximal eigenspace of $H(n)$ (Lemma 1). The corresponding numerical-range support point is
\[ z_0(n) := v^*Av \in \partial W(A), \qquad \Real\bigl(\overline nz_0(n)\bigr) = \lambda_{\max}(H(n)). \]
For $\varepsilon>0$ we define the offset boundary point
\[ \sigma_\varepsilon(n) := z_0(n) + \varepsilon n. \]
Then $\Real\bigl(\overline n\sigma_\varepsilon(n)\bigr) = \lambda_{\max}(H(n)) + \varepsilon$, so the support gap equals $\delta(\sigma_\varepsilon(n),n)=\varepsilon$ (cf. Proposition 2). Moreover, $\sigma_\varepsilon(n)\notin W(A)$, hence $\sigma_\varepsilon(n)\notin\spec(A)$ (because $\spec(A)\subset W(A)$; Remark 1), so the resolvent is well-defined.
We evaluate the pointwise kernel
\[ P_\varepsilon(n) := \Real\bigl(n\,(\sigma_\varepsilon(n)\Id-A)^{-1}\bigr) \]
and track $\lambda_{\min}(P_\varepsilon(n))$ as $\varepsilon\to 0$. In the generic (bounded-resolvent) regime $z_0(n)\notin\spec(A)$, Corollary 3 predicts the linear scaling $\lambda_{\min}(P_\varepsilon(n))=\Theta(\varepsilon)$ and convergence of min-eigenvectors to $(z_0(n)\Id-A)\,\mathcal M(n)$ (Theorem 1).
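A minimal sketch of this sampling model (assuming NumPy; this is a simplified stand-in for the accompanying scripts, with an arbitrary seeded matrix and a coarse grid of directions):

```python
import numpy as np

rng = np.random.default_rng(5)
d = 5
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

def lam_min_offset(A, n, eps):
    # support point z0(n) of W(A), then offset boundary point z0(n) + eps*n
    Hn = (np.conj(n) * A + n * A.conj().T) / 2
    v = np.linalg.eigh(Hn)[1][:, -1]
    sigma = (v.conj() @ A @ v) + eps * n
    R = np.linalg.inv(sigma * np.eye(A.shape[0]) - A)
    return np.linalg.eigvalsh((n * R + (n * R).conj().T) / 2)[0]

# sampled stand-in for the global coercivity constant c(eps)
dirs = np.exp(1j * np.linspace(0, 2 * np.pi, 64, endpoint=False))
cs = [min(lam_min_offset(A, n, eps) for n in dirs) for eps in (1e-1, 1e-2, 1e-3)]

# positive for each eps, but collapsing as eps -> 0 (Corollary 2)
assert cs[2] > 0
assert cs[2] < cs[1] < cs[0]
```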
Experiment 1: exact linear rate for the nilpotent Jordan block. We revisit Example 3 and the disk exhaustion $\Omega_r=\{z:|z|<r\}$, $r>\tfrac12$. Writing $\varepsilon = r-\tfrac12$, one has the exact formula $\lambda_{\min}\bigl(P_{\Omega_r}(\sigma,A)\bigr) = \varepsilon/r^2$, hence linear degeneracy as $\varepsilon\to 0$. Figure 1 compares the computed smallest eigenvalue to the exact expression.
Experiment 2: generic nonnormal matrix—linear degeneracy and eigenvector convergence. We generate a fixed random complex matrix $A \in \C^{5\times 5}$ (seeded for reproducibility), fix one direction $n = e^{i\theta}$, and form $\sigma_\varepsilon(n) = z_0(n) + \varepsilon n$ as above. Figure 2 shows $\lambda_{\min}(P_\varepsilon(n))$ against $\varepsilon$ on a log–log scale, together with a reference $\varepsilon$ line; the observed slope is 1 on the plotted range. Figure 3 tracks the distance of a min-eigenvector $u_\varepsilon$ of $P_\varepsilon(n)$ to the predicted limiting subspace $(z_0(n)\Id - A)\mathcal M(n)$, quantified by $\|(\Id - \Pi)u_\varepsilon\|$, where $\Pi$ is the orthogonal projector onto $(z_0(n)\Id - A)\mathcal M(n)$ (consistent with Theorem 1).
Experiment 3: approximate global coercivity collapse. For the same $A$ we approximate the global coercivity constant
\[ c(\varepsilon) = \inf_{\sigma \in \partial\Omega_\varepsilon} \lambda_{\min}\bigl(P_{\Omega_\varepsilon}(\sigma, A)\bigr) \]
by sampling a fine grid of directions $\{n_j\}$ and using the offset model $\sigma_\varepsilon(n_j) = z_0(n_j) + \varepsilon n_j$. Figure 4 plots the sampled minimum $\min_j \lambda_{\min}(P_\varepsilon(n_j))$ versus $\varepsilon$, illustrating the collapse asserted by Corollary 2. (Here the offset model has $\Delta(\Omega_\varepsilon) = \varepsilon$, so Corollary 1 also predicts that uniform coercivity cannot persist as $\varepsilon \downarrow 0$.)
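A direct way to produce such data is to loop the offset construction over a grid of directions and take the minimum. The sketch below uses a hypothetical helper name `sampled_coercivity`; the paper's actual scripts may organize this differently.

```python
import numpy as np

def sampled_coercivity(A, eps, num_dirs=64):
    """Approximate c(eps) by the minimum over sampled directions n_j of
    lambda_min(P_eps(n_j)), with sigma_eps(n) = z0(n) + eps * n.
    (Sampling-based sketch; helper name is ours.)"""
    d = A.shape[0]
    vals = []
    for theta in np.linspace(0.0, 2 * np.pi, num_dirs, endpoint=False):
        n = np.exp(1j * theta)
        B = np.conj(n) * A
        H = (B + B.conj().T) / 2
        _, V = np.linalg.eigh(H)
        v = V[:, -1]                          # unit max-eigenvector of H(n)
        sigma = np.vdot(v, A @ v) + eps * n   # offset point z0(n) + eps * n
        R = np.linalg.inv(sigma * np.eye(d) - A)
        P = (n * R + (n * R).conj().T) / 2
        vals.append(np.linalg.eigvalsh(P)[0])
    return min(vals)
```

For the rotation-invariant Jordan-block example, every direction contributes the same value $\varepsilon/(\tfrac12+\varepsilon)^2$, so the sampled minimum collapses linearly with $\varepsilon$.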
Experiment 4: normal matrices at spectral support points. We reproduce the contrasting behavior in Examples 1–2 by evaluating $P(1+\varepsilon, 1) = \Real\bigl(((1+\varepsilon)\Id - A)^{-1}\bigr)$ for two diagonal (hence normal) matrices: $A = \operatorname{diag}(0, 1)$ and $A = \operatorname{diag}(1, 1+i)$. Figure 5 shows that the former remains bounded away from 0 as $\varepsilon \downarrow 0$, while the latter degenerates linearly, consistent with Proposition 4.
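Since both matrices are diagonal, the resolvent is diagonal and the two limits can be read off in closed form. A minimal check (our own sketch, using the kernel convention above with $n = 1$):

```python
import numpy as np

def min_eig_P(A, sigma):
    """lambda_min of P(sigma, 1) = Re((sigma I - A)^{-1}), direction n = 1."""
    R = np.linalg.inv(sigma * np.eye(A.shape[0]) - A)
    return np.linalg.eigvalsh((R + R.conj().T) / 2)[0]

A1 = np.diag([0.0, 1.0]).astype(complex)  # face at n = 1 contains one eigenvalue
A2 = np.diag([1.0 + 0j, 1.0 + 1j])        # face at n = 1 contains both eigenvalues

eps = 1e-3
m1 = min_eig_P(A1, 1.0 + eps)  # = 1/(1+eps): stays bounded away from 0
m2 = min_eig_P(A2, 1.0 + eps)  # = eps/(1+eps^2): linear degeneracy
```

The closed forms follow from $((1+\varepsilon)\Id - A)^{-1}$ being diagonal: for $A_1$ the entries are $1/(1+\varepsilon)$ and $1/\varepsilon$, while for $A_2$ the real parts are $1/\varepsilon$ and $\varepsilon/(\varepsilon^2+1)$.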
Reproducibility. All figures are generated by the accompanying scripts poisson_utils.py and run_numerical_experiments.py, which require only NumPy and Matplotlib and save PDF figures into a figs/ folder.

4.7. Discussion and Open Problems: the Nonnormal Spectral-Support Regime

Remark 6 
(Beyond the normal case at spectral support points). Theorem 1 treats the bounded-resolvent regime $\sigma_0 \notin \spec(A)$, while Proposition 4 gives a complete description of what can happen at a spectral support point $\sigma_0 \in \spec(A) \cap \partial W(A)$ for normal matrices.
For nonnormal matrices, the regime $\sigma_0 \in \spec(A) \cap \partial W(A)$ appears substantially more delicate: the resolvent $(\sigma\Id - A)^{-1}$ typically diverges as $\sigma \to \sigma_0$, and the interplay between (i) the approach geometry $\sigma \to \sigma_0$ along $\partial\Omega_\varepsilon$, (ii) the supporting direction $n$, and (iii) the Jordan/pseudospectral behavior of $A$ near $\sigma_0$ can produce several qualitatively different limits for $\lambda_{\min}(P_\Omega(\sigma, A))$.
Natural questions suggested by the present results include:
  • Can one classify (or even bound) the possible asymptotic behavior of $\lambda_{\min}(P_{\Omega_\varepsilon}(\sigma_\varepsilon, A))$ as $\sigma_\varepsilon \to \sigma_0 \in \spec(A) \cap \partial W(A)$, in terms of the local spectral data of $A$ (e.g. Jordan structure) and the support direction $n$?
  • In analogy with Proposition 4, is there a purely geometric/spectral criterion characterizing when degeneracy must occur at a spectral support point for a general (possibly defective) A?
  • How do these boundary effects interact with quantitative constants in boundary-integral functional calculi and with the conditioning of numerical schemes based on $C^1$ domain approximations $\Omega_\varepsilon \downarrow W(A)$?
We leave these questions for future work.

References

  1. Toeplitz, O. Das algebraische Analogon zu einem Satze von Fejér. Math. Z. 1918, 2(1–2), 187–197.
  2. Hausdorff, F. Der Wertevorrat einer Bilinearform. Math. Z. 1919, 3(1), 314–316.
  3. Crouzeix, M. Bounds for analytical functions of matrices. Integral Equations Operator Theory 2004, 48, 461–477.
  4. Crouzeix, M. Numerical range and functional calculus in Hilbert space. J. Funct. Anal. 2007, 244(2), 668–690.
  5. Crouzeix, M.; Palencia, C. The numerical range is a (1+√2)-spectral set. SIAM J. Matrix Anal. Appl. 2017, 38(2), 649–655.
  6. Delyon, B.; Delyon, F. Generalization of von Neumann’s spectral sets and integral representation of operators. Bull. Soc. Math. France 1999, 127(1), 25–41.
  7. Badea, C.; Crouzeix, M.; Delyon, B. Convex domains and K-spectral sets. Math. Z. 2006, 252(2), 345–365.
  8. Schwenninger, F. L.; de Vries, J. The double-layer potential for spectral constants revisited. Integr. Equ. Oper. Theory 2025, 97, Article 13.
  9. Crouzeix, M.; Greenbaum, A. Spectral Sets: Numerical Range and Beyond. SIAM J. Matrix Anal. Appl. 2019, 40(3), 1087–1101.
  10. Davis, C.; Kahan, W. M. The rotation of eigenvectors by a perturbation. III. SIAM J. Numer. Anal. 1970, 7, 1–46.
Figure 1. Nilpotent Jordan block (Example 3): log–log plot of $\lambda_{\min}$ versus $\varepsilon = r - \tfrac12$, showing the predicted linear scaling.
Figure 2. Random nonnormal $A \in \C^{5\times 5}$ (fixed seed), fixed direction $n$: $\lambda_{\min}(P_\varepsilon(n))$ scales linearly in $\varepsilon$ (Corollary 3).
Figure 3. Same setup as Figure 2: the min-eigenvector direction converges to $(z_0(n)\Id - A)\mathcal M(n)$ as $\varepsilon \downarrow 0$ (Theorem 1). The plotted “gap” is $\|(\Id - \Pi)u_\varepsilon\|$.
Figure 4. Approximate global coercivity constant $c(\varepsilon)$ computed by sampling directions and using the offset model $\sigma_\varepsilon(n) = z_0(n) + \varepsilon n$. The minimum over directions tends to 0 as $\varepsilon \downarrow 0$ (Corollary 2).
Figure 5. Normal matrices at a spectral support point: nondegeneracy for $A = \operatorname{diag}(0, 1)$ versus linear degeneracy for $A = \operatorname{diag}(1, 1+i)$, consistent with Proposition 4.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.