Preprint — Article (this version is not peer-reviewed)

An Approximation to Riemann Hypothesis

Submitted: 25 January 2026
Posted: 26 January 2026
Abstract
A revision mainly in the appendix.
1. Introduction

The Riemann zeta-function ζ(s) is originally defined as
$$\zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^{s}}, \quad \text{for } \operatorname{Re} s > 1,$$
and it can also be expressed in the product form
$$\zeta(s) = \prod_{p}\frac{1}{1-1/p^{s}}, \quad \text{for } \operatorname{Re} s > 1.$$
This formula is called Euler's product formula, and it exhibits the relation between ζ(s) and the prime numbers. Concerning ζ(s) there is the well-known Riemann hypothesis, which states that all the non-trivial zeros of ζ(s) lie on the critical line Re s = 1/2. Research on this conjecture is without doubt among the most time-consuming in mathematics; see the survey paper [3].
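As a quick numerical illustration (not used anywhere in the arguments of this paper), both truncations of the two expressions above converge to the same value for Re s > 1. The sketch below compares them at s = 2, where ζ(2) = π²/6; all function names and truncation points are our own choices.

```python
import math

def primes_up_to(N):
    # Sieve of Eratosthenes.
    is_prime = bytearray([1]) * (N + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(N) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return [p for p in range(2, N + 1) if is_prime[p]]

def zeta_series(s, N):
    # Truncated Dirichlet series  sum_{n<=N} n^(-s)  (valid for Re s > 1).
    return sum(n ** -s for n in range(1, N + 1))

def zeta_product(s, N):
    # Truncated Euler product  prod_{p<=N} (1 - p^(-s))^(-1).
    out = 1.0
    for p in primes_up_to(N):
        out /= 1.0 - p ** -s
    return out

# Both truncations approach zeta(2) = pi^2/6 as N grows.
exact = math.pi ** 2 / 6
print(zeta_series(2, 100000) - exact)   # ~ -1e-5 (tail of the series)
print(zeta_product(2, 100000) - exact)  # ~ -1e-6 (tail of the product)
```

The product truncation is closer because its tail involves only primes.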
The so-called trivial zeros of ζ(s) are s = −2, −4, …, and the non-trivial zeros of ζ(s) are all known to lie in the critical strip 0 ≤ Re s ≤ 1.
Denote by N(T) the number of zeros of ζ(σ+it) in the region 0 ≤ σ ≤ 1, 0 < t ≤ T, and by N₀(T) the number of zeros on the critical line σ = 1/2, 0 < t ≤ T. The Riemann hypothesis asserts that
$$N_0(T) = N(T).$$
For N(T) it is known that
$$N(T) = \frac{T}{2\pi}\log\frac{T}{2\pi} - \frac{T}{2\pi} + O(\log T).$$
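This counting formula can be checked against known zero counts; for instance, exactly 29 zeros are known to lie in 0 < t ≤ 100. A small sketch (the constant 7/8 added below comes from the standard Riemann–von Mangoldt form of the formula; in the display above it is absorbed into the O(log T) term):

```python
import math

def n_main_term(T):
    # Main term (T/2pi) log(T/2pi) - T/2pi of the zero-counting formula.
    x = T / (2 * math.pi)
    return x * math.log(x) - x

# Exactly 29 zeros are known to lie in 0 < t <= 100; adding the 7/8
# constant of the full Riemann-von Mangoldt formula lands very close:
approx = n_main_term(100.0) + 7 / 8
print(round(approx, 3))  # 29.002
```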
As for N₀(T), Hardy first showed that there are infinitely many zeros on the critical line; Hardy and Littlewood [5] and then Selberg [8] proved that
$$\kappa = \liminf_{T\to\infty}\frac{N_0(T)}{N(T)} > 0.$$
Levinson [6] proved
$$\kappa \ge \frac13,$$
and this result has been improved successively: Conrey [2] and Feng [4] proved, respectively,
$$\kappa \ge 0.407, \qquad \kappa \ge 0.412.$$
In this paper, we will prove:
Theorem 1.1.
$$N(T) = N_0(T) + E,$$
where $E \ll T^{1/2}(\log T)^{3}$.
The main arguments in this paper are based on the papers [1,6,7], but instead of the Riemann–Siegel formula we apply an auxiliary function ω(s, T₁, T₂), defined in Lemma 2.1, which plays the role of a mollifier and a ferry; it was first used in [7], and appears here with a small modification.

2. Some Lemmas

In the following we use an auxiliary function ω(s, T₁, T₂), defined as follows. Suppose that T ≤ T₁ < T₂ ≤ 2T, λ = T^{1−ε} for any ε > 0, s₁ = λ + c + iu (c ≥ 0), s = v + it, and define
$$g(w) = e^{\lambda}\,\Gamma(w)\,\lambda^{-w}$$
and
$$\omega(s,T_1,T_2) = \frac{1}{2\pi}\int_{T_1}^{T_2} g(s_1-s)\,du.$$
Lemma 2.1.
Let σ = λ + c − v and η = min{|x − t| : x ∈ [T₁, T₂]}. Then
$$|\omega(s,T_1,T_2)| \ll \exp\big(((c-v)^2-\eta^2)/2\sigma\big).$$
Suppose Δ ≥ Δ₀ (= (2βλ log T)^{1/2}, β > 0). If t ∈ [T₁+Δ, T₂−Δ], then
$$|\omega(c+it,T_1,T_2) - 1| \ll T^{-\beta},$$
and if t ≤ T₁ − Δ or t ≥ T₂ + Δ, then
$$|\omega(c+it,T_1,T_2)| \ll T^{-\beta}.$$
If |t − u| = o(σ^{2/3}), then
$$\arg g(s_1-s) = \frac{t-u}{2\sigma} - \frac{(t-u)^3}{6\sigma^2} - \frac{(c-v)(t-u)}{\sigma} + O\Big(\frac1\sigma\Big).$$
Proof.
By Stirling's formula we have
$$\operatorname{Re}\log\Gamma(\sigma+(u-t)i) = \Big(\sigma-\frac12\Big)\frac{\log(\sigma^2+(u-t)^2)}{2} - \sigma + \frac12\log(2\pi) - (u-t)\arctan\frac{u-t}{\sigma} + O\Big(\frac1\sigma\Big),$$
and
$$\operatorname{Re}\log\Gamma(\sigma+(u-t)i) + \operatorname{Re}\log\lambda^{-(\sigma+(u-t)i)} + \lambda = \frac{(c-v)^2}{2\sigma} - \frac{(u-t)^2}{2\sigma} - \frac12\log\sigma + \frac12\log(2\pi) + O\Big(\frac1\sigma\Big).$$
Hence
$$|\omega(s,T_1,T_2)| \le e^{(c-v)^2/2\sigma}\,\frac{1}{\sqrt{2\pi\sigma}}\int_{T_1}^{T_2} e^{-(u-t)^2/2\sigma}\,du \ll e^{((c-v)^2-\eta^2)/2\sigma}.$$
Besides, it is familiar that
$$e^{-\lambda} = \frac{1}{2\pi i}\int_{(c)}\Gamma(s_1-s)\,\lambda^{s-s_1}\,ds.$$
Hence
$$1 - \omega(c+it,T_1,T_2) = R_1 + R_2,$$
where
$$R_1 = \frac{e^{\lambda}}{2\pi}\int_{T_2}^{\infty}\Gamma(\lambda+(u-t)i)\,\lambda^{-(\lambda+(u-t)i)}\,du, \qquad R_2 = \frac{e^{\lambda}}{2\pi}\int_{-\infty}^{T_1}\Gamma(\lambda+(u-t)i)\,\lambda^{-(\lambda+(u-t)i)}\,du.$$
Hence, if t ∈ [T₁+Δ, T₂−Δ], then
$$R_1 \ll \int_{T_2}^{\infty}\big|e^{\lambda}\Gamma(\lambda+(u-t)i)\,\lambda^{-(\lambda+(u-t)i)}\big|\,du \ll \lambda^{-1/2}\int_{T_2}^{\infty}\exp(-(u-t)^2/2\lambda)\,du \ll T^{-\beta},$$
and similarly,
$$R_2 \ll \lambda^{-1/2}\int_{-\infty}^{T_1}\exp(-(u-t)^2/2\lambda)\,du \ll T^{-\beta}.$$
If t ≤ T₁ − Δ or t ≥ T₂ + Δ, then
$$\omega(c+it,T_1,T_2) \ll \lambda^{-1/2}\int_{T_1}^{T_2}\exp(-(u-t)^2/2\lambda)\,du \ll T^{-\beta}.$$
If |t − u| = o(σ^{2/3}), then
$$\operatorname{Im}\log\big(\Gamma(s_1-s)\lambda^{s-s_1}\big) = \frac{t-u}{2\sigma} + \frac{(t-u)^3}{3\sigma^2} - \frac{(t-u)^3}{2\sigma^2} - \frac{(c-v)(t-u)}{\sigma} + O\Big(\frac1\sigma\Big) = \frac{t-u}{2\sigma} - \frac{(t-u)^3}{6\sigma^2} - \frac{(c-v)(t-u)}{\sigma} + O\Big(\frac1\sigma\Big).$$
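The Gaussian bound in the proof makes ω behave like a smoothed indicator function of [T₁, T₂]. The sketch below evaluates just the Gaussian main term of |ω| on the line v = c (so σ = λ), expressed through the error function; the helper name and all parameter sizes are illustrative choices of ours, not the paper's.

```python
import math

def omega_gaussian_proxy(t, T1, T2, lam):
    # Gaussian main term of omega(c+it, T1, T2) read off from the proof:
    #   (2*pi*lam)^(-1/2) * integral_{T1}^{T2} exp(-(u-t)^2/(2*lam)) du,
    # written with the error function.
    r = math.sqrt(2 * lam)
    return 0.5 * (math.erf((T2 - t) / r) - math.erf((T1 - t) / r))

T1, T2, lam = 1.0e6, 2.0e6, 1.0e5           # illustrative sizes only
delta0 = math.sqrt(2 * lam * math.log(T1))  # Delta_0 with beta = 1

inside  = omega_gaussian_proxy(1.5e6, T1, T2, lam)             # well inside
outside = omega_gaussian_proxy(T1 - 10 * delta0, T1, T2, lam)  # well outside
edge    = omega_gaussian_proxy(T1, T1, T2, lam)                # at an endpoint
print(inside, outside, edge)  # ~1.0, ~0.0, ~0.5
```

Well inside [T₁, T₂] the proxy is 1, well outside it is 0, and the transition near each endpoint has width comparable to Δ₀, matching the three estimates of Lemma 2.1.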
Lemma 2.2.
Let L = log(T/2π), G₀(s) = ζ(s) + ζ′(s)/L, and let 0 < a ≤ 1/2, 1 < b < β − 1, u ∈ [T₁, T₂], s₁ = λ + c + iu. Then
$$\frac{1}{2\pi i}\int_{a+(u-\Delta)i}^{a+(u+\Delta)i} g(s_1-s)G_0(s)G_0(2a-s)\,ds = \frac{1}{2\pi i}\int_{b+(u-\Delta)i}^{b+(u+\Delta)i} g(s_1-s)G_0(s)G_0(2a-s)\,ds + O(b/T).$$
Proof.
Let B be the rectangle with vertices a + (u−Δ)i, b + (u−Δ)i, b + (u+Δ)i and a + (u+Δ)i. Taking the integral $\oint_B g(s_1-s)G_0(s)G_0(2a-s)\,ds$ and applying the residue theorem (the integrand is analytic in B), we have
$$\frac{1}{2\pi i}\int_{b+(u-\Delta)i}^{b+(u+\Delta)i} g(s_1-s)G_0(s)G_0(2a-s)\,ds = \frac{1}{2\pi i}\int_{a+(u-\Delta)i}^{a+(u+\Delta)i} + \frac{1}{2\pi i}\int_{a+(u+\Delta)i}^{b+(u+\Delta)i} + \frac{1}{2\pi i}\int_{b+(u-\Delta)i}^{a+(u-\Delta)i}.$$
On the upper side and the lower side, |Im(s₁−s)| = Δ, so by Lemma 2.1
$$|g(s_1-s)| \ll 1/T^{\beta}.$$
Besides, by the functional equation it is easy to see that
$$|G_0(v+(u\pm\Delta)i)\,G_0(2a-v-(u\pm\Delta)i)| \ll T^{b}, \quad (a\le v\le b).$$
Hence the two integrals along the upper and lower sides of B satisfy
$$\frac{1}{2\pi i}\int_{a+(u\pm\Delta)i}^{b+(u\pm\Delta)i} g(s_1-s)G_0(s)G_0(2a-s)\,ds \ll b/T,$$
and the lemma follows.
Lemma 2.3.
Suppose that 0 < α < β − 2. For r > 0, let
$$J(r) = \frac{1}{2\pi}\int_{T_1}^{T_2}\int_{u-\Delta}^{u+\Delta}\Big(\frac{t}{2\pi}\Big)^{\alpha} e^{-(t-u)^2/2\sigma}\exp\Big(it\log\frac{t}{re}\Big)\,dt\,du.$$
Then for T₁ − Δ ≤ r ≤ T₂ + Δ,
$$J(r) \approx \Big(\frac{r}{2\pi}\Big)^{\alpha} r^{1/2}\sigma^{1/2}e^{\pi i/4}.$$
And if r < T₁ − Δ or r > T₂ + Δ, and Δ ≥ Δ₀T^{ε}, then
$$J(r) = O(1).$$
Proof.
Write
$$F(u) = \int_{u-\Delta}^{u+\Delta}\Big(\frac{t}{2\pi}\Big)^{\alpha} e^{-(t-u)^2/2\sigma}\exp\Big(it\log\frac{t}{re}\Big)\,dt,$$
and let t = u(1+x); then
$$F(u) = \Big(\frac{u}{2\pi}\Big)^{\alpha} u\,e^{iu\rho}F_1(u),$$
where ρ = log(u/r) − 1,
$$F_1(u) = \int_{-\Delta/u}^{\Delta/u}\exp\big(A(x)+B(x)i\big)\,dx,$$
$$A(x) = \alpha\log(1+x) - (ux)^2/2\sigma, \qquad B(x) = u(1+x)\log(1+x) + ux\rho.$$
By Gaussian integration it easily follows that
$$F_1(u) \approx \frac1u\sqrt{\frac{2\pi\sigma}{1+\alpha\sigma/u^2 - i\sigma/u}}\,\exp\Big(-\frac{\big((1+\rho)-i\alpha/u\big)^2u^2}{2(u^2/\sigma+\alpha-iu)}\Big)$$
and
$$F(u) \approx \Big(\frac{u}{2\pi}\Big)^{\alpha} e^{iu\rho}\sqrt{\frac{2\pi\sigma}{1+\alpha\sigma/u^2-i\sigma/u}}\,\exp\Big(-\frac{\big(\log(u/r)-i\alpha/u\big)^2\,\sigma}{2(1+\alpha\sigma/u^2-i\sigma/u)}\Big).$$
If r ∈ [T₁−Δ, T₂+Δ], let u = r(1+x); then
$$F(u) \approx \Big(\frac{r}{2\pi}\Big)^{\alpha}\sqrt{2\pi\sigma}\,\exp\Big(-\frac{x^2(\sigma-ri)}{2}\Big)(1+x)^{\alpha}$$
and
$$\frac{1}{2\pi}\int_{T_1/r-1}^{T_2/r-1}F(u)\,r\,dx \approx \frac{r}{2\pi}\Big(\frac{r}{2\pi}\Big)^{\alpha}\sqrt{2\pi\sigma}\cdot\sqrt{\frac{2\pi}{\sigma+\alpha-ri}}\,\exp\Big(\frac{\alpha^2(\sigma+\alpha+ri)}{2((\sigma+\alpha)^2+r^2)}\Big) \approx \Big(\frac{r}{2\pi}\Big)^{\alpha}r^{1/2}\sigma^{1/2}e^{\pi i/4}.$$
If r < T₁ − Δ or r > T₂ + Δ, and Δ ≥ Δ₀T^{ε}, then
$$|F(u)| \ll T^{\alpha+1/2}\exp\Big(-\Big(\frac{\Delta}{T}\Big)^2\frac{\sigma}{2}\Big) \ll T^{\alpha+1/2-\beta} \ll 1/T,$$
whence J(r) = O(1).
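The stationary-phase evaluation above can be tested numerically on toy parameters. The sketch below takes α = 0 (dropping the (t/2π)^α factor), fixes u = r, and compares the inner integral F(u) with its quadratic Gaussian approximation around the stationary point t = r; the sizes σ = 50, r = 1000 are purely illustrative, far from the paper's actual ranges.

```python
import cmath, math

# Inner integral of Lemma 2.3 with alpha = 0: the phase psi(t) = t*log(t/(r*e))
# is stationary at t = r with psi''(r) = 1/r, so near t = r the integrand is a
# complex Gaussian.  sigma and r are toy sizes.
sigma, r = 50.0, 1000.0
u = r
half = 6.0 * math.sqrt(sigma)          # integration window u +/- half
n = 20000
h = 2 * half / n
total = 0.0 + 0.0j
for k in range(n + 1):
    t = u - half + k * h
    w = 0.5 if k in (0, n) else 1.0    # trapezoidal rule
    total += w * cmath.exp(-((t - u) ** 2) / (2 * sigma)
                           + 1j * t * math.log(t / (r * math.e)))
total *= h

# Quadratic (Gaussian) approximation around the stationary point t = r:
predicted = cmath.exp(-1j * r) * cmath.sqrt(2 * math.pi / (1 / sigma - 1j / r))
print(abs(total - predicted) / abs(predicted))  # small (well under 1%)
```

For r ≫ σ the prefactor tends to √(2πσ)·(σ/r)^{1/2}·e^{iπ/4} in modulus and phase, which is where the factor r^{1/2}σ^{1/2}e^{πi/4} of the lemma comes from after the u-integration.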

3. The Proof of Theorem 1.1

Proof. 
Let h(s) = π^{−s/2}Γ(s/2); then the functional equation of ζ(s) can be written as
$$h(s)\zeta(s) = h(1-s)\zeta(1-s).\qquad(3.1)$$
By Stirling's formula,
$$\log h(s) = \frac12(s-1)\log\frac{s}{2\pi} - \frac{s}{2} + C_0 + O\Big(\frac1s\Big).\qquad(3.2)$$
Let f(s) = log h(s); then
$$f'(s) = \frac{h'(s)}{h(s)} = \frac12\log\frac{s}{2\pi} + O\Big(\frac1s\Big),$$
and for large t,
$$f'(s) + f'(1-s) = \log\frac{t}{2\pi} + O\Big(\frac1s\Big).\qquad(3.4)$$
Taking the logarithm of equation (3.1) and then the derivative, it follows that
$$h(s)\zeta(s)\big(f'(s)+f'(1-s)\big) = -h(s)\zeta'(s) - h(1-s)\zeta'(1-s).\qquad(3.5)$$
We note that the right side of (3.5) is a sum of two conjugate complex numbers when s = 1/2 + it, so the zeros of the right side of (3.5) occur if and only if
$$\arg\big(h(s)\zeta'(s)\big) \equiv \pi/2 \pmod{\pi}.\qquad(3.6)$$
On the left side of (3.5), clearly, h(s) is never zero, and by (3.4) the factor f′(s) + f′(1−s) does not vanish for large t, so its zeros are just the zeros of ζ(1/2+it).
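That ζ does vanish on the critical line can be illustrated numerically. The sketch below evaluates ζ(1/2+it) through the Dirichlet eta function with P. Borwein's accelerated alternating-series algorithm — our choice of method, not anything used in this paper — and confirms a tiny value at the first non-trivial zero t ≈ 14.134725.

```python
import cmath  # complex powers are handled by Python's ** operator

def zeta(s, n=60):
    # Dirichlet eta via P. Borwein's accelerated alternating series, then
    # zeta(s) = eta(s) / (1 - 2^(1-s)); accurate for moderate |Im s|.
    d = [0.0] * (n + 1)
    t = 1.0 / n                  # t_0 = (n-1)!/n!; d_k = n * sum_{i<=k} t_i
    acc = t
    d[0] = n * acc
    for i in range(1, n + 1):
        t *= 4.0 * (n + i - 1) * (n - i + 1) / ((2 * i) * (2 * i - 1))
        acc += t
        d[i] = n * acc
    eta = sum((-1) ** k * (d[k] - d[n]) / (k + 1) ** s for k in range(n)) / (-d[n])
    return eta / (1.0 - 2.0 ** (1 - s))

# The first non-trivial zero lies at s = 1/2 + i*14.1347251..., i.e. exactly
# on the critical line:
print(abs(zeta(0.5 + 14.134725j)))  # tiny (about 1e-7, limited by the
                                    # precision of the quoted ordinate)
```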
Moreover, let χ(s) = h(1−s)/h(s); then ζ(s) = χ(s)ζ(1−s), and
$$\zeta'(s) = -\chi(s)\big\{(f'(s)+f'(1-s))\zeta(1-s) + \zeta'(1-s)\big\}.\qquad(3.7)$$
By (3.6), the zeros of ζ(1/2+it) are exactly the points where
$$\arg\Big(h(1-s)\big\{(f'(s)+f'(1-s))\zeta(1-s)+\zeta'(1-s)\big\}\Big) \equiv \pi/2 \pmod\pi$$
on σ = 1/2, or equivalently
$$\arg\Big(h(s)\big\{(f'(s)+f'(1-s))\zeta(s)+\zeta'(s)\big\}\Big) \equiv \pi/2 \pmod\pi\qquad(3.8)$$
on σ = 1/2. Write L(s) = f′(s) + f′(1−s), and denote
$$G(s) = \zeta(s) + \zeta'(s)/L(s).\qquad(3.9)$$
The investigation above means that
$$N_0(T) = \frac1\pi\,\Delta_{0\le t\le T}\arg\big(hG(1/2+it)\big).$$
By (3.2) it can be seen that
$$\Delta_{0\le t\le T}\arg h(1/2+it) = \frac{T}{2}\log\frac{T}{2\pi} - \frac{T}{2} + O(\log T).\qquad(3.11)$$
So the main task in determining N₀(T) is to calculate the variation of arg G(1/2+it) over 0 ≤ t ≤ T.
Let L = log(T/2π), U ≤ T, and let D be the rectangle with vertices 1/2 + iT, c + iT, c + i(T+U), 1/2 + i(T+U) (c ≥ 3). First of all, we may assume that there are no zeros of G(s) on the boundary of D; then, by the argument principle, the change of arg G(s) around D is equal to 2π times N_G(D), the number of zeros of G(s) in D.
On the right side of D,
$$|G(c+it) - 1| \le \sum_{n\ge2} n^{-c} + O(1/L) \le 1/3,$$
so arg G(s) changes by less than π there. On the lower and upper sides of D, by a known result [9, §9.4], an extension of Jensen's theorem, taking account of the order of G(s), we know that arg G(s) = O(L) for 0 < σ ≤ 3 and arg G(σ+it) = O(2^{-σ}) for σ ≥ 3; hence, for any 0 ≤ b ≤ c,
$$\int_b^c \arg G(\sigma+iT)\,d\sigma,\quad \int_b^c\arg G(\sigma+i(T+U))\,d\sigma = O(L).\qquad(3.12)$$
So,
$$\Delta_{T\le t\le T+U}\arg G(1/2+it) = 2\pi N_G(D) + O(\log T).$$
Now the work turns to evaluating N_G(D).
Let 1/2 − a = O(1/L), and let C be the rectangle with vertices a + iT, c + iT, c + i(T+U), a + i(T+U). Taking the integral $\oint_C \log G(s)\,ds$ and applying Littlewood's lemma [9, §9.9], we have
$$\int_T^{T+U}\log|G(a+it)|\,dt - \int_T^{T+U}\log|G(c+it)|\,dt + \int_a^c\arg G(\sigma+i(T+U))\,d\sigma - \int_a^c\arg G(\sigma+iT)\,d\sigma = 2\pi\,\mathrm{dist},\qquad(3.14)$$
where dist is the sum of the distances of the zeros of G(s) in C from the left side of C.
By (3.9) it is easy to see that
$$\int_T^{T+U}\log G(c+it)\,dt = \int_T^{T+U}\log\zeta(c+it)\,dt + O(1/L),$$
and it is familiar that
$$\log\zeta(s) = \sum_{n\ge2}\frac{\Lambda(n)}{n^{s}\log n},$$
so
$$\int_T^{T+U}\log|G(c+it)|\,dt \ll 1.$$
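The Dirichlet-series expansion of log ζ(s) is easy to verify numerically for a fixed real s > 1. The sketch below checks it at s = 2; Λ is computed by trial division, and the truncation point 5000 is an arbitrary choice of ours.

```python
import math

def mangoldt(n):
    # von Mangoldt function: Lambda(n) = log p if n = p^k, else 0.
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return math.log(n)  # n itself is prime

# log zeta(2) = sum_{n>=2} Lambda(n) / (n^2 log n), truncated at n = 5000:
partial = sum(mangoldt(n) / (n * n * math.log(n)) for n in range(2, 5001))
print(partial, math.log(math.pi ** 2 / 6))  # agree to ~4 decimal places
```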
With (3.12), the rest is to calculate the first integral in (3.14). By the concavity of the logarithm,
$$\int_T^{T+U}\log|G(a+it)|\,dt = \frac12\int_T^{T+U}\log|G(a+it)|^2\,dt \le \frac{U}{2}\log\Big(\frac1U\int_T^{T+U}|G(a+it)|^2\,dt\Big).\qquad(3.15)$$
At first, we simplify G(s) to
$$G_0(s) = \zeta(s) + \frac{\zeta'(s)}{L}.$$
Then
$$G(s) = G_0(s) + E(s), \qquad E(s) = \Big(\frac{1}{L(s)} - \frac1L\Big)\zeta'(s) \ll \frac{|\zeta'(s)|}{L^{3}}.$$
And
$$\int_{T_1}^{T_2}|G(a+it)|^2dt = \int_{T_1}^{T_2}|G_0(a+it)|^2dt + 2\operatorname{Re}\int_{T_1}^{T_2}G_0(a+it)\overline{E(a+it)}\,dt + \int_{T_1}^{T_2}|E(a+it)|^2dt.\qquad(3.16)$$
By Cauchy's inequality,
$$\int_{T_1}^{T_2}G_0(a+it)\overline{E(a+it)}\,dt \ll \Big(\int_{T_1}^{T_2}|G_0(a+it)|^2dt\int_{T_1}^{T_2}|E(a+it)|^2dt\Big)^{1/2}.$$
The third integral on the right side of (3.16) is much smaller than the first one, which will actually be calculated later; hence
$$\int_{T_1}^{T_2}|G(a+it)|^2dt = (1+o(1))\int_{T_1}^{T_2}|G_0(a+it)|^2dt.$$
Let
$$\omega_1(s) = \frac{1}{2\pi i}\int_{|u-t|\le\Delta} g(s_1-s)\,ds_1.$$
From Lemma 2.1 it is known that ω₁(s) is the dominant part of ω(s, T₁, T₂), and that it is a positive real number apart from a small error term.
Moreover, let
$$\phi(s) = \omega_1^{1/2}(s).$$
Then
$$\arg\phi(v+T_1i) = o(1), \qquad \arg\phi(v+T_2i) = o(1),$$
and
$$\int_a^c\arg\phi(v+T_1i)\,dv = o(c), \qquad \int_a^c\arg\phi(v+T_2i)\,dv = o(c).\qquad(3.19)$$
Moreover, by Lemma 2.1,
$$\int_{T_1}^{T_2}\log|\phi(c+it)|\,dt \ll T^{-b}.\qquad(3.20)$$
(3.19) and (3.20) indicate that the function φ(s) may be used as a mollifier.
Let
$$G(s) = G_0(s)\phi(s).$$
In the following we replace G(s) by this mollified function, and let
$$I = \int_{T_1}^{T_2}|G(a+it)|^2\,dt.$$
Then
$$I = \int_{T_1}^{T_2}|G_0(a+it)|^2\,\omega_1(s)\,dt = \frac{1}{2\pi}\int_{T_1}^{T_2}\int_{t-\Delta}^{t+\Delta}g(s_1-s)\,du\,|G_0(a+it)|^2\,dt = \frac{1}{2\pi}\int_{T_1}^{T_2}\int_{u-\Delta}^{u+\Delta}g(s_1-s)\,|G_0(a+it)|^2\,dt\,du + R_1 + R_2 - R_3 - R_4,$$
where
$$R_1 = \frac{1}{2\pi}\int_{T_1}^{T_1+\Delta}\int_{t-\Delta}^{T_1}g(s_1-s)\,du\,|G_0(a+it)|^2\,dt, \qquad R_2 = \frac{1}{2\pi}\int_{T_2-\Delta}^{T_2}\int_{T_2}^{t+\Delta}g(s_1-s)\,du\,|G_0(a+it)|^2\,dt,$$
$$R_3 = \frac{1}{2\pi}\int_{T_1-\Delta}^{T_1}\int_{T_1}^{t+\Delta}g(s_1-s)\,du\,|G_0(a+it)|^2\,dt, \qquad R_4 = \frac{1}{2\pi}\int_{T_2}^{T_2+\Delta}\int_{t-\Delta}^{T_2}g(s_1-s)\,du\,|G_0(a+it)|^2\,dt.$$
By the mean-value theorems (cf. [9, Ch. 7]),
$$R_1 \ll \int_{T_1}^{T_1+\Delta}|G_0(a+it)|^2\,dt \ll \Delta L,$$
and similarly,
$$R_i \ll \Delta L, \quad i = 2, 3, 4.$$
Then, by Lemma 2.2,
$$I = \frac{1}{2\pi i}\int_{T_1}^{T_2}\int_{b+(u-\Delta)i}^{b+(u+\Delta)i}g(s_1-s)G_0(s)G_0(2a-s)\,ds\,du + O(\Delta L).$$
Moreover, by the functional equation of ζ(s),
$$\zeta(2a-(b+it)) = \chi(2a-(b+it))\,\zeta(1-2a+b+it),$$
$$\zeta'(2a-(b+it)) = -\chi(2a-(b+it))\big(L\,\zeta(1-2a+b+it) + \zeta'(1-2a+b+it)\big),$$
and hence
$$G_0(2a-(b+it)) = -\chi(2a-(b+it))\,\frac{\zeta'(1-2a+b+it)}{L}\,;$$
it is also easy to see that
$$\chi(2a-(b+it)) = \Big(\frac{t}{2\pi}\Big)^{1/2+b-2a}\exp\Big(-\frac{\pi i}{4} + it\log\frac{t}{2\pi e}\Big)\big(1+O(1/t)\big).$$
Hence
$$I = I_1 + I_2 + O(\Delta L),$$
where
$$I_1 = -\frac{1}{2\pi}\int_{T_1}^{T_2}\int_{u-\Delta}^{u+\Delta}g(s_1-s)\,\chi(2a-(b+it))\,\zeta(b+it)\,\frac{\zeta'(1-2a+b+it)}{L}\,dt\,du,$$
$$I_2 = -\frac{1}{2\pi}\int_{T_1}^{T_2}\int_{u-\Delta}^{u+\Delta}g(s_1-s)\,\chi(2a-(b+it))\,\frac{\zeta'(b+it)}{L}\,\frac{\zeta'(1-2a+b+it)}{L}\,dt\,du.$$
Expanding ζ(s) and ζ′(s) as Dirichlet series, and dividing I₁ and I₂ into three parts each according to the size of 2πxy,
$$I_1 = I_{1,1} + I_{1,2} + I_{1,3} = \sum_{2\pi xy < T_1-\Delta} + \sum_{T_1-\Delta\le 2\pi xy\le T_2+\Delta} + \sum_{2\pi xy > T_2+\Delta},$$
$$I_2 = I_{2,1} + I_{2,2} + I_{2,3} = \sum_{2\pi xy < T_1-\Delta} + \sum_{T_1-\Delta\le 2\pi xy\le T_2+\Delta} + \sum_{2\pi xy > T_2+\Delta}.$$
Then, by Lemmas 2.1–2.3,
$$I_{1,1},\ I_{1,3},\ I_{2,1},\ I_{2,3} = o(1),$$
and
$$I_{1,2} = \frac{2\pi}{L}\sum_{T_1-\Delta\le 2\pi xy\le T_2+\Delta}\frac{(xy)^{b'}\log y}{x^{b}y^{b'}}, \qquad I_{2,2} = -\frac{2\pi}{L^2}\sum_{T_1-\Delta\le 2\pi xy\le T_2+\Delta}\frac{(xy)^{b'}\log x\log y}{x^{b}y^{b'}},$$
where b′ = 1 + b − 2a (> 1); i.e.,
$$I_{1,2} = \frac{2\pi}{L}\sum_{T_1-\Delta\le 2\pi xy\le T_2+\Delta}x^{1-2a}\log y, \qquad I_{2,2} = -\frac{2\pi}{L^2}\sum_{T_1-\Delta\le 2\pi xy\le T_2+\Delta}x^{1-2a}\log x\log y.$$
Let
$$H(n) = \frac1L\sum_{xy\le n}x^{1-2a}\log y - \frac1{L^2}\sum_{xy\le n}x^{1-2a}\log x\log y.$$
Replacing the sums by the corresponding integrals, it is easy to deduce that, with x = (1−2a)L and log n ≈ L,
$$H(n) \approx \frac{2nL}{x^3}\Big(e^{x}-1-x-\frac{x^2}{2}\Big) \approx \frac{nL}{3}.\qquad(3.23)$$
Obviously, estimate (3.23) is not sufficient for the proof of Theorem 1.1. Nevertheless, there is an alternative way to improve it. Indeed, the argument of Levinson [6] can be extended by differentiating the functional equation of ζ(s) to a higher order k, obtaining in the same way functions G(s, k), k = 1, 2, ..., together with results similar to (3.23), but sharper.
For example, for k = 1,
$$G(s,1) = \zeta(s) + 2\zeta'(s)/L,$$
and
$$H(n,1) = \frac{nL}{x^3}\{a_1(x)e^{x} - b_1(x)\} \approx \frac{nL}{3},$$
where
$$a_1(x) = 2, \qquad b_1(x) = 2 + 2x + x^2.$$
For k = 2,
$$G(s,2) = \zeta(s) + 4\zeta'(s)/L + 4\zeta''(s)/L^2,$$
and
$$H(n,2) \approx \frac{nL}{x^5}\{a_2(x)e^{x} - b_2(x)\} \approx \frac{nL}{5},$$
where
$$a_2(x) = 384 - 192x + 48x^2 - 8x^3 + x^4, \qquad b_2(x) = 384 + 192x + 48x^2 + 8x^3 + x^4.$$
And for k = 3,
$$G(s,3) = \zeta(s) + 6\zeta'(s)/L + 12\zeta''(s)/L^2 + 8\zeta'''(s)/L^3,$$
and
$$H(n,3) \approx \frac{nL}{x^7}\{a_3(x)e^{x} - b_3(x)\} \approx \frac{nL}{7},$$
where
$$a_3(x) = 46080 - 23040x + 5760x^2 - 960x^3 + 120x^4 - 12x^5 + x^6,$$
$$b_3(x) = 46080 + 23040x + 5760x^2 + 960x^3 + 120x^4 + 12x^5 + x^6.$$
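The polynomial pairs (a_k, b_k) listed above can be checked exactly: in each case a_k(x)e^x − b_k(x) vanishes to order 2k + 1 and has leading coefficient 1/(2k+1), which is precisely the source of the factors nL/3, nL/5, nL/7. A sketch with exact rational arithmetic (helper names are ours):

```python
from fractions import Fraction
from math import factorial

def coeff_a_exp_minus_b(a, b, m):
    # Exact coefficient of x^m in a(x)*e^x - b(x), where a, b are integer
    # coefficient lists, lowest degree first.
    c = sum(Fraction(a[j], factorial(m - j)) for j in range(min(len(a), m + 1)))
    if m < len(b):
        c -= b[m]
    return c

# The pairs (a_k, b_k) for k = 1, 2, 3 as in the text:
pairs = {
    1: ([2], [2, 2, 1]),
    2: ([384, -192, 48, -8, 1], [384, 192, 48, 8, 1]),
    3: ([46080, -23040, 5760, -960, 120, -12, 1],
        [46080, 23040, 5760, 960, 120, 12, 1]),
}
for k, (a, b) in pairs.items():
    # a_k(x) e^x - b_k(x) = x^(2k+1)/(2k+1) + O(x^(2k+2)).
    assert all(coeff_a_exp_minus_b(a, b, m) == 0 for m in range(2 * k + 1))
    assert coeff_a_exp_minus_b(a, b, 2 * k + 1) == Fraction(1, 2 * k + 1)
print("a_k(x)e^x - b_k(x) = x^(2k+1)/(2k+1) + O(x^(2k+2)) for k = 1, 2, 3")
```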
So it is to be expected that for k ≈ L/2 there will be
$$H(n,k) \ll n.$$
A rough proof of this is added in the appendix.
So
$$I = U + O(T^{\epsilon}),$$
where ε is an arbitrarily small positive number, and
$$\int_T^{T+U}|G(a+it)|^2\,dt = U + O(T^{\epsilon}).$$
Let U = T; by (3.15) it follows that
$$\int_T^{T+U}\log|G(a+it)|\,dt \le \frac{T}{2}\log\Big(1+\frac{T^{\epsilon}}{T}\Big) \ll T^{\epsilon}.\qquad(3.27)$$
With (3.12), (3.14), (3.19), (3.20) and (3.27), and (1−2a)L = 1, it follows that
$$2\pi N_G(D) \ll \frac{T^{\epsilon} + O(\Delta L)}{1/2-a} \ll T^{1/2}L^3,$$
i.e.,
$$\Delta_{T\le t\le 2T}\arg G(1/2+it) = O(T^{1/2}L^3),$$
and
$$\big(N(2T)-N(T)\big) - \big(N_0(2T)-N_0(T)\big) = O(T^{1/2}L^3).$$
Then, replacing T by T/2^k, 1 ≤ k ≤ log₂T, and summing, this proves Theorem 1.1 in the case that there are no zeros of G(s) on the boundary of D.
For the remaining case, let N₁ and N₂ be the numbers of zeros of G(s) on the left side of D (σ = 1/2) and in D with σ > 1/2, respectively. Indent the left side of D with small semicircles centered at the zeros and lying in σ ≥ 1/2, and let N₁′ be the number of distinct zeros among the N₁ zeros. Let V_j be the variation of arg G over the j-th interval between successive semicircles. Then, by the argument principle,
$$\sum_j V_j - \pi N_1 = 2\pi N_2 + O(L).\qquad(3.28)$$
Let W_j be the variation of the argument of
$$h(s)\big(f'(s)+f'(1-s)\big)G(s)$$
over the j-th interval, where W_j is taken for increasing t while V_j is taken for decreasing t. With (3.2) and (3.28),
$$\sum_j W_j = \operatorname{Im}(f)\Big|_T^{T+U} - \sum_j V_j = \operatorname{Im}(f)\Big|_T^{T+U} - (2\pi N_2 + \pi N_1) + O(L).$$
By (3.8), in the j-th open interval the number of zeros of ζ(1/2+it) is at least
$$(W_j/\pi) - 1,$$
and in all the open intervals the number of zeros is at least
$$\frac1\pi\sum_j W_j - N_1' - 1 = \frac1\pi\operatorname{Im}(f)\Big|_T^{T+U} - (2N_2+N_1) - N_1' - 1 + O(L) = \frac1\pi\operatorname{Im}(f)\Big|_T^{T+U} - 2N_G(D) + N_1 - N_1' + O(L).\qquad(3.30)$$
Moreover, by (3.7) we can see that on the side σ = 1/2 a zero of G(s) is also a zero of ζ′(s), and so corresponds to a zero of ζ(s) with multiplicity one greater; thus there are N₁ + N₁′ such zeros of ζ(1/2+it). Adding these to (3.30), in total,
$$N_0(T+U) - N_0(T) \ge \frac1\pi\operatorname{Im}(f)\Big|_T^{T+U} - 2N_G(D) + 2N_1 + O(L).$$
By (3.11), we know that
$$\frac1\pi\operatorname{Im}(f)\Big|_T^{T+U} = N(T+U) - N(T) + O(L),$$
i.e.,
$$\big(N(T+U)-N(T)\big) - \big(N_0(T+U)-N_0(T)\big) = O(T^{1/2}L^3).$$
Besides, we know that on the critical line a zero of G(s) is also a zero of ζ′(s), and so comes from a zero of ζ(s) with multiplicity one greater. Hence
$$\sum(m-1) \le N_G(D),$$
where the sum is over the distinct zeros of ζ(s) on the left side of D, and m is the multiplicity of a zero.
And so
$$\sum_{m\ge2} m \le 2\sum_{m\ge2}(m-1) \le 2N_G(D) = O(T^{1/2}L^3).$$
This means that the non-trivial zeros of ζ(s) all lie on the critical line and are all simple, with at most O(T^{1/2}L^3) exceptions.

Appendix A. Some Phased Results

We shall use the following two formulas. The first is
$$\int_1^n\frac{\log^r y}{y^{2-2a}}\,dy = \frac{r!}{(1-2a)^{r+1}} - n^{-(1-2a)}\sum_{0\le d\le r}\frac{[r,d]}{(1-2a)^{d+1}}\log^{r-d}n,\qquad(1)$$
where [r, d] = r(r−1)⋯(r−d+1) and [r, 0] = 1; it follows easily by integration by parts.
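Formula (1) is an exact identity and is easy to confirm numerically. The sketch below compares Simpson quadrature of the left side with the closed form; the sample values r = 3, a = 0.3, n = 50 and all helper names are arbitrary choices of ours.

```python
import math

def lhs(n, r, a, steps=20000):
    # Simpson's rule for integral_1^n (log y)^r / y^(2-2a) dy.
    h = (n - 1.0) / steps
    f = lambda y: math.log(y) ** r / y ** (2 - 2 * a)
    s = f(1.0) + f(float(n))
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(1.0 + k * h)
    return s * h / 3.0

def falling(r, d):
    # [r, d] = r (r-1) ... (r-d+1), with [r, 0] = 1.
    out = 1
    for i in range(d):
        out *= r - i
    return out

def rhs(n, r, a):
    # Closed form given by formula (1).
    c = 1 - 2 * a
    tail = sum(falling(r, d) / c ** (d + 1) * math.log(n) ** (r - d)
               for d in range(r + 1))
    return math.factorial(r) / c ** (r + 1) - n ** (-c) * tail

print(lhs(50, 3, 0.3), rhs(50, 3, 0.3))  # the two values agree closely
```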
The second is the following: let
$$\tau_{r,l} = \int_0^1\int_0^{x_l}\cdots\int_0^{x_1}(1-2x)^r\,dx\,dx_1\cdots dx_l\,;$$
then
$$\tau_{r,l} = \sum_{1\le d\le l}\frac{(-1)^{d-1}}{2^d\,[r+d,d]\,(l+1-d)!} + \frac{(-1)^l}{2^{l+1}\,[r+l+1,l+1]}\big(1+(-1)^{r+l}\big).\qquad(2)$$
By differentiating the functional equation of ζ(s) successively, as in [6], one obtains the functions similar to G(s):
$$G(s,k) = \sum_i C_k^i(2/L)^i\zeta^{(i)}(s) = \Big(1+\frac{2}{L}\frac{d}{ds}\Big)^k\zeta(s), \quad (k\ge1).\qquad(3)$$
By induction it is easy to deduce that
$$\zeta^{(k)}(s) = (-1)^k\chi(s)\sum_{0\le v\le k}C_k^v\,L^{k-v}\,\zeta^{(v)}(1-s).\qquad(4)$$
With (3), (4) and the functional equation, we have
$$G(s,m) = (-1)^m\chi(s)\sum_{0\le i\le m}C_m^i(2/L)^i\,\zeta^{(i)}(1-s).$$
Lemma 1. For 0 ≤ d ≤ m and l ≥ 0, denote
$$a_{d,l} = \sum_{0\le j,k\le m}C_m^kC_m^j\,2^{k+j}(-1)^{k+j}\sum_{0\le i\le k-d}\frac{(-1)^i\,C_{k-d}^i\,[k,d]}{[j+i+l+1,\,l+1]}.$$
Then
$$a_{d,l} = (-1)^m\,2^d\,[m,d]\,\tau_{2m-d,\,l}.$$
Proof.
Let
$$g_v(x) = (-1)^v A^{(v)}(x)B(x), \quad (v = 0, 1, \ldots),$$
where
$$A(x) = (1-2(1-x))^m, \qquad B(x) = (1-2x)^m.$$
Expanding the binomials A(x) and B(x), we get
$$g_d(x) = \sum_{0\le j,k\le m}C_m^kC_m^j\,2^{k+j}(-1)^{k+j}\sum_{0\le i\le k-d}(-1)^i\,C_{k-d}^i\,[k,d]\,x^{i+j}$$
and
$$a_{d,l} = \int_0^1\int_0^{x_l}\cdots\int_0^{x_1}g_d(x)\,dx\,dx_1\cdots dx_l.$$
On the other hand, it is clear that
$$g_d(x) = (-1)^d\,2^d\,[m,d]\,(2x-1)^{m-d}(1-2x)^m = (-1)^m\,2^d\,[m,d]\,(1-2x)^{2m-d}.$$
Hence
$$a_{d,l} = (-1)^m\,2^d\,[m,d]\int_0^1\int_0^{x_l}\cdots\int_0^{x_1}(1-2x)^{2m-d}\,dx\,dx_1\cdots dx_l = (-1)^m\,2^d\,[m,d]\,\tau_{2m-d,\,l}.$$
Lemma 2. Let
$$\Phi_d = (-1)^m\frac{n^{2-2a}}{(2-2a)^{d+1}}\sum_{0\le j,k\le m}C_m^kC_m^j(2/L)^{j+k}(-1)^{j+k}[k,d]\int_1^n\frac{\log^j y\,\log^{k-d}(n/y)}{y^{2-2a}}\,dy.$$
Then
$$\Phi_d \approx \frac{2^d\,[m,d]\,nL}{(2-2a)^{d+1}(2m-d+1)L^{d}}\cdot\frac{e^{x}+(-1)^d}{2}.\qquad(7)$$
Proof.
With formula (1), we have
$$\Phi_d = (-1)^m\frac{n}{(2-2a)^{d+1}}\sum_{0\le j,k\le m}C_m^kC_m^j(2/L)^{j+k}(-1)^{j+k}[k,d]\,(U+V),$$
where
$$U = n^{1-2a}\sum_{0\le i\le k-d}(-1)^i\,C_{k-d}^i\,\frac{(j+i)!}{(1-2a)^{j+i+1}}\log^{k-d-i}n, \qquad V = -\sum_{0\le i\le k-d}(-1)^i\,C_{k-d}^i\sum_{0\le v\le j+i}\frac{[j+i,v]}{(1-2a)^{v+1}}\log^{j+k-d-v}n.$$
Let x = (1−2a)L; then
$$\Phi_d = (-1)^m\frac{nL}{(2-2a)^{d+1}L^{d}}\sum_{0\le j,k\le m}C_m^kC_m^j\,2^{j+k}(-1)^{j+k}[k,d]\sum_{0\le i\le k-d}(-1)^i\,C_{k-d}^i\Big(e^{x}\frac{(j+i)!}{x^{j+i+1}} - \sum_{0\le v\le j+i}\frac{[j+i,v]}{x^{v+1}}\Big)$$
$$= (-1)^m\frac{nL}{(2-2a)^{d+1}L^{d}}\sum_{0\le j,k\le m}C_m^kC_m^j\,2^{j+k}(-1)^{j+k}[k,d]\sum_{0\le i\le k-d}(-1)^i\,C_{k-d}^i\sum_{r>j+i}\frac{x^{r-j-i-1}}{[r,\,r-(j+i)]}$$
$$= (-1)^m\frac{nL}{(2-2a)^{d+1}L^{d}}\sum_{l\ge0}a_{d,l}\,x^{l}, \quad (l = r-(i+j+1)),$$
$$\approx \frac{nL}{(2-2a)^{d+1}L^{d}}\cdot\frac{2^d\,[m,d]}{2m+1-d}\cdot\frac{e^{x}+(-1)^d}{2}.$$
Now let
$$I(m) = \int_{T_1}^{T_2}\omega_1(s)\,G(s,m)\,G(2a-s,m)\,dt.$$
Then, by Lemmas 2.1–2.3 and formulas (2), (3) and (4),
$$I(m) = H(m) + O(\Delta L),$$
where
$$H(m) = (-1)^m\,2\pi\sum_{0\le j,k\le m}C_m^kC_m^j(2/L)^{k+j}(-1)^{k+j}\sum_{T_1-\Delta\le 2\pi xy\le T_2+\Delta}\frac{(xy)^{b'}\log^k x\log^j y}{x^{b}y^{b'}}.$$
Correspondingly, define
$$H(n,m) = (-1)^m\sum_{0\le j,k\le m}C_m^kC_m^j(2/L)^{k+j}(-1)^{k+j}\sum_{xy\le n}\frac{(xy)^{b'}\log^k x\log^j y}{x^{b}y^{b'}}.$$
Then
$$H(n,m) = (-1)^m\sum_{0\le j,k\le m}C_m^kC_m^j(2/L)^{k+j}(-1)^{k+j}\sum_{1\le y\le n}\log^j y\sum_{1\le x\le n/y}x^{1-2a}\log^k x$$
$$\approx (-1)^m\sum_{0\le j,k\le m}C_m^kC_m^j(2/L)^{k+j}(-1)^{k+j}\int_1^n\log^j y\int_1^{n/y}x^{1-2a}\log^k x\,dx\,dy = \sum_{0\le d\le m}(-1)^d\,\Phi_d,$$
with Φ_d as defined in Lemma 2.
It is easy to see that Φ₀ is the dominant term of H(n, m) only when m = o(L).
Because of the factor (e^x + (−1)^d)/2 in the expression (7), the difference between the adjacent terms Φ_d and Φ_{d+1} is relatively large if x is a small constant. An easy way to mend the difference is to take x and m appropriately larger, say x = O(log L) and m = (1+δ)L/2; however, this increases the value of Φ_d significantly and makes the values of H(n, m) fluctuate greatly. Frankly speaking, we do not have a definite solution yet, only some concrete examples computed by PC. In view of the present situation, we leave the main text and Theorem 1.1 unchanged, and hope to make some progress later.

References

  1. Balasubramanian, R.; Conrey, J.B.; Heath-Brown, D.R. Asymptotic mean square of the product of the Riemann zeta-function and a Dirichlet polynomial. J. Reine Angew. Math. 1985, 357, 161–181.
  2. Conrey, J.B. More than two fifths of the zeros of the Riemann zeta function are on the critical line. J. Reine Angew. Math. 1989, 399, 1–26.
  3. Conrey, J.B. The Riemann Hypothesis. Notices Amer. Math. Soc. 2003, 50(3), 341–353.
  4. Feng, S. Zeros of the Riemann zeta function on the critical line. J. Number Theory 2012, 132(4), 511–542.
  5. Hardy, G.H.; Littlewood, J.E. Contributions to the theory of the Riemann zeta-function and the theory of the distribution of primes. Acta Math. 1918, 41, 119–196.
  6. Levinson, N. More than one third of zeros of Riemann's zeta-function are on σ = 1/2. Advances in Math. 1974, 13, 383–436.
  7. Li, A.P. A note on the mean square of Riemann zeta-function. Preprints.org id 155915.
  8. Selberg, A. On the zeros of Riemann's zeta-function. Skr. Norske Vid. Akad. Oslo I 1942, 10, 1–59.
  9. Titchmarsh, E.C. The Theory of the Riemann Zeta-Function, 2nd ed.; Oxford University Press: Oxford, 1986.