Preprint Article (this version is not peer-reviewed)

On the Convergence Order of Jarratt-Type Methods for Nonlinear Equations

A peer-reviewed article of this preprint also exists.

Submitted: 16 April 2025. Posted: 16 April 2025.

Abstract
Abstract
The convergence order of Jarratt-type methods for solving nonlinear equations is obtained without using Taylor expansion. In contrast to earlier studies, we use assumptions on the derivatives of the involved operator up to the second order only. Since the proof provided in this paper does not depend on the Taylor series expansion, assumptions on higher-order derivatives of the involved operator are avoided, which increases the applicability of these methods. The applicability is further extended using the concept of generalized conditions in the local convergence analysis and majorizing sequences in the semi-local analysis. Numerical examples and basins of attraction of the methods are also provided in this study.

1. Introduction

Several real-world problems can be mathematically modelled as an equation of the form
$$A(v) = 0, \qquad (1)$$
where $A : E \subseteq X \to Y$ is a nonlinear operator between the Banach spaces $X$ and $Y$, and $E$ is an open convex subset of $X$. Determining a solution $v^*$ of (1) is one of the most challenging problems arising in applications. Since exact solutions of such nonlinear equations are rarely available, iterative methods are an attractive technique for approximating them. Newton's method is one of the most extensively used quadratically convergent iterative methods, as it converges rapidly from any sufficiently good initial guess. Although it provides a good convergence rate, the need to compute and invert the derivative of the operator at each iterative step limits its applicability. To overcome this, several Newton-like methods are available in the literature [1,3,5,13,19]. One such successful attempt was made by Ren et al. in [18], providing an iterative method (see (2)) of order six. Recall [10] that a sequence $\{a_n\}$ in $X$ with $\lim_{n\to\infty} a_n = \alpha$ is said to converge with order $q > 1$ if there exists a nonzero constant $C$ such that
$$\lim_{n\to\infty}\frac{\|a_{n+1}-\alpha\|}{\|a_n-\alpha\|^{q}} = C.$$
Previous studies primarily used Taylor expansion to determine the order of convergence (OC), which necessitates the existence of higher-order derivatives. An alternative is the computational order of convergence (COC) [23], defined as
$$\bar\rho = \frac{\ln\bigl(\|a_{n+1}-\alpha\|/\|a_n-\alpha\|\bigr)}{\ln\bigl(\|a_n-\alpha\|/\|a_{n-1}-\alpha\|\bigr)},$$
where $a_{n-1}, a_n, a_{n+1}$ are three consecutive iterates near the root $\alpha$, or the approximate computational order of convergence (ACOC), defined as
$$\bar\rho_* = \frac{\ln\bigl(\|a_{n+1}-a_n\|/\|a_n-a_{n-1}\|\bigr)}{\ln\bigl(\|a_n-a_{n-1}\|/\|a_{n-1}-a_{n-2}\|\bigr)},$$
where $a_{n-2}, a_{n-1}, a_n, a_{n+1}$ are four consecutive iterates near the root $\alpha$.
The limitation of COC and ACOC for iterative methods lies in their susceptibility to the oscillating behavior of approximations and slow convergence during early iterations [17]. As a result, COC and ACOC do not accurately reflect the true OC.
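For reference, both estimates can be computed directly from iterates. The sketch below (an illustration, not taken from the paper) applies the COC and ACOC formulas to Newton iterates for the scalar equation $v^3 - 2 = 0$; for Newton's method both estimates come out close to 2.

```python
import math

def f(x):  return x**3 - 2.0           # sample scalar equation
def df(x): return 3.0 * x**2

# Generate Newton iterates from a reasonable initial guess.
x, its = 2.0, [2.0]
for _ in range(5):
    x = x - f(x) / df(x)
    its.append(x)

alpha = 2.0 ** (1.0 / 3.0)             # exact root (needed for COC only)

def coc(a, alpha):
    """COC from the last three iterates and the known root."""
    e = [abs(v - alpha) for v in a[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

def acoc(a):
    """ACOC from the last four iterates; no knowledge of the root needed."""
    d = [abs(a[i + 1] - a[i]) for i in range(len(a) - 4, len(a) - 1)]
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])

print(coc(its, alpha), acoc(its))      # both close to 2 for Newton's method
```

As the text notes, both quantities can oscillate in early iterations, which is why they are computed here from the last few iterates only.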
In [18], Ren et al. considered the following iterative scheme, defined for $n = 0,1,2,\dots$ by
$$\begin{aligned}
w_n &= v_n - \tfrac{2}{3}A'(v_n)^{-1}A(v_n),\\
z_n &= v_n - \tfrac{1}{2}\bigl[3A'(w_n)-A'(v_n)\bigr]^{-1}\bigl[3A'(w_n)+A'(v_n)\bigr]A'(v_n)^{-1}A(v_n), \qquad (2)\\
v_{n+1} &= z_n - A'(z_n)^{-1}A(z_n),
\end{aligned}$$
for solving (1) when $X = Y = \mathbb{R}^k$.
In [18], Taylor expansion is used to establish sixth-order convergence, but the analysis requires conditions on the derivatives of $A$ up to the seventh order. These assumptions restrict the applicability of method (2) to operators that are at least seven times differentiable.
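Computationally, however, scheme (2) only ever evaluates $A$ and $A'$; no higher derivative appears in the iteration itself. The sketch below (an illustrative $2\times 2$ system, not an example from the paper) runs three iterations of (2) using a hand-rolled $2\times 2$ linear solver:

```python
def solve2(M, b):
    """Solve a 2x2 linear system M x = b by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def A(v):                      # sample system: x^2 + y^2 = 4, x*y = 1
    x, y = v
    return [x * x + y * y - 4.0, x * y - 1.0]

def Ap(v):                     # Jacobian A'(v)
    x, y = v
    return [[2.0 * x, 2.0 * y], [y, x]]

def step(v):
    """One iteration of scheme (2): only A and A' are evaluated."""
    Fv, Jv = A(v), Ap(v)
    s = solve2(Jv, Fv)                                   # A'(v)^{-1} A(v)
    w = [v[i] - 2.0 / 3.0 * s[i] for i in range(2)]
    Jw = Ap(w)
    M3 = [[3.0 * Jw[i][j] - Jv[i][j] for j in range(2)] for i in range(2)]
    Ns = [sum((3.0 * Jw[i][j] + Jv[i][j]) * s[j] for j in range(2))
          for i in range(2)]
    u = solve2(M3, Ns)
    z = [v[i] - 0.5 * u[i] for i in range(2)]
    t = solve2(Ap(z), A(z))
    return [z[i] - t[i] for i in range(2)]

v = [2.0, 1.0]
for _ in range(3):
    v = step(v)
# v now approximates a solution of x^2 + y^2 = 4, x*y = 1
```

The seventh-order differentiability enters only through the Taylor-based convergence proof of [18], which is exactly the restriction the present paper removes.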
In this article, we first determine the OC of the method (refer to [9,13]) defined for all $n = 0,1,2,\dots$ by
$$\begin{aligned}
w_n &= v_n - \tfrac{2}{3}A'(v_n)^{-1}A(v_n),\\
v_{n+1} &= v_n - \tfrac{1}{2}\bigl[3A'(w_n)-A'(v_n)\bigr]^{-1}\bigl[3A'(w_n)+A'(v_n)\bigr]A'(v_n)^{-1}A(v_n), \qquad (3)
\end{aligned}$$
where $X$ and $Y$ are Banach spaces. The local convergence of certain Jarratt-type methods was analyzed in [2] relying solely on assumptions about the first derivative of $A$. However, the OC was determined there using COC and ACOC, which, as noted above, are not ideal for calculating the convergence order. This raises the question: can we establish third-order convergence for (3) and sixth-order convergence for (2) without assumptions on the higher-order derivatives of $A$ and without using Taylor expansion?
Additionally, we enhance the method to a fifth-order scheme, given for $n = 0,1,2,\dots$ by
$$\begin{aligned}
w_n &= v_n - \tfrac{2}{3}A'(v_n)^{-1}A(v_n),\\
z_n &= v_n - \tfrac{1}{2}\bigl[3A'(w_n)-A'(v_n)\bigr]^{-1}\bigl[3A'(w_n)+A'(v_n)\bigr]A'(v_n)^{-1}A(v_n), \qquad (4)\\
v_{n+1} &= z_n - A'\Bigl(\frac{3w_n-v_n}{2}\Bigr)^{-1}A(z_n),
\end{aligned}$$
which is studied in Section 4.
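A scalar sketch of one step of (4) follows (an illustration under assumed scalar data, not the paper's example). Note the algebraic identity $(3w_n - v_n)/2 = v_n - A'(v_n)^{-1}A(v_n)$, which follows from the first substep: the final Jacobian in (4) is evaluated at the Newton point already implicit in the iteration.

```python
def method4_step(A, dA, v):
    """One step of scheme (4) in the scalar case; A is the function,
    dA its derivative."""
    w = v - (2.0 / 3.0) * A(v) / dA(v)
    m = 3.0 * dA(w) - dA(v)
    z = v - 0.5 * (3.0 * dA(w) + dA(v)) / m * A(v) / dA(v)
    return z - A(z) / dA((3.0 * w - v) / 2.0)   # Jacobian at the Newton point

# Illustrative equation: v^3 - 2 = 0 with root 2^(1/3).
A  = lambda v: v**3 - 2.0
dA = lambda v: 3.0 * v**2
v = 1.0
for _ in range(4):
    v = method4_step(A, dA, v)
```

Compared with (2), the last substep reuses a point built from $v_n$ and $w_n$ instead of evaluating the Jacobian at $z_n$.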
In Section 2, we establish a third-order convergence for method (3), and in Section 3, we demonstrate a sixth-order convergence for (2), relying on assumptions about the derivatives of A up to the second order. Consequently, our analysis broadens the applicability of methods (3), (2), and (4) to problems that could not be addressed using the approaches in [4,14,18,20,21].
In Section 5, we examine the constraints of our approach and propose novel strategies to overcome these limitations for local as well as semi-local convergence scenarios. The convergence conditions are solely tied to the operators involved in the method for both the semi-local and local cases.
The remaining part of the paper includes the efficiency index in Section 6, numerical demonstration in Section 7, and basins of attraction in Section 8, concluding with a summary in Section 9.

2. Order of Convergence (OC) of (3)

The analysis of local convergence relies on the following assumptions:
(A1)
$A'(v^*)^{-1}$ exists and there is $L_0 > 0$ such that, for all $v, w \in E$,
$$\|A'(v^*)^{-1}(A'(v)-A'(w))\| \le L_0\|v-w\|.$$
(A2)
There is $L_1 > 0$ such that, for all $v, w \in E$,
$$\|A'(v^*)^{-1}(A''(v)-A''(w))\| \le L_1\|v-w\|.$$
(A3)
There is $L_2 > 0$ such that
$$\|A'(v^*)^{-1}A''(v)\| \le L_2, \quad \forall\, v \in E,$$
and
(A4)
there is $L_3 > 0$ such that
$$\|A'(v^*)^{-1}A'(v)\| \le L_3, \quad \forall\, v \in E.$$
Using the constants $L_0, L_1, L_2$ and $L_3$, we define continuous nondecreasing functions (CNF) $\Theta_1, H_1 : [0, \tfrac{1}{L_0}) \to \mathbb{R}$ as follows:
$$\Theta_1(t) = \frac{L_0}{2}\Bigl(3\Bigl(1+\frac{2L_3}{3(1-L_0 t)}\Bigr)+1\Bigr)t$$
and
$$H_1(t) = \Theta_1(t) - 1.$$
Given that $H_1(0) = -1$ and $H_1(t) \to \infty$ as $t$ tends to $\tfrac{1}{L_0}$, it follows that $H_1(t) = 0$ has a smallest positive root in the interval $[0, \tfrac{1}{L_0})$, which we denote by $\lambda$. Define CNF $g_1, h_1 : [0, \lambda) \to \mathbb{R}$ by
$$g_1(t) = \frac{\bigl(93\,L_1(1-L_0 t) + 32\,L_1 L_3 + 36\,L_0 L_2\bigr)(1-L_0 t) + 12\,L_0 L_2 L_3}{48\,(1-\Theta_1(t))(1-L_0 t)^2}$$
and
$$h_1(t) = g_1(t)\,t^2 - 1.$$
Since $h_1(0) = -1$ and $h_1(t) \to \infty$ as $t \to \lambda$, it follows that $h_1(t) = 0$ has a smallest positive root in the interval $[0, \lambda)$, denoted by $\lambda_1$. Let
$$r = \min\Bigl\{\lambda_1, \frac{1}{2L_0}, 1\Bigr\}. \qquad (5)$$
Then we have
$$0 \le g_1(t)\,t^2 < 1, \quad \forall\, t \in [0, r). \qquad (6)$$
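The radius $r$ in (5) is computable once the constants are known: locate the smallest positive roots of $H_1$ and $h_1$ by bisection. In the sketch below the values of $L_0,\dots,L_3$ are illustrative assumptions, not constants from the paper's examples.

```python
def bisect(f, a, b, it=200):
    """Bisection for the smallest sign change of f on (a, b); assumes
    f(a) < 0 < f(b)."""
    for _ in range(it):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

L0, L1, L2, L3 = 1.0, 1.5, 2.0, 2.0        # assumed Lipschitz-type constants

def Theta1(t):
    return 0.5 * L0 * (3.0 * (1.0 + 2.0 * L3 / (3.0 * (1.0 - L0 * t))) + 1.0) * t

# lambda: smallest positive root of H1(t) = Theta1(t) - 1 on (0, 1/L0).
lam = bisect(lambda t: Theta1(t) - 1.0, 0.0, (1.0 - 1e-9) / L0)

def g1(t):
    num = ((93.0 * L1 * (1.0 - L0 * t) + 32.0 * L1 * L3 + 36.0 * L0 * L2)
           * (1.0 - L0 * t) + 12.0 * L0 * L2 * L3)
    return num / (48.0 * (1.0 - Theta1(t)) * (1.0 - L0 * t) ** 2)

# lambda_1: smallest positive root of h1(t) = g1(t) t^2 - 1 on (0, lambda).
lam1 = bisect(lambda t: g1(t) * t * t - 1.0, 1e-12, lam * (1.0 - 1e-9))
r = min(lam1, 1.0 / (2.0 * L0), 1.0)
```

For these sample constants the convergence ball has radius $r \approx 0.16$, and (6) holds for any $t < r$ by construction.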
Throughout the paper, we consider $B(v_0,\rho) = \{v \in X : \|v-v_0\| < \rho\}$ and $B[v_0,\rho] = \{v \in X : \|v-v_0\| \le \rho\}$, for $\rho > 0$ and $v_0 \in X$.
Theorem 1.
Assuming (A1)-(A4) hold, the sequence $\{v_n\}$ given by (3) with initial value $v_0 \in B(v^*, r)$ converges to $v^*$, and the following estimate is valid:
$$\|v_{n+1}-v^*\| \le g_1(r)\,\|v_n-v^*\|^3. \qquad (7)$$
Proof. An inductive argument will be employed. As a first step, we show that the operator $3A'(w)-A'(v)$ is invertible for all $v, w$ belonging to the open ball $B(v^*, r)$. Note that, by (A1), we have
$$\begin{aligned}
\|(2A'(v^*))^{-1}(3A'(w)-A'(v)-2A'(v^*))\|
&\le \tfrac12\bigl\|A'(v^*)^{-1}\bigl[3(A'(w)-A'(v^*))-(A'(v)-A'(v^*))\bigr]\bigr\|\\
&\le \tfrac32\|A'(v^*)^{-1}(A'(w)-A'(v^*))\| + \tfrac12\|A'(v^*)^{-1}(A'(v)-A'(v^*))\|\\
&\le \tfrac{L_0}{2}\bigl(3\|w-v^*\|+\|v-v^*\|\bigr) \le \tfrac32 L_0 r + \tfrac12 L_0 r = 2L_0 r < 1. \qquad (8)
\end{aligned}$$
Therefore, by the Banach lemma (BL) on invertible operators, $3A'(w)-A'(v)$ is invertible, and by (8) we have
$$\|(3A'(w)-A'(v))^{-1}A'(v^*)\| \le \frac{1}{2\bigl(1-\tfrac{L_0}{2}(3\|w-v^*\|+\|v-v^*\|)\bigr)}. \qquad (9)$$
Similarly, one can prove that
$$\|A'(v)^{-1}A'(v^*)\| \le \frac{1}{1-L_0\|v-v^*\|}. \qquad (10)$$
Next, by the method (3), we have
$$\begin{aligned}
v_1-v^* &= v_0-v^*-\tfrac12(3A'(w_0)-A'(v_0))^{-1}(3A'(w_0)+A'(v_0))A'(v_0)^{-1}A(v_0)\\
&= (3A'(w_0)-A'(v_0))^{-1}\Bigl[(3A'(w_0)-A'(v_0))(v_0-v^*)-\tfrac12(3A'(w_0)+A'(v_0))A'(v_0)^{-1}A(v_0)\Bigr]. \qquad (11)
\end{aligned}$$
Note that
$$A(v_0) = A(v_0)-A(v^*) = \int_0^1 A'(v^*+t(v_0-v^*))\,dt\,(v_0-v^*). \qquad (12)$$
For convenience, let $P = (3A'(w_0)-A'(v_0))^{-1}$. In order to prove (7), we rearrange equation (11) as follows:
$$\begin{aligned}
v_1-v^* ={}& P\Bigl[(3A'(w_0)-A'(v_0))(v_0-v^*)-2A(v_0) + \Bigl(2I-\tfrac12(3A'(w_0)+A'(v_0))A'(v_0)^{-1}\Bigr)A(v_0)\Bigr]\\
={}& P\Bigl[3\int_0^1\bigl(A'(w_0)-A'(v^*+t(v_0-v^*))\bigr)dt\,(v_0-v^*)
+\int_0^1\bigl(A'(v^*+t(v_0-v^*))-A'(v_0)\bigr)dt\,(v_0-v^*)\\
&\quad +\Bigl(2A'(v_0)-\tfrac32A'(w_0)-\tfrac12A'(v_0)\Bigr)A'(v_0)^{-1}A(v_0)\Bigr] \qquad (\text{by }(12))\\
={}& P\Bigl[3\int_0^1\!\!\int_0^1 A''\bigl(v^*+t(v_0-v^*)+\theta(w_0-v^*-t(v_0-v^*))\bigr)d\theta\,(w_0-v^*-t(v_0-v^*))\,dt\,(v_0-v^*)\\
&\quad -\int_0^1\!\!\int_0^1 A''\bigl(v^*+t(v_0-v^*)+\theta(1-t)(v_0-v^*)\bigr)d\theta\,dt\,(1-t)(v_0-v^*)^2\\
&\quad +\tfrac32\int_0^1 A''\bigl(w_0+\theta(v_0-w_0)\bigr)d\theta\,(v_0-w_0)\,A'(v_0)^{-1}A(v_0)\Bigr]\\
={}& P\Bigl[\int_0^1\!\!\int_0^1 A''\bigl(v^*+t(v_0-v^*)+\theta(w_0-v^*-t(v_0-v^*))\bigr)d\theta\,(1-3t)\,dt\,(v_0-v^*)^2\\
&\quad +2\int_0^1\!\!\int_0^1 A''\bigl(v^*+t(v_0-v^*)+\theta(w_0-v^*-t(v_0-v^*))\bigr)d\theta\,dt\,\bigl(v_0-v^*-A'(v_0)^{-1}A(v_0)\bigr)(v_0-v^*)\\
&\qquad \Bigl(\text{because } w_0-v^* = \tfrac13(v_0-v^*)+\tfrac23\bigl(v_0-v^*-A'(v_0)^{-1}A(v_0)\bigr)\Bigr)\\
&\quad -\int_0^1\!\!\int_0^1 A''\bigl(v^*+t(v_0-v^*)+\theta(1-t)(v_0-v^*)\bigr)d\theta\,dt\,(1-t)(v_0-v^*)^2\\
&\quad +\int_0^1 A''\bigl(w_0+\theta(v_0-w_0)\bigr)d\theta\,\bigl(A'(v_0)^{-1}A(v_0)\bigr)^2\Bigr]. \qquad (13)
\end{aligned}$$
Let $S_1(\theta,t) = A''\bigl(v^*+t(v_0-v^*)+\theta(w_0-v^*-t(v_0-v^*))\bigr)$, $S_2(\theta,t) = A''\bigl(v^*+t(v_0-v^*)+\theta(1-t)(v_0-v^*)\bigr)$ and $S_3(\theta) = A''\bigl(w_0+\theta(v_0-w_0)\bigr)$. Then, by (13) and after regrouping using (12), we have
$$\begin{aligned}
v_1-v^* ={}& P\Bigl[\int_0^1\!\!\int_0^1 S_1(\theta,t)\,d\theta\,(1-2t)\,dt\,(v_0-v^*)^2\\
&+2\int_0^1\!\!\int_0^1 S_1(\theta,t)\,d\theta\,dt\;A'(v_0)^{-1}\Bigl(A'(v_0)-\int_0^1 A'(v^*+\tau(v_0-v^*))\,d\tau\Bigr)(v_0-v^*)^2\\
&+\int_0^1\!\!\int_0^1\bigl(S_2(\theta,t)-S_1(\theta,t)\bigr)\,t\,d\theta\,dt\,(v_0-v^*)^2\\
&+\int_0^1\!\!\int_0^1\bigl(S_3(\theta)-S_2(\theta,t)\bigr)\,d\theta\,dt\,(v_0-v^*)^2\\
&+\int_0^1 S_3(\theta)\,d\theta\Bigl[\Bigl(A'(v_0)^{-1}\int_0^1 A'(v^*+\tau(v_0-v^*))\,d\tau\Bigr)^2-I\Bigr](v_0-v^*)^2\Bigr]\\
=:{}& I_1+I_2+I_3+I_4+I_5, \qquad (14)
\end{aligned}$$
where
$$\begin{aligned}
I_1 &= P\int_0^1\!\!\int_0^1 S_1(\theta,t)\,d\theta\,(1-2t)\,dt\,(v_0-v^*)^2,\\
I_2 &= 2P\int_0^1\!\!\int_0^1 S_1(\theta,t)\,d\theta\,dt\;A'(v_0)^{-1}\Bigl(A'(v_0)-\int_0^1 A'(v^*+\tau(v_0-v^*))\,d\tau\Bigr)(v_0-v^*)^2,\\
I_3 &= P\int_0^1\!\!\int_0^1\bigl(S_2(\theta,t)-S_1(\theta,t)\bigr)\,t\,d\theta\,dt\,(v_0-v^*)^2,\\
I_4 &= P\int_0^1\!\!\int_0^1\bigl(S_3(\theta)-S_2(\theta,t)\bigr)\,d\theta\,dt\,(v_0-v^*)^2
\end{aligned}$$
and
$$I_5 = P\int_0^1 S_3(\theta)\,d\theta\Bigl[\Bigl(A'(v_0)^{-1}\int_0^1 A'(v^*+\tau(v_0-v^*))\,d\tau\Bigr)^2-I\Bigr](v_0-v^*)^2.$$
Next, we estimate the norms of $I_1, I_2, I_3, I_4$ and $I_5$. Note that, since $\int_0^1(1-2t)\,dt = 0$, the constant term $A''(v^*)$ may be subtracted inside the integral, so that, by (A2),
$$\begin{aligned}
\|I_1\| &= \Bigl\|P\int_0^1\!\!\int_0^1 S_1(\theta,t)\,d\theta\,(1-2t)\,dt\,(v_0-v^*)^2\Bigr\|\\
&\le \|PA'(v^*)\|\int_0^1\!\!\int_0^1\bigl\|A'(v^*)^{-1}\bigl[S_1(\theta,t)-A''(v^*)\bigr]\bigr\|\,d\theta\,|1-2t|\,dt\,\|v_0-v^*\|^2\\
&\le L_1\|PA'(v^*)\|\int_0^1\!\!\int_0^1\bigl(t\,|1-2t|\,\|v_0-v^*\|+\theta\,|1-2t|\,\|w_0-v^*\|+\theta t\,|1-2t|\,\|v_0-v^*\|\bigr)\,d\theta\,dt\,\|v_0-v^*\|^2\\
&\le L_1\|PA'(v^*)\|\Bigl(\tfrac38\|v_0-v^*\|^3+\tfrac14\|w_0-v^*\|\,\|v_0-v^*\|^2\Bigr), \qquad (15)
\end{aligned}$$
which is obtained using (A2) and (10) (with $v = v_0$). Note that $\|w_0-v^*\| \le \|w_0-v_0\|+\|v_0-v^*\|$ and, using (10) and (A4),
$$\|w_0-v_0\| = \tfrac23\|A'(v_0)^{-1}A(v_0)\| \le \tfrac23\|A'(v_0)^{-1}A'(v^*)\|\,\Bigl\|\int_0^1 A'(v^*)^{-1}A'(v^*+t(v_0-v^*))\,dt\Bigr\|\,\|v_0-v^*\| \le \frac{2L_3}{3(1-L_0\|v_0-v^*\|)}\,\|v_0-v^*\|.$$
Therefore,
$$\|w_0-v^*\| \le \Bigl(1+\frac{2L_3}{3(1-L_0\|v_0-v^*\|)}\Bigr)\|v_0-v^*\| \qquad (16)$$
and hence, by (9) (with $w = w_0$ and $v = v_0$), we have
$$\|PA'(v^*)\| \le \frac{1}{2\,(1-\Theta_1(\|v_0-v^*\|))}. \qquad (17)$$
Therefore, using (16) and (17) in (15), we get
$$\|I_1\| \le \frac{L_1\Bigl(3+2\Bigl(1+\frac{2L_3}{3(1-L_0\|v_0-v^*\|)}\Bigr)\Bigr)}{16\,(1-\Theta_1(\|v_0-v^*\|))}\,\|v_0-v^*\|^3. \qquad (18)$$
Next,
$$\begin{aligned}
I_2 &= 2P\int_0^1\!\!\int_0^1 S_1(\theta,t)\,d\theta\,dt\;A'(v_0)^{-1}\Bigl(A'(v_0)-\int_0^1 A'(v^*+\tau(v_0-v^*))\,d\tau\Bigr)(v_0-v^*)^2\\
&= 2PA'(v^*)\int_0^1\!\!\int_0^1 A'(v^*)^{-1}S_1(\theta,t)\,d\theta\,dt\;A'(v_0)^{-1}A'(v^*)\int_0^1 A'(v^*)^{-1}\bigl(A'(v_0)-A'(v^*+\tau(v_0-v^*))\bigr)\,d\tau\,(v_0-v^*)^2.
\end{aligned}$$
Therefore, by (9), (10), (A1) and (A3), we have
$$\|I_2\| \le \frac{L_0 L_2}{2\,(1-\Theta_1(\|v_0-v^*\|))(1-L_0\|v_0-v^*\|)}\,\|v_0-v^*\|^3. \qquad (19)$$
By using (9) and (A2), we have
$$\begin{aligned}
\|I_3\| &= \Bigl\|PA'(v^*)\int_0^1\!\!\int_0^1 A'(v^*)^{-1}\bigl(S_2(\theta,t)-S_1(\theta,t)\bigr)\,t\,d\theta\,dt\,(v_0-v^*)^2\Bigr\|\\
&\le L_1\|PA'(v^*)\|\int_0^1\!\!\int_0^1\bigl\|\theta(1-t)(v_0-v^*)-\theta\bigl(w_0-v^*-t(v_0-v^*)\bigr)\bigr\|\,t\,d\theta\,dt\,\|v_0-v^*\|^2\\
&\le L_1\|PA'(v^*)\|\int_0^1\!\!\int_0^1 \theta\,t\,\bigl(\|w_0-v^*\|+\|v_0-v^*\|\bigr)\,d\theta\,dt\,\|v_0-v^*\|^2\\
&\le \frac{L_1}{8\,(1-\Theta_1(\|v_0-v^*\|))}\bigl(\|v_0-v^*\|+\|w_0-v^*\|\bigr)\|v_0-v^*\|^2.
\end{aligned}$$
Thus, by (16) and (17), we have
$$\|I_3\| \le \frac{L_1\bigl(3(1-L_0\|v_0-v^*\|)+L_3\bigr)}{12\,(1-\Theta_1(\|v_0-v^*\|))(1-L_0\|v_0-v^*\|)}\,\|v_0-v^*\|^3. \qquad (20)$$
Similarly, we have
$$\begin{aligned}
\|I_4\| &= \Bigl\|PA'(v^*)\int_0^1\!\!\int_0^1 A'(v^*)^{-1}\bigl(S_3(\theta)-S_2(\theta,t)\bigr)\,d\theta\,dt\,(v_0-v^*)^2\Bigr\|\\
&\le \frac{L_1}{2\,(1-\Theta_1(\|v_0-v^*\|))}\int_0^1\!\!\int_0^1\bigl\|w_0+\theta(v_0-w_0)-v^*-t(v_0-v^*)-\theta(1-t)(v_0-v^*)\bigr\|\,d\theta\,dt\,\|v_0-v^*\|^2\\
&\le \frac{L_1}{2\,(1-\Theta_1(\|v_0-v^*\|))}\int_0^1\!\!\int_0^1\bigl(\|w_0-v^*\|+\theta\|v_0-w_0\|+(t+\theta(1-t))\|v_0-v^*\|\bigr)\,d\theta\,dt\,\|v_0-v^*\|^2\\
&\le \frac{L_1}{2\,(1-\Theta_1(\|v_0-v^*\|))}\Bigl(\tfrac34\|v_0-v^*\|+\|w_0-v^*\|+\tfrac12\|v_0-w_0\|\Bigr)\|v_0-v^*\|^2\\
&\le \frac{L_1}{2\,(1-\Theta_1(\|v_0-v^*\|))}\Bigl(\tfrac54\|v_0-v^*\|+\tfrac32\|w_0-v^*\|\Bigr)\|v_0-v^*\|^2.
\end{aligned}$$
So, by using (16) and (17), we have
$$\|I_4\| \le \frac{L_1\bigl(11(1-L_0\|v_0-v^*\|)+4L_3\bigr)}{8\,(1-\Theta_1(\|v_0-v^*\|))(1-L_0\|v_0-v^*\|)}\,\|v_0-v^*\|^3. \qquad (21)$$
Next, we shall obtain an estimate for $I_5$. Writing $Q = A'(v_0)^{-1}\int_0^1 A'(v^*+\tau(v_0-v^*))\,d\tau$, observe that $Q^2-I = Q(Q-I)+(Q-I)$, where
$$Q-I = A'(v_0)^{-1}A'(v^*)\int_0^1 A'(v^*)^{-1}\bigl(A'(v^*+\tau(v_0-v^*))-A'(v_0)\bigr)\,d\tau.$$
Therefore, by (9), (10) and (A1)-(A4), we have
$$\begin{aligned}
\|I_5\| &\le \|PA'(v^*)\|\,\Bigl\|\int_0^1 A'(v^*)^{-1}S_3(\theta)\,d\theta\Bigr\|\,\bigl(\|Q\|+1\bigr)\,\|Q-I\|\,\|v_0-v^*\|^2\\
&\le \frac{L_2}{2\,(1-\Theta_1(\|v_0-v^*\|))}\cdot\frac{1-L_0\|v_0-v^*\|+L_3}{1-L_0\|v_0-v^*\|}\cdot\frac{L_0\|v_0-v^*\|}{2\,(1-L_0\|v_0-v^*\|)}\,\|v_0-v^*\|^2\\
&= \frac{L_0 L_2\bigl(1-L_0\|v_0-v^*\|+L_3\bigr)}{4\,(1-\Theta_1(\|v_0-v^*\|))(1-L_0\|v_0-v^*\|)^2}\,\|v_0-v^*\|^3. \qquad (24)
\end{aligned}$$
Thus, from (14)-(24), we have
$$\|v_1-v^*\| \le g_1(\|v_0-v^*\|)\,\|v_0-v^*\|^3.$$
Therefore, the iterate $v_1 \in B(v^*, r)$, because
$$g_1(\|v_0-v^*\|)\,\|v_0-v^*\|^3 \le g_1(r)\,r^2\,\|v_0-v^*\| \le \|v_0-v^*\| < r.$$
Simply replace v 0 , w 0 , v 1 in the preceding arguments by v n , w n , v n + 1 to complete the induction for (7). □
Theorem 2.
The method defined by (3) exhibits a convergence order of 3.
Proof. The proof follows a similar argument to that of Theorem 3 in [6]; we include it here for completeness. Let $e_n = \|v_n-v^*\|$ and let $q$ be maximal such that, for some $C > 0$,
$$\lim_{n\to\infty}\frac{e_{n+1}}{e_n^{q}} = C. \qquad (25)$$
Then, since $e_n < r < 1$, by (7) (for large enough $n$) we have
$$e_{n+1} \le g_1(r)\,e_n^{3}. \qquad (26)$$
So, by (25) and (26), we get
$$\frac{e_{n+1}}{e_n^{3}} \le g_1(r).$$
Thus, by (25), $q = 3$ and $C \le g_1(r)$, i.e., the convergence order is $q = 3$. □

3. Order of Convergence (OC) of (2)

This section examines the OC of method (2). For our analysis we require some more CNF. Let $\Theta_2, H_2 : [0, \lambda) \to \mathbb{R}$ be defined by
$$\Theta_2(t) = L_0\,g_1(t)\,t^3$$
and
$$H_2(t) = \Theta_2(t)-1.$$
Given that $H_2(0) = -1$ and $H_2(t) \to \infty$ as $t \to \lambda$, we can conclude that the equation $H_2(t) = 0$ possesses a smallest positive solution within $[0, \lambda)$. This solution is denoted by $\lambda_2$.
Let $g_2, h_2 : [0, \lambda_2) \to \mathbb{R}$ be CNF defined by
$$g_2(t) = \frac{L_0\,g_1(t)^2}{2\,(1-L_0\,g_1(t)\,t^3)}$$
and
$$h_2(t) = g_2(t)\,t^5-1.$$
Then $h_2(0) = -1$ and $h_2(t) \to \infty$ as $t \to \lambda_2$. Therefore $h_2(t) = 0$ has a smallest positive solution in $[0, \lambda_2)$, denoted by $\lambda_3$. Let
$$R = \min\Bigl\{\lambda_3, \frac{1}{2L_0}, 1\Bigr\}.$$
Then, for all $t \in [0, R)$,
$$0 \le g_2(t)\,t^5 < 1.$$
Theorem 3.
Assuming (A1)-(A4) hold, the sequence $\{v_n\}$ given by (2) with initial value $v_0 \in B(v^*, R)$ converges to $v^*$, and the following estimate is valid:
$$\|v_{n+1}-v^*\| \le g_2(R)\,\|v_n-v^*\|^6.$$
Proof. Adopting the same proof strategy as in Theorem 1, we find that
$$\|z_n-v^*\| \le g_1(\|v_n-v^*\|)\,\|v_n-v^*\|^3.$$
Note that, by (10) and (A1),
$$\begin{aligned}
\|v_{n+1}-v^*\| &= \bigl\|z_n-v^*-A'(z_n)^{-1}A(z_n)\bigr\|\\
&= \Bigl\|A'(z_n)^{-1}\Bigl(A'(z_n)-\int_0^1 A'(v^*+t(z_n-v^*))\,dt\Bigr)(z_n-v^*)\Bigr\|\\
&\le \|A'(z_n)^{-1}A'(v^*)\|\,\Bigl\|\int_0^1 A'(v^*)^{-1}\bigl(A'(z_n)-A'(v^*+t(z_n-v^*))\bigr)\,dt\Bigr\|\,\|z_n-v^*\|\\
&\le \frac{L_0}{2\,(1-L_0\|z_n-v^*\|)}\,\|z_n-v^*\|^2\\
&\le \frac{L_0\,g_1(\|v_n-v^*\|)^2}{2\bigl(1-L_0\,g_1(\|v_n-v^*\|)\,\|v_n-v^*\|^3\bigr)}\,\|v_n-v^*\|^6 \le g_2(R)\,\|v_n-v^*\|^6.
\end{aligned}$$
Now, since $g_2(R)\|v_n-v^*\|^6 \le g_2(R)\,R^5\,\|v_n-v^*\| \le \|v_n-v^*\| \le R$, the iterate $v_{n+1} \in B(v^*, R)$. □
Theorem 4.
The method defined by (2) exhibits a convergence order of 6 .
Proof. Employing a proof strategy analogous to that of Theorem 2.
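Theorems 3 and 4 can be sanity-checked at high precision using only the standard library. The sketch below (a smooth scalar illustration, not the paper's example) runs scheme (2) with 400-digit arithmetic and computes the COC from three consecutive errors; for such smooth problems the observed order is at least the proven six (classical Taylor-based analysis can even give a higher observed order).

```python
from decimal import Decimal, getcontext

getcontext().prec = 400          # enough digits to see high-order error decay

def f(v):  return v**3 - 2
def df(v): return 3 * v**2

def step2(v):
    """One iteration of scheme (2) in the scalar case."""
    w = v - Decimal(2) / 3 * f(v) / df(v)
    z = v - (3 * df(w) + df(v)) / (2 * (3 * df(w) - df(v))) * f(v) / df(v)
    return z - f(z) / df(z)

alpha = Decimal(2) ** (Decimal(1) / 3)   # reference root at working precision
v = Decimal("1.2")
errs = []
for _ in range(3):
    v = step2(v)
    errs.append(abs(v - alpha))

# COC from three consecutive errors; should be at least 6 here.
coc = (errs[2] / errs[1]).ln() / (errs[1] / errs[0]).ln()
```

Ordinary double precision would be useless here: a sixth-order method exhausts 16 digits in two steps, which is why `decimal` is used.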

4. Order of Convergence (OC) of (4)

We analyze the OC of method (4) in this section. As in the previous sections, we require some more CNF. Let $g_3, h_3 : [0, \tfrac{\sqrt{3}-1}{L_0}) \to \mathbb{R}$ be CNF defined by
$$g_3(t) = \frac{L_0}{2\Bigl(1-\dfrac{L_0^2 t^2}{2(1-L_0 t)}\Bigr)}\Bigl(\frac{L_0}{1-L_0 t}+g_1(t)\,t\Bigr)g_1(t)$$
and
$$h_3(t) = g_3(t)\,t^4-1.$$
Then $h_3(0) = -1$ and $h_3(t) \to \infty$ as $t \to \tfrac{\sqrt{3}-1}{L_0}$. Therefore $h_3(t) = 0$ has a smallest positive solution in $[0, \tfrac{\sqrt{3}-1}{L_0})$, denoted by $\lambda_4$. Let
$$R_1 = \min\Bigl\{\lambda_4, \frac{1}{2L_0}, 1\Bigr\}.$$
Then, for all $t \in [0, R_1)$,
$$0 \le g_3(t)\,t^4 < 1.$$
Theorem 5.
Assuming (A1)-(A4) hold, the sequence $\{v_n\}$ given by (4) with initial value $v_0 \in B(v^*, R_1)$ converges to $v^*$, and the following estimate is valid:
$$\|v_{n+1}-v^*\| \le g_3(R_1)\,\|v_n-v^*\|^5.$$
Proof. In imitation of the proof presented for Theorem 1, we obtain
$$\|z_n-v^*\| \le g_1(\|v_n-v^*\|)\,\|v_n-v^*\|^3.$$
Let $y_n = \frac{3w_n-v_n}{2}$ and note that $y_n = v_n - A'(v_n)^{-1}A(v_n)$. By (10) and (A1),
$$\begin{aligned}
\|v_{n+1}-v^*\| &= \bigl\|z_n-v^*-A'(y_n)^{-1}A(z_n)\bigr\|\\
&= \Bigl\|A'(y_n)^{-1}\Bigl(A'(y_n)-\int_0^1 A'(v^*+t(z_n-v^*))\,dt\Bigr)(z_n-v^*)\Bigr\|\\
&\le \frac{L_0}{1-L_0\|y_n-v^*\|}\Bigl(\|y_n-v^*\|+\tfrac12\|z_n-v^*\|\Bigr)\|z_n-v^*\|.
\end{aligned}$$
Here we use the inequality
$$\begin{aligned}
\|y_n-v^*\| = \|v_n-v^*-A'(v_n)^{-1}A(v_n)\| &= \Bigl\|A'(v_n)^{-1}\Bigl(A'(v_n)-\int_0^1 A'(v^*+t(v_n-v^*))\,dt\Bigr)(v_n-v^*)\Bigr\|\\
&\le \frac{L_0}{2\,(1-L_0\|v_n-v^*\|)}\,\|v_n-v^*\|^2,
\end{aligned}$$
which gives
$$\begin{aligned}
\|v_{n+1}-v^*\| &\le \frac{L_0}{2\Bigl(1-\dfrac{L_0^2\|v_n-v^*\|^2}{2(1-L_0\|v_n-v^*\|)}\Bigr)}\Bigl(\frac{L_0}{1-L_0\|v_n-v^*\|}+g_1(\|v_n-v^*\|)\,\|v_n-v^*\|\Bigr)\\
&\qquad\times g_1(\|v_n-v^*\|)\,\|v_n-v^*\|^5 \le g_3(R_1)\,\|v_n-v^*\|^5.
\end{aligned}$$
Now, since $g_3(R_1)\|v_n-v^*\|^5 \le g_3(R_1)\,R_1^4\,\|v_n-v^*\| \le \|v_n-v^*\| \le R_1$, the iterate $v_{n+1} \in B(v^*, R_1)$. □
Theorem 6.
The method defined by (4) exhibits a convergence order of 5 .
Proof. The proof resembles that of Theorem 2. □
The subsequent result addresses the uniqueness of the solution obtained by the methods (3), (2), and (4).
Theorem 7.
Suppose assumption (A1) holds and the equation $A(v) = 0$ has a simple solution $v^*$. Then $v^*$ is the only solution of $A(v) = 0$ in the set $E_1 = E \cap B[v^*, \rho]$, provided that
$$L_0\,\rho < 2. \qquad (35)$$
Proof. Suppose $c \in E_1$ is such that $A(c) = 0$. Define the operator $N = \int_0^1 A'(v^*+\gamma(c-v^*))\,d\gamma$. Then, by assumption (A1) and (35), we have
$$\|A'(v^*)^{-1}(N-A'(v^*))\| \le L_0\int_0^1\|v^*+\gamma(c-v^*)-v^*\|\,d\gamma \le L_0\int_0^1\gamma\,\|c-v^*\|\,d\gamma \le \frac{L_0\,\rho}{2} < 1.$$
So, by BL, $N$ is invertible, and hence $c = v^*$ follows from the identity $0 = A(c)-A(v^*) = N(c-v^*)$. □

5. Convergence Under Generalized Conditions

The applicability of the methods (3) and (4) can be extended. Notice that condition (A2) can be violated easily, even for simple scalar functions. Define the function
$$A(t) = \begin{cases} t^2\log t + 5t^5-5t^4, & t \neq 0,\\ 0, & t = 0.\end{cases}$$
Then $v^* = 1$, while $A''(t)$ is unbounded as $t \to 0$; hence condition (A2) is violated in any neighborhood containing $0$ and $1$. This necessitates a convergence analysis based on generalized conditions and on the operators inherent to the methods.
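A quick numerical check of this counterexample (the printed formula is garbled in this version, so the reading $A(t) = t^2\ln t + 5t^5 - 5t^4$ for $t > 0$ used below is an assumption):

```python
import math

# Assumed reading of the example: A(t) = t^2 ln t + 5 t^5 - 5 t^4 (t > 0),
# with A(0) = 0.
def A(t):
    return 0.0 if t == 0.0 else t * t * math.log(t) + 5.0 * t**5 - 5.0 * t**4

def A2(t):
    """Second derivative for t > 0: 2 ln t + 3 + 100 t^3 - 60 t^2."""
    return 2.0 * math.log(t) + 3.0 + 100.0 * t**3 - 60.0 * t**2

print(A(1.0))     # 0.0: v* = 1 solves A(t) = 0
print(A2(1e-3), A2(1e-9))   # grows without bound as t -> 0+
```

Since $A''$ blows up logarithmically near $0$, no Lipschitz constant $L_1$ as in (A2) exists on any interval containing $0$, even though $A$ itself is perfectly well behaved at the solution $v^* = 1$.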
First, the local convergence is considered under certain conditions. Set $T = [0, +\infty)$.
Presume:
(H1)
There exists a CNF $\psi_0 : T \to T$ such that the equation $\psi_0(t)-1 = 0$ has a smallest positive solution (SPS) $\rho_0$. Let $T_0$ be the interval $[0, \rho_0)$.
(H2)
The equation $\sigma_1(t)-1 = 0$ has a SPS $\rho_1 \in T_0\setminus\{0\}$, where the function $\sigma_1 : T_0 \to T$ is given by
$$\sigma_1(t) = \frac{\int_0^1\psi((1-\theta)t)\,d\theta+\tfrac13\Bigl(1+\int_0^1\psi_0(\theta t)\,d\theta\Bigr)}{1-\psi_0(t)}$$
for some CNF $\psi : T_0 \to T$.
(H3)
The equation $p(t)-1 = 0$ has a SPS denoted by $\rho_p \in T_0\setminus\{0\}$, where $p : T_0 \to T$ is given by
$$p(t) = \tfrac32\,\psi\bigl((\sigma_1(t)+1)t\bigr)+\psi_0(t).$$
Let $T_1 = [0, \rho_p)$.
(H4)
The equation $\sigma_2(t)-1 = 0$ has a SPS denoted by $\rho_2 \in T_1\setminus\{0\}$, where $\sigma_2 : T_1 \to T$ is given by
$$\sigma_2(t) = \frac{\int_0^1\psi((1-\theta)t)\,d\theta}{1-\psi_0(t)}+\frac{3\,\bar\psi(t)\Bigl(1+\int_0^1\psi_0(\theta t)\,d\theta\Bigr)}{4\,(1-p(t))(1-\psi_0(t))},$$
where $\bar\psi(t)$ denotes either $\psi\bigl((1+\sigma_1(t))t\bigr)$ or $\psi_0(\sigma_1(t)t)+\psi_0(t)$, whichever is tighter.
(H5)
The equation $q(t)-1 = 0$ has a SPS denoted by $\rho_q \in T_1\setminus\{0\}$, where $q : T_1 \to T$ is given by
$$q(t) = \psi_0\Bigl(\frac{(3\sigma_1(t)+1)t}{2}\Bigr).$$
Let $T_2 = [0, \rho_q)$.
(H6)
The equation $\sigma_3(t)-1 = 0$ has a SPS denoted by $\rho_3$, where $\sigma_3 : T_2 \to T$ is given by
$$\sigma_3(t) = \Biggl[\frac{\int_0^1\psi((1-\theta)\sigma_2(t)t)\,d\theta}{1-\psi_0(\sigma_2(t)t)}+\frac{\bar{\bar\psi}(t)\Bigl(1+\int_0^1\psi_0(\theta\sigma_2(t)t)\,d\theta\Bigr)}{1-\psi_0(\sigma_2(t)t)}\Biggr]\sigma_2(t),$$
where
$$\bar{\bar\psi}(t) = \frac{q(t)+\psi_0(\sigma_2(t)t)}{1-q(t)}.$$
Let
$$\rho^* = \min\{\rho_i\},\quad i = 1,2,3. \qquad (36)$$
The functions $\psi_0$ and $\psi$ developed above relate to the operators of the method (4).
(H7)
There exist an invertible linear operator $L$ and a solution $v^* \in E$ of the equation $A(v) = 0$ such that, for each $x \in E$,
$$\|L^{-1}(A'(x)-L)\| \le \psi_0(\|x-v^*\|).$$
Notice that, under condition (H1) and (36),
$$\|L^{-1}(A'(v^*)-L)\| \le \psi_0(0) < 1.$$
Thus $A'(v^*)$ is invertible. Let $E_1 = E \cap B(v^*, \rho_0)$.
(H8)
$\|L^{-1}(A'(y)-A'(x))\| \le \psi(\|y-x\|)$ for each $x, y \in E_1$,
and
(H9)
$B[v^*, \rho^*] \subseteq E$.
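Once $\psi_0$ and $\psi$ are chosen, the radii in (H1)-(H6) are computable. The sketch below finds $\rho_1$ of (H2) by bisection for the linear majorants $\psi_0(t) = L_0 t$ and $\psi(t) = Lt$ (these choices and the constants are illustrative assumptions, not values from the paper); with $L_0 = L = 1$ one gets $\rho_1 = 0.4$.

```python
L0, L = 1.0, 1.0            # assumed constants for psi0(t) = L0*t, psi(t) = L*t

def psi0(t): return L0 * t
def psi(t):  return L * t

def integral(f, n=1000):
    """Midpoint rule for integrals of the form \\int_0^1 f(theta) dtheta."""
    return sum(f((i + 0.5) / n) for i in range(n)) / n

def sigma1(t):
    i1 = integral(lambda th: psi((1.0 - th) * t))
    i2 = integral(lambda th: psi0(th * t))
    return (i1 + (1.0 + i2) / 3.0) / (1.0 - psi0(t))

# Bisection for the smallest positive solution of sigma1(t) = 1 in T0 = [0, 1/L0).
a, b = 1e-9, (1.0 - 1e-9) / L0
for _ in range(100):
    m = 0.5 * (a + b)
    if sigma1(m) >= 1.0:
        b = m
    else:
        a = m
rho1 = 0.5 * (a + b)
```

The remaining radii $\rho_p, \rho_2, \rho_q, \rho_3$ and hence $\rho^*$ of (36) are found the same way, by applying the bisection to $p$, $\sigma_2$, $q$ and $\sigma_3$ in turn.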
The main local analysis for the method (4) follows in the next result.
Theorem 8.
Let the conditions (H1)-(H9) hold. Then, provided that $v_0 \in B(v^*, \rho^*)\setminus\{v^*\}$, the following assertions are satisfied:
$$\{v_n\} \subset B(v^*, \rho^*), \qquad (37)$$
$$\|w_n-v^*\| \le \sigma_1(\|v_n-v^*\|)\,\|v_n-v^*\| \le \|v_n-v^*\| < \rho^*, \qquad (38)$$
$$\|z_n-v^*\| \le \sigma_2(\|v_n-v^*\|)\,\|v_n-v^*\| \le \|v_n-v^*\|, \qquad (39)$$
$$\|v_{n+1}-v^*\| \le \sigma_3(\|v_n-v^*\|)\,\|v_n-v^*\| \le \|v_n-v^*\| \qquad (40)$$
and $\lim_{n\to\infty} v_n = v^*$, where the functions $\sigma_i$ are as defined previously and the radius $\rho^*$ is given by the formula (36).
Proof. Let $T^* = [0, \rho^*)$. It follows that, for each $t \in T^*$,
$$0 \le \psi_0(t) < 1, \qquad (41)$$
$$0 \le p(t) < 1, \qquad (42)$$
$$0 \le q(t) < 1 \qquad (43)$$
and
$$0 \le \sigma_i(t) < 1. \qquad (44)$$
The assertions (37)-(40) are shown by induction. Let $u \in E^* = B(v^*, \rho^*)\setminus\{v^*\}$ be arbitrary. The condition (H7) and the formula (36) give
$$\|L^{-1}(A'(u)-L)\| \le \psi_0(\|u-v^*\|) \le \psi_0(\rho^*) < 1.$$
Thus, $A'(u)$ is invertible,
$$\|A'(u)^{-1}L\| \le \frac{1}{1-\psi_0(\|u-v^*\|)} \qquad (45)$$
and the iterate $w_0$ exists by the method (4) for $n = 0$. Moreover, the first substep gives
$$\begin{aligned}
w_0-v^* &= v_0-v^*-A'(v_0)^{-1}A(v_0)+\tfrac13 A'(v_0)^{-1}A(v_0)\\
&= \int_0^1 A'(v_0)^{-1}\bigl[A'(v_0)-A'(v^*+\theta(v_0-v^*))\bigr]\,d\theta\,(v_0-v^*)+\tfrac13 A'(v_0)^{-1}A(v_0). \qquad (46)
\end{aligned}$$
Using (36), (44) (for $i = 1$), (H8), (45) and (46),
$$\begin{aligned}
\|w_0-v^*\| &\le \frac{1}{1-\psi_0(\|v_0-v^*\|)}\Bigl[\int_0^1\psi\bigl((1-\theta)\|v_0-v^*\|\bigr)\,d\theta\,\|v_0-v^*\|+\tfrac13\Bigl(1+\int_0^1\psi_0(\theta\|v_0-v^*\|)\,d\theta\Bigr)\|v_0-v^*\|\Bigr]\\
&= \sigma_1(\|v_0-v^*\|)\,\|v_0-v^*\| \le \|v_0-v^*\| < \rho^*. \qquad (47)
\end{aligned}$$
Thus, the iterate $w_0 \in E^*$ and item (38) holds for $n = 0$.
The following estimate establishes the invertibility of the linear operator $3A'(w_0)-A'(v_0)$, and thus the existence of the iterate $z_0$ given by the second substep of the method (4):
$$\begin{aligned}
\|(2L)^{-1}(3A'(w_0)-A'(v_0)-2L)\| &\le \tfrac12\bigl[3\|L^{-1}(A'(w_0)-A'(v_0))\|+2\|L^{-1}(A'(v_0)-L)\|\bigr]\\
&\le \tfrac12\bigl(3\psi(\|w_0-v_0\|)+2\psi_0(\|v_0-v^*\|)\bigr) \le p(\|v_0-v^*\|) < 1, \qquad (48)
\end{aligned}$$
where we used the conditions (H3), (H7) and the formulas (36), (42) and (47). Hence, by (48),
$$\|(3A'(w_0)-A'(v_0))^{-1}L\| \le \frac{1}{2\,(1-p(\|v_0-v^*\|))}. \qquad (49)$$
Moreover, the second substep gives
$$z_0-v^* = v_0-v^*-A'(v_0)^{-1}A(v_0)+\bigl[I-\tfrac12(3A'(w_0)-A'(v_0))^{-1}(3A'(w_0)+A'(v_0))\bigr]A'(v_0)^{-1}A(v_0). \qquad (50)$$
It follows from (36), (44) (for $i = 2$), (45), (47), (49) and (50) that
$$\begin{aligned}
\|z_0-v^*\| &\le \Biggl[\frac{\int_0^1\psi\bigl((1-\theta)\|v_0-v^*\|\bigr)\,d\theta}{1-\psi_0(\|v_0-v^*\|)}+\frac{3\,\psi\bigl((1+\sigma_1(\|v_0-v^*\|))\|v_0-v^*\|\bigr)\Bigl(1+\int_0^1\psi_0(\theta\|v_0-v^*\|)\,d\theta\Bigr)}{4\,(1-p(\|v_0-v^*\|))(1-\psi_0(\|v_0-v^*\|))}\Biggr]\|v_0-v^*\|\\
&\le \sigma_2(\|v_0-v^*\|)\,\|v_0-v^*\| \le \|v_0-v^*\|. \qquad (51)
\end{aligned}$$
Thus, the iterate $z_0 \in E^*$ and, for $n = 0$, the assertion (39) holds. Next, the invertibility of the linear operator $A'\bigl(\frac{3w_0-v_0}{2}\bigr)$ establishes the existence of the iterate $v_1$:
$$\begin{aligned}
\Bigl\|L^{-1}\Bigl(A'\Bigl(\frac{3w_0-v_0}{2}\Bigr)-L\Bigr)\Bigr\| &\le \psi_0\Bigl(\Bigl\|\frac{3w_0-v_0}{2}-v^*\Bigr\|\Bigr) \le \psi_0\Bigl(\frac{3\|w_0-v^*\|+\|v_0-v^*\|}{2}\Bigr)\\
&\le q(\|v_0-v^*\|) < 1 \qquad (\text{by }(36)\text{ and }(43)), \qquad (52)
\end{aligned}$$
so
$$\Bigl\|A'\Bigl(\frac{3w_0-v_0}{2}\Bigr)^{-1}L\Bigr\| \le \frac{1}{1-q(\|v_0-v^*\|)}. \qquad (53)$$
Then, the last substep of the method (4) gives in turn
$$\begin{aligned}
v_1-v^* &= z_0-v^*-A'(z_0)^{-1}A(z_0)+\Bigl(A'(z_0)^{-1}-A'\Bigl(\frac{3w_0-v_0}{2}\Bigr)^{-1}\Bigr)A(z_0)\\
&= z_0-v^*-A'(z_0)^{-1}A(z_0)+A'(z_0)^{-1}\Bigl(A'\Bigl(\frac{3w_0-v_0}{2}\Bigr)-A'(z_0)\Bigr)A'\Bigl(\frac{3w_0-v_0}{2}\Bigr)^{-1}A(z_0). \qquad (54)
\end{aligned}$$
Using (36), (H8), (44) (for $i = 3$), (51), (52) and (53),
$$\begin{aligned}
\|v_1-v^*\| &\le \Biggl[\frac{\int_0^1\psi\bigl((1-\theta)\|z_0-v^*\|\bigr)\,d\theta}{1-\psi_0(\|z_0-v^*\|)}+\frac{\bar{\bar\psi}(\|v_0-v^*\|)\Bigl(1+\int_0^1\psi_0(\theta\|z_0-v^*\|)\,d\theta\Bigr)}{1-\psi_0(\|z_0-v^*\|)}\Biggr]\|z_0-v^*\|\\
&\le \sigma_3(\|v_0-v^*\|)\,\|v_0-v^*\| \le \|v_0-v^*\|. \qquad (55)
\end{aligned}$$
Hence, the iterate $v_1 \in E^*$ and the assertion (40) holds for $n = 0$. The induction is completed by replacing $v_0, w_0, z_0, v_1$ with $v_k, w_k, z_k, v_{k+1}$ in the preceding calculations. Finally, from the estimate
$$\|v_{k+1}-v^*\| \le c\,\|v_k-v^*\| < \rho^*,$$
where $c = \sigma_3(\|v_0-v^*\|) \in [0,1)$, it follows that $\lim_{k\to\infty} v_k = v^*$ and that the iterate $v_{k+1} \in E^*$. □
The isolation of the solution $v^*$ is discussed in the next result.
Proposition 1.
Suppose: there exists a solution $\bar v \in B(v^*, \rho_4)$ of the equation $A(v) = 0$ for some $\rho_4 > 0$; the condition (H7) holds in the ball $B(v^*, \rho_4)$; and there exists $\rho_5 \ge \rho_4$ such that
$$\int_0^1\psi_0(\theta\rho_5)\,d\theta < 1. \qquad (56)$$
Let $E_2 = E \cap B[v^*, \rho_5]$.
Then, the equation $A(v) = 0$ is uniquely solvable by $v^*$ in the region $E_2$.
Proof. Define the linear operator $L_1 = \int_0^1 A'(v^*+\theta(\bar v-v^*))\,d\theta$. Then, by the condition (H7) in the ball $B(v^*, \rho_4)$ and (56),
$$\|L^{-1}(L_1-L)\| \le \int_0^1\psi_0(\theta\|\bar v-v^*\|)\,d\theta \le \int_0^1\psi_0(\theta\rho_5)\,d\theta < 1.$$
Hence, $L_1$ is invertible and $\bar v = v^*$ follows from the identity $\bar v-v^* = L_1^{-1}(A(\bar v)-A(v^*)) = L_1^{-1}(0) = 0$. □
Remark 1.
(1)
A possible choice is $L = A'(v^*)$. In practice, $L$ should be chosen to tighten the function $\psi_0$. Notice also that it does not necessarily follow from (H7) that $v^*$ is a simple solution or that $A$ is differentiable at $v^*$.
(2)
The results for the method (3) are obtained by restricting to the first two substeps of the method (4).
An analogous approach is followed in the semi-local analysis, but the role of $v^*$ is played by $v_0$, and the functions $\psi_0$ and $\psi$ are replaced by $\varphi_0$ and $\varphi$, respectively, which are developed below.
Suppose:
(e1)
There exists a CNF $\varphi_0 : T \to T$ such that the equation $\varphi_0(t)-1 = 0$ has a SPS denoted by $\rho_6 \in T\setminus\{0\}$.
Set $T_2 = [0, \rho_6)$. Let $\varphi : T_2 \to T$ be a CNF. Define the sequence $\{\alpha_n\}$ for $\alpha_0 = 0$, some $\beta_0 \ge 0$ and each $n = 0,1,2,\dots$ by
$$\begin{aligned}
\tilde p_n &= \tfrac32\,\varphi(\beta_n-\alpha_n)+\varphi_0(\alpha_n),\\
\gamma_n &= \beta_n+\frac{\bigl(3\,\varphi(\beta_n-\alpha_n)+4\,(1+\varphi_0(\alpha_n))\bigr)(\beta_n-\alpha_n)}{8\,(1-\tilde p_n)},\\
\tilde\mu_n &= \Bigl(1+\int_0^1\varphi_0\bigl(\alpha_n+\theta(\gamma_n-\alpha_n)\bigr)\,d\theta\Bigr)(\gamma_n-\alpha_n)+\tfrac32\,(1+\varphi_0(\alpha_n))(\beta_n-\alpha_n),\\
\tilde q_n &= \varphi_0\Bigl(\frac{3\beta_n-\alpha_n}{2}\Bigr),\\
\alpha_{n+1} &= \gamma_n+\frac{\tilde\mu_n}{1-\tilde q_n},\\
\xi_{n+1} &= \Bigl(1+\int_0^1\varphi_0\bigl(\alpha_n+\theta(\alpha_{n+1}-\alpha_n)\bigr)\,d\theta\Bigr)(\alpha_{n+1}-\alpha_n)+\tfrac32\,(1+\varphi_0(\alpha_n))(\beta_n-\alpha_n)
\end{aligned}$$
and
$$\beta_{n+1} = \alpha_{n+1}+\frac{2\,\xi_{n+1}}{3\,(1-\varphi_0(\alpha_{n+1}))}.$$
(e2)
There exists $\rho_7 \in [0, \rho_6)$ such that, for each $n = 0,1,2,\dots$,
$$\tilde p_n < 1,\quad \tilde q_n < 1,\quad \varphi_0(\alpha_n) < 1 \quad\text{and}\quad \alpha_n \le \rho_7.$$
It follows that $0 \le \alpha_n \le \beta_n \le \gamma_n \le \alpha_{n+1} \le \rho_7$ and that there exists $\rho_8 \in [0, \rho_7]$ such that $\lim_{n\to\infty}\alpha_n = \rho_8$.
The functions $\varphi_0$ and $\varphi$ are connected to the operators of the method (4) as follows:
(e3)
There exists $v_0 \in E$ such that, for each $v \in E$,
$$\|L^{-1}(A'(v)-L)\| \le \varphi_0(\|v-v_0\|).$$
Let $E_3 = E \cap B(v_0, \rho_6)$. Notice that (e1) and (e3) imply that the linear operator $A'(v_0)$ is invertible. Let $\|A'(v_0)^{-1}A(v_0)\| \le \tfrac32\beta_0$.
(e4)
$\|L^{-1}(A'(w)-A'(v))\| \le \varphi(\|w-v\|)$ for each $v, w \in E_3$, and
(e5)
$B[v_0, \rho_8] \subseteq E$.
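The majorizing recursion of (e1)-(e2) can be generated directly. The sketch below uses linear majorants $\varphi_0(t) = L_0 t$, $\varphi(t) = Lt$ and a starting bound $\beta_0$ (all illustrative assumptions, not values from the paper) and checks the monotonicity $0 \le \alpha_n \le \beta_n \le \gamma_n \le \alpha_{n+1}$; whether the whole sequence stays below some $\rho_7$, as (e2) demands, must be verified for the constants at hand.

```python
L0, L = 0.05, 0.05     # assumed Lipschitz-type constants
beta0 = 0.01           # assumed bound with ||A'(v0)^{-1} A(v0)|| <= (3/2) beta0

def phi0(t): return L0 * t
def phi(t):  return L * t

def int_phi0(a, b, n=200):
    """Midpoint rule for \\int_0^1 phi0(a + theta*(b - a)) dtheta."""
    return sum(phi0(a + (i + 0.5) / n * (b - a)) for i in range(n)) / n

alpha, beta = 0.0, beta0
seq = [(alpha, beta)]
for _ in range(3):
    d = beta - alpha
    p = 1.5 * phi(d) + phi0(alpha)
    gamma = beta + (3.0 * phi(d) + 4.0 * (1.0 + phi0(alpha))) * d / (8.0 * (1.0 - p))
    mu = (1.0 + int_phi0(alpha, gamma)) * (gamma - alpha) \
         + 1.5 * (1.0 + phi0(alpha)) * d
    q = phi0((3.0 * beta - alpha) / 2.0)
    alpha_next = gamma + mu / (1.0 - q)
    xi = (1.0 + int_phi0(alpha, alpha_next)) * (alpha_next - alpha) \
         + 1.5 * (1.0 + phi0(alpha)) * d
    beta_next = alpha_next + 2.0 * xi / (3.0 * (1.0 - phi0(alpha_next)))
    assert alpha <= beta <= gamma <= alpha_next   # monotone majorizing sequence
    alpha, beta = alpha_next, beta_next
    seq.append((alpha, beta))
```

Each scalar quantity here majorizes the corresponding operator-norm distance in the proof below ($\beta_n - \alpha_n$ for $\|w_n - v_n\|$, $\gamma_n - \beta_n$ for $\|z_n - w_n\|$, and so on).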
As in the local case, we obtain in turn, by induction, the estimates
$$\|w_0-v_0\| = \tfrac23\|A'(v_0)^{-1}A(v_0)\| \le \beta_0 = \beta_0-\alpha_0 < \rho_8, \qquad (57)$$
$$\begin{aligned}
z_n-w_n &= \Bigl[\tfrac23 I-\tfrac12(3A'(w_n)-A'(v_n))^{-1}(3A'(w_n)+A'(v_n))\Bigr]A'(v_n)^{-1}A(v_n)\\
&= \tfrac16(3A'(w_n)-A'(v_n))^{-1}\bigl[4(3A'(w_n)-A'(v_n))-3(3A'(w_n)+A'(v_n))\bigr]\Bigl(-\tfrac32(w_n-v_n)\Bigr),
\end{aligned}$$
so that
$$\|z_n-w_n\| \le \frac{\bigl[3\|L^{-1}(A'(w_n)-A'(v_n))\|+4\|L^{-1}A'(v_n)\|\bigr]\|w_n-v_n\|}{8\,(1-\tilde p_n)} \le \gamma_n-\beta_n, \qquad (58)$$
where $\tilde p_n = \tfrac32\varphi(\beta_n-\alpha_n)+\varphi_0(\alpha_n)$, and
$$\|z_n-v_0\| \le \|z_n-w_n\|+\|w_n-v_0\| \le \gamma_n-\beta_n+\beta_n-\alpha_0 = \gamma_n < \rho_8. \qquad (59)$$
Moreover,
$$A(z_n) = A(z_n)-A(v_n)+A(v_n) = \int_0^1 A'\bigl(v_n+\theta(z_n-v_n)\bigr)\,d\theta\,(z_n-v_n)-\tfrac32 A'(v_n)(w_n-v_n),$$
so
$$\begin{aligned}
\|L^{-1}A(z_n)\| &\le \Bigl(1+\int_0^1\varphi_0\bigl(\|v_n-v_0\|+\theta\|z_n-v_n\|\bigr)\,d\theta\Bigr)\|z_n-v_n\|+\tfrac32\bigl(1+\varphi_0(\|v_n-v_0\|)\bigr)\|w_n-v_n\|\\
&\le \Bigl(1+\int_0^1\varphi_0\bigl(\alpha_n+\theta(\gamma_n-\alpha_n)\bigr)\,d\theta\Bigr)(\gamma_n-\alpha_n)+\tfrac32(1+\varphi_0(\alpha_n))(\beta_n-\alpha_n) = \tilde\mu_n, \qquad (60)
\end{aligned}$$
$$\|v_{n+1}-z_n\| \le \frac{\tilde\mu_n}{1-\tilde q_n} = \alpha_{n+1}-\gamma_n, \qquad (61)$$
where
$$\varphi_0\Bigl(\Bigl\|\frac{3w_n-v_n}{2}-v_0\Bigr\|\Bigr) \le \varphi_0\Bigl(\frac{2\|w_n-v_0\|+\|w_n-v_n\|}{2}\Bigr) \le \varphi_0\Bigl(\frac{3\beta_n-\alpha_n}{2}\Bigr) = \tilde q_n < 1, \qquad (62)$$
and
$$\|v_{n+1}-v_0\| \le \|v_{n+1}-z_n\|+\|z_n-v_0\| \le \alpha_{n+1}-\gamma_n+\gamma_n = \alpha_{n+1} < \rho_8.$$
Also
$$A(v_{n+1}) = A(v_{n+1})-A(v_n)+A(v_n) = \int_0^1 A'\bigl(v_n+\theta(v_{n+1}-v_n)\bigr)\,d\theta\,(v_{n+1}-v_n)-\tfrac32 A'(v_n)(w_n-v_n),$$
so
$$\begin{aligned}
\|L^{-1}A(v_{n+1})\| &\le \Bigl(1+\int_0^1\varphi_0\bigl(\|v_n-v_0\|+\theta\|v_{n+1}-v_n\|\bigr)\,d\theta\Bigr)\|v_{n+1}-v_n\|+\tfrac32\bigl(1+\varphi_0(\|v_n-v_0\|)\bigr)\|w_n-v_n\|\\
&\le \Bigl(1+\int_0^1\varphi_0\bigl(\alpha_n+\theta(\alpha_{n+1}-\alpha_n)\bigr)\,d\theta\Bigr)(\alpha_{n+1}-\alpha_n)+\tfrac32(1+\varphi_0(\alpha_n))(\beta_n-\alpha_n) = \xi_{n+1}, \qquad (63)
\end{aligned}$$
$$\|w_{n+1}-v_{n+1}\| \le \tfrac23\|A'(v_{n+1})^{-1}L\|\,\|L^{-1}A(v_{n+1})\| \le \frac{2\,\xi_{n+1}}{3\,(1-\varphi_0(\alpha_{n+1}))} = \beta_{n+1}-\alpha_{n+1} \qquad (64)$$
and
$$\|w_{n+1}-v_0\| \le \|w_{n+1}-v_{n+1}\|+\|v_{n+1}-v_0\| \le \beta_{n+1}-\alpha_{n+1}+\alpha_{n+1}-\alpha_0 = \beta_{n+1} < \rho_8. \qquad (65)$$
It follows from (57)-(65) that the sequence $\{v_n\}$ is complete, since $\{\alpha_n\}$ is convergent by the condition (e2) and $X$ is a Banach space. Hence, there exists $v^* \in B[v_0, \rho_8]$ such that $\lim_{n\to\infty} v_n = v^*$. Then, by letting $n \to \infty$ in
$$\|L^{-1}A(v_{n+1})\| \le \xi_{n+1},$$
we deduce that $A(v^*) = 0$. Finally, notice that for $j = 0,1,2,\dots$,
$$\|v_{n+j}-v_n\| \le \alpha_{n+j}-\alpha_n;$$
thus, letting $j \to \infty$,
$$\|v^*-v_n\| \le \rho_8-\alpha_n.$$
Hence, the semi-local result for the method (4) is achieved.
Theorem 9.
Let that the conditions (e1)-(e5) hold. Then, there exists v * B [ v 0 , ρ 8 ] solving the equation A ( v ) = 0 . Moreover, the following assertions hold
{ v_n } ⊂ B(v_0, ρ_8),
‖w_n − v_n‖ ≤ β_n − α_n,
‖z_n − w_n‖ ≤ γ_n − β_n,
‖v_{n+1} − z_n‖ ≤ α_{n+1} − γ_n
and
‖v* − v_n‖ ≤ ρ_8 − α_n.
The uniqueness property of the solution is specified in the next result.
Proposition 2.
Suppose: there exists a solution v̄ ∈ B(v_0, ρ_9) of the equation A(v) = 0 for some ρ_9 > 0, the condition (e3) holds in the ball B(v_0, ρ_9), and there exists ρ_10 ≥ ρ_9 such that
∫_0^1 φ_0((1 − θ)ρ_9 + θρ_10) dθ < 1.
Let E_4 = E ∩ B[v_0, ρ_10]. Then, the only possible solution of the equation A(v) = 0 in the region E_4 is v̄.
Proof. Let z̄ ∈ E_4 with A(z̄) = 0, and define the linear operator L_2 = ∫_0^1 A′(v̄ + θ(z̄ − v̄)) dθ. It follows that
‖L^{−1}(L_2 − L)‖ ≤ ∫_0^1 φ_0((1 − θ)‖v̄ − v_0‖ + θ‖z̄ − v_0‖) dθ ≤ ∫_0^1 φ_0((1 − θ)ρ_9 + θρ_10) dθ < 1.
Hence, the operator L_2 is invertible. Then, from the identity L_2(z̄ − v̄) = A(z̄) − A(v̄) = 0, we deduce z̄ = v̄. □
Remark 2.
(1) 
A possible choice for L is A′(v_0).
(2) 
Suppose that the conditions (e1)-(e5) hold. Then, set ρ 9 = ρ 8 and v ¯ = v * in Proposition 2.
(3) 
Replace the limit point ρ 8 by ρ 6 in the condition (e5).
(4) 
Clearly, the results for the method (3) are obtained by simply restricting the method (4) to its first two substeps.

6. Efficiency Indices

There are several measures for comparing iterative methods other than the OC; one of them is the efficiency of the method. Recall that the informational efficiency, introduced by Traub [22], is given by E.I = o/s, where o is the order of the method and s is the number of function evaluations per iteration. Earlier, Ostrowski [16] introduced the efficiency index (or computational efficiency), defined as C.E = ϖ^{1/θ_f}, where ϖ is the OC of the method and θ_f is the number of function evaluations per iteration. Thus, the E.I and C.E of the method (2) are 6/5 = 1.2 and 6^{1/5} ≈ 1.4310, those of the method (3) are 4/3 ≈ 1.33 and 4^{1/3} ≈ 1.587, and those of the method (4) are 5/5 = 1 and 5^{1/5} ≈ 1.3797.
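These indices are easy to recompute; the short script below (illustrative only, with the orders and per-iteration evaluation counts taken from the values quoted above) tabulates E.I and C.E for the three methods.

```python
# Recompute the informational efficiency E.I = o / s (Traub) and the
# efficiency index C.E = o ** (1 / s) (Ostrowski) for methods (2)-(4),
# using the orders o and evaluation counts s quoted in the text.
methods = {
    "method (2)": (6, 5),
    "method (3)": (4, 3),
    "method (4)": (5, 5),
}

for name, (o, s) in methods.items():
    print(f"{name}: E.I = {o / s:.4f}, C.E = {o ** (1 / s):.4f}")
```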

7. Numerical Example

Example 1.
Consider X = Y = R³, v_0 = (0, 0, 0)^T, E = B[0, 1]. Define the function A on E for u = (v, w, z)^T by
A(u) = ( e^v − 1, ((e − 1)/6) w³ + w, z³/6 + z )^T.
Then, the first and second Fréchet derivatives are as follows:
A′(u) = diag( e^v, ((e − 1)/2) w² + 1, z²/2 + 1 )
and
A″(u) is the bilinear operator (a 3 × 3 × 3 array) whose only nonzero entries are
∂²A_1/∂v² = e^v, ∂²A_2/∂w² = (e − 1)w and ∂²A_3/∂z² = z.
Now, we can observe that A′(v*) = A′(v*)^{−1} = diag(1, 1, 1), so that ‖A′(v*)‖ = 1. Thus,
‖A′(v*)^{−1}(A′(v) − A′(v*))‖ ≤ (e − 1)‖v − v*‖,
‖A′(v*)^{−1}(A″(v) − A″(v*))‖ ≤ (e − 1)‖v − v*‖,
‖A′(v*)^{−1}A″(v)‖ ≤ e,
‖A′(v*)^{−1}A′(v)‖ ≤ e^{1/(e−1)}.
Hence, L_0 = L_1 = e − 1, L_2 = e and L_3 = e^{1/(e−1)}. With respect to λ_1 = 0.1146, λ_2 = 0.7115, λ_3 = 0.6959, and λ_4 = 0.3001, we get r = R = R_1 = 0.0667.
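The identities A(v*) = 0 and A′(v*) = diag(1, 1, 1) used above can be checked numerically; the following sketch (a verification script added here for illustration, not part of the original example) compares a central finite-difference Jacobian at the origin with the identity matrix.

```python
import numpy as np

E = np.e

def A(u):
    # Example 1 map on R^3, u = (v, w, z)
    v, w, z = u
    return np.array([np.exp(v) - 1.0,
                     (E - 1.0) / 6.0 * w**3 + w,
                     z**3 / 6.0 + z])

def jacobian_fd(f, u, h=1e-6):
    # Central finite-difference Jacobian of f at u
    n = len(u)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(u + e) - f(u - e)) / (2.0 * h)
    return J

v_star = np.zeros(3)
print(A(v_star))               # the zero vector: v* solves A(v) = 0
print(jacobian_fd(A, v_star))  # approximately diag(1, 1, 1)
```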
Example 2.
Consider the non-linear integral equation of the Hammerstein-type given by
A(v)(θ) = v(θ) − 5H(v)(θ),
where H is defined by
H(v)(θ) = θ ∫_0^1 β v³(β) dβ,
defined on X = Y = C[0, 1], the space of all continuous functions on the interval [0, 1], and let E = B[0, 1]. Then, the first Fréchet derivative is given by
A′(v)(ψ)(θ) = ψ(θ) − 15θ ∫_0^1 γ v²(γ) ψ(γ) dγ, for all v ∈ E.
We can observe that v* = v*(θ) = 0 is a solution of A(v) = 0. Then, by applying the conditions (A1)-(A4), we have L_0 = L_1 = L_2 = 7.5 and L_3 = 2. With respect to λ_1 = 0.5028, λ_2 = 0.0129, λ_3 = 0.0523, and λ_4 = 0.1333, we get r = 0.0667, R = 0.0523, and R_1 = 0.0667.
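As a quick check on this example, note that v(θ) = θ also satisfies v(θ) = 5θ ∫_0^1 β v³(β) dβ, since 5 ∫_0^1 β⁴ dβ = 1. The sketch below discretises the equation with a composite trapezoidal rule (our own choice, made only for illustration) and applies Newton's method; it recovers an approximation of this nontrivial solution.

```python
import numpy as np

n = 21
theta = np.linspace(0.0, 1.0, n)                 # nodes on [0, 1]
w = np.full(n, 1.0 / (n - 1))                    # trapezoidal weights
w[0] = w[-1] = 0.5 / (n - 1)

def A(v):
    # Discretised A(v)(theta_i) = v_i - 5 * theta_i * sum_j w_j beta_j v_j^3
    return v - 5.0 * theta * np.sum(w * theta * v**3)

def A_prime(v):
    # Discretised derivative: delta_ij - 15 * theta_i * w_j * beta_j * v_j^2
    return np.eye(n) - 15.0 * np.outer(theta, w * theta * v**2)

v = 1.2 * theta                                  # start near v(theta) = theta
for _ in range(20):
    v = v - np.linalg.solve(A_prime(v), A(v))

print(np.max(np.abs(v - theta)))                 # small discretisation error
```

The residual of the discrete system vanishes at the computed iterate, and the gap to v(θ) = θ is governed only by the quadrature error of the trapezoidal rule.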
In the next example, we compare the number of iterations and the convergence order of the methods (3), (2) and (4) with those of the following methods:
Noor-Waseem-type methods [10]: given for n = 0, 1, 2, … as
w_n = v_n − A′(v_n)^{−1}A(v_n),
v_{n+1} = v_n − 4G_n^{−1}A(v_n), (67)
where G_n = 3A′((2v_n + w_n)/3) + A′(w_n),
w_n = v_n − A′(v_n)^{−1}A(v_n),
z_n = v_n − 4G_n^{−1}A(v_n),
v_{n+1} = z_n − A′(w_n)^{−1}A(z_n) (68)
and
w_n = v_n − A′(v_n)^{−1}A(v_n),
z_n = v_n − 4G_n^{−1}A(v_n),
v_{n+1} = z_n − A′(z_n)^{−1}A(z_n). (69)
Newton-Simpson-type methods [11]: given for n = 0, 1, 2, … as
w_n = v_n − A′(v_n)^{−1}A(v_n),
v_{n+1} = v_n − 6G_n^{−1}A(v_n), (70)
where G_n = A′(v_n) + 4A′((v_n + w_n)/2) + A′(w_n),
w_n = v_n − A′(v_n)^{−1}A(v_n),
z_n = v_n − 6G_n^{−1}A(v_n),
v_{n+1} = z_n − A′(w_n)^{−1}A(z_n) (71)
and
w_n = v_n − A′(v_n)^{−1}A(v_n),
z_n = v_n − 6G_n^{−1}A(v_n),
v_{n+1} = z_n − A′(z_n)^{−1}A(z_n). (72)
Example 3.
Let X = Y = R². Consider the system of equations [12]
3t_1² t_2 + t_2² = 1,
t_1⁴ + t_1 t_2³ = 1.
Observe that a_1 = (1, 0.2), a_2 = (0.4, 1.3) and a_3 = (0.9, 0.3) are the (approximate) solutions of the above system of equations. The approximation to the solution a_3 using the methods (67)-(72), (3), (2) and (4), starting with v_0 = (2, −1), is given. The results are displayed in Table 1-Table 3.
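For illustration, the classical Jarratt step that underlies these methods can be applied to this system; the code below is a sketch (the starting point (1.1, 0.4) near a_3 and the step count are our own choices) and reproduces the limit (0.992780, 0.306440) reported in the tables.

```python
import numpy as np

def F(x):
    t1, t2 = x
    return np.array([3*t1**2*t2 + t2**2 - 1.0,
                     t1**4 + t1*t2**3 - 1.0])

def J(x):
    # Jacobian of F
    t1, t2 = x
    return np.array([[6*t1*t2, 3*t1**2 + 2*t2],
                     [4*t1**3 + t2**3, 3*t1*t2**2]])

def jarratt_step(x):
    # Classical Jarratt iteration:
    #   w  = x - (2/3) J(x)^{-1} F(x)
    #   x+ = x - (1/2) (3J(w) - J(x))^{-1} (3J(w) + J(x)) J(x)^{-1} F(x)
    s = np.linalg.solve(J(x), F(x))
    w = x - (2.0 / 3.0) * s
    return x - 0.5 * np.linalg.solve(3.0*J(w) - J(x), (3.0*J(w) + J(x)) @ s)

x = np.array([1.1, 0.4])        # start near the solution a3
for _ in range(10):
    x = jarratt_step(x)

print(x)                        # close to (0.992780, 0.306440)
```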

8. Basins of Attraction

For an iterative method, the set of all initial points from which the iterates converge to a solution of an equation is known as the basin of attraction [7,8]. Using the basins of attraction, we obtain the convergence regions of the methods (2), (3) and (4) when applied to the following examples:
Example 4.
α³ − β = 0, β³ − α = 0, with solutions { (−1, −1), (0, 0), (1, 1) }.
Example 5.
3α²β − β³ = 0, α³ − 3αβ² − 1 = 0, with solutions
{ (−1/2, √3/2), (−1/2, −√3/2), (1, 0) }.
Example 6.
α² + β² − 4 = 0, 3α² + 7β² − 16 = 0, with solutions
{ (√3, 1), (√3, −1), (−√3, 1), (−√3, −1) }.
Corresponding to the roots of each system of nonlinear equations, the basins of attraction are generated on the rectangular domain w = { (α, β) ∈ R² : −2 ≤ α ≤ 2, −2 ≤ β ≤ 2 } with an equidistant grid of 401 × 401 points. Each initial point (α_0, β_0) ∈ R² is assigned a color according to the root to which the corresponding iterative method converges, starting from (α_0, β_0). If the method either converges to infinity or does not converge, the point is marked black. A tolerance of 10^{−8} is used, with a maximum of 100 iterations.
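The procedure just described can be sketched as follows; as a stand-in for the methods (2)-(4) (whose substeps are given earlier in the paper) we use plain Newton iteration, and we shrink the grid to 41 × 41 to keep the run short. Both choices are assumptions made only for this illustration, applied to Example 4.

```python
import numpy as np

roots = np.array([[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]])  # roots of Example 4

def F(p):
    a, b = p
    return np.array([a**3 - b, b**3 - a])

def Jac(p):
    a, b = p
    return np.array([[3*a**2, -1.0], [-1.0, 3*b**2]])

def classify(p, max_iter=100, tol=1e-8):
    # Index of the root reached from p, or -1 ("black") on failure
    for _ in range(max_iter):
        try:
            p = p - np.linalg.solve(Jac(p), F(p))
        except np.linalg.LinAlgError:
            return -1
        d = np.linalg.norm(roots - p, axis=1)
        if d.min() < tol:
            return int(d.argmin())
    return -1

grid = np.linspace(-2.0, 2.0, 41)
basins = np.array([[classify(np.array([a, b])) for a in grid] for b in grid])
```

Rendering `basins` with a three-color map (black for −1) produces pictures of the kind shown in Figure 1-Figure 3, up to the choice of iterative method.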
Figure 1. Dynamical plane of the method (2) with Basins of attraction for Example 4(left), Example 5(middle) and Example 6(right).
Figure 2. Dynamical plane of the method (3) with Basins of attraction for Example 4(left), Example 5(middle) and Example 6(right).
Figure 3. Dynamical plane of the method (4) with Basins of attraction for Example 4(left), Example 5(middle) and Example 6(right).

9. Conclusion

We studied a Jarratt-type method of convergence order three and its two extensions with convergence orders six and five, respectively. As mentioned in the introduction, we used assumptions on A′ and A″ only, so the methods (2), (3) and (4) can be used to solve problems that could not be handled by the earlier convergence analyses based on Taylor expansion. We discussed the limitations of our approach and developed new ways to overcome these limitations in Section 5. Finally, we compared the methods with other similar methods using examples, and the convergence regions of the methods (2), (3) and (4) were obtained using the basins of attraction approach. In future research, our ideas shall be applied to other methods in order to obtain similar benefits analogously [1,2,3,4,5,6,7,8,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24].

References

  1. Argyros, I.K. The theory and applications of iteration methods. CRC Press, Engineering Series, Taylor and Francis Group 2022, 2. [Google Scholar]
  2. Argyros, I.K.; George, S. Extended Convergence Of Jarratt Type Methods. Appl. Math. E-Notes 2021, 21, 89–96. [Google Scholar]
  3. Bartle, R.G. Newton's method in Banach spaces. Proceedings of the American Mathematical Society 1955, 6, 827–831. [Google Scholar]
  4. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J. R. On developing fourth-order optimal families of methods for multiple roots and their dynamics. Applied Mathematics and Computation 2015, 265, 520–532. [Google Scholar] [CrossRef]
  5. Ben-Israel, A. A Newton-Raphson method for the solution of systems of equations. Journal of Mathematical analysis and applications 1966, 15, 243–252. [Google Scholar] [CrossRef]
  6. Cárdenas, E.; Castro, R.; Sierra, W. A Newton-type midpoint method with high efficiency index. Journal of Mathematical Analysis and Applications 2020, 491, 124381. [Google Scholar] [CrossRef]
  7. Chun, C.; Lee, M.Y.B.; Neta, B.; Džunić, J. On optimal fourth-order iterative methods free from second derivative and their dynamics. Applied mathematics and computation 2012, 218, 6427–6438. [Google Scholar] [CrossRef]
  8. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Applied Mathematics and Computation 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
  9. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algor. 2010, 55, 87–99. [Google Scholar] [CrossRef]
  10. George, S.; Sadananda, R.; Jidesh, P.; Argyros, I.K. On the Order of Convergence of Noor-Waseem Method. Mathematics 2022, 10, 4544. [Google Scholar] [CrossRef]
  11. George, S.; Kunnarath, A.; Sadananda, R.; Jidesh, P.; Argyros, I.K. Order of convergence, extensions of Newton-Simpson method for solving nonlinear equations and their dynamics. Fractal Fract 2023, 163. [Google Scholar] [CrossRef]
  12. Iliev, A.; Iliev, I. Numerical method with order t for solving system nonlinear equations. Collection of scientific works 2000, 30, 3–4. [Google Scholar]
  13. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Mathematics of Computation 1966, 20, 434–437. [Google Scholar] [CrossRef]
  14. Magreñán, A.A. Different anomalies in a Jarratt family of iterative root finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
  15. Ortega, J.M.; Rheinboldt, W.C. Iterative solution of nonlinear equations in several variables. Society for Industrial and Applied Mathematics 2000, 14. [Google Scholar]
  16. Ostrowski, A.M. Solution of Equations and Systems of Equations: Pure and Applied Mathematics, A Series of Monographs and Textbooks. Elsevier 2016, 9. [Google Scholar]
  17. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Applied Mathematics and Computation 2014, 226, 635–660. [Google Scholar] [CrossRef]
  18. Ren, H.; Wu, Q.; Bi, W. New variants of Jarratt's method with sixth-order convergence. Numerical Algorithms 2009, 52, 585–603. [Google Scholar] [CrossRef]
  19. Saheya, B.; Chen, G.Q.; Sui, Y.K.; Wu, C.Y. A new Newton-like method for solving nonlinear equations. SpringerPlus 2016, 5, 1–3. [Google Scholar] [CrossRef]
  20. Shakhno, S. M.; Iakymchuk, R. P.; Yarmola, H. P. Convergence analysis of a two step method for the nonlinear squares problem with decomposition of operator. J. Numer. Appl. Math. 2018, 128, 82–95. [Google Scholar]
  21. Shakhno, S.M.; Gnatyshyn, O.P. On an iterative algorithm of order 1.839… for solving nonlinear operator equations. Appl. Math. Comput. 2005, 161, 253–264. [Google Scholar]
  22. Traub, J.F. Iterative methods for the solution of equations. American Mathematical Society 1982, 312. [Google Scholar] [CrossRef]
  23. Weerakoon, S.; Fernando, T.G.I. A variant of Newton's method with accelerated third-order convergence. Applied Mathematics Letters 2000, 13, 87–93. [Google Scholar] [CrossRef]
  24. Werner, W. Über ein Verfahren der Ordnung 1 + √2 zur Nullstellenbestimmung. Numerische Mathematik 1979, 32, 333–342. [Google Scholar] [CrossRef]
Table 1. Methods of order 3.
k | Noor-Waseem method (67): x_k = (t_1^k, t_2^k) | ‖ε_{k+1}^x‖/‖ε_k^x‖³ | Newton-Simpson method (70): x_k = (t_1^k, t_2^k) | ‖ε_{k+1}^x‖/‖ε_k^x‖³ | Method (3): x_k = (t_1^k, t_2^k) | ‖ε_{k+1}^x‖/‖ε_k^x‖³
0 (2.000000,-1.000000) (2.000000,-1.000000) (2.000000,-1.000000)
1 (1.264067,-0.166747) 0.052791 (1.263927,-0.166887) 0.052792 (1.151437,0.051449) 0.040459
2 (1.019624,0.265386) 0.259247 (1.019452,0.265424) 0.259156 (0.994771,0.304342) 0.536597
3 (0.992854,0.306346) 1.578713 (0.992853,0.306348) 1.580144 (0.992780,0.306440) 1.951273
4 (0.992780,0.306440) 1.977941 (0.992780,0.306440) 1.977957 (0.992780,0.306440) 1.979028
5 (0.992780,0.306440) 1.979028 (0.992780,0.306440) 1.979028 (0.992780,0.306440) 1.979028
Table 2. Methods of order 5.
k | Noor-Waseem method (68): x_k = (t_1^k, t_2^k) | ‖ε_{k+1}^x‖/‖ε_k^x‖⁵ | Newton-Simpson method (71): x_k = (t_1^k, t_2^k) | ‖ε_{k+1}^x‖/‖ε_k^x‖⁵ | Method (4): x_k = (t_1^k, t_2^k) | ‖ε_{k+1}^x‖/‖ε_k^x‖⁵
0 (2.000000,-1.000000) (2.000000,-1.000000) (2.000000,-1.000000)
1 (1.127204,0.054887) 0.004363 (1.127146,0.054883) 0.004363 (1.144528,0.069067) 0.004375
2 (0.993331,0.305731) 0.501551 (0.993328,0.305734) 0.501670 (0.994305,0.304922) 0.495553
3 (0.992780,0.306440) 3.889725 (0.992780,0.306440) 3.889832 (0.992780,0.306440) 3.847630
4 (0.992780,0.306440) 3.916553 (0.992780,0.306440) 3.916553 (0.992780,0.306440) 3.916553
Table 3. Methods of order 6.
k | Noor-Waseem method (69): x_k = (t_1^k, t_2^k) | ‖ε_{k+1}^x‖/‖ε_k^x‖⁶ | Newton-Simpson method (72): x_k = (t_1^k, t_2^k) | ‖ε_{k+1}^x‖/‖ε_k^x‖⁶ | Method (2): x_k = (t_1^k, t_2^k) | ‖ε_{k+1}^x‖/‖ε_k^x‖⁶
0 (2.000000,-1.000000) (2.000000,-1.000000) (2.000000,-1.000000)
1 (1.067979,0.174843) 0.001211 (1.067906,0.174885) 0.001211 (1.027012,0.256566) 0.001057
2 (0.992784,0.306436) 1.383068 (0.992784,0.306436) 1.384152 (0.992780,0.306440) 3.122403
3 (0.992780,0.306440) 5.509412 (0.992780,0.306440) 5.509414 (0.992780,0.306440) 5.509727
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.