Preprint (Article). This version is not peer-reviewed.

A New Perspective on the Convergence of Mean-Based Methods for Nonlinear Equations

A peer-reviewed article of this preprint also exists.

Submitted: 09 September 2025
Posted: 09 September 2025


Abstract
Many problems in science, engineering, and economics require solving nonlinear equations, often arising from attempts to model natural systems and predict their behavior. In this context, iterative methods provide an effective approach to approximate the roots of nonlinear functions. This work introduces five new parametric families of multipoint iterative methods specifically designed for solving nonlinear equations. Each family is built upon a two-step scheme: the first step applies the classical Newton method, while the second incorporates a convex mean, a weight function, and a frozen derivative (i.e., the same derivative from the previous step). The careful design of the weight function was essential to ensure fourth-order convergence while allowing arbitrary parameter values. The proposed methods are theoretically analyzed and dynamically characterized using tools such as stability surfaces, parameter planes, and dynamical planes on the Riemann sphere. These analyses reveal regions of stability and divergence, helping identify suitable parameter values that guarantee convergence to the root. Numerical experiments confirm the robustness and efficiency of the methods, often surpassing classical approaches in terms of convergence speed and accuracy. Overall, the results demonstrate that convex-mean-based parametric methods offer a flexible and stable framework for the reliable numerical solution of nonlinear equations.

1. Introduction

In mathematics and engineering, many real-world and physical phenomena are modeled using nonlinear equations or systems. Solving such problems has led to the development of numerous numerical methods, which serve as fundamental tools for approximating solutions that cannot be found exactly or analytically.
Among these, iterative methods play a crucial role in finding the roots of nonlinear equations. Since a variety of iterative strategies exist, it becomes essential to evaluate their convergence order, stability, and computational efficiency. These aspects allow us to compare methods and choose the most appropriate one for a given problem.
Iterative methods are typically classified based on whether they are single-step or multi-step, with or without memory, and whether or not they require derivatives. In this context, we present new multipoint iterative methods aimed at approximating the zeros of nonlinear functions. These methods are inspired by the work in [1], which investigates and enhances Newton-type methods by incorporating convex combinations of classical means, achieving third-order convergence.
With the aim of improving this order of convergence, we introduce new multipoint iterative schemes based on composition and on the use of weight functions. These functions combine data from multiple evaluations of the function and its derivatives throughout previous iterations to improve both accuracy and efficiency. The proposed methods achieve fourth-order convergence and satisfy the optimality condition postulated by the Kung-Traub conjecture [2], which states:
The order of convergence p of an iterative method without memory cannot exceed 2^{d-1}, where d is the number of functional evaluations per iteration.
A method that reaches this bound is called optimal. This and other criteria described below will allow us to define which method is more effective than another.
One of the most widely used and foundational approaches in nonlinear root-finding is Newton’s method. It is defined as:
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots,
provided f'(x_n) \ne 0. Under appropriate smoothness conditions and for simple roots, the method exhibits quadratic convergence, meaning:
|x_{n+1} - \alpha| \le C\,|x_n - \alpha|^2,
for some constant C > 0, where \alpha is a root of f(x) = 0.
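As a minimal runnable sketch (the test equation f(x) = x^2 - 2 and the starting point are our own illustrative choices, not taken from this paper), Newton's iteration reads:

```python
# Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n); quadratic convergence
# is observed for a simple root. Test problem and seed are illustrative.
def newton(f, df, x0, tol=1e-14, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.5)  # approximates sqrt(2)
```

A handful of iterations suffices: each step roughly doubles the number of correct digits, as the error bound above predicts.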
In 2000, Weerakoon and Fernando [3] proposed a third-order variant of Newton’s method. This method replaces the rectangular approximation in the integral form of Newton’s method with a trapezoidal approximation, reducing truncation error and improving convergence. Their method, known as the trapezoidal or arithmetic mean method, is defined as:
y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad x_{n+1} = x_n - \frac{2 f(x_n)}{f'(x_n) + f'(y_n)}, \quad n = 0, 1, 2, \ldots
This method laid the foundation for subsequent generalizations using other types of means. Researchers such as Chicharro et al. [4] and Cordero et al. [1] expanded this idea by incorporating various mathematical means to construct families of third-order methods:
x_{n+1} = x_n - \frac{f(x_n)}{M_m\left(f'(x_n), f'(y_n)\right)}, \quad n = 0, 1, 2, \ldots,
where y_n denotes the Newton step and M_m(x, y) represents the chosen mean applied to the values x and y.

1.1. Types of Means

Below are the different types of convex averages used in the literature for the design and analysis of various iterative methods. These concepts constitute a fundamental reference and will serve as a methodological basis for the development of our own iterative procedures.
Arithmetic Mean M_A: The arithmetic mean of two real numbers x and y is given by:
M_A(x, y) = \frac{x + y}{2}.
This mean appears in the trapezoidal scheme (2).
Harmonic Mean M_H: The harmonic mean of two positive real numbers x and y is defined as:
M_H(x, y) = \frac{2xy}{x + y}.
This mean is particularly sensitive to small values and is known for its use in rates and resistances. In the context of iterative methods, its reciprocal nature often yields improved stability under specific conditions. The following scheme arises by replacing in (2) the arithmetic mean by this mean, as done in [5]:
x_{n+1} = x_n - \frac{f(x_n)\left(f'(x_n) + f'(y_n)\right)}{2 f'(x_n)\, f'(y_n)}, \quad n = 0, 1, 2, \ldots
Contraharmonic Mean M_C: The contraharmonic mean is given by:
M_C(x, y) = \frac{x^2 + y^2}{x + y},
and is always greater than or equal to the arithmetic mean. It accentuates larger values, making it suitable when higher magnitudes dominate the behavior of the function. When this mean is incorporated into iterative schemes, the obtained method presented in [1,6] is:
x_{n+1} = x_n - \frac{\left(f'(x_n) + f'(y_n)\right) f(x_n)}{f'(x_n)^2 + f'(y_n)^2}, \quad n = 0, 1, 2, \ldots
Different authors have proved that all these schemes have order of convergence three.
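The three schemes differ only in the mean applied to the derivatives f'(x_n) and f'(y_n). A compact sketch (the cubic test equation and the starting point are our own illustrative choices):

```python
# One step of the generic mean-based scheme
# x_{n+1} = x_n - f(x_n) / M(f'(x_n), f'(y_n)), where y_n is the Newton step.
def mean_step(f, df, x, mean):
    y = x - f(x) / df(x)
    return x - f(x) / mean(df(x), df(y))

M_A = lambda u, v: (u + v) / 2                 # arithmetic (trapezoidal method)
M_H = lambda u, v: 2 * u * v / (u + v)         # harmonic
M_C = lambda u, v: (u * u + v * v) / (u + v)   # contraharmonic

f  = lambda x: x**3 + 4 * x**2 - 10            # illustrative test equation
df = lambda x: 3 * x**2 + 8 * x

approx = {}
for name, mean in (("arithmetic", M_A), ("harmonic", M_H), ("contraharmonic", M_C)):
    x = 1.0
    for _ in range(8):
        x = mean_step(f, df, x, mean)
    approx[name] = x
```

All three runs settle on the same simple root, consistent with the cubic convergence proved for these schemes.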

1.2. Some Characteristics of the Iterative Methods

To analyze an iterative method in greater depth, it is essential to understand certain concepts related to the mathematical notation used, the order of convergence, the efficiency index, the computational order of convergence, as well as the fundamental theorems and conjectures that support the correct formulation of the proposed new multipoint methods. Each of these aspects is detailed below.
Order of convergence
The speed at which a sequence \{x_n\} approaches a solution \alpha is quantified by the order of convergence p. Formally, the sequence \{x_n\} is said to converge to \alpha with order p \ge 1 and asymptotic error constant C > 0 if:
\lim_{n \to \infty} \frac{|x_{n+1} - \alpha|}{|x_n - \alpha|^p} = C.
This limit establishes an asymptotic relation that describes how rapidly the errors e_n = x_n - \alpha decay as the number of iterations increases. Specifically:
  • If p = 1 and 0 < C < 1 , the convergence is linear,
  • If p = 2 , the convergence is quadratic,
  • If p = 3 , the convergence is cubic,
  • For p > 3 , the method has higher-order convergence.
In practice, a high value of p implies faster convergence toward the root of f ( x ) = 0 , assuming the constant C remains reasonably small. However, higher-order methods often require more function or derivative evaluations per iteration, increasing computational cost.
The error equation of an iterative method can be expressed as:
e_{n+1} = C e_n^p + O\left(e_n^{p+1}\right), \quad C \in \mathbb{R},
where C is the asymptotic error constant and O\left(e_n^{p+1}\right) denotes higher-order terms that become negligible as n increases. This expression is central to the local convergence analysis of iterative schemes.
Numerical estimation of the order of convergence
Since the exact root α is typically unknown, practical estimation of the convergence order relies on approximate values of the iterates. Two widely used techniques are:
  • The computational order of convergence (COC) defined in [3],
  • The approximate computational order of convergence (ACOC) defined in [7].
These tools are commonly used in numerical experimentation to assess the performance of iterative schemes.
The classical estimate (COC) [3], assuming knowledge of the root α , is given by:
p_{\mathrm{COC}} = \frac{\ln\left|\dfrac{x_{n+1} - \alpha}{x_n - \alpha}\right|}{\ln\left|\dfrac{x_n - \alpha}{x_{n-1} - \alpha}\right|}, \quad n \ge 2.
When α is unknown, the ACOC [7] formula provides a root-free estimate using only the iterates:
p_{\mathrm{ACOC}} = \frac{\ln\left|\dfrac{x_{n+1} - x_n}{x_n - x_{n-1}}\right|}{\ln\left|\dfrac{x_n - x_{n-1}}{x_{n-1} - x_{n-2}}\right|}, \quad n \ge 3.
This tool allows us to approximate the theoretical order of convergence p without requiring knowledge of the exact solution. Its reliability increases as the iterates approach the root, provided the errors remain sufficiently small to avoid numerical cancellation or round-off issues.
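The ACOC needs only four consecutive iterates. As an illustrative check (the generating sequence, Newton's method on x^2 - 2, is our own choice), the estimate should approach the theoretical order p = 2:

```python
from math import log

# Approximate computational order of convergence from the last four iterates.
def acoc(xs):
    d = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return log(d[-1] / d[-2]) / log(d[-2] / d[-3])

xs = [1.5]                         # Newton iterates for f(x) = x^2 - 2
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))

p_est = acoc(xs)                   # close to the theoretical order p = 2
```

As noted above, the estimate is reliable only while the differences stay above the round-off level; iterating past machine precision makes the quotients meaningless.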
Efficiency Index of an Iterative Method
To assess the computational efficiency of an iterative method, one must consider its convergence order and the number of function evaluations per iteration. Ostrowski (1966) introduced the efficiency index I, defined as:
I = p^{1/d},
where p is the order of convergence and d is the total number of functional evaluations per iteration (including derivatives, if applicable). This index provides a comparative efficiency measure across methods with varying orders and computational demands.
More recently, the concept has been extended to the computational efficiency index (CEI) [8], which includes not only functional evaluations but also the products/quotients of the iterative method.
\mathrm{CEI} = p^{1/(d + op)},
where op refers to the number of products and quotients performed in each iteration.
These indicators allow different iterative methods to be compared regarding convergence speed and total computational cost.
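For instance (the operation counts below are rough illustrative tallies of our own, not values from this paper), the indices can be compared as follows:

```python
# Efficiency index I = p^(1/d) and computational efficiency index
# CEI = p^(1/(d + op)): p = order, d = functional evaluations,
# op = products/quotients per iteration.
def eff_index(p, d):
    return p ** (1.0 / d)

def comp_eff_index(p, d, op):
    return p ** (1.0 / (d + op))

I_newton = eff_index(2, 2)           # Newton: p = 2, d = 2 (f and f')
I_trap   = eff_index(3, 3)           # trapezoidal (arithmetic-mean): p = 3, d = 3
I_opt4   = eff_index(4, 3)           # an optimal fourth-order method: p = 4, d = 3
CEI_newton = comp_eff_index(2, 2, 1) # counting the single quotient f/f'
```

An optimal fourth-order method with three evaluations thus dominates both Newton and the third-order mean-based schemes under this measure.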
In this manuscript, Section 2 presents some known fourth-order iterative methods used in the numerical section for comparison. In Section 3, the new schemes are presented and their order of convergence is proven. Section 4 deals with the dynamical analysis of one of the proposed families of iterative schemes. The best method in terms of stability is compared in the numerical section with known schemes, where two academic examples and two applied problems confirm the theoretical results. The manuscript closes with some conclusions and the references.

2. Some Fourth-Order Methods in the Literature

In recent decades, there has been a sustained need to develop iterative methods with high orders of convergence that do not require additional functional evaluations or derivatives. Since Traub's initial contributions [9], with his frozen-derivative method, various approaches have been proposed to address this challenge. The following iterative expression defines Traub's method:
y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad x_{n+1} = y_n - \frac{f(y_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots
This scheme achieves cubic convergence without the need to evaluate the second derivative. Similarly, Jarratt [10] introduced a two-step iterative scheme:
y_n = x_n - \frac{2}{3}\frac{f(x_n)}{f'(x_n)}, \quad x_{n+1} = x_n - \frac{1}{2}\,\frac{3f'(y_n) + f'(x_n)}{3f'(y_n) - f'(x_n)}\,\frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots,
This also avoids evaluating second derivatives and achieves fourth-order convergence.
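A direct sketch of Jarratt's scheme (the cubic test equation and the seed are our own illustrative choices):

```python
# One Jarratt step: y_n = x_n - (2/3) f(x_n)/f'(x_n), then
# x_{n+1} = x_n - (1/2) (3f'(y_n)+f'(x_n)) / (3f'(y_n)-f'(x_n)) * f(x_n)/f'(x_n).
def jarratt_step(f, df, x):
    fx, dfx = f(x), df(x)
    y = x - (2.0 / 3.0) * fx / dfx
    return x - 0.5 * (3 * df(y) + dfx) / (3 * df(y) - dfx) * fx / dfx

f  = lambda x: x**3 + 4 * x**2 - 10
df = lambda x: 3 * x**2 + 8 * x

x = 1.0
for _ in range(5):
    x = jarratt_step(f, df, x)   # fourth order: a few steps reach machine precision
```

Only three functional evaluations per step (f, f' twice) are used, so the scheme is optimal in the Kung-Traub sense.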
Based on these ideas, many multipoint methods have been developed to achieve even higher convergence orders. The specialized literature, including the works of Chun [11], Ostrowski [12], King [13], among others, offers a wide range of fourth-order schemes based on the adjustment of parameters and weighting functions. These schemes will serve as benchmarks for comparison with the methods proposed in this work.
Among them, it is worth highlighting the family of fourth-order methods introduced by Artidiello [14]. It is based on a weighting function H ( μ ) that generalizes several known schemes. Its formulation follows a two-step scheme:
y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \quad x_{n+1} = y_n - H(\mu_n)\,\frac{f(y_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots,
where \mu_n = \frac{f(y_n)}{f(x_n)}.
Theorem 1 
([14]). Let f : I \subset \mathbb{R} \to \mathbb{R} be a sufficiently differentiable function on an open interval I containing a simple root \alpha of f(x) = 0, and H : \mathbb{R} \to \mathbb{R} any sufficiently differentiable function satisfying:
H(0) = 1, \quad H'(0) = 2, \quad |H''(0)| < \infty.
Then, for an initial estimate x_0 close enough to \alpha, method (7) converges with order at least 4 and its error equation is:
e_{n+1} = \left(\left(5 - \frac{H''(0)}{2}\right)c_2^3 - c_2 c_3\right) e_n^4 + O\left(e_n^5\right),
where c_j = \frac{f^{(j)}(\alpha)}{j!\, f'(\alpha)}, j = 2, 3, \ldots, and e_n = x_n - \alpha.
This theoretical framework not only unifies classical methods (such as those of Ostrowski or Chun) through specific choices of H ( μ ) , but also provides a rigorous basis for designing new schemes. Table 1 summarizes several iterative methods that achieve fourth order thanks to the correct consideration of weighting functions.
Each of these weight functions satisfies the conditions H(0) = 1, H'(0) = 2, |H''(0)| < \infty, so the corresponding scheme achieves fourth-order convergence.
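As an example, H(\mu) = 1 + 2\mu is one admissible weight (H(0) = 1, H'(0) = 2, H''(0) = 0 finite); plugging it into the two-step scheme yields a concrete fourth-order method. The weight choice, test function, and seed below are our own illustrative assumptions:

```python
# Weight-function scheme: y_n = x_n - f(x_n)/f'(x_n),
# x_{n+1} = y_n - H(mu_n) f(y_n)/f'(x_n), with mu_n = f(y_n)/f(x_n).
def weighted_step(f, df, x, H):
    fx = f(x)
    y  = x - fx / df(x)
    mu = f(y) / fx
    return y - H(mu) * f(y) / df(x)

H  = lambda mu: 1 + 2 * mu          # satisfies H(0)=1, H'(0)=2, |H''(0)| finite
f  = lambda x: x**3 + 4 * x**2 - 10
df = lambda x: 3 * x**2 + 8 * x

x = 1.0
for _ in range(6):
    if abs(f(x)) < 1e-13:           # stop before dividing by a vanishing f(x)
        break
    x = weighted_step(f, df, x, H)
```

Other choices, such as the rational weight of Ostrowski's method, satisfy the same conditions and can be swapped in without changing the structure.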

3. General Framework of Mean-Based Iterative Methods

As mentioned in Subsection 1.1, the mean-based iterative methods described there exhibit cubic convergence. This motivates the construction of a new family of iterative methods in which weight functions play a key role in increasing the order of convergence from three to four, without adding new functional evaluations.
We consider the following multipoint method:
x_{n+1} = x_n - H(\mu_n)\,\frac{M_T\left(f(x_n), f(y_n)\right)}{f'(x_n)},
where y_n represents the Newton step, M_T is an arbitrary mean applied to f(x_n) and f(y_n), and H(\mu) is a weight function, with \mu = \frac{f(y)}{f(x)}.
By choosing different symmetric means M_T, such as the arithmetic means (M_A and M_{Ay}), the harmonic mean (M_{Hy}), and the contraharmonic means (M_C and M_{Cy}), we obtain several iterative schemes with different correction steps, as summarized in Table 2.
Theorem 2 
Let f : I \subset \mathbb{R} \to \mathbb{R} be a sufficiently differentiable function on an open interval I containing a simple root \alpha \in I, that is, f(\alpha) = 0 and f'(\alpha) \ne 0. Consider the multipoint iterative method defined by a Newton step:
y_n = x_n - \frac{f(x_n)}{f'(x_n)},
followed by a corrector step
x_{n+1} = x_n - H(\mu_n)\,\frac{M_T\left(f(x_n), f(y_n)\right)}{f'(x_n)}, \quad n = 0, 1, \ldots,
where:
  • M_T(f(x_n), f(y_n)) is a symmetric bivariate function representing a mean (M_A, M_{Ay}, M_{Hy}, M_C or M_{Cy}),
  • \mu = \frac{f(y)}{f(x)} is the variable of the weight function H,
  • H(\mu) is a weight function with a Taylor expansion around \mu = 0:
    H(\mu) = H(0) + H'(0)\mu + \frac{1}{2}H''(0)\mu^2 + \frac{1}{6}H'''(0)\mu^3 + O\left(\mu^4\right).
If the coefficients H(0), H'(0), H''(0), H'''(0) satisfy the specific conditions given in Table 3, then the method achieves fourth-order convergence.
All methods achieve fourth-order convergence, with the generalized error equation:
e_{n+1} = \left(\gamma_1 c_2^3 + \gamma_2 c_2 c_3\right) e_n^4 + O\left(e_n^5\right),
where
c_j = \frac{1}{j!}\,\frac{f^{(j)}(\alpha)}{f'(\alpha)}, \quad j = 2, 3, \ldots,
and e_n = x_n - \alpha represents the iteration error. The constants \gamma_1 and \gamma_2 depend on the chosen mean and on the coefficients of the weight function H(\mu).
Proof. 
Let e_n = x_n - \alpha denote the error at the n-th iteration. Since f is sufficiently differentiable and \alpha is a simple root, we can use Taylor expansions of f(x_n) and f'(x_n) around \alpha. In terms of e_n, we have:
f(x_n) = f(\alpha) + f'(\alpha)e_n + \frac{f''(\alpha)}{2!}e_n^2 + \frac{f'''(\alpha)}{3!}e_n^3 + \frac{f^{(4)}(\alpha)}{4!}e_n^4 + O\left(e_n^5\right).
Since f ( α ) = 0 , it follows that
f(x_n) = f'(\alpha)e_n + \frac{f''(\alpha)}{2!}e_n^2 + \frac{f'''(\alpha)}{3!}e_n^3 + \frac{f^{(4)}(\alpha)}{4!}e_n^4 + O\left(e_n^5\right).
We simplify the calculations using the expression of the constants (11):
f(x_n) = f'(\alpha)\left[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4\right] + O\left(e_n^5\right).
In a similar way,
f'(x_n) = f'(\alpha) + f''(\alpha)e_n + \frac{f'''(\alpha)}{2}e_n^2 + \frac{f^{(4)}(\alpha)}{6}e_n^3 + O\left(e_n^4\right) = f'(\alpha)\left[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3\right] + O\left(e_n^4\right).
Therefore,
\frac{f(x_n)}{f'(x_n)} = e_n - c_2 e_n^2 + \left(2c_2^2 - 2c_3\right)e_n^3 + \left(-4c_2^3 + 7c_2c_3 - 3c_4\right)e_n^4 + O\left(e_n^5\right),
y_n = x_n - \left[e_n - c_2 e_n^2 + \left(2c_2^2 - 2c_3\right)e_n^3 + \left(-4c_2^3 + 7c_2c_3 - 3c_4\right)e_n^4 + O\left(e_n^5\right)\right] = e_n + \alpha - e_n + c_2 e_n^2 - \left(2c_2^2 - 2c_3\right)e_n^3 - \left(-4c_2^3 + 7c_2c_3 - 3c_4\right)e_n^4 + O\left(e_n^5\right).
So,
y_n - \alpha = c_2 e_n^2 - \left(2c_2^2 - 2c_3\right)e_n^3 + \left(4c_2^3 - 7c_2c_3 + 3c_4\right)e_n^4 + O\left(e_n^5\right).
Hence, the error in the approximation of y n becomes:
e_n^y = c_2 e_n^2 - \left(2c_2^2 - 2c_3\right)e_n^3 + \left(4c_2^3 - 7c_2c_3 + 3c_4\right)e_n^4 + O\left(e_n^5\right).
In a similar way, we have:
f(y_n) = f'(\alpha)e_n^y + \frac{f''(\alpha)}{2!}\left(e_n^y\right)^2 + \frac{f'''(\alpha)}{3!}\left(e_n^y\right)^3 + \frac{f^{(4)}(\alpha)}{4!}\left(e_n^y\right)^4 + O\left(\left(e_n^y\right)^5\right),
we rewrite the expansion in terms of normalized coefficients (11):
f(y_n) = f'(\alpha)\left[e_n^y + c_2\left(e_n^y\right)^2 + c_3\left(e_n^y\right)^3 + c_4\left(e_n^y\right)^4\right] + O\left(\left(e_n^y\right)^5\right).
In terms of e n , we obtain:
f(y_n) = f'(\alpha)\left[c_2 e_n^2 + \left(2c_3 - 2c_2^2\right)e_n^3 + \left(5c_2^3 - 7c_2c_3 + 3c_4\right)e_n^4\right] + O\left(e_n^5\right).
Therefore,
f(x_n) + f(y_n) = f'(\alpha)\left[e_n + 2c_2 e_n^2 + \left(3c_3 - 2c_2^2\right)e_n^3 + \left(5c_2^3 - 7c_2c_3 + 4c_4\right)e_n^4\right] + O\left(e_n^5\right).
We calculate, by direct division, \mu_n = \frac{f(y_n)}{f(x_n)}:
\mu_n = \frac{f(y_n)}{f(x_n)} = c_2 e_n + \left(2c_3 - 3c_2^2\right)e_n^2 + \left(8c_2^3 - 10c_2c_3 + 3c_4\right)e_n^3 + O\left(e_n^4\right).
Since \mu_n = \frac{f(y_n)}{f(x_n)} \to 0 as n \to \infty, we expand the weight function H(\mu) in a Taylor series around \mu = 0:
H(\mu) = H(0) + H'(0)\mu + \frac{1}{2}H''(0)\mu^2 + \frac{1}{6}H'''(0)\mu^3 + O\left(\mu^4\right).
Substituting the expression of \mu_n in terms of e_n, we obtain:
H(\mu_n) = H(0) + H'(0)c_2 e_n + \left[\frac{1}{2}c_2^2\left(H''(0) - 6H'(0)\right) + 2c_3 H'(0)\right]e_n^2 + \left[c_2^3\left(8H'(0) - 3H''(0) + \frac{H'''(0)}{6}\right) + 2c_3c_2\left(H''(0) - 5H'(0)\right) + 3c_4 H'(0)\right]e_n^3 + O\left(e_n^4\right).
Now, we detail the expansion of M T ( f ( x n ) , f ( y n ) ) using the arithmetic mean M A of Table 2. For the rest of the means, all calculations are analogous.
We return to (14) and (13), to substitute them into the expression:
\frac{f(x_n) + f(y_n)}{2f'(x_n)} = \frac{e_n}{2} - c_2^2 e_n^3 + \left(\frac{9c_2^3}{2} - \frac{7c_2c_3}{2}\right)e_n^4 + O\left(e_n^5\right),
multiplied by (15):
H(\mu_n)\,\frac{f(x_n) + f(y_n)}{2f'(x_n)} = \frac{H(0)}{2}e_n + \frac{1}{2}c_2 H'(0)e_n^2 + \left[\frac{1}{4}c_2^2\left(H''(0) - 6H'(0) - 4H(0)\right) + c_3 H'(0)\right]e_n^3 + \frac{1}{12}\left[c_2^3\left(54H(0) + 36H'(0) - 18H''(0) + H'''(0)\right) - 6c_3c_2\left(7H(0) + 10H'(0) - 2H''(0)\right) + 18c_4 H'(0)\right]e_n^4 + O\left(e_n^5\right).
Thus, equation M_A of Table 2 expands to:
x_{n+1} = e_n + \alpha - \frac{H(0)}{2}e_n - \frac{1}{2}c_2 H'(0)e_n^2 - \left[\frac{1}{4}c_2^2\left(H''(0) - 6H'(0) - 4H(0)\right) + c_3 H'(0)\right]e_n^3 - \frac{1}{12}\left[c_2^3\left(54H(0) + 36H'(0) - 18H''(0) + H'''(0)\right) - 6c_3c_2\left(7H(0) + 10H'(0) - 2H''(0)\right) + 18c_4 H'(0)\right]e_n^4 + O\left(e_n^5\right),
that is,
x_{n+1} - \alpha = \left(1 - \frac{H(0)}{2}\right)e_n - \frac{1}{2}c_2 H'(0)e_n^2 + \left[c_2^2\left(H(0) + \frac{3}{2}H'(0) - \frac{1}{4}H''(0)\right) - c_3 H'(0)\right]e_n^3 - \frac{1}{12}\left[c_2^3\left(54H(0) + 36H'(0) - 18H''(0) + H'''(0)\right) - 6c_3c_2\left(7H(0) + 10H'(0) - 2H''(0)\right) + 18c_4 H'(0)\right]e_n^4 + O\left(e_n^5\right).
By solving the system obtained from eliminating the first-, second-, and third-order error terms, ensuring that:
H(0) = 2, \quad H'(0) = 0, \quad H''(0) = 8, \quad \left|H'''(0)\right| < \infty,
the error of the method based on the arithmetic mean M A becomes:
e_{n+1} = \frac{1}{12}\left[\left(36 - H'''(0)\right)c_2^3 - 12c_2c_3\right]e_n^4 + O\left(e_n^5\right).
This finishes the proof for the case of the arithmetic mean. The order of convergence of the remaining methods is obtained in a similar way, replacing the mean function M T and using the corresponding values of the coefficients of H ( μ ) presented in Table 3. Proceeding in this manner, the methods indicated in Table 2 achieve a fourth-order convergence. □
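The conditions obtained in the proof admit, for instance, the polynomial weight H(\mu) = 2 + 4\mu^2 + \beta\mu^3 (so H(0) = 2, H'(0) = 0, H''(0) = 8, and H'''(0) = 6\beta remains free). A runnable sketch of the arithmetic-mean scheme under this choice (the weight, the test equation, the seed, and the value of \beta are our own illustrative assumptions):

```python
# M_A scheme: x_{n+1} = x_n - H(mu_n) (f(x_n) + f(y_n)) / (2 f'(x_n)),
# with y_n the Newton step and mu_n = f(y_n)/f(x_n).
def ma_step(f, df, x, beta):
    fx = f(x)
    y  = x - fx / df(x)
    fy = f(y)
    mu = fy / fx
    H  = 2 + 4 * mu**2 + beta * mu**3   # H(0)=2, H'(0)=0, H''(0)=8
    return x - H * (fx + fy) / (2 * df(x))

f  = lambda x: x**3 + 4 * x**2 - 10
df = lambda x: 3 * x**2 + 8 * x

x = 1.0
for _ in range(6):
    if abs(f(x)) < 1e-13:               # stop before dividing by a vanishing f(x)
        break
    x = ma_step(f, df, x, beta=0.5)
```

Three functional evaluations per step (f twice, f' once) with fourth order: the bound of the Kung-Traub conjecture is attained.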

4. Dynamical Analysis

The order of convergence of an iterative method is not the only relevant criterion when evaluating its performance. In fact, the dynamics of the method, that is, the behavior of its orbits under different initial estimations, plays a fundamental role in its overall analysis. To study it, tools from complex dynamics are used, which represent the evolution of the methods on the Riemann sphere \mathbb{C} \cup \{\infty\} [17,18,19].
We start from a rational function resulting from the application of an iterative method to a polynomial of low degree, denoted by R : \hat{\mathbb{C}} \to \hat{\mathbb{C}}, where \hat{\mathbb{C}} = \mathbb{C} \cup \{\infty\} denotes the Riemann sphere. The orbit of a point z_0 \in \hat{\mathbb{C}} is given by the sequence:
\left\{z_0, R(z_0), R^2(z_0), \ldots, R^n(z_0), \ldots\right\}.
We are interested in studying the asymptotic behavior of the orbits, so we must classify the different points of the rational operator R. A point \hat{z} is k-periodic, with k \ge 1, if
R^k(\hat{z}) = \hat{z} \quad \text{and} \quad R^p(\hat{z}) \ne \hat{z}, \quad 1 \le p < k.
We say that \hat{z} is a fixed point of R if
R(\hat{z}) = \hat{z}.
If a fixed point is not a solution of the polynomial, it is called a strange fixed point. Strange fixed points are numerically undesirable, since the iterative method can converge to them for certain initial guesses [18].
The dynamical behavior of a fixed point z^* is classified according to the modulus of the derivative R'(z^*). If |R'(z^*)| < 1, the fixed point is an attractor; if |R'(z^*)| = 1, it is called parabolic or indifferent; if |R'(z^*)| > 1, it is repulsive; and if |R'(z^*)| = 0, it is a superattractor.
On the other hand, the basin of attraction of an attractor z^* is defined as the set of points whose orbits converge to it:
\mathcal{A}(z^*) = \left\{z_0 \in \hat{\mathbb{C}} : \lim_{n \to \infty} R^n(z_0) = z^*\right\}.
The Fatou set F is the union of the basins of attraction. The Julia set J is its topological complement in the Riemann sphere and represents the union of the frontiers of the basins of attraction.
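These sets can be visualized by iterating the operator over a grid of seeds and labeling each seed by the attractor its orbit reaches. The sketch below uses Newton's operator for p(z) = z^3 - 1 as a simple stand-in for the rational operators studied here (an illustrative choice of ours, not one of the paper's operators); unresolved seeds serve as a rough proxy for the Julia set:

```python
import cmath

# Attracting fixed points of Newton's operator for z^3 - 1: the cube roots of 1.
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def basin_label(z, iters=60, tol=1e-8):
    """Index of the root whose basin contains z; -1 if unresolved."""
    for _ in range(iters):
        if z == 0:
            return -1                      # derivative vanishes at the origin
        z = z - (z**3 - 1) / (3 * z**2)    # one step of the rational operator
    for k, r in enumerate(roots):
        if abs(z - r) < tol:
            return k
    return -1

# A coarse 41x41 dynamical plane on [-2, 2] x [-2, 2].
plane = [[basin_label(complex(u, v) / 10) for u in range(-20, 21)]
         for v in range(-20, 21)]
```

Coloring `plane` by label reproduces the familiar three-lobed picture: the basins form the Fatou set and their common fractal boundary approximates the Julia set.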
The following classic result, given by Fatou [20] and Julia [21], includes both periodic points (of any period) and fixed points, considered as periodic points of unit period.
Theorem 3 
Let R be a rational function. The immediate basins of attraction of each attractive periodic point contain at least one critical point.
Using this key result, the entire attraction behavior can be found using the critical points as seeds of the iterative process [22]. A point z^* \in \hat{\mathbb{C}} is critical for R if:
R'(z^*) = 0.
We are going to start with the rational function resulting from the application of an iterative scheme to a quadratic polynomial. In order to obtain global results for the class of quadratic polynomials, we prove a Scaling Theorem for the corresponding iterative method.

4.1. Conjugacy Classes

Let f and g be two analytic functions defined on the Riemann sphere. An analytic conjugacy between f and g is a diffeomorphism h on the Riemann sphere such that h \circ f \circ h^{-1} = g.
We now state a general theorem that applies to all types of symmetric means described in Table 2.
Theorem 4 
Let f : \hat{\mathbb{C}} \to \mathbb{C} be an analytic function on the Riemann sphere, let h(z) = \alpha z + \beta be an affine transformation with \alpha \ne 0, and let g(z) = \lambda f(h(z)), with \lambda \ne 0. Let us consider the iterative scheme defined by
G_f(z) = y_f - H(\mu_f)\,\frac{M_T\left(f(z), f(y_f)\right)}{f'(z)},
where y_f = z - \frac{f(z)}{f'(z)} is the Newton step and M_T is one of the means that provide the schemes M_A, M_{Ay}, M_{Hy}, M_C or M_{Cy}. Here, H(\mu) satisfies the conditions indicated in Table 3.
Then, (19) is analytically conjugate to the analogous method applied to g, that is:
\left(h \circ G_g \circ h^{-1}\right)(z) = G_f(z),
sharing the same essential dynamics.
Proof. 
To prove the general result, we consider a particular case of the mean. For the rest of the methods, the proof is analogous. We choose the case of M A y , whose scheme is given by:
x_{n+1} = y_n - H(\mu_n)\,\frac{f(x_n) + f(y_n)}{2f'(x_n)}.
As can be seen, its structure is representative of the methods included in Table 2. We know that the affine function h(z) = \alpha z + \beta has an inverse given by h^{-1}(z) = \frac{z - \beta}{\alpha}.
By hypothesis, g(z) = \lambda f(h(z)) = \lambda f(\alpha z + \beta). By the chain rule, we obtain:
g\left(h^{-1}(z)\right) = \lambda f(z), \quad g'\left(h^{-1}(z)\right) = \lambda\alpha f'(z).
Defining the operator G_g(z) as
G_g(z) = y_g - H(\mu_g)\,\frac{M_T\left(g(z), g(y_g)\right)}{g'(z)},
and evaluated at h 1 ( z ) , we obtain:
G_g\left(h^{-1}(z)\right) = h^{-1}(z) - \frac{g\left(h^{-1}(z)\right)}{g'\left(h^{-1}(z)\right)} - H\left(\mu_g\left(h^{-1}(z)\right)\right)\frac{g\left(h^{-1}(z)\right) + g\left(h^{-1}(z) - \frac{g\left(h^{-1}(z)\right)}{g'\left(h^{-1}(z)\right)}\right)}{2g'\left(h^{-1}(z)\right)}.
Using the identities from (22) and taking into account that
h\left(h^{-1}(z)\right) = z, \quad h\left(h^{-1}(z) - \frac{g\left(h^{-1}(z)\right)}{g'\left(h^{-1}(z)\right)}\right) = z - \frac{f(z)}{f'(z)},
we deduce that:
g\left(h^{-1}(z) - \frac{g\left(h^{-1}(z)\right)}{g'\left(h^{-1}(z)\right)}\right) = \lambda f\left(z - \frac{f(z)}{f'(z)}\right).
Substituting (22) and (25) into (24), we obtain:
G_g\left(h^{-1}(z)\right) = \frac{z - \beta}{\alpha} - \frac{\lambda f(z)}{\lambda\alpha f'(z)} - H\left(\mu_g\left(h^{-1}(z)\right)\right)\frac{\lambda f(z) + \lambda f\left(z - \frac{f(z)}{f'(z)}\right)}{2\lambda\alpha f'(z)} = \frac{z - \beta}{\alpha} - \frac{f(z)}{\alpha f'(z)} - H\left(\mu_g\left(h^{-1}(z)\right)\right)\frac{f(z) + f\left(z - \frac{f(z)}{f'(z)}\right)}{2\alpha f'(z)}.
Now, we apply the transformation h:
h\left(G_g\left(h^{-1}(z)\right)\right) = \alpha\left[\frac{z - \beta}{\alpha} - \frac{f(z)}{\alpha f'(z)} - H\left(\mu_g\left(h^{-1}(z)\right)\right)\frac{f(z) + f\left(z - \frac{f(z)}{f'(z)}\right)}{2\alpha f'(z)}\right] + \beta = z - \frac{f(z)}{f'(z)} - H\left(\mu_g\left(h^{-1}(z)\right)\right)\frac{f(z) + f\left(z - \frac{f(z)}{f'(z)}\right)}{2f'(z)}.
We observe that the term \mu_g\left(h^{-1}(z)\right) transforms as:
\mu_g\left(h^{-1}(z)\right) = \frac{g\left(h^{-1}(z) - \frac{g\left(h^{-1}(z)\right)}{g'\left(h^{-1}(z)\right)}\right)}{g\left(h^{-1}(z)\right)} = \frac{\lambda f\left(z - \frac{f(z)}{f'(z)}\right)}{\lambda f(z)} = \mu_f(z).
Substituting (26) into the previous expression, we finally deduce:
h\left(G_g\left(h^{-1}(z)\right)\right) = z - \frac{f(z)}{f'(z)} - H\left(\mu_f(z)\right)\frac{f(z) + f\left(z - \frac{f(z)}{f'(z)}\right)}{2f'(z)} = G_f(z),
which proves the desired identity (20), and confirms that G f and G g are analytically conjugate through the affine transformation h ( z ) . □
The same reasoning extends directly to the methods in Table 2, since:
  • In each case, the correction term maintains the form M T f ( z ) , f ( y f ) , with symmetric combinations based on means.
  • The affine transformation h acts compatibly on both f ( z ) and f ( y f ) , preserving the functional structure of the correction.
  • The identity μ g ( h 1 ( z ) ) = μ f ( z ) holds, since it depends only on the ratio f ( y f ) / f ( z ) , scaled by λ .
Therefore, the result holds for the entire family of iterative methods based on symmetric means, as described in Table 2.
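The theorem can also be checked numerically. In the sketch below, all constants, the weight H, and the test function f are our own illustrative choices; the identity holds for any admissible H because \mu is invariant under the conjugacy:

```python
# M_Ay-type operator G_F(z) = y - H(mu) (F(z) + F(y)) / (2 F'(z)), y the Newton step.
def G(F, dF, H, z):
    y  = z - F(z) / dF(z)
    mu = F(y) / F(z)
    return y - H(mu) * (F(z) + F(y)) / (2 * dF(z))

H   = lambda mu: 2 * mu + 2 * mu**2 + 0.5 * mu**3   # one admissible weight
f   = lambda z: z**3 - z - 2
df  = lambda z: 3 * z**2 - 1

a, b, lam = 1.5, 0.7, 2.3                  # affine map h(z) = a*z + b, scaling lam
h     = lambda z: a * z + b
h_inv = lambda z: (z - b) / a
g     = lambda z: lam * f(h(z))
dg    = lambda z: lam * a * df(h(z))

z0  = 0.9 + 0.4j
lhs = h(G(g, dg, H, h_inv(z0)))            # (h o G_g o h^{-1})(z0)
rhs = G(f, df, H, z0)                      # G_f(z0): equal up to round-off
```

Repeating the check at other points and with other weights gives the same agreement, as the proof predicts.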

4.2. Dynamics of Fourth-Order Methods

As shown in Table 2, five different parametric families of iterative methods are identified, each of them associated with specific conditions on the function H ( μ ) . To satisfy these conditions, polynomial weight functions have been selected. However, other types of functions could also be considered, as long as they meet the imposed constraints. This also allows the introduction of an additional parameter β in cases where H ( μ ) is bounded.
Table 4 presents the polynomials chosen by the authors for the development of this paper.
Here, \mu_n = \frac{f(y_n)}{f(x_n)} and \beta is a free parameter.
To observe the dynamics of these iterative methods, we will take as an example the arithmetic mean family M A y , which is defined as:
y_n = x_n - \frac{f(x_n)}{f'(x_n)},
x_{n+1} = y_n - \left[2\frac{f(y_n)}{f(x_n)} + 2\left(\frac{f(y_n)}{f(x_n)}\right)^2 + \beta\left(\frac{f(y_n)}{f(x_n)}\right)^3\right]\frac{f(x_n) + f(y_n)}{2f'(x_n)}.
The other cases can be analyzed in a similar way.

4.3. Rational Operator

Proposition 1. 
Let us consider the generic quadratic polynomial p(z) = (z - a)(z - b), with roots a and b. The rational operator related to the family M_{Ay} given in (28), applied to p(z), is:
R_p(z, \beta) = \frac{z^4\left(2z^6 + 16z^5 + 54z^4 + 96z^3 + (92 - \beta)z^2 + (44 - 3\beta)z + 8 - \beta\right)}{(8 - \beta)z^6 + (44 - 3\beta)z^5 + (92 - \beta)z^4 + 96z^3 + 54z^2 + 16z + 2},
where \beta \in \mathbb{C} is an arbitrary parameter.
Proof. 
We apply the iterative scheme M_{Ay} to p(z) and obtain a rational function A_p(z, \beta) that depends on the roots a, b and the parameter \beta \in \mathbb{C}. Then, we apply a Möbius transformation [19,23,24] on A_p(z, \beta) with
h(z) = \frac{z - a}{z - b},
which satisfies h(a) = 0, h(b) = \infty and h(\infty) = 1. This transformation maps the roots a and b to the points 0 and \infty, respectively, which are attracting, and the divergence of the method to 1. Thus, the new conjugate rational operator is defined as:
R_p(z, \beta) := \left(h \circ A_p \circ h^{-1}\right)(z) = \frac{z^4\left(2z^6 + 16z^5 + 54z^4 + 96z^3 + (92 - \beta)z^2 + (44 - 3\beta)z + 8 - \beta\right)}{(8 - \beta)z^6 + (44 - 3\beta)z^5 + (92 - \beta)z^4 + 96z^3 + 54z^2 + 16z + 2},
which no longer depends on the parameters a and b. □
Thus, this transformation facilitates the analysis of the dynamics of iterative methods by allowing the standardization of roots and the structural study of dynamic planes and their stability regions [8].
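For numerical experimentation, the operator of Proposition 1 can be coded directly (the grouping of coefficients by powers of z is ours). The spot checks below rely only on structural facts stated in the text: z = 0 is a fixed point coming from a root, and z = 1 is fixed whenever \beta \ne 312/5:

```python
# Moebius-conjugated rational operator R_p(z, beta) of the M_Ay family on a
# generic quadratic polynomial; the roots are mapped to 0 and infinity.
def R_p(z, beta):
    num = z**4 * (2 * z**6 + 16 * z**5 + 54 * z**4 + 96 * z**3
                  + (92 - beta) * z**2 + (44 - 3 * beta) * z + (8 - beta))
    den = ((8 - beta) * z**6 + (44 - 3 * beta) * z**5 + (92 - beta) * z**4
           + 96 * z**3 + 54 * z**2 + 16 * z + 2)
    return num / den
```

Since the expression no longer depends on a and b, one implementation covers the dynamics of the whole class of quadratic polynomials.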

4.4. Fixed Points of the Operator

Now, we calculate all the fixed points of R p ( z , β ) , to subsequently analyze their character (attractive, repulsive, neutral, or parabolic). Taking into account that the method has order four, the points z = 0 and z = are always superattractor fixed points, since they come from the roots of the polynomial.
The fixed points of R_p(z, \beta) are z = 0, z = \infty, and nine strange fixed points:
  • z = 1 is a strange fixed point if \beta \ne \frac{312}{5},
  • The roots of the polynomial:
    P_\beta(t) = 2 + 18t + 72t^2 + (160 + \beta)t^3 + (208 + 3\beta)t^4 + (160 + \beta)t^5 + 72t^6 + 18t^7 + 2t^8,
    denoted by ex_i(\beta), i = 1, \ldots, 8, for any \beta \in \mathbb{C}.
Now, we study the stability of the strange fixed point z = 1 .
Proposition 2. 
The strange fixed point z = 1 has the following character:
  • If \beta = \frac{312}{5}, z = 1 is not a fixed point.
  • If \left|\beta - \frac{312}{5}\right| > \frac{1024}{5}, z = 1 is an attractor.
  • If \left|\beta - \frac{312}{5}\right| = \frac{1024}{5}, z = 1 is parabolic.
  • If \left|\beta - \frac{312}{5}\right| < \frac{1024}{5}, z = 1 is repulsive.
Proof. 
As seen in the previous section, the character of the fixed point can be determined from the value of the stability function: it is an attractor if |R_p'(z, \beta)| < 1, a repulsor if |R_p'(z, \beta)| > 1, a superattractor if |R_p'(z, \beta)| = 0, and parabolic if |R_p'(z, \beta)| = 1. The derivative of the operator is
R_p'(z, \beta) = \frac{2z^3(z + 1)^8\left[(32 - 4\beta)z^4 + (156 - 7\beta)z^3 + (12\beta + 248)z^2 + (156 - 7\beta)z + 32 - 4\beta\right]}{\left[(8 - \beta)z^6 + (44 - 3\beta)z^5 + (92 - \beta)z^4 + 96z^3 + 54z^2 + 16z + 2\right]^2}.
Therefore,
R_p'(1, \beta) = \frac{1024}{312 - 5\beta}.
If \beta = \frac{312}{5}, then z = 1 is not a fixed point. Otherwise, to determine whether it is attractive or repulsive, we solve:
\left|\frac{1024}{312 - 5\beta}\right| \le 1 \iff 1024^2 \le \left|312 - 5\beta\right|^2.
Expressing the right side in terms of \Re(\beta) and \Im(\beta):
\left|312 - 5\left(\Re(\beta) + i\,\Im(\beta)\right)\right|^2 = \left(312 - 5\Re(\beta)\right)^2 + 25\,\Im(\beta)^2.
So,
1024^2 \le 312^2 - 3120\,\Re(\beta) + 25\,\Re(\beta)^2 + 25\,\Im(\beta)^2.
By simplifying, we get
\left(\Re(\beta) - \frac{312}{5}\right)^2 + \Im(\beta)^2 \ge \left(\frac{1024}{5}\right)^2,
thus,
\left|\beta - \frac{312}{5}\right| \ge \frac{1024}{5}.
Graphically, the behavior of the fixed point z = 1 is visualized in Mathematica using the graph of the function \Phi(\beta) = \left|\frac{1024}{312 - 5\beta}\right|.
In Figure 1, the attraction zone is the yellow area and the repulsion zone corresponds to the gray area. For values of \beta within the gray disk, z = 1 is repulsive, while for values of \beta outside it, z = 1 becomes attractive. Therefore, it is natural to select values within the gray disk, since a repulsive divergence point improves the performance of the iterative scheme.
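In practice, the character of z = 1 for a given \beta can be checked directly from \Phi = |R_p'(1, \beta)|; the tolerance used to detect \beta = 312/5 is an implementation choice of ours:

```python
# Character of the strange fixed point z = 1, following Proposition 2:
# Phi(beta) = |1024 / (312 - 5*beta)|.
def z1_character(beta, tol=1e-9):
    d = abs(312 - 5 * beta)
    if d < tol:
        return "not a fixed point"   # beta = 312/5
    phi = 1024 / d
    if phi < 1:
        return "attractor"
    if abs(phi - 1) < tol:
        return "parabolic"
    return "repulsive"

print(z1_character(0))       # beta inside the disk |beta - 312/5| < 1024/5
print(z1_character(400))     # beta outside the disk
```

The function accepts complex \beta as well, since only the modulus |312 - 5\beta| enters the classification.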
For the eight roots ex_i(\beta), i = 1, \ldots, 8, of the polynomial P_\beta(t), we obtain the following results:
  • \left|R_p'(ex_1(\beta))\right| = 0 for no value of \beta,
  • \left|R_p'(ex_2(\beta))\right| = 0 for \beta_1 = 0.408822 and \beta_2 = 18.0802,
  • \left|R_p'(ex_3(\beta))\right| = 0 for \beta_1 = 0.408822 and \beta_2 = 18.0802,
  • \left|R_p'(ex_4(\beta))\right| = 0 for \beta_{3,4} = 0.782768 \pm 1.1103i,
  • \left|R_p'(ex_5(\beta))\right| = 0 for no value of \beta,
  • \left|R_p'(ex_6(\beta))\right| = 0 for \beta_{3,4} = 0.782768 \pm 1.1103i,
  • \left|R_p'(ex_7(\beta))\right| = 0 for \beta_5 = 78.5858,
  • \left|R_p'(ex_8(\beta))\right| = 0 for \beta_5 = 78.5858.
In the following figure, we represent the stability functions of the strange fixed points ex_i(\beta), i = 1, 2, \ldots, 8.
For each root ex_i(\beta) evaluated in the derivative of the rational operator, its stability surface is constructed. In this context, the graphical representation distinguishes the orange regions as zones of attraction (\left|R_p'(ex_i(\beta))\right| < 1), the gray regions as zones of repulsion (\left|R_p'(ex_i(\beta))\right| > 1), superattraction zones at the vertex of the cone (\left|R_p'(ex_i(\beta))\right| = 0), and parabolic zones at the boundary (\left|R_p'(ex_i(\beta))\right| = 1).
From Figure 2, the following conclusions are drawn:
  • Since the derivative operator associated with the strange fixed points ex_{1,5}(β) cannot vanish, the resulting surface in Figure 2 has only one gray region. This indicates that these fixed points are repulsive throughout the analyzed range, which is desirable, as it prevents convergence to a strange fixed point.
  • Furthermore, at the points ex_{2,3}(β), we obtain β_1 = 0.408822 and β_2 = 18.0802. Figure 2 shows an inverted cone-shaped surface (in yellow), representing an attractor inside the cone and a superattractor at its vertex β_2 (that of β_1 is similar, so it is omitted). The associated unstable domain is approximately [0, 0.2] × [0, 0.2], indicating a small but localized region. Similarly, by setting the derivative operator associated with the roots ex_{4,6}(β) to zero, we obtain β_3 = 0.782768 + 1.1103i and β_4 = 0.782768 - 1.1103i; the corresponding panels of Figure 2 show behavior qualitatively similar to that of β_2, with a comparable domain.
  • By setting the derivative operator associated with the roots e x 7 , 8 ( β ) to zero, we obtain β 5 = 78.5858 . As illustrated in Figure 2, a considerably wider region of attraction appears, approximately ( 0 , 100 ) × ( 0 , 100 ) , indicating that the method shows marked instability for these values of β .
Therefore, to ensure the robustness of the method, values of β for which some root ex_i(β) is an attractor or superattractor should be avoided. In contrast, values of β for which all strange fixed points are repulsive are preferable, as they guarantee stable numerical behavior.
Just as we have studied strange fixed points, we must also analyze critical points, since, recalling Theorem 3, it turns out that each attraction basin of an attractive periodic point (any period) contains at least one critical point.

4.5. Critical points of the operator

Proposition 3. 
The critical points of the rational operator R p ( z , β ) are z = 0 , z = , directly related to the zeros of the polynomial, and the following free critical points:
z = -1,
z 1 , 2 ( β ) = ± 1 2 a ( β ) + b ( β ) c ( β ) d ( β ) e ( β ) ,
z 3 , 4 ( β ) = ± 1 2 a ( β ) + b ( β ) c ( β ) + d ( β ) e ( β ) ,
where the auxiliary functions a(β), b(β), c(β), d(β), and e(β) are algebraic simplifications introduced for ease of notation:
a ( β ) = 2 ( β 8 ) 369 β 2 1800 β + 784 ( 7 β 156 ) 3 64 ( β 8 ) 3 ( 3 β 62 ) ( 7 β 156 ) ( β 8 ) 2 + 2 ( 7 β 156 ) β 8 , b ( β ) = ( 7 β 156 ) 2 32 ( β 8 ) 2 3 β 62 β 8 8 β 64 4 ( β 8 ) , c ( β ) = 369 β 2 1800 β + 784 16 ( β 8 ) , d ( β ) = 369 β 2 1800 β + 784 16 ( β 8 ) , e ( β ) = 7 β 156 16 ( β 8 ) .
Thus, there are five free critical points, except for β = 0 , β = 312 5 , and β = 8 , where only three free critical points exist.
Proof. 
To prove the result, we recall that the derivative of the rational operator () is:
R p ( z , β ) = 2 z 3 ( z + 1 ) 8 4 β + 4 β z 4 32 z 4 + 7 β z 3 156 z 3 12 β z 2 248 z 2 + 7 β z 156 z 32 β z 6 8 z 6 + 3 β z 5 44 z 5 + β z 4 92 z 4 96 z 3 54 z 2 16 z 2 2
It is easily observed that its roots are z = 0, z = ∞, z = -1, z_{1,2}(β), and z_{3,4}(β). The last four correspond to the roots of the degree-4 polynomial in the numerator.
Now, let us observe that for certain values of β , only three free critical points exist. One such case is β = 0 , where the derivative of the operator simplifies to:
\frac{2 z^3 (z+1)^6 \left( 8 z^2 + 23 z + 8 \right)}{(2z+1)^2 \left( 2 z^3 + 6 z^2 + 4 z + 1 \right)^2}.
Here, the free critical points are z = -1 and the real pair
z = \frac{1}{16} \left( -23 \pm \sqrt{273} \right).
When β = 312 5 , the derivative operator becomes:
\frac{10 z^3 (z+1)^8 \left( 272 z^2 + 895 z + 272 \right)}{\left( 136 z^5 + 494 z^4 + 420 z^3 + 180 z^2 + 45 z + 5 \right)^2}.
In this scenario, the free critical points are z = -1 and the real pair
z = \frac{1}{544} \left( -895 \pm 3\sqrt{56121} \right).
And finally, when β = 8 , the derivative operator becomes:
\frac{2 z^4 (z+1)^8 \left( 25 z^2 + 86 z + 25 \right)}{\left( 10 z^5 + 42 z^4 + 48 z^3 + 27 z^2 + 8 z + 1 \right)^2},
whose zeros are z = -1 and the real pair
z = \frac{1}{25} \left( -43 \pm 6\sqrt{34} \right).
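A quick numerical cross-check of the three special cases above: each quadratic factor is palindromic (its first and last coefficients coincide), so its two roots form a reciprocal pair with product 1. The following sketch is our own verification, with the coefficients taken from the displayed factors:

```python
import math

# Quadratic factors of the derivative operator at the degenerate parameter
# values beta = 0, beta = 312/5, and beta = 8 (coefficients from the text).
cases = {
    0:       (8, 23, 8),
    312 / 5: (272, 895, 272),
    8:       (25, 86, 25),
}

def quad_roots(a, b, c):
    """Real roots of a*z^2 + b*z + c (the discriminant is positive here)."""
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

for beta, (a, b, c) in cases.items():
    r1, r2 = quad_roots(a, b, c)
    # Palindromic quadratic: product of roots = c/a = 1 (a reciprocal pair).
    assert abs(r1 * r2 - 1.0) < 1e-12
```

In particular, for β = 0 the roots are (-23 ± √273)/16 ≈ -0.405 and -2.470, a reciprocal pair.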
For the free critical point z = -1, we have R_p(-1, β) = -1; that is, z = -1 is also a strange fixed point. Therefore, the parameter plane associated with this critical point is of little interest, since the stability of z = -1 has already been established.
To visualize the behavior of the free critical points that depend on β, we plot the parameter planes. In each parameter plane, a free critical point is used as the initial estimate. A mesh of 2000 × 2000 points is defined in the complex plane; each point of the mesh corresponds to a value of β, that is, to a member of the iterative family, and for each of them we iterate the rational function R_p(z, β). If the orbit of the critical point converges to z = 0 or z = ∞ in at most 100 iterations, the point is colored red; otherwise, it is colored black.
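The mesh procedure just described can be sketched as follows. This is an illustrative skeleton only: the concrete operator R_p(z, β) and the critical-point formula z_{1,2}(β) of the family are passed in as arguments rather than reproduced here.

```python
def parameter_plane(R, critical_point, re_range, im_range,
                    n=2000, max_iter=100, zero_tol=1e-8, inf_tol=1e8):
    """For each beta on an n x n mesh, iterate the free critical point under
    z -> R(z, beta).  Returns True ('red': the orbit reaches z = 0 or
    z = infinity) or False ('black') for each mesh point."""
    plane = []
    for i in range(n):
        im = im_range[0] + (im_range[1] - im_range[0]) * i / (n - 1)
        row = []
        for j in range(n):
            re = re_range[0] + (re_range[1] - re_range[0]) * j / (n - 1)
            beta = complex(re, im)
            z, red = critical_point(beta), False
            for _ in range(max_iter):
                z = R(z, beta)
                if abs(z) < zero_tol or abs(z) > inf_tol:  # z -> 0 or z -> inf
                    red = True
                    break
            row.append(red)
        plane.append(row)
    return plane
```

With the actual R_p and z_{1,2}(β) plugged in, coloring this Boolean grid red/black reproduces the pictures of Figures 3 and 4.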
As a first step, we graph the parameter plane of the pair z_{1,2}(β), both in the domain D_1 = [-150, 25] × [-225, 225], which contains a broad region of stable performance around the origin, and in D_2 = [-150, 25] × [-60, 60], which contains a divergence zone related to the ex_i.
It is observed that there are many values of the parameter β for which the free critical points converge to the roots z = 0 or z = ∞; visually, convergence fills an approximate domain of [-50, 100] × [-100, 100]. On the other hand, Figure 3 presents a detail of the divergence related to the strange fixed points ex_7(β) and ex_8(β), for which setting the derivative operator to zero yielded β = 78.5858; this aligns precisely with the divergence observed in Figure 2.
Likewise, the parameter plane of the pair z_{3,4}(β) is shown, both in the domain D_3 = [-150, 275] × [-210, 210] and in a detail in the domain D_4 = [70, 270] × [-100, 100], which contains the region that has not converged to any of the roots.
Figure 4 shows very stable behavior in an approximate domain of [-50, 50] × [-100, 100], while a closer view of the domain D_4 in Figure 4 reveals a mostly black region, demonstrating divergent behavior of the studied method (28).

4.6. Dynamical Planes

In the case of dynamical planes, each point in the complex plane is considered as a starting point z 0 of the iterative scheme and is represented with different colors depending on the point it converges to. In this case, points that converge to z = are colored blue, and those that converge to z = 0 are colored orange. These dynamical planes have been generated using a grid of 800 × 800 points and a maximum of 100 iterations per point. In these planes, the fixed points are represented by a white circle, the critical points by a white square, and the attracting points by an asterisk.
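A minimal sketch of this coloring procedure follows; as before, the rational operator is supplied as a parameter, since the full expression of R_p is not repeated here. Plotting the returned labels with any imaging routine reproduces the orange/blue/black pictures.

```python
def orbit_color(R, z0, max_iter=100, zero_tol=1e-8, inf_tol=1e8):
    """Classify a starting point: 'orange' -> converges to z = 0,
    'blue' -> escapes to infinity, 'black' -> neither within max_iter."""
    z = z0
    for _ in range(max_iter):
        z = R(z)
        if abs(z) < zero_tol:
            return "orange"
        if abs(z) > inf_tol:
            return "blue"
    return "black"

def dynamical_plane(R, re_range, im_range, n=800, max_iter=100):
    """Color an n x n grid of initial points z0 in the complex plane."""
    step_re = (re_range[1] - re_range[0]) / (n - 1)
    step_im = (im_range[1] - im_range[0]) / (n - 1)
    return [[orbit_color(R, complex(re_range[0] + j * step_re,
                                    im_range[0] + i * step_im), max_iter)
             for j in range(n)] for i in range(n)]
```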
Next, the dynamical planes are plotted based on the values for β obtained from the strange fixed points of the operator () and from the observations in the parameter plane.
In Figure 5 and Figure 6, good behavior of the method can be observed when choosing β = 0 and β = 1 . The colors of the plots indicate convergence of the method to z = 0 and z = , which are the roots.
A notable case is β = 312/5, since in Section 4.4 it was established that when β takes that value, z = -1 is not a fixed point, as observed in Figure 7.
In Figure 8, it can be clearly seen that z = -1 is no longer characterized as a strange fixed point of the method. Moreover, we recall that when |β - 312/5| < 1024/5, z = -1 is repulsive, as shown in Figure 5 and Figure 6, and when |β - 312/5| > 1024/5, z = -1 is an attractor, as shown in Figure 9 and Figure 10.
Observe that in the dynamical plane of Figure 9 the method converges to z = -1, a strange fixed point of the rational operator, even though the root z = 0 is relatively closer. Furthermore, note the marked instability caused by choosing such a large value of β.
Based on the previous study (see Figure 2), when considering the value β_5 = 78.5858, complex dynamical behavior was observed. In that figure, it can be seen that the associated attraction cone covers a significantly larger area compared to other values of β. Additionally, this basin of attraction is related to the parameter plane of the critical pair z_{1,2}. In this scenario, the method converges to basins different from those of the roots 0 and ∞, indicating that it is not suitable for root-finding. Therefore, values such as β = 78.5858 should be avoided when applying this method. Likewise, β = 200 provides another example of divergence.
Figure 11. β = 78.5858 .
Figure 12 represents the divergence observed in the parameter plane of Figure 4.
The analysis of the remaining iterative methods presented in Table 4 has been carried out in a similar way, and the same qualitative information obtained in the previous analysis was found for the rest of the families. The union of the parameter planes is the same for all families; therefore, their qualitative performance coincides, with a wide region of complex values of the parameter β giving rise to stable methods.

5. Numerical Examples

The iterative methods used in this section are presented in Table 4. This table considers all the conditions established in Table 3 with respect to H μ , aiming to guarantee fourth-order convergence. In particular, the parameter β = 0 is selected, given its favorable behavior observed in the dynamical analysis.
To assess the efficiency of the newly proposed iterative methods, a comparison is made with classical algorithms from the literature, as listed in Table 1.
This section evaluates the performance of the newly proposed multipoint iterative methods compared to several well-established fourth-order methods. The comparison includes the following performance indicators:
  • Number of iterations required for convergence (Iter),
  • Approximate Computational Order of Convergence (ACOC),
  • Bounds of the errors (Incr, Incr2), where Incr = |x_{n+1} - x_n| and Incr2 = |f(x_{n+1})|,
  • Efficiency Index (EI),
  • Execution time (Time), measured in seconds.
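The ACOC indicator is estimated from three consecutive increments of the iterates. The helper below (our own implementation of the standard formula) illustrates the computation on plain Newton iterates for f(x) = x^2 - 2, where the estimate should approach 2:

```python
import math

def acoc(xs):
    """Approximate Computational Order of Convergence from the last four
    iterates: log(e_k / e_{k-1}) / log(e_{k-1} / e_{k-2}),
    where e_k = |x_{k+1} - x_k|."""
    e = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

# Demo: Newton's method on f(x) = x^2 - 2 (order 2), starting from x0 = 1.
xs = [1.0]
for _ in range(5):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
```

Here `acoc(xs)` evaluates to approximately 2, as expected for Newton's method; applied to the iterates of the fourth-order schemes it approaches 4.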

5.1. Academic Example 1: f_1(x) = cos(x) - x

The target root of the problem considered is x* ≈ 0.7391, obtained from the proposed nonlinear function. For the iterative process, x_0 = 0 was taken as the initial estimate, with a convergence tolerance of 10^{-50} and a maximum of 100 iterations. The implemented algorithm uses the stopping criterion
|x_{n+1} - x_n| < 10^{-50} or |f(x_{n+1})| < 10^{-50}.
If neither criterion is met, the procedure ends when the maximum number of iterations is reached. Table 5 presents the results obtained under these conditions.
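To make the experimental setup concrete, the following sketch drives the M_A member of the family (β = 0, so H(μ) = 2 + 4μ^2, as we read it from Table 4) on f_1(x) = cos(x) - x. This is our own illustrative implementation in double precision, so the tolerance 10^{-14} stands in for the 10^{-50} used in the multiprecision experiments.

```python
import math

def f(x):  return math.cos(x) - x
def df(x): return -math.sin(x) - 1.0

def ma_step(x, beta=0.0):
    """One step of the M_A scheme: Newton predictor plus arithmetic-mean
    corrector with weight H(mu) = 2 + 4*mu^2 + beta*mu^3 (frozen derivative)."""
    y = x - f(x) / df(x)
    mu = f(y) / f(x)
    H = 2.0 + 4.0 * mu**2 + beta * mu**3
    return x - H * (f(x) + f(y)) / (2.0 * df(x))

x, iters = 0.0, 0
while abs(f(x)) > 1e-14 and iters < 100:  # 1e-14: double-precision stand-in
    x, iters = ma_step(x), iters + 1
```

From x_0 = 0 the iteration settles at x ≈ 0.7391 in a handful of steps, consistent with the iteration counts reported in Table 5.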
The results reported in Table 5 underscore the strong competitiveness of the newly developed methods when compared with classical schemes. The proposed approaches exhibit comparable, and in several cases superior, performance in terms of both accuracy and computational efficiency. This is reflected not only in the reduced number of iterations required to approximate the root, but also in the exact convergence order of four achieved by several of the new methods. In particular, tiny residual errors were obtained, such as |f(x_{n+1})| ≈ 10^{-197} for method M_Hy and exactly zero for method M_Ay.
Nevertheless, it is important to acknowledge the effective performance of the classical iterative schemes. For instance, Jarratt's method demonstrated remarkable robustness and efficiency, reaching errors of the order of 10^{-143} with relatively low computational cost.
On the other hand, the Artidiello IV method, despite its high estimated order of convergence (ACOC = 4.25), produced significantly larger final errors (≈ 10^{-18}). This behavior may indicate the presence of numerical instabilities or sensitivity to the transcendental nature of the problem.

5.2. Academic Example 2: f_2(x) = e^x - cos(x) - 2

The real root of this function is x* ≈ 0.9488, with an initial estimate x_0 = 0.5, the same tolerance 10^{-50}, the same maximum of 100 iterations, and the same stopping criterion as before. The results are given in Table 6.
In this nonlinear and rapidly varying function, M_Ay exhibits the most outstanding precision, with a final error on the order of 10^{-208}, confirming its high robustness. The classical methods Jarratt, Ostrowski, and Artidiello IV maintain excellent convergence behavior, with very low errors (≈ 10^{-176}) and lower execution times.
Overall, methods M_C and M_Cy achieve superlinear convergence with a low number of iterations, and their performance may improve with adaptive strategies. In contrast, methods such as M_Ay offer a balance between precision and convergence, suggesting their suitability for problems demanding extremely high accuracy.
The proposed methods M_C, M_Cy, and M_Hy demonstrate solid fourth-order behavior, but exhibit larger errors compared to Example 1, indicating sensitivity to the exponential component of f_2(x). Among all methods, Kung-Traub's performance is clearly limited in convergence order (ACOC ≈ 3.0), highlighting its theoretical constraint, though its execution time is the lowest.

Applied Problems

Problem 1: Chemical Equilibrium in Ammonia Synthesis

The analysis of chemical equilibrium systems using numerical methods has been widely addressed in the scientific literature. Solving complex nonlinear equations that model fractional conversions in reactive processes such as ammonia synthesis requires robust and efficient techniques.
This work analyzes the chemical equilibrium corresponding to the ammonia synthesis reaction from nitrogen and hydrogen, in a molar ratio of 1:3 [25], under standard industrial conditions (500 °C and 250 atm). The equation that describes this reaction is the following:
f(x) = x^4 - 7.79075 x^3 + 14.7445 x^2 + 2.511 x - 1.674,
where x is the fractional conversion. Of the four real roots of this equation, only one, x* ≈ 0.27776, lies within the physical interval [0, 1], and therefore it is the only one with chemical significance.
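The physically meaningful root is easy to verify numerically. The sketch below runs plain Newton's method (for verification only, not one of the compared schemes) from x_0 = 0.5 on the quartic above:

```python
def f(x):
    # Fractional-conversion equilibrium equation (500 C, 250 atm), as in the text.
    return x**4 - 7.79075 * x**3 + 14.7445 * x**2 + 2.511 * x - 1.674

def df(x):
    return 4 * x**3 - 3 * 7.79075 * x**2 + 2 * 14.7445 * x + 2.511

x = 0.5
for _ in range(50):
    step = f(x) / df(x)
    x -= step
    if abs(step) < 1e-14:
        break
# x is now the only root inside the physical interval [0, 1], x = 0.27776...
```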
In Table 7, it can be seen that all methods converge rapidly to the physically meaningful root, demonstrating high efficiency for this type of problem. For these results, we used x_0 = 0.5 and the same stopping criterion as in the previous examples.
Method M_Ay exhibits the best performance in terms of accuracy, reaching an Incr2 on the order of 10^{-208}, positioning it as the most precise of the set, at the cost of one additional iteration.
Among the classical methods, Ostrowski and Zhao et al. stand out due to their excellent accuracy, with very small errors (≈ 10^{-138} and ≈ 10^{-145}, respectively) and theoretically ideal ACOC. The Artidiello IV method also shows good performance, achieving an Incr2 as low as 10^{-161}, although its practical order of convergence falls slightly below the expected value.
It is worth noting that the Kung-Traub method is the only one in the set whose practical order of convergence approaches 3 (ACOC ≈ 2.994), a limitation reflected in the relative error and intrinsic efficiency (EI). Nevertheless, it exhibits competitive computational time, being faster than some more precise methods.
Regarding the proposed methods M A , M C , M C y , and M H y , a consistent stability in the order of convergence and a good approximation in the errors can be observed, with results close to the classical methods, though without systematically surpassing them. Their computational performance is acceptable, albeit slightly inferior in terms of time.
In summary, for this chemical equilibrium problem, the methods with the best overall performance considering accuracy, efficiency, and stability are M A y , Ostrowski, and Zhao et al., all offering an ideal combination of minimal errors, fulfilled theoretical order of convergence, and low execution times.
Table 8. Comparison of iterative methods with initial point x 0 = 0 .
Method Root Iter Incr Incr2 ACOC Time (s)
M_A -0.3841 10 3.656 × 10^{-31} 4.79 × 10^{-120} 3.997 0.1931
M_Ay -0.3841 9 5.865 × 10^{-157} 3.893 × 10^{-208} 4.000 0.1948
M_Hy -0.3841 9 6.609 × 10^{-35} 1.24 × 10^{-134} 3.998 0.1766
M_C -0.3841 11 4.265 × 10^{-16} 1.52 × 10^{-59} 3.928 0.2086
M_Cy -0.3841 13 1.081 × 10^{-36} 7.571 × 10^{-142} 3.998 0.2341
Jarratt 0.2778 4 3.431 × 10^{-16} 2.049 × 10^{-61} 3.906 0.08466
Ostrowski 0.2778 4 3.157 × 10^{-16} 1.459 × 10^{-61} 3.997 0.0515
King (β = 1) -0.3841 19 1.563 × 10^{-33} 1.602 × 10^{-129} 3.998 0.293
Kung-Traub 0.2778 6 1.707 × 10^{-51} 8.41 × 10^{-152} 3.000 0.09162
Zhao et al. 0.2778 5 1.077 × 10^{-34} 8.707 × 10^{-136} 4.002 0.09587
Chun -0.3841 73 1.426 × 10^{-42} 1.898 × 10^{-165} 3.999 0.8783
Artidiello IV 0.2778 5 1.516 × 10^{-39} 9.091 × 10^{-156} 3.960 0.08355
Using x_0 = 0 as the initial estimate, a bifurcation in the convergence of the methods is observed. In particular, all mean-based methods M_A, M_Ay, M_Hy, M_C, and M_Cy converge to the root x* ≈ -0.384094, which lacks physical interpretation within the context of the chemical equilibrium problem. Although this root is mathematically valid, it does not represent the desired solution, indicating a strong dependence of these methods on the initial condition and their sensitivity to the local behavior of the function.
The case of the King and Chun methods is particularly notable: although they also converge to the same non-physical root, they do so after a large number of iterations (19 and 73, respectively), reflecting lower numerical efficiency compared to the other methods.
On the other hand, classical methods such as Jarratt, Ostrowski, Zhao et al., Kung-Traub, and Artidiello IV correctly converge to the physically meaningful root x * 0.27776 , requiring approximately 4 to 6 iterations and yielding extremely low final errors.
This highlights the importance of appropriately selecting the initial condition in problems with multiple solutions.

Problem 2: Determination of the Maximum in Planck’s Radiation Law

The study of blackbody radiation through numerical methods has been fundamental in the development of quantum physics. As noted in [26] in their work “Didactic Procedure for Teaching Planck’s Formula”, determining the spectral maximum in Planck’s distribution requires advanced techniques to solve nonlinear transcendental equations.
We analyze the equation derived from Planck’s radiation law that determines the wavelength corresponding to the maximum energy density:
f_5(x) = e^{-x} - 1 + \frac{x}{5},
where x = \frac{hc}{\lambda k T}. Among the possible solutions, only x ≈ 4.9651142317 has physical meaning in this context [27].
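For reference, the valid root can be recovered with a few Newton steps. This is a verification sketch only (not one of the compared schemes), started from x_0 = 5, near the root, to sidestep the basin-of-attraction issues discussed below; note that x = 0 is also a root of f_5, but without physical meaning here.

```python
import math

def f5(x):
    return math.exp(-x) - 1.0 + x / 5.0   # Planck maximum condition

def df5(x):
    return -math.exp(-x) + 0.2

x = 5.0
for _ in range(50):
    step = f5(x) / df5(x)
    x -= step
    if abs(step) < 1e-14:
        break
# x converges to 4.9651142317..., the physically meaningful solution.
```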
Table 9 shows that all proposed methods M A , M A y , M H y , M C , M C y converge correctly to the physically valid root x * 4.9651142317 , even starting from a distant initial condition ( x 0 = 1 ).
Method M A y stands out for its quartic convergence (ACOC = 4.0) and a final error of exactly 0 in only 5 iterations, making it the most accurate method. Although it is slightly more computationally expensive, it achieves the highest relative efficiency (EI ≈ 1.587) among the proposed methods.
Methods M_A, M_C, and M_Cy show quadratic convergence (ACOC ≈ 2), with errors of order 10^{-67} in only 4 iterations and low execution times. The method M_Hy slightly improves the order of convergence (ACOC ≈ 2.67), which represents a good compromise between efficiency and robustness.
In contrast, several classical methods converge to values that do not represent the physically significant root. For example, Chun converges to a negative value (x ≈ -158.3) with a clearly divergent residual (Incr2 ≈ 10^{68}). Meanwhile, the other methods diverge completely, presenting values with no physical meaning.
This highlights that, under unfavorable initial conditions, the proposed methods remain stable, while the classical ones are sensitive. Therefore, for problems such as Planck’s, methods based on the mean offer a more robust and reliable alternative.
Table 10 shows that all evaluated methods successfully converged to the physically meaningful root of Planck’s equation, despite starting from a distant initial condition ( x 0 = 10 ).
The M A y method maintains its outstanding performance, with quartic convergence order (ACOC = 4.0), zero error in the second increment (Incr2 = 0), and maximum intrinsic efficiency ( EI = 1.587 ). This confirms its high robustness and accuracy even under unfavorable scenarios.
The methods M_A, M_Hy, M_C, and M_Cy exhibited similar behavior, with ACOC ≈ 3.45 and convergence achieved in just 3 iterations, maintaining errors on the order of 10^{-17} and very low execution times (< 0.04 s).
Among the classical methods, King, Ostrowski, Zhao et al., and Jarratt also showed good efficiency and stable convergence. The Kung-Traub method, although achieving higher precision (Incr2 ≈ 10^{-122}), resulted in a lower ACOC (3.0) and reduced efficiency due to its higher cost per iteration.
In summary, all the analyzed methods proved to be efficient from a distant initial point, with M A y standing out for its stability and performance.

6. Conclusions

This manuscript presents a new perspective on the design, analysis, and dynamical behavior of fourth-order multipoint iterative methods, constructed through convex combinations of classical means and parameterized weight functions. By extending the Newton-type scheme and incorporating the arithmetic, harmonic, and contraharmonic means, a versatile family of optimal methods is developed, complying with the Kung–Traub conjecture by achieving order four with a minimal number of functional evaluations. Other means, such as the Heronian and centroidal means, were also tested without positive results: the resulting iterative methods do not reach order four, so they are not optimal schemes.
The general formulation, grounded in a solid theoretical framework (Taylor expansions, affine conjugation, and local error analysis), enabled the derivation of explicit conditions on the weight functions to ensure fourth-order convergence. This was complemented by a rigorous dynamical systems analysis using tools such as conjugated rational operators, stability surfaces, parameter planes, and dynamical planes on the Riemann sphere.
The results reveal that the proposed parametric families, particularly those associated with the modified arithmetic mean (M_Ay), exhibit stable convergence behavior over large regions of the complex parameter space of β. Nevertheless, critical regions of divergence were identified, including attraction basins unrelated to the roots and convergence toward strange fixed points. Such zones, often visualized as black or green regions in the parameter and dynamical planes, must be avoided in practice. In this context, the detailed study of free critical points proved fundamental, as each attractive basin must contain at least one critical point; their behavior provided early insights into the method's stability and convergence characteristics. Noteworthy cases such as β = 78.5858 or β = 400 illustrated convergence toward undesirable fixed points (e.g., z = -1), despite proximity to the root z = 0, emphasizing the importance of well-informed parameter selection.
Finally, numerical experiments confirmed the competitiveness of the proposed schemes. In various tests, both academic and applied, methods based on convex means showed efficient performance comparable to classical methods. In most of the problems considered, they converged to the root in approximately 3 to 5 iterations. The only exception was the first applied example, which required a greater number of iterations; however, they also reached the root with errors of the order of 10^{-208}. Likewise, in the second applied problem their robustness was evident, as they were the only methods that converged to the root even when starting from initial conditions far from the solution, unlike the classical methods.
In summary, the proposed class of parametric iterative methods based on convex means not only achieves high computational efficiency but, when coupled with dynamical analysis, offers a robust and predictive framework for the stable and accurate solution of nonlinear equations.

References

  1. Cordero, A.; Franceschi, J.; Torregrosa, J.R.; Zagati, A.C. A Convex Combination Approach for Mean-Based Variants of Newton’s Method. Symmetry 2019, 11, 1106. [Google Scholar] [CrossRef]
  2. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. Journal of the ACM (JACM) 1974, 21, 643–651. [Google Scholar] [CrossRef]
  3. Weerakoon, S.; Fernando, T. A variant of Newton's method with accelerated third-order convergence. Applied Mathematics Letters 2000, 13, 87–93. [Google Scholar] [CrossRef]
  4. Chicharro, F.I.; Cordero, A.; Martínez, T.H.; Torregrosa, J.R. Mean-based iterative methods for solving nonlinear chemistry problems. Journal of Mathematical Chemistry 2020, 58, 555–572. [Google Scholar] [CrossRef]
  5. Özban, A.Y. Some new variants of Newton’s method. Applied Mathematics Letters 2004, 17, 677–682. [Google Scholar] [CrossRef]
  6. Ababneh, O.Y. New Newton’s method with third-order convergence for solving nonlinear equations. World Academy of Science, Engineering and Technology 2012, 61, 1071–1073. [Google Scholar]
  7. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Applied Mathematics and Computation 2007, 190, 686–698. [Google Scholar] [CrossRef]
  8. Zúñiga, A.G.; Cordero, A.; Torregrosa, J.R.; Soto, J.P. Diseño y análisis de la convergencia y estabilidad de métodos iterativos para la resolución de ecuaciones no lineales. Revista Digital: Matemática, Educación e Internet 2021, 21, 1–27. [Google Scholar] [CrossRef]
  9. Traub, J.F. Iterative methods for the solution of equations; Vol. 312, American Mathematical Soc., 1982.
  10. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Mathematics of Computation 1966, 20, 434–437. [Google Scholar] [CrossRef]
  11. Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Applied Mathematics and Computation 2008, 195, 454–459. [Google Scholar] [CrossRef]
  12. Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Academic Press: New York.
  13. King, R.F. A family of fourth order methods for nonlinear equations. SIAM Journal on Numerical Analysis 1973, 10, 876–879. [Google Scholar] [CrossRef]
  14. Artidiello Moreno, S.d.J. Diseño, implementación y convergencia de métodos iterativos para resolver ecuaciones y sistemas no lineales utilizando funciones peso. PhD thesis, Universitat Politècnica de València, 2014.
  15. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. Journal of the ACM (JACM) 1974, 21, 643–651. [Google Scholar] [CrossRef]
  16. Zhao, L.; Wang, X.; Guo, W. New families of eighth-order methods with high efficiency index for solving nonlinear equations. WSEAS Transactions on Mathematics 2012, 11, 283–293. [Google Scholar]
  17. Blanchard, P. The dynamics of Newton’s method. In Proceedings of the Proceedings of Symposia in Applied Mathematics. American Mathematical Society Providence, RI, USA, 1994, Vol. 49, pp. 139–154.
  18. Maimo, J.G. Análisis dinámico y numérico de familias de métodos iterativos para la resolución de ecuaciones no lineales y su extensión a espacios de Banach. PhD thesis, Universitat Politècnica de València, 2017.
  19. Blanchard, P. Complex analytic dynamics on the Riemann sphere. Bulletin of the American Mathematical Society 1984, 11, 85–141. [Google Scholar] [CrossRef]
  20. Fatou, P. Sur les frontières de certains domaines. Bulletin de la Société Mathématique de France 1923, 51, 16–22. [Google Scholar] [CrossRef]
  21. Julia, G. Mémoire sur l’itération des fonctions rationnelles. Journal de mathématiques pures et appliquées 1918, 1, 47–245. [Google Scholar]
  22. Moscoso-Martínez, M.; Chicharro, F.I.; Cordero, A.; Torregrosa, J.R.; Ureña-Callay, G. Achieving optimal order in a novel family of numerical methods: Insights from convergence and dynamical analysis results. Axioms 2024, 13. [Google Scholar] [CrossRef]
  23. Amat, S.; Busquier, S.; Plaza, S. Review of some iterative root-finding methods from a dynamical point of view. Scientia 2004, 10, 35. [Google Scholar]
  24. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Applied Mathematics and Computation 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
  25. Nuevo, D. La síntesis de amoniaco. https://www.tecpa.es/la-sintesis-de-amoniaco/, 2024. Publicado el 19 de marzo de 2024 en TECPA Formación de ingenieros.
  26. González, S.L. Procedimiento didáctico para el estudio de la fórmula de Planck en carreras de ingeniería. Caderno Brasileiro de Ensino de Física 2021, 38, 270–292. [Google Scholar] [CrossRef]
Figure 1. Stability function of z = -1.
Figure 2. Behavior of fixed points ex i ( β ) in attraction zones.
Figure 3. Plane of parameters of z 1 , 2 ( β ) in domain D 1 and a detail in D 2 .
Figure 4. Plane of parameters of z_{3,4}(β) in domain D_3 and a detail in D_4.
Figure 5. β = 0 .
Figure 6. β = 1 .
Figure 7. β = 312 5 .
Figure 8. Approach to β = 312 5 .
Figure 9. β = 400 .
Figure 10. Evaluation at β = 400 .
Figure 12. β = 200 .
Table 1. Summary of Fourth-Order Iterative Methods.
Author | H(μ) | Iterative method
Chun (MEDCH4) [11] | 1 + 2μ | y_n = x_n - \frac{f(x_n)}{f'(x_n)}, x_{n+1} = y_n - \frac{f(x_n) + 2 f(y_n)}{f(x_n)} \frac{f(y_n)}{f'(x_n)}
Ostrowski (MEDOS4) [12] | \frac{1}{1 - 2μ} | y_n = x_n - \frac{f(x_n)}{f'(x_n)}, x_{n+1} = y_n - \frac{f(x_n)}{f(x_n) - 2 f(y_n)} \frac{f(y_n)}{f'(x_n)}
King (MEDK4) [13] | \frac{1 + (2+β)μ}{1 + βμ} | y_n = x_n - \frac{f(x_n)}{f'(x_n)}, x_{n+1} = y_n - \frac{f(x_n) + (2+β) f(y_n)}{f(x_n) + β f(y_n)} \frac{f(y_n)}{f'(x_n)}
Kung–Traub (MEDKT4) [15] | \frac{1}{(1-μ)^2} | y_n = x_n - \frac{f(x_n)}{f'(x_n)}, x_{n+1} = y_n - \frac{f(x_n)^2}{(f(x_n) - f(y_n))^2} \frac{f(y_n)}{f'(x_n)}
Zhao et al. (MEDZ4) [16] | \frac{1 + 2μ + μ^2}{1 - 4μ^2} | y_n = x_n - \frac{f(x_n)}{f'(x_n)}, x_{n+1} = y_n - \frac{f(x_n)^2 + 2 f(x_n) f(y_n) + f(y_n)^2}{f(x_n)^2 - 4 f(y_n)^2} \frac{f(y_n)}{f'(x_n)}
Artidiello (MED44) [14] | \frac{(1+μ)^2}{1 - 5μ^2} | y_n = x_n - \frac{f(x_n)}{f'(x_n)}, x_{n+1} = y_n - \frac{(f(x_n) + f(y_n))^2}{f(x_n)^2 - 5 f(y_n)^2} \frac{f(y_n)}{f'(x_n)}
Table 2. Iterative methods defined via different symmetric means M T , along with their abbreviations and corresponding correction steps.
Method Mean Correction Step
M_A | Arithmetic | x_{n+1} = x_n - H(μ_n) \frac{f(x_n) + f(y_n)}{2 f'(x_n)}
M_Ay | Arithmetic with y_n | x_{n+1} = y_n - H(μ_n) \frac{f(x_n) + f(y_n)}{2 f'(x_n)}
M_Hy | Harmonic with y_n | x_{n+1} = y_n - H(μ_n) \frac{2 f(x_n) f(y_n)}{f'(x_n) (f(x_n) + f(y_n))}
M_C | Contraharmonic | x_{n+1} = x_n - H(μ_n) \frac{f(x_n)^2 + f(y_n)^2}{f'(x_n) (f(x_n) + f(y_n))}
M_Cy | Contraharmonic with y_n | x_{n+1} = y_n - H(μ_n) \frac{f(x_n)^2 + f(y_n)^2}{f'(x_n) (f(x_n) + f(y_n))}
Table 3. Coefficients of H ( μ ) and Error Equations.
Mean Coefficients of H ( μ ) Error Equation
M_A | H(0) = 2, H'(0) = 0, H''(0) = 8, |H'''(0)| < ∞ | \frac{1}{12}\left[ (36 + H'''(0)) c_2^3 - 12 c_2 c_3 \right] e_n^4 + O(e_n^5)
M_Ay | H(0) = 0, H'(0) = 2, H''(0) = 4, |H'''(0)| < ∞ | \frac{1}{12}\left[ (48 + H'''(0)) c_2^3 - 12 c_2 c_3 \right] e_n^4 + O(e_n^5)
M_Hy | H(0) = 1/2, H'(0) = 3/2, |H''(0)| < ∞ | \left[ (7 + H''(0)) c_2^3 - c_2 c_3 \right] e_n^4 + O(e_n^5)
M_C | H(0) = 1, H'(0) = 2, H''(0) = 4, |H'''(0)| < ∞ | \left[ \frac{5 - H'''(0)}{6} c_2^3 - c_2 c_3 \right] e_n^4 + O(e_n^5)
M_Cy | H(0) = 0, H'(0) = 1, H''(0) = 6, |H'''(0)| < ∞ | \left[ \frac{6 - H'''(0)}{6} c_2^3 - c_2 c_3 \right] e_n^4 + O(e_n^5)
Table 4. Families of Fourth-Order Iterative Methods Based on Means.

| Mean | $H(\mu)$ | Iterative scheme |
|---|---|---|
| $M_A$ | $2+4\mu^2+\beta\mu^3$ | $x_{n+1} = x_n - \left(2+4\mu_n^2+\beta\mu_n^3\right)\dfrac{f(x_n)+f(y_n)}{2f'(x_n)}$ |
| $M_{Ay}$ | $2\mu+2\mu^2+\beta\mu^3$ | $x_{n+1} = y_n - \left(2\mu_n+2\mu_n^2+\beta\mu_n^3\right)\dfrac{f(x_n)+f(y_n)}{2f'(x_n)}$ |
| $M_{Hy}$ | $\frac{1}{2}+\frac{3}{2}\mu+\beta\mu^2$ | $x_{n+1} = y_n - \left(\frac{1}{2}+\frac{3}{2}\mu_n+\beta\mu_n^2\right)\dfrac{2f(x_n)f(y_n)}{f'(x_n)(f(x_n)+f(y_n))}$ |
| $M_C$ | $1+2\mu+2\mu^2+\beta\mu^3$ | $x_{n+1} = x_n - \left(1+2\mu_n+2\mu_n^2+\beta\mu_n^3\right)\dfrac{f(x_n)^2+f(y_n)^2}{f'(x_n)(f(x_n)+f(y_n))}$ |
| $M_{Cy}$ | $\mu+3\mu^2+\beta\mu^3$ | $x_{n+1} = y_n - \left(\mu_n+3\mu_n^2+\beta\mu_n^3\right)\dfrac{f(x_n)^2+f(y_n)^2}{f'(x_n)(f(x_n)+f(y_n))}$ |
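As an illustration of how any of these families runs in practice, the arithmetic-mean family $M_A$ can be sketched in a few lines of Python (a sketch for this exposition, not the authors' implementation; $\mu_n = f(y_n)/f(x_n)$ and $\beta$ is the free parameter, which in theory leaves the fourth order unchanged):

```python
import math

def ma_family(f, df, x0, beta=0.0, tol=1e-14, max_iter=50):
    """M_A family: Newton predictor, arithmetic-mean corrector with weight
    H(mu) = 2 + 4*mu^2 + beta*mu^3 and frozen derivative f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0.0:
            return x
        y = x - fx / dfx                      # Newton step
        fy = f(y)
        mu = fy / fx
        H = 2.0 + 4.0 * mu**2 + beta * mu**3  # M_A weight from Table 4
        x_new = x - H * (fx + fy) / (2.0 * dfx)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f  = lambda x: math.exp(x) - math.cos(x) - 2.0   # test problem f_2 from Table 6
df = lambda x: math.exp(x) + math.sin(x)
root = ma_family(f, df, 1.0, beta=1.0)
```

Changing `beta` moves along the parametric family; the dynamical analysis in the paper is what identifies which $\beta$ values are stable in practice.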
Table 5. Performance comparison for $f_1(x) = \cos(x) - x$.

| Method | Iter | Incr | Incr2 | ACOC | EI | Time (s) |
|---|---|---|---|---|---|---|
| $M_A$ | 4 | $6.107\times10^{-32}$ | $1.096\times10^{-126}$ | 3.995 | 1.587 | $5.151\times10^{-2}$ |
| $M_{Ay}$ | 5 | $2.324\times10^{-86}$ | 0.000 | 4.000 | 1.587 | $5.759\times10^{-2}$ |
| $M_{Hy}$ | 5 | $1.044\times10^{-49}$ | $1.795\times10^{-197}$ | 4.000 | 1.587 | $6.806\times10^{-2}$ |
| $M_C$ | 4 | $6.098\times10^{-15}$ | $1.588\times10^{-58}$ | 3.840 | 1.566 | $3.098\times10^{-2}$ |
| $M_{Cy}$ | 5 | $9.841\times10^{-49}$ | $1.247\times10^{-193}$ | 4.000 | 1.587 | $3.803\times10^{-2}$ |
| Jarratt | 4 | $5.37\times10^{-36}$ | $3.276\times10^{-143}$ | 3.998 | 1.587 | $4.025\times10^{-2}$ |
| Ostrowski | 4 | $1.893\times10^{-35}$ | $5.492\times10^{-141}$ | 3.998 | 1.587 | $4.155\times10^{-2}$ |
| King ($\beta=1$) | 4 | $2.133\times10^{-24}$ | $1.632\times10^{-96}$ | 3.980 | 1.585 | $4.469\times10^{-2}$ |
| Kung–Traub | 5 | $2.925\times10^{-24}$ | $4.084\times10^{-72}$ | 3.001 | 1.442 | $4.487\times10^{-2}$ |
| Zhao et al. | 5 | $3.787\times10^{-35}$ | $5.1\times10^{-140}$ | 3.999 | 1.587 | $5.990\times10^{-2}$ |
| Chun | 4 | $1.893\times10^{-35}$ | $5.492\times10^{-141}$ | 3.998 | 1.587 | $3.784\times10^{-2}$ |
| Artidiello IV | 5 | $1.752\times10^{-18}$ | $6.378\times10^{-74}$ | 4.250 | 1.620 | $5.160\times10^{-2}$ |
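The ACOC and EI columns can be reproduced from the iterate history alone. A sketch of both computations (assuming the usual definitions: ACOC from the last four iterates, and efficiency index $EI = p^{1/d}$ with $d$ functional evaluations per step):

```python
import math

def acoc(xs):
    """Approximated computational order of convergence,
    ACOC = ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|) / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|),
    computed from the last four iterates."""
    d1 = abs(xs[-1] - xs[-2])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-3] - xs[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)

# Fourth-order two-step methods use 3 evaluations per step
# (f(x_n), f(y_n), f'(x_n)), so EI = 4^(1/3) ~ 1.587, as in the EI column;
# the third-order Kung-Traub rows correspondingly show 3^(1/3) ~ 1.442.
EI = 4.0 ** (1.0 / 3.0)
```

On a synthetic second-order sequence with differences $10^{-1}, 10^{-2}, 10^{-4}$, `acoc` returns 2, which is a quick sanity check of the formula.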
Table 6. Performance comparison for $f_2(x) = e^x - \cos(x) - 2$.

| Method | Iter | Incr | Incr2 | ACOC | EI | Time (s) |
|---|---|---|---|---|---|---|
| $M_A$ | 4 | $9.237\times10^{-34}$ | $6.510\times10^{-133}$ | 4.000 | 1.587 | $5.444\times10^{-2}$ |
| $M_{Ay}$ | 5 | $2.646\times10^{-97}$ | $7.787\times10^{-208}$ | 4.000 | 1.587 | $7.244\times10^{-2}$ |
| $M_{Hy}$ | 4 | $1.575\times10^{-14}$ | $1.398\times10^{-55}$ | 3.926 | 1.578 | $6.071\times10^{-2}$ |
| $M_C$ | 4 | $4.862\times10^{-18}$ | $8.841\times10^{-70}$ | 3.964 | 1.583 | $6.263\times10^{-2}$ |
| $M_{Cy}$ | 4 | $1.164\times10^{-14}$ | $3.535\times10^{-56}$ | 3.924 | 1.577 | $6.040\times10^{-2}$ |
| Jarratt | 4 | $3.801\times10^{-44}$ | $4.500\times10^{-175}$ | 4.000 | 1.587 | $4.654\times10^{-2}$ |
| Ostrowski | 4 | $2.037\times10^{-44}$ | $3.549\times10^{-176}$ | 4.000 | 1.587 | $5.681\times10^{-2}$ |
| King ($\beta=1$) | 4 | $4.612\times10^{-28}$ | $4.044\times10^{-110}$ | 3.995 | 1.587 | $3.920\times10^{-2}$ |
| Kung–Traub | 5 | $6.989\times10^{-29}$ | $5.036\times10^{-85}$ | 3.000 | 1.442 | $3.818\times10^{-2}$ |
| Zhao et al. | 4 | $1.256\times10^{-25}$ | $3.421\times10^{-101}$ | 4.042 | 1.593 | $4.764\times10^{-2}$ |
| Chun | 4 | $2.037\times10^{-44}$ | $3.549\times10^{-176}$ | 4.000 | 1.587 | $4.491\times10^{-2}$ |
| Artidiello IV | 4 | $2.842\times10^{-16}$ | $3.140\times10^{-63}$ | 4.109 | 1.602 | $4.625\times10^{-2}$ |
Table 7. Comparison of iterative methods for the chemical equilibrium problem.

| Method | Iter | Incr | Incr2 | ACOC | EI | Time (s) |
|---|---|---|---|---|---|---|
| $M_A$ | 4 | $1.14\times10^{-33}$ | $5.257\times10^{-131}$ | 3.996 | 1.587 | 0.1490 |
| $M_{Ay}$ | 5 | $8.294\times10^{-130}$ | $3.893\times10^{-208}$ | 4.000 | 1.587 | 0.1331 |
| $M_{Hy}$ | 4 | $4.491\times10^{-32}$ | $2.600\times10^{-124}$ | 3.995 | 1.587 | 0.1218 |
| $M_C$ | 4 | $9.663\times10^{-33}$ | $4.140\times10^{-127}$ | 3.995 | 1.587 | 0.1058 |
| $M_{Cy}$ | 4 | $2.179\times10^{-32}$ | $1.255\times10^{-125}$ | 3.995 | 1.587 | 0.0974 |
| Jarratt | 4 | $4.663\times10^{-33}$ | $6.992\times10^{-129}$ | 3.996 | 1.587 | 0.1188 |
| Ostrowski | 4 | $2.592\times10^{-35}$ | $6.632\times10^{-138}$ | 3.997 | 1.587 | 0.0617 |
| King ($\beta=1$) | 4 | $1.125\times10^{-33}$ | $4.976\times10^{-131}$ | 3.996 | 1.587 | 0.0630 |
| Kung–Traub | 4 | $4.635\times10^{-19}$ | $1.684\times10^{-54}$ | 2.994 | 1.441 | 0.1018 |
| Zhao et al. | 4 | $5.225\times10^{-37}$ | $4.830\times10^{-145}$ | 4.001 | 1.588 | 0.0720 |
| Chun | 4 | $9.756\times10^{-33}$ | $4.303\times10^{-127}$ | 3.995 | 1.587 | 0.0492 |
| Artidiello IV | 4 | $7.927\times10^{-41}$ | $6.800\times10^{-161}$ | 3.971 | 1.584 | 0.0729 |
Table 9. Comparison of iterative methods for the Planck problem.

| Method | Root | Iter | Incr | Incr2 | ACOC | EI |
|---|---|---|---|---|---|---|
| $M_A$ | 4.965 | 4 | $2.578\times10^{-16}$ | $1.08\times10^{-67}$ | 2.008 | 1.262 |
| $M_{Ay}$ | 4.965 | 5 | $6.912\times10^{-67}$ | 0.0 | 4.0 | 1.587 |
| $M_{Hy}$ | 4.965 | 4 | $2.71\times10^{-16}$ | $1.564\times10^{-67}$ | 2.671 | 1.388 |
| $M_C$ | 4.965 | 4 | $2.82\times10^{-16}$ | $1.689\times10^{-67}$ | 1.981 | 1.256 |
| $M_{Cy}$ | 4.965 | 4 | $2.943\times10^{-16}$ | $2.089\times10^{-67}$ | 1.915 | 1.242 |
| Jarratt | $5.609\times10^{-200}$ | 5 | $2.616\times10^{-50}$ | $4.487\times10^{-200}$ | 4.0 | 1.587 |
| Ostrowski | $3.471\times10^{-51}$ | 4 | $4.178\times10^{-13}$ | $2.777\times10^{-51}$ | 3.942 | 1.580 |
| King ($\beta=1$) | $5.427\times10^{-115}$ | 15 | $3.081\times10^{-29}$ | $4.342\times10^{-115}$ | 3.997 | 1.587 |
| Kung–Traub | $1.186\times10^{-79}$ | 7 | $5.334\times10^{-27}$ | $9.486\times10^{-80}$ | 3.000 | 1.442 |
| Zhao et al. | $3.344\times10^{-119}$ | 7 | $4.003\times10^{-30}$ | $2.675\times10^{-119}$ | 3.984 | 1.585 |
| Chun | 158.3 | 100 | 1.639 | $5.668\times10^{-68}$ | 5.121 | 1.724 |
| Artidiello IV | $1.68\times10^{-91}$ | 6 | $2.588\times10^{-23}$ | $1.344\times10^{-91}$ | 3.976 | 1.584 |
Table 10. Comparison of iterative methods for the Planck problem with $x_0 = 10$.

| Method | Iter | Incr | Incr2 | ACOC | EI | Time (s) |
|---|---|---|---|---|---|---|
| $M_A$ | 3 | $3.413\times10^{-17}$ | $3.316\times10^{-71}$ | 3.466 | 1.513 | 0.0380 |
| $M_{Ay}$ | 4 | $2.223\times10^{-70}$ | 0.0 | 4.0 | 1.587 | 0.0638 |
| $M_{Hy}$ | 3 | $4.172\times10^{-17}$ | $8.786\times10^{-71}$ | 3.447 | 1.511 | 0.0323 |
| $M_C$ | 3 | $3.787\times10^{-17}$ | $5.496\times10^{-71}$ | 3.456 | 1.512 | 0.0394 |
| $M_{Cy}$ | 3 | $3.978\times10^{-17}$ | $6.976\times10^{-71}$ | 3.452 | 1.511 | 0.0388 |
| Jarratt | 3 | $3.106\times10^{-12}$ | $5.067\times10^{-51}$ | 3.539 | 1.524 | 0.0509 |
| Ostrowski | 3 | $3.047\times10^{-17}$ | $1.911\times10^{-71}$ | 3.477 | 1.515 | 0.0323 |
| King ($\beta=1$) | 3 | $3.413\times10^{-17}$ | $3.314\times10^{-71}$ | 3.466 | 1.513 | 0.0305 |
| Kung–Traub | 4 | $5.964\times10^{-40}$ | $2.674\times10^{-122}$ | 3.0 | 1.442 | 0.0576 |
| Zhao et al. | 3 | $2.869\times10^{-17}$ | $1.424\times10^{-71}$ | 3.483 | 1.516 | 0.0411 |
| Chun | 3 | $3.047\times10^{-17}$ | $1.911\times10^{-71}$ | 3.477 | 1.515 | 0.0388 |
| Artidiello IV | 3 | $2.693\times10^{-17}$ | $1.045\times10^{-71}$ | 3.489 | 1.517 | 0.03368 |
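For context on the Planck rows: a common reduction of the Planck radiation (Wien displacement) problem is $f(x) = e^{-x} + x/5 - 1 = 0$, whose nonzero root $x \approx 4.9651$ matches the Root column of Table 9, while $x = 0$ is the spurious root toward which several comparison methods drift. This specific form is an assumption here (the paper defines its own version earlier); with it, even plain Newton from $x_0 = 10$ locates the physical root:

```python
import math

def newton(f, df, x0, tol=1e-14, max_iter=100):
    """Plain Newton iteration, stopping on the size of the last step."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Assumed reduction of the Planck problem: e^{-x} + x/5 - 1 = 0.
f  = lambda x: math.exp(-x) + x / 5.0 - 1.0
df = lambda x: -math.exp(-x) + 0.2
root = newton(f, df, 10.0)   # x0 = 10, as in Table 10
```

The difficulty the tables expose is therefore not locating $x \approx 4.965$ from a good start, but avoiding the basin of the trivial root $x = 0$, which is exactly what the dynamical-plane analysis characterizes.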
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.
Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.