On Efficient Iterative Numerical Methods for the Simultaneous Determination of All Roots of a Non-Linear Function

We construct a family of two-step optimal fourth-order iterative methods for finding a single root of a non-linear equation. We then generalize these methods to simultaneous iterative methods for determining all the distinct as well as multiple roots of single-variable non-linear equations. Convergence analysis is presented for both cases, showing that the order of convergence is four for the single-root methods and twelve for the simultaneous determination of all roots. The computational cost, basins of attraction, efficiency, log of residual and numerical test examples show that the newly constructed methods are more efficient than the existing methods in the literature.


Introduction
Solving the non-linear equation

f(x) = 0   (1)

is one of the oldest problems of science in general and of mathematics in particular. Non-linear equations have diverse applications in many areas of science and engineering. Several iterative methods have been used to find the roots of the non-linear equation (1), based on different techniques such as decomposition methods, the homotopy analysis method, variational iteration methods, modifications of the Newton-Raphson method, etc. All these methods approximate one root at a time. Mathematicians are, however, also interested in finding all roots of a non-linear equation simultaneously, because simultaneous iterative methods enjoy a wider region of convergence, are more stable than single-root finding methods, and can be implemented on parallel computers as well. More details on single-root as well as simultaneous determination of all roots can be found in [1-10, 11-13, 15, 22, 24-30] and the references cited therein. The most famous single-root finding method is the classical Newton-Raphson method:

x^(k+1) = x^(k) - f(x^(k)) / f'(x^(k)),   k = 0, 1, 2, ...   (2)

Replacing the derivative f'(x^(k)) in (2) by the Weierstrass correction ∏_{j=1, j≠i}^{n} (x_i^(k) - x_j^(k)), we get the classical Weierstrass-Dochev method for approximating all roots of (1):

x_i^(k+1) = x_i^(k) - f(x_i^(k)) / ∏_{j=1, j≠i}^{n} (x_i^(k) - x_j^(k)),   i = 1, ..., n.

The main aim of this paper is to construct a family of optimal fourth-order methods and then to generalize it into simultaneous iterative methods for finding all roots of the non-linear equation (1).
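For illustration, the classical Newton-Raphson iteration (2) and the Weierstrass-Dochev simultaneous iteration can be sketched in Python as follows (a minimal sketch, not the paper's code; the polynomial is assumed monic, with coefficients listed from the highest degree down):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Classical Newton-Raphson iteration (2) for a single root of f."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x


def weierstrass(coeffs, guesses, tol=1e-12, max_iter=100):
    """Weierstrass-Dochev method: refine approximations to ALL roots of a
    monic polynomial simultaneously.  `coeffs` lists the coefficients from
    the highest degree down; `guesses` holds one starting value per root."""
    def p(z):
        result = 0.0
        for c in coeffs:            # Horner evaluation of the polynomial
            result = result * z + c
        return result

    xs = list(guesses)
    for _ in range(max_iter):
        new_xs = []
        for i, xi in enumerate(xs):
            denom = 1.0
            for j, xj in enumerate(xs):
                if j != i:          # Weierstrass correction replaces f'
                    denom *= (xi - xj)
            new_xs.append(xi - p(xi) / denom)
        converged = max(abs(a - b) for a, b in zip(new_xs, xs)) < tol
        xs = new_xs
        if converged:
            break
    return xs
```

The simultaneous update above is the total-step (Jacobi-style) variant: all corrections are computed from the previous sweep before any approximation is replaced.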

Constructions of Method and Convergence Analysis
Here, we first recall some optimal fourth-order iterative methods for finding roots of the non-linear equation (1). Jarratt et al. [16] suggested the following optimal fourth-order method (abbreviated as JM). Siyyam et al. [17] presented the following optimal fourth-order iterative method (abbreviated as SM). We then propose the iteration scheme (8), in which u(·) is a real-valued function to be determined later.
For the iteration scheme (8), we have the following convergence theorem. Using the computer algebra system Maple 18, we find the error equation of the iteration scheme defined by (8): substituting (11) into (14) and expanding in a Taylor series about 0, we obtain the error equation, from which the fourth order of convergence follows.
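Since the error equation itself is not reproduced here, the claimed convergence order can also be verified numerically via the computational order of convergence (COC). The sketch below (an illustration, not the paper's code) estimates the COC of the classical Newton method, which should come out close to 2; applied to iterates of the fourth-order schemes it should come out close to 4:

```python
import math


def newton_iterates(f, df, x0, n):
    """Return the iterates [x0, x1, ..., xn] produced by Newton's method."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x - f(x) / df(x))
    return xs


def computational_order(xs, root):
    """Computational order of convergence from the last three errors:
    rho ~ log(e_{k+1} / e_k) / log(e_k / e_{k-1})."""
    e0, e1, e2 = (abs(x - root) for x in xs[-3:])
    return math.log(e2 / e1) / math.log(e1 / e0)
```

Note that the last iterates must still be above machine precision, otherwise the error ratios degenerate and the estimate becomes meaningless.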

The Concrete Fourth order Methods
We now construct some concrete forms of the family of methods described by the algorithm (8). Taking the function u(·) as defined in Table 1 (concrete forms of the methods), we obtain the following four new two-step fourth-order methods:
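The concrete weight functions u(·) of Table 1 are not recoverable from this text; as a reference point for the two-step structure, the classical Jarratt scheme (JM) can be sketched as follows (a sketch of the well-known Jarratt method, not of the new MNS methods):

```python
def jarratt(f, df, x0, tol=1e-14, max_iter=20):
    """Classical two-step Jarratt scheme: an optimal fourth-order method
    costing one f- and two f'-evaluations per iteration."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - (2.0 / 3.0) * fx / dfx        # predictor: 2/3 of a Newton step
        dfy = df(y)
        # corrector: weighted combination of the two derivative evaluations
        x_new = x - 0.5 * (3.0 * dfy + dfx) / (3.0 * dfy - dfx) * fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

With three functional evaluations and order four, the scheme attains the Kung-Traub optimality bound 2^(3-1) = 4, which is the sense in which the methods here are called "optimal".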

Complex Dynamical Study of Iterative Methods
Here, we discuss the dynamical behaviour of the iterative methods MNS-1-MNS-4, JM, SM and YM. By choosing suitable initial guesses, we observe that all the iterative methods converge. We now investigate the region from which the initial estimates can be taken so as to reach a root of the non-linear equation.
In fact, we numerically approximate the basins of attraction of the roots as a qualitative measure of how the iterative methods depend on the choice of initial estimates. To answer these questions about the dynamical behaviour of the iterative methods, we investigate the dynamics of the methods MNS-1-MNS-4 and compare them with JM, SM and YM. Let us recall some basic concepts of this study in the context of complex dynamics; for more details on the dynamical behaviour of iterative methods one can consult [19][20][21]. Given a rational map R: C → C, where C denotes the complex plane, the orbit of a starting point z_0 ∈ C is defined as the set {z_0, R(z_0), R^2(z_0), ...}, and the basin of attraction of a root s* is the set of starting points whose orbits tend to s*. The closure of the set of repelling periodic points of a rational map is known as the Julia set, denoted by J(R), and its complement is the Fatou set, denoted by F(R). When the iterative methods are applied to find the roots of (1), each of them provides such a rational map. As a stopping criterion, the maximum number of iterations is taken as 25. We mark a point dark blue if the orbit of the iterative method does not converge to a root after 25 iterations, which means it remains at a distance greater than 10^(-3) from every root. A different colour is used for each root, so the iterative methods have basins of attraction distinguished by their colours. Within a basin, the brightness of the colour represents the number of iterations needed to reach the root of (1): brighter colour means fewer iteration steps. Note that the darkest blue regions denote the lack of convergence to any root of (1). Finally, in Table 1, we present the elapsed time of the basins of attraction corresponding to the iterative maps MNS-1-MNS-4, JM, SM and YM, measured using the tic-toc command in MATLAB (R2011b).
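The basin-of-attraction pictures described above can be reproduced with a short script. The following Python sketch (an illustration only; the paper's figures were generated in MATLAB) classifies a grid of starting points for Newton's map applied to p(z) = z^3 - 1, recording for each point the root reached and the number of iterations, with index -1 marking the dark-blue non-convergent points:

```python
def newton_basins(n=200, max_iter=25, tol=1e-3):
    """Classify an n-by-n grid of starting points in [-2, 2] x [-2, 2]
    by which cube root of unity the Newton map
        z -> z - (z**3 - 1) / (3 * z**2)
    takes them to.  Each cell holds (root_index, iterations); a root
    index of -1 marks non-convergence within max_iter iterations."""
    roots = [complex(1.0, 0.0),
             complex(-0.5, 3 ** 0.5 / 2),
             complex(-0.5, -(3 ** 0.5) / 2)]
    grid = []
    for i in range(n):
        row = []
        for j in range(n):
            z = complex(-2.0 + 4.0 * j / (n - 1), -2.0 + 4.0 * i / (n - 1))
            idx, its = -1, max_iter
            for k in range(max_iter):
                if abs(z) < 1e-12:           # Newton map undefined at z = 0
                    break
                z = z - (z ** 3 - 1) / (3 * z ** 2)
                near = [r for r in roots if abs(z - r) < tol]
                if near:                     # within 10**-3 of some root
                    idx, its = roots.index(near[0]), k + 1
                    break
            row.append((idx, its))
        grid.append(row)
    return grid
```

The brightness used in the figures corresponds to the `iterations` entry of each cell; the elapsed time of Table 1 is simply a timer wrapped around such a call.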

Generalization to Simultaneous Iterative Methods
Suppose that the non-linear equation (1) has n roots, so that f(x) can be written as a product of linear factors. This implies a representation of the derivative in terms of the remaining root estimates; from (27), this gives an approximation that converts the single-root methods into simultaneous iterative methods for all roots of (1).

Computational Aspect
Here, we compare the computational efficiency of the M. S. Petković method [8] and the new methods NMSM-1, NMSM-2, NMSM-3 and NMSM-4. As presented in [8], the efficiency of an iterative method can be estimated using the efficiency index. Considering that the operations on a complex polynomial with real and complex roots reduce to operations of real arithmetic, given in Table 3.1 as functions of the polynomial degree m, we take the dominant term, which is of order O(m^2). Applying (4.3) and the data given in Table 6, we calculate the percentage ratios ρ((·), (X)), where X is one of the reference methods, namely the Ehrlich-Aberth and Petković methods. Figure 4.1 graphically illustrates these percentage ratios. It is evident from Figure 4.1 that the newly constructed simultaneous methods (3.11)-(3.14) are more efficient than the Ehrlich [14] and Petković methods [8,13]. We also calculate the CPU execution time; all computations were done using Maple 18 on a processor Intel(R) Core(TM) i3-3110M CPU @ 2.4 GHz with a 64-bit operating system. We observe that the CPU time of the new methods is less than that of the M. S. Petković methods [8], showing the dominant efficiency of our methods (3.11)-(3.14) in comparison with them.
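The comparison described above rests on the Ostrowski-type efficiency index E = p^(1/θ), where p is the convergence order and θ the computational cost per iteration; the percentage ratio then measures the relative gain of one method over another. A minimal sketch follows (the actual operation counts of Table 6 are not reproduced here, so the usage example uses illustrative values only):

```python
def efficiency_index(order, cost):
    """Ostrowski-type efficiency index E = order**(1/cost), where `cost`
    is the computational cost per iteration (e.g. the dominant
    arithmetic-operation count, here of order m**2 in the degree m)."""
    return order ** (1.0 / cost)


def percentage_ratio(e_new, e_ref):
    """Percentage gain rho of a new method over a reference method, as
    plotted in Figure 4.1: positive values favour the new method."""
    return (e_new / e_ref - 1.0) * 100.0


# Illustrative values only (NOT the counts of Table 6): a twelfth-order
# simultaneous method costing 3 cost-units per iteration versus a
# fourth-order method costing 2 units.
gain = percentage_ratio(efficiency_index(12.0, 3.0),
                        efficiency_index(4.0, 2.0))
```

With these placeholder costs the higher-order method wins (gain > 0); the paper's actual ranking of course depends on the real operation counts in Table 6.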
For the test problems with multiple exact roots (multiplicity σ > 1) and with distinct roots (σ = 1), the initial approximations x^(0) have been taken as listed with each numerical example.