Maximum efficiency for a family of Newton-like methods with frozen derivatives and some applications

https://doi.org/10.1016/j.amc.2013.01.047

Abstract

A generalized k-step iterative application of Newton’s method with frozen derivative is studied and used to solve a system of nonlinear equations. The maximum computational efficiency is computed. A sequence that approximates the order of convergence is generated for the examples, and it numerically confirms the calculation of the order of the method and computational efficiency. This type of method appears in many applications where the authors have heuristically chosen a given number of steps with frozen derivatives. An example is shown in which the total variation (TV) minimization model is approximated using the schemes described in this paper.

Highlights

► We explore the applications of a generalized k-step iterative Newton’s method with frozen derivative.
► We analyze the order and the convergence of the family.
► We are able to compute the maximum computational efficiency in the family for a given example.
► In most of the cases, the 2-step iterative method seems the most efficient.

Introduction

In this paper, a family of Newton-like methods with frozen derivatives is analyzed. This type of method appears in many applications where the authors heuristically choose a given number of steps with frozen derivatives (see for instance this incomplete list of Refs. [14], [21], [22], [23], [24], [25]). Our goal is to give a rigorous estimate of the maximum efficiency of these schemes for a given problem.

Let $F:D\subseteq\mathbb{R}^m\to\mathbb{R}^m$ be a nonlinear operator and assume that $F$ has at least bounded, continuous third-order derivatives on an open convex set $D$. Suppose that the equation $F(x)=0$ has a solution $\alpha\in D$ at which $F'(\alpha)$ is nonsingular.

Setting $e=x-\alpha$, we develop $F(x)$ and its derivatives in a neighborhood of $\alpha$. That is, assuming that $F'(\alpha)^{-1}$ exists, we have
$$F(x)=\Gamma\left(e+A_2\,e^2\right)+O(e^3),\qquad(1)$$
where $\Gamma=F'(\alpha)$ and $A_2=\tfrac{1}{2}\,\Gamma^{-1}F''(\alpha)\in\mathcal{L}_2(\mathbb{R}^m,\mathbb{R}^m)$. From (1), the derivative of $F(x)$ can be written as
$$F'(x)=\Gamma\left(I+2A_2\,e\right)+O(e^2),\qquad(2)$$
and expanding the inverse of the derivative of $F(x)$ in terms of $e$ in formal power developments, we get
$$F'(x)^{-1}=\left(I-2A_2\,e+O(e^2)\right)\Gamma^{-1}.$$
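The inverse expansion above can be checked symbolically in the scalar analogue (a sketch using SymPy; the matrix case behaves analogously, with $I$ in place of 1):

```python
import sympy as sp

# Scalar analogue of (2): F'(x) = Gamma * (1 + 2*A2*e) + O(e^2).
e, Gamma, A2 = sp.symbols('e Gamma A2', positive=True)
Fp = Gamma * (1 + 2 * A2 * e)

# Expand 1/F'(x) as a formal power series in e, truncated at O(e^2).
inv_series = sp.series(1 / Fp, e, 0, 2).removeO()

# The truncated series equals (1 - 2*A2*e)/Gamma, the scalar form of
# F'(x)^{-1} = (I - 2*A2*e + O(e^2)) Gamma^{-1}.
difference = sp.simplify(inv_series - (1 - 2 * A2 * e) / Gamma)
print(difference)  # → 0
```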

A direct computation of the local order of convergence for Newton’s method and for its generalization in k steps is presented in Section 2. The optimal number of steps that maximizes the computational efficiency index is computed in Section 3. In Section 4, we analyze several examples that illustrate our theoretical results. Section 5 is devoted to the local and semilocal convergence of these schemes. The last section shows an application related to image denoising, where we approximate the total variation (TV) minimization model using the described schemes. A nonlinear primal–dual method is used to remove some of the singularity caused by the nondifferentiability of the quantity $\nabla u/|\nabla u|$ in the definition of the TV norm. A finite difference scheme is then applied, and the associated nonlinear system of equations is approximated by the most efficient candidate of our family for this particular problem.

Section snippets

Order of convergence

In this section we derive the local order of convergence. This derivation is clearer and more compact than other approaches in the literature for similar Newton-type methods in several dimensions.

Given $x_n$, we consider the following iterative method (known as the ‘simplified Newton method’ [1]):
$$x_n^{(k)}=x_n^{(k-1)}-F'(x_n)^{-1}F\left(x_n^{(k-1)}\right),\quad k\geq 1,\qquad(3)$$
where $x_n^{(0)}=x_n$ and, in the last step, the last computed term is $x_{n+1}=x_n^{(k)}$. For $k=1$, subtracting the root $\alpha$ from both sides of (3) and taking into account (1), (2)
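A minimal numerical sketch of the k-step scheme (3), assuming NumPy; the function names and the test system are illustrative. The derivative is evaluated only once per outer iteration (here it is inverted for clarity; in practice one LU factorization reused for the k solves is cheaper):

```python
import numpy as np

def frozen_newton(F, J, x0, k=2, tol=1e-12, max_outer=50):
    """k-step Newton method with frozen derivative: the Jacobian J is
    evaluated at x_n once and reused for k simplified-Newton corrections."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        Jinv = np.linalg.inv(J(x))      # frozen derivative F'(x_n)^{-1}
        y = x
        for _ in range(k):              # inner steps (3): y <- y - F'(x_n)^{-1} F(y)
            y = y - Jinv @ F(y)
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x

# Illustrative 2x2 system: intersection of the unit circle with the line x = y.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = frozen_newton(F, J, [1.0, 0.5], k=3)
```

With k = 1 this reduces to the classical Newton method; the local order of the k-step scheme is k + 1.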

Optimal computational efficiency

The computational efficiency index (CEI) and the computational cost ($C$) are defined as
$$CEI(k,\mu_0,\mu_1,m)=\rho_k^{\,1/C(k,\mu_0,\mu_1,m)},\qquad(4)$$
where
$$C(k,\mu_0,\mu_1,m)=a_0(k,m)\,\mu_0+a_1(m)\,\mu_1+p(k,m).\qquad(5)$$

In (5), $a_0(k,m)$ and $a_1(m)$ represent the number of evaluations of the scalar functions of $F$ and $F'$, respectively. The function $p(k,m)$ represents the number of products needed per iteration. The ratios $\mu_0$ and $\mu_1$ between products and evaluations are introduced in order to express the value of $C(k,\mu_0,\mu_1,m)$ only in terms of products.
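The optimal k can then be located by direct comparison. The sketch below assumes an illustrative cost model (k·m scalar evaluations of F, m² of F′, one LU factorization of about m³/3 products plus k solve sweeps of about m² products each; the paper's exact counts may differ), together with ρ_k = k + 1:

```python
def cei(k, mu0, mu1, m):
    """Computational efficiency index CEI = rho_k**(1/C) for the k-step scheme.
    The cost counts below are illustrative assumptions, not the paper's exact ones."""
    rho_k = k + 1.0                    # local order of the k-step method
    a0 = k * m                         # scalar evaluations of F (one per inner step)
    a1 = m * m                         # scalar evaluations of F' (once per outer step)
    p = m**3 / 3.0 + k * m * m         # products: one LU factorization + k solves
    C = a0 * mu0 + a1 * mu1 + p
    return rho_k ** (1.0 / C)

# Pick the number of steps maximizing CEI for given ratios and dimension.
best_k = max(range(1, 11), key=lambda k: cei(k, mu0=2.0, mu1=2.0, m=5))
```

Increasing k raises the order linearly but also raises the cost linearly, so the index is maximized at a small, finite k that depends on $\mu_0$, $\mu_1$ and $m$.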

Without

First numerical results

The first set of numerical computations was performed using the MAPLE computer algebra system with 2048 digits. The classical stopping criterion
$$\|e_I\|=\|x_I-\alpha\|<0.5\cdot 10^{-\varepsilon},$$
with $\varepsilon=2048$, is replaced by
$$E_I=\frac{\|\hat{e}_I\|}{\|\hat{e}_{I-1}\|}<0.5\cdot 10^{-\eta},\qquad(9)$$
where $\hat{e}_I=x_I-x_{I-1}$ and $\eta=\frac{\rho-1}{\rho^2}\,\varepsilon$. Note that (9) does not require knowledge of the root (see [5]).
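The order-approximating sequence mentioned in the abstract can likewise be computed from the corrections alone. A minimal sketch, using the standard root-free estimate $\rho_I\approx\ln(\|\hat e_I\|/\|\hat e_{I-1}\|)/\ln(\|\hat e_{I-1}\|/\|\hat e_{I-2}\|)$ (which may differ in detail from the paper's exact formula):

```python
import math

def acoc(corr_norms):
    """Approximated computational order of convergence from the norms of the
    corrections e_hat_I = x_I - x_{I-1}; no knowledge of the root is needed."""
    e = corr_norms
    return [math.log(e[i] / e[i - 1]) / math.log(e[i - 1] / e[i - 2])
            for i in range(2, len(e))]

# Corrections decaying quadratically (|e_{n+1}| ~ |e_n|^2) give estimates near 2.
estimates = acoc([1e-2, 1e-4, 1e-8, 1e-16])
```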

According to the definition of the computational cost (5), an estimate of the factors $\mu_0$ and $\mu_1$ is required. To obtain it, we express the cost of the evaluation of the elementary

Local and semilocal analysis of convergence

In this section we propose two convergence results that we will apply to an image denoising problem in the next section. Results of this type for the classical Newton method can be found in [9], [10].

On the application of the iterative methods in image processing

During some phases of the manipulation of an image, random noise and blurring are usually introduced. They make the later phases of processing the image difficult and inaccurate. In the past two decades many authors have introduced and analyzed tools for the image restoration problem. See, for instance, the following incomplete list: [11], [12], [13], [15], [16], [17], [18], [19], [20], [26], [27], [28].

In this paper, we focus on approaches which use partial

Conclusion

In this paper we have presented a rigorous and useful procedure to find the best scheme in a family of Newton-type methods with frozen derivatives for solving a given system of nonlinear equations. These methods have been used in many applications where the authors heuristically chose a given number of steps with frozen derivatives. We have applied the new strategy of this paper to the approximation of several known problems. We have analyzed the order, the computational efficiency and the

References (31)

  • A. Ralston, P. Rabinowitz, A First Course in Numerical Analysis, McGraw-Hill College, Ohio,...
  • M. Grau et al.

    Accelerated iterative methods for finding solutions of a system of nonlinear equations

    Appl. Math. Comput.

    (2007)
  • I.K. Argyros et al.

    A unified approach for enlarging the radius of convergence for Newton’s method and applications

    Nonlinear Funct. Anal. Appl.

    (2005)
  • I.K. Argyros

    Relaxing the convergence conditions for Newton-like methods

    J. Appl. Math. Comput.

    (2006)
  • S. Amat et al.

    Analysis of a New Nonlinear Subdivision Scheme. Applications in Image Processing

    Foundations of Computational Mathematics

    (2002)

Research supported by MICINN-FEDER MTM2010-17508 (Spain), and by 08662/PI/08 (Murcia).