Globally Convergent Inexact Generalized Newton Methods with Decreasing Norm of the Gradient
Year:    2002

Author:    Ding-Guo Pu

Journal of Computational Mathematics, Vol. 20 (2002), Iss. 3 : pp. 289–300

Abstract

In this paper, motivated by the methods of Martínez and Qi [1], we propose a class of globally convergent inexact generalized Newton methods for unconstrained optimization problems whose objective functions are not twice differentiable but have Lipschitz continuous (LC) gradients. The methods decrease the norm of the gradient at every iteration, are implementable, and are globally convergent. We prove that the algorithms converge superlinearly under mild conditions.
The methods may also be used to solve nonsmooth equations.
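As a rough illustration of the kind of method the abstract describes, the sketch below applies a generalized Newton iteration to a toy function with an LC (but not twice differentiable) gradient. This is not the paper's algorithm: the test function, the finite-difference approximation of a generalized Hessian element, and the backtracking rule that enforces a decrease of the gradient norm are all illustrative assumptions, and the linear system is solved by direct elimination rather than with the inexact residual tolerance the paper's framework would permit.

```python
import math

def grad(x):
    # Gradient of f(x) = 0.5*||x||^2 + 0.5*sum(max(x_i - 1, 0)^2).
    # It is Lipschitz continuous but not differentiable where x_i = 1,
    # so f is an LC^1 function (illustrative choice, not from the paper).
    return [xi + max(xi - 1.0, 0.0) for xi in x]

def generalized_hessian(x, h=1e-6):
    # Forward-difference approximation of one element of the
    # generalized Hessian (the Jacobian of grad where it exists).
    n, g0 = len(x), grad(x)
    cols = []
    for j in range(n):
        xp = list(x)
        xp[j] += h
        gp = grad(xp)
        cols.append([(gp[i] - g0[i]) / h for i in range(n)])
    return [[cols[j][i] for j in range(n)] for i in range(n)]

def solve(A, b):
    # Naive Gaussian elimination with partial pivoting.
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

def generalized_newton(x, tol=1e-8, max_iter=50):
    for _ in range(max_iter):
        g = grad(x)
        if norm(g) <= tol:
            break
        B = generalized_hessian(x)
        d = solve(B, [-gi for gi in g])  # Newton direction
        # Backtracking step that enforces a decrease of ||grad f||,
        # mirroring the gradient-norm-decreasing property in the abstract.
        t = 1.0
        while t > 1e-12:
            trial = [xi + t * di for xi, di in zip(x, d)]
            if norm(grad(trial)) <= (1.0 - 1e-4 * t) * norm(g):
                break
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
    return x
```

Starting from a point such as `[3.0, -2.0]`, the iteration drives the gradient norm down monotonically and reaches the minimizer at the origin in a few steps.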

Journal Article Details

Publisher Name:    Global Science Press

Language:    English

DOI:    https://doi.org/2002-JCM-8918

Published online:    2002-01

AMS Subject Headings:   

Copyright:    © Global Science Press

Pages:    12

Keywords:    Nonsmooth optimization, inexact Newton method, generalized Newton method, global convergence, superlinear rate.

Author Details

Ding-Guo Pu