Numer. Math. Theor. Meth. Appl., 8 (2015), pp. 313-335.
Published online: 2015-08
In this paper, a primal-dual interior point method is proposed for general constrained optimization, which incorporates a penalty function and a new technique for identifying the active set. At each iteration, the proposed algorithm needs to solve only two or three reduced systems of linear equations with the same coefficient matrix. The size of these linear systems is decreased by introducing a working set, which is an estimate of the active set. The penalty parameter is updated automatically, and the uniform positive definiteness condition on the Hessian approximation of the Lagrangian is relaxed. The proposed algorithm is globally and superlinearly convergent under mild conditions. Finally, some preliminary numerical results are reported.
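To make the "same coefficient matrix" idea concrete, the sketch below shows a generic primal-dual interior-point step in which one KKT matrix is assembled and factored once, and the factorization is then reused to solve two right-hand sides. This is only an illustrative sketch under simple assumptions (inequality constraints $c(x) \ge 0$, dense linear algebra); it is not the algorithm of the paper, and the function name, splitting, and problem data are hypothetical.

```python
# Minimal illustrative sketch (not the paper's algorithm): one primal-dual
# interior-point step for  min f(x)  s.t.  c(x) >= 0, showing how a single
# factorization of the KKT coefficient matrix can be reused for several
# right-hand sides (e.g. a predictor and a centering direction).
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def primal_dual_step(grad_f, hess_L, c, jac_c, lam, mu):
    """One Newton step on the perturbed KKT conditions (hypothetical helper).

    grad_f : gradient of the objective at x
    hess_L : (approximate) Hessian of the Lagrangian at (x, lam)
    c      : constraint values c(x), assumed > 0 at interior iterates
    jac_c  : Jacobian of c at x
    lam    : current dual estimate, lam > 0
    mu     : barrier / centering parameter
    """
    n, m = hess_L.shape[0], c.size
    C, Lam = np.diag(c), np.diag(lam)

    # Assemble the primal-dual KKT coefficient matrix once ...
    K = np.block([[hess_L,       -jac_c.T],
                  [Lam @ jac_c,   C      ]])
    lu, piv = lu_factor(K)                      # ... and factor it once.

    # First solve: affine (predictor) direction, targeting mu = 0.
    r_aff = np.concatenate([-(grad_f - jac_c.T @ lam), -C @ lam])
    d_aff = lu_solve((lu, piv), r_aff)

    # Second solve: centering direction, reusing the same factorization.
    r_cen = np.concatenate([np.zeros(n), mu * np.ones(m)])
    d_cen = lu_solve((lu, piv), r_cen)

    d = d_aff + d_cen
    return d[:n], d[n:]                         # primal step dx, dual step dlam
```

In the paper's setting, the systems are additionally reduced to the working set (the estimate of the active constraints), which is what shrinks the size of the linear algebra at each iteration; the sketch above omits that reduction for brevity.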