It is meaningful to compare the efficiency of two methods only when their per-iteration computational loads are equal. In this paper, two classes of contraction methods for monotone variational inequalities are studied in a unified framework. The methods of both classes can be viewed as prediction-correction methods: they generate the same test vector in the prediction step and adopt the same step-size rule in the correction step, differing only in their search directions, so their per-iteration computational loads are equal. Our analysis explains theoretically why one class usually outperforms the other. Many known methods are shown to belong to these two classes. Finally, the reported numerical results confirm the validity of our analysis.
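The abstract describes the shared structure of the two classes (common prediction step, test vector, and step-size rule; different search directions) but not their formulas. Below is a minimal sketch of a generic projection-based prediction-correction scheme for VI(Omega, F) with that structure; it is not the paper's algorithm. The concrete direction d = e - beta*(F(u) - F(u_tilde)), the step-size rule alpha = (e.d)/||d||^2, the toy operator, and all names (`solve_vi`, `project`) are illustrative assumptions.

```python
# A minimal sketch, NOT the paper's algorithm: a generic projection-based
# prediction-correction scheme for VI(Omega, F), with Omega the nonnegative
# orthant. The direction d, the step-size rule, and all names here are
# illustrative assumptions chosen to match the structure the abstract
# describes (shared prediction and step size, differing search directions).
import numpy as np

def project(u):
    """Projection onto Omega = {u : u >= 0} (componentwise clipping)."""
    return np.maximum(u, 0.0)

def solve_vi(F, u0, correction, beta, gamma=1.8, tol=1e-7, max_iter=10000):
    """Prediction-correction contraction scheme (hypothetical generic form).

    Both classes share the prediction u_tilde = P[u - beta*F(u)], the test
    vector e = u - u_tilde, and the step size alpha = (e.d)/||d||^2; they
    differ only in the direction used by the correction step.
    """
    u = u0.copy()
    for k in range(max_iter):
        Fu = F(u)
        u_tilde = project(u - beta * Fu)          # prediction step
        e = u - u_tilde                           # test vector (residual)
        if np.linalg.norm(e) < tol:
            return u, k
        Fut = F(u_tilde)
        d = e - beta * (Fu - Fut)                 # one candidate direction
        alpha = (e @ d) / (d @ d)                 # shared step-size rule
        if correction == 1:
            u = u - gamma * alpha * d             # class 1: step along d
        else:
            u = project(u - gamma * alpha * beta * Fut)  # class 2: project
    return u, max_iter

# Toy monotone affine operator F(u) = M u + q: the symmetric part of M is
# positive definite (strong monotonicity), and the skew part makes this a
# genuine VI rather than a disguised optimization problem.
rng = np.random.default_rng(0)
n = 10
A = rng.standard_normal((n, n))
M = A @ A.T / n + np.eye(n) + (A - A.T) / n
q = rng.standard_normal(n)
F = lambda u: M @ u + q

beta = 0.9 / np.linalg.norm(M, 2)   # conservative step for this Lipschitz F
for cls in (1, 2):
    u, iters = solve_vi(F, np.zeros(n), correction=cls, beta=beta)
    res = np.linalg.norm(u - project(u - beta * F(u)))
    print(f"class {cls}: {iters} iterations, final residual {res:.2e}")
```

Because both branches reuse the same prediction, test vector, and step size, the per-iteration cost is dominated by the two F-evaluations common to both, mirroring the equal-load footing on which the abstract compares the two classes.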