Convergence theory for preconditioned eigenvalue solvers in a nutshell

Bibliographic Details
Title: Convergence theory for preconditioned eigenvalue solvers in a nutshell
Authors: Argentati, Merico E., Knyazev, Andrew V., Neymeyr, Klaus, Ovtchinnikov, Evgueni E., Zhou, Ming
Source: Foundations of Computational Mathematics, 17(3), pp. 1-15, 2017. Online: 23 November 2015
Publication Year: 2014
Collection: Mathematics
Subject Terms: Mathematics - Numerical Analysis, 49M37, 65F15, 65K10, 65N25
Description: Preconditioned iterative methods for numerical solution of large matrix eigenvalue problems are increasingly gaining importance in various application areas, ranging from material sciences to data mining. Some of them, e.g., those using multilevel preconditioning for elliptic differential operators or graph Laplacian eigenvalue problems, exhibit almost optimal complexity in practice, i.e., their computational costs to calculate a fixed number of eigenvalues and eigenvectors grow linearly with the matrix problem size. Theoretical justification of their optimality requires convergence rate bounds that do not deteriorate with the increase of the problem size. Such bounds were pioneered by E. D'yakonov over three decades ago, but to date only a handful have been derived, mostly for symmetric eigenvalue problems. Only a few of the known bounds are sharp. One of them is proved in [doi:10.1016/S0024-3795(01)00461-X] for the simplest preconditioned eigensolver with a fixed step size. The original proof has been greatly simplified and shortened in [doi:10.1137/080727567] by using a gradient flow integration approach. In the present work, we give an even more succinct proof, using novel ideas based on Karush-Kuhn-Tucker theory and nonlinear programming.
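Illustration: the "simplest preconditioned eigensolver with a fixed step size" mentioned in the abstract is a preconditioned gradient (PINVIT-type) iteration for the smallest eigenpair. Below is a minimal Python sketch of that idea; the test matrix, the Jacobi preconditioner, the unit step size, and the iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rayleigh_quotient(A, x):
    # rho(x) = (x^T A x) / (x^T x)
    return (x @ (A @ x)) / (x @ x)

def preconditioned_fixed_step(A, T, x, num_iters=500):
    """Fixed-step preconditioned gradient iteration for the smallest eigenpair.

    Update: x <- x - T (A x - rho(x) x), with the step size fixed to 1
    (an illustrative choice; the paper analyzes this class of methods,
    not this particular script).
    """
    for _ in range(num_iters):
        rho = rayleigh_quotient(A, x)
        r = A @ x - rho * x            # eigenvalue residual
        x = x - T @ r                  # preconditioned update
        x = x / np.linalg.norm(x)      # normalize to keep the iterate well scaled
    return rayleigh_quotient(A, x), x

if __name__ == "__main__":
    n = 20
    # Symmetric positive definite test matrix: 1D Laplacian stencil (assumption)
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    T = np.diag(1.0 / np.diag(A))      # simple Jacobi preconditioner (assumption)
    x0 = np.random.default_rng(0).standard_normal(n)
    lam, _ = preconditioned_fixed_step(A, T, x0)
    print("approx. smallest eigenvalue:", lam)
    print("exact smallest eigenvalue:  ", np.linalg.eigvalsh(A)[0])
```

The sharp bounds discussed in the abstract concern how fast the Rayleigh quotient produced by such an iteration approaches the target eigenvalue, with a rate that depends only on the preconditioner quality and the eigenvalue gap, not on the problem size.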
Comment: 12 pages, accepted for Foundations of Computational Mathematics 2015
Document Type: Working Paper
DOI: 10.1007/s10208-015-9297-1
Access URL: http://arxiv.org/abs/1412.5005
Accession Number: edsarx.1412.5005
Database: arXiv