From: hook@nas.nasa.gov (Ed Hook)
Subject: Re: Lanczos Algorithm
Date: 8 Mar 2000 17:37:31 GMT
Newsgroups: sci.math
In article <38C60A82.221EE8D0@ix.urz.uni-heidelberg.de>,
Asmus Loewenhaupt writes:
|> I also have problems proving and understanding this
|> theorem:
|> Let Km(A,v) be the Krylov subspace Km(A,v) = span{v, A*v, ..., A^m*v},
|> where A is an (n x n) matrix over R and v is a vector. Let p be the
|> minimal polynomial of v with respect to A, i.e. the monic polynomial
|> of least degree such that p(A)*v = 0. Let µ be the degree of p.
|> Then Kµ(A,v) is invariant under A, and Km(A,v) = Kµ(A,v) for all
|> m >= µ.
|>
It's clear that Kµ(A,v) is invariant under A if and only if
A^{µ+1}v belongs to Kµ(A,v). But µ is the degree of p, which
means you can use the relation p(A)v = 0 to write
A^µ v in terms of the vectors A^{µ-1} v, A^{µ-2} v, ... , Av, v.
Multiplying by A, you can then write A^{µ+1} v in terms of
A^µ v, ... , Av -- that is, A^{µ+1}v lies in Kµ(A,v). Hence,
Kµ(A,v) is invariant under A.
For the other claim, note that Kµ(A,v) is (obviously) a subspace
of Km(A,v) if m > µ; you just need to show the _other_ inclusion.
But the argument above can be repeated to show, by induction, that
A^m v lies in Kµ(A,v) for every m > µ. Therefore, Km(A,v) is a
subspace of Kµ(A,v) if m > µ and the claim follows.
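The argument can be checked numerically; here is a small sketch (my illustration, not part of the post) using NumPy: for a random A and v, the dimension of span{v, Av, ..., A^m v} grows by one per step until it saturates at µ, and never grows again afterwards.

```python
import numpy as np

# Sanity check of the theorem: the Krylov subspace K_m(A,v) stops
# growing once its dimension reaches µ, the degree of the minimal
# polynomial of v with respect to A.
rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
v = rng.standard_normal(n)

def krylov_rank(A, v, m):
    """Dimension of span{v, A v, ..., A^m v}."""
    cols = [v]
    for _ in range(m):
        cols.append(A @ cols[-1])
    return np.linalg.matrix_rank(np.column_stack(cols))

ranks = [krylov_rank(A, v, m) for m in range(2 * n)]
mu = max(ranks)  # dimension of the saturated space equals µ

# K_m(A,v) = K_µ(A,v) for all m >= µ: the dimension never grows past µ.
assert all(r == mu for r in ranks[mu:])
print("dimension saturates at", mu)
```

For a generic random pair (A, v) the minimal polynomial of v coincides with the characteristic polynomial, so µ = n here.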
--
Ed Hook | Copula eam, se non posit
MRJ Technology Solutions, Inc. | acceptera jocularum.
NAS, NASA Ames Research Center | I can barely speak for myself, much
Internet: hook@nas.nasa.gov | less for my employer
==============================================================================
From: sci.math.num-analysis@elsner.org (Ulrich Elsner)
Subject: Re: Iterative analysis : convergence
Date: 14 Dec 2000 09:59:44 GMT
Newsgroups: sci.math.num-analysis
According to Eric GUERIN :
> I need to solve large linear systems involving 200x200 matrices. Such a
> large number of equations implies that I can't use a direct method.
A 200x200 matrix will take about 320 Kbytes memory. On my machine (a
Pentium-III 800) it takes less than 0.1 second to solve such a system
(using Matlab). Unless your resources are really limited, just use a
direct solver, e.g., from LAPACK http://www.cs.colorado.edu/~lapack/. If
performance matters, make sure your code is well tuned, e.g. by using the
ATLAS BLAS routines http://www.netlib.org/atlas/ .
Iterative methods will (in general) only help if your matrices are
sparse.
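To illustrate the advice above, here is a minimal sketch (mine, not from the post) of a dense direct solve; NumPy's `np.linalg.solve` calls LAPACK's LU-based driver under the hood, so a 200x200 system is essentially instantaneous on any modern machine.

```python
import numpy as np

# Direct solve of a dense 200x200 linear system A x = b.
n = 200
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x = np.linalg.solve(A, b)  # LU factorization with partial pivoting (LAPACK)

# The relative residual should be close to machine precision.
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(f"relative residual: {residual:.2e}")
```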
> That's why I'm trying to use iterative methods such as Jacobi or
> Gauss-Seidel. My problem is that the convergence condition is very
> strict: the spectral radius of the iteration matrix must be less than 1.
>
> - Are there other methods with less strict convergence conditions?
> - Is there a way to adapt Gauss-Seidel?
Modern Krylov-subspace-based methods (CG, QMR, GMRES, ...) should be faster
and converge for a much wider class of matrices. For a first look at these
methods, try
@BOOK{templates,
author = "R. Barrett and M. Berry and T. Chan and J. Demmel and J. Donato
and J. Dongarra and V. Eijkhout and R. Pozo and C. Romine and H.
van der Vorst",
title = "Templates for the solution of linear systems",
year = 1994,
publisher = "SIAM"
}
which can be found as a Postscript file at
http://www.netlib.org/templates/templates.ps
Regards,
Ulrich Elsner
--
Ulrich Elsner, Fak. fuer Mathematik, TU Chemnitz, D-09107 Chemnitz, Germany