From: Ralf Loesche
Newsgroups: sci.math.num-analysis
Subject: Re: Lg. eigensystems: best approach
Date: Wed, 04 Mar 1998 13:37:49 +0100

Greg Newby wrote:
>
> Thanks for helping to clarify the question.  I've received a few
> email & posted responses on different algorithms, and will be
> looking at the packages that implement them.  More below:
> [MEGASNIP]
>
> ======== specifics
> [SNIP]
>
> : - if you need only some values, which ones (the biggest EV, some EV
> :   close to some given value).
>
> Start with the very largest, then get the next largest etc., until
> a cutoff is reached.

Well, knowing this, you could use Lanczos-type methods for your
symmetric term-by-term matrix.

Note: Lanczos methods sometimes fail to deliver the right multiplicity
for some of the eigenvalues in question, i.e., you get more eigenvalues
(and corresponding eigenvectors) of a certain magnitude than the matrix
actually has, or you get fewer than the correct number. Detecting this
kind of 'ghost eigenvalue' is not easy and is one of the major drawbacks
of Lanczos/Arnoldi-type methods (see Cullum/Willoughby, 'Lanczos
Algorithms for Large Symmetric Eigenvalue Computations'). If you need
the multiplicities computed correctly for your problem, you should look
for alternative methods.

If the 100 largest eigenvalues/eigenvectors are in question and your
matrix is positive definite, then you can iterate 100 vectors
simultaneously (power method, subspace iteration). This can be done
until the iterated subspace is close enough to the invariant subspace
belonging to the 100 largest eigenvalues. Alternatively, as a final,
fast-converging refinement step, you can use Newton's method applied
to the eigenvalue problem as described in

   http://www.math.tu-dresden.de/~loesche/IOKOMO-03-96.ps

This gives you the eigenvalues and the corresponding eigenvectors that
belong to the invariant subspace. All the linear algebra can be done
using matrix-vector multiplications only, which is very efficient if
you exploit the sparsity of your matrix.

[MEGASNIP]

HTH,
   Ralf Loesche
--
##################################################################
 Please REMOVE THIS to reply to this message.
##################################################################
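
[For illustration, a minimal sketch of the subspace iteration Loesche
describes, in Python with SciPy; the matrix A, the block size k, and
the tolerance are stand-in choices, not part of the original post:]

    import numpy as np
    import scipy.sparse as sp

    def subspace_iteration(A, k, tol=1e-8, max_iter=500):
        """Approximate the k largest eigenpairs of a symmetric positive
        definite sparse matrix A by simultaneous (block power) iteration."""
        n = A.shape[0]
        rng = np.random.default_rng(0)
        Q, _ = np.linalg.qr(rng.standard_normal((n, k)))  # random orthonormal block
        eigs = np.zeros(k)
        for _ in range(max_iter):
            Z = A @ Q                     # A enters through mat-vecs only
            Q, _ = np.linalg.qr(Z)        # re-orthonormalize the iterated block
            H = Q.T @ (A @ Q)             # Rayleigh-Ritz projection onto span(Q)
            w, V = np.linalg.eigh(H)      # small dense k-by-k eigenproblem
            done = np.max(np.abs(w - eigs)) < tol * np.max(np.abs(w))
            eigs = w
            if done:
                break
        Q = Q @ V                         # Ritz vectors approximate eigenvectors
        return eigs[::-1], Q[:, ::-1]     # largest first

    # Toy usage on a random sparse SPD matrix:
    n = 2000
    B = sp.random(n, n, density=0.01, format="csr", random_state=1)
    A = B @ B.T + sp.identity(n)          # SPD by construction
    vals, vecs = subspace_iteration(A, k=10)
    print(vals[:5])
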
==============================================================================

From: elsner@mathematik.tu-chemnitz.de (Ulrich Elsner)
Newsgroups: sci.math.num-analysis
Subject: Re: Lg. eigensystems: best approach
Date: 4 Mar 1998 09:41:36 GMT

Greg Newby wrote:
>Ulrich Elsner (elsner@mathematik.tu-chemnitz.de) wrote:
>[Explanation of the problem]
>
>: - do you need _all_ eigenvalues and eigenvectors
>No. [...]
>With 6000 by 6000 matrices, this is something like 300 EVs.  I don't
>know how many it would be with a much larger matrix, but probably
>wouldn't want more than a few thousand EVs anyway.
>
> [only needs the largest EV]
>
>In a 13000 by 13000 term by term matrix I'd like to analyze,
>it's about 92% sparse
>
>: Is the matrix only symmetric or also positive definite
>Yes, both.
>
>: - How important is performance, [...]
>So, performance doesn't matter from that standpoint.  But considering
>that 6000 by 6000 takes several hours, I don't want to wait several
>years for 100K by 10M.

OK, thanks for the clarification. This makes recommending suitable
algorithms and software much easier.

First, a couple of general remarks:

- Your matrix is definitely sparse enough that it is worth using
  specialized sparse algorithms. This means you can fit larger matrices
  into the available memory, thus avoiding slow out-of-core methods.

- You only need a small percentage of all eigenvectors. This again is
  good in memory terms, since eigenvectors are dense (so storing all
  eigenvectors takes a lot of space). It also means you can effectively
  use algorithms that produce eigenvectors "one at a time", like the
  Lanczos- or Arnoldi-type algorithms. You want the eigenvectors
  corresponding to the biggest eigenvalues, so Lanczos will converge
  rather fast.

So it all sounds like your first shot should be a Lanczos-type
algorithm. Explanations can be found in any good numerical analysis
book, e.g. in

@BOOK{GolVL,
  author = "G. H. Golub and Loan, C. F. van",
  title = "Matrix Computations",
  year = 1996,
  publisher = "Johns Hopkins University Press",
  address = "Baltimore and London",
  edition = "third"}

If you are only interested in solving the problem and not in the
theory, there are many software packages available (e.g.
http://www.netlib.org for lots of free code).

I would probably first try ARPACK, which uses implicitly restarted
Arnoldi, a modern variant of the Lanczos algorithm
(http://www.caam.rice.edu/~kristyn/parpack_home.html or on netlib).

Since this requires you to supply the matrix-vector multiplication
routine, you might also want to look at some general sparse libraries
(e.g. sparse-blas on netlib or SPARSKIT,
http://www.cs.umn.edu/Research/arpa/SPARSKIT/sparskit.html ).

An alternative would be sparse direct methods. I don't know much about
them, but I dimly remember a lecture by M. Berry
(http://www.cs.utk.edu/~berry/) at the last SIAM conference on sparse
matrices.

I hope this helps a little,

Regards,
--
Ulrich Elsner, Fak. fuer Mathematik, TU Chemnitz, D-09107 Chemnitz, Germany
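
[Again for illustration only: a minimal sketch of the ARPACK route
Elsner recommends, driven through SciPy's eigsh wrapper, which calls
ARPACK's symmetric driver and needs only matrix-vector products. The
random matrix below is a stand-in for the term-by-term matrix:]

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, eigsh

    # Stand-in for the ~92%-sparse symmetric term-by-term matrix.
    n = 4000
    B = sp.random(n, n, density=0.04, format="csr", random_state=0)
    A = (B + B.T) * 0.5                  # symmetrize

    # ARPACK only ever asks for A @ v, so a LinearOperator suffices
    # even when the matrix is never formed explicitly.
    op = LinearOperator(A.shape, matvec=lambda v: A @ v, dtype=A.dtype)

    # The 100 algebraically largest eigenvalues and their eigenvectors.
    vals, vecs = eigsh(op, k=100, which="LA")

    idx = np.argsort(vals)[::-1]         # order from largest to smallest
    vals, vecs = vals[idx], vecs[:, idx]
    print(vals[:5])
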