From: spellucci@mathematik.tu-darmstadt.de (Peter Spellucci)
Subject: Re: Questions about SVD algorithm
Date: 10 May 2000 15:15:58 GMT
Newsgroups: sci.math.num-analysis

In article <8fbl9l$svt$1@nnrp1.deja.com>, mikefairbank@my-deja.com writes:
|> Hi,
|> I have a few questions on the Singular Value Decomposition algorithm
|> from the Numerical Recipes in C book. I would be grateful if anyone
|> can help with any of them...
|> 1. Is the time the algorithm takes to run proportional to N^3 (for a
|> square matrix of side N)? I can't find this in the book but it does
|> say matrix inversion in general is N^3. In this case (SVD), what is
|> the constant before the N^3 (e.g. compared to matrix multiplication)?
|> 2. I'm just using this routine to find the inverse of a matrix. Why
|> does the code in the book not also just work out the inverse too,
|> instead of using U, W and V in the back-sub routine? Each time you
|> call the book's back-substitution routine it has to multiply these 3
|> matrices together, which seems wasteful when you could just do it once.
|> Is there a reason why it does this? I notice that the back-sub routine
|> calculates its answer by multiplying from the right first (i.e.
|> V(W^-1(U'(B)))). Is it done this way round for accuracy or something?

yes, clearly. The only reason to use the SVD for solving linear systems is
numerical stability. If you use it the way it is implemented, the whole
procedure is backward stable in the sense of Wilkinson, due to the
orthogonality of U and V and the properties of the (idealized) floating
point arithmetic. If you formed the inverse explicitly, the matrix
multiplications would cause a large blowup of the rounding errors.

To your first question: it is indeed about n^3, with the constant depending
on details of the implementation. As far as I remember, NR uses Householder
transformations for stage one, the reduction to bidiagonal form, and
classical Givens rotations for stage two.
Hence stage one costs 4/3 n^3 + O(n^2) operations for the reduction to
bidiagonal form itself. In addition you have to accumulate the
transformations, starting from the unit matrix; this adds about 2n^3
operations. "Operation" here means something like d = a + b*c, including
the memory accesses. For stage two, a rule of thumb says you need about
2 sweeps of the "chasing" for every singular value. This is possibly a bit
optimistic for some of the singular values found first, the most
"expensive" ones, since the bidiagonal form shrinks in dimension, but let
us assume 2. Every sweep needs (n-i) Givens rotations from the left and
from the right, where i runs from 1 to n. Hence the work on the bidiagonal
form itself is only O(n^2), but accumulating these rotations into U and V
costs about 4n operations each, giving about 8n^3 additional operations.
Summing up, I get (34/3) n^3 + O(n^2).

You see, an expensive way to do the job, but clearly the best one if your
matrix is almost rank deficient.

hope that helps
peter