From: Fedup@nospam.com (Paul Tarr)
Subject: Re: Applied Linear Algebra Norms
Date: Wed, 10 May 2000 22:40:28 GMT
Newsgroups: sci.math.research
I want to thank those who provided responses to my original normed vector
space question. I have summarized the responses but they are too long to post.
They are available at the web site
http://home1.gte.net/ptarr/Overview.htm
and I added a description of the original problem that prompted the question.
For those who are new, here is the original question:
I have been searching for a forum on applied linear algebra. I haven't found
one and this may not be the place but perhaps someone here can direct me to
such a forum.
Basically I am trying to form a coherent picture of a "state space" such as is
used in engineering control theory. It does not seem that such a space is a
normed vector space (nor it seems can the concept of "distance" or length be
effectively defined) and that tends to create all kinds of problems. For
example, if I have a state vector transpose(X)=(x,v), where x is position and v
is velocity, then the inner product of X with itself is (xx+vv), which adds two
quantities with different units (e.g., [meters squared] plus [meters
squared]/[second squared]) and so has no meaning in the usual sense of being a
measure of "length" squared.
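A quick numerical sketch of that unit-dependence (the values are hypothetical, chosen only for illustration):

```python
import math

# Hypothetical state: position x in meters, velocity v in meters/second.
x, v = 3.0, 4.0

# Naive "norm" of the state vector X = (x, v): sqrt(x^2 + v^2).
norm_si = math.sqrt(x**2 + v**2)          # v in m/s

# Re-express the *same physical state* with v in centimeters/second.
v_cgs = v * 100.0
norm_mixed = math.sqrt(x**2 + v_cgs**2)   # v in cm/s

# The "length" of the same physical state changes with the unit choice,
# so it is not a physically meaningful distance.
print(norm_si, norm_mixed)   # 5.0 vs. ~400.01
```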
Things get worse when I consider a state vector such as
transpose(X)=(x,v,phi,da) where x is position, v is velocity, phi is an angle
and da is a gravity error. Such a state vector comes up in one-dimensional
INS (inertial navigation system) analysis, and using common textbook formulas
one sometimes gets the silly result that the answer depends on the gravity
units employed (e.g., meters/(second squared), feet/(second squared), etc.).
This is
particularly vexing when forming suboptimal state vectors using pseudo
inverses. In individual cases one can compensate, but the compensation is ad
hoc. What I'd like to see is a coherent theory. I thought for a (brief) while
that a metric tensor could be used to solve the problem, but that didn't seem
to work out.
Math texts state that vectors are defined over a field (real numbers, complex
numbers, etc.), which seems to preclude a state space being a vector space
because the various "axes" of the state space have different units. Engineering
texts ignore the problem but use theorems proved valid over a vector space to
obtain results. For example, optimality proofs (e.g., for Kalman filters) rest
on projection (but how do you project a position onto a velocity?) or on the
construction of eigenvectors (but how do you mix a position and a velocity to
form a new "eigenvector"?). As stated earlier, ad hoc methods can be employed in
individual circumstances but I also worry about the validity of the general
results without a coherent theory.
I'd like to find a discussion group, reference or textbook that discusses
these issues. Any help a reader can provide will be appreciated.
==============================================================================
From: Steve Lord
Subject: Re: Norms and Spectral Radii
Date: Fri, 20 Oct 2000 16:30:06 -0400
Newsgroups: sci.math
On 20 Oct 2000, Michael A. Schaeffer wrote:
> I used to be a lurker on this newsgroup, and now, after deciding to start
> lurking again, I see this flamewar on such things as 'norms' and 'spectral
> radii'. At any rate, as irrelevant as certain parts of the debate are, it
> did pique my curiosity enough to cause me to ask this question...
>
> What is a norm, and what is a spectral radius? I've taken basic calculus
There are several different definitions for 'norm'. Let's take a look at
norms on a vector space.
A vector space V over a field F is a set of vectors that is closed under
vector addition and under multiplication by scalars drawn from F.
A norm (we'll call it n) on the vector space V is a function that maps V
to the nonnegative real numbers and satisfies the following properties:
1) n(A) >= 0 for all A in V, and n(A) = 0 implies that A = 0.
2) n(k*A) = |k|*n(A) where A is in V and k is in F.
3) n(A+B) <= n(A) + n(B) for all A, B in V.
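As a spot check (not a proof), here is a small Python sketch verifying the three properties for the usual Euclidean norm on a couple of sample vectors:

```python
import math

def norm(a):
    """Euclidean norm of a vector given as a list of reals."""
    return math.sqrt(sum(c * c for c in a))

A = [3.0, 4.0]
B = [-1.0, 2.0]
k = -2.5

# 1) Nonnegativity, and n(A) = 0 for the zero vector.
assert norm(A) >= 0 and norm([0.0, 0.0]) == 0.0

# 2) Absolute homogeneity: n(k*A) = |k| * n(A).
kA = [k * c for c in A]
assert math.isclose(norm(kA), abs(k) * norm(A))

# 3) Triangle inequality: n(A+B) <= n(A) + n(B).
AB = [a + b for a, b in zip(A, B)]
assert norm(AB) <= norm(A) + norm(B)
```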
I'm sure that if you ask someone in the math department at UTexas, they
can point you in the direction of some books about norms, if you're
interested.
The spectral radius of a matrix A is defined to be the maximum absolute
value among the eigenvalues of A. There is a norm which has something to
do with the spectral radius; the 2-norm of a matrix A is the square root
of the greatest eigenvalue of A^T * A, where A^T is the transpose of A.
If A is symmetric, then the 2-norm of A is simply its spectral radius.
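A small sketch of that relationship, using a hand-rolled eigenvalue formula for symmetric 2x2 matrices (the example matrix is mine, not from the thread):

```python
import math

def sym2_eigs(a, b, d):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]]."""
    m = (a + d) / 2.0
    r = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    return m - r, m + r

# Example matrix A = [[2, 1], [1, 2]]: eigenvalues 1 and 3.
lo, hi = sym2_eigs(2.0, 1.0, 2.0)
spectral_radius = max(abs(lo), abs(hi))

# For a symmetric A, A^T * A = A^2, whose eigenvalues are the squares of
# A's, so the 2-norm sqrt(max eig of A^T A) equals the spectral radius.
# Entries of A^T * A = [[5, 4], [4, 5]]:
lo2, hi2 = sym2_eigs(5.0, 4.0, 5.0)
two_norm = math.sqrt(max(abs(lo2), abs(hi2)))

assert math.isclose(two_norm, spectral_radius)   # both are 3.0
```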
> (no diff eq) and linear algebra, so I'm not too familiar with the
> terms. I'm also curious how they can be used to solve converging
> series. Pertti Lounesto's convenient story raised more questions than
> answers, for me at least.
Not only do norms have something to do with the convergence of a matrix
series, they also have to do with the kind of error you'd expect if you
tried to solve the system A*x=b but instead solved a slightly perturbed
system (A+e)*x=b.
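A sketch of that perturbation effect on a nearly singular 2x2 system (the numbers are purely illustrative):

```python
def solve2(a11, a12, a21, a22, b1, b2):
    """Solve the 2x2 system [[a11,a12],[a21,a22]] x = (b1,b2) by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det,
            (a11 * b2 - a21 * b1) / det)

# A nearly singular system: the exact solution is (1, 1).
x1, x2 = solve2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001)

# Perturb one entry of A by 1e-4 (the "e" in (A+e)*x = b).
y1, y2 = solve2(1.0, 1.0, 1.0, 1.0002, 2.0, 2.0001)

# The perturbation of size 1e-4 moved the solution by about 0.5:
# the error is amplified because A is badly conditioned.
```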
Steve L
==============================================================================
From: Lynn Killingbeck
Subject: Re: Norms and Spectral Radii
Date: Sun, 22 Oct 2000 02:50:52 -0500
Newsgroups: sci.math
Hop David wrote:
>
> Randy Poe wrote:
>
> >
> >
> > A norm is a way of measuring size, and it can be defined in a general
> > enough way that you can assign a magnitude to all sorts of things,
> > such as matrices.
> >
> > You know what absolute value is, I suppose. That's a norm for real
> > numbers.
> >
> > Have you run into magnitude of a vector? That's sqrt(x^2 + y^2) for a
> > 2-vector. It is the same as the straight-line distance from the origin
> > to the end of the vector.
> >
> > That is one example of a norm, but there are others, even for vectors.
> > One could use the p-th root of the sum of the p-th powers, for
> > instance, where p is anything from 1 to infinity. That's the so-called
> > p-norm. There are only a few basic requirements on a size measure for
> > it to qualify as a norm.
> >
> >
>
> That's been my understanding - norm is a synonym for size.
>
> I had thought that the determinant was a matrix norm, but several have pointed
> out that it's possible for a non-zero matrix to have a zero determinant, thus
> determinants are an unsatisfactory norm.
>
> I'll try to restate the norm of a matrix explained to me: Transform all the
> unit vectors with a matrix, and the norm of the largest transformed vector is
> the norm of the matrix.
>
> I see this is a size of sorts but it bothers me. For example the matrices:
>
> 2 0 0 0 0 0
> 0 2 0 and 0 2 0
> 0 0 2 0 0 2
>
> would both have the same norm: 2
>
> But the first seems a lot bigger to me. The second takes R^3 to a mere shadow
> of itself (although a shadow twice the size, or maybe I should say 4 times the
> area).
>
> -- Hop
> http://clowder.net/hop/index.html
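Hop's two matrices can be checked directly: for a diagonal matrix the operator norm he describes is just the largest absolute diagonal entry (the most any unit vector gets stretched), while the determinant tracks the "volume" intuition he has in mind. A small sketch:

```python
def diag_norm(d):
    """Operator 2-norm of a diagonal matrix with diagonal entries d."""
    return max(abs(x) for x in d)

def diag_det(d):
    """Determinant of a diagonal matrix: the product of the diagonal."""
    p = 1.0
    for x in d:
        p *= x
    return p

a = [2.0, 2.0, 2.0]   # diag(2, 2, 2)
b = [0.0, 2.0, 2.0]   # diag(0, 2, 2)

assert diag_norm(a) == diag_norm(b) == 2.0        # same norm...
assert diag_det(a) == 8.0 and diag_det(b) == 0.0  # ...different "volume"
```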
My linear algebra textbook gives 5 different norms. I'll try to transcribe
them into ASCII. All are for an n-by-n square matrix A.
M(A) = n * max_i_j |a[i,j]|, where max_i_j is the maximum over all rows
and columns.
N(A) = root_sum_square (a[i,j]), the Euclidean norm.
||A||_1 = max_i(sum_k|a[i,k]|), that is, sum the magnitudes along each
row and pick the largest such row sum.
||A||_2 = max_k(sum_i|a[i,k]|), like the previous, but sum down the
columns rather than along the rows before picking the largest.
||A|| = sqrt(lambda_1), where lambda_1 is the greatest eigenvalue of the
matrix (A*) * A, called the spectral norm elsewhere in these posts.
There follow some 10 inequalities amongst these various norms. The
following section is "Convergence of a Geometric Progression".
All in the section "The Concept of a Limit in Linear Algebra". Hope I
didn't mangle the material past recognition!
Lynn Killingbeck
P.S. The text is "Computational Methods of Linear Algebra" by
Faddeev and Faddeeva.
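As a quick check of the five formulas above, here is a sketch computing each of them for a small 2x2 example (the matrix is arbitrary, not from the book):

```python
import math

A = [[1.0, 2.0],
     [3.0, 4.0]]
n = 2

# M(A) = n * max |a[i,j]| over all entries.
M = n * max(abs(A[i][j]) for i in range(n) for j in range(n))        # 8.0

# N(A): root-sum-square of all entries (Euclidean norm).
N = math.sqrt(sum(A[i][j] ** 2 for i in range(n) for j in range(n))) # sqrt(30)

# ||A||_1 = max_i(sum_k |a[i,k]|): largest row sum of magnitudes.
norm1 = max(sum(abs(A[i][k]) for k in range(n)) for i in range(n))   # 7.0

# ||A||_2 = max_k(sum_i |a[i,k]|): largest column sum of magnitudes.
norm2 = max(sum(abs(A[i][k]) for i in range(n)) for k in range(n))   # 6.0

# Spectral norm: sqrt of the largest eigenvalue of A^T * A (2x2 case).
B = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]                       # B = A^T * A = [[10,14],[14,20]]
m = (B[0][0] + B[1][1]) / 2.0
r = math.sqrt(((B[0][0] - B[1][1]) / 2.0) ** 2 + B[0][1] ** 2)
spectral = math.sqrt(m + r)                   # ~5.465
```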