From: Edward Luke
Subject: Re: sparse matrix
Date: 08 Sep 1999 08:21:26 -0500
Newsgroups: sci.math.num-analysis,sci.stat.math
Keywords: How do sparse matrices "usually" arise?

"E. Robert Tisdale" writes:
>
> Erwin Kalvelagen wrote:
>
> > The number of nonzero elements in the matrices I see
> > is normally bounded by a*n
> > where a is a small number (between 2 and 10).
> > Formulated differently:
> > the number of nonzeroes in a column is small,
> > in general less than 5 - 10,
> > independent of the number of rows.
>
> That, I think, is a remarkable observation.
> Can you explain why it should be so?

Robert,

I am afraid you are revealing how little you know about sparse matrix
problems.  Most sparse matrix problems are derived from circuit or
finite-difference/element/volume PDE schemes.  For most formulations,
the matrix contains non-zero elements where there are connections
between nodes; thus a fully connected circuit would require a dense
matrix.  On the other hand, most circuits are described by graphs of
bounded valence (i.e. a node connects to at most K other nodes,
regardless of the number of nodes in the problem).

For example, if a 9-point stencil is used in a finite-difference
problem, the resulting matrix will consist of 1 diagonal term and 8
off-diagonal terms per row, representing the connection pattern of the
stencil.  As you can see, the number of off-diagonal terms is dictated
by the stencil, not by the size of the matrix.  As a result, the larger
the matrix, the sparser it becomes!

For many of these problems, the resulting linear system can be solved
in O(n) time (usually using iterative algorithms)*.  If this were not
the case, many PDE problems would be fundamentally unsolvable due to
computer resource requirements.

* These algorithms usually require some form of sparse data structure
to identify non-zero elements.
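[A minimal numerical sketch of the stencil argument above, added for illustration and not part of the original post: on an m-by-m finite-difference grid with a 9-point stencil, every matrix row has at most 9 nonzeros, so the total nonzero count is bounded by 9*n while the matrix has n^2 entries.]

```python
# Illustrative sketch (not from the original post): count the nonzero
# pattern of a 9-point finite-difference stencil on an m x m grid.
# Each grid point couples only to itself and its (up to 8) neighbors,
# so the matrix row for every unknown has at most 9 nonzeros, no
# matter how large the grid (and hence the matrix) becomes.

def stencil_nonzeros(m):
    """Per-row nonzero counts for a 9-point stencil on an m x m grid."""
    counts = []
    for i in range(m):
        for j in range(m):
            nnz = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if 0 <= i + di < m and 0 <= j + dj < m:
                        nnz += 1
            counts.append(nnz)
    return counts

for m in (4, 8, 16):
    counts = stencil_nonzeros(m)
    n = m * m
    # nnz grows like 9*n, while the full matrix has n*n entries
    print(m, n, sum(counts), max(counts))
```

The density sum(counts)/n^2 shrinks as the grid is refined, which is Ed's point that "the larger the matrix, the sparser it becomes".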
These data structures are typically very similar or identical to the
representation of the connectivity graph required by the numerical
algorithm itself (as in the finite-difference stencil).  Given this,
your point about using virtual memory to manage zeros is moot.  It
turns out that these data structures are most important for their
management of data for algorithms, rather than for their management of
memory resources.  It just happens that they solve both problems.

--
___________________________________________________________________
Ed Luke                                        lush@erc.msstate.edu
NSF Engineering Research Center for Computational Field Simulations
Mississippi State University        http://www.erc.msstate.edu/~lush

==============================================================================

From: Rolf Mantel
Subject: Re: sparse matrix
Date: 8 Sep 1999 13:49:45 GMT
Newsgroups: sci.math.num-analysis,sci.stat.math

>>>>> "Edward" == Edward Luke writes:

Edward> "E. Robert Tisdale" writes:
>> Erwin Kalvelagen wrote:
>>
>>> The number of nonzero elements in the matrices I see is
>>> normally bounded by a*n where a is a small number (between 2 and
>>> 10).  Formulated differently: the number of nonzeroes in a
>>> column is small, in general less than 5 - 10, independent of
>>> the number of rows.
>>
>> That, I think, is a remarkable observation.  Can you explain why
>> it should be so?

Edward> I am afraid you are revealing how little you know about
Edward> sparse matrix problems.  Most sparse matrix problems are
Edward> derived from circuit or finite-difference/element/volume PDE
Edward> schemes.

I would not over-generalize.  There are sparse matrix problems outside
PDEs.  There are PDE discretisations generating block structures which
in 3 or more space dimensions can benefit from sparse solvers.

Solvers for matrices with a known (especially multi-diagonal) sparsity
structure (e.g.
for finite-difference schemes) are again completely different from
solvers for matrices with random sparsity structures.

If you happen to work in an area dealing exclusively with
low-connectivity structures, your expectations of sparse matrices will
be completely different from those of someone mostly working with
sparse matrices that usually contain log(n) or n^(1/k) entries per
row.  Claiming that any of these areas is more typical than the others
is risky.

Rolf Mantel

==============================================================================

From: Edward Luke
Subject: Re: sparse matrix
Date: 08 Sep 1999 09:57:08 -0500
Newsgroups: sci.math.num-analysis,sci.stat.math

Rolf Mantel writes:
>
> Edward> I am afraid you are revealing how little you know about
> Edward> sparse matrix problems.  Most sparse matrix problems are
> Edward> derived from circuit or finite-difference/element/volume PDE
> Edward> schemes.
>
> I would not over-generalize.  There are sparse matrix problems
> outside PDEs.  There are PDE discretisations generating block
> structures which in 3 or more space dimensions can benefit from
> sparse solvers.
>
> Solvers for matrices with a known (especially multi-diagonal)
> sparsity structure (e.g. for finite-difference schemes) are again
> completely different from solvers for matrices with random sparsity
> structures.
>
> If you happen to work in an area dealing exclusively with
> low-connectivity structures, your expectations of sparse matrices
> will be completely different from those of someone mostly working
> with sparse matrices that usually contain log(n) or n^(1/k) entries
> per row.  Claiming that any of these areas is more typical than the
> others is risky.

I agree completely.  There are many exceptional cases, even the simple
case of spectral methods in PDEs!  Often the sparse matrix domain is
handled with problem-specific solutions.
However, a dominant theme in most of these cases is a matrix derived
from some sort of connection topology in which the valence of the
connectivity graph is bounded by either a constant or a slowly growing
function (e.g. log(n) or n^(1/k)).  One of the simplest examples of
this is the finite-difference stencil.  As a result, the access
patterns of the numerical algorithm and the linear solver are usually
much more strongly coupled.  This is in stark contrast to typical
dense matrix applications.

BTW: I am finding, however, that multi-diagonal sparsity may be best
treated as random/semi-random sparsity when parallel processing is
considered.  In this case, the flexibility of a random-sparsity data
structure allows for a more efficient decomposition of the parallel
workload.

- Ed
--
___________________________________________________________________
Ed Luke                                        lush@erc.msstate.edu
NSF Engineering Research Center for Computational Field Simulations
Mississippi State University        http://www.erc.msstate.edu/~lush
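[Archivist's sketch, not part of the thread: one common concrete form of the "sparse data structure that doubles as the connectivity graph" discussed above is compressed sparse row (CSR) storage.  The row-pointer and column-index arrays are exactly an adjacency-list encoding of the stencil's graph; the names below are illustrative, not from any specific library.]

```python
# Illustrative CSR (compressed sparse row) sketch: row_ptr/col_idx are
# an adjacency-list encoding of the connectivity graph, which is why
# the solver's data structure and the stencil's graph coincide.

def csr_matvec(row_ptr, col_idx, vals, x):
    """y = A @ x for a matrix stored in CSR form."""
    y = [0.0] * (len(row_ptr) - 1)
    for i in range(len(y)):
        # entries of row i live in vals[row_ptr[i]:row_ptr[i+1]]
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[col_idx[k]]
    return y

# A 4x4 tridiagonal matrix (1D 3-point stencil): 2 on the diagonal,
# -1 on the off-diagonals -- at most 3 nonzeros per row, so only
# 10 values are stored instead of 16.
row_ptr = [0, 2, 5, 8, 10]
col_idx = [0, 1, 0, 1, 2, 1, 2, 3, 2, 3]
vals    = [2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0, -1.0, -1.0, 2.0]

print(csr_matvec(row_ptr, col_idx, vals, [1.0, 1.0, 1.0, 1.0]))
# -> [1.0, 0.0, 0.0, 1.0]
```

An iterative solver built on such a matvec touches only stored entries, which is how the O(n)-per-iteration cost claimed in the first post is achieved.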