Subject: Re: Null-space of an integer matrix
From: Dave Rusin
Date: 19 Jan 1995 17:27:35 GMT
Newsgroups: sci.math.research

In article , Vladimir Kotlyar wrote:
>
>Is there an algorithm for computing a basis for the null-space of an
>integer matrix, so that the basis has small (bounded) entries?

There are several ways to interpret your question. (Assuming, as I will,
that you meant for the basis vectors also to be integral.)

If you want to know what the method is to compute the null space with the
goal of minimizing the largest entry, I'd have to say 'I don't know'. You
could meld Gaussian elimination (as below) with the Euclidean algorithm,
as is done when classifying matrices over a PID (see e.g. Jacobson's
"Basic Algebra"), but I've never tried to do it efficiently; things that
work theoretically are fine with me :-).

If you want to know if the entries can be made 'small', I'd say the answer
is 'not particularly'. Indeed, consider a diagonal matrix with N rows
containing pairwise relatively prime entries on the diagonal, all roughly
equal to K. Adjoin a single column consisting of all 1's. Then the
null-space is one-dimensional, spanned by a vector the last entry of which
is the product of the diagonal entries, i.e. roughly K^N. Is this 'small'?

If you want to know if there is any upper bound on the entries of the
null-space, I'd say 'sure'. If you just modify Gaussian elimination so as
not to do division, you can keep track of how big the entries will grow.
Let M_0 be the greatest magnitude of any entry in the matrix at the
beginning. After n steps you can arrange the matrix to have a diagonal nxn
matrix in the top left with zeros under it. If the greatest magnitude of
the entries at this stage is M_n, then at worst you must multiply rows by
a number this big and then add in another such, so when you consider the
entries you have left you see M_(n+1) <= M_n^2 + M_n < 2M_n^2. By
induction this shows M_n <= 2^(s-1) M_0^s where s = 2^n. Once you end up
with the linearly independent rows diagonalized, you can write down a null
space pretty easily -- although, as the previous example shows, the
entries may be as big as the product of all the diagonal entries. Thus you
can write down _an_ upper bound for the null space entries:
(2M_0)^(N*2^N). I have no doubt someone will chime in with a significant
reduction of the exponent N*2^N, but at least this does give you _a_
bound.

Finally, let me add that you need to decide whether you're interested in
null-spaces or annihilators; that will affect your answer. Let me clarify:
as in the previous paragraph, you can write a sequence of integral vectors
with the property that any rational (or real or ...) vector in the null
space is a rational (real, ...) linear combination of these. That even
applies to integral vectors in the null space. If you want to find a
collection of integral vectors with the property that any integral vector
in the null space is an _integral_ linear combination of the ones in the
first set, then you are asking a question about free modules over the
integers. This is more subtle than the linear algebra; for example, you
run into the question of whether a submodule of a free module is itself
free (true for integers, not true for other rings).
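The diagonal-matrix example above is easy to check numerically. The
following Python sketch is only an illustration: the particular diagonal
entries (pairwise coprime numbers near K = 100) are my own choice, not
from the post. It builds the N x (N+1) matrix, writes down the spanning
vector directly, and confirms that its largest entry is about K^N. Plain
Python integers are used so the arithmetic is exact.

from math import gcd

# N pairwise relatively prime diagonal entries, all roughly K (illustrative values).
N, K = 5, 100
d = [97, 99, 101, 103, 107]
assert all(gcd(d[i], d[j]) == 1 for i in range(N) for j in range(i + 1, N))

# The N x (N+1) matrix: diag(d_1, ..., d_N) with a column of 1's adjoined.
A = [[d[i] if j == i else 0 for j in range(N)] + [1] for i in range(N)]

# Row i reads  d_i * x_i + x_{N+1} = 0,  so  x_i = -x_{N+1} / d_i.
# For every x_i to be integral, x_{N+1} must be divisible by each d_i, hence
# (pairwise coprimality) by their product P; the spanning vector is:
P = 1
for di in d:
    P *= di
v = [-P // di for di in d] + [P]

assert all(sum(A[i][j] * v[j] for j in range(N + 1)) == 0 for i in range(N))
print("largest entry of the spanning vector:", P)   # 10689300963, about K^N = 10^10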
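The division-free elimination just described can also be sketched in a few
lines of exact integer arithmetic. The code below is my own illustration
of the procedure, not the post's: the function name, the pivoting details,
and the choice to scale by the product of the pivots are assumptions. It
returns one integral vector per free column, each annihilated by the
original matrix.

def integer_nullspace(A):
    # Division-free Gaussian elimination over the integers, followed by a
    # read-off of one integral spanning vector per free column.
    M = [row[:] for row in A]                # work on a copy
    m, n = len(M), len(M[0])
    pivots = []                              # (row, column) of each pivot
    r = 0                                    # rows diagonalized so far
    for c in range(n):
        # look for a nonzero entry in column c among the remaining rows
        p = next((i for i in range(r, m) if M[i][c] != 0), None)
        if p is None:
            continue                         # column c stays "free"
        M[r], M[p] = M[p], M[r]
        # clear column c in every other row without dividing:
        #   row_j := pivot * row_j - M[j][c] * row_r
        # so entries grow at worst like M_(n+1) <= M_n^2 + M_n.
        for j in range(m):
            if j != r and M[j][c] != 0:
                f, piv = M[j][c], M[r][c]
                M[j] = [piv * M[j][k] - f * M[r][k] for k in range(n)]
        pivots.append((r, c))
        r += 1

    # scale by the product of the pivots (their lcm would give smaller
    # entries, but the product matches the discussion above)
    L = 1
    for i, c in pivots:
        L *= abs(M[i][c])

    pivot_cols = {c for _, c in pivots}
    basis = []
    for fcol in (c for c in range(n) if c not in pivot_cols):
        v = [0] * n
        v[fcol] = L
        for i, c in pivots:
            v[c] = -M[i][fcol] * (L // M[i][c])
        basis.append(v)
    return basis

# a small check on a matrix of my own choosing
A = [[2, 4, 1],
     [0, 3, 1]]
for v in integer_nullspace(A):
    assert all(sum(row[j] * v[j] for j in range(len(v))) == 0 for row in A)
    print(v)                                 # prints [3, -6, 18]

On the diagonal example above, this routine leaves the matrix unchanged
and returns exactly the vector whose last entry is the product of the
diagonal entries.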
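To make the null-space/annihilator distinction concrete, here is a tiny
example of my own (not from the post): a 2x3 integer matrix for which the
diagonalize-and-scale recipe above yields a vector that spans the null
space over the rationals, while there is an integral null vector that is
not an integral multiple of it.

A = [[2, 0, 1],          # diagonal part (2, 2) with a column of 1's adjoined
     [0, 2, 1]]

w = [-2, -2, 4]          # what the recipe above produces (scale factor 2*2 = 4)
v = [-1, -1, 2]          # an integral null vector, namely w/2

def in_nullspace(x):
    return all(sum(row[j] * x[j] for j in range(3)) == 0 for row in A)

assert in_nullspace(w) and in_nullspace(v)
# w spans the null space over Q (and over R), but v is not an integer
# multiple of w: an integral (Z-module) basis here is {v}, not {w}.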
The method outlined above gives a null-space basis of the following form:
if the gcd of the diagonal entries is D, then a typical basis vector has
most of the initial entries zero except for one, which is a divisor of D;
all the final entries (those not corresponding to the diagonalized rows)
are multiples of D. Clearly not all integral vectors in the null-space
need to be an _integral_ linear combination of such vectors.

dave

==============================================================================
Date: Fri, 20 Jan 95 09:57:09 CST
From: [Permission pending]
To: rusin
Subject: Re: Null-space of an integer matrix

Dave,

I read with interest your recent news article with the above title
(article 758 in sci.math.research). I am just a "news voyeur" in that I
read the articles, but never post any. I don't even know how to go about
posting. However, I think I have something worthwhile to add to your
discussion of finding "small" solutions to homogeneous systems of linear
equations.

The idea is this. Suppose

   a_{11} X_1 + ... + a_{1n} X_n = 0
        ....
   a_{m1} X_1 + ... + a_{mn} X_n = 0

is a system with a_{ij} integral and |a_{ij}| \le A for all i and j.
Suppose that m