From: rgep@pmms.cam.ac.uk (Richard Pinch)
Newsgroups: sci.math
Subject: Re: Euclid's algorythm
Date: 24 Feb 1996 10:12:09 GMT

In article <312E4841.EE7@login.net>, Kami Rousseau writes:
|> Can anyone explain to me what the Euclidian algorythm for finding the
|> GCD is??

\newcommand{\blob}{$\bullet$}
\newcommand{\congruent}{\equiv}
\renewcommand{\mod}{\bmod}
\newcommand{\prob}{\mathop{\rm Prob}\nolimits}
\def\O(#1){{\mbox{\rm O}\left({#1}\right)}}
\def\paren(#1){{\left({#1}\right)}}

\subsection{Euclid's algorithm over the integers}

In general the highest common factor of integers $x$ and $y$ may be
obtained by the {\em Euclidean algorithm}.  Define sequences $q_n$, $r_n$
as follows:
\begin{eqnarray*}
x &=& y q_0 + r_1 \\
y &=& r_1 q_1 + r_2 \\
  & & \ldots \\
r_{n-1} &=& r_n q_n + r_{n+1} \\
\end{eqnarray*}
where $0 \le r_{n+1} < r_n$.  We refer to the $q_n$ as the {\em partial
quotients} and the $r_n$ as the {\em (partial) remainders}.  Since the
remainders $r_n$ form a decreasing sequence of non-negative integers,
there must be an $N$ such that $r_{N+1} = 0$.  It is not hard to see that
$r_N$ is the highest common factor of $x$ and $y$.  Further, since each
$r_{n+1}$ is a linear combination of $r_{n-1}$ and $r_n$, we can find
$a, b$ such that $r_N = a x + b y$.  In particular, if $r_N = 1$, so that
$x$ and $y$ are coprime, then $a x \congruent 1 \bmod y$, so we obtain
the inverse of $x$ modulo $y$.
\label{Euclid-alg-Z}

We can also use the algorithm to obtain a constructive proof of the
Chinese Remainder Theorem.  If moduli $m$ and $n$ are coprime, use the
algorithm to obtain $u$ and $v$ such that $mu + nv = 1$.  Then
$mu \congruent 0 \bmod m$, $\congruent 1 \bmod n$ and
$nv \congruent 1 \bmod m$, $\congruent 0 \bmod n$, so that we can solve
the simultaneous congruences $x \congruent a \bmod m$ and
$x \congruent b \bmod n$ by putting $x = bmu + anv$.

\medskip
We could find $a, b$ from the Euclidean algorithm by storing all the
values of $q_n$ and $r_n$ and substituting backwards.  However, there is
a method which does not require keeping these values.  Define sequences
$P_n$ and $Q_n$ as follows: $P_{-1} = 1$, $Q_{-1} = 0$, $P_0 = 0$,
$Q_0 = 1$ and for $n \ge 0$
\begin{eqnarray*}
P_{n+1} &=& P_{n-1} - q_n P_n , \\
Q_{n+1} &=& Q_{n-1} - q_n Q_n . \\
\end{eqnarray*}
The rule for forming successive elements is the same for the two
sequences; only the initial conditions differ.  If we set $r_{-1} = x$,
$r_0 = y$ then by induction $r_n = P_n x + Q_n y$.  Hence $a = P_N$,
$b = Q_N$.

It is easy to show that two steps of the algorithm must reduce $r_n$ by a
factor of at least 2: indeed $r_{n+2} < r_{n+1}$ and
$r_n = q_{n+1} r_{n+1} + r_{n+2} \ge r_{n+1} + r_{n+2} > 2 r_{n+2}$.

\medskip
\blob {\sf The number of steps to invert $x \mod m$ by the Euclidean
algorithm is $\O(\log m)$.} \qed
\medskip

The worst case for the Euclidean algorithm is when the quotients $q_n$
are all equal to 1.  This occurs when $x$ and $y$ are consecutive
{\em Fibonacci numbers}, defined by the recurrence relation $F_0 = 0$,
$F_1 = 1$, $F_{n+1} = F_n + F_{n-1}$.  We have
$$ F_n = { \phi^n - \bar\phi^n \over \sqrt 5 } $$
where $\phi, \bar\phi = {1 \pm \sqrt 5 \over 2}$ are the roots of the
{\em auxiliary equation} $x^2 = x + 1$: $\phi$ is the {\em Golden ratio}.
Hence in the worst case the Euclidean algorithm requires about
$\log_\phi y$ steps.

\medskip
\blob {\sf The average number of steps in Euclid's algorithm is
asymptotically ${12 \log 2 \over \pi^2} \log n$.} \qed
\medskip
See Knuth \cite{Knu:vol2} \S4.5.3, where the following result is also
described.
\label{Euclid-analysis}

\medskip
\blob {\sf The distribution of partial quotients $q_n$ in Euclid's
algorithm is given by
$$ \prob( q_n = a ) \sim \log_2 \paren(1 + {1 \over (a+1)^2 - 1}) . $$ }
\qed
\medskip

Even so, there are faster variants on the Euclidean algorithm.  Sorenson
\cite{Sor:gcd} describes right- and left-shift $k$-ary algorithms with a
worst-case time of $\O(n^2/\log n)$ bit operations on $n$-bit numbers.
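\medskip
To make the $P_n$, $Q_n$ iteration concrete, here is a rough Python
sketch (my own illustration, not taken from the text or references above;
the names {\tt ext\_gcd} and {\tt inverse\_mod} are just labels I chose).
It returns the highest common factor together with $a$, $b$ such that
$ax + by = r_N$, and uses them to invert $x$ modulo $m$:
\begin{verbatim}
def ext_gcd(x, y):
    """Return (g, a, b) with a*x + b*y = g, the highest common
    factor of x and y, via the P_n, Q_n iteration (no quotients
    or remainders are stored)."""
    r_prev, r = x, y        # r_{-1} = x, r_0 = y
    P_prev, P = 1, 0        # P_{-1} = 1, P_0 = 0
    Q_prev, Q = 0, 1        # Q_{-1} = 0, Q_0 = 1
    while r != 0:
        q = r_prev // r     # partial quotient q_n
        r_prev, r = r, r_prev - q * r
        P_prev, P = P, P_prev - q * P
        Q_prev, Q = Q, Q_prev - q * Q
    return r_prev, P_prev, Q_prev   # r_N = P_N x + Q_N y

def inverse_mod(x, m):
    """Inverse of x modulo m, assuming x and m are coprime."""
    g, a, _ = ext_gcd(x, m)
    if g != 1:
        raise ValueError("x is not invertible modulo m")
    return a % m

g, a, b = ext_gcd(240, 46)
assert g == 2 and a * 240 + b * 46 == g
assert (7 * inverse_mod(7, 11)) % 11 == 1
\end{verbatim}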
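\medskip
The constructive Chinese Remainder step works the same way: with coprime
moduli $m$, $n$ find $u$, $v$ such that $mu + nv = 1$ and put
$x = bmu + anv$.  A standalone sketch (again mine; it gets $u$ from
Python 3.8's built-in modular inverse {\tt pow(m, -1, n)}, and {\tt crt}
is simply a name I made up):
\begin{verbatim}
def crt(a, m, b, n):
    """Return x with x = a (mod m) and x = b (mod n),
    assuming m and n are coprime."""
    u = pow(m, -1, n)           # m*u = 1 (mod n), Python 3.8+
    v = (1 - m * u) // n        # exact division, so m*u + n*v = 1
    assert m * u + n * v == 1
    return (b * m * u + a * n * v) % (m * n)

# x = 2 (mod 3) and x = 3 (mod 5) gives x = 8 (mod 15)
assert crt(2, 3, 3, 5) == 8
\end{verbatim}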
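\medskip
One can also watch the Fibonacci worst case happen: on consecutive
Fibonacci numbers every partial quotient is 1 (except the final one),
and the number of division steps grows like $\log_\phi y$.  A small
experiment of my own ({\tt euclid\_steps} is an arbitrary name):
\begin{verbatim}
from math import log, sqrt

def euclid_steps(x, y):
    """Number of division steps taken by the Euclidean algorithm."""
    steps = 0
    while y != 0:
        x, y = y, x % y
        steps += 1
    return steps

phi = (1 + sqrt(5)) / 2
a, b = 1, 1                      # F_1, F_2
for n in range(2, 25):
    a, b = b, a + b              # now (a, b) = (F_n, F_{n+1})
    # steps on (F_{n+1}, F_n) versus log_phi of the smaller input
    print(n, euclid_steps(b, a), round(log(a) / log(phi), 1))
\end{verbatim}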
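\medskip
Finally, the quoted distribution of partial quotients is easy to test
empirically.  A quick simulation (my own; there is nothing canonical
about the sample sizes) tallies the quotients arising from random pairs
and compares the observed frequencies with
$\log_2 \paren(1 + {1 \over (a+1)^2 - 1})$:
\begin{verbatim}
import random
from collections import Counter
from math import log2

def quotients(x, y):
    """Partial quotients produced by the Euclidean algorithm on (x, y)."""
    qs = []
    while y != 0:
        qs.append(x // y)
        x, y = y, x % y
    return qs

random.seed(1)
counts, total = Counter(), 0
for _ in range(20000):
    x = random.randrange(10**6, 10**7)
    y = random.randrange(1, x)
    qs = quotients(x, y)
    counts.update(qs)
    total += len(qs)

for a in range(1, 6):
    print(a, round(counts[a] / total, 3),
          round(log2(1 + 1 / ((a + 1)**2 - 1)), 3))
\end{verbatim}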

Richard Pinch; Queens' College, Cambridge