Before we begin, note that we will apply a sequence of steps that correspond to multiplication by an invertible matrix on the left or right, and by
Lemma 14.20, none of these steps will change the gcd.
To prove existence of such a matrix \(M\text{,}\) we claim we can multiply \(A\) on the right and the left by invertible matrices to transform it into a matrix of the form
\begin{equation*}
\left[\begin{array}{ll} g & 0 \\ 0 & B \end{array}\right]
\end{equation*}
for some
\((m-1) \times(n-1)\) matrix
\(B\text{,}\) where
\(g=\operatorname{gcd}(A)\text{.}\) If our claim holds, then we are done: by
Lemma 14.20, the displayed matrix has the same gcd as
\(A\text{,}\) that is,
\(\operatorname{gcd}(g, B)=\operatorname{gcd}(A)=g\text{,}\) and so
\(g\) divides every entry of
\(B\text{.}\) Applying the claim to
\(B\text{,}\) and then repeating, we arrive at a matrix of the desired form
\(M\text{.}\) Notice moreover that if
\(C\) is an invertible matrix, then so is
\begin{equation*}
\left[\begin{array}{ll} 1 & 0 \\ 0 & C \end{array}\right].
\end{equation*}
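Indeed, a direct block computation shows that
\begin{equation*}
\left[\begin{array}{ll} 1 & 0 \\ 0 & C \end{array}\right]\left[\begin{array}{ll} 1 & 0 \\ 0 & C^{-1} \end{array}\right] =\left[\begin{array}{ll} 1 & 0 \\ 0 & I \end{array}\right],
\end{equation*}
so any sequence of invertible operations applied to \(B\) extends to invertible operations on the whole matrix that leave the first row and first column untouched.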
To construct a matrix in the form above, let \(a\) be the upper left \((1,1)\) entry of \(A\text{.}\) First, we are going to show that we can turn \(A\) into a matrix of the form
\begin{equation*}
\left[\begin{array}{ll} * & 0 \\ 0 & B \end{array}\right]
\end{equation*}
with the same gcd as
\(A\text{.}\) If \(a\) happens to divide all the entries in the first row and the first column, then we can simply apply elementary row and column operations to clear them and reach the desired form. Suppose instead that there exists an entry
\(b\) in the first column such that
\(a \nmid b\text{.}\) Then we may apply an elementary row operation switching two rows so that
\(b=a_{2,1}\) sits in the first column, second row. By
Lemma 14.21, we can now find an invertible matrix
\(C \in \mathrm{M}_{2}(R)\) such that
\begin{equation*}
C\left[\begin{array}{l} a \\ b \end{array}\right] =\left[\begin{array}{c} \operatorname{gcd}(a, b) \\ 0 \end{array}\right] .
\end{equation*}
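For instance, taking \(R=\mathbb{Z}\) purely for illustration, with \(a=4\) and \(b=6\) we have \(\operatorname{gcd}(4,6)=2=(-1) \cdot 4+1 \cdot 6\text{,}\) and one matrix that works is
\begin{equation*}
C=\left[\begin{array}{rr} -1 & 1 \\ -3 & 2 \end{array}\right], \qquad C\left[\begin{array}{l} 4 \\ 6 \end{array}\right] =\left[\begin{array}{l} 2 \\ 0 \end{array}\right], \qquad \det(C)=1 .
\end{equation*}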
Consider the \(m \times m\) matrix
\begin{equation*}
P:=\left[\begin{array}{cc} C & 0 \\ 0 & I_{m-2} \end{array}\right] .
\end{equation*}
Note that
\(P A\) has
\((P A)_{1,1}=\operatorname{gcd}(a, b)\) and
\((P A)_{2,1}=0\text{,}\) so this step replaces
\(a\) by
\(\operatorname{gcd}(a, b)\) and
\(b\) by \(0\text{.}\) By
Lemma 14.20,
\(\operatorname{gcd}(P A)=\operatorname{gcd}(A)\text{.}\) We can keep repeating this until the top left corner entry divides every entry in the first column. Each step turns one entry of the first column that the corner entry did not divide into \(0\text{,}\) and replaces the corner entry by a divisor of itself, so every entry it already divided stays divisible; hence the process must stop after at most
\(m-1\) steps, since the first column has only
\(m-1\) entries below the corner.
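To illustrate the column reduction, again take \(R=\mathbb{Z}\) for illustration and suppose the first column of \(A\) is \((4,6,9)\text{,}\) so \(m=3\text{.}\) The computation displayed above replaces it by \((2,0,9)\text{;}\) since \(2 \nmid 9\text{,}\) we swap the last two rows and apply the lemma to the pair \((2,9)\text{,}\) using \(\operatorname{gcd}(2,9)=1=(-4) \cdot 2+1 \cdot 9\text{:}\)
\begin{equation*}
\left[\begin{array}{l} 4 \\ 6 \\ 9 \end{array}\right] \longrightarrow\left[\begin{array}{l} 2 \\ 0 \\ 9 \end{array}\right] \longrightarrow\left[\begin{array}{l} 2 \\ 9 \\ 0 \end{array}\right] \longrightarrow\left[\begin{array}{l} 1 \\ 0 \\ 0 \end{array}\right] .
\end{equation*}
Two applications of the lemma suffice, matching the bound of \(m-1=2\) steps.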
Similarly, if there exists \(b\) on the first row that \(a\) does not divide, we can repeat this process, instead multiplying \(A\) on the right by an invertible matrix, until the corner entry divides every entry on the first row, and then use elementary row and column operations to zero out all the remaining entries on the first row and column. Finally, we arrive at a matrix of the form
\begin{equation*}
\left[\begin{array}{ll} a & 0 \\ 0 & B \end{array}\right].
\end{equation*}
If \(a=\operatorname{gcd}(A)\text{,}\) we are done. If not, then we can find some entry \(b=a_{i, j}\) such that \(a \nmid b\text{.}\) We can then add the \(j\)th column to the first column, which puts \(b\) into the first column without affecting \(a\text{,}\) since the rest of the top row is zero. But this brings us back to the previous situation, and we have already shown that we can replace the top left corner by \(\operatorname{gcd}(a, b)\text{.}\)
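For a concrete instance of this trick, take \(R=\mathbb{Z}\) for illustration and consider
\begin{equation*}
\left[\begin{array}{ll} 2 & 0 \\ 0 & 3 \end{array}\right] \longrightarrow\left[\begin{array}{ll} 2 & 0 \\ 3 & 3 \end{array}\right] .
\end{equation*}
Here \(a=2\) does not divide \(b=3\text{,}\) and adding the second column to the first leaves \(a\) in the corner while placing \(3\) in the first column; the earlier argument then replaces the corner by \(\operatorname{gcd}(2,3)=1=\operatorname{gcd}(A)\text{.}\)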
At each step, we replace \(a\) by some \(c\) which is both a proper divisor of \(a\) and a multiple of \(\operatorname{gcd}(A)\text{.}\) Our ring \(R\) is a UFD, so \(a / \operatorname{gcd}(A)\) has only finitely many irreducible factors, counted with multiplicity, and each step removes at least one of them, so this process must stop. This shows that we can eventually replace \(A\) by
\begin{equation*}
\left[\begin{array}{cc} \operatorname{gcd}(A) & 0 \\ 0 & B \end{array}\right].
\end{equation*}
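As a sanity check on the existence argument, take \(R=\mathbb{Z}\) once more and
\begin{equation*}
A=\left[\begin{array}{ll} 2 & 4 \\ 6 & 8 \end{array}\right] \longrightarrow\left[\begin{array}{rr} 2 & 4 \\ 0 & -4 \end{array}\right] \longrightarrow\left[\begin{array}{rr} 2 & 0 \\ 0 & -4 \end{array}\right] \longrightarrow\left[\begin{array}{ll} 2 & 0 \\ 0 & 4 \end{array}\right],
\end{equation*}
where we subtract \(3\) times the first row from the second, then twice the first column from the second, and finally multiply the last row by the unit \(-1\text{.}\) Here the corner entry \(2\) already divides every entry of the first row and column, \(\operatorname{gcd}(A)=2\) lands in the corner at once, and \(2 \mid 4\text{,}\) so the resulting matrix has the desired form.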
Now it remains to show the uniqueness portion of the theorem. For any
\(i\) and any matrix
\(B\text{,}\) let
\(\operatorname{gcd}_{i}(B)\) denote the gcd of all the
\(i \times i\) minors of
\(B\text{.}\) By
Lemma 14.23,
\(\operatorname{gcd}_{i}\) is unchanged by row and column operations, so
\(\operatorname{gcd}_{i}(A) = \operatorname{gcd}_{i}(M)\text{.}\)
For a matrix of the form \(M\text{,}\) the only nonzero minors are those whose chosen rows and columns have the same index set, and hence the only nonzero \(i \times i\) minors of \(M\) are the products \(g_{s_{1}} \cdots g_{s_{i}}\) for some \(s_{1}<\cdots<s_{i}\text{.}\) Since \(g_{1} \mid g_{2} \mid \cdots \mid g_{\ell}\text{,}\) the product \(g_{1} \cdots g_{i}\) divides every such minor, and it follows that
\begin{equation*}
\operatorname{gcd}_{i}(A)=\operatorname{gcd}_{i}(M)=g_{1} \cdots g_{i} .
\end{equation*}
In particular, the largest value of \(i\) such that some \(i \times i\) minor of \(A\) is nonzero is \(\ell\text{.}\) Also, with the convention \(\operatorname{gcd}_{0}(A)=1\text{,}\) for \(1 \leq i \leq \ell\) we have
\begin{equation*}
g_{i}=\frac{\operatorname{gcd}_{i}(A)}{\operatorname{gcd}_{i-1}(A)} .
\end{equation*}
This proves uniqueness, for it shows that \(\ell, g_{1}, \ldots, g_{\ell}\) are determined directly by \(A\text{,}\) without any choices.
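For a concrete check of these formulas, take \(R=\mathbb{Z}\) for illustration and suppose \(A\) reduces to \(M\) with diagonal entries \(g_{1}=1\text{,}\) \(g_{2}=2\text{,}\) \(g_{3}=6\text{.}\) Then
\begin{equation*}
\operatorname{gcd}_{1}(M)=1, \qquad \operatorname{gcd}_{2}(M)=\operatorname{gcd}(1 \cdot 2,\ 1 \cdot 6,\ 2 \cdot 6)=2, \qquad \operatorname{gcd}_{3}(M)=1 \cdot 2 \cdot 6=12,
\end{equation*}
and the quotients \(\operatorname{gcd}_{i}(M) / \operatorname{gcd}_{i-1}(M)\) recover \(1, 2, 6\text{,}\) as claimed.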