Comprehensive Guide on the Determinant of Elementary Matrices
Determinant of matrices that differ by one row
Let square matrices $\boldsymbol{A}$, $\boldsymbol{B}$ and $\boldsymbol{C}$ have the same entries except for the $r$-th row. If the $r$-th row of $\boldsymbol{C}$ is equal to the sum of the $r$-th rows of $\boldsymbol{A}$ and $\boldsymbol{B}$, then the determinant of $\boldsymbol{C}$ is:

$$\det(\boldsymbol{C})=\det(\boldsymbol{A})+\det(\boldsymbol{B})$$
Proof. We can prove this using induction. We first show that the proposition holds for the $2\times2$ case. Consider the following matrices:
Here, the second row of $\boldsymbol{C}$ is the sum of the second rows of $\boldsymbol{A}$ and $\boldsymbol{B}$. The determinants of $\boldsymbol{A}$ and $\boldsymbol{B}$ are:
The determinant of $\boldsymbol{C}$ is:
Similarly, we can show that the proposition holds if the first row of $\boldsymbol{C}$ is the sum of the first rows of $\boldsymbol{A}$ and $\boldsymbol{B}$. Therefore, the proposition holds for the $2\times2$ case.
Next, the formal proof requires assuming that the proposition holds for the $(n-1)\times(n-1)$ case and showing that it also holds for the $n\times{n}$ case. However, to keep things simple, we will show the $3\times3$ case - the same idea generalizes to the $n\times{n}$ case.
Consider the following matrices:
Our goal is to show that:

$$\det(\boldsymbol{C})=\det(\boldsymbol{A})+\det(\boldsymbol{B})$$
By theoremlink, we can use the cofactor expansion along the first column to find the determinants of $\boldsymbol{A}$ and $\boldsymbol{B}$ like so:
Compare the pairs of determinants in the expansions of $\det(\boldsymbol{A})$ and $\det(\boldsymbol{B})$, and notice how each pair differs by a single row. For instance, in the first determinant of each expansion, only the second row is different. Using the inductive assumption that the proposition holds for the $2\times2$ case, we have that $\mathrm{det}(\boldsymbol{A})+\mathrm{det}(\boldsymbol{B})$ is:
Now, let's perform cofactor expansion along the first column to find the determinant of $\boldsymbol{C}$ like so:
This is the same expression as that of $\mathrm{det}(\boldsymbol{A})+ \mathrm{det}(\boldsymbol{B})$ in \eqref{eq:wNDCJhMsM0q0JdH8WeV}. Therefore, we conclude that:

$$\det(\boldsymbol{C})=\det(\boldsymbol{A})+\det(\boldsymbol{B})$$
This completes the proof.
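To make this concrete, here is a quick numerical sanity check of the theorem (illustrative only, not part of the proof). The matrices and their entries below are arbitrary, and NumPy is assumed to be available:

```python
import numpy as np

# A and B share every row except the second one (r = 2).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
B = A.copy()
B[1] = [9.0, -1.0, 2.0]  # only the second row differs from A

# C agrees with A and B everywhere except its second row,
# which is the sum of the second rows of A and B.
C = A.copy()
C[1] = A[1] + B[1]

# det(C) = det(A) + det(B), up to floating-point error
assert np.isclose(np.linalg.det(C), np.linalg.det(A) + np.linalg.det(B))
```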
Effect of multiplying a row by a scalar multiple on the determinant
Let $\boldsymbol{A}$ be a square matrix. If we multiply a row by a non-zero constant $k$ to produce matrix $\boldsymbol{B}$, then:

$$\det(\boldsymbol{B})=k\cdot\det(\boldsymbol{A})$$
Proof. Consider the following matrices:
matrix $\boldsymbol{B}_1$ is obtained by multiplying the first row of $\boldsymbol{A}$ by some scalar $k$.
matrix $\boldsymbol{B}_2$ is obtained by multiplying the second row of $\boldsymbol{A}$ by some scalar $k$.
Our goal is to show the following:

$$\det(\boldsymbol{B}_1)=k\cdot\det(\boldsymbol{A}),\;\;\;\;\det(\boldsymbol{B}_2)=k\cdot\det(\boldsymbol{A})$$
The reason why we consider the two cases above is that we are going to compute the determinant by performing cofactor expansion along the first row. Therefore, we must consider the case when we modify the first row as well as the case when we modify any other row.
Firstly, the determinant of $\boldsymbol{B}_1$ is:
Next, the determinant of $\boldsymbol{B}_2$ is:
We then use our inductive assumption that $\mathrm{det}(\boldsymbol{B})=k\cdot\mathrm{det}(\boldsymbol{A})$ for the $2\times2$ case:
This completes the proof.
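As before, we can sanity-check this property numerically (an illustrative sketch with arbitrary entries, assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
k = 5.0

# B is A with its second row scaled by k.
B = A.copy()
B[1] *= k

# det(B) = k * det(A)
assert np.isclose(np.linalg.det(B), k * np.linalg.det(A))
```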
Effect of interchanging two rows on the determinant
Let $\boldsymbol{A}$ be a square matrix. If we swap one row with another row to produce matrix $\boldsymbol{B}$, then:

$$\det(\boldsymbol{B})=-\det(\boldsymbol{A})$$
Proof. We prove this by induction. We will consider the two cases below:
case when we interchange the first and second rows.
case when we interchange rows that are not the first row.
Consider the following two matrices:
Here, matrix $\boldsymbol{B}$ is obtained by interchanging the first two rows of $\boldsymbol{A}$.
Our goal is to show the following:

$$\det(\boldsymbol{B})=-\det(\boldsymbol{A})$$
We know from theoremlink that the cofactor expansion along the $1$st column is equal to the cofactor expansion along the $1$st row, which is also equal to the determinant by definition:
The cofactor expansion along the $1$st column of $\boldsymbol{A}$ is:
Let's find the determinant of $\boldsymbol{B}$ by cofactor expansion along the $1$st column:
Now, we use the inductive assumption that interchanging two adjacent rows of a $2\times2$ matrix results in a sign flip. Applying this assumption to the third determinant of \eqref{eq:mzYnKejhz7N7eJUpnwd} gives:
We now add \eqref{eq:qt87x7fPDHW8HgOrlWj} and \eqref{eq:g7HNKKkvzSNRMcssofA} to get:
Therefore, we have that:

$$\det(\boldsymbol{B})=-\det(\boldsymbol{A})$$
Great, we have managed to show that the sign of the determinant flips when we interchange the first row and the second row.
Let's now consider the case when we interchange rows that are not the first row:
Here, matrix $\boldsymbol{B}$ is formed by interchanging the second and third rows of $\boldsymbol{A}$.
Using the cofactor expansion along the first row, we compute the determinant of $\boldsymbol{B}$ like so:
We now use the inductive assumption that interchanging a pair of rows flips the sign of the determinant of $2\times2$ matrices:
This completes the proof.
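Here is an illustrative numerical check of the sign flip (arbitrary entries, assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

# B is A with its second and third rows interchanged.
B = A[[0, 2, 1]]

# det(B) = -det(A)
assert np.isclose(np.linalg.det(B), -np.linalg.det(A))
```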
Determinant of matrix with a row of zeros
If matrix $\boldsymbol{A}$ has a row containing all zeros, then $\det(\boldsymbol{A})=0$.
Proof. Consider the following matrix with all zeros as the first row:
We can easily see that the determinant computed using the cofactor expansion along the first row is equal to zero:
Next, we consider the case when the matrix contains a different row with all zeros:
We know from theoremlink that interchanging rows will only flip the sign of the determinant of $\boldsymbol{A}$. This means that we can keep interchanging rows to get the row with all zeros at the top:
In this case, since we perform the row-swapping operation twice, the sign of the determinant remains unchanged. In fact, the sign does not matter here because the determinant evaluates to zero anyway:
This completes the proof.
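A quick illustrative check with NumPy (arbitrary entries):

```python
import numpy as np

# The second row consists entirely of zeros, so det(A) should be 0.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 0.0, 0.0],
              [7.0, 8.0, 10.0]])

assert np.isclose(np.linalg.det(A), 0.0)
```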
Determinant of matrix with a pair of identical rows
If two rows of matrix $\boldsymbol{A}$ are the same, then $\det(\boldsymbol{A})=0$.
Proof. For simplicity, consider the following $3\times3$ matrix:
Here, the first two rows are the same. Consider another matrix $\boldsymbol{B}$ that is identical to $\boldsymbol{A}$ except that the first two rows are swapped:
From theoremlink, since $\boldsymbol{B}$ can be obtained by a single row-swapping operation on $\boldsymbol{A}$, the determinant of $\boldsymbol{B}$ is:

$$\det(\boldsymbol{B})=-\det(\boldsymbol{A})$$
However, $\boldsymbol{A}$ and $\boldsymbol{B}$ are identical, so their determinants must be equal:

$$\det(\boldsymbol{B})=\det(\boldsymbol{A})$$
The only way for these two equations to hold simultaneously is if:

$$\det(\boldsymbol{A})=0$$
This completes the proof.
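An illustrative numerical check (arbitrary entries, assuming NumPy):

```python
import numpy as np

# The first and second rows are identical, so det(A) should be 0.
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 3.0],
              [7.0, 8.0, 10.0]])

assert np.isclose(np.linalg.det(A), 0.0)
```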
Determinant of matrix where one row is a multiple of another
If one row of $\boldsymbol{A}$ is a scalar multiple of another row, then $\mathrm{det}(\boldsymbol{A})=0$.
Proof. For simplicity, consider the following $3\times3$ matrix:
Here, the first row is $k$ times the second row.
If $k=0$, then the first row of $\boldsymbol{A}$ consists entirely of zeros, and so $\mathrm{det}(\boldsymbol{A})=0$ by theoremlink. Assume then that $k\neq0$, and consider the matrix that is identical to $\boldsymbol{A}$ except that we divide the first row by $k$ like so:
We know from theoremlink that the determinant of the matrix $\boldsymbol{A}'$ formed by multiplying a single row of $\boldsymbol{A}$ by a non-zero scalar, in this case $1/k$, is:

$$\det(\boldsymbol{A}')=\frac{1}{k}\det(\boldsymbol{A})$$
We know from theoremlink that if two rows of a matrix are the same, then its determinant is zero. Since $\boldsymbol{A}'$ has two identical rows, we have that $\mathrm{det}(\boldsymbol{A}')=0$. Therefore, the equation above becomes:

$$0=\frac{1}{k}\det(\boldsymbol{A})$$
Multiplying both sides by $k$ gives $\mathrm{det}(\boldsymbol{A})=0$. This completes the proof.
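An illustrative numerical check (arbitrary entries, assuming NumPy):

```python
import numpy as np

# The first row is k = 4 times the second row, so det(A) should be 0.
A = np.array([[16.0, 20.0, 24.0],
              [ 4.0,  5.0,  6.0],
              [ 7.0,  8.0, 10.0]])

assert np.isclose(np.linalg.det(A), 0.0)
```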
Effect of adding a multiple of a row to another row on the determinant
Let $\boldsymbol{A}$ be a square matrix. If we add a multiple of a row to another row to produce matrix $\boldsymbol{B}$, then:

$$\det(\boldsymbol{B})=\det(\boldsymbol{A})$$
Proof. Consider the following matrices:
Here, $\boldsymbol{B}$ is obtained by multiplying the second row by $k$ and then adding it to the first row. Our goal is to show that:

$$\det(\boldsymbol{B})=\det(\boldsymbol{A})$$
Consider another matrix $\boldsymbol{A}'$ where we replace the first row of $\boldsymbol{A}$ with $k$ times the second row of $\boldsymbol{A}$ like so:
For our reference, we show $\boldsymbol{A}$, $\boldsymbol{A}'$ and $\boldsymbol{B}$ below:
Notice how all the rows except the first row are the same. Moreover, the first row of $\boldsymbol{B}$ is equal to the sum of the first rows of $\boldsymbol{A}$ and $\boldsymbol{A}'$. By theoremlink, we have that:

$$\det(\boldsymbol{B})=\det(\boldsymbol{A})+\det(\boldsymbol{A}')$$
We know from theoremlink that because $\boldsymbol{A}'$ has a row that is a scalar multiple of another row, $\mathrm{det}(\boldsymbol{A}')=0$. Therefore, the equation above becomes:

$$\det(\boldsymbol{B})=\det(\boldsymbol{A})$$
This completes the proof.
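An illustrative numerical check of this invariance (arbitrary entries, assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
k = 5.0

# B is A with k times the second row added to the first row.
B = A.copy()
B[0] += k * A[1]

# det(B) = det(A): the operation leaves the determinant unchanged
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
```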
We now summarize the effects of elementary row operations on the determinant of a matrix.
Effect of elementary row operations on determinant
Let $\boldsymbol{A}$ be a square matrix. The determinant changes in the following ways after performing each type of elementary row operation:
if we multiply a row by a non-zero constant $k$ to produce matrix $\boldsymbol{B}$, then $\mathrm{det}(\boldsymbol{B})=k\cdot\mathrm{det}(\boldsymbol{A})$.
if we swap one row with another row to produce matrix $\boldsymbol{B}$, then $\mathrm{det}(\boldsymbol{B}) =-\mathrm{det}(\boldsymbol{A})$.
if we add a multiple of a row to another row to produce matrix $\boldsymbol{B}$, then $\mathrm{det}(\boldsymbol{B}) =\mathrm{det}(\boldsymbol{A})$.
Determinant of the identity matrix
If $\boldsymbol{I}$ is an identity matrix, then:

$$\det(\boldsymbol{I})=1$$
Proof. We will prove this by induction on the size of the identity matrix. Firstly, we show that the proposition holds for a $1\times1$ identity matrix. The determinant of a matrix with a single entry is the entry itself, and so $\det(\boldsymbol{I}_1)=1$.
Next, we assume that the proposition holds for an identity matrix of size $(n-1)\times(n-1)$, that is:

$$\det(\boldsymbol{I}_{n-1})=1$$
Our goal now is to show that the proposition holds for an identity matrix of size $n\times{n}$ shown below:
Let's perform cofactor expansionlink along the first row to obtain the determinant of $\boldsymbol{I}_n$ like so:

$$\det(\boldsymbol{I}_n)=1\cdot\det(\boldsymbol{I}_{n-1})+0+\cdots+0=\det(\boldsymbol{I}_{n-1})$$
We now use our inductive assumption $\det(\boldsymbol{I}_{n-1})=1$ to conclude $\det(\boldsymbol{I}_n)=1$. By the principle of mathematical induction, the theorem holds for the general case. This completes the proof.
Determinant of k times the identity matrix
If $\boldsymbol{I}_n$ is an $n\times{n}$ identity matrix and $k$ is a scalar, then:

$$\det(k\boldsymbol{I}_n)=k^n$$
Proof. By theoremlink, the determinant of an identity matrix is one:

$$\det(\boldsymbol{I}_n)=1$$
The matrix $k\boldsymbol{I}_n$ is:
This matrix can be obtained by performing $n$ elementary row operations of multiplying a single row by $k$ on the identity matrix $\boldsymbol{I}_n$. By theoremlink, we know that each of these elementary row operations will multiply the determinant by $k$. Therefore, the determinant of $k\boldsymbol{I}_n$ is:

$$\det(k\boldsymbol{I}_n)=k^n\cdot\det(\boldsymbol{I}_n)=k^n$$
This completes the proof.
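The following snippet numerically checks this theorem and the previous one together (illustrative only; the values of $n$ and $k$ are arbitrary, and NumPy is assumed):

```python
import numpy as np

n, k = 4, 3.0
I = np.eye(n)  # the n x n identity matrix

assert np.isclose(np.linalg.det(I), 1.0)       # det(I_n) = 1
assert np.isclose(np.linalg.det(k * I), k**n)  # det(k * I_n) = k^n
```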
Determinant of an elementary matrix
The determinant of an elementary matrix corresponding to multiplying a row by a non-zero scalar $k$ is:

$$\det(\boldsymbol{E}_1)=k$$
The determinant of an elementary matrix corresponding to interchanging adjacent rows is:

$$\det(\boldsymbol{E}_2)=-1$$
The determinant of an elementary matrix corresponding to multiplying a row by a constant and then adding it to another row is:

$$\det(\boldsymbol{E}_3)=1$$
Proof. Let's prove this case by case. From theoremlink, we know that the determinant of an identity matrix is:

$$\det(\boldsymbol{I})=1$$
The first type of elementary matrix $\boldsymbol{E}_1$ is obtained by multiplying a row of the identity matrix by a non-zero scalar $k$. By theoremlink, the determinant of this elementary matrix is:

$$\det(\boldsymbol{E}_1)=k\cdot\det(\boldsymbol{I})=k$$
The second type of elementary matrix $\boldsymbol{E}_2$ is obtained by interchanging two rows of the identity matrix. By theoremlink, the determinant of this elementary matrix is:

$$\det(\boldsymbol{E}_2)=-\det(\boldsymbol{I})=-1$$
The third type of elementary matrix $\boldsymbol{E}_3$ is obtained by multiplying a row of the identity matrix by a constant and then adding it to another row. By theoremlink, the determinant of this elementary matrix is:

$$\det(\boldsymbol{E}_3)=\det(\boldsymbol{I})=1$$
This completes the proof.
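Here is an illustrative construction of one elementary matrix of each type, with the expected determinants checked numerically (the size and the value of $k$ are arbitrary, NumPy assumed):

```python
import numpy as np

I = np.eye(3)
k = 5.0

# E1: multiply the second row of the identity matrix by k.
E1 = I.copy()
E1[1, 1] = k

# E2: interchange the first and second rows of the identity matrix.
E2 = I[[1, 0, 2]]

# E3: add k times the second row to the first row.
E3 = I.copy()
E3[0, 1] = k

assert np.isclose(np.linalg.det(E1), k)     # det(E1) = k
assert np.isclose(np.linalg.det(E2), -1.0)  # det(E2) = -1
assert np.isclose(np.linalg.det(E3), 1.0)   # det(E3) = 1
```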
Determinant of a product of an elementary matrix and any matrix
If $\boldsymbol{A}$ is a square matrix and $\boldsymbol{E}$ is an elementary matrix, then:

$$\det(\boldsymbol{E}\boldsymbol{A})=\det(\boldsymbol{E})\cdot\det(\boldsymbol{A})$$
Proof. Because there are $3$ types of elementary row operations, there are also $3$ types of elementary matrices. We must show that the proposition holds for all $3$ types of elementary matrices. We know from theoremlink that the determinant of an identity matrix is:

$$\det(\boldsymbol{I})=1$$
From theoremlink, we know that the determinant of an elementary matrix $\boldsymbol{E}_1$ corresponding to the elementary row operation of multiplying a single row by a non-zero constant $k$ is:

$$\det(\boldsymbol{E}_1)=k$$

Multiplying some matrix $\boldsymbol{A}$ on the left by $\boldsymbol{E}_1$ results in multiplying a row of $\boldsymbol{A}$ by $k$. From theoremlink, this means that:

$$\det(\boldsymbol{E}_1\boldsymbol{A})=k\cdot\det(\boldsymbol{A})$$

Substituting the first equation into the second gives:

$$\det(\boldsymbol{E}_1\boldsymbol{A})=\det(\boldsymbol{E}_1)\cdot\det(\boldsymbol{A})$$
Next, from theoremlink again, we know that the determinant of an elementary matrix $\boldsymbol{E}_2$ corresponding to the elementary row operation of interchanging two adjacent rows is:

$$\det(\boldsymbol{E}_2)=-1$$

Multiplying some matrix $\boldsymbol{A}$ on the left by $\boldsymbol{E}_2$ interchanges two adjacent rows of $\boldsymbol{A}$. From theoremlink, this means that:

$$\det(\boldsymbol{E}_2\boldsymbol{A})=-\det(\boldsymbol{A})$$

Combining the two equations gives:

$$\det(\boldsymbol{E}_2\boldsymbol{A})=\det(\boldsymbol{E}_2)\cdot\det(\boldsymbol{A})$$
Finally, from theoremlink again, we know that the determinant of an elementary matrix $\boldsymbol{E}_3$ corresponding to the elementary row operation of adding a multiple of one row to another is:

$$\det(\boldsymbol{E}_3)=1$$

Multiplying some matrix $\boldsymbol{A}$ on the left by $\boldsymbol{E}_3$ performs the same elementary row operation on $\boldsymbol{A}$. From theoremlink, this means that:

$$\det(\boldsymbol{E}_3\boldsymbol{A})=\det(\boldsymbol{A})$$

Combining the two equations gives:

$$\det(\boldsymbol{E}_3\boldsymbol{A})=\det(\boldsymbol{E}_3)\cdot\det(\boldsymbol{A})$$
We have now shown that the result holds for each of the three types of elementary matrices:

$$\det(\boldsymbol{E}_1\boldsymbol{A})=\det(\boldsymbol{E}_1)\cdot\det(\boldsymbol{A}),\;\;\;\;
\det(\boldsymbol{E}_2\boldsymbol{A})=\det(\boldsymbol{E}_2)\cdot\det(\boldsymbol{A}),\;\;\;\;
\det(\boldsymbol{E}_3\boldsymbol{A})=\det(\boldsymbol{E}_3)\cdot\det(\boldsymbol{A})$$

This means that for any type of elementary matrix $\boldsymbol{E}$, we have:

$$\det(\boldsymbol{E}\boldsymbol{A})=\det(\boldsymbol{E})\cdot\det(\boldsymbol{A})$$
This completes the proof.
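An illustrative numerical check using one elementary matrix of the third type (arbitrary entries, assuming NumPy):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

# E: elementary matrix that adds 5 times the second row to the first row.
E = np.eye(3)
E[0, 1] = 5.0

# det(EA) = det(E) * det(A)
assert np.isclose(np.linalg.det(E @ A),
                  np.linalg.det(E) * np.linalg.det(A))
```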
Determinant of a transpose of an elementary matrix
If $\boldsymbol{E}$ is an elementary matrix, then:

$$\det(\boldsymbol{E}^T)=\det(\boldsymbol{E})$$
Proof. We know from theoremlink that the transpose of an elementary matrix is also an elementary matrix. More specifically, we found the following:
for elementary matrix $\boldsymbol{E}_1$ corresponding to multiplying a row by a non-zero scalar, we have that $\boldsymbol{E}^T_1 =\boldsymbol{E}_1$. Taking the determinant of both sides gives us $\mathrm{det}(\boldsymbol{E}^T_1)=\det(\boldsymbol{E}_1)$.
for elementary matrix $\boldsymbol{E}_2$ corresponding to interchanging two rows, we also have that $\boldsymbol{E}^T_2=\boldsymbol{E}_2$. Taking the determinant of both sides gives $\det(\boldsymbol{E}^T_2)=\det(\boldsymbol{E}_2)$.
for elementary matrix $\boldsymbol{E}_3$ corresponding to multiplying row $i$ by $k$ and then adding it to row $j$, the transpose $\boldsymbol{E}^T_3$ corresponds to multiplying row $j$ by $k$ and then adding it to row $i$. We know from theoremlink that the determinant of elementary matrices of this type equals one. Therefore, we conclude that $\det(\boldsymbol{E}_3^T)=\det(\boldsymbol{E}_3)=1$.
This means that for any elementary matrix $\boldsymbol{E}$, we have that:

$$\det(\boldsymbol{E}^T)=\det(\boldsymbol{E})$$
This completes the proof.
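An illustrative check for the only non-symmetric case, the third type of elementary matrix (arbitrary entries, assuming NumPy):

```python
import numpy as np

# E3: adds 5 times the second row to the first row; its transpose
# adds 5 times the first row to the second row.
E3 = np.eye(3)
E3[0, 1] = 5.0

# det(E^T) = det(E)
assert np.isclose(np.linalg.det(E3.T), np.linalg.det(E3))
```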