Introduction to Determinants

Last updated: Aug 12, 2023
Tags: Linear Algebra

The ultimate goal of this chapter is to derive the relationship between matrix invertibility and determinants. Unfortunately, we cannot dive straight into that result just yet - we first need to establish a series of basic properties of determinants.

Definition.

Determinant of a 2x2 matrix

Consider the $2\times2$ matrix $\boldsymbol{A}$ below:

$$\boldsymbol{A}= \begin{pmatrix} a&b\\ c&d \end{pmatrix}$$

The determinant of $\boldsymbol{A}$ is defined as:

$$\mathrm{det}(\boldsymbol{A})=ad-bc$$

The determinant of $\boldsymbol{A}$ is sometimes also written as:

$$\det(\boldsymbol{A})= \begin{vmatrix} a&b\\ c&d \end{vmatrix}$$

Note that there exists a more general definition of determinants that applies to a square matrix of any size. We will cover this later in this guide.
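
As a quick numerical sanity check, the $ad-bc$ formula can be compared against a library routine. Below is a minimal Python sketch (NumPy is assumed here purely for verification and is not part of the definition):

```python
import numpy as np

# Concrete values for the entries a, b, c, d of the 2x2 matrix.
a, b, c, d = 3.0, 1.0, 2.0, 5.0
A = np.array([[a, b],
              [c, d]])

det_formula = a * d - b * c          # the ad - bc formula
det_numpy = np.linalg.det(A)         # NumPy's built-in determinant

print(det_formula)                         # 13.0
print(np.isclose(det_formula, det_numpy))  # True
```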

Example.

Computing the determinant of a 2x2 matrix (1)

Compute the determinant of the following matrix:

$$\boldsymbol{A}= \begin{pmatrix} 2&1\\ 4&3\\ \end{pmatrix}$$

Solution. The determinant of $\boldsymbol{A}$ is:

$$\begin{align*} \mathrm{det}(\boldsymbol{A}) &=(2)(3)-(1)(4)\\ &=2 \end{align*}$$

Later in the chapter, we will go over what it means for $\det(\boldsymbol{A})=2$. Specifically, we will cover:

  • the geometric interpretation behind determinants.

  • the relationship between determinant and invertibility.

Example.

Computing the determinant of a 2x2 matrix (2)

Compute the following determinant:

$$\begin{vmatrix} 1&5\\ 2&3\\ \end{vmatrix}$$

Solution. The determinant is:

$$\begin{align*} \begin{vmatrix} 1&5\\ 2&3\\ \end{vmatrix}&= (1)(3)-(5)(2)\\ &=-7 \end{align*}$$
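
Both worked examples above can be double-checked numerically. A minimal sketch, again assuming NumPy only for verification:

```python
import numpy as np

A = np.array([[2, 1],
              [4, 3]])
B = np.array([[1, 5],
              [2, 3]])

print(2 * 3 - 1 * 4, np.linalg.det(A))   # 2  and approximately 2.0
print(1 * 3 - 5 * 2, np.linalg.det(B))   # -7 and approximately -7.0
```
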
Definition.

Minor and cofactor of an entry

Suppose $\boldsymbol{A}$ is a square matrix. The minor $M_{ij}$ of an entry $a_{ij}$ is defined as the determinant of the matrix that remains after removing the row and column containing $a_{ij}$. The cofactor $C_{ij}$ of an entry $a_{ij}$ is defined as $(-1)^{i+j}M_{ij}$.
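
This definition translates almost directly into code. Below is a minimal Python sketch with two hypothetical helpers, `minor` and `cofactor`, that use `np.delete` to drop a row and a column. The code uses 0-based indices while the text uses 1-based indices; the sign $(-1)^{i+j}$ is unaffected because the parity of $i+j$ is the same in both conventions.

```python
import numpy as np

def minor(A: np.ndarray, i: int, j: int) -> float:
    """Minor M_ij: determinant of A with row i and column j removed (0-based)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return float(np.linalg.det(sub))

def cofactor(A: np.ndarray, i: int, j: int) -> float:
    """Cofactor C_ij = (-1)^(i+j) * M_ij."""
    return (-1) ** (i + j) * minor(A, i, j)
```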

Example.

Computing the minor and cofactor

Consider the following matrix:

$$\boldsymbol{A}= \begin{pmatrix} \color{green}2&1&4\\ 4&3&1\\ 1&\color{red}0&2\\ \end{pmatrix}$$

Compute the minor and cofactor of the green and red entries.

Solution. To compute the minor of the green entry, we must first ignore the row and column (colored in blue below) that hold this entry:

$$\boldsymbol{A}= \begin{pmatrix} \color{blue}2&\color{blue}1&\color{blue}4\\ \color{blue}4&3&1\\ \color{blue}1&0&2\\ \end{pmatrix}$$

The determinant of the remaining sub-matrix is:

$$\begin{align*} M_{11}&=\begin{vmatrix} 3&1\\0&2 \end{vmatrix}\\ &=(3)(2)-(1)(0)\\ &=6 \end{align*}$$

Therefore, the minor of the green entry is $6$. Since the green entry is located at the $1$st row $1$st column, its cofactor is:

$$\begin{align*} C_{11} &=(-1)^{1+1}M_{11}\\ &=(-1)^2(6)\\ &=6 \end{align*}$$

Next, let's find the minor and cofactor of the red entry. We ignore the following values in blue:

$$\boldsymbol{A}= \begin{pmatrix} 2&\color{blue}1&4\\ 4&\color{blue}3&1\\ \color{blue}1&\color{blue}0&\color{blue}2\\ \end{pmatrix}$$

The minor of the red entry is the determinant of the sub-matrix:

$$\begin{align*} M_{32}&=\begin{vmatrix} 2&4\\4&1 \end{vmatrix}\\ &=(2)(1)-(4)(4)\\ &=-14 \end{align*}$$

The cofactor of the red entry is:

$$\begin{align*} C_{32} &=(-1)^{3+2}M_{32}\\ &=(-1)^5(-14)\\ &=14 \end{align*}$$
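
We can confirm these values numerically. A self-contained sketch (0-based indices in code, so the green entry $a_{11}$ becomes `A[0, 0]` and the red entry $a_{32}$ becomes `A[2, 1]`):

```python
import numpy as np

A = np.array([[2, 1, 4],
              [4, 3, 1],
              [1, 0, 2]])

def minor(A, i, j):
    # Delete row i and column j (0-based), then take the determinant.
    return float(np.linalg.det(np.delete(np.delete(A, i, axis=0), j, axis=1)))

M11 = minor(A, 0, 0)            # 6.0   (green entry a_11)
C11 = (-1) ** (0 + 0) * M11     # 6.0
M32 = minor(A, 2, 1)            # -14.0 (red entry a_32)
C32 = (-1) ** (2 + 1) * M32     # 14.0
print(M11, C11, M32, C32)
```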

Notice how the cofactor of an entry is identical to the minor of that entry except that its sign may differ depending on the position of the entry in the matrix. The following theorem is useful for keeping track of the sign.

Theorem.

Checkerboard pattern of signs

The relationship between the signs of the cofactor and minor of an entry is described by the checkerboard pattern of signs shown below:

$$\begin{pmatrix} +&-&+&-&\cdots\\-&+&-&+&\cdots \\+&-&+&-&\cdots \\-&+&-&+&\cdots \\\vdots&\vdots&\vdots&\vdots&\smash\ddots\\ \end{pmatrix}$$

For instance, for the entry $a_{22}$, no sign flip occurs and hence its cofactor is equal to its minor. However, for the entry $a_{23}$, the cofactor and minor have opposite signs.

Proof. By definition, the cofactor of the entry $a_{ij}$ is equal to the minor of $a_{ij}$ multiplied by $(-1)^{i+j}$. This means that for the top-left entry $a_{11}$, the associated sign is positive because $(-1)^{1+1}=1$. As we move along a row or column, the exponent $i+j$ alternates between even and odd, so the sign alternates between positive and negative, which gives us the checkerboard pattern of signs. This completes the proof.
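
The checkerboard is simply $(-1)^{i+j}$ evaluated at every position. A tiny Python sketch that prints the pattern for a $5\times5$ matrix, using 1-based indices to match the theorem:

```python
# Print the checkerboard pattern of signs for a 5x5 matrix (1-based i and j).
n = 5
for i in range(1, n + 1):
    row = ["+" if (-1) ** (i + j) == 1 else "-" for j in range(1, n + 1)]
    print(" ".join(row))
# + - + - +
# - + - + -
# + - + - +
# - + - + -
# + - + - +
```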

Definition.

Cofactor expansion along a row or column

Suppose we have an $n\times{n}$ matrix $\boldsymbol{A}$. The cofactor expansion along the $i$-th row is defined as:

$$C_{\mathrm{row}=i}=a_{i1}C_{i1}+a_{i2}C_{i2}+\cdots+a_{in}C_{in}$$

Where $C_{i1}$ is the cofactor of the entry in the $i$-th row $1$st column.

The cofactor expansion along the $j$-th column is defined as:

$$C_{\mathrm{col}=j} =a_{1j}C_{1j}+a_{2j}C_{2j}+\cdots+a_{nj}C_{nj}$$
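
A direct translation of these two formulas might look as follows. This is a sketch only: the function names are hypothetical, indices are 0-based, and the minors are computed with `np.linalg.det` for brevity.

```python
import numpy as np

def cofactor(A: np.ndarray, i: int, j: int) -> float:
    """Cofactor C_ij of the entry A[i, j] (0-based indices)."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * float(np.linalg.det(sub))

def expansion_along_row(A: np.ndarray, i: int) -> float:
    """Cofactor expansion a_i1*C_i1 + a_i2*C_i2 + ... + a_in*C_in along row i."""
    return sum(A[i, j] * cofactor(A, i, j) for j in range(A.shape[1]))

def expansion_along_col(A: np.ndarray, j: int) -> float:
    """Cofactor expansion a_1j*C_1j + a_2j*C_2j + ... + a_nj*C_nj along column j."""
    return sum(A[i, j] * cofactor(A, i, j) for i in range(A.shape[0]))
```
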
Example.

Performing the cofactor expansion along a row or column

Consider the following matrix:

$$\boldsymbol{A}= \begin{pmatrix} 1&3&2\\1&3&0\\2&1&2\\ \end{pmatrix}$$

Perform the following:

  • cofactor expansion along the $1$st row.

  • cofactor expansion along the $1$st column.

  • cofactor expansion along the $2$nd row.

Solution. The cofactor expansion along the $1$st row is:

$$\begin{align*} C_{\mathrm{row}=1}&= a_{11}C_{11}+a_{12}C_{12}+a_{13}C_{13}\\ &=1\begin{vmatrix}3&0\\1&2\\\end{vmatrix}- 3\begin{vmatrix}1&0\\2&2\\\end{vmatrix}+ 2\begin{vmatrix}1&3\\2&1\\\end{vmatrix}\\ &=1(6-0)-3(2-0)+2(1-6)\\ &=-10 \end{align*}$$

Remember our checkerboard pattern of signs - this is why we see a negative for the second term!

The cofactor expansion along the $1$st column is:

$$\begin{align*} C_{\mathrm{col}=1}&= a_{11}C_{11}+a_{21}C_{21}+a_{31}C_{31}\\ &=1\begin{vmatrix}3&0\\1&2\\\end{vmatrix}- 1\begin{vmatrix}3&2\\1&2\\\end{vmatrix}+ 2\begin{vmatrix}3&2\\3&0\\\end{vmatrix}\\ &=1(6-0)-1(6-2)+2(0-6)\\ &=-10 \end{align*}$$

The cofactor expansion along the $2$nd row is:

$$\begin{align*} C_{\mathrm{row}=2}&= a_{21}C_{21}+a_{22}C_{22}+a_{23}C_{23}\\ &=-1\begin{vmatrix}3&2\\1&2\\\end{vmatrix}+ 3\begin{vmatrix}1&2\\2&2\\\end{vmatrix}- 0\begin{vmatrix}1&3\\2&1\\\end{vmatrix}\\ &=-1(6-2)+3(2-4)-0(1-6)\\ &=-10 \end{align*}$$

Notice how all of these cofactor expansions result in the same value! As we shall prove at the very end of this chapter, the cofactor expansion along any row or column gives the same value 🤯!
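
We can verify this claim numerically for the matrix above. A self-contained sketch that evaluates the cofactor expansion along every row and every column:

```python
import numpy as np

A = np.array([[1, 3, 2],
              [1, 3, 0],
              [2, 1, 2]])

def cofactor(A, i, j):
    # Cofactor of A[i, j] (0-based): sign times the 2x2 minor.
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(sub)

rows = [sum(A[i, j] * cofactor(A, i, j) for j in range(3)) for i in range(3)]
cols = [sum(A[i, j] * cofactor(A, i, j) for i in range(3)) for j in range(3)]
print(np.round(rows))   # [-10. -10. -10.]
print(np.round(cols))   # [-10. -10. -10.]
```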

Definition.

General definition of determinants

If $\boldsymbol{A}$ is an $n\times{n}$ matrix, then the determinant of $\boldsymbol{A}$ is defined as the cofactor expansion along the first row:

$$\mathrm{det}(\boldsymbol{A})= a_{11}C_{11}+a_{12}C_{12}+\cdots+a_{1n}C_{1n}$$

Where:

  • $a_{1n}$ is the entry in the $1$st row $n$-th column of $\boldsymbol{A}$.

  • $C_{1n}$ is the cofactor of the entry $a_{1n}$.

As we have demonstrated in the previous example, the determinant can actually be computed using cofactor expansion along any row or column. Again, we will prove this later in the chapter!
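
Since the definition expresses an $n\times{n}$ determinant in terms of $(n-1)\times(n-1)$ determinants, it is naturally recursive, bottoming out at the $1\times1$ case where the determinant is just the single entry. Below is a minimal recursive Python sketch that mirrors the definition; note that this approach takes on the order of $n!$ operations, so it is meant to illustrate the definition rather than serve as a practical algorithm (libraries typically use LU factorization instead).

```python
import numpy as np

def det(A: np.ndarray) -> float:
    """Determinant via cofactor expansion along the first row (recursive sketch)."""
    n = A.shape[0]
    if n == 1:
        return float(A[0, 0])   # determinant of a 1x1 matrix is its single entry
    total = 0.0
    for j in range(n):
        # Sub-matrix with row 1 and column j+1 (1-based) removed.
        sub = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det(sub)   # a_1j * C_1j
    return total

print(det(np.array([[2, 1],
                    [4, 3]])))   # 2.0, matching the first example
```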

Example.

Computing the determinant of a 3x3 matrix

Compute the determinant of the following matrix:

$$\boldsymbol{A}= \begin{pmatrix} 2&1&4\\ 4&3&1\\ 1&0&2\\ \end{pmatrix}$$

Solution. The determinant of $\boldsymbol{A}$ is:

$$\begin{align*} \mathrm{det}(\boldsymbol{A})&= a_{11}C_{11}+a_{12}C_{12}+a_{13}C_{13}\\ &=2\begin{vmatrix}3&1\\0&2\\\end{vmatrix}- 1\begin{vmatrix}4&1\\1&2\\\end{vmatrix}+ 4\begin{vmatrix}4&3\\1&0\\\end{vmatrix}\\ &=2(6-0)-1(8-1)+4(0-3)\\ &=-7 \end{align*}$$
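
As a quick check, NumPy's built-in routine agrees with this result (a verification sketch only):

```python
import numpy as np

A = np.array([[2, 1, 4],
              [4, 3, 1],
              [1, 0, 2]])
print(np.linalg.det(A))   # approximately -7.0 (up to floating-point error)
```
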
Example.

Deriving the definition of 2x2 determinant

Suppose we have the following $2\times2$ matrix:

$$\boldsymbol{A}= \begin{pmatrix} a&b\\ c&d \end{pmatrix}$$

We have previously stated that the definition of the determinant of this $2\times2$ matrix is:

$$\det(\boldsymbol{A})= ad-bc$$

Let's derive this definition ourselves using the general definition of determinant. The determinant is defined as the cofactor expansion along the first row:

$$\begin{align*} \det(\boldsymbol{A})&= aC_{11}+bC_{12}\\ &= a\det(d)-b\det(c)\\ &= ad-bc \end{align*}$$
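
The same derivation can be replayed symbolically. A short sketch using SymPy (assumed here purely for symbolic manipulation; it is not required by the derivation itself):

```python
import sympy as sp

a, b, c, d = sp.symbols("a b c d")
A = sp.Matrix([[a, b],
               [c, d]])

# Cofactor expansion along the first row: a*det(d) - b*det(c).
expansion = a * sp.Matrix([[d]]).det() - b * sp.Matrix([[c]]).det()

print(expansion)                          # a*d - b*c
print(sp.simplify(expansion - A.det()))   # 0, so the two agree
```
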
Theorem.

Determinant is equal to the cofactor expansion along the first column

We originally defined the determinant to be equal to the cofactor expansion along the first row. The determinant is also equal to the cofactor expansion along the first column.

Proof. We will prove this by induction. We must first show that the proposition holds for the $2\times2$ base case:

$$\begin{align*} \boldsymbol{A}&=\begin{pmatrix} a_{11}&a_{12}\\ a_{21}&a_{22} \end{pmatrix} \end{align*}$$

The cofactor expansion along the first row is:

$$\begin{align*} C_{\mathrm{row}=1}&= a_{11}\det(a_{22})- a_{12}\det(a_{21})\\ &=a_{11}a_{22}- a_{12}a_{21} \end{align*}$$

Remember, this is equal to $\det(\boldsymbol{A})$ as per the definition of determinant. The cofactor expansion along the first column is:

$$\begin{align*} C_{\mathrm{col}=1}&= a_{11}\det(a_{22})- a_{21}\det(a_{12})\\ &=a_{11}a_{22}- a_{21}a_{12} \end{align*}$$

Therefore, the cofactor expansion along the first row and that along the first column are equal! This means that we can use the cofactor expansion along the first column to compute the determinant as well.

Now, in a typical proof by induction, we would assume that the proposition holds for the $(n-1)\times(n-1)$ case and show that it also holds for the $n\times{n}$ case. However, one of the problems with working at this level of generality is that the notation becomes convoluted and obscures the essence of the proof - this is precisely why almost all introductory linear algebra books avoid this proof. The approach we will take here is to consider the simple $3\times3$ case while making claims that carry over to the general case.

Consider the following $3\times3$ matrix:

$$\begin{align*} \boldsymbol{A}&=\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33} \end{pmatrix} \end{align*}$$

The cofactor expansion along the first row and the cofactor expansion along the first column are:

$$\begin{equation}\label{eq:nPNlDtMwbd4Y30AI0Uv} \begin{aligned} \det(\boldsymbol{A})= C_{\mathrm{row=1}}&=a_{11}\begin{vmatrix} a_{22}&a_{23}\\ a_{32}&a_{33}\\ \end{vmatrix}- a_{12}\begin{vmatrix} a_{21}&a_{23}\\ a_{31}&a_{33}\\ \end{vmatrix}+ a_{13}\begin{vmatrix} a_{21}&a_{22}\\ a_{31}&a_{32}\\ \end{vmatrix}\\ C_{\mathrm{col}=1} &=a_{11}\begin{vmatrix} a_{22}&a_{23}\\ a_{32}&a_{33}\\ \end{vmatrix}- a_{21}\begin{vmatrix} a_{12}&a_{13}\\ a_{32}&a_{33}\\ \end{vmatrix}+ a_{31}\begin{vmatrix} a_{12}&a_{13}\\ a_{22}&a_{23}\\ \end{vmatrix} \end{aligned} \end{equation}$$

Our goal is to show the following:

$$\det(\boldsymbol{A}) =C_{\text{row=1}} =C_{\text{col=1}}$$

Notice how the first terms in \eqref{eq:nPNlDtMwbd4Y30AI0Uv} are equal. Therefore, let's ignore the first term - our goal now is to show the equivalence between the following:

$$\begin{equation}\label{eq:YR0PySipyXEteFNVf2m} \begin{aligned}[b] C'_{\mathrm{row=1}}&= - a_{12}\begin{vmatrix} a_{21}&a_{23}\\ a_{31}&a_{33}\\ \end{vmatrix}+ a_{13}\begin{vmatrix} a_{21}&a_{22}\\ a_{31}&a_{32}\\ \end{vmatrix}\\ C'_{\mathrm{col}=1} &=- a_{21}\begin{vmatrix} a_{12}&a_{13}\\ a_{32}&a_{33}\\ \end{vmatrix}+ a_{31}\begin{vmatrix} a_{12}&a_{13}\\ a_{22}&a_{23}\\ \end{vmatrix} \end{aligned} \end{equation}$$

We express the determinants using the following notation:

$$\begin{equation}\label{eq:TwW0mYFQngeOY8JOuu3} \begin{aligned}[b] C'_{\mathrm{row=1}}&=- a_{12}\det(\boldsymbol{A}_{12})+ a_{13}\det(\boldsymbol{A}_{13})\\ \end{aligned} \end{equation}$$
$$\begin{equation}\label{eq:WvYWE0w8RWFqo9Y34fc} \begin{aligned}[b] C'_{\mathrm{col}=1}\; &=- a_{21}\det(\boldsymbol{A}_{21})+ a_{31}\det(\boldsymbol{A}_{31}) \end{aligned} \end{equation}$$

Where $\boldsymbol{A}_{12}$ represents the sub-matrix obtained by removing the $1$st row and $2$nd column from the original matrix $\boldsymbol{A}$. The inductive assumption is that the cofactor expansion along the first row is equal to the cofactor expansion along the first column for $2\times2$ matrices. We will use this assumption in the very next step.

Let's start by focusing on $C'_{\mathrm{row}=1}$. We compute the first determinant in \eqref{eq:TwW0mYFQngeOY8JOuu3} by cofactor expansion along the first column:

$$\begin{equation}\label{eq:TgWTPrRpUbT3wFUwe0C} \begin{aligned}[b] - a_{12}\det(\boldsymbol{A}_{12})&= -a_{12}\begin{vmatrix} a_{21}&a_{23}\\ a_{31}&a_{33}\\ \end{vmatrix}\\ &=-a_{12} (a_{21}\begin{vmatrix}a_{33}\\\end{vmatrix} -a_{31}\begin{vmatrix}a_{23}\\\end{vmatrix}) \\ &=-a_{12}\big[ a_{21}\det(\boldsymbol{A}_{{\color{green}12},{\color{red}21}})- a_{31}\det(\boldsymbol{A}_{{\color{green}12},{\color{red}31}}) \big]\\ &=-a_{12}a_{21}\det(\boldsymbol{A}_{{\color{green}12},{\color{red}21}})+ a_{12}a_{31}\det(\boldsymbol{A}_{{\color{green}12},{\color{red}31}}) \end{aligned} \end{equation}$$

Here, $\boldsymbol{A}_{{\color{green}12},{\color{red}21}}$ represents the sub-matrix in which the following two pairs of rows and columns are removed from $\boldsymbol{A}$:

  • the $1$st row and $2$nd column.

  • the $2$nd row and $1$st column.

Visually, the sub-matrix (the non-colored entry) looks like the following:

$$\begin{align*} \boldsymbol{A}_{{\color{green}12},{\color{red}21}}&=\begin{pmatrix} \color{green}a_{11}&\color{green}a_{12}&\color{green}a_{13}\\ \color{red}a_{21}&\color{green}a_{22}&\color{red}a_{23}\\ \color{red}a_{31}&\color{green}a_{32}&a_{33} \end{pmatrix} \end{align*}$$

Notice how if we were to perform cofactor expansion to find the other terms in \eqref{eq:TwW0mYFQngeOY8JOuu3} as we did in \eqref{eq:TgWTPrRpUbT3wFUwe0C}, the terms containing the specific combinations $a_{12}a_{21}$ and $a_{12}a_{31}$ will appear only once in \eqref{eq:TwW0mYFQngeOY8JOuu3}. This also means that their corresponding determinants $\det(\boldsymbol{A}_{{\color{green}12},{\color{red}21}})$ and $\det(\boldsymbol{A}_{{\color{green}12},{\color{red}31}})$ will appear only once in \eqref{eq:TwW0mYFQngeOY8JOuu3}. In general, a term of the following form will appear in \eqref{eq:TwW0mYFQngeOY8JOuu3} exactly once:

$$\begin{equation}\label{eq:jTj3w8z7xMFMhkx2ujs} a_{1j}\cdot{a_{i1}}\cdot \det(\boldsymbol{A}_{{\color{green}1j},\color{red}i1}) \end{equation}$$

Let's now move on to $C'_{\mathrm{col}=1}$ in \eqref{eq:WvYWE0w8RWFqo9Y34fc}. Once again, let's focus on the first determinant - but this time, we perform cofactor expansion along the first row:

$$\begin{equation}\label{eq:gnYpM9eGtvYjQB3RA65} \begin{aligned}[b] -a_{21}\begin{vmatrix} a_{12}&a_{13}\\ a_{32}&a_{33}\\ \end{vmatrix}&= -a_{21}\big[a_{12}\det(\boldsymbol{A}_{{\color{green}21},{\color{red}12}})- a_{13}\det(\boldsymbol{A}_{{\color{green}21},{\color{red}13}})\big]\\ &= -a_{21}a_{12}\det(\boldsymbol{A}_{{\color{green}21},{\color{red}12}})+ a_{21}a_{13}\det(\boldsymbol{A}_{{\color{green}21},{\color{red}13}}) \end{aligned} \end{equation}$$

The same idea holds here - if we were to perform cofactor expansion on the other terms in \eqref{eq:WvYWE0w8RWFqo9Y34fc} as we did in \eqref{eq:gnYpM9eGtvYjQB3RA65}, the terms containing the combinations $a_{21}a_{12}$ and $a_{21}a_{13}$ will appear only once in \eqref{eq:WvYWE0w8RWFqo9Y34fc}. In general, a term of the following form will appear in \eqref{eq:WvYWE0w8RWFqo9Y34fc} exactly once:

$$\begin{equation}\label{eq:QzQT6OUtX3efWjMkakK} a_{i1}\cdot{a_{1j}}\cdot \det(\boldsymbol{A}_{{\color{green}i1},{\color{red}1j}}) \end{equation}$$

This is similar to \eqref{eq:jTj3w8z7xMFMhkx2ujs} - except that we have $\det(\boldsymbol{A}_{{\color{green}1j},{\color{red}i1}})$ in \eqref{eq:jTj3w8z7xMFMhkx2ujs} but $\det(\boldsymbol{A}_{{\color{green}i1},{\color{red}1j}})$ in \eqref{eq:QzQT6OUtX3efWjMkakK}. We now show that these two determinants are equal.

Recall that $\boldsymbol{A}_{{\color{green}1j},{\color{red}i1}}$ represents the sub-matrix after the following row/column removal from $\boldsymbol{A}$:

  • removing row $1$ and column $j$ first.

  • removing row $i$ and column $1$ after.

The order in which we perform these removals does not matter. This means that the above sub-matrix $\boldsymbol{A}_{{\color{green}1j},{\color{red}i1}}$ is equal to the sub-matrix obtained by:

  • removing row $i$ and column $1$ first.

  • removing row $1$ and column $j$ after.

This sub-matrix is $\boldsymbol{A}_{{\color{green}i1},{\color{red}1j}}$. Therefore, we conclude that:

$$\begin{equation}\label{eq:CiWC4e3N8IjRMlzO5Ij} \boldsymbol{A}_{{\color{green}1j},{\color{red}i1}}= \boldsymbol{A}_{{\color{green}i1},{\color{red}1j}} \end{equation}$$

Let's now go over some examples to demonstrate \eqref{eq:CiWC4e3N8IjRMlzO5Ij}. The sub-matrix $\boldsymbol{A}_{{\color{green}12},{\color{red}21}}$ is equal to $\boldsymbol{A}_{{\color{green}21},{\color{red}12}}$, as shown below:

$$\begin{align*} \boldsymbol{A}_{{\color{green}12},{\color{red}21}}&=\begin{pmatrix} \color{green}a_{11}&\color{green}a_{12}&\color{green}a_{13}\\ \color{red}a_{21}&\color{green}a_{22}&\color{red}a_{23}\\ \color{red}a_{31}&\color{green}a_{32}&a_{33} \end{pmatrix}= \begin{pmatrix} \color{green}a_{11}&\color{red}a_{12}&\color{red}a_{13}\\ \color{green}a_{21}&\color{green}a_{22}&\color{green}a_{23}\\ \color{green}a_{31}&\color{red}a_{32}&a_{33} \end{pmatrix}=\boldsymbol{A}_{{\color{green}21},{\color{red}12}} \end{align*}$$

Here's an example for the $4\times4$ case:

$$\begin{align*} \boldsymbol{A}_{{\color{green}13},{\color{red}21}}&=\begin{pmatrix} \color{green}a_{11}&\color{green}a_{12}&\color{green}a_{13}&\color{green}a_{14}\\ \color{red}a_{21}&\color{red}a_{22}&\color{green}a_{23}&\color{red}a_{24}\\ \color{red}a_{31}&a_{32}&\color{green}a_{33}&a_{34}\\ \color{red}a_{41}&a_{42}&\color{green}a_{43}&a_{44} \end{pmatrix}=\begin{pmatrix} \color{green}a_{11}&\color{red}a_{12}&\color{red}a_{13}&\color{red}a_{14}\\ \color{green}a_{21}&\color{green}a_{22}&\color{green}a_{23}&\color{green}a_{24}\\ \color{green}a_{31}&a_{32}&\color{red}a_{33}&a_{34}\\ \color{green}a_{41}&a_{42}&\color{red}a_{43}&a_{44} \end{pmatrix}=\boldsymbol{A}_{{\color{green}21},{\color{red}13}} \end{align*}$$
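
This order-independence is also easy to confirm numerically. A sketch using `np.delete`, where the text's 1-based "row 1, column 3" and "row 2, column 1" become 0-based indices in code:

```python
import numpy as np

A = np.arange(1, 17).reshape(4, 4)   # a generic 4x4 matrix

# Path 1: remove row 1 / column 3 first, then row 2 / column 1.
# (After the first removal, the old row 2 and old column 1 sit at index 0.)
path1 = np.delete(np.delete(A, 0, axis=0), 2, axis=1)
path1 = np.delete(np.delete(path1, 0, axis=0), 0, axis=1)

# Path 2: remove row 2 / column 1 first, then row 1 / column 3.
# (After the first removal, the old row 1 stays at index 0 and the old column 3 sits at index 1.)
path2 = np.delete(np.delete(A, 1, axis=0), 0, axis=1)
path2 = np.delete(np.delete(path2, 0, axis=0), 1, axis=1)

print(np.array_equal(path1, path2))   # True -- same sub-matrix either way
```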

Now taking the determinant of both matrices in \eqref{eq:CiWC4e3N8IjRMlzO5Ij} gives:

$$\det(\boldsymbol{A}_{{\color{green}1j},{\color{red}i1}})= \det(\boldsymbol{A}_{{\color{green}i1},{\color{red}1j}})$$

Therefore, we can now equate \eqref{eq:jTj3w8z7xMFMhkx2ujs} and \eqref{eq:QzQT6OUtX3efWjMkakK} to get:

$$\begin{equation}\label{eq:SbUQL1xg9lqsFYgdMIE} a_{1j}\cdot{a_{i1}}\cdot \det(\boldsymbol{A}_{{\color{green}1j},{\color{red}i1}})= a_{i1}\cdot{a_{1j}}\cdot \det(\boldsymbol{A}_{{\color{green}i1},{\color{red}1j}}) \end{equation}$$

Remember, every term of the form on the left-hand side of \eqref{eq:SbUQL1xg9lqsFYgdMIE} appears exactly once in $C'_{\mathrm{row}=1}$ in \eqref{eq:TwW0mYFQngeOY8JOuu3}, and every term of the form on the right-hand side appears exactly once in $C'_{\mathrm{col}=1}$ in \eqref{eq:WvYWE0w8RWFqo9Y34fc}. Since these matching terms are equal, we conclude that:

$$C'_{\mathrm{row}=1}= C'_{\mathrm{col}=1}$$

It then follows that:

$$\det(\boldsymbol{A})=C_{\text{row=1}} =C_{\text{col=1}}$$

Again, we have only proven the proposition for the $3\times3$ case, but the proof for the general case follows the same flow. This completes the proof.
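
As a closing sanity check, the theorem can be tested numerically on random matrices by comparing the cofactor expansion along the first row with the expansion along the first column. A sketch assuming NumPy, with `np.linalg.det` used only to evaluate the minors:

```python
import numpy as np

rng = np.random.default_rng(0)

def cofactor(A, i, j):
    # Cofactor C_ij (0-based indices): sign times the minor.
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(sub)

for n in range(2, 6):
    A = rng.standard_normal((n, n))
    row1 = sum(A[0, j] * cofactor(A, 0, j) for j in range(n))   # expansion along row 1
    col1 = sum(A[i, 0] * cofactor(A, i, 0) for i in range(n))   # expansion along column 1
    print(np.isclose(row1, col1), np.isclose(row1, np.linalg.det(A)))
# Each line should print: True True
```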

Published by Isshin Inada