Properties of Eigenvalues and Eigenvectors
Eigenvectors corresponding to distinct eigenvalues are linearly independent
If $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_k$ are eigenvectors of a matrix $\boldsymbol{A}$ corresponding to distinct eigenvalues $\lambda_1$, $\lambda_2$, $\cdots$, $\lambda_k$ respectively, then $\{\boldsymbol{v}_1,\boldsymbol{v}_2,\cdots,\boldsymbol{v}_k\}$ is linearly independent.
Proof. We will prove this theorem by contradiction. Assume that the eigenvectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_k$ of $\boldsymbol{A}$, corresponding to the distinct eigenvalues $\lambda_1$, $\lambda_2$, $\cdots$, $\lambda_k$ respectively, form a linearly dependent set.
Even though $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_k$ form a linearly dependent set, we can repeatedly apply the plus-minus theorem to discard dependent vectors and obtain a linearly independent subset, which we relabel if necessary as $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_r$ where $r\lt{k}$. By definition of eigenvalues and eigenvectors, we have the following equations:
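\begin{equation}\label{eq:xoVVTLpTXiHndW2VaX6}
\boldsymbol{A}\boldsymbol{v}_i=\lambda_i\boldsymbol{v}_i
\qquad\text{for } i=1,2,\cdots,r
\end{equation}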
Because each discarded eigenvector lies in the span of this independent subset, any such eigenvector $\boldsymbol{v}_j$ where $r\lt{j}\le{k}$ can be expressed as a linear combination of the linearly independent eigenvectors $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_r$ like so:
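\begin{equation}\label{eq:EeIFGb6TxoAgiAeUdHJ}
\boldsymbol{v}_j=c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2+\cdots+c_r\boldsymbol{v}_r
\end{equation}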
Let the eigenvalue of $\boldsymbol{v}_j$ be denoted as $\lambda_{j}$. From the definition of eigenvalues and eigenvectors, we have that:
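\begin{equation}\label{eq:DbHPIz2jQcrqLytsjBb}
\boldsymbol{A}\boldsymbol{v}_j=\lambda_j\boldsymbol{v}_j
\end{equation}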
Substituting \eqref{eq:EeIFGb6TxoAgiAeUdHJ} into \eqref{eq:DbHPIz2jQcrqLytsjBb} gives:
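\begin{equation}\label{eq:tfPYGwWIWzWFgoisUah}
\boldsymbol{A}(c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2+\cdots+c_r\boldsymbol{v}_r)
=\lambda_j(c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2+\cdots+c_r\boldsymbol{v}_r)
\end{equation}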
The left-hand side can be written as:
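\begin{equation}
\boldsymbol{A}(c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2+\cdots+c_r\boldsymbol{v}_r)
=c_1\boldsymbol{A}\boldsymbol{v}_1+c_2\boldsymbol{A}\boldsymbol{v}_2+\cdots+c_r\boldsymbol{A}\boldsymbol{v}_r
\end{equation}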
Using \eqref{eq:xoVVTLpTXiHndW2VaX6}, we get:
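\begin{equation}\label{eq:L8BfA4CQDFo5qK3ShZR}
\boldsymbol{A}(c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2+\cdots+c_r\boldsymbol{v}_r)
=c_1\lambda_1\boldsymbol{v}_1+c_2\lambda_2\boldsymbol{v}_2+\cdots+c_r\lambda_r\boldsymbol{v}_r
\end{equation}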
Substituting \eqref{eq:L8BfA4CQDFo5qK3ShZR} into \eqref{eq:tfPYGwWIWzWFgoisUah} gives:
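\begin{equation}
c_1\lambda_1\boldsymbol{v}_1+c_2\lambda_2\boldsymbol{v}_2+\cdots+c_r\lambda_r\boldsymbol{v}_r
=\lambda_jc_1\boldsymbol{v}_1+\lambda_jc_2\boldsymbol{v}_2+\cdots+\lambda_jc_r\boldsymbol{v}_r
\end{equation}
Moving every term to one side gives:
\begin{equation}
c_1(\lambda_1-\lambda_j)\boldsymbol{v}_1+c_2(\lambda_2-\lambda_j)\boldsymbol{v}_2+\cdots+c_r(\lambda_r-\lambda_j)\boldsymbol{v}_r=\boldsymbol{0}
\end{equation}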
By definition of linear independence, because $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_r$ are linearly independent vectors, we have that:
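\begin{equation}
c_i(\lambda_i-\lambda_j)=0
\qquad\text{for } i=1,2,\cdots,r
\end{equation}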
Since the eigenvalues are distinct and $j\gt{r}$, the factor $\lambda_i-\lambda_j$ cannot be zero for any $i=1,2,\cdots,r$. This means that $c_1=c_2=\cdots=c_r=0$. Now, recall from \eqref{eq:EeIFGb6TxoAgiAeUdHJ} that eigenvector $\boldsymbol{v}_j$ is:
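\begin{equation}
\boldsymbol{v}_j=c_1\boldsymbol{v}_1+c_2\boldsymbol{v}_2+\cdots+c_r\boldsymbol{v}_r
\end{equation}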
Since the scalar coefficients are all zero, we have that $\boldsymbol{v}_j=\boldsymbol{0}$. However, eigenvectors are non-zero vectors by definition. We have a contradiction, which means that our initial assumption that $\boldsymbol{v}_1$, $\boldsymbol{v}_2$, $\cdots$, $\boldsymbol{v}_k$ form a linearly dependent set is false. In other words, eigenvectors corresponding to distinct eigenvalues must be linearly independent.
This completes the proof.
Matrix is invertible if and only if zero is not an eigenvalue
Let $\boldsymbol{A}$ be a square matrix. $\boldsymbol{A}$ is invertible if and only if $0$ is not an eigenvalue of $\boldsymbol{A}$.
Proof. We first prove the forward direction. Assume $\boldsymbol{A}$ is invertible. Because an invertible matrix admits only the trivial solution to its homogeneous system, the only solution to $\boldsymbol{Ax}=\boldsymbol{0}$ is $\boldsymbol{x}=\boldsymbol{0}$. This also means that the only solution to $\boldsymbol{Ax}=0\boldsymbol{x}$ is $\boldsymbol{x}=\boldsymbol{0}$. In other words, for $0$ to be an eigenvalue of $\boldsymbol{A}$, the corresponding eigenvector would have to be $\boldsymbol{0}$. However, $\boldsymbol{0}$ cannot be an eigenvector by definition, and thus $0$ cannot be an eigenvalue of $\boldsymbol{A}$.
We now prove the converse. Assume that $0$ is not an eigenvalue of $\boldsymbol{A}$. This means that there is no non-zero vector $\boldsymbol{x}$ such that $\boldsymbol{Ax}=0\boldsymbol{x}$ holds. In other words, $\boldsymbol{Ax}=0\boldsymbol{x}$ holds only if $\boldsymbol{x}$ is the zero vector. Because $\boldsymbol{Ax}=0\boldsymbol{x}$ is equivalent to $\boldsymbol{Ax}=\boldsymbol{0}$, the only solution to $\boldsymbol{Ax}=\boldsymbol{0}$ is the trivial solution $\boldsymbol{x}=\boldsymbol{0}$. A square matrix whose homogeneous system has only the trivial solution is invertible, so we conclude that $\boldsymbol{A}$ is invertible.
This completes the proof.
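As a quick numerical illustration of this theorem, consider the singular matrix below, whose eigenvalues are $0$ and $5$:
\begin{equation}
\boldsymbol{A}=\begin{pmatrix}1&2\\2&4\end{pmatrix},
\qquad
\det(\boldsymbol{A}-\lambda\boldsymbol{I}_2)=\lambda^2-5\lambda=\lambda(\lambda-5)
\end{equation}
Because $0$ is an eigenvalue, the theorem tells us that $\boldsymbol{A}$ is not invertible, which agrees with the fact that $\det(\boldsymbol{A})=0$.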
Non-negative powers of eigenvalues are eigenvalues of the matrix raised to the same power
Let $\boldsymbol{A}$ be a square matrix. If $\lambda$ is an eigenvalue of $\boldsymbol{A}$, then $\lambda^k$ is an eigenvalue of $\boldsymbol{A}^k$ where $k\ge0$ is an integer.
Proof. We will prove this by induction. Let $\boldsymbol{x}$ be an eigenvector of the square matrix $\boldsymbol{A}$ corresponding to the eigenvalue $\lambda$. For the base case $k=0$, we want to show that $\lambda^0$ is an eigenvalue of $\boldsymbol{A}^0$.
We use the definition $\boldsymbol{A}^0=\boldsymbol{I}_n$ to get:
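\begin{equation}
\boldsymbol{A}^0\boldsymbol{x}=\boldsymbol{I}_n\boldsymbol{x}=\boldsymbol{x}=1\boldsymbol{x}=\lambda^0\boldsymbol{x}
\end{equation}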
This proves the base case. We now assume that the theorem holds for the exponent $n-1$, that is:
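\begin{equation}\label{eq:WBZJ4AFlKyT6k1aT049}
\boldsymbol{A}^{n-1}\boldsymbol{x}=\lambda^{n-1}\boldsymbol{x}
\end{equation}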
Now, consider $\boldsymbol{A}^n\boldsymbol{x}$ below:
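\begin{equation}
\boldsymbol{A}^n\boldsymbol{x}=(\boldsymbol{A}\boldsymbol{A}^{n-1})\boldsymbol{x}=\boldsymbol{A}(\boldsymbol{A}^{n-1}\boldsymbol{x})
\end{equation}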
We use the inductive assumption \eqref{eq:WBZJ4AFlKyT6k1aT049} to get:
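\begin{equation}
\boldsymbol{A}^n\boldsymbol{x}
=\boldsymbol{A}(\lambda^{n-1}\boldsymbol{x})
=\lambda^{n-1}(\boldsymbol{A}\boldsymbol{x})
=\lambda^{n-1}(\lambda\boldsymbol{x})
=\lambda^{n}\boldsymbol{x}
\end{equation}
Since $\boldsymbol{x}\ne\boldsymbol{0}$, this shows that $\lambda^{n}$ is an eigenvalue of $\boldsymbol{A}^{n}$.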
By the principle of mathematical induction, we have that the theorem holds for all integers $k\ge0$. This completes the proof.
Reciprocal of an eigenvalue is an eigenvalue of the matrix inverse
Let $\boldsymbol{A}$ be a square invertible matrix. If $\lambda$ is an eigenvalue of $\boldsymbol{A}$, then $\lambda^{-1}$ is an eigenvalue of $\boldsymbol{A}^{-1}$.
Proof. Let $\boldsymbol{x}\ne\boldsymbol{0}$ be an eigenvector of $\boldsymbol{A}$ corresponding to eigenvalue $\lambda$. To show that $\lambda^{-1}$ is an eigenvalue of $\boldsymbol{A}^{-1}$, we can show that $\boldsymbol{A}^{-1}\boldsymbol{x}=\lambda^{-1}\boldsymbol{x}$. We start with the left-hand side:
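\begin{equation}
\boldsymbol{A}^{-1}\boldsymbol{x}
=\frac{1}{\lambda}\boldsymbol{A}^{-1}(\lambda\boldsymbol{x})
=\frac{1}{\lambda}\boldsymbol{A}^{-1}(\boldsymbol{A}\boldsymbol{x})
=\frac{1}{\lambda}(\boldsymbol{A}^{-1}\boldsymbol{A})\boldsymbol{x}
=\frac{1}{\lambda}\boldsymbol{x}
=\lambda^{-1}\boldsymbol{x}
\end{equation}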
By definition of eigenvalues and eigenvectors, we have that $\lambda^{-1}$ is an eigenvalue of $\boldsymbol{A}^{-1}$. This completes the proof.
Note that because $\boldsymbol{A}$ is invertible, the eigenvalues of $\boldsymbol{A}$ cannot be zero by the theorem above. This prevents the possibility of division by zero for $\lambda^{-1}=1/\lambda$.
Characteristic polynomial of a matrix and its transpose is the same
Let $\boldsymbol{A}$ be a square matrix. $\boldsymbol{A}$ and $\boldsymbol{A}^T$ share the same characteristic polynomial.
Proof. The characteristic polynomial of $\boldsymbol{A}$ is:
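\begin{equation}
p_{\boldsymbol{A}}(\lambda)=\det(\boldsymbol{A}-\lambda\boldsymbol{I}_n)
\end{equation}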
Because a matrix and its transpose have the same determinant, we have that:
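\begin{equation}
\det(\boldsymbol{A}-\lambda\boldsymbol{I}_n)=\det\big((\boldsymbol{A}-\lambda\boldsymbol{I}_n)^T\big)
\end{equation}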
Using the fact that the transpose distributes over sums and scalar multiples, together with $\boldsymbol{I}_n^T=\boldsymbol{I}_n$, yields:
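\begin{equation}
\det\big((\boldsymbol{A}-\lambda\boldsymbol{I}_n)^T\big)
=\det\big(\boldsymbol{A}^T-\lambda\boldsymbol{I}_n^T\big)
=\det\big(\boldsymbol{A}^T-\lambda\boldsymbol{I}_n\big)
=p_{\boldsymbol{A}^T}(\lambda)
\end{equation}
The right-hand side is precisely the characteristic polynomial of $\boldsymbol{A}^T$, so $\boldsymbol{A}$ and $\boldsymbol{A}^T$ share the same characteristic polynomial.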
This completes the proof.
Matrix and its transpose share the same eigenvalues
Let $\boldsymbol{A}$ be a square matrix. If $\lambda$ is an eigenvalue of $\boldsymbol{A}$, then $\lambda$ is also an eigenvalue of $\boldsymbol{A}^T$.
Proof. By the previous theorem, $\boldsymbol{A}$ and $\boldsymbol{A}^T$ share the same characteristic polynomial. Since the eigenvalues of a matrix are the roots of its characteristic polynomial, $\boldsymbol{A}$ and $\boldsymbol{A}^T$ share the same eigenvalues. This completes the proof.
Non-zero scalar multiples of eigenvectors are also eigenvectors
Non-zero scalar multiples of an eigenvector of a matrix are also eigenvectors of the matrix.
Proof. Suppose $\boldsymbol{x}^*$ is an eigenvector of a matrix $\boldsymbol{A}$. By definition, the following holds:
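\begin{equation}
\boldsymbol{A}\boldsymbol{x}^*=\lambda\boldsymbol{x}^*
\end{equation}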
Where $\lambda$ is the corresponding eigenvalue. Now, consider the vector $k\boldsymbol{x}^*$ where $k$ is some non-zero scalar:
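\begin{equation}
\boldsymbol{A}(k\boldsymbol{x}^*)
=k(\boldsymbol{A}\boldsymbol{x}^*)
=k(\lambda\boldsymbol{x}^*)
=\lambda(k\boldsymbol{x}^*)
\end{equation}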
Since $k\ne0$ and $\boldsymbol{x}^*\ne\boldsymbol{0}$, the vector $k\boldsymbol{x}^*$ is non-zero, and $\boldsymbol{A}(k\boldsymbol{x}^*)=\lambda(k\boldsymbol{x}^*)$. Therefore, $k\boldsymbol{x}^*$ is also an eigenvector of $\boldsymbol{A}$ by definition. This completes the proof.
Degree of characteristic polynomial
The characteristic polynomial of an $n\times{n}$ matrix has a degree of $n$.
Proof. Suppose $\boldsymbol{A}$ is an $n\times{n}$ matrix. The characteristic polynomial of $\boldsymbol{A}$ is:
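\begin{equation}
p(\lambda)=\det(\boldsymbol{A}-\lambda\boldsymbol{I}_n)=
\begin{vmatrix}
a_{11}-\lambda & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22}-\lambda & \cdots & a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}-\lambda
\end{vmatrix}
\end{equation}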
Let's compute the determinant using cofactor expansion along the first column:
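\begin{equation}
\det(\boldsymbol{A}-\lambda\boldsymbol{I}_n)=
(a_{11}-\lambda)\det(\boldsymbol{M}_{11})
-a_{21}\det(\boldsymbol{M}_{21})
+a_{31}\det(\boldsymbol{M}_{31})
-\cdots
+(-1)^{n+1}a_{n1}\det(\boldsymbol{M}_{n1})
\end{equation}
Where $\boldsymbol{M}_{i1}$ denotes the matrix obtained by deleting the $i$-th row and the first column of $\boldsymbol{A}-\lambda\boldsymbol{I}_n$.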
Observe that the first determinant term is multiplied by a factor containing $\lambda$, whereas the subsequent determinant terms are multiplied by constants. For instance, the second and third determinant terms are multiplied by $a_{21}$ and $a_{31}$ rather than a factor involving $\lambda$, so these terms contribute a lower power of $\lambda$. Since the degree of the polynomial depends only on the highest power of $\lambda$, let's ignore the subsequent determinant terms:
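\begin{equation}
(a_{11}-\lambda)\det(\boldsymbol{M}_{11})=
(a_{11}-\lambda)
\begin{vmatrix}
a_{22}-\lambda & \cdots & a_{2n}\\
\vdots & \ddots & \vdots\\
a_{n2} & \cdots & a_{nn}-\lambda
\end{vmatrix}
\end{equation}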
Now, we evaluate this determinant recursively by cofactor expansion along the first column. The highest power of $\lambda$ will be given by:
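\begin{equation}
(a_{11}-\lambda)(a_{22}-\lambda)\cdots(a_{nn}-\lambda)
=(-1)^n\lambda^n+(\text{terms of lower degree in }\lambda)
\end{equation}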
The highest power of $\lambda$ is therefore $n$. This means that the characteristic polynomial has degree $n$. This completes the proof.
Symmetric matrices have real eigenvalues
If $\boldsymbol{A}$ is a real symmetric matrix, then the eigenvalues of $\boldsymbol{A}$ are real.
Proof. Let $\lambda$ be an eigenvalue of $\boldsymbol{A}$, which may be a complex number. The corresponding eigenvector $\boldsymbol{x}$ may likewise be a complex vector, as it could have complex entries. For this pair of eigenvalue and eigenvector, the following holds by definition:
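\begin{equation}\label{eq:EwLOklsm4rcvRKwSChk}
\boldsymbol{A}\boldsymbol{x}=\lambda\boldsymbol{x}
\end{equation}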
Here, $\boldsymbol{A}$ is assumed to be real, while $\lambda$ and $\boldsymbol{x}$ may be complex. We take the complex conjugate of both sides to get:
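\begin{equation}\label{eq:rUaQmK4SanwoPwdr8yI}
\overline{\boldsymbol{A}\boldsymbol{x}}=\overline{\lambda\boldsymbol{x}}
\end{equation}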
By a property of matrix complex conjugates and because $\boldsymbol{A}$ is real:
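\begin{equation}
\overline{\boldsymbol{A}\boldsymbol{x}}=\overline{\boldsymbol{A}}\,\overline{\boldsymbol{x}}=\boldsymbol{A}\overline{\boldsymbol{x}}
\end{equation}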
Also, by another property of complex conjugates, $\overline{\lambda\boldsymbol{x}}= \overline{\lambda}\,\overline{\boldsymbol{x}}$. We can now express \eqref{eq:rUaQmK4SanwoPwdr8yI} as:
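\begin{equation}\label{eq:lHPypbFUvNh0UQKIxP1}
\boldsymbol{A}\overline{\boldsymbol{x}}=\overline{\lambda}\,\overline{\boldsymbol{x}}
\end{equation}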
Now, consider $\lambda \overline{\boldsymbol{x}}^T\boldsymbol{x}$, which can be evaluated as:
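\begin{equation}\label{eq:vlvAUm9Kax6if5WAUZi}
\begin{aligned}
\lambda\overline{\boldsymbol{x}}^T\boldsymbol{x}
&=\overline{\boldsymbol{x}}^T(\lambda\boldsymbol{x})\\
&=\overline{\boldsymbol{x}}^T(\boldsymbol{A}\boldsymbol{x})\\
&=(\overline{\boldsymbol{x}}^T\boldsymbol{A})\boldsymbol{x}\\
&=(\boldsymbol{A}^T\overline{\boldsymbol{x}})^T\boldsymbol{x}\\
&=(\boldsymbol{A}\overline{\boldsymbol{x}})^T\boldsymbol{x}\\
&=(\overline{\lambda}\,\overline{\boldsymbol{x}})^T\boldsymbol{x}\\
&=\overline{\lambda}\,\overline{\boldsymbol{x}}^T\boldsymbol{x}
\end{aligned}
\end{equation}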
Note the following:
The 2nd equality holds by \eqref{eq:EwLOklsm4rcvRKwSChk}.
The 4th equality holds because $\overline{\boldsymbol{x}}^T\boldsymbol{A}=(\boldsymbol{A}^T\overline{\boldsymbol{x}})^T$, a property of the matrix transpose.
The 5th equality holds by the definition of symmetric matrices, that is, $\boldsymbol{A}^T=\boldsymbol{A}$.
The 6th equality holds by \eqref{eq:lHPypbFUvNh0UQKIxP1}.
The final equality holds because $(\overline{\lambda}\,\overline{\boldsymbol{x}})^T=\overline{\lambda}\,\overline{\boldsymbol{x}}^T$, again by a property of the matrix transpose.
Since $\boldsymbol{x}$ is an eigenvector of $\boldsymbol{A}$, it cannot be the zero vector by definition. Consequently, $\overline{\boldsymbol{x}}^T\boldsymbol{x}=\sum_{i}\vert x_i\vert^2\ne0$. The only way for \eqref{eq:vlvAUm9Kax6if5WAUZi} to hold is therefore $\lambda=\overline{\lambda}$. A complex number that equals its own conjugate is real, so $\lambda\in\mathbb{R}$. This completes the proof.