Covariance Matrix
Covariance matrix for two random variables
The covariance matrix for the two random variables $X_1$ and $X_2$ looks like the following:

$$\Sigma=\begin{pmatrix}\mathrm{var}(X_1)&\mathrm{cov}(X_1,X_2)\\\mathrm{cov}(X_2,X_1)&\mathrm{var}(X_2)\end{pmatrix}$$
Note that $\mathrm{cov}(X_1,X_2)=\mathrm{cov}(X_2,X_1)$.
As we can see, the diagonal entries contain the variances of the random variables $X_1,\dots,X_n$, while the off-diagonal entries contain the covariances between these random variables. Since the covariance matrix contains information about both the covariances and the variances, it is sometimes called the variance-covariance matrix.
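To make this concrete, here is a minimal NumPy sketch (the sample values and variable names are made up for illustration) that computes the covariance matrix of two random variables and confirms that the diagonal holds the variances while the matrix is symmetric:

```python
import numpy as np

# Hypothetical samples of two random variables X1 and X2
x1 = np.array([2.1, 2.5, 3.6, 4.0, 4.8])
x2 = np.array([8.0, 10.0, 12.0, 14.0, 16.0])

# np.cov treats each input as one variable and returns the 2x2 covariance matrix
cov_matrix = np.cov(x1, x2)
print(cov_matrix)

# The diagonal entries are the variances var(X1) and var(X2)
# (ddof=1 matches np.cov's default unbiased normalization)
print(np.isclose(cov_matrix[0, 0], np.var(x1, ddof=1)))  # True
print(np.isclose(cov_matrix[1, 1], np.var(x2, ddof=1)))  # True

# Symmetry: cov(X1, X2) == cov(X2, X1)
print(np.isclose(cov_matrix[0, 1], cov_matrix[1, 0]))    # True
```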
Extending to higher dimensions
To generalise this to more than two random variables, we need some linear algebra. Let $\mathbf{X}$ denote a random vector, and let the subscripted $X_i$ denote a random scalar, namely the $i$-th component of $\mathbf{X}$.
Suppose we have the following random vector and vector mean:

$$\mathbf{X}=\begin{pmatrix}X_1\\X_2\\\vdots\\X_n\end{pmatrix},\qquad\boldsymbol{\mu}=\mathbb{E}(\mathbf{X})=\begin{pmatrix}\mu_1\\\mu_2\\\vdots\\\mu_n\end{pmatrix}$$
If the components $X_1,\dots,X_n$ of $\mathbf{X}$ are random variables, each with finite variance, then the covariance matrix $\Sigma$ is the matrix whose $(i,j)$ entry is the covariance:

$$\Sigma_{ij}=\mathrm{cov}(X_i,X_j)=\mathbb{E}\big[(X_i-\mu_i)(X_j-\mu_j)\big]$$
where $\mu_i=\mathbb{E}(X_i)$ and $\mu_j=\mathbb{E}(X_j)$. In matrix form, this translates to the following:

$$\Sigma=\mathbb{E}\big[(\mathbf{X}-\boldsymbol{\mu})(\mathbf{X}-\boldsymbol{\mu})^T\big]$$
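As a sanity check, this matrix form can be evaluated directly from samples. The sketch below (the data is randomly generated for illustration) estimates $\boldsymbol{\mu}$ by the sample mean and averages the outer products $(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^T$, which reproduces the result of `np.cov`:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # 500 samples of a 3-dimensional random vector

mu = X.mean(axis=0)             # sample estimate of the mean vector
centered = X - mu               # (X - mu) for every sample

# Average of the outer products (X - mu)(X - mu)^T over all samples;
# dividing by n - 1 gives the usual unbiased estimate
sigma = centered.T @ centered / (len(X) - 1)

# rowvar=False tells np.cov that each column is one variable
print(np.allclose(sigma, np.cov(X, rowvar=False)))  # True
```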
Trick when converting to matrix form
When an expression involves all the pairwise products $v_iv_j$, we can often collect them into the outer product $\mathbf{v}\mathbf{v}^T$, whose $(i,j)$ entry is precisely $v_iv_j$.
Note that $\mathbf{X}\mathbf{X}^T$ is a matrix (recall that the outer product $\mathbf{a}\mathbf{b}^T$ of two vectors results in a matrix), even though $\mathbf{X}$ itself is a vector. An equivalent statement of the above would be:

$$\Sigma=\mathbb{E}(\mathbf{X}\mathbf{X}^T)-\boldsymbol{\mu}\boldsymbol{\mu}^T$$
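Both the outer-product trick and this equivalent form can be verified numerically. In the sketch below (the data and mixing matrix are made up for illustration), `np.outer` computes $\mathbf{v}\mathbf{v}^T$, and the expectations are estimated by sample averages:

```python
import numpy as np

# The (i, j) entry of the outer product v v^T is v_i * v_j
v = np.array([1.0, 2.0, 3.0])
outer = np.outer(v, v)
print(np.isclose(outer[0, 2], v[0] * v[2]))  # True

# Estimate Sigma = E[X X^T] - mu mu^T from samples of a correlated 2-D vector
rng = np.random.default_rng(1)
X = rng.normal(size=(100_000, 2)) @ np.array([[1.0, 0.5], [0.0, 1.0]])

mu = X.mean(axis=0)
E_XXt = X.T @ X / len(X)            # sample estimate of E[X X^T]
sigma = E_XXt - np.outer(mu, mu)    # E[X X^T] - mu mu^T

# Agrees with the centered definition up to sampling error and the
# ddof=1 normalization that np.cov uses
print(np.allclose(sigma, np.cov(X, rowvar=False), atol=1e-2))  # True
```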