# Eigenvalues, eigenvectors and eigendecomposition

## What do you need to know to understand this topic?

• Basics of linear algebra

## Eigenwhat?

Eigen means own or self. In linear algebra, eigenvalue, eigenvector and eigendecomposition are terms that are intrinsically related. Eigendecomposition is the method to decompose a square matrix into its eigenvalues and eigenvectors. For a matrix $A$, if $$A\mathbf{v}=\lambda \mathbf{v}\label{eq:Avlv}$$ then $\mathbf{v}$ is an eigenvector of matrix $A$ and $\lambda$ is the corresponding eigenvalue. That is, if multiplying matrix $A$ by a vector yields a scaled version of the same vector, then that vector is an eigenvector of $A$ and the scaling factor is its eigenvalue.

## Eigendecomposition

So how do we find the eigenvectors of a matrix? From $\eqref{eq:Avlv}$: $$A\mathbf{v}-\lambda I \mathbf{v} = 0$$ $$(A -\lambda I) \mathbf{v} = 0,\label{eq:AlI}$$ where $I$ is the identity matrix. The values of $\lambda$ for which $\eqref{eq:AlI}$ has non-zero solutions are the eigenvalues of $A$. It turns out that this condition is equivalent to: $$\det(A-\lambda I) = 0,\label{eq:detAlI}$$ where $\det()$ is the determinant of a matrix.

First, you must know that a matrix is non-invertible (singular) if and only if its determinant is zero. If $A-\lambda I$ were invertible, we could left-multiply both sides of $\eqref{eq:AlI}$ by $(A-\lambda I)^{-1}$ and obtain only the trivial solution: $$\mathbf{v} = 0.$$ So non-zero solutions $\mathbf{v}$ can exist only when $A-\lambda I$ is singular, that is, for the values of $\lambda$ where $\eqref{eq:detAlI}$ holds. Those values of $\lambda$ are the eigenvalues of $A$.

### An example

Let's see the eigendecomposition for the matrix: $$A=\left[\begin{array}{cc}1 & 0\\1 & 3\\\end{array}\right]$$ From $\eqref{eq:detAlI}$: $$\det\left(\left[\begin{array}{cc}1-\lambda & 0\\1 & 3-\lambda\\\end{array}\right]\right) = 0$$ $$(1-\lambda)(3-\lambda) = 0$$ we get directly $\lambda_1 = 1$ and $\lambda_2 = 3$. The above expression is usually referred to as the characteristic polynomial of the matrix.
Plugging $\lambda_1$ into $\eqref{eq:Avlv}$, we get: $$\left[\begin{array}{cc}1 & 0\\1 & 3\\\end{array}\right]\left[\begin{array}{c}v_{11}\\v_{12}\\\end{array}\right]= 1 \left[\begin{array}{c}v_{11}\\v_{12}\\\end{array}\right]$$ The first row is trivially satisfied, and the second row gives $v_{11} + 3v_{12} = v_{12}$, i.e., $v_{11} = -2v_{12}$. That is, any vector $\mathbf{v_1} = [v_{11}, v_{12}]$ where $v_{11} = -2v_{12}$ is an eigenvector of $A$ with eigenvalue 1.
Plugging $\lambda_2$ into $\eqref{eq:Avlv}$, we get: $$\left[\begin{array}{cc}1 & 0\\1 & 3\\\end{array}\right]\left[\begin{array}{c}v_{21}\\v_{22}\\\end{array}\right]= 3 \left[\begin{array}{c}v_{21}\\v_{22}\\\end{array}\right]$$ from which we get $v_{21} = 0$ and $v_{22} \in \mathbb{R}$. That is, any vector $\mathbf{v_2} = [v_{21}, v_{22}]$ where $v_{21} = 0$ is an eigenvector of $A$ with eigenvalue 3.
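We can sketch a quick numerical check of this example with NumPy (the use of `numpy.linalg.eig` here is an illustration, not part of the derivation above):

```python
import numpy as np

# The example matrix from the text.
A = np.array([[1.0, 0.0],
              [1.0, 3.0]])

# np.linalg.eig returns the eigenvalues and unit-norm eigenvectors
# (one eigenvector per column); the order is not guaranteed.
eigvals, eigvecs = np.linalg.eig(A)

# The eigenvalues should be 1 and 3, matching the characteristic polynomial.
assert np.allclose(np.sort(eigvals), [1.0, 3.0])

# Check A v = lambda v for each eigenpair.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```

Note that `eig` returns normalized eigenvectors, so the eigenvector for $\lambda_1 = 1$ comes out as a unit-length multiple of $[-2, 1]$, consistent with the constraint $v_{11} = -2v_{12}$ derived above.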

## Why is eigendecomposition useful?

Referring to our previous example, we can join both eigenvectors and eigenvalues in a single matrix equation: $$A\left[\mathbf{v_1 v_2}\right] = \left[\begin{array}{cc}1 & 0\\1 & 3\\\end{array}\right]\left[\begin{array}{cc}v_{11} & v_{21}\\v_{12} & v_{22}\\\end{array}\right] =\left[\begin{array}{cc}v_{11} & v_{21}\\v_{12} & v_{22}\\\end{array}\right]\left[\begin{array}{cc}\lambda_1 & 0\\0 & \lambda_2\\\end{array}\right] =\left[\mathbf{v_1 v_2}\right]\left[\begin{array}{cc}\lambda_1 & 0\\0 & \lambda_2\\\end{array}\right]$$ If we replace: $$\Lambda = \left[\begin{array}{cc}\lambda_1 & 0\\0 & \lambda_2\\\end{array}\right]$$ $$Q = \left[\mathbf{v_1 v_2}\right]$$ it is also true that: $$AQ = Q\Lambda$$ $$A = Q\Lambda Q^{-1}\label{eq:AQLQ}$$ Eigendecomposition decomposes a matrix $A$ into a multiplication of a matrix of eigenvectors $Q$ and a diagonal matrix of eigenvalues $\Lambda$. This can only be done if a matrix is diagonalizable. In fact, the definition of a diagonalizable matrix $A \in \mathbb{R}^{n \times n}$ is that it can be eigendecomposed into $n$ linearly independent eigenvectors, so that $Q^{-1}AQ = \Lambda$.
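This factorization can be verified numerically. Here is a minimal sketch with NumPy, using a concrete (hypothetical) choice of eigenvectors for the example matrix: $\mathbf{v_1} = [-2, 1]$ and $\mathbf{v_2} = [0, 1]$:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])

# Eigenvectors as columns of Q, eigenvalues on the diagonal of Lambda.
Q = np.array([[-2.0, 0.0],
              [ 1.0, 1.0]])
Lam = np.diag([1.0, 3.0])

# A Q = Q Lambda ...
assert np.allclose(A @ Q, Q @ Lam)

# ... hence A = Q Lambda Q^{-1}.
A_rebuilt = Q @ Lam @ np.linalg.inv(Q)
assert np.allclose(A_rebuilt, A)
```

Any non-zero scaling of the eigenvector columns works equally well; only the pairing between each column of $Q$ and its diagonal entry in $\Lambda$ matters.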

### Matrix inverse with eigendecomposition

From $\eqref{eq:AQLQ}$: $$A^{-1} = Q \Lambda^{-1} Q^{-1}$$ The inverse of $\Lambda$ is just the inverse of each diagonal element (the eigenvalues). $Q^{-1}$ still needs to be computed, but this is often simpler than computing $A^{-1}$ directly, especially if the same decomposition is reused for several operations.
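A quick numerical sketch of the identity $A^{-1} = Q \Lambda^{-1} Q^{-1}$, using NumPy on the example matrix (an illustration, not a recommended way to invert matrices in practice):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])

eigvals, Q = np.linalg.eig(A)

# Inverting the diagonal matrix is just inverting each eigenvalue.
Lam_inv = np.diag(1.0 / eigvals)

# A^{-1} = Q Lambda^{-1} Q^{-1}
A_inv = Q @ Lam_inv @ np.linalg.inv(Q)

assert np.allclose(A_inv, np.linalg.inv(A))
```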

### Power of a matrix with eigendecomposition

From $\eqref{eq:AQLQ}$: $$A^2 = Q \Lambda Q^{-1} Q \Lambda Q^{-1} = Q \Lambda^{2} Q^{-1}$$ $$A^n = Q \Lambda^n Q^{-1}$$ The power of $\Lambda$ is just the power of each diagonal element. This is much simpler than repeatedly multiplying $A$ by itself.
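The same identity can be checked numerically. A minimal sketch with NumPy, comparing $Q \Lambda^n Q^{-1}$ against repeated multiplication via `numpy.linalg.matrix_power`:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])

eigvals, Q = np.linalg.eig(A)

n = 10
# Raising the diagonal matrix to a power is element-wise on the diagonal.
Lam_n = np.diag(eigvals ** n)

# A^n = Q Lambda^n Q^{-1}
A_n = Q @ Lam_n @ np.linalg.inv(Q)

assert np.allclose(A_n, np.linalg.matrix_power(A, n))
```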

## Properties of eigendecomposition

• $det(A)=\prod_{i=1}^{n}\lambda_i$
• $tr(A)=\sum_{i=1}^{n}\lambda_i$
• The eigenvalues of $A^{-1}$ are $\lambda_i^{-1}$
• The eigenvalues of $A^{n}$ are $\lambda_i^{n}$
• More generally, for any polynomial (or analytic) function $f$, the eigenvalues of $f(A)$ are $f(\lambda_i)$
• The eigenvectors of $A^{-1}$ are the same as the eigenvectors of $A$.
• If $A$ is Hermitian (its conjugate transpose is equal to itself), then its eigenvalues are real, and eigenvectors corresponding to distinct eigenvalues are mutually orthogonal (their dot-product is zero).
• $A$ is invertible if and only if all its eigenvalues are different from zero.
• $A \in \mathbb{R}^{n \times n}$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors. Having $n$ distinct eigenvalues is sufficient but not necessary: the identity matrix has a repeated eigenvalue yet is diagonalizable.
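Several of these properties lend themselves to a quick numerical check. A sketch with NumPy, using a random symmetric (hence diagonalizable, with real eigenvalues) matrix as the test case:

```python
import numpy as np

# A random symmetric matrix (symmetric => real eigenvalues, diagonalizable).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M + M.T

eigvals = np.linalg.eigvals(A)

# det(A) is the product of the eigenvalues.
assert np.allclose(np.prod(eigvals), np.linalg.det(A))

# tr(A) is the sum of the eigenvalues.
assert np.allclose(np.sum(eigvals), np.trace(A))

# The eigenvalues of A^{-1} are the reciprocals of the eigenvalues of A.
inv_eigs = np.linalg.eigvals(np.linalg.inv(A))
assert np.allclose(np.sort(inv_eigs), np.sort(1.0 / eigvals))
```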