Last modified: February 15, 2022
This article is written in: 🇺🇸
Eigenvalue Decomposition (EVD)
Eigenvalue Decomposition (EVD), also known as Eigendecomposition, is a fundamental operation in linear algebra that breaks down a square matrix into a simpler form defined by its eigenvalues and eigenvectors. This decomposition provides deep insights into the properties and structure of a matrix, enabling simplifications in numerous computations such as matrix powers, transformations, and the solution of systems of linear equations. In many applications — from vibration analysis in engineering to principal component analysis in data science — EVD plays a critical role.
The key idea is that certain square matrices can be represented as a product of a matrix of their eigenvectors and a diagonal matrix of their eigenvalues. This diagonalization separates the scaling factors (eigenvalues) from the directions of transformations (eigenvectors) encoded by the original matrix.
Mathematical Formulation
Consider an $n \times n$ matrix $A$. The Eigenvalue Decomposition of $A$ is given by:

$$A = Q \Lambda Q^{-1}$$

where:
I. Eigenvalues ($\lambda$):

The eigenvalues of $A$ are the roots of the characteristic polynomial:

$$\det(A - \lambda I) = 0$$

Solving this polynomial equation yields $n$ eigenvalues (not necessarily distinct), which we denote $\lambda_1, \lambda_2, \ldots, \lambda_n$.
II. Eigenvectors ($v$):

For each eigenvalue $\lambda_i$, we find the corresponding eigenvector $v_i$ by solving:

$$(A - \lambda_i I) v_i = 0$$

Each eigenvector $v_i$ is a nonzero vector associated with the eigenvalue $\lambda_i$.
III. Eigenvector Matrix ($Q$) and Eigenvalue Matrix ($\Lambda$):

Once we have the set of eigenvalues and eigenvectors:

Construct $\Lambda$ as a diagonal matrix whose diagonal entries are the eigenvalues:

$$\Lambda = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix}$$

Construct $Q$ such that its columns are the eigenvectors $v_i$:

$$Q = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$$

If $A$ is diagonalizable, the $v_i$ can be chosen to be linearly independent; then $Q$ is invertible and $A = Q \Lambda Q^{-1}$.
Derivation
The derivation of EVD follows naturally from the definition of eigenvalues and eigenvectors. For each eigenvalue $\lambda_i$ and its eigenvector $v_i$, we have:

$$A v_i = \lambda_i v_i$$

If we gather all eigenvectors into the matrix $Q$ and consider how $A$ acts on the columns of $Q$, these $n$ relations combine into:

$$A Q = Q \Lambda$$

If $Q$ is invertible, we can write:

$$A = Q \Lambda Q^{-1}$$
Not every matrix is guaranteed to have a full set of linearly independent eigenvectors. If it does, the matrix is said to be diagonalizable, and the above decomposition is possible. If not, the matrix cannot be decomposed purely into this form.
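The relation $AQ = Q\Lambda$ above can be checked numerically. The sketch below uses NumPy's `np.linalg.eig` on a small symmetric matrix chosen purely for illustration:

```python
import numpy as np

# A small symmetric matrix (chosen here only for illustration).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
eigvals, Q = np.linalg.eig(A)
Lam = np.diag(eigvals)

# The defining relation A Q = Q Lambda holds column by column.
print(np.allclose(A @ Q, Q @ Lam))                  # True

# Since the eigenvectors are linearly independent, Q is invertible
# and A can be reassembled as Q Lambda Q^{-1}.
print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))   # True
```

Any diagonalizable matrix would work here; a symmetric one is used because its eigendecomposition is guaranteed to exist with real eigenvalues.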
Algorithm Steps
I. Find Eigenvalues:
- Form the characteristic polynomial $\det(A - \lambda I)$.
- Solve $\det(A - \lambda I) = 0$ for $\lambda$. This may be done analytically for small matrices or numerically for larger matrices.
II. Find Eigenvectors:
- For each eigenvalue $\lambda_i$, solve $(A - \lambda_i I) v_i = 0$.
- Ensure each eigenvector is normalized or scaled consistently.
III. Form $\Lambda$ and $Q$:
- Construct $\Lambda$ as the diagonal matrix with eigenvalues on the diagonal.
- Construct $Q$ with eigenvectors as columns.
IV. Verify Invertibility of $Q$:
- If $Q$ is invertible, then $A = Q \Lambda Q^{-1}$.
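The steps above can be sketched directly for a $2 \times 2$ matrix. This is an illustrative implementation, not a production algorithm (the helper name `evd_2x2` and the test matrix are assumptions for this example):

```python
import numpy as np

def evd_2x2(A):
    """Follow the four steps above for a 2x2 matrix (illustrative sketch)."""
    # Step I: the characteristic polynomial of a 2x2 matrix is
    # lambda^2 - trace(A)*lambda + det(A); find its roots.
    coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
    eigvals = np.roots(coeffs)

    # Step II: for each eigenvalue, solve (A - lambda*I) v = 0 by taking
    # the null-space direction from the SVD of A - lambda*I.
    vecs = []
    for lam in eigvals:
        _, _, Vt = np.linalg.svd(A - lam * np.eye(2))
        vecs.append(Vt[-1])  # right-singular vector for the zero singular value

    # Steps III-IV: assemble Lambda and Q, then verify A = Q Lambda Q^{-1}.
    Lam = np.diag(eigvals)
    Q = np.column_stack(vecs)
    assert np.allclose(A, Q @ Lam @ np.linalg.inv(Q))
    return eigvals, Q

vals, Q = evd_2x2(np.array([[4.0, 1.0],
                            [1.0, 3.0]]))
```

In practice one calls `np.linalg.eig` instead; forming the characteristic polynomial explicitly is numerically fragile for anything beyond small examples.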
Example
Let:

$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$$

I. Find Eigenvalues:

The characteristic polynomial:

$$\det(A - \lambda I) = \det \begin{bmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{bmatrix}$$

Expanding:

$$(2 - \lambda)^2 - 1 = \lambda^2 - 4\lambda + 3$$

Solve $\lambda^2 - 4\lambda + 3 = 0$:

$$(\lambda - 3)(\lambda - 1) = 0$$

The roots are $\lambda_1 = 3$ and $\lambda_2 = 1$.

II. Find Eigenvectors:

For $\lambda_1 = 3$:

Solving $(A - 3I)v = 0$ leads to $v_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$.

For $\lambda_2 = 1$:

Solving $(A - I)v = 0$ leads to $v_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$.

III. Form $\Lambda$ and $Q$:

$$\Lambda = \begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}, \qquad Q = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$

Thus:

$$A = Q \Lambda Q^{-1}$$
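A worked example like this is easy to verify numerically. The sketch below uses the symmetric matrix with rows $(2, 1)$ and $(1, 2)$, whose eigenvalues are $3$ and $1$ with eigenvectors $(1, 1)$ and $(1, -1)$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
Q = np.array([[1.0,  1.0],
              [1.0, -1.0]])   # eigenvectors as columns
Lam = np.diag([3.0, 1.0])     # matching eigenvalues on the diagonal

# Reassembling Q Lambda Q^{-1} recovers A.
print(np.allclose(Q @ Lam @ np.linalg.inv(Q), A))   # True
```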
Advantages
I. Simplification of Computations:
Once in the form $A = Q \Lambda Q^{-1}$, computing powers of $A$ or applying certain transformations becomes much simpler. For example, $A^k = Q \Lambda^k Q^{-1}$, and since $\Lambda$ is diagonal, raising it to a power reduces to raising each diagonal entry to that power.
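The power identity $A^k = Q \Lambda^k Q^{-1}$ can be illustrated as follows (the matrix and the exponent $k = 5$ are arbitrary choices for the example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, Q = np.linalg.eig(A)

# A^5 via the decomposition: only the diagonal entries are powered.
A5 = Q @ np.diag(eigvals**5) @ np.linalg.inv(Q)

# Agrees with repeated matrix multiplication.
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))   # True
```

For large $k$ this is far cheaper than repeated multiplication, since the eigendecomposition is computed once and only scalars are exponentiated.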
II. Insights into Matrix Structure:
The eigendecomposition reveals the intrinsic "modes" of the linear transformation represented by $A$. Eigenvalues show how the transformation scales each eigen-direction, and eigenvectors show the directions of these fundamental modes.
III. Numerical Stability in Some Computations:
Working with $\Lambda$ instead of $A$ can improve numerical stability and make some algorithms more efficient, particularly in areas like principal component analysis, spectral clustering, and other advanced data analysis tasks.
Limitations
I. Not All Matrices Are Diagonalizable:
A matrix without a full set of linearly independent eigenvectors cannot be decomposed in this form. For such defective matrices, more general decompositions like the Jordan normal form are required.
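The classic defective example is a Jordan block, sketched here with NumPy: the eigenvalue is repeated, but there is only one independent eigenvector, so $Q$ is singular and $Q^{-1}$ does not exist:

```python
import numpy as np

# A 2x2 Jordan block: eigenvalue 1 with algebraic multiplicity 2
# but only one linearly independent eigenvector.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])

eigvals, Q = np.linalg.eig(J)
print(eigvals)   # [1. 1.] -- a repeated eigenvalue

# The two returned "eigenvectors" are (numerically) parallel,
# so Q is singular and cannot be inverted.
print(abs(np.linalg.det(Q)) < 1e-8)   # True
```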
II. Computational Cost for Large Matrices:
Finding eigenvalues and eigenvectors for large matrices can be computationally expensive. Efficient numerical algorithms and approximations exist, but they may still be costly for very large systems.
III. Complex Eigenvalues:
For real matrices, eigenvalues can be complex. While this is not a fundamental limitation, it means we must consider complex arithmetic when performing the decomposition, which may not be desired in some real-world applications.
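A plane rotation is a standard example of a real matrix with complex eigenvalues; the sketch below uses a 90-degree rotation, whose eigenvalues are the conjugate pair $\pm i$:

```python
import numpy as np

# A real 90-degree rotation matrix has no real eigenvalues:
# no real direction is left unchanged by the rotation.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigvals, Q = np.linalg.eig(R)
print(eigvals)   # a complex-conjugate pair, +i and -i

# The decomposition still holds, but only over the complex numbers.
print(np.allclose(R, Q @ np.diag(eigvals) @ np.linalg.inv(Q)))   # True
```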