Enter the values of a 4×4 matrix below (you can also input a smaller n×n sub-matrix by filling in just the top-left n rows and n columns), then select a method to compute its eigenvalues and eigenvectors.
Eigenvalues and eigenvectors are fundamental concepts in linear algebra that reveal the intrinsic properties of linear transformations represented by matrices. They describe special directions and scaling factors that remain invariant under a transformation.
When a matrix A acts on a vector v, it typically changes both the direction and magnitude of the vector. However, eigenvectors are special vectors that the transformation merely scales: they are stretched, shrunk, or (for a negative eigenvalue) flipped, but the line they span is left unchanged. The corresponding eigenvalue indicates how much the eigenvector is scaled.
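To make this concrete, here is a minimal NumPy sketch (the matrix and the two test vectors are illustrative choices; the matrix is the same one used in the worked example at the end of this page):

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])      # same matrix as in the worked example below

v_eig = np.array([1.0, 1.0])    # an eigenvector of A (eigenvalue 5)
v_any = np.array([1.0, 0.0])    # a generic vector that is not an eigenvector

print(A @ v_eig)   # [5. 5.] -- same direction as v_eig, scaled by 5
print(A @ v_any)   # [4. 2.] -- the direction has changed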
For a square matrix A (n×n), an eigenvalue λ and its corresponding eigenvector v satisfy:
A v = λ v
Where:
A is an n×n square matrix,
v is a non-zero vector (the eigenvector), and
λ is a scalar (the eigenvalue).
This equation can be rewritten as:
(A - λI) v = 0
Where I is the identity matrix. For non-trivial solutions (v ≠ 0) to exist, the matrix A - λI must be singular, which means its determinant must be zero:
det(A - λI) = 0
This is called the characteristic equation. Expanding this determinant yields a polynomial in λ of degree n, called the characteristic polynomial.
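As a quick sketch of this route in NumPy (np.poly returns the coefficients of the characteristic polynomial of a square matrix, and np.roots finds its roots; the matrix is illustrative):

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

coeffs = np.poly(A)             # [ 1. -7. 10.]  ->  λ² - 7λ + 10
print(np.roots(coeffs))         # [5. 2.] -- roots of the characteristic polynomial
print(np.linalg.eigvals(A))     # the same eigenvalues, computed directly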
Geometrically, eigenvalues and eigenvectors reveal how a matrix transformation affects space:
|λ| > 1: vectors along the eigenvector are stretched
|λ| < 1: vectors along the eigenvector are compressed
λ < 0: vectors along the eigenvector are flipped (reversed in direction)
Complex λ: the transformation involves a rotation in some plane
For a 2×2 matrix representing a transformation in the plane, if we have two perpendicular eigenvectors, they form the principal axes of the transformation. The transformation stretches/compresses along these axes by the corresponding eigenvalues.
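A small sketch of this, using an assumed symmetric 2×2 matrix (np.linalg.eigh is NumPy's routine for symmetric matrices and returns orthonormal eigenvectors):

import numpy as np

S = np.array([[3.0, 1.0],
              [1.0, 3.0]])      # an illustrative symmetric 2x2 matrix

vals, vecs = np.linalg.eigh(S)  # eigh is specialized for symmetric matrices
print(vals)                     # [2. 4.]
print(vecs)                     # columns are the orthonormal principal axes
print(vecs[:, 0] @ vecs[:, 1])  # ≈ 0 -- the axes are perpendicular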
The analytical method solves the characteristic equation directly (a closed-form 2×2 sketch follows the notes below):
Advantages: Exact results, works well for small matrices (2×2, 3×3)
Limitations: Computationally expensive for large matrices; polynomial root-finding can be numerically unstable, and no general closed form exists for degree ≥ 5; non-symmetric matrices may yield complex eigenvalues
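For the 2×2 case, the characteristic equation is a quadratic, so the eigenvalues follow from the quadratic formula. A minimal sketch, assuming real eigenvalues (the helper name eigvals_2x2 is just an illustration):

import math

def eigvals_2x2(a, b, c, d):
    # Eigenvalues of [[a, b], [c, d]] via the quadratic formula applied to
    # λ² - (a+d)λ + (ad - bc) = 0. Assumes a non-negative discriminant
    # (i.e. real eigenvalues).
    trace = a + d
    det = a * d - b * c
    disc = math.sqrt(trace * trace - 4 * det)
    return (trace + disc) / 2, (trace - disc) / 2

print(eigvals_2x2(4, 1, 2, 3))   # (5.0, 2.0)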
Power iteration is an iterative algorithm that finds the dominant eigenvalue (the one largest in absolute value) and its corresponding eigenvector; a minimal implementation sketch follows the notes below:
Advantages: Simple to implement, memory efficient, works for very large sparse matrices
Limitations: Only finds the dominant eigenpair; convergence can be slow if eigenvalues are close in magnitude; may fail if the dominant eigenvalue is not unique
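A minimal power iteration sketch (the function name, iteration cap, and tolerance are illustrative choices; it assumes A has a unique dominant eigenvalue):

import numpy as np

def power_iteration(A, num_iters=1000, tol=1e-10):
    # Repeatedly multiply by A and renormalize; the iterate converges to
    # the dominant eigenvector, and the Rayleigh quotient estimates the
    # dominant eigenvalue.
    v = np.random.default_rng(0).random(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(num_iters):
        w = A @ v
        v_new = w / np.linalg.norm(w)
        lam_new = v_new @ A @ v_new    # Rayleigh quotient
        if abs(lam_new - lam) < tol:
            return lam_new, v_new
        v, lam = v_new, lam_new
    return lam, v

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, v = power_iteration(A)
print(lam)   # ≈ 5.0
print(v)     # ≈ [0.707, 0.707], a unit multiple of [1, 1]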
Other Methods: QR algorithm, Jacobi method, Lanczos algorithm, Arnoldi iteration
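Of these, the QR algorithm is the standard general-purpose choice. Below is a deliberately simplified, unshifted sketch of the idea (production implementations add shifts and a Hessenberg reduction first; this version assumes real eigenvalues of distinct magnitude):

import numpy as np

def qr_iteration(A, num_iters=200):
    # Each step factors Ak = QR and forms R @ Q, which equals Qᵀ Ak Q --
    # a similarity transform, so the eigenvalues are preserved. The
    # iterates approach an upper-triangular matrix whose diagonal holds
    # the eigenvalues.
    Ak = np.array(A, dtype=float)
    for _ in range(num_iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return np.diag(Ak)

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
print(qr_iteration(A))   # ≈ [5. 2.]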
Eigenvalues and eigenvectors have several key properties:
The sum of the eigenvalues equals the trace of A (the sum of its diagonal entries).
The product of the eigenvalues equals det(A).
The eigenvalues of a triangular matrix are its diagonal entries.
A and Aᵀ have the same eigenvalues.
If λ is an eigenvalue of A, then λᵏ is an eigenvalue of Aᵏ, and 1/λ is an eigenvalue of A⁻¹ (when A is invertible).
Eigenvectors corresponding to distinct eigenvalues are linearly independent.
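The first two properties are easy to check numerically; a quick sketch, reusing the example matrix from this page:

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
vals = np.linalg.eigvals(A)

print(vals.sum(), np.trace(A))         # 7.0 7.0 -- sum of eigenvalues = trace
print(vals.prod(), np.linalg.det(A))   # ≈ 10.0 10.0 -- product = determinant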
Eigenvalues and eigenvectors appear throughout science and engineering:
Principal component analysis (PCA) in statistics and machine learning
Vibration and modal analysis in mechanical and structural engineering
Quantum mechanics, where measurable quantities arise as eigenvalues of operators
Stability analysis of dynamical systems
Google's PageRank, which is the dominant eigenvector of the web link matrix
A symmetric matrix (A = Aᵀ) has particularly nice properties:
All of its eigenvalues are real.
Eigenvectors corresponding to distinct eigenvalues are orthogonal.
It is always diagonalizable by an orthogonal matrix (the spectral theorem: A = QΛQᵀ).
A symmetric matrix where xᵀAx > 0 for all non-zero x is called positive definite:
All of its eigenvalues are strictly positive (and, conversely, a symmetric matrix whose eigenvalues are all positive is positive definite).
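This gives a convenient numerical test for positive definiteness; a minimal sketch with an assumed symmetric matrix:

import numpy as np

S = np.array([[3.0, 1.0],
              [1.0, 3.0]])              # illustrative symmetric matrix

print(np.all(np.linalg.eigvalsh(S) > 0))   # True -- S is positive definite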
A matrix is diagonalizable if it can be written as A = PDP⁻¹, where D is diagonal:
The diagonal entries of D are the eigenvalues of A, and the columns of P are the corresponding eigenvectors.
An n×n matrix is diagonalizable exactly when it has n linearly independent eigenvectors.
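A short sketch of the factorization in NumPy (np.linalg.eig returns the eigenvalues and a matrix whose columns are the eigenvectors):

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

vals, P = np.linalg.eig(A)        # columns of P are eigenvectors of A
D = np.diag(vals)
print(P @ D @ np.linalg.inv(P))   # reconstructs A (up to rounding)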
Consider the matrix:
A = [ 4  1 ]
    [ 2  3 ]
Step 1: Find the characteristic polynomial:
det(A - λI) = det [ 4-λ   1  ] = (4-λ)(3-λ) - (1)(2) = λ² - 7λ + 10 = 0
                  [  2   3-λ ]
Step 2: Solve for eigenvalues: λ₁ = 5, λ₂ = 2
Step 3: Find eigenvectors by solving (A - λᵢI)v = 0
For λ₁ = 5: v₁ = [1, 1]ᵀ (or any scalar multiple)
For λ₂ = 2: v₂ = [1, -2]ᵀ (or any scalar multiple)
Interpretation: The transformation stretches vectors along v₁ by factor 5 and along v₂ by factor 2.
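You can confirm the whole example numerically; a minimal check (note that np.linalg.eig normalizes eigenvectors to unit length and does not guarantee an ordering):

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
vals, vecs = np.linalg.eig(A)
print(vals)   # 5 and 2 (order may vary)
print(vecs)   # columns are unit-length multiples of [1, 1] and [1, -2]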