Matrix Eigenvalues and Eigenvectors

Enter the values of a 4×4 matrix below (you can also input a smaller n×n sub-matrix by filling in just the top-left n rows and n columns), then select a method to compute its eigenvalues and eigenvectors.

4×4 Matrix (or sub-matrix)
Need Help?
  • By default there are 16 entries (4×4). If you want a smaller matrix, just fill in the top-left n×n block (e.g. 2×2 or 3×3).
  • Click "Calculate (Exact)" to see the real eigenvalues and corresponding eigenvectors.
  • Or switch to the Power Iteration tab for an approximate dominant eigenpair.
  • Click "Clear" to reset all inputs.
  • Complex eigenvalues are not displayed here.

📚 Mathematical Background

🔢 Eigenvalues and Eigenvectors - Introduction

Eigenvalues and eigenvectors are fundamental concepts in linear algebra that reveal the intrinsic properties of linear transformations represented by matrices. They describe special directions and scaling factors that remain invariant under a transformation.

When a matrix A acts on a vector v, it typically changes both the direction and magnitude of the vector. However, eigenvectors are special vectors that only get scaled (stretched or shrunk) by the transformation—their direction remains unchanged. The corresponding eigenvalue indicates how much the eigenvector is scaled.

📐 Mathematical Definition

For a square matrix A (n×n), an eigenvalue λ and its corresponding eigenvector v satisfy:

A v = λ v

Where:

  • A is an n×n square matrix
  • v is a non-zero vector (the eigenvector)
  • λ (lambda) is a scalar (the eigenvalue)

This equation can be rewritten as:

(A - λI) v = 0

Where I is the identity matrix. For non-trivial solutions (v ≠ 0), the determinant must be zero:

det(A - λI) = 0

This is called the characteristic equation. Expanding this determinant yields a polynomial in λ of degree n, called the characteristic polynomial.
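
As a quick numerical check of the definition, the sketch below (assuming Python with NumPy is available; the matrix is an arbitrary example, not one from this page) computes the eigenpairs of a small matrix and verifies that A v = λ v holds for each pair:

```python
import numpy as np

# Arbitrary example matrix (upper triangular, so its eigenvalues are 3 and 2)
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors
eigenvalues, eigenvectors = np.linalg.eig(A)

for i, lam in enumerate(eigenvalues):
    v = eigenvectors[:, i]
    # Residual of the defining equation A v = lambda v (should be ~0)
    print(f"lambda = {lam:.4f}, residual = {A @ v - lam * v}")
```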

🎯 Geometric Interpretation

Geometrically, eigenvalues and eigenvectors reveal how a matrix transformation affects space:

  • Eigenvectors point in directions that remain unchanged by the transformation (only scaled)
  • Eigenvalues tell us the scaling factor along each eigenvector direction:
    • λ > 1: Stretching (expansion) along that direction
    • 0 < λ < 1: Compression along that direction
    • λ = 1: The vector is left exactly as it is (neither stretched nor compressed)
    • λ < 0: Scaling by |λ| combined with reflection (direction reversed)
    • λ = 0: Collapse to zero (direction in null space)

For a 2×2 matrix representing a transformation in the plane, if we have two perpendicular eigenvectors, they form the principal axes of the transformation. The transformation stretches/compresses along these axes by the corresponding eigenvalues.
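
To see this principal-axes picture numerically, the sketch below (a NumPy illustration with an arbitrary symmetric matrix, not part of the calculator) expresses a vector in eigenvector coordinates and shows that applying the matrix simply rescales each coordinate by its eigenvalue:

```python
import numpy as np

# Symmetric matrix, so its eigenvectors are perpendicular (principal axes)
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
eigenvalues, Q = np.linalg.eigh(A)   # columns of Q are orthonormal eigenvectors

x = np.array([1.0, 2.0])
coords = Q.T @ x                     # coordinates of x along the principal axes

# Applying A is the same as scaling each principal-axis coordinate by its eigenvalue
print(np.allclose(A @ x, Q @ (eigenvalues * coords)))   # True
```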

🔬 Methods for Computing Eigenvalues

Analytical (Exact) Method

The analytical method solves the characteristic equation directly:

  1. Form the matrix (A - λI)
  2. Compute det(A - λI) = 0
  3. Solve the resulting polynomial equation for λ
  4. For each eigenvalue λᵢ, solve (A - λᵢI)v = 0 to find the corresponding eigenvector

Advantages: Exact results, works well for small matrices (2×2, 3×3)

Limitations: Computationally expensive for large matrices; polynomial root-finding can be numerically unstable; may produce complex eigenvalues
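
The four steps can also be carried out symbolically; here is a minimal sketch using SymPy (assuming SymPy is installed; the 2×2 matrix is the same one used in the worked example at the end of this page):

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[4, 1],
               [2, 3]])

# Steps 1-2: form A - lambda*I and take its determinant (the characteristic polynomial)
char_poly = sp.expand((A - lam * sp.eye(2)).det())
print(char_poly)                       # lambda**2 - 7*lambda + 10

# Step 3: solve the characteristic equation for the eigenvalues
eigenvalues = sp.solve(sp.Eq(char_poly, 0), lam)
print(eigenvalues)                     # [2, 5]

# Step 4: for each eigenvalue, solve (A - lambda_i I) v = 0 via the null space
for ev in eigenvalues:
    print(ev, (A - ev * sp.eye(2)).nullspace()[0].T)
```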

Power Iteration (Approximation Method)

Power iteration is an iterative algorithm that finds the dominant eigenvalue (largest in absolute value) and its corresponding eigenvector:

  1. Start with a random vector v₀
  2. Repeatedly multiply by matrix A: vₖ₊₁ = A vₖ / ||A vₖ||
  3. The sequence converges to the dominant eigenvector
  4. The eigenvalue is approximated by the Rayleigh quotient: λ = (vᵀ A v) / (vᵀ v)

Advantages: Simple to implement, memory efficient, works for very large sparse matrices

Limitations: Only finds the dominant eigenpair; convergence can be slow if eigenvalues are close in magnitude; may fail if the dominant eigenvalue is not unique
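
A minimal power-iteration sketch in Python with NumPy (the starting vector, tolerance, and iteration cap are illustrative choices, not the calculator's exact settings):

```python
import numpy as np

def power_iteration(A, num_iterations=1000, tol=1e-10):
    """Approximate the dominant eigenpair of A by repeated multiplication."""
    v = np.random.rand(A.shape[0])        # step 1: random starting vector
    v /= np.linalg.norm(v)

    for _ in range(num_iterations):
        w = A @ v                         # step 2: multiply by A ...
        norm = np.linalg.norm(w)
        if norm == 0:                     # v landed in the null space; a new start would be needed
            break
        w /= norm                         # ... and normalize
        if np.linalg.norm(w - v) < tol:   # step 3: stop once the direction settles
            v = w
            break
        v = w

    # Step 4: the Rayleigh quotient gives the eigenvalue estimate
    return (v @ A @ v) / (v @ v), v

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, v = power_iteration(A)
print(lam)                                # close to 5, the dominant eigenvalue of this matrix
```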

Other Methods: QR algorithm, Jacobi method, Lanczos algorithm, Arnoldi iteration

⭐ Important Properties

Eigenvalues and eigenvectors have several key properties:

  • Number of Eigenvalues: An n×n matrix has exactly n eigenvalues (counting multiplicities), though some may be complex
  • Trace: The sum of all eigenvalues equals the trace of the matrix (sum of diagonal elements): Σλᵢ = tr(A)
  • Determinant: The product of all eigenvalues equals the determinant: Πλᵢ = det(A)
  • Symmetric Matrices: All eigenvalues are real, and an orthogonal set of eigenvectors can always be chosen
  • Triangular Matrices: Eigenvalues are the diagonal elements
  • Singular Matrices: Have at least one zero eigenvalue
  • Orthogonal Matrices: All eigenvalues have magnitude 1
  • Similar Matrices: Matrices related by A = PBP⁻¹ have identical eigenvalues
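
The trace and determinant relations are easy to spot-check numerically; a small sketch with NumPy and an arbitrary 3×3 matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues = np.linalg.eigvals(A)

# Sum of eigenvalues equals the trace; product equals the determinant
print(np.isclose(eigenvalues.sum(), np.trace(A)))          # True
print(np.isclose(np.prod(eigenvalues), np.linalg.det(A)))  # True
```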

🌟 Applications

Eigenvalues and eigenvectors appear throughout science and engineering:

  • Principal Component Analysis (PCA): Dimensionality reduction in data science by finding principal directions (eigenvectors) of maximum variance (eigenvalues)
  • Vibration Analysis: Natural frequencies (eigenvalues) and mode shapes (eigenvectors) of mechanical systems
  • Quantum Mechanics: Energy levels (eigenvalues) and wavefunctions (eigenvectors) of the Schrödinger equation
  • Google PageRank: The importance of web pages is the dominant eigenvector of the link matrix
  • Stability Analysis: System stability determined by eigenvalues of the Jacobian matrix; stable if all eigenvalues have negative real parts
  • Image Compression: Singular Value Decomposition (SVD) uses eigenvalues for optimal low-rank approximations
  • Graph Theory: Spectral graph theory studies graph properties through eigenvalues of adjacency or Laplacian matrices
  • Markov Chains: Steady-state distributions are eigenvectors corresponding to eigenvalue 1
  • Control Theory: Controllability and observability of linear systems
  • Facial Recognition: Eigenfaces method for pattern recognition

📊 Special Types of Matrices

Symmetric Matrices

A symmetric matrix (A = Aᵀ) has particularly nice properties:

  • All eigenvalues are real numbers
  • Eigenvectors corresponding to distinct eigenvalues are orthogonal
  • Can be diagonalized by an orthogonal matrix: A = QΛQᵀ
  • Common in physics (covariance matrices, moment of inertia tensors)
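
For symmetric matrices, NumPy provides np.linalg.eigh, which returns real eigenvalues and orthonormal eigenvectors; a brief sketch of the orthogonal diagonalization A = QΛQᵀ:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric: A equals its transpose

eigenvalues, Q = np.linalg.eigh(A)  # real eigenvalues, orthonormal eigenvector columns

# Q is orthogonal (Q^T Q = I) and A = Q diag(eigenvalues) Q^T
print(np.allclose(Q.T @ Q, np.eye(2)))                     # True
print(np.allclose(Q @ np.diag(eigenvalues) @ Q.T, A))      # True
```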

Positive Definite Matrices

A symmetric matrix where xᵀAx > 0 for all non-zero x:

  • All eigenvalues are strictly positive
  • Important in optimization (Hessian matrices)
  • Covariance matrices in statistics are positive semi-definite
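
In practice, positive definiteness of a symmetric matrix is often tested by checking that all eigenvalues are strictly positive, or equivalently that a Cholesky factorization succeeds; a sketch of both checks (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])        # symmetric example matrix

# Check 1: all eigenvalues of the symmetric matrix are strictly positive
print(np.all(np.linalg.eigvalsh(A) > 0))      # True for this example

# Check 2: Cholesky factorization succeeds only for (numerically) positive definite matrices
try:
    np.linalg.cholesky(A)
    print("positive definite")
except np.linalg.LinAlgError:
    print("not positive definite")
```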

Diagonalizable Matrices

A matrix is diagonalizable if it can be written as A = PDP⁻¹, where D is diagonal:

  • D contains the eigenvalues on its diagonal
  • Columns of P are the eigenvectors
  • Makes matrix powers easy to compute: Aᵏ = PDᵏP⁻¹
  • A matrix is diagonalizable if and only if it has n linearly independent eigenvectors
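
A sketch of using the eigendecomposition to compute a matrix power in NumPy (the example matrix is diagonalizable because its eigenvalues are distinct):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)     # columns of P are eigenvectors
k = 5

# A^k = P D^k P^{-1}; raising D to the k-th power only requires powering its diagonal entries
A_k = P @ np.diag(eigenvalues ** k) @ np.linalg.inv(P)

print(np.allclose(A_k, np.linalg.matrix_power(A, k)))   # True
```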

🔍 Practical Example: 2×2 Matrix

Consider the matrix:

A = [ 4  1 ]
    [ 2  3 ]

Step 1: Find the characteristic polynomial:

det(A - λI) = det([ 4-λ   1  ]
                  [  2   3-λ ]) = (4-λ)(3-λ) - 2 = λ² - 7λ + 10 = 0

Step 2: Solve for eigenvalues: λ₁ = 5, λ₂ = 2

Step 3: Find eigenvectors by solving (A - λᵢI)v = 0

For λ₁ = 5: v₁ = [1, 1]ᵀ (or any scalar multiple)
For λ₂ = 2: v₂ = [1, -2]ᵀ (or any scalar multiple)

Interpretation: The transformation stretches vectors along v₁ by factor 5 and along v₂ by factor 2.
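
The worked example can also be confirmed numerically; a short NumPy check (eig returns unit-length eigenvectors, so they are scalar multiples of [1, 1]ᵀ and [1, -2]ᵀ):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                     # approximately [5. 2.]

# Rescale each eigenvector so its first component is 1, matching the hand computation
for i in range(2):
    v = eigenvectors[:, i]
    print(v / v[0])                    # [1. 1.] and [1. -2.]
```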