Last modified: January 24, 2024

Power Method

The power method is a fundamental iterative algorithm for estimating the eigenvalue of largest magnitude and its associated eigenvector for a given matrix. This technique is particularly appealing when dealing with large and sparse matrices, where direct eigenvalue computations (e.g., via the characteristic polynomial) are computationally expensive or numerically unstable. The power method capitalizes on the property that repeated multiplication by a matrix A will cause any initial vector to align with the direction of the eigenvector associated with the dominant eigenvalue, assuming this eigenvalue is well-separated from the others in magnitude.

Mathematical Formulation

Given an nΓ—n matrix A and an initial guess x(0), the power method iterates as follows:

I. Compute:

y(k) = A x(k)

II. Normalize:

x(k+1) = y(k) / β€–y(k)β€–

These steps are repeated until convergence, which occurs when the vector x(k) no longer changes significantly between iterations or, equivalently, when successive eigenvalue approximations stabilize.
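The two update steps can be transcribed directly into a minimal pure-Python sketch (the helper name and the small test matrix here are illustrative assumptions, not part of the method itself):

```python
import math

def power_step(A, x):
    """One power-method iteration on a dense matrix stored as a list of rows."""
    y = [sum(a * xi for a, xi in zip(row, x)) for row in A]  # y(k) = A x(k)
    norm = math.sqrt(sum(v * v for v in y))                  # ||y(k)||
    return [v / norm for v in y]                             # x(k+1)

x = power_step([[2.0, 1.0], [1.0, 3.0]], [1.0, 1.0])
# x now points in the direction of A x(0) and has unit length.
```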

Inverse Power Method for the Smallest Eigenvalue

If we are interested in finding the smallest eigenvalue of A, we can use the inverse power method. The eigenvalues of Aβˆ’1 are the reciprocals of the eigenvalues of A. Thus, applying the power method to Aβˆ’1 instead of A will yield the eigenvector associated with the smallest eigenvalue of A, and the inverse of the dominant eigenvalue of Aβˆ’1 will give us the smallest eigenvalue of A.
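A sketch of this idea on a 2Γ—2 example follows. The inverse is formed explicitly via the cofactor formula only because the matrix is tiny; in practice one factors A once and solves A y = x at each step rather than computing A⁻¹. The matrix below is an illustrative assumption:

```python
import math

def inverse_2x2(A):
    # Explicit 2x2 inverse via the cofactor formula (fine for a demo only).
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det],
            [-c / det, a / det]]

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

A = [[2.0, 1.0],
     [1.0, 3.0]]
A_inv = inverse_2x2(A)

# Power iteration on A^-1 converges to the eigenvector associated
# with the smallest-magnitude eigenvalue of A.
x = [1.0, 1.0]
for _ in range(100):
    y = matvec(A_inv, x)
    norm = math.sqrt(sum(v * v for v in y))
    x = [v / norm for v in y]

# Rayleigh quotient with A (x has unit length) recovers the smallest eigenvalue.
lam_min = sum(axi * xi for axi, xi in zip(matvec(A, x), x))
```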

Derivation

Suppose A has distinct eigenvalues Ξ»1,Ξ»2,…,Ξ»n arranged so that:

|Ξ»1|>|Ξ»2|β‰₯β‹―β‰₯|Ξ»n|.

Let v1,v2,…,vn be the corresponding eigenvectors forming a basis. Any initial vector x(0) can be written as:

x(0) = c1 v1 + c2 v2 + β‹― + cn vn, with c1 β‰  0.

Applying A repeatedly:

A x(0) = c1 Ξ»1 v1 + c2 Ξ»2 v2 + β‹― + cn Ξ»n vn.

After k iterations:

A^k x(0) = c1 Ξ»1^k v1 + c2 Ξ»2^k v2 + β‹― + cn Ξ»n^k vn.

As k β†’ ∞, because |Ξ»1| > |Ξ»j| for j > 1, the terms involving Ξ»j^k vanish in comparison to Ξ»1^k: factoring out Ξ»1^k gives A^k x(0) = Ξ»1^k (c1 v1 + Ξ£_{j>1} cj (Ξ»j/Ξ»1)^k vj), and each ratio (Ξ»j/Ξ»1)^k β†’ 0. Therefore:

A^k x(0) / β€–A^k x(0)β€– β†’ v1 (up to a sign flip per iteration when Ξ»1 < 0)

and the Rayleigh quotient (x(k)^T A x(k)) / (x(k)^T x(k)) tends to Ξ»1.

Thus, the power method converges to the dominant eigenvector v1 and eigenvalue Ξ»1.

Algorithm Steps

I. Initialize: Choose an initial vector x(0) (random or based on domain knowledge).

II. Iterative Step: For k = 0, 1, 2, …, compute y(k) = A x(k) and normalize: x(k+1) = y(k) / β€–y(k)β€–.

III. Convergence Check: Stop when β€–x(k+1)βˆ’x(k)β€– is sufficiently small, or when changes in the estimated eigenvalue become negligible.

IV. Eigenvalue Approximation: Once converged, estimate the dominant eigenvalue as:

Ξ»max β‰ˆ (x(k)^T A x(k)) / (x(k)^T x(k)).
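Putting steps I through IV together, a sketch with a convergence check on successive eigenvalue estimates might look as follows (the function name, tolerance, and iteration cap are illustrative assumptions):

```python
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def power_method(A, x0, tol=1e-10, max_iter=1000):
    """Return (dominant eigenvalue estimate, unit eigenvector estimate)."""
    x = x0[:]
    lam_old = float("inf")
    for _ in range(max_iter):
        y = matvec(A, x)                         # y(k) = A x(k)
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]                # x(k+1) = y(k) / ||y(k)||
        # Rayleigh quotient; the denominator is 1 since x has unit length.
        lam = sum(axi * xi for axi, xi in zip(matvec(A, x), x))
        if abs(lam - lam_old) < tol:             # convergence check
            break
        lam_old = lam
    return lam, x

lam, v = power_method([[2.0, 1.0], [1.0, 3.0]], [1.0, 1.0])
```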

Example

Consider:

A = [2 1; 1 3],  x(0) = [1; 1].

First iteration:

y(0) = A x(0) = [2 1; 1 3][1; 1] = [3; 4].

Normalize:

x(1) = (1/5)[3; 4] = [0.6; 0.8].

Second iteration:

y(1) = A x(1) = [2 1; 1 3][0.6; 0.8] = [2.0; 3.0].

Normalize:

x(2) = (1/√(2Β² + 3Β²))[2; 3] = (1/√13)[2; 3] β‰ˆ [0.5547; 0.8321].

Repeating these steps, x(k) converges to the eigenvector associated with the largest eigenvalue, which in this case is Ξ»1 = (5 + √5)/2 β‰ˆ 3.6180. The corresponding unit eigenvector stabilizes around:

[0.5257; 0.8507]
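Continuing the iterations numerically confirms where the sequence settles; this is a pure-Python check assuming the same matrix and starting vector as above:

```python
import math

A = [[2.0, 1.0], [1.0, 3.0]]
x = [1.0, 1.0]
for _ in range(60):
    y = [sum(a * xi for a, xi in zip(row, x)) for row in A]  # y(k) = A x(k)
    norm = math.sqrt(sum(v * v for v in y))
    x = [v / norm for v in y]                                # normalize

# Rayleigh quotient (x has unit length) estimates the dominant eigenvalue.
Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
lam = sum(axi * xi for axi, xi in zip(Ax, x))
# lam is close to (5 + sqrt(5))/2 ~ 3.6180, and x to ~ [0.5257, 0.8507].
```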

Advantages

I. Simplicity: Each iteration needs only a matrix-vector product and a normalization, making the method easy to implement and reason about.

II. Suited to large sparse matrices: A is accessed only through products A x, so no factorization or dense storage is required.

III. Low memory footprint: Only a couple of vectors must be kept between iterations.

Limitations

I. Finds only the dominant pair: The method returns just the eigenvalue of largest magnitude and its eigenvector; other eigenpairs require variants such as the inverse power method.

II. Requires a separated dominant eigenvalue: Convergence demands |Ξ»1| > |Ξ»2|, and its speed is governed by the ratio |Ξ»2/Ξ»1|, so clustered eigenvalues make it slow.

III. Sensitive to the starting vector: The initial guess must have a nonzero component c1 along v1; an unlucky choice delays or prevents convergence.
