Last modified: April 16, 2023

This article is written in: 🇺🇸

Relaxation Method

The relaxation method, commonly referred to as the fixed-point iteration method, is an iterative approach used to find solutions (roots) of nonlinear equations of the form f(x) = 0. Instead of directly solving for the root, the method involves rewriting the original equation in the form:

x = g(x),

where g(x) is a suitable function defined such that a fixed point of g(x) corresponds to a root of f(x). That is, if α is a root of f(x), then g(α) = α.

This approach transforms a root-finding problem into a fixed-point problem. If the iteration defined by:

x_{n+1} = g(x_n)

converges, it will do so to the fixed point α, which is also the root of the original equation. Relaxation methods can sometimes converge faster than bracket-based methods (like bisection), but they do not come with guaranteed convergence from any starting point. Convergence depends on the properties of the chosen g(x) and the initial guess x_0.
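
As a minimal illustration (the equation x = cos x is chosen here purely as an example and does not appear elsewhere in this article), the iteration takes only a few lines of Python:

```python
import math

# Solve cos(x) - x = 0 by iterating x_{n+1} = g(x_n) with g(x) = cos(x).
x = 1.0  # initial guess x_0
for n in range(100):
    x_next = math.cos(x)
    if abs(x_next - x) < 1e-10:  # stop once successive iterates agree
        break
    x = x_next

print(x_next)  # about 0.7390851, the fixed point of cos(x)
```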

Conceptual Illustration:

Imagine the line y = x and the curve y = g(x). A fixed point α is a point where the two curves intersect:

(Figure: the curve y = g(x) and the line y = x intersecting at the fixed point α.)

The iteration takes a guess x_n and maps it to x_{n+1} = g(x_n). If these iterations home in on the intersection, the sequence converges to the root α.

Mathematical Formulation

Starting Point:

Given a nonlinear equation:

f(x) = 0, we want to find α such that f(α) = 0.

Rewriting the Equation:

We rearrange the equation into a fixed-point form:

x = g(x)

There can be infinitely many ways to rewrite f(x) = 0 as x = g(x). A suitable g(x) must be chosen to ensure convergence.
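
For instance (anticipating the worked example below), the equation x^2 - 3x + 2 = 0 can be rewritten as x = (x^2 + 2)/3, as x = sqrt(3x - 2), or as x = 3 - 2/x. Each choice defines a different g(x), and each behaves differently near the roots x = 1 and x = 2.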

Once we have:

x_{n+1} = g(x_n)

if the iteration converges, it must converge to a point α satisfying α = g(α).

Convergence Conditions:

A key sufficient condition for convergence is that:

|g'(x)| < 1 in the neighborhood of α.

If the derivative of g(x) is less than 1 in absolute value throughout a region around the fixed point, an iteration started in that region will typically converge.
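
A quick numerical sanity check of this condition might look as follows (a sketch; the finite-difference step h and the sample points are chosen only for illustration, and g here is the rearrangement used in the worked example later in this article):

```python
def derivative_estimate(g, x, h=1e-6):
    """Central-difference estimate of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

# Rearrangement of x^2 - 3x + 2 = 0 as x = (x^2 + 2)/3
g = lambda x: (x**2 + 2) / 3

print(abs(derivative_estimate(g, 1.0)))  # about 0.667 < 1: iteration is attracted to the root x = 1
print(abs(derivative_estimate(g, 2.0)))  # about 1.333 > 1: iteration is repelled from the root x = 2
```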

Derivation

I. From Root to Fixed Point Problem:

Suppose we have f(x)=0. We isolate x on one side:

x=g(x)

If α solves f(α)=0, then g(α)=α.

II. Fixed-Point Iteration:

Define a sequence x_n by choosing an initial guess x_0 and applying:

x_{n+1} = g(x_n).

III. Convergence Insight:

Consider a point α such that g(α)=α. If we expand g near α:

g(α + h) ≈ g(α) + g'(α)h = α + g'(α)h

If |g'(α)| < 1, then iterating this process will reduce the error |h| at each step, leading to convergence of x_n to α.

In essence, the choice of g(x) greatly influences the convergence behavior. A poorly chosen g(x) may lead to divergence or very slow convergence.
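
To make this precise, write the error at step n as e_n = x_n - α. Then e_{n+1} = x_{n+1} - α = g(α + e_n) - α ≈ g'(α) e_n, so after n steps |e_n| ≈ |g'(α)|^n |e_0|. The error therefore shrinks geometrically with ratio |g'(α)| when |g'(α)| < 1 (linear convergence) and grows when |g'(α)| > 1.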

Algorithm Steps

Input: the iteration function g(x), an initial guess x_0, a tolerance ε, and a maximum number of iterations N_max.

Initialization:

Set iteration counter n=0.

Iteration:

I. Compute the next approximation:

x_{n+1} = g(x_n)

II. Check convergence: if |x_{n+1} - x_n| < ε (or |f(x_{n+1})| is sufficiently small), stop and accept x_{n+1} as the approximate root.

III. Otherwise, set n=n+1 and repeat from step I.

Output: the final approximation x_{n+1} of the root α, or a report that the method did not converge within N_max iterations.
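
A direct translation of these steps into Python might look like this (a sketch; the function name fixed_point_iteration and its default tolerance are chosen here for illustration):

```python
def fixed_point_iteration(g, x0, tol=1e-8, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive iterates differ by less than tol."""
    x = x0
    for n in range(max_iter):
        x_next = g(x)              # step I: compute the next approximation
        if abs(x_next - x) < tol:  # step II: convergence check
            return x_next, n + 1   # approximate root and number of iterations used
        x = x_next                 # step III: advance and repeat
    raise RuntimeError("fixed-point iteration did not converge within max_iter steps")
```

For the equation solved in the next section, calling fixed_point_iteration(lambda x: (x**2 + 2) / 3, 0.0) returns a value close to the root x = 1.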

Example

Given Equation:

x^2 - 3x + 2 = 0

This equation factors as (x - 1)(x - 2) = 0, so the roots are x = 1 and x = 2.

Choose a Fixed-Point Form:

We can rearrange the equation to:

x = (x^2 + 2)/3

Thus:

g(x) = (x^2 + 2)/3

Initial Guess:

Let x_0 = 0.

Iteration 1:

x_1 = g(x_0) = g(0) = (0^2 + 2)/3 = 2/3 ≈ 0.6667

Iteration 2:

x_2 = g(x_1) = g(0.6667) = ((0.6667)^2 + 2)/3 = (0.4444 + 2)/3 = 2.4444/3 ≈ 0.8148

Iteration 3:

x_3 = g(x_2) = g(0.8148) = ((0.8148)^2 + 2)/3 = (0.6639 + 2)/3 = 2.6639/3 ≈ 0.8880

Repeating this process, we observe x_n approaching the root x = 1. This is expected from the convergence condition: g'(x) = 2x/3, so |g'(1)| = 2/3 < 1 and the iteration is attracted to x = 1, while |g'(2)| = 4/3 > 1 and the iteration is repelled from x = 2.
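
These iterations can be reproduced with a short script (a sketch; the variable names are illustrative):

```python
# Fixed-point iteration for x^2 - 3x + 2 = 0 rewritten as x = (x^2 + 2)/3
g = lambda x: (x**2 + 2) / 3

x = 0.0  # initial guess x_0
for n in range(1, 26):
    x = g(x)
    print(f"x_{n} = {x:.6f}")
# Prints 0.666667, 0.814815, 0.887974, ... approaching the root x = 1.
```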

Advantages

  1. Flexible formulation allows the relaxation method to work by simply rewriting the equation in fixed-point form x=g(x) and iterating, making it conceptually straightforward.
  2. Potentially faster than bisection, the method can converge more quickly when g(x) is well-chosen, particularly if |g'(x)| < 1 near the root.
  3. Broad applicability comes from the fact that the method does not require derivatives of f(x), although convergence often depends on the properties of g(x).
  4. Straightforward implementation is achieved by directly iterating xn+1=g(xn), provided an appropriate g(x) is identified.

Limitations

  1. No guaranteed convergence means the method may fail if |g'(x)| ≥ 1 in the vicinity of the root, unlike bracketing methods that ensure convergence under certain conditions (see the sketch after this list for a concrete failure case).
  2. Sensitive to the choice of g(x), as some transformations of f(x)=0 into x=g(x) promote convergence while others lead to divergence.
  3. Initial guess importance highlights that a poor starting point can result in divergence or very slow convergence, making the method less robust in such cases.
  4. Dependent on continuity and differentiability, with standard convergence theory requiring g(x) to be continuous and differentiable, limiting its applicability for problems that do not meet these conditions.
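
As a concrete illustration of the first two points (a sketch; the rearrangement and starting point are chosen only to exhibit divergence), the same g(x) = (x^2 + 2)/3 used earlier fails near the other root x = 2, where |g'(2)| = 4/3 > 1:

```python
# Same equation x^2 - 3x + 2 = 0 and the same rearrangement x = (x^2 + 2)/3,
# but started near the root x = 2, where |g'(x)| > 1.
g = lambda x: (x**2 + 2) / 3

x = 2.1  # initial guess close to the root x = 2
for n in range(1, 11):
    x = g(x)
    print(f"x_{n} = {x:.4f}")
# The iterates move away from 2 (2.1367, 2.1884, 2.2631, ...) and eventually
# grow without bound instead of converging to the nearby root.
```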
