Last modified: April 06, 2021
Relaxation Method
The relaxation method, commonly referred to as the fixed-point iteration method, is an iterative approach used to find solutions (roots) to nonlinear equations of the form $f(x) = 0$. Instead of directly solving for the root, the method involves rewriting the original equation in the form:

$$x = g(x)$$

where $g(x)$ is a suitable function defined such that a fixed point of $g$ corresponds to a root of $f$. That is, if $\alpha$ is a root of $f$, then $g(\alpha) = \alpha$.
This approach transforms a root-finding problem into a fixed-point problem. If the iteration defined by:

$$x_{n+1} = g(x_n)$$

converges, it will do so to the fixed point $\alpha$, which is also a root of the original equation. Relaxation methods can sometimes converge faster than bracket-based methods (like bisection), but they do not come with guaranteed convergence from any starting point. Convergence depends on the properties of the chosen $g(x)$ and the initial guess $x_0$.
Conceptual Illustration:
Imagine the line $y = x$ and the curve $y = g(x)$. A fixed point is a point where the two curves intersect:

$$\alpha = g(\alpha)$$

The iteration takes a guess $x_n$ and maps it to $x_{n+1} = g(x_n)$. If these iterations home in on the intersection, the sequence converges to the root $\alpha$.
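This mapping can be sketched in a few lines of Python. As an illustration (not taken from the article), the code below iterates $g(x) = \cos x$, a classic contraction on $[0, 1]$, and watches the guesses settle at the intersection with $y = x$:

```python
import math

# Repeatedly map the current guess through g(x) = cos(x).
x = 0.5  # initial guess x_0
for _ in range(50):
    x = math.cos(x)  # x_{n+1} = g(x_n)

print(round(x, 6))  # -> 0.739085, the fixed point of cos(x)
```

Because $|g'(x)| = |\sin x| < 1$ on this interval, each application of $g$ pulls the guess closer to the intersection point.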
Mathematical Formulation
Starting Point:
Given a nonlinear equation:

$$f(x) = 0$$

we want to find $\alpha$ such that $f(\alpha) = 0$.
Rewriting the Equation:
We rearrange the equation into a fixed-point form:

$$x = g(x)$$

There can be infinitely many ways to rewrite $f(x) = 0$ as $x = g(x)$. A suitable $g$ must be chosen to ensure convergence.
Once we have:

$$x_{n+1} = g(x_n)$$

if the iteration converges, it must converge to $\alpha$ satisfying $\alpha = g(\alpha)$.
Convergence Conditions:
A key sufficient condition for convergence is that:

$$|g'(x)| < 1$$

in a neighborhood of the fixed point. If the derivative of $g$ in the region around the fixed point is less than 1 in absolute value, the iteration will typically converge.
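This condition can be checked numerically before iterating. The sketch below estimates $g'$ with a central difference, using $g(x) = (x^2 + 2)/3$ (the rearrangement from the worked example later in this article) as the test function:

```python
def g(x):
    return (x**2 + 2) / 3

def dg(x, h=1e-6):
    # Central-difference estimate of g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

# g'(x) = 2x/3, so the condition holds near x = 1 but fails near x = 2.
print(abs(dg(1.0)) < 1)  # True:  |g'(1)| = 2/3 < 1, iteration attracted
print(abs(dg(2.0)) < 1)  # False: |g'(2)| = 4/3 > 1, iteration repelled
```

A quick check like this tells you which fixed points of a given rearrangement the iteration can actually reach.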
Derivation
I. From Root to Fixed Point Problem:
Suppose we have $f(x) = 0$. We isolate $x$ on one side:

$$x = g(x)$$

If $\alpha$ solves $f(\alpha) = 0$, then $\alpha = g(\alpha)$.
II. Fixed-Point Iteration:
Define a sequence by choosing an initial guess $x_0$ and applying:

$$x_{n+1} = g(x_n), \quad n = 0, 1, 2, \dots$$
III. Convergence Insight:
Consider a point $\alpha$ such that $\alpha = g(\alpha)$. If we expand $g$ near $\alpha$:

$$x_{n+1} - \alpha = g(x_n) - g(\alpha) \approx g'(\alpha)\,(x_n - \alpha)$$

If $|g'(\alpha)| < 1$, then iterating this process will reduce the error $|x_n - \alpha|$ at each step, leading to convergence of $x_n$ to $\alpha$.
In essence, the choice of $g$ greatly influences the convergence behavior. A poorly chosen $g$ may lead to divergence or very slow convergence.
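The linear error contraction described above can be observed directly. A short sketch, using the illustrative choice $g(x) = (x^2 + 2)/3$, whose fixed point $\alpha = 1$ has $g'(1) = 2/3$:

```python
def g(x):
    return (x**2 + 2) / 3

alpha = 1.0  # fixed point of g
x = 0.5      # initial guess
for _ in range(5):
    x_next = g(x)
    # Ratio of successive errors; should approach |g'(alpha)| = 2/3.
    ratio = abs(x_next - alpha) / abs(x - alpha)
    print(round(ratio, 4))
    x = x_next
```

The printed ratios climb toward $2/3$, confirming that each step shrinks the error by roughly the factor $|g'(\alpha)|$, i.e. linear convergence.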
Algorithm Steps
Input:
- A function $f(x)$ and a corresponding $g(x)$ such that $x = g(x)$ is equivalent to $f(x) = 0$.
- An initial guess $x_0$.
- A tolerance $\epsilon > 0$ or a maximum number of iterations $N_{\max}$.
Initialization:
Set the iteration counter $n = 0$.
Iteration:
I. Compute the next approximation:

$$x_{n+1} = g(x_n)$$

II. Check convergence:
- If $|x_{n+1} - x_n| < \epsilon$ or $|f(x_{n+1})| < \epsilon$, stop.
- If $n \ge N_{\max}$, stop.
III. Otherwise, set $n = n + 1$ and repeat from step I.
Output:
- Approximate root: $x_{n+1}$.
- Number of iterations performed.
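The steps above can be sketched directly in Python. The names `fixed_point`, `tol`, and `max_iter` are illustrative choices, not from the text:

```python
def fixed_point(g, x0, tol=1e-8, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive guesses agree to tol."""
    x = x0
    for n in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:   # convergence check
            return x_next, n + 1    # root estimate, iterations used
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge")

# Illustration with g(x) = (x**2 + 2)/3, a rearrangement of x^2 - 3x + 2 = 0.
root, iters = fixed_point(lambda x: (x**2 + 2) / 3, x0=0.0)
print(root)  # close to 1, the attracting root
```

Note that the sketch uses only the step-size test $|x_{n+1} - x_n| < \epsilon$; adding the residual test $|f(x_{n+1})| < \epsilon$ would require passing $f$ in as well.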
Example
Given Equation:

$$x^2 - 3x + 2 = 0$$

This equation factors as $(x - 1)(x - 2) = 0$, so the roots are $x = 1$ and $x = 2$.
Choose a Fixed-Point Form:
We can rearrange the equation to:

$$x = \frac{x^2 + 2}{3}$$

Thus:

$$g(x) = \frac{x^2 + 2}{3}$$
Initial Guess:
Let $x_0 = 0$.
Iteration 1:

$$x_1 = g(x_0) = \frac{0^2 + 2}{3} = \frac{2}{3} \approx 0.6667$$

Iteration 2:

$$x_2 = g(x_1) = \frac{(2/3)^2 + 2}{3} = \frac{22}{27} \approx 0.8148$$

Iteration 3:

$$x_3 = g(x_2) = \frac{(22/27)^2 + 2}{3} \approx 0.8880$$
Repeating this process, we observe $x_n$ approaching one of the roots (in this case $x = 1$, since $|g'(1)| = 2/3 < 1$ makes that root attracting, while $|g'(2)| = 4/3 > 1$ makes $x = 2$ repelling).
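The worked iterations can be reproduced in a few lines (assuming $g(x) = (x^2 + 2)/3$ and $x_0 = 0$):

```python
def g(x):
    return (x**2 + 2) / 3

x = 0.0  # x_0
for n in range(1, 4):
    x = g(x)
    print(f"x_{n} = {x:.4f}")
# Prints:
# x_1 = 0.6667
# x_2 = 0.8148
# x_3 = 0.8880
```

Letting the loop run longer shows the sequence settling at $x = 1$ rather than $x = 2$, as the derivative analysis predicts.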
Advantages
- Flexible formulation allows the relaxation method to work by simply rewriting the equation in fixed-point form and iterating, making it conceptually straightforward.
- Potentially faster than bisection, the method can converge more quickly when $g$ is well-chosen, particularly if $|g'(x)|$ is small near the root.
- Broad applicability makes the method suitable for nonlinear equations, and it does not require derivatives of $f$, although convergence depends on the properties of $g$.
- Straightforward implementation is achieved by directly iterating $x_{n+1} = g(x_n)$, provided an appropriate $g$ is identified.
Limitations
- No guaranteed convergence means the method may fail if $|g'(x)| > 1$ in the vicinity of the root, unlike bracketing methods that ensure convergence under certain conditions.
- Sensitive to the choice of $g$, as some transformations of $f(x) = 0$ into $x = g(x)$ promote convergence while others lead to divergence.
- Initial guess importance highlights that a poor starting point can result in divergence or very slow convergence, making the method less robust in such cases.
- Dependent on continuity and differentiability, with standard convergence theory requiring $g$ to be continuous and differentiable near the fixed point, limiting its applicability for problems that do not meet these conditions.