Last modified: July 12, 2018
Secant Method
The Secant Method is a root-finding algorithm used in numerical analysis to approximate the zeros of a given function $f(x)$. It can be regarded as a derivative-free variant of Newton's method. Instead of computing the derivative $f'(x)$ at each iteration (as done in Newton's method), it approximates the derivative using two previously computed points. This approach allows the Secant Method to be applied even when the function is not easily differentiable, and under suitable conditions it often converges faster than simpler bracket-based methods (like the bisection method).
Conceptually, the Secant Method constructs a secant line between two points $(x_{n-1}, f(x_{n-1}))$ and $(x_n, f(x_n))$ on the graph of $f(x)$. The root approximation is then taken as the x-intercept of this secant line. By iteratively updating these points, the method "zeroes in" on a root.
Conceptual Illustration:
Imagine plotting the function $f(x)$ and drawing the secant line through the two current points on its graph. The intersection of this secant line with the x-axis gives the next approximation $x_{n+1}$. Repeating this procedure leads to progressively better approximations of the root, assuming the method converges.
Mathematical Formulation
Consider a continuous function $f(x)$ for which we want to solve $f(x) = 0$. The Secant Method starts with two initial approximations $x_0$ and $x_1$, and then generates a sequence $x_n$ according to:
$$x_{n+1} = x_n - f(x_n)\,\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}$$
This formula approximates the derivative $f'(x_n)$ by the finite difference:

$$f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}$$
By replacing $f'(x_n)$ with this approximation in Newton's method formula, we arrive at the Secant Method formula.
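To make the finite-difference idea concrete, here is a small Python sketch (the function and sample points are illustrative choices of mine, not from the article) comparing the secant slope with the true derivative of $f(x) = x^2 - 4$:

```python
# Compare the secant (finite-difference) slope with the true derivative
# of f(x) = x^2 - 4. For a quadratic, the slope of the secant through
# x_{n-1} and x_n equals f' evaluated exactly at their midpoint.
f = lambda x: x**2 - 4

x_prev, x_curr = 1.9, 2.1  # illustrative sample points near the root x = 2

fd_slope = (f(x_curr) - f(x_prev)) / (x_curr - x_prev)  # finite difference
true_slope = 2 * ((x_prev + x_curr) / 2)                # f'(x) = 2x at the midpoint

print(fd_slope, true_slope)  # both approximately 4.0
```

The closer the two sample points are to each other, the better the finite difference tracks the local derivative.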
Derivation
I. Starting from Newton's Method:
Newton's method update rule is:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
II. Approximating the Derivative:
If the derivative $f'(x_n)$ is difficult to compute or unknown, we can use a finite-difference approximation based on the two most recent points:

$$f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}$$
III. Substitute into Newton's Formula:
Replacing $f'(x_n)$ with this approximation gives:

$$x_{n+1} = x_n - \frac{f(x_n)}{\dfrac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}}$$
IV. Simplify the Expression:
By rearranging, we get:

$$x_{n+1} = x_n - f(x_n)\,\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}$$
This is the Secant Method iteration formula.
Algorithm Steps
Input:
- A function $f(x)$.
- Two initial points $x_0$ and $x_1$.
- A tolerance $\epsilon > 0$ or a maximum number of iterations $n_{\max}$.
Initialization:
Set $n = 1$ (since we already have $x_0$ and $x_1$).
Iteration:
I. Evaluate $f(x_n)$ and $f(x_{n-1})$.
II. Compute:

$$x_{n+1} = x_n - f(x_n)\,\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}$$
III. Check for convergence:
- If $|x_{n+1} - x_n| < \epsilon$ or $|f(x_{n+1})| < \epsilon$, stop.
- If $n > n_{\max}$, stop.
IV. Update indices: set $n = n + 1$ and repeat from step I.
Output:
- Approximate root $x_{n+1}$.
- Number of iterations performed.
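The steps above can be sketched in Python as follows (a minimal illustrative implementation; the function name `secant` and its signature are my own, not from the article):

```python
def secant(f, x0, x1, eps=1e-10, n_max=100):
    """Approximate a root of f via the secant method.

    Returns (approximate_root, iterations_used). Raises an error if the
    secant line is horizontal or the iteration budget is exhausted.
    """
    f0, f1 = f(x0), f(x1)
    for n in range(1, n_max + 1):
        denom = f1 - f0
        if denom == 0:
            raise ZeroDivisionError("horizontal secant: f(x_n) == f(x_{n-1})")
        # Secant update: x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))
        x2 = x1 - f1 * (x1 - x0) / denom
        if abs(x2 - x1) < eps or abs(f(x2)) < eps:
            return x2, n
        x0, f0 = x1, f1      # shift the window of the two most recent points
        x1, f1 = x2, f(x2)
    raise RuntimeError("no convergence within n_max iterations")

root, iters = secant(lambda x: x**2 - 4, 0.0, 1.0)
print(root, iters)  # root close to 2
```

Note that only one new function evaluation is needed per iteration, since $f(x_{n-1})$ can be reused from the previous step.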
Example
Given Function:
$$f(x) = x^2 - 4$$
We know the roots are $x = \pm 2$. Suppose we do not know the roots in advance and start with:

$$x_0 = 0, \quad x_1 = 1$$
Iteration 1:
- $f(x_0) = f(0) = 0^2 - 4 = -4$.
- $f(x_1) = f(1) = 1^2 - 4 = -3$.
Update:

$$x_2 = x_1 - f(x_1)\,\frac{x_1 - x_0}{f(x_1) - f(x_0)} = 1 - (-3)\,\frac{1 - 0}{(-3) - (-4)} = 1 - (-3) \cdot \frac{1}{1} = 1 + 3 = 4$$
Iteration 2:
Now:
- $x_1 = 1$, $x_2 = 4$.
- $f(1) = -3$, $f(4) = 4^2 - 4 = 16 - 4 = 12$.
Update:

$$x_3 = x_2 - f(x_2)\,\frac{x_2 - x_1}{f(x_2) - f(x_1)} = 4 - 12 \cdot \frac{4 - 1}{12 - (-3)} = 4 - 12 \cdot \frac{3}{15} = 4 - 2.4 = 1.6$$
Iteration 3:
Now:
- $x_2 = 4$, $x_3 = 1.6$.
- $f(4) = 12$, $f(1.6) = 1.6^2 - 4 = 2.56 - 4 = -1.44$.
Update:

$$x_4 = x_3 - f(x_3)\,\frac{x_3 - x_2}{f(x_3) - f(x_2)} = 1.6 - (-1.44)\,\frac{1.6 - 4}{-1.44 - 12} = 1.6 - (-1.44) \cdot \frac{-2.4}{-13.44}$$

Computing the fraction: $\frac{-2.4}{-13.44} \approx 0.1786$, and $(-1.44) \times 0.1786 \approx -0.2571$, so:

$$x_4 = 1.6 - (-0.2571) = 1.6 + 0.2571 \approx 1.8571$$
Repeating further iterations brings the sequence closer to $x = 2$.
Note how much the initial guesses matter: starting from $x_0 = 0$ and $x_1 = 1$, the first step jumped away from the root to $x_2 = 4$, but subsequent steps pulled the approximation back toward $x = 2$. Choosing initial guesses closer to the root (e.g., $x_0 = 1$ and $x_1 = 3$) typically yields faster convergence.
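The hand computation above can be double-checked with a short script (the helper name `secant_step` is an illustrative choice of mine):

```python
# Reproduce the worked example for f(x) = x^2 - 4 with x0 = 0, x1 = 1.
f = lambda x: x**2 - 4

def secant_step(x_prev, x_curr):
    # One secant update: x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))
    return x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))

x2 = secant_step(0.0, 1.0)  # 4.0
x3 = secant_step(1.0, x2)   # approximately 1.6
x4 = secant_step(x2, x3)    # approximately 1.8571
print(x2, x3, x4)
```

The printed iterates match the values derived by hand, and continuing the iteration drives the sequence toward the root at 2.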
Advantages
- The secant method avoids requiring an analytic derivative, unlike Newton's method; this is particularly useful for functions that are difficult or impossible to differentiate analytically.
- For simple roots, the secant method converges superlinearly with order equal to the golden ratio ($\approx 1.618$), faster than the bisection method's linear convergence, though not as fast as the quadratic convergence of Newton's method.
- The secant method is easy to implement due to its simple iterative formula, which relies only on function evaluations and does not require derivative or interval-based calculations.
Limitations
- Convergence is not guaranteed, as poor initial guesses can cause divergence or failure to find a root.
- Functions with multiple roots pose challenges because the secant method may converge to an unintended root if the initial guesses are closer to a different zero.
- The method's stability and efficiency are highly dependent on the initial guesses, $x_0$ and $x_1$, which play a critical role in determining whether the method converges and how quickly.
- The secant method does not offer quadratic convergence like Newton's method, resulting in slower convergence compared to derivative-based methods when derivatives are available and feasible to compute.
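To illustrate the first limitation concretely, here is a small sketch (my own example, not from the article) where the secant update is undefined because the two starting points give identical function values:

```python
# For f(x) = x^2 - 4, symmetric starting points x0 = -1 and x1 = 1 give
# f(x0) == f(x1) == -3: the secant line is horizontal, so the update
# formula divides by zero and the method cannot proceed.
f = lambda x: x**2 - 4
x0, x1 = -1.0, 1.0

try:
    x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
except ZeroDivisionError:
    print("horizontal secant: no next iterate exists")
```

A practical implementation should guard against a (near-)zero denominator, for example by falling back to a bisection step or perturbing one of the points.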