Last modified: August 03, 2024
Picard's Iteration Method
Picard's method, alternatively known as the method of successive approximations, is a tool primarily used for solving initial-value problems for first-order ordinary differential equations (ODEs). The approach hinges on an iterative process that approximates the solution of an ODE. Though this method is notably simple and direct, it can sometimes converge slowly, depending on the specifics of the problem at hand.
Mathematical Formulation of Picard's Method
Picard's method originates from the idea of expressing the solution to an ODE as an infinite series. We then approximate this solution by truncating the series after a certain number of terms. Given a first-order ODE of the general form y' = f(x, y), together with the initial condition y(x_0) = y_0, we can express the iterative formula for Picard's method as follows:
y_{n+1}(x) = y_0 + ∫_{x_0}^{x} f(t, y_n(t)) dt
Derivation
Picard's method of successive approximations is based on the principle of transforming the differential equation into an equivalent integral equation, which can then be solved iteratively. Let's take a look at how it's derived.
Consider a first-order ordinary differential equation (ODE):
y'(t) = f(t, y(t)),
with an initial condition:
y(t_0) = y_0
The main idea of Picard's method is to rewrite the ODE as an equivalent integral equation by integrating both sides over the interval [t_0, t], using s as the dummy variable of integration:
∫_{t_0}^{t} y'(s) ds = ∫_{t_0}^{t} f(s, y(s)) ds
The left-hand side of the equation becomes y(t) − y(t_0) by the fundamental theorem of calculus, which we can rewrite using our initial condition to obtain:
y(t) = y_0 + ∫_{t_0}^{t} f(s, y(s)) ds
Now we can see the integral equation form of the initial value problem.
To solve this equation using Picard's method, we generate a sequence of functions y_n(t) iteratively as:
y_{n+1}(t) = y_0 + ∫_{t_0}^{t} f(s, y_n(s)) ds,
where y_0(t) is the initial guess (often taken as the constant function y_0(t) = y_0).
Each approximation y_{n+1}(t) is defined in terms of the previous approximation y_n(t), creating an iterative process that can be repeated until a desired level of accuracy is achieved.
Algorithm steps
- Start with an initial approximation y_0(x).
- Calculate the next approximation y_{n+1}(x) = y_0 + ∫_{x_0}^{x} f(t, y_n(t)) dt.
- Repeat step 2 for a pre-determined number of iterations n or until the approximation of the solution meets the required accuracy.
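The steps above can be sketched numerically. The following is a minimal Python implementation (the function name `picard` and its parameters are illustrative, not from any library) that discretizes the interval, approximates the integral at each grid point with the composite trapezoidal rule, and repeats the update for a fixed number of passes rather than testing a tolerance, for simplicity:

```python
import math

def picard(f, x0, y0, x_end, n_points=101, iterations=8):
    """Iterate y_{n+1}(x) = y0 + integral from x0 to x of f(t, y_n(t)) dt
    on a uniform grid, using the trapezoidal rule for the integral."""
    h = (x_end - x0) / (n_points - 1)
    xs = [x0 + i * h for i in range(n_points)]
    ys = [y0] * n_points  # initial guess: the constant function y(x) = y0
    for _ in range(iterations):
        fs = [f(x, y) for x, y in zip(xs, ys)]
        new_ys = [y0]
        integral = 0.0
        for i in range(1, n_points):
            # trapezoid rule on the subinterval [x_{i-1}, x_i]
            integral += 0.5 * h * (fs[i - 1] + fs[i])
            new_ys.append(y0 + integral)
        ys = new_ys
    return xs, ys

# Example: y' = x + y, y(0) = 1, whose exact solution is 2e^x - (x + 1).
xs, ys = picard(lambda x, y: x + y, 0.0, 1.0, 1.0)
print(ys[-1], 2 * math.exp(1.0) - 2.0)  # approximation vs. exact value at x = 1
```

Eight passes suffice here because the remaining Picard error on [0, 1] shrinks roughly like x^n/n!; for stiffer problems more iterations and a finer grid would be needed.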
Example Application
Consider the first-order ODE
y' = x + y,    y(0) = 1
We will use Picard iteration (also called the method of successive approximations) to approximate the solution. Recall that if y solves the initial value problem, then y must satisfy the integral equation
y(x) = y(0) + ∫_0^x (t + y(t)) dt
Because y(0)=1, this becomes
y(x) = 1 + ∫_0^x (t + y(t)) dt
Step 1: Choose an initial guess
A simple choice for the initial approximation is a constant function that satisfies the initial condition: y_0(x) = 1
Step 2: Form the iteration
Define the (n+1)-th approximation y_{n+1}(x) by
y_{n+1}(x) = 1 + ∫_0^x (t + y_n(t)) dt
Step 3: Compute the first few iterates
I. First iterate y_1:
Starting from y_0(t) = 1,
y_1(x) = 1 + ∫_0^x (t + y_0(t)) dt = 1 + ∫_0^x (t + 1) dt
Compute the integral:
∫_0^x (t + 1) dt = [(1/2)t^2 + t]_{t=0}^{t=x} = (1/2)x^2 + x
Hence
y_1(x) = 1 + ((1/2)x^2 + x) = 1 + x + (1/2)x^2
II. Second iterate y_2:
Now substitute y_1(t) = 1 + t + (1/2)t^2 into the integral:
y_2(x) = 1 + ∫_0^x (t + y_1(t)) dt = 1 + ∫_0^x (t + (1 + t + (1/2)t^2)) dt = 1 + ∫_0^x (1 + 2t + (1/2)t^2) dt
Compute this integral:
∫_0^x (1 + 2t + (1/2)t^2) dt = [t + t^2 + (1/6)t^3]_{t=0}^{t=x} = x + x^2 + (1/6)x^3
Therefore,
y_2(x) = 1 + (x + x^2 + (1/6)x^3) = 1 + x + x^2 + (1/6)x^3
In principle, you continue in this manner, substituting y_n(x) back into the integral equation to obtain y_{n+1}(x). You stop either when the approximation is satisfactory or when you reach your maximum number of iterations.
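The hand computation above can be automated with exact arithmetic. The sketch below (helper names `integrate_poly` and `picard_step` are made up for this example) represents a polynomial as a list of `Fraction` coefficients and reproduces the iterates for y' = x + y, y(0) = 1:

```python
from fractions import Fraction

def integrate_poly(poly):
    """Term-by-term integral from 0 to x: c_k x^k -> c_k/(k+1) x^(k+1).
    poly is a coefficient list [c0, c1, ...] meaning c0 + c1*x + c2*x^2 + ..."""
    return [Fraction(0)] + [c / Fraction(k + 1) for k, c in enumerate(poly)]

def picard_step(y_poly):
    """One Picard update for this example: y_{n+1}(x) = 1 + integral_0^x (t + y_n(t)) dt."""
    integrand = list(y_poly)
    while len(integrand) < 2:
        integrand.append(Fraction(0))
    integrand[1] += 1          # add the 't' term of f(t, y) = t + y
    result = integrate_poly(integrand)
    result[0] += 1             # add the initial condition y(0) = 1
    return result

y = [Fraction(1)]              # y_0(x) = 1
for n in range(3):
    y = picard_step(y)
    print("y_%d coefficients:" % (n + 1), y)
```

Running it yields [1, 1, 1/2] for y_1 and [1, 1, 1, 1/6] for y_2, matching the iterates derived above, and [1, 1, 1, 1/3, 1/24] for y_3.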
Observing a Pattern
Notice how each iteration adds higher-order polynomial terms in x. In fact, this process is building the power-series expansion of the true solution around x=0. More and more terms (with increasing powers of x) appear as you iterate.
The Exact Solution
Although Picard iteration is a good way to approximate or numerically solve the ODE, we can also solve it directly by standard methods for first-order linear ODEs. Indeed, rewriting
y' − y = x
the integrating factor is e^{-x}. Multiplying the equation by e^{-x} gives
d/dx (e^{-x} y) = x e^{-x}
Integrate both sides from 0 to x:
e^{-x} y(x) − e^{0} y(0) = ∫_0^x t e^{-t} dt
Since y(0) = 1, the left-hand side is e^{-x} y(x) − 1. The right-hand side can be computed by integration by parts (or looked up in a table):
∫_0^x t e^{-t} dt = [−(t + 1)e^{-t}]_0^x = −(x + 1)e^{-x} + 1
Hence,
e^{-x} y(x) − 1 = −(x + 1)e^{-x} + 1  ⟹  e^{-x} y(x) = 2 − (x + 1)e^{-x}
Multiply both sides by e^x to solve for y(x):
y(x) = 2e^x − (x + 1)
A quick check: at x = 0, y(0) = 2·1 − (0 + 1) = 1, matching the initial condition.
Connecting Back to the Iterates
If you expand the exact solution y(x) = 2e^x − (x + 1) in a Maclaurin series around x = 0,
2e^x = 2(1 + x + x^2/2! + x^3/3! + ⋯) = 2 + 2x + x^2 + x^3/3 + ⋯
2e^x − (x + 1) = (2 − 1) + (2x − x) + x^2 + x^3/3 + ⋯ = 1 + x + x^2 + x^3/3 + ⋯
Picard iteration naturally generates the partial sums of this series:
y_1(x) = 1 + x + (1/2)x^2,    y_2(x) = 1 + x + x^2 + (1/6)x^3,    y_3(x) = 1 + x + x^2 + (1/3)x^3 + ⋯,
and so on. Each iterate gets you closer to the true (infinite series) solution; numerically, you see that the coefficients of each power of x converge to those in the exact expansion.
In this example, the partial sums y_0, y_1, y_2, ⋯ quickly converge to the exact solution
y(x) = 2e^x − (x + 1)
If one needs a purely numerical approach (especially for more complicated f(x, y)), continuing the iteration until it stabilizes, or up to a given tolerance or number of terms, provides a valid approximate solution.
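This convergence can be checked directly. The short script below (the names `y1`, `y2`, `y3` simply encode the iterates derived in this example) evaluates each iterate against the exact solution 2e^x − (x + 1) at a sample point and shows the error shrinking with each iteration:

```python
import math

# Picard iterates for y' = x + y, y(0) = 1, as derived in the text
def y1(x): return 1 + x + x**2 / 2
def y2(x): return 1 + x + x**2 + x**3 / 6
def y3(x): return 1 + x + x**2 + x**3 / 3 + x**4 / 24

def exact(x): return 2 * math.exp(x) - (x + 1)

x = 0.5
errors = [abs(f(x) - exact(x)) for f in (y1, y2, y3)]
for n, err in enumerate(errors, start=1):
    print("iterate %d: error %.6f" % (n, err))
```

At x = 0.5 each successive iterate reduces the error by roughly an order of magnitude, reflecting the factorial decay of the omitted series terms.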
Advantages
- Picard's method provides a straightforward iterative process for solving ODEs and is particularly useful in theoretical contexts for proving the existence and uniqueness of solutions.
- The series-based solution derived from Picard's method allows for a clear understanding of the solution's structure and behavior in mathematical terms.
- It can serve as a foundation for teaching iterative techniques and for introducing students to numerical and analytical solution methods.
- The method is well-suited for equations where both the function f(x,y) and its integral are smooth, ensuring convergence in many standard cases.
Limitations
- Each iteration of Picard's method involves evaluating an integral, which becomes computationally demanding for complicated or multidimensional functions.
- Convergence is not guaranteed for functions f(x,y) that are discontinuous, unbounded, or violate Lipschitz conditions in the region of interest.
- The method tends to converge slowly for stiff or nonlinear ODEs, making it less practical for such problems compared to other numerical methods.
- Applying Picard's method can be impractical for ODEs requiring high precision, as the number of iterations to achieve accuracy grows significantly.
- The approach is not suitable for equations where numerical solutions are desired quickly, as alternative methods like Runge-Kutta or finite difference techniques are often more efficient.