Last modified: August 03, 2024


Picard's Iteration Method

Picard's method, alternatively known as the method of successive approximations, is a tool primarily used for solving initial-value problems for first-order ordinary differential equations (ODEs). The approach hinges on an iterative process that approximates the solution of an ODE. Though this method is notably simple and direct, it can sometimes converge slowly, depending on the specifics of the problem at hand.

Mathematical Formulation of Picard's Method

Picard's method originates from the idea of expressing the solution to an ODE as an infinite series, which we then approximate by truncating after a certain number of terms. Given a first-order ODE of the general form $y' = f(x, y)$ with the initial condition $y(x_0) = y_0$, the iterative formula for Picard's method reads:

$$y_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_n(t))\, dt$$

Derivation

Picard's method of successive approximations is based on the principle of transforming the differential equation into an equivalent integral equation, which can then be solved iteratively. Let's take a look at how it's derived.

Consider a first-order ordinary differential equation (ODE):

$$y'(t) = f(t, y(t)),$$

with an initial condition:

$$y(t_0) = y_0$$

The main idea of Picard's method is to rewrite the ODE as an equivalent integral equation by integrating both sides from $t_0$ to $t$ (using $s$ as the integration variable):

$$\int_{t_0}^{t} y'(s)\, ds = \int_{t_0}^{t} f(s, y(s))\, ds$$

The left-hand side of the equation becomes $y(t) - y(t_0)$ by the fundamental theorem of calculus, which we can rewrite using our initial condition to obtain:

$$y(t) = y_0 + \int_{t_0}^{t} f(s, y(s))\, ds$$

Now we can see the integral equation form of the initial value problem.

To solve this equation using Picard's method, we generate a sequence of functions yn(t) iteratively as:

$$y_{n+1}(t) = y_0 + \int_{t_0}^{t} f(s, y_n(s))\, ds,$$

where $y_0(t)$ is the initial guess (often taken as the constant function $y_0(t) = y_0$).

Each approximation $y_{n+1}(t)$ is defined in terms of the previous approximation $y_n(t)$, creating an iterative process that can be repeated until a desired level of accuracy is achieved.

Algorithm steps

  1. Start with an initial approximation $y_0(x)$.
  2. Calculate the next approximation $y_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_n(t))\, dt$.
  3. Repeat step 2 for a pre-determined number of iterations $n$ or until the approximation of the solution meets the required accuracy.
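The steps above can be sketched numerically in Python. This is an illustrative implementation, not code from the article: the integral is approximated on a uniform grid with the trapezoidal rule, and the names (`picard`, `steps`, `iterations`) are arbitrary choices.

```python
# Numerical sketch of Picard iteration for y' = f(t, y), y(t0) = y0.
# The integral in y_{n+1}(t) = y0 + ∫ f(s, y_n(s)) ds is approximated
# on a uniform grid with the cumulative trapezoidal rule.
import math

def picard(f, t0, y0, t_end, steps=200, iterations=8):
    h = (t_end - t0) / steps
    ts = [t0 + i * h for i in range(steps + 1)]
    y = [y0] * (steps + 1)                       # initial guess: constant y0
    for _ in range(iterations):
        g = [f(t, yi) for t, yi in zip(ts, y)]   # integrand f(s, y_n(s)) on the grid
        y_next = [y0]
        for i in range(steps):
            # running trapezoidal integral added to y0
            y_next.append(y_next[-1] + 0.5 * h * (g[i] + g[i + 1]))
        y = y_next
    return ts, y

# Sanity check on y' = y, y(0) = 1, whose exact solution is e^t.
ts, y = picard(lambda t, y: y, 0.0, 1.0, 1.0)
print(abs(y[-1] - math.e))   # small residual error
```

The error at $t = 1$ combines the truncation of the Picard sequence (each iteration adds one power of the series for $e^t$) with the quadrature error of the trapezoidal rule.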

Example Application

Consider the first-order ODE

$$y' = x + y, \qquad y(0) = 1$$

We will use Picard iteration (also called the method of successive approximations) to approximate the solution. Recall that if y solves the initial value problem, then y must satisfy the integral equation

$$y(x) = y(0) + \int_{0}^{x} (t + y(t))\, dt$$

Because $y(0) = 1$, this becomes

$$y(x) = 1 + \int_{0}^{x} (t + y(t))\, dt$$

Step 1: Choose an initial guess

A simple choice for the initial approximation is a constant function that satisfies the initial condition: $y_0(x) = 1$.

Step 2: Form the iteration

Define the $(n+1)$-th approximation $y_{n+1}(x)$ by

$$y_{n+1}(x) = 1 + \int_{0}^{x} (t + y_n(t))\, dt$$

Step 3: Compute the first few iterates

I. First iterate $y_1$:

Starting from $y_0(t) = 1$,

$$y_1(x) = 1 + \int_{0}^{x} (t + y_0(t))\, dt = 1 + \int_{0}^{x} (t + 1)\, dt$$

Compute the integral:

$$\int_{0}^{x} (t + 1)\, dt = \left[ \tfrac{1}{2} t^2 + t \right]_{t=0}^{t=x} = \tfrac{1}{2} x^2 + x$$

Hence

$$y_1(x) = 1 + \left( \tfrac{1}{2} x^2 + x \right) = 1 + x + \tfrac{1}{2} x^2$$

II. Second iterate $y_2$:

Now substitute $y_1(t) = 1 + t + \tfrac{1}{2} t^2$ into the integral:

$$y_2(x) = 1 + \int_{0}^{x} (t + y_1(t))\, dt = 1 + \int_{0}^{x} \left( t + 1 + t + \tfrac{1}{2} t^2 \right) dt = 1 + \int_{0}^{x} \left( 1 + 2t + \tfrac{1}{2} t^2 \right) dt$$

Compute this integral:

$$\int_{0}^{x} \left( 1 + 2t + \tfrac{1}{2} t^2 \right) dt = \left[ t + t^2 + \tfrac{1}{6} t^3 \right]_{t=0}^{t=x} = x + x^2 + \tfrac{1}{6} x^3$$

Therefore,

$$y_2(x) = 1 + \left( x + x^2 + \tfrac{1}{6} x^3 \right) = 1 + x + x^2 + \tfrac{1}{6} x^3$$

In principle, you continue in this manner, substituting $y_n(x)$ back into the integral equation to obtain $y_{n+1}(x)$. You stop either when you get a satisfactory approximation or when you reach your maximum number of iterations.

Observing a Pattern

Notice how each iteration adds higher-order polynomial terms in $x$. In fact, this process is building the power-series expansion of the true solution around $x = 0$: more and more terms (with increasing powers of $x$) appear as you iterate.
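This pattern can be checked mechanically. The sketch below is my own illustration (the helper name `picard_step` and the coefficient-list representation are arbitrary choices): it carries out the iteration for $y' = x + y$, $y(0) = 1$ with exact rational arithmetic, where `coeffs[k]` holds the coefficient of $x^k$.

```python
# Exact polynomial Picard iterates for y' = x + y, y(0) = 1.
# Polynomials are stored as lists of Fraction coefficients.
from fractions import Fraction

def picard_step(coeffs):
    # Integrand t + y_n(t): copy y_n and add 1 to the coefficient of t^1.
    g = list(coeffs) + [Fraction(0)]
    g[1] += 1
    # y_{n+1}(x) = 1 + ∫_0^x g(t) dt, integrating term by term:
    # c * t^k integrates to (c / (k+1)) * t^(k+1).
    out = [Fraction(1)]
    for k, c in enumerate(g):
        out.append(c / (k + 1))
    return out

y = [Fraction(1)]            # y_0(x) = 1
for _ in range(3):
    y = picard_step(y)
print(y)   # coefficients of y_3: 1, 1, 1, 1/3, 1/24 (then zeros)
```

Running two steps reproduces $y_2(x) = 1 + x + x^2 + \tfrac{1}{6} x^3$ from above; the third step already shows the coefficient $\tfrac{1}{3}$ of $x^3$ settling to its final value.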

The Exact Solution

Although Picard iteration is a good way to approximate or numerically solve the ODE, we can also solve it directly by standard methods for first-order linear ODEs. Indeed, rewriting

$$y' - y = x$$

the integrating factor is $e^{-x}$. Multiplying the equation by $e^{-x}$ gives

$$\frac{d}{dx}\left( e^{-x} y \right) = x e^{-x}$$

Integrate both sides from $0$ to $x$:

$$e^{-x} y(x) - e^{-0} y(0) = \int_{0}^{x} t e^{-t}\, dt$$

Since $y(0) = 1$, the left-hand side is $e^{-x} y(x) - 1$. The right-hand side can be computed by integration by parts (or looked up in a table):

$$\int_{0}^{x} t e^{-t}\, dt = \left[ -(t + 1) e^{-t} \right]_{0}^{x} = -(x + 1) e^{-x} + 1$$

Hence,

$$e^{-x} y(x) - 1 = -(x + 1) e^{-x} + 1 \quad \Longrightarrow \quad e^{-x} y(x) = 2 - (x + 1) e^{-x}$$

Multiply both sides by $e^x$ to solve for $y(x)$:

$$y(x) = 2 e^x - (x + 1)$$

A quick check: at $x = 0$, $y(0) = 2 \cdot 1 - (0 + 1) = 1$, matching the initial condition.
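The full ODE can be checked just as quickly: $y'(x) = 2 e^x - 1$ and $x + y(x) = x + 2 e^x - x - 1 = 2 e^x - 1$, so both sides agree identically. A short spot-check (my own illustration):

```python
# Verify that y(x) = 2*e^x - (x + 1) satisfies y' = x + y:
# y'(x) = 2*e^x - 1, and x + y(x) simplifies to the same expression.
import math

def y(x):  return 2 * math.exp(x) - (x + 1)
def dy(x): return 2 * math.exp(x) - 1

for x in [0.0, 0.5, 1.0, 2.0]:
    assert abs(dy(x) - (x + y(x))) < 1e-12
print(y(0))   # 1.0, matching the initial condition
```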

Connecting Back to the Iterates

If you expand the exact solution $y(x) = 2 e^x - (x + 1)$ in a Maclaurin series around $x = 0$,

$$2 e^x = 2 \left( 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \right) = 2 + 2x + x^2 + \frac{x^3}{3} + \cdots$$

$$2 e^x - (x + 1) = (2 - 1) + (2x - x) + x^2 + \frac{x^3}{3} + \cdots = 1 + x + x^2 + \frac{x^3}{3} + \cdots$$

Picard iteration naturally generates the partial sums of this series:

$$y_1(x) = 1 + x + \tfrac{1}{2} x^2, \qquad y_2(x) = 1 + x + x^2 + \tfrac{1}{6} x^3, \qquad y_3(x) = 1 + x + x^2 + \tfrac{1}{3} x^3 + \cdots,$$

and so on. Each iterate gets you closer to the true (infinite-series) solution; numerically, you see that the coefficients of each power of $x$ converge to those in the exact expansion.

In this example, the partial sums $y_0, y_1, y_2, \dots$ quickly converge to the exact solution

$$y(x) = 2 e^x - (x + 1)$$

If one needs a purely numerical approach (especially for more complicated $f(x, y)$), continuing the iteration until it stabilizes (or up to a given tolerance or number of terms) provides a valid approximate solution.
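As a concrete illustration of this convergence, the snippet below (my own sketch) evaluates the first few iterates against the exact solution at $x = 0.5$; the $x^4/24$ term of $y_3$ comes from carrying out one more iteration of the scheme.

```python
# Compare the first few Picard iterates for y' = x + y, y(0) = 1
# against the exact solution y(x) = 2*e^x - (x + 1) at x = 0.5.
import math

x = 0.5
exact = 2 * math.exp(x) - (x + 1)
y1 = 1 + x + x**2 / 2
y2 = 1 + x + x**2 + x**3 / 6
y3 = 1 + x + x**2 + x**3 / 3 + x**4 / 24   # one further iteration of the scheme
errors = [abs(yk - exact) for yk in (y1, y2, y3)]
print(errors)   # the error shrinks with each iterate
```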

Advantages

- The method is conceptually simple: it turns an ODE into a sequence of straightforward integrations.
- Each iterate is an explicit formula, and the construction underlies the Picard–Lindelöf existence and uniqueness theorem.
- For smooth right-hand sides, it reproduces the power-series expansion of the solution term by term.

Limitations

- Convergence can be slow, and is only guaranteed locally, on an interval where $f$ is Lipschitz in $y$.
- The integrals must be evaluated at every step; for complicated $f(x, y)$ they quickly become unwieldy or impossible in closed form.
- In practice, numerical schemes such as Runge–Kutta methods are usually preferred for accuracy per unit of work.
