
The Golden Ratio Search is a technique for locating the extremum (minimum or maximum) of a unimodal function over a given interval. Unlike gradient-based methods, it uses only function evaluations, making it broadly applicable even when derivatives are difficult or impossible to compute. The method takes its name from the golden ratio Ο•, whose partitioning property shrinks the search interval by the same fixed factor at every iteration.

For a unimodal function f(x) defined on an interval [a,b], the Golden Ratio Search progressively narrows down the search interval by evaluating the function at two strategically chosen internal points. By leveraging the intrinsic ratio Ο•, each iteration reduces the size of the interval while ensuring that no potential minima are prematurely excluded. This process continues until the interval is sufficiently small, providing an approximation to the function’s minimum (or maximum, if desired by adjusting the comparison conditions).

Conceptual Illustration:

Imagine the graph of a unimodal function f(x). The goal is to isolate the minimum within a progressively smaller bracket, starting from the initial interval [a,b].


We pick two points x1 and x2 inside [a,b] using the golden ratio partitioning. Based on the function values at these points, we eliminate a portion of the interval that cannot contain the minimum. This procedure repeats, each time reducing the interval size while maintaining the guarantee of containing the minimum.

Mathematical Formulation

The golden ratio is defined as:

$$\phi = \frac{1 + \sqrt{5}}{2} \approx 1.618.$$

This constant has unique properties, notably:

$$\frac{1}{\phi} = \phi - 1 \approx 0.618.$$
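These properties are easy to confirm numerically, e.g. in Python:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # golden ratio
print(phi)        # 1.618033988749895
print(1 / phi)    # β‰ˆ 0.618033988749895
print(phi - 1)    # the same value, up to floating-point rounding
```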

To apply the Golden Ratio Search for minimization, we start with an interval [a,b] and choose two internal points x1 and x2 such that:

$$x_1 = b - \frac{b - a}{\phi}, \qquad x_2 = a + \frac{b - a}{\phi}.$$

It follows from this construction that:

$$\frac{x_2 - a}{b - x_2} = \frac{b - x_1}{x_1 - a} = \phi.$$

Because of how Ο• partitions the interval, the ratio of the smaller subinterval to the larger one equals 1/Ο•, ensuring a consistent and balanced reduction. This symmetry has a practical payoff: after one subinterval is discarded, one of the two old interior points is automatically an interior point of the new interval, so an efficient implementation needs only one new function evaluation per iteration.
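The following snippet illustrates the partition on an arbitrary example interval [0, 1], confirming both ratio properties numerically:

```python
import math

phi = (1 + math.sqrt(5)) / 2

a, b = 0.0, 1.0          # example interval
x1 = b - (b - a) / phi   # left interior point,  β‰ˆ 0.382
x2 = a + (b - a) / phi   # right interior point, β‰ˆ 0.618

print((x2 - a) / (b - x2))  # β‰ˆ 1.618, i.e. phi
print((b - x1) / (x1 - a))  # β‰ˆ 1.618, i.e. phi
print((x1 - a) / (b - x1))  # β‰ˆ 0.618, smaller-to-larger ratio 1/phi
```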

Derivation

I. Unimodality Assumption:

Assume f(x) is unimodal on [a,b], meaning it has a single minimum within that interval. Without loss of generality, we focus on minimizing f(x).

II. Golden Ratio Partition:

Given the interval [a,b], we introduce points x1 and x2 as described:

$$x_1 = b - \frac{b - a}{\phi}, \qquad x_2 = a + \frac{b - a}{\phi}.$$

By doing so, we ensure:

$$a < x_1 < x_2 < b.$$

III. Decision Criterion:

We evaluate f(x1) and f(x2):

- If f(x1) > f(x2), the minimum cannot lie in [a,x1], so we discard that portion and continue with [x1,b].
- If f(x1) ≀ f(x2), the minimum cannot lie in [x2,b], so we continue with [a,x2].

In either case, the length of the interval is reduced by a factor of 1/Ο• β‰ˆ 0.618 each iteration, ensuring rapid convergence. A sketch of a single step follows below.
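A single elimination step can be sketched in Python as follows (the helper name golden_step is an illustrative choice):

```python
def golden_step(f, a, b, phi=(1 + 5 ** 0.5) / 2):
    """One golden-ratio elimination step on the bracket [a, b]."""
    x1 = b - (b - a) / phi
    x2 = a + (b - a) / phi
    if f(x1) > f(x2):
        return x1, b  # discard [a, x1]
    return a, x2      # discard [x2, b]

# Each call shrinks the bracket by a factor of about 1/phi β‰ˆ 0.618.
print(golden_step(lambda x: x ** 2, -2.0, 2.0))  # (-2.0, 0.4721...)
```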

IV. Convergence:

After n iterations, the length of the interval is:

$$|b - a| \left( \frac{1}{\phi} \right)^n.$$

Once this length is less than a prescribed tolerance Ο΅, we accept that the minimum is approximated by any point within the current interval.
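This also gives an a priori bound on the number of iterations: requiring the interval length to fall below Ο΅ and solving for n yields

$$|b - a| \left( \frac{1}{\phi} \right)^n < \epsilon \quad \Longleftrightarrow \quad n > \frac{\ln\left( |b - a| / \epsilon \right)}{\ln \phi}.$$

For instance, shrinking an interval of length 4 down to 10^-6 requires about ln(4 Γ— 10^6)/ln(1.618) β‰ˆ 32 iterations.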

Algorithm Steps

Input:

- A unimodal function f(x).
- Interval endpoints a and b with a < b, bracketing the minimum.
- A tolerance Ο΅ > 0 and a maximum number of iterations n_max.

Initialization:

- Set the iteration counter k = 0.
- Compute the constant Ο• = (1 + √5)/2 β‰ˆ 1.618.

Iteration:

I. Compute:

$$x_1 = b - \frac{b - a}{\phi}, \qquad x_2 = a + \frac{b - a}{\phi}.$$

II. Evaluate f(x1) and f(x2).

III. If f(x1) > f(x2):

Set a = x1. (The minimum lies in [x1,b].)

Else:

Set b = x2. (The minimum lies in [a,x2].)

IV. If |bβˆ’a| < Ο΅ or k β‰₯ n_max:

Stop and approximate the minimum as (a+b)/2.

V. Increment the iteration counter (k = k + 1) and return to step I.

Output:

An approximation of the minimizer, x* β‰ˆ (a+b)/2, together with its function value f(x*).
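Taken together, the steps above translate almost line by line into code. Below is a minimal Python sketch; the function name, defaults, and return convention are illustrative choices, not part of the algorithm itself:

```python
import math

def golden_ratio_search(f, a, b, eps=1e-8, n_max=1000):
    """Minimize a unimodal function f on [a, b] by golden ratio search."""
    phi = (1 + math.sqrt(5)) / 2
    k = 0
    while abs(b - a) >= eps and k < n_max:
        # Interior points from the golden ratio partition.
        x1 = b - (b - a) / phi
        x2 = a + (b - a) / phi
        if f(x1) > f(x2):
            a = x1  # the minimum lies in [x1, b]
        else:
            b = x2  # the minimum lies in [a, x2]
        k += 1
    return (a + b) / 2  # midpoint of the final bracket

# Example: the minimum of f(x) = x^2 on [-2, 2] is at x = 0.
print(golden_ratio_search(lambda x: x ** 2, -2.0, 2.0))  # β‰ˆ 0.0
```

As written, the sketch evaluates f twice per iteration; a more economical variant carries one interior point and its function value over to the next iteration, as noted earlier, so only one new evaluation is needed per step.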

Example

Given Function:

$$f(x) = x^2.$$

We know that f(x) = x^2 is unimodal (in fact, strictly convex) on any interval, and it attains its minimum at x = 0.

Initial Setup:

$$a = -2, \qquad b = 2, \qquad \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618.$$

Iteration 1:

Compute:

$$x_1 = b - \frac{b - a}{\phi} = 2 - \frac{2 - (-2)}{1.618} = 2 - \frac{4}{1.618} \approx 2 - 2.472 = -0.472.$$

$$x_2 = a + \frac{b - a}{\phi} = -2 + \frac{4}{1.618} \approx -2 + 2.472 = 0.472.$$

Evaluate:

$$f(x_1) = (-0.472)^2 \approx 0.223, \qquad f(x_2) = (0.472)^2 \approx 0.223.$$

Since f(x1) = f(x2), either subinterval may be chosen. Following the rule for the case f(x1) ≀ f(x2), we set b = x2, so the new interval is [a,b] = [βˆ’2, 0.472].

Iteration 2:

New interval: [a,b]=[βˆ’2,0.472]

Compute new points:

$$x_1 = b - \frac{b - a}{\phi} = 0.472 - \frac{0.472 - (-2)}{1.618} = 0.472 - \frac{2.472}{1.618} \approx 0.472 - 1.528 = -1.056.$$

$$x_2 = a + \frac{b - a}{\phi} = -2 + \frac{2.472}{1.618} \approx -2 + 1.528 = -0.472.$$

(Notice that the new x2 coincides with the previous x1 β‰ˆ βˆ’0.472; an efficient implementation reuses this function value instead of recomputing it.)

Evaluate:

$$f(x_1) = (-1.056)^2 \approx 1.115, \qquad f(x_2) = (-0.472)^2 \approx 0.223.$$

Compare:

$$f(x_1) \approx 1.115 > f(x_2) \approx 0.223.$$

Since f(x1) > f(x2), the smaller value is at x2, so the minimum must lie to the right of x1. According to the rules, we set:

$$a = x_1 = -1.056,$$

giving the new interval [a,b] = [βˆ’1.056, 0.472].

Subsequent Iterations:

At each subsequent iteration, you would similarly compute new x1 and x2, evaluate f(x1) and f(x2), and narrow down the interval. Because f(x) = x^2 attains its minimum at x = 0, the bracket steadily shrinks around x = 0, as the short run below confirms.
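To check the hand computation, the sketch from the Algorithm Steps section can be run on this example:

```python
# Assumes golden_ratio_search from the sketch in the Algorithm Steps section.
x_star = golden_ratio_search(lambda x: x ** 2, -2.0, 2.0, eps=1e-6)
print(x_star)       # β‰ˆ 0 (within the 1e-6 tolerance)
print(x_star ** 2)  # function value at the approximate minimizer, β‰ˆ 0
```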

Advantages

  1. No derivatives required makes the Golden Ratio Search ideal for problems where derivative information is unavailable or too expensive to compute.
  2. Guaranteed reduction of the search interval by a factor of 1/Ο• β‰ˆ 0.618 (the reciprocal of the golden ratio) in each iteration ensures steady progress toward locating the minimum.
  3. The method's robustness guarantees convergence to a minimum within the provided interval, assuming the function is unimodal.
  4. Simplicity of implementation makes the algorithm accessible, as it involves straightforward iterative calculations without the need for complex procedures.

Limitations

  1. Unimodality required means the method assumes the function has only one minimum in the interval; multiple minima can lead to convergence to a local minimum or failure to isolate a solution.
  2. Initial bracketing of the interval [a,b] containing the minimum is essential, and determining this interval can be challenging if the function's behavior is poorly understood.
  3. The method is not the fastest for all problems, particularly when gradient-based methods like Newton’s method or quasi-Newton methods can be used effectively.
  4. Slow convergence can occur in flat regions around the minimum, where reductions per iteration provide limited new information for further narrowing the interval.
