Last modified: January 27, 2025
Gradient Descent
Gradient Descent is a fundamental first-order optimization algorithm widely used in mathematics, statistics, machine learning, and artificial intelligence. Its principal aim is to find the minimum of a given differentiable function $f(x)$. Instead of searching blindly, it uses gradient information (the direction of steepest increase) to iteratively move toward a minimum by taking steps in the opposite direction, known as the steepest descent direction.
Because it only requires the gradient of the function rather than its Hessian or other higher-order derivatives, gradient descent is often the method of choice for large-scale optimization problems. For instance, in machine learning, gradient descent is the backbone of training many models, from linear regression to deep neural networks.
Conceptual Illustration:
Imagine the function f(x) as a landscape of hills and valleys. Gradient descent resembles a blindfolded hiker who can feel the slope of the ground beneath their feet. The hiker takes small steps in the direction of the steepest downward slope to eventually reach a valley (a minimum of f).
In higher dimensions (e.g., $x \in \mathbb{R}^n$), the same idea applies, with the gradient represented as a vector.
Mathematical Formulation
Given a differentiable function $f : \mathbb{R}^n \to \mathbb{R}$, we want to solve:

$$\min_{x \in \mathbb{R}^n} f(x)$$
Gradient Descent Update Rule:
At iteration $n$, we have the current approximation $x_n$. The gradient at $x_n$, denoted $\nabla f(x_n)$, points in the direction of greatest increase of $f$. To move toward a minimum, we take a step opposite to the gradient direction:

$$x_{n+1} = x_n - \alpha \nabla f(x_n)$$

where $\alpha > 0$ is the learning rate (also called the step size).
Main Components:
- The gradient $\nabla f(x)$ is the vector of partial derivatives of $f$; it points in the direction of steepest ascent, so its negative is the most promising direction for a descent step.
- The learning rate $\alpha$ controls the step size. Too large, and you may leap right over the minimum (or diverge entirely); too small, and convergence crawls. Choosing $\alpha$ well means finding the sweet spot between the two.
Over successive iterations, provided the learning rate is chosen suitably and the function is well-behaved (e.g., convex), $x_n$ converges to a point $x^*$ where $\nabla f(x^*) = 0$, indicating a critical point, often a minimum.
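To make the update rule concrete, here is a minimal sketch in Python; the objective, its gradient, and the step size are illustrative choices, not part of the formulation above:

```python
import numpy as np

def gradient_step(x, grad_f, alpha):
    # One gradient descent update: x_{n+1} = x_n - alpha * grad_f(x_n)
    return x - alpha * grad_f(x)

# Illustrative objective f(x, y) = x^2 + 3y^2, whose gradient is (2x, 6y)
grad_f = lambda v: np.array([2.0 * v[0], 6.0 * v[1]])

x = np.array([4.0, -2.0])
x = gradient_step(x, grad_f, alpha=0.1)
print(x)  # [ 3.2 -0.8]: one step closer to the minimum at (0, 0)
```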
Derivation
I. First-Order Taylor Expansion:
Consider the first-order Taylor expansion of $f$ at $x_n$:

$$f(x_{n+1}) \approx f(x_n) + \nabla f(x_n)^\top (x_{n+1} - x_n)$$
To reduce $f$, we want $x_{n+1}$ to move in a direction that decreases this linear approximation. Among all steps of a fixed length, the inner product $\nabla f(x_n)^\top (x_{n+1} - x_n)$ is most negative when the step points along $-\nabla f(x_n)$ (a consequence of the Cauchy-Schwarz inequality), so the negative gradient is the direction of maximal decrease.
II. Steepest Descent:
The direction of steepest descent is $-\nabla f(x_n)$. To decide how far to move in that direction, we introduce the learning rate $\alpha$. The next iterate is therefore:

$$x_{n+1} = x_n - \alpha \nabla f(x_n)$$
III. Convergence Considerations:
Under suitable conditions (e.g., if $f$ is convex with a Lipschitz-continuous gradient and $\alpha$ is sufficiently small), the sequence $\{x_n\}$ generated by gradient descent converges to a global minimum:

$$\lim_{n \to \infty} x_n = x^*$$
where $x^*$ is a minimizer of $f$.
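The limit above is easy to check numerically on a convex example. The following sketch (function and constants chosen purely for illustration) shows the iterates settling at the minimizer:

```python
def grad_f(x):
    # Gradient of the convex function f(x) = (x - 3)^2, minimized at x* = 3
    return 2.0 * (x - 3.0)

x, alpha = 10.0, 0.1
for n in range(100):
    x -= alpha * grad_f(x)

print(x)  # ~3.0: the sequence x_n approaches the minimizer x* = 3
```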
Algorithm Steps
Input:
- A differentiable function $f(x)$.
- A starting point $x_0$.
- A learning rate $\alpha > 0$.
- A convergence tolerance $\epsilon > 0$ or a maximum number of iterations $n_{\max}$.
Initialization:
Set $n = 0$.
Iteration:
I. Compute the gradient at the current point:
$g_n = \nabla f(x_n)$.
II. Update the point:
$x_{n+1} = x_n - \alpha g_n$.
III. Check for convergence:
- If $\|x_{n+1} - x_n\| < \epsilon$ or $n > n_{\max}$, stop.
- Otherwise, set $n = n + 1$ and repeat from step I.
Output:
- Approximate minimum $x_{n+1}$.
- Number of iterations performed.
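These steps translate almost line for line into code. Below is one possible sketch; the function names, default values, and test objective are illustrative rather than a canonical implementation:

```python
import numpy as np

def gradient_descent(grad_f, x0, alpha=0.1, eps=1e-8, n_max=10_000):
    # Step-by-step implementation of the loop described above
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_max + 1):
        g = grad_f(x)                          # I. gradient at the current point
        x_next = x - alpha * g                 # II. descent update
        if np.linalg.norm(x_next - x) < eps:   # III. convergence check
            return x_next, n
        x = x_next
    return x, n_max

# Example: minimize f(v) = ||v - c||^2 with c = (1, -2); its gradient is 2(v - c)
c = np.array([1.0, -2.0])
x_min, iters = gradient_descent(lambda v: 2.0 * (v - c), x0=[0.0, 0.0])
print(x_min, iters)  # approximately [1. -2.] in roughly 80 iterations
```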
Example
Given Function:
$f(x) = x^2$.
We know $f$ is minimized at $x = 0$. Let's apply gradient descent step by step to illustrate how the algorithm converges to $x = 0$.
Setup:
- Initial guess: $x_0 = 5$.
- Learning rate: $\alpha = 0.1$.
- Gradient: $f'(x) = 2x$.
Iteration 1:
- Current point: $x_0 = 5$.
- Compute gradient: $f'(5) = 2 \cdot 5 = 10$.
Update:
$$x_1 = x_0 - \alpha f'(x_0) = 5 - 0.1 \cdot 10 = 5 - 1 = 4$$
Iteration 2:
- Current point: $x_1 = 4$.
- Compute gradient: $f'(4) = 2 \cdot 4 = 8$.
Update:
$$x_2 = x_1 - \alpha f'(x_1) = 4 - 0.1 \cdot 8 = 4 - 0.8 = 3.2$$
Iteration 3:
- Current point: $x_2 = 3.2$.
- Compute gradient: $f'(3.2) = 2 \cdot 3.2 = 6.4$.
Update:
$$x_3 = x_2 - \alpha f'(x_2) = 3.2 - 0.1 \cdot 6.4 = 3.2 - 0.64 = 2.56$$
Continuing this process, $x_n$ keeps shrinking toward 0. In fact, each update multiplies the current point by a constant factor: $x_{n+1} = x_n - 0.1 \cdot 2x_n = 0.8\,x_n$, so $x_n = 5 \cdot 0.8^n$ and the iterates approach the exact minimum at $x = 0$ geometrically.
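A few lines of Python reproduce this trace (same $x_0$, $\alpha$, and gradient as above):

```python
x, alpha = 5.0, 0.1
for n in range(1, 6):
    x = x - alpha * (2.0 * x)   # f'(x) = 2x
    print(f"x_{n} = {x:.4f}")
# x_1 = 4.0000, x_2 = 3.2000, x_3 = 2.5600, x_4 = 2.0480, x_5 = 1.6384
```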
Advantages
- Simplicity and ease of implementation make gradient descent a popular choice, as it only requires first-order gradients and is straightforward to code.
- Scalability allows gradient descent to handle problems with large parameter spaces, such as deep learning models with millions of parameters, and advanced variants like stochastic gradient descent enhance its applicability in machine learning.
- No need for second-order information makes it computationally cheaper than methods requiring Hessians, as it relies solely on first-order derivatives.
- Versatility enables its application to a wide range of differentiable functions, particularly in complex, high-dimensional optimization problems.
Limitations
- Learning rate sensitivity poses a challenge: a learning rate $\alpha$ that is too large can cause divergence, while a small $\alpha$ may lead to very slow convergence (a quick demonstration follows this list).
- Local minima and saddle points in non-convex optimization problems can trap gradient descent, preventing it from finding the global minimum or causing it to plateau.
- Ill-conditioned problems with elongated contours or ravines can cause inefficient convergence, often requiring preconditioning or adaptive techniques to address the issue.
- Dependence on good initialization means poor starting points may lead to suboptimal solutions or prolonged convergence, particularly in functions with challenging landscapes.
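To see the learning-rate sensitivity from the first point concretely, consider $f(x) = x^2$ again: the update $x \leftarrow x - \alpha \cdot 2x = (1 - 2\alpha)x$ contracts only when $|1 - 2\alpha| < 1$, i.e., $0 < \alpha < 1$. A quick sketch, with values chosen purely for illustration:

```python
def run(alpha, x=5.0, steps=10):
    # Repeated gradient descent updates on f(x) = x^2, where f'(x) = 2x
    for _ in range(steps):
        x -= alpha * 2.0 * x
    return x

print(run(alpha=0.1))   # ~0.54: contracting toward the minimum at 0
print(run(alpha=1.1))   # ~31.0: each step multiplies x by -1.2, so it diverges
```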