Numerical Methods

Runge-Kutta

The Runge-Kutta method is part of a family of iterative methods, both implicit and explicit, which are frequently employed for the numerical integration of ordinary differential equations (ODEs). This family encompasses widely recognized methods like the Euler method, Heun's method (a.k.a. the second-order Runge-Kutta method), and the classical fourth-order Runge-Kutta method (RK4).
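
As a rough sketch of how the best-known member of the family advances a solution, here is a single classical fourth-order (RK4) step; the function name `rk4_step` is my own:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: y' = y, y(0) = 1; one step of size 0.1 approximates e^0.1.
print(rk4_step(lambda t, y: y, 0.0, 1.0, 0.1))  # ~1.10517
```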

Gaussian Interpolation

Gaussian Interpolation, often associated with Gauss's forward and backward interpolation formulas, is a technique that refines the approach of polynomial interpolation when data points are equally spaced. Instead of using the Newton forward or backward interpolation formulas directly from one end of the table, Gauss's formulas expand about a point near the middle of the data, which tends to improve accuracy when interpolating near the center.
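
For reference, one common statement of Gauss's forward formula, with $p = (x - x_0)/h$ and differences taken about the central point $x_0$ (notation varies by text):

$$f(x_0 + ph) \approx y_0 + p\,\Delta y_0 + \frac{p(p-1)}{2!}\,\Delta^2 y_{-1} + \frac{(p+1)p(p-1)}{3!}\,\Delta^3 y_{-1} + \cdots$$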

Central Difference

The central-difference method is a finite-difference scheme for estimating derivatives that combines forward and backward differences via Taylor-series expansions. By evaluating the function at points symmetrically placed around the target, it cancels many of the lower-order error terms, yielding a more accurate approximation than either one-sided difference alone.
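
A minimal sketch of the symmetric two-point formula $f'(x) \approx \bigl(f(x+h) - f(x-h)\bigr)/(2h)$:

```python
import math

def central_difference(f, x, h=1e-5):
    """Approximate f'(x) with the symmetric two-point formula; error is O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

print(central_difference(math.sin, 0.0))  # ~1.0, since cos(0) = 1
```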

Thin Plate Spline Interpolation

Thin Plate Spline (TPS) interpolation is a non-parametric, spline-based technique for fitting a smooth surface through scattered data in two or more spatial dimensions. In its classical 2-D form one seeks a function $f\colon\mathbb R^{2}\to\mathbb R$ that passes through specified data points while minimizing a measure of the surface's bending energy.
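
In the standard formulation of the classical 2-D case, the interpolant minimizes the bending-energy functional and takes a radial-basis form:

$$E[f]=\iint_{\mathbb R^{2}}\!\left(f_{xx}^{2}+2f_{xy}^{2}+f_{yy}^{2}\right)dx\,dy, \qquad f(\mathbf p)=a_{0}+a_{1}x+a_{2}y+\sum_{i=1}^{n}w_{i}\,\phi\!\left(\lVert\mathbf p-\mathbf p_{i}\rVert\right), \quad \phi(r)=r^{2}\log r.$$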

Least Squares

Least Squares Regression is a fundamental technique in statistical modeling and data analysis used for fitting a model to observed data. The primary goal is to find a set of parameters that minimizes the discrepancies (residuals) between the model's predictions and the actual observed data. The "least squares" name reflects the criterion: the parameters are chosen to minimize the sum of the squared residuals.
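
A minimal sketch of a straight-line least-squares fit with NumPy; the data values are made up for illustration:

```python
import numpy as np

# Fit y = a*x + b by minimizing the sum of squared residuals.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])
A = np.column_stack([x, np.ones_like(x)])     # design matrix [x, 1]
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a, b)                                   # slope ~2, intercept ~1
```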

Interpolation

Interpolation is the problem of reconstructing an unknown function from a finite set of exact data pairs $(x_i, y_i)$, by constructing a function that passes through every given point.

Simpson's Rule

Simpson's Rule is a powerful technique in numerical integration, utilized for approximating definite integrals when an exact antiderivative of the function is difficult or impossible to determine analytically. This method enhances the accuracy of integral approximations by modeling the region under the curve with parabolic arcs rather than straight-line segments.
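
A sketch of the composite rule (the helper name `simpson` is my own); note that the number of subintervals must be even:

```python
def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes
    return s * h / 3

print(simpson(lambda x: x**2, 0.0, 1.0, 10))  # exact for quadratics: 1/3
```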

Trapezoidal Rule

The Trapezoidal Rule is a fundamental numerical integration technique employed to approximate definite integrals, especially when an exact antiderivative of the function is difficult or impossible to determine analytically. This method is widely used in fields such as engineering, physics, and applied mathematics.
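
A minimal sketch of the composite form, which weights the endpoints by one half:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on [a, b] with n subintervals."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * f(a) + interior + 0.5 * f(b))

print(trapezoid(lambda x: x**2, 0.0, 1.0, 100))  # ~0.33335 vs exact 1/3
```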

Regression

Regression analysis and curve fitting are important tools in statistics, econometrics, engineering, and modern machine-learning pipelines. At their core they seek a deterministic (or probabilistic) mapping $\widehat f: \mathcal X \longrightarrow \mathcal Y$ that minimizes a chosen loss between predictions and observations.

Newton Polynomial

Newton's Polynomial, often referred to as Newton's Interpolation Formula, is another classical approach to polynomial interpolation. Given a set of data points $(x_0,y_0),(x_1,y_1),\dots,(x_n,y_n)$ with distinct $x_i$ values, Newton's method constructs an interpolating polynomial in a form that makes it easy to incorporate additional data points incrementally via divided differences.
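
A compact sketch of the divided-difference table and nested evaluation (helper names are my own):

```python
def newton_coefficients(xs, ys):
    """Divided-difference coefficients c[k] = f[x_0, ..., x_k], computed in place."""
    c = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, c, x):
    """Evaluate the Newton form with Horner-like nesting."""
    result = c[-1]
    for k in range(len(c) - 2, -1, -1):
        result = result * (x - xs[k]) + c[k]
    return result

xs, ys = [0, 1, 2], [1, 3, 7]        # data from p(x) = x^2 + x + 1
c = newton_coefficients(xs, ys)
print(newton_eval(xs, c, 3))          # 13
```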

Lagrange Polynomial Interpolation

Lagrange Polynomial Interpolation is a widely used technique for determining a polynomial that passes exactly through a given set of data points. Suppose we have a set of $(n+1)$ data points $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$ where all $x_i$ are distinct. The aim is to find a polynomial $L(x)$ of degree at most $n$ such that $L(x_i) = y_i$ for every $i$.
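
A direct sketch that evaluates the Lagrange basis polynomials on the fly:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# Data from p(x) = x^2 + x + 1; p(3) should be 13.
print(lagrange([0, 1, 2], [1, 3, 7], 3))  # 13.0
```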

Newton's Method

Newton's method (or the Newton-Raphson method) is a powerful root-finding algorithm that exploits both the value of a function and its first derivative to rapidly refine approximations to its roots. Unlike bracketing methods that work by enclosing a root between two points, Newton's method is an open method: it iterates from a single starting guess and requires no bracketing interval.
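
A minimal sketch of the update $x \leftarrow x - f(x)/f'(x)$:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration from a single starting point x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0))  # ~1.41421356 = sqrt(2)
```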

Singular Value Decomposition

Singular Value Decomposition (SVD) is a fundamental matrix decomposition technique widely used in numerous areas of science, engineering, and data analysis. Unlike the Eigenvalue Decomposition (EVD), which is restricted to square and diagonalizable matrices, SVD applies to any rectangular matrix. It factors an $m \times n$ matrix $A$ as $A = U \Sigma V^{T}$, where $U$ and $V$ are orthogonal and $\Sigma$ carries the non-negative singular values on its diagonal.
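
A quick illustration with NumPy on a rectangular matrix (the matrix is arbitrary):

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0], [0.0, 2.0]])   # 3x2, not square
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(s)                                    # singular values, in descending order
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True: A = U Sigma V^T
```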

Integration Introduction

$$\int_{1}^{2} x^2 \, dx \approx \sum_{i=1}^{10} h \cdot f(1 + 0.1i), \qquad h = 0.1$$
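
A quick check of this right-endpoint Riemann sum in Python, reading $f(x) = x^2$ and $h = 0.1$ off the formula above:

```python
h = 0.1
f = lambda x: x**2
approx = sum(h * f(1 + 0.1 * i) for i in range(1, 11))
print(approx, 7 / 3)  # ~2.485 vs the exact value 2.3333...
```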

Midpoint Rule

The Midpoint Rule is a robust numerical method for approximating definite integrals. It seeks to estimate the area under a curve by partitioning it into a collection of rectangles and then summing the areas of these rectangles. This method is particularly useful when an antiderivative of the function is difficult or impossible to obtain.
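
A minimal sketch, sampling each subinterval at its center:

```python
def midpoint(f, a, b, n):
    """Composite midpoint rule: one rectangle per subinterval, sampled at its center."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

print(midpoint(lambda x: x**2, 0.0, 1.0, 100))  # ~0.333325 vs exact 1/3
```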

Eigenvalue Decomposition

Eigenvalue Decomposition (EVD), also known as Eigendecomposition, is a fundamental operation in linear algebra that breaks down a square matrix into a simpler form defined by its eigenvalues and eigenvectors. This decomposition provides deep insights into the properties and structure of a matrix, enabling efficient computation of matrix powers, matrix functions, and the behavior of linear dynamical systems.
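
A quick illustration of the factorization $A = V \Lambda V^{-1}$ using NumPy (the matrix is arbitrary):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigvals, V = np.linalg.eig(A)
print(eigvals)  # [3., 1.]
print(np.allclose(A, V @ np.diag(eigvals) @ np.linalg.inv(V)))  # True
```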

Systems of Equations

A linear system of equations is a collection of one or more linear equations involving the same set of variables. Such systems arise in diverse areas such as engineering, economics, physics, and computer science. The overarching goal is to find values of the variables that simultaneously satisfy all of the equations.
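
A small worked example, solving two equations in two unknowns with NumPy:

```python
import numpy as np

# Solve  2x + y = 5  and  x + 3y = 10  simultaneously.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
print(np.linalg.solve(A, b))  # [1., 3.], i.e. x = 1, y = 3
```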

QR Method

The QR method is a widely used algorithm in numerical linear algebra for determining the eigenvalues of a given square matrix. Unlike direct methods such as solving the characteristic polynomial, which can be complicated and numerically unstable for large matrices, the QR method leverages iterative QR factorizations: each step factors the current matrix as $A_k = Q_k R_k$ and forms $A_{k+1} = R_k Q_k$, driving the matrix toward triangular form with the eigenvalues on its diagonal.
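
A sketch of the unshifted textbook iteration; practical implementations add shifts and deflation for speed:

```python
import numpy as np

def qr_eigenvalues(A, iters=100):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k has the same eigenvalues as A."""
    Ak = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return np.diag(Ak)  # eigenvalues accumulate on the diagonal

print(qr_eigenvalues([[2.0, 1.0], [1.0, 2.0]]))  # ~[3., 1.]
```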

Inverse Matrix

The inverse of a matrix $A$ is denoted $A^{-1}$. It is the unique matrix such that multiplying it by the original matrix $A$ yields the identity matrix $I$. Mathematically, this is expressed as $A A^{-1} = A^{-1} A = I$.
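
A quick numerical check of the defining identity with NumPy (the matrix is arbitrary but invertible):

```python
import numpy as np

A = np.array([[4.0, 7.0], [2.0, 6.0]])
A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A @ A^{-1} = I
```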

Heun's Method

Heun's method is an improved version of Euler's method that enhances accuracy by using an average of the slope at the beginning of the interval and the predicted slope at the end of the interval.
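
A minimal sketch of one predictor-corrector step (the name `heun_step` is my own):

```python
def heun_step(f, t, y, h):
    """Predict with Euler, then average the slopes at both ends of the interval."""
    slope_start = f(t, y)
    y_predict = y + h * slope_start            # Euler predictor
    slope_end = f(t + h, y_predict)
    return y + h / 2 * (slope_start + slope_end)

# y' = y, y(0) = 1: one step of 0.1 gives ~1.105 vs e^0.1 = 1.10517.
print(heun_step(lambda t, y: y, 0.0, 1.0, 0.1))
```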

Gradient Descent

Gradient Descent is a fundamental first-order optimization algorithm widely used in mathematics, statistics, machine learning, and artificial intelligence. Its principal aim is to find the minimum of a given differentiable function $f(x)$. Instead of searching blindly, it uses gradient information: each step moves in the direction of steepest descent, opposite the gradient $\nabla f$.
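
A minimal one-dimensional sketch of the update $x \leftarrow x - \eta\,\nabla f(x)$ (the learning rate $\eta$ is an arbitrary choice here):

```python
def gradient_descent(grad, x0, lr=0.1, iters=100):
    """Repeatedly step against the gradient: x <- x - lr * grad(x)."""
    x = x0
    for _ in range(iters):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))  # ~3.0
```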

Taylor Series

The Taylor series is a fundamental tool in calculus and mathematical analysis, offering a powerful way to represent and approximate functions. By expanding a function around a specific point, known as the "center" or "point of expansion," we can express it as an infinite sum of polynomial terms derived from the function's derivatives evaluated at that point.
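
For a function $f$ expanded about the center $a$, the series reads:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x - a)^n = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \cdots$$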

Partial Differential Equations

A partial differential equation (PDE) is an equation that involves partial derivatives of an unknown function of two or more independent variables; a classic example is the one-dimensional heat equation $u_t = \alpha\, u_{xx}$.

Power Method

The power method is a fundamental iterative algorithm for estimating the eigenvalue of largest magnitude and its associated eigenvector for a given matrix. This technique is particularly appealing when dealing with large and sparse matrices, where direct eigenvalue computations (e.g., via the characteristic polynomial) are impractical.
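
A minimal sketch: repeated multiplication and normalization converge to the dominant eigenpair (the starting vector is an arbitrary choice):

```python
import numpy as np

def power_method(A, iters=200):
    """Iterate v <- A v / ||A v||; converges to the dominant eigenvector."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    eigenvalue = v @ A @ v  # Rayleigh quotient (v has unit length)
    return eigenvalue, v

A = np.array([[2.0, 1.0], [1.0, 2.0]])
print(power_method(A))  # eigenvalue ~3, eigenvector ~[0.707, 0.707]
```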

Golden Ratio Search

The Golden Ratio Search is a technique employed for locating the extremum (minimum or maximum) of a unimodal function over a given interval. Unlike gradient-based or derivative-requiring methods, this approach uses only function evaluations, making it broadly applicable even when derivatives are difficult or impossible to obtain.
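
A sketch for minimization, shrinking the bracket by the golden ratio each pass (a textbook variant; it re-evaluates both interior points per loop for clarity rather than caching one):

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Shrink [a, b] around the minimum of a unimodal f using the golden ratio."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi ~ 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                 # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                 # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_section_min(lambda x: (x - 2) ** 2, 0.0, 5.0))  # ~2.0
```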

Matrix Methods

Matrices are often described as rectangular arrays of numbers organized into rows and columns, and they form the bedrock of numerous processes in numerical methods. People use them for solving systems of linear equations, transforming geometric data, and carrying out many algorithmic tasks that lie at the heart of scientific computing.

Gauss-Seidel

The Gauss-Seidel method is a classical iterative method for solving systems of linear equations of the form $A\mathbf{x} = \mathbf{b}$, where $A$ is an $n \times n$ matrix, $\mathbf{x}$ is the vector of unknowns $(x_1, x_2, \ldots, x_n)$, and $\mathbf{b}$ is a known vector. Unlike direct methods such as Gaussian elimination, it refines an approximate solution sweep by sweep, using each newly computed component immediately within the same pass.
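
A minimal sketch; the example system is diagonally dominant, which guarantees convergence:

```python
import numpy as np

def gauss_seidel(A, b, iters=50):
    """Sweep through the equations, using updated components immediately."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [2.0, 3.0]])
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b))  # ~[0.1, 0.6]
```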

Jacobi Method

The Jacobi method is a classical iterative algorithm used to approximate the solution of a system of linear equations $A\mathbf{x} = \mathbf{b}$. Instead of attempting to solve the system directly using methods such as Gaussian elimination, the Jacobi method iteratively refines an initial guess for the solution, recomputing every component in each pass from the values of the previous iteration only.
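
A minimal sketch using the splitting $A = D + R$, so that $\mathbf{x}^{(k+1)} = D^{-1}(\mathbf{b} - R\mathbf{x}^{(k)})$:

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Update every component from the previous iterate only."""
    D = np.diag(A)               # diagonal entries
    R = A - np.diagflat(D)       # off-diagonal remainder
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

A = np.array([[4.0, 1.0], [2.0, 3.0]])   # diagonally dominant
b = np.array([1.0, 2.0])
print(jacobi(A, b))  # ~[0.1, 0.6]
```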

Forward Difference

The forward difference method is a fundamental finite difference technique utilized for approximating the derivatives of functions. Unlike the central and backward difference methods, which use information from both sides or from preceding points, respectively, the forward difference method relies solely on the function's value at the target point and at points ahead of it.
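
The first-order version of the scheme is:

$$f'(x) \approx \frac{f(x + h) - f(x)}{h}, \qquad \text{with truncation error } O(h).$$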

Bisection Method

The bisection method is a classical root-finding technique used extensively in numerical analysis to locate a root of a continuous function $f(x)$ within a specified interval $[a, b]$. It belongs to the family of bracketing methods, which use intervals known to contain a root and systematically reduce their width until the root is isolated to the desired accuracy.
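
A minimal sketch, halving the sign-changing bracket each pass:

```python
def bisection(f, a, b, tol=1e-10):
    """Halve an interval on which f changes sign until the bracket is tight."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:   # root lies in the left half
            b = m
        else:                  # root lies in the right half
            a = m
    return (a + b) / 2

print(bisection(lambda x: x**2 - 2, 0.0, 2.0))  # ~1.41421356
```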

Differentiation

Differentiation is a cornerstone concept in calculus, fundamental to understanding how quantities change in relation to one another. At its core, differentiation is used to determine the rate at which a particular quantity is changing at a specific point. This rate of change is quantitatively expressed by the derivative.
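
The derivative is defined by the familiar limit, which the finite-difference entries in this collection approximate with a small but nonzero $h$:

$$f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}$$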

LU Decomposition

LU Decomposition (or LU Factorization) is a powerful and widely used technique in numerical linear algebra for solving systems of linear equations, computing inverses, and determining determinants. The core idea is to factorize a given square matrix $A$ into the product of a lower-triangular matrix $L$ and an upper-triangular matrix $U$, so that $A = LU$ (with a permutation matrix $P$ when pivoting is used).
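
A quick illustration with SciPy's pivoted factorization (the matrix is arbitrary):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0], [6.0, 3.0]])
P, L, U = lu(A)                    # partial pivoting: A = P @ L @ U
print(np.allclose(P @ L @ U, A))   # True
```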

Relaxation Method

The relaxation method, commonly referred to as the fixed-point iteration method, is an iterative approach used to find solutions (roots) to nonlinear equations of the form $f(x) = 0$. Instead of directly solving for the root, the method involves rewriting the original equation in the equivalent form $x = g(x)$ and iterating $x_{n+1} = g(x_n)$ from an initial guess.
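
A minimal sketch; the rewrite $g$ shown here is one convergent choice for $x^2 - 2 = 0$, not the only one:

```python
def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Iterate x <- g(x) until successive values agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solve x^2 - 2 = 0 rewritten as x = g(x) = (x + 2/x) / 2.
print(fixed_point(lambda x: (x + 2 / x) / 2, 1.0))  # ~1.41421356
```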

Linear Interpolation

Linear interpolation is one of the most basic and commonly used interpolation methods. The idea is to approximate the value of a function between two known data points by assuming that the function behaves linearly (like a straight line) between these points. Although this assumption may be simplistic, it is often accurate enough when the known points are closely spaced.
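
Between two known points $(x_0, y_0)$ and $(x_1, y_1)$, the interpolated value is:

$$f(x) \approx y_0 + (x - x_0)\,\frac{y_1 - y_0}{x_1 - x_0}, \qquad x_0 \le x \le x_1.$$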

Root Finding

Root-finding algorithms aim to solve equations of the form $f(x) = 0$ for an unknown $x$.

Picard's Method

Picard's method, alternatively known as the method of successive approximations, is a tool primarily used for solving initial-value problems for first-order ordinary differential equations (ODEs). The approach hinges on an iterative process that approximates the solution of an ODE. Though this method is valued mainly for its theoretical role in establishing the existence and uniqueness of solutions, it illustrates the idea of successive approximation clearly.
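
In the usual formulation, for $y' = f(x, y)$ with $y(x_0) = y_0$, each iterate is obtained by integrating the previous one:

$$y_{n+1}(x) = y_0 + \int_{x_0}^{x} f\bigl(t,\, y_n(t)\bigr)\,dt, \qquad y_0(x) \equiv y_0.$$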

Ordinary Differential Equations

An ordinary differential equation (ODE) is an equation that involves derivatives of an unknown function of a single independent variable, such as $y'(t) = f(t, y(t))$.

Backward Difference

The backward difference method is a finite difference technique employed to approximate the derivatives of functions. Unlike the forward difference method, which uses information from points ahead of the target point, the backward difference method relies on function values from points preceding the target point.
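
The first-order version of the scheme is:

$$f'(x) \approx \frac{f(x) - f(x - h)}{h}, \qquad \text{with truncation error } O(h).$$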

Monte Carlo

Monte Carlo integration is a numerical technique for approximating integrals using randomness. Rather than systematically sampling a function at predetermined points, as done in methods like the trapezoidal rule or Simpson's rule, Monte Carlo methods rely on random samples drawn from a prescribed domain.
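
A minimal one-dimensional sketch: the integral is estimated as $(b - a)$ times the sample mean of $f$ at uniform random points:

```python
import random

def mc_integrate(f, a, b, n=100_000):
    """Estimate the integral of f over [a, b] from n uniform random samples."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

print(mc_integrate(lambda x: x**2, 0.0, 1.0))  # ~0.333, error shrinks like 1/sqrt(n)
```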

Gaussian Elimination

Gaussian elimination is a fundamental algorithmic procedure in linear algebra used to solve systems of linear equations, find matrix inverses, and determine the rank of matrices. The procedure systematically applies elementary row operations to transform a given matrix into an upper-triangular form, from which the solution is recovered by back substitution.
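
A sketch with partial pivoting followed by back substitution (helper name is my own):

```python
def gaussian_elimination(A, b):
    """Forward-eliminate to upper-triangular form, then back-substitute."""
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n):
        # Partial pivoting: bring the largest pivot into row k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

print(gaussian_elimination([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # [1.0, 3.0]
```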

Cubic Spline Interpolation

Cubic spline interpolation is a refined mathematical tool frequently used within numerical analysis. It's an approximation technique that employs piecewise cubic polynomials, collectively forming a cubic spline. These cubic polynomials are specifically engineered to pass through a defined set of data points while keeping the overall curve smooth across the joins.
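
A quick illustration using SciPy's spline constructor (the sample data is arbitrary):

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.0, 1.0])
spline = CubicSpline(x, y)   # piecewise cubic, twice continuously differentiable
print(spline(1.5))           # smooth value between the knots
```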

Secant Method

The Secant Method is a root-finding algorithm used in numerical analysis to approximate the zeros of a given function $f(x)$. It can be regarded as a derivative-free variant of Newton's method. Instead of computing the derivative $f'(x)$ at each iteration (as done in Newton's method), it approximates the derivative with the slope of the secant line through the two most recent iterates.
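
A minimal sketch, replacing $f'(x)$ in Newton's update with a finite-difference slope:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Iterate using the secant line through the two most recent points."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

print(secant(lambda x: x**2 - 2, 1.0, 2.0))  # ~1.41421356
```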

Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors are foundational concepts in linear algebra, with extensive applications across various domains such as physics, computer graphics, and machine learning. These concepts are instrumental in decomposing complex matrix transformations, thereby simplifying numerical computations.
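
The defining relation: a nonzero vector $\mathbf{v}$ is an eigenvector of a square matrix $A$, with eigenvalue $\lambda$, when

$$A\mathbf{v} = \lambda\mathbf{v}, \qquad \mathbf{v} \neq \mathbf{0}.$$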

Euler's Method

Euler's Method is a numerical technique applied in the realm of initial value problems for ordinary differential equations (ODEs). The simplicity of this method makes it a popular choice in cases where the differential equation lacks a closed-form solution. The method might not always provide the most accurate results, but its transparency makes it an excellent starting point for studying numerical ODE solvers.
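
A minimal sketch of the update $y \leftarrow y + h\,f(t, y)$:

```python
def euler(f, t0, y0, h, steps):
    """Advance y' = f(t, y) from (t0, y0) using fixed-size Euler steps."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

# y' = y, y(0) = 1: ten steps of h = 0.1 give ~2.5937 vs e ~ 2.71828.
print(euler(lambda t, y: y, 0.0, 1.0, 0.1, 10))
```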