Numerical Methods

Inverse Matrix

The inverse of a matrix $A$ is denoted $A^{-1}$. It is the unique matrix such that when it is multiplied by the original matrix $A$, the result is the identity matrix $I$. Mathematically, this is expressed as $A A^{-1} = A^{-1} A = I$...
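
A minimal NumPy check of this defining property (the example matrix here is our own, chosen to be invertible):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

# np.linalg.inv raises LinAlgError if A is singular (non-invertible).
A_inv = np.linalg.inv(A)

# Multiplying A by its inverse recovers the identity, up to rounding error.
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```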

Thin Plate Spline Interpolation

Thin Plate Spline (TPS) Interpolation is a non-parametric, spline-based method for interpolating scattered data in two or more dimensions. Originally arising in the context of fitting a smooth surface through a set of points in $\mathbb{R}^2$, thin plate splines can be generalized to higher dimensions...
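
As a sketch, SciPy's `RBFInterpolator` exposes the thin plate spline as a radial basis kernel (the sample data below is illustrative):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(20, 2))   # scattered sites in R^2
values = np.sin(points[:, 0]) * points[:, 1]   # surface samples at those sites

# kernel="thin_plate_spline" selects the r^2 * log(r) basis function.
tps = RBFInterpolator(points, values, kernel="thin_plate_spline")
print(tps(np.array([[0.5, 0.5]])))             # interpolated value at a new point
```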

Partial Differential Equations

A partial differential equation (PDE) is an equation that involves partial derivatives of an unknown function of two or more independent variables...

Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors are foundational concepts in linear algebra, with extensive applications across various domains such as physics, computer graphics, and machine learning. These concepts are instrumental in decomposing complex matrix transformations, thereby simplifying numerical computations...
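
A quick NumPy illustration of the defining relation $A\mathbf{x} = \lambda\mathbf{x}$ (the matrix is arbitrary):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])

# eig returns eigenvalues w and a matrix v whose columns are eigenvectors.
w, v = np.linalg.eig(A)
for lam, x in zip(w, v.T):
    print(lam, np.allclose(A @ x, lam * x))  # True for each pair
```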

Midpoint Rule

The Midpoint Rule is a robust numerical method for approximating definite integrals. It seeks to estimate the area under a curve by partitioning it into a collection of rectangles and then summing the areas of these rectangles. This method is particularly useful when an antiderivative of the function...
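
A minimal sketch of the composite rule in Python (the helper name `midpoint_rule` and the test integrand are ours, for illustration):

```python
def midpoint_rule(f, a, b, n):
    """Approximate the integral of f over [a, b] with n midpoint rectangles."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Integral of x^2 over [0, 1] is exactly 1/3.
print(midpoint_rule(lambda x: x**2, 0.0, 1.0, 100))  # ~0.3333250
```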

Trapezoidal Rule

The Trapezoidal Rule is a fundamental numerical integration technique employed to approximate definite integrals, especially when an exact antiderivative of the function is difficult or impossible to determine analytically. This method is widely used in various fields such as engineering, physics, and...
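
One way the composite rule might look in Python (names and test data are illustrative):

```python
def trapezoidal_rule(f, a, b, n):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    total += sum(f(a + i * h) for i in range(1, n))
    return h * total

# Integral of x^2 over [0, 1] is exactly 1/3.
print(trapezoidal_rule(lambda x: x**2, 0.0, 1.0, 100))  # ~0.3333350
```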

Forward Difference

The forward difference method is a fundamental finite difference technique utilized for approximating the derivatives of functions. Unlike the central and backward difference methods, which use information from both sides or preceding points, respectively, the forward difference method relies solely...
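
In code, the one-sided quotient $(f(x+h) - f(x))/h$ is a one-liner (the step size `h` here is an illustrative choice):

```python
import math

def forward_difference(f, x, h=1e-6):
    """First-derivative estimate using only the point ahead of x."""
    return (f(x + h) - f(x)) / h

print(forward_difference(math.sin, 0.0))  # ~1.0, since d/dx sin(x) at 0 is cos(0) = 1
```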

LU Decomposition

LU Decomposition (or LU Factorization) is a powerful and widely used technique in numerical linear algebra for solving systems of linear equations, computing inverses, and determining determinants. The core idea is to factorize a given square matrix $A$ into the product of a lower-triangular matrix $L$ and an upper-triangular matrix $U$...
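
A short sketch using SciPy's `lu`, which also returns a permutation matrix to account for row pivoting (the example matrix is ours):

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[ 2.0,  1.0, 1.0],
              [ 4.0, -6.0, 0.0],
              [-2.0,  7.0, 2.0]])

# P is a permutation matrix; the factorization satisfies A = P @ L @ U.
P, L, U = lu(A)
print(np.allclose(A, P @ L @ U))  # True
```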

Newton Polynomial

Newton's Polynomial, often referred to as Newton's Interpolation Formula, is another classical approach to polynomial interpolation. Given a set of data points $(x_0,y_0),(x_1,y_1),\dots,(x_n,y_n)$ with distinct $x_i$ values, Newton's method constructs an interpolating polynomial in a form that makes...
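
A compact divided-difference sketch (function names and the sample data are our own):

```python
import numpy as np

def newton_coefficients(xs, ys):
    """Divided-difference coefficients of Newton's interpolating polynomial."""
    xs = np.asarray(xs, dtype=float)
    coef = np.array(ys, dtype=float)
    n = len(xs)
    for j in range(1, n):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (xs[j:] - xs[:n - j])
    return coef

def newton_eval(coef, xs, t):
    """Evaluate the Newton form with Horner-style nesting."""
    result = coef[-1]
    for c, xk in zip(coef[-2::-1], xs[-2::-1]):
        result = result * (t - xk) + c
    return result

xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 7.0]                  # samples of t^2 + t + 1
coef = newton_coefficients(xs, ys)
print(newton_eval(coef, xs, 1.5))     # 4.75
```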

Gradient Descent

Gradient Descent is a fundamental first-order optimization algorithm widely used in mathematics, statistics, machine learning, and artificial intelligence. Its principal aim is to find the minimum of a given differentiable function $f(x)$. Instead of searching blindly, it uses gradient information...
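
A bare-bones sketch for a one-dimensional function (the learning rate and step count are arbitrary choices):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a differentiable function, given its gradient, by stepping downhill."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)     # move against the gradient direction
    return x

# f(x) = (x - 3)^2 has gradient 2(x - 3) and its minimum at x = 3.
print(gradient_descent(lambda x: 2 * (x - 3), x0=0.0))  # ~3.0
```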

Picard's Method

Picard's method, alternatively known as the method of successive approximations, is a tool primarily used for solving initial-value problems for first-order ordinary differential equations (ODEs). The approach hinges on an iterative process that approximates the solution of an ODE. Though this method...
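
A symbolic sketch with SymPy, applying $y_{n+1}(t) = y_0 + \int_0^t f(s, y_n(s))\,ds$ to the example problem $y' = y$, $y(0) = 1$ (our choice of test case):

```python
import sympy as sp

t, s = sp.symbols("t s")

y = sp.Integer(1)                                  # y_0(t) = 1
for _ in range(4):                                 # four Picard iterations
    y = 1 + sp.integrate(y.subs(t, s), (s, 0, t))

print(sp.expand(y))  # 1 + t + t**2/2 + t**3/6 + t**4/24, a partial sum of exp(t)
```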

Cubic Spline Interpolation

Cubic spline interpolation is a refined mathematical tool frequently used within numerical analysis. It's an approximation technique that employs piecewise cubic polynomials, collectively forming a cubic spline. These cubic polynomials are specifically engineered to pass through a defined set of data...
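
For instance, SciPy's `CubicSpline` builds such a piecewise cubic with continuous first and second derivatives (the sample data is illustrative):

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)                  # values to interpolate

cs = CubicSpline(x, y)
print(cs(1.5), np.sin(1.5))    # spline estimate vs. the true function
```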

Gauss-Seidel

The Gauss-Seidel method is a classical iterative method for solving systems of linear equations of the form $A\mathbf{x} = \mathbf{b}$, where $A$ is an $n \times n$ matrix, $\mathbf{x}$ is the vector of unknowns $(x_1, x_2, \ldots, x_n)$, and $\mathbf{b}$ is a known vector. Unlike direct methods such...
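
A NumPy sketch of the sweep, in which each component update immediately reuses the newest values (names, tolerances, and the example system are ours):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Iteratively solve Ax = b, reusing freshly updated components within a sweep."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# A diagonally dominant system, which guarantees convergence.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))
```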

Golden Ratio Search

The Golden Ratio Search is a technique employed for locating the extremum (minimum or maximum) of a unimodal function over a given interval. Unlike gradient-based or derivative-requiring methods, this approach uses only function evaluations, making it broadly applicable even when derivatives are difficult...
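
A sketch of the interval-shrinking loop (this version re-evaluates both interior points each pass for clarity, at the cost of extra function calls):

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Locate the minimizer of a unimodal f on [a, b] using only f-evaluations."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c                      # minimum lies in [a, d]
            c = b - invphi * (b - a)
        else:
            a, c = c, d                      # minimum lies in [c, b]
            d = a + invphi * (b - a)
    return (a + b) / 2

print(golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # ~2.0
```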

Jacobi Method

The Jacobi method is a classical iterative algorithm used to approximate the solution of a system of linear equations $A\mathbf{x} = \mathbf{b}$. Instead of attempting to solve the system directly using methods such as Gaussian elimination, the Jacobi method iteratively refines an initial guess for the solution...
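
In matrix form the update is $\mathbf{x}^{(k+1)} = D^{-1}(\mathbf{b} - R\,\mathbf{x}^{(k)})$, with $D$ the diagonal of $A$ and $R$ the rest; a minimal sketch (the example system is ours):

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Solve Ax = b iteratively; every update uses only the previous iterate."""
    D = np.diag(A)
    R = A - np.diagflat(D)                  # off-diagonal part of A
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi(A, b), np.linalg.solve(A, b))
```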

Regression

Regression analysis and curve fitting are critical methods in statistical analysis and machine learning. Both aim to find a function that best approximates a set of data points, yet their typical applications may vary slightly. They are particularly useful in understanding relationships among variables...
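
As a small illustration, a straight-line fit with NumPy (synthetic data, seeded for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)  # noisy line y = 2x + 1

# Degree-1 polynomial fit; ordinary least squares under the hood.
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)   # close to 2 and 1
```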

Power Method

The power method is a fundamental iterative algorithm for estimating the eigenvalue of largest magnitude and its associated eigenvector for a given matrix. This technique is particularly appealing when dealing with large and sparse matrices, where direct eigenvalue computations (e.g., via the characteristic polynomial)...
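
A sketch of the iteration with renormalization at each step (the iteration count and example matrix are arbitrary):

```python
import numpy as np

def power_method(A, num_iters=1000, seed=0):
    """Estimate the dominant eigenpair by repeated matrix-vector products."""
    x = np.random.default_rng(seed).normal(size=A.shape[0])
    for _ in range(num_iters):
        x = A @ x
        x /= np.linalg.norm(x)       # renormalize to avoid overflow/underflow
    eigenvalue = x @ A @ x           # Rayleigh quotient for the estimate
    return eigenvalue, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(power_method(A)[0], max(np.linalg.eigvals(A)))  # both ~3.618
```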

Differentiation

Differentiation is a cornerstone concept in calculus, fundamental to understanding how quantities change in relation to one another. At its core, differentiation is used to determine the rate at which a particular quantity is changing at a specific point. This rate of change is quantitatively expressed...
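
For a concrete taste, SymPy can produce the exact derivative of a formula and evaluate the rate of change at a point (the function is our own example):

```python
import sympy as sp

x = sp.symbols("x")
f = sp.sin(x) * x**2

df = sp.diff(f, x)          # exact symbolic derivative
print(df)                   # x**2*cos(x) + 2*x*sin(x)
print(df.subs(x, 1.0))      # the rate of change of f at x = 1
```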

Interpolation

Interpolation is a method of constructing new data points within the range of a discrete set of known data points. It plays a crucial role in data analysis by helping to predict unknown values for any point within the given range...
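
For example, NumPy's `np.interp` fills in a value between known samples (the data points are illustrative):

```python
import numpy as np

x_known = np.array([0.0, 1.0, 2.0, 3.0])
y_known = np.array([0.0, 0.8, 0.9, 0.1])

# Piecewise-linear estimate at a point inside the known range.
print(np.interp(1.5, x_known, y_known))  # 0.85
```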

Systems of Equations

A linear system of equations is a collection of one or more linear equations involving the same set of variables. Such systems arise in diverse areas such as engineering, economics, physics, and computer science. The overarching goal is to find values of the variables that simultaneously satisfy all of the equations...
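
A small example solved directly with NumPy (the system is our own):

```python
import numpy as np

# 2x + y  = 5
#  x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)   # LAPACK-backed direct solver
print(x)                    # [1. 3.]
```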

Taylor Series

The Taylor series is a fundamental tool in calculus and mathematical analysis, offering a powerful way to represent and approximate functions. By expanding a function around a specific point, known as the "center" or "point of expansion," we can express it as an infinite sum of polynomial terms derived...
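
A tiny sketch, summing the series for $e^x$ centered at 0 (the term count is arbitrary):

```python
import math

def exp_taylor(x, n_terms=10):
    """Partial sum of the Taylor series of e^x centered at 0."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

print(exp_taylor(1.0), math.exp(1.0))  # 2.7182815..., vs. 2.7182818...
```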

Euler's Method

Euler's Method is a numerical technique applied in the realm of initial value problems for ordinary differential equations (ODEs). The simplicity of this method makes it a popular choice in cases where the differential equation lacks a closed-form solution. The method might not always provide the most...
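
The whole method is one tangent-line step per iteration (the step size and test problem are our choices):

```python
def euler(f, t0, y0, h, n_steps):
    """Integrate y' = f(t, y) from (t0, y0) with a fixed step size h."""
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)        # follow the local tangent line
        t += h
    return y

# y' = y with y(0) = 1 has exact solution e^t; approximate y(1).
print(euler(lambda t, y: y, 0.0, 1.0, h=0.01, n_steps=100))  # ~2.7048 vs e ~ 2.7183
```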

QR Method

The QR method is a widely used algorithm in numerical linear algebra for determining the eigenvalues of a given square matrix. Unlike direct methods such as solving the characteristic polynomial, which can be complicated and unstable numerically for large matrices, the QR method leverages iterative ...
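
A sketch of the unshifted iteration (practical implementations add shifts and a Hessenberg reduction first; our example matrix is symmetric, so the iterates converge to a diagonal):

```python
import numpy as np

def qr_eigenvalues(A, num_iters=200):
    """Unshifted QR iteration: factor A_k = QR, then set A_{k+1} = RQ."""
    Ak = A.astype(float).copy()
    for _ in range(num_iters):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q               # similar to A_k, so eigenvalues are preserved
    return np.sort(np.diag(Ak))

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(qr_eigenvalues(A), np.sort(np.linalg.eigvals(A)))
```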

Linear Interpolation

Linear interpolation is one of the most basic and commonly used interpolation methods. The idea is to approximate the value of a function between two known data points by assuming that the function behaves linearly (like a straight line) between these points. Although this assumption may be simplistic...
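
The two-point formula in code (the names are illustrative):

```python
def lerp(x0, y0, x1, y1, x):
    """Value at x on the straight line through (x0, y0) and (x1, y1)."""
    t = (x - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

print(lerp(1.0, 2.0, 3.0, 10.0, 2.0))  # 6.0, halfway between the two y-values
```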

Matrix Methods

Matrices are often described as rectangular arrays of numbers organized into rows and columns, and they form the bedrock of numerous processes in numerical methods. People use them for solving systems of linear equations, transforming geometric data, and carrying out many algorithmic tasks that lie ...
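
As one example of transforming geometric data, a 2-D rotation is just a matrix-vector product:

```python
import numpy as np

theta = np.pi / 2                                  # rotate by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

p = np.array([1.0, 0.0])
print(R @ p)                                       # ~[0., 1.]
```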

Gaussian Interpolation

Gaussian Interpolation, often associated with Gauss's forward and backward interpolation formulas, is a technique that refines the approach of polynomial interpolation when data points are equally spaced. Instead of using the Newton forward or backward interpolation formulas directly from one end of...

Heun's Method

Heun's method is an improved version of Euler's method that enhances accuracy by using an average of the slope at the beginning and the predicted slope at the end of the interval...
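
A sketch of the predictor-corrector step (the test problem and step size are our own):

```python
def heun(f, t0, y0, h, n_steps):
    """Heun's method: average the starting slope and the predicted end slope."""
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)                   # slope at the beginning of the interval
        k2 = f(t + h, y + h * k1)      # slope at the Euler-predicted endpoint
        y += h * (k1 + k2) / 2
        t += h
    return y

# y' = y with y(0) = 1; approximate y(1) = e.
print(heun(lambda t, y: y, 0.0, 1.0, h=0.01, n_steps=100))  # ~2.71828
```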

Simpson's Rule

Simpson's Rule is a powerful technique in numerical integration, utilized for approximating definite integrals when an exact antiderivative of the function is difficult or impossible to determine analytically. This method enhances the accuracy of integral approximations by modeling the region under ...
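
A composite-rule sketch (the even-`n` requirement comes from pairing adjacent subintervals under each parabola):

```python
def simpsons_rule(f, a, b, n):
    """Composite Simpson's rule; n (the number of subintervals) must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd nodes
    total += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # even interior nodes
    return total * h / 3

print(simpsons_rule(lambda x: x**2, 0.0, 1.0, 10))  # 1/3, exact up to cubic integrands
```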

Relaxation Method

The relaxation method, commonly referred to as the fixed-point iteration method, is an iterative approach used to find solutions (roots) to nonlinear equations of the form $f(x) = 0$. Instead of directly solving for the root, the method involves rewriting the original equation in the form $x = g(x)$...
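
A fixed-point loop for $x = g(x)$ (the tolerances are our choices; convergence requires $|g'(x)| < 1$ near the solution):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Iterate x <- g(x) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Solving x = cos(x); the fixed point ~0.739085 is the root of cos(x) - x = 0.
print(fixed_point(math.cos, 1.0))
```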

Singular Value Decomposition

Singular Value Decomposition (SVD) is a fundamental matrix decomposition technique widely used in numerous areas of science, engineering, and data analysis. Unlike the Eigenvalue Decomposition (EVD), which is restricted to square and diagonalizable matrices, SVD applies to any rectangular matrix. It...
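
A quick NumPy demonstration on a non-square matrix (chosen arbitrarily):

```python
import numpy as np

A = np.array([[ 3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])          # a 2x3 matrix

# A = U @ diag(s) @ Vt, with singular values s sorted in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(s)
print(np.allclose(A, U @ np.diag(s) @ Vt))  # True
```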

Lagrange Polynomial Interpolation

Lagrange Polynomial Interpolation is a widely used technique for determining a polynomial that passes exactly through a given set of data points. Suppose we have a set of $(n+1)$ data points $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$ where all $x_i$ are distinct. The aim is to find a polynomial $L(x)$...
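
A direct evaluation of the basis-polynomial sum (the data and names are illustrative):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        li = 1.0                       # basis l_i: 1 at xs[i], 0 at other nodes
        for j in range(n):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * li
    return total

xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 7.0]                       # samples of x^2 + x + 1
print(lagrange_interpolate(xs, ys, 1.5))   # 4.75
```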

Secant Method

The Secant Method is a root-finding algorithm used in numerical analysis to approximate the zeros of a given function $f(x)$. It can be regarded as a derivative-free variant of Newton's method. Instead of computing the derivative $f'(x)$ at each iteration (as done in Newton's method), it approximates...
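
A sketch of the update $x_{k+1} = x_k - f(x_k)\,\dfrac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})}$ (the starting guesses are arbitrary):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=100):
    """Root finding with a finite-difference slope in place of f'(x)."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # x-intercept of the secant line
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# sqrt(2) as the positive root of x^2 - 2.
print(secant(lambda x: x * x - 2, 1.0, 2.0))  # ~1.41421356
```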

Bisection Method

The bisection method is a classical root-finding technique used extensively in numerical analysis to locate a root of a continuous function $f(x)$ within a specified interval $[a, b]$. It belongs to the family of bracketing methods, which use intervals known to contain a root and systematically reduce...
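
A minimal sketch of the halving loop (the sign test is what keeps the root bracketed):

```python
def bisection(f, a, b, tol=1e-10):
    """Shrink a bracketing interval [a, b], with f(a), f(b) of opposite sign, by halving."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m                # the root lies in the left half
        else:
            a = m                # the root lies in the right half
    return (a + b) / 2

print(bisection(lambda x: x * x - 2, 0.0, 2.0))  # ~1.41421356
```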

Root Finding

Root-finding algorithms aim to solve equations of the form $f(x) = 0$...

Monte Carlo

Monte Carlo integration is a numerical technique for approximating integrals using randomness. Rather than systematically sampling a function at predetermined points, as done in methods like the trapezoidal rule or Simpson's rule, Monte Carlo methods rely on random samples drawn from a prescribed domain...
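
For a one-dimensional integral, the estimator is $(b-a)$ times the average of $f$ at uniform random points (the sample count and seed are arbitrary):

```python
import numpy as np

def monte_carlo_integrate(f, a, b, n_samples=100_000, seed=0):
    """Estimate the integral of f over [a, b] from uniform random samples."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(a, b, n_samples)
    return (b - a) * np.mean(f(x))

# Integral of x^2 over [0, 1] is 1/3; the error shrinks like 1/sqrt(n).
print(monte_carlo_integrate(lambda x: x**2, 0.0, 1.0))
```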

Integration Introduction

$$\int_{1}^{2} x^2 \, dx \approx \sum_{i=1}^{10} h \cdot f(1 + 0.1i), \qquad h = 0.1$$
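
The sum above, written out as code (it is a right-endpoint Riemann sum with $h = 0.1$):

```python
h = 0.1
f = lambda x: x**2

# Right-endpoint Riemann sum for the integral of x^2 over [1, 2].
approx = sum(h * f(1 + 0.1 * i) for i in range(1, 11))
print(approx)            # 2.485
print(2**3 / 3 - 1 / 3)  # exact value 7/3 ~ 2.3333
```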

Central Difference

The central difference method is a finite difference method used for approximating derivatives. It utilizes the forward difference, backward difference, and the principles of Taylor series expansion to derive a more accurate approximation of derivatives. This method is particularly valuable in numerical...
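
The symmetric quotient in code; its error is $O(h^2)$, versus $O(h)$ for the one-sided differences:

```python
import math

def central_difference(f, x, h=1e-5):
    """Symmetric first-derivative estimate: (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

print(central_difference(math.exp, 0.0))  # ~1.0, the exact derivative of e^x at 0
```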

Runge-Kutta

The Runge-Kutta method is part of a family of iterative methods, both implicit and explicit, which are frequently employed for the numerical integration of ordinary differential equations (ODEs). This family encompasses widely recognized methods like the Euler Method, Heun's method (a.k.a. the 2nd-order Runge-Kutta method)...
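
A sketch of the classic 4th-order member of the family (the test problem is ours):

```python
def rk4(f, t0, y0, h, n_steps):
    """Classic 4th-order Runge-Kutta for y' = f(t, y)."""
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# y' = y with y(0) = 1; y(1) should be e ~ 2.718281828.
print(rk4(lambda t, y: y, 0.0, 1.0, h=0.1, n_steps=10))
```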

Newton's Method

Newton's method (or the Newton-Raphson method) is a powerful root-finding algorithm that exploits both the value of a function and its first derivative to rapidly refine approximations to its roots. Unlike bracketing methods that work by enclosing a root between two points, Newton's method is an open...
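
The update $x_{k+1} = x_k - f(x_k)/f'(x_k)$ as a loop (the stopping rule is our choice):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: follow the tangent line to its x-intercept each step."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# sqrt(2) as the positive root of f(x) = x^2 - 2, with f'(x) = 2x.
print(newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))  # ~1.41421356
```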

Gaussian Elimination

Gaussian elimination is a fundamental algorithmic procedure in linear algebra used to solve systems of linear equations, find matrix inverses, and determine the rank of matrices. The procedure systematically applies elementary row operations to transform a given matrix into an upper-triangular form ...
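
A sketch of forward elimination with partial pivoting followed by back-substitution (NumPy is used only for array bookkeeping; the example system is ours):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b: eliminate below the diagonal, then back-substitute."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))          # partial-pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]].copy(), b[[p, k]].copy()
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                   # back-substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([5.0, 10.0])
print(gaussian_elimination(A, b))  # [1. 3.]
```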

Ordinary Differential Equations

An ordinary differential equation (ODE) is an equation that involves functions of a single independent variable and their derivatives...

Backward Difference

The backward difference method is a finite difference technique employed to approximate the derivatives of functions. Unlike the forward difference method, which uses information from points ahead of the target point, the backward difference method relies on function values from points preceding the...
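
The mirror image of the forward quotient (the step size is illustrative):

```python
import math

def backward_difference(f, x, h=1e-6):
    """First-derivative estimate using only the point behind x."""
    return (f(x) - f(x - h)) / h

print(backward_difference(math.sin, 0.0))  # ~1.0, since cos(0) = 1
```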

Least Squares

Least Squares Regression is a fundamental technique in statistical modeling and data analysis used for fitting a model to observed data. The primary goal is to find a set of parameters that minimize the discrepancies (residuals) between the model's predictions and the actual observed data. The "least...
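
A linear fit via `np.linalg.lstsq` (the design matrix and data are illustrative):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])

# Fit y = c0 + c1*x: the columns of the design matrix are [1, x].
A = np.column_stack([np.ones_like(x), x])
coeffs, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)   # intercept and slope minimizing the sum of squared residuals
```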

Eigenvalue Decomposition

Eigenvalue Decomposition (EVD), also known as Eigendecomposition, is a fundamental operation in linear algebra that breaks down a square matrix into a simpler form defined by its eigenvalues and eigenvectors. This decomposition provides deep insights into the properties and structure of a matrix, enabling...
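
A NumPy check that a diagonalizable matrix is recovered from its eigenpairs (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# A = V @ diag(w) @ inv(V), where the columns of V are eigenvectors.
w, V = np.linalg.eig(A)
print(np.allclose(A, V @ np.diag(w) @ np.linalg.inv(V)))  # True
```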