Numerical Methods

Gaussian Elimination

Gaussian elimination is a fundamental algorithmic procedure in linear algebra used to solve systems of linear equations, find matrix inverses, and determine the rank of matrices. The procedure systematically applies elementary row operations to transform a given matrix into an upper-triangular form ...
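
As a rough illustration of the procedure (not tied to any particular library routine), here is a minimal NumPy sketch with partial pivoting; the matrix `A` and vector `b` are made-up example values:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b: forward elimination with partial pivoting,
    then back substitution. A is n x n, b has length n."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))        # choose pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]                  # elimination multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gaussian_elimination(A, b))   # matches np.linalg.solve(A, b)
```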

Gauss-Seidel

The Gauss-Seidel method is a classical iterative method for solving systems of linear equations of the form $A\mathbf{x} = \mathbf{b}$, where $A$ is an $n \times n$ matrix, $\mathbf{x}$ is the vector of unknowns $(x_1, x_2, \ldots, x_n)$, and $\mathbf{b}$ is a known vector. Unlike direct methods suc...
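
A minimal sketch of the iteration, assuming a strictly diagonally dominant system (the example `A` and `b` are made up for illustration):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Each sweep updates x_i in place, using the components
    already updated in this sweep plus the old remaining ones."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant example
b = np.array([1.0, 2.0])
print(gauss_seidel(A, b))
```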

Systems of Equations

Linear systems of equations can be represented in matrix form, which enables the use of a variety of numerical methods for solving them. ...
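
For instance, the made-up system $2x + y = 3$, $x + 3y = 5$ can be written as $A\mathbf{x} = \mathbf{b}$ and handed to a standard solver:

```python
import numpy as np

# The system 2x + y = 3, x + 3y = 5 in matrix form A @ x = b:
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = np.linalg.solve(A, b)   # LAPACK-backed direct solver
print(x)                    # [0.8 1.4]
```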

Inverse Matrix

The inverse of a matrix $A$ is denoted $A^{-1}$. It is the unique matrix such that when it is multiplied by the original matrix $A$, the result is the identity matrix $I$. Mathematically, this is expressed as...
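
A quick numerical check of this defining property, with a made-up invertible matrix:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])        # det = 10, so A is invertible
A_inv = np.linalg.inv(A)
print(A @ A_inv)                  # approximately the 2x2 identity I
```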

LU Decomposition

LU Decomposition (or LU Factorization) is a powerful and widely used technique in numerical linear algebra for solving systems of linear equations, computing inverses, and determining determinants. The core idea is to factorize a given square matrix $A$ into the product of a lower-triangular matrix ...
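
A compact Doolittle-style sketch of the factorization (no pivoting, so it assumes nonzero pivots; the example matrix is made up):

```python
import numpy as np

def lu_decompose(A):
    """Doolittle factorization A = L @ U, with L unit lower-triangular
    and U upper-triangular. Assumes no zero pivots are encountered."""
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        L[k+1:, k] = (A[k+1:, k] - L[k+1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
L, U = lu_decompose(A)
print(L @ U)   # reproduces A
```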

Jacobi Method

The Jacobi method is a classical iterative algorithm used to approximate the solution of a system of linear equations $A\mathbf{x} = \mathbf{b}$. Instead of attempting to solve the system directly using methods such as Gaussian elimination, the Jacobi method iteratively refines an initial guess for ...
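
A minimal sketch of the update rule, again on a made-up diagonally dominant system (note that, unlike Gauss-Seidel, every component is computed from the previous iterate):

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """x_new = D^{-1} (b - R @ x), where D is the diagonal of A
    and R = A - D. All components use the old iterate."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi(A, b))
```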

Cubic Spline Interpolation

Cubic spline interpolation is a refined mathematical tool frequently used within numerical analysis. It's an approximation technique that employs piecewise cubic polynomials, collectively forming a cubic spline. These cubic polynomials are specifically engineered to pass through a defined set of dat...
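
If SciPy is available, its `CubicSpline` class builds exactly this kind of piecewise cubic; a minimal sketch with made-up sample points:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)                  # sample data to interpolate
cs = CubicSpline(x, y)         # piecewise cubics, smooth at the knots
print(cs(1.5), np.sin(1.5))    # spline value vs. true value
```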

Linear Interpolation

Linear interpolation is one of the most basic and commonly used interpolation methods. The idea is to approximate the value of a function between two known data points by assuming that the function behaves linearly (like a straight line) between these points. Although this assumption may be simplist...
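
A one-line version via NumPy, with the underlying straight-line formula noted in a comment (the data points are made up):

```python
import numpy as np

# Between (x0, y0) and (x1, y1):
#   y = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
xp = np.array([0.0, 1.0, 2.0])     # known x-coordinates (sorted)
fp = np.array([0.0, 10.0, 20.0])   # known function values
print(np.interp(0.5, xp, fp))      # 5.0, from the segment [0, 1]
```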

Interpolation

Interpolation is a method of constructing new data points within the range of a discrete set of known data points. It plays a crucial role in data analysis by helping to predict unknown values for any point within the given range...

Gaussian Interpolation

Gaussian Interpolation, often associated with Gauss's forward and backward interpolation formulas, is a technique that refines the approach of polynomial interpolation when data points are equally spaced. Instead of using the Newton forward or backward interpolation formulas directly from one end of...

Thin Plate Spline Interpolation

Thin Plate Spline (TPS) Interpolation is a non-parametric, spline-based method for interpolating scattered data in two or more dimensions. Originally arising in the context of fitting a smooth surface through a set of points in $\mathbb{R}^2$, thin plate splines can be generalized to higher dimensio...
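
SciPy exposes this through `RBFInterpolator` with the `thin_plate_spline` kernel; a minimal sketch on made-up scattered 2-D data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, (50, 2))               # scattered 2-D sites
vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])   # values at those sites
tps = RBFInterpolator(pts, vals, kernel='thin_plate_spline')
print(tps(np.array([[0.5, 0.5]])))             # interpolated value
```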

Regression

Regression analysis and curve fitting are critical methods in statistical analysis and machine learning. Both aim to find a function that best approximates a set of data points, yet their typical applications may vary slightly. They are particularly useful in understanding relationships among variab...

Newton Polynomial

Newton's Polynomial, often referred to as Newton's Interpolation Formula, is another classical approach to polynomial interpolation. Given a set of data points $(x_0,y_0),(x_1,y_1),\dots,(x_n,y_n)$ with distinct $x_i$ values, Newton's method constructs an interpolating polynomial in a form that make...
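
A minimal sketch of the divided-difference construction and nested evaluation (the sample points are made up):

```python
import numpy as np

def newton_coeffs(x, y):
    """Divided-difference coefficients c_k of Newton's form
    p(t) = c0 + c1 (t - x0) + c2 (t - x0)(t - x1) + ..."""
    c = np.array(y, dtype=float)
    for k in range(1, len(x)):
        c[k:] = (c[k:] - c[k-1:-1]) / (x[k:] - x[:-k])
    return c

def newton_eval(c, x, t):
    """Evaluate with Horner-style nested multiplication."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (t - x[k]) + c[k]
    return p

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.0, 2.0])
print(newton_eval(newton_coeffs(x, y), x, 1.5))   # value at t = 1.5
```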

Least Squares

Least Squares Regression is a fundamental technique in statistical modeling and data analysis used for fitting a model to observed data. The primary goal is to find a set of parameters that minimize the discrepancies (residuals) between the model's predictions and the actual observed data. The "leas...
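
As a small sketch, fitting the line $y \approx mx + c$ to made-up noisy data by minimizing the sum of squared residuals:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([-0.1, 0.9, 2.2, 2.9])        # made-up noisy observations
A = np.vstack([x, np.ones_like(x)]).T      # design matrix [x, 1]
(m, c), *_ = np.linalg.lstsq(A, y, rcond=None)
print(m, c)                                # fitted slope and intercept
```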

Lagrange Polynomial Interpolation

Lagrange Polynomial Interpolation is a widely used technique for determining a polynomial that passes exactly through a given set of data points. Suppose we have a set of $(n+1)$ data points $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$ where all $x_i$ are distinct. The aim is to find a polynomial $L...
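
A direct sketch of the Lagrange form on made-up data points:

```python
def lagrange_interp(xs, ys, t):
    """Evaluate L(t) = sum_i y_i * l_i(t), where
    l_i(t) = prod_{j != i} (t - x_j) / (x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (t - xj) / (xi - xj)
        total += yi * li
    return total

xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 2.0]
print(lagrange_interp(xs, ys, 1.5))   # same polynomial as Newton's form
```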

Newton's Method

Newton's method (or the Newton-Raphson method) is a powerful root-finding algorithm that exploits both the value of a function and its first derivative to rapidly refine approximations to its roots. Unlike bracketing methods that work by enclosing a root between two points, Newton's method is an ope...
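
A minimal sketch of the iteration; the test function $f(x) = x^2 - 2$ and starting point are made-up examples:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/df(x) until |f(x)| is small."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

# Root of f(x) = x^2 - 2 starting near 1: converges to sqrt(2).
print(newton(lambda x: x*x - 2, lambda x: 2*x, 1.0))
```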

Bisection Method

The bisection method is a classical root-finding technique used extensively in numerical analysis to locate a root of a continuous function $f(x)$ within a specified interval $[a, b]$. It belongs to the family of bracketing methods, which use intervals known to contain a root and systematically redu...
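
A minimal sketch of the bracketing loop, again on the made-up example $f(x) = x^2 - 2$:

```python
def bisect(f, a, b, tol=1e-12):
    """Requires f(a) and f(b) of opposite signs; halves [a, b]
    until the bracket is shorter than tol."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m            # root lies in [a, m]
        else:
            a, fa = m, f(m)  # root lies in [m, b]
    return (a + b) / 2

print(bisect(lambda x: x*x - 2, 0.0, 2.0))   # ~1.41421356
```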

Secant Method

The Secant Method is a root-finding algorithm used in numerical analysis to approximate the zeros of a given function $f(x)$. It can be regarded as a derivative-free variant of Newton's method. Instead of computing the derivative $f'(x)$ at each iteration (as done in Newton's method), it approximate...
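
A minimal sketch; the secant slope through the two latest iterates replaces $f'(x)$ (test function and starting points are made up):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """x_{k+1} = x_k - f(x_k) (x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
    return x1

print(secant(lambda x: x*x - 2, 1.0, 2.0))   # ~1.41421356
```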

Gradient Descent

Gradient Descent is a fundamental first-order optimization algorithm widely used in mathematics, statistics, machine learning, and artificial intelligence. Its principal aim is to find the minimum of a given differentiable function $f(x)$. Instead of searching blindly, it uses gradient information...
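
A minimal sketch of the descent loop; the quadratic objective, learning rate, and starting point below are made-up illustrative choices:

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, tol=1e-10, max_iter=1000):
    """Step against the gradient: x <- x - lr * grad(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - lr * g
    return x

# Minimize f(x, y) = (x - 1)^2 + 2 y^2; gradient is (2(x - 1), 4y).
grad = lambda v: np.array([2 * (v[0] - 1), 4 * v[1]])
print(gradient_descent(grad, [5.0, 3.0]))   # approaches (1, 0)
```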

Root Finding

Root: A root of a function $f(x)$ is a value $x = r$ such that $f(r) = 0$. This applies not only to linear functions but can also extend to nonlinear systems, such as those including powers ($x^2, x^3, \ldots$), roots, radicals and non-integer powers ($\sqrt{x}, x^{3/5}, x^{\pi}, \ldots$), or tri...

Golden Ratio Search

The Golden Ratio Search is a technique employed for locating the extremum (minimum or maximum) of a unimodal function over a given interval. Unlike gradient-based or derivative-requiring methods, this approach uses only function evaluations, making it broadly applicable even when derivatives are dif...
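
A minimal sketch of golden-section search for a minimum; the unimodal test function and interval are made up:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Shrink [a, b] by the inverse golden ratio each step,
    reusing one interior function evaluation per iteration."""
    invphi = (math.sqrt(5) - 1) / 2          # ~0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while (b - a) > tol:
        if fc < fd:                          # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

print(golden_section_min(lambda x: (x - 2)**2, 0.0, 5.0))   # ~2.0
```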

Relaxation Method

The relaxation method, commonly referred to as the fixed-point iteration method, is an iterative approach used to find solutions (roots) to nonlinear equations of the form $f(x) = 0$. Instead of directly solving for the root, the method involves rewriting the original equation in the form...
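
A minimal sketch of the fixed-point loop; the rewrite $g(x) = (x + 2/x)/2$ of $x^2 - 2 = 0$ is a made-up example chosen so the iteration converges:

```python
def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x <- g(x); converges when |g'(x)| < 1 near the fixed point."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# x^2 - 2 = 0 rewritten as x = g(x) = (x + 2/x) / 2.
print(fixed_point(lambda x: (x + 2 / x) / 2, 1.0))   # ~1.41421356
```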

Taylor Series

The Taylor series is a fundamental tool in calculus and mathematical analysis, offering a powerful way to represent and approximate functions. By expanding a function around a specific point, known as the "center" or "point of expansion," we can express it as an infinite sum of polynomial terms deri...
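
For a concrete feel, a partial sum of the series for $e^x$ centered at 0 (the number of terms is an arbitrary example choice):

```python
import math

def exp_taylor(x, n_terms=15):
    """Partial sum of e^x = sum_{k>=0} x^k / k! centered at 0."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

print(exp_taylor(1.0), math.exp(1.0))   # partial sum vs. math.exp
```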

Central Difference

...

Backward Difference

The backward difference approximation of the first derivative of a function $f$ at a point $x$ with step size $h$ is given by...
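
A quick numerical check of the standard backward difference quotient $\frac{f(x) - f(x - h)}{h}$, shown next to the forward and central variants covered nearby (the test function $f(x) = x^3$ and step size are made-up examples):

```python
f = lambda x: x**3          # test function with known derivative 3x^2
x, h = 2.0, 1e-5

backward = (f(x) - f(x - h)) / h           # O(h) accurate
forward  = (f(x + h) - f(x)) / h           # O(h) accurate
central  = (f(x + h) - f(x - h)) / (2*h)   # O(h^2) accurate

print(backward, forward, central, 3 * x**2)
```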

Forward Difference

...

Differentiation

The classical definition of the derivative of a function $f(x)$ at a point $x_0$...

Integration Introduction

$$\int_{1}^{2} x^2 \, dx \approx \sum_{i=1}^{10} h \cdot f(1 + 0.1i)$$...

Trapezoidal Rule

The Trapezoidal Rule operates by approximating the region under the graph of the function as a trapezoid, then calculating its area...
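
A minimal sketch of the composite rule on equal subintervals (the integrand and interval are made-up test values with a known answer):

```python
import numpy as np

def trapezoid(f, a, b, n=100):
    """Composite trapezoidal rule:
    h * (f(a)/2 + f(x_1) + ... + f(x_{n-1}) + f(b)/2)."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    y = f(x)
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

print(trapezoid(np.sin, 0, np.pi))   # ~2.0 (exact value is 2)
```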

Simpson's Rule

The foundation of Simpson's Rule lies in the concept of estimating the integral of a function $f(x)$ over a specified interval $[a, b]$ by the area beneath a quadratic polynomial. This polynomial passes through the points $(a, f(a))$, $((a+b)/2, f((a+b)/2))$, and $(b, f(b))$...
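
A minimal composite version, applying that quadratic fit over pairs of subintervals (same made-up test integral as above):

```python
import numpy as np

def simpson(f, a, b, n=100):
    """Composite Simpson's rule; n must be even. Interior points
    alternate weights 4 and 2 around the endpoints."""
    if n % 2:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    y = f(x)
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

print(simpson(np.sin, 0, np.pi))   # very close to the exact value 2
```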

Monte Carlo

Monte Carlo integration is a numerical technique for approximating integrals using randomness. Rather than systematically sampling a function at predetermined points, as done in methods like the trapezoidal rule or Simpson's rule, Monte Carlo methods rely on random samples drawn from a prescribed do...
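
A minimal sketch estimating the same test integral from uniform samples; the sample count is an arbitrary example choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(0.0, np.pi, n)           # uniform samples on [0, pi]
estimate = np.pi * np.mean(np.sin(x))    # (b - a) * mean of f
print(estimate)                          # ~2.0; error shrinks like 1/sqrt(n)
```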

Midpoint Rule

For a function $f(x)$ defined over an interval $[a, b]$, the Midpoint Rule provides the following approximation for the integral...
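
A small sketch of the composite form, which sums $f$ at subinterval midpoints (same made-up test integral):

```python
import numpy as np

def midpoint(f, a, b, n=100):
    """Composite midpoint rule: h * sum of f at subinterval midpoints."""
    h = (b - a) / n
    mids = a + h * (np.arange(n) + 0.5)
    return h * f(mids).sum()

print(midpoint(np.sin, 0, np.pi))   # ~2.0
```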

Matrix Methods

$$A = LU$$...

Power Method

The power method is a fundamental iterative algorithm for estimating the eigenvalue of largest magnitude and its associated eigenvector for a given matrix. This technique is particularly appealing when dealing with large and sparse matrices, where direct eigenvalue computations (e.g., via the charac...
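
A minimal sketch of the iteration on a made-up symmetric matrix; the Rayleigh quotient recovers the dominant eigenvalue:

```python
import numpy as np

def power_method(A, n_iter=200, seed=0):
    """Repeatedly apply A and normalize; converges to the eigenvector
    of the eigenvalue with largest magnitude (if it is unique)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(n_iter):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ A @ v, v          # Rayleigh quotient, eigenvector

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_method(A)
print(lam)                       # dominant eigenvalue (~3.618)
```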

Eigenvalue Decomposition

Eigenvalue Decomposition (EVD), also known as Eigendecomposition, is a fundamental operation in linear algebra that breaks down a square matrix into a simpler form defined by its eigenvalues and eigenvectors. This decomposition provides deep insights into the properties and structure of a matrix, en...
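
A quick check of the decomposition $A = V \Lambda V^{-1}$ using NumPy, on the same made-up matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
w, V = np.linalg.eig(A)          # eigenvalues w, eigenvectors in V's columns
print(w)
print(V @ np.diag(w) @ np.linalg.inv(V))   # reconstructs A
```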

QR Method

The QR method is a widely used algorithm in numerical linear algebra for determining the eigenvalues of a given square matrix. Unlike direct methods such as solving the characteristic polynomial, which can be complicated and numerically unstable for large matrices, the QR method leverages iterative ...
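
A sketch of the basic unshifted iteration (practical implementations add shifts and deflation; the test matrix is made up):

```python
import numpy as np

def qr_eigenvalues(A, n_iter=500):
    """Unshifted QR iteration: A_{k+1} = R_k Q_k. For many matrices
    A_k tends toward (quasi-)triangular form, with eigenvalues
    appearing on the diagonal."""
    Ak = A.astype(float).copy()
    for _ in range(n_iter):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return np.diag(Ak)

A = np.array([[2.0, 1.0], [1.0, 3.0]])
print(qr_eigenvalues(A))        # ~[3.618, 1.382]
print(np.linalg.eigvals(A))     # reference values
```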

Singular Value Decomposition

Singular Value Decomposition (SVD) is a fundamental matrix decomposition technique widely used in numerous areas of science, engineering, and data analysis. Unlike the Eigenvalue Decomposition (EVD), which is restricted to square and diagonalizable matrices, SVD applies to any rectangular matrix. It...
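
A quick demonstration on a made-up rectangular matrix, verifying $A = U \Sigma V^T$:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])         # rectangular 2x3 matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(s)                                # singular values, descending
print(U @ np.diag(s) @ Vt)              # reconstructs A
```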

Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors are foundational concepts in linear algebra, with extensive applications across various domains such as physics, computer graphics, and machine learning. These concepts are instrumental in decomposing complex matrix transformations, thereby simplifying numerical computat...

Picard's Method

Picard's method, alternatively known as the method of successive approximations, is a tool primarily used for solving initial-value problems for first-order ordinary differential equations (ODEs). The approach hinges on an iterative process that approximates the solution of an ODE. Though this metho...
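
A symbolic sketch of the successive approximations using SymPy; the IVP $y' = y$, $y(0) = 1$ is a made-up example whose iterates reproduce the Taylor partial sums of $e^x$:

```python
import sympy as sp

x, t = sp.symbols('x t')

# Picard iteration: phi_{n+1}(x) = y0 + Integral(f(t, phi_n(t)), (t, 0, x))
phi = sp.Integer(1)                     # phi_0(x) = y(0) = 1
for _ in range(4):
    phi = 1 + sp.integrate(phi.subs(x, t), (t, 0, x))

print(sp.expand(phi))                   # 1 + x + x^2/2 + x^3/6 + x^4/24
```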

Euler's Method

Euler's Method is a numerical technique for solving initial value problems for ordinary differential equations (ODEs). Its simplicity makes it a popular choice when the differential equation lacks a closed-form solution. The method might not always provide the mo...
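
A minimal sketch of the forward step; the IVP $y' = y$, $y(0) = 1$ is a made-up test case with known solution $e^t$:

```python
def euler(f, t0, y0, t_end, n):
    """Advance y_{k+1} = y_k + h * f(t_k, y_k) with step h."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# Exact solution at t = 1 is e ~ 2.71828; Euler lands nearby.
print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 1000))
```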

Heun's Method

Heun's method is an improved version of Euler's method that enhances accuracy by using an average of the slope at the beginning and the predicted slope at the end of the interval...
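
A minimal sketch of the predictor-corrector step on the same made-up test problem:

```python
def heun(f, t0, y0, t_end, n):
    """Predict with Euler, then average the slopes at both ends:
    y_{k+1} = y_k + h/2 * (f(t_k, y_k) + f(t_k + h, y_pred))."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)   # slope at the predicted endpoint
        y += h / 2 * (k1 + k2)
        t += h
    return y

print(heun(lambda t, y: y, 0.0, 1.0, 1.0, 100))   # ~e = 2.71828
```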

Runge-Kutta

The Runge-Kutta method is part of a family of iterative methods, both implicit and explicit, which are frequently employed for the numerical integration of ordinary differential equations (ODEs). This family encompasses widely recognized methods like the Euler Method, Heun's method (a.k.a. the 2nd...
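
A minimal sketch of the classical 4th-order member of the family (RK4) on the same made-up test problem:

```python
def rk4(f, t0, y0, t_end, n):
    """Classical 4th-order Runge-Kutta: combine four slope samples
    per step with weights 1, 2, 2, 1."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

print(rk4(lambda t, y: y, 0.0, 1.0, 1.0, 10))   # ~e even with few steps
```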

Ordinary Differential Equations

Ordinary Differential Equations (ODEs) are equations that involve one independent variable and the derivatives of one dependent variable with respect to the independent variable. They are called "ordinary" to distinguish them from partial differential equations (PDEs), which involve partial derivati...