Articles

Euler's Method 🇺🇸

Euler's Method is a numerical technique applied in the realm of initial value problems for ordinary differential equations (ODEs). The simplicity of this method makes it a popular choice in cases where the differential equation lacks a closed-form solution. The method might not always provide the mo...
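Although the article is truncated here, the core update $y_{n+1} = y_n + h\,f(t_n, y_n)$ fits in a few lines. A minimal Python sketch (the test problem $y' = y$, the step size, and the function names are illustrative choices, not taken from the article):

```python
def euler(f, t0, y0, h, n):
    """Advance y' = f(t, y) from (t0, y0) using n Euler steps of size h."""
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# Illustrative problem: y' = y, y(0) = 1, whose exact solution is e^t.
approx = euler(lambda t, y: y, 0.0, 1.0, 0.1, 10)
```

With $h = 0.1$ the result is $(1.1)^{10} \approx 2.594$, noticeably below $e \approx 2.718$, which illustrates the method's first-order accuracy.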

Midpoint Rule 🇺🇸

For a function $f(x)$ defined over an interval $[a, b]$, the Midpoint Rule provides the following approximation for the integral...
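The composite form of this approximation, which sums $h \cdot f$ at the midpoint of each subinterval, can be sketched as follows (the test integrand $x^2$ on $[0, 1]$ and the subinterval count are illustrative assumptions):

```python
def midpoint_rule(f, a, b, n):
    """Composite midpoint rule: sample f at the center of each of n subintervals."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Illustrative integral: the exact value of the integral of x^2 over [0, 1] is 1/3.
approx = midpoint_rule(lambda x: x**2, 0.0, 1.0, 100)
```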

Picard's Method 🇺🇸

Picard's method, alternatively known as the method of successive approximations, is a tool primarily used for solving initial-value problems for first-order ordinary differential equations (ODEs). The approach hinges on an iterative process that approximates the solution of an ODE. Though this metho...
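The successive approximations $y_{k+1}(t) = y_0 + \int_{t_0}^{t} f(s, y_k(s))\,ds$ can be carried out numerically on a grid, with the integral evaluated by a cumulative trapezoid sum. A sketch under those assumptions (the grid, iteration count, and test problem $y' = y$ are mine, not from the article):

```python
def picard(f, t0, y0, t_end, n_points, n_iter):
    """Picard iteration: y_{k+1}(t) = y0 + integral of f(s, y_k(s)) ds,
    approximated on a uniform grid with the trapezoidal rule."""
    h = (t_end - t0) / (n_points - 1)
    ts = [t0 + i * h for i in range(n_points)]
    y = [y0] * n_points                      # initial guess: the constant y0
    for _ in range(n_iter):
        g = [f(t, yv) for t, yv in zip(ts, y)]
        new = [y0]
        for i in range(1, n_points):
            new.append(new[-1] + h * (g[i - 1] + g[i]) / 2)
        y = new
    return ts, y

# Illustrative problem: y' = y, y(0) = 1; the iterates approach e^t on [0, 1].
ts, ys = picard(lambda t, y: y, 0.0, 1.0, 1.0, 101, 20)
```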

Monte Carlo 🇺🇸

Monte Carlo integration is a numerical technique for approximating integrals using randomness. Rather than systematically sampling a function at predetermined points, as done in methods like the trapezoidal rule or Simpson's rule, Monte Carlo methods rely on random samples drawn from a prescribed do...
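A basic uniform-sampling estimator looks like this (the integrand, sample count, and fixed seed are illustrative assumptions for reproducibility):

```python
import random

def monte_carlo_integrate(f, a, b, n, seed=0):
    """Estimate the integral of f over [a, b] as (b - a) times the mean of f
    at n uniformly random sample points."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Illustrative integral: x^2 over [0, 1], exact value 1/3.
approx = monte_carlo_integrate(lambda x: x**2, 0.0, 1.0, 100_000)
```

The error shrinks like $1/\sqrt{n}$, independent of dimension, which is why the approach shines for high-dimensional integrals.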

Eigenvalues and Eigenvectors 🇺🇸

Eigenvalues and eigenvectors are foundational concepts in linear algebra, with extensive applications across various domains such as physics, computer graphics, and machine learning. These concepts are instrumental in decomposing complex matrix transformations, thereby simplifying numerical computat...

Central Difference 🇺🇸

...

Simpson's Rule 🇺🇸

The foundation of Simpson's Rule lies in the concept of estimating the integral of a function $f(x)$ over a specified interval $[a, b]$ by the area beneath a quadratic polynomial. This polynomial passes through the points $(a, f(a))$, $((a+b)/2, f((a+b)/2))$, and $(b, f(b))$...
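The quadratic fit through those three points yields the familiar weights $1, 4, 1$; a minimal sketch (the cubic test integrand is an illustrative choice, picked because Simpson's rule integrates cubics exactly):

```python
def simpson(f, a, b):
    """Basic Simpson's rule: area under the quadratic through
    (a, f(a)), (m, f(m)), (b, f(b)) with m the midpoint."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

# Illustrative integral: x^3 over [0, 1]; Simpson's rule gives the exact 1/4.
approx = simpson(lambda x: x**3, 0.0, 1.0)
```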

Newton's Method 🇺🇸

Newton's method (or the Newton-Raphson method) is a powerful root-finding algorithm that exploits both the value of a function and its first derivative to rapidly refine approximations to its roots. Unlike bracketing methods that work by enclosing a root between two points, Newton's method is an ope...
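The update $x_{n+1} = x_n - f(x_n)/f'(x_n)$ is short enough to sketch directly (the test equation $x^2 - 2 = 0$, starting point, and tolerances are illustrative assumptions):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: repeatedly apply x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative root: sqrt(2) as the positive root of x^2 - 2 = 0.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Near a simple root the convergence is quadratic, so only a handful of iterations are needed here.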

Singular Value Decomposition 🇺🇸

Singular Value Decomposition (SVD) is a fundamental matrix decomposition technique widely used in numerous areas of science, engineering, and data analysis. Unlike the Eigenvalue Decomposition (EVD), which is restricted to square and diagonalizable matrices, SVD applies to any rectangular matrix. It...

Linear Interpolation 🇺🇸

Linear interpolation is one of the most basic and commonly used interpolation methods. The idea is to approximate the value of a function between two known data points by assuming that the function behaves linearly (like a straight line) between these points. Although this assumption may be simplist...
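The straight-line assumption reduces to one formula; a minimal sketch (the sample points are illustrative):

```python
def lerp(x0, y0, x1, y1, x):
    """Value at x of the straight line through (x0, y0) and (x1, y1)."""
    t = (x - x0) / (x1 - x0)
    return (1 - t) * y0 + t * y1

# Halfway between (1, 10) and (3, 30) the line passes through 20.
y = lerp(1.0, 10.0, 3.0, 30.0, 2.0)
```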

Taylor Series 🇺🇸

The Taylor series is a fundamental tool in calculus and mathematical analysis, offering a powerful way to represent and approximate functions. By expanding a function around a specific point, known as the "center" or "point of expansion," we can express it as an infinite sum of polynomial terms deri...
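As a quick numerical illustration, the partial sums of the series for $e^x$ centered at $0$ can be accumulated term by term (the choice of $e^x$, the evaluation point, and the term count are illustrative assumptions):

```python
def taylor_exp(x, n_terms):
    """Partial sum of the Taylor series of e^x about 0: sum of x^k / k!."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= x / (k + 1)   # next term: multiply by x/(k+1) to get x^(k+1)/(k+1)!
    return total

# Fifteen terms already reproduce e to about ten decimal places.
approx = taylor_exp(1.0, 15)
```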

Ordinary Differential Equations 🇺🇸

Ordinary Differential Equations (ODEs) are equations that involve one independent variable and the derivatives of one dependent variable with respect to the independent variable. They are called "ordinary" to distinguish them from partial differential equations (PDEs), which involve partial derivati...

Lagrange Polynomial Interpolation 🇺🇸

Lagrange Polynomial Interpolation is a widely used technique for determining a polynomial that passes exactly through a given set of data points. Suppose we have a set of $(n+1)$ data points $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$ where all $x_i$ are distinct. The aim is to find a polynomial $L...
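The construction via the Lagrange basis polynomials $\ell_i(x) = \prod_{j \ne i} (x - x_j)/(x_i - x_j)$ can be sketched directly (the data points, taken on $y = x^2$, are an illustrative assumption):

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# Points on y = x^2; the degree-2 interpolant reproduces the parabola exactly.
value = lagrange_interp([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
```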

QR Method 🇺🇸

The QR method is a widely used algorithm in numerical linear algebra for determining the eigenvalues of a given square matrix. Unlike direct methods such as solving the characteristic polynomial, which can be complicated and unstable numerically for large matrices, the QR method leverages iterative ...

Cubic Spline Interpolation 🇺🇸

Cubic spline interpolation is a refined mathematical tool frequently used within numerical analysis. It's an approximation technique that employs piecewise cubic polynomials, collectively forming a cubic spline. These cubic polynomials are specifically engineered to pass through a defined set of dat...

Trapezoidal Rule 🇺🇸

The Trapezoidal Rule operates by treating the region under the graph of the function as a trapezoid and calculating its area...
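In composite form the interval is split into $n$ trapezoids, which weights the endpoints by $\tfrac12$ and the interior points by $1$; a minimal sketch (the integrand $x^2$ on $[0, 1]$ and $n = 100$ are illustrative assumptions):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals of width h."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))           # endpoint weights are 1/2
    total += sum(f(a + i * h) for i in range(1, n))
    return h * total

# Illustrative integral: x^2 over [0, 1], exact value 1/3.
approx = trapezoid(lambda x: x**2, 0.0, 1.0, 100)
```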

Relaxation Method 🇺🇸

The relaxation method, commonly referred to as the fixed-point iteration method, is an iterative approach used to find solutions (roots) to nonlinear equations of the form $f(x) = 0$. Instead of directly solving for the root, the method involves rewriting the original equation in the form...
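Once the equation is rewritten as $x = g(x)$, the iteration is simply $x_{n+1} = g(x_n)$. A sketch (the classic example $x = \cos x$, starting point, and tolerances are illustrative assumptions):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Fixed-point iteration: apply x <- g(x) until successive values agree."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative equation: cos(x) - x = 0 rewritten as x = cos(x).
root = fixed_point(math.cos, 1.0)
```

Convergence requires $|g'(x)| < 1$ near the fixed point; here $|\sin x| \approx 0.67$, so the iteration converges, if somewhat slowly.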

Runge-Kutta 🇺🇸

The Runge-Kutta method is part of a family of iterative methods, both implicit and explicit, which are frequently employed for the numerical integration of ordinary differential equations (ODEs). This family encompasses widely recognized methods like the Euler Method, Heun's method (a.k.a., the 2nd...
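The best-known member of the family is the classical fourth-order scheme (RK4), which combines four slope evaluations per step. A sketch (the test problem $y' = y$ and the step size are illustrative assumptions):

```python
def rk4_step(f, t, y, h):
    """One step of the classical 4th-order Runge-Kutta method for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative problem: y' = y, y(0) = 1; ten steps of h = 0.1 toward y(1) = e.
t, y = 0.0, 1.0
for _ in range(10):
    y = rk4_step(lambda t, y: y, t, y, 0.1)
    t += 0.1
```

With the same step size as a plain Euler integration, RK4 is accurate to about six decimal places here rather than one.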

Least Squares 🇺🇸

Least Squares Regression is a fundamental technique in statistical modeling and data analysis used for fitting a model to observed data. The primary goal is to find a set of parameters that minimize the discrepancies (residuals) between the model's predictions and the actual observed data. The "leas...
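For a straight-line model $y = mx + c$, minimizing the sum of squared residuals has a closed-form solution in terms of centered sums. A sketch (the data, chosen to lie exactly on $y = 2x + 1$, is an illustrative assumption):

```python
def linear_least_squares(xs, ys):
    """Slope and intercept of the least-squares line through the points (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Illustrative data lying exactly on y = 2x + 1, so the fit recovers it.
slope, intercept = linear_least_squares([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```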

Eigenvalue Decomposition 🇺🇸

Eigenvalue Decomposition (EVD), also known as Eigendecomposition, is a fundamental operation in linear algebra that breaks down a square matrix into a simpler form defined by its eigenvalues and eigenvectors. This decomposition provides deep insights into the properties and structure of a matrix, en...

Jacobi Method 🇺🇸

The Jacobi method is a classical iterative algorithm used to approximate the solution of a system of linear equations $A\mathbf{x} = \mathbf{b}$. Instead of attempting to solve the system directly using methods such as Gaussian elimination, the Jacobi method iteratively refines an initial guess for ...
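Each sweep solves row $i$ for $x_i$ using the previous iterate's values for all other unknowns. A sketch for a small diagonally dominant system (the matrix, right-hand side, and iteration count are illustrative assumptions; diagonal dominance guarantees convergence):

```python
def jacobi(A, b, x0, iterations):
    """Jacobi iteration for A x = b, with A given as a list of rows."""
    n = len(A)
    x = list(x0)
    for _ in range(iterations):
        # Every component is updated from the *previous* iterate simultaneously.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Illustrative diagonally dominant system with exact solution (16/9, 17/9).
A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 13.0]
x = jacobi(A, b, [0.0, 0.0], 50)
```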

Integration Introduction 🇺🇸

$$\int_{1}^{2} x^2 dx \approx \sum_{i=1}^{10} h \cdot f(1 + 0.1i)$$...
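The sum above is a right-endpoint Riemann sum with $h = 0.1$; evaluating it takes only a few lines (the function names are mine):

```python
def right_riemann_sum(f, a, b, n):
    """Right-endpoint Riemann sum with n subintervals of width h = (b - a)/n."""
    h = (b - a) / n
    return sum(h * f(a + i * h) for i in range(1, n + 1))

# The sum works out to 2.485, a slight overestimate of the exact 7/3,
# as expected for a right-endpoint sum of an increasing function.
approx = right_riemann_sum(lambda x: x**2, 1.0, 2.0, 10)
```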

Golden Ratio Search 🇺🇸

The Golden Ratio Search is a technique employed for locating the extremum (minimum or maximum) of a unimodal function over a given interval. Unlike gradient-based or derivative-requiring methods, this approach uses only function evaluations, making it broadly applicable even when derivatives are dif...
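The interval is shrunk by the inverse golden ratio each step, with interior points placed so one probe can be reused. A sketch for minimization (the quadratic test function and bracket are illustrative assumptions):

```python
def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimum of a unimodal f on [a, b]."""
    invphi = (5 ** 0.5 - 1) / 2           # 1/phi, about 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                   # minimum lies in [a, d]; reuse c
            c = b - invphi * (b - a)
        else:
            a, c = c, d                   # minimum lies in [c, b]; reuse d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Illustrative unimodal function with its minimum at x = 2.
xmin = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```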

Newton Polynomial 🇺🇸

Newton's Polynomial, often referred to as Newton's Interpolation Formula, is another classical approach to polynomial interpolation. Given a set of data points $(x_0,y_0),(x_1,y_1),\dots,(x_n,y_n)$ with distinct $x_i$ values, Newton's method constructs an interpolating polynomial in a form that make...
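The coefficients of the Newton form come from a divided-difference table, and the polynomial is then evaluated in nested (Horner-like) fashion. A sketch (the data points, taken on $y = x^2$, are an illustrative assumption):

```python
def newton_divided_diff(xs, ys):
    """Coefficients of Newton's interpolating polynomial via divided differences,
    computed in place over a single coefficient array."""
    coeffs = list(ys)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - 1, level - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - level])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton-form polynomial at x using nested multiplication."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

# Points on y = x^2; the interpolant reproduces the parabola exactly.
xs = [0.0, 1.0, 2.0]
c = newton_divided_diff(xs, [0.0, 1.0, 4.0])
value = newton_eval(xs, c, 1.5)
```

A practical advantage of this form over the Lagrange form is that adding a new data point only appends one coefficient instead of rebuilding everything.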

Heun's Method 🇺🇸

Heun's method is an improved version of Euler's method that enhances accuracy by using an average of the slope at the beginning and the predicted slope at the end of the interval...
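That predictor-corrector averaging can be sketched in a few lines (the test problem $y' = y$ and the step size are illustrative assumptions):

```python
def heun_step(f, t, y, h):
    """Heun's method: average the slope at t with the slope at the
    Euler-predicted point at t + h."""
    k1 = f(t, y)                     # slope at the start of the interval
    k2 = f(t + h, y + h * k1)        # slope at the Euler prediction
    return y + h / 2 * (k1 + k2)

# Illustrative problem: y' = y, y(0) = 1; ten steps of h = 0.1 toward y(1) = e.
t, y = 0.0, 1.0
for _ in range(10):
    y = heun_step(lambda t, y: y, t, y, 0.1)
    t += 0.1
```

The result is accurate to about two decimal places, a clear improvement over plain Euler at the same step size.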

Power Method 🇺🇸

The power method is a fundamental iterative algorithm for estimating the eigenvalue of largest magnitude and its associated eigenvector for a given matrix. This technique is particularly appealing when dealing with large and sparse matrices, where direct eigenvalue computations (e.g., via the charac...
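The iteration repeatedly multiplies by the matrix and renormalizes; the growth factor converges to the dominant eigenvalue. A sketch (the $2\times 2$ test matrix is an illustrative assumption, and the eigenvalue estimate via the infinity norm assumes the dominant eigenvalue is positive):

```python
def power_method(A, x0, iterations=100):
    """Estimate the dominant eigenvalue and eigenvector of A by repeated
    multiplication and infinity-norm normalization."""
    x = list(x0)
    eigenvalue = 0.0
    for _ in range(iterations):
        y = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
        norm = max(abs(v) for v in y)
        x = [v / norm for v in y]
        eigenvalue = norm                 # converges to |lambda_max|
    return eigenvalue, x

# Illustrative symmetric matrix with eigenvalues 3 and 1; dominant vector (1, 1).
lam, vec = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

The convergence rate depends on the ratio of the two largest eigenvalue magnitudes, here $1/3$ per iteration.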

LU Decomposition 🇺🇸

LU Decomposition (or LU Factorization) is a powerful and widely used technique in numerical linear algebra for solving systems of linear equations, computing inverses, and determining determinants. The core idea is to factorize a given square matrix $A$ into the product of a lower-triangular matrix ...
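One common way to compute the factorization is the Doolittle scheme, which puts ones on the diagonal of $L$; a sketch without pivoting (the test matrix is an illustrative assumption, and real implementations pivot for stability):

```python
def lu_decompose(A):
    """Doolittle LU factorization (no pivoting): A = L U with unit-diagonal L."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):            # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):        # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

# Illustrative factorization: [[4, 3], [6, 3]] = [[1, 0], [1.5, 1]] [[4, 3], [0, -1.5]].
L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
```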

Differentiation 🇺🇸

The classical definition of the derivative of a function $f(x)$ at a point $x_0$...
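Replacing the limit in that definition with a small but finite $h$ gives the simplest numerical derivative (the test function, evaluation point, and default $h$ are illustrative assumptions):

```python
def derivative(f, x, h=1e-6):
    """Forward-difference approximation of f'(x), i.e. the limit definition
    evaluated at a small finite h."""
    return (f(x + h) - f(x)) / h

# Illustrative check: the derivative of x^2 at x = 3 is 6.
slope = derivative(lambda x: x**2, 3.0)
```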

Root Finding 🇺🇸

Root: A root of a function $f(x)$ is a value $x = r$ such that $f(r) = 0$. This applies not only to linear functions but can also extend to nonlinear systems, such as those including powers ($x^2, x^3, \ldots$), roots, radicals and non-integer polynomials ($\sqrt{x}, x^{3/5}, x^{\pi} \ldots$), or tri...
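The simplest root-finding strategy consistent with this definition is bisection, which repeatedly halves an interval on which $f$ changes sign. A sketch (the test equation $x^2 - 2 = 0$ and the bracket are illustrative assumptions):

```python
def bisect(f, a, b, tol=1e-10):
    """Bisection: shrink an interval [a, b] with f(a) and f(b) of opposite sign
    until it is narrower than tol."""
    fa = f(a)
    while (b - a) > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:
            b = m                    # sign change in [a, m]
        else:
            a, fa = m, fm            # sign change in [m, b]
    return (a + b) / 2

# Illustrative root: sqrt(2), bracketed by f(0) < 0 < f(2) for f(x) = x^2 - 2.
root = bisect(lambda x: x**2 - 2, 0.0, 2.0)
```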

Regression 🇺🇸

Regression analysis and curve fitting are critical methods in statistical analysis and machine learning. Both aim to find a function that best approximates a set of data points, yet their typical applications may vary slightly. They are particularly useful in understanding relationships among variab...

Inverse Matrix 🇺🇸

The inverse of a matrix $A$ is denoted $A^{-1}$. It is the unique matrix such that, when multiplied by the original matrix $A$, the result is the identity matrix $I$. Mathematically, this is expressed as...
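For the $2\times 2$ case the inverse has a closed form via the adjugate and determinant, which makes the defining property easy to verify (the sample matrix is an illustrative assumption):

```python
def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] using the adjugate formula:
    (1/det) * [[d, -b], [-c, a]]."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

# Illustrative matrix with determinant 10; its inverse is [[0.6, -0.7], [-0.2, 0.4]].
inv = inverse_2x2(4.0, 7.0, 2.0, 6.0)
```

Multiplying the matrix by this result reproduces the identity, which is exactly the defining equation above.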

Forward Difference 🇺🇸

...

Matrix Methods 🇺🇸

$$A = LU$$...

Gradient Descent 🇺🇸

Gradient Descent is a fundamental first-order optimization algorithm widely used in mathematics, statistics, machine learning, and artificial intelligence. Its principal aim is to find the minimum of a given differentiable function $f(x)$. Instead of searching blindly, it uses gradient information...
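The update $x_{k+1} = x_k - \eta \nabla f(x_k)$ can be sketched in one dimension (the objective $(x-3)^2$, the learning rate, and the iteration count are illustrative assumptions):

```python
def gradient_descent(grad, x0, learning_rate=0.1, iterations=100):
    """Minimize a 1-D differentiable function by repeatedly stepping
    against its gradient."""
    x = x0
    for _ in range(iterations):
        x -= learning_rate * grad(x)
    return x

# Illustrative objective: f(x) = (x - 3)^2 with gradient 2(x - 3); minimum at x = 3.
xmin = gradient_descent(lambda x: 2 * (x - 3.0), 0.0)
```

Here each step shrinks the distance to the minimum by a constant factor $1 - 2\eta = 0.8$, so convergence is geometric; too large a learning rate would instead cause divergence.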

Thin Plate Spline Interpolation 🇺🇸

Thin Plate Spline (TPS) Interpolation is a non-parametric, spline-based method for interpolating scattered data in two or more dimensions. Originally arising in the context of fitting a smooth surface through a set of points in $\mathbb{R}^2$, thin plate splines can be generalized to higher dimensio...