Articles

Standard Error and LLN 🇺🇸

Expected Value (E), also known as the mean, is the long-run average of a random variable, representing the value we anticipate on average from repeated random draws from a population...
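As a quick numerical illustration of this long-run-average idea, here is a minimal sketch in Python using a fair six-sided die, whose expected value is 3.5; the sample mean settles toward that value as the sample grows:

```python
import random
import statistics

random.seed(42)  # fixed seed so the draws are reproducible

# Draw increasingly large samples from a fair die (E[X] = 3.5)
# and watch the sample mean approach the expected value.
for n in (10, 1_000, 100_000):
    sample = [random.randint(1, 6) for _ in range(n)]
    print(n, statistics.mean(sample))
```

With 100,000 draws the sample mean typically lands within a few hundredths of 3.5, which is the Law of Large Numbers at work.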

Stationarity 🇺🇸

Stationarity is an important idea in time series analysis. A time series is considered stationary if its statistical properties—like the mean, variance, and autocovariance—stay constant over time. This matters because methods like ARIMA and ARMA are designed to work with stationary data, so it’s a g...

Autoregressive Models 🇺🇸

Autoregressive (AR) models are fundamental tools in time series analysis, used to describe and forecast time-dependent data. An AR model predicts future values based on a linear combination of past observations. The order of an AR model, denoted as $p$, indicates how many lagged past values are used...

Invertibility 🇺🇸

In time series modeling, invertibility is the property of a model that allows the innovation process (also called the noise or disturbance process) to be expressed as a function of the observed series and its past values. This is particularly relevant for Moving Average (MA) models...

Random Walk 🇺🇸

The random walk is a fundamental and widely used time series model, often applied in finance to represent stock prices and other economic indicators. The idea behind the random walk is that the value of the process at time $t$ is the sum of its value at time $t-1$ and a random shock (or noise). Esse...
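The recursion $X_t = X_{t-1} + \varepsilon_t$ is easy to simulate directly. A minimal sketch (function name and parameters are illustrative, not from any particular library):

```python
import random

random.seed(0)  # reproducible shocks

def random_walk(n_steps, sigma=1.0, start=0.0):
    """Simulate X_t = X_{t-1} + eps_t with independent Gaussian shocks eps_t."""
    x = start
    path = [x]
    for _ in range(n_steps):
        x += random.gauss(0.0, sigma)  # add a fresh random shock each period
        path.append(x)
    return path

path = random_walk(100)
```

Plotting such paths shows the characteristic wandering behavior: the process has no fixed mean level to return to, which is why random walks are non-stationary.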

Difference Equations 🇺🇸

A difference equation (also known as a recurrence relation) defines each term of a sequence based on previous terms. In some cases, the general term of a sequence is given explicitly (e.g., $a_n = 3n + 2$, resulting in the sequence $5, 8, 11, \dots$). However, more commonly, a difference equation pr...
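The two viewpoints can be compared directly in code: the explicit formula $a_n = 3n + 2$ versus the equivalent recurrence $a_1 = 5$, $a_n = a_{n-1} + 3$ (a minimal sketch):

```python
def explicit(n):
    # Closed form: a_n = 3n + 2
    return 3 * n + 2

def by_recurrence(n):
    # Same sequence defined recursively: a_1 = 5, a_n = a_{n-1} + 3
    a = 5
    for _ in range(n - 1):
        a += 3
    return a

terms = [explicit(n) for n in range(1, 5)]  # [5, 8, 11, 14]
```

Both definitions generate the same sequence; the recurrence only differs in that each term must be built from the one before it.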

Financial Time Series Models 🇺🇸

Financial series (prices, returns, exchange rates) often look very different from the classical stationary Gaussian assumptions. Common features include...

Autocovariance Function 🇺🇸

Autocovariance functions describe how values of a time series relate to their lagged counterparts, measuring the joint variability between a series at time $t$ and its value at a previous time $t-k$ (where $k$ is the lag). In autoregressive models, these relationships are expressed through coefficie...
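The sample version of the autocovariance at lag $k$ is straightforward to compute by hand. A minimal sketch (using the common biased estimator that divides by $n$):

```python
def autocovariance(x, k):
    """Sample autocovariance at lag k: average of (x_t - xbar)(x_{t-k} - xbar),
    divided by n (the usual biased estimator)."""
    n = len(x)
    xbar = sum(x) / n
    return sum((x[t] - xbar) * (x[t - k] - xbar) for t in range(k, n)) / n

x = [1.0, 2.0, 3.0, 4.0, 5.0]   # a simple upward-trending series
gamma0 = autocovariance(x, 0)    # lag 0 is just the (biased) sample variance
gamma1 = autocovariance(x, 1)    # positive here: neighbors move together
```

For this trending series the lag-1 autocovariance is positive, reflecting that consecutive values tend to deviate from the mean in the same direction.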

Autocorrelation Function 🇺🇸

In time series analysis, understanding the relationships between observations at different time lags is crucial for model identification and forecasting. Two essential tools for analyzing these relationships are the Autocorrelation Function (ACF) and the Partial Autocorrelation Function (PACF)...

ARIMA Models 🇺🇸

ARMA, ARIMA, and SARIMA are models commonly used to analyze and forecast time series data. ARMA (AutoRegressive Moving Average) combines two ideas: using past values to predict current ones (autoregression) and smoothing out noise using past forecast errors (moving average). ARIMA (AutoRegressive In...

Seasonality and Trends 🇺🇸

Seasonality and trends are fundamental components of time series data that significantly impact analysis and forecasting. Understanding and correctly modeling them is essential for accurate predictions and effective time series modeling...

Regression with ARMA Errors 🇺🇸

In many applications, we want to explain a response series $Y_t$ using covariates while still accounting for autocorrelation. A standard approach is regression with ARMA errors...

Forecasting 🇺🇸

Time series forecasting is a technique used to predict future values based on historical data. It is widely used in various fields, such as finance, economics, and meteorology. In this section, we will discuss the basics of time series forecasting...

Moving Average Models 🇺🇸

Moving Average (MA) models are a fundamental class of univariate time series models used for forecasting and understanding temporal data. Unlike Autoregressive (AR) models, which rely on past values of the series itself, MA models utilize past forecast errors to model the current value of the series...

Time Series 🇺🇸

Time series data consists of sequential observations collected over a period of time. This kind of data is prevalent in a range of fields such as finance, economics, climatology, and more. Time series analysis involves the exploration of this data to identify inherent structures such as patterns or ...

Yule Walker Equations 🇺🇸

The Yule-Walker equations are a set of linear relationships that tie the autocovariances/autocorrelations of a stationary autoregressive (AR($p$)) process to its parameters. They are the workhorse for parameter estimation, diagnostic checking, and theoretical analysis of AR models...
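In the simplest case, an AR(1) process $X_t = \phi X_{t-1} + \varepsilon_t$, the Yule-Walker equations reduce to $\rho(1) = \phi$: the lag-1 autocorrelation *is* the parameter. A minimal simulation-based sketch (variable names are illustrative):

```python
import random

random.seed(1)  # reproducible simulation

# Simulate an AR(1) process X_t = phi * X_{t-1} + eps_t.
phi_true = 0.6
x = [0.0]
for _ in range(5000):
    x.append(phi_true * x[-1] + random.gauss(0.0, 1.0))

def sample_autocorr(x, k):
    """Lag-k sample autocorrelation."""
    n = len(x)
    xbar = sum(x) / n
    denom = sum((v - xbar) ** 2 for v in x)
    return sum((x[t] - xbar) * (x[t - k] - xbar) for t in range(k, n)) / denom

# Yule-Walker for AR(1): rho(1) = phi, so the lag-1 autocorrelation
# is itself the parameter estimate.
phi_hat = sample_autocorr(x, 1)
```

With 5,000 observations the estimate typically lands close to the true $\phi = 0.6$; for AR($p$) with $p > 1$, the same idea yields a $p \times p$ linear system to solve.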

Backward Shift Operator 🇺🇸

The backward shift operator (denoted by $B$) is a powerful tool in time series analysis, used to simplify the notation and manipulation of time series models. The operator shifts the time index of a time series back by one period, making it useful in autoregressive, moving average, and mixed models...
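As a quick illustration of the notation: applying $B$ once gives $B X_t = X_{t-1}$, and applying it $k$ times gives $B^k X_t = X_{t-k}$. An AR(1) model $X_t = \phi X_{t-1} + \varepsilon_t$ can then be written compactly as

$$(1 - \phi B) X_t = \varepsilon_t,$$

while an MA(1) model $X_t = \varepsilon_t + \theta \varepsilon_{t-1}$ becomes $X_t = (1 + \theta B)\varepsilon_t$. Factoring and inverting these polynomials in $B$ is what makes stationarity and invertibility conditions easy to state.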

Series 🇺🇸

A sequence is an ordered list of numbers that can be viewed as a function mapping each natural number $n$ to a specific value $a_n$. More formally, a sequence $\{a_n\}$ is a function whose domain is the set of natural numbers, and the values are called the terms of the sequence...

Time Series Modeling 🇺🇸

Time series modeling involves analyzing data points collected or recorded at specific time intervals to understand underlying structures and make forecasts. Various models, such as Autoregressive (AR), Moving Average (MA), and their combinations (ARMA, ARIMA), are employed to capture different aspec...

Statistical Moments and Time Series 🇺🇸

Understanding the behavior of time series data is crucial across various fields such as finance, economics, and engineering. Statistical moments, especially the mean and standard deviation, are essential tools in summarizing and analyzing time series data. This section explores how these statistical...

Randomness Tests 🇺🇸

When a series looks noisy, it is still useful to check whether the noise is random or whether weak structure (trend or dependence) is present. The tests below are lightweight diagnostics for an IID or weak-dependence null...

Taylor Series 🇺🇸

The Taylor series is a fundamental tool in calculus and mathematical analysis, offering a powerful way to represent and approximate functions. By expanding a function around a specific point, known as the "center" or "point of expansion," we can express it as an infinite sum of polynomial terms deri...
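A classic concrete case is $e^x = \sum_{k=0}^{\infty} x^k / k!$, the Taylor series of the exponential centered at $0$. A minimal sketch of the truncated sum:

```python
import math

def exp_taylor(x, n_terms=15):
    """Approximate e^x by the first n_terms of its Taylor series about 0:
    sum_{k=0}^{n_terms-1} x^k / k!"""
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

approx = exp_taylor(1.0)  # should be very close to math.e
```

Because the factorial in the denominator grows so fast, even 15 terms already match $e$ to many decimal places near the expansion point; accuracy degrades as $x$ moves away from the center.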

Thin Plate Spline Interpolation 🇺🇸

Thin Plate Spline (TPS) interpolation is a non‑parametric, spline‑based technique for fitting a smooth surface through scattered data in two or more spatial dimensions. In its classical 2‑D form one seeks a function $f\colon\mathbb R^{2}\to\mathbb R$ that passes through specified data points while m...

Gaussian Interpolation 🇺🇸

Gaussian Interpolation, often associated with Gauss’s forward and backward interpolation formulas, is a technique that refines polynomial interpolation for equally spaced data points. Rather than building the interpolating polynomial from one end of the data interval (as Newton’s forward or backward...

Cubic Spline Interpolation 🇺🇸

Cubic spline interpolation is a refined mathematical tool frequently used within numerical analysis. It's an approximation technique that employs piecewise cubic polynomials, collectively forming a cubic spline. These cubic polynomials are specifically engineered to pass through a defined set of dat...

Least Squares 🇺🇸

Least Squares Regression is a fundamental technique in statistical modeling and data analysis used for fitting a model to observed data. The primary goal is to find a set of parameters that minimize the discrepancies (residuals) between the model’s predictions and the actual observed data. The "leas...
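For the simple straight-line case $y = a + bx$, minimizing the squared residuals has a well-known closed-form solution via the normal equations. A minimal sketch:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x via the closed-form normal equations:
    b = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2),  a = ybar - b * xbar."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

# Data that lies exactly on y = 1 + 2x, so the fit recovers it perfectly.
a, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

With noisy data the same formulas return the line that minimizes the sum of squared vertical distances to the points.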

Linear Interpolation 🇺🇸

Linear interpolation is one of the most basic and commonly used interpolation methods. The idea is to approximate the value of a function between two known data points by assuming that the function behaves linearly (like a straight line) between these points. Although this assumption may be simplist...
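The straight-line assumption translates into one short formula: for $x$ between $x_0$ and $x_1$, $y = y_0 + \frac{x - x_0}{x_1 - x_0}(y_1 - y_0)$. A minimal sketch (function name is illustrative):

```python
def lerp(x0, y0, x1, y1, x):
    """Linearly interpolate between (x0, y0) and (x1, y1) at position x."""
    t = (x - x0) / (x1 - x0)   # fraction of the way from x0 to x1
    return y0 + t * (y1 - y0)

mid = lerp(1.0, 10.0, 3.0, 30.0, 2.0)  # halfway between the points -> 20.0
```

The same formula also extrapolates if $x$ lies outside $[x_0, x_1]$, though that is generally less reliable.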

Newton Polynomial 🇺🇸

Newton’s Polynomial, often referred to as Newton’s Interpolation Formula, is another classical approach to polynomial interpolation. Given a set of data points $(x_0,y_0),(x_1,y_1),\dots,(x_n,y_n)$ with distinct $x_i$ values, Newton’s method constructs an interpolating polynomial in a form that make...

Regression 🇺🇸

Regression analysis and curve fitting are important tools in statistics, econometrics, engineering, and modern machine-learning pipelines. At their core they seek a deterministic (or probabilistic) mapping $\widehat f: \mathcal X \longrightarrow \mathcal Y$ that minim...

Lagrange Polynomial Interpolation 🇺🇸

Lagrange Polynomial Interpolation is a widely used technique for determining a polynomial that passes exactly through a given set of data points. Suppose we have a set of $(n+1)$ data points $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$ where all $x_i$ are distinct. The aim is to find a polynomial $L...
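The construction builds $L(x)$ as a weighted sum of basis polynomials $L_i(x)$, each equal to 1 at its own node $x_i$ and 0 at every other node. A minimal sketch:

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        basis = 1.0
        for j, (xj, _) in enumerate(points):
            if j != i:
                # L_i(x): equals 1 at x_i and 0 at every other node x_j
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

pts = [(0.0, 1.0), (1.0, 2.0), (2.0, 5.0)]  # these points lie on y = x^2 + 1
value = lagrange(pts, 1.5)                   # quadratic through them gives 3.25
```

Since three points determine a unique quadratic, the interpolant here reproduces $y = x^2 + 1$ exactly; with many nodes, high-degree Lagrange interpolation can oscillate (Runge's phenomenon), which is one motivation for splines.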

Liczby Losowe 🇵🇱

In C++, random numbers are generated using the standard <random> library. The process starts by creating a pseudorandom number generator, e.g. std::mt19937, which is based on the Mersenne Twister algorithm. To obtain more random results, the generator is seeded with a uniq...

L Wartosci R Wartosci 🇵🇱

In C++, a great deal revolves around one question: does a given expression refer to a "concrete object in memory," or is it merely a temporary result of a computation? This is where L-values (lvalues) and R-values (rvalues) come from. Understanding this topic unlocks, among other things...

Typ Wyliczeniowy 🇵🇱

An enumeration type (enum) lets you describe a closed set of possible values under readable names. Instead of "magic numbers" (e.g. 0, 1, 2), you use meaningful identifiers (Poniedzialek, Wtorek), which improves readability and reduces the number of errors...

Filters and Algorithms 🇺🇸

VTK’s filters and algorithms allow you to convert your data from “a static dataset” to a dynamic pipeline: you generate something, clean it up, extract meaning, and reshape it into a form that’s easier to analyze or visualize. Think of it like a workshop line: raw material comes in, tools operate on...

Data Types and Structures 🇺🇸

VTK is built to carry real-world 2D/3D data all the way from “numbers in memory” to “something you can see and reason about.” That means it needs data types that store values, but also store where those values live in space and how they connect. If you pick the right structure early, everything down...