Last modified: October 29, 2023
This article is written in: 🇺🇸
Autocovariance functions and coefficients
Autocovariance functions describe how values of a time series relate to their lagged counterparts, measuring the joint variability between the series at time $t$ and its value at an earlier time $t - k$ (where $k$ is the lag). In autoregressive models, these relationships are expressed through coefficients, which quantify the influence of past values on future values. The autocovariance function helps in estimating these coefficients by analyzing the strength and pattern of correlations at different lags. Higher autocovariance at a specific lag suggests a stronger influence of past values on the present, aiding in model selection and parameter estimation for time series models such as AR, MA, and ARIMA.
Random Variables (r.v.)
A random variable (r.v.) is a mapping from a set of outcomes in a probability space to a set of real numbers. We can distinguish between:
I. Discrete random variables take on countable values. For example, let $X$ be the outcome of a fair die roll, so that $X \in \{1, 2, 3, 4, 5, 6\}$.
II. Continuous random variables take on any value in a continuous range. For instance, let $Y \sim N(0, 1)$, which can take any real value.
A realization is a specific observed value of a random variable. For instance, observing the die roll above might yield the realization $x = 4$.
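To make these definitions concrete, here is a minimal sketch (assuming NumPy is available; the seed and the two distributions are purely illustrative) that draws one realization of a discrete and a continuous random variable:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Discrete r.v.: one roll of a fair six-sided die -> a value in {1, ..., 6}
x = rng.integers(1, 7)

# Continuous r.v.: one draw from a standard normal distribution N(0, 1)
y = rng.normal(loc=0.0, scale=1.0)

# Each call produces a single realization of the underlying random variable
print(f"die roll realization: {x}")
print(f"normal realization:   {y:.3f}")
```

Re-running the script with a different seed produces different realizations of the same underlying random variables.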
Covariance
The covariance between two random variables $X$ and $Y$ measures the linear relationship between them. It is defined as:

$$\text{Cov}(X, Y) = E[(X - \mu_X)(Y - \mu_Y)]$$

Where:

- $\mu_X$ is the mean of $X$.
- $\mu_Y$ is the mean of $Y$.
- $E[\cdot]$ denotes the expectation operator.
The covariance is symmetric:

$$\text{Cov}(X, Y) = \text{Cov}(Y, X)$$

Interpretation:

- If $\text{Cov}(X, Y) > 0$, $X$ and $Y$ tend to increase together.
- If $\text{Cov}(X, Y) < 0$, when $X$ increases, $Y$ tends to decrease.
- If $\text{Cov}(X, Y) = 0$, there is no linear dependence between $X$ and $Y$.
Estimation of Covariance
To estimate the covariance from a paired dataset $(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)$, we use the sample covariance formula:

$$\widehat{\text{Cov}}(X, Y) = \frac{1}{n - 1} \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})$$

Where:

- $\bar{x}$ is the sample mean of $x$,
- $\bar{y}$ is the sample mean of $y$,
- $n$ is the number of observations.
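A minimal sketch of this estimator, assuming NumPy is available (the data points are made up for illustration):

```python
import numpy as np

def sample_covariance(x, y):
    # Sample covariance with the n - 1 divisor, matching the formula above
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    return np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)

# Tiny worked example: y grows with x, so the covariance is positive
x = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]
print(sample_covariance(x, y))   # 2.0
print(np.cov(x, y)[0, 1])        # NumPy's estimator agrees (n - 1 divisor)
```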
Stochastic Processes
A stochastic process is a collection of random variables indexed by time, denoted as:

$$\{X_t : t \in T\}$$

where $T$ is the index set (often time or space).

Each $X_t$ follows a certain distribution with a mean $\mu_t$ and variance $\sigma_t^2$:

$$E[X_t] = \mu_t, \qquad \text{Var}(X_t) = \sigma_t^2$$
Example: A time series is a realization of a stochastic process. Consider the random variables:

$$X_1, X_2, \dots, X_n$$

Realized as:

$$x_1, x_2, \dots, x_n$$
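A short simulation helps separate the process from its realization. The sketch below (assuming NumPy; the two processes and the seed are illustrative) generates one realization each of two common stochastic processes:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 10

# One realization x_1, ..., x_n of a Gaussian white-noise process {X_t}
white_noise = rng.normal(loc=0.0, scale=1.0, size=n)

# Another common process: a random walk, X_t = X_{t-1} + e_t
random_walk = np.cumsum(rng.normal(size=n))

print(np.round(white_noise, 2))
print(np.round(random_walk, 2))
```

Each run yields one concrete sequence of numbers; the process itself is the whole family of such possible sequences.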
Autocovariance Function
The autocovariance function measures the covariance between two values $X_s$ and $X_t$ of the time series at times $s$ and $t$:

$$\gamma(s, t) = \text{Cov}(X_s, X_t) = E[(X_s - \mu_s)(X_t - \mu_t)]$$

Where:

- $X_s$ and $X_t$ are the values of the time series at times $s$ and $t$, respectively.
- $\mu_s$ and $\mu_t$ are the means at times $s$ and $t$.
Variance as a special case:

When $s = t$, the autocovariance function simplifies to the variance of the series at time $t$:

$$\gamma(t, t) = \text{Cov}(X_t, X_t) = \text{Var}(X_t) = \sigma_t^2$$
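Since $\gamma(s, t)$ is defined over the ensemble of possible realizations, one way to approximate it is to simulate many independent paths and average across them. A hedged sketch, assuming NumPy and using a random walk purely as an illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_paths, n_steps = 100_000, 20

# Simulate many independent realizations of a random walk (rows = paths)
paths = np.cumsum(rng.normal(size=(n_paths, n_steps)), axis=1)

# Monte Carlo estimate of gamma(s, t) = Cov(X_s, X_t) across the ensemble
s, t = 4, 9          # 0-based indices, i.e. times 5 and 10
gamma_st = np.mean((paths[:, s] - paths[:, s].mean()) *
                   (paths[:, t] - paths[:, t].mean()))

# For a standard random walk, Cov(X_s, X_t) = min(s, t) * sigma^2 in 1-based
# time, so the estimate should land near 5.0 here
print(gamma_st)
```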
Lagged Autocovariance
The lagged autocovariance function measures the covariance between values of the series at times $t$ and $t + k$, where $k$ is the lag:

$$\gamma(t, t + k) = \text{Cov}(X_t, X_{t+k}) = E[(X_t - \mu_t)(X_{t+k} - \mu_{t+k})]$$

For a stationary process, the autocovariance function depends only on the lag $k$, not the specific times $t$ and $t + k$:

$$\gamma(t, t + k) = \gamma(k) \quad \text{for all } t$$

This implies that the autocovariance function remains constant across different time points, provided the lag $k$ is the same.
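As an illustrative (and standard) example, a stationary AR(1) process $X_t = \phi X_{t-1} + \varepsilon_t$ with $|\phi| < 1$ and white-noise variance $\sigma^2$ has autocovariance

$$\gamma(k) = \frac{\sigma^2 \, \phi^{|k|}}{1 - \phi^2}$$

which depends only on the lag $k$ and decays geometrically as $|k|$ grows.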
Autocovariance Coefficients
Autocovariance measures the covariance of a time series with itself at different time lags. For a time series $\{X_t\}$, the autocovariance at lag $k$ is defined as:

$$\gamma_k = \text{Cov}(X_t, X_{t+k}) = E[(X_t - \mu)(X_{t+k} - \mu)]$$

Where:

- $X_t$ is the value of the time series at time $t$,
- $X_{t+k}$ is the value of the time series at time $t + k$,
- $\mu$ is the mean of the series (assumed to be constant under weak stationarity).
The sample estimate of the autocovariance coefficient at lag $k$ is denoted by $\hat{\gamma}_k$. For a time series with $n$ observations $x_1, x_2, \dots, x_n$, the estimator is:

$$\hat{\gamma}_k = \frac{1}{n} \sum_{t=1}^{n-k} (x_t - \bar{x})(x_{t+k} - \bar{x})$$

Where:

- $\bar{x}$ is the sample mean of the series.
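A minimal sketch of this estimator, assuming NumPy (the sample series is made up for illustration):

```python
import numpy as np

def sample_autocovariance(x, k):
    # Estimate gamma_k with the 1/n divisor, matching the formula above
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    return np.sum((x[: n - k] - xbar) * (x[k:] - xbar)) / n

x = [2.0, 4.0, 6.0, 8.0, 10.0]
print(sample_autocovariance(x, 0))   # gamma_0: the (biased) sample variance, 8.0
print(sample_autocovariance(x, 1))   # gamma_1: covariance at lag 1, 3.2
```

Note the $1/n$ divisor: it is used even though the sum has only $n - k$ terms, which keeps the resulting autocovariance sequence well behaved across lags.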
Assumption of Weak Stationarity
For weakly stationary processes, the mean $\mu$ is constant, and the autocovariance depends only on the lag $k$, not on the actual time points $t$ and $t + k$. Therefore, the autocovariance function becomes:

$$\gamma(k) = E[(X_t - \mu)(X_{t+k} - \mu)]$$

Under the assumption of weak stationarity, the sample autocovariance is computed as:

$$\hat{\gamma}_k = \frac{1}{n} \sum_{t=1}^{n-k} (x_t - \bar{x})(x_{t+k} - \bar{x})$$

This allows us to estimate the strength of the relationship between $X_t$ and $X_{t+k}$ at different lags $k$.
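Tying this back to the opening paragraph: for an AR(1) process, the ratio $\hat{\gamma}_1 / \hat{\gamma}_0$ estimates the autoregressive coefficient $\phi$ (a special case of Yule-Walker estimation). A sketch under that assumption, with NumPy and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
phi, n = 0.7, 5_000

# Simulate a stationary AR(1) process: X_t = phi * X_{t-1} + e_t
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Sample autocovariances at lags 0 and 1 (1/n divisor, as above)
xbar = x.mean()
gamma_0 = np.sum((x - xbar) ** 2) / n
gamma_1 = np.sum((x[:-1] - xbar) * (x[1:] - xbar)) / n

# For AR(1), gamma(1) / gamma(0) = phi, so the ratio recovers the coefficient
print(gamma_1 / gamma_0)   # should be close to 0.7
```

This is exactly the sense in which the autocovariance function supports parameter estimation for AR-type models: the pattern of $\hat{\gamma}_k$ across lags carries the information needed to back out the model coefficients.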