Autocorrelation figure generator

This autocorrelation figure generator creates two figures using random values from a standard normal distribution with a mean of 0 and a standard deviation of 1, simulating residuals from a regression.

The first figure shows the plot of residuals (\hat{\mu}_t) over time, while the second figure (which uses the same data as the first) plots \hat{\mu}_t against \hat{\mu}_{t-1}.

You can use the first slider below to select the number of time steps (i.e., the number of random values to be generated). You can also click on the slider thumb and use the right/left arrow keys on your keyboard to fine-tune the number of time steps. The slider ranges from 10 to 1,000.

To adjust the size of the autocorrelation coefficient (Rho), use the second slider, which has a range from -1 to +1. You can create a positive autocorrelation figure by choosing a positive Rho or a negative autocorrelation figure by choosing a negative Rho. To get a figure showing no autocorrelation, set Rho to 0.

Once you have set your values, click the “Generate” button to create the figures. To download the figures, click on the respective “Download” button below.
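
If you want to reproduce the figures outside the page, the sketch below is a minimal Python example (using numpy and matplotlib, which are not necessarily what the tool itself uses). It assumes the generator builds the series with an AR(1)-style recursion, \hat{\mu}_t = \text{Rho} \cdot \hat{\mu}_{t-1} + \varepsilon_t, where the \varepsilon_t are the standard normal draws; the function and variable names are illustrative, not the tool’s actual code.

```python
import numpy as np
import matplotlib.pyplot as plt

def simulate_residuals(n_steps=500, rho=0.5, seed=None):
    """Simulate autocorrelated 'residuals' with an AR(1)-style recursion.

    Each innovation is drawn from a standard normal distribution
    (mean 0, standard deviation 1); rho controls the autocorrelation.
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n_steps)
    mu = np.empty(n_steps)
    mu[0] = eps[0]
    for t in range(1, n_steps):
        mu[t] = rho * mu[t - 1] + eps[t]
    return mu

mu = simulate_residuals(n_steps=500, rho=0.5, seed=42)

# Figure 1: residuals over time (line chart)
fig1, ax1 = plt.subplots()
ax1.plot(mu)
ax1.set_xlabel("t")
ax1.set_ylabel(r"$\hat{\mu}_t$")

# Figure 2: residuals against their first lag (scatter plot)
fig2, ax2 = plt.subplots()
ax2.scatter(mu[:-1], mu[1:], s=10)
ax2.set_xlabel(r"$\hat{\mu}_{t-1}$")
ax2.set_ylabel(r"$\hat{\mu}_t$")

plt.show()
```

With a positive Rho, the scatter of \hat{\mu}_t against \hat{\mu}_{t-1} slopes upward; with a negative Rho it slopes downward; and with Rho set to 0 it shows no visible pattern.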

Figure 1: Plot of residuals (\hat{\mu}_t)

This figure shows the plot of residuals (\hat{\mu}_t) over time.

The number of time steps and the autocorrelation coefficient (Rho) can be adjusted using the first and second sliders, respectively (click a slider thumb and use the left/right arrow keys on your keyboard to fine-tune the value). After setting both, click “Generate” to create a new figure based on random values from a standard normal distribution with a mean of 0 and a standard deviation of 1. To download the figure, click the “Download line chart” button above. Note that the second figure uses the same random data but plots \hat{\mu}_t against \hat{\mu}_{t-1}.

Figure 2: Plot of \hat{\mu}_t against \hat{\mu}_{t-1}

This figure plots \hat{\mu}_t against \hat{\mu}_{t-1} using the same data used in the first figure.

The number of time steps and the autocorrelation coefficient (Rho) can be adjusted using the first and second sliders, respectively (click a slider thumb and use the left/right arrow keys on your keyboard to fine-tune the value). After setting both, click “Generate” to create a new figure based on random values from a standard normal distribution with a mean of 0 and a standard deviation of 1. To download the figure, click the “Download scatter plot” button above.

What is autocorrelation?

Autocorrelation in time series refers to the correlation of a series with its own past values. In simpler terms, it measures how a time series is correlated with itself at different time lags.

Mathematically, given a time series X_t, where t represents discrete time points, the autocorrelation function (ACF) at lag k is defined as:

\text{ACF}(k) = \frac{\text{Cov}(X_t, X_{t-k})}{\sqrt{\text{Var}(X_t) \cdot \text{Var}(X_{t-k})}}

where:

  • \text{Cov}(X_t, X_{t-k}) is the covariance between the time series at time t and time t-k.
  • \text{Var}(X_t) and \text{Var}(X_{t-k}) are the variances of the time series at time t and time t-k, respectively.
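
As an illustration, the definition above can be evaluated directly on a sample: correlating the series with a k-lagged copy of itself over the overlapping observations gives exactly \text{Cov}/\sqrt{\text{Var} \cdot \text{Var}}. The snippet below is a minimal sketch of that idea (textbook ACF estimators often divide by the full-sample variance instead, so values can differ slightly).

```python
import numpy as np

def acf(x, k):
    """Sample autocorrelation at lag k: correlation of X_t with X_{t-k}."""
    x = np.asarray(x, dtype=float)
    if k == 0:
        return 1.0
    # Pearson correlation between the series and its k-lagged copy
    return np.corrcoef(x[k:], x[:-k])[0, 1]

rng = np.random.default_rng(0)
white_noise = rng.standard_normal(500)
print(acf(white_noise, k=1))  # close to 0 for uncorrelated data
```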

The autocorrelation function provides insights into the patterns and structure of the time series. A high autocorrelation at a specific lag suggests that the series is influenced by its past values at that lag. It’s a fundamental concept in time series analysis and is used for tasks such as detecting seasonality, identifying trends, and assessing stationarity.

Autocorrelation types

Positive, negative, and no autocorrelation refer to the direction and strength of the relationship between a time series and its past values at different lags.

  1. Positive autocorrelation: Positive autocorrelation occurs when the current value of a time series is positively correlated with its past values, so values separated by that lag tend to move in the same direction. For example, if there is positive autocorrelation at lag 1, positive values tend to follow positive values, and negative values tend to follow negative values.
  2. Negative autocorrelation: Negative autocorrelation, on the other hand, occurs when the current value of a time series is negatively correlated with its past values, so values separated by that lag tend to move in opposite directions. For instance, if there is negative autocorrelation at lag 1, positive values tend to follow negative values, and vice versa.
  3. No autocorrelation: No autocorrelation means that there is no systematic relationship between the current value of a time series and its past values at any lag. This indicates that past values do not influence the current value, and there is no pattern in the data that can be exploited for forecasting or analysis.
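
To make the three cases concrete, the short sketch below (reusing the AR(1)-style simulation assumed earlier, with illustrative names) generates one series per case and prints its lag-1 autocorrelation: clearly positive for rho = 0.7, clearly negative for rho = -0.7, and close to 0 for rho = 0.

```python
import numpy as np

def ar1(n, rho, rng):
    """Generate an AR(1) series driven by standard normal innovations."""
    eps = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = eps[0]
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps[t]
    return x

rng = np.random.default_rng(1)
for rho in (0.7, -0.7, 0.0):
    x = ar1(1000, rho, rng)
    lag1 = np.corrcoef(x[1:], x[:-1])[0, 1]
    print(f"rho = {rho:+.1f} -> lag-1 autocorrelation = {lag1:+.2f}")
```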

Autocorrelation is often assessed using statistical techniques such as the autocorrelation function (ACF) or by visual inspection of autocorrelation plots. Understanding the autocorrelation structure of a time series is crucial for various tasks in time series analysis, including forecasting, model building, and hypothesis testing.

Durbin-Watson test to detect autocorrelation

Autocorrelation can be detected using various statistical tests. One of the most commonly used is the Durbin-Watson test, which detects autocorrelation in the residuals of a regression analysis. Here’s how it works:

  1. Calculate the Durbin-Watson statistic (DW):
    • The Durbin-Watson statistic is calculated based on the residuals from a regression model.
    • It measures the degree of autocorrelation in the residuals.
    • The statistic is the sum of the squared differences between adjacent residuals divided by the sum of the squared residuals (the exact formula is given after this list).
    • The Durbin-Watson statistic ranges from 0 to 4. A value of 2 indicates no autocorrelation, while values significantly less than 2 suggest positive autocorrelation, and values significantly greater than 2 suggest negative autocorrelation.
  2. Interpret the Durbin-Watson statistic:
    • If the Durbin-Watson statistic is close to 2 (typically between 1.5 and 2.5), it suggests that there is no significant autocorrelation in the residuals.
    • If the Durbin-Watson statistic is less than 2, it indicates positive autocorrelation, meaning that neighboring residuals tend to be correlated.
    • If the Durbin-Watson statistic is greater than 2, it indicates negative autocorrelation, meaning that neighboring residuals tend to be inversely correlated.
    • The closer the Durbin-Watson statistic is to 0 or 4, the stronger the evidence for autocorrelation.
  3. Conduct hypothesis testing:
    • You can conduct hypothesis testing using the Durbin-Watson statistic. The null hypothesis is that there is no autocorrelation in the residuals (i.e., \rho = 0). The alternative hypothesis is that there is autocorrelation (either positive or negative).
    • You can use critical values or p-values to determine whether to reject the null hypothesis in favor of the alternative.
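
As referenced in step 1, for residuals \hat{\mu}_1, \dots, \hat{\mu}_T the Durbin-Watson statistic is

\text{DW} = \frac{\sum_{t=2}^{T} (\hat{\mu}_t - \hat{\mu}_{t-1})^2}{\sum_{t=1}^{T} \hat{\mu}_t^2}

A minimal Python sketch of the computation (the helper name is illustrative; statistical packages provide equivalent functions):

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic: sum of squared differences between adjacent
    residuals divided by the sum of squared residuals."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Uncorrelated residuals -> statistic near 2 (no autocorrelation)
rng = np.random.default_rng(2)
print(durbin_watson(rng.standard_normal(500)))
```

For large samples, \text{DW} \approx 2(1 - r), where r is the lag-1 autocorrelation of the residuals, which is why values near 2 indicate no autocorrelation and values near 0 or 4 indicate strong positive or negative autocorrelation, respectively.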

The Durbin-Watson test is widely used due to its simplicity and effectiveness in detecting autocorrelation in regression residuals. However, it has some limitations: it is not valid when lagged dependent variables are included in the regression model, and it only tests for first-order (lag-1) autocorrelation, so it cannot detect autocorrelation at higher lags.