Following on the answer provided by @pltrdy in this thread:
https://stackoverflow.com/a/27164416/14744492
How do you convert pandas.Series.autocorr(), which calculates the lag-N (default N=1) autocorrelation of a Series, into autocovariances?
Sadly, a pandas.Series.autocov() method is not implemented in pandas.
What .autocorr(k) calculates is the (Pearson) correlation coefficient at lag k. But we know that, for a series x, that coefficient is:
\rho_k = \frac{Cov(x_{t}, x_{t-k})}{Var(x)}
Then, to get autocovariance, you multiply autocorrelation by the variance:
def autocov_series(x, lag=1):
    # Series.autocorr takes only the lag; multiplying by the variance
    # turns the autocorrelation back into an autocovariance.
    return x.autocorr(lag=lag) * x.var()
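
For example, with a hypothetical Series s of random values (any numeric Series would do), the helper is used like this:

import numpy as np
import pandas as pd

s = pd.Series(np.random.default_rng(0).normal(size=100))  # example data

print(autocov_series(s, lag=1))  # lag-1 autocovariance
print(autocov_series(s, lag=5))  # lag-5 autocovariance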
Note that Series.var uses ddof=1 by default, so the sample variance is divided by N - 1, where N == x.size (and you get an unbiased estimate of the population variance).
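
If you'd rather divide by N instead (the convention numpy's np.var uses by default), you can pass ddof=0 to Series.var. A minimal sketch, reusing the example Series s from above (the name autocov_series_biased is just illustrative):

def autocov_series_biased(x, lag=1):
    # Same idea, but the variance divides by N (ddof=0) instead of N - 1.
    return x.autocorr(lag=lag) * x.var(ddof=0)

print(autocov_series_biased(s, lag=1))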