Computationally efficient methods for two multivariate fractionally integrated models. (English) Zbl 1224.62069

A univariate long memory process with memory \(d\) is one whose spectral density satisfies \[ f(\lambda)\sim C| 1-\exp(-i\lambda)| ^{-2d}\;\text{as}\;\lambda\to0+,\;\text{with}\;C>0. \] The process is stationary and invertible when \(| d| <1/2\). It is said to have short memory when \(d=0\) and long memory for \(0<| d| <1/2\), although some authors reserve the term ‘long memory’ for \(d > 0\) and refer to processes with \(d < 0\) as ‘anti-persistent’ or ‘intermediate memory’; see P.J. Brockwell and R.A. Davis, Time series: theory and methods. 2nd ed., Berlin etc.: Springer (1991; Zbl 0709.62080), for more background. The simplest long memory process is fractionally integrated white noise, \(\{y_t\}\), defined by the equation \( (1-L)^{d}y_t =\varepsilon_t\), where \(\{\varepsilon_t\}\) is white noise with variance \(\sigma^2\) and \(L\) is the lag operator, \(Lx_t = x_{t-1}\). The autoregressive fractionally integrated moving average ARFIMA\((p,d,q)\) process \(\{x_t\}\) is defined by the equation \( a(L)(1-L)^{d}x_t =b(L)\varepsilon_t\), where \(a(L)\) and \(b(L)\) are polynomials of degrees \(p\) and \(q\), respectively, that have no common roots and all of their roots outside the unit circle. Together with the assumption that \(| d| <1/2\), these conditions ensure that \(\{x_t\}\) is stationary and invertible. The ARFIMA\((p,d,q)\) process \(\{x_t\}\) may be viewed either as an ARMA\((p,q)\) process driven by fractional white noise, \(a(L)x_t =b(L)(1-L)^{-d}\varepsilon_t\), or as an ordinary ARMA\((p,q)\) process that is then fractionally integrated, \(x_t =(1-L)^{-d}[b(L)/a(L)]\varepsilon_t\). Since the composition of linear filters is commutative in the univariate case, these two descriptions coincide.
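The operator \((1-L)^{d}\) can be made concrete through its binomial expansion \((1-L)^{d}=\sum_{k\geq 0}\pi_k L^k\) with \(\pi_0=1\) and \(\pi_k=\pi_{k-1}(k-1-d)/k\). A minimal NumPy sketch (function names are illustrative, not from the paper under review) computes these weights and simulates ARFIMA\((0,d,0)\) by applying a truncated \((1-L)^{-d}\) filter to white noise:

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of (1 - L)^d: pi_0 = 1, pi_k = pi_{k-1} (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def simulate_arfima_0d0(d, n, sigma=1.0, burn=1000, seed=None):
    """Simulate ARFIMA(0, d, 0) by convolving white noise with the
    truncated expansion of (1 - L)^{-d}; the burn-in reduces the
    effect of truncating the (infinite) filter."""
    rng = np.random.default_rng(seed)
    psi = frac_diff_weights(-d, n + burn)          # weights of (1 - L)^{-d}
    eps = rng.normal(0.0, sigma, size=n + burn)
    x = np.convolve(eps, psi)[: n + burn]          # causal truncated filter
    return x[burn:]
```

For example, `frac_diff_weights(0.4, 3)` returns `[1.0, -0.4, -0.12]`, matching \(\pi_1=-d\) and \(\pi_2=-d(1-d)/2\). Truncation is only an approximation; exact simulation methods (e.g., circulant embedding, as in the Wood reference below) avoid it.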
The composition of linear filters does not commute in the multivariate case, however, so there are multiple distinct extensions of univariate ARFIMA processes to vector (VARFIMA) processes.
The authors discuss two such multivariate generalizations of fractionally integrated autoregressive models. Although FIVAR and VARFI models look similar at first glance, their implications differ dramatically. Computationally efficient methods are proposed for computing the covariances of each model, for evaluating the quadratic form and approximating the determinant required in maximum likelihood estimation, and for simulating from each model. These algorithms are both accurate and fast, making it feasible to model multivariate long memory time series and to simulate from such models. They are used to fit models to data on goods and services inflation in the United States. For related results, see J. Pai and N. Ravishanker, Stat. Probab. Lett. 79, No. 9, 1282–1289 (2009; Zbl 1160.62348), where a preconditioned conjugate gradient algorithm is applied to the estimation of multivariate long memory time series models.
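The non-commutativity that separates the two models can be verified numerically: with a diagonal fractional differencing operator \(D(L)=\mathrm{diag}((1-L)^{d_i})\) whose memory parameters differ, and a VAR filter \(A(L)=I-\Phi L\) with off-diagonal entries, applying \(D(L)\) then \(A(L)\) (the FIVAR ordering) differs from applying \(A(L)\) then \(D(L)\) (the VARFI ordering). A small sketch, with an arbitrary illustrative \(\Phi\) and \((d_1,d_2)\) not taken from the paper:

```python
import numpy as np

def frac_diff(x, d):
    """Apply the truncated fractional differencing filter (1 - L)^d to a 1-D series."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.convolve(x, w)[:n]

def var1_filter(X, Phi):
    """Apply A(L) = I - Phi L to the rows of X: y_t = x_t - Phi x_{t-1}."""
    Y = X.copy()
    Y[1:] -= X[:-1] @ Phi.T
    return Y

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                    # an arbitrary bivariate series
Phi = np.array([[0.5, 0.3], [0.1, 0.4]])         # off-diagonal VAR(1) coefficients
d = np.array([0.1, 0.4])                         # unequal memory parameters

# D(L) then A(L): fractionally difference each component, then apply the VAR filter.
D_then_A = var1_filter(
    np.column_stack([frac_diff(X[:, i], d[i]) for i in range(2)]), Phi)
# A(L) then D(L): apply the VAR filter first, then fractionally difference.
A_then_D = np.column_stack(
    [frac_diff(var1_filter(X, Phi)[:, i], d[i]) for i in range(2)])

print(np.allclose(D_then_A, A_then_D))           # False: the orderings differ
```

If the memory parameters are equal, \(D(L)\) is a scalar filter times the identity and the two orderings agree, which is the multivariate analogue of the univariate commutativity noted above.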

MSC:

62M10 Time series, auto-correlation, regression, etc. in statistics (GARCH)
65C60 Computational problems in statistics (MSC2010)
62H12 Estimation in multivariate analysis
65C05 Monte Carlo methods
62P20 Applications of statistics to economics

References:

[1] Baillie, Long memory processes and fractional integration in econometrics, Journal of Econometrics 73 pp 5– (1996) · Zbl 0854.62099
[2] Bertelli, A note on calculating autocovariances of long-memory processes, Journal of Time Series Analysis 23 (5) pp 503– (2002) · Zbl 1062.62164
[3] Böttcher, Introduction to Large Truncated Toeplitz Matrices (1999) · doi:10.1007/978-1-4612-1426-7
[4] Brockwell, Time Series: Theory and Methods (1993)
[5] Chan, Circulant preconditioners for Toeplitz-block matrices, Numerical Algorithms 6 pp 89– (1994) · Zbl 0793.65020
[6] Chen, On the correlation matrix of the discrete Fourier transform and the fast solution of large Toeplitz systems for long-memory time series, Journal of the American Statistical Association 101 (474) pp 812– (2006)
[7] Chung, Calculating and analyzing impulse responses for the vector ARFIMA model, Economics Letters 71 pp 17– (2001) · Zbl 1080.62534
[8] Davies, Tests for the Hurst effect, Biometrika 74 pp 95– (1987) · Zbl 0612.62123
[9] Deo, Forecasting realized volatility using a long-memory stochastic volatility model: estimation, prediction and seasonal adjustment, Journal of Econometrics 131 (1-2) pp 29– (2006) · Zbl 1337.62355
[10] Doornik, Computational aspects of maximum likelihood estimation of autoregressive fractionally integrated moving average models, Computational Statistics and Data Analysis 42 (3) pp 333– (2003) · Zbl 1429.62391
[11] Dunsmuir, Vector linear time series models, Advances in Applied Probability 8 (2) pp 339– (1976) · Zbl 0327.62055
[12] Granger, An introduction to long memory time series models and fractional differencing, Journal of Time Series Analysis 1 pp 321– (1980) · Zbl 0503.62079
[13] Hamilton, Time Series Analysis (1994) · Zbl 0831.62061
[14] Hosking, Fractional differencing, Biometrika 68 (1) pp 165– (1981)
[15] Hosoya, The quasi-likelihood approach to statistical inference on multiple time-series with long-range dependence, Journal of Econometrics 73 pp 217– (1996) · Zbl 0854.62085
[16] Lobato, Consistency of the averaged cross-periodogram in long memory time series, Journal of Time Series Analysis 18 (2) pp 137– (1997) · Zbl 0938.62103
[17] Luceño, A fast likelihood approximation for vector general linear processes with long series: application to fractional differencing, Biometrika 83 (3) pp 603– (1996)
[18] Martin, Indirect estimation of ARFIMA and VARFIMA models, Journal of Econometrics 93 pp 149– (1999) · Zbl 0942.62106
[19] Pai, A multivariate preconditioned conjugate gradient approach for maximum likelihood estimation in vector long memory processes, Statistics and Probability Letters 79 pp 1282– (2009) · Zbl 1160.62348
[20] Peach, The historical and recent behavior of goods and services inflation, Economic Policy Review 10 pp 19– (2004)
[21] Ravishanker, Bayesian analysis of vector ARFIMA processes, Australian Journal of Statistics 39 pp 295– (1997) · Zbl 0897.62099
[22] Ravishanker, Bayesian prediction for vector ARFIMA processes, International Journal of Forecasting 18 (2) pp 207– (2002)
[23] Robinson, Time Series with Long Memory (2003) · Zbl 1113.62106
[24] Robinson, Cointegration in fractional systems with unknown integration orders, Econometrica 71 (6) pp 1727– (2003) · Zbl 1154.91614
[25] Robinson, Determination of cointegrating rank in fractional systems, Journal of Econometrics 106 pp 217– (2002) · Zbl 1038.62082
[26] Sela, R. J. (2008) Computationally efficient Gaussian maximum likelihood methods for vector ARFIMA models. Dissertation proposal. Available at: URL http://w4.stern.nyu.edu/emplibrary/EfficientMethods.pdf
[27] Shewchuk, J. R. (1994) An introduction to the conjugate gradient method without the agonizing pain. Unpublished manuscript.
[28] Sowell, F. (1989a) A decomposition of block Toeplitz matrices with applications to vector time series. Unpublished manuscript.
[29] Sowell, F. (1989b) Maximum likelihood estimation of fractionally integrated time series models. Unpublished manuscript.
[30] Tsay, W.-J. (2007) Maximum likelihood estimation of stationary multivariate ARFIMA processes. Unpublished manuscript.
[31] Whittle, On the fitting of multivariate autoregressions, and the approximate canonical factorization of a spectral density matrix, Biometrika 50 (1-2) pp 129– (1963) · Zbl 0129.11304 · doi:10.1093/biomet/50.1-2.129
[32] Wood, Simulation of stationary Gaussian processes in \([0,1]^d\), Journal of Computational and Graphical Statistics 3 (4) pp 409– (1994)