arima(x, order = c(0, 0, 0),
      seasonal = list(order = c(0, 0, 0), period = NA),
      xreg = NULL, include.mean = T, delta = 0.01,
      transform.pars = T, fixed = NULL, init = NULL,
      method = c("ML", "CSS"), n.cond, optim.control = list())

predict(object, n.ahead = 1, newxreg, se.fit = T, ...)
seasonal: a specification of the seasonal part of the ARIMA model, plus the period (which defaults to frequency(x)). This should be a list with components order and period, but a specification of just a numeric vector of length 3 will be turned into a suitable list with the specification as the order.
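A small sketch of the two equivalent forms of the seasonal specification, using the monthly USAccDeaths series shipped with R (the model order is an arbitrary choice for illustration):

```r
## A numeric vector of length 3 is coerced to a list, with period
## defaulting to frequency(x) -- 12 for the monthly USAccDeaths series.
f1 <- arima(USAccDeaths, order = c(0,1,1), seasonal = c(0,1,1))
f2 <- arima(USAccDeaths, order = c(0,1,1),
            seasonal = list(order = c(0,1,1), period = 12))
f1$coef   # agrees with f2$coef
```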
xreg: optionally, a vector or matrix of external regressors, which must have the same number of rows as x.
include.mean: should the ARIMA model include a mean term? The default is T for undifferenced series, F for differenced ones (where a mean would not affect the fit nor predictions).
transform.pars: logical; if true, the ARMA parameters are transformed to ensure that they remain in the region of stationarity. Not used for method = "CSS".
fixed: optional numeric vector of the same length as the total number of parameters. If supplied, only NA entries in fixed will be varied. transform.pars = T will be overridden if any ARMA parameters are fixed.
object: the result of an arima fit.
newxreg: new values of xreg to be used for prediction. Must have at least n.ahead rows.
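A sketch of newxreg in use, with the LakeHuron regression fit that also appears in the examples below (the forecast horizon is an arbitrary choice):

```r
## Fit with an external regressor, then predict 4 steps ahead;
## newxreg supplies the regressor values for the forecast period.
## LakeHuron ends in 1972, so the next four years are 1973:1976.
fit <- arima(LakeHuron, order = c(2,0,0), xreg = time(LakeHuron) - 1920)
pr <- predict(fit, n.ahead = 4, newxreg = (1973:1976) - 1920)
pr$pred
```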
Different definitions of ARMA models have different signs for the AR and/or MA coefficients. The definition here has

    X[t] = a[1]X[t-1] + ... + a[p]X[t-p] + e[t] + b[1]e[t-1] + ... + b[q]e[t-q]

and so the MA coefficients differ in sign from those of S-PLUS. Further, if include.mean is true, this formula applies to X - m rather than X. For ARIMA models with differencing, the differenced series follows a zero-mean ARMA model.
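A quick sketch of the sign convention by simulation (the seed, coefficient and series length are arbitrary choices for illustration):

```r
## Simulate X[t] = 0.6 X[t-1] + e[t] and refit: under the convention
## above the AR coefficient is recovered with a positive sign.
set.seed(1)
e <- rnorm(500)
x <- filter(e, filter = 0.6, method = "recursive")
fit <- arima(x, order = c(1,0,0), include.mean = F)
fit$coef   # estimate of a[1], near +0.6
```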
The variance matrix of the estimates is found from the Hessian of the log-likelihood, and so may only be a rough guide, especially for fits close to the boundary of invertibility.
Optimization is done by optim. It will work best if the columns in
xreg
are roughly scaled to zero mean and
unit variance, but does attempt to estimate suitable scalings.
Finite-history prediction is used. This is only statistically
efficient if the MA part of the fit is invertible, so
predict.Arima
will give a warning for
non-invertible MA models.
The exact likelihood is computed via a state-space representation of
the ARMA process, and the innovations and their variance found by a
Kalman filter based on Gardner et al. (1980). This has the
option to switch to `fast recursions' (assume an effectively infinite
past) if the innovations variance is close enough to its asymptotic
bound. The argument
delta
sets the
tolerance: at its default value the approximation is normally
negligible and the speed-up considerable. Exact computations can be
ensured by setting
delta
to a negative
value.
If
transform.pars
is true, the
optimization is done using an alternative parametrization which is a
variation on that suggested by Jones (1980) and ensures that the model
is stationary. For an AR(p) model the parametrization is via the
inverse tanh of the partial autocorrelations: the same procedure is
applied (separately) to the AR and seasonal AR terms. The MA terms
are not constrained during optimization, but they are transformed to
invertible form after fitting if
transform.pars
is true.
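The AR part of this parametrization can be sketched as follows; par2ar is a hypothetical helper written for illustration, not part of arima, but it follows the same principle (tanh maps unconstrained parameters to partial autocorrelations, and the Durbin-Levinson recursion converts these to AR coefficients):

```r
## Map unconstrained parameters to a stationary AR(p): tanh gives
## partial autocorrelations in (-1, 1); the Durbin-Levinson recursion
## then converts these to AR coefficients.
par2ar <- function(theta) {
  r <- tanh(theta)                 # partial autocorrelations
  phi <- numeric(0)
  for (k in seq_along(r)) {
    if (k > 1) phi <- phi - r[k] * rev(phi)
    phi <- c(phi, r[k])
  }
  phi
}
phi <- par2ar(c(5, -3, 0.2))       # any real input is admissible
## Stationarity: all roots of 1 - phi[1] z - ... - phi[p] z^p lie
## outside the unit circle.
all(Mod(polyroot(c(1, -phi))) > 1)
```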
Missing values are allowed, but any missing values will force
delta
to be ignored and full recursions
used. Note that missing values will be propagated by differencing, so
the procedure used in this function is not fully efficient in that
case.
Conditional sum-of-squares is provided mainly for expositional
purposes. This computes the sum of squares of the fitted innovations
from observation
n.cond
on (where
n.cond
is at least the maximum lag of an
AR term), treating all earlier innovations as zero. Argument
n.cond
can be used to allow comparability
between different fits. The `part log-likelihood' is the first
term, half the log of the estimated mean square. Missing values are
allowed, but will cause many of the innovations to be missing.
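For example (using the lh series shipped with R; the model order is an arbitrary choice for illustration):

```r
## Conditional sum-of-squares versus full maximum likelihood on the
## same series: the estimates are close but not identical.
css <- arima(lh, order = c(1,0,0), method = "CSS")
ml  <- arima(lh, order = c(1,0,0), method = "ML")
cbind(CSS = css$coef, ML = ml$coef)
```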
When regressors are specified, they are orthogonalized prior to fitting unless any of the coefficients is fixed. It can be helpful to roughly scale the regressors to zero mean and unit variance.
For arima, a list of class "Arima" with components including:

coef: a vector of AR, MA and regression coefficients.

aic: the AIC value; only valid for method = "ML" fits.

series: the name of the series x.
For
predict.Arima
, a time series of
predictions, or if
se.fit = T
, a list
with components
pred
, the predictions, and
se
, the estimated standard errors. Both
components are time series.
The standard errors of prediction exclude the uncertainty in the estimation of the ARMA model and the regression coefficients.
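A sketch of how the returned standard errors might be used; the normal-approximation interval below ignores the parameter-estimation uncertainty just noted, and the model is an arbitrary choice for illustration:

```r
## Approximate 95% prediction intervals from the pred and se components:
fit <- arima(lh, order = c(3,0,0))
pr <- predict(fit, n.ahead = 4)   # se.fit = T is the default
cbind(pred  = pr$pred,
      lower = pr$pred - 1.96 * pr$se,
      upper = pr$pred + 1.96 * pr$se)
```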
The results are likely to be different from
arima.mle
, which computes a conditional
likelihood and does not include a mean in the model. Further, the
convention used by
arima.mle
reverses the signs of the MA coefficients.
B. D. Ripley
Brockwell, P. J. and Davis, R. A. (1996) Introduction to Time Series and Forecasting. Springer, New York. Sections 3.3 and 8.3.
Gardner, G., Harvey, A. C. and Phillips, G. D. A. (1980) Algorithm AS154. An algorithm for exact maximum likelihood estimation of autoregressive-moving average models by means of Kalman filtering. Applied Statistics 29, 311–322.
Harvey, A. C. (1993) Time Series Models, 2nd Edition, Harvester Wheatsheaf, sections 3.3 and 4.4.
Harvey, A. C. and McKenzie, C. R. (1982) Algorithm AS182. An algorithm for finite sample prediction from ARIMA processes. Applied Statistics 31, 180–187.
Jones, R. H. (1980) Maximum likelihood fitting of ARMA models to time series with missing observations. Technometrics 22, 389–395.
arima(lh, order = c(1,0,0))
(fit <- arima(lh, order = c(3,0,0)))
arima(lh, order = c(1,0,1))
predict(fit, n.ahead = 12)
arima(lh, order = c(3,0,0), method = "CSS")

# for a model with as few years as this, we want full ML
(fit <- arima(USAccDeaths, order = c(0,1,1),
              seasonal = list(order = c(0,1,1)), delta = -1))
predict(fit, n.ahead = 6)

arima(LakeHuron, order = c(2,0,0), xreg = time(LakeHuron) - 1920)