lmRob(formula, data=<<see below>>, weights=<<see below>>, subset=<<see below>>,
      na.action, model=F, x=F, y=F, contrasts=NULL, nrep=NULL,
      robust.control=lmRob.robust.control(...), genetic.control=NULL, ...)
formula: a formula object, with the response on the left of a ~ operator, and the terms, separated by + operators, on the right.
data: a data.frame in which to interpret the variables named in the formula, or in the subset and the weights arguments. If this is missing, then the variables in the formula should be on the search list. This may also be a single number to handle some special cases; see below for details.
weights: a vector of observation weights; the length of weights must be the same as the number of observations. The weights must be nonnegative, and it is strongly recommended that they be strictly positive, since zero weights are ambiguous compared with using the subset argument.
na.action: a function to filter missing data. This is applied to the model.frame after any subset argument has been used. The default (with na.fail) is to create an error if any missing values are found. A possible alternative is na.exclude, which deletes observations that contain one or more missing values.
model: if TRUE, the model frame is returned in component model.

x: if TRUE, the model matrix is returned in component x.

y: if TRUE, the response is returned in component y.
"Exhaustive"
resampling
is being used, the value of
nrep
is ignored.
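Several of the arguments above can be exercised in a single call. The following is a minimal sketch, not taken from this help page: the data frame and its columns are made up, and it assumes the robust library is attached.

```r
library(robust)  # assumption: the robust library provides lmRob

# Hypothetical data frame with one missing response value
df <- data.frame(resp = c(1.2, 2.3, NA, 4.1, 5.0),
                 pred = c(1.0, 2.1, 3.0, 3.9, 5.2))

fit <- lmRob(resp ~ pred, data = df,
             weights = rep(1, 5),     # nonnegative, ideally strictly positive
             na.action = na.exclude,  # drop rows with missing values
             x = TRUE, y = TRUE)      # keep model matrix and response

fit$x  # model matrix, returned because x=TRUE
fit$y  # response, returned because y=TRUE
```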
"Exhaustive"
resampling is chosen).
Hence you will get (slightly) different answers each time if you make the
same call with a different seed. See
for how
to set the seed, and see
for a complete description of
the object returned.
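Because the fit involves random resampling, results can be made reproducible by fixing the seed before each call. A sketch, assuming the robust library and the oilcity data set used in the example below are available:

```r
library(robust)  # assumption: provides lmRob and the oilcity data set

set.seed(13)
fit1 <- lmRob(Oil ~ Market, data = oilcity)
set.seed(13)
fit2 <- lmRob(Oil ~ Market, data = oilcity)

all.equal(coef(fit1), coef(fit2))  # identical seeds give identical fits
```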
By default, the
lmRob
function automatically
chooses an appropriate
algorithm to compute a final robust estimate with high breakdown point
and high efficiency. The final robust estimate is computed based on an
initial estimate with high breakdown point. For the initial estimation,
the alternate M-S estimate is used if there are any factor variables
in the predictor matrix, and an S-estimate is used otherwise. To compute
the S-estimate, a random resampling or a fast procedure is used unless the
data set is small, in which case exhaustive resampling is employed.
See lmRob.robust.control for how to choose different algorithms.
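The algorithm choices described above are selected through the robust.control argument. A hedged sketch, assuming lmRob.robust.control accepts an initial.alg option with an "Exhaustive" value, as in the robust library:

```r
library(robust)  # assumption: provides lmRob, lmRob.robust.control, oilcity

# Force exhaustive resampling for the initial S-estimate.
# The initial.alg name and its "Exhaustive" value are assumptions
# based on the robust library's control function.
ctrl <- lmRob.robust.control(initial.alg = "Exhaustive")
fit  <- lmRob(Oil ~ Market, data = oilcity, robust.control = ctrl)
```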
Gervini, D., and Yohai, V. J. (1999). A class of robust and fully efficient regression estimates, mimeo, Universidad de Buenos Aires.
Marazzi, A. (1993). Algorithms, Routines, and S Functions for Robust Statistics. Wadsworth & Brooks/Cole, Pacific Grove, CA.
Maronna, R. A., and Yohai, V. J. (1999). Robust regression with both continuous and categorical predictors, mimeo, Universidad de Buenos Aires.
Pena, D., and Yohai, V. (1999). A fast procedure for outlier diagnostics in large regression problems, Journal of the American Statistical Association, 94, 434-445.
Yohai, V. J. (1987). High breakdown-point and high efficiency robust estimates for regression, Annals of Statistics, 15, 642-656.
Yohai, V., Stahel, W. A., and Zamar, R. H. (1991). A procedure for robust estimation and inference in linear regression, in Stahel, W. A. and Weisberg, S. W., Eds., Directions in Robust Statistics and Diagnostics, Part II. Springer-Verlag.
oilcity.rob <- lmRob(Oil ~ Market, data=oilcity)
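The fitted object from the example above can then be inspected with the usual extractor functions; coef, summary, and plot methods exist for lmRob fits in the robust library:

```r
coef(oilcity.rob)     # robust coefficient estimates
summary(oilcity.rob)  # summary table with robust standard errors
plot(oilcity.rob)     # diagnostic plots for the robust fit
```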