lm.fit.chol(x, y, singular.ok = F, tolerance = 1e-07)
lm.fit.qr(x, y, singular.ok = F, tolerance = 1e-07, qr = F)
lm.fit.svd(x, y, u.return = F)
x: the model matrix generated by one of the fitting functions.
Note that none of the fitting methods handles NAs.
tolerance: if the norm of the residual of the jth column from the
preceding columns is less than this multiple of the original norm
of the column, the variable is judged linearly dependent on the
preceding variables.
In addition, any column whose initial norm is exactly zero is
(degenerately) linearly dependent.
Note that the svd algorithm has no need to make adjustments for
linear dependencies, and so has no tolerance argument.
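The dependence test can be illustrated in a few lines of ordinary S code;
this is a sketch of the criterion, not the internal routine: project the
jth column on the preceding columns and compare the norm of what remains
to tolerance times the original norm of the column.

    # sketch of the tolerance criterion, not the internal code
    is.dependent <- function(x, j, tolerance = 1e-07) {
        col.norm <- sqrt(sum(x[, j]^2))
        if (col.norm == 0) return(TRUE)             # degenerate: initial norm exactly zero
        if (j == 1) return(FALSE)                   # no preceding columns to project on
        res <- lsfit(x[, 1:(j - 1), drop = FALSE], x[, j], intercept = FALSE)$residuals
        sqrt(sum(res^2)) < tolerance * col.norm     # tiny leftover norm => dependent
    }
    x <- cbind(a = rnorm(10), b = rnorm(10))
    x <- cbind(x, s = x[, "a"] + x[, "b"])          # third column is the sum of the first two
    is.dependent(x, 3)                              # TRUE
    is.dependent(x, 2)                              # FALSE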
qr: for the qr algorithm, should the full decomposition be returned,
or just the triangular factor?
u.return: for the svd algorithm, should the n by n matrix on the left
of the decomposition be returned (u.return=n), the first p columns
(u.return=p), or neither (u.return=0).
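As a minimal sketch of calling these functions directly (assuming they are
available on the search path; the data here are made up and contain no NAs):

    set.seed(1)
    x <- cbind("(Intercept)" = 1, x1 = rnorm(20), x2 = rnorm(20))  # a model matrix
    y <- drop(x %*% c(1, 2, -1)) + rnorm(20)                       # a response with no NAs
    fit  <- lm.fit.chol(x, y)          # defaults: singular.ok = F, tolerance = 1e-07
    fitq <- lm.fit.qr(x, y, qr = T)    # keep the full qr decomposition
    fits <- lm.fit.svd(x, y)           # default u.return = F: no left-hand matrix
    fit$coefficients                   # named after the columns of x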
The returned object includes a component representing the decomposition,
used by summary.lm() and other computations.
Note that this component, named "R", has a different form
for the svd algorithm, where it contains the diagonal and the
right-hand matrix from the decomposition, than for the other
algorithms, where it is the upper-triangular factor.
The essential property is that the component have a class for which
there is a corresponding solve() method.
This ensures that summaries can be computed transparently to the
numeric method.
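A sketch of that dispatch idea (the class name "upper" and the method below
are illustrative, not the classes these functions actually assign): give the
component a class and supply a solve() method for it, and downstream code
such as summary.lm() never needs to know which decomposition produced it.

    solve.upper <- function(a, b, ...) backsolve(unclass(a), b)  # back-substitution for a triangular factor
    x <- cbind(1, 1:5, (1:5)^2)
    R <- chol(crossprod(x))            # an upper-triangular factor
    class(R) <- "upper"                # illustrative class, so solve() dispatches to solve.upper
    solve(R, c(1, 0, 0))               # callers see only solve(); the form of R is hidden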
When the response is a matrix, the first four components of the returned
object will be matrices as well, with columns corresponding to the
response variables.
Names or dimnames for the various components are inherited in the
natural way from the column labels for x (for the effects, coefficients,
and R) and from the names or row labels of y (for the fitted values
and residuals).
See model.matrix for the computations that determine these labels in
the usual model-fitting context.
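For example (with a small made-up data frame), the column labels attached by
model.matrix are the labels that name the coefficients, effects, and R, while
the row labels of the data carry over to the fitted values and residuals:

    d <- data.frame(y = rnorm(6), g = rep(c("a", "b", "c"), 2))
    x <- model.matrix(~ g, d)
    dimnames(x)[[2]]   # column labels; their exact form depends on the contrasts in effect
    dimnames(x)[[1]]   # row labels, inherited by the fitted values and residuals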