ms(formula, data = <<see below>>, start = <<see below>>, control = <<see below>>, trace = F)
formula: a nonlinear model formula of the form
~ expression; the right side of the formula is essentially an arbitrary S-PLUS expression.
start: a numeric vector or list, with names (the parameters, or
param), that establishes initial values.
Supplying
start is recommended for unambiguous
specification of the parameters.
If
start is omitted, the assumption is that any names occurring in
formula
that are not variables in the data frame are parameters.
The list form of
start allows the individual parameter names
to refer to subsets of the parameters, of arbitrary length.
If a numeric starting vector is supplied, the named parameters must
each be of length 1.
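As an illustration of the two forms (the model functions f and g, the data frame mydata, and the parameter names here are hypothetical):

f <- function(x, alpha, beta) (alpha + x %*% beta)^2  # residual-like quantity to minimize
# list form: each name can refer to a parameter of any length
fit1 <- ms(~ f(x, alpha, beta), mydata,
           start = list(alpha = 1, beta = c(0, 0, 0)))
# numeric vector form: each named parameter must have length 1
fit2 <- ms(~ g(x, alpha, gamma), mydata,
           start = c(alpha = 1, gamma = 0.5))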
control: a list of control values; see
ms.control for the possible control parameters and their default
settings.
trace: a tracer function, typically
trace.ms or
browser.ms.
If
trace is
TRUE, then
trace.ms is used; no tracing is performed when
trace is
FALSE.
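For example, with lprob defined as in the examples below, tracing can be turned on so that trace.ms reports progress at each iteration:

fit.alpha <- ms(~ lprob(D * alpha), pingpong, trace = T)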
An object of class
"ms" is returned, with the final parameters, function
and derivative values, and some internal information about the fit.
See
ms.object.
The standard tracer function is
trace.ms.
Also available, by
trace="browser.ms", is an invocation of the interactive
browser, in a frame containing all the fitting information.
See the definition of these functions for the calling sequence to any
do-it-yourself tracer function.
The use of special trace functions with
ms should be distinguished
from standard S-PLUS tracing; the latter is simpler and usually the
best way to track the modeling.
Tracing through
ms allows access in S-PLUS to the internal flags
of the Fortran minimization algorithm.
If you don't need to look at that information, you can usually
trace a function you have written to compute the model information.
Often tracing on exit, for example,
trace(mymodel, exit = browser),
is a good way to look at your function
mymodel just before it
returns the next model values.
(Remember to untrace it before you edit it.)
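A minimal sketch of that trace/untrace cycle (mymodel, mydata, and theta are hypothetical names):

trace(mymodel, exit = browser)  # browse mymodel's frame just before each return
fit <- ms(~ mymodel(x, theta), mydata, start = list(theta = 0))
untrace(mymodel)                # untrace before editing mymodel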
The possible messages, and their meanings, are as follows:
successive parameter values are within a specified tolerance of each other.
Chambers, J. M. and Hastie, T. J. (eds.) (1992). Statistical Models in S, Chapter 10, "Nonlinear Models". Pacific Grove, CA: Wadsworth & Brooks/Cole.
lprob <- function(lp) log(1 + exp(lp)) - lp  # negative log-likelihood contribution
fit.alpha <- ms(~ lprob(D * alpha), pingpong)
lprob2 <- function(lp, X) {
	# lp is the linear predictor, X is the data in the linear predictor
	elp <- exp(lp)
	z <- 1 + elp
	value <- log(z) - lp
	# analytic gradient with respect to the parameters, supplied as an attribute
	attr(value, "gradient") <- - X/z
	value
}
fit.alpha.2 <- ms(~ lprob2(D * alpha, D), pingpong)
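Assuming the components described under ms.object, the fitted parameter and the minimized objective can then be examined directly:

fit.alpha.2$parameters  # estimated value of alpha
fit.alpha.2$value       # sum of the lprob2 values at the minimum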