optim(par, fn, gr = NULL, method = c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN"), lower = -Inf, upper = Inf, control = list(), hessian = FALSE, ...)
"Nelder-Mead"
. If it is
NULL
and it is needed, a finite-difference
approximation will be used.
"L-BFGS-B"
method.
fn
and
gr
.
By default this function performs minimization, but it will maximize if control$fnscale is negative.
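For example, setting fnscale to -1 maximizes the supplied function. A minimal sketch (the objective f below is purely illustrative):

f <- function(x) -(x[1] - 1)^2 - (x[2] + 2)^2        # hypothetical objective, maximum 0 at c(1, -2)
optim(c(0, 0), f, control = list(fnscale = -1))$par  # close to c(1, -2)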
The default method is an implementation of that of Nelder and Mead
(1965), that uses only function values and is robust but relatively
slow. It will work reasonably well for non-differentiable functions.
Method "BFGS" is a quasi-Newton method (also known as a variable metric algorithm), specifically that published simultaneously in 1970 by Broyden, Fletcher, Goldfarb and Shanno. This uses function values and gradients to build up a picture of the surface to be optimized.
Method "CG" is a conjugate gradients method based on that by Fletcher and Reeves (1964) (but with the option of Polak-Ribiere or Beale-Sorenson updates). Conjugate gradient methods will generally be more fragile than the BFGS method, but as they do not store a matrix they may be successful in much larger optimization problems.
Method "L-BFGS-B" is that of Byrd et al. (1995), which allows box constraints, that is each variable can be given a lower and/or upper bound. The initial value must satisfy the constraints. This uses a limited-memory modification of the BFGS quasi-Newton method. If non-trivial bounds are supplied, this method will be selected, with a warning.
Nocedal and Wright (1999) is a comprehensive reference for the previous three methods.
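As a small illustration of bound handling (the quadratic objective q is hypothetical), supplying non-trivial bounds without naming a method switches to "L-BFGS-B" with a warning, while naming the method explicitly avoids the warning:

q <- function(x) sum((x - 5)^2)            # hypothetical; unconstrained minimum at c(5, 5)
optim(c(3, 3), q, lower = 0, upper = 4)    # "L-BFGS-B" is selected, with a warning
optim(c(3, 3), q, method = "L-BFGS-B",
      lower = 0, upper = 4)$par            # both parameters end up at the upper bound 4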
Method "SANN" is a variant of simulated annealing given in Belisle (1992). Simulated annealing belongs to the class of stochastic global optimization methods. It uses only function values but is relatively slow. It will also work for non-differentiable functions. This implementation uses the Metropolis function for the acceptance probability. The next candidate point is generated from a Gaussian Markov kernel with scale proportional to the actual temperature. Temperatures are decreased according to the logarithmic cooling schedule as given in Belisle (1992, p. 890). Note that the "SANN" method depends critically on the settings of the control parameters. It is not a general-purpose method but can be very useful in getting to a good value on a very rough surface.
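Because the result depends so strongly on the control settings, it can be instructive to compare runs at different starting temperatures. A minimal sketch (the one-dimensional objective fs and the particular settings are illustrative only):

fs <- function(x) (x^2 - 4)^2 + sin(5 * x)   # hypothetical function with several local minima
set.seed(1)                                  # "SANN" is stochastic, so fix the seed
optim(3, fs, method = "SANN", control = list(maxit = 5000, temp = 1))$par
optim(3, fs, method = "SANN", control = list(maxit = 5000, temp = 50))$par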
The control argument is a list that can supply any of the following components:
trace
Integer. If positive, tracing information on the progress of the optimization is produced. Higher values may produce more tracing information: for method "L-BFGS-B" there are six levels of tracing. (To understand exactly what these do see the source code: higher levels give more detail.)
fnscale
An overall scaling to be applied to the value of fn and gr during optimization. If negative, turns the problem into a maximization problem. Optimization is performed on fn(par)/fnscale.
parscale
A vector of scaling values for the parameters. Optimization is performed on par/parscale and these should be comparable in the sense that a unit change in any element produces about a unit change in the scaled value.
ndeps
A vector of step sizes for the finite-difference approximation to the gradient, on the par/parscale scale. Defaults to 1e-3.
maxit
The maximum number of iterations. Defaults to 100 for the derivative-based methods, and 500 for "Nelder-Mead". For "SANN", maxit gives the total number of function evaluations; there is no other stopping criterion. Defaults to 10000.
abstol
The absolute convergence tolerance. Only useful for non-negative
functions, as a tolerance for reaching zero.
reltol
Relative convergence tolerance. The algorithm stops if it is unable to reduce the value by a factor of reltol * (abs(val) + reltol) at a step. Defaults to sqrt(.Machine$double.eps), typically about 1e-8.
alpha, beta, gamma
Scaling parameters for the "Nelder-Mead" method. alpha is the reflection factor (default 1.0), beta the contraction factor (0.5) and gamma the expansion factor (2.0).
REPORT
The frequency of reports for the "BFGS" and "L-BFGS-B" methods if control$trace is positive. Defaults to every 10 iterations.
type
For the conjugate-gradients method. Takes value 1 for the Fletcher-Reeves update, 2 for Polak-Ribiere and 3 for Beale-Sorenson.
lmm
An integer giving the number of BFGS updates retained in the "L-BFGS-B" method. Defaults to 5.
factr
Controls the convergence of the "L-BFGS-B" method. Convergence occurs when the reduction in the objective is within this factor of the machine tolerance. Default is 1e7, that is a tolerance of about 1e-8.
pgtol
Helps control the convergence of the "L-BFGS-B" method. It is a tolerance on the projected gradient in the current search direction. This defaults to zero, when the check is suppressed.
temp
Controls the "SANN" method. It is the starting temperature for the cooling schedule. Defaults to 10.
tmax
The number of function evaluations at each temperature for the "SANN" method. Defaults to 10.
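A sketch of how several of these components are commonly combined in one call (the objective obj and the particular values are illustrative only):

obj <- function(x) -((x[1] - 2)^2 + (x[2]/1000 - 3)^2)   # hypothetical; maximum at c(2, 3000)
optim(c(0, 0), obj, method = "BFGS",
      control = list(trace = 1,              # report progress (every REPORT iterations)
                     fnscale = -1,           # maximize rather than minimize
                     parscale = c(1, 1000),  # second parameter varies on a much larger scale
                     maxit = 200,            # raise the iteration limit from the default 100
                     reltol = 1e-10))        # tighter relative convergence tolerance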
optim returns a list with components:
par
The best set of parameters found.
value
The value of fn corresponding to par.
counts
A two-element integer vector giving the number of calls to fn and gr respectively. This excludes those calls needed to compute the Hessian, if requested, and any calls to fn to compute a finite-difference approximation to the gradient.
convergence
An integer code. 0 indicates successful convergence. Error codes are:
1
indicates that the iteration limit maxit had been reached.
10
indicates degeneracy of the Nelder-Mead simplex.
51
indicates a warning from the "L-BFGS-B" method; see component message for further details.
52
indicates an error from the "L-BFGS-B" method; see component message for further details.
message
A character string giving any additional information returned by the optimizer, or NULL.
hessian
Only returned if the argument hessian is true. A symmetric matrix giving an estimate of the Hessian at the solution found. Note that this is the Hessian of the unconstrained problem even if the box constraints are active.
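A short sketch of inspecting these components, using the Rosenbrock function from the examples below:

fr <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2   # Rosenbrock Banana function
res <- optim(c(-1.2, 1), fr, method = "BFGS", hessian = TRUE)
res$par          # best parameters found, close to c(1, 1)
res$value        # value of fn at res$par
res$counts       # number of calls to fn and gr
res$convergence  # 0 indicates successful convergence
res$message      # NULL unless the optimizer supplied extra information
eigen(res$hessian)$values   # all positive at a local minimum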
The code for methods "Nelder-Mead", "BFGS" and "CG" was based originally on Pascal code in Nash (1990) that was translated by p2c and then hand-optimized. Dr Nash has agreed that the code can be made freely available.
The code for method "SANN" was contributed by A. Trapletti.
The optim function was written by Brian Ripley. It originally appeared in the MASS library section by Venables and Ripley. With Brian Ripley's permission, the function was incorporated into core S-PLUS, starting with S-PLUS Version 8.0.
The code for method "L-BFGS-B" is based on Fortran code by Zhu, Byrd, Lu-Chen and Nocedal obtained from Netlib (file opt/lbfgs_bcm.shar: another version is in toms/778).
Belisle, C. J. P. (1992) Convergence theorems for a class of simulated
annealing algorithms on R^d.
J. Applied Probability,
29, 885-895.
Byrd, R. H., Lu, P., Nocedal, J. and Zhu, C. (1995) A limited
memory algorithm for bound constrained optimization.
SIAM J. Scientific Computing,
16, 1190-1208.
Fletcher, R. and Reeves, C. M. (1964) Function minimization by
conjugate gradients.
Computer Journal
7, 148-154.
Nash, J. C. (1990) Compact Numerical Methods for
Computers. Linear Algebra and Function Minimisation. Adam Hilger.
Nelder, J. A. and Mead, R. (1965) A simplex algorithm for function
minimization.
Computer Journal
7, 308-313.
Nocedal, J. and Wright, S. J. (1999)
Numerical Optimization.
Springer.
fr <- function(x) {   ## Rosenbrock Banana function
    x1 <- x[1]
    x2 <- x[2]
    100 * (x2 - x1 * x1)^2 + (1 - x1)^2
}
grr <- function(x) {  ## Gradient of 'fr'
    x1 <- x[1]
    x2 <- x[2]
    c(-400 * x1 * (x2 - x1 * x1) - 2 * (1 - x1),
       200 * (x2 - x1 * x1))
}
optim(c(-1.2, 1), fr)
optim(c(-1.2, 1), fr, grr, method = "BFGS")
optim(c(-1.2, 1), fr, NULL, method = "BFGS", hessian = TRUE)
optim(c(-1.2, 1), fr, grr, method = "CG")
optim(c(-1.2, 1), fr, grr, method = "CG", control = list(type = 2))
optim(c(-1.2, 1), fr, grr, method = "L-BFGS-B")

flb <- function(x) {
    p <- length(x)
    sum(c(1, rep(4, p - 1)) * (x - c(1, x[-p])^2)^2)
}
## 25-dimensional box constrained
optim(rep(3, 25), flb, NULL, "L-BFGS-B",
      lower = rep(2, 25), upper = rep(4, 25))   # par[24] is *not* at boundary

## "wild" function, global minimum at about -15.81515
fw <- function(x)
    10 * sin(0.3 * x) * sin(1.3 * x^2) + 0.00001 * x^4 + 0.2 * x + 80
x <- seq(-50, 50, len = 1000)
plot(x, fw(x), main = "optim() minimising 'wild function'", type = "l")
(res <- optim(50, fw, method = "SANN",
              control = list(maxit = 20000, temp = 20, parscale = 20)))
## Now improve locally
(r2 <- optim(res$par, fw, method = "BFGS"))
points(r2$par, r2$value, pch = 8, cex = 2)