Nonlinear Least Squares Subject to Box Constraints

DESCRIPTION:

Finds a local minimizer of the sum of squares of nonlinear functions, subject to box (bound) constraints on the parameters.

USAGE:

nlregb(nres, start, residuals, jacobian=NULL, scale=NULL,
       control=NULL, lower=-Inf, upper=Inf, ...)

REQUIRED ARGUMENTS:

nres
the number of functions whose sum of squares is to be minimized.
start
p-vector of initial values for the parameters (NAs not allowed).
residuals
a vector-valued S-PLUS function that returns the vector of values of the functions whose sum of squares is to be minimized. This function must be of the form r(x, ...), where x is the vector of parameters over which the minimization takes place. Users can accumulate information through attributes of the value of residuals: if the attributes include any additional arguments of residuals or jacobian, the next call to residuals or jacobian will use the new values of those arguments (a small sketch of this mechanism appears after the argument list below). The order of the residual functions must be preserved throughout the computation.

OPTIONAL ARGUMENTS:

jacobian
an S function that returns the n by p Jacobian matrix of the residual functions, that is, the matrix whose rows are the gradients of the individual residual functions. This function must be of the form j(x, ...), where x is the vector of parameters over which the optimization takes place. As for residuals, users can accumulate information through attributes of the value of jacobian. It cannot be assumed that the value of x on a given call to jacobian is the same as the value of x used in the previous call to residuals. If jacobian is not supplied, the Jacobian matrix is estimated by finite differences.
scale
either a single positive value or a numeric vector of positive values, of length equal to the number of parameters, used to scale the parameter vector. Unless specified by the user, scale is initialized automatically within nlregb. Although scale can have a great effect on the performance of the algorithm, it is not known how to choose it optimally. Automatic updating of the scale vector is the default; other options can be selected through control.
control
a list of parameters by which the user can control various aspects of the minimization. For details, see the help file for nlregb.control.
lower, upper
either a single numeric value or a vector of length equal to the number of parameters, giving lower or upper bounds for the parameter values. The absence of a bound may be indicated by NA, NULL, or by -Inf and Inf. The default is unconstrained minimization: lower = -Inf, upper = Inf.
...
additional arguments for residuals and/or jacobian.
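For illustration only, here is a minimal sketch of the attribute mechanism described under residuals. The function counted.res and its extra argument ncalls are hypothetical names, not part of nlregb; x0, y and A are as in the first example under EXAMPLES.

counted.res <- function(x, y, A, ncalls) {
    r <- A %*% x - y
    # attach the updated count as an attribute; because ncalls is also an
    # argument of counted.res, the next call receives the new value
    attr(r, "ncalls") <- ncalls + 1
    r
}
fit <- nlregb(nres=length(y), start=x0, residuals=counted.res,
              y=y, A=A, ncalls=0)
fit$aux    # accumulated attributes are returned in the aux component (see VALUE)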

VALUE:

a list with the following components (a brief example of extracting them follows the list):
parameters
final values of the parameters over which the optimization takes place.
objective
the final value of the objective (sum of squares).
message
a statement of the reason for termination. The possible messages, and their meanings, are as follows:
"X CONVERGENCE"
successive parameter values are within a specified tolerance of each other.
"RELATIVE FUNCTION CONVERGENCE"
successive objective values are within a specified tolerance of each other.
"BOTH X AND RELATIVE FUNCTION CONVERGENCE"
both of the above.
"ABSOLUTE FUNCTION CONVERGENCE"
applies only when the minimum objective value is expected to be zero, for example when solving a system of nonlinear equations with a least-squares objective.
"SINGULAR CONVERGENCE"
the algorithm can make no further progress because the problem appears to have too many degrees of freedom (it is over-parameterized). This usually means that the objective function is not differentiable or does not have an optimum.
"FALSE CONVERGENCE"
returned when the algorithm cannot make further progress. The usual cause is that a jacobian (gradient) has been supplied but is computed incorrectly.

The optimization may also stop because it exceeds a prescribed limit on the number of residual evaluations or on the number of iterations.
grad.norm
the final norm of the objective gradient. If there are active bounds, then components corresponding to active bounds are excluded from the norm calculation. If the number of active bounds is equal to the number of parameters, NA will be returned.
iterations
the total number of iterations before termination.
r.evals
the total number of residual evaluations before termination.
j.evals
the total number of jacobian evaluations before termination.
scale
the final value of the scale vector.
aux
the final value of the function attributes.
residuals
final value of the residuals.
jacobian
final value of the jacobian (if supplied).
call
a copy of the call to nlregb.
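
For illustration, using the objects from the first example under EXAMPLES, the components above are extracted in the usual way:

fit <- nlregb(nres=n, start=x0, residuals=lin.res, y=y, A=A)
fit$message       # reason for termination
fit$parameters    # final parameter values
fit$grad.norm     # gradient norm over parameters not at active bounds
fit$r.evals       # number of residual evaluations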

NOTE:

nlregb is intended for functions that have at least two continuous derivatives on all of the feasible region, including the boundary. For best results, the Jacobian matrix of the residuals should be supplied whenever possible. For greater efficiency, function and derivative values can be computed in C or Fortran called from within the S-PLUS residuals and jacobian functions.

METHOD:

nlregb is based on the Fortran routines dn2fb and dn2gb (Dennis et al. (1981), Gay (1984), AT&T (1984)) from NETLIB (Dongarra and Grosse (1987)).

REFERENCES:

AT&T Bell Laboratories (1984). PORT Mathematical Subroutine Library Manual.

Dongarra, J. J. and Grosse, E. (1987). Distribution of mathematical software via electronic mail. Communications of the ACM, 30, pp. 403-407.

Dennis, J. E., Gay, D. M., and Welsch, R. E. (1981). An Adaptive Nonlinear Least-Squares Algorithm. ACM Transactions on Mathematical Software, 7, pp. 348-368.

Dennis, J. E., Gay, D. M., and Welsch, R. E. (1981). Algorithm 573. NL2SOL - An Adaptive Nonlinear Least-Squares Algorithm. ACM Transactions on Mathematical Software, 7, pp. 369-383.

Gay, D. M. (1984). A trust region approach to linearly constrained optimization. In Numerical Analysis: Proceedings, Dundee 1983, F. A. Lootsma (ed.), Springer, Berlin, pp. 171-189.

SEE ALSO:

nlregb.control.

EXAMPLES:

# This example uses nlregb to solve a linear least-squares problem
n <- 20; p <- 5
A <- matrix(rnorm(n*p), nrow=n, ncol=p)
x <- rep(1, p)
y <- A%*%x
x0 <- rnorm(length(x))
lin.res <- function(x, y, A) {A%*%x-y}
nlregb(nres=n, start=x0, res=lin.res, y=y, A=A)
# Now add bounds, with the unconstrained solution partially outside them
x <- c(0, -2, 0, 2, 0)
y <- A %*% x
nlregb(nres=n, st=x0, res=lin.res, lo=-1, up=1,
       y=y, A=A)
# Now use the Jacobian matrix
lin.jac <- function(x, y, A) {A}
nlregb(nres=n, st=x0, res=lin.res, jac=lin.jac, lo=-1,
       up=1, y=y, A=A)

# An example fitting a nonlinear function (a variogram) using
#   data stored in a data frame.
# First simulate the data:
d <- runif(40, 0, 10)
vg <- 0.3 + 1.2 * (1 - exp(- d / 2)) + rnorm(40, 0, .05)
dvg <- data.frame(d=d, vg=vg)
# Define the residual function:
resid.func <- function(x, data) {
    data$vg - (x[1] + x[2] * (1.0 - exp(-data$d / x[3])))
}
# Call nlregb:
fit <- nlregb(nres=nrow(dvg), start=c(.2, 1, 5), residuals=resid.func,
    lower=rep(0, 3), upper=rep(Inf, 3), data=dvg)
fit$parameters
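# As suggested in the NOTE, an analytic Jacobian can also be supplied.
# The function vg.jac below is illustrative (not part of the original example);
# its columns are the partial derivatives of the residuals in resid.func
# with respect to x[1], x[2] and x[3].
vg.jac <- function(x, data) {
    e <- exp(-data$d / x[3])
    cbind(rep(-1, nrow(data)), -(1 - e), x[2] * data$d * e / x[3]^2)
}
fit2 <- nlregb(nres=nrow(dvg), start=c(.2, 1, 5), residuals=resid.func,
    jacobian=vg.jac, lower=rep(0, 3), upper=rep(Inf, 3), data=dvg)
fit2$parameters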