tune(method, train.x, train.y = NULL, data = list(),
     validation.x = NULL, validation.y = NULL,
     ranges = NULL, predict.func = predict,
     tunecontrol = tune.control(), ...)
best.tune(...)
train.y: the response variable if train.x is a predictor matrix. Ignored if train.x is a formula.
validation.x: an optional validation set. Depending on whether a formula interface is used or not, the response can be included in validation.x or separately specified using validation.y.
ranges: a named list of parameter vectors spanning the sampling space. The vectors will usually be created by seq.
predict.func: optional predict function, if the standard predict behaviour is inadequate.
"tune.control"
, as created by the
function
tune.control()
. If omitted,
tune.control()
gives the defaults.
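As a minimal sketch of how these arguments fit together (assuming the built-in iris data and an arbitrary hold-out split; with the formula interface the response stays inside validation.x, and sampling = "fix" is chosen here so the supplied validation set is what gets evaluated):

library(e1071)
data(iris)

## hold out every third row as a validation set (arbitrary split)
val <- seq(3, nrow(iris), by = 3)

obj <- tune(svm, Species ~ ., data = iris[-val, ],
            validation.x = iris[val, ],          # response included via the formula
            ranges       = list(cost = 2^(0:4)),
            tunecontrol  = tune.control(sampling = "fix"))
summary(obj)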
As performance measure, the classification error is used for classification, and the mean squared error for regression. It is possible to specify only one parameter combination (i.e., vectors of length 1) to obtain an error estimation of the specified type (bootstrap, cross-validation, etc.) on the given data set. For convenience, tune.foo() wrappers are defined, e.g., tune.svm() for svm().
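For example, fixing every parameter to a single value turns tune() into a plain error estimator; a small sketch using 10-fold cross-validation on the iris data (the gamma and cost values are arbitrary):

library(e1071)
data(iris)

## vectors of length 1: no grid search, just a cross-validated error estimate
est <- tune(svm, Species ~ ., data = iris,
            ranges      = list(gamma = 0.5, cost = 4),
            tunecontrol = tune.control(sampling = "cross", cross = 10))
est$best.performance    # estimated classification error

## the same via the convenience wrapper
est2 <- tune.svm(Species ~ ., data = iris, gamma = 0.5, cost = 4)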
Cross-validation randomizes the data set before building the splits, which, once created, remain constant during the training process. The splits can be recovered through the train.ind component of the returned object.
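A short sketch of recovering the splits, assuming 5-fold cross-validation on the iris data:

library(e1071)
data(iris)

obj <- tune(svm, Species ~ ., data = iris,
            ranges      = list(cost = 2^(1:3)),
            tunecontrol = tune.control(sampling = "cross", cross = 5))

## one vector of training-row indices per fold
length(obj$train.ind)           # 5
sapply(obj$train.ind, length)   # 120 rows each (4/5 of 150)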
For tune, an object of class tune is returned, including the components best.parameters, best.performance, train.ind, performances (if requested), and best.model (if requested).

best.tune() returns the best model detected by tune().
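A small sketch of best.tune(), which takes the same arguments as tune() but returns the winning fitted model itself (here an svm fit on the iris data) rather than the tuning summary:

library(e1071)
data(iris)

best <- best.tune(svm, Species ~ ., data = iris,
                  ranges = list(gamma = 2^(-1:1), cost = 2^(2:4)))
class(best)                 # an svm fit, not a tune object
predict(best, head(iris))   # predictions from the best model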
David Meyer (David.Meyer@R-project.org)
## tune `svm' for classification with RBF-kernel (default in svm),
## using one split for training/validation set
data(iris)
obj <- tune(svm, Species ~ ., data = iris,
            ranges = list(gamma = 2^(-1:1), cost = 2^(2:4)),
            tunecontrol = tune.control(sampling = "fix"))

## alternatively:
obj <- tune.svm(Species ~ ., data = iris,
                gamma = 2^(-1:1), cost = 2^(2:4))

summary(obj)
# plot(obj)