JMSLTM Numerical Library 4.0
java.lang.Object
    com.imsl.datamining.neural.QuasiNewtonTrainer

Trains a network using the quasi-Newton method, MinUnconMultiVar.
The Java Logging API can be used to trace the training performance. The name of this logger is com.imsl.datamining.neural.QuasiNewtonTrainer. Accumulated levels of detail correspond to Java's FINE, FINER, and FINEST logging levels, with FINE yielding the smallest amount of information and FINEST yielding the most. The levels of output yield the following:
Level  | Output
FINE   | A message on entering and exiting method train, and any exceptions from and the exit status of MinUnconMultiVar.
FINER  | All of the messages in FINE, the input settings, and a summary report with the statistics from Network.computeStatistics(), the number of function evaluations, and the elapsed time.
FINEST | All of the messages in FINER, and a table of the computed weights and their gradient values.
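As an illustration (not part of the original page), the sketch below shows one way to enable FINER tracing for this class with the java.util.logging API, using the getLogger() and getFormatter() methods documented below; the wrapper class name and the console handler are assumptions made only for this example.

    import java.util.logging.ConsoleHandler;
    import java.util.logging.Formatter;
    import java.util.logging.Handler;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import com.imsl.datamining.neural.QuasiNewtonTrainer;

    public class TrainerLoggingSetup {
        public static void enableTracing() {
            // The trainer's Logger; FINER adds the input settings and the
            // summary report to the FINE-level messages.
            Logger logger = QuasiNewtonTrainer.getLogger();
            logger.setLevel(Level.FINER);

            // Send the records to the console, formatted with the trainer's
            // own Formatter when one is available.
            Handler handler = new ConsoleHandler();
            handler.setLevel(Level.FINER);
            Formatter formatter = QuasiNewtonTrainer.getFormatter();
            if (formatter != null) {
                handler.setFormatter(formatter);
            }
            logger.addHandler(handler);
        }
    }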
See Also:
MinUnconMultiVar, Serialized Form

Nested Class Summary
protected class    QuasiNewtonTrainer.BlockGradObjective
protected class    QuasiNewtonTrainer.BlockObjective
static interface   QuasiNewtonTrainer.Error
                       Error function to be minimized by the trainer.
protected class    QuasiNewtonTrainer.GradObjective
                       The Objective class is passed to the optimizer.
protected class    QuasiNewtonTrainer.Objective
                       The Objective class is passed to the optimizer.
Field Summary
static QuasiNewtonTrainer.Error    SUM_OF_SQUARES
                                       Compute the sum of squares error.
Constructor Summary
QuasiNewtonTrainer()
    Constructs a QuasiNewtonTrainer object.
Method Summary
protected Object            clone()
                                Clones a copy of the trainer.
QuasiNewtonTrainer.Error    getError()
                                Returns the function used to compute the error to be minimized.
double[]                    getErrorGradient()
                                Returns the value of the gradient of the error function with respect to the weights.
int                         getErrorStatus()
                                Returns the error status from the trainer.
double                      getErrorValue()
                                Returns the final value of the error function.
static Formatter            getFormatter()
                                Returns the logging Formatter object.
static Logger               getLogger()
                                Returns the Logger object.
int                         getTrainingIterations()
                                Returns the number of iterations used during training.
boolean                     getUseBackPropagation()
                                Returns the use back propagation setting.
protected void              setEpochNumber(int num)
                                Sets the epoch number for the trainer.
void                        setError(QuasiNewtonTrainer.Error error)
                                Sets the function used to compute the network error.
void                        setFalseConvergenceTolerance(double falseConvergenceTolerance)
                                Sets the false convergence tolerance for the Trainer.
void                        setGradientTolerance(double gradientTolerance)
                                Sets the gradient tolerance.
void                        setMaximumStepsize(double maximumStepsize)
                                Sets the maximum step size.
void                        setMaximumTrainingIterations(int maximumTrainingIterations)
                                Sets the maximum number of iterations to use in training.
protected void              setParallelMode(ArrayList[] allLogRecords)
                                Sets the trainer to be used in a multi-threaded EpochTrainer.
void                        setRelativeTolerance(double relativeTolerance)
                                Sets the relative tolerance.
void                        setStepTolerance(double stepTolerance)
                                Sets the scaled step tolerance.
void                        setUseBackPropagation(boolean flag)
                                Sets whether or not to use the back propagation algorithm for gradient calculations during network training.
void                        train(Network network, double[][] xData, double[][] yData)
                                Trains the neural network using supplied training patterns.

Methods inherited from class java.lang.Object
equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Field Detail

public static final QuasiNewtonTrainer.Error SUM_OF_SQUARES
    This is the default Error object used by QuasiNewtonTrainer.
Constructor Detail

public QuasiNewtonTrainer()
    Constructs a QuasiNewtonTrainer object.

Method Detail

protected Object clone()
    Clones a copy of the trainer.
public QuasiNewtonTrainer.Error getError()
    Returns the function used to compute the error to be minimized.
    Returns: The Error object containing the function to be minimized.

public double[] getErrorGradient()
    Returns the value of the gradient of the error function with respect to the weights.
    Specified by: getErrorGradient in interface Trainer
    Returns: A double array whose length is equal to the number of network weights, containing the value of the gradient of the error function with respect to the weights. Before training, null is returned.

public int getErrorStatus()
    Returns the error status from the trainer.
    Specified by: getErrorStatus in interface Trainer
    Returns: An int representing the error status from the trainer. Zero indicates that no errors were encountered during training. Any non-zero value indicates that some error condition arose during training. In many cases the trainer is able to recover from these conditions and produce a well-trained network.

    Error Status | Condition
    0 | No error occurred during training.
    1 | The last global step failed to locate a lower point than the current error value. The current solution may be an approximate solution and no more accuracy is possible, or the step tolerance may be too large.
    2 | Relative function convergence; both the actual and predicted relative reductions in the error function are less than or equal to the relative function convergence tolerance.
    3 | Scaled step tolerance satisfied; the current point may be an approximate local solution, or the algorithm is making very slow progress and is not near a solution, or the step tolerance is too big.
    4 | MinUnconMultiVar.FalseConvergenceException thrown by optimizer.
    5 | MinUnconMultiVar.MaxIterationsException thrown by optimizer.
    6 | MinUnconMultiVar.UnboundedBelowException thrown by optimizer.

    See Also: MinUnconMultiVar.FalseConvergenceException, MinUnconMultiVar.MaxIterationsException, MinUnconMultiVar.UnboundedBelowException
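As an illustration (not part of the original page), a caller might branch on these codes after training; the method name is the one documented on this page, and the printed messages simply paraphrase the table above.

    import com.imsl.datamining.neural.QuasiNewtonTrainer;

    public class ErrorStatusReport {
        // Prints a short interpretation of the trainer's error status,
        // following the table above.
        public static void report(QuasiNewtonTrainer trainer) {
            switch (trainer.getErrorStatus()) {
                case 0:  System.out.println("No error occurred during training."); break;
                case 1:  System.out.println("Last global step failed to find a lower point."); break;
                case 2:  System.out.println("Relative function convergence."); break;
                case 3:  System.out.println("Scaled step tolerance satisfied."); break;
                case 4:  System.out.println("False convergence reported by the optimizer."); break;
                case 5:  System.out.println("Maximum iterations reached in the optimizer."); break;
                case 6:  System.out.println("Error function appears to be unbounded below."); break;
                default: System.out.println("Unrecognized error status."); break;
            }
        }
    }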
public double getErrorValue()
    Returns the final value of the error function.
    Specified by: getErrorValue in interface Trainer
    Returns: A double representing the final value of the error function from the last training. Before training, NaN is returned.

public static Formatter getFormatter()
    Returns the logging Formatter object. Logger support requires JDK 1.4; use with earlier versions returns null. The returned Formatter is used as input to Handler.setFormatter(java.util.logging.Formatter) to format the output log.
    Returns: The Formatter object, if present, or null.

public static Logger getLogger()
    Returns the Logger object. This is the Logger used to trace this class. It is named com.imsl.datamining.neural.QuasiNewtonTrainer.
    Returns: The Logger object, if present, or null.

public int getTrainingIterations()
    Returns the number of iterations used during training.
    Returns: An int representing the number of iterations used during training.
    See Also: MinUnconMultiVar.getIterations()
public boolean getUseBackPropagation()
    Returns the use back propagation setting.
    Returns: A boolean specifying whether or not back propagation is being used for gradient calculations.

protected void setEpochNumber(int num)
    Sets the epoch number for the trainer.
    Parameters: num - An int containing the epoch number.

public void setError(QuasiNewtonTrainer.Error error)
    Sets the function used to compute the network error.
    Parameters: error - The Error object containing the function to be used to compute the network error. The default is to compute the sum of squares error, SUM_OF_SQUARES.

public void setFalseConvergenceTolerance(double falseConvergenceTolerance)
    Sets the false convergence tolerance for the Trainer.
    Parameters: falseConvergenceTolerance - A double specifying the false convergence tolerance. Default: 2.22044604925031308e-14.
    See Also: MinUnconMultiVar.setFalseConvergenceTolerance(double)
public void setGradientTolerance(double gradientTolerance)
    Sets the gradient tolerance.
    Parameters: gradientTolerance - A double specifying the gradient tolerance. Default: cube root of machine precision.
    See Also: MinUnconMultiVar.setGradientTolerance(double)

public void setMaximumStepsize(double maximumStepsize)
    Sets the maximum step size.
    Parameters: maximumStepsize - A nonnegative double value specifying the maximum allowable step size in the optimizer.
    See Also: MinUnconMultiVar.setMaximumStepsize(double)
public void setMaximumTrainingIterations(int maximumTrainingIterations)
    Sets the maximum number of iterations to use in training.
    Parameters: maximumTrainingIterations - An int representing the maximum number of training iterations. Default: 100.
    See Also: MinUnconMultiVar.setMaxIterations(int)

protected void setParallelMode(ArrayList[] allLogRecords)
    Sets the trainer to be used in a multi-threaded EpochTrainer.
    Parameters: allLogRecords - An ArrayList array containing the log records.

public void setRelativeTolerance(double relativeTolerance)
    Sets the relative tolerance.
    Parameters: relativeTolerance - A double representing the relative error tolerance. It must be in the interval [0,1]. Default: 3.66685e-11.
    See Also: MinUnconMultiVar.setRelativeTolerance(double)
public void setStepTolerance(double stepTolerance)
The second stopping criterion for MinUnconMultiVar
,
the optimizer used by this Trainer
,
is that the scaled distance between the last two steps be less than the step tolerance.
stepTolerance
- A double
which is the step tolerance.
Default: 3.66685e-11.MinUnconMultiVar.setStepTolerance(double)
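The tolerance setters above correspond to the stopping controls of the underlying MinUnconMultiVar optimizer. A minimal configuration sketch follows (not part of the original page; the particular values are arbitrary and chosen only for illustration):

    import com.imsl.datamining.neural.QuasiNewtonTrainer;

    public class StoppingCriteriaSetup {
        public static QuasiNewtonTrainer configuredTrainer() {
            QuasiNewtonTrainer trainer = new QuasiNewtonTrainer();
            // Loosen the gradient tolerance (default: cube root of machine precision).
            trainer.setGradientTolerance(1.0e-5);
            // Scaled step tolerance and relative tolerance (defaults: 3.66685e-11).
            trainer.setStepTolerance(1.0e-8);
            trainer.setRelativeTolerance(1.0e-10);
            // Cap the step size taken by the optimizer.
            trainer.setMaximumStepsize(10.0);
            return trainer;
        }
    }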
public void setUseBackPropagation(boolean flag)
    Sets whether or not to use the back propagation algorithm for gradient calculations during network training.
    By default, the quasi-Newton algorithm optimizes the network using numerical gradients. This method directs the quasi-Newton trainer to use the back propagation algorithm for gradient calculations during network training. Depending upon the data and network architecture, one approach is typically faster than the other, or is less sensitive to finding local network optima.
    Parameters: flag - A boolean specifying whether or not to use the back propagation algorithm for gradient calculations. Default value is true.
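For example (an illustrative sketch, not part of the original text), a trainer that relies on numerical gradients instead of back propagation can be set up as follows:

    import com.imsl.datamining.neural.QuasiNewtonTrainer;

    public class GradientModeExample {
        public static QuasiNewtonTrainer numericalGradientTrainer() {
            QuasiNewtonTrainer trainer = new QuasiNewtonTrainer();
            // Pass false to have the optimizer compute numerical gradients
            // rather than using the back propagation algorithm.
            trainer.setUseBackPropagation(false);
            return trainer;
        }
    }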
public void train(Network network, double[][] xData, double[][] yData)
    Trains the neural network using supplied training patterns. Each row of xData and yData contains a training pattern. The number of rows in these two arrays must be at least equal to the number of weights in the network.
    Specified by: train in interface Trainer
    Parameters:
        network - The Network to be trained.
        xData - An input double matrix containing training patterns. The number of columns in xData must equal the number of nodes in the input layer.
        yData - An output double matrix containing output training patterns. The number of columns in yData must equal the number of perceptrons in the output layer.
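As a usage sketch (not part of the original page), the following assumes a Network has already been constructed elsewhere and that xData and yData hold one training pattern per row, with at least as many rows as the network has weights; only methods documented above are used.

    import com.imsl.datamining.neural.Network;
    import com.imsl.datamining.neural.QuasiNewtonTrainer;

    public class TrainingExample {
        // Trains the supplied network and reports the outcome using the
        // accessors documented above.
        public static void fit(Network network, double[][] xData, double[][] yData) {
            QuasiNewtonTrainer trainer = new QuasiNewtonTrainer();
            trainer.setError(QuasiNewtonTrainer.SUM_OF_SQUARES);  // the default error function
            trainer.setMaximumTrainingIterations(500);            // default is 100
            trainer.train(network, xData, yData);

            System.out.println("Error status:    " + trainer.getErrorStatus());
            System.out.println("Final error:     " + trainer.getErrorValue());
            System.out.println("Iterations used: " + trainer.getTrainingIterations());
        }
    }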