org.encog.neural.prune
Class PruneIncremental

java.lang.Object
  extended by org.encog.util.concurrency.job.ConcurrentJob
      extended by org.encog.neural.prune.PruneIncremental
All Implemented Interfaces:
Runnable, MultiThreadable

public class PruneIncremental
extends ConcurrentJob

This class helps determine the optimal configuration for the hidden layers of a neural network. It accepts a pattern, which specifies the type of neural network to create, together with a minimum and maximum neuron count for each hidden layer. It then trains a network at every configuration in those ranges to see which hidden neuron counts work best.

This method does not simply choose the network with the lowest error rate. A specifiable number of top networks are kept, representing the networks with the lowest error rates. From this collection, the best network is defined to be the one with the fewest connections.

Not all starting random weights are created equal. Because of this, an option is provided to choose how many attempts, each with different random weights, the process should make per configuration. All random weights are created using the Nguyen-Widrow method normally used by Encog.

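The following is a minimal usage sketch. The XOR training data, layer ranges, and the ConsoleStatusReportable reporter are illustrative assumptions, not requirements of this class:

import org.encog.ConsoleStatusReportable;
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.pattern.FeedForwardPattern;
import org.encog.neural.prune.PruneIncremental;

public class PruneExample {
    public static void main(String[] args) {
        // Illustrative XOR training data.
        double[][] input = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
        double[][] ideal = { {0}, {1}, {1}, {0} };
        MLDataSet training = new BasicMLDataSet(input, ideal);

        // The pattern describes the kind of network to generate.
        FeedForwardPattern pattern = new FeedForwardPattern();
        pattern.setInputNeurons(2);
        pattern.setOutputNeurons(1);
        pattern.setActivationFunction(new ActivationSigmoid());

        // 100 iterations per network, 5 random weight tries per
        // configuration, keep the 10 lowest-error networks.
        PruneIncremental prune = new PruneIncremental(training, pattern,
                100, 5, 10, new ConsoleStatusReportable());

        prune.addHiddenLayer(1, 10); // first hidden layer: 1 to 10 neurons
        prune.addHiddenLayer(0, 5);  // second layer optional (min of 0)

        prune.process();

        BasicNetwork best = prune.getBestNetwork();
        System.out.println("Best: " + PruneIncremental.networkToString(best));
    }
}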

Constructor Summary
PruneIncremental(MLDataSet training, NeuralNetworkPattern pattern, int iterations, int weightTries, int numTopResults, StatusReportable report)
          Construct an object to determine the optimal number of hidden layers and neurons for the specified training data and pattern.
 
Method Summary
 void addHiddenLayer(int min, int max)
          Add a hidden layer's min and max.
 BasicNetwork getBestNetwork()
          The best network found: the simplest of the top networks.
 List<HiddenLayerParams> getHidden()
          The hidden layer min and max values.
 int getHidden1Size()
          The size of the first hidden layer.
 int getHidden2Size()
          The size of the second hidden layer.
 double getHigh()
          The highest error so far.
 int getIterations()
          The number of training iterations to try for each network.
 double getLow()
          The lowest error so far.
 NeuralNetworkPattern getPattern()
          The network pattern to use.
 double[][] getResults()
          The error results.
 double[] getTopErrors()
          The errors of the top networks.
 BasicNetwork[] getTopNetworks()
          The top networks found.
 MLDataSet getTraining()
          The training set to use.
 void init()
          Initialize the prune process.
 int loadWorkload()
          Load the workload, which is the total amount of work to be processed.
static String networkToString(BasicNetwork network)
          Format the network as a human-readable string that lists the hidden layers.
 void performJobUnit(JobUnitContext context)
          Perform an individual job unit, which is a single network to train and evaluate.
 void process()
          Begin the prune process.
 Object requestNextTask()
          Request the next task.
 
Methods inherited from class org.encog.util.concurrency.job.ConcurrentJob
getShouldStop, getThreadCount, isRunning, processBackground, reportStatus, run, setReport, setThreadCount, stop
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

PruneIncremental

public PruneIncremental(MLDataSet training,
                        NeuralNetworkPattern pattern,
                        int iterations,
                        int weightTries,
                        int numTopResults,
                        StatusReportable report)
Construct an object to determine the optimal number of hidden layers and neurons for the specified training data and pattern.

Parameters:
training - The training data to use.
pattern - The network pattern to use to solve this data.
iterations - How many iterations to try per network.
weightTries - The number of random weight sets to try for each network configuration.
numTopResults - The number of "top networks" to keep; the simplest of these is chosen as the best network.
report - Object used to report status to.
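For example, a construction sketch (the argument values and the ConsoleStatusReportable reporter are illustrative assumptions):

PruneIncremental prune = new PruneIncremental(
        training,                       // MLDataSet to train against
        pattern,                        // e.g. a FeedForwardPattern
        100,                            // iterations per network
        5,                              // random weight sets per configuration
        10,                             // number of top networks to keep
        new ConsoleStatusReportable()); // progress reporting
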
Method Detail

networkToString

public static String networkToString(BasicNetwork network)
Format the network as a human-readable string that lists the hidden layers.

Parameters:
network - The network to format.
Returns:
A human-readable string.
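
For example, assuming prune is a PruneIncremental instance whose process() method has completed:

// Print the hidden-layer structure of the best network found.
System.out.println(PruneIncremental.networkToString(prune.getBestNetwork()));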

addHiddenLayer

public void addHiddenLayer(int min,
                           int max)
Add a hidden layer's minimum and maximum neuron counts. Call this once per hidden layer. Specify a minimum of zero if it is possible to remove this hidden layer entirely.

Parameters:
min - The minimum number of neurons for this layer.
max - The maximum number of neurons for this layer.
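
For instance, assuming prune is an existing PruneIncremental instance, the following declares a required first hidden layer and an optional second one:

prune.addHiddenLayer(1, 20); // always present, 1 to 20 neurons
prune.addHiddenLayer(0, 10); // up to 10 neurons, or removed entirely (min of 0)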

getBestNetwork

public BasicNetwork getBestNetwork()
Returns:
The best network found: the simplest of the top networks.

getHidden

public List<HiddenLayerParams> getHidden()
Returns:
The hidden layer min and max values.

getHidden1Size

public int getHidden1Size()
Returns:
The size of the first hidden layer.

getHidden2Size

public int getHidden2Size()
Returns:
The size of the second hidden layer.

getHigh

public double getHigh()
Returns:
The highest error so far.

getIterations

public int getIterations()
Returns:
The number of training iterations to try for each network.

getLow

public double getLow()
Returns:
The lowest error so far.

getPattern

public NeuralNetworkPattern getPattern()
Returns:
The network pattern to use.

getResults

public double[][] getResults()
Returns:
The error results.

getTopErrors

public double[] getTopErrors()
Returns:
The errors of the top networks.

getTopNetworks

public BasicNetwork[] getTopNetworks()
Returns:
The top networks found.

getTraining

public MLDataSet getTraining()
Returns:
The training set to use.

init

public void init()
Initialize the prune process.


loadWorkload

public int loadWorkload()
Load the workload. The workload is the total number of network configurations to try, and thus the total amount of work to be processed.

Specified by:
loadWorkload in class ConcurrentJob
Returns:
The total amount of work to be processed.

performJobUnit

public void performJobUnit(JobUnitContext context)
Perform an individual job unit, which is a single network to train and evaluate.

Specified by:
performJobUnit in class ConcurrentJob
Parameters:
context - Contains information about the job unit.

process

public void process()
Begin the prune process.

Overrides:
process in class ConcurrentJob

requestNextTask

public Object requestNextTask()
Request the next task. This is the next network to attempt to train.

Specified by:
requestNextTask in class ConcurrentJob
Returns:
The next network to train.

