org.encog.neural.neat
Class NEATNetwork

java.lang.Object
  extended by org.encog.neural.neat.NEATNetwork
All Implemented Interfaces:
Serializable, MLError, MLInput, MLInputOutput, MLMethod, MLOutput, MLRegression

public class NEATNetwork
extends Object
implements MLRegression, MLError, Serializable

NEAT networks relieve the programmer of the need to define the hidden layer structure of the neural network. The output from the neural network can be calculated normally or using a snapshot. The snapshot mode is slower, but it can be more accurate. The snapshot handles recurrent layers better, as it takes the time to loop through the network multiple times to "flush out" the recurrent links.

NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm for the generation of evolving artificial neural networks. It was developed by Ken Stanley while at The University of Texas at Austin. http://www.cs.ucf.edu/~kstanley/

The following journal articles were used to implement NEAT/HyperNEAT in Encog, provided in BibTeX form:

@Article{stanley:ec02,
  title   = {Evolving Neural Networks Through Augmenting Topologies},
  author  = {Kenneth O. Stanley and Risto Miikkulainen},
  journal = {Evolutionary Computation},
  volume  = {10},
  number  = {2},
  pages   = {99--127},
  url     = {http://nn.cs.utexas.edu/?stanley:ec02},
  year    = {2002}
}

@Misc{Gauci_abstractgenerating,
  author = {Jason Gauci and Kenneth Stanley},
  title  = {Generating Large-Scale Neural Networks Through Discovering Geometric Regularities}
}

@InProceedings{Whiteson05automaticfeature,
  author    = {Shimon Whiteson and Kenneth O. Stanley and Risto Miikkulainen},
  title     = {Automatic Feature Selection in Neuroevolution},
  booktitle = {Genetic and Evolutionary Computation Conference},
  year      = {2005},
  pages     = {1225--1232},
  publisher = {ACM Press}
}
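For illustration, a minimal construction-and-compute sketch. It assumes (as in Encog 3.x, but not stated on this page) that NEATLink has a (fromNeuron, toNeuron, weight) constructor, that the activation-function array supplies one function per neuron, and that neuron 0 is the bias, followed by the input neurons and then the output neurons:

    import java.util.ArrayList;
    import java.util.List;

    import org.encog.engine.network.activation.ActivationFunction;
    import org.encog.engine.network.activation.ActivationSigmoid;
    import org.encog.ml.data.MLData;
    import org.encog.ml.data.basic.BasicMLData;
    import org.encog.neural.neat.NEATLink;
    import org.encog.neural.neat.NEATNetwork;

    public class NEATNetworkSketch {
        public static void main(String[] args) {
            // Assumed neuron layout: 0 = bias, 1-2 = inputs, 3 = output.
            List<NEATLink> links = new ArrayList<NEATLink>();
            links.add(new NEATLink(0, 3, 0.5));   // bias -> output
            links.add(new NEATLink(1, 3, 1.0));   // input 1 -> output
            links.add(new NEATLink(2, 3, -1.0));  // input 2 -> output

            // One activation function per neuron.
            ActivationFunction[] funcs = new ActivationFunction[4];
            for (int i = 0; i < funcs.length; i++) {
                funcs[i] = new ActivationSigmoid();
            }

            // 2 inputs, 1 output; the links passed in define the rest of the structure.
            NEATNetwork network = new NEATNetwork(2, 1, links, funcs);

            MLData output = network.compute(new BasicMLData(new double[] { 0.25, 0.75 }));
            System.out.println("Output: " + output.getData(0));
        }
    }

In typical use a NEATNetwork is produced by NEAT training rather than assembled by hand; the hand-built network above only illustrates how the links define the neurons.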

See Also:
Serialized Form

Constructor Summary
NEATNetwork(int inputNeuronCount, int outputNeuronCount, List<NEATLink> connectionArray, ActivationFunction[] theActivationFunctions)
          Construct a NEAT network.
 
Method Summary
 double calculateError(MLDataSet data)
          Calculate the error for this neural network.
 MLData compute(MLData input)
          Compute the output from this network.
 int getActivationCycles()
          The number of activation cycles to use.
 ActivationFunction[] getActivationFunctions()
          The activation functions.
 int getInputCount()
          The input count.
 NEATLink[] getLinks()
          The links in the neural network.
 int getOutputCount()
          The output count.
 int getOutputIndex()
          The starting location of the output neurons.
 double[] getPostActivation()
          The post-activation values, used as the output from the neurons.
 double[] getPreActivation()
          The pre-activation values, used to feed the neurons.
 double getRelaxationThreshold()
          The amount of change allowed before the network is considered to have relaxed.
 boolean isHasRelaxed()
          True if the network has relaxed and the values are no longer changing.
 void setActivationCycles(int activationCycles)
          Set the number of activation cycles to use.
 void setHasRelaxed(boolean hasRelaxed)
          Set to true if the network has relaxed and the values are no longer changing.
 void setRelaxationThreshold(double relaxationThreshold)
          Set the amount of change allowed before the network is considered to have relaxed.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

NEATNetwork

public NEATNetwork(int inputNeuronCount,
                   int outputNeuronCount,
                   List<NEATLink> connectionArray,
                   ActivationFunction[] theActivationFunctions)
Construct a NEAT network. The links that are passed in also define the neurons.

Parameters:
inputNeuronCount - The input neuron count.
outputNeuronCount - The output neuron count.
connectionArray - The links.
theActivationFunctions - The activation functions.
Method Detail

calculateError

public double calculateError(MLDataSet data)
Calculate the error for this neural network.

Specified by:
calculateError in interface MLError
Parameters:
data - The training set.
Returns:
The error percentage.
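As a sketch of how this might be called, continuing with the hand-built network from the class description (the XOR truth table is purely illustrative data):

    import org.encog.ml.data.MLDataSet;
    import org.encog.ml.data.basic.BasicMLDataSet;

    // Toy XOR truth table as the training set.
    double[][] input = { { 0, 0 }, { 0, 1 }, { 1, 0 }, { 1, 1 } };
    double[][] ideal = { { 0 }, { 1 }, { 1 }, { 0 } };
    MLDataSet trainingSet = new BasicMLDataSet(input, ideal);

    // Compare the network's output against the ideal values over the whole set.
    double error = network.calculateError(trainingSet);
    System.out.println("Error: " + error);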

compute

public MLData compute(MLData input)
Compute the output from this network.

Specified by:
compute in interface MLRegression
Parameters:
input - The input to this network.
Returns:
The output from this network.
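A brief sketch of a compute() call, again using the network from the class description (that the pre- and post-activation arrays reflect the most recent computation is an assumption based on the accessors documented below):

    // The input length must match getInputCount(); the result length matches getOutputCount().
    MLData out = network.compute(new BasicMLData(new double[] { 0.0, 1.0 }));
    System.out.println(out.size() == network.getOutputCount()); // true

    // Per-neuron values can then be inspected for debugging.
    System.out.println(java.util.Arrays.toString(network.getPreActivation()));
    System.out.println(java.util.Arrays.toString(network.getPostActivation()));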

getActivationCycles

public int getActivationCycles()
Returns:
The number of activation cycles to use.

getActivationFunctions

public ActivationFunction[] getActivationFunctions()
Returns:
The activation functions.

getInputCount

public int getInputCount()

Specified by:
getInputCount in interface MLInput
Returns:
The input count.

getLinks

public NEATLink[] getLinks()
Returns:
The links in the neural network.

getOutputCount

public int getOutputCount()

Specified by:
getOutputCount in interface MLOutput
Returns:
The output count.

getOutputIndex

public int getOutputIndex()
Returns:
The starting location of the output neurons.

getPostActivation

public double[] getPostActivation()
Returns:
The post-activation values, used as the output from the neurons.

getPreActivation

public double[] getPreActivation()
Returns:
The pre-activation values, used to feed the neurons.

getRelaxationThreshold

public double getRelaxationThreshold()
Returns:
The amount of change allowed before the network is considered to have relaxed.

isHasRelaxed

public boolean isHasRelaxed()
Returns:
True if the network has relaxed and the values are no longer changing. Used when activationCycles is set to zero (auto).

setActivationCycles

public void setActivationCycles(int activationCycles)
Set the number of activation cycles to use.

Parameters:
activationCycles - The number of activation cycles.

setHasRelaxed

public void setHasRelaxed(boolean hasRelaxed)
Set to true if the network has relaxed and the values are no longer changing. Used when activationCycles is set to zero (auto).

Parameters:
hasRelaxed - True if the network has relaxed.

setRelaxationThreshold

public void setRelaxationThreshold(double relaxationThreshold)
Set the amount of change allowed before the network is considered to have relaxed.

Parameters:
relaxationThreshold - The relaxation threshold.
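A sketch of how the two modes described above might be selected (the network variable is the one from the class description; the convergence behaviour in auto mode is as described by isHasRelaxed(), not verified here):

    // Fixed mode: run a set number of passes through the (possibly recurrent) network.
    network.setActivationCycles(4);

    // Auto mode: cycle until the change between passes falls below the relaxation
    // threshold; isHasRelaxed() then reports whether the network settled.
    network.setActivationCycles(0);
    network.setRelaxationThreshold(0.01);
    network.compute(new BasicMLData(new double[] { 0.25, 0.75 }));
    System.out.println("Relaxed: " + network.isHasRelaxed());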


Copyright © 2014. All Rights Reserved.