Evolving Neural Networks

Kristina Georgieva

Traditionally, neural networks are trained by adjusting their weights based on a measure of error passed back through the network. This error is calculated by comparing the network’s output for a given input against the expected value. The person creating the neural network spends some time fiddling with its parameters until the network can learn from the given data by adjusting its weights using this error.

This article is a high-level introduction to how evolutionary algorithms can be used to ease this process.

Before I begin, let’s take a look at some existing techniques for parameter optimization.

Automatic parameter determination

There are already some methods for automatically deriving the parameters of ML algorithms. You can read about a few of these tools here:

  • Auto-Keras: a library for automated machine learning that performs an automatic search for parameters and architecture.
  • Bayesian optimization: a statistical optimization technique for maximizing the performance of an algorithm through smart estimation of its parameters.
  • DataRobot: a tool that focuses on providing an end-to-end ML experience (from data preparation to ML model deployment). It also allows for automated ML and comparison of models.
  • Dataiku: another tool focusing on providing an end-to-end ML experience, with the option to do automated ML.

Evolutionary algorithms

Evolutionary algorithms are algorithms that model the process of evolution. They maintain a population of individuals, each consisting of a set of genes. Each gene represents an attribute/feature of the randomly generated candidate that you are trying to evolve into something meaningful.

Unlike ML algorithms, evolutionary algorithms do not start with data. Instead, we have a measure of what we are trying to achieve (in this article, for example, we want to maximize the accuracy of our neural network based on the parameters used to train it). We then change the individuals to fit this requirement as well as possible.

The genetic algorithm is the most popular and common of the evolutionary algorithms. It evolves individuals through the following steps:

  1. Randomly initialize the individuals.
  2. For a limited number of generations:
      • Perform mutation — replace a gene randomly or through some more sophisticated method.
      • Perform crossover — merge individuals together, resulting in new individuals with various genes from each parent.
      • Calculate the fitness of each individual — the fitness function represents your problem and is applied to each individual to determine how good it is.
      • Perform selection — choose which individuals survive to the next generation based on the fitness calculated above. These individuals form the population for the next generation, in which these steps are repeated.
  3. Select the individual with the best result, that is, the one with the highest (or lowest) fitness.
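
To make these steps concrete, here is a minimal, self-contained sketch in Python (my own illustration, not code from the article). It evolves bitstrings towards the all-ones string; the toy fitness function, single-point crossover, bit-flip mutation, tournament selection, and all parameter values are arbitrary choices for the demo.

```python
import random

GENES = 20          # genes per individual
POP_SIZE = 30       # individuals per generation
GENERATIONS = 50
MUTATION_RATE = 0.05

def fitness(individual):
    # Toy fitness: count of 1-genes (replace with your own measure).
    return sum(individual)

def crossover(parent_a, parent_b):
    # Single-point crossover: the child mixes genes from both parents.
    point = random.randint(1, GENES - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(individual):
    # Flip each gene with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g
            for g in individual]

def select(population):
    # Tournament selection: the fitter of two random individuals survives.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

# 1. Randomly initialize the individuals
population = [[random.randint(0, 1) for _ in range(GENES)]
              for _ in range(POP_SIZE)]

# 2. Evolve for a limited number of generations
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

# 3. Select the individual with the best fitness
best = max(population, key=fitness)
print(best, fitness(best))
```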

For a deeper dive into genetic algorithms, check this article out.

Evolving neural networks

There are various approaches where evolutionary algorithms can be used for neural networks. These approaches aim to ease the process of designing a neural network by automating a set of steps. This section gives a high-level overview of each.

Evolving neural network parameters

This refers to determining the training parameters of a neural network, such as the learning rate, activation function, etc.

Evolving neural network parameters using a genetic algorithm follows the same steps as above, where:

  • Each gene of an individual is a parameter.
  • Each individual is a combination of parameters, like in the picture below.
  • The fitness function consists of:
      • training the neural network with the parameters represented by the individual, and
      • calculating the accuracy/F1-score (or any other preferred measure of neural network performance) on a test set; this score is the result of the fitness function.

The image below shows an example of how the above individual’s layer parameters are translated to a network:
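
As a rough sketch of this idea (again my own illustration, not code from the article), the example below evolves three training parameters of a small scikit-learn MLPClassifier: the learning rate, the hidden-layer size, and the activation function. The gene encoding, parameter bounds, and GA settings are arbitrary assumptions for the demo.

```python
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ACTIVATIONS = ["relu", "tanh", "logistic"]   # categorical gene, stored as an index

def random_individual():
    # Genes: [learning rate, hidden-layer size, activation index]
    return [10 ** random.uniform(-4, -1),        # bounded learning rate
            random.randint(4, 64),               # bounded layer size
            random.randrange(len(ACTIVATIONS))]

def fitness(ind):
    # Train a network with the individual's parameters, score it on a test set.
    lr, units, act = ind
    model = MLPClassifier(hidden_layer_sizes=(units,),
                          learning_rate_init=lr,
                          activation=ACTIVATIONS[act],
                          max_iter=300)
    model.fit(X_train, y_train)
    return model.score(X_test, y_test)           # accuracy as fitness

def mutate(ind):
    # Re-randomize one gene.
    child, i = list(ind), random.randrange(3)
    child[i] = random_individual()[i]
    return child

population = [random_individual() for _ in range(8)]
for _ in range(5):                                            # generations
    parents = sorted(population, key=fitness, reverse=True)[:4]   # selection
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]

best = max(population, key=fitness)
print("best parameters:", best)
```

Note that the categorical activation gene is kept as an index and only mapped back to a string when the network is built; this is the kind of conversion mentioned in the advantages below.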

Advantages

  • Automatically being able to go through many parameter combinations in a guided rather than brute-force manner.
  • Can add bounds to the parameters.
  • Can evolve more sophisticated parameters, such as categorical ones (for example, the type of optimizer), since the conversion from individual to neural network is managed by you.

Disadvantages

  • Can be slow, as you are training total_individuals * total_generations neural networks.
  • May need to decide on the genetic algorithm’s parameters (though the defaults are usually fine for this purpose).

Evolving features for the neural network

Part of training a neural network is selecting the most appropriate data to feed into it. Given a set of parameters that performs reasonably well on a large portion of the data, we can filter the data down to the more meaningful attributes through evolution. The process is the same as the genetic algorithm described above, where:

  • Each gene of the individual is an attribute.
  • Each individual is a set of attributes that are fed into the network. An example individual is shown below, where 1 represents a feature that should be fed to the network, and 0 one that should not.
  • The fitness function consists of:
      • training the neural network with a set of predefined parameters, feeding in only the selected features, and
      • calculating the accuracy/F1-score (or any other preferred measure of neural network performance) on a test set; this score is the result of the fitness function.

For some use cases it may work well to first evolve the parameters of the neural network using the larger set of attributes, and then use the resulting parameters to evolve the features as described in this subsection.

The image below shows an example of how the above individual would translate to a network’s inputs:
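
To illustrate the encoding (my own sketch, not the article’s code), the example below evolves a binary feature mask for an MLPClassifier with fixed parameters: each gene marks whether the corresponding column is fed into the network, and the fitness is the test-set accuracy obtained using only the selected columns. The data set and all settings are arbitrary assumptions for the demo.

```python
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Data set with a few informative features and several noisy ones.
X, y = make_classification(n_samples=400, n_features=12, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
N_FEATURES = X.shape[1]

def fitness(mask):
    # 1 = feed this feature to the network, 0 = drop it.
    cols = [i for i, bit in enumerate(mask) if bit]
    if not cols:
        return 0.0
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300)  # fixed parameters
    model.fit(X_train[:, cols], y_train)
    return model.score(X_test[:, cols], y_test)   # accuracy as fitness

def mutate(mask):
    # Flip one randomly chosen feature bit.
    child, i = list(mask), random.randrange(N_FEATURES)
    child[i] = 1 - child[i]
    return child

population = [[random.randint(0, 1) for _ in range(N_FEATURES)]
              for _ in range(8)]
for _ in range(5):                                            # generations
    parents = sorted(population, key=fitness, reverse=True)[:4]   # selection
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]

best = max(population, key=fitness)
print("selected features:", [i for i, bit in enumerate(best) if bit])
```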

Advantages

  • Automatically being able to select features in a guided rather than brute-force manner, which can be advantageous when the usefulness of a feature cannot easily be determined through analysis.

Disadvantages

  • Can be slow, as you are training total_individuals * total_generations neural networks.
  • May need to decide on the genetic algorithm’s parameters (though the defaults are usually fine for this purpose).

Evolving the weights directly (Neuroevolution)

Instead of determining good parameters to train the neural network, you can also evolve the weights themselves. This means that instead of using backpropagation to pass the error back and adjust the weights for some number of epochs, you can:

  • Choose an architecture (layers, size of layers, activation function)
  • Evolve weights
  • Test the new neural network

Representing this in the same way as the other evolution options, we can look at it as follows:

  • Each gene of the individual is a weight.
  • Each individual represents a neural network with a predefined architecture.
  • The fitness function consists of:
      • replacing the weights of the predefined neural network with the gene values of the individual, and
      • calculating the accuracy/F1-score (or any other preferred measure of neural network performance) on a test set; this score is the result of the fitness function.

The image below shows an example of how the above individual may be translated to the network’s weights:
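
Here is a minimal sketch of this encoding (my own illustration, with an arbitrary toy architecture and data set): each individual is a flat vector of weights and biases for a fixed one-hidden-layer network implemented in NumPy, the fitness is classification accuracy, and no backpropagation is used anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR-like labels (replace with your own data set).
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# Fixed architecture: 2 inputs -> 8 hidden units (tanh) -> 1 output (sigmoid).
N_IN, N_HID = 2, 8
N_WEIGHTS = N_IN * N_HID + N_HID + N_HID + 1   # all weights and biases, flattened

def predict(genes, X):
    # Unpack the individual's flat gene vector into the network's weights.
    w1 = genes[:N_IN * N_HID].reshape(N_IN, N_HID)
    b1 = genes[N_IN * N_HID:N_IN * N_HID + N_HID]
    w2 = genes[N_IN * N_HID + N_HID:-1]
    b2 = genes[-1]
    hidden = np.tanh(X @ w1 + b1)
    return 1 / (1 + np.exp(-(hidden @ w2 + b2)))   # sigmoid output

def fitness(genes):
    # Classification accuracy as fitness; no backpropagation involved.
    return np.mean((predict(genes, X_train) > 0.5) == y_train)

def mutate(genes):
    # Add small Gaussian noise to every weight.
    return genes + rng.normal(scale=0.1, size=N_WEIGHTS)

population = [rng.normal(size=N_WEIGHTS) for _ in range(20)]
for _ in range(100):                                          # generations
    parents = sorted(population, key=fitness, reverse=True)[:5]   # selection
    population = parents + [mutate(parents[rng.integers(len(parents))])
                            for _ in range(15)]

best = max(population, key=fitness)
print("held-out accuracy:", np.mean((predict(best, X_test) > 0.5) == y_test))
```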

Advantages

  • Automatically being able to get a well-fitting network for the given data.
  • Less likely to get stuck in local minima, due to sampling various parts of the search space.

Disadvantages

  • Can be slow, as you are training total_individuals * total_generations neural networks.
  • May need to decide on the genetic algorithm’s parameters (though the defaults are usually fine for this purpose).

For more details on Neuroevolution check out this article.

More

There is also research on evolving neural network architectures together with the weights using evolutionary programming. I will not go into details in this article, as the approach is slightly different from a straightforward genetic algorithm and involves additional rules. I also don’t have first-hand experience with this method, whereas I do with the other three.

Conclusions

There are four ways in which one can use evolutionary algorithms, such as the genetic algorithm, to design neural networks, namely:

  • Evolving the parameters for the neural network training
  • Evolving the features to be fed into the network
  • Evolving the weights of the network with a predefined architecture
  • Evolving the network architecture together with the weights

The main advantage is that evolutionary algorithms allow for a guided, exploratory search over these aspects of the neural network. The main disadvantage is that this involves training many neural networks, as each individual is a different network, which could be slow depending on the complexity of the problem.

Originally published at intothedepthsofdataengineering.wordpress.com on September 23, 2018.
