Experiments in AI - Evolution Part 2


by that games guy

This is part of an ongoing series in which we experiment with different methods of AI. We’ll look at the state of the art and the out of fashion, the practical and the (seemingly) impractical, to find what works and what doesn’t. In this part, we will write the neural network that will control our UFOs. You can read the first part in the series for more information on what we are trying to accomplish. You can also see my earlier article for the background and (a tiny bit) of the maths surrounding artificial neural networks; however, I’ll reiterate the important parts and expand where necessary.

Let’s jump right in with a definition.

An Artificial Neural Network (ANN) is a number of neurons connected in some way to form a network.

While not the most detailed of definitions, it gets us started. It also raises a few questions, namely: what is a neuron? And what does ‘some way’ mean? Let’s start by discussing the neuron, and then we can discuss how we are going to connect them.
A model of an Artificial Neuron.
X1, X2, X3, …, Xm, in the image above, represent the input values. Each input has an associated weight w1, …, wm. These weights encode the knowledge of a network: copy the weights from one neural network into another with the same structure and it will perform the same task in the same manner.
The neurons are commonly connected in layers (although there are other configurations that we will almost certainly look at in future). One of the more common (and probably the simplest) types of neural network is the feed-forward neural network, which is what we’ll be using for our UFOs. It gets its name from the way each layer of neurons feeds its outputs into the next layer until the signal arrives at the output.
A feed-forward Neural Network.
You’ll notice that there are three layer types: input, hidden, and output. A neural network will have a minimum of two layers: the input and the output. We’ll interact directly with these layers. The UFO’s closest neighbour and its distance from the window’s edge will be used as the input into the neural network, as this is the information we want each UFO to know, and we’ll retrieve the UFO’s velocity as the output. A neural network can have zero or more hidden layers; they are called ‘hidden’ because we do not interact with them directly.
Each input is sent to every neuron in the hidden layer, and then each output from that layer is sent to every neuron in the next layer, and so on. As you can imagine, the complexity of a neural network can increase dramatically as we add more hidden layers, which is why it is desirable to keep the network as small as possible. Our UFOs will only have the one hidden layer. You may ask how I arrived at that number. Well, as far as I know, the quickest method of determining the number of hidden layers is trial and error, which is exactly what I did: I ran the game with varying numbers of hidden layers and determined the minimum number that achieved desirable results.
Feed-forward neural networks, by themselves, provide no method of learning. As I touched on earlier, a neural network’s fingerprint is its weights. Assuming you have two neural networks with the same number of neurons in the same number of layers, then if we copy one network’s weights to the other they will behave the same. So you can evolve how a neural network processes input by modifying its weights. There are a number of ways to do this, but we’re going to evolve our neurons using a Genetic Algorithm (more on this when we implement our GA).
We’ll start writing our neural network with its simplest part, the neuron. The neuron just has to keep a record of how many inputs it has and a vector of floats representing the weights. Remember, there is a weight for every input into the neuron.
Neuron.hpp
#ifndef Neuron_hpp
#define Neuron_hpp

#include <vector>

struct Neuron
{
    Neuron(int numOfInput);
    
    int numOfInput; // Number of inputs into the neuron.
    std::vector<float> weights; // Weight of each input determines activity of network.
};

#endif /* Neuron_hpp */
Neuron.cpp
#include "Neuron.hpp"

#include <cstdlib> // For rand() and RAND_MAX

Neuron::Neuron(int numOfInput)
{
    this->numOfInput = numOfInput + 1; // +1 for the bias weight (explained below)
    
    const int minWeight = -1;
    const int maxWeight = 1;
    
    //Initialise random weights for each input
    for (int i = 0; i < this->numOfInput; i++)
    {
        // Random float in the range [minWeight, maxWeight]
        float weight = minWeight + static_cast<float>(rand()) /
                       (static_cast<float>(RAND_MAX / (maxWeight - minWeight)));
        weights.push_back(weight);
    }
}
You may have noticed that I’ve included an extra weight (by adding 1 to the number of inputs). The activation of a neuron is the sum of all its inputs (x) multiplied by their weights (w), and the neuron only fires if this sum exceeds a threshold (t):
x1w1 + x2w2 + … + xnwn >= t
We can re-jig this so that the threshold is on the same side as the weights:

x1w1 + x2w2 + … + xnwn - t >= 0

So the threshold can be seen as a weight that is always multiplied by -1; we call this fixed input the bias, and it is why we have included the extra weight. Now, when we evolve the network, we’ll also evolve the threshold value, as it is stored along with the weights.

With the Neuron structure complete we can create the NeuronLayer. As we’ll do all the networks processing in the neural network class (that we’ll create shortly), the neuron layer is simply a collection of neurons.
NeuronLayer.hpp
#ifndef NeuronLayer_hpp
#define NeuronLayer_hpp

#include "Neuron.hpp"

struct NeuronLayer
{
    NeuronLayer(int numOfNeurons, int numOfInput);
    
    int numOfNeurons;
    std::vector<Neuron> neurons;
};

#endif /* NeuronLayer_hpp */
NeuronLayer.cpp
#include "NeuronLayer.hpp"

NeuronLayer::NeuronLayer(int numOfNeurons, int numOfInput)
{
    this->numOfNeurons = numOfNeurons;
    
    //Adds neurons to neuron list
    for (int i = 0; i < numOfNeurons; ++i)
    {
        neurons.push_back(Neuron(numOfInput));
    }
}
When we instantiate a layer we pass in two variables: the number of neurons in the layer and the number of inputs. The layer then creates that many neurons, and each neuron will have one weight per input, plus one for the bias.
Now we have our Neuron and Neuron layers we can create our Neural Network. This will combine the layers and perform the actual processing for the network.
NeuralNetwork.hpp
#ifndef NeuralNetwork_hpp
#define NeuralNetwork_hpp

#include <cmath>
#include <iostream>

#include "NeuronLayer.hpp"

class NeuralNetwork
{
public:
    NeuralNetwork(int numOfInput, int numOfHiddenLayers, int numOfNeuronsInHiddenLayers, int numOfOutput);
    
    std::vector<float> GetOutput(const std::vector<float>& input);
    
    std::vector<float> GetWeights() const;
    void SetWeights(const std::vector<float>& weights);
    
    int CalculateNumberOfWeights() const;
    
    int numOfInput; // Number of inputs into the network
    int numOfOutput; // Number of outputs from the network
    int numOfHiddenLayers; // Number of hidden layers
    int numOfNeuronsInHiddenLayers; // Number of neurons per hidden layer
private:
    float FastSigmoid(float input);
    
    std::vector<NeuronLayer> layers;
    int bias;
    std::vector<int> splitPoints;
};

#endif /* NeuralNetwork_hpp */
You can create accessors/mutators for these variables, but I’ve kept them public for simplicity.
In the constructor of the neural network, we’ll create and store a number of layers (how many depends on the number of hidden layers). As stated earlier, the network will always have, at a minimum, 2 layers: the input and output layer.
NeuralNetwork.cpp
#include "NeuralNetwork.hpp"

NeuralNetwork::NeuralNetwork(int numOfInput, int numOfHiddenLayers, int numOfNeuronsInHiddenLayers, int numOfOutput) : numOfInput(numOfInput), numOfHiddenLayers(numOfHiddenLayers), numOfNeuronsInHiddenLayers(numOfNeuronsInHiddenLayers), numOfOutput(numOfOutput), bias(-1)
{
    // Create the first hidden layer, which takes the network's input
    layers.push_back(NeuronLayer(numOfNeuronsInHiddenLayers, numOfInput)); // 1
    
    // Create any remaining hidden layers
    for (int i = 1; i < numOfHiddenLayers; i++)
    {
        // Input comes from the previous hidden layer
        layers.push_back(NeuronLayer(numOfNeuronsInHiddenLayers,
                                     numOfNeuronsInHiddenLayers)); // 2
    }
    
    // Output layer
    // Input from subsequent or first hidden layer
    layers.push_back(NeuronLayer(numOfOutput, numOfNeuronsInHiddenLayers)); // 3
}
For each layer, we set the second parameter of the NeuronLayer constructor equal to the number of outputs from the previous layer, and the first parameter equal to the number of neurons in the layer, which in turn becomes the number of inputs into the next layer.
The GetOutput method performs the actual processing for the network and is what our UFOs will call each frame.
NeuralNetwork.cpp
std::vector<float> NeuralNetwork::GetOutput(const std::vector<float>& input)
{
    std::vector<float> inputList(input);
    
    // Output from each layer
    std::vector<float> outputs;
    
    int weightCount = 0;
    
    // Return empty if not correct number of inputs
    if (inputList.size() != static_cast<size_t>(numOfInput))
    {
        std::cout << "NeuralNetwork: input count incorrect" << std::endl;
        return outputs;
    }
    
    // Each layer
    for (size_t i = 0; i < layers.size(); i++)
    {
        if (i > 0)
        {
            // Clear input and add output from previous layer
            inputList.clear();
            inputList.insert(inputList.end(), outputs.begin(), outputs.end());
            outputs.clear();
            weightCount = 0;
        }
        
        for (int j = 0; j < layers[i].numOfNeurons; j++)
        {
            float netInput = 0.0f;
            
            int numInputs = layers[i].neurons[j].numOfInput;
            
            // Each weight
            for (int k = 0; k < numInputs - 1; k++)
            {
                // Sum the weights x inputs
                netInput += layers[i].neurons[j].weights[k] *
                inputList[weightCount++];
            }
            
            //Add in the bias
            netInput += layers[i].neurons[j].weights[numInputs - 1] * bias;
            
            //Store result in output
            float sigOutput = FastSigmoid(netInput);
            outputs.push_back(sigOutput);
            
            weightCount = 0;
        }
    }
    
    return outputs;
}

float NeuralNetwork::FastSigmoid(float input)
{
    return input / (1 + fabs(input));
}

The inputs into the network are passed in as a vector of floats. The function then loops through each layer; for each neuron it sums the inputs multiplied by their weights and calculates the neuron’s activation by passing the total through a sigmoid function. I use a faster approximation of the sigmoid; while not as accurate (the UFOs don’t mind), it is quite a bit quicker. The function returns a vector of floats that corresponds to the outputs of the ANN. We’ll use this output in the next tutorial to control the movement of our UFOs.

And that’s it for the neural network. If anything isn’t quite clear, hopefully it will all make sense by the end of the next part of the series, where we implement the neural network in our game. As always, thank you for reading 🙂