In this part, we will be implementing the neural network that we wrote in the last tutorial as the first step towards providing our little UFOs with some form of intelligence.

This is part of an ongoing series in which we experiment with different methods of AI. We’ll look at the state of the art and the out-of-fashion, the practical and the (seemingly) impractical, to find what works and what doesn’t.

You can download the source code for the project here. You’ll find a folder for each part of the experiment so you can jump in and follow along. I’ll go through the code in some detail, but there is a lot to cover, so I’ll brush over some programming concepts to focus on the AI components we are writing.

You can read the first tutorial in the series for more information on what we are trying to accomplish.

Before we can implement our neural network for the UFOs, we need to know what inputs we are going to feed into the network and how we are going to use the network’s output. It’s important to get these right; otherwise, our neural network will perform very poorly. For example, we could provide each UFO’s neural network with the position of every other UFO, but it would take a lot of adjusting before a relationship between them could be established, if one could be established at all. For that reason, we want to keep the inputs to a minimum while still ensuring they provide all the information our UFO will need. To understand what input our UFOs need, we need to go back to the goal we outlined in the first part of the series:

Have 60+ UFOs onscreen that have taught themselves to avoid each other and the sides of their environment.

So we have a rough idea of what our UFOs need to know: at the very least they should be aware of where the closest UFO and the sides of the window are in relation to them. We could start with a set of inputs that look something like this:

  1. X position of UFO
  2. Y position of UFO
  3. X position of closest UFO
  4. Y position of closest UFO
  5. Normalised distance to closest UFO
  6. Normalised distance to left of screen
  7. Normalised distance to right of screen
  8. Normalised distance to top of screen
  9. Normalised distance to bottom of screen

But we can reduce this further by thinking in directions rather than absolute positions. Rather than use the world position of the closest UFO, we can use the direction:

  1. X normalised direction to closest UFO
  2. Y normalised direction to closest UFO
  3. Normalised distance to closest UFO
  4. Normalised distance to left of screen
  5. Normalised distance to right of screen
  6. Normalised distance to top of screen
  7. Normalised distance to bottom of screen

This reduces our inputs to seven, which should hopefully be small enough to get us started. If we run into difficulty, i.e. the UFOs are not learning as we hoped they would, then the input into the neural network is the first thing we’ll adjust.
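Purely for readability, you can think of the seven slots in the input vector as named indices. The enum below is an illustration only and not part of the project’s source (the code later in this tutorial uses raw indices 0 to 6), but it shows the layout of the vector we’ll be building:

// Hypothetical named indices for the seven inputs (illustration only).
enum NetworkInputIndex
{
    ClosestUFODirectionX = 0, // x of normalised direction to closest UFO
    ClosestUFODirectionY,     // y of normalised direction to closest UFO
    ClosestUFODistance,       // normalised distance to closest UFO
    LeftEdgeDistance,         // normalised distance to left of screen
    RightEdgeDistance,        // normalised distance to right of screen
    TopEdgeDistance,          // normalised distance to top of screen
    BottomEdgeDistance,       // normalised distance to bottom of screen
    InputCount                // == 7
};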

[Figure: Input into the neural network for each UFO.]

You’ve probably noticed that each input is normalised. I’ve done this because it’s a good idea to standardise the input into a neural network: if we use large numbers for one or two inputs and smaller numbers for the rest, the network will be much more sensitive to the larger ones. To combat this, all input into the neural network will be in a similar range, either -1 to 1 for directions or 0 to 1 for distances, with 0 representing that the UFO or screen edge is as far away as possible. I’ll go through how this is achieved when we write the code that generates the input.
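To make those ranges concrete, here’s a minimal sketch of the normalisation using made-up numbers and a hypothetical sight radius of 200 pixels (the real values come later, when we write the input-building code):

// Illustrative only: squashing a raw heading and distance into network-friendly ranges.
sf::Vector2f heading(30.f, -40.f); // raw vector to the closest UFO
float distance = 50.f;             // length of that vector

sf::Vector2f direction = heading / distance;          // (0.6, -0.8): components in [-1, 1]

float sightRadius = 200.f;                            // hypothetical sight range
float distanceInput = 1.f - (distance / sightRadius); // 0.75: nearer UFOs give values nearer 1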

We’ll be using the output from the neural network to move our UFOs around the environment, so we really only need two outputs from the neural network:

  1. X velocity
  2. Y velocity

Each frame, we’ll set our current velocity to the output from the neural network, scaled by delta time.

With the inputs and outputs decided, we can start on the implementation. Firstly, the UFO will need some way of utilising the neural network. We’ll do this using the component system. Start by creating a component called C_NeuralNetwork.

C_NeuralNetwork.hpp
#ifndef C_NeuralNetwork_hpp
#define C_NeuralNetwork_hpp

#include <memory>
#include <vector>

#include <SFML/System/Vector2.hpp>

#include "Component.hpp"
#include "C_Velocity.hpp"
#include "NeuralNetwork.hpp"
#include "ObjectCollection.hpp"
#include "C_Sight.hpp"

class C_NeuralNetwork : public Component
{
public:
    C_NeuralNetwork(Object* owner);

    void Awake() override;

    void Update(float deltaTime) override;

    void SetWindowSize(const sf::Vector2u& windowSize);

private:
    std::vector<float> BuildNetworkInput();

    const int neuralNumOfInput = 7;
    const int neuralNumOfHiddenLayers = 1;
    const int neuralNumOfNeuronsInHiddenLayer = 10;
    const int neuralNumOfOutput = 2;

    sf::Vector2u windowSize; // We need to know the window size to calculate distances to the screen edges.
    float maxMoveForce;
    std::shared_ptr<C_Velocity> velocity;
    std::shared_ptr<C_Sight> sight;
    NeuralNetwork neuralNetwork;
};

#endif /* C_NeuralNetwork_hpp */

As we’ve already decided on the number of inputs and outputs (7 and 2 respectively), we store them in this class. We’ve also defined how many hidden layers we want (1) and how many neurons there will be in that hidden layer (10). There is no definitive way of deciding on the optimal number of hidden layers and neurons. We do know that we want to keep them to a minimum because they can become computationally expensive, especially with the large number of UFOs we want onscreen, each with their own neural network. So it basically comes down to trial and error: I ran a number of simulations with different values for each run and slowly progressed towards a more efficient learning process.

The constructor, Awake, and SetWindowSize are straightforward to implement.

C_NeuralNetwork.cpp
#include "C_NeuralNetwork.hpp"

C_NeuralNetwork::C_NeuralNetwork(Object* owner) : Component(owner), maxMoveForce(1400.f), neuralNetwork(neuralNumOfInput, neuralNumOfHiddenLayers, neuralNumOfNeuronsInHiddenLayer, neuralNumOfOutput), windowSize(1920, 1080)
{

}

void C_NeuralNetwork::Awake()
{
velocity = owner->GetComponent<C_Velocity>();
sight = owner->GetComponent<C_Sight>();
}

void C_NeuralNetwork::SetWindowSize(const sf::Vector2u& windowSize)
{
this->windowSize = windowSize;
}

The Awake method retrieves references to the velocity and sight components. We’ll use the velocity component in the Update function to move the UFO, and the sight component in the BuildNetworkInput method, which we’ll write next, to find the closest UFO.
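Before we dive in, it’s worth recapping the small part of C_Sight’s interface that BuildNetworkInput relies on. The sketch below is reconstructed from how the component is used in this tutorial, so treat it as an approximation; the real class lives in the Part 2 folder of the repository.

// Approximate sketch of C_Sight, reconstructed from its usage below.
// See the repository for the actual implementation.
struct UFOData
{
    sf::Vector2f heading; // vector from this UFO to the other UFO
    float distance;       // length of that heading vector
};

class C_Sight : public Component
{
public:
    // Lets the component search the game's objects for nearby UFOs.
    void SetObjectCollection(ObjectCollection* objects);

    // The closest UFO within the sight radius, or nullptr if there is none.
    std::shared_ptr<UFOData> GetClosest();

    // The sight radius in pixels.
    float GetRadius();
};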

C_NeuralNetwork.cpp
std::vector<float> C_NeuralNetwork::BuildNetworkInput()
{
    std::vector<float> networkInput;

    for (int i = 0; i < neuralNumOfInput; i++)
    {
        networkInput.push_back(0.f); // 1
    }

    std::shared_ptr<UFOData> closest = sight->GetClosest(); // 2

    if (closest)
    {
        sf::Vector2f to = closest->heading / closest->distance;

        networkInput[0] = to.x; // 3
        networkInput[1] = to.y;

        // We need to convert the distance to a number between 0 and 1 to be used as input.
        float normalisedDistance = closest->distance / sight->GetRadius();

        // We want a higher number to represent a closer agent, so we invert the number.
        networkInput[2] = 1 - normalisedDistance; // 4

        const sf::Vector2f& pos = owner->transform->GetPosition();

        float leftDistance = pos.x;
        float topDistance = pos.y;
        float rightDistance = std::fabs(windowSize.x - pos.x);
        float bottomDistance = std::fabs(windowSize.y - pos.y);

        networkInput[3] = 1 - (leftDistance / windowSize.x); // 5
        networkInput[4] = 1 - (rightDistance / windowSize.x);
        networkInput[5] = 1 - (topDistance / windowSize.y);
        networkInput[6] = 1 - (bottomDistance / windowSize.y);

        // Could use the below instead, but I found agents evolve quicker when
        // using discrete values for each wall.
        //networkInput[3] = pos.x / windowSize.x;
        //networkInput[4] = pos.y / windowSize.y;
    }

    return networkInput;
}

  1. We make sure that each input’s default state is 0. I chose 0 because it represents the furthest possible distance, i.e. if no UFO is found within the sight range, that input stays at 0.
  2. This retrieves the closest UFO within the UFO’s sight radius. If there isn’t one, a nullptr is returned. For more information on how this component operates, see the C_Sight class in the Part 2 folder in the GitHub link at the top of the page.
  3. The first two inputs into the neural network are the x and y direction to the closest UFO.
  4. The third input is the distance to the closest UFO. This trends towards 1 as the other UFO gets closer to this one.
  5. The last four inputs represent the distance to the sides of the environment (in this case the screen’s edges). These numbers also trend towards 1 as the UFO gets closer to that particular edge (see the worked example below).
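To sanity-check the maths, here’s a worked example with made-up numbers, reusing the figures from the earlier normalisation sketch: a 1920x1080 window, a UFO at position (480, 270), a sight radius of 200, and a closest UFO at heading (30, -40), i.e. 50 pixels away.

// Illustrative input values only; none of these numbers come from the project.
// networkInput[0] =  30 / 50          =  0.6   (x direction to closest UFO)
// networkInput[1] = -40 / 50          = -0.8   (y direction to closest UFO)
// networkInput[2] = 1 - (50 / 200)    =  0.75  (closest UFO is quite near)
// networkInput[3] = 1 - (480 / 1920)  =  0.75  (fairly close to the left edge)
// networkInput[4] = 1 - (1440 / 1920) =  0.25  (far from the right edge)
// networkInput[5] = 1 - (270 / 1080)  =  0.75  (fairly close to the top edge)
// networkInput[6] = 1 - (810 / 1080)  =  0.25  (far from the bottom edge)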

Now that we have our input, it’s time to implement the Update method. This will pass the input to the neural network, retrieve the output, and use that to move our UFOs. This function is called every frame.

C_NeuralNetwork.cpp
void C_NeuralNetwork::Update(float deltaTime)
{
    // Retrieves a vector of floats to use as input into the neural network.
    std::vector<float> neuralNetInput = BuildNetworkInput();

    // Get the output from the neural network.
    std::vector<float> neuralNetOutput = neuralNetwork.GetOutput(neuralNetInput);

    float x = neuralNetOutput[0];
    float y = neuralNetOutput[1];

    const sf::Vector2f move = sf::Vector2f(x, y) * maxMoveForce * deltaTime;

    velocity->Set(move);
}

This method is nice and simple because we’ve written the more complicated code elsewhere. It simply calls our previously written BuildNetworkInput to create the input, passes this to our neural network, and retrieves the output. The output is then used to move our UFO via the velocity component.

Before we can take advantage of the neural network, we have to add the now complete component to the UFOs we create. We’ll do this in the SpawnUFO method in SceneGame.

SceneGame.hpp
#include "C_NeuralNetwork.hpp"

SceneGame.cpp
void SceneGame::SpawnUFO()
{
    // ... existing code that creates the ufo object ...

    // Remove the below lines. Our neural network will now set the UFO's velocity.
    /*
    const float maxVelocity = 80.f;
    const float range = maxVelocity * 2.f;
    const float randVelX = range * ((((float) rand()) / (float) RAND_MAX)) - maxVelocity;
    const float randVelY = range * ((((float) rand()) / (float) RAND_MAX)) - maxVelocity;
    auto velocity = ufo->AddComponent<C_Velocity>();
    velocity->Set({randVelX, randVelY});
    */

    // And add these lines:
    ufo->AddComponent<C_Velocity>();
    auto sight = ufo->AddComponent<C_Sight>();
    sight->SetObjectCollection(&objects);
    auto neuralNet = ufo->AddComponent<C_NeuralNetwork>();
    neuralNet->SetWindowSize(windowSize);

    objects.Add(ufo); // Adds the object to the game.
}

We add our newly created neural network component and the sight component I discussed earlier. I’ve also removed the code that sets a random velocity because the neural network now handles that.

And that’s it: our UFOs now have a brain! Albeit a rather silly one, as you’ll soon see when you run the game.

[Figure: UFOs with a brain. Still not so clever.]

You’ll notice that while some UFOs may interact in some way with the UFOs surrounding them, by moving away from or even towards them, most will do no better than when we were setting a random velocity. This is because we are initialising all the neural networks with random weights, and they are stuck with those random weights. You may get lucky and one set of random weights may just happen to solve our problem, but it’s very unlikely. And it doesn’t matter how long you run the game; successive generations of UFOs will be no better than their predecessors. In the next tutorial, we’ll begin to rectify this by starting work on the genetic algorithm that we’ll use to evolve the neural networks so that, over time, they become better at the task we’ve assigned them.

Thank you for reading 🙂