Artificial Life

https://github.com/GandhiGames/alife_simulation

Artificial Life, often abbreviated to alife, is defined as a “computer simulation of life, often used to study essential properties of living systems (such as evolution and adaptive behaviour)”.

(Langton 1989) defined Artificial Life at the first conference dedicated specifically to alife. However, long before the field was officially recognised, John von Neumann and Norbert Wiener contributed to it. (Neumann & Jeffress 1951) delivered a lecture entitled The General and Logical Theory of Automata, which outlined an automaton: a machine whose actions are based on environmental information combined with its programming. (Von Neumann 1966) later created a self-reproducing automaton model. This was based on a number of fundamental properties of living organisms, with a focus on reproduction.

(Wiener 1948) applied information theory and homeostasis to the study of living organisms and examined the “…tendency of an organism or a cell to regulate its internal conditions, usually by a system of feedback controls, so as to stabilize health and functioning…” This was one of the first publications that discussed, in depth, the concept of feedback. Feedback involves a loop, whereby information from the output is used as input in future computations. (Moore 1956) extended this concept by designing Artificial Living Plants that could self-replicate and used feedback extensively. These machines would use simulated natural resources, such as air, soil, and water, and were modelled on their natural counterparts.

In 1970, John Conway outlined a cellular automaton, the Game of Life. It consists of a number of cells whose actions are based on simple rules that can produce complex patterns. An example output is shown in the image below.

Conway designed this simulation in response to a hypothetical machine proposed by von Neumann, and the Game of Life is an example of a cellular automaton. A cellular automaton (CA) is a grid-based system in which specific rules are applied to each cell.
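The Game of Life's rules are simple enough to state in a few lines of code. The sketch below is one minimal way to write them; the set-of-coordinates representation is an illustrative choice, not tied to any particular implementation.

```python
from collections import Counter

def life_step(live_cells):
    """Advance Conway's Game of Life by one generation.

    live_cells: set of (x, y) coordinates of live cells on an unbounded grid.
    """
    # Count live neighbours for every cell adjacent to at least one live cell.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbour_counts.items()
        # Birth with exactly 3 live neighbours; survival with 2 or 3.
        if count == 3 or (count == 2 and cell in live_cells)
    }

# The "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(life_step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```

Despite the brevity, patterns such as gliders and oscillators emerge from exactly these two conditions, which is the point of the example.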

Stephen Wolfram arguably led research into Artificial Life in the 1980s. (Wolfram 1982; Wolfram 1984; Wolfram 1986) studied the statistical mechanics, basic theory, models of complexity, algebraic properties, and computation theory of cellular automata. Through his exploration of alife, he conceived the Wolfram Code, a naming and classification system for one-dimensional cellular automata (one-dimensional analogues of systems like the Game of Life).
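A Wolfram rule number encodes, in the bits of an integer from 0 to 255, the next state for each of the eight possible three-cell neighbourhoods. A minimal sketch of one update step (the fixed zero boundary is an assumption for brevity):

```python
def eca_step(cells, rule):
    """One step of an elementary (one-dimensional) cellular automaton.

    cells: list of 0/1 states; rule: Wolfram rule number, 0-255.
    Cells beyond the edges are treated as 0.
    """
    padded = [0] + cells + [0]
    new = []
    for i in range(1, len(padded) - 1):
        # The three-cell neighbourhood selects one bit of the rule number.
        pattern = (padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1]
        new.append((rule >> pattern) & 1)
    return new

# Rule 30 grown from a single live cell produces its well-known
# chaotic triangle.
row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    print("".join("#" if c else "." for c in row))
    row = eca_step(row, 30)
```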

Procedural Models

Models based on procedural methods are those that do not involve the evolution of behaviour over time; the behaviour is hard-coded before the simulation begins. Early work in the study of collective behaviour used procedural behavioural models to produce the desired behaviour. (Reynolds 1987) created an Artificial Life simulation of the collective behaviour of birds. There was no pretence of representing real-world behaviour; the study instead focused on the flocking behaviour of the collective. The agents, named boids, executed three rules in the presence of neighbours, and from these rules complex global behaviour emerged. This provided the groundwork for further simulations implementing evolution and behavioural patterns more closely related to real-world entities.
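Reynolds' three rules are usually summarised as separation (steer away from crowding neighbours), alignment (match neighbours' average heading), and cohesion (steer toward neighbours' average position). The sketch below combines them into a single steering vector; the data layout and weight values are illustrative assumptions, not Reynolds' original parameters.

```python
def boid_steering(boid, neighbours,
                  w_separation=1.5, w_alignment=1.0, w_cohesion=1.0):
    """Combine the three boid rules into one 2D steering vector.

    boid and each neighbour are dicts with 'pos' and 'vel' (x, y) tuples.
    """
    if not neighbours:
        return (0.0, 0.0)

    n = len(neighbours)
    sep = [0.0, 0.0]    # steer away from nearby flockmates
    align = [0.0, 0.0]  # steer toward the average heading
    coh = [0.0, 0.0]    # steer toward the average position

    for other in neighbours:
        for k in range(2):
            sep[k] += boid['pos'][k] - other['pos'][k]
            align[k] += other['vel'][k]
            coh[k] += other['pos'][k]

    steer = [0.0, 0.0]
    for k in range(2):
        align[k] = align[k] / n - boid['vel'][k]
        coh[k] = coh[k] / n - boid['pos'][k]
        steer[k] = (w_separation * sep[k]
                    + w_alignment * align[k]
                    + w_cohesion * coh[k])
    return tuple(steer)
```

Each boid evaluates only its local neighbourhood, which is why the global flocking pattern counts as emergent rather than programmed.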

(Mataric 1992) developed automatons to produce flocking behaviour and found that, to implement collective behaviour successfully, a number of interactions need to be considered: collision avoidance, dispersion, following, homing, and aggregation. These interactions were programmed into the robots with weights that determined which was more likely to execute at a given time; however, because behaviour was decided at creation, this approach did not allow for evolution or emergent behaviours.
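One way to read the weighting scheme is as probability-proportional selection over the five interactions. The sketch below takes that reading; the weight values and the use of proportional selection (rather than, say, a fixed priority ordering) are illustrative assumptions, not Mataric's actual arbitration mechanism.

```python
import random

def choose_behaviour(weights, rng=random):
    """Pick one behaviour with probability proportional to its weight."""
    behaviours = list(weights)
    return rng.choices(behaviours,
                       weights=[weights[b] for b in behaviours])[0]

# Hypothetical weights: collision avoidance dominates, so it fires most often.
weights = {
    'collision_avoidance': 5.0,
    'dispersion': 1.0,
    'following': 2.0,
    'homing': 1.5,
    'aggregation': 1.0,
}
```

Because the weights are fixed at creation time, nothing in this scheme can change during the robots' lifetime, which is exactly the limitation noted above.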

(Zaera & Cliff 1996) applied the boids interaction method to artificial fish; however, the rules did not prove sufficient to model the characteristics of a school, suggesting that further research into interactions was needed.

Artificial Intelligence Models

The concept of Artificial Intelligence and a number of components (Neural Networks and Genetic Algorithms) are discussed in the next section. This section provides a brief history of previous Artificial Life models based on these concepts.

(Husbands et al. 1997) state that there is “no evidence that humans are capable of designing systems with these characteristics using traditional analytical approaches” (the traditional approaches being the procedural methods discussed previously). They recommend artificial intelligence techniques to automate the process.

Recent research into alife uses a number of evolutionary and AI techniques, including Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), relying on evolved sensory controllers and GAs to produce emergent behaviour, with mixed results. (Reynolds 1993) created an environment with predators, a number of prey called critters, and obstacles. There was no intention to realistically model the evolution of emergent behaviour; instead, the study provides a theoretical example of how this behaviour may arise over generations.

(Zaera & Cliff 1996) used artificial evolution to develop simulated animals called animats. Sensory-motor controllers were evolved using a neural network and simple group behaviours (dispersal and aggregation) were shown.

(Ward et al. 2001) focused on schooling behaviour. A neural network and encoded weights representing chromosomes were employed. The physiology of the agents was examined and modelled, and the study introduced visual (sight) and lateral line (hearing) radii. The output from the neural network moved the agent left or right. It is clear from past research that the inputs into the neural network and the fitness function are critical to the success of a simulation.

(Barmpoutis & Dargush 2007) applied localised evolution. This is similar to Conway’s Game of Life but has evolutionary components: an agent can exchange its encoded chromosomes with a neighbour based on a simple rule, and a Genetic Algorithm (explained in the Evolutionary Computing section) is used to crossover an individual’s schema with that of its neighbour. For evaluation purposes, a number of optimisation problems were executed.
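The crossover operator itself is simple. Below is a minimal single-point crossover between an individual's chromosome and a neighbour's; the list encoding and the single-point scheme are assumptions for illustration, not the paper's exact operator.

```python
import random

def crossover(parent_a, parent_b, rng=random):
    """Single-point crossover of two equal-length chromosomes.

    Returns two children, each a mix of both parents' genes.
    """
    # Split strictly inside the chromosome so both parents contribute.
    point = rng.randrange(1, len(parent_a))
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b
```

In a localised scheme each agent applies this only with its grid neighbours, so successful schemata spread gradually across the grid rather than through a global population.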

(Kwasnicka et al. 2011) proposed FLOCK, based on many of the features of Ward’s system; it used a Genetic Algorithm and an Artificial Neural Network, and agents could move either left or right at any given moment. However, no flocking behaviour was observed under open-ended evolution. When steered evolution was applied (whereby the fitness of an agent is relative to its proximity to other agents), flocking behaviour was then shown.
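A steered fitness of this kind can be sketched as a function of an agent's distance to its flockmates. The negative-mean-distance form below is an assumption for illustration, not Kwasnicka et al.'s exact function; the point is only that tighter flocks score higher.

```python
import math

def steered_fitness(agent_pos, other_positions):
    """Fitness that rewards proximity to other agents.

    agent_pos: (x, y) tuple; other_positions: list of (x, y) tuples.
    Returns the negative mean Euclidean distance, so closer is fitter.
    """
    if not other_positions:
        return float('-inf')  # a lone agent cannot flock at all
    total = 0.0
    for (ox, oy) in other_positions:
        total += math.hypot(agent_pos[0] - ox, agent_pos[1] - oy)
    return -total / len(other_positions)
```

Under open-ended evolution no such pressure exists, which is consistent with the observation above that flocking only appeared once fitness was tied to proximity.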

As always, thank you for reading 🙂