Evolutionary Algorithm
An evolutionary algorithm (EA) is a class of computational methods inspired by the principles of biological evolution and natural selection. These algorithms are designed to solve complex optimisation and search problems for which exact, analytical, or satisfactory deterministic solutions are unknown or impractical. Evolutionary algorithms belong to the broader class of metaheuristics and are a subset of population-based bio-inspired algorithms, forming a core component of evolutionary computation, which itself lies within the wider field of computational intelligence.
EAs are particularly valued for their flexibility and robustness, as they make minimal assumptions about the structure of the problem or the shape of the fitness landscape. They are widely applied in engineering, artificial intelligence, economics, biology, and many other domains.
Biological Inspiration and Core Mechanisms
Evolutionary algorithms abstract and imitate several key mechanisms of biological evolution. In an EA, candidate solutions to a problem are treated as individuals within a population, and the quality of each solution is assessed using a fitness function, analogous to biological fitness.
The principal evolutionary mechanisms modelled in EAs include:
- Reproduction, whereby new candidate solutions are generated
- Mutation, introducing random variation into individuals
- Genetic recombination (crossover), combining information from parent solutions
- Natural selection, favouring fitter individuals for survival and reproduction
Through repeated application of these operators across generations, the population evolves towards increasingly better solutions. The fitness function plays a central role, as it defines what constitutes an improvement and guides the evolutionary search process.
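The mechanisms listed above can be sketched as small operator functions on a binary genome. This is an illustrative sketch, not any particular EA's canonical implementation; the mutation rate and tournament size are arbitrary example values:

```python
import random

def mutate(individual, rate=0.1, rng=random):
    """Bit-flip mutation: flip each gene independently with a small probability."""
    return [1 - bit if rng.random() < rate else bit for bit in individual]

def crossover(parent_a, parent_b, rng=random):
    """One-point crossover: splice two parents at a random cut point."""
    point = rng.randint(1, len(parent_a) - 1)
    return parent_a[:point] + parent_b[point:]

def tournament_select(population, fitness, k=2, rng=random):
    """Selection: the fitter of k randomly drawn individuals wins."""
    return max(rng.sample(population, k), key=fitness)
```

Here the fitness function is whatever callable is passed to `tournament_select`; fitter individuals win tournaments more often, which is how selection pressure enters the search.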
Generic Evolutionary Algorithm Structure
Although many variants exist, most evolutionary algorithms follow a common procedural framework. A typical EA operates as follows:
- An initial population of individuals is generated, often randomly
- Each individual is evaluated using the fitness function
- A termination condition is checked, such as reaching a satisfactory fitness level or a maximum number of generations
- Individuals with higher fitness are selected as parents
- Offspring are produced through reproduction, optionally using crossover
- Mutation operators are applied to introduce variation
- Less fit individuals are replaced by new offspring, simulating natural selection
- The process repeats from fitness evaluation
This iterative cycle allows populations to adapt over time, progressively improving solution quality.
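The cycle above can be condensed into a short, self-contained sketch. The problem here is the toy "OneMax" task (maximise the number of 1-bits), and the population size, mutation rate, and truncation-style selection are illustrative choices, not prescribed by the general framework:

```python
import random

def one_max(bits):
    """Toy fitness function: number of 1-bits (maximum = genome length)."""
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=100, seed=0):
    rng = random.Random(seed)
    # 1. generate an initial population at random
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=one_max, reverse=True)        # 2. evaluate fitness
        if one_max(pop[0]) == n_bits:              # 3. termination condition
            break
        parents = pop[: pop_size // 2]             # 4. select fitter parents
        offspring = []
        while len(offspring) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)         # 5. one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < 1 / n_bits)   # 6. bit-flip mutation
                     for bit in child]
            offspring.append(child)
        pop = parents + offspring                  # 7. replace the less fit half
    return max(pop, key=one_max)

best = evolve()
```

Because the surviving parents are carried over unchanged, this particular sketch is also elitist: the best fitness in the population can never decrease between generations.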
Computational Characteristics
Evolutionary algorithms are well suited to problems with complex, noisy, discontinuous, or multi-modal fitness landscapes. Because they rely on population-based search rather than gradient information, they can explore large and irregular search spaces effectively.
However, a significant limitation in practical applications is computational complexity, which is often dominated by the cost of evaluating the fitness function. In many real-world problems, fitness evaluation is expensive, leading to high overall runtime. Techniques such as fitness approximation and surrogate modelling are therefore frequently used to reduce computational burden.
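The cheapest of these mitigations is memoisation: never evaluate the same individual twice. The sketch below uses Python's standard `functools.lru_cache` with a counter standing in for an expensive simulation; full surrogate modelling goes further by *predicting* fitness for individuals that have never been evaluated:

```python
from functools import lru_cache

evaluation_count = 0  # counts how often the expensive evaluation actually runs

@lru_cache(maxsize=None)
def fitness(genome):
    """Stand-in for an expensive evaluation (e.g. a long simulation).

    The genome must be hashable, so tuples are used instead of lists.
    """
    global evaluation_count
    evaluation_count += 1
    return sum(genome)

first = fitness((1, 0, 1, 1))
second = fitness((1, 0, 1, 1))   # answered from the cache, no re-evaluation
```

Caching pays off whenever selection and elitism cause the same genomes to reappear across generations, which is common in practice.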
Despite their apparent simplicity, evolutionary algorithms are capable of solving highly complex problems, and there is often no direct correlation between the complexity of the algorithm itself and the difficulty of the problem it can address.
Major Types of Evolutionary Algorithms
Evolutionary algorithms differ primarily in how solutions are represented and how evolutionary operators are implemented.
Genetic Algorithms (GAs)
This is the most widely used type of EA. Solutions are typically encoded as strings of numbers, traditionally binary, though real-valued and problem-specific encodings are common. Genetic algorithms rely on crossover and mutation to explore the search space and are extensively applied in optimisation problems.
Genetic Programming (GP)
In genetic programming, individuals are computer programs rather than fixed-length strings. Fitness is determined by the ability of a program to solve a given task. GP has many variants and is commonly used for symbolic regression, automated program synthesis, and modelling tasks.
Evolution Strategies (ES)
Evolution strategies use vectors of real numbers to represent solutions and typically employ deterministic parent selection. A defining feature is the use of self-adaptive mutation rates, allowing the algorithm to adjust its own search behaviour dynamically. ES is mainly applied to numerical optimisation problems.
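Self-adaptation of the mutation strength can be illustrated with a minimal (1+1)-ES on the sphere function, using the classic 1/5 success rule: grow the step size after successful mutations, shrink it after failures. The adaptation factors and iteration budget below are illustrative choices:

```python
import random

def sphere(x):
    """Standard numerical test function, minimised at the origin."""
    return sum(v * v for v in x)

def one_plus_one_es(dim=5, iterations=500, seed=1):
    rng = random.Random(seed)
    parent = [rng.uniform(-5, 5) for _ in range(dim)]
    sigma = 1.0                        # mutation strength, adapted online
    for _ in range(iterations):
        child = [v + rng.gauss(0, sigma) for v in parent]
        if sphere(child) <= sphere(parent):
            parent = child
            sigma *= 1.22              # success: take bolder steps
        else:
            sigma *= 0.95              # failure: search more locally
    return parent

solution = one_plus_one_es()
```

With these factors the step size is roughly stationary at a success rate near 1/5, which is the balance the rule is named after; when successes are rarer the step size shrinks and the search becomes more local.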
Differential Evolution (DE)
Differential evolution operates on real-valued vectors and generates new candidate solutions based on scaled differences between existing individuals. It is particularly effective for continuous numerical optimisation.
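The difference-based mutation can be seen directly in a compact DE/rand/1/bin sketch on the sphere function; the control parameters F (differential weight) and CR (crossover rate) are conventional example values, not universal settings:

```python
import random

def sphere(x):
    """Standard numerical test function, minimised at the origin."""
    return sum(v * v for v in x)

def differential_evolution(dim=5, pop_size=20, generations=200,
                           F=0.8, CR=0.9, seed=2):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        for i, target in enumerate(pop):
            # pick three distinct individuals other than the target
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            # mutant vector a + F * (b - c), binomially crossed with the target;
            # j_rand guarantees at least one gene comes from the mutant
            j_rand = rng.randrange(dim)
            trial = [a[j] + F * (b[j] - c[j])
                     if (rng.random() < CR or j == j_rand) else target[j]
                     for j in range(dim)]
            if sphere(trial) <= sphere(target):
                pop[i] = trial          # greedy one-to-one replacement
    return min(pop, key=sphere)

best = differential_evolution()
```

Because mutation steps are built from differences between current population members, their scale shrinks automatically as the population contracts around an optimum.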
Coevolutionary Algorithms
In coevolutionary algorithms, the fitness of an individual depends on interactions with other individuals. Solutions may compete or cooperate, making these methods suitable for dynamic, interactive, or adversarial problem domains.
Neuroevolution
This class of algorithms evolves artificial neural networks by encoding network structure and connection weights in the genome. Encodings may be direct or indirect, enabling the evolution of complex architectures.
Learning Classifier Systems (LCS)
Here, solutions consist of sets of rules or classifiers. Michigan-style LCS evolve individual rules, whereas Pittsburgh-style LCS evolve entire rule sets. Fitness is often determined through reinforcement learning or supervised learning methods.
Quality–Diversity Algorithms
Quality–diversity (QD) algorithms aim to discover a diverse set of high-quality solutions rather than a single optimum. They explicitly encourage behavioural or structural diversity while maintaining performance, making them valuable for exploration-heavy tasks.
Theoretical Foundations
Several theoretical principles underpin evolutionary algorithms.
No Free Lunch Theorem
The no free lunch theorem for optimisation states that, when averaged over all possible problems, all optimisation algorithms perform equally well. Consequently, no EA is universally superior. Improved performance can only be achieved by exploiting problem-specific knowledge, such as tailored representations, mutation strengths, or informed initial populations.
Memetic Algorithms
Incorporating local search heuristics or problem-specific procedures into an EA results in a memetic algorithm. These hybrid approaches often improve convergence speed and robustness and are widely used in practical applications.
Convergence Properties
For elitist evolutionary algorithms, in which the best individual of each generation is guaranteed to survive into the next, there is a general convergence argument: assuming an optimum exists, the sequence of best fitness values is non-decreasing and bounded, and therefore converges. Convergence to the global optimum itself requires an additional assumption, namely that the variation operators can reach every point of the search space with positive probability.
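The monotonicity part of this argument is easy to observe empirically. In the sketch below, a single elite individual is repeatedly mutated and only replaced by a fitter challenger, so the recorded best-fitness sequence is non-decreasing by construction (the quadratic fitness function and step size are arbitrary illustrations):

```python
import random

rng = random.Random(42)

def fitness(x):
    return -(x - 3.0) ** 2          # maximised at x = 3, bounded above by 0

elite = rng.uniform(-10, 10)
history = [fitness(elite)]
for _ in range(200):
    challenger = elite + rng.gauss(0, 1.0)    # mutate the current elite
    if fitness(challenger) > fitness(elite):  # elitism: keep the better one
        elite = challenger
    history.append(fitness(elite))

# non-decreasing and bounded above, hence convergent
monotone = all(a <= b for a, b in zip(history, history[1:]))
```

Note that the argument only guarantees the fitness sequence settles somewhere; without the reachability assumption, that limit can be a local rather than global optimum.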
However, elitism can increase the risk of premature convergence, particularly in panmictic population models where any individual may mate with any other. This risk can be mitigated by structured population models that restrict mate selection and slow the spread of dominant individuals.
Representation and Virtual Alphabets
The choice of representation strongly influences EA performance. The theory of virtual alphabets demonstrated that real-valued representations combined with traditional binary crossover operators may restrict access to parts of the search space. As a result, real-valued EAs typically employ arithmetic recombination operators, such as intermediate or mean-based crossover. With appropriate operators, real-valued representations often outperform binary encodings.
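One common arithmetic recombination operator is intermediate crossover, where each child gene is drawn from the line segment between the corresponding parent genes, so offspring can occupy points that coordinate-swapping binary crossover could never produce. A minimal sketch:

```python
import random

def intermediate_crossover(parent_a, parent_b, rng):
    """Each child gene lies on the segment between the two parent genes."""
    return [a + rng.random() * (b - a) for a, b in zip(parent_a, parent_b)]

rng = random.Random(0)
child = intermediate_crossover([0.0, 2.0], [1.0, 4.0], rng)
```

By construction every child gene stays within the interval spanned by its parents, which is exactly the blending behaviour that coordinate-swapping operators on real vectors lack.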