The Structured Complexity Group aims to develop, extend, and apply automated methods for learning good solutions to problems whose answers are expected to be complex structures. Broadly, such problems fall into three classes:

- Problems where the structure of the answer is known, and we only need to learn the right values for a fixed set of parameters

- Problems where the detailed structure of the answer doesn't matter: there may be an unknown number of parameters, but only their values matter, and the relationships between the values are relatively easy to determine

- Problems where a good solution is determined both by the values involved and by their relationships (sin(log(x)) is different from log(sin(x))), and where we don't know in advance how complex a solution should be (though we would still prefer a simple one where possible)

We are mainly interested in the third kind: problems where the likely answers are structurally complex. We work with stochastic methods for solving such problems, chiefly Genetic Programming and related techniques. These problems are generally very tough; very few methods can tackle them at all.
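To make the third class concrete, a toy illustration (using only the two expressions mentioned above): sin(log(x)) and log(sin(x)) are built from exactly the same primitives and the same variable, yet as expression trees they compute quite different functions, so a learner must search over structure, not just parameter values.

```python
import math

# Two candidate expression trees built from the same primitives
# {sin, log} and the same variable x -- only the structure differs.
def sin_of_log(x):
    return math.sin(math.log(x))

def log_of_sin(x):
    return math.log(math.sin(x))

# On the same inputs the two structures disagree everywhere they
# are both defined (here, 0 < x < pi).
for x in (1.5, 2.0, 3.0):
    print(x, sin_of_log(x), log_of_sin(x))
```

No amount of parameter tuning turns one tree into the other; the search must be able to rearrange the tree itself.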

Key questions in our research include:

- How to represent solutions to problems
  - We mainly work with different kinds of grammar representation
- How to use any knowledge we may have to simplify the problems
  - This is one motivation for using grammars

- How to automatically break down the problems into simpler problems where possible
- How to find simple solutions where possible, without damaging the ability to find complex solutions when necessary
- How to find structured solutions when complexity is necessary, so that we can handle the complexity
- How to understand the behaviour of current algorithms better
- How to find general solutions to families of problems, rather than just single solutions to single problems
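As an illustration of how a grammar representation can encode knowledge and constrain the search space, here is a minimal grammatical-evolution-style genotype-to-phenotype mapping. This is a hedged sketch, not our actual system; the grammar, the codon scheme, and all names are illustrative assumptions.

```python
# Integer codons choose productions from a small expression grammar.
# Only strings derivable from the grammar can ever be produced, so
# domain knowledge written into the grammar constrains every solution.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"],
               ["<func>", "(", "<expr>", ")"],
               ["x"]],
    "<op>":   [["+"], ["*"]],
    "<func>": [["sin"], ["log"]],
}

def derive(codons, start="<expr>", max_steps=100):
    """Expand the leftmost nonterminal repeatedly, consuming one codon
    per choice (wrapping around if the codon list runs out)."""
    out, stack, i = [], [start], 0
    for _ in range(max_steps):
        if not stack:
            return "".join(out)
        sym = stack.pop(0)
        if sym in GRAMMAR:
            rules = GRAMMAR[sym]
            choice = rules[codons[i % len(codons)] % len(rules)]
            i += 1
            stack = list(choice) + stack
        else:
            out.append(sym)      # terminal symbol
    return None  # derivation did not terminate within max_steps

print(derive([1, 0, 2]))   # -> sin(x)
print(derive([0, 2, 0, 2]))  # -> x+x
```

Search then operates on the flat codon lists, while the grammar guarantees that every phenotype is syntactically valid.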

Our work is particularly inspired by the ability of natural systems to cope with unbounded complexity (the real world), and to generate systems and solutions with highly structured complexity (our DNA is highly structured, with analogues to sub-programs and parameter passing).

Our work falls into three broad strands:

- Development of new methods. We particularly focus on ways to decompose problems into simpler ones.
  - Grammar-based estimation of distribution algorithms, which lead naturally to the identification and promotion of generalised building blocks
  - Measurement of building-block repetition and retention through compression metrics
  - Incremental learning and developmental evaluation
  - Understanding problem and population complexity through information metrics
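One cheap way to realise the compression idea, sketched here under the assumption of a simple zlib ratio (particular studies may use other compressors or metrics): repeated building blocks in a serialised expression compress well, so the compression ratio serves as a rough proxy for structural repetition.

```python
import zlib

def compression_ratio(expr: str) -> float:
    """Compressed size / original size; lower means more repetition."""
    raw = expr.encode()
    return len(zlib.compress(raw)) / len(raw)

# An expression dominated by one repeated building block...
repetitive = "sin(x)+" * 20 + "x"
# ...versus one of similar vocabulary but little repeated structure.
irregular = "log(sin(x)*x)+x*x-sin(log(x))/(x+sin(x)*2)"

print(compression_ratio(repetitive), compression_ratio(irregular))
```

The repetitive expression compresses to a small fraction of its size, while the irregular one barely compresses at all; tracking this ratio over a run gives a coarse signal of building-block retention.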

- Understanding and extension of existing methods.
  - Grammar-based representations for Genetic Programming
  - Operators in Genetic Programming
  - Fitness landscapes in Genetic Programming
  - Diversity mechanisms in Genetic Programming
  - Parallel evolutionary algorithms
  - Evolutionary algorithms for dynamic environments
  - Multi-objective evolutionary algorithms

- Application of existing methods. Current and previous applications include:
  - Ecosystem modelling
  - Software cost estimation
  - Phased-array radar beam optimisation
  - Intrusion detection in computer networks