Hi, I'm Tim Tyler, and today I'll be addressing the issue of
the relative merits of intelligent design and genetic algorithms.
Intelligent design and genetic algorithms
Both intelligent design and genetic algorithms represent
optimisation strategies. Optimisation is a type of search which
is guided by a utility function.
Genetic algorithms are a search strategy based on random mutation,
recombination and selection.
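To make that concrete, here is a minimal sketch of a genetic algorithm in Python. It is a toy example of my own, maximising a made-up bit-counting utility function; the population size, mutation rate and truncation-style selection are illustrative assumptions rather than part of any real problem.

```python
import random

# Toy utility function: the number of 1-bits in the genome ("OneMax").
def utility(genome):
    return sum(genome)

def mutate(genome, rate=0.01):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def crossover(a, b):
    # One-point recombination of two parent genomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def genetic_algorithm(length=50, pop_size=100, generations=200):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population as parents.
        population.sort(key=utility, reverse=True)
        parents = population[:pop_size // 2]
        # Recombination and mutation produce the next generation.
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(pop_size)]
    return max(population, key=utility)

best = genetic_algorithm()
print(utility(best), "out of", len(best))
```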
Intelligent design is a search strategy based on the actions of
an intelligent agent in solving the problem.
Framed in this way, it might seem obvious that an intelligent agent
would have a substantial advantage in any contest - since they can
always elect to use a genetic algorithm if they so choose - but could
also use any other search algorithm - if they felt that the
problem demanded it.
However, the situation is not a no-brainer - there are computational
overheads to intelligence - maybe the genetic algorithm will have
solved the problem before the intelligent agent has decided what
approach they will use.
The promise of genetic algorithms
In order to illustrate the promise of genetic algorithms here's
a clip from Richard Dawkins in 1987 - explaining the virtues of
the approach:
[Clip of Richard Dawkins from Horizon: The Blind Watchmaker]
Essentially, Dawkins makes two points: one is that in many complex
problems it's hard to do much better than trial and error anyway - and
the other is that genetic algorithms allow for a rapid, automated
search.
The failure of genetic algorithms
The dream of evolutionary optimisation which Dawkins spelled out has
pretty miserably failed to materialise. Evolutionary optimisation
techniques are used by engineers in solving problems - but
they have a pretty poor reputation: they are treated as a generic
search technique, something you try in those rare cases where you
have little information about the structure of a problem besides a
utility function.
There are several problems with the approach.
One is that it is often hard to express what you actually want as a
utility function in the first place. You might know what
robustness or maintainability are - but expressing such
things to a computer is not always trivial. This is a problem with all
automated search techniques, of course. Specification languages are
supposed to help here, but they are not there yet.
If you don't ask for exactly what you want, you often get
something which is brittle, incomprehensible, or otherwise unsuitable.
For example, if you are writing a computer program, one criterion is
often that the code should be self-documenting. If you don't
know how to tell the computer about this, what you get back will
often be incomprehensible, unmaintainable spaghetti code.
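As a purely hypothetical illustration of this specification problem: suppose the only properties you can easily measure are whether candidate programs pass a test suite and how short they are. The utility function sketched below - an invented example, with a made-up "solve" entry point and scoring scheme - says nothing about readability or maintainability, so a search guided by it is perfectly free to return spaghetti code.

```python
# Hypothetical utility function for evolving a small program. It rewards
# passing tests and being short, but says nothing about clarity or
# maintainability - so it fails to ask for what we actually want.
def utility(candidate_source, test_cases):
    namespace = {}
    try:
        exec(candidate_source, namespace)   # run the candidate's definitions
    except Exception:
        return 0.0                          # broken candidates score nothing
    solve = namespace.get("solve")          # assumed entry point (made up)
    if not callable(solve):
        return 0.0
    passed = 0
    for args, expected in test_cases:
        try:
            if solve(*args) == expected:
                passed += 1
        except Exception:
            pass                            # a failing case earns no credit
    # A small bonus for brevity - a crude proxy for "simplicity" that a
    # search process can easily satisfy with unreadable code.
    return passed + 1.0 / (1 + len(candidate_source))
```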
Also, automated search techniques often only seem to work on
small problems - and those are problems which humans can often
solve easily by other means.
What about the points that Dawkins made? Yes, automating a search
sometimes helps - though genetic algorithms are not the only automated
search technique. However, it does not seem to be true that
in many complex problems it's hard to do much better than trial and
error. Making changes at random is a particularly stupid
approach - and it is usually easy to beat.
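To illustrate how easy it usually is to beat purely random changes, here is a toy comparison, using the same made-up bit-counting utility as before, between blind random search and a simple hill climber that keeps a change only when it does not hurt. Even this crude use of feedback normally reaches good solutions far sooner.

```python
import random

def utility(genome):
    return sum(genome)          # same toy bit-counting utility as before

def random_search(length=50, trials=2000):
    # Pure trial and error: generate candidates blindly, keep the best seen.
    best = [random.randint(0, 1) for _ in range(length)]
    for _ in range(trials):
        candidate = [random.randint(0, 1) for _ in range(length)]
        if utility(candidate) > utility(best):
            best = candidate
    return utility(best)

def hill_climb(length=50, trials=2000):
    # Slightly smarter: flip one bit at a time and keep the flip
    # unless it makes things worse.
    current = [random.randint(0, 1) for _ in range(length)]
    for _ in range(trials):
        candidate = list(current)
        i = random.randrange(length)
        candidate[i] ^= 1
        if utility(candidate) >= utility(current):
            current = candidate
    return utility(current)

print("random search:", random_search())
print("hill climbing:", hill_climb())
```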
The future
One of the reasons genetic algorithms get used at all is that we do
not yet have machine intelligence. Once we have access to
superintelligent machines, search techniques will use intelligence
ubiquitously. Modifications will be made intelligently, tests will be
performed intelligently, and the results will be used intelligently to
design the next generation of trials.
There will be a few domains where the computational cost of
using intelligence outweighs the costs of performing additional trials
- but this will only happen in a tiny fraction of cases.
Even without machine intelligence, random mutations are
rarely an effective strategy in practice. In the future, I expect that
their utility will plummet - and intelligent design will
become ubiquitous as a search technique.