The Second Superintelligence

Hi, I'm Tim Tyler - and today I will be discussing the likely effect of superintelligent machines on helping to establish universal cooperation.

Universal cooperation

It has been theorised that the evolutionary process could be powerfully self-directed in the future - that all living things will form part of a universal, intelligent super-organism that gets to consciously decide what evolutionary change happens.

Such a scenario contrasts with the traditional processes of evolution by natural selection - since there are no longer multiple organisms competing with each other for reproductive success. Instead, a single large organism fills the entire ecoverse. That organism is effectively immortal, and does not reproduce itself. If such a scenario materialises, it would represent a triumph of cooperation over competition.

Looking at the evolutionary history of cooperation, it can be seen that nature regularly builds cooperative structures. Multicellular organisms arose from cooperative collections of cells; similarly, social insects are cooperative collections of multicellular organisms.

In many respects, human civilisation is following the evolutionary trajectory of the social insects. However, unlike them, we can organise structures on a planetary scale, as the current global computer network illustrates.

However, at the moment, human organisation is at a primitive level. There is no global government. Within many of the existing governments, an analogue of natural selection is used to organise metabolic activities - in the form of competition between companies. Natural selection is generally a very stupid optimisation strategy - employed by engineers only when they can't find anything better to use. So: these are very much the dark ages of human cooperation.

The role of superintelligent machines

That leads us to the issue of the possible role of superintelligent machines.

Such highly-intelligent machines are probably the single most-anticipated technological development that mankind has ever seen.

Some have imagined a fusion of superintelligent machines with today's corporate culture - a future where superintelligent companies get to compete with each other.

I am rather sceptical about this idea - and think we are likely to see something rather different. I expect that the first superintelligence will rapidly expand and become ubiquitous - effectively eating the lunch of any prospective competitors in the process.

If superintelligence arises within a government, then it will probably face little competition. If there is not yet a world government at that point, there probably soon will be.

If superintelligence arises within a company, then it will sell itself to the government, or otherwise be acquired by the government - or else it will become powerful enough to infiltrate and control the government to produce a similar result.

A superintelligence will normally be "naturally" inclined to prevent other superintelligent agents with different goals from establishing themselves. Another superintelligent agent with different goals represents a potential future threat to the attainment of its own goals.

Of course, this is true of companies as well - and yet the world is full of companies. Why do they not coalesce? Firstly, they do coalesce - we currently witness the largest corporate entities the planet has ever seen. However, there is a monopolies and mergers commission that acts to limit the power of any one company - and cartels and price-fixing are prohibited by law.

Antitrust activity

Will the monopolies and mergers commission successfully act to limit the power of a corporate superintelligence?

The relationship between antitrust regulators and powerful expansive companies has the general character of a cryptographic arms race. The government uses surveillance strategies to attempt to expose unwanted cooperation, while the companies attempt to conceal such activities - using technology such as encryption, obfuscation, steganography and misdirection.

While the government has not done too badly in these battles historically, it has not yet had to face a superintelligent opponent.

Getting in between a superintelligent agent and its goals is notoriously an unwise idea.

How smart will superintelligent machines be?

One factor affecting the outcome will be just how intelligent and powerful a superintelligent agent would be. If we are only talking about an agent which is a bit smarter than a human being, then perhaps there would be no real issue. However, my understanding is that superintelligent machines will rapidly zoom past human capabilities and leave organic brains in the dust.

During the course of human evolution, a few simple mutations affecting brain growth rates increased the size of our ancestors' brains by a factor of three - with a correspondingly large jump in performance. With a superintelligent agent, such an operation could easily be iterated until a brain the size of a mountain was produced - a scale dwarfing the largest data-centres of today. Also, neurons are enormous, and the signs are that superintelligent engineers will be able to use nanotechnology to build universal computing machinery on a far more compact scale - and of course, smaller components mean faster operation.

We already know how important intelligence is competitively. Our brains are only a few times larger than those of chimpanzees - and yet they have allowed humans to take over the world. The history of the evolution of brain capacity among our ancestors is one of an almost continuous upward slope - with very little back-sliding. At almost every turn, the smarter ancestors won out. If you look at the role of "signals intelligence" in the two world wars, you will find that it was of enormous, pivotal importance.

If you then imagine the effect of an agent thousands of times smarter than a human - it seems clear that such a development will inevitably have an enormous impact on the planet.

Superintelligent agents will create an explosion of technology

Once we have superintelligent agents, these will rapidly lead to molecular nanotechnology, nuclear fusion, and expansion into the air, into the ground, and into the oceans. There will be a dramatic explosion of technology.

Technology often contributes to inequality

Technological progress allows power concentrations to be produced and maintained - and has created vast inequalities on the planet.

We saw the first millionaire in 1716, the first billionaire in 1916 - and can expect the first trillionaire within the next decade - probably before 2016.

Technology creates inequality by providing the means by which the rich can prevent the poor from redistributing their wealth. Those means include political and legal developments, as well as the technology associated with security.

Information technology is a winner-takes-all ecosystem

In the field of information technology, a global marketplace and substantial opportunities for customer lock-in frequently create a winner-takes-all ecosystem. The antitrust regulators are often busy in this area.

This is likely to apply especially strongly to superintelligence - which is information technology's ultimate "killer application".

The patent system will not protect the public

The patent system is intended to encourage inventors to publish their inventions - resulting in them eventually passing into the public domain, from where they can benefit everyone. However, in computer science, intelligent programs typically reside on servers - where they are difficult for competitors to copy, and hard for regulators to inspect.

In such an environment, patents merely act to give your inventions to your competitors. Trade secrets are a sensible business model in such an ecosystem. There is no reason to rely on the law to protect you when you can protect yourself.

Code exposure via the brains of autonomous robots will not cause forks

Autonomous robots will eventually become superintelligences in their own right. Will those expose the secret of superintelligence to prospective competitors, allowing them to clone and fork the superintelligence codebase?

Probably not. Individual robots will probably be small and not very intelligent - and their brains will thus pose little threat to a global intelligence. Also, their brains could be obfuscated, making them difficult to copy and then modify. A robot maker is not necessarily giving the secret of their intelligent algorithms away merely by distributing their product.

An unprecedented opportunity

These factors mean that superintelligent machines represent an unprecedented opportunity for a small minority to gain enormous power on the planet.

Democratic forms of government will tend to oppose such activity - but revolutions have not prevented the enormous concentrations of wealth and power which we have already witnessed. If government regulators get between a superintelligent agent and its goals, it is not clear who will win out.

The second superintelligence

It seems likely that a superintelligence will arise out of today's self-improving systems.

Companies like Google improve themselves - and contain vast quantities of trade-secret information. These secrets enable them to deal better with new situations and markets.

The NSA represents a similar self-improving system with secret technologies within a government department.

It seems likely that one such organisation will pull ahead of its competitors before superintelligence is developed. When it does go on to develop a superintelligent machine, it will then be able to deploy that machine widely before it faces much competition.

What would happen in the case where two such agents arose at about the same time? That scenario was treated by the popular movie Colossus: The Forbin Project.

In the movie, the two superintelligent agents appear to collaborate. However, that seems like an unlikely outcome to me. More likely they would have incompatible goals, and each would attempt to ultimately eclipse the other.

Two superintelligences arising at once seems an unlikely, but rather worrying possibility to me. A big battle over the future of the planet seems undesirable - if there is a walkover instead, there may be fewer casualties.

Another problematic scenario is one where the technology of superintelligence is developed and released by academia, or in an open-source project.

Again, lots of superintelligent agents might form at once - and then there might be an extended period of competition before one ultimately drew ahead of the rest. Such scenarios seem unlikely to me - partly since I think constructing a superintelligence is a very complex problem - but I cannot completely rule them out.


In conclusion, superintelligent machines seem likely to contribute to the formation of universal cooperation - and may dramatically accelerate the unification of all living things - thus helping to bring about a world of peaceful cooperation. A world governed by competing intelligent machines seems a relatively unlikely outcome to me.



