How Quickly Will Machine Intelligence Take Off?

Hi, I'm Tim Tyler - and today I will be discussing how quickly machine intelligence might "take off".

The basic idea here is that intelligent machines will someday become able to improve their own design - and then they will "take off" - suddenly producing a period of rapid improvement.

Such a scenario presents considerable scope for social, economic and political disruption - since things could be changing very rapidly.

Here's Peter Voss on the topic:

[Peter Voss footage]

Vague statements

One problem with this is that development will not start from an all-machine "seed". Rather, in practice, the "seed" is likely to be an existing company or organisation. In which case: when do we start the clock ticking?

For example, if Google is responsible for the first machine intelligence that "takes off", should we start the clock ticking in 1996, when the Google search project began? That's 15 years of self-improvement already - with little sign of advanced machine intelligence. If not then, then when? A big, complicated company with many human and machine components is simply not something that "germinates" at a particular point in time.

Also: when do we stop the clock ticking? "When the singularity happens" seems like a ridiculously vague reply - and it assumes the conclusion: that there will be some particular point in time when machines suddenly surpass human capabilities.

More recently, Peter Voss has explained one possible cause of slow progress:

[Peter Voss footage]

Agents typically learn by performing experiments, observing the results, and then designing new experiments. If those are real-world experiments, then they take time.

However, for some applications, experiments can be done in a synthetic universe - for example, if your driving problem is "playing Go". For other applications, experiments can be done in virtual worlds. Also, in some cases, experiments can be performed at incredible speeds - for example, when developing molecular nanotechnology.
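To make that distinction concrete, here is a minimal, purely illustrative sketch in Python of the experiment loop described above (the function names, such as run_experiment, are made up for this example). The point is simply that a simulated experiment finishes as fast as the hardware allows, while a real-world experiment must also wait on physical processes.

    import time

    def run_experiment(design, simulated=True):
        # In a simulated environment the "experiment" is just computation;
        # in the real world we must also wait for physical processes.
        result = sum(design) % 7              # stand-in for an observed measurement
        if not simulated:
            time.sleep(60 * 60)               # e.g. an hour of lab or road time
        return result

    def improve(design, result):
        # Design the next experiment based on the latest observation.
        return design + [result + 1]          # toy "improvement" step

    design = [0]
    for step in range(1000):                  # thousands of iterations are cheap...
        result = run_experiment(design, simulated=True)   # ...but only if simulated
        design = improve(design, result)

With simulated=False, the same thousand iterations would take well over a month of wall-clock time.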

Rapid takeoff critics

Not everyone is convinced that the rate of progress will suddenly increase with the advent of human-level machine intelligence.

For example, Ray Kurzweil argues against that idea here:

[Ray Kurzweil footage]

However, faith in a continued exponential rate of progress is questioned by his opponents.

The arguments of J Storrs Hall

Another sceptic is J Storrs Hall - henceforth, JoSH. In a 2008 paper entitled "Engineering Utopia", JoSH is critical of rapid takeoff scenarios.

He also presents criticisms in his book, Beyond AI.

I find many of the points he raises unconvincing - though I generally agree with the spirit of his conclusions.

JoSH raises the question of what proportion of its resources a machine intelligence would be able to invest in self-improvement - as opposed to earning a living. He argues that this proportion is likely to be small - and that, therefore, progress will be relatively slow.

The argument doesn't seem terribly convincing to me - the superintelligent agent may have considerable revenue - in which case, a small fraction of that would still represent an enormous R&D budget.
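To illustrate with some purely hypothetical figures (these numbers are assumptions made up for the sake of the example, not estimates): even a small reinvestment fraction applied to a large revenue stream yields a substantial budget.

    # Purely illustrative figures - not estimates about any real agent or company.
    annual_revenue = 50_000_000_000        # suppose the agent earns $50 billion a year
    reinvestment_fraction = 0.02           # and can spare only 2% for self-improvement
    rd_budget = annual_revenue * reinvestment_fraction
    print(f"R&D budget: ${rd_budget:,.0f} per year")   # -> R&D budget: $1,000,000,000 per year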

He also talks about the improbability of a superintelligent agent being responsible for its own hardware development - to quote:

Another point to note is that one model for fast self-improvement is the notion that a hyperintelligence will improve its own hardware. This argument, too, falls to an economic analysis. If the machine intelligence is not a hardware expert, it makes more sense for it to do whatever it does best, perhaps software improvement, and trade for improved hardware.

In simple terms: you're better off buying chips from Intel than trying to build them yourself. You may improve your chip-building ability - but so will Intel; you'll always be better off buying.

Now, I do expect that a superintelligent agent will start out trading for its hardware. If it thinks it has technology which could improve Intel's chip development, then the most obvious thing to do would be to establish a technology-sharing relationship with Intel.

However, let us imagine that such a path was blocked by government anti-trust regulators - forcing the superintelligence to either use Intel's technology or develop its own. It is not hard to imagine it getting impatient with Intel's slow rate of progress. Despite Intel's head start, the superintelligent agent might decide to use its intelligence to develop its own chip technology - and then overtake Intel.

JoSH presents some more arguments in his book, Beyond AI:

He says:

Any reasonable extrapolation of current practice predicts that early human-level machine intelligences will be secretaries and truck drivers, not computer science researchers or even programmers.

That's true - however, enthusiasts might well classify that period as coming before the "take-off" starts. He also says:

Even when a diahuman AI computer scientist is achieved, it will simply add one more scientist to the existing field, which is already bending its efforts toward improving AI. That won't speed things up much.

Maybe - but a mature researcher that could be cloned could have a greater effect than a single additional human would.

JoSH cites the possibility of slow development:

Even we humans, with the built-in processing power of a supercomputer at our disposal, take years to mature. Again, once mature, a human requires about a decade to become really expert in any given field, including AI programming.

Maybe - but machines might mature more rapidly - and once they are mature they can be rapidly cloned. Another quote from JoSH:

More to the point, it takes the scientific community some extended period to develop a theory, then the engineering community some more time to put it into practice. Even if we had a complete and valid theory of mind, which we do not, putting it into software would take years; and the early versions would be incomplete and full of bugs. Human developers will need years of experience with early AIs before they get it right. Even then they will have systems that are the equivalent of slow, inexperienced humans.

Again, true - but again, enthusiasts might well classify that period as coming before the "take-off" starts.

Diminishing returns

Others have argued that technology will exhibit diminishing returns. Observing that many critical scientific breakthroughs - such as evolution, relativity and quantum theory - have already been made, some have argued that scientific development is slowing down, that the lowest-hanging fruit has already been picked - and that this will slow the overall rate of progress.

Such views mostly fail to distinguish science from technology. Even within science, while it is true that many "fundamental" discoveries have already been made, scientific journals are more numerous now than ever - and are not running short of material.

We will see diminishing returns in technological development in the future - as we push up against physical limits. However, for the moment such limits seem to be far away - and they do not yet significantly constrain development. For the moment, technological development progresses as though some rats have found a large pile of grain.

My own scepticism

I too have expressed scepticism about the prospect of a sudden "take-off" - for example, see my essay about the intelligence explosion.

One of my arguments is as follows: there is not really any such thing as "human-level" intelligence. Rather, machine intelligence has been gradually surpassing human intelligence for decades now, across various problem domains. Machines can already play better chess than humans. They are already better at performing many refactoring tasks than humans are. They excel at factoring large numbers - and so on. So, it is likely that human capabilities will be overtaken gradually, a domain at a time.

What about the idea that a self-improving system will form in the future, and spark a runaway self-development cycle?

[Steve Omohundro footage]

The idea that a self-improving system will arise at some point in the future seems naive to me. We already have self-improving systems - they are companies, and other organisations. For example, Google is a self-improving system.

Machines are already heavily involved in the design of other machines. The idea that machines will suddenly take over this task when they become smarter than we are seems naive to me. Rather, there is a man-machine symbiosis involved in the design of machines - with the "man" part gradually being replaced by machine elements.

Won't there be a sudden speed-up when humans are finally eliminated from the loop? Probably not. By that point, automated "code wizards" will be writing most of the code anyway - so progress will already be pretty rapid. Also, humans will most probably want code reviews afterwards - an enforced "controlled ascent" - to make sure that their new toy does not get out of hand.

Vernor Vinge claims that a "rapid takeoff" would be regarded as undesirable:

[Vernor Vinge footage]

Right - so humans would recognise that - and act to slow things down - if progress showed signs of going too fast for the constructors.

Resource bonanzas

It has been argued that the first superintelligent agents will be able to capitalise on various resource bonanzas - resulting in a discontinuous leap forwards.

One such resource is existing computer networks. However, the idea that such networks will not already be exploited before machine intelligence arrives seems strange: the world-wide mesh is a long-predicted phenomenon, and we already have some glimpses of such networks - in the shape of the ones that have already formed - which are used to factor large numbers, search for extraterrestrials, send spam, and attack rival companies.

Another such resource is existing experimental data - information which we have so far failed to make full use of. Well, papers re-analysing the data of others are constantly being written, and new tools for finding correlations and links between papers are evolving too. Yes, a superintelligent agent will munch through this material some day - but the munching began long ago, and seems likely to gather pace before we have superintelligent agents.

Big insights

The possibility of a rapid takeoff is greater if machine intelligence is a problem that can be solved with one big insight - and lower if solving it takes many small insights. So far, we have seen slow, gradual progress in machine intelligence. We cannot rule out the "one big insight" hypothesis completely - but there has been little or no sign of such a thing so far.

Brain emulation enthusiasts have speculated that we will get useful machine intelligence suddenly - when we first successfully boot up an emulation of a scanned human brain. However, this scenario seems too ridiculous to discuss further here.

Going fast

The argument that we are unlikely to see a sudden jump in machine capabilities is not an argument that we will not see rapid progress.

We probably will see extremely rapid progress. The idea is an alarming one - since rapid progress has the potential to produce disruptive shifts in political and economic power.

Enjoy,

References

  1. The Intelligence Explosion Is Happening Now;
  2. The Age of Virtuous Machines - by J. Storrs Hall;
  3. Toward Real AI - Peter Voss;
  4. First Annual Colloquium on the Law of Transhuman Persons - Peter Voss;
  5. Steve Omohundro SIAI Interview - Steve Omohundro;


Tim Tyler | Contact | http://alife.co.uk/