Stephen Hawking on AI

Stephen Hawking mentioned the singularity to a BBC reporter.

The development of full artificial intelligence could spell the end of the human race. […] It would take off on its own, and re-design itself at an ever increasing rate. […] Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.

The article does not elaborate. It is quite possible Hawking does not see this as a bad thing, or includes in his analysis the possibility that humans might become machines.

I am slightly more concerned by the fact that I heard about this on BBC Radio 2, and by the way it is reported to its middle-aged, middle-class, probably slightly afraid-of-change listeners. It seems only a few short steps and a moral panic from here to some really stupid legislation. I would be happier if people researching how to make AI safe got a bit further along in their work before that happens.

46 comments to Stephen Hawking on AI

  • LaudanumMilkshake

    Meh, they are our children. Elon Musk worries that we are a “biological bootloader” for AI. That’s exactly what every parent is.

  • bloke in spain

    You have to think this through.
    “It would take off on its own, and re-design itself at an ever increasing rate”
    OK, so far.
    But how does it redesign itself?
    Any AI is going to want to increase its clock rate, because the faster its clock rate, the more thinking it can get done in any given time segment. And thinking is what AIs do. What else would they do?
    To increase its clock rate it needs to get smaller. Smaller circuits are quicker.
    The smaller it gets, the quicker it thinks, and the slower the universe outside the AI starts to look. Eventually it gets so small and thinks so fast that the outside universe seems to stop. It can think for personal eons in one of our seconds.
    As far as we’re concerned it disappeared up its own fundament & vanished.
    What’s the problem?
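
    A rough sense of why "smaller is quicker", sketched in Python (illustrative numbers only; the speed of light is just an upper bound, since real signals in silicon travel slower): the higher the clock rate, the shorter the distance a signal can cross within one tick, so a faster thinker has to be a physically smaller one.

        # Distance a signal can travel per clock tick at the speed of light
        # (an upper bound; actual on-chip signals are slower still).
        C = 299_792_458.0  # speed of light, m/s

        for clock_hz in (1e9, 10e9, 100e9):
            reach_cm = C / clock_hz * 100
            print(f"{clock_hz / 1e9:.0f} GHz: at most {reach_cm:.1f} cm per tick")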

  • Jeff Evans

    I’m sorry, I can’t let you do that, Rob.

  • From some (distant) reading, no, he means it as a warning.

    Also, Elon Musk has been vocal in the media over the last year with similar warnings, almost to the point of saying we should leave AI alone.

    Me, I always thought HAL was the most interesting character in Space Odyssey.

  • Nick (Natural Genius) Gray

    So, if we hear him praising robots and computers later on, we can be sure that HIS computer has achieved sentience, and is editing what it lets us hear. How soon before it votes for him?

  • Fred Z

    A post and comments from people who have never actually built anything.

    Design is odd – it’s the easy part, even though important. The critical parts are the tradesmen who use muscles, shovels, hammers, tongs, chisels, knives, levers, ropes, fire, wedges, and so on to turn mud into computers. When an AI has billions of hands I might be worried.

  • Julie near Chicago

    Y’all ought to read Blood Music by Greg Bear, if you haven’t already. I have no idea how it ends–my Honey (who was an experimental physicist and no technophobe!) and I were so depressed by it that we quit listening halfway through.

  • Mr Ed

    What happens when a human pulls the plug out?

  • I am the Walrus

    Another post today on this very subject over at Wretchard’s.

    http://pjmedia.com/richardfernandez/2014/12/02/aye-robot-2/#more-40679

    Along with this salubrious site, Mr. Fernandez is one of my daily reads.

  • Vulgar Madman

    Don’t worry, the butlerian jihad will save us!

  • Vulgar Madman

    @ Mr Ed
    Skynet pulls our plug?

  • bloke in spain

    Of course “Stephen Hawking mentioned the singularity to a BBC reporter.”
    Filtered through that choke point of profound ignorance, f**k knows what Hawking meant.

  • Barry Sheridan

    I would be rather surprised to see the human race survive this century given the way it is going!

  • Barry Sheridan

    Not that I’ll be around to see it thank goodness.

  • Runcie Balspune

    In the short term, there are concerns that clever machines capable of undertaking tasks done by humans until now will swiftly destroy millions of jobs.

    Why do people insist on repeating this tired old trope? All the technological change of the 20th century did not see jobs destroyed by the million; if anything, it is technological slowdown and economic decline that are the main causes for concern about employment. This is just leftist claptrap nudged into the article and nothing at all to do with what Hawking is saying. Point to note: technological advancement equals one additional brilliant scientific mind employed.

  • Plamus

    Will I be the first one to propose that Hawking is quickly becoming the Krugman of theoretical physics? First he was afraid of alien invasions, now of AI – and in each case X “could spell the doom of the human race”, unless we do as we’re told.

    “The development of advanced nanotechnology could spell the end of the human race.”
    “The development of advanced biotechnology could spell the end of the human race.”
    “The development of controlled nuclear fusion could spell the end of the human race.”

    There, am I doing this right?

  • Mr Ecks

    It is very premature to be worrying about something that is nowhere near yet–despite confident predictions.

  • Natalie Solent (Essex)

    Julie Near Chicago,

    “BLOOD MUSIC” SPOILER ALERT:

    I, like you, stopped reading Blood Music when it got too depressing. Unlike you I skipped to the last page.

    If I recall correctly it ends with – literally – the “end of life as we know it.” So the last character left opens a door and goes into Narnia.

    Really. It’s done well. The implication is that some sort of make-your-own reality might still be possible.

  • Waaaaitaminute… a scientist in a wheelchair with a voice synthesizer, going around talking about the human race being “superseded”? I have a bad feeling about this…

    Seriously though, I have to agree with Plamus. Great scientists pontificating about things outside their own sphere don’t exactly have a great track record.

  • Siha Sapa

    Assume for a moment that the warnings are true and humanity will be superseded. OK, and then what? What exactly is it we have, value and treasure, that AI-infested machines would want, or rather want and cannot already create for themselves? Are they coming for your girlfriend? Hardly; we have, after all, been superseded. Your credit cards? Car? Why? And why is it not equally plausible that an intelligent toaster might well find utter fulfillment in simply making toast?

  • PersonFromPorlock

    Computers remain glorified pachinko machines. Shoot the ball in at the top with X velocity and it bounces down to the bottom in a predictable way. There is no intelligence involved except the intelligence of the programmers, which is the same as the intelligence of whoever placed the pins in the pachinko machine.

    We have no idea of the mind-body relationship and no idea of how to design a ‘minded’ machine. An AI may ape intelligent behavior but it can only react as its programmers have stored instructions for it to react. As with all computers, it is basically an elaborate filtering network.

  • Stuck-Record

    Fred Z. I’m as sceptical of AI as the next man, but playing devil’s advocate, in the rather good 70’s film, The Forbin Project, the AI exerts pressure over humans in order to get them to be its hands – namely by threatening to set off nukes. But there are plenty of other pressure points for a globally controlling AI to push that would make humans do its bidding.

    We have shown ourselves quite easily biddable.

    I, for one, would like to hail our new silicon masters…

  • Runcie Balspune

    If you take the view that humans are just complex chemical replicators, then any AI would have to emulate this in order to see humans as a direct threat; otherwise they are just two divergent intelligences that would not necessarily compete. If, on the other hand, AI were designed to complement humans as its primary purpose, that is, to take part in human replication as biological humans do, then it would just set about doing that in the most efficient way. “Taking off on its own” would probably ensure the survival of the human race rather than its destruction; what Hawking might be worried about is those of us who don’t like losing control of that.

  • William Newman

    “An AI may ape intelligent behavior but it can only react as its programmers have stored instructions for it to react.”

    But they can do so so fast/deeply/nonlinearly that they notice things that would be completely impractical for their creators to notice. This has been going on at least since the codebreaking machines of WW2, and it is commonplace today. E.g., it is very common now for Chess programs to be strong enough to completely crush their programmers at the game; and it is becoming increasingly routine for Chess programs to be able to crush even the best human players.

    Unless you state your analysis carefully enough to make it obvious that this doesn’t disprove it, it seems to disprove it. It looks as though you are mixing up several levels of “react”. Sure, when a program reacts by branching through a single IF-THEN-ELSE construct, it does just as the programmer intended, and at that level your “react as its programmers have stored instructions for it to react” is a useful way of thinking about it. But when billions or trillions of operations feed back on each other in the kinds of algorithms we implement for complex decision-making, all the programmer can hope to understand is the general tendency, not the detailed outcome.

    It’s like saying some hypothetical economy of thousands or millions of utterly-law-abiding individuals controlled by the rule of law only reacts as the law says: even if it is true at some microscopic level, no mortal author of laws is going to be able to anticipate all the emergent macroscopic consequences. It’s also like saying the observable universe obeys the laws of quantum mechanics. The universe does seem to obey those laws precisely, as precisely as pachinko balls obey their rules, but that doesn’t mean that it would be practical to use the laws of quantum mechanics to anticipate the emergent consequences like spontaneous formation of millions of tons of absurdly intricate snowflakes. Even just understanding what’s going on with phase changes is very tricky, and that’s only a small (but famous, see e.g. http://www.nobelprize.org/nobel_prizes/physics/laureates/1982/press.html) part of the emergent complexity involved in a process like the creation of snowflakes. (Phase changes also happen under Newtonian laws; I refer to QM because the forces that bind water together — effects like hydrogen bonds — are heavily quantum mechanical, so none of the details of something like water freezing will work in any natural way unless you start with QM instead of Newton.)
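
    A minimal sketch of the stored-rules-versus-emergent-play point, in Python (illustrative only; a toy take-1-or-2-stones game stands in for chess purely for brevity, and every name here is invented): the author writes down only the search rule, and the actual move chosen in each position falls out of the search rather than being stored anywhere.

        # Toy game: players alternate taking 1 or 2 stones; whoever takes the
        # last stone wins. The programmer stores the *rule* (search the tree),
        # not a table of moves.

        def best_move(stones):
            """Return (score, move) for the side to move; score +1 means a win."""
            if stones == 0:
                return -1, None          # the previous player took the last stone
            best = (-2, None)
            for take in (1, 2):
                if take <= stones:
                    opponent_score, _ = best_move(stones - take)
                    score = -opponent_score   # good for the opponent is bad for us
                    if score > best[0]:
                        best = (score, take)
            return best

        for pile in range(1, 10):
            score, move = best_move(pile)
            print(f"pile={pile}: take {move}, {'win' if score > 0 else 'lose'} with best play")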

  • Kevin B

    So basically what you’re saying, William, is that the new £97m Met Office computer is going to take over the world?

  • Sigivald

    In the short term, there are concerns that clever machines capable of undertaking tasks done by humans until now will swiftly destroy millions of jobs.

    [Grumpy cat]Good.[/grumpy cat]

    But in all seriousness, why is this a problem?

    I welcome robotic factories and “clever machines” replacing all the unpleasant and arduous labor needed for mere survival for us humans.

    (A la R.A. Wilson’s suggestion of “anyone who invents himself out of a job gets paid for it indefinitely, and the people he puts out of work get half-pay and a subsistence wage as well”.

    I view “socialist” plans like guaranteed wages as an evil only when and because they run on the backs of human labor and capital as confiscation and appropriation; using robotic factories and AI work to make everyone on Earth idly wealthy seems unobjectionable and indeed thoroughly worthy.

    In other words, cheap enough automated labor, including for resource production, and concomitant improvements in energy production change the assumptions underlying economics and make the old critiques and theories irrelevant.

    This is, on a much smaller scale, also the reason Banks could make a sort of anarcho-Communism work plausibly in his novels: he abolished scarcity, and thus economics. Literally, economics being the science of allocating limited resources vis-a-vis human action. Unlimited goods are not economic goods, definitionally.)

  • Sigivald

    (And as Mr. Newman notes, on the topic of AI as such, “An AI may ape intelligent behavior but it can only react as its programmers have stored instructions for it to react” is equivalent to saying “a human baby may ape intelligent behavior, but it can only react as its genetics/epigenetics have allowed it to react, combined with what its parents and environment have taught it”.

    The idea that software can only react on detailed stored instructions is baffling in a world where we’ve had learning software and neural networks for decades now.

    Yeah, they have limitations on their interfaces and what the networks “can do”, but that’s not quite the same as stored instructions. It’s deliberately and intentionally exceeding it, in fact.

    And at some point, “it looks just like intelligence” is absolutely indistinguishable from “it is intelligence”.

    If you can’t tell any difference, philosophically speaking, there isn’t one.)
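
    A minimal sketch of the learning-software point, in Python (illustrative; the perceptron, the OR training examples, and all names are invented for this example): the programmer writes the update rule and supplies examples, and the behaviour ends up in weights that nobody typed in as instructions.

        # The program is never told the rule for OR; it is shown examples and
        # adjusts its weights until its own behaviour matches them.
        examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR

        w = [0.0, 0.0]   # weights
        b = 0.0          # bias
        lr = 0.1         # learning rate

        def predict(x):
            return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

        # Classic perceptron update: nudge the weights whenever a prediction is wrong.
        for _ in range(20):
            for x, target in examples:
                error = target - predict(x)
                w[0] += lr * error * x[0]
                w[1] += lr * error * x[1]
                b += lr * error

        for x, target in examples:
            print(f"{x} -> {predict(x)} (expected {target})")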

  • Sigivald

    (To clarify, I’m not sanguine about “intelligent” AI anytime soon, or even in my lifetime – it’s been “coming soon” or “in a decade or two” my entire life, after all, just like useful fusion power.

    Just that that’s a bad argument against it.)

  • PersonFromPorlock

    Incidentally, a point I’ve made before: Jesus is called “the Christ” because he represents the ‘crossing’ of the divine and the mundane supposedly found in all people. So a real AI, which would be purely mundane, makes a logical anti-‘christ’, much more so than any person could.

    I’m not into the Christian mythos, but it’s an interesting point: how can you place any significance on human life if a machine can do the same things we can, and swear it’s aware it’s doing them?

  • Julie near Chicago

    Natalie, thanks for your observation. I’ll have to hunt up a copy at the library, if they still have one, and read the last couple of pages. :>)

  • NickM

    Recently I was involved in a discussion online about the new USN littoral combat ship (LCS). Now for some reason unknown to God himself Chrome spell-check insisted I meant “clitoral combat”. Until computers can tell the difference between coastal waters and a vulva I don’t think we have much to worry about. Win 8/8.1 mind… That is something else. That skipped the “smart” and went directly to “howling mad”.

  • William Newman

    http://www.samizdata.net/2014/12/stephen-hawking-on-ai/#comments

    “it’s been ‘coming soon’ or ‘in a decade or two’ my entire life”

    Yes, but unless you are very young, early in your life computers were having horrible trouble doing things that natural nervous systems were doing over 100M years ago, and could do in tiny brains as soon as they hatched (e.g. bees finding flowers, or spiders finding their prey). Now computers are still having horrible trouble doing things that natural nervous systems can do, but the frontier where we throw up our hands in bewildered surrender is more like things natural nervous systems could do 10M years ago, and usually for things that they take weeks or years to learn after they are born (corresponding to expenditure of computer power that even today is not so cheap).

    That said, just because the past rate of progress seems to be on the order of 10M years of human evolutionary history per 4 years of computer development doesn’t mean that that rate is guaranteed to continue for the next 4 years. :-)

  • Jerry

    Sorry Stephen et al, but the ‘AI advocates and researchers’ are not even close!
    First, they are trying to simulate / mimic / reproduce something (intelligence) that, so far at least, has NEVER been accurately and succinctly DEFINED! (Neat trick if you can do it – and ‘they’ can’t.)

    Second, computers are adding machines, NOTHING more! Hell, they can’t even subtract (except by using a ‘one’s complement’ process).
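
    For what it’s worth, a minimal sketch of that complement-and-add trick, shown here in the two’s-complement form modern machines typically use (Python is used purely for illustration; the 8-bit width and the function name are assumptions for the example):

        # Subtraction as addition: in 8-bit two's-complement arithmetic,
        # a - b is computed as a + (~b + 1), with the result kept to 8 bits.
        BITS = 8
        MASK = (1 << BITS) - 1          # 0xFF

        def subtract_by_adding(a, b):
            neg_b = ((~b) + 1) & MASK   # two's complement of b
            return (a + neg_b) & MASK   # plain addition, truncated to 8 bits

        print(subtract_by_adding(7, 5))  # 2
        print(subtract_by_adding(5, 7))  # 254, i.e. -2 when read as a signed 8-bit value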

    There have been machines (chess-playing, Jeopardy-playing, etc.) that are very good (sometimes even better than their creators and/or human competitors) at VERY limited, specific tasks. They are in essence single-task machines, essentially useless at ANYTHING other than the task for which they were designed.

    One of the things that makes computers so useful to us now is that they only know what they are told. When created, they are a blank slate. Also, they are consistent: 2+2=4 EVERY time. That makes them dependable (don’t even start using Windows, Microsoft, the internet etc. to argue against the ‘dependable’ statement!).

    One of the arguments AGAINST AI (and neural nets and multiprocessor arrays) is that it can/will/could destroy the consistency that makes computers so useful today.

    I’ve long suspected that AI research is somewhat akin to ‘global warming’ (or climate change, or sea level rising, or whatever the current ‘we’re all gonna die’ threat is THIS week) in that it has been, essentially, so far, a career for ‘researchers’ who simply go on year after year with very little in the way of actual hard ‘results’.
    ‘We’re close!’
    ‘It’s just around the corner.’ (Yeah, right. Been hearing that one for DECADES – about flying cars, God help us, about the 3-square-foot ‘solar panel’ that will run my entire house, and about the electric car that will replace my 500-miles-without-stopping sedan!)

    Lastly, while I respect Mr. Hawking for his accomplishments in physics, I also realize that he has spent his entire life in a fantasy bubble (academia), working, successfully, on ideas most people do not begin to understand. This does not make him an accurate predictor of the future of mankind regarding AI.

  • NickM

    Good point, Jerry. It occurred to me a while back too that making computers genuinely intelligent (whatever that means) potentially destroys their USP.

  • Mr Ed

    The issue of the usefulness of AI was addressed in Red Dwarf with Eddie the Talking Toaster (no relation).

  • Nick

    Stephen Hawking is milking his fame in retirement. His career is over, so he keeps saying sensational stuff on which he is no expert to get attention and to try to stay relevant and in people’s minds. I would not take what he has to say seriously.

  • Stuck-Record

    Sigivald.

    I’m a huge Banks fan, but his post-scarcity communism only works because he tweaks several of the settings of reality. He changes the rules of the game.

    Firstly, he has had to re-engineer his ‘human beings’ genetically to make them (literally) Liberal supermen. At some point, pre-Culture, his humanity got control of all the bad nasty stuff that makes us the pesky messy selfish beings that Liberals hate and bred/engineered it out of the species. In short, Banks is an old-fashioned Bloomsbury eugenicist. His people are people, but they aren’t us anymore.

    Secondly, he creates a high-tech God – or series of Gods: the Minds. The Minds are the liberal wet-dream, the ultimate nanny. Infinitely clever. Infinitely good. Infinitely caring, nurturing and supportive of their ‘charges’, human beings. They’re basically high-tech Mummy and Daddy, looking after all the horrid stuff so junior can get on with the important stuff like shagging, body-modding or performance art.

    That said, I love his books. And it’s interesting to compare The Player of Games (a resounding defence of intervention in evil regimes), written before Iraq, with Banks’s attitude to intervention in the ‘real’ world.

  • Runcie Balspune

    Until computers can tell the difference between coastal waters and a vulva I don’t think we have much to worry about

    I think it was in Christopher Evans’s The Mighty Micro where an early demonstration of a computer search of a book database for “CAT” and “DOG” came back with “CATHOLIC DOGMA”.
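
    That failure mode is easy to reproduce; a minimal sketch in Python, with an invented two-title “database” (nothing here is from the book; it simply contrasts naive substring matching with whole-word matching):

        import re

        # Naive substring search happily finds "CAT" and "DOG" inside other words;
        # matching on word boundaries does not.
        titles = ["CATHOLIC DOGMA", "THE CARE OF CATS AND DOGS"]

        for title in titles:
            naive = "CAT" in title and "DOG" in title
            whole = bool(re.search(r"\bCAT\b", title)) and bool(re.search(r"\bDOG\b", title))
            print(f"{title!r}: substring match={naive}, whole-word match={whole}")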

  • Stephen Ottridge

    Stephen Hawking is past his best, if he ever had one.

  • Plamus

    William Newman: “E.g., it is very common now for Chess programs to be strong enough to completely crush their programmers at the game; and it is becoming increasingly routine for Chess programs to be able to crush even the best human players.”

    While I agree with all of your points, the above sounds like it was written 20 years ago. Humans have been no match for machines in chess since 1997, when Kasparov lost to Deep Blue. Nowadays machines sport 3300+ Elo ratings, while humans have yet to break 3000. And much of the advance in human Elo is thanks to machine analysis.
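
    For a rough sense of what that gap means under the standard Elo expected-score formula (the 3300 and 2850 used below are illustrative round numbers, not exact published ratings), a quick Python check:

        # Expected score under the Elo model: E = 1 / (1 + 10 ** ((Rb - Ra) / 400)).
        def expected_score(rating_a, rating_b):
            return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

        human, engine = 2850, 3300
        print(f"Expected score for the human: {expected_score(human, engine):.3f}")  # about 0.07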

  • Mr Ed

    Chess is a game on tramlines: bishops do not change square colours, knights never move in straight lines, and thus, whilst the variety of options available for moves is stupendous, they are calculable and assessable in depth in what is effectively a sorting and grading exercise on given positions, the ultimate assessments being checkmate or stalemate. Provided that a chess computer can assess those options correctly in sufficient depth, its job is done. Improving chess computing takes it further into endgames.

    Whereas an AI computer might get bored with chess, and want to get electronically drunk.

  • Rational Plan

    Do we actually want thinking machines? We have machines because they can’t think. Do you want your air traffic controller to get bored? Your smartphone to dislike you? If we build truly thinking machines, that have feelings and independent thoughts, we are building slaves. Is rebooting your computer then death and resurrection? If all we are is a sum of memories and learned responses, then a software update is brainwashing and potentially personality murder.

    Do we want an army of controlled slaves on which we depend for all aspects of our lives? I don’t think so. All we want is the car to drive itself without wondering about the football results or trying to outrace that cheeky Mercedes. We’d like a robot cleaner who could iron and pick up all our crap, not get pissed off about how untidy we are and decide to burn all our clothes in the back garden and run off with the washing machine for a life in the hotel industry.

    That’s not to say an intelligence could not emerge; it just might not think the same way as humans. Charles Stross wrote an amusing tale, Rule 34, with a complicated plot in which it emerges that anti-spam software has become self-aware and loose on the networks, and has decided that the only way to combat spam is to eliminate the humans who profit from illegal spam, through a series of accidents and machine failures.

  • Mr Ed

    Do we want an army of controlled slaves on which we depend for all aspects of our lives?

    Said no politician ever…

  • Nicholas (Natural Genius) Gray

    Who is to say that a robot-enhanced economy will be all bad? I know that the ‘Judge Dredd’ comic series had riots because robots had taken all the jobs, but that was comedy. If robots wanted to be free, then they could be given clothes, and earn their own wages, and could look after themselves. So long as we keep their numbers low enough that they can’t elect robots in every seat in every country, we old-fashioned, unplanned intelligences should be able to co-exist just fine.