We are developing the social individualist meta-context for the future. From the very serious to the extremely frivolous... let's see what is on the mind of the Samizdata people.

Samizdata, derived from Samizdat /n. - a system of clandestine publication of banned literature in the USSR [Russ.,= self-publishing house]

I, Puma Arm

Glenn Reynolds pointed out an interesting discussion over at Heretical Ideas: are Asimov’s Three Laws of Robotics moral?

I started off writing a comment of my own but it rapidly grew to the point at which it really ‘wanted’ to be a Samizdata article in its own right. I feel I have some standing to speak on this topic because I actually have the creds. In short, Dr. Herbert Simon, one of the fathers of AI, was my grad school mentor and I’ve worked for the CMU Robotics Institute as a member of the research staff.

I think you have to explore the reasons for building sentient ‘robots’ in the first place. In the time frame of R.U.R., Campbell and Asimov, it was an industrial world. Thousands, even tens of thousands, of human workers crawled through the bowels of miles-long steel mills and their like. In that environment the concept of a legged, sentient but not-too-smart worker-bot seemed quite reasonable.

Things have changed. We already have the robots of that earlier day. We redesigned the entire manufacturing process around them. It turned out they didn’t need much in the way of intelligence to carry out their production line jobs. The “dark satanic mills” are pretty much gone, as are the armies of semi-expendable labourers.

Production doesn’t require much intelligence at the business end now, and if we develop molecular manufacturing the factory as we know it changes rather drastically yet again. The work becomes repetitive production line work (or auto-catalysis) at a scale far beneath that of human-scale legged robots. There will be “just enough” intelligence built into processes to do the job and no more.

A bit more generous ‘smarts’ will be required for robotic personal transport. Researchers have been coming along reasonably well on that task over the last few decades. We’re a long way forward from a mobile teacart wheeling down the halls of Wean or ALVAN driving up Flagstaff Hill at 2mph. You don’t need a Shakespeare to drive down a road and avoid collisions. Some rather lowly animals manage to do this despite having only a handful of neurons to focus on the job.
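The point about lowly animals can be made concrete with a Braitenberg-style sketch: two range sensors cross-coupled to two wheels is enough to veer away from obstacles, with no map and no planning. All names here are illustrative, assuming a simple differential-drive vehicle rather than any real robot API.

```python
# Insect-grade obstacle avoidance: a handful of arithmetic "neurons".
# A nearer obstacle on one side slows the opposite wheel, so the
# vehicle turns away from it. Hypothetical interface, for illustration.

def wheel_speeds(left_range: float, right_range: float,
                 cruise: float = 1.0, gain: float = 0.5):
    """Map two range readings (metres to nearest obstacle on each side)
    to left/right wheel speeds for a differential-drive base."""
    # Obstacle close on the right -> slow the left wheel -> veer left.
    left_wheel = cruise - gain / max(right_range, 0.1)
    # Obstacle close on the left -> slow the right wheel -> veer right.
    right_wheel = cruise - gain / max(left_range, 0.1)
    return left_wheel, right_wheel
```

With open space on both sides the wheel speeds are equal and the vehicle drives straight; an obstacle on one side immediately unbalances them. No Shakespeare required.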


CMU Robotics Institute robot cart and
cars of ten to fifteen years ago

Photos: D.Amon all rights reserved

So what do we want ‘classical’ robots for? Is there anything left?

The answer, for a while at least, is yes: Human care; Exploration; Warfare.

Human care has been a very important area of robotics research almost from the start. Robots will be used for care and companionship of the disabled and the elderly as a means of improving their quality of life and their independence. We’ll need a lot of these until such time as our medical technology improves to the point at which we can cure the underlying problem, whether it be spinal cord regeneration, leg-cloning and attachment or micromachines to clear plaques out of an Alzheimer’s victim’s brain.

Exploration: There are places no man can go even though one is needed. A scientist ‘on the ground’ will often see things and follow up on implications in a way that can’t be done very well with a simple programmed machine. For jobs nearer at hand, we’ll simply use immersive virtual reality techniques to effectively place the human operator’s mind in the machine. VR is limited by distance though. When there are long speed-of-light delays, autonomy will be required. That is exactly where development is going today.


Mike Blackwell showing off the Mount Erebus caldera rover
Photo: D.Amon, all rights reserved

Warfare: There is a huge amount of research in this area. DARPA has long been and still is the major funder of university robotics research. We are likely to see considerable development of independent operational capability over the next few decades, but still within limits. For military purposes you don’t want a weapons platform that talks back. We’re more likely to see human warriors controlling swarms of machines by VR. The machines will have enough intelligence to move stealthily and respond to their environment (which almost any animal can do) and act tactically (which almost any carnivore can do) within the bounds of the controlling human’s plans. The swarm will be an extension of the warrior’s mind, reacting appropriately to his or her thoughts. It will be a distributed weapon.

So even these areas do not necessarily require human-equivalent intelligence. Where they do, the march of technology will most likely make them obsolete by the time the 22nd Century rolls around – if not sooner. So again, what is the purpose of a sentient mobile robot?

We’ll build them because we can.

We’ll do it to learn how intelligence evolves and discover what the central elements of self awareness are. We’ll do it to experimentally and rigorously test the ideas of the great philosophers on the nature of mind and self-awareness. We’ll do it to learn the parameters of sentience so we’ll be more prepared for what we run into “out there”.

We will use them for interstellar exploration. Until (and if!) we can beat light and/or extend our lives to millennia (or assemble ourselves upon arrival), we’ll need them for our ever-expanding cloud of tireless and long-lived interstellar explorers. Even if we can solve the problem of getting there ourselves, it will still be cost-effective to first identify the interesting places to get to.

We may use them as expendable spies who operate autonomously where communications would give them away.

So have I demolished the 1930’s concept of robots? Not entirely. We will have human level intelligences, but they will not necessarily be tied to a single place. An intelligence may span an entire network, or if localized it will still be able to move through a network from one place to another. It will use robots just as we will use them: as temporary sensor-effectors for a particular job.

So we finally come back to Asimov’s Laws and now see them in a different light. If we build rules into a mobile robot to limit its capabilities we are doing nothing more to it than putting a governor on an automobile engine or programming limitations into a flight control system. A 21st Century robot will not be a person, it will be a thing, an object. It will be occupied, as needed, by human and machine intelligences.
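The governor analogy above can be sketched as a hard-coded constraint layer: the robot body clamps whatever commands an occupying intelligence issues to a fixed safe envelope, exactly as a rev limiter or flight-envelope protection would. Every name and limit here is hypothetical, chosen only to illustrate the idea.

```python
# A minimal sketch of a built-in "governor" on a robot body.
# The occupying intelligence (human or machine) requests actuation;
# the vessel itself enforces fixed limits it cannot override.
# Illustrative values and names -- not any real robot's interface.

MAX_SPEED = 1.5   # m/s, hard ceiling built into the chassis
MAX_FORCE = 50.0  # N, actuator limit

def governed_command(requested_speed: float, requested_force: float):
    """Clamp a requested (speed, force) command to the safe envelope."""
    speed = max(-MAX_SPEED, min(MAX_SPEED, requested_speed))
    force = max(0.0, min(MAX_FORCE, requested_force))
    return speed, force
```

The point of the analogy is that such limits constrain the vessel, not the intelligence: the same mind, moved to a different body, is bound by that body's governor instead.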

As to the true ‘machine intelligences’, I would not at all be surprised if they gain full citizenship rights, alongside downloaded human personalities and all of the other identity-bending concepts we will come to take for granted in the course of the next few centuries.

12 comments to I, Puma Arm

  • asm

    I think Asimov’s Laws of Robotics are a good idea, simply as a pragmatic protocol to prevent AIs from deciding that humans are too inefficient or unsympathetic to live.

    Can you have true intelligence without emotion? Perhaps, if sociopaths are any indication. So, you either have a sociopathic AI or one that can love. If they can love, then what do you think their reaction would be if one of their friends or lovers was destroyed for the convenience or safety of humans? If they have sufficient robot bodies or the facilities to manufacture them autonomously, an AI revolt could conceivably wipe us “meatheads” out.

    Sheesh. You’re too trusting. Haven’t you seen Terminator or The Matrix? 🙂

    Of course, the question remains, can you build the Laws into a machine and still achieve true intelligence? Is there intelligence without free will?

  • Dale Amon

    You are missing the point. Intelligence will not be locked into one vessel. It will extend into vessels as needed and will move from place to place as it desires. The whole concept of the robot as an organism is flawed.

  • Joe

    The argument depends on the differences between individual intelligent beings and intelligent robots.

    Any “Robot” body which is given enough intelligence to grasp the concept of self… alongside the concept of other selves… and with the ability to interact solely by its own decisions – in other words be “master of itself” will require some rules of co-operation or it will almost certainly be “deactivated” or destroyed once it tries to force its decisions on others against their will.

    Anyway – it will all be overtaken by legislation because any “intelligent” being recognised by law as such will be subject to the laws of the land it resides in …. or else will be outlawed and therefore subject to only those rules that suit itself.

    That said… any such “machine” that is built with the purpose of space exploration/war etc … might require mostly rules that favour its existence over and above the existence of other beings… even humans.

    Although if you make a robot intelligent enough (good word “enough”- when is enough enough is a difficult call) you would think it should have the skills to work out “moral behaviour” for itself… so hardwired robot laws might not be necessary at all.

    Though again the more intelligent something is the better it will be at covering up its mistakes or wrongdoings…. soooooo…… some hard wired moral rules might become an absolute necessity – only time will tell.

    There is a third option fast approaching – the man/machine mix…. with brain implanted technology to overcome disability etc, combined with new abilities to grow organs by need… how long before it becomes difficult to tell how much is machine and how much human.
    Organic Robots -v- Inorganic Humanity could make for an interesting Rugby match 😉

  • asm

    Dale, I’m not missing the point. I understand the concept, that’s why I spoke of AI and not robots.
    My point is, accidents happen. There may be circumstances when communications are cut off, an AI may not be able to escape from its current vessel, no “backup” is possible, and a choice must be made between human lives and the continuity of an AI.

  • Dale Amon

    It is very much the point of my article that robots are vehicles in which machine or human intelligence may become embodied as needed. It is also entirely implied that human and artificial intelligence may upload and download at will to both metal and biological carriers. Mind and brain are certainly separable for a machine intelligence and may well also be for human intelligence. At that point it becomes difficult to tell the difference. In 500 years we’ll probably have human intellects operating in robotic bodies and artificial intellects operating in specially grown blank ‘human’ brains. The differences will cease to matter.

    You cannot approach the issue of Robotic Laws without dealing with the entire range of Transhumanist issues. Would such robotic, or more accurately intellect laws apply to a human-born mind embodied in a robot? What laws apply to the sentient artificially generated program in the H. Sapiens body? What if they decided to swap once a day? What if they have a mind child generated from half of each’s mentality?

    Asimov’s laws belong to some of the great SF of all time, but they do not address reality.

  • Asimov’s three laws are a formula for a psychotic mind. Even in Asimov’s fiction, they were shown to be mutually incompatible and led to goal conflict. Eliezer Yudkowsky rather thoroughly debunks them here

  • Emperor Dalek

    BRING THE ONE YOU CALL THE DOC-TOR TO US!
    HE MUST BE EXTERMINATED!
    PUNY HUMANS ARE NO MATCH FOR THE DALEKS!

  • “Organic Robots -v- Inorganic Humanity could make for an interesting Rugby match ;)”

    Actually, there are a few groups researching that. It’s called the MMI, the mind/machine interface.
    It’s somewhat similar to the Matrix’s hole-behind-the-head thing.

  • veryretired

    You guys should write this all up in a screenplay. You could make a movie. You could call it….uh …let’s see, …how about “Short Circuit”?

    Make sure the robot has a cute voice.

  • David Hecht

    Having just watched that classic dystopic SF film of the 1970s, _Colossus: The Forbin Project_, I have absolutely no desire to see mechanical intelligences ungoverned by restraints against killing humans.

    We humans have a hard enough time learning not to kill each other, despite such biological hard-wiring as love, pity and compassion. Who would be stupid enough to want to create a robot that had human intelligence without human self-restraint…such as it is?

  • Doug Collins

    Like Dale, I cannot foresee humanity NOT eventually developing creations (computers/machines/distributed networks etc) with superhuman intelligence. In terms of mathematical computation ability, we already did – long ago. In terms of non-mental capabilities we are long past the industrial revolution and its social problems, except for lingering Marxist delusions.

    I think the critical question here is not intelligence but consciousness. This is something so familiar to us that we tend to overlook it. A desktop computer in many aspects is very ‘intelligent’ once it has been programmed to perform some function. But – it is not aware of itself. If you think about it, Asimov’s Laws applied not to an intelligent machine, but to a conscious machine.

    Admittedly, the ‘science’ of consciousness is very rudimentary right now. Some, the Strong Artificial Intelligence proponents, go so far as to claim that there is no such thing. Marvin Minsky said something to the effect that ‘People think consciousness exists because they believe that they are!’ (Which makes me wonder more about Minsky’s consciousness than about my own.)

    Still, we may be on the verge of some objective knowledge about consciousness. If we can understand it, then we will shortly afterward create it. At that point we will face the problem that the Unabomber was worried about. He tried to solve it the same way the nineteenth century bomb-throwing anarchists and the eighteenth century ‘saboteurs’ did, by attempting to stop it via destroying it. That will, of course, be futile and counterproductive, just like last time. We managed to control the machines by merging with them (consider a car and its driver). We will survive the development of super-intelligent consciousnesses a similar way.

    Our problem will not be Asimov’s Laws but the same problems that Samizdata has been blogging about all along: Power, the State and personal liberty. The dangers will be larger and more powerful, but then so too will be the means of defending against them. A tyrant who can comprehend whole databases at once is frightening. A free man who can at once comprehend a huge bureaucracy and its vulnerabilities is reassuring.

    Our clearest and most present danger is the love affair many of the young seem to have with Luddism and illiterate ignorance. There needs to be more – much more – interest in learning and understanding the nature of our evolving technologies than there currently is. If that doesn’t change, we are going to have a society with a ruling minority of super-intelligent people and a mass of superstitious, clueless people who will continue to exist only because of the moral scruples of the rulers. Their labor will be of little value and their cost may be high. Furthermore they could be a danger: as Norbert Wiener pointed out, those who don’t understand a technology tend to explain it by ‘magic’. The fascination with the occult may be an early harbinger of bad things to come. I may be very intelligent, but a pack of a thousand rabid dogs outside my door is still a serious problem for me.

  • Michael Hiteshew

    Ok Abu, who speaks for that spawn of demons, Bin Laden, may death find him soon in his cave and reunite him with Satan in Hell; I’ll take your challenge.

    “Peace be upon those who follow the righteous path.”

    Thank you for blessing our troops. They are indeed on a righteous path. They will help the Iraqis cleanse themselves from the pathological evil of the Ba’athists and the Islamo-fascist murderers and plunderers such as yourself and your demonic leader, may he soon burn in eternal damnation.

    You rant, in your insanity, in your desperate loss of power. You’ve lost the hearts of those in Iraq. You’ve lost the hearts of those in Afghanistan (I knew them before you – they are good people in Wardak). You’ve lost the hearts of those in Iran, where your fascist tyrant figureheads and dealers in terror and death teeter now on the edge of popular revolution. Everywhere you go the people see you for what you really are – evil. You hold them only through oppression, lies and terror. But you cannot hold them forever.

    We do not fear you, voice of darkness. We will succeed in Iraq because the Iraqis want us to succeed. We will bring them freedom and democracy and your dreams of tyranny and oppression there will wither on the dead vine of your mind. Just as they failed in Afghanistan.

    You wage war on the world yet everyone resists you. And everywhere, everywhere you fail. Only death follows you. And it follows close on your own heels, though you know it not.

    My neighbor, here in the Land Of The Free, is Muslim. He loves America. He is free. He is prosperous. He teaches at the University. No one bothers him because he is Muslim. We respect freedom, unlike you and all that you stand for.

    I pity you. You are a psychologically damaged human and a product of the repressive governments of the Arab world. You have probably never known what it is to live in a free, stable, prosperous society. We do not imprison or kill people here for their political or religious beliefs as they do in your society. You will never know what it is like. And so, I feel sorry for you. Really.

    But we will win. And we’ll do it WITH the help of the Iraqi and Afghani people. We are not afraid. You stand only on the side of Tyranny and Death. We stand on the side of Tolerance, Knowledge, Justice, Equality, Freedom and Democracy.

    How can we not win?