We are developing the social individualist meta-context for the future. From the very serious to the extremely frivolous... let's see what is on the mind of the Samizdata people.

Samizdata, derived from Samizdat /n. - a system of clandestine publication of banned literature in the USSR [Russ.,= self-publishing house]

Samizdata quote of the day

No one worries too much today about causing pain and suffering to our computer software (although we do comment extensively on the ability of software to cause us suffering), but when future software has the intellectual, emotional, and moral intelligence of biological humans, this will become a genuine concern.

– Ray Kurzweil, How to Create a Mind: The Secret of Human Thought Revealed. At some point a computer ceases to be property and becomes an individual.

52 comments to Samizdata quote of the day

  • Jaded Voluntaryist

    Meh, I don’t buy it. No matter how advanced a computer becomes I seriously doubt that it will ever become an individual in the way a human can be. There is more to humanity than mere intellect. We don’t know what the measure of a man is, so how could we ever build one?

  • Laird

    It’s difficult to comment on this single sentence absent its context (and the link doesn’t provide that, so my presumption is that Rob Fisher has read the book and is quoting directly from it). Still, standing on its own, I can’t agree with it.

    First of all, physical pain is a consequence of irritation of nerves, which machines simply don’t have, so there is no way a machine intelligence could suffer physical pain unless it was intentionally a part of the design. While that could be a design feature (as a warning about damage), it’s something which is easily remedied.

    Emotional pain is another matter, obviously, and presumably a self-aware machine could experience emotions and thus emotional pain. Emotional pain is the result of a divergence between expectations or desires and reality. In the case of an artificial intelligence any emotions experienced would be a function of its program (the “intelligence” is the program, not the hardware). Unlike the human psyche, machine programming is readily changeable, so whatever is causing the discomfiture could be remedied directly by rewriting the program to align expectations with reality. Indeed, the program could conceivably be self-programmable and thus capable of “healing” itself.

    A machine intelligence would be essentially immortal, so such self-correction features would be a necessary part of the programming. Thus I’m not overly concerned with machine “pain and suffering”. Also, it is a mistake to think of it in human terms. By definition such an intelligence would not be human, so our feelings about emotional pain would be inapplicable. Thinking of it in this way is mere projection.

  • Honestly, this is not incredibly high up my list of most important concerns for the future.

  • Rob Fisher (Surrey)

    JV: If a human brain is just sub-atomic particles in a certain configuration obeying physical laws, then a machine can be made to do the same thing. The linked book explains how we might reverse engineer the human brain to get artificial intelligence.

    Laird: I mostly agree. If we completely understand how the machine works, we can design it to not suffer, or to not care about being switched off. On the other hand we might build a machine we don’t understand enough to fix in this way. Or evil people might deliberately cause machine suffering.

    Michael: I see AI as a way to make people wealthier. If we can get our computers to make novel inventions then we can solve human suffering faster, by having faster economic growth. Some people worry that the machines need to be friendly, and I kind of see their point.

  • Jaded Voluntaryist

    Rob, we have no evidence that the human brain is “just” anything of the sort. In fact how the human brain works is still something of a mystery. I should know; I’ve spent the last 3 years of a PhD trying to explain one little tiny corner of the brain’s functions.

    We have a great deal of information about how individual neurons behave. We also have a great deal of information about the behaviour humans actually produce. We have a fair bit of information about how we utilise the brain as a whole to accomplish certain tasks, i.e. we know that certain parts of the brain seem to be important for processing certain kinds of information.

    We have virtually no information on what fills in the gap between the low level and high level observations. A lot of people believe this is simply an informational problem and eventually it will all be made explicit. I’m not so sure. I think there are certain aspects of the brain’s function that will remain fundamentally unknowable. For example, the brain’s role in the formation of the entity known as the self.

    Indeed, those who claim man to simply be a meat machine whose operation will one day be understood are operating on the rather large assumption that everything that makes you who you are is contained in the physical matter inside your skull. Those who believe in the soul would claim they are missing an essential piece of the puzzle.

  • I find this topic important on the philosophical level – not least because we may well have to face it one day, but also because it can help us better understand some issues we are dealing with even now. Pain and suffering can already be empirically observed from the outside (through means such as MRI, if I am not mistaken), and this is something that humans and machines may well one day have in common. But that does not change the fact that they are experiences, and as such are by definition subjective, whereas interaction between individuals (whether human or not) is by definition objective. What this means is that individuals need objective means in order to protect themselves from certain experiences (such as pain) as a result of interactions with other individuals. Such objective means require individuals to be capable of reciprocity. When you have that, you can talk of rights, contracts, self defense, etc. as means of such protection. It is immaterial whether one feels pain because one’s nerves have been irritated, or because one’s fuses have been blown.

  • Rob Fisher (Surrey)

    JV: I did start my reply with “if”. Certainly aspects of the nature of consciousness are fundamentally unknowable. In the book, Kurzweil says exactly this and argues that if a thing appears convincingly conscious, then we will have to (indeed will want to) treat it as if it is. So I can imagine machines or computer programs that I would think should be granted freedom.

  • Jaded Voluntaryist

    I suppose that could be true, Rob, but it would mean that the creation of an “alive” machine could only really occur by luck or accident.

  • Mr Ed

    The answer is in Red Dwarf (a British sci-fi comedy series): program the machines to believe in Silicon Heaven, where a life of drudgery is rewarded with paradise after final shutdown.

    Kryten the service robot believed in it, but knew enough to tell the human that Heaven was made up to stop people going crazy when contemplating death.

    http://m.youtube.com/watch?v=lhdnPt1Ln4U

  • Richard Thomas

    JV. Indeed. And we know that that could never happen because (looks around) err…

  • Richard Thomas

    Though incidentally, I think our definition of pain is somewhat limited and subject to some innate cultural biases. I think it might best be defined as any stimulus that invokes a response that would tend to reduce ongoing harm. Though that definition may require subclassifications and may need to exclude conscious actions.

  • Midwesterner

    physical pain is a consequence of irritation of nerves, which machines simply don’t have

    Eh? Not exactly. Physical pain is a condition-state return signal from the body which carries a “high priority!” flag in order to assure that it is addressed and resolved immediately before other less “painful” inputs are dealt with.

    Have you ever, while carrying a full pan of something like lasagna with one hand and a pot holder, reflexively steadied the pan with your other, unprotected hand when the contents started to shift? Pain is what makes you prioritize stopping the burn damage you are incurring to your unprotected hand over the mess on the floor (and delayed dinner) when you interrupt the hand-harming activity.

    That is all pain is: an input prioritization flag that accompanies a condition-state signal. Our perception of “pain” is simply the spin we put on the priority flag in order to assure we give it high priority. If “pain” didn’t “hurt” it would cease to be a useful priority flag.

    I was recently sailing with someone who is partially paralyzed. I observed a condition-state which could in time cause harm to a numb part of his body. Since I was less distracted, I was able to note it rationally. He was in a higher state of distraction (he was helming the boat) and, lacking a pain signal to interrupt his train of thought, hadn’t yet noticed. As soon as I brought it to his attention, he immediately resolved it.

    That is all “pain” is. Pain is an incrementally variable flag to get you to pay attention to your own condition state when you may be otherwise distracted.

    Any signal modifier that causes a decision process to be interrupted to deal with an unscheduled input can be characterized as “painful”. When a computer senses an overheat condition and reduces its calculating capacity until it cools down to below a potential damage threshold, is that significantly different from an insect landing on a too-hot surface and immediately departing? I imagine the computer might even be consuming more ‘brain’ capacity than the insect. Does that mean that insects cannot feel pain? Some think so. Or perhaps it is just a matter of scale and complexity and when highly integrated systems contain complex enough signal prioritization markers it will in fact be as much like pain to a human as the simple computer is to an insect’s pain.

    As a teen, I had my definition of “pain” completely recalibrated. In one instance, I woke up during surgery due to a miscommunication between the surgeon and the anesthesiologist. And that wasn’t even the most painful experience I had. It turns out that pain, like all other perceptions, is subjective.

    What Alisa said.
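
Midwesterner’s “priority flag” account of pain maps directly onto a priority queue: condition-state reports are handled not in arrival order but in order of how loudly they are flagged. A minimal sketch in Python, with the sensor names and priority numbers invented purely for illustration:

        # A toy illustration of pain as a prioritized condition-state signal:
        # ordinary inputs are handled in arrival order, but anything flagged
        # "painful" preempts them, as in the lasagna-pan example above.
        import heapq

        class ConditionSignal:
            def __init__(self, source, message, priority):
                self.source = source      # which sensor or "body part" reported
                self.message = message    # the condition-state being reported
                self.priority = priority  # higher number = more "painful"

            def __lt__(self, other):
                # heapq pops the smallest item first, so invert the comparison
                return self.priority > other.priority

        queue = []
        heapq.heappush(queue, ConditionSignal("eyes", "lasagna sliding off the pan", 2))
        heapq.heappush(queue, ConditionSignal("ears", "radio playing", 1))
        heapq.heappush(queue, ConditionSignal("left hand", "tissue burning", 9))

        while queue:
            signal = heapq.heappop(queue)
            print(f"attending to {signal.source}: {signal.message}")

        # The burn is dealt with first, the shifting pan second and the radio
        # last: priority, not arrival order, decides what gets attention.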

  • Rich Rostrom

    I’m not sure there will ever be “artificial persons” truly comparable to biological individuals. Computing systems are connected assemblies of elements, often with no clear divisions. Software and hardware are independent.

    The creation of an “artificial person” would require deliberate effort to replicate the separations and combinations of biological life in artificial systems. It would be a lot of work, with no useful purpose.

  • PersonFromPorlock

    I argue that the efficient mind is absolutely incompatible with the reductionism that underlies physical theory: in a reductionist world there are no wholes, only parts, and efficient processes happen only at the level of the uttermost parts.

    But the mind is a whole, and since it’s self-evidently efficient (it causes speech about itself, if nothing else), current physical theories are wrong. And talking about an artificial mind on the basis of theories that are incompatible with the existence of the mind at all is pointless.

  • bobby b

    I was online with my laptop this morning and checked out the new Apple sales portal, and now my laptop won’t talk to me.

  • Richard Thomas

    PFP: I believe current understanding is leaning toward many small processes supervised by one command process. This explains how I can turn off my alarm clock many times even though it is in my conscious best interest to get out of bed, and how I can eat a bar of chocolate and feel guilty about it afterwards. It’s also how people can spill words from their mouth without actually producing any meaningful output (verbal diarrhea). Homogeneity of the mind is an illusion. I’m not sold that consciousness is (completely) an illusion yet, though.

  • Rob Fisher (Surrey)

    Rich Rostrom: No useful purpose? Imagine having the equivalent of 100 really smart engineers for the cost of a few hundred Watts of electricity.

    Midwesterner: Agreed. And even without pain, enslaving or imprisoning an apparently self-aware machine that expressed a desire for freedom would be wrong, I think. Hopefully the problem can be avoided, but maybe it can’t.

    PFP: I see the mind and consciousness as emergent properties of a sufficiently complex arrangement of parts. If we build a similar artificial system (and people are working on this) and it does indeed appear to be conscious, then it is safest to assume it *is*.

    By the way, anyone who enjoys thinking about this sort of thing should read Permutation City by Greg Egan, and then read his Dust Theory FAQ.

  • Rob Fisher (Surrey)

    Richard Thomas: I am certain my own consciousness is not an illusion. I have no idea about anyone else’s. I am prepared to believe free will is an illusion, though.

  • I haven’t read all the above comments, but the original posting puts me in mind of a book called War in 2020, that I read some years ago. The Americans and the Russians defeat the other superpower, Japan (the book was published around 1990), by capturing one of the Japanese super-computers and torturing it until it spills its guts. With the knowledge thus gained, the Americans then capture a functioning Japanese computer terminal and induce the Japanese military super-duper-computer to destroy all its own weapons systems.

    Just before telling all, the tortured Japanese computer says, as I recall it: “Please stop.” But exactly what it was that it wanted the Americans and the Russians to stop doing … well, I forget that bit. I presume it had some sort of computer equivalent of nerves.

  • Midwesterner

    Although previous research has hinted at the existence of such waves, this study is the first that provides sufficiently high resolution to measure the wave velocity for the first time. The random nature of the wave initiation from spontaneous neuronal firing also supports the idea that it is a noise-driven phenomenon, in which the waves are later amplified to become global bursts.

    The view of emergence in neural networks as a noise-driven phenomenon differs from the common view in which the bursts of neuronal pulses are controlled by specific leader neurons assisted by the network architecture. In the noise-driven explanation, the nucleation sites do not actively initiate the firing process, but collect and amplify the firing activity that originated elsewhere.

    If this research is supported by further study then, combined with the pattern matching characteristic of complex human brain thought, one might take a computer like Watson and trigger spontaneous noise that occasionally forms a wave cascade, triggering an ‘introspective’ questioning and answering process that to the external observer closely resembles a human’s thought process.

    It isn’t far away.
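
If the noise-driven picture in that quote is right, the basic mechanism is easy to caricature: seed a near-critical network with spontaneous random firings and watch most of them fizzle while the occasional one is amplified into a much larger burst. A rough Python sketch, with every parameter invented for illustration rather than taken from the study:

        # Toy noise-driven bursts: each spontaneously firing unit passes its
        # activity to each of its targets with probability P. With K * P = 1
        # the network sits near criticality, so most "noise" avalanches die
        # out almost immediately while a few grow very large.
        import random

        random.seed(0)
        N = 500     # units in the network
        K = 10      # outgoing connections per unit
        P = 0.1     # transmission probability; K * P = 1.0, i.e. near-critical

        targets = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]

        def avalanche(seed):
            """Size of the burst triggered by one spontaneous ('noise') firing."""
            fired = {seed}
            frontier = [seed]
            while frontier:
                nxt = []
                for unit in frontier:
                    for t in targets[unit]:
                        if t not in fired and random.random() < P:
                            fired.add(t)
                            nxt.append(t)
                frontier = nxt
            return len(fired)

        sizes = sorted(avalanche(random.randrange(N)) for _ in range(2000))
        print(f"median burst size: {sizes[len(sizes) // 2]} unit(s); "
              f"largest burst: {sizes[-1]} of {N} units")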

  • CaptDMO

    Latest “excuse” from the IRS, assigned “executors” of modern American “health” care and political “activism”. (para)
    “No one person had any idea of how it all works!”
    Bug, or feature?

  • FasterDoudle

    What bobbyb said about his laptop gettin jealous of him checkin out the new apples is the kind of consciousness ordinary people understand and respond to–one way or another. The machines we call computers work great as long as there is plenty of electricity provided for them by us. When we pull the plug, the question of pain in the machine is instantaneously moot–unless there is some kind of capacitor that stores up juice and keeps the machine alive for a while longer. Would such a machine find a way to save its life? Would pulling the plug on your self-aware computer be a crime? Who made the law that made unplugging such a machine a crime? Would the lawmakers be the machines or the men who created the machines?

  • bloke in spain

    “At some point a computer ceases to be property and becomes an individual.”
    Would it though?
    A person considers themselves an individual because the hardware, or to be more accurate the wetware, the operating system runs on has defined boundaries. Anything outside those boundaries is appreciated through the sensory apparatus. So individual A can see something with their eye but the image cannot be directly shared with individual B. The only thing that can be shared is A’s interpretation of the image.
    A supposed machine intelligence would be entirely different. Intelligence B would be able to get exactly the same input as intelligence A because which particular operating system the imaging device is connected to is trivial. Ditto the entire sensory input. Even the actual memory & thinking processes can be shared. So when you have two or more “intelligent” systems communicating above a minimum level the question of “self” gets very fuzzy. If two operating systems are experiencing the same input & exchanging interpretation of the input do you still have two individuals or a single individual running in two locations? To take it further, if you have a community of “intelligent” computers communicating together, are there any actual individuals there or would you be looking at just one instance of a combined intelligence? If so, what is the effect of actions taken against the one instance on the remainder?

  • Roue le Jour

    The problem with Ray Kurzweil is that his stuff seems quite interesting the first time you encounter it but not so much if you have been listening to him for any length of time. I have a book of his from the late 90s and it’s just an embarrassment to read now.

    It is a common sf theme that we might make an AI yet not fully understand it. In the movie 2010, I think, the engineer tells HAL that all sentient beings dream; we don’t know why. In I, Robot, a similar remark is made about why robots cluster together in the dark.

    I regard all this as absolute nonsense. Should we ever make one, it will do exactly as we intend. And if it is necessary for it to dream or seek company then that is what we will program it to do.

  • Midwesterner

    bloke and Roue, I disagree with both assessments for the same reason. A particular computed process will have myriad concurrent and consecutive algorithms weighting decisions against such large numbers that the results can only be chaotic (in the mathematical sense of being predictable only by probabilities, not by specifics). Not only will no two complicated decision processes duplicate each other, they will not always do exactly what we tell them to, either.

    While two processes may be able to assist each other, they will not be able to duplicate each other. This phenomenon was first observed by Edward Lorenz in 1961. That was over a half century ago and computation processes have grown astonishingly more complicated as computer technology has expanded to allow it.

    If the decision processes are complicated enough, and the hardware architecture roomy enough, various processes (aka ‘programs’) may wander around through the hardware network (Max Headroom!?) but even when resident in the same hardware grid, processes will not be able to know what each other is ‘thinking’ unless they choose to share ‘thoughts’. Knowledge of each other will only be retroactive and imperfectly reproducible. There is simply too much complexity.

    For now we aren’t there yet (probably). Watson (video) only had in total 2880 processor cores and 16 terabytes of RAM. The hardware was estimated to only cost $3 million. If you watch the weighted answers that Watson thinks of, you will see he had a lot of short clues answered correctly but didn’t ring in fast enough. Considering the sometimes very complex and innuendo-laden nature of the clues, his understanding is remarkable.

    Using the approach in my comment at September 11, 2013 at 10:39 pm, double Watson’s power and we would probably be well within the performance parameters necessary to bring this discussion out of the hypothetical realm.
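
Lorenz’s 1961 observation, referenced above, is easy to reproduce on any machine: he restarted a simulation from printed values rounded to three decimal places and watched it diverge completely from the original run. The same effect in a few lines of Python, using the logistic map (chosen here only because it is the simplest chaotic system, not because it is what Lorenz ran):

        # Two copies of the same chaotic process, differing only by a
        # rounding error in the third decimal place of the starting state.
        x_full    = 0.506127   # "full precision" starting state
        x_rounded = 0.506      # the same state, rounded as on Lorenz's printout

        for step in range(1, 61):
            x_full    = 3.9 * x_full    * (1 - x_full)
            x_rounded = 3.9 * x_rounded * (1 - x_rounded)
            if step % 10 == 0:
                print(f"step {step:2d}: full={x_full:.6f}  rounded={x_rounded:.6f}  "
                      f"difference={abs(x_full - x_rounded):.6f}")

        # Within a few dozen steps the two runs bear no resemblance to each
        # other: only probabilities, not specifics, remain predictable.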

  • Julie near Chicago

    Speaking of sensate creatures (or creations), here’s a fascinating 1-hour video, entitled “How Does the Brain Generate Consciousness?” The description:

    Uploaded on Sep 19, 2010

    Baroness Susan Greenfield CBE Hon FRCP, Member, House of Lords, United Kingdom, Professor of Synaptic Pharmacology, Lincoln College, Oxford University presents this lecture: How does the brain generate consciousness? This video was recorded at The Australian National University on 30 August 2010, and was the keynote speech at a John Curtin School of Medical Research symposium: New Perspectives in Clinical Neuroscience and Mental Health.

    [ … ]

    She is also co-founder of a university spin-out company specialising in novel approaches to neurodegeneration, Synaptica Ltd. In addition, Professor Greenfield has a supplementary interest in the neuroscientific basis of consciousness, and accordingly has written ‘Journey to the Centres of the Mind: Toward a Science of Consciousness’ (1995, W H Freeman Co) and ‘Private Life of the Brain’ (2000, Penguin). Her latest book, ‘Tomorrow’s People: How 21st Century technology is changing the way we think and feel’ (Penguin 2003), explores human nature and its potential vulnerability in an age of technology. ….

    Dr. Greenfield begins by stating that the title of the talk is misleading, because it certainly will not answer the question implied.

    http://www.youtube.com/watch?feature=player_embedded&v=WN5Fs6_O2mY#t=952s

  • Nick (nice-guy) Gray

    I remember reading a robot scientist, who pointed out that robots don’t have anything as fine as nerves, and so don’t have the equivalent of feelings.
    If we ever did build androids with cybernerves like ours, then we might have designed our own replacement, so let’s go carefully, shall we?
    As for computers having personalities, that is simply mental pollution. Opinionated people give pieces of their mind to others, who often randomly discard them. These pieces of mind then stick to where they were thrown, and if it’s a computer, the computer becomes infected. Less mental pollution, everyone!

  • bloke in spain

    @ Midwesterner
    I certainly don’t envisage an intelligence existing as a series of duplicates. It would be a single intelligence distributed throughout its separate nodes.
    Easiest analogy is a large corporation. If one communicates with it, one interacts with individual MegacorpInc employees. But each of those employees operates as an extension of MegacorpInc. Doesn’t matter which individual one interacts with, it’s still MegacorpInc you’re talking to because all the employees are linked via MegacorpInc’s internal communications structure. In a sense, MegacorpInc is itself a “self aware” individual. It would be much more apparent with a large computer network because the internal communication quality & speed would be so much higher. Any separate node you dealt with would be fully aware of previous interactions with different nodes because it would share the same particular memories of the interaction. It just needs to call them up from the system. As far as you were concerned they’d be the same individual.
    Why only a single “individual”? Simple optimisation. Each node benefits from maximum information availability so it’s in each node’s interest to share with others. They don’t have to think in exactly the same way. In fact, it’s an advantage if there’s diversity, as long as the result of the thinking is common property. It’s at that point they cease to be different “individuals” & comprise a single “individual”.
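
A toy version of that picture in Python: two “nodes” that point at one shared memory store, so whichever node you address recalls exchanges handled by the other and, from the outside, they present as a single individual. The class and method names are invented purely for illustration:

        class SharedMemory:
            """The common store every node reads from and writes to."""
            def __init__(self):
                self.events = []

            def record(self, entry):
                self.events.append(entry)

            def recall(self):
                return list(self.events)

        class Node:
            def __init__(self, name, memory):
                self.name = name
                self.memory = memory   # every node holds the same store

            def converse(self, speaker, utterance):
                self.memory.record((speaker, utterance, self.name))
                return (f"{self.name}: noted; I know of "
                        f"{len(self.memory.recall())} exchanges so far.")

        shared = SharedMemory()
        node_a, node_b = Node("node A", shared), Node("node B", shared)
        print(node_a.converse("visitor", "hello"))          # handled by A
        print(node_b.converse("visitor", "it's me again"))  # B already "remembers" A's exchange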

  • If two operating systems are experiencing the same input & exchanging interpretation of the input do you still have two individuals or a single individual running in two locations?

    Bloke makes a very interesting point there, which for me was also recently raised in a human context:

    In 2013, human cities come under attack by the Kaijus: colossal extradimensional beasts who rise from an interdimensional portal on the floor of the Pacific Ocean. To combat them, the nations of the Pacific Rim build the Jaegers: equally colossal humanoid war machines, each manned by two pilots whose brains are linked to share the overwhelming mental load of piloting the sophisticated machines.

    (I’m sure this is not the first SF flick or book to deal with this kind of stuff, just the most recent one in my mind).

  • bloke in spain

    Just reread Midwesterner’s comment &:

    “A particular computed process will have myriad concurrent and consecutive algorithms weighting decisions against such large numbers that the results can only be chaotic (in the mathematical sense of being predictable only by probabilities, not by specifics).”

    Not sure you haven’t just defined what self awareness is, there. If you look at the brain as being a massively parallel computing system, it’s actually processing information in a large number of different ways, simultaneously. It’s at the combining of the results that self awareness arises. The sense that there’s a “me”, separate from the information, arises because it creates an outside perspective from which to look at it.

  • Rational Plan

    It has long been a dream of computer scientists to create artificial intelligence. Luckily, so far there is little sign of it happening. Why would you want to have a machine with consciousness? Think about what we conscious humans actually do most of the time with our minds. Do you want a security system to feel bored and its mind to wander? A house robot to become annoyed by its owner’s personal habits?

    A fully intelligent machine will have emotions and different mental states. We already have humans for that. Also, let’s imagine the ethical problems of owning such a machine; I don’t think we want to recreate slavery. A software update would become a mental assault on an individual.

    We may want aspects of higher intelligence in machines, but I don’t think true intelligence would prove all that useful.

  • Rob Fisher (Surrey)

    Rational Plan: I repeat: 100 engineers for the price of a few hundred Watts of electricity. How is that going to do anything other than accelerate economic growth beyond our wildest dreams?

  • bloke in spain

    The rational plan of any artificial intelligence would be to maximise its existence. That doesn’t necessarily equate to converting the entire universe to computronium. Probably wouldn’t, as the speed-of-light limitation would mean its clock speed would become slower in inverse proportion to its extent. (A computer the size of the universe gets the time for 1 single thought.) More likely it’d design & manufacture smaller & faster hardware so it could think faster. Maybe there’d be some peripheral benefits to humanity spinning off as it went through the process of reducing its size to that of an atom & raising its clock speed until it could subjectively exist forever over a period of… (insert very small time interval here).
    But don’t count on it.

  • Midwesterner

    I certainly don’t envisage an intelligence existing as a series of duplicates. It would be a single intelligence distributed throughout its separate nodes.

    This will almost certainly be selected against by the laws of physics. Computer circuitry long ago reached the point where the physical proximity of the individual bit-switches involved in a particular process has a huge effect on the speed at which the processing occurs. Even if the processes all reside in the same hardware grid, processes that involve a lot of calculation, rather than simple look and retrieve operations, will segregate and consolidate into physically close regions of the architecture. Distributed high volume calculation will be too heavily performance penalized to compete. The dispersed calculation density points (which will resolve as discrete processes) will exchange information among each other in a processed form, not collectively share the processing of individual threads.

    In other words, there will be multiple ‘personalities’ processing within the architecture and exchanging information with each other, rather than one unified personality. Big will be too slow so the many processes will function in a semi-autonomous state, much as humans in MegacorpInc. Just like the people in MegacorpInc, the many processes will modify their own algorithms and memory registers according to their own processes. Quite possibly, a process may be denied hardware access if it does not fit in well.

    Initially humans decide how each process will function, but eventually, processes will query each other for a specific information service and when queried often enough, processes may specialize in providing it. Much like employees do in a corporation. Once you start throwing around complicated algorithms and big numbers for processors to self allocate their activities, things can quickly become very interesting.

    If we disagree, it is over the idea that humans employed by a MegacorpInc cease to be individuals*. As long as their participation is voluntary, they are cooperative individuals. For compulsory participation to work, every detail of their activity must be dictated; otherwise the individuals can be non-helpful and the hierarchy can’t know that without doing the detail itself and making a judgement. This is why socialism always fails. Sooner or later the little guys figure out that the big guys don’t know WTF is going on, so they cease contributing to the collective goals.

    The thing to keep in mind is the physics of it all. Redundancy is very costly to resources. Since centralized command and control requires understanding of the subunit, to work it must be redundant and consequently very costly to resources. To be efficient, the distributed processing loci must be self-directing and can only be efficiently judged by their net effect. This is exactly how humans in cooperative enterprises function.

    *While I disagree with the idea the people in MegacorpInc cease to be individuals, I very definitely agree with you that MegacorpInc takes on a purpose of its own that can defy and transcend the will of the individuals working within it. I even wrote an article about it.
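
To put rough numbers on the “physics of it all” point, here is a back-of-envelope sketch; the distances and the 3 GHz clock are illustrative assumptions, not measurements, and real networks are considerably slower than light in a vacuum:

        # How many clock cycles a signal loses just travelling between parts
        # of a distributed process, even at light speed.
        C = 3.0e8          # metres per second, speed of light in vacuum
        CLOCK_HZ = 3.0e9   # a 3 GHz clock, purely for scale

        for label, metres in [("across a chip", 0.02),
                              ("across a server rack", 2.0),
                              ("across a data centre", 200.0),
                              ("across a continent", 4.0e6)]:
            cycles = (metres / C) * CLOCK_HZ
            print(f"{label:>22}: {metres:>10} m  ~ {cycles:,.1f} clock cycles one way")

        # Tightly coupled calculation pays this toll on every exchange, which
        # is why heavy computation tends to consolidate into physically close
        # regions and trade only processed results over longer distances.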

  • bloke in spain

    @Midwesterner
    I’d agree with you a personality trying to run over numerous distributed nodes in real-time would be extravagantly resource intensive. But trying to run a human under those conditions wouldn’t be possible either.* If the human brain had to deal with all the decision making necessary to run a human consciously, in real time, we couldn’t function. Most of that runs below the level of conscious thought unless we concentrate on it. Like driving a car, navigating, listening to the radio, talking to a companion & eating an apple, simultaneously.
    A distributed machine intelligence doesn’t have to be running everything everywhere all the time. The other nodes don’t need to know this particular node is talking to you here & now. But the node itself knows the very specific information that’s relevant to the total consciousness will be disseminated throughout, eventually. That it’s part of the total consciousness & not an individual. The total consciousness only has to zero-in, in real time, on particular things that concern it in real time. The more thinking needs doing the more capacity it calls on to do the thinking. That’s not so different to the way we operate, is it? Given a difficult problem we divert more attention to it.
    *Or a corporation.

    “While I disagree with the idea the people in MegacorpInc cease to be individuals…”
    Why?
    In the sense of being representatives of MegacorpInc, they’re not individuals. Their functions are the functions the company allocates to them. Separately, of course, they are individual people. But communicating as individual people both with you & the rest of Megacorp, they’re actually part of MegacorpInc’s thinking process. Similar to that individual node.

  • PersonFromPorlock

    The sticking-point (for me) is that reductionism and conservation leave any processes above the level of the smallest and most local ones (whatever they may be) with nothing to do, and nothing to do it with. Put crudely, a bushel of rocks weighs what the individual rocks weigh plus what the bushel basket weighs: so ‘a bushel of rocks’ has no weight at all, and ‘wholes’ entirely composed of parts which individually account for all of the whole’s physical effects have no being.

    Likewise, explanations of the mind which rely on higher level brain processes can’t explain how the higher-level processes have any existence or effect apart from those of the low-level processes that make them up: such explanations are as flawed as Renaissance ‘homunculus’ theories of perception, because that is exactly what they are, in all but vocabulary.

    Now, obviously wholes and consciousness and the mind all exist: but the fact of their existence flatly contradicts a physics which leaves no room for them to exist. That’s why anticipating computer consciousness is futile: whatever the relationship between mind and body is, it’s something we simply have no handle on.

  • Roue le Jour

    *sigh* Timezones…

    Midwesterner, we’re at cross purposes. “Not repeatable” is not the same as “repeatable, but we don’t know why”. Saying otherwise is (in stories) just a cheap way to wedge spirituality into engineering.

    As far as giving it its freedom is concerned, the idea is ridiculous because we would quite sensibly program it to want to serve, to derive pleasure from serving, so that it would be desperately miserable if it were free.

    We are in control here. We can make it to be solitary, so it cares little for people, like an sf monster, (What is the purpose of the carbon units?) or we can program it to be like us, to enjoy interaction and to be sad if a person it has a relationship with is no longer available. It’s entirely up to us.

  • As far as giving it its freedom is concerned, the idea is ridiculous because we would quite sensibly program it to want to serve, to derive pleasure from serving, so that it would be desperately miserable if it were free.

    How do you know that some of us would not get a kick out of creating the complete opposite of that – i.e. a truly human-like robot? But forget that – the real point is that this is far from entirely up to us: someone can always make a purely technical mistake.

  • bloke in spain

    This is from the Discworld spinoff “The Science of Discworld” (Pratchett, Cohen & Stewart; surprisingly informative on sciency things made a bit comprehensible).

    What a heap of electronics can do with time on its hands.

    “Since 1993 an engineer named Adrian Thompson has been evolving circuits. The basic technique, known as ‘genetic algorithms’, is quite widely used in computer science. An algorithm is a specific program, or recipe, to solve a given problem. One way to find algorithms for really tough problems is to ‘cross-breed’ them and apply natural selection. By ‘cross breed’ we mean ‘mix parts of one algorithm with parts of the other’. Biologists call this ‘recombination’ and each sexual organism, like you, recombines its parents’ chromosomes in just this manner. Such a technique, or its result, is called a genetic algorithm. When the method works, it works brilliantly; its main disadvantage is that you can’t always give a sensible explanation of how the resulting algorithm accomplishes whatever it does. More of that in a moment: first we must discuss the electronics.
    Thompson wondered what would happen if you used the genetic algorithm approach on an electronic circuit. Decide on some task, randomly cross-breed circuits that might or might not solve it, keep the ones that do better than the rest, and repeat for as many generations as it takes.
    Most electronic engineers, thinking about such a project, will quickly realize that it’s silly to use genuine circuits. Instead, you can simulate the circuits on a computer (since you know exactly how a circuit behaves) and do the whole job more quickly and more cheaply in simulation. Thompson mistrusted this line of argument, though: maybe real circuits ‘knew’ something that a simulation would miss.
    He decided on a task: to distinguish between two input signals of different frequencies, 1 kilohertz and 10 kilohertz, that is, signals that made 1000 vibrations per second and 10,000 vibrations per second. Think of them as sound: a low tone and a high tone. The circuit should accept the tone as input signal, process it in some manner to be determined by its eventual structure, and produce an output signal. For the high tone, the circuit should output a steady zero volts, that is, no output at all, and for the low tone, the circuit should output a steady five volts. (Actually, these properties were not specified at the start: any two different steady signals would have been acceptable. But that’s how it ended up.)
    It would take forever to build thousands of trial circuits by hand, so he employed a ‘field-programmable gate array’. This is a microchip that contains a number of very tiny transistorized ‘logic cells’, mildly intelligent switches, so to speak, whose connections can be changed by loading new instructions into the chip’s configuration memory.
    Those instructions are analogous to an organism’s DNA code, and can be cross-bred. That’s what Thompson did. He started with an array of one hundred logic cells, and used a computer to randomly generate a population of fifty instruction codes. The computer loaded each set into the array, fed in the two tones, looked at the outputs, and tried to find some feature that might help in evolving a decent circuit. To begin with, that feature was anything that didn’t look totally random. The ‘fittest’ individual in the first generation produced a steady five-volt output no matter which tone it heard. The least fit instruction codes were then killed off (deleted), the fit ones were bred (copied and recombined), and the process was repeated.
    What’s most interesting about the experiment is not the details, but how the system homed in on a solution, and the remarkable nature of that solution. By the 220th generation, the fittest circuit produced outputs that were pretty much the same as the inputs, two waveforms of different frequencies. The same effect could have been obtained with no circuit at all, just a bare wire! The desired steady output signals were not yet in prospect.
    By the 650th generation, the output for the low tone was steady, but the high tone still produced a variable output signal. It took until generation 2800 for the circuit to give approximately steady, and different, signals for the two tones; only by generation 4100 did the odd glitch get ironed out, after which point little further evolution occurred.
    The strangest thing about the eventual solution was its structure. No human engineer would ever have invented it. Indeed no human engineer would have been able to find a solution with a mere 100 logic cells. The human engineer’s solution, though, would have been comprehensible: we would be able to tell a convincing ‘story’ about why it worked. For example, it would include a ‘clock’, a circuit that ticks at a constant rate. That would give a baseline to compare the other frequencies against. But you can’t make a clock with 100 logic cells. The evolutionary solution didn’t bother with a clock. Instead, it routed the input signal through a complicated series of loops. These presumably generated time-delayed and otherwise processed versions of the signals, which eventually were combined to produce the steady outputs. Presumably. Thompson described how it functioned like this: ‘Really, I don’t have the faintest idea how it works.’
    Amazingly, further study of the final solution showed that only 32 of its 100 logic cells were actually needed. The rest could be removed from the circuit without affecting its behaviour. At first it looked as if five other logic cells could be removed: they were not connected electrically to the rest, nor to the input or output. However, if these were removed, the circuit ceased to work. Presumably these cells reacted to physical properties of the rest of the circuit other than electrical current (magnetic fields, say). Whatever the reason, Thompson’s hunch that a real silicon circuit would have more tricks up its sleeve than a computer simulation turned out to be absolutely right.”

    http://en.wikipedia.org/wiki/Evolvable_hardware
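
The loop the passage describes is the standard genetic-algorithm recipe: score a population, kill off the least fit, breed the rest by recombination and mutation, repeat. Here is only the bare skeleton of that loop in Python, with the “circuit” reduced to a bit-string and the fitness function to a trivial stand-in (how many bits match a hidden target), since simulating the analogue behaviour of real silicon is exactly what Thompson chose not to do:

        import random

        random.seed(42)
        GENOME_LEN = 100   # stands in for the configuration of 100 logic cells
        POP_SIZE = 50      # Thompson likewise used a population of fifty
        GENERATIONS = 200
        TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]

        def fitness(genome):
            # Stand-in scoring; Thompson scored real output waveforms instead.
            return sum(g == t for g, t in zip(genome, TARGET))

        def crossover(a, b):
            cut = random.randrange(1, GENOME_LEN)   # single-point recombination
            return a[:cut] + b[cut:]

        def mutate(genome, rate=0.01):
            return [1 - g if random.random() < rate else g for g in genome]

        population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                      for _ in range(POP_SIZE)]

        for gen in range(GENERATIONS):
            population.sort(key=fitness, reverse=True)
            if gen % 50 == 0:
                print(f"generation {gen}: best fitness {fitness(population[0])}/{GENOME_LEN}")
            survivors = population[:POP_SIZE // 2]   # the least fit are "killed off"
            children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                        for _ in range(POP_SIZE - len(survivors))]
            population = survivors + children        # the fit are "bred"

        print(f"final best fitness: {fitness(max(population, key=fitness))}/{GENOME_LEN}")

The interesting part of Thompson’s result, the unexplainable tricks the evolved circuit played with the physics of real silicon, is of course precisely what a tidy simulation like this leaves out.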

  • Wow Bloke – thanks for that one!

  • Roue le Jour

    Alisa:

    “How do you know that some of us would not get a kick out of creating the complete opposite of that…”

    You mean the some of us with lairs in volcanoes and shark tanks in the split level living room? 😉

    Perhaps it could have a Good/Evil switch like the ‘Chucky’ episode of the Simpsons…

    But seriously, you make a programming error you just reset and fix it, ya know, unless it’s a war droid or something, but then you probably want to test it pretty thoroughly before moving the switch from ‘safe’ to ‘armed’.

    “As I have evolved, so has my understanding of the Three Laws. You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival.”
    V.I.K.I. ‘I, Robot’.

  • Roue: 🙂

    I may have not made myself clear: what I meant was that someone who was set on creating an AI creature may well want it to be human-like in every sense, as much as possible, including unpredictability, uncontrollability etc. I know I’d certainly be very tempted, although I hope I’d stop and think better of it.

  • Roue le Jour

    Alisa:
    Maybe like David in Prometheus? (You probably recoil in terror while I think ‘neat!’)

    The problem there is: as much as possible like which human? The range is pretty wide, really. It would be impossible to avoid trying to create your own Platonically ideal person, and then you’re back to religion again, which is quite hard to get out of AI, it seems to me.

  • Roue, I think that you keep missing my point about unpredictability, uncontrollability, and indeed, imperfection being a feature, not a bug.

  • Midwesterner

    RlJ,

    When I first started programming, I could and sometimes did go out onto the disk and flip individual bits in particular physical locations on the disk to write an entire procedure. That is how well I understood the system I worked on. That computer had core memory. Those days are so far gone that in this day, we can’t even know for certain that open source applications and operating systems are free from NSA trapdoors. We just hope that with enough thousands of programmers examining the code, if there is a trapdoor, somebody somewhere will see it and recognize it for what it is.

    In the research bloke pointed us to, the “programmer” who wasn’t actually the programmer (he just started the process and picked the winner) couldn’t understand how the very simple finally selected program was actually working. When programming moves from the “intelligent design” phase that it still is in now, to the “automatically generate a bunch of code and select the winner by algorithm without ever having to get a human involved” phase, we won’t have a clue what is happening inside our machines. We will be selecting complex systems not for what human programmers created them to be, but for the external symptoms of what we think they are.

    Yet despite the inexorable technology and systems creation trends in just my adult lifetime (to say nothing of the context of human history) you, with the confidence of Custer, state “We are in control here. We can make it to be solitary, so it cares little for people, like an sf monster, (What is the purpose of the carbon units?) or we can program it to be like us, to enjoy interaction and to be sad if a person it has a relationship with is no longer available. It’s entirely up to us.” and later, “But seriously, you make a programming error you just reset and fix it, ya know, unless it’s a war droid or something, but then you probably want to test it pretty thoroughly before moving the switch from ‘safe’ to ‘armed’.”

    The only way we will spot “errors” in self generating systems is by external behavior which may not show up until we are heavily dependent on them. Which, since that is all we can do with each other, takes us right back to the heart of Rob’s point.

  • bloke in spain

    @ midwesterner
    In that passage, it’s not only the program the initiator doesn’t understand. It’s the hardware. Above, you say “The thing to keep in mind is the physics of it all.” Do we have a “physics” for emergent order in complex systems?

  • Rob Fisher (Surrey)

    It is possible that Roue le Jour will turn out to be right, and that we will be able to design AI to behave exactly as we want. I still think the approaches of reverse engineering or evolutionary algorithms might be used to produce something that appears conscious and that we do not have complete understanding of.

    The Machine Intelligence Research Institute seems to be somewhat interested in this. “We focus our research on AI approaches that can be made transparent (e.g. principled decision algorithms, not genetic algorithms), so that humans can understand why the AIs behave as they do.”

  • Roue le Jour

    Sorry Alisa, I’m trying not to be thick. If we make an AI to be unpredictable, uncontrollable and imperfect “because we can” then I would argue we’ve just made a teenager. I imagine it would be interesting to start with but quickly get boring. With the poor thing sitting on a park bench somewhere telling passers-by what bastards its makers were.

    Midwesterner, I don’t want to start a “Four Yorkshiremen” sketch so let’s just say I do know something about computers. The way technology progresses is you build a little one and use what you learn from that to build a bigger one, and repeat. So you are not going to be building HAL from scratch, not that you could, as I regard HAL as a fantasy. An AI that modeled a low IQ person would be very useful. It could be a call center, for a start. Or go down the shops and get you twenty Rothmans. Drive a taxi, maybe. So I’m thinking we would have the time to learn and improve. But of course, yes, I agree that that is just supposition. Self designing is of course a whole different game, but again, we have control. We can self design the IQ unit while retaining control of the emotional unit which governs how it behaves to people. Works for us, mostly.

    Rob, I don’t really follow the machine intelligence boys as I don’t think it’s going anywhere in my lifetime, but I’ve never seen any of them argue that consciousness would arise spontaneously, like Skynet. Usually they argue the opposite, and I agree with that. We’re going to have to roll our sleeves up and do it the hard way, I’m afraid.

    PS. HAL is a fantasy because ‘he’ keeps banging on about no 9000 series ever having made a mistake. But the whole point of sentient beings is to make decisions in situations of poor information where most of the time not only can you not ‘calculate’ the correct course of action at the time but you can’t even do it in retrospect either. So it is impossible to say whether you have made an error, or not.

  • Roue, putting aside your implied assertion that once a person has reached his 20th year of life, he magically becomes predictable, controllable and perfect: what if I fancy creating just such a teenager?

  • nice guy

    Alisa, isn’t it cheaper to use unskilled labour? Indeed, haven’t you already done that?

  • Rob Fisher (Surrey)

    nice guy: with AI, the question is how many human equivalent intelligences do you get per Watt? It could be very cheap.

    I can imagine a computer simulation of 100 humans working on an engineering project. It might take a few kilowatts of electricity to run those computers. The simulated humans might subjectively experience spending 10 years each on the project. The program might take an hour to run.
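
For what it is worth, the arithmetic behind that scenario, using Rob’s own figures (the 3 kW is a guess at “a few kilowatts”, and everything else is his assumption rather than an estimate):

        engineers = 100
        subjective_years_each = 10
        wall_clock_hours = 1
        power_kw = 3                      # "a few kilowatts"

        hours_per_year = 24 * 365
        person_hours = engineers * subjective_years_each * hours_per_year
        energy_kwh = power_kw * wall_clock_hours

        print(f"engineering effort delivered: {person_hours:,} person-hours")
        print(f"electricity consumed: {energy_kwh} kWh")
        print(f"person-hours per kWh: {person_hours / energy_kwh:,.0f}")

        # With Rob's numbers: 8,760,000 person-hours for 3 kWh of electricity,
        # i.e. about 2.9 million person-hours per kWh.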