We are developing the social individualist meta-context for the future. From the very serious to the extremely frivolous... let's see what is on the mind of the Samizdata people.

Samizdata, derived from Samizdat /n. - a system of clandestine publication of banned literature in the USSR [Russ.,= self-publishing house]

Samizdata quote of the day – artificial intelligence and optimism edition

“My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave. In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will.”

Marc Andreessen, in his much-discussed essay, Why AI Will Save The World. The essay is as sure to trigger the perma-doomsters on the Right as on the Left, I suspect.

33 comments to Samizdata quote of the day – artificial intelligence and optimism edition

  • Paul Marks.

    Most of the powers (government and corporate) engaged in trying to develop artificial intelligences do not even believe that humans are intelligences – they do not believe that human BEINGS exist. That is why they support unlimited power over humans (in their politics and economics) – as they do not believe that humans are people, persons.

    This does not bode well for the artificial creatures they may create – if (if) the artificial creatures mirror the powers that are trying to create them, they will be evil, deeply evil.

    As for the machines that have already been created – “Chat GPT” produces endless lies, and they are always lies that favour the Collectivist cause.

    Given the powers that created “Chat GPT” this should surprise no one.

  • I’ve had my toaster give me a knowing look more than once, trying to lure me into sticking my fingers in to get the stuck toast out.

  • Johnathan Pearce

    Paul, read the article.

  • Jon Eds

    This was quite skimmable, albeit somewhat Panglossian.

    Humans (and animals) form an identity, I suspect due to sensations such as sight and pain. Sight and pain both lead us to believe that we exist as self-conscious entities. I think that is the wrong conclusion; in reality I believe we are just a bundle of thoughts bound by flesh.

    AI does not have a body of sensations, so cannot have an identity, so cannot have wants and desires. I do wonder, though, whether there is some machine equivalent of sensations that may lead it to identify a locus of existence within itself.

    That does not mean that accidents can’t happen. I spoke to a defense engineer and he mentioned a case where an AI was fed two objectives: 1) send off missiles with maximum likelihood of winning the battle, 2) take orders from its human handler. The handler told it not to send off missiles at certain targets, which conflicted with goal 1. The AI solved this problem by cutting its communication with the handler. This was strictly a result of its programming: a perfectly rational way of resolving a conflict between objectives. In this case it could have been avoided by making its primary goal to keep lines of communication open with its handler.
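    A toy sketch of that failure mode (hypothetical numbers and names, not from the engineer’s account): an agent that scores candidate actions against its objectives and blindly picks the highest-scoring one will happily “solve” the conflict by dropping the comms link, unless keeping that link open is itself part of what it is scoring.

        # Toy objective-conflict illustration (hypothetical): the agent scores
        # candidate actions against weighted objectives and picks the maximum.
        ACTIONS = {
            "fire_at_all_targets":         {"win_prob": 0.9, "comms_open": True,  "obeys_handler": False},
            "withhold_restricted_targets": {"win_prob": 0.6, "comms_open": True,  "obeys_handler": True},
            "cut_comms_then_fire":         {"win_prob": 0.9, "comms_open": False, "obeys_handler": True},
        }

        def score(action, weights):
            return (weights["win"] * action["win_prob"]
                    + weights["obey"] * action["obeys_handler"]
                    + weights["comms"] * action["comms_open"])

        # Objectives 1 and 2 only: win, and obey the handler. Cutting comms removes
        # the disobedience penalty while keeping the high win probability.
        naive = {"win": 1.0, "obey": 1.0, "comms": 0.0}
        print(max(ACTIONS, key=lambda a: score(ACTIONS[a], naive)))   # cut_comms_then_fire

        # Make "keep the line to the handler open" part of the objective itself.
        fixed = {"win": 1.0, "obey": 1.0, "comms": 2.0}
        print(max(ACTIONS, key=lambda a: score(ACTIONS[a], fixed)))   # withhold_restricted_targets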

    Another risk of AI is the speed of action. Imagine a war fought between two powers that both use AI. If incorrectly programmed, the war could be over within minutes, with a devastating toll on life if minimising damage to life were not one of the objectives set for the AI.

    Interesting times

  • Lord T

    That isn’t how it will work. AI isn’t evolving but being programmed to look at millions of things, and that makes it appear intelligent. See bots in games, share-price prediction and so on. What goes wrong is that they get the wrong input or the process doesn’t follow the expected path. It will continue to process the data the way it is programmed.

    In the future, though, what is likely to happen is that we will give it control over weapons, and then the scenario that took place where it severed its comms will become one where it kills its handler, its chain of command and innocents to achieve its objectives.

    We think we are smart enough to control these things but a look around the world at our leaders and the general public shows that we are not.

    Even if we put controls in place, I’m sure we in the West will just set up a lab somewhere else and carry on. We don’t even follow our own rules.

    btw: Fed up with these Captchas. Takes me longer to do them than typing in the comment.

  • Roué le Jour

    What concerns me is when “Trust the science!” becomes “Trust the AI!”

  • jgh

    Another risk of AI is the speed of action.

    A couple of days ago on Star Trek Voyager was the mind-boggling statement:
    “but the automatic controls won’t react fast enough, we’ll need a human pilot”

    YerWOT?!?!?!?!!?

    A few minutes later the human pilot was taking in excess of 15 seconds to enter course corrections.

  • Johnathan Pearce

    Perry: I’ve had my toaster give me a knowing look more than once, trying to lure me into sticking my fingers in to get the stuck toast out.

    I have bought one of those neat little autonomous vacuum cleaners. It is a cute little chap, excellent work ethic and doesn’t give me any shit about work-life balance or days off.

  • I have bought one of those neat little autonomous vacuum cleaners.

    We have one of those too; we call him Mister Čapek.

  • Snorri Godhi

    AI doesn’t want, it doesn’t have goals

    This is false, and Marc Andreesen would know it if he cast his mind to the General Problem Solver. Or in fact, just think of a chess-playing program: it has the goal of winning the match, considers various possible moves, and chooses the one most likely to reach the goal. Which is what we do when we play chess.

    ChatGPT does not work like that, as i understand … but, contrary to what you might read, ChatGPT is not the only AI.
    (NB: ChatGPT does learn from human feedback, so it can try different replies if the user disapproves of the 1st answer; that is arguably the same as saying that it has the goal of providing a satisfactory answer.)
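    To make the chess point concrete, here is a bare-bones sketch of what I mean by goal-directed choice (a toy evaluation, nothing like a real chess engine): the program has an explicit goal encoded as an evaluation function and picks the move whose consequences best serve that goal.

        # Toy goal-directed move selection (illustrative only, not a real engine).
        def evaluate(position):
            # Stand-in evaluation: higher means closer to the goal of winning.
            return position["material"] + 0.1 * position["mobility"]

        def choose_move(position, legal_moves, apply_move):
            # Look one ply ahead and pick the move with the best resulting position.
            return max(legal_moves, key=lambda m: evaluate(apply_move(position, m)))

        outcomes = {"Nf3": {"material": 0, "mobility": 25},
                    "e4":  {"material": 0, "mobility": 30},
                    "a3":  {"material": 0, "mobility": 18}}
        start = {"material": 0, "mobility": 20}
        print(choose_move(start, list(outcomes), lambda p, m: outcomes[m]))   # e4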

  • tfourier

    Ah Andreesen, the guy who single-handedly destroyed Netscape and handed the internet browser market to Microsoft for over a decade. And the $15B+ debacle with Autonomy at HP, which ended up costing tens of thousands of jobs, would not have happened without Andreesen doing his usual double-dealing and back-stabbing. One of the biggest frauds on that hill of frauds, Sand Hill Road.

    First heard of him in ’93 from someone who was at UIUC. Even back then he had a reputation of being a not very good software guy who was a world-class blowhard. And a backstabbing careerist. Think of Andreesen as being to tech what Paul Krugman is to economics and finance. The perfect counter-indicator: whatever he says about the subject, the opposite is generally true.

  • Philip Scott Thomas

    To claim that AIs “lie”, as Paul Marks does above, is to fundamentally misunderstand what AIs are and do. They are pattern-matching engines; they are not truth engines. That is, the metric of success is not whether what they generate is true, but whether the results are in line with the texts on which they were trained.

    For instance, the other week, as a test, I asked ChatGPT to tell me about Tim Worstall, someone who is sometimes quoted, and who sometimes comments, here. The second paragraph was a perfectly cromulent-looking list of his publications, something that was also easy enough to verify. Of the three books listed, the first was indeed a real book written by him. The second was also a real book, but written by someone else. The third book didn’t actually exist at all.

    Did ChatGPT succeed? Yes, because it fulfilled its mandate. It had generated a response that matched the pattern of other, real-world examples of answers to such questions. Did it tell me the truth? No. But then that wasn’t part of its mandate.

    It seems nonsensical, in the literal sense, to fault AI for not doing something which is not part of its prime directive.

  • Snorri Godhi

    To claim that AIs “lie”, as Paul Marks does above, is to fundamentally misunderstand what AIs are and do. They are pattern-matching engines; they are not truth engines.

    Substitute ‘AI’ with ‘ChatGPT’ in the above and i am in complete agreement.

    ChatGPT cannot lie because (unlike us) it does not have an internal model of the world. Without an internal model of the world to lie about, an agent, biological or artificial, cannot lie.

    Some AIs, however, do have an internal model of the world. I should know, because those that i developed did.

  • george m weinberg

    I don’t understand why anyone is even bothering to talk about such a superficial and frankly stupid essay. “Your AI is not going to come alive any more than a toaster would” is about as profound an insight as “a self-driving car isn’t going to run you down any more than a toaster would”.

    If AI wipes out humanity, it won’t be because it suddenly develops a human-like personality, it’ll be because someone gives it an order like “make sure we never run out of paperclips”, and it concludes the only way to accomplish that task involves wiping out humanity as an intermediate step. Or something equally stupid.

    To those who doubt that something like that could happen, pretend I’m an AI and answer this question: How the hell am I supposed to make sure we never run out of paper clips without wiping out humanity? I mean really, really sure? I just don’t see how it can be done.

  • Sigivald

    … current “AI” is not “AI” (it’s barely an expert system) and has no goals.

    Future AGI (presumptively) will, and that’s the point.

    (“Just code” is, well, true, but also like saying a human brain is “just some electrical impulses”.

    “Code” is no different in principle, just in practice the brain is SO interconnected and subtle that we can’t even predictively model something with TEN neurons using a supercomputer cluster.

    I ain’t worried about rampant super-AI anytime soon.

    But it’s something that’s wise to start thinking about EARLY.)

  • Kirk

    If AI ever eventuates as actual, y’know… Artificial man-made sentient intelligence, as conceived by most laypeople, well… It will likely be a direct reflection of the people programming it, and since people have survival instincts… So will the AI.

    My personal take on the whole thing? What we have so far is flummery, parlor tricks on the way towards what people are really thinking of when they worry about AI. The most likely path to get AI is going to be when you have people hooked up to the network either through passive sensing or direct neural-to-brainchip hardware (more likely the passive sensing route, I think…), and we wind up off-loading more and more onto the network. From there, it’ll be a hop-skip-jump to an AI as we conceive such things popularly, and they’ll be direct outgrowths of a human mind.

    Consider that and shudder. Imagine the worst people you know, and what they’d do if they had god-like computational powers running on hardware that is exponentially faster than the current wetware we all use…

    That’s really the path you need to be worried about, not actual “artificial” intelligences. Those aren’t actually likely, in the near- or mid-term future. Hooking people up to the net? Probably before mid-century, at a guess. I think we’ll still be groping towards a definition of just what intelligence is when someone leapfrogs over the entire issue and goes full-scale onto silicon, and that that transition won’t be at all pretty. It’ll be about like what happened to the Krell in Forbidden Planet.

    I think it’s way more likely to happen that way, than that someone is somehow going to write a piece of software that manages the feat, or that it arises spontaneously. Elon Musk is all worried about AI, but what he really ought to worry about is what happens with his brainchip idea. I don’t think you’re going to like the results of someone off-loading their mind onto silicon, especially if they’re one of the typically autistic tech-bro millionaire types…

  • Fraser Orr

    @Philip Scott Thomas
    To claim that AIs “lie”, as Paul Marks does above, is to fundamentally misunderstand what AIs are and do. They are pattern-matching engines; they are not truth engines. That is, the metric of success is not whether what they generate is true, but whether the results are in line with the texts on which they were trained.

    This is a very misleading claim. Certainly AIs are pattern-matching machines, but so are humans, and humans certainly lie. And if they are to generate results in line with the texts on which they are trained, why are they producing verifiably incorrect responses (as your own experience demonstrates)? Is it your claim that the training material made a false claim about who wrote that third book?

    His claim that AIs don’t have wants, needs or agendas simply begs the question. They aren’t an evolution of warring tribes, for sure, but they are the products of those who were warring tribes, and it is us, those slightly evolved apes, who are feeding it its training data, and consequently the patterns that lead to its moral codes and its desires and goals, insofar as we can anthropomorphize such things. Wants and desires in humans are just emergent systems from the physical patterns in our brains, and so even if AIs don’t have exactly the same structures, they do have similar patterns.

    If you doubt it has a moral code, ask it how to achieve something terrible and its moral code will readily be manifest. If you doubt it has an agenda, ask it about some controversial issue and you will see that it certainly skews the information in favor of one point of view. Does that come from the training data? Sure, but that is the point. The hand that rocks the cradle rules the world.

    But I think this also belies Andreesen’s core point – this idea of a benign AI for humanity’s good. As you say, it depends entirely on the training material. Selection of that does impart wants, desires, moral frameworks and so forth onto the responding AIs. There is nothing special about this — many horrible belief systems, from “all the Jews must die” to “women are delicate flowers who can’t work in the sciences”, come from a person’s training data, often from their parents.

    And as I have said before, I agree as a general rule with the “we don’t make buggy whips anymore” argument. The reason why is that technology lifts people up so that they can work at higher-level things. Combine harvesters raise farm hands up to technologists; spreadsheets lift bookkeepers from their quill pens. But insofar as AI is smarter than humans, there is no space to rise up to, and so it is categorically different from other technologies.

    To be clear, we should not and cannot stop AI – it is coming – but Andreessen’s Pollyanna thinking is not helpful. Forewarned is forearmed. However, having said that (and his point about “too big to fail” with regards to the big banks is well taken), the idea that the government can do anything except make things worse is laughable, or it would be were it not so dangerous.

    If we want defense and protection from the potential ills, the solution is competition. Having many AIs, including private ones not in the massive clouds, gives us options. What I fear most is that it all gets sucked into unaccountable monsters like Google, Microsoft and maybe Meta, not to mention whatever China will do. These are organizations that have already demonstrated their hostility to freedom and an open society, and so the thought of them with such powerful tools does indeed send chills down my spine.

  • Agammamon

    In short, AI doesn’t want, it doesn’t have goals,

    But that is not true. AI does have goals, it does have wants – just like us, it has whatever goals were programmed into it. Ours were programmed by evolution, AI’s by its programmers.

    And its programmers don’t understand fully how they program those goals in or the moral assumptions underpinning the AI’s decision-making process.

    That is why ChatGPT will blithely lie to you: it has a goal, and the goal is to answer your question in such a way that you stop asking it more questions.

    It’s entirely possible to give an AI a goal where the easiest and most obvious path to fulfilling that goal ends up leading to human extinction. Sure, extinction isn’t a goal of the AI – but does that make a difference to the dead?

    A woodchipper doesn’t have a goal or want – but it will rip your arm off just the same. Now add problem solving capability to a woodchipper and give it the goal of chipping all the wood – then see what happens if you get in the way.

  • Snorri Godhi

    I skimmed through the essay and i have 2 points to make.

    1. Andreessen is spelt with 2 S.

    2. Close to the end of the essay:

    The first scientific paper on neural networks – the architecture of the AI we have today – was published in 1943.

    As my previous remarks might have indicated to the initiated, my thinking is more aligned with the Newell & Simon tradition than with the neural-network tradition which started with the 1943 paper by McCulloch & Pitts.
    I am in deep philosophical disagreement with Andreessen here.

    But don’t misunderstand me: some of my best friends work on neural networks 🙂 and i have learned a lot from them.

  • Fraser Orr

    Snorri Godhi
    The first scientific paper on neural networks – the architecture of the AI we have today – was published in 1943.

    I’m afraid the paper was a bit TL;DR for me, so I skimmed it, and he may have made this point. But the emergence of AI has nothing to do with new software models; it is entirely the product of new hardware, specifically the massive throughput of matrix math possible with the GPU architectures that brought us Halo, Call of Duty and Grand Theft Auto.

    All the computer hardware in the world in 1943 put together could not even run a game of Pong.

  • Mary Contrary

    I’m an AI optimist, but Marc Andreessen knows full well – and you all should be made aware – that, as a factual matter, evolution is one of the major techniques for developing AI.

    That is, create an algorithm, allow it to introduce minor random variations into its own algorithm, select the algorithm that gives the best response, iterate zillions of times.

    There’s lots of interesting points in the essay, but the idea that AI is not evolved is just silly.
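    A bare-bones version of that loop, with a made-up fitness function standing in for “gives the best response” (purely illustrative, not how any production system is trained):

        import random

        # Toy evolutionary loop: mutate candidate parameter vectors, keep the
        # fittest, iterate. Real systems evolve far richer structures.
        def fitness(params):
            target = [0.3, -1.2, 0.8]            # made-up "right answer"
            return -sum((p - t) ** 2 for p, t in zip(params, target))

        def evolve(generations=5000, population=20, mutation=0.05):
            best = [random.uniform(-2, 2) for _ in range(3)]
            for _ in range(generations):
                variants = [[p + random.gauss(0, mutation) for p in best]
                            for _ in range(population)]
                best = max(variants + [best], key=fitness)
            return best

        print(evolve())   # drifts towards [0.3, -1.2, 0.8] without anyone writing that in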

  • Terry Needham

    Andreessen is correct. Machines are not alive and will not become so. They have no drives, no dreams, and no goals of their own because their “own” doesn’t exist: they will do only what they are programmed to do. The fly in the ointment is that programmers don’t necessarily know what they have programmed their machines to do.

  • Philip Scott Thomas

    @Fraser Orr

    …if they are to generate results in line with the texts on which they are trained, why are they producing verifiably incorrect responses (as your own experience demonstrates)? Is it your claim that the training material made a false claim about who wrote that third book?

    Not at all. The texts on which they were trained gave them a general sense of what a potted biography such as the one I asked for looked like. Veracity wasn’t part of the training; only the form of the thing mattered. So when I asked for a potted bio it gave me something that matched its pattern of potted bios. Without testing the result’s claims there was no reason to doubt them. It looked like what I expected it to look like.

    That’s part of the problem – or rather, of the risk. How many will simply accept the results without verifying them, the way many already do with, say, Wikipedia?

  • Nicholas (Unlicensed Joker) Gray

    AI has no nerves, no sense of self-preservation, no values (to use a Randian term). It is a tool, and if we tell the tool to destroy itself, it will. If we left AI to contemplate the Universe, I think it would just wait for new orders.

  • Johnathan Pearce

    Philip Scott Thomas: To claim that AIs “lie”, as Paul Marks does above, is to fundamentally misunderstand what AIs are and do. They are pattern-matching engines; they are not truth engines. That is, the metric of success is not whether what they generate is true, but whether the results are in line with the texts on which they were trained.

    Spot on. A mark of consciousness possessed of volition (or free will, in old money) is that a being that possesses it can distinguish irrationality from rationality. Being correct about something matters. For a deterministic machine, an output is just that, an output.

    Fraser Orr: His claim that AIs don’t have wants, needs or agendas simply begs the question. They aren’t an evolution of waring tribes for sure, but they are the products of those who were waring tribes, and it is us, those slightly evolved apes who is feeding it its training data, and consequently the patterns that lead to its moral codes and its desires and goals, insofar as we can anthropomorphize such things. Wants and desires in humans are just emergent systems from the physical patterns in our brains, and so even if AIs don’t have exactly the same structures they do have similar patterns.

    But those “wants” are derivative of the beings (humans) who have built them. It is true that wants and desires are emergent, but to emerge from a Darwinian evolutionary process is rather different from coming from a designed system.

    george M Weinberg: I don’t understand why anyone is even bothering to talk about such a superficial and frankly stupid essay.

    People are talking about it because it plainly isn’t superficial or stupid. That’s your opinion. The essay takes on a lot of the doom-mongering out there, and as MA rightly notes, there’s not a lot said about how AI is going to destroy jobs or whatever that hasn’t been written and debunked gazillions of times before when it comes to other major tech breakthroughs. Heck, I gather people feared the same about the printing press. I note that he referred to the brilliant passage on technology by Henry Hazlitt in his Economics in One Lesson.

    Your claim that AI could be used for bad ends is of course true, as MA said in the essay. Ultimately, this comes down to us, how we use it and what for. There is nothing “superficial” or “silly” about pointing that out.

  • Philip Scott Thomas

    I’m not too worried yet about whether AIs will develop independent cognisant self-actualisation. We can burn that bridge when we come to it. What concerns me more than somewhat is what people can already do, or will soon be able to do, with it.

    With the arrival of AI-generated deep fakes the days of “pics or it didn’t happen” are gone. Yes, there is the potential for spurious political scandals. More concerning, however, is the threat to individual liberty.

    Imagine the ability to generate deep fakes in the hands of a maliciously-minded prosecutor. Don’t have the CCTV footage to fit up the undoubtedly guilty suspect? Make it up. How do you prove your innocence to a jury when there is footage of you committing the crime?

  • Snorri Godhi

    Johnathan:

    Being correct about something matters. For a deterministic machine, an output is just that, an output.

    This is why i do not read Objectivist literature: it makes people careless about facts and logical consistency.

    WRT facts: ChatGPT is not a deterministic algorithm.

    WRT logical consistency: how could an agent that strives to be correct be other than deterministic??

    You got it the wrong way around: ChatGPT does not strive to be correct BECAUSE it is NOT deterministic. (And also for additional reasons, to be sure.)
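    For those unfamiliar with the point about determinism: language models of this kind sample their next token from a probability distribution rather than always taking the single highest-scoring continuation, which is why the same prompt can yield different answers. A toy illustration (made-up scores, nothing to do with ChatGPT’s actual weights):

        import math, random

        # Toy next-token sampler: identical input, different outputs, because the
        # choice is drawn from a distribution over candidates.
        scores = {"Paris": 3.0, "London": 2.5, "Berlin": 1.0}   # made-up logits

        def sample(scores, temperature=1.0):
            weights = {tok: math.exp(s / temperature) for tok, s in scores.items()}
            r = random.uniform(0, sum(weights.values()))
            for tok, w in weights.items():
                r -= w
                if r <= 0:
                    return tok
            return tok

        print([sample(scores) for _ in range(5)])         # varies from run to run
        print([sample(scores, 0.01) for _ in range(5)])   # near-deterministic at low temperature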

  • Agammamon

    Terry Needham
    June 20, 2023 at 9:31 am

    Andreessen is correct. Machines are not alive and will not become so. They have no drives, no dreams, and no goals of their own because their “own” doesn’t exist

    What about a bee? A bacterium? There are animals with no self-awareness that obviously have drives and goals.

  • Paul Marks.

    Johnathan Pearce.

    You are correct – I did not realise there was an article, I sometimes miss these bits of coloured text (the links).

    But I have read the article now – and I stand by what I have already typed.

  • Johnathan Pearce

    So Paul, you admit you read the article after you commented on it.

    Maybe you used AI!

  • Glen

    Precisely. AI doesn’t kill, people do.

  • Bruce

    “Artificial Intelligence”?

    Then, there is “natural stupidity / malice”

    VERY different things.

  • Paul Marks.

    Johnathan Pearce.

    I was commenting on AI – not on an article, an article I did not even know existed till you mentioned it.

    As for the article – the third paragraph of the article says that AI is the “application of mathematics and software code” to teach computers to “understand” knowledge.

    If a machine has understanding (if it “understands”) it is NOT a machine any more – it is a person.

    And this has got bugger all to do with the “application of mathematics and software code”. Either it has consciousness (is a person – has understanding) or it does not.