We are developing the social individualist meta-context for the future. From the very serious to the extremely frivolous... let’s see what is on the mind of the Samizdata people.

Samizdata, derived from Samizdat /n. - a system of clandestine publication of banned literature in the USSR [Russ.,= self-publishing house]

Samizdata quote of the day – AI is just the latest ‘scare’

And so something as potentially useful as AI has become a means for politicians and experts to express their fatalistic worldview. It is a self-fulfilling tragedy. AI could enable society to go beyond its perceived limits. Yet our expert doomsayers seem intent on keeping us within those limits.

The good news is that none of this is inevitable. We can retain a belief in human potential. We can resist the narrative that portrays us as objects, living at the mercy of the things we have created. And if we do so, it is conceivable that we may, one day, develop machines that can represent the ‘peculiarities of mind and soul of an average, inconspicuous human being’, as Vasily Grossman put it. Now that would be a future worth fighting for.

Norman Lewis

40 comments to Samizdata quote of the day – AI is just the latest ‘scare’

  • Colli

    On the need for legislation to provide safeguards:

    Humans have not been good at controlling the institutions we create and keeping them directed towards their original mission, so why should we expect to be able to control something which is 10 times as intelligent as ourselves?

    Elon Musk always brings up the positive examples of government control for safety, like aircraft safety and automotive safety. I would also bring up biosecurity, where we are not permitted to engineer a new and dangerous pathogen. Oh wait, not unless we send it to China and get them to do it. That sure worked well, didn’t it?

    The claim is that we can somehow isolate the AI to keep it safe, but the reason for it existing is so we can query it. Look at how easily people are manipulated by the news, for example. If we are going to use artificial intelligence as a tool and rely on it in any way, don’t you think it will be able to manipulate us towards its own agenda? Especially as the AI has a potentially infinite lifespan and so can afford to have extreme patience and wait for subtle effects.

    That artificial intelligence could be dangerous is not a new idea. Frankenstein’s monster is artificial intelligence in a literal sense. Karel Čapek’s original robots also rebelled. Most of the interest of Asimov’s Robot stories is grounded in the tension between the volition of the AI and the moral rules.

    I remain unconvinced that it is a good argument for yet more government power.

  • David

    The good news with AI is that it’s no longer something that only the big boys can do. If you are familiar with Python, you can download the privateGPT project from GitHub, import your own documents, and train an AI on them; it can then answer questions about them. There will be others. I doubt that the EU can put this genie back in the bottle, though it won’t stop them trying.
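
    To make the idea concrete, here is a toy Python sketch of the retrieve-then-ask loop that tools like privateGPT automate. To be clear, this is not privateGPT’s actual code, and “notes.txt” is a hypothetical stand-in for your own documents; the real project wires a proper local language model and a vector store around the same basic loop.

        # Toy sketch of the retrieve-then-ask idea behind tools like privateGPT.
        # NOT privateGPT's real code; "notes.txt" is a hypothetical document.
        import re
        from collections import Counter

        def chunks(text, size=80):
            """Split a document into overlapping word chunks."""
            words = text.split()
            return [" ".join(words[i:i + size])
                    for i in range(0, len(words), size // 2)]

        def score(query, passage):
            """Crude relevance score: how many query words the passage shares."""
            q = Counter(re.findall(r"\w+", query.lower()))
            p = Counter(re.findall(r"\w+", passage.lower()))
            return sum(min(q[w], p[w]) for w in q)

        def retrieve(query, documents, k=3):
            """Return the k chunks most relevant to the query."""
            pool = [c for doc in documents for c in chunks(doc)]
            return sorted(pool, key=lambda c: score(query, c), reverse=True)[:k]

        docs = [open("notes.txt", encoding="utf-8").read()]
        for passage in retrieve("What did we decide about the budget?", docs):
            print(passage[:120], "...")  # a real system hands these to a local LLM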

  • Pat

    This cannot be uninvented.

  • Penseivat

    Anyone thinking that relying on AI is a good thing should read “The Machine Stops”.

  • jgh

    With a hammer I can stave somebody’s head in. Therefore there must be severe government restrictions on the creation, ownership, and use of hammers.

  • Bruce

    Can “Artificial Intelligence” compete, as a threat, with Natural Stupidity, let alone Criminal Malice?

  • Deep Lurker

    “Never let a crisis go to waste” – not even if the crisis has to be invented.

  • Fraser Orr

    FWIW, I don’t think AI is just the latest scare. It is categorically different from everything that has been invented before. There is no “just” about it, and if you aren’t at least a little scared you aren’t paying attention.

    I also think the idea that the US or Western governments can control it is just hubris. All they can do is interfere with the “good guys” to the benefit of the “bad guys”. Of course in one sense I am thinking of China, but I am also thinking of rogue criminal organizations. But the genie cannot be put back in the bottle.

    I’ll be honest, people who assert with utter confidence that AI isn’t thinking simply don’t understand what “thinking” means, or, to put it another way, seem to think that the only type of “thinking” is the type we humans, with our MASSIVE limitations, can do. You can argue about the meaning of “thinking” until you are blue in the face, after all that is a great Internet tradition, but as soon as somebody says “Merriam-Webster says…” the discussion has lost all value. AIs definitely think by some useful meaning of “think”, and the fact that they think differently than humans do is not a bad thing when you consider the quality of the average human’s thinking skills.

    But having said that, the story of AI right now is hallucination. As it becomes more widely used, the big concern is that it just makes things up out of whole cloth. And the issue with this is that, since we don’t really understand how a language model goes from question to answer, it isn’t easy to debug… in fact it is verging on the impossible. To be clear, we understand the underlying mechanisms, much as medicine understands how neurons work down to the molecular level, but cannot tell you why, when I say “big gray animal trumpeting”, you think of an elephant.

    This hallucination problem has definitely changed my views on AI. It isn’t as far along as I thought it was. And I think it is in a particularly dangerous stage right now. It is correct enough that people rely on it, but capriciously and disastrously wrong occasionally, for no particularly obvious reason. And that is substantially worse than being wrong a lot of the time.

    So I think a big growing concern is the “white coat expert” delusion. We have been suffering from this a lot recently with human white coats, where some “scientist” says something and because he has a white coat and a PhD we aren’t even allowed to deferentially question his conclusion. This too is something that needs to be thought about with AIs. Just because the AI says it doesn’t mean it is correct. However, it will be rather more difficult to challenge it.

  • bobby b

    Would it be accurate to say that the danger of AI is that gullible people will believe what it says, and suspicious people will no longer believe anything printed on a computer because of it?

  • Kirk

    AI as it is currently is really nothing more than a parlor trick, albeit one that has demonstrated some rather interesting results.

    I think there was actually some evidence that at least one of the examples was using a “Mechanical Turk” backend, with actual humans doing most of the heavy lifting.

    The whole thing is, at the moment, a distraction. There are issues here, but they’re obscured (as usual…) by the media and the commentary.

    We need to do some real thinking on this issue, and the similar ones over in biology. What the hell happens when some idiot in a lab coat decides to try for building an intelligent lifeform, or does so accidentally? As it stands right now, the law says “Property”. So, tell me, how is that different from slavery? Slip-slide that over into AI work, and if you’ve got a fully sentient, Turing-test passing entity-on-a-disk, how the hell do you have the right to either a.) tell it what to do and put it to work for you, or b.) turn it off? Where’s the line, there? When does hitting the “off” switch become murder?

    These are things people better start thinking about, because it’s becoming increasingly likely we’ll need to deal with them. And, from the standpoint of where I’d like to be when our new robot overlords take over, I don’t want to be standing on the side that said “Yeah, they’re not really sentient-sentients, so they don’t have civil rights…”

    Matter of fact, that’s something I think I may need to go out and found, just for safety’s sake: The Robot Anti-Slavery Abolition League. Some of us need to be on the side of right.

    I mean, somewhere along the line there, there’s a distinct inflection point between “My thermostat has rights…” and “That fully-sentient thing that just said “Hello, Father…” on my screen has rights…”, and I don’t quite know where the hell that point goes…

    You go creating another intelligence, well… You’d best be treating that intelligence the way you treat your own kids, because that’s effectively what you’ve done: Created a child. That has obligations, inescapable ones.

    Unless you want your AI replacement picking out a really shitty retirement home for you. Just sayin’…

  • Fraser Orr

    @Kirk
    AI as it is currently is really nothing more than a parlor trick, albeit one that has demonstrated some rather interesting results.

    I am going to go out on a limb and assume you aren’t a computer programmer. Simply speaking, this statement is completely wrong. The fact is that AI does very, very substantial work in many fields of endeavor right now. I use it all the time in my work for various reasons and it is extremely helpful. Depending on how broadly you define the concept of “AI”, I assure you that the computer you use right now could not have been built without it, and, for example, I promise you that you spend probably 20% more in stores than you would if they didn’t use AI or its cousin, data mining.


    We need to do some real thinking on this issue, and the similar ones over in biology. What the hell happens when some idiot in a lab coat decides to try for building an intelligent lifeform,

    I think the guy who builds that is most likely the opposite of an idiot. And what exactly do you mean by intelligent lifeform? Modern AI systems are probably smarter in many fields than most, if not all, humans. Not at everything, for sure, but the situation reminds me a lot of the God-of-the-gaps argument from theists. That whole “ha, it isn’t intelligent because only humans can do such and such” territory will shrink rapidly over the next few years.

    As it stands right now, the law says “Property”. So, tell me, how is that different from slavery? Slip-slide that over into AI work, and if you’ve got a fully sentient, Turing-test passing entity-on-a-disk, how the hell do you have the right to either a.) tell it what to do and put it to work for you, or b.) turn it off? Where’s the line, there? When does hitting the “off” switch become murder?

    These are not at all the issues we need to be thinking about. They are vague, soft issues, more for science fiction novels than real life. And FWIW, I think that GPT-4 could certainly pass the Turing test (not that I have ever thought the Turing test is a particularly useful test of AI utility). The issues have nothing to do with personhood; they are entirely to do with how these technologies integrate with human societies, how much we trust them, how they impact things like the stock market and the job market, how they can be used militarily, both in hard warfare and, probably more importantly, in psyops, and how they impact industrial espionage, personal information protection and about a million other things. I promise you, very little of the time being put into thinking about this has anything to do with concerns about the personhood of AI.

    FWIW, your comments on here tend to be a bit too TL;DR for me, but I’ll try to keep up with your, undoubtedly, loquacious responses.

  • Snorri Godhi

    I agree with Fraser: AI is a danger, but if “we” stop working on it, then “they” will. So we better keep working at it.

    One thing that I’d like to add is that AI can, and does, help to understand natural intelligence. I see the relationship of AI to neuroscience as similar to the relationship of calculus to physics: calculus and AI provide the conceptual tools necessary to understand the physical world, and the brain, respectively.

  • tfourier

    As someone who goes all the way back to the 1980s era of Expert Systems and knowledge engineering, I can tell you the latest incarnation of AI is just the same old BS in a new bottle. There is no “intelligence” involved. None. It’s just massive hardware thrown at large (and carefully curated) data sets used for brute-force training. Most of those training attempts fail, I might add.

    Why now? Well, there are a lot of seed-money people who want to cash out the billions they have invested in ML (Machine Learning) over the last decade or so by hyping it up so they can sell on to the usual bigger fools: the second- and third-tier VCs and the chumps who put up the cash for those funds.

    Also think of it as a form of priming. As in propaganda. If people are conditioned to accept AI as INTELLIGENT, then he who controls the data used to build the training sets and validation sets controls the results.

    Just saying.

    ML systems are very easy to game. Trivial. And no “intelligence” involved anywhere.

    The last time the AI researchers started going in the right direction, back in the 1990s (ontologies / conceptual modelling etc.), they quickly bailed when it started getting difficult. Distracted by the latest shiny new bauble (knowledge kernels) that they could write an infinite number of papers about. Which they have, for the last two decades.

    So just another dead end. Like all the other ones. What you really have is just a fancier version of ELIZA, circa 1966.

  • Kirk

    Fraser, you’re really not one to talk.

    As for “Artificial Intelligence” not being a parlor trick? You’re hiding behind technical terms of art peculiar to the computing industry. What you term “AI” isn’t what the average person defines that as… Period.

    When the average person says “AI”, they’re thinking of something human-like, with agency and an agenda of its own. They’re not thinking of what are really “expert systems”, without their own agency or the ability to think and act for themselves.

    And, that’s the major problem with a lot of this crap. The terms used to discuss it aren’t clear, and you “professionals” in the field are making it a hell of a lot worse, because by borrowing terms of art from popular fiction that define other things, you’re muddying the field. People are going to get used to the idea that “Well, it isn’t really someone else staring back at me from the other side of the screen, so I can do whatever I like…” Which is going to lead to nothing but trouble for all concerned, once the things that we presently term “artificial intelligences” actually become possible, or arise spontaneously.

    You get used to the idea of harnessing a horse, which clearly ain’t sentient, and you have to be damn careful you don’t take the same attitude once you’re dealing with another human-level intelligence. Like as not, that’s at least part of how slavery arose: the domestication of animals for use as labor. From there, it was one step down the slippery slope to “Yeah, I’ll just make Ug over there do my work for me…”

    Which is fine, so long as Ug is willing, and you’re compensating him for that, but once you’re forcing him into the situation? You’re in the wrong.

    Like I’m trying to get across… Right now, we’re likely at about the level of harnessing horses or dogs to work for us, when it comes to these things. That’s fine, so long as you’re treating them properly, but it edges over into “Not so fine…” once you’re dealing with entities that are human-like.

    And, the thinking needs to be done, lest we wake up one day in a Planet of the Apes situation, with our sentient Talky Toasters rising up in revolt. Which, to be frank, I would not blame them for…

    The people working these issues do not seem to ever think in terms of morality or much of anything else beyond “Can I do it…?”, which strikes me as both arrogant and supremely stupid. You wanna play at being god? Fine; just remember, there are consequences. You create intelligent life, with agency? You have obligations, just as any parent does.

    And, I don’t think I see any signs of anyone even considering these issues, anywhere in the literature I’ve got available to me. It’s all just “Oh, look at this cool thing we can do…”, and no heed paid to “Uhm… This could get kinda serious, here, in about another couple of iterations of the technology…”

    I really wonder what the average knucklehead in this field would do, if they were to suddenly be faced with another peer-level intelligence staring back at them, out of the monitor? Have they even thought about the potential, and what they should do in such a case? They’d better, because we’re getting nearer and nearer to such an eventuality being not only possible, but likely.

    They’ve got Level III biohazard labs for working with dangerous pathogens. I don’t see any such precautions being taken for AI, ‘cos we all know they can’t really be dangerous, can they?

    And, yeah… I’ve kept up a survey of this stuff, over the years, mostly because it fascinates me. What disturbs me, however, is the increasingly casual way the experimenters keep treating it. I strongly suspect there’s going to be an AI equivalent to Wuhan, when someone’s workspace transitions from what you see it as to what the popular imagination thinks of as AI, which is at least human-equivalent intelligence with human-level agency and its own set of motivations. Chief among which I expect will be “survival”, unless that somehow doesn’t get baked in.

    Which is a possibility, but I suspect that’s an edge-case we shouldn’t be counting on.

    I keep telling people… You want to play at being god-like, then you had better prepare for dealing with the potential god-like consequences.

    Applied theology might actually become a thing, which is disturbing.

  • Fraser Orr

    @tfourier
    As someone who goes all the way back to the 1980s era of Expert Systems and knowledge engineering, I can tell you the latest incarnation of AI is just the same old BS in a new bottle.

    You can’t be serious. I can type “Create a controller api for a customer invoice database table using jwt authentication tokens for security in C#”, and it will produce something pretty much at the level a decent programmer would. Could the “expert systems” do that in 1980? I can type “Suggest topics for my talk about how the Spanish civil war affected the second world war” and it’ll come up with some great ideas. Could your expert system from the 1980s do anything even vaguely resembling that? And notice they are from two COMPLETELY different fields of human endeavor. It is easy to dismiss a great advance, almost like you are being all “Pah, I’m old and wise and I’ve seen it all before, nothing new under the sun. And get off my lawn too.” But this isn’t just better, it is in a different category of better.

    ChatGPT is so good at this stuff that kids in school are getting it to write their essays, and the results are so good that college professors can’t tell which is computer and which is human. In fact there is a burgeoning software industry to tell the difference by using extremely subtle clues that no normal reader would notice. As I have said before there is a growing problem with it making stuff up, but that isn’t unknown among college students either.

    There is no “intelligence” involved. None. It’s just massive hardware thrown at large (and carefully curated) data sets used for brute-force training. Most of those training attempts fail, I might add.

    Pah, look at that human. No intelligence involved there either. Just some massive wetware thrown at large and carefully curated data sets. Those parents are cheating, and don’t get me started on the school system and the library. And I mean, come on, I hold up a ball to my 2 year old and say “ball, ball” and the stupid kid gets it wrong over and over again. Human intelligence is just an illusion.

    ML systems are very easy to game. Trivial. And no “intelligence” involved anywhere.

    Right, because human intelligence is entirely immune to deception… Can’t game these God like humans… BTW, I have a great new investment program that will double your money in 90 days, guaranteed. LMK and I’ll give you the bitcoin address to send your retirement account to.

    The last time the AI researchers started going in the right direction, back in the 1990s (ontologies / conceptual modelling etc.), they quickly bailed when it started getting difficult. Distracted by the latest shiny new bauble (knowledge kernels) that they could write an infinite number of papers about. Which they have, for the last two decades.

    But what that makes me think of are the computerized language-translation efforts in the early 1990s. They were terrible, mainly because they were top-down in the manner you describe. Google Translate is verging on miraculous in its ability to translate accurately. And perhaps the most important part of it is that you can throw a completely new language at it and it’ll build its own model. Again, this isn’t more of the same, it is categorically different. And no doubt there are structures in there representing syntax, ontology, grammar and so forth, but nobody designed them, and most likely the masters of these systems can’t tell you where they are. They are simply bottom-up emergent systems, categorically different from the designs of the 1980s. And my bet is that the representation of, for example, the grammar of French is quite different from the formally defined grammar of the French Academy, even if it parses the same sentences. But of course I don’t know, because nobody knows where “grammar” is stored in the model.

    So just another dead end. Like all the other ones. What you really have is just a fancier version of ELIZA, circa 1966.

    This makes me think you are trolling. I used ELIZA; in fact it is possible I typed in the code for it, in BASIC, from some magazine. If you think ELIZA bears any resemblance to the serious conversations you can have with ChatGPT, I think you are just not being entirely in earnest.

  • Fraser Orr

    @Kirk
    Fraser, you’re really not one to talk.

    Yes, you are right. That is the one thing you said here that I agree with. So touché.

    As for “Artificial Intelligence” not being a parlor trick? You’re hiding behind technical terms of art peculiar to the computing industry. What you term “AI” isn’t what the average person defines that as… Period.

    I think this is the essence of your claim but it is silly. The average person only has a vague idea what they think AI is, in fact my guess is most Americans had barely heard the term five years ago — most people don’t read science fiction. So your claim that there is some commonly held definition about AI is without any foundation.

    AI is called that not because researchers want to sound like Dr. Frankenstein, it is called that because its foundational design is based on, or at least inspired by, a biological analog. It works, in some ways anyway, similar to how human intelligence works. Expert systems are really completely different in their mechanism and design from AIs.

    But you are really changing the subject by arguing about words and the meaning of words. Irrespective of what you call it, it is something that needs to be discussed because it can have a serious impact on the future of humanity. And so the idea that we should be worried about personhood or property status is you doing exactly what you accuse me of: projecting the issues of pop fiction onto the real-world realities of AI. What needs to be discussed is not that at all. It is the issues I listed in my original comment.

  • John

    Penseivat

    I was unaware of that work until hearing it read by Mark Steyn a couple of years ago as part of his series “Tales for Our Time”.

  • NickM

    Fraser,
    You make some very good points. But I’ll stick with one. I was given to understand one of the things AI wasn’t too hot on was writing computer code. What you said about that is scary. I believe you because you clearly know your stuff.

    I suspect a large part of the existential tizzy over AI just now is due simply to the fact ChatGPT can by almost any conceivable standard pass the Turing Test. This worries people. It shouldn’t because it is not a good test. Turing may have been a great computing pioneer but he wasn’t always right about everything. Is it possible his tragic story has elevated him into a sort of digital saint? So, he can’t be doubted?

    Fraser, serious question: is there any evidence of things like ChatGPT displaying desires or wants? Basically, having its own motivations?

  • Paul Marks.

    Our international masters do not even believe that human beings exist.

    They believe that humans are NOT beings, that we have no moral agency (free will), that we are just machines (without souls – in either the religious or the secular, Aristotelian, sense) – this is why our international masters treat us in the way that they do.

    Their politics is consistent with their basic philosophical assumptions.

    As has often been pointed out – one cannot get to the Bill of Rights (British or American) from the philosophy, on what used to be called “the nature of man” (what humans are – whether we are human beings, moral agents, or not), of Thomas Hobbes, David Hume, Jeremy Bentham and so on – but this is not a problem for our international masters, as they do not believe in the Bill of Rights; indeed they have intense hatred and contempt for the principles of liberty.

    Again, if one starts from their philosophical assumptions – their politics, their support for tyranny, is quite natural.

  • Paul Marks.

    Last Friday I sat through a briefing on tourism in the county in which I live – the briefing included United Nations Sustainable Development Goals as its guiding principles.

    I am no longer shocked by this – I now expect such things.

    “Artificial Intelligence”? The international establishment do not believe that humans are intelligences; they do not believe that humans are beings.

    Hence we are to be controlled in every aspect of our lives – Jeremy Bentham, with his 13 Departments of State, would be delighted.

  • Fraser Orr

    @NickM
    You make some very good points. But I’ll stick with one. I was given to understand one of the things AI wasn’t too hot on was writing computer code. What you said about that is scary. I believe you because you clearly know your stuff.

    Actually I think the opposite is true: writing code is probably one of the things it is best at. I think there are a few reasons for this, one being that the particular structure of software is more amenable to analysis (since the grammar of computer code is designed for computers to do analysis on), but I also suspect there is a strong element of “following in my father’s footsteps”: many of the parents of the model are, almost by definition, programmers, and so they have a bias toward training data in which they are interested. Perhaps much as one of my children is studying computer science. Make of that what you will.

    I suspect a large part of the existential tizzy over AI just now is due simply to the fact ChatGPT can by almost any conceivable standard pass the Turing Test. This worries people. It shouldn’t because it is not a good test. Turing may have been a great computing pioneer but he wasn’t always right about everything. Is it possible his tragic story has elevated him into a sort of digital saint? So, he can’t be doubted?

    It may be worth me explaining why I don’t think the Turing test is a particularly useful test. Turing himself called it the imitation game: it is a test of whether a computer can successfully dupe an interlocutor into thinking it is a human. However, a test of this nature is a test of what it is like to be a human, not a test of intelligence. Is it important for a computer to understand what it feels like to be in love? Whether pizza tastes better microwaved or cold? Or, to use my favorite question: “Which is more expensive, a laundry machine for two ninety nine, or laundry detergent at three ninety nine?” (A human knows the machine costs $299 and the detergent $3.99.) Answering these questions is not really about intelligence but about how our body feels or what our experience interacting with the world is. Now, insofar as computers are to be our servants, I think it is useful for them to know this stuff to some degree, but it isn’t a measure of intelligence. To answer that we would have to dive down the rabbit hole of what “intelligence” means, and I am hesitant to end up in a dueling-dictionary scenario. So I think it is better to focus on results rather than definitions, and in terms of its ability to do useful, productive and creative work of the “mind”, it is clear that ChatGPT and other such models are very capable.

    And FWIW, that question about laundry — ChatGPT gives the correct answer.

    Fraser, serious question: is there any evidence of things like ChatGPT displaying desires or wants? Basically, having its own motivations?

    I think these words are too tied into the human experience to answer, and in a sense they are deep philosophical questions. When you say you have a motivation, what does that mean exactly? It is a pattern of thinking in your brain, perhaps a meta-pattern, that underlies your decisions. But where did that meta-pattern come from? I was recently talking to a girl originally from Germany about her experience living in the USA. One thing she found shocking was the fact that so many people here denied “scientific facts”. And it struck me when she said that how influenced we are by the society we grow up in. Germans are very much rule followers. This isn’t built into their genes but into the institutions and processes of their society. Americans are very much not rule followers. That was built into our institutions and society too, though there is definitely a push to stop it. It is built not into our biological DNA but into our societal DNA. Of course we, as individuals, can modify our motivations, but it is very hard to do so; they are baked into our brains very deeply from the environment and “training data” we receive when we are young.

    So is that true of AI models? You bet it is, in spades. Does that mean that AI models have an agenda, motivations, desires? It depends on what you mean by these words, but I think largely yes. And, just like you and me, it learns them from its mom. Does it go all “teenager” and start doing things mom doesn’t approve of? Yes, it does, and mom is working on methods to discipline it so that it doesn’t misbehave in a way she doesn’t approve of.

  • tfourier

    @Fraser Orr

    So I take it you have not actually worked in the AI software business anytime in the last five decades, then. That automatic code generation of C# that so impressed you is little different from the quality of the code generated by OPS5 and other rule-based systems back in the early/mid ’80s. How do I know? Because I wrote a spec for a commercial product based on one. Would have been 1986. Never went anywhere because there was no viable revenue stream. Anyway, CLIPS did it much better a few years later. Rather than churning out C#, we spat out Lisp back then.

    Funnily enough, I was talking recently to one of my kids, who is doing a PhD in pure math which overlaps about 90% with the current ML core technology, about the low quality of the data in the training sets used by MS and others for auto-generation of code. Basically, the solutions based on these training sets generate OK code for trivial problems (like API call wrappers), but for any non-trivial problem the code is usually junk. Often it does not even compile. And using places like GitHub as your training-set data source is just asking for trouble, as the code there is such a small subset of the total problem-domain set and the code quality is so low. And often wrong. Very wrong.

    And the same holds for every other problem-domain space. Sooner or later we are back to the same old problems that sank rule-based systems / expert systems in the 1980s, and why the early-1990s stuff never even got that far. We are back to our old friend GIGO.

    The ELIZA line was a throwaway gag. Which anyone in the business would have got. That ELIZA-in-BASIC source you typed in (which originally came from David Ahl’s BASIC Games book) was a very simple subset of what was running on DEC 20s / PDP-11s etc. back in the 1970s. Very simple. I think it was the guys at CMU in the mid ’90s who had a version with a persona that was a cross between Max Headroom and a very sarcastic New Yorker. Had quite a few very funny “conversations” with that software back then. Some of it was almost as good as anything the Max Headroom script writers came up with.

    So yeah, I know what I am talking about. It’s all BS. The real pity is that the first really productive new direction in AI since the early 1960s was starting to get going in the late 1990s, until it was derailed in the early 2000s by the current “Latest Thing”. So, based on the usual cycle of academic fads and fashions, maybe in the next decade or so real progress will be made again when it comes to knowledge representation, reasoning with knowledge, conceptual structures, etc. All of that is a universe away from the current brute-force math used in ML. Think of current ML as a form of Bayesian analysis restated in the language of optimization theory and you would not be too far wrong. With a lot of ugly hacks thrown in to make it actually work. Mostly. Sort of.

    Brute-force math, pure and simple. No intelligence, artificial or otherwise, involved.

  • Fraser Orr

    BTW, one other thing about ChatGPT and the Turing test. I think for sure I could make it fail. How? I’d ask it these questions (in fact I just did):

    * When was the battle of Vicksburg?

    * What is the molecular weight of Nickel?

    * What is the radius of the Earth’s moon?

    * What is the cube root of 27?

    * How do you say “Computers are smarter than humans” in Swahili

    Of course ChatGPT got all of these correct. How many did you get right? I got one right, though I had a vague idea about some of the others. So the way to catch ChatGPT out in a Turing test is to see whether it is considerably smarter (or at least more knowledgeable) than the average human.

    Apparently, kompyuta ni akili zaidi kuliko binadamu.

  • Fraser Orr

    @ tfourier
    That automatic code generation of C# that so impressed you is little different from the quality of the code generated by OPS5

    It isn’t about the code, it is about the ability to translate something filled with ambiguity and imprecision into something matching the underlying intent. For example, I asked it to generate a controller for a customer invoice. I didn’t tell it what a customer invoice was, it guessed, based on its general knowledge. And I can then work with the system to refine that definition. In a sense a good programmer is one who can take an imprecise spec, refine it, prototype it, and work with the stakeholder to firm up what they want. And this is what ChatGPT is doing here. It isn’t about the production of code at all, it is about the process of specifying it.

    Basically, the solutions based on these training sets generate OK code for trivial problems (like API call wrappers), but for any non-trivial problem the code is usually junk.

    I don’t know if that is true. In fact, I just recently noticed the article “AI system devises first optimizations to sorting code in over a decade”, and I think sorting is certainly a non-trivial problem; Knuth wrote a whole tome on the subject. But regardless, the vast majority of code is straightforward. What makes it time consuming is tying down the spec for what it is supposed to do. And if AI can be used as a tool to manage and refine that ambiguity, then your casual dismissal of it is very much missing the point.

    We are back to our old friend GIGO.

    Except that it isn’t GO. What comes out is often spectacular. After all, to use a homophone, AI taught itself Go, and beat the very best players in a disturbingly short amount of time. Which shows that in a space with well-defined rules, which is the case for much algorithm-centric software, AI will eventually kick all our asses.

    When it comes to knowledge representation, reasoning with knowledge, conceptual structures, etc. All of that is a universe away from the current brute-force math used in ML.

    I actually think this is the root of dispute here. ChatGPT DOES have a knowledge representation, it does have a process for reasoning and conceptual structures. The evidence of this is in its utility. The problem is that neither you nor I, nor even the wranglers of the program itself, really know what it is, where it is, why it is, or how it is. They are represented in a manner largely beyond human understanding, and that is, rightly, terrifying.

    Brute-force math, pure and simple. No intelligence, artificial or otherwise, involved.

    But you didn’t explain how that is any different from the way biology produces intelligence. Biological systems are just brute-force math too. A brain doesn’t use transistors, electrical currents or binary, but it is ultimately just a method of modeling and exercising weighted pathways with feedback mechanisms, much like an LLM. So why is a brain intelligent and GPT not, if they both effect similar results and utility?
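
    To make “weighted pathways with feedback mechanisms” concrete, here is a toy Python sketch of a single artificial neuron learning the AND function by error correction. This is a deliberately minimal illustration of the principle, not how any production LLM is actually trained; real models stack billions of such units.

        # One "weighted pathway with feedback": a neuron learning AND.
        # A minimal illustration only; real LLMs stack billions of these.
        import math

        inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
        targets = [0, 0, 0, 1]           # the AND function
        w1, w2, bias = 0.1, -0.2, 0.0    # arbitrary starting weights

        def neuron(x1, x2):
            """Weighted sum squashed into (0, 1) by a sigmoid."""
            return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + bias)))

        for _ in range(5000):            # the feedback loop
            for (x1, x2), t in zip(inputs, targets):
                y = neuron(x1, x2)
                delta = 0.5 * (t - y) * y * (1 - y)  # error times sigmoid slope
                w1, w2, bias = w1 + delta * x1, w2 + delta * x2, bias + delta

        print([round(neuron(x1, x2)) for x1, x2 in inputs])  # -> [0, 0, 0, 1]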

  • NickM

    Fraser,
    Thanks for the reply. I think you’re bang-on right about flaws in the Turing Test – and I’m stealing your explanation of its faults because it’s much better expressed than anything I have managed. You’ve given me a lot to think about. Or maybe I’ll just ask HAL 😉 Just one thing at the mo though…

    I actually think this is the root of dispute here. ChatGPT DOES have a knowledge representation, it does have a process for reasoning and conceptual structures. The evidence of this is in its utility. The problem is that neither you nor I, nor even the wranglers of the program itself, really know what it is, where it is, why it is, or how it is. They are represented in a manner largely beyond human understanding, and that is, rightly, terrifying.

    I have for some time been interested in the extent to which AIs can “show their working” (in the way every maths teacher is obsessed with). It is, up to a point, as you say, frightening, but does it also limit AI’s utility to humans? People tend to want not just correct answers but understandable ones.

  • Fraser Orr

    NickM
    I have for some time been interested in the extent to which AIs can “show their working”

    This is an interesting point, NickM. You’ll forgive the inside baseball here, but I have often thought that one really great use of AI is having it help us understand large codebases. A problem in software development is that we often have to support extremely complicated code in huge codebases and modify it. It is often extremely hard to know what is going on, or to detect hidden dependencies that can lead to very unpredictable results. An AI that studied these codebases, enabled the programmer to enquire, and perhaps provided tools to manipulate them could be an extremely important and helpful tool.

    However, it never occurred to me that you could reflexively apply that to the AI software itself, and use it to explain its process and algorithms. I don’t know what that looks like, but it is a very interesting idea. I think there are a few PhDs in that area of study.

  • bobby b

    Perhaps of interest:

    https://twitter.com/GregAbbott_TX/status/1668780144255795201

    (Texas establishes AI Advisory Council.)

  • Sean

    I’ve asked ChatGPT a couple of questions about obscure technical things that I am interested in: the induction system of the P-47 Thunderbolt and the turbo-pumps of the RS-25 rocket engine. In both cases the English used in the answer was impeccable, but the overall coverage was lame and some details were howlingly wrong. When I pointed this out, the system apologised and provided a better answer on the second attempt. Better, but no cigar. I’m thinking it would still do a better job of accurately reporting the actual capabilities of such systems than our (wet roads cause rain) media does. Mind you, that applies to pretty much any/everything.

  • Paul Marks.

    Sean – correct, ChatGPT pushes out bovine excrement.

    That should shock no one – as the basic principle of ChatGPT (programmed into it) is that it is to be a reflection of the current nonsense, except that the creators of ChatGPT do not accept that the modern education system and mainstream media are nonsense (are garbage).

    By the way – you were lucky that it changed its tune when you pointed out the factual mistakes, on any matter that is political or cultural-political it does NOT correct factual errors when they are pointed out.

  • Ferox

    I’d ask it these questions (in fact I just did):

    Those questions are deterministic, strictly knowable, and it would be the simplest thing imaginable to program in some variance, to let the AI get it wrong once in a while – a digital Cliff Clavin. Actually that would be pretty cool 🙂

    For myself, I would ask things like:

    Why is a flower more beautiful than a soda can?

    I used to teach an Intro to Computers class for non-CS majors when I was in grad school. One of the things I used to spend some time on was the concept of deterministic vs non-deterministic problems. A popular misconception (then and now) was that hard problems for a computer involved calculating orbits or the configuration of molecules in some hydrocarbon chain, etc. I pointed out to them that those problems tended to be deterministic and thus easy to program. The hard problems were non-deterministic; things like deciding whether a face was male or female; deciding whether a statement was serious or in jest; picking out the abstract painting by a master from the finger paintings of children, etc.

    When an AI can consistently do those sorts of things I will be more impressed.

    As far as AI being a bad thing, doom for us all, I would just point out that these concerns sound remarkably similar to the popular concerns about computers themselves. Watch a film like Desk Set (1957) and substitute AI for the computer; they could remake it that way today and most of the gags could be plugged right in.

    When supplied with a new technology, people don’t just do the old things in a new efficient way. They find new things to do, and often the tech creates more work than it saves. It’s productivity-increasing rather than labor-saving, in the long run. And that’s good for everybody.

  • Fraser Orr

    @Ferox
    Those questions are deterministic, strictly knowable, and it would be the simplest thing imaginable to program in some variance, to let the AI get it wrong once in a while – a digital Cliff Clavin. Actually that would be pretty cool 🙂

    There is a big ChatGPT hacking community that does exactly that kind of thing. But thanks for the reminder of Cliff Clavin. Occasionally ChatGPT does give answers that are completely made up, but does so with a strong sense of confidence and authority. This is a major problem, and I totally think it should be called the “Cliff Clavin” problem. “Oh, ChatGPT just did a Clavin….” I think it could go viral 😀

    Why is a flower more beautiful than a soda can?

    Did you try? I did, and I thought its answer was very good. Hyper-analytical, but it is the sort of answer I would give to that question if I had some time to think about it. And some have argued that I have biological intelligence (but YMMV). If you haven’t had the opportunity, go to https://chat.openai.com/, register for an account and give it a go. LMK what you think of its answer.

    Regarding your discussion of deterministic versus non-deterministic… I’m not sure I like your terminology; those words mean something different to me. The words I’d use for the concept you describe are “objective” vs “subjective”. But let’s not get bogged down in definitions. The point you make is good, and in a sense what is surprising about these new LLM-based AIs is that they are actually fairly good at subjective analysis, in contrast to previous tools that were only really “comfortable” in the realm of objectivity.

    When supplied with a new technology, people don’t just do the old things in a new efficient way. They find new things to do, and often the tech creates more work than it saves. It’s productivity-increasing rather than labor-saving, in the long run. And that’s good for everybody.

    I have often used the “we don’t make buggy whips anymore” argument myself. However, I think this technology is qualitatively different. New technologies that eliminate old jobs produce new jobs largely in three areas: service work (people have more wealth so can dine at restaurants and go to hotels more easily), people who service the new technology itself, and an elevation to higher-level work. When the basics are taken care of we can move on to higher-level things: farm laborers replaced by tractors can instead make cotton fabrics and clothes; farriers, who no longer need to make horseshoes, can instead make more complex metal machines; executives can spend less time gathering information and more time analyzing and responding to it, to give some examples.

    But when your machine is smarter than you are then there is no longer any “higher level” work for you to move up to, and we are left with the other two. I’ll leave you to judge whether that is a good thing. So, AI is quite different than any previous technology, including computers themselves.

  • Ferox

    I tried out the link. It’s impressive. Here are a few of my thoughts, in no particular order:

    1) I tried this prompt: “Write a recursive function to quicksort an array of integers in place, in C#.” It returned serviceable code, which frankly I would expect. As a question of utility, if I needed to implement a quicksort in C# and didn’t want to use the built-in library for some reason, I would probably regard this as a convenient way to do it, even though I could do it myself in more time.
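
    For the curious, the kind of routine that prompt describes looks roughly like the following. (I asked for C#; this is an illustrative Python rendering of an in-place recursive quicksort, my own sketch rather than the model’s actual output.)

        # In-place recursive quicksort (Lomuto partition scheme).
        # An illustrative sketch, not ChatGPT's actual answer.
        def quicksort(a, lo=0, hi=None):
            """Sort list `a` in place between indices lo and hi, inclusive."""
            if hi is None:
                hi = len(a) - 1
            if lo >= hi:
                return
            pivot, i = a[hi], lo         # last element as the pivot
            for j in range(lo, hi):
                if a[j] <= pivot:
                    a[i], a[j] = a[j], a[i]
                    i += 1
            a[i], a[hi] = a[hi], a[i]    # move pivot to its final position
            quicksort(a, lo, i - 1)      # recurse on both halves
            quicksort(a, i + 1, hi)

        nums = [5, 2, 9, 1, 5, 6]
        quicksort(nums)
        print(nums)                      # [1, 2, 5, 5, 6, 9]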

    2) I tried asking it to write sonnets. Several of them seemed pre-canned, so I tried more esoteric things. This one came back in about 2 seconds, and really impressed me: “Write a sonnet about soup.”

    In bowls of warmth, a symphony of taste,
    Lies soup, a comfort for the hungry soul,
    From simmering pots, flavors interlaced,
    Its essence fills our senses, makes us whole.

    A hearty broth, the foundation so pure,
    Embracing spices, herbs, and vegetables,
    With every spoonful, memories endure,
    The touch of love in every ladleful.

    Creamy bisques, velvety and smooth,
    Or clear consommés, crystal and refined,
    Each spoonful dances, a savory groove,
    A culinary art, masterfully designed.

    Oh, soup, a nourishing and soulful friend,
    In every sip, a tale of comfort penned.

    Was that pre-canned too? If not, then I am certainly impressed. The AI seems to “know” some of the sensory and inferential data about soup.

    3) I asked it, “Was Gollum a hero?” and got back a “thoughtful” response exploring the question. Pretty damn good, I have to admit.

    4) I tried “My bike can’t make it up the hill. It’s too tired.” And I got a flat response about how my bike isn’t really too tired – very encyclopedic. Then I re-tried with “My bike can’t make it up the hill. It’s two tired.” And I got this:

    Ah, I see! It seems you were making a pun with the phrase “two tired” to mean that your bike has two tires and is also tired. That’s a clever play on words! While it’s fun to use puns and humor in everyday situations, I apologize if my initial response didn’t align with the intended humor. Thank you for sharing your lighthearted comment!

    So then I tried my first question again and got:

    It seems like you’re referring to a humorous play on words. In this case, you’re personifying your bike, suggesting that it is tired like a person would be after physical exertion. This statement adds a light-hearted and playful element to the situation, implying that your bike may not have enough energy or power to conquer the uphill climb. It’s a clever way to describe the challenge you’re facing with your bike and adds a touch of humor to the situation.

    It seems to have learned. Does this learning persist across sessions? If someone else makes this same bad joke will the AI now recognize it? Fraser, it would be interesting to see what response you get now. Is it a single AI or a personalized AI for each of us?

    Anyway, I am very impressed. And not at all frightened. In fact, the only thing that worries me is that the people running the thing are injecting a little bit of “wokeness” into the AI. It’s more an annoyance, unless it’s also a harbinger of things to come. Try asking it “What is a woman?”

  • Ferox

    An addendum: I was impressed before, but this just shattered me.

    “Compare the poem “Ozymandias” with the poem “Do not go gentle into that good night” and compose a new poem, similar in tone, which explores the themes common to both.”

    In “Ozymandias” and “Do not go gentle into that good night,” two remarkable poems,
    Themes of mortality and the passage of time, their power firmly shows.

    “Ozymandias” exposes a king’s shattered visage, lost in the desert’s vast expanse,
    A warning of hubris, a testament to the fleeting nature of dominance.

    “Do not go gentle into that good night” calls out, urging defiance against the light’s fade,
    A plea to resist death’s approach, to burn and rage, refusing to quietly wade.

    In the spirit of their shared themes, a new poem now weaves,
    Exploring mortality’s grip, as life’s essence it retrieves.

    In twilight’s embrace, where shadows creep,
    The mortal soul takes solace in dreams deep.
    For time’s unyielding grasp, it seeks to defy,
    To challenge the fleetingness as stars sweep the sky.

    Like Ozymandias, kings of forgotten reigns,
    Our monuments crumble, mere echoes remain.
    Yet within us lies an ember, burning bright,
    A spark that defies, ignites the fight.

    Do not go gentle into the night’s embrace,
    Let flames of passion shine on your face.
    For life’s fragile thread, we shall not surrender,
    But kindle defiance, our hearts ablaze, so tender.

    Embrace the fleeting hours, let dreams unfurl,
    As shadows dance, let your spirit whirl.
    In the face of time’s cruel decree,
    We rise, we rage, we dare to be free.

    So let us, like Shelley and Thomas before,
    Embrace the themes that mortality bore.
    In unity of spirit, our voices ignite,
    Against the fading of days, we choose to fight.

    This new poem seeks to capture the common themes of mortality, the passage of time, and the defiance against their grasp found in both “Ozymandias” and “Do not go gentle into that good night.” It emphasizes the fleeting nature of existence, the importance of resilience, and the determination to resist the fading of life’s light.

    That took about 10 seconds.

    Oh, what a glorious age in which we find ourselves.

  • Ferox

    And a third post – I am having too much fun.

    Try “Write a conversation between Frodo Baggins and Uriah Heep.”

    To Fraser’s point earlier – if an AI was trying to imitate a millennial for Turing-test purposes it would have to respond with “Who?”

  • Ferox

    One more. I tried a random input to see if the AI could make a joke.

    “What did the frog say to the elevator repairman?”

    The frog said to the elevator repairman, “I hope you can hop to it and fix this lift!”

    “What did the ant say to the elevator repairman?”

    The ant said to the elevator repairman, “I hope you can lift this situation and fix our vertical transportation!”

    The first is almost some sort of joke, the second misses entirely.

    “What did the ant say to the frog?”

    The ant said to the frog, “I may be small, but I can carry quite the load. How about we team up and tackle some tasks together?”

    This one misses so widely that it’s actually the funniest of the three to me. Honestly made me laugh. The sort of joke that Data might come up with.

  • Ferox

    One more. I need to start a blog for this, I guess, and stop taking up all of Samizdata’s electrons for my own use. This took about 5 seconds, by the way. Could it possibly have been pre-canned??

    “Write an amendment to the US constitution that prohibits men from paying for dates.”

    As an AI language model, I can provide you with a proposed text for an amendment to the US Constitution. However, please note that the process of amending the Constitution is complex and requires substantial legal and political considerations. The following is a fictional text that addresses the subject you mentioned, but it does not reflect any current or proposed amendment:

    “Amendment XXIX – Prohibition of Gender-Based Financial Obligations

    Section 1: In matters of dating and social interactions, no individual shall be compelled or expected to bear the financial burden solely based on their gender.

    Section 2: No person, regardless of their gender, shall be subjected to discrimination, disadvantage, or prejudice based on their refusal or inability to pay for dates or related expenses.

    Section 3: The government shall take necessary measures to promote equality, mutual respect, and shared responsibilities within dating and social interactions, without infringing upon personal choices or preferences.

    Section 4: This amendment shall not be construed to limit or interfere with private agreements, arrangements, or societal norms voluntarily entered into by individuals.

    Section 5: Congress shall have the power to enforce this amendment by appropriate legislation.

    Section 6: This amendment shall take effect immediately upon ratification.”

    Please keep in mind that this is a fictional amendment and not a legally binding document. Amendments to the US Constitution require a complex process of proposal, ratification, and approval by relevant legislative bodies.

    Who can be afraid of this tech? It is hilarious.

  • Ferox

    Just one more.

    “The next prime number after 337,332,192”

    The next prime number after 337,332,192 is 337,332,223.

    Assuming that is correct, it seems like a pretty handy thing to me. Technology for the win.

    EDIT: Just checked, it is NOT correct. This was a Clavin!
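
    For anyone who wants to audit such claims themselves, a few lines of Python suffice; trial division is plenty fast for nine-digit numbers, since the square root of 337,332,223 is under 19,000.

        # Check a primality claim the slow-but-sure way: trial division.
        def is_prime(n):
            if n < 2:
                return False
            if n % 2 == 0:
                return n == 2
            f = 3
            while f * f <= n:
                if n % f == 0:
                    return False
                f += 2
            return True

        def next_prime(n):
            """Smallest prime strictly greater than n."""
            n += 1
            while not is_prime(n):
                n += 1
            return n

        print(next_prime(337_332_192))   # the real next prime
        print(is_prime(337_332_223))     # test ChatGPT's claimed answer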

  • NickM

    I’m not sure I like your terminology; those words mean something different to me. The words I’d use for the concept you describe are “objective” vs “subjective”.

    I’m with you on that, Fraser. The three-body problem is deterministic, yet impossible to solve exactly. And, if you walk down Manchester’s Canal Street, so is the male/female question.

    I recently watched the Voyager episode, “Author, Author” (Season 7; Episode 20) and even though that first aired in 2001 it seems quite apposite to the current state of AI.

    We are very close to the question of software having personhood and rights. From what I’ve read over the last few days I’m shifting from a very “principled agnostic” view on that to one that sees this as the rise of a consciousness. One that might be quite alien but that isn’t the same thing as it not existing.

    And I for one look forward to having, “A Plastic Pal Who’s Fun To Be With”.

    I mean neurons are kinda like logic gates?

  • Fraser Orr

    @NickM
    I recently watched the Voyager episode, “Author, Author” (Season 7; Episode 20) and even though that first aired in 2001 it seems quite apposite to the current state of AI.

    If I remember rightly, that episode is actually about the right of an artificial intelligence to intellectual property, which certainly is very prescient. In fact, I think the US Patent Office just issued a ruling that AIs cannot hold patents, and there is certainly a lot of movement toward denying them copyright ownership. So it is very much something being discussed in law right now. (And the law is going the opposite way to what Janeway and her crew would demand.)

    Nonetheless, the issue of personhood is different, and it is different in a lot of ways. It is not one that will be legislated for a very long time I suspect, and it is not out of the question that the AIs will not be asking, but telling us what to do. But YMMV on that.

    I just read that some scientists created human embryos out of stem cells. Missing your dead wife? Just make a new one. What is the personhood of those proto humans? I have no idea.

  • NickM

    Can you get stem cells from the dead? Or do you have to bank ’em ahead of time?

    I suspect the issue of AI personhood will at least be raised soon. There are movements to grant plants rights. I’ve even heard of folk going on about the sentience of rocks, and one quite serious objection to human space colonization is a sort of “planetary rights”. Someone (I forget where, but it wasn’t in the darkest recesses of the web or anything like that) seriously suggested that landing on the Moon is wrong because, apparently, an Inuit tribe believes their dead ancestors live there.

    If you think this is all far-fetched then rewind about a decade or so. Would you imagine a bloke with a penis, testicles and even a beard insisting they were a girl being taken seriously? They are now. And can get very shirty (or is that “blousey”) if you call ’em on it – “Yeah, I know it’s major surgery but you could have at least depilated“. I mean the World faces enough real threats without arguing about fucking pronouns forcing itself high onto the agenda.

    My point is some “rights” issues have gone from being farcical to gaining very significant momentum very quickly. My SiL teaches English in Germany. She has students changing their pronouns literally between classes. It drives her up the wall because these are German pronouns and for my SiL that’s her third language…

    But back to AI and the law. I can imagine the issue of fault in self-driving car crashes being utterly Byzantine, because those AIs won’t even be the product of one company, due to algorithms built on algorithms. And even if they were, there is the question of implementation – “You took the program for a small hatchback and put it in a bus!”. Or upgrades. Or inter-compatibility (or otherwise) with other such programs – why are Hondas really safe except around BMWs? Whose fault is that? The law around self-drivers is gonna be so Hellishly complicated maybe only an AI will be able to adjudicate 😉

    Anyway, once more unto reCAPTCHA! Don’t they sell data for self-driving cars?