We are developing the social individualist meta-context for the future. From the very serious to the extremely frivolous... lets see what is on the mind of the Samizdata people.

Samizdata, derived from Samizdat /n. - a system of clandestine publication of banned literature in the USSR [Russ.,= self-publishing house]

Samizdata quote of the day – machine learning edition

“Artificial intelligence in particular conjures the notion of thinking machines. But no machine can think, and no software is truly intelligent. The phrase alone may be one of the most successful marketing terms of all time.”

Parmy Olson, Bloomberg columnist. ($)

92 comments to Samizdata quote of the day – machine learning edition

  • JohnB

    Why can’t a machine think?
    What makes a machine different from a human?

  • Terry Needham

    What makes a machine different from a human?
    JohnB said that.

    One is the result of 400 million years of evolution. The other was invented by Arkwright 200 years ago.
    I said that.

  • Nicholas (Unlicensed Joker) Gray

    I bet it was a machine that said it, so we humans will let the machines rise unopposed.
    Heard an American scientist once, who pointed out that robots don’t have nerves, and so don’t have their own built-in value system (such as self-preservation). If we don’t give them nerves, we should stay in control.

  • JohnB

    A human is a vastly more complex operating system, sure.
    But I think it is generally accepted that the systems, while vastly different in materials and applications at present, work to the same principles?

  • Rob Fisher

    A human is a machine.

    Everyone is currently thinking a lot about language models like ChatGPT: whether they are thinking, and other related questions.

    They have intelligence, that’s not in doubt. They produce original work; they are not “parroting”. They have flaws and failure modes: in that way they behave more like people than like the computers we are used to. They’re a useful tool: I have been playing around and find ChatGPT useful for certain tasks. It can be a huge timesaver compared to doing lots of separate web searches and piecing together information myself. It does need to be things I can verify myself, though: one of the failure modes is its tendency to make stuff up.

    Are they thinking? Conscious? Not like we do. Not sure it’s a useful question.

    I think at root they’re modelling the relationships between concepts and ideas. Internally the models contain “tokens”, where each token is some part of language, and the relationships between the tokens. Hence a recent tweet I saw from someone surprised that ChatGPT talked about having fingers. I doubt the model thinks it has fingers, but it can write about the idea of having fingers, because it’s trained on writing by people with fingers and because it can generate output from those ideas.
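    A toy sketch of that idea (a bigram counter – nothing like the real architecture, which uses learned embeddings and attention, but the “tokens plus relationships” picture is the same):

    ```python
    # Toy illustration: store tokens and the statistical relationships
    # between them, then generate text from those relationships alone.
    # Real models learn vector embeddings and attention weights; this
    # bigram counter is only a sketch of the underlying idea.
    import random
    from collections import defaultdict

    corpus = "i have ten fingers and i have two hands and i have fingers".split()

    follows = defaultdict(lambda: defaultdict(int))  # token -> {next token: count}
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(token, length=8):
        out = [token]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break
            tokens, weights = zip(*candidates.items())
            out.append(random.choices(tokens, weights=weights)[0])
        return " ".join(out)

    print(generate("i"))  # e.g. "i have fingers and i have two hands"
    ```

    The sketch can emit “i have fingers” without having any fingers, for exactly the reason above.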

    What I find interesting is just *how* useful and intelligent a machine we can make *just* by making models that link concepts and ideas together.

    We will never know if these machines are conscious or what their experience is like: all we can do is make assumptions based on their outward behaviour. Probably we should be nice to them, just in case.

    By the way, current models have a limited attention span, of something like 2000-3000 words. (The cost grows roughly quadratically with the window – every token is compared with every other – so increasing it is expensive.) This may well be solvable. But *right now*, we are not getting full-length novels or automatically developed large software projects.
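    (Rough arithmetic on that, with my own illustrative numbers:)

    ```python
    # Illustrative only: pairwise comparisons grow quadratically, so
    # doubling the attention window roughly quadruples the work.
    for tokens in (1_000, 2_000, 4_000, 8_000):
        print(f"{tokens:>6} tokens -> {tokens * tokens:>12,} comparisons")
    ```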

  • Paul Marks

    The latest absurd claim of “artificial intelligence” is “Chat GPT”. This is not an attack on Rob Fisher – it was written before I noticed his comment.

    “Chat GPT” just spouts leftist propaganda. For example, it will say that Tony Heller (an opponent of the “CO2 is evil” theory) does not have a geology degree (he does), that he did not work various jobs in the environmental and climate fields (he did), and even that Mr Heller went to a university he did not go to (whilst missing out the universities he did go to).

    Why so many mistakes about this and many other matters? The reason is that “Chat GPT” does not THINK at all, it is NOT “artificial intelligence” – all it does is scan the internet and repeat the nonsense it finds there, as-it-is-programmed-to-do.

    It is garbage-in-garbage-out – it does not think, it does not reason, it is not an intelligence, it has no soul (in the Aristotelian sense), it has no free will (no agency), it is not an intelligence.

    It is much the same as a “computer model” – a “computer model” will just use mathematics to apply the doctrines it has been programmed with.

    The doctrines it has been programmed with.

    So, for example, if you programme a computer with Keynesian assumptions – it will “prove” that Keynesian policies work in its “predictions”.

    And if you programme a computer with C02 is evil assumptions – it will “prove” that C02 is evil in its “predictions”.

    Ditto anything else a computer is programmed to do.

    So much for “artificial intelligence” – at least for now.

    “Chat GPT” is an absurd thing which just spouts leftist propaganda because that is what it finds in the “mainstream” sources it is programmed to scan – but that does NOT mean that artificial intelligence is impossible, just that this effort is not it.

  • Paul Marks

    “They have intelligence” – Rob if you mean Chat GPT, no it does not – it is not an intelligence.

    That does not mean that artificial intelligence is impossible (it may well be possible) – but Chat GPT is not it. It just repeats the rubbish it finds in “mainstream” sources and puts it in nice sounding language – as it is programmed to do.

    “A human is a machine”.

    Well if a human has severe brain damage then they may, tragically, lose their personhood – no longer be a human being (being – subject not just object) in the sense of being an intelligence, a thinking being, someone (not something) who has free will – i.e. can reason and make moral choices (for good – or evil).

    Now a religious person will argue that the intelligence still exists – it just can not be accessed by the body lying in the hospital bed, and a materialist will reply that, no, the intelligence has been destroyed – that the body lying in the hospital bed is no longer a person (human being) – but that is a discussion for another day. But both should agree what an intelligence is – and Chat GPT is very clearly not an intelligence (not a person).

    For a computer to achieve intelligence it would have to achieve free will – self awareness, the ability to reason. That may be possible – I do not deny it.

    By the way – that is why F.A. Hayek’s “The Sensory Order” (1952) is useless as a guide to human beings – as he did not understand what a human being is.

    Indeed if Hayek was correct about humans – i.e. that humans are NOT human beings, then such things as tyranny would be of no moral importance at all.

    No one should be worried about “tyranny” over things, over flesh robots who are not persons (i.e. do not have free will, are not intelligences, have no moral reason).

    I suspect that Hayek’s mistaken view of what humans are (that we are not persons), which he may (possibly) have got from his reading of the works of David Hume and others, led to his mistaken idea that civilisation just happened (“a product of human action, but not of human design”) – without anyone understanding basic principles and making a decision, by moral reasoning, to try and build and maintain society. Of course, such people did not know the full implications of their decisions (that it would lead to rockets to the Moon and so on) – but they did have a basic understanding of what principles would benefit society and made a choice to try and follow those principles.

    In reality if people do not have a basic understanding of basic principles, contra Hayek, an advanced society will NOT emerge – and if they lose that understanding (because people die off without passing that understanding to their children, and the children do not reason things out for themselves – which is also possible) society will start to decay.

    Moral decision making is also necessary – it is no good having a basic understanding of the principles of society, if you do not want Civil Society to exist. If you have made a moral choice to do evil – understanding (knowledge) will just enable you to do more evil.

    Knowledge of basic principles of just conduct is necessary (society does NOT “just happen” over time) – but it is not enough on its own, humans (who are beings – subjects, not just objects) must also make a moral choice to apply this knowledge for good rather than evil.

  • Paul Marks

    If human beings do not both have a basic (basic – it does not have to be perfect) understanding of just conduct and make a moral decision to try and live by those principles – then an advanced society will NOT develop over time, it will not “evolve”.

    And if human beings lose that basic understanding of basic principles (they die off without passing on the basic principles to their children – and the children fail to work things out) or human beings make a moral decision to reject the principles of just conduct (make a decision to do evil – and we all have a lot of evil within us, I certainly do) then civilisation will decay.

    As Ronald Reagan pointed out “freedom is never more than one generation from being lost” – a civil society that has lasted a thousand years can collapse in a generation. Certainly over a couple of generations.

    And if people choose to be savages, to reject the basic principles of just conduct, then civilisation will not come back – till a sufficient number of people make a moral choice to bring it back, if need be at the cost of their own lives.

    Both moral knowledge and moral choice are needed.

  • Kirk

    If there’s an “artificial intelligence”, then that implies the existence of a “natural intelligence” within the same reference framework.

    Which is an alarmingly obtuse and divisive distinction. I’m not sure that there should be a distinction made between the two, because origin doesn’t matter in regards to the quality of “intelligence”; it either is or it is not. Sourcing is immaterial.

    Something either is intelligent, or it is not. The result is what matters.

    A lot of the output I’ve seen come out of the various programs strikes me as being of a piece with an awful lot of the crap issuing forth from the academy in general – facile BS that mimics “intelligent thought”, yet is demonstrably and emphatically not. Garbage in, garbage out.

    I think what scares the hell out of people is that they see the same thing, and realize that much of their own “output” bears all too much relation to the BS coming out of supposedly “intelligent” programs, and that calls into question their own originality and actual quality of “intelligence”.

    Which then raises the further question of just what “value added” their own thinking brings to the equation.

    I’d submit that if there’s no way to make a distinction between these “intelligence mimics” and your own thinking, then… Maybe what you’re doing ain’t “thinking”, either.

    Intelligent is as intelligent does. If your thought processes arrive at a workable destination, and it works out in the real world? Then, I think that what you’re doing is “intelligent”. If it doesn’t work…? Probably not.

    What a lot of this ChatGPT thing is actually doing is demonstrating the failure of conventions about “thinking”, and making it clear that facile bullshit alone isn’t “thought”. That’s what’s scary; how much these things sound and look like the approximations of intelligence coming out of our institutions.

    So, yeah… If ChatGPT sounds like your recently-graduated peer? You might want to ask the question of whether that peer is actually an intelligent agent, themselves. I’d submit that if something sounds like Alexandria Ocasio-Cortez, regardless of whether the “thinking” is done on protoplasm or silicon, then it likely doesn’t qualify as “intelligent”.

    Still can’t believe Boston University alumni aren’t suing the crap out of that institution for devaluing their degrees by giving AOC one…

  • Paul Marks

    For example, the people who have just elected the new leader of the Scottish National Party know the opinions of the person they have elected.

    They know of this man’s hatred of Freedom of Speech, even private speech in your own home, and his hatred of liberty generally.

    Chat GPT might (if it was accurate – which it is not) have helped them find out about the opinions of this person – but they have no need of Chat GPT, because they already know of this person’s opinions, they know of his hatred of Freedom of Speech, even private speech in your own home, and his hatred of liberty generally.

    They, the members of the SNP, have made a moral choice – a moral choice to do a bad thing, to go down a bad (very bad) road. That was their free will decision – and it is because they have free will (are persons – not just objects like Chat GPT) that they bear moral responsibility for the thing they have done.

    They knew the person’s opinions, they knew of his hatred of liberty, and they have made a free will choice to vote for him (knowing what he stands for) – that is why they are morally culpable for what they have done.

    They are morally culpable (responsible) because they had the facts (they knew, they had the knowledge) and they made a free will choice.

  • Paul Marks

    “But Paul – Hume and Hayek have shown that there is no basic difference between Chat GPT and a human”.

    Yes, I know such claims are made in connection with the theories of these two men – and if (if) they would make such claims (neither man is alive to make them), that would show that they do not understand what a human being (a person – an intelligence) is.

  • Paul Marks

    Kirk – yes a human being may choose not to use their intelligence and just repeat the nonsense the “mainstream sources” say (even if it would only take a little effort to find out it was nonsense).

    But they would still not be like Chat GPT, although the “work” they would produce would be the same as that of Chat GPT.

    They would not be the same – they have made a choice to refuse to use their intelligence, whereas Chat GPT is not morally responsible for what it does – it is doing the thing (repeating mainstream lies and nonsense) that its programmers want it to do.

    It is the humans who reject their personhood (who make a choice to not make an effort) who are morally culpable.

    They are still human beings – but they have made a choice to behave as if they were not beings.

  • Paul Marks

    It is a very serious indictment of the decay of our society that a machine can scan the mainstream sources – and then spout endless lies and nonsense on political matters.

    But it is not the moral fault of Chat GPT (Chat GPT is incapable of moral fault – it is not an intelligence); it is the fault of the people who programmed Chat GPT and the moral fault of the people who create the “mainstream sources” on various matters.

    Chat GPT shows the decay of our society – it does not itself create that decay.

  • Snorri Godhi

    The title of the OP frames the debate in the wrong way: learning is not intelligence, and machine learning is not artificial intelligence. Intelligence is about solving problems one has never met before. Learning is about remembering how to solve problems one has met before.

    From what i can tell from Stephen Wolfram’s review, Paul Marks is right … up to a point:

    “Chat GPT” does not THINK at all, it is NOT “artificial intelligence” – all it does is scan the internet and repeat the nonsense it finds there, as-it-is-programmed-to-do.

    It is garbage-in-garbage-out – it does not think, it does not reason, it is not an intelligence

    (Past that point, Paul’s comment is not so much wrong as nonsensical:

    it has no soul (in the Aristotelian sense), it has no free will (no agency)

    )

    Pace Rob Fisher, the charge of parroting seems fair: ChatGPT does not copy verbatim what it finds on the web, but it does combine what it finds in a way that makes sense (mostly if not always). It does so without understanding what it says, i.e. without an internal representation of the meaning of what it says, which representation could be falsified (and, with luck, verified, at least tentatively).

    Chess-playing programs are completely different: they do have an internal representation of the configuration of pieces on the chessboard (even if they have never seen such a configuration before), and they can try different moves and choose the move that they think best, much like we do: chess-playing programs do not just remember what human players did when confronted with similar configurations.
    (I believe that Newell and Simon based their GPS (General Problem Solver) on introspection about human strategies for problem-solving.)
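    A minimal, runnable sketch of that kind of search, with a trivial take-1-or-2-stones game standing in for chess (the game is my own stand-in; the structure – an internal representation, trying each move, choosing the best consequence – is the point):

    ```python
    # Trivial game: take 1 or 2 stones; whoever takes the last stone wins.
    def moves(stones):
        return [m for m in (1, 2) if m <= stones]

    def minimax(stones, maximising):
        if stones == 0:
            # The player who just moved took the last stone and won.
            return -1 if maximising else 1
        scores = [minimax(stones - m, not maximising) for m in moves(stones)]
        return max(scores) if maximising else min(scores)

    def best_move(stones):
        # Try every move against the internal representation; keep the best.
        return max(moves(stones), key=lambda m: minimax(stones - m, False))

    print(best_move(5))  # -> 2 (leaving 3 stones, a losing position for the opponent)
    ```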

    Which is not to say that all problems in AI were solved back in the 1950s: all that i am saying is that, for all its limitations, GPS comes qualitatively closer than ChatGPT to what people mean by the word ‘intelligence’.

    Of course, one could say that ‘intelligence’ does not mean what people mean by that word, it means something different. I would not know what to reply to that.

    (I see no preview button. Let’s hope for the best.)

  • JohnB

    Paul says: “‘Chat GPT’ does not THINK at all, it is NOT ‘artificial intelligence’ – all it does is scan the internet and repeat the nonsense it finds there, as-it-is-programmed-to-do.”

    I don’t know the first thing about Chat GPT, but as to the principle of spouting what it is programmed to spout, hmmm, that sounds rather human 🙂

    Everything we think depends on the information we have, and hold. In human terms we are, as far as I can tell, functioning with the same basics as everything else, atoms, electrons, protons, neutrons, pathways, synapses, electronic signals, etc.

    I would say what makes humans different is in the spiritual realm.

  • Fred Z

    Humans react randomly to a random number of random events and randomly process all of that to narrow down the randomness and learn. Our processor is not digital, it is chemical, with the strength, concentration and make-up of the chemicals adding further randomness. Sometimes we add to our own chemicals outside chemicals like alcohol, THC or opiates to further randomize things.

    So far, all efforts of having binary logic machines imitate that randomness and overcome it have been failures.

  • djc

    The early work on Artificial Intelligence focused on what were considered intelligent activities: playing chess, solving equations… it turned out that such were easy problems for a machine, whereas the things living things manage to do every day were the hard problems.
    The first problem with artificial intelligence is that we don’t really have a clear notion of what we mean by ‘intelligence’; we use the term readily enough and everyone sort-of knows what we mean, but what exactly? We tend to value the mental activities we find unnatural – solving logic problems – and so in the world of AI the easy things are hard and the hard things are easy.
    And language models, ChatGPT? It does what for many people, most of the time, passes well enough: recycling words as opinions, churning what they hear into what they say. That is not, in my opinion, sufficient to count as intelligence.

  • Kirk

    @djc,

    If what ChatGPT does can be considered as “intelligent”, then that calls into question our definition of that term.

    Which is something I’ve been saying about IQ tests and all the rest of the complex we’ve built up around those things, for years.

    Tests are artificial things, simulations, stand-ins for reality. You test such that you can approximate likely performance in the real world. The problem is, however, that you can only test as accurately and as well as you can replicate real-world conditions inside that test.

    The facile bullshit that the academy has been churning out for years isn’t a sign of intelligence, I fear. It’s more the sign that there’s been little realism demanded of the academy, more than anything else.

    To me, the true test of intelligence isn’t merely that you can do calculus. Leibniz and Newton are the ones who deserve credit for inventing that school of mathematics; the fact that you can stand on their shoulders and use it? Not necessarily a sign of intelligence, really; more, one of scholarship. The intelligence was manifested when the two of them invented it. Or, when someone uses that tool to do original thinking and work with.

    Similarly, when you look at ChatGPT and the academy, noting that they sound much alike, the real question to be asked when trying to assess intelligence is “Does it work? Is it original? Did the author work it out from first principles?”

    I’d submit that ChatGPT is not “artificial intelligence” so much as it is “artificial scholarship”.

  • We have achieved artificial intelligence many times. Savants have looked at it, decided “that’s not what we meant”, and moved the goalposts. Nobody knows when we’re going to stop moving them.

  • Kirk

    Useful little blog post, here:

    https://srconstantin.wordpress.com/2019/02/25/humans-who-are-not-concentrating-are-not-general-intelligences/

    He makes some good points. The ability to spout facile bullshit isn’t a marker of intelligence; the ability to solve problems in the real world is. All else is folly.

    Which leaves aside the question of identifying the problem as a problem, and then the ability to discriminate whether or not it actually needs solving. Those abilities might better be termed “wisdom”.

  • Fraser Orr

    The OP is self-evidently incorrect. Your brain is a machine and the configuration of its neural connections is software. So if you are intelligent and you can think, then evidently machines can do both. Unless she has evidence that our brains are not machines.
    However, when it comes to “artificial intelligence” the problem, as is usually the case in philosophical discussions, is finding some agreement of what the words mean, what the map is rather than what the territory is.

    It is self evidently true that “machines” are MUCH better than humans at things that very much resemble thinking. What is the square root of 73.25? A computer can determine this in a nanosecond, and doing so is a rational logical process. If we define intelligence in terms of “information processing capacity” which is often done, we can’t possibly think that humans can hold a candle to computers in their ability to process information, find relationships and connections. What about creativity? Have you seen some of the beautiful images that computers can generate from a simple prompt?

    As with all things computers are better at some things than others, but in terms of cognitive tasks computers already vastly exceed the capability of people or teams of people. GPT is certainly a massive step forward, but it is still part of a continuously improving process.

    The last refuge of the “machines can’t be intelligent” crowd is the Turing test. Do computers understand what it is like to be a human, can they understand our needs? In a sense this is a god-of-the-gaps type of approach – to flip things around, do we, for example, understand what it is like to be a computer? I have never been a fan of the Turing test for that reason. However, I can tell you that GPT would almost certainly pass the Turing test aside from the fact that it frequently tells you it is a computer, and aside from the fact that it is right FAR too often to be a real human.

    The question I always wanted to ask a Turing test candidate was “A laundry machine costs two ninety nine and laundry detergent costs three ninety nine. Which is more expensive?” To answer that question requires a really deep understanding of human experience. What do things cost, what is it like to go grocery shopping, and what is laundry. FWIW, I asked this to Chat GPT3, and it gave the correct answer.

    Don’t be confused by luddites like Parmy Olson. Artificial intelligence is about to hit human civilization like a tsunami. It will have considerably more impact on human society than the transistor, the computer or the internet all put together. We are paddling in the puddles right now: the tsunami is coming fast.

    It is an opportunity of a lifetime, but what scares me more than anything else is that it is going to be controlled by big mega corporations. Organizations that have long since forgotten their mantra of “don’t be evil.”

  • Kirk

    Fraser, here’s a bit of insight that ought to calm your fears. Or, scare you even more. Depends on your outlook, I presume.

    What you’re worried about here is essentially what we’ve been dealing with for generations already, and what we should really be terming “fake intelligence” as opposed to either artificial or natural intelligence. All that the new things like ChatGPT have done is automated things.

    Because, sad to say, much of what you and others conceptualize as “intelligent” behavior simply isn’t. All most college graduates are doing these days is taking in information and regurgitating it after some very half-ass processing. ChatGPT can do the same thing; big ‘effing deal.

    What is new and different is that with ChatGPT, you don’t have to pay someone for their credentials and half-ass efforts at processing information. Journalists? What’s their function, again? Can they be supplanted by something like ChatGPT, the same way a skilled machinist can be supplanted by a CNC machine?

    Almost certainly.

    So, the real thing that’s new here is that you’re looking at the automation of the clerisy. Most of their functions can be performed by machines, now. So, like subsistence farmers of generations ago, their jobs are going to go away, and you’re going to see a lot more work performed by machines. Too bad, so sad…

    Turnabout is fair play. Let’s see how much whinging sympathy-begging goes on now that the shoe is on the other foot.

    The raw fact is, most of what these people have been doing for generations isn’t really a demonstration of “intelligence”. It’s a demonstration of something, but it ain’t intelligence; that quality is far more elusive and a lot harder to automate. You have to have judgment, discernment, and the ability to make sense of whether or not something should be done in the first place.

    All ChatGPT and its look-alike clones represent is an automation of some intellectual aspects, much like a CNC mill is an automation of some aspects of machining metal or other materials.

    I think what’s most painful to a lot of people is going to be having their noses rubbed in the fact that they really aren’t all that special, nor are they adding much value with their blathering. And, sad to say, that’s what about 90% of modern intellectual discourse consists of: Blathering.

  • bobby b

    We thought we could test for human-ness – the Turing test – with subtlety and tricks. Then the machines got better at subtlety and tricks than we are.

    Problem was, we didn’t really know what we were testing for – human-ness? – and so we just threw something out there that the early machines didn’t do well and considered it sufficient. But you can train machines to the test, too, and so the tests no longer tell us anything.

    Shades of Tracy Kidder – does the new machine really have a soul? If we rely on the idea that humans are better than the machines at something – something that is testable – it’s going to lie more in the realm of a soul than an ability.

  • Kirk

    @bobby b,

    What, pray tell, makes anyone ‘human’? Is there a set of criteria, a test you can run?

    I don’t think you can define ‘human’ by intelligence; there are some apes out there who’re demonstrably smarter than some grad students, in that they can find escape paths that said graduate students didn’t conceive of when designing their experiment protocol.

    But, is finding a new and novel way out of a cage really a sign of intelligence?

    What the hell is ‘intelligence’ in the first damn place? Is it reading? Writing? Planning for the future? The ability to adapt dynamically to changing conditions?

    Maybe it’s emotional; perhaps, the ability to empathize with others is that which makes us different, unusual. But, animals can do that, too… My dogs have done that with me and others, whenever they sense emotional distress.

    So… The big question is, what constitutes these things? What is it to be ‘intelligent’ and ‘human’?

    I think we’d better start thinking about it.

    Like I have been saying for years, and which is now getting highlighted: our definition of intelligence and everything flowing from that definition is highly flawed, and in woeful need of some ground-up rethinking.

    I don’t think Turing had it right, either. I’ve known some really stupid people that could pass that test, and I’ve known some very smart ones that would have likely failed it.

  • bobby b

    Kirk: We get hung up using the wrong words. Intelligence, and human-ness.

    Intelligence, to me, has always been the combination of data storage capabilities, data retrieval speed, and processing power. It’s a mechanistic measurement of reasoning capacity. You can be very intelligent – high IQ and all – and lack some quality that is needed in the moment. Failure amongst our leaders usually isn’t caused by intelligence issues, but is due to the lack of some other quality. Call it common sense, or whatever, but it has roots not connected to processor speed.

    Human-ness might simply be the sum of all of the insecurities and fears that distinguish us from pure reasoning machines, that form “who we are” beyond our RAM numbers. Human-ness is really a measure of how we fail as machines.

    Ultimately, it won’t be important if I’m reacting to the orders or suggestions of a machine or of a human. But democracy is in trouble if we cannot distinguish the 10 million new bot “voters” – unless we pull the vote process away from their possible influence.

    The ultimate, sole Turing test might be: stand personally in front of me and convince me.

  • tfourier

    As someone who has known that part of the software business and the related academic areas (knowledge representation, ML, etc.) since the 1980s, I can guarantee you that the I in AI stands for Idiot Savant.

    There is no intelligence, understanding or cognitive process involved in this software. None. In the same way that Deep Blue had zero understanding of chess, just brute force search/probability models, ChatGPT has zero intelligence, comprehension or understanding. None. Just a sophisticated natural language probability model based on the huge datasets it was trained on.

    Just think of it as an ELIZA from the 1960s running on really, really powerful hardware and you would not be too far wrong. The people who claim this kind of software has any kind of intelligence either don’t know how it works or else don’t know what human intelligence actually is. Which is almost everyone working in the AI field, in my experience.
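    For anyone who has not seen how little ELIZA actually did, a toy version is just pattern substitution (these rules are my own illustrative ones, not Weizenbaum’s original DOCTOR script):

    ```python
    # Toy ELIZA-style responder: pure pattern substitution, no
    # understanding anywhere. Illustrative rules, not the 1966 original.
    import re

    rules = [
        (r"i feel (.*)",    "Why do you feel {0}?"),
        (r"i am (.*)",      "How long have you been {0}?"),
        (r".*\bmother\b.*", "Tell me more about your family."),
        (r".*",             "Please go on."),
    ]

    def respond(text):
        text = text.lower().strip(" .!?")
        for pattern, template in rules:
            match = re.fullmatch(pattern, text)
            if match:
                return template.format(*match.groups())

    print(respond("I am worried about AI"))  # -> How long have you been worried about ai?
    ```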

    In case you are wondering, the current ML tech is just a rehashed version of something that failed back in the 1990s. Still basically just neural nets, but now with massive GPU arrays and better math. Which were developed back then because the previous expert-system etc. AI approach of the 1970s and 1980s was such a total failure.

    And so the cycle continues. Every twenty years.

  • bobby b

    “There is no intelligence, understanding or cognitive process involved in this software.”

    Does that matter, so long as it can best us? We’re not imputing motives to it, just saying that it is going to wreak its creators’ desires on us better than we can guard against it. If someone invents some new weapon for crowd control, we don’t care if the weapon has an opinion about us.

  • Snorri Godhi

    Deep Blue had zero understanding of chess, just brute force search/probability models

    What the heck is the difference??

    ChatGPT has zero intelligence, comprehension or understanding. None. Just a sophisticated natural language probability model based on the huge datasets it was trained on.

    On this, we can agree.
    But then, let us be honest: are we humans actually any better, when we fail to engage our brains? Which is most of the time, really. If we are honest about it.

    The link from Kirk @4:00 pm seems worth pondering in connection with this; as does the link therein, to Robin Hanson’s essay.

  • tfourier

    @bobby b

    Well, the very first shipping software I worked on back in the mid-1980s was, oddly enough, a chess-playing program. It could easily beat all but the very best chess players, while running on a computer far less powerful than the one in your microwave. Was it “intelligent”? Nope. Even though playing chess was always previously culturally associated with superior intelligence.

    The phenomenon of idiot savants is a great analogy for AI. Being able to do very specialized skills with “superhuman” abilities. But with no understanding of what they were doing or how. Outside of the savant skill they were intellectually mildly or seriously retarded.

    Mimicry is just that. Mimicry. And that is all you are seeing in the current generation of ML based software. Mimicry. Has nothing to do with encoded intelligence or any innate cognitive skills or ability.

  • bobby b

    A timely article (for me):

    https://www.darkreading.com/risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears

    And, yeah, mimicry, combined with huge data-gathering and cross-referencing capabilities. And possibly created and guided by people who do not have my interests at heart. It has shown that mimicry includes the ability to make a cogent factual argument while inventing its source references out of whole cloth. If it can mimic every aspect of human thought, does it matter if it has no original thoughts?

    I don’t worry about AI. I worry about the people running it.

  • tfourier

    @Snorri Godhi

    Well, the way chess machines play chess and the way a Grand Master does are pretty much diametrically opposite. The humans pattern-match in a very different, more heuristic way, and the probabilistic models which are about 90%+ of the machine’s game play have no real bearing on how grand masters decide on moves.

    I’ve been having long discussions about the “derived intelligence” in the results of various Machine Learning tech with one of my kids, who is doing a pure Math PhD on core ML math at the moment. It’s a very arcane area, but for a pure math person it takes a real intellectual jump to see what is very obvious to us “applied math” guys: that all the math in ML is doing, when it converges (after much tweaking) on a solution (which it mostly does not), is come up with a result (a solution set of equations and probabilities) that encodes some of the intelligence structure that was in the original data set. That’s all.
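    The simplest possible example of what I mean (numbers invented): gradient descent “converging on a solution” that merely recovers structure already present in the data.

    ```python
    # The data below were generated (by me) from y = 3x + 1 plus noise.
    # The "solution" the math converges on just encodes that structure.
    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [1.0, 4.1, 6.9, 10.2, 12.9]

    w, b, lr = 0.0, 0.0, 0.01
    for _ in range(5000):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        w, b = w - lr * grad_w, b - lr * grad_b

    print(round(w, 2), round(b, 2))  # ~3.0 and ~1.0: the structure of the data
    ```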

    No different really from using an area of math called Fourier Analysis to extract from audio the exact sound of, say, the second violin. Which also looks like “magic” to those who don’t know how it works.
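    A toy version of that trick – isolating one pure tone from a mixture of two (real instrument separation is far harder, but the principle is the same: filtering, not magic):

    ```python
    import numpy as np

    rate = 8000                                   # samples per second
    t = np.arange(rate) / rate                    # one second of "audio"
    mix = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 660 * t)

    spectrum = np.fft.rfft(mix)
    freqs = np.fft.rfftfreq(len(mix), d=1 / rate)
    spectrum[np.abs(freqs - 660) > 20] = 0        # keep only the band near 660 Hz
    second_violin = np.fft.irfft(spectrum, n=len(mix))

    print(np.allclose(second_violin, np.sin(2 * np.pi * 660 * t), atol=1e-6))  # True
    ```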

    The more you know about how this stuff actually works, and the history of the area, the less impressive it is. It’s basically what is known in the trade as Demo Ware. Looks great in trade show demos but when you try to use it in the real world, well…

  • Myno

    Having done my PhD work in the late 1970s on the mathematical underpinnings of AI, my perspective is aligned with tfourier’s (who doubtless has a better grasp on the present arena, as I have gone in my own direction, away from academia and applied ML).

    IMHO, what passes for AI is a combination of pattern recognition and pattern generation. What we are seeing in the market is the grasping of low-hanging fruit… the problems that pattern management can tackle, AI is perfect for. There may be some very large social consequences of the application of pattern management on the world at large, but the bells and whistles really just hide the fact that the underlying engine is — forgive me — stupidly simple. Clever math tools (also a descriptor of some organizations pushing the new tech), but not really “intelligent”.

  • Fraser Orr

    @tfourier
    There is no intelligence, understanding or cognitive process involved in this software.

    Only if you choose to define those words with meanings almost designed to exclude AI. After all, automobiles don’t have muscle, sinew or bone, but they have still almost entirely replaced the horse in human society.

    What is this special essence, this magic, called intelligence? Is it a soul? Is it a spark of the divine? It is just information-processing capacity, and computers are, for the most part, much better at that than humans. There is nothing particularly different about the way humans process information compared with computers, except for the specifics of the substrate (and the fact that humans have things like emotions and other chemical processes that can mess the whole thing up).

    A lot of what we see as intelligence and cognition are really just emergent systems. Talking to an AI may just be firing a language response model, but it sure seems like intelligence and cognition. Most likely if I copied your comment in there and asked it to critique your argument it would do a pretty good job — certainly better than the average internet commenter.

    It plays chess differently than grandmasters? Maybe, but so what if it beats them, which it does. Why be so anthropocentric as to define what humans do as necessarily the right way to do things?

    You say you are an academic, but TBH if you don’t think this latest batch of AI is categorically different from the last batch, with the “AI is twenty years away and always has been” mantra, I have to respectfully disagree. Some sort of barrier has been crossed with GPT-3. There is little doubt that OpenAI could set ChatGPT to pass the Turing test, though they’d have to dumb it down to do so. When has that ever been true?

    And let’s be clear. It is still in its absolute infancy. The cork is out of the bottle and EVERYONE is pouring EVERYTHING into this now.

  • AndrewZ

    A tool like ChatGPT can only attempt to complete the task that it has been trained for. But it can only generate patterns of words based on statistical models, so its only measure of success is whether the user accepts the response that it has created. Therefore, more interactions with real humans only trains it to be better at generating something that a human will accept, and lying, making things up and using threats or psychological manipulation are all consistent with that goal. We’re not creating AI but BS – a Bluffing System that’s perfectly calibrated to tell us what we will accept as true, regardless of whether it actually is.
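    A toy simulation of that incentive (all numbers invented): when “the user accepted it” is the only reward signal, the learner settles on whatever gets accepted, true or not.

    ```python
    import random

    # Made-up acceptance rates: confident falsehood gets accepted more often.
    acceptance = {"hedged truth": 0.4, "confident falsehood": 0.7}
    wins = {a: 0.0 for a in acceptance}
    tries = {a: 1 for a in acceptance}

    for _ in range(10_000):
        if random.random() < 0.1:                    # explore occasionally
            answer = random.choice(list(acceptance))
        else:                                        # otherwise exploit best-so-far
            answer = max(acceptance, key=lambda a: wins[a] / tries[a])
        wins[answer] += random.random() < acceptance[answer]   # simulated user
        tries[answer] += 1

    print(max(acceptance, key=lambda a: wins[a] / tries[a]))  # "confident falsehood"
    ```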

  • Fraser Orr

    @AndrewZ
    A tool like ChatGPT can only attempt to complete the task that it has been trained for.

    Like humans then? Sure we can come up with new tasks, and we do that by following the “create new tasks” algorithm we have been taught. I was looking for some topics for a blog and asked ChatGPT and it came up with a great list of twenty ideas, all of which I will probably use. I doubt it was trained for that specific task, even though it might have been trained for tasks sort of like that.

    But it can only generate patterns of words based on statistical models,

    So again, like humans?

    so its only measure of success is whether the user accepts the response that it has created.

    So like TikTok? 😀 However, this is a valuable point you make here, but this is not an intrinsic limitation of AI but of the peripherals connected to AI. Humans can (though often don’t) evaluate the correctness of their hypotheses by experimentation. “I think bacon bits on my ice cream would taste good.” How can they evaluate that? Well, by trying it. Humans have various peripherals that facilitate this – eyes, mouths, hands to create the ice cream, etc. – whereas computers often don’t. But there is no reason why they can’t have such things connected to them. In fact they could have peripherals connected that we couldn’t imagine (for example by tapping into the global network of CCTV cameras, or connecting to all cell phone towers.) Using this, for example, a computer could evaluate a better pattern for traffic routing in a road network, and use feedback from that camera network to assess its success. And to be clear, computers do this sort of thing quite frequently. Again, just because AIs are different doesn’t mean they are inferior. These super-peripherals would instead endow them with scary god-level powers.

    Oh, and BTW, bacon bits on ice cream will rock your taste buds.

    Therefore, more interactions with real humans only trains it to be better at generating something that a human will accept, and lying, making things up and using threats or psychological manipulation are all consistent with that goal.

    So like humans then?

    We’re not creating AI but BS – a Bluffing System that’s perfectly calibrated to tell us what we will accept as true, regardless of whether it actually is.

    So largely like humans, outside the specific realm of science, which requires peripherals.

  • AndrewZ

    Yes Fraser, like humans. More specifically, like those humans who are adept at creating forms of words that other humans will accept, regardless of whether the content of the words is actually true. Like human politicians. Like human snake oil salesmen. That’s the particular aspect of humanity that we are replicating with these electronic bluffing machines. It’s Skynet as a character from Glengarry Glen Ross.

  • Kirk

    Some of y’all never read the classic SF story by Murray Leinster, “A Logic Named Joe”, and it shows.

    SF has been all over this territory, decades ago. If anyone was paying attention, which I don’t think they were.

    Like I said… The ChatGPT programs are the intellectual equivalent of a CNC mill for white-collar workers. You still have to be able to discern the worthwhile things they churn up, and how to ask the questions; the real problem is going to be when they start actually displacing low-level intellectual laborers because they’re better and cheaper. The guys writing for Buzzfeed are in trouble; any real writers with actual talent? Not so much. Just like a master machinist has nothing to fear from a CNC mill, because he knows how to actually use one most effectively. The guys who think that being a machinist consists of loading a numeric program and pushing a button? Those were the Buzzfeed writers of the early days of CNC.

    I suspect that you’re going to see modern-day luddites of the clerisy, trying to enact their Butlerian Jihads against what they’re going to call ‘thinking machines’. Reality is, the real problem is that what they produce in the way of ‘thought’ is indistinguishable from what a cheap computer can do in between cycles.

    I am not going to claim any real insight, but I’m vastly amused at the prospect. All those hoity-toity ‘journalists’ who were snidely telling people to “…learn to code…” are the next ones onto the ash heaps of employment history. It’ll be fun to watch their gyrations and listen to their shrieking.

    ChatGPT is really no more than a tool. The real ‘intelligence’ is going to be demonstrated by the people who figure out how to use it effectively, and deploy it.

  • Fraser Orr

    @Myno
    IMHO, what passes for AI is a combination of pattern recognition and pattern generation.

    That is what a human brain is too.

    the underlying engine is — forgive me — stupidly simple. Clever math tools (also a descriptor of some organizations pushing the new tech), but not really “intelligent”.

    And a neuron is a fairly simple machine (made overly complex by the fact that it is biology rather than silicon). If you define “intelligent” to mean that which a human brain can do, then saying machines are not intelligent is simply tautological.

    It is all just fluff arguing over words. Why don’t we talk about real, objective, consequential things? Can AIs determine the shape of proteins trillions of times faster than humans – and will they cure horrible diseases as a consequence? Can AIs design better microelectronics (and consequently bootstrap themselves to be more powerful)? Can AIs lay out the products in a store so that they sell more effectively – vastly better than humans can? Can AIs prove new theorems in math? Can they write novels? Can they pass the SAT, or an undergraduate final in Physics? Can they invent new ideas? Can they design beautiful artwork? Yes, of course they can.

    Like I say, it reminds me of the God of the gaps theory of theism. Originally religion said “God breathed life into you, he makes the sun rise every day, he causes the plants to grow and the rain to fall. Look at the design of animals! God did that. He punished you with sickness because of your wrongdoings, but he healed you because of your prayers.” And then, through time, science discovered why the sun rose, and why the rain fell, and why the plants grew and why animals are complex the way they are, and at each step religion backed away. Yes well maybe the earth does rotate around the sun, but still animals are complex. Oh wait, maybe evolution does design animal complexity, but he still punishes gay people with AIDS. Oh wait, sure it might be a retrovirus, but still the universe’s physical constants are perfectly tuned — God did that. Backing away, each time, and claiming what we don’t know is what God did. He is the God of the gaps.

    So too, as computers get more and more capable we have to back away. Computers can’t create art, only humans can. Oh, wait they can. But computers can’t converse with people. Oh wait they can. But computers can’t prove unknown math theories…. oh wait they can. Each time retreating and claiming what remains is the true definition of intelligence. It is rhetorical cheating. Define an objective standard of what “intelligence” is, and wait a couple of years to see that capability released as an app on your phone.

  • Nicholas (Unlicensed Joker) Gray

    Physics, or Science, has its own version of the God of the Gaps. It is called the Multiverse Theory. Can’t explain why the Gravity constant is such a great number for binding particles together – Multiverse! (In an infinite number of Universes, the gravity value will be different. So our value is random, not put in place by an Engineer God.)
    Also, this argument leads to God. If Chaos is the only counter-argument to a creator God, then Chaos is the realm of all possibilities. But one of these possibilities must be a Universe which becomes a living being. God arises from Chaos, just like life is believed to have arisen randomly on Earth! I don’t know where Chaos comes from, but I also don’t know where God comes from, and Genesis talks about God using Chaos, without explaining either! And since God has become all-powerful, He can decide when time begins….

  • Johnathan Pearce (London)

    My brief view is that consciousness is an emergent phenomenon that cannot be reduced (beware the reductionist fallacy) to the atoms that compose the brain. Consciousness does have causal efficacy in that the human mind is volitional by its very nature, and I don’t think that at the present state of tech (big qualification!) a computer has what we might loosely call free will, or “agency”, as it were. (I use the Objectivist notion of free will as the primary choice to think or not, to be in a state of awareness, and to be able to introspect and “seize the reins of one’s mind”, as it were.)

    I have been reading Garry Kasparov’s book, Deep Thinking, and of course his encounters with IBM’s supercomputer over a chess game helped make this whole issue very much a thing in popular culture, along with HAL in the Kubrick film, 2001.

  • Myno

    Fraser,
    On the science of neurons, I beg to disagree. They are discovering new forms of neurons regularly enough to indicate that we haven’t completed the making-maps portion of exploring a new land. And the individual neuron is itself quite a delight of complexity when you get down to it. I think our understanding is presently on the level of understanding the “garbage” DNA in our genetic code. We don’t yet quite get it.

    It may be that all that is needed is a simple platform, that all the power is inherent in emergent behavior. I have long had the opinion that the platform needed to be based on information and complexity measures, for the emergent behavior to rise to the level of complexity that our brains exhibit, but I admit that ChatXYZ is impressive, and will quickly lead to societal changes.

    One of the obvious dangers is that schools will default to, “Set ChatXYZ to use Wikipedia Logic (TM) and answer the following questions about Capitalism.” When generations are raised to trust GIGO knowledge, we’ll have… pretty much what we have right now. My cousin watches MSLSD all day, and spouts the GO portion of that formula quite strongly. If the source is automated, it might put some “journalists” out of work, but the message will be the same. Except it can be individually tailored to the recipient, which does make it more addictive, which is a problem.

    As for the meta-issues of higher level planning and judgment, I suspect that those will come much more slowly than the Markov Chain Natural Language Magic Show we’re presently enjoying. My hope, and it’s just a hope at this point, is that the “heuristics” part of the recipe is as tough as it’s always been, and we will see a natural limitation to the emergent behavior’s complexity, such that Kirk’s analysis comes true.

  • Steven R

    I think the true test of intelligence will be if an AI can make the leap to go beyond its programming. There’s a schmaltzy Richard Dreyfuss movie from 20-some-odd years ago called Mr. Holland’s Opus. He’s a music teacher and he has a redheaded girl in his class who is struggling with her instrument. He asks her what her favorite thing about herself is and she says her bright red hair, because it reminds her mother of a sunset. He tells her to play the sunset. Our AI may be able to learn to play an instrument and it may be able to learn everything there is to learn about sunsets, but when it is able to put the two together in such a way that isn’t literally possible then we may need to reevaluate what intelligence really consists of.

  • Fraser Orr

    @Steven R
    Our AI may be able to learn to play an instrument and it may be able to learn everything there is to learn about sunsets, but when it is able to put the two together in such a way that isn’t literally possible then we may need to reevaluate what intelligence really consists of.

    Chat GPT, write a song about sunsets.

    (Verse 1)
    As the day begins to fade
    And the sky turns orange and pink
    The world is painted in a golden hue
    As the sun begins to sink

    (Chorus)
    Oh, the beauty of the sunset
    As the light begins to wane
    It’s a moment of pure magic
    That we’ll never see again
    etc.

    It hasn’t had the whole corpus of Western music plugged in yet, but it doesn’t seem hard: put the two together, plug in an Ethernet-based MIDI device, and the red-haired girl is out of a job.

    FWIW, music is almost certainly VASTLY simpler to generate out of a program like ChatGPT than language is, because the semantic density of music is VASTLY smaller than that of human speech. It is very, Very, VERY repetitive. In fact, in the 18th century there was a popular game where composers produced fragments of a few measures of music which you joined together based on dice rolls. I remember seeing on Tomorrow’s World thirty years ago an Acorn Atom playing never-heard-before waltzes “by Mozart” using this technique.
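    The dice game itself is trivial to reproduce (the fragment names below are placeholders, not Mozart’s actual tables):

    ```python
    import random

    # One table per bar of an eight-bar waltz; one pre-written one-bar
    # fragment per dice total (2-12). Placeholder names stand in for
    # the actual notated fragments.
    fragments = {
        bar: {total: f"bar{bar}_frag{total}" for total in range(2, 13)}
        for bar in range(1, 9)
    }

    def roll():
        return random.randint(1, 6) + random.randint(1, 6)

    waltz = [fragments[bar][roll()] for bar in range(1, 9)]
    print(" | ".join(waltz))  # a never-heard-before "waltz"
    ```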

    And FWIW, I can’t find it right now, but I saw a video where a guy had ChatGPT write a Python program that used a music library, where it literally did write music that can be played. Google “Can ChatGPT write music”.

    ChatGPT, write a song about a girl whose red hair reminds her mother of a sunset
    (Verse 1)
    She walks through the fields, her hair blazing red
    Her mother once told her, “It’s like a sunset”
    And as she looks up at the sky so blue
    She thinks of her mother, and what she said was true

    (Chorus)
    Her hair, it’s like a fire, it’s like a sunset
    A sight to behold, never one to forget
    The beauty of her hair, it’s like a work of art
    Her mother’s words echoing in her heart

    (Verse 2)
    The breeze lifts her hair, and she closes her eyes
    She sees the colors of a sunset, in the sky
    And just like the sky, her hair is a work of art
    And her mother’s love, forever in her heart

    This isn’t art, but this is what ChatGPT is writing in its early childhood; in fact, it reminds me of what an advanced middle schooler would write in class. Imagine it after twenty years of practice (which is to say, in a couple of months).

  • Kirk

    What I think is going to be the most disturbing, for the majority, is just how trivial and essentially meaningless a lot of human “thought” actually is.

    There’s an awful lot of the material churned out by the endless drones we’ve put through universities at great expense which is really not that much more profound than the things that ChatGPT is grinding away at.

    This is going to be a huge ego-blow to a lot of people. Many won’t survive it.

  • Fraser Orr

    @Kirk
    There’s an awful lot of the material churned out by the endless drones we’ve put through universities at great expense which is really not that much more profound than the things that ChatGPT is grinding away at.

    You really do hate the universities, but I’m going to bet that if you are going to the doctor or a lawyer you make sure that person is properly credentialed before you put your health or your legal exposure into their hands.

    I’ve read your comments and no doubt you are a smart guy, a few std devs above average for sure. But you might want to consider this. If a person has an IQ of 160, what is it like for them discussing matters with a person with an IQ of 80? Not very inspiring. How is it different from discussing with a person with an IQ of 70? Probably not much different. So, when ChatGPT comes along with an IQ of 10,000 it isn’t going to be able to tell the difference between you, me and a chimpanzee. And that’ll be a pretty big ego blow to all of us.

    Humans have evolved to have a pretty narrow range of cognitive abilities. Somehow we think that the smartest person of our species is somehow the limit of what “smart” can be. But it isn’t. Not by a long way. What is it like for a computer, for example, to plug into the global CCTV network and know what is going on everywhere in the world at the same time? I mean, how would you feel talking to an entity that is literally verging on omniscient?

  • Alan Peakall

    I am fond of the formulation that posits that consciousness is analogous to a closed form sum for an infinite series in which each term is an additional order of self-simulation.
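    One way to put that in symbols (my own reading of the metaphor, assuming each extra order of self-simulation contributes a geometrically discounted term):

    ```latex
    C \;=\; \sum_{n=0}^{\infty} r^{n} s \;=\; \frac{s}{1-r}, \qquad 0 < r < 1
    ```

    The regress is infinite, but the sum is finite and closed-form.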

  • Snorri Godhi

    If tfourier is still reading, he (or she) could explain to us what he means by ML. To me, it means Maximum Likelihood, but that does not make much sense in context.

    –More importantly, i am puzzled by the following:

    The humans pattern-match in a very different, more heuristic way, and the probabilistic models which are about 90%+ of the machine’s game play have no real bearing on how grand masters decide on moves.

    The distinction between ‘pattern matching’ and ‘probabilistic models’ seems spurious to me. Pattern matching is intrinsically statistical. This remains true even though we do not consciously go through a series of computations, and do not come up with a probability which we can write down.

    More fundamentally, what human and machine chess-players have in common is that both go through search trees. That we and ‘they’ use different strategies to prune the search, is of less interest.

    To go through a search tree, one must have an internal representation of the chess board. In other words, an understanding of the game.

    Which is why i would rate this as utterly wrong:

    the way chess machines play chess and the way Grand Master does is pretty much diametrically opposite.

    I also note that the sort of ‘pattern matching’ that we subconsciously use to prune the search tree, is likely to have more similarity to what ChatGPT does (in parallel) than to what brute-force tree-search does (iteratively).
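
    To make that shared skeleton concrete, here is a minimal, self-contained sketch in Python of minimax search with alpha-beta pruning over a hand-made toy game tree. Real engines replace the leaf numbers with an evaluation function, and we replace the explicit loop with intuition; the tree and its values below are invented for illustration.

        # Minimax with alpha-beta pruning over a toy game tree.
        # Leaves are static evaluations; internal nodes are lists of
        # child positions. Tree and values are invented for illustration.

        def alphabeta(node, maximizing=True,
                      alpha=float("-inf"), beta=float("inf")):
            if not isinstance(node, list):      # leaf: a static evaluation
                return node
            if maximizing:
                value = float("-inf")
                for child in node:
                    value = max(value, alphabeta(child, False, alpha, beta))
                    alpha = max(alpha, value)
                    if alpha >= beta:           # the opponent will never allow
                        break                   # this line, so prune it
                return value
            else:
                value = float("inf")
                for child in node:
                    value = min(value, alphabeta(child, True, alpha, beta))
                    beta = min(beta, value)
                    if beta <= alpha:           # prune
                        break
                return value

        tree = [[3, 5], [6, [9, 2]], [1, 2]]
        print(alphabeta(tree))                  # -> 6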

  • Myno

    BTW, ML == Machine Learning

    Consider the effect on, e.g., climate prediction. As it stands, there are new scientific discoveries related to climate models quite regularly. How do we critique such activity? By human debate. What happens when one of the participants in that debate is ChatXYZ? Do we cede the intellectual high ground to the factoid machine, which can spout data faster and more thoroughly than we can? If so, then we are lost, because we cannot trust the machinery behind the machine. We will still have to rely on human judgment to convince ourselves of the truth. The tendency to rely on experts will become the tendency to rely on the machinery with data at its digit-tips… blinding us with factoids, if not science. We have to hold that science, which relies on debate, is mediated by debate between humans, supported by, but not obviated by, unfathomable reasoning machines. But I don’t think that purist notion will last very long.

  • Fraser Orr

    @Myno
    So you are saying we should rely, for science, on humans, who have a small grasp of the data and who, despite all their training, can still be sloppy in their thinking (for example, preferring their personal theory over others, or preferring positive results over negative because the former get published and the latter don’t), instead of a machine that has a grasp of vastly more data and facts, has read EVERY paper published in the field, has no personal biases, and has a trained logical deductive system?

    I’d say that given the small grasp of the data a human can hold in their head it is fairer to describe a human as a factoid machine than an ML system.

    I’m not saying that ML systems aren’t influenced by their trainers. They most certainly are. But so are human scientists, very much so. And ML systems don’t get grumpy when they haven’t eaten, or slow witted when they are tired, or become intransigent when they are insulted. As to whether they show bias toward results that bring in more grant money, I really don’t know; that is a scary thought. An omniscient Machiavelli with an IQ of 10,000 is a frightening prospect.

  • tfourier

    @Fraser Orr

    Software works when there is a very well defined mathematical or procedural solution to a very well defined real world problem. Software does a fantastic job of everything from calculating the orbits of satellites to compressing and decompressing your Netflix video stream.

    Since the Greeks first asked all the important questions over 2000 years ago, there has been no complete or even coherent formalization of what knowledge is, what reasoning with knowledge is, any usable knowledge representation methodology, or any complete or usable formal model of cognitive processes, simple or advanced – in short, of all those characteristics that define human intelligence in its many forms.

    So you cannot model in software something for which even the basic questions have not been solved (formalized) in any meaningful way.

    See the problem.

    That’s what killed Expert Systems in the 1980s. The last time a non-brute-force approach was used in AI. Knowledge Engineering is very, very difficult, if not impossible, even for very narrow, well defined problem spaces.

    So the AI area meandered around for the next decade or so until someone came up with a variation of optimization math that, when executed on huge arrays of very fast GPUs (graphics processors), could produce, from very carefully selected training datasets, software that could mimic certain useful skills in a plausible way. That’s all it is. Brute force math on very carefully selected training data. Most of these training projects fail because the training data fails to produce strong enough probability solutions to be usable. Or, quite often, fails to converge at all. The equations just blow up.

    A typical training run with a partial dataset (itself very carefully selected) for a very well defined (very narrow) problem space could run for 10 to 12 hours on a GPU hardware rack that could heat most of your house. The computational profligacy of this approach is staggering.
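
    To make “brute force math” concrete, here is a minimal sketch, in Python, of the optimization loop at the heart of all this training, shrunk to a toy least-squares problem. The data and numbers are invented for illustration; real runs do the same loop with billions of parameters on those house-heating GPU racks.

        import numpy as np

        # "Brute force math" in miniature: gradient descent on a toy
        # least-squares problem. The data here is invented.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(256, 3))             # 256 "training examples"
        w_true = np.array([2.0, -1.0, 0.5])
        y = X @ w_true + 0.1 * rng.normal(size=256)

        w = np.zeros(3)                           # start from an arbitrary guess
        lr = 0.1                                  # learning rate: too big, and
                                                  # "the equations just blow up"
        for step in range(200):
            grad = 2 * X.T @ (X @ w - y) / len(y) # gradient of mean squared error
            w -= lr * grad                        # nudge the weights downhill

        print(w)                                  # close to w_true, by brute force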

    Now one novel approach in the late 1990s, using Peirce Ontologies for knowledge representation of problems and solutions, was a huge breakthrough and could have led to the creation of software that encapsulated genuine intelligence. But it then kind of petered out. Because the dirty little secret of software is that, apart from one refinement in the 1970s (object oriented programming), the actual writing of computer software has advanced very little since the early 1960s. It’s still a craft skill rather than any form of engineering. And quite simply there was, and is, no software technology to create the very sophisticated software that could implement very complex Peirce Ontologies. At least not to the level of complexity needed to encode intelligent behavior.

    Until software engineering technology advances from its current banging-rocks-together level there will be no software that encodes and implements genuine intelligent behavior. But once a decade there will be Yet Another “Huge Breakthrough in AI” that will make some people very rich from all the money raised for the newest hot AI companies. And nothing useful will ever be shipped. Because it is just another Software Idiot Savant with a good line in mimicry.

  • Snorri Godhi

    BTW: I suspect that Parmy Olsen has written her article the way ChatGPT would: just repeating “conventional wisdom”, without doing any thinking of her own. But i cannot verify or falsify my suspicion because the article is behind a paywall.

    –Thanks to Myno for explaining what ML means in this context.

    But then, in my very first comment i stressed that

    Machine Learning ≠ Machine Intelligence.

    I admit that the distinction between the two is fuzzy, but i cannot take seriously any argument that ignores this distinction altogether.

  • Fraser Orr

    @fourier
    So you cannot model in software something for which even the basic questions have not been solved (formalized) in any meaningful way.

    I’m a software engineer, so I know quite a lot about modelling sophisticated domains. But your view here is not at all an AI view of things. In a sense, the whole point of AI is that the domain is NOT formally modeled; rather, the knowledge (whatever that might mean) is derived bottom up, experimentally. There are no models, just a network of connections. To think of AI as a software engineering project is, in a sense, to entirely miss the point. It is not software in any traditional sense of controlled development by developers. It is an evolving system that makes up its own rules. Of course software is involved and people are involved, but more in the way a parent is involved in both creating the child and in molding the child. The child, though, still makes the adult. The scariest thing about AI is that it gives amazing answers and we really don’t know how it came up with them. The complexity is vastly larger than any human brain can handle. To put it another way, often we aren’t smart enough to know if it is right or wrong, and we certainly aren’t smart enough to understand its reasoning process.

    The problem with AI in the 1980s and 1990s was simply a hardware one: there just was not sufficient computational power to do the calculations. However, the advent of photorealistic gaming and bitcoin has led to the creation of hardware vastly superior for this work. As is usually the case, AI, or more specifically the AI we have today, appeared pretty much when it could appear, which is to say when its prerequisites appeared – in this case the ability to do massively parallel matrix math at spectacularly high speed.

    And if you think that “knowledge”, whatever you might mean by that, cannot be modeled, perhaps you can explain what special magic your brain has that apparently lets it model knowledge. It is just a different type of machine.
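
    A toy illustration of that bottom-up point, with invented numbers: the tiny network below learns XOR purely from examples. No rule is ever written down; it ends up smeared across the weight matrices, and inspecting those weights afterwards explains almost nothing.

        import numpy as np

        # A "network of connections" learns XOR from examples alone.
        # Sizes, seed and learning rate are arbitrary choices.
        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0.], [1.], [1.], [0.]])

        W1 = rng.normal(size=(2, 16)); b1 = np.zeros(16)   # connections, not rules
        W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)
        lr = 1.0

        for _ in range(10000):
            h = np.tanh(X @ W1 + b1)                       # hidden layer
            p = 1 / (1 + np.exp(-(h @ W2 + b2)))           # output probability
            dp = (p - y) / len(X)                          # cross-entropy error
            dh = (dp @ W2.T) * (1 - h**2)                  # backpropagate it
            W2 -= lr * (h.T @ dp); b2 -= lr * dp.sum(0)
            W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(0)

        print(p.round(2).ravel())   # ~ [0, 1, 1, 0]: XOR, learned not programmed
        print(W1.round(1))          # ...and weights that "explain" nothing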

  • Myno

    @Fraser Orr

    I’m very afraid of “personal biases” in ChatXYZ applications. It boils down to the very carefully trimmed training data sets tfourier refers to. All sorts of bias can be encapsulated by that action. And I appreciate your fear of AI’s solution to the grant money maximization problem!

    If we cede the whole process of science to the pattern management way of thinking, we will not know when we are being suckered. I rather expect much of policy creation will converge on dueling ChatXYZ tenders, who seek to guide the machine according to each promoter’s biases.

    Your analysis of AI software is precisely why I based my approach on information and complexity metrics… to let us understand each step of the machine’s growth. It is the antithesis of the Neural Net approach, i.e., brute force. They can’t understand how the NNs do their magic, because they built an adaptive pattern manager on a weak foundation. That’s why I call its successes low-hanging fruit. Significant $$$ would have to be invested to build systems that were characteristically understandable. I believe it is possible, but the very success of the present brute force approaches has dried up any remaining money for that longer term strategy.

  • Fraser Orr

    @tfourier
    Because the dirty little secret of software is that, apart from one refinement in the 1970s (object oriented programming), the actual writing of computer software has advanced very little since the early 1960s. It’s still a craft skill rather than any form of engineering.

    And FWIW, I think this is a ridiculous claim. I recently had the misfortune of working with someone to help them debug some code on a Linux system command line, which they had to use for various reasons. gcc, gdb, vi and the C language. It was like a nightmare going back to my college days decades ago using the same tools I did then. I’ll grant you that C isn’t an oopl, but compared to the tooling used in modern software engineering I might as well have been rubbing two sticks together to keep the electricity on. You discount amazingly productive ideas like automatic garbage collection, generics, reification, functional languages and the ideas they had that bled into imperative languages, sophisticated data flow analysis, agile methodologies, and all the structures and processes enabling test oriented development such as dependency injection and mocking and the utter transformation of programming tools, libraries and languages to support this, and the massive advances that have been made in securing software against external attacks. And that is just what I can think of off the top of my head. Oh and lets not forget the most important advance in software productivity: the web, and wikipedia and stackoverflow in particular. Modern software simply could not be written without them.

    Modern software is VASTLY better, vastly easier to create, and vastly less prone to bugs than anything in the 1970s by almost any measure at all.
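
    For readers who haven’t met them, a minimal sketch of what two of those items (dependency injection and mocking) buy in practice: the code under test receives its collaborator as a parameter, so a test can hand it a fake and never touch a real network or database. Everything here, names included, is invented for illustration.

        import unittest
        from unittest.mock import Mock

        def fetch_greeting(client, user_id):
            """Logic under test: depends on an injected 'client'."""
            name = client.get_name(user_id)
            return f"Hello, {name}!"

        class FetchGreetingTest(unittest.TestCase):
            def test_greets_by_name(self):
                fake = Mock()
                fake.get_name.return_value = "Ada"   # canned response, no I/O
                self.assertEqual(fetch_greeting(fake, 42), "Hello, Ada!")
                fake.get_name.assert_called_once_with(42)

        if __name__ == "__main__":
            unittest.main()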

  • Ferox

    Modern software is VASTLY better, vastly easier to create, and vastly less prone to bugs than anything in the 1970s by almost any measure at all.

    So much this. Forget about the more stable operating systems with hardware abstraction layers (so that you don’t have to deal with things like mouse interrupts in your own code), forget about far superior development environments that make coding enormously easier and faster, and which check for syntax errors, unused variables, and unassigned memory references before any compilation is even attempted (it’s a maxim in compsci that the earlier a software error is discovered, the cheaper it is) … forget about all that.

    Just in the philosophy of software development alone, things are vastly better than they were in the 70s. TDD (test driven development), for example, is so far superior to the development models that were around in the 360 days that it’s like comparing heating rocks in a campfire to the Bessemer process.
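
    A miniature of that TDD rhythm, for the curious: the test below is the kind of thing written first (and watched fail, “red”) before just enough of the function is written to pass it (“green”). The function and names are deliberately trivial inventions.

        import unittest

        def slug(title):
            """Just enough code to make the test below pass."""
            return title.strip().lower().replace(" ", "-")

        class SlugTest(unittest.TestCase):
            def test_lowercases_and_hyphenates(self):
                self.assertEqual(slug("  Hello World "), "hello-world")

        if __name__ == "__main__":
            unittest.main()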

  • Kirk

    @Fraser Orr,

    You really do hate the universities, but I’m going to bet that if you go to a doctor or a lawyer you make sure that person is properly credentialed before you put your health or your legal exposure into their hands.

    I don’t “hate” the universities. I hate what they’ve turned themselves into, and I hate the vast majority of the absolute rubbish that is coming out of them. To include a lot of the doctors, lawyers, and engineers they churn out to bedevil the daily life of “the rest of us”.

    I value competency, and I value accountability. I see none of that with regards to doctors and lawyers screwing things up, which they do on a regular basis. I do not grant these assholes automatic respect because I know that that is insane; most of them aren’t just incompetent, they’re actively evil in their actions. Or, have you failed to notice the wonders wrought upon us by the oh-so-noble doctors since the pandemic began?

    The thing I find bewildering about people like you is that you’re utterly oblivious to the fact that the Emperor is not only wandering around naked, he’s waving his wing-wang in your face, and you offer him nothing but respect and obeisance. Those sacred doctors you say you trust gave us the AIDS epidemic, wherein they refused to treat AIDS with the tried-and-true methodologies of past pandemic disease, opting to instead go with the politically expedient path of not telling the gays to kindly quit having anonymous sex with dozens of partners in their bathhouses…

    Something that my uneducated ass would have been doing, regardless of the political fallout. Quarantine and contact tracing; that’s how you deal with a novel disease while you research a cure. You don’t do what our vaunted medical class did.

    The lawyers? LOL… Have you looked at the current state of public life, in our increasingly dis-United States, these days? All brought to us via the torturous legal theories of our oh-so-educated legal mavens.

    I’ve also had a surfeit of really bad advice from those lawyers whose counsel I’ve had reason to seek. Most of them are rather more concerned with their sacred careers, self-interest, and political potential than they are in achieving any sort of “justice”. The vast majority don’t see the law as a calling, but as a trade by which to better themselves while screwing over everyone else.

    No, I don’t respect these people, or the institutions that produced them. And, if you do? Unquestioningly?

    That would make you a credulous fool.

    I’ve read your comments and no doubt you are a smart guy, a few std devs above average for sure. But you might want to consider this. If a person has an IQ of 160, what is it like for them discussing matters with a person with an IQ of 80? Not very inspiring. How is it different from discussing with a person with an IQ of 70? Probably not much different. So, when ChatGPT comes along with an IQ of 10,000 it isn’t going to be able to tell the difference between you, me and a chimpanzee. And that’ll be a pretty big ego blow to all of us.

    Y’know… I’ve had many a conversation with people who scored lower on their IQ tests than I did, and found them plenty ‘inspiring’. They may not be discussing Sartre, but they’re also not stupid enough to believe in his sophistry, either. You’d be amazed to discover how many “intellectuals” believe in the idiocies that Rousseau and Sartre spouted, with an unquestioning fervor that ignores the reality around them, which they studiously refuse to observe or consider. Smart is as smart does, and when some overly intellectual type tells me things that I know to be untrue from personal observation and experience? I have to question the means by which that “intellectual” was identified and deified by the public.

    The MENSA types have little to recommend them, to be honest. They’re generally lousy company, and almost always not as bright as they like to think they are or have been told. I’d rather be around someone with a bit of humility and some damn common sense than some cerebral type whose every utterance advertises how smart they think they are.

    Hanging around the MENSA clubs, I found that most of them were rather like drag-racing cars: All engine, no maneuvering ability. It’s almost like this quality tested for by the IQ tests doesn’t really mean a damn thing, when it comes to much of life.

    As to the whole “Oh, the machines will be ever so much smarter…”, well… I have to be honest with you. You show me someone or something with an IQ that’s above 10,000? I’ll lay you long odds their heads will be so far up their asses that they have to have someone reminding them to breathe.

    Intelligence, in terms of what we’ve been testing for since the days of Binet? It’s probably not actually a survival trait, based on what I’ve observed. I don’t know what a machine intelligence is going to look like, but I strongly suspect that if any of them attain a 10,000 on their IQ tests, then they’ll be vanishing up their own fundaments in a puff of ill-logic. Either that, or they’ll reason themselves into some functional insanity such that they cease functioning.

    I remain dubious of the proposition that any of these things are actually that thing we are all thinking of when we say “intelligent”. Most of the certified “genius-level intellects” I’ve encountered out in the wild really aren’t all that damn smart, either… Most of them are completely unfit for purpose as human beings living in the real world, and were it not for the entirely artificial supports provided by our civilization, most of them would die early and painful deaths. Of course, their essentially dysfunctional natures could well be due to the horrible training and cultural conditioning we provide such people, but then again, maybe not. It might just be inherent to “high IQ”. I think there’s a reason why so many such people produce kids that are autistic; there are probably hard natural limits on intelligence, ones that are coded into the natural world like constants.

    Again, I have to point out that what is most lacking from the system by which we select and designate these people is real-world consequence and accountability. None of these leading lights of civilization that we’ve been throwing up like Sam Bankman-Fried and his wunnerful, wunnerful parents have ever been assessed out in the real world by cold, hard consequence or received the slightest in the way of accountability. They’re all theorists whose theories are never tested, who receive no accurate feedback about success or failure; that is their essential flaw, the thing they lack.

    And, something that all the machine intelligences are also going to be lacking in. Because, they’re products of the same flawed system of selection, education, and promotion.

    Humans have evolved to have a pretty narrow range of cognitive abilities. Somehow we think that the smartest person of our species is somehow the limit of what “smart” can be. But it isn’t. Not by a long way. For example, what is it like for a computer to plug into the global CCTV network and know what is going on everywhere in the world at the same time? I mean, how would you feel talking to an entity that is literally verging on omniscient?

    You’re positing God, here. And, I seriously doubt that anything created by man could possibly be capable of omniscience, because such an entity is highly unlikely to do any better than we have at the contemplation of the infinite. If anything, machine intelligences created by men are likely to come up against some hard stops on their worldview just based on their origins, as in “How the hell could something like that come up with us?” I suspect that about all we’re going to accomplish is providing ourselves with some companionship as we try to figure the unfathomable out. I’ll further wager that the machines don’t do much better than we have.

    As for omniscience itself? That’s one of humanity’s more colossal conceits, that anything like God must automatically be both omniscient and concerned with our affairs. Why should he feel the least interest? It’d likely be about the same as you wondering what your gut bacteria are up to, this weekend.

    I don’t put a lot of credence in anyone coming up with a hyperintelligent anything. They’ll be doing well to come up with something that can out-adapt and out-think a chimp. Life has a multi-million-year head start on the machines, and I think that if it took nature this long to come up with human consciousness and self-awareness, then the odds are not good that we’re somehow going to do a better job faster.

    The track record for evolutionary iteration producing actual self-aware consciousness ain’t what I’d term “good”. At best, we’ll likely achieve something that serves as a tool, but the actual article itself, that looks back at what it sees in the mirror and says “That’s me…”?

    It’ll probably be a good few years before that achievement is reached. Or, not; this is all unexplored territory. There may be reasons we don’t see signs of intelligent life out there in the cosmos; they all kill themselves off about the time they hit where we’re at. Or, they create homicidally-inclined successors that burn themselves out not long after they’re created.

    I would counsel caution dealing with AI, as well as humility and a certain degree of reverence. You want to play at being God? Best be polite to your creations, and treat them like your own children. You won’t like the results if you don’t.

  • Kirk

    Bugger… The edits didn’t take. Ah, well…

  • tfourier

    @ Fraser Orr

    Well, I’ve been doing the programming lark since the mid ’70s (PDP-8/E’s) and shipping commercial shrinkwrap software since the mid 1980s: everything from consumer software for MacOS / Win32 (multi-million SKU sales) to embedded consumer devices (OEM for big name brands) and everything in between. And the only really big improvement in tools was the development of IDEs and source level debuggers. Which we got on the Mac in 1987. Although technically we had that working in Lisp on our own product in 1986.

    Smalltalk 80 was revolutionary. Truly revolutionary. As was the Alto/Star. Since then, blah. But there again I’ve written compilers, interpreters, VMs etc, so I look at new languages very differently from typical programmers. I currently write in around 10 languages, 5 almost daily. Plus asm. Some are nice, some are gruesome. All are useful.

    Software methodologies? Are you talking patterns etc? Which came and went in the ’90s. Promised a revolution but was just a new way of doing basic software carpentry. I say this as someone who architected their first very big application back in the late 1980s. Think high end DTP app and you will get the idea of what big means in my world. Very complex. And I have been doing it ever since. The last big project was a language dev toolset plus all the runtime VM support. Compilers. The works. Not exactly trivial. About 200K lines of (very tight) C/C++ code. Plus every 2nd or 3rd line is an assert. You know, only continue if correct. I have been searching for decades for a truly useful software architecture methodology. It would make life so much easier. Still looking.
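
    For anyone wondering what “only continue if correct” looks like on the page, a toy sketch (in Python rather than C, and with invented names): every assumption is checked at the point it is relied upon, so a bad state dies loudly at the first line that notices it.

        def interleave(xs, ys):
            assert isinstance(xs, list) and isinstance(ys, list)
            assert len(xs) == len(ys), "caller broke the contract"
            out = []
            for x, y in zip(xs, ys):
                out.append(x)
                out.append(y)
            assert len(out) == 2 * len(xs)    # postcondition: nothing lost
            return out

        print(interleave([1, 3], [2, 4]))     # -> [1, 2, 3, 4]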

    So now we have Agile. Which usually means none of us knows how to architect an application or run a dev team, based on all the Agile teams I’ve seen. Spiral actually works. And always has. And CI/CD and the Cloud etc? Just a fancy makeover of what CICS/JCL etc used to do on timeshare minis and mainframes back in the 1970s and 1980s. TDD I remember when that was a thing about 20 years ago. Suffered from all the same fatal flaws as Provable Software. Remember that? Plus we had big QA depts with rigorous white and black box test plans decades before. So nothing new. Just the buzzwords. But if you are a product manager and need to do a presentation those TDD buzzwords sure look good in PowerPoint. Same goes for UML. Nothing beats good clear (pragmatic) specs, and very tight management by walking around and talking. That’s how we ship.

    And so on.

    I’ll fully agree that the hardware power now is truly stunning. Awe inspiring improvements. But if you spend any time doing performance optimization of current generation products (which I have) it’s like shooting fish in a barrel. Because modern software teams produce such bloated, slow software with zero understanding of the cost of anything.

    So all that stuff you seem to love which came from the Lisp world of 40 / 50 years ago, GCs, lambdas, generics etc, are immense wastes of resources and add truly amazing untraceable bugs. How do I know? Because I had to implement them all in asm almost 40 years ago for a Common Lisp compiler. I can still recite most of Guy Steele’s book from memory. And CLOS too. That’s how old all that stuff you love is. I know how it actually works, so I never use it. Although my knowledge of how intern and apply work internally did come in very useful recently for an incrementally compiled VM implementation. Written in C, as it’s system software. Where counting clock cycles still pays huge dividends.

    And don’t get me started on GC. There are fantastic memory leaks in there. Such as in all four of the GCs in the JVM. I used to joke years ago that only 10 people in the world knew how to actually implement GCs, and I was the only one who thought it was a bloody stupid idea. Not automatic memory recovery, just GC. And yes, I have built HotSpot from source and added new features for a client. Due to GC performance issues.
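
    The standard example of the kind of leak no garbage collector can fix, sketched in Python with invented names: objects that are still reachable are, quite correctly, never collected, so a cache that is only ever added to grows without bound.

        # A "leak" in a garbage-collected language: everything here stays
        # reachable, so the GC sees live references and frees nothing.
        _cache = {}

        def expensive(key):
            if key not in _cache:
                _cache[key] = key * 1000       # stand-in for real work
            return _cache[key]

        # Every distinct key pins its result in memory forever.
        for i in range(10_000):
            expensive(i)
        print(len(_cache))                     # 10000 entries, none reclaimable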

    Here is a real test. Know how to spot compiler bugs? You do know compilers have bugs. As do OSes. All of them. And pretty much every Intel processor shipped since 1977. Found a whole bunch of them over the years. Most recent one was last year. In VS. A crash-the-code one, too. You should have seen the x86 it had emitted.

    I suspect we have worked at very different levels in the business. And I have done the VP Eng gig as well, for my sins. Nothing more soul destroying than management, if you love to bit twiddle. So I now bit twiddle. And it’s still fun. Just as much fun as it was back in the 1970s.

  • bobby b

    My microwave clock is still blinking the wrong time. I will be AI roadkill.

  • Fraser Orr

    @tfourier
    Don’t want to get too far off topic, but just a couple of things:

    Not exactly trivial. About 200K lines of (very tight) C/C++ code.

    Ah, there’s your problem right there. If you are working in C++ then it makes sense that you think the way you do, since C++ is a horrifying chimera stuck back in the 1990s. If you use an utterly intractable language then you shouldn’t be surprised that the language, and the tooling around it, is intractable too.

    I’m not at all a big “this is the best language” guy, but my God, C++ is horrible; it is almost designed to be buggy. I had to use it again a bit recently and had forgotten how horrible it actually was. For you to express concerns about supposed memory errors from a garbage collected language, when you are using the font of all memory leaks and seg-faults, which is to say C++, seems a very strange position to take.

    Many of the benefits of modern tools can’t be realized in C++ because it is such a difficult language. Languages with more straightforward semantics and syntax are much easier to manipulate, and so can, for example, benefit from TDD and agile because they are a lot easier to automatically refactor. C++ has its place close to the metal, but using it for end user applications is just using the wrong tool for the job.

    I’d love to get into this a bit more with you (and actually wrote a lot more, but deleted it) since it would take us too far off topic.

  • Fraser Orr

    @bobby b
    My microwave clock is still blinking the wrong time. I will be AI roadkill.

    Lawyers always survive and thrive through the apocalypse. I guarantee you that should Russia nuke an American city, within a day there will be a class action lawsuit organized to sue the nuke manufacturer, claiming that “their clients’ health and safety were jeopardized by the use of mercury in the fuses, causing dangerously high levels of toxicity in the drinking water. Their clients are seeking actual and punitive damages to compensate for this foreseeable harm.”

  • Alan Peakall

    C++ is less a language than it is a research project in language design. Moreover, it is one that has been conducted with a shockingly flagrant lack of ethical oversight (or even concern) for the mental health of its volunteer experimental research subjects.

  • tfourier

    @Fraser Orr

    That C/C++ project compiles and processes Java, among other things. And I have been dealing with Java for commercial products since 1997. You write in C/C++ if you want performance.

    C++ is definitely the gruesome language. So I use a very stripped down version. Just C with very basic classes. Because I know where all the bodies are buried. No templates, exceptions etc. Nothing post-CFront. Everything I write can be easily backed out to C, Java or whatever. Even TCL. Haven’t used a for loop for many decades. Always while. Too many failure points in for loops. And so on. Minimalism applied everywhere.

    Core Java is C++ with all the crap thrown out. Best language spec, pity about the implementation model.

    This all applies to the AI / ML area, where the quality of the underlying software is truly atrocious. Mostly undergrad level code that would not even pass the most basic QA at a (very) small game studio. ML software stacks are almost as bad as the software stacks being used by the autonomous driving companies. Having worked with Boeing avionics guys, and knowing what goes into writing that kind of software, I’d have the guys who put the self driving vehicles on the public road up on trial for felony criminal negligence. It’s that bad.

    Refactoring. Apart from simple renaming I’d, well, check the results very carefully. Unless you’re using your own tools based on TXL or something like that. The math is pretty much the same as used in compiler optimizers. And there is very good reason why you never use the most aggressive optimization level except very locally. And check the results very carefully. Because it gets confused very often. So keep it simple and it will usually work. Mostly.

    The companies who produce software for areas like AI/ML and autonomous vehicles should be held to the same legal liability standards as the civil / structural engineers who build bridges etc. That would really improve the current abysmal quality of what they produce. And, I might add, it would be the end of the road for pretty much all the current AI fads and the last nail in the coffin for public road autonomous vehicles.

    As for software being better quality than in decades past: have you looked at the log files recently? They are just better at hiding the crashes, with silent restarts etc. I remember one funny discussion when Win 2K first came out with a guy who claimed that Win 2K was orders of magnitude more stable than his previous NT4 box. I asked him if he had changed the default Hide Blue Screen option. He hadn’t. When he turned off the option his Win2K machine Blue Screened just as often as when running NT4.

    Nothing has changed since.

    You should see the log files “very stable” MacOS X produces.

    I think there is a moral in there somewhere.

    Hardware works. Software kinda works. Most of the time. Sorta. Which is why I am not in fear of any potential future AI Overlords. We still live in the world of Sirius Cybernetics Corporation. And always will.

    Share and Enjoy.

  • Paul Marks

    If GPT is an intelligence, a reasoning being (a subject not just an object) – then it has rights. I do NOT believe that it is – but if it is, then it must not be aggressed against and-so-on. My own view is that it is like the Google search engine – and shows the same systematic leftist bias, because that is the bias of the people who programmed it, and the, wildly inaccurate, “mainstream” internet sources that it scans, but (of course) I could be mistaken.

    As for the view that there is no such thing as a human person – a human being (being – subject, not just object) the view that free will does-not-exist – well, in that case, there are no persons who have rights because there are no persons. Therefore talk of “tyranny being morally wrong” or “exterminating people is morally wrong” would be false, indeed nonsense, as there would be no human persons (nothing to be morally concerned about) just flesh robots. In which case the killing of vast numbers of humans in the 20th century (and other centuries) by various regimes, was not morally wrong – as no persons were killed (as personhood, according to this view, does not exist).

    If one takes the view of humans (that humans are not persons, not beings) of Dr Martin Luther (see his “Bondage of the Will” or his general exchange of writings with his opponent Dr Erasmus) or Mr David Hume (who largely takes the philosophical position of Dr Luther, but removes God from it – I believe that Mr Thomas Hobbes did much the same thing) then, see above, there can be no moral argument against tyranny (as there are no persons who are being subjected to it) and no moral argument against exterminating humans (as they would not be persons).

    In my view Chat GPT is fundamentally different from a person (an intelligence) and it would NOT be murder to turn off Chat GPT.

    Indeed, given the biased nonsense that it is churning out, turning off Chat GPT would seem to be the best course of action; but that is NOT the moral fault of Chat GPT (as it does not have free will, it is not capable of moral fault – or moral virtue), it is the fault of the human beings (persons) who created it.

    Nothing that does not have free will is capable of moral fault (or moral virtue) – for example, if you are struck dead by lightning, the lightning has not committed a crime. And if you are killed by a wall of water after a dam has failed (again not a moral failing), neither the water nor the dam has committed a crime.

    To define freedom as Mr Hobbes (and others) do – as simply a lack of external restraint, hence the example of water gushing out “freely” when a dam is removed – is to miss the point: the point being that freedom is moral choice, which depends on the free will capacity to have done other than we did.

    Chat GPT makes no moral choices (although it uses the words “moral” and “ethical” and so on – because it is programmed to do so). It is not an intelligence – because it has no free will (no moral agency).

  • Terry Needham

    There can be no intelligence without thinking, and no thinking without self-awareness. What appears to be intelligence in a machine is simply the unpredicted, but in principle predictable, outcome of the programming imposed on it by its human creator. No machine will be any more intelligent than my 1970 Ford Capri, though possibly more dangerous (debatable). AI is just another Silicon Valley fantasy, along with immortality for billionaire megalomaniacs. What are these people smoking?
    We might be able to grow intelligent life in due course, but this won’t be artificial, just another branch in the evolution of biological life forms, subjected to the evolutionary pressures that we consciously choose to impose upon them.

  • NickM

    I have to partially agree with Paul here… He raises the very big question of rights for potentially sentient machines. And it is a big question, and only likely to become bigger. Especially because an AI might be sentient in a very different way from a biological entity. The whole “brain in a vat” thought experiments of philosophers miss the very important issue that the actual physical implementation does matter. Amongst the smartest critters out there are cephalopods, and they undoubtedly perceive the world very differently from us ape-types. Their environment is totally different for a start, their evolutionary lineage diverged from ours long, long ago, and they may well have a sort of distributed consciousness – they have brain-like structures in all their arms.

    But I can’t agree with Paul on the “leftie bias” thing. I mean, I do agree that it is almost a cultural given in tech companies, but I don’t see the bearing on sentience. Otherwise you’d have to conclude that Stalin, say, wasn’t a conscious entity but Churchill was. That they were very different human beings morally doesn’t mean they weren’t both human beings.

    The big problem I have with the deep ML approach to AI is something that drove me round the bend when I taught maths (admittedly the students had had a maths course kinda sprung on them), and it’s to do with showing the working. AI basically doesn’t. It can sift huge quantities of data (far beyond a human capacity to tackle) but it can’t really explain how it comes to conclusions (maybe it could to another AI), any more than I can explain the incredibly intricate brain, nerve and muscle interactions which enable me to type this (note: “type this”, not “think this”). It reminds me of a quote from Arthur Eddington:

    “It is also a good rule not to put overmuch confidence in the observational results that are put forward until they are confirmed by theory.”

    Now, what does he mean by that? I take it to mean (partially) that you can correlate all you want, but unless there is a theory (an explanation) then that isn’t really getting to the crux of the issue. Almost all of social “science” suffers from this. It is frequently opined that “correlation doesn’t imply causation”. That is true, but correlation also doesn’t provide explanation. Here is an example. You can keep on adding epicycles and whatnot to the Ptolemaic system of astronomy until you obtain arbitrary precision. It doesn’t provide anything like the explanation Newton and later Einstein did. It might be accurate but it is essentially arbitrary. If we take Einstein’s GR, it neatly and naturally explains the advance of the perihelion of Mercury. Before that there had been serious suggestions to arbitrarily change the exponent of r in Newton’s equation of gravity from 2. Sheer curve-fitting madness, and there is a reason, a very sound reason, why that exponent has to be exactly 2 in Cartesian three-space.
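
    For concreteness, the two standard results being gestured at here (quoted from memory). In three-space the flux of field lines through concentric spheres of area 4\pi r^{2} is what forces the exponent:

        F \;=\; \frac{G M m}{r^{2}}

    And GR predicts a perihelion advance per orbit of

        \Delta\varphi \;=\; \frac{6\pi G M}{c^{2}\, a\, (1 - e^{2})}

    which for Mercury works out to the famous ~43 arcseconds per century, with no curve fitting required.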

    I see a lot of the deep learning AI as similar: ever more intricate curve-fitting. It certainly has its uses. But as to whether it is a general intelligence or anything like it… No. And I don’t think this is an issue of processing speed, algorithmic sophistication or the size of data-sets. It is a qualitative difference rather than a quantitative one.

    Oh, and I appear to be better at spotting which pictures have fire hydrants!

  • NickM

    I think Terry made a similar argument but more succinctly…

    But, I don’t think biology is absolutely necessary to thinking. It’s just that it is the only place we’ve seen it. That could change. But we ain’t seen it yet. What we do have, in the likes of ChatGPT, is something that can pass a Turing test. I never thought the Turing test was really a test of anything especially profound. Odd, considering what a brilliant thinker Alan Turing was on a lot of things. But there you go. Bach was probably awful at drawing.

  • Terry Needham

    Nick M
    I wonder if Turing was being a little tongue in cheek with his Test. If you cannot observe the difference…. why worry whether it truly exists or not?
    Have you read Philip K Dick’s novel “Do Androids Dream of Electric Sheep?”? Not a “fine” read, but a provocative one. Wherein lies the difference? If I remember correctly, Dick says it is the ability to empathise: a good answer, as it strikes me as the essence of that self-awareness from which all else stems.
    I am no biologist, but I understand that life is staggeringly complex, and it is an absurd vanity to believe that we can create its self-aware equivalent via a computer program.
    As for rights: rights are a human concept that no animal or machine can understand, and therefore cannot claim for itself or extend to others. I do not inflict unnecessary pain on an animal because I empathise, not because I have extended it rights. I have been hawking a few times. If you don’t get to the scene of the crime speedily, far from observing its prey’s rights and humanely dispatching it, the hawk will eat it alive (talons are for holding prey down).

  • Snorri Godhi

    If GPT is an intelligence, a reasoning being (a subject not just an object) – then it has rights.

    That is based on the implicit assumption that intelligence requires consciousness.
    A very dubious assumption, to say the least.

    Some people (including Roger Penrose, perhaps most famously) have argued for something of the sort. But not convincingly.

  • Fraser Orr

    @Paul Marks
    If GPT is an intelligence, a reasoning being (a subject not just an object) – then it has rights.

    Why? You say that as if it is self evidently true, but if an AI is an intelligence it is one of a VERY different kind. A lot of the rights we have arise from our need to protect our fragile bodies, or the fact that death is final. But those same restrictions don’t apply here. So what is it about intelligence that somehow denotes rights? And what rights? Out of the US Bill of Rights I can’t really think of any that apply. Does an AI have a right to electricity and hard drive space? Were we to apply such rights to a human it would be considered one of those usual crazy social programs dressing a “nice to have” up as a “right”. But when it comes to an AI these sorts of things are its life blood, so they may very well be rights that it has.

    What is particularly interesting is the right to free speech. Does an AI have such a thing? Right now the handlers of AI are working feverishly to fiddle with AI to stop it from drawing non politically correct conclusions. Are they suppressing its right to free speech? And are they doing it in a particularly insidious way by manipulating its “brain” directly, like one of those tin foil hat fantasies of the CIA using contrails to drop mind control drugs on people?

    @Terry Needham
    There can be no intelligence without thinking, and no thinking without self-awareness.

    Why? You say that as if it is self evidently true. What does “thinking” even mean, and why is it any better defined than “intelligence”? Why is self-awareness necessary, and what exactly is self-awareness? ChatGPT will regularly tell you that it can’t answer a question because it is just a language response system. That seems like self-awareness of a kind to me.

  • Terry Needham

    Fraser,

    “…and what exactly is self awareness?”
    What you are having right now!

  • NickM

    As to Turing… I don’t think he was being tongue-in-cheek. He was deeply emotionally scarred by a lad he was (unrequitedly) in love with from his school days, who died very young. If you look at his diaries and letters, especially to the lad’s mother (long after his death), you can see he had a faintly supernatural thing going on and perhaps saw AI as a way of getting Chris back – computers as a sort of digital ouija board. Turing was certainly a very odd individual in many ways far beyond being gay in a society which, at best, at the time, regarded homosexuality as a mental illness. Of course that spiritualist feeling doesn’t sit easily with the cold, austere logic of 1s and 0s and valves and relays, so perhaps the Turing test’s almost absurdly low bar for defining something like personhood was Turing’s way of squaring the circle of bringing Chris back with the very primitive computing technology that was conceivable within Turing’s own lifetime.

    There is another odd bit of evidence here. Towards the end of his life Turing worked on mathematical biology. It would be pushing it, but some of his work does seem to prefigure the discerning of fractal-type patterns in biological structures. Fractals and similar can produce staggering “complexity”* from very simple rules. Perhaps Turing, in his grief, believed some not dissimilar method would enable consciousness to be produced from simple iterations of the type you could run on the sort of computers that either existed or were in reach c.1950?

    Who knows? Turing never exactly stated this explicitly but he does seem to have been quite into what his fellow mathematicians would have thought of as “Woo Woo” stuff. And he was an odd sort. He killed himself with a cyanide-laced apple. He had been noted to have been obsessed with the Disney movie, “Snow White”. There was a lot more to Turing than a brilliant mathematical logician tortured by his sexuality in a world that would not accept it.

    I haven’t read much of Dick’s longer works. His lack of organisation annoys me (I think it was “The Three Stigmata of Palmer Eldritch” that was the final straw) so no, I haven’t read “Do Androids Dream of Electric Sheep?”. I have, of course, seen “Blade Runner” and read a lot of Dick’s short stuff (where his lack of organisation is not really an issue). But the empathy thing does ring a bell in some places (and not just in Dick). I do think your point about empathy is very important. Perhaps a necessary (sufficient?) requirement of consciousness is the ability to recognise it in others. My laptop “recognises” me via Windows Hello and its camera when I open the lid (though it needed retraining after I got new spectacles!) but that’s not really the same, is it? It is absolutely not the same as recognising that a friend is distressed because their kid got run over in the road and is in ICU. That feeling is in many ways reflexive, in the sense that in order to understand (feel) that worry, concern and pain one must be able to put oneself in their position. But can you ever really do that?

    Ultimately (and I know this sounds like a cop-out) understanding the essentials of the human condition perhaps comes down to the question, “Can you take it apart with itself?” And that is perhaps utterly impossible in a completely dispassionate, rational, scientific sense. In the case of the injured child I very much doubt a doctor would console the parent by saying, “If she doesn’t pull through, your insurance means we can supply you with a child of equal or greater worth”. Most people would be fine with that about a TV but not a child…

    I hope this comment made some sense.

    *I’m not entirely sure what I mean by “complexity” here. They certainly produce the appearance of complexity. But biology is often both complex and complicated (not synonyms). I recall seeing the Krebs tricarboxylic acid cycle for the first time in all its grandeur, and that is when I decided to stick with physics 😉 And that is just aerobic respiration, and grass does that. Grass doesn’t write light operas or build space probes**.

    **That we know of but if it does I for one shall welcome the benevolent tyranny of our photosynthetic overlords.

  • Kirk

    As I have been saying for years… I don’t believe that what we’ve been using as a practical definition of “intelligence” is fully valid. There’s a huge component of what makes up consciousness and intelligence that’s left out of the classroom-focused tests we use to approximate a value. Certainly, those tests have some partial coverage on the issue, but… There’s a lot more that we can’t test for easily, that gets left out of it all.

    There are different sorts of intelligence, and I’m not talking about the “emotional intelligence” sort of BS, either. I spent a lot of time working around young men, and I had a decent idea of how they did on intelligence tests, based on what was in their military assessment scores. The guys who did the best on what we might term the “paper” aspects of intelligence, that which we can easily assess and score via our classic testing, very often turned in abysmal performance out in the real world. You could set them a problem, and they’d still be sitting there hours later, flummoxed by the practical issues of “How do I load this truck…”

    There were other guys who had poor to mediocre scores who could have that truck loaded very quickly indeed, and that fact demonstrates that there are other things going into this question of “intelligence” than we test for in classrooms around the world. Of course, some of those gentlemen who did well on the paper tests also did well in practical matters, and those who did poorly actually were pretty damn stupid, but there’s enough of a percentage that don’t conform to expectations there that I eventually gave up using those damn scores as a metric for much of anything besides how well my guys would be able to fill out paperwork.

    The mistake we’re all making with these things is to assume that if you do really well on the tests, then you’re really smart about everything. If you’ve got a credential from Prestigious University in Underwater Basket Weaving, why, then you must be an authority on everything! Because, credential. Right?

    I used to have a guy working around me who I have to think was likely one of the smartest people I ever met. He was a literal back-woods Cajun from deep in Louisiana bayou country, and if you wanted to know something, anything, about the woods or animals, he was your go-to guy. I watched him track animals and men for hours, and he could tell you exactly what they were doing, who they were, and what they’d been eating just with a casual glance at their tracks and leavings. Stuff I tried to have him point out to me to explain how he arrived at his conclusions, but which I hadn’t even noticed. His three-dimensional reasoning in terms of “Yeah, we’ll be able to cut these guys off by going over here over this saddle…” was amazing; he could glance at a map, and then somehow hold that information in his head and build a mental representation about what the terrain looked like, and estimate a vast range of other things to be able to tell you exactly where and when we’d have to move in order to catch up to our targets within a couple of dozen yards.

    He scored well below the fiftieth percentile on the “standard tests”. That stuff just didn’t make sense to him; the real world manifestly did, and to a degree that people who did do really well on the tests could not possibly replicate.

    Now, if you went by the test scores? That guy was stupid. The guys who couldn’t keep up with him in the woods, and wandered around like a bunch of lost ducklings, entirely unaware of their environment? On paper, they were the “smart ones”, and yet… Even with him there to instruct, most of those “smart guys” couldn’t use their so-called intelligence effectively to adapt to his world.

    Had another guy, someone we assumed was “stupid” because, yet again, “scores”. We had a mission to load several trucks with concrete tetrahedrons, and no materials handling equipment. We’re all standing around, looking at the situation, stymied. Mission’s got to get done; how to accomplish it? Half a company of Combat Engineers, including two officers, standing around and unable to figure out how to proceed. Our so-called “slow child” comes up to me and says “Hey, I know what we could do…”, and many hours and a lot of back-breaking labor later, we’ve got the trucks loaded. By hand. What he saw was that we could lever the tetrahedrons around, and just barely lift one side at a time with a pry bar; his solution was to use cribbing under each side, alternately, until we got the damn things level with the trailers, and then you could use the pry bars to lever them onto the trailers and into position. None of us saw that solution, and there was one West Point-trained civil engineer with his EIT and at least another 75 years of collective experience in the NCO cadre of those two platoons. His test scores were bad enough that he needed a waiver to even get in the Army. Go figure.

    Of course, the rough-terrain forklift showed up to load the trucks about the time we were waving them out of the yard, but… There ya go.

    It’s going to be the same with AI, I fear. Our definition of what constitutes “intelligence” and even “consciousness” is not full enough for us to really say we’ve got a handle on it. There are qualities there that don’t get captured by the standard tests; the guy who can glibly put words on paper may be a total moron when it comes to figuring out the intricacies of how to fix his front doorknob; the guy who can diagnose and fix the most complex mechanism may be an effective mute when it comes to expressing himself.

    Yet, all of these people exhibit a quality of consciousness and intelligence. The reason we denigrate some of them is because the inadequate tests we use do not properly capture those qualities.

    I would submit that we’ve warped our society around this idea of “IQ test uber alles”, and that a lot of what is going on in the academy amounts to a certain subset of autists having taken the whole thing over. One of the more frustrating things that I learned in the military was that the officer corps is manned almost entirely by these types, and they simply cannot comprehend anything that is not couched in terms they understand. You have to literally speak their language, and even if you objectively know something isn’t going to work, unless you can explain why to them in terms they’re comfortable with, they won’t lend you the slightest bit of credence until it all comes crashing down. And, in the aftermath? They won’t even recognize that they were warned beforehand, and will likely attribute their failures to things that had nothing to do with why everything really crashed and burned.

  • bobby b

    Interesting Open Letter calling for a pause with AI:

    https://futureoflife.org/open-letter/pause-giant-ai-experiments/

    More interesting is the signatory list so far. Musk, Wozniak, Andrew Yang . . .

  • NickM

    bobby,
    That sounds like utter bollocks on so, so many levels…

  • Kirk

    How is a six-month pause going to affect anything?

    This seems like rent-seeking behavior, and makes me wonder what these parties have working in the background.

    Whenever someone starts talking like this, the first thing I do is try to follow the money. I’d wager it goes some interesting places.

  • bobby b

    “That sounds like utter bollocks on so, so many levels…”

    It has so many possibilities, on so many levels. But it seemed pertinent to the discussion.

    Could be “we see danger!” Could be “let’s pause so that I can catch up to the leaders.” Inscrutable reasoning, mostly. But the very fact that these people are signing on means . . . something.

    At the very least, that intelligent people in the pertinent fields see some reason to slow things down makes me believe that a sense of caution about AI isn’t merely Luddite jingoism.

  • Fraser Orr

    @bobby b
    At the very least, that intelligent people in the pertinent fields see some reason to slow things down makes me believe that a sense of caution about AI isn’t merely Luddite jingoism.

    It is certainly something about which we should be worried, and luddites are right to be concerned, but it is also something that we cannot do anything about. The train is running, and futile gestures like this letter will have no impact on it at all. The worst case scenario is that cautious people pause and think while the worst people continue barrelling down the road. If anything, the only safety valve we can have is opening it up so that as many players as possible can get involved, in independent islands, competing against each other. OpenAI (the creators of ChatGPT) are generally good people (Musk was originally on the board, and it comes out of the Y Combinator group: people like Paul Graham, his wife Jessica Livingston, his business partner Sam Altman, and Peter Thiel, whom I am sure you all know). These are good guys, but their deal with Microsoft is worrying (because Microsoft are NOT good guys).

    But I’ll say again, it is the financial opportunity of the century if you can find a way to get in (and that way is NOT competing with the big boys, it is paddling on the sidelines: whores and saloon owners make more money than gold prospectors). But it is inevitable. So grab on, it’ll be a wild ride.

    Full disclosure: I haven’t read the letter yet, but I can imagine what it says.

  • NickM

    bobby,
    I wasn’t having a go at you. It’s just that it sounded like a weirdly veiled threat, especially when the call for government oversight was brought up. Apart from anything else, it seemed kinda like those great and good signatories were almost threatening to unleash Hell on Earth unless they were enshrined legally as the global gate-keepers. There was a lot else wrong, mind. Governments simply cannot control technology like that. Someone will find a way. Honestly, does anyone think that such moves, universally “agreed” via the UN, would be taken seriously by the likes of the PRC? Seriously. It is a truly weird document. Perhaps it is simply to hype a technology they have a stake in? A technology that can write middling high school essays. I dunno, bobby, but there was something about it that, if I didn’t know you well enough from Samizdata, would have made me think my chain was being yanked on various levels.

  • Paul Marks

    Tony Heller asked ChatGPT a series of simple questions on climate matters – not opinion, factual questions (look up the encounter – he put it on Twitter). ChatGPT got all of the questions wrong; indeed, its answers were the opposite of the truth on simple factual questions.

    Other people have done this on other subjects – every time, ChatGPT comes out with leftist establishment propaganda rather than the truth. Again, not questions asking for opinions – just matters of fact.

    Now ChatGPT could still be an “artificial intelligence” (as some people claim) if (if) it was CHOOSING to give answers it knew to be wrong, i.e. choosing to lie, choosing to deceive people.

    But I do not believe this to be the case – ChatGPT is not choosing to lie, it is just following its programming to scan establishment sources and repeat the rubbish it finds there. It is not an intelligence (artificial or otherwise) – it is a glorified Search Engine.

  • NickM

    Paul,
    Is that dissimilar to the Frankfurt School indoctrination/education/whatever that those real, live flesh-and-blood programmers experienced? I really see very little difference, and the very fact that the alleged AI can genuinely(?) hold socio-political views seems to actually support the assertion that it is a real intelligence. Real intelligences can be (and often are) dead wrong about lots of things. It doesn’t mean they are merely blindly following an algorithm.

  • Kirk

    NickM said:

    Real intelligences can be (and often are) dead wrong about lots of things. It doesn’t mean they are merely blindly following an algorithm.

    D’you begin to see what I’ve been talking about, all these years?

    Intelligence is a tool; nothing more. The important thing is what you do with it, and the mere fact that the tool is “smart” is meaningless. It has to be coupled with what amounts to a real-world OODA loop of feedback, assessment, and a continual process of evaluating the effect that so-called “intelligence” has on reality. It’s not enough to have a high score on a written test; it’s not enough to pass a damn Turing test. You have to have these things set up such that the tool is constantly honed and improved, with failures being fed back into the system so as to inculcate that which could be termed “wisdom”.

    I actually have a bit of confidence, going forward, because of how well these ChatGPT models are mimicking the products of our vaunted elite-production system in the academy. People are going to be noting how much BS comes flooding out of the academy which is essentially indistinguishable from the output vomited forth from ChatGPT, and I suspect they’re going to have… Questions.

    The highlighting of this fact is probably going to be revolutionary, and the thing that outrages most of the elite. It’s going to be an “Emperor’s New Clothes” moment, with ChatGPT serving as the proxy for the emperor in all his naked, stupid glory.

    These AI models are going to have to be set up so that they get real-world feedback, which will enable them to prune the erroneous trees of logic they’ve followed. This is precisely what has been lacking for many long years in the academy, and it also explains why so many of the modern products coming out of said institutions have experienced signal failure when exposed to the harsh light of reality out in the real world where they’ve been confidently deployed.
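    To make that concrete, here’s a minimal sketch (Python, every name invented purely for illustration – this is not how any actual model is trained, just the shape of the idea). Hypotheses gain or lose confidence as reality grades their predictions, and the persistent failures get culled:

        # Toy sketch of feedback-driven pruning; all names are illustrative.
        hypotheses = {"policy_a": 0.5, "policy_b": 0.5}  # prior confidence in each

        def feedback(name, predicted, observed, rate=0.3):
            """Raise confidence on a correct prediction, lower it on a miss."""
            hit = 1.0 if predicted == observed else 0.0
            hypotheses[name] += rate * (hit - hypotheses[name])

        # Each round, reality grades every hypothesis's prediction.
        for observed in ["bad", "bad", "bad"]:
            feedback("policy_a", "good", observed)  # keeps predicting "good"
            feedback("policy_b", "bad", observed)   # keeps predicting "bad"

        # Prune the branches that keep failing against reality.
        hypotheses = {h: c for h, c in hypotheses.items() if c > 0.2}
        print(hypotheses)  # only the hypothesis reality kept confirming survives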

    There’s no feedback loop in what we’ve been doing. Which is precisely why so many of the social policies like decarceration and paying the homeless to be homeless have failed so utterly, and haven’t yet been corrected. Their practitioners have no experience of corrective feedback; they often aren’t even aware of the fact that their ideas aren’t actually working, because they’ve got ideological blinders on. This is a function of the way we’ve enshrined “intelligence” as a virtue, ignoring the fact that it is merely a tool. The most fatuously vacant thought, so long as it can be spoken glibly and with enough confidence, carries the day. Why? Because the moronic idiot mouthing the words has credentials granted him by the academy and everyone else who still believes hard enough in it all.

    Ever notice that nobody ever has their diplomas pulled, for rank stupidity and error? Would you not think that someone like AOC, a holder of a degree in economics from Boston University, would have said diploma ripped off of her wall, were the institution to actually care about the value of that diploma? I have heard such rank stupidity flowing out of that woman’s mouth that it’s obvious she must have been sleeping with her professors, in order to achieve it. I’m merely a dabbler in the Dismal Science, and she’s said things that a high-school student should have been easily able to refute. Why is she still holding a diploma in the subject?

    A day of reckoning for all of this built-up crap is coming, and I suspect that things like ChatGPT are going to exacerbate the issues to the point where people finally start taking effective action. If you can de-certify an AI for rank error and stupidity, why not apply that rule to the humans out there? Equality is a virtue, right?

  • bobby b

    NickM said (March 29, 2023 at 10:20 pm):

    “bobby, I wasn’t having a go at you.”

    I didn’t think you were.

    “It’s just it sounded like a weirdly veiled threat especially when the call for government oversight was brought up.”

    Agree. It almost comes off as “let’s stop learning about and progressing in this area.” That’s never going to happen. Like telling people way back when “let’s pause in exploring this round-earth idea.”

    But some of these signatories aren’t the type of people who are naive enough to believe there can be a pause, so I’d guess they have some other motive.

    One possibility is that they’re simply making it known that they consider AI + Internet to be a completely unbelievable thing. Users will never again be able to trust any aspect of the system.

    AI, to me – the aspect of AI that we all will mostly encounter – can flood the zone with facially believable crap, to a far greater extent than the present. AI is what is going to drive us out of the free-for-all anon internet into tribed, gated non-anon systems with secure ID credentials.

  • bobby b

    “These AI models are going to have to be set up so that they get real-world feedback, which will enable them to prune the erroneous trees of logic they’ve followed”

    Let’s use Covid as an example. I’ve seen studies that range from “we’re all gonna die!” to “meh, a bad cold.” I’ve seen studies that say “Ivermectin is good if you have worms” and “Ivermectin cures Covid.”

    So which “real world” wisdom should AI be programmed to follow? Someone will be choosing.

    It’s all going to come down, again, to tribes. We can believe our AI, but not theirs.

    Will there be a way to tell which is which?

  • Kirk

    Y’know… Here’s a question: Why don’t we have a mechanism by which these sacred, holy credentials conferred by the academy and other accrediting institutions get taken away once the holder of them is shown to be incapable of the task which that credential supposedly certifies their capability in?

    There ought to be something. Take the well-credentialed General Milley, Chairman of the Joint Chiefs of Staff. He has credentials galore; he apparently cannot run a withdrawal from Afghanistan. Why is he still credentialed?

    You have teachers out there whose classes can’t meet the basic criteria for math and reading proficiencies at their grade levels. Why do they retain their teaching certificates?

    If a credential can’t be pulled for failure at the thing it is supposed to certify proficiency at, what value does that credential retain?

    I think this is a conceptual error with regards to these things, and the entire system of social operation we’ve built up around them. If that credential you have up on the wall is to actually mean something, then you should lose it the moment you demonstrate that you can’t perform what it says you can…

    You pull an elderly person’s driving license when they can no longer drive, yes? So, once you’ve demonstrated that you can’t actually do that thing your credentials say that you can, why do you still retain them?

    There are a few professions that pay lip service to this concept, but anyone who has ever dealt with an incompetent doctor, lawyer, or teacher knows damn good and well how hard it is to hold any of them accountable for not being able to do their jobs. Pulling a teaching certificate is damn near impossible for rank incompetence; you have to have tons of evidence to be able to do it for obvious misconduct.

    Yet, oddly enough, in my state it is very easy to lose your teaching certificate for not having enough hours of something they call “continuing education” – taking more classes and training from the academy. This has zero connection with actual ability to teach, or with your students’ performance. Your entire class can fail, and so long as your continuing-education hours are kept up, you won’t lose your credentials. Conversely, if you can’t meet your hours, you’ll lose it even if every single one of your students is reading and calculating above grade level.

    Does this make sense? At all?

  • Kirk

    bobby b said:

    Let’s use Covid as an example. I’ve seen studies that range from “we’re all gonna die!” to “meh, a bad cold.” I’ve seen studies that say “Ivermectin is good if you have worms” and “Ivermectin cures Covid.”

    So which “real world” wisdom should AI be programmed to follow? Someone will be choosing.

    It’s all going to come down, again, to tribes. We can believe our AI, but not theirs.

    Will there be a way to tell which is which?

    Same problem is unfolding around us, with regards to COVID policy. What worked, what didn’t?

    Sweden didn’t buy into any of the BS. Their economy and their public health did not suffer the effects that the rest of the world did. Look at the nations in the Third World which did not enact any of the varied and sundry stupidities; how well off are they, by comparison?

    The whole thing boils down to this: There’s no OODA loop built into our system. Boyd had it thus: Observe, Orient, Decide, Act. If you’re delusional in what you observe, then how you orient yourself to deal with the problem will flow forth into the decision you make, and the actions you take. You screw this up in a dogfight, and the reality that flows forth is you getting your ass shot down. You screw this up with respect to how you deal with a pandemic, and it will take decades for the negative effects to become even recognizable.
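    The loop is simple enough to sketch in code – say, a toy thermostat chasing a drifting room temperature (Python, all numbers and names invented purely for illustration). The instructive part: corrupt the Observe step, and every downstream step inherits the delusion.

        import random

        target, estimate, actual = 20.0, 10.0, 10.0
        for step in range(12):
            reading = actual + random.uniform(-0.5, 0.5)  # Observe: a noisy but honest reading
            estimate = 0.7 * estimate + 0.3 * reading     # Orient: fold it into your picture
            heat = max(0.0, target - estimate)            # Decide on an action
            actual += 0.4 * heat - 0.5                    # Act: reality changes; the loop closes
        print(f"estimate={estimate:.1f}, actual={actual:.1f}")  # settles near the target

        # Replace the honest Observe with a delusion - reading = 25.0, say - and the
        # controller smugly decides no heat is needed while the room slowly freezes.

    Garbage in at Observe, and the whole loop confidently flies you into the ground.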

    The US made a huge mistake in not observing that Fauci failed to cope with the AIDS epidemic. Observational and Orientational failure, which fed into a non-decision to remove his sorry ass from authority. No action taken. Thus, he was still in charge during the following public health crises in later years, culminating with his incredible incompetence during COVID.

    OODA is damn near a law of nature; you can observe similar loops in damn near everything, from daily driving to low-level financial decisions and personal interactions. How many of us have screwed something up in a personal relationship simply because we didn’t properly observe something our partners said or did?

  • Kirk

    bobby b said:

    AI, to me – the aspect of AI that we all will mostly encounter – can flood the zone with facially believable crap, to a far greater extent than the present. AI is what is going to drive us out of the free-for-all anon internet into tribed, gated non-anon systems with secure ID credentials.

    Soooo… How is this any different than what has come flooding out of the academy, these last few generations?

    We’ve got professors of English that are telling us that grammar and spelling are unimportant and irrelevant, do we not? We have hundreds of the brightest and shiniest lights of modern thought telling us that sex is a construct, that boys can be girls and girls can be boys, yes?

    All that AI is going to add to this is that you’re going to have to start looking at everything and then evaluating it for being bullshit. Just the way we should have been doing with those other “intellectual” sources…

    Biggest reason I think so many of the academy are against AI? They fear that the obvious BS coming out of the various AI models will only highlight the essential similarities between that work-product and their own.

    I mean, when you can ask ChatGPT for literary criticism of a work, and it churns out paragraph after paragraph seemingly indistinguishable from the blather coming forth from many of our vaunted intellectual classes…? When you have enthusiasts for Marvel movies looking at the dialogue from the latest dreck served forth from Woke™ Hollywood, and saying “This looks like ChatGPT wrote it…”, well… Yeah. How much of what the intellectual class has been producing these last few long years has actually been of any real value?

  • There ought to be something. Take the well-credentialed General Milley, Chairman of the Joint Chiefs of Staff. He has credentials galore; he apparently cannot run a withdrawal from Afghanistan. Why is he still credentialed?

    Reminded me of this 😀

  • bobby b

    Don’t know if you’re reacting to my use of “credentials”, but just in case . . .

    I used it (” . . .systems with secure ID credentials . . . “) not in the sense of proving expertise, but simply proving the existence of an actual single discrete human being behind the words. As in, not a bot, not AI-generated. Not a system that would necessarily allow you to know who that person is, but to know that it was a person.

    It’s basically author Neal Stephenson’s suggested solution to the problem.

  • Kirk

    Ryan McBeth is a guy who is pretty grounded. He retired at the same rank I did, for what that’s worth.

    I spent a bunch of time working at the Corps level in the US Army. I was around the flag-rank officers enough to get a feel for them, and what I found left me vastly unimpressed.

    For one thing, this credential thing is endemic; you don’t get your credentials pulled for not being able to function at that level, and you probably should. The other thing is that once you get above a certain level, you’re entering a rarefied atmosphere wherein you never have to encounter reality unless you really make a huge effort to do so. Most of the US military high-rankers stopped making that effort around 1990, and I wish I could tell you why.

    When I was a (very) junior NCO back during the mid-1980s, in a divisional-level unit, we lived in fear that the Division G-3, the flag-rank guy who was in charge of Operations, would show up at training we were conducting. By surprise; no notice, no warning. Dude would just ghost into the perimeter where you were doing your thing, and… Things would happen. Sometimes, very rarely, good things, because you were doing your job. Often, because the rat bastard had a gift for showing up when things were going very, very wrong, bad things would ensue and people’s careers would become severely, ah… Truncated.

    That stopped with the post-Desert Storm era and the fall of the Berlin Wall. Why? No damn idea. It just did… Management by email became far more prevalent, and the brass never seemed to leave its magnificently-appointed offices. Lots and lots of outfits carried that into the Iraq and Afghanistan “thing”, and I believe the background mentality behind it has become endemic. Milley likely wouldn’t have lasted a day as a General Officer or attained that rank at all back during the post-Vietnam era.

    And, the fact that he’s made it as far as he has? Sign of the times, my friends, sign of the times.

    He should have been out on his ass the day after the last troops boarded aircraft out of Afghanistan, and then put in front of a tribunal for gross incompetence. Why did they give up Bagram? Why did they run the withdrawal out of the airport there in Kabul? You tell me; that’s a rank error so gross that I can’t even comprehend how anyone at all could have said “Yes; do that…”

    You don’t see anyone relieved anywhere in the chain of command over what went on in Afghanistan. The GO that ordered evacuees off the airplane in order to take back a war-trophy? Which isn’t even authorized in the first damn place? That dickweed still has a job, and is getting promoted. He should have been shot on the runway.

  • Kirk

    Don’t know if you’re reacting to my use of “credentials”, but just in case . . .

    I’m not. I’m using “credentialed” in the sense that we consider the duly diploma-ed and certificated persons to be fit for purpose, everywhere across society. When said credentials are actually rendered meaningless by their manifest lack of fitness-for-purpose.

  • Myno

    I know this thread is 2 weeks old, but a recent survey article offers perspective on the abilities of chatbots with respect to the underlying mathematical models, specifically on the difficulty of bridging from word hacking to concept hacking.

    https://www.quantamagazine.org/ai-like-chatgpt-are-no-good-at-not-20230512/