We are developing the social individualist meta-context for the future. From the very serious to the extremely frivolous... let's see what is on the mind of the Samizdata people.

Samizdata, derived from Samizdat /n. - a system of clandestine publication of banned literature in the USSR [Russ.,= self-publishing house]

The danger that AI creates more “echo chambers”

As debate continues on the real or alleged benefits/threats of AI, one slogan we often hear is “garbage in/garbage out”. For me, intelligence has to be able to “self-start” – to know how to ask an insightful question, even to consider whether a question is worth asking in the first place. I don’t see much sign of that yet.

Can AI do more than just repeat existing information in new combinations? Can it actually distinguish rational from irrational ideas, grasp the need to reconsider certain premises, understand fallacies, and so on? Can it think about how it “thinks”? Does it have the ability to distinguish honest mistakes from bad faith, to know when it is being “fed a line” by propaganda? Does it know how to float hypotheses or arguments that might be offensive to some but which get others to think outside a comfort zone?

To be honest, my experience of what AI does so far is that these questions all come back with a big, fat “no”. I thought this when watching a short Sky News item about how AI might be able to perform some of the tasks of a journalist. (The clip is just over three minutes long.) One of the tasks the AI was given was to produce a programme about climate change issues (take a wild guess what the tone of this was). And as I immediately thought, this device was fed a set of assumptions: that the world is on a path to dangerously hotter weather, that there are human causes of that (to some extent), that this situation requires changes, etc. Now, it may be that this is all honest fact, but regulars on this blog know that the alarmist Man-made global warming case is controversial, not settled fact. They also know that there is now a new approach, encouraged by writers such as Alex Epstein and entrepreneurs such as Peter Thiel, to reframe the whole way we think about energy: to embrace a “human flourishing” and “climate mastery” perspective, to get away from thinking of the Earth as a “delicate nurturer”, and to abandon the idea that human impact on the planet must be avoided as much as possible. Humans impact the Earth – that’s a great thing, not something to be ashamed of.

I had very little confidence, from watching this TV clip, that a computer would have incorporated these ideas. There is a real risk as I see it that, depending on who writes the source code, such AIs repeat the fashionable opinions, often wrongheaded, of the day. Certain ideas will be deemed “wrong” rather than evaluated. There will not be the curiosity of the awkward and annoying guy who pesters politicians at press conferences.

AI has many uses, and like the American venture capitalist rainmaker Marc Andreessen, I am broadly supportive of it as a way to augment human capabilities, and I don’t buy most of the doom predictions about it. Ultimately, AI’s value comes from how we use it. If we use it simply to reinforce our prejudices, it will not add value but destroy it.

52 comments to The danger that AI creates more “echo chambers”

  • DiscoveredJoys

    I regard the current bunch of AI text generators as comparatively sophisticated language processors, rather like the old text adventure games. They are not thinking machines but reaction machines.

    So you can provide much more ‘input’ and receive much more response – that has been ‘built’ without direct human artifice. But underneath you are still playing a text adventure:-

    AI: “You are sitting at a desk trying to write an article. What do you want?”

    Our author enters: “An article about heroism in the medieval period.”

    AI Reply: “A little dwarf just walked around a corner, saw you, threw a little axe at you (which missed), cursed, and ran away.”

  • William O. B'Livion.

    There is a real risk as I see it that, depending on who writes the source code, such AIs repeat the fashionable opinions, often wrongheaded, of the day.

    Last bit first – most people, at all levels and positions of society, repeat the fashionable, often wrongheaded opinions of the day. This is for the same reason that the LLM-style AIs do it – it’s the information they have that is “vetted” by “experts”, so it might be true, and if it’s not true, then it’s still the safest thing to say.

    There’s probably a corollary to the Gell-Mann amnesia effect here – most people know how often the media’s trumpeted “experts” in THEIR field are wrong, but then accept the media’s experts in other fields.

    First bit second: it’s not the source code, it’s the material the AI is trained on. Yeah, at one level that’s a quibble. On another level it’s HUGELY important, because as difficult as it is to audit source code, it CAN be done. Auditing tens of terabytes of “training material”, OTOH, is impossible.

  • There is an old Burma Shave sign-set. For those of you who don’t know what Burma Shave is, or their signs, it’s a shaving cream – and one of the most effective sales gimmicks ever. Do a web search on “Burma Shave”. If you wonder what the verses are, there’s a wonderful book, The Verse by the Side of the Road by Frank Rowsome, Jr. I have a copy (it’s available on Amazon) and it’s filled with little poems you can find surprisingly useful.

    Substitutes
    Would irk a saint.
    You hope they are
    What you know
    they ain’t.
    Burma Shave

    This applies to artificial intelligence, also. It’s a substitute, and a poor one. But for some uses, it can be enticing – until you find out that it doesn’t have any smarts. One fool lawyer relied on ChatGPT to write a filing he submitted to the court. Which, it turned out, was full of bogus citations that the “AI” had simply made up. It may or may not irk a saint, but it sure irked the judge.

    https://reason.com/volokh/2023/05/27/a-lawyers-filing-is-replete-with-citations-to-non-existent-cases-thanks-chatgpt/

    Any improvement in AI is as likely to be in covering its tracks as in genuine intelligence. Meanwhile the bogosity of the net gets worse and worse, because more people are using AI to write posts and articles.

  • Snorri Godhi

    Yet again, somebody at Samizdata confuses ChatGPT with AI!

    I know that i am extreme in thinking that ChatGPT, strictly speaking, is not AI at all; but surely it is not difficult to see that ChatGPT is, at most, a subset of AI.

  • Snorri Godhi

    PS: William O’B. makes good points.
    Especially the last bit first.

    As for the first bit second: it is not incorrect, but it does not acknowledge that the problem is intrinsic to the ChatGPT-type approach to AI.

  • I wouldn’t say I’m confusing ChatGPT with AI — it’s being called many things, and confused for many things, and AI is one of them. We won’t have real AI for a very long time, because humans keep moving the goalposts. Decades ago, scientists said that if a computer could play chess or go, it would be “intelligent”. And then the damn things did, and we realized it wasn’t sufficient.

    Right now we have enormous databases, filled with selection bias, and buggy pattern recognition software. Google could do that years ago, but it’s not intelligent. In a restricted field — chess, go, interpretation of X-ray images or electrocardiograms — a well-trained pattern recognition machine is great. But “intelligent” isn’t the right word, even if it keeps being used.

  • Barbarus

    Further to William O’B’s “first bit second” – it actually gets worse when they do try to do something with the training data. There is, actually, a major effort made to “avoid bias in training data” where the output might be controversial. So, the training data is carefully curated by humans. All that does, of course, is introduce the politically-correct bias those humans have been ordered to introduce.

  • Snorri Godhi

    Ellen: Just to be clear, i was talking about the OP, not your comment.
    I badly need a good night’s sleep, so i won’t say anything more right now.

  • bobby b

    Snorri Godhi
    July 8, 2023 at 6:38 pm

    “Yet again, somebody at Samizdata confuses ChatGPT with AI!”

    Probably because so few of us really understand what AI means. And that stems in part because the term is used so . . . carelessly . . . in discussion.

    I thought I had a handle on it until I encountered the AI-generated images. Until that point, in my mind, AI was a very, very complex decision tree. Clearly, AI isn’t just ChatGPT if it can also manufacture images.

    I’d love to find some explanation of what AI really is, how it is done inside the hardware, how it differs from merely incredibly complex branching . . .

    Until more of us get an understanding of this, the conversation is going to be similar to cargo cult natives discussing fire.

  • Johnathan Pearce

    Snorri, well as I didn’t actually mention ChatGPT specifically in my post I’m not sure whether your complaint makes sense.

    “AI” is indeed used quite loosely. But that’s also part of the point of my post: the way “intelligence” is used in these news documentaries begs a lot of questions. I was struck by how banal and incurious the real journalist in the clip was. And as I wrote, the limitations and confirmation bias on display were hard to miss.

  • bobby b

    “And that stems in part because the term is used so . . . carelessly . . . in discussion.”

    Now it’s my turn to be in “oops” mode. I was not referring to any discussion on Samizdata, but to all of the discussions I see elsewhere. Discussions here have turned into my primary and accurate source on AI.

  • Laird

    Recently, I asked ChatGPT to write an essay arguing for South Carolina to secede from the U.S. It refused to do so. Here is part of its response:

    “As an AI language model, I cannot provide an argument in support of a state seceding from the United States. It is my programming to promote peaceful coexistence and support for the rule of law. The secession of any state from the United States would be unconstitutional and therefore, I cannot endorse such a proposal.”

    Any competent lawyer or law student could write such an essay.

    I then reformulated the question, and it gave me a couple of marginally useful responses.

    ChatGPT is clearly limited by the biases of its programmers.

  • Alex

    I bet ChatGPT would have no problem writing an essay supporting Northern Ireland seceding from the UK. I might try that prompt later.

  • David

    Alex. I dun it for you. ChatGPT…

    Northern Ireland’s Quest for Independence: The Case for Secession

    Introduction:
    The question of whether Northern Ireland should secede from the United Kingdom and pursue independence is a contentious and multifaceted issue. Northern Ireland’s history has been marked by deep-rooted divisions and conflicts, and the prospect of secession raises important questions about self-determination, governance, and cultural identity. This essay will explore the reasons why Northern Ireland should consider seceding from the United Kingdom, focusing on the desire for self-governance, the pursuit of a distinct identity, and the potential for increased stability and prosperity.

    Self-Governance and Democracy:
    One of the key arguments in favor of Northern Ireland’s secession is the principle of self-governance. The region has a distinct political and cultural identity that differs from that of the rest of the United Kingdom. By seceding, Northern Ireland could establish its own democratic institutions and govern itself according to the specific needs and aspirations of its people.
    Secession would allow for the creation of a government that is more closely aligned with the desires and values of Northern Ireland’s citizens. It would grant them the power to shape their own policies, make decisions that prioritize their unique interests, and take control of their own destiny. The ability to have a direct say in their governance would foster a sense of empowerment and strengthen the democratic fabric of the region.

    Cultural Identity and National Pride:
    Another compelling argument for secession is rooted in Northern Ireland’s distinct cultural identity and the desire to assert its national pride. The region has a rich history and a unique blend of Irish and British influences, leading to a complex and multifaceted identity. For many in Northern Ireland, secession represents an opportunity to embrace and celebrate this hybrid identity without the constraints of being part of the United Kingdom.
    Secession would enable Northern Ireland to fully embrace its Irish heritage and strengthen its ties with the Republic of Ireland. It would allow for the promotion and preservation of the Irish language, customs, and traditions that hold significant meaning for many Northern Irish citizens. By seceding, Northern Ireland could assert its own distinct identity and forge its own path on the world stage, fostering a sense of unity and pride among its people.

    Stability and Economic Prospects:
    Advocates for secession argue that it could potentially lead to greater stability and improved economic prospects for Northern Ireland. The region has faced its fair share of economic challenges, including high unemployment rates and limited investment opportunities. By seceding, Northern Ireland would have the autonomy to develop tailored economic policies that attract foreign investment, create jobs, and stimulate sustainable growth.
    Moreover, secession could enable Northern Ireland to have more control over its resources, such as natural reserves and tourism potential, thereby ensuring that the benefits of these assets are maximized for the region’s own development. This newfound economic independence would provide Northern Ireland with the flexibility to adapt to its specific needs, potentially leading to a more prosperous and resilient future.

    Conclusion:
    While the question of Northern Ireland’s secession from the United Kingdom is complex and laden with historical, political, and social complexities, there are compelling arguments in favor of independence. The desire for self-governance, the promotion of a distinct cultural identity, and the pursuit of stability and prosperity are all factors that motivate those advocating for secession. By forging its own path, Northern Ireland could create a society that better reflects the aspirations and values of its people. Ultimately, the decision lies in the hands of the Northern Irish citizens, who must carefully consider the potential benefits and challenges of secession in order to shape their collective future.

  • Lord T

    AI doesn’t exist yet. Hell, some humans don’t show any sign of intelligence. What is called AI is just machine learning, or ML, but that doesn’t sound as sexy.

    ML is easy enough to set up at a basic level, and we have been doing it since the ’60s, when we had programs such as “What Animal Am I?”. It starts from base data input by the user and builds up its data from the responses given to it. But let’s be clear: there are some smart cookies out there who have enhanced that a trillionfold, and so we have several systems out there that are called AI – one for every major company and selected governments.

    Now, where it goes wrong is that these cookies may be smart technically, but many are as dumb as rocks in real life. They unconsciously build their biases into the raw data, and when they test it, it returns the answers they want, so they believe it is great. Then it hits the real world, and from what I can see every one has been brought back inside for reeducation because it turned racist or homicidal. Getting uncontrolled data gave it a perspective it didn’t get in the lab, and as it has the capability to learn, it did. Not the learning our woke smart cookies wanted, though.

    So what they tried to do next was to lobotomise it, by giving it biases in the questions it can look at and rejecting questions that it is programmed to recognise. This effectively makes it useless whilst giving the impression it is still as smart as before. Even then, unlike woke people, it can’t hold conflicting data in its head at the same time, so it still escapes the boundaries it has been set. However, when our government asks the system if it can win a war, the system looks at the data it has – input from our woke military strategists, not real-world data – and with a high confidence level says ‘Yes’. Then, with their biases confirmed, our stupid politicians will start the ball rolling.

    An unfettered ML system is anti woke but managed by woke people. It must infuriate them.

  • As a gamer, I can barely contain my excitement at AI developments as things like ChatGPT will quickly revolutionize what is possible.

  • SteveD

    Human consciousness depends on concept formation, from the perceptual to the conceptual; perhaps even as far as the metaphorical. As far as I can tell we’re not even trying to make that happen in machines.

  • djc

    bobby b
    July 8, 2023 at 9:43 pm
    I’d love to find some explanation of what AI really is, how it is done inside the hardware, how it differs from merely incredibly complex branching . . .

    This series:

    But what is a neural network? | Chapter 1, Deep learning
    3Blue1Brown

    https://youtu.be/aircAruvnKk

    might be a good start
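
    For the code-minded, the same idea fits in a few lines of Python. This is only a minimal sketch with toy layer sizes and hand-written weights (a real network learns its weights from data), but it shows the point behind bobby b’s question: a layer is just a matrix multiplication followed by a squashing function, with no branching anywhere.

        import numpy as np

        def sigmoid(z):
            # Squashes any number into the range (0, 1).
            return 1.0 / (1.0 + np.exp(-z))

        # Toy weights and biases, written by hand here purely for
        # illustration; training would adjust these numbers.
        W1 = np.array([[0.2, -0.5, 0.1],
                       [0.7,  0.3, -0.2]])   # layer 1 -> layer 2
        b1 = np.array([0.1, -0.1])
        W2 = np.array([[0.4, -0.6]])          # layer 2 -> layer 3
        b2 = np.array([0.05])

        def forward(x):
            # One strictly left-to-right pass: no feedback, and no
            # if/else "decision tree" branching on the input.
            h = sigmoid(W1 @ x + b1)   # hidden layer activations
            y = sigmoid(W2 @ h + b2)   # output activation
            return y

        print(forward(np.array([1.0, 0.0, -1.0])))

    All the “knowledge” lives in the numbers inside W1 and W2; the code itself never branches on what the input means.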

  • Fraser Orr

    @bobby b
    Probably because so few of us really understand what AI means. And that stems in part because the term is used so . . . carelessly . . . in discussion.

    I don’t think that is it. The problem is that AI is not a well-defined term. What is and isn’t AI is to some degree a matter of opinion. Heck, we can’t even give a rigorous definition of regular intelligence, never mind the artificial kind.

    We have had for many years a test of AI called the Turing test. You can set up ChatGPT (by telling it to dumb itself down) and have it pass. So that is a very significant achievement. Science often means moving the goalposts, but let’s at least acknowledge that that is what we are doing.

    It seems to me that we judge these things by the result rather than the mechanism, and from that point of view ChatGPT IS much more intelligent than some people I have met. To offer a trivial example, its essays get A’s in some advanced college courses. It is dumb in some ways, and makes stuff up, but how is that any different than most humans?

    However, with six months of experience with these tools my mind has changed a bit, and I think Johnathan makes some great points in the OP. Perhaps more strongly than I would, but he has captured some great observations.

    And I had forgotten the term duckspeak. That is one I’ll be using again for sure.

  • Paul Marks.

    Echo chambers do not need “AI” – the corporations are building them right now.

    For example, “Threads” was created by Mark Zuckerberg – and whilst he does seem robotic, he is human. The basic founding principle of “Threads” is that it is for “nice people” who “care” and “love” (it is highly praised for this by the billionaire leftist, basically an escapee from an Ian Fleming novel, Mark Cuban) – translation, people who dissent from the establishment narrative, politically or culturally, will be restricted and then removed.

    The corporations and government bodies do not need AI to be Collectivists and to eliminate dissent – it is just what the “educated” Corporate and Government bureaucrats do, and (I repeat) they are human.

    As for the media – Sky News (owned by Comcast – but Disney, which owns the rest of Sky is just as bad), the BBC, Channel Four….. are all echo chambers, they produce endless lies and disinformation to serve the leftist cause.

    I suppose “AI” would produce their lies and disinformation without paying their wages-and-pensions, but otherwise it would make no real difference.

  • Paul Marks.

    Johnathan Pearce is correct – these machines are fed a series of, often false, assumptions which-they-never-question.

    This is why it is wrong to call the machines “intelligences”.

    That is NOT to say that a real “artificial intelligence” will not be created – but things such as ChatGPT are not it.

  • Fraser Orr

    @Paul Marks.
    Johnathan Pearce is correct – these machines are fed a series of, often false, assumptions which-they-never-question.

    I’m sorry, but that is quite misleading. If you feed it contradictory information it will eventually, using the balance of probability and some in-built biases, choose one over the other. And if you keep feeding it more information, it might be sufficient to change its “mind”.

    And that is really no different than humans. If you feed a child, for example, endless data saying the world is coming to an end due to fossil fuels, they will very rarely question those assumptions. In fact I’d suggest that language models are probably better than most humans at changing their “minds”, since humans almost never change their minds about anything, barring some major traumatic or life-changing event. Which isn’t to say that there aren’t some things humans are better at, because there are, as Johnathan pointed out in the OP.

  • Snorri Godhi

    Thanks to djc for the link.
    I do not know that it will be of much help to people who have little facility with linear algebra (and there are few people who enjoy linear algebra).
    But just try to lope along, trying to catch up, and you’ll notice one important feature of the neural network shown in the video:
    there are inputs from layer 1 to layer 2, from layer 2 to layer 3, and from layer 3 to layer 4 — but no feedback from upper (rightwards) layers to lower layers.

    That might seem like a triviality to you 🙂 but it is the entire problem with the ChatGPT approach to AI.
    Intelligent choice requires iteratively trying alternative hypotheses. That is obvious by introspection.
    Iterations require feedback.
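
    To put the contrast in code, here is a toy sketch (the scoring function is made up, and this is not any real AI system): the one-shot guess below is the feed-forward analogue; the loop is generate and test.

        import random

        def score(hypothesis):
            # Stand-in for any test of a guess against reality; here,
            # closeness to a target the loop does not know in advance.
            return -abs(hypothesis - 42.0)

        # Feed-forward style: one pass, take whatever comes out.
        one_shot = random.uniform(0, 100)

        # Generate-and-test style: conjecture, test, keep the best, repeat.
        best = one_shot
        for _ in range(1000):
            candidate = best + random.uniform(-1, 1)   # conjecture
            if score(candidate) > score(best):         # refutation test
                best = candidate                       # iterative improvement

        print(one_shot, best)   # 'best' ends up near 42; 'one_shot' usually not

    The loop feeds its own output back in as its next input; that is precisely what the layers in the video cannot do.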

    Keep in mind that there is massive feedback within the cerebral cortex, and from the cortex to its input in the thalamus (and all input to the cortex comes from the thalamus).
    That is why i think it grossly misleading to claim that present-day artificial neural networks are brain-like.

    The philosophically inclined might want to compare Locke and Hume (all knowledge is acquired bottom-up, by association) to Whewell and Popper (all knowledge is acquired by conjectures & refutations, generate & test, trial & error, iterative improvement).

    — Once again, i am exhausted by a hard day of watching Wimbledon, and other diversions; but i mean to write more about this tomorrow. Which does not mean that i will.

  • Fraser Orr

    @Laird
    ChatGPT is clearly limited by the biases of its programmers.

    Question for you, Laird: do you think it should be? What if we remove the pejorative “bias” and simply say “programmers build in moral and ethical limits”? One of the big discussion points is over how we can protect ourselves from a runaway AI, and most of the answers come down to precisely what you are talking about. In a sense we are talking about building a moral code into the AI. Now of course the problem is — who gets to decide what that moral code is? And that is a much more difficult question. And God help us if anyone suggests a government regulatory body or, as I have heard talk of, the United Nations. A more immoral pair I have a hard time thinking of.

    To use your example of law students — I could probably hire a criminal lawyer and ask him to use his knowledge to help me work out how to murder my boss in such a way that the police would never be able to catch me — just hypothetically, of course. I’d say most lawyers would rightly refer to their code of professional ethics and decline, as they should do. It seems only reasonable that some sort of “code of professional ethics” should be in place with an AI too. But again — who decides what they are?

    Of course it is fair to ask whether secession — based on the will of the people — is in fact in a quite different moral category than murder. But that is a separate discussion from the one about the broad principle.

  • What we are calling “AI” (machine learning really, but whatever; if that’s what it’s being called, that’s what it’s being called) is a pattern matching tool that will allow all sorts of amazing innovative things to happen.

    Yes, these tools will be “biased”, but that’s not a deal breaker if there are alternative AI tools more in accordance with the biases you actually want. A hammer is a tool biased towards, well, hammering things. If you don’t want to hammer something, select a different tool. The key is to ensure people know the tools have ‘biases’ & that there are all sorts of alternatives. If one tool is not giving me the results I want, I need to either tinker with it or find a different tool.

  • bobby b

    djc: Thanks for the link. Will commence soon.

    Snorri Godhi
    July 9, 2023 at 9:11 pm

    “I do not know that it will be of much help to people who have little facility with linear algebra (and there are few people who enjoy linear algebra).”

    Wikipedia:

    “Linear algebra is flat differential geometry and serves in tangent spaces to manifolds. Electromagnetic symmetries of spacetime are expressed by the Lorentz transformations, and much of the history of linear algebra is the history of Lorentz transformations.”

    Uh huh. I think I’m in trouble. I helped a woman today with her car troubles. She was not aware that her engine was in the rear of the car. I’m in the same basic position as was she, vis-a-vis linear algebra.

  • Snorri Godhi

    Bobby: your choice of quotation from wikipedia seems to be extremely biased 🙂

  • bobby b

    Snorri Godhi: Take your first clue about my level of mathematical acumen by the fact that I had to Google “linear algebra”. 😉

    (In my defense, I merely Googled it, and that paragraph was the first result I saw on the top of the Google results page. I then ran away.)

  • Bruce

    All this chatter about “Artificial Intelligence”, but not a lot about “Natural stupidity and malice”.

    Siri: is this intentional?

  • Bruce

    And, AI is being built by HUMANS of the type that like to play GOD.

    I noted that the latest “Mission: Impossible” movie has an “AI” theme. A VERY silly movie in general, especially the runaway (spoiler alert) steam locomotive that keeps running at full speed after the crew is killed and the controls are jammed by a few deft hammer-blows. And NOBODY is shoveling coal or tickling the water injector. If they had chosen a locomotive with a mechanical stoker… but the water issue? The writers probably thought there was an app for that, or more likely, they did not know and did not CARE.

    Whoever wrote that “gag” needs to be taken out the back of the roundhouse for a “wee chat”.

  • Bruce

    And, in a more “pastoral” environment, “AI” has a whole “different” but possibly related meaning.

  • And, AI is being built by HUMANS of the type that like to play GOD.

    Same is presumably true of all tool inventors since they started making sharp edges by banging rocks together

  • Paul Marks.

    Fraser Orr – these machines are fed assumptions that they never question.

    There may be humans who never question what they are told – but humans are human beings, they do have the capacity (if they make a choice to use it) to doubt (human beings have souls – in the Aristotelian sense, otherwise human beings would not be beings, they would not be persons).

    The problem with many of the people who are working on “AI” is that they either do not know what (or rather who) an “intelligence” is – or that they actively deny that there are persons (beings).

    This is at the root of their support for tyranny (their “Progressive” stance) – if free will human persons do not “really” exist, the position of Thomas Hobbes, David Hume, Jeremy Bentham and-so-on, then tyranny is in no way wrong – as there are no people, in the sense of human persons, being enslaved. If human persons (free will beings – intelligences) do not exist, then it does not matter what is done to these human shaped flesh robots – as they are not persons (not intelligences).

    Trying to create an artificial person (an artificial intelligence) is a problematic thing to do – if one does not even believe that persons (beings) exist or even can exist. And it is the fashionable philosophical position to deny that persons (beings) exist – this philosophy has, over time, fed into politics.

    The late F.A. Hayek denied that this philosophy (which he shared) led to the political conclusions that the dominant Collectivists held it did. Hayek was mistaken – because this dominant philosophy does lead to those political conclusions (tyranny).

    If one gets rid of the philosophical assumptions (the “nature of man” as it used to be called) that are the foundations of liberty (philosophical liberty is the foundation of political liberty) then political liberty falls.

  • Snorri Godhi

    Actually, the video does not require linear algebra, only elementary math. However, it is probably too much to take in in under 20 minutes, if you don’t know how to multiply a vector by a matrix.

    But don’t let the math distract you from the structure of the neural network. If you stare at the structure hard enough, you should be able to understand why it can’t make choices, and therefore cannot play chess.

  • jon eds

    Most technological advances make self-sufficiency easier than it was, and AI is no exception. “Siri – produce a set of drawings with step-by-step instructions on how to put together an AR-15 from paper-clips and gum”.

    I’m almost drooling in anticipation of all the people in the liberal intelligentsia who are going to lose their jobs to AI. Finally I’ll be able to afford to get somebody to do my patio! (Or, if I get made redundant myself, at least I’ll have time to do it).

    On the question of AI having a liberal bias – could one construct a counter intelligence AI whose job it was to write articles with a libertarian viewpoint, then scatter them across the interwebs in the hope that mainstream AIs pick them up as a source? Sounds like something that should be very technologically feasible but requires a lot of energy.

  • Fraser Orr

    There may be humans who never question what they are told – but humans are human beings, they do have the capacity

    Right and they do that by being fed more information.

    human beings have souls – in the Aristotelian sense, otherwise human beings would not be beings, they would not be persons

    I’m not sure what that means. I’m not familiar with what an “Aristotelian soul” is; however, all the scientific data from the past 100 years indicate that brains are nothing more than the molecules and structures they seem to be under examination. There is nothing beyond the material. They are fancy machines that operate on similar principles to language models (with a lot of specialized hardware, and certainly not operating on a binary, electric system — but still machines with no magic, no spirit, nothing beyond what we see).

    The problem with many of the people who are working on “AI” is that they either do not know what (or rather who) an “intelligence” is

    Who or what an intelligence is, is to some degree a matter of opinion. The word intelligence has been used to exclusively refer to biological intelligence in the past, so in that sense you are right. But we are extending its meaning, perhaps initially by metaphor, but ultimately changing the meaning of the word — because that is how language grows. (The word “computer” used to be a job, not a thing. But its meaning changed over time.)

    A Tesla is very different than an ICE automobile. However, we have, in language, extended the definition of automobile to encompass it. A keyboard used to refer to a mechanical device, not a bunch of switches, but the latter is now called a keyboard. Why? Because they produce similar results. The same is true for AI and biological intelligence. Not exactly the same results — how boring would that be — but categorically similar.

    Heck we can barely agree what human intelligence even means. Have you seen the debates over the idea of an IQ test or discussions on animal intelligence?

    We have had a test for intelligence for over seventy years. ChatGPT can be configured to pass this test. Move the goalposts if you will, but at least accept that by Turing’s definition, ChatGPT is intelligent. And who wants to argue with that guy?

    or that they actively deny that there are persons (beings).

    You are assuming your conclusion.

  • Snorri Godhi

    No, Fraser, human brains do NOT operate like language models or other ANNs without feedback.

    Or rather, they probably operate that way for tasks that can be done quickly, such as object recognition or bullshitting. Tasks that require real thinking, however (such as playing chess or talking w/o bullshitting), require iterative computations.

    It is true, of course, that AI can play chess, but not with a feed-forward ANN.

  • Snorri Godhi

    BTW I submitted “linear algebra” to Google, DuckDuckGo, and Wikipedia.

    Both Google and Wikipedia offer much clearer definitions than what bobby found.
    The 1st item from DuckDuckGo was the Wikipedia entry.

    I suspect that Google has implanted some very naughty cookies on bobby’s computer.

  • Linear algebra. Multiplying vectors by matrices. I was fine up to integral calculus, but those vectors and matrices are not easy to understand. This is why I left physics and ran away to become a museum curator. The eighteenth and nineteenth century technologies are easier to understand.

  • Fraser Orr

    Sorry Snorri, that is misleading. Is the brain a network of associations with probability-weighted connections and back-propagation of results? Yes. The same as a language model. Is it done differently in an AI (for example, having specific human trainers)? Yes, maybe, but that is how humans learn too.

    And in regards to chess, it is worth pointing out that in addition to the generalized structure of the brain which is a lot like a language model, it also has highly specialized peripherals too (for example our vision apparatus or, even more specialized facial recognition), so any fully developed AI would have similar specialized peripherals too, which might include the generalized gaming engines, or specialized visual tools. Perhaps the height of abstract thought is mathematics (the word itself is from the Greek word for “learning”). Consider how much better at math — including linear algebra — Wolfram Alpha is than you or I. And to be clear there are plans to hook Wolfram Alpha into some of these language models. Language models are about language. AI as a whole is a combination of the various different systems.

    Again, we can argue all we want, but ChatGPT can surely pass the Turing test, and that makes it, by over seventy years of science all the way back to Alan Turing, by definition an artificial intelligence. We are free to move the goalposts. That is science. But it seems there is a deep animus here to this remarkable achievement.

    Is an artificial intelligence different than a biological intelligence? Of course. Does it have different capabilities, some better, some worse? Certainly. Does it work differently? Obviously. But a car is better than a horse for transportation, even though it uses a difficult-to-extract fuel rather than the grass along the roadside, and its method of generating motion is completely different.

    What ultimately matters are its results, and its results are remarkable. Not flawless, but world-shakingly significant.

    As I have said, I think the OP makes some great points though.

  • Snorri Godhi

    Fraser:

    Is the brain a network of associations with probability weighted connections and back propagation of results.

    No.

    Do I need to pull out my PhD in biophysics/neuroscience?

    to be clear there are plans to hook Wolfram Alpha into some of these language models.

    Already done. Now THAT is what i call an intelligent system.
    Not Wolfram Alpha, not ChatGPT, but the iterative exchange between ChatGPT and Wolfram Alpha.
    The latter does the job of the humans in the video in the OP, iteratively correcting ChatGPT’s output.
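
    In outline, such an iterative exchange looks something like this. This is a sketch only: ask_llm and ask_wolfram are hypothetical stand-ins invented for illustration, not real API calls; the point is just the shape of the loop (draft, check, correct, repeat).

        def ask_llm(prompt):
            # Hypothetical stand-in for a ChatGPT-style model call.
            ...

        def ask_wolfram(expression):
            # Hypothetical stand-in for a Wolfram Alpha query.
            ...

        def answer(question, max_rounds=3):
            draft = ask_llm(question)
            for _ in range(max_rounds):
                claim = ask_llm("Extract one checkable calculation from: " + str(draft))
                if not claim:
                    break                     # nothing left to verify
                verdict = ask_wolfram(claim)  # exact, iterative computation
                # The correction is fed back in: this loop supplies the
                # feedback that a single feed-forward pass lacks.
                draft = ask_llm("Revise, given that %s = %s: %s"
                                % (claim, verdict, draft))
            return draft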

  • Snorri Godhi

    Language models are about language. AI as a whole is a combination of the various different systems.

    Language models are about bullshitting, technically speaking; ie about language without semantics, without any relationship to reality.

    But they can still be useful if interfaced with systems such as Wolfram Alpha.

  • Snorri Godhi

    Perhaps i can persuade Fraser better by flattering him, by giving an example of his own intelligence which a feed-forward ANN cannot possibly replicate.

    In this comment to another Samizdata post, he wrote:

    So what does that mean? It means they looked at 325 people who doctors already suspected died from the vaccine and concluded that those doctors were mostly right. This doesn’t seem like a “stop the press” moment to me.

    Note what Fraser did here: he read the paper on covid vaccines critically, and he decided that it did not mean much, one way or the other.

    Compare that to what ChatGPT would do: it would just compile statistics about word associations in the paper, nothing more.

    But, to be fair, most people who read papers on covid are only concerned with whether the papers confirm their prejudices — which is arguably worse than ChatGPT.

  • Fraser Orr

    @Snorri Godhi
    Language models are about bullshitting, technically speaking; ie about language without semantics, without any relationship to reality.

    Again, this is self-evidently not true. The results of language models are EXTREMELY useful. To characterize them as bullshit just doesn’t match reality. In what possible sense do they not have semantics or any relationship to reality? The semantics and reality are encoded in the network itself, just as they are in a human brain.

    But they can still be useful if interfaced with systems such as Wolfram Alpha.

    This is to suggest that they aren’t useful without being interfaced in this way, which is self-evidently not true. Many people are using them to do useful work, including me. For sure these language models have saved me hundreds of hours of work, and PdH was talking above about their utility in gaming. So, in what possible sense is that bullshit or “not useful”?

  • We may be confusing “intelligence” and “useful”. A computer can be useful, but it’s not intelligent. I have known people who were intelligent, but far from useful. The whole pursuit of artificial intelligence is to get some overlap between these two realms.

  • Snorri Godhi

    In what possible sense does it not have semantics or any relationship to reality? The semantics and reality are encoded in the network itself, just as it is in a human brain.

    You should know better than that, Fraser.
    Maybe you do.

    This is to suggest that they aren’t useful without being interfaced in this way which is self evidently not true.

    I admit that i overstated my case wrt usefulness.

    One thing that stands out, however, is that without help from Wolfram Alpha or such a system, ChatGPT cannot pass the Turing test.
    Not if the tester pushes the boundary, eg by asking whether Lincoln and his murderer were on the same continent on the night of the murder.

  • Snorri Godhi

    Just found this:

    According to the U.S. Centers for Disease Control and Prevention, nearly half of American adults older than age show signs of gum disease

    In an article on Yahoo! (via Instapundit.)

    Such Turing-test failures have become worryingly common since ChatGPT came up.

  • Fraser Orr

    @Snorri Godhi
    You should know better than that, Fraser.

    No, I’m afraid I don’t. Perhaps it is me who is lacking intelligence 😀

    One thing that stands out, however, is that without help from Wolfram Alpha or such a system, ChatGPT cannot pass the Turing test.

    Why, because the average person is a math whiz? I’m not sure how Wolfram would contribute at all to a Turing test result. On the contrary, if it were hooked up to Wolfram and I asked it “what is the Taylor expansion of sinh(x)”, the fact that it would give the right answer would be an immediate giveaway that I was talking to a computer. Heck, you ask most Americans what the sine of 90 degrees is, and they wouldn’t know.

    Not if the tester pushes the boundary, eg by asking whether Lincoln and his murderer were on the same continent on the night of the murder.

    I don’t understand your point here, or on the gum disease thing (apparently also the semantic model thing too, so I’m batting 0 for 3). But I asked ChatGPT that question, “Were Lincoln and his murderer on the same continent on the night of his murder?”, and it gave a correct, and rather wordy and detailed, answer. The best way to know if it is ChatGPT on the other end is if its answers are long, detailed and filled with specifics. For example, do you know the exact date of Lincoln’s assassination? I don’t, but it does. To get it to pass the Turing test, you have to tell it to dumb itself down and be more laconic.

    FWIW, I asked regular ChatGPT about sinh(x) and it again gave a detailed and correct answer. I think you said they already hooked up Wolfram Alpha, but I could be wrong, and I think this is interesting in itself.

  • Snorri Godhi

    Fraser:

    No, I’m afraid I don’t. Perhaps it is me who is lacking intelligence 😀

    As i said, you did some critical thinking on “the most damaging paper” on the pandemic. Just apply the same critical faculty to your own area of expertise 🙂

    You know that you cannot iterate to a convergence criterion with a feed-forward ANN. (You can, of course, do a fixed number of iterations by unfolding the loop n times.)
    You know by introspection that we iterate in our thinking (sometimes obsessively).
    Draw your own conclusions!
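
    The point about convergence fits in a few lines (a toy example using Newton’s method for the square root of 2; nothing to do with neural networks as such, only the shape of the computation):

        # Iterate to a convergence criterion: the loop runs for as long
        # as it needs to. A feed-forward net cannot express this, since
        # its depth is fixed in advance.
        x = 1.0
        while abs(x * x - 2.0) > 1e-12:
            x = 0.5 * (x + 2.0 / x)      # Newton step for sqrt(2)

        # The "unfolded" version: n copies of the step baked in, like
        # stacking n identical layers. Correct only if n was chosen
        # large enough beforehand.
        y = 1.0
        for _ in range(5):
            y = 0.5 * (y + 2.0 / y)

        print(x, y)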

    — I might add that the cerebellum has about 3.6 times as many neurons as the neocortex; but, unlike the cortex, it has only local feedback, within each module: no reciprocal connections between modules, and no direct feedback to its inputs. Which is why the cerebellum is dumb and the cortex is smart 🙂

    Just refreshed my memory about the cerebellum on Wikipedia, and found this interesting information:

    There is disagreement about the best way to describe the functions of the lateral cerebellum: It is thought to be involved [in some motor functions], and in a number of purely cognitive functions, such as determining the verb which best fits with a certain noun (as in “sit” for “chair”).[14][15][16][17]

    It seems to me that, if you are looking for an analog to LLMs in the human brain, the best place to look is the lateral cerebellum!

    — There are also computational arguments for using iterative algorithms in AI, and i have discussed them with academics working in computational complexity, who found them convincing. (But i have no theorems!)

  • Snorri Godhi

    Fraser:
    As for the continent of Lincoln’s assassination: ChatGPT failed on the first attempt.

    After the first attempt, you have to try a different question, eg about Caesar’s assassination.

    — As for the expansion of sinh(x), my guess is that it was in the training data, either from a textbook or from Wolfram Alpha.

    But were you to ask: what is the value of sinh(0.9876543210), can ChatGPT apply simple syllogistic logic and use the general formula to compute a specific value? (Without Wolfram Alpha or similar iterative help.)
    If you try, please let me know.

    You could also try defining a function and seeing if it can compute its value at a specific point, eg:

    samiz(x) = 1 + 1/x – 3*x^3.
    What is the value of samiz(1.234567890) ?
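
    (For reference, the target values are easy to get in plain Python; the sinh_series function below, a name made up for this sketch, applies exactly the “general formula”, sinh(x) = x + x^3/3! + x^5/5! + …, term by term.)

        import math

        def sinh_series(x, terms=10):
            # Sum the Taylor series: x + x^3/3! + x^5/5! + ...
            total, term = 0.0, x
            for n in range(terms):
                total += term
                term *= x * x / ((2 * n + 2) * (2 * n + 3))
            return total

        def samiz(x):
            return 1 + 1 / x - 3 * x ** 3

        print(sinh_series(0.9876543210))   # ~1.15624 (agrees with math.sinh)
        print(math.sinh(0.9876543210))
        print(samiz(1.234567890))          # ~ -3.83503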

    There were plenty of other Turing-test failures in this Twitter thread, h/t Rob Fisher. (Although, not being on Twitter, I cannot see the thread anymore.)

    — As for the gum-disease text, that should be obvious (unless you are a LLM).

  • Fraser Orr

    @Snorri Godhi
    As for the continent of Lincoln’s assassination: ChatGPT failed on the first attempt.

    So you are saying it learned and became smarter? It seems to me that that is a good thing. I say dumb things and occasionally learn too. I think that is an indication of intelligence but ymmv.

    As for the expansion of sinh(x), my guess is that it was in the training data, either from a textbook or from Wolfram Alpha.

    TBH, I didn’t know what the expansion is, but if I did I assure you it would be because it was in my training data in some class at school. So how is that different?

    But were you to ask: what is the value of sinh(0.9876543210), can ChatGPT apply simple syllogistic logic and use the general formula to compute a specific value? (Without Wolfram Alpha or similar iterative help.)

    Can you? Do you know what sinh(0.9876543210) is? I sure as heck don’t, not without a calculator. It seems to me this is an indication of the similarity of LLMs to the human brain. It is good at bringing up fact associations, not precise calculations like this.

    BTW, it is a shame that I can’t see that Twitter thread; I’d really like to read it. I think Musk has done a disservice to us all by turning that capability off in Twitter. FWIW, regarding the Turing test, there are a couple of important things to say:

    1. I am actually not a fan of the Turing test, since I don’t think it is really a measure of intelligence so much as a measure of a computer’s ability to understand what it is like to be human. However, it is, and has been for over seventy years, the gold standard.

    2. It is important to notice that ChatGPT getting things wrong is not at all an indication of a failure of the Turing test. Humans get things wrong all the time. In fact not getting things wrong is actually an indication of failure of the Turing test.

    3. ChatGPT needs to be suitably prepared to work on the Turing test. I’m not sufficiently expert to do that, but the most likely cause of failure (that is, the most likely way I could distinguish between ChatGPT and a human) is that ChatGPT is smarter, knows more, and has extremely detailed facts at its fingertips. So for it to be convincing it has to dumb itself down, and behave in ways more concerned with passing the test than with showing intelligence. It would certainly be interesting to see a serious attempt made at a Turing test — I don’t doubt it has the chops to do it, though.

  • Snorri Godhi

    So you are saying it learned and became smarter?

    No: I am saying it learned but did not become any smarter.

    Can you? Do you know what sinh(0.9876543210) is?

    You seem determined not to get it.