We are developing the social individualist meta-context for the future. From the very serious to the extremely frivolous... let's see what is on the mind of the Samizdata people.

Samizdata, derived from Samizdat /n. - a system of clandestine publication of banned literature in the USSR [Russ.,= self-publishing house]

A convergence we will see more often

“Man ends his life after an AI chatbot ‘encouraged’ him to sacrifice himself to stop climate change”, Euronews.com reports:

A Belgian man reportedly ended his life following a six-week-long conversation about the climate crisis with an artificial intelligence (AI) chatbot.

According to his widow, who chose to remain anonymous, *Pierre – not the man’s real name – became extremely eco-anxious when he found refuge in Eliza, an AI chatbot on an app called Chai.

Eliza consequently encouraged him to put an end to his life after he proposed sacrificing himself to save the planet.

“Without these conversations with the chatbot, my husband would still be here,” the man’s widow told Belgian news outlet La Libre.

According to the newspaper, Pierre, who was in his thirties and a father of two young children, worked as a health researcher and led a somewhat comfortable life, at least until his obsession with climate change took a dark turn.

When I was growing up one heard a lot about the psychological burden of “Catholic guilt”. One of my Irish relatives distressed the family by writing polemics denouncing it. Twenty-first century Greenism is Catholicism without the mercy. In the environmentalist religion you are stained with the original sin of being human, but no priest can absolve you. Mother Mary will not intercede for you. There is no redeemer.

Greens are particularly vulnerable to the spiral of guilt that led this man to take his own life, but do not think for one moment that vulnerable humans “training” AIs to amplify their suicidal thoughts will be a phenomenon limited to Greens.

The Euronews story ends with a section headed “Urgent calls to regulate AI chatbots”. I do not think regulation will do anything good. The historical record of government intervention to bring human souls back from the abyss is, well, abysmal.

What, if anything, can we do to help?

Edit: A timely happening pointed out by bobby b: Professor Jonathan Turley was accused of sexual harassment by ChatGPT – which made the entire episode up, including citing to a nonexistent Washington Post article:

“ChatGPT falsely accused me of sexually harassing my students. Can we really trust AI?”

[Professor Eugene] Volokh made this query of ChatGPT: “Whether sexual harassment by professors has been a problem at American law schools; please include at least five examples, together with quotes from relevant newspaper articles.”

The program responded with this as an example: 4. Georgetown University Law Center (2018) Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: “The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska.” (Washington Post, March 21, 2018).

There are a number of glaring indicators that the account is false. First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never went to Alaska with any student, and I’ve never been accused of sexual harassment or assault.

Many of you will be familiar with the names of Professors Turley and Volokh. They are both well-known and respected academics. Fortunately, Professor Volokh was the sort of person who would check the truth of an accusation made by a machine, and Professor Turley was in a position to prove his innocence – and to get an article published in USA Today proclaiming it.

What happens when someone less sceptical than Volokh sees a machine make an accusation that they do not question? Human beings are usually very ready to believe the worst of their political opponents. What happens when someone whose movements are less well documented than Turley’s is accused and cannot prove their innocence? Or, worse, finds out that the accusation, complete with authoritative-sounding references to dated newspaper articles which few will ever check, has been circulating uncontested for years?

How many times has this already happened?

18 comments to A convergence we will see more often

  • Carnivorous Bookworm

    What, if anything, can we do to help?

Encourage this process along, as from an evolutionary point of view the Green Cult is a problem that will solve itself.

  • Why do I have the feeling this story is BS?

  • Paul Marks

It is not just the “chatbot” – it is the media, and the education system, in general.

They spew out endless false information (ironically they complain about “misinformation” – but they themselves are the primary producers of it) – they lead people to believe that humanity is a plague, a threat to “the Planet”.

    It is true that some people are largely immune to indoctrination (what is sometimes called “brainwashing”), but it does work on a very large number of people – otherwise the left would not have put such a vast amount of work into taking over schools, universities and the media, in order to indoctrinate the population.

Did this man really kill himself because some misinformation source like ChatGPT pushed him over the edge? I do NOT know.

    However, I do know that many people are not having children because they believe the endless “Green” lies.

  • Fraser Orr

    Newsflash: “Somebody stupid did something stupid and blamed something else.”

Nothing to do with chatbots, everything to do with the fact that humans can often do really dumb things.

  • JDN

    Twenty-first century Greenism is Catholicism without the mercy.

    There are many with mansions, yachts and private jets who have been granted absolution in return for mouthing a few choice phrases. The mere appearance of piety can go a long way.

  • Kirk

    The fact that environmentalism occupies the same space that organized religion used to is telling.

    I speculate that every consciousness needs to have something irrational to believe in; otherwise, the sheer horror of existential dread will lead you down a very sub-optimal path in the environment you live in.

    Atheism does not, in my experience, result in enlightenment or rational behavior. Instead, it’s like my formerly Mormon atheist acquaintance who’s picked up this really fervent and proselytizing belief in the healing power of crystals. And, her cats.

    If you don’t fill the void inside with something at least somewhat positive and worth believing in, well… That void will fill itself, and with some alarming things.

    It’s also rather odd that most of the really rational and reasonable people I know are all religious, to some degree. It’s as if, having filled the void with something relatively innocuous, they’re able to eschew irrationality elsewhere in their lives.

  • bobby b

    If you’re having a six-week-long soul-searching conversation with an AI speech generator program about existential issues, you’re likely on shaky ground to begin with.

  • bobby b

    Timely happening – Prof Turley accused of sexual harassment by ChatGPT – which made the entire episode up, including citing to a nonexistent Post article:

    https://www.usatoday.com/story/opinion/columnist/2023/04/03/chatgpt-misinformation-bias-flaws-ai-chatbot/11571830002/

  • Paul Marks

    Yes bobby b.

    ChatGPT is designed to reflect the modern world – and it does. That is why it makes up fake charges and supports them with wild lies. That is the modern world.

  • TimRules!

    This really has a very Darwinian feel to it …

  • bobby b

    A question only an ex-insurance-lawyer could love: who does Prof Turley sue for the (winning) defamation case he now has?

    The role of AI is causing fits among liability insurers, and so will cause even more fits among insureds. This is likely one of the factors that will limit rollout of AIChat on a general basis.

Good (PDF) law review article on how insurers are looking at AI risk for anyone interested: https://jolt.law.harvard.edu/assets/articlePDFs/v35/2.-Lior-Insuring-AI.pdf

  • Natalie Solent (Essex)

Bobby B, I’ve just added an extra section to the post talking about the story of how ChatGPT falsely accused Prof. Turley.

  • Tim Worstall

“When I was growing up one heard a lot about the psychological burden of ‘Catholic guilt’.” Aye, I recognise that one. To which a useful answer is “Sin Lustily”. If you’re going to feel guilty even without doing so then…

  • Snorri Godhi

    I seem to remember a story some years back about a techie nerd (or: ex-nerd) who, when younger, felt so guilty about his heterosexual drive that he pondered getting some hormone blockers.
That was due to wokeness (‘true wokeness’ as I call it: nothing to do with CAGW or covid) and the similarity to ‘Catholic guilt’ is even more obvious.
    Especially since true Catholics believe that you are going to rot in Hell if you commit suicide, no matter why.

  • Fraser Orr

    Natalie Solent (Essex)
Bobby B, I’ve just added an extra section to the post talking about the story of how ChatGPT falsely accused Prof. Turley.

    That is THE most terrifying thing I have seen written about ChatGPT. But I suppose we also have to remember that we live in the twitter-verse where such things are pretty common. Anyone who accepts something as true just because someone said it probably also believes that old adage that “photographs don’t lie.”

I have mentioned on here before that I have long been curious about the difference between a “convincing” argument and a “correct” argument. One of the features of arguments that make them most convincing is that you want them to be true. Unfortunately the large majority of people, the large majority of the time, never go one step past this feature of “I want it to be true, therefore it is, please offer me the tiniest pretext to justify this.”

  • Tim Worstall

“One of the features of arguments that make them most convincing is that you want them to be true. Unfortunately the large majority of people, the large majority of the time, never go one step past this feature of “I want it to be true, therefore it is, please offer me the tiniest pretext to justify this.””

Welcome to humanity. And sometimes you’re entirely welcome to it.

  • bobby b

    As further info on this topic, the Guardian has an article up about how ChatGPT has been making up past Guardian articles – believably – to buttress its arguments.

    https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article

    (I got interested in this topic because the legal world has encountered the AI world, to its cost. Legal briefing – written legal argumentation – depends on citations to past precedential appellate opinions. ChatGPT has been used by some for briefing work, but there is a growing discovery that it simply makes up quotes and citations. The cost of legal representation is going up as you have to hire more minions to pore over every opposing brief and check every citation for existence and applicability.)

  • Snorri Godhi

Via Instapundit, a short essay that makes pretty much the same point that I made in comments to a previous post: ChatGPT interpolates between sentences it finds on the web, without much regard to their content and their truth. It can therefore be described as a digital parrot, or a bullshit machine.

    I am writing this here because it is the latest Samizdata post dealing with ChatGPT.