We are developing the social individualist meta-context for the future. From the very serious to the extremely frivolous... let's see what is on the mind of the Samizdata people.

Samizdata, derived from Samizdat /n. - a system of clandestine publication of banned literature in the USSR [Russ.,= self-publishing house]

Understanding Turkish geopolitics

Highly recommended…

Dropped to a ten-rupee jezail

A scrimmage in a Border Station —
A canter down some dark defile —
Two thousand pounds of education
Drops to a ten-rupee jezail —
The Crammer’s boast, the Squadron’s pride,
Shot like a rabbit in a ride!

I thought of Kipling’s poem Arithmetic on the Frontier when I saw this picture:

“Russian navy ship appears to be heavily damaged in Ukrainian sea drone attack”, Sky News.

Here and now, I am glad to see an expensive defeat inflicted upon one of Putin’s warships at little cost to the Ukrainians. But the new arithmetic of war will not always give results that I like.

How AI makes dogfighting drones unbeatable

A very interesting chat about the rapid development of military AI…

The sheer dishonesty of it all

The Lancet published the chart on the left with a different X-axis to downplay the fact that cold causes ten times more deaths than heat in Europe. Björn Lomborg corrected that with the chart on the right.

Samizdata quote of the day – the unknown unknowns

We do not know what AI will be useful for. We do not know what it can actually do, what we want done, better than other ways of doing that thing (OK, other than writing C grade essays at GCSE level). We also do not know what might be a problem with what AI can do. We don’t know the benefits, we don’t know the risks.

We face, that is, radical uncertainty. So it’s impossible for us to plan anything. For planning assumes that we have an idea of the cost/benefit analysis so that we can say do that, don’t do t’other. And if we are radically uncertain then we can’t do that, can we?

Tim Worstall

We think we are living at the dawn of the age of AI. What if it is already sunset?

“Research finds ChatGPT & Bard headed for ‘Model Collapse’”, writes Ilkhan Ozsevim in AI Magazine:

A recent research paper titled, ‘The Curse of Recursion: Training on Generated Data Makes Models Forget’ finds that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear.

The Wall Street Journal covered the same topic, but it is behind a paywall and I have not read it: “AI Junk Is Starting to Pollute the Internet”.

Large Language Models (LLMs) such as ChatGPT are fed vast amounts of data on what humans have written on the internet. They learn so well that soon AI-generated output is all over the internet. The ever-hungry LLMs eat that, and reproduce it, and what comes out is less and less like human thought.
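The mechanism is easy to see in miniature. The following toy sketch (my own illustration, not taken from the paper) stands in for "training on generated data" by repeatedly resampling a dataset from its own output: rare tail values, once dropped, can never reappear, so the distribution's reach can only shrink.

```python
import random

random.seed(0)

# Generation 0: "human-written" data, drawn from a normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]
initial_range = max(data) - min(data)

for generation in range(100):
    # Each new generation is "trained" only on samples of the previous
    # generation's output (resampling with replacement): a crude stand-in
    # for a model that can only reproduce what it has already seen.
    data = random.choices(data, k=len(data))

final_range = max(data) - min(data)

# Extreme values are lost and never regained, so the spread only narrows.
print(f"range of values: {initial_range:.2f} -> {final_range:.2f}")
```

Real model collapse is subtler than bootstrap resampling, but the one-way loss of the distribution's tails is the same effect the paper describes.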

Here is a foretaste of one possible future from Euronews: “AI-generated ‘Heidi’ trailer goes viral and is the stuff of nightmares” – even if I do have a suspicion that to place a video so perfectly in the uncanny valley of AI-generated content still requires a human hand.

The danger that AI creates more “echo chambers”

As debate continues on the real or alleged benefits/threats of AI, one slogan we often hear used is the term “garbage in/garbage out”. For me, intelligence has to be able to “self-start” – to know how to ask an insightful question, to even consider if a question is worth asking in the first place. I don’t see much sign of that yet.

Can AI do more than just repeat existing information in new combinations? Can it actually distinguish rational from irrational ideas, grasp the need to reconsider certain premises, understand fallacies, etc? Can it think about how it “thinks”? Does it have the ability to distinguish honest mistakes from bad faith, to know when it is being “fed a line” by propaganda? Does it know how to float hypotheses or arguments that might be offensive to some but which get others to think outside a comfort zone?

To be honest, my experience of seeing what AI does so far is that these questions come up with a big, fat “no”. I thought this when watching a Sky News short item about how AI might be able to perform some of the tasks of a journalist. (The clip is just over three minutes long.) One of the tasks the AI was given was to produce a programme about climate change issues (take a wild guess what the tone of this was). And as I immediately thought, this device was fed a set of assumptions: that the world is on a path to dangerously hotter weather, that there are human causes of that (to some extent), that this situation requires changes, etc. Now, it may be that this is all honest fact, but regulars on this blog know that the alarmist Man-made global warming case is controversial, not settled fact. They also know that there is now a new approach, encouraged by writers such as Alex Epstein and entrepreneurs such as Peter Thiel, to reframe the whole way we think about energy: embrace a “human flourishing” and “climate mastery” perspective, get away from thinking of the Earth as a “delicate nurturer”, and abandon the idea that human impact on the planet must be avoided as much as possible. Humans impact the Earth – that’s a great thing, not something to be ashamed of.

I had very little confidence, from watching this TV clip, that a computer would have incorporated these ideas. There is a real risk as I see it that, depending on who writes the source code, such AIs repeat the fashionable opinions, often wrongheaded, of the day. Certain ideas will be deemed “wrong” rather than evaluated. There will not be the curiosity of the awkward and annoying guy who pesters politicians at press conferences.

AI has many uses, and like the American venture capitalist Marc Andreessen, I am broadly supportive of it as a way to augment human capabilities, and I don’t buy most of the doom predictions about it. Ultimately, AI’s value comes from how we use it. If we use it simply to reinforce our prejudices, it will not add value, but destroy it.

And we wonder why normal people avoid going into front-line politics

Following the recent controversy about the closure of a bank account of former UKIP leader Nigel Farage (he is said to have banked at Coutts, although he did not identify that lender by name in his own story), more information about what might have caused this decision is coming out. Dominic Lawson, son of the late UK Chancellor of the Exchequer, Nigel Lawson (the TV cooking writer and literary editor Nigella Lawson is Dominic’s sister), has been through a similar process in the case of his daughter, who has Down’s Syndrome:

In 2016 we decided to open a bank account for her. She has Down’s Syndrome; this was not something she could do herself. But when my wife Rosa went to the Barclays in our nearest town (where Rosa had had an account for many years), she was told it would not be possible for Domenica to have an account. No reason was given. Fortunately Rosa knew the manager there — the position now no longer exists, and the branch itself is about to close — and he said that he would look into the matter.

He came back to Rosa: ‘I’m really sorry, but it’s out of our hands. It’s because of money-laundering risks. I know this sounds ridiculous, but it’s because of Domenica’s grandfather. He is a politically exposed person.’ This was a reference to Nigel Lawson, my late father, the former chancellor, who was by then a member of the House of Lords. And as the Lords is a legislative assembly, that counted under the regulations. As, absurdly, did his granddaughter, who was of course oblivious to the bank’s implication that she might be a link to money laundering, or the funding of an international drugs cartel.

A friend of mine, known to several who write for and manage this blog, is a member of the House of Lords. I know several, in fact. Maybe they should phone their banks.

Eventually, we did manage to open an account for Domenica there, but it involved the most exhaustive form-filling, with much toing and froing between us and Barclays’ compliance people in London.

It seems that this insanity has struck sufficiently close to home that the UK government, usually a model of inanity and uselessness, is getting involved. After all, the Tories know they might be out of office soon, and might not want to go through what Mr Farage, the Lawsons, and several others have been through.

As I said in a comment on Patrick Crozier’s article about the “de-banking” of Mr Farage last week, this also demonstrates the danger of what are called central bank digital currencies. The potential for governments, such as those admiring the “social credit” regime of Communist China, to use CBDCs to enforce “correct” behaviours and suppress “bad” ones, such as blocking payments for alcohol or contributions to unpopular causes, is dangerously large. Consider what the Canadian government of Justin Trudeau did to those financially supporting the truckers protesting vaccine mandates, for example.

This other Samizdata article referred to HSBC. The Dominic Lawson article also refers to that bank in an unflattering way.

As an aside, what the current situation demonstrates is the lack of real choice in the banking system as it now operates. Linked to central banks for their funding, with CBs as “lenders of last resort”, and anti-money laundering laws imposed by force, bankers are no longer able to have confidential conversations with a client. A banker is obliged by law in most major industrialised nations to report transactions they deem suspicious, for whatever reason, and woe betide the banker who doesn’t. There are requirements such as Suspicious Transaction & Order Reports in the UK. The US Securities and Exchange Commission operates a similar process. In Switzerland, once renowned for its bank secrecy, non-domestic clients are no longer under that protection.

Politicians and their cheerleaders might applaud the “ghosting” or “de-banking” of people they dislike, but they ought to be aware that these powers cut both ways. Imagine if, for example, protest groups were to be designated as “terrorist” or whatever. Imagine if Just Stop Oil, Extinction Rebellion, or some other group got this treatment.

Samizdata quote of the day – Heresy must be suppressed

The cancellation of eminent science writers and statisticians like Dr. Whitehouse and Professor Fenton for ‘wrongthink’ highlights the ever-shrinking boundaries of the discourse around science and medicine and the unwillingness of science’s gatekeepers to challenge groupthink and politically sensitive dogmas. As Dr. Whitehouse says, “science thrives on debate and scrutiny”. Silencing those who challenge prevailing orthodoxies was the approach favoured by the Catholic church in 17th Century Italy and is completely at odds with the scientific method.

Richard Eldred

I used to read every issue of New Scientist ‘back in the day’ but it has been a bastion of approved high-status groupthink for many years, suitable for cat tray liner only.

Samizdata quote of the day – artificial intelligence and optimism edition

“My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave. In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – is not going to come alive any more than your toaster will.”

Marc Andreessen, in a widely discussed essay called Why AI Will Save The World. The essay is as sure to trigger the perma-doomsters on the Right as on the Left, I suspect.

Thoughts on AI

There is undoubtedly a revolution going on in computing capability. I remember the first time I opened up ChatGPT and asked it to write me a poem, and then realised: this is something I am not used to computers being able to do.

Computers can now respond to natural language with natural language. Let that sink in.

This is not just hype. This is a new tool completely unlike any tool we already had.

These new tools are likely to change forever the way certain types of work are done. It is important to not be left behind: AI might not take your job, but people using AI might. If you can, it is worthwhile taking the time to figure out how to use it to your advantage. Thanks to the natural language capability, it has become easier: what was previously done by meticulously gathering data sets and annotating, pre-processing and cleaning them, has been done for you with these enormous pre-trained models. What previously required learning an API and some programming can now be done by having a conversation with a chat bot.

It is not just language models; there are image, video, speech and music generation tools, too. I have mostly been playing with ChatGPT (the £20 per month service that gets you access to the GPT-4 model, which is much better than 3.5), so that is mostly what I will talk about here, but it is not the only thing. “Mixed mode” is around the corner, too: the combination of these models to handle natural language, visual and audio information at the same time, interchangeably.

There is much future potential, but much is immediately useful already. Right now, what can we do?

→ Continue reading: Thoughts on AI

Samizdata quote of the day – AI is just the latest ‘scare’

And so something as potentially useful as AI has become a means for politicians and experts to express their fatalistic worldview. It is a self-fulfilling tragedy. AI could enable society to go beyond its perceived limits. Yet our expert doomsayers seem intent on keeping us within those limits.

The good news is that none of this is inevitable. We can retain a belief in human potential. We can resist the narrative that portrays us as objects, living at the mercy of the things we have created. And if we do so, it is conceivable that we may, one day, develop machines that can represent the ‘peculiarities of mind and soul of an average, inconspicuous human being’, as Vasily Grossman put it. Now that would be a future worth fighting for.

Norman Lewis