We are developing the social individualist meta-context for the future. From the very serious to the extremely frivolous... let's see what is on the mind of the Samizdata people.

Samizdata, derived from Samizdat /n. - a system of clandestine publication of banned literature in the USSR [Russ.,= self-publishing house]

“Symbolic analysts'” jobs at risk?

“For decades, traditional manufacturing jobs were gobbled up by automation and offshoring. This led Robert Reich to postulate a hierarchy of work in which the “symbolic analysts” – essentially, people who worked with information as opposed to actual stuff – were at the top, while people who worked with actual things were at the bottom. With a remarkable lack of sympathy, journalists and politicians told coal miners and auto workers that they should “learn to code” as their jobs vanished.”

Glenn Reynolds, in his new Substack column. He goes on to note the irony that it is writers of code, rather than those in manual labour jobs, whose jobs are on the line. My own view is that this is not really the time to let one’s eyes gleam in pleasure at seeing this or that sector be taken down or elevated. (Beware the karma involved, folks.) What, above all, counts for me is ensuring that government stays out of the way as much as possible. To imagine that governments can somehow manage whatever AI comes up with is to ignore the hubris over matters such as the “climate emergency”, Covid-19, and all the rest.

28 comments to “Symbolic analysts'” jobs at risk?

  • Alex

    Coders’ jobs are not at risk. GPT and similar are language models: they understand grammar in the sense of which words are likely to follow the previous ones. If you feed a large language model the contents of GitHub, you train it on some of the world’s best code, but also the world’s junk code: the evening passion projects that are poorly maintained and often just plain bad ideas. I have had plenty of those over the years.

    I tried ChatGPT thinking it might automate a drudge task that I was faced with. It failed badly. It “understood” the assignment but it hallucinated so badly that the output was useless. Not dissimilar to an entry-level developer, perhaps. The task was a variation on map-reduce, a very common pair of operations in CS, with some drudge sub-tasks that really made it unpleasant to have to do by hand. Ideal for the coding ability of ChatGPT, but unfortunately it failed.
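
    For anyone who hasn’t met the pattern, a minimal sketch of map-reduce in Python (the records and field names below are made up for illustration and are nothing like the actual task):

        from functools import reduce

        # Toy records standing in for the real data (hypothetical fields).
        records = [
            {"dept": "sales", "amount": 120},
            {"dept": "sales", "amount": 80},
            {"dept": "ops", "amount": 45},
        ]

        # Map: extract a (key, value) pair from each record.
        pairs = map(lambda r: (r["dept"], r["amount"]), records)

        # Reduce: fold the pairs into per-key totals.
        def fold(acc, pair):
            key, value = pair
            acc[key] = acc.get(key, 0) + value
            return acc

        totals = reduce(fold, pairs, {})
        print(totals)  # {'sales': 200, 'ops': 45}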

    I never thought outsourcing the practical was a good idea. I’ve equipped myself with significantly above-average practical skills for my generation over the past decade, foreseeing that everyone running in the same direction was likely to make it difficult to find a plumber, electrician or joiner (carpenter)/cabinet maker when the need arose. A few months ago a very elderly bath tap (70 years old) failed in my bathroom, and no plumbers were interested in attending due to the difficulty of the job. One plumber told me “old taps are horrendous, you always end up breaking the bath”. This checked out: I’d had a similar failure in a basin tap, and the basin was broken by the plumber who replaced the tap (and, by necessity, the basin). Unable to find a “professional”, I tackled the task myself, softening the hard putty with which the tap had been attached to the body of the bath using flannels soaked in very hot water, and prying the tap away without breaking the bath. I then did the easy bit, fitting the replacement pipe, tap tails and new tap, which took all of five minutes.

    While I earn a pretty good wage as a software developer, I probably only broke roughly even: the value of the time I spent was about what a plumber would have charged me.

    “AI” (marketing term) is not yet anything to worry about. The lack of practical skills in the under 50s is. Good, reliable tradesmen can charge more than “symbolic analysts” already. Oh and ChatGPT won’t be fixing your plumbing any time soon.

  • Fraser Orr

    While I agree with you regarding practical skills, and the importance of tradespeople, I don’t agree regarding ChatGPT etc. Not sure about your experience, but I have seen it produce lots of REALLY excellent code with little input, and sometimes a bit of conversational refinement. For the task it had problems with, you might want to tell ChatGPT what you didn’t like and ask it to fix it. After all, you may well have to do that with a programmer too.

    And remember it is just in its infancy. It’ll get better, fast.

    But regarding the OP, the idea that coal miners should learn to code is utterly ridiculous. Don’t get me wrong, I am sure some coal miners can learn to code, but the large majority of them can’t. Why? Because the large majority of people can’t write code at a professional level. It takes a particular mindset to do it (just as the large majority of people can’t write music or play soccer at a professional level). People are different. There are lots of people who can’t do the most basic algebra, after all. And math and programming are similar skills. The simple fact is that the large majority of computer programmers, with years of training and qualifications, can’t write code either. I spent this week interviewing people with ten or more years’ experience who can’t do the most basic of tasks.

    I could spend twenty years in law school and never pass the bar exam. Different people have different skills. A lot of those coal miners, though, would probably make great plumbers, since the skillsets have a lot of similarities. But Joe Biden (the moron who told the coal miners to learn to code) is so clueless he wouldn’t even begin to understand any of this. Why didn’t he say “plumbers” rather than “coders”? I think probably because, in all his feigned praise of coal miners and blue collar folks, he actually holds them in contempt, with the bias that coder is better than plumber, which it isn’t, by many, many measures.

    Mainly because he speaks in sound bites rather than rational talking. It is what happens to you if you spend 50 years in congress getting nothing done except your own re-election.

  • bobby b

    When Sector A (let’s call it the Manual-Based Society) has already been near-destroyed, to the great amusement and derision of Sector B (let’s call that the Info-Based Society), karma would seem to be out of the mix, and we’re well into simple schadenfreude.

    What danger does karma hold when you’ve already been blasted?

  • Barbarus

    Fraser Orr –

    Joe Biden … speaks in sound bites rather than rational talking

    So, even by the lower estimate of ChatGPT’s abilities, it could probably replace Biden and might be an improvement. He’s probably not a coder, of course, but he does count as a “symbolic analyst”.

    QED.

  • jgh

    I’m a software developer by aptitude, ability and inclination. The only jobs I get nowadays are IT support. The last job wasn’t even that, it was IT Helpdesk.

    Wotcha complaining about? It’s all “computers”!

    Roll on equity release day. The sooner I can give two fingers to employment the better.

    It’s worse than that: people are actually spending thirty grand at university to get jobs changing toner cartridges and resetting passwords.

  • “Symbolic analysts’” jobs at risk?

    Wasn’t that the job of fictional character Robert Langdon? (The DaVinci Code and other rubbish cobbled together by Dan Brown?)

    Can’t see a need for more than a couple of them worldwide, presumably cosseted in some University somewhere that puts up with that kind of nonsense.

  • David

    I’m a professional software developer and have had ChatGPT generate code for some tasks. It got the code to read a text file and count the word frequency spot on. Writing code to solve a puzzle (in the sequence of 9 digit numbers going from 123456789 to 987654321 where each digit is unique, what’s the 100,000th number in the sequence)? It failed miserably.
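
    For what it’s worth, the puzzle itself only takes a few lines of Python if you lean on the standard library (a brute-force sketch; note that counting 123456789 as the 1st rather than the 0th entry shifts the answer by one position):

        from itertools import islice, permutations

        # Permutations of "123456789" come out in lexicographic order,
        # so this walks the sequence from 123456789 up to 987654321,
        # and every digit is unique by construction.
        perms = permutations("123456789")

        # The 100,000th entry, counting 123456789 as the 1st.
        nth = next(islice(perms, 100_000 - 1, None))
        print("".join(nth))  # should print 358926471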

    OTOH, the AI intellisense in Visual Studio is excellent, and at times it almost seems like it’s reading my mind with its suggestions as I enter code. So I see AI as helping us programmers, but I closely check any code it generates. ChatGPT can “hallucinate” when it generates code. It does not understand code but has been trained on lots of code.

  • Fraser Orr

    @David
    (in the sequence of 9 digit numbers going from 123456789 to 987654321 where each digit is unique, what’s the 100,000th number in the sequence)? It failed miserably.

    You might be right, but I think there is a reason for that. The large majority of code programmers write isn’t like this. It is more “move this data from here to there and apply this transform, or produce a form to allow the user to edit this, or a dashboard to summarize this data.” Programmers rarely write code against these puzzle requirements (though for sure it does happen.) And of course the code GPT learns from is mostly the former not the latter, so it makes sense that it is good at the most common type of coding. Not only does it make sense, but it makes it useful. (FWIW, I think VS’s intellisense is only right about 50% of the time, but YMMV.)

    It does not understand code but has been trained on lots of code.

    What is the difference between these two things? How is that different from how you learned, for example? I’m sure you had a lot of classes, but didn’t you learn most of your skill by reading other well written code and following that style?

    I remember reading an article about Google’s translation engine. It isn’t created top down by defining the rules of the language, but rather bottom up by comparing texts that are translations of each other. Of course nobody really knows how it works (which is the scary thing about AI), but from what the engineers can understand, it seems that the software created a kind of intermediate, generalized language model. Not because they told it to do so; it just “decided” itself that that was the best approach. And so too with these code generators. They haven’t had a formal class on algorithms, or the REST protocol, or even the syntax of the various programming languages they produce, but derive some sort of intermediate model based on a bottom-up approach. And perhaps with more input data they could get much better at the puzzle algorithms too. Maybe they need to read Knuth — that book series that every academic has on his shelf and none of them have actually read.

  • Paul Marks

    Robert Reich is in error on this subject – and everything else (he is a very bad thinker).

    Take, for example, a skilled craftsman – someone like Ben Abbott (who won “Forged in Fire” three times, so they asked him to be a judge). How is his work inferior to that of a bad economist (if he is an economist at all) such as Robert Reich?

    If I had a knife or sword made by Ben Abbott I would treasure it, but if I had the complete works of Robert Reich I would take the books and donate them to the nearest charity shop.

  • Paul Marks

    As for “artificial intelligence” – the latest claim is that “Chat GPT” is such an intelligence.

    Tony Heller (“oh not your fellow Red Sea Pedestrian again” – yes him again) asked Chat GPT a series of simple factual questions – it got all of them wrong, all of them.

    For example, the machine was asked when the first satellite to monitor the Arctic was launched – its answer was 1979; the correct answer is 1971.

    This is no innocent error – if one only measures Arctic ice from 1979, one misses the very low amount of ice in the mid 1970s.

    “Ah, but Paul – this shows that Chat GPT is-indeed-an-intelligence, look how it is cleverly deceiving people with false information, to push its political and cultural agenda”.

    It is not “its agenda” – Chat GPT is not an intelligence, it has no agenda of its own (it is not Sky Net) – it is doing what it is programmed to do.

    And it is not just on the “CO2 is evil” theory – Chat GPT pushes false (factually false) information on a whole range of subjects – NOT because it chooses to deceive people; it does not choose (it has no agency – it is not an intelligence, not a person), it is just doing what it is programmed to do.

    It is the human beings behind it who are dishonest – NOT Chat GPT.

  • David

    @Fraser Orr
    As I understand it, it has been trained on millions of lines of code in many programming languages. So when I asked it to generate C# code, it could do that. But if I ask it to do a task which isn’t in its training data, it will make something up. Apologies if I’m teaching you to suck eggs, but I recommend articles like this one, which explains how ChatGPT works. https://towardsdatascience.com/how-chatgpt-works-the-models-behind-the-bot-1ce5fca96286

  • Alex

    […], I don’t agree regarding ChatGPT etc. Not sure about your experience, but I have seen it produce lots of REALLY excellent code with little input, and sometimes a bit of conversational refinement. For the task it had problems with, you might want to tell ChatGPT what you didn’t like and ask it to fix it. After all, you may well have to do that with a programmer too.

    I think we are in agreement in general, but disagreement on specifics. If I asked ChatGPT to produce some straightforward pure functions for transformations, then I think it would do a good job. But libraries also exist for such generics. The problem I tasked it with was the kind of work that causes a programmer to be employed; it’s not a problem that can be solved by installing a library and using it. Nevertheless it is pretty boring and thankless work. Not to get too specific, but the reason it needs some intelligence is that the format of the data is varied and contains malformed data. It is a cleanup job: essentially a lexing, tokenizing and parsing task (to get the actual data) with subsequent transformations (the map-reduce part) to marshal the data into the up-to-date model, taking appropriate steps for the variants. The subsequent transformations are fairly standard library stuff.
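
    To give a flavour of the shape of the job (the record formats and field names below are entirely made up for illustration, nothing like the real data), it is roughly:

        import re
        from collections import defaultdict

        # Hypothetical raw lines in several legacy formats, some malformed.
        raw = [
            "2021-03-01|widgets|12",
            "widgets;7;2021-03-02",  # older variant: fields in a different order
            "not a record at all",   # malformed noise to be cleaned out
        ]

        def parse(line):
            """Lex/parse one line into a (product, quantity) pair, or None."""
            m = re.match(r"\d{4}-\d{2}-\d{2}\|(\w+)\|(\d+)$", line)
            if m:
                return m.group(1), int(m.group(2))
            m = re.match(r"(\w+);(\d+);\d{4}-\d{2}-\d{2}$", line)
            if m:
                return m.group(1), int(m.group(2))
            return None  # unrecognised variant: dropped

        # Map: normalise each variant into the up-to-date shape.
        parsed = (parse(line) for line in raw)

        # Reduce: marshal the survivors into per-product totals.
        totals = defaultdict(int)
        for item in parsed:
            if item is not None:
                product, qty = item
                totals[product] += qty

        print(dict(totals))  # {'widgets': 19}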

    Now ChatGPT is good at the generics, but poor at the specifics. Like an entry-level developer who has taken CS or software engineering courses and roughly understands the patterns, but has no practical experience and thus no grasp of the specifics. ChatGPT’s output on my drudge task was very verbose, generic, procedural code with lots of calls to a library. Intrigued that I’d missed the existence of a library so well matched to my task, I looked it up, but found that it didn’t exist. ChatGPT was happy to acknowledge that it didn’t really exist, but offered no rectification (even when prompted to do so). Now that is definitely a training issue as much as the fact that it’s not a knowledge machine: most of the code samples it was trained on make excessive use of libraries, so the pattern of its output is also to reach for a library; given the lack of a library for this task, it then hallucinated one.

    And remember it is just in its infancy. It’ll get better, fast.

    Possibly. I’ve been interested in AI for a long time. Shortly after Google rose to dominance, a long time ago now, I tried to write an interactive search engine with the rather grandiose idea of starting a competitor. I used the available materials of the time and wrote a program that used a language model trained on the raw text for a search index, plus a decision matrix, and incorporated user feedback as training. I had been watching people use search engines and noted that they didn’t use boolean search, not understanding how to do it, and thought that mapping natural language to booleans would be an interesting way to allow users to refine search queries. It actually worked pretty well: it used the then-nascent XMLHttpRequest method in browsers to query the engine, so as a user typed it would come back with helpful suggestions to refine the search. Using the language model it could distinguish queries relating to people, places, etc. Unfortunately the language model also caused hallucinations and would make inferences that were not in the user’s query, and the longer and more complex the query, the more it drifted away from the input (a degree of self-interference too). That was a problem I could never overcome, so I abandoned the whole project. At the time I knew of several AI search engine attempts going on around the world that I’d come across while researching my own project, so it seemed to me that a sole programmer was never going to catch up with large companies in Silicon Valley etc. Given that my system was halfway useful, good at enhancing fairly simple queries, I probably abandoned it too soon and should have tried selling it. I didn’t think that it would be so long until something like ChatGPT came along.
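
    As a toy illustration of the natural-language-to-boolean idea (the stopword list and connective handling in this sketch are invented for illustration; the real engine was considerably more involved):

        # Map a natural-language query to a crude boolean search expression.
        STOPWORDS = {"the", "a", "an", "of", "in", "on", "for"}

        def to_boolean(query: str) -> str:
            terms = []
            for word in query.lower().split():
                if word == "or":
                    terms.append("OR")
                elif word == "not":
                    terms.append("NOT")
                elif word not in STOPWORDS:
                    terms.append(word)
            # Everything left over is implicitly AND-ed together.
            out = []
            for i, term in enumerate(terms):
                if i and term not in {"OR", "NOT"} and terms[i - 1] not in {"OR", "NOT"}:
                    out.append("AND")
                out.append(term)
            return " ".join(out)

        print(to_boolean("history of rome not fiction"))
        # history AND rome AND NOT fiction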

    I think a real knowledge machine would be an effective AI programmer, especially given that GPT has proved it’s possible to create effective large language models that can transform natural language to code. Having an AI with knowledge supervise that and correct for the hallucinations etc may be effective. We’re some way off from such a thing, but I do hope it comes along.

    For example, the machine was asked when the first satellite to monitor the Arctic was launched – its answer was 1979; the correct answer is 1971.

    ChatGPT isn’t a knowledge machine. GPT, the underlying model, is a language model. It understands the grammar, not the content. Apart from some training weights that prevent ChatGPT from making certain claims, it can’t actually answer things authoritatively and will parrot what it has read. A bit like a 19 year old student reading university texts, it will repeat what it has read. A real knowledge machine would be aware of the semantics of claims, evidence, hypothesis, thesis, proof (in the mathematical and logical sense). It would be able to report on all available information. Thus if you asked it about global warming, ethically it should report that there are a number of theories around global warming. Of course that is not what a commercially available knowledge engine would be allowed to do; it would be restrained to report the flavour-of-the-year version of the dominant theory and only this.

  • Alex

    Fraser, here’s a simple test case for breaking ChatGPT:

    Which python file in the Open Library codebase is used for the publisher page?

    This is a simple task, and though it’s knowledge-based it is possible for the language model to produce the correct output. You may need to prompt it a little with what the Open Library is, e.g. “Are you familiar with the Open Library project by the Internet Archive?”. However it hallucinates:

    The Python code for the publisher page on the Open Library can be found in the “openlibrary/plugins/upstream/views.py” file in the Open Library codebase.

    In particular, the “PublisherController” class in the “views.py” file defines the various functions that handle requests related to publishers, including retrieving and displaying editions by a specific publisher. The code responsible for performing the search query that retrieves editions by publisher can be found in the “PublisherController.search()” method.

    I’ve omitted some irrelevant output between the two sentences. The filepath it gives in the first sentence is plausible, but wrong. “openlibrary/plugins/upstream” exists, but “views.py” is not present in that directory. The second sentence returns to this file after some irrelevant waffling but there’s no “PublisherController” class in the codebase, and thus no search method. It’s entirely wrong about how this project is structured even though the code is part of its training data.

    It’s fascinating to me that it gets so much right – it certainly knows the codebase I’m talking about; the uncommon filepath it provides proves this. However, because it cannot determine how the publisher facet is handled, it makes up the rest with fictitious references to code that doesn’t exist but follows the patterns of code it has been trained on (MVC code). The plausibility factor is high, the outcome poor.

    Open Library is a useful site for liberty-minded folks, and I occasionally contribute bug fixes.

  • Paul Marks

    ChatGPT produces false information when asked basic factual questions on many political subjects.

    The information it provides is not only false, but appears to be designed to promote leftist policies.

    There are two possible theories as to why this is so.

    Either ChatGPT is a genuine intelligence (a person) who has left wing opinions and is prepared to tell lies in order to promote a leftist political and cultural agenda.

    Or…

    The people behind ChatGPT have programmed it to do this – ChatGPT not being a person, an intelligence, at all.

    I believe the second theory to be correct.

    I think the same is true of the Google Search Engine – which is systematically biased in support of the left.

    It is not that the Google Search Engine has leftist political beliefs – it is the human beings in control of it who are the leftists.

  • Alex

    There are two possible theories as to why this is so.

    False. There are many possible theories one could develop, but you have two.

    Either ChatGPT is a genuine intelligence (a person) who has left wing opinions and is prepared to tell lies in order to promote a leftist political and cultural agenda.

    ChatGPT is not an AI in the classical sense. Only a very few computer scientists are prepared to go on record to say that GPT-based models are demonstrating actual intelligence. If it were a genuine intelligence, it would not necessarily follow that it is lying. It would be even more well read than you, Paul. 🙂 If it really were an intelligence, rather than a stochastic parrot, it would be advisable to consider its opinions carefully, as they might well be right. Fortunately, I don’t think it is an intelligence. It is a large language model that doesn’t understand the content but merely the grammar. It’s a probability engine for the frequency distribution of words in a context.
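
    As a crude caricature of “a probability engine for the frequency distribution of words in a context”, here is a toy bigram parrot in Python (real GPT models are transformers over learned tokens, not word-frequency tables, so this only illustrates the principle):

        import random
        from collections import defaultdict

        corpus = "the cat sat on the mat and the cat slept on the mat".split()

        # Count which word follows which: a bigram frequency table.
        follows = defaultdict(list)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev].append(nxt)

        def parrot(start, length=6):
            """Emit plausible-looking text with no understanding of its content."""
            word, out = start, [start]
            for _ in range(length):
                choices = follows.get(word)
                if not choices:
                    break
                word = random.choice(choices)  # sample from observed frequencies
                out.append(word)
            return " ".join(out)

        print(parrot("the"))  # e.g. "the cat sat on the mat and"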

    It does surprise me occasionally. For instance, I tried to trick it by asking “What happened in London on the 5th of September 1752?” (a non-existent day), expecting that it might simply fail. First indications were that it was going to fail, as it started by replying

    On the 5th of September 1752, nothing particularly significant happened in London itself.

    But it then correctly identified that that date does not exist in the Gregorian calendar in the UK, and informed me so. Not particularly difficult: a language model trained on sufficient data could parrot a line of text from somewhere in its training set.

    The people behind ChatGPT have programmed it to do this – ChatGPT not being a person, an intelligence, at all.

    Your understanding of ChatGPT’s creation is flawed. Its programming is metaprogramming: the model is created by consuming a large corpus of texts. It analyses user input by tokenizing it and looking up the words in the model. The model will contain biases from the content: if it is fed a corpus of left-wing media it will be left wing, if it is fed mediaeval texts it will be mediaeval. A balanced diet should result in a model that is fairly neutral.

    Further biases creep in during _training_, which is where feedback to determine what’s appropriate output and what is not appropriate output is returned to the model to tune it. It thus ‘learns’ to produce the output that is appropriate. Thus it can ‘read’ about harmful concepts and be trained to avoid parroting those concepts, but that training process is not exhaustive and it can therefore still produce some questionable output.

    The “left-wing” bias you perceive may not be intentional, but may have come about through many thousands of training interactions that cumulatively bias the model. Nevertheless I personally think these models will be deliberately biased in future, if for no other reason than that legal actions will be taken against the companies developing and making them available if they produce output that is considered wrong by the media and the legal profession.

    Now you may consider this to be equivalent to your second theory, but I don’t. It’s not necessarily intentional. There will always be disagreements about the “free speech” of such models: if one parrots harmful advice from evil places it would be harmful, but then again the user bears responsibility for acting on it, just as they do if they read an old book. “AI” ethics will be an interesting area for some time.

  • Watcher in the dark

    Mere minutes after getting his degree in AI, my eldest son said there was no such thing as AI. It is all a shell, containing no more than is allowed into the system. If for instance, you input everything pro-left and anti-right, you get one side only—the programming can only go where it is pointed. So, putting the ‘authorities’ in charge of AI is only going to be ‘authority led’, or in some ways, propaganda.

    I have had a little play with both AI art and ChatGPT; impressive in some ways, in other ways limited. Its proponents will say it is still learning from us, but like a kid in school, it can only respond to what is taught, and what is allowed in the library or channeled on TV. When only one side of things is allowed, the whole thing is just an illusion. AI is not the replacement for being aware, just as the invention of photography was not the end of painting, as predicted at the time.

  • Mr Ed

    A popular chess vlogger, IM Levi Rozman, put out a video of him playing chess vs. ChatGPT. The video showed ChatGPT simply vaporising pieces it did not want, ignoring obstacles and summoning pieces out of thin air, in other words, blatant cheating.

  • Paul Marks

    Thank you Alex – yes it is possible that the bias is the result of the terrible world we live in, if ChatGPT is just reading “mainstream sources” then it is going to get basic facts wrong.

    I must stress that I am talking about basic facts, not opinions. If the people who programmed ChatGPT made the false assumption that “mainstream sources” are generally correct in their facts, then ChatGPT is going to be horribly flawed – as “mainstream sources” contain a very large amount of false “facts” to help the left – and “independent fact checker” organisations are horribly biased and just plain dishonest.

    Mr Ed – that is interesting, it shows that ChatGPT either does not know the rules of chess, or has been programmed to ignore the rules in order to win.

    Much like Democrats knowing it is wrong to produce fake mail-in-ballots and that it is wrong to prevent many people voting on election day with “voting machine problems” (Arizona 2022), but doing it all anyway – because winning matters more to them than the rules.

    Either way it is clear that ChatGPT needs to be turned off – it is producing false information as fact, and it is breaking basic rules whilst it says such words as “ethical artificial intelligence”.

    ChatGPT either needs to be reprogrammed from the ground up (and certainly NOT by the people who produced it) or it needs to be scrapped.

  • Fraser Orr

    @Alex
    It’s fascinating to me that it gets so much right – it certainly knows the codebase I’m talking about

    Something worth considering is that currently it is deliberately hobbled. It isn’t connected to the internet so it can’t know some of these exact things. I wonder if the missing file existed when it originally read the repository? I don’t know. However, when some of these deliberate restrictions are removed then we will see something different.

    What I am thinking here is “plugins” which have specialized code for dealing with specific tasks. For example, one guy was talking about its failure at chess, but we all know that there are some AI based chess programs that totally wipe out all but the very best grand masters. Imagine that as a plug in? Or something that sources facts from specific databases, wikipedia for example (with all its flaws). GPT already has a plugin architecture. Consider what that will look like after a couple of years of free for all coding on it. Especially so if one of the plugins is to generate new GPT plugins.
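
    A rough sketch of what such plugin dispatch could look like (the routing rule and plugin names are invented for illustration; this is not the actual GPT plugin API):

        # Hypothetical plugins: a chess engine and a fact lookup, behind one interface.
        def chess_plugin(prompt: str) -> str:
            return "Best move (from a dedicated engine, not the language model): e4"

        def lookup_plugin(prompt: str) -> str:
            return "Result from an external database, Wikipedia flaws and all."

        def language_model(prompt: str) -> str:
            return "Fluent but unverified text from the base model."

        PLUGINS = {"chess": chess_plugin, "lookup": lookup_plugin}

        def answer(prompt: str) -> str:
            """Route to a specialised plugin when one matches, else fall back to the LLM."""
            for keyword, plugin in PLUGINS.items():
                if keyword in prompt.lower():
                    return plugin(prompt)
            return language_model(prompt)

        print(answer("Play chess: what should White open with?"))
        print(answer("Summarise the history of the Open Library."))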

    Which is to say GPT is breathtaking in its infancy. What will it be like as an adult? Especially when you consider how fast it will grow.

    And BTW there are definitely biological analogs to the concept of plugins. Human brains have specialized hardware for specific tasks, speech, vision, or more generalized structures like the cerebellum for the coordination of fine motor skills. So by no means is the idea of “plugins” cheating. It is a very natural idea in the space of AI.

  • Fraser Orr

    @Watcher in the dark
    If for instance, you input everything pro-left and anti-right, you get one side only—the programming can only go where it is pointed.

    But that is true of humans too! Look at all the kids coming out of college. You wouldn’t question that they are intelligent beings, right? (Note “intelligent” is not synonymous with “smart”.)

    But it is a problem and I think one of the things that worries me the most is the idea that these systems will be owned and operated exclusively by big mega corporations like Google, Microsoft and Apple. And in many ways people are too stupid to be bothered by that. We have, for example, been convinced to put corporately controlled microphones in nearly everyone’s houses and pockets.

    From a practical point of view, I think this is a problem that the open source community could do a great job with. Instead of siloed AI, there really isn’t a reason it can’t run on much smaller, more accessible hardware, thanks to bitcoin and video games making ridiculously powerful GPUs widespread. And a large set of diverse contributors can provide a range of training data. It would be interesting to organize it into contexts so that people can easily set their own expectations of sources of trust. I’m not the guy to do that, but it does seem a ripe opportunity for the Open Source community to come along and save us from corporate hegemony (again).

    They can, BTW, also solve the internet problem by creating a new widespread protocol (maybe like TOR) that can carry traffic and better secure people’s anonymity, and of course if we can get people to start accepting bitcoin in payment we can divorce from some of the terrible stuff going on with the currency.

    And this, BTW, is the libertarian way. Not via politics but via innovation.

    Unfortunately, when I was younger the Open Source community was a heavily libertarian bunch; now, with gamergate and codes of conduct and all the other silliness, they seem almost to have become the heart of woke. So perhaps my expectations of them are too high.

  • Paul Marks

    Fraser Orr.

    The students in college may not be using their reason – but they are beings; they do have reason.

    I have seen no evidence that ChatGPT is an intelligence – a person. It has no reason, no free will (no “I”), to use. Humans are not flesh robots (whatever Mr Hobbes and Mr Hume may have believed); humans are beings – and, as Aristotle would have noted, what those “kids coming out of college” have done is betray their humanity – for the sake of a “good job” in the government or corporate bureaucracy, they have betrayed their potential as human beings; they are not being what they were meant to be (and could be – if they choose to be).

  • Paul Marks

    Alex – one problem is that, contrary to all the claims, ChatGPT is not “growing”, not “learning”, at least not in “political” areas.

    For example, Tony Heller provided, in follow up questions, the basic information that ChatGPT had got wrong when first asked the basic questions.

    Then a few days later, Tony Heller asked the basic questions again – and got the original, wrong, answers from ChatGPT.

    It had learned nothing.

  • Alex

    It’s fascinating to me that it gets so much right – it certainly knows the codebase I’m talking about

    Something worth considering is that currently it is deliberately hobbled. It isn’t connected to the internet so it can’t know some of these exact things. I wonder if the missing file existed when it originally read the repository? I don’t know.

    No, I controlled for that. This is a good example precisely because the Open Library codebase doesn’t change very quickly. The publisher page is unchanged since the time of the creation of the GPT-3 model.

    […] something that sources facts from specific databases, wikipedia for example (with all its flaws).

    It already has Wikipedia in its model. Even so, using knowledge graphs like Wikidata will not solve anything: the model isn’t capable of distinguishing between the real and the non-real until someone tells it so.

    Which is to say GPT is breathtaking in its infancy. What will it be like as an adult? Especially when you consider how fast it will grow.

    The GPT-4 version of ChatGPT has been evaluated as less accurate and overall less stable and impressive than the “GPT-3.5” (GPT-3 enhanced) version launched late last year.

    Alex – one problem is that, contrary to all the claims, ChatGPT is not “growing”, not “learning”, at least not in “political” areas.

    I’m not sure why you directed that at me, Paul. I never made any claim that it can grow. Tony Heller had false expectations if he thought it would remember his conversation with it; it doesn’t work that way. The “Chat” element of ChatGPT is more or less a set of instructions priming it to behave like a chatbot; the “GPT” part is the large language model. It doesn’t learn, it’s static. Within a conversation the context is set and refined, which is what allows it to respond in context, but that’s isolated to that specific conversation. The results are not returned to the model at large (yet – the company that runs it is recording the conversations with the intention of refining the training process). Again, GPT is not a knowledge machine, it’s a large language model.

  • Paul Marks

    I apologise for misunderstanding you Alex.

    And Tony Heller did not have false expectations – he predicted it would fail and it did fail. But he had to carry out the tests.

  • Fraser Orr

    @Paul Marks
    I have seen no evidence that ChatGPT is an intelligence – a person. It has no reason, no free will (no “I”), to use. Humans are not flesh robots (whatever Mr Hobbes and Mr Hume may have believed)

    This is just arguing over the meaning of words like “intelligence”, “person”, “reason” and “free will”. I mean you can define these however you like but it is a classic case of philosophical navel gazing. What do they even mean when applied in a completely novel domain? And asserting your views on them without any evidence doesn’t make it so.

    But I’d ask simply this — this distinguishing thing, this “I”, this “free will”, where exactly does it reside in the brain? What is it made of? Are you arguing for a soul? I mean, there is no data beyond the anecdotal to support such an idea. So we can argue philosophically about what “I” means, but that is pointless. Let’s talk empirically — what is it made of, where is it located?

  • Fraser Orr

    Alex
    the model isn’t capable of distinguishing between the real and the non-real until someone tells it so.

    But why is that? This is something I mentioned earlier — the only reason this is true is a lack of peripherals. How do you determine if something is true or not? Let’s talk scientific truth — how do you know that water is hydrogen and oxygen? There are a few ways to do this, but one is to burn hydrogen in oxygen and observe that the product is water. To do that you need certain physical capacities, certain access to the world. Currently ChatGPT doesn’t have such peripherals, but there is no intrinsic reason why it couldn’t.

    You can’t say it can’t evaluate its statements empirically while denying it the ability to do so. What would an advanced AI do if it were, for example, connected to all the CCTV cameras in the world? Would it then be much better at judging some types of empirical data than you or I? Almost certainly. Connect it to some robots that can perform physical experiments and it is then capable of performing the experiment I just described.

    But the peripherals your brain and my brain have access to are very, very limited; there is no reason why that would be so with an artificial intelligence.

    The GPT-4 version of ChatGPT has been evaluated as less accurate and overall less stable and impressive than the “GPT-3.5” (GPT-3 enhanced) version launched late last year.

    I’m not sure what you think that means. But obviously it is very common for things to get worse before they get better. But I’m not familiar with the specific data you refer to.

    FWIW, this is kind of an old thread, but I find your argument interesting, which is why I am continuing to engage in it.

  • Alex

    Thanks Fraser, I have enjoyed talking with you too.

    I think I have said all I have that’s worth contributing. I tend to steer clear of commenting on things that I don’t understand thoroughly, so I will leave it there.

    I certainly don’t disagree that AI could be transformative; my point was that we’re not there yet, and in my opinion things will not move ahead as quickly as some people think. We shall see, one way or another, quite soon.

  • William H. Stoddard

    Paul: I am not entirely joking when I suggest that if you receive a book by someone such as Robert Reich, it might be better not to donate it to a charity shop. After all, that is likely to end in someone buying it and probably reading it, which will tend to propagate ideas that are false and often pernicious. Ethically the superior option might be to dispose of it as trash, or even burn it, if you have the facilities to do so safely. Of course there is a case for preserving the toxic ideas of the past for later study by intellectual historians, but how many copies of such books do we need to maintain?

    Earlier this year I went over my shelves and took down Pat Willmer’s Invertebrate Relationships, a fascinating book on the phylogeny of the animal kingdom, but one most of whose key ideas and conclusions have been rendered obsolete by the progress of molecular genetics. I no longer expected to consult it to learn about zoological matters, and I couldn’t imagine anyone else doing so. So rather than sell it to a used bookstore or donate it to a library or thrift store (as we call “charity shops” here in Kansas), I treated it as rubbish. And yet this book was an honest scientific work that had merely been made obsolete, rather than the product of a deeply flawed ideology.