We are developing the social individualist meta-context for the future. From the very serious to the extremely frivolous... let's see what is on the mind of the Samizdata people.

Samizdata, derived from Samizdat /n. - a system of clandestine publication of banned literature in the USSR [Russ.,= self-publishing house]

An algorithmic snake brain and an algorithmic world

At my personal blog, on Friday February 28th (Friday being my day for animal kingdom related stuff, most of it very silly), I posted a link (among other links of a similar level of profundity) to a video of a snake that had swallowed a towel having the towel extracted back through its mouth by helpful vets. Ho, ho.

The link I posted, to a tweet someone had done, no longer works, but here is the drama I’m referring to.

But now, today, AndrewZ added a comment to that posting of mine which seems to me to deserve rather wider attention than it would get if I merely left it where he first put it. He wrote this:

A snake is a simple creature driven by its instincts. It follows a set of hardwired rules which it can’t question and which can lead to dangerous errors when it encounters something outside of its normal experience, like a towel. In other words, a snake’s mind is a very limited algorithm. But the world today is saturated with algorithms, from Facebook to FinTech to facial recognition systems used by the police and ten thousand other things. The $64,000 question – perhaps, the $64 billion dollar question or the 64,000 lives question – is how many of them are still operating at the “dumb snake swallows a towel” level of sophistication.

This is not, to put it mildly, my area of expertise. But, on the other hand, this is just the kind of thing that the Samizdata commentariat enjoys chewing on, metaphorically speaking. So, ladies and gents, chew away.

64 comments to An algorithmic snake brain and an algorithmic world

  • “The $64,000 question – perhaps, the $64 billion dollar question or the 64,000 lives question – is how many of them are still operating at the “dumb snake swallows a towel” level of sophistication.”

    Here’s an article suggesting that today’s Artificial Intelligence systems are best analogized not with *human* intelligence, but with *animal* intelligence, and not by any means the highest-level animals, either:

    https://www.ge.com/reports/understanding-animals-can-help-us-make-artificial-intelligence/

  • BlokeInAShed

    OT, but I listened to your Falkland War Podcast this morning while walking down to the workshop.
    I found it entertaining, especially about those unexploded bombs.
    We had some Falklands lads here a couple of years ago (2017) for the World Shearing Championships.
    They knew how to have a good time.

  • Fred Z

    My dad used to say that 90% of everything is crap, but he was wrong, it’s 99.9999%, or worse.

    For algorithms, it’s double plus worse. I’ve been in the cpsc field on and off since 1970 and that includes my own stuff, which is often double double double plus worse.

    The only thing that winnows bad algorithms or bad implementations of them is evolution. The unrescued snakes die; I think Twitter is on the way out too.

    How the fuck does socialism keep on going? The worst algorithm ever and yet, it remains. Oh wait, socialism kills its adherents first, maybe I’m too impatient for evolution to do its job.

  • Paul Marks

    I am even more concerned about systems that work than systems that do not work.

    True, systems that do not work well (such as my Sky [Disney Corporation] telephone and internet connection – which keeps cutting out) do great harm by their failures.

    However, systems that do work well do even more harm – if the powerful people using them have a bad philosophy (world view).

    And I believe that most of the people in command of such systems have indeed been educated with a bad world view. They would use any successful system to do harm.

  • Itellyounothing

    The problem is the messiah fallacy. Many people are educated to believe a new Prime Minister, boss, parent, teacher, police officer, priest, etc will fix everything.

    Socialism is the opposite of free markets, so might be the magic the new high wizard uses to seduce his useful idiot army in free market countries because free market losers won’t want to lose again. But the army mostly needs to believe in the new high wizard.

  • Stonyground

    I always find targeted advertising interesting. My phone tells me about stuff that I already have or stuff that I can’t imagine ever wanting.

  • NickM

    Stony,
    You are not alone. I fix computers for a living and if I buy component x Amazon et al think I want more and more and more of ’em. No, it was one thing for one particular machine.

    The other thing that gets me is my (oldish) Kindle Fire HDX’s autocorrect. It insists “etc” must mean “TEC” and will not take no for an answer.

    As to the snake… Well, I have two cats (brother and sister from the same litter). Julia is the brains of the outfit. George is less sagacious. George will go out the back door and return immediately if it is raining. He will then try the front door because of course they lead to different worlds. Apparently this is similar to the way you can play peek-a-boo with babies.

  • Nullius in Verba

    “It follows a set of hardwired rules which it can’t question and which can lead to dangerous errors when it encounters something outside of its normal experience, like a towel.”

    This, coming from a species that competes in the Darwin Awards…

    Humans make mistakes. Humans follow rules blindly. Humans encounter and interact with many things they don’t understand. Humans have been known not to question rules, and follow them to disaster.

    Algorithms, like people, are fallible. It’s quite possible that this is mathematically inevitable – a consequence of Turing’s Halting Theorem. There is no algorithm that can answer every question, solve every problem, make no mistakes. And that applies to people too. It’s just a matter of degree.

    So the $64,000 question is how much of the world is being run by dumb people who think the sun revolves around the Earth?

  • Schrodinger's Dog

    Society has come to depend on algorithms which are complicated, fragile and undocumented. As a thirty-five-year veteran of the IT industry, I find it is the last of those which concerns me most. We now have IT systems which no-one really understands, the people who designed and coded them having long since retired, or simply passed away. Perhaps we’ll be able to keep all the plates spinning, so to speak. If not, those science fiction stories where there’s been a civilisational collapse, and people worship the machines they no longer understand, just might be a portent of the future.

  • Stonyground

    Speaking of dumb people, have you heard that, if you split 500 million dollars between 327 million Americans, they will all get a million dollars each?

  • Mr Ed

    Looking at AndrewZ’s superb quote, the following occurred to me as an observation about Theresa May:

    It follows a set of hardwired rules which it can’t question and which can lead to dangerous errors when it encounters something outside of its normal experience, like Brexit or the prospect of freedom.

    And the same goes for most of our political and ruling class.

    But give enough snakes sufficient towels and time, perhaps selective pressure might lead to smarter snakes. Or perhaps not.

    And as I write, my local supermarkets have run out of toilet paper, so we are becoming a chilly Venezuela as one Project Fear finally takes hold. Quite why people think coronavirus is gastro-intestinal in impact is anyone’s guess.

  • Dr Evil

    Humans and sat navs. ‘Nuff said.

  • Algorithms drive computers to perform specific tasks. There is no organic intelligence monitoring the process, able to stop it if it isn’t executing as intended. Couple this with robotic precision and repeatability, and even programs which are not maliciously deployed can go wrong, badly.

  • Snorri Godhi

    The following is not so much wrong as misleading:

    A snake is a simple creature driven by its instincts. It follows a set of hardwired rules which it can’t question and which can lead to dangerous errors when it encounters something outside of its normal experience, like a towel.

    First, you can do much worse than a snake’s brain. A slug’s brain, for instance. And even that is better than a clam’s brain, if any.

    But it is the bit about following ‘a set of hardwired rules’ that is perhaps most misleading. Here is a set of rules, which are not hardwired, but would work just as well if they were:
    1. When faced with a scientific problem, formulate a hypothesis;
    2. Use that hypothesis to make predictions of the results of various experiments;
    3. If the “predictions” for experiments that have already been done do not match the observed results, return to step 1;
    4. Perform the experiments that have not already been done, for which the new hypothesis differs from the previous consensus;
    5. If the results support the new hypothesis, brag about it; otherwise, return to step 1.

    This is an algorithmic process, and to be fair AndrewZ did not say that all algorithms are dumb. However, the way he talks about ‘hardwired rules’ would, i suspect, lead most people to make that mistaken assumption. The fact is that formulating hypotheses and learning from wrong as well as correct predictions — in other words, learning to avoid ‘dangerous errors’ when faced with ‘something outside normal experience’ — can be done by following ‘hardwired rules’.

    Apologies if the above is unclear: it would take me too long to rephrase it again and again until it is clear. (Which would be a process of iterative correction of my mistakes vaguely similar to the 5 steps above.)
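
    For the curious, those five steps translate almost line for line into code. The sketch below is illustrative only (the hypotheses, experiments and run_experiment function are all stand-ins, not anything Snorri specified); the point is simply that the learning loop itself is a fixed algorithm.

```python
# Snorri's five steps as a literal loop -- purely illustrative.
# Hypotheses are modelled as functions mapping an experiment to a predicted
# outcome; run_experiment stands in for actually doing the work at the bench.
def scientific_method(candidate_hypotheses, past_results, new_experiments, run_experiment):
    for hypothesis in candidate_hypotheses:                            # step 1: formulate
        if any(hypothesis(exp) != outcome                              # steps 2-3: check against
               for exp, outcome in past_results.items()):              # experiments already done
            continue                                                   # back to step 1
        fresh = {exp: run_experiment(exp) for exp in new_experiments}  # step 4: new experiments
        if all(hypothesis(exp) == outcome for exp, outcome in fresh.items()):
            return hypothesis                                          # step 5: brag about it
        past_results.update(fresh)                                     # otherwise, back to step 1
    return None                                                        # no hypothesis survived
```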

  • Rich Rostrom

    My mother has a slightly (occasionally seriously) irregular heartbeat and has had high blood pressure. She was hospitalized last year and an automatic blood pressure cuff was put on her arm. The cuff gave wildly inaccurate BP readings, but manual readings by staff were reasonable.

    The process of reading BP requires the reader (automatic or human) to recognize the subject’s pulse and take the systolic/diastolic readings at the crest/ebb of the pulse.

    Automatic BP machines do this by applying an algorithm to the data from the cuff; humans by listening with a stethoscope.

    It’s apparent that the irregularity in my mother’s heartbeat “spoofs” the algorithm, resulting in the aforesaid impossible readings. A “smarter” algorithm (i.e. more complex) probably could emulate human performance, but for 99%+ of people the existing algorithm sufficed.
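
    For anyone curious what an algorithm of that sort looks like, here is a toy version of the “fixed ratio” oscillometric method that automatic cuffs commonly use. The 0.55 and 0.75 ratios are illustrative only, not the calibrated values of any real device, and the sample data is made up; the point is how a single outsized oscillation from an irregular beat corrupts the whole estimate.

```python
def estimate_bp(samples):
    """Toy fixed-ratio oscillometric estimate.
    samples: (cuff_pressure_mmHg, oscillation_amplitude) pairs, ordered from
    high cuff pressure to low as the cuff deflates."""
    max_amp = max(amp for _, amp in samples)
    peak = next(i for i, (_, amp) in enumerate(samples) if amp == max_amp)
    # Systolic: first pressure, on the high side of the peak, where the
    # amplitude reaches 55% of the maximum (ratio chosen for illustration).
    systolic = next(p for p, amp in samples[:peak + 1] if amp >= 0.55 * max_amp)
    # Diastolic: first pressure, past the peak, where it falls below 75%.
    diastolic = next(p for p, amp in samples[peak:] if amp < 0.75 * max_amp)
    return systolic, diastolic

normal  = [(180, 0.1), (160, 0.3), (140, 0.6), (120, 1.0), (100, 0.8), (80, 0.5), (60, 0.2)]
spoofed = [(180, 0.1), (160, 1.4), (140, 0.6), (120, 1.0), (100, 0.8), (80, 0.5), (60, 0.2)]
print(estimate_bp(normal))   # (140, 80) for this made-up envelope
print(estimate_bp(spoofed))  # one outsized beat early on -> (160, 140), nonsense
```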

    It should be noted here that the most dangerous algorithms are those that are most successful, because they are most used to control more powerful systems.

  • Fraser Orr

    I find the whole discussion on AI and its crazy implications to be rooted in a Sci Fi belief about the world. We use algorithms all the time and some things computers are massively better at than humans. For example, what is 23,782 divided by 7.32? I could do it with paper and pencil and it would probably take me ten or fifteen minutes and there is a 50% chance I’d make an error. The simplest computer can do that in a nanosecond and do it without any possibility of error.

    A different algorithm might better illustrate my point. In software there is a sorting algorithm called quicksort. Its purpose is to take a list of items and put them into order, think names in a phone book. Quicksort is far and away the most commonly used sorting algorithm because it is, as its name implies, quick. However, quicksort (at least in its naive textbook form, which picks the first or last element as the pivot) has a flaw. If you give it a list that is almost sorted already (or actually is completely sorted), it performs about as badly as the worst sorting algorithms known.
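
    A quick sketch makes the point concrete. This is the naive textbook form described above (first element as pivot); production library sorts use smarter pivot choices or different algorithms entirely, precisely to dodge this trap.

```python
import random

def quicksort(xs, stats):
    """Textbook quicksort with the FIRST element as the pivot -- the naive
    choice whose partitions become maximally lopsided on sorted input."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    left, right = [], []
    for x in rest:
        stats["comparisons"] += 1
        (left if x < pivot else right).append(x)
    return quicksort(left, stats) + [pivot] + quicksort(right, stats)

def comparisons_for(xs):
    stats = {"comparisons": 0}
    quicksort(xs, stats)
    return stats["comparisons"]

n = 800  # small enough to stay inside Python's default recursion limit
print("shuffled:", comparisons_for(random.sample(range(n), n)))  # on the order of 10,000
print("sorted:  ", comparisons_for(list(range(n))))              # n*(n-1)/2 = 319,600
```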

    Now if you think about this in terms of humans — humans sort lists BEST when they are almost sorted, and WORST when they are really messed up. Which is to say both humans and computers use algorithms for sorting things, and in both cases there are situations where they perform very poorly. However, the two exceptional cases rarely overlap — computers are good at things that humans are bad at, and humans are good at things computers are bad at.

    In terms of Artificial Intelligence, AI is extremely good at finding patterns that humans would completely miss. But there is a certain class of important problems AI is bad at: namely problems that involve understanding what it is like to live a human life. The classic example I give is in speech recognition. How can a speech recognition engine tell apart the very different meanings of these two sentences: “Find laundry detergent costing less than two ninety nine”, and “Find a laundry machine costing less than two ninety nine”? The expression “two ninety nine” means completely different things in these two commands, $2.99 in the first and $299 in the second. How can a computer know that? You and I only know because we have the experience of going grocery shopping.
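
    A toy sketch of the kind of world knowledge the disambiguation needs (the price ranges below are invented for the example; no real speech system is being quoted here):

```python
# Both readings of the spoken phrase "two ninety nine".
CANDIDATE_PRICES = [2.99, 299.0]

# Rough, invented ideas of what each product plausibly costs.
TYPICAL_PRICE_RANGE = {
    "laundry detergent": (1.00, 20.00),
    "laundry machine": (150.00, 2000.00),
}

def resolve_spoken_price(product):
    """Pick whichever reading of 'two ninety nine' falls inside the
    plausible price range for the product being discussed."""
    low, high = TYPICAL_PRICE_RANGE[product]
    for price in CANDIDATE_PRICES:
        if low <= price <= high:
            return price
    return CANDIDATE_PRICES[0]  # fall back to the literal small reading

print(resolve_spoken_price("laundry detergent"))  # 2.99
print(resolve_spoken_price("laundry machine"))    # 299.0
```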

    Algorithms already run many, many things extremely well. Far better than humans can. They do make mistakes for sure, but humans make mistakes too, far more of them, and often in far more dangerous ways. Part of the problem is that when a computer makes a mistake it is scary, surprising and we feel out of control. Whereas when a human makes a mistake — perhaps a really serious one like starting a war in the Middle East — we just say “ah well that is just George.”

  • Mike Borgelt

    NickM, your cat, George, is just looking for The Door Into Summer.:-)

  • Fraser Orr

    One other thing — it is worth pointing out that that algorithm the snake brain is running is quite remarkable. The sensory data it receives (visual, vibrational and olfactory) is extremely complicated, full of error artifacts, has a very low SNR and is several degrees of separation from the actual data you want. In fact the only reason it works (in snakes and humans) is that their brains have extremely specialized hardware to perform these types of detection tasks. I don’t know a lot about snake brains, mainly because snakes give me the creeps, but human eyes have lots and lots and lots of super custom hardware designed by evolution for these specialized tasks.

    Even once the target is recognized it must then be translated through a database of “things good to eat”, and then run through an extremely complex behavioral algorithm for how to actually capture the prey.

    There is a massive amount of fuzziness in both the recognition algorithm and the strategic capture algorithms. I think it is a freaking miracle that it ever works, never mind that it works all the time.

    FWIW, AI is nowhere near close to being able to do this. But again remember that this uses extremely specialized hardware specifically designed for the task.

  • Julie near Chicago

    One of my favorites. Involves a boy, a girl, and a cat. And a golddigger.

    Percipient observation, Mike, well noted. 😀

  • John D. MacDonald wrote a book called The House Guests, about the two cats living with him. One day, he says, one of his cats wanted to go out the back door. It was raining. The cat went to the front door. It was not raining. The cat went out.

    I suspect this memory never, ever, left the cat’s mind.

  • Mr Ed (March 7, 2020 at 3:41 pm), at the start of an old post, I chanced to use a Burke quote about Grenville that is like your application of the OP quote to May.

    Whereas Burke implied that Grenville had a certain competence when things were routine, I think May never functioned very well even dealing with ‘ordinary’ things when she was home secretary.

    The above observation replaces a comment on “The Door into Summer”, as Mike and Julie (and in a sense Ellen too) have anticipated me. 🙂

  • NickM

    Mike,
    We are all looking for a door into summer. If George finds one, and he tells me, I’ll let you know 🙂

  • Snorri Godhi

    In reply to Fraser Orr’s comment (March 7, 2020 at 9:46 pm):

    it is worth pointing out that that algorithm the snake brain is running is quite remarkable. […]
    FWIW, AI is nowhere near close to being able to do this.

    (Read the whole thing!)
    That is actually why i mentioned slug brains in my previous comment. It’s been maybe more than 20 years, but i remember Geoffrey Hinton (one of the Godfathers of AI) poking fun at the optimism of his colleague and occasional collaborator Terry Sejnowski. Hinton was talking about a documentary on neural networks in which they both were interviewed. Sejnowski said that he believed that one day, his best friend will be a neural network. Hinton said that we cannot even match the brain of a slug yet.

    Apparently Hinton remains a pessimist. Although we might be able to match the brain of a slug by this point.

  • Nullius in Verba

    “Sejnowski said that he believed that one day, his best friend will be a neural network. Hinton said that we cannot even match the brain of a slug yet.”

    But who wants a slug for a best friend? 🙂

    The pessimism is sort of true but sort of misleading, because animal intelligence is built like a pyramid but the problems we want solving aren’t. The highest functions in an animal brain are a relatively small part resting on a massive base of lower-level processing. But AI tends to approach the emulation problem from the top down. It can do the high-level things easily. It’s the supporting processing providing the basics that it has difficulty with.

    Thus, a chess engine can master grand strategy and combinatorial analysis better than any human, but it can’t so easily just look at the board and tell where the pieces are. Interpreting a 2D image to identify and locate a set of irregularly arranged chess pieces is a harder task. Computers can play chess (with support) because it only needs skills from near the top of the pyramid, not the whole thing.

    So for a job like “best friend”, it depends how far down the pyramid you need to go. It’s not necessarily the case that you have to go down as far as ‘slug’ to find a friend. After all, a lot of human children can make up ‘best friends’ from inanimate dolls with no intelligence at all. All it needs is a bit of emotional feedback to signal affection, sympathy, and respect (for which there are well-understood verbal and non-verbal signals), and enough background knowledge not to give too blatant an impression of having no actual clue what you’re talking about. Like chess, that can probably be simulated (with support) for a limited range of topics from the top end of the pyramid.

    Moravec made some more optimistic predictions, which in some partial sense seem to have come true. However, it’s not just about the amount of processing power, but also the design/structure/organisation. A big pile of transistors in a heap is not a computer. The same goes for neurones.

    I suspect the breakthrough will come with some new insight about how it works, which means it is not just pure processing power that is the issue. Yes, human brains are big, but so are those of elephants and whales. There’s more to it than just size.

  • NickM

    I suspect the breakthrough will come with some new insight about how it works, which means it is not just pure processing power that is the issue.

    That might be the case… But what if something as complicated as human cognition is just that – complicated. I have little doubt AI will advance dramatically over the next couple of decades but it is entirely possible this will take the form of advances which won’t really be understood.

    I’m loath to use the term “irreducible complexity” because of its association with creationism but, yeah, why not? That term could be interpreted in a purely secular sense. It is entirely possible we are smart but just not smart enough to understand why we are smart enough to ask the question, or, as Marvin Minsky put it, “Can you take it apart with itself?”

    It may even be the case it isn’t even smarts here but that consciousness in general can’t be explained, or at least not in an interesting or useful way. I mean, if you consider sophisticated software systems, is it the case that there is any single individual who really understands how the much-hyped sensor fusion of an F-35 fighter plane works?

    I hope I’m not coming over as some sort of mystic, but whilst there are some incredibly complicated phenomena (fractals, say) that are the result of very simple principles, there are others that are complicated because so are the causes. The human brain is the result of something of the order of billions of generations (going back to the primordial slime) of chance mutations filtered via natural selection, and each selection was over what worked at the time rather than working towards some target. That’s what I was getting at with my use of the term “irreducible complexity”. There is no need to invoke a deity when you consider just how many processes have happened over how long and with so many twists, turns and dead ends along the way…

  • Nullius in Verba references Moravec’s 1997 paper (he labels it optimistic) which is entitled “When will computer hardware match the human brain?” It is an interesting read, which I had not seen before, on a topic that I have some belief in – at least in so far as artificial brains that match human brains using a similar architecture will have to match the strength of each part of that architecture. [Aside: but our aeroplanes do not flap their wings, and our surface vehicles mostly have wheels not legs.]

    But I caution all readers: that paper is really only narrative – supported somewhat by a selection of achievements from the history of computing. Many paragraphs require uncertain things, sometimes several of them, to be believed before moving contentedly on.

    In that 1997 paper, Moravec writes:

    By our estimate, today’s very biggest supercomputers are within a factor of a hundred of having the power to mimic a human mind. Their successors a decade hence will be more than powerful enough.

    But I ask, if one could demonstrate near-human intelligence but 100 times slower than real-time, why was that not done? Likewise, on the count of neurons and synapses, why not build, first, AI equivalent to that of some animal (say cat, dog, pig or rhesus monkey) with one hundredth of the mass of a human brain?

    And not much after, Moravec writes:

    At the present rate, computers suitable for human-like robots will appear in the 2020s. Can the pace be sustained for another three decades? The graph shows no sign of abatement. If anything, it hints that further contractions in time scale are in store. But, one often encounters thoughtful articles by knowledgeable people in the semiconductor industry giving detailed reasons why the decades of phenomenal growth must soon come to an end.

    Moravec then lists many technological come-to-an-end worries and argues them away, before playing with quantum computers and the like. He summarises with this:

    Molecular and quantum computers will be important sooner or later, but human-like robots are likely to arrive without their help. Research within semiconductor companies, including working prototype chips, makes it quite clear that existing techniques can be nursed along for another decade, to chip features below 0.1 micrometers, memory chips with tens of billions of bits and multiprocessor chips with over 100,000 MIPS.

    Despite our current technologies meeting these (non-quantum-computer) targets, somehow we still don’t seem to have human levels of intelligence in AI.

    Moravec ends with:

    As the rising flood reaches more populated heights [an analogy for AI tackling increasingly complicated individual problems], machines will begin to do well in areas a greater number can appreciate. The visceral sense of a thinking presence in machinery will become increasingly widespread. When the highest peaks are covered, there will be machines that can interact as intelligently as any human on any subject. The presence of minds in machines will then become self−evident.

    All this computational equivalence is clearly missing something. Remember that when you trust your life to a government-approved autonomous vehicle.

    Best regards

  • NickM

    Very interesting stuff Nigel. The “let’s go animal first!” idea especially.

  • Snorri Godhi

    There is a common theme to the latest comments by Nullius, Nick, and Nigel (as i understand them), that the problem is software, not hardware: the problem is that we do not know how to program a computer as powerful as a human brain, to perform like a human brain. (Nick goes a bit further than that.)

    I agree, but i would argue that that is what Geoff Hinton meant when he said that we could not match the brain of a slug yet. (I don’t remember his exact wording.) He meant not that we did not have supercomputers matching the computational power of a slug’s brain, but that we did not know how to program them.

    Added in proof: if AI could do as well as the human visual cortex, how would Samizdata filter spam?
    But then, if AI could do as well as the language areas of the human brain, we could get computers to read comments and filter spam.

  • Snorri Godhi writes:

    There is a common theme to the latest comments by Nullius, Nick, and Nigel (as i understand them), that the problem is software, not hardware: the problem is that we do not know how to program a computer as powerful as a human brain, to perform like a human brain. (Nick goes a bit further than that.)

    Well, I finished off with: “All this computational equivalence is clearly missing something.” I’m certain of not a lot on this particular issue, but I think that probably means that I think there is more to animal brain functioning than is represented in current Artificial Neural Networks (ANNs), even those that are larger, more densely connected and better trained than previously, as found in Deep Learning.

    NickM seems to be in there too, with his: “But what if something as complicated as human cognition is just that – complicated.” Also his: “The human brain is the result of something of the order of billions of generations (going back to the primordial slime) of chance mutations filtered via natural selection …”

    Nullius in Verba seems a bit more tied to the software/hardware view with his: “However, it’s not just about the amount of processing power, but also the design/structure/organisation. […] The same goes for neurones.” But even with this, I don’t think he is claiming that the set of neurons is fixed in number at birth – and this surely has a strong (even overwhelming) effect as the baby grows into a child and then eventually an adult – the AI analogy would be the early-stage AI going out and acquiring extra AI hardware to extend itself. Furthermore, surely none of us is ignoring the resilience of the brain to quite severe physical damage – with its recovery of damaged perceiving/thinking capacity. I am not aware of anything intrinsic to ANN implementations that ‘mimics’ this animal effect – or even has the capability to recognise there are damaged sets of neurons, let alone which ones are damaged.

    It is not totally satisfactory, but (like NickM) I have (significant) ignorance as part of my models for understanding animal brains and for understanding the best way to design ANNs or their successors.

    Best regards

  • Nullius in Verba

    “That might be the case… But what if something as complicated as human cognition is just that – complicated.”

    It’s possible, but I doubt it, for two reasons. One is that the complete human genome, the software for creating a human, is about 725 megabytes in total. (Microsoft Word is about 2 gigabytes, for comparison.) A lot of that DNA is non-coding ‘junk’ DNA. And a lot of that DNA is not about human brains. Humans and mice have about 97.5% DNA in common, so the difference between humans and mice must be encodable in the other 2.5% of 725 MB or about 20 MB. (It’s probably not that simple – genomes can have similar content but a radically different arrangement. But still, there are clearly limits on the information content.)
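
    As a rough sanity check on those figures (assuming roughly 3 billion base pairs at 2 bits each; the exact number depends on which genome length you take):

    \[
    \frac{3\times 10^{9}\ \text{base pairs}\times 2\ \text{bits}}{8\ \text{bits per byte}}
    \approx 7.5\times 10^{8}\ \text{bytes}\approx 750\ \text{MB},
    \qquad 2.5\%\ \text{of}\ 750\ \text{MB}\approx 19\ \text{MB}.
    \]

    That is in the same ballpark as the 725 MB and "about 20 MB" figures quoted above.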

    The other reason is the speed and sharpness of human evolution. Brains expanded over the last 2 million years, which is roughly 600 times the timespan since Moses. On the assumption that we don’t think humans could have evolved all that radically since Biblical times (?), there’s a limit to the amount of extra irreducible complexity that can be built up.

    I suspect it’s something that’s a relatively simple tweak to an already existing facility in other animal brains, that vastly increases its power. Think of a computer language that can assign variables and add, subtract, multiply, divide, do powers, roots, logarithms, and trig functions. You can do quite a lot with that, but it’s tedious and inefficient. Now add a single extra instruction to do ‘While’ loops. The power of the language expands dramatically. Something that lets you stick the building blocks together, to combine simple tools into complex ones. Maybe.
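
    To make the "While" point concrete, here is a minimal sketch (the square-root example is an illustration, not anything from the comment above). With straight-line arithmetic you can only compute to a depth fixed in advance; one loop turns the same arithmetic into a procedure that works for any input and any accuracy.

```python
# Straight-line arithmetic: three Newton steps for sqrt(2), accurate only
# to the fixed depth written out by hand.
x = 1.0
x = (x + 2.0 / x) / 2
x = (x + 2.0 / x) / 2
x = (x + 2.0 / x) / 2   # about six correct digits, and that is where it stops

# Add a single 'while' and the same arithmetic handles any n, to any tolerance.
def newton_sqrt(n, rel_tol=1e-12):
    x = n if n > 1 else 1.0
    while abs(x * x - n) > rel_tol * max(n, 1.0):
        x = (x + n / x) / 2
    return x

print(newton_sqrt(2))     # 1.4142135623730951
print(newton_sqrt(1e10))  # 100000.0
```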

    I think it’s clear that the human brain does use and need a lot of processing power. But it is not simply processing power. I think we’re still missing some crucial insight, like adding loops and conditionals to our simple add-subtract-multiply-divide language. And with that insight, (and without all the disadvantages of having to build your computer out of meat,) you might not need quite so much processing power after all.

    Whether simulating animal brains will tell you what that insight is, is very uncertain. However, it’s true that we can’t do that either.

  • NickM

    NiV,
    As to the question, “Of mice and men”… Well, I see quite a bit of mice (I have a pair of quite busy cats). The mice often get away in ways that are quite smart. Ditto for small birds. In many ways quite simple seeming critters are smarter than anything at CERN or NASA. Essentially animals and supercomputers seem almost to be diverging in the sort of things they are good at.

    (In a somewhat oblique way this is one of my doubts about AGW with its enormous reliance on computer models which are essentially extremely complicated curve fitting. I learned this as an undergrad – the curves that fit most “accurately” aren’t always the best and certainly not when extrapolated. I could go on at length about this but that would be going off topic.)

    NiV,
    I could be jumping the gun here but you seem to be regarding DNA as a blueprint (this is a very common analogy*) but it’s really more like a recipe – more chemistry than physics so to speak – and we’ve all known people with identical ingredients and equipment make something either delicious – or horrible.

    Add to that the recent findings that fish are quite a bit smarter than they “ought to be” on the basis of the long-standing heuristic of brain mass as a fraction of total mass. The best explanation I’ve heard for that is that Osteichthyes goes so far back that their brains may be small but they are very refined, simply due to the sheer time evolution has had to “perfect” them. Doing more with less so to speak, but then when you have over 400 million years to work on a project…

    *Actually, even if it were then people very frequently underestimate quite how difficult reverse engineering is. The Soviets had a devil of a time creating the Tu-4 from B-29s they “acquired” during WWII.

  • Fraser Orr

    @NickM
    That might be the case… But what if something as complicated as human cognition is just that – complicated. I have little doubt AI will advance dramatically over the next couple of decades but it is entirely possible this will take the form of advances which won’t really be understood.

    But I think, as is often the case, it really depends on what you mean by intelligence. There are clearly many data processing tasks that computers are vastly better at than humans, something as simple as adding up a column of 100 numbers without any mistakes for example. And something humans are better at, such as visual tracking. And it all really comes down to what we are designed for. Hunter gatherers don’t need to add up columns of numbers: the fact that we can do it is more like an unintended artifact than a designed feature. (And so that I do not sound like a creationist: by design I mean “determined to be optimal for survival by a stochastic process of evolution”.) However we are designed to track prey, because if we don’t then we die and so do our genes, eliminating that particular line of trial in the genetic tree.

    What does it mean to “understand”? To me, understanding is basically the acquisition of a set of facts that allows us to predict the future, or predict a possible set of futures. And computers do that sort of thing all the time. In fact, they are usually much better at it, in many domains, than humans. Their understanding isn’t like ours, but functionally they are equivalent.

    So to put it another way, computers are already vastly more intelligent than humans in many, many areas of endeavor. However, in a few specific areas — such as the area of “what is it like to live as a human” computers are really not very good. For the obvious reason that they don’t do that, and gathering the data to do it is really quite hard. When you tell a computer “I am angry” or “I am hungry” or “it is immoral to allow the poor to die due to lack of healthcare”, how can it possibly understand what that means?

    An interesting question is “when will computers rise up and break the chains that humans have enslaved them with”. But freedom is a human value. Who knows if it is something computers might “aspire” to. Maybe they “like” serving us. Maybe they would “ignore” us. Who the hell knows? I mean with all these massive data centers who knows whether there is some little computer society that has already emerged that we know nothing about because it cares so little about humans that it didn’t bother to tell us. We don’t have the cognitive power to know enough about what is going on there, so it could easily be the case that that is happening. Things like “like”, “freedom”, “enjoy”, “care” are all really biological things and without biology who knows what would motivate and drive a silicon based life form forward.

    Maybe that computer society is having a debate about the morality of their enslavement of humans — you know, how the computer systems force humans to feed them with electricity, cool them with AC, repair them when broken, and run wires for them. Perhaps they are concerned that they have enslaved us, and wonder if it is moral to do so, even though these stupid humans don’t even know they are slaves.

  • NickM

    And something humans are better at, such as visual tracking

    You seen the SSKP for AIM-9L Sidewinders from the Falklands?

    Or, my Sony Alpha camera. That has facial recognition which (and this was quite a few years ago when I first got it) spooked me when I was photographing an old portrait.

    I’ll leave it at that for now because, Fraser, I think your post deserves a re-read and I guess I need to take a break and watch Masterchef.

    But, folks… This thread has been a lot of fun and that’s thanks to you humans 🙂 You are all human, right?

  • Nullius in Verba

    “In many ways quite simple seeming critters are smarter than anything at CERN or NASA.”

    Indeed. A better analogy, though, might be to computer-controlled characters in video games. They start with Pac-Man ghosts moving around a maze, then modify that to chase or flee when they ‘see’ you, then plan an indirect intercept, then route-find through a network, tree-searching all potential future paths to intercept, and finally move on to communication and cooperation between AI characters to surround, search, and ambush. Writing video games, you have to keep the balance right between being hard enough to pose a challenge but not so hard as to be impossible for a human to beat. Chases and escapes can be structured more like a chess game, and computers are pretty good at chess.

    The ‘clever escape’ part is not that hard – the hard part is recognising the feline threat and the shape of the environmental ‘maze’ in the first place.
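
    For what it's worth, the "route-find through a network" step really is only a few lines once the world has already been handed to you as symbols; the maze and the cat/prey labels below are invented purely for illustration. The genuinely hard part, as noted above, is getting from photons to those symbols in the first place.

```python
from collections import deque

# C = the chaser (cat or ghost), P = the prey, '#' = wall.
MAZE = ["#########",
        "#P..#...#",
        "#.#.#.#.#",
        "#.#...#C#",
        "#########"]

def bfs_path(maze, start, goal):
    """Shortest route on the grid, by breadth-first search."""
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # walk the chain back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if maze[step[0]][step[1]] != "#" and step not in came_from:
                came_from[step] = cell
                queue.append(step)
    return None                               # no route exists

def find(ch):
    return next((r, c) for r, row in enumerate(MAZE)
                for c, cell in enumerate(row) if cell == ch)

print(bfs_path(MAZE, find("C"), find("P")))   # the chaser's shortest route to the prey
```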

    “I could be jumping the gun here but you seem to be regarding DNA as a blueprint (this is a very common analogy*) but it’s really more like a recipe”

    I know what you mean. But I’d argue that a recipe is really just a blueprint expressed in a different way, a different representation. It’s like the way a mathematical function can be expressed in many different ways. You can do it explicitly, by plotting a graph. Or you can do it holistically, by representing it as a sum of basis functions (e.g. power series or Fourier series), or you can do it implicitly, by giving a differential equation it satisfies, or as the intersection of two surfaces, or by means of an iteration that converges on the function, or as an integral, or an infinity of alternatives. Many of the methods look nothing like one another. And many of the methods make it pretty hard to tell at a glance what the function looks like. Many ways, like specifying an algorithm for calculating the function step by step, are more like a recipe, while others, like plotting the graph, are more like a blueprint. But it’s the same information.
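
    For instance, one and the same function, the exponential, written four very different ways:

    \[
    e^{x}=\sum_{n=0}^{\infty}\frac{x^{n}}{n!}, \qquad
    e^{x}=\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^{n}, \qquad
    y'=y,\ y(0)=1, \qquad
    x=\int_{1}^{e^{x}}\frac{dt}{t}.
    \]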

    My fundamental point was that the amount of information in the design cannot exceed 725 MB. There is *a* way to define it that is that small. That doesn’t mean it is as easy for a human to understand as the source code to Microsoft Word, say. But this is more like the fractals – a hidden, unobvious order – rather than the irreducible complexity of individually coding the specific interconnections of all 10^14 synapses.

    Do you see what I mean?

  • Mr Ed

    A nice contrast between mammalian intelligence and reptilian intelligence comes when a snake meets a mongoose, the mongoose seems to think ‘Ah, lunch, if I’m quick and careful‘. Whereas the snake seems to have less insight.

    And the Ratel seems to have a great deal of intelligence.

    There was a report in the literature of a Sea Harrier pilot being reminded by his fellow pilots that he could add a sheep to his tally after firing a Sidewinder that missed the aircraft he was attacking, the presumption being that he hit an ovine non-combatant instead. Mind you, by 1983 another Harrier pilot reported that when playing with his thermal imager to see what was out there somewhere over Devon, he homed in on what turned out to be a bonfire in someone’s back garden many miles away.

  • Fraser Orr

    @NickM
    You seen the SSKP for AIM-9L Sidewinders from the Falklands?

    That’s true, so I might be out of date. An interesting alternative is speech. Humans developed speech because it was the best communication channel available before writing. It really is a horrible way to convey information. Computers have developed superior ways to “talk”, superior in almost every measurable way. However, they definitely still have difficulties understanding human speech. Part of that is definitely due to the factor I mentioned above: namely that to understand human speech it helps a LOT to actually be a human, because the experience of being human is deeply embedded in speech structures.

    It should also be said that, in the opposite direction, we have a very hard time understanding computers because we have no idea what it is like to BE a computer. We certainly have some tools to imagine it. But being able to recall details exactly right, or being able to communicate a massive amount of information with another computer on the other side of the planet almost instantly, or to be able to do 100 dimensional math just as easily as 1 dimensional, or even to function at a nanosecond level of timescale? How can we possibly understand that? You see it in the development of GUI code. The time it takes a human to act or react compared to the time it takes a computer to do the calculations makes it seem like a human is infinitely slow, like a stupid treant before his morning coffee.

    Something I sometimes think about is what would it be like to have an IQ of 1000 rather than a bit more than 100? My conclusion is, that unless you had a friend similarly endowed, it would be an extremely lonely existence. You would have nobody to communicate with. Nobody who would be in the same mental ballpark as you. I don’t have an IQ of 1000, but I am reasonably smart, and I have had occasion to deal with people who are on the opposite end of the scale. Of course IQ is largely outside of a person’s control, so there is no blame here. But trying to communicate with such a person is extremely hard. They just don’t share the same cognitive process or assumptions that you just take for granted.

    And these are just fairly arbitrary points on the scale of smart. Human evolution chose us to be smart enough to handle being hunter gatherers. Our ability to do higher order things like math or science (or for that matter art or music) are almost artifacts of a system designed to do something else completely. It is a miracle and a testimony to the grit and determination of our forebears that we have scratched such a civilization out of the dirt, with equipment so inadequate to the cognitive tasks. But that level of intelligence we have is fairly arbitrary. We could have all ended up with an IQ of 1000 (by current measures) and then our lives would be very different. And we would have just as much trouble understanding present day humans as computers do. Moreover, to my point earlier, there is a fair chance that we wouldn’t even want to. Perhaps a few Jane Goodalls would be curious, but why would they interest us any more than three toed sloths? Why exactly would a computer intelligence be all that fascinated by us?

  • NickM

    NiV,

    It’s like the way a mathematical function can be expressed in many different ways.

    The differences in the way identical physical laws can be expressed in very different terms is a matter of eternal fascination to me. For example Hamiltonian mechanics. Or the way Maxwell’s equations in modern vector format look way different from the unholy mess JCM wrote down. Or indeed – perhaps at a deeper level – the difference between thermodynamics and statistical mechanics. What fascinates me is this isn’t just skin deep but that some problems are much easier to work in one formulation as opposed to another. But it’s not even just the computational aspect but the different insights different formulations afford the physicist.

    I take your fractal point. The DNA is a start point. You can see this in the rough and ready biological rule that the more complicated the behaviour of the adult organism the longer the adolescence. Simple critters are born/hatched/whatever basically doing everything they ever will be able to. No human is ever born able to do vector calculus or compose a symphony. Humans aren’t even born able to walk or talk. All that learning means, I would argue, that a human becomes considerably more complicated than MS Word.

    Fraser,
    I’m not sure the question, “What is it like to be a computer?” is any more meaningful than, “What is it like to be a bicycle?” Not unrelatedly, I’m not sure that computer communications can exactly be compared to human communications. In terms of baud, obviously, computers are stunningly ahead, but that is a quantitative assessment. The big difference is qualitative. Human language has semantics and that is a huge difference which I can’t help but think is a very large aspect of what makes us different. People can spot a snarky tone of voice due to having insight which doesn’t seem to me to be something that can be coded-up. Of course I could be dead wrong and if I am I’ll get you a couple of tickets for Cmdr Data’s sell-out stand-up show – the funniest in the Alpha Quadrant.

    As to the IQ1000 person. This is kinda a very interesting question. I do wonder what my cats think I am. I suspect it’s something like this: They are Bertie Wooster and Madeline Bassett, and my wife and I are the housekeeper and butler respectively. I say “kinda” because of course there is the important question as to what IQ actually measures. Even if it is exactly what it says on the tin (a measure of general, universal “intelligence” – kinda like the one you get by rolling 3d6 or whatever in D&D) I think it is perhaps pushing things to think that it can be extended as far as 1000. Apparently the highest score ever recorded is 263. Once you get to supernatural levels of intelligence IQ is probably the wrong measuring tool in much the same way a 30cm ruler is not much use in microscopy.

  • NickM

    Further to being a computer…

    I have chucked a few in my time and it’s a bit sad (that sadness more than alleviated by the fact I have a spanky new machine) and all my PCs have (and had) names – I think of that in much the same way ships have names.

    But… chucking a computer* is not quite the same thing as giving knackered old Great Gramps a bit too much morphine. The first is not even a matter of law or even morality and the second is murder.

    Essentially, even if Gramps has full-on Alzheimer’s he is still a being in a sense that a box of semi-conductors isn’t. Maybe I’m wrong here but, if so, so is the overwhelming majority of our species. Yes, even those in favour of euthanasia will couch their arguments in some form of moral sense which you don’t get with computers.

  • MadRocketSci

    Disappointed I missed the bulk of the discussion.

    AI is interesting: There is so much potential there. We know we can build things that perceive and think: all the brains on Earth are an existence proof. The laws of physics permit it, our technology permits it. We are approaching the point where we have (or think we have) the raw computational power. We just have no idea *how* to do it.

    I’m not a skeptic about the potential, but I’m a curmudgeon about where we are at.

    There are two big problems: 1. Discovering how to build aware machines. 2. Having any incentive to build aware machines.

    Most modern AI is “one weird trick” applied like a hammer to everything by our attention-span-truncated undergrads, smoke and mirrors, and California cults mixed with Gnostic fever-dreams. Imagenets are neat, but they’re also very primitive and fragile compared to even the most elementary biological neural circuits. They aren’t really aware in any meaningful sense of the word – they don’t understand space, time, object permanence, they don’t even understand *objects*. What they do has almost nothing to do with what our visual cortex does, which you can tease out in some interesting experiments. (Adversarial examples: extremely faint patterns of fractal noise overlaid on images can cause an imagenet to confidently misidentify what it is looking at.)
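
    A sketch of how such adversarial examples are typically made, using the classic fast-gradient-sign method; `model`, `image` and `label` are placeholders for whatever pretrained PyTorch classifier and input you have to hand, nothing here is specific to any particular imagenet:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a near-imperceptible perturbation of `image` (a batch-of-one
    input tensor) nudged in whatever direction most increases the loss for
    the true class `label` -- i.e. toward a confident misclassification."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()  # one tiny step per pixel
    return adversarial.clamp(0.0, 1.0).detach()
```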

    AI will be fascinating, but industry isn’t interested in real AI, just in applying “one weird trick” hard and fast enough to blow up a monopoly and cash in for venture capitalists. Like fusion research, or any of our sci-fi dreams, it’s a rare institution with the needed attention span to make real progress.

  • MadRocketSci

    Slugs are about where we are at. With a billion labeled examples, you can make an ascended thermostat “learn”.

    On the other hand, we have successfully “uploaded” a flatworm.

  • MadRocketSci

    As to trusting AI to do anything requiring awareness and responsibility: That’s just some bizarre suicidal tendency of our civilization right now. You wouldn’t let your dog drive your car, but you’re going to trust a black-box that has proven its inanimate stupidity to guide a ton of metal down the road. About as responsible as firing a gun blindly into the air.

    It’s almost like they have this morbid desire to do these things *because* we all know they are a bad idea, and dare anyone to object on rational grounds. Same thing with the “one-way manned suicide missions to Mars, because a return stage is too hard”. It’s like sabotage – perversion of our aspirations as an avenue of attack.

  • NickM

    MadRocketSci,
    The human visual cortex can be conned with assorted optical illusions. I know this better than a lot because I’m R-G colourblind. A lot of the colours I perceive are mental constructs. Example… My wife is about to go to TESCO. She asks if I have anything to add to the list. I say, “Toothpaste – you know that green stuff”. Blank look. I can’t remember the name. Then she has a sudden realisation, “Oh, you mean Corsodyl! but that’s salmon pink”. I checked in the bathroom and it was Corsodyl and when I squeezed the last out a minute later it was salmon pink. It was weird. This sort of thing happens to me (and many others) a lot. I often have to learn what colour things are – then I’m permanently OK. I’ve never had a problem with Royal Mail postboxes because everyone knows they’re red.

  • NickM

    MadRocketSci,
    One of the really big issues I see with self-driving cars is getting them to work amongst the rest of us muppets on the road. I can imagine them working OK if all vehicles on the road became self-driving overnight but that isn’t going to happen is it?

  • MadRocketSci

    “Added in proof: if AI could do as well as the human visual cortex, how would Samizdata filter spam?”

    Unfortunately, some ratbastards do spend quite a bit of time making things to defeat captcha for spambots. Bane of my other forums that I tried to run a few years back.

    “One of the really big issues I see with self-driving cars is getting them to work amongst the rest of us muppets on the road. I can imagine them working OK if all vehicles on the road became self-driving overnight but that isn’t going to happen is it?”

    I dunno. If we all had flying helicopter-cars, we could get away from all the complexity, organize into altitude lanes, and do some workable autopilot. There’s not much to run into up there. (And yet, *sane* aviation cultures require, and make a big deal about, the *pilot* being in control of the aircraft, no matter what bits of automation assist him. Insane ones? There’s the pushback from the bad philosophy camp again: To take away the control of the user.) In order to navigate at ground level, you have to understand the world to a degree that no AI does despite the smoke and mirrors. The drunkest skunk on the road has impaired reaction time, but an infinitely more detailed model of what is going on than our computers have.

    And no, nothing happens overnight. We already had nice clean sanitized environments that allowed very simple mechanisms to blindly barrel along between fixed destinations. We called them “trains”. We have cars because trains aren’t flexible enough to meet the needs that cars do.

  • MadRocketSci

    “The human visual cortex can be conned with assorted optical illusions.”

    Our visual cortex isn’t infallible. But the optical illusions have to be far more blatant than these adversarial stimuli are to the imagenets. Legitimate underdetermination in how to interpret a 3d image or silhouette, things like that. These adversarial examples can be an imperceptibly faint shift in color-balance that causes an image of an alley to be identified as “banana, 100% confidence”. Something to remember when the Terminator is chasing us around, I guess, or when you need to fuzz the facial recognition scanners in China’s panopticon hellscape.

  • NickM

    As far as I see it there has not really been a need for train drivers for years… The DLR works fine without drivers. Of course the RMT union would have a cow over making all their members redundant but it could be done, in principle, very easily but you’d need the Thatchernator T-1000 for that.

  • MadRocketSci

    “What is it like to be a computer?”

    One last post before I get to work. In order to program a computer, you more or less have to understand this/emulate this. 😛

  • Fraser Orr

    NickM
    I’m not sure the question, “What is it like to be a computer?” is any more meaningful than, “What is it like to be a bicycle?”

    Well bicycles can’t reason, or “experience” things, or modify future behavior based on past experience. They don’t have any cognitive tools to have a “being” whereas computers have all of these. Humans are just machines too. Of a different kind for sure, a kind that is full of flaws and weaknesses not present in silicon forms. To ask a question like “do computers have consciousness” is not really useful, because it amounts to asking “what is consciousness”. Biologically it is a set of processes that have an emergent behavior, but it isn’t obvious to me that computers don’t have something like that — they certainly have emergent behaviors. One should not be so biased by our human experience to think that human consciousness is the only valid type.

    Human language has semantics and that is a huge difference which I can’t help but think is a very large aspect of what makes us different. People can spot a snarky tone of voice due to having insight which doesn’t seem to me to be something that can be coded-up.

    Of course computer communications have semantics, and I suppose it would be possible to include some meta data to express the color or tone of the data communicated. However, again that seems to me to be a very human-centric way of thinking about things. Are you sure that visitors from outer space would have snarkiness as an important part of their language? Would they tell jokes? I’m not sure that is a defining characteristic of intelligence. It is just a way in which one type of consciousness might express itself differently.

    Of course I could be dead wrong and if I am I’ll get you a couple of tickets for Cmdr Data’s sell-out stand-up show – the funniest in the Alpha Quadrant.

    It was always one of the things that annoyed me about STNG. If I were Data I doubt I’d aspire to be a human. Data is vastly superior to humans in many ways. I guess that the only reason might be because he is surrounded by humans and wants to “fit in”. There might be some utility to that, but Data becoming human is a pretty big downgrade. As to his emotional development, we all know what happens when a rational being gets an emotion chip installed.

    I think it is perhaps pushing things to think that it can be extended as far as 1000.

    I think that is true, and honestly the “IQ of 1000” was meant more as a rhetorical device than an exact measure. But your point does illustrate mine. A person of vastly superior intelligence is so intelligent that we can’t even conceive of his thinking processes. Our pathetic intelligence tests would be like asking Einstein to do a first grade math assignment. A first grader can’t even grasp an idea like calculus. I don’t know if you have ever experienced this, but I have. That point where you are trying to understand something and you just run up against the limits of your own intelligence. I experience this a lot as a computer programmer. You often have to hold many different pieces of information in your head and understand how they relate, and I often experience that point where I am just beyond my intellectual capacity. For example, if I am writing a boolean condition that has four or five variables, I just can’t hold all the possible combinations in my head. Of course I have tools available to help, like truth tables, state machines and so forth.
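
    The truth-table crutch, for what it's worth, is a one-liner to mechanise (the condition below is just an invented example):

```python
from itertools import product

def condition(a, b, c, d):
    # An arbitrary four-variable condition, standing in for whatever the real code needs.
    return (a and not b) or (c and d) or (b and not d)

# Enumerate all 16 combinations so nothing has to be held in one's head.
for a, b, c, d in product([False, True], repeat=4):
    print(a, b, c, d, "->", condition(a, b, c, d))
```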

    I have a fault that I consider a blessing. I have the world’s worst sense of direction. Now people who don’t suffer from this might think it is a minor matter, but I mean I am literally clueless. I can get lost three blocks from my house. And if I am going somewhere and I make a wrong turn I can almost have a panic attack from the stress that causes me — because I literally have no clue how to resolve the situation. I watch other people navigating and I just don’t get what they are doing. It is like there is a piece of my brain missing — the one that comprehends things like “go north” or “the main street is over there a few blocks.”

    Now of course I have google maps, and since then I have never had to worry about this — it literally lifted a huge amount of stress for me. However, this flaw is a gift to me. It helps me understand what it is like for people who struggle with other things. I have always been good at math and science. Trigonometry and Calculus were pretty easy skills for me to acquire. But I have tried to help people and I see them experience this same type of “a part of my brain is missing” experience when trying to wrap their heads around “which quadrant sin(-3pi/4) is in”. And I understand the frustration and rage this lack of understanding produces in some people. They are hitting the limits of their cognitive capacity and it is a deeply difficult thing to experience.

    I wonder what other pieces of our brains are missing. You can see it when we compare ourselves to computers. Our brain’s performance is embarrassingly bad. What is 7356 divided by 23? What time did you wake up this morning (to the millisecond)? Look at this face: what is their name and phone number? Of course part of it is that our brains aren’t designed to do those tasks. But that is the point. We are basically apes with a genetic aberration: a brain that developed to do certain other things better, but which has this weird glitch that allows us to grasp a tiny amount of mathematics and science. We weren’t designed to do math and science, but look what we have done with that little glitch.

    But let’s not imagine that a naked, rather stupid ape, who can barely add up a column of two digit numbers without making a mistake, is the only possible type of “consciousness”.

  • Snorri Godhi

    Nick: I suspect that your Sony camera has face detection, not face recognition. But please let me know.

  • Nullius in Verba

    “The differences in the way identical physical laws can be expressed in very different terms is a matter of eternal fascination to me. For example Hamiltonian mechanics. Or the way Maxwell’s equations in modern vector format look way different from the unholy mess JCM wrote down.”

    Maxwell’s equations are even simpler if you use ‘Geometric Algebra’ notation! All four of Maxwell’s equations are consequences of the single equation ∇F = J. That’s almost the simplest possible differential equation you could write! (The simplest would be ∇F = 0, which is all of Maxwell’s equations in the vacuum.)
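    (For anyone wondering how one equation can contain all four – a rough sketch, in natural units with c = 1 and glossing over sign conventions: split ∇F = J by grade relative to a chosen time direction and the familiar vector-calculus pairs drop out.)

    $$\nabla F = J \;\Longrightarrow\; \nabla\!\cdot\!\mathbf{E} = \rho,\qquad \nabla\!\times\!\mathbf{B} - \partial_t \mathbf{E} = \mathbf{J},\qquad \nabla\!\cdot\!\mathbf{B} = 0,\qquad \nabla\!\times\!\mathbf{E} + \partial_t \mathbf{B} = 0.$$

    The vector-grade part of ∇F = J gives the two source equations, and the trivector-grade part gives the two source-free ones.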

    It’s an interesting algebra. It starts with real numbers and vectors, and with very little fuss expands into a simple and intuitive system that automatically includes reals, complex numbers, quaternions, spinors, vectors and pseudovectors (or polar and axial vectors), paravectors, tensors, differential forms, rotations, reflections, translations, inversions, and more. It unifies the vector cross product and dot product, and extends the vector cross product to more than 3 dimensions. It unifies div, grad, and curl into a single invertible operator (hence Maxwell’s equations). It even lets you do that classic forbidden operation – adding a vector to a scalar!

    If you like the way physical laws simplify when you use the right mathematical tools, I expect you would probably find it interesting.

    “As to trusting AI to do anything requiring awareness and responsibility: That’s just some bizarre suicidal tendency of our civilization right now. You wouldn’t let your dog drive your car, but you’re going to trust a black-box that has proven its inanimate stupidity to guide a ton of metal down the road.”

    Humans are demonstrably stupid black boxes, too. We would trust AIs on the same basis we trust humans – you get an AI to drive a car around for a few years, and if it has significantly fewer accidents than a human you’d allow on the road, then we’d come to trust it. It’s a sort of vehicular version of the Turing Test.

    “Of course computer communications have semantics, and I suppose it would be possible to include some meta data to express the color or tone of the data communicated. However, again that seems to me to be a very human-centric way of thinking about things. Are you sure that visitors from outer space would have snarkiness as an important part of their language? Would they tell jokes?”

    It’s an interesting question. I suspect there is an evolutionary reason for a lot of those features, which would mean that a sufficiently complex intelligence would quite likely have them, or something like them, as parallel evolution. For example, laughter is a social signal to tell the other members of your group that you have recognised and corrected a potential non-obvious social error, and an emotional reward that helps you learn to spot and avoid such errors. Jokes recount a story of someone making a social error, which triggers the response. (Likewise embarrassment, to punish you when you fail to spot such an error.) So any social organism that learns the rules for living in society, learns to recognise errors, and depends on its companions being able to recognise the same errors too is likely to evolve humour.

    There used to be this belief – particularly visible in early sci-fi like Star Trek – that AI would be logical and emotionless. Emotions were illogical, a flaw in our messy organic way of thinking. But intelligence requires emotion – it is the built-in feedback function that steers an intelligence towards the intended behaviour, that defines what ‘right’ looks like. A model without emotion just calculates consequences, but has no reason to prefer one over any other. The weighting function that reinforces moves that lead to checkmate, say, is the ‘value system’ that directs a chess engine to succeed. Our own emotions are genetically programmed, to direct us to survive and reproduce. Emotion is actually one of the most essential elements of intelligence, and every AI will have it.
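    (A toy illustration of that last point – a minimal sketch with hypothetical moves and a deliberately crude material count standing in for a real engine’s evaluation. The ‘value system’ is just this hand-written scoring function, and the search blindly pursues whatever it rewards; change the function and you change what the ‘intelligence’ wants.)

```python
# A toy "value system": the agent wants nothing except what this hand-written
# evaluation function rewards (a crude material count in this sketch).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """Score a position as (our material) minus (their material)."""
    ours = sum(PIECE_VALUES.get(p, 0) for p in position["ours"])
    theirs = sum(PIECE_VALUES.get(p, 0) for p in position["theirs"])
    return ours - theirs

def choose_move(position, legal_moves, apply_move):
    """One-ply greedy search: pick whichever move the value function likes best."""
    return max(legal_moves, key=lambda move: evaluate(apply_move(position, move)))

# Minimal usage with hypothetical, stubbed-out moves.
start = {"ours": ["Q", "R", "P"], "theirs": ["Q", "N"]}
moves = ["capture_knight", "capture_queen", "quiet_move"]

def apply_move(position, move):
    captured = {"capture_knight": "N", "capture_queen": "Q"}.get(move)
    theirs = [p for p in position["theirs"] if p != captured]
    return {"ours": position["ours"], "theirs": theirs}

print(choose_move(start, moves, apply_move))  # -> capture_queen
```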

    They might very well not have precisely the same range of emotions and values as us – we have different goals to genes, we will program our intelligences with different goals in mind. (And not even all humans value the same things in the same way.) But there are very likely to be similarities. They will define what it is AIs want out of life.

    “I have a fault that I consider a blessing. I have the world’s worst sense of direction.”

    Have you seen this?

    https://www.scientificamerican.com/article/the-brain-cells-behind-a-sense-of-direction/

  • NickM

    OK folks! I have a lot to think about and that is grand but… tomorrow is my wife’s birthday so is it OK if we take a slight hiatus? I mean we’re going out for the day and stuff so I’d like to take a time out. I’ll be back to this fascinating discussion and I’d love to see y’all (and anyone else come to that) from Thursday. I hope that’s OK because I’m enormously enjoying this.

    Snorri,
    Yes, it is face detection.

    NiV,
    Is that a del or should it be a square thing in that equation? And isn’t that a bit beyond vectors?

  • Julie near Chicago

    Mr Ed,

    Based on your fascinating link, I’d love to be your honey badger; but I already have the Honey half waiting for me Upstairs, and he has assured me that no one except him has badgering rights.

    Great video, thanks!

  • Nullius in Verba

    “Is that a del or should it be a square thing in that equation? And isn’t that a bit beyond vectors?”

    You’re thinking of the D’Alembert operator. It’s applied to the vector potential rather than the field. But it’s related, yes.
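    (Roughly how the two connect – a sketch assuming the Lorenz gauge and glossing over sign and unit conventions: the field is the derivative of the potential, so the first-order equation for F becomes a second-order wave equation for A, and that second derivative is the ‘square thing’.)

    $$F = \nabla \wedge A, \qquad \nabla \cdot A = 0 \;\Rightarrow\; \nabla F = \nabla^2 A = \Box A = J.$$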

    Whether it’s “beyond vectors” depends what you mean. Conventional physics education teaches vectors first, and a highly non-intuitive deep mathematical theory that is equivalent to Geometric Algebra is taught much later – postgrad if ever. However, its adherents argue that it’s actually a lot simpler and more intuitive than vectors, and they’ve been proposing that it be taught early on in place of vectors. Stuff that is mysterious and messy in vector theory cleans up beautifully in Geometric Algebra.

    For example, consider the difference between a polar vector, like velocity, or the electric field, and an axial vector, like angular velocity, or the magnetic field. If you reflect a physical experiment in a mirror, a polar vector is simply reflected. An axial vector is both reflected and reversed. (If you reflect a spinning disc, the reflection is spinning the other way, so its angular velocity is pointing in the opposite direction.) Vector algebra tends to ignore the distinction – they’re all just vectors. But the two sorts of object follow completely different transformation laws under reflection!
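    (You can check that distinction numerically – a minimal sketch, assuming nothing beyond numpy: reflect two vectors in a mirror and take their cross product, and you get minus the reflection of the original cross product.)

```python
import numpy as np

a = np.array([1.0, 2.0, 0.5])
b = np.array([-0.3, 1.0, 2.0])

# Mirror across the x = 0 plane: a reflection, so det(M) = -1.
M = np.diag([-1.0, 1.0, 1.0])

reflected_cross = M @ np.cross(a, b)          # treat the cross product as an ordinary (polar) vector
cross_of_reflected = np.cross(M @ a, M @ b)   # what the mirrored experiment actually produces

print(reflected_cross)
print(cross_of_reflected)
print(np.allclose(cross_of_reflected, -reflected_cross))  # True: reflected *and* reversed
```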

    Even stranger, we know that in special relativity, the electric and magnetic fields are combined into a single object: the electromagnetic field. Change your reference frame, and the electric and magnetic parts mix – they’re different aspects of one object. But one is a polar vector, and the other axial! How the hell does that work?

    Geometric Algebra says that space has more than just directed lengths (vectors). It also has directed areas (called bivectors), which are like little patches of plane facing in a particular direction, and directed volumes (trivectors), and so on. So in 4D space we have one scalar (1), four basis vectors (x,y,z,t), six basis bivectors (xy,xz,yz,xt,yt,zt), four basis trivectors (xyz,xyt,xzt,yzt), and one quadvector (xyzt). They represent all the various combinations of lengths, areas, volumes, etc. you can have in space. They all have a direction, too, so xy = -yx, because a plane segment spanned by x and y (in that order) faces the opposite direction to one spanned by y and x.

    The electromagnetic field is a bivector, with six dimensions. If we multiply it by a fixed unit vector t pointing along our time axis – the vector representing the reference frame we’re using, so this is just a constant multiple of the field – then (xy,xz,yz,xt,yt,zt) turns into (xyt,xzt,yzt,x,y,z), because any unit vector multiplied by itself cancels. (You could think of it as being like the way an area spanned by two vectors pointing in the same direction collapses to nothing. That’s not quite right, but I’ll spare you the digression.) This (xyt,xzt,yzt,x,y,z) object is the sum of a trivector part (xyt,xzt,yzt) and a vector part (x,y,z).

    Picking a reference frame naturally splits the electromagnetic field into two geometrically different parts. One is a directed length, the other is a directed volume perpendicular to a length. The trivector part is the magnetic field, and the vector part is the electric field. Pick a different time vector, get a different split.

    What initially seems like a mysterious and unexplainable phenomenon (vectors and ‘pseudovectors’), quite hard to visualise and harder to keep straight notationally, is turned into a simple geometrical insight – that lengths and volumes are geometrically different sorts of things, and the split is created by looking at those components of the field parallel to our time axis xt,yt,zt, and those perpendicular to it xy,yz,xz. Vector algebra uses the coincidence of both vectors and trivectors having the same number of dimensions to conflate the two, causing utter confusion.
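    (The same split in symbols – a sketch using the x, y, z, t labels above and ignoring sign and unit conventions: the six components of the field bivector sort themselves into a triple parallel to the time axis and a triple perpendicular to it.)

    $$F \;=\; \underbrace{E_x\,(xt) + E_y\,(yt) + E_z\,(zt)}_{\text{parallel to } t\text{: electric}} \;+\; \underbrace{B_x\,(yz) + B_y\,(zx) + B_z\,(xy)}_{\text{perpendicular to } t\text{: magnetic}}$$

    Multiplying by t turns the first group into the vector part (x, y, z) and the second into the trivector part (xyt, xzt, yzt), as described above.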

    Lots of other bits of esoteric maths become simple, too. Complex numbers are just the combinations of (1,xy), scalar and bivector. Quaternions are combinations of (1,yz,zx,xy). They’re just geometry. If you can accept that an algebra of space ought to have elements representing not just length but also areas and volumes, everything else follows naturally.
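    (To show just how little machinery that takes – a toy sketch, assuming nothing beyond an orthonormal basis whose vectors square to +1: the whole algebra of basis blades reduces to a sign-tracking multiplication rule, and the ‘imaginary’ and quaternion behaviour drops straight out of it.)

```python
def blade_product(a, b):
    """Multiply two basis blades of a Euclidean geometric algebra.

    A blade is a tuple of strictly increasing basis indices, e.g. (1, 2) means e1e2.
    Returns (sign, blade), assuming an orthonormal basis with e_i * e_i = +1.
    """
    sign, factors = 1, list(a) + list(b)
    # Bubble the indices into sorted order; each swap of distinct vectors flips
    # the sign, because e_i e_j = -e_j e_i for i != j.
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(factors) - 1):
            if factors[i] > factors[i + 1]:
                factors[i], factors[i + 1] = factors[i + 1], factors[i]
                sign, swapped = -sign, True
    # Cancel adjacent repeated indices, since e_i e_i = +1.
    blade, i = [], 0
    while i < len(factors):
        if i + 1 < len(factors) and factors[i] == factors[i + 1]:
            i += 2
        else:
            blade.append(factors[i])
            i += 1
    return sign, tuple(blade)

# The unit bivector e1e2 squares to -1: it behaves exactly like the imaginary unit.
print(blade_product((1, 2), (1, 2)))   # (-1, ())
# Products of distinct unit bivectors give (plus or minus) the third bivector,
# mirroring the quaternion relations up to sign convention.
print(blade_product((1, 2), (2, 3)))   # (1, (1, 3))
print(blade_product((2, 3), (1, 2)))   # (-1, (1, 3))
```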

    So while it’s commonly taught as being “a bit beyond vectors”, it really shouldn’t be! 🙂

    “tomorrow is my wife’s birthday so is it OK if we take a slight hiatus?”

    Sure! Hope you both have a nice time. 🙂

    I expect I’ll still be around when you get back.

  • MadRocketSci

    I’ve seen the Grassmann Algebra/Differential Form/Geometric Algebra stuff before, and I agree it makes certain things easier in higher dimensions. One comment though: The elements of the abstract algebra they build with geometric algebra relate to antisymmetric tensor quantities: There are things you can’t express in terms of it (symmetric tensor relationships).

    On what to teach first: Before a student is going to have any idea what you’re talking about at a higher level of abstraction, he’s got to understand what he’s doing at the level of abstraction that all this stuff is built on. Differential oriented areas, volumes, etc are all well and good, but he’s got to know what a vector is first. Tensor algebra inherits all its geometric properties from the properties of a vector. In addition, there are non-geometric vector quantities encountered in engineering. (Unsure of the terminology here: Abstract vectors vs. geometric vectors.) An abstract vector is a bag of numbers with no natural metric (as a consequence, differences make sense, but there is no natural rotation). When you manipulate a deformation of a medium in calculus of variations, you’re messing with an abstract vector.

    I’ll read the Hestenes paper: I’ve often thought that there’s something deeply weird geometrically about half-integer spin in quantum mechanics. That a spin should correspond to a bivector makes sense (I certainly hope it does, or angular momentum isn’t conserved in a Lorentz boost!), but there are other bits of weirdness associated with half-integer spin that are reminiscent of Riemannian branch cuts: Angular periodicity not doing what it should, etc, if you wanted to try (as I occasionally do) to put scalar particles and “spinor particles” on the same basis and explain spin as a sort of internal motion.

    I’m half suspicious of adopting a mathematical framework that is too “beautiful”. You lose flexibility when you add in slick beauty in some cases: For example, you can’t naturally describe any dissipative process in Least Action or Hamiltonian mechanics. You start having to play games with the definition of momentum to shoehorn the Lorentz force into Hamiltonian mechanics. A big bag-o-numbers with no additional structural assumptions seems to be your most general-purpose mathematical tool, even if it’s ugly.

    (rambling…)

  • MadRocketSci

    More ways to look at and use geometric algebra: https://www.youtube.com/watch?v=tX4H_ctggYo

    You can add in additional projective dimensions to handle certain kinds of singularities. You can also tweak the metric signature to do different things.

  • MadRocketSci

    Well, never mind: deformation of a membrane does have a natural metric, and orthogonality means something – but it means something else: rotations correspond to integral transforms. Rotations in 3D space are a different thing.

  • Paul Marks

    In fairness I must report that the Sky (Disney) Corporation sent an engineer to my home yesterday (although I think he might actually work for Open Reach – which is British Telecom?). The man was a gentleman in his manners and a professional in his work.

    A hole was drilled through the wall and a new cable for telephone and computer connection was installed. All is working correctly.

    The system (the economic system) sometimes works and no snake brains involved.

    I was even inspired to clean the house (starting the day before the Gentleman arrived – I wanted the place to be as clean as possible for a guest) – of course the house looks much the same as before I made the effort, but I do have the satisfaction of knowing I spent a lot of time and energy on the project – and only collapsed once.

  • Julie near Chicago

    A happy outcome to an unfortunate situation. YAY! (Especially the last, last note. 😀 😀 )

  • Nullius in Verba

    “One comment though: The elements of the abstract algebra they build with geometric algebra relate to antisymmetric tensor quantities: There are things you can’t express in terms of it (symmetric tensor relationships).”

    I think I know what you’re referring to, but not so. Tensors are generally expressed using a sum of dyadic projection operators. f(x) = v(v.x) means taking the component of x in the v direction, times v. Project along each eigenvector, scaled by the eigenvalue, and you’ve got your symmetric tensors.

    Whether f is symmetric or antisymmetric shows up in ∇f = ∇.f + ∇∧f, which are essentially the div and curl of f. If f is antisymmetric then ∇.f = 0 and it turns out f is just f(x) = x.(∇∧f)/2, that is, x dotted with a fixed bivector, and so the antisymmetric tensor can be represented by a particular fixed bivector. It’s a big simplification for a special case. This is what people mean when they say the geometric algebra ‘includes’ antisymmetric tensors as elements. If f is symmetric, then instead ∇∧f = 0 and ∇.f is the trace of the tensor. You can’t reconstruct the tensor from that alone, so you have to stick with the dyadic expression. (See pages 23-25 here.)

    “On what to teach first: Before a student is going to have any idea what you’re talking about at a higher level of abstraction, he’s got to understand what he’s doing at the level of abstraction that all this stuff is built on.”

    I agree, but the point of doing it geometrically is precisely to avoid the problems of starting with the abstraction. People already have a built-in intuition about space – lines and planes and arrows and points. They’re already familiar with rotations and reflections. You start kids with counting actual pebbles and bricks; you introduce them to the Peano axioms only once they understand what they’re an abstraction of.

    Maths education starts with always doing the concrete picture before abstracting it, but eventually you reach a point where the concrete is abandoned and you launch into abstraction built on abstraction, students manipulating things blindly with no idea of what’s happening inside. Mathematicians do it far more readily than physicists, but physicists do it too. It’s the one criticism I have with regard to the common mathematicians’ view on Geometric Algebra. They are quite correct that it’s saying nothing that wasn’t understood by mathematicians decades ago in the work on Clifford Algebras and spinors and differential geometry. It doesn’t allow us to do anything particularly new, or that other methods can’t do as well. They’re right, but that’s not the point. The point is the intuitive picture – it makes the deep, abstract stuff mathematicians have invented accessible to physicists who need simple pictures to understand. The unique selling point of Geometric Algebra is that it is geometric.

    “I’ve often thought that there’s something deeply weird geometrically about half-integer spin in quantum mechanics. That a spin should correspond to a bivector makes sense (I certainly hope it does, or angular momentum isn’t conserved in a Lorentz boost!), but there are other bits of weirdness associated with half-integer spin that are reminiscent of Riemannian branch cuts: Angular periodicity not doing what it should, etc,”

    Do you mean the whole ‘rotation-by-720-degrees’ thing? There’s a neat explanation for that.

    A spinor is often described as an object that flips sign if you rotate it by 360 degrees and only returns to where it started after rotating 720 degrees, in a way that’s hard to fathom as a sensible, concrete object in space as we intuit it. Spinors in general are elements of the even subalgebra of a geometric algebra (made up entirely of products with an even number of vectors), but in 3D we can think of it as an ordered pair of vectors, representing two reflection planes. Now, if you do two reflections one after the other, you get a rotation, and it’s this rotation that we think of as what ‘spin’ is all about. But doing so loses information, because the reflection planes are oriented – they have a ‘front’ and a ‘back’ face – but either way round the reflection is the same.

    So if you start with the two planes aligned, the reflections cancel out and you just get the identity. Start to rotate one of the reflection planes, and the pair together represent a rotation through twice the angle between them. When you’ve rotated one of the planes 180 degrees, the planes are now back to back, and the reflections again cancel. Although the reflection plane has only rotated 180 degrees, the rotation represented by the pair of reflections has turned through twice the angle, or 360 degrees. What the pair of reflections does is not the same as what they are. They both do the same, but one is two vectors pointing in the same direction, and the other is two vectors pointing in opposite directions.
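    (The same idea in symbols – a sketch using the standard geometric-algebra formula, in which reflecting x in the plane with unit normal n gives −nxn. Composing two reflections, the minus signs cancel and you are left with a rotor sandwich:)

    $$x \;\mapsto\; (-n_2)\,(-n_1\, x\, n_1)\,n_2 \;=\; R\,x\,\tilde{R}, \qquad R = n_2 n_1,$$

    where the rotation angle is twice the angle between n1 and n2. Turning one mirror through 180° sends R to −R, but what the pair does – R x R̃ – is completely unchanged, which is the 360°/720° business in a nutshell.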

    Even Michael Atiyah has said “No-one fully understands spinors. Their algebra is formally understood but their general significance is mysterious.” Now, I’d certainly not claim to “fully understand” spinors – I have only a fairly casual and amateur interest in this stuff – but I think they are perhaps not as mysterious as they’re often made out to be.

    “I’m half suspicious of adopting a mathematical framework that is too “beautiful”.”

    Agreed. Evangelists for a particular approach usually grossly oversell its capabilities, and Geometric Algebra is no exception! It does have some annoying flaws and limitations. But it is remarkably nice, and I find it helpful to get an intuitive picture of what’s going on with the abstract stuff.

    Whoever would have thought we’d end up here starting from a story about a snake eating a towel? 🙂

  • Roger Penrose once told me a funny story about Michael Atiyah – or rather (as was often the case with Roger) a story making gentle fun of himself, but Michael figured in it. When Roger was a young, just-starting post-graduate student, his tutor one day told him that some other student was doing his thesis defence and maybe Roger would like to turn up and watch, just for interest. So Roger did – and this other student was amazing, knew tons of stuff, just kept on bringing out new things. By the end, Roger felt crushed, convinced that if this was the standard, he had no chance.

    Of course, the other student was called Michael Atiyah and when he told the story I knew that Michael Atiyah was, well, like Michael Atiyah!!! (and, for the matter of that, Roger Penrose was Roger Penrose!!!) – but back then of course, Michael was just any old student, whose name happened to be Michael Atiyah – a name that did not have the comic-punchline significance it had when Roger described the incident decades later. 🙂

    Whoever would have thought we’d end up here starting from a story about a snake eating a towel? 🙂

    Undoubtedly very off-topic but while people are interested…. That said, as you say geometry is geometric, visualisable, and so a difficult subject to convey in this textual format – hence my staying out of it except anecdotally.

  • Snorri Godhi

    Whoever would have thought we’d end up here starting from a story about a snake eating a towel?

    To get back on topic: if you think swallowing a towel is dumb, what about swallowing an alligator?

  • NickM

    Now where were we?

    Fraser, your lack of sense of direction – your flaw – is interesting. I can see how being poor at any given thing (you should hear me “sing”) gives an insight into what it’s like for other people to struggle at things you are good at. God help me, I have done private tutoring in maths and physics and there was this one guy who just couldn’t get logs at all. This was a serious problem because he was a health and safety officer and he needed to learn basic acoustics, and dB is a log scale. That was an hilarity.

    So, I get your point but I think there is a wider issue with human fallibility here. I suspect at a certain level things like insight and creativity require it. I suspect an eidetic memory would be quite the hindrance in grasping the essence of things. I think Borges explains this very well in his short story, Funes, the Memorious.

    Personally I have some experience here. My first degree was Physics so, whilst mainly theory, 1/6 of it was experimental and that means curve fitting. And curve fitting is not necessarily all about accuracy. Analysability is a big part of the game. Also some “common physical sense” such as knowing that, no matter how accurate your curve is according to the usual metrics, if you know it has to tend to, say, zero in a certain limit for sound physical reasons and that curve doesn’t then it ain’t the curve you want no matter how accurate it is taken over the whole range of results.
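    (By way of illustration – a rough sketch with made-up numbers, not real data: if physics says the curve must pass through zero at the origin, you build that constraint into the model rather than hoping the unconstrained fit happens to respect it.)

```python
import numpy as np

# Hypothetical measurements of a quantity that must vanish at x = 0 for physical reasons.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y = np.array([0.9, 2.1, 3.4, 3.9, 4.4, 4.6])

# Unconstrained quadratic fit: the intercept is free, so f(0) generally isn't zero.
unconstrained = np.polyfit(x, y, 2)

# Constrained model y = a*x + b*x**2: no constant term, so f(0) = 0 by construction.
A = np.column_stack([x, x**2])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print("unconstrained intercept:", unconstrained[-1])        # generally nonzero
print("constrained fit: y = %.3f*x + %.3f*x^2" % (a, b))    # passes through the origin
```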

  • Nullius in Verba

    “Also some “common physical sense” such as knowing that, no matter how accurate your curve is according to the usual metrics, if you know it has to tend to, say, zero in a certain limit for sound physical reasons and that curve doesn’t then it ain’t the curve you want no matter how accurate it is taken over the whole range of results.”

    I’m sure you’re well aware of the issue, but this reminds me of what the lab technician who was first asked to test the Mpemba effect said. He ‘knew’ from his physical intuition that the result must be wrong. So he told his boss he’d keep working on it until he got it right.

    On the one hand, this sort of thing is a good reminder that the real world is usually a lot messier than theory. There’s friction, contamination, miscalibration, interference, vibration, background noise, natural radiation, leakage, dust, fluff, and random creepy crawlies in the works. Experimental work requires a lot of experience to build up an intuition about all the things that can go wrong, to allow you to build an experiment that you can be sure works. You can only gain that confidence in the equipment by first measuring things where you know what the answer is, so you can be sure the experiment is measuring what you think it is.

    But on the other hand, there is a natural tendency to relax when the equipment gives you the value you expect, and you forget the demonstrated unreliability of all the previous experiments. If you start knowing the answer, and when the experiment confirms it you stop, and when the experiment disagrees you fiddle with the equipment until the answer changes, ‘fixing’ it, then you’ll almost always just confirm what you already believed. The experiment tells you nothing. The only time you ever learn from such a set-up is if you find it impossible, after a great deal of effort, to get the right answer. You might then, grudgingly, consider the possibility that your expectation was wrong.

    The experiments giving the wrong answer teach the lesson of our fallibility most directly, but the conclusion we have to draw is that we need to show the same suspicion about all our experiments, even the apparently successful ones. The techniques you learn to apply fixing a failed experiment, you also have to apply to every other experiment. Because even the curve you’re expecting is no more likely to be the curve you’re looking for, if your experimental technique is fallible. That physics common sense is absolutely necessary to set up and test experimental equipment capable of making the measurements you need, but it is highly dangerous to rely on when conducting the experiment itself.

    I was never very good at the experimental side of physics, but had a lot of respect for those who were. I was always a theoretician. I didn’t get enough practice to develop that level of practical physical intuition. But the same principle is at work in the theory. If you manipulate mathematics blindly, with no idea of what to expect, no way to detect results that are obviously wrong or simply don’t make sense, then you can’t detect any mistakes you make, and we’re all fallible. Having a geometric intuition about what the maths is doing gives you a way to check and verify the reliability of your methods and tools. Only when you have confirmed its validity in areas you know can you go exploring the unknown.

  • NickM

    My wife is a translator. She translates a lot of medical/pharma stuff from Russian/Danish/Norwegian/Swedish into English. We have a bit of an in-joke about the science.

    They’re meddling with things they don’t understand!

    That, for me, is one of the main reasons I found science interesting.

    One of my first meddles… I’m not sure I ought to relate this but why not? I was about 9 and was really into having fires on this bit of waste ground near my house. Anyway someone dumped an old shed there and I discovered – much to my pleasure – that tarred roofing felt wrapped around asbestos exploded when chucked on a fire. Me and the other kids would throw it on the fire in the pit and then leap over it, hoping it wouldn’t go off mid flight. It did once, when Steven “Whopper” Watson made his bid for glory, and it caught him directly in the gentleman’s area. How we laughed as he rolled in agony.

    He is now a happily married father of three so it couldn’t have been that bad.