Two discussion points inspired by Stephen Wolfram

The first one is straightforward. The internet threw me a talk by the computer scientist and businessman Stephen Wolfram today. It lasts three minutes 21 seconds and is called “How humans can communicate with aliens”. The subject is one that has so often been used as the basis for fiction that we sometimes forget that when you look up at night, what you see is real. There is a whole universe out there. It might have intelligences in it. Mr Wolfram contends that we might have been seeing evidence of intelligences all the time without realising it.

Do you think he is right? And assuming we can talk to them, should we?

Alien contact sounds wonderful at first but then becomes terrifying as you think more deeply. The second topic for discussion I want to put forward sounds terrifying at first but then becomes –

Well, you tell me what it becomes. There is a very strange final paragraph to Mr Wolfram’s Wikipedia page:

Personal analytics

The significance data has on the products Wolfram creates transfers into his own life. He has an extensive log of personal analytics, including emails received and sent, keystrokes made, meetings and events attended, phone calls, even physical movement dating back to the 1980s. He has stated “[personal analytics] can give us a whole new dimension to experiencing our lives”.

One of my recurring nightmares is that as spy devices get smaller and the computational power available to analyse what they learn gets bigger, someone – or lots of someones – will be able to analyse my life in that sort of detail, down to every keystroke I make. It had never occurred to me to think of it as something I might like to do to myself.

Does anyone reading this do anything similar? Would you like to?

68 comments to Two discussion points inspired by Stephen Wolfram

  • Natalie Solent (Essex)

    “NO!!!!!!” to which, announcing ourselves to aliens, or doing an analysis of all your keystrokes for the last three decades?

  • Julie near Chicago

    About the video:

    We need (as always) to be careful about letting a metaphorical use of a word become a redefinition of the word.

    “Intelligence” properly speaking is only applicable to a system (and note — a human being is a system with certain particular components and characteristics) that has qualities of consciousness and self-awareness, and that can form concepts.

    Isn’t that really the bottom line?

    .

    So far, even AI units only mimic intelligence. It’s very hard to come up with an airtight definition of “consciousness” (the word actually refers to a primitive concept), but we do recognize it in some types of existents other than the human ones: Namely, many animals. It’s not the degree of “computational ability” that determines the existence of intelligence. It’s the capacity of the system under discussion to experience itself in that indescribable way. I just don’t see any reason to think that pulsating electromagnetic fields can be properly said to experience themselves at all.

    Mind you, Fredric Brown’s story “The Waveries” has been a favorite of mine since I first read it more than 65 years ago. *g*

    .

    Another feature of intelligence is the fact that it allows its possessor to cope with Reality, with the phenomena that occur and affect it, in order to achieve ends that it can plan for. In other words it can perceive patterns and can use the perceptions to choose among various activities according to plans. And the italicized words imply that there exists a mental state (a capacity to integrate the perceptions into a certain mental construct), which implies a state of consciousness, which implies consciousness itself.

    As far as I can see, there are in our own familiar Animal Kingdom creatures with differing degrees of intelligence (cats and dogs can both figure out how to “manipulate” certain gadgets toward some desired end, and there are certainly instances where cats attempt what they observe works for humans. For instance, occasionally a constipated cat will actually sit on the toilet while trying to void). Are there also differing degrees in various animals’ capacities to conceptualize, and to conceptualize themselves as existents?

    It seems to me that ants, for instance, probably do not have anything that could be called a “mental state” unless you stretch the term far beyond the boundaries of its meaning; whereas dogs and cats and bonobos do have some capacity of awareness, a mental state. It seems evident that ants don’t have minds, whereas dogs do. (And they certainly have minds of their own!)

    I really think that the good Mr. Wolfram is running the risk of using possible similarities — even important ones — to deduce a fundamental sameness.

    As for AI, perhaps one day we will create an AI which generates its own consciousness while having skills that allow it to cope with the universe in which it finds itself so as to achieve ends that it wants and knows it wants; an AI which can conceptualize and knows that it can do so, however unlike ours the concepts it creates may be. Maybe.

    Or so it seems to me.

  • Julie near Chicago

    As to the keystrokes, etc., that sounds like an obsessive disorder which if overindulged is likely to end in clinical depression, if it hasn’t already.

    At one point I got heavily into “journaling.” What can I say, it was the ’90s. It got to the point that I would spend almost the entire day writing myself up. Gad! It was an addiction, like peanuts. Just one more…. I had to have a very, very stern talk with myself. Just Say No.

    :>((

  • Umbriel

    I generally agree with Julie regarding insect cognition, though there is behavior on the part of some bugs, like spiders, and those that hunt them, that sounds like it disturbingly verges on strategy. I suspect that the continuum on which consciousness exists has a lot of gradations on it, and consciousness itself may have many different components which can be very strong or very weak to varying degrees, perhaps even among humans.

    As for data collection and interpretation — My only real indulgence in that regard has been charting the data from my utility bills over the past 20 years or so. It doesn’t reveal all that much about my personal behavior, but is an interesting circumstantial record – I can look back over it and see when we’ve taken extended vacations, when we upgraded to a low-water-consumption toilet, when that toilet malfunctioned and was prone to leak, etc. I’ve also noted the local climate trends of that area — Highly volatile winter lows vs. less volatile summer highs, and it seems to support Bjorn Lomborg’s assertion that “global warming” has made winters broadly less cold over that period without making summers particularly hotter.

  • Nullius in Verba

    One way of defining ‘intelligence’ is a general problem-solving capability, able to simulate its environment internally, and manipulate the model to determine what behaviours will likely lead to desired outcomes. Simulation is when the states/transitions of one bit of the physical universe match up with (to be more precise: are homomorphic with) those of another bit of the physical universe – this is how the concept of semantic meaning is instantiated. Consciousness, awareness, and self-awareness might not be necessary for that, although they may also arise naturally from it. The philosophy is controversial and complex; how qualia arise from physics is a mystery.
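    To make the ‘homomorphic’ bit concrete, here is a toy sketch of my own (purely illustrative, not a formal definition): system A simulates system B when there is a mapping from A’s states to B’s states that preserves the transitions – stepping in A and then translating lands you in the same place as translating and then stepping in B.

        # Toy example: a mod-2 counter "simulates" a light switch.
        step_counter = lambda n: (n + 1) % 2                     # counter flips between 0 and 1
        step_switch = lambda s: {"off": "on", "on": "off"}[s]    # a light switch being toggled
        h = {0: "off", 1: "on"}                                  # proposed correspondence of states

        # Homomorphism check: translate-then-step equals step-then-translate, for every state.
        print(all(h[step_counter(n)] == step_switch(h[n]) for n in (0, 1)))   # True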

    While it’s hard to see how pulsating electromagnetic fields can have awareness, the same may be said of bags of meat – as in masses of brain cells firing electrical pulses at one another. In a sense, we *are* pulsating electromagnetic fields.

    As the laws of physics are self-similar, showing common patterns in many different phenomena, then yes, meaning and simulation should be common. And self-replicating patterns that enhance their own reproduction by problem-solving would be successful. The idea makes sense, although how common it might be runs into the same unknowns as in the Drake equation. Who knows?

    But if they are here, then a similarly-intelligent and only slightly more technologically advanced alien intelligence would easily be able to pretend to be us. Maybe some of the people you talk to on the internet are aliens? It makes for an interesting variation on the Turing Test!

    As for recording your life, this sounds similar to what would happen if you had a perfect cybernetic memory: if instead of forgetting most of your life, you could remember every detail. I suspect the reason it’s not an issue is for the same reason we forget – because most of our life is very boring, and we want to filter out only those useful bits we really need. The question here then is whether we can build artificial memory filters that are better than our natural, inbuilt, evolved ones.

    It’s not being observed that we fear; it’s being judged.

    It might be interesting to ask the religious this question. The Gods/Goddesses we know are watching our every move, and our every thought, and will judge us when our lives are completed. Does this thought make you squirm? Or are you confident that the Gods/Goddesses will judge you as sympathetically and with as much understanding as you do yourselves? Or perhaps more so? If we fear the judgement of other humans, is that not in itself a fearsome judgement of humanity? Bearing in mind most of the stories about Gods/Goddesses and their activities, is this hope well-founded?

  • Chip

    One of my ideas for writing a sci fi novel that I will never write is that – considering the pace of technological change and the age of the earth – humans long ago found the means to leave the planet and in departing, they hid the earth from outsiders and reset the evolutionary clock as an expression of sentimentality and nostalgia.

    As we approach the technological level when the first humans left – i.e., the singularity – we will discover whether they make contact, have forgotten about us, or plan to reset the clock again, as they may have done many times before.

    Of course we may also discover that they weren’t human at all – perhaps birds or octopus.

  • bobby b

    “It seems evident that ants don’t have minds, whereas dogs do.”

    But do four million ants working together have a mind? Four million bees? One mind, to which they all belong and contribute?

    Can four million unconscious, mindless parts comprise one whole mind?

    A microtome slice of my brain certainly isn’t a mind, but how about if you combine it with all of the other slices?

    Is there a geographical limit to a mind? Can all of the single-cell lifeforms on a planet comprise a mind? Can all of humanity comprise one mind overarching all of the individual minds?

    If we wish to communicate with aliens, we may have to expansively determine, first of all, what “mind” means, and then discard many of the artificial limitations that we place on the concept simply because the form of our “mind” has certain characteristics.

  • SkippyTony

    I think the discussion about definitions of intelligence seems to miss the point of the question. For the purposes of the argument, if they can get here, they are intelligent enough. The question really is about how we might co-exist with another species that has our propensity to run away like a weed. If we are both apex predators with similar appetites (raw materials, resources) then it’s pretty hard to see it ending amicably. E.g. either they are a resource, or we are. So, if (when?) we meet it won’t be about communication, it will be about interaction.
    As to q2, given how much of our life will endure in the digital world, I expect whole industries to emerge, data mining historical artefacts as a way of scientifically predicting future behaviour – this is the holy grail of the whole HR / Management world. As I try to hammer into the young folk around me, be obsessively mindful about what you put into the digital domain: the reality is that once you have published you have lost control – forever. Thank god phones with cameras were not around when I was a youth!

  • Tedd

    I would love to have the actual data to answer any question at all about myself. The most interesting questions probably can’t be answered via the kind of data Wolfram is talking about, but it would still be interesting to know how much time I actually spend at different tasks, or how frequently I do various things. I’d hate for that information to fall into the wrong hands, but I’d love to have it, myself.

  • Julie near Chicago

    NiV,

    “This sounds similar to what would happen if you had a perfect cybernetic memory: if instead of forgetting most of your life, you could remember every detail.”

    .

    Yes. It’s called hyperthymesia.

    https://www.smithsonianmag.com/innovation/rare-people-who-remember-everything-24631448/

    At last count, at least 33 people in the world could tell you what they ate for breakfast, lunch and dinner, on February 20, 1998. Or who they talked to on October 28, 1986. Pick any date and they can pull from their memory the most prosaic details of that thin slice of their personal history.

    [SNIP]

    …[R]ecently scientists at the University of California at Irvine, published a report on 11 people with superior autobiographical memory. They found, not surprisingly, that their brains are different. They had stronger “white matter” connections between their mid and forebrains, when compared with the control subjects. Also, the region of the brain often associated with Obsessive-Compulsive Disorder (OCD), was larger than normal.

    In line with that discovery, the researchers determined that the study’s subjects were more likely than usual to have OCD tendencies. Many were collectors–of magazines, shoes, videos, stamps, postcards–the type of collectors who keep intricately detailed catalogs of their prized possessions.

    The article says that this exists only in autobiographical memory:

    “In fact, [such people] generally perform no better on standard memory tests than the rest of us.”

    See also the BBC story, which differs in some details:

    http://www.bbc.com/future/story/20160125-the-blessing-and-curse-of-the-people-who-never-forget

    And there’s always the Great Foot…(Wikipedia).

  • Julie near Chicago

    bobby, you’re talking literally about the Hive Mind. There would have to be some way that each of the “cells” (ants) could communicate with at least one of the others (I mean, centralized switchboard/dispatch vs. peer-to-peer networking); and furthermore, the Hive would have to be literally conscious, literally aware.

    If, that is, it has intelligence properly so called.

    Otherwise, you would for the sake of clarity of thought have to create a new term for this capacity that looks a lot like intelligence but isn’t really the condition that we mean by the term.

    .

    All this banging on about the proper meaning of words isn’t something I do to keep my fingers occupied. It’s to enable us to be clear in our thinking as well as our speech. An example:

    Ten or 15 years ago some guy named Burt (Kosko? Costco? I forget) wrote a book called Fuzzy Logic, that being the hot new concept for philosophy, math, and geekish types. I got as far as this gem:

    “After all, it’s just not true that a thing can’t be in two places at once. For example, a car that’s parked so it crosses the line between two adjacent parking spaces.”

    This was meant not as a funny, not as some sort of sarcasm, but as literal truth.

    I mean, true, the guy was an engineer (no disrespect to Samizdata engineers, who are smarter than that), but my god! Put me right off my oats, it did. Or at least, off the book. (You could theorize that he was going to point out the confusions of two senses of the word “places,” but you’d be wrong.)

    Which puts me in mind of something that ruined Hume for me way back in college. Completely O/T, but a wonderful example, if taken at face value, of a complete lack of ability to deal with an abstraction. The Great Man wrote that it’s not so obvious that two lines cross at a single point, though it looks that way if you draw them more-or-less at right angles. But if you draw them so they intersect at smaller and smaller angles, i.e. as you draw them closer and closer to parallel, you can see how they appear actually to overlap for a distance….

    *Shrug* Well, maybe I wronged the man terribly and he was trying to point out that you can’t handle pure abstractions on the basis of how similar some real things seem to be. If so, I apologize.

    One thing I’m sure of, and that is that I do not misremember what I read! The only mistake I ever made, if I made one, would have been something like 15 years earlier.

  • NickM

    As an example of total recall might I suggest “Funes the Memorious” by Borges.

  • Mr Ed

    One of my recurring nightmares is that as spy devices get smaller and the computational power available to analyse what they learn gets bigger someone – or lots of someones – will be able to analyse my life in that sort of detail, down to every keystroke I make.

    Indeed, but there is still the issue of scarcity of time and resources and iron economic law: to do this to others requires the devotion of scarce resources, and it is unlikely to be economic. It would require a pre-existing (and always decaying) huge State to extract the resources for this sort of analysis. If the argument against the State is made, won and carried through, then that is the battle that needs to be won.

    Regarding aliens, if they are intelligent and if they have found a way to meet us, would it necessarily be the case that their incentives would be the same as ours? Might they be not only ‘Borg’-inclined, but actually able to enjoy what humans universally regard as the horrors of socialism? My point being that with humans, no matter who or where, socialism breaks down on implementation because people do not respond to collective inputs as productively or effectively as with individual reward, be they GULAG managers or peasant farmers. Even when people believe in socialism and murder for it, they cannot get it to work themselves. Aliens might be different. Or perhaps not.

  • Hector Drummond

    “Mr Wolfram contends that we might have been seeing evidence of intelligences all the time without realising it. Do you think he is right?”

    Of course he is right, if all he has said is “we might have been”. Anything non-contradictory might be true. That’s a logical triviality. “Has a more than negligible chance of being true” is another matter. But what exactly he said I am not going to look up, because I’m not going to spend my time reading someone who is into personal analytics. (I don’t have the time, because I have to count the number of times I lifted my spoon to my mouth this morning when eating breakfast. Or is it the number of times I brushed my hair?)

  • Natalie Solent (Essex)

    Hector Drummond writes,

    “I’m not going to spend my time reading someone who is into personal analytics”

    Your choice, but don’t let Wolfram’s odd penchant for personal analytics deceive you into thinking that he is anything other than a very intelligent man. I first heard of him from using the Wolfram Alpha answer engine.

  • terence patrick hewett

    People engaged in the sciences and engineering tend to be obsessives so this may be regarded as typical behaviour.

  • It’s also that it’s very rare to find anyone who has anything interesting to say on the possibility of life in the Universe where they are speaking in general terms. And most of them, even the scientists, or maybe especially the scientists, tend to have little idea of how difficult probability talk becomes in those sorts of cases.

  • Greg

    Mr. Wolfram says he has “created languages”; I’m guessing he means computer languages, not “Elfin”. That use of a simple word is telling; in this context he should have clearly stated what kind of language he was talking about.

    He’s looking for evidence of a thing that most people regard as very hard to detect at a distance, but he does not define it, does not describe what the evidence of its existence might look like at a distance. Up close, interacting with an alien intelligence, I think we’d recognize it instantly. Even at a distance, if they beamed their “historical documents” (h/t Galaxy Quest) to us via television broadcasts (using our method for encoding video and audio), I think we’d recognize it instantly. But what if the nearest intelligent civilization is sending us evidence at the rate of one photon per hour (they might be very far away and radiating just a few MW)…I don’t think we’re equipped to see/hear that?

    My main impression of this video is that Mr. Wolfram is making the mistake of applying the principles of his trade (comp sci, math clearly, possibly he knows some physics?) to questions that his methods are not built to answer.

    But the rest of the discussion on what constitutes an intelligence is great! Didn’t the Enlightenment philosophers (and the Greeks and a few in between) cover this already? 🙂

  • Greg

    Or was it the Pythons who covered this? –“I drink, therefore I am”

  • Runcie Balspune

    He has an extensive log of personal analytics, including emails received and sent, keystrokes made, meetings and events attended, phone calls, even physical movement dating back to the 1980s

    A “log of emails sent” and a “log of keystrokes” are practically the same thing; even if you included the keyboard shortcuts, the fact that an email was sent implies a shortcut or mouse click was used. Any document is also a “log of keystrokes”. I doubt he is stupid enough to record his passwords, so I don’t really think this is proper keylogging.

    Meetings and events attended – yep, that’s called “a diary”, I have had one of those since Filofax days.

    Phone calls – yes, I seem to remember these things called “itemized phone bills”, and since the smartphone era this has been par for the course.

    Physical movement – this could mean a Fitbit?

    So this somewhat geeky guy doesn’t throw away diaries and keeps all his files, move along, nothing to see here.

    It had never occurred to me to think of it as something I might like to do to myself.

    You don’t already?

  • Mr Ecks

    This Wolfram bloke should forget about aliens and supermemory and prove how intelligent he really is by devising a way to regrow his barnet.

  • Nullius in Verba

    “The question really is about how we might co-exist with another species that has our propensity to run away like a weed. If we are both apex predators with similar appetites (raw materials, resources) then it’s pretty hard to see it ending amicably. Eg either they are a resource, or we are.”

    Or we can trade.

    Can any alien species get off the planet and explore the universe without first having discovered the economic and technological efficiencies produced by trade and free-ish markets?

    “Yes. It’s called hyperthymesia.”

    If they don’t do better on standard memory tests (i.e. cannot autobiographically remember being told), then that sounds like they’ve got a different sort of filter, storing a different selection of stuff, rather than that they remember everything. But it’s interesting.

    “Otherwise, you would for the sake of clarity of thought have to create a new term for this capacity that looks a lot like intelligence but isn’t really the condition that we mean by the term.”

    Philosophers of the mind commonly distinguish the concepts, for that very reason. ‘Intelligence’ is attached to problem-solving, and computers appear to be able to do that without evidence of awareness. ‘Awareness’ might simply mean sensory input (so a motion detector alarm is ‘aware’ of an intruder), or it might mean what philosophers call ‘qualia’, the feeling of what it’s like to be. ‘Qualia’ is a tricky concept, because it seems to be only observable from the inside. Physics has no explanation or mechanism for it, and while it must have some sort of physical effect (or we couldn’t talk about it), it’s not clear what it is, or how it works.

    So far as physicists can tell, human brains are not doing anything but following the laws of physics, and not doing anything that a computer could not, in principle, do. So if brains have qualia, which turn up when brain-like computation is done, maybe computers do too? If the atoms of the brain can generate qualia/’awareness’, then maybe the rest of the universe can too? This position is called ‘panpsychism’ – and is related to pantheism.

    If a ‘hive’ of brain cells – spread out over space and communicating with one another over non-zero time intervals – can constitute a mind with awareness, then why not other inter-communicating systems? Insect colonies, genes and memes, human communities, entire ecologies… could the entire Earth be a living awareness, each organism a single ‘brain cell’ within it, trying to solve the question of what design survives and reproduces itself the best?

    “After all, it’s just not true that a thing can’t be in two places at once. For example, a car that’s parked so it crosses the line between two adjacent parking spaces.” This was meant not as a funny, not as some sort of sarcasm, but as literal truth.

    Sort of. The point he’s trying to get across is that human concepts are built from approximate models of reality. We simplify the world by fitting the parts of it into the categories of some simpler model that’s easier to compute with. When one model doesn’t quite work, we automatically switch to a different or more detailed model, often without even noticing we have done so. So apparently simple human concepts like “place” are actually many different interlinked models of reality, that we pick and switch between without distinguishing, sometimes part way through a sentence. Rules like “an object can’t be in two places at once” apply to some aspects of the concept and not others. So “Birmingham” is a place, “the easternmost carpark in Birmingham” is a place, “the third parking bay on the left” is a place, and “15.282 metres from the car park entrance” is a place. A car can be in all of them at the same time.

    So by “place”, we apparently mean a subset of space (and time, since carparks come and go). Subsets can overlap. And objects of non-zero size can clearly (as with the badly-parked car) occupy several non-overlapping subsets.

    What’s implicitly being missed is the common unspoken convention that the sort of subsets used as “places” are additionally restricted to match the size and behaviour of the objects we’re talking about. For one conversation we might say “My car is in Birmingham” if talking about why we need a train ticket to Birmingham. For another, “the third parking bay on the left” may be sufficient if being asked where we parked the car, so someone else can find it. Non-overlapping places sufficient to identify the object’s location at the level of precision we’re talking about are specific to each object. My car cannot be in both Birmingham and Cardiff at the same time, unless something has gone very wrong with it!

    Artificial intelligence has a hell of a time sorting out all this mess, and then trying to express it in terms humans can understand. It’s made harder because humans don’t consciously know the actual rules they’re operating by. Fuzzy logic was one attempt to represent sets and categories that had some level of human-like ambiguity. It wasn’t very successful at that, in my opinion, but it has its fans.

    Or to put it another way: It’s all about the metacontext!
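    Going back to fuzzy logic for a moment, here is roughly what ‘graded membership’ looks like in practice – a made-up sketch of my own, nothing from the book: the badly-parked car is partly “in bay 2” and partly “in bay 3”, with degrees of truth between 0 and 1 rather than a yes/no answer.

        def membership_in_bay(car_left, car_right, bay_left, bay_right):
            # Fraction of the car's width lying inside the bay, from 0.0 to 1.0.
            overlap = max(0.0, min(car_right, bay_right) - max(car_left, bay_left))
            return overlap / (car_right - car_left)

        # A car straddling the line between bay 2 (2m-4m) and bay 3 (4m-6m):
        print(membership_in_bay(3.2, 5.0, 2.0, 4.0))   # ~0.44 "in bay 2"
        print(membership_in_bay(3.2, 5.0, 4.0, 6.0))   # ~0.56 "in bay 3"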

    “Completely O/T, but a wonderful example, if taken at face value, of a complete lack of ability to deal with an abstraction. The Great Man wrote that it’s not so obvious that two lines cross at a single point, though it looks that way if you draw them more-or-less at right angles. But if you draw them so they intersect at smaller and smaller angles, i.e. as you draw them closer and closer to parallel, you can see how they appear actually to overlap for a distance.”

    Again, I think the issue he’s trying to get across is that it’s dangerous to rely on our intuition for mathematical concepts, because our intuitive concepts are these fuzzy clouds of many different often inconsistent and approximate concepts.

    How do you know two straight lines cross at one point? Well, lines in our intuitive model do. But our intuitive model is based on *drawing* them with a pencil, which means our “lines” and “points” have a thickness, which means that two “lines with thickness” intersect at “a point with thickness” only if the lines are nearly at right angles to one another.

    When mathematicians say “two straight lines intersect at a point”, they’re talking about lines of zero thickness. But we don’t have any reliable intuition for what that really means (the infinitely small can be just as strange as the infinitely large), and mathematicians have many examples where our intuition about geometry goes terribly wrong. For example, how close are “neighbouring” points on a line to one another? If zero, how can we get to a non-zero length by adding up lots of zero-sized points? It’s an argument trying to explain *why* mathematicians use abstractions and precise, non-intuitive definitions for intuitively familiar concepts.

  • Snorri Godhi

    One way of defining ‘intelligence’ is a general problem-solving capability, able to simulate its environment internally, and manipulate the model to determine what behaviours will likely lead to desired outcomes.

    This statement by Nullius is key, even though i am not sure that i agree with all of the rest of the comment. Intelligence is about making predictions. Intelligence is consequentialism.

    Please note that making predictions is necessary, not only for choosing an optimal course of action (and here we get into issues of “free will”), but also for checking that the mental model of the world is correct: making predictions is part of the hypothetico-deductive method.

    I agree with Nullius’ breezy dismissal of consciousness as relevant to intelligence: after all, chess-playing programs make predictions, and choose courses of action based on these predictions, without conferring any consciousness (afaik…) to the computers on which they run.

    I could add more, but i feel that’s enough to think about.

  • Snorri Godhi

    As for “personal analytics”, as a matter of fact i do it myself; but only where i think it helps.

    For one thing, i keep a record on how many bottles of beer and shots of whisky i drink, day by day. (If i have any other alcoholic drinks, i record that too.)

    I don’t need to tell you why that might be a good thing to do.
    More interesting might be that i also try to keep a diary for any trip i take, and i have made a list (possibly incomplete) of all the beds in which i slept. I feel that i would lose part of my life if i forgot about these things.

    I also keep a list of every book (even book chapter, when the book is hard going, such as The Road to Serfdom) and essay that i read. That is for several reasons:
    similarly to recording trips, i feel that i lose a part of my life for every book that i forget having read;
    opposite to the case of recording drinks, the prospect of putting on record the book that i am reading, encourages me to keep going;
    sometimes i need to remember which book chapters i have not read yet;
    and finally, when i feel like re-reading books which i re-read for pleasure, i want to make sure that i have not read it for a few years, otherwise i choose another book.

  • The personal analytics thing – well, it tracks right along with the issue of surveillance. I would suspect most of us have an anti-surveillance bias. Who will watch the watchers, etc… But while we talk principles, other people basically steal our sovereignty. So, we almost have to have ‘little data’ in order to combat ‘big data’ – which keeps being touted even though there isn’t much to it. Sure, they may find a few things here and there, but it’s basically a long con, like the climate change models. Models aren’t science, but really complicated multivariate models can require a lot of computers and programmers to create and run. Individuals in the industry may not realize how much of a scam it is, but unfortunately, we are already decades into this layer of unreality being bolted onto fields of research.

    There are likely some advantages to personal analytics, just outright. Improving sleep, or noticing some sort of habit that reduces your productivity- there will be wins here and there. But the real advantage would be when they come out with some ‘big data’ SCIENCE! and tell you to do something that contradicts your personal research.

  • Rob Fisher

    “Where is everybody?” — Enrico Fermi.

    “Any sufficiently advanced technology is indistinguishable from magic.” — Arthur C. Clarke.

    QED.

  • Rob Fisher

    Should we talk to aliens? I don’t think we will have much choice. If they want to talk to us, they will. Maybe they will scan our brains and create thousands of simulations of each of us to question in detail or put inside simulated universes to see how we respond.

    We are right to be terrified of them in case they are nasty but if they are nice they could be pretty handy to have around. The trouble is we have no information. The good thing is we don’t have to decide since we are not in control. The day they show up in their big spaceships floating over the cities, though? I might want to watch from a distance.

    Julie from Chicago: Intelligence needs “qualities of consciousness and self-awareness”. I don’t agree. I can never be completely certain that anything apart from me has consciousness. It’s a useful working assumption that other people do, but there’s no guarantee. Clever animals I can give the benefit of the doubt to, but it’s a bigger doubt. Aliens are probably too alien to be able to tell. If it has outward behaviour that is neither random nor simplistic then I’ll call that intelligence for a convenient label, at least.

    Personal analytics: I leave Google Location History on. I find it pretty interesting. Should I be scared? Maybe, but somehow I am not.

  • Tedd

    Runcie:

    Emails and documents aren’t a log of keystrokes at all. They’re the subset of keystrokes I chose to “keep,” from the subset of keystrokes used in apps that produce text. For me, that’s a pretty tiny sub-sub-set of my keystrokes. There’s probably loads of interesting information in a log of all my actual keystrokes, almost none of which is contained in my emails or text documents. How frequently do I use “power” key combinations, and which ones? What apps can we infer that I’m probably using, based on those keystrokes? How often do I revise text I’ve written? How quickly do I type? At what times of day, or moments during a given hour, do I type the most, or the least? And in which apps? And with what proportion of revisions? How long are the pauses between sets of keystrokes? Has that changed over time? Does it depend on the time of day, or the season? Does it depend on who I’m writing to, or what I’m writing, and, if so, in what ways? I could go on for some time but I’m starting to bore even myself!
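    (To give a flavour of how little machinery that would take – a purely hypothetical sketch, since no such log of mine exists – a few lines of Python over timestamped (time, key) pairs already answers the speed and pause questions.)

        from datetime import datetime

        log = [  # imaginary keystroke log: (timestamp, key) pairs
            (datetime(2018, 11, 5, 9, 0, 0, 0), "h"),
            (datetime(2018, 11, 5, 9, 0, 0, 180000), "e"),
            (datetime(2018, 11, 5, 9, 0, 0, 420000), "l"),
            (datetime(2018, 11, 5, 9, 0, 7, 0), "l"),   # a long pause before this one
            (datetime(2018, 11, 5, 9, 0, 7, 200000), "o"),
        ]

        gaps = [(b[0] - a[0]).total_seconds() for a, b in zip(log, log[1:])]
        bursts = [g for g in gaps if g < 2.0]    # gaps within a typing burst
        pauses = [g for g in gaps if g >= 2.0]   # longer breaks

        print("rough typing speed: %.0f keys/min" % (60 / (sum(bursts) / len(bursts))))
        print("pauses over 2 seconds:", len(pauses))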

  • Ian

    We have just one kind of intelligence that we are familiar with, which is human intelligence. The question is, what does it even mean to talk about other intelligences? So what one needs in order to have something one can reasonably call an intelligence is something that is computationally as sophisticated as a brain, for example. It turns out as a result of a bunch of basic science that I’ve done, actually, that there’s a thing called the principle of computational equivalence which strongly suggests that out in the universe, in nature, there are just tons of things that have the same computational sophistication as brains… whether it’s some pattern of fluid flow in the earth’s atmosphere [etc.] We have our first, sort of tame (in a sense), example of alien intelligence which is AI.

    This seems to boil down to the claim that intelligence is complex systems. This is the result of a false syllogism that he makes, to wit: brains have intelligence; brains are complex; other systems are complex, ergo these other systems have intelligence. This assumes the premise that complexity is intelligence, which is also the conclusion, so it’s a classic petitio principii. This doesn’t make the conclusion necessarily wrong, but it’s hardly a good argument.

    It’s also kinda odd that, in the first minute of the video, and following the above argument, Wolfram seemed to be heading towards a materialist version of panpsychism, i.e. that everything (including the totality of the universe itself) is complex, down to rocks and atoms and so on, the logical extension being that everything is intelligent; however, he is only arguing that everything as complex as a brain has this capacity for intelligence. Is he merely trying to avoid the ultimate extension of that argument, that the universe is intelligent, ergo God?

    Thirdly, I find the elision of the term “consciousness” in Wolfram’s video quite telling. I guess he’s one of these who views consciousness as an epiphenomenon of the brain, which conveniently avoids dealing with the “hard problem” of what (if any) is the fundamental connection between mind/consciousness/intelligence and the brain. Brains are, after all, fairly obviously neither very conscious nor very intelligent when pickled in a jar, and it’s by no means clear that merely feeding chemicals to a “dead” brain would restore consciousness or intelligence. (“Waaaaah, it’s too HARD!”)

    Partly this involves the question of whether there is any fundamental difference between consciousness and intelligence, which again has not been established and seems largely a semantic confusion — chess-playing computers are not intelligent systems because they are designed and (crucially) operated by humans, and so far as I am aware we have yet to see any signs of intelligence in any non-living system. Are patterns of fluid flow in the earth’s atmosphere acting intelligently? I don’t recognize intelligence in it, yet I can instantly spot intelligence in both simpler and more complex systems such as amœbæ and cats.

    Wolfram seems to be relying on the sympathies of his audience to overlook all this. But I suppose it doesn’t matter all that much, since his conclusion (even if true) is ultimately neither interesting (pace Natalie) nor very helpful to anyone so far as I can see, except Wolfram himself in the promotion of his computing projects.

    On the whole I tend to find physics/engineering types generally don’t do very well when engaging in this area, they come across more like materialist theologians or “materialogians”.

  • Sigivald

    “We won’t be able to, pretty much” was the thesis of a few Lem stories, I believe.

  • bobby b

    Julie near Chicago
    November 5, 2018 at 1:43 am

    “bobby, you’re talking literally about the Hive Mind. There would have to some way that each of the “cells” (ants) could communicate with at least one of the others (I mean, centralized switchboard/dispatch vs. peer-to-peer networking) . . . “

    Well, sure. Look at us. A smell of dinner hits our nose. Signals are sent along neurons. At the end of one neuron, chemicals are produced. The next neuron has a way to detect that chemical. As a result of that detection, that neuron does something – produces its own chemical to stimulate the next neurons in a chain, perhaps – until the sensing of that smell has propagated throughout our brain, and our “mind” then knows something, and takes some resultant action.

    Look at bees. A bee comes into the hive having encountered food in some direction. Bees are known to communicate by producing chemicals, which other bees detect, and, having detected them, take other actions such as propagating that signal throughout the hive and then perhaps going in the signaled direction for the food.

    No one cell in my brain can accomplish this by itself. My mind does not reside in any one specific cell. It is only the combination of those cells in a network, and their net of chemical productions and detections, that yields the results that we ascribe to “mind.” Similarly, the mind of a hive does not reside in any one bee, even the queen – it is the result of the combination of all bees in the group, and is comprised of the sum of all chemical productions and detections.

    My only point was related to our limited scope of imagination when we talk about communicating with aliens. We could end up like mites living on a scalp, wandering about seeking life while residing atop it. If we only seek our own forms of communication, we may mistake alien communication for noise.

    Heck, a sufficiently evolved alien life form might well forgo the relatively analog forms of communication that we use – interpreting sounds and writings produced by others through our gross outer senses – and move directly to the more digital forms of communication, directly to our sensing neurons. Our first communications with alien life might well be in our imaginations and in our dreams, because that would be the highest-bandwidth way for them to communicate with us.

    In which case, Mr. Wolfram, who “contends that we might have been seeing evidence of intelligences all the time without realizing it”, would have been correct.

  • Mr Ed

    Rob F

    I can never be completely certain that anything apart from me has consciousness. It’s a useful working assumption that other people do, but there’s no guarantee. Clever animals I can give the benefit of the doubt to, but it’s a bigger doubt.

    I’ve never known even a dumb dog or cat support Labour or the Dems. By which I mean, I have seen animals (even ducks) act in a manner indicative of an understanding of causation. Yet millions vote for Mr Corbyn, millions who can only be surmised to be voting for ultimately their own starvation, out of a desire to get more for nothing, when all the evidence points to them getting ‘more nothing’.

  • Nullius in Verba

    “Partly this involves the question of whether there is any fundamental difference between consciousness and intelligence, which again has not been established and seems largely a semantic confusion”

    Different people seem to have radically different definitions. I’m guessing you have a different definition of “intelligence” to mine. What’s yours?

    “chess-playing computers are not intelligent systems because they are designed and (crucially) operated by humans”

    That sounds like an interesting viewpoint. How does that follow? What is it about the definition of “intelligence” that means a human couldn’t ever design one?

    And what’s your view on an evolved chess program? (It generates programs at random, plays them against one another, and then keeps and mutates the winners. Humans set up the simulated world in which they live, but quite often there’s no human who even knows how and why the rules it picks work. Is that human designed? And could a product of evolution, whether in a world created by humans or nature, ever be intelligent?)
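    For what it’s worth, the shape of such an evolved player is very simple. Here is a toy sketch of my own – not any real engine – where each ‘player’ is just a weight vector for scoring positions, and nobody ever tells the players what the ‘right’ weights are; they only ever see who won:

        import random

        TARGET = [0.9, -0.4, 0.2, 0.7]   # stands in for "what actually wins games"; hidden from the players

        def random_player():
            return [random.uniform(-1, 1) for _ in TARGET]

        def play(a, b):
            # One "game": both players judge the same random position; the judgement
            # closer to the hidden truth wins.
            pos = [random.uniform(-1, 1) for _ in TARGET]
            truth = sum(t * x for t, x in zip(TARGET, pos))
            err_a = abs(sum(w * x for w, x in zip(a, pos)) - truth)
            err_b = abs(sum(w * x for w, x in zip(b, pos)) - truth)
            return a if err_a < err_b else b

        def mutate(p):
            return [w + random.gauss(0, 0.1) for w in p]

        population = [random_player() for _ in range(20)]
        for generation in range(200):
            random.shuffle(population)
            winners = [play(a, b) for a, b in zip(population[::2], population[1::2])]
            population = winners + [mutate(random.choice(winners)) for _ in winners]

        # The survivors cluster near TARGET, though no human ever wrote those weights down.

    Scale that up from four weights to a full evaluation function and you have the flavour of it: the designer sets up the world, but not the knowledge the players end up with.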

  • Mr Ed

    “chess-playing computers are not intelligent systems because they are designed and (crucially) operated by humans”

    Chess-playing computers have developed quite a following with human analysis of their games, as new ‘players’ are developed. They can have opening ‘books’ with a reference library of positions to work from, and can then ‘go solo’ when out of theory, but ultimately they are calculating and assessing positions, which has its own nuances, and working out all variables in a finite albeit very large system. Some powerful programs are baffled by chess puzzles that proficient humans can see as solvable but require a bit of planning and lateral thinking. Getting chess computers to have a ‘feel’ for a position is the ultimate challenge. Ultimately, they still apply their means of assessing positions rather than use intelligence.

    But to level the playing field with humans a brief while, chess computers should find the board, move their own pieces, take their opponent’s pieces, flip the clock and write down their moves on a scoresheet, and not receive colour-coded yoghurts.

  • William Newman

    “So far, even AI units only mimic intelligence.”

    No *general* intelligence so far, granted. I think, though, that things like learning to recognize handwritten letters or digits reliably from a catalog of examples, or playing Chess and Go at a superhuman level, are pretty convincing examples of intelligence, specialized though they are. If someone had figured out how to get a parrot or octopus to learn to recognize handwritten letters or digits reliably, would it be an idiomatic use of the term “intelligence” to say “but it [i.e., the animal] is only mimicking intelligence”?

  • Nullius in Verba

    “Getting chess computers to have a ‘feel’ for a position is the ultimate challenge. Ultimately, they still apply their means of assessing positions rather than use intelligence.”

    I don’t understand. What’s the difference between ‘feeling’ a position and ‘assessing’ a position, when it comes to meeting the definition of ‘intelligence’? Why is it not intelligent to ‘assess’ positions?

  • Zerren Yeoville

    No-one who has read Charles Pellegrino and George Zebrowski’s novel ‘The Killing Star’ could view the prospect of advertising our presence to the Universe at large with equanimity.

    Simply put, any space-capable civilization possesses a level of technology which makes it a potential threat to the existence of any other space-capable civilization. Therefore strict logic dictates that the competitor be wiped out as soon as its existence comes to your notice, otherwise your own civilization runs the risk, however small, of extinction. It doesn’t matter if they seem friendly – extinction is just too big a risk to take. (Here, of course, is one answer to the Fermi Paradox – they’ve either worked out the necessity of keeping quiet for themselves, or they’ve been wiped out)

    Consider “Pellegrino, Powell and Asimov’s Three Laws of Alien Behavior” as cited in ‘The Killing Star’:

    Law No. 1: Their survival will be more important than your survival. If an alien has to choose between them and us, they won’t choose us. It is difficult to imagine a contrary case; species don’t survive by being self-sacrificing.

    Law No. 2: Wimps don’t become top dogs. No species makes it to the top by being passive. The species in charge of any given planet will be highly intelligent, alert, aggressive and ruthless when necessary.

    Law No. 3: They will assume that the first two laws apply to us.

    Add to that L Neil Smith’s line from his novel ‘Forge of the Elders’ that “For a number of excellent reasons, all sapients begin as predators” … and you may readily conclude that it would be better to try to cloak our presence rather than advertise our existence.

  • Julie near Chicago

    If all the world and talk were young,
    And Truth on every speaker’s tongue,
    Then their converse might me give
    To bang on more and join their jive.

    Which I may very well do, because just about everybody so far has had something interesting to say; all of it related, some of it suitable as a main topic for discussion in its own right, and most of it providing sidelights on the point at which I was trying to focus the spotlight.

    .

    As far as the “two places at once” and the issue of crossing lines goes, the point is that apparently the concept of abstraction is missing from the engineer’s and the philosopher’s statements about these. At any rate I won’t swear that that’s not precisely Mr. Hume’s point; but at the time I thought it was clear that he was trying to use physically produced and directly observable lines, as drawn on parchment with a quill, to show that there seems to be reason to think that mathematical lines and points, which have no physical reality and are without breadth — pure abstractions suggested to imagination by reality — can have more than one point in common while yet not being in fact the same line.

    But I no longer remember even in what mss. he made his point, and it has seemed to me lately that in some of his other writings he employed a style of bringing up a subject for argumentation by adopting the viewpoint of an arguer against his own position and arguing not against but rather from that point of view. Or, alternatively, arguing both (or at least two) of the opposing sides of the case, each presented as it could be put by a proponent of it.

    .

    As for a proposed “hive mind,” it seems possible in imagination; but while I am most certainly of the opinion that the fundamental components enabling phenomenon X are always physical, there are systems which have qualities that are “more than the sum of the parts.” Assuming that in fact there are areas or even discrete “spots” in our brains that turn out always to light up when we are in a state that we call “conscious” and never when we are not in that state, we might be able to tell when the person being tested is conscious — and note that I agree with whomever above, that consciousness itself is probably best understood as existing to varying degrees along a spectrum — the ability to deduce the state of consciousness in a subject does not give us, in our own individual self-experiencing, the ability to experience his consciousness directly ourselves.

    This, of course, is yet another place where as Dr. Korzybski observed, the map is not the territory. (Interesting short article at https://fs.blog/2015/11/map-and-territory/ .)

    .

    Pursuing that, it seems to me that bobby’s bees might be like parts of a mechanical production system where machine A produces substrate X and trips a switch that causes machines B, C, D to get going and produce components Y, Z, W, while also signalling machine E that input is incoming; when E receives the components it pushes tab Y into slot Z, tops that with W, turns on the conveyor belt, and a finished Oreo cookie drops out into the box. Or something.

    But nothing is there that’s actual evidence of anything remotely approaching “mentality,” let alone consciousness, in the system. We can daydream that they do, of course, but we can even daydream about creatures that look like horses but have a horn in the middle of their foreheads. We can even imagine stories about them, and construct imaginary universes for them to live in. So?

    .

    In any case, my single fundamental point at the very start was that words need to be used carefully, and if a word is to be used in a novel sense or a restricted sense, that ought to be made clear at the start. Context is not always really sufficient to convey the intended, modified or restricted, meaning; that’s particularly true when one is not speaking to an audience expecting the term to be used as a term of art. (It’s not just the material being presented that makes up the context in which a word is understood. Circumstances of hearing or reading also constitute a part of the context.)

    Otherwise, words become stretched, distorted, and often winkled into meanings completely at odds with the original meaning. A good deal of the much-bemoaned “dumbing down” of the general public is the result of such changes in meaning. Thus communication is more difficult and even more importantly, so is clarity of thought.

  • Nullius in Verba

    “Simply put, any space-capable civilization possesses a level of technology which makes it a potential threat to the existence of any other space-capable civilization. Therefore strict logic dictates that the competitor be wiped out as soon as its existence comes to your notice, otherwise your own civilization runs the risk, however small, of extinction.”

    Only if you think it’s safe to assume that you’re the strongest species around, that you’ll always find competitors before they become a threat, and that you’ll always win.

    There’s a special sort of self-referential strategic pattern that ought to be evolutionarily stable in very general conditions. A gene succeeds if it can trigger an organism into recognising other organisms containing the same gene and helping them, while recognising other organisms without the gene and destroying them. It recognises whether the organism contains the gene by observing its behaviour, and seeing whether it follows this pattern (i.e. helping other altruists and destroying non-altruists). As soon as a mind is sophisticated enough to model the behaviour of other organisms, this strategy becomes possible. And given its self-referential self-reinforcement, it seems certain to evolve very quickly once that threshold is reached. It’s practically written into the laws of evolutionary mathematics.

    So while our aggressive ‘Hawk’-strategy species would cut down the competition for a long time, it would eventually run into some species bigger and stronger, which would immediately recognise from its behaviour that it doesn’t have the gene, and target it for destruction. (And even without the gene, more fights mean more opportunities to lose.) However, a species that showed a more tolerant attitude to weaker species (so long as they are seen to use the strategy too) could survive meeting a much stronger species, when the presence of an equivalent gene was recognised. In a universe with the gene, a hierarchy of species can co-exist and survive, and make alliances. In a universe without the gene, only the very strongest one of them can survive indefinitely. The probability of being that species is one over the number of species, and in an infinite universe that would be zero.

    Aggressive species would exist, for a while, but in the long run it’s a weaker strategy.

    “At any rate I won’t swear that that’s not precisely Mr. Hume’s point; but at the time I thought it was clear that he was trying to use physically produced and directly observable lines, as drawn on parchment with a quill, to show that there seems to be reason to think that mathematical lines and points, which have no physical reality and are without breadth — pure abstractions suggested to imagination by reality — can have more than one point in common while yet not being in fact the same line.”

    It’s actually a very significant question in the history of mathematics. People (including many of the greatest minds in mathematical history) used to think that Euclid’s axioms of geometry were ‘obviously true’, although one of them seemed less obvious than the others (the parallel postulate). In attempting to prove it, mathematicians wound up constructing alternative versions of geometry in which it was not true, and in fact straight lines can meet at multiple points.

    It was this discovery that revolutionised the understanding of axioms – that rather than being statements chosen because they were “obviously true”, they were instead an arbitrary choice; just one of infinitely many equally valid systems, each with its own behaviour.

    The simplest example of such a geometry is the one that geometry itself was named after: “geo – metry” the measurement of the Earth. In spherical geometry, a “straight line” follows a circumference right around the Earth (if you start walking across the Earth in a straight line and keep on going…), and any pair of straight lines always meet at two points, on opposite sides of the Earth!
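    You can see the “two points” directly: a great circle on the unit sphere is just the set of points at right angles to some axis, and two distinct great circles always meet along plus-or-minus the cross product of their axes – two antipodal points. A quick sketch of my own, purely illustrative:

        import numpy as np

        def great_circle_intersections(n1, n2):
            # n1, n2 are the axes (plane normals) of the two great circles.
            d = np.cross(n1, n2)
            d = d / np.linalg.norm(d)
            return d, -d

        equator = np.array([0.0, 0.0, 1.0])     # axis of the equator
        meridian = np.array([0.0, 1.0, 0.0])    # axis of the Greenwich meridian's great circle
        print(great_circle_intersections(equator, meridian))   # (±1, 0, 0): two opposite points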

    In a chosen abstraction, such as Euclid’s system, it may indeed be true and provable, but we can’t safely use our intuition, built up on our approximate observation and modelling of the physical world, to tell us. We have to prove it formally. And we have no idea whether it is actually true of our physical reality at the ultra-microscopic level. If our theories about the quantum foam are true, very likely not.

    So it’s a very interesting question!

  • Nicholas (Unlicensed Joker) Gray

    In every movie, Aliens already speak English! Since Hollywood is so full of progressive and advanced individuals, they must know stuff we don’t. So that problem has been solved.

  • Julie near Chicago

    Yes, there’s always a question as to how well an abstract logical system matches a real-world system. Even as to how well such a system could match the real thing in principle, for anything more complicated than a finite number of apples in a basket.

    Or an imagined gin & tonic on a hot summer day. :>)

    . . .

    Excellent point, Nicholas.

  • Mr Ed

    In every movie, Aliens already speak English!

    Indeed Sir, but imagine the fuss if they spoke Spanish… 🙂

    NiV.

    A good chess player can look at a position and identify possibilities that (some current) powerful engines are stumped by. This is best shown with chess puzzles: if you play chess, you might see that the set-up in this puzzle indicates a particular counter-intuitive tactic, one which a well-known engine is stumped by until it finally sees checkmate in 7. Given the fishy nature of the position, an intelligent human skilled at the game could think of a way around the problems, discarding brute-force calculations.

  • bobby b

    “Alien contact sounds wonderful at first but then becomes terrifying as you think more deeply.”

    Want some alien contact? Watch American major news channels tomorrow night – well, tonight, I guess – as the polls close and heads explode.

    We either end up with Congressional deadlock for two years, in which case Trump will be exploding heads slowly, through executive orders, or we maintain a slim conservative majority, in which case heads will be exploding immediately all through the evening. If we’re all still speaking to each other the next day, it’ll be good practice for alien contact. (Watch the Dems. Now, there’s a hive mind.)

    I have popcorn and mead, and a comfy chair. Neither outcome will ruin my night.

  • Nullius in Verba

    “A good chess player can look at a position and identify possibilities that (some current) powerful engines are stumped by”

    Sure. They use different approaches, and there are problems that one is good at which the other isn’t (and vice versa, I expect). I don’t have a problem with saying that humans are still a lot more intelligent than AIs; what I’m asking about is why this means AIs are not intelligent at all.

    The early AI researchers had a complaint that the goalposts kept on moving – that ‘intelligence’ was basically defined as whatever humans could do but animals and computers could not. As they achieved each goal, the definition changed. I don’t have a problem with that, if that’s what you mean. It’s human.

    But I was curious if it was that you had a fundamentally different definition of the word, like the inclusion of ‘awareness’ (qualia) as a requirement. For example, with my definition of intelligence as a general problem-solving capability, you could argue that theirs was too specialised and inflexible. They’re good on the stuff they’ve been trained with, but do something unusual and they can’t invent a completely new approach to deal with it on-the-fly. (Although I don’t see any reason why it couldn’t be programmed that way, it could be argued that the current ones don’t.) A *general* problem solving capability would be like taking a chess program, telling it the rules to another game like Khet or Dara, and seeing if it can learn to play.

    That evolving method I mentioned above probably could, but possibly it couldn’t play chess as well as one of the more specialised chess engines. So it would be legitimate to ask how well a *general* games-playing AI can play chess compared to humans, rather than one designed specially for the task. Although I’d think that might just mean they have less intelligence than humans, rather than none at all.

    But I don’t know. That might not be what you meant at all.

  • Mr Ed

    NiV

    what I’m asking about is why this means AIs are not intelligent at all.

    Because the response (from the engine in that video) is ‘dumb’: it just hacks away within its parameters without any insight into what it is doing until it can ‘see’ the solution. A poor chess player would do the same, but a smart one would look for what is effectively counter-intuitive as a way forward. Of course, with a puzzle there is an implicit indication that a solution exists, which is an assumption a programme might not make.

    The current scope of computing power is such that chess programmes are making huge strides in approaches and ‘on-the-job learning’. Perhaps they will soon find the unanswerable opening for White, determining the game for ever.

  • Snorri Godhi

    NiV:

    what I’m asking about is why this means AIs are not intelligent at all.

    Mr Ed:

    Because the response (from the engine in that video) is ‘dumb’, it just hacks away within its parameters without any insight into what it is doing until it can ‘see’ the solution.

    But that chess-playing program must have limited depth: if it could “see” checkmate in 7 moves, then a program with more depth could have “seen” checkmate in 8 moves, 1 step earlier; and another could have “seen” checkmate in 9, and so on.

    But you (Mr Ed) seem to realize that, judging by your last paragraph; so i can only assume that what you mean is, we humans do not need to use brute force: we have better ways to prune the search tree. My objection would be: maybe we do use brute force, but it happens at a subconscious level; we are simply unaware of it.
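
    For anyone who wants to see what “brute force” and “pruning the search tree” look like concretely, here is a bare-bones, depth-limited minimax with alpha-beta pruning. To keep it self-contained it plays trivial Nim (take 1 to 3 stones; whoever takes the last stone wins) rather than chess; real engines add far better evaluation functions and move ordering, but this is the basic skeleton.

    ```python
    # Depth-limited minimax with alpha-beta pruning, on trivial Nim so the example
    # is self-contained. Scores are from the maximiser's point of view:
    # +1 won, -1 lost, 0 "the search ran out of depth and cannot tell".
    def alphabeta(stones, depth, alpha, beta, maximising):
        if stones == 0:
            # The previous player took the last stone and won.
            return -1 if maximising else 1
        if depth == 0:
            return 0
        best = float("-inf") if maximising else float("inf")
        for take in (1, 2, 3):
            if take > stones:
                break
            score = alphabeta(stones - take, depth - 1, alpha, beta, not maximising)
            if maximising:
                best = max(best, score)
                alpha = max(alpha, best)
            else:
                best = min(best, score)
                beta = min(beta, best)
            if alpha >= beta:   # the rest of this branch can no longer matter:
                break           # prune it and move on
        return best

    # 12 stones is a lost position for the side to move (any multiple of 4 is).
    print(alphabeta(12, 4,  float("-inf"), float("inf"), True))   # 0: too shallow to tell
    print(alphabeta(12, 12, float("-inf"), float("inf"), True))   # -1: deep enough to "see" the loss
    ```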

  • Snorri Godhi

    PS: as i indicated, it is also possible that we humans do have better methods to prune the search tree; but that does not mean that such methods cannot be implemented in a general-purpose computer: it might be that these methods simply have not been formalized yet.

    Is the issue whether artificial intelligence has been achieved, or whether it is achievable? At this point in the thread, i am not sure anymore.

  • Mr Ed

    Snorri

    But you (Mr Ed) seem to realize that, judging by your last paragraph; so i can only assume that what you mean is, we humans do not need to use brute force: we have better ways to prune the search tree. My objection would be: maybe we do use brute force, but it happens at a subconscious level; we are simply unaware of it.

    I am no expert on computing; I know almost nothing about it. I was a County-level chess player in my youth. As I understand it, certain chess puzzles bamboozle what are very powerful chess engines (iirc the one in the video is Stockfish). In the video, Stockfish found the solution after being extensively led towards it. The ‘problem’ for chess engines AIUI is that they work on the basis of evaluations of positions that give a ‘won’ position a certain value much greater than neutral but not at infinity (i.e. won), so that they can end up ‘groping around in the dark’ chasing apparent dead ends and sorting through their own valuations, rather than seeing a ‘won’ position except when it comes to a foreseeable forced mate. I imagine that work is going on to get them to match more closely human intuition (or rather, the intuition of, say, an IM-level chess player) to enable them to sort the wheat from the chaff and prune the tree, but again, simulating intelligence rather than replicating it.

  • Ian

    “Partly this involves the question of whether there is any fundamental difference between consciousness and intelligence, which again has not been established and seems largely a semantic confusion”

    I’m guessing you have a different definition of “intelligence” to mine. What’s yours?

    The OED has as its first definition “the faculty of understanding; intellect”, derived from the Latin intellegere, for which the Oxford Latin Dictionary has “to grasp mentally, understand, realize”. The notion is that of a function of a conscious mind, or perhaps of a particular kind of consciousness to do with the relation between self and non-self.

    Proponents of strong AI tend to dismiss consciousness as somehow unreal or as an emergent property of matter that has been arranged in a particular way, based it seems on a rather old-fashioned mechanistic/physicalist view which posits that the universe is made of matter. However, if consciousness is a property of matter, it’s not clear how it might emerge in some places and not others, and it cannot be proven that it is not (in fact) everywhere. One ends up (like Wolfram) making arbitrary and rather weak claims about necessary complexity or somesuch.

    I take the view that consciousness or life is somehow fundamental to the universe, and (for instance) don’t accept that rocks aren’t conscious simply because they don’t move; but I can’t prove that rocks are conscious – perhaps you need to be on acid to get this 😛

    chess-playing computers are not intelligent systems because they are designed and (crucially) operated by humans

    That sounds like an interesting viewpoint. How does that follow? What is it about the definition of “intelligence” that means a human couldn’t ever design one?

    It’s not just the human design that’s important (I’ll address this later on), but the fact that (as operators) our minds are in fact part of the system and we are the ones giving meaning to the results of the software through our faculty of understanding or intelligence.

    And what’s your view on an evolved chess program? (It generates programs at random, plays them against one another, and then keeps and mutates the winners. Humans set up the simulated world in which they live, but quite often there’s no human who even knows how and why the rules it picks work. Is that human designed? And could a product of evolution, whether in a world created by humans or nature, ever be intelligent?)

    The example you give is typical of AI systems, which iteratively approximate the desired result (e.g., AIs can “learn” to draw a human face by starting out with random splotches and iteratively improving what they come up with by comparison against photos). Sure, humans don’t know how the sausage is made, but it’s still just an algorithm directed at a specific goal and can do naught else.
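
    To make the “start with random splotches and iteratively improve” loop concrete, here is a minimal toy version: the candidates are bit-strings and the “photos” they are compared against are a fixed target string. Entirely illustrative; no real image-generating system works on anything this small.

    ```python
    # Generate, score, keep and mutate the winners, in miniature. The fixed TARGET
    # stands in for whatever external standard the candidates are scored against.
    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

    def fitness(candidate):
        # The 'comparison against photos' step: how many positions match the target.
        return sum(1 for a, b in zip(candidate, TARGET) if a == b)

    def mutate(candidate, rate=0.1):
        return [1 - bit if random.random() < rate else bit for bit in candidate]

    def evolve(pop_size=30, generations=100):
        population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            winners = population[: pop_size // 2]       # keep the winners...
            children = [mutate(random.choice(winners))  # ...and mutate them
                        for _ in range(pop_size - len(winners))]
            population = winners + children
        return max(population, key=fitness)

    best = evolve()
    print(best, fitness(best))   # usually at or very near the target after 100 generations
    ```

    In the evolved chess program described above, the scoring step would come from candidates playing one another rather than from a fixed target, but the loop has the same shape.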

    This is a world away from intelligence – much less “general intelligence” – and even in a thought experiment where an amazingly sophisticated machine or machines (of whatever size and composition) were let loose on an alien world it/they would not evolve or survive or reproduce unless programmed to do so, and could never exceed the limits of their initial parameters. There is always “hard coding” in such systems – AI machines cannot change their own programming except in science fiction, and even new machines created by the first machines could not be “conceived” outside of that framework. This would fundamentally limit any possible development, and there is no plausible “breakout” scenario that I am aware of.

    Moreover, no matter how complex it might be, such computers are not animate – they are fundamentally mechanical devices. The idea that at some level of sophistication or complexity some kind of intelligence or consciousness would arise is not supported by evidence. Any argument along the lines that unexpectedly good results might arise from the software would in my view be no more convincing than to argue that computers drawing fractals from simple equations are intelligent.

    In fact it’s rather the case so far that AI can often produce surprisingly stupid results – mistakes that really could not be made by humans – like Google Translate’s weird prediction of the Second Coming. This is despite the fact that a reasonable argument could be made that Google Translate is (in aggregate) more proficient at translation than any single human.

    To summarize, I have yet to hear any convincing argument that at some level of computational sophistication or complexity what we think of as consciousness or intelligence would arise (certainly not from Wolfram, anyway), and the notion that it might do so is in fact largely a sentimental notion based on defunct physics and a rigid adherence to atheistic and materialistic views that a priori exclude consciousness as a thing in itself. Computation ≠ intelligence.

  • I’m a bit surprised that this discussion has gone on so long with no reference to Roger Penrose’s proof that the mind cannot be an algorithm.

    I’m not surprised that no-one has presented that proof here even in outline: just writing its many Greek and mathematical symbols would be a challenge; its length is very ill-suited to a blog comment; above all, it is sufficiently complex that, while the basic idea is not too hard to grasp, the actual proof is no holiday job to master, let alone to confirm or disprove.

    I’m well aware – and can assure you that Roger is very well aware – that a thing is not proved just because any particular commenter (supportive or sceptical) cannot point out a specific flaw in it. (I’m also aware that the proof requires the axiom of choice – but as I’m unable to prove for sure that 2 plus 2 equals 4 without the axiom of choice, that isn’t worrying me too much.)

    I’m emphatically not saying any of the commenters above should go away and remind themselves of this area. Still less am I offering to help explain it myself. (I have an intellectually-demanding day job, and the stuff I used to do in Roger’s field is a good deal harder than it, so this first comment by me in this thread will likely also be my last.) I just note that a mathematical proof exists (unfalsified as yet, that I know of) demonstrating that the only minds that we all know are conscious cannot be algorithms, whereas computers today are and run algorithms.

  • Robbo

    possibly he knows some physics?

    Look up his bio. Wolfram is one of those one-of-a-kind people who has done some very special things. Whatever he has to say is worth thinking about on a deep level.

  • Nullius in Verba

    “The OED has as its first definition “the faculty of understanding; intellect”, derived from the Latin intellegere, for which the Oxford Latin Dictionary has “to grasp mentally, understand, realize”. The notion is that of a function of a conscious mind,…”

    Thanks! OK, this seems to be a variant on the “awareness” requirement, but a different aspect of it to the qualia issue I was discussing earlier. I think this is about what semantic meaning is, and whether a computer can know about not just the data but what it means.

    This is an extremely difficult concept to get one’s head around, but I think I can give an answer. Data has a meaning when the possible states and the rules by which they can be transformed can be matched with the possible states of some external physical system and the way it behaves. Pebbles on an abacus can represent sheep in a field because the number of pebbles can be matched with the number of sheep, and the way pebbles can be added and subtracted is the same as the way sheep can be added or subtracted. Symbolically, 3 pebbles *means* 3 sheep, because the pebbles can be used to simulate what will happen if you take 3 sheep and add 3 more. Similarly, the written symbol “3” can mean 3 sheep only if there is some physical system that will act on written symbols in a way that matches up with the behaviour of sheep.

    Thus, ‘meaning’ can be defined in physical terms. It’s a relationship between different physical systems whereby one system can simulate the possible states and behaviour of the other.

    And once it can be defined in terms of physics, there’s a possibility that computers can do it.
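
    A toy rendering of that definition (purely illustrative; nothing here is a claim about real cognition): one system “means” another just insofar as the same sequence of operations leaves the two in matching states.

    ```python
    # Pebbles as a model of a flock: the pebble count 'means' the sheep because the
    # correspondence between them survives the matched add/remove operations.
    class Pebbles:
        def __init__(self, n):
            self.n = n
        def add(self, k):
            self.n += k
        def remove(self, k):
            self.n -= k

    class Flock:
        def __init__(self, names):
            self.sheep = list(names)
        def add(self, newcomers):
            self.sheep.extend(newcomers)
        def remove(self, leavers):
            for s in leavers:
                self.sheep.remove(s)

    abacus = Pebbles(3)
    field = Flock(["flossy", "woolly", "shaun"])

    abacus.add(3);    field.add(["a", "b", "c"])
    abacus.remove(2); field.remove(["flossy", "woolly"])

    # The mapping holds after any such sequence of matched operations, which is
    # the sense in which the pebbles can be used to simulate the sheep.
    print(abacus.n == len(field.sheep))   # True
    ```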

    I’m not going to try to write the 100,000 word essay here I’d need to try to explain all the ramifications of that, but maybe others would find it interesting to think about.

    “Proponents of strong AI tend to dismiss consciousness as somehow unreal or as an emergent property of matter that has been arranged in a particular way, based it seems on a rather old-fashioned mechanistic/physicalist view which posits that the universe is made of matter.”

    What do you think it’s made of?

    “However, if consciousness is a property of matter, it’s not clear how it might emerge in some places and not others, and it cannot be proven that it is not (in fact) everywhere.”

    Yep. Panpsychism appears to be a necessary consequence.

    “One ends up (like Wolfram) making arbitrary and rather weak claims about necessary complexity or somesuch.”

    I agree Wolfram’s claims are vague and arbitrary, like those of most people who attempt to answer this question. Unfortunately, proponents of *human* intelligence have exactly the same problem explaining how it all works. All we have got is the observation ‘from the inside’ that each of us has, and a symmetry argument by which we assume other humans have it, but a big question mark over whether symmetry can be extended to anything else. Since we’ve got no objective way of detecting or measuring it even in other humans, we can’t really say anything at all about whether computers have it. The inability to imagine that they do, or how they do, is not evidence that they don’t; only evidence of a lack of imagination.

    Personally, I think my ‘simulation’ hypothesis takes away a lot of the arbitrariness, at least so far as semantic meaning goes. Qualia are perhaps a much knottier problem!

    “Moreover, no matter how complex it might be, such computers are not animate – they are fundamentally mechanical devices.”

    Yes. So are people.

    There is no magical ingredient in the atoms that make up an organism. They’re just pushing and pulling on one another according to the same laws of physics that apply everywhere. If Cartesian mind-body dualism was true, there would have to be something in our bodies that got pushed around by forces not arising from matter, which would in principle be detectable. They’ve looked, they know pretty much how neurones work, and no such extra force or influence has ever been found.

    “I’m a bit surprised that this discussion has gone on so long with no reference to Roger Penrose’ proof that the mind cannot be an algorithm.”

    I certainly enjoyed Penrose’s book, but unfortunately his main argument doesn’t work, because the Godel argument applies just as easily to human minds.

    Godel’s theorem works by constructing an arithmetical expression that is equivalent to saying “This statement cannot ever be proved true by formal proof system X.” If X could prove it, then it would be false, which would mean the formal proof system was proving false things and hence was broken. Hence either X cannot prove this true statement true, or X is inconsistent.

    Godel’s cleverness was in wrapping this up as a statement of arithmetic, showing that no consistent proof system complex enough to implement basic arithmetic could prove every true arithmetical statement. But the trick it’s based on is quite simple, and can be applied to anything.
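
    In the usual textbook notation, writing Prov_X for X’s provability predicate and ⌜G⌝ for the Gödel number of the sentence G, the construction yields a sentence that asserts its own unprovability:

    ```latex
    % Schematic form of the Goedel sentence for a consistent proof system X:
    % via the diagonal lemma, G is built so that X proves the equivalence
    %   "G holds"  if and only if  "G is not provable in X".
    G \;\leftrightarrow\; \neg\,\mathrm{Prov}_X\!\bigl(\ulcorner G \urcorner\bigr)
    ```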

    Thus, I can write the statement “This statement cannot ever be proved true by humans.” If humans could prove it true, then it would be false, and the human method for generating proofs invalid. So it’s true, but no human can consistently prove it. But if a computer produced the foregoing argument, there’s no obstruction to its validity. Computers can do things humans can’t.

    And there are plenty more bizarre examples – there are theorems that can’t be proved on a Tuesday, that can’t be proved by males, that can’t be proved by written proofs containing the letter ‘e’, and so on.

    It’s highly significant for mathematics, but perhaps not so much for the philosophy of mind. All it really tells us is that humans are fallible, and can’t prove every true statement true, which we already knew and is no big surprise.

    Interestingly, there’s a classic paradox that’s secretly based on the same problem.

    A judge tells a condemned prisoner that he will be hanged at noon on one weekday in the following week but that the execution will be a surprise to the prisoner. He will not know the day of the hanging until the executioner knocks on his cell door at noon that day.

    Having reflected on his sentence, the prisoner draws the conclusion that he will escape from the hanging. His reasoning is in several parts. He begins by concluding that the “surprise hanging” can’t be on Friday, as if he hasn’t been hanged by Thursday, there is only one day left – and so it won’t be a surprise if he’s hanged on Friday. Since the judge’s sentence stipulated that the hanging would be a surprise to him, he concludes it cannot occur on Friday.

    He then reasons that the surprise hanging cannot be on Thursday either, because Friday has already been eliminated and if he hasn’t been hanged by Wednesday noon, the hanging must occur on Thursday, making a Thursday hanging not a surprise either. By similar reasoning he concludes that the hanging can also not occur on Wednesday, Tuesday or Monday. Joyfully he retires to his cell confident that the hanging will not occur at all.

    The next week, the executioner knocks on the prisoner’s door at noon on Wednesday — which, despite all the above, was an utter surprise to him. Everything the judge said came true.

    The resolution of the paradox is that the sentence passed by the judge is a ‘Godel expression’ targeted at the prisoner, specifying that the prisoner cannot evaluate it. The executioner, however, can!

    And who could ever forget the theory of Bistromathics?!

    Bistromathics itself is simply a revolutionary new way of understanding the behavior of numbers. Just as Einstein observed that space was not an absolute but depended on the observer’s movement in space and that time was not an absolute, but depended on the observer’s movement in time, so it is now realized that numbers are not absolute, but depend on the observer’s movement in restaurants.

    The first non-absolute number is the number of people for whom the table is reserved. This will vary during the course of the first three telephone calls to the restaurant, and then bear no apparent relation to the number of people who actually turn up, or to the number of people who subsequently join them after the show/match/party/gig or to the number of people who leave when they see who else has turned up.

    The second non-absolute number is the given time of arrival, which is now known to be one of those most bizarre of mathematical concepts, a recipriversexcluson, a number whose existence can only be defined as being anything other than itself. In other words, the given time of arrival is the one moment of time at which it is impossible that any member of the party will arrive. Recipriversexclusons now play a vital part in many branches of math, including statistics and accountancy and also form the basic equations used to engineer the Somebody Else’s Problem field. …

    Sound familiar? 🙂

  • Snorri Godhi

    Mr Ed: hopefully you have a passing familiarity with the von Neumann architecture, maybe even with Turing machines. In this case, you might want to find out about Newell+Simon’s general-problem-solving paradigm. (I would recommend the book that i read, but i read it before starting to take note of the books that i read.) You will find, as i did, that it resembles what goes on in the human mind when we play chess, in as far as introspection is a guide.

    Actually, i think that, before Turing+von Neumann, if not before Newell+Simon, it was pretty much impossible for anyone to even imagine how a physical system could make anything that one can reasonably call a “choice”. Which is why, even though Locke, Anthony Collins, and Hume followed Hobbes to some extent, i think that they made radical progress by liberating their theories of mind from the constraint of **physical** determinism: Hobbes’s computational theory of mind was way too premature.

    BTW i never tried my hand at chess puzzles, but it strikes me that one could (and probably one does) start with a solution and build a puzzle around it; in which case it is no wonder that a standard chess-playing program has trouble: such programs are not designed to solve puzzles with a solution which is obtained by going through some pretty scary configurations.

  • Snorri Godhi

    Incidentally, i find Ian’s core argument unconvincing; but please note that i am in no way a “proponent of strong AI”. I believe that consciousness exists (mine, at least), but cannot say whether it plays any role in human intelligence. What i am confident of, is that most of what the human mind does, can be done by machines. In fact, it is a necessary working hypothesis in neuroscience that ALL that the human mind does, is done by a machine; that is, the human brain.

    In any case, whether problem-solving is intelligence or not, is irrelevant to the OP: Wolfram’s argument seems to be that intelligence can be anything that is “computationally as sophisticated as a brain”. My argument is that computational sophistication is not intelligence unless it is capable of problem-solving: that is a MINIMAL requirement, and yet it already rules out Wolfram’s examples of chaotic fluid flow and magnetospheres of pulsars; there is no need to bring in consciousness.

    WRT Gödel’s 1st incompleteness theorem: i very much like NiV’s very clever example:

    This statement cannot ever be proved true by humans.

    Note, however, that Penrose has moved on from Gödel’s 1st theorem to the axiom of choice as the basis of his argument; or so says Niall.

  • Mr Ed

    Snorri G.

    Thanks for the tips. I am not sure if there is a ‘custom’ that a chess puzzle (outside of Fairy Chess variants) is required to come from a position that could arise in normal play (but simply wouldn’t, barring some idiotic play), rather than something contrived, and the whole point of those puzzles I posted is that they fox engines (albeit from long before the time when engines were serious players).

    On the subject of ‘free will’ rather than ‘intelligence’, I have posted before on another thread this video by the mathematician Edward Frenkel, talking about vectors as if they have an existence, and whether intelligence can be boiled down to numbers (and therefore to computers). Rather long-winded, but he has a ‘point’.

  • Ian

    Nullius in Verba,

    Many thanks for your considered reply, to which I shall respond tomorrow. Ordinary life has temporarily supervened.

  • Snorri Godhi

    Mr Ed: soon after my 1st comment in this thread, i read Harry Frankfurt’s celebrated essay: Freedom of the Will and the Concept of a Person.

    As a result, my concept of “free will” has changed. When i wrote my comment, i thought:
    I use free will when choosing a chess move; a chess-playing program chooses chess moves pretty much like i do; therefore chess-playing programs have a rudimentary form of free will.
    By the Frankfurt* concept of free will, however, we do not need free will to choose chess moves.

    This does not make any difference to my political position, though: i used to think that i have enough free will AND consciousness, to make freedom from coercion worth having, while chess-playing programs do not have (enough of) either; and i still think so.

    * not to be confused with the Frankfurt School.

  • Nullius in Verba

    “soon after my 1st comment in this thread, i read Harry Frankfurt’s celebrated essay: Freedom of the Will and the Concept of a Person.”

    It’s an interesting essay.

    My own view (for whatever that’s worth) is that the concept of free will is based on a particular model of minds. An agent generates a list of options for actions to take, and chooses between them based on predicted consequences (probabilities, costs and benefits), and moral constraints. People are interested in them because they want to know that other people’s choice functions are socially acceptable. Society reduces inter-member conflict within its alliance by negotiating a shared set of behavioural constraints, indoctrinating members with a moral code to impose voluntary compliance, and adds rewards and penalties to the ‘consequences’ input where moral constraints alone are not sufficient. It then monitors members to check their internal rules are all there and working correctly.

    (We do also apply some breakdown of the mechanism in mitigation. If someone’s choice is externally determined by horrific threatened penalties, or if they are brought up in a society with a different moral code and no opportunity to choose, or if they are mentally disabled/damaged, then moral responsibility is partially assigned to those external influences. This is something like Frankfurt’s ‘second order desires’, in which someone also has to make a free choice from a list of options about what choice function/moral code to apply. But it’s still the same basic model being applied at the meta-level.)

    But ‘free will’ is about that step of generating a list of options and choosing. If there is only ever one option on the list, then there is no choice and no free will. You can’t deduce anything about what rules someone is using to make their choice if there was only ever one possible outcome.

    Regarding chess programs, whether there is only one option or many depends on at what level you look at the system. At a high level, there is clearly an explicit list of options generated and a process of choice between them. Look closer, and the choice can be seen to be a foregone conclusion, as the estimated costs, benefits, and constraints used to pick are all pre-determined. So it’s really a question of whether you regard the higher-level view – which is simply an approximate predictive abstract mental model of the chess program – as a real thing that can have properties of its own, independent of how it’s implemented. Are ‘semantic meanings’ real? Do their properties have anything true to say about the real world?

    It’s a similar question to whether the three pebbles can in any sense symbolically ‘be’ the three sheep. (Or three goats, or three acres …, or all of these and more at once.) Do abstract numbers and symbolic, abstract relationships ‘physically exist’? The stones and the sheep are distinct objects (themselves made up of >10^23 sub-objects…). There are no special forces or distinctive influences between them. The only connection between them is the higher-level concept of their number, and their common properties under addition and subtraction.

    To the extent that three pebbles can ‘be’ the abstract number 3, so a brain or chess program can ‘be’ an agent making a choice between several options. It requires you to ignore lots of irrelevant fine details of how the systems are implemented. All it requires is that the model is useful for predicting and manipulating the actions of the system being observed. The abstraction “free will” is like the abstraction “3”. Free will can have real, physical properties and relationships in the physical world, like the number 3 can. You can make accurate predictions about the world using them – there must be something about the real world that makes that possible. They are higher-level abstractions about many parts of the world all at once, and exist independently of the lower-level concrete objects and mechanisms that make them up.

    It’s like in Plato’s “Theory of Forms” – the gross matter of which things are made is a distorted shadow on a lumpy cave wall of the abstractions it represents symbolically. Plato even considered the abstractions more real than the matter. I’d not go that far – I’d say they were equally real. But in a sense Plato’s cave is the wrong way round. We only ever have mental perceptual access to the approximate models we build about the world – it’s the true nature of matter that is hidden from our view by the gates of perception.

    Maybe. I don’t think anyone really knows.

  • Ian

    Nullius in Verba,

    Data has a meaning when the possible states and the rules by which they can be transformed can be matched with the possible states of some external physical system and the way it behaves. […] Thus, ‘meaning’ can be defined in physical terms. It’s a relationship between different physical systems whereby one system can simulate the possible states and behaviour of the other.

    I’m afraid I’d have to disagree. I don’t think, in your example, that either the abacus or the sheep have any inherent meaning, but rather that we give these things meaning by our faculty of intelligence or understanding, which I believe to be a function of consciousness. Clearly an abacus has no concept of itself or its function, or its relation to the sheep; but nor am I persuaded that, by increasing the complexity of the machine, at some point the device would begin to have even a rudimentary understanding of its rôle in counting sheep. It’s not even clear to me that an abacus or any more complicated device could in fact exist, absent a consciousness to observe it and “give it meaning”, and without going all quantum-y I think this is not a trivial point which potentially applies to the whole universe — at least that’s a possible line of argument, though my physics is pretty sketchy these days and I’m not up to date on what the arguments against effects like this on the macro level are.

    What do you think [consciousness is] made of?

    I think it’s the other way around: that consciousness or perhaps life is a fundamental building block (in fact, the fundamental building block) of the universe, so it can’t be explained in terms of other things. Nor do I believe it can be penetrated fully by a conscious mind, being the substrate for all observation — kinda like fish not knowing what water is. Without wishing to get all mystical, I think some subjective study is possible by employing certain practices or to some extent with psychoactive drugs. And yes, I believe in psi and all of that stuff, which I think offers clues that consciousness is a non-local phenomenon, but I imagine if I start on that I’ll get thoroughly roasted 😉

    About the Penrose stuff, I was discussing this online some years back and I made much the same argument as you have, though the counter-argument was made that the G-sentence only applies to formal logic, that humans don’t necessarily think or have ideas or inspiration in this way, and that therefore this isn’t a problem for the theory. I hate to admit it, because the guy making the argument is a complete dick, but I found this convincing. And obviously I think Penrose is on the right track, but the whole thing about microtubules was rather speculative.

  • Snorri Godhi

    Nullius in Verba: your reply was welcome, but it is a bit unfocused for my taste: i like to be sure of what i agree or disagree with. So let me extract from what you write, something that i can confidently agree with:

    My own view (for whatever that’s worth) is that the concept of free will is based on a particular model of minds. An agent generates a list of options for actions to take, and chooses between them based on predicted consequences (probabilities, costs and benefits), and moral constraints.

    My only objection is that, rather than “a particular model”, it seems to me the only possible basic model of choice; “basic” in the sense that other models must include its 3 basic components:
    * a component that takes as input the current model of the environment and generates a shortlist of possible courses of action; or more generally, a shortlist of sub-goals;
    * working memory in which to store the shortlist;
    * a component which ranks the items in the shortlist according to a pre-defined value system, or a pre-defined goal.

    Actually, a full ranking is not necessary: it is sufficient to find the single “best” item in the shortlist.
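
    For what it’s worth, here is a minimal sketch of those three components in code. It is entirely illustrative: the actions and the value numbers are made up, and nothing is claimed about how brains (or serious AI systems) implement any of this.

    ```python
    # (1) an option generator working from the current model of the environment,
    # (2) working memory holding the shortlist, (3) a ranker applying a
    # pre-defined value system; no full ranking, just the single best item.
    def generate_options(environment):
        # Component 1: propose candidate actions given the current situation.
        return environment["available_actions"]

    def value(action, values):
        # The pre-defined value system used by component 3.
        return values.get(action, 0)

    def choose(environment, values):
        shortlist = generate_options(environment)   # component 1
        working_memory = list(shortlist)            # component 2: store the shortlist
        # Component 3: find the single "best" item rather than ranking them all.
        return max(working_memory, key=lambda a: value(a, values))

    env = {"available_actions": ["advance pawn", "castle", "trade queens"]}
    prefs = {"advance pawn": 0.2, "castle": 0.7, "trade queens": 0.4}
    print(choose(env, prefs))   # -> "castle"
    ```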

    But ‘free will’ is about that step of generating a list of options and choosing. If there is only ever one option on the list, then there is no choice and no free will.

    This might seem like quibbling, but free will, as a faculty, still exists when there is only 1 option: it’s just that it is not being used.

    Now an interesting fact about the 3-component model of “free will” on which perhaps we can provisionally agree, is that it satisfies Michael Huemer’s definition:

    Free will: A person has free will if and only if he sometimes is in situations in which he can choose between two or more available actions, and which action he performs is determined by his choice.

    And yet, Huemer goes on in the following section to argue that “It seems that free will is incompatible with the law of causality.” That seems blatantly absurd to me*, since we know that computers can make choices, if programmed to do so. Now Huemer and other people might argue that what computers make are not “””real””” choices, but that is a rather obvious shifting of ground: there is nothing in Huemer’s definition that excludes the sort of choices made by computers; and it would be difficult to define “choice” in such a way that it excludes computer choices, but not human choices (unless we adopt Frankfurt’s definition of free will).

    I wish to make it clear, however, that all of the above is something that i could have written last monday. After reading Frankfurt’s essay, i now realize that there is at least 1 other consistent definition of “free will”. It does not seem to be captured by your parenthetical paragraph, however.

    * even if we neglect the fact that there is no such thing as “the law of causality”.

  • Nullius in Verba

    “My only objection is that, rather than “a particular model”, it seems to me the only possible basic model of choice”

    There are at least two mentioned here – the high level ‘choose from a list of choices’ and the low level ‘so-called choice is predetermined by your choice function; the universe is deterministic’. It’s only the latter model that is said to be incompatible with free will, the former is said to be an illusion resulting from treating the ‘chooser’ as some sort of a non-deterministic black box with ‘agency’.

    “This might seem like quibbling, but free will, as a faculty, still exists when there is only 1 option: it’s just that it is not being used.”

    It’s an interesting philosophical question as to whether a capacity that is *never* used can be said to exist – in a deterministic universe, are counterfactuals physically meaningful, or just another case of useful models?

    But in this case I meant the exercise of free will – the thing the faculty of free will gives you the ability to do. Please, consider my terminology amended accordingly.

    “That seems blatantly absurd to me*, since we know that computers can make choices, if programmed to do so.”

    It depends whether you take the high level or low level view of computers. Some would say that the action taken by a computer program is determined by its inputs. Run the same software on the same inputs, and it will always do the same thing. The choice is pre-programmed. Although there is a list of so-called ‘options’ generated by the process, all but one of them never have any chance of being picked.

    “After reading Frankfurt’s essay, i now realize that there is at least 1 other consistent definition of “free will”. It does not seem to be captured by your parenthetical paragraph, however.”

    My main issue with Frankfurt’s essay was that at first glance there seems to be no obvious difference between what he calls first order and second order desires, and no discussion offered to indicate why he thinks animals and computers don’t have the latter. There are multiple factors that go into the desirability assessment of an option. The drugs are desirable because you can expect a pleasurable sensation. The drugs are not desirable because they have many bad effects on your finances, health, safety from police and drug-dealers, and relationships. Both are first order desires. A plan for achieving the second one could involve measures taken to reduce the appeal of the first. But it’s basically delayed gratification – seeking a long-term pleasure by denying yourself a short-term one.

    But I took Frankfurt’s point to be about including not just the external world in your model, but also modelling your own choice function, to understand and manipulate how your short-term choices could affect your long-term gains. The distinctive issue is recursion – that the model models itself. If you model your own craving for drugs, then the problem solver can set a sub-goal of reducing or removing that craving to achieve the longer-term goal. But it’s something that any animal capable of a theory of mind should be able to do. If you can model the behaviour of other animals, it’s just as easy to model your own.

    What I meant by my parenthetical remark is just that we do sometimes model people’s moral codes and cost/benefit calculations as being deterministic consequences of external causes, rather than treating it purely as a black box with agency. We’re treating it, at least partly, as a lower-level deterministic decision engine. But it was a minor sidenote, not worth expanding on.

    “even if we neglect the fact that there is no such thing as “the law of causality”.”

    I assume he means by that the deterministic laws of physics. And I guess you might mean by “no such thing” that quantum physics is often considered not to be deterministic. (Personally, I think it is, but it’s another conversation entirely!) But I’m not sure if that’s what you meant.

  • Snorri Godhi

    Hi N.i.V.: I am a bit surprised that you are still following this. Maybe you won’t see this comment, but it will be helpful to me to write it.

    Some would say that the action taken by a computer program is determined by its inputs. Run the same software on the same inputs, and it will always do the same thing. The choice is pre-programmed. Although there is a list of so-called ‘options’ generated by the process, all but one of them never have any chance of being picked.

    That is correct … as long as the last of the 3 components that i described, is deterministic. I wrote:

    * a component which ranks the items in the shortlist according to a pre-defined value system, or a pre-defined goal.

    The implication was that the ranking is deterministic. Some randomness could be added, and it can be useful to avoid Buridan’s-ass type problems, to add unpredictability when dealing with an intelligent opponent, and to do some exploration when the ranking is uncertain. Such randomness, however, can hardly be said to add the sort of freedom that we humans find valuable in itself.

    But the main point is: as long as people talk about freedom being a matter of making choices, i submit that they are implicitly endorsing the 3-component model unless they specify a different model; and, by endorsing the 3-component model, they are implicitly accepting (a large amount of) determinism. Therefore, i don’t see how they can turn around and say that determinism precludes freedom of choice.

    As for Frankfurt’s model: i like it because
    (A) It provides a consistent model of inner conflict: people have been thinking about it since at least Augustine, but have commingled the debate with issues of moral responsibility and determinism vs indeterminism. Frankfurt explicitly says that his model does not address those issues.
    (B) It is sort of a compromise in the old British debate on liberty and necessity: the “libertarians” claimed that, to be free, one must be able to will or not to will; while the “necessitarians” claimed that that would lead to an infinite regress, since, to be free, one must also be able to will to will, or not to will to will, and so on. Frankfurt says that there is normally no reason to regress more than 1 step; and, unlike some (not all) of the old “libertarians”, his theory is fully compatible with determinism.

    Finally, i wrote:

    even if we neglect the fact that there is no such thing as “the law of causality”.

    You replied:

    I assume [Huemer] means by that the deterministic laws of physics. And I guess you might mean by “no such thing” that quantum physics is often considered not to be deterministic.

    That too, but more importantly, i prefer “determinism” to “causality”, because talking of “causes” and “effects” is pointless in physics, even in deterministic Newtonian physics. If one can do physics w/o talking about “causes”, then there is no law of causality!

  • Nullius in Verba

    “Hi N.i.V.: I am a bit surprised that you are still following this. Maybe you won’t see this comment, but it will be helpful to me to write it.”

    I missed it for a day, but then noticed the number of comments had changed. I usually check for a day or so after a thread goes quiet – but it depends how bored I am with whatever else I’m doing… 🙂

    “But the main point is: as long as people talk about freedom being a matter of making choices, i submit that they are implicitly endorsing the 3-component model unless they specify a different model; and, by endorsing the 3-component model, they are implicitly accepting (a large amount of) determinism. Therefore, i don’t see how they can turn around and say that determinism precludes freedom of choice.”

    Because the high level model doesn’t describe/include the deterministic mechanism by which the choice is made. It’s treated as a black box with ‘agency’; as a causal source. It’s modeled as being influenced by those costs/benefits, but not in sufficient detail to predict their outcome precisely. The uncertainty about the deterministic choice function gets reinterpreted as some unspecified form of non-determinism.

    It’s only when you stop modelling it as a black box agent and instead model it as a deterministic (or random) choice mechanism that the inconsistency is noted.

    “That too, but more importantly, i prefer “determinism” to “causality”, because talking of “causes” and “effects” is pointless in physics, even in deterministic Newtonian physics. If one can do physics w/o talking about “causes”, then there is no law of causality!”

    Again, ’cause and effect’ are a property of our mental models, not of the mechanistic description of physics. We build models of bits of physics that can be plugged together to predict chains of events. The model has inputs – the initial state and boundary conditions – and predicts an outcome for each combination. Our concept of ‘causality’ is a property of these models, and it doesn’t apply to deterministic Newtonian physics because (while useful) these models are ‘wrong’. (Or more precisely, approximate.)

    Much of the debate on causality is clouded by this confusion between physics and our mental models of physics. Causality is part of the same mental model as free will, in that agency is considered a ‘causal source’: an uncaused event. Human choice is commonly at the start of causal chains because that’s exactly what the causal models are for: to predict the outcomes of our choices so that we can pick the one that leads to the most desired outcome. (Or of other people’s choices, so we can figure out which ones we need to persuade them to make.)

    It is said that all models are wrong, but some are useful. I’d propose that a model cannot be useful unless it captures something that is to some extent true and real about the universe. (Like the abstract concept of ‘number’ does.) ‘Causality’ may be only a model, a distorted shadow on the cave wall, but it’s pointing to something real.

  • Snorri Godhi

    NiV:

    It’s only when you stop modelling it as a black box agent and instead model it as a deterministic (or random) choice mechanism that the inconsistency is noted.

    I did not pay much attention to your earlier comments about the black-box level vs the mechanism level, but i feel uncomfortable with this: even if you think of the choice process as a black box, you still OUGHT to wonder whether it is deterministic or random. It’s only if you fail to wonder, that you can claim that it is neither random nor nonrandom, without worrying about the law of excluded middle.

    Incidentally: most of the liberum arbitrium/free will debate has taken place before there was a quantitative theory of probability. Does that mean that people were unable to see that determinism and randomness are the only alternatives?
    (See also my earlier comment about people being unable to think of an algorithmic model of choice before Turing and von Neumann.)

    As for causality: i only complained about a universal “law of causality”. I do not deny that it makes sense to talk of causality in systems with well-defined inputs and outputs, such as transistors, neurons, neural networks, and brains!

  • Nullius in Verba

    “Incidentally: most of the liberum arbitrium/free will debate has taken place before there was a quantitative theory of probability. Does that mean that people were unable to see that determinism and randomness are the only alternatives?”

    They understood the point, but they argued that this only proved the existence of the divinely ordained spiritual realm – as in Cartesian Dualism. Souls provided a causal source, a ‘prime mover’ independent of the mundane laws of physics. I once spent several months arguing it out with a bunch of theologians who were big fans of Thomas Aquinas (who followed Aristotle in this). There’s some fascinating stuff in the classics on ‘free will’.

    “As for causality: i only complained about a universal “law of causality”. I do not deny that it makes sense to talk of causality in systems with well-defined inputs and outputs, such as transistors, neurons, neural networks, and brains!”

    Quite so. Causality depends for its coherence on applying only to a subset of the universe with boundary conditions. Lots of laws based on boundary conditions break down when you try to extend them to the entire universe. The second law of thermodynamics is an obvious example. Not all ‘laws’ are universal.

    But I assume it was indeed causality as applied to brains that Huemer was talking about.

  • Snorri Godhi

    Nullius in Verba:

    They understood the point, but they argued that this only proved the existence of the divinely ordained spiritual realm – as in Cartesian Dualism.

    A reflection on Cartesian dualism: perhaps, in the old days, for most (not all) people the dichotomy was not between determinism and randomness, but between physical determinism and agency. Good or bad luck was attributed, not to randomness (nor deterministic unpredictability), but to the caprice of the gods. Homer is full of that.
    (Although even deterministic phenomena, such as the Sun’s (apparent) motion around the Earth, used to be attributed to the agency of gods.)

    Descartes provided a simple+coherent framework for that dichotomy.

    I once spent several months arguing it out with a bunch of theologians who were big fans of Thomas Aquinas (who followed Aristotle in this). There’s some fascinating stuff in the classics on ‘free will’.

    I know, I have been looking at the Internet Encyclopedia of Philosophy (and to a lesser extent the Stanford Encyclopedia of Philosophy) on this issue, over the last few months. (Call it the consolation of philosophy.) It seems that Peter Abelard anticipated Harry Frankfurt!

    I also scratched the surface of the British debate on liberty vs necessity. It seems to me (although this is very premature for me to say) that there were not 2, but at least 3 camps: physical necessity (Hobbes and, on the Continent, Spinoza), moral necessity, and non-necessity.

    But I assume it was indeed causality as applied to brains that Huemer was talking about.

    Huemer defined the law of causality as follows:

    3. The law of causality: The thesis that every event has a sufficient cause.
    4. Sufficient cause: A sufficient cause of an effect is a cause that, if it occurs, renders it impossible that the effect fail to occur. I.e., if the cause occurs, the effect must occur. (Distinguished from a necessary cause, in which the reverse is true: i.e., the effect cannot occur unless the necessary cause occurs.)

    On the one hand: this is not a statement about brains, but about the entire Universe.
    OTOH: whether or not it is true of the entire Universe, makes no difference in the context of Huemer’s essay, so i don’t hold it against him.

  • Nullius in Verba

    “Huemer defined the law of causality as follows: 3. The law of causality: The thesis that every event has a sufficient cause.”

    Really? Then I agree, that’s nonsense!