We are developing the social individualist meta-context for the future. From the very serious to the extremely frivolous... let's see what is on the mind of the Samizdata people.

Samizdata, derived from Samizdat /n. - a system of clandestine publication of banned literature in the USSR [Russ.,= self-publishing house]

Samizdata quote of the day

Allison thinks the [bank risk] models are doomed from the get-go because they are based on fundamentally incorrect notions. “They always assume normal curves, and they try to manage things to a 99 percent probability. That means there’s only a 1 percent probability that certain bad things can happen. Well, there’s an interesting thing with a 1 percent probability: Give it long enough, and it becomes certain.”

– Former BB&T chief executive officer John Allison, quoted on page 84 of I Am John Galt, by Donald L Luskin and Andrew Greta. (The chapter on Paul Krugman is gruesome reading.)

I heard Allison speak in London about a year ago, and he’s very good.

15 comments to Samizdata quote of the day

  • This was one of the points brought out during the Long Term Capital Management affair: they were certain of their certainty because they didn’t backtest their model with enough historical data. If they had, they would have realised their error.

    Real markets do not follow pure statistical models, although they may appear to 99% of the time; the irrationality of both fear and exuberance can push markets into behaviours which are statistically improbable.

  • PeterT

    Technically true (the chance of a 1% p.a. event happening once over 100 years is 63%, making some assumptions; a sketch of the arithmetic is at the end of this comment) and it sounds good, but I think it misses the real issue, which is the fundamentally flawed nature of the current banking model.

    You can’t banish risk from business but if you have to go into such contortions to manage the risk of your business as bankers do then you might have second thoughts about the wisdom of what you are doing.

    There is an interview with John Allison on EconTalk from some time ago.

    http://www.econtalk.org/archives/_featuring/john_allison/
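
    And here is the promised sketch of that arithmetic, in Python, assuming independent years and a constant 1% annual probability (both assumptions, not facts about real markets):

    p_annual = 0.01
    years = 100
    # chance of at least one event = 1 minus the chance of zero events
    p_at_least_once = 1 - (1 - p_annual) ** years
    print(f"P(at least one event in {years} years) = {p_at_least_once:.1%}")  # ~63.4%
    # The probability never reaches 1 exactly, but it keeps climbing:
    print(f"After 500 years: {1 - (1 - p_annual) ** 500:.1%}")  # ~99.3%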

  • Fair enough, but we’re not talking about just a loss here, but your firm being completely wiped out, à la Lehman Bros.

    Equally, that potentially catastrophic event is not cumulative: it’s as likely to happen in year 6 as in year 106. Long Term Capital Management started in 1994 and went bust in 1998, after losing $4.6 billion. Although they continually made profits beforehand, the risks that they were taking were far greater than their models led them to believe.

    LTCM’s strategies were compared to “picking up nickels in front of a bulldozer” (a contrast with the market-efficiency aphorism that there are no $100 bills lying on the street, as someone else has already picked them up): a likely small gain balanced against a small chance of a large loss, like the payout from selling an out-of-the-money naked call option.
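
    A toy simulation of that payoff profile (the numbers are illustrative assumptions, not LTCM’s actual book): gain 1 unit in 99% of years, lose 200 units in the other 1%, starting with 50 units of capital.

    import random

    random.seed(42)
    trials, horizon = 100_000, 10
    ruined = 0
    for _ in range(trials):
        capital = 50
        for _ in range(horizon):
            # one "nickel" in a good year, the bulldozer in a bad one
            capital += 1 if random.random() > 0.01 else -200
            if capital <= 0:
                ruined += 1
                break
    print(f"wiped out within {horizon} years in {ruined / trials:.1%} of runs")  # ~9.6%
    # Expected annual P&L is 0.99*1 - 0.01*200 = -1.01: a losing bet overall,
    # yet most runs look steadily profitable right up until the bulldozer arrives.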

  • SC

    >Technically true

    Well, no, it’s technically false. Even if we’re talking frequency-type probabilities like dice throws or coin tosses, it’s not technically true, but the probability does get closer to one the longer you go on.

    And if the probability is not of a numerical nature, well, then in many such cases there is no increase in probability over time.

    (However, that’s all largely irrelevant to the point he’s making.)

  • Laird

    SC beat me to that comment.

    I understand the general point Allison is getting at, and don’t disagree, but (from the small portion quoted, anyway) I think he’s missing the real problems. These are: (1) it’s questionable whether the statistical probability of an event occurring is (or even can be) properly calculated; (2) when the “statistically unlikely” event does occur the losses can be catastrophic; and (3) these sorts of “outlier” events are generally viewed in isolation, whereas in reality they tend both to cluster (an economic downturn reduces property values across the board and thus imperils all mortgage loans simultaneously) and to be cumulative (buying your way out of a losing position tends to push all values down, including those on your other positions, especially when everyone else is in the same predicament). I’m sure Allison knows this, but it doesn’t come out in the quote. (A rough illustration of the clustering point is sketched at the end of this comment.)

    That Luskin & Greta book looks interesting. It’s going on my list.
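
    The clustering point, made concrete (toy numbers, all assumed): 1,000 mortgages, each defaulting 5% of the time on average. Independent defaults bunch tightly around 50 per portfolio; defaults driven by a shared downturn have the same average but a far fatter tail.

    import random

    random.seed(1)
    n_loans, trials = 1000, 2000

    def worst_portfolio(correlated):
        worst = 0
        for _ in range(trials):
            if correlated:
                # A downturn hits 10% of the time and triples the default rate;
                # calm years see defaults below the base rate (mean is still ~5%).
                p = 0.15 if random.random() < 0.10 else 0.0389
            else:
                p = 0.05
            defaults = sum(random.random() < p for _ in range(n_loans))
            worst = max(worst, defaults)
        return worst

    print("worst case, independent defaults:", worst_portfolio(False))  # roughly 70-80
    print("worst case, correlated defaults: ", worst_portfolio(True))   # roughly 180+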

  • PeterT

    Yes, well, that is what I meant by ‘technically’. You’re not an actuary by any chance, SC? 🙂

  • RRS

    While doing some research some time ago in connection with the construction of algorithms, I encountered the concept of Negative Probability.

    Negative Probability should not be confused with being the “reverse” of general probability.

  • Ben

    @PeterT, to be precise, that 63% is the probability of it happening *at least* once (1 − 0.99^100 ≈ 0.634).

    The *expectation* is that it will happen *on average* once in 100 years. It may happen zero, one, two, three, or more times.
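
    To make the distinction concrete, under the same independence assumptions as above (exact binomial probabilities, no simulation needed): the mean count over 100 years is exactly one, but the count is spread out.

    from math import comb

    p, n = 0.01, 100
    for k in range(4):
        prob = comb(n, k) * p**k * (1 - p)**(n - k)
        print(f"P(exactly {k} events in {n} years) = {prob:.1%}")
    # k=0: ~36.6%, k=1: ~37.0%, k=2: ~18.5%, k=3: ~6.1% ...
    # and P(at least once) = 1 - P(0) ≈ 63.4%.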

  • Julie near Chicago

    SC and Laird both beat me to it. And while from one POV it might be a niggle, from another it’s really quite important. Misunderstanding of applied mathematics leads to all sorts of stupid conclusions, such as the idiocy that voting is senseless because “the chances that your vote will make a difference are millions to one.”

    As a matter of fact, Laird explains some of the reasons why even the present example is not just a “niggle.”

  • Paul Marks

    I work with someone who used to be a local bank branch manager (in Britain – not the United States).

    He tells me what many other people have told me.

    They (the top brass back at HQ) gave orders that no one was to base lending judgements on whether an individual borrower was likely to pay the money back – instead lending was to be decided by mathematical equations back at HQ.

    And what was this mathematics based upon if it was not individual borrowers? Well, to cut a long story short, it was based upon bullshit.

    Well there you go – Paul Marks has written a comment concerning banking without touching on any of my normal concerns.

  • Mr Ed

    What was it about ‘sub-prime’ mortgages that wasn’t obvious from the name?

    Buy an investment without considering just who is paying for my profit on this asset, and how and why they would do so? An aggregate of 10,000 bad investments is not smoothed out by averages; it is a catastrophe.

  • Julie near Chicago

    No, Mr. Ed, you obviously don’t understand business. You don’t have to worry about selling (or here, lending) at a net loss per unit, because what you lose you make up on volume.

    See? 😉

  • Laird

    Well, this isn’t a thread about subprime mortgages but since Mr. Ed raised a common fallacy I feel moved to rebut it. (And it gives me an opportunity to address Paul Marks’ comment as well.)

    First of all, making “10,000 bad investments” does make sense, once “bad” is properly understood. If subprime mortgages tend to default at a 10% rate, that still means that 90% of them pay as agreed. That default rate may be five times higher than that of prime mortgages, but as long as you price for the risk in the aggregate you’ll be fine. Pricing for risk is the key, and it’s one of the things which got lost when Wall Street and the megabanks discovered that industry in the mid-’90s and blew it up. (A back-of-the-envelope version of that pricing is sketched at the end of this comment.)

    Mortgages can be “subprime” for a number of reasons. But as that industry evolved throughout the nineties, and especially the “noughties” (is that a word?), it basically came to mean that the borrower has a lower-than-acceptable credit score. But someone can have a low score for lots of reasons (did he suffer a temporary financial reverse, such as a divorce or job loss, or is he an habitual deadbeat?), and a static score doesn’t show a trendline (is it worsening or improving?). Credit scores actually tell you very little which is useful, but the score became the key driver of “underwriting” (sneer quotes intentional, as there was no true underwriting involved, merely box-checking). Once upon a time (in the ’80s and early ’90s) subprime lenders carefully examined individual borrowers and assessed the specific and personal reasons for their status and their likelihood of paying. Most such lenders never even looked at credit scores. That method of underwriting is expensive, and the cost has to be included in the interest rate. And those lenders also included in the interest rate a rational risk premium. The result was interest rates which were typically 5% or more above conventional mortgage rates. The combination of those factors (and others) made such lending eminently rational. It was a small, specialized, and profitable industry.

    But the megabanks (and especially the Wall Street investment banks) saw those profit margins and muscled in. Unfortunately, for those margins to be meaningful there had to be volume, which meant systematizing the process, which meant losing the individualized underwriting that made it all work. They came to rely almost exclusively on credit scores rather than real underwriting. And in their lust for volume they priced all the risk premiums out of the product, and those loans came to be priced almost identically to conforming loans. Fannie Mae and Freddie Mac jumped in, too, exacerbating those problems. Hence the inevitable blowup. (Politically-driven “fair lending” rules, i.e. the Community Reinvestment Act, played a role, too, but their impact was not huge in comparison to the institutional evolution described above.) Subprime lending, when done properly, is not only profitable but fills a real need. But it is a tiny, niche business, not susceptible to high volumes. There is a market demand for those loans, and it will be filled once again. But not by the megabanks or the Agencies, which is precisely as it should be.

    Paul, as to your bank manager friend, that corporate policy sounds irrational but it really is not. First of all, if a bank depends upon branch managers to assess the creditworthiness of individual borrowers it is opening itself to charges of discriminatory lending (which basically means discriminatory non-lending, or credit denials). As an institution the bank has to be able to prove that its lending policies and practices are nondiscriminatory, and it can’t do that with unfettered discretion in the hands of branch managers. Second, individual branch managers would have differing degrees of ability to make those decisions, which is difficult to control for. And finally, at its heart consumer lending is essentially a statistical exercise. Permitting a branch manager to “put his thumb on the scales” destroys the utility of the statistical models. You can argue that those models may be defective, but that’s an entirely different point than arguing that branch managers should be permitted to make those decisions. The answer is to build better models (which most banks are continually trying to do, anyway; that’s why they hire statisticians).

    Sorry for the length of this post.
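
    And the promised back-of-the-envelope pricing sketch (every number here is assumed purely for illustration): solve for the rate at which the 90% who pay cover the 10% who don’t.

    cost_of_funds = 0.05  # assumed funding + servicing cost per dollar lent
    recovery = 0.50       # assumed cents on the dollar recovered after default

    def break_even_rate(default_rate):
        # Payers return (1 + r); defaulters return `recovery`. Solve
        # (1 - d)*(1 + r) + d*recovery = 1 + cost_of_funds  for r.
        d = default_rate
        return ((1 + cost_of_funds) - d * recovery) / (1 - d) - 1

    print(f"subprime book (10% defaults): {break_even_rate(0.10):.1%}")  # ~11.1%
    print(f"prime book (2% defaults):     {break_even_rate(0.02):.1%}")  # ~6.1%
    # Roughly the 5-point spread described above, before any profit margin
    # or the cost of individualized underwriting is added.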

  • Tedd

    Thanks, Laird, I found that quite informative.

    I agree with what you said about building better models, but I wonder if the need to prove non-discriminatory practices isn’t at risk of making that impossible. I would have thought that factors such as race, gender, religion, and perhaps even political affiliation might need to be included in a really good model. Or is it the case that banks now have access to such a rich resource of other information about a person’s earning and spending habits that those demographic factors are no longer needed?

  • Laird

    You’re correct, Tedd: such obviously germane factors as race, sex, etc., are specifically prohibited from inclusion in such models (and, for that matter, in the credit score calculation algorithms, too). That’s a disguised form of subsidy that results in higher costs to lower-risk individuals and lower costs to higher-risk ones. It’s just like insurance companies being prohibited from including certain factors (such as pre-existing conditions or poor lifestyle choices) in their rate tables. That’s just the strange world we live in. (A toy illustration of that disguised subsidy is sketched below.)
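
    The toy illustration, reusing the break-even pricing sketched above (all numbers assumed): two groups of different true risk forced onto one pooled rate.

    cost_of_funds, recovery = 0.05, 0.50  # same illustrative assumptions as above

    def break_even_rate(d):
        return ((1 + cost_of_funds) - d * recovery) / (1 - d) - 1

    low_d, high_d = 0.02, 0.10
    pooled_d = 0.5 * low_d + 0.5 * high_d  # portfolio assumed half of each group
    print(f"fair rate, low-risk group:  {break_even_rate(low_d):.1%}")    # ~6.1%
    print(f"fair rate, high-risk group: {break_even_rate(high_d):.1%}")   # ~11.1%
    print(f"one pooled rate for all:    {break_even_rate(pooled_d):.1%}") # ~8.5%
    # Low-risk borrowers pay ~2.4 points more than their risk warrants and
    # high-risk borrowers ~2.6 points less: a transfer hidden in the price.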