The Hanson-Moldbug debate

Occurred as scheduled last night. As scheduled, it was taped. I will post a link if/when I get one. I haven’t seen any such thing, so this reconstruction is entirely from memory.

I had a simple plan for handling Professor Hanson. First, I’d prepare myself both chemically and intellectually by sneaking off before the receptions to a Peet’s, where I’d order a quadruple espresso, read the Latter-Day Pamphlet No. 6 (Parliaments) on the left side of my screen, and watch the arguments in Slice v. Gannon on the right. Then, back to the reception for an hors d’oeuvre and a glass of red wine. When Dr. Hall rang the bell, I’d spring forward and disable the Professor with a hard snap-kick to the inside knee, then finish the job with a few quick rights to his bulging, alien-like forehead.

While I followed the first part to a T, I was unable to implement the second. Professor Hanson is just too nice a guy. In fact, I’d like to thank both the Professor and Dr. Hall for what I hope was an entertaining evening. The original idea for the event was, I believe, Dr. Hall’s; he suggested it to Professor Hanson, who suggested it to me. Which shows you what good guys they both are.

Professor Hanson is not, of course, a retard, and of course I never suggested that he was. Quite the contrary—he is an American social scientist of the 20th century. This phenomenon, in which non-retards express retarded ideas, is no novelty in that time and place.

As a student of history, I am entirely ignorant of all centuries before the 17th. I know a little bit about the 17th and a bit more about the late 18th. I feel I have a solid, but hardly exceptional, understanding of the 19th. On the 20th, ain’t nobody can touch me.

At least, not as a generalist. Nor is this because my general understanding of the 20th is excellent. In many ways I feel I am actually stronger on the 19th. Rather, it is because everyone else’s understanding of the 20th is so poor. This is to be expected. It only just happened.

In the early 21st century, almost everyone, even the nominal experts, still reads the 20th century through some 20th-century propaganda filter. As Francesco Nitti said of the Italians, we are ubbriacati di bugie—drunk with lies. The 20th’s nominal experts on the 17th, who generally are experts, suffer from no such intoxication. Even a Marxist, like Christopher Hill, can be perfectly perceptive and trustworthy on the 17th century.

Therefore, we can stop lubricating ourselves with lies about the 20th. And at some point, we will. If not now, when? As I told the Professor, he is as far ahead of his institution, as his institution should be ahead of him. He is at least trying to overcome his biases. He is fully engaged with his subject matter, believes sincerely in his nominal beliefs, and wants to discuss them with others. This makes him one scientocrat in a thousand.

The great intellectual mistake of the 20th century is that its governments believed they were subsidizing science, when they were actually subsidizing scientocracy. For instance, over lunch today I sat across from Ralph Merkle and Rob Freitas. “What’s the largest present obstacle to the development of this stuff?” I asked. “Money,” Ralph said.

I am slightly skeptical of diamondoid nanotechnology, (a) because I have learned to be skeptical of anything that promises results in so long a term (scientists, even real scientists, are the world’s greatest liars), and (b) because I suspect slightly that it represents an inappropriate transformation of macro-scale engineering into the nanoscale. The molecular nanotechnology we have, life, is built on low-energy bonds and runs in a wet environment with a very high error rate. Diamondoid nanotech is built on high-energy bonds, runs (at least for manufacturing) in hard vacuum, and depends on a very low error rate—errors will cascade like crazy.

Nonetheless, Merkle and Freitas are clearly real scientists, and they reacted to my objections more or less the way Kimbo Slice would react to my punches. I realized within a couple of minutes that I was wasting my time, and moved on to royalism and cryptographic weapons control. Ralph Merkle, being a very intelligent person, is very easy to instruct. If you can explain something interesting to him, in five minutes he will be explaining it to you.

These guys, however, are about as likely to get government money as Charles Manson. It will be a great day when Merkle and Freitas get all the money they need, and ITER has to hold a bake sale to buy a tokamak. Alas, they are by no means the only scientists in this position.

In the late 20th century, scientocrats of every possible flavor got all the money they needed. More, in fact. As for science, in some fields it flourished; in others, it was almost entirely defunded. There was never any shortage of cargo-cult science to fill these random holes.

The basic problem is that the robber-barons of Silicon Valley, unlike their Victorian forebears, do not realize that, if they want all this science, they will actually have to pay for it—themselves. Instead, they look at their tax forms and think: I gave at the office. But they didn’t. They gave to scientocracy. Now, they need to figure out how to patronize science—or there will be no science. Just scientific Bondo, sanded to perfection and painted with meticulous care.

Professor Hanson, while a good guy and not a retard, clearly has at best a dim sense that he is in any sense any part of any such apparatus. He is a 20th-century “social scientist”—a scientocrat by definition, a true believer in government by science and science by government. He is aware that this system does not work at all, but this does not lead him to question the entire tradition. Indeed, since his mind exists inside that tradition, he interprets it as mere reality. There’s something going on here, Mr. Jones. And you don’t know what it is—do you, Mr. Jones?

In any case—back to the debate. I should note that last night was not, in fact, my first public appearance. It was my first public appearance since I was on It’s Academic in the mid-80s. Wilde Lake High School—the quiz-buzzer terror of the lower Chesapeake basin, I’ll have you know. And I was the anchor of the Brown team that almost beat MIT’s 35-year-old grad students in the ’91 College Bowl regionals. And I have taught, a little. And I played a waiter in one of Mrs. Moldbug’s short films.

In short, I have no real training or experience in acting, speaking or debating, whereas I’m sure Professor Hanson is no stranger to the microphone. (“We’re both in the entertainment industry,” he told me after the debate. I agreed.) Therefore, my strategy would have to be extremely blunt and simple. For the most part I think it went well, though I made a couple of mistakes which I’d correct in retrospect.

In my 10-minute opening statement, I said—reconstructing, in the Thucydidean manner:

Futarchy is considered retarded because it violates both common sense and logic. If it violated only one of these, it might just be considered harmful. Since it violates both, a stronger word is called for. Since we all went to fourth grade, we all know such words. My goal tonight is to work through the logic, and leave the common sense to you.

Futarchy is the use of decision markets for sovereign decisions. Decision markets are useful given two requirements: they need to be well-trained and disinterested. Since these requirements are obviously seldom true at the sovereign level, futarchy is retarded.

Because most of us have no training or experience in managing a sovereign, and because the sovereigns we know seem quite poorly-managed, the sovereign case is a bad first example for understanding decision markets and their limitations. Let’s use a simpler example: chess.

Can a decision market play chess? Yes—given certain assumptions.

Imagine a chess game. Now, imagine a group of kibitzers watching the chess game. Now, imagine the kibitzers begin to bet on the game. The betting will create odds. The odds express each side’s probability of winning. This is a prediction market.

To turn this prediction market into a decision market, we say: could we get rid of one of the players, and just have the kibitzers play the game? Indeed we could.

We notice that after White makes a good move, White’s odds go up. After White makes a bad move, White’s odds go down. To decide between two moves A and B (or any N moves), we can take conditional bets on White’s chances if move A is made, and White’s chances if move B is made. Whichever bet produces the best odds is, in the market’s opinion, the best move. If a move is not made, all bets in that market are nullified—like a “scratch” in horse racing.

For instance, on the opening move, the conditional odds for P-K4 might be 50–50 (assuming the players are equally ranked), and the conditional odds for P-KB4 might be 40–60 (because it’s hard to recover from a strange bad opening). Therefore, White will choose P-K4 over P-KB4.
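(For the programmers in the audience, a toy sketch of this mechanism. It is my own illustration, with the odds above simply hard-coded; nothing in it comes from Professor Hanson.)

```python
# Toy decision market for a single chess move. The conditional odds are the
# made-up numbers from the text, not real market data.

def choose_move(conditional_odds):
    """Pick the move whose conditional market gives White the best chances.

    conditional_odds maps each candidate move to the market's estimate of
    P(White wins | that move is played). Bets in the markets for moves that
    are not chosen are nullified -- the 'scratch' rule from horse racing.
    """
    chosen = max(conditional_odds, key=conditional_odds.get)
    scratched = [move for move in conditional_odds if move != chosen]
    return chosen, scratched

odds = {"P-K4": 0.50, "P-KB4": 0.40}   # illustrative prices only
move, scratched = choose_move(odds)
print(move)       # P-K4
print(scratched)  # ['P-KB4'] -- all bets in these markets are refunded
```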

Or should. Now: in what conditions will this process actually work, i.e., generate good moves?

My point was: a market is not magic. It is just a way of collecting the votes of the market players. It is not democracy, not exactly, but in this sense it is like democracy. Under what conditions will this result be wise, rather than foolish?

Carlyle, whose quote I mangled horribly, enlightens us on the matter:

‘If of ten men, nine are recognizable as fools, which is a common calculation,’ says our Intermittent Friend, ’how in the name of wonder will you ever get a ballotbox to grind you out a wisdom from the votes of these ten men? Never by any conceivable ballotbox, nor by all the machinery in Bromwicham or out of it, will you attain such a result. Not by any method under Heaven, except by suppressing, and in some good way reducing to zero, nine of those votes, can wisdom ever issue from your ten.’

[BTW, don’t bother searching for Carlyle’s ‘friend Crabbe’ or his Intermittent Radiator—Crabbe is just one of Carlyle’s many imaginary friends, like Dryasdust or Heavyside.]

Thus, we have our first requirement for success. The kibitzers need to actually be chess players. However many non-chess-players you have betting on a chess game, their bets will not express anything interesting. They will still produce a number—but that number will be noise.

In an actual betting market (as opposed to a ballotbox), there is actually some Bromwicham machinery for suppressing the fools. Namely: the fools lose money, and are forced to go home—or never (as Professor Friedman pointed out) arrive. For the wise, it is the other way around.

This Darwinian training effect is crucial to prediction markets. A market is only as wise as its players. The mere mechanism is not sufficient. A market’s opinion is the democratic vote of the players, weighted by the size of their bets. In a well-trained market, the wise will be betting with fat wallets and the fools with thin—providing Carlyle’s vote-suppressing machine.
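(Again, a toy illustration of my own: two hypothetical markets, one well-trained and one not, with the probability estimates and bankrolls invented for the purpose.)

```python
# A market's opinion as a vote weighted by bet size: each player reports a
# probability, and the aggregate is weighted by the money behind it. In the
# well-trained market the fools are already broke, so their votes are, in
# Carlyle's sense, reduced to zero.

def market_opinion(players):
    """players: list of (probability_estimate, bankroll) pairs."""
    total = sum(bankroll for _, bankroll in players)
    return sum(p * bankroll for p, bankroll in players) / total

well_trained = [(0.70, 1000), (0.65, 800), (0.10, 10)]    # fools nearly broke
untrained    = [(0.70, 100), (0.10, 1000), (0.90, 1000)]  # noise with deep pockets
print(market_opinion(well_trained))  # ~0.67 -- tracks the wise
print(market_opinion(untrained))     # ~0.51 -- a number, but only noise
```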

Now, when we map from chess back to government, we see an immediate problem. Lots of people know how to play chess well. Open a chess decision market to the public, and you will get scads of chess masters. (Chess computers, even.) It’s unclear, however, that anyone in the 21st century knows how to govern well.

Certainly, our government today makes bad decisions (a claim with which Professor Hanson and I agreed fervently), so those with experience are without skill. Amateurs might do a better job. Or not. We are back to our non-chess-playing kibitzers.

There’s a worse problem, however. The market must also be disinterested.

If the predictions of a prediction market are simply thrown away, it is disinterested by definition. Its results have no side effects. If the predictions are used to make decisions, however, those decisions by definition have effects. If those effects affect a player in the market, that player is not disinterested.

So: suppose player P stands to make $X from decision D. In our chess example, he might have a side bet, paying $X, that White will open with P-KB4. Therefore, the question is: what will be his expected loss, $Y, from buying enough conditional P-KB4 bets to make the market open with P-KB4?

If X is greater than Y, manipulating the market (i.e., moving it intentionally) is profitable, and P can be expected to do it. If Y is greater than X, moving the market is unprofitable. Obviously, even in the world’s deepest prediction markets (financial markets), large transactions move markets all the time. In fact, many traders are paid the big bucks for figuring out how to place large orders in these markets without moving them.
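(The arithmetic here is trivial, which is exactly the point. A sketch of my own, with X and Y taken as known inputs, knowledge which, as we will see, nobody at the sovereign level actually has. The dollar figures are invented.)

```python
# The manipulation condition: player P moves the decision market whenever his
# side payoff X exceeds his expected trading loss Y from moving it.

def manipulation_pays(x_side_payoff, y_expected_loss):
    """Player P manipulates the decision market iff X > Y."""
    return x_side_payoff > y_expected_loss

# P holds a $1,000,000 side bet that pays if White opens P-KB4, and expects
# to lose $50,000 pushing the conditional market far enough to force that move.
print(manipulation_pays(1_000_000, 50_000))  # True  -- expect manipulation
print(manipulation_pays(10_000, 50_000))     # False -- the market holds
```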

Now, I said, there is simply no way to ensure in general that Y is greater than X. To know that Y is greater than X, there is only one way. You have to know Y, and you have to know X. Or you have to know that there is some algorithmic relationship between the two.

In futarchy, this is simply impossible to quantify or analyze. There’s no way of measuring who will profit how much from a bad decision. There’s no way to classify the market players into wise men and fools, measure the size of the money behind the wise players, and figure out Y. There’s also no way to figure out X.

Professor Hanson, so far as I can see, addresses this problem in three ways.

First, he constructs a model in which Y is infinite and X is finite. (In fact, his “wolves” not only have infinite liquidity—they know the magnitude of the manipulation.) Therefore, the model proves: Y is greater than X. As it is indeed, in the model.

Second, he does sociological experiments with undergraduates. He sets up these markets such that Y is greater than X. We can tell that Y is greater than X in the experiments—because the experiments succeed.

Third, he finds actual markets in which actual manipulations fail. Sure enough: Y is greater than X. Of course, because non-disinterested decision markets are retarded, one would not expect to discover them in reality.

None of this goes even a millimeter toward proving what needs to be proved—namely, that in all markets, Y is always greater than X. It is just a list of cases in which Y is greater than X. In two of the cases, Professor Hanson has constructed his examples himself. In the third, reality itself has performed the selection. Therefore, he succeeds in proving his assumptions.

Now, this is where I ran into a bit of trouble. As I asserted, deduction beats induction every time. You can show me all the markets in the world in which Y exceeds X—whether you’ve constructed these markets yourself, or found them in reality. But to show that your markets will not be manipulated, you need to show that Y will always exceed X. And this you cannot do, because you don’t know Y and you don’t know X, and you can make no general statement about the relationship between the two.

Philosophically, this argument is unassailable. As a matter of practical communication, however, it would have been awfully nice to show up with some actual examples of cases (in the futarchy department) in which X is almost certainly greater than Y. Brad Templeton asked for this, and I had to stutter for a moment before answering, off my back foot, “cap and trade.” Clearly, in a public appearance one should never have to think on the spot. It makes one look dumb.

Cap-and-trade is a good example because while X is, obviously, enormous, there is simply no population of wise people who can predict its effects. Nothing like this policy, on this scale (e.g., reducing carbon emissions 80% by 2050, the standard proposal), has ever been pursued. To get a big Y, you need big money behind sharp predictors.

Now, as Professor Friedman (who not only has a considerable personal resemblance to Yoda—but, as soon as he opens his mouth, confirms that resemblance) pointed out, this begs a question which I asked in my essay but was going to skip in the debate—futarchy being such a target-rich environment. How do we measure success? In chess, easy. In government, hard.

You need a “national happiness” number. I believe Professor Hanson actually used some phrase of this type. Of course, Stalin’s famous quote came instantly to mind: “Life has become better, comrades. Life has become more joyful.” This is the reductio ad absurdum of the scientocratic planned economy—or would be, if anyone realized how absurd it was.

For instance, GDP (total end-consumer sales of all businesses) is a ridiculous proxy for national happiness. It is not even a unitless number—it is measured in dollars, which are anything but constant. Removing this denominator involves substantial mathematical fudge. Moreover, to accurately predict conditional impacts on this number, you need a very large impact, and you need a very good default prediction of GDP.

Moreover, the effect of carbon controls on GDP will probably be negative. No—we need a positive environmental mitigation number to add to this already-ridiculous fudgeball. And so on. In Professor Hanson’s own Foresight presentation, he had a wonderful chart of “economic growth” going back to, I kid you not, 200,000 BC. With points representing actual numbers—apparently plotted from some actual dataset, not just randomly scribbled.

Needless to say, no units appeared on the “growth” axis. There’s really nothing like a unitless number. I wanted to raise my hand and ask the Professor to define “growth,” a word he used 47 times in his presentation, apparently with the assumption that both he and his audience knew exactly what it meant. If so, I feel it could at least have, you know, units.

Not only is the utility of such a number-soup metric questionable, its predictability is extremely questionable. There is a classic business-school exercise in which the professor puts a jar of jellybeans on the desk, then asks the class to guess how many jellybeans are in the jar. Shockingly, the answers tend to fall in a bell curve with the center around the right answer.

To me, this says something about the human brain’s ability to estimate geometry. However, if the professor left the jar under the desk, and the experiment still worked, it would say something about the human brain’s ability to operate telepathically. This, of course, is Feynman’s problem of the Emperor of China’s nose.
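(A quick simulation of my own, assuming the usual telling of the exercise, shows why the jar has to be on the desk. The true count and the noise range are invented.)

```python
# Guesses carrying a noisy signal average out near the truth; guesses with no
# signal average out to nothing in particular.

import random

TRUE_COUNT = 850  # hypothetical number of jellybeans in the jar
random.seed(0)

informed = [TRUE_COUNT * random.uniform(0.6, 1.4) for _ in range(200)]  # jar on the desk
blind    = [random.uniform(0, 5000) for _ in range(200)]                # jar under the desk

print(sum(informed) / len(informed))  # close to 850
print(sum(blind) / len(blind))        # close to 2500 -- noise, not telepathy
```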

So, my general answer: X is likely to exceed Y in a case in which there is a large side effect, and little or no predictive power in the market. Carbon mitigation is an obvious such example. It is hardly alone in this.

Honestly, I think the greatest difference between my perspective and Professor Hanson’s is just that I have much higher standards. His entire argument proceeds from the position that, since government today is so bad, anything that could be somewhat less bad is worth a look. Sure, we can’t know that Y is greater than X in all cases, but often it will be. Besides, don’t people buy decisions now? Well, gee, they sure do. So there you go.

For me, government safety is like airplane safety. Not only do I want a watertight proof that Y is greater than X, I want two or three parallel and independent proofs. At least one of them will probably turn out to be wrong. Professor Hanson is a professor, and thinks like a professor. I’m an engineer, and think like an engineer.

I am also a student of history. So I have two sources of higher standards for government design: my perfectionist engineering attitude; and the European writers of the Victorian era, whose aristocratic governments worked much better than ours, and were thus appalled by government failures which to us seem trivial and not worth mentioning.

Therefore, this idea that since we have a bad system, we should consider new and different bad systems which may (or may not) be slightly less bad, strikes me as comical. It’s actually quite easy to fix our government. All we have to do is restore the old, effective system—aristocratic and/or monarchical—which we foolishly discarded in favor of all this Bromwicham machinery. A new gizmo, a prediction market rather than a ballotbox, is not what’s needed. What’s needed is an end to gizmos, and a return to real statesmen.

In other words: if you want to play chess, hire a chess player. In the chess example, the enthusiasm for Bromwicham machinery by which a roomful of kibitzers can, in some collective way, play chess, is easy to explain. The explanation is anarchism—the desire for no one to be making mere personal decisions at the sovereign level. Everyone wants power; all the kibitzers envy the chess player. So: let’s shoot the chess player, and let the kibitzers play the game. We shall have no king. We shall rule ourselves. Freedom! Or something like that.

And we tried this. With what results—we now see. How long will it take to admit the mistake? Alas, at least another century or two, I suspect. The fruits of anarchism! Visit Port-au-Prince before Port-au-Prince visits you.