One-sided conversation with a headless professor

I don’t actually want Professor Hanson’s head, of course. I don’t want it in a box; I don’t want it on my socks. I don’t want it stuffed, embalmed, cremated or even cryopreserved. Physically and biologically, it should remain screwed firmly to Professor Hanson’s neck—and thence, indirectly, to his chair. A post to which it in fact brings much credit. As I told the Professor at our debate, he’s as far ahead of his profession as his profession should be ahead of him.

But a man can only go so far. Metaphorically, only Professor Hanson can prune, pack and ship his own capital peduncle. He can do so by agreeing with me on these two statements:

A prediction market can be used as a decision market if (a) it is accurately integrating genuine distributed information, and (b) it is robust against any causal feedback from its potential decisions.

Assessing (a) and (b) is a nontrivial, generally non-computable task that demands good intuitive judgment, sometimes known as “wisdom.”

Thus, a prediction market is not an effective decision market if it is unable to predict accurately due to Knightian uncertainty, asymmetric information, etc., etc. (The uncertainty in Feynman’s “Emperor of China’s nose” example is Knightian—a prediction market cannot predict the length of the Emperor of China’s nose, if no one has ever seen the Emperor of China’s nose. And no, Feynman was not making an argument about biometric distributions.)

For instance, a prediction market in terrorist actions should not trade, because it is subject to both Knightian uncertainty (no one really has a computationally accurate model of terrorism) and asymmetric information (except the terrorists, whose predictions of their own actions will always be the best). Therefore, rational non-terrorist actors will not trade in a terrorism prediction market. It is thus a machine for transferring money from fools to terrorists—a sort of high-tech Islamic relief fund. Alas, it’s really quite typical that the Pentagon would fund such a thing.

In addition, the predictive forces in a market, whatever their financial strength, must be stronger than the anti-predictive forces, which can profit from a decision error that offsets the expected loss of their anti-predictive bets. The cost of manipulating a market is a function of the players in that market; it is not a matter of theory. It can be anywhere from zero to infinity. So can the anti-predictive profit, of course.
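The arithmetic of the anti-predictive bettor is simple enough to sketch. Here is a minimal model, with purely hypothetical names and numbers of my own invention, of when such a bet pays:

```python
# Toy model of the anti-predictive trader's calculus. All numbers are
# hypothetical illustrations, not data from any real market.

def manipulation_is_rational(expected_bet_loss, decision_error_profit):
    """An anti-predictive bet is worth placing when the profit from the
    decision error it induces exceeds the expected loss on the bet itself."""
    return decision_error_profit > expected_bet_loss

# Suppose a manipulator expects to lose $2M to better-informed traders
# by pushing the market away from its honest price...
expected_bet_loss = 2_000_000
# ...but the decision error he thereby induces is worth $50M to him
# somewhere outside the market.
decision_error_profit = 50_000_000

print(manipulation_is_rational(expected_bet_loss, decision_error_profit))
```

Since both quantities can be anywhere from zero to infinity, no theory fixes the sign of this comparison in advance.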

The good news is: if Professor Hanson will agree to these declarations, I think we can all agree that he’s earned the position of Sith Lord and the honorary title of “Darth.” When the DHL van shows up with the box, I’ll send out that red-lightsaber kit. Some assembly required. The Professor can start putting it together as soon as his new, Sith head grows back. He doesn’t even need to resign his professorship, although the tattoos may arouse some faculty attention.

The bad news is: he probably won’t. At least, I’ve already offered him the chance to turn his path away from the lies of the Jedi Council and the grotesque, stuffed corpse of the dead Republic. And he has already refused. I enclose my side of the conversation, and replace the Professor’s with codes—which he can disclose if he wants, and not if he doesn’t. I’ve also elided paragraphs in which I disclose dark secrets of the Sith order.

Subject: head


Let me know when you get the thing off, and I’ll send you that DHL label:


Professor Hanson:



Really? I’d be happy, of course, to comment in response if you have a reply.

You might focus on the Popperian question of what empirical evidence would discredit your design—if this empirical evidence does not, or if it is not evidence at all. Because frankly (unless you deny the allegations against Berlacher et al), if this doesn’t, I can’t imagine what would! Maybe you can fill that imagination in.

[SS 1] [SS 2]

Professor Hanson:



[SS 3]

If the strategy does not succeed, why do they keep doing it? What is the cause of the “familiar fact pattern” which is “rampant”? I quoted from three separate areas of financial expertise: prosecutors, journalists, and academics. You can’t possibly deny that PIPE shorting is rampant.

Therefore, if it is not, as you posit, successful, the onus is on you to explain this irrational behavior. (In fact, it would probably make a good paper.) I have a perfectly sensible explanation: because it works. It is not irrational, but rational.

And I really wonder what, say, Charles Darwin would make of your persistent complaints about my word count. Dear Lord Jesus, do we need to think in sound bites just because it’s the 21st century? Must we twitter, just because we can?

Professor Hanson:



Laying off, at least if it’s this:

is totally different. It’s hedging a bet already made. Hedging is not manipulation.

In a PIPE, the bet is not already made, because the stock issue is not yet priced. The goal of manipulation is to affect the price at which the deal closes.

I can answer your question, actually. In a PIPE short, the direction of manipulation is always downward. That is, the PIPE manipulator is always shorting, never pumping. That’s why it’s a PIPE short, not a PIPE pump (the other direction of manipulation being the pump-and-dump).

As you know, a predictable price error is a predictable profit. So why are there no wolves? I’m not sure. But I suspect that there is no predictable profit, because there is no reliable way for a wolf to see (a) whether the PIPE is being shorted by the new investors, and (b) if so, how much.

That is, if you assume the PIPE is shorted, the error is predictable. But since the activity is after all illegal, it is not known whether it is done—thus not predictable. Thus there is no predictable profit and various sensible assumptions are not violated.
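To make this concrete, here is a minimal sketch, with hypothetical numbers throughout, of the wolf’s problem: his expected profit changes sign with the probability that the PIPE is being shorted, a probability he cannot observe.

```python
# Why the wolf has no predictable profit when the shorting is hidden.
# All numbers are hypothetical.

def wolf_expected_profit(p_shorted, gain_if_shorted, loss_if_not):
    """Expected profit of buying against a suspected PIPE short.

    If the PIPE really is being shorted, the manipulated price error
    eventually corrects and the wolf captures gain_if_shorted; if not,
    he has simply overpaid, and eats loss_if_not."""
    return p_shorted * gain_if_shorted - (1 - p_shorted) * loss_if_not

# The same price decline looks identical whether or not the deal is
# being shorted, so p_shorted could plausibly be anywhere:
for p in (0.1, 0.5, 0.9):
    print(p, wolf_expected_profit(p, gain_if_shorted=4.0, loss_if_not=3.0))
```

The trade is negative-expectation at p = 0.1 and positive at p = 0.9; since the illegality of the activity keeps p unobservable, the sign of the bet is unknowable, and no sensible assumption is violated.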

Are you ready for that DHL label yet? I’ve already paid for the shipping. You’ll just need to perform the procedure.

Me, again:

Also, another reason these markets are not predictable, despite the “rampant” unidirectional manipulation, is that, since it dilutes the company, PIPE shorting actually decreases the value of the shares it attacks. The false prophecy of the ganked decision market is also a self-fulfilling prophecy.

Thus, it is not sufficient to notice a pattern in which PIPE shares decline between the announcement and the closing, or even before the announcement. If you buy in against the shorts and your purchase is not large enough to overcome the manipulators, you will just lose money along with everyone else. You are now a dolphin, too.
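The dilution mechanics can be sketched with a toy model (all deal terms here are hypothetical): the lower the manipulated price at closing, the more shares the company must issue for the same dollars, so the post-deal value per share is genuinely lower, and there is nothing for the price to revert to.

```python
# Toy PIPE dilution model. Deal terms are hypothetical illustrations.

def post_deal_value_per_share(pre_money, shares_out, pipe_dollars,
                              closing_price, discount=0.9):
    """Per-share value after a PIPE priced at a discount to the
    (possibly manipulated) market price at closing."""
    new_shares = pipe_dollars / (closing_price * discount)
    return (pre_money + pipe_dollars) / (shares_out + new_shares)

# A $100M company with 10M shares outstanding raises $10M.
honest = post_deal_value_per_share(100e6, 10e6, 10e6, closing_price=10.0)
shorted = post_deal_value_per_share(100e6, 10e6, 10e6, closing_price=5.0)
print(f"honest close: ${honest:.2f}/share, shorted close: ${shorted:.2f}/share")
```

Buying in against the shorts does not recover the honest value, because the dilution is real: the prophecy has already partly fulfilled itself by the time the deal closes.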

Is it impossible, still, to profit from this pattern? I’m sure it isn’t. I am not a trader, but I’m sure there is action in the area. There are no trivial strategies, however, or the manipulation would not work and hence would not exist.

Professor Hanson:



I can’t imagine what you mean by “increases price accuracy.” How would that even be measured?

Are you saying that PIPE shorting is a self-fulfilling prophecy? In that case, we agree! It is most certainly a self-fulfilling prophecy. My point is that futarchy will be manipulated by those who can profit from self-fulfilling prophecy. E.g., in the case of PIPE shorting, the prophecy predicts that the stock will decline, and by diluting the company it makes the stock decline, decreasing its value.

In a self-fulfilling prophecy, there is a circularity: the prophecy comes true because the prophecy is made. For instance, in one of these PIPE scams, the profit is made because the stock goes down. But the same action that made the stock go down—putting on the shorts—is itself the bet.
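A minimal sketch of that loop, under a deliberately crude linear price-impact assumption (every name and number here is hypothetical):

```python
# The bet and its cause are the same trade. This toy model gives the
# short a linear price impact, and assumes (as in a PIPE short) that
# the shorter covers out of the discounted deal shares, so covering
# does not push the price back up.

def short_pnl(entry_price, shares_shorted, impact_per_share):
    """P&L of a short whose only price driver is its own selling."""
    cover_price = entry_price - shares_shorted * impact_per_share
    return shares_shorted * (entry_price - cover_price)

# With zero impact the "prophecy" earns nothing; with any impact at
# all, making the prophecy is what makes it come true.
print(short_pnl(10.0, 100_000, impact_per_share=0.0))
print(short_pnl(10.0, 100_000, impact_per_share=1e-5))
```

The profit term is built entirely out of the trade’s own footprint, which is the circularity in one line.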

My point is that futarchy will be manipulated to be non-predictive, even anti-predictive, in precisely this way. Government will make insane decisions for the sake of funneling profit to self-interested players. It does this already! But in your system, it will do it far more efficiently. Futarchy is basically automated graft.

Therefore, if I am right about what you mean by “ex ante the possibility of such situations increases price accuracy,” it is by no means a defence. Quite the contrary—it is a confession. Susceptibility to self-fulfilling prophesies is very much a bug in a decision market. It is not a feature, and I will indeed require that head. (Blame it on too much Japanese history.)

Moreover, I am not at all convinced that this defence is even true. All self-fulfilling prophecies are profitable and (by my definition) false, but not all profitable false prophecies are self-fulfilling. Rather, the bet can take a loss, and the bettor can still win by some other mechanism.

There is an easy way for you to clear up my confusion. If my empirical data, such as it is, does not falsify your theory—what would?

Me again:

Any more thoughts? If not, I will post this exchange, or at least my side of it. I’ll also include your side if you give me permission.

In my (basically industry-centric) view, accepting the limits of prediction markets makes an argument *for* prediction markets. Obscuring these limits is an argument against them. So, if you invent the suspension bridge, and then claim that the suspension bridge can be used to bridge the Atlantic, you are sabotaging your own invention.

A prediction market can be used as a decision market if (a) it is operating correctly and accurately integrating genuine public information, and (b) it is robust against any feedback process involving the decisions it makes. Assessing (a) and (b) is a nontrivial task that may even demand good intuitive judgment, or “wisdom.”

If you can accept this result, I will harass you no more!

—and here the matter resteth.