Okay, so you’re a Loyalist now. So what?
The story of the American Rebellion, as told by Hutchinson, Oliver, and Stedman, is hardly without lessons for today. Most are subtle, and we’ll save them for later. But one is obvious: bogus, self-serving, fraudulent antihistory is being installed, as we speak, at taxpayer expense, in the tender forebrains of America’s youth. An outrage!
Indeed, by many reasonable standards, an outrage. To the Pupulupi, of Zargon Four, who have such a great respect for truth that they never say “good morning” unless they mean it—an unthinkable crime of epic proportions. To us, of Planet Earth—jaywalking. If a little official antihistory, especially surrounding the origin myth of the state, is our only problem, we don’t have a problem.
As we’ll see in this chapter, we do have a problem. But let’s get back to the Loyalism.
You don’t really need to be a convinced Loyalist to continue processing the red pill. It’s not trivial to carve a lifetime of revolutionary propaganda out of your head in one operation. Not everyone has a natural knack for self-directed neurosurgery. Realistically—there are probably a few antennae, tentacles or hyphae left in the cavity. But this is okay: we just need a hole to dig in. Now we have one, and we’re on the offensive.
What’s essential is that, after your beer with Peter Oliver, you understand Loyalism. You may not be completely sold, you may not see how simple and obviously right the Loyalist story of the American Rebellion is, but you can see how a reasonable person might see things that way.
But the Loyalist perspective remains an isolated outlier. Everything else you believe about reality is consistent with the American Revolution. With the American Rebellion—maybe not so much. Our goal in this chapter is to slide a hemostat jaw into this little tear between your parasite and the endogenous neural tissue, grab the former by its dorsal fin, and pull. There may be bleeding.
In other words: Loyalism gives us an extremely foreign perspective of the present world. There are no other Loyalists in 2009. So, when we think as Loyalists, we have no choice but to think for ourselves.
What should a Loyalist make of X, or Y, or Z, in 2009? Let’s say, for example, that Peter Oliver had spent the last 200 years asleep in Rip van Winkle’s cave, and woke up for the inauguration of Barack Hussein Obama. Can we imagine his reaction? We can try.
If we want to get really imaginative, we can imagine what I call a “reverse counterfactual.” First, imagine that the military dice had fallen otherwise and the American Rebellion was suppressed. Second, perform the standard counterfactual exercise of imagining what an intact British Empire would look like in 2009. Third, imagine the counterfactual universe invents some device that can send invisible observers into our 2009, and make a documentary for the edification of the Imperial audience—showing this awful alternate 2009, in which the Massachusetts disturbances of the 1770s were not quashed with firm, manly vigour.
What’s neat is that such a documentary could be made, with existing technology, in the real 2009. If you don’t find this a frightening exercise—try replacing the British Empire with the Confederacy or Nazi Germany. (These variants are only for battle-hardened space admirals.)
In this chapter, we’ll expand this fresh alternate reality to three more points—each of which, unlike 18th-century history, is of considerable relevance in the real world today. To preserve some suspense, we’ll give them secret acronyms: AGW, KFM, and HNU.
Each of these acronyms represents, so far as I can tell, a democratic feedback loop between public misperception and official malpractice. In other words: between lies and evil. Lies persuade well-intentioned voters to support policies which are in fact evil. Evil, being evil, has both the power and the incentive to maintain the lies. As we’ll see, these loops are quite stable, and they can be almost arbitrarily pernicious.
For each case, we’ll describe the misperception and the resulting malpractice, and suggest a new policy regime which breaks the loop. These new policies are every bit as far off the institutional map of your present government as Loyalism is off its political map, and they are not likely to happen. If you find yourself liking them—tough. That’s democracy for ya.
There is no surprise behind this acronym. You probably already have an opinion about AGW. If it’s the right opinion, please feel free to skip this section.
Adopting the pejorative tone we are shortly to encounter, and reflecting it in the opposite direction, we can call a believer in the organized scientific consensus behind AGW an AGW credulist. An unbeliever, of course, is an AGW denialist.
You’ll notice—this is a property of each of today’s cases—that there is a vast intellectual gap between the credulists and the denialists. There is no moderate position on AGW. You believe, or you don’t believe. One of the two sides is extremely right, and the other is extremely wrong. I like using pejorative terms for both, because one will turn out to be hip and ironic, and the other will turn out to be richly deserved.
As the page behind that link (on a site produced by brothers Mark and Chris Hoofnagle) so helpfully explains:
Almost every denialist argument will eventually devolve into a conspiracy. This is because denialist theories that oppose well-established science eventually need to assert deception on the part of their opponents to explain things like why every reputable scientist, journal, and opponent seems to be able to operate from the same page. In the crank mind, it isn’t because their opponents are operating from the same set of facts, it’s that all their opponents are liars (or fools) who are using the same false set of information.
But how could it be possible, for instance, for nearly every scientist in a field to be working together to promote a falsehood? People who believe this is possible simply have no practical understanding of how science works as a discipline.
A fabulous question. We’ll answer it in a moment. But for now, keep the suspense. Dear reader, if you are comfortable with this tone, I suggest you read the entire post linked above. It has lots of good information about denialists, cranks, and other enemies of science.
If something strikes you as not quite right about the Hoofnagles’ tone, good. That means your head is screwed on right. However, as part of the procedure, we’ll need to expose you to an even more extreme example of it.
Warning: this may increase your heart rate. Warning two: please don’t click through this link to the blog Climate Progress, provided solely for reference purposes. Warning three: yes, the author of the words below is (as we’ll see) an influential man of real public authority.
Diagnosing a victim of anti-science syndrome (ASS)
In this post I’m going to present the general diagnosis for “anti-science syndrome” (ASS). Like most syndromes, ASS is a collection of symptoms that individually may not be serious, but taken together can be quite dangerous—at least it can be dangerous to the health and well-being of humanity if enough people actually believe the victims.
One tell-tale symptom of ASS is that a website or a writer focuses their climate attacks on non-scientists. If that non-scientist is Al Gore, this symptom alone may be definitive.
The other key symptoms involve the repetition of long-debunked denier talking points, commonly without links to supporting material. Such repetition, which can border on the pathological, is a clear warning sign.
Scientists who kept restating and republishing things that had been widely debunked in the scientific literature for many, many years would quickly be diagnosed with ASS. Such people on the web are apparently heroes—at least to the right wing and/or easily duped (see “The Deniers are winning, but only with the GOP”).
If you suspect someone of ASS, look for the repeated use of the following phrases: […]
Individually, some of these words and phrases are quite useful and indeed are commonly used by both scientists and non-scientists who are not anti-science. But the use of more than half of these in a single speech or article is pretty much a definitive diagnosis of ASS.
When someone repeats virtually all of those phrases, along with multiple references to Al Gore, they are wholly a victim of ASS—in scientific circles they are referred to as ASS-wholes.
A newly prominent ASS-whole is Harold Ambler, who managed to get this article past a HuffingtonPost intern over the weekend: “Mr. Gore: Apology Accepted.” I was not originally planning to post on this (unsourced) collection of long debunked denier talking points since, as regular readers know, my policy is not to waste time on the umpteenth debunking. Anyone who might be persuaded by Ambler’s tripe can do a simple search for each myth on RealClimate or on this blog. […]
As deniers or ASS-wholes go, Ambler is quite lame. Separate from his long list of long-debunked denier talking points, who could possibly take seriously somebody who wrote the following:
Mr. Gore has stated, regarding climate change, that “the science is in.” Well, he is absolutely right about that, except for one tiny thing. It is the biggest whopper ever sold to the public in the history of humankind.
Such a statement is anti-scientific and anti-science in the most extreme sense. It accuses the scientific community broadly defined of deliberate fraud—and not just the community of climate scientists, but the leading National Academies of Science around the world (including ours) and the American Geophysical Union, an organization of geophysicists that consists of more than 45,000 members and the American Meteorological Association and the American Association for the Advancement of Science (see “Yet more scientists call for deep GHG cuts”).
Such a statement accuses all of the member governments of the IPCC, including ours, of participating in that fraud, since they all sign off on the Assessment Reports word for word (see “Absolute MUST Read IPCC Report: Debate over, further delay fatal, action not costly”). And, of course, Ambler’s statement accuses all of the leading scientific journals of being in on this fraud, since the IPCC reports are primarily a review and synthesis of the published scientific literature.
Now, as Loyalists, what do you hear when you hear this tone? I know what I hear. What I hear is Samuel Adams, James Otis, Jr., and Joseph Hawley. The distinctive whining scream of the Puritan, speaking power to truth in his usual fashion. Recognizable in any century.
Follow those last two links above, if you dare. Or don’t bother. What we see quickly is that, at least as regards AGW, we live in what might be called a scientific theocracy. You cannot slip a sheet of paper between Science and State. They are one and the same. Especially with our new, improved, pro-science administration, the only legitimate source of public policy on AGW happens to be… the very scientists who research it. (Professor Hansen is a fine example.)
Note that, if we substitute Science for Scripture, this is exactly the political structure of your Puritan theocracy, or your Persian theocracy for that matter. The same experts perform the intellectual analysis and dictate the resulting policies. Simple, clean, no muss, no fuss.
Of course, there is a considerable difference between Science and Scripture. And what, exactly, is that difference? We shall see in a moment. More suspense.
As always for the historian and general student of reality, the first question becomes: do we trust these people? It is possible that Science is such powerful juju that untrustworthy people, so long as they are Scientists, can be trusted. On the other hand, we would certainly want some support for this claim. And it can’t hurt to start with an assessment of individual credibility.
Normally, when we’re deciding whether to trust (say) Peter Oliver versus John Adams, we have only their words to go on. Dear reader, I invite you to test your critical faculties on the effusion above. Does it strike you as trustworthy? But fortunately, we are operating not in the past but in the present, and not in the domain of history but that of geophysics. We have more to go on.
The author of Climate Progress is one Joseph Romm. Who is Joseph Romm? His about box explains:
Joseph Romm is the editor of Climate Progress. Joe is a Senior Fellow at the Center for American Progress and was acting assistant secretary of energy for energy efficiency and renewable energy during the Clinton Administration. In December 2008, Romm was elected a Fellow of the American Association for the Advancement of Science for “distinguished service toward a sustainable energy future and for persuasive discourse on why citizens, corporations, and governments should adopt sustainable technologies.” Read what Wikipedia has to say about Joe.
(Do read what Wikipedia has to say about Joe. It has a distinctly, um, self-edited flavor.)
Here’s the problem, AGW credulists. The problem is: I know Joe Romm. And I know, without a doubt, that he is a foul creature of the night. Sadly, I cannot share this deep truth through direct osmosis, but we will arrive at it by and by.
Okay, I don’t know Joe Romm. But my mother knows Joe Romm—to be more exact, she worked for him at DOE—and I trust my mother. Here is her recollection:
Oh, yes. Romm was one of three who loaded me with work for my first few months with Energy Efficiency and Renewable Energy. He was Deputy Assistant Secretary, and ran the show with Christine Ervin (Assistant Secretary) and Brian Castelli. Christine finally got two inches from my face and announced that I was supposed to be working for her alone. Romm promulgated the idea that he was the smartest person to ever enter Forrestal. He used to regularly win the Washington Post contests for creating the best caption for captionless cartoons. Maybe that was it. At any rate, he got annoyed with me the time three of us went up to the Hill to one of the staffers on an authorization committee trying to gain turf. I was supposed to be carrying budget analysis to help, but there had been little time to prepare. The meeting was a disaster (the staffer being a lot smarter than Romm), and in the taxi back I had to listen to him blaming me for getting the numbers wrong (I can’t even remember whether they were). Shortly afterwards I was assigned a windowless office during a general office move and had plenty of time on my hands. By the way, he once borrowed from me your copy of Easterbrook’s A Moment on the Earth, apparently in order to disparage the “opposition.”
What does this tell you? Not a lot. It is just a snapshot of the world Joe Romm lives in. Notice, however, that my mother’s snapshot of Joe Romm’s world does not, in any way, resemble the image of Joe Romm’s world that you get from Joe Romm’s blog.
Basically, my mother got involved with this world by accident. More or less everyone else in EERE was there because they were true believers. My mother was there because her kids had gone to college, and she needed a job. So she wound up as a budget and policy analyst, working for the true believers.
This drove my mother up the wall. She is basically an honest person. She does not have the skill sets to work effectively as a member of a criminal organization, and she certainly did not expect the United States Department of Energy to be anything of the sort.
Yes: that’s exactly what I said. Joe Romm should be in prison. James Hansen should be in prison. Michael Mann should be in prison (and not for making Heat). These people are criminals. Sadly, no one will be arresting any of them any time soon.
What my mother found at EERE was a sort of giant, Potomac-shaped hog-trough, dispensing a billion or two a year to grunting Beltway bandits packed shoulder-to-shoulder around a vast open sewer of hot, juicy, delicious cash. This is, of course, the iron triangle of Washington fame. (I think the triangle should include at the very least the press, making it a square, which would let us add Andrew Revkin to our fantasy arrest list. All you coup plotters out there, listen up. These guys are all buddies—you can probably nab all four at the same Super Bowl party.)
In order to keep said open sewer open, EERE planners (such as my mother) had to go through the following process: they had to analyze a constant flow of scientific and engineering information from the renewable-energy researchers they supported (typically experienced recipients of such grants, which is why they call them “Beltway bandits”), decide which technologies seemed promising and which did not, support the former and cut the latter.
Now: my mother was at DOE in the mid-90s. How many successful renewable-energy technologies can you name that came out of DOE in the mid-90s? Or came out of anywhere in the mid-90s? Or came out of anywhere at all? What are the successes of renewable energy?
For that matter, even today, how many press releases have you seen reprinted in your newspaper of choice, promising that renewable-energy technology X—algae biofuel, perhaps, or Stirling engines, or thin-film solar panels; the list is endless—would hit the market a year from now, two years from now, five years from now? For how many years have you been seeing these types of announcements? How many renewable-energy technologies have hit said market?
The reason, of course, is that most of these technologies simply don’t work. At least, not in the sense of being even remotely cost-effective. Of course, one can still tinker with them, and one never knows how tinkering will turn out. But what would happen at EERE, over and over again, is that some research program would promise result X by year Y, fail, add 1 to Y, and get more money for next year.
My mother’s job was not to evaluate renewable-energy technologies. It was to pretend to evaluate renewable-energy technologies—creating the essential illusion of science-driven public policy. Since everyone involved in this process understood that it was a farce, you can imagine the quality of the data. Meanwhile, as usual in Washington, how much money you got depended on how many friends in the right places you had. This tends not to change much from year to year, resulting in remarkably consistent budget allocations.
In other words, my mother’s work was bullshit in the best Frankfurtian sense. Some might get a kick out of this, but she is just not the type. And at the time, AGW was not the big thing it is now. So the open sewer seemed picayune. A billion here, a billion there. It sounds big to the hoi polloi, but of course it isn’t. What was not obvious in the late ’90s is that, if you can steal billions, you can steal trillions. And that is a big deal.
But I am just describing the perspective from which I, personally, arrived at AGW. You don’t know me, my mother, or Joe Romm. So we’ll need to actually consider the science—or Science, as the case may be.
But first, I want to praise Joe Romm. Because, unlike the paladins of light in this department (foremost, of course, the great Steve McIntyre—note the difference in tone), Joe Romm knows what’s at stake. Read this again:
Such a statement is anti-scientific and anti-science in the most extreme sense. It accuses the scientific community broadly defined of deliberate fraud—and not just the community of climate scientists, but the leading National Academies of Science around the world (including ours) and the American Geophysical Union, an organization of geophysicists that consists of more than 45,000 members and the American Meteorological Association and the American Association for the Advancement of Science.
Such a statement accuses all of the member governments of the IPCC, including ours, of participating in that fraud…
And it’s very interesting that we hear this from the AGW credulists, rather than the denialists. Your average AGW denialist does not want to go there. He wants the problem to be isolated. The last thing he wants is for the scientific community broadly defined, or even worse all the member governments of the IPCC, to appear in his crosshairs. (For example, McIntyre, probably quite wisely, snips all political discussion in his comments.)
For UR, the matter is just the opposite. We already suspect that these governments are Orwellian and corrupt. After all, once you’re a Loyalist, the question is settled by definition. So we are happy to hear Joe Romm’s description of the stakes. For once, he is exactly right.
Again, the problem is boolean. There is no continuum, only two perspectives.
From the viewpoint of the AGW credulist, AGW is a critically serious problem, perhaps even an emergency; AGW research is essential spending; public concern about AGW is a sign of prudent, educated citizenship; and the public-policy measures recommended by AGW researchers, such as carbon controls, are a matter of national importance.
Let’s consider, for a moment, the amazing position of the AGW credulist—not the researchers and the bureaucrats, just the ordinary schmoe who is asked to believe in this stuff. The credulist is seriously, deeply, personally concerned at a political level about the concentration of gases in Earth’s atmosphere.
My favorite introduction to American history is this 1901 essay by Charles Francis Adams, Jr., in which our historian examines the controversial issues in every Presidential election from 1856 to 1900, lamenting somewhat over their general detachment from reality. I suspect that Adams, despite his obvious sang-froid, would be truly amazed by the appearance of atmospheric chemistry in the American political mind.
But this proves nothing. As promised, we need to consider the matter from scratch. What is the Loyalist position on AGW? What we’ve established is that it walks like Puritan hysteria, it talks like Puritan hysteria, and it smells like the Devil himself. But we are better than that. We’d like to actually evaluate the matter.
What, exactly, is AGW? What is science? And what is the relationship between the two?
AGW is the result of an effect described by Arrhenius in the late 19th century, in which CO₂ in the atmosphere absorbs outgoing infrared radiation and re-radiates part of it back toward the earth. There is no dispute as to the existence of this effect, or the increasing levels of CO₂ in Earth’s atmosphere, or the fact that this trend is produced by people burning fossil fuels.
Important facts to remember are (a) that the temperature increase is proportional not to the CO₂ level but to its logarithm (this is undisputed, but I have never, ever seen an AGW credulist mention it directly), meaning that each doubling of CO₂ produces a constant increase in total radiation; (b) that at present rates of fossil fuel use, CO₂ will be double its present value by 2255 (of course, fossil fuel use could increase, which would bring this number in—let’s pull a round figure out of our asses, and call it 2100); and (c) that doubling CO₂ increases total radiation by roughly 3.8 W/m² over the present value of 1366 W/m², or about 0.3%.
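The arithmetic in (a) and (c) can be checked in a few lines. The 5.35 W/m² coefficient below is a commonly cited simplified approximation for CO₂ forcing; it is my assumption for illustration, not something the text above asserts:

```python
import math

# Simplified forcing approximation (assumed, not from the text):
# delta_F = 5.35 * ln(C / C0) W/m^2
def forcing(c_ratio, k=5.35):
    return k * math.log(c_ratio)

doubling = forcing(2.0)   # forcing from one doubling of CO2
solar = 1366.0            # total incoming radiation figure used above, W/m^2

print(round(doubling, 2))                 # ~3.71 W/m^2, in the ballpark of 3.8
print(round(100 * doubling / solar, 2))   # ~0.27%, the "about 0.3%" above

# The logarithm is why each doubling adds the same increment:
# quadrupling CO2 yields exactly twice the forcing of doubling it.
assert abs(forcing(4.0) - 2 * forcing(2.0)) < 1e-9
```

Note how the log law works against alarm: going from 2× to 4× CO₂ buys no more forcing than going from 1× to 2× did.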
And how much temperature increase will this cause? The answer to this question is called the climate sensitivity—the function that maps an increase in incoming radiation to an increase in atmospheric temperature. (The link is to a denialist site, but there is no argument over the concept.) What is the best scientific estimate of Earth’s climate sensitivity?
Let’s postpone this question for a moment. It requires us to define science. Or Science.
Here, sadly, we must part from Joe Romm. His definition of Science is clear. Science is that which is done by scientists. Scientists are people employed, with the title of professor, by the universities. The universities are accredited by Washington. Therefore, Science, in Joe Romm’s mind, can be defined as official truth. Let’s stick with the capital letter for this one.
Note that if we replace Science with Scripture and scientists with ministers, we are back in the Massachusetts Bay Colony. We’ve reduced the scientific method to the following statement: Washington is always right. But surely not even the sage who gave us “ASS-whole” is crass enough to endorse this principle.
The conventional explanation of why science, with minuscule s, works so well, is due to Karl Popper and his concept of falsifiability. Whole forests have been cut down over this issue, but here at UR we have a very simple interpretation of falsifiability, which we’ll now share.
The unusual trustworthiness of science, despite the fact that scientists are humans and humans are not generally trustworthy, exists when (a) hypotheses are falsifiable, and (b) the professional institutions within which scientists operate promote, broadcast, and reward any falsification. We can trust a consensus of scientists on a problem for which (a) and (b) are true, because we are basing our trust on the fact that, if the hypothesis is false, a large number of very smart people have tried and failed to discover its error. This is not, of course, impossible. But it is at least unlikely.
So we have two definitions, and our $64,000 question: is Science science? That is: is the official truth of AGW, which claims the high credibility produced by Popperian falsifiability in a functioning system of critical feedback, in fact justified in claiming this credibility?
The answer is easy: no.
To understand the impact of increased CO₂, we need to know the climate sensitivity. Q: How can scientists, at least Popperian scientists, evaluate the climate sensitivity? A: They can’t. There is no falsifiable procedure which can estimate climate sensitivity.
To estimate climate sensitivity, all you need is an accurate model of Earth’s atmosphere. Likewise, to get to Alpha Centauri, all you have to do is jump very high. The difference between the computing power we have, and the computing power we would need in order to accurately model Earth’s atmosphere, is comparable to the difference between my vertical leap and the distance to Alpha Centauri. For all practical purposes, climate modeling is the equivalent of earthquake prediction: an unsolvable problem.
If you want to see this argument laid out in detail, read Pat Frank’s article in Skeptic. To my mind, all this detail about error bars simply obfuscates the fact of an unsolvable problem. The general circulation models (GCMs) that purport to simulate climate are interesting experiments, and it’s not unimpressive that they can be made to produce results that look at least reasonable. But they model the atmosphere with grid cells 100 miles on a side, and attempt to use this to predict the state of the atmosphere—a chaotic system—for the next century. This does not pass the laugh test.
There is simply no scientific way to verify or falsify the accuracy of any such piece of software. It is not practical to perturb Earth’s climate, perturb your model’s climate, and test that they both respond in the same way. And there is no other way to test a model. In the end, all you have is a curve that records past temperature, and a piece of software that generates future temperature. Perhaps if we could watch the predicted and actual curves match up for a century or so, we could generate something like statistical significance. But we can’t. And hindcasting—fitting the models to data from the past—overfits, and is completely worthless.
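A toy calculation shows why hindcast skill is worthless as validation. Everything here is invented for illustration: the “temperature record” is pure random noise with no trend at all, yet a model with sixteen free parameters fits it closely—and then extrapolates into absurdity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "past temperature": a flat series plus noise -- no structure whatsoever.
years = np.arange(1900, 2000)
temps = rng.normal(0.0, 0.2, size=years.size)

# "Hindcast": a degree-15 polynomial tuned to the past.
x = (years - 1950) / 50.0            # rescale years for numerical stability
coeffs = np.polyfit(x, temps, deg=15)
fit = np.polyval(coeffs, x)

# In-sample, the tuned model beats the trivial flat model...
in_sample_mse = np.mean((fit - temps) ** 2)
assert in_sample_mse < np.var(temps)

# ...but its "forecast" for 2000-2050 explodes, because what it captured
# was noise, not structure.
future_x = (np.arange(2000, 2051) - 1950) / 50.0
forecast = np.polyval(coeffs, future_x)
assert np.abs(forecast).max() > 10 * np.abs(temps).max()
```

A good hindcast, in other words, is evidence of nothing except that the model had enough knobs to turn.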
There are two fields of Science which contribute to the AGW conclusion: climate modeling and paleoclimatology. Michael Mann pioneered the construction of “hockey stick” graphs which appear to show “unprecedented” increases in temperature in the late 20th century. Even supposing that Mann was not a charlatan (see below), these curves would have no scientific meaning whatsoever.
It is fairly clear that Earth’s temperature has been increasing over the last few centuries, and that in the 20th century it rose from 1900 through the ’30s, fell from the ’30s through the ’70s, rose from the ’70s through the ’90s, and has been flat since the ’90s. What would it have done in the absence of increasing CO₂? Again, we have no way to know. We have no model. We cannot separate the curves. (This paper [large PDF] by Syun-Ichi Akasofu makes the point quite elegantly.)
Besides the fraud, what’s creepy about the hockey stick is that it implicitly argues causality by mere visual analogy. We see increasing temperature and increasing CO₂, so the two must be related. WTF? This is not the kind of argument that appeals to a scientist. It is the kind of argument that appeals to a voter.
What we are looking at here, I think, is what Feynman called cargo-cult science. GCMs and paleoclimatology look—to your average voter—like science with a small s. They perform huge numbers of intricate calculations, they collect vast quantities of data, and of course they are Science with a big S. It’s just that their efforts have no falsifiable predictive value. And what is much worse, they claim predictive value and are driving policy off it.
The justified arrogance of falsifiable science is such that, when science goes bad, it goes extremely bad. Langmuir’s description of pathological science is worth reading. Note that GCMs fit this profile quite well—they produce results where there should be only noise. However, it is not at all necessary to resort to erudite mathematical abstractions to catch these people in a lie. The mens rea is easy to find.
If you have any remaining doubt in the matter, here is one of Joe Romm’s posts in which, as usual, he accuses his opponents of being lying Trotskyist wreckers. In this post we see the following statement:
But I find it hilarious that the deniers and delayers still quote Christy/Spencer/UAH analysis lovingly, but to this day dismiss the “hockey stick” and anything Michael Mann writes, when his analysis was in fact vindicated by the august National Academy of Sciences in 2006.
What is Romm talking about? To understand the issue, read this PDF, then this. You’ll see that the word “vindicated” is—um—extremely unjustified. For those tempted to defend Romm on the grounds that he is a mere bureaucrat and doesn’t know better, note that he has a Ph.D. in physics from MIT. As I said: prison.
So: not only is the research behind AGW not falsifiable science, and thus not entitled to deference regardless of the personal trustworthiness of its promoters, its promoters are—in fact—snakes. It never rains but it pours. In fact, if you read Climate Audit on a regular basis, you see examples of gross scientific misconduct that would be career-ending in any legitimate field, perhaps once or twice a month. Mann’s (repeated) statistical manipulation is especially egregious, but not at all unusual.1
We also have (one) answer to the first question of the AGW credulists: how a scientific consensus can produce a fraudulent result. The answer is simple: the entire field is fraudulent. In a fraudulent pseudoscience, there is no incentive at all for uncovering error, because the only result of a successful dissent is to destroy your job and those of your peers.
Consider, for example, this comment by a climate scientist at Georgia Tech, posting at Climate Audit:

I am taking some heat for all this from my peers outside Georgia Tech. The climate blog police were very upset by my congratulations to Steve upon winning the best science blog award. A recent seminar speaker was appalled to be included in the same seminar series as steve and pat, and told me i [sic] was misleading my students. I got some support for what I am doing from a program manager at NSF who I spoke with recently, who appreciated my “missionary work” over at climate audit [sic]. Another NSF program manager is apparently not at all happy about this. Some people think that my participation over here in someway “legitimizes” CA; my participation over here is not all that relevant in the overall scheme of CA. I am fully aware that many of my peers think i [sic] am crazy for doing this.
Cargo-cult scientists have to circle the wagons like this. If they piss off the NSF program managers, their life expectancy as successful grantwinners is not impressive. Real scientists have no such need to be defensive, because their program managers actually want them to expose any errors in their field.
Thus we answer the initial Hoofnagle question: the source of coordinated error is not, at all, a conspiracy. It is simply the funding source. Nearly every scientist in a field can be working together to promote a falsehood because they all get their money from Joe Romm and company. And if the falsehood is exposed rather than promoted, there is no field left. It is no more surprising that all USG-funded scientists are unanimous in promoting AGW as a global emergency, than that all Philip Morris–funded scientists are unanimous in promoting tobacco as a vitamin.
What we’re looking at here is mainstream pathological science. This is a basic and unfixable flaw in the entire Vannevar Bush design for federally-funded science. Once cranks, quacks, or charlatans get a foothold in the NSF and/or the universities, and establish their quack field as a legitimate department of Science, they are there to stay.
The mainstream cranks will not expel themselves, and there is no mechanism by which another department can attack them. In theory they are vulnerable to the democratic political system (or, at least, the Republican political system), and as we’ve seen they play up this fear quite a bit. In practice, of course, they did quite a bit more damage to Bush than he did to them.
The incentive of all federally-funded science is the same: keep your funding, and try to get more. It is not that most scientists are “in it for the money.” It is that you cannot be a successful scientist, in this era, without being a successful bureaucrat. As such you respond to bureaucratic incentives, such as the feelings of your NSF program manager.
And we start to see how this entire disaster developed. First: out of genuine curiosity, people started trying to build climate models, measure CO₂, and the like. Second: since USG is not a charity, they had to apply for grants and describe the importance of their work. Third: they noticed, consciously or subconsciously, that an easy way to make their work seem more important was to predict disastrous consequences. Fourth: the same evolutionary feedback process that, in a falsifiable science, eradicates error, operated to promote it. Researchers and fields which produced more alarming results received more funding—because, by definition, their work was more important. Iterate to the point of sheer insanity, and you have the AGW research community we have today.
There remains one loophole by which AGW credulists may defend their position. They can say (although they don’t) that, even though there is no scientific way to estimate climate sensitivity, the very fact that we are poking Earth’s climate with a stick, with no knowledge of its effect, is itself cause for alarm. This is the famous precautionary principle.
Note that now we have completely abandoned the pretense of scientific public policy. This is excellent, because it allows us to think phronetically—using the ordinary tools of common sense—about whether CO₂-triggered warming is, or is not, a genuine problem.
Here is a thought-experiment that will resolve this easily for you. In a world with no fossil fuels and a stable CO₂ level, scientists studying the sun announce that they have (never mind how) scientifically determined that its intensity will increase by 0.3% between now and 2100. You are Dictator of Earth. How do you react to this information?
Do you (a) do nothing at all; (b) keep an eye on the problem, treating it as of roughly the same significance as, say, the possibility of a Sri Lankan tea blight; or (c) immediately embark on a geoengineering scheme to counterbalance the brightening sun and keep Earth cool?
Recall from Shaviv’s math that, if we ignore feedbacks and treat Earth as a black body, the expected climate sensitivity is about 1 degree Celsius. Perhaps this is in the rough neighborhood of the actual result, and perhaps it isn’t. We also need to consider the most obvious effect of global warming, sea-level rise. The sea is rising at about two millimeters per year.
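The no-feedback figure can be checked on the back of an envelope. Here is a minimal sketch, assuming only the textbook Stefan–Boltzmann linearization and the canonical 3.7 W/m² forcing for a CO₂ doubling (the variable names are my own, not Shaviv’s):

```python
# No-feedback ("black body") climate sensitivity, via the
# Stefan-Boltzmann law: outgoing flux F = sigma * T^4, so a small
# forcing dF produces a temperature response dT = dF / (4 * sigma * T^3).

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
T_EFF = 255.0      # Earth's effective radiating temperature, K
F_2X  = 3.7        # canonical radiative forcing for a CO2 doubling, W/m^2

# Linearized response of a black body around T_EFF:
sensitivity = F_2X / (4 * SIGMA * T_EFF**3)
print(f"no-feedback sensitivity: {sensitivity:.2f} K per CO2 doubling")
```

The result is just under 1 K per doubling, which is where the “about 1 degree Celsius” above comes from; everything beyond that figure is feedback, which is exactly the part nobody can compute.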
First, realize how thoroughly un-terrifying these figures are. Even if you triple them. If, as Dictator of Earth, your worst problem is oceans that will rise a foot in a century, or air that will become three degrees warmer, you simply don’t have much of a problem. What ever happened to the Nazis? Perhaps aliens could invade? Being Dictator of Earth has to be more challenging than this. If your subjects can’t handle oceans that rise by a half-centimeter a year, perhaps you need to focus on breeding more intelligent subjects.
Our trick here was to replace the “artificial” increase of CO₂ with a “natural” brightening of the sun. These have identical effects on the Earth, and identical consequences for its residents. But only one has a narrative of guilt and redemption. What we see is that the results, stripped of their Puritan moral baggage, are just not all that terrifying. Environmentalists often play this game; in the classic Jesuitical fashion of the good old Black Regiment, they will talk guilt and redemption to those who want to hear guilt and redemption, and practical consequences to those more receptive to reality. The guilt and redemption are drivel; the practical consequences, as we see when we look at them on their own, are just not that serious.
Worse, we can even question the proposition that the human consequences of a mild warming are negative. For most of the 20th century, students of global climate made a simple assumption: warmer was better. We can see this in the names that previous generations of scientists applied to past warm periods, such as the Holocene Optimum and the Medieval Optimum. “Optimum” does not mean “worse.” To the researchers who invented these names, it was just obvious that a warmer climate meant warmer temperate regions, a more fertile Earth, and more human prosperity. This perception, reached without thought of controversy by serious researchers in the 20th century, is a genuine consensus that deserves our respect.
But in the age of AGW, there is no professional incentive for researchers to study the positive effects of warming climate, and a tremendous incentive for them to study the negative effects. Of course, if you only look at the research rather than the incentives which produce it, you will come away with the conclusion that warming’s negative effects vastly outnumber its positive ones. (Indeed, in the age of Puritan environmentalism, we can barely even express the thought that a human alteration to the environment might be in some sense benign.)
Again, we see both scientific and public opinion changing not to follow the truth, but to follow the funding. The entire AGW industry is thus best explained as an intellectual pathology of the 20th century’s disastrous decision to convert disorganized, decentralized, and unofficial science into organized, centralized and official science.
This gives us our policy prescription: end all official funding of science, especially in cases in which the output of the science drives public policy. If a government is to rely on the advice of scientists, it must make sure that it is relying on actual, falsifiable science, and that the institutions producing that science have no incentive to produce anything other than the truth. The obvious way to do this is to separate science and state, for the health of both.
In a healthy society, people would still study the Earth’s climate. They might even try to model it. But they would do so for the original motivation of science: curiosity. Today, bright young people go into the environmental sciences because they offer quite a different attraction: power. The sense of status and importance held by a James Hansen, or even a Joe Romm, is hard for such as you or me to even imagine.
A key aspect of this is not merely that the AGW researchers, their protégés, and their little academic empires survive and grow, but that their advice is taken by the State—and, as a result, has what many people in the trade call impact. Of course this is just a name for power, and those who have it find it so pleasant that they are seldom inclined to consider whether they are using it for good or for evil.
If you surf from Climate Progress to Climate Audit, the change from the world of funding and impact to the world of skepticism and curiosity is unmistakable and infinitely refreshing. The former is an NGO, supported by nameless and sinister fat cats. The latter has a tip jar. ’Nuff said. Someday, all of science will return to the attitude and methodology of a Steve McIntyre, and its Washingtonian captivity will seem like no more than a bad dream.
It is almost embarrassingly easy to debunk 20th-century macroeconomics. Indeed, after its failure to predict yet another vast cataclysm, one might think the field had met its end.
And indeed, when we see mainstream articles with titles like “How the Entire Economics Profession Failed,” we might seduce ourselves into the pleasant, Candidean belief that the “entire economics profession” was ready to resign its sinecures and seek new employment in the lawn-care industry. Ah, if only. Yves Smith has links to a couple more pieces in this vein. Alas, they are all equally clueless.
For example, it is remarkably easy for Professor Madrick (above) to escape from the titanic disaster he seems to describe. Not counting Marxists, there are three significant schools of economic thought today: one founded by Lord Keynes and revitalized by Paul Samuelson (also known as “economics”), one founded by Irving Fisher and revitalized by Milton Friedman (also known as the Chicago School), and one founded by Ludwig von Mises and revitalized by Murray Rothbard (also known as the Austrian School).2
As a rough guess, there are ten Keynesian professors for every Fisherite, and twenty Fisherites for every Misesian. Only Keynesians and Fisherites have an influence on public policy today. And, if you read Professor Madrick’s article, he is a Keynesian and not interested in quitting his job at all. Oh, no. What he turns out to mean is that monetarist (i.e., Fisherite) economics has failed. What appears to be a mea culpa is simply a dishonest attack on the competition, rendered in the same sneering, Stalinist tone we have just seen in our AGW section, by a bureaucrat whose resume makes him sound exactly like the Joe Romm of economics. (If nothing else, dear reader, you now know what it sounds like when power is spoken to truth.)
You may ask: why is it that Misesian economics has no influence on government policy? There are many ways to divide the profession (and I’m sure some would quibble with the classification above), but there is one simple division: we can divide economics into orthodox economics and new economics. Keynes and Fisher are new economics. Mises is orthodox economics.
These terms may seem a little strange. Why is new economics, which dates to the ’20s, mainstream, and orthodox economics—which also dates to the ’20s—shunned? And from the tone that the Keynesians and monetarists use to describe Austrians—when they deign to describe them at all, which isn’t often—you’d think orthodoxy was the other way around.
But in fact, I am using the term orthodox in much the same way as Keynes himself. As anyone who has read Hazlitt’s essential Failure of the “New Economics” will know, the Baron was anything but a precise thinker, but he generally uses the term orthodox to describe 19th-century, or at least pre-WWI, economics. This certainly would include Mises, whose school is the only real 20th-century survival of anything like what Victorians called economics.
I have a very simple, precise definition of orthodox and new, which matches Keynes’ usage and seems reasonably serviceable to me. Let’s say an orthodox economist is an economist who believes that any supply of money is adequate, and the money supply should be either fixed or bound to a commodity whose supply is very difficult to expand, such as gold. A new economist is a believer in an “elastic currency”: he believes that the amount of money in a country should expand as the country “grows.” Typically this involves a belief in paper money.
By this definition, it is indeed the new economics (of Keynes and Fisher) which has failed. It has failed totally and completely, it is morally and intellectually bankrupt, it has inflicted vast suffering on humanity, and if there was any justice its acolytes would be packing their bags one jump ahead of the law. They’re not, of course.
When we remember that the world did, in fact, exist before 1914, we find it quite easy to justify the term new economics. Returning to our favorite Charles Francis Adams essay, for instance, we find the following trenchant passage:
The currency debate presented three distinct phases: first, the proposition, broached in 1867, known as the greenback theory, under which the interest-bearing bonds of the United States, issued during the Rebellion, were to be paid at maturity in United States legal tender notes, bearing no interest at all. This somewhat amazing proposition was speedily disposed of; for, early in 1869, an act was passed declaring the bonds payable “in coin.” But, as was sure to be the case, the so-called “Fiat Money” delusion had obtained a firm lodgment in the minds of a large part of the community, and to drive it out was the work of time. It assumed, too, all sorts of aspects. Dispelled in one form, it appeared in another. When, for instance, the act of 1869 settled the question as respects the redemption of the bonds, the financial crisis of 1873 re-opened it by creating an almost irresistible popular demand for a government paper currency as a permanent substitute for specie.
This passage was written in 1901. Note Adams’ perception of the paper-money advocates: they are insane, demagogic monetary cranks. Curiously enough, this is exactly how the responsible mainstream intellectual of today regards a Misesian, or any other gold-standard advocate.
Isn’t this an interesting reversal? Doesn’t it remind you slightly of our last case? Remember how the AGW promoters, shepherding a pseudoscience which has become mainstream, are so eager to dismiss their critics as pseudoscientists. These reversals happen for a reason: if you’re a quack, quackery is what you know, so the obvious way to dismiss your critics is to label them as quacks. The approach is especially attractive for the mainstream quack, who knows that faced with a pair of arguing experts, each of whom claims the other to be a quack, most spectators will pick the one who has wormed his way into the more prestigious position.
Thus we have our hypothesis already: the “Fiat Money delusion” somehow worked its way into the mainstream, displacing the old, orthodox “hard money” economics. Since it is clear that, 75 years or so later, some school of economics has failed, and since hard-money economics has been long displaced from the temples of power, the simple answer seems clear. Now, let’s try to understand it.
First, both the Keynes and Fisher schools are what a Misesian would call inflationist. (Adams would probably use the same word, too.) That is: they believe that expanding or otherwise debasing the currency is on some or all occasions beneficial to the health of the State. Again, we note the accuracy of our terms: before the 20th century, in both European and Greco-Roman times, monetary debasement was considered the pathetic act of a sick, decaying polity.
We can separate the Keynes and Fisher schools based on their preferred vehicles for inflation. Keynesians think governments should inflate the money supply through deficit spending—the “stimulus” we have grown to love so dearly. Fisherites think the best way to inflate the money supply is by fixing interest rates, a policy sometimes known as “easy” or “cheap” money. I’m afraid that, with AmeriZIRP in full swing, the Keynesians have rather the best of it. Perhaps we can give Professor Madrick credit for being right about that.
So the “new economics” does, after all, live up to its name. It is a product of the 1920s and ’30s, when Britain discovered that her World War I debts would not allow her to stay on the classical gold standard that she once had established—at least, not at the now-overvalued prewar parity. There was too much paper and not enough gold. The failure cascaded, the world switched to paper money, and a new economics was needed. Under which “going off gold” was not a failure at all, but in fact a step into a brighter new world.
Who was right? Was the end of the classical gold standard a disaster? Or were the old orthodox economists just a bunch of no-fun fuddy-duddies, who didn’t get it at all? And if so, how did they metamorphose from fuddy-duddies into nutball cranks?
First, as we’ll see below, it’s easy for us to dismiss the inflationists on logical grounds. Inflationism simply cannot be right. It violates logic. Nothing can violate logic.
Second, an orthodox economist need not be a goldbug. The difference between paper and gold, as monetary goods, is immaterial. People hold money to defer consumption into the future, not for the industrial qualities of the money itself. Gold makes a good monetary system not because gold is “intrinsically” valuable in some sense, but because the supply is strictly limited. Ideally, there would be no new gold mining at all. And we can duplicate this effect with paper money, by issuing a certain number of notes and double-promising not to issue any more. (The advantage of gold is that the promise is a lot more credible.)
Rather, the difference is between a hard or inelastic currency, and a soft or “elastic” one. The former cannot be inflated; the latter can. An ideal hard currency has no new supply.
The key fact about money is that what matters to you is not how much money you have, but what fraction of the total money supply you have. It is the latter that determines your power to exchange money for other goods, in competition with other moneyholders. E.g.: if, following Hume’s Archangel Gabriel, we turn every dollar into two dollars (being careful to adjust debts as well), we have changed nothing.
Even simple inflation—printing money and spending it, Keynesian style—can be emulated with an ideal hard currency. To “print” new money in this currency, simply confiscate it pro rata from all present holders of the currency. E.g., if you want to print 1/100th the present money supply, find every dollar in the world, pay its owner 99 cents, and use the leftover pennies to fund your plan.
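The equivalence is easy to check with a toy economy. A minimal sketch (the holders and amounts are invented for illustration): printing new money ΔM and spending it dilutes each holder to the same fraction of the supply as a pro-rata confiscation at rate r = ΔM/(M+ΔM), with the take spent.

```python
# Two ways to implement "print 1% of the money supply and spend it":
#  (a) elastic currency: create new dollars out of thin air;
#  (b) ideal hard currency: confiscate pro rata from every holder at
#      rate r = dM / (M + dM), then spend the take.
# Each holder ends up with the same fraction of the total supply either
# way, and that fraction is all that purchasing power is.

holders = {"alice": 600.0, "bob": 300.0, "carol": 100.0}  # invented amounts
M = sum(holders.values())     # total supply: 1000.0
dM = M / 100                  # "print" 1% of the supply: 10.0

# (a) Elastic: supply grows to M + dM; every share is diluted.
shares_elastic = {k: v / (M + dM) for k, v in holders.items()}

# (b) Hard: supply stays at M; every holder is clipped at rate r.
rate = dM / (M + dM)          # roughly 0.99 cents per dollar
shares_hard = {k: v * (1 - rate) / M for k, v in holders.items()}

# The two books are identical, share for share.
for k in holders:
    assert abs(shares_elastic[k] - shares_hard[k]) < 1e-12

print(f"confiscation rate equivalent to 1% printing: {rate:.4%}")
```

This is why the “pay its owner 99 cents” figure above is approximately, not exactly, right: printing 1% of the supply corresponds to clipping about 0.99 cents from each dollar, since the take itself stays in circulation.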
The effect of this policy is precisely the same as that of inflating an elastic currency, although the elastic implementation is much more straightforward. Perhaps this is the advantage of elasticity. But it avoids the critical question, which is why we’d want to do this in the first place. Oddly enough, although we know the two are semantically identical, the inflation option seems much more fair and reasonable than the confiscation option. And even Adams seems to acknowledge that, although an elastic currency may be pernicious, it is desired by many.
Keynes and Fisher did not propose inflation as an all-purpose stimulant for general fun. They proposed it as a cure for economic recessions and depressions, which were certainly in no short supply at the time. We are entering a recession or depression now, so it seems wise to revisit the issue. Is cocaine a good remedy for depression? Why do so many people want to inflate?
Again, the answer is easy. What we see in a recession or depression is a drop in consumer spending. Since spending is the flip side of production, we can think of the GDP (the sum of the prices of all goods and services sold by businesses to consumers) for any country as the amount of money spent on that country’s goods and services. If that number falls by, say, 5%, the average business in the country has produced 5% too many goods and services.
Obviously, this is quite painful. And it also gives rise to calls for inflation—or, to use a more precise term, monetary dilution. There is an easy way to correct the situation to our business’s satisfaction: print 5% more money, and spend it on goods and services. Hence the “stimulus.”
If we switch back to hard-currency mode and look at what we’re doing, it is even weirder. In order to prop up consumer demand, we steal one nickel from every holder of a dollar, add it all up, and spend it on goods which we throw away. Is this healthy? Keynes thought it was.
Basically, the way to perceive the “new economics” is in exactly the same way that Adams perceived it: not a sane government policy, but a response to pressure groups. Fortunately or unfortunately, those pressures were a lot stronger after WWI than before it, and sound money went the way of the dodo. So, for example, our pressure group here is the business owner. Farmers in debt also tend to do quite well with inflation. But, again: any monetary debasement can be modeled as a monetary transfer.
As in the case of AGW, we ended up with “new economics” because that was what Washington wanted to hear. The case is the same today: Barack Obama’s “stimulus” proposal involves doubling Federal discretionary spending, i.e., everyone’s budget. Obviously, this makes quite a few people very happy. And it probably spreads the loot around a little better than if we were just to give it all, up front, to Tony Rezko.
Hence the death of orthodox economics. The orthodox economists of the 19th century, the believers in sound money, were not in general policymakers. They viewed their task as one of describing the economy, not controlling it. But in the ’20s and ’30s, when university men started to move into government, politically palatable solutions were needed. The Austrians and other orthodox economists had nothing of the sort. So they were left out of the pie when all the power got distributed, and today they have no government jobs and only a few marginal academic ones.
What at least the Austrians had, however, was an accurate understanding of the disease that the Keynesians and Fisherites were trying to treat—the pattern of repeated booms and busts. The “new economists” called it the “business cycle,” a term implying some endogenous origin in the commercial community—a community which, coincidentally or not, tended to align with Harding and Coolidge rather than with Hoover and FDR. Bankers and economists, by contrast, tend to be more left-wing.
“Business cycle” is an extremely misleading phrase. A better phrase would be banking cycle. As I discussed in “The Misesian explanation of the bank crisis,” the cause of the recurrent panics and collapses is a bad accounting practice in the Anglo-American banking system, generally known as maturity mismatching.
A maturity-mismatched bank, which is any bank today, writes promises of money it doesn’t have—yet. It “borrows short and lends long,” balancing short-term liabilities (such as checking deposits, whose term is zero, as they can be withdrawn at any time) with long-term assets (such as mortgages paid over 30 years). Sometimes appearances can be deceiving. Sometimes something that sounds like a bad idea is actually just a bad idea.
Without going into too much detail, suffice it to say: while a maturity-mismatch structure is not quite the same thing as a Ponzi scheme, they both have a tendency to collapse catastrophically in a cloud of dust, leaving investors with a lot less money than they thought they had.3 Effectively, maturity mismatching lets banks teleport money from the future into the present. What’s bad is that this is inflationary, and what’s worse is that—when the scheme collapses—the inflation reverses. This creates your recessions, depressions, etc.
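The mechanism is easy to see on a toy balance sheet. A minimal sketch, with invented figures: a bank that is “solvent” by the accounting, yet fails the moment more than a sliver of its zero-term depositors show up at once.

```python
# Toy maturity-mismatched bank: zero-term liabilities (demand deposits,
# withdrawable at any moment) backed mostly by 30-year assets
# (mortgages). Assets equal liabilities, so the bank is solvent on
# paper -- but a modest run exhausts its cash, because long-term
# assets cannot be converted to present money at par on demand.

deposits  = 100.0   # demand liabilities: term = 0
cash      = 10.0    # the only asset available "now"
mortgages = 90.0    # assets maturing over 30 years

assert cash + mortgages >= deposits   # "solvent" by the accounting

withdrawal_demand = 25.0              # a quarter of depositors want out today
if withdrawal_demand > cash:
    print("bank run: owes", withdrawal_demand, "now, holds", cash)
```

The depositors collectively believe they hold 100 present dollars; only 10 exist. The missing 90 have been teleported from 2039, which is the inflation; when the run forces the teleportation to reverse, so does the inflation.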
So we now have a perfect understanding of the origins of Fisher–Keynes inflationism. It exists not because it makes sense but because politicians desire it. Politicians desire it as a palliative for the deflationary conditions of a maturity crisis (or any other crash). In the 19th century, such crashes were often described as “shortages of money” (meaning shortages of present money). And printing will certainly solve that.
It’s important to note that while maturity-mismatch inflation has a reverse gear, and so do the open-market operations used for Fisherite monetary policy (these can either create money or retire money), Keynesian spending does not. This is a pattern that leads to long-term monetary decay: first, maturity mismatching inflates the economy and creates a huge amount of debt; second, a maturity crisis triggers a panic, the debt goes bad, and the country enters depression; and third, massive doses of Keynesian heroin are injected into its aorta, waking it up. Sadly, it will need more heroin tomorrow—and so on.
What a sane and healthy government tries to avoid is inflation dependency. This addiction is a state in which a substantial percentage of consumer spending originates in newly printed or lent money. For example, before the real-estate crash, about 5% of US GDP was home-equity withdrawals—money teleported out of the future, and into thin air. Most banks have stopped providing this service, leaving a mortgage-equity-withdrawal-shaped hole in US GDP. But President Obama will fix it, of course, with his wonderful stimulus.
We start to see how appalling the Keynesian stimulus is. First, it replaces one addiction—the vanished “home ATM”—with a new one, Federal money. Second, budgets in Washington do not get cut, at least not routinely, so the stimulus, and the dependency on it, will be permanent.
And third, when we do this, we shift a substantial percentage of private economic activity into the hands of Washington’s finest, who never turn down either money or power. It is probably a coincidence that the inauguration of The One coincides with the Congressional murder of America’s handmade toy industry (thanks, Ralph Nader—no, really). But it is a bit symbolic. We are heading for Brezhnev faster than most of us think.
At a higher level, both monetary policy and Keynesian stimulus pretend to be cures for the banking cycle. Neither claims to understand it at all, but both have been promising to eliminate it for the last 75 years. This has not happened, of course. The remedies are palliatives for the destructive effects of the collapses, but this is like taking cocaine for your strep throat. What it really needs is a specific cure, i.e., antibiotics.
To end the banking cycle permanently, our existing structures of long-term debt which back short-term liabilities need to be restructured. One way to do this is the classic Austrian approach: let everything collapse. If we were actually on the gold standard, this might well be our only option—but we’re not. It is much easier to transition to a fixed-supply fiat currency, which is in fact harder than gold (because there is no new production at all).
Basically, the only painless, specific, and lasting way out of the banking cycle is to purchase all financial assets with freshly-issued dollars, then sell the assets and destroy the dollars paid for them, and start lending back up with new banks and maturity-matched accounting (Chapter 4). This is a full reboot of the financial system. Accept no substitutes. Yes, it involves some inflation, but the inflation is (a) one-time, and (b) pointed at the actual problem.
Once again, this is not going to happen—despite the fact that it should be obvious. There is simply no power in the world, not even obviousness, that can displace our present economics faculty, or dislodge them from their lock on policy.
They have tenure, after all. They’re scientists, which means that if you oppose them you’re an ASS. And they will remain in power until someone drives a tank or two into Harvard Yard—which, come to think of it, doesn’t sound like such a bad idea at all.
And last but not least: our third case study in adaptive mendacity under the democratic system, HNU or human neurological uniformity.
An HNU credulist believes that modern human subpopulations are neurologically uniform. In other words, genetic differences between races (if the term is even acknowledged) are of no behavioral significance. Especially committed credulists may believe that genetic differences between individuals are of no behavioral significance, or even that human behavior has not been shaped at all by evolutionary history—both forms of the “Blank Slate” hypothesis. (If you are new to the issue, you could do a lot worse than starting with Pinker’s book.)
You may, for instance, hear phrases like “we are all the same under the skin.” Are we? (And consider the behavioral correlates.) I suppose one could step back to a less-falsified point: “we are all the same under the skull.” Evolution, in this theory, is somehow attenuated by tissue depth. Do you want to go there?
As the authors of this new book put it: given the genetic history of the human species, global equality in any quantitative trait—physical or behavioral—is about as likely as dropping a handful of quarters and having them all land on edge. Of course, as reasonable thinkers, we are prepared to consider improbable propositions. If presented with extraordinary evidence.
What, sir, is your evidence for HNU? Oh, you don’t have any. I see. Once again, we find our new friend—the mainstream crank.
You’ll note the familiar chutzpah of quackery. Lacking any positive factual argument for their hypothesis, how do the spinmeisters of HNU credulism—from Stephen Jay Gould down—operate? The answer is a one-paragraph textbook in charlatanship. This maneuver takes a gallbladder the size of a basketball, but it works perfectly.
First: shift the burden of proof to the converse of your unsupported hypothesis, defining it as the null hypothesis—true until proven false. Second: raise the standards for proving it false to an absurd and unsatisfiable level. (See this for a typical attempt to clear the ever-rising bar.) Third: declare victory.
Thus: the moon is made of green cheese. You say the moon is made of moon rock and moondust, but you have no real evidence for this claim. Astronauts landed on the moon and brought home moon rock and moondust, but this is just a superficial layer of asteroid debris around the cheese. If they go again and actually drill this time, they’ll hit cheese. If they don’t, they didn’t drill deep enough. Regardless, the moon-rock theory remains highly speculative and unproven—it is probably “junk science” funded by lunar mining interests.
And it’s just another day in your worm-eaten medulla. Hey, don’t worry—we’ve all been there.
Here is a thought I distinctly remember thinking as a teenager, quite possibly after reading one of Stephen Jay Gould’s better essays on the early hominidae: “Boy, it’s a good thing Homo erectus went extinct. Because fortunately, racism is a lie, we are all the same under the skin, and once America educates the world all God’s chilluns will go to Harvard. But we’re obviously descended from less-intelligent hominids—and if those guys were still around, we’d have a real race problem.” A testament to the art of modern crimestop, which always finds a way to disable wrongthink by removing some tiny but essential component from one’s picture of reality.
I’ll assume you’ve succumbed to the wrongthink. If not, think about it for a while. Spend some time on the Internet. Draw your own conclusions. Then continue below—or, of course, don’t.
Since you’re no longer an HNU credulist, you must be an HNU denialist—i.e., one prepared to consider patterns of genotype–phenotype correlation in behavioral traits of modern human subpopulations. Terrible! But don’t worry—if you don’t mind keeping company with the dead, you’ll find yourself in the best of company.
I am apt to suspect the Negroes to be naturally inferior to the Whites. There scarcely ever was a civilized nation of that complexion, nor even any individual, eminent either in action or speculation. No ingenious manufactures amongst them, no arts, no sciences. On the other hand, the most rude and barbarous of the Whites, such as the ancient Germans, the present Tartars, have still something eminent about them, in their valour, form of government, or some other particular. Such a uniform and constant difference could not happen, in so many countries and ages, if nature had not made an original distinction between these breeds of men. Not to mention our colonies, there are Negro slaves dispersed all over Europe, of whom none ever discovered any symptoms of ingenuity; though low people, without education, will start up amongst us, and distinguish themselves in every profession. In Jamaica, indeed, they talk of one Negro as a man of parts and learning; but it is likely he is admired for slender accomplishments, like a parrot who speaks a few words plainly.
Now, if a man was to stand up and say this today, that man would be a racist. But let’s not forget, Hume wrote this in, like, 1500 B.C. or something. (He also wrote it when there were a lot fewer Negroes around.) As Hunter S. Thompson once put it, we’ve learned a lot about race relations since then. Don’t worry, SPLC—we welcome our new Mustiphino overlords.
Seriously: should the HNU denialist accept this invidious word, racist? Better yet, should he flaunt it like a homo? Obviously, a matter of personal taste. It depends how much you want to offend people. But there is one thing to note: the common meaning of racism implies the belief that ancestry is significant information in the context of common decisions about individuals.
It should be obvious that it is not. If you want to test a job applicant’s IQ, for example, give her an IQ test. Patterns of ancestry become useful only in decisions that affect large groups of humans in the aggregate. Governments, however, must often make such decisions.
Therefore, if you are an HNU denialist and someone asks you whether you’re a racist, you can ask him if he implies the above belief, which we can call racial essentialism. (The Nazis, of course, were big essentialists.) If he says yes, tell him no. If he says no, you can tell him yes.
One also must be quite a bit more careful than Hume with the words superior and inferior. This implies some quantitative ordering of overall personal worth, an idea one would expect Hume to be the last to accept. For example, consider the proposition that Jews tend to be better chess players than Negroes, whereas Negroes tend to be better dancers than Jews. Both halves of this statement may (or may not) be true, but neither can justify us in ranking the two races overall—unless our sole criterion of personal worth is either chess or dance. Which mine isn’t.
I will take the liberty of suggesting that Hume, had he known how touchy his descendants would become on this subject, would have said that Europeans tend to have higher labor productivity than Negroes. As measured in wages, this is an easily verifiable fact of no moral significance whatsoever. (In a society which permitted both European and Negro slavery, we could compare the cost of the capital rather than the price of the rental.)
For an intelligent person in the 21st century, it is unnecessary to be even slightly neurotic about the obvious statistical differences in the average talents of human races. It so happens that, in the world of 2009, a talent for solving differential equations commands a higher salary and a larger job market than a talent for playing musical instruments. But there are exceptions: Prince is much better compensated than you. Does that make him a better person? Who could possibly care? We each are who we are, we each make the best of it. Duh.
My ideal future is one in which governments pay at most minimal attention to race. If that makes me a racist, so be it. But Orwell just came in his pants.
Obviously, once you stop believing in democracy, it is easy to stop seeing the failure of this political design in societies with a high percentage of non-Eurasian genetic ancestry as a moral reflection on persons of non-Eurasian ancestry, and start seeing it as a mere engineering failure. I.e.: if Negroes are unsuited for representative government, the fault lies entirely with the latter. Europeans are unsuited for representative government, too—just slightly less unsuited.
It’s true that our planet, at present, hosts quite a few healthy humans whose present economic productivity is negative. But this is probably best explained as a case of mere misgovernment. Civilized societies in the past have found that the demand for menial labor is, at the right price, almost inexhaustible, and have flourished with a very high ratio of laborers to elites. If present political structures fail under these demographic conditions, the fault is—once again—with the political structures. (For example, colonial Spanish America thrived peacefully under royal government, and became violent and corrupt under republican institutions.)
Should governments, for example, consider race in their immigration policies? I can’t imagine why they would want to. Surely an effective immigration policy, by definition, is one that lets in desirable subjects and keeps out undesirable ones. Whatever your definition of desirability, there are surely far more effective ways to evaluate an applicant for immigration than examining his or her ancestry, or even a full genotype. Even if we had a genotype-to-IQ function, which of course we don’t (yet), by definition an IQ test is the most effective way to test IQ.
But enough defensiveness. Let’s see what the world looks like to an HNU denialist.
As usual, we all have a complete picture of reality as consistent with HNU credulism. As usual, we have no picture whatsoever of reality as consistent with HNU denialism—except, of course, for some sketchy and invidious stereotypes of what a “racist” should think. We have no interest in nibbling at these poisoned baits.
(But we will continue to use the word Negro, which has—or had—been the most standard and precise signifier for its signified since (according to my OED) 1555. Geeze, man, talk about freakin’ Orwell. It reminds me of an old Primitive Radio Gods track, which goes: “I got a god-given right to smoke whatever I like; / Tell me how it got given to you?” Of course, the verse refers to tha chronic, not the English language. Yet the principle is the same.)
In other words: you know the complete story of race relations in America—in the reality in which Negroes are best understood as Europeans with black skin. But now we have another reality. In that other reality, what is the story of race relations in America? Whatever it is, it can’t be the same story.
Perhaps you’ve seen this issue discussed before, and it tires you. The Negro problem has vast ritual importance in the modern American mind. A fresh perspective is essential. So:
Let’s say you were a person who didn’t care at all about the Constitution, and you wanted to take America back to the past and establish a new order of hereditary nobility. What could be more deliciously reactionary than that? Real, live nobles, walking around on the street. So let’s see what it would take to make it happen.
First, we need to define noble status. Our rule is simple: if either of your parents was a noble, you’re a noble. While this is unusually inclusive for a hereditary order, it is the 21st century, after all. We can step out a little. And nobility remains a biological quality—a noble baby adopted by common parents is noble, a common baby adopted by noble parents is common.
Fine. What are the official duties and privileges of our new nobility? Obviously, we can’t really call it a noble order unless it has duties and privileges.
Well, privileges, anyway. Who needs duties? What’s the point of being a noble, if you’re going to have all these duties? Screw it, it’s the 21st century. We’ve transcended duties. On to the privileges.
The basic quality of a noble is that he or she is presumed to be better than commoners. Of course, both nobles and commoners are people. And people do vary. Individual circumstances must always be considered. However, the official presumption is that, in any conflict between a noble and a commoner, the noble is right and the commoner is wrong. Therefore, by default, the noble should win. This infallible logic is the root of our system of noble privilege.
For example, if a noble attacks a commoner, we can presume that the latter has in some way provoked or offended the former. The noble may of course be guilty of an offense, but the law must be extremely careful about establishing this. If there is a pattern of noble attacks on commoners, there is almost certainly a problem with the commoners, whose behavior should be examined and who may need supplemental education.
If a commoner attacks a noble, however, it is an extremely serious matter. And a pattern of commoner attacks on nobles is unthinkable—it is tantamount to the total breakdown of civilization. In fact, one way to measure the progress that modern society has made is that, in the lifetime of those now living, it was not at all unusual for mobs of commoners to attack and kill nobles! Needless to say, this doesn’t happen anymore.
This intentional disparity in the treatment of unofficial violence creates the familiar effect of asymmetric territorial dominance. A noble can stroll anywhere he wants, at any time of day or night, anywhere in the country. Commoners are advised not to let the sun set on them in noble neighborhoods, and if they go there during the day they should have a good reason for doing so.
One of the main safeguards for our system of noble authority is a systematic effort to prevent the emergence of commoner organizations which might exercise military or political power. Commoners may of course have friends who are other commoners, but they may not network on this basis. Nobles may and of course do form exclusive social networks on the basis of nobility.
Most interactions between commoners and nobles, of course, do not involve violence or politics. Still, by living in the same society, commoners and nobles will inevitably come into conflict. Our goal is to settle these conflicts, by default, in favor of the noble.
For example, if a business must choose whether to hire one of two equally qualified applicants, and one is a noble while the other is a commoner, it should of course choose the noble. The same is true for educational admissions and any other contest of merit. Our presumption is that while nobles are intrinsically, inherently and immeasurably superior to commoners, any mundane process for evaluating individuals will fail to detect these ethereal qualities—for which the outcome must therefore be adjusted.
Speaking of the workplace, it is especially important not to let professional circles of commoner resistance develop. Therefore, we impose heavy fines on corporations whose internal or external policies or practices do not reflect a solid pro-noble position. For example, a corporation which permits its commoner employees to express insolence or disrespect toward its noble employees, regardless of their relationship in the corporate hierarchy, is clearly liable. Any such commoner must be fired at once if the matter is brought to the management’s attention.
This is an especially valuable tool for promoting the nobility: it literally achieves that result. In practice it makes the noble in any meeting at the very least primus inter pares. Because it is imprudent for commoners to quarrel with him, he tends to get what he wants. Because he tends to get what he wants, he tends to advance in the corporate hierarchy. The result, which should be visible in any large business without dangerous commonerist tendencies, will be a predominance of nobles in top executive positions.
And, of course, this should be especially the case in government… but enough. We’ve made the point.
And what exactly is that point? Well, three points.
One: this system is profoundly unhinged and bizarre, and completely inappropriate in anything like a sane, civilized society.
Two: it is—save for the change in terminology—a fairly close description of the present legal status of non-Asian minorities (NAMs) in present-day America. (Which is by no means the only modern government to adopt such a system.)
And three: applied to the cream of America’s actual WASP–Ashkenazi aristocracy, genuine genetic elites with average IQs of 120, long histories of civic responsibility and productivity, and strong innate predilections for delayed gratification and hard work, I’m confident that this bizarre version of what we can call ignoble privilege would take no more than two generations to produce a culture of worthless, unredeemable scoundrels. Applied to populations with recent hunter-gatherer ancestry and no great reputation for sturdy moral fiber, noblesse sans oblige is a recipe for the production of absolute human garbage.
Thus, the analogy of hereditary ignobility has given us HNU denialists a desperately-needed fresh perspective on the bezonian underclasses of the hardcore, female-welfare and male-criminal variety, whatever their race, color, creed or ethnic origin. (Amazingly, Boston still has Irish bezonians.) The underclass are infinitely depraved aristocrats, with the aristocrat’s economic role of extracting profit without productivity through the use or threat of violence. The women are concubines or queens, the men are warriors or barons. In terms of sheer, industrial-strength vice, the denizens of Professor Venkatesh’s world surrender nothing to the louchest rake of the Hellfire Club, and their capacity for random mayhem might shock even the Borgias.
That this orcish parody of aristocracy was created, in the lives of those now living, out of the certainly imperfect but generally functional pre-WWII American Negro subculture, through policies designed by “social scientists” who were in fact religious moralists in disguise, is one of the larger ironies of modern history.
But perhaps I overanticipate. Strangely (or not), most Americans are not familiar with the actual history of the modern American Negro. It shows a precipitous cultural decline in the second half of the 20th century—just as our system of ignoble privilege was established. This might be a coincidence, but then again it might not.
Before 1960, most Negroes had jobs, most Negro children were born to married parents, and most cities in America had thriving Negro business districts (such as Bronzeville in Chicago). All this is gone. But for a white-assimilated minority, often more mulatto than Negro, the community has simply been shattered. A time traveller from 1960 might be excused for thinking the country had spent the last fifty years in the savage grip of the Klan. Even the great Negro contribution to American music has sunk from the genius of jazz to the barbarism of rap.
Whereas to the HNU credulist, the second half of the 20th century was the golden age of the “African-American,” with historical achievements unseen since Periclean Athens. We have developed a remarkably wide parallax here. Let’s go back and see the world through the eyes of our old, discarded, worm-installed beliefs.
If we assume HNU, the standard story makes sense—to the extent that any perspective founded on nonsense can make sense. Without the obvious answer of genetic neurological disparities, the HNU credulist applies the proper Sherlock Holmes algorithm and assumes that, absent the impossible, the only alternative is the improbable.
Thus, he ascribes the depressing sociological statistics of American Negroes to mistreatment, past and present, by whites. I.e.: racism. In the era of slavery or the era of the lynch mob, this did not seem like much of a stretch. Surely it is at least the #2 suspect.
The HNU credulist of the Gunnar Myrdal era discovered two principal aspects of this problem. One: Negroes in America had no effective political power and were often discriminated against by the government, mainly state governments in the South. Two: Europeans in America generally disliked Negroes, and preferred not to associate with them (i.e., they were racists). Therefore, the Negro problem could be solved by (a) giving Negroes money and power, and (b) educating Europeans to like and respect their Negro brothers, who (respectable scientists assured them) were exactly the same as them, under the skin.
Fifty years ago, this prescription was not absurd. America took it. It didn’t seem to be working, so we doubled the dose. And so began the usual pattern of iatrogenic escalation. Far from curing the relatively mild social pathologies of the Negro community in the early 20th century, the Myrdal therapy aggravated them, converting small precancerous lesions into vast metastatic melanomas. Of course, this called for even more medicine. And so on.
As in AGW and KFM, the feedback loop has created a business of its own. America is now inconceivable without the race industry. It has added a Hispanic underclass to its Negro problem, and its disciples in Europe have created a remarkably similar Muslim problem.
Antiracism gained power in the United States through what we call the civil-rights movement. Perhaps a more precise name would be the black-rage industry, but we can compromise and settle for black-power movement. When you hear these words, you probably think of the “carnivorous” side of the whole circus, with Huey Newton, H. Rap Brown and Field Marshal Cinque, and not the “vegetarian” side, with Martin Luther King, Jesse Jackson, etc.
But from the perspective of European-Americans, the two acted as a perfect Mutt and Jeff act. Mutt said: I’ll kill you. Jeff said: That Mutt is a really bad apple, and if you don’t give me money and power he might well kill you.
To a Loyalist, this all sounds dreadfully familiar. Remember the pattern of the American Rebellion: the likes of Otis and Sam Adams raised hell, and the likes of Burke and Pitt explained that they were raising hell because they weren’t given enough money and power. Of course, the conciliations of the latter did precisely nothing to reconcile the former to British government.
Americans failed to grasp the fundamentally predatory nature of the black-power movement. Rather than suppressing it forcefully and restoring the rule of law, the worse it behaved the more they fed it. The result was, and is, a Negro population which has essentially seceded from mainstream American culture, to the tremendous disadvantage of both parties. The resulting ghetto culture remains marinated with black-power ideology, although it is now so distant from the lives of you or me that we only notice it when a Jeremiah Wright somehow swims into view.
And meanwhile, the official story is that this entire disaster is the result of racism—i.e., Europeans who dislike Negroes, deny HNU, or both. Consider the enormous guilt complex that so many Americans have laid on themselves for answering no to the question: “Do you regularly enjoy the company of African-Americans?” It is not enough for the State to force you to believe—it must also force you to like. Emotional tyranny is old hat for any good Puritan.
Lynchmobs and segregated lunch counters are a thing of the past, but the consequences once attributed to them have only gotten worse. Therefore, the campaign against racism must only strengthen. Consider the discovery of unconscious racism. The involuntary, concealed, guilt-inducing activation of the European amygdala somehow seems to do just as good a job, if not better, as any Klan mob of keeping the black man down. We must get rid of the amygdala! Coincidentally—or not—this racist organ is also the part of the brain activated when you or I feel fear. I can’t imagine why that would be.
Step back a moment and picture your fellow Americans, who are so confident that by electing a mulatto President (more money, more power) they have brought this astounding circus to an end. Quite the contrary. They have just fed it another lollipop.4
But this is nothing new, so the consequences should not be especially devastating. The circus is awful, but it is an old dog and capable of few new tricks. Contra Jared Taylor, I expect no American Zuma to follow our new Mandela. Though some other hell no doubt awaits us.
The policy solution here is obvious: eliminate the race industry, abolish all racial privileges, including laws against “harassment” and “discrimination,” and restore unconditional freedom of speech and freedom of association. Someday, sooner or later, probably later, all this nonsense will end up in whatever dusty closet we sent the segregated water coolers to. Our government will finally forget about race and treat individuals as individuals. And the entire country will party for a week—except those who need to be arrested.
Yes. This is what happens when you think for yourself. Suddenly, your mind is full of all sorts of completely unacceptable—but strangely logical—ideas. These three cases are probably the most spectacular, but the list could easily be extended. (The good news, however, is that you’ve swallowed the sodium-metal core, and your stomach seems to still be intact.)
The thing to note about these democratic feedback loops between public miseducation and official malpractice is their tremendous stability. As a believer in democracy, you expect the system to stabilize itself, the people to magically wake up, return to sanity, and seize control of their government. It is this dream from which you need to wake. It will never happen.
But what will? Perhaps we need another dose of therapy, after all.