There are three rules you need to remember if you want to survive grad school.
Rule (a) is: never go to grad school before you’re either old enough to drink, or old enough to have had a drink. Rule (b) is: never go to grad school without first having had a real job, that is, one which you for some reason were once tricked into actually giving a crap about, at least up till they hired that horrible woman with the bad hair. Rule (c) is: never stay in grad school.
Since I have broken only (a) and (b), and managed to restrain myself on (c), I feel that while I am certainly nothing special in the world, I have some right to present myself as scarred but not devoured. Granted, those who made it all the way into the whale, and especially those who have chosen to remain there, are often wonderful people, and one should in no way be embarrassed to have them as one’s friends. But sometimes one is unsure of their voices. It is hard to always be absolutely sure.
The overwhelming fact of the modern world is that universities are not merely the charming, bucolic gardens of knowledge that they pretend to be. Granted, they retain a few leafy spots. A tree or two, a neatly sprinkled lawn. But the modern American university is a machine, and its business end—which seems to command a rapidly increasing percentage of its abdomen—is certainly sharp and appears to be rotating. It tells us it has no intention of grinding us into paste, but it would be hard to design a more impressive tool for the task.
The charge that universities are directly responsible for almost all the violence in the world today, for example, strikes me as essentially accurate. I’m sure it will strike you as absurd at best, and libelous at worst. But if you can stop these reflexes before they engage, please ask yourself whether you have ever seriously considered whether this accusation is or is not, not in some ideal world, but on the actual Earth planet we inhabit right now as I am typing this, actually true or actually false.
Because if it is true, it sure as hell explains a lot of obviously-insane crap that is otherwise extremely hard for me to understand or make any sense of whatsoever. Perhaps others can offer a better story, but until I hear otherwise—and I would like to—I will continue to assume that the universities, along with all other official information sources, are hostile replicating subsystems and need to be terminated.
Hopefully without any prejudice at all. For example, suppose you were a professor at MIT, or an assistant editor at the Times, or a senior economist at the Fed. Not really a public figure, but certainly someone with a very large pair of balls, or ovaries as it may be. Obviously you would need to find a new line of work, but it’s not clear that you would want the old one on your resume. You could say you were off trying to write a novel, or fighting as a “contractor” in Iraq, or something. Can anyone really check up on these things? Do they even want to know? And it explains your weary mien, otherwise unusual in one with no evidence or prospect of professional growth.
This is how it would go in my imaginary ideal future. Of course this bears no resemblance to anything I actually expect to actually happen in the aforementioned actual real world. I expect it will be quite a bit nastier, and not soon at all. But I certainly think the sooner it happens, the better the whole experience will be for everyone.
There is definitely no point in saving any particular department which claims to be “science,” any university which pleads that it’s “private,” any “newspaper” or “public school,” etc. The entire system of official “education” has to be completely wiped, preferably even swapping out the hardware—as we say in the trade. (Many university campuses, for example, could easily be redeveloped as prisons, luxury housing, police academies or corporate headquarters.)
It’s not clear to me that Digg, Wikipedia, arxiv.org, and other modern systems which solve, or at least purport to solve, the critical problem of separating content from nonsense, are quite ready for their new roles. But perhaps we’ll be surprised. Certainly, industry will not suffer from the impact of a large population of extremely intelligent and potentially productive individuals, who until now have been devoting their nervous systems to what might as well be Neoplatonist astrology. As for “science,” most of the advances in Western scientific history, contrary to popular belief, occurred when scientists were not servants of the State.
In any case. So this is basically my perspective on the American university system. Some will certainly take it as extreme. But I actually think it’s quite moderate.
A Navrozov moment is a moment when you realize that the university, which was established as a refuge whose purpose was to pursue truth without regard for the opinions of the world, has become a power center whose purpose is to impose its own opinions on the world. As such it has no more use for independent thought than a dog has for beets.
The name honors this piece by Andrei Navrozov, which I’m sure, since he is a gentleman, he and his notorious pit-bull lawyers will allow me to steal. It’s from his Gingerbread Race, which is not nearly as hard to find as it should be. Navrozov, son of the equally eccentric and perceptive Lev Navrozov, is a little too concerned with Skull and Bones and not nearly concerned enough with paragraph breaks, but he is basically a sane man and a brilliant raconteur, and the following is not at all atypical.
‘The trick of being tiresome,’ said Voltaire, ‘is to tell all.’ The great historic upheavals that are the reference points of my childhood and adolescence may all be looked up in Britannica, which can equally be relied upon to furnish a superficial history of Yale, or of American universities generally. Abstractions like cultural diversity, liberal education and academic freedom have lost none of their popularity since the day I first encountered them in the admissions brochures. What no encyclopedia can be expected to suggest, however, is that what paranoid misfits like Mill and Orwell have always known to be true, namely that when, for one reason or other, a society lets go of the adversarial principle I have compared with the human soul, it develops therapeutic myths of itself which present its weaknesses as strengths, myths that displace truth in the pages of encyclopedias and allow the many to diagnose the few as paranoid misfits. The popular abstractions I place among the constituent myths of modern civilization’s public religion are not outright lies, of course. They are what Mill called half-truths, noting that ‘not the violent conflict between parts of the truth, but the quiet suppression of half of it, is the formidable evil’. By absorbing the violent shock of dissent once represented by such abstractions into its placidly gaseous whole, the religion quietly dissolves potential opposition, with the consequence that, in Mill’s words, ‘truth itself ceases to have the effect of truth by being exaggerated into falsehood’. But, as the Bach prelude, gently pealing from the chapel’s Gothic tower, stilled my paranoid aspirations, I had no time for formulations of this kind. Courses had to be chosen, and under the influence of the visible environment, which I obediently interpreted as the university had intended, I chose a course of lectures on Hegel.
The first paper assigned by Professor Rockmore was an analysis of a famous chapter in The Phenomenology of Mind in which Hegel examines the relationship between master and slave. I hoped to approach Hegel, and indeed all my studies in those early weeks of my first term, exactly the way such matters had been treated at the Vnukovo dinner table. Obviously none of our guests liked to be thought of as a learned bore, and consequently it was unimaginable that in the course of a conversation bubbling into the small hours, somebody would summarize a chapter from a book everyone else had read. I viewed the professor as my host, and the essay I submitted was intended to divert him by presenting Hegel as a slave to platitude, an antihero of thought, a man so wanting existential imagination that in a Napoleonic Europe steeped in serfdom he was unable to recognize serfdom as a reality transcending the insular concerns of an ambitious Privatdozent. Hegel’s idea that the slave enslaves the master, I reasoned, is not a paradox because in the broad historical context of universal servility it is sycophantic, as Proudhon’s idea that property is theft would not be a paradox in a society of thieves. As I wrote, I imagined Father and our guests, eviscerating an academician’s conceit here, taking a stab at a bureaucrat’s witticism there, Tsinandali flowing amid roars of laughter. I read the essay to Father over the telephone, adding news of this new university life of mine, which I imagined as a continuation of and perhaps even an improvement on the lost life of the Vnukovo enclave, a paradise perfected. The following week I came to class, expecting the thrill of violent conflict between parts of the truth, the thesis being that Georg Wilhelm Friedrich Hegel was a celebrated philosopher and the antithesis, that this son of a Stuttgart government clerk led an intellectually sheltered existence. The dialectic, however, did not work out as I expected. 
‘May I see you for a moment?’ said Professor Rockmore. I noticed red blotches on his face. He told me that he could not give my essay a mark, and that if I wanted to stay in his course I would have to rewrite it. ‘But, Mr Rockmore, this is what I think,’ I protested, ‘these are my thoughts on Hegel’s treatment of the subject.’ He referred me to my college Dean, who received me in the Gothic grandeur of his study. The Dean advised me to withdraw from the course, explaining that it was for advanced students and closed to freshmen anyway. I have been strictly reared, as Mark Twain used to joke, but if it had not been so dark and solemn and awful there in that vast room, I believe I should have said something which could not be put into a Sunday-school book without injuring the sale of it. With a sinking heart I realized that the faux pas I had made was not unlike that of a tramp barging in on a ladies’ circle evening devoted to problems of the homeless. This Shavian dramatization aside, suppose philosophy were a science, like mathematics or chemistry, and a drunken beggar barged in to disrupt a university lecture on metal ethoxides with his ideas about ethanol and its applications. On the other hand, it would never have occurred to me to disrupt Professor Rockmore’s course in this way if the subject discussed was symbolic logic, or any branch of philosophy that borders on mathematics. I remembered that Mill, in his discussions of intellectual freedom, specifically used mathematics as an example of an exact science ‘where there is nothing at all to be said on the wrong side of the question’ in contrast to ‘every subject on which difference of opinion is possible’ and, in Mill’s view, essential to what makes a freethinker’s life worth living. Yet the subject under discussion was not Hegel’s logic but his view of slavery, a subject upon the stark reality of which Mill began reflecting while the Jena timeserver was still alive.
Besides, Hegel’s dialectical vision of the world process added a new dimension to Leibniz’s optimistic myopia, and while I considered myself no more competent to discuss Hegel’s logic than Leibniz’s mathematics, I failed to see why discussion of a subject like slavery by the former should be closed to literary intrusion when, in the case of the latter, such an intrusion had produced Candide. I then approached several of the students attending the unfortunate course of lectures, none of them, admittedly, a fellow freshman. Many were even bearded, after the Young Hegelian fashion of Professor Rockmore himself. One student essay from the unfortunate week was finally produced, complete with a top mark and the professor’s comments, whereupon, with the pain that I can only compare to that of a forcibly extracted illusion, I discovered that the bearded essayist had done just what schoolchildren do the world over, namely, repeated Hegel’s argument paraphrastically, just as if it had been the proof of a Euclidean theorem or the tale of a big bad wolf called Sein. The tramp had not quite expected, perhaps, that he would be given crumpets with tea and asked to tell the ladies what the homeless need. He might not have expected that his ladies’ kindness would outlast the short speech he planned to wind up by demanding a shilling from everyone present. But least of all he expected to find the good ladies naked, or mute, or dead. Yet this was precisely what I, in the role of tramp, admitted to university for reasons that had less to do with diversity than with the homogenizing of diversity, found there.
(It would be fun to imagine that the bearded essayist was, in fact, Daniel Larison. But I believe Larison was in diapers when Navrozov was at Yale, and a beard surpasses even his precocity.)
Now, Navrozov studied, of course, literature. I took a European history class at Hopkins, one each in Chinese, Japanese, and (definitely the most fun) early Levantine history at Brown, one creative writing course at Brown, and one each in hippie economics and hippie law at a pre-college summer in Cornell, and this is the absolute limit and total extent of my formal education in the humanities. I don’t even speak any languages, although I’m told I do a good Indian accent.
Instead I escaped alone to tell thee, for all I studied is computer science. And it is hard to make CS be about anything except actual computers and how to actually work the fsckers, although the Good Lord knows enough people have tried. In particular, there are, or at least in the early ’90s were, a few schools that had very solid programs in system software, and I actually think I wound up with a very good basic education at about a master’s level in CS.
Computer science, I hasten to say, is no less a human garbage disposal than any department at any school. It is just slightly less obvious about it. The blade is slower, the motor less powerful. Perhaps some useful activity exists in this field, perhaps there is some peach-pit stuck deep in its drain with a leaf or two of actual life clinging to the chewed and ruined stem, but do we really need to reach in and retrieve it? I mean, for example, so-called “programming language researchers” have not designed a new language whose reception among the townies, i.e., actual programmers who actually program for a living, could be described as even remotely warm, certainly in the last 30 years and arguably ever. There is a reason CS’s entire existence is notoriously debatable.
My only personal advantage in embarking on this heinous and obviously unprofitable course of study was that my parents worked for the US Federal Government, another institution whose abolition I consider urgent, if a slightly lower priority. Even while the Cold War was still on, the odor of Brezhnev was remarkable, especially where my father worked: at State, a department largely responsible for there having been a Brezhnev in the first place. (And whose tentpole can scarce conceal itself now that the EU is actually the sweet Eden those nasty Soviets would never quite let themselves be—but I digress.)
Unfortunately, when Uncle Sam’s testicles expand and press outward, they emit the shocking odor of an adult male marmoset, and this stench is now apparent to everyone between here and Saturn with a nose. It is the smell of power.
As Navrozov explains, the word “power” in Russian means “possession” and is a cognate to the English word “wield.” Since in a democracy public opinion is power and universities are the source of all legitimate opinion, they can be said in a sense to possess and wield our minds. So no one at a university should be surprised to smell the marmoset, not even in an innocent little department called “computer science,” but I knew the stench from childhood and to say I was shocked would be an understatement.
(Of course, all your actual, official dissidents or “activists” are trained from an even earlier age to misidentify the organ behind this secretion. They cauterize the imaginary gland of their paranoid fantasies with bogus moxibustions, ointments and poultices, which applied to the actual source are not just ineffective but often even nourishing. They claim to be stopping the drip, they believe they are stopping the drip, they are sure if they flagged in their effort for a second it would become a full-on faucet or even a flood. In fact the effect of their labors is at best neutral, and often constitutes actual lactation. Though the whole system is a fine case of the proverbial self-licking ice-cream cone, not to mention a substantial source of distraction or as we naively call it “employment,” we do need to remember that the origin of the fluid is not, in fact, a “scent gland.”)
So—in any case, computer science. (And definitely not Hegel.)
The only professor I ever actually learned anything from, at least anything that an ordinary person can’t learn out of a book (CS is not in general hard, and I avoided the hard parts), in one year at Johns Hopkins (don’t ask), three at Brown, and one and a half at Berkeley (really don’t ask), was a fellow by the name of F. Kenneth Zadeck, an assistant professor at Brown, who I’m very confident cannot identify me at all.
Zadeck was (and I think still is) a compiler man and a good one, one of the inventors of static single-assignment (SSA) form, an approach to compiler optimization (basically, making your binaries run as fast as possible) which was new then but has since been widely adopted. But all of this you can get out of a book. It was not the content but the way he taught that was special.
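For readers who have never met SSA, here is a toy sketch of the core renaming idea: every assignment to a variable gets a fresh numbered version, and every later use refers to the most recent version, which makes data flow explicit and optimization easy. This is only the straight-line case; the real construction (by Cytron, Ferrante, Rosen, Wegman and Zadeck) also handles control flow with phi-functions and dominance frontiers. The program representation here is invented for illustration.

```python
# Toy SSA renaming for straight-line three-address code.
# Each instruction is (destination, operation, [argument names]).

def to_ssa(code):
    counter = {}   # variable -> latest version number
    current = {}   # variable -> current SSA name
    out = []
    for dest, op, args in code:
        # Rewrite uses with the current version of each variable;
        # names never assigned (inputs like "a") pass through untouched.
        new_args = [current.get(a, a) for a in args]
        # Mint a fresh version for the destination.
        n = counter.get(dest, 0) + 1
        counter[dest] = n
        name = f"{dest}{n}"
        current[dest] = name
        out.append((name, op, new_args))
    return out

prog = [
    ("x", "add", ["a", "b"]),
    ("x", "add", ["x", "c"]),   # redefines x; the use refers to the old x
    ("y", "mul", ["x", "x"]),
]
print(to_ssa(prog))
# [('x1', 'add', ['a', 'b']), ('x2', 'add', ['x1', 'c']), ('y1', 'mul', ['x2', 'x2'])]
```

After renaming, each name is assigned exactly once, so an optimizer can see at a glance that `y1` depends on the second definition of `x` and not the first.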
Zadeck ran his graduate seminars in a very interesting way. As in most graduate classes in CS, at least at that time, the style was to do papers. At Berkeley, for example, we would read three or four papers a week, and spend maybe half an hour discussing each. Basically the goal would be to look at this cool smart paper and see how clever the people who wrote it were. Could we be clever, like that? Perhaps we could.
This is not how it worked in Kenny’s class. He did one paper a week. And basically his methodology would be to have us read this one paper—typically a very cool paper, by people who were clearly very smart—and discuss it for at least a couple of hours.
And Zadeck’s goal was almost never to praise this paper. It was to rip it apart. It was to show us the clever ways in which the authors had disguised the fact that their work was, while still cool and certainly not inaccurate in any way, utterly useless for any practical purpose.
As, of course, are 99.99% of the things that all computer scientists have ever built. (It is an error to confuse the open-source community, which for example wrote Linux, with the academic CS world. Basically, the relationship is that the former would like the prestige and power of the latter, whereas the latter would like the success and productivity of the former. This is an unstable relationship and I think it’s not hard to predict how it will play out.)
Zadeck’s adversarial version of CS was incredibly fun. Not, of course, that as a good Brahmin child I needed any convincing, but it helped convince me to go to grad school. I wound up assuming, much like poor Navrozov, that this essentially critical, aesthetic and realistic approach was simply the right way to study system software, which would of course be the way that it was studied at a great center of the art such as Berkeley.
However, I did feel a slight twinge of concern at the realization that there was such a high level of what could only be called dishonesty in the profession. It was certainly not that the authors of these papers had failed to realize the drawbacks of their approaches, and it was also not that they had merely summarized the technical content and noted neither pros nor cons. Rather, they had explored the pros in lavish and impressive detail, and they had set up the entire structure of their problem to avoid the possibility that anyone might consider the cons. Hmm.
Then there was the fact that as a class project I actually implemented Zadeck’s SSA form, quite crudely of course, inside the GNU C compiler, which even then was a monster with hundreds of thousands of lines of code. I believe gcc has a proper implementation of SSA form now and I’m sure it works much better, and probably the fault was mine. But it disturbed me slightly that when I used my souped-up gcc, with this groundbreaking optimization model, to compile itself, it found something like three extra optimizations in the whole codebase. Hmm.
In any case, off I went to Berkeley, where I had my real Navrozov moment. Basically, what I discovered at Berkeley was that the Zadeck approach to CS is an exception—to put it mildly.
My Navrozov moment at Berkeley came from the one and only paper I published, which was a clever way of reducing the time it takes for an operating system to “context switch,” or shift between working on different processes. In a modern computer this depends on a piece of hardware called an MMU, which can be slow and cumbersome, so my paper described a way of securely separating two processes without using the MMU.
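To give the flavor of software-based isolation in general (this is not the mechanism from my paper, and the constants below are invented): instead of letting the MMU's page tables fault on a bad access, the untrusted code itself masks every address into its sandbox region before use, so no access can ever land in a neighbor's memory.

```python
# Cartoon of software-based fault isolation: force every address into
# a fixed sandbox region with a mask-and-or, rather than relying on
# MMU page tables. Constants are hypothetical, for illustration only.

SANDBOX_BASE = 0x40000   # sandbox starts here (hypothetical)
SANDBOX_BITS = 16        # sandbox size = 2**16 bytes

def sandboxed(addr):
    """Force addr into [SANDBOX_BASE, SANDBOX_BASE + 2**SANDBOX_BITS)."""
    return SANDBOX_BASE | (addr & ((1 << SANDBOX_BITS) - 1))

# An in-range address passes through unchanged...
assert sandboxed(0x40ABC) == 0x40ABC
# ...while an out-of-range address is forced back inside the region,
# so a buggy or hostile "process" cannot touch anyone else's memory.
assert sandboxed(0xDEADBEEF) == 0x4BEEF
```

The point is that two mutually distrusting pieces of code can then share one address space, and switching between them costs no more than a jump, with no MMU reload at all.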
This was not even really my idea. I’d actually gotten it from a professor at Arizona, which had been my safety school and was nice enough to fly me out for a visit, whereas Berkeley knew I was lucky to have earned their blessing and didn’t need to bother. I elaborated slightly on the Arizona professor’s idea, giving him full credit of course, for my OS class at Berkeley, and the Berkeley professor was impressed enough to have me write it up and submit it to a minor conference, where it won “best student paper”—there being one other such.
In any case, this same problem was popular at the time—the only real way to succeed in CS is to invent a new problem which generates more employment for your peers—and other people at Berkeley were working on it. Two of these were a pair of third-year grad students whose names sounded a little like “Sacco and Vanzetti.”
Sacco and Vanzetti came up with an entirely different solution to the slow-MMU problem, one which if I do say so myself was less imaginative than mine, but both more general and more practical. They published theirs in a real conference, received much acclaim for it, and I believe patented it, started a so-called company and eventually sold it to Microsoft. (Ah, Bayh-Dole.)
At some point during this period, however, I realized that the entire problem was a complete and utter pseudo-problem. The reason that an MMU context-switch is slow is that, when applied to the problem it was actually designed to solve, it is more than fast enough. The lily needed no gilding at all, and it certainly did not need to be nanofabricated from isotopically pure, individually selected gold atoms. Academic CS researchers at the time, for whatever ridiculous reason (probably something to do with microkernels), thought that there should be many more fine-grained security transitions in an OS environment. In fact if anything the trend is away from multiuser computing and toward virtualized or “shared-nothing” designs in which communication between protection domains is minimal. Furthermore, if this trend actually did reverse, which it hasn’t, it would be very easy to fix MMU-based context switching to make it every bit as fast as needed.
So I am very confident that neither of these techniques, neither mine nor Sacco and Vanzetti’s, has ever been used in practice. There is no need for them, there has never been any need for them, and there will never be any need for them. And this was quite obvious in 1993.
My Navrozov moment, of course, was when I approached one of the two—Sacco, I think—and attempted to have an intellectual discussion of this realization. The story is basically the same as Navrozov’s, so it would be boring to repeat, but basically I came away with the feeling that I’d told someone his Sicilian grandmother liked to get drunk and fuck her own goats.
Which, in fact, I had. Because I’d essentially told him his research was fraudulent. The fact that my research was also fraudulent, and that neither of ours was particularly noteworthy in that regard, did not matter. And why should it? Others’ crimes cannot excuse your own.
Of course “fraud” is a strong word in the world of science, or even “computer science.” It has a generally-accepted technical definition which certainly none of us were violating. But it is also a word in the English language, and most nonscientists would agree that when you lie for money, you are committing fraud.
Suppose you are a CS researcher, let’s say in the area of “programming languages.” You are almost certainly a government contractor. You and/or your students are funded by a grant or grants, which you spent a considerable amount of effort in securing, in competition with many other researchers. The grant was approved by a board at an organization such as NSF or Darpa, and the reviewers were other researchers such as yourself—in fact, you may even know one or two of them. (Try to avoid using the word “mafia”—it is unseemly.) This outfit must in turn obtain its funding (the dirtiest word in the English language) from Congress, before which its lawyer-flanked flacks present themselves on a regular basis. Congress, in turn, receives its paycheck from good old Fedco, which gets it you know how.
As a PL researcher specifically, you are basically a mathematician. That is to say, your work consists largely of stating and proving propositions. For example, one popular area of PL research is what’s called “proof-carrying code,” which is solving a similar problem to the one that Sacco, Vanzetti and I were working on. It is also equally pointless, because the simplest possible form of “proof-carrying code” is what we programmers call “source code,” and in practice the various approaches to this problem that have been proposed—such as “typed assembly language”—amount to no more than insanely-elaborate compression algorithms. Needless to say, no such thing has ever been deployed, nor will it ever be.
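The shape of the proof-carrying-code idea, for the curious, is that untrusted code arrives bundled with a checkable annotation, and the consumer verifies the annotation before running anything. Real PCC systems ship actual logical proofs of safety properties; the toy below (instruction set, annotation scheme, and all) is invented purely to show the shape.

```python
# Toy "proof-carrying code": a stack-machine program ships with an
# annotation claiming the stack depth after each instruction. The
# consumer checks the claims (and that nothing underflows) before
# trusting the code. Invented instruction set, for illustration only.

EFFECT = {"push": +1, "add": -1, "dup": +1, "pop": -1}   # net depth change
MIN_IN = {"push": 0, "add": 2, "dup": 1, "pop": 1}       # operands required

def check(code, claimed_depths):
    depth = 0
    for op, claim in zip(code, claimed_depths):
        if depth < MIN_IN[op]:
            return False   # stack underflow: reject the code
        depth += EFFECT[op]
        if depth != claim:
            return False   # the annotation lies: reject the code
    return True

good = ["push", "push", "add"]
assert check(good, [1, 2, 1])        # honest annotation: accepted
assert not check(["add"], [1])       # underflow: rejected
assert not check(good, [1, 2, 99])   # bogus annotation: rejected
```

Note that the checker is trivial while producing a correct annotation for interesting properties is hard, which is exactly the asymmetry the research program trades on.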
But it is also a source of essentially permanent fascination to students and researchers the world over, because it creates an infinite variety of extremely difficult problems, which one can demonstrate one’s intelligence by solving. And this is after all one of the main purposes of the university system: to employ extremely intelligent people, who might otherwise be out causing trouble, in tasks that consume their spare brainpower.
So let’s look at how the fraud works in detail. Let’s say you are a student working on one of the project groups working in this area—for example, there is one at Yale. Let’s say you have some cool idea, for something everyone used to think was absolutely impossible—let’s say, a type system in which one can write a provably-correct garbage collector. Quelle surprise! Of course, the resulting type system is expressed in the form of an 80-page proof, and the idea that any programmer would actually learn and use any such thing makes about as much sense as putting a Wal-Mart on top of Mount Everest and issuing pitons to the “greeters.” But never mind all this. The idea is cool.
So it goes to the professor, who knows how to write grants, and it gets folded into the next grant proposal. A type system is of course essentially a security system—it ensures that your program will behave properly, and not be infected by viruses and such. And Congress is concerned about computer security, which, as we know, is part of national security. With a large quantity of skillful grantsmanship, and various euphemistic ambiguities at all levels, Aunt Gladys’s tax dollars end up funding Feizhuang Zhu’s type system, Prof. Smith’s group gets its funding bumped by 10% and can take on another grad student or even a postdoc, and all is well in Mudville.
Except, of course, for the fact that the whole thing is basically a criminal enterprise, and all these crazy-smart people could actually be out doing real work, instead of spending their lives pulling each others’ dicks in this bizarre, pathetic and dishonest manner.
The problem with CS—and I suspect in other sciences, such as physics, although I am certainly not qualified to fire so much as a BB gun in the great Woit-Motl war—is that science today is, contrary to popular belief, a business.
And it is a very special kind of business. In this business, there is exactly one customer, and his name is Uncle Sam. And there are no companies in this business—apart from your “mafia,” you’re on your own. You can get students to do your programming, true, but you have to do your own research and, more importantly, your own sales.
Selling to Uncle Sam is a fascinating problem. Uncle Sam wants his serfs to know that their tax dollars are being spent on top-notch research which will make America #1. If the dollars are being spent in the constituency of a Congressman with the right seniority, this is even better. Otherwise, Uncle Sam does not give a tinker’s damn what he funds, as long as the result does not actually make him look like an idiot. Fortunately, Sen. Proxmire has departed this earth and all of your big-league journalists are pro-science pretty much the way Pat Robertson is pro-God, not to mention that if they have a BA in anything besides basketweaving it’s a surprise, so Uncle Sam is unlikely to see any trouble from this front.
The optimization algorithm for Congress is obvious, which is to keep funding what you’re already funding. Except for minor porcine concerns, Uncle Sam certainly has no reason to ever cancel, steer, or otherwise redirect any research direction. Why would he? How could he possibly know more than the researchers themselves? Who better to ask about string theory than the string theorists?
The result is that Fedco’s approach to research bears some resemblance to that of the large, and often slightly Fedco-like, software-hardware corporations that have dominated the industry for quite some time. Typically these outfits employ large numbers of researchers, at places like Microsoft Research, Sun Labs, etc. And these researchers, who are PhD types from academia, receive some mild encouragement toward productive directions, but of course have actual rank and can’t simply be told what to do, as if they were mere employees. For the most part (although with some exceptions), these corporate research arms, which are basically run as a tax writeoff and general prestige farm, are simply sponsoring these scientists’ academic careers in a way that provides less status than working at a research university, but does not involve the onerous and degrading T-word.
The result is that the researchers wind up managing themselves. And one of the things I learned after I said my goodbyes to the whale is that, again contrary to popular belief, there is this thing called management and it’s actually necessary. There are individuals who can be productive without active management, but there are no organizations that can. And when basic research is treated as a self-managing organization, you will get unproductive basic research. If you were previously unaware that there was any such thing, I’m sorry to have to break it to you.
Most managers are easy for a scientist to scam, in precisely the manner described above. It’s a case of what economists call “asymmetric information,” and the result is that your research program is simply producing status and credibility for the scientist, who is in the business of demonstrating his intelligence, as if he were in the sixth grade. It takes a really talented manager—General Groves is the all-time great example—to get an organization of super-smart people to work together on a real problem. (It is worth noting that the Manhattan Project’s personnel were veterans not of Federal science, but of course of prewar science, a system under which the profession of “grantwriter” was, I believe, unknown.)
If there is any equivalent of General Groves today, his name is certainly Steve Jobs. I have never worked in a Steve-run company, but I have certainly heard the stories. And my favorite is one I heard from an Apple QA guy (QA, i.e., testing, is basically the lowest-prestige profession in the Valley) around the time Steve was returning to Apple.
Heads were of course flying left and right, all sorts of people were moving offices and changing jobs and the like, and this guy Dan, who was a project lead or something, got called in by his manager. “Hey, we’re moving to building X next week,” said the manager.
“But isn’t that where—ATG is?” ATG being the “Advanced Technology Group,” i.e., Apple’s research arm. Which was of course the most prestigious arm of the octopus.
“Yeah,” said the manager. “Hey, could you keep it under wraps for a little? I don’t know if they’ve heard.”
In fact, unless I have been misinformed, when Steve came back he laid off Apple’s entire research division. No funding cuts, no baselines, nothing. He killed the whole thing, and from what I knew of what they were doing, it was nothing but richly deserved.
Now what do you think Steve Jobs would do if they made him President? Or CEO, perhaps, of Fedco? With a mandate from the board to perform an arbitrary reorg as he saw fit? Frankly, the mind boggles.
I actually haven’t even started to explain how pernicious the university phenomenon is. For example, I haven’t justified my claim that they are responsible for most of the violence in the world today. Please remain on this channel for further eccentric and informative broadcasts.
But I will repeat my policy proposal: I believe the only effective way to deal with the universities is the Henry VIII treatment. That is, unconditional abolition and confiscation. The endowments and campuses can be treated as rough compensation for the vast streams of subsidies the universities have received since 1945. Simply wrap the whole thing up and call it a day. Let it be summer all year long.
However, I am strongly opposed to any prosecution for anyone involved in the university system, even in exceptional cases such as that of Michael Mann. I feel it’s much better to let bygones be bygones. I’m sure some will criticize me for this stand, but I will stick to it.