Personal cloud computing in 2020 (or not)

I know it’s rare that we have a technical discussion here at UR. But every once in a while the urge prevails. If nothing else, it attracts the right people to the rest of the cult.

(For readers of that Wolfram Alpha post, it seems almost superfluous to remotely diagnose today’s tech-media darling, Siri, as yet another tragic case of the hubristic user interface (HUI). Then again, if anyone can pull off hubris and exceed the gods themselves… but in my much-refurbished crystal ball, here is what I see: Siri works beautifully 98% of the time. The other 2%, it screws up. Half of these screwups are hilarious. 1% of the hilarious screwups are blogged about. And that’s enough bad PR, fairly or not, to restrict usage to a fringe. As with all previous attempts at voice-controlled computing. Open the pod bay doors, HAL.

No tool can achieve the natural “bicycle for the mind” status of a mere mental peripheral, unless the mind has an internal model of it and knows when it will work and when it won’t. This cannot be achieved unless either the mind is genuinely human and thus understood by empathy, or the actual algorithm inside the tool is so simple that the user can understand it as a predictable machine. Between these maxima lies the uncanny valley—in which multitudes perish.

The only exemption from this iron law of expensive failure, a voracious money-shark that has devoured billions of venture dollars in the last decade, is a set of devices best described, albeit pejoratively, as “toys”—applications such as search, whose output is inherently unpredictable. I.e., inherent in the concept of search is that your search results are generated by an insane robot. This is not inherent in the concept of a personal assistant, however. Also, while search results are inherently heuristic—search queries are inherently rigorous.)

In any case—computers. When I went to grad school to lurn computers, it was way back in 1992. I was pretty sure that, in the future, we would have cool shit. Instead, twenty years in the future, I find myself typing on… the same old shit. Yo! Yo neighbor, this bull shit, yo.

It would be one thing if all the bullshit actually worked. Or at least if it didn’t suck. At the very least, it would be better if our entire system software architecture—from the ’70s database at the ass end of the server room, to the ’90s Flash player playing ugly-ass Flash in your face—while sucking giant tubes of ass like Hoover Dam in reverse, at least kept that fact a secret. At least then no one would know it was ass.

But alas. It’s even worse than that. Everyone knows the whole Internet is ass.

It’s the 21st century. We should be soaring like eagles above the 20th-century legacy bullshit, expressing only the purest of functions in the pure language of mathematics. But somehow it hasn’t happened. The technology just isn’t there, or at least it isn’t deployed. All we have is the same old assware, and no alternative but to live in its crack. Brendan Eich took what, two weeks, to build Javascript? And it has no long integers—just floating point. Millions of brown hours, deep in Brendan Eich’s valley. To be fair, the fellow appears to be sorry. Not that this helps.
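
For the unconvinced, a minimal demonstration of the missing integers, runnable in any JavaScript engine (TypeScript here; the number semantics are identical):

    // JavaScript's lone numeric type is an IEEE-754 double, so integers
    // above 2^53 cannot be represented exactly.
    const big: number = 9007199254740993;   // 2^53 + 1
    console.log(big === 9007199254740992);  // true: the literal silently rounds down
    console.log(0.1 + 0.2 === 0.3);         // false: ordinary decimal arithmetic, wrong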

So—we’re just going to assume that God won’t tolerate this shit. Not that he spares the rod. But there’s always a limit. So we’re just going to pick an arbitrary year, 2020, by which the 20th-century assware will all be gone, all of it. And software will instead be great. From top to bottom, server to client, cloud to mobile, end to end and ass to elbow. (Note that 2020 is two years before the famous HTML 5 deadline.)

The question then becomes: with this great new software infrastructure, scheduled for 2020, what the heck will we be doing? How will we be living our wondrous 2020 digital lives?

I actually have an answer to the question. The answer is: personal cloud computing. I mean, duh. Yes, I know it sounds like yet another Palo Alto buzzword. Blander, in fact, than most. Google even finds it, a little bit, in reference to various BS.

Actually, I think the transition from 2011 computing to 2020 computing—if 2020 computing is personal cloud computing, as per my buzzword, which I claim in the name of Spain—should be at least as radical a disruptive break as any previously experienced in the seismically unstable Valley of Heart’s Delight.

Consider a classic San Andreas tech-quake: the transition from minis to PCs. Cloud computing in 2011 is a lot like all computing in 1971. Why? Because industrial and consumer computing products are entirely disjoint. In 1971, you can buy a PDP-11 or you can buy a pocket calculator. The PDP-11 is industrial equipment—a major capital expenditure. The pocket calculator is a toy—an information appliance. The PC? The concept is barely imaginable.

In 1971, you already exist as a database row in various billing and banking systems. (I lived in Palo Alto in 1976 when I was 3. My parents apparently had Kaiser. When an employer in the late ’90s put me on Kaiser, I was amazed to be asked if I still lived on Alma Street.)

Is this Kaiser miracle personal cloud computing? No, it’s consumer cloud computing. It’s exactly the same kind of consumer cloud computing we have today. It’s your data, on someone else’s computer, running someone else’s code—an information appliance. Care for another helping of ass, Mr. Chumbolone?

What’s an information appliance? Any fixed-function device, physical or virtual, whose computing semantics the user does not control. An IA is anything that processes data, but is not a general-purpose computer. (A non-jailbroken smartphone is about half an IA and half a computer, because the user controls the apps but not the OS, and the interface is app-centric rather than document or task-centric—the OS as a whole is little more than an app virtualizer, i.e., a browser.)

To generalize slightly, we can say that in 1971, there was a market for industrial computing, and there was a market for information appliances. Not only was the connection between these two product lines roughly nil; it was also more than a decade before the PC emerged to replace “smart typewriters,” and the early 2000s before Linux effectively merged the PC and workstation markets.

Today, in the cloud, you can go anywhere and rent a virtual Linux box. That’s industrial computing. You also have to cower in your closet, like me, to avoid having a Facebook profile. That’s a virtual information appliance. So is just about any other consumer cloud service. Therefore, we have industrial cloud computing that isn’t personal, and we have personal cloud computing that isn’t computing.

In fact, if you use the cloud at all seriously, you probably have 10 or 20 virtual information appliances—each one different and special. If you are especially well-organized, you may have only two or three identities for this whole motley flock, along with seven or eight passwords—at most four of which are secure. Welcome to the wonderful new world of Web 2.0. Would you like some angel funding? Ha, ha.

In the future, in 2020, you don’t have all these custom information appliances, because you have something much better: an actual computer in the sky. Instead of using Web services that run on someone else’s computer, you use your own apps running on your own (virtual) computer.

I realize—it’s completely wild, unprecedented and groundbreaking. But let’s look at an example.

Let’s imagine 2020 computing in terms of 2011 software, so we can see how this works. In 2020, of course, you use Facebook just like you do now. Facebook still rules the world. Its product is a completely different one, however—personal cloud computing.

This started in 2012, when Facebook introduced a new widget—Facebook Terminal, your personal Ubuntu install in the cloud. Everyone’s Facebook profile now includes a virtual Linux image—a perfect simulation of an imaginary 80486. Users administer these VMs themselves, of course. In the beginning was the command line—in the end, also, is the command line. Moreover, just because it’s run from the command line on a remote server—doesn’t mean it can’t open a window in your face. If you’re still reading this, you’ve probably heard of “xterm.”

Terminal will simply bemuse Joe Sixpack at first—Facebook’s user base having come a long way from Harvard CS. But Joe learned DOS in the ’80s, so he’ll just have to get used to bash. At least it’s not sh. It has history and completion and stuff.

Furthermore, Joe has a remarkable pleasure awaiting—he can host his own apps. All the cloud apps Joe uses, he hosts himself on his own virtual computer. Yes, I know—utterly crazed.

Today, for instance, Joe might use a Web 2.0 service like mint.com. Beautifully crafted personal finance software on teh Internets. Delivered in the comfort and safety of your very own browser, which has a 1.5GB resident set and contains lines of code first checked in in 1992. Your moneys is most certainly safe with our advanced generational garbage collector. Admire its pretty twirling pinwheel as merrily your coffee steeps. Mozilla: making Emacs look tight and snappy, since the early Clinton administration.

But I digress. Where is Joe’s financial data in mint.com? In, well, mint.com. Suppose Joe wants to move his financial data to taxbrain.com? Suppose Joe decides he doesn’t like taxbrain.com, and wants to go back to mint.com? With all his data perfectly intact?

Well, in 2011, Joe could always do some yoga. He’s got an ass right there to suck. It’s just a matter of how far he can bend.

Imagine the restfulness of 2020 Joe when he finds that he can have just one computer in the sky, and he is the one who controls all its data and all of its code. Joe remembers when King Zuckerberg used to switch the UI on him, making his whole morning weird, automatically sharing his candid underwear shots with Madeleine Albright.

Now, with Facebook Terminal, Joe himself is King Customer. His Facebook UI is just a shell—starting with a login screen. Joe can put anything in his .profile or even fire it off directly from /etc/rc. He changes this shell when he damn well pleases. And where is his personal data? It’s all in his home directory. Jesus Mary Mother of God! It can’t possibly be this easy. But it is. So if he wants to switch from one personal finance app to another—same data, standard data, different app. He’s a free man.
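
A sketch of what that looks like in practice, with a hypothetical file layout and record format (nothing below is a real standard; the point is that the data outlives whichever app reads it):

    // Hypothetical: Joe's canonical ledger lives in his own home directory,
    // in a dumb, documented format. Any finance app is just a view of it.
    import { readFileSync } from "fs";
    import { homedir } from "os";
    import { join } from "path";

    interface Transaction {
      date: string;        // ISO 8601
      payee: string;
      amountCents: number; // negative = money out
    }

    const ledgerPath = join(homedir(), "finance", "ledger.json");
    const ledger: Transaction[] = JSON.parse(readFileSync(ledgerPath, "utf8"));

    // Swapping mint-for-taxbrain changes this code, not that file.
    const balance = ledger.reduce((sum, t) => sum + t.amountCents, 0);
    console.log(`Net: $${(balance / 100).toFixed(2)}`);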

Suppose Joe wants to go shopping on teh Internets? He doesn’t fire up his browser and go to amazon.com. He stays right there on Facebook Terminal and runs his own shopping application on his own virtual Linux box. Heck, he probably built it from source and tweaked the termcap handling and/or optimization flags. (It’s a general principle that anything written for termcap won’t work on terminfo, even if it says it will.) Through an ASCII curses session in his Facebook Terminal—or, better yet, a Javascript X server in a Mozilla tab—he executes his shopping application (in C++ with OSF/Motif—that ultra-modern 3D look).

How does Joe’s shopping application, which he hosts himself on Facebook Terminal, communicate with Amazon and other providers? Of course, book distributors in 2020 no longer write their own UIs. They just offer REST APIs—to price a book, to search for books, to buy a book. All of online shopping works this way. The UI is separate from the service. The entire concept of a “web store” is so 2011. Because Joe controls his own server, he can use classic ’90s B2B protocols when he wants to replenish his inventory. I wouldn’t at all rule out the use of SOAP or at least XML-RPC.
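
As a sketch of what Joe’s self-hosted shopping app might look like against such APIs (every endpoint and field name below is invented for illustration):

    // Hypothetical distributor API: GET <base>/price?isbn=... returns a quote,
    // POST <base>/orders places an order. The UI is Joe's problem, not theirs.
    interface Quote {
      isbn: string;
      priceCents: number;
      distributor: string; // base URL of the distributor's API
    }

    async function cheapest(isbn: string, distributors: string[]): Promise<Quote> {
      const quotes = await Promise.all(
        distributors.map(async (base): Promise<Quote> => {
          const res = await fetch(`${base}/price?isbn=${encodeURIComponent(isbn)}`);
          return (await res.json()) as Quote;
        }),
      );
      return quotes.reduce((best, q) => (q.priceCents < best.priceCents ? q : best));
    }

    async function order(q: Quote): Promise<void> {
      await fetch(`${q.distributor}/orders`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ isbn: q.isbn, quantity: 1 }),
      });
    }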

So. We have a problem here, of course, because Facebook Terminal is a joke. If Facebook users were a group of 750 million “original neckbeards,” the system above would be the perfect product. Also the world would be a very different place in quite a number of ways. But let’s continue the thought-experiment and stick with this spherical cow.

Consider the difference between the imaginary Facebook Terminal and the real Facebook Connect. The former is a platform—the latter is a “platform.” There is a sort of logical pretence, at the user-interface layer, that a third-party site which uses Facebook authentication to commit, of course with your full cryptographic approval, unnatural acts upon your private data, is “your” app in just the same sense that an app on your iPhone is “your” app.

But you control one of these things, and not the other. When you host an app, you own the app. When you give your keys to a remote app, the app owns you. Or at least a chunk of you.

It’s almost impossible for a Web user of 2011 to imagine an environment in which he actually controls his own computing. An illustrative problem is that chestnut of OS designers, cross-application communication. Look at fancy latest-generation aggregators like Greplin or ifttt. These apps work their tails off to get their hooks into all your data, which is spread around the cloud like the Columbia over Texas, and reconstruct it in one place as if it was actually a single data structure. Which of course, if you had a personal cloud computer—it actually would be. And “if” and “grep” would not seem like gigantic achievements requiring multiple rounds of angel funding, now, would they?
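
To make the point concrete, here is roughly what those multi-round-funded aggregators reduce to once the data actually sits in one place (the directory layout is hypothetical):

    // Assume, hypothetically, that every app Joe uses keeps its state as plain
    // files under one root on his own computer. Cross-app search is a walk.
    import { readdirSync, readFileSync, statSync } from "fs";
    import { homedir } from "os";
    import { join } from "path";

    function* walk(dir: string): Generator<string> {
      for (const name of readdirSync(dir)) {
        const p = join(dir, name);
        if (statSync(p).isDirectory()) yield* walk(p);
        else yield p;
      }
    }

    // "Everything, from any app, that mentions 'invoice'."
    function grepAll(root: string, needle: string): string[] {
      return [...walk(root)].filter((p) => readFileSync(p, "utf8").includes(needle));
    }

    console.log(grepAll(join(homedir(), "cloud"), "invoice"));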

The Facebook of 2011—and more broadly, the Web application ecosystem of 2011—is not a personal cloud computer, because it’s not a computer. Generalizing across your state in Facebook itself, plus all the external apps that use your Facebook identity, we see a collection of virtual information appliances, mutually unaware and utterly incompatible.

Even if Facebook becomes the universal authentication standard of the Web, a feat it would surely like to achieve, and surely a great advance at least in usability over the status quo, its users’ lives in the cloud would still be nothing but a disconnected salad of cloud information appliances. They would not have a personal cloud computer, or anything like one. Moreover, if one of these information appliances somehow evolved into a general-purpose computer, its users would realize that they no longer needed all the other information appliances.

Comparing the consumer cloud computing of 2011 to the personal cloud computing of 2020 is like comparing the online-services world of 1991 to the Web world of 2000. It’s easy to forget that in 1991, Prodigy was still a big player. Prodigy: the Facebook of 1991. In 1991, you could use your 2400-baud modem to call any of a number of fine online services and other BBSes. By 2000, your 56K modem called only one thing: the Internet. The Internet, seen from the perspective of the Bell System, was the killer online service that killed all the other services.

Another difference between 2011 and 2020 is the business model. The Web 2.0 business model is first and foremost an advertising model. On that model, at least, the present boom has been built. Yo, bitches, I’ve seen a few of these booms.

Advertising is a payment model for information appliances. Your TV is an appliance. You see ads on your TV. Your PC is not an appliance. You’d find it shocking, disgraceful and pathetic if the new version of Windows Vista tried to make money by showing you ads. In fact, there have been attempts at ads on the PC—in every case, heinous, tacky and unsuccessful.

Advertising ceases to exist where an efficient payment channel arises. Why does TV show ads? Because the technical medium does not facilitate direct payment for content. It would be much more efficient for the producers of a new show to charge you fifty cents an hour, and most people would easily pay fifty cents per hour to never have to even skip past ads. Or to put it differently, fairly few people would choose to watch ads for fifty cents per hour.

Thus, if payment is straightforward, the whole inefficient pseudo-channel of advertising evaporates and the digital Mad Men are out on their asses. Taste the pain, algo-bitches! (There’s only one thing I hate more than algorithms: the pencil-necked geeks who are good at algorithms.)

In 2020, how does Joe pay for computing? He pays for three things: content, code (i.e., content), and computing resources. Probably his ISP is his host, so that’s a very straightforward billing channel for resources, easily extended to code/content. Joe would never even dream of installing an app which showed him ads. So there’s no use in figuring out what his buying patterns are, is there? Sorry, Mad Men. Go back to the math department.

Consider search in 2020. In search, too, PCC (not to be confused with proof-carrying code) separates the UI and the service. Joe uses one search app, which can be connected to any number of remote back-ends. If he doesn’t like Google’s results, he can Bing and decide, without changing his user experience at all. Result: brutal commoditization pressure in the search market, which has to bill at micropennies per query and has no channel for displaying ads—except in the results, which sucks and won’t happen. Consider Mexican bikers, cooking meth in a burned-out Googleplex.
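
A sketch of what same-UI, swappable back-end means for search; the endpoints are placeholders, since no such commodity query API exists today:

    // One search UI, N interchangeable back-ends, each just a priced query API.
    interface SearchBackend {
      name: string;
      query(q: string): Promise<string[]>; // result URLs
    }

    function restBackend(name: string, endpoint: string): SearchBackend {
      return {
        name,
        async query(q: string): Promise<string[]> {
          const res = await fetch(`${endpoint}?q=${encodeURIComponent(q)}`);
          return (await res.json()) as string[];
        },
      };
    }

    // Placeholder hosts; reorder the array and the user experience is unchanged.
    const backends: SearchBackend[] = [
      restBackend("google", "https://search-api.google.example/query"),
      restBackend("bing", "https://search-api.bing.example/query"),
    ];

    backends[0].query("personal cloud computing").then(console.log);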

Alas! All that is great passes rapidly away. In this imaginary 2020, we see nothing left of Silicon Valley’s existing corporate giants, except possibly a Facebook on steroids, whose information-appliance profiles have morphed into virtual Linux instances. Death by commoditization. Hey, it wouldn’t be the first time.

But wait! Can this actually happen? Is it really possible to turn everyone’s Facebook profile into a general-purpose computer? Frankly, I doubt it. If I worked at Facebook, which of course I don’t, I would be extremely skeptical of Facebook Terminal, for reasons I think are quite obvious.

In real life, this apocalypse just isn’t going to happen. In real life, 2020 will be pretty much just like 2011. And why? Because we just don’t have the software technology to build 2020. And we’re probably not about to get it, either.

Let’s look at this issue in a little more detail. But the point is obvious. Hosting mint.com is pretty much a full-time job for the guys at mint.com. Expecting Joe Sixpack to download their code, for free or for pay, and set up his own server, is just absurd.

Of course, Joe is unlikely to have a serious load issue on his private server—because he’s the only user. But still, Joe is not an Ubuntu administrator, he doesn’t want to be an Ubuntu administrator, and frankly he probably doesn’t have the raw neurological capacity to be an Ubuntu administrator. Scratching his balls, booting MS-DOS and typing “copy a:*.txt b:” is about the limit of Joe’s computational ambitions and abilities. You could put a visual interface on his console, but frankly, this would probably only confuse him more. I want to serve Joe’s needs, but I won’t let myself overestimate his qualities.

We’re starting to answer the essential question here: why hasn’t personal cloud computing already happened? Why doesn’t it work this way already? Because frankly, the idea is obvious. It’s just the actual code that isn’t there. (Here is the closest thing I’ve seen. Let’s hope Joe Sixpack is a good node.js sysadmin.)

Let’s go back to 1971. The idea of a personal computer was also obvious to people in 1971. Moore’s Law was reasonably well understood in 1971. So it was clear that, if in 1971 you could build a PDP-11 the size of a refrigerator and sell it for $20,000, in 1981 it would be possible to build a PDP-11 that fit under a desk and cost $2000.

But this irresistible logic ran into an immovable object. Who wants a PDP-11 on their desk? The PDP-11 evolved into the mighty VAX. Who wants a VAX on their desk? Even if you can build a VAX that fits on a desk and costs $2000, in what way is this a viable consumer product? It’s not, of course. Similarly, turning 750 million Facebook profiles into virtual Ubuntu images is not, in any way, a viable product strategy—or even a sane one.

The “Facebook Terminal” example is ridiculous not because the idea of personal cloud computing is ridiculous, but because “Facebook Terminal” is a ridiculous product. Specifically, the idea that, to build a virtual computer in 2011, we should design a virtual emulation of a physical computer first produced in 1981, running an OS that dates to 1971, cannot fail to excite the mirth of the 2020 epoch. (And I say this as one who still owns a copy of the seminal BSD filesystem paper, autographed by Keith Bostic.)

Again: who wants a PDP-11 on their desk? Here we encounter Gall’s law:

A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system.

If you want an Apple II, you don’t start by shrinking a PDP-11. You have to build an Apple II. If you want not an Apple II but rather an electronic typewriter, there’s a market for that. I recall that market. In the long run I’m afraid it didn’t compete too well with the PC.

But why was the Apple II simple? Because its inventors understood Gall’s law, or at least its Zen? Well… possibly. But also, simply due to the limitations of the hardware, it had to be. Early microcomputers simply did not have the computational power to run a PDP-11 OS. Thus, there was no choice but to build a new software infrastructure from very simple roots.

This is of course a notable contrast from our present age, in which your Ubuntu image, carried on the back of a sturdy Xeon, smiles cheerfully from under its seven gigabytes of crap. The Xeon can run seven gigabytes of crap—but Joe Sixpack cannot manage seven gigabytes of crap. Amazing things, of course, are done with this assware. Amazing things were also done with VMS. Amazing things were done with Windows. Don’t take it personally.

So: we’ve identified one existential obstacle to personal cloud computing. We don’t have a cloud operating system, or anything like it, which could be remotely described as simple enough to be “personal”—assuming said person is Joe Sixpack and not Dennis Ritchie. No OS, no computer, no product, no business. The thing simply cannot be done. And Gall’s law says we can’t get there from here.

But actually it’s not the only such obstacle. If we somehow surmounted this obstacle, we would face another insurmountable obstacle. It’s not just that we need a new OS to replace Unix—we also need a new network to replace the Internet.

Managing a personal server in the sky is much harder than managing a phone in your pocket. Both run apps—but the personal cloud computer is a server, and the phone is a client. The Internet is already a bit of a warzone for clients, but it’s digital Armageddon for servers. You might as well send Joe Sixpack, armed with a spoon, into the Battle of Kursk.

An Internet server is, above all, a massive fortified castle in alien zombie territory. The men who man these castles are men indeed, quick in emacs and hairy of neck. The zombies are strong, but the admins are stronger. They are well paid because they need to be, and their phones ring often in the night. Joe is a real-estate agent. No one calls him at 3 in the morning because Pakistani hackers have gotten into the main chemical supply database.

So long as this is true, it really doesn’t matter what software you’re running. On a real computer, user-installed apps talk to foreign servers directly, and vice versa; if network administration alone is a job for professionals, no cloud computer on this network can conceivably be personal. It is an industrial cloud computer, not a personal one.

So: serious problem here. By 2020—two years before the apotheosis of HTML 5—we’re going to need (a) a completely new OS infrastructure, and (b) a completely new network. Or we can also, of course, remain in our present state of lame.

Can it be done? Why, sure it can be done. If anything, we have too much time. The simple fact is that our present global software infrastructure, ass though it be, is almost perfectly constructed for the job of hosting and developing the upgrade that replaces it. All we have to do is make sure there is an entirely impermeable membrane between assware and the future. Otherwise, the new infrastructure becomes fatally entangled with the old. The result: more ass.

Assware has one great virtue: ass is easy to glue. All useful software today is at least 37% pure glue. You can just count the spaces between the letters. For instance, when we see a LAMP stack, we see four letters and three gallons of glue.

It is perfectly possible to create and even deploy an entirely new system software stack, so long as it entirely eschews the charms of Unix. If your new thingy calls Unix, it is doomed. Unix is like heroin. Call Unix once—even a library, even your own library—and you will never be portable again. But a Unix program can call a pure function, and indeed loves nothing better. You can’t use ass, but ass can use you.
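
In code, that direction of dependency looks something like the following: a minimal sketch, with invented types, of a new stack that is nothing but a pure transition function the host calls.

    // The entire "new world" is one pure function: no imports, no clock,
    // no filesystem, no sockets, no Unix. It maps (state, event) to
    // (new state, requested effects) and nothing else.
    type State = { log: string[] };
    type Event = { from: string; data: string };
    type Effect = { to: string; data: string };

    function step(state: State, event: Event): [State, Effect[]] {
      const next: State = { log: [...state.log, `${event.from}: ${event.data}`] };
      const effects: Effect[] = [{ to: event.from, data: `ack ${next.log.length}` }];
      return [next, effects];
    }

    // The host side is disposable glue in whatever the host OS speaks. It owns
    // persistence and the wire; the new stack never calls down into it.
    let state: State = { log: [] };
    function hostDeliver(event: Event): Effect[] {
      const [next, effects] = step(state, event);
      state = next;     // persist however the host likes
      return effects;   // transmit however the host likes
    }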

When people created the first simple operating systems from scratch, they had only simple computers to build them on. This enforced an essential engineering discipline and made the personal computer possible. No force enforces this discipline now, so there is no economic motivation for creating simple system software stacks from scratch.

As for new networks—phooey. Layering a new peer-to-peer packet network over the Internet is simply what the Internet is designed for. UDP is broken in a few ways, but none that can’t be fixed. It’s simply a matter of time before a new virtual packet layer is deployed—probably one in which authentication and encryption are inherent.

Putting our virtual computer on a virtual overlay network changes the game of the network administrator, because it splits his job into two convenient halves. One, the node must protect itself against attacks on the underlying network by attackers without legitimate credentials for the overlay network. Two, the node must protect itself from attacks by legitimate but abusive overlay users.

Job one is a generic task—DoS defense of the most crude, packety sort—and can be handled by Joe’s ISP or host, not Joe himself. Attacking an overlay node at the Internet level is a lot like trying to hack an ’80s BBS by calling up the modem and whistling at it. Job two is a matter for the overlay’s administrators, not Joe himself. All of the difficulty in securing the Internet against its own users is a consequence of its original design as a globally self-trusting system. So again, we solve the problem by washing our hands completely of any and all legacy assware.
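
As a sketch of what inherent authentication at the packet layer might mean, here is the shape of it using Node’s built-in Ed25519 primitives; the packet format itself is invented:

    // Every overlay datagram is signed by a stable overlay identity, so an
    // arriving packet is either attributable to a credentialed peer or dropped
    // as noise before any application sees it.
    import { generateKeyPairSync, sign, verify, KeyObject } from "crypto";

    interface OverlayPacket {
      senderId: string; // overlay identity, not an IP address
      payload: Buffer;
      signature: Buffer;
    }

    function seal(senderId: string, payload: Buffer, priv: KeyObject): OverlayPacket {
      return { senderId, payload, signature: sign(null, payload, priv) };
    }

    function accept(pkt: OverlayPacket, senderPub: KeyObject): boolean {
      return verify(null, pkt.payload, senderPub, pkt.signature);
    }

    const { publicKey, privateKey } = generateKeyPairSync("ed25519");
    const pkt = seal("joe", Buffer.from("hello"), privateKey);
    console.log(accept(pkt, publicKey)); // true: abuse handling starts from
                                         // attribution, not from guessing IPs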

Let’s review the basic requirements for a personal cloud OS—in case you care to build one. I see only three:

First, that motherfucker needs to be simple. If there are more than 10,000 lines of code anywhere in your solution, or the compressed source distribution exceeds 50K, Gall’s law says you lose. Various kinds of virtual Lisp machines, for instance, can easily hit this objective. But, if it’s Lisp, it had better be a simple Lisp.
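
As a crude calibration of how little code a usable core needs, here is the heart of a toy Lisp-style evaluator (numbers, symbols, lambda, application) in a few dozen lines; obviously not the proposed OS, just a feel for the budget:

    // Expressions are numbers, symbols, or lists; values are numbers or closures.
    type Expr = number | string | Expr[];
    type Value = number | ((args: Value[]) => Value);
    type Env = Map<string, Value>;

    function evaluate(expr: Expr, env: Env): Value {
      if (typeof expr === "number") return expr;
      if (typeof expr === "string") {
        const v = env.get(expr);
        if (v === undefined) throw new Error(`unbound symbol: ${expr}`);
        return v;
      }
      const [head, ...rest] = expr;
      if (head === "lambda") {
        const params = rest[0] as string[];
        const body = rest[1];
        return (args: Value[]) => {
          const local = new Map(env);
          params.forEach((p, i) => local.set(p, args[i]));
          return evaluate(body, local);
        };
      }
      const fn = evaluate(head, env);
      if (typeof fn !== "function") throw new Error("not a function");
      return fn(rest.map((e) => evaluate(e, env)));
    }

    const globals: Env = new Map<string, Value>([
      ["+", (args: Value[]) => (args[0] as number) + (args[1] as number)],
    ]);
    // ((lambda (x y) (+ x y)) 2 3) => 5
    console.log(evaluate([["lambda", ["x", "y"], ["+", "x", "y"]], 2, 3], globals));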

What is a simple cloud computer, when introduced, version 1.0? Can it be a personal cloud computer? It cannot. The Apple II cannot exist without the Altair. With 10,000 lines of code or less, you cannot compete with Ruby on Rails for hosting the newest, greatest Twitter ripoff, just as the Altair cannot compete with the VAX—at the job of being a VAX. But the VAX also makes a pretty crappy Altair.

If history repeats itself, the 2012 ancestor of the 2020 personal cloud computer is neither the 2012 cloud information appliance, nor the 2012 industrial cloud computer. If it exists at all, it can only exist as a hobby computer—like the Altair.

A hobby computer doesn’t try to serve the needs of any existing user base. It is its own need. It exists to be played with. As it is played with, it will naturally mature and learn to serve needs. But at first, it is much more important to remain simple, than to solve any specific problem.

Second, your virtual OS needs to be semantically isolated from the host OS. Anything that can call Unix, is Unix. That’s why the Javascript/browser ecology, for all its stank, succeeds: it can’t call Unix. It could invent its own compatibility disasters, but at least it didn’t import Posix’s. If Netscape had cut a hole into Unix, it would have died without a trace—as perhaps it deserved.

The natural consequence of this restriction is that Joe’s virtual computer is, or at least should be, portable across hosts. This is a delightful service which can of course be implemented by assware with yet another layer of complexity, but should emerge naturally from any really simple system.

Third, your virtual computer needs to be a computer, i.e., capable of arbitrary general-purpose Turingy goodness. It can compute, it can store data, it can communicate with other computers—it can even talk to the old legacy Internet, albeit via a gateway. Think of any Web app you use. If Joe’s computer can’t implement this app, at least logically, it is not, in any meaningful sense, a computer. For example, can it virtualize itself? If not…

So my view is: not only is personal cloud computing solvable, but it’s simple by definition. So it can’t even be hard. Some neighbor should just do it. He’s got eight years.