Rise of the Virtual Machines

UPDATE 2: Since my last update, I discovered Mainframe2, another pretty amazing take on virtualization.

UPDATE: Since I wrote this post, I discovered Docker, another very interesting direction in virtual machines.

"Virtualization" is a term that's used pretty regularly – but exactly what do we mean when we use that term? There's roughly three levels of virtualization in use today:

  1. Hardware and CPU emulation. This is the lowest level of virtualization. Most recent implementations emulate a standard PC architecture and run machine code. Despite the recent growth in this area (VMware, Parallels), this type of virtualization actually dates back to 1972, when IBM released VM/370 for mainframes.
  2. Byte code emulation. This higher-level emulation implements only an abstract computing environment (registers, memory, instructions) – there is no true hardware emulation. This type of VM assumes it is running in the context of an operating system that provides all of the facilities hardware peripherals would normally provide. The Java VM, Microsoft's CLR, Python, Ruby and Perl are all examples of this type of VM (see the sketch after this list).
  3. Sandboxing. This high-level virtualization modifies the behavior of operating system calls to implement a container ("sandbox") in which normal applications are isolated from other applications and system resources as a matter of policy. Some examples are Apple's App Sandbox, the iOS App Sandbox and Linux Containers.

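To make the byte code level concrete, here's a minimal sketch of a level 2 style VM - a toy stack machine with a made-up instruction set (not the JVM's or the CLR's) that emulates only an abstract machine and leans on the host operating system for everything else:

```python
# Toy "level 2" byte code VM: a made-up instruction set, not any real VM's.
# It virtualizes only an abstract machine (a stack and a few instructions)
# and relies on the host OS for I/O -- the key difference from level 1.

PUSH, ADD, MUL, PRINT, HALT = range(5)

def run(program):
    stack = []
    pc = 0  # program counter into the byte code list
    while True:
        op = program[pc]
        if op == PUSH:
            stack.append(program[pc + 1])
            pc += 2
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
            pc += 1
        elif op == PRINT:
            print(stack[-1])  # I/O goes through the host OS, not emulated hardware
            pc += 1
        elif op == HALT:
            return
        else:
            raise ValueError(f"unknown opcode {op}")

# (2 + 3) * 4 -> prints 20
run([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT])
```

A real byte code VM like the JVM adds a verifier, a garbage collector and a JIT compiler, but the shape is the same: the "machine" being emulated is an abstraction, and the host operating system supplies the files, networking and other facilities real hardware would.
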
What all three of these techniques have in common is that they mediate access between an application (or operating system) and its execution environment. The goals are to increase security, manage resources more reliably and efficiently, and simplify deployments. These benefits are behind the rapid rise of hardware virtualization over the last five years.

What's interesting are the parallels between virtualization and the web. A virtualization instance (or virtual machine – VM) creates a virtual environment for each application or operating system to operate within, for both security and efficiency reasons. Web browsers do the same thing with JavaScript – each web page has its own execution environment. You could call it level 2.5 virtualization, as it shares aspects of level 2 and level 3 virtualization.

Virtualization can be a mind-bending exercise – especially when you start looking at things like JSLinux. JSLinux is a hardware VM implemented in JavaScript that runs inside a web page. The demo is pretty amazing – it boots a relatively stock Linux kernel. The mind-bending part is when you realize this is a level 1 VM implemented inside a level 2.5 VM. Technically, you should be able to run a web browser inside JSLinux and launch yet another nested VM instance.

The Blue Pill

Where is all of this going? With almost four different types of VMs, and proofs of concept that intermix them in reality-altering ways (The Matrix, anyone?), it seems we haven't reached the apex of this trend yet.

One path this could follow is Arc. Arc takes the browser in a slightly different direction from JSLinux: it packages VirtualBox into a browser plugin and combines it with a specification for describing web-downloadable virtual machines. This makes the following possible: you install the Arc plugin in your browser, visit a web page with an Arc VM on it, and the plugin downloads the spec, assembles and launches the VM – and you wind up securely running a native application via the web.

In other words, visiting a web page today could turn into launching a virtualized application tomorrow.
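
To make that flow a bit more concrete, here's a rough sketch of the kind of thing such a plugin would automate. This is not Arc's actual format or code - the URL, file name and VM name are hypothetical - but it uses VirtualBox's real VBoxManage CLI to show the "download the spec, assemble, launch" sequence:

```python
# Hypothetical sketch of the "download spec, assemble and launch" flow.
# Assumes VirtualBox is installed locally and the page publishes an OVA-style
# appliance; the URL and names below are made up for illustration.
import subprocess
import urllib.request

SPEC_URL = "https://example.com/app/appliance.ova"  # hypothetical appliance referenced by the page
LOCAL_OVA = "appliance.ova"
VM_NAME = "web-delivered-app"

# 1. Download the VM description/appliance the web page points at.
urllib.request.urlretrieve(SPEC_URL, LOCAL_OVA)

# 2. "Assemble" the VM by importing the appliance into VirtualBox.
subprocess.run(
    ["VBoxManage", "import", LOCAL_OVA, "--vsys", "0", "--vmname", VM_NAME],
    check=True,
)

# 3. Launch it - the user winds up running a native application delivered via the web.
subprocess.run(["VBoxManage", "startvm", VM_NAME, "--type", "gui"], check=True)
```

The commands themselves aren't the interesting part; the point is that the browser becomes the distribution channel while the hypervisor supplies the isolation.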

While there are clear efficiency and security benefits to this, there's also a huge developer benefit: developers would be able to implement web applications in almost any conceivable fashion. They could choose the operating system, the GUI toolkit and anything else they like as the basis of their application. The situation where the web tends to be the lowest common denominator gets turned on its head: developers would be freed to build with whatever tools provide the best experience and value to the end user.

This fanciful scenario implies a possible future trend: the death of standards. Web standards exist and are adopted because they benefit developers – they create a homogeneous ecosystem which is easier to write for and deploy into. But if virtualization takes us to the point where the characteristics of the execution environment are so isolated from the application that they have no impact on it, why bother with standards?

If things do play out that way, the irony will be that the Java VM shot too low when it first arrived on the scene claiming "write once, run anywhere". Maybe we are really going to end up with "create anything, run anywhere".

Mobile Flash is Dead - The Battle of the Web Terminals is Over

Well, get ready for the gang pile - Mobile Flash is Dead.

There will be plenty said about this topic from business and marketing perspectives. But what does this represent from a technology perspective? More than anything, I think it's just another victim in a long-standing battle amongst various approaches to Web Terminals.

Some Brief History

If you take a look at the IEEE Computer Society's Timeline of Computing History you can piece together some interesting highlights in computing:

  • The first computer "mainframes" of the 1950s were giant machines with no remote access.
  • The 1960s saw the development of "terminals" which allowed remote access to a mainframe.
  • The 1970s up to the mid-1980s marked the rise of "personal computers" - smaller systems dedicated to an individual user.
  • Beginning in the mid-1980s, "network computing" became the popular concept. "Client" software on personal computers would talk to mainframe "server" software accessible via computer networks.
  • In 1990, ARPANET was decommissioned and the Internet was officially commercialized. With the introduction of the web browser in 1993, the "web" was born.
  • The 2000s saw a tremendous boom in web-based applications and services.
  • In 2008, the Apple App Store opened and mobile applications exploded in popularity.

If you step back from that summary for a minute and look at it from a high level, these are the trends at work:

  • large computing systems with no network connectivity
  • large computing systems with remote terminals
  • small computing systems with no network connectivity
  • small computing systems with network connectivity to large computing systems via applications
  • small computing systems with network connectivity to large computing systems via browsers
  • small computing systems with network connectivity to large computing systems via applications (once again)

Somehow in the 2000s, the entire industry convinced itself that the web was going to be the solution to all of the computing industry's problems. I believe the reversal of this trend in the late 2000s is an indication of the mistake the industry made in trying to force everything onto the web.

Web Terminals

Based on that view of computing history, it's easy to see the web browser as just a throwback to the terminals of the early mainframe days. Client/server applications were considered difficult to build - the environment on the personal computer was just too complex and difficult to deal with (i.e. Windows). Wouldn't it be easier to have a dumb terminal (a web browser) and put the energy into building server applications in a more stable environment?

For many applications, this proved good enough. Those early browsers were pretty dumb terminals with plenty of limitations, but developers saw concrete benefits to the approach, at least for simple applications.

Then in 1995, along came Java. It was supposed to be the answer to all of our dreams - it would allow full use of all of the computing resources on the personal computer while delivering ease of development and deployment. It would start off as a browser plugin but would surely someday become the ultimate smart web terminal. Yet, despite all of that, Java never fulfilled that vision - and in fact is rarely used in browsers today.

The Rise of Flash

Why did Java fail to take over all of client/server computing and the web? In my opinion, one simple reason: sex appeal. What was really driving the growth of the Internet at that time was commercialization, and that commercialization was being driven by average people beginning to discover the Internet. Java was a visually ugly system with little ability to satisfy the sizzle-and-style needs of the marketers and promoters who needed to connect with those new users.

The arrival of the Flash Player in 1997 (at a time when the web browser was still a very dumb terminal) met this need and then some. Java was languishing as the ugly stepchild and the web browser's IQ had flatlined. All of a sudden, the Flash Player was the ultimate smart terminal, and it opened up tremendous new opportunities for rich content on the web.

This state of affairs stayed relatively stable until about the mid-2000s. Java had long since given up on conquering the web, having conquered the back office instead, but the web browser was starting to learn some new tricks - though still not quite enough at that time.

The Fall of Flash

Apple infamously kicked off iPhone application development by telling all of their developers that apps would be web apps - that was the future. This was met with derision, and in 2008 native application development was introduced. For the first time in almost 15 years, a platform was driving the growth of traditional client/server application development again. This was the first sign of problems for Flash, the King of Web Terminals.

The next problem for Flash was that the web browser was finally starting to smarten up. Between Apple and Google, innovation in the web browser arena began to explode. Suddenly, you could do the same things in open-standards browsers that had traditionally only been possible in Flash - a proprietary environment with expensive commercial tools.

Thus, today's news is big, but pretty inevitable given the trends. Flash was the smartest web terminal on the block for many years, but it was just a matter of time before the browser killed it once Apple and Google threw their resources behind browser innovation.

What Next?

When Flash is viewed as web terminal technology, the outcome of that battle is pretty clear. However, what about the larger conflict? What is to become of native client application development versus web terminals?

Perhaps the reversal in trend back toward native client applications is not a new trend away from web terminals, but a correction of an earlier overreach that set client/server computing back many years.

Maybe what we are going to see is a new norm where web terminals (browsers) are used for simpler applications that don't need native access but do need wide platform and operating system support. At the same time, perhaps we will witness a renaissance in native application development where performance, native API access and polished user interfaces are the highest priority.

If that's how it really does play out, adding Objective-C (iOS) and Java (Android) to your resume is probably a good career move.