Animation Isn't Just for Kids

You've probably noticed how user interface animation has become more prevalent over the last 5 years. Application windows animate as they open, minimize and close. File icons animate into position when dropped in file managers. Web pages animate as they change in reaction to users.

When this trend began, many technical types in the industry lamented it and dismissed it as "eye candy". A Google search for "disable vista eye candy" returns over 500,000 hits. Many people just didn't see the point, and others were hurt by the performance impact. The pundits collectively shook their heads at what they perceived as Microsoft delivering yet another solution in search of a problem to solve.

While Apple has great taste in design, they struggled to justify the animation capabilities they rolled out in Leopard:

"Looking pretty is a very important part of making the user interface more intuitive," Bayer said in describing the benefit of Core Animation. "The user gets more insight into what's actually happening in the software."

That's not a very convincing argument.

Animation on the Move

Based on that reaction and weak justification, you would have expected something of a pullback by Apple and Microsoft. Instead, Apple's iOS and Microsoft's Metro (the user interface in Windows Phone, first deployed in the now-dead Zune) both took animation to a whole new level. Almost every part of the user interface animates. Buttons fade in and out, apps zoom in when launched and zoom out when backgrounded, windows slide left and right as user interfaces are navigated, and so on.

Probably the most memorable and remarkable animation was the smooth scrolling on the iPhone - complete with acceleration and bouncing as the view hit its scrolling boundaries. There is something almost viscerally satisfying about playing with that feature.

It's Not About Bugs Bunny

If you go looking for the rationale behind animation in user interfaces, you tend to find the weak justifications noted above, or this from Apple's user interface guidelines:

Animation is a great way to communicate effectively, as long as it doesn’t get in the way of users’ tasks or slow them down. Subtle and appropriate animation can:

  • Communicate status
  • Provide useful feedback
  • Enhance the sense of direct manipulation
  • Help people visualize the results of their actions

Again, pretty weak overall - but that third bullet I think gets close to the mark.

    The situation is not much better in academia. I hate to single out this one example, but the Special Interest Group on Computer-Human Interaction (SIGCHI) Fall Conference of 2009 had a presentation on a paper titled "The animated GUI: Lessons from Disney":

    Why animate?

  • Provide a natural flow
  • Focus attention on the action
  • Provide a sense of bearing
  • Engage and appeal
  • NOT: disrupt or hold back

While their reasoning is not invalid, I think they are describing side-effects of the animation more than anything.

    In other words, none of this describes why you get a visceral pleasure from playing with the smooth scrolling on an iPhone. Nor does it describe why people complain about the lack of it on Android.

    Step Functions Are Alien

If you look around yourself in the world, you are surrounded by changes in your environment that happen in a smooth fashion. Doors smoothly transition from open to closed. The wind blows trees back and forth smoothly. The temperature changes slowly. The sun moves across the sky slowly.

One way of looking at these changes is to model each one with a function that describes how its characteristics change over time. All of the examples above would be described by functions consisting of smooth curves. This whole class of functions can be loosely described as "analog functions".

On the other hand, a "step function" is one in which the value jumps between levels instantaneously. If they existed in the real world, it would be like a door going from open to closed in a split instant with no smooth movement in between. When you plot such a function it looks like stairs - hence the name.

I believe that step functions are foreign to us as human beings. They are not common in nature, and I believe that biologically we are not tuned for dealing with them. In fact, if things around you started behaving in a step-function-like way (doors open one instant and closed the next, the sun up one instant and down the next, etc.) you would not only be unsettled, you would begin to question your own sanity.
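
To make the contrast concrete, here is a minimal Python sketch (the door angle and one-second duration are just illustrative values) comparing a step function with a smooth, eased transition:

    # Sample a door's angle (0 = closed, 90 = open) over one second,
    # once as a step function and once as a smooth ease-in-out curve.

    def step(t):
        # Step function: the door is instantly open at t >= 0.5s.
        return 0.0 if t < 0.5 else 90.0

    def ease_in_out(t):
        # "Smoothstep" easing: starts slowly, accelerates, then settles.
        t = min(max(t, 0.0), 1.0)
        return 90.0 * (3 * t * t - 2 * t * t * t)

    for i in range(11):
        t = i / 10.0
        print(f"t={t:.1f}s  step={step(t):5.1f}  smooth={ease_in_out(t):5.1f}")

The step version is exactly the kind of instantaneous jump described above; the eased version is the analog behavior that good UI animation tries to mimic.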

    Animation Makes User Interfaces Unalien

Up until about 5 years ago, we were presenting screens to users with many unnatural step-function changes. Windows would instantly open and disappear. File icons would instantly jump to their proper locations. Menus would instantly appear and disappear. Viewed in this context, it's no wonder that technophobia is a real problem.

    The conclusion I've come to is that the real reason good computer animation is effective and appealing is quite simply because it presents analog behavior to us. All of the sudden and jarring step functions have been replaced with analog functions. Items fade, slide, zoom and reshape themselves smoothly. I think this has a lot to do with the success of the iPhone and other animated GUIs in the consumer market.

    So, the next time you hear somebody in the tech community complain about "eye candy" you might want to ask them instead why it's taken us as an industry so long to figure out how to stop alienating our customers.

    Mobile Flash is Dead - The Battle of the Web Terminals is Over

    Well, get ready for the gang pile - Mobile Flash is Dead.

    There will be plenty said about this topic from business and marketing perspectives. But what does this represent from a technology perspective? More than anything, I think it's just another victim in a long standing battle amongst various approaches to Web Terminals.

    Some Brief History

    If you take a look at the IEEE Computer Society's Timeline of Computing History you can piece together some interesting highlights in computing:

    • First computer "mainframes" in the 1950s were giant mechanisms with no remote access.
    • The 1960s saw the development of "terminals" which allowed remote access to a mainframe.
    • The 1970s up to the mid-1980s marked the rise of "personal computers" - smaller systems dedicated to an individual user.
    • Beginning in the mid-1980s, "network computing" became the popular concept. "Client" software on personal computers would talk to mainframe "server" software accessible via computer networks.
    • In 1990 the ARPANET is decommissioned and the Internet officially commercialized. With the introduction of the web browser in 1993, the "web" is born.
    • The 2000s see a tremendous boom in web based applications and services.
    • In 2008, the Apple App Store is opened and mobile applications explode in popularity.

If you step back from that summary for a minute and look at it from a high level, these are the trends at work:

    • large computing systems with no network connectivity
    • large computing systems with remote terminals
    • small computing systems with no network connectivity
    • small computing systems with network connectivity to large computing systems via applications
    • small computing systems with network connectivity to large computing systems via browsers
    • small computing systems with network connectivity to large computing systems via applications

    Somehow in the 2000s, the entire industry convinced itself that the web was going to be the solution to all of the computing industry's problems. I believe the reversal of this in the late 2000s is an indication of the mistake that the industry made trying to force everything onto the web.

    Web Terminals

Based on that view of computing history, it's easy to see the web browser as just a throwback to the terminals of the early mainframe days. Client/server applications were considered difficult to build - the environment on the personal computer was just too complex and difficult to deal with (i.e., Windows). Wouldn't it be easier to have a dumb terminal (a web browser) and put energy into building server applications in a more stable environment?

    For many applications, this proved good enough. Those early browsers were pretty dumb terminals with many limitations but many developers saw concrete benefits to that approach, at least for simple applications.

Then in 1995, along came Java. It was supposed to be the answer to all of our dreams - it would allow full use of all of the computing resources on the personal computer while delivering ease of development and deployment. It would start off as a browser plugin but would surely someday become the ultimate web smart terminal. Yet, despite all of that, Java never fulfilled that vision - and in fact is rarely used in browsers today.

    The Rise of Flash

Why did Java fail to take over all of client/server computing and the web? In my opinion, one simple reason: a lack of sex appeal. What was really driving the growth of the Internet at that time was commercialization. And that commercialization was being driven by average people beginning to discover the Internet. Java was a visually ugly system with little ability to satisfy the sizzle and style needs of marketers and promoters who needed to connect with those new users.

The arrival of the Flash Player in 1997 (at a time when the web browser was still a very dumb terminal) met this need and then some. Java was languishing as the ugly stepchild and the web browser's IQ had flatlined. All of a sudden the Flash Player was the ultimate smart terminal. It opened up tremendous new opportunities for rich content on the web.

This state of affairs stayed relatively stable until about the mid-2000s. Java had long since given up on conquering the web, having conquered the back office instead, but the web browser was starting to learn some new tricks - though still not quite enough at that time.

    The Fall of Flash

Apple infamously opened the topic of iPhone application development by telling all of their developers that all apps would be web apps - that was the future. This was met with derision, and in 2008 native application development was introduced. For the first time in almost 15 years, a platform began driving the growth of traditional client/server application development again. This was the first sign of problems for Flash, the King of Web Terminals.

The next problem for Flash was that the web browser was finally starting to smarten up. Between Apple and Google, innovation in the web browser arena began to explode. Suddenly, you could do the same things in open-standards browsers that had traditionally only been possible in Flash - a proprietary environment with expensive commercial tools.

Thus, today's news is big, but pretty inevitable given the trends. Flash was the smartest web terminal on the block for many years. However, once Apple and Google threw their resources behind the browser, it was just a matter of time before the browser killed Flash.

    What Next?

    When viewed as web terminal technology, the battle is pretty clear. However, what about the larger conflict? What is to become of native client application development versus web terminals?

Perhaps the reversal in trend back to native client applications is not a new move away from web terminals but more of a correction of an earlier overreach - one that set back client/server computing development by many years.

Maybe what we are going to see is a new norm develop where web terminals (the browser) are used for simpler applications that don't need native access but do need wide platform and operating system support. At the same time, perhaps we will witness a renaissance in the development of native applications where performance, native API access and polished user interfaces are the highest priority.

    If that's how it really does play out, adding Objective-C (iOS) and Java (Android) to your resume is probably a good career move.

    Transparent Peer-to-Peer Network Connections

    Back to My Mac

    So, I was poking around the Internet today looking at tunneling technologies and stumbled across this little gem:

    RFC 6281 - Understanding Apple's Back to My Mac (BTMM) Service

    It's quite a tasty treat:

    BTMM provides secure transport connections among a set of devices  that may be located over a dynamic and heterogeneous network environment.  Independent from whether a user is traveling and accessing the Internet via airport WiFi or staying at home behind a NAT, BTMM allows the user to connect to any Mac hosts with a click, after which the user can share files with remote computers or control the remote host through screen sharing. When a user changes locations and thus also changes the IP address of his computer (e.g., roaming around with a laptop and receiving dynamically allocated IP address), BTMM provides a means for the roaming host to update its reachability information to keep it reachable by the user's other Mac devices.  BTMM maintains end-to-end transport connections in the face of host IP address changes through the use of unique host identifiers.  It also provides a means to reach devices behind a NAT.

It proceeds to go into considerable detail about how they pull this off. NAT, firewalls, IPsec, DDNS, oh my!

Unfortunately, its Achilles' heel is the need to get incoming ports opened on your NAT/firewall device so that the BTMM peer-to-peer tunnel can connect. This is "addressed" by requiring your NAT/firewall to support NAT-PMP or UPnP. Not every provider (*cough* U-verse *cough*) supports these protocols in their equipment. So, you either have to do manual port opening and mapping or you are just out of luck.
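
For the curious, here's a rough idea of what a NAT-PMP port mapping request looks like on the wire. This is a minimal Python sketch based on the NAT-PMP spec (RFC 6886); the gateway address is a placeholder, and a real client would also handle retransmission and mapping renewal:

    import socket
    import struct

    GATEWAY = "192.168.1.1"   # placeholder - your router's LAN address
    NAT_PMP_PORT = 5351       # NAT-PMP control port

    def request_tcp_mapping(internal_port, external_port, lifetime=3600):
        # Request: version=0, opcode=2 (map TCP), reserved, internal port,
        # suggested external port, requested lifetime in seconds.
        request = struct.pack("!BBHHHI", 0, 2, 0,
                              internal_port, external_port, lifetime)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(2.0)
        sock.sendto(request, (GATEWAY, NAT_PMP_PORT))
        response, _ = sock.recvfrom(16)
        # Response: version, opcode, result code, seconds since epoch,
        # internal port, mapped external port, granted lifetime.
        _, _, result, _, _, mapped, _ = struct.unpack("!BBHIHHI", response)
        if result != 0:
            raise RuntimeError("NAT-PMP mapping refused, result code %d" % result)
        return mapped

If the gateway doesn't speak NAT-PMP (or UPnP), the request simply goes unanswered - which is exactly the situation BTMM can't recover from without manual port mapping.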

    Can a Server Make it Work?

If a state-of-the-art implementation of transparent access to mobile devices is stymied by such a fundamental problem, is there any hope for turn-key access for mere mortals? For some background, see this post on Stack Overflow:

    Is there a way to bridge two outgoing TCP connections in order to bypass firewalls and NAT?

    NAT devices normally allow all traffic from the inside to the outside without restriction. When you start an outgoing TCP connection, the NAT device additionally sets up a mapping for the return traffic. The basic limitation, however, is on the other end of the connection - the destination device must either have an existing NAT mapping for the incoming traffic or it must be on the public Internet and not behind a NAT device.

As suggested by the Stack Overflow poster, you could therefore use a server on the Internet to shuffle packets between TCP connections from two devices that want to communicate with each other. However, you end up with a server trying to act like a router, and traffic runs through the server rather than directly between the two peers. That's not an ideal situation, but it does work - and works so well that this is the technique used by the remote desktop tool TeamViewer for its transparent access:

    When creating a session, TeamViewer determines the optimal type of connection. After the handshake through our master servers, in 70% of the cases a direct connection via UDP or TCP is established (even behind standard gateways, NATs and firewalls). The rest of the connections are routed through our highly redundant router network via TCP or http-tunnelling. You do not have to open any ports in order to work with TeamViewer! 
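
To illustrate the relay approach, here is a minimal Python sketch of the server side. Both peers make outgoing TCP connections to a public server (the listening port here is arbitrary) and the server just copies bytes between them; a production relay would of course need pairing logic, authentication and error handling:

    import socket
    import threading

    LISTEN_PORT = 9000   # arbitrary port for this example

    def pump(src, dst):
        # Copy bytes from one peer's connection to the other's until EOF.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.close()

    def main():
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("", LISTEN_PORT))
        server.listen(2)
        peer_a, _ = server.accept()   # first peer dials out to the relay
        peer_b, _ = server.accept()   # second peer dials out to the relay
        # Shuffle traffic in both directions between the two connections.
        threading.Thread(target=pump, args=(peer_a, peer_b), daemon=True).start()
        pump(peer_b, peer_a)

    if __name__ == "__main__":
        main()

Because both connections are outgoing from the peers' point of view, no ports need to be opened on either NAT/firewall - the trade-off is that every byte transits the relay.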

    Working with NAT

    Is there a way to avoid these drawbacks by using NAT itself? The idea is to leverage the standard NAT behavior so that two devices can simultaneously connect to each other and exchange traffic with no restriction. As far back as 2000 a Slashdot user sketched out a possible solution:

    TCP splicing through simultaneous SYNs

TCP does handle simultaneous connections (see section 3.4 of the original TCP spec, RFC 793). However, as the Slashdot poster indicates, the real problem is working with the NAT devices on each end to get the desired effect. The poster suggests using a third-party server to assist with the connection, but it still requires adjustments to NAT behavior in devices to have a chance of working.
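
As a rough sketch of what each peer would do, assuming a rendezvous server has already told it the other side's public address and port (the addresses and ports below are placeholders, and whether it works at all depends on how each NAT assigns and preserves its mappings):

    import socket

    LOCAL_PORT = 40000                   # the port this peer's NAT has mapped
    PEER_ADDR = ("203.0.113.7", 40000)   # other peer's public address (placeholder)

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Reuse the local port so repeated attempts keep the same NAT mapping.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", LOCAL_PORT))
    sock.settimeout(5.0)

    try:
        # Both peers call connect() at roughly the same moment. If the SYNs
        # cross while each NAT still holds its outbound mapping, TCP's
        # simultaneous-open handshake completes and a direct connection exists.
        sock.connect(PEER_ADDR)
        print("direct peer-to-peer connection established")
    except OSError:
        print("hole punching failed - fall back to a relay")

Whether this succeeds depends entirely on NAT behavior at both ends, which is exactly what the Slashdot poster wanted vendors to accommodate.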

    More Information

The two techniques discussed above are the basic approaches. For a more detailed discussion, and information about further variants, see this excellent RFC:

    RFC 5128 - State of Peer-to-Peer (P2P) Communication across Network Address Translators (NATs)

Written circa 2008, it has some great information and even more references in it.

    IPv6 Saves the Day?

Ultimately, this whole situation exists because we use NAT devices at all. We were cursed with them as the available IPv4 addresses began running out. NAT allowed the Internet to continue to grow while kicking the address space issue down the road for more than a decade. However, time finally ran out in January 2011:

    Address allocation kicks off IPv4 endgame

    And the endgame won't last long:

    IPv4 Address Report

    By mid-2014, all public IPv4 addresses will be gone.

Of course, this situation was planned for starting back in 1998 with the design of IPv6. It dealt with the address space problem directly by quadrupling the address length from 32 bits to 128 bits, which expands the number of possible addresses enormously. One participant said this amount of address space is "probably sufficient to uniquely address every molecule in the solar system".
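
For a rough sense of scale, a quick back-of-the-envelope check in Python:

    # IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses.
    ipv4_total = 2 ** 32      # about 4.3 billion addresses
    ipv6_total = 2 ** 128     # about 3.4 x 10**38 addresses

    print(ipv4_total)                 # 4294967296
    print(format(ipv6_total, ".1e"))  # 3.4e+38

Quadrupling the number of bits doesn't quadruple the space - it multiplies it by 2 to the 96th power.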

    With that much address space, there's no conceivable need for NAT boxes (ignoring any organizational want for them for policy purposes). And if you eliminate NAT boxes you eliminate the peer-to-peer communication problem.

    Transparency at Last

With the forced adoption of IPv6 there is finally a direct and obvious path towards truly transparent peer-to-peer access. While NAT was probably directly responsible for the unfettered growth of the Internet in the face of the address space shortage, its timely death will finally enable truly turn-key peer-to-peer access.