This is an interesting topic to me: how bandwidth drives applications, and how applications drive bandwidth.
Applications that use bandwidth often become thirsty for it. Example: X11. The X Window System was a novel idea years ago: separate a program’s display from its operation. And as long as the two can be separated, why not let them be separated over a network? Good idea. I was enamoured with this idea and spent a lot of time playing with X and the extensions and add-ons that made X work better over a WAN (dxpc and LBX). On a 10 Mbit LAN, X was pretty good years ago. I had visions of using the processing power of the X server to actually do much of the work in my application.
Example: in X11, the graphics primitives are very primitive (duh), but that’s needed. The problem is that common things cannot easily be automated without involvement of the X11 client (the program). So if you needed a menu system, you could not (and still can’t) just have the X server do it. You can put lots of stuff in the X server to make it effectively pre-loaded, but there is still program involvement to use it. So a whole new class of graphics toolkits came out that provide very advanced GUI elements, but still with program involvement (think of GTK or Qt…). Years ago, I had visions of an X11 client program loading a script or some tokenized object code into the X server that would perform most, if not all, of the functions that GTK or Qt do, so that the conversation between X server and X client would go like:
Client: Do you have the GTK primitives?
Server: Yes, I know how to do menus, buttons, sounds, and animated icons.
Client: Ok, I need you to draw a menu with this stuff in it and tell me when a user clicks on it.
Client: Here’s object code that will allow you to do a GTK icon bar.
Client: Here are the elements I want in an icon bar.
The key here is that once a primitive is in the X server, all clients can use it, and the involvement of any client program is just to create the widget and then receive high-level events from it.
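To make that imagined exchange concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the ToolkitServer class, the message flow, and the callback shape are all invented for illustration; no such extension exists in the real X protocol.

    # Hypothetical sketch of the imagined "toolkit in the X server" design.
    # All names are invented for illustration; this is not real Xlib or XCB.

    class ToolkitServer:
        """Stands in for an X server that can host uploaded widget code."""

        def __init__(self):
            self.primitives = {}  # widget name -> uploaded factory "code"

        def has_primitive(self, name):
            return name in self.primitives

        def load_primitive(self, name, factory):
            # In the imagined design this would be tokenized object code
            # shipped over the wire; here it is just a Python callable.
            self.primitives[name] = factory

        def create(self, name, spec, on_event):
            # The server builds and renders the widget locally; the client
            # is only called back with high-level events.
            return self.primitives[name](spec, on_event)

    def icon_bar(spec, on_event):
        # Toy "object code" for an icon bar: drawing and hit-testing would
        # happen server-side, so individual pointer events never cross the
        # network. Returns a function we can poke to simulate a user click.
        def click(item):
            if item in spec:
                on_event({"widget": "icon_bar", "clicked": item})
        return click

    server = ToolkitServer()

    # Client: Do you have the GTK primitives? ... Here's the object code.
    if not server.has_primitive("icon_bar"):
        server.load_primitive("icon_bar", icon_bar)

    # Client: Here are the elements I want; tell me when one is clicked.
    events = []
    bar = server.create("icon_bar", ["open", "save", "quit"], events.append)

    bar("save")    # a user click, handled entirely "server-side"
    print(events)  # client sees only: [{'widget': 'icon_bar', 'clicked': 'save'}]

The point of the sketch is the traffic pattern: one upload, one create request, and then only high-level events flow back to the client.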
Well, what happened? Why don’t we do this today? At least in this form, we don’t have quite that level of automation. First, X11 is not the predominant windowing environment. Second, adding to the X protocol is a tedious process: vendors of X terminals need concrete standards to implement. The net result is that if you run a fairly complex application over even a gigabit network, it is very slow. I recently tried to run Firefox as a remote X11 client over a network as a test and it was terrible. (Just so everyone knows, the primary reason for the incredible slowness is really latency, but latency and bandwidth are tied together for the purposes of this discussion.)
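A quick back-of-the-envelope sketch shows why. The numbers below are assumptions picked for illustration, not measurements, but they capture how a chatty protocol that waits for replies loses to latency even on a fat pipe:

    # Illustrative, assumed numbers: a synchronous protocol where each
    # request waits for its reply before the next request is sent.
    round_trips   = 1000        # round trips to build a complex UI (assumption)
    rtt           = 0.050       # 50 ms WAN round-trip time (assumption)
    payload_bytes = 2_000_000   # 2 MB of total protocol data (assumption)
    link_Bps      = 1e9 / 8     # gigabit link, in bytes per second

    transfer_time = payload_bytes / link_Bps  # time the bytes spend on the wire
    waiting_time  = round_trips * rtt         # time spent waiting on replies

    print(f"moving the bytes: {transfer_time:.3f} s")  # ~0.016 s
    print(f"waiting on RTTs:  {waiting_time:.1f} s")   # 50.0 s

The gigabit link moves all the data in a few milliseconds; the round trips cost nearly a minute. More bandwidth doesn’t help that at all.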
What happened was that as processing power increased, X terminals became less viable and everyone got a computer on their desk. This meant the bandwidth between X client and X server was effectively just a context switch away, so apps were developed to take advantage of that. We got beautiful anti-aliased fonts, animated menus, and a host of other eye candy.
The need for remote access to applications didn’t go away, though. This led to the development of much simpler remote protocols like VNC and RDP. Very dumb, but sometimes that’s better. These are essentially just pictures of a remote screen; unlike X, there is very little negotiation between client and server.
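Simplified to the point of caricature (the real RFB protocol behind VNC negotiates pixel formats, encodings, and incremental updates), the framebuffer idea looks something like this sketch:

    # Toy sketch of a VNC-style framebuffer update: the server ships
    # rectangles of raw pixels and the client just blits them. The client
    # needs no concept of menus, fonts, or widgets.

    WIDTH, HEIGHT = 8, 4
    screen = [[0] * WIDTH for _ in range(HEIGHT)]  # client's local copy

    def apply_update(x, y, w, h, pixels):
        # Copy one changed rectangle, as sent by the server, into place.
        for row in range(h):
            for col in range(w):
                screen[y + row][x + col] = pixels[row][col]

    # "Server" reports that a 3x2 region at (2, 1) changed.
    apply_update(2, 1, 3, 2, [[7, 7, 7], [7, 7, 7]])

    for line in screen:
        print(line)

Dumb as it is, the cost is roughly proportional to how much of the screen changes, and it never needs a widget-level round trip.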
So the net result: bandwidth created an application (X11); then the application required more bandwidth than networks could provide, so it in effect drove bandwidth. The core need (remote access) didn’t go away, so another whole class of applications came out (VNC, RDP).