Fedora 11 Hint

Hmm..  Haven’t written anything in a while, but time for something new…

This is a simple entry.  I upgraded to Fedora 11 tonight and then immediately ran ‘yum update’.  I ran into two issues.  First, the upgrade had elected not to install the fc11 yum package because my fc10 package had a newer version number.  The result was that yum wouldn’t run at all after the upgrade.  This was resolved easily enough by forcing the installation of the fc11 yum package.
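
For the record, the fix looks roughly like this (a sketch only; the exact package file name will vary, so grab whatever the install DVD or an F11 mirror actually carries):

    # yum itself won't run, so use rpm directly; --oldpackage lets the
    # fc11 build install even though its version number is lower than fc10's
    rpm -Uvh --oldpackage yum-*.fc11.noarch.rpm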

The second problem was that you need to install ‘yum-plugin-fastestmirror’ to get reasonable download performance for the updated packages.  Do this before you run ‘yum update’; it saves lots of time.
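
So the order that would have saved me time is simply:

    # install the fastestmirror plugin first, then do the big update
    yum install yum-plugin-fastestmirror
    yum update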

The Evolution of Bandwidth

This is a topic that interests me greatly: how bandwidth drives applications, and how applications drive bandwidth.

Applications that use bandwidth often become thirsty for it.  Example:  X11.  X Windows was a novel idea years ago: separate a program’s display from its operation.  And as long as the two can be separated, why not let them be separated across a network?  Good idea.  I was enamoured with this idea and spent a lot of time playing with X and with the extensions and add-ons that made X work better over a WAN (dxpc and LBX).  On a 10 Mbit LAN, X was pretty good years ago.  I had visions of using the processing power of the X server to actually do much of the work in my application.

Example:  In X11, the graphics primitives are very primitive (duh), but that’s necessary.  The problem is that common things cannot easily be automated without involvement of the X11 client (the program).  So if you needed a menu system, you could not (and still can’t) just have the X server do it.  You can put lots of stuff in the X server so that it’s effectively pre-loaded, but the program still has to drive it.  So a whole new class of graphics toolkits came out that provide very advanced GUI elements, but still with program involvement (think of GTK or Qt…).  Years ago, I had visions of an X11 client program loading a script or some tokenized object code into the X server that would perform most, if not all, of the functions that GTK or Qt do.  The conversation between X server and X client would go something like:

Client:  Do you have the GTK primitives?

Server: Yes, I know how to do menus, buttons, sounds, and animated icons.

Client: Ok, I need you to draw a menu with this stuff in it and tell me when a user clicks on it.

Client:  Here’s object code that will allow you to do a GTK icon bar.

Server:  Thanks.

Client:  Here are the elements I want in an icon bar.

The key here is that once the primitives are in the X server, all clients can use them, and the involvement of any client program is just to create a widget and then receive high-level events from it.

Well, what happened?  Why don’t we do this today?  At least in this form, we don’t have quite that level of automation.  First, X11 is not the predominant windowing environment.  Second, adding to the X protocol is a tedious process; vendors of X terminals need concrete standards to implement.  The net result is that if you run a fairly complex application over even a gigabit network, it is very slow.  I recently tried to use Firefox as a remote X11 client over a network as a test, and it was terrible.  (Just so everyone knows, the primary reason for the incredible slowness is really latency, but latency and bandwidth are tied together for the purposes of this discussion.)
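
If you want to reproduce the test, here’s a minimal sketch; it assumes sshd with X11 forwarding enabled on the remote machine, and the host names are placeholders:

    # run Firefox on the remote host, display it on the local X server;
    # ssh -X tunnels the X11 protocol over the connection
    ssh -X remotehost firefox

    # the old-school way: let the remote client talk straight to the local
    # display (unencrypted, LAN only; 'mydesktop' is the local machine)
    xhost +remotehost
    ssh remotehost "DISPLAY=mydesktop:0 firefox"

Either way, every menu, redraw, and font operation crosses the wire, which is where the latency pain comes from.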

What happened was that as processing power increased, X terminals became less viable and everyone got a computer on their desk.  This meant the link between X client and X server was effectively just a context switch away, so apps were developed to take advantage of that.  We got beautiful anti-aliased fonts, animated menus, and a host of other eye candy.

The need for remote access to applications didn’t go away.  This led to the development of much simpler remote protocols like VNC and RDP.  Very dumb, but sometimes that’s better.  These are essentially just pictures of a remote screen; unlike X, there is very little negotiation between client and server.

So, the net result is that bandwidth created an application (X11), then the application required more bandwidth than networks could provide, so it in effect drove bandwidth.  The core need (remote access) didn’t go away, so another whole class of applications came out (VNC, RDP).

Today, instead of X terminals, we use thin clients.  Most of these thin clients have Firefox or IE on them and can run Java and JavaScript.  In effect, I’ve gotten what I imagined years ago, just in a completely different form.  Instead of the X protocol, it’s now HTTP.  The communication between thin client and server can be a simple HTTP stream that only uses bandwidth to display as little as one page worth of data.  How the page looks is (partially) determined by the user agent (i.e., the browser).  This is good.  Cheap processing power is used to make it pretty.  The server just sends out chunks of data.  This does ultimately use less bandwidth.  Good or bad, that’s the way it is.  Next time, maybe we should have a discussion on the various inefficiencies of this model…
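
If you’ve never watched that conversation on the wire, curl will happily show it for a single page (the URL here is made up):

    # -v prints the request and response headers for one page fetch; the
    # whole exchange is one short request plus one page worth of reply
    curl -v http://thinclient-server.example.com/app/main.html

Compare that to the flood of round trips X11 needs to paint an equivalent screen.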

–bryan

Windows versus Linux

Oh, let’s beat this one to death…. not really.

I’m not going to talk about the technical merits of either OS, but about my observations of the differences between the two worlds.

In Windows, most people assume that good software requires paying for it. This means it’s very hard to find a simple utility that just does what you want, nothing more. Software vendors want to make their products more feature-laden so that they can sell the next version. Hey, it’s business, right? I don’t have a problem with this model, but it means living with a few pieces of software that each do a lot. Usually. But since the software was paid for, any technical problem can leave the user feeling let down. Customers are finicky.

In the Linux world (and here I really mean Open Source), you can get everything from complete, feature-laden software down to a three-line script that does exactly what you want. Furthermore, because you get to download it for free, your expectations are lower. For me, this means I’m often surprised at the quality of the software. Sometimes I can’t believe the level of effort contributed to develop Open Source applications.

Things get interesting when you cross the two worlds. People used to Windows tend to look down on ‘free’ software. People used to the Open Source world can’t believe how much you have to pay just to get a fairly simple utility. Furthermore, Open Source software written for Windows is at times difficult to use and of lower quality. By this I mean the few Windows-only projects out there, not the ports of Unix/Linux open source to Windows (cygwin, The Gimp, etc.).

The differences between the two worlds seem to permeate the respective cultures. For example, when I need support for an Open Source program, Google is my best friend: search for the error message from almost any program and you will probably find plenty of hits. For Open Source software, this turns up archived mailing list discussions or forum threads that give at least a quick understanding of exactly what is happening. For Windows software, it seems I often get links to various subscription services that won’t reveal the answer until I fork over the $$. This is not bad, just different.