Fedora Core 2

I have had Fedora Core 2 installed on my three computers for a couple of weeks now. Traditionally I have used a stock install for my gateway and laptop but have customized the desktop components of my main workstation. Typically this meant building the latest GNOME from source via Garnome. Fedora Core 2 is the first Linux installation I have had in a long while that I don’t have the urge to customize. Sure, I have installed some new stuff that is not part of the distribution, but the core desktop components are stock. Kudos to Fedora for putting together such a good distribution. Someday I’ll take the time to get familiar with Debian, but at the moment I don’t have much incentive to.

Non-open software

I finally broke down and installed some non-open software on my desktop. This is something I try hard not to do; I don’t even have the Macromedia Flash plugin installed. I installed RealPlayer so I could listen to CBC’s election coverage. Hopefully someday CBC will use Icecast so I don’t need it installed anymore.

2^x or 10^x?

While working on my Linux QoS project I once again ran into the confusion over what k, M, and G mean in computing.

Here is the problem. In normal usage k = 1000; the obvious example is that a kilometer is 1000 meters. In computing, however, the factor most often used is 1024. This means a kilobyte in your computer’s RAM is 1024 bytes == 2^10 bytes, and a megabyte is 1024 * 1024 == 1,048,576 bytes == 2^20 bytes. The nice network and hard drive people have decided to use powers of 10 instead of powers of two. This means your 100Mbit network is 100,000,000 bits/sec, not 100 * 1024 * 1024 == 104,857,600 bits/sec. Hard drives are also a fun example. A hard drive sold with a capacity of 40 GB can store 40,000,000,000 bytes, but your computer calculates file sizes with powers of two, so this is really 40,000,000,000 / 1024 / 1024 / 1024 == 37.25 GB to your computer.
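To make the gap concrete, here is a quick sketch of the arithmetic in Python (plain standard library, nothing distribution-specific):

    # Decimal (SI) versus binary readings of the same marketing numbers.
    network_decimal = 100 * 10**6   # 100 Mbit network: 100,000,000 bits/sec
    network_binary = 100 * 2**20    # what "100M" would be in powers of two

    drive_bytes = 40 * 10**9                # 40 GB as sold: 40,000,000,000 bytes
    drive_reported = drive_bytes / 2**30    # what the OS shows: ~37.25 "GB"

    print(network_decimal, network_binary)  # 100000000 104857600
    print(round(drive_reported, 2))         # 37.25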

The networking example is where the problem with Linux QoS came up. The Linux traffic control utilities use a multiplier of 1024 for kbit. My upload data rate is 640 kbit/s. When I specified 640kbit via the Linux QoS utilities I was actually specifying 640 * 1024 == 655,360 bits/sec, not the actual line rate, which is 640,000 bits/sec. Even more fun is calculating download rates from network interface speeds: the network is rated with 10^x, but the KB/s number you see on the screen is 2^x.
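A minimal sketch of the fix, assuming the traffic control utilities keep their 1024-based kbit: divide the true line rate by 1024 to get the value to ask for.

    # What tc does with "640kbit" versus the real line rate.
    tc_rate = 640 * 1024      # tc's reading of 640kbit: 655,360 bits/sec
    line_rate = 640 * 1000    # the actual upload rate: 640,000 bits/sec

    # To shape at the true line rate, divide by 1024 first:
    print(line_rate / 1024)   # 625.0 -> specify "625kbit" instead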

Wikipedia has a great article on this whole mess, which explains the new binary prefixes that have been introduced to remove this ambiguity. So from now on my computer has 512 MiB of RAM, the hard drive has 40 GB of capacity, and the network interface is 100 Mbit/sec.
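A small helper makes the new prefixes easy to play with; here is a sketch (the function name is my own):

    def binary_prefix(n):
        """Format a byte count with the IEC binary prefixes (KiB, MiB, GiB...)."""
        for prefix in ("", "Ki", "Mi", "Gi", "Ti"):
            if n < 1024:
                return "%.2f %sB" % (n, prefix)
            n /= 1024
        return "%.2f PiB" % n

    print(binary_prefix(512 * 2**20))   # 512.00 MiB of RAM
    print(binary_prefix(40 * 10**9))    # 37.25 GiB -- the "40 GB" drive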

WinXP SP2 Firewall

Nice little summary of WinXP SP2 over at OSNews. It seems MS has a new firewall in this release. What interests me is this screen shot. A simple UI for allowing incoming traffic to listening processes is a great idea. However, I doubt a pop-up window is the best way to handle this; I would guess a lot of users will just hit OK to make the pop-up window go away. An application that showed a list of listening programs and allowed the user to select which ones could receive non-local packets would probably be better. A couple of years ago I started working on something like this for Linux. Unfortunately, like a lot of the play projects I start, it didn’t get finished. At least I seem to have had a good idea.
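The core of that idea is easy to prototype on Linux. Here is a minimal sketch that lists listening TCP ports by parsing /proc/net/tcp (socket state 0A is LISTEN); mapping each port back to its owning process would take a further walk of /proc/*/fd, which I have left out:

    # List listening TCP ports from /proc/net/tcp.
    def listening_ports():
        ports = set()
        with open("/proc/net/tcp") as f:
            next(f)                      # skip the header line
            for line in f:
                fields = line.split()
                if fields[3] == "0A":    # state 0A == TCP_LISTEN
                    # local_address is hex "IP:port"; grab the port.
                    ports.add(int(fields[1].split(":")[1], 16))
        return sorted(ports)

    print(listening_ports())   # e.g. [22, 25, 631]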

Content syndication

Ever have one of those moments where you think “Why didn’t I do this before?” During my recent foray into the Weblog world I started to realize how useful the ability to syndicate content via RSS or Atom is.

A great example of this is the ‘planet’ phenomenon started by Jeff Waugh in the Free Software community. The various planet sites aggregate the blogs of Free Software contributors into one site where you can keep track of all of them; Planet GNOME is probably the best-known example.

Sites like these are an amazingly good way to keep up on the Free Software projects you are interested in.

Now back to the “Why didn’t I do this before?” question. I scan a lot of news sites every day, and it takes considerable time. Worse, since there is no notification of new entries, I continually poll the same sites many times throughout the day. This time killer may now be a thing of the past thanks to RSS readers. There are RSS readers for all platforms, but since I only use Linux and GNOME apps there are two that I have been playing with: Blam and Straw.

Both seem to be quite good. I like Blam’s UI a little more, but Straw seems to be more mature and handles the various RSS feeds better. It also supports the Atom syndication format mentioned above. Blam is written in C# and thus requires Mono; Straw is written in Python, which most Linux distributions have installed by default. Both use the GTK+ bindings for their respective languages, so they integrate well with the rest of my GNOME desktop.
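Under the hood these readers do something very simple: fetch the feed, parse the XML, and show the items. A sketch with nothing but the Python standard library (the feed URL is a placeholder; any RSS 2.0 feed should work):

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED = "http://example.com/index.rss"   # placeholder feed URL

    # Fetch the feed and print each item's title and link.
    with urllib.request.urlopen(FEED) as response:
        tree = ET.parse(response)

    for item in tree.findall(".//item"):
        print(item.findtext("title"), "-", item.findtext("link"))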

Downsides to using an RSS reader instead of visiting the site?

  • Not all sites have full RSS feeds. Slashdot and OSNews, for example, only give the first sentence or so of each news item. Fortunately, I was able to find a full RSS feed for Slashdot at Alterslash. From my limited experience it seems that the commercial sites are the ones that do not provide full RSS feeds. I guess this makes sense: they are not selling ad space when someone only views their content via RSS.
  • The presentation of the content is not the same. For the most part this doesn’t matter, since the content is more important anyway, especially in the case of blog content.
  • Neat extras like the Slashdot poll are missed.

Are the time savings worth the trade-offs? It’s experiment time.

BloGTK

It is very neat to be able to post to my Weblog without using a web browser. Check out BloGTK. The spell checking is a big bonus too. XML-RPC APIs like this may indeed be the future.
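For the curious, posting through one of these APIs takes only a few lines. Here is a sketch using the widely implemented MetaWeblog newPost call (the endpoint, blog id, and credentials are all placeholders; check which API your weblog software actually exposes):

    import xmlrpc.client

    # Endpoint, blog id, and credentials below are placeholders.
    server = xmlrpc.client.ServerProxy("http://example.com/xmlrpc")

    post = {
        "title": "Posted without a browser",
        "description": "Entry body written by a desktop client.",
    }

    # metaWeblog.newPost(blogid, username, password, struct, publish)
    post_id = server.metaWeblog.newPost("1", "user", "password", post, True)
    print("Created post", post_id)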

Network Distributed Computing

I just finished reading a sample chapter of The Scope of Network Distributed Computing, which I found at OSNews. I don’t want to get into the habit of posting links to articles that are linked from major news aggregation sites, but this one is particularly good. It does a great job of showing the relationships between some of the current Internet technologies; I found the discussion on meta-information particularly interesting.

Easy to what?

Is the term “Easy to use” in the computer user interface (UI) world overloaded? I am starting to think so. Before I go any further, take note that the Unix shell is a UI. Graphical UIs (GUIs) are generally considered to be the easiest way to use a computer. But are they? Here is a list of command shell steps to change the email address that gets root’s email on a Fedora Core 2 system (a scripted equivalent follows the list):

  • su to root.
  • cd /etc.
  • vim aliases.
  • Search for root: (/ starts a search in vim).
  • $ to go to the end of the line.
  • bdw to delete the last word on the line (this is the current email address or account name).
  • A to enter insert mode at the end of the line.
  • Type the new email address, then press ESC.
  • :wq to save the file and exit vim.
  • newaliases to update the aliases database.
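For the shell-fluent, the same change also scripts down to a few lines. A minimal sketch in Python (run as root; the address is a placeholder, and it assumes the stock one-line root: entry in /etc/aliases):

    import re
    import subprocess

    NEW_ADDRESS = "someone@example.com"   # placeholder

    with open("/etc/aliases") as f:
        text = f.read()

    # Replace whatever currently follows "root:" with the new address.
    text = re.sub(r"^root:.*$", "root:\t\t" + NEW_ADDRESS, text,
                  flags=re.MULTILINE)

    with open("/etc/aliases", "w") as f:
        f.write(text)

    subprocess.run(["newaliases"], check=True)   # rebuild the aliases database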

That list seems like a lot, but I timed myself and I can easily accomplish all of these steps in under 30 seconds. What could be easier? I doubt this task could be accomplished in under 30 seconds with a GUI. If you are not fluent in the Unix shell you are probably getting quite angry at me right now. “But I don’t know those commands,” you say. This is where the term “easy to use” breaks down. The average computer user is not looking for easy to use; they are looking for easy to discover. The normal computer user does not care if a task takes a little longer than the optimal way. All a normal computer user cares about is the ability to easily re-discover the steps necessary to accomplish the task the next time they need to do it. These users don’t want to learn the skills necessary to optimally control their computer. Instead of talking about computer UIs with the term “easy to use”, I think it’s time we start talking about “easy to do” and “easy to discover”.