I stumbled on a great RFC today. It’s worth the short read.
Website images that simply contain text make very bad links. An example of this can be found on the CBC’s website. Notice that the navigation elements on the left side of the page are images. There are several reasons why this is a bad idea.
The most obvious reason is that these images do not adjust their size with the font settings of the viewing browser. Modern monitors can be set to very high resolutions, which can make small images next to impossible to read. If these images had been normal text, the browser would render them in scale with the rest of the text. This is especially important to people with eyesight problems who set their browser font to be very large.
Secondly, the text in these images cannot be searched. For example, if you are viewing a site with hundreds of image-based links, it is not possible to use your browser’s search features to find links containing certain words. Of course, a real website is probably not going to have hundreds of image-based links, but the principle is the same, and it is closely related to my main reason for writing this article, which comes next.
All Mozilla-based web browsers (Mozilla Navigator, Epiphany, Firefox, etc.) allow the user to simply type the text of a link and the browser will highlight the link that matches. The user can then press Enter to follow that link. I encourage everyone to try this out; it is a real time saver. With this feature, well-designed websites can be navigated without reaching for the mouse. Offhand, I don’t know whether any non-Mozilla-based web browsers have a similar feature.
There are many good reasons to use images on a website. Replacing the browser’s text rendering is not one of them.
Ever have one of those moments where you think, “Why didn’t I do this before?” During my recent foray into the Weblog world I started to realize how useful the ability to syndicate content via RSS or Atom is.
A great example of this is the ‘planet’ phenomenon started by Jeff Waugh in the Free Software community. The various planet sites aggregate the blogs of Free Software contributors into one site where you can keep track of all of them. Here are a couple of examples:
Sites like these are an amazingly good way to keep up on the Free software projects you are interested in.
Now back to the “Why didn’t I do this before?” question. I scan a lot of news sites every day, and this takes considerable time. Worse, since there is no notification of new entries, I continually poll the same sites many times throughout the day. This time killer may now be a thing of the past thanks to RSS readers. There are RSS readers for all platforms, but since I only use Linux and GNOME apps, there are two that I have been playing with.
Both seem to be quite good. I like Blam’s UI a little more, but Straw seems to be more mature and handles the various RSS feeds better. It also supports the Atom syndication format mentioned above. Blam is written in C# and thus requires Mono. Straw is written in Python, which most Linux distributions install by default. Both use the GTK+ bindings for their respective languages, so they integrate well with the rest of my GNOME desktop.
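Under the hood, both readers do essentially the same thing: fetch an XML feed and pull the items out of it. Here is a minimal sketch of that parsing step in Python, using only the standard library; the feed content below is a made-up example, not any real site’s feed:

```python
import xml.etree.ElementTree as ET

# A hypothetical RSS 2.0 feed, standing in for what a reader would
# fetch over HTTP from a real site.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example News</title>
    <item>
      <title>First story</title>
      <link>http://example.com/1</link>
      <description>Only the first sentence or so...</description>
    </item>
    <item>
      <title>Second story</title>
      <link>http://example.com/2</link>
      <description>Full text of the second item.</description>
    </item>
  </channel>
</rss>"""

def parse_items(rss_text):
    """Return a (title, link, description) tuple for each <item> in the feed."""
    root = ET.fromstring(rss_text)
    return [
        (item.findtext("title"), item.findtext("link"), item.findtext("description"))
        for item in root.iter("item")
    ]

for title, link, _ in parse_items(SAMPLE_RSS):
    print(title, "->", link)
```

A real reader adds the interesting parts on top of this: polling on a timer, remembering which items you have already seen, and handling the several incompatible RSS dialects (plus Atom) in the wild, which is exactly where Straw’s maturity shows.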
Downsides to using an RSS reader instead of visiting the site?
- Not all sites have full RSS feeds. Slashdot and OSNews, for example, only give the first sentence or so of each news item. Fortunately, I was able to find a full RSS feed for Slashdot at Alterslash. From my limited experience, it seems that the commercial sites are the ones that do not provide full RSS feeds. I guess this makes sense: they are not selling ad space when someone only views their content via RSS.
- The presentation of the content is not the same. For the most part this doesn’t matter since the content is more important anyway. Especially in the case of Blog content.
- Neat extras like the Slashdot poll are missed.
Are the time savings worth the trade-offs? It’s experiment time.
It is very neat to be able to post to my Weblog without using a web browser. Check out BloGTK. The spell checking is a big bonus too. XML-RPC APIs like this may indeed be the future.
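Clients like BloGTK typically speak one of the standard blogging XML-RPC interfaces, such as the MetaWeblog API. Here is a rough sketch in Python of what such a post looks like on the wire; the endpoint URL, blog id, and credentials are hypothetical placeholders, and I have not verified exactly which API BloGTK itself uses:

```python
import xmlrpc.client

# A hypothetical post; MetaWeblog expects a struct with keys like
# "title" and "description".
post = {"title": "Hello from XML-RPC", "description": "Posted without a browser."}

# metaWeblog.newPost takes (blogid, username, password, struct, publish).
# These values are made-up placeholders.
params = ("0", "someuser", "secret", post, True)

# Build the XML-RPC request body without contacting any server.
request_xml = xmlrpc.client.dumps(params, methodname="metaWeblog.newPost")
print("Request body is", len(request_xml), "characters of XML")

# Against a live server, the equivalent call would look like:
#   server = xmlrpc.client.ServerProxy("http://example.com/xmlrpc")
#   post_id = server.metaWeblog.newPost("0", "someuser", "secret", post, True)
```

The appeal is that the same small request works from any client — a GTK app, a script, anything that can speak HTTP — which is why these APIs feel like the future.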
I just finished reading a sample chapter of The Scope of Network Distributed Computing which I found at OSNews. I don’t want to get into the habit of posting links to articles that are linked from major news aggregation sites but this one is particularly good. This article does a great job of showing the relationships between some of the current Internet technologies. I found the discussion on meta-information particularly interesting.
Well, I have spent way more time than I should have playing with the CSS to make this site look the way I want. Hopefully it works in IE too. If you want to see an amazing example of just how much CSS can change a website without any XHTML changes check this out.
It’s odd. With my life completely saturated with Internet technologies, both through ISP work and university, I should have taken time to investigate CSS before now. Wow, this is pretty darn cool.