Tag Archives: Computers

General computer stuff.

The Future of Computing

The Future of Computing: From mainframes to microblades, farewell to GHz CPUs provides a nice overview of trends in CPU and system design. I have a couple of comments to add.

When, in the late 1950s, computers became fast enough to relieve some of the coding burden from the shoulders of programmers, high-level languages such as Fortran, Algol and later C and Ada were developed. While sacrificing code efficiency big time, these high-level languages allowed us to write code faster and thus extract more productivity gains from computers.

As time passed we kept sacrificing software performance in favor of developer productivity gains, first by adopting object-oriented languages and more recently by settling for garbage-collected memory, runtime-interpreted languages and ‘managed’ execution. It is these “developer productivity” gains that kept the pressure on hardware developers to come up with faster and faster processors. So one may say that part of the reason why we ended up with gigahertz-fast CPUs was “dumb” (lazy, uneducated, expensive — pick your favorite epithet) developers.

Although true in some sense, the term developer productivity is a bit of a misnomer here. High(er) level tools and design methodologies do not just save developer time; they make modern software possible. I seriously doubt that creating a web browser, or any of the other huge pieces of software we use every day, in assembly language is a tractable problem. Even if the problem could be brute forced, the resulting software would likely have a far higher defect rate than current software.

In the long term it makes little sense to burden the CPU with DVD playback or SSL encryption. These and similar tasks should, and with time will, be handled completely by dedicated hardware that is going to be far more efficient (power and performance-wise) than the CPU.

This completely ignores one of the most important aspects of fast general-purpose CPUs: flexibility. For instance, a computer which relies on an MPEG decoder for video playback becomes useless when content is provided in another format. Continuing with this example, innovation in the area of video codecs would also become very difficult.

Despite the nitpicks, there is a lot of good information in the article.

Operating system design

The following article offers a nice introduction to some design techniques that may be used to create more reliable operating systems.

Nevertheless, it is interesting to note that microkernels, long discarded as unacceptable because of their lower performance compared with monolithic kernels, might be making a comeback due to their potentially higher reliability, which many people now regard as more important than performance. The wheel of reincarnation has turned.

Can We Make Operating Systems Reliable and Secure? by Andrew S. Tanenbaum

Vim tips for DOS text files

DOS (Windows) uses CR-LF to mark the end of lines in text files. Unix just uses LF. Wikipedia has a long article on these differences if you are interested.
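
You can see the difference in the raw bytes with od (the sample text here is arbitrary):

printf 'DOS line\r\n' | od -c (the line ends with \r \n)
printf 'Unix line\n' | od -c (the line ends with just \n)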

Viewed in older versions of vim, DOS text files had a ^M at the end of every line. This made it very easy to spot text files that had been uploaded via binary-mode FTP. It seems recent versions of vim auto-detect the text file type and no longer show the ^M by default.

Vim can be told not to try the DOS text file type with the ‘:set fileformats=unix’ command. If you set this option, DOS text files will show the familiar ^M at the end of each line.

The text file type can be changed to Unix for the current buffer (file being edited) with ‘:set fileformat=unix’. Opening a DOS text file, setting the type to Unix and then saving the file will convert it to a Unix text file.
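
As a concrete example, converting a single file (the file name is just a placeholder):

vim somefile.txt
:set fileformat=unix (switch the current buffer to Unix line endings)
:wq (write the converted file and quit)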

Copyright in the digital world

In Copyright vs. The Laws of Physics the author discusses copying in the digital world. In reality, every action on digital information involves copying. This is the fundamental reason why copy protection on computers is so hard. Lawrence Lessig touches on this in Free Culture too.

Digital files cannot be made uncopyable, any more than water can be made not wet.
— Bruce Schneier

UPSs and testing

Today, I decided it was time to test my UPSs out to make sure they were still functioning properly. Like any well designed product, UPSs just work. They fade into the background, which makes it easy to forget that they may need maintenance too.

Computers can be very sensitive to power conditions. The hardware expects the power to be within a certain tolerance; peaks or dips in the power can cause unexpected behavior. I don’t know how often bad power conditions result in crashes, but it can’t help the stability of your computer. There may also be problems with cutting the power to hard drives. During a clean shutdown a hard drive will spin down and park the head, which cannot happen if the power is suddenly cut off. A good UPS not only provides power during a brownout or blackout, it also does some amount of filtering to ensure a clean power source.

On the software side of things, modern operating systems use RAM to cache file system operations. This means the file you just told your word processor to save may not actually be written to the disk immediately. If the power were to drop at just the right moment, the file system can be left in an inconsistent state, resulting in lost data. For these reasons I view having a UPS on a computer as an absolute requirement.

The description of file system caching above suggests a problem with testing the run time of a UPS. If a complete power drop can result in a corrupted file system, then running the UPS to the point where it shuts down has the potential to be a bad thing. The solution to this problem on a Linux system is to mount the file systems read-only before running the test. If a file system is read-only the OS cannot be caching any writes (because they are not allowed), so power loss should be OK (there may still be hardware problems). This can be accomplished by switching to a console (CTRL-ALT-F1 if you are in X) and then running the following commands:

init 1 (drop to single-user mode so nothing else is writing to disk)
df (to see the mounted file systems)
umount -r FILESYSTEM (for each file system; -r remounts it read-only when it cannot be unmounted, which is what happens for the root file system)
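
To verify the remount actually worked before pulling the plug, the mount table should now list each file system with the ro flag:

cat /proc/mounts (look for ro in the options column)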

It should now be safe to run your UPSs until they cut the power. I do not know how to accomplish something similar on a Windows system, but I expect there is a way. If anyone does know how, please comment below.

It turns out my suspicions were warranted, as my UPSs clearly need battery replacements. The APC Office 280 that powers my gateway computer, DSL modem and Ethernet hub lasted only a couple of seconds after the power plug was pulled. I have my primary monitor (19″) attached to an APC Back-UPS 300, which lasted only 4 minutes 22 seconds. The only good news is that the APC Back-UPS Pro 280 that powers my workstation (not the monitor) lasted 12 minutes 24 seconds. Not stellar, but at least it would stay online during a short blackout.

Now the question becomes: should I replace the batteries or the whole UPSs? It looks like batteries are going to cost about $45 for each unit, while a new Back-UPS CS 350 is about $100. I would hope that the power noise filtering in a more modern UPS would be better, but I’m not sure that is worth double the cost. It looks like battery replacement is the way to go.

There are a couple of lessons in this adventure. First, if your UPS is more than a couple of years old, take the time to test it. It’s quite likely it is not functioning as well as you think it is. Second, battery technology still sucks.

Lightning and DSL

Reading Bob’s blog entry about lightning over here made me think about lightning and DSL modems. Bob’s quite right about computers most often getting lightning damage through the phone lines, not the power lines. The fact that most DSL services use external modems actually provides an extra layer of protection for your computer. For lightning to travel up the phone line and damage any of my computers it would have to go through the modem and my Ethernet switch. I’m not an electrical engineer, but I suspect this isn’t very likely. Of course, it’s still a good idea to unplug your DSL modem during a storm.

Non-open software

I finally broke down and installed some non-open software on my desktop. This is something I try hard not to do; I don’t even have the Macromedia Flash plugin installed. I installed RealPlayer so I could listen to CBC for the election coverage. Hopefully someday CBC will use Icecast so I don’t need this installed anymore.

2^x or 10^x?

While working on my Linux QoS project I once again ran into the confusion over what k, M and G mean in computing.

Here is the problem. In normal usage k = 1000; the obvious example is that a kilometer == 1000 meters. However, in computing the factor most often used is 1024. This means a kilobyte in your computer’s RAM is 1024 bytes == 2^10 bytes, and a megabyte of RAM is 1024 * 1024 == 1048576 bytes == 2^20 bytes. The nice network and hard drive people have decided to use powers of 10 instead of powers of two. This means your 100Mbit network is 100,000,000 bits/sec, not 100 * 1024 * 1024 == 104857600 bits/sec. Hard drives are also a fun example of this. A hard drive sold with a capacity of 40 GB can store 40,000,000,000 bytes, but your computer calculates file sizes with powers of two, so this is really 40,000,000,000 / 1024 / 1024 / 1024 == 37.25 GB to your computer.
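
If you want to double check these numbers, shell arithmetic (plus bc for the division) does the job:

echo $((1024 * 1024)) (1048576, the bytes in a megabyte of RAM)
echo $((100 * 1024 * 1024)) (104857600, what 100Mbit would be if it used powers of two)
echo '40000000000 / 1024 / 1024 / 1024' | bc -l (about 37.25, the 40 GB drive as the computer reports it)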

The networking example is where the problem with Linux QoS came up. The Linux traffic control utilities use a multiplier of 1024 for kbit. My upload data rate is 640kbit. When I specified 640kbit/sec via the Linux QoS utilities, I was actually specifying 640 * 1024 == 655360 bits/sec, not the actual line rate, which is 640,000 bits/sec. Even more fun is calculating download rates from network interface speeds: the network is rated with 10^x but the KB number you see on the screen is 2^x.
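
Since the utilities treat kbit as 1024 bits, one workaround is to do the conversion by hand: 640,000 bits/sec is exactly 625 * 1024, so asking for 625kbit comes out to the real line rate. A rough sketch (the device name and the tbf burst/latency values are just placeholders, not my actual setup):

echo $((625 * 1024)) (640000, the real line rate)
tc qdisc add dev eth0 root tbf rate 625kbit burst 10kb latency 70ms (shape uploads to the line rate)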

Wikipedia has a great article on this whole mess. The article explains the new binary prefixes that have been introduced to remove this ambiguity. So from now on my computer has 512MiB of RAM, the hard drive has 40GB of capacity and the network interface is 100Mbit/sec.