Tag Archives: Linux

x86_64 FC4 and OpenOffice

While attempting to compile some software on my x86_64 FC4 system I ran into a strange problem: the compile was trying to link against an i386 library. My first thought was, why are there i386 libraries on my x86_64 Linux installation at all? Well, it turns out that OpenOffice is not 64-bit clean, so in order to have OpenOffice on x86_64 FC4, every library OpenOffice depends on must also be present in i386 form. This leads to duplication, since the rest of the system wants the x86_64 versions. Of course this wastes a bit of disk space, but disks are cheap. What is more unfortunate is that loading the i386 version of OpenOffice requires a whole bunch of i386 libraries to be loaded into memory when their x86_64 equivalents are already loaded.

Lately, I have been using Gnumeric and AbiWord for my office application needs, so I do not require OpenOffice. Thus, removing OpenOffice and all other i386 packages from my system was the simple solution to my library linking problems.
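
On an RPM-based system like FC4, a query along these lines will list the installed i386 packages that are candidates for removal (standard rpm query tags; double-check the output before removing anything):

rpm -qa --queryformat '%{NAME}-%{VERSION}.%{ARCH}\n' | grep '\.i[36]86$'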

Gnumeric and AbiWord are available in the Extras repository; just run “yum install gnumeric abiword”.

FC4 and CD verification

For the last several versions the Fedora Core (and previously Red Hat) distribution has had the ability to verify that the downloaded CD images were successfully transferred to the newly burned discs. For people who download the images and create CDs themselves this is a fabulous feature; I am sure it has saved people from broken installations. However, as I discovered, it can also lead to a bit of pain.

Last week I downloaded all of the FC4 disc images and proceeded to burn them to CD. After rebooting to install from the new media I discovered that the CD verification was failing for three of the five discs. So, I burned them again. Same result. Having used the CD verification for many years I had no reason to doubt it. Eventually I gave up and asked Bob to burn me a copy. Strangely, these CDs failed the verification phase as well.

Realizing that something strange was going on I started googling for similar experiences. It turns out that the CD verification can fail on certain hardware. I had simply never run into this problem before because this was my first Fedora install on my new computer.

The solution is to boot the installation kernel with an option that tells it not to use DMA for IDE devices: at the GRUB prompt type “linux ide=nodma”. After doing this all discs passed their tests. There is one catch though: the Fedora installer is quite smart, and if you use a kernel option during installation it decides that the option must be required for normal operation too. After installation I had to remove “ide=nodma” from /etc/grub.conf.
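
For reference, the installer leaves the option on the kernel line of /etc/grub.conf, which ends up looking something like the line below (the kernel version and root device are placeholders); deleting ide=nodma from the end re-enables DMA:

kernel /vmlinuz-<version> ro root=LABEL=/ ide=nodma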

If the above wasn’t enough of an adventure, I also managed to cause myself some extra pain. When I asked for a copy of FC4 to be created for me I never specified which version. My new computer has an x86_64 processor; the FC4 installation discs I borrowed were for the i386 version. After a day or so of use I realized the mistake and reinstalled with the discs that first caused the problems.

LQL# HTB control

Now that LQL-Sharp has been released I thought I should put together a quick little demonstration of just how cool it is.

I have created an extremely simple GUI control that can modify the rate and ceiling parameters of an HTB class. This control should really subclass Gtk.Widget, but it serves its purpose as is.

[Screenshot: TC HTB Control]

using System;
using Gtk;
using LQL;
class HTBControl {
	private LQL.ClassHTB klass;
	private LQL.Con con;
	private Gtk.SpinButton rateSpin;
	private Gtk.SpinButton ceilSpin;
	public HTBControl(LQL.ClassHTB klass, LQL.Con con)
	{
		this.klass = klass;
		this.con = con;
		Gtk.Window myWin = new Gtk.Window("TC GTK+");
		myWin.DeleteEvent += new DeleteEventHandler(WindowDelete);
		Gtk.VBox vbox = new Gtk.VBox(false, 3);
		Gtk.HBox hbox1 = new Gtk.HBox(false, 2);
		hbox1.Add(new Gtk.Label("Rate (bytes/sec): "));
		this.rateSpin = new Gtk.SpinButton(0, 10000000, 1);
		hbox1.Add(this.rateSpin);
		vbox.Add(hbox1);
		Gtk.HBox hbox2 = new Gtk.HBox(false, 2);
		hbox2.Add(new Gtk.Label("Ceiling (bytes/sec): "));		
		this.ceilSpin = new Gtk.SpinButton(0, 10000000, 1);
		hbox2.Add(this.ceilSpin);
		vbox.Add(hbox2);
		Gtk.Button modifyButton = new Gtk.Button("Modify");
		modifyButton.Clicked += new EventHandler(Modify);
		vbox.Add(modifyButton);
		// Initialize the spin buttons with the class's current settings.
		rateSpin.Value = this.klass.Rate;
		ceilSpin.Value = this.klass.Ceiling;
		myWin.Add(vbox);
		myWin.ShowAll();
	}
	// Quit the GTK+ main loop when the window is closed.
	static void WindowDelete(object o, DeleteEventArgs args)
	{
		Gtk.Application.Quit();
		args.RetVal = true;
	}
	// Push the new rate and ceiling values down to the kernel via LQL.
	void Modify(object o, EventArgs args)
	{
		this.klass.Rate = (uint) this.rateSpin.Value;
		this.klass.Ceiling = (uint) this.ceilSpin.Value;
		this.klass.Modify(this.con);
	}
}
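
A small driver program ties the control to the kernel: it connects via LQL, walks the classes on eth0, and opens one control window for each HTB class it finds.
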
using System;
using Gtk;
using LQL;
class MainClass {
	public static void Main(string[] args)
	{
		Application.Init();
		LQL.Con con = new LQL.Con();
		LQL.Interface nIf = con.FindInterfaceByName("eth0");
		GLib.List classes = con.ListClasses(nIf);
		foreach (LQL.Class klass in classes) {
			if (klass is LQL.ClassHTB) {
				new HTBControl((LQL.ClassHTB) klass, con);
			}
		}
		Application.Run();
	}
}
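
Something like the following should compile and run the example, assuming Mono and Gtk# are installed. The source file names and the lql-sharp.dll assembly name are illustrative; adjust them to match your setup.

mcs -pkg:gtk-sharp -r:lql-sharp.dll -out:htb-demo.exe HTBControl.cs Main.cs
mono htb-demo.exe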

LQL#

Work has begun on the long-promised Mono (C#) bindings for LQL. This little C# program displays traffic statistics for all of the queueing disciplines supported by the C LQL library.

using System;
using LQL;
class MainClass {
    public static void Main(string[] args) {
        // LQL is GObject based, so initialize the GLib type system first.
        Gtk.Application.Init();
        LQL.Con con = new LQL.Con();
        GLib.List ifList = con.ListInterfaces();
        foreach (LQL.Interface netInf in ifList) {
            GLib.List qdiscList = con.ListQdiscs(netInf);
            foreach (LQL.QDisc qdisc in qdiscList) {
                // Pull fresh counters from the kernel, then print them.
                qdisc.UpdateStats(con);
                qdisc.PrintStats();
            }
        }
    }
}

Very exciting stuff!

LQL# is not nearly polished enough for a public release yet, but I am quite happy with how the work is progressing.

LQL 0.7.0 released

Another new version of LQL is available.

0.7.0 changes:

  • Add some new test programs to the tests directory.
  • Fixed a bunch of small bugs that were found with the new test programs.
  • Change the return type of a few _new() functions from GObject to the proper object type.
  • Update docs to match new API.
  • Add some background comments to the documentation.
  • Add support for the DSMark QDisc and corresponding documentation.
  • Add --with-kernel-source=PATH option to configure so alternate kernel include directories can be specified (see the example after this list).
  • Add support for the Netem QDisc (everything but distribution tables). This means you now need the headers from a kernel with Netem support in order to compile LQL.
  • Add support for the TCIndex classifier.
  • Put some more time into the classifiers. Still not complete.
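
For example, pointing the build at a kernel tree outside the default location might look like this (the path is only an illustration):

./configure --with-kernel-source=/usr/src/linux-2.6/include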

Linux on a 512 CPU system

This still amazes me. A single Linux kernel image can be used on a 512 CPU machine. See SGI’s work here and the related Slashdot story. Sure, you may have heard about Linux clusters with thousands of CPUs before; however, in most cases these were clusters of machines with 1-4 processors each. The scalability requirements on the kernel are very different when you are dealing with a few CPUs versus 512. You can find out more about the Altix line at SGI’s website.

Linux running the world’s fastest computer, wow!

What really fascinates me about this scalability work is the algorithms required. Making a system like this work efficiently isn’t about small optimizations; it’s about having algorithms that can scale well.

One technique currently used in the Linux kernel is RCU (read-copy update). Linux Journal has had a couple of good articles on RCU, which are available from their website.
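
To give a flavour of why RCU scales so well, here is a rough sketch of the classic pattern in kernel-style C (the struct and function names are made up for illustration): readers take no locks at all, while an updater publishes a new copy of the data and waits for pre-existing readers to drain before freeing the old copy.

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct config {
	int value;
};

static struct config *global_cfg;	/* shared, RCU-protected pointer */

/* Read side: no locks, no shared-memory writes, so it scales across CPUs. */
static int read_value(void)
{
	struct config *cfg;
	int val;

	rcu_read_lock();
	cfg = rcu_dereference(global_cfg);
	val = cfg->value;
	rcu_read_unlock();
	return val;
}

/*
 * Update side: publish a new copy, wait out pre-existing readers, then
 * free the old copy. Updaters must be serialized against each other
 * (e.g. by a spinlock), which is omitted here.
 */
static void update_value(struct config *new_cfg)
{
	struct config *old = global_cfg;

	rcu_assign_pointer(global_cfg, new_cfg);
	synchronize_rcu();
	kfree(old);
}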

LQL Update

The first release of the Linux QoS Library (LQL), which was on August 31st, has been well received; LQL 0.5.0 has been downloaded about 150 times. I have received a few very nice emails from people ecstatic about it, including one even before I had finished sending the release announcements. The first LQL patch arrived in my inbox yesterday, though I haven’t had time to look at it yet.

The resumption of classes has meant I have not had as much time to work on LQL as I would like. However, I have been making slow progress on some new features.

Currently, I am adding statistics support to the QDiscs. This new API will return all of the information in struct tc_stats. The current implementation of this requires a few new classes.

-+ LQLStats
----+ LQLStatsQDisc
------+ LQLStatsQDiscHTB
------+ LQLStatsQDiscSFQ
------+ etc

The LQLQDisc class is getting a new method called lql_qdisc_get_stats(), which each subclass will override to return its own LQLStatsQDisc object containing methods specialized for that QDisc. The expected usage is something like the following.

/* Fetch the statistics object for an HTB qdisc and print two counters. */
LQLStatsQDiscHTB *statsHTB = NULL;
statsHTB = lql_qdisc_htb_get_stats(LQL_QDISC(htb));
g_print("Bytes: %i\n", lql_stats_get_enqueued_bytes(LQL_STATS(statsHTB)));
g_print("Packets: %i\n", lql_stats_get_enqueued_packets(LQL_STATS(statsHTB)));

Once the QDisc statistics features are done I will begin on the classes.

Linux QoS Library (LQL) Released

It has finally happened. I have gotten a release of the Linux QoS Library (LQL) out the door.

Releasing software is actually a bit of a nerve-racking process. The worst part is not creating the announcement emails or filling out Freshmeat’s forms; the worst part is worrying about what has been forgotten.

  • Missing files in the distribution? Hopefully, make distcheck covers that.
  • Bad or broken API documentation, e.g. spelling errors.
  • Not enough testing – What if it doesn’t work on other systems?
  • Design flaws – It is Free Software after all. Everyone can see your mistakes.

A big part of me would have liked to spend an indefinite amount of time to get a ‘perfect’ release, something I was really 100% happy with. However, that is against the release early, release often strategy that Free Software uses to such great effect. Besides, I would probably never be 100% happy with the code base anyway. Perhaps the single most important reason for this release is to let others know that the project exists.

Announcement
The Linux QoS Library (LQL) provides a GPL licensed, GObject based C API to manipulate the network queueing disciplines, classes, and classifiers in the Linux kernel. LQL does not use the TC command as a back-end; instead, LQL communicates with the Linux kernel via Netlink sockets, the same way TC does.

0.5.0 — 2004-08-30

  • Initial public release.
  • I wanted to get 100% API doc coverage and a lot more testing done before I made a public release, but I decided to go with the release early, release often strategy.
  • 86% API documentation coverage. A lot of the undocumented API is for the U32 classifier implementation, which I am not that fond of. I think this API will change quite a bit.
  • What LQL really needs is much more testing in larger applications.
  • I make absolutely no promises that any of the API will be stable. I expect the API to change as larger programs are built with it and new limitations (and bugs) are found.

Please see http://www.coverfire.com/lql/ for more information.

Download:
http://www.coverfire.com/lql/download/lql-0.5.0.tar.gz

Secure remote backup

Every once in a while I see posts on mailing lists where people wonder about doing remote backups. I figured it may be worthwhile to describe how I have been doing my home workstation backups for the last few years. Hopefully, this will be useful to someone.

I consider a backup system that requires frequent manual attention pretty much useless. Mainly this is because it is hard to maintain proper backup discipline when busy or away. This is especially true of media based backups. Swapping tapes or CDs to make a backup is annoying enough that the backup probably won’t get done as often as it should. Networks allow the backup process to be automated by having each system back itself up regularly and automatically to another host on the network. However, making backups to another host in the same building doesn’t help much when the building burns down. If you have computer equipment at two secure locations with a large pipe between them, automatic off-site backups are pretty easy. Unfortunately, most individuals are not this lucky. However, with the proliferation of broadband it is quite possible that you know someone who has a big enough pipe that you could send backup data to them in off hours.

This remote computer may be owned by your best friend, but do you really want to make your backup data available to them? Even if you do trust this person, maybe they don’t look after their machine as well as you do; their computer could be cracked, etc. Clearly the remote system needs to be considered untrusted, so the data is going to have to be encrypted.

My backup script basically does:

  • Create a tarball of all of the data to backup.
  • bzip2 the file to make the transfer shorter.
  • Run GnuPG to encrypt the file to myself.
  • Use SCP to transfer the file to the remote system.

Thus, this requires that you have an OpenPGP key (via GnuPG) and SSH (SCP) access to the remote host. Transferring the file with some other, less secure method shouldn’t reduce the security of the system too much; the only problem would be if someone sniffed your authentication information and then deleted the files from the remote host. Since the files are encrypted, downloading them doesn’t do the bad guy any good.
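
The core of the flow looks something like this minimal sketch; every path, address, and key ID below is a placeholder, not a value from the real backup.sh:

#!/bin/sh
# Sketch of the backup flow: tar, bzip2, encrypt, copy off-site.
SSH_OPTIONS=""   # can point SCP at an un-passworded key; see below
BACKUP=/tmp/backup-$(date +%Y-%m-%d).tar.bz2

# 1-2. Tarball the data and bzip2 it to shorten the transfer.
tar -cf - --exclude-from=/etc/backup-exclude.daily /home/user | bzip2 > "$BACKUP"
# 3. Encrypt the file to myself so the remote host never sees plaintext.
gpg --encrypt --recipient you@example.com "$BACKUP"
# 4. Ship it to the (untrusted) remote system.
scp $SSH_OPTIONS "$BACKUP.gpg" friend@remote.example.com:backups/
rm -f "$BACKUP" "$BACKUP.gpg"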

This system is not suited to backing up your media library, mostly because of bandwidth limitations but also because incremental backups are not possible; the entire backup is sent every time.

Though the point of this entry was just to put the idea of doing backups this way out there for Google to index, I have made a copy of my backup.sh available. The script is quite simple but should provide a good starting point for anyone interested in taking the implementation further. This particular script is set up to do daily and weekly backups. It has two configuration options that specify plain text files containing lists of directories to exclude from the daily and weekly backups (see man tar for the exclude file format). What I do is exclude everything but frequently changing directories from the daily backup and only exclude media directories from the weekly one.

There is one obvious catch-22 with this system. Your GnuPG keys are stored in ~/.gnupg, and this directory is backed up and encrypted with those keys. If your computer is lost, the only copy of your data you have left is encrypted, and you now have no way to decrypt your backup. So, you need to keep a separate backup copy of your GnuPG keys somewhere else. Since you have a pass-phrase on your key (you had better, anyway) these files are already encrypted.

In order to make this backup system automatic (and hence useful) it needs to be able to transfer the backup file without user intervention. With SCP this can be accomplished by creating an un-passworded SSH key pair. This allows the host which holds the keys to log in to the remote host without a password, i.e. without user intervention. Then the SSH_OPTIONS variable in the script can be modified to point SCP at this key. Now you can set up the script as a cron job and have your backups done automatically every night. MD5 sums are used to verify the successful transfer of the backup, and the script will email you if the backup fails.
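
Creating such a key and pointing the script at it might look like the following (the file names and schedule are examples):

# Generate a key pair with an empty passphrase.
ssh-keygen -t rsa -N '' -f ~/.ssh/backup_key
# Append ~/.ssh/backup_key.pub to ~/.ssh/authorized_keys on the remote
# host, then point the script at the private key:
SSH_OPTIONS="-i $HOME/.ssh/backup_key"
# Crontab entry (crontab -e) to run the backup nightly at 03:30:
#   30 3 * * * /home/user/bin/backup.sh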

This script could be made a bit smarter so that it deletes old backups from the remote host, but it does not do that right now. You’ll have to log in to the remote host once in a while and delete old backups yourself; how often depends on how much space the remote host has available.