Author Archives: Dan Siemon

Linux QoS Library (LQL) Released

It has finally happened. I have gotten a release of the Linux QoS Library (LQL) out the door.

Releasing software is actually a bit of a nerve-racking process. The worst part is not writing the announcement emails or filling out Freshmeat's forms; the worst part is worrying about what has been forgotten.

  • Missing files in the distribution? Hopefully, make distcheck covers that.
  • Bad or broken API documentation, e.g. spelling errors.
  • Not enough testing – What if it doesn’t work on other systems?
  • Design flaws – It is Free Software after all. Everyone can see your mistakes.

A big part of me would have liked to spend an indefinite amount of time to get a ‘perfect’ release, something I was really 100% happy with. However, that is against the release early, release often strategy that Free Software uses to such great effect. Besides, I would probably never be 100% happy with the code base anyway. Perhaps the single most important reason for this release is to let others know that the project exists.

Announcement
The Linux QoS Library (LQL) provides a GPL-licensed, GObject-based C API to manipulate the network queueing disciplines, classes and classifiers in the Linux kernel. LQL does not use the tc command as a back-end. Instead, LQL communicates with the Linux kernel via Netlink sockets, the same way tc does.

0.5.0 — 2004-08-30

  • Initial public release.
  • I wanted to get 100% API doc coverage and a lot more testing done before making a public release, but I decided to go with the release early, release often strategy.
  • 86% API documentation coverage. A lot of the undocumented API is for the U32 classifier implementation which I am not that fond of. I think this API will change quite a bit.
  • What LQL really needs is much more testing in larger applications.
  • I make absolutely no promises that any of the API will be stable. I expect the API to change as larger programs are built with it and new limitations (and bugs) are found.

Please see http://www.coverfire.com/lql/ for more information.

Download:
http://www.coverfire.com/lql/download/lql-0.5.0.tar.gz

Fundamentalist

A while ago I was listening to an interview about the IMF's economic policies in Argentina. One of the people being interviewed (I don't remember the name, unfortunately) offered a definition of a fundamentalist that I thought was insightful. What follows is only a paraphrase, as it has been several weeks since I heard the interview.

Fundamentalist: A person who believes in a set of rules or ideas so strongly that even after powerful evidence that these rules are failing the person thinks that the only problem is that the rules are not being enforced strongly enough.

Bloom Filters

Slashdot recently posted an article about LOAF. LOAF is an interesting idea, and the site is worth looking at for its own sake. However, what I found really interesting was the tidbit of computer science behind LOAF called a Bloom filter.

A Bloom filter uses multiple hash functions and a large bit array to represent a set. The set cannot be reconstructed from the Bloom filter. There is a small possibility that the Bloom filter will return a false positive, but a false negative is impossible.
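
As a rough sketch of the idea (the sizes and the hash construction here are arbitrary choices for illustration, not anything LOAF specifies), a Bloom filter fits in a few lines:

```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter: k hash functions over an m-bit array."""

    def __init__(self, m=1024, k=3):
        self.m = m                  # number of bits in the array
        self.k = k                  # number of hash functions
        self.bits = [False] * m

    def _indexes(self, item):
        # Derive k indexes by salting a single hash function with i.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def __contains__(self, item):
        # All k bits set -> "probably in the set" (false positives possible).
        # Any bit clear -> definitely not in the set (no false negatives).
        return all(self.bits[idx] for idx in self._indexes(item))

bf = BloomFilter()
bf.add("alice@example.com")
print("alice@example.com" in bf)   # True -- an added item is always found
```

Note that `add` only ever sets bits, which is why a false negative is impossible: every bit belonging to an added item stays set forever.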

Here are some links with more information:

The original article on Bloom filters:

Burton H. Bloom, "Space/Time Trade-offs in Hash Coding with Allowable Errors."
Communications of the ACM, Volume 13, Issue 7 (July 1970), pages 422–426.
ISSN 0001-0782

This article is available in the ACM Digital Library.

Weekend

It was a good weekend. Saturday was Kevin's (Karen's brother) Buck & Doe. I had a really good time. I got to be the barbecue guy, which is always lots of fun at a big party. I wish I knew how many hamburgers we cooked. The only bad part of the day was the mild sunburn that resulted; you would think I would know better by now. After some post-party cleanup, Karen and I spent Sunday afternoon on a few-hour motorcycle ride with my parents. The basic route was Sebringville, London, Woodstock and back via Embro Road. Sometimes I miss having my own motorcycle.

Today I received my copy of Mono: A Developer's Notebook. I ordered it from Chapters in the middle of last week. I love buying books on the Internet. This book looks like a good quick-start guide to Mono and C#. Exactly what I need.

Secure remote backup

Every once in a while I see posts on mailing lists where people wonder about doing remote backups. I figured it might be worthwhile to describe how I have been doing my home workstation backups for the last few years. Hopefully, this will be useful to someone.

I consider a backup system that requires frequent manual attention pretty much useless, mainly because it is hard to maintain proper backup discipline when busy or away. This is especially true of media-based backups: swapping tapes or CDs is annoying enough that the backup probably won't get done as often as it should. Networks allow the backup process to be automated by having each system back itself up regularly and automatically to another host on the network. However, making backups to another host in the same building doesn't help much when the building burns down. If you have computer equipment at two secure locations with a large pipe between them, automatic off-site backups are pretty easy. Unfortunately, most individuals are not this lucky. However, with the proliferation of broadband it is quite possible that you know someone with a big enough pipe that you could send backup data to them during off hours.

This remote computer may be owned by your best friend, but do you really want to make your backup data available to them? Even if you do trust this person, maybe they don't look after their machine as well as you do; their computer could be cracked, and so on. Clearly the remote system needs to be considered untrusted, so the data is going to have to be encrypted.

My backup script basically does:

  • Create a tarball of all of the data to back up.
  • bzip2 the file to make the transfer shorter.
  • Run GnuPG to encrypt the file to myself.
  • Use scp to transfer the file to the remote system.
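
The four steps above can be sketched as follows. This is an illustration in Python rather than the actual backup.sh (which is plain shell); the directory list, key ID and remote host are placeholders you would replace with your own:

```python
import subprocess
from datetime import date

# Placeholders -- adjust for your own setup.
BACKUP_DIRS = ["/home/dan"]
GPG_KEY = "you@example.com"           # the key you encrypt "to yourself"
REMOTE = "friend.example.com:backups/"

def backup_commands():
    """Return the four backup steps as argv lists, in order."""
    archive = f"backup-{date.today().isoformat()}.tar"
    return [
        ["tar", "-cf", archive] + BACKUP_DIRS,            # 1. tarball
        ["bzip2", archive],                               # 2. compress
        ["gpg", "--encrypt", "--recipient", GPG_KEY,      # 3. encrypt
         f"{archive}.bz2"],
        ["scp", f"{archive}.bz2.gpg", REMOTE],            # 4. transfer
    ]

for cmd in backup_commands():
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run each step
```

Encrypting before the transfer is the important design point: by the time any bytes leave the machine, the remote host only ever sees ciphertext.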

Thus, this requires that you have an OpenPGP key (via GnuPG) and SSH (scp) access to the remote host. Transferring the file with some other, less secure method shouldn't reduce the security of the system too much. The only problem would be if someone sniffed your authentication information and then deleted the files from the remote host. Since the files are encrypted, downloading them doesn't do the bad guy any good.

This system is not suited to backing up your media library, mostly because of bandwidth limitations but also because incremental backups are not possible: the entire backup is sent every time.

Though the point of this entry was just to put the idea of doing backups this way out there for Google to index, I have made a copy of my backup.sh available. The script is quite simple but should provide a good starting point for anyone interested in taking the implementation further. This particular script is set up to do daily and weekly backups. It has two configuration options that specify plain-text files containing lists of directories to exclude from the daily and weekly backups (see man tar for the exclude-file format). What I do is exclude everything but frequently changing directories from the daily backup and only exclude media directories from the weekly one.

There is one obvious catch-22 with this system. Your GnuPG keys are stored in ~/.gnupg, and this directory is backed up and encrypted with those very keys. If your computer is lost, the only copy of your data you have left is encrypted, and you now have no way to decrypt your backup. So, you need to keep a separate backup copy of your GnuPG keys somewhere else. Since you have a pass-phrase on your key (you had better, anyway), these files are already encrypted.

In order to make this backup system automatic (and hence useful) it needs to be able to transfer the backup file without user intervention. With scp this can be accomplished by creating an SSH key pair without a passphrase. This allows the host that holds the keys to log in to the remote host without a password, i.e. without user intervention. Then the SSH_OPTIONS variable in the script can be modified to point scp at this key. Now you can set up the script as a cron job and get your backups done automatically every night. MD5 sums are used to verify the successful transfer of the backup. The script will also email you if the backup failed.
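
The MD5 verification step amounts to hashing the file locally, hashing it again on the remote side, and comparing the two. A minimal sketch of the local half (this is an illustrative helper, not code from backup.sh):

```python
import hashlib

def md5sum(path, chunk_size=65536):
    """Compute the MD5 of a file in chunks, without loading it all into memory."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

# After the transfer, compare this against the remote sum, e.g. the output of
#   ssh remotehost md5sum backup.tar.bz2.gpg
# and send the alert email if the two digests differ.
```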

This script could be made a bit smarter so it would delete old backups from the remote host; it does not do that right now. You'll have to log in to the remote host once in a while to delete old backups. How often depends on how much space the remote host has available.

Testing Meme Propagation In Blogspace: Add Your Blog!

This posting is a community experiment that tests how a meme, represented by this blog posting, spreads across blogspace, physical space and time. It will help to show how ideas travel across blogs in space and time and how blogs are connected. It may also help to show which blogs are most influential in the propagation of memes. The dataset from this experiment will be public, and can be located via Google (or Technorati) by doing a search for the GUID for this meme (below).

The original posting for this experiment is located at: Minding the Planet (Permalink: http://novaspivack.typepad.com/nova_spivacks_weblog/2004/08/a_sonar_ping_of.html) — results and commentary will appear there in the future.

Please join the test by adding your blog (see instructions, below) and inviting your friends to participate — the more the better. The data from this test will be public and open; others may use it to visualize and study the connectedness of blogspace and the propagation of memes across blogs.

The GUID for this experiment is: as098398298250swg9e98929872525389t9987898tq98wteqtgaq62010920352598gawst (this GUID enables anyone to easily search Google (or Technorati) for all blogs that participate in this experiment). Anyone is free to analyze the data of this experiment. Please publicize your analysis of the data, and/or any comments by adding comments onto the original post (see URL above). (Note: it would be interesting to see a geographic map or a temporal animation, as well as a social network map of the propagation of this meme.)

INSTRUCTIONS

To add your blog to this experiment, copy this entire posting to your blog, and then answer the questions below, substituting your own information, below, where appropriate. Other than answering the questions below, please do not alter the information, layout or format of this post in order to preserve the integrity of the data in this experiment (this will make it easier for searchers and automated bots to find and analyze the results later).

REQUIRED FIELDS (Note: Replace the answers below with your own answers)

(1) I found this experiment at URL: http://planet.gnome.org

(2) I found it via “Newsreader Software” or “Browsing the Web” or “Searching the Web” or “An E-Mail Message”: “Browsing the Web”

(3) I posted this experiment at URL: http://www.coverfire.com

(4) I posted this on date (day, month, year): 04/08/04

(5) I posted this at time (24 hour time): 08:55:00

(6) My posting location is (city, state, country): London, ON, CA

OPTIONAL SURVEY FIELDS (Replace the answers below with your own answers):

(7) My blog is hosted by: Self

(8) My age is: 26

(9) My gender is: Male

(10) My occupation is: Student

(11) I use the following RSS/Atom reader software: Blam

(12) I use the following software to post to my blog: WordPress

(13) I have been blogging since (day, month, year): 25/05/2004

(14) My web browser is: Epiphany

(15) My operating system is: Linux

The Future of Ideas

I have decided to dedicate some of my newfound leisure time to educating myself about intellectual property. Anyone who has read any of my other blog entries knows this is an area of interest to me. The goal is to have an informed opinion on what I think may be the most important debate of the 21st century.

As any other geek would, I am starting with Lawrence Lessig's books. I bought The Future of Ideas back in May but have not had time to read it until now. At the moment I am about halfway through. So far, it has been an enjoyable read. This book is surprisingly fair to the companies and individuals who wish for stronger intellectual-property controls, though the argument goes against them. Since I did most of this reading in the local Chapters/Starbucks yesterday, I picked up Lessig's latest book, Free Culture, while I was there.

Last night I wrote a little summary of the European history course I recently completed. In it I talked about the parallels I see between the Enlightenment and the current intellectual-property debate. It turns out I was about five pages away from a point in The Future of Ideas where Lessig briefly writes about the Enlightenment. The point Lessig makes is different from mine, but it is still strangely exciting to see someone as knowledgeable as Lessig see some of the same relationships I do.

The only problem with reading Lessig's books is that he is a known opponent of stronger intellectual-property laws and sees the world through the eyes of the Internet much as I do. As interesting as the reading has been, there has been nothing earth-shattering for me; it's basically a great presentation of ideas I already believe in. What I would like is to find some works that argue the opposite point of view. That would be the ultimate challenge to my opinions on this issue. At the present time I don't know of any such works; if anyone reading this does, please let me know.

History

I recently completed a summer-term history course titled Europe 1715 to Present. Wow, I actually thought I had a clue about European history before this course. Was I ever wrong. The amazing mess that was Europe in the 1800s brings the current problems in other areas of the world into perspective. I think it's a little too easy for people who grew up in 'modern' countries to think that our society has always been as stable and sane as it is now (or at least as stable as we think it is). This course showed me that this is certainly not true.

Also, I found the Enlightenment particularly interesting. The fact that ideas we now take for granted, such as rule by laws rather than rulers, the equality of all people and the concept of individual identity, date from only about 225 years ago (mostly 1750–1800) is amazing to me. These are just a few examples of Enlightenment ideas that are now central to our liberal (in the classical sense) societies.

The French Revolution of 1789 was the first major political event centered on Enlightenment ideas. The backlash against this revolution resulted in the European Congress system, which was designed to put the lid back on and restore Europe to its pre-1789 state. That it took until the end of World War I for the ideas of the Enlightenment to come to the forefront of European politics is very alarming.

I can't stop myself from seeing similarities between the Enlightenment and the current conversations about intellectual property. Many entrenched interests did their best to stop the ideas of the Enlightenment, but those ideas were so powerful that even a hundred years of crushing attempts could not make them go away. The Internet and other digital technologies have fundamentally changed our world; the law hasn't caught up to this fact yet. Companies and individuals who profit under the old system of scarcity and control are doing their best to make sure the law never does catch up. Sounds a lot like the last major intellectual revolution Western society went through. I am pretty confident how this will end. The question is: do we need another century of innocent people being jailed, or worse, before we see the end of the tunnel?

I need to be a little careful here. As my history professor said, despite popular belief, history does not actually repeat itself. That doesn't mean there aren't any lessons to be learned, though.