Tag Archives: Linux

Linux and proprietary (graphics) drivers

From New Linux look fuels old debate:

For Nvidia, intellectual property is a secondary issue. “It’s so hard to write a graphics driver that open-sourcing it would not help,” said Andrew Fear, Nvidia’s software product manager. In addition, customers aren’t asking for open-source drivers, he said.

The open-source community already maintains many drivers. Even if NVidia’s drivers are somehow better at present, I bet NVidia would be very surprised at how quickly the community would improve them. “It’s so hard to write a graphics driver that open-sourcing it would not help,” sounds like something people would have said about building a high-quality operating system like Linux 10 years ago.

Second, as an NVidia customer, I am asking for open-source drivers. I am sick of the driver dance that closed drivers force me to go through. I want my graphics driver to be packaged and updated as necessary by my distribution, just like the rest of my system. I want an open-source driver so that the Xorg developers can modify the driver to take advantage of new features and architectural changes. As the speed of development on Xorg increases (which appears to be the case in recent history), proprietary drivers are going to have more difficulty keeping pace.

The next graphics card I buy will have good open-source drivers, even if it is slower than the alternative with proprietary drivers. From the article linked above, it looks like that card may use an Intel graphics chip.

Note: If you don’t understand why the Linux kernel developers dislike the idea of closed-source drivers so much, you should read Linux in a binary world… a doomsday scenario by Arjan van de Ven (also linked to in the quoted article).

Linux Journal’s new editor

So my favourite magazine, Linux Journal, has a new editor: Nicholas Petreley.

I have been a Linux Journal subscriber for 8+ years and I proudly have every issue on my bookshelf. I even paid for a subscription for my favourite computer store to help them gain knowledge about Linux and FOSS.

It used to be that the final page of Linux Journal had good information: news from the community, legal advice, and so on. Now that Petreley has joined, the last page of my favourite magazine has uninformed rants that at best belong in a Slashdot comment on a KDE vs GNOME story.

I can only imagine what people new to the community will think when they pick up their first issue of Linux Journal and see that the writing style typified by Slashdot comments also makes it into the community’s print publication.

I will reserve judgement on the article content for a couple more issues, since the articles published so far were quite likely in the pipeline before Petreley got involved. However, I fully expect Petreley’s biases to bleed into the rest of the magazine.

On the plus side, the new larger, more graphical layout is quite visually appealing. To whatever extent Petreley was involved in the graphic design changes, I compliment him and the rest of the Linux Journal team. Too bad the new layout does not make up for the loss in editorial quality.

The modernization of X

For those who don’t know, there is a lot of good work happening on X these days. Especially interesting are Xgl, AIGLX and the composite extension. Since Xgl and AIGLX are two different ways to bring GL-accelerated effects to the standard Linux desktop, there has been much arguing over which is the better approach.

NVidia appears to believe that the AIGLX approach is the better long-term solution, but there is no denying that the combination of Xgl and compiz produces better results at present.

Despite reading extensively on both of these projects, I don’t know enough about deep graphics issues to really make a good decision as to which is better. I’ll leave that to the X people. For now, I’m just really happy to see these features coming to my Linux desktop soon!
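As an aside, you don’t need Xgl or AIGLX to play with the composite extension itself. As I understand it (I am going from memory here, so check your distribution’s documentation), recent X.org releases let you turn it on with a small xorg.conf addition:

Section "Extensions"
    Option "Composite" "Enable"
EndSection

With that in place, a compositing manager like xcompmgr can already provide basic effects such as drop shadows and translucency.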

Check out this video from Novell to see just how cool this stuff is.

Xgl demo (58MB, XVid).

Net channels: Where is the end in end-to-end?

The key design feature of the Internet is the end-to-end principle. In short, the end-to-end principle says that as much work as possible should be done at the ends of the network. This results in a very simple network core. The simplicity of the core allows it to scale. See World of Ends for more implications of the end-to-end principle.

If you ask most network people exactly where the “end” is, they will probably say it is the device at the edge of the network. Some may even go so far as to say it is the operating system on the edge device. At present, this is indeed the case. For example, the processing necessary to make TCP a reliable protocol happens within the operating system.

At LCA 2006, Van Jacobson weighed in on the network protocol processing overhead that is becoming a big problem as link data rates increase. Current operating systems are having a hard time keeping up with 10 gigabit links, especially when using TCP. In his presentation, Van Jacobson argues that the placement of the TCP stack in the operating system kernel is a historical accident: the stack was put in the kernel to ensure that Multics did not page it out. Further, TCP in the kernel violates the end-to-end principle because the kernel is not the end; the application is. Van Jacobson offers Net channels as a possible solution to this problem. Net channels provide a simple, cache-friendly way to manage network packets within a system.

The presentation discusses several ways that Net channels can improve TCP performance. The first is to use Net channels between the NIC and the current in-kernel TCP stack. The more interesting use of Net channels is to push all TCP processing into userspace. Essentially, each application would have its own TCP stack. This removes the bottleneck that the single, system-wide TCP stack creates. Amazingly, Van Jacobson presents statistics showing that this modification drops TCP processing overhead by 80%. Other benefits would include a simpler kernel and the ability to have a TCP stack tuned for each application. Applying TCP bug fixes and adding new features would also become easier with TCP moved outside of the kernel.
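To make the idea a bit more concrete, here is my own rough C sketch (not Van Jacobson’s actual code, and heavily simplified from his slides) of what a net channel boils down to: a lock-free, single-producer/single-consumer ring buffer whose producer and consumer indices live on separate cache lines, so the two ends never fight over the same line.

#include <stdint.h>
#include <stddef.h>

#define CHAN_SLOTS 256          /* power of two so index masking works */

struct net_channel {
        /* Producer index, written only by the packet source (e.g. the
           NIC driver). Aligned so it sits on its own cache line. */
        volatile uint32_t head __attribute__((aligned(64)));
        /* Consumer index, written only by the packet sink (e.g. the
           application's userspace TCP stack). */
        volatile uint32_t tail __attribute__((aligned(64)));
        void *slot[CHAN_SLOTS]; /* pointers to packet buffers */
};

/* Producer side: enqueue a packet; returns 0 on success, -1 if full. */
static int chan_put(struct net_channel *ch, void *pkt)
{
        uint32_t head = ch->head;
        if (head - ch->tail == CHAN_SLOTS)
                return -1;                    /* channel is full */
        ch->slot[head & (CHAN_SLOTS - 1)] = pkt;
        __sync_synchronize();                 /* publish slot before index */
        ch->head = head + 1;
        return 0;
}

/* Consumer side: dequeue a packet, or NULL if the channel is empty. */
static void *chan_get(struct net_channel *ch)
{
        uint32_t tail = ch->tail;
        if (tail == ch->head)
                return NULL;                  /* channel is empty */
        void *pkt = ch->slot[tail & (CHAN_SLOTS - 1)];
        __sync_synchronize();                 /* finish the read before release */
        ch->tail = tail + 1;
        return pkt;
}

Since each side only ever writes its own index, no locks are needed; that is where the cache friendliness and much of the performance win come from.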

For more information on this really amazing idea see the following resources.

Bash fork() bomb

Today, I stumbled onto the following nasty bit of shell code in SECURITY Limit User Processes over on the Gentoo Wiki. No, I haven’t switched to Gentoo.

:(){ :|:& };:

Warning: this will cause your shell to create processes as fast as it can, most likely grinding your computer to a halt if you don’t have the appropriate limits set.

After spending some time trying to figure out what this command was doing (I had assumed the colon was functioning as a no-op), I did a quick Google search and found this nice explanation of what it actually does. So, today I learned that Bash allows you to define functions that override built-in commands.
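For my own future reference, here is the same one-liner with the function given a readable name (my own deobfuscation; do not run this version either!):

bomb() {
    bomb | bomb &    # the function pipes itself into itself, in the background
}
bomb                 # ...and then we set it off

Each call spawns two more copies, so the number of processes doubles until the machine runs out of resources (or hits the limits discussed in the wiki article).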

Software as speech

Well, my sense of software is that it’s something that is both speech and a device, depending on how you define it. When you talk about software as speech, many good things tend to flow from that. When you use software as a device you can get into great benefits and also fairly scary issues.

— Don Marti

The above was taken from the November 2005 issue of Linux Journal in an article titled “Dialogue with Don”. This article is definitely worth reading if you have access to it or can wait for it to become freely available.