
Interview with Van Jacobson

Excerpts from the EETimes interview “TCP/IP pioneer’s past is prologue.”

EET: And though packets declared victory over circuits, there seems to be renewed interest in giving IP as many circuit-like characteristics as possible.

Jacobson: I hope that the circuit obsession is transitional. Anytime you try to apply scheduling to a problem to give latency strict bounds, the advantages are not worth the cost of implementation. Strict guarantees gain you at best 100 microseconds in networks where the intrinsic jitter in the thermal conditions of the planet is 300 microseconds.

EET: So all the late-1990s studies of QoS involved people speaking different languages, coming from different perspectives.

Jacobson: QoS has been an area of immense frustration for me. We’re suffering death by 10,000 theses. It seems to be a requirement of thesis committees that a proposal must be sufficiently complicated for a paper to be accepted. Look at Infocom, look at IEEE papers; it seems as though there are 100,000 complex solutions to simple priority-based QoS problems.

The result is a vastly degraded signal-to-noise ratio. The working assumption is that QoS must be hard, or there wouldn’t be 50,000 papers on the subject. The telephony journals assume this as a starting point, while the IP folks feel that progress in QoS comes from going out and doing something.

Convergence (Saving the Net)

Saving the Net and network neutrality in general have become big topics lately. I have made several posts on the topic over the last few months (1, 2, 3). See Michael Geist’s The Search for Neutrality for a bit of Canadian perspective.

With the above in mind, it was with great interest that I read this month’s installment of Geoff Huston’s The ISP Column, entitled Convergence?. I have copied a couple of choice quotes below; there is lots more good information in the article. Last month’s column, IPv6 – Extinction, Evolution or Revolution?, also offers some interesting perspectives on the future of IP service providers.

One emerging body of opinion is that the issue here is not finding the right layer for virtualization of the network, nor is it an exercise in finding just the right form of value to add to these networks, but in recognising the futility in such exercises in the first place.

By all accounts, peer-to-peer file sharing has taken over the Internet, with estimates of between 45% and 70% of total Internet traffic volume being attributable to music and video sharing. This has turned the Internet into one of the more prodigious music and video distribution systems ever conceived. This shift in user behaviour has significant implications for the entertainment industry’s chosen distribution methods, and it is likely that the industry will ultimately come to terms with peer sharing technologies such as BitTorrent. The losers in all this are likely to be real-time video delivery systems, so one reasonable conclusion is that the time for real-time content delivery, or Triple Play, is over; BitTorrent has won over the user!

RFC 3028

RFC 3028 – Sieve: A Mail Filtering Language

This document describes a language for filtering e-mail messages at time of final delivery. It is designed to be implementable on either a mail client or mail server. It is meant to be extensible, simple, and independent of access protocol, mail architecture, and operating system. It is suitable for running on a mail server where users may not be allowed to execute arbitrary programs, such as on black box Internet Message Access Protocol (IMAP) servers, as it has no variables, loops, or ability to shell out to external programs.
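The abstract is a good summary, but an example gives a better feel for the language. Here is a small script of my own, a sketch following the grammar and examples in RFC 3028; the address and folder names are placeholders, not anything from the RFC.

```sieve
require "fileinto";

# File mailing-list traffic into its own folder.
if header :contains ["to", "cc"] "sieve-announce@example.com" {
    fileinto "lists.sieve";
}
# Drop obvious junk; everything else stays in the inbox.
elsif header :contains "subject" "make money fast" {
    discard;
} else {
    keep;
}
```

Note how the whole language is just tests and actions: with no variables or loops, a script is guaranteed to terminate, which is exactly what makes it safe to run on a server that cannot let users execute arbitrary programs.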

Net channels: Where is the end in end-to-end?

The key design feature of the Internet is the end-to-end principle. In short, the end-to-end principle says that as much work as possible should be done at the ends of the network. This results in a very simple network core. The simplicity of the core allows it to scale. See World of Ends for more implications of the end-to-end principle.

If you ask most network people exactly where the “end” is, they will probably say it is the device at the edge of the network. Some may even go so far as to say it is the operating system on the edge device. At present this is indeed the case. For example, the processing necessary to make TCP a reliable protocol happens within the operating system.

At LCA 2006, Van Jacobson weighed in on the network protocol processing overhead that is becoming a big problem as link data rates increase. Current operating systems have a hard time keeping up with 10-gigabit links, especially when using TCP. In his presentation, Van Jacobson argues that the placement of the TCP stack in the operating system kernel is a historical accident: the design was chosen to ensure that Multics did not page out the TCP stack. Further, TCP in the kernel violates the end-to-end principle, because the kernel is not the end; the application is. Van Jacobson offers Net channels as a possible solution to this problem. Net channels provide a simple, cache-friendly way to manage network packets within a system.
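At its core, a net channel behaves like a lock-free, single-producer/single-consumer ring buffer. Here is a minimal C sketch of that idea; this is my own reconstruction for illustration, not Van Jacobson’s code, and all names and sizes are invented for the example.

```c
#include <stdatomic.h>
#include <stddef.h>

#define CHAN_SLOTS 256          /* power of two, so index math is a mask */

/*
 * A channel has exactly one producer (say, the NIC driver) and one
 * consumer (say, the protocol code at the receiving end). The head and
 * tail indices live on separate cache lines so the producer and
 * consumer CPUs never contend for the same line.
 */
struct net_channel {
    _Alignas(64) _Atomic size_t head;   /* written only by the producer */
    _Alignas(64) _Atomic size_t tail;   /* written only by the consumer */
    void *slot[CHAN_SLOTS];             /* pointers to packet buffers   */
};

/* Producer side: enqueue one packet. Returns 0 on success, -1 if full. */
static int chan_put(struct net_channel *c, void *pkt)
{
    size_t head = atomic_load_explicit(&c->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&c->tail, memory_order_acquire);

    if (head - tail == CHAN_SLOTS)
        return -1;                      /* full: caller drops or waits */
    c->slot[head & (CHAN_SLOTS - 1)] = pkt;
    atomic_store_explicit(&c->head, head + 1, memory_order_release);
    return 0;
}

/* Consumer side: dequeue one packet, or NULL if the channel is empty. */
static void *chan_get(struct net_channel *c)
{
    size_t tail = atomic_load_explicit(&c->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&c->head, memory_order_acquire);

    if (tail == head)
        return NULL;                    /* empty */
    void *pkt = c->slot[tail & (CHAN_SLOTS - 1)];
    atomic_store_explicit(&c->tail, tail + 1, memory_order_release);
    return pkt;
}
```

The single-producer/single-consumer restriction is what makes this cache friendly: each index is written by exactly one side, so no locks or read-modify-write atomics are needed and cache lines do not bounce between CPUs.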

The presentation discusses several ways that Net channels can improve TCP performance. The first is to use a Net channel between the NIC and the current in-kernel TCP stack. The more interesting use is to push all TCP processing into userspace, so that each application has its own TCP stack. This removes the bottleneck that the single, system-wide TCP stack creates. Amazingly, Van Jacobson presents statistics showing that this change cuts TCP processing overhead by 80%. Other benefits include a simpler kernel and a TCP stack that can be tuned for each application. Applying TCP bug fixes and adding new features would also become easier with TCP moved outside of the kernel.
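To make the userspace direction concrete, here is a hypothetical receive loop building on the channel sketch above; struct tcp_stack and tcp_input() are invented stand-ins for a per-application TCP implementation, not anything from the talk.

```c
/* Hypothetical per-application receive loop: the kernel or NIC
 * produces raw packets into the channel, and the application's own
 * userspace TCP stack consumes them. No system-wide TCP state is
 * shared, so there is no cross-CPU locking on the fast path. */
struct tcp_stack;                             /* app-private TCP state */
void tcp_input(struct tcp_stack *stk, void *pkt);

void app_rx_loop(struct net_channel *c, struct tcp_stack *stk)
{
    for (;;) {
        void *pkt = chan_get(c);        /* consumer end of the channel */
        if (pkt == NULL)
            continue;                   /* a real system would block/poll */
        tcp_input(stk, pkt);            /* TCP processing at the true
                                           "end": the application */
    }
}
```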

For more information on this really amazing idea, see the following resources.

Saving the Net

I finally got around to reading Doc Searls’s long essay entitled Saving the Net: How to Keep the Carriers from Flushing the Net Down the Tubes, which is hosted by Linux Journal. You can also find a link to Saving the Net on Searls’s blog, which includes links to interesting background reading. Saving the Net is basically a response to a BusinessWeek interview with SBC CEO Edward Whitacre. When asked about Google, Vonage, and other Internet companies, Whitacre says:

How do you think they’re going to get to customers? Through a broadband pipe. Cable companies have them. We have them. Now what they would like to do is use my pipes free, but I ain’t going to let them do that because we have spent this capital and we have to have a return on it. So there’s going to have to be some mechanism for these people who use these pipes to pay for the portion they’re using. Why should they be allowed to use my pipes?

The Internet can’t be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo! or Vonage or anybody to expect to use these pipes [for] free is nuts!

Of course, SBC and other telecommunications companies are already being paid for their pipes. That is what their subscribers pay for: access to the Internet. The Internet includes all of these information resources and services, like Google, Vonage, etc. What Whitacre fails to understand is that without these companies there would be no demand for his pipes. Google and other Internet companies are driving the growth in high-speed subscribers, not the other way around.

There are a couple of other interesting ideas in Saving the Net that I would like to discuss.

In the essay, Searls quotes from one of his earlier works, World of Ends.

Adding value to the Internet lowers its value.

Sounds screwy, but it’s true. If you optimize a network for one type of application, you de-optimize it for others. For example, if you let the network give priority to voice or video data on the grounds that they need to arrive faster, you are telling other applications that they will have to wait. And as soon as you do that, you have turned the Net from something simple for everybody into something complicated for just one purpose. It isn’t the Internet anymore.

This idea is very counterintuitive to most people. I think one of my favorite quotes helps to illuminate it.

Perfection is reached, not when there is no longer anything to add, but when there is no longer anything to take away.

— Antoine de Saint-Exupéry

If only more software developers would heed that message.

Another interesting topic touched on in Saving the Net is how linguistics can frame an argument. For example, consider the term intellectual property. Every good capitalist has some understanding of how the private property system serves society, so the protection of property is understood by most people to be an absolute necessity. It should be no surprise, then, that people often feel strongly that intellectual property deserves the same protection; it is property, after all, the name says so.

Unfortunately, analogies between intellectual property and physical property are strained at best. A farmer who owns a piece of land does not hold power over all of society, because any other farmer can grow the food we need if the first farmer chooses not to. Now contrast the situation of the farmer with that of a company that holds a patent. For a period of twenty years, the patent-holding company is the only entity that has the right to use the invention covered by that patent. The power over society that comes with a patent dwarfs the power that any private property owner has, and this aspect alone makes analogies between real property and intellectual property flawed.

Unfortunately, aspects of intellectual property like the one described above are hardly ever discussed. This is partly because, by choosing to use the word property, the parameters of the discussion have already been defined.