Network neutrality: Where analogies fail

I find it interesting that so much of the discussion surrounding net neutrality centers on analogies to other aspects of the modern world. Many of these analogies involve the transportation of goods; courier companies such as UPS and FedEx, and the highway network in general, are the most common examples. In one of the first articles on net neutrality, Saving the Net, Doc Searls argues that the transport analogy is a major impediment to the pro-neutrality side and offers a competing analogy. This post is not about which analogy is better; it is about the problems that arise when any analogy is used to discuss a complex topic.

It is easy to understand why people use analogies to discuss complex topics like net neutrality. By allowing knowledge and understanding from one area to be applied to something new, analogies are essentially a way of simplifying the world. As with any simplification, some detail is always lost.

Analogy is a poor tool for reasoning, but a good analogy can be very effective in developing intuition.
— Barbara Simons and Jim Horning
(Communications of the ACM, Sept 2005, Inside Risks)

The very fact that analogies apply old information to new situations should give us pause in using analogy as a reasoning tool.

To see an example of this problem one only needs to examine what is perhaps the most common analogy used by the anti-neutrality folks. The analogy in question relates to the fact that UPS and other courier companies offer high priority service (overnight) as well as normal service without the negative consequences the pro-neutrality crowd fears.

In order for a courier company to begin to offer overnight package delivery, the company must add new capacity to its delivery operations. For example, a company that ships packages by truck will need to add aircraft to its operations to support cross-continent overnight delivery. Once these aircraft are in place it does not make economic sense to fly them lightly loaded. Instead, the courier company will begin to fill the remainder of the space in the planes with lower priority packages. This has the benefit of reducing the courier’s costs by reducing the number of trucks that are necessary. There is also another unintentional benefit. Although some customers have not paid for overnight delivery, the additional high speed capacity greatly increases the chance they will get that level of service anyway. As the volume of high priority packages grows, the courier’s overall operations must also grow in high priority capacity.

Compare the above situation to packet prioritization on the Internet. Unlike the courier company example, adding a high priority service does not require that the bandwidth provider add new capacity to its operations. There is no way to make light go faster. Packet prioritization simply gives the marked packets first crack at the existing capacity. Assuming the network is properly provisioned (not heavily loaded) the difference in service quality between high and low priority packets is very low, probably unnoticeable.
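To make the mechanism concrete, here is a minimal sketch of strict-priority queueing (all names are illustrative; this is a toy model, not any vendor's implementation). Marked packets simply jump to the front of the line; the link itself transmits no faster:

```python
from collections import deque

class PriorityLink:
    """Toy model of a link that applies strict packet prioritization.

    Marked ("high") packets get first crack at the existing capacity;
    nothing here adds capacity or makes the link itself any faster.
    """

    def __init__(self):
        self.high = deque()  # prioritized packets
        self.low = deque()   # best-effort packets

    def enqueue(self, packet, prioritized=False):
        (self.high if prioritized else self.low).append(packet)

    def transmit(self):
        """Send the next packet: the high-priority queue always drains first."""
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None  # nothing waiting

link = PriorityLink()
link.enqueue("web-1")
link.enqueue("voip-1", prioritized=True)
link.enqueue("web-2")
# The prioritized packet jumps the queue; the link speed is unchanged.
order = [link.transmit() for _ in range(3)]  # → ["voip-1", "web-1", "web-2"]
```

Note that when the queues are nearly always empty (a properly provisioned network), prioritization changes almost nothing, which is exactly the point made above.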

There is also the issue of reverse economic incentives. In order for customers who are paying for high priority service to notice an improvement the network must be congested. This creates the strange situation where allowing the network to become congested (not upgrading) could result in more customers paying for high priority service and thus increasing the bandwidth provider’s profits.

[Before anyone complains, I realize there are other aspects to network QoS such as number of hops in a path etc. I am not attempting to explain all aspects of network operations.]

On the surface, the analogy between high priority package shipment and high priority packet delivery seems like a good one. Upon closer examination, simple physical limitations show these two worlds to have very different operational characteristics and completely opposite unintentional side effects.

The point of this post is not to argue about the exact details of packet forwarding or courier company operations. The point is that centering the discussion about complex topics like network neutrality around analogies to other systems is foolish. The lost detail results in uninformed decisions.

16 thoughts on “Network neutrality: Where analogies fail”

  1. Interesting post. Thanks for that.

    Some thoughts.

    First, all analogies are, literally, wrong. Or they wouldn’t be analogies.

    Same goes for all metaphors and all similes.

    Here’s the problem: We understand everything metaphorically. That’s what cognitive linguistics teaches, and once you look closely at it, the fact is hard to deny. Try talking about life without borrowing the language of travel. Birth is “arrival”. Death is “departure”. Choices are “crossroads”. Careers are “paths”. We get “stuck in a rut” or “fall off the wagon”. Is birth really “arrival”, though? Is life really a journey? No, but we can’t help understanding everything metaphorically.

    In fact, likeness doesn’t just help understanding. It completely underpins understanding. Yet it embodies an irony. We understand A in terms of B precisely because B is different than (yet in some way similar to) A.

    Metaphor is even fundamental to mathematics. See George Lakoff’s Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being — or just check out this piece here: http://www.rattlesnake.com/notions/math-metaphor.html .

    Analogy then is in fact a *good* tool for understanding. But it’s not the only one, as you point out. It’s always important to uncover details analogies miss. And details will always be missed. All understanding is incomplete. Metaphorical conceptualization and reasoning is why. Yet we have no other choice.

    All arguments are tendentious by nature. We can’t avoid it. But we can point out all the details that get lost with every analogy, every metaphor, every simile. And thanks for doing that here.

    If you revisit what I wrote in Saving the Net, you’ll see that I explore several different metaphorical framings for the same Net that both sides of the Net Neutrality debate say they are trying to save. Each framing tends toward a different regulatory regime.

    So, to sum up, I believe you are right in the case you make for informing decisions on complex matters like this one. But I also believe your case against analogies overlooks their importance (and those of metaphor as well) to both reasoning and understanding.

    Thanks again,

    Doc

  2. You are absolutely correct.

    The era of analogs is coming to an end.

    Vinyl analogs of sounds have been replaced by digital expressions of sound with degrees of resolution or fidelity (mp3’s).

    Film with its analogous shadings of light has been replaced with digital expressions of light.

    So, in the realm of public policy, analogies for phenomena must be replaced by digitalies… it’s only a matter of time and a little training for the uninitiated. Actually, digitalies will be a by-product of the singularity… when systems take over more control of reality and humans are deprecated.

    Won’t that be fun. But it will get the inaccuracies out of policy making and make the world safer for systems.

    Think digitally, and if you can’t, get out of the way.

    Make way for better algorithms. Stomp out analogy… it’s a by-product of the fuzzy thinking of humans.

  3. A better analogy: In California there was an outrage when it was disclosed that electricity companies had deliberately idled plants while supplies were tight and then waited for prices to skyrocket on the spot market. If the current Internet network infrastructure provided by the backbone providers and Internet service providers can currently support much higher speeds and data quantities to current customers, then is the act of packet filtering and setting arbitrary low speed and data caps also effectively providing an “idled” service? Is a tiered Internet service, where content providers would be effectively competing on a similar market to the electricity “spot market”, a market based entirely on Artificial Scarcity?

  4. People who intend to deceive don’t need analogies to do so. For example, the post says: “There is no way to make light go faster.” While this is literally true, in the context in which it’s offered it’s grossly misleading.

    Packets are made of bits, and bits aren’t made of light, they’re made of modulations to light or some other part of the electro-magnetic spectrum. And indeed, in engineering there are many ways to make bits move faster, primarily by encoding them more efficiently. And similarly, there are ways to make packets move faster, primarily by directing them along routes where bits move faster. Charging more for faster movement of packets allows network infrastructure companies to pay for the equipment that moves packets faster.

    So your attack on analogy simply masks a distortion of the economic and technical issues that are in fact much more clear in the analogy form than in your misrepresentation.

  5. “People who intend to deceive don’t need analogies to do so. For example, the post says: ‘There is no way to make light go faster.’ While this is literally true, in the context in which it’s offered it’s grossly misleading.”

    What is also misleading is failing to mention that to speed up some bits involves slowing down others. Unless you do the obvious thing and add more capacity to speed up all the bits.

    From where I sit, the Internet isn’t broken, so why mess with it? The Skype people managed to pull off VOIP without any upstream optimisation at all. I’ll bet someone smart will figure out TV too.

    I think the big telcos are going out of their way to paint over the issue in very traditional terms as it works to their advantage when they lobby. And they don’t lobby for you, they lobby for them. They fear new things like the Internet, and wish it was more like telephones, satellite or cable TV, something they know how to make a lot of profit on without a lot of thinking. It’s typical, I think, of corporate laziness these days: if something new comes along, don’t adapt yourself to it, adapt it to you.

    “And similarly, there are ways to make packets move faster, primarily by directing them along routes where bits move faster. Charging more for faster movement of packets allows network infrastructure companies to pay for the equipment that moves packets faster.”

    That sounds like the kind of thing an opponent of network neutrality would say, eh Bennett?

    For some, there’s never enough. Listening to the big providers whine about not making enough money is laughable. I pay to get on the net and so does Google, Skype, Slashdot and all the rest. If you’re losing money, charge more for your service, stupid. If your customers are leaving you for another provider then maybe you’re charging too much. If you can’t figure out how they’re undercutting you, think harder. That’s how the free market everyone worships so much in the U.S. works.

    I don’t know about you, but I’m not down with the idea of paying more money just because some poor, multi-billion dollar telcos want to build out their infrastructure to try to compete with the cable companies. If they want a piece of the cable providers, fine, but I’m not going to bankroll the experiment with nickel and dime QoS charges.

  6. Cobol says:

    What is also misleading is failing to mention that to speed up some bits involves slowing down others. Unless you do the obvious thing and add more capacity to speed up all the bits.

    Here’s where your ignorance of QoS and network engineering shows its ugly face. Priority-based QoS is primarily achieved by re-ordering transmit queues. Let’s suppose a transmit queue that includes a stream of X voice packets that desire low jitter, and a stream of 10X web packets that aren’t concerned with jitter, only with the arrival time of the last packet in the stream. The ideal ordering of the queue for the voice user is to have exactly one voice packet after every 10 web packets. This can be achieved without slowing down the last packet in the web stream. The delivery speed of the web packets is impacted by the amount of traffic the voice stream presents to the network after its stream starts and before it ends, not by the organization of the transmit queue.

    So it is a fact that QoS can be achieved on a packet network without slowing anyone down, altering the speed of light, or dancing on the head of a pin. This sort of queue manipulation costs money to perform, and consequently the carriers who offer it as a service charge for it. Google wants to use this service for free, simply because they’ve bought a service suitable for web streams. I don’t happen to think that’s right.

    Your hysterical solution to the QoS problem – that the carriers should amp up the bandwidth but not the fee – is a non-starter. Bandwidth costs money too, and it doesn’t eliminate the need for queue management, it simply moves bottlenecks to other parts of the network.

    Before writing juvenile flames, it’s always a good idea to know what you’re talking about.
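    The transmit-queue re-ordering described above can be sketched in a few lines (a purely illustrative toy, not how any real router implements its queues):

```python
def interleave(voice, web, web_per_voice=10):
    """Merge two packet streams so exactly one voice packet follows
    every `web_per_voice` web packets (illustrative toy function)."""
    out, v, w = [], list(voice), list(web)
    while v or w:
        out.extend(w[:web_per_voice])   # a block of web packets...
        del w[:web_per_voice]
        if v:
            out.append(v.pop(0))        # ...then one low-jitter voice packet
    return out

order = interleave(["v1", "v2"], [f"w{i}" for i in range(20)])
# Voice packets land at evenly spaced slots; the last web packet ("w19")
# still departs before the final voice packet.
```

    With X voice packets and 10X web packets queued, the voice stream gets regular, low-jitter transmission slots while the final web packet is not pushed back, which is the claim being made here.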

  7. “Your hysterical solution to the QoS problem – that the carriers should amp up the bandwidth but not the fee – is a non-starter.”

    If you had bothered to actually read the post you would have discovered that I didn’t say they couldn’t increase their fees, they just couldn’t charge me special fees for their QoS nonsense.

    So what happens to the regular bits as the number of prioritised bits increases? If the link speed is the same, I don’t see how there wouldn’t be a slow down for the non-priority traffic. So rather than resorting to fancy queue management, you could also just see to it that the queue was empty most of the time. And unless I’m missing something, the best way to do that is to increase the link speed. Obviously you still need a queue, but adding longer queues and then trying to implement some kind of packet reordering system seems like a solution to a problem that doesn’t have to exist in the first place. If anything, it only delays the inevitable.

    There is nothing hysterical about upping bandwidth to solve Quality of Service problems. If maximum bandwidth is sufficient to handle the biggest bursts then Quality of Service becomes unnecessary. All you have to worry about are things like propagation speed and processing delay and all of the other little aspects of networking I’m not entirely ignorant of.

    I just thought of an analogy, lol!

    Imagine that you have a ten lane highway that can take up to 10 cars a second, but normally carries around 4. But lately, during rush hour, traffic gets up to 12 cars per second, and things really start to slow down. The truckers are screaming bloody blue murder because some of them are now arriving late. So, does it make sense to reserve one lane only for trucks, or to widen the highway by two lanes?

    And last I heard the telcos have already put in the highways, they just need to attach the exits to light them up. What do you suppose is cheaper? A regular fibre router, or a fancy packet reordering fibre router?

    Since the future is only going to see more and more traffic on the Internet, I don’t see how any of the major ISPs can get away with not dramatically increasing their link speeds anyway, both between themselves and all the way down the tiers. Push the bottlenecks down to the local ISPs or even to the end users where most of the content is being consumed and generated anyway. Charge appropriately for it based on link speed, maybe volume, perhaps both, and let the end-users decide for themselves how much and what kind of traffic they want.

    Aside from being unconvinced of the technical needs for it, I am also afraid of what the large carriers will do if network neutrality laws are not passed. I don’t know about you, but I don’t trust them for an instant to only implement packet manipulation for technical reasons. What amoral US corporation would just sit on its hands when there is juicy profit to be made by double charging the end-users for content and capabilities? It won’t start right away. First it’s going to be flashy ads for making your webcam faster. Then there will be the premium packages for the downloader, the gamer, the blogger, etc… Then comes the exclusive VOIP services (“Our Internet only works with a Bell IP phone”). Then there will be the exclusive sponsors (“MSN is our sponsoring search engine, that’s why you can’t use Google”). If control of the content is left up to these assholes the Internet will become no better than cable TV.

    And it’s not as if a network neutrality law can’t be repealed if it is found to be no good. But how hard will it be to get one passed ten years down the road when corporations have billions invested in packet control and manipulation?

  8. This was really funny:

    There is nothing hysterical about upping bandwidth to solve Quality of Service problems. If maximum bandwidth is sufficient to handle the biggest bursts then Quality of Service becomes unnecessary.

    People don’t design packet networks to go from normal load to peak load with zero degradation of service, they design for graceful degradation. The reason is economic; if the Internet was designed according to the notion that every user can put peak load on it all the time, you wouldn’t be able to pay for the connection you’d need. And it wouldn’t be a packet network any more, it would be a circuit network.

    COBOL, you don’t understand the technical issues here and I don’t have time to give you a primer on network engineering, so I’ll leave you with this thought and then go away: the phone companies are trying to figure out a way to pay for a nationwide, fiber-to-the-home network, and for all the upgrades in infrastructure that deploying this system entails.

    It’s a hard financial problem, and net neutrality regulations don’t make it any easier.

  9. In reply to comment #9.

    Priority-based QoS is primarily achieved by re-ordering transmit queues.

    If you are reordering the transmit queue you are making the decision to transmit one packet before another. You ARE speeding the transmission of some packets at the expense of others.

    Will this affect the performance of the web session? Probably not noticeably, since bulk data transfers via HTTP do not care much about jitter or latency. But what about my first person shooter, which cares about both latency and jitter as much as the voice session does? It gets to run slower at each hop in the network because the voice company paid off my ISP. Worse still, even if I don’t use that voice application, my game packets may be slowed at each hop where they contend for bandwidth with that voice application.

    What if I value my first person shooter’s performance above my voice session? Do I, as the ISP’s customer, get to make that choice? Nope, the ISP gets to, based on who pays them the most.

    This sort of queue manipulation costs money to perform, and consequently the carriers who offer it as a service charge for it.

    Firstly, I would argue that allowing ISPs to make the decision about which network applications get priority has a net negative impact on the network as a whole, due to the loss of competition at higher layers. Even if it is good for an ISP’s short-term bottom line, it probably isn’t good for the rest of society or even the ISPs’ long-term profitability.

    Google wants to use this service for free, simply because they’ve bought a service suitable for web streams. I don’t happen to think that’s right.

    Huh? From what I have read Google doesn’t want prioritized packet delivery at all. That is the point of the whole net neutrality stuff; all packets are equal.

    Google already pays for bandwidth just like I, you and every other person attached to the Internet does. What Google and the other pro-neutrality companies want is an unbiased and competitive landscape for Internet based companies. Allowing ISPs to make deals for special treatment will have the exact opposite consequence.

    Your hysterical solution to the QoS problem – that the carriers should amp up the bandwidth but not the fee – is a non-starter. Bandwidth costs money too, and it doesn’t eliminate the need for queue management, it simply moves bottlenecks to other parts of the network.

    Cobolhacker never suggested that carriers should not up the fees. In fact he said:

    If you’re losing money, charge more for your service, stupid.

    I have no problem with ISPs increasing fees to build out their networks. However, they should be charging their customers for that build-out, not a third party. Build a faster network and I will gladly pay more.

    As an endnote, it amazes me how the carriers believe that they are the ones driving the growth in the Internet. Customers don’t sign up for high-speed because they like their ISP; they sign up for access to Google and all of the other interesting and innovative services and content on the Internet. The carriers are really just the lucky beneficiaries of a network that allows innovation at all levels. The fact that they are willing to stifle that innovation by pre-selecting which applications will work well for their customers shows how completely clueless they are.

    What a sour-puss you turned out to be. As I recall, you came here spoiling for a fight, and I’m the guy who took on the job. No need to get all slashdot and such like.

    “People don’t design packet networks to go from normal load to peak load with zero degradation of service, they design for graceful degradation.”

    This is true, but also the sign of design failure. The equivalent of an Olympic athlete not training to be the fastest, only kinda fast. (Hmm, more analogy…) They should be designing with the peaks in mind. There is no escaping the need for more speed as the amount of data grows. No amount of optimisation can make a 100mbps pipe take a peak load of 110.

    “The reason is economic; if the Internet was designed according to the notion that every user can put peak load on it all the time, you wouldn’t be able to pay for the connection you’d need.”

    Seven years ago DSL was only for the wealthy; now you can get it for thirty bucks a month. Seven years from now you’ll have gigabit to your home for the same price. The top tiers will be upgraded to accommodate this. The network doesn’t need to be able to take every user at full tilt all the time, by the way, it just needs to get bigger. When was the last time you picked up a telephone and found it engaged? If the telcos can figure out contention with telephone lines they can figure out provisioning for the Internet. This is merely an engineering problem, one solvable with the right application of technology. And as with any technology it will only get cheaper and faster.

    “And it wouldn’t be a packet network any more, it would be a circuit network.”

    No, it would still be a packet network, just a really fast one. A network that spends a lot of its time underutilised. I’ve never understood why a network has to be used near or at its capacity for maximum economic gain. When you buy a car, do you buy one that can only do 65mph cause that’s the speed limit in the state? No, you buy one that can do 110 just in case you have to pass someone. (Shit! More analogy!) The best gains come from offering reliability and performance to the customers.

    “It’s a hard financial problem, and net neutrality regulations don’t make it any easier.”

    This is simply bullshit. Network neutrality regulations have nothing to do with building out the last mile. Fibre is fibre: it will cost the same amount no matter what. Network neutrality has everything to do with making sure that once the network is built it won’t be abused.
