Tag Archives: Internet

Small Providers and Network Neutrality

It appears that Network Neutrality will be dealt a severe blow at the FCC soon. Although some smaller ISPs are supportive of this change, I believe that it represents a major threat to the long-term health of the small ISP industry.

I love small ISPs. A long time ago I owned one, I’ve worked as a network admin for one, and I spend some of my spare time on the board of directors for a small co-operative ISP. After a seven-year foray into building products for the largest ISPs in the world, I’m extremely happy to be 100% focused on building products for small ISPs. Whether this makes me biased or just passionate probably depends on your point of view.

Firstly, why do some small ISPs support the pending changes? I think this breaks down into a few high-level categories:

  • It’s my network and I should be able to do what I want with it
  • Any government interference is bad
  • Why shouldn’t bandwidth hogs like Netflix pay?

It’s natural for the independent, entrepreneurial types who often run small ISPs to hold these views, but in this instance government is the only entity with the power to put rules in place that protect small ISPs from the large ISPs and from other large companies on the content side.

Before explaining, there is one fact that many ISPs don’t want to face but need to understand:

Your customers don’t pay for access to your network; there is nothing on your network they care about. Your customers pay you to connect them to the services and people they wish to communicate with.

Consider an alternate reality in which the Internet never developed strong Network Neutrality norms and rules, and a content service such as Netflix is born. In this world, a giant like Comcast would probably ignore the new service at first (cable TV is a cash cow, after all) until it sees strong results from Netflix, or early indications that customers prefer this new form of content delivery.

In this alternate reality, the rational business decision is to slow down, block, or charge hefty fees to this new content upstart. Most consumers don’t have multiple real choices for Internet service, so where will they go? Better yet, if the new upstart is killed or weakened early enough, the consumer won’t even know an alternative exists, so they won’t look for another provider that doesn’t take these kinds of actions. The second rational business decision, after hurting this new competitor, would be to start a competing service. As an aside, 99% of the time, betting against any person or company acting in their best financial interest is a stupid idea.

So how does this relate to small providers? They don’t have this kind of market power.

Your customers pay you to connect them to the services and people they wish to communicate with.

In this alternate reality, Netflix doesn’t exist, at least not at the scale and competitiveness it has today. Maybe even Google doesn’t.

If these services and others like them don’t exist, then why would consumers pay to connect to a small ISP’s network? There is nothing on the network the consumer cares about and nothing competitive they can reach through the network. They are better off with a poor connection to a network with stuff they want than a great connection to a network with nothing they care about.

ISPs have some right to complain about the pain caused by the growth of video streaming. It’s not cheap to keep up with that rate of growth. However, the alternative of having no demand to connect to your network is much, much worse.

Another way to think about this is that it is natural for ISPs to want content and services that no other network has. This is simply fighting the “there is nothing on your network they care about” problem. Small ISPs can’t hope to compete in this area, and to whatever extent large ISPs are successful, it will come at the cost of the smaller providers.

I don’t expect the world to end the day that Network Neutrality rules are removed. However, I do believe that this change adds risk to the long-term viability of small ISPs and the industry that supports them. So if you run a small ISP, or sell to small ISPs, and you support the pending changes, please think hard about this. The ‘freedom’ you are asking for extends to entities with much bigger boots than you.

BBR Congestion Talk is Online

A talk describing Google’s new TCP congestion control algorithm, BBR, is now online.

Such a beautiful and simple solution to a long-standing problem. This is one of those situations where you have to wonder why it wasn’t done before.

On a related note, it’s interesting how BBR separates the retransmission and congestion control (rate) logic. There is a section in An Engineering Approach to Computer Networking where the author specifically calls out that it’s easier to solve both problems if they are considered separately rather than using the retransmission window size to control the rate, as most TCP congestion control algorithms do. This struck me as very interesting, and I’m excited to see it demonstrated by sch_fq and BBR now.
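If you want to try it yourself, here is a minimal sketch (mine, not from the talk) for enabling BBR paired with the fq qdisc on a Linux 4.9 or later machine; the sysctl names are the standard upstream ones.

```sh
# Minimal sketch: enable BBR with the fq qdisc (Linux 4.9+).
# Persist these in /etc/sysctl.d/ if the result is worth keeping.
modprobe tcp_bbr
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Confirm which congestion control algorithm is active:
sysctl net.ipv4.tcp_congestion_control
```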

Book: An Engineering Approach to Computer Networking

A while ago the Packet Pushers had Geoff Huston on as a guest in the future of networking series. There are lots of good ideas and contrarian opinions in that podcast episode – go listen to it.

During the episode, Geoff mentioned a book that had a big influence on him called An Engineering Approach to Computer Networking. Naturally, I ordered a copy.

The first thing you’ll notice reading this book is that some aspects are dated – it was written in 1996, after all. This becomes obvious early on when ATM is presented as the likely replacement for Ethernet and as a technology that will play a major role in the future – obviously that didn’t work out.

Fortunately, most of the content is much more timeless.

Almost all networking books that try to be computer science or engineering textbooks are much closer to descriptions of how IP networking works than to actually teaching the science behind networking. This book is the opposite – it’s not the book to read if you want to win an IP networking quiz show.

The principles discussed throughout the book underlie all circuit- and packet-switched networks, so digging into details that may seem out of date is still well worth the time. This helps to build a solid foundation and gives a bit of perspective on how and why so much has stayed the same.

I’m not going to bother giving this book any kind of rating. If you have an interest in computer networking you should read it. For an even more abstract and science-based view of computer networking, you should also read my favourite networking book – Patterns in Network Architecture.

Book: The Art of Network Architecture

I recently finished reading The Art of Network Architecture. If I remember correctly, I found out about this book during an episode of the Packet Pushers where the author participated.

I ordered the book based on the promise of a discussion of SDN use cases and SDN networking in general. It turns out that this wasn’t the best book for digging into that area, but it does offer a nice overview and reminder of networking concepts across all areas of the field, from design to management. So while parts are a bit fluffy and common sense, it was worth reading in the same way a good survey paper is.

Detecting Failure

Part 1: Internet Redundancy, Or Not

Part 2: Redundant Connections to a Single Host?

In the last post I discussed how devices like your laptop and mobile phone are computing devices with multiple Internet connections, not all that different from a network with multiple connections. The anecdote about Skype on a mobile phone reconnecting a call after you leave the range of Wi-Fi alludes to one key difference: a device directly connected to a particular network connection can easily detect a total failure of that connection. In the example, this allowed Skype to quickly try to reconnect using the phone’s cellular connection.

Think back to our initial problem: how can a normal business get redundant Internet connections? The simplest option, and at best a partial solution, is a router with two WAN connections that NATs out each port.

Now imagine you are using a laptop connected to a network with dual NATed WAN connections and you are in the middle of a Skype call. The connection associated with the Skype call will use one of the two WAN ports, and since NAT is used, the source address of the connection will be the IP address associated with the chosen WAN port. As we discussed before, this ‘binds’ the connection to the given WAN link.

In our previous example of a phone switching to its cellular connection when the Wi-Fi connection drops, Skype was able to quickly decide to try to open another connection. This was possible because when the Wi-Fi connection dropped, Skype got a notification that its connection(s) were terminated.

In the case of a device like our laptop, which is behind the gateway, there is no such notification because no network interface attached to the local device failed. All Skype knows is that it has stopped receiving data – it has no idea why. This could be a transient error or perhaps the whole Internet died. This forces applications to rely on keep-alive messages to determine when the network has failed. When the application decides a failure has occurred, it can try to open another connection. In the case of our dual NATed WAN network, this new connection will succeed because it will be NATed out the second WAN interface.

In the meantime, the user experienced an outage even though the network still had an active connection to the Internet. The duration of this outage depends on how aggressive the application timeouts are: short timeouts risk flapping between connections, while longer timeouts mean a poorer experience. Of course, this also assumes that the application includes this non-trivial functionality; most don’t.
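To make the trade-off concrete, here is a crude, hypothetical sketch of what application-level failure detection boils down to; the host name and timings are made up, and real applications send heartbeats inside their own protocol rather than shelling out to ping.

```sh
# Hypothetical keep-alive loop: behind a NAT gateway the application only
# learns the path is dead when its own probes time out.
PEER=peer.example.com   # made-up peer address
while true; do
    if ! ping -c 3 -W 2 "$PEER" > /dev/null; then
        echo "peer unreachable after timeout; re-establishing the session"
        # A new connection would be NATed out whichever WAN link still works.
    fi
    sleep 10            # shorter interval: faster failover but more flapping
done
```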

Isn’t delivering packets the network’s job, not the application’s?

Redundant Connections to a Single Host?

Part 1: Internet Redundancy, Or Not

Previously I wrote about how true redundancy for Internet connections is only available to Internet providers and very large enterprises. This post continues from there.

I would guess that the fact that it’s not possible to get redundant Internet access is a big surprise to people who haven’t looked into it in detail. Surprisingly though, if you have a smart phone or a laptop that you plug into an Ethernet port, you live through the problems caused by this Internet protocol design flaw every day. These problems seem so normal that you may have never considered that reality could be otherwise.

Let’s start with the example of a laptop attached to a docking station at your desk. It’s very common to use the wired connection at your desk rather than wireless because it typically offers faster, more consistent service. Consider the case of needing to transfer a large file while working at your desk. You start the transfer, it’s humming along in the background, and you switch over to another task. A few minutes later you remember that you have a meeting, so you yank the laptop from its docking station and walk to the meeting room.

What just happened to the file transfer? The answer, of course, is that the file transfer died, and you may be thinking, “Of course it died. The laptop lost its connection”. This seems normal because we’re all used to this brokenness.

Think back to the previous blog post. We were trying to get redundant Internet links for a small business or household and found that it’s not possible with the Internet protocols as they exist today. Now think about what the laptop is – it’s a computing system with two Internet connections. This really isn’t very different from a network with two connections. Ultimately, the file transfer died because of all the same limitations discussed in the last post related to network redundancy. That is, solving the problem of enabling multiple redundant connections for a network solves the problem for individual hosts as well.

Now consider a smart phone user who starts a Skype voice or video conversation while in the office and then heads to the car to go meet a client. If you’ve accidentally tried this before, you know that as you leave the range of the office Wi-Fi, the connection drops. In the particular case of Skype, it may be able to rejoin the conversation after the phone switches to the cellular data connection, but most applications don’t even make an attempt at this. Like the laptop, a smart phone is just a computing device with multiple Internet connections.

One last example: your office server that runs Active Directory or performs some other important function. You would probably like network redundancy for this as well, right? This also isn’t possible without low-level ‘hacks’ like bonding two Ethernet ports together to the same switch.
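For illustration, a rough sketch of that bonding hack with iproute2 might look like the following; the interface names and address are hypothetical, and the switch has to support the chosen mode (802.3ad/LACP here).

```sh
# Hypothetical link-bonding sketch: two ports to the same switch.
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0   # made-up address
```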

Not only does the Internet not allow for true redundancy for networks, but the lack of this functionality causes trouble for end hosts as well.

Part 3: Detecting Failure

Internet Redundancy, Or Not

Imagine you are a business that wants to have redundant connections to the Internet. Given the importance of an active Internet connection for many businesses this is a reasonable thing for an IT shop or business owner to ask for. One could also consider the serious home gamer who can’t risk being cut off as another use case.

Let’s dig into the technical options for achieving Internet redundancy.

The first and most obvious path would be to purchase a router that has two WAN ports and order Internet service from two different providers. Bam, you are ready to go, right? Well… not really.

The way this typically works is that the router chooses one of the two Internet connections for a given outbound connection. The policy could be to always use connection A until it fails, or it could be more dynamic and take some advantage of both connections at the same time. The problem with this approach is that because the traffic is NATed towards each Internet provider, there is no way to move a given connection from one Internet provider to the other. So the failure of one of the Internet connections means that your voice call, SQL or game connection will die, probably after some annoyingly lengthy TCP or application-level timeout expires. If the site is strictly doing short outbound connections, as is the case with HTTP 1.1 traffic, this isn’t such a big deal.
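As a concrete illustration, the same setup on a Linux router might look roughly like this; the interface names and gateway addresses are made up, and commercial dual-WAN routers implement the same idea with their own policy logic.

```sh
# Sketch of 'NAT out each WAN port'. Each outbound connection is masqueraded
# to whichever WAN the route lookup picks, which is exactly why an
# established connection cannot move to the other provider.
iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o wan1 -j MASQUERADE

# Simple policy: prefer wan0, fall back to wan1 if wan0's route is removed.
ip route add default via 203.0.113.1 dev wan0 metric 100
ip route add default via 198.51.100.1 dev wan1 metric 200
```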

So the ‘get two standard Internet connections and a dual port WAN router’ approach sort of works. Let’s call it partially redundant.

How do we get to true redundancy that can survive a connection failure without dropping connections? For this we need the site’s network to be reachable through multiple paths. The standard way to do this is to obtain IP address space from one of the service providers, or provider-independent IP space from one of the registries (such as ARIN), and announce it to both providers. Given that IPv4 addresses are in short supply this isn’t a trivial task. The conditions that have to be met to get address space are well out of the reach of small and medium businesses. Even when the barriers can be met, it’s still archaic to have to do a bunch of paperwork with a third party for something that is so obviously needed.
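For a sense of what announcing your own space involves, here is an illustrative sketch using FRR; the prefix, AS numbers and neighbour addresses are all made up, and none of it works until the registry and both providers have signed off.

```sh
# Hypothetical multihoming sketch: announce one prefix to two upstreams.
vtysh <<'EOF'
configure terminal
router bgp 64512
 neighbor 198.51.100.1 remote-as 64496
 neighbor 203.0.113.1 remote-as 64497
 address-family ipv4 unicast
  network 192.0.2.0/24
 exit-address-family
end
EOF
```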

The real kicker is that the lack of IP space is only part of the problem. IPv6’s huge 128-bit address space doesn’t really help at all, because to use both paths the site or home’s IP prefix needs to exist in the global routing table. That is, every core router on the Internet needs an entry that tells it how to reach this newly announced chunk of address space. The specialized memory (CAM) used by these routers isn’t cheap, so there is a strong incentive within the Internet operations community to keep this kind of redundancy out of the reach of everyone except other ISPs and large businesses.

So the simple option doesn’t really solve the problem and ‘true’ redundancy isn’t possible for most businesses. What about something over the top?

Consider a router that is connected to multiple standard Internet connections. It could maintain two tunnels, one over each connection, towards another router somewhere else on the Internet. To the rest of the Internet, this second router is where the business is connected. If one of the site’s Internet connections fails, the routers can simply continue passing packets over the remaining live tunnel, thereby maintaining connectivity to the end site. From an end user’s perspective, this solution mostly works, but let’s think about the downsides. We’ve essentially made our site’s redundancy dependent on the tunnel termination router and its Internet connectivity, whereas without it we are only at the mercy of the ISP’s network. Also, unless the end site obtains its own address space, this approach has all the downsides of the first approach, except the NAT-related problems occur at the tunnel termination router instead of on-site. Finally, if the site can get its own address space, why bother with the tunnel approach at all?
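A minimal sketch of the tunnel option, with all addresses hypothetical, could look like this; in practice a routing protocol or tunnel keepalives would run across both tunnels so traffic shifts to the surviving path when a WAN link dies.

```sh
# One GRE tunnel per WAN link towards the remote termination router.
ip tunnel add gre0 mode gre local 203.0.113.10 remote 192.0.2.1
ip tunnel add gre1 mode gre local 198.51.100.10 remote 192.0.2.1
ip link set gre0 up
ip link set gre1 up

# Static routes as a stand-in for the routing protocol: prefer gre0.
ip route add default dev gre0 metric 100
ip route add default dev gre1 metric 200
```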

I should note, because someone will point it out in the comments, that for very large organizations it’s possible to get layer-two connectivity to each site and essentially build their own internal internet. If they have enough public IP space, they can achieve redundancy to the end site for connections with hosts on the public Internet. With private IP space, they can achieve redundancy for connections within their own network. Without public IP space, even these networks suffer from the NAT-related failure modes.

To summarize, if you aren’t a very large business, there is no way to get true Internet connection redundancy with the current Internet protocols. That’s kinda sad.

Part 2: Redundant Connections to a Single Host

Part 3: Detecting Failure

Improving my home Internet performance

For a long time I’ve experimented with shaping my upstream traffic via Linux’s traffic management functionality (the tc command) with the goal of improving my Internet connection’s performance. The latest incarnation of this configuration can be found in this script. Anecdotally, this configuration greatly improves interactive performance: use cases such as Skype calls work without a hitch alongside any other network tasks I want to run at the same time. In this post I provide some simple experimental results and compare against the default configuration.

The goals of the linked tc script are twofold: improve performance under load and stop any single host from monopolizing the available bandwidth. Performance under load is improved by shaping to just below the link bandwidth, which stops packets from queuing in the DSL modem and thereby allows the Linux QoS features to manage the traffic. Host fairness is achieved by hashing the hosts on the network across a set of buckets. Flow fairness is provided by the underlying fq_codel qdisc.
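As a condensed sketch of the idea (not the linked script itself; the interface name and rates are assumptions), the shaping portion boils down to an HTB class pinned just below the upstream rate with fq_codel attached; the real script adds more classes and a hash filter over source addresses to get the per-host fairness.

```sh
# Shape to just below the upstream sync rate so the DSL modem's buffer
# stays empty, then let fq_codel provide per-flow fairness.
WAN=eth0                # assumption: upstream-facing interface
RATE=4500kbit           # assumption: a little below the true upstream rate

tc qdisc del dev "$WAN" root 2>/dev/null
tc qdisc add dev "$WAN" root handle 1: htb default 10
tc class add dev "$WAN" parent 1:  classid 1:1  htb rate "$RATE" ceil "$RATE"
tc class add dev "$WAN" parent 1:1 classid 1:10 htb rate "$RATE" ceil "$RATE"
tc qdisc add dev "$WAN" parent 1:10 handle 10: fq_codel
```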

Figure 1: Subset of my home network

Figure 1 shows the layout of the network. Note that when I performed these tests I didn’t disconnect all other devices, so these aren’t perfect lab-style results.

iperf and ICMP ping

The first test involves running iperf set to 200 kbps with 500-byte packets. This is meant to be somewhat similar to what an interactive application such as a Skype call would produce. The second test used my pingexp utility to chart the ICMP ping results. For both, six different load scenarios were tested (the rows in the table below). In both cases the base load was generated from HostA and the test load was generated on HostB.
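For reference, the HostB test loads were roughly equivalent to the following commands; the target address is hypothetical and the exact options behind the published numbers may have differed slightly.

```sh
# iperf UDP stream: 200 kbps with 500-byte datagrams (interactive-like load).
iperf -c 192.0.2.10 -u -b 200k -l 500 -t 30

# Latency and loss via ICMP; pingexp charts the output of standard ping.
ping -c 30 192.0.2.10
```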

| # | Description | iperf 200 kbps, 500-byte packets (HostB) | ICMP ping results via pingexp (HostB) |
|---|-------------|------------------------------------------|---------------------------------------|
| 1 | Unloaded | 0% loss / 0.229 ms jitter | (chart 1) |
| 2 | Unloaded, with tc script | 0% loss / 0.197 ms jitter | (chart 2) |
| 3 | Three scp from HostA | 0.6% loss / 2.6 ms jitter | (chart 3) |
| 4 | Three scp from HostA, with tc script | 0% loss / 0.496 ms jitter | (chart 4) |
| 5 | iperf 8 Mbps, 1400-byte packets from HostA | 69% loss / 1.58 ms jitter (attempted twice, failed on first attempt) | (chart 5) |
| 6 | iperf 8 Mbps, 1400-byte packets from HostA, with tc script | 0% loss / 0.804 ms jitter | (chart 6) |

Notice the large decrease in latency between rows 3 and 4. This is the result of shaping to below the link rate, which stops the buffer in the DSL modem from filling.

The biggest improvement can be seen in the last two rows of the table. Without the tc script there is a large amount of packet loss, but with the tc script in place HostB’s traffic is affected very little by HostA’s. Due to the use of fq_codel as the underlying qdisc, it is very likely these results would be similar if both iperf instances were run on HostA, but this was not tested.

scp

The third experiment duplicated the six load scenarios above but used a single scp transfer on HostB as the test load. Figure 2 shows the result as captured and charted by Wireshark. Each of the six scenarios was run for approximately 20 seconds.

Figure 2: Bitrate of scp from HostB during the six test scenarios

The region marked A corresponds to the two unloaded scenarios (rows 1 and 2 in the table above). As expected, there is little difference, since both scenarios max out the upstream link when there is no contention.

Notice how much the rate drops in region B (row 3 in the table) when the three scps are started on HostA. The bitrate is approximately 1/4 of the link rate, which is expected since there are four scps running.

Region C (row 4 in the table) has the same four scps, but with the tc script in place HostB gets 50% of the link rate and HostA’s three scps share the other 50%. This shows that the tc configuration achieves per-host fairness in bandwidth allocation.

Region D should be ignored as it is the result of me taking too long to set up scenario 5.

Region E (row 5) is very interesting. This is where the 8 Mbps iperf UDP flood starts. Notice that the scp from HostB is completely drowned out and is effectively unable to transfer any data. This is an extreme example of the kind of dramatic performance drop under load that many have come to expect from busy Internet links. As we’ll see in region F, this is not a fundamental problem with the Internet; it is the result of not properly managing the buffers.

Region F (row 6) consists of the same traffic as region E except that the tc script is now in place. Like region C, HostB now gets 50% of the available bandwidth even though HostA is trying to transmit at a rate higher than the total link rate. This shows that a bit of active queue management can make an Internet connection usable under high load.

Web Traffic

To get a sense of the difference the tc configuration makes to web performance, I ran Google Chrome in benchmarking mode for the same six scenarios. The results are presented in the table below.

| # | url | iterations | via spdy | doc load mean | paint mean | total load mean | stddev | Read KBps | Write KBps | # DOM |
|---|-----|------------|----------|---------------|------------|-----------------|--------|-----------|------------|-------|
| 1 | http://www.google.ca | 25 | false | 186.7 | 199.7 | 490 | 188 | NaN | NaN | 270 |
| 2 | http://www.google.ca | 25 | false | 176.2 | 190 | 380.7 | 181.4 | NaN | NaN | 270 |
| 3 | http://www.google.ca | 25 | false | 834.6 | 843.6 | 1506.8 | 1044.6 | NaN | NaN | 270 |
| 4 | http://www.google.ca | 25 | false | 178.2 | 192.4 | 416.7 | 226.1 | NaN | NaN | 270 |
| 5 | http://www.google.ca | Failed | Failed | Failed | Failed | Failed | Failed | Failed | Failed | Failed |
| 6 | http://www.google.ca | 25 | false | 175.4 | 188.3 | 380.5 | 176.4 | NaN | NaN | 270 |

I have marked the entries in row 5 as failed because after 120 seconds a single page load had not yet completed.

As above, the interesting rows to contrast are 3 and 4 as well as 5 and 6. In both cases the tc configuration greatly reduced the time required to load www.google.ca.

Summary

This post presented results showing that the performance and predictability of a residential DSL Internet connection can be greatly improved with some basic traffic management running on a Linux router. If you don’t have a Linux router, you may still want to take a look at your home router’s configuration. If it supports bandwidth shaping, try setting it to just below your link rate. The results won’t be as good as presented here, but it should make a noticeable improvement.