Categories
General

pfifo_fast and ECN

Summary

The default queuing discipline used on Linux network interfaces deprioritizes ECN enabled flows because it uses a deprecated definition of the IP TOS byte.

The problem

By default Linux attaches a pfifo_fast queuing discipline (QDisc) to each network interface. The pfifo_fast QDisc has three internal classes (also known as bands) numbered zero to two which are serviced in priority order: any packets in class zero are sent before class one is serviced, and any packets in class one are sent before class two. Packets are assigned to a class based on the TOS value in the IP header.

The TOS byte in the IP header has an interesting history, having been redefined several times. Pfifo_fast is based on the RFC 1349 definition.

   0     1     2     3     4     5     6     7
+-----+-----+-----+-----+-----+-----+-----+-----+
|   PRECEDENCE    |       TOS             | MBZ |  RFC 1349 (July 1992)
+-----+-----+-----+-----+-----+-----+-----+-----+

Note that in the above definition there is a TOS field within the TOS byte.

Each bit in the TOS field indicates a particular QoS parameter to optimize for.

Value Meaning
1000 Minimize delay (md)
0100 Maximize throughput (mt)
0010 Maximize reliability (mr)
0001 Minimize monetary cost (mmc)

Pfifo_fast uses the TOS bits to map packets into the priority classes using the following table. The general idea is to map high priority packets into class 0, normal traffic into class 1, and low priority traffic into class 2.

IP TOS field value Class
0000 1
0001 2
0010 1
0011 1
0100 2
0101 2
0110 2
0111 2
1000 0
1001 0
1010 0
1011 0
1100 1
1101 1
1110 1
1111 1

This approach looks reasonable except that RFC 1349 has been deprecated by RFC 2474 which changes the definition of the TOS byte.

   0     1     2     3     4     5     6     7
+-----+-----+-----+-----+-----+-----+-----+-----+
|               DSCP                |    CU     |  RFC 2474 (October 1998) and
+-----+-----+-----+-----+-----+-----+-----+-----+    RFC 2780 (March 2000)

In this more recent definition, the first six bits of the TOS byte are used for the Diffserv codepoint (DSCP) and the last two bits are reserved for use by explicit congestion notification (ECN). ECN allows routers along a packet’s path to signal that they are nearing congestion. This information allows the sender to slow the transmit rate without requiring a lost packet as a congestion signal. The meanings of the ECN codepoints are outlined below.

   6     7
+-----+-----+
|  0     0  |  Non-ECN capable transport
+-----+-----+

   6     7
+-----+-----+
|  1     0  |  ECN capable transport - ECT(0)
+-----+-----+

   6     7
+-----+-----+
|  0     1  |  ECN capable transport - ECT(1)
+-----+-----+

   6     7
+-----+-----+
|  1     1  |  Congestion encountered
+-----+-----+

[Yes, the middle two codepoints have the same meaning. See RFC 3168 for more information.]

When ECN is enabled, Linux sets the ECN codepoint to ECT(0) or 10 which indicates to routers on the path that ECN is supported.

Since most applications do not modify the TOS/DSCP value, the default of zero is by far the most commonly used. A zero value for the DSCP field combined with ECT(0) results in the IP TOS byte being set to 00000010.

Looking at pfifo_fast's TOS field to class mapping table (above), we can see that a TOS byte of 00000010 (TOS field 0001) results in ECN enabled packets being placed into the lowest priority class (2). However, packets which do not use ECN, those with TOS byte 00000000, are placed into the normal priority class (1). The result is that ECN enabled packets with the default DSCP value are unduly deprioritized relative to non-ECN enabled packets.
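To make the effect concrete, here is a small Python sketch of the lookup. The table mirrors pfifo_fast's default mapping above; the helper name is my own.

```python
# RFC 1349 layout: | precedence (3 bits) | TOS field (4 bits) | MBZ (1 bit) |
# so the 4-bit TOS field is extracted with (tos_byte >> 1) & 0x0F.

# pfifo_fast's TOS-field -> band mapping, copied from the table above.
TOS_FIELD_TO_BAND = [
    1, 2, 1, 1,  # 0000, 0001, 0010, 0011
    2, 2, 2, 2,  # 0100, 0101, 0110, 0111
    0, 0, 0, 0,  # 1000, 1001, 1010, 1011
    1, 1, 1, 1,  # 1100, 1101, 1110, 1111
]

def band_for_tos_byte(tos_byte):
    """Return the pfifo_fast band (0 = highest priority) for a TOS byte."""
    tos_field = (tos_byte >> 1) & 0x0F
    return TOS_FIELD_TO_BAND[tos_field]

# Non-ECN packet with default DSCP: TOS byte 0x00 -> normal band 1.
print(band_for_tos_byte(0x00))  # 1
# ECN-capable packet (ECT(0)): TOS byte 0x02 -> lowest-priority band 2.
print(band_for_tos_byte(0x02))  # 2
```

The two prints show the problem in miniature: turning on ECN alone moves an otherwise identical packet from the normal band to the lowest-priority band.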

The rest of the mappings in the pfifo_fast table effectively ignore the MMC bit, so this problem is only present when the DSCP/TOS field is set to the default value (zero).

This problem could be fixed by either changing pfifo_fast's default priority to class mapping in sch_generic.c or changing the ip_tos2prio lookup table in route.c.


Network latency experiments

Recently a series of blog posts by Jim Gettys has started a lot of interesting discussion and research around the Bufferbloat problem. Bufferbloat is the term Gettys coined to describe the huge packet buffers in network equipment which have been added through ignorance or a misguided attempt to avoid packet loss. These oversized buffers have the effect of greatly increasing latency when the network is under load.

If you’ve ever tried to use an application which requires low latency, such as VoIP or an SSH terminal, at the same time as a large data transfer and experienced high latency then you have likely experienced Bufferbloat. What I find really interesting about this problem is that it is so ubiquitous that most people think this is how it is supposed to work.

I’m not going to repeat all of the details of the Bufferbloat problem here (see bufferbloat.net) but note that Bufferbloat occurs at many different places in the network. It is present within network interface device drivers, software interfaces, modems and routers.

For many, the first instinct when responding to Bufferbloat is to add traffic classification, which is often referred to simply as QoS. While this can be a useful tool on top of the real solution, it does not solve the problem. The only way to solve Bufferbloat is a combination of properly sized buffers and Active Queue Management (AQM).

As it turns out I’ve been mitigating the effects of Bufferbloat (to great benefit) on my home Internet connection for some time. This has been accomplished through traffic shaping, traffic classification and using sane queue lengths with Linux’s queuing disciplines. I confess to not understanding, until the recent activity, that interface queues and driver internal queues are also a big part of the latency problem. I’ve since updated my network configuration to take this into account.

In the remainder of this post I will show the effects that a few different queuing configurations have on network latency. The results will be presented using a little utility I developed called Ping-exp. The name is a bit lame but Ping-exp has made it a lot easier for me to compare the results of different network traffic configurations.


Next is Now

Just stumbled on this video created by Rogers.

One of a few good quotes:

“10 years ago it took 72 hours to download Godfather… – Today it takes 10 minutes – It still takes 3 hours to watch”


Some infrastructure links for Canada 3.0

Tomorrow the Canada 3.0 conference starts. Since I am attending the infrastructure track I thought it might be useful to collect a bunch of links relating to the Internet as infrastructure.

http://www.linuxjournal.com/content/why-internet-infrastructure-need-be-fields-study

http://hakpaksak.wordpress.com/2008/09/22/the-etymology-of-infrastructure-and-the-infrastructure-of-the-internet/

http://lafayetteprofiber.com/FactCheck/OpenSystems.html

http://news.cnet.com/Fixing-our-fraying-Internet-infrastructure/2010-1034_3-6212819.html

http://www.interesting-people.org/archives/interesting-people/200904/msg00168.html

http://www.interesting-people.org/archives/interesting-people/200904/msg00175.html

http://cis471.blogspot.com/2009/04/why-is-connectivty-in-stockholm-so-much.html

http://www.linuxjournal.com/xstatic/suitwatch/2006/suitwatch19.html

http://publius.cc/2008/05/16/doc-searls-framing-the-net

http://free-fiber-to-the-home.blogspot.com/

http://communityfiber.org/cringely.html

http://www.linuxjournal.com/article/10033

IPv6

For the first time in almost a month I had a bit of free time for experimentation today so I decided it was time I set up my home network to use IPv6. I’ve tried to keep up on the development and deployment of IPv6 but besides setting up a few internal network nodes with IPv6 addresses I haven’t played with it much in the past.

Background

Before I get into configuration here are a few links to some of the better articles and videos that I discovered today. Some of these are pretty technical.

The End of the (IPv4) World

Current projections on the time frame for IPv4 address exhaustion.

What this prediction is saying is that some time between late 2009 and late 2011, and most likely in mid-2010, when you ask your local RIR for another allocation of IPv4 addresses your request is going to be denied.

IPv6 Transition Tools and Tui

This article describes 6to4 and Teredo which are two technologies that aim to ease the transition from IPv4 to IPv6.

IPv6 Deployment: Just where are we?

Current IPv6 usage estimations.

Google IPv6 Conference 2008

Videos from Google’s IPv6 conference in May 2008.

IPv6 content

Other than gaining some experience with IPv6 there really isn’t a lot of benefit to using IPv6 yet. However, if you are in any way involved with IP networking it may be time to start learning about IPv6. Current projections have the IPv4 address space being exhausted in mid-2010 (The End of the (IPv4) World).

At the moment there are very few sites available which are accessible via IPv6 and even fewer are IPv6 only. Google has set up an IPv6 version of their main search site at ipv6.google.com. It is unfortunate that Google does not have an IPv6 (AAAA) record for www.google.com yet, but given that providing an AAAA record to some hosts without IPv6 connectivity can cause problems, their choice is not surprising. Hopefully these kinks can be worked out as more people gain experience with IPv6. Rather than trying to remember to type ipv6.google.com all of the time I have locally aliased www.google.com to ipv6.google.com. Everything seems to be working normally so far.

SixXS maintains a list of IPv6 content at Cool IPv6 Stuff. A couple of highlights from this list are the official Beijing 2008 Olympic website and some IPv6 only BitTorrent trackers.

It has been said that the availability of porn is what really drove the adoption of the Internet. In this spirit The Great IPv6 Experiment is collecting copyright licenses to a large amount of commercial pornography and regular television shows for distribution only via IPv6. The project is due to launch sometime “soon”. What a great idea for an experiment.

A short tutorial for IPv6 and 6to4 on Fedora 9

Getting things set up turned out to be pretty easy on Fedora. I expect the same is true of any Linux distribution although the details will differ. Since most ISPs do not have native IPv6 support, special technologies are required to connect to other IPv6 nodes over IPv4. I chose 6to4 to connect to the IPv6 network. 6to4 requires a public IP address so if you are behind NAT look into using Teredo instead. Incidentally, Teredo is supported by Windows Vista.

The first step is to enable IPv6 and 6to4 on the publicly facing network interface. The required configuration file can be found in /etc/sysconfig/network-scripts/ifcfg-XXX where XXX is the name of your publicly facing network interface. Some or all of this may be configurable through system-config-network and other GUI tools but I tend to stick to configuration files. I added the following lines:

IPV6INIT=yes
IPV6TO4INIT=yes
IPV6_CONTROL_RADVD=yes

A few new entries were also required in the global network configuration (/etc/sysconfig/network).

NETWORKING_IPV6=yes
IPV6FORWARDING=yes
IPV6_ROUTER=yes
IPV6_DEFAULTDEV="tun6to4"

Note that if the computer you are configuring is not going to act as an IPv6 gateway for other hosts on your network you probably don’t want to add IPV6FORWARDING and IPV6_ROUTER. After editing these files restart the network service.

/sbin/service network restart

You should now have a new network interface named tun6to4 with an IPv6 address starting with 2002 assigned to it. 2002 (hexadecimal notation) is the first sixteen bits of the IPv6 address space dedicated to 6to4.

/sbin/ifconfig tun6to4
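The 6to4 prefix construction is mechanical: 2002 followed by the 32 bits of your public IPv4 address, giving a /48. A quick Python sketch (the helper name is mine, and 192.0.2.1 is just a documentation address):

```python
import ipaddress

def sixto4_prefix(public_ipv4):
    """Derive the 2002::/16-based 6to4 prefix for a public IPv4 address."""
    v4 = int(ipaddress.IPv4Address(public_ipv4))
    # 2002 fills the top 16 bits; the IPv4 address fills the next 32,
    # which together form the 48-bit network prefix.
    prefix = (0x2002 << 112) | (v4 << 80)
    return ipaddress.IPv6Network((prefix, 48))

print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

This is why the address you see on tun6to4 starts with 2002 followed by your public IPv4 address in hexadecimal.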

Now try pinging an IPv6 address.

ping6 ipv6.google.com

If you can reach ipv6.google.com you have working IPv6 connectivity. If you are not configuring an IPv6 gateway you can ignore everything below this point.

In order for the configured host to act as a gateway for IPv6 traffic it needs to advertise the IPv6 network prefix to the rest of your network. IPv6 doesn’t require DHCP for automatic address configuration but does require prefix announcement so the local node can figure out its IPv6 address. Prefix advertisements are handled by the radvd daemon. Below is the configuration I used (/etc/radvd.conf). Note the leading zeros in the prefix. This indicates that radvd should create the IPv6 prefix using the special 6to4 format.

interface eth0
{
        AdvSendAdvert on;
        MinRtrAdvInterval 30;
        MaxRtrAdvInterval 100;

        prefix 0:0:0:0001::/64
        {
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr off;
                Base6to4Interface ppp0;
                AdvPreferredLifetime 120;
                AdvValidLifetime 300;
        };
};

After restarting radvd the other IPv6 capable nodes on your local network should also be automatically assigned an IPv6 address starting with 2002.

/sbin/service radvd start

I’m not sure if this is the way it is supposed to work or not, but eth0 on my gateway never obtains a 2002 IPv6 address automatically (this is the box radvd is running on). As a result, I assigned the IPv6 address manually. Since my external IPv4 address never changes this isn’t a problem for me, but it seems wrong to have to manually change the interface address if the external IPv4 address changes even though radvd will correctly advertise the new IPv6 prefix to the rest of the network automatically.

If you already know how IPv6 addresses are constructed skip this paragraph. In what will likely be the most common deployment model, IPv6 addresses are constructed of two parts: a 64-bit network identifier (prefix) and a 64-bit host identifier. The network identifier is assigned by the ISP. In the case of 6to4 the network prefix is constructed from your public IPv4 address, with the first sixteen bits of the address set to 2002. The host or node identifier is constructed by extending the 48-bit MAC address to 64 bits.

Determining the IPv6 address to assign to the internal interface (eth0) is a little tricky. First get the network prefix portion of the IPv6 address assigned to the tun6to4 interface; you want everything before the /16. This is the first 64 bits of your IPv6 address. Then look at the link-local IPv6 address which is automatically created on eth0. This address will start with fe80. The last 64 bits of this address are also the last 64 bits of the new address because both are derived from the MAC address of the network interface. Copy everything after “fe80::”. Append this to the previously obtained network prefix, separating the values with a colon. You now have the IPv6 address. Append an “IPV6ADDR=” line to /etc/sysconfig/network-scripts/ifcfg-eth0 and restart the network service (or the interface only if you like). You should now be able to ping6 between network nodes using the 2002 prefixed IPv6 addresses.
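The host identifier construction used here is the modified EUI-64 procedure: insert ff:fe into the middle of the MAC address and flip the universal/local bit. A Python sketch (the function name and example MAC are mine):

```python
def eui64_interface_id(mac):
    """Build the modified EUI-64 interface identifier from a 48-bit MAC."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    # Insert ff:fe between the two MAC halves and flip the
    # universal/local bit (0x02) in the first byte.
    eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    # Render as the four 16-bit groups seen after "fe80::" (or the prefix).
    groups = [f"{eui[i] << 8 | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return ":".join(groups)

# e.g. MAC 00:11:22:33:44:55 -> interface ID 211:22ff:fe33:4455,
# so the link-local address would be fe80::211:22ff:fe33:4455.
print(eui64_interface_id("00:11:22:33:44:55"))
```

This is why copying the part after “fe80::” works: the same 64-bit interface identifier appears in both the link-local address and the 2002-prefixed address.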

Once you have established connectivity between the nodes try ping6ing ipv6.google.com from the internal network nodes. If the ping fails you will likely have to investigate the iptables and ip6tables rules on both the gateway and the internal nodes.


Amazon EC2 from a network administration perspective

There has been lots of discussion and buzz around Amazon Web Services (AWS) lately. I posted a few links about this last week. Most of the articles that I have read on AWS speak of it from a high level. General discussions about how the service allows your web application to increase capacity as required are interesting, but I was curious about the interface that these services present to application developers and to the Internet. More specifically, how do the AWS interfaces compare with normal server colocation services?

Amazon AWS is actually a collection of services. The Elastic Compute Cloud (EC2) is the service most commonly discussed. Other interesting services that are part of AWS include the Simple Storage Service (S3), SimpleDB and the Simple Queue Service (SQS). This article will only discuss EC2 but does not aim to be an EC2 tutorial. Amazon provides a good user guide if you are sufficiently interested.

Everything below comes from an afternoon of experimentation with EC2. Please leave a comment with any corrections or other useful bits of information you might have.

Signing up

EC2 operates on a pay-for-what-you-use model. As a result you need a credit card to use EC2 so Amazon can bill you once per month based on your usage. The first step is to sign up for an AWS account. This account will give you access to AWS documentation and other content. After you have an AWS account you can then enroll in EC2. It is at this point that the credit card is required.

All interaction with EC2 occurs over web service APIs. Both REST and SOAP style interfaces are supported. Web service authentication occurs via X.509 certificates or secret values depending on the web service API used. Amazon nicely offers to generate an X.509 certificate and public/private keys for you. Letting Amazon create the keys and the certificate is probably a good idea for most people since it is not an entirely trivial task. However, depending on how paranoid you are you might want to create the keys locally. Amazon says they don’t store the private keys they generate and I have no reason to doubt them but generating the keys locally reduces the possibility that your private key will be compromised.

It’s all about virtual machines

The fundamental unit in EC2 is a virtual machine. If you have experience with Xen or VMWare you can think of EC2 as a giant computer capable of hosting thousands of virtual machines. In fact, the virtualization technology used by EC2 is Xen. At present only Linux based operating systems are supported but Amazon says that they are working towards supporting additional OSes in the future. Since Xen already has the capability to host Windows and other operating systems this certainly should be possible.

All virtual machine images in EC2 are stored in Amazon’s S3 data storage service. Think of S3 as a file system in this context. Each virtual machine image stored in S3 is assigned an Amazon Machine Image (AMI) identifier. It is this identifier that serves as the name of the virtual machine image within EC2.

Virtual machine images within EC2 can be instantiated to become a running instance. Many instances of an image can be running at any one time. Each instance has its own disks, memory, network connection, etc., so it is completely independent of the other instances booted from the same image. Think of the virtual machine image as an operating system installation disk. This is all very similar to VMWare and other virtualization technologies.

Amazon and the AWS community provide a large number of AMIs for various Linux distributions. Some are general images while others are configured to immediately run a Ruby on Rails application or fill some other specialized role. Of course it is also possible to create new AMIs either for public or private use. Private images are encrypted such that only EC2 has access to them. Since private images will likely contain proprietary code this is a necessary feature.

For an example of why you might want multiple images consider a three tier web application which consists of a web server tier, application tier and a database tier. By having an AMI for each of these machine types the application author can quickly bring new virtual machines in any tier online without having to make configuration changes after the new instance has booted. EC2 also allows a small amount of data to be passed to new instances. This data can be used like command line arguments. For example the address of a database server could be passed to the new instance.

Interacting with EC2

All interaction with EC2 occurs via very extensive web service APIs. Creating and destroying new instances is trivial as is obtaining information on the running instances. There is even a system in place for instances to obtain information about themselves such as their public IP address. Where applicable, such as when starting a new virtual machine instance, these web service calls must be authenticated via a X.509 certificate or a secret value.

Since not everyone will want to write their own EC2 management software Amazon provides a set of command line utilities (written in Java) which wrap the web service APIs. This allows the user to start, stop and manage EC2 instances from the command line.

Creating a new instance is as simple as:

./ec2-run-instances ami-f937d290 -k amazon

The ‘-k amazon’ specifies the name of the SSH private key to use. I’ll come back to this in a bit. Starting ten instances of this image can be accomplished by adding ‘-n 10’:

./ec2-run-instances ami-f937d290 -n 10 -k amazon

It is also possible to look at a virtual machine’s console output. Unfortunately, this is read-only; management activities are not possible via the console. In this case the instance identifier is passed, not the AMI.

./ec2-get-console-output i-8fad57e6

Again, all of the management activities happen via web service APIs so you can build whatever management software you require.

What do the VMs look like?

Hardware platforms

At present Amazon offers three different virtual hardware platforms.

  1. Small instance:  1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of instance storage, 32-bit platform.
  2. Large instance: 7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform.
  3. Extra large instance: 15 GB of memory, 8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each), 1690 GB of instance storage, 64-bit platform.

Data storage

The storage layout of a small instance running a Fedora 8 image looks like the following:

-bash-3.2# cat /proc/partitions
major minor  #blocks  name

   8     2  156352512 sda2
   8     3     917504 sda3
   8     1    1639424 sda1
-bash-3.2# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             1.6G  1.4G  140M  91% /
none                  851M     0  851M   0% /dev/shm
/dev/sda2             147G  188M  140G   1% /mnt

Output from top:

Tasks:  49 total,   1 running,  48 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.6%us,  1.0%sy,  0.0%ni, 96.1%id,  0.6%wa,  0.0%hi,  0.0%si,  1.7%st
Mem:   1740944k total,    88904k used,  1652040k free,     4520k buffers
Swap:   917496k total,        0k used,   917496k free,    33424k cached

Data persistence

The disk partitions attached to each instance are allocated when the reservation is created. While these file systems will survive a reboot they will not survive shutting the instance down. Also note that Amazon makes it clear that internal maintenance may shut down virtual machines. This basically means that you cannot consider the disks attached to the reservations as anything more than temporary storage. It is expected that applications running on the EC2 platform will make use of the S3 data storage service for data persistence. In fact, signing up for the EC2 service automatically gives access to S3.

Network configuration

Once instantiated, each virtual machine has a single Ethernet interface and is assigned two IP addresses. The IP address assigned to the Ethernet interface is an RFC 1918 (private) address. This address can be used for communication between EC2 instances. The second address is a globally unique IP address. This address is not actually assigned to an interface on the virtual machine. Instead NAT is used to map the external address to the internal address. This allows the instance to be directly addressed from anywhere on the Internet but does limit communication to using the protocols supported by Amazon’s NAT system. At present traffic to and from the virtual machines is limited to the common transport layer protocols (TCP and UDP) making it impossible to use other transport protocols such as SCTP or DCCP.

Both the internal and external IP addresses are assigned to new instances at boot time. EC2 does not support static IP address assignment.

Authentication and Security

Firewall

Amazon implements firewall functionality in the NAT system which handles all public Internet traffic going to and from the EC2 instances. When instantiated each instance can be assigned a group name or use the default group. The group name functions like an access list. Changing the access rules associated with a group is accomplished with the ec2-authorize command. The following example allows SSH, HTTP and HTTPS to a group named ‘webserver’.

ec2-authorize webserver -P tcp -p 22
ec2-authorize webserver -P tcp -p 80
ec2-authorize webserver -P tcp -p 443

Instance authentication

The authentication method used to connect to an EC2 instance depends on whether or not you build your own images. If you build your own image you can use whatever authentication or management solution you like. Obvious examples include configuring the image with predefined usernames and passwords and using SSH or perhaps Webadmin. Installing SSH keys for each user and disabling password authentication is probably the best choice.

Authentication when using the publicly available images is a little more complicated. Having a default user/password combination or even default user SSH keys would allow other users to easily login to an instance booted from a publicly available image. To get around this problem Amazon has created a system whereby you can register an SSH key with EC2. During the virtual machine imaging process the public portion of this SSH key is installed as the user key for the root user.

How is this different from normal server co-location?

The biggest difference between server colocation and EC2 is the ephemeral nature of the resources in EC2. This is a positive property in that it is trivial to obtain new resources in EC2. On the negative side of things the fact that ‘machines’ can disappear and that other resources such as IP address assignments are unpredictable adds new complexities.

Machine failure

Amazon states that servers can be shut down during maintenance periods and of course hardware failures will happen. Both of these events will result in virtual machine instances ‘failing’. Since disks, and therefore the data that they contain, disappear when instances die, it seems that the complete failure of individual virtual servers is going to be a more common event than one might expect with traditional server colocation. Consider that a massive power failure event in Amazon’s data center(s) would be the equivalent of a traditional colocation facility being destroyed. Not only do you temporarily lose operational capability but each and every server and the data they were processing and storing would be gone.

In reality every large scale web service should plan for large failure events and individual server failure is also expected to happen regularly given enough nodes. Perhaps deployment on EC2 will make these events just enough more likely to force developers to address them rather than implicitly assuming that they will never occur.

If anyone reading this has experience using EC2 I would love to hear about how often you experience virtual machine failure.

HTTP load balancing

Another interesting complication comes from the fact that EC2 does not support static IP address assignments. Often large web deployments include a device operating as a load balancer in front of many web servers. This may be a specialized device or another server running something like mod_proxy. Using example.com as an example, a typical deployment would point the DNS A records for www.example.com to the load balancer devices. When colocating a server it is normal to be assigned a block of IP addresses for your devices. This makes it easy to replace a failed load balancer node without requiring DNS changes. However, in the case of EC2 you do not know the IP address of your load balancer node until it has booted. As already discussed, this node can disappear and when its replacement comes back online it will be assigned a different IP address.

This presents a problem because the Internet’s DNS infrastructure relies on the ability of DNS servers to cache information. The length of time that a particular DNS record is cached is called the time to live (TTL). Within the TTL time a DNS server will simply return the last values it obtained for www.example.com rather than traversing the DNS hierarchy to obtain a new answer. The dynamic nature of IP address assignment inside EC2 does not mix well with long TTL values. Imagine a TTL value of one day for www.example.com and the failure of the load balancer node. The result would be up to a full day where portions of the Internet would be unable to reach www.example.com. Perhaps more inconvenient would be the user seeing another EC2 customer’s site if the address was reassigned.

In order to work around this problem one solution is to use a very low TTL value. This is the approach taken by AideRSS.

$ dig www.aiderss.com

; <<>> DiG 9.5.0b1 <<>> www.aiderss.com
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3721
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 5, ADDITIONAL: 5

;; QUESTION SECTION:
;www.aiderss.com.               IN      A

;; ANSWER SECTION:
www.aiderss.com.        3600    IN      CNAME   aiderss.com.
aiderss.com.            60      IN      A       72.44.48.168
. . .
$ host 72.44.48.168
168.48.44.72.in-addr.arpa domain name pointer ec2-72-44-48-168.compute-1.amazonaws.com.

AideRSS is using a sixty second TTL for the aiderss.com A record. This means that every sixty seconds each caching DNS server must expire the cached value and go looking for a new one.

Another site hosted on EC2 is Mogulus (just found them when looking for EC2 customers). They take a slightly nicer approach to this problem.

$ dig www.mogulus.com

; <<>> DiG 9.5.0b1 <<>> www.mogulus.com
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46281
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.mogulus.com.               IN      A

;; ANSWER SECTION:
www.mogulus.com.        7200    IN      A       67.202.12.112
www.mogulus.com.        7200    IN      A       72.44.57.45
$ host 67.202.12.112
112.12.202.67.in-addr.arpa domain name pointer ec2-67-202-12-112.z-1.compute-1.amazonaws.com.
$ host 72.44.57.45
45.57.44.72.in-addr.arpa domain name pointer ec2-72-44-57-45.z-1.compute-1.amazonaws.com.

Rather than a single A record with a very low TTL Mogulus uses two A records pointing to two different EC2 nodes and a TTL of 7200 seconds (two hours).

Personally, I consider these low TTL values (especially the 60s one) to be mildly anti-social behavior because it forces additional work on DNS servers throughout the Internet to deal with a local problem. Amazon should consider adding the ability to statically provision IP addresses. This would allow the Internet facing EC2 nodes to have consistent addresses and thereby reduce the failover problems. Like everything else in EC2, this could be charged by usage. I’d be happy to pay a few dollars (5, 10, x?) a month for single IPv4 address within EC2 that I could assign to a node of my choosing.
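Some rough arithmetic shows why the sixty second TTL bothers me: a caching resolver has to re-fetch the record at most once per TTL, so the upper bound on upstream lookups per resolver per day is easy to compute (the helper name is mine):

```python
def max_upstream_queries_per_day(ttl_seconds):
    """Upper bound on authoritative lookups a caching resolver makes per day."""
    return 86400 // ttl_seconds

print(max_upstream_queries_per_day(60))    # 1440 with AideRSS's 60s TTL
print(max_upstream_queries_per_day(7200))  # 12 with Mogulus's two-hour TTL
```

Multiply the 1440 figure by every caching resolver on the Internet that serves a visitor of the site and the extra load of the short TTL adds up quickly.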

Pricing

Unless you are reading this close to the date it was written it is probably a good idea to visit Amazon for pricing information instead of relying on the data here.

Instance time

When I first started investigating EC2 I misinterpreted EC2’s pricing. I thought that instance usage was charged on a CPU time basis. This would effectively mean that an idle server would cost next to nothing. The correct interpretation is that billing is based on how long the instance is running, not how much CPU it uses. The current EC2 pricing is:

  • $0.10/hour – Small Instance
  • $0.40/hour – Large Instance
  • $0.80/hour – Extra Large Instance

This makes the constant use of a single small instance cost about $72/month (720 hours at $0.10). Pretty reasonable, especially when you consider that you do not have to buy the hardware.

Data transfer

Using EC2 also incurs data transfer charges.

  • Data transfer into EC2 from the Internet: $0.10/GB.
  • Data transfer out of EC2 to the Internet: $0.18/GB (gets cheaper if you use > 10TB/month).

Data transfer between EC2 nodes and to/from the S3 persistent storage service is free. Note that S3 has its own pricing structure.
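To put these numbers together, here is a rough back-of-the-envelope bill estimate using the rates quoted above. The usage figures (a 720-hour month, 50 GB in, 200 GB out) are purely illustrative assumptions, not anything from Amazon.

```python
# Rough monthly EC2 bill estimate using the prices quoted above.
# Usage figures passed in below are illustrative assumptions.

HOURLY_RATE = {"small": 0.10, "large": 0.40, "xlarge": 0.80}  # $/hour
TRANSFER_IN = 0.10   # $/GB into EC2 from the Internet
TRANSFER_OUT = 0.18  # $/GB out of EC2 (first pricing tier, < 10 TB/month)

def monthly_cost(instance="small", hours=720, gb_in=0.0, gb_out=0.0):
    """Instance time plus Internet data transfer.
    Transfer between EC2 nodes and to/from S3 is free, so it is ignored."""
    return (HOURLY_RATE[instance] * hours
            + TRANSFER_IN * gb_in
            + TRANSFER_OUT * gb_out)

# A single small instance running all month:
print(monthly_cost("small"))                         # 72.0
# The same instance taking 50 GB in and pushing 200 GB out:
print(monthly_cost("small", gb_in=50, gb_out=200))   # 72 + 5 + 36 = 113.0
```

Even with a fairly busy node, the data transfer charges stay in the same ballpark as the instance-hour charges.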

Summary

In a lot of ways EC2 is similar to server colocation services. At its lowest level EC2 gives you a ‘server’ to work with. Given the prices outlined above, using EC2 as a colocation replacement may be a good choice depending on your requirements.

What really makes EC2 interesting is its API and dynamic nature. The EC2 API makes it possible for resources such as servers, and the hosting environment in general, to become a component of your application instead of something which the application is built on. Applications built on EC2 have the ability to automatically add and remove nodes as demands change. Replacing failed nodes can also be automated. Giving applications the ability to respond to their environment is a very intriguing idea. Somehow it makes the application seem more alive.

A new way to look at networking

I finally got around to watching A new way to look at networking yesterday. This is a talk given by Van Jacobson at Google in 2006 (yes, it has been on my todo list for a long time). This is definitely worth watching if you are interested in networking.

A couple of quick comments (These are not particularly deep or anything. This is mostly for my own reference later.):

  • He says that the current Internet was designed for conversations between end nodes but we’re using it for information dissemination.
    • Me: This distinction relies on the data disseminated to each user being identical. However, in the vast majority of cases, even data that looks identical on the surface, such as web site content, is actually unique for each visitor. Sites with advertisements or customizable features are good examples. As a result we are still using the Internet for conversations in most situations.
  • He outlines the development of networking:
    • The phone network was about connecting wires. Conversations were implicit.
    • The Internet added metadata (the source and destination) to the data which allowed for a much more resilient network to be created. The Internet is about conversations between end nodes.
    • He wants to add another layer where content is addressable rather than the source or destination.
  • He argues for making implicit information explicit so the network can make more intelligent decisions.
    • This is what IP did by adding the source and destination to data.
  • His idea of identifying the data rather than the source or destination is very interesting. A consequence of this model is that data must be immutable and identifiable, and must carry built-in metadata such as the version and the date. It strikes me how closely the internal operation of the Git version control system matches these requirements.
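The Git parallel can be made concrete: Git names each object by a hash of its content, so identical data always resolves to the same name and any change, however small, produces a new one. A minimal sketch of how Git names a blob:

```python
import hashlib

def blob_id(content: bytes) -> str:
    """Name a piece of data by its content, the way Git names blobs:
    SHA-1 over a 'blob <length>\\0' header followed by the bytes."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The same bytes always map to the same name...
print(blob_id(b"hello\n"))  # ce013625030ba8dba906f756967f9e9ca394464a
# ...and any edit yields a different name, i.e. a new immutable version.
print(blob_id(b"hello!\n") != blob_id(b"hello\n"))  # True
```

This is exactly the content-addressable, immutable-data property the talk argues the network itself should have.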

Cloud computing

It’s pretty hard to not notice the buzz around ‘cloud computing’. In large part this is due to the new services being offered by Amazon. Who would have thought that a book seller could become the infrastructure for a new generation of Internet start-ups?

Here’s a bit of information to whet your appetite. Somehow I have to find the time to play with these technologies.

Scaling with the clouds
Surviving the storm

Drawing (nearly) unlimited power from the sky
Drawing power from the sky, part 2

Categories
Musings

End-to-end in standards and software

Two things. Both relate to Microsoft but that is just by coincidence.

The first

Apparently IE8 will allow the HTML author to specify the name and version number of the browser that the page was designed for. For example, the author can add a meta tag that says essentially “IE6”. IE8 will see this tag and switch to rendering pages like IE6 does. Apparently this came about because IE7 became more standards compliant, thereby ‘breaking’ many pages, especially those on intranets which require the use of IE. The new browser version tag will allow MS to update the browser engine without breaking old pages. As a result they will be forced to maintain the old broken HTML rendering engine (or at least its behavior) for a very long time. This will consume development resources that could otherwise be put into improving IE. It will also increase the size and complexity of the browser and, undoubtedly, the number of bugs.

As for the pages broken by newer, more standards compliant browsers, what is their value? Any information, in a corporate Intranet or otherwise, that has value will be updated to retain its value. If no one bothers to update the page, it was probably nearly worthless anyway. Also, most of the HTML pages now in use are generated by a templating system of some kind. It’s not like each and every page will have to be edited by hand.

The second

The Linux kernel development process is notorious for improving (breaking) the kernel’s internal driver APIs. This means that a driver written for version 2.6.x might not even compile against 2.6.x+1, let alone be binary compatible. This of course causes all kinds of trouble for companies not willing to open source their drivers. However, the advantages of this process are huge. It is completely normal that during the development process the author will learn a lot about how the particular problem can be solved. By allowing the internal APIs to change, the Linux kernel development model allows the authors to apply this newfound knowledge and not be slowed down by past mistakes. As I already mentioned, this causes problems for binary-only kernel drivers, but if the product has value the manufacturer will update the driver to work with the new kernel release. If it doesn’t have value, the driver won’t get updated and the kernel doesn’t have to carry around the baggage of supporting the old inferior design. How does this relate to Microsoft? From Greg Kroah-Hartman:

Now Windows has also rewritten their USB stack at least 3 times, with Vista, it might be 4 times, I haven’t taken a look at it yet. But each time they did a rework, and added new functions and fixed up older ones, they had to keep the old api functions around, as they have taken the stance that they can not break backward compatibility due to their stable API viewpoint. They also don’t have access to the code in all of the different drivers, so they can’t fix them up. So now the Windows core has all 3 sets of API functions in it, as they can’t delete things. That means they maintain the old functions, and have to keep them in memory all the time, and it takes up engineering time to handle all of this extra complexity. That’s their business decision to do this, and that’s fine, but with Linux, we didn’t make that decision, and it helps us remain a lot smaller, more stable, and more secure.

So what was the point?

I don’t know what to make of these two little stories, but the latter has been bothering me for some time. Where does the responsibility for dealing with change belong? The Internet has taught us that we should push as much work as possible to the ends of the network. The alternative is rapidly growing complexity and inflexibility in the core. It seems to me that this applies to both of the situations I outlined here as well.

The XMPP snowball is speeding up

More and more of the tech community seems to be talking about XMPP and how it fits into the future of the Internet.

One example: XMPP (a.k.a. Jabber) is the future for cloud services