End-to-end in standards and software

Two things. Both relate to Microsoft, but that is just a coincidence.

The first

Apparently IE8 will allow the HTML author to specify the name and version number of the browser that the page was designed for. For example, the author can add a meta tag that says, essentially, “IE6”, and IE8 will see this tag and switch to rendering the page the way IE6 does. This came about because IE7 became more standards compliant, thereby ‘breaking’ many pages, especially those on intranets that require the use of IE. The new browser version tag will allow MS to update the browser engine without breaking old pages.

As a result, Microsoft will be forced to maintain the old, broken HTML rendering engine (or at least its behavior) for a very long time. This will consume development resources that could otherwise be put into improving IE. It will also increase the browser’s size and complexity and, undoubtedly, its number of bugs. As for the pages broken by newer, more standards compliant browsers, what is their value? Any information, on a corporate intranet or otherwise, that has value will be updated to retain that value. If no one bothers to update a page, it was probably nearly worthless anyway. Also, most of the HTML pages now in use are generated by a templating system of some kind; it’s not like each and every page will have to be edited by hand.
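For the curious, the opt-in looks roughly like the snippet below. This is pieced together from the public descriptions of the feature, so treat the exact header name and value as illustrative rather than final; the syntax could change before IE8 ships.

```html
<!-- Roughly what the version-targeting opt-in is described as: a meta tag
     (or an equivalent HTTP response header) telling IE8 to render this page
     with an older engine's behavior. The exact name/value is the part most
     likely to differ in the shipping browser. -->
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7">
```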

The second

The Linux kernel development process is notorious for improving (breaking) the kernel’s internal driver APIs. This means that a driver written for version 2.6.x might not even compile against 2.6.x+1, let alone be binary compatible. This, of course, causes all kinds of trouble for companies unwilling to open source their drivers. However, the advantages of this process are huge. It is completely normal for an author to learn a lot during development about how the particular problem should be solved. By allowing the internal APIs to change, the Linux kernel development model lets authors apply this newfound knowledge instead of being slowed down by past mistakes. As I already mentioned, this causes problems for binary-only kernel drivers, but if the product has value, the manufacturer will update the driver to work with the new kernel release. If it doesn’t have value, the driver won’t get updated and the kernel doesn’t have to carry around the baggage of supporting the old, inferior design. How does this relate to Microsoft? From Greg Kroah-Hartman:

Now Windows has also rewritten their USB stack at least 3 times, with Vista, it might be 4 times, I haven’t taken a look at it yet. But each time they did a rework, and added new functions and fixed up older ones, they had to keep the old api functions around, as they have taken the stance that they can not break backward compatibility due to their stable API viewpoint. They also don’t have access to the code in all of the different drivers, so they can’t fix them up. So now the Windows core has all 3 sets of API functions in it, as they can’t delete things. That means they maintain the old functions, and have to keep them in memory all the time, and it takes up engineering time to handle all of this extra complexity. That’s their business decision to do this, and that’s fine, but with Linux, we didn’t make that decision, and it helps us remain a lot smaller, more stable, and more secure.
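To make the “might not even compile” point above concrete, here is a small sketch. The interrupt-handler prototype change around 2.6.19 is a real example of this kind of internal churn; the driver fragment itself is hypothetical.

```c
/* Hypothetical fragment of an out-of-tree driver written against an older
 * 2.6 kernel. Interrupt handlers used to take a third struct pt_regs *
 * argument; around 2.6.19 that argument was dropped from the in-kernel API,
 * so code like this stops compiling against newer kernels until whoever
 * maintains the driver updates it. */
#include <linux/interrupt.h>

/* Old-style handler: three arguments. */
static irqreturn_t mydev_irq(int irq, void *dev_id, struct pt_regs *regs)
{
        /* ... acknowledge the hardware, wake up any waiters ... */
        return IRQ_HANDLED;
}

/* Newer kernels expect irqreturn_t (*)(int, void *), so passing mydev_irq
 * to request_irq() fails at compile time. The fix is a one-line signature
 * change, but someone has to be around to make it. */
```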

So what was the point?

I don’t know what to make of these two little stories, but the latter has been bothering me for some time. Where does the responsibility for dealing with change belong? The Internet has taught us that we should push as much work as possible to the ends of the network. The alternative is rapidly growing complexity and inflexibility in the core. It seems to me that this applies to both of the situations outlined here as well.

4 thoughts on “End-to-end in standards and software”

  1. Andrew Delong

    With respect to The First, I like to imagine how much overall human effort is saved by not breaking compatibility. It’s hard to quantify the harm that a slightly-bulkier-IE-executable incurs on literally everyone, but forcing thousands (at least) of authors/coders to revisit their web content is clearly a bigger burden on the industry than the cost of paying Microsoft engineers to figure this out. Eventually Microsoft drops old APIs; there’s just more overlap at any given time than you’ll find in Linux. (I’m ignoring the security issue ;)

  2. Mike Burrell

    I’ve always been a fan of how OpenGL does its API revisions. Mind you, I’m not working with OpenGL on a regular basis, so I can’t say for sure how well it works in practice, but I like the idea that API changes go through various stages of stability. You have vendor extensions, multi-vendor extensions, and ARB extensions, eventually leading into core API changes. From a developer standpoint, you then get multiple layers of stability in the API: if you want your code to stay stable without ongoing maintenance, you can just stay away from extensions altogether.

  3. Dan Siemon Post author

    It may be true that there is a smaller short-term expenditure of energy in Microsoft continuing to support these broken HTML documents than in forcing HTML authors to fix their content. It is certainly more convenient to force the change on a single point, and it may even be a good business decision for Microsoft.

    However, this convenience comes at a cost. Dropping support for broken HTML documents causes the overall complexity of the web to go down, both as documents become unusable and disappear and as documents are updated. The alternative path causes the complexity of the web ecosystem to rise. Will the quest for backwards compatibility eventually push the complexity of the web to the point where developing web software becomes too difficult?

    Don’t just think about the browser here. Anything that has to consume the broken pages (search engines, mashups, …) is also impacted by this additional complexity.

  4. Andrew Delong

    I definitely think old pages, even if they’re only in the Wayback Machine, have value. Software seems different from other format-revision situations, like those in hardware (8-track, CD, etc.). It’s easier to write an NES emulator than to port game code, right? It’s a case-by-case thing, but for web standards I think compatibility’s a winner.

    And OpenGL is another API that has stayed backwards compatible for decades now; last I heard, 3.0 is still going to support the old APIs (immediate mode, etc.), though it won’t expand on them. There’s a tremendous amount of content that relies on this compatibility, but the expertise to port it is gone and would have to be reinvested.

    Of course nobody denies that this comes at a cost to the team supporting a core API (dozens or hundreds of developers), but it’s arguably worth it.

    I read Raymond Chen’s blog occasionally (very occasionally), and he says some interesting things about API design at Microsoft: the old-school approach was all about compatibility at almost any cost, but now there’s been a shift toward developing new APIs and throwing out the old ones. The resulting support mess isn’t rolled into the platform code; instead it gets spread out into the wild, creating a maintenance headache for developers everywhere. As a side effect, developers lose confidence in new/upcoming APIs and won’t adopt them for fear they’ll get burned one or two years down the road, even if the new APIs are better.

