Ten Reasons Why IE6 Development is Significantly Better in 2010 than 2001 (But Still Painful)

This post mentions IE6 everywhere, but it’s not about IE6 at all. In my standard rant on the three ages of web development, a standard sub-rant is our improved ability to develop on IE6:

Although it was created back in 2001, we could do far more with it in 2004 (e.g. Google Maps) than we could in 2001, and we can do far more with it in 2010 than we could in 2004.

I cite the 3,286-day-old application as an example because it’s the one browser that’s still around and must still be targeted by a significant number of developers (despite the best intentions of even MS employees to “drive a stake into IE6’s heart”, and thankfully, that number is starting to dwindle). It’s the 120-year-old who has witnessed the transition from horse-and-cart transport to men on the moon, and is still here today to speculate on the eagerly awaited ChatRoulette relaunch. We could ask how effective Netscape Navigator development is in 2010, but since no-one actually does that, any answers would be hypothetical. IE6 is something many people worked with in the pre-Ajax era and something some people still work with post-Ajax, so it’s a specimen that can be used to show how Ajax changed the world.

(You could do this kind of analysis for any platform, e.g. the design pattern phenomenon means that you would approach a C++ app very differently today than you would have in 1987, even with the same language and compiler.)

In the Ajax era, we made like engineers and got smart about how to do the best we could with the laws of physics our universe imposed on us. (Versus the new HTML5 era, where we build a better universe.) I decided to write this post to be a bit more explicit about why IE6 development is significantly better today than when IE6 came out. And by “better”, I mean faster and more powerful, even if it’s still a slow, frustrating, and not-very-powerful development cycle compared to modern browsers.

With no further ado, IE6 development is better today than in 2001 because …

10. Libraries: If only we had jQuery in 2001. If only …

9. Knowledge: JavaScript was allegedly the world’s most misunderstood language in 2001. Now it’s well understood, good parts and bad. And so is the DOM. There’s an abundance of blogs, forums, books, and conferences.

8. Patterns: We not only know how JavaScript and the DOM work, we have idioms, patterns, and best practices for putting that knowledge into action.

7. Performance: IE6 is orders of magnitude slower than today’s browsers, so managing performance is certainly important, and these days we know much more about how to make web apps run faster, thanks to the efforts of Steve Souders et al.

6. Tools: Tools from MS and others have come along over the years to improve the development experience. Firebug or WebKit DevTools they are not, but at least IE6 debugging is more than just alert boxes these days.

5. Developers, Developers, Developers: Finding savvy web developers these days is vastly easier than it was pre-Ajax, when front-end development was a black art.

4. Show Me the Source: We now have plenty of examples to learn from, both online, where View Source is your friend, and in the open-source code repositories of the world.

3. Browser Mechanics: We understand the quirks of each browser much better now, and that certainly includes the many well-documented quirks of IE6 and how to deal with them.

2. Attitude: Those examples also overcome the psychological barrier, a key impediment to IE6 development. In the absence of any decent examples, it was easier for developers to shrug their shoulders and say it’s all too hard. If that sounds too mystical, see the four-minute mile. “Ajax” is more than a set of technologies; it’s also the recognition that these kinds of apps are possible.

1. Development is Just Easier These Days: No matter what kind of development you’re doing, development is just easier in 2010. Blogs, forums, Twitter, code repositories, and the rest have all improved, and that benefits IE6 development along with every other form of development, though there’s a law of diminishing returns here.

IE6 development is still a tough effort and thankfully becoming less of a requirement as people shift to modern browsers, or at least IE8. You know a browser is on its way out when “because employees are less likely to use Facebook” is one of its most compelling advantages. The Ajax era had a profound effect on the way we develop and what we know to be possible, but that era is mostly over, and that law of diminishing returns is kicking in hard. Hence, we make a new, better, platform to keep progressing what’s possible.

Will the rush of HTML5 work make IE6 development in 2020 even more doubleplusgood than in 2010? Not really. There will be a few odd improvements, but overall, HTML5 is about improving the platform, which means developers will increasingly be focused on doing things IE6 will thankfully never even try to do. The HTML5 world subsumes the Ajax world, so on modern browsers you get the double benefit: all ten of these Ajax-era benefits multiplied by the new features of the platform.

Ruby Script to Localise Images for Offline HTML

I’m maintaining a custom HTML5 slide framework. It’s a little similar to the canonical slides.html5rocks.com insofar as, well, it’s HTML5 and slides horizontally! But the key difference is that it’s very markup-based – What You See Is What You Need (WYSIWYN) – so creating new slides is easy.

Anyway, I did something similar with TiddlySlides a little while ago, and @FND created a nice Python script to inline external resources – http://mini.softwareas.com/using-fnds-excellent-spapy-to-make-a-single-p. I wanted something slightly different; since the slides are plain markup, I can’t rely on the <img src… pattern alone. I could have incorporated changes into Fred’s SPA, but since I need this for a GDC presentation tomorrow – and I told myself the same thing last time I did these slides, and it never happened – I opted to make a Ruby script that is general-purpose but meets the specific needs of my slides. See the gist:

#!/usr/bin/env ruby
# This script will download all images referenced in URL (URLs ending in
# jpg/gif/png), stick them in an images/ directory if they're not already there,
# and make a new file referencing the local directory.
#
# The script depends on the http://github.com/nahi/httpclient library.
#
# USAGE
# localiseImages.rb index.html
# ... will create images/ containing images and local.index.html pointing to them.
#
# The point is to cache images so your HTML works offline. See also spa.py
# http://mini.softwareas.com/using-fnds-excellent-spapy-to-make-a-single-p

require 'httpclient'

### UTILITIES
IMAGES_DIR = 'images'
Dir.mkdir(IMAGES_DIR) unless File.directory?(IMAGES_DIR)

def filenameize(url)
  IMAGES_DIR + '/' + url.sub('http://','').gsub('/','__')
end

def save(filename, contents)
  file = File.new(filename, "w")
  file.write(contents)
  file.close
end

### CORE
def saveImage(url)
  save(filenameize(url), HTTPClient.new().get_content(url))
end

def extractImages(filename)
  contents = File.open(filename, "rb").read
  localContents = String.new(contents)
  contents.scan(/http:\/\/\S+?\.(?:jpg|gif|png)/im) { |url|
    puts url
    saveImage(url) unless File.file?(filenameize(url))
    localContents.gsub!(url, filenameize(url))
  }
  save("local."+filename, localContents)
end

### COMMAND-LINE
extractImages(ARGV[0])

Aside: This is also related to “offline” web technologies…my article on “Offline” recently went live at HTML5Rocks: “Offline”: What does it mean and why should I care?

CORS, Scraping, and Microformats

Jump straight to the demo.

Cross-Origin Resource Sharing makes it possible to do arbitrary calls from a web page to any server, if the server consents. It’s a typical HTML5 play: we could do similar things before, but only with hacks like JSONP. Cross-Origin Resource Sharing lets us achieve more and do it cleanly. (The same could be said of Canvas/SVG vs drawing with CSS; WebSocket vs XHR-powered Comet; Web Workers vs yielding with setTimeout; round corners vs 27 different workarounds; and we could go on.)
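For contrast, here’s a rough sketch of the JSONP hack mentioned above (the API URL and callback parameter are invented for illustration). The server has to cooperate by wrapping its JSON in the callback you name, and you end up executing whatever script it returns:

// Hypothetical JSONP call: inject a script tag and let the remote API wrap
// its JSON response in the callback we name. This only works if the API
// explicitly supports a callback parameter, and it runs the response as code.
function handleItems(data) {
  console.log(data);
}
var script = document.createElement("script");
script.src = "http://api.example.com/items?callback=handleItems";
document.getElementsByTagName("head")[0].appendChild(script);

CORS replaces that with a plain XMLHttpRequest, provided the server opts in with the right header, which is what the rest of this post demonstrates.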

This has been available for a couple of years now, but I don’t see people using it. Well, I haven’t checked, but I don’t get the impression many sites are offering their content to external websites, despite social media consultants urging them to be “part of the conversation”. It’s like when people make a gorgeous iPhone app, but their website doesn’t work at all on the same phone (cough, fashionhouse). Likewise, if you’ve got a public API but don’t provide JSONP/callback support, it’s not very useful either…making developers host their own cross-domain proxy is tedious. It’s cool there are services like YQL and Embed.ly for some cases, but wouldn’t it be better if web pages could just pull in all that external content directly?

Except in this case, it’s just not happening. Everyone’s offering APIs, but no-one’s sharing their content through the web itself. At this point, I should remind you I haven’t actually tested my assumption, and maybe everyone is serving their public content with “Access-Control-Allow-Origin: *” … but based on the lack of conversation, I am guessing in the negative. The state of the universe does need further investigation.

Anyway, what’s cool about this is you can treat the web as an API. The Web is my API. “Scraping a web page” may sound dirtier than “consuming a web service”, but it’s the cleaner approach in principle. A website sitting in your browser is a perfectly human-readable depiction of a resource your program can get hold of, so it’s an API that’s self-documenting. The best kind of API. But a whole HTML document is a lot to chew on, so we need to make sure it’s structured nicely, and that’s where microformats come in, gloriously defining lightweight standards for declaring info in your web page. There’s another HTML5 tie-in here, because we now have a similar concept in the standard, microdata.

So here’s my demo.

I went to my homepage at mahemoff.com, which is spewed out by a PHP script. I added the following line to the top of the PHP file:

<?
  header("Access-Control-Allow-Origin: *");
  ... // the rest of my script
?>

Now any web page can pull down “http://mahemoff.com/” with a cross-domain XMLHttpRequest. This is fine for a public web page, but it’s something you should be very careful about if the content is (a) not public; or (b) public but dependent on who’s viewing it. XHR now has a “withCredentials” field that causes cookies to be sent with the request, so if a server also allows credentialed cross-origin requests, a malicious third-party script could read responses generated with the user’s full credentials. Same situation as we’ve always had with JSONP, which should also only be used for public data, but now we can be more nuanced (e.g. you can allow only trusted sites to do this kind of thing).
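As an aside, here’s a minimal sketch (not from the demo; the URL is invented) of what a credentialed cross-origin request looks like on the client. For the browser to actually expose the response, the server would also have to send Access-Control-Allow-Credentials: true and echo a specific origin rather than a wildcard:

// Hypothetical credentialed CORS request. Cookies for example.com are sent
// because withCredentials is true; the browser only hands us the response if
// example.com replies with Access-Control-Allow-Credentials: true and a
// non-wildcard Access-Control-Allow-Origin.
var xhr = new XMLHttpRequest();
xhr.open("get", "http://example.com/private-stuff", true);
xhr.withCredentials = true;
xhr.onload = function() {
  console.log(xhr.responseText);
};
xhr.send();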

On to the client …

I started out doing a standard XHR, for sanity’s sake.

var xhr = new XMLHttpRequest();
xhr.open("get", "message.html", true);
xhr.onload = function() { // instead of onreadystatechange
  if (xhr.readyState==4 && xhr.status==200)
    document.querySelector("#sameDomain").innerHTML = xhr.responseText;
};
xhr.send(null);

Then it gets interesting. The web app makes a cross-domain call using the following facade, which I adapted from a snippet in the veritable Nick Zakas’s CORS article:

function get(url, onload) {
  var xhr = new XMLHttpRequest();
  if ("withCredentials" in xhr) {
    xhr.open("get", url, true);
  } else if (typeof XDomainRequest != "undefined") {
    xhr = new XDomainRequest();
    xhr.open("get", url);
  } else {
    xhr = null;
  }
  if (xhr) {
    xhr.onload = function() { onload(xhr); };
    xhr.send();
  }
  return xhr;
}

This gives us a cross-domain XHR in any browser that supports the concept. The request is made the usual way, and it works against my site – but not yours – because of the header I set earlier on my site. Now I can dump that external content in a div:

get("http://mahemoff.com/", function(xhr) {
  document.querySelector("#crossDomain").innerHTML = xhr.responseText;
  ...

(This would be a monumentally thick thing to do if you didn’t trust the source, as it could contain script tags with malicious content, or a phishing form. Normally, you’d want to sanitise or parse the content first. In any event, I’m only showing the whole thing here for demo purposes.)
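Purely as an illustration of a slightly safer variant (this isn’t what the demo does): you could parse the response into a detached element and copy across only the fragment you need, rather than injecting the whole page. Markup assigned via innerHTML doesn’t execute script tags, though this still isn’t full sanitisation:

// Sketch only: parse the fetched page into a detached element, then pull out
// just the hCard node (the #card div shown below) instead of injecting
// everything. Scripts in innerHTML-assigned markup don't run, but inline
// event handlers and the like would still need proper sanitising.
get("http://mahemoff.com/", function(xhr) {
  var holder = document.createElement("div");
  holder.innerHTML = xhr.responseText;
  var card = holder.querySelector("#card");
  if (card) document.querySelector("#crossDomain").appendChild(card);
});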

Now comes the fun part: parsing the content that came back from an external domain. It so happens that I have embedded hCard microformat content at http://mahemoff.com, in the expandable business card at the top-left of the page.

The hCard content looks like this:

<div id="card" class="vcard">
  <div class="fn">Michael&nbsp;Mahemoff</div>
  <img class="photo" src="http://mahemoff.com/speak2.jpg">
  <div class="role">"I like to make the web better and sushi"</div>
  <div class="adr">London, UK</div>
  <div class="geo">
    <abbr class="latitude" title="51.32">51&deg;32&#39;N</abbr>,
    <abbr class="longitude" title="0">0&deg;</abbr>
  </div>
  <div class="email">[email protected]</div>
  <div class="vcardlinks">
    <a rel="me" class="url" href="http://mahemoff.com">homepage</a>
    <a rel="me" class="url" href="http://twitter.com/mahmoff">twitter</a>
    <a rel="me" class="url" href="http://plancast.com/mahemoff">plancast</a>
  </div>
</div>

It’s based on the hCard microformat, which really just tells you what to call your CSS classes…I told you microformats were lightweight! The whole idea of the card comes from Paul Downey’s genius Hardboiled hCards project.

Anyway, the bottom line is we’ve just extracted some content with hCard data in it, so it should be easy to parse it in a standard way and make sense of the content. So I start looking for an hCard JavaScript library and find one – that’s the beauty of standards. Even better, it’s called Sumo and it comes from Dan Webb.

The hCard library expects a DOM element containing the hCard(s), so I pluck that from the content I’ve just inserted on the page, and pass that to the library. Then it’s a matter of using the “hCard” object to render a custom UI:

var hcard = HCard.discover(document.querySelector("#crossDomain"))[0];
var latlong = new google.maps.LatLng(parseFloat(hcard.geo.latitude), parseFloat(hcard.geo.longitude));
var markerImage = new google.maps.MarkerImage(hcard.photoList[0], null, null, null, new google.maps.Size(40, 40));
var infoWindow = new google.maps.InfoWindow({content: "<a href='"+hcard.urlList[0]+"'>"+hcard.fn+"</a>", pixelOffset: new google.maps.Size(0,-20)});
...

And I also dump the entire hCard for demo purposes, using James Padolsey’s PrettyPrint.

document.querySelector("#hcardInfo").appendChild(prettyPrint(hcard));

There’s lots more fun to be had with the Web as an API. According to the microformats blog, 2 million web pages now have embedded hCards. Offer that content to the HTML5 mashers of the world and they will make beautiful things.

A jQuery Inheritance Experiment

I like jQuery a lot, but I often find myself re-doing my approach to OO and inheritance each time I start a new app. And so, of course, I just did it again.

I was starting to write a lame HTML5 game, where you have “AlienModels” (of the MVC, not Star Trek, variety), each with its own “AlienView”. When an AlienModel enter()s, its view will detect an “enter” event and show the alien entering the scene. Certain types of aliens will fly in, certain aliens will fade in, and so on. I started creating an AlienView abstraction, but I figured this is something jQuery can do for me. An AlienView might simply be a jQuery selector. However, this is where jQuery reaches its limits, as I want to call $(“someAlienView”).enter(50,50) and have the alien’s entry towards co-ordinate (50,50) animated, but animated differently depending on what kind of alien it is.

So I created a framework to do this. Code and Demo here.

The usage is this:

<div class="shy guy">shy</div>
<div class="fly guy">fly</div>

$(".shy").define({
  disappear: function() { $(this).slideUp(); }
});

$(".fly").define({
  disappear: function() {
    $(this).fadeOut().fadeIn().fadeOut().fadeIn().fadeOut().fadeIn()
           .animate({top: -200 }).hide();
  }
});

$(".guy").disappear();

When we call $(“.guy”).disappear(), what happens depends on whether this is a “.shy” or a “.fly”. This is basic polymorphism, which jQuery lacks. The $.fn.define() plugin I wrote (see the code in that example) shoehorns it in, but maybe that’s not a good idea…I’m making jQuery into something it’s not. So on IRC someone pointed me to this MooTools-jQuery article. I need to read that article. I also need to get into MooTools, which I think may be more appropriate for this kind of thing, with its unashamed use of prototype (afaict). In general, my urge for more OO and scalable architecture tells me I need to look further afield than jQuery. But syntactic sugar trumps all, so I’ll only go elsewhere if it doesn’t smell of enterprisey, false sense of security, public static voidness.
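The real plugin lives in the linked code, but purely for illustration, here’s a rough sketch of one way a define() like this could work: stash the supplied methods on the matched elements with data(), and register a jQuery method of the same name that dispatches per element at call time:

// Illustrative sketch only: not the actual plugin from the demo.
// define() stores the supplied methods against each matched element, then
// registers a jQuery method of the same name that looks up the right
// implementation per element when it's called.
$.fn.define = function(methods) {
  this.each(function() {
    var existing = $(this).data("methods") || {};
    $(this).data("methods", $.extend(existing, methods));
  });
  $.each(methods, function(name) {
    if (!$.fn[name]) {
      $.fn[name] = function() {
        var args = arguments;
        return this.each(function() {
          var defined = ($(this).data("methods") || {})[name];
          if (defined) defined.apply(this, args);
        });
      };
    }
  });
  return this;
};

With something along those lines in place, calling disappear() on a “.guy” element picks up whichever implementation was defined for that particular element.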

Incidentally, $.fn.define() could go further than I currently have it; it would be possible to set up an inheritance chain, where you could define $(“.guy”).disappear() explicitly, so that if an element matched “.guy” but not “.shy” or “.fly”, it would follow the “.guy” rule. (But not the other way round.) I probably won’t though.

Incidentally (2), thanks @agektmr for telling me about jsdo.it, a mostly-Japanese JavaScript pastebin similar to jsBin and jsFiddle.

The Tragedy and Triumph of Podcasts

It’s now about 6 years since I discovered podcasts while listening to a pre-podcast podcast, The Gillmor Gang. Podcasts are everything I ever wanted from radio talkback – niche topics, on-demand listening, access anywhere, rich metadata, and no music; I’ve chosen to listen to talkback for a reason (hello, Australian Broadcasting Corporation).

A perfect storm of iPods, massive bandwidth, and feed religion made podcasts possible, and they are still going strong. However, they’ve never taken off in the mainstream, and you can’t say they haven’t had a fair chance. Apple’s inclusion of podcasts in iTunes and iOS makes them pretty darn accessible if people want them, yet many people aren’t using them. Having informally surveyed a few people, I’ve found they aren’t actually aware of how easy iTunes makes it to subscribe to podcasts, so there’s more work to be done there. But I think if there were enough word-of-mouth publicity, people would be using it to subscribe. It’s not harder than uploading photos, for example. (I do have many reservations about iTunes, but those are more for advanced users.)

Podcasts haven’t taken off, in much the same way that RSS feeds and news readers never took off. Or have they? I recently heard Jon Udell speaking on the topic (on some podcast or other, not his own) and he made the point that we expected everyone would wake up in the morning and open up a reader full of feeds they’d subscribed to. Didn’t happen. But feeds did happen – social feeds, in the form of Facebook, Twitter, FourSquare, Buzz, and so on. Anyway, those don’t really translate to podcasts, not yet anyway. If Huffduffer let you subscribe to all your friends’ feeds, it would be possible, at least in a geeky niche community.

My main point here is to highlight a few things that haven’t happened for podcasts, and would make them better and just a bit more popular if they did. I’m not arguing these things would make podcasts wildly popular; consider this mostly a wishlist and some pointers to a few trends:

Hardware: So we have these networked devices, right? The most prominent at this time being the iPhone and iPad, but they still don’t sync over the cloud. Using Android recently, I’ve come to appreciate how nice it is to sync podcasts in the background, over the air. The latest podcasts are just there. The downside is you have to use an expensive phone, which is a problem for gym and running, and also a drain on that precious battery life. While in the US, I recently picked up an Ibiza Rhapsody player, a bargain at $44 for an 8GB player which automatically connects and syncs. It would be even better if I could sign up to the Rhapsody service in the UK, but that’s not gonna happen. The neat thing is it has podcasts built in, and lets me sync them over the air. The downside is it doesn’t have a keyboard, so if I want a feed not in the default list, I have to type it manually using the one-at-a-time, left-right-left-right character entry. Now, I’ve been waiting for someone to release a mini Android device, so I was blitzkrieged to hear This Week In Google mention a new line of Archos “tablets”, including a 3.2″ device. Which will be perfect for gym and running, allowing me to switch between podcasts and Spotify, with both of those things syncing over the air, and at $150, cheap enough to risk overzealous destruction :). Can you say drool.

Cloud OPML: It’s awesome that we have a standard like OPML, a simple way to declare a list of feeds, AKA Reading Lists. (Technically, reading lists are a subset of OPML, but OPML is the term commonly used, so I’ll keep using it here.) However, in both the podcast and feed-reader worlds, there’s an extremely weird tendency to assume OPML lives on your hard drive. Many newsreaders and podcatchers let you import and export OPML…but they assume it lives on your hard drive, not in the cloud! Why? I have no idea. The whole concept is inherently cloud, so it makes no sense. I just want to stick my list of podcasts on a server somewhere, and when I start using a new client, it downloads them for me. As a consequence, I’ve manually entered my subscriptions dozens of times over the years. This is especially important for mobile devices – especially ones without a keyboard – like the Rhapsody player I mentioned above. Podcatcher and feed-reader developers, I urge you to pull down subscriptions from OPML resources in the sky…and to offer users the ability to publish their subscriptions that way too!
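To make the idea concrete, here’s a small sketch (the URL is invented) of what a web-based podcatcher could do with a reading list that lives at a URL instead of on disk; OPML is just XML whose outline elements carry xmlUrl attributes pointing at the feeds:

// Hypothetical: pull a subscription list (an OPML reading list) straight from
// the cloud and extract the feed URLs from its <outline> elements.
var xhr = new XMLHttpRequest();
xhr.open("get", "http://example.com/my-podcasts.opml", true);
xhr.onload = function() {
  var opml = new DOMParser().parseFromString(xhr.responseText, "text/xml");
  var outlines = opml.getElementsByTagName("outline");
  for (var i = 0; i < outlines.length; i++) {
    var feedUrl = outlines[i].getAttribute("xmlUrl");
    if (feedUrl) console.log("subscribe to: " + feedUrl);
  }
};
xhr.send();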

Archives: Sadly, podcasts don’t live on the same way blog posts do. This is sad because many podcasts are reference material, not just latest news. Take a podcast like the excellent History According to Bob. Over the years, he’s produced hundreds of fine recordings on all manner of ancient and recent history. But subscribe to his podcast, and you’ll only be able to backtrack 8 episodes. Now, I chose Bob as an example because he actually offers older podcasts for DVD purchase, but most podcasters would be happy to let people get hold of old episodes; they just have no practical way to do it. History is not the only topic; there are podcasts about movies, science, economics, software engineering…where a 2004 episode would be just as relevant today, if only you could get hold of it. Some podcasts include every single episode in the feed, but then certain clients will end up pulling down gigabytes of data when each user subscribes. As a user, your best bet is to scour archives – if they exist – and use something like Huffduffer to aggregate them. But that’s still painful and not something every user will do. Odeo was on the right track, building up a long list of every podcast ever produced on each feed, whether in the current feed or not. But Odeo spawned Twitter, and Odeo itself sadly is no more.

Integrate with Music Players: Call it “if you can’t beat them, join them”, but I would love to see the music services embrace podcasts. Spotify, for example, has a great interface for choosing songs on the fly as well as subscribing to playlists; it could easily be extended to podcasts to become a one-stop shop for your listening needs. Playdio is an interesting move in this direction, allowing people to record talk tracks in between music tracks, and their contact form mentions podcasts, so maybe there is hope. Still, I wish Spotify et al. would just bake podcasts into the player and be done with it. And considering the social features these things are starting to have, it could actually be quite powerful.

Social: There’s not really much you can do to find out what friends are listening to and all that cal. There’s Amigofish, but it would be nice to see it baked into the players directly.

True, music will probably be in first place for the foreseeable future, mirroring reality, but its needs have already been met, much more so than talk formats, where there really hasn’t been much innovation since 2004.