The Secret to Quick Node Hacks is

to ignore security and do everything local.

I’m at the inaugural Amped event, and you could immediately guess people were going to be using Node, from the fact they were breathing. So I pointed them to my video sync demo, which is indeed a total hack.

Basically, you can do any multi-user app with a server of 10-20 lines. All you do is broadcast each user’s actions. In fact, such a server would make a great open source project (but I’m otherwise occupied playing with PhoneGap and Android). Sure, Node makes servers easier, but for version 0.01, this is all you need. A video sync app would get messages like this:

{ user: "john", time: "103094401", action: "join" }
{ user: "sue", time: "103095491", action: "join" }
{ user: "john", time: "103095822", action: "takeControl" }
{ user: "john", time: "103096111", action: "play" }
{ user: "john", time: "103096483", action: "fastforward", data: { position: 2.12 } }
{ user: "sue", time: "103098111", action: "takeControl"}
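The whole broadcast idea can be sketched in a few lines of plain JavaScript. This is a hypothetical stand-in, not the actual server: the "socket" objects here are fakes, and the transport (WebSocket or whatever) is omitted.

```javascript
// Minimal broadcast hub: every incoming action is relayed verbatim to all
// connected clients, with no validation whatsoever (that's the point).
function Hub() {
  this.clients = [];
}
Hub.prototype.join = function(client) {
  this.clients.push(client);
};
Hub.prototype.broadcast = function(message) {
  var json = JSON.stringify(message);
  this.clients.forEach(function(client) {
    client.send(json); // each client decides what the action means
  });
};

// Stand-in clients that just record what they receive
var received = [];
var hub = new Hub();
hub.join({ send: function(msg) { received.push(msg); } });
hub.join({ send: function(msg) { received.push(msg); } });
hub.broadcast({ user: "john", time: "103096111", action: "play" });
```

Everything else – deciding who has control, interpreting "play" versus "fastforward" – lives in the clients.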

This is trading off security for productivity. There’s ZERO security here, because (at least the way I implemented the video sync) any user can forge a request from any other user. But that’s the point: It’s a hackathon. Unless you’re at a security hackathon, security doesn’t matter a hoot. Nor does maintainability, readability, or most other -ilities.

If I was doing this for a “real” project, I’d start the same way. I’d be wasting my employer’s/client’s time if I did it any other way, since 9 projects out of 10 (</blatantly made-up stat>) never get out of the feasibility phase.

Ten Reasons Why IE6 Development is Significantly Better in 2010 than 2001 (But Still Painful)

This post mentions IE6 everywhere, but it’s not about IE6 at all. In my standard rant on the three ages of web development, a standard sub-rant is our improved ability to develop on IE6:

Although it was created back in 2001, we could do far more with it in 2004 (e.g. Google Maps) than we could in 2001, and we can do far more with it in 2010 than we could in 2004.

I cite the 3,286-day-old application as an example because it’s the one browser that’s still around and still must be targeted by a significant number of developers (despite the best intentions of even MS employees to “drive a stake into IE6’s heart”, and thankfully, the number is starting to dwindle). It’s the 120-year-old who’s witnessed the transition from horse-and-cart transportation to men on the moon, and is still here today to speculate on the eagerly awaited ChatRoulette relaunch. We could ask how effective Netscape Navigator development is in 2010, but since no-one actually does that, any answers would be hypothetical. IE6 is something many people worked with in the pre-Ajax era and something some people still work with post-Ajax, so it’s a specimen that can be used to show how Ajax changed the world.

(You could do this kind of analysis for any platform, e.g. the design pattern phenomenon means that you would approach a C++ app very differently today than you would have in 1987, even with the same language and compiler.)

In the Ajax era, we made like engineers and got smart about how to do the best we could with the laws of physics our universe imposed on us. (Versus the new HTML5 era, where we build a better universe.) I decided to make this post to be a bit more explicit about why IE6 development is significantly better today than when IE6 came out. And by “better”, I mean faster and more powerful, if still a slow, frustrating, and not-very-powerful development cycle compared to the modern browsers.

With no further ado, IE6 development is better today than in 2001 because …

10. Libraries If only we had jQuery in 2001. If only …

9. Knowledge JavaScript was allegedly the world’s most misunderstood language in 2001. Now it’s well-understood, good parts and bad. And so is the DOM. An abundance of blogs, forums, books, and conferences.

8. Patterns We not only know how Javascript and the DOM work, we have idioms, patterns, and best practices for putting it into action.

7. Performance IE6 is orders of magnitude slower than today’s browsers, so managing performance is certainly important, and these days, we know much more about how to make web apps run faster, thanks to the efforts of Steve Souders et al.

6. Tools Tools from MS and others have come along over the years to improve the development experience. Firebug or Webkit Devtools they are not, but at least IE6 debugging is more than just alert boxes these days.

5. Developers, Developers, Developers Finding savvy web developers these days is vastly easier than it was pre-Ajax, when front-end development was a black art.

4. Show Me the Source We now have plenty of examples to learn from, both online, where View Source is your friend, and in the open source code repositories of the world.

3. Browser Mechanics We understand the quirks of each browser much better now, and that certainly includes the many well-documented quirks of IE6 and how to deal with them.

2. Attitude Those examples also overcome the psychological barrier, a key impediment to IE6 development. In the absence of any decent examples, it was easier for developers to shrug their shoulders and say it’s all too hard. If that sounds too mystical, see the four-minute mile. “Ajax” is more than a set of technologies, it’s also the recognition that these kinds of apps are possible.

1. Development is Just Easier These Days. No matter what kind of development you’re doing, development is just easier in 2010. Blogs, Forums, Twitter, Code Repositories, etc are improved, so this will impact on IE6 development or any other form of development, though there’s a law of diminishing returns here.

IE6 development is still a tough effort and thankfully becoming less of a requirement as people shift to modern browsers, or at least IE8. You know a browser is on its way out when “because employees are less likely to use Facebook” is one of its most compelling advantages. The Ajax era had a profound effect on the way we develop and what we know to be possible, but that era is mostly over, and that law of diminishing returns is kicking in hard. Hence, we make a new, better, platform to keep progressing what’s possible.

Will the rush of HTML5 work make IE6 development in 2020 even more doubleplusgood than in 2010? Not really. There will be a few odd improvements, but overall, HTML5 is about improving the platform, which means developers will increasingly be focused on doing things IE6 will thankfully never even try to do. The HTML5 world subsumes the Ajax world, so you get the double benefit – all of these ten Ajax benefits multiplied by all of the new features of the platform.

CORS, Scraping, and Microformats

Jump straight to the demo.

Cross-Origin Resource Sharing makes it possible to do arbitrary calls from a web page to any server, if the server consents. It’s a typical HTML5 play: We could do similar things before, but they were with hacks like JSONP. Cross-Origin Resource Sharing lets us achieve more and do it cleanly. (The same could be said of Canvas/SVG vs drawing with CSS; WebSocket vs XHR-powered Comet; WebWorker vs yielding with setTimeout; rounded corners vs 27 different workarounds; and we could go on.)

This has been available for a couple of years now, but I don’t see people using it. Well, I haven’t checked, but I don’t get the impression many sites are offering their content to external websites, despite social media consultants urging them to be “part of the conversation”. It’s like when people make a gorgeous iPhone app, but their website doesn’t work at all in the same phone (cough fashionhouse). Likewise, if you’ve got a public API but don’t provide JSONP/callback support, it’s not very useful either…making developers host their own cross-domain proxy is tedious. It’s cool there are services like YQL for some cases, but wouldn’t it be better if web pages could just pull in all that external content directly?

Except in this case, it’s just not happening. Everyone’s offering APIs, but no-one’s sharing their content through the web itself. At this point, I should remind you I haven’t actually tested my assumption and maybe everyone is serving their public content with “Access-Control-Allow-Origin: *” … but based on the lack of conversation, I am guessing in the negative. The state of the universe does need further investigation.

Anyway, what’s cool about this is you can treat the web as an API. The Web is my API. “Scraping a web page” may sound dirtier than “consuming a web service”, but it’s the cleaner approach in principle. A website sitting in your browser is a perfectly human-readable depiction of a resource your program can get hold of, so it’s an API that’s self-documenting. The best kind of API. But a whole HTML document is a lot to chew on, so we need to make sure it’s structured nicely, and that’s where microformats come in, gloriously defining lightweight standards for declaring info in your web page. There’s another HTML5 tie-in here, because we now have a similar concept in the standard, microdata.

So here’s my demo.

I went to my homepage, which is spewed out by a PHP script. I added the following line to the top of the PHP file:

  <?
    header("Access-Control-Allow-Origin: *");
    ... // the rest of my script
  ?>

Now any web page can pull down my homepage with a cross-domain XMLHttpRequest. This is fine for a public web page, but something you should be very careful about if the content is (a) not public; or (b) public but dependent on who’s viewing it, because XHR now has a “withCredentials” field that will cause cookies to be passed if it’s on. A malicious third-party script could create an XHR, set withCredentials to true, and access your site with the user’s full credentials. Same situation as we’ve always had with JSONP, which should also only be used for public data, but now we can be more nuanced (e.g. you can allow trusted sites to do this kind of thing).
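The “allow trusted sites” nuance boils down to echoing the request’s Origin header back only when it’s on a whitelist. Here’s a hypothetical server-side helper (function name and shape are my own, not from any framework):

```javascript
// Choose the Access-Control-Allow-Origin value for a request: echo the
// Origin header only when it's on a whitelist of trusted sites.
// Fully public data can skip all this and send "*".
function allowOriginHeader(origin, trustedOrigins) {
  if (trustedOrigins.indexOf(origin) !== -1) {
    return origin; // echo the trusted origin back
  }
  return null; // send no header; the browser blocks the cross-domain read
}
```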

On to the client …

I started out doing a standard XHR, for sanity’s sake.


  var xhr = new XMLHttpRequest();
  xhr.open("get", "message.html", true);
  xhr.onload = function() { // instead of onreadystatechange
    if (xhr.readyState==4 && xhr.status==200)
      document.querySelector("#sameDomain").innerHTML = xhr.responseText;
  };
  xhr.send(null);

Then it gets interesting. The web app makes a cross-domain call using the following facade, which I adapted from a snippet in the veritable Nick Zakas’s CORS article:


  function get(url, onload) {
    var xhr = new XMLHttpRequest();
    if ("withCredentials" in xhr) {
      xhr.open("get", url, true);
    } else if (typeof XDomainRequest != "undefined") {
      xhr = new XDomainRequest();
      xhr.open("get", url);
    } else {
      xhr = null;
    }
    if (xhr) {
      xhr.onload = function() { onload(xhr); };
      xhr.send();
    }
    return xhr;
  }

This gives us a cross-domain XHR in any browser that supports the concept. It makes the request the usual way, and the request works against my site – but not yours – because of the header I set earlier on my site. Now I can dump that external content in a div:


  get("", function(xhr) {
    document.querySelector("#crossDomain").innerHTML = xhr.responseText;
    ...

(This would be a monumentally thick thing to do if you didn’t trust the source, as it could contain script tags with malicious content, or a phishing form. Normally, you’d want to sanitise or parse the content first. In any event, I’m only showing the whole thing here for demo purposes.)

Now comes the fun part: Parsing the content that came back from an external domain. It so happens that I have embedded hCard microformat content on my homepage. It’s in the expandable business card you see on the top-left:

And the hCard content looks like this:

  <div id="card" class="vcard">
    <div class="fn">Michael&nbsp;Mahemoff</div>
    <img class="photo" src="">
    <div class="role">"I like to make the web better and sushi"</div>
    <div class="adr">London, UK</div>
    <div class="geo">
      <abbr class="latitude" title="51.32">51&deg;32'N</abbr>,
      <abbr class="longitude" title="0">0&deg;</abbr>
    </div>
    <div class="email">[email protected]</div>
    <div class="vcardlinks">
      <a rel="me" class="url" href="">homepage</a>
      <a rel="me" class="url" href="">twitter</a>
      <a rel="me" class="url" href="">plancast</a>
    </div>
  </div>

It’s based on the hCard microformat, which really just tells you what to call your CSS classes…I told you microformats were lightweight! The whole idea of the card comes from Paul Downey’s genius Hardboiled hCards project.
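To see why the class names alone are enough, here’s a toy field extractor. This is a fragile regex illustration, not what I actually used – a real parser like Sumo walks the DOM instead:

```javascript
// Pull the text content of the first element with the given microformat
// class name out of an HTML string. Good enough for a demo, not for
// real-world markup (nested tags, attribute order, etc. will break it).
function hcardField(html, className) {
  var re = new RegExp('class="' + className + '"[^>]*>([^<]*)<');
  var match = html.match(re);
  return match ? match[1] : null;
}

// A trimmed-down version of the card above
var card = '<div class="vcard"><div class="fn">Michael Mahemoff</div>' +
           '<div class="adr">London, UK</div></div>';
```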

Anyway, bottom line is we’ve just extracted some content with hCard data in it, so it should be easy to parse it in a standard way and make sense of the content. So I start looking for an hCard Javascript library and find one – that’s the beauty of standards. Even better, it’s called Sumo and it comes from Dan Webb.

The hCard library expects a DOM element containing the hCard(s), so I pluck that from the content I’ve just inserted on the page, and pass that to the library. Then it’s a matter of using the “hCard” object to render a custom UI:


  var hcard = HCard.discover(document.querySelector("#crossDomain"))[0]; // Sumo call; exact method name approximated
  var latlong = new google.maps.LatLng(parseFloat(hcard.geo.latitude), parseFloat(hcard.geo.longitude)); // parseFloat, not parseInt, to keep the decimals
  var markerImage = new google.maps.MarkerImage(hcard.photoList[0], null, null, null, new google.maps.Size(40, 40));
  var infoWindow = new google.maps.InfoWindow({content: "<a href='"+hcard.urlList[0]+"'>"+hcard.fn+"</a>", pixelOffset: new google.maps.Size(0,-20)});
  ...

And I also dump the entire hCard for demo purposes, using James Padolsey’s PrettyPrint.


  document.querySelector("#hcardInfo").appendChild(prettyPrint(hcard));

There’s lots more fun to be had with the Web as an API. According to the microformats blog, 2 million web pages now have embedded hCards. Offer that content to the HTML5 mashers of the world and they will make beautiful things.

A jQuery Inheritance Experiment

I like jQuery a lot, but I often find myself re-doing my way of OO and inheritance each time I start a new app. And so, I just did it again of course.

I was starting to write a lame HTML5 game, where you have “AlienModels” (of the MVC, not Star Trek, variety), each with their own “AlienView”. When an AlienModel enter()s, its view will detect an “enter” event and show the alien entering the scene. Certain types of aliens will fly in, certain aliens will fade in, and so on. I started creating an AlienView abstraction, but I figured this is something jQuery can do for me. An AlienView might simply be a jQuery selector. However, this is where jQuery reaches its limits, as I want to call $(“someAlienView”).enter(50,50), and to have the alien’s entry towards co-ordinate (50,50) animated, but animated differently depending on what kind of alien it is.

So I created a framework to do this. Code and Demo here.

The usage is this:

  <div class="shy guy">shy</div>
  <div class="fly guy">fly</div>


  $(".shy").define({
    disappear: function() { $(this).slideUp(); }
  });

  $(".fly").define({
    disappear: function() {
      $(this).fadeOut().fadeIn().fadeOut().fadeIn().fadeOut().fadeIn()
             .animate({top: -200 }).hide();
    }
  });

  $(".guy").disappear();

When we call $(“.guy”).disappear(), what happens depends on whether this is a “.shy” or a “.fly”. This is basic polymorphism, which jQuery lacks. The $.fn.define() plugin I wrote (see the code in that example) shoehorns it in, but maybe that’s not a good idea…I’m making jQuery into something it’s not. So on IRC someone pointed me to this MooTools-jQuery article. I need to read that article. I also need to get into MooTools, which I think may be more appropriate for this kind of thing, with its unashamed use of prototype (afaict). In general, my urge for more OO and scalable architecture tells me I need to look further afield than jQuery. But syntactic sugar trumps all, so I’ll only go elsewhere if it doesn’t smell of enterprisey, false sense of security, public static voidness.
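For the flavour of it, here’s a dependency-free sketch of the dispatch idea – not the actual $.fn.define() code, and with jQuery swapped out for plain objects whose class lists are just arrays:

```javascript
// Behaviours are registered against class names; invoking a method on an
// element runs the most recently registered matching version, which is
// roughly the polymorphism $.fn.define() shoehorns into jQuery.
var behaviours = [];
function define(className, methods) {
  behaviours.push({ className: className, methods: methods });
}
function invoke(el, methodName) {
  // Last matching registration wins
  for (var i = behaviours.length - 1; i >= 0; i--) {
    var b = behaviours[i];
    if (el.classNames.indexOf(b.className) !== -1 && b.methods[methodName]) {
      return b.methods[methodName].call(el);
    }
  }
}

define("shy", { disappear: function() { return this.id + " slides up"; } });
define("fly", { disappear: function() { return this.id + " flies off"; } });

// Stand-ins for the two divs in the example markup
var shyGuy = { id: "a", classNames: ["shy", "guy"] };
var flyGuy = { id: "b", classNames: ["fly", "guy"] };
```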

Incidentally $.fn.define() could go further than I currently have it; it would be possible to set up an inheritance chain, where you could define $(“.guy”).disappear() explicitly, so that if an element matched “.guy” but not “.shy” or “.fly”, it would follow the “.guy” rule. (But not the other way round.) I probably won’t though.

Incidentally (2), thanks @agektmr for telling me about a mostly-Japanese Javascript pastebin similar to jsBin and jsFiddle.

Video Sync with WebSocket and Node

[Update 2015 – This was made on a very early version of NodeJS and may not work anymore; if you are looking for support to run this, please ask on StackOverflow.]

Wanting to explore WebSocket, I made a demo that syncs videos for all users looking at the page (See the Code). Any user can “take control”, so that the other videos will follow the controller’s video. Writing it was a cinch with the Javascript WebSocket API in the client, and the very straightforward Node WebSocket Server powering the server.

Note it’s just a simple demo – there’s no security model here and there’s a very simple sync mechanism: When a “slave” client receives the latest duration, it adds a second to account for lag, and moves the video iff there’s a 5-second discrepancy.
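That sync rule can be captured as a pure function. The helper name is mine; the one-second lag allowance and five-second threshold are the numbers from the demo:

```javascript
// Decide where (if anywhere) a slave video should seek, given its own
// current time and the controller's last reported time, in seconds.
function syncTarget(slaveTime, controllerTime) {
  var adjusted = controllerTime + 1; // add a second to account for lag
  if (Math.abs(slaveTime - adjusted) >= 5) {
    return adjusted; // drifted too far: seek to the adjusted position
  }
  return null; // close enough: leave playback alone
}
```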

How does a WebSocket client look? Well, it looks a lot simpler than a traditional Comet client. Initiate the connection with “new WebSocket”:


  var conn;
  $_("#join").onclick = function() {
    conn = new WebSocket("ws://localhost:8000/test");

Upload a message to the server with send():


  video.addEventListener("timeupdate", function() {
    if (iAmControlling()) conn.send(video.currentTime);
  }, true);

Receive a message from the server with conn.onmessage():


  conn.onmessage = function (ev) {
    if (matches = ev.data.match(/^pause (.+)$/)) {
      video.pause();
      video.currentTime = matches[1];
    }
    ...
  }

And when it’s time to draw the curtains, close the connection with another one-liner:


  $_("#leave").onclick = function() {
    conn.close();
    ...
  };

The server is also rather tiny. It mostly just broadcasts (sends to all clients) any incoming message and it’s up to the clients to deal with it. Which is why I say there’s no real security model here.

As you can see in the above snippets, “video” plays an excellent cameo role here, with its equally simple API. One downside, which others have mentioned too, is the lack of styling in the default video controls. It’s great the API supports automatic controls, but it’s unfortunate there’s no CSS manipulation. In this case, the “slave” browsers should be able to see the time, but not have the ability to change the progress. I hope this changes in the future, because the API is otherwise very powerful without compromising simplicity. Until then, libraries will arise to fill the gap.

The demo works in both Chrome and Safari (and they play nice simultaneously).

Using BrowserScope to Detect Geolocation Support

About Browserscope

Browserscope is a very cool tool coming out of Steve Souders’ performance efforts and being developed by Steve along with Lindsey Simon. It’s part of a trend towards crowdsourcing browser info, in similar vein to TestSwarm, for testing specific code, and along the lines of what I did with WebWait, which is to support multiple people benchmarking a website at once. (Though I’ve not set up reporting on it – one day…)

So anyway, Browserscope began life as a way to benchmark just performance, but is now a general-purpose tool for crowdsourcing tests about anything browsers do; checking for security vulnerabilities is one vital application. HTML5 does not miss out – HTML5 feature tests have been ported over too.

Geolocation Test

I decided to make my own test recently, to see to what extent Geolocation is being supported by the browsers. Run the test against your browser (it will upload only information about whether geo is supported, not your actual geolocation data).

The results so far indicate browsers either don’t support it, or support accuracy+lat+lon. I was hoping to see at least direction, and would <3 to see altitude! But it will have to wait.

How to Make your Own Browserscope Test

If you ever wanted to know how every browser under the sun handles a particular something, Browserscope is your new hotness. It makes it trivial. Register with Browserscope to get the boilerplate code. Then you just write an app that does the exercises and builds up a hashmap of the info you want to collect, and BAM, submit it. The hashmap should have values between 1 and 1000. Do that, and you’ll end up with a nice summary page showing your test results aggregated across all browsers that ran them.
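The test-building step, sketched for my geolocation case. All the key names are my own invention, and I record 1 for each supported field (remember, scores must sit between 1 and 1000):

```javascript
// Reduce a geolocation position object into the kind of flat hashmap
// Browserscope collects: one entry per coordinate field the browser
// actually populated.
function geoResults(position) {
  var results = {};
  var coords = position.coords || {};
  ["latitude", "longitude", "accuracy", "altitude", "heading", "speed"]
    .forEach(function(field) {
      if (coords[field] !== null && coords[field] !== undefined) {
        results[field] = 1; // 1 = supported
      }
    });
  return results;
}

// A fake position like the "accuracy+lat+lon only" results I'm seeing
var results = geoResults({
  coords: { latitude: 51.5, longitude: 0, accuracy: 100,
            altitude: null, heading: null, speed: null }
});
```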

The submission is taken care of by some boilerplate code – it uses on-demand Javascript to pass the data back to the third-party BrowserScope domain (you host the test yourself). Aside from writing the actual test and promoting its location, the only thing you have to do is make a nice UI to show the users what you just did, and perhaps get their consent before uploading. I think those are actually two things Browserscope could automate too, at least providing a nice default UI which test creators can customise.

Another HTML5 Tidbit: Web SQL Database

I coded up a small demo of Web SQL Database. As the demo points out, notice you can hit reload and the data remains, being that it’s one of the several techniques for offline storage.

There’s not much to say about Web SQL Database. Either you know SQL already, in which case it’s fairly straightforward – just a matter of squeezing your SQL calls into the conventions of the API, like any other SQL framework (lamentably, there’s never been any kind of standardisation on the wrappers that go around SQL) – or you don’t know SQL, in which case there’s your bottleneck if you intend to use Web SQL Database for offline storage.

Web SQL Database lies at the other end of the complexity spectrum from Web Storage (aka localStorage and sessionStorage), which is simply a big ol’ hashmap for your domain. The complexity is fairly manageable, though, as there’s literally a generation of SQL know-how out there, and hopefully ActiveRecord-like libraries on their way.
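The squeezing looks roughly like this. The openDatabase/transaction/executeSql calls are per the Web SQL spec; the “notes” table and the helper around it are made up for illustration:

```javascript
// Build a parameterised statement (keeps the call injection-safe).
// The "notes" table is hypothetical.
function insertNoteSql(text) {
  return { sql: "INSERT INTO notes (text) VALUES (?)", args: [text] };
}

// Guarded so this snippet is harmless outside a supporting browser
if (typeof openDatabase !== "undefined") {
  var db = openDatabase("demo", "1.0", "Demo DB", 1024 * 1024);
  db.transaction(function(tx) {
    tx.executeSql("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, text)");
    var stmt = insertNoteSql("hello");
    tx.executeSql(stmt.sql, stmt.args);
  });
}
```

Reload the page and the row is still there – that’s the offline storage part.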

IndexedDB promises to offer a third way – more like the hashmap approach, but with some of the indexing-based performance gains SQL has to offer. No browsers have implemented it yet, though you can build an experimental Windows version using a third-party Firebreath implementation.

Upcoming: UXCampEurope and SWDC

Google I/O is over and I’ll post a bit about the HTML5 hack session I ran later, but here I want to highlight a couple of upcoming sessions in Europe:

UXCampEurope. If this is anything like the Bay Area and London UX camps I’ve been fortunate to attend, it will be huge, and being Europe-wide and in Berlin, this expectation might just be met. I was sitting at the browser along with @mattlucht, hitting refresh 10 times a second when the tickets were released at the start of the year! I’ll set up a slot called “What did HTML5 ever do for users, anyway?”. I’m planning to overview some of the features of HTML5 and its evolution from Ajax, and ask how it might be used to improve UX. It’s a camp, so I’ll also be hoping to collect contributions and write them up somewhere.

SWDC. The first Scandinavian Web Developer Conference, organised by Peter Svensson, so I know it will be an awesome event, and the sessions speak for themselves. I’ve bumped into the guy twice in the past month – he travelled from Stockholm to DC for JSConf, and again to SF for I/O. My session is on the mobile day, called “HTML5 Gives You Wings”, focusing on HTML5 techniques for performance and app-like behaviour. Here’s the summary:

Welcome to the dynamic world of mobile development, where new browsers stay close to the edge and HTML5 is already a reality. Despite the impressive advances, many mobile apps are still bottlenecked by the network and compact processors continue to lag behind their desktop counterparts. So how can HTML5 help? This talk will focus on those features of HTML5 that are interesting for performance optimisation and the techniques for emulating native apps, such as offline data storage.

Offline Apps with Application Cache: Quickstart, Tips, and Deep Dive

I’ve been mucking with AppCache, aka ApplicationCache. It’s the secret sauce that lets you build offline apps, which is great for performance and fabulous for pretending to be an iPhone app when you’re not.

Quickstart an Offline App

As with most HTML5 technologies, the basic usage is quite simple:

  1. Create a trivial trio of HTML, CSS, and JS files, maybe with an image or two. (Or reuse an existing toy app.)
  2. Link to a “cache manifest” file from your HTML file’s html tag like so:
    1. <html manifest="clock.manifest">
  3. Make the manifest file. It’s just the literal phrase “CACHE MANIFEST” on the top line (I know, bit random), followed by the list of files in your app:

    CACHE MANIFEST
    clock.html
    clock.css
    clock.js
    clock.png

  4. Mark it as MIME type text/cache-manifest. If you’re SSHing to a virtual host (like I did with Dreamhost), all you need here is a one-line .htaccess file:
    AddType text/cache-manifest manifest

That’s it. Now those files above will be cached locally and won’t need to be downloaded again next time. To test it, load the site, shut down your net connection, and reload. It’s still there innit.

It works the same on your app-phone – on iPhone or Android (or others maybe), load the site, switch to airport mode, go back to the browser and reload … the site’s still there. Where it gets really fun is making it into a regular app:

  • On iPhone, bookmark the site with the Safari “+” button, choose “Add to Home Screen”, and the app will be sitting alongside all your other apps on the home screen.
  • On Android, bookmark the app in the browser. Hold your finger down somewhere empty on the home screen, choose add “Shortcuts” | “Bookmark”, and choose the bookmark you’ve made.

You could do the same with any website of course, but it’s particularly cool when the website is an offline one…it works just like a regular app!

Practical Tips and Troubleshooting

Purging the Cache

In practical terms, you are quickly going to come up the no. 1 problem of developing against a cache. How do you purge it whenever you update your code?

Simple. You only need to make an arbitrary change to the cache manifest file, so just include a version number in a comment:

  CACHE MANIFEST
  # version 3

  clock.html
  clock.css
  clock.js
  clock.png

Keep revving it up every time you change any file, because changing those files alone won’t have any effect. And if it’s getting serious, you’ll want to practice the principles of DRY and continuous integration by automating this process.

Ridding yourself of Errors

An unfortunate thing happens when there’s a downloading error: The whole thing silently fails. Until debugging tools get better, if you find your app’s not updating, you ought to check each of the resources specified in the manifest really, really, carefully.

Cached Resources

Cache manifests are not exempt from standard caching rules, so the manifest itself, as well as other files, might be cached. Pain. A good solution is to add the following Apache directives to .htaccess (the second line makes resources expire as soon as they’re accessed):

  ExpiresActive On
  ExpiresDefault "access"

That MIME Type

Things might fail if the Cache Manifest MIME type is wrong, so use your browser’s debugging tools or a service like Web Sniffer to check it (thanks CSS Ninja).

The Wording

The wording can also trip you up. Be really sure it’s exactly “CACHE MANIFEST” at the start of the file. Again, better debugging tools in the future will help here.

The Fancy Stuff

All of the above is enough to get up and running with an offline app, and maintaining it too. But wait, there’s complexity too!

Fine Control Over What Gets Cached

The cache manifest file actually allows three kinds of resources:

  • Online resources that can be cached (CACHED). As shown in the example above, it’s the stuff that’s available for caching.
  • “Fallback” resources in case a file isn’t cached (FALLBACK). These are like catch-all email addresses; any resource that’s not cached, is a candidate for a fallback. There’s a pattern-matching algorithm to map resources (which mightn’t have been cached) to other resources (which hopefully have been cached). For example, you probably wouldn’t cache every wikipedia article, so the fallback section would ensure a special “offline.html” page is shown for those missing articles.
  • Online-only resources (NETWORK). Resources which should never be cached, e.g. it would be a good idea not to cache stock prices if people were making million-dollar decisions on the back of what they see on your possibly-cached web application.

The syntax (adapted from the WHATWG example):


  CACHE MANIFEST

  # a comment
  # (version number unnecessary
  # but recommended for cache purposes)

  # the following "CACHE:" is optional;
  # a special case because top of file
  # is cache by default
  CACHE:
  images/sound-icon.png
  images/background.png
  # note that each file has to be put on its own line

  FALLBACK:
  / /sorry-i-am-offline.html

  NETWORK:
  /stocks/*

Checking if We’re Online

window.navigator.onLine … try it now in your console (and try it again with your connection off).
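If you want code that runs the same check everywhere, wrap it defensively. The helper name is mine:

```javascript
// Report connectivity via navigator.onLine where the API exists,
// optimistically assuming "online" everywhere else.
function isOnline() {
  return (typeof navigator !== "undefined" && "onLine" in navigator)
    ? navigator.onLine
    : true; // no browser API available: assume we're online
}
```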

Monitoring State and State Changes

What do you notice about all the app cache stuff above? It’s all declarative, no Javascript. But you can go on to do extra stuff with Javascript too, via the Application Cache API. (I am somewhat skeptical about the applications for this API, since simple and declarative is the way to go for something as potentially hairy as offline…but that’s for another rant.)

The window is effectively tied one-to-one to the cache object, so you can just access the cache with window.applicationCache. For example, you can get cache state with window.applicationCache.status – try it now in your console. It will probably be zero…the possible values range from 0 to 5, representing: UNCACHED, IDLE, CHECKING, DOWNLOADING, UPDATEREADY, OBSOLETE. (These constants are also available on the cache object, i.e. window.applicationCache.UNCACHED is 0.)
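Decoding those numbers is a one-liner lookup (helper name is mine; the state list is from the spec):

```javascript
// Map an applicationCache.status value (0-5) to its constant's name.
var CACHE_STATES = ["UNCACHED", "IDLE", "CHECKING",
                    "DOWNLOADING", "UPDATEREADY", "OBSOLETE"];
function cacheStatusName(status) {
  return CACHE_STATES[status] || "UNKNOWN";
}
```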

There are also event handlers to monitor event state: onchecking, onerror, onnoupdate, ondownloading, onprogress, onupdateready, oncached, onobsolete.

Forcing Updates

applicationCache.update() will force an update, i.e. start downloading the application again if the manifest has changed. However, it won’t suddenly switch over after downloading. To do so would be to pull the rug from under your live, running, application. It will be there next time. But if you do want it right away, use applicationCache.swapCache().

Offline Mobile Apps: It’s More than Application Cache

Application cache certainly is a critical feature of mobile apps; it’s a basic user expectation that a mobile app starts right away, unlike a complex web app which must be downloaded. However, there are other things too. For example, offline storage. That’s another feature of HTML5, comes in many incarnations, and is something for another day. The other really big deal is the UI. Of course, you need to simulate certain UI features of the native apps if you want your web app to go native. Hence, libraries like IUI, jQTouch, and Apple’s secret PastryKit. On the input side, your UI must also handle touch events, something some of the aforementioned brand of libraries will support. Finally, there will be specific features of the platform that can help; for example, setting the home screen icon and going full-screen so the browser chrome doesn’t show up.