The Weirdness of Ajax Redirects: Some Workarounds

I’ve been dealing with the situation where your server redirects after a DELETE, which is actually a fairly common use case: the user deletes a blog post, and you redirect them to the index of all posts. The article below has some Rails workarounds, but it’s relevant to anyone using XHR and perhaps HTML5 History, i.e. many single-page web apps.

It’s important stuff to know if you’re HTML5ing in earnest, because a recent change to Chrome makes it standards-compliant, but fundamentally different from other browsers, and also makes it handle DELETE and other verbs differently from how it handles POST and GET.

DELETE redirection

This redirection becomes troublesome if you want to handle it in a way that seamlessly supports Ajax calls as well as standard web requests. It’s troublesome because of a lethal combination of two browser facts.

Firstly, XHR automatically follows redirects. Your JavaScript has no way to jump in and veto or alter any redirect that comes from the server. XHR call fires off to server, server responds with 302 status code and a location header, the browser automatically issues a new request to the specified location.

Secondly, a DELETE request, when it’s redirected, will actually cause another DELETE request to the new location. Yes, if you’re redirecting to the index, congratulations … your server now receives a request to DELETE the fraking index. You lose!!! So the very normal thing to do for a non-Ajax app suddenly becomes an epic tragedy. It’s completely unintuitive, but it’s actually the standard! I learned as much by filing an erroneous Chrome bug. Turns out Chrome’s unintuitive behaviour is actually correct and Firefox and Opera were wrong. Even more confusing, I was seeing POST requests being converted to GETs, and it turns out this is also part of the 302 standard – POSTs can be converted to GETs, but not DELETEs (or PUTs etc I assume).
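To see exactly how this plays out, here’s a minimal sketch in plain XHR (the URLs are invented for illustration):

javascript
  // Delete a post via Ajax
  var xhr = new XMLHttpRequest();
  xhr.open("DELETE", "/posts/42", true);
  xhr.send();
  // The server replies "302 Found" with "Location: /posts". The browser follows
  // the redirect itself (your JavaScript never sees the 302) and, per the spec,
  // re-issues the same verb: DELETE /posts. A 303 response avoids this, because
  // 303 explicitly means "make a GET request to the Location".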

Just now, I made a little workaround which seems to be working nicely in my case. In my ApplicationController, my app’s own subclass of ActionController::Base, I simply convert any 302 response (the default) into a 303 for redirect_to calls. I should perhaps do this for XHR calls only, so that normal requests stay “pure”, but in practice it makes no real difference.

  class ApplicationController < ActionController::Base

    ...

    # see src at https://github.com/rails/rails/blob/master/actionpack/lib/action_controller/metal/redirecting.rb
    def redirect_to(options = {}, response_status = {})
      super(options, response_status)
      self.status = 303 if self.status == 302
    end

  end

Finding the new location

While I’m blogging this, let me tell you about another related thing that happens with redirects (this section applies to any type of request – GET, POST, DELETE, whatever). As I said above, XHR will automatically follow redirects. The problem is, how do you know where the ultimate response actually came from? XHR will only tell you where the initial request went; it refuses to reveal what it did afterwards. It won’t even tell you whether there was any redirect. Knowing the ultimate location is important for HTML5 apps using the History API, because you may want to pushState() to that location. The solution is to have the web service output its own location in a header. It’s weird that you have to, but it gets the job done.

  class ApplicationController < ActionController::Base
    ...
    before_filter proc { |controller| controller.response.headers['x-url'] = controller.request.fullpath }

  end
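On the client, consuming that header is straightforward. A minimal sketch (assuming the x-url header set above and a browser with the History API):

javascript
  var xhr = new XMLHttpRequest();
  xhr.open("DELETE", "/posts/42", true);
  xhr.onload = function() {
    // The final URL after any redirects, as reported by the server itself
    var finalUrl = xhr.getResponseHeader("x-url");
    if (finalUrl && window.history && history.pushState) {
      history.pushState(null, "", finalUrl);
    }
  };
  xhr.send();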

WebWorkers, Cross-Domain JSONP, and CoffeeScript

Today’s buzzword gotcha is brought to you by WebWorkers, Cross-Domain JSONP, and CoffeeScript. TL;DR: Always use “self” for WebWorker “globals”. And a library.

My Twitter app’s locking the main thread a bit, so I thought I could quickly push the polling functionality into a WebWorker. Not so fast …

Firstly, WebWorkers don’t do JSONP the normal way. You can’t inject a script because there’s no DOM. This means you can’t even use standalone JSONP libraries, let alone jQuery (which doesn’t even run in a WebWorker, as it assumes DOM presence, and would be a waste of bandwidth anyway).

Fortunately, we can use importScripts(). Conveniently enough, importScripts() can be used anytime and accepts any old string. So the API would actually be useful in the main thread too, but I digress …

So in Plain Old JavaScript, you would do:

javascript
  function go() { doSomething(); }
  importScripts("api.twitter.com?callback=go");

Works just fine. But this won’t work in the CoffeeScript equivalent, i.e.:

  go = -> doSomething()  # NOT CALLED
  importScripts "api.twitter.com?callback=go"

In fact, Chrome gives a mysterious error “Uncaught undefined”, while Firefox cuts to the chase: “go is not defined”.

The reason is that the CoffeeScript compiler wraps the entire thing in a closure. This is tremendously useful default behaviour, as you don’t have to worry about accidentally adding new globals, but it can cause problems in situations like this. If the script comes back from Twitter and calls “go()”, where does that call actually end up? In the main thread, a CoffeeScript script can explicitly bust properties out of the closure using the global namespace, i.e. “window.foo” (or “global.foo” in the case of a server-side Node script). There’s a solution to this with WebWorkers too: they have their own global identity, “self”.

  self.go = -> doSomething()  # FTFY
  importScripts "api.twitter.com?callback=go"

Works.

Be sure to use “self” for other WebWorker “globals” too, i.e.:

  self.onmessage = (ev) -> doSomething()

Here’s a tiny jsonp library to support this:
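The gist of it is something along these lines (a sketch with invented names, rather than the library itself): register a uniquely named callback on self, then pull the script in with importScripts().

javascript
  var jsonpCounter = 0;
  function jsonp(url, onData) {
    var callbackName = "jsonpCallback" + (jsonpCounter++);
    self[callbackName] = function(data) {
      delete self[callbackName];
      onData(data);
    };
    var separator = url.indexOf("?") === -1 ? "?" : "&";
    // importScripts() is synchronous, so the callback fires before this returns
    importScripts(url + separator + "callback=" + callbackName);
  }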

So you can say:
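With the sketch above, that would look roughly like this (the endpoint is just illustrative):

javascript
  // Inside the worker: poll Twitter without any DOM or script tags
  jsonp("http://api.twitter.com/1/statuses/public_timeline.json", function(tweets) {
    postMessage(tweets.length + " new tweets");
  });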

“A” is for Asynchronous, Again.

Here’s an infographic I made for my QCon talk this week, Single Page Apps and the Future of History:

I think it sums up a key point about Ajax, which is sometimes overlooked: Asynchrony. One of my first posts on Ajax, back in 2005, covered it, but it still took years to really sink in. At the time, I was thinking of “Asynchronous” mostly as XHR’s asynchronous mode, seeing it in contrast to running XHR synchronously. But really, the big deal about Ajax is that the UI can carry on while networking happens in the background. That’s what makes filling out forms faster and what makes tools like Google Docs possible, especially in multi-user mode.

Ten Reasons Why IE6 Development is Significantly Better in 2010 than 2001 (But Still Painful)

This post mentions IE6 everywhere, but it’s not about IE6 at all. In my standard rant on the three ages of web development, a standard sub-rant is our improved ability to develop on IE6:

Although it was created back in 2001, we could do far more with it in 2004 (e.g. Google Maps) than we could in 2001, and we can do far more with it in 2010 than we could in 2004.

I cite the 3,286-day-old application as an example because it’s the one browser that’s still around and still must be targeted by a significant number of developers (despite the best intentions of even MS employees to “drive a stake into IE6’s heart”, and thankfully, the number is starting to dwindle). It’s the 120-year-old who’s witnessed the transition from horse-and-cart transportation to men on the moon, and is still here today to speculate on the eagerly awaited ChatRoulette relaunch. We could ask how effective Netscape Navigator development is in 2010, but since no-one actually does that, any answers would be hypothetical. IE6 is something many people worked with in the pre-Ajax era and something some people still work with post-Ajax, so it’s a specimen that can be used to show how Ajax changed the world.

(You could do this kind of analysis for any platform, e.g. the design pattern phenomenon means that you would approach a C++ app very differently today than you would have in 1987, even with the same language and compiler.)

In the Ajax era, we made like engineers and got smart about how to do the best we could with the laws of physics our universe imposed on us. (Versus the new HTML5 era, where we build a better universe.) I decided to write this post to be a bit more explicit about why IE6 development is significantly better today than when IE6 came out. And by “better”, I mean faster and more powerful, even if the development cycle remains slow, frustrating, and not very powerful compared to modern browsers.

With no further ado, IE6 development is better today than in 2001 because …

10. Libraries If only we had jQuery in 2001. If only …

9. Knowledge JavaScript was allegedly the world’s most misunderstood language in 2001. Now it’s well-understood, good parts and bad. And so is the DOM. An abundance of blogs, forums, books, and conferences.

8. Patterns We not only know how Javascript and the DOM work, we have idioms, patterns, and best practices for putting it into action.

7. Performance IE6 is orders of magnitude slower than today’s browsers, so managing performance is certainly important, and these days, we know much more about how to make web apps run faster, thanks to the efforts of Steve Souders et al.

6. Tools Tools from MS and others have come along over the years to improve the development experience. Firebug or Webkit Devtools they are not, but at least IE6 debugging is more than just alert boxes these days.

5. Developers, Developers, Developers Finding savvy web developers these days is vastly easier than it was pre-Ajax, when front-end development was a black art.

4. Show Me the Source We now have plenty of examples to learn from, both online, where View Source is your friend, and in the open source code repositories of the world.

3. Browser Mechanics We understand the quirks of each browser much better now, and that certainly includes the many well-documented quirks of IE6 and how to deal with them.

2. Attitude Those examples also overcome the psychological barrier, a key impediment to IE6 development. In the absence of any decent examples, it was easier for developers to shrug their shoulders and say it’s all too hard. If that sounds too mystical, see 4 minute mile. “Ajax” is more than a set of technologies, it’s also the recognition that these kinds of apps are possible.

1. Development is Just Easier These Days. No matter what kind of development you’re doing, development is just easier in 2010. Blogs, Forums, Twitter, Code Repositories, etc. have all improved, and that benefits IE6 development as much as any other form of development, though there’s a law of diminishing returns here.

IE6 development is still a tough effort and thankfully becoming less of a requirement as people shift to modern browsers, or at least IE8. You know a browser is on its way out when “because employees are less likely to use Facebook” is one of its most compelling advantages. The Ajax era had a profound effect on the way we develop and what we know to be possible, but that era is mostly over, and that law of diminishing returns is kicking in hard. Hence, we make a new, better, platform to keep progressing what’s possible.

Will the rush of HTML5 work make IE6 development in 2020 even more doubleplusgood than in 2010? Not really. There will be a few odd improvements, but overall, HTML5 is about improving the platform, which means developers will increasingly be focused on doing things IE6 will thankfully never even try to do. The HTML5 world subsumes the Ajax world, so you get the double benefit – all of these ten Ajax benefits multiplied by all of the new features of the platform.

CORS, Scraping, and Microformats

Jump straight to the demo.

Cross-Origin Resource Sharing makes it possible to do arbitrary calls from a web page to any server, if the server consents. It’s a typical HTML5 play: we could do similar things before, but only with hacks like JSONP. Cross-Origin Resource Sharing lets us achieve more and do it cleanly. (The same could be said of Canvas/SVG vs drawing with CSS; WebSocket vs XHR-powered Comet; WebWorker vs yielding with setTimeout; round corners vs 27 different workarounds; and we could go on.)

This has been available for a couple of years now, but I don’t see people using it. Well, I haven’t checked, but I don’t get the impression many sites are offering their content to external websites, despite social media consultants urging them to be “part of the conversation”. It’s like when people make a gorgeous iPhone app, but their website doesn’t work at all on the same phone (cough fashionhouse). Likewise, if you’ve got a public API but aren’t providing JSONP/callback support, it’s not very useful either…making developers host their own cross-domain proxy is tedious. It’s cool there are services like YQL and Embed.ly for some cases, but wouldn’t it be better if web pages could just pull in all that external content directly?

Except in this case, it’s just not happening. Everyone’s offering APIs, but no-one’s sharing their content through the web itself. At this point, I should remind you I haven’t actually tested my assumption, and maybe everyone is serving their public content with “Access-Control-Allow-Origin: *” … but based on the lack of conversation, I am guessing in the negative. The state of the universe does need further investigation.

Anyway, what’s cool about this is you can treat the web as an API. The Web is my API. “Scraping a web page” may sound dirtier than “consuming a web service”, but it’s the cleaner approach in principle. A website sitting in your browser is a perfectly human-readable depiction of a resource your program can get hold of, so it’s an API that’s self-documenting. The best kind of API. But a whole HTML document is a lot to chew on, so we need to make sure it’s structured nicely, and that’s where microformats come in, gloriously defining lightweight standards for declaring info in your web page. There’s another HTML5 tie-in here, because we now have a similar concept in the standard, microdata.

So here’s my demo.

I went to my homepage at mahemoff.com, which is spewed out by a PHP script. I added the following line to the top of the PHP file:

  <?
    header("Access-Control-Allow-Origin: *");
    ... // the rest of my script
  ?>

Now any web page can pull down “http://mahemoff.com/” with a cross-domain XMLHttpRequest. This is fine for a public web page, but something you should be very careful about if the content is (a) not public; or (b) public but dependent on who’s viewing it, because XHR now has a “withCredentials” field that will cause cookies to be passed if it’s on. A malicious third-party script could create XHR, set withCredentials to true, and access your site with the user’s full credentials. Same situation as we’ve always had with JSONP, which should also only be used for public data, but now we can be more nuanced (e.g. you can allow trusted sites to do this kind of thing).
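For completeness, the more nuanced, credentialled case looks something like this from the client side (a sketch, not part of the demo):

javascript
  var xhr = new XMLHttpRequest();
  xhr.open("get", "http://example.com/private-feed", true);
  xhr.withCredentials = true;  // cookies ride along with the request
  xhr.onload = function() { /* handle the personalised response */ };
  xhr.send();
  // For this to succeed, the server must respond with a specific trusted origin
  // in Access-Control-Allow-Origin (not "*"), plus Access-Control-Allow-Credentials: true.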

On to the client …

I started out doing a standard XHR, for sanity’s sake.

javascript
  var xhr = new XMLHttpRequest();
  xhr.open("get", "message.html", true);
  xhr.onload = function() { // instead of onreadystatechange
    if (xhr.readyState==4 && xhr.status==200)
      document.querySelector("#sameDomain").innerHTML = xhr.responseText;
  };
  xhr.send(null);

Then it gets interesting. The web app makes a cross-domain call using the following facade, which I adapted from a snippet in the veritable Nick Zakas’s CORS article:

javascript
  function get(url, onload) {
    var xhr = new XMLHttpRequest();
    if ("withCredentials" in xhr) {
      xhr.open("get", url, true);
    } else if (typeof XDomainRequest != "undefined") {
      xhr = new XDomainRequest();
      xhr.open("get", url);
    } else {
      xhr = null;
    }
    if (xhr) {
      xhr.onload = function() { onload(xhr); };
      xhr.send();
    }
    return xhr;
  }

This gives us a cross-domain XHR for any browser that supports the concept, and it makes the request the usual way. The request works against my site, but not yours, because of the header I set earlier on my site. Now I can dump that external content in a div:

javascript
  get("http://mahemoff.com/", function(xhr) {
    document.querySelector("#crossDomain").innerHTML = xhr.responseText;
    ...

(This would be a monumentally thick thing to do if you didn’t trust the source, as it could contain script tags with malicious content, or a phishing form. Normally, you’d want to sanitise or parse the content first. In any event, I’m only showing the whole thing here for demo purposes.)

Now comes the fun part: Parsing the content that came back from an external domain. It so happens that I have embedded hCard microformat content at http://mahemoff.com. It’s in the expandable business card you see on the top-left:

And the hCard content looks like this:

  <div id="card" class="vcard">
    <div class="fn">Michael&nbsp;Mahemoff</div>
    <img class="photo" src="http://mahemoff.com/speak2.jpg"></img>
    <div class="role">"I like to make the web better and sushi"</div>
    <div class="adr">London, UK</div>
    <div class="geo">
      <abbr class="latitude" title="51.32">51&deg;32&#39;N</abbr>,
      <abbr class="longitude" title="0">0&deg;</abbr>
    </div>
    <div class="email">[email protected]</div>
    <div class="vcardlinks">
      <a rel="me" class="url" href="http://mahemoff.com">homepage</a>
      <a rel="me" class="url" href="http://twitter.com/mahemoff">twitter</a>
      <a rel="me" class="url" href="http://plancast.com/mahemoff">plancast</a>
    </div>
  </div>

It’s based on the hCard microformat, which really just tells you what to call your CSS classes…I told you microformats were lightweight! The whole idea of the card comes from Paul Downey’s genius Hardboiled hCards project.

Anyway, the bottom line is we’ve just extracted some content with hCard data in it, so it should be easy to parse it in a standard way and make sense of the content. So I start looking for an hCard Javascript library and find one; that’s the beauty of standards. Even better, it’s called Sumo and it comes from Dan Webb.

The hCard library expects a DOM element containing the hCard(s), so I pluck that from the content I’ve just inserted on the page, and pass that to the library. Then it’s a matter of using the “hCard” object to render a custom UI:

javascript
  var hcard = HCard.discover(document.querySelector("#crossDomain"))[0];
  var latlong = new google.maps.LatLng(parseFloat(hcard.geo.latitude), parseFloat(hcard.geo.longitude));
  var markerImage = new google.maps.MarkerImage(hcard.photoList[0], null, null, null, new google.maps.Size(40, 40));
  var infoWindow = new google.maps.InfoWindow({content: "<a href='"+hcard.urlList[0]+"'>"+hcard.fn+"</a>", pixelOffset: new google.maps.Size(0,-20)});
  ...

And I also dump the entire hCard for demo purposes, using James Padolsey’s PrettyPrint.

javascript
  document.querySelector("#hcardInfo").appendChild(prettyPrint(hcard));

There’s lots more fun to be had with the Web as an API. According to the microformats blog, 2 million web pages now have embedded hCards. Offer that content to the HTML5 mashers of the world and they will make beautiful things.

Joining Google

A few weeks ago, I mentioned I’m moving on from Osmosoft, and after 12 days and a week of doing much less than one could ever have imagined, it’s time to announce where I’m going: Google. I’ll be joining Google as a Chrome developer advocate, which means focusing on HTML5, Javascript, the Chrome browser, ChromeOS, and related technologies.

I’m very excited about the role, for reasons best explained in terms of a timeline for the web. I see the history of web technologies divided into three phases, and I believe we’re currently in the transition from the second to the third phase, which is the starting context for this role.

Dark Ages (Web as a Black Art)

Rich web apps were hard work. Black art even. During this phase, I spent most of my time on server-side languages – Perl, PHP, Java/J2EE – and Javascript was a weird sideline. Taking an interest in the user experience, I always wanted to know more about this black art, since it’s the web technologies that touch the user and have the most direct influence over their experience with any application. But what paid the bills in the late ’90s and early noughties was form submissions and page refreshes. And most Javascript content was odd little ticker tape scripts you could download, not too much to learn from, and always difficult to achieve in one’s spare time. I think this was the frustrating experience for many web developers: not enough time and hard work. I did get by and pick up a few tricks here and there, but things only took off when the A-word was unleashed …

Ajax Era (Web as an Established Practice)

Everything changed five years ago, on February 18, 2005. Jesse James Garrett hopped in the shower, came up with the term “Ajax”, and published his thoughts on this rising architectural pattern. Many people bristle at the notion that a simple term, for something people were already doing, can make such an impact. “BUT I already knew how to do that!” they cry. Good for them. But the fact is, Ajax allowed the whole developer community to rally around this style of application. Events, blogs, books, libraries, tools, the whole thing flourished. I got seriously interested, recorded a podcast explaining the topic, and started diving into all the existing applications, which ultimately led to the Ajax Patterns wiki. All those applications, and many featured in the book, were created in the Dark Ages by brilliant pioneers who had scant resources to learn from. Today, such applications can be created at exponentially faster rates. This is thanks to improved knowledge, improved libraries, and improved tools.

There are historical analogies to this era, e.g. the rapid rise of the automobile industry once all the parts were in place. In that case, it was more a matter of discovering new physical and engineering principles, whereas with Ajax, it was a superficial act. The browsers were already there, with a killer feature like IE’s XMLHttpRequest sitting largely unnoticed for five years. Few people had taken the time to really understand how they worked. It was the Ajax term that triggered the boom.

HTML5 Era (Web on an Improved Platform)

(I hesitate to call this the era of HTML5, as many things happening are outside HTML5, notably improvements to CSS and the core Javascript/ECMAScript language. But HTML5 is increasingly becoming a brand, much like Ajax in its heyday, a term that’s familiar to anyone with a passing interest in technology.)

The starting point for this era is the success of open web technologies. While we continue to learn more each day, we have the basics in place today. We know how to harness HTML, CSS, and Javascript to make powerful libraries and applications on the existing browsers. This is where the Ajax Era landed us. We have seen some genius hacks. In fact, we have seen MANY ingenious hacks. On-Demand Javascript and JSONP, for example, are completely fundamental to the way business is done on the web these days, and they are utterly hacks. Full credit to those who came up with them, but these hacks can only go so far. They can increase programming complexity, hurt performance, and cause security threats for those not fully versed in their capabilities. And that’s just for the things we can do. The bigger problem is that we are heavily limited in our functionality with Ajax-era technologies. We can’t do rich graphics or video, for example. So the next stage is all about improving the underlying platform. It’s not something an application programmer, however heroic, can do on their own. It’s something that has to involve improvements from the platform providers, i.e. the browser manufacturers and others.

To draw a line in the sand, take IE6. Although it was created back in 2001, we could do far more with it in 2004 (e.g. Google Maps) than we could in 2001, and we can do far more with it in 2010 than we could in 2004. This is because of all the knowledge, libraries, and tools the Ajax Era has provided. However, we’re in serious plateau mode now. We can continue to squeeze out 1-percenters, but the low-hanging fruit was plucked and digested a long time ago. The next stage has to be an improvement to the underlying platform, and that’s what’s going on today with modern web browsers.

Once you say “okay we’re now going to build the next-gen web platform”, you’re not just going to add support for round corners. So there’s been a truckload of effort going on, in HTML5 and beyond. Graphics in the form of canvas and SVG and WebGL and CSS3, media in the form of audio and video tags, semantic markup improvements, new form controls, offline storage, support for location-awareness. And much more.

We have the new platform, and alongside it, we have the activity of understanding how to use the new platform. Not just understanding the parts, but the whole. How do you put all these things together in meaningful ways? Can we show custom-font text on a canvas? If so, how? And when does it make sense? And how do you avoid performance problems? Or, to take another example, what kind of CSS transforms can we use to render videos? And since we can grab image data from the videos, what are some cool things we can do with that data? And all the while, where’s the graceful degradation story here? How do these apps look in older browsers? Most of the talk so far has been on the capabilities of the platform, not patterns of usage.

All these new capabilities take us to the place we’ve been aiming for all along: true applications deluxe, not just “html++”. Of course, there’s plenty of room for both of these, but we’re now in a place where we can really build true applications with all the capabilities of a traditional native experience. We’re seeing it in a big way in mobile space. The new model is all about sandboxing, doing away with the filesystem, and explicit permissions. Exactly what the web’s good for. They’re not only a good fit technically, but they’ve become the ubiquitous lingua franca among the programming community. The technologies are familiar, as are the patterns of UI design and code composition. Those patterns will continue to evolve with HTML5 and associated technologies.

We’re just scratching the surface here; there are exciting times ahead!


I’m not an official Googler yet, nor even a Noogler, as I’ve taken time off to chill, smell the roses, and pursue a few projects (like dusting off the homepage and … watch this space. Actually, watch that space). I’m starting in about a month. And of course, even when I am official, this blog continues to be my own opinions, caveat emptor yadda.

The role covers EMEA and I’m still based in London. Despite not starting just yet, I’m looking forward to participating in tonight’s Chrome/Chromium OS and HTML5 London event. See you there!

(I’m not the only one to announce a move to Google at this time. I’m looking forward to meeting and working with many of my talented future colleagues.)

Not Your Grandpa’s Framesets: Premasagar Rose shows us IFrame 2.0!

usual live blogging caveats – spelling errors, messy, etc etc

@premasagar is visiting the Osmoplex today (thanks @jayfresh for arranging it) and is taking us through his work on iframes, widgets, and sandboxing. I’ve realised we could perhaps be collaborating as my jquery-iframe plugin is so close to his. Different emphases, but much overlap.

GitHub is where you can see what he’s been working on. Basically, this guy is a guru on all things iFrame. In particular, all the quirks around squirting dynamic content into iFrames, as opposed to pointing them elsewhere using “src”.

QUOTE OF THE DAY

“sqwidget is my tiddly”

BACKGROUND – THE EMBEDDING PROBLEM

In patternesque speak, the basic problem is:

Problem: You want to embed 3rd party content into another site.

Forces:

  • You want the 3rd party content to have its own style
  • BUT it will inherit style from the parent page

SOLUTION 1

IFrame

Works great (in fact, I’m using it in my TiddlyWiki playground app, to be documented at some point, and it’s similarly used in Jon Lister’s TiddlyTemplating).

The problem is you sometimes want the widget to jump out of the iframe, e.g. a lightboxed video. So …

SOLUTION 2

CleanSlateCSS

Basically a CSS reset, but whereas CSS resets will only handle browser defaults, CleanSlate blats everything! This is exactly the kind of thing I was looking for when I was trying to cancel out TiddlyWiki styling. In that case, I was flipping the entire page back and forth, so I could just cheat by removing stylesheets and re-adding them. (Prem pointed out there’s a “disabled” attribute on style tags – true, so I should really use this instead, assuming it’s portable, which he thinks it is.)
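For the record, that stylesheet-toggling trick looks something like this:

javascript
  // Switch a stylesheet off and on without removing it from the document
  var sheet = document.getElementsByTagName("style")[0];
  sheet.disabled = true;   // its rules stop applying
  // ... flip the page ...
  sheet.disabled = false;  // and they come back, no re-parsing needed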

Problems:

  • Difficult to maintain the CleanSlate library, because new CSS stuff and browser quirks keep coming up
  • IE6 and IE7 don’t support “inherit”, so need a CSS expression
  • When using Javascript to interact with CSS style properties, e.g. slideDown(), these will override CleanSlate. The solution is to set the “style” attribute with !important, but it becomes an arms race!
  • Doesn’t solve the iFrame security model

SOLUTION 3

Inject (aka squirt, inject; summary) content into a fresh iframe.

The content comes from the widget site. The sqwidget library injects it. This resolves the tension with wanting independent CSS on the same page. If the sqwidget library is running on the host page, it could even (potentially) lock down capabilities, i.e. do Caja-style sanitisation.
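The core injection trick, stripped of everything Sqwidget layers on top, is roughly this:

javascript
  // Create a fresh iframe and write third-party markup straight into it,
  // so the widget's CSS and the host page's CSS can't interfere with each other
  function injectWidget(container, html) {
    var iframe = document.createElement("iframe");
    iframe.frameBorder = 0;
    container.appendChild(iframe);  // must be attached before we can write to it
    var doc = iframe.contentWindow.document;
    doc.open();
    doc.write("<!doctype html><html><body>" + html + "</body></html>");
    doc.close();
    return iframe;
  }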

Sqwidget also does templating, using Resig’s micro-templating. (That thing’s getting to be very popular; I’m using it myself in Scrumptious via the UnderScoreJS library after @fnd gave us a talk about them.)

Also, Prem is playing around with the idea of a div with a custom (data-*) attribute pointing to the third-party URL. You could put “now loading” inside it, and then the script tag will pick those things up and load them.
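Something along these lines, reusing the injectWidget() sketch above (the attribute name is invented):

javascript
  // Find the placeholder divs and swap their "now loading" text for the widget
  var placeholders = document.querySelectorAll("div[data-widget-url]");
  for (var i = 0; i < placeholders.length; i++) {
    var div = placeholders[i];
    var url = div.getAttribute("data-widget-url");
    div.innerHTML = "";  // clear the "now loading" message
    injectWidget(div, "<p>Widget content fetched from " + url + " goes here</p>");
  }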


Various points: