Lessons in Javascript Performance Optimisation: 90 seconds down to 3 seconds

Summary: Avoid $$(".classname") on large DOMs!!!

I’ve recently been optimising the guts out of a JS webapp I wrote, which was making IE crawl to a halt. I discovered this after introducing a stress-inducing data set. (Using Rails’ fixtures makes light work of this; since the fixtures are Ruby templates just like the web templates, it’s easy to introduce a loop to create lots of data.)

With a rather large data set (100+ items, each with several fields), IE would take about 90 seconds to churn through the initial script before the user could do anything. Firefox would run the same thing in about 8 seconds – still too long for a web page, but incredibly about ten times as fast as IE. I want to avoid pagination at this stage, so the first priority was to tweak performance and see if we could keep everything on the same page.

After some sophisticated profiling ((new Date()).getTime() :D), the main culprit was revealed to be Prototype's $$. It's a fantastic function, but if you try to grab all elements belonging to a certain class, and the DOM is really big, $$(".cssClassName") can be slow. REALLY SLOW in IE. Remedy:

  • Removed trivial usages of $$() – e.g. in one case, the script was using it as a simple shorthand for a couple of DOM elements, and it was easy enough to hardcode the array. i.e. $$(".instruction") becomes [$("initialInstruction"), $("finalInstruction")]. The former notation is cuter, but unfortunately impractical on a large web page.
  • Introduced the unofficial selector addon. It seems to have improved performance of more complex queries, e.g. $$("#knownId .message"), but doesn't seem to have affected the performance of $$(".classname").
  • Finally, I bit the bullet and scrapped $$(".classname") altogether. It's more work, but the script now maintains the list of elements manually. Whenever an element is added or removed, the array must be adjusted. Furthermore, even the initialisation avoids using $$(), thanks to some server-side generated JS that explicitly declares the initial list of elements belonging to the class (i.e. the list that would normally be returned by $$()). To do this, the following function is called from onload(), generated with RHTML.

  function findAllItems() {
  <% js_array = @items.map { |item| "document.getElementById('item#{item.id}'),"}.join
        js_array = js_array[0..-2] if @items.length > 0  # Drop trailing comma -%>
      return [<%= js_array %>];
  }

The last step explicitly identifies all items in the class, removing the need to discover them by traversing the DOM. I wasn’t really sure how much time it would save – after all, you still have to look the elements up in the DOM and assign them to the array. But when I tried it, the savings were supreme – on IE, from around 45 seconds to about 2 seconds.
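The manual bookkeeping this approach requires amounts to something like the following sketch (the names here are hypothetical, not from the actual app):

```javascript
// Rough sketch of maintaining the element list by hand instead of
// re-querying the DOM with $$(".item"). The array is seeded on load
// from the server-generated findAllItems().
var itemRegistry = {
  items: [],

  add: function (el) {
    this.items.push(el);
  },

  remove: function (el) {
    var i = this.items.indexOf(el);
    if (i !== -1) this.items.splice(i, 1);
  }
};
```

Every code path that inserts or deletes an item element must also call add()/remove() – that's the extra work mentioned above – but it turns each subsequent lookup into a plain array read instead of a DOM traversal.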

I have also incorporated Dean Edwards’ superb onload replacement to get the ball rolling before images are loaded. It’s a neat trick and takes 5 minutes to refactor it in.
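For the record, the "sophisticated profiling" mentioned at the top is nothing more than bracketing suspect code with timestamps – a minimal sketch:

```javascript
// Poor-man's profiler: time a function with (new Date()).getTime().
// In 2006 the output would have gone to an alert() or a debug div;
// console.log is the modern equivalent.
function timeIt(label, fn) {
  var start = (new Date()).getTime();
  fn();
  var elapsed = (new Date()).getTime() - start;
  console.log(label + ": " + elapsed + "ms");
  return elapsed;
}
```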

Ajax Functionality and Usability Patterns – Podcast 4 of 4: Functionality Patterns

This is the fourth and final podcast in the series on Ajax functionality and usability patterns (Book: Part 4, pp 327-530). This 54-minute podcast covers seven Functionality patterns (Book: Chapter 17, pp 473-530).

Dedicated to the Nitobians, whose last podcast inspired me to crank another one out again. Recent events suggest it may cost me $5000 to appear on their podcast again, and as Andre points out in this podcast, the same applies for them appearing on my podcast. Thus, my simple proposal would be:

  1. Each of us appears on the other's podcast, at $5000 each. Actually, let's make that $50k each.
  2. Cancel the debt
  3. Now each of us can claim our podcast attracts guests who pay $50k to appear. Enough to cover headsets ($20), bandwidth ($10/month with Libsyn), and assorted beverages (name your price).
  4. Profit!!!

Soon I’ll be publishing the final podcast in the overall series, which has already been recorded, and then I’ll be taking it in a more general direction akin to the topics on this blog – talking about agile, programming (Java/Rails/etc), usability, Web 2.0, as well as Ajax and the coming revolution of real-time webapps. If you have a Skype account and you’d like to join me sometime, drop us an email ([email protected]). Also feel free to suggest any topics that would be good to cover.

Documentation Needs Examples (Duh)

I’m constantly amazed at the amount of documentation people are inclined to create without including a single example.

Man pages that devote pages’ worth of text to command-line options, flags, grammar, caveats, and historical anecdotes – and NOT A SINGLE EXAMPLE.

Textbooks that devote pages to a particular API, then exercise it all in one monolithic program.

Countless reference documents on HTML tags and CSS grammar – and NOT A SINGLE EXAMPLE.

In a world where free videos make it stupidly obvious how to kickstart your lawnmowing experience, and watching a screencast precedes the creation of “Hello World” in any language you care to adopt, let’s get it straight: An example says a thousand words.

“Ajax Design Patterns” – Book of the Month

Ajax Design Patterns is Book of the Month in this month’s .Net mag (p.23, Issue 155, October, 2006). Incidentally, the mag is about the ‘Net, not specifically MS .Net (which it pre-dates).


The review says:

So AJAX might be the hottest thing in programming since, er, ordinary Javascript, but it’s no good just learning how to implement it – you need design inspiration too. Ajax Design Patterns fits the literary void that exists in AJAX design by using real examples of best practice to enhance your apps.



I’m glad they emphasise the use of real examples. We can debate ad infinitum about whether everything in the book is a pattern or not, but the more important thing is that the examples are real, concrete, and as accessible as typing a URL into your browser.

Thankfully, Ajax Design Patterns is one of the most organised books on any programming subject. It’s a massive book, but you won’t get lost as the chapters are sensibly divided up and the sound layout means there’s nothing whatsoever to fear.


I’ve had a lot to say about presentation of patterns in the past. The fairly unusual presentation of the patterns is the reason it’s not an O’Reilly animal book, and it’s good to see it helped.

Odeo: Engineering Against Customer Loyalty

GigaOM discusses “How Odeo Screwed Up”. Odeo is a service I want to like. I promoted it to others when it came out and I frequently use it as an example of the Richer Plugin pattern as it uses an effective combination of Flash and Ajax.

However, I had to stop using Odeo six months ago, due to an astounding oversight in their central architecture for doing what they are meant to do best: manage podcasts. The problem is simply that Odeo places all podcasts into a single RSS feed. You can probably imagine the consequences.

The feed inevitably grows and grows, and suddenly you have 5000+ multimedia items for your podcatcher to fetch and sync. iTunes simply gives up. Juice/iPodder hangs for about 30 minutes and might then start updating if the stars align with the moon. Under some circumstances – like after a (likely) crash leading to a corrupt record of what you’re downloading – Juice will start downloading all 5000 podcasts.

So the architecture is kind of flawed from the get-go. A single ever-growing feed. What tools does Odeo offer to tame it? Well, you can manually chop the beanstalk down one podcast at a time. This used to be rather difficult because of overzealous use of a Yellow Fade Effect, which meant you could only delete an item every two seconds or so. Now, Odeo offers a checkbox-driven interface, but you must still manually click each checkbox, there are only 25 checkboxes per page, and there are no keyboard shortcuts in Firefox AFAICT. So it’s still impractical. And the beanstalk keeps growing at one end as you slowly chop it down at the other.

What’s blatantly obvious is that Odeo needs an auto-delete feature, e.g. delete items older than 1/3/7/30 days, or keep a maximum of 10/100/1000 podcasts in your feed. It’s such an obvious thing, it’s almost breathtaking that it doesn’t exist. I keep double-checking as I write this, but the fact is that I’ve previously mailed support about it and there’s no such feature. I don’t really understand what’s going on, as it’s hardly a niche request; it’s something that affects every user.
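To show how small a feature this is, here's a hypothetical sketch of such a pruning policy – this is not Odeo's actual code, just an illustration of the rule described above:

```javascript
// Hypothetical auto-delete policy: keep only items newer than
// maxAgeDays, and at most maxItems of them (newest first).
// `items` is an array of {date: <ms timestamp>} objects; `now` is
// passed in so the policy is easy to test.
function pruneFeed(items, maxAgeDays, maxItems, now) {
  var cutoff = now - maxAgeDays * 24 * 60 * 60 * 1000;
  return items
    .filter(function (item) { return item.date >= cutoff; })
    .sort(function (a, b) { return b.date - a.date; })
    .slice(0, maxItems);
}
```

Run server-side over each user's feed once a day, something like this would keep any podcatcher happy.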

Odeo will work fine for new subscribers, but as soon as you’ve been subscribed for a few months, it’s impossible to use. I mean impossible! As I say, I like the website a lot and I wish there was a way to get it working. But it’s simply not possible. Way to encourage loyalty! I can understand when this happens with a small startup, but Odeo is high-profile, VC-funded, and continues to roll out completely orthogonal products like Twitter, Hellodeo, and podcast recording. Meanwhile, their core feature has remained impossible to use for twelve months!

The GigaOM article points to an interview which illuminates how these problems have come about. (Note: updated the list numbering from the original article.)

Williams went through a tidy list of the top five Odeo screw-ups:

  1. “Trying to build too much” – Odeo set out to be a podcasting company with no focus beyond that.
  2. “Not building for people like ourselves” – For example, Williams doesn’t podcast himself, and he says as a result the company’s web-based recording tools were too simplistic.

The first point highlights that Odeo might be better off looking at why subscribers like me have stopped using it. I realise they are probably building services like Twitter to produce a better revenue stream, but why throw away core users?

The second point makes me wonder how many Odeo staffers actually use Odeo at all, let alone to create podcasts. Like I say, Odeo has an unusual property for a website, in that it virtually forces you to give up after using it for several months. Maybe the internal staffers rely on cron-powered SQL delete commands to flush their feeds, but there appears to be no solution for the rest of us.

I want to use Odeo again. Please let me know if anyone has a solution to the Amazing Indestructible Odeo Feed That Knows No Satiety.

The Uncanny Valley of Programming Languages

Coding Horror mentions AppleScript’s well-intentioned attempt to feel like English. Quoting John Gruber:

The idea was, and I suppose still is, that AppleScript’s English-like facade frees you from worrying about computer-science-y jargon like classes and objects and properties and commands, and allows you to just say what you mean and have it just work.

But saying what you mean, in English, almost never “just works” and compiles successfully as AppleScript, and so to be productive you still have to understand all of the ways that AppleScript actually works. But this is difficult, because the language syntax is optimized for English-likeness, rather than being optimized for making it clear just what the f**k is actually going on.

This is why Python and JavaScript, two other scripting languages of roughly the same vintage as AppleScript, are not only better languages than AppleScript, but are easier than AppleScript, even though neither is very English-like at all. Python and JavaScript’s syntaxes are much more abstract than AppleScript’s, but they are also more obvious. (Python, in particular, celebrates obviousness.)

There’s a lot to be said for the Uncanny Valley theory:

The Uncanny Valley is an unproven hypothesis of robotics concerning the emotional response of humans to robots and other non-human entities. It was introduced by Japanese roboticist Masahiro Mori in 1970. It states that as a robot is made more humanlike in its appearance and motion, the emotional response from a human being to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes strongly repulsive. However, as the appearance and motion continue to become less distinguishable from a human being’s, the emotional response becomes positive once more and approaches human-human empathy levels.

Just as human emotions (according to that hypothesis) respond in a U-shaped curve, so does performance with human-computer interfaces. When we can come up with a near-perfect human-language processor, it will probably have some great applications for humankind, allowing “non-programmers” to instruct computers in far more complex ways than they can right now with the most advanced end-user programming interfaces around today (Excel and Second Life).

Until that time, attempts to make programming languages human-friendly, however well-intentioned, fall deeper into the valley. It’s like old-school Visual C++ codegen – sounds great at first, but as soon as you get one tricky use case (inevitable), it all falls down.

A more promising approach is Domain-Specific Languages (DSLs), which talk in terms familiar to the “end-user programmer” (we need a better name for that actor) but, by accepting critical constraints, are actually workable.