Lessons in Javascript Performance Optimisation: 90 seconds down to 3 seconds

Summary: Avoid $$(".classname") on large DOMs!

I’ve recently been optimising the guts out of a JS webapp I wrote, which was grinding IE to a halt. I discovered this after introducing a stress-inducing data set. (Using Rails’ fixtures makes light work of this: since fixtures are Ruby templates, just like the web templates, it’s easy to introduce a loop that creates lots of data.)

With a rather large data set (100+ items, each with several fields), IE would take about 90 seconds to churn through the initial script before the user could do anything. Firefox would run the same thing in about 8 seconds, still too long for a web page, but incredibly about ten times as fast as IE. I want to avoid pagination at this stage, so the first priority was to tweak performance and see if everything could stay on the same page.

After some sophisticated profiling ((new Date()).getTime() :D), the main culprit was revealed to be Prototype’s $$(). It’s a fantastic function, but if you try to grab all elements belonging to a certain class and the DOM is really big, $$(".cssClassName") can be slow. REALLY slow in IE. Remedy:

  • Removed trivial usages of $$() – e.g. in one case the script was using it as simple shorthand for a couple of DOM elements, and it was easy enough to hardcode the array: $$(".instruction") becomes [$("initialInstruction"), $("finalInstruction")]. The former notation is cuter, but unfortunately impractical on a large web page.
  • Introduced the unofficial selector addon. It seems to have improved performance of more complex queries, e.g. $$("#knownId .message"), but doesn’t seem to have affected the performance of $$(".classname").
  • Finally, I bit the bullet and scrapped $$(".classname") altogether. It’s more work, but the script now maintains the list of elements manually: whenever an element is added or removed, the array must be adjusted. Furthermore, even initialisation avoids $$(), thanks to some server-side generated JS that explicitly declares the initial list of elements belonging to the class (i.e. the list that $$() would normally return). To do this, the following function, generated with RHTML, is called from onload().


    function findAllItems() {
    <% js_array = @items.map { |item| "document.getElementById('item#{item.id}')," }.join
       js_array = js_array[0..-2] if @items.length > 0 # Drop extra comma at end -%>
      return [<%= js_array %>];
    }
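For concreteness, here is the kind of JS the browser ends up receiving once the ERB has run. This is a hand-written illustration assuming three items with hypothetical ids 1, 2 and 3, not actual output from the app:

```javascript
// What the template above would emit for three items – an illustration,
// assuming the items have ids 1, 2 and 3:
function findAllItems() {
  return [
    document.getElementById('item1'),
    document.getElementById('item2'),
    document.getElementById('item3')
  ];
}
```

The function body is pure lookups, so the cost is three getElementById calls rather than a walk over every node in the document.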

The last step explicitly identifies all items in the class, removing the need to discover them by traversing the DOM. I wasn’t really sure how much time it would save – after all, you still have to look the elements up in the DOM and assign them to the array. But when I tried it, the savings were supreme – on IE, from around 45 seconds to about 2 seconds.
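The manual bookkeeping from the third remedy above is simple enough to sketch. The names here (itemRegistry, add, remove) are mine, not the original script’s, and plain objects can stand in for the DOM elements:

```javascript
// Illustrative sketch of maintaining the class-member list by hand instead of
// re-querying with $$(".classname"). Names are hypothetical, not from the app.
var itemRegistry = {
  items: [],                // the array $$(".classname") would otherwise rebuild
  add: function (el) {
    this.items.push(el);    // call whenever an element joins the class
  },
  remove: function (el) {
    var i = this.items.indexOf(el);
    if (i !== -1) this.items.splice(i, 1);  // call whenever one leaves
  }
};
```

Reads then use itemRegistry.items directly – a cheap array access in place of a full DOM traversal.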

I have also incorporated Dean Edwards’ superb onload replacement to get the ball rolling before images are loaded. It’s a neat trick and takes 5 minutes to refactor it in.

10 thoughts on Lessons in Javascript Performance Optimisation: 90 seconds down to 3 seconds

  1. Marcos, thanks for mentioning it – I did work through that blog post as well. One consequence was to unroll a function call: the same one-line function was being called 100 times, so I got rid of it. It’s a shame you still have to resort to measures like this, but I don’t see a serious HotSpot-style JIT coming any time soon.

  2. Pingback: Ajaxian » Lessons in JavaScript Performance Optimization

  3. Michael, I commented on the Ajaxian post but thought I’d also mention it here:

    Please look into document.getElementsByClassName. It’s orders of magnitude faster, especially with the recent performance optimizations (including querying by XPath if it’s available). $$(".foo") can be replaced with document.getElementsByClassName("foo") in every instance. I guarantee that’ll fix your problems without forcing you to give up querying by class name altogether.

    That said, making $$ less costly is one of the major points of focus for Prototype’s 1.5.0 release.

  4. Andrew, thanks for the suggestion. I didn’t look into getElementsByClassName as I assumed Prototype or the faster add-on would be making that optimisation in the event of a single class being specified ($$(".classname")).

    I’ve subsequently tried it, but it turns out to be not much faster on IE (perhaps a 2x speedup, whereas the server-generated JS gives something like a 20x speedup).

    Maybe XPath gives a better speedup on IE.

    I’m tempted to create a benchmarking exploration tool on ajaxify – a massive DOM and a JS sketchpad to benchmark queries.
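    The harness such a sketchpad needs is just the wall-clock technique mentioned in the post. A minimal sketch (timeIt is a hypothetical name; timer resolution on older browsers is only around 10–15 ms, which is fine for multi-second work):

```javascript
// Minimal wall-clock profiler of the (new Date()).getTime() variety.
function timeIt(fn) {
  var start = (new Date()).getTime();
  fn();                                    // the query being benchmarked
  return (new Date()).getTime() - start;   // elapsed milliseconds
}
```

    e.g. comparing timeIt(function () { findAllItems(); }) against timeIt(function () { $$(".classname"); }) gives the numbers directly.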

  5. Pingback: Lessons in JavaScript Performance Optimization > Archives > Web 2.0 Stores

  6. Michael, which version of Prototype are you using? The performance optimizations I referred to were added about two weeks ago, so you’d have to be using bleeding-edge Prototype to reap the benefits. We’re on the verge of a 1.5 release, though, so if you want to stay on the stable branch you won’t have long to wait.

    Unfortunately, if you saw only a 100% improvement going from $$ to document.getElementsByClassName, I’m not sure the further optimizations would be significant enough. In browsers that support XPath, document.getElementsByClassName is now just as fast as (if not faster than) any other DOM query, but IE is not one of those browsers. Until IE supports DOM Level 3 XPath, there won’t be a no-drawbacks way to query the DOM in non-“standard” ways.

    In the meantime, you can improve the performance of getElementsByClassName considerably if you restrict the query to a certain subset of the page: $('container').getElementsByClassName('foo') [or document.getElementsByClassName('foo', $('container'))]. This will cut down on the number of elements that must be searched through.
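    The effect is easy to model: a class-name search visits every descendant of its root, so a smaller root means proportionally fewer nodes visited. The sketch below is illustrative only – plain objects stand in for DOM nodes, and this is not Prototype’s implementation:

```javascript
// Walks a node tree collecting elements whose className matches, counting
// how many nodes are visited along the way – the work that scoping avoids.
function getByClassName(root, name, stats) {
  var found = [];
  (root.children || []).forEach(function (el) {
    stats.visited++;                           // one unit of traversal work
    if (el.className === name) found.push(el);
    found = found.concat(getByClassName(el, name, stats));
  });
  return found;
}
```

    Running it from a small container instead of the whole page returns the same matches while visiting far fewer nodes.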

  7. Pingback: Tech Links

  8. Pingback: Zen Thoughts : Lessons in Javascript Performance Optimisation: 90 seconds down to 3 seconds
