Ajax as a Remedy for the Cacheability-Personalization Dilemma

A pattern for your consideration: using Ajax to help pages stay RESTful.

Problem

How to personalize content and make pages cacheable and bookmarkable at the same time?

Forces

  • We want pages to have clean URLs that describe the main content being viewed. Doing so makes pages easily bookmarkable and send-to-friend-able, and also allows us to cache the page anywhere along the way. For example, viewing info about Fight Club should be http://example.com/fightclub and not http://example.com/fightclub/user-mahemoff or http://example.com/fightclub/287490270-1931321-cijE12ZSz.
  • We want to personalize pages – say Hi to the user, show them personalized recommendations, etc.
  • If we personalize but use the same URL for all users, we break REST – the URL no longer identifies a single representation – so the content can't be cached at all. My http://example.com/fightclub is different to your http://example.com/fightclub because we each see our own recommendations inside the page.
  • But if we use different URLs for personalization, we can't cache across users, and pages aren't send-to-friend-able. If I look up at the address bar and see http://example.com/fightclub/user-mahemoff, I'm probably not going to bother sending you the URL. Furthermore, my view of the page can't be cached.

Solution

Create pages generically (the same version for all users), and in this generic version embed a remoting call that customizes the page for the current user. Serve http://example.com/fightclub to everyone. Then each user's browser makes a further call to grab the custom content (Multi-Stage Download). This additional call is unRESTful, as the server uses cookies to decide what content to return, but at least we've isolated that component, kept the bulk of the content cacheable, and given the user something they can bookmark and send to their friends.
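As a minimal sketch of the idea – the fragment endpoint (/fragments/personal) and the placeholder element are illustrative assumptions, not part of the pattern – the generic page might contain something like:

    <!-- Served identically to every user at http://example.com/fightclub,
         so any cache along the way can keep a single copy. -->
    <div id="personal">Loading your recommendations...</div>
    <script type="text/javascript">
      // Hypothetical fragment endpoint: the server reads the session
      // cookie and returns a small, user-specific chunk of HTML.
      // (Older IE needs an ActiveXObject fallback, omitted here.)
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "/fragments/personal", true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          document.getElementById("personal").innerHTML = xhr.responseText;
        }
      };
      xhr.send(null);
    </script>

Only the fragment response varies per user; the page itself remains a single cacheable resource.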

11 thoughts on “Ajax as a Remedy for the Cacheability-Personalization Dilemma”

  1. Pingback: Ajaxian » Ajax as a Remedy for the Cacheability-Personalization Dilemma

  2. I think this would be a useful technique in only special situations. It does accomplish what you want, but it requires multiple downloads and will make a portion of your page inaccessible to those who have disabled JavaScript (from what I have heard, that is 10% of the internet population).

    Plus, I am dubious of the savings. The reason for caching is so that a web browser doesn't have to contact the website at all – it can just retrieve the content from its cache. But if it has to retrieve a portion of the content anyway, you still have to make an HTTP request. You might as well make that response a bit bigger and get rid of the multiple requests and more complex code.

    Sounds to me like this is going a little overboard on caching. Some pages are just not designed for caching. If that is the case, then implement your application to use the “If-Modified-Since” header. That way the user still makes their request but gets back a small response in most cases.
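    For concreteness, a conditional GET under “If-Modified-Since” looks roughly like this (resource and timestamp are illustrative):

        GET /fightclub HTTP/1.1
        Host: example.com
        If-Modified-Since: Mon, 17 Jul 2006 10:00:00 GMT

        HTTP/1.1 304 Not Modified

    The round trip still happens, but the 304 response carries no body, so the browser reuses its cached copy.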

    I think this is premature optimization.

  3. Eric, I think the “caching” to which Michael refers is server-side, so that pages can be returned faster by the application. Your website would write a pregenerated version of a dynamic page and serve that up instead of hitting the database and running all the logic on every request. Websites that use this kind of caching can be a lot faster.

  4. We used this technique on a project (not yet launched) to edge-cache HTML pages and add in all the dynamic data – ratings, preferences, etc. – via one or two Ajax requests, with the data delivered as JSON (a sample payload is sketched below). Through CSS we could also reuse page fragments (resources with their own URLs) in different contexts. This was for a high-traffic site, where optimization like this translates into a significant reduction in application-server load, and real dollar savings.
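    A JSON payload from, say, a hypothetical GET /api/page-data?page=fightclub might look like this (field names invented for illustration):

        {
          "user": "mahemoff",
          "rating": 4,
          "recommendations": [
            { "title": "Memento", "url": "/memento" },
            { "title": "Se7en", "url": "/se7en" }
          ]
        }

    The client script then slots these values into the already-cached page skeleton.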

  5. This is a general principle: separate the cacheable content from the dynamic content. That's one of the reasons to favor separate CSS files over inline CSS (see the snippet below). Same thing for JS content.

    There is definitely a trade-off though: the first page load is going to take longer, and the UI might get modified as the personalized bits get downloaded.
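    For instance, referencing the static pieces externally keeps them cacheable across every page (file names are placeholders):

        <link rel="stylesheet" type="text/css" href="/styles/site.css" />
        <script type="text/javascript" src="/scripts/site.js"></script>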

  6. Eric, it wasn't clear from the article, but as Mike pointed out, I'm talking more about caching on the server and also along the network.

    Still, browser-side caching is good too – if you have the bulk of the page immediately, you can already start interacting while the personal stuff loads up.

    I believe this pattern works well for systems where the majority of users *don't* get a personalised service – CMSs, for example. This blog shows me an “Edit” link when I'm logged in. 99.99% of page views don't need that, but thanks to me, the content must be personalised. Those 99.99% of hits would impose no further load, because the JS would simply detect no cookie (sketched below). It degrades gracefully too, because the direct “Edit” link and personalised details aren't required functionality.
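    A sketch of that check – the cookie name, endpoint, and element id are all hypothetical:

        // Anonymous visitors carry no login cookie, so they make no
        // extra request; the page works fine without the fragment.
        if (/(^|; )logged_in=/.test(document.cookie)) {
          var xhr = new XMLHttpRequest();
          xhr.open("GET", "/fragments/edit-links", true);
          xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
              document.getElementById("edit-links").innerHTML = xhr.responseText;
            }
          };
          xhr.send(null);
        }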

    Sam, thanks for the info. I haven't come across any high-profile sites using this pattern (they either make URLs unique or, more often, serve different content for the same URL), so it's great to hear it's being used in practice. A perfect example of the benefits of writing up patterns even if they're speculative!

    Julien, good point – you could look at this pattern as a special case of unobtrusive JS.

  7. Pingback: SitePoint Blogs » Jul 17, 2006 News Wire

  8. Pingback: Testing The Web Dot Com » Blog Archive » Ajax as a Remedy for the Cacheability-Personalization Dilemma

  9. Pingback: MondoBlog » Blog » Fast AJAX Links Collection - Num 5

  10. Pingback: WordPress “Edit This” Links via Ajax
