Two-Way Web: Can You Stream In Both Directions?

Update (a couple of hours later): I mailed Alex Russell (the guy who named Comet and knows plenty about it), and it sounds like he's been investigating this whole area; he's sent me his views.

We know about Comet (AKA Push, HTTP Streaming) and its ability to keep streaming information from server to browser. How about streaming upwards, from browser to server, and preferably over the same connection? A reader mailed me this query:

I'm missing one demo: would it be possible to reuse the same stream in the streaming demos to send messages to the server? I've been digging through your examples, but they all seem to create a new connection to the server when posting. It would be very interesting to see a demo that does this within the same stream, and of course the server code would be as interesting as the client.

Here's my thinking. I'm sure a lot of smart readers will know more about this, and I'll be interested in your views: is it feasible? Are there any online demos?

Unfortunately, I've not seen anyone pull this off – it's always assumed you need a separate "back channel". It's the kind of hack someone like Google or 37signals would turn around and pull off even though it's "obviously impossible" 😉.

There are two key issues:

(1) The server needs to start outputting the response before the incoming request has finished. With a specialised server, this problem could be overcome.
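To make that concrete, here's a minimal sketch of what such a specialised server might look like. It uses Node's net module purely as a stand-in (the port and the echo format are illustrative choices, not anything from the original discussion): the point is simply that it begins writing the response while the request body is still arriving, which stock web servers won't normally do.

```javascript
// Sketch: a "specialised server" that answers before the request ends.
// Written against Node's net module as a stand-in; port 8080 and the
// chunk format below are illustrative choices.
var net = require("net");

net.createServer(function(socket) {
  var headersSent = false;
  socket.on("data", function(chunk) {
    if (!headersSent) {
      // Begin the response immediately, before the request has finished.
      socket.write("HTTP/1.1 200 OK\r\n" +
                   "Content-Type: text/plain\r\n" +
                   "Transfer-Encoding: chunked\r\n\r\n");
      headersSent = true;
    }
    // Echo each uploaded chunk straight back down the same connection,
    // framed as an HTTP chunk (hex size, CRLF, data, CRLF).
    var body = "got " + chunk.length + " bytes\n";
    socket.write(body.length.toString(16) + "\r\n" + body + "\r\n");
  });
}).listen(8080);
```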

(2) (More serious, as we can't control the browser.) The browser would need to upload data in a continuous stream. You can do it with Flash or Java, but I can't see how to do it with standard JS/HTML. If you use XHR, you call send() and wave goodbye to the entire request – there's no support for sequencing it. The same goes for submitting a regular form, changing an IFrame's source, etc. Even if you could somehow delay the reading of content so it's not immediately uploaded, the browser would probably end up not sending anything at all, as it would be waiting to fill up a packet.
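To make the XHR limitation concrete, here's a minimal sketch (the /channel endpoint is a made-up name): send() hands over the whole body in one shot, and there is no call for appending more data to an in-flight request.

```javascript
// Sketch: why standard XHR can't stream upwards.
// "/channel" is a hypothetical endpoint.
var xhr = new XMLHttpRequest();
xhr.open("POST", "/channel", true);

// The entire request body must be handed over in one shot.
xhr.send("first message");

// There is no xhr.write()/xhr.append() to push more data onto the
// same request; the only option is a brand new request:
var xhr2 = new XMLHttpRequest();
xhr2.open("POST", "/channel", true);
xhr2.send("second message"); // a new request, and possibly a new connection
```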

I think the solution lies in HTTP's Keep-Alive mechanism – persistent connections, which became the default behaviour in HTTP/1.1:

What is Keep-Alive?

The Keep-Alive extension to HTTP, as defined by the HTTP/1.1 draft, allows persistent connections. These long-lived HTTP sessions allow multiple requests to be sent over the same TCP connection, and in some cases have been shown to result in an almost 50% speedup in latency times for HTML documents with lots of images.

If you google for "xmlhttprequest keep-alive" or "ajax keep-alive", you'll see people talking about the idea a bit, but there's not much info on how to script it for continuous connections, and no demos to be found. It would make a great experiment if someone did a proof-of-concept!
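In that spirit, here's what a crude proof-of-concept probe might look like – a sketch assuming a hypothetical /ping endpoint that returns a tiny response. Nothing in the script can force connection reuse; it only measures. If the browser and server do negotiate keep-alive, requests after the first should skip TCP connection setup and come back noticeably faster.

```javascript
// Rough proof-of-concept: fire small sequential requests and time them.
// If keep-alive kicks in, round-trips after the first should be faster.
// "/ping" is a hypothetical endpoint returning a tiny response; the
// timestamp parameter is a cache-buster so the browser hits the network.
function probe(remaining, timings) {
  var start = new Date().getTime();
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/ping?t=" + start, true);
  xhr.onreadystatechange = function() {
    if (xhr.readyState == 4) {
      timings.push(new Date().getTime() - start);
      if (remaining > 1) {
        probe(remaining - 1, timings); // next request, same server
      } else {
        alert("Round-trip times (ms): " + timings.join(", "));
      }
    }
  };
  xhr.send(null);
}

probe(10, []);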

As an alternative, you could consider a thin, invisible Flash layer to handle transport, and degrade to frequent Submission Throttling where Flash isn't an option.
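For reference, Submission Throttling can be as simple as the following sketch (the buffer, the /upstream endpoint, and the two-second interval are all illustrative choices): outgoing messages accumulate on the client and get flushed to the server on a timer, trading a true upward stream for frequent small requests.

```javascript
// Minimal Submission Throttling sketch. "/upstream" and the 2-second
// interval are illustrative, not part of any spec.
var buffer = [];

function queueMessage(msg) {
  buffer.push(msg); // called by the app whenever there's data to send
}

setInterval(function() {
  if (buffer.length == 0) return; // nothing to upload this round
  var payload = buffer.join("\n");
  buffer = [];
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/upstream", true);
  xhr.send(payload);
}, 2000);
```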


BTW I have a post and podcast planned about the whole two-way web thing, which will be profound (the two-way web thing, not the podcast :-)). The web is entering a new era of Real-Time Collaboration and Communication, post-Ajax (and of course building on Ajax, just as Ajax builds on the technologies of the previous era: CGI, DHTML, CSS, etc).

Update: As mentioned above, Alex Russell mailed me his views. In particular, it’s interesting to consider the possibility that browsers might transparently exploit keep-alive if you hit the server frequently enough.

So I've spent some time investigating this (as you might expect), and at the end of the day there's not much to be done aside from using Flash and its XMLSocket interface. That's an obvious possibility given the high-performance Flash communication infrastructure we have in Dojo. Doing bi-directional HTTP probably won't happen, but I don't think that's cause for despair. In my tests, we can get really good (relative) performance out of distinct HTTP requests so long as the content of the request is kept to a minimum and the server can process the connection fast enough. HTTP keepalive exists at a level somewhat below what's currently exposed to browsers, so if the client and server support it, frequent requests through stock XHR objects may very well be using it anyway. We'll have to do some significant testing to determine which combinations of servers and clients do this, however.

There are even more exotic approaches available from Flash peering that I've been investigating as well, but they would require infrastructure so different from what we already deploy that I think they're still in the land of "hrm…someday".

First we have to solve the *regular* Comet scalability problems for existing servers and app containers.


PS: we haven't been making much noise about it, but serious work has started on an Open Source Comet protocol, with initial implementations in both Perl and Python. The initial client library is Dojo-based, but we'll be publishing the protocol so that anyone can "play" with it.

Ajax as a Remedy for the Cacheability-Personalization Dilemma

A pattern for your consideration, about using Ajax to help keep pages RESTful.


How can we personalize content while keeping pages cacheable and bookmarkable at the same time?


  • We want pages to have clean URLs that describe the main content being viewed. Doing so makes pages easily bookmarkable and send-to-friend-able, and also allows us to cache the page anywhere along the way. For example, viewing info about Fight Club should be something like example.com/movies/fight-club, not a query-string or session-encrusted URL like example.com/movies?id=1234&session=5678.
  • We want to personalize pages – say Hi to the user, show them personalized recommendations, etc.
  • If we personalize, but use the same URL for all users, we break REST and therefore won't be able to cache any content. My copy of example.com/movies/fight-club is different to your copy, because we each see our own recommendations inside the page.
  • But if we use different URLs for personalization, we can't cache across users, and pages aren't send-to-friend-able. If I look up the movie and land on something like example.com/movies/fight-club?user=9876, I'm probably not going to bother sending you the URL. Furthermore, my view of the page can't be cached for anyone else's benefit.


Create pages generically (the same version for all users), and in this generic version, embed a remoting call which will customize the page for the current user. Serve this same generic page to everyone. Then everyone's browser makes a further call to grab the custom content (Multi-Stage Download). This additional call is unRESTful, as the server will use cookies to decide what content to return, but at least we've isolated that component, served the bulk of the content without breaking caching, and given the user something they can bookmark and send to their friends.
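As a sketch of how that remoting call might look (the /personal-fragment endpoint and the element ID are hypothetical names): the generic, cacheable page ships with an empty placeholder, and the browser fills it in after load.

```javascript
// Sketch of the personalization call. The generic, cacheable page ships
// with an empty placeholder: <div id="recommendations"></div>.
// "/personal-fragment" is a hypothetical cookie-authenticated endpoint.
window.onload = function() {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/personal-fragment", true);
  xhr.onreadystatechange = function() {
    if (xhr.readyState == 4 && xhr.status == 200) {
      // Only this fragment varies per user; the page around it was
      // served identically to everyone, possibly straight from a cache.
      document.getElementById("recommendations").innerHTML =
          xhr.responseText;
    }
  };
  xhr.send(null);
};
```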


The Switch to Linux Begins?

A couple of high-profile bloggers (via Dion) make the switch from Apple to Linux, and O'Reilly Radar wonders if it's the start of a trend. While I prefer working with Apple, I'm nonetheless an Ubuntu fan, so I certainly hope this trend picks up. However, I wonder if people know what they're getting into. In both of the Switch articles, by Cory Doctorow and Mark Pilgrim, the authors focus on the reasons for moving away (more hardware bang for your buck, using mostly open-source software anyway, various misgivings with Apple software), but neither actually explains what sort of experience they've had with Ubuntu and how they'll cope with the issues that inevitably arise. Okay, so these are both insanely smart guys who can deal with it, but if others follow, they might be sorry.

The fact is, Ubuntu takes Linux a step closer to the user-friendly desktop it should be, BUT it's still a far cry from the ease you can expect from an Apple. I've been using Linux for 13 years, and if there's one golden rule that's always applied, it's this: At least one thing will always fail. It could be wifi, it could be X, it will probably be power management on a laptop, it could be running Skype at the same time as iTunes. Whatever it is, it will require a decision: Do I spend 2+ hours trawling for solutions, ultimately recompiling the kernel on the dubious assumption that it will resolve the issue and not break anything else in the process, or do I just live with the pain? As a student, the answer was often the former; in the real world, it's inevitably the latter. Even with a modern, fairly Linux-friendly laptop (Toshiba Satellite Pro), Ubuntu ~5.05 still led to the aforementioned audio and power management issues, and most Ubuntu switchers are likely to come across similar issues.

Furthermore, though Mark Pilgrim complains about iTunes (as have I) and uses mostly open-source stuff available on Linux, there's still a lot of software missing from the modern Linux desktop. You will suffer with inferior, incompatible versions of Real and Flash; household apps like Skype will trail even further behind than they do on Apple; and you will end up with clumsy – if well-intentioned – impersonations of the finely polished apps you use every day (yep, such as iTunes; I'm not even going to mention the Gimp). In addition, more specialised software will be much harder to come by. For instance, I recently needed some screencasting software, and while the options on the Mac aren't great, they're certainly more appealing than under Linux, where there are so many possible hardware combinations it may not work anyway. If there is useful Linux software that fits a niche, there's a very good chance it runs under Apple too.

I love working on OS X due to the underlying command line, but I'm no Cult of Mac guy. There are a lot of silly things about the Mac, like hanging on to one-button touchpads, resizing windows from only one corner, etc. Some might see them as cute eccentricities; some may say I don't get the zen of Apple; whatever. All I can see is that these "features" are pretty much a 22-year-old joke, though nothing I can't live with. In addition, DarwinPorts and Fink aren't perfect; I've never got gnome-terminal working with fonts I can actually see. Furthermore, Apple support sucks, in my experience. I recently suffered a pathetic support incident involving around five prolonged calls to an offshore call centre, no resolution, and it will now require some correspondence with the legal department. So I'm all for a revival of Linux among the uber-geeks. I'm just saying: I hope you know what you're getting into. Cory says he'll be blogging the experience, which will be interesting to watch.

Installing Linux? Some Tips For Switching From Apple To Linux

Here are some tips if you’re thinking of switching to Linux:

  • Go with Ubuntu. Sorry, no choice here if you're new to Linux. Ubuntu right now is the clear choice for a standard Linux desktop setup: the best hardware compatibility (apparently due to its networked feedback facility), a good support wiki, the power of apt-get (which beats RPM hands-down), and the most important thing: a Live distro (next point).
  • Try the Live distro first. (The killer app of Ubuntu is that it supports both live and installed Linux.) Run the live distro, see how it handles your hardware, kick the tyres a bit to pinpoint the things that don't work (see above – there will always be at least one thing that doesn't work), and decide whether you can live with that.
  • Go for an Express installation. When you proceed to install, it's easy to go control-freak and spend hours setting things up. The problem with that is you often have to do a reinstall for one reason or another. Modern Linux systems make it easy enough to change settings later on, as well as install new software, so there's no need to do it all upfront.
  • Buy the right hardware. So many people encounter problems with Linux because they're using the wrong hardware, often hardware that is notoriously bad on Linux. Whenever you buy a laptop, a card, etc. that you intend to run Linux on, do your homework first, and note that manufacturers hardly ever advertise that they're Linux-compatible (as they're probably worried they'd be obliged to support it). Google is your friend. Your friend running Linux is your friend. If you're willing to pay for it, MacBooks are an appealing choice for running Linux on. They're reasonably priced, since Apple is now aiming for the mass market, and they're very standardised, which is a huge benefit when it comes to Linux. Isn't it ironic?

Ajax Programming Patterns – Podcast 4 of 4: Performance Optimisation Patterns

The fourth and final podcast in this series of Ajax Programming Patterns. As always, the patterns are online at the Ajax Patterns wiki and covered in the book too, now available at Amazon. This 33-minute podcast covers seven patterns of Performance Optimisation.

(Note that the last two are recent additions to the wiki and just stubs at this stage.)

Okay, here endeth the series. I will soon be starting up a new series on the next group of patterns (Part 5 in the book): Functionality and Usability Patterns. There will be a change in the format, one I hope you’ll enjoy!