Ajax Live Search Demo

Over on Ajaxian, there were some interesting comments following from the “Ajaxifying the Address Bar” entry here last week. I made a little live search demo to show the kind of idea I had in mind. Please try the Live Search Demo for yourself and let me know how you find it.

The FAQ is over there, but for all the RSS readers, here’s a copy as it stands right now. Jump to the second half for the programming details.

Using the Live Search

How Do I Highlight the Icons?

It’s all demo stuff: everything gets processed by the server, but no actual lookups are going on. So some crude rules apply here …

Web
Anything matches.
Phone
Type something resembling a phone number. (e.g. 555-1212)
People
Type an email address or a Simpsons family name.
Calculator
Type a basic calculator expression. (e.g. 10/5)
News
Include a newsworthy term, e.g. “scandal” or “talks”.
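Since the demo’s server code isn’t shown here, the following is a hypothetical Javascript sketch of how crude rules like these could be expressed. Every pattern below is a guess based on the FAQ entries above, not the demo’s actual logic.

```javascript
// Hypothetical sketch of the demo's crude matching rules, inferred
// from the FAQ entries above. The real demo runs this server-side.
function matchCategories(query) {
  var categories = [];
  if (query.length > 0) {
    categories.push("web");                          // anything matches
  }
  if (/^[\d\s()+-]{7,}$/.test(query)) {
    categories.push("phone");                        // e.g. 555-1212
  }
  if (/@|^(homer|marge|bart|lisa|maggie)$/i.test(query)) {
    categories.push("people");                       // email or a Simpson
  }
  if (/^\d+(\s*[+\/*-]\s*\d+)+$/.test(query)) {
    categories.push("calculator");                   // e.g. 10/5
  }
  if (/\b(scandal|talks)\b/i.test(query)) {
    categories.push("news");                         // newsworthy terms
  }
  return categories;
}
```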

Huh? It’s Supposed to be a Search Demo – How About, Oh I dunno, RESULTS??!!

There are no answers here, only questions.

It Doesn’t Work on My Browser

I’m interested in your compatibility dilemma. If you’ve got Javascript enabled and it’s not working, please tell me what browser you’re using.

Oh, So It’s All Pretty Lame, Huh?

Yeah, lame. The main point here is that all the terms are sent to the server, which decides on the categories. You could imagine it providing other info too, like an estimate of the result quantity. Or the results themselves.

Do you have any suggestions for a way to make the demo more realistic? Please send pointers to any interesting data repositories which could be held entirely in a memory cache of a few megs. Or any categories which could be calculated algorithmically.

Background

What’s This Live Search About?

A demo of AJAX – a new term for a relatively new way to make the web more dynamic. Written by Michael Mahemoff. Please mail any comments to [email protected].

Why Do You Call “Live Search” a Pattern?

It’s a neat way to resolve the conflicting forces of user information needs and bandwidth limitations. In one form or another, several people have discovered this idea, so I’m capturing it as one of the Ajax patterns.

Whatever. Bodacious Icons, Dude!

If only I made them myself. They’re actually the Crystal Icons for KDE, released by Everaldo under LGPL license. Downloaded from LinuxCult.

Underlying Technology

How Does It Work?

You’re perhaps looking for some Ajax tricks. Most of it is apparent by viewing the source. I’ll highlight some of the take-home messages.

Calling the Server

Calls to the server are made by ajax.js, which offers a facade to the XMLHttpRequest object, so you can do this:

callServer(url, 'drawInitialCategories',
                      true, true, null, postvars);

… which calls the given URL and sends the response to the drawInitialCategories() callback method. The options specify plain-text response (not XML), GET request (not POST), and a null calling context object to be passed back to the callback method. postvars is an array of variables passed as part of the request.

The library builds on some ideas posted by Richard Schwartz. And, more specifically, code posted in a comment there by John Wehr. John’s just started AjaxKit and I’ve been talking to him about contributing.

The library is seriously untested, so for now, use it (if at all) with caution. The main benefit is thread safety: whereas the standard pattern reuses a global XMLHttpRequest, here a new one is created for each call, and destroyed after the response returns.
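The ajax.js source isn’t reproduced here, but a facade along the lines described – plain-text vs. XML response, GET vs. POST, a context object, an array of post variables, and a fresh XMLHttpRequest per call – might look roughly like this. Treat it as a sketch: the parameter handling is inferred from the description above, not copied from the library.

```javascript
// Hypothetical sketch of the callServer() facade described above.
// Parameter names are guesses based on the article's description.
function callServer(url, callbackName, isPlainText, isGet, context, postVars) {
  var request = new XMLHttpRequest();   // a fresh object per call
  var body = null;
  if (postVars) {
    var pairs = [];
    for (var name in postVars) {
      pairs.push(encodeURIComponent(name) + "=" + encodeURIComponent(postVars[name]));
    }
    body = pairs.join("&");
  }
  request.onreadystatechange = function() {
    if (request.readyState == 4 && request.status == 200) {
      var response = isPlainText ? request.responseText : request.responseXML;
      window[callbackName](response, context);  // dispatch to callback by name
      request = null;                           // let it be garbage-collected
    }
  };
  if (isGet) {
    request.open("GET", url + (body ? "?" + body : ""), true);
    request.send(null);
  } else {
    request.open("POST", url, true);
    request.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    request.send(body);
  }
}
```

A call then looks like the callServer(url, 'drawInitialCategories', true, true, null, postvars) example quoted above.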

Setting Up the Images

The categories are downloaded using a query to the server upon load. To keep things simple, image names are based on categories. Each image is then added to a fragment, and the whole thing is added to a div on the page in one final operation.

  categoriesFragment = document.createDocumentFragment();
  for (i=0; i<allCategoryNames.length; i++) {
    categoryName = allCategoryNames[i];
    var categoryImg = document.createElement("img");
    categoryImg.id = categoryName + "Category";
    categoryImg.src = "http://localhost/ajax/liveSearch/images/"
                      + categoryName + ".gif";
    categoryImg.style.backgroundColor = "#cccccc";
    categoryImg.title = categoryName;
    categoriesFragment.appendChild(categoryImg);
  }
  document.getElementById("categories").appendChild(categoriesFragment);

Acting On Input

Like “Google Suggest”, the search uses Submission Throttling. That is, it doesn’t query upon each keypress. Instead, every 50 milliseconds, it checks whether the query has changed, and only then calls the server. This is achieved with a setTimeout() at the end of the request method, which calls itself. A BASIC equivalent would be “10 Do something; Sleep; GOTO 10”.

  function requestValidCategories() {
    ...
    ...(Call the server if the query has changed)...
    ...
    setTimeout('requestValidCategories();',50);
  }
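Fleshed out, the polling loop might look like the sketch below. Only requestValidCategories() and the setTimeout() line come from the demo source quoted above; lastQuery, the field id, and the server URL are illustrative names.

```javascript
// Sketch of the polling loop above: track the last query sent, and
// only hit the server when the text field's contents have changed.
var lastQuery = null;

function requestValidCategories() {
  var query = document.getElementById("query").value;
  if (query != lastQuery) {                      // only call the server on change
    lastQuery = query;
    callServer("categories.phtml?q=" + encodeURIComponent(query),
               "updateValidCategories", true, true, null, null);
  }
  setTimeout('requestValidCategories();', 50);   // poll again in 50ms
}
```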

Acting On Responses

The mechanism above shoots off a query to the server whenever the query has recently changed. It’s an asynchronous query, so the response is sent to a callback method – in this case, updateValidCategories(), which walks through each category and decides whether it’s currently valid. Based on that validity, it updates the background colour, cursor, and onclick action. Note that the images are transparent, so it’s easy to turn them on and off by flicking the background colour.

    if (validCategoryNames[categoryName]) {
      categoryImg.style.backgroundColor="yellow";
      categoryImg.style.cursor="pointer";
      categoryImg.title = "Category '" +
           categoryName + "'" + " probably has results";
      if (categoryImg.setAttribute) {
        categoryImg.setAttribute("onclick",
            "onCategoryClicked('" + categoryName + "')");
      }
      ...
    } else {
      ...
      ... (turn it off) ...
      ...
    }
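A hypothetical completion of that callback, assuming the server returns a comma-separated list of valid category names (the actual response format isn’t shown here); only the highlighted “on” branch above comes from the demo source:

```javascript
// Assumed completion of the callback sketched above. The response is
// taken to be a comma-separated list of valid category names.
function updateValidCategories(response) {
  var validCategoryNames = {};
  var names = response.split(",");
  for (var i = 0; i < names.length; i++) {
    validCategoryNames[names[i]] = true;
  }
  for (var j = 0; j < allCategoryNames.length; j++) {
    var categoryName = allCategoryNames[j];
    var categoryImg = document.getElementById(categoryName + "Category");
    if (validCategoryNames[categoryName]) {          // turn it on
      categoryImg.style.backgroundColor = "yellow";
      categoryImg.style.cursor = "pointer";
      categoryImg.title = "Category '" + categoryName + "' probably has results";
      if (categoryImg.setAttribute) {
        categoryImg.setAttribute("onclick",
            "onCategoryClicked('" + categoryName + "')");
      }
    } else {                                         // turn it off
      categoryImg.style.backgroundColor = "#cccccc";
      categoryImg.style.cursor = "default";
      categoryImg.title = categoryName;
      if (categoryImg.removeAttribute) {
        categoryImg.removeAttribute("onclick");
      }
    }
  }
}
```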

Podcasting Thoughts – MP3 Replacements, Who Invented, AJAX, Speech-Text

With the recent news on iTunes supporting podcasts, here are a few thoughts. New readers of this blog might wonder why I’m mentioning podcasts … I haven’t said too much about podcasting here recently (since so many others discuss it already), but I’ve been keen on podcasting since it kicked off.

  • I’ve seen a couple of mentions recently that the BBC’s enthusiasm for podcasts is about reducing their reliance on Real Audio. This made me think: how come radio websites have released content in Real and MS formats for years, but so few have simply placed MP3s on their site instead? MP3s have, for many years, been far more useful and play at the click of a link in much the same way as the other formats. Why no MP3s? A copyright issue? The MP3 patent? If so, I hate to break it to all these new podcasters what file format their podcasts are based on. I’m all for podcasting – it’s vastly superior to just placing MP3s online – but I’m scratching my head as to why it took the introduction of podcasting to get all these broadcasters to provide plain old MP3s.

  • The “Who Invented Podcasting” debate. Really quite a silly issue. Who invented anything? Dave Winer helped set up RSS, Adam Curry wrote a client to push audio into iTunes, Aristotle contributed to the scientific method which is necessary to develop new technology. See? No one person just turned up and invented podcasting. People have limited attention capacity and busy lives, so the media likes to present things that way, but that’s life. Adam was always going to be the man here – he introduced the first podcasting client and relentlessly pushed the whole concept by podcasting almost daily and grabbing plenty of media attention.

  • Apologies if this seems cringeworthy, but it’s hard not to notice the link to Ajax. Plenty of people are now (predictably) complaining that it’s nothing new, which misses the point. Having an evocative name for something lets you form a community, debate about it, write about it, and so on. “Audioblogging” and “Dynamic websites” just didn’t fit the bill.

  • An unexpected side benefit is that it will push speech-to-text and text-to-speech. Both extremely useful technologies. There’s a huge amount of information ready for audio format if only decent speech synthesis can be developed. These systems are early, but they’re at least present. Hopefully it won’t turn out like automatic language translation, which was amazing at first, but doesn’t seem to have gone anywhere in the past five years.

Patterns as Refactoring Tools

Fowler’s original refactoring text was based on a number of patterns. So you have a refactoring like “Introduce Null Object”, which is a direct mapping to the older “Null Object” pattern. Now there’s also Joshua Kerievsky’s “Refactoring to Patterns” book, which makes the idea more explicit. And in a new interview with Bill Venners, Erich Gamma makes similar comments. There are two themes here: pattern refactoring for education, and pattern refactoring for development work.

Pattern Refactoring for Education

Gamma on teaching patterns (emphasis mine):

I think what you should not do is have a class and just enumerate the 23 patterns. This approach just doesn’t bring anything. You have to feel the pain of a design which has some problem. I guess you only appreciate a pattern once you have felt this design pain.

Venners on the JUnit discussion (emphasis mine):

you walk the reader through the design of JUnit by, as you wrote, “starting with nothing and applying patterns, one after another, until you have the architecture of the system.”

I remember coming across the JUnit cookbook a few years ago. It was the first design I’ve seen described as a series of refactorings, and it struck me just how clear the whole thing was. I’ve only had an opportunity to use the technique once, and it certainly worked better than the usual walkthrough.

The interview also points out that patterns in general are an excellent way to learn about OO principles like polymorphism and the other usual suspects. As I’ve written here before, principles and patterns go hand-in-hand. A well-considered set of patterns shows how to produce designs that adhere to a particular set of principles. We’ve traditionally taught principles and used examples to illustrate. That’s too much distance. Patterns are the missing link between principles and examples.

Pattern Refactoring for Development Work

Gamma on practical application of patterns:

Do not start immediately throwing patterns into a design, but use them as you go and understand more of the problem. Because of this I really like to use patterns after the fact, refactoring to patterns.

Again, I think this has been a surprising aspect of patterns. It follows from the popularity of refactoring, TDD, and feature-driven design. Didn’t Kent Beck say something like “Get it working, then get it right”? With much less big upfront design, it only makes sense that patterns are used on a pull basis.

Retronym for Non-AJAX Apps?

We’re used to the distinction between “static” and “dynamic” websites. I think it’s fairly well-established that “static” means plain HTML with a possible sprinkling of CSS, while “dynamic” means a website with forms and maybe a bit of Javascript. “Web application” is a more sophisticated, goal-oriented version of that.

Now that we have the “AJAX” label, what do we call the non-AJAX “dynamic websites” or “web applications”?

A few retronyms spring to mind:

  • The contemptuous: “Click’N’Wait website”
  • The nostalgic: “Classical website”
  • The contrasting: “Unajaxian website”
  • The bottom-line: “Non-sticky website”
  • The hopeful: “Unajaxified website”
  • The meme-inspired: “Not your grandson’s website”
  • The cantankerous: “Usable website”
  • The too-funky: “Old-School website”
  • The witty reference: “Website formerly known as dynamic”
  • The fashionable: “’90s chic website”

Do you have any other ideas?

Ajaxifying the Address Bar Interface

Jeff Atwood asks “Did you ever get the feeling that the browser address bar is the new command line?” Russell Beattie expressed a similar sentiment a little while ago:

“(I)t struck me that really what’s happening is that the search box is really becoming just the place where we ask our computers questions about anything. It’s more than just a search box, it’s actually an interface to a rudimentary Artificial Intelligence system.”

I like the way google and others do this, but I think they could do a better job.

Firstly, they usually don’t provide enough variety in the results. They’re trying to be intelligent about limiting what I’m asking for, but there’s often more than one type of result required. If I enter “GOOG”, for instance, I get a nice stock summary and search results for “GOOG”. But I’d also like some links to news and groups and so on. If I search on a real word, I’ll get a link to Answers.com. That’s not such a bad thing, but it would be useful to have direct links to a thesaurus, dictionary, Wikipedia, et al. Ideally, the resulting page would be on Answers.com and retain the term, so I can easily switch among the different categories.

The key is to display lots of results with differing emphasis, instead of a single result on an otherwise blank page. Yes, it does add some clutter, but it’s easy enough to direct the user’s eyeballs to the most likely result. If that fails, there are plenty of alternatives already on the page.

Second, there’s a need to be more dynamic … more, shall we say, Ajaxian. Users simply don’t know what to search on. You can hide it away in an “About” or “Help” page, or document it in a “Hacks” book, but mainstream users will never look there. With Live Search functionality, it should be possible to inform the user about the special-ness of their term. Imagine a UI with a block of icons for concepts like this:

  • Search
  • News
  • Dictionary
  • Encyclopedia
  • Phone Book

As the user types, the available options fade in and out to signal whether valid results will be returned. Even without browser-server communication, it would be possible to make a good guess in Javascript just by matching against regular expressions.
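As a rough illustration of that client-side guess, a handful of regular expressions can decide which icons to light up before any server round-trip. The categories and patterns here are illustrative only, not part of any real search engine:

```javascript
// Illustrative client-side guess at which address-bar categories
// might return results, using nothing but regular expressions.
function guessCategories(query) {
  var guesses = ["Search"];                    // plain search always applies
  if (/^[\d\s()+-]{7,}$/.test(query)) {
    guesses.push("Phone Book");                // looks like a phone number
  }
  if (/^[a-z]+$/i.test(query)) {
    guesses.push("Dictionary");                // a single word: try a definition
    guesses.push("Encyclopedia");              // ... or an encyclopedia entry
  }
  return guesses;
}
```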

Indeed, with AJAX, you can do more than just direct the user’s search as they’re typing – you can provide results on the fly! Think of a Google search which keeps updating the hundred results – as well as all the quotes and news and all the other stuff – with each keystroke.

You might be thinking, “gross waste of computational resources, Mahemoff”. Maybe, but it’s useful to brainstorm design ideas by ignoring most of the technical constraints, and only then pare things back to reality.

In fact, I could see live search results being very feasible in what I consider to be the AJAX heartland: intranet applications. The reward per search on intranets is so much greater than on most public websites, since time wasted can amount to serious loss of productivity. Also, users are less likely to abuse the system, and the result space is much smaller.

The other issue here is customisation. With all the search engines about to go “My”-crazy, it’s clear that the results will be able to depend a lot more on the context. The content and relative emphasis of results will depend largely on the user’s context – who’s the user, where are they searching from, what’s the time, and so on.

In Search of Useful Code Comments

A couple of weeks ago, I argued that self-documenting code is self-reinforcing. Some choice quotes on this topic from Mike Clark (via his blog) (emphasis mine):

Ever since I was a wee programmer, I’ve been reminded that good code has a lot of comments … The trouble is, I can’t ever remember being taught this important lesson: Learning to remove a comment can lead to improvements in the code … It turns out that good code actually needs fewer comments than does bad code.

He goes on to suggest when comments smell sweet. Here are my thoughts on these (which essentially amount to some “me too” gushing):

  • Class purpose

I too think this is usually worthwhile, although by no means should it be necessary for all classes.

  • Gotchas, assumptions, limitations, explanations for decisions that aren’t obvious

There’s definitely a place for this stuff, and the aim of self-documenting code shouldn’t lead to people feeling bad about including such comments. It’s interesting that this is actually the most useful type of comment to have, yet many people don’t like including it because it seems wrong to write something like “this is confusing, but we’re still waiting on an answer from the client”. That’s the same kind of thinking that vendor-written manuals often adopt – even though the product is faulty, they’ll ignore any problems and make users feel bad that they can’t work things out.

  • TODOs and problems in the code

As with the gotchas, it’s worth having this, and it’s much better when it’s in the code, where you can read, update, and search for it.

  • Use of a published algorithm

And, likewise, pointers to patterns. That’s the power of patterns: mention a couple of names and you get a quick boost in understandability. To be sure, the need for such comments can still be reduced by careful design: there’s no need to say you used the Proxy pattern if your class is called “CalculatorProxy”, and there’s no need to say you used Factory Method if your Car class has a “Car createFastCar()” method.

Progressive Enhancement and Browser Plugins

In the previous entry on AJAX and server-side crypto, I alluded to progressive enhancement. Naturally enough, people have been discussing progressive enhancement with respect to AJAX, and at least some AJAX applications will doubtless apply this notion. For instance, a form might be validated in real-time via the server if possible, but if not, it will be validated upon submit.

There’s a timely crossover with browser plugins, though. Just as AJAX is heating up, so too is work on powerful plugins like GreaseMonkey. And that’s relevant because there’s an opportunity to ultra-enhance AJAX applications. Progressive enhancement is often more about graceful degradation from the standard case, e.g. users with older browsers or smaller devices. But the principle can work in the opposite direction too: you can use a browser plugin to enhance an existing application.

I’m talking here about application-specific plugins. The web application works without the plugin, but the plugin progressively enhances the experience. It’s already happening. This Firefox wikipedia extension adds a toolbar to make the browser feel more like a word-processor (each to their own). Likewise, the Bloglines toolkit enhances the Bloglines experience. With extensions like these, AJAX applications become the middle-ground of richness: boring old HTML on the left side, and virtually a desktop experience on the right.

AJAX and the Great Data Cloud in the Sky

Office apps, backup, calendars – storing it all online was all the rage in the mid-90s. By 2000, we would be hopping on our flying scooters and whizzing past the great data cloud in the sky.

Well, it was nice to dream.

Here are two major reasons it never happened:

  • Options for rich clients – like Java applets and downloadable applications – were never feasible for a range of reasons, meaning that web applications became the only realistic choice. But web applications gave a lousy user experience and were hampered by portability issues.
  • Trusting the host. Are you crazy? You want me to store my prized data on someone else’s box?

Well, AJAX is a big step towards rich clients, and what we’re seeing now will only get better with emerging patterns and frameworks. GMail and Backpack and the thousands of wikis are all examples of web applications with important data being stored server-side.

But right now, you have to trust the host with your data. Feasible in some cases, but it would be ideal if data could be encrypted server-side. So a big challenge right now is to work out how web applications can deal with encrypted data.

It turns out that AJAX may well offer a viable solution to server-side encryption. For the past couple of weeks, Richard Schwartz has been talking about how an AJAX application could decrypt on the fly. The idea is this: in the past, decryption in the browser would have meant a Java applet – with all the usual Java problems – or Javascript-based encryption. But Javascript can’t remember data between page loads, so the user would have to type the passphrase each time (or use frames, which is feasible though messy). An AJAX application need load only once, so the user would only have to enter the password once per session. Richard has a proof-of-concept demo too.
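As a toy illustration of the “enter it once per session” point: because an Ajax page never reloads, the passphrase can live in a Javascript closure for the lifetime of the page. The XOR “cipher” below is purely a stand-in for real cryptography – it shows the structure, not anything you should use in practice.

```javascript
// Toy illustration: the passphrase is captured once in a closure and
// reused for every decryption until the page is closed. The XOR
// "cipher" is a placeholder for a real algorithm.
var decryptor = (function() {
  var passphrase = null;                         // lives as long as the page
  return {
    setPassphrase: function(p) { passphrase = p; },   // asked once per session
    decrypt: function(ciphertext) {
      var plain = "";
      for (var i = 0; i < ciphertext.length; i++) {
        plain += String.fromCharCode(
          ciphertext.charCodeAt(i) ^
          passphrase.charCodeAt(i % passphrase.length));
      }
      return plain;
    }
  };
})();
```

Each subsequent XMLHttpRequest response could then be passed through decryptor.decrypt() without prompting the user again.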

There’s also some further discussion about how practical this idea is. Having to type your passphrase in each session is certainly an issue, but not a showstopper in my view, at least not for techie users, because of browser plugins. As Richard mentions, it could be solved on Firefox with a GreaseMonkey script. In reality, it could be solved on IE too with the right plugin. So this is an example of progressive enhancement: type your passphrase in if you need to, or install the plugin to avoid it.

Know Thy User : It’s More than Usability

As you know, it helps usability a lot to work closely with users. What I’ll say here is that working with users helps other -ilities too. I’m defining usability as UI, human-computer dialogue, and functionality. I think it’s clear that working with users contributes greatly to these things already. Let’s now look at the “hard” non-functional requirements that are usually thought of as being intricately tied to inanimate objects.

Performance (Veloc-ity!): Why tune some code if the user hasn’t asked for it? Because you found a cool new pattern on TheServerSide and decided you’d get a big performance boost? It’s probably a waste of time if your users aren’t feeling the pain. Know your users, and you’ll know what, if anything, needs optimising.

Resilience (Reliabil-ity + Saf-ity): Again, it’s common to assume this comes down to measures like uptime. Not true. Your users can direct your work by telling you which aspects they value the most. Maybe your users are willing to have the system go down for ten minutes a night if it means performance improves for the rest of the day. Maybe they need the UI to be always on, but back-end messaging can be delayed. Knowing this information can save a lot of headaches and help deliver value where users really need it.

Maintainability + Flexibility: Understanding users and their needs can help you understand the sort of things that will change later on. If you’re following a purely agile approach, that probably won’t apply. For everyone else: if you’re going to guess, make it accurate.

Security: Security and usability is one big pair of conflicting forces. The classic example is the password policy: make it too lenient, and you’ve broken security. Make it too hard, and you’ve got a post-it note on the user’s monitor. Security has long been recognised as a holistic discipline, involving a consideration of physical spaces, workflow, staff hierarchies, and so on. Would it make a difference if you saw that the application was only used inside a heavily guarded machine room, isolated from any network, versus an internet-enabled system that can be accessed from any PC? It should! That’s a slightly extreme example, but there are many details about security that can be picked up by working with users.

Don’t Force Users to Qualify Lookups

An unfortunate feature of many lookup services is their insistence on having the user type something, then qualify what sort of thing they’re looking up – or qualify it by typing in a different form field. Examples:

Qualifying might be useful as an option to help reduce results, but it shouldn’t be mandatory. In many cases, there is only one answer even without qualifying, so why force the user to specify a qualifier when it’s not needed?

On the bright side, Google, for instance, will offer News results when you perform a regular search. There were rumours a while ago that it was occasionally providing images too. I think this is a good move, and they could go further. As long as it’s clear which results you specifically searched for, why not show results for several categories? Likewise, you rarely have to qualify searches on Amazon. You just search for a term like “Pulp Fiction” without wasting any time telling Amazon what kind of product you’re looking for.

Now, you might say performance improves when users qualify things – you have an extra constraint in your WHERE clause. It’s true, but let’s face it: do you really think that’s the reason this design faux pas occurs? Experience – and the fact that it probably wouldn’t make much difference – suggests the design has simply not been taken that seriously. Even if performance were considered, the only relevant measure is how quickly users can perform their tasks, and that would probably improve overall even if the CPU were doing more work. And there would be fewer incorrect queries if users weren’t forced to specify the information.

This problem occurs due to tech-driven, rather than user-driven, design. Instead of looking at the user’s task, the designers have focused only on the database schema. A pat on the head is in order from the DBA, while the user hits the “X” icon.