Host-Proof Authentication?

Abe Fettig’s done some important experimenting to arrive at a direct remoting technique, one which bypasses the need for a Cross-Domain Proxy and doesn’t rely on cross-domain On-Demand Javascript. Compared to the latter technique, Abe’s idea is more functional: you get the power, expressivity, and bidirectional capability of XMLHttpRequest, whereas the On-Demand Javascript hack only allows downloading (though you could perhaps pass CGI arguments with the script request, or use one of the image/stylesheet hacks to get information in the other direction).

If we can bypass the server, then we can consider the idea of Host-Proof Authentication. It’s based on Richard Schwartz’s Host-Proof Hosting idea, where encrypted data is decrypted on the fly in the browser. In a similar vein, if you needed third-party authentication, these remoting hacks are one way to keep your password away from the prying eyes of the server host. A while back, one of the internet banks (Egg?) copped it for asking users to hand over all their customer IDs, passwords, etc., so they could provide a one-stop-shop service. Maybe Host-Proof Authentication would be a better approach – if not automated, a portal could be set up to allow users to shuffle funds around within the browser.
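To make the Host-Proof idea concrete, here’s a toy sketch of the decrypt-in-the-browser principle. The XOR “cipher” and function names are purely illustrative – a real system would use a vetted crypto library – but the key point holds: the host only ever stores ciphertext, and the passphrase never leaves the client.

```javascript
// Toy sketch of Host-Proof storage: the server sees only ciphertext; the
// passphrase stays in the browser. XOR is NOT real crypto -- it's here only
// to illustrate where encryption and decryption happen.
function toyEncrypt(plaintext, passphrase) {
  var out = [];
  for (var i = 0; i < plaintext.length; i++) {
    out.push(plaintext.charCodeAt(i) ^ passphrase.charCodeAt(i % passphrase.length));
  }
  return out.join(","); // what the host stores -- meaningless without the passphrase
}

function toyDecrypt(ciphertext, passphrase) {
  var codes = ciphertext.split(",");
  var chars = [];
  for (var i = 0; i < codes.length; i++) {
    chars.push(String.fromCharCode(codes[i] ^ passphrase.charCodeAt(i % passphrase.length)));
  }
  return chars.join("");
}

// The browser would fetch the ciphertext (via XMLHttpRequest, say) and
// decrypt locally; the passphrase is never sent over the wire.
var stored = toyEncrypt("account:12345", "secret");
var recovered = toyDecrypt(stored, "secret");
```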

Back here on Earth, I wouldn’t in reality use Host-Proof Authentication for a critical application – not without a lot more consideration – because there are two reality checks:

  • Host-Proof Hosting is far from perfect – Alex Russell has noted it’s vulnerable to script injection attacks. See the comments in the links above for more on that. Similar concerns apply to Host-Proof Authentication.
  • All these direct remoting techniques rely on some form of co-operation from the external server: Abe’s technique requires it to explicitly set the document.domain property; On-Demand Javascript requires it to expose appropriate scripts; image remoting requires the script to recognise any variables and output an invisible pixel; and so on. The external API would have to explicitly let the browser perform remote authentication.

An Ajax Framework a Day!

Today’s Ajax framework is JsRia. Yesterday’s was ZK, with the Backbase entries updated too. In the past week, there were Smartclient, Ajax JSP Taglib, Ajax JSF Framework, Cajax. Here’s the diff. The week prior to that saw introduction of XOAD, Rialto, and Lotus Notes info.

Have the Ajax frameworks entered the enlightened age of singularity? (I’ve been listening to a lot of Ray Kurzweil podcasts lately, forgive me.) To some extent, yes, there is some pretty explosive growth here, because several of these frameworks really have been released in the past couple of weeks, as far as I can tell. In addition, many of the project owners and users have presumably become aware of the page, and seek to add their project or update my original description.

So I’m thinking the frameworks page needs to be split before it bursts at the seams – what do you reckon? I wish there was a way to keep them all on the same page, with a bit more Ajaxy dynamism, to let you manage and personalise things better. As I alluded to yesterday on Ajaxian, among all the Ajax projects being announced, wikis are somewhat lacking. Surprising, since the wiki was one of the most obvious Ajax examples to me – one I mentioned in the original Ajax Podcast, and one of the first proof-of-concept demos I created for the patterns. One thing I’d like to see is wikis take more of a web service approach – let a thousand Ajax/Flash/Desktop wikipedia clients bloom. Sure, there are mashups now, but they’re mostly read-only, and require manual scraping. The idea I had with the Ajax Patterns Reader was to eventually let people leave feedback. There’s another demo – the portal, which grabs content from ajaxpatterns.org – and there will be a further demo coming soon. To do all that properly, I’ll likely create a web service to expose the wiki content as a RESTful API.

In closing off this tangent, here’s a question: what would it be like to create a wiki from the ground up, with no UI? Just a collection of web services for managing content. I’ve found MoinMoin is more configurable and pluggable than most, but it still starts with the unnecessary premise that the UI lives in the same process as the content.
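A UI-less wiki could be sketched as nothing more than a handful of content operations, each of which would map naturally onto a RESTful endpoint (GET/PUT/DELETE on a page URL, GET on a collection). The in-memory store and function names below are my own illustration, not MoinMoin’s API:

```javascript
// A sketch of a wiki with no UI: just operations over content, which a
// server could expose as web services. Names and data shapes are illustrative.
var wikiStore = {};

function getPage(name) {
  return wikiStore[name] || null; // would back GET /pages/{name}
}

function savePage(name, content) {
  var isNew = !(name in wikiStore);
  wikiStore[name] = { content: content, modified: new Date() };
  return isNew; // true if created, false if updated (would back PUT)
}

function deletePage(name) {
  var existed = name in wikiStore;
  delete wikiStore[name];
  return existed; // would back DELETE /pages/{name}
}

function listPages() {
  var names = [];
  for (var name in wikiStore) names.push(name);
  return names.sort(); // would back GET /pages
}
```

Any client – Ajax, Flash, or desktop – could then build its own UI on top of these services.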

Redundant Design is Worth Fighting For

Matt @ 37Signals discusses new countdowns being used at pedestrian crossings (crosswalks). Did you ever count how many redundant messages are available at a pedestrian crossing? Good, let’s be sad together and count them, then. At a workshop one time, various attendees from different countries came up with a list of cues, something like the six below:

  • The walking man (is there a walking woman anywhere in the world?) or “Walk”/”Don’t Walk” message.
  • The main traffic lights for drivers.
  • Countdown displays.
  • Display next to the button, indicating whether it’s already been pushed (in which case the crossing is currently in “Don’t Walk” mode).
  • Sound. (A continuous noise to indicate whichever phase they’re in, and/or a transition sound.)
  • Cars and pedestrians. (Not actually designed and not reliable, but certainly an indication.)

The redundancy is presumably there to cope with different sets of disabilities, as well as to improve safety for everyone. Software developers don’t always like redundancy – it goes against just about every fundamental design principle you care to name – but users generally benefit from it. So it’s a matter of architecting things so that redundant UI doesn’t lead to redundant code, e.g. pointing two event handlers to the same Command object.
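The Command idea can be sketched in a few lines: two redundant UI cues, one piece of logic. The element and handler names are illustrative, not from any particular library:

```javascript
// Redundant UI, non-redundant code: both event handlers delegate to the
// same Command object, so the logic lives in exactly one place.
var stopCrossingCommand = {
  executions: 0,
  execute: function () {
    this.executions++;
    // ...switch the walking man, start the countdown, play the sound, etc.
  }
};

// Two different UI events, one shared command.
function onCountdownExpired() { stopCrossingCommand.execute(); }
function onTrafficLightChange() { stopCrossingCommand.execute(); }
```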

Yeah, another funny thing about crossings is the button. In one place (Singapore?), I was told not to push it, because it’s only for disabled or elderly people (and of course, ignorant tourists). Everyone else just waits and it will turn green eventually.

Error Messages We’d Rather Not See

Uh, thanks for the heads-up.

Reminds me of a presentation at Interact 2001, where the laptop suddenly interrupted proceedings with that legendary message, “Your computer is now fully charged”. The presentation was about user attention, I kid you not.

And, by way of contrast, how to write good error messages: tell the user what happened, explain the consequences if it’s not obvious, outline how to fix it, explain what to do if they can’t fix it.
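That four-part structure is mechanical enough to sketch in code. The function and field names here are my own invention, just to show the shape:

```javascript
// A sketch of the four-part error message: what happened, the consequences,
// how to fix it, and a fallback. Field names are illustrative.
function formatErrorMessage(parts) {
  var lines = [parts.whatHappened];
  if (parts.consequences) lines.push(parts.consequences);
  lines.push("To fix this, " + parts.howToFix);
  lines.push("If that doesn't work, " + parts.fallback);
  return lines.join("\n");
}

var msg = formatErrorMessage({
  whatHappened: "The document could not be saved.",
  consequences: "Changes since your last save will be lost if you close now.",
  howToFix: "check that the disk isn't full, then save again.",
  fallback: "copy the text somewhere safe and contact support."
});
```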

Looks Good, Tastes Good

The Search Engine Experiment – a blind test where users rate relevance of results – reveals that Google is better, but not that much better. The methodology is reasonable – the only serious flaw might be if people are assuming Google is always relevant, then trying to pick the Google results. Or if people go for Google because they’re used to it, so the results are the most comfortable. For example, when I tried the test, I jumped straight for the results that included wikipedia, partly because it just felt more pure and Googlish. It turned out to be a Yahoo! result.

Anyway, taking the results at face value, how do we explain MSN and Yahoo! being more relevant than the grand-daddy of search 60% of the time? Seth has a good theory:

Google is better because it feels better and quicker and leaner and easier to use. The story we tell ourselves about Google is very different, and we use it differently as a result … Music sounds better through an iPod because we think it does.

cf. Nicholas Negroponte, who explains in “Being Digital” that he always puts on his glasses to eat steak – it tastes better that way. (BTW “Being Digital” is the greatest tech book never to make it on Joel’s MBA reading list. A real mind-opener, like Philip and Alex’s Guide to Web Publishing.)

So Google is a cognitive dissonance machine that actually has no clothes on? Hard to believe, but bring on more of these mashup experiments.

Joining Ajaxian

I’m pleased to announce that I’ve joined Dion, Ben, and Rob as an Ajaxian.com editor. Here’s [Dion's announcement](http://ajaxian.com/archives/2005/11/introducing_mic.html):

We are proud to announce that Michael Mahemoff of the popular AjaxPatterns.org has joined the Ajaxian.com team. Together, Ajaxian.com and Ajax Patterns is going to offer even more information for users of Ajax technology. Expect to see cross pollination between the sites, and in the podcasts.

It will be great to get involved with the talented Ajaxian team. Ajaxian has been a great source of inspiration for the Ajax Patterns – when I created the Ajax Examples page, I thanked them for posting all the Ajax showcases, and many of those examples – as well as the ongoing community news – helped me discover and document the patterns.

Especially fun will be the combined podcasts – expect to see and hear more info about the patterns at Ajaxian and in the Audible Ajax podcasts. Fortunately, Ajaxian.com uses a very similar Creative Commons license, so the material can be reused and incorporated in other works. BTW I’ll still post the final Basics of Ajax podcast to the standard SoftwareAs feed later this week.

We have some interesting ideas for linking between AjaxPatterns and Ajaxian. As always, please provide feedback and any ideas you have about Ajaxian, AjaxPatterns, or the cross-pollination effort.

Server-Centric versus Browser-Centric

James Strachan: Is Ajax gonna kill the web frameworks?:

So is the web application of the future going to be static HTML & JavaScript, served up by Apache with Ajax interacting with a bunch of XML based web services (maybe using SOAP, maybe just REST etc)? If so, do we really need a web framework thats focussed on HTTP and HTML, or are we just gonna end up developing a bunch of XML based web services and letting Ajax do all the templating, editing and viewing?

While I’ve kept [an open mind](http://ajaxpatterns.org/Ajax_Frameworks), pure separation is certainly the approach I’ve been using and advocating. Perhaps server-centric approaches can work okay for some intranet apps where the emphasis is on getting up and running, and where many developers want to focus on server-side concepts like messaging, DB, business logic, etc. But the bulk of applications are better off as browser-centric.

With tools like Dojo, Mochikit, Scriptaculous, and Ajax Pages, you can quite happily get the whole UI encapsulated in JS. Amid all the Ruby uptake (and for good reason), JS has also come along nicely over the past twelve months. Not just libraries, but techniques for OO design, testing, etc.

In addition, as others have pointed out, the approach goes hand-in-hand with SOA: if you’re going to expose a clean REST API on your server anyway, it’s a no-brainer to proceed with a pure-browser UI. Testing becomes a lot easier too. Separating model and presentation has always been a difficult problem in software … when you have a solution as good as this, you have to put up a very strong argument against it.
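The browser-centric split can be sketched in a few lines: the server hands over plain data (what a REST API would return as XML or JSON), and all the templating happens in JS. The render function and data shape below are illustrative:

```javascript
// Browser-centric separation: the model arrives as plain data from a REST
// API; presentation is entirely a client-side concern. Being a pure function
// of its input, the renderer is trivially testable outside the browser.
function renderPatternList(patterns) {
  var items = [];
  for (var i = 0; i < patterns.length; i++) {
    items.push("<li>" + patterns[i].name + ": " + patterns[i].summary + "</li>");
  }
  return "<ul>" + items.join("") + "</ul>";
}

// In the browser this data would arrive via XMLHttpRequest; here it's inlined.
var html = renderPatternList([
  { name: "Cross-Domain Proxy", summary: "mediate remote calls via your own server" },
  { name: "On-Demand Javascript", summary: "download scripts as needed" }
]);
```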

So why are people still using server-centric frameworks? The biggest force of resistance seems to be the misplaced notion that JS is a hack and we must avoid working with it at all costs. Or maybe it’s less ideological than that, and because people just haven’t had time to learn it. Fair enough – it’s hard enough to keep up with server-side technologies. But a little time learning the basics of JS would certainly ease the pain of web development.

Including modern words in modern dictionaries

What manner of 19th century public domain dictionaries are packaged with 21st century software? For Montgomery Burns, these word lists would be just spifflicastic, but maybe not for the average citizen.

I just installed Thunderbird 1.5RC1, keen to check out the spell-check. Neither “blog” nor “podcast” was recognised as a valid word, despite one of the other new features being RSS and podcasting support! Not pointing the finger at Thunderbird, since most dictionaries in /usr/dict and ispell and Office(s) seem to be equally ancient.

Many web-related terms turn out to be unsupported:

  • blog.
  • rss.
  • podcast.
  • www.
  • weblog.
  • mozilla.
  • thunderbird.
  • firefox.
  • netscape.
  • perl.
  • usenet.
  • cgi.
  • http.
  • dotcom.
  • flickr.
  • technorati.
  • google.
  • ipod.

Still curious (bordering on obsessive), I then tried the OED’s top ten new entries for 2001, specifically those consisting of a single word. All but one of these fail too.

  • doh.
  • balti.
  • Doh!
  • Ladette
  • Mullet (Passes spell-check.)
  • Alcopop

Yep, forget about quoting the Simpsons and partying with Red Bull. At the end of the day, it’s the Mullet that commands your respect.

I’m sure Google Labs could run some algorithm against the web to produce a more useful spell checker. It would obviously find many new words that should be added, but furthermore, it would find obscure words that should be removed. And it could probably go a lot further too, and build a very clever grammar-checking algorithm. But for now, there’s plenty of mileage to be gained from a simple manual list.
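A crude version of that algorithm is easy to imagine: count word frequencies across a corpus and flag common words missing from the dictionary as candidates to add. The function, thresholds, and word lists below are made up for illustration:

```javascript
// Sketch of corpus-driven dictionary maintenance: any word appearing at
// least minCount times in the corpus but absent from the dictionary is a
// candidate for addition. (The reverse check -- dictionary words that never
// appear -- would flag obsolete entries for removal.)
function suggestNewWords(corpus, dictionary, minCount) {
  var counts = {};
  var words = corpus.toLowerCase().match(/[a-z]+/g) || [];
  for (var i = 0; i < words.length; i++) {
    counts[words[i]] = (counts[words[i]] || 0) + 1;
  }
  var candidates = [];
  for (var word in counts) {
    if (counts[word] >= minCount && dictionary.indexOf(word) === -1) {
      candidates.push(word);
    }
  }
  return candidates.sort();
}
```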