Functionality Trumps UI

Don Norman points out that usability isn’t everything, based on media comments about iTunes and Napster:

What is going on here? Both reviewers really like their iPods and the iTune service and clearly consider the entire experience far superior to that of Napster. Yet they think Napster “offers a real alternative.” Why? The business model. Napster offers unlimited access to music for a monthly fee

And the reviewer who favoured Time Warner over TiVo due to cost and picture quality:

Compared to TiVo, the Time Warner cable box is like going to a medieval dentist. But despite his dislike, he switched anyway.

The IBM CUA “Look and Feel” iceberg made this point in the early 1990s. It argued that usability is like an iceberg – the fully visible UI on top makes up only a small proportion of the overall experience. The split went:

  • Presentation (visual representations, aesthetics): 10%
  • Interaction (interaction techniques, device mappings, standard menus): 30%
  • Object relationships (properties, behaviours, common metaphors): 60% (“object relationships” effectively means the underlying architecture and available functionality.)

In any event, software quality isn’t everything to an organisation, nor is usability. Every organisation has to pick its stronger functions and just get by with the others. Some happen to focus on usability, some focus on software process in general, and some prefer to focus on advertising or IP protection or whatever else. They’re all feasible strategies. Just be aware of which strategy a company is following before joining – it might affect your negotiations.

The two most successful PC makers right now are Dell and Apple. One of them makes ugly machines and displays outright contempt for its customers (yes, personal experience); the other specialises in grace and gladly swaps over everyone’s broken iPods three times a year. Total opposites in the same industry, and both companies make crazy profits.

Looking Back on EJB

Ted Neward and Floyd Marinescu make some good points on EJB – whether it’s really about distribution, the impact of open source, and so on.

The history of EJBs is a good demonstration of the “You Ain’t Gonna Need It” principle applied to an entire industry. At the time, it seemed like “this is what we’ll do in the future, when we have the resources, the right app servers, the right IDEs, oh, and the need for it, so … let’s do it now instead! Hurray! Let’s party like it’s 2003!”

And then the backlash came, dipping its toes in the water by the time Rod Johnson’s “J2EE Design and Development” came out, and turning outright contemptuous by the time Johnson’s “… Without EJB” call-to-arms appeared. Of course, lots of people snorted in the very face of EJB all along (serverside anyone?), but the mainstream Java attitude back then was “The EJB model is the way real software’s done. Go back to your little hacking if you feel otherwise.”

So here we are, in that future, CPUs ten times as fast, etc., massive internet usage, and what’s today’s buzzword? “Lightweight”! To be fair, today’s design paradigm is more about flexibility than low resource usage per se. As Dion Almaer said, the container itself need not be lightweight. Nor do the implementations. It’s really about the framework being as transparent as possible. (A great HCI principle: “help me think about my job, not yours”.)

EJBs will still be with us, but it’s nice to see vendors like BEA embrace Spring and related frameworks. Indeed, the whole dependency injection thing fits well with vendors – everyone needs an implementation after all.
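
To make the “implementation” point concrete, here’s a minimal sketch of dependency injection – hand-wired TypeScript rather than real Spring code, with invented names – showing how the application codes against an interface while whichever implementation you (or a vendor) prefer is handed in from outside.

```typescript
// Not Spring code – a hand-wired illustration of dependency injection.
// All names here (MessageQueue, InMemoryQueue, OrderService) are invented.
interface MessageQueue {
  send(destination: string, payload: string): void;
}

// One possible implementation; a vendor-backed queue could equally be used.
class InMemoryQueue implements MessageQueue {
  messages: Array<{ destination: string; payload: string }> = [];
  send(destination: string, payload: string): void {
    this.messages.push({ destination, payload });
  }
}

// The business object never constructs its collaborator – it is handed in.
class OrderService {
  constructor(private queue: MessageQueue) {}
  placeOrder(orderId: string): void {
    this.queue.send("orders", `placed:${orderId}`);
  }
}

// Wiring happens in one place; a container like Spring automates this step.
const service = new OrderService(new InMemoryQueue());
service.placeOrder("42");
```

Swap InMemoryQueue for a vendor’s class and OrderService never changes – which is part of why the model suits vendors so well.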

Pattern Abuse

Tony Darugar hates patterns:

I’ve seen more horrendous programming sins committed in the name of patterns than almost any other possible justification.

I’m not sure if it’s really worse than other justifications. “XP” and “agile”, for example, are now being used to justify any (lack of) process not involving upfront design and documentation. That aside, pattern abuse is certainly a real phenomenon.

Patterns are just a form of expression. So saying “It’s a good design because it uses Prototype” is like saying “It’s a good article because it uses sarcasm”.

Tony asks if it’s the “patterns” themselves that are at fault, or the developer:

Is this really a damnation of patterns or the case of a bad developer? My point is, this guy was not born a bad developer. He was quite smart, and could’ve been useful. There are many of this guy running around. I’ve seen them.

I don’t know the guy in Tony’s story, but based on it I’d seriously question how good a developer he is. Anyone who thinks using a pattern is sufficient design rationale doesn’t understand patterns, or design, or both. Any design problem has infinitely many solutions, and there are enough patterns in existence (even counting only the good ones) to justify many of them. So you’ll need something more than “Hey everybody, I used a pattern!” to show you’ve done a good job.

Patterns are great, but you don’t “design by pattern”. You learn patterns and you code with your own ability and you iterate between the two.

On that note, I found test-driven design amazing when I first applied it: the rapid code-improvement loop meant patterns just popped out. Only later did I realise how many of them had morphed their way into the code. The classes weren’t called “Abstract Factory” and “Business Delegate”. They just were. And, in many cases, I ended up with a far better understanding than I’d gained from just reading about them.

Alistair Cockburn’s Shu-Ha-Ri analogy, taken from Aikido, is worth considering:

What is expertise?  (the Shu-Ha-Ri progression)

  Level 1
  Learning “a technique that works”
  Success is following the technique (and getting a success) 

  Level 2
  Learning limits of the technique
  Success is shifting from one technique to another 

  Level 3
  Fluid mastery - shifting techniques by moment
  Unable to describe the techniques involved

No-one can get to Level 3 expertise on all the patterns and pattern collections out there, but whatever level we’re at, we can acknowledge that merely applying a technique is not enough. Fortunately for martial arts students, they get to learn that lesson pretty quickly.

Ajax Gems

I’m opening up the Ajax wiki soon, and one of the really important things there will be to let everyone add examples. So I kicked off an Ajax Examples page. It’s based mostly on the original content linked from FiftyFourEleven, blatantly combined with most of the showcases featured on Ajaxian. And a few others for good measure.

Here, I wanted to highlight some of the lesser-known Ajax apps that warrant 30 seconds of playtime.

  • Quek: Chatting about the website you’re looking at. A limited version of “community surfing” plugins, but without the plugin. This could go far with a few tricks like bookmarklets and support for publishers to provide links to embed the chat.

  • HoverSearch: Search results hover above the main page. Uses transparent divs.

  • maps.search.ch: Apparently, every square inch of Switzerland exists in a computer model. Shows that a neatly rendered model can look better than an outright photo.

  • Zuggest: Excellent illustration of live search.

  • Ripped Tickets: Another live search.

Heartbeat Ajax Pattern – A Code Example

Erik Pascarello (ahoy hoy Ajaxian) has created a library to track the user’s session. This is a nice implementation of an Ajaxian Heartbeat.

One of the biggest frustrations with traditional web applications is that users get timed out. With Ajax, you have a few more options:

  • Keep sending requests to the server, so the server knows the browser page is still present – the user hasn’t quit or loaded a new URL, so there’s at least a chance they’ll return.
  • As with Erik’s implementation, explicitly ask the user, as the timeout approaches, whether they want to continue or quit.
  • Trace the user’s browser activity to check whether they’re actively working with the application. Neil Brewer of Trinsoft alerted me, after listening to the Ajax podcast, that this is a potential negative of Ajax: the privacy concern. And “The Fonz” makes exactly this point – the whole title is “The Fonz uses XmlHttpRequest and AJAX to spy on you.” Isn’t that cool? The Fonz delivering community service announcements in 2005, warning against the evils of XMLHttpRequest :-). This is certainly an issue, although to be fair, you could do the same thing without XMLHttpRequest, by buffering up user activity and uploading it on the next POST. Plugins will probably help here, like more end-user-oriented versions of the XMLHttpRequest debugger. In any event, for heartbeats the solution is simple: track user events in the browser, but only send a basic heartbeat if anything has actually happened – no more need be sent (a minimal sketch follows this list).
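
To illustrate that activity-aware approach, here’s a minimal browser-side sketch in TypeScript. It isn’t Erik’s library; the endpoint name and the one-minute interval are assumptions for illustration only.

```typescript
// A minimal heartbeat sketch – the endpoint name is invented.
let activitySinceLastBeat = false;

// Note any sign of life; the events themselves are never uploaded.
["mousemove", "keydown", "click", "scroll"].forEach((type) =>
  document.addEventListener(type, () => { activitySinceLastBeat = true; })
);

// Once a minute, ping the server – but only if the user has actually done something.
setInterval(() => {
  if (!activitySinceLastBeat) return;
  activitySinceLastBeat = false;
  const xhr = new XMLHttpRequest();
  xhr.open("POST", "/session/heartbeat", true); // hypothetical endpoint
  xhr.send(null);
}, 60 * 1000);
```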

OK, so heartbeat’s about ending user frustration? A lot more than that, actually. Think “Pessimistic Locking”, a very useful pattern in the enterprise that’s troublesome on the web. It has its good and bad sides relative to optimistic locking, but on the web the bad sides are exacerbated by the timeout issue. How do you know if the lock holder is actually working hard in the browser on the locked artifact, or if they’ve quit the browser, thrown the computer out the window, and gone fishing? Traditionally, you couldn’t tell the difference. With Ajax, you can make a much more informed guess.
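
The server side of that informed guess might look something like the following sketch – not taken from any particular implementation, with invented names and an invented grace period: a lock is only honoured while its holder keeps heart-beating.

```typescript
// A server-side sketch of how heartbeats can inform pessimistic locking.
const LOCK_GRACE_MS = 3 * 60 * 1000; // arbitrary grace period for illustration

interface Lock {
  holder: string;
  lastHeartbeat: number; // epoch millis, refreshed on every heartbeat request
}

const locks = new Map<string, Lock>(); // keyed by artifact id

// Called whenever the lock holder's browser sends a heartbeat.
function recordHeartbeat(artifactId: string, user: string): void {
  const lock = locks.get(artifactId);
  if (lock && lock.holder === user) {
    lock.lastHeartbeat = Date.now();
  }
}

// A lock whose holder has gone quiet is treated as released.
function isLockAvailable(artifactId: string): boolean {
  const lock = locks.get(artifactId);
  return !lock || Date.now() - lock.lastHeartbeat > LOCK_GRACE_MS;
}
```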

Podcast: Mock Objects and Unit Testing

Testing and designing with mock objects

Welcome to the second half of this unit testing podcast series.

Last week’s podcast covered some unit-testing tips and JUnit patterns. This week covers mock objects – the how and why.
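
To give a flavour of the topic before you hit play: a hand-rolled mock in a few lines. The podcast talks in Java/JUnit terms; this sketch uses TypeScript with invented names purely to illustrate the idea of verifying an object through its interaction with a faked-out collaborator.

```typescript
import { strict as assert } from "assert";

// The collaborator we want to keep out of the test: a real mail gateway.
interface Mailer {
  send(to: string, body: string): void;
}

// The object under test depends only on the Mailer interface.
class PasswordReset {
  constructor(private mailer: Mailer) {}
  requestReset(email: string): void {
    this.mailer.send(email, "Click here to reset your password");
  }
}

// A hand-rolled mock: it records calls so the test can verify the interaction.
class MockMailer implements Mailer {
  calls: Array<{ to: string; body: string }> = [];
  send(to: string, body: string): void {
    this.calls.push({ to, body });
  }
}

// The "test": exercise the object, then assert on what it told its collaborator.
const mailer = new MockMailer();
new PasswordReset(mailer).requestReset("user@example.com");
assert.equal(mailer.calls.length, 1);
assert.equal(mailer.calls[0].to, "user@example.com");
```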

Click to download the Podcast. You can also subscribe to the feed if you want future podcasts automatically downloaded - check out the podcast FAQ at http://podca.st.

This is a Podcast – a blog entry with an embedded MP3 link. On Internet Explorer, click the left mouse button to listen, or click the right mouse button and “Save As…” to download. Better yet, you can subscribe for updates straight into your PC or iPod – it’s easy, open, and free. Install the free, open-source iPodder client and, when it starts, just paste this in: “http://www.softwareas.com/podcast/rss2”. Too easy! Fifteen minutes and you can be subscribed to receive thousands of MP3 podcasts – from official BBC documentaries to business personalities like Sun COO Jonathan Schwartz to scores of amateur publishers speaking about their own interests. They will download straight to your PC as soon as they’re published. You can listen to them on your PC or any portable MP3 player. If you have an iPod, programs like iPodder will push the downloaded MP3s straight into iTunes, so you can leave home each day with new content loaded onto your iPod. More info in the Podcast FAQ.

So Graduate Students Shouldn’t Blog?

Apparently, blogging is not in the interests of graduate students. Hannibal @ arstechnica agrees:

Ultimately, I think the answer to this dilemma is pretty clear: graduate students simply should not blog, and if they do blog they should never do so under their real names. As a grad student, your writing time is much better spent producing papers that will get you feedback from the folks who you’re paying to study under. Furthermore, anything that you have to say that’s even remotely interesting to anyone other than your parents and your best friend from childhood is not worth publishing online when it could easily come back to haunt you years later.

This issue is a bit confused because it mixes personal blogging with research-related blogging.

Personal blogging is a separate issue. Personal content is different enough to warrant a completely separate blog from the research blog; info about your new goldfish is completely orthogonal to your superconducting-materials investigation.

The second, more important, issue: should research students blog about their research? Absolutely! I have a bias here because I kept my research on my homepage throughout my thesis (here nowadays – how much better a proper content management system would have been :-) ). And if I were doing a PhD today, I’d certainly be blogging about it. Here’s why:

  • Timestamping your ideas: A big issue is proving your work is original, and the history of research is full of stories about people arriving at the same idea simultaneously (Who invented calculus? Who invented podcas … never mind.) Traditionally, publishing a paper was a good way to timestamp your ideas. In this wired era, you can do a lot better than that. And – precisely because there’s no formal review – you don’t have to worry about the paper being rejected and delaying that timestamp.
  • Promoting your research: Academia is a battle for hearts and minds. Paradigm shifts occur, ideas swing in and out of favour. The best academics are very smart people to be sure, but they are also tireless promoters of their ideas.
  • The conversation thing: I know, a bit clichéd and cringeworthy, but undeniable nonetheless. Keep a blog, open up comments, watch inbound links, and you’ll get a lot of feedback on your writing. For an academic, that’s almost an unfair advantage. It’s especially important for research-industry links.
  • Saying little things: A blog is a great way to capture all the little things. Again, here’s where the so-called problem of no peer review works in your favour. Witness Chris Anderson’s Long Tail blog. He recently explained this blogging style very well: “In the meantime, a slight explanation for why I’ve been indulging in so much theory here. I originally trained to be a physicist, in part because my hero growing up was Richard Feynman. One of the virtues of physics is that it’s based on the concept of understanding the world via first principles, the underlying rules that explain all the complexity around us. … What I’m trying to do here is to establish the first-principle rules of the Long Tail. I realize that the search for a grand unified theory is usually a recipe for ending your days muttering at a blackboard covered in scribbles. But I do think that the economics of abundance are poorly understood, and the Long Tail is as good an opportunity as any to lay out some pointers to how they might work. With your help, we’ll work through some of that here and I’ll find a way to make it easier to digest in the book.”

All these web 2.0isms – blogs, podcasts, wikis – can make a big impact on academia. It’s not up to “academia” to embrace them, because there’s no such single entity as “academia”. Instead, the individual entities that do embrace them will win. Some out-of-touch institutions may ignore – or even deplore – candidates who have blogs. But those institutions will only be reducing their overall quality and doing the candidates a favour anyway. The clueful institutions will take a leaf out of industry’s book and actively encourage and host research blogs, subject to sensible guidelines. It’s difficult to imagine any other way they could get so much publicity about their research, which will in turn attract candidates at all levels, not to mention external funding.

As an aside, most people don’t “get” blogging yet because they haven’t yet discovered the power of RSS aggregation. I was the same – I thought, “why do all these people want to write little snippets of nothing when they could organise their website logically?” Until I started using Bloglines. Google and Yahoo and MS and iTunes are likely to make aggregation a mainstream thing in the next 6-12 months.

Too Kind, Firefox!

Can software be too tolerant of errors?

The mantra of “tolerant on input, strict on output” (something like that, from Bertrand Meyer – it’s usually credited to Jon Postel as the robustness principle) may be true for end users and maybe even APIs, but would you like your compiler to silently sweep warnings under the rug? In most cases, no.

This is where web development with Firefox gets interesting. Firefox is a good browser in the following two ways:

  • Tolerant of buggy HTML. The whole success of the web is due to tolerance of messy documents, broken links, bad scripting, etc., so this is actually a good thing.

  • Excellent development environment. Thanks to the JavaScript console and many helpful extensions (Web Developer, XMLHttpRequest debugging), development is vastly superior to that on other browsers.

Combine these two worthy attributes and you have a problem. What would be useful is a strict extension, or maybe more to the point, an IE simulator (admittedly some aspects of Firefox and Opera already aim to be compatible with IE anyway). It would cut down on the double-testing cycle.

Fancy XML Styling: Bake it Into the Browser

There’s a lot of talk about slapping XML stylesheets over RSS feeds. A new oreillynet article (ta Mike Levin) explains how. I guess this started with a few prominent sites like the BBC strutting their style, and now everyone’s in on the act.

No doubt it’s much better for users, but it’s unfortunate that individual developers have to set up their own stylesheets – after all, the bloggers are the users, and many of them simply can’t or won’t do it. I’m sure the various blogging toolkits will begin to incorporate it, but even then, let the browser take care of these things! It should certainly be in the interest of all browser manufacturers to increase the value of the web by introducing casual users to RSS.

It baffles me that browsers treat all XML files the same way. Even Firefox:

This XML file does not appear to have any style information associated with it. The document tree is shown below.

It’s nice that the DOM is shown, legible and syntax-highlighted, but how about noticing the format and providing some helpful tips? Why not keep an open repository of default stylesheets, one per known XML dialect?
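
Here’s roughly what I have in mind, sketched in TypeScript. This is a thought experiment rather than a real browser feature – the repository URLs and the dialect mapping are invented – but the APIs used (DOMParser, XSLTProcessor, fetch) are the ones a browser or extension has available.

```typescript
// A sketch: recognise a known XML dialect by its root element and apply a
// default stylesheet from some shared repository. (URLs/mapping are invented.)
const defaultStylesheets: Record<string, string> = {
  rss: "https://example.org/stylesheets/rss.xsl",
  feed: "https://example.org/stylesheets/atom.xsl", // Atom's root element
};

async function renderWithDefaultStyle(xmlUrl: string): Promise<Document | null> {
  const xmlText = await (await fetch(xmlUrl)).text();
  const xmlDoc = new DOMParser().parseFromString(xmlText, "application/xml");

  const xslUrl = defaultStylesheets[xmlDoc.documentElement.localName];
  if (!xslUrl) return null; // unknown dialect: fall back to the raw tree view

  const xslText = await (await fetch(xslUrl)).text();
  const xslDoc = new DOMParser().parseFromString(xslText, "application/xml");

  const processor = new XSLTProcessor();
  processor.importStylesheet(xslDoc);
  return processor.transformToDocument(xmlDoc);
}
```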

“Hints on Programming Language Design” by C.A.R. Hoare: Quick Summary

Hoare’s “Hints on Programming Language Design” was written in December 1973, and the first few pages on general principles are still very pertinent. Here’s a summary.

  • A programming language should support the three most difficult tasks in programming:

    • Program Design – “A good programming language should give assistance in expressing not only how the program is to run, but what it is intended to accomplish”.
    • Programming Documentation: “A good programming language will encourage and assist the programmer to write clear self-documenting code, and even perhaps to develop and display a pleasant style of writing. The readability of programs is immeasurably more important than their writeability.”
    • Program Debugging: Various aspects of the language help reduce confusion, and the compiler should be fast.
  • Because programmers are reluctant to learn new languages, simplicity must be a priority if converts are to be gained. (Note: This is sad but true.)

  • Principles:

    • Simplicity: “Some language designers have replaced the objective of simplicity by that of modularity”. But what happens if a bug arises and the programmer doesn’t fully understand what’s going on? “Another replacement of simplicity as an objective has been orthogonality of design”. e.g. complex numbers … as with modularity, a good goal, but no substitute for simplicity.
    • Security: Shouldn’t have to remove security features when going into production. “What would we think of a sailing enthusiast who wears his lifejacket when training on dry land, but takes it off as soon as he goes to sea?”. (Note: Good argument for run-time assertions – see the sketch at the end of this summary.)
    • Fast Translation.
    • Efficient object code. Don’t ignore performance and assume it can be optimised later on. (Note: are languages a reasonable exception to the optimise-on-the-fly school, or is this statement just obsolete?)
    • Readability: “In practice, experience shows that it is very unlikely that the output of a computer will ever be more readable than its input”. (Note: Sad but true again. We’re on the verge of more powerful visualisations, but still not there 32 years after this paper was written.)
  • On reflection (LISP-like programming etc): “The introduction of program structures into a language not only helps the programmer, but does not injure the efficiency of an implementation.”
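
And, as promised above, a tiny sketch of a run-time assertion that stays switched on in production, in the spirit of Hoare’s lifejacket remark. The function and its uses are invented for illustration; it’s TypeScript, not anything Hoare wrote.

```typescript
// A run-time assertion that is never compiled out: check always, fail loudly.
function assertAlways(condition: boolean, message: string): asserts condition {
  if (!condition) {
    // In a real system you might log and abort the current operation rather
    // than crash the process, but the check itself stays on in production.
    throw new Error(`Assertion failed: ${message}`);
  }
}

function withdraw(balance: number, amount: number): number {
  assertAlways(amount > 0, "withdrawal amount must be positive");
  assertAlways(amount <= balance, "withdrawal must not exceed balance");
  return balance - amount;
}
```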