Recording Internet Radio

Updated May 16, 2005: Fixed link.

James Strachan discovers Radio Lover. I use Replay Radio, another program that records internet radio and pushes it into iTunes. The big problem is its handling of timezones – it doesn’t track the timezone you’re specifying program times in, so you have to keep adjusting times to account for daylight savings, both in your own country and in that of the program you’re trying to record.

In any event, podcasting has changed all that. And if the rumour about BBC podcast feeds holds up, live audio streaming will just about become a non-issue.

A Dubious Honour: “goo…oogle” Runner-Up

This post on Google Suggest has rocketed me into 2nd place for searches on “gooooooooooooooooooooooooooooooooooogle”. And a respectable six – that’s 6 short of a dozen! – people or robots have visited softwareas on that basis (thus triggering this investigation). And guess what the number 1 hit is. Actually, don’t. You’ll cringe.

Now, it all makes me wonder: what if I’d used one more o: goooooooooooooooooooooooooooooooooooogle. Would the world be a different place? Has the butterfly flapped its wings? Only time will tell.

Changes Around Here

Seems the market for blackjack among software developers just got bigger. The spammers have unleashed a torrent of trackbacks over the past fortnight, finally provoking me into an upgrade. So I’ve moved to WordPress 1.5, and simultaneously pushed the older podcast MP3s off to an archive location.

Hopefully, your aggregator won’t get the old stuff again. If it does, and you feel sufficiently riled up to send an email, I bravely ask you to save me the effort and send it directly to your favourite spammer of the month.

Incidentally, a huge chapeau to the WordPress team – an amazingly slick upgrade operation. I basically just replaced the 1.2 files with the 1.5 files, edited in the database parameters, and hit a magic URL. WordPress took care of the migration and upgraded the plugins accordingly.

Enclosures haven’t worked out so well – I had to add them in manually. Since I was using an unsupported patch, they weren’t already there, and somehow the 1.5 automatic enclosure feature just isn’t working. Some googling indicates I’m not alone there. Still, it’s a great job overall – looking forward to the new plugins. In particular, the anti-spam ones.

Testing: Mocks and Crocks

In tests, objects can be placed in two categories:

  • “Crock” objects – Objects populated with dummy values and enshrined with dummy behaviour for the purpose of simulating an object that might be encountered in production.
  • “Mock” objects – Objects pre-loaded with predictions about how they will be used, and responsible for throwing an exception as soon as it becomes apparent a prediction won’t be met.

I make the distinction because “mock objects” has entered that industry buzzword state: it has been adopted as a name for the very practices it was intended to supplement. That the name is so evocative is a testament to the clever people who introduced the concept. “Mock” is a lot more memorable than “endo-testing assertion class”.

Crocks have a place too. Especially for system testing, it can be useful to simulate external systems rather than try to connect to them directly. You can plug and play here, wiring your application to use a mix of real systems and simulations, so as to simulate different conditions.

Also, it can be useful to build up a “universe of crocks” for testing purposes. A few prototypical instances of your classes to be used as test data. If you have user accounts, for instance, create a “joeTheSysadmin”, a “sallyTheManager”, and so on. This can work well in conjunction with mock objects. For instance, you can tell a mock to expect that “joeTheSysadmin” will be passed to it. The ObjectMother pattern supports retrieval of Crocks.
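As a sketch of the ObjectMother idea (the User class and the names here are invented for illustration, and I’m using Javascript for brevity):

```javascript
// Minimal ObjectMother sketch (invented names): one place to retrieve
// prototypical "crock" instances for use as test data.
function User(name, role) {
  this.name = name;
  this.role = role;
}

var ObjectMother = {
  joeTheSysadmin: function () { return new User("joe", "sysadmin"); },
  sallyTheManager: function () { return new User("sally", "manager"); }
};

// A test asks the mother for ready-made data instead of building it inline.
var joe = ObjectMother.joeTheSysadmin();
console.log(joe.role); // "sysadmin"
```

Because every test retrieves the same prototypical instances, expectations like “joeTheSysadmin will be passed in” stay consistent across the whole suite.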

If you practice true unit testing – isolating a single class – then mocks should form a key part of the testing strategy. The class should refer only to interfaces, and you wire the tests so that these interfaces are implemented as mocks rather than by other classes in your project. Pragmatism rules the day as always, and you might find yourself using a mix of interfaces and real classes.

Some further clarification on mock objects:

  • If you use a tool like JMock, you can easily create a mock and set up its expectations without hand-coding it. Indeed, with the stubs() method, you can easily create Crocks as well – it lets you tell an object how to respond to a call without setting up any expectations about whether the call will actually be made.
  • Mocking goes hand-in-hand with certain design concepts: interface-implementation separation, dependency injection, IOC/lightweight containers, law of Demeter. You don’t have to, but it helps a lot, and mock objects have actually pushed the use of these concepts across the industry.
  • If you design test-first, you’ll find that classes don’t need to expose properties as much as normal design would suggest. You won’t have getters for every property because there simply aren’t use cases that require them. Without mock-objects, you’d need to expose those properties purely for testing, which is unfortunate because exposing properties adds clutter and increases risk. With mock objects, you don’t have to test your objects by interrogating them. Instead, you can test how your object interacts with other objects.
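To make the crock/mock distinction concrete, here’s a hand-rolled sketch in Javascript (the Mailer example is invented; in Java, JMock would generate the mock half for you):

```javascript
// Crock/stub: canned behaviour, no opinion about whether it's called.
var crockMailer = {
  send: function (to) { return "queued"; }
};

// Mock: pre-loaded with a prediction, fails as soon as it's violated.
function MockMailer(expectedTo) {
  this.expectedTo = expectedTo;
  this.called = false;
}
MockMailer.prototype.send = function (to) {
  if (to !== this.expectedTo) {
    throw new Error("expected send to " + this.expectedTo + ", got " + to);
  }
  this.called = true;
  return "queued";
};
MockMailer.prototype.verify = function () {
  if (!this.called) throw new Error("send() was never called");
};

// The class under test only sees the collaborator's interface...
function notify(mailer, user) { return mailer.send(user); }

// ...so the test wires in the mock and checks the interaction.
var mock = new MockMailer("joeTheSysadmin");
notify(mock, "joeTheSysadmin");
mock.verify(); // passes: the prediction was met
```

Note that the test never interrogates the object under test; it only checks how the object interacted with its collaborator.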

Ubuntu Linux Experience

Since my Toshiba laptop apparently has no line in, just a microphone input, I recently picked up a little Sound Blaster MP3+, an external USB sound card. Had some major problems getting it to work with Mandrake 10, so thought I’d check out the Ubuntu live CD (5.04 – “hoary hedgehog” release), with a view to porting if it works.

Well, I got a big bonus from the start: I finally have wi-fi working, almost out of the box. The Centrino wi-fi didn’t work, but it did work with a Belkin USB device. That’s cool, as I’ve tried a couple of times with Mandrake and others, on both devices, and always had to give up. When it comes time to recompile the kernel, that’s when I say my student days are over, and I have better things to do than spend a few hours tweaking settings, recompiling, rebooting, repeat.

Unfortunately, the soundcard situation’s not so good. Linux seems to be confused by the existence of two devices – the internal and external soundcards. And it’s not helped by the simultaneous presence of OSS, the older kernel-level module, and ALSA, the future of Linux sound. Looks like I’ll be recording podcasts in Windows in the medium term! At least I can now do it with line-in.

All good reasons to make my next PC a no-brainer … switch!

Software Architecture = Business Strategy

Clarke Ching notes that, contrary to popular belief, IT managers have a lot of influence on the business side – profits, strategy, etc. He explains why agile processes make money for our customers. True. And beyond management, there’s architecture, which is also important for business strategy. Whenever anyone influences the architecture, they’re affecting business strategy, whether they acknowledge it or not.

So the techie – programmer or architect – might say:

“I don’t care about strategy, paradigm shifts, and executive-level sushi! Data persistence, messaging protocols, devastating one-liners that make a sharp whooping noise every time there’s a lunar eclipse? That I can take care of.”

Actually, it’s in the organisation’s interests for technical decision-makers to know something about business strategy. It doesn’t have to be enough to influence it, and it doesn’t even have to be too detailed, but it’s a mistake to ignore it.

My thinking here is heavily influenced by Software Architecture in Practice (Bass et al). Design is all about making trade-offs, and this book delves into numerous case studies to illustrate how technical trade-offs are related to real-world strategy.

Case in point: the web. Why did it succeed where previous attempts at hypertext failed? Because it’s decentralised, so that it could scale up indefinitely. And it’s error-tolerant. You don’t have to write perfect HTML. Slight tangent here, but I have to add this legendary quote from Philip Greenspun:

… at least you already know how to write legal HTML:

My Samoyed is really hairy.

That is a perfectly acceptable HTML document. Type it up in a text editor, save it as index.html, and put it on your Web server. A Web server can serve it. A user with Netscape Navigator can view it. A search engine can index it.

OK, that’s the web, a global phenomenon. How about in a single company? Again, some appreciation of strategy is vital. What’s our message to customers? Beyond the obligatory lip service, how much do we care about customers? Some companies have a very high regard for ongoing customer relationships, so IT system design should be aligned with that concern. That usually means optimising on usability and performance, and ensuring the helpdesk works smoothly, for instance. If customers are using different browsers or PCs, it might be necessary to support all of them.

Other companies might care less about existing customers and more about new customer acquisition. Here, the concerns might shift towards more agile designs, so that new functionality can be implemented more easily. Thus, we would no longer be concerned about supporting all the browsers and platforms, as that would slow us down in adding new functionality. This would be an optimisation towards flexibility at the expense of portability. Business strategy has directly influenced technical decision-making.

Yes, this is all very approximate. But the decisions are real. Whenever a choice is made for a web UI framework, or a persistence strategy, or even an overall architectural style, developers should be conscious of the qualities they are trading off: flexibility, reliability, usability, performance, maintainability, understandability, testability, uptime, etc. And once you’re aware of these qualities and have to weigh up the pros and cons for a particular decision, you need some criteria to decide where the sweet spot lies. One solution might be particularly flexible, but not very reliable. And so on. The right way to pick an alternative is to think about which gives the best business value, and that comes from understanding how, in this circumstance, the business values each quality.

Finally, all this is relevant to any development process, not just agile. The implications will differ, e.g. more developers tend to make critical decisions in agile projects, and the decisions will usually be based on present needs rather than future speculation. And as a more general point, it makes sense for IT departments to support this dissemination of knowledge, again a strength of agile projects. Nevertheless, the argument always holds: to deliver business value effectively, technical decision-makers require an appreciation of business needs.

Java 1.5 Reflection: getMethod() when the method has varargs

Updated May 18, 2005: Shrunk width due to narrow WP style (it’s like coding in 30 columns per line).

package assertion;

import junit.framework.TestCase;

import java.lang.reflect.Method;
import java.lang.reflect.Array;

public class ReflectionTest extends TestCase {

    private class Shop {
        public void order(String... productIds) {}
    }

    public void testReflectingOnVarargsIsEquivalentToReflectingOnAnyArray()
            throws NoSuchMethodException {

        // A varargs parameter compiles to an array parameter, so reflect
        // on String[] (Array.newInstance(String.class, 0) builds an empty
        // String[], whose class is String[].class).
        Method goodMethod = Shop.class.getMethod("order",
                Array.newInstance(String.class, 0).getClass());
        assertNotNull(goodMethod);

        // There is no zero-arg order() ...
        try {
            Shop.class.getMethod("order");
            fail();
        } catch (NoSuchMethodException e) {}

        // ... and no order(String) either.
        try {
            Shop.class.getMethod("order", String.class);
            fail();
        } catch (NoSuchMethodException e) {}
    }

}

Podcast+Text: The AJAX Web Architecture

This podcast discusses AJAX, an architectural style for web applications that has become popular in recent months.

Click to download the podcast mp3

This is a Podcast – a blog entry with an embedded MP3 link. On Internet Explorer, click the left mouse button to listen, or click the right mouse button and “Save As…” to download. Better yet, you can subscribe for updates straight into your PC or iPod – it’s easy, open, and free. Install the free, open-source iPodder client and, when it starts, just paste this in: “http://www.softwareas.com/podcast/rss2”. Too easy! 15 minutes and you can be subscribed to receive thousands of MP3 podcasts – from official BBC documentaries to business personalities like Sun COO Jonathan Schwartz to scores of amateur publishers speaking about their own interests. They will download straight to your PC as soon as they’re published. You can listen to them on your PC or any portable MP3 player. If you have an iPod, programs like iPodder will push the downloaded MP3s straight into iTunes, so you can leave home each day with new content loaded on to your iPod. More info in the Podcast FAQ.

Quick Overview

Traditional webapps continue to send pages in their entirety, upon each user request. Consider a wiki such as Wikipedia:

  • User changes some text
  • Browser submits the new text
  • Server saves the text and sends the entire page again, updated this time
  • Browser clears previous page and draws all of new page

AJAX apps don’t redraw the whole page. Instead, they send a little request, receive a result, and adjust the page accordingly. The wiki of the future will look like this:

  • User changes some text
  • Browser submits the change
  • Server saves the change and sends a confirmation and maybe the latest timestamp.
  • Browser adjusts any information, e.g. shows the new timestamp.

For a glimpse of wikis to come, check out this Instant Edit webapp. Fire it up in two different browsers and see how the changes persist without reloading the page.

Most famously, Google has a few AJAX applications: Google Maps, GMail, Google Suggest.

YARC! Yet Another Rich Client!

AJAX, and the underlying XMLHttpRequest object, is the latest approach in the tradition of enriching the web platform. To put it into perspective, here are a few other attempts at rich applications over the years:

  • In the early 1990s, the Mosaic browser made the web clickable; it was among the first graphical browsers, and the first to reach a wide audience.
  • Images, then animations and sounds, were later added. ASCII art soon faded as quickly as silent film.
  • Java allowed for client-side applets.
  • Javascript (no relation) allowed for embedded programming inside HTML and led to a dramatic rise in cross-browser compatibility tables.
  • Flash was introduced, and with Flash MX, became more programmer-friendly.
  • Client-side GUI applications increasingly connected to the internet. e.g. Multiplayer games, Desktop search tools, ITunes Music Store, Auto-updating virus protection, Dreaded “free” spyware.
  • Frames, and even better, invisible IFrames allowed for invisible request submission and manipulation of the current web page.
  • Each browser continues to offer its own proprietary extensions (with some possible clicking of checkboxes or downloading of extra components): MS offers .NET support, Mozilla and Firefox offer a powerful plugin architecture, Opera offers a presentation package.

The Heart of AJAX: XMLHttpRequest

AJAX is a new name (Feb 2005) for a design style that has been possible, and in fact used sparingly, for the past couple of years. The underlying technology is XMLHttpRequest, a Javascript object that supports web requests. Every big modern language has a class like this: it takes a URL, fetches the content, and provides query support. XMLHttpRequest supports standard text as well as XML documents. That means a web page can wait for Javascript events, submit info to a web server, catch the resulting output, and play around with the current page.

One more thing about the technology: the request-response cycle is asynchronous. The XMLHttpRequest object is registered with a response handling Javascript function, and fires off to the server. Some time later, the server probably comes back, and the registered function catches the result. And that’s what AJAX stands for: Asynchronous Javascript + XML.
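The register-then-send pattern boils down to a few lines. In this sketch, FakeXMLHttpRequest is a stand-in I’ve written so the snippet runs outside a browser; in a real page you’d construct the browser’s own XMLHttpRequest (or the equivalent ActiveX object on IE), and the calling pattern stays the same:

```javascript
// FakeXMLHttpRequest is a stand-in for the browser object, so this
// sketch is runnable anywhere; the calling pattern is the point.
function FakeXMLHttpRequest() { this.readyState = 0; }
FakeXMLHttpRequest.prototype.open = function (method, url, async) {
  this.method = method; this.url = url; this.async = async;
};
FakeXMLHttpRequest.prototype.send = function () {
  // A real request would hit the network; the fake "responds" at once.
  this.readyState = 4;          // 4 = complete
  this.responseText = "pong";
  if (this.onreadystatechange) this.onreadystatechange();
};

var result = null;
var req = new FakeXMLHttpRequest();
req.onreadystatechange = function () { // 1. register the handler
  if (req.readyState == 4) result = req.responseText;
};
req.open("GET", "/ping", true);        // 2. true = asynchronous
req.send();                            // 3. fire it off and carry on
console.log(result); // "pong"
```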

Let’s look at an example: This chat application is based on AJAX. You can see the Javascript here. (The entire thing has a Creative Commons license.)

When the user says something, the function sendComment() is called. It grabs the user’s message from the input field and passes it to httpSendChat, an XMLHttpRequest object. httpSendChat posts it to the server.

 
    httpSendChat.open("POST", SendChaturl, true);
    httpSendChat.setRequestHeader('Content-Type','application/x-www-form-urlencoded');
    httpSendChat.onreadystatechange = handlehHttpSendChat;
    httpSendChat.send(param);
 

The onreadystatechange line is the registration: it says that the response will come back to the handlehHttpSendChat function. That function will discard any partial responses and, upon receiving a full response (readyState == 4), it will redraw the chat (which involves another trip to the server):


function handlehHttpSendChat() {
  if (httpSendChat.readyState == 4) {
    receiveChatText(); //refreshes the chat after a new comment has been added (this makes it more responsive)
  }
}

receiveChatText() asks the server for the recent discussion history, and ensures the response goes to handlehHttpReceiveChat(). That function rearranges the chat text according to the recent messages:

function handlehHttpReceiveChat() {
  if (httpReceiveChat.readyState == 4) {
    results = httpReceiveChat.responseText.split('—'); // the fields are separated by —
    if (results.length > 2) {
      for (i = 0; i < (results.length - 1); i = i + 3) { // goes through the result one message at a time
        insertNewContent(results[i+1], results[i+2]); // inserts the new content into the page
      }
      lastID = results[results.length - 4];
    }
    // ....
  }
}

Great, it works if I say something. But it’s a chat program. What if someone else says something? The application simply polls the server, ensuring that “receiveChatText” is called every four seconds.
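The polling step itself is only a few lines. Here’s a sketch – the Poller wrapper and its names are mine, not from the chat source; separating tick() from the timer makes the logic easy to exercise without waiting four seconds:

```javascript
// A poller that invokes a callback on each tick; in the page it would
// be wired to setInterval so the browser keeps asking for new messages.
function Poller(pollFn) {
  this.pollFn = pollFn;
  this.polls = 0;
}
Poller.prototype.tick = function () {
  this.polls++;
  this.pollFn();
};
Poller.prototype.start = function (intervalMs) {
  var self = this;
  return setInterval(function () { self.tick(); }, intervalMs);
};

// In the chat page: new Poller(receiveChatText).start(4000);
var received = 0;
var poller = new Poller(function () { received++; }); // stand-in for receiveChatText
poller.tick(); // what the timer would do every four seconds
console.log(received); // 1
```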

All About Usability

The chief beneficiary of AJAX is the user. Web applications feel much more responsive, and the user won’t hesitate to perform actions for fear of slow response times, or outright timeouts. Furthermore, form data need not be lost due to browser crashes: using a timer, it can be sent to the server every few minutes, just like auto-backup.
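A sketch of that timer-based auto-backup idea – the names are invented; readForm and sendDraft would wrap the form field and the XMLHttpRequest call respectively:

```javascript
// Periodically push the draft to the server, but only when it changed.
function Autosaver(readForm, sendDraft) {
  this.readForm = readForm;
  this.sendDraft = sendDraft;
  this.lastSent = null;
}
Autosaver.prototype.save = function () {
  var draft = this.readForm();
  if (draft !== this.lastSent) { // skip the round-trip if nothing changed
    this.sendDraft(draft);
    this.lastSent = draft;
  }
};

// In a page: setInterval(function () { saver.save(); }, 5 * 60 * 1000);
var sent = [];
var saver = new Autosaver(function () { return "dear bob"; },      // form contents
                          function (draft) { sent.push(draft); }); // server call
saver.save();
saver.save(); // unchanged draft: nothing re-sent
console.log(sent.length); // 1
```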

For standard form-based applications, that’s a nice benefit, but hardly a killer app. Where AJAX will shine is in truly rich applications – in particular, on intranets, where many corporations have already migrated traditional GUI applications. This migration process has usually been led by technologists concerned with the infrastructural overheads of administering and upgrading standalone applications. It’s much easier to have all the applications sitting on the server and the clients running a standard web browser.

These web migrations may have improved administrability, but they have often caused users pain. Ironically, users are often left longing for applications written a decade before the web apps. It doesn’t help that most projects are clueless with regard to usability, but even if usability is considered, the web platform is inherently unusable: the control components are simplistic, and server synchronisation is confusing and time-consuming. AJAX doesn’t do anything for the controls, but at least it brings the server and client closer together.

Problems with AJAX

Some objections are taken from the resources below, especially AJAX: Promise or Hype.

It’s Hard to Code

Any Javascript usually makes life more difficult, and early discussions indicate AJAX is no different. At present, coding for AJAX may well be more difficult, although if you look at the code examples around, you’ll see that you’re not exactly facing a Turing Test either. In any event, it’s inevitable that design patterns and supportive frameworks will emerge. A few frameworks already facilitate this mode of development: the always-controversial, uber-funky Ruby on Rails, JSON-RPC-Java, DWR (Direct Web Remoting), SAJAX, and Echo 2. Fortunately for evolution, a wide variety of approaches is being taken. Cross-fertilisation will undoubtedly follow.

In particular, the best frameworks will probably generate as much Javascript as possible, so developers don’t need to co-ordinate between Javascript and server-side controllers.

Testability May Suffer

It’s nice to be able to perform system tests with a robot like HttpUnit. Any use of Javascript makes that more difficult. At the same time, because AJAX promotes a more component-based architecture, unit testing may actually be improved. With a good design, it should be quite feasible to test the scripts that are accessed by the XMLHttpRequest object.

Accessibility May Suffer

Any form of interactivity is often anathema to many different types of specialised needs. Nevertheless, this should not stop the technology from progressing, and providing rich interaction to those who can use it. As always, accessibility must be maintained, and multiple mechanisms might be required. Furthermore, new technologies can improve accessibility too. It’s easy to imagine, for example, how an AJAX-enabled site could let users quickly resize and move around certain screen elements to meet their individual needs.

AJAX Will Collapse the Network

AJAX does represent a potential challenge to networking infrastructure. Traditional web applications can feel like earlier client-server applications. Submit your offerings, then receive a response and meditate on it for a while. AJAX makes the term “web application” a lot more honest. The server really is involved, possibly even after each keystroke. Interestingly, bandwidth requirements may go down because usually only small changes need to be sent each way. However, latency is another question: using an AJAX application might feel like typing against a slow telnet connection. Stuff… hap…pens…much…sl..ower…than you…can…think… .

This will probably not be a major concern on intranets, where there are relatively few users and usually good connectivity to the server (especially as it’s often nearby). However, it’s still an open question how AJAX will be used on the public web. Certainly, it can be used to incrementally improve just about any form-based application. And it can surely go beyond that, as Google demonstrates. But can it scale to the requirements of a major site, offering a fully-scaleable wiki or genuinely playable gaming?

It’s Just a Name, the Tech’s Not New At All

XMLHttpRequest has been around for a few years, but it would be hard to believe anything called “XMLHttpRequest” could trigger a revolution on the PCs of the world. Frames and IFrames supported this sort of interaction even earlier. History and logic would suggest that a standard name and community, combined with some flagship applications, are powerful tools indeed. And the timing is right: users have lived with static web applications long enough, broadband is now mainstream, and the economy hungers for innovation. The raw technology may have been around, and even used in doses. But all signs indicate that the new name, given the increased need and the prominent offerings by Google, constitutes a tipping point.

Bill Won’t Like It

Let’s be clear. This could have happened a lot earlier. There’s a lot of unfounded nonsense about MS on the web, but there is indeed broad agreement that MS does not benefit from adoption of rich web applications. And for pretty obvious reasons. They worked hard to innovate with IE in the mid-90s, attaining the dominant position. Consequently, they managed to crush Netscape’s dreams of replacing the Office Suite and Sun’s dreams of Java on every desktop. It’s hard to see MS doing anything about XMLHttpRequest within IE though; the interaction it provides is rich, but quite frankly, not that rich.

Flash Can Do All This, and More

Flash is a bit of a mystery, since it’s extremely cross-platform, having excellent support on all the major browsers and platforms. And yet, it’s never taken off for serious application work. In fact, it’s really been used for not much more than ads and fancy presentations. It’s certainly capable of doing much more serious applications, and maybe Flash MX will still shine. It took a long time for Macromedia to target serious development. Perhaps this was a strategic mistake, or perhaps it was an intentional means of gaining wide browser share.

Examples

Further Resources

Let Them Have Quake: Bad Programmers are Negative Contributors

I just began reading The Business of Software. The intro alludes to that old adage about a good programmer being 10-20 times as effective as a bad programmer. I’ve heard various numbers for this “Hot/Not” ratio, sometimes 100 is bandied about. I think it’s a fine way to convey a pertinent point of software development to non-techies. The figure is obviously arbitrary, so no issue there. I’ve had the (mis)pleasure of working alongside both ends of this spectrum, and, at least from what I’ve seen, it all makes sense, as long as you assume the developers are isolated.

Given a small project to work on in isolation, a hotshot hero programmer might be able to produce a high-quality system in a couple of weeks. Meanwhile, the bizarro programmer will spend some time looking things up, and eventually cobble together something vaguely meeting requirements, vaguely compiling, vaguely running, in a few months. Thus validating the 10x theory, or the 100x theory if you’re factoring in quality too.

However, the Hot/Not ratio falls down in most projects, because the bad programmer usually works in a team, not in isolation. In agile projects, as well as the industry-standard code-n-fix approach, all code is up for grabs. Agile projects usually promote collective ownership; code-n-fix projects often do promote some idea of direct ownership, but reality usually rears its ugly head and people are forced to read and write others’ code. Only in the most rigorous, well-executed, top-down, sequential design projects can developers really work in isolation. Whether those projects are desirable is beside the point, the point being that most projects are either agile, or code-n-fix, or attempts at sequential development which degenerate into the latter. And that being the case, I rest my case that most programmers work in teams.

In team-based projects, a bad programmer can be every bit as effective as a good programmer, but in the wrong direction. In other words, the same contribution with negative polarity, meaning that the “Hot/Not” ratio is negative. Perhaps -1, -10, who knows? And really, you can have the whole spectrum between those two extremes, which is why the standard “tenfold” figure is really very arbitrary.

Where the hero can singlehandedly code a system to a higher state, the bizarro hero can drag it down with the flick of a key. By a bad programmer, I’m not just talking about someone whose development experience consists of “Parallel Computation Algorithms in 24 Hours”. I’m talking about someone with poor communication skills too. Communication skills are at least as important as technical skills, and I’m assuming we’re a little under-par here too.

Specifically, a bad programmer has a negative contribution on a team-based project, because (PC disclaimer: I’m saying “he” for concreteness):

  • Subtle bugs get introduced by careless coding. They can take a long time to locate and fix.
  • He is not interested in, or perhaps not very good at, expressing what he’s done to other team members, who may find certain things curiously weird, but for no known reason.
  • He is not interested in, or perhaps not very good at, listening to other team members, so he will trample on their work.
  • Other developers constantly have to slow down to give explanations and help in debugging sessions. (It’s good to encourage discussion and support, but only if it doesn’t become groundhog day, and it should usually happen in both directions.)

Does all that sound like a dampening effect – e.g. one-tenth a good programmer – or a downright damaging effect – the opposite of a good programmer? Yep, the Hot/Not ratio is negative in most projects.

Now, the agile approach typically says that if all the interventions fail, you ask the person to leave the team. But there’s a practical problem: for various organisational reasons, some people must remain on a project even when they are known to have a negative contribution factor. If the “tenfold” ratio applies, the manager might reason that the bad programmer may as well do his 10% job. But if the negative theory above holds – and I maintain it does – the right thing to do is to sidestep them altogether. The best thing a manager can do in this situation is to let them have Quake. That is, give them a well-spec’d PC in a corner of the room, a copy of their favourite game, a suitable game controller, and some good headphones. Or, realistically (the Quake thing just sounded cool), a little greenfield tool they can develop in isolation from the code base. Alternatively, monitor changes very closely. This brand of sidelining isn’t pretty, and let’s keep in mind I’m talking about situations where the programmer has already had all the opportunities to pick things up and just wasn’t interested. It’s a pragmatic way to deal with those awkward situations where a developer really isn’t in good shape to touch the source code, but must remain on the project.

Self-Documenting Software at SPA 2005

OK, my final SPA posting. This time, some notes from my own workshop. In the next few days, I’ll be doing a podcast (yes, I’m still officially podcasting after much silence!) on self-documenting software which will cover all this.

The slides are online at my homepage: http://mahemoff.com/paper/software/SelfDocumentingSPA2005/selfdocumentingSpa2005.html

This was a 75-minute workshop to discuss practical techniques for achieving self-documenting software. I explained how usability principles can be applied to design and coding, we had a lot of interesting conversation, and we performed some coding exercises.

The workshop was interactive, with a couple of exercises: one was looking at a before-and-after code example, the other was a refactoring exercise. I’ll post the code on the slides link above at some point. The conversation was good – people had a lot to say on the topic – so we didn’t get as long on the refactoring exercise as I’d planned, but the trade-off was worth it. There’s certainly a lot in this topic; an extended workshop is on the cards at some point.

  • Since the workshop was entitled “Software, Document Thyself”, I realised in retrospect that some people might take this to mean an automatic document generation tool. So when I saw Cenqua’s April 1 Commentator tool, I figured it was only fair to point those people in the right direction.

  • Motivation for self-documenting software lasted all of about 60 seconds … a quick survey indicated I would be preaching to the converted.

  • I showed how we can draw from HCI theory to learn about coding techniques. A well-worn assumption in HCI is that users don’t read instruction manuals, and shouldn’t have to anyway, so the design should be as intuitive as possible. As Donald Norman explained, doors shouldn’t need “Push”/”Pull” labels. So the design principles for self-documenting UIs have much to offer those of self-documenting code. In particular:

    • Consistent Within an Application
    • Metaphor and Common Ontology
    • Consistent Across Applications – Standard Ways to Accomplish a Task
    • Familiar Language
    • Attention Layering – Overall Structure in a Blink
  • I offered some specific points on: reducing waste, providing affordances, refactoring, focusing on the typical case, learning trajectory – supporting the transition from novice to expert.

  • We discussed using a thesaurus and a little about my pet Programmasaurus project. An exercise involved producing synonyms and discussing where we might use them. Some interesting comments here:

    • Where you use a term, as in many HCI concepts, depends on the context. You can’t just say, use “get” here and “obtain” there.

    • There are often word pairings or groupings. For instance, some words don’t have a clear opposite term.

  • The refactoring exercise was interesting. I’d seeded a few spurious points, which were quickly detected, but, moreover, there were various general ideas about how the refactorings might be accomplished. People were quick to point out that enums would help a lot with both examples: the Monty Hall example could have benefited from enums to represent each door, and the Connect 4 example from enums to represent each side. With the Connect 4 example, there was some discussion about perhaps using java.awt.Color to encapsulate each side (red and green). Other points included elimination of useless setters/getters and renamings.

One group provided a nice demonstration of what is meant by service-oriented architecture (or inversion of control or pull principle, to use various other overloaded terms). They found one of the method names confusing, so the first thing they did was “Find Usages…” to understand its context and presumably help them rename it.