Did you hear the one about enterprise reuse?

Confirming that enterprise reuse can be a bit of a joke at times, Jason Gorman shares this fable on enterprise reuse (via another inspired Jason). Short summary: two ladies could save 8 cents by boiling water for their tea in the same kettle. But the cunning analyst forgets that, since they live 20 miles apart, there will be overheads to the tune of a $20 cab ride and the travelling time.

Viewed from a high level, enterprise reuse is a noble goal; what’s the point of being a single company if everyone writes their own code? In practice, it can be fraught. Ironically, it’s usually easier to reuse publicly-available libraries (e.g. open-source libs on sourceforge) and public web services than those in the same company. The following things make reuse more digestible in an enterprise setting:

  • Language-agnostic, industry-standard technologies: Using obscure or proprietary technologies can work in an individual team, but rarely in a large enterprise; in most cases, there are simply too many factions with different skill sets and legacy code bases. There are companies that describe themselves as a “pure Java shop”, for example, but you will inevitably find pockets working in Python, .Net, and so on. Getting an enterprise to truly standardise (not just pay lip service) on something non-industry-standard is futile. It takes several months for people to get really competent in a new language; in an environment full of contractors, with staff turning over every few years and legacy systems everywhere, you can count on the fact that there will be disparate technologies at play. It’s a good thing, too; no one language (or paradigm, for that matter), not even Java *gasp*, is the right solution to all problems.
  • Service-oriented: SOA as in “built on a needs-driven basis”. The stuff that’s available for reuse is stuff that’s been abstracted from real-world projects, where at least one project already built it and at least one other project actually needs it. (Rails is successful because 37Signals uses it; there aren’t dozens of 24-month working groups involved.) Perhaps the biggest mistake enterprises make in this whole area is pushing out functionality no-one else actually wants to reuse.
  • Support trumps standardisation: The best way, IMO, to encourage a certain technology or library is the carrot, not the stick. Make people actually want to reuse what you have to offer, rather than forcing them to. I am very sceptical about any situation where architects have to act as the reuse police; if the component or service is well designed, well documented, easy to locate, and serves a genuine need, wouldn’t developers be drawn towards it? Wouldn’t they actually want to use it, and maybe even give something back to it? In an ecosystem where components and services are high-quality and easily accessed, you can forget about mandating reuse because it will happen anyway. See Web API Patterns and Documentation as Conversation for the kinds of things that will make this happen.
  • Online: As a rule of thumb, offering a centralised web service is better than offering a reusable code component. The web service can (and should) be easier to use, and it’s language-agnostic. Obviously, there are situations where code components make more sense, especially from a performance perspective; I wouldn’t use an online service to create a polygon every millisecond, for example.
  • Easy to use: As with any API, it should be easy to learn and easy to call. For this reason, online services should be RESTful, not SOAP or CORBA or whatever-MQ, if you can help it.
  • Iterative progress: Don’t try to bite off more than you can chew; if you start pretending *everything* can be reused, you’ll soon find that nothing gets reused.
  • Simple and parsimonious: Leave out the trivial features that matter only to one particular client. In enterprise reuse, this can be a big problem, because a client project may be the budget holder for the reusable component. It’s difficult, but someone needs to stand up and say “no, we’re not going to include feature X because no-one else would actually need it”. In software, deciding what to leave out is usually a greater challenge than coming up with new things to put in. Any feature that won’t be used by a significant proportion of client apps will create more clutter than it’s worth. In a broad-scale service, I’d say this minimum proportion should be something like 5-10% (e.g. each method should be exercised by 5-10% of the clients that use the class). In an enterprise context, where there may only be a few clients, I’d make the criterion “at least 2 clients”. (There was a podcast interview a while ago, with PragDave I think, where he was asked what he would include in Rails 2.0. He essentially replied that he’s more worried about taking things out – pushing them out of the core distro and into plugins.)
  • Automated: Sometimes, people think “it’s all under the same roof”, so getting access to an API, or even learning about it, requires a call or a meeting with the owner of the reusable service/component. If Google offered the same thing, by contrast, it would provide online documentation and a means of accessing it automatically, without any human intervention. An agile enterprise should aspire to the same thing; it doesn’t have to be as polished as a public offering, but the spirit should be the same. Otherwise, it won’t scale, and the owner will soon become fed up with doing the same thing over and over.
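To make the “online, automated, easy to use” points concrete, here’s a minimal sketch of an internal capability exposed as a plain HTTP+JSON service, using only the Python standard library. The “office lookup” service and its data are invented for illustration; the point is that any team, in any language, can discover and call it without a meeting or code-level coupling.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A hypothetical shared capability -- an office-location lookup --
# exposed as plain HTTP+JSON, so a Java, Python, or .Net team can reuse
# it without linking against our code or booking a meeting with us.
OFFICES = {"/offices/london": {"city": "London", "tz": "Europe/London"}}

class OfficeLookupHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        record = OFFICES.get(self.path)
        body = json.dumps(record if record else {"error": "unknown office"})
        self.send_response(200 if record else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

def start_service():
    """Serve on an ephemeral localhost port; return (server, base_url)."""
    server = HTTPServer(("127.0.0.1", 0), OfficeLookupHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, "http://127.0.0.1:%d" % server.server_address[1]

if __name__ == "__main__":
    server, base = start_service()
    with urllib.request.urlopen(base + "/offices/london") as resp:
        print(json.loads(resp.read()))  # {'city': 'London', 'tz': 'Europe/London'}
    server.shutdown()
```

Any consumer that can speak HTTP and parse JSON can use this, which is exactly the language-agnostic, no-human-in-the-loop property the bullets above call for.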

Designing Like a Pollyanna: Have your Cake and Eat it Too

“The novel’s success brought the term “pollyanna” (along with the adjective “pollyannaish” and the noun “Pollyannaism”) into the language to describe someone who is cheerfully optimistic and who always maintains a generous attitude toward the motives of other people. It also became, by extension – and contrary to the spirit of the book – a derogatory term for a naïve optimist who always expects people to act decently, despite strong evidence to the contrary.”

— Wikipedia entry for Pollyanna

A little while ago, I was talking to some colleagues about an authentication problem. I suggested we needed to improve its usability, and my colleague’s immediate response was to argue that there’s a security-usability trade-off, i.e. by improving usability, we must resign ourselves to decreasing security. Push one lever down, the other goes up. I’ve never bought into this argument. While certain forces do often conflict with each other, they don’t always, and it’s counter-productive to constrain your solution space with immediate presumptions like that.

Clarke Ching triggered this post with his posts on Non-Zero Sumness and having your cake and eating it. The latter belongs to a twerpy category of fallacious sayings and assumptions based on a scarcity mentality. For example, the notion that you have to “divide the pie”, so that a gain for A is a loss for B. The alternative (as a certain presidential candidate was ridiculed for pointing out) is to make the pie higher. Whoever invented multi-storey buildings figured this out a long time ago.

(BTW “You can’t have your cake and eat it too” is a ridiculous premise; what else are you supposed to do with your cake? I see from the Wikipedia article that the original phrasing – “wolde you bothe eate your cake, and have your cake?” – is the reverse, and makes more sense: you can’t eat your cake and still have it.)

Fortunately, we have sayings and phrases which remind us that it doesn’t have to be a trade-off. Unfortunately, these terms – e.g. “win-win” and “synergy” – have been so over-used by the proverbial “MBA suit guy” that we’re not allowed to use them without offering an apology. (Both terms I use without apology. I will stop short of “gestalt”, though.)

All the above is theoretical rambling – here are some real-world examples.

  • Security-Usability Biometric authentication allows people to enter places without typing a password or carrying a card, and is, generally (and theoretically) speaking, a more accurate indication of a person’s identity. Neither security nor usability suffers from this technology; both have been enhanced. (Privacy advocates will argue there is still a trade-off at stake.)
  • Safety-Usability As with security, safety is often assumed to be at odds with usability. In our work on safety-usability patterns, we sought synergies which would ideally improve both at the same time. For example, consider the pattern called “Behaviour Constraint”, sometimes referred to as a “forcing function”. In some buildings, when there’s a fire, a barrier goes up on the stairs leading to the basement, to ensure people don’t keep running down there (an example from Don Norman). The wheels of a plane can’t be raised if the plane isn’t moving (to prevent a pilot from raising them while on the tarmac). These examples make life easier for users, by reducing the number of decisions they have to make, and at the same time improve safety by blocking the transition to dangerous states.
  • Ajax-Accessibility When Ajax emerged, some people had a knee-jerk response – “Ooooh! Javascript! Bad for accessibility!”. In some cases, it was true. In many other cases, though, Ajax actually improves accessibility. e.g. Ajax makes it much easier to let people choose font size and colour scheme.

Knowing that certain forces trade off against each other is still useful as a kind of broad-brush stereotype. At a meta level, it’s a kind of general software engineering pattern to say “security and usability trade off against each other”. (It’s a pattern in the sense that if you looked at hundreds of software projects, you’d see people frequently dealing with this trade-off.) I would like to explain such a pattern to a first-year software engineering student, so they can more easily reason about how these things work. It’s also useful in scrutinising a design; I would be quite comfortable ending up with a design with “high” security and “low” usability, or vice-versa, after exploring all the options, because I know these principles do usually trade off. And for that reason, I’d consider it somewhat unrealistic for a client to expect a feature to have both “high” security and “high” usability, and would only engage on such a quest after explaining to said client that the effort has a high chance of failing to meet these criteria.

So these trade-offs are useful and are my defence against the second part of the Pollyanna definition above – “a derogatory term for a naïve optimist who always expects people to act decently, despite strong evidence to the contrary.” I’m not ignoring those trade-offs, I’m just saying they shouldn’t be an automatic assumption. It plays to a higher-level rule about design and creativity: ignore constraints at first. In too many cases, people are afraid of going down certain paths because they can immediately detect an obstacle down the road. I once tried producing a novel design with a domain specialist who would immediately shoot down almost any idea I had, on the basis that users would never work with it. Once I bypassed him and got to the users themselves, the situation was quite different; the users embraced the concept and gave further suggestions for improvement. The domain specialist, who had long since stopped working as an active user, was applying constraints too early and preventing us from proving the concept (“proving” == “bashing into shape”). In many other cases, the barrier is technical – you come up with an idea and it’s immediately blocked on the basis that it can’t be done with current technology. (The sort of limiting belief you would have heard if you proposed Google Maps in 2004.) In the same way, when we automatically assume “improving X will diminish Y”, what we’re doing is limiting our options prematurely. Better to ask first “is there a way I can improve both?”, or, failing that, “is there a way I can deliver the critical attribute X without diminishing Y too much?”.

OAuth-OpenID: You’re Barking Up the Wrong Tree if you Think They’re the Same Thing

OAuth, OpenID…they sound like the same thing, and they do vaguely similar things. But I’m here to tell you, OAuth is not Open ID. They have different purposes. I’ve been playing around with OAuth a bit in the past couple of weeks and have a grip on what it’s aiming to do and what it’s not aiming to do.

To start with, here’s what OAuth does have in common with Open ID:

  • They both live in the general domain of security, identity, and authorisation
  • They are open web standards. Created and evolved by people with an itch to scratch and evolved pragmatically by a loose, fluid, alliance. Think REST, not SOAP. Think Bar Camp, not The 25th Monosemiannual International Convention for the Society of Professionals who Devise Acronyms Quite a Bit.
  • They both celebrate decentralisation. There is no central Open ID or OAuth server that holds all the security information in the universe (cf Passport). Anyone can set up as a server or a client.
  • They both involve browser redirects from the website you’re trying to use – the “consumer” website – to a distinct “provider” website, and back again. Meanwhile, those websites talk to each other behind the scenes to verify what just happened.
  • The user can actively manage the provider website, exerting control over which websites can talk to it and for how long.

With that much in common, the casual observer could be forgiven for confusing them. But they’re different. Not different as in “vying to be the no. 1 standard”, but different as in “they let you do different things”. How so?

Open ID gives you one login for multiple sites. Each time you need to log into Zooomr – a site using Open ID – you will be redirected to your Open ID site where you login, and then back to Zooomr. OAuth lets you authorise one website – the consumer – to access your data from another website – the provider. For instance, you want to authorise a printing provider – call it Moo – to grab your photos from a photo repository – call it Flickr. Moo will redirect you to Flickr which will ask you, for instance, “Moo wants to download your Flickr photos. Is that cool?”, and then back to Moo to print your photos.
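The Moo–Flickr dance can be sketched as a toy, in-memory simulation. This shows the general shape of the flow (request token, user approval, access token), not the literal OAuth wire protocol; all class names and token formats here are invented for the example.

```python
import secrets

class Provider:
    """Plays the Flickr role: holds the data and asks the user to approve."""
    def __init__(self):
        self.pending = {}   # request tokens awaiting user approval
        self.approved = {}  # request tokens the user said yes to

    def issue_request_token(self, consumer_name):
        token = secrets.token_hex(8)
        self.pending[token] = consumer_name
        return token

    def user_approves(self, token):
        # In real life this happens on the provider's own page:
        # "Moo wants to download your Flickr photos. Is that cool?"
        self.approved[token] = self.pending.pop(token)

    def exchange_for_access_token(self, token):
        if token not in self.approved:
            raise PermissionError("user never approved this request")
        return "access-" + token

class Consumer:
    """Plays the Moo role: wants your photos, never sees your password."""
    def __init__(self, name, provider):
        self.name, self.provider = name, provider

    def start(self):
        self.request_token = self.provider.issue_request_token(self.name)
        # ...browser redirect to the provider happens here...
        return self.request_token

    def finish(self):
        # ...browser redirected back; now the back-channel exchange...
        return self.provider.exchange_for_access_token(self.request_token)

flickr = Provider()
moo = Consumer("Moo", flickr)
token = moo.start()          # step 1: Moo obtains a request token
flickr.user_approves(token)  # step 2: you say yes on Flickr's site
access = moo.finish()        # step 3: Moo swaps it for an access token
print(access.startswith("access-"))  # True
```

Note that the consumer only ever handles tokens; the approval step happens entirely on the provider’s side, which is the whole point of the redirect dance.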

With OAuth, you still need to log into the provider. e.g. When Moo sends you to Flickr, you still have to log into Flickr (or be logged in already). How Flickr decides you’re logged in is completely orthogonal to OAuth. It could be a standard username-password login, it could be via a physical password device, or it could well be via Open ID.

With Open ID, there is no suggestion of two web apps sharing your data. Except in the very limited sense that the Open ID provider may hold some general information about you, e.g. some photos, addresses, phone numbers, etc., and, with your consent, send it back to the consumer so you don’t have to re-enter all the boring profile details again. However, this is data of a generic, non-application-specific nature. (And even this limited form of sharing is an extension to the core Open ID spec.) With OAuth, any information you hold on any website can be shared with another website. You could share your GMail with a clever consumer that automatically tags items by inspecting the content, if GMail were an OAuth provider.

Or you could copy your GMail address book into Facebook, by allowing Facebook to read your GMail account. Right now, the only way to do that is to give Facebook your GMail username and password. Clearly a dumb thing to do, and that’s exactly the kind of thing OAuth is set up to prevent. OAuth prevents it by explicitly asking you if you want to let Facebook grab your details from the provider. That’s not a problem Open ID solves. Even if Facebook and GMail used Open ID and you had accounts with both against the same Open ID, you still couldn’t get Facebook to read your GMail account. The Open ID provider wouldn’t let Facebook log in to GMail as if it was you. Even if an Open ID extension came along to allow it, you wouldn’t want it. It’s too coarse-grained and would allow the consumer to do too much potential damage. OAuth is more fine-grained – consumers can do some things with your provider data, not everything.
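To see why fine-grained beats password-sharing, here’s a hypothetical sketch of scoped access tokens: the consumer’s token lets it read your contacts but nothing else. The scope names and the provider model are invented for illustration; real providers define their own scope vocabularies.

```python
class ScopedToken:
    """An access token carrying explicit, limited permissions."""
    def __init__(self, owner, scopes):
        self.owner, self.scopes = owner, set(scopes)

class MailProvider:
    """Plays the GMail role: guards contacts and mail behind scopes."""
    def __init__(self):
        self.contacts = {"alice": ["bob@example.com", "carol@example.com"]}
        self.mail = {"alice": ["(private message bodies)"]}

    def read_contacts(self, token):
        self._require(token, "contacts.read")
        return self.contacts[token.owner]

    def read_mail(self, token):
        self._require(token, "mail.read")
        return self.mail[token.owner]

    def _require(self, token, scope):
        if scope not in token.scopes:
            raise PermissionError("token not authorised for " + scope)

gmail = MailProvider()
# Alice authorises the consumer for contacts only -- not her mail.
facebook_token = ScopedToken("alice", ["contacts.read"])

print(gmail.read_contacts(facebook_token))  # ['bob@example.com', 'carol@example.com']
try:
    gmail.read_mail(facebook_token)
except PermissionError as e:
    print(e)  # token not authorised for mail.read
```

With a password, the consumer could do anything Alice can do; with a scoped token, the worst it can do is what she explicitly approved.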

Advice to OAuth Providers: Consumers, Consumers, Consumers

Gabe Wacho offers some good advice to OAuth providers:

Understand that many of the consumer applications of your service are driving users to your site, and in the world of composable services, your consumer application developers will often have choice. Choice means power. Recognize.

Several good points in his article. The main message, though, is to target consumers, not just end-users. This is sound logic, like a chocolate manufacturer seeing its retailers as a customer, as much as the guy who eats the chocolate. Stated in strategic marketing terms, Porter’s Value Chain reminds higher-level suppliers to see through the eyes of those they supply to, so they can understand how they add value further down the chain. e.g. our chocolate manufacturer considers how it can help a supermarket add value to the chocolate bar, e.g. by its product design, packaging, reporting systems, delivery mechanisms.

In the software world, one company which heavily relied on this principle is Microsoft. As mentioned in The Story Behind “Developers, Developers, Developers!!!!”, MS traditionally focused heavily on developers to make the platform flourish. On the whole, companies on the web are pretty good at this nowadays, with developer blogs and the general use of Documentation as Conversation.

Screencasts with Audio on Wink

I’m at a workshop on widgets. At a lot of workshops, people build some code, demo it, and then go away, and no-one can see it running again. In an ideal world, we’d keep the apps running forever, but that relies on a complex tangle of internal and external services remaining online and staying in the same form. For example, will Twitter’s experimental OAuth have the same interface in six months’ time as it does today? In an experimental workshop involving mashups, there are bound to be numerous calls to services like that. The best way to preserve the work is not to keep the apps running, but to capture screencasts in which the developers can explain the underlying concepts.

On the Mac, I use iShowU for screencasts (like the Web-O-Random screencast I did a while ago). For Windows and Linux, though, there’s Wink, which is nice as it’s (a) free; (b) capable of producing SWF files directly (Flash movies which can be embedded in a web page, YouTube-style). I last tried Wink two years ago, to make some AjaxPatterns screencasts that never happened. (It’s funny to think that at the time, I was bothered about how to host and serve these files, a few MB each. Now I’d just store them on Dreamhost at 1+TB/month for about $20.) At the time, Wink didn’t handle sound, so you had to go through contortions to get an SWF movie containing the screencast with audio. Now it does, but the support turns out to be not brilliant. When I tried to record with the “audio” option checked, the audio ended up broken – 1 second on, 1 second off. That would suggest a buffering issue, but there’s plenty of memory available.

So here’s what I discovered, which actually works (using Wink 2.0). Instead of simultaneous audio and video, you can record audio over a single – frozen – frame. i.e. the frame will be frozen while you say your thing. It’s not Tarantino, but good enough for an explanatory screencast.

  1. Start a new project, with audio option not checked.
  2. Record the interaction without audio (Shift-Pause to start, Alt-Pause to stop). Don’t slow down on critical events as you can easily add delays during editing.
  3. Once you’re done recording, Wink will show a reel of all frames on the bottom of the screen.
  4. Click on a frame showing a critical event. On the right of the screen, there’s a dialog showing properties for this frame.
  5. Click on the “+” audio button, which will produce a recorder. You can now record some audio, which will play while the frame is shown. The frame is automatically paused for the duration of the audio you record.
  6. Now do Project Menu|Render and then Project Menu|View Rendered Output to see your video and hear your narration.

(I’m aware this is a plain-text post, explaining how to use software without screenshots or screencasts. Isn’t it ironic?)

Leopard Restraint

I’m as excited as anyone about Leopard. In particular, Time Machine and Spaces. Time Machine because backups have to be automated and I’ve never investigated the options. Spaces because virtual desktop is the one thing I really, really, miss from Linux. I also have hopes that Spotlight will actually be worth using. And I know there will be tons of the little things which seem pointless in isolation, but make a superb impact overall.

However, I will not be upgrading until at least the first significant patch. Leopard early adopters suffer for the rest of us. I salute you guys for your assistance in making the platform more stable. Me, I’m going to continue using Tiger until the “blue screen” problem is a thing of the past, Skype works, and MOST IMPORTANT, Rails works without the hassle I went through on Tiger when it first came out (it wasn’t very funny).

Apple and Google both have a policy of secrecy, which has been highly successful in an era where the common mantra is openness and transparency. It works for both of them, but it’s safer for Google, where the barrier to usage is low. For Apple, there’s always a risk of charging hundreds or thousands for something that turns out to be seriously broken. They’ve been fine to date, mostly due to the zeal of the Apple crowd, as well as what must be some very savvy development and testing processes. Still, as an individual user, I make sure to protect myself from the risk of early releases.

The sweet spot for me and Apple is being a late Early Adopter – that’s the right balance between the increased productivity from new features and the loss of productivity from using an OS that’s been sent into the wild sans beta test. I may be a fanboy, but you won’t see me queuing up in Regent St for a week. I’ve gotten by without Leopard for some decades; I can wait another month.