The Tragedy and Triumph of Podcasts

It’s now about 6 years since I discovered podcasts while listening to a pre-podcast podcast, The Gillmor Gang. It’s everything I ever wanted from radio talkback – niche topics, on-demand listening, access anywhere, rich metadata, and no music – I’ve chosen to listen to talkback for a reason (Hello Australian Broadcasting Corporation).

A perfect storm of iPods, massive bandwidth, and feed religion made podcasts possible, and they are still going strong. However, they’ve never taken off in the mainstream, and you can’t say they haven’t had a fair chance. Apple’s inclusion of podcasts in iTunes and iOS makes them pretty darn accessible if people want them, yet many people aren’t using them. Having informally surveyed a few people, I’ve found they aren’t actually aware how easy iTunes makes it to subscribe to podcasts, so there’s more work to be done there. But I think if there were enough word-of-mouth publicity, people would be using it to subscribe. It’s no harder than uploading photos, for example. (I do have many reservations about iTunes, but those are more for advanced users.)

Podcasts haven’t taken off in much the same way as RSS feeds and news readers have never taken off. Or have they? I recently heard Jon Udell speaking on the topic (on some podcast or other, not his own) and he made the point that we expected everyone would wake up in the morning and open up a reader of feeds they’d subscribed to. Didn’t happen. But feeds did happen: social feeds, in the form of Facebook, Twitter, FourSquare, Buzz, and so on. Anyway, those don’t really translate to podcasts, not yet anyway. If Huffduffer let you subscribe to all your friends’ feeds, it would be possible, at least in a geeky niche community.

My main point here is to highlight a few things that haven’t happened for podcasts, and would make them better and just a bit more popular if they did. I’m not arguing these things would make podcasts wildly popular; consider this mostly a wishlist and some pointers to a few trends:

Hardware: So we have these networked devices, right? The most prominent at this time being iPhone and iPad, but they still don’t sync over the cloud. Using Android recently, I’ve come to appreciate how nice it is to sync podcasts in the background, over the air. The latest podcasts are just there. The downside is you have to use an expensive phone, which is a problem for gym and running, and also a drain on that precious battery life. While in the US, I recently picked up an Ibiza Rhapsody player, a bargain at $44 for an 8GB player which automatically connects and syncs. It would be even better if I could sign up to the Rhapsody service in the UK, but that’s not gonna happen. The neat thing is it has podcasts built in, and lets me sync them over the air. The downside is it doesn’t have a keyboard, so if I want a feed not in the default list, I have to type it manually using the one-at-a-time, left-right-left-right character entry. Now, I’ve been waiting for someone to release a mini Android device, so I was blitzkrieged to hear This Week In Google mention a new line of Archos “tablets”, including a 3.2″ device. Which will be perfect for gym and running, allowing me to switch between podcasts and Spotify, with both of those things syncing over the air, and at $150, cheap enough to risk overzealous destruction :). Can you say Drool.

Cloud OPML: It’s awesome we have a standard like OPML, a simple way to declare a list of feeds, AKA Reading Lists. (Technically, reading lists are a subset of OPML, but OPML is the term commonly used, so I’ll keep using it here.) However, in both the podcast and feed-reader worlds, there’s an extremely weird tendency to assume OPML lives on your hard drive. Many newsreaders and podcatchers let you import and export OPML, but they assume it sits on your hard drive, not in the cloud! Why? I have no idea. The whole concept is inherently cloud, so it makes no sense. I just want to stick my list of podcasts on a server somewhere, and when I start using a new client, it downloads them for me. As a consequence, I’ve manually entered my subscriptions dozens of times over the years. This is especially important for mobile devices – especially ones without a keyboard – like the Rhapsody player I mentioned above. Podcatcher and feed-reader developers, I urge you to pull down subscriptions from OPML resources in the sky, and to offer users the ability to publish their subscriptions that way too!
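The client side of this is trivial, which makes the omission all the weirder. Here’s a minimal sketch (the OPML URL is hypothetical) of a podcatcher pulling its subscriptions from a cloud-hosted reading list:

```python
# Minimal sketch of a podcatcher pulling subscriptions from a cloud OPML file.
# The URL is hypothetical; any web server hosting the OPML document would do.
import urllib.request
import xml.etree.ElementTree as ET

def parse_opml(opml_text):
    """Return the feed URLs declared in an OPML reading list."""
    root = ET.fromstring(opml_text)
    # Each feed in a reading list is an <outline> element with an xmlUrl attribute.
    return [outline.get("xmlUrl")
            for outline in root.iter("outline")
            if outline.get("xmlUrl")]

def fetch_subscriptions(opml_url):
    """Download the OPML document from the cloud and extract its feeds."""
    with urllib.request.urlopen(opml_url) as response:
        return parse_opml(response.read())

# e.g. fetch_subscriptions("http://example.com/my-podcasts.opml")
```

Publishing works the same way in reverse: PUT or POST the OPML document back to the same URL whenever the user’s subscriptions change.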

Archives: Sadly, podcasts don’t live on the way blog posts do. This is sad because many podcasts are reference material, not just latest news. Take a podcast like the excellent History According to Bob. Over the years, he’s produced hundreds of fine recordings on all manner of ancient and recent history. But subscribe to his podcast, and you’ll only be able to backtrack 8 episodes. Now I chose Bob as an example because he actually offers older podcasts for DVD purchase, but most podcasters would be fine to let people get hold of old episodes; they just have no way to make it practical. History is not the only topic; there are podcasts about movies, science, economics, software engineering…where a 2004 episode would be just as relevant today, if only you could get hold of it. Some podcasts include every single episode in the feed, but then certain clients will end up pulling down gigabytes of data when each user subscribes. As a user, your best bet is to scour archives – if they exist – and use something like Huffduffer to aggregate them. But that’s still painful and not something every user will do. Odeo was on the right track, building up a long list of every episode produced on each feed, whether in the current feed or not. But Odeo spawned Twitter, and Odeo sadly is no more.

Integrate with Music Players: Call it, “if you can’t beat them, join them”, but I would love to see the music services embrace podcasts. Spotify, for example, has a great interface for choosing songs on the fly as well as subscribing to playlists; it could easily be extended to podcasts to become a one-stop-shop for your listening needs. Playdio is an interesting move in this direction, allowing people to record talk tracks in between music tracks, and their contact form mentions podcasts, so maybe there is hope. Still, I wish Spotify et al would just bake podcasts into the player and be done with it. And considering the social features these things are starting to have, it could actually be quite powerful.

Social: There’s not really much you can do to find out what friends are listening to and all that cal. There’s Amigofish, but it would be nice to see it baked into the players directly.

True, music will probably be in first place for the foreseeable future, mirroring reality, but its needs have already been met, much more so than those of talk formats, where there really hasn’t been much innovation since 2004.

Micropayments: What Portable Devices May Bring

Amid the tablet hype is an interesting article from Derek Powazek –

“”” I’ve been thinking about how to make money from online content since I launched Fray in 1996. Really, I can’t tell you how many nights I’ve sat up, obsessed with it. It’s been my white whale. And here’s what I’ve come up with: a little bit of advertising works, so long as it’s classy, and sell some paper if you can. But any plan that includes walling off your content from the rest of the web is destined to fail, unless it’s porn of some kind (financial data is a kind of porn).

Why? It’s not because everyone online is a cheapskate. It’s because consuming content offline is still a much better experience. Leafing through a glossy magazine is simply sexier than clicking through a PDF.


Apple could release a device that makes consuming media fun, is able to show any PDF beautifully (just like the iPod would play any MP3), and offers new media for sale in the iTunes store. If they did it right, publishers like me might finally be able to sell something digital that people would actually buy. “””

Powazek is getting at some kind of payment for premium tablet-formatted content. I’m not too sure it could work, but it does make me think about micropayments.

Most of the talk about content is about buying books and subscribing to newspapers. Those are traditional business models applied to the web – paying hundreds or thousands of cents. Even that would be an improvement for publishers, who are forced to rely on ads right now.

But what if Apple took the in-app payment model and applied it to microcontent? Built that sort of thing into the core browser and viewing software? You’d end up with something like Jakob Nielsen’s 1998 vision of micropayments – and note he’s talking about touch-screen gestures:

“”” Very cheap costs of maybe less than a cent per page would be invisible in the user interface. If it would cost half a cent to follow a link, then that link should probably be shown in exactly the same way as free links. The time to consider the payment would be more expensive than the payment itself, so the payment should be hidden for the user (except, obviously, for appearing on the monthly statement from the payment service).

Slightly more expensive costs of maybe 1-10 cents per page could be visualized by a simple glyph, a slight color change for the link, or by having the cursor show the cost as a pop-up when the user points to the link. It would also be reasonable for the user’s computer to contact a reputation service to gather information about other users’ experience with the link: If most other users felt that the destination page was not worth the cost, then a dialog box stating that fact should be shown if the user tries to activate the link. If the destination page has a good reputation rating, then it would be a waste of time to show the dialog box, and the user would be taken directly to the page if he or she activated the link.

Expensive pages costing more than maybe 10 cents would always require the user to click “OK” in a confirmation dialog before the cost was incurred.

Very expensive actions costing more than maybe $10 would require a different interaction technique than simply clicking an OK button since users often do so automatically without reading the warning text. An unusual interaction should be employed to make it clear to the user that a major expense was about to be authorized. The figure shows one of our ideas for a way of authorizing a payment: the system would show an image of a check with the cost and payee filled in. The user would authorize the payment by a sweeping gesture across the signature field, which would cause a digitized image of the user’s signature to appear. Ideally, the sweeping gesture would be done by actually touching the user’s hand to the screen, but it would also be possible to use a mouse gesture on systems without a touch screen. “””

There would also need to be some settings to cap daily spending and perhaps cap spending on a particular domain.
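Nielsen’s tiers plus a spending cap amount to a simple decision table. Here’s a sketch of how a browser might implement it – the cost thresholds come from the quote, but the tier names and default cap are my own invention:

```python
# Sketch of Nielsen's tiered micropayment UI, with the daily spending cap
# suggested above. Thresholds (in dollars) follow the quoted tiers; the
# tier names and the default cap value are invented for illustration.

def payment_treatment(cost, spent_today, daily_cap=50.00):
    """Decide how the browser should present a priced link."""
    if spent_today + cost > daily_cap:
        return "blocked"         # over the user's daily cap: refuse or demand override
    if cost > 10.00:
        return "signature"       # sweeping signature gesture for major expenses
    if cost > 0.10:
        return "confirm-dialog"  # explicit OK before the cost is incurred
    if cost >= 0.01:
        return "glyph"           # subtle cue: glyph, colour change, or cost pop-up
    return "invisible"           # sub-cent: charge silently, show on monthly statement
```

A per-domain cap would just be another lookup before the same checks, keyed on the link’s host.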

Will Apple do it? Probably not on Day 1. There’s much more low-hanging fruit. But they do have all the pieces in place to do it some day – they control Mobile Safari and the surrounding OS, the platform has users’ trust, and most importantly, they can charge micropayments seamlessly.

Will others do it? I hope so. I’d love to see this experiment play out.

Will it work? Yes, I think there’s something here. Powazek says walled content isn’t possible on the web, and his solution is that users will pay for a premium view of the content. I’m saying walled content could work if it was embedded into the fabric of the web and it gave you Powazek’s premium view. You could imagine a free, high-quality article on the league website about tomorrow’s football final. It lists all the players, along with a thumbnail. Clicking on any player will set you back 10 cents – the browser subtly indicates this without obtrusively confirming when you click. For the 10 cents, you get a tablet-customised view of your player. It hooks into phone settings – you can save the picture as your screen saver and switch your voicemail prompt to be a recording from the player. (This is much cheaper than it would cost as a standalone app today, but the app store itself shows that selling lots of cheap units may be a more profitable strategy than a few expensive units.) And it hooks into other apps – sports geeks can export the player’s stats into their spreadsheet app.

What lives at the URL of premium tablet content, when you’re not viewing it on a tablet? That’s the million-dollar question. On the one hand, you could say nothing, but then purists will argue it’s not part of the conversation and people won’t end up visiting it. Fair point, though I think there could still be enough interest if the landing page is high-quality enough to attract links and search results. On the other hand, the ideal thing is really progressive enhancement: the page still lives, the content is the same, but the tablet renders it specially. The good thing is the flexibility: website owners could make up their own minds about how to degrade, and the market will eventually learn what’s best.

People will pay a lot for content in the right form. A few years ago, the same teenagers who refused to pay for high-quality renderings of their favourite bands’ songs were forking out big bucks for rubbish-quality ringtones. If the tablet is a fun device to hold and surf on, people will pay for tablet-formatted content.

One of the problems with a plain-old subscription model is it goes against the model of the web as a series of interconnected nodes. It’s silo-based. A micropayment model, where any page can charge a fee, might get over this problem. It does still have the grey cloud over it: will users want to constantly be making decisions about what they’re looking at? If nothing else, it might detract from the experience. That’s why the industry might evolve more into a Spotify-style “all you can eat” model. The tablet manufacturer (e.g. Apple) charges users $10/month for “premium web content”; perhaps it’s included in the purchase, depending on how the plan works regarding 3G payments. Of the $10, it keeps $3 for itself and divvies up the remaining $7 equally to the owners of all pages the user visited that month. Users might see a small star icon somewhere while viewing premium content, just so they know they’re getting their money’s worth. 10 bucks a month to see content in premium form on a device you love? Even if the raw content is freely available without the payment, I could see this model working.
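The arithmetic of the divvy-up is easy to sketch. The $10 fee and $3 platform cut are from the model above; the per-visit weighting is my own assumption (the equally-plausible alternative is an equal share per unique page):

```python
# Back-of-envelope sketch of the all-you-can-eat split: a $10/month fee,
# $3 kept by the platform, and the remaining $7 divided across the premium
# pages the user visited that month. Weighting per visit is an assumption.

def monthly_payouts(visits, fee=10.00, platform_cut=3.00):
    """Split one user's monthly fee across the owners of pages they visited.

    `visits` is a list with one entry (the page owner) per premium page view.
    """
    pool = fee - platform_cut      # $7 left after the platform's cut
    share = pool / len(visits)     # equal share per visit
    payouts = {}
    for owner in visits:
        payouts[owner] = payouts.get(owner, 0.0) + share
    return payouts
```

Summing each user’s payout table across the whole subscriber base gives each publisher’s monthly cheque.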

Events Last Week: Web Fonts, Social Design Patterns, BT Dev Day, Real-Time Javascript

Last week saw a confluence of excellent events. In the same week as a house move, it proved to be a week of much learning and little sleep. I’d hoped to do a better write-up, but it never happened: a combination of being too busy and the fact that MAC BATTERIES SUCK, meaning the lappy couldn’t last through the whole session. But fortunately, I have some links to point to. Here’s a quick summary anyway, along with the linkage.

Web Fonts at London Web Standards

@otrops captured the live notes in glorious detail, as did Rob Crowther.

Ben is ideally placed to cover the brave new world of web fonts, being a web maker who studied fonts at uni. He walked us through the evolution of font hacks on the web: image with alt tag; CSS background image with text pushed off the page; rendering with Flash (sIFR); rendering with Canvas or SVG (Cufón, typeface.js), using JSON-based font spec data. It all leads up to the holy grail: @font-face.

Great, so we have @font-face, but issues remain:

  • The foundries – Mark Pilgrim, in no uncertain terms, complains the font vendors are stuck in the dark ages of the printing press, in their resistance to supporting @font-face. This seems to be changing with WOFF, a web-only format that seems to placate the foundries, who worry their fonts will be stolen. It seems more like a symbolic gesture, since the data could still be converted and in any event the print fonts could still be appropriated, but the foundries are feeling more reassured and making signs they will go along with it.
  • Performance – Bandwidth costs, and Paul Irish’s “flash of unstyled text”, where the user notices the font change once the fancy @font-face font has been downloaded.
  • Compatibility – IE has long supported @font-face, but only with EOT-format fonts, and that remains the case. You therefore need both types of font, and licenses will generally not give you both.
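The compatibility point boils down to declaring the same face twice: an EOT file that only IE understands, then the formats everyone else reads. A rough sketch of the common workaround (the font name and file paths are invented):

```css
/* Serving EOT to IE and WOFF/TTF to everyone else; names and paths invented.
   IE reads the first src and ignores the second; other browsers do the reverse. */
@font-face {
  font-family: "NiceWebFont";
  src: url("fonts/nicewebfont.eot");               /* IE only */
  src: local("NiceWebFont"),                       /* use an installed copy if present */
       url("fonts/nicewebfont.woff") format("woff"),
       url("fonts/nicewebfont.ttf") format("truetype");
}

h1 { font-family: "NiceWebFont", Georgia, serif; }
```

Of course, the licensing problem remains: this only works if your license actually covers both file formats.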

Social Design Patterns

I was, needless to say, psyched about this. Yahoo! has been the closest thing to a realisation of the inspiring design pattern vision of the mid-late ’90s: patterns on the web, for both its own employees and the wider community to learn from and evolve. These are social design patterns mined by Christian Crumlish (@mediajunkie), and patterns are, in many respects, the closest thing software has to an analogue of building architecture, where design patterns originally came from.

There are 96 patterns in all and I’m looking forward to poring through them. In these patterns are hundreds of people-years’ experience of observing real-world social systems. In my own pattern work, I’ve found it necessary to articulate the overarching design principles behind the patterns. Pattern languages should be opinionated, so it’s a good thing to make explicit your criteria for filtering design features. Christian has followed this model too, and identified 5 overarching principles:

  • Paving the cowpaths. Facilitating the patterns that are already happening, rather than imposing your own invented process. This also means evolving with your users; e.g. Dogster started as photo sharing but evolved into a social network.
  • Talk like a person.
  • Play well with others. Open standards, mashups, etc. “If you love something, set it free.”
  • Learn from games: game UI concepts, and designing the rules, but ultimately letting the people who come into the space finish the experience themselves.
  • Respect the ethical dimension.

See the wiki or the book for more details.

BT Developer Day

This was an internal conference for BTers in London covering a range of general tech trends, and also being a chance to get together and talk shop. The agenda included talks on Scala, Rails, Kanban, iPhone development, and even a lightning talk from @psd on the horrors and delights of APL.

I gave a talk on Embracing the Web, emphasising open standards and the supreme primacy of Konami Cornification.

Real-Time Javascript

At the Javascript meetup, a great talk on NodeJS and WebSockets. NodeJS is coming on thick and fast, and Makoto Inoue showed how the technology plays nicely with WebSockets. WebSockets are all about Comet-style interaction, so expect to see a lot more of this combo in the next couple years.

Luis Ciprian, visiting from Brazil, gave us an overview of XMPP and talked us through a real-time web app – a basketball score and conversation tracker – using XMPP.


Web Tablets: The Tipping Point is Nigh

It’s been said that the world hasn’t been this excited about a tablet since Moses came down the mountain. January 27, 2010, is the day Apple is slated to finally put us out of our misery and tell us what it’s all about. But imagine Steve Jobs moseys onto the stage, launches into a forceful history of Apple’s portable record, and then announces Apple’s launching some new iPod speakers. (It happened a few years ago.) No “one more thing”. No tablet. Not even an iPhone OS 4.0.

Even if this happened (it won’t), Apple will have added huge value by sparking a conversation about the future of computing. While some say all the speculation is a waste of time, in this case, I’ve actually found some of the discourse rather fascinating. In particular, Gizmodo’s invocation of Jef Raskin and the “information appliance” dream, and John Gruber’s analysis.

I think Gruber nails it. Steve Jobs, in what many consider will be his final act at Apple, is attempting no less than the next generation of computing UI. Many people are already finding they can get by with just their iPhone for many tasks. Myself, I actually prefer to read blogs in NetNewsWire and articles in Instapaper, both on iPhone. I can read these away from the distraction of the big machine, whether at home, commuting, or in a shopping line. I’ve been trying to read stuff on mobile devices since the Palm Pilot, and now it’s truly practical. If people are finding their phone does some things better than the computer, imagine what will happen when you have a big touch-screen, let alone any secret-sauce innovations like tactile feedback or live docking into desktop equipment. I think we will find it’s more than adequate for many casual users and a valued extra device for power users.

But this is about much more than Apple. I think we can take it for granted that the medium-term future will be all about touch-screen tablets. We’ll struggle our way through questions about how to stand them up and challenges like their never-satisfying battery life. And what happens when they fall on the floor? Oh, and there will be patent wars galore. But the category will grow fast, as many people start to reap the benefits of a double whammy: better interaction, more convenient form factor.

The really interesting question is how will the UI on these tablets work? The Gizmodo and Daring Fireball articles point in the right direction – it will be more like the new “super phone” mobile generation and less like the traditional PC. Lots of sandboxing, lots of highly-customised idiosyncratic interfaces (but with common idioms), and lots of abstraction (==liberation) from the file system, lots of app stores and repositories.

Now one model for all this is iPhone OS, the custom-built operating system Apple put together for its own phone. Is there another model?

Of course. The web.

And we can do it today. Apple won’t; others will. We have the makings of an operating system that does all that. Lots of sandboxing? Yep, the whole security model of the web assumes domains don’t trust each other, unlike traditional desktop applications. Lots of customised interfaces? Yep, with Canvas and CSS3 and SVG and WebGL and audio and video and screamingly-fast browsers and a million brilliant libraries and toolkits. Lots of abstraction? Yep, the web never did like file systems, and with offline storage, it doesn’t have to. App stores? Yep, a simple system of URIs and standard HTTP security techniques can do it easily.

Most developers would rather code in technology they already know, that’s open, and has a diverse community contributing a wealth of “how-to” knowledge.

It’s all happening now. Google has ChromeOS. Palm has WebOS. Nokia and others have the W3C web widget standard. Stick these things on tablets, and a whole new generation of UI will flourish.

Tablets this Time Round are Different, Really

A convincing argument that it really is different this time round, compared to the days of “pen computing” and Windows tablet editions at the start of the decade.

Microsoft always loved the stylus, but most people hate it. Apple and others understood that actually touching the screen is far more appealing than using some funky pen. And touch requires an entirely different user interface, which Microsoft was unwilling or unable to build into Windows until Windows 7. The casual observer might believe that the usability difference between pen and touch is small. But using a pen is an unnatural act, one that until very recently only a tiny minority of people ever engaged in. The psychological payoffs for using a pen on paper are the tactile feel of the paper, the instant feedback of the trail of ink and the physicality of stacks and files and binders of paper notes. Pen-based computer systems don’t offer any of those payoffs.

Touch is one of seven reasons cited. Others include the rise of e-readers, HD video on demand, app stores, and mobile-specific operating systems like Android.

A further reason would be improvements in soft keyboard technologies. That a soft keyboard could work, as on the iPhone, was a surprise to many; we probably won’t see those funky projected keyboards taking off any time soon. While some will say the tablets aren’t for performing serious work, everyone needs to type occasionally, at least to perform a web search.

Another reason is battery improvements, though we’re still a long way off from being able to play HD video, surf the web, and chat on Skype, simultaneously and all day long.

Great, now we have all the reasons in the world. Unfortunately, we don’t have the CrunchPad anymore and the Apple rumour is just that. Will tablets really take off? For all the arguments, there remain some unanswered questions.

Most important is the form factor; although lighter and thinner than ever, tablets are still awkward to hold, maybe awkward enough for only geeks to love and certain professionals to tolerate. This is an area where Apple could surprise us, just as they did with the MagSafe power adaptor. It was innovative, unexpected, useful, intuitive, and a product of the physical world rather than the digital world. They’ll do well to repeat that feat with some kind of stand for the rumoured tablet.

The other thing is connectivity. I can’t see these taking off in a big way unless they ship with a 3G SIM card, like the Kindle. We’re finally entering an era of more SIM cards than people, and it makes a whole lot of sense – at least among retail consumers – to treat bandwidth like oxygen, instead of having to bash wifi into working for us. It’s true that wifi itself works well these days – the client detects the type of encryption and asks for a password. The problem comes when it’s restricted, i.e. a hotel has to give you a special password or you have to pay for it via a website. That’s where the friction comes, and it gets even worse with those systems that proxy everything and kill the session when you leave the browser (say, to send a tweet from a Twitter app). Many of us will still need wifi while mobile bandwidth isn’t ready for high-end uses like HD video – and while data roaming charges remain stiflingly high – but a built-in SIM card is the only way forward as the default mode of connectivity. I’m currently paying £7.50 per month (~$12) for 3GB/month of downloads, and that came with a new USB modem. At a rate like that, it’s a no-brainer for tablets.

Notes from Paul Annett Talk

Update – some links:

Paul’s deck/audio (needs a bit of time to buffer)

slide deck

Standard live blogging alert

I’m here at the Gumtree offices, thanks kindly to @cyberdees++ for letting me know about this lunchtime talk. Paul Annett (@nicepaul) just gave a fun talk about “oooh, that’s clever” design, which contains lots of fascinating ideas and examples around the subtle things – the hidden, the unexpected, and the often functionally pointless – that delight people and get them talking.

Is this Stuff Relevant? 13 Million People Can’t be Wrong

To demonstrate how people are fascinated by little secret details, he began by talking about a magic trick he posted to YouTube, “This’N’That magic trick”, with no less than 13M views. The interesting thing is people’s attention to detail: the most commented-on thing is the few frames in which the third “magic card” is exposed. It’s as if they have discovered a little secret they can tell their friends about.

Offline Examples

You see similar things offline too: (sorry no links – I’ll just let you google these)

  • The hidden arrow in the FedEx logo.
  • The hidden bear in the Toblerone “mountain” logo.
  • The Aerosmith logo, which still says Aerosmith upside-down.
  • Derren Brown’s Trick or Treat logo, where the Trick is the Treat logo upside-down.
  • The “truce” logo, where another “truce” fits into it.
  • The Venetian Snares logo.
  • Mickey Mouse symbols – Disney hides them in different places, e.g. a silhouette in a picture, a figure in a manhole cover in the theme park, and even a field on Google Maps.
  • The bottom of an Innocent Smoothies carton, containing various messages.
  • The inside of Moo card packages, which contain little hidden cartoon figures when you dismantle them.

Hardware and Software Easter Eggs

This has been going on a while in technology with easter eggs. There are hardware easter eggs such as a cheetah on a microchip. Look at the glow an Apple Mighty Mouse makes. An irregular pattern? No…it’s definitely a real mouse outline. And of course software easter eggs, e.g. “about:mozilla” in Firefox, the saga continues.

The parallax employed on the Silverback website is a well-known example of this. (In fact, @psd talks us through it on I Can’t Believe It’s Not Flash.) Parallax has been used elsewhere since then. The holding page for twequency used it nicely. Tweet1 also has some lovely effects like this, and also has an easter egg figure (at least one) appearing. A lot can be done with parallax, e.g. you could use it to jumble sites around. The effect here is just a demo, but could be expanded to something more.

It adds to the brand of a large company when you see it has a sense of humour, e.g. zooming into Google Moon would give you yellow cheese. Try “ascii art” or “recursion” searches.

The dConstruct site had a nice easter egg where a hidden top bar would switch between different stylesheets. Some people said it detracts from the experience, breaks proper design principles, etc.; but the point is: users can still get everywhere; it’s just a bit of a delighter for people in the know.


There are many interesting uses of transparency:

  • Modernista has a funny iframe idea. A good example of transparency, which is a concept that’s under-used. Skittles also did this, much more controversially, partly because they copied the idea from Modernista.
  • This trailer shows a fake browser with things breaking out of it, to promote the movie’s 3-D-ness.
  • CSS Zen Ocean (zen garden project)
  • The Wario Land video breaks your expectations of YouTube, such a well-known site structure. A good example of a commercial application. Likewise, the iPod Touch ad on Yahoo! Games.
  • HEMA similarly breaks your expectations of an e-commerce site.

What’s the Point of All This?

Kano Model of Customer Satisfaction: two dimensions, customer satisfaction and execution. “Performance needs” tend to improve both at the same time, e.g. quick hotel check-in, quick e-commerce delivery. “Basic needs” are ones where better execution won’t delight (i.e. improve satisfaction); they are simply expected, and their absence will cause dissatisfaction. “Excitement needs” are things like the little design features we’ve been talking about.

These change across time. What excited you last year becomes expected this year; as enough time passes, the excitement need becomes a basic need.


I asked Paul about how to get these features known, when you obviously don’t want to just blab about them on your blog. With big sites, you can rely on people to discover these things by accident or investigation, but with little sites, they could sit for years without anyone spotting them. He mentioned that simply telling a few friends and seeing where it goes is one route. A more technical trick is positioning some text outside the document: users will see it when they zoom out. It would also be interesting to experiment with an image at the edge of a document turning out to be something unexpected when you zoom out to see the whole thing. “View Source” is another common thing you can play with. Tangentially related are playful copyright messages and 404 pages.

Someone mentioned “contextual delighters”: things specific to certain users, e.g. “Welcome, fellow Reddit user”. You could also do the “styling visited links” hack to freak people out. Who’s going to make a site that tries to log into other websites using the same username and password, then says “actually, you used the same password on Flickr, so please change it”! (Paul urges us not to try this at home!)

Open World Forum Notes

As mentioned in the previous post, I was at Open World Forum in Paris these past couple of days. Previous notes covered today’s FOSSBazaar workshop; here’s a veritable panoply of miscellany from the other sessions.

Opening Keynotes

Risk of balkanisation in communities: govt (mil.forge), commercial (gcode), developer (eclipse)?

Worldwide IT spend: $3.48T. 18% of apps abandoned; 55% “challenged”.

Proprietary software quality: 20-30 defects/kloc. Open source: 1-2 defects/kloc.

Red Hat VP (Tiemann): “free” means the product “ceases to exist” … it’s all about services.

OSOR – exit costs are as important, or more so. If the cost to enter is free, exit costs become very important. Vendor lock-in = no exit.

James Bessen – Whither Open Source?

Open source isn’t new. e.g. steam engines – builders exchanged detailed information about their engines and what kind of efficiency they achieved. Personal exchanges, visits, publications, industry/engineering institutions.

Great inventors – had great PR. [Similar comments in a recent BBC In Our Time podcast on Leibniz vs Newton.] We have the notion of “hero” inventors partly because of proprietary conditions. [Also it’s human nature.]

The great innovations were limited in locale and time, e.g. steam workers in Cornwall. He showed examples of industries lasting 10-30 years.

Does this apply to open source? – Consolidation

  • User-friendliness? Not a great concern; 2/3 of demand doesn’t require a UI.

  • Coexistence (commercial software, patent trolls)? Uneasy, but institutions are forming to deal with patents and they’re not fatal threats. Also, there’s a “proprietary burden”; MS is one of the most-sued companies.

… FLOSS will be sticking around


Sourceforge talks stats (by mahemoff)

Ohloh stats – language choice in open source repos (by mahemoff)

Ohloh stats – jQuery vs Prototype in open source repos (by mahemoff)

SourceForge recently acquired Ohloh – a massive open source study.

Git 25% / Subversion 63% / CVS 7%

Growing language – “it galls me to call Javascript a language, but …”

Fastest growing: Javascript Python C*

Most popular: Java Python PHP

Western Europe 48%, Eastern Europe 11%; the speaker points out AUSTRALIA is vastly over-represented as a contributor.

Communities Session

This was three talks from people running open source communities – Apache, Eclipse, and the Linux Foundation.

Apache talk

Covered how to get involved in the community; even simple things like submitting a bug are a good start.

Know who you’re talking to – don’t lecture Roy Fielding on the HTTP spec.


Eclipse talk

Eclipse was never intended to be just an IDE; the IDE was supposed to be the killer app for the platform, which was supposed to be tools around Java.

After the Eclipse Foundation launched in 2004, it was completely independent; unlike the JCP, you can’t find any small print about IBM vetoing, and membership is set up so IBM can’t control the vote.

Eclipse guy: If you’re in a commercially led ecosystem a la Microsoft, your only exit strategy is they eat you or they kill you. In an open source ecosystem, you’re working with a trusted partner.

An ecosystem for innovation needs:

  • Licensing model
  • Project model
  • Governance model
  • Tech architecture

Open source organisations like Linux, Apache, and Eclipse give you these out of the box… so it’s crazy when people start their own – they end up paying legal fees etc. Wastage.

Eclipse projects highlighted – the browser, modelling, CDT.

Jim Zemlin – Linux Foundation (the best speaker of the conference, IMO)

Different to Apache with its almost all-volunteer staff: a budget of around $3M and 16 full-time staff. Monthly conf calls, f2f once a quarter.

“Everyone wants Linus’s autograph. I count mine as the only autograph Linus wants – he wants it every 2 weeks.” – Jim Zemlin, CEO, Linux Foundation.

Runs big legal defence projects – patent commons, linux911 …

Linux roadmap – growing in every segment from embedded to supercomputers. Becoming a de facto standard and supporting cross-fertilisation, e.g. a mobile manufacturer opted for Linux and contributed code to reduce battery usage; it went into the kernel and saved power and cooling costs for supercomputers.

The Tweets

  • Brazil olympic victory overshadowed by today’s #OpenWorldForum #owf victory for most open sourcey government. The games relegated to page 7. Fri Oct 02 22:31:48 +0000 2009
  • Fossbazaar Conference — OpenWorldForum, October 2, 2009 Fri Oct 02 21:48:56 +0000 2009
  • Enlightened self-interest – Wikipedia, the free encyclopedia Fri Oct 02 15:42:16 +0000 2009
  • Australia is vastly over-represented as an open-source contributor :)) W Europe biggest region at 48%. #ohloh #owf #openworldforum Fri Oct 02 15:11:28 +0000 2009
  • no prizes for guessing fastest growing language. “it galls me to call javascript a language, but …” #owf #openworldforum #ohloh Fri Oct 02 15:08:15 +0000 2009
  • SourceForge speaker talking Ohloh stats: GIT now has 25% of commits! SVN 63%. CVS 7%. #owf #openworldforum Fri Oct 02 15:05:46 +0000 2009
  • report of open source used as an internal battle – product open sourced in a failed attempt to become the corporate standard #owf Fri Oct 02 13:58:12 +0000 2009
  • captchas drop conversions by 7.3% (and presumably bug the remaining 92.7%) (tx @usa2day)#fowa #ux Fri Oct 02 13:55:07 +0000 2009
  • removing barriers to access is key to a successful open source project – e.g. encouraging localisers, docs, feedback. #owf #openworldforum Fri Oct 02 13:49:58 +0000 2009
  • RT @wadje12: Q: How about countries or contributors who are not allowed to contribute? Such as Cuba vs US, or a 14 year old contribute #owf Fri Oct 02 13:43:23 +0000 2009
  • open source “community management” does not exist because you never manage people, you commit to them #owf Fri Oct 02 13:37:36 +0000 2009
  • @SiriusCorp i confirm #owf wifi is definitely of dubious stability. Fri Oct 02 08:55:26 +0000 2009
  • Shuttleworth on #ubuntu UI: want buddhist medidation style of attention: focus in one place, but aware of surroundings (==”ambient”?) #OWF Fri Oct 02 08:53:19 +0000 2009

    Mark Shuttleworth interviewed (by mahemoff)
    (getting interviewed, not his keynote from which the above tweet came)

  • Ingres CEO: not that all customers contribute code, but for those who do, that’s a strong vote for the feature they’ve built Thu Oct 01 10:27:58 +0000 2009
  • “Is Oracle an open source company?” if there was ever a moment to “pull a kanye” … l:safe-distance-from-the-stage #OWF #OpenWorldForum Thu Oct 01 09:57:27 +0000 2009
  • cloud speaker talking up and #OWF Thu Oct 01 09:41:17 +0000 2009
  • Lots of “gratis” and “libre” from our keynote speaker. Appears to be talking about beer. #owf #OpenWorldForum Thu Oct 01 08:55:58 +0000 2009
  • Keynoter announces he’s still in Paris, admonishes previous speakers for English usage, and proceeds with keynote en francais #owf Thu Oct 01 08:48:51 +0000 2009
  • no big surprise, but government IT is the big deal in the businessy end of the open source community #OWF #OpenWorldForum #NoOfficialHashTag Wed Sep 30 22:42:47 +0000 2009
  • back from #OpenWorldForum mingling in the fine setting of l’hôtel de ville, paris (“town hall” just doesn’t…well it just doesn’t.) #OWF Wed Sep 30 22:38:24 +0000 2009

    IMG_0024 (by mahemoff)

(via List Of Tweets)

IMG_0106 (by mahemoff)


(We thought the entire audience was going to be invited on stage, but they stopped short at surnames beginning with “K-N”.)

Notes from @detyro talk on BodyCasting at #uxcamplondon – Modelling UI with Human Actors

Quick notes

Looked at Bill Buxton – Sketching User Experiences. Lo-fi/hi-fi – you can even use video as lo-fi if you keep the production style sketchy.

The idea is to use human bodies to model the interaction – the humans represent different things in the UI moving around.

Reminds me of software design techniques. e.g. I recall physical versions of CRC where people represent software objects, communicating with each other. Also a good way of explaining traditional CS algorithms like bubble sort.

I participated in a demo where we were columns on a UI while a “user” shifted the columns (that is us) around.

Not only high-level and sketchy, but also nice that it’s fun and light – it helps people enjoy the process.

The developers who are serving as actors in the process learn about the algorithm while they act out the parts. e.g. when I did the demo, it started dawning on me after two or three “runs” that I needed to start shifting out of the way. e.g. the user says “column D, move to between column A and B”, so initially I ignore the user, being column E. But then I realise I’m part of the system and I have to get into gear and start shuffling along too.
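That column-shuffling demo maps directly onto code. Here’s a sketch (my own illustration, not from the talk) of the move the human “columns” were acting out – when one column steps out and squeezes in elsewhere, everyone after the insertion point has to shuffle along, which is exactly the realisation I had as column E:

```javascript
// Move a named column to a target position; all later columns
// implicitly shuffle along, just like the human actors did.
function moveColumn(columns, name, targetIndex) {
  var from = columns.indexOf(name);
  columns.splice(from, 1);              // column steps out of the line
  columns.splice(targetIndex, 0, name); // squeezes in at the new spot
  return columns;
}

var cols = ["A", "B", "C", "D", "E"];
moveColumn(cols, "D", 1);  // "column D, move to between A and B"
console.log(cols);         // ["A", "D", "B", "C", "E"]
```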

Styling a Top Bar

Digg Bar launched with some controversy recently. It’s iframe trapping all over again; sites like About didn’t make themselves too popular with this technique, and so it died down until recently. I think there are legitimate uses for the top bar though; certainly the Digg Bar is useful to at least those people who are currently logged into Digg.

In fact, the reaction in the past few months suggests that top bars are here to stay. There was an initial uproar, but it seems to have been accepted, and I think top bars will start to become a fixture of the web. Given the valuable tracking data that comes from it, I can imagine dominance of the top 40 pixels of the browser window will become a big deal too…and right now, the big GYM guys (Google/Yahoo/Microsoft) aren’t doing it. Through development or acquisition, IMO that will change.

Users always trade off privacy against utility, and what can be an uproar about privacy concerns and google juice theft quickly dies down when people find value in a new feature. In this case, companies like Digg and StumbleUpon aren’t producing top bars as part of a cynical get-rich-scheme; I believe competition is too fierce to resort to cheap tricks that will get users off-side. Instead, I believe they are genuinely aiming to win, adding awesome features to improve user experience first and foremost. The Google Juice and tracking data that comes with it is a gigantic dollop of icing on the cake and top bars therefore constitute another example of having your cake and eating it too.

But this article is about web design, not web trends. An application of top bars I’ve been looking at recently is a “trails player” I’ve been building into Scrumptious. The user creates a trail of websites for someone to visit, and each site shows up in a trail bar at the top of the page.

To style this, I made like the web greats and peeked under the covers at a bunch of similar websites:

I’ve made a dead-simple bar to illustrate the concept.

View the top bar here.

The canonical layout works like this:

    <html>
      <head>
        <link href="layout.css" rel="stylesheet" type="text/css" />
        <style>
          body { overflow: hidden; }
          iframe, div, body { margin: 0; padding: 0; }
          #bar { position: absolute; top: 0; left: 0; width: 100%; height: 50px;
                 z-index: 100;
                 background: #ddf; border-bottom: 1px solid #888; }
          #content { padding: 5px; }
          iframe { margin-top: 50px; width: 100%; height: 100%; }
        </style>
      </head>
      <body>
        <div id="bar">
          <div id="content">I am bar.</div>
        </div>
        <iframe src=""></iframe>
      </body>
    </html>

So the bar is absolute-positioned in the top-left, with width: 100% to span the entire width, and height set to whatever height you want for the bar. Make the iframe top margin match that height, and you’re set.

Note that the bar shouldn’t have any padding, otherwise it will add to the 100% width (padding isn’t included in the width under the standard CSS box model). So if you want padding, put it inside an inner div, like #content above.

An interesting thing is that you can set the iframe’s source, but you can’t actually detect it under modern browsers, because of cross-domain security. Which is unfortunate: the user might start clicking around inside the iframe after you’ve set its initial source URL, and you won’t actually know where they are. Good for the user’s privacy, but bad if you want to provide certain awesome features, e.g. Digg would probably like to show the number of Diggs on any page users click over to, and StumbleUpon would like to show votes as well as related pages. In Scrumptious, I’d like to provide a “trails recorder” that automatically scoops up pages you visit into a trail… but I’ll have to achieve that with a browser extension.

And in all these cases, you probably want a “close bar” feature, which would ideally work by setting the document location to the iframe’s source, i.e. something like document.location.href=$("iframe").attr("src");. But that fails because the right-hand side is null. The best you can do is set document.location.href to the last URL you pointed the iframe at.
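One way to structure that workaround is to record the last URL you set yourself, every time you navigate the iframe. A minimal sketch follows – TrailBar and its method names are made up for illustration, and the browser-specific wiring is left in comments:

```javascript
// Sketch of the "best you can do" close-bar workaround: since the
// iframe's live location is unreadable cross-domain, remember the
// last URL *we* pointed it at.
function TrailBar(initialUrl) {
  this.lastUrl = initialUrl;
}

TrailBar.prototype.open = function(url) {
  this.lastUrl = url;   // record it before handing off to the iframe
  // in the browser: $("iframe").attr("src", url);
};

TrailBar.prototype.closeTo = function() {
  // "Close bar" target: the last URL we set. Stale if the user has
  // since clicked around inside the iframe – that's the compromise.
  return this.lastUrl;
  // in the browser: document.location.href = this.lastUrl;
};

var bar = new TrailBar("http://example.com/");
bar.open("http://example.org/page1");
console.log(bar.closeTo()); // "http://example.org/page1"
```

In the real page, closeTo()’s return value is what you’d assign to document.location.href; if the user navigated inside the iframe since, you’ll send them back a step, which is unavoidable without a browser extension.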

Osmosoft Hackathon: WikiData, a Wiki of Companies Data

End of The WikiData Hackday

Osmosoft, Hackathons

At Osmosoft, we have been engaging in a one-day hackathon about every month or so. There are several benefits:

  • It helps us prove our tech in a range of application contexts.
  • It helps improve and demonstrate our capabilities for reuse. Reuse can be as simple as slapping a Javascript file with inline comments on a server somewhere, or exposing a simple web service. With those capabilities in place, it’s possible to build something useful in a day, just as industry hackathons/camps demonstrate.
  • It helps us spread the word about web standards and why everything needs its own URI.
  • It helps us demonstrate and gain experience in agile processes.
  • It gets us Osmosofties working together in the same place at the same time, whereas day-to-day we tend to work on products in small groups or individually.

The WikiData Hackathon

The latest hackathon was last Thursday, July 16, which saw us collaborate with Avox, a company specialising in business entity data. I believe it works in a similar space to Dun & Bradstreet and other information providers, in that they collect, verify, and report information about companies. Avox is keen to open up parts of their database and gain the benefits of community feedback – with users being able to contribute information, leave comments, add new companies, and so on. Of course, Avox is in the business of gathering verified data, so it’s not just a case of making a new wiki and letting it run by itself. There remain challenges around how Avox will merge and verify community input, and how they will make it clear to users what’s verified and what’s not.


We had a conversation on the mailing list about what we did last time and what we could do differently for this hackathon – like a retrospective, but over email. TiddlyWeb architect Chris Dent set up some instances with sample data taken from the existing product.

Venue and Attendance

The hackathon took place in Osmosoft’s office, centered around our big table in the centre of the room. Seven people from Avox, in a range of technical and non-technical roles, attended for the duration of the event. Osmosoft had five developers working on stories, a developer working on integration and mediation, and another helping with the server and offering second-line support.

Introductions and Overview (about 1 hour)

We went round the table and everyone introduced themselves. Jeremy Ruston explained “what’s in it” for Osmosoft, as outlined above, and Ken Price outlined Avox’s interest in building WikiData. We also looked at the existing product. We then had a discussion which was mostly Osmosoft people asking Avox people questions about the nature of their business, the technologies involved, and their vision for WikiData. Paul began writing down stories on flashcards during this time.

User Stories (about 1 hour)

We finished writing stories – this was somewhat in parallel to the previous activity – and put them all up on the wall, with the magic of Blu-Tac. The stories were in the form “As a (role), I want to (task), so that I can (benefit)”. With about 20 stories, it was useful to organise them, so we grouped them according to the role. The main roles were for Avox staff and general users. There were also some stories involving 3rd party developers and employees of companies listed in the wiki.

Everyone gathered around the stories, we read them out, and we all agreed on priorities. We didn’t worry too much about ordering the stories at the bottom as it was unlikely we’d get to them during the event; if we did, we could prioritise later.

What we ended up with was a solid idea of the steel thread we were building. It would mostly be standard wiki functionality, but applied to the particular data and context of companies info. We had some bonus stories we could implement if we had time, like comments and tagging.

Planning and Infrastructure (about 30 minutes)

Developers put their initials against the first story they’d be working on, and likewise each story needed a customer representative to initial it, so they could help refine requirements. In the event, we didn’t talk all that much with customers during development; it’s obviously an extremely important thing to do in a real project, but when you’re trying to get real functionality out in a day, the cost of a conversation is relatively high and the benefit to the task is relatively low. It would have been different in a 2-3 day event, or with certain stories that are highly specific to the domain. The main thing was to checkpoint with customers to confirm we were generally on track.

Around noon, we drew up a timeline and agreed on a hard stop at 7pm. This means we timeboxed – the time is fixed and the only variable is the number of stories that get implemented in that time. We then got down to work.

Development Sprints and Standup Meetings (about 6.5 hours)

We developed in sprints, with standup meetings in between. It was felt the previous hackathon’s hourly sprints were too frequent, so in this case the plan was every 1.5-2 hours; we decided at the end of each standup when the next would be.

Most of us developed against the user stories. We also had a very useful role fulfilled by Fred, who was designated as a general “helper” and mediator. This was the result of our feeling that things were sometimes disjointed in previous hackathons – not enough focus on integration and infrastructure. I feel that this role was very useful and Fred fulfilled it admirably, although at times he probably felt like he had 400 people talking to him at once! We also had Chris Dent working remotely to ensure the TiddlyWeb server was running and help deploy to the live site.

The tool we developed was a standard Javascript/jQuery web app (i.e. nothing to do with TiddlyWiki) talking to the TiddlyWeb server.

At the start, we intended to write the web app against the live server, but it soon became apparent that we would step on each others’ toes, so we opted for a more conventional setup where we each ran our own server instance. We also had a quick debate about version control – GitHub, the private Osmosoft repository, or the TiddlyWiki repository. This is a recurring debate which we ought to settle in advance next time. It partly arose after deciding to run local copies of the server; it was felt this would be too much data for the TiddlyWiki repo, so we used the private Osmosoft SVN. As for GitHub, it would be nice to use git, but most of us are more familiar with SVN as we use the TiddlyWiki SVN repo day-to-day, so switching to git would have caused too much complication. Again, it might be a good idea next time to use GitHub instead, with some pre-work.

Presentation (about 45 minutes)

After the usual last-minute rush, we got a working product deployed. It must be noted that we let the “hard stop” slip by about 45 minutes, to about 7:30. Admittedly a bad practice, but it did yield some major results in this case, as it got us search, edit, and maps all working during that time.

Each of the developers explained what they’d worked on. We then gathered round the screen and walked through the app. Avox gave their thoughts, I recorded the video above while others broke out into informal conversations, and by that point, it was pub o’clock.

We were able to produce the steel thread we’d been planning; the key stories were demonstrated. We also implemented commenting and Google Maps integration. Being based on TiddlyWeb means we had also produced a working API “for free”; it’s just the nature of TiddlyWeb as a RESTful data container. (On a personal note, commenting was the main thing I did – I extracted the comments feature from Scrumptious into a standalone plugin, and integrated it into the emerging WikiData product. I’ll post about that separately.)



About the Video

I recorded the video above at the end of the event. One of my bugbears about hackathon events is that people spend all day coding and setting up infrastructure, and it inevitably comes down after that, or gradually falls apart as various third-party services close, databases get corrupted, people forget to keep hosting it, etc etc. In other words, you have to assume the online result of any hackathon event is transient. This is unfortunate, because the deliverable should be something that people can use to decide what happens next, whether to fund a project, and so on. While meeting minutes are often a waste of time, the artifacts that emerge in a workshop are critical to followup.

For that reason, I am particularly passionate about taking a screencast or video to capture what took place in such events. Thanks to Paul and Ken for taking part in it.

Update: Avox CEO Ken Price (who appears in the video), has published a summary of the event.