In his book A Mathematician’s Apology (1940), the Cambridge mathematician GH Hardy expressed his reverence for pure maths, and celebrated its uselessness in the real world. Yet one of the branches of pure mathematics in which Hardy excelled was number theory, and it was this field which played a major role in the work of his younger colleague, Alan Turing, as he worked first to crack Nazi codes at Bletchley Park and then on one of the first computers.
Melvyn Bragg and guests explore the many surprising and completely unintended uses to which mathematical discoveries have been put.
The cubic equations which led, after 400 years, to the development of alternating current – and the electric chair.
The centuries-old work on games of chance which eventually contributed to the birth of population statistics.
The discovery of non-Euclidean geometry, which crucially provided an ‘off-the-shelf’ solution which helped Albert Einstein forge his theory of relativity.
The 17th-century theorem which became the basis for credit card encryption.
The relevance to the topic of this blog should be clear.
Show me someone doing something cool or experimental and I’ll show you someone who sniffs, “Why would anyone want to do this?”. The answer may be because it was fun, because they wanted to see if it was possible, or because they wanted to learn something. But whatever their primary reason, one thing’s for sure: there will be unintended consequences. The examples above show it’s happened in mathematics, and we’ve seen the same thing happen time and again in web development.
So next time you see some wit ask “Why would anyone ever need this?”, just stay schtum, sit back, and wait six months.
usual live blogging caveats – spelling errors, messy, etc etc
@premasagar is visiting the Osmoplex today (thanks @jayfresh for arranging it) and is taking us through his work on iframes, widgets, and sandboxing. I’ve realised we could perhaps be collaborating as my jquery-iframe plugin is so close to his. Different emphases, but much overlap.
GitHub is where you can see what he’s been working on. Basically, this guy is a guru on all things iframe. In particular, all the quirks around squirting dynamic content into iframes, as opposed to pointing them elsewhere using “src”.
QUOTE OF THE DAY
“sqwidget is my tiddly”
BACKGROUND – THE EMBEDDING PROBLEM
In patternesque speak, the basic problem is:
Problem: You want to embed 3rd party content into another site.
You want the 3rd party content to have its own style
BUT it will inherit style from the parent page
Works great (in fact, I’m using it in my TiddlyWiki playground app, to be documented at some point, and it’s similarly used in Jon Lister’s TiddlyTemplating).
The problem is you sometimes want the widget to jump out of the iframe, e.g. a lightboxed video. So …
Basically a CSS reset, but whereas CSS resets will only handle browser defaults, CleanSlate blats everything! This is exactly the kind of thing I was looking for when I was trying to cancel out TiddlyWiki styling. In that case, I was flipping the entire page back and forth, so I could just cheat by removing stylesheets and re-adding them. (Prem pointed out there’s a “disabled” attribute on style tags – true, so I should really use this instead, assuming it’s portable, which he thinks it is.)
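Prem’s “disabled” tip can be sketched like so. This is my own illustration, not CleanSlate code – `toggleStyles` is a name I’ve made up, and in the browser you’d feed it `document.querySelectorAll("style, link[rel=stylesheet]")`:

```javascript
// Toggle stylesheets on/off without removing them from the DOM.
// Relies only on the standard `disabled` property, which is
// available on both <style> and <link rel=stylesheet> elements.
function toggleStyles(styleEls, enabled) {
  for (var i = 0; i < styleEls.length; i++) {
    styleEls[i].disabled = !enabled;
  }
}
```

This avoids the remove-and-re-add dance entirely: the sheets stay in the DOM and can be flipped back on instantly.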
– Difficult to maintain CleanSlate library, because new CSS stuff and browser quirks keep coming up
– IE6 and IE7 don’t support “inherit”, so a CSS expression is needed.
– Doesn’t solve iFrame security model
Inject (aka squirt) content into a fresh iframe.
The content comes from the widget site. The sqwidget library injects it. This resolves the tension with wanting independent CSS on the same page. If the sqwidget library is running on the host page, it could even (potentially) lock down capabilities, i.e. do Caja-style sanitisation.
Sqwidget also does templating, using Resig’s micro-templating. (That thing’s getting to be very popular; I’m using it myself in Scrumptious via the UnderScoreJS library after @fnd gave us a talk about them.)
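The gist of micro-templating can be sketched with a toy interpolation-only version – this is my own simplification, not Resig’s actual code, which compiles the template to a function and supports arbitrary inline JavaScript:

```javascript
// Toy micro-template: replaces <%= key %> with the matching
// property of the data object. Real micro-templating libraries
// do much more; this only shows the basic idea.
function tmpl(template, data) {
  return template.replace(/<%=\s*(\w+)\s*%>/g, function (match, key) {
    return key in data ? data[key] : "";
  });
}

var out = tmpl("<h1><%= title %></h1>", { title: "Sqwidget" });
// out is now "<h1>Sqwidget</h1>"
```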
Also, Prem is playing around with the idea of a div with a custom (data-*) attribute pointing to the third-party URL. You could put “now loading” content inside it, and then the script will pick those divs up and load the widgets into them.
For content-squirting into the iframe, you should set the doctype, otherwise IE will load it in quirks mode.
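The content-squirting step might look something like this (a minimal sketch; `buildIframeDoc` is my name, not part of Sqwidget):

```javascript
// Build the full document to squirt into an iframe. Prepending
// the doctype is what keeps IE out of quirks mode.
function buildIframeDoc(bodyHtml) {
  return "<!DOCTYPE html>" +
         "<html><head></head><body>" + bodyHtml + "</body></html>";
}

// In the browser, you would then write it into a fresh iframe:
//   var doc = iframe.contentWindow.document;
//   doc.open();
//   doc.write(buildIframeDoc("<p>widget content</p>"));
//   doc.close();
```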
Last week saw a confluence of excellent events. Coming in the same week as a house move, it proved to be a week of much learning and little sleep. I’d hoped to do a better write-up, but it never happened – a combination of being too busy and the fact that NEW MAC BATTERIES SUCK, meaning the lappy couldn’t last through the whole session. But fortunately, I have some links to point to. Here’s a quick summary anyway, along with the linkage.
Ben is ideally placed to cover the brave new world of web fonts, being a web maker who studied fonts at uni. He walked us through the evolution of font hacks on the web: image with alt tag; CSS background image with text pushed off the page; rendering with Flash (sIFR); rendering with Canvas or SVG (Cufon, Typeface.js), using JSON-based font spec data. It all leads up to the holy grail: @font-face.
Great, so we have @font-face, but issues remain:
* The foundries – Mark Pilgrim, in no uncertain terms, complains the font vendors are stuck in the dark ages of the printing press, in their resistance to support @font-face. This seems to be changing with WOFF, a web-only format that seems to placate the foundries, who worry their fonts will be stolen. It seems more like a symbolic gesture, since the data could still be converted and in any event the print fonts could still be appropriated, but the foundries are feeling more reassured and making signs they will go along with it.
* Performance – bandwidth costs, and Paul Irish’s “flash of unstyled text”, where the user notices the font change once the fancy @font-face font has been downloaded.
* Compatibility – IE has long supported @font-face, but only with EOT-format fonts, and that remains the case. You therefore need both types of font, and licenses will generally not give you both.
Social Design Patterns
I was, needless to say, psyched about this. Yahoo! has been the closest thing to a realisation of the inspiring design pattern vision of the mid-late ’90s. Patterns on the web, for both its own employees and the wider community to learn from and evolve. These are social design patterns mined by Christian Crumlish (@mediajunkie), in many respects the closest thing software has to an analogy of building architecture, where design patterns originally came from.
There are 96 patterns in all and I’m looking forward to poring through them. In these patterns are hundreds of people-years’ experience of observing real-world social systems. In my own pattern work, I’ve found it necessary to articulate the overarching design principles behind the patterns. Pattern languages should be opinionated, so it’s a good thing to make explicit your criteria for filtering design features. Christian has followed this model too, and identified 5 overarching principles:
Paving the cowpaths. Facilitating the patterns that are already happening, rather than imposing your own invented process. It also means evolving with your users; e.g. Dogster started as photo sharing but evolved into a social network.
Talk like a person.
Play well with others. Open standards, mashups, etc. “If you love something, set it free.”
Learn from games.
Game UI concepts: designing the rules, but ultimately letting the people who come into the space finish the experience themselves.
This was an internal conference for BTers in London covering a range of general tech trends, and also being a chance to get together and talk shop. The agenda included talks on Scala, Rails, Kanban, iPhone development, and even a lightning talk from @psd on the horrors and delights of APL.
Luis Ciprian, visiting from Brazil, gave us an overview of XMPP and talked us through a real-time web app – a basketball score and conversation tracker – using XMPP.
It’s been said that the world hasn’t been this excited about a tablet since Moses came down the mountain. January 27, 2010, is the day Apple is slated to finally put us out of our misery and tell us what it’s all about. But imagine Steve Jobs moseys onto the stage, launches into a forceful history of Apple’s portable record, and then announces Apple’s launching some new iPod speakers. (It happened a few years ago.) No “one more thing”. No tablet. Not even an iPhone OS 4.0.
I think Gruber nails it. Steve Jobs, in what many consider will be his final act at Apple, is attempting no less than the next generation of computing UI. Many people are already finding they can get by with just their iPhone for many tasks. Myself, I actually prefer to read blogs in NetNewsWire and articles in Instapaper, both on the iPhone. I can read these away from the distraction of the big machine, whether at home, commuting, or in a shopping line. I’ve been trying to read stuff on mobile devices since the Palm Pilot, and now it’s truly practical. If people are finding their phone does some things better than the computer, imagine what will happen when you have a big touch-screen, let alone any secret-sauce innovations like tactile feedback or live docking into desktop equipment. I think we will find it’s more than adequate for many casual users and a valued extra device for power users.
But this is about much more than Apple. I think we can take it for granted that the medium-term future will be all about touch-screen tablets. We’ll struggle our way through questions about how to stand them up and challenges like their never-satisfying battery life. And what happens when they fall on the floor? Oh, and there will be patent wars galore. But the category will grow fast, as many people start to reap the benefits of a double whammy: better interaction, more convenient form factor.
The really interesting question is how will the UI on these tablets work? The Gizmodo and Daring Fireball articles point in the right direction – it will be more like the new “super phone” mobile generation and less like the traditional PC. Lots of sandboxing, lots of highly-customised idiosyncratic interfaces (but with common idioms), and lots of abstraction (==liberation) from the file system, lots of app stores and repositories.
Now one model for all this is iPhone OS, the custom-built operating system Apple put together for its own phone. Is there another model?
Of course. The web.
And we can do it today. Apple won’t, others will. We have the makings of an operating system that does all that. Lots of sandboxing? Yep, the whole security model of the web assumes domains don’t trust each other, unlike traditional desktop applications. Lots of customised interfaces? Yep, with Canvas and CSS3 and SVG and WebGL and audio and video and screamingly-fast browsers and a million brilliant libraries and toolkits. Lots of abstraction? Yep, the web never did like file systems, and with offline storage, it doesn’t have to. App stores? Yep, a simple system of URIs and standard HTTP security techniques can do it easily.
Most developers would rather code in technology they already know, that’s open, and has a diverse community contributing a wealth of “how-to” knowledge.
It’s all happening now. Google has ChromeOS. Palm has WebOS. Nokia and others have the W3C web widget standard. Stick these things on tablets, and a whole new generation of UI will flourish.
Around the time Ajax got coined, one of the already-known patterns was 37Signals’ Yellow Fade Effect. As techniques were shared and visual effects libraries emerged, we began to see visual effects become commonplace on the web. I documented four of them in Ajax Design Patterns: One-Second Spotlight, One-Second Mutation, One-Second Motion, Highlight. (I wish I’d called them something else, e.g. “Spotlight Effect”.) And all of these are around today, and even in the jQuery core, which makes them pretty darn accessible to any developer. It’s as simple as $("#message").fadeIn().
Where we are now, most developers are stuck with trial-and-error. In an ideal world, you’d have a SWAT team of UX experts to analyse each visual effect and test it with users, and maybe they’d get it right. But that’s not how most UI design happens in the real world; developers often just have to play around a bit and then move on to something else. So I think we could do with some simple guidelines, ideally based on empirical data, but even guidelines just based on pattern mining the web would be handy.
I’ll give some examples of the kinds of problems I’ve faced as a developer building in visual effects:
Choosing Visual Effect. I want to update some content that’s just been received from the server, e.g. the football score changed. Do I show a yellow highlight, make it blink, make it vibrate, or do nothing?
Choosing Effect Parameters. I’ve decided to use a Slide Down effect. How long should it last, i.e. the duration from not shown to fully shown? Should it slide down at a constant pace, or accelerate and bounce at the bottom?
Composing Effects. I have a block of content which the user can edit. When switching between edit and view modes, there are two simultaneous effects taking place – one thing is being hidden while the other is being shown. How do I handle those? Do I slide one up and, when that’s done, slide down the other? Do I fade one out and simultaneously fade the other in? Do I immediately close one and transition in the other, or vice versa? Many options come into play.
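For what it’s worth, the “one after the other” option boils down to a tiny sequencing helper. This is just a sketch of the idea, not any library’s API; each effect is a function taking a `done` callback, mirroring the (duration, callback) style of jQuery effects:

```javascript
// Run effects one after another: each effect calls `done` when it
// finishes, which triggers the next effect in the list.
function sequence(effects) {
  function next(i) {
    if (i < effects.length) {
      effects[i](function () { next(i + 1); });
    }
  }
  next(0);
}

// e.g. with jQuery:
// sequence([
//   function (done) { $("#editor").slideUp(400, done); },
//   function (done) { $("#viewer").slideDown(400, done); }
// ]);
```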
It should go without saying that there’s no right answer for any of these problems. But instead of developers always working from first principles, I think there are some very useful guidelines and discussions that could be had around the design patterns involved. Maybe even leading to a high level effects library supporting those patterns.
I’ve said a bit about this on Twitter, but basically: a couple of years ago I saw REST beginning to take off in the enterprise – it was inevitable – and I was amused by the prospect of vendors “embracing and extending” it for the enterprise, just as other useful terms like “synergy” and “agile” get sucked into the vortex of hollow lip service.
Bill, if you want people to have an open mind about what you are
trying to do, then the respectful thing would be to remove REST
from the name of your site.
Quite frankly, this is the single dumbest attempt at one-sided
“standardization” of anti-REST architecture that I have ever seen.
It even manages to one-up the previous all-time-idiocy of IBM
when they renamed their CORBA toolkit “Web Services” in a
deliberate attempt to confuse customers into thinking they
had something to do with the Web.
Distributed transactions are an architectural component of
non-REST interaction. Message queues are a common integration
technique for non-REST architectures. To claim that either one
is a component of “Pragmatic REST” is the equivalent of putting
a giant Red Dunce Hat on your head and then parading around as
if it were the latest fashion statement.
The idea that the community would welcome such a pack of
marketing morons as the standards-bearers of REST is simply
ridiculous. Just close the stupid site down.
This article by Nick Zakas, covering some technical issues in iframe loading, prompted me to surface a jQuery iframe plugin I made a little while ago, which detects iframe loading. (I did tweet it at the time, though I’ve since changed its location.)
The plugin basically tells you, the programmer of the parent document, when your child iframe has loaded. It also has some other bells and whistles, like timing the load duration.
It’s inspired by the use cases of WebWait (http://webwait.com) and the Scrumptious trails player (http://scrumptious.tv). Both are applications whose whole mission in life is to open up an external page in an iframe, and do something when it has loaded.
There’s already an iframe jQuery plugin, but it’s to do with controlling what’s inside the iframe, i.e. a situation which is only possible when the iframe is from the same domain. What I’m dealing with here is the parent knowing when the iframe has loaded, regardless of where it comes from and agnostic to what it does.
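The timing side of this is trivially small; something along these lines (a sketch of the idea, not the plugin’s actual code – `startTimer` is my own name):

```javascript
// Start a timer when the iframe's src is set, and report the
// elapsed time when its load event fires.
function startTimer() {
  var begin = Date.now();
  return {
    elapsed: function () { return Date.now() - begin; }
  };
}

// In the plugin this is wired up roughly as:
//   var timer = startTimer();
//   $(iframe).on("load", function () {
//     console.log("loaded in " + timer.elapsed() + "ms");
//   });
```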
I’m pleased to announce a major upgrade to WebWait, the first big upgrade since I launched the site 2.5 years ago. It’s based on watching how people are using it, talking about it, and sending me direct feedback about it. Thanks to all who have provided feedback.
The new features are:
Multiple sites You can keep typing in new URLs and hitting “Add”. This will put them in a queue of sites to eventually be benchmarked. (I decided against a big textarea where you could type in all URLs at once; this would be confusing to the majority of WebWait users, who are casual visitors wanting to check out the time for their own site.)
Stats Average, median, and standard deviation for each website, in a summary table. Indeed, the summary table is the cultural centre for all these new features – when you queue up a site, it’s immediately added to the table.
Export The export feature seems to be popular in List Of Tweets. I took the same UI concept and applied it here, so you can export data in plain text, HTML, and CSV format. The CSV is especially interesting as one of the people who’ve given me feedback is a statistician who’s planning to run analysis on some data.
Call Data View times for each individual call.
Expand/Collapse Call Data You can choose whether to show call data or not. Collapsing call data gives you a neat overview of stats for each domain.
Delete You can delete individual call data, or delete an entire website record. This is useful for outlier data; for example, you can see how fast the site loads from your browser cache, by turning browser cache on and deleting just the first call in the sequence.
Browser compatibility. WebWait didn’t work upon launch in Safari, but it does now work in recent versions of Safari (I think a change in 3.0 made it work again :)), and works fine in Chrome too. This is important for users testing how their website loads in different browsers.
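The summary stats above are plain number-crunching; something like this (a sketch, not WebWait’s actual code):

```javascript
// Average, median, and standard deviation of an array of load times.
function stats(times) {
  var n = times.length;
  var mean = times.reduce(function (a, b) { return a + b; }, 0) / n;
  var sorted = times.slice().sort(function (a, b) { return a - b; });
  var median = n % 2 ? sorted[(n - 1) / 2]
                     : (sorted[n / 2 - 1] + sorted[n / 2]) / 2;
  var variance = times.reduce(function (sum, t) {
    return sum + (t - mean) * (t - mean);
  }, 0) / n;
  return { mean: mean, median: median, stdDev: Math.sqrt(variance) };
}
```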
Thanks for reading. As always, let us know what you think of the site and anything else you’d like to see.
At Osmosoft, we have been engaging in a one-day hackathon about every month or so. There are several benefits:
It helps us prove our tech in a range of application contexts.
It helps us spread the word about web standards and why everything needs its own URI.
It helps us demonstrate and gain experience in agile processes.
It gets us Osmosofties working together in the same place at the same time, whereas day-to-day we tend to work on products in small groups or individually.
The WikiData Hackathon
The latest hackathon was last Thursday, July 16, which saw us collaborate with Avox, a company specialising in business entity data. I believe it’s working in a similar space to companies like Dun & Bradstreet and other information providers, in that they collect, verify, and report information about companies. Avox is keen to open up parts of their database and gain the benefits of community feedback – with users being able to contribute information, leave comments, add new companies, and so on. Of course, Avox is in the business of gathering verified data, so it’s not just a case of making a new wiki and letting it run by itself. There remain challenges about how Avox will merge and verify community input, and how they will make it clear to users what’s verified and what’s not.
We had a conversation on the mailing list about what we did last time and what we could do differently for this hackathon. Like a retrospective, but over email. TiddlyWeb architect Chris Dent set up some instances with sample data taken from the existing product.
Venue and Attendance
The hackathon took place in Osmosoft’s office, centered around our big table in the centre of the room. Seven people from Avox, in a range of technical and non-technical roles, attended for the duration of the event. Osmosoft had five developers working on stories, a developer working on integration and mediation, and another helping with the server and offering second-line support.
Introductions and Overview (about 1 hour)
We went round the table and everyone introduced themselves. Jeremy Ruston explained “what’s in it” for Osmosoft, as outlined above, and Ken Price outlined Avox’s interest in building wikidata. We also looked at the existing product. We then had a discussion which was mostly Osmosoft people asking Avox people questions about the nature of their business, the technologies involved, and their vision for wikidata. Paul began writing down stories on flashcards during this time.
User Stories (about 1 hour)
We finished writing stories – this was somewhat in parallel to the previous activity – and put them all up on the wall, with the magic of Blu-Tack. The stories were in the form “As a (role), I want to (task), so that I can (benefit)”. With about 20 stories, it was useful to organise them, so we grouped them according to the role. The main roles were for Avox staff and general users. There were also some stories involving 3rd party developers and employees of companies listed in the wiki.
Everyone gathered around the stories, we read them out, and we all agreed on priorities. We didn’t worry too much about ordering the stories at the bottom as it was unlikely we’d get to them during the event; if we did, we could prioritise later.
What we ended up with was a solid idea of the steel thread we were building. It would mostly be standard wiki functionality, but applied to the particular data and context of companies info. We had some bonus stories we could implement if we had time, like comments and tagging.
Planning and Infrastructure (about 30 minutes)
Developers put their initials against the first story they’d be working on, and likewise each story needed a customer representative to initial the story, so they would help refine requirements. In the event, we didn’t talk all that much with customers during development; it’s obviously an extremely important thing to do in a real project, but when you’re wanting to get real functionality out in a day, the cost of a conversation is relatively high and the benefit to the task is relatively low. It would have been different in a 2-3 day event, or with certain stories that are highly specific to the domain. The main thing was to checkpoint with customers to check that we were generally on track.
Around noon, we drew up a timeline and agreed on a hard stop at 7pm. This means we timebox – the time is fixed and the only variable is the number of stories that get implemented in that time. We then got straight into development.
Development Sprints and Standup Meetings (about 6.5 hours)
We developed in sprints, with standup meetings in between. It was felt the previous hackathon’s hourly sprints were too frequent, so in this case the plan was every 1.5-2 hours; we decided at the end of each standup when the next would be.
Most of us developed against the user stories. We also had a very useful role fulfilled by Fred, who was designated as a general “helper” and mediator. This was the result of our feeling that things were sometimes disjointed in previous hackathons – not enough focus on integration and infrastructure. I feel that this role was very useful and Fred fulfilled it admirably, although at times he probably felt like he had 400 people talking to him at once! We also had Chris Dent working remotely to ensure the TiddlyWeb server was running and help deploy to the live site.
At the start, we intended to write the web app and talk to the live server. But it soon became apparent that we would step on each others’ toes by doing this, and opted for a more conventional setup where we each have our own server instance. We also had a quick debate about version control – github, private osmosoft repository, or tiddlywiki repository. This is a recurring debate which we ought to decide in advance next time. It partly arose after deciding to run our local copies of the server; it was felt this would be too much data for the tiddlywiki repo, so we used the private osmo svn. As for github, it would be nice to use git, but most of us are more familiar with SVN as we use the tiddlywiki SVN repo day-to-day, so it would cause too much complication to switch to git. Again, it might be a good idea for next time to use github instead, with some pre-work.
Presentation (about 45 minutes)
After the usual last-minute rush, we got a working product deployed. It must be noted that we let the “hard stop” slip by about 45 minutes, to about 7:30. Admittedly a bad practice, but it did yield some major results in this case, as it got us search, edit, and maps all working during that time.
Each of the developers explained what they’d worked on. We then gathered round the screen and walked through the app. Avox gave their thoughts, I recorded the video above while others broke out into informal conversations, and by that point, it was pub o’clock.
We were able to produce the steel thread we’d been planning; the key stories were demonstrated. We also implemented commenting and Google Maps integration. Being based on TiddlyWeb means we had also produced a working API “for free”; it’s just the nature of TiddlyWeb as a RESTful data container. (On a personal note, commenting was the main thing I did – I extracted the comments feature from Scrumptious into a standalone plugin, and integrated it into the emerging WikiData product. I’ll post about that separately.)
About the Video
I recorded the video above at the end of the event. One of my bugbears about hackathon events is that people spend all day coding and setting up infrastructure, and it inevitably comes down after that, or gradually falls apart as various third-party services close, databases get corrupted, people forget to keep hosting it, etc etc. In other words, you have to assume the online result of any hackathon event is transient. This is unfortunate, because the deliverable should be something that people can use to decide what happens next, whether to fund a project, and so on. While meeting minutes are often a waste of time, the artifacts that emerge in a workshop are critical to followup.
For that reason, I am particularly passionate about taking a screencast or video to capture what took place in such events. Thanks to Paul and Ken for taking part in it.
Excite. Goes so far as to ask you your race, and then a further question about whether you are of Hispanic or Latino descent. Oh, and then your household income! Talk about building barriers to signup!
Yahoo!. IIRC Yahoo! also used to ask for your profession and related stuff, but simplified things a bit somewhere along the line. They’re still adding big barriers though by asking a lot of unnecessary questions. As for these security questions, they’re of dubious value. When exactly did Yahoo! remove questions on professions etc? I’m not sure, but I’ve set the date at 2001 as it’s the kind of time people started doing that.
37Signals’ TadaList. Seemed pretty simple at the time, just asking for the basics. But still, characteristic of its time, it goes for email and password confirmations. Optimising for an unlikely error condition isn’t a great idea. Also, you have to tick to agree to terms and conditions. I’m not a lawyer, but I suspect you could argue that signing up is implicitly agreeing to terms and conditions, so the slight increase in complexity isn’t justified for simple websites. (On a minimal form, the extra checkbox could raise the number of controls by 50%.)
For clarification, all these screenshots were taken today. The years are reflective of the site’s origins (not necessarily their year of creation, in the case of Yahoo!) and the kinds of forms that were around at the time. I dig all of the sites here and their inclusion is testament to their role as representatives of an era. I just wish some of them would modernise their signup forms a bit :).