Just a pointer to a TiddlyGuv article I’ve put up on FossBazaar. TiddlyGuv is an early-stage open-source governance system.
We’re having a TiddlyWeb fest tomorrow, and I explained that one of the things I want to get out of it is a canonical design sketch.
Luke Hohmann’s superb text, Beyond Software Architecture, explains how he comes into a project and asks people to draw the architecture. A great predictor of the project’s state is whether people come up with the same diagram. Put crudely, if they are all on the same page – if they all know module X is in the top-right and module Y is centre-left – the project is onto a winner.
I’ve found similar things myself. I will work on a project for a while before someone puts up a poster depicting the emerging architecture. People will then find themselves congregating around the poster and building it up.
While some people are more visual than others, I believe most of us have good spatial abilities and a team has a lot to gain from a shared understanding of the architecture. This is even more so in the case of a framework like TiddlyWeb, where the community is disparate and connected much more loosely than a group of agile developers in a room together.
A good example of a design sketch comes from a project with some similarities to TiddlyWeb: CouchDB.
I like the fact it’s a sketch, not a boring old “formal” diagram. Diagrams that are too neat are also brittle, whereas simple sketches are resilient to change. Moreover, I like the fact it’s on the homepage. That makes a bold statement to the community. This is the canonical design sketch for the project, and any discussions around architecture will naturally be couched around this sketch and the entities within.
There is a well-established “grammar” for how we sign up and log into websites. Provide your name, email and password; verify the email; login to the site with username and password until you’re timed out. You know the drill. But a wave of new web apps and protocols is challenging the status quo, breaking the traditional interaction patterns. In some cases, they blur the distinction between logged-in and logged-out status; and in others, they provide ways to perform privileged actions without explicitly logging in.
Before we walk through these new interaction patterns, let’s be clear about the “old” pattern we’ve come to know and (sometimes) love. It starts with visiting a new site:
- use the site anonymously. Nothing you do yet is persistent; you can look (sometimes), but you can’t touch.
- sign up by providing username, email, and password. (variants: email is *username*, password is auto-generated)
- click on link in verification email
- now you can log in by providing username/password and logout by clicking the logout button or by explicitly timing out
Right, you know that sequence, and it works okay. However, problems remain:

- Barriers to signup. Perhaps the biggest problem faced by site owners is getting users to sign up. A known, authenticated user gets a much richer experience, can contribute in more valuable ways to the community, and can receive targeted services, not to mention targeted ads.
- Barriers to login. Users might have signed up at one stage, but rarely sign in due to the hassle factor – a factor that multiplies when they have forgotten their password.
- Timeout. For security reasons, timeout must happen, and it will frustrate users and send them elsewhere.
- Mobile access. My dear reader may well bear the latest and greatest web-enabled digital communications circuitry in their pocket or handbag, but the majority of phones still lack a decent web browser. Even when you do have a browser, it’s difficult to log in, especially if you are using suitably complicated passwords. There’s something to be said for more fluid models of mobile-webapp interaction.
Lazy Registration (NetVibes)
In Lazy Registration, an anonymous user is allowed to interact with the site right away, and through the magic of cookies or unique URLs, the site builds up a profile before the user even signs up. Once the user does sign up, they take their existing (hitherto anonymous) profile with them. The benefit is clear: you can start playing with the site without incurring the overhead of registration. The better e-commerce sites do this by letting you add to your cart as an anonymous user; where most of them fail is that they completely time you out after a short while. See Partial Logout below …
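A minimal sketch of the mechanic, assuming a cookie-based anonymous ID; the function and field names are made up for illustration:

```javascript
// Lazy Registration: build a profile against an anonymous cookie ID,
// then hand that profile over to the real account at signup time.
const profiles = new Map(); // anonymous cookie ID -> { cart: [...] }
const accounts = new Map(); // username -> profile

function trackAnonymous(cookieId, item) {
  // The site remembers what the anonymous visitor does...
  const profile = profiles.get(cookieId) || { cart: [] };
  profile.cart.push(item);
  profiles.set(cookieId, profile);
}

function signUpLazily(cookieId, username) {
  // ...and the hitherto-anonymous profile travels with the new account.
  accounts.set(username, profiles.get(cookieId) || { cart: [] });
  profiles.delete(cookieId);
}
```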
Unverified User (Flickr pre-Yahoo!)
Here is another pattern that defies the traditionally strict dichotomy between those who are in and those who are out. You sign up as normal, but you don’t have to click the verification link straight away. There’s effectively an “email verified” flag against your name, and unverified users have only partial rights; e.g. you might be able to start maintaining content, but no-one else can see it until you’re verified. The benefit is that eager users can get going right away, and have a reason to click on the verification link when they come across it later. (I’m afraid the only example I can think of is Flickr’s Guest Pass concept, before they tied login to Yahoo!. I’ve seen it elsewhere, but can’t recall where!)
Partial Logout (Amazon)

Something like a rewind of Lazy Registration, the Partial Logout pattern allows for differing degrees of “logged-in-ness”. Once a user has been idle for a while, instead of completely timing them out and making them anonymous again, the system continues to “sort of” trust them, letting them view some things and keep building up some aspects of their profile. In this state on Amazon, you can view your shopping cart and personalised recommendations, and products you view will be fed into your profile; but try to purchase a book or change your credit card details, and you’ll be faced with a login form. Timeout is the main reason to be partially logged out, but it might also be something else that leads the system to believe there’s a risk the browser is no longer under the user’s control; for example, the user logs in from somewhere else.
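The decision logic might be sketched like this; the threshold and action names are invented for illustration:

```javascript
// Partial Logout: after some idle time, the session drops to a "partial"
// trust level that permits browsing but not sensitive actions.
const PARTIAL_AFTER_MS = 15 * 60 * 1000; // fully trusted for 15 minutes (illustrative)

const SAFE_ACTIONS = new Set(["viewCart", "viewRecommendations"]);

function isAllowed(action, lastActiveMs, nowMs) {
  const idle = nowMs - lastActiveMs;
  if (idle < PARTIAL_AFTER_MS) return true; // fully logged in: anything goes
  return SAFE_ACTIONS.has(action);          // partially logged out: safe actions only
}
```

Anything outside the safe set would bounce the user to a login form rather than silently fail.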
Device-Driven Interaction (SMS-Twitter)
You can tweet from Twitter.com or any number of web and desktop clients, but one of the main driving forces for Twitter was the ability to say what you’re doing wherever you are – a perfect example of SMS-driven authentication, and the reason why tweets are limited to 140 characters (leaving room for a username as well). The verification is all thanks to caller ID. There are interesting international implications, since the recipient pays in the US, while most other countries run a model where the sender pays. And when the sender is roaming abroad, they always pay to send – usually an absurd amount.
A variant is MMS-Driven Interaction (e.g. the Joomla MMS module), although its cost – and immaturity in the US – makes it less viable. The more general way to describe this pattern is as Device-Driven Interaction, where your possession of the device is used to authenticate you. Thus, another variant would be GPS-Driven Interaction, such as Google Latitude.
Email-Driven Interaction (Posterous, Tripit, Twitpic)
Email-driven interaction has become really popular lately; Posterous especially is taking off right now. In the simplest case, it’s just emailing a new post to your blog, to a secret destination address you can see on the web page once you’ve logged in. Where it gets more interesting is when you don’t even have an account yet! You simply send an email to their generic address ([email protected]; [email protected]) and you’re up and running. In Posterous, you can also reply to comments from the comfort of your mail client. All this is fantastic in a mobile context, where it’s often more convenient to fire off an email from your camera app than to log into the website and upload the content. It’s also cheaper than sending MMS content around. Of course, these services still let you log into a website the regular way; they’ll send you a link that gets you in the first time. Of particular interest is their use of simple, public addresses. It raises the distinct risk of spoofing to act as someone else, but in practice, the services seem to be smart enough to thwart such attacks. Somewhat. So far.
Secret URL (SlideShare, Skitch)
A secret URL is “the login you’re having when you’re not having a login”. (Which makes it a “Claytons login” in Aussie vernacular.) A privileged user can see the secret URL – a long, unguessable string – and can then distribute it to trusted others. The chief benefit is that no-one needs to log in. From the owner’s point of view, they don’t have to tediously list everyone who can see the resource, and it’s more granular than “only show my friends”, even if you already have a list of friends. Also, the owner can access the resource from any device or browser just by knowing the URL, without worrying about timing out. Secret URLs are especially nice in a real-time setting (Web 3.0, here we come), where you want your friends to converge on a website for closed communication and collaboration. The main downside, of course, is that it’s simplistic, offering rather limited security; and in most implementations the resource must be treated as read-only, as you don’t want to let people change something unless they can be held accountable for their actions. Skitch’s settings for an individual image are shown below:
Federated (OpenID, e.g. StackOverflow; Facebook Connect, e.g. CNet)
Federated login, most famously exemplified by OpenID, allows for a single identity across the internet. You authorise sites with your OpenID provider, and you only need to log into your provider to access those sites. More recently, Facebook Connect provides its own brand of federated login, which is conceptually easier and simpler for users, at the cost of tying their identity to a single provider. So while purists are grizzling about the lack of openness, mainstream users right now seem happy to sign in to sites with Facebook Connect, as opposed to signing up to each site separately. And sites seem keener to integrate Facebook Connect than OpenID, probably because the story is clearer for end-users, and there is a tie-in to Facebook too (e.g. site activities can show up on the user’s Facebook feed). Facebook supports OpenID login, is now on the OpenID board, and will probably become an OpenID provider at some stage.
Special Mention: Delegated authority (OAuth-e.g. Twitter)
In a somewhat separate category is OAuth, the protocol that lets you delegate authority from one site to another. It is sometimes likened to giving someone a valet key, i.e. they have partial authority at your discretion and you can revoke it at any time. For example, an OAuth-powered Twitter mashup will redirect you to Twitter, where you tell Twitter you accept certain actions being conducted with your account by the mashup. From that point on, until you revoke it, the mashup will be able to work with your Twitter data as if it were you. All this would be possible if the mashup simply asked you for your username and password, but OAuth offers a more secure model because you can moderate, track, and revoke privileges at any time. In that sense, it is unlike the other techniques here: it is less about making the service more convenient or user-friendly, and more about allowing extra functionality by third parties, with a hardened security model.
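To illustrate only the delegation idea – not the actual OAuth protocol, which involves signed requests and a token exchange – here is a toy model of scoped, revocable grants; every name in it is invented:

```javascript
// Toy delegation model: the user grants a mashup a scoped token; the
// service checks the scope on each request; the user can revoke any time.
const grants = new Map(); // token -> { user, scopes }

function grant(user, scopes) {
  const token = Math.random().toString(36).slice(2); // placeholder token, not secure
  grants.set(token, { user, scopes });
  return token;
}

function allowed(token, scope) {
  const g = grants.get(token);
  return !!g && g.scopes.includes(scope);
}

function revoke(token) {
  grants.delete(token); // the valet key stops working immediately
}
```

The crucial property is that revoking the token kills the mashup’s access without the user ever changing their password.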
What sparked this post was the rise of Posterous, which is undergoing some kind of tipping point lately. First, I heard the founders on Technometria a few days back, outlining their plans to provide flexible templating. Then they turn up on TechCrunch, having acquired Y-Combinator stable partner Slinkset. And finally, A-List blogger extraordinaire Steve Rubel announces his new publishing strategy yesterday, with Posterous playing a big part in it. All in all, a big week for a company whose key selling point is a new registration grammar.
I’ve released a new version of Scrumptious. The main change is that it now supports OpenID: you can click a “login” link to comment via OpenID. It’s optional by default, though a Scrumptious site operator could easily make it mandatory for read, write, or both by changing a couple of words in the “policy” config files. As with other CMSs like WordPress, non-logged-in users can indicate their name and URL when they submit a comment.
There are also UI enhancements – the design is cleaner and looks closer to the original TiddlyWiki comments plugin. Interestingly, I retained almost identical markup, so I was able to cut-and-paste the original CSS for the comments plugin, and it mostly worked. I also now include the default TiddlyWiki stylesheets. It’s not just the look-and-feel that is closer to the original plugin, but the content – you now have info like modifier, modified date, and a permalink available.
I also added something I always wanted to add to the original plugin: some animation. When you add a new reply, the page scrolls to that reply, and a yellow fade effect highlights it. This is a genuinely useful feature, as I was finding it difficult to see which reply I’d just added when there are a lot of comments around.
I’ve also begun work on a Comments Report showing recent comments. An obvious related enhancement is to take the TiddlyWeb Atom plugin and make a comments feed.
Right now, all this is only tested on Firefox (the original was tested on all browsers, at least in the full website view); my next priority is browser compatibility, and after that, extracting a modular jQuery comments plugin.
Regarding the implementation, TiddlyWeb ships with OpenID by default (OpenID is one of two default challengers, the other being the usual simple username-password pair). The most challenging thing here was getting the UI right for both anonymous and logged-in users, as well as handling a redirect in the popup after a successful login; at the back end, OpenID “just works”.
In summary, I added OpenID support as follows:
- Add a “login” link to the TiddlyWeb OpenID challenger UI, using a “target” attribute so the challenger opens in a popup.
- The challenger URL in that link also contains a redirect param, pointing to a new static page. This static page shows the user their login ID (by inspecting the “tiddlyweb_user” cookie value), calls an “onLogin” callback method on the original page, and closes itself.
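The static page’s logic can be sketched like this. The “tiddlyweb_user” cookie and the onLogin callback are from the setup above; cookieValue() is a generic helper I’ve invented for illustration:

```javascript
// Parse one cookie out of a document.cookie-style string.
function cookieValue(cookieString, name) {
  for (const pair of cookieString.split("; ")) {
    const eq = pair.indexOf("=");
    if (pair.slice(0, eq) === name) return decodeURIComponent(pair.slice(eq + 1));
  }
  return null;
}

// In the popup's static page (browser-only, shown for context):
// const user = cookieValue(document.cookie, "tiddlyweb_user");
// window.opener.onLogin(user); // notify the original page
// window.close();              // and get out of the way
```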
PS I discovered late in the day (literally) that TiddlyWeb lets the client specify whatever modifier they want.
Being a web app about websites and other resources, Scrumptious has a resource which is basically a URL, called “Pages”. A Scrumptious resource modelling google.com/about looks like:
See? The resource ID is http://google.com/about, encoded (via encodeURIComponent("http://google.com/about")).
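For the record, the encoding step is a one-liner; the colon and slashes become percent-escapes, which is exactly what trips up the server below:

```javascript
// Encode a full URL so it can serve as a single path segment in a resource ID.
const id = encodeURIComponent("http://google.com/about");
// → "http%3A%2F%2Fgoogle.com%2Fabout"
```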
This was working fine on my dev machine, running “twanager server comments.boz 8080” (comments.boz being an alias for my local machine). But on the server, and run through apache and TiddlyWeb’s apache.py, it failed:
The fix was twofold – both of the following were required:
- AllowEncodedSlashes On in the Apache config. This option ensures encoded slashes (%2F) are passed through to the end-app.
- The PathInfoHack plugin. As with any TiddlyWeb plugin, I downloaded it and added it to the ‘system_plugins’ list in tiddlywebconfig.py. (Thanks Chris for the pointer.)
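For context, the Apache half of the fix lives in the vhost config; a hedged sketch, with an invented hostname:

```apache
<VirtualHost *:80>
    ServerName comments.example.com   # illustrative hostname
    AllowEncodedSlashes On            # pass %2F through to the end-app untouched
</VirtualHost>
```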
And now we can happily talk resources with URLs as IDs to the server.
Today, we’re excited to introduce a new feature to our website that will expose the niche add-ons that can be hard to find, and gives users a more active role in helping outstanding add-ons bubble to the top. One thing we’ve learned as add-ons have grown in popularity over the years is that once a user finds an add-on they love, they become a fan for life. We see this all the time as people recommend add-ons to their friends and write great reviews. And we’re very happy to see so many bloggers writing about lists of their favorite add-ons.
Now this is interesting for a couple of reasons. Firstly, it’s an example of the URL Trail pattern; Firefox Collections is a framework allowing people to publish and share a collection of links, very similar to Amazon’s “So you want to …” ListMania. It adds something that neither I nor Vannevar Bush mentioned about trails: the ability to subscribe to a trail in order to see changes made to it; e.g. here is the RSS feed for the Social Circuit collection.
The second hotness about Collections is that Firefox add-on management is getting easier. You can click on a bunch of add-ons, like adding them to a shopping cart, and install them all in one go:
It takes Firefox closer to where it should be IMO, in which add-ons are easily installed with one click, and distros are available for specialised needs, e.g. a developer build containing Firebug and other dev tools. Moz themselves might be wary of blessing certain extensions in this way; so the ideal situation might be if they were to provide a framework like Collections, where anyone can propose a bundle and the best bundles rise to the top, with a way to yoink a Firefox build containing all the add-ons in a collection.
We’re designing a setup for TiddlyDocs (and potentially other “TiddlyCMS”s) where there will be an instance for each group of contributors. This is one TiddlyWeb design pattern, where you say “we all trust each other” (although there are still audit logs!) and so everyone can add, delete, or modify all the content freely. That is, there is only one content bag accessible to all. Likewise, anyone can append comments. There is still an admin role for config settings, which only the admin can change; otherwise, any user could easily perform a phishing attack, for example. Users also have private bags which they can do what they like with – they could use them for drafts, or for overriding shadow tiddlers set up by the admin in config.
A rough sketch looks like this:
(Some things need fixing, but the overall idea is conveyed.)
I discuss more about this architectural pattern with Jon in the following video:
Now that model alone is good enough for basic cases, where you have a group of people who work together and just want to set up an instance. TiddlyWeb handles that just fine, even using the basic cookie authentication technique. In more complex situations (read: “enterprises”), you have a many-to-many relationship between users and instances. I was talking with TiddlyWeb’s architect, Chris Dent, about possibilities, and we decided on one possible model:
A central user-to-pool mapping database, which is shared by three web apps: (a) all the TiddlyDocs’ servers, specifically their Challenger module; (b) an admin app for managing the mappings (typical Django/Rails app with forms generated from data model); (c) a user-facing app (possibly a gadget in our envisioned OpenSocial scenario) showing the user (already authenticated via single-sign-on) a dropdown of instances they can launch.
Scrumptious is a web framework I’ve begun working on at Osmosoft. It’s a web app and web service for sharing bookmarks and comments about websites, and pretty much anything else with a unique URL. Things it is related to: Delicious (bookmarking), Digg (threaded comments), JS-Kit and Disqus (embedded comments with common identity across multiple sites).
Scrumptious is open source, under the BSD license (meaning you can do just about anything you like with it). There are already many open-source clones of this sort of thing, so why make a new one? There are a few reasons:
- Adherence to RESTful principles – Scrumptious is backed by TiddlyWeb, a RESTful data service.
- A non-TiddlyWiki TiddlyWeb client – So far, people who have used TiddlyWeb as a server have used TiddlyWiki as a client. They do play nicely together, but TiddlyWeb is a powerful RESTful framework on its own. Part of the motivation for Scrumptious was to port the TiddlyWiki nested comments plugin to a generic jQuery plugin that could be used on any web page. (Comments are indeed implemented as a plugin right now, but more work needs to be done to extract it into something truly modular and reusable; for example, the plugin currently assumes comments are about a web page, and it is tied to TiddlyWeb. Nevertheless, the app still achieves its main purpose of demonstrating that TiddlyWeb is a fine data service for generic web apps.)
- Demonstrating the power of URLs – In evangelising web standards, a very practical piece of advice is simply to associate a unique URL with each distinct resource. That’s REST 101, but it’s something lacking in many web apps. With a tool like Scrumptious, you get a comment system “for free” as long as each resource in your system has a unique URL. We’ll be developing a similar framework for URL Trails in the future, and the same principle applies: use unique URLs, and people can put your stuff in trails “for free”.
- Flexible security model – Again, TiddlyWeb offers flexible permissioning, so you can use the app in different ways: a private conversation between colleagues; a public conversation (as with the demo); a publicly readable conversation where only certain individuals can contribute; etc. Likewise, TiddlyWeb offers flexible authentication, so you could hook into an organisation’s LDAP system, use OpenID, simple user-password pairs, or any other form of authentication you wish.
Scrumptious is still at an early stage. Future work includes:
- Bookmarking – for now, there is only commenting rather than social bookmarking per se.
- Nested comments UI needs work to give it the same kind of UI as the TiddlyWiki comments plugin, e.g. show info like creator and creation date, and use suitable rendering and indenting.
- TiddlyWiki comments plugin harmonisation – As TiddlyWiki now ships with jQuery, it would be ideal if there were a single code base for the comments plugin, running both in and out of TiddlyWiki. Indeed, I hope TiddlyWiki moves towards a general microkernel architecture, in which all plugins are useful outside TiddlyWiki. This is certainly becoming the case, with generic jQuery plugins being extracted for core activities like file saving, CSS applying, and wikifying.
- Browser extensions – instead of a bookmarklet, use a browser extension to show, for each page, if comments are available, as well as bookmarking info. (Similar to the StumbleUpon or Delicious Firefox extensions.) A good opportunity to get my hands dirty with JetPack.
- Login and identity management – while TiddlyWeb already provides the security and permissioning model, work is required to handle this at the UI level. For example, let anonymous users enter their email address and homepage, and/or register.
- User admin – for situations where users must be authenticated, some form of user management would be handy. Again, TiddlyWeb provides a good model for this – a “bag” of tiddlers has its permissions specified in JSON, which may include a list of users for each access type, so you simply need to PUT and GET this if you are sufficiently permissioned (i.e. an admin). What’s missing right now is a UI (in TiddlyWiki too, let alone standalone web apps).
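For flavour, a bag’s policy JSON is roughly of this shape – the field names are from memory and the user lists are invented, so treat this as a sketch rather than the definitive format:

```json
{
  "policy": {
    "read": ["alice", "bob"],
    "write": ["alice"],
    "create": ["alice"],
    "delete": ["alice"],
    "manage": ["alice"],
    "owner": "alice"
  }
}
```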
Z-index is the CSS property governing how high in the stack an element is, if you visualise the elements as appearing in a 3D stack coming out of the page. The actual value of an element’s z-index doesn’t matter; just its value relative to other elements on the page. Elements with higher z-indexes appear on top of elements with lower z-indexes.
I was just designing a bookmarklet for Scrumptious (a TiddlyWeb-powered tool for having discussions around websites; I’ll talk about it in a future article). As a bookmarklet, it needs to appear above everything else on the page, so it needs a higher z-index than everything else. Maybe not the highest z-index possible, since there might be apps that need to sit above my app’s bookmarklet (Design for Extensibility). But I still need to know the maximum z-index.
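One way to sit “just above everything” is to scan the page for the current maximum; a sketch, with the scanning logic kept pure so it’s easy to test (the DOM lines in comments are illustrative):

```javascript
// Find the largest numeric z-index among a list of computed z-index strings.
// "auto" and "" parse to NaN and are skipped.
function maxZIndex(zValues) {
  let max = 0;
  for (const z of zValues) {
    const n = parseInt(z, 10);
    if (!Number.isNaN(n) && n > max) max = n;
  }
  return max;
}

// In a browser, you would feed it real values:
// const values = [...document.querySelectorAll("*")]
//   .map(el => getComputedStyle(el).zIndex);
// bookmarkletDiv.style.zIndex = maxZIndex(values) + 1;
```

Note this ignores stacking contexts, so it’s a heuristic rather than a guarantee – but for a bookmarklet overlay it generally suffices.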
It would be nice if the standards published the maximum allowed z-index. Every reference makes a comment like “there’s no real limit”. For good reason too, since the W3C standards don’t really cover this. The CSS 2.1 spec goes so far as to include an “Elaborate description of Stacking Contexts” – the page is even called “zindex.html” – but even here, it omits to pin down the max z-index value.
A handy summary was stated on StackOverflow (which has fast become the central resource for programming FAQs and a site I have quickly come to adore for its clear mission, clean design, and community feel):
So basically there are no limitations for z-index value in the CSS standard, but I guess most browsers limit it to signed 32-bit values (−2147483648 to +2147483647) in practice (64 would be a little off the top, and it doesn’t make sense to use anything less than 32 bits these days).
Looking further, I came across the most comprehensive summary of the situation, published recently. It also highlights the fact that it’s not just the maximum value we want, but what happens if we exceed it.
I made a simple test page to find these limits and figure out what happens when you exceed them.
| Browser | Max z-index value | When exceeded, value changes to |
| --- | --- | --- |
| Internet Explorer 6 | 2147483647 | 2147483647 |
| Internet Explorer 7 | 2147483647 | 2147483647 |
| Internet Explorer 8 | 2147483647 | 2147483647 |
| Firefox 2 | 2147483647 | *element disappears* |
| Firefox 3 | 2147483647 | 0 |
| Safari 3 | 16777271 | 16777271 |
| Safari 4 | 2147483647 | 2147483647 |
| Opera 9 | 2147483647 | 2147483647 |
So the best approach would be to use browser detection and pick the maximum from there. Failing that, use 2147483647.