TiddlyGuv

I’ve just presented TiddlyGuv at the FOSSBazaar workshop, the event I wrote about yesterday. It’s a good time to introduce TiddlyGuv – I’ve mentioned it in some talks and tweeted about it on occasion, but this is its maiden appearance on Software As She’s Developed.

TiddlyGuv (that’s a code name we’ll use for a while) is a web app to support open source software governance. In promoting and supporting open source at Osmosoft, one of our activities is a governance process. This means establishing policies for open source usage and auditing projects against them. TiddlyGuv is the tool we’re developing to support this process – it will support community discussions, policy publication, project tracking, workflow, and alerts about potential problems.

A few things about TiddlyGuv:

  • TiddlyGuv is a work in progress; it’s not complete and still needs work around a clean installation package. (We’re currently re-working how all this is done with TiddlyWeb verticals; once that’s done, there will be a package so people can try it out.) In any event, the code in some form is online in Trac.
  • TiddlyGuv is a web app. No surprise there :).
  • TiddlyGuv motivated the comments plugin, which has now been used in various other projects, including the live TiddlyWeb documentation wiki.
  • TiddlyGuv is built on the TiddlyWiki-TiddlyWeb stack, like TiddlyDocs. We see a general trend of CMS apps emerging on this stack, which has been called TiddlyCMS. (TiddlyCMS is an architectural pattern.) The aim is that TiddlyCMS apps share a large suite of common components such as the comments plugin.
  • TiddlyGuv is open source. The license hasn’t been finalised, but will likely be BSD, same as TiddlyWiki. We hope other organisations will use TiddlyGuv themselves; TiddlyWiki and TiddlyWeb offer highly flexible architectures which lend themselves to adaptability.

Functionally, the first thing we’re aiming for is a tool with a document wiki, for general discussion about open source usage, and a license portal, showing a list of licenses and information about each one. The comments plugin ensures each document and each license can be commented on too. All of this is there today, albeit in an untested state. Later on, we will add info about projects and, further still, a workflow tool – for example: “Once the project form has been submitted, it must be reviewed by two named reviewers. They will each receive an email notification when a project is ready for review.” We would want the system to follow rules like that, and we would want to allow site administrators to maintain such rules. The TiddlyWiki model is to encapsulate rules like that inside tiddlers. I foresee some kind of Javascript-powered Domain-Specific Language (DSL).
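To make that concrete, here’s a minimal sketch of how a tiddler-encapsulated rule and its evaluator might look. All the names here are hypothetical – this is not actual TiddlyGuv code, just an illustration of rules-as-data:

```javascript
// Hypothetical sketch: a workflow rule that could live inside a tiddler,
// expressed as plain Javascript data plus a tiny evaluator.
var rule = {
  event: "projectSubmitted",
  requiredReviews: 2,
  notify: ["alice@example.com", "bob@example.com"]
};

// When an event occurs, return the notifications the system should send.
function applyRule(rule, event) {
  if (event.type !== rule.event) return [];
  return rule.notify.map(function(address) {
    return { to: address, subject: "Project ready for review: " + event.project };
  });
}

var notifications = applyRule(rule, { type: "projectSubmitted", project: "TiddlyGuv" });
// one notification per named reviewer
```

Because the rule is just data in a tiddler, site administrators could edit it the same way they edit any other content.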

The FOSSBazaar slide pack:

The feedback from the workshop was positive; there’s interest in the project and there was also interest from one vendor of a commercial product about ways to introduce the wiki or commenting aspects into an existing product. A good example of this sort of thing is Amazon – the way it has a structured product page, which incorporates a forum and a wiki. It raises some interesting architectural issues about how a TiddlyWiki/TiddlyWeb app could be embedded onto a page as a widget-style component, with identity and permissioning handled too. I also had an extended discussion with Mark Donohoe about how TiddlyGuv could talk to FOSSology. Of which, more in the next post.

Meeting with Aptana’s Kevin Hakman – Talking Jaxer

Regular readers of this blog will know I am a big fan of Javascript on the server. Through its Jaxer project, Aptana has been at the forefront of the server-side Javascript renaissance. As I’m in SF right now (mail or tweet me if you’d like to meet up), I was pleased today to be able to catch up with Kevin Hakman, a great talent in the Ajax world who runs community relations and evangelism for Aptana.

I wanted to get an overview of the state of Aptana and Jaxer, and also understand how it might be used in a TiddlyWiki context to help with accessibility.

We discussed a range of Jaxer-related things, which I’ll list below. Kevin reviewed a draft of these points and gave me some links and corrections.

  • Jaxer – as you probably know, Jaxer is an Ajax engine. It’s server-side Javascript, but more than that: you can perform DOM manipulation on the server, and it spits out HTML from there. Very exciting for all the reasons I’ve mentioned in previous blog posts – using a single language, making proxied calls from client to server, etc. The great thing about Jaxer is it’s open source – you can download a version with Apache that “just works”: download, install, and start the server in one click, and the sample apps are up and running. Aptana runs a cloud host for Jaxer (as well as Rails/PHP etc), so you can host with them if you don’t want to host it yourself. They have some interesting value-adds around their hosting, like support for different environments (dev/test/etc with different parameters, such as extra debugging), which you can easily spin up and turn off again.
  • ActiveJS – ActiveJS is a Jaxer-powered framework modelled on Rails and close to beta. The most mature part is ActiveRecordJS (ORM), but there is now ActiveView and ActiveController as well. As with Rails, you can use them independently. This is exactly what I’ve been hoping for and it sounds like it’s mature enough to start hacking against. Kevin is realistic about uptake and expects it to be a long process.
  • Microformats – Kevin directed me to this Jaxer project which filters parts of a form depending on who is viewing it. You just use a different class name to indicate who can view what, and the server chooses which parts to output. This got me thinking about microformats and how Jaxer could help render them – with a single library, dynamically in the browser and statically from the server. There are a number of microformat libraries which might already be useful sitting in a Jaxer server.
  • TiddlyWiki – taking TiddlyWiki-based apps and using Jaxer to serve them statically from the server. Since Jaxer supports DOM manipulation, it should certainly be possible. Kevin said accessibility is certainly an argument for Jaxer, though there’s no significant demo to date, so a TiddlyWiki project would make a good demo. His intuition is that it will be a 90-10 affair: 90% straightforward, with the remaining 10% causing some complications. It would definitely make an interesting project to do a simple version, e.g. initially just getting Jaxer to output each Tiddler, showing title and wiki markup.
  • Demo app – Aptana TV is implemented in Jaxer+ActiveJS and there are plans to open source the implementation and make it Jaxer’s “Pet Store”.
  • Interoperating – Jaxer can interoperate with other languages using browser extensions/plugins, since the server is basically Firefox; these are not extensions users have to install, but act as enhancements to the server. Just as a user can install a plugin into Firefox so web apps can make calls to Java, for example, a developer can patch the server with the same plugin (or similar – with the same API, anyway?). Kevin said most of these kinds of components are XPCOM, but I guess Jaxer should accept regular Firefox extensions and even Greasemonkey scripts; there’s a DotNet COM example. In a simpler sense, you can call out to other languages using the OS command line – system commands have always been possible from the Jaxer server.
  • FF bugs – Jaxer shares Firefox’s bug set and fixes, when you think about it!
  • Performance – likewise, Jaxer benefits from all the stuff you read about sky-rocketing Javascript performance in the browser. Jaxer will inherit the performance boost from TraceMonkey, which promises to be silly-fast – 20x to 40x speed improvements for JS, according to Mozilla.
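The form-filtering idea from the microformats point above can be sketched in a few lines of plain Javascript. The class/role names below are invented; in Jaxer the filtering would happen server-side, before the HTML is ever sent down:

```javascript
// Sketch: each form field declares, via metadata, which roles may see it.
// The server strips out any field the current viewer isn't entitled to.
// All names here are hypothetical.
var fields = [
  { html: '<input name="name">',   roles: ["admin", "user"] },
  { html: '<input name="salary">', roles: ["admin"] }
];

function visibleFields(fields, viewerRole) {
  return fields.filter(function(field) {
    return field.roles.indexOf(viewerRole) !== -1;
  });
}

// A plain user sees only the fields tagged for them.
var forUser = visibleFields(fields, "user");
```

The appeal of doing this in Jaxer is that the same filtering logic could run in the browser for instant feedback, with the server-side pass as the actual enforcement.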

TiddlyDocsDocs – Design Docs for Tiddly*

TiddlyDocs – document collaboration by divide and conquer

Since we’re planning to use TiddlyDocs internally, we’re in need of some high-level documentation for TiddlyDocs in order to have it approved for certain uses.

My starting point was to locate, solicit, or produce documentation for TiddlyWiki and TiddlyWeb, the technologies on which TiddlyDocs is based.

For TiddlyWiki, useful sources are:

  • Position Paper – Although it’s technically a position paper for an event on device access, Paul Downey has done a great job of overviewing TiddlyWiki from both high-level and technical perspectives.
  • TiddlyWiki Internals Series – I wrote this when I joined Osmosoft, so a caveat is that my knowledge of TiddlyWiki was limited at the time. OTOH it’s the only detailed dive into the internals I’m aware of.
  • Community Wiki – Scouring around the official community wiki is another way to find a lot of useful info, particularly about the API.

For TiddlyWeb, there are fewer third-party sources, but fortunately its creator Chris Dent puts a lot of effort into writing detailed notes. These are notes shipped with the code itself, notes on the TiddlyWeb wiki, and even some very useful notes in commit messages. Most recently, Chris has been producing some excellent doco here, inside the wiki.

For TiddlyDocs, Simon has produced a nice new screencast on TiddlyDocs with animations. Video quality needs some polishing, which we’ll do in later versions, but he’s done a great job motivating the product:


TiddlyDocs Intro from Simon McManus on Vimeo.

“Moving There”: Property Listings Mashup (Rewired State)

I recently attended Rewired State with several fellow Osmosofties. Our group (Simon, Fred, Jeremy, Robbie and myself) developed a little mashup called Moving There (and a couple of other things not described here).

After the demos, our mashup was mentioned by a representative from DirectGov as one of several projects they’d like to work with, perhaps to sponsor further work.

The idea was to overlay government-related data on property listings sites. I’m moving house right now, so this is very much an itch I’d like to scratch.

Typical property listings sites like RightMove show a list of properties matching your search criteria (shown below).

There are a lot of things they don’t show you, though: upcoming building works, local schools, crime stats, vandalism. All these things are available in one form or another on websites, some operated by the government and others by third parties.

One part of our work involved taking in a postcode and producing useful info about it, so we came up with a little library for that. We also made some attempt to normalise these metrics, settling on a “coolness factor” between 0 and 1. This is very crude, but we feel it’s useful when comparing stats to say “schools have a coolness factor of 0.3 and crime prevention is 0.6” rather than “schools have an average pass rate of 48% and there are 142 crimes per week”. In other words, a standard coolness-factor measure goes some way towards helping you compare apples with oranges.
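The normalisation is essentially min-max scaling onto 0..1 – something like the sketch below, where the reference ranges are invented purely for illustration:

```javascript
// Sketch of the "coolness factor" idea: min-max scale a raw metric onto
// 0..1 so that unlike stats (pass rates, crime counts) become comparable.
// The worst/best reference values are invented for illustration.
function coolness(value, worst, best) {
  var scaled = (value - worst) / (best - worst);
  return Math.min(1, Math.max(0, scaled)); // clamp to [0, 1]
}

// e.g. a 48% school pass rate, against a 0-100 range:
var schools = coolness(48, 0, 100);
// 142 crimes/week, where 200 is the worst observed and 0 the best:
var crime = coolness(142, 200, 0);
```

Note that “worst” can be the larger number (as with crime), which is what makes the scale directionless enough to compare apples with oranges.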

But the real product concerned itself with mashing up a properties listing site. After entering a location, you get back listings as well as charts showing the “coolness” factor according to the various stats. Well, that’s the theory; in this case, it’s just one stat (fixmystreet local incidents, e.g. vandalism) per property. And the UI isn’t particularly slick either! I’m just presenting where we got to when the time ran out on the day!

A few notes on the technology. We made use of JQuery for the usual infrastructural stuff. We used Google’s geo-coding API to map street names and suburbs into a postcode. The postcode info demo and the final mashup are actually file-based web apps, i.e. they run off a file:// URI. We debated several models for the mashup’s architecture; in retrospect, we spent a lot of time re-building the property search’s UI, and it might have been better to go with a Greasemonkey overlay, at least for proof-of-concept.

TiddlyWiki Drawing Plugin based on Project Draw

I’ve been creating a drawing plugin for TiddlyWiki (SVN trunk). It delegates the actual drawing editing to Project Draw (docs; demo), a browser-based drawing editor from Autodesk Labs.

Here’s a screencast:


TiddlyWiki Drawing Plugin based on Project Draw from Michael Mahemoff on Vimeo.

Autodesk Labs’ Project Draw editor looks like this:

… and the tool has a nice set of RESTful services we can make use of:

Project Draw doesn’t have any specialised cross-domain API, e.g. using JSON or OAuth, so normally you would have to proxy any calls via the server. However, a standard TiddlyWiki runs against a file:// URL and can therefore bypass cross-domain restrictions; hence, it was possible to build this plugin using the same kind of XMLHttpRequest calls as Project Draw does for its own RESTful services demo.

One likely use of Project Draw is in conjunction with TiddlyDocs, which will generally run as a standard website – off a http:// URL. Hence, we will need to do some proxying in order to get around cross-domain restrictions.

Also, the tool isn’t fully working in IE yet; I’m working on that.

You create a new drawing with the New Drawing menu item. This is simply a <<newDrawing>> macro and could appear anywhere in the page.

And you can set image dimensions in edit mode:

… which will cause images to be clipped depending on the dimensions:

By default, images are rendered as SVG, using the tricks I discussed earlier for inline SVG. However, this is only supported for non-IE browsers, or IE with Flash-based SVG support. I am still working on IE support. Without SVG, the image will fall back to a bitmap image (fortunately, Project Draw can output an image as SVG or bitmap). However, note that the bitmap image always shows the whole thing; so rather than being clipped, it will re-scale, which could lead to a lot of whitespace being shown in TiddlyWiki. For this reason, in Project Draw, you should set the page dimensions to avoid any whitespace:

Fun with Fragment Identifiers

I was recently invited to make a statement on WebMasterWorld about something in advanced Javascript at the edge of my understanding. I decided to cover fragment identifiers, something I have an odd obsession about:

There’s a lot of cool stuff going on right now, standards-based graphics and sound foremost in my mind. But I’ll focus here on something I’ve always found oddly fascinating: fragment identifiers and the quirky exploits around them. The humble “frag ID” is not exactly the pinnacle of the Ajax revolution, but it remains misunderstood and under-utilised by developers, and a feature which the browsers could do more to support. By design, fragment identifiers do just as their name suggests – they identify fragments of a document. Type in http://example.com#foo, and the browser will helpfully scroll to the “foo” element. However, they can be used for a lot more than that, thanks to peculiarities in the way they are handled by browsers.

Most importantly, by writing Javascript to change the fragment identifier, you’re able to update the page URL, without effecting a page reload. This fact alone means – contrary to a popular Ajax myth – you can support bookmarking and history in an Ajax application. Furthermore, fragment identifier information is never sent to the server, unlike the rest of the URL, which leads to some interesting applications related to security and privacy. Another powerful application is interacting between iframes from different domains, something which is theoretically “impossible” in older browsers and tremendously important for embedding third-party widgets on web pages. And when I began working at Osmosoft, I discovered yet another application: providing startup context to web apps, such as TiddlyWiki, which run off a file:// URL…as they’re unable to read conventional CGI parameter info.

I first researched this stuff for Ajax Design Patterns in 2005, and at that time, there were only a couple of scattered blog articles on the topic. Today, we can see the pattern in action in various high-profile places such as Google Docs and GMail. When I view my trashed documents in the former, for example, the URL is http://docs.google.com/#trash. That said, many developers still aren’t using this feature to its fullest, and library support is somewhat lacking. It’s encouraging to see IE8 has a new event for monitoring the fragment identifier, which suggests the story is not over for this rarely discussed property.

This is certainly a hot topic, if not well understood or covered; shortly after I wrote it, I was amused to see the topic showed up on none less than Techmeme:

Techmeme: “Google’s new Ajax-powered search results breaks search keyword tracking for everyone” (getclicky.com/blog)

The issue here is summarised in my statement: “fragment identifier information is never sent to the server, unlike the rest of the URL, which leads to some interesting applications related to security and privacy”. These bloggers had noticed that Google was (in some cases) changing its URL structure for searches to use hashes (#q=term) instead of traditional CGI params (?q=term). Thus, when the user clicks through from Google to your web page, you don’t know what the user searched for.
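Decoding such hash-borne parameters on the client is only a few lines – here’s a hedged, framework-free sketch of the kind of parsing an app (or an analytics script running on the page itself) would do:

```javascript
// Sketch: parse CGI-style parameters out of a fragment identifier,
// e.g. "#q=term&lang=en". The fragment never reaches the server,
// so only script running in the page can decode it.
function parseFragmentParams(hash) {
  var params = {};
  hash.replace(/^#/, "").split("&").forEach(function(pair) {
    if (!pair) return;
    var parts = pair.split("=");
    params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || "");
  });
  return params;
}

var context = parseFragmentParams("#q=fragment%20ids&lang=en");
// context.q is "fragment ids", context.lang is "en"
```

In a live page you’d feed it location.hash; the same trick is what lets a file:// TiddlyWiki pick up startup context without any server involvement.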

Another thing that arose recently is how easily you can retain fragment identifiers in bookmarking-type sites. For example, if you submit a URL to Digg, it annoyingly strips the fragment ID, whereas Delicious retains it. TinyURL retains it too, but their PHP “create” script removes it, according to a conversation I had with Fred, who created this tinyURL bookmarklet. I can see why sites wouldn’t want to retain it, because it may well be just an in-page link, so it’s a tricky issue.

The main point being, fragment IDs – and design questions thereof – are alive and well.

UML Tools in the Cloud

We have several IDEs in the cloud such as:

(Update: How’s that for timing? This blog post was written one day before Dion and Ben announced Bespin, the massively important open-source project to build an IDE in the cloud, among many other things. Needless to say, Bespin has been added to the list!)

I would like to see more UML tools to complement these online IDEs. I’m not talking about naive enterprise silliness like model-driven architecture; I mean just some basic tools to support basic sketching.

So here is a nice one Jeremy showed me: Web Sequence Diagrams. It’s based on a declarative text spec of the model, rather than endless, tedious mouse clicking. A perfect example of the WYSIWYN (what you see is what you need) paradigm trumping WYSIWYG. Just a nice, simple tool that lets you generate UML models in a matter of minutes – the most any agile practitioner would want to spend on them.

TiddlyDocs Prototype

Simon McManus has overviewed TiddlyDocs, a TiddlyWeb-based document editing and collaboration tool. It’s a great example of taking an existing framework – TiddlyWiki – adding in a bunch of components – TiddlyWiki plugins and other open source pieces – and putting some custom code around it to generate something useful in just a few weeks. Simon’s article has a good overview of all the moving parts inside TiddlyDocs.

I recorded a screencast to demo the initial version:


TiddlyDocs – Early Prototype from Michael Mahemoff on Vimeo.

I’m excited about the potential of this tool. In the simplest case, you can see it as an online word processor which, being open source, could therefore be deployed flexibly, securely, and freely behind any organisation’s firewall. But it can be much more than just a word-processor, because each section is a tiddler behind the scenes, and is therefore a first-class data structure with its own set of fields. In particular, there are the seeds of a workflow system, with a field indicating the current completion status, and another field assigning the individual who will progress it. This means you can get an RSS feed, for example, of all sections in “under review” status, or a list of all sections assigned to a particular user.
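Since each section is just a tiddler with fields, such a feed amounts to a filter over those fields. A sketch, with illustrative field names (not necessarily the ones TiddlyDocs actually uses):

```javascript
// Sketch: each section is a tiddler carrying workflow fields, so a status
// feed is just a filter over them. Field names here are illustrative.
var sections = [
  { title: "Intro",    fields: { status: "under review", assignee: "simon" } },
  { title: "Design",   fields: { status: "draft",        assignee: "fred" } },
  { title: "Appendix", fields: { status: "under review", assignee: "fred" } }
];

function sectionsByStatus(sections, status) {
  return sections.filter(function(s) { return s.fields.status === status; });
}

function sectionsByAssignee(sections, who) {
  return sections.filter(function(s) { return s.fields.assignee === who; });
}

var underReview = sectionsByStatus(sections, "under review"); // two sections
```

Serialising a result list like this as RSS is then a presentation detail rather than anything structural.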

Inline SVG

Okay, I’m currently ripsnort delighted to have found a solution to this problem of rendering SVG elements dynamically. As in:

  1. My web apps receive some <svg>...</svg> from an Ajax call
  2. ???
  3. Super, there’s a drawing on the page!!!

What is the secret sauce in step 2?

After a merry frolic through the majority of the internets and a dozen prototypes, I finally found the answer. In essence, you just create an “object” element with type and data settings:

```javascript
var svgObject = document.createElement('object');
svgObject.setAttribute('type', 'image/svg+xml');
svgObject.setAttribute('data', 'data:image/svg+xml,' + svgCode); // the "<svg>...</svg>" returned from the Ajax call
$("#reportSVGDiv").append(svgObject);
```

It works in Firefox, and the article explains how to get it working in IE too, which I don’t need just yet.

A few other things I learned and tried:

  • I initially, naively, tried just adding the SVG via innerHTML. As Jeremy explained to me, this doesn’t work because the browser uses a different parser for HTML compared to pure XHTML. (And TiddlyWiki, like most things on the web, is HTML.) Even with an <object> tag around it, it won’t just switch over.
  • The easiest way to do this is to use a .svg suffix (or probably to set the MIME type to SVG, but this is TiddlyWiki and I’m working from file:// URLs, where it’s not possible to set a MIME type). But then you can’t embed it on the page – you’d have to point to it from an embed tag or object tag.
  • You can inline the SVG if you write “pure” xhtml, and for this, you have to give the file a .xhtml suffix. As in this example.
  • I tried the Inject HTML into an iframe technique. I was hoping that with the right XML declaration and HTML type, I could convince the iframe it was hosting XHTML. But no, I could not. I’m still interested to know if there’s any way I could convince a dynamically generated iframe, with dynamic content, about the type of content it contains.

This is all part of some recent prototyping on an exciting TiddlyWiki project involving rich text editing, among other things. In a later blog post, I’ll explain how we’ve used this SVG stuff to mash up an online chart drawing tool with TiddlyWiki. I’m currently packaging it into a plugin.

Google jQuery CDN

NO LONGER UPDATED: July 2012. This page is no longer indexed prominently in Google, so keeping it up-to-date is not much use to anyone. So consider updates finished. Try this link instead

SUMMARY: Get the latest jQuery here:

http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js

To include in your web app, cut-and-paste this:

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"></script>

You can also cut-and-paste one of the following to suck it down:

  • curl -O http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js
  • wget http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js

With the release of JQuery 1.3, I thought I’d mention the Google JQuery CDN, one of several libraries you can yoink from Google’s Ajax Libraries API. Instead of hosting your own JQuery, you just point your web app to Google’s JQuery:

<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> (Modified version from 1.3. I’ll try and keep this post linking to the latest version.)

Google’s documentation would lead you to believe you need to import Google’s library and then call google.load(), but that’s unnecessary; just pull in the script directly using the link above. The link is officially documented, so it’s going to stay put for a very, very long time.

There are also other libraries available on Google’s CDN and elsewhere.

The main point of the CDN is caching. If a user hits 100 JQuery sites, the library will only be downloaded once, and the remaining 99 sites will pull it from the local cache. From an operational-cost perspective, you also get to offload the cost of serving Javascript to Google. As an added bonus, I also find it easier as a developer to kick off a new project by just linking to the CDN – no need to download and copy the latest version of JQuery.

Great. So when not to use a CDN?

Obviously, whenever you’re working offline. This sometimes happens when developing locally, and with Single Page Applications like TiddlyWiki. (The next “major minor” release of TiddlyWiki, v2.5, is JQuery-powered.)

Another case would be when you can deliver faster than the CDN, and care about that. This might be the case when all users are close to the server. Even then, though, you won’t get the local-cache benefit from users who already have JQuery cached.

Finally, privacy and security concerns. Using the CDN, you are trusting the CDN to faithfully serve JQuery and relying on no third-parties injecting funniness in between the CDN and your user’s browser. If anything went wrong here, your user could be subject to a world of XSS and CSRF attacks. Or, in a more mundane scenario, the CDN might simply go down, thus breaking your web app. Furthermore, you’re giving the CDN data about your users this way. Although Google doesn’t use cookies, others might; and even if they don’t, the IP number is going to be sent back to them. The referrer – being your website – will also be sent to them, so they can, if they want, track your website’s usage.

For many websites, though, it’s great that we can use a reliable CDN like Google’s to fetch JQuery fast.

Update (11-Feb-2009): In a show of recursive linking, I should fess up as to why I originally sat down to write this: a twitter message.

Update (erm, mid-2009 sometime): Montana left a comment pointing out you can also drop off the minor numbers to get the latest version. So, for the latest JQuery (since 2.0 doesn’t exist), use http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js. Or, for the latest in the old 1.2 series: http://ajax.googleapis.com/ajax/libs/jquery/1.2/jquery.min.js.

Update (19-March-2010): Scratch that. @premasagar pointed out to me that the link to just the latest version (until 2.0, i.e. http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js) as opposed to a specific version (http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js) is undesirable; Google will expire the “latest” content after one hour, as opposed to one year for a specific version. This is for good reason of course, as the latest version is subject to change. For this reason, I’ve reverted the link at top to be a specific version, which I’ll endeavour to update upon each new release.