Meeting with Aptana’s Kevin Hakman – Talking Jaxer

Regular readers of this blog will know I am a big fan of Javascript on the server. Through its Jaxer project, Aptana has been at the forefront of the server-side Javascript renaissance. As I’m in SF right now (mail or tweet me if you’d like to meet up), I was pleased today to be able to catch up with Kevin Hakman, a great talent in the Ajax world who runs community relations and evangelism for Aptana.

I wanted to get an overview of the state of Aptana and Jaxer, and also understand how it might be used in a TiddlyWiki context to help with accessibility.

We discussed a range of Jaxer topics – I’ve listed them below. Kevin reviewed a draft of these points and gave me some links and corrections.

  • Jaxer – as you probably know, Jaxer is an Ajax engine. It’s server-side Javascript, but more than that: you can perform DOM manipulation on the server, and it spits out HTML from there (see the first sketch after this list). Very exciting for all the reasons I’ve mentioned in previous blog posts – using a single language, making proxied calls from client to server, etc. The great thing about Jaxer is that it’s open source – you can download a version with Apache that “just works”: download, install, and start the server in one click, and the sample apps are up and running. Aptana runs a cloud host for Jaxer (as well as Rails/PHP etc), so you can host with them if you don’t want to host it yourself. They have some interesting value-adds around their hosting, like support for different environments (dev/test/etc with different parameters, such as extra debugging), which you can easily spin up and turn off again.
  • ActiveJS – ActiveJS is a Jaxer-powered framework modelled on Rails and close to beta. The most mature part is ActiveRecordJS (ORM), but there is now ActiveView and ActiveController as well. As with Rails, you can use them independently. This is exactly what I’ve been hoping for and it sounds like it’s mature enough to start hacking against. Kevin is realistic about uptake and expects it to be a long process.
  • Microformats – Kevin directed me to this Jaxer project which filters parts of a form depending on who is viewing it. You just use a different class name to indicate who can view what, and the server chooses which parts to output (see the second sketch after this list). This got me thinking about microformats and how Jaxer could help render them – with a single library, dynamically in the browser and statically from the server. There are a number of microformat libraries which might already be useful sitting in a Jaxer server.
  • TiddlyWiki – Taking TiddlyWiki-based apps and using Jaxer to render them statically from the server. Since Jaxer supports DOM manipulation, it should certainly be possible. Kevin said accessibility is certainly an argument for Jaxer, though there’s no significant demo to date, so a TiddlyWiki project would make a good one. His intuition is that it will be a 90-10: 90% straightforward, with the remaining 10% causing some complications. It would definitely make an interesting project to do a simple version, e.g. initially just getting Jaxer to output each Tiddler, showing title and wiki markup.
  • Demo app – Aptana TV is implemented in Jaxer+ActiveJS and there are plans to open source the implementation and make it Jaxer’s “Pet Store”.
  • Interoperating – Jaxer can interoperate with other languages using browser extensions/plugins, since the server is basically Firefox; these are not extensions users have to install, but act as enhancements to the server. Just as a user can install a plugin in Firefox so web apps can make calls to Java, for example, a developer can patch the server with the same plugin (or a similar one with the same API, anyway?). Kevin said most of these kinds of components are XPCOM, but I guess Jaxer should accept regular Firefox extensions and even Greasemonkey scripts. There’s a DotNet COM example. More simply, you can call out to other languages using the OS command line – system commands have always been possible from the Jaxer server.
  • FF bugs – Jaxer shares Firefox’s bug set and fixes – which makes sense when you think about it!
  • Performance – Likewise, Jaxer benefits from all the stuff you read about sky-rocketing Javascript performance in the browser. Jaxer will inherit the performance boost from TraceMonkey, which Mozilla says promises 20x to 40x speed improvements for JS – silly-fast.
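
To make the server-side DOM point concrete, here is a minimal sketch of how a Jaxer page works, as I understand it. The data and function names are made up, but the runat attributes are the real mechanism: runat="server" code runs against the page’s DOM on the server before the HTML is sent, and a runat="server-proxy" function becomes callable from the browser as a generated proxy.

    <html>
      <body>
        <ul id="news"></ul>

        <script runat="server">
          // Runs on the server: ordinary DOM manipulation, but the result is
          // flattened to plain HTML before it ever reaches the browser.
          var items = ["Jaxer ships with Apache", "ActiveJS nears beta"]; // stand-in data
          var list = document.getElementById("news");
          for (var i = 0; i < items.length; i++) {
            var li = document.createElement("li");
            li.appendChild(document.createTextNode(items[i]));
            list.appendChild(li);
          }
        </script>

        <script runat="server-proxy">
          // Exposed to the client as a proxy call; the body only ever
          // executes on the server.
          function saveComment(text) {
            return "thanks for: " + text;
          }
        </script>

        <script>
          // Client-side code can then call saveComment(...) as if it were local.
        </script>
      </body>
    </html>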
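
And here is a rough sketch of the filter-by-class-name idea from the microformats point above. The role lookup (getViewerRole) is hypothetical, but the interesting part is that restricted fields are removed on the server, so they never reach the browser at all. The same kind of server-side DOM surgery is what a TiddlyWiki renderer would do, just reading tiddlers out of the store instead of pruning form fields.

    <form>
      <input class="viewer-any" name="title">
      <input class="viewer-admin" name="internalNotes">

      <script runat="server">
        // Hypothetical helper: returns e.g. "admin" or "member" for the current request.
        var role = getViewerRole();
        var inputs = document.getElementsByTagName("input");
        for (var i = inputs.length - 1; i >= 0; i--) {
          var cls = inputs[i].className;
          if (cls.indexOf("viewer-") === 0 &&
              cls !== "viewer-any" && cls !== "viewer-" + role) {
            inputs[i].parentNode.removeChild(inputs[i]); // stripped before output
          }
        }
      </script>
    </form>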

More Adventures in Firefox Extension Land: Submitting to Mozilla

I recently explained how I created my first Greasemonkey script as an experiment, then converted it to a standalone Firefox plugin/extension/add-on using an automated tool. Official homepage here. The experiment continues, as I subsequently submitted it to Mozilla, and it’s now been approved as an official plugin.

I’ve been curious about the process people go through to submit an official extension, so here’s the rundown.

1. Upload XPI File

Standard upload of the XPI file, which contains all the plugin contents.
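
For the curious, an XPI is just a renamed zip archive; for an extension like this one, the contents look roughly like the following (file names here are illustrative):

    myextension.xpi            (a zip file with an .xpi extension)
      install.rdf              (ID, name, version, supported Firefox versions)
      chrome.manifest          (registers the chrome content below)
      chrome/content/
        overlay.js             (the converted Greasemonkey script ends up here)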

2. Provide Metadata

… such as my name, the extension’s homepage, and so on. (Why isn’t this info just packaged inside the XPI file? Actually, some of it is already in that file anyway.)

One interesting thing here is that I get to choose the category it goes in (although I guess the approver could change it later on). Note also that there’s no help about the categories on this page, so I think there’s a large risk that developers will be inconsistent. Not bashing Moz about this, but I do think they could make it easier for end-users to locate plugins if they encouraged consistent labelling – I’ve sometimes trawled through 5 or 6 pages trying to find a plugin within a particular category.

3. Wait for Approval

An official Mozilla person must now look it over, which is a reassuring security measure. (Complain all you want about Ajax and its tabloidistically alleged “security holes”, but the open-source nature of web architecture is extremely compelling from a security perspective.)

After about thirty hours, I received an email that the plugin had been approved. And now it’s available from Mozilla, ID 2816.

If you told me I was cluttering up the space of extensions with this trivial 7KB contribution, I’d say you’re probably right, but then what’s the harm in lots of extensions, as long as it’s easy to locate the one you want? (UserScripts.org, the GM repo, does that to great effect with the power of tagging, and a wiki would make it even more useful.) Not everyone wants to set up GM and maintain the scripts, even those who are savvy enough to know about extensions.

In summary, Greasemonkey and associated tools make it easy, bordering on trivial, for anyone with basic Ajax knowledge to create a standalone Firefox extension. Of course, it will be functionally limited to GM-type page munging, but it’s still a lot easier than I had imagined.

Including modern words in modern dictionaries

What manner of 19th century public domain dictionaries are packaged with 21st century software? For Montgomery Burns, these word lists would be just spifflicastic, but maybe not for the average citizen.

I just installed Thunderbird 1.5RC1, keen to check out the spell-check. Neither “blog” nor “podcast” was recognised as a valid word, despite one of the other new features being RSS and podcasting support! Not pointing the finger at Thunderbird, since most dictionaries in /usr/dict, ispell, and the various Offices seem to be equally ancient.

Many web-related terms turn out to be unsupported:

  • blog.
  • rss.
  • podcast.
  • www.
  • weblog.
  • mozilla.
  • thunderbird.
  • firefox.
  • netscape.
  • perl.
  • usenet.
  • cgi.
  • http.
  • dotcom.
  • flickr.
  • technorati.
  • google.
  • ipod.

Further curious (bordering on obsessive), I then tried the OED’s top ten new entries for 2001, specifically the single-word ones. All but one of these fail too.

  • Doh!
  • Balti.
  • Ladette.
  • Mullet. (Passes spell-check.)
  • Alcopop.

Yep, forget about quoting the Simpsons and partying with Red Bull. At the end of the day, it’s the Mullet that commands your respect.

I’m sure Google Labs could run some algorithm against the web to produce a more useful spell checker. It would obviously find many new words that should be added, but furthermore, it would find obscure words that should be removed. And it could probably go a lot further too, and build a very clever grammar-checking algorithm. But for now, there’s plenty of mileage to be gained from a simple manual list.
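
For what it’s worth, the core of that idea is simple to sketch: count how often each token appears in a big pile of web text, and let anything above some frequency threshold into the dictionary. The corpus and threshold below are placeholders, but it shows why “blog” and “podcast” would sail in while one-off typos stay out.

    // Back-of-the-envelope corpus-driven word list (a sketch, not a real product):
    // any token frequent enough in a large body of web text counts as a word.
    function buildDictionary(corpus, minCount) {
      var counts = {};
      var tokens = corpus.toLowerCase().match(/[a-z]+/g) || [];
      for (var i = 0; i < tokens.length; i++) {
        counts[tokens[i]] = (counts[tokens[i]] || 0) + 1;
      }
      var words = [];
      for (var word in counts) {
        if (counts[word] >= minCount) {
          words.push(word); // "blog", "podcast", "flickr"... all make the cut
        }
      }
      return words.sort();
    }

    // e.g. buildDictionary(hugePileOfWebPages, 500)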