Teleporter – From Greasemonkey to Self-Contained Extension

My previous post outlined Domain Teleporter, a first attempt at a GM script. Continuing the experimentation (as I have a completely different real example in mind), I wanted to see how easy it was to create a self-contained FF extension from the GM script … I figure there’s a significant population who have FF and can’t or don’t want to install Greasemonkey, even if this leads to a functionally trivial extension.

Converting a Greasemonkey script to a Firefox extension was stupidly simple, thanks to an online tool, Anthony Lieuallen’s User Script Compiler. I’d read about it a while ago, and assumed there must be some catch, but the conversion was indeed child’s play … enter your name and homepage, cut-and-paste the script, and submit. The XPI pops out immediately as a binary download.

FWIW, the extension is published here; I used .htaccess in the distro directory to ensure it’s served as an installable download (so it can be installed immediately, as opposed to saved locally), as follows:

AddType application/x-xpinstall xpi

The Richer Plugin pattern talks about the importance of FF extensions and the like.

BTW, the extension is Amazon Teleporter, not Domain Teleporter, because there’s no longer any way to change the default “Amazon*” domains it applies to – GM supports that sort of thing as a built-in feature, whereas an extension would have to implement it manually … a lot more work, requiring a custom-made options dialog and a persistence mechanism.

Domain Teleporter – Greasemonkey Script

Update: As an experiment, I converted this into a Firefox extension (Blog Article, Extension homepage).

DomainTeleporter, my first Greasemonkey script, is related to this blog post from last April:

If you shop at Amazon.co.uk, you’re often out of luck when it comes to reader comments. So I often find myself editing the URL, switching back and forth between .co.uk and .com. Luckily, this transatlantic adventure usually works out, as the crazy Amazon IDs match.

Domain Teleporter flips the location between .com and .co.uk, retaining the rest of the URL. It would be nice to make it more generic – switching between arbitrary (quasi-)TLDs – but that would require more regexp parsing than was necessary here. Incidentally, I’d like to see a JS library that munges URLs – extracting the domain, the path, and so on.
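
For the curious, the core flip can be done with a simple string swap on location.href – a minimal sketch (not the actual script; it assumes the URL contains exactly one of the two suffixes):

```javascript
// Hypothetical sketch of the .com/.co.uk flip, keeping the rest of the URL intact.
var href = location.href;
if (href.indexOf('.co.uk/') !== -1) {
  location.href = href.replace('.co.uk/', '.com/');
} else if (href.indexOf('.com/') !== -1) {
  location.href = href.replace('.com/', '.co.uk/');
}
```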

The script is configured to only run on Amazon, but you might find it useful with other sites too, in which case, change the applicable domains using the GM dialog.

Writing the GM script was fairly straightforward; I began by copying Mark Pilgrim’s “Hello World”. It’s standard JS for the most part, but there was one gotcha: events don’t work with the usual, portable solution of assigning to “control.onclick” – you get a “component not found” error. You must use addEventListener() instead.
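
To illustrate the gotcha, here’s a minimal sketch (the element ID and handler are made up for the example):

```javascript
// Inside a Greasemonkey user script:
function teleport() {
  // hypothetical handler – flip the domain, say
  alert('teleporting…');
}

var button = document.getElementById('teleport-button');   // hypothetical element

// The portable idiom tends to fail in GM's sandbox ("component not found"):
// button.onclick = teleport;

// Use addEventListener instead:
button.addEventListener('click', teleport, false);
```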

What browsers do developers use?

Jeff Atwood points out that on w3schools, a huge majority of developers are still using IE.

About 60% IE and 25% Firefox. Amazingly, 2.3% are still using Mozilla (why?). It’s unfortunate that 60% of developers are still using IE … but the reality is many don’t have much choice – they’ll be working in corporate-standard Windows environments, where IE already exists and Firefox must be installed, if they have permissions to do so. Furthermore, Firefox won’t necessarily work due to firewalls and proxies. And then there are the developers who won’t install Firefox because they haven’t heard of it, are MS fanboys (yes, they exist), or will get into trouble for doing so.

All of which means they’ll miss out on some of the best development tools around … including the insanely useful and popular Firebug.

FWIW AjaxPatterns must have a savvier audience ;-).

  • IE 46.8%
  • FF 36.2%
  • Moz 4.1%
  • Opera 2%
  • Safari 1.8%
  • iTunes … 0.4%
  • Konq 0.3%
  • NS 0.1%

Incidentally, were you paying attention? Let me repeat that:

  • iTunes … 0.4%

🙂

That’s because one of the most popular hits is http://ajaxpatterns.org/feed/future-of-web-apps.xml, the bootleg feed I set up to serve the Future Of Web Apps conference (which only released individual MP3s). It’s popular not because people keep downloading it, but because so many people subscribed to it to get all the MP3s in one hit, then never unsubscribed. When Russell Beattie wanted people to unsubscribe from his blog, he started posting annoying animated GIFs. I guess the audio equivalent would be to start posting chalk-screeching noises, or better still, blast out a dozen “Ice, Ice, Baby” enclosures.

Wanted: Massive Local Storage

Local storage – beyond 2KB cookies – is now a step closer with the latest Firefox effort. You get a local storage API like this:

```javascript
sessionStorage.setItem(..)
globalStorage.namedItem(domain).setItem(..)
```

The fantastic thing is that Brad Neuberg’s Dojo work means we can code independently of the local storage mechanism. Since IE also has local storage, as does Flash, most bases are covered, or soon will be: anyone with IE, Firefox, or Flash will have local storage. (Incidentally, we had a discussion in the comments on Ajaxian about the possibility of S3 and other remote bindings as well, which I’m guessing Brad will implement at some point.)
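
As a rough illustration of that kind of abstraction (this is a hypothetical wrapper, not the actual Dojo Storage API), the application codes against put/get and a small factory decides which back-end to use:

```javascript
// Hypothetical storage wrapper – not the real Dojo Storage API.
// Uses Firefox's globalStorage when available, falling back to sessionStorage;
// an IE- or Flash-based back-end could slot in the same way.
function createStore(domain) {
  if (window.globalStorage) {
    var store = globalStorage[domain];
    return {
      put: function (key, value) { store.setItem(key, value); },
      get: function (key) { return store.getItem(key); }
    };
  }
  return {
    put: function (key, value) { sessionStorage.setItem(key, value); },
    get: function (key) { return sessionStorage.getItem(key); }
  };
}

// Usage – application code never mentions the underlying mechanism:
var store = createStore(location.hostname);
store.put('lastVisit', new Date().toString());
```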

But what I’d like to know, what I’d really really like to know, are the limits for these various techniques (I think Flash is 10MB per domain, right?)

As I’ve said before, there are valuable applications of massive storage – hundreds of gigs, e.g. media players that store data server-side, but cache content locally to cut down on the need for streaming from the server. Hopefully, these storage technologies won’t try to second-guess the imagination of web developers by setting a dumb arbitrary limit that “no-one would ever need more than”, like 100MB or 1GB or whatever.

If there’s room on my hard drive, and I trust the domain, let me store as much as I please.

Who Needs These Browser Warnings?

Setting up a new Windows PC today and not loving the browser warnings.

The messages, as I recall them: “You are about to submit the form. It’s dangerous.”, “You’re going to leave the page. It’s dangerous.”, “This page is encrypted. It’s dangerous.”, “This page is not encrypted. It’s dangerous.”, “This is H2O. It’s dangerous.”

So my question is, who’s benefitting? At this stage, the majority of internet users have been submitting forms and using encrypted pages for 5+ years. And if they’re a newbie, is it any more useful to them? (Hint: No.)

The only thing it does is add overhead to setting up a new system. You have to stop and think, “Hmmm, is this a negative, double-negative, or triple-negative question? Ah, okay, I think I’ll leave the checkbox unchecked so as to imply I don’t want to not submit the form. And also, I’ll leave the ‘Don’t show me again’ box unchecked so it doesn’t not show me again.”

Summary:

  • Only provide dialog boxes that are useful, otherwise users will ignore them all.
  • Avoid not excluding negative phrasing in your options. Even if the most likely value is negative, you should still phrase it as a positive. (“Remember this” as opposed to “Forget this” or “Don’t remember this.”)

HTTP Streaming: An Alternative to Polling the Server

If Ajax apps are to be rich, there must be a way for the server to pass new information to the browser. For example, new stock quotes or an instant message someone else just sent you. But the browser’s not a server, so the server can’t initiate an HTTP connection to alert the browser. The standard way to deal with this dilemma is Periodic Refresh, i.e. having the browser poll the server every few seconds. But that’s not the only solution.
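
For contrast, Periodic Refresh amounts to a timer-driven poll – something like this hypothetical sketch (the endpoint and element names are made up):

```javascript
// Hypothetical Periodic Refresh: ask the server for fresh data every few seconds.
setInterval(function () {
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      document.getElementById('quotes').innerHTML = xhr.responseText;  // hypothetical element
    }
  };
  xhr.open('GET', '/latest-quotes', true);   // hypothetical endpoint
  xhr.send(null);
}, 5000);
```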

The recent podcast on Web Remoting includes a discussion of the HTTP Streaming pattern. By continuing to stream information from the server, without closing the connection, you can keep the browser content fresh. I wasn’t aware that it was being used much on the public web, since it can be costly, but I recently discovered JotLive (which is only semi-public since it requires registration) is indeed using it. Do you know any other examples?

Ajaxian.com’s interview with Abe Fettig of JotLive:

How do you handle the “live” part? Polling?

We’re using a (very slightly modified) version of LivePage, which Donovan Preston wrote as part of Nevow, a Python library for building web applications using the Twisted networking framework (which I just wrote a book on: Twisted Network Programming Essentials). LivePage doesn’t use polling. Instead, it uses a clever technique where each browser keeps an open XMLHTTP request to the server at all times, opening a new connection each time the old one closes. That way every client viewing the page is constantly waiting for a response from the server. When the server wants to send a message to a client, it uses the currently open request. So there’s no waiting.

A few (edited) extracts from the HTTP Streaming pattern:

Alternative Pattern: Periodic Refresh is an obvious alternative to HTTP Streaming. It fakes a long-lived connection by frequently polling the server. Generally, Periodic Refresh is more scalable and easier to implement in a portable, robust manner. However, HTTP Streaming can deliver more timely data, so consider it for systems, such as intranets, where there are fewer simultaneous users, you have some control over the infrastructure, and each connection carries a relatively high value.

Refactoring Illustration: The Basic Wiki Demo, which uses Periodic Refresh, has been refactored to use [HTTP Streaming](http://ajaxify.com/run/wiki/streaming).

Solution:
Stream server data in the response of a long-lived HTTP connection. Most web services do some processing, send back a response, and immediately exit. But in this pattern, they keep the connection open by running a long loop. The server script uses event registration or some other technique to detect any state changes. As soon as a state change occurs, it pushes new data to the outgoing stream and flushes it, but doesn’t actually close it. Meanwhile, the browser must ensure the user-interface reflects the new data. This pattern discusses a couple of techniques for Streaming HTTP, which I refer to as “Page Streaming” and “Service Streaming”.
“Page Streaming” involves streaming the original page response. Here, the server immediately outputs an initial page and flushes the stream, but keeps it open. It then proceeds to alter it over time by outputting embedded scripts that manipulate the DOM. The browser’s still officially writing the initial page out, so when it encounters a complete <script> tag, it will execute the script immediately. A simple demo is available at http://ajaxify.com/run/streaming/.
…(illustration and problems)…
“Service Streaming” is a step towards solving these problems, though it doesn’t work on all browsers. The technique relies on XMLHttpRequest Call (or a similar remoting technology like IFrame_Call). This time, it’s an XMLHttpRequest connection that’s long-lived, instead of the initial page load. There’s more flexibility regarding length and frequency of connections. You could load the page normally, then start streaming for thirty seconds when the user clicks a button. Or you could start streaming once the page is loaded, and keep resetting the connection every thirty seconds. Having a range of options helps immeasurably, given that HTTP Streaming is constrained by the capabilities of the server, the browsers, and the network.
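
To make the “Page Streaming” approach above concrete, here’s a rough server-side sketch, written against a Node-style HTTP server purely for illustration (the original demo used a server script that printed and flushed in a loop; the element name and timings are made up):

```javascript
// Illustrative Page Streaming server: send the initial page, keep the
// response open, and periodically push <script> blocks that update the DOM.
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.write('<html><body><div id="status">waiting…</div>');   // initial page, not yet closed

  var n = 0;
  var timer = setInterval(function () {
    n += 1;
    // The browser executes each complete <script> block as soon as it arrives.
    res.write('<script>document.getElementById("status").innerHTML = "update ' + n + '";</script>');
    if (n === 10) {                        // eventually finish the page
      clearInterval(timer);
      res.end('</body></html>');
    }
  }, 2000);
}).listen(8080);
```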

Experiments suggest that the Page Streaming technique does work on both IE and Firefox ([1]), but Service Streaming only works on Firefox, whether XMLHTTPRequest ([2]) or IFrame ([3]) is used. In both cases, IE suppresses the response until it’s complete. You could claim that’s either a bug or a feature, but either way, it works against HTTP Streaming.
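
And a rough client-side sketch of “Service Streaming”: a single long-lived XMLHttpRequest whose responseText is read incrementally as it grows (readyState 3) – exactly the part IE suppresses. The endpoint and message format here are hypothetical:

```javascript
// Hypothetical Service Streaming client: read new data out of responseText
// as it arrives, rather than waiting for the response to complete.
function handleMessages(text) {
  text.split('\n').forEach(function (line) {
    if (line) { console.log('server says: ' + line); }
  });
}

function startStream() {
  var xhr = new XMLHttpRequest();
  var seen = 0;                                  // how much of responseText we've handled

  xhr.onreadystatechange = function () {
    if (xhr.readyState === 3 || xhr.readyState === 4) {
      var chunk = xhr.responseText.substring(seen);
      seen = xhr.responseText.length;
      if (chunk) { handleMessages(chunk); }
    }
    if (xhr.readyState === 4) {
      setTimeout(startStream, 0);                // reconnect and keep streaming
    }
  };

  xhr.open('GET', '/stream', true);              // hypothetical streaming endpoint
  xhr.send(null);
}
```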

Donovan Preston explains the technique he uses in Nevow, which overcomes this problem:

When the main page loads, an XHR (XMLHttpRequest) makes an “output conduit” request. If the server has collected any events between the main page rendering and the output conduit request rendering, it sends them immediately. If it has not, it waits until an event arrives and sends it over the output conduit. Any event from the server to the client causes the server to close the output conduit request. Any time the server closes the output conduit request, the client immediately reopens a new one. If the server hasn’t received an event for the client in 30 seconds, it sends a noop (the javascript “null”) and closes the request.
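
In code, the reconnect-on-close “output conduit” Preston describes might look roughly like this on the client (the endpoint and event handling are hypothetical, not Nevow’s actual implementation):

```javascript
// Hypothetical long-poll "output conduit": always keep one request waiting,
// and reopen it as soon as the server closes it.
function handleServerEvent(data) {
  console.log('event from server: ' + data);
}

function openOutputConduit() {
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {
      if (xhr.status === 200 && xhr.responseText !== 'null') {   // "null" is the server's no-op
        handleServerEvent(xhr.responseText);
      }
      openOutputConduit();                                       // reopen immediately
    }
  };
  xhr.open('GET', '/output-conduit', true);                      // hypothetical endpoint
  xhr.send(null);
}
```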

Flock: A Tribute to the Unusability of Firefox Extensions?

You’ve probably noticed the buzz around Flock, a browser built on Firefox. Information’s limited, but it seems to pick up on the “social” buzzword – tagging, annotations, RSS etc. Thing is, all of these things are possible in Firefox too via extensions. But extensions haven’t really taken off, and the reason is, quite frankly, poor usability. Firefox is a great browser with a plugin architecture that’s pretty good too. The problem is at the last mile: helping users install and manage their extensions.

Sure, developers will install programming extensions and uber-geeks will set up mouse gestures, but it won’t get beyond that in its current state.

So here’s how a plugin architecture should work:

  • Installing plugins should be dead simple. Pull up a list of plugins and click to install … that’s it! Come on, you own the whole browser and the server too! Make them work together. The current task is something like “visit extensionroom, search for the extension, jump into the extension page, click Install, notice the security dialog box about this subdomain if you’re lucky, allow the subdomain, click Install again if you’re savvy enough to realise what just happened, watch it download, click Install, restart the browser if you’re still interested”. If the version’s incompatible, you’ll only find out at the end of all that (except the restart). True, the version number’s shown when you install it, but that’s really making the user do something the computer should do immediately. Also, most users don’t remember what version they’re running, and I’ve even seen extensions with the version labeled as “Deer Park”.
  • The standard distribution should have a set of extensions pre-installed. Just because you’re using a plugin architecture doesn’t mean you have to ship with a bare-bones distribution. I know there are potentially licensing issues, but shipping with plugins seems to work OK for Eclipse and similarly for the Linux distributions. Satisfy the slashdot crowd with a minimal distribution too, by all means, but mainstream users would rather not spend three hours working out how to install extensions, finding out which are popular/useful, discovering incompatibilities between different extensions, etc. There are plenty of extensions that enhance basic functionality – e.g. All-In-One Sidebar, Adblock, the improved search bars, the improved RSS aggregators – why not take advantage of them? In addition, there’s great scope for specialised distributions, e.g. Developer, Socialite. I know anyone could probably make them and distribute them, my main point here is that Firefox itself should at least distribute a more powerful default distro.
  • Don’t rely on third-party extensions for fundamental functionality. Tabbed browsing works OK in the basic distribution, but the Tabbrowser extension gives it much more power – certainly it brings it up to Opera standard. Yet, it’s been unsupported for over a year, confusing, and buggy at times. Something like this is too important to leave to a third-party. Develop it as an extension if it’s architecturally convenient to do so, but make it mandatory, so that other extensions play nice with it.
  • Provide update notifications. Indicate when updates are present and offer to do the update automatically.

Greasemonkey took off so quickly because it’s so easy to develop, modify, and install GM scripts. Hopefully, the example set by Greasemonkey and the imminent release of Flock will offer some lessons, and help make Firefox an even greater browser.

Greasing Greasemonkey Scripts

Tweaking a Greasemonkey script is easy. It’s just a single file, so you download the file locally, edit it, and install it the same way you’d install any other script.

I did this because I needed a quick fix for the super-helpful XMLHttpRequest Debugging script. Sometimes the console has a little trouble with positioning – using it with Google Maps caused it to sit behind the upper portion of the page due to layering. So I made two quick fixes – increased the z-index in the embedded stylesheet so it would appear in front and also changed the default coordinates (I’d adjusted them with “about:config”, but somehow that wasn’t picked up.)

All in all, I was able to tweak the script, not knowing anything about the GM API, and have it running in my browser in about 10 minutes. Had it been a standard Firefox extension, I would have been out of luck. I’d presumably have to download the original source, set up a dev/testing environment, and be able to package it all up including meta-info. Furthermore, I’d have to restart Firefox to test it, unlike Greasemonkey which works straight away. I’ve never tried all that with extensions, but that’s my perception from a little looking around.

I’m nowhere near as bullish about Greasemonkey as some people, at least in the medium term, because I think the whole Firefox extension mechanism is way too complex for most end-users, let alone the idea that you have to install Greasemonkey scripts on top of one of those extensions. But in any event, once you have the Greasemonkey extension installed, it’s a cinch to remould a script you come across.