WebWait in the Wild

I haven’t blogged about WebWait since launching it almost two years ago. But it has quietly grown into a fairly well-known site among certain communities, in various languages. What I like seeing is that people are actually getting value from it…it’s not just a novelty, but a tool. Here are some examples.

A conversation about blog timings … and soup recipes:

At December 7, 2008 6:24 PM, Blogger Arfi Binsted said… I love clear soup like this. Kalyn, for some reason, almost 2 months, my browser goes so slowly when I come to your blog. I am not sure where it comes from. Today, I try to open it from the link on my blogroll, it works! Just as well it is a click away from a miracle just to happen for Barbara :) hugs.

At December 7, 2008 6:41 PM, Blogger Kalyn said… Maris, this is one of the best parts of blogging in my opinion. Arfi, thanks for telling me. I don’t know what’s going on because I use Webwait.com and check and my blog always seems fine there. I’m going to remember to check back with you and see if it is still a problem. I do hope you’re right that it’s a good omen!

A softly-softly Twitter encouragement for the TechCrunch boys:

liking http://webwait.com/ – http://fav.or.it took 3.04s – (pretty quick!) – @techcrunch took 13s!! – sort it out boys

A tech journalist compares browser speeds:

I used WebWait.com to test how quickly Chrome 0.2, Firefox 3, Safari 3.1, and Internet Explorer 7 loaded the InformationWeek.com home page. The results for three page loads averaged were: Firefox (5.21s), Safari (6.34s), Chrome (6.48s), Internet Explorer (8.90s).

A reviewer notes how much caching helps and I discover in the process I can almost grok Italian geek-speak:

La seconda volta grazie alla cache WebWait ci ha messo 5.17. Un tempo discreto direi. (“The second time, thanks to the cache, WebWait took 5.17s. A decent time, I’d say.”)

A blogger produces a “webwait research report” – schweet!:

Ko nih, tanya sket pun tak boleh. Oklaa aku tunggu ko buat positioning tuh. Sambil2 tu jom buat research… (“Hey, can’t even ask a little. Fine, I’ll wait for you to do that positioning. Meanwhile, let’s do some research…”)

:: WebWait Research Report ::

Site:

  • farisfakri.com: average load time after 15 runs: 0.11s
  • telescopictext.com: average load time after 15 runs: 0.12s
  • google.com: average load time after 15 runs: 1.11s

Blog:

  • ladycoder.com: average load time after 15 runs: 5.29s
  • blog.farisfakri.com: average load time after 15 runs: 8.12s
  • soleh.net: average load time after 15 runs: 9.02s
  • noktahhitam.com: average load time after 15 runs: 10.07s
  • life4hire.berceloteh.com: average load time after 15 runs: 11.99s
  • kujie2.com: average load time after 15 runs: 14.10s
  • berceloteh.com: average load time after 15 runs: 14.52s

Terima kasih Yam, kerana memberi aku kerja. Laporan kajian aku mendapati feedjit, nuffnang, mybloglog, dsb adalah antara yang menjadi punca utama kelembapan loading sesebuah blog. (“Thank you, Yam, for giving me work. My research report finds that feedjit, nuffnang, mybloglog, etc. are among the main causes of slow blog loading.” MM – Google Translate amusingly rendered this last part as “the main source of humidity loading a blog.”)

WebWait is just one way to get an impression of speed, as the FAQ explains. And in cases like those above, it can give people a handy snapshot without relying on any browser-specific plugins.

People also love screencapping their WebWait results, as a Google Images search illustrates. It would be nice to somehow make a gallery of those. Anyway, I’ve got some time off coming up, and one of my projects will be to make some long-overdue updates to the site, while ensuring it stays dead simple to use.

Requirements Analysis: Better than a Dilbert Cartoon

These true anecdotes just about sum it up:

7. Client can’t articulate a single desired user goal. He also can’t articulate a business strategy, an online strategy, a reason for the site’s existence, or a goal or metric for improving the website. In spite of all that, client has designed his own heavily detailed wireframes.

10. On the eve of delivery, the previously uninvolved “vision guy” sends drawings of his idea of what the web layout should look like. These drawings have nothing to do with the user research you conducted, nor with the approved recommendations, nor with the approved wireframes, nor with the approved final design, nor with the approved final additional page layouts, nor with the approved HTML templates that you are now integrating into the CMS.

18. As approved, stripped-down “social networking web application” site is about to ship, a previously uninvolved marketing guy starts telling you, your client, and your client’s boss that the minimalist look “doesn’t knock me out.” A discussion of what the site’s 18-year-old users want, backed by research, does not dent the determination of the 52-year-old marketing guy to demand a rethink of the approved design to be more appealing to his aesthetic sensibility.

TiddlyWeb user authentication

I’ve been getting to grips with TiddlyWeb and authentication lately.

The following plugin code sets the username in the TiddlyWiki client to match the username that was presented to the server. It’s essentially a one-liner which delegates to Quirksmode’s cookie-handling library. This all assumes TiddlyWeb is using the cookie-based authentication challenger, as there’s no standard for cookie names.

```javascript
// Set the client-side username from the "tiddlyweb_user" cookie, whose
// value looks like "username:hash" (the leading character is stripped,
// as the value arrives quoted). Guard against the cookie being absent:
var userCookie = readCookie("tiddlyweb_user");
if (userCookie) config.options.txtUserName = userCookie.split(":")[0].substr(1);

// Cookie-handling helper from http://www.quirksmode.org/js/cookies.html
function readCookie(name) {
  var nameEQ = name + "=";
  var ca = document.cookie.split(';');
  for (var i = 0; i < ca.length; i++) {
    var c = ca[i];
    while (c.charAt(0) == ' ') c = c.substring(1, c.length);
    if (c.indexOf(nameEQ) == 0) return c.substring(nameEQ.length, c.length);
  }
  return null;
}
```

BTW this isn’t essential. When a user uploads something, TiddlyWeb automatically sets the modifier to the server-side username, so the next time the user sees the tiddler, it will show the right name anyway. But having the right username in the client makes things right from the start, and it feels cleaner to upload the tiddler with the correct username data, even if it’s ignored. I could imagine future versions/plugins on the server allowing any username to be uploaded, with some algorithm to check whether it’s allowed (e.g. if admin, allow all usernames).

Chasing super

A lot of discussion around object-oriented Javascript involves finding cunning methods to get a super reference: sometimes a reference to the superclass, sometimes the super instance, sometimes the super version of the current method. The latest installment is an Ajaxian posting this week on work by Erik Arvidsson. It makes for an interesting enough intellectual exercise, and I have full respect for anyone who can pull these things off. But as Erik himself has commented, is it really worth it? (“Once again I find that adding code to make things simpler in JS is unsuccessful. The costs for adding better ways to do things don’t pay for itself. (Except for the inherits function of course.)”)

In the case of super…how useful is super – any of these kinds of super – in practice? More generally, most discussions of OO and inheritance in Javascript focus on mimicking the features of typical OO languages rather than on how those features are actually used. We’re thinking more about the design patterns of implementation – how to “do OO” – than the much more important design patterns of application.

super isn’t very useful for two reasons:

  • Inheritance is overrated.

    Inheritance is often considered the killer app of OO. In fact, the killer app is encapsulation – combining data and behaviour in a single model. Inheritance is a great feature, but it’s icing on the cake compared to the magnificence of encapsulation. Also, as the uber-uber GoF patterns book emphasised, delegation often trumps inheritance. In enterprise Java-land, this has now been firmly entrenched by the dependency injection “revolution”. The idea is basically small dumb things, loosely connected – just like the Unix philosophy, and that of Web 2.0. You have a small Strategy class that does just one thing, and you can inject it into many and varied classes. You just wire up the dependencies, outside of both classes. What you don’t have is a whopping big monolithic class with loads of subclasses that each have their own subclasses doing different things. The delegation model relies on contract inheritance – Interfaces, in Java terms – but not on behaviour inheritance. Inheritance is often confused with design-by-contract, especially in languages like C++ which don’t have an explicit Interface construct.

    The popularity of tools supporting delegation and dependency injection, Spring in particular, means many developers are learning this principle by sheer osmosis if not explicitly. Likewise, the duck typing of languages like Ruby and Python – and, notably for our purposes, Javascript – means you can do this stuff well without any special frameworks. Furthermore, even with Ajax starting to reach some level of maturity, most Ajax apps are orders of magnitude smaller than those enterprise apps whose complexity is a key motivation for inheritance. Inheritance? Good, very good…but overrated.

  • super is a code smell.

    For those occasions when inheritance is appropriate, super still remains inappropriate in most cases. If you take a gander at the aforementioned GoF patterns, you’ll see that most inheritance-related patterns rely on the superclass calling particular methods on the subclass (usually protected methods). These are methods the superclass has explicitly defined and knows about. As long as the protected method fulfills its contract correctly, everything works nicely. There’s no need to call super.
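To make that second point concrete, here is a minimal sketch in plain prototype-style Javascript (the Report/SalesReport names are hypothetical) of the GoF-style arrangement described above: the superclass drives the algorithm and calls a hook method the subclass supplies, so the subclass never needs super.

```javascript
// The superclass owns the overall flow and calls down to a hook
// method it has explicitly defined; subclasses fill in the hook.
function Report() {}
Report.prototype.render = function() {
  return "<header/>" + this.renderBody() + "<footer/>";
};
// The "protected" hook -- subclasses must override this.
Report.prototype.renderBody = function() {
  throw new Error("subclass must define renderBody");
};

function SalesReport() {}
SalesReport.prototype = new Report();           // behaviour inheritance
SalesReport.prototype.renderBody = function() { // fulfil the hook's contract
  return "<sales/>";
};

// As long as renderBody fulfils its contract, everything works
// without the subclass ever referencing super.
new SalesReport().render(); // "<header/><sales/><footer/>"
```

The flow of control only ever goes downwards, from superclass to subclass hook, which is why super never comes up.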

The more fundamental point here? Javascript ain’t Java! Every language is unique.

BashPodder mod – add podcasts to iTunes

As a podcatcher (among other things), iTunes sucks. Badly. iPodder is nicer, mainly because I can keep my follow list in the cloud at PodNova. However, it (or its combination with PodNova) often ends up downloading gigs of old stuff on certain feeds. Worse, it consumes obscene quantities of memory and CPU, with a UI so unresponsive it’s unusable: delays of 30 seconds or more for each gesture. This is on an early MacBook.

Anyway, I decided to rectify the situation and go back to bashpodder, a tiny shell script which proves the point that a podcatcher need not be grandiose, nor a resource gobbler. It’s also cool as it’s easily customisable for anyone with some bash-fu. I modded it a few years back to keep my follow list in the cloud. (I believe clouds were called “servers” back then.)

I’ve recently modded BashPodder to add files to iTunes. Yes, I still like iTunes, and I definitely like the i* players, which are, for most intents and purposes, constrained to the universe of iTunes. But as a podcatcher, it’s not cool. The interface for exploring podcasts is cumbersome, and the results – the downloaded podcasts – are not handled with care. For example, if you download podcasts with iTunes, it marks them out specially as podcasts, and there’s no way to, say, delete all podcasts older than a week. If they’re normal tracks added from an external catcher, they’re just regular MP3s and you can do what you like with them. And with iTunes alone, you can’t keep your follow list in the cloud!
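To illustrate the contrast: because an external catcher leaves podcasts as ordinary files, the week-old cleanup that iTunes can’t do is a one-liner. A minimal sketch, where the “podcasts” directory name is an assumption:

```shell
# Podcasts fetched by an external catcher are ordinary files, so
# ordinary tools can manage them. Delete every MP3 more than a week
# old under the download directory ("podcasts" is an assumed path):
podcast_dir="podcasts"
mkdir -p "$podcast_dir"
find "$podcast_dir" -name '*.mp3' -mtime +7 -delete
```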

So here’s BashPodder modified to add downloads to iTunes. (The iTunes part I added is the heredoc section beginning with /usr/bin/osascript. You could easily extend it to, say, tag podcasts from certain feeds with a certain album name.)

Cut and paste the script into a shell file. Easiest would be to download the several files required for BashPodder (there should be a mod to make it just a single self-modifying file) and replace the contents of bashpodder.shell with the script below.

```shell
#!/bin/bash
# By Linc 10/1/2004
# Find the latest script at http://linc.homeunix.org:8080/scripts/bashpodder
# Revision 1.2 09/14/2006 - Many Contributers!
# If you use this and have made improvements or have comments
# drop me an email at linc dot fessenden at gmail dot com
# I'd appreciate it!

# Make script crontab friendly:
cd $(dirname $0)

# datadir is the directory you want podcasts saved to:
datadir=$(date +%Y-%m-%d)

# create datadir if necessary:
mkdir -p $datadir

# Delete any temp file:
rm -f temp.log

# Read the bp.conf file and wget any url not already in the podcast.log file:
while read feed
  do
  podcast=`echo $feed | cut -f 1 -d ' '`
  echo $podcast
  file=$(xsltproc parse_enclosure.xsl $podcast 2> /dev/null || wget -q $podcast -O - | tr '\r' '\n' | tr \' \" | sed -n 's/.*url="\([^"]*\)".*/\1/p')
  for url in $file ; do
    echo "Retrieving $url"
    echo $url >> temp.log
    if ! grep "$url" podcast.log > /dev/null
      then
      # wget -t 10 -U BashPodder -c -q -O $datadir/$(echo "$url" | awk -F'/' {'print $NF'} | awk -F'=' {'print $NF'} | awk -F'?' {'print $1'}) "$url"
      outpath=$datadir/$(echo "$url" | awk -F'/' {'print $NF'} | awk -F'=' {'print $NF'} | awk -F'?' {'print $1'})
      curl --retry 10 -C - $url > $outpath
      fullpath=`pwd`/"$outpath"
      /usr/bin/osascript <<-EOF
        tell application "iTunes"
          set posix_path to "$fullpath"
          set mac_path to posix_path as POSIX file
          set new_track to add mac_path
          set genre of new_track to "*Podcast"
        end tell
EOF
    fi
    done
  done < bp.conf
# Move dynamically created log file to permanent log file:
cat podcast.log >> temp.log
sort temp.log | uniq > podcast.log
rm temp.log
# Create an m3u playlist:
ls $datadir | grep -v m3u > $datadir/podcast.m3u
```