TiddlyWiki Screencast: Forum in 15 Minutes

TiddlyWiki Screencast: Multi-User Forum in 15 Minutes from Michael Mahemoff on Vimeo.

This screencast is how I finished up the 12Days project, as it captures a lot of what I have come to appreciate in TiddlyWiki. It shows how you can do some simple hacking in the browser, in a single-page web app, reusing and configuring existing plugins from the community. You can then “turn the handle” to deploy it into a multi-user environment, through the magic of TiddlyWeb. The deployment process in the video uses Hoster, which provides lower-level functionality; it will be even simpler once TiddlySpace is mature.

The screencast shows the evolution of a wiki, starting from a freshly-minted TiddlyWiki downloaded from TiddlyWiki.com and transforming it into a viable multi-user forum. The steps shown:
  • Customising the forum’s look-and-feel by updating shadow tiddlers (SiteTitle, SiteSubtitle, ColorPalette, StyleSheet, DefaultTiddlers, MainMenu)
  • Using tags, the list macro, and the newTiddler macro to show and create new topics
  • Reusing components via backstage (CommentsPlugin, SinglePageMode, TaggedTemplateTweak)
  • Building a custom macro (taggedCount)
  • Transforming into a multi-user forum (Hoster)
  • Visualising version history
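
To give a flavour of the custom-macro step: a TiddlyWiki Classic macro is a plain JavaScript object registered on `config.macros`, with a `handler` function the core calls when it encounters `<<macroName …>>` in wikitext. The sketch below is my own minimal guess at what a macro like taggedCount might look like (the video has the real thing); `store.getTaggedTiddlers` and `createTiddlyText` are genuine Classic core APIs, but here `config`, `store`, and `createTiddlyText` are stubbed with toy data so the sketch runs standalone.

```javascript
// In a real wiki, TiddlyWiki's core supplies these three; the stubs
// below exist only so this sketch is self-contained and runnable.
var config = { macros: {} };
var store = {
  tiddlers: [
    { title: "First topic",  tags: ["topic"] },
    { title: "Second topic", tags: ["topic"] },
    { title: "About",        tags: [] }
  ],
  // Classic's store.getTaggedTiddlers(tag) returns tiddlers carrying a tag
  getTaggedTiddlers: function(tag) {
    return this.tiddlers.filter(function(t) {
      return t.tags.indexOf(tag) >= 0;
    });
  }
};
// Classic's createTiddlyText(place, text) appends text to a DOM node;
// stubbed here as an array push.
function createTiddlyText(place, text) { place.push(text); }

// The macro itself: render the number of tiddlers carrying the given tag.
config.macros.taggedCount = {
  handler: function(place, macroName, params) {
    var count = store.getTaggedTiddlers(params[0]).length;
    createTiddlyText(place, String(count));
  }
};

// Simulating what <<taggedCount topic>> would render in wikitext:
var out = [];
config.macros.taggedCount.handler(out, "taggedCount", ["topic"]);
console.log(out[0]); // "2"
```

In wikitext you would then just write `<<taggedCount topic>>` wherever the count should appear, e.g. next to each topic link in the MainMenu.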

I’m a redundancy fanboy. In visualisation, different formats suit different personalities and different tasks. With version control, the usual format is just a text log. This is good if you’re scanning for specific terms, but pretty ordinary for other activities – e.g. getting a feel for general trends, the pace of change, or the rise and fall of specific contributors.

It’s encouraging, then, to see demos like the following, which shows the evolution of the Python language project (via Dion’s tweet).

code_swarm – Python from Michael Ogawa on Vimeo.

It reminds me of one of the first screencasts by Jon Udell, a fascinating walkthrough of the evolution of a Wikipedia page over a year or so. The page he chose for this demo is as memorable as the message of the video itself.

These visualisations are cool as tasters for what might be, but they are “here’s one we made earlier”. Where are the tools to automate all this? I have no doubt such tools have been created in academic research projects, but let’s see them in action. I’d love to see the source code hosts – SourceForge, Google Code, GitHub, et al – integrate this technology to produce visualisations on the fly.

Screencasts with Audio on Wink

I’m at a workshop on widgets. At a lot of workshops, people build some code, demo it, and then go away, and no-one can see it running again. In an ideal world, we’d keep the apps running forever, but that relies on a complex tangle of internal and external services remaining online and staying in the same form. For example, will Twitter’s experimental OAuth have the same interface in six months’ time as it does today? In an experimental workshop involving mashups, there are bound to be numerous calls to services like that. The best way to preserve the work is not to keep the apps running, but to capture screencasts where the developers can explain the underlying concepts.

On Mac, I use iShowU for screencasts (like the Web-O-Random screencast I did a while ago). For Windows and Linux, though, there’s the possibility of Wink, which is nice as it’s (a) free; and (b) capable of producing SWF files directly (Flash movies which can be embedded in a web page, YouTube-style). I last tried Wink two years ago, to make some AjaxPatterns screencasts that never happened. (It’s funny to think that at the time, I was bothered about how to host and serve these files, a few MB each. Now I’d just store them on Dreamhost at 1+TB/month for about $20.) At the time, Wink didn’t handle sound, so you had to go through contortions to get an SWF movie containing the screencast with audio. It does now, but it turns out not to be brilliant: when I tried to record with the “audio” option checked, the audio ended up broken – 1 second on, 1 second off. That would suggest a buffering issue, but there’s plenty of memory available.

So here’s what I discovered, which actually works (using Wink 2.0). Instead of recording audio and video simultaneously, you can record audio over a single – frozen – frame, i.e. the frame stays frozen while you say your thing. It’s not Tarantino, but it’s good enough for an explanatory screencast.

1. Start a new project, with the audio option not checked.
2. Record the interaction without audio (Shift-Pause to start, Alt-Pause to stop). Don’t slow down on critical events, as you can easily add delays during editing.
3. Once you’re done recording, Wink will show a reel of all frames at the bottom of the screen.
4. Click on a frame showing a critical event. On the right of the screen, a dialog shows the properties for this frame.
5. Click the “+” audio button, which opens a recorder. You can now record some audio that will play while the frame is shown. The frame is automatically held for as long as the audio you record.
6. Do Project Menu|Render, then Project Menu|View Rendered Output, to see your video and hear your narration.

(I’m aware this is a plain-text post, explaining how to use software without screenshots or screencasts. Isn’t it ironic?)