Blinking WebKit

  • Speed. When Alex Russell talks about greater speed [1], I take it fractally. At the micro level, it means actual day-to-day web development and debugging is faster; at the macro level, it means browsers and web standards move faster. Google works the same way; it’s a company that cares deeply about speed. At the macro level, that means pushing Kurzweil’s broader interpretation of Moore’s Law to its limit; at the micro level, it means a victory for every nanosecond that can be shaved off a search query.

  • Inevitable. The writing has been on the wall for years. Chrom{e/ium} has been heavily driving WebKit, and it’s only natural they should want to lead the project. Cutting-edge WebKit is already there on desktop and mobile; in the future, it will need to be there in more contexts, e.g. Android webviews, Google TV or whatever becomes of it, Glass, cars, etc.

  • Dart. I can’t get a grip on how much Dart is growing, I’m too out of the loop. But if it is indeed growing to the point that it gets to survive and be blessed internally, it will be part of Blink. No question.

  • Safari. I’ve read some people saying, in effect, “you’re doing it wrong if you’re not already testing on Safari, as they’re already different”. Well, yeah, if you’re writing a mission-critical trading app. But let’s be honest; this business about testing on all browsers comes with a big wink and a sizeable nudge. Most of us can and do get by testing only occasionally on Safari. Even more so for Windows developers, who don’t even have access to a modern Safari. I don’t see Apple adopting Blink anytime soon; I’m not even sure the importance of this fork will filter up to Apple’s seniors for some time. And this is a good old-fashioned fork; WebKit and Blink will be significantly different. So the net effect for developers is more testing on Safari. Compensated by less testing on …

  • …Opera. My heart sank a little for Opera on reading this news; so it’s good to know Opera was in on the secret. If not when they made the decision to adopt WebKit, then at least at some point before the Blink news dropped. Blink will certainly be stronger for Opera’s contributions.

  • Samsung. Samsung has to be considered a major part of today’s browser ecosystem. They get to pick the browser that goes into most smartphones, after all, and it’s no secret they are on a collision course with Google. Last night’s news of a major collaboration with Mozilla (on Servo) is more evidence of that. Should Samsung start shipping Firefox as the default browser, the web really will have four major mobile engines (including IE here). It feels like battle lines have been drawn, but that’s probably more about the coincidence of timing. Also worth mentioning Amazon as a similar company with the potential to grow into a major influence on the web ecosystem, via Silk. One can assume they will adopt Blink.

  1. http://infrequently.org/2013/04/probably-wrong/

Why Google killed off Google Reader: It was self-defense (GigaOM guest post)

Guest-posted this on GigaOM today.

Backstory is I started writing it on Thursday night after seeing the whole Reader tweetstorm, and figured it was probably of more general interest, so I submitted it there. The original draft was ~1400 words and I wasn’t sure how seriously they take their guideline of ~800, so I just left it at 1400; turns out they are, in fact, serious. So we edited it down.

For the record (since some people asked), I used Bloglines for as long as I could cope with its downtime, as I always found Google Reader too magic (unpredictable) with its use of Ajax. Eventually Bloglines was down for hours and, IIRC, whole days, so I made the switch to Reader, but I could never get into the web app – too much Ajax magic – and instead used Reeder, synced to Reader, when it came along. When I switched to Android as my primary device, I couldn’t find a satisfactory app, so I just used Reeder on the iPad occasionally.

Meanwhile, with podcasts, I preferred the cloud approach of Odeo and Podnova, but both sadly died. I tried podcasts with Reader, but it just wasn’t the right experience, so I mostly used iTunes; then on Android, I mixed it up between several apps (DoggCatcher, BeyondPod, PocketCasts, etc…the usual suspects) until eventually creating my own (still in beta). I really had problems with Listen, though, so again, I didn’t do the Reader sync.

So the bottom line is I did use Reader “somewhat”, but mostly as an API; it’s no great loss to me, though I appreciate it is to others. The responses to this article certainly demonstrate how passionate people are about a product they get to know and love, and use on a daily basis. It’s never easy giving up on muscle memory. The bright side of the equation is exactly what people like about it: RSS and OPML are open, so at least people can move on to Feedly, Newsblur, and so on. And I truly believe this decision ultimately liberates the standard and allows it to thrive among smaller players.

About WebWait and Caching

Today I received a question that’s often asked about WebWait, so I’ll answer it here for reference.

WebWait User asks:

I have been using webwait for a while and have a quick question for you. When running multiple calls on the same website, is each call downloading the entire page again, or is the information being loaded from the browser cache?

My answer:

It will do whatever the browser would do if the page was loaded normally, so that usually means the 2nd-Nth times it will download from the cache. To counteract that, you can simply disable your browser cache while performing your tests. Or, if you do want to test cache performance, just open your site once (either in the browser or WebWait) and then start the WebWait tests, keeping the cache enabled throughout.

Shorthand Parameters

Here is a weird abuse of default parameter values to support shorthand variable names. It’s valid Ruby.

  def area(radius, r = radius)
    Math::PI * r * r
  end

Simple example, but you get the point. The verbose parameter name tells the external world what the parameter is all about, while the short alias keeps the implementation compact. Real parameter names can be much more verbose than this one, and functions can be longer, so you don’t want to keep repeating a long name. For example:

  def damage_level(force_exerted_by_car, force = force_exerted_by_car)
    force = 0 if force < 0
    acceleration = force / mass
    ...
  end

Now you might say “just declare it in the first line”, but I prefer small code and there could be several such lines.

You might say “mention it in a comment”, but I prefer self-documenting code. Comments go out of date and clutter up code. (Strictly speaking, the long name here is a comment, but it’s more likely to be maintained.)

[Update: I don’t often mention Pi, but when I do, it’s on March 14: Pi Day. Thanks to the reader who pointed it out!]

Revoking OAuth Tokens From Google, Twitter, etc

The URLs below let you manage and revoke permissions you’ve given to third parties via Google, Twitter, Facebook, etc. It’s not only useful for security, but also for testing while developing such tools. By deleting the connection, you can see what a user will see the first time they connect.

Mainly writing this because I keep searching for these things and don’t have much luck (as I forgot about MyPermissions). Being personalised URLs, they don’t show up in searches (which is a wasted opportunity, since they are all static and could have just been public placeholders). Hopefully this post will show up in the future when I search for “revoke OAuth tokens”. You can find links to more of these services on MyPermissions.

A Bash Logging Utility

With a long-running script, it’s convenient to see checkpoint log messages indicating what stage it’s at and how long it’s taken.

Most scripts simply run date to show the boring long date format: Fri Mar 29 21:07:39 MST 2002. Info overload! You don’t want to know what month it is, whether you’re in the middle of a weekend, or what timezone you’re in! More to the point, you want to know how much time has elapsed, not what time it is now; you want to know the script’s age.

So here’s a little utility to make it easy. Just call “age” and it will output the time since the script began in 00:00:00 format.

I also made another function “announce” which you can use to announce the current function is running. With larger bash scripts, I tend to break them into functions with a list of calls at the bottom; so I can quickly bypass unnecessary crunching by commenting out the call. “announce” makes it easy to see which is running. And if you wanted, you could easily automate announcing for each function…making aspect-oriented Bash the place to be.
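As a minimal sketch of the behaviour described above (the “age” and “announce” names are from the post; the `script_start` variable and the demo function are my own assumptions):

```shell
#!/usr/bin/env bash
# Sketch of the checkpoint-logging helpers. "age" prints elapsed time since
# the script began, in 00:00:00 format; "announce" logs the calling function.

script_start=$(date +%s)   # seconds since epoch, captured when the script begins

age() {
  local elapsed=$(( $(date +%s) - script_start ))
  printf '%02d:%02d:%02d\n' \
    $(( elapsed / 3600 )) $(( elapsed % 3600 / 60 )) $(( elapsed % 60 ))
}

announce() {
  # ${FUNCNAME[1]} is the caller, i.e. the function being announced
  echo "$(age) Running ${FUNCNAME[1]}"
}

crunch_numbers() {
  announce
  sleep 1   # stand-in for long-running work
}

# list of calls at the bottom; comment out a line to bypass that stage
crunch_numbers
```

Because `age` reads the clock at call time, every checkpoint line shows the script’s age rather than wall-clock noise.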

No, let’s not use that date format

Doing the rounds is XKCD’s endorsement of the ISO 8601 date format. Let’s avoid that, because as another XKCD reminds us, you don’t just invent new standards in the hope of wiping out the old ones.

I don’t know how serious the proposal is, but I’ll bite:

  • 2013-02-27 is used by no one, so it will confuse everyone.
  • Real people don’t use leading zeroes.
  • It’s still ambiguous. Given dates are already messed up, there’s really no reason for readers to assume 2013-02-03 follows the logical MM-DD order.

No, the real answer is to either include the month name (or abbreviation), or (in a digital context) use the “N days ago” idiom. (Note that “N days ago” suffers from one major issue: it goes stale if the content is cached.)

Sure, if the context is filenames or something technical, use this format, or just plain old 20130227, which sorts nicely (https://twitter.com/CastIrony/status/307014830752149504); I often use that format for backups. But for humans, stick to what they know.
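For the technical contexts, those sortable stamps are one `date` invocation away; a sketch using standard `date` format specifiers (the example outputs in comments are illustrative):

```shell
# sortable, machine-friendly stamps for filenames and backups
date -u +%Y%m%d              # e.g. 20130227 (sorts lexicographically)
date -u +%Y%m%d-%H%M%S       # e.g. 20130227-091500 (finer-grained, still sortable)

# human-friendly rendering: spell out the month to kill the ambiguity
date -u +'%d %b %Y'          # e.g. 27 Feb 2013
```

The lexicographic ordering of `%Y%m%d` matches chronological ordering, which is exactly why it works for backup filenames.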

Discovering Users’ Social Path with the New Google+ API

Google announced a slew of identity and social updates today; most excitingly, the ability to browse users’ social paths. This comes after similar services recently blocked some folks from doing just that, which tells you Google gave it due consideration and is committed to supporting this feature indefinitely.

Here’s how the authentication looks:

Now there’s a whole set of widgets and JavaScript APIs, but I was interested in the regular scenario for apps already using the “traditional” OAuth 2 dance. After asking on the G+ APIs community, I was able to get this running and I’ll explain how below.

Step 1. Visit the API doc: https://developers.google.com/+/api/latest/people/list

Step 2. Scroll to the interactive part below and turn on OAuth 2.0 on the top-right switch.

Step 3. To the default scope, add a new one: https://www.googleapis.com/auth/plus.login. That’s the magic scope that lets your app pull in social graphs.

Step 4. For userID, enter “me”. For collection, enter “visible”. (This collection property, representing circles/people the user can identify to the app, only has that one value at present.)

Step 5. Now hit execute and (as a test user) you’ll see the dialog shown at the top of this article. Then hit accept.

Step 6. I got a confirmation dialog saying “Clicking Confirm will let Google APIs Explorer know who is in your circles (but not the circle names). This includes some circles that are not public on your profile.”, which is surprising, as I believe circles are always private (for now), so I guess users will always see that. Accept it.

Step 7. The JSON response will now be shown below the form. It includes a top-level field called “items”, which is the list of your (the authenticated user’s) G+ people. If the list is too long, there will also be a “nextPageToken” field so the app can page through the list.
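Outside the Explorer, the same call is just an authenticated GET. A sketch, assuming your app has already completed the OAuth 2 dance with the plus.login scope and holds an access token (the token value here is a hypothetical placeholder):

```shell
# people.list: userId "me", collection "visible" (steps 4 and 7 above)
USER_ID="me"
COLLECTION="visible"
URL="https://www.googleapis.com/plus/v1/people/${USER_ID}/people/${COLLECTION}"

# ACCESS_TOKEN is a placeholder for a token granted the
# https://www.googleapis.com/auth/plus.login scope
ACCESS_TOKEN="ya29.EXAMPLE"

# with a real token, execute the request; the response JSON carries "items"
# and, for long lists, a "nextPageToken" for paging:
#   curl -s -H "Authorization: Bearer ${ACCESS_TOKEN}" "${URL}"
echo "GET ${URL}"
```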

So that’s an overview of the new G+ social API. It’s a straightforward OAuth implementation and should be easy for anyone with a Google login to adopt. I’ve been looking forward to adding this functionality on Player FM so people can see what their friends are listening to … I think it’s a nice model where users can choose how much of their social graph they share with any app.

Glass Surrogates

Google Glass rolls out later this year. The commonly discussed applications have focused on receiving timely notifications and recording first-person video, but in the hands of developers, many more ideas will emerge. One possibility I haven’t encountered is surrogates. Like all things Glass, it’s a potentially transformative and empowering possibility teetering right on the creepy line.

A Glass surrogate is best exemplified by Larry Mittleman in Arrested Development’s third season (“Middleman”, get it? Portrayed to comedy perfection by Bob Einstein).

While bedroom-bound, George Bluth recruits the surrogate to walk through the world on his command, say what he says, do what he commands.

Being equipped with streaming mic and camera, the surrogate is starting to look awfully familiar in a world of Glass.

One area where this will likely happen is “commodified outsourcing”, an industry that’s already moved on from desk-bound eLance/oDesk type work to real-world delegation, a la Exec and TaskBunny.

These services let you post errands to be conducted in fleshspace (delivery, cleaning, lining up for tickets, etc.). These contractors already use their phones to send photos and converse with their providers. I’d be surprised if these workers weren’t issued Glass devices as standard within a year or two. A busy person could send a surrogate to the store and provide instructions once the surrogate is there; the busy person is then only engaged for the 10 minutes of actual shopping, instead of the hour it takes to visit the store and return.

On a grander scale, a well-located surrogate might save someone a time-consuming overseas trip.

It’s more than just saving time. Some people are physically immobile or find it impractical to travel any distance. For them, a surrogate would be the closest thing to being physically present.

This will also take shape in professions where telepresence is emerging. Medicine, for example. A surrogate medical specialist (maybe doctor, maybe not) would perform procedures on behalf of a remote doctor.

You can also see how a manager would be able to flip between workers’ points of view like flipping between CCTV cameras, even if said workers are in the field. They might be physically labouring on the factory floor, or juniors in a business meeting the big boss might jump in and out of. This is definitely a potentially creepy scenario, but one that would have immense training and feedback benefits.

Far-fetched? Consider that one of the services I mentioned above already does this in its own way: oDesk lets providers view periodic screenshots of contractors. When I’ve hired this way, I haven’t used this facility often, because I hire motivated workers and manage workflow in other ways (e.g. Trello), but it can be useful in a remote-working context to check a contractor is on the right track. Glass would take all that out into the real world.

There are many ethical and well-being concerns here. I imagine this could quickly become a scenario where managers view all their workers’ perspectives and chime in as “a voice of God” to direct their work. These scenarios will definitely need to be ironed out, and as with other areas of Glass, etiquette and conventions will emerge.

Side note: The movie Surrogates is another example, but unlike Arrested Development, these surrogates are humanoid robots which is one or two AI generations removed from the imminent Glass scenario.