Discovering Users’ Social Path with the New Google+ API

Google announced a slew of identity and social updates today; most excitingly, the ability to browse users’ social paths. Coming straight after similar services have blocked some folks from doing just that, this suggests Google gave the feature due consideration and intends to keep supporting it.

Here’s how the authentication looks:

Now there’s a whole set of widgets and JavaScript APIs, but I was interested in the regular scenario for apps already using the “traditional” OAuth 2 dance. After asking on the G+ APIs community, I was able to get this running and I’ll explain how below.

Step 1. Visit the API doc: https://developers.google.com/+/api/latest/people/list

Step 2. Scroll to the interactive part below and turn on OAuth 2.0 on the top-right switch.

Step 3. To the default scope, add a new one: https://www.googleapis.com/auth/plus.login. That’s the magic scope that lets your app pull in social graphs.

Step 4. For userId, enter “me”. For collection, enter “visible”. (The collection property, representing the circles/people the user has agreed to share with the app, only has that one value at present.)
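
Behind the form, the Explorer is issuing a plain REST call, roughly like this (the Authorization header carries the OAuth token; “{access-token}” is a placeholder):

GET https://www.googleapis.com/plus/v1/people/me/people/visible
Authorization: Bearer {access-token}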

Step 5. Now hit execute and (as a test user) you’ll see the dialog shown at the top of this article. Then hit accept.

Step 6. I got a confirmation dialog saying “Clicking Confirm will let Google APIs Explorer know who is in your circles (but not the circle names). This includes some circles that are not public on your profile.” That surprised me, as I believe circles are always private (for now), so I guess users will always see that message. Accept it.

Step 7. The JSON response will now be shown below the form. It includes a top-level field called “items”, which is the list of your (the authenticated user’s) G+ people. If the list is too long, there will also be a “nextPageToken” field so the app can page through the list.
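
For illustration, the response is shaped roughly like this (values invented, fields abridged):

{
  "kind": "plus#peopleFeed",
  "nextPageToken": "CAIQ0...",
  "items": [
    {
      "kind": "plus#person",
      "id": "123456789",
      "displayName": "Some Friend",
      "url": "https://plus.google.com/123456789",
      "image": { "url": "https://example.com/photo.jpg" }
    }
  ]
}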

So that’s an overview of the new G+ social API. It’s a straightforward OAuth 2 implementation and should be easy to adopt for any app that already supports Google login. I’ve been looking forward to adding this functionality to Player FM so people can see what their friends are listening to … I think it’s a nice model where users can choose how much of their social graph they share with any given app.

Glass Surrogates

Google Glass rolls out later this year. Discussion of applications has focused on receiving timely notifications and recording first-person video, but in the hands of developers, many more ideas will emerge. One possibility I haven’t encountered is surrogates. Like all things Glass, it’s a potentially transformative and empowering idea teetering right on the creepy line.

A Glass surrogate is best exemplified by Larry Middleman in Arrested Development’s third season (“Middleman”, get it? Portrayed to comic perfection by Bob Einstein).

While bedroom-bound, George Bluth recruits the surrogate to walk through the world at his command, saying what he says, doing what he does.

Equipped with a streaming mic and camera, the surrogate starts to look awfully familiar in a world of Glass.

One area where this will likely happen is “commodified outsourcing”, an industry that’s already moved on from desk-bound eLance/oDesk type work to real-world delegation, a la Exec and TaskRabbit.

These services let you post errands to be conducted in fleshspace (delivery, cleaning, lining up for tickets, etc.). The contractors already use their phones to send photos back and converse with their clients. I’d be surprised if these workers weren’t issued with Glass devices as standard in a year or two. A busy person could send the surrogate to the store and provide instructions once the surrogate is there, so the busy person is only engaged during the 10 minutes of actual shopping, instead of the hour it takes to visit the store and return.

On a grander scale, a well-located surrogate might save someone a time-consuming overseas trip.

It’s about more than just saving time. Some people are physically immobile or find it impractical to travel any distance. For them, a surrogate would be the closest thing to being physically present.

This will also take shape in professions where telepresence is emerging. Medicine, for example: a surrogate medical specialist (maybe a doctor, maybe not) would perform procedures on behalf of a remote doctor.

You can also see how a manager could flip between workers’ points of view like flipping between CCTV cameras, even when those workers are out in the field. They might be physically labouring on the factory floor, or juniors in a business meeting the big boss jumps in and out of. It’s a potentially creepy scenario, but one with immense training and feedback benefits.

Far-fetched? Consider that one of the services I mentioned above already does this in its own way: oDesk lets clients view periodic screenshots of their contractors. When I’ve hired this way, I haven’t used that facility much, because I hire motivated workers and manage workflow in other ways (e.g. Trello), but it can be useful in a remote-working context to check a contractor is on the right track. Glass would take all that out into the real world.

There are many ethical and well-being concerns here. I can imagine this quickly becoming a scenario where managers watch all their workers’ perspectives and chime in as “a voice of God” to direct their work. These scenarios will need to be ironed out, and as with other areas of Glass, etiquette and conventions will emerge.

Side note: The movie Surrogates is another example, but unlike Arrested Development, those surrogates are humanoid robots, which puts them one or two AI generations beyond the imminent Glass scenario.

Order Of Magnitude Improvement: 3.16x

Disclaimer: Largely waffle.

A common principle in tech is that changes are only adopted on a grand scale when there is an order-of-magnitude improvement. That is, it’s not good enough to add a couple of new features to make the product 10% better; that will only bring a niche audience. You have to make it radically, qualitatively, better.

It’s easy to see examples of this: Google’s search was blatantly more useful to anyone acquainted with AltaVista and friends; windowed UIs were blatantly friendlier to casual users than a text terminal; and so on.

A question worth asking is: how much is an order-of-magnitude improvement? Well, the honest answer is that it doesn’t really matter. It’s a principle, and it’s more about a disruptive, qualitatively different change than about something you can measure.

But that said, it’s often equated with a 10x improvement. That’s the literal meaning of the English phrase, anyway.

But I actually think of it as a 3.16x improvement; that is, the square root of 10. The reason: “orders of magnitude” implies a discrete scale, and jumping to the “next” order of magnitude means going one notch up. Thinking linearly, you might say anything more than 5x rounds up to the next order. But orders of magnitude are exponential by definition, so once you’ve improved something by 3.16x, you’re already halfway there: halfway on a log scale.
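
To spell out the arithmetic: an order of magnitude is a factor of 10, so the log-scale midpoint is the factor x satisfying x × x = 10, i.e. x = √10 ≈ 3.162 (equivalently, log10 3.162 = 0.5).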

This is all a very silly calculation because, like I say, the whole concept is wishy-washy. What are we even measuring, anyway? Utility? Something else? And if someone came up with a 3.16x innovation as soon as the last one happened, then by this definition we’d have notched up two orders of magnitude while only improving 10x. Just thought I’d mention it anyway.

Bitten By Significant Whitespace

I’ve come to love significant whitespace since using it in CoffeeScript. (I’d previously dismissed it because I don’t generally get on with Python, but really that’s for other reasons.) By eliminating the need for { }, code is more to the point.

However, significant whitespace is playing with fire and I just got burned.
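
The original snippet is lost to history, but it was shaped something like this (the handler and helper names are invented for illustration):

# Buggy version: note where the final `false` sits.
$(document).on 'click', (e) ->
  if Modernizr.touch
    if $(e.target).is '#sidemenu a'
      openSidemenu()
      highlightLink e
      trackTap e
    false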

The code tailors sidemenu behaviour for a touch device. Anyway, the final false was wrongly indented: it should have been indented by 2 more characters, to sit directly under the other 3 lines.

It must have been a quick edit or something, but the dedented false became the handler’s implicit return value on touch devices, cancelling the default action of every click. The net effect was that forms couldn’t be submitted on a touch interface. I couldn’t quickly track it down, so I made some workarounds to get things working; but when I realised it was happening on all forms, I looked into it properly.

Lessons:

* Be very careful changing any Coffee indents.
* Modernizr.touch would be a good starting point when searching for the cause of bugs like this.

Private resources with ElasticSearch and Tire

I’m adding private channels to Player FM, and one consideration is search results. Tire’s ActiveRecord integration does a great job of making index updates transparent, but in this case some manual overriding is required.
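
The code isn’t reproduced here, but it amounts to replacing Tire’s automatic callbacks with a manual one. A sketch of the shape (assuming a boolean public column; method names approximate):

class Channel < ActiveRecord::Base
  include Tire::Model::Search   # search support, minus the automatic callbacks

  after_save :update_search_index

  private

  # Index public channels; remove private ones from the index.
  def update_search_index
    if public?
      tire.update_index
    else
      tire.index.remove self
    end
  end
end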

Importantly, this lets the user switch privacy on and off, and the index entry will automatically be created or deleted accordingly. I initially considered a “_changed?” check, but realised it’s unnecessary, as ElasticSearch’s remove operation is idempotent. In other words, it’s safe to remove an already-removed item. Yes, the call could be avoided by checking whether the resource is already private, but the call is cheap, a fraction of the cost incurred had the channel been public anyway (i.e. it would have to be re-indexed).

There was some talk of a “should_be_indexed?” method which any record could override. It would be perfect for this use case – just a one-word return value (public?) – but alas, it wasn’t added. As the code above shows, though, it’s pretty simple to DIY.


Testing ETags: A Little Gotcha

TLDR: ETags have quotes, escape them when issuing requests.

I’ve been using HTTPie to test some conditional caching I’ve been setting up on a JSON API. It’s much more intuitive than curl; highly recommended.

A funny thing about ETags is that the values are surrounded by quote marks, unlike most other string-based HTTP header values. (And even better, this being the web, there’s much flexibility in real-world implementations, even though the quotes are required by the standard.) So a response looks like:

HTTP/1.1 200 OK
...
ETag: "avm3pvp34vpoktcbd18db4c"
Normal-Header: some value

Having added conditional caching support on the server, I was now looking forward to reaping the reward and seeing 304s show up client-side. Hustling for the cacheworthy 304 Not Modified response, I tried this:

http -phH localhost:3000 If-None-Match:avm3pvp34vpoktcbd18db4c (Wrong!)

And I kept getting 200s, meaning a fresh response every time. The server wasn’t recognising the value, because of the missing quotes. So the correct thing to do is:

http -phH localhost:3000 If-None-Match:\"avm3pvp34vpoktcbd18db4c\"

And now the server recognises it as the same ETag. Satisfying 304 is Satisfying.
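
The response now comes back along these lines (other headers elided):

HTTP/1.1 304 Not Modified
ETag: "avm3pvp34vpoktcbd18db4c"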

(Of course, this was working all along in the browser. I’d figured it was working because of the other conditional caching mechanism – timestamps (Last-Modified and If-Modified-Since headers) – even though the ETags were apparently different. But it turned out the ETags were right all along; the browser, unlike me on the command line, knew how to actually send them.)

Keeping SSH Alive on OSX 10.7.5

Quick SSH tip. I recently upgraded to OSX 10.7.5 and promptly found SSH sessions dying early and often. After trying various suggestions, I found some flags which fixed it and am using this shell function for now:

function shell { ssh -o TCPKeepAlive=no -o ServerAliveInterval=15 $1; }

You can stick that in your shell config (e.g. ~/.bashrc) if you wish to make it the default.
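
Alternatively, the same options can live in ~/.ssh/config, which makes them the default for plain ssh too (equivalent settings, as far as I can tell):

Host *
  TCPKeepAlive no
  ServerAliveInterval 15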

Phablet-Only

Mid-Range Tablets: Their Time Has Come

I was initially skeptical about mid-range tablets. I figured I have my phone in my pocket and my big-ass tablet in my living room; why would I need something in the middle? Niche at best, right?

I was wrong.

I got interested in this form factor when I saw my G+ stream starting to light up with rave reviews of the Nexus 7. Biased crowd, admittedly, but they weren’t the usual fanboys/girls. After Google dropped the price for the holidays, it was too much to ignore. I yoinked a 16GB Nexus for £159 and I haven’t looked back. It’s one of the best devices I’ve owned, up there with the iPod, iPhone, and iPad for the sheer delight of getting to know it. I find myself reaching for the Nexus even when the iPad 2 is nearby. It’s lighter and less bulky to hold, and works fine for any kind of surfing and reading. (And it doesn’t hurt that it’s running what has become a fantastic OS.) Only video suffers from the size, though the trade-off still makes it worthwhile sometimes, especially in environments like public transport.

The little secret about this form factor, now revealed to the masses, is that it fits fairly comfortably in most adult jeans. Not to mention handbags and glove boxes. I did have a crappy knock-off 7″ model from early 2011, and it was simply too fat to fit comfortably in the pocket. But – thanks mostly to battery improvements, apparently – the Nexus 7 and iPad Mini are way thinner, and that really makes the difference. Furthermore, the grippy backplate of the Nexus 7 is genius, one of those “little things” that makes a huge difference and elevates the form factor overall.

The New Must-Carry Device

Given that (a) mid-range tablets are the sweet spot for many interactions, and (b) they are feasible to carry around, the natural deduction is that I want to carry one with me. Even more so as I’m often using it to listen to podcasts or watch videos before stepping out, and want to continue that experience without switching over to the phone.

The mid-range tablet has begun to occupy the special place traditionally held by the phone: a personal device, always carried. Not a shared device like the iPads of yore, but one as personal and omnipresent as the smartphone. So I’m often carrying two devices now; the convergence trend has reversed, and having shed the dedicated MP3 player and camera, my gadget count has suddenly doubled. A smartphone and a tablet, both in my pockets? Don’t want.

Why even bother carrying a smartphone anymore? What does it offer that the mid-range tablet doesn’t? Well, two things for now: Bandwidth and actual phone services.

Bandwidth is mostly what I still need the smartphone for. The S3 has increasingly become a dumb appendage which sits in my pocket purely to provide tethering support for the Nexus. That’s partly because I have a wifi-only model; if I could choose again, I’d splurge on a 4G model.

That leaves only one thing the smartphone is good for: the “phone” bit. And that’s why I’m talking about Phablets here. Modern tablets do in fact allow for phone services. We can use VOIP solutions like FaceTime, Google Talk, and Skype, all with plenty of options for buying traditional phone numbers and interfacing with the regular network (including SMS and voicemail). Furthermore, it should be possible to access the radio and make actual phone calls using the standard dialling app (this requires a rooted Android device for now, but it’s possible). One could easily make regular calls with a headset (bluetooth or wired), or speakerphone, or, yes, the comedy scenario of just holding the damn thing to your face for a few minutes.

Having explained why those two barriers are surmountable, I believe Phablet-Only is possible, and it’s something I want to do. I think we’ll see a little Phablet-Only trend gain momentum over the next year.

One Size Fits No-one

Phablet-Only is not for everyone, I know. Not everyone wants to interact with their phone using a headset; it’s super-convenient to just hit Call and hold the phone against your face. Others might object to the one-handed experience: if you’re standing on a crowded train every day, you probably want a device where your thumb can reach every point on the screen. And the size itself, of course: if you don’t have big enough pockets and don’t want to carry a bag around, you can’t do this.

The real point is that everyone will have a range of options available. It’s likely we’ll converge on one personal device, because most people will be too inconvenienced by keeping multiple devices with them at all times. Even with cloud syncing, you still have to install apps twice, set up your homescreen again, etc. Only a revolution in wearable devices, like Google Glass, will bring about more than one device.

So, assuming for now we have only one device, what will that device be? In 2009, we could confidently say an iPhone or similar form factor. For 2013, I believe we won’t be able to say much at all: it could be anything from 3″ to 8″. Phablet-Only users might still keep their phone, but it would become a secondary device for occasions where a tablet isn’t practical (e.g. the running/clubbing/gyming scenario).

The implications for developers are obvious: get responsive! And I mean this in the broader sense: native apps must be responsive too, and designers must consider how different form factors affect different usage patterns.

Chrome Apps v2: Native-Grade HTML5 on a Desktop Near You

I went along yesterday to Google’s Chrome Apps hack day at Google Campus. A lot of what you hear about the new Chrome Apps is actually v2 of the packaged-app concept introduced in 2010, but focused on making apps more native-like. I’ve followed this closely, but didn’t get hands-on until yesterday.

I’ll outline my initial thoughts here. For context, I hacked on a long-planned app to sync podcasts offline according to account settings (show me the source), and got as far as a scrapp that shows and plays the latest episodes in all of the user’s starred channels.

As I’d booked Campus’s swish studio for a couple of HNpod recordings, I didn’t get time for the offline components, but did discuss the options with the Chrome team.

Desktop For Now

The first thing to say is that this is a desktop play. The elephant in the room is, of course, Chrome for Android: the same organisation that is releasing this also ships a browser running on mobiles, tablets, and TVs. So the eternal dream of HTML5 Everywhere might one day be carried forward by this initiative. But for now, it’s all about the desktop, and that’s a fine place to start. Think of it as Browser++ rather than Mobile Apps--.

Native-Like User Interface: Seriously Competitive

This is a major improvement over the original apps model, and certainly in the direction I’d hoped for in the v1 era of Chrome apps. It’s much more like the Adobe Air vision of write-once, run-many native apps that sit in their own window. You don’t see any browser chrome, and apps don’t even need a standard title bar; you can roll your own there too (yes, you too can design a lickable 1990s MP3 player with a custom title bar!). Adobe Air’s vision was actually very promising and did well in the form of TweetDeck, DestroyFlickr, and others. But Adobe let the runtime languish, particularly the HTML5 runtime, and the Flash runtime dwindled along with Flash itself. In contrast, this is Chrome’s leading-edge HTML5 runtime. Gloves are off, native apps!
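
To make that concrete, here’s a minimal sketch of a frameless app window, based on the API as it stood at the workshop (file names are placeholders):

manifest.json:

{
  "manifest_version": 2,
  "name": "Hello Chrome App",
  "version": "0.1",
  "app": {
    "background": { "scripts": ["background.js"] }
  }
}

background.js:

// Open a window with no browser chrome and no standard title bar.
chrome.app.runtime.onLaunched.addListener(function() {
  chrome.app.window.create('window.html', {
    width: 400,
    height: 300,
    frame: 'none'   // roll your own title bar in window.html
  });
});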

The caveat is that presently it’s launched through Chrome, and apps don’t launch from start menus, task bars, etc. The team has aspirations to fix all that and make the apps feel truly native, much like Adobe Air apps did, but it’s obviously a lot of work, has standardisation implications, and it remains to be seen whether they’ll make good on that aspiration.

Native APIs

It’s not just a native UI; there are native APIs too. I mean, you can write a freaking IRC server in the browser! Someone has: check out the servers in the app samples.
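
For a taste, here’s a minimal TCP listener using the chrome.socket API (a sketch from memory; it needs the socket permission in the manifest, and error handling is omitted):

chrome.socket.create('tcp', {}, function(createInfo) {
  var serverId = createInfo.socketId;
  chrome.socket.listen(serverId, '127.0.0.1', 6667, function(result) {
    chrome.socket.accept(serverId, function(acceptInfo) {
      // Echo the first thing the client sends, then hang up.
      chrome.socket.read(acceptInfo.socketId, function(readInfo) {
        chrome.socket.write(acceptInfo.socketId, readInfo.data, function() {
          chrome.socket.destroy(acceptInfo.socketId);
        });
      });
    });
  });
});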

This is great, and one of the main things missing from web apps right now. That said, it does lead to another challenge which the Chrome team and devrel need to solve: too much choice!

In my case, I had to ask: where am I syncing these podcast files to? Google Drive space? FileSystem API space? Or the raw filesystem, given that Chrome apps can actually save files to the regular hard drive? FWIW my answer, in my specific case, was the FileSystem API, because it’s perhaps the most likely to be auto-synced in the future and doesn’t have the extra barrier of Google Drive. But the decision tree is far from clear. Just as HTML5 gave us purpose-built APIs for the hacks of the Ajax era, Chrome apps are ushering in an era where there will be a third way to do everything: pure native. And it’s not always clear which way is the right way.

Distribution Model: Potentially Confusing, Potentially Solvable

Distribution is presently planned to happen through the Chrome Web Store, though that may change in the future. I think this may be a point of confusion for users, given there’s no obvious tie to Chrome: why install a standalone app from inside your browser? I presume the Web Store itself might gain a more native look and feel, and hopefully the Chrome team can provide a path that’s intuitive for users while maintaining the security benefits this approach affords.

Build Process

You don’t need a build process; you can just hand-edit the manifest and build a single-page app. But this is from the team that made Yeoman, and Yeoman’s Addy Osmani was on hand, so I tried it out. Once installed, Yeoman supports a yeoman init chromeapp command to scaffold a Chrome app project, which makes getting started simple. I did encounter a couple of basic bugs, which Addy’s aware of and will fix soon. The more fundamental issue right now is combining Yeoman targets: you can only initialise with a chromeapp or an Angular app or whatever, but not several at once. Addy well understands this problem, and the next version of Yeoman will allow initialising with several targets simultaneously. So you could scaffold a Chrome App powered by all of Angular, Bootstrap, and Testacular, for example.

Debug Process: Most Improved

Chrome app development is much easier than it was a couple of years ago, thanks to some small but powerful additions. Instead of hacking URLs to find the background page, there’s now a direct link. Instead of disabling and re-enabling the extension, there’s now a reload button. And so on. It still needs more basic UX work (put a developer in the UX lab and watch them debugging for an hour), but it’s much better than it used to be.

Content Security Policy

This isn’t specific to Chrome Apps, but it’s closely related from an end-developer’s perspective. Under the new CSP setup, apps are much more restricted in their use of dynamic constructs like eval. The net effect is that a number of libraries don’t work in Chrome apps, e.g. jQuery templating and Underscore templating; I found lodash’s default build doesn’t work either. My main feedback to the Chrome team here would be to work with those libraries’ developers on CSP-friendly builds, and to document which libraries work today.
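
For example, anything that compiles a template from a string falls foul of this, because it relies on new Function under the covers:

// Blocked under the app CSP: Underscore builds the template function
// from a string, which counts as eval.
var greet = _.template('Hello <%= name %>');

Precompiling templates at build time, so only the compiled function ships, is the usual workaround.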

Window-Background Page Interaction Model: Still Confusing

The background page is the same background-page concept used in Chrome extensions: an invisible HTML page for your app that’s always present while the browser is running. It’s the thing that does basic polling and crunching behind the scenes.

As with Chrome apps of yore, there is still a fundamental Something’s Not Quite Right tension with background pages. The interaction model is rather complex for what is basic communication between the background page and the UI windows, with never a true sense of which component, if any, is in charge. Again, it feels like some good old-fashioned UX research would solve this: what are devs actually trying to do? There really needs to be a better library/API and documented patterns to guide developers on the basic communication model between the background page and the rest of the app (not just UI windows, but also things like context menus and content scripts, when we’re talking about extensions). I cobbled together a little pub-sub type approach which feels like the right way to do it, but it shouldn’t be something an app developer needs to even think about.
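
For the record, the shape of that pub-sub glue was something like this (not the actual code; names invented):

// In the background page: a tiny shared hub.
var topics = {};
function subscribe(topic, fn) {
  (topics[topic] = topics[topic] || []).push(fn);
}
function publish(topic, data) {
  (topics[topic] || []).forEach(function(fn) { fn(data); });
}

// In a UI window: reach the hub via the background page.
chrome.runtime.getBackgroundPage(function(bg) {
  bg.subscribe('feed:updated', function(feed) { render(feed); });
});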

Standardisation or No?

I’ve said this before, but fragmentation is a huge threat to the web right now, because the miserable alternative seems to be trailing years behind native apps while debating standards. In six months we’ll have a perfect litmus test, because Firefox OS will by then be in production, using similar native-like APIs to Chrome’s. The question is: will they be the same APIs or not? I haven’t seen much evidence that these deep-impact APIs, like background processing and notifications, are being standardised, but I’m only a casual observer. What I can say is that if they’re not standardised effectively, then efforts like Chrome apps and Firefox OS won’t make much of a dent in the onslaught of native platforms. The best I could hope for in that case, as a pragmatic developer ultimately motivated by the needs of end-users, is tighter integration between Chrome and Android.

Conclusion

So, yesterday’s workshop left open a lot of questions about project direction, but also gave a lot of hope for the HTML5 and Chrome platforms. Much depends on how well developer feedback is taken on board, and judging by what we saw yesterday, devrel is certainly doing its part to make that happen.