Discovering Users’ Social Path with the New Google+ API

Google announced a slew of identity and social updates today; the most exciting is the ability to browse users’ social paths. This comes after similar services recently blocked some folks from doing the same, which tells you Google gave it due consideration and is committed to supporting the feature indefinitely.

Here’s how the authentication looks:

Now there’s a whole set of widgets and JavaScript APIs, but I was interested in the regular scenario for apps already using the “traditional” OAuth 2 dance. After asking on the G+ APIs community, I was able to get this running and I’ll explain how below.

Step 1. Visit the API doc: https://developers.google.com/+/api/latest/people/list

Step 2. Scroll to the interactive part below and turn on OAuth 2.0 on the top-right switch.

Step 3. To the default scope, add a new one: https://www.googleapis.com/auth/plus.login. That’s the magic scope that lets your app pull in social graphs.

Step 4. For userId, enter “me”. For collection, enter “visible”. (The collection parameter, which represents the circles/people the user has chosen to identify to the app, currently has only that one value.)

Step 5. Now hit execute and (as a test user) you’ll see the dialog shown at the top of this article. Then hit accept.

Step 6. I got a confirmation dialog saying “Clicking Confirm will let Google APIs Explorer know who is in your circles (but not the circle names). This includes some circles that are not public on your profile.” This is surprising, as I believe circles are always private (for now), so I guess users will always see that caveat. Accept it.

Step 7. The JSON response will now be shown below the form. It includes a top-level field called “items”, which is the list of your (the authenticated user’s) G+ people. If the list is too long, there will also be a “nextPageToken” field so the app can page through the list.

So that’s an overview of the new G+ social API. It’s a straightforward OAuth implementation and should be easy for anyone with a Google login to adopt. I’ve been looking forward to adding this functionality on Player FM so people can see what their friends are listening to … I think it’s a nice model where users can choose how much of their social graph they share with any app.

Social Delegation

Social may have been a buzzword for several years, but it’s actually very primitive from a developer’s perspective. As an app developer, all I can really do is let users log in and inspect their properties. Here’s what I actually want to do …

Delegate The Karma

Points (StackOverflow, Reddit), aka Karma (Slashdot, Hacker News), are a neat social pattern. There’s a gamification aspect to it obviously, but what I’m especially interested in is the permissioning aspect. As you progress up the ladder, you get more and more permissions, to the point where you’re pretty well part of the team. This is a more elegant permissioning model than specific roles such as “guest”, “contributor”, and “admin”.
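The ladder idea above can be sketched in a few lines: a single karma number unlocks everything below the user’s rung, replacing a pile of role flags. The threshold values here are made up for illustration:

```javascript
// Ladder-style permissioning: each rung unlocks everything below it.
// Thresholds are illustrative, not taken from any real site.
const LADDER = [
  [0, 'comment'],
  [50, 'vote'],
  [500, 'edit'],
  [2000, 'moderate'],
];

function permissionsFor(karma) {
  return LADDER.filter(([min]) => karma >= min).map(([, perm]) => perm);
}
```

With roles, adding a capability means touching every role definition; with a ladder, it’s one new rung.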

The problem for users, though, is they’re always back to square one. They can’t bring their reputation with them, other than a token link to Somewhere Else On The Web You Probably Won’t Visit, which rather defeats one of the original points of federated identity. And from the app developer’s perspective, there are two further problems. Firstly, it damn well takes time to build all this! It’s not 1999 and I don’t have an army of code monkeys to bang out anything on a whim, and another army to monitor funny business. Secondly, the chicken-and-egg problem: for a site in its tender youth, there’s not always enough activity for users to build karma, and certainly not enough to support the outer loop of meta-moderation.

Well, one lean approach would be to just do that which doesn’t scale. Namely, give your mates karma, and anyone else you’re talking to who seems half-savvy. That’s perhaps the best thing you can do right now. But a better approach would be to delegate karma to a third party. What would such a third party look like? It would need some quantifiable measure of karma, an API, and really some notion of exactly what this user is good at. No point letting users run wild on your pistachio content if they’re specialists at chocolate.

The service already exists: Klout. So do PeerIndex, Kred, and very likely Google+ and Facebook under their respective covers. Klout is just PageRank for people, after all, with all of the gaming and spam detection that comes with it. (These services are mostly vilified by people who confuse flaws in the current implementations with the principle behind them, and by people who haven’t dealt with the considerable classes of problems they solve for enterprises. Let’s just assume they are generally reasonable services for now.)

So a solution is to keep your users’ Klout up to date. Sync all users’ Klout weekly and permission them accordingly. As your system grows, you may also build your own karma system up. Or even better, contribute your users’ karma – with their consent – back into the Klout pool.
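The weekly sync above is just a loop over your user table with a tolerance for provider outages. In this sketch, `fetchScore` and `saveScore` are hypothetical stand-ins: the real Klout API’s endpoints and response shapes aren’t assumed here:

```javascript
// Weekly karma sync: pull each user's external reputation score and
// store it locally. fetchScore stands in for whatever the reputation
// provider's API offers; saveScore writes to your own user table.
async function syncKarma(users, fetchScore, saveScore) {
  for (const user of users) {
    try {
      const score = await fetchScore(user.twitterHandle);
      await saveScore(user.id, score);
    } catch (e) {
      // A provider outage shouldn't wedge the whole run; keep the
      // last synced score for this user and move on.
    }
  }
}
```

The catch-and-continue matters: a delegated service will be down sometimes, and stale karma is better than a stalled sync.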

Delegate The Groups

Social? Hello? Social should surely be about individuals banding together to form groups. Google+ gets this right by baking in the notion of circles, though as I’ve often griped, Google’s notion of circles is more geeky than the average individual cares for, in much the same way Wave’s hierarchy was more complex than most people wanted to think about. It should be first and foremost about fixed groups, maintained by an admin.

I don’t want groups in my own system. I don’t want a manager to have to go around to every new system and enter the emails of their 15 direct reports. I don’t want the football captain to have to enter his 30 team mates into my system for the umpteenth time. Let the group live somewhere else and I’ll deal with it.

So a solution is to identify a platform for groups, make sure your user is the same guy or gal managing said group, and let them give permissions to that group on your own system. Sync the group regularly.
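The steps above (verify the admin, then mirror the membership) can be sketched like this. `fetchGroup` is a stand-in for whichever groups platform you pick; the field names are assumptions:

```javascript
// Delegated groups: confirm the logged-in user admins the external
// group, then grant permissions to its members locally.
async function syncGroup(localUserId, groupId, fetchGroup, grant) {
  const group = await fetchGroup(groupId);
  if (group.adminId !== localUserId) {
    throw new Error('only the group admin may link this group');
  }
  for (const memberId of group.members) {
    await grant(memberId, groupId); // grant should be idempotent
  }
  return group.members.length;
}
```

Making `grant` idempotent is what lets the regular re-sync be a dumb full pass rather than a diff.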

Delegate The Relationships

Seriously, every time I upload a slide deck to SlideShare, I get a week of “Xyz has started following you on SlideShare”. WAT? I have a social network for slides! This is unacceptable. I don’t know if they’re real people or spammers (if the former, don’t get me wrong: thanks for the interest in my slides, and now we can be slide buddies. No, wait …). But either way, I’m not going to click on their profile and Visit Their Site Somewhere Else On The Web to work out if I should invite them into my inner sanctum of people I share slides with.

Here is where Lanyrd got it right, by just making your conference network your Twitter network. In the same way, it might be useful to see what your Facebook buddies are doing too. And so on. Facebook and, I think, G+ help a bit here by incorporating friends’ info into their Like/+1 buttons, i.e. “Joe, Sue, and 7 others liked this”. You may need more than that, though, if you’re baking this right in.

So a solution is to sync social networks.
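The Lanyrd trick boils down to a set intersection: seed the app’s network with the overlap between the user’s external network and your own user base, instead of asking strangers to follow each other. A minimal sketch, with IDs as plain strings:

```javascript
// Seed an in-app network from an existing one: keep only the
// external friends who are already users of this app.
function seedNetwork(externalFriendIds, appUserIds) {
  const users = new Set(appUserIds);
  return externalFriendIds.filter(id => users.has(id));
}
```

Run it at signup and again on each regular sync, since both sides of the intersection keep changing.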

Delegate The Profiles

“Hi, I’m Michael, and this is a new profile I wrote especially for this here basketball site because my football profile was wholly inadequate.” Okay, again you see where this is going … pull that profile content in from other sites, at least as a default. The ever-parsimonious Josh Schachter plays this gambit nicely with his latest project, Skills.to. And pull in those Sites From Somewhere Else On The Internet too, so our intrepid user doesn’t have to enter their Flickr URL yet again, for the tenth year running.

A special case that bears mentioning is avatars. This particular piece of developer-experience suckage requires the developer to repeatedly poll the social API for the latest URL of the avatar, and then replicate the avatar, perhaps on S3, in order to avoid the sin of hot-linking. (It’s not actually clear if sites like Twitter allow hot-linking of avatars, by the way. Please make that clear, Twitter and friends.) A service I used on Twelebs was Joe Stump’s Tweet Imag.es, but it’s always been a kind of beta project and I’ve found it sometimes delivers blank images. And fair enough, it’s free and experimental. I’m pleased to see there’s a new service out today that may actually solve this problem: Cloudinary is an image-URL-manipulation tool, something I’ve thought about building myself for years, after once creating a gradient generation tool and, later, Faviconist.
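The poll-and-replicate chore reads something like this sketch. `fetchAvatarUrl` and `copyToStorage` are stand-ins for the social API call and the S3 upload; only re-hosting when the URL changes keeps the sync cheap:

```javascript
// Re-check the social API for the current avatar URL and re-host a
// copy only when it has changed, to avoid hot-linking.
async function syncAvatar(user, fetchAvatarUrl, copyToStorage, save) {
  const url = await fetchAvatarUrl(user.handle);
  if (url === user.lastAvatarUrl) return false; // unchanged, skip upload
  const hostedUrl = await copyToStorage(url, `avatars/${user.id}`);
  await save(user.id, { lastAvatarUrl: url, avatarUrl: hostedUrl });
  return true;
}
```

Storing the last-seen source URL alongside the hosted copy is the trick: the change check costs one API call instead of one upload.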

Bottom line is, sync your users’ profiles.

Now Don’t Make Me Think, Consarn It! This Stuff Should Be Easy

If I wasn’t focused fully on Player FM right now, I’d be building a system to make all this easy for developers. Right now, it’s a ton of work, and really, it’s all generic stuff. As you probably noticed, it’s mostly just “here’s some data, now let’s just keep it up to date”. So, you know, let me drop in a Rails gem and said gem just automatically keeps all this stuff in the database up to date “for free”. Man would I pay you real dollar bills for that. I’m hopeful services like Parse might actually do this. I’d also suggest that any platform that provides these services – Klout, Twitter, etc. – should be mobilising their developer relations to make these kinds of libraries happen. Platforms are a huge battleground right now, and the winners will have to do more than offer raw RESTful services. They’ll need to encourage frameworks and libraries that make the sync “just work”.

ColourLovers API – JSON Demos

About a year ago, I was excited to discover ColourLovers had an API. There is some great data about colours and palettes on that site, and some great possibilities for mashups – some as general-purpose art projects, and some as tools for designers.

However, I noticed there was no JSON-P interface, so there was no way to write a pure browser-based mashup. I got in touch with them, and heard back from Chris Williams, who to my pleasant surprise was willing to introduce JSON-P. And so we had some email discussion about the various options for the API interface (e.g. do you call it “jsonp” versus “callback”; do you use “real JSONP” versus just a callback function; do you hardcode the name of the callback function versus let the caller specify it as a parameter), and ultimately Chris went ahead and implemented it a short time later and published the API on January 26, 2009. Fantastic!
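For anyone who hasn’t built one, the server side of the options we discussed is tiny: “real JSONP” just wraps the JSON payload in a call to a function the caller names via a query parameter, while the hardcoded-name variant ignores that parameter. A sketch (the fallback function name here is purely illustrative):

```javascript
// Wrap a JSON payload for JSONP delivery. If the caller supplied a
// callback name (e.g. via ?callback=cb), use it; otherwise fall back
// to a fixed name, as a hardcoded-callback API would.
function jsonpWrap(data, callbackName) {
  const fn = callbackName || 'handleColourLovers'; // fallback is illustrative
  return `${fn}(${JSON.stringify(data)});`;
}
```

Letting the caller pick the name is what makes libraries like jQuery’s `$.getJSON` with a `callback=?` URL work without any coordination.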

I offered to make some demos illustrating code usage, and did around that time, but never got around to publishing them until now. I’ve released the demos here. There are a couple of jQuery demos and one raw JavaScript demo. I had fun making them and have plans to do a lot more with this API.

Digg API – Can’t Bust the Cache

It’s often a requirement for an Ajax app to “bust the cache”, i.e. call a service and ensure its response comes direct and not from a cache. For all the talk of fancy header techniques, the easiest way to do it is by appending an arbitrary parameter, typically a random number and/or a timestamp i.e. call service.com/?random=39583483. I use this pattern not just as an Ajax programmer but also as a user; when I suspect my wiki page or blog is cached, I just add an arbitrary parameter and usually get the most recent result.
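The pattern above is a one-liner in practice: append a throwaway parameter so every intermediary sees a fresh URL. A sketch that handles URLs with and without an existing query string:

```javascript
// Cache-bust a URL by appending an arbitrary, effectively unique
// parameter (timestamp plus a random suffix).
function bustCache(url) {
  const sep = url.includes('?') ? '&' : '?';
  return `${url}${sep}random=${Date.now()}_${Math.floor(Math.random() * 1e6)}`;
}
```

The parameter name doesn’t matter to well-behaved services; it exists only to make the URL unique. That tolerance is exactly what the Digg API below lacks.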

With the Digg API, that’s impossible. I just tried it and discovered you get an unrecognised-argument error if you pass in a parameter they weren’t expecting. This is well-intentioned, as it gives early failure advice to callers who mistype or misinterpret parameter names. But unfortunately, it has the major downside of breaking this convention, and thus breaking compatibility with the large number of toolkits that exploit the pattern.

I discovered this as I was writing a Google Gadget to show Digg stories and when you fetch content with the Google API _IG_FetchContent("http://digg.com/....", callbackFunc, { refreshInterval: 60 }) – the refresh option apparently leads to Google tacking on an arbitrary parameter called “cache” (it’s impossible to be sure as it’s proxied via Google, but the argument to that proxy is the Digg URL with “cache=” on the end). Net effect: I can’t update Digg stories any more than once per hour. The only way I could do it would be to write my own proxy and cache. A lot more work than the one-liner _IG_FetchContent call! Yeah, like that’s gonna happen.

For the Digg API, perhaps they need to introduce an optional “strict” parameter like a compiler offers to give extra debug info. If it’s off, then let anything through.

An API-Only Business Model?

Amazon, Yahoo, and eBay all have e-commerce APIs, right? The model is “you can create a UI, and if it fits a niche better than us, you make money”. It’s one of those odd models where you’re competing and partnering at the same time. Some other examples where this has occurred:

  • Phone companies forced by government regulations to open up to third-party DSL providers.
  • Microsoft providing APIs for competitors to use.

Hmmm. These previous examples have not been without controversy. Ultimately, a company under this model can make a bit of money, but things get antagonistic once they start to be serious competition. Does this logic apply with Amazon, Yahoo, and Ebay? I haven’t heard too much about it, but one thing’s for sure: whatever you can build with their API, they can build too, and they have the data and background knowledge to go much further.

So as pure speculation, what would happen if a startup offered a pure API-only E-Commerce platform? No web interface, their entire revenue derived from third-party clients. Maybe there already is such a thing, I don’t know. The only way they make money is if the third-party clients are making money.

This came up because I’ve been thinking about the Ajax–web services/SOA tie-in. It seems that many people are talking about creating generic web APIs and handling the entire UI in JavaScript. (I’m ignoring whether the API is REST or RPC or whatever for now – that’s a side issue.) That being the case, the browser-based Ajaxian UI becomes nothing more than one of infinitely many possible clients for the web API.