API Pattern: Self-documenting REST Responses

Here is an example of a self-documenting REST response. Ideally, every API call should let the developer append a param like help=true to get details such as calling context, documentation links, examples, and related/further calls that might be made. Of course, those additional URLs also include help=true, so that hypertext-powered API browser surfing is possible. (Further calls might also be part of the response in some cases, following the HATEOAS model.)

javascript
  {
    type: 'post',
    id: 123,
    title: 'How to make pancakes',
    ...
    // other normal response stuff
    ...
    // some calling context help
    caller: {
      type: 'user',
      id: 789,
      name: 'happyjoy',
      role: 'member'
    },
    // general docs link
    docs: 'https://example.com/docs/posts#get',
    // usage examples for this particular resource
    examples: [
      {
        url: 'https://example.com/posts/123?help=true',
        explanation: 'get post 123'
      },
      {
        url: 'https://example.com/posts/123?metadata=true&help=true',
        explanation: 'get only metadata for post 123 (omits the description)'
      }
    ],
    // calls related to this particular resource
    related: [
      {
        comments: [
          {
            url: 'https://example.com/posts/123/comments?help=true',
            explanation: 'get comments for this post'
          }
        ],
        owner: [
          {
            url: 'https://example.com/users/5555?help=true',
            explanation: 'get author details'
          }
        ],
        editor: [
          {
            url: 'https://example.com/users/8888?help=true',
            explanation: 'get editor details'
          }
        ]
      }
    ]
  }

Lightweight vs heavyweight frameworks. Or, “which kneecap do I want to be shot in”

Very sensible commentary on software frameworks and the dichotomous debates that afflict them.

“”” Using a lightweight, comprehensible framework is good, until you hit the limits of that framework and start pulling in lots more libraries to fill the gaps (the Sinatra/Cuba world). Using a heavyweight, complete library is good, until you start suffering from bugs caused by incomprehensible magic buried deep inside (the Rails world).

The underlying problem isn’t fashion, or bloat. It’s that software is very, very complex. Unless you’re doing hardcore embedded work in assembly language, you’re building a raft on an ocean of code you won’t read and can’t understand.

A friend of mine put it well once. He said that you should have deep understanding of systems one layer from yours (ie your frameworks), and at least a shallow understanding of things two or three layers away (operating systems, etc). “””

The last comment is similar to what I learned from the wisdom of Spolsky: Take control of one level below your primary level of abstraction.

https://news.ycombinator.com/item?id=9347318

Work-sample tests during interviews

Patio11:

“”” (W)e know — via copious academic studies — that work-sample tests are the best available method of predicting performance. Many companies in the software industry do not administer work-sample tests during job interviews. Instead, they have a disinterested person who you won’t work with make up a random question on the spot (seriously, this is not only a thing that exists in the world, it is the default hiring method in our industry). The disinterested engineer interviewing you then spends most of their time preening about their own intelligence while ignoring your answer, and returns a decision known to be primarily determined by demeanor, rapport, demographic similarity, and other things which all decisionmakers will profess that they are not assessing for. “””

Speeding up Rails asset loading in development: Tips and Gotchas

Rails can be so productive, but one big exception is asset serving in development. Loading HTML, scripts, stylesheets, images, fonts, etc. can be slow, sometimes 10+ seconds per page if things go wrong.

Here are some tricks and gotchas to help improve asset speed in development. I’ve learned each of them the hard way, after messing around with settings in a rush to get things working.

Ensure caching is on

  # config/environments/development.rb:
  config.action_controller.perform_caching = true

Assets may compile slowly, but at least make them compile only once, not on every request. To ensure compiled assets are cached, make sure caching is on.

Ensure the configured cache is working/running

Continuing the previous point, make sure caching is actually working. I normally use memcached via the dalli gem, so I have a config line like this:

  # config/application.rb:
  config.cache_store = :dalli_store, 11211, { namespace: 'player', pool_size: 10 }

The important thing is to make sure memcached is running. I’ve sometimes left it off deliberately to guarantee the cache is busted on each request, then forgotten it’s off and wondered why pages load so slowly.

If you’re using the default file cache, make sure it’s writable by the Rails process and there’s free disk space. Check that files are being created.
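Either way, a quick sanity check from the Rails console will confirm the configured store is actually persisting values (a minimal sketch):

  # rails console
  Rails.cache.write('cache-check', 'ok')   # => true if the store accepted the write
  Rails.cache.read('cache-check')          # => "ok" if the store is reachable, nil if not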

Ensure browser caching is on

In a tool like Chrome devtools, it’s easy to flip HTTP caching on and off. With HTTP caching on – the default for browsers and their normal users – requests include conditional headers like If-Modified-Since, so unchanged assets can be answered with a quick 304 rather than re-served. That substantially speeds up asset serving and is a better simulation of the production environment too. You will sometimes need to turn caching off to test changes; just be aware that developers probably do so too readily.

Turn debug off

  # config/environments/development.rb:
  config.assets.debug = false

This one’s another trade-off. It will munge your assets together, which usually means faster load times. For example, if you have an application.js with //= require book and //= require author, you’ll end up with a single file containing both. But I’ve not been able to get Coffee/Sass mappings working under those conditions, so it makes debugging harder.

Inject styles and scripts dynamically

Web pages can easily update stylesheets and scripts just by adding a style or script tag. This is super-helpful during development because it means you don’t have to serve a whole new page from the server if you are just messing with styles or scripts. I use a keyboard shortcut to automatically refresh the stylesheet with a cache-busted update. (It could also be more fine-grained if debug is turned off).

coffeescript
  U.reloadStylesheets = ->
    showDialog 'Loading stylesheet'
    $('link[href^="/assets/application.css?body=1"]').remove()
    $("<link rel='stylesheet' type='text/css' href='/assets/application.css?body=1&#{Math.floor(1e6 * Math.random())}' />").appendTo('body')

  Mousetrap.bind 's', U.reloadStylesheets

There’s a more sophisticated, more automated, approach to injection here.

Libsass

Libsass is a fast rewrite of Sass in C. This makes every programmer happy except Rubyists, who may feel bittersweet about Ruby Sass being obsoleted. Still, it’s happening, and there is indeed a Ruby binding for it, which should be much faster than the pure Ruby Sass – potentially 10x faster.

The main downside right now is compatibility. My understanding is that not all features have been ported and not all of Compass currently works, though I’ve also seen a report that Bourbon is now fully compatible, which is exciting progress. I do think the benefits are too great to ignore, and eventually Libsass will be The One for all things Sass.

So the advice here is to consider compiling with Libsass instead of Sass – easier if you are starting a new project from scratch. I haven’t done this myself, though I noticed a while back that the Guardian did.
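For Ruby projects, the sassc gem wraps Libsass; compiling a stylesheet looks roughly like this (a sketch, assuming the sassc gem’s Engine API):

  # Gemfile: gem 'sassc'
  require 'sassc'

  # Compile with Libsass instead of the pure-Ruby Sass engine
  css = SassC::Engine.new(File.read('application.scss'), syntax: :scss).render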

Avoid external dependencies

If you have scripts such as analytics or widgets, take steps to either not load them during development or defer loading so they don’t block anything. (Good advice for production anyway.) The last thing you want is a slow Twitter widget stopping your assets from even beginning to compile.
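One lightweight way to do that in Rails is to gate the tags on the environment; a hypothetical sketch (the helper name is my own):

  # app/helpers/application_helper.rb (hypothetical)
  # Views wrap analytics/widget tags in `if third_party_scripts?`,
  # so those tags are simply never emitted in development.
  def third_party_scripts?
    Rails.env.production?
  end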

Consider parallel loading

Using a server like unicorn, you can serve requests with multiple worker processes in parallel. This is another big trade-off: you’ll have trouble debugging and reading log files (though you can also separate log files per process).
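With unicorn, for instance, the worker count is a one-line change (a sketch; tune the count to your machine):

  # config/unicorn.rb
  worker_processes 4   # serve several asset requests in parallel
  timeout 60           # allow for slow first-time compiles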

Consider precompilation

You can, in some cases, precompile assets just as you would in production. This is useful if you are writing and testing only back-end updates. However, a lot of the time you should hopefully be writing tests for those and not actually testing in the browser much, in which case precompilation won’t be so useful after all.
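If you do go this route, run rake assets:precompile and point development at the compiled output (a sketch; exact flag names vary a little between Rails versions):

  # config/environments/development.rb
  config.assets.compile = false   # don't fall back to live compilation
  config.assets.digest = true     # serve the fingerprinted, precompiled files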

Understand the fundamental asset model

Read this and this to really understand what’s going on. There are a lot of quick fixes around (such as those herein), but it can all seem like magic and still leave problems if you don’t follow the underlying model.

Power Chrome

Here are some random handy tips for Chrome power users. [Alternative Buzzworthy title: “Each of these shortcuts could save your life one day”.] These are specifically not devtools-related; just features developers (and others) will benefit from.

about:about As you know, Chrome has a lot of nice diagnosis and config screens, but who can remember them all? Good news: you don’t have to enter “chrome memory”, “chrome dns”, etc. into Google every time. Just remember one URL – about:about – and you’ll always have the full list at your fingertips.

File Menu > Warn before quitting Come on, how many times has your finger veered a bee’s hair from cmd-w to cmd-q? You thought you were shutting down Hacker News and instead you blasted 60 tabs. The implementation of this quit warning is smart too – you just have to keep pressing cmd-q. There’s no annoying “Did you really mean to …” dialog.

Multiple profiles Incognito mode is already a developer’s best friend – it allows you to check how your site looks to a new user, free of extension interference and existing logins. Multiple profiles extend this, letting you jump between profiles all day long. It’s vital if you have to manage multiple Google accounts, Twitter accounts, etc., and even more so if you log in to other sites with those. (Chrome recently botched the new UI for this, but for now at least you can keep the original interface by setting chrome://flags#enable-new-profile-management to disabled.)

Control Freak If you need to tweak pages to your convenience, try the Control Freak extension – it’s super-fast for adding CSS or JS rules to any page (much quicker for quick tweaks than Greasemonkey, imo). Disclaimer: I originally wrote it; I’ve since passed it on as I couldn’t maintain it, but I still find it useful.

Pin tabs Get into the habit of pinning tabs for reference material you’re frequently coming back to and sites you’re testing (e.g. your localhost).

Open email in your browser Make sure you’ve configured Chrome so that Gmail et al may request to act as protocol handlers.

Server-side rendering of Single Page Apps

In the simplest case, a Single Page App is merely an empty HTML body with JavaScript and template elements used to bring the page to life.

Web developers have begun to re-consider this starting point for SPAs. Even if an empty body tag is digestible by Googlebot and acceptable to screen-readers, there’s a performance problem. The quintessential case study is Twitter, who found it’s not such a good idea to send and run a megabyte of scripts just to view 140 characters. They returned to server-side rendering in 2012 to improve their “Time to first tweet” metric.

Server-side rendering

One approach is what AirBNB famously calls the Holy Grail: running the same NodeJS codebase on both client and server. Along those lines, EmberJS is working on FastBoot, a way to render on the server, and Tom Dale has written about it.

But what if you don’t have, or don’t want to have, your server-side code base in JavaScript? You could still separate out a web-tier microservice (it’s the future!) in JavaScript. If you don’t want to do that, you could pre-render every page using a headless browser and build it as a static HTML file. That has the advantage of being super-fast, but requires a bunch of infrastructure.

An alternative approach I’m exploring

I want to keep my solution lightweight, without running Node on the server or pre-rendering millions of pages. So my plan for the Player FM website is a variant of the old “Dynamic Placeholder” approach, where the initial page is served with “holes” in it and the client subsequently makes requests to populate the holes. Instead of serving pages with holes, though, we can serve the entire page and have the client refresh dynamic content blocks in a way that is as unobtrusive as possible.

It goes like this:

  • Serve pages as static assets cached for an hour or so (the length will perhaps depend on the anticipated update frequency).
  • Dynamic sections in the page will use a data attribute to keep track of timestamps for dynamic content.
  • A script tag (at the top of the page) will request the latest timestamp for each dynamic unit.
  • If any dynamic block has changed, its new content will be requested. This request will include a timestamp property in the URL, so that the block may be long-cached and should then return quickly.
  • To avoid a Flash Of Unwanted Text (FOUT), the page content won’t be rendered until the initial freshness check has returned, up to a timeout of a few hundred milliseconds. Past the timeout, the content will be rendered anyway, with a progress indicator, until the freshness response arrives and can be dealt with.

It’s a little convoluted, but should mostly be out of the way once the framework is established. As the site already uses a PJAX approach to loading new pages (i.e. HTML output from the server, but only for the changed content), subsequent pages could optionally be served even faster by building on this technique: in parallel to requesting the next page’s HTML, the framework can also request the relevant timestamps. This assumes we are willing to imbue the framework with upfront details of each dynamic page’s structure – an increase in complexity for a further performance boost.
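On the server side, the freshness check described above can be a single small endpoint mapping block names to their latest timestamps. A hypothetical Rails sketch (the route, block names, and lookup are all illustrative):

  # app/controllers/freshness_controller.rb (hypothetical)
  class FreshnessController < ApplicationController
    # GET /freshness?blocks=episodes,comments
    # => {"episodes":1430000000,"comments":1430000100}
    def show
      names = params[:blocks].to_s.split(',')
      stamps = names.map { |name| [name, latest_timestamp_for(name)] }.to_h
      render json: stamps
    end

    private

    # Illustrative lookup; real code would consult the relevant models or caches.
    def latest_timestamp_for(name)
      Rails.cache.read("freshness/#{name}").to_i
    end
  end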

Global ID

Rails recently introduced Global IDs. It’s a very simple but potentially very powerful concept, and one I haven’t come across before.

The format is:

gid://YourApp/Some::Model/id

i.e. a combination of an app, a type, and an ID – e.g. “Twitter User 114” or “Epicurious Recipe 8421”. It’s a nice lightweight way to standardise URL schemes without trying to go full-HATEOAS. A typically Rails approach to pragmatic conventions.

A good example of using it is pushing GIDs to queueing systems. When a GID is later retrieved from the message store, there’s no ambiguity about how to fetch that record, and that’s exactly how the new ActiveJob works in Rails 4.2. It supports a notion of an app-specific locator, so the queueing system doesn’t have to assume all records are in MySQL or Mongo or whatever; the app tells it how to retrieve a certain kind of record with a specific ID.
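In code it looks like this (the Post model and app name are illustrative; to_global_id and GlobalID::Locator ship with Rails 4.2’s globalid gem):

  post = Post.find(123)
  gid = post.to_global_id         # => #<GlobalID gid://your-app/Post/123>
  gid.to_s                        # => "gid://your-app/Post/123"

  # Later, e.g. when a job is pulled off the queue:
  GlobalID::Locator.locate('gid://your-app/Post/123')   # => the Post record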

What happened to Web Intents?

Paul Kinlan:

The UX .. killed it especially on desktop, we hit a huge number of problems and the choices we made designing the API that meant we couldn’t solve the UX issues without really changing the way we built the underlying API and by the time we realized this it was too late.

A few years ago I mentioned web intents would be one of the most profound technologies for the web. And then it disappeared. I still think it will be exactly that when it’s revived, taking into account the lessons Paul outlines here.

In fact, with the coming improvements slated for the browser, intents (aka iOS-style extensibility) stand out as one of the biggest native-web gaps of the short-term future web.

Comparing Fire TV to Android TV

Having recently played with Amazon’s and Google’s TV devices, I believe both work great and complement each other well. There’s overlap, but each also has unique and useful features, which makes a good case for going with both devices if you want access to a wide catalogue. Here’s a quick run-down.

Caveats:

  • The Android TV was a Google IO giveaway – I think it’s basically a Nexus Player, but there may be some slight differences.
  • The Amazon device is a Fire TV Stick. You might say a fairer comparison is the “Fire TV”, i.e. the console and not the HDMI stick, the latter of which seems more like a Chromecast unit. But in reality, the Fire TV Stick is more like a full-blown console in a stick. It’s less grunty than its console counterpart, which may impact some games, and the remote doesn’t do voice. But the core features are basically the same, afaict.
  • I’m focusing on video apps here and leaving aside games, audio, etc.

Both:

  • Make video on TV easy. It’s hugely convenient to browse directly on the TV and just hit play. (There are some interesting pros and cons of built-in TV apps compared to the Chromecast model, but I’ll ignore those here.)
  • Run third-party apps including, most importantly, Netflix, as well as some other video apps. The apps on both are so far limited to specific partners, but both have some good apps for audio, photo slideshows, utilities, and so on.
  • Ship with good, dedicated remote controls. This is nicer than having to use your phone or a big keyboard. (I never understood how Google TV thought the keyboard remote was a good idea outside of a lab.)
  • Have a nice, fast UI. They boot quickly (not Chromebook-quickly, but not PC-slow) and respond to remote-control interactions without visible lag.
  • Let you move between phone and TV, with native Android and iOS apps available for video streaming. (Amazon access on Android devices has been a problem in the past, but since joining Prime and installing the Amazon app, I’ve found I can play video fine on a KitKat device.)

Unique features/benefits (relative to Amazon Fire) of Android TV:

  • Cards interface and search unify content and recommendations across apps. The recommendations are actually valid.
  • Sideload APKs. I haven’t experimented with this, but it’s possible to send apps to Android TV using Play’s web interface and some messing around. Some supposedly work well so you can use them even if there’s no dedicated TV app.
  • The YouTube app works well, and it’s really the first time I’ve spent any time “surfing” YouTube or bothering to set up subscriptions and so on. Note that YouTube is also available on Fire, though as a specialised web UI. That UI is surprisingly well optimised for the big-screen experience and the remote, but still not as slick or performant as the native Android TV app.
  • Acts as a Chromecast, opening up the TV to a lot more apps.
  • Access to Play TV and Movies catalogue.

Unique features/benefits (relative to Android TV) of Amazon Fire:

  • Amazon Prime. This is the standout feature; it makes the content an all-you-can-eat rental plan along the lines of Netflix. The actual content (in the UK) I’ve found to be good too, fairly comparable to Netflix and with a lot of shows not available there. (Some of those shows may be available on Netflix US, but not Netflix UK, e.g. Mad Men.)
  • Access to Amazon’s streaming movie and TV catalogue. This is a separate point from Prime, as it’s also possible to buy some titles that are not in Prime. On-demand rentals and purchases are the best of both worlds – the all-you-can-eat model of Netflix with the purchase model of Google Play.
  • Cheap! The Fire TV Stick is just $39 compared to $95 for the Nexus Player.
  • Portable. Similar to Chromecast, being a stick means it takes up less space and it’s easy to travel with one for instant hotel entertainment.

Overall I’m happy with both devices and looking forward to their next round of updates later this year. After a series of false starts, TV is finally possible online without fiddling on a keyboard in the living room and running cables to the PC.

Did I miss anything? Please post comments here.

Fatty watching himself on TV

(CC image by Cloudzilla)