API Pattern: Self-documenting REST Responses

Here is an example of a self-documenting REST response. Ideally, every API call would let the developer append a parameter such as help=true to get details like calling context, documentation links, usage examples, and related calls that might be made next. Those additional URLs also include help=true, so hypertext-powered API surfing stays possible. (Some of these further calls might also be part of the response itself, following the HATEOAS model.)


  {
    type: 'post',
    id: 123,
    title: 'How to make pancakes',
    ...
    # other normal response stuff
    ...
    # some calling context help
    caller: {
      type: 'user',
      id: 789,
      name: 'happyjoy',
      role: 'member'
    },
    # general docs link
    docs: 'https://example.com/docs/posts#get',
    # usage examples for this particular resource
    examples: [
      {
        url: 'https://example.com/posts/123?help=true',
        explanation: 'get post 123'
      },
      {
        url: 'https://example.com/posts/123?metadata=true&help=true',
        explanation: 'get only metadata for post 123 (omits the description)'
      }
    ],
    # calls related to this particular resource
    related: [
      {
        comments: [
          {
            url: 'https://example.com/posts/123/comments?help=true',
            explanation: 'get comments for this post'
          }
        ],
        owner: [
          {
            url: 'https://example.com/users/5555?help=true',
            explanation: 'get author details'
          }
        ],
        editor: [
          {
            url: 'https://example.com/users/8888?help=true',
            explanation: 'get editor details'
          }
        ]
      }
    ]
  }
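On the server side, the pattern can be as simple as a decorator that merges help material into the payload when the flag is present. Here is a minimal plain-Ruby sketch; the `with_help` name, the hard-coded docs URL, and the related-call list are illustrative, not part of any real API:

```ruby
# Minimal sketch: merge help material into a response payload when the
# caller passes help=true, propagating help=true onto every URL handed
# back so hypertext "API surfing" keeps working.
def with_help(payload, params)
  return payload unless params['help'] == 'true'

  payload.merge(
    docs: 'https://example.com/docs/posts#get',
    related: {
      comments: [{
        url: "https://example.com/posts/#{payload[:id]}/comments?help=true",
        explanation: 'get comments for this post'
      }]
    }
  )
end
```

In a Rails controller this would wrap the hash just before `render json:`, so the normal response stays untouched unless help is requested.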

Lightweight vs heavyweight frameworks. Or, “which kneecap do I want to be shot in?”

Very sensible commentary on software frameworks and the dichotomous debates that afflict them.

“”” Using a lightweight, comprehensible framework is good, until you hit the limits of that framework and start pulling in lots more libraries to fill the gaps (the Sinatra/Cuba world). Using a heavyweight, complete library is good, until you start suffering from bugs caused by incomprehensible magic buried deep inside (the Rails world).

The underlying problem isn’t fashion, or bloat. It’s that software is very, very complex. Unless you’re doing hardcore embedded work in assembly language, you’re building a raft on an ocean of code you won’t read and can’t understand.

A friend of mine put it well once. He said that you should have deep understanding of systems one layer from yours (ie your frameworks), and at least a shallow understanding of things two or three layers away (operating systems, etc). “””

The last comment is similar to what I learned from the wisdom of Spolsky: Take control of one level below your primary level of abstraction.


Work-sample tests during interviews


“”” (W)e know — via copious academic studies — that work-sample tests are the best available method of predicting performance. Many companies in the software industry do not administer work-sample tests during job interviews. Instead, they have a disinterested person who you won’t work with make up a random question on the spot (seriously, this is not only a thing that exists in the world, it is the default hiring method in our industry). The disinterested engineer interviewing you then spends most of their time preening about their own intelligence while ignoring your answer, and returns a decision known to be primarily determined by demeanor, rapport, demographic similarity, and other things which all decisionmakers will profess that they are not assessing for. “””

Speeding up Rails asset loading in development: Tips and Gotchas

Rails can be so productive, but one big exception is asset serving in development. Loading HTML, scripts, stylesheets, images, fonts, and so on can be slow, sometimes 10+ seconds per page when things go wrong.

Here are some tricks and gotchas to help improve asset speed in development. I’ve learned each of them the hard way, after messing around with settings in a rush to get things working.

Ensure caching is on

  # config/environments/development.rb:
  config.action_controller.perform_caching = true

Assets may compile slowly, but at least make them compile slowly only once, not on every request. That only happens if caching is on.
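The compile-once idea in miniature: an expensive step wrapped in a cache fetch runs only on a miss. `Rails.cache.fetch` behaves this way; this plain-Ruby sketch uses a Hash stand-in so it runs anywhere:

```ruby
# Hash#fetch with a block mimics Rails.cache.fetch: the block (the slow
# compile, here a stand-in string) runs only when the key is missing.
def fetch_compiled(cache, name)
  cache.fetch(name) do
    cache[name] = "compiled:#{name}" # slow work happens only on a miss
  end
end
```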

Ensure the configured cache is working/running

Continuing the previous point, make sure caching is actually working. I normally use memcached via the dalli gem, so I have a config line like this:

  # config/application.rb:
  config.cache_store = :dalli_store, 'localhost:11211', { namespace: 'player', pool_size: 10 }

The important thing is to make sure memcached is running. I’ve sometimes left it off deliberately to guarantee the cache is busted on each request, then forgotten it’s off and wondered why pages load so slowly.

If you’re using the default file cache, make sure it’s writeable by the Rails process and that there’s free disk space. Check that files are actually being created.

Ensure browser caching is on

In a tool like Chrome devtools, it’s easy to flip HTTP caching on and off. With HTTP caching on – the default for browsers and their normal users – the browser sends conditional requests with If-Modified-Since (and If-None-Match, for ETags), and the server can answer 304 Not Modified instead of resending the asset. Rails assets serve faster with those in play, and it’s a better simulation of the production environment too. You will sometimes need to turn caching off to test changes; just be aware that this setting can substantially speed up asset serving, and developers probably turn it off too readily.
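The conditional handshake looks roughly like this, sketched in plain Ruby (`response_for` is illustrative; real servers also consult ETags):

```ruby
require 'time'

# If the asset hasn't changed since the browser's If-Modified-Since
# timestamp, the server can answer 304 Not Modified with no body;
# otherwise it sends the asset and a fresh Last-Modified header.
def response_for(asset_mtime, if_modified_since)
  if if_modified_since && asset_mtime <= Time.httpdate(if_modified_since)
    { status: 304 }
  else
    { status: 200, last_modified: asset_mtime.httpdate }
  end
end
```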

Turn debug off

  # config/environments/development.rb:
  config.assets.debug = false

This one’s another trade-off. It will concatenate your assets, which usually means faster load times. For example, if you have an application.js with //= require book and //= require author, you’ll end up with a single file containing both. The downside: I’ve not been able to get CoffeeScript/Sass source mappings working under those conditions, so it makes debugging harder.

Inject styles and scripts dynamically

Web pages can easily update stylesheets and scripts just by adding a style or script tag. This is super-helpful during development because it means you don’t have to serve a whole new page from the server if you are just messing with styles or scripts. I use a keyboard shortcut to automatically refresh the stylesheet with a cache-busted update. (It could also be more fine-grained if debug is turned off).


  # Press 's' to re-fetch the stylesheet with a cache-busting random param
  U.reloadStylesheets = ->
    showDialog 'Loading stylesheet'
    $('link[href^="/assets/application.css?body=1"]').remove()
    $("<link rel='stylesheet' type='text/css' href='/assets/application.css?body=1&#{Math.floor(1e6*Math.random())}' />").appendTo('body')

  Mousetrap.bind 's', U.reloadStylesheets

There’s a more sophisticated, more automated approach to injection here.


Consider Libsass

Libsass is a fast rewrite of Sass in C. This makes every programmer happy except Rubyists, who may feel bittersweet about Ruby Sass being obsoleted. Still, it’s happening, and there is indeed a Ruby binding for it which should be much faster than the pure-Ruby Sass. I’m talking 10x faster, potentially.

The main downside right now is compatibility: not all features have been ported, and my understanding is that not all of Compass works yet, though I’ve also seen a report that Bourbon is now fully compatible, which is exciting progress. I do think the benefits are too great to pass up, and eventually Libsass will be The One for all things Sass.

So the advice here is to consider compiling with Libsass instead of Sass. This is easier if you’re starting a new project from scratch. I haven’t done this myself, though I noticed a while back the Guardian did.
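In a Rails app the switch is roughly a Gemfile change. Gem names here are from memory; check the current releases before relying on this:

```ruby
# Gemfile: replace the pure-Ruby Sass compiler with the libsass binding
# gem 'sass-rails'   # out
gem 'sassc-rails'    # in: compiles stylesheets via libsass
```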

Avoid external dependencies

If you have scripts such as analytics or widgets, take steps to either not load them during development or defer loading so they don’t block anything. (Good advice for production anyway.) The last thing you want is a slow Twitter widget stopping your assets from even beginning to compile.
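One way to do this is a small helper that skips the tag in development and defers it elsewhere. This is a generic sketch (`third_party_script_tag` is made up; adapt it to your view layer):

```ruby
# Skip third-party scripts entirely in development, and mark them
# `defer` elsewhere so they never block page load.
def third_party_script_tag(src, env:)
  return '' if env == 'development'
  %(<script src="#{src}" defer></script>)
end
```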

Consider parallel loading

Using a server like Unicorn, you can serve requests from several worker processes in parallel. This is another big trade-off: you’ll have trouble debugging and reading log files (though you can also separate log files per process).
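With Unicorn, parallelism is just a worker count in its config file. A minimal sketch; the numbers are arbitrary:

```ruby
# config/unicorn.rb: multiple workers let the browser fetch several
# assets at once instead of queueing behind a single process
worker_processes 3
timeout 60
```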

Consider precompilation

You can, in some cases, precompile like you would in production. This is useful if you are writing and testing only back-end updates. However, much of the time you should hopefully be writing tests for those and not actually testing in the browser much, in which case precompilation won’t be so useful after all.

Understand the fundamental asset model

Read this and this to really understand what’s going on. There are a lot of quick fixes around (such as those here), but it can all seem like magic, and problems will linger, if you don’t understand the underlying model.