Server-side rendering of Single Page Apps

In the simplest case, a Single Page App is merely an empty HTML body with JavaScript and template elements used to bring the page to life.

Web developers have begun to reconsider this starting point for SPAs. Even if an empty body tag is digestible by Googlebot and acceptable to screen readers, there’s a performance problem. The quintessential case study is Twitter, who found it’s not such a good idea to send and run a megabyte of scripts just to view 140 characters. They returned to server-side rendering in 2012 to improve their “Time to first tweet” metric.

Server-side rendering

One approach is what Airbnb famously calls the Holy Grail: running the same JavaScript codebase on both the client and a NodeJS server. Along those lines, EmberJS is working on FastBoot, a way to render on the server, and Tom Dale has written about it.

But what if you don’t have, or don’t want to have, your server-side code base in JavaScript? You could still separate out a web-tier microservice (it’s the future!) in JavaScript. If you don’t want to do that, you could pre-render every page using a headless browser and build each one as a static HTML file. That has the advantage of being super-fast, but requires a bunch of infrastructure.

An alternative approach I’m exploring

I want to keep the solution lightweight, without running Node on the server or pre-rendering millions of pages, so my plan for the Player FM website is a variant of the old “Dynamic Placeholder” approach, in which the initial page is served with “holes” and the client subsequently makes requests to populate them. Instead of serving pages with holes, though, we serve the entire (possibly stale) page and have the client refresh the dynamic content blocks as unobtrusively as possible.

It goes like this:

  • Serve pages as static assets cached for an hour or so (the length will perhaps depend on the anticipated update frequency).
  • Dynamic sections in the page will use a data attribute to keep track of the timestamp of their dynamic content.
  • A script tag (at the top of the page) will request the latest timestamp for each dynamic unit.
  • If any dynamic block has changed, its new content will be requested. This request will include a timestamp parameter in the URL, so the block response can be long-cached and should return quickly.
  • To avoid a Flash Of Unwanted Text (FOUT), the page content won’t be rendered until the initial freshness check has returned, or until a timeout of a few hundred milliseconds has passed; in the latter case, the stale content will be rendered anyway, along with a progress indicator, until the freshness response arrives and we can deal with it. (A rough sketch of this flow follows the list.)
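
To make the flow concrete, here’s a minimal client-side sketch. The endpoint paths (/freshness, /blocks/…), attribute names, timeout value, and helper functions are all hypothetical, not part of any existing framework; a real implementation would also need error handling and PJAX integration.

    // Minimal sketch of the freshness check (hypothetical endpoints and attribute names).
    var TIMEOUT_MS = 300; // render whatever we have after this long, fresh or not

    function showContent() { document.body.classList.remove('pending'); }
    function showStaleContentWithSpinner() { document.body.classList.add('refreshing'); showContent(); }

    function refreshDynamicBlocks() {
      var blocks = document.querySelectorAll('[data-dynamic][data-timestamp]');
      var ids = [].map.call(blocks, function (el) { return el.getAttribute('data-dynamic'); });
      var timer = setTimeout(showStaleContentWithSpinner, TIMEOUT_MS);

      // Ask the server for the latest timestamp of each dynamic unit.
      fetch('/freshness?blocks=' + ids.join(','))
        .then(function (res) { return res.json(); })   // e.g. { "recents": 1418275200, ... }
        .then(function (latest) {
          clearTimeout(timer);
          [].forEach.call(blocks, function (el) {
            var id = el.getAttribute('data-dynamic');
            if (latest[id] > Number(el.getAttribute('data-timestamp'))) {
              // Timestamp in the URL means the block response can be cached for a long time.
              fetch('/blocks/' + id + '?t=' + latest[id])
                .then(function (res) { return res.text(); })
                .then(function (html) {
                  el.innerHTML = html;
                  el.setAttribute('data-timestamp', latest[id]);
                });
            }
          });
          showContent();
        });
    }

The page would ship with the body in a hidden “pending” state (via CSS) and call refreshDynamicBlocks() from the script tag mentioned above.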

It’s a little convoluted, but should mostly be out of the way once the framework is established. As the site already uses a PJAX approach to loading new pages (i.e. HTML output from the server, but only for the changed content), subsequent pages could optionally be served even faster by building on this technique: in parallel to requesting the next page’s HTML, the framework can also request the relevant timestamps. This assumes we are willing to imbue the framework with upfront knowledge of each dynamic page’s structure, an increase in complexity for a further performance boost.
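
As a rough sketch of that parallel-fetch idea (again with hypothetical endpoints, and helpers such as renderPage and reconcileBlocks standing in for real framework code), navigation might look something like this:

    // Hypothetical: fetch the next page's PJAX HTML and its block timestamps in parallel.
    function loadPage(path) {
      return Promise.all([
        fetch(path, { headers: { 'X-PJAX': 'true' } }).then(function (res) { return res.text(); }),
        fetch('/freshness?page=' + encodeURIComponent(path)).then(function (res) { return res.json(); })
      ]).then(function (results) {
        renderPage(results[0]);       // swap in only the changed content
        reconcileBlocks(results[1]);  // re-request any blocks whose timestamps are newer
      });
    }

    function renderPage(html) { document.querySelector('[data-pjax-container]').innerHTML = html; }
    function reconcileBlocks(latest) { /* compare against data-timestamp attributes, as in the earlier sketch */ }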

Global ID

Rails recently introduced Global IDs. It’s a very simple but potentially very powerful concept, and one I haven’t come across before.

The format is:

gid://YourApp/Some::Model/id

i.e. a combination of an app, a type, and an ID, e.g. “Twitter User 114” or “Epicurious Recipe 8421”. It’s a nice lightweight way to standardise URL schemes without trying to go full HATEOAS. A typically Rails approach to pragmatic conventions.

A good example of its use is pushing GIDs to queueing systems. When a GID is later retrieved from the message store, it’s unambiguous how to fetch that record, and that’s exactly how the new ActiveJob works in Rails 4.2. It supports a notion of an app-specific locator, so the queueing system doesn’t have to assume all records are in MySQL or Mongo or whatever: the app tells it how to retrieve a certain kind of record with a specific ID.
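
To illustrate how lightweight the scheme is, here’s a hypothetical sketch (not Rails’ actual GlobalID API) of how a consumer pulling jobs off a queue might parse a GID and resolve it through a locator map; the model names and lookup functions are made up for illustration.

    // Hypothetical sketch: parse "gid://YourApp/Some::Model/id" and resolve it via a locator map.
    function parseGid(gid) {
      var match = /^gid:\/\/([^\/]+)\/([^\/]+)\/(.+)$/.exec(gid);
      if (!match) throw new Error('Not a valid GID: ' + gid);
      return { app: match[1], model: match[2], id: match[3] };
    }

    // The app registers one lookup function per model, so the queueing system
    // never has to know whether records live in MySQL, Mongo, or elsewhere.
    var locators = {
      'User':    function (id) { return { model: 'User', id: id };    }, // stand-in for a real DB fetch
      'Episode': function (id) { return { model: 'Episode', id: id }; }
    };

    function locate(gid) {
      var parsed = parseGid(gid);
      return locators[parsed.model](parsed.id);
    }

    // locate('gid://PlayerFM/Episode/8421') // => the Episode record with ID 8421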

What happened to Web Intents?

Paul Kinlan:

The UX … killed it, especially on desktop. We hit a huge number of problems, and the choices we made designing the API meant we couldn’t solve the UX issues without really changing the way we built the underlying API, and by the time we realized this it was too late.

A few years ago I mentioned that web intents would be one of the most profound technologies for the web. And then it disappeared. I still think it will be exactly that when it’s revived, taking into account the lessons Paul outlines here.

In fact, with the improvements slated for browsers in the near term, intents (think iOS extensibility) stand out as one of the biggest remaining gaps between native and the web.

Comparing Fire TV to Android TV

Having recently played with Amazon’s and Google’s TV devices, I believe both work great and complement each other well. There’s overlap, but each has enough unique and useful features to make a good case for going with both devices if you want access to a wide catalogue. Here’s a quick run-down.

Caveats:

  • The Android TV unit was a Google I/O giveaway; I think it’s basically a Nexus Player, though there may be some slight differences.
  • The Amazon device is a Fire TV Stick. You might say a fairer comparison is the “Fire TV”, i.e. the console and not the HDMI stick, the latter of which seems more like a Chromecast unit. But in reality, the Fire TV Stick is more like a full-blown console in a stick. It has less grunt than its console counterpart, which may affect some games, and the remote doesn’t do voice. But the core features are basically the same, as far as I can tell.
  • I’m focusing on video apps here and leaving aside games, audio, etc.

Both:

  • Make video on TV easy. It’s hugely convenient to browse directly on the TV and just hit play. (There are some interesting pros and cons of built-in TV apps compared to the Chromecast model, but I’ll ignore those here.)
  • Run third-party apps, most importantly Netflix, along with some other video apps. The apps are so far limited to specific partners, but both devices have some good ones for audio, photo slideshows, utilities, and so on.
  • Ship with good, dedicated remote controls. This is nicer than having to use your phone or a big keyboard. (I never understood how Google TV thought the keyboard remote was a good idea outside of a lab.)
  • Have a nice, fast UI. They boot quickly (not Chromebook-quickly, but not PC-slow) and respond to remote control interactions without visible lag.
  • Let you move between phone and TV, with native Android and iOS apps available for video streaming. (Amazon access on Android devices has been a problem in the past, but I found that since joining Prime and installing the Amazon app, I can play video fine on a KitKat device.)

Unique features/benefits (relative to Amazon Fire) of Android TV:

  • Cards interface and search unify content and recommendations across apps. The recommendations are actually relevant.
  • Sideload APKs. I haven’t experimented with this, but it’s possible to send apps to Android TV using Play’s web interface and some messing around. Some supposedly work well so you can use them even if there’s no dedicated TV app.
  • The YouTube app works well, and it’s really the first time I’ve spent any time “surfing” YouTube or bothering to set up subscriptions and so on. Note that YouTube is also available on Fire, though it’s a specialised web UI. That UI is surprisingly well optimised for the big-screen experience and remote, but still not as slick or performant as the native Android TV app.
  • Acts as a Chromecast, opening up the TV to a lot more apps.
  • Access to the Google Play Movies & TV catalogue.

Unique features/benefits (relative to Android TV) of Amazon Fire:

  • Amazon Prime. This is the standout feature and makes the content available as an all-you-can-eat plan along the lines of Netflix. The actual content (in the UK) I’ve found to be good too, fairly comparable to Netflix and with a lot of shows not available there. (Some of those shows may be available on Netflix US, but not Netflix UK, e.g. Mad Men.)
  • Access to Amazon’s streaming movie and TV catalogue. This is a separate point from Prime, as it’s also possible to buy some titles that are not in Prime. On-demand rentals and purchases offer the best of both worlds: the all-you-can-eat model of Netflix with the purchase model of Google Play.
  • Cheap! The Fire TV Stick is just $39 compared to $95 for the Nexus Player.
  • Portable. Similar to Chromecast, being a stick means it takes up less space and it’s easy to travel with one for instant hotel entertainment.

Overall I’m happy with both devices and looking forward to their next round of updates later this year. After a series of false starts, watching online video on the TV is finally possible without fiddling with a keyboard in the living room or running cables to the PC.

Did I miss anything? Please post comments here.

(Image: Fatty watching himself on TV. CC image by Cloudzilla.)

Feedreader Apps Redux

After a perfect storm of Twitter, Pocket, and podcasts wiped out most of my desire to use a feedreader, I’ve lately been missing it, so I looked at options.

The requirement: something that runs on Android and the web, and ideally iOS too, preferably with a river-of-news view.

Results:

Feedly

I’ve used it a bit in the past and there’s a lot to like. However, the showstopper for me is that the Android app uses the dangerous webview model for Google+ login, meaning I have no idea where my Google credentials are going. This was my biggest objection to feed reading on Android 5 years ago, so it’s disappointing it still persists when the operating system has a perfectly sane way to do third-party login.

The same applies to some of the nicer third-party Feedly client apps. They also present webviews, and in that case your password gets even more exposure.

The simplest solution for Feedly is to introduce a classic password model.

Newsblur

The web client has always been nice, somewhat like a desktop mail reader (which was one of the traditional RSS models). I also like the training capability, which is well-reviewed.

However, reviews suggest the Android client has long suffered stability problems, so I ruled it out on that basis. I’ll keep an eye out for updates, as I think it would otherwise be a fine choice.

InoReader

This is what I ended up using. The web client is very good, works offline, and uses a standard login/password combo. Each category and feed can be viewed as a river, with read/unread automatically updated (like how Bloglines ended up).

The Android app isn’t going to knock your socks off, but it does everything you’d expect it to very well. While I’d prefer a river view, it supports swiping right to move to the next post. This can of course be preferable at times, as a way to skim posts. I suppose I don’t need to skim much, as my feedreading 2.0 regimen consists of following a small number of individuals’ feeds, such that I should only be seeing 5–10 posts a day.


Don’t API All The Things … The Downside of Public APIs

Paul kicked off an interesting conversation about the pros and cons of a public API [1]. The benefits of APIs have been well cited and are certainly valid, and I would also argue in favour of public APIs in many situations. Yet startups like Uber and WhatsApp managed to grow to stratospheric heights without any public API. Which raises the question often ignored in some API-gushing circles: in the ROI equation, what are the downsides of publishing an API?

This is a deliberately one-sided post, as I wanted to gather all the downsides in one place for reference. Following are the potential cons that should be considered in any pros-versus-cons API feasibility analysis:

  • Friction and lock-in – APIs are notoriously difficult to upgrade as clients don’t want to keep updating their code. So once you publish a production-ready API, you have a long-term commitment to maintain it even as you grow and shift the underlying functionality.

  • Server load and quality of service – Of course, your servers are going to be doing more work when 50 other clients are connected, or 50x that if the API clients’ end-users are communicating directly with your API. (OTOH, your service might be unofficially scraped in the absence of an API, in ways that may be even worse because you can’t fully consider cacheability and anticipate what will be called.)

  • Need a higher-quality API – Before your API is public, you can “fake it till your API makes it”, meaning you can put logic in your clients that should really be on the server, because other priorities or implementation differences make it easier to do a quick job in the client(s) [2]. (Which is fine as long as security-critical functionality like input validation still happens on the server.)

  • Developer Experience cost – There’s no point putting out a public API unless it’s going to be adopted by developers and continuously improved in response to real-world experience. That requires good API design, documentation, developer engagement, and issue tracking. Extra work.

  • Conflict of Interest – An API invites competing apps (cough, Twitter, cough) or even apps that combine several services and thus turn you into a commodity [3]. If you’re a pure “API company”, e.g. Stripe or Twilio, that’s par for the course. If you are more of a UX/app company, it may be an issue depending on your business model. If you’re heavily ad-based and not interested in charging developers, like Twitter, then yes, it’s an issue that someone can make a good app without ads. If your model is based on charging for server-side services, e.g. Dropbox, then no, it’s not much of an issue that there are alternative Dropbox apps around [4]. The Guardian’s API is an interesting example of a hybrid model which gives the provider incentives similar to the developer’s, at the expense of extra complexity.

  • Distraction – As well as the extra effort required, the API and the surrounding developer relations add another department to the organisation, and therefore a new source of management distraction.

Most companies can and should deliver public APIs at some point and for some functionality. None of the points above argue against that, but they should give pause to anyone blindly following the 2010-era doctrine of “API All The Things”.

Notes

  1. I say “public” APIs because I take it as a given that any cloud service has a private API; that’s just a truism. Whether companies frame it as an API and benefit from the principles of public API design is another matter, but they have some API either way.

  2. For example, Player FM’s Android app does some search filtering to remove junk results because until recently, the server didn’t provide that.

  3. When Uber did release its first, very limited API, it added an exclusivity clause, meaning apps can’t also use Lyft’s and other competitors’ APIs, so it avoids direct price comparisons. TweetDeck was also on this path, with its Facebook integration, as was FriendDeck from none other than the man who inspired this post, Paul Kinlan.

  4. It’s still an issue, as some of those apps could turn around and provide a one-click conversion to Box, GDrive, etc. But that’s far less concerning than Twitter’s worry of a stunning Twitter app that doesn’t show ads.