Friday, September 21, 2012
Hackday project:

I saw this tweet from @kellan that got me to thinking:

It would be useful if could describe itself in the world of protocols I know, particularly PuSH, XMPP, and SMTP.

I think the notion of a distributed social network is an interesting one, and I've been pondering how it might work. I think I've got a scheme for one, which I'll outline in this post, and I think it might not take that long to throw it together with off-the-shelf open source and a dedicated hack day.

General Principles

Use existing, open standards. This increases the likelihood of finding good open source code to use when building software, ensures that a distributed marketplace of (interoperable!) clients and servers could arise, and increases the likelihood of avoiding platform specificity. Plus, giants have pretty high shoulders. I have limited hack time; I'm not into wheel reinvention.

Central service provider not required. This is pretty fundamental; it means there's no central organization that owns/has access to everything ("creepy alert!"), but also that there's no central organization that can be subjected to lawsuits (e.g. Napster[1]).

Users have control over sharing. At least for the initial communication, users need to be able to say exactly who gets to see a piece of content. Of course, once you've shared something with someone else, there's no stopping that other person from sharing it further, but that's what social trust is all about.

Protocol Sketch

TL;DR. S/MIME, public key crypto, Atom.

To expand: everyone runs their own AtomPub server somewhere, or contracts with a service provider to do it on their behalf. Multiple PaaS and IaaS providers have a free tier of service, which is almost certainly more than enough for most users. I get whole tens of visits to my blog every day, and I am not a prolific sharer: it looks like I've averaged 1.1 tweets per day since I started using Twitter. "Roflscale" is probably enough, to be honest. Easy recoverability is probably more important than high availability for a single-user server/hosted PaaS app, so I think this could be done for less than $10 a year (most of which is probably just buying a domain name). Given the relatively low traffic, actual friends could pretty easily share infrastructure.

Now, I'm the only one who posts to that Atom feed, and anyone can poll it at any time. The trick is that the content is encrypted, and you specify who gets access to the decryption key. Basically, you generate a random (symmetric) encryption key for the content, then public-key encrypt copies of the content key for all your recipients. Put it all together in one multipart package, upload to, say, S3, and then post a link to it on your Atom feed. Of interest is that neither S3 (the storage) nor the AtomPub server care one whit about the encryption/decryption stuff.
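The envelope scheme above can be sketched in a few lines. This is a toy illustration only: the `keystream_xor` stand-in below is *not* real cryptography (a real implementation would use GnuPG or S/MIME as described), and for simplicity each recipient is wrapped with a shared secret rather than a true public key. The point is the structure: one symmetric content key, one wrapped copy per recipient, and storage that never sees anything but opaque bytes.

```python
import base64
import hashlib
import secrets


def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (SHA-256 in counter mode), standing in for
    # real symmetric/public-key encryption. Do not use for real data.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))


def make_envelope(content: bytes, recipient_keys: dict) -> dict:
    """Encrypt content under a fresh random key, then wrap a copy of
    that key for each recipient. In the real scheme each wrap would be
    a public-key (GPG/S/MIME) encryption of the content key."""
    content_key = secrets.token_bytes(32)
    return {
        "ciphertext": base64.b64encode(
            keystream_xor(content_key, content)).decode(),
        "recipients": {
            name: base64.b64encode(
                keystream_xor(key, content_key)).decode()
            for name, key in recipient_keys.items()
        },
    }


def open_envelope(env: dict, name: str, my_key: bytes) -> bytes:
    # Unwrap my copy of the content key, then decrypt the content.
    content_key = keystream_xor(
        my_key, base64.b64decode(env["recipients"][name]))
    return keystream_xor(content_key, base64.b64decode(env["ciphertext"]))
```

Note that whoever hosts the resulting package (S3, the AtomPub server) only ever handles the opaque `ciphertext` and the wrapped keys.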

Interestingly, following gets somewhat inverted. You can post content intended for someone who never subscribes to your feed (gee, I hope he notices me!). You can also subscribe to someone's feed who never encrypts stuff for you. But you can see anyone's public broadcasts, of course. Actual "friending" is just GPG key exchange and an addition of the key to appropriate keyrings to represent friend groups/circles.

On the client side, then, it's mostly about subscribing to people's Atom feeds, polling them periodically (for which HTTP caching will really help a lot), cracking open the content encryption keys with your own private key, then interleaving and displaying the content.
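The interleaving step is just a merge of per-feed entry lists into one timeline. A minimal sketch (entry dicts with `id`/`updated` fields are an assumed shape, not a standard), including de-duplication by entry id since the same entries reappear on every poll:

```python
def merge_timelines(feeds):
    """Merge several subscribed feeds into one newest-first timeline.

    feeds: a list of feeds, each a list of entry dicts with at least
    'id' (globally unique) and 'updated' (ISO 8601 timestamp) keys.
    Entries are de-duplicated by id, keeping the newest occurrence.
    """
    seen = set()
    timeline = []
    # ISO 8601 timestamps in the same zone sort correctly as strings.
    for entry in sorted((e for feed in feeds for e in feed),
                        key=lambda e: e["updated"], reverse=True):
        if entry["id"] not in seen:
            seen.add(entry["id"])
            timeline.append(entry)
    return timeline
```

A real client would decrypt each entry's content key first (skipping entries not addressed to it) and feed only the readable ones into this merge.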

That's about it. Definitely possible to layer PubSubHubbub on top to get more realtime exchange; no reason your free tier server couldn't host one of those for you too. Or perhaps just a feed aggregation proxy that would interleave all your subscriptions for you (without needing to understand who any of the recipients were, or even if you were one of the recipients!).

Hack Day Proposal

Roughly: get a simple AtomPub server stood up somewhere. Apache Abdera has a tutorial for doing exactly that, if you can implement the storage interface (MySQL, PostgreSQL, SimpleDB, or S3 would all work; it looks to be just a Map). Front it with Apache and some form of authentication module. Alternatively, grab Abdera, retrofit it onto Google App Engine for Java, and use Google's federated OpenID for login.

Client side, cobble together a UN*X command-line client out of bash/Python/Ruby/whatever, depending on what open source is out there. It'd be kind of neat to do it as a set of small commands that interleave on the command line, MH-style. Everybody's got an HTTP client library that can be made to convince the server you're you. AtomPub clients are available in Python (1,2), Java (3,4), and Ruby (5,6). GnuPG for the public key implementation. Bouncy Castle has a Java S/MIME implementation. Lots of S3 clients are out there (7,8).

I think it's entirely possible to have a simple client and server up in a day with a smallish group of folks (5-6). If there are more folks we can also try multiple clients and servers.

Tentative Project Name

Since @kellan started this off by mentioning, and this is an attempt to slap something together with off-the-shelf stuff, it felt more like a lean-to made out of a tarp than a true tent, so I've snarfed the domain name (which is the closest I could come to something that meant "like a tarp" that lined up with a cheap TLD). Cool part, of course, is that there's no reason and couldn't interoperate with protocol bridges someday.

If you're interested, hit me up on Twitter at @jon_moore or follow along on GitHub. I'll see when I can free up for a hackday soon!


  1. I'm not advocating violating copyrights here, merely noting that Napster was a P2P system with a single point of failure (an organization that could be sued).

Saturday, September 1, 2012
Resources and Query Parameters

[Editor's note: This is a cross-post from a Tumblr entry; I started to write it as a quick note because someone was wrong on the Internet, but by the time it was done, it was long enough to be a blog post in its own right.]

What kind of string is this?

Well, it's a URI (Uniform Resource Identifier) as well as a URL (Uniform Resource Locator). But wait: that means the whole string (check RFC 2396) is a resource identifier, which means the whole thing identifies a resource (literally). Not just the "" part.

I often run into folks who think this string identifies a "" resource that happens to take a parameter. That's understandable, because it's how almost every web framework is set up to implement it (identify your "/path" route, set up your controller, pick off the parameters as arguments).

However, from an HTTP point of view, and especially from a hypermedia point of view, this isn’t right. HTTP (the protocol) treats everything from the path onward as an opaque string--it shows up as a single token on the Request Line. The whole thing (query included) is used as the key for an HTTP cache. In fact, the only difference between a URL like "" and one like "" is that the former is not cacheable by default if it comes from an HTTP/1.0 origin server. That's it. And even that can be overridden with explicit Cache-Control or Expires headers.
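To make the "opaque string" point concrete, here is a small sketch (the example.com URLs are made up) of the origin-form request target that lands on the Request Line, query and all, which is also what a cache keys on:

```python
from urllib.parse import urlsplit


def request_target(url: str) -> str:
    """Return the origin-form request target HTTP puts on the
    Request Line: the path plus any query, as one opaque string."""
    parts = urlsplit(url)
    return parts.path + ("?" + parts.query if parts.query else "")
```

Both styles of URL produce a single token; nothing in HTTP itself distinguishes "path parameters" from "query parameters" once the request is on the wire.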

From a hypermedia client point of view, you don't care which style of URL is used. Sure, you might have to construct your HTTP request slightly differently if there are query parameters involved, but that's mechanical--no semantics involved, just syntactically parsing the URL to figure out how to GET it. The only reason to prefer one over the other is purely stylistic; most modern web frameworks can pluck arguments out of a path as easily as they can out of query parameters.

Remember, a hypermedia client never constructs URLs on its own; besides a few well-known entry-points (which it should treat opaquely), it is only using URLs directly fed to it by the server, or constructed according to recipes provided by the server (typically through forms or link templates). This here is probably the main driver for which style you want to use; do you want to use HTML forms, which, for GET, use query parameters, or do you want to use link templates, which tend to use path parameters, stylistically (although they can support query parameters too)?
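Server-provided recipes are the key idea here. As an illustration, a minimal "level 1" URI Template expansion (in the style of RFC 6570; the template and variable names are hypothetical) is only a few lines:

```python
import re
from urllib.parse import quote


def expand(template: str, variables: dict) -> str:
    """Minimal RFC 6570 'level 1' expansion: replace each {var}
    with the percent-encoded value supplied by the client."""
    return re.sub(
        r"\{(\w+)\}",
        lambda m: quote(str(variables[m.group(1)]), safe=""),
        template,
    )
```

The client never decides the URL structure; it only fills in the blanks the server's template dictates, which is exactly why path style vs. query style is invisible to it.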

So in a hypermedia world, there’s really no such thing as a "RESTful" URL structure; a truly RESTful client--one which understands and uses hypermedia affordances--doesn't care.

Thursday, August 30, 2012
Hypermedia Programming: Lists

The humble list is one of our simplest and yet most powerful data structures--so much so that we will even routinely write them out by hand. We put them on sticky notes to help us remember things we need to do or things we need to buy from the grocery store. We even use them for entertainment. In this post I'll explain how to represent and manipulate lists using hypermedia techniques.

The most straightforward list representation actually doesn't look that different from a handwritten to-do list; the text/uri-list media type just consists of one URI per line. This makes the format incredibly concise, since there is very little syntactic structure (just interleaved line breaks!), while making it completely general through the use of globally unambiguous identifiers.
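Parsing it is correspondingly trivial. A sketch (per the text/uri-list registration, lines beginning with `#` are comments; the example URIs are made up):

```python
def parse_uri_list(body: str) -> list:
    """Parse a text/uri-list body: one URI per line,
    '#'-prefixed lines are comments, blank lines ignored."""
    return [
        line.strip()
        for line in body.splitlines()
        if line.strip() and not line.startswith("#")
    ]
```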

Now let's talk about manipulating this list with HTTP. I would expect to be able to:

  • delete the list by issuing a DELETE to its URI
  • reorder the list by issuing a PUT to its URI with the complete, reordered list of URIs
  • insert or remove new URIs somewhere in the middle by issuing a PUT, again with a complete "new" version of the list
  • append to the list by issuing a POST with a text/uri-list body containing the new URI(s) to be added
The above assume, of course, that I am properly authenticated and authorized to modify the list. If the original list resource had an ETag or Last-Modified header, I would supply If-Match or If-Unmodified-Since headers on my modification request.

Once the list grows large, however, using PUTs to make what seem like "small" changes (removing an item, inserting an item) doesn't seem particularly efficient. For these types of changes, I'd like to be able to use PATCH and specify these small edits. Now, since text/uri-list is a text type, we ought to be able to borrow the output of the common 'diff' utility to specify the changes we want to make. [Unfortunately, it turns out the output of diff isn't registered as a standard media type, although I'm trying to rectify that as well in my not-so-copious spare time.]
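Generating such a PATCH body is straightforward with the standard library; a sketch using `difflib` (the `uri_list_patch` helper and the example URIs are mine, for illustration):

```python
import difflib


def uri_list_patch(old: list, new: list) -> str:
    """Render a unified diff between two uri-lists, suitable as a
    (hypothetical, since no diff media type is registered) PATCH body."""
    return "".join(difflib.unified_diff(
        [uri + "\n" for uri in old],
        [uri + "\n" for uri in new],
        fromfile="list", tofile="list"))
```

The server would apply the patch against its current representation, ideally guarded by an If-Match precondition so concurrent edits fail cleanly.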

This means, for example, we could see something like the following protocol exchange, starting with retrieving the initial list:
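For concreteness, a hypothetical exchange might look like the following (the host, item URIs, and the unofficial `text/x-diff` media type are all made up for illustration; as noted above, no diff type is actually registered):

```
GET /favorites HTTP/1.1
Host: example.com

HTTP/1.1 200 OK
Content-Type: text/uri-list
ETag: "v1"

http://example.com/items/1
http://example.com/items/2
http://example.com/items/3

PATCH /favorites HTTP/1.1
Host: example.com
Content-Type: text/x-diff
If-Match: "v1"

--- list
+++ list
@@ -1,3 +1,2 @@
 http://example.com/items/1
-http://example.com/items/2
 http://example.com/items/3

HTTP/1.1 204 No Content
```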

Adding pagination

These general approaches will work very well up through relatively large lists, although at some point your list will get bigger than you are willing to serve in a single representation. Now it's time to add pagination!

The easiest way to do this on the server side is to provide Link headers (RFC 5988) which tell you where you are in the list. In fact, there are registered link relations that are perfect for this already, in particular:

    first
    Points to the first page of the list.
    last
    Points to the last page of the list.
    next
    Points to the next page in the list.
    prev or previous
    Points to the previous page in the list.

Now let's work through an example. Suppose you fetch a URL that you expect, from context, to be a list and these response headers come back:

HTTP/1.1 200 OK
Date: Fri, 31 Aug 2012 01:38:18 GMT
Content-Type: text/uri-list
Link: <./page/2>; rel="next last"
Now, you can infer a couple of things, namely, that this list spans multiple pages (due to the presence of the "next" link), but also that it has exactly two pages (because the "next" link is also the "last" link). We can also tell that this is the first page, because there isn't a "prev" link; we might also be able to infer that if the server additionally provided:
Link: <.>; rel="first"
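Extracting these relations from a Link header is mechanical; a simplified parser (it assumes well-formed input and ignores parameters other than `rel`, so it's a sketch rather than a full RFC 5988 implementation):

```python
import re


def parse_link_header(value: str) -> dict:
    """Parse a Link header into {relation: target}.
    A single link may carry several space-separated relations."""
    links = {}
    for target, params in re.findall(r'<([^>]*)>([^,]*)', value):
        match = re.search(r'rel="([^"]*)"', params)
        if match:
            for rel in match.group(1).split():
                links[rel] = target
    return links
```

With this in hand, "is there a next page?" and "is this the last page?" become simple dictionary lookups.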

Ok, that works well for paginated list retrieval. It's not too hard to look for these Link headers and traverse them to retrieve and/or iterate over the entire list. But now how about updates? There's actually an ambiguity problem here, because we followed a particular URL for the whole list but got back a representation for the first page of the list instead. If I DELETE that URL, does it:

  • delete the entire list; OR
  • delete all the entries on the first page only?
The short answer is: there's no way to tell. As a server implementor, though, when someone did a GET on a list that I decided I needed to paginate, I might instead issue a 302 redirect to a different URL representing the first page explicitly. For example:
GET /list HTTP/1.1

HTTP/1.1 302 FOUND
Date: Fri, 31 Aug 2012 01:38:18 GMT
Location: /list/page/1
Then I could treat PUTs, DELETEs, PATCHes and POSTs to /list as if they were targeting the entire list, and treat requests to /list/page/1 as if they were targeting just the first page.

But back to our client conundrum; perhaps our server doesn't adhere to this redirect convention--it's certainly not an official standard. How do we proceed? Well, if our goal (and writing "goal-oriented" clients is a good orientation for hypermedia clients) is to delete the whole list, then we can just alternate DELETEs with GETs until the whole thing is gone. Either the DELETE affects the whole list in the first shot, or it deletes just the first page. In the latter case, I've made progress and can repeatedly delete the first page until I'm done.
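That alternating DELETE/GET loop is easy to express. A sketch against a stubbed server (the `StubListServer` class is a made-up stand-in that deliberately implements the ambiguous "delete just the first page" behavior, to show the goal-oriented client converges anyway):

```python
class StubListServer:
    """Toy server whose DELETE removes only the first page --
    the ambiguous semantics discussed above. For illustration only."""

    def __init__(self, pages):
        self.pages = list(pages)

    def get(self, url):
        # Returns (status, first-page representation).
        return (200, self.pages[0]) if self.pages else (404, None)

    def delete(self, url):
        if self.pages:
            self.pages.pop(0)
            return 204
        return 404


def delete_whole_list(server, url):
    """Goal-oriented client: keep issuing DELETEs until a GET
    confirms the whole list is gone. Works whether the server
    deletes the whole list or just the first page per DELETE."""
    while server.get(url)[0] != 404:
        server.delete(url)
```

Either interpretation of DELETE makes progress toward the goal, so the loop terminates.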

[Sidebar: avid readers of the HTTP spec will have spotted the trick question already here. DELETEs are supposed to be idempotent, but deleting just the first page of a list is not an idempotent operation because the second page becomes the first page, so repeated DELETEs to the same "first page" URL will continue deleting more items. Therefore the correct behavior for the server is to delete the entire list. However, if you meet a server that has decided on the second semantics, good luck waving standards documents around to get its implementor to change it.]
If our goal, however, is just to remove the first page, we probably want to PATCH it. However, that's not a commonly implemented HTTP verb (expect 501 NOT IMPLEMENTED or 405 METHOD NOT ALLOWED), and there isn't a standardized text diff media type yet anyway, so that might not work. In this case, our client may well have to be prepared to DELETE the entire list and then reconstruct just the desired parts with PUT and/or POST.

What's very interesting about this is that the client as we've described it actually implements a closed-loop control mechanism. It takes sensor readings or receives feedback via "examining" the system with GETs, and then takes further actions based on its current goal and the current state of the system. For a really good introduction to how this can lead to very robust systems, see "GN&C Fault Protection Fundamentals" by Robert Rasmussen; although the paper is about spacecraft guidance systems (cool!) its concepts are easily applicable to software systems in general.

Richer Representations

The text/uri-list format, while great for capturing list membership and order, doesn't tell a recipient anything about the actual members of the list. In that sense, it's all identifier and no data. For those list members that are URLs, we can attempt to GET them or check their HEAD or ask for their OPTIONS to get more information. For URIs that are not locators (and hence not dereferenceable), like URNs or Tag URIs, we'd have to consult a lookup service or have access to other out-of-band information. At any rate, if a client was looking for a particular member of a list, it might have to make several more requests before it could find the right one. In particular, a human looking at the list in a browser will likely have to do a bunch of cut-n-pasting to fully investigate the list contents.

What can we do about this? In "REST APIs must be hypertext-driven", Roy Fielding suggests the following pattern:

Query results are represented by a list of links with summary information, not by arrays of object representations.
In other words, along with the links, we want to provide a little extra contextual information to make it easier for the client to identify what they're looking for. The text/uri-list format lacks this extra context and assumes the recipient can find it elsewhere easily. Perhaps we should look for alternative formats that are nearly as concise but which also provide opportunity to supply the little bit of context Fielding describes. Two immediate options that spring to mind are Atom+XML and Collection+JSON, which are media types whose domains specifically include lists.

For example, here's our initial list of favorite items, represented in application/atom+xml:

Now all the same rules apply for this list, as far as what the methods mean (i.e. POST appends a new item to the list, etc.). This is an example of what folks mean by uniform interface. If a URL represents a list, then POSTing to it should append to the list, regardless of the media type used to serve the list or to enclose a new item to be appended. So long as the client and server commonly understand a sufficient set of media types, they can interoperate. In the case of the Atom-formatted list, I would probably expect to have to POST an <entry> containing my new item, as I have a strong hint that the server understands application/atom+xml. However, the server may also advertise additional formats with Link headers (Atom lets us do this with embedded <link> elements too):
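Constructing such an `<entry>` to POST is a few lines with the standard library (the title and item URL below are hypothetical, for illustration):

```python
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"


def atom_entry(title: str, link: str) -> str:
    """Build a minimal Atom <entry> suitable for POSTing
    (appending) to a list served as application/atom+xml."""
    ET.register_namespace("", ATOM_NS)  # serialize without a prefix
    entry = ET.Element(f"{{{ATOM_NS}}}entry")
    ET.SubElement(entry, f"{{{ATOM_NS}}}title").text = title
    ET.SubElement(entry, f"{{{ATOM_NS}}}link", href=link)
    return ET.tostring(entry, encoding="unicode")
```

The client would send this body with `Content-Type: application/atom+xml`; per the uniform interface, the POST still means "append to the list".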

Link: <.>; rel="alternate"; type="text/uri-list"
To take advantage of these I may need to adjust my client's Accept header to specify my preference for them. But at any rate, if the resource is a list, there's no reason a client couldn't GET it as Atom, and then POST a single URI onto it with text/uri-list, so long as the client and server understand both media types. If the server doesn't, it may well reply with a 415 UNSUPPORTED MEDIA TYPE and then the client may try again if it has another option.

Last but not least, since I like using HTML as the media type for my APIs, we should point out that this is also a fine and relatively compact way to represent a list:
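A sketch of what that might look like, with hypothetical list members (note the anchor text carries exactly the "summary information" Fielding describes):

```html
<ul class="uri-list">
  <li><a href="http://example.com/items/1">First favorite item</a></li>
  <li><a href="http://example.com/items/2">Second favorite item</a></li>
  <li><a href="http://example.com/items/3">Third favorite item</a></li>
</ul>
```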

Where to go next?

I've given you a brief tour about how to deal with hypermedia lists in a standardized way, relying on the documented semantics of HTTP and various media types. I believe it should be possible to construct relatively robust and general client libraries for dealing with lists in all their incarnations; that would be a great open source project...hint hint.