Saturday, October 31, 2009
NoSQL East 2009 Redux

I just attended NoSQL East down in Atlanta over the last two days. This was a fantastic conference: well-organized, with not only great content and speakers, but also a very well-educated audience. There was a telling moment in the first non-keynote talk where the speaker asked the audience "How many people have read the Dynamo paper?" and easily 95% of the audience put their hands up.

I'd divide the major focus areas into the following groups: key-value stores, massive-scale data mining, column-oriented datastores, and document-oriented databases (plus a grab bag of other interesting systems).

As one of the speakers put it, which one you need depends on the shape of your data; these are all isomorphic to each other in one way or another, just as Turing-complete languages are all essentially equivalent. But you still want to choose the right tool for the job. One nice touch was that a number of the systems were introduced not by one of the project committers, but rather by people who were using them to get stuff done. It was a very practical focus, much appreciated.

There were a couple of interesting tidbits from the conference, including a rollicking, confusing talk by John "My first network address was '12'" Day about network architecture and how TCP/IP had really gotten it all wrong. This guy was about 3 planes of existence above what I could follow at 9:15am--I had a distinct feeling he knew exactly what he was talking about, and was probably right, but there was no way he was making sense even to a very technical audience. Then there was the poor guy from Microsoft Research who had done some interesting distributed IDE work...for the .NET framework. In a sea of Mac and Linux laptops across the audience, he had a bunch of people who really couldn't make practical use of what he'd done.

Key-Value Stores

I've got to give a pretty big nod to Riak here for a straight-up key-value store; Justin Sheehy basically gave a talk about decentralized, scalable, lights-out system design that might well have been a good keynote, and it was clear that Riak was designed with high priority on those aspects. Major advantages over Voldemort in my mind are: ability to add/subtract nodes from running clusters, tunable (per-bucket, per-request!) quorum parameters (N,R,W), plus support for pluggable consistent hashing modules, partition distribution modules, and conflict resolution modules. For most of my use cases, with N=3, I'd probably do R=1,W=3 to optimize for read latency, as most of our writes would either be a user clicking "save" or would be the result of an asynchronous background process. On the other hand, if I wanted to build something like SQS on top of this (which I conjecture is possible), I'd probably do W=1,R=3 to optimize for the "post a message" latency.
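These quorum settings are easy to reason about with a toy predicate (my own sketch, not anything from Riak's API): with N replicas, every read quorum is guaranteed to overlap the latest acknowledged write exactly when R + W > N.

```python
def quorum_overlaps(n, r, w):
    """True when every read quorum intersects every write quorum, so a
    read of R replicas is guaranteed to see the latest W-replica write."""
    return r + w > n

# Read-optimized (my use case): cheap reads, expensive writes.
assert quorum_overlaps(3, 1, 3)

# Write-optimized (the SQS-style "post a message" case).
assert quorum_overlaps(3, 3, 1)

# R=1, W=1 is fast on both sides but gives up the overlap guarantee.
assert not quorum_overlaps(3, 1, 1)
```

The per-request tunability means you don't have to pick one point on this curve for the whole cluster, which is the part I find most compelling.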

While I was there, I had a great talk with them where we went over how they were going to expose conflict resolution up to the client via their HTTP/JSON interface. There's no out-of-the-box support for multiple-datacenter awareness, though it seemed possible to add via the pluggable partition distribution module; however, their inter-node communication leverages Erlang's native networking, meaning some kind of VPN/IPSec tunnel would be the main way to make this work across sites. I'm pretty sure that no matter what you use in this space, you'll probably end up using a VPN over WAN to give the appearance of a seamless LAN cluster (although the Cassandra guys piped up in the conference IRC channel that Cassandra has multi-site support already).

I can't tell whether being written in Erlang is a plus or a minus. On the one hand, it won't plug in well to our JMX-based monitoring frameworks, and I'm pretty sure very few folks within our organization have ever built or deployed an Erlang system. On the other hand, Erlang is the perfect choice for highly concurrent, fault-tolerant programming, and would probably be right up the alley of several of our programmers. (I'm proud to say that while we are primarily a Java shop, we have a lot of other language proficiencies in-house spanning Ruby, Python, Clojure, and Scheme. Based on my brief escapade with Erlang last night, I'm confident that we have a number of folks who could pick it up easily, and who would jump at the chance.) This is probably a wash.

Very interesting in my mind is that some of the key-value stores have brought over some features from other design spaces; Riak allows some basic link structure in their values, including a version of targeted MapReduce that can follow links, which starts to make it feel like a graph-oriented database like Neo4j. Similarly, Cassandra has support for column-oriented indices like HBase does. It's clear there's probably a project out there to scratch your particular itch.
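To make the link-walking idea concrete, here's a toy sketch of a key-value store whose values carry outbound links, with a breadth-first traversal that follows them (the bucket contents and the `walk` function are my invention, not Riak's actual interface):

```python
# Hypothetical bucket: each value is (data, [links to other keys]).
store = {
    "alice":    ({"name": "Alice"},  ["post1", "post2"]),
    "post1":    ({"title": "Hello"}, ["comment1"]),
    "post2":    ({"title": "World"}, []),
    "comment1": ({"text": "Nice!"},  []),
}

def walk(key, depth):
    """Follow links breadth-first for up to `depth` hops, collecting keys."""
    seen, frontier = {key}, [key]
    for _ in range(depth):
        frontier = [k for f in frontier for k in store[f][1] if k not in seen]
        seen.update(frontier)
    return seen

print(sorted(walk("alice", 2)))  # ['alice', 'comment1', 'post1', 'post2']
```

Once you can express "start here, follow these links" as a query, you're most of the way to the graph-database feel without leaving the key-value world.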

Massive-scale data mining

We saw a couple of talks from folks who were using Hadoop for analyzing very large data sets. But furthermore, they were using frameworks like Pig and Cascading on top of Hadoop to do a lot of ad-hoc queries. Twitter in fact uses Pig to do all of their interesting web analytics, and they've taken all their analytics BAs who are used to doing ad-hoc SQL queries and trained them up with little problem in Pig. This is probably somewhere on our horizon, although there are larger cats to skin at the moment.
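To get a feel for what Pig and Cascading are abstracting, here's the basic MapReduce shape in miniature, using word count as the canonical example (a plain-Python sketch of the programming model, not how Hadoop actually executes jobs):

```python
from itertools import groupby

def map_phase(records, mapper):
    # Emit (key, value) pairs from every input record.
    return [kv for rec in records for kv in mapper(rec)]

def reduce_phase(pairs, reducer):
    # Sorting by key stands in for the shuffle step.
    pairs.sort(key=lambda kv: kv[0])
    return {k: reducer(k, [v for _, v in group])
            for k, group in groupby(pairs, key=lambda kv: kv[0])}

docs = ["the quick fox", "the lazy dog"]
counts = reduce_phase(
    map_phase(docs, lambda doc: [(w, 1) for w in doc.split()]),
    lambda word, ones: sum(ones))
print(counts["the"])  # 2
```

The point of Pig is that an analyst never writes even this much; they write a few lines of declarative script and the framework compiles it down to chains of jobs like the above.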

Column-oriented datastores

Cassandra and HBase were the major players here. Digg is moving all of their backend storage over to Cassandra after some successful pilots, and we saw a great talk by Mark Gunnels about how he is using HBase because it makes it "simple to get sh*t done". Apparently recent releases of HBase have made strides in fault tolerance (there is now an election algorithm for some of the special name nodes) and latency (apparently performance optimization was a major focus of the 0.20 release). There's an interesting article that describes some wide-area replication schemes that are available with HBase that sound intriguing (although once you are beyond two datacenters, I am convinced that you are better off with the VPN LAN solution if you want to have any hope of achieving eventual consistency).

Document-oriented databases

While the column-oriented datastores are good at organizing semi-structured data, the document-oriented guys are really all about organizing largely unstructured documents, and focus on doing some wild ad-hoc MapReduce queries. I still need to be convinced about the scaling, replication, and geographic distribution capabilities in this space, so it may be a while before we dip our toes in here.

Other

There was a very interesting talk about Tin ("the database so tiny they had to shorten its name"). This basically seemed to be a very clever approach for delivering stock data that leveraged a basic filesystem with range queries and rewriting rules via Sinatra. It turns out web servers serving static files go pretty fast, if that's a suitable representation for your stuff (it seems like his primary use case is delivering read-only data out to clients, where the data gets updated asynchronously in the background by the system). We actually have some datasets that are like that, so this is intriguing!
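I can only guess at Tin's internals, but the flat-file-with-range-queries idea is easy to sketch: sorted fixed-width records plus a binary search over seek offsets give you cheap range reads with nothing but a filesystem underneath (the record format here is my invention):

```python
import struct, tempfile

# Hypothetical record format: (timestamp, price) as fixed-width binary.
REC = struct.Struct(">if")

def write_series(path, points):
    with open(path, "wb") as f:
        for t, p in sorted(points):
            f.write(REC.pack(t, p))

def read_range(path, t0, t1):
    """Binary-search for the first record with t >= t0, then scan to t1."""
    out = []
    with open(path, "rb") as f:
        f.seek(0, 2)                      # seek to end to count records
        n = f.tell() // REC.size
        lo, hi = 0, n
        while lo < hi:                    # binary search on seek offsets
            mid = (lo + hi) // 2
            f.seek(mid * REC.size)
            t, _ = REC.unpack(f.read(REC.size))
            if t < t0:
                lo = mid + 1
            else:
                hi = mid
        f.seek(lo * REC.size)
        while lo < n:                     # linear scan through the range
            t, p = REC.unpack(f.read(REC.size))
            if t > t1:
                break
            out.append((t, p))
            lo += 1
    return out

path = tempfile.NamedTemporaryFile(delete=False).name
write_series(path, [(1, 10.0), (2, 10.5), (3, 10.25), (4, 11.0)])
print(read_range(path, 2, 3))  # [(2, 10.5), (3, 10.25)]
```

If the writer rewrites the file asynchronously and readers only ever see complete snapshots, this degenerates to exactly the "web server serving static files" case.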

There was a talk from Neo4j, a graph-oriented database, which I just can't wrap my head around yet. I think I need to read up on some of the background research papers in this area. Certainly, a datastore based around the notion of link relations would likely be easy to expose as a REST HTTP interface, which is attractive. Our particular domain model can actually be nicely normalized, however (we are currently running off a traditional RDBMS, after all), so I'm not sure we need the full semantic flexibility this offers.

There was also a talk from Redis; this is primarily an in-memory storage system with blazing-fast speeds, and the ability to write out to disk is really an afterthought. During the talk he showed a screen snapshot of a "top" running on a cloud-hosted virtual node with 60GB of memory. I cannot make up my mind whether this is ultimately just a badass memcached, or if he's on the cusp of something: if you have enough memory to hold your whole dataset, and you are sufficiently fault-tolerant, why bother writing to disk at all? Especially if you can easily get a periodic snapshot out to disk in the background that can be properly backed up for disaster recovery.
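The background-snapshot idea is simple enough to sketch (my own toy version; Redis actually forks and lets copy-on-write do the copying rather than taking a lock): keep everything in an in-memory dict and periodically serialize a frozen copy out to disk.

```python
import pickle, tempfile, threading

class MemStore:
    """Toy in-memory store with background-friendly snapshots."""
    def __init__(self):
        self.data = {}
        self.lock = threading.Lock()

    def set(self, key, value):
        with self.lock:
            self.data[key] = value

    def snapshot(self, path):
        # Copy under the lock, then do the slow disk write outside it,
        # so writers are only briefly blocked. (Forking gets the same
        # effect for free, which is presumably why Redis does it.)
        with self.lock:
            frozen = dict(self.data)
        with open(path, "wb") as f:
            pickle.dump(frozen, f)

store = MemStore()
store.set("answer", 42)
path = tempfile.NamedTemporaryFile(delete=False).name
store.snapshot(path)
with open(path, "rb") as f:
    print(pickle.load(f))  # {'answer': 42}
```

The snapshot is only ever a point-in-time copy, which is exactly what you want for disaster recovery and exactly not what you want if you need every write durable.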

Conclusion

As folks pointed out, "NoSQL" might better be written "NOSQL" to mean "Not Only SQL". I didn't sense a lot of MySQL hatred in here; quite the contrary, many people were very complimentary, noting that there are certain things MySQL does *really* well, and that the MySQL hammer was staying firmly in their toolboxes. However, it is clear that there is a maturing community of very practically-minded folks who are looking for a new set of tools to drive their particular screws. Although to be sure (and this is something that I think some of the conference tweets corroborated), this also implies we all have a screw loose or two....

Friday, October 30, 2009
Object Calls == Message Passing in Erlang

I started playing around with Erlang last night as a result of learning about Basho's key-value store Riak at NoSQL East yesterday (more specifically, it was due to Justin Sheehy's talk, and two realizations: (1) this guy *gets* a lot of important *operational* design choices in this space, and (2) he decided to build his system in Erlang).

So I decided to read the online Erlang "course" and started working on the exercises. One of them was:

Write a function which starts N processes in a ring, and sends a message M times around all the processes in the ring. After the messages have been sent the processes should terminate gracefully.

And so, summoning vaguely-remembered lectures by Bob Harper in "Fundamentals of Computer Science II" at CMU on doing object-oriented programming in Scheme, and remembering that the original Smalltalk OOP guys always said "send an object a message" rather than "invoke a method on an object", I set to work. [Editor's note: please feel free to post comments showing me better ways, I have known Erlang for all of about 12 hours at this point!]

Let's get the declarations out of the way. I need to define the entry point function which creates the ring of N nodes and sends the message around it M times, and I know I'm going to need a function representing a node in the ring, since I'm going to have to spawn processes for them.

-module(ring).
-export([ring_msg/3, ring_node/1, init_node/1]).

Ok, what job does a node in the ring have? Well, most of the time, when it receives a message, it just needs to pass it on to the next guy. So my node process is going to need to know about its next neighbor. Now in Erlang, what I would normally think of as an object can be modelled as a recursive function that passes its current state back into itself as an argument, and processes "method calls" by receiving messages. Interestingly, not all method calls actually have to send something back to the caller!

ring_node(Next) ->
  receive
    { pass, M, Msg } ->
      % Note that we got this message. 
      io:format("Node ~w~n",[Msg]),
      % Pass the message on around the ring.
      Next ! { pass, M, Msg },
      % If the count was down to zero, I can
      % exit, otherwise, I loop and wait for
      % the next incoming message.
      if M == 0 -> ok;
         true -> ring_node(Next)
      end
  end.
Ok, seems pretty straightforward. But if I had a ring of these set up, a message would just keep running around the ring. At least one node needs to be special, so that it can decrement the count M as the message comes through. It's pretty similar to the ring_node above, but is a little different.
init_node(Next) -> receive
    % message has been all the way around
    % the last time, so I can quit
    { pass, 0, _ } -> ok;
    % otherwise, log the message and pass
    % it on, decrementing the count
    { pass, M, Msg } ->
      io:format("Node ~w~n",[Msg]),
      Next ! { pass, M-1, Msg },
      init_node(Next)
  end.
Now an interesting thing here is that the init_node and the ring_node can both handle the "pass" message, and that when they send the message on, they don't actually care what their "Next" process is. It's like both of these "objects" implement the following interface:
public interface MessagePasser {
  void pass(int count, Object msg);
}
Ok, so now if we can create a ring with 1 init_node and (N-1) ring_nodes, we're all set if we inject the initial Msg into the init_node. So let's think about constructing a ring of nodes; if we have a node handy, we can pass that in as the initial argument (think "constructor") to a ring_node process to use as its Next node, then we just count down:
ring(Last, 0) -> Last;
ring(Last, N) -> 
  RN = spawn(ring, ring_node, [Last]),
  ring(RN, N-1).
Hmm, that's close, but that's a linked-list of nodes, not a ring. But we can't pass a node in as a constructor argument to the first node we create, because we don't have any yet! So it seems like we'll need to construct a linked-list of nodes, and then "close the loop" by stitching the front and the back together. Our init_node is already a special node, so maybe we can extend it this way:
init_node(Next) -> receive
    % acknowledge the request, update state
    { setNext, N, From } -> From ! ok, init_node(N);
...
In other words, the init_node can get a special message telling it to "update" its Next state. In some sense, we've just done this:
public interface InitNode extends MessagePasser {
  void setNext(MessagePasser N);
}
We want to acknowledge the request so our ring construction knows when that message has been processed -- we don't want to hand the ring back until it's all stitched together, and we can't guarantee ordering of message delivery unless we specifically wait for a response. So here's the full ring construction:
ring(N) when is_integer(N) ->
  % just pass in a placeholder for Next
  RN0 = spawn(ring, init_node, [nil]),
  ring(RN0, RN0, N-1);
% finished stitching, can return our
% init node to the caller
ring(Init) -> receive ok -> Init end.
ring(Init, Last, 0) -> Init ! { setNext, Last, self()}, ring(Init);
ring(Init, Last, N) ->
  RN = spawn(ring, ring_node, [Last]),
  ring(Init, RN, N-1).
Finally, the thing we're trying to do (including optimizing the degenerate cases):
ring_msg(0, _, _) -> ok;
ring_msg(_, 0, _) -> ok;
ring_msg(N, M, Msg) ->
  Init = ring(N), Init ! { pass, M, Msg }, ok.
Actually runs, too! It's pretty neat to see polymorphism arise simply from different processes accepting the same message, and I've always loved the pattern matching in ML (both SML and OCaml variants!). Some pretty serious systems programs are getting written in this language; it's clear that the process-spawning methodology lends itself well to a SEDA-style approach, which is great for graceful degradation of service, and the purely functional style (no mutation) means you have no locks, no shared state, and hence safe concurrency (as long as you can model what you're doing properly).
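For comparison, the same ring translates pretty directly into Python threads with queues as inboxes (my own re-rendering of the exercise, not anything from the Erlang course): each "process" becomes a thread, `!` becomes `Queue.put`, and `receive` becomes `Queue.get`.

```python
import queue, threading

def ring_node(inbox, nxt):
    # Plain node: pass the message along; exit once the count hits zero.
    while True:
        m, msg = inbox.get()
        nxt.put((m, msg))
        if m == 0:
            return

def init_node(inbox, nxt, log):
    # Special node: decrements the lap count; quits when it reaches zero.
    while True:
        m, msg = inbox.get()
        if m == 0:
            return
        log.append(msg)
        nxt.put((m - 1, msg))

def ring_msg(n, m, msg):
    boxes = [queue.Queue() for _ in range(n)]   # one inbox per "process"
    log = []
    threads = [threading.Thread(target=init_node,
                                args=(boxes[0], boxes[1 % n], log))]
    threads += [threading.Thread(target=ring_node,
                                 args=(boxes[i], boxes[(i + 1) % n]))
                for i in range(1, n)]
    for t in threads:
        t.start()
    boxes[0].put((m, msg))                      # inject the message
    for t in threads:
        t.join()
    return log

print(ring_msg(3, 2, "hello"))  # ['hello', 'hello'] -- two full laps
```

Of course, the Erlang version gets dirt-cheap process spawning and selective receive for free; the thread version is just the object-as-message-loop analogy made explicit in a mainstream language.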