Wednesday, March 26, 2008
Sprint Review

Just wanted to follow up on my previous post about how we were going to get back to scrum basics this sprint. We had our sprint review and sprint retrospectives today, and the sprint was viewed as a great success.

Here are some things that I think contributed to our success:

Better estimation: We used past task estimation history plus Monte Carlo simulation, as I described earlier, to sign up for a set of achievable tasks. The end result was that we showed up at the review with 90-95% of the committed work done. The parts that didn't get done were either blocked externally (e.g. an external partner didn't have a data feed ready) or ate up too many round-trips through the design-IA-product-eng collaboration cycle (essentially, deviating from the plan we set at the beginning of the sprint), which took time away from other things.
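For the curious, here's a rough sketch (in Python) of the kind of Monte Carlo simulation I mean. The numbers are made up -- in practice you'd pull the historical ratios of actual-to-estimated hours for past tasks out of your ticketing system:

    import random

    # Ratios of actual hours to estimated hours, one per past task.
    # (Made-up numbers -- pull these from your own ticketing system.)
    history = [1.1, 0.9, 1.4, 1.0, 0.8, 1.6, 1.2, 0.95]

    # Hour estimates for the tasks we're thinking of signing up for.
    estimates = [8, 5, 13, 3, 8, 5]

    def simulate_sprint(estimates, history, trials=10000):
        """Scale each estimate by a randomly sampled historical ratio
        and total the results, once per trial."""
        return sorted(
            sum(e * random.choice(history) for e in estimates)
            for _ in range(trials)
        )

    totals = simulate_sprint(estimates, history)
    # 90% of the simulated sprints came in at or under this many hours.
    print("90% confidence total:", totals[int(0.9 * len(totals))])

If that 90% number fits inside the hours the team actually has available, sign up for the tasks; if not, drop tasks until it does.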

Commitment: Towards the end of the set of product backlog our team was planning, we hit a fairly large task that was hard to estimate. However, we had two team members who were very lightly booked, and we simply asked them if they were willing to commit to finishing the task somehow by the end of the sprint. They signed up, and they got it done. There was really no talk of punting things off the sprint, although a couple of stories were rescoped or replanned under time pressure so they could be done by the review. In general, I think these changes just meant the abstract functionality in the user story got built with fewer resources -- i.e. we completed them more efficiently thanks to the pressure of timeboxing.

Empowerment: Because they were committed to their tasks, my teammates basically kept themselves unblocked for most of the sprint by engaging with other departments (design, product, IA, ops, QA) early on to make sure they could get what they needed. Most of the time, things stayed unblocked by getting the right folks together in one room. One of my teammates mentioned that one of the things I was best able to do was simply to identify which people needed to be in the room! Then he went off, arranged the meetings, and got things done himself.

Shippable product: We planned explicit tasks for each feature to test/verify it, to show QA how it worked, and to demo it and get explicit signoff from the product owner. Our product owner gave me the feedback that this resulted in most of the user stories we called "done" being potentially shippable.

Self-organization: The team dynamically swapped tasks with one another to load balance effectively during the sprint. There was such appreciation for this help that people started buying each other donuts as thank-yous, and I think we ended up with around 4 dozen donuts being exchanged by the end of the sprint. Bad for the waistline, but good for team morale!

On another note, this was a re-entry for me into the dual role of scrum master and team member (in that I signed up for development sprint tasks as well as being scrum master). For other developers who find themselves being called on to carry the scrum master mantle, here are some tips to help you survive:

  • Only book yourself half-time. The rest of the time will be eaten up by scrum mastering, and if you don't scale back on your coding commitments, you're just going to end up staying up late working all sprint.
  • Tell people how to get unblocked rather than trying to unblock them yourself. This seems pretty simple, but is a big time saver. For one thing, it's probably quicker to describe what to do (e.g. just set up a meeting with X, Y, and Z) than it is for you to send out the meeting invite yourself. Plus, you remove yourself as a bottleneck -- so your teammate doesn't have to wait for you to send the invite. Furthermore, after you do this a few times, your teammates will pick it up and will be able to just solve more of their problems themselves without asking you for help.
  • Get good reports. I invested 5-6 hours towards the beginning of the sprint setting up some automated reports that pull info out of our ticketing system. I really ended up just using two reports:
    1. Storyboard: for each story, identify who has tickets against it, and identify whether each ticket is "Not Started", "In Progress", or "Finished". This provides a quick way to scan the status of each story and reminds you to ask questions like: what do we need to do to close out that story again? why are we working on lower priority things instead of the higher priority ones? etc.
    2. Burndown count: for each story, add up time remaining on each ticket, and then provide a total for the amount of hours left for the sprint.
    So what I did was: right before scrum, I pulled up the Storyboard report to check status so I could ask followup questions during scrum. Right after scrum, I hit the burndown count and then plotted the burndown graph by hand on a big piece of flipchart paper. That was pretty much the extent of the reporting I needed; a rough sketch of both reports is below.
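Here's roughly what those two reports boil down to, as a Python sketch. The ticket tuples are made-up stand-ins for whatever your ticketing system's API actually hands back:

    from collections import defaultdict

    # Hypothetical tickets: (story, owner, status, hours_remaining).
    tickets = [
        ("Search page", "alice", "Finished", 0),
        ("Search page", "bob", "In Progress", 4),
        ("Photo upload", "carol", "Not Started", 8),
        ("Photo upload", "alice", "In Progress", 3),
    ]

    # Storyboard: who has tickets on each story, and where each one stands.
    board = defaultdict(list)
    for story, owner, status, hours in tickets:
        board[story].append((owner, status, hours))

    for story, items in board.items():
        print(story)
        for owner, status, hours in items:
            print("  %s: %s" % (owner, status))

    # Burndown count: hours remaining per story, plus a sprint total.
    total = 0
    for story, items in board.items():
        hours_left = sum(hours for _, _, hours in items)
        total += hours_left
        print("%s: %d hours left" % (story, hours_left))
    print("Sprint total: %d hours left" % total)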

So, all in all, things turned out pretty well. I'm excited to see the teams back in a good sprint-cycle groove and turning out work (our review meeting lasted almost 5 hours, mainly because so much had gotten done across the multiple teams working on the product that it took a while to go over it all). We'll be keeping things mostly the same going into the next sprint, so I'll be curious to see whether the teams accelerate now that they're used to working this way.

Sunday, March 16, 2008
REST: Unix programming for the Web

I've been giving some thought to REST-style architectures lately, and recently re-read some of The Art of Unix Programming by Eric S. Raymond. ESR notes that some of the characteristics of Unix-style programming include:

  • Do one thing, and do it well (attributed to Doug McIlroy)
  • Everything is a file.
  • Compose complex systems by connecting smaller, simpler programs (e.g. via Unix pipes).

Unix-style systems have had undeniable success and remarkable stickiness for a technology; I have an old copy of my father's System V manual from when he worked at Bell Labs (yeah, they actually printed out the man pages and bound them!), and I pretty much recognize everything in there. Sure, there are many new commands available, new kernels, new distributions, etc., but they are all very recognizably Unixy.

I've been thinking about how this philosophy applies to the Web 2.0 world. I think this list turns into:

  • Do one thing, and do it well.
  • Everything is a RESTful service.
  • Compose complex systems by interconnecting smaller, simpler services.

For one thing, "do one thing, and do it well" isn't limited to Unix, this is a key part of abstracting and decomposing a technical problem. But we see it everywhere: I read my mail on Gmail, keep my to-do lists on Remember The Milk, store photos on Flickr, keep my browser bookmarks on del.icio.us, etc. All these sites adhere to this principle.

But taking things down a layer, what does this mean for someone architecting or implementing a Web 2.0 service? For one thing, I think it makes sense to break your overall service into individual microservices that are maintained and deployed independently -- i.e. break things down to the smallest level at which they still make coherent sense.

As for "everything is a REST service", the everything-is-a-file abstraction worked for Unix because there was a small set of common operations that applied to files (open, close, dup, read, write). Sounds a lot like doing REST over HTTP, with HEAD, GET, PUT, POST, DELETE.

Combining various small REST microservices into larger services is already being done on a one-off basis (this is, after all, basically what iGoogle is). The main question is: what is the Web 2.0 equivalent of the pipe? That is, is there an easily understood abstraction for composing web services? Sounds like a topic that might be ripe for some kind of logical calculus (like the relational calculus for databases or the pi-calculus for concurrent processes) -- call it the REST calculus. If there were a few easily understood and specifiable combinators, they could be built into language-specific libraries for quickly assembling new macro-services.
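Just to make the idea less abstract, here's one guess (in Python) at what a pipe-like combinator could look like. The URL is hypothetical -- it just stands in for some real microservice -- and the stages are trivial stand-ins for real transformations:

    import urllib.request

    def fetch(url):
        """GET a resource and return its body -- the 'read' end of a pipe."""
        with urllib.request.urlopen(url) as resp:
            return resp.read()

    def pipe(*stages):
        """Compose stages left-to-right, like a Unix pipeline: each stage's
        output becomes the next stage's input."""
        def run(value):
            for stage in stages:
                value = stage(value)
            return value
        return run

    # Hypothetical composition: fetch a feed, transform it, decode it.
    pipeline = pipe(
        lambda _: fetch("http://feeds.example.com/photos"),  # made-up URL
        lambda body: body.replace(b"http:", b"https:"),      # stand-in filter
        lambda body: body.decode("utf-8"),
    )
    # result = pipeline(None)

The interesting questions are all in what the combinators should be beyond plain sequencing -- fan-out, merging, caching, error handling -- which is exactly where a calculus would earn its keep.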

I might try to get some of my old colleagues from my programming language research days interested, to see if they have anything to say about the matter....

Comments definitely welcome here. Is this a new idea? Are others espousing this? Is this even a good idea?