Sunday, December 30, 2007
Code Artisan's New Year's Resolutions

Well, it's a new year, so time to sit down and draw up a list of development resolutions. Without further ado:

1. I will create unit tests for all my classes. There's no excuse not to have at least some kind of unit test for each class that gets created; this at the very least provides a stratum on which other people can flesh out the tests to cover things that come up. This may not be full-blown TDD, but we could at least enforce this with some kind of unit test coverage tool like Cobertura.

2. When debugging a defect, the first step will be to write an automated test case that captures it. Nothing is more annoying than a bug that can't be reproduced. If you can actually get to the point where you can reliably trigger the bug, you're probably most of the way towards finding the problem. Furthermore, in the process, you'll have created an easy-to-run way to know when the bug is fixed. Since the test is automated, there's a good chance that test will get re-run a whole bunch more times in the future of the codebase, making sure that once a bug gets squashed, it stays squashed.
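For what it's worth, the capture-the-bug-first pattern looks something like this; the pagination bug, class name, and numbers here are all hypothetical:

```java
// Sketch: pin a reported bug down with a test before fixing it.
// Hypothetical report: "21 items with a page size of 10 only shows 2 pages."
// The buggy version used plain integer division (itemCount / pageSize),
// silently dropping the final partial page.
public class PaginationRegressionTest {

    // The fixed implementation: ceiling division.
    static int pageCount(int itemCount, int pageSize) {
        return (itemCount + pageSize - 1) / pageSize;
    }

    public static void main(String[] args) {
        // The exact scenario from the bug report, now pinned forever:
        if (pageCount(21, 10) != 3) throw new AssertionError("regressed!");
        // A couple of neighboring cases for good measure:
        if (pageCount(20, 10) != 2) throw new AssertionError("regressed!");
        if (pageCount(1, 10) != 1) throw new AssertionError("regressed!");
        System.out.println("regression test passed");
    }
}
```

Once this test fails for the right reason, fixing the bug is just a matter of making it pass--and every future automated build re-runs it.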

3. I will write Javadocs for all my public methods, except where they can be inherited from the Javadocs in an interface. This is just common courtesy to myself and the other developers. I'm cutting myself a break here, since one could argue that you really ought to document the protected ones too, but we'll start small. This will also encourage defining interfaces and coding to them, since it saves a lot of effort if you can just Javadoc the interface itself! For bonus points this year, write or find a tool to automatically check this.
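As a sketch of the interface-first payoff (all names here are invented): write the Javadocs once on the interface, and implementations pick them up via {@inheritDoc}:

```java
/** Looks up users by their unique identifier. */
interface UserLookup {
    /**
     * Returns the display name for the given user id.
     *
     * @param userId the unique id of the user
     * @return the user's display name, never null
     */
    String displayNameFor(long userId);
}

/** Toy implementation; its method docs are inherited from the interface. */
class InMemoryUserLookup implements UserLookup {
    /** {@inheritDoc} */
    public String displayNameFor(long userId) {
        return "user-" + userId;
    }
}
```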

4. I will have explicit up-front design sessions before writing any code. I really don't know why we don't do this more than we do, other than we forget to do it. There are all kinds of good reasons to do this, like the fact that we always come up with better designs in a small group than we do by ourselves, or that this will help all of us become better designers by being present while good designs are being born. Or that a good design is usually easier to implement, and with all the hard work I've already set out for myself, I don't mind finding resolutions that save me some effort.

5. I will relentlessly learn to estimate better by considering my past estimating history when providing a new estimate. First step here is to actually look at my past history, so I know what it is! (Fortunately, we are already collecting the data for this...) Then possibly use techniques like those I laid out in a previous post about improving estimates.

Well, that's it. Those are some pretty specific resolutions, which, I'm given to understand, makes it more likely that they'll be adhered to. What are your coding resolutions for the new year?

Thursday, December 27, 2007
Who designed this stuff, anyway?

Many development organizations have an "architect" role, meant to provide technical direction over an entire engineering process [As far as I can tell, "architect" really seems to be shorthand for "someone with deep and broad technical understanding and experience"]. How does this mesh with scrum-based, grass-roots development by empowered teams? Or perhaps, the real question is: if I have folks that would traditionally fill an architect role, what's the best way to utilize them in a scrum setting?

Three areas in which these folks tend to contribute (although this list is certainly not exhaustive) are:

  • R&D: advanced prototyping/proof-of-concept work
  • Due Diligence: evaluation of emerging technology
  • Design: participating in system and/or software architecture and design

Let's take each of these areas in turn.

R&D.

Here the notion is to invest some amount of exploratory effort to remove technical risk from a product or feature--to the point where the feasibility of an approach can be established and an associated rough LOE (level of effort) can be given for a real implementation using this approach. We typically hand this type of task off to an architect, hoping his/her broad technical background can provide hints/leads for navigating the solution space, and thus reduce the maximum amount of uncertainty with a minimum of effort (bang for the buck!).

Depending upon the nature of the problem at hand, and/or the structure of the available scrum teams, there are two readily apparent approaches:

  1. run an R&D scrum team; this makes sense when there are sufficient architects around to form a team of them, and when the explorations are not directly tied to an existing product (e.g. technical feasibility for a new product)
  2. embed your architects on the scrum teams themselves: this makes sense when your architects are more scarce (because it will be beneficial to have them on the teams for the other reasons we'll mention below), but also when the explorations are directly related to an existing product. In this case, the rest of the scrum team they are on will benefit directly from (and help participate in) the exploration--this learning process will make a later production implementation easier, and will furthermore help teach the other team members how to do this type of work. This is vitally important if we are short on architects and good ones are hard to hire--then we need to view these other folks not as "non-architects" but rather as "architects-to-be." This model is also easy to follow if you are already using the idea of prototyping sprints.

Technical Due Diligence.

Here, again, we are counting on our architect's experience to enable a thorough and accurate assessment of some technology. If there is already an "architecture scrum" running as described above, this might be a natural place to fit these efforts. However, in the absence of a dedicated group, it may fall on an architect that is embedded on a scrum team (ideally one working on a product that might be impacted by the technology in question), which would also be perfectly fine.

Design.
Architects' experience helps most during design exercises--knowing what kind of pitfalls to look for, what kind of generality to build in, and what kind of approaches/patterns are available and relevant. Design is also an area where collaboration is both possible and usually helpful (at least up to the amount of people that can fit comfortably around a whiteboard!). So, no doubt--we want our architects participating here in order to get good designs up front and hopefully minimize refactoring later.

Again, I am going to argue for the embedded approach here, as opposed to running an architecture scrum that spits out designs. I think the benefits here are overwhelming:

  • by generating designs as they are needed for feature development, effort will not be spent building in generality that may never be exercised
  • by having the scrum team participate in the design, those familiar with the details of the existing system will have input, ensuring the design is complete
  • by having the team participate, they will learn to become better designers themselves (again, a key benefit if there are not enough architects to go around--and there probably won't ever be, by the way--as the architecture will have to live and grow regardless of whether an architect is available to design it all).


Well, I seem to have convinced myself, anyway, that the right thing to do with your architects is to just put them on your scrum teams and let them have at it (Ken Schwaber would be so proud!). In other words, don't treat them any differently than you would any other engineer in your development organization (indeed, in our current organization, we have "architects" that do production development, and "engineers" that do software architecture, so we're probably not far from this anyway). The main benefits of this approach are:

  • relevance - by staying grounded in the day-to-day realities of creating products, our architects will be focused on real needs
  • integration - by having architecture activities happen in the context of normal scrums, those activities will have a high likelihood of impacting production code
  • collaboration - as a team member, there are no artificial barriers to communication, no "us" and "them" mentality between architecture and engineering
  • resourcing - architects can leverage their teammates to provide extra horsepower towards architecture tasks
  • learning - by having scrum teams responsible for architecture tasks, experienced team members will teach less experienced teammates how to accomplish architecture-related tasks

Tuesday, December 18, 2007
What's the APR on your technical debt?

Today we're going to discuss technical debt--what it is, what its impact is, and what to do about it.

One of the key tenets of scrum and other agile development methodologies is the idea of producing production-ready increments of work at the end of each iteration. Ken Schwaber calls this "sashimi" in the scrum framework--a slice of product. You may have also heard scrum practitioners talk about the definition of "done", i.e. what steps have to be completed before we can consider a user story satisfactorily completed, which may include steps like:

  • code written!
  • unit tests written and passing, with a certain degree of code coverage
  • automated acceptance tests written and passing
  • code refactored
  • code documented/commented (e.g. javadoc)
  • product owner signoff
  • documentation updated (e.g. UML diagrams)
  • etc.

In general, the notion is that the definition of "done" should capture the degree of code quality the team and product owner mutually agree to.

Now, anyone who has tried to do this has probably run into the following situation (as we have): a sprint gets started, and at some point during the sprint, you realize that you're not going to be able to finish all the user stories you signed up for by the end of the sprint. Or at least not get them all the way "done". So, you yield to the temptation and cut corners, just doing the implementation without all the rest of the supporting infrastructure, so that at the sprint review you can call it done. Oooh, it's so tempting! C'mon, the product owner isn't going to look at your CruiseControl outputs....

But now what you've done is incur technical debt. All that code doesn't have automated test coverage or adequate documentation, and probably has more complexity than it needs because you didn't get a chance to clean it up and refactor it. Now it's going to be harder for someone else to work on it (including you, two months down the line!), because it's inadequately documented and complex, and it's even hard to refactor without introducing bugs, because you don't have the automated tests to help you know if you covered everything! Thus, future work on this codebase gets bogged down a little bit by all this cruft.

Now, technical debt works just like credit card debt: if you keep racking it up, it really starts to kill you. Because you are now working slower, you again find yourself behind the 8-ball of not being able to finish everything by the end of the sprint, and you rack up even more technical debt. Our product owner recently commented to me that he felt like we were getting less done than we used to at the beginning of the project with fewer people, and I'm starting to think it's all this accrued technical debt.

Unfortunately, getting out of technical debt is pretty similar to getting out of credit card debt, and is probably just as hard psychologically. The first step is to cut up the credit cards -- refuse to accrue any more technical debt from this time forward, and instead only demo things at the review that are really fully "done", even if this means you don't get to everything you thought you would. This takes a certain degree of courage, and is probably something to do right after a major release, rather than just in front of one....

Then, start paying down the debt. There are a number of ways of measuring this debt, many of them easily automated with Maven, such as:

  • Cobertura - unit test code coverage measurement
  • PMD - static analysis to identify unclean code patterns, including code complexity and design issues
  • Javadoc output - how many of your packages are missing documentation? how many classes don't have class documentation, and how many of the public/protected methods are missing javadocs? Do you have any javadoc warnings?

So the point is, you can measure all of these things, and come up with some numerical measure (pick some formula you like) of your technical debt. Furthermore, if you are tracking the user story velocities of your teams, you can actually start correlating your technical debt measurements at the beginning of a sprint with the story point velocity output by that sprint. In fact, you may even be able to figure out exactly how much drag your technical debt is putting on development, which means, based on your average cost for developer time, you can actually put a dollar amount on it. You can actually tell how much interest you're paying on your technical debt in real dollars.
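As a sketch of "pick some formula you like": here is one made-up weighting of the three measurements above. The weights are completely arbitrary--the point is just to track the same number sprint over sprint:

```java
// Hypothetical technical-debt score combining the three measurements.
// The weights are arbitrary; consistency from sprint to sprint is what matters.
public class DebtScore {

    static double score(double lineCoverage,    // 0.0 - 1.0, from Cobertura
                        int pmdViolations,      // count from the PMD report
                        int missingJavadocs) {  // count of javadoc warnings
        double coverageGap = 1.0 - lineCoverage;
        return 100.0 * coverageGap + 2.0 * pmdViolations + 1.0 * missingJavadocs;
    }

    public static void main(String[] args) {
        // A perfectly clean codebase scores zero:
        System.out.println(score(1.0, 0, 0));    // 0.0
        // 75% coverage, 30 PMD violations, 12 missing javadocs:
        System.out.println(score(0.75, 30, 12)); // 97.0
    }
}
```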

Then see how the conversation with your product owner goes when you ask for some technical debt paydown to get prioritized on the product backlog....

Wednesday, December 5, 2007
Scrum thoughts: Improving individual estimation

In my last post, I described how our team routinely overestimates the amount of work it can finish in a sprint. The amount of this overestimation is quite high at times - sometimes we only finish two-thirds of what we initially plan.

So how can we improve our estimation? I think there are two sources of estimation error: one is how many hours per day you actually have to apply to sprint tasks; the other is how many hours it will take you to actually finish something.

In his article on Evidence-Based Scheduling, Joel Spolsky describes how we can compare the history of a developer's estimates vs. the actual time required to get an error factor in their estimates. I think this does a good job of addressing the second type of error, but you need to account for both if you are going to get an accurate estimate of how much you can do in a sprint.

For example, as a team lead, I often get interrupted while coding when someone asks me a technical question, or to participate in a discussion on the dev mailing list, etc. I usually "leave the clock running" for these things, so I think evidence-based scheduling would cover this kind of inaccuracy in my estimation. However, I also get scheduled to attend extra meetings which don't go towards finishing sprint tasks, and I don't leave the clock running for that. I also sometimes forget to log all my time spent against tickets, even though I update the remaining time on the tickets to make the burndown accurate. So that's another source of inaccuracy.

Fortunately, I think there's a pretty simple solution that covers all these things. Let's assume we recorded in the past sprint (or past few sprints) how many working days someone was going to be available. Then we simply take the time logged against tickets that were originally planned for in the sprint and divide that by the number of days worked to get a per-day burndown figure. If someone works nights and weekends, they'll just have a higher per-day burndown rate. If someone routinely has other duties that keep them off sprint tasks (e.g. doubling as a scrum master), their per-day burndown will be lower. If someone forgets to book time, their per-day burndown will be lower. If they routinely forget to plan for required elements of a task or if the product owner injected some new requirements mid-sprint, the per-day burndown goes down.
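A toy sketch of the per-day figure, with made-up numbers--note that only hours logged against originally-planned tickets go in the numerator, so meetings, forgotten bookings, and mid-sprint injections all automatically drag the rate down:

```java
// Per-day burndown: hours logged against originally-planned sprint
// tickets, divided by the days the person was available that sprint.
public class PerDayBurndown {

    static double perDayRate(double hoursLoggedOnPlannedTickets, int daysAvailable) {
        return hoursLoggedOnPlannedTickets / daysAvailable;
    }

    public static void main(String[] args) {
        // 57 booked hours over a 19-day sprint: 3.0 burnable hours/day,
        // well under a nominal 6 -- the gap is all the overhead described above.
        System.out.println(perDayRate(57.0, 19)); // 3.0
    }
}
```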

Now, once you know that, you can use the evidence-based scheduling trick from Joel's article to use the actual task estimation history (here's where we compare original estimates vs. actual times for finished tasks) to put an actual confidence level on how likely it is that a developer will have a set of estimated tasks done by a certain time.

So, during planning, you let someone estimate hours as if they would get to spend uninterrupted time on it, and then you can use the EBS plus per-day burndown to monitor whether something will likely get done by the end of the sprint.

To make it concrete, let's say we want to put a 95% confidence level on being able to finish all sprint tasks. So, for each new ticket I write, we go through the following exercise:

100 times, do:

  • for each ticket so far, select a random estimate ratio from my estimation history, multiply by the original estimate to get a likely actual time
  • add up total likely time, divide by per-day burndown figure to get a likely number of days
  • if likely days <= days available this sprint, trial is a success

If you have at least 95 successes, you may take on that task for that sprint. Otherwise, you're not allowed to commit to it.
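Here's a sketch of that 100-trial exercise; the ratio history, ticket estimates, and per-day burndown figure are all made-up numbers:

```java
import java.util.Random;

// Monte Carlo version of the exercise above: for each trial, draw a
// random actual/original ratio from the developer's estimation history
// for every ticket, total up the likely hours, and see whether it fits
// in the sprint at their per-day burndown rate.
public class CommitmentCheck {

    static int successfulTrials(double[] ratioHistory, double[] ticketEstimates,
                                double perDayBurndown, double daysAvailable,
                                int trials, long seed) {
        Random random = new Random(seed);
        int successes = 0;
        for (int t = 0; t < trials; t++) {
            double likelyHours = 0;
            for (double estimate : ticketEstimates) {
                // select a random estimate ratio from history for this ticket
                double ratio = ratioHistory[random.nextInt(ratioHistory.length)];
                likelyHours += estimate * ratio;
            }
            if (likelyHours / perDayBurndown <= daysAvailable) successes++;
        }
        return successes;
    }

    public static void main(String[] args) {
        double[] history = {0.8, 1.0, 1.1, 1.3, 2.0}; // past actual/original ratios
        double[] tickets = {4, 6, 8, 3};              // hour estimates, incl. the new ticket
        int successes = successfulTrials(history, tickets, 3.0, 10.0, 100, 42);
        System.out.println(successes >= 95 ? "commit to it" : "don't commit");
    }
}
```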

Scrum thoughts: Fixed feature set vs. fixed time

I think we're going to enter into an interesting discussion with our product owners where they would like to propose moving away from a fixed-length (in time) iteration model to a fixed feature set model. The claim is that it is too hard to maintain a release roadmap when the set of features that comes out of a sprint is fluid.

[Sidebar: our sprint teams routinely overestimate how much we can accomplish, so items are always falling off the end of our sprints. We've mitigated this somewhat by at least marking the bottom part of our sprint backlog as "low confidence items", but the truth is we rarely get to those items, i.e., we never totally finish our sprint backlog.]

So I think there's a valid complaint here; however, I think it is misplaced. We also never actually get far enough ahead on story point estimation with the product team to have more than a sprint's worth of rough LOEs on things. So I'm not sure that any roadmap that got produced would have any grounding in actual engineering estimates.

So now that I think about it, this is probably a version numbering problem more than anything else. We increment our version numbers with every sprint, and furthermore release the output of every sprint to production. So I think the issue is that, for internal consumption's sake, our product owner can't say "the following features will be in 1.7, these other ones in 1.8, etc."

It sounds like what we want to do is to choose a set of features for a release, and then simply run iterations until either that feature set is complete, or the product owner decides that the output of some iteration is good enough to release. Naturally, of course, now that I've written this out, this is the by-the-book definition of how to do release management with scrum. Not quite sure how we got away from doing things that way.

I'm pretty convinced we should stick with the fixed-length iteration method, mainly for the reprioritization effect it has at planning. For every sprint where we had carryover, big chunks of that carryover were routinely deferred down the priority list from sprint to sprint, suggesting that we actually produced more product value than we would have if we had just finished each sprint straight out.

Tuesday, October 16, 2007
Scrum thoughts: Staggering sprint teams

When you have two or more sprint teams dedicated to a product, I think there's an opportunity to stagger their efforts in the following way. Let's track a user story through two sprints. In the first sprint, the prototyping sprint, design, UX, and development cooperate to put together a working proposal for a user story. At the sprint review, if possible, the whole user story will be complete, where design, UX, and implementation are all consistent.

This is very important, because one of the weaknesses of the traditional waterfall method is that design, UX, and implementation must all be consistent before the software is finished. Obviously, the implementation will be influenced by the design and wireframes, the wireframes will be influenced by what is visually possible on the page, the design will be influenced by the functionality required by the wireframes, the design will be influenced by what is hard or easy to realize in HTML/CSS, and the wireframes and functionality will be influenced by what is feasible to build in the backend (whether from an LOE standpoint or from a performance point of view). The waterfall method doesn't allow for the full richness of this interaction to help arrive at the "optimal" way to solve a user story from all points of view.

However, if we are considering features where we want more lead time than a month, then it may not be possible to get all the way to a finished feature in one sprint. Instead, it may be enough to get to a consistent prototype, where there are designs, wireframes, and mostly-working code that can be shown at the review. All of the creative output may not be integrated yet, but it should be close enough to know that it can be gotten to a consistent state.

This is where the next sprint comes into play: the finishing sprint, where the mockups are brought to production by the same team. Naturally, they will take product owner feedback from the review into consideration during the finishing sprint.

In the simplest case, let's say we have two scrum teams. We can stagger their sprints such that when one team is in the prototyping sprint, the other is in the finishing sprint; this way we're still able to release new functionality to production after every sprint while still enabling cross-functional development of larger features.

However, we can actually take things a step further: if, during pre-planning, we can roughly separate the stories into "small" (can be taken from start to finish in one sprint) and "large" stories, then we can actually mix things up in interesting ways, like:

            Sprint 1   Sprint 2   Sprint 3   Sprint 4
  Team 1:   SF         SF         SF         SF
  Team 2:   LP         LF         LP         LF
  Team 3:   SF         LP         LF         LP
  Team 4:   SF + LP    SF + LF    SF + LP    SF + LF

  • SF : finishing sprint for small user stories
  • LP : prototyping sprint for large user stories
  • LF : finishing sprint for large user stories

So here we have Team 1 kept full with small user stories every sprint. Teams 2 and 3 are kept full with large user stories, but staggered (note how Team 3 starts with a small finishing sprint so they have something to do during Sprint 1!). Then Team 4 operates under a model where some of their user stories are small, and some are large.

I think the only operational trickiness here, from a software development standpoint, is that any "LP" work needs to happen in a branch and not the main development trunk, because it will not be production-ready by the end of the sprint. When the product owners finally sign off on an LP prototype as being ready (it's always possible that the product owners will not like--or will require significant enough changes to--a user story solution, thus requiring a follow-on LP sprint), then the first task of the LF sprint would be to port the branch back into the trunk and continue on.

Scrum thoughts: scrums-of-scrums and daily scrums

After planning, we dive right in to having daily scrums. We've gotten these to be pretty efficient (today we had a 10-person scrum finish in around 8 minutes). One key thing is starting on time--we charge people $1 if they are late to the scrum, and no one (not even the scrum master) is exempt. When we get enough saved up, we make a donation to charity (there was a thought that this should go to buy donuts for the team, but that felt a little like rewarding ourselves for being late). Our scrums are mainly a forum for people to schedule ad-hoc meetings with the people they are blocked on ("right after scrum in my office" being the most common meeting that gets scheduled). But running individual scrums is pretty well-documented and understood in the literature, I think.

As I mentioned in my post on sprint planning, we track individual burndowns on a daily basis -- how many hours does each person have left against each user story. At our scrum-of-scrums, which happens twice a week, the user story statuses for each team are put together to get an overall sprint status for the product. If you'll recall, we kept track of the global priorities of the user stories in our pre-planning session, so we can put this together in global priority order.

Then based on the time remaining in the sprint, we can again reassess confidence levels for user stories:

  • high confidence (green) : requires less than 50% of each person's remaining time
  • medium confidence (yellow) : requires less than 80% of each person's remaining time
  • low confidence (red) : requires more than 80% of someone's remaining time (possibly even getting into the "punt" range -- someone doesn't have enough time to finish their part this sprint, because they are overbooked)

The scrum-of-scrums is then a rebalancing effort. If one team falls behind, then you start to see a "striping" effect, where some of their user stories start falling in confidence, even though the surrounding user stories from the other teams stay the same. If you actually color these reports, it becomes pretty visually obvious. The rebalancing is all about trying to swap resources around such that the following principle holds:

No user story should have a lower confidence level than a user story with lower priority.
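That principle is easy to check mechanically. A sketch, with confidence encoded as 2/1/0 for green/yellow/red and stories listed in global priority order:

```java
// The rebalancing invariant: walking down the priority list, confidence
// should never go up -- a later (lower-priority) story with higher
// confidence than an earlier one means a higher-priority story is at risk.
public class StripingCheck {

    static boolean needsRebalancing(int[] confidenceByPriority) {
        for (int i = 1; i < confidenceByPriority.length; i++) {
            if (confidenceByPriority[i] > confidenceByPriority[i - 1]) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(needsRebalancing(new int[]{2, 2, 1, 1, 0})); // false
        // one team falling behind "stripes" a red into the greens:
        System.out.println(needsRebalancing(new int[]{2, 0, 2, 1, 0})); // true
    }
}
```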

Ways that we rebalance include swapping tasks from one team to another, or rescoping user stories to make them easier. Task swapping is easier when the teams are vertically striped across architectural levels, as it's likely that another team will still be able to do the task in question; a similar effect can sometimes be achieved by moving functionality from one architectural level to another (e.g. computing something in the middleware vs. the database).

Unfortunately, the way we are running sprints now ties the teams down a bit much to always make this flexibility possible. For example, a user story might say "put a Flash widget on the page that does X", and if we only have one Flash guy, we're pretty much stuck. If instead the user story said something more like: "we want a user to be able to see or do A,B,C on the page", then we might have some alternatives if the Flash guy gets overbooked.

Thursday, October 11, 2007
Scrum thoughts: sprint planning

If you'll recall from my earlier post about pre-planning, we enter into our sprint planning meetings with a stack of user stories written on large index cards.

The first thing we ask everyone to compute is the number of hours they have available for the sprint, starting with the number of working days, subtracting out holidays/vacation, and then multiplying by a general figure for productive hours per day. We've tended to use 6 hours per day per developer, although not everyone uses that rate, particularly tech leads and team members doing double-duty as scrum masters. For these special cases we ask each member to take a guess at the number of "burnable" hours per day they will have available. Each person writes the total number of hours down on a piece of paper in front of them.

We distribute a bunch of post-it note pads and pens around to the entire team, who sit around a large conference table. We have our product owner take each user story in turn and elaborate on it. The team then poses questions to refine the requirements, and brainstorms a plan of attack. Then the fun part starts.

People start signing up for work. We'll identify all the tasks, and as we go, people will write themselves tickets on a post-it. The tickets contain a task description, a name (who has signed up for it), and an estimate in hours. When we're done, we collect up all the post-its and keep them with the user story index card. Then we move onto the next user story.

This makes for a pretty big flurry of activity -- everyone is participating, writing tickets, brainstorming, load-balancing tasks. It is anything but boring.

Each person is then responsible for keeping track of their committed hours (I just keep a running tally, subtracting each ticket's hours off my initial availability total). We then use this to assign confidence levels to each user story. If I am within the first 50% of my total available time, I'll mark my tickets with an H (for high confidence). When I'm in the 50-80% range, I mark them with M (medium confidence), and tickets written against the last 20% of my availability are marked L for low confidence.

The overall confidence for a user story is the lowest confidence of any of the constituent tickets. So just one person working on a story with an M ticket will make the whole story medium confidence, even if everyone else has H tickets against that story. The thought here is that you need all the tickets done to actually complete the user story. This works out great for communicating back to the product team (and senior management), as it neatly captures the "cone of uncertainty" around the stuff that's furthest in the future. If all goes exactly according to plan, you get everything, but if something takes slightly longer and a low-priority user story gets bumped off, people might be disappointed, but no one is surprised.
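The ticket and story confidence rules can be sketched like so (thresholds as above; the code names are invented):

```java
// Each person's tickets are marked by how far into their availability the
// ticket falls (H within the first 50%, M within 80%, L beyond that), and
// a story's confidence is the minimum over all of its tickets.
public class StoryConfidence {

    static char ticketConfidence(double committedSoFar, double totalAvailable) {
        double fraction = committedSoFar / totalAvailable;
        if (fraction <= 0.5) return 'H';
        if (fraction <= 0.8) return 'M';
        return 'L';
    }

    static char storyConfidence(char[] ticketConfidences) {
        char worst = 'H';
        for (char c : ticketConfidences) {
            if (rank(c) < rank(worst)) worst = c;
        }
        return worst;
    }

    private static int rank(char c) { return c == 'H' ? 2 : c == 'M' ? 1 : 0; }

    public static void main(String[] args) {
        // 30 of 60 available hours committed => still high confidence:
        System.out.println(ticketConfidence(30, 60)); // H
        // one M ticket drags the whole story to medium:
        System.out.println(storyConfidence(new char[]{'H', 'H', 'M'})); // M
    }
}
```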

Generally, this sprint planning process is one of the things I think we have nailed. There are a number of good features about it:

  • It's fun! We used to sit around and have someone recording the tickets in a spreadsheet on a laptop while everyone else watched, and man was this a low-energy, spirit-sapping exercise. It's much better to get everyone involved, and people get to write the ticket description exactly how they need it worded.
  • It's pretty accurate. We used to not assign tasks to specific people, but rather just generate all the tasks and manage them as a big to-do list where people would just pick off the next task they could do. The problem was that we would argue about the estimates, failing to take into account differing skill levels of different team members. Now, the person doing the work owns the estimate.
  • It's pretty fast. Because a lot of the discussion and ticket writing happens in parallel, you can chew through user stories pretty fast.

The main downside to this approach is that the poor scrum master has to take all the index cards and post-its and enter them into our ticketing system (we use Trac). Usually the scrum master can get away with entering all the tickets into a spreadsheet, and then we have some scripts that import them into Trac. Ideally, I think it would be fun to just use the index cards and post-its and keep a team wall going, but we're pretty cramped for space, and wall space is in pretty short supply.

Wednesday, October 10, 2007
Scrum thoughts: cross-functional teams

For a little while we experimented with truly cross-functional teams, where we had copywriters, graphic designers, UX architects, frontend developers, backend developers, and QA folks all on the same team. I thought this was great, and our planning meeting was very interactive, as everyone had things to work on for new features. There was constant collaboration throughout the sprint: design/frontend, frontend/backend, product owner/QA/UX, etc.

As a developer, this was great. We had instant access to team members from the other disciplines, and were able to brainstorm about how a feature should work and get questions answered quickly. There was a very real team vibe, very exciting. A lot of the UX/algorithm work was never documented more formally than scratched-out notes on a whiteboard (yes, I think this was awesome, because no one had to spend time creating and updating intermediate artifacts like wireframes, and no one had to spend time waiting for those artifacts -- ideas were worked out together and then rendered directly into code). The six weeks that we operated like this were really fun!

Now, we did run into some problems managing dependencies between team members sometimes--e.g. a frontend developer would end up waiting for a design to be finalized and then only have a day at the end of the sprint to get it coded up. I suspect we could have ameliorated some of this by exchanging rough drafts, for example, or simply identifying the affected user story as being at-risk. We work around this in a purely development environment by stubbing out functionality to let other people keep working, and I suspect there are similar things that can be done with the other creative disciplines like design and UX.

Unfortunately, someone (or multiple someones) came to the conclusion that it simply wasn't realistic to have the team really be cross-functional. We've now retreated to a more waterfall method where we try to get design and UX out ahead of the sprints. This makes for really boring planning meetings and daily scrums, because those folks largely just say "I'm working on the stuff for the next sprint," and I have no idea what that stuff is. We also end up having to do design and UX work again anyway, because while it's really helpful to have the designs and wireframes as a starting point, they never exactly match what we can build in a sprint, and usually require some amount of clarification/adjustment anyway.

There's a claim that design and UX work takes a long time to get just right, and so it must be done ahead of time if the full feature is to be completed in one sprint. I'm not sure I totally buy this, though, as I could really say the same thing about writing code, which is itself a creative endeavor. We figure out what's possible in the time allotted, and that's what we do. If we're not satisfied with how it looks after one sprint, then we plan to alter/improve it in the next one.

Now possibly, there is some pressure around this because we've been actually releasing every sprint to production. I wonder if having a longer release cycle spanning multiple sprints would help give folks the courage to be able to "try the possible" for a sprint, where we might spend two sprints making new features and then a third sprint to prepare it for production and get it "just right".

If anyone out there has had successful experiences with multi-disciplinary teams, please leave a comment and let us know how it works for you.

Tuesday, October 9, 2007
Scrum thoughts: Pre-planning

I think the way I'll go about my scrum discussion is to go through the order of battle for one of our sprints, at least from my point of view as a software engineer/tech lead.

We usually kick off a sprint with a pre-planning session that happens before the formal Sprint planning session. In this session, we walk through a list of asks from our product team (who come prepared with a big whopping list of them off the product backlog), put LOE (level of effort) estimates on them, and then help distribute them across the multiple scrum teams we have dedicated to the product.

We've been capturing the features on index cards in what we call "user stories" for convenience, even though they are not written in the usual "As a user, I want X so that I can do Y" format. I'll continue to call them user stories here, since that's what we call them, but you should mentally substitute "feature name" here instead.

Now, we are still essentially working in a waterfall methodology, and just running scrum within the development portion (I really dislike this, and think it's inefficient, by the way), so a lot of the feature requests are "implement page X to spec", where we get an IA wireframe and some designs handed down as the definition of the feature.

So now we have some senior engineers, representing all the development teams, provide a high-level estimate. We use story-point estimation for this, and play "Planning Poker" with estimates of 0, 1/2, 1, 2, 3, 5, 8, or 13 (so the scale spans roughly an order of magnitude). We sometimes get into the weeds here, but we're getting better at quickly arriving at a common estimate. I'd say most of the engineers who participate are roughly equally productive, and we have a rough rule of thumb that one story point is about two developer days for us. YMMV.

So we write the story point (SP) estimates on the user story index cards, and then the product folks prioritize them. We actually stick the index cards up on the wall and then the product team rearranges them in priority order while the engineers tag them to indicate which team would likely tackle each one (according to which subsystems those teams have expertise in).

Now, given our past SP velocities, we can guess where the "fold" is likely to be -- how many of the features are we likely to actually be able to get done. We also, thanks to the team tagging, can juggle some of the tasks around to balance out the workload for the teams (especially for tasks where multiple teams could take them on). The product team may also juggle some of the lower priority items around to move things back and forth across the fold (there's nothing like saying "You probably won't get this feature this month" to see how important a feature really is!).
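That fold-finding guess is just a running sum of the prioritized estimates against the historical velocity. Here's a minimal sketch of the arithmetic (hypothetical numbers and method names, not our actual tooling):

```java
public class FoldFinder {
    /**
     * Given story-point estimates in priority order and the team's
     * historical velocity, return how many stories fall "above the fold",
     * i.e. how many are likely to actually get done this sprint.
     */
    public static int storiesAboveFold(int[] pointsInPriorityOrder, int velocity) {
        int spent = 0;
        int count = 0;
        for (int points : pointsInPriorityOrder) {
            if (spent + points > velocity) {
                break; // everything from here down is below the fold
            }
            spent += points;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // With a velocity of 20 SP, the 5+8+3 stories fit (16 points),
        // and the 13-pointer pushes everything after it below the fold.
        int[] estimates = {5, 8, 3, 13, 2};
        System.out.println(storiesAboveFold(estimates, 20)); // prints 3
    }
}
```

Real planning is messier than a strict greedy cutoff, of course -- that's exactly the juggling across the fold described above.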

When we're done, we have a global priority on the user stories, and know which user stories we're going to present to each team for planning on the next day, and take down the index cards so each team can have their stack to plan with.



What's working:

  1. Story point estimation with historical story point velocity measurement is surprisingly accurate. When we do the actual down-and-dirty sprint planning, we end up pretty close to the same set of tasks.
  2. Putting the user stories on index cards and taping them to the wall lets multiple product folks work on the prioritization in parallel, while the engineering folks can work on the team-tagging in parallel. It's also much easier to reprioritize on the wall than it is in a spreadsheet!


What's not working:

  1. This is time consuming as heck. We usually spend 4-5 hours doing this for a one-month sprint with about 10-12 total development resources. It is emotionally (from arguing over SP estimates) and mentally (from the length) draining.
  2. Due to the nature of our user stories, we have to actually look at wireframes in minute detail to grok the nature and complexity of a task to put an SP estimate on it. This usually means this has to be gone over twice -- once in the pre-planning meeting, and then once again in the actual sprint planning meeting. This is a direct result of having fully-spec'ed wireframes come down from on high. For example, it would actually be quicker to estimate a story like "As an editor, I would like to have a page where I can drive the content" versus one which is "As a product owner, I would like you to implement the vision of an editorial page as conceived by our design and IA teams". Especially if the design and IA teams have decided to put some complicated (to implement) features in there.

Scrum thoughts: intro

Sorry for the pause in posting--I have been a member of our support scrum team at work for the past six weeks or so, and the ensuing insanity in my work day left me too tired to post anything here. I think what I will do is pick up with a review of some of the scrum practices we're using, what works, and what doesn't, probably moving through some of the same sequences of areas as the somewhat infamous (at least around here) "Swedish Scrum" article, Scrum and XP From the Trenches. Hope you'll enjoy the discussion.

Monday, August 20, 2007
Easy sanity testing of a webapp with Python

Needed to whip up some automated sanity testing of a webapp, and Cactus seems to have a somewhat steep learning curve. Eventually, yes, this might be the way to go, because we should be able to integrate it nicely with Maven and CruiseControl, but I needed something quick.

In particular, I needed to check whether a certain number of images were showing up in a portion of a JSP-generated page.

Step 1. Prep the JSP to make it testable. Primarily, we just want to generate a unique HTML id attribute for the <img> tags in question:

<c:forEach items="${items}" var="item" varStatus="itemStatus">
  <c:set var="imageId">specialImage<c:out value="${itemStatus.index}"/></c:set>
  <img id="${imageId}" .../>
</c:forEach>

This numbers the images with ids like specialImage0, specialImage1, etc.

Step 2. Fetch and parse the page with Python's HTMLParser library. Basically, we write a parser that only pays attention to the tags we've marked in the JSP above:

import HTMLParser
import re

class TestSpecialImageCount(HTMLParser.HTMLParser):

    def reset(self):
        # chain to the base class so the parser machinery gets initialized too
        HTMLParser.HTMLParser.reset(self)
        self.imageIds = []

    def handle_startendtag(self, tag, attrs):
        if tag == "img":
            for key, val in attrs:
                if key == "id":
                    if re.match("specialImage[0-9]+", val):
                        if val not in self.imageIds:
                            self.imageIds.append(val)

    def testPassed(self):
        return len(self.imageIds) == 6

Then running the test is as easy as:

import urllib
import TestSpecialImageCount

test = TestSpecialImageCount.TestSpecialImageCount()
page = urllib.urlopen("http://localhost:8080/webapp/home.htm").read()
test.feed(page)
if test.testPassed():
    print "OK"
else:
    print "FAILED"

You could pretty easily batch this up and instantiate several parsers for different tests so you have one uber Python script to run all the tests (this is what we did). But the point here is that you don't have to write very much code at all to get the sanity test written, because you can write the HTML parsers so compactly in Python.

Thursday, August 9, 2007
Auto-deploying to Tomcat from CruiseControl with Maven

We have a web project that gets built using Maven; it was relatively simple to get this running under CruiseControl since CC has pretty good Maven2 support. However, we also wanted to have a successful build actually deployed to a Tomcat somewhere at a known location so that people could always check out the latest version.

I played around with various maven plugins for working with Tomcat, but had a really hard time getting them to work properly. Then I realized, even if I got the maven plugins working, I would need to do something else, because we run into PermGen errors on our Tomcats when we undeploy/redeploy, so we'd actually need to do an undeploy/restart-tomcat/deploy loop instead. Of course, none of the tomcat plugins can do the restart-tomcat bit (they can shut it down, but because they interact with the tomcat management interface, they can't actually start it up again!).

So, I went back down the script route. I wrote this script which can do the undeploy/restart/redeploy loop:



export CATALINA_HOME=/usr/local/lib/apache-tomcat-5.5.17
export JAVA_HOME=/usr/java/jdk1.5.0_09

# adjust these for your environment
TOMCAT_PORT=8080
TOMCAT_ADMIN=admin
TOMCAT_PASSWD=secret
CC_BUILD=/home/cruise/projects/webapp/trunk

# undeploy old WAR, if any
( echo "--user $TOMCAT_ADMIN:$TOMCAT_PASSWD"
  echo "--silent"
  echo "--url http://localhost:$TOMCAT_PORT/manager/html/undeploy?path=/" ) | \
curl --config - > /dev/null

# stop the tomcat
(cd $CATALINA_HOME; bin/shutdown.sh)

# pause to wait for shutdown
sleep 5

# restart the tomcat
(cd $CATALINA_HOME; bin/startup.sh)

# drop the WAR in place
cp $CC_BUILD/webapp/target/ROOT.war $CATALINA_HOME/webapps

This assumes that you have the tomcat installed in the given CATALINA_HOME with the manager app enabled and set up with the appropriate admin credentials, and that the build directory for the CC project is in the "trunk/webapp" directory. Note also that we're deploying to the root context, so you will want to modify your undeploy URL if that's different.

The last step is to add the following CC project, which assumes an existing "webapp" CC project:

  <project name="webapp-tomcat" buildafterfailed="false">
    <modificationset>
      <buildstatus logdir="logs/webapp"/>
    </modificationset>
    <schedule interval="60">
      <exec command="/home/cruise/bin/redeploy"/>
    </schedule>
  </project>
This triggers off a successful CC build of "webapp" and just calls the script we wrote above. Nice and easy.

Monday, July 23, 2007
Exposing the stereo knobs

As I mentioned in my last post, I'm about to implement an algorithm that involves a bunch of constant coefficients that we might be wanting to play around with. In order to make them easy to play with at runtime, we'll go the JMX route.

First, the service code will just have the coefficients injected as dependencies:

public class MagicalAlgorithmServiceImpl {
 protected int coefficientOne;
 protected int coefficientTwo;

 public void setCoefficientOne(int one) { ... }
 public void setCoefficientTwo(int two) { ... }

 public int performMagic() { ... }
}

Naturally, in the Spring application context, we'll create the bean like so, so it can be easily modified at build/startup time, using Maven resource filtering:

<bean id="magicalAlgorithmService" class="com.blogspot.codeartisan.magic.MagicalAlgorithmServiceImpl">
 <property name="coefficientOne" value="${magic.coefficientOne}"/>
 <property name="coefficientTwo" value="${magic.coefficientTwo}"/>
</bean>

Now, we just need to define a management interface:

public interface MagicManager {
  public void setCoefficientOne(int one);
  public void setCoefficientTwo(int two);
}

Now, we make the MagicalAlgorithmServiceImpl implement MagicManager.

Then if you've managed to get a JMX port exposed through your container (here's how to do it for Tomcat), all you need to do to expose this functionality via JMX is to use Spring's MBeanExporter:

<bean id="mbeanServer" class="java.lang.management.ManagementFactory" factory-method="getPlatformMBeanServer"/>

<bean id="exporter" class="org.springframework.jmx.export.MBeanExporter" lazy-init="false">
  <property name="server" ref="mbeanServer"/>
  <property name="beans">
    <map>
      <entry key="MyApp:type=Twiddling,name=magicAlgorithm" value-ref="magicalAlgorithmService"/>
    </map>
  </property>
  <property name="assembler">
    <bean class="org.springframework.jmx.export.assembler.InterfaceBasedMBeanInfoAssembler">
      <property name="interfaceMappings">
        <props>
          <prop key="MyApp:type=Twiddling,name=magicAlgorithm">com.blogspot.codeartisan.magic.MagicManager</prop>
        </props>
      </property>
    </bean>
  </property>
</bean>

That's it. Now you can twiddle with the stereo knobs at runtime via a JMX console.
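If you're curious what the console is doing, the edit-an-attribute operation can be reproduced programmatically against the platform MBeanServer using only the JDK. This is a self-contained sketch (the Magic/MagicMBean names here are made up for illustration, not the Spring-managed classes above):

```java
import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean convention: the management interface must be named
// <ImplementationClassName>MBean.
interface MagicMBean {
    int getCoefficientOne();
    void setCoefficientOne(int one);
}

class Magic implements MagicMBean {
    private volatile int coefficientOne = 10;
    public int getCoefficientOne() { return coefficientOne; }
    public void setCoefficientOne(int one) { coefficientOne = one; }
}

public class JmxTwiddleDemo {
    public static int run() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("MyApp:type=Twiddling,name=magicAlgorithm");
        Magic magic = new Magic();
        server.registerMBean(magic, name);
        // This is effectively what jconsole does when you edit the
        // CoefficientOne attribute in the UI:
        server.setAttribute(name, new Attribute("CoefficientOne", 42));
        server.unregisterMBean(name);
        return magic.getCoefficientOne();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run()); // prints 42
    }
}
```

Spring's MBeanExporter spares you the registerMBean boilerplate, but underneath it's the same MBeanServer calls.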

Friday, July 20, 2007
Customer engagement

Where I work, we have a product group that functions as our product owners for Scrum development. Fortunately, they are co-located, which generally makes them pretty accessible for clarifications, although they do have this habit of having to be in a lot of meetings (hi guys).

So today I sat with one of them to work out a fairly complicated user-specific algorithm for sorting a bunch of movies and tv shows. In the end we came up with a scoring system involving a lot of constants as coefficients; the plan was to make those easily configurable so we could play with them.

I'm actually planning to take this a step further than normal, and rather than just making them configurable from the Spring application context, I'm going to write an MBean that makes them configurable at run-time via jconsole. Then, I'm going to show the product guys how to play with jconsole and let them come up with the actual coefficients we want to use while playing with a development instance.

I'm briefly debating handing out instructions to multiple executives so they can play too....

When I pick up again on Monday I'll post some basic code snippets.

Tuesday, July 17, 2007
Keep it Simple, Stupid

Well, I just spent a couple of hours last night trying to get the Cargo Maven2 plugin to work the way I wanted it to, which was to be able to control an already-installed local Tomcat by saying "mvn cargo:start" and then having that process exit.

Now the whole motivation was to try to get CruiseControl to redeploy the latest version of our webapp after a successful build. Due to various undeploy/deploy cycle PermGen memory leaks, and because it's on a shared server, I essentially wanted to just have CC do a "mvn cargo:undeploy cargo:stop cargo:deploy cargo:start". Unfortunately, it looked like this process would hang.

When I took a step back, I realized I could do that with a pretty short shell script, something like:


# ... config variables here

curl --user $TOMCAT_ADMIN:$TOMCAT_PASSWD --url "http://localhost:$TOMCAT_PORT/manager/html/undeploy?path=/"
(cd $CATALINA_HOME; bin/shutdown.sh)
scp $MAVEN_REPO/webapp-1.2.3-SNAPSHOT.war $CATALINA_HOME/webapps/ROOT.war
(cd $CATALINA_HOME; bin/startup.sh)

Ok, add a little more error checking and you're basically done; then just trigger it with an <exec> CC task conditioned on the normal continuous integration task.

Moral of the story: don't try too hard to do things in a fancy, snazzy way when a simple way works just fine. Incidentally, this is why some of my favorite phone screen questions to ask folks who are interviewing are (more or less):

  • what programming languages do you know/use?
  • do you know your way around a Unix command prompt?

If I didn't know enough Unix commands or shell scripting to do what I did above, I probably would have either given up or had to spend a ton more hours digging through the source code to the maven plugin to figure out why it wasn't doing what I wanted it to do.

Friday, July 13, 2007
Using interface hierarchies to enforce access protection

The "implements" relationship in Java between a class and its interface provides for Abstract Data Types where the client of the interface can't get at the guts of the implementation to muck around with it.

I recently encountered this where we had a user object that needed to behave differently based on different login states. We decided to use the State Design Pattern, where we would have the user object delegate to a state object for its state-specific behavior.

The way we initially set this up was to set up the User interface (one example here was that we needed to use a different user ID in different states):

public interface User {
  String getUserId();
}

public interface UserState {
  String getUserId(User u);
}

public class UserImpl implements User {
  private String emailAddress;
  private String sessionId;
  private UserState userState;
  public String getUserId() {
    return this.userState.getUserId(this);
  }
}
Ok, great. Now, it turned out that the userId, depending on what state you were in, was either your sessionId or a hash of your email address. This meant that the user states had to actually get at the emailAddress and sessionId of the UserImpl. Unfortunately, the getUserId(u) method in the UserState interface takes a User as an argument, and there's no way to get the emailAddress of a User.

So we initially went down the road of adding getEmailAddress() and getSessionId() to the User interface, but that started clogging it up with a bunch of methods that the clients of Users would never need to use.

Eventually, we settled on a revised hierarchy:

public interface User {
  String getUserId();
}

public interface InternalUser extends User {
  String getEmailAddress();
  String getSessionId();
}

public interface UserState {
  String getUserId(InternalUser iu);
}

The implementation of UserImpl didn't actually change, but now we could still have the user object and its state interact via interfaces (side note: maybe in this case it would have been ok to pass a UserImpl as the argument of the UserState's getUserId(u) method, because this was so simple, but I've gotten really used to passing arguments and dependencies as interfaces). Also, the clients of the User interface were separated from having to know how the users were being implemented.

So: lesson for the day: if your interfaces are getting too bloated with methods, one place to look for refactoring is to try to break that interface into an inheritance hierarchy to simplify things.

Monday, July 9, 2007
TestUtil: package for unit testing setters/getters

In a comment on an earlier post, a reader Tatsu reported on the TestUtil package from the GTC Group that's a step ahead of us in this discussion. Here's a sample snippet of all you have to do to test the setters/getters of a Java class:

public void testSettersAndGetters() {
  assertTrue(TestUtil.verifyMutable(underTest, 1, 0));
}
where underTest is the instance of the class you are testing (created and injected in the setUp method of your JUnit). If someone has time, please dig in and let's understand what the 1 and 0 arguments are, and whether this is covering all the cases we've been talking about.
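I haven't looked inside TestUtil myself, so here's only a guess at the mechanics: a reflection-based verifier probably walks the setter/getter pairs and round-trips sample values through them. This sketch is purely illustrative (the real verifyMutable signature, and its 1 and 0 arguments, are not reproduced here):

```java
import java.lang.reflect.Method;

public class MutableVerifier {
    /**
     * For every public setX(T) with a matching getX() on the object's class,
     * push a sample value through the setter and check that the getter
     * returns it. Only int and String properties are handled in this sketch.
     */
    public static boolean verifyMutable(Object underTest) throws Exception {
        Class<?> c = underTest.getClass();
        for (Method setter : c.getMethods()) {
            if (!setter.getName().startsWith("set")
                    || setter.getParameterTypes().length != 1) {
                continue;
            }
            Class<?> type = setter.getParameterTypes()[0];
            Object sample;
            if (type == int.class) {
                sample = Integer.valueOf(42);
            } else if (type == String.class) {
                sample = "sample";
            } else {
                continue; // sketch: don't know how to fabricate other types
            }
            Method getter;
            try {
                getter = c.getMethod("get" + setter.getName().substring(3));
            } catch (NoSuchMethodException e) {
                continue; // write-only property; ignore here
            }
            setter.setAccessible(true);
            getter.setAccessible(true);
            setter.invoke(underTest, sample);
            if (!sample.equals(getter.invoke(underTest))) {
                return false; // setter/getter pair doesn't round-trip
            }
        }
        return true;
    }
}
```

A no-op or miswired setter fails the round-trip, which is exactly the class of bug we've been discussing in the last few posts.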

Thursday, July 5, 2007
Unit testing boolean setters and getters

It's crunchtime down on the ranch, so just a quick post today:

public class Foo {
  protected boolean theBool;
  public void setTheBool(boolean b) {
    theBool = b;
  }
  public boolean getTheBool() {
    return theBool;
  }
}

public class TestFoo extends TestCase {
  private Foo foo;
  public void setUp() {
    foo = new Foo();
  }
  public void testSetsOwnTheBoolForSetTheBool() {
    // cover all four (previous_state, new_state) cases
    foo.theBool = false;
    foo.setTheBool(true);
    assertEquals(true, foo.theBool);
    foo.theBool = true;
    foo.setTheBool(true);
    assertEquals(true, foo.theBool);
    foo.theBool = true;
    foo.setTheBool(false);
    assertEquals(false, foo.theBool);
    foo.theBool = false;
    foo.setTheBool(false);
    assertEquals(false, foo.theBool);
  }

  public void testReturnsOwnTheBoolForGetTheBool() {
    foo.theBool = true;
    assertEquals(true, foo.getTheBool());
    foo.theBool = false;
    assertEquals(false, foo.getTheBool());
  }
}
Again, this is in the context of creating an IDE command "insert-getter-setter-tests" which would take the property name (theBool), property type (boolean), and variable name of the class under test (foo) and generate the text for the two unit tests shown above. Did I miss anything for this one?

Tuesday, July 3, 2007
Unit testing setters and getters for Java base types

Yesterday we took a look at some "stock" unit tests for Java bean-style setters and getters, where the underlying property was an object. The tests for properties with base types will be similar, but slightly different. One nice thing about the object tests is that they can use the assertSame assertion (essentially that two objects are == each other) to make sure that the setters/getters do exactly what you thought they would. It's a little different with base types, because these must be compared by value and not by reference. For example, suppose we had implemented a setter for an integer like this (maybe as a stub):

protected int n;
public void setN(int newN) {
  /* no-op */;
}
Now, what if your unit test looked like this:
public void testSetsOwnNOnSetN() {
  int n = 0;
  underTest.setN(n);
  assertEquals(n, underTest.n);
}
It would pass (eek)! In the object cases, we were creating a new mock object and then making sure that object got put in there. Now, we can't necessarily tell. And there's always some chance that the setter is doing something funky, or that it got stubbed out with just the wrong value (incidentally, this is a good reason why it's better practice to throw an IllegalStateException("Not implemented") for an unimplemented method rather than just return something of the right type). So, I think the easy solution here is to use java.util.Random and generate a random non-zero value to use in the tests. Even better, generate two different values, and do:
underTest.n = rand1;
underTest.setN(rand2);
assertEquals(rand2, underTest.n);
Probably for a boolean you just want to exhaustively check the four cases of (previous_state, new_state). That's it for today. We'll be back on Thursday after the July 4th holiday, perhaps with a full-on listing of the unit tests for all the base types.
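To make the two-random-values idea concrete before Thursday, here's a self-contained sketch (plain Java rather than JUnit so it stands alone; the Production bean is a stand-in for your class under test):

```java
import java.util.Random;

public class IntPropertyTestDemo {
    // A bean with a correctly implemented int property.
    static class Production {
        protected int n;
        public void setN(int newN) { this.n = newN; }
        public int getN() { return this.n; }
    }

    /** The two-random-values pattern: the prior state differs from the
     *  value being set, so a stubbed/no-op setter can't pass by accident. */
    public static boolean testSetsOwnNForSetN(Production underTest) {
        Random random = new Random();
        int rand1 = random.nextInt(1000) + 1;         // non-zero prior state
        int rand2 = rand1 + 1 + random.nextInt(1000); // guaranteed different
        underTest.n = rand1;
        underTest.setN(rand2);
        return underTest.n == rand2;
    }

    public static void main(String[] args) {
        System.out.println(testSetsOwnNForSetN(new Production()) ? "OK" : "FAILED");
    }
}
```

If setN were the no-op stub from above, the field would still hold rand1 and the check would fail, which is the whole point of picking two distinct values.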

Monday, July 2, 2007
Unit testing setters and getters

So here's a question for the masses: is it worth it to unit test simple setters and getters? I would contend that if you can automatically generate the code for them, you can also automatically generate the unit test code that tests them (I smell a new ELisp command coming...). So what code should this tool actually generate? Here's my first cut, assuming we're going to use EasyMock: Let's say the name of the bean is foo, and it has class Foo (we'll go into unit testing beans of base type in a later post). Also, let's assume that the code for this in the production class (to distinguish it from the unit test class) looks as follows:

public class Production {
  protected Foo foo;
  public void setFoo(Foo foo) { = foo; }
  public Foo getFoo() { return; }
}
Note here that we made the foo field have protected scope, so that our test class (which we are assuming will be in the same package) can access it directly. I claim that these are the tests we need to write:
public class TestProduction extends TestCase {
  protected Foo mockFoo;
  private Production underTest;
  public void setUp() {
    mockFoo = EasyMock.createMock(Foo.class);
    underTest = new Production();
  }
  private void replayMocks() { EasyMock.replay(mockFoo); }
  private void verifyMocks() { EasyMock.verify(mockFoo); }
  public void testSetsOwnFooForSetFoo() {
    replayMocks();
    underTest.setFoo(mockFoo);
    assertSame(mockFoo,;
    verifyMocks();
  }
  public void testReturnsOwnFooForGetFoo() {
    replayMocks(); = mockFoo;
    Foo result = underTest.getFoo();
    assertSame(mockFoo, result);
    verifyMocks();
  }
}
The replayMocks and verifyMocks are a habit I've developed, where I actually replay and verify all the mock objects in the test class, just to make sure I don't forget any. The use of these methods for these particular test cases actually asserts that there's no funny business going on here; none of the collaborators for the Production class are being called. So are there any cases this misses? What else would you want to test on your setters and getters? Tomorrow: what to do about base types.

Friday, June 29, 2007
More on calling domain object methods from JSPs

This is to follow up from yesterday's post about calling domain object methods from JSPs. If you'll recall, there were three methods proposed:

  • controller call: make the call in the controller, and pass the results as extra entries in the model
  • data transfer object: put the logic in a special object that provides no-argument methods for retrieving the data in the JSP page
  • taglib: create a taglib that essentially just allows the JSP to call the method itself
Having thought a bit about this for a day on the back burner, I'm pretty sure that going the taglib route is not quite the appropriate one here, at least for this method call (which was essentially a range query); so I think it comes down to either the controller call or the DTO.

So where do each of these make sense? I think if you are pretty sure that a standard web application is the only way you'll really be viewing these results, then the controller call makes a lot of sense, particularly when there's not a lot of model entries to make. The specific example we had, where we were storing a Post object and a List in the model, still keeps the controller pretty simple.

However, I think if you need to do this more than a handful of times, or if you know you are going to be calling the underlying business logic from multiple frontends (maybe via a SOAP web service to power a remote AJAX web widget), then it makes sense to start migrating towards the DTO solution. I'd actually take it a step further now, and encapsulate the DTO creation in a service:

public class PostController implements Controller {
  private PostDTOService postDTOService; // dependency
  public ModelAndView handleRequest(HttpServletRequest request,
                                    HttpServletResponse response) {
    String postId = request.getParameter("postId");
    PostDTO postDTO = postDTOService.getDTO(postId, 0, 8);
    Map model = new HashMap();
    model.put("postDTO", postDTO);
    return new ModelAndView(..., model);
  }
}

public class PostDTOService {
  private PostDAO postDAO; // dependency
  public PostDTO getDTO(String postId, int first, int max) {
    Post post = postDAO.getPostById(postId);
    if (post == null) return null;
    List<Comment> comments = post.getCommentsByRange(first, max);
    PostDTO dto = new PostDTO();
    dto.setPost(post);
    dto.setComments(comments);
    return dto;
  }
}

public class PostDTO {
  private Post post;
  private List<Comment> comments;
  public Post getPost() { ... }
  public void setPost(Post p) { ... }
  public List<Comment> getComments() { ... }
  public void setComments(List<Comment> cs) { ... }
}

Now everyone's roles can be pretty succinctly described:

  • controller: gather parameters from HTTP request and pass to service; place returned object in model and invoke view
  • service: makes the middleware logic calls to produce the needed data and instantiates the DTO
  • DTO: just carries the data around
  • JSP/view: renders the data contained in the DTO
So, my question to you is: is it too much trouble to add the service layer and DTO just to keep the controller and JSP simple, especially when there's no reason the controller couldn't do this? I'd say the service/DTO route probably has a more elegant design to it, but is there such a thing as going to extremes here?

Thursday, June 28, 2007
How to call domain object methods from JSP

It seems like the earlier post about super() spawned at least a little (offline) discussion, so here's another comparison of some different ways of approaching a problem. People should definitely feel free to post comments, lend your opinions, offer up code snippets, etc. Ok, so here's the deal. We have a middleware domain object that has a method with arguments, and we essentially want to call that method from a JSP page. To make this concrete, let's suppose we have some software supporting a blog. Here's the middleware for a blog post:

public class Post {
 public String getTopic() { ... }
 public List<Comment> getCommentsByRange(int first, int max) { ... }
}
So, let's say we have a web page where we want to show, for example, the first eight comments under the post itself. Probably the JSP page will want to, somehow, essentially call post.getCommentsByRange(0,8).
Approach A: Call from controller, add to model. Our controller will look like this (assuming Spring MVC).
public class PostController implements Controller {
 public ModelAndView handleRequest(...) {
   Map model = new HashMap();
   String postId = request.getParameter("postId");
   Post post = postDao.getPost(postId);
   List<Comment> comments = post.getCommentsByRange(0, 8);
   model.put("post", post);
   model.put("comments", comments);
   return new ModelAndView(..., model);
 }
}
And our jsp page will contain the following snippet:
<c:forEach items="${model['comments']}" var="comment">
  <!-- display a Comment in here -->
</c:forEach>

Pros:
  • No extra classes or taglibs needed.
Cons:
  • Presentation/business logic appears in the controller, which is arguably not its job.
  • Controller gets really messy the more items like this have to get added to the model.

Approach B: Create a data transfer object. This creates a special class whose methods take no arguments:
public class PostDTO {
  private Post post;
  private List<Comment> comments;
  public PostDTO(Post post, int start, int max) { = post;
    this.comments = post.getCommentsByRange(start, max);
  }
  public String getTopic() { return post.getTopic(); }
  public List<Comment> getComments() { return comments; }
}
Now the controller looks like as below. Notice that there's only one object put into the model, so the controller is very simple.
public class PostController implements Controller {
 public ModelAndView handleRequest(...) {
   Map model = new HashMap();
   String postId = request.getParameter("postId");
   Post post = postDao.getPost(postId);
   PostDTO postDTO = new PostDTO(post, 0, 8);
   model.put("postDTO", postDTO);
   return new ModelAndView(..., model);
 }
}
Finally, the JSP snippet looks like:
<c:forEach items="${model['postDTO'].comments}" var="comment">
  <!-- display a Comment in here -->
</c:forEach>

Pros:
  • Standard JSP.
  • Controller stays simple, and does not contain presentation logic.
Cons:
  • We have to create this extra DTO class which doesn't do anything particularly interesting.

Approach C: Use a taglib. I'm not going to show the code here (want to keep to my self-imposed time limit), but essentially, we create a taglib that knows how to call the getCommentsByRange() method of a post. The JSP would look like:
<c:set var="comments" value=""/>
<mytags:postComments post="${model['post']}" start="0" max="8" outvar="comments"/>
<c:forEach items="${comments}" var="comment">
  <!-- display a Comment in here -->
</c:forEach>

Pros:
  • Controller stays simple, and does not contain presentation logic.
Cons:
  • We have to create a taglib which pretty much exists just to allow the jsp to invoke a method on the domain object.
Exercise for the Reader. Which approach would you use, and why? Does it depend on the situation?

Wednesday, June 27, 2007
Chatty unit tests

Did I mention this blog is going to be a bit stream-of-consciousness, hopping around from topic to topic? Anyway, wanted to talk today about "chatty" unit tests--unit tests which produce logging output when they run. Most of the time this is due to logging statements from the class under test. Now, most of our Java classes use commons-logging in the following fashion:

public class Thing {
  private static final Log logger = LogFactory.getLog(Thing.class);
  ...
}
This makes sure there's one logger per class (not per object), which is usually what you want, so that you can control logging (particularly at runtime) on a per-package basis. However, now when you run unit tests against Thing, it might spew out a bunch of log messages, particularly if those messages are at the INFO level, or if you log exceptions at ERROR (you are testing the exception cases in your unit tests, right?). This not only scrolls a bunch of stuff across the screen that you probably don't care about (especially for tests that pass), but it also slows your unit tests down with I/O. So, a simple solution is to code it like this:
public class Thing {
  private static final Log defaultLogger = LogFactory.getLog(Thing.class);
  private Log logger;
  public Thing() {
    this.logger = defaultLogger;
  }
  public Thing(Log logger) {
    this.logger = logger;
  }
}
Now, most of the time your class will behave as before, especially in a Spring/Hibernate world where everything is expected to have a default constructor. However, now in your unit tests you can either:
  public void setUp() {
    Log mockLog = EasyMock.createMock(Log.class);
    Thing thing = new Thing(mockLog);
  }
especially if you want to actually test the logging. Or, if you just want the unit test to "shut up" while you run it:
  public void setUp() {
    Log nullLog = new org.apache.commons.logging.impl.NoOpLog();
    Thing thing = new Thing(nullLog);
  }
Note that if a unit test breaks, you can always come back in to setUp and change the call to instantiate the Thing under test to use the default constructor again (and thus bring back more verbose logging).
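The same idea works without any framework at all. Here's a self-contained sketch with a hand-rolled Log interface standing in for commons-logging; the names here are mine, not the real commons-logging API:

```java
import java.util.ArrayList;
import java.util.List;

interface Log { void info(String msg); }

// Swallows everything, like commons-logging's NoOpLog.
class NullLog implements Log {
    public void info(String msg) { }
}

// Records messages so a test can assert on exactly what got logged.
class RecordingLog implements Log {
    final List<String> messages = new ArrayList<String>();
    public void info(String msg) { messages.add(msg); }
}

class Thing {
    // Stand-in for LogFactory.getLog(Thing.class); quiet by default here.
    private static final Log defaultLogger = new NullLog();
    private final Log logger;
    public Thing() { this.logger = defaultLogger; }
    public Thing(Log logger) { this.logger = logger; }
    public void doWork() { logger.info("doing work"); }
}
```

A test can inject a RecordingLog and assert on the captured messages, or a NullLog to keep the test run silent.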

Tuesday, June 26, 2007
Calling super() considered harmful?

Had an interesting discussion yesterday about how to solve the following problem:

  • You have a class (in our case, it was a Spring MVC controller) which has some dependencies injected, and which you, being a good defensive programmer, want to check that all the needed dependencies are present.
  • Since this is a Spring app, it's ok to use the InitializingBean's afterPropertiesSet method somewhere in here.
  • We want to derive some subclasses from the base class that introduce additional dependencies that we want checked, but we want to continue checking the superclass's dependencies.

How would you do this?

Option A. Template method. This would look as follows:

public class Parent implements InitializingBean {
  private ParentDep parentDep;
  public void afterPropertiesSet() {
    if (parentDep == null) {
      throw new IllegalStateException();
    }
    checkChildConfig();  // the template-method hook
  }
  public void checkChildConfig() { }
}

public class Child extends Parent {
  private ChildDep childDep;
  @Override public void checkChildConfig() {
    if (childDep == null) {
      throw new IllegalStateException();
    }
  }
}

Pros: (a) parent checks parent configs, child checks child configs. (b) parent configs are always checked, unless the child decides to do something like override afterPropertiesSet. (c) no use of super. (d) This is the one Martin Fowler likes.

Cons: (a) parent has to be aware it will be subclassed. (b) this gets considerably awkward with more than one level of inheritance (i.e. what if there's a GrandChild class?): Parent may need to be modified, or Child may have to override the afterPropertiesSet method of the Parent to cover this case.
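To see con (b) concretely, here's a self-contained sketch (no Spring, and the GrandChild and dependency names are made up) of what happens when a third level arrives: Child ends up playing Parent's role and inventing its own hook.

```java
class Parent {
    private Object parentDep = new Object();
    public void afterPropertiesSet() {
        if (parentDep == null) throw new IllegalStateException("parentDep missing");
        checkChildConfig();           // the template-method hook
    }
    protected void checkChildConfig() { }
}

class Child extends Parent {
    private Object childDep = new Object();
    @Override protected void checkChildConfig() {
        if (childDep == null) throw new IllegalStateException("childDep missing");
        checkGrandChildConfig();      // Child must now anticipate being subclassed, too
    }
    protected void checkGrandChildConfig() { }
}

class GrandChild extends Child {
    private Object grandChildDep;     // deliberately left null
    @Override protected void checkGrandChildConfig() {
        if (grandChildDep == null) throw new IllegalStateException("grandChildDep missing");
    }
}
```

Every intermediate class has to remember to call down to a hook for its own subclass, which is exactly the awkwardness described above.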

Option B. Just use super.

This one looks like this:

 public class Parent implements InitializingBean {
   private ParentDep parentDep;
   public void afterPropertiesSet() {
     checkConfig();
   }
   public void checkConfig() {
     if (parentDep == null) {
       throw new IllegalStateException();
     }
   }
 }

 public class Child extends Parent {
   private ChildDep childDep;
   @Override public void checkConfig() {
     super.checkConfig();  // keep checking the parent's dependencies
     if (childDep == null) {
       throw new IllegalStateException();
     }
   }
 }

Pros: (a) Parent checks parent configs, child checks child configs. (b) Parent doesn't care if it gets subclassed, or what the depth of the inheritance hierarchy is. In particular, Parent probably doesn't have to be modified with the addition of more hierarchy.

Cons: (a) Child has to remember to call super.checkConfig in its own checkConfig; if not, then the parent dependencies won't get checked. (b) Martin Fowler will look on you with disdain for using super.
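And to see pro (b): with the super approach, a GrandChild slots in without touching Parent or Child at all. Again a self-contained sketch (no Spring, made-up dependency names):

```java
class Parent {
    private Object parentDep = new Object();
    public void afterPropertiesSet() { checkConfig(); }
    public void checkConfig() {
        if (parentDep == null) throw new IllegalStateException("parentDep missing");
    }
}

class Child extends Parent {
    private Object childDep = new Object();
    @Override public void checkConfig() {
        super.checkConfig();          // don't forget this!
        if (childDep == null) throw new IllegalStateException("childDep missing");
    }
}

// Added later, with no changes to Parent or Child.
class GrandChild extends Child {
    private Object grandChildDep;     // deliberately left null
    @Override public void checkConfig() {
        super.checkConfig();          // chains all the way up
        if (grandChildDep == null) throw new IllegalStateException("grandChildDep missing");
    }
}
```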

It strikes me that if I'm going to inherit from a class (particularly from one where I don't have access to the original source code of the Parent, or where the coder of the Parent didn't totally contemplate that it would be subclassed), I'm probably going to end up overriding some of the parent's methods here or there, and using the parent's methods in other places. Why is this any different? Part of successfully subclassing a class would mean not breaking the "contracts" established by the Parent, and super gives you a nice, easy way to say "oh yeah, and check what he was checking too."


Monday, June 25, 2007
Programmable IDEs

Ok, I'll admit it. I still use Emacs as my "IDE". It's what I started programming with when I was learning Unix/C in college, and it's always served my needs well. Nowadays, I pretend to be a Java developer, and Emacs still works fine for me. I've been meaning to try Eclipse for some time now, but I've run into issues getting it to work on my 64-bit RHEL workstation, and I just haven't had time to sort them out yet.

However, what Emacs and Eclipse have in common is that they are both programmable IDEs. Emacs is pretty explicitly programmable with Emacs Lisp (in fact, you pretty much couldn't even do any amount of preference setting or customization without learning Elisp, for quite a long while), and Eclipse lets you code up extensions to customize its behavior. So what's the big deal about your IDE being programmable?

Well, it means that you can start writing programs to save yourself work. I remember sitting with a colleague who was using Eclipse while I was getting back up to speed on Java development, and getting a twinge of Eclipse envy when he right-clicked on an instance variable and selected "create setters and getters". I had been typing those out by hand, but a couple of hours later I had hacked up the equivalent in Emacs Lisp, and I use it all the time:

(defun java-insert-getter (bean-name bean-type)
 "Insert a simple getter method."
 (interactive "sInsert getter for bean: \nsInsert getter for %s of type[]: ")
 (let* ((upcase-bean-name (concat (capitalize (substring bean-name 0 1))
                                  (substring bean-name 1)))
        (bean-getter (concat "get" upcase-bean-name))
        ;; assume String when no type is given at the prompt
        (bean-use-type (if (string= "" bean-type) "String" bean-type)))
   (insert (concat "public " bean-use-type " " bean-getter "() {"))
   (newline-and-indent)
   (insert (concat "return this." bean-name ";"))
   (newline-and-indent)
   (insert "}")
   (newline-and-indent)))

(defun java-insert-setter (bean-name bean-type)
 "Insert a simple setter method."
 (interactive "sInsert setter for bean: \nsInsert setter for %s of type[]: ")
 (let* ((upcase-bean-name (concat (capitalize (substring bean-name 0 1))
                                  (substring bean-name 1)))
        (bean-setter (concat "set" upcase-bean-name))
        ;; assume String when no type is given at the prompt
        (bean-use-type (if (string= "" bean-type) "String" bean-type)))
   (insert (concat "public void " bean-setter
                   "(" bean-use-type " " bean-name ") {"))
   (newline-and-indent)
   (insert (concat "this." bean-name " = " bean-name ";"))
   (newline-and-indent)
   (insert "}")
   (newline-and-indent)))

(defun java-insert-setter-and-getter (bean-name bean-type)
 "Insert both setter and getter methods."
 (interactive "sInsert setter/getter for bean: \nsInsert setter/getter for %s of type[]: ")
 (java-insert-setter bean-name bean-type)
 (java-insert-getter bean-name bean-type))

(setq java-mode-hook '(lambda ()
             (local-set-key "\C-c\C-i" 'java-insert-setter-and-getter)
             (local-set-key "\C-c\C-j" 'java-insert-setter)
             (local-set-key "\C-c\C-m" 'comment-region)
             (local-set-key "\M-d" 'java-kill-word)
             (if (and window-system (x-display-color-p))
                 (setq indent-tabs-mode t))
             (setq tab-width 4)
             (auto-fill-mode 1)))

So what's the point? Isn't this just reinventing the Eclipse wheel? Perhaps, but now I can control exactly what it does and why. In some later posts, we'll talk about test-driven development (TDD) and why I might want to inject a protected getter where I might normally have just put a setter method for dependency injection. Later, I'll probably modify this to add in some basic Javadocs.

But here's the main point: get into the habit of automating your best practices. As we learn and share what those best practices are, we should be building tools / macros / scripts that codify those best practices, to free us up for the algorithm, object model, and application design that's the meat of what we do.

Sunday, June 24, 2007
Code Craftsmanship

Writing software is as much a craft as furniture making. Someone can ask for a chair, and different furniture makers will produce different chairs. Hopefully most of them will support your weight when you sit in them, but the sturdiness and aesthetics will vary, as will, most likely, the particular joints, types of wood, etc.

Obviously, writing code is quite similar; there's getting the job done, and then there's getting it done well. Code can be well- or poorly-designed, well-tested or hard to test, well-documented or sparsely commented. This blog is going to contain posts about software craftsmanship, including tips and tricks, philosophical discussions, and code samples. (As a word of warning, some of the code samples will be in Emacs Lisp!).

In the interests of fostering discussion, I'm going to try to tend towards shorter, more frequent posts rather than long-winded essays which spring fully-formed from my keyboard. Ok, so really my attention span just isn't long enough for the longer form.