Friday, October 18, 2019
On People Leaving a Department

I recently fielded a question in a department ask-me-anything (AMA) session noting that a few folks have left our department recently, and asking what I thought about it. I thought I would share a lightly-edited version of what I wrote:

In general, I assume that when someone joins my department, it will be for a finite period of time; it would be really unusual to hire someone right out of college and have them work in our department until they retire! So: how long will someone work for us?

In general, I think a couple of things have to be in alignment: (1) the individual has to bring skills, experience, and performance that are relevant to what the role requires (i.e. the person is a good fit for the role); and (2) the role has to offer someone the ability to work on things they are interested in and opportunities to learn things they want to learn (i.e. the role is a good fit for the person). When one of those things isn't true, it's time to part ways.

It may be that the business context changes what is needed for a role, or changes what projects are available. It may be that techniques evolve and there is less opportunity to do something you enjoyed or maybe there is a growing need to do something you enjoy doing less. Maybe a role evolves in a way that means you are not actually as good at it. Maybe you have learned everything you want to learn from a position. Maybe your interests change! These are all fine, and it doesn't necessarily mean the individual or the department is doing anything wrong. It's just not a fit anymore.

My management team and I believe so strongly in helping people advance in their careers that we are willing to have transparent discussions about whether this 2-way fit still exists, and if it doesn't, to help someone find a better fit if we can (especially somewhere else in the company). I have definitely had folks promoted out of my department and have been super happy for them to have a great opportunity they are excited about.

In addition, if there is something we seem to do exceptionally well as a department, it is to attract and hire really amazingly smart, friendly, and creative people. So I also look at someone "graduating" from my department as a new opportunity to find another person I really like working with!

Saturday, May 11, 2019
Running Concourse locally on Windows

Update 3: Ok, got this all documented, cleaned up, and pushed to GitHub: vagrant-concourse-local.

Update 2: Bosh deployment also didn't work on Windows. I did finally manage to get builds working. Rough recipe was: use Vagrant (from the Windows shell) to spin up an Ubuntu box. Run vault, postgres, and concourse via docker-compose (I wanted to use Vault because that's how I am used to managing secrets, although unsealing is a pain). I need to debug having everything come up after a reboot and clean up the Vagrantfile, then I will publish something.

Update: Turns out the instructions below did NOT work. Next attempt was to try to run the concourse/lite box via Vagrant; still no dice, then saw it was deprecated anyway. Currently trying a VirtualBox lite deployment via concourse-bosh-deployment.

I just spent a fair amount of time getting Concourse running locally on my Windows PC, and figured I might as well write it all down in case someone else runs into all this. In my case, I recently learned Concourse and wanted to use it for CI/CD for some hobby projects, but didn't want to just run it in AWS since I wouldn't be using it that often.

For context, I have a regular Home edition of Windows 10, which means that I can't run the nifty Docker Desktop for Windows (boo) and have to run the older Docker Toolbox that installs VirtualBox and then runs Docker inside a Linux VM.

I grabbed a docker-compose.yml file from the Concourse tutorial site, ran docker-compose up -d, and...couldn't connect to the UI in my browser.

After much poking around, I finally figured out that you can't use localhost to talk to Concourse, but have to use the local NAT IP address of the VM. You can find this by running docker-machine from the "Docker Quickstart" shell that comes with Docker Toolbox.
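Concretely, something like this prints the VM's address ("default" is Docker Toolbox's usual machine name; check "docker-machine ls" if yours differs):

```shell
# Print the NAT IP of the Docker Toolbox VM; "default" is the usual machine name
docker-machine ip default
```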

So in my case I can navigate to that address in my browser and get the Concourse UI.

The other updates I made to the docker-compose.yml were to create a named volume (docker volume create concourse-data) so I can re-mount the database if my PC reboots, and, most importantly, to populate the CONCOURSE_EXTERNAL_URL environment variable for the concourse container, so that redirect URLs get properly generated to use the NAT address. That's it! The resulting Docker Compose file can be run with docker-compose up -d in a Docker Quickstart shell.
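The original file isn't reproduced in this post, but a sketch of the shape it took is below; the quickstart-style layout follows Concourse's own Docker Compose examples, while the image tags, credentials, and the NAT IP in CONCOURSE_EXTERNAL_URL are placeholders to substitute with your own:

```yaml
# Sketch of a local Concourse docker-compose.yml; image tags, credentials,
# and the external URL are placeholders, not the exact original values
version: '3'

services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_USER: concourse_user
      POSTGRES_PASSWORD: concourse_pass
    volumes:
      # Named volume created beforehand with "docker volume create concourse-data"
      - concourse-data:/var/lib/postgresql/data

  concourse:
    image: concourse/concourse
    command: quickstart
    privileged: true
    depends_on: [db]
    ports: ["8080:8080"]
    environment:
      CONCOURSE_POSTGRES_HOST: db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      # Use the VM's NAT IP so redirects resolve from the host browser
      CONCOURSE_EXTERNAL_URL: http://192.168.99.100:8080
      CONCOURSE_ADD_LOCAL_USER: test:test
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test

volumes:
  concourse-data:
    external: true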

Note: It can take quite a bit for the database to initialize itself the first time, and the Concourse UI won't listen on its socket until it can talk to the database, which means if you get a "connection refused" you might just need to wait a little longer. A little docker logs -f goes a long way here!
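If you'd rather not mash reload in the browser, a crude poll loop works too; the URL here is a placeholder for whatever NAT address docker-machine reported (8080 is Concourse's default web port):

```shell
# Poll the Concourse UI until it accepts connections; URL is a placeholder
until curl -fs "http://192.168.99.100:8080" > /dev/null; do
  echo "waiting for Concourse..."
  sleep 5
done
```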

Saturday, November 28, 2015
VerySillyMUD: Continuous Integration

This post is part of my "VerySillyMUD" series, chronicling an attempt to refactor an old MUD into working condition[1].

In our last episode, we got an autotools-driven build set up, which gets us going along the path towards being able to use a continuous integration tool like Travis CI. In this episode, we'll see if we can get it the rest of the way there. I'll be referencing the getting started guide, the custom build instructions, and the C project instructions.

It seems like the easiest way to proceed is to try to just use the default C build, which I expect not to work, but then to massage it into shape by looking at the build errors. A minimal .travis.yml seems like a good starting point.
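The file isn't reproduced in this post, but the minimal version amounts to leaning entirely on Travis CI's C defaults:

```yaml
# Minimal .travis.yml: Travis CI's default C build script
# runs ./configure && make && make test
language: c
```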

As an editorial comment, the autotools toolchain uses "make check" to run tests, but Travis CI assumes the default way to run tests is..."make test". I kind of wonder how this happened; my suspicion is that most projects use "make test" (and so that's what Travis CI assumes) but that GNU autotools defined an opinionated standard of "make check" that ignored existing practice.

Anyway, back to the build. As expected, this failed because there wasn't a configure script to run! This is interesting--I had not checked in any of the files generated by autoconf or automake, under the general principle of "don't check in files that get generated". Ok, we should be able to just run autoreconf to get those going.
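The exact build-config lines aren't shown in this post, but the idea is a sequence along these lines (a sketch; the original flags may have differed):

```shell
# Regenerate configure and the Makefile.in files from configure.ac and
# Makefile.am; -i (--install) copies in any missing auxiliary build files
autoreconf -i
./configure
make check
```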

This one fails too, with a rather mysterious error from autoreconf. The failure seems to be due to the AC_CHECK_HEADER_STDBOOL macro being undefined, and at least one mailing list post suggested it could simply be removed, so let's try that.

Ok, that build got further, and in fact built the main executable, which is great; it just failed when trying to find the Criterion header files. Since we haven't done anything to provide that dependency, this isn't surprising. I also notice that there are a lot more compiler warnings being generated now (this would appear to be based on having a different compiler, gcc vs. clang). Now we just need to decide how to provide this library dependency. The build container is an Ubuntu system, which means it uses apt-get for its package management, but there doesn't seem to be a .deb package provided for Criterion. The options would seem to be:

  • vendor the source code for it into our repository
  • figure out how to build a .deb for it and host it somewhere we can install it via apt-get
  • download a Linux binary distribution of Criterion
  • download the source code on the fly and build it locally

I'm not crazy about vendoring, as that makes it harder to get updates from the upstream project; I'm not crazy about the binary download either, as that may or may not work in a particular environment. My preference would be to build a .deb, although I haven't done that before. I assume it would be similar to building RPMs, which I have done. Downloading and building from source is perhaps a good initial method, as I know I could get that working quickly (I have built the Criterion library from source before). If I ever get tired of waiting for it, I can always revisit and do the .deb.

According to the Travis CI docs, the usual way to install dependencies is to customize the install step with a shell script. We'll try one to start, based on the instructions for installing Criterion from source.
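The script itself isn't reproduced in this post; a sketch of the shape it took, following Criterion's CMake-based build instructions of the era (the repository URL is real, everything else is an assumption):

```shell
#!/bin/sh
# Sketch: build and install Criterion from source; exact tag and
# cmake options may have differed in the original script
set -e
git clone --recursive https://github.com/Snaipe/Criterion.git
cd Criterion
mkdir -p build && cd build
cmake ..
make
sudo make install
```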

Ok, this does pretty well at getting some of the compilation process going, but it still fails with an error.

Hmm, it seems to be looking for some copyright information; we'll try copying Criterion's LICENSE file into the debian.copyright file it seems to want. Ok, that build succeeded in building and installing the library, and in fact, the later ./configure found it, but wasn't able to load the shared library. We need to add an invocation of ldconfig after installing the library, I think. Wow, that did it! We have a passing build!

Let's record our success by linking to the build status so that people visiting the repo on GitHub know we have our act together!

Now, while debugging this build process, I got several notices that the project was running on Travis CI's legacy infrastructure instead of their newer container-based one, which purports to have faster build start times and more resources, according to their migration guide. It seems like for us the main restriction is not being able to use sudo; we use this in exactly three places at the moment:

  1. to install the check package via apt-get
  2. to run make install after building the Criterion library
  3. to run ldconfig after installing the Criterion library

It seems like there are built-in ways to install standard packages via apt, so then the question is whether we can directly compile and link against the Criterion library from the subdirectory where we downloaded and built it; if we can then we don't need sudo for that either. Ok, it looks like we just need to find the include files in the right place, by adding an appropriate -I flag to CFLAGS and then to find the built shared library by pointing the LD_LIBRARY_PATH environment variable to the right place. Nope. Nope. Nope. Nope. Ok, this answer on StackOverflow suggests we need to pass arguments through to the linker via -W options in LDFLAGS. Still nope. Maybe if we pass the Criterion build directory to the compiler via -L and to the runtime linker via -R? Bingo!
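Putting that together, the combination that finally worked was roughly this (the Criterion checkout location is a placeholder; configure passes the -R flag through to the link step):

```shell
# Point the compiler at the in-tree Criterion headers, and the linker
# (via -L at link time and -R at run time) at the built shared library;
# the $PWD/Criterion paths are placeholders for the actual checkout
export CFLAGS="-I$PWD/Criterion/include"
export LDFLAGS="-L$PWD/Criterion/build -R$PWD/Criterion/build"
export LD_LIBRARY_PATH="$PWD/Criterion/build"
./configure && make check
```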

Ok, now we just need to see if we can install the check package via the apt configuration directives in .travis.yml. That works, and that gives us the final working CI configuration.
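The final file also isn't reproduced in this post; reconstructed from the steps above, it was shaped roughly like this (the helper script name and exact flag values are assumptions):

```yaml
# Rough reconstruction of the final .travis.yml; the install script name
# and flag details are assumptions based on the steps described above
language: c
sudo: false

addons:
  apt:
    packages:
      - check

install:
  # Hypothetical helper: clone Criterion, cmake, make -- no sudo needed
  - ./ci/install-criterion.sh

before_script:
  - autoreconf -i

script:
  - ./configure CFLAGS="-I$PWD/Criterion/include" LDFLAGS="-L$PWD/Criterion/build -R$PWD/Criterion/build"
  - make check

env:
  global:
    - LD_LIBRARY_PATH=$PWD/Criterion/build
```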

It seems that by setting CFLAGS explicitly here, we "fixed" the compiler warnings; I suspect we need to come back around and add -Wall to CFLAGS and then fix all those warnings too. But that can wait for the next episode...

[1] SillyMUD was a derivative of DikuMUD, which was originally created by Sebastian Hammer, Michael Seifert, Hans Henrik Stærfeldt, Tom Madsen, and Katja Nyboe.