The latest episode in our continuing saga of slow browser builds, and our attempts to fix them, was a visit from Ivan Moore, a jack-of-all-trades coder whose many specialities include continuous integration, and the originator of a clever idea for solving our problem that I had totally failed to explain properly to the team.
Over some more delicious lunch, Ivan explained what his CI server, build-o-matic, does: when your build fails across, say, three checkins, build-o-matic reruns earlier revisions, binary-searching until it pinpoints exactly which checkin was the first to cause a failure. A feature like this – or at least the TeamCity feature that lets you manually rerun a build against an earlier revision – would help us solve at least one of our problems, namely who should look at a failed build.
But when Ivan had explained this feature on the plane to CITCON Amsterdam, I’d objected that our slow build would make a binary search impractical – we’d know which checkin was to blame, but only after waiting a day or so! Build-o-matic is smart enough to use idle agents for the binary search, and to give recent checkins priority over it, but even so it didn’t seem feasible for a build as slow as ours.
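The bisection trick is worth spelling out, because it is why only a handful of reruns are needed even across many checkins. Here is my own illustrative sketch in Java – not build-o-matic’s actual code – assuming revisions are numbered, the build was green at `lastGood` and red at `knownBad`, and a hypothetical `buildPasses(rev)` helper reruns the build for one revision:

```java
import java.util.function.IntPredicate;

/** Finds the first failing checkin by binary search, git-bisect style. */
public class BuildBisect {

    /**
     * Given a known-good revision and a later known-bad revision, returns
     * the earliest revision whose build fails. Assumes the failure, once
     * introduced, persists: every revision before the culprit passes and
     * every revision from the culprit onwards fails.
     */
    static int firstBadRevision(int lastGood, int knownBad, IntPredicate buildPasses) {
        while (knownBad - lastGood > 1) {
            int mid = lastGood + (knownBad - lastGood) / 2;
            if (buildPasses.test(mid)) {
                lastGood = mid;   // failure was introduced after mid
            } else {
                knownBad = mid;   // failure was introduced at or before mid
            }
        }
        return knownBad;
    }

    public static void main(String[] args) {
        // Pretend revision 42 was the checkin that broke the build.
        IntPredicate buildPasses = rev -> rev < 42;
        int culprit = firstBadRevision(10, 50, buildPasses);
        System.out.println("First bad checkin: " + culprit); // prints 42
    }
}
```

The point is the cost: forty suspect checkins need only about six rebuilds, not forty – which is exactly why the slowness of each individual build, rather than the number of checkins, was my objection.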
Ivan suggested (and I failed to explain properly when I got home) that we break our tests into many small, independent build projects, each short enough to make a binary search practical – and maybe even unnecessary, if it runs fast enough to allow one build per checkin. We would hope that most of these small projects would pass most of the time, and those that did fail would run fast enough to give us speedy feedback – of course this would require substantially more computing resources (whether virtual or physical) to keep all these builds running along quickly!
We have some resistance to the idea of running a huge build farm – we already have 18 servers and doubling or tripling this number starts moving us into real data centre territory, with whispering attendants caring for rows of gleaming machines, and we’re not quite sure if we’re ready for that much hardware management – at least while our very clever IT guy is still at college part of each week!
Some other alternatives came up in our discussion, in addition to the ones we talked about before:
- Put together a “smoke test” – a group of tests that cover most of the application and (we think) are most likely to fail whenever anyone breaks a basic feature. Run this (shorter) suite of tests on a fast loop, figuring that this will find the majority of problems.
- Use annotations to label tests by functional area. This might help us split tests into meaningful functional groups. Something like NUnit categories might help here – anyone know if there is a JUnit equivalent?
- Use personal builds from TeamCity, or similar features in other CI tools. These builds run through all the same tests, but without actually committing. If you have enough kit (there’s the gleaming data centre again!) then each developer can run a personal build for each checkin, and should be able to fix problems before merging them into source control.
- Distributed version control systems like git should let you run builds on each branch, if your CI system is smart enough. Again, this should let developers get feedback on their builds before committing to a common repository.
- Finally, of course, the modern answer to the data centre is the cloud. Seems like someone is working on this for the Bamboo CI server, but I don’t know if anyone has actually tried it in anger. Wouldn’t work so well for us as our customers are awfully security-conscious, but would be fun to see running somewhere!
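On the JUnit question in the list above: JUnit did eventually grow an equivalent of NUnit’s categories – from version 4.8 there is a `@Category` annotation plus a `Categories` suite runner. A minimal sketch of how it could label and select a smoke suite (the `SmokeTests` marker interface and the test names here are my own invention):

```java
import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// A marker interface used purely as a category label.
interface SmokeTests {}

class LoginTest {
    @Test
    @Category(SmokeTests.class)  // included in the fast smoke loop
    public void userCanLogIn() { /* ... */ }

    @Test                        // uncategorised: full build only
    public void obscureEdgeCase() { /* ... */ }
}

// A suite that runs only the smoke-labelled tests.
@RunWith(Categories.class)
@IncludeCategory(SmokeTests.class)
@SuiteClasses({ LoginTest.class })
class SmokeSuite {}
```

Running `SmokeSuite` executes only the `@Category(SmokeTests.class)` tests, which would give us the short, fast loop from the smoke-test idea without maintaining a separate hand-picked test list.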
Many thanks to Ivan for visiting us. We now have lots of ideas to chew on and try out.
Edited to include the “smoke test” idea and to better summarise Ivan’s many skills.