The Summit is Just a Halfway Point

(The title is a quote from Ed Viesturs.)

This past week, TIM Group held its Global Summit, where we had nearly all of our Technology, Product, and Sales folks under the same roof in London. For those who aren’t aware, we are quite globally spread. We have technologists in both London and Boston offices (as well as remote from elsewhere in the UK and France), product managers in our NYC/London/Boston offices, and sales/client services folks across those offices as well as Hong Kong and Toronto. This summit brings together teams and departments that haven’t been face-to-face in many months — some teams haven’t been in the same place since the previous summit.

On the Monday, we got together for an unconference (in the Open Spaces format). This was a great way to kick off the week. We were able to quickly organize ourselves, set up an itinerary for the day, and get to it, discussing what people truly thought were the most important topics. We had sessions on everything from “Where does our (product) learning go?” (about cataloging the knowledge we gain about our new products) to “Deployment next” (about where we take our current infrastructure next), along with many other topics across development, product management, DevOps, QA, and beyond.

For the more technically focused of the group, the rest of the week was filled with self-organized projects. Some of these were devised in the weeks leading up to the Summit; some were proposed on the day by attendees. There were so many awesome projects worked on last week, ranging from investigating our company’s current technical problems (like HA task scheduling), to creating tools to solve some of our simpler problems (like an online niko-niko tracker), to tackling some bigger-than-just-us problems (like IntelliJ remote collaboration). You should expect to see us share more about these projects on this blog. Stay tuned!

To be clear, it wasn’t all work. We also had a football game, a beer tasting, and Brick Lane dinners. Altogether, this was a great opportunity to re-establish the bonds between our teams. While there are many benefits to having distributed teams as we do, there are many challenges as well. We do many things to work past these challenges, and our Global Summit is one of the shining examples. Getting everyone face-to-face builds trust and shared experiences, which helps fuel the collaboration when everyone returns to their home locations.

Coming back to the title, I am optimistic that our teams will build on top of the code, experiences, and collaboration of the summit. We will move forward with a clear view from the Summit, and move into 2014 with more great things in the form of new products, services, or even just great blog posts.

Dip Your Toe In Open (Source) Waters

One of the qualities that TIM Group looks for when filling vacancies is an interest in contributing to open source projects. We think that when a candidate gets involved in open source, it indicates a passion for software development. If you’re like me, at some point you have wanted to join a project. Perhaps you wanted to improve your skills, try a different technology, or brighten up your CV. But alas, you didn’t know the best way to get started.

I am trying to provide that exact opportunity, in conjunction with RecWorks’ “Meet-A-Project” scheme. I have an open source project, called Mutability Detector, with issues and features waiting to be completed, specifically earmarked for newcomers to the project. I promise a helpful, friendly environment in which to dip your toe in open (source) waters.

If you want to know more details, head over to the project blog for a description on the how and why of getting involved.

Happy Holidays Jenkins!

At TIM Group we are proud to say that we support open source projects. Indeed, we’ve spent time on many projects, submitting patches, posting on forums and mailing lists, and putting up a number of our own projects on our GitHub page. Today we’ve taken the additional step of putting our money where our mouth is. Because we use Jenkins as a major part of our CI infrastructure, we were happy to respond to the Jenkins holiday appeal and donate US$1,000.

Happy Holidays Jenkins!


TIM Group is coming to Manchester!

Following our success presenting Comic Collaboration and Communication at this year’s SPA conference, the nice people at SPA have invited us to present again, this time at MiniSPA up in Manchester. A single-day conference on Monday October 3rd, it’s a taster of the best that the SPA Conference has to offer, and we’re honoured to have been chosen to present again.

Hopefully we’ll see some of you there!

Annual Stack Overflow Meetup Day @ youDevise London

Last week was the Annual Stack Overflow Meetup Day. youDevise was proud to host the London meetup for this worldwide event. Our dev teams work almost exclusively with open source tools and frameworks to deliver high-quality on-demand financial applications. Many of our coders are active contributors to Stack Overflow and all of us have benefited from the advice we find (and provide!) there.

Over 40 people filled up the 4th floor of our London City offices last Wednesday night. Mounds of pizza were enjoyed by all, and many brave souls tried their hands (actually their whole bodies!) at a few rounds on the Kinect we set up in the board room. We had a second big screen set up to watch the #SOMeetup-tagged tweets coming in from all over the globe as we joined in on the truly world-wide event.

It was a great way to give something back to a community that’s given a lot of help to us, and we had loads of fun doing it. Big thanks to everyone who made it out, it was great to get to meet you!

Who will test the tests themselves?

A short while ago, a colleague and I were developing some JUnit tests using the jMock library, and came across some troubles while trying to start with a failing test. If you’re unfamiliar with jMock, the basic structure of a test looks something like this:

@Test
public void theCollaboratorIsToldToPerformATask() {
  // 'context' is a Mockery field on the test class,
  // e.g. Mockery context = new JUnit4Mockery();

  // set up your mock object (final, so the anonymous Expectations class can refer to it)
  final Collaborator collaborator = context.mock(Collaborator.class);

  // define your expectations
  context.checking(new Expectations() {{
    oneOf(collaborator).performATask(); // the method 'performATask' should be invoked once
  }});

  // set up your object under test, injecting the mock collaborator
  MyObject underTest = new MyObject(collaborator);

  // exercise your object under test, which should at some point invoke collaborator.performATask()
  underTest.doSomething();

  // check that the collaborator has been called as expected
  context.assertIsSatisfied();
}

(For an excellent background on developing software with this technique, I highly recommend reading Growing Object-Oriented Software.)

So back to our problem. We couldn’t work out why our unit test, despite the functionality not existing yet, was passing. It didn’t take long for someone with a bit more experience using jMock to point out our error: we were not verifying that the mock was called as expected. In the code above, this translates to: we were missing the call to “context.assertIsSatisfied()”. Since the mock object wasn’t asked if it had received the message, it didn’t have a chance to complain that, no, it hadn’t.
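To see why that verification step matters, here is a hand-rolled sketch in plain Java of what a mocking framework does internally. This is not the jMock API; the `ExpectationTracker` class and its methods are invented for illustration. The point is that unmet expectations only surface if someone explicitly asks:

```java
import java.util.ArrayList;
import java.util.List;

// A tiny stand-in for a mocking framework's expectation bookkeeping.
class ExpectationTracker {
    private final List<String> expected = new ArrayList<String>();
    private final List<String> invoked = new ArrayList<String>();

    void expect(String method) { expected.add(method); }   // like context.checking(...)
    void record(String method) { invoked.add(method); }    // called when the mock is used

    // The analogue of context.assertIsSatisfied(): without this call,
    // unmet expectations are never reported to anyone.
    void assertIsSatisfied() {
        for (String method : expected) {
            if (!invoked.contains(method)) {
                throw new AssertionError("expected call never happened: " + method);
            }
        }
    }
}

public class VerifyDemo {
    public static void main(String[] args) {
        ExpectationTracker tracker = new ExpectationTracker();
        tracker.expect("performATask");

        // Imagine the object under test is broken and never calls performATask().
        // If the test ended here, nothing would throw: a false pass.
        // Only the explicit verification step exposes the failure:
        try {
            tracker.assertIsSatisfied();
            System.out.println("test passed (wrongly!)");
        } catch (AssertionError e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

Run it and the tracker complains about the missing call; delete the `assertIsSatisfied()` line and the broken “test” finishes without a murmur, which is exactly what happened to us.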

Granted, my pairing partner and I were not too familiar with jMock, but it seemed like an easy mistake to make, and it got me thinking.

  • How many other developers didn’t realise the necessity to verify the interaction?
  • How many tests had been written which did not start out failing for the right reason, and thus, were now passing under false pretences?
  • How could we check our existing tests for this bug, and ensure that new tests didn’t fall prey to the same lack of understanding?

In short, who will test the tests themselves?

A satisfactory answer for this case, I found, is FindBugs.

FindBugs is a static analysis tool for Java, which detects likely programming errors in your code. The standalone FindBugs distribution can detect around 300 different types of programming errors, from boolean “typos” (e.g. using & instead of &&) to misuse of common APIs (e.g. calling myString.substring() and ignoring the result). Obviously the FindBugs tool can’t anticipate everything, and the jMock error was obscure enough that I had no expectation of it being included. Fortunately, a handy feature of FindBugs is that, if you have a rough idea of a bug you’d like to discover, and a couple of hours to spare, you can write your own plugin to detect it.
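That substring example is easy to demonstrate in plain Java: String is immutable, so calling substring and discarding the result accomplishes nothing at all, which is exactly the kind of mistake FindBugs flags out of the box:

```java
public class IgnoredReturnDemo {
    public static void main(String[] args) {
        String s = "hello";

        // Bug: String is immutable, so substring returns a NEW string;
        // discarding the result means this line does nothing.
        s.substring(1);
        System.out.println(s);  // still prints "hello"

        // Fix: capture the returned value.
        String t = s.substring(1);
        System.out.println(t);  // prints "ello"
    }
}
```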

With a bit of effort I whipped together a simple detector which would find this type of problem across the host of tests we continuously run at youDevise. Out of approximately 4000 unit tests, this error appeared around 80 times. Not too many, but enough to be concerned about. Fortunately, most of the time, when the call to context.assertIsSatisfied() was included (or the @RunWith(JMock.class) annotation added to the class), the tests still passed. That they “fortunately” still passed was the problem, since it depended on luck. Occasionally a problem test case didn’t pass after being fixed, which meant either that the test was outdated and the interaction deliberately didn’t happen anymore, or that the interaction had never occurred in any version of the code. Fortunately (again, more by luck than judgment) the tests didn’t actually highlight faulty code. Granted, we also have suites of higher-level tests, so the unit tests were not the last line of defence, but it is still important that they do their job: providing fast (and accurate) feedback about changes, and communicating intent.

The FindBugs plugin, by testing the tests, helped to discover when they weren’t doing their job. Since we now run FindBugs with said plugin as part of the continuous build, it will prevent (and already has prevented) new test code from exhibiting the same fault. Although no bugs in production code were revealed as part of correcting the tests, knowing that we won’t need the same flavour of luck again increases confidence. This in turn leads to all manner of warm and fuzzy feelings.

Since the plugin has been open-sourced, if you or your team uses jMock, you can detect and prevent those errors too (though I don’t guarantee you’ll feel as warm and fuzzy as I did).

The FindBugs plugin is available from the youDevise GitHub repository; instructions and the JAR to download are included. FindBugs itself is open source, free to use, mature, and widely used (by the likes of those reputable sorts at Google), and is available for download.

So if you ever find yourself asking, “Who will test the tests themselves?”, maybe FindBugs, with a custom plugin, is your answer.

TDD Masterclass

I recently attended a two-day TDD training course run by Jason Gorman.

Although we practise TDD on a daily basis, I was interested to see whether we are applying all the practices correctly, or whether we are missing out on anything.

Jason presented what he called the baker’s dozen of TDD practices.

  1. Write a failing test
  2. Write the assertion first
  3. Don’t refactor with a failing test
  4. Isolate tests from each other
  5. See the test fail
  6. Triangulate
  7. Organise tests to reflect model code
  8. Write the simplest code to pass the test
  9. Choose meaningful names
  10. Test one thing in each test method
  11. Refactor to remove duplication
  12. Keep test and model code separate
  13. Maintain your tests
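Practice 6, triangulation, deserves a quick illustration using the Fibonacci exercise from the course. This is a sketch with hand-rolled assertions rather than a real test framework: the first test can be passed by hard-coding the answer, and only a second, different example “triangulates” and forces the general implementation.

```java
public class TriangulationDemo {
    // After the first failing test (fib(1) == 1), the simplest code that
    // passes is just "return 1;". Adding a second example, fib(5) == 5,
    // triangulates: the hard-coded constant can no longer pass both tests,
    // which forces the general implementation below.
    static int fib(int n) {
        if (n <= 1) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        if (fib(1) != 1) throw new AssertionError("fib(1) should be 1");
        if (fib(5) != 5) throw new AssertionError("fib(5) should be 5");
        System.out.println("both tests pass");
    }
}
```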

Jason reinforced these practices by applying another great agile practice: pair programming. In pairs, we applied TDD to solve various programming problems, e.g. generating Fibonacci numbers, FizzBuzz, etc. Solving these problems in pairs was the most enjoyable aspect of the course. The frequent pair rotation meant I met a lot of nice people, and I even got a taste of TDDing in C#!

So what did I find in the end? We are doing pretty well at youDevise. We apply nearly all the practices (we could probably triangulate more). Overall, I think the course is a good introduction to TDD, especially learning through pair programming.

Doing things the old way

We’ve been talking recently about why we get stuck in processes that no longer serve our needs even though they were the right thing when we started.

There were a few things that came up: tests written in particular ways, particular sign-offs in our development cycle, the idea that we must have certain types of tests, writing code in particular ways, and so forth.

There was a bit of defensiveness when unnecessary things were highlighted. We didn’t set out to be inefficient, and we wanted to explain the context that made these things good ideas. Understanding that context and understanding where we’re at now are both important for positive change. Lack of understanding just fuels defensiveness.

An experience of mine is a good metaphor:

Years ago I broke my left knee in a motor accident. I was very careful to do as I was told during my recovery and have since restored symmetrical strength and flexibility. (I even went on to do several years of regular fencing practice.)

Recently I’ve started Aikido classes, and a basic training technique requires falling backward when thrown: lower yourself down with the rear leg and roll smoothly on your back, a very safe and effective way of falling. I can do this easily with my right leg, but I always hesitate with my left leg — I hop back awkwardly, or I fall and land heavily on my butt. I learned to defend my left leg after the accident, and I’m still unconsciously defending it long after the need has passed.

Now I’ve got pressure on me to change. I’m getting to the mats early and practising that backward roll slowly and gently so I know what it should feel like. I’m paying attention when it turns out right in class and trying to remember those successes more than the failures. I speak to other students and the instructor so that they give me the chance to practise it right. Eventually the right move will become unconscious. Who knows, but maybe losing the habitual defence is the last stage of healing for that old injury?

Some of our quirky processes may be the same. At a time of need we carefully included steps which now are not needed. We’ve learned our lessons, but we’ve spent so long valuing them as necessary defences that it’s hard to let them go. In fact, without some external perspective or change to bring them to our attention, we don’t even notice that we’re doing anything odd.

The puzzle is how to drive positive change. If my broken leg metaphor is right we need things like: a problem that highlights the cost of now unnecessary practices, recognition of why they were valuable and re-assessment of our current condition, recognition of our growth and trust that we won’t backslide, and alternatives that we can practice that might also allow us to grow even further.

In the end, an up-to-date set of development processes and techniques should make our day-to-day work more comfortable and effective; that’s worth the effort.