Choosing what work to do at TIM Group

TL;DR: Working at TIM Group means having the responsibility to decide what work to do. The most obvious criterion is business value, but I don’t think that is enough.

At TIM Group we have been experimenting with self-organisation for a while. It’s been a gradual process that started with the adoption of the Eight Behaviours for Smarter Teams; its most recent steps were the removal of the line-management structure in the technology department and a reiterated message from the CEO insisting that employees take responsibility.

My personal experience has been of changing team and/or project every six months or so, which I find refreshing. Most of the time my moves were motivated by changing conditions and suggested by people higher in the hierarchy. A few times, though, and most notably the last time I changed both team and project at once, the move was decided and executed without any direction from management. I have seen this happen multiple times since, to multiple people across multiple departments.

My colleagues and I have the responsibility of deciding what work to do. The executive team still has the authority to give orders and to fire people but that does not happen often. For all intents and purposes proposing projects, staffing those projects and delivering are now shared responsibilities.

Continue reading

Distributed Pair Programming @ TIM Group with Saros

There’s a particular technology used within TIM Group that has been too useful for too long to go uncelebrated. The tool is called “Saros”, an Eclipse plugin for distributed pair programming that is used extensively at TIM Group.

Why We Need A Distributed Pairing Solution

We have developers spread across our London and Boston offices, and a few “Remoties” working from home in various far-flung lands. As one of said Remoties, I believe I would not have the opportunity to continue working at TIM Group and live where I want to live (hint: not London) if it were not for Saros.

Having been entirely colocated just a few years ago, our choice to pair-program by default faced no real technical limitations. With an extra seat, keyboard, and mouse at every developer’s workstation, the barrier to pairing was low: just pull up a chair and pair. When we first started having distributed developers, we didn’t want to give up on pairing for technical reasons, so the search began for how to adapt pair programming to a distributed environment.

Why Other Solutions Don’t Match Up

An oft-cited solution for this problem is to use a screen sharing and remote access technology. There’s plenty to choose from: Windows’ Remote Desktop; TeamViewer; LogMeIn; GoToMyPC; VNC; NX; etc. However, we found them all far too limited for pairing. Just like colocated pairing, there’s only one cursor, so only one person can use the keyboard and mouse at any time. Unlike colocated pairing, there’s a large bias towards the person hosting the session, as they get much faster feedback when typing, since they don’t have to suffer the latency. When it’s (figuratively) painful to type, it’s much easier to shy away from being the “driver”, which hinders the collaboration. We never found a solution that reduced that latency to an acceptable level.

Another cited solution is to share a terminal session using tmux, which avoids the overhead of screen sharing. The feedback when typing is much better; however, there’s one major drawback: being limited to terminal editors. I’ve seen people whose terminal environments are well suited to developing in a language like Ruby or JavaScript, but for those of us coding in Java and Scala, giving up the powerful features we appreciate in our IDE was not an option, so tmux was not suitable.

Saros To The Rescue

Thankfully, we discovered Saros, a research project from Freie Universität Berlin. The most relatable way I’ve found to describe it is:

Google Docs within the Eclipse IDE

It works by connecting two or more developers through Eclipse, so that when Alice enters some text, Bob sees it appear in his editor. The experience for both users is as if they were editing files locally[0]. Rather than sharing the image of a screen, edit commands are serialised and sent over the wire, changing the other participant’s local copy. This comes with several other benefits over remote access technologies:

  • the feedback when typing is instant, for both parties
  • the latency for seeing your partner’s keystrokes is much lower than when transmitting an image of the screen[1]
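
As a toy illustration of the idea (not Saros’ actual wire format, just a sketch of the principle): rather than shipping pixels, each keystroke can become a small operation that is replayed against the other participant’s copy of the file.

```java
// Hypothetical sketch only: a keystroke serialised as a tiny edit
// operation, applied to the remote participant's local copy.
final class EditOp {
    final int offset;        // where in the document the edit applies
    final String inserted;   // text typed (empty for a pure deletion)
    final int deleted;       // number of characters removed

    EditOp(int offset, String inserted, int deleted) {
        this.offset = offset;
        this.inserted = inserted;
        this.deleted = deleted;
    }

    // Applying a received operation mutates the local copy in place.
    void applyTo(StringBuilder document) {
        document.delete(offset, offset + deleted);
        document.insert(offset, inserted);
    }
}
```

Because only these small operations cross the wire, typing stays instant locally and the remote side needs very little bandwidth to stay in sync.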

There are even benefits over colocated pairing:

  • neither the host nor guest has to leave the comfort of their familiar work environment to pair; you can set up fonts and shortcuts however you like
  • since you are both editing as though it was local, each participant has their own cursor, and can be editing different files, allowing ad-hoc splitting of tasks[1], which can be very handy
  • you can have more people involved (up to 5) which we’ve used for code review and introducing a project to a group of people, sparing the discomfort of hunching around a single desk

There are other distributed pairing solutions, such as Cloud9 IDE, but none (that we’ve found) that lets you stick with your own IDE setup, retaining parity with how you would develop locally.

There are of course drawbacks to distributed pairing, regardless of technology, which I’m not going to cover here; they’re generally not problems Saros, or any technology, will be able to solve.

IntelliJ Support Is On The Roadmap

After a long search, we have not found anything that really compares with Saros. For me, it really is a killer feature of Eclipse. So much so that I’ve basically kept IntelliJ in a dusty old drawer, even when it would be better for certain tasks (like Scala development). Fortunately, it looks like that’s about to change. Work has just begun to port the Saros platform to IntelliJ, which is really exciting. Whenever I’ve talked to people about Saros, the question of support for other IDEs inevitably arises. If the core Saros technology were agnostic of the IDE, it could be a huge leap forward for collaborative development in general.

At TIM Group we were so keen on the idea that a handful of us spent a “hack week” throwing together the first steps towards an IntelliJ plugin for Saros. We were able to demonstrate a proof-of-concept, but didn’t get anything truly viable. Having brought this effort to the attention of the Saros team, I hope that in some small way it inspired them to start work on it, but I doubt that’s something we can take credit for. Hopefully, during the development of the IntelliJ plugin there will be something that we can contribute, and give something back for our many hours of happy usage.

If you’re looking for an answer to the problem of distributed pairing, I heartily endorse Saros!

[0] for more details of the theory behind this aspect of collaborative editors, see
[1] we have acquired a habit of being vocal about whether we are in Driver/Navigator mode, and if your pair is following you, since you can’t assume they are

Facilitating Agility with Transparency

Part of the agile coaching work I do at TIM Group involves running a large number of Retrospectives and the (hopefully only) occasional Root Cause Analysis. Both of these generate actions designed to improve (and/or shore up) our processes so that we keep getting better. These actions are supposed to be discrete, and done within a week of their assignment.

Over the last year or so, TIM Group has been moving to a more ‘start-up’ style organizational model. Prior to this, we had a stable two-week release cycle, and our development teams were quite static. This has changed now, and while some of the teams here still run retros on a two-week cycle, others are on a one-week cycle, and still others on a more ad-hoc basis. More importantly, the teams are a lot more fluid, with developers not just moving from one development team to another but also to our infrastructure team and back.

In a perfect world, this would not be a problem because actions are all done within a week.

Well, despite the ‘within a week’ expectation, actions had been piling up. Retro after retro would pass, and the ‘actions’ column would get clogged with outstanding items. In addition to this, the RCA actions were also not getting done. While it wouldn’t be fair to say this was a new problem, the new organization was aggravating the existing problem.

I had gotten into the habit of reminding each team about their retro first thing on the morning of their meeting. This brought up more discussion topics before the start of the meeting, which made the meeting go faster and more smoothly, but it wasn’t proving to be enough to actually get the actions done.

So I started sending out more and more specific reminders, looking at each board and naming the individuals who had outstanding actions.

As this activity took more and more of my time each retro day, I decided to build myself some help. Luckily, I had previously done some work with the APIs of our on-line Kanban tool. It was fairly simple to make a new version of the code that instead of working with our taskboards, worked with the boards we used for our RCAs and retrospectives.

My initial idea was to simply find a way to generate (or at least partially generate) some of those reminders I was sending out to the teams. But once I had gotten the team notifications done, a pattern emerged — many people had actions across multiple teams. This was when it struck me. I was facilitating the teams, but the *individuals* were the ones who needed to get their actions done. I needed to make their lives easier if I wanted them to get the actions done.

The clear next step was to give each person their own ‘actions report’. Now, at the start of each week, instead of having to look in a bunch of different locations and trying to check if there were things they needed to do, each person who has any outstanding actions gets an e-mail. It clearly states which actions need to be done, including the action title and description, with a URL linking back to the exact card in question on the taskboard. *This* was getting somewhere. I got a lot of positive feedback from people. In fact, I got a number of people asking to put their own smaller-task or special project taskboards on the system so that they could get even more of their actions in one place.

That was a big indicator that I’d done something right: people asking for more!

Of course, once I had a person-by-person action tally, it was a doddle to implement a simple bit of gamification: a leader-board, posted weekly, listing everyone who has yet to complete their actions, with the ‘top’ person having the most outstanding actions. That top position has, incidentally, been occupied since inception by our very own CTO.
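
The leader-board really is little more than a tally sorted in descending order. A minimal sketch, using a hypothetical representation of actions as owner-to-done-flags (our real data comes from the Kanban tool’s API):

```java
import java.util.*;
import java.util.stream.*;

final class ActionLeaderboard {
    // Count each person's outstanding (not-done) actions and list them,
    // most outstanding first -- the 'top' spot going to whoever has most.
    static List<String> leaderboard(Map<String, List<Boolean>> actionsDoneByOwner) {
        return actionsDoneByOwner.entrySet().stream()
                .map(e -> Map.entry(e.getKey(),
                        e.getValue().stream().filter(done -> !done).count()))
                .filter(e -> e.getValue() > 0)           // nothing outstanding, nothing to shame
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .map(e -> e.getKey() + ": " + e.getValue() + " outstanding")
                .collect(Collectors.toList());
    }
}
```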

Next up? Implementing markdown in the action reports, to increase readability, and team status pages showing our ‘monitor’ cards, so we know which current issues we are monitoring.

Grabbing the Estimation Bull by the horns (and holding on for dear life)

You’re going to talk about estimation? You must be crazy.

Before you start, just let me tell you that I’ve either read it or heard it, ALL of it. Putting all that to one side, it is my assertion that even if you work at Apple or Google or wherever, if you want to make money from selling software, then sometimes you need to tell the people who are interested in buying your software when you’ll be ready to take their money.

If you’re still with me, here’s how our team does this today. (I’ll ignore the fun we had getting to this point; sometimes you really don’t want to go there.)

Okay, you might have a point. So where do you start??

It always starts with an idea, a “wouldn’t it be great/cool if…” We kick this idea around the product/client/dev team until we think we’re ready to come up with a “Level-Zero” estimate.

What’s a Level Zero estimate?

It’s a big number that we arrive at after an hour of at least two developers and a product manager talking about the idea and how we might implement it. We express the Level Zero as the maximum number of iterations (fortnightly releases) that a pair of developers is confident it would take them to deliver the feature as we understand it. We document this breakdown and any assumptions each part contains, and we estimate the number of cycles per major section of the feature.

What happens next?

If the idea makes it to the top of the backlog and we decide to work on it, we then go to the next stage – a “Level One” estimate.

I get the idea now, how does the Level One work?

At this point, we break the feature down into development cards. We aim to make all of these small enough that no individual card represents more than two days of work for a pair of developers. In an ideal breakdown, all of our Level One cards are smaller than this (zero-point cards, for those of you more familiar with our numbering).

Is it time for a cross-check?

That’s right. Next we compare the total of the Level One cards with the Level Zero estimate. If it is not comfortably lower, we republish a new total.

Are we nearly there yet?

Almost. We use our Level One estimate and the number of pairs of developers that we plan to assign to the work to come up with a delivery date. We’ve learned that parallelisation is complicated. So, we try to identify areas where we are confident that we can parallelise and areas where we know we can’t, then judge the date accordingly.
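
As a toy illustration of that judgement (hypothetical numbers and method names, not our actual tooling), the date calculation amounts to: share the parallelisable pair-days across the pairs, and let the non-parallelisable work run serially.

```java
import java.util.stream.IntStream;

final class DeliveryEstimate {
    // Hypothetical sketch. Card sizes are in pair-days (each card is at
    // most 2, by our breakdown rule); only the cards we are confident can
    // be parallelised are shared across the assigned pairs.
    static int calendarDays(int[] parallelisableCards, int[] serialCards, int pairs) {
        int parallel = IntStream.of(parallelisableCards).sum();
        int serial = IntStream.of(serialCards).sum();
        // Parallelisable work divides across pairs (rounded up); the rest
        // takes just as long no matter how many pairs we assign.
        return (parallel + pairs - 1) / pairs + serial;
    }
}
```

For example, eight pair-days of parallelisable cards split across two pairs, plus four pair-days that must be done serially, lands eight working days out, not six.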

Are you ever going to build anything?

I’m right with you. From this point, we want to start development as soon as possible to minimise any context loss or code base shifts that could derail us before we begin. Ideally, we work on the first card right after the Level One breakdown session.

How do you track progress?

Once we start, we use burn-down or burn-up charts to track the progress of each feature through the stand-ups.

But nothing stays the same, how do you deal with that?

If you’re talking about scope creep: we manage this, along with any bugs, design changes, or gold-plating, by assessing each change against the committed delivery date and deciding yes or no to adding it to the feature for the same delivery date.

Sounds like you’ve got it covered?

We’ve had decent predictability of delivery with this over the last year. It might not stand up to a client pushing back hard on estimates, or to major team or infrastructure disruptions, but we’ll cross those bridges if we get to them.

What’s the next challenge?

What we want now are ways to keep this process in place while incorporating technical-debt repayment into the way we tackle individual feature estimates and delivery.

Good luck with that…

With thanks to Tim Harford for the conversation style.


Telling Stories Around The Codebase

This is a story that has been about a year in the making. It has now reached a point where I think the story needs to be told to a wider audience.

At last year’s Citcon in Paris there was a presentation by Andy Palmer and Antony Marcano. They showed off a way of writing tests in Fitnesse that read much more nicely than you normally encounter. The underlying system they called JNarrate allowed writing tests in Java in a similarly readable fashion.

At youDevise we had been trying to figure out what framework we should be using to try to make things easier to understand and nicer to write. We had taken a look at Cucumber as well as a few other frameworks but none had really made us take the plunge yet. During the conference Steve Freeman mentioned that you don’t need to find a framework. You can instead evolve one by constantly refactoring, removing duplication, and making the code expressive.

Back at work I decided to try to give it a go. Guided by the small bit of JNarrate that I had seen at the conference I started trying to figure out what it would look like. The first passes at it (which got very quickly superseded) tried to do away with some of the things in JNarrate that didn’t seem very necessary. It turned out that many of them were. Not for the code to run, but for the code to be readable.

Eventually it turned into something that looked good, with expectations reading like:

    .should(be(money(300, usd)));

We worked with this for a while and had very few changes to the basic structure of:

Given.the( <actor> ).wasAbleTo( <action> );
When.the( <actor> ).attemptsTo( <action> );
Then.the( <actor> ).expectsThat( <thing> )
    .should( <hamcrest matcher> );

The framework itself is really just a few interfaces (Action, Actor, and Extractor) with the Given, When and Then classes being the only part of the framework that actually does anything. The rest of what is written is entirely part of the implementing application.
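
A minimal sketch of that shape, illustrative rather than the actual framework code (the real `should` took a Hamcrest matcher; a tiny stand-in `Matcher` interface keeps the sketch self-contained):

```java
// Illustrative sketch only -- not the real framework's source.
interface Actor { }
interface Action { void performAs(Actor actor); }
interface Extractor<T> { T extractFor(Actor actor); }
interface Matcher<T> { boolean matches(T actual); }   // stand-in for Hamcrest

final class Given {
    private final Actor actor;
    private Given(Actor actor) { this.actor = actor; }
    static Given the(Actor actor) { return new Given(actor); }
    Given wasAbleTo(Action action) { action.performAs(actor); return this; }
}

final class When {
    private final Actor actor;
    private When(Actor actor) { this.actor = actor; }
    static When the(Actor actor) { return new When(actor); }
    When attemptsTo(Action action) { action.performAs(actor); return this; }
}

final class Then {
    private final Actor actor;
    private Then(Actor actor) { this.actor = actor; }
    static Then the(Actor actor) { return new Then(actor); }
    <T> Expectation<T> expectsThat(Extractor<T> extractor) {
        return new Expectation<>(extractor.extractFor(actor));
    }
    static final class Expectation<T> {
        private final T actual;
        private Expectation(T actual) { this.actual = actual; }
        void should(Matcher<T> matcher) {
            if (!matcher.matches(actual)) {
                throw new AssertionError("Expected match but got: " + actual);
            }
        }
    }
}
```

Everything else, the actors, actions, and extractors with domain names like `publish_an_idea()`, lives in the application’s own test code, which is what keeps the vocabulary expressive.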

This style of writing non-unit tests has come to pervade large portions of our tests. It immediately helped immensely in communicating with business people and helping developers to understand the domain better. Since it was in Java we had the full support of the IDE for writing the tests and for refactoring them as our understanding of the scenarios improved. Once you get over the initial hurdle of defining the vocabulary, writing up new scenarios becomes so easy that we have started to sometimes go a little overboard with them 🙂

The only change that has occurred recently is that we dropped the standard Java camel-casing of identifiers and replaced them with underscores. We reached this decision after discovering that most of the pain of reading some of our more complex scenarios was in trying to parse the identifiers into sentences. SomeTimesItJustGetsALittleTooHardToFigureOutWhereTheIndividualWordsAre.

So a recent example is:

Given.the( company_admin)
.was_able_to( modify_the_portfolio( the_portfolio)
    .to_add( author_in_other_company_who_published));

Given.the( author_in_other_company_who_published)
.was_able_to( publish_an_idea()
    .with( a_long_recommendation_for( any_stock()))
    .to( a_portfolio_named( the_portfolio)));

Given.the( company_admin)
.was_able_to( modify_the_portfolio( the_portfolio)
    .to_remove( author_in_other_company_who_published));

When.the( company_admin).attempts_to(view_the_available_authors());

Then.the( company_admin)
.should( have( author_in_other_company_who_published));

Who will test the tests themselves?

A short while ago, a colleague and I were developing some JUnit tests using the jMock library, and came across some troubles while trying to start with a failing test. If you’re unfamiliar with jMock, the basic structure of a test looks something like this:

public void theCollaboratorIsToldToPerformATask() {
  // set up your mock object
  final Collaborator collaborator = context.mock(Collaborator.class);

  // define your expectations
  context.checking(new Expectations() {{
    oneOf(collaborator).performATask(); // the method 'performATask' should be invoked once
  }});

  // set up your object under test, injecting the mock collaborator
  MyObject underTest = new MyObject(collaborator);

  // execute your object under test, which should, at some point, invoke collaborator.performATask()
  underTest.doItsThing();

  // check that the collaborator has been called as expected
  context.assertIsSatisfied();
}

(For an excellent background on developing software with this technique, I highly recommend reading Growing Object-Oriented Software.)

So back to our problem. We couldn’t work out why our unit test, despite the functionality not existing yet, was passing. It didn’t take long for someone with a bit more experience using jMock to point out our error: we were not verifying that the mock was called as expected. In the code above, this translates to: we were missing the call to “context.assertIsSatisfied()”. Since the mock object wasn’t asked if it had received the message, it didn’t have a chance to complain that, no, it hadn’t.

Granted, myself and my pairing partner were not too familiar with jMock, but it seemed like an easy mistake to make, and it got me thinking.

  • How many other developers didn’t realise the necessity to verify the interaction?
  • How many tests had been written which did not start out failing for the right reason, and thus, were now passing under false pretences?
  • How could we check our existing tests for this bug, and ensure that new tests didn’t fall prey to the same lack of understanding?

In short, who will test the tests themselves?

A satisfactory answer for this case, I found, is FindBugs.

FindBugs is a static analysis tool for Java, which detects likely programming errors in your code. The standalone FindBugs distribution can detect around 300 different types of programming errors, from boolean “typos” (e.g. using & instead of &&) to misuse of common APIs (e.g. calling myString.substring() and ignoring the result). Obviously the FindBugs tool can’t anticipate everything, and the jMock error was obscure enough that I had no expectation of it being included. Fortunately, a handy feature of FindBugs is that, if you have a rough idea of a bug you’d like to discover, and a couple of hours to spare, you can write your own plugin to detect it.
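
The real detector works against bytecode through FindBugs’ visitor API. Purely to illustrate the rule it enforces (and not how the plugin is actually implemented), the check can be reduced to a naive source-text scan:

```java
// Illustrative only: the essence of the check, divorced from bytecode.
final class JMockVerificationCheck {
    // A test that creates jMock expectations but never verifies them --
    // neither context.assertIsSatisfied() nor @RunWith(JMock.class) --
    // can pass without the expected calls ever happening.
    static boolean looksSuspicious(String testClassSource) {
        boolean createsExpectations = testClassSource.contains("context.checking(");
        boolean verifies = testClassSource.contains("context.assertIsSatisfied()")
                || testClassSource.contains("@RunWith(JMock.class)");
        return createsExpectations && !verifies;
    }
}
```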

With a bit of effort I had whipped together a simple detector which would find this type of problem across the host of tests we continuously run at youDevise. Out of approximately 4000 unit tests, this error appeared around 80 times. Not too many, but enough to be concerned about. Fortunately most of the time, when the call to context.assertIsSatisfied() was included (or the @RunWith(JMock.class) annotation added to the class), the tests still passed. That they “fortunately” still passed, was the problem, since that depended on luck. Occasionally the problem test cases didn’t pass after being fixed, and it either meant a test was outdated and the interaction deliberately didn’t happen anymore, or the interaction had never occurred in any version of the code. Fortunately (again, more by luck than judgment) the tests didn’t actually highlight faulty code. Granted, we also have suites of higher level tests, so the unit tests were not the last line of defense, but still, it is important that they do their job: providing fast (and accurate) feedback about changes, and communicating intent.

The FindBugs plugin, by testing the tests, helped to discover when they weren’t doing their job. Since we run FindBugs, now with said plugin, as part of the continuous build, it will (and has) prevented new test code from exhibiting the same fault. Although no bugs in production code were revealed as part of correcting the tests, knowing that we won’t need the same flavour of luck again increases confidence. This in turn leads to all manner of warm and fuzzy feelings.

Since the plugin has been open-sourced, if you or your team uses jMock, you can detect and prevent those errors too (though I don’t guarantee you’ll feel as warm and fuzzy as I did).

The FindBugs plugin is available from the youDevise github repository; instructions and the JAR to download are included. FindBugs itself is also open-source, free to use, mature, and widely used (by the likes of those reputable sorts at Google), and is available for download.

So if you ever find yourself asking, “Who will test the tests themselves?”, maybe FindBugs, with a custom plugin, is your answer.

Design Perfume – The sweet smells of quality

Bob Martin wrote about smells that are signs of bad designs. While it’s convenient to have a vocabulary to describe problems, that’s only part of the picture if we want better designs. With a vocabulary for good designs we can more easily identify the strengths in our work and build on them, and we can structure our thinking about the work of others and bring the good back to our own work.

Good design shouldn’t be terra incognita; let’s have some signposts to guide us in the right direction.

So here are my corollaries to Bob Martin’s design smells – Julian’s design perfume:

Supple – System is easy to change (not Rigid)

  • Examples: adding new modules, alternate implementations, substitute technologies, additional processing, new functionality, clearer design and refactoring.

Resilient – Problems and their solutions are localised (not Fragile)

  • Failures don’t bring whole systems down
  • Failures don’t introduce bad data
  • Bad data doesn’t propagate
  • The consequences of failure make sense given the causes
  • Technologies are used in obvious and limited scopes
  • Dependencies on specific configurations are localised
  • No significant effects come from accidental patterns of use

Re-usable – Fits in anywhere it might be useful (not Immobile)

  • Any dependencies should make sense
  • Configuration and maintenance should be proportionate:
    • Easy to figure out
    • Sensible defaults
    • Updates should seem relevant and not a burden
  • Reuse should add clarity:
    • Making intention and correct use obvious
    • Not introducing too much unused functionality

Enabling – Makes good practices easy (not Viscous)

  • The right information and operations are available:
  • Back doors are hard to find or make
    • It’s easy to find what you need
  • Examples:

Appropriately Complex – Reasons for complexity are obvious (no Needless Complexity)

  • Clearly connecting complexity to business needs, and thinking about the complexity reveals important things
  • Complexity is visible up front:
    • No nasty surprises when you start to dig in
    • Complexity hiding shouldn’t produce time-bombs

DRY – Doesn’t require users to repeat themselves (no Needless Repetition)

  • Good default values and reusable configurations
  • Useful state and memory of previous actions
  • No need to code up the same things repeatedly

Transparent – Good code is obvious and easy to understand (not Opaque)

  • It does what it says it does
  • It doesn’t do anything unexpected
  • It matches reasonable expectations

Although these are in many places around the internet, and I do recommend reading Martin’s books, I’ll include his list of smells here:

  • Rigidity – System is hard to change.
  • Fragility – Changes cause the system to break easily and require other changes.
  • Immobility – Difficult to disentangle components that can be reused in other systems.
  • Viscosity – Doing things right is harder than doing things wrong.
  • Needless Complexity – System contains infrastructure that has no direct benefit.
  • Needless Repetition – Repeated structures that should have a single abstraction.
  • Opacity – Code is hard to understand.

SVNKit versus JavaHL

When using the Subclipse plug-in to integrate Subversion and Eclipse, there is a choice of two back-ends: one that uses the native Subversion libraries (JavaHL), and a pure Java implementation (SVNKit). There seems to be a massive memory leak in the SVNKit back-end, and the only available work-around is to switch to the JavaHL back-end in the Window->Preferences->Team->SVN dialog. If the JavaHL back-end is not visible in the “SVN interface” menu, it may need to be installed (available in the same repository from which you installed the Subclipse plugin).

Who ya gonna call and what ya gonna call it?

So, this is my last planned post on antipatterns, but it was this antipattern that sparked me into writing the mini-series in the first place.

It all started with Ryan and me doing some work on Idea Group Rules a few weeks back. We dived in a little, and suddenly my fury was inflamed. I noticed that this functionality was filled to the brim with my second least-favorite “type” of class, the “Helper” class. (My least favorite are “Manager” classes.)

Nearly every time that I have come across one of these “Helper” classes, the class just doesn’t mean anything.

It is just a lump of code.
It is not part of the domain.
It does not represent a function of the application.

Our Helpers are just a lump of code.

This antipattern is called the Poltergeist. C2 describes the Poltergeist antipattern as:

Unnecessary and redundant navigation paths in the course of development, highly transient associations of a particular class with another one, presence of stateless classes, occurrence of temporary and short duration objects/classes or classes that exist only to invoke other classes through temporary association

That explanation is a bit wordy, but I think the key words in there are “highly transient”, “stateless”, and “only exist to invoke other classes”. They are just little bits of code that flit in and out of existence, push some objects around mysteriously, and make little sense in terms of the rest of the world around them.



I will be the first to admit that Object Oriented Design is hard to do and takes some time. I will also readily admit that we are often pushed to get things done, or feel pressured to just add that one line of code and not think about that yucky legacy code. By adding (or enhancing) these Helpers we are just increasing our code debt, and making it even harder for the next person to figure out what is going on…

Those of you who remember the last antipatterns post may wonder what the difference is between Managers and Poltergeists. Well, there is no difference, really. Our Managers are just Poltergeists that are seriously out of control.

Managers and Helpers are symptoms of the same problem: a need to focus on what our TIM/HIP/IDS objects mean and how they interact with each other.

It is hard to do, but let me see if I can start with one tip to get us all moving in the right direction.

Jason’s Object-oriented design (OOD) Tip #1: Get the name right.

Think long and hard about what you are creating and make sure the concept fits into the domain of the rest of the objects. Ask somebody if the name makes sense to them. Remember that I am always there to help if you need someone to talk it through with when you get stuck.

Now, I do NOT mean get the name right the first time. You may not, and that is okay. Get it right the next time.

Also, do not be so naive as to think that the name will never need changing. All of our codebase is growing, and what is called an Idea today might be a Basket tomorrow…

Naming applies to classes, methods, fields, and really anything in the code. If you see something that has gotten out of sync with the current naming standards, fix it. There are so many tools in Eclipse to make renaming trivial.

Let us try to get the names right first, and see if that starts to push through the right structures and relationships for better code.

The Release Rush

Two blog posts (The Crunch Mode Paradox – Turning Superstars Average and Exception Handling in Software) really reminded me of something this past week.

Fortunately at YouDevise, we have a very strict No Death March policy. Working 40 hour weeks is company policy. (You don’t want to see Squirrel angry, do you?) But, in our past, we have seen the Death March’s sneaky cousin, the Rushed Out Feature/Bugfix.

The warning signs are often a reasonable (or unreasonable) “customer deadline” coupled with too much other “high priority” work. The developer usually gets the feature assigned to them too late to do the correct thing and spend the time on it that it needs.

Now, all of our developers are smart, and they will usually figure out a way to squeeze the feature out in time, but it will always be at the cost of rotting the code around it. Being smart, the developer has good intentions and plans to fix it later. Inevitably, though, once the feature is “complete”, all of the refactoring, testing, and clean-up gets lost in the priorities shuffle.

We need to be diligent and focus on doing things correctly, not simply fast. All of us have stories of leftover code that was slopped together to get it out the door quickly. Then, many months later, the bugs surface, and we end up spending many times more effort to spelunk the code, figure it out, and find a fix that doesn’t cause more problems. And that doesn’t even count paying down the original technical debt: the refactoring, testing, and clean-up the code needed in the first place.

We need to be diligent about making the correct completion of features (and bugfixes) our first priority. Trust me, if we make all of our code changes correct, making code changes quickly will usually follow right behind.