Doing Fieldwork in TIM Group


How do you get a sense of the culture of a place? This is the question that Joe Schmetzer and I were contemplating at our fortnightly morning ‘get together’ where we mentor and coach each other. We’d been discussing the idea for several meetings by this point, but this time I’d been looking at the blog of Fieldwork (thanks to Douglas Squirrel for pointing me at it!). Fieldwork has been publishing a series they call Everyday Fieldwork, which provides instructions on how to observe the unseen elements around you and see your work differently.

Armed with the discussions I had with Joe and the instructions from Fieldwork, I’ve been taking pictures of what I see around me at TIM Group. Along with each picture, I’ve been keeping a description of what is happening in the picture and some of my own thoughts about the scene. Going forward, I am planning on posting a number of these pictures and their descriptions. I hope that they’ll help me and others understand what the culture inside TIM Group actually is.

And so, in keeping with one of the phrases I hear around here pretty often, I won’t delay in posting the first one of these.

Home Office

There is a messy desk with a laptop plugged into a large monitor. A separate keyboard and mouse are in a prominent position. A cup of tea sits on the left-hand side. The chair has a blanket over the back, used to cover the seat and keep cat hair off. On the monitor is a browser window with email in it; behind the browser are OmniFocus and Slack. I took this on a day that I decided to work from home. A number of developers at TIM Group work from their homes either most of the time or regularly. Working from home isn’t something I normally do, however, so this was a somewhat special day. Even though I don’t do it often, I was still able to be connected to everything and everyone I needed.

What I learned about self-organisation

I learned a number of things at Olaf Lewitz and Adam Pearson’s course ‘Enabling Self-Organisation: Getting Macro Results without Micromanaging’. I think my biggest a-ha moment was that self-organisation is not something that you do to others; it is something that you do to yourself.

The course focussed on a few simple techniques to help each of us become better at self-organising. The first is a simple system to train yourself in for evaluating the situation you are in (feel, think, act). The second is a structure for active listening, where you reflect back to the other person in the form “you feel X because Y”.

My experiences with self-organisation at TIM Group have taught me that it starts with myself. From there it expands into how I interact with others. These two systems that I got from the course will make it more likely that I’ll have effective interactions and expand self-organisation in the group.

TopicalJS: A rose amongst the thorns?

We have a codebase in semi-retirement to which we occasionally have to make limited changes. With active codebases, you are continuously evaluating the state of the art and potentially upgrading/switching third-party libraries as better solutions become available. With inactive codebases, you live with decisions that were made years ago and which are not cost-effective to improve. However, not all old code is bad code and there are some really excellent patterns and home-grown tools within this codebase that have never seen light outside of TIM Group.

It was a great pleasure, therefore, to be able to extract one such gem and make it open-source during the last round of maintenance programming on this codebase. We’ve called it TopicalJS, mostly because this was one of the few words left in the English language not already used for a JavaScript library. Topical is concerned with the management of events within a client-side environment (or indeed server-side if you run server-side JavaScript).

This old codebase uses Prototype and YUI on the front-end, plus a custom internal event-passing system called (not very inspiringly) the “MessageBus”. Our newer codebases use Underscore, jQuery, and Backbone. Backbone comes with its own event system, which is built into every view, model, and collection. You can raise an event against any of these types, or you can just use a raw Backbone Event instance and use it to pass events around.

Without Backbone, and in fact before Backbone existed, we invented our own system for exchanging events. Unlike Backbone, all it does is exchange events, so it could even be used to complement Backbone’s event system. Its best feature is that it encourages you to create a single JavaScript file containing all of the events that can be fired, who consumes them, and how they get mapped onto actions and other events. This is effectively declarative event programming for JavaScript, which I think might be unique.

You use it by creating a bus and then adding modules to that bus. These modules declare what events they publish and what events they are interested in receiving. When a module publishes an event, it is sent to every module, including the one that published it. Then, if a module subscribes to that event type, its subscription function is called with any data associated with the event. Events can be republished under different aliases, and multiple events can be awaited before an aggregated event is fired, allowing easy coordination of multiple different events.

As an example, this is what bus configuration code might look like.

    // Reconstructed: the wrapper calls around this configuration were lost
    // from the original snippet; the Coordinate and Republish names below are
    // inferred from the description that follows, so the exact TopicalJS API
    // may differ.
    bus.addModule(Coordinate({
        expecting: ["leftTextEntered", "rightTextEntered"],
        publishing: "bothTextsEntered" }));

    bus.addModule(Republish({
        subscribeTo: "init",
        republishAs: [ "hello", "clear"] }));
    bus.addModule(Republish({
        subscribeTo: "reset",
        republishAs: "clear" }));

    bus.addModule({
        name: "Hello",
        subscribe: {
            hello: function() {
                alert("This alert is shown when the hello event is fired");
            }
        }
    });
The Coordinate module waits for two different text boxes to be filled in before firing an event saying that they’re both present. The Republish modules raise the hello and clear events, ultimately causing an annoying alert to be shown, as well as giving another module the ability to react to the clear event and flush out any old data.

Full documentation and a worked example are available here:

Feedback or contributions are most welcome.

Telling Stories Around The Codebase

This is a story that has been about a year in the making. It has now reached a point where I think the story needs to be told to a wider audience.

At last year’s Citcon in Paris there was a presentation by Andy Palmer and Antony Marcano. They showed off a way of writing tests in Fitnesse that read much more nicely than you normally encounter. The underlying system they called JNarrate allowed writing tests in Java in a similarly readable fashion.

At youDevise we had been trying to figure out what framework we should be using to try to make things easier to understand and nicer to write. We had taken a look at Cucumber as well as a few other frameworks but none had really made us take the plunge yet. During the conference Steve Freeman mentioned that you don’t need to find a framework. You can instead evolve one by constantly refactoring, removing duplication, and making the code expressive.

Back at work I decided to give it a go. Guided by the small bit of JNarrate that I had seen at the conference, I started trying to figure out what it would look like. The first passes at it (which were very quickly superseded) tried to do away with some of the things in JNarrate that didn’t seem very necessary. It turned out that many of them were necessary: not for the code to run, but for the code to be readable.

Eventually it turned into something that looked good.

// (reconstructed: only the final .should(...) line of this snippet survived,
// and the actor and action names here are illustrative)
Given.the( trader).wasAbleTo( deposit( money(300, usd)));
When.the( trader).attemptsTo( viewTheBalance());
Then.the( trader).expectsThat( theBalance())
    .should(be(money(300, usd)));

We worked with this for a while and had very few changes to the basic structure of:

Given.the( <actor> ).wasAbleTo( <action> );
When.the( <actor> ).attemptsTo( <action> );
Then.the( <actor> ).expectsThat( <thing> )
    .should( <hamcrest matcher> );

The framework itself is really just a few interfaces (Action, Actor, and Extractor) with the Given, When and Then classes being the only part of the framework that actually does anything. The rest of what is written is entirely part of the implementing application.
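To make that concrete, here is a minimal sketch of such a skeleton. Only the names Given, When, Then, Actor, Action, and Extractor come from the description above; the bodies, the Expectation class, and the plain-value should() are assumptions (the real framework takes a Hamcrest matcher in should()).

```java
// Minimal sketch of a Given/When/Then micro-framework. Everything beyond
// the interface and class names mentioned in the post is illustrative.
interface Actor { }

interface Action {
    void performAs(Actor actor);
}

interface Extractor<T> {
    T extractFor(Actor actor);
}

class Given {
    private final Actor actor;
    private Given(Actor actor) { this.actor = actor; }
    static Given the(Actor actor) { return new Given(actor); }
    Given wasAbleTo(Action action) { action.performAs(actor); return this; }
}

class When {
    private final Actor actor;
    private When(Actor actor) { this.actor = actor; }
    static When the(Actor actor) { return new When(actor); }
    When attemptsTo(Action action) { action.performAs(actor); return this; }
}

class Then {
    private final Actor actor;
    private Then(Actor actor) { this.actor = actor; }
    static Then the(Actor actor) { return new Then(actor); }
    <T> Expectation<T> expectsThat(Extractor<T> extractor) {
        return new Expectation<T>(extractor.extractFor(actor));
    }
}

// A stand-in for the Hamcrest assertion step: fails if the extracted
// value does not equal the expected one.
class Expectation<T> {
    private final T actual;
    Expectation(T actual) { this.actual = actual; }
    void should(T expected) {
        if (!actual.equals(expected))
            throw new AssertionError(actual + " is not " + expected);
    }
}
```

Note how Given, When, and Then are the only classes that do anything; the Action and Extractor implementations come entirely from the application under test.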

This style of writing non-unit tests has come to pervade large portions of our tests. It immediately helped immensely in communicating with business people and helping developers to understand the domain better. Since it was in Java we had the full support of the IDE for writing the tests and for refactoring them as our understanding of the scenarios improved. Once you get over the initial hurdle of defining the vocabulary, writing up new scenarios becomes so easy that we have started to sometimes go a little overboard with them 🙂

The only change that has occurred recently is that we dropped the standard Java camel-casing of identifiers and replaced them with underscores. We reached this decision after discovering that most of the pain of reading some of our more complex scenarios was in trying to parse the identifiers into sentences. SomeTimesItJustGetsALittleTooHardToFigureOutWhereTheIndividualWordsAre.

So a recent example is:

Given.the( company_admin)
.was_able_to( modify_the_portfolio( the_portfolio)
    .to_add( author_in_other_company_who_published));

Given.the( author_in_other_company_who_published)
.was_able_to( publish_an_idea()
    .with( a_long_recommendation_for( any_stock()))
    .to( a_portfolio_named( the_portfolio)));

Given.the( company_admin)
.was_able_to( modify_the_portfolio( the_portfolio)
    .to_remove( author_in_other_company_who_published));

When.the( company_admin).attempts_to(view_the_available_authors());

Then.the( company_admin)
.should( have( author_in_other_company_who_published));

Test Data Buildering: Take 2

After the great comments on the last post about the test data builders we kept poking and prodding to see what we could do. What we came up with looks like this now:

FundOfFund fohf = derivedFromA(new FundOfFundTemplate() {{
    with(name, "Blah");
}});

The structure isn’t really different, but the naming has changed. Instead of being called “builders” or “makers” we call them “templates”. This was prompted by Antony Marcano’s comment about things being tailor made. At first we tried calling them “patterns” but realized that this would cause far too much confusion during discussions because of the whole design pattern domain. Name clashes can make a great idea very quickly turn bad.

We tried to keep on the tailoring idea for a bit, but decided to drop it in favor of deriving things from templates (with the path there being something like: pattern -> template pattern -> template -> create things based on a template).

Test Data Buildering

This blog post accompanies one of our weekly lightning talks, embedded below. Read the text, watch the video, or heck, do both!


Over the course of learning how to specify and write tests for our code on the HIP, we have gone through many different styles of dealing with setup. There was a stage when no setup was done, and therefore very little real testing of behavior happened. Then there was the time of mocking absolutely everything (including numbers). That time still haunts us, but we are slowly putting it behind us. After that we got mock-shy and just started using the real objects whenever possible.

This stage of our evolution got tests written. The tests checked meaningful behavior, were fast, and were somewhat maintainable. The problem was that they were not always the easiest to read, and that came down to how we built the objects we wanted to use.

   FundOfFund fohf = new FundOfFund( [8 parameters, only two of which we care about] );

We tried to solve this problem by having a whole load of pre-built test objects stored as statics.

    public class TestFundOfFund {
        public static final FundOfFund USD_FOHF = new FundOfFund(...);
    }

This unfortunately led to us having a lot of hidden knowledge in our tests. “Oh that USD_FOHF, it also happens to have an initial price of $5.” This caused us to start backing away from that approach pretty quickly.

Our next step was to try out mocking again. If we needed a FundOfFund, then we would mock out the parts of a FundOfFund that should be used in the test.

    final FundOfFund fohf = context.mock(FundOfFund.class);
    context.checking(new Expectations() {{
        allowing(fohf).getName(); will(returnValue("blah"));
        allowing(fohf).getCurrency(); will(returnValue(usd));
    }});

This worked. It expressed what was needed for the test. But it is noisy and annoying to type. If a large amount of data is needed (which it sometimes is), it gets hard to see what has been set up. It also caused noise in tests when some data needed to be available for the code under test to work but its value had no bearing on what we were trying to specify (name in all of my code snippets here is a prime example). And there was no way of making sure that the object had sensible defaults.

The next thing we are trying out is Test Data Builders. The standard way of doing it is fine and gets it done. What I don’t like about it is the mass of boilerplate code that is needed (all of those with*() methods written out individually? ick!). The make-it-easy framework is a bit better, but has problems as well.
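For contrast, this is the sort of hand-rolled builder being complained about, sketched with an illustrative two-field FundOfFund (the real constructor takes 8 parameters, and these field names are assumptions):

```java
// A classic hand-rolled test data builder. Every field needs its own
// with*() method written out by hand -- the boilerplate the post objects to.
class FundOfFund {
    final String name;
    final String currency;
    FundOfFund(String name, String currency) {
        this.name = name;
        this.currency = currency;
    }
}

class FundOfFundBuilder {
    // Sensible defaults live in the builder, so tests only mention
    // the values they actually care about.
    private String name = "a fund of funds";
    private String currency = "USD";

    FundOfFundBuilder withName(String name) { this.name = name; return this; }
    FundOfFundBuilder withCurrency(String currency) { this.currency = currency; return this; }

    FundOfFund build() { return new FundOfFund(name, currency); }
}
```

A test then reads `new FundOfFundBuilder().withName("bar").build()`, which is pleasant enough at the call site; the cost is one hand-written with*() method per field, per domain class.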

    FundOfFund fohf = make(a(FundOfFund, with(name, "bar"), with(currency, usd)));

To write it this way you need to import “make”, “a”, “name”, and “currency” into the namespace. Pollution! Collision! Confusion ensues. But the basic idea is good. So what I am trying out now is using the framework for its basic elements and wrapping it up a little.

    FundOfFund fohf = new FundOfFundMaker() {{
        with(name, "bar");
        with(currency, usd);
    }};

So far this has worked out pretty well (if you can get over the instance initializer syntax). It doesn’t pollute the namespace and also it seems to provide a nice middle ground between the boilerplate of the all custom builders and the more composable nature of make-it-easy. For instance:

    FundOfFund fohf = new FundOfFundMaker() {{
        withYearEndPrice(4, december(30, 2010));
    }};

Pattern Language Problems

I’ve been talking to various people about pattern languages lately and trying to get various developers here at youDevise to give lightning talks about patterns to help spread knowledge about them a little. During all of this I’ve noticed a recurring theme: people usually think that a pattern is a description of an implementation. This means that people start using pattern language to talk about specific implementations to solve problems. I think this is wrong and leads to misunderstanding with no greater insight into the problem that is being solved.

Various communities have railed against patterns because of this focus on implementation (see Design Patterns in Dynamic Programming for one example) because many of the implementations are nonsensical in their environment. This is a perfectly reasonable reaction to a language that is being used solely to express a particular kind of solution to a general problem.

Instead of patterns being about solutions or implementations, I think that they should instead be seen as identification of problems. When you say you plan to use the Factory pattern, what you are really saying is that you have encountered a particular problem: you need to create instances of an object, but the concrete class of the object being created may change.
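Read that way, naming the pattern names the problem, and the implementation stays free to vary. A small sketch (all class names here are made up for illustration): the calling code needs a Notifier, but which concrete class gets created can change.

```java
// The problem the Factory pattern identifies: callers need instances,
// but the concrete class of the object being created may change.
interface Notifier {
    String send(String message);
}

class EmailNotifier implements Notifier {
    public String send(String message) { return "email: " + message; }
}

class SmsNotifier implements Notifier {
    public String send(String message) { return "sms: " + message; }
}

class NotifierFactory {
    // The "which concrete class?" decision is isolated in one place,
    // so call sites only ever depend on the Notifier interface.
    static Notifier forChannel(String channel) {
        return "sms".equals(channel) ? new SmsNotifier() : new EmailNotifier();
    }
}
```

Whether the factory is a static method, an injected object, or something else entirely is an implementation choice made in context; the pattern name only tells you the problem being solved.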

Once the pattern language has been turned around like this, then suddenly many more things start to fall into place. Patterns beg other patterns (the identification of one problem brings to light other problems that will need to be solved). Patterns have synonyms (Template Method and Strategy). The list goes on.

So try to change your thinking about pattern languages: instead of using them to express a concrete solution, make them a way of identifying the problem you are trying to solve. From there you can start choosing the best implementation based on your context.

Continuous Deployment

At one of the sessions I attended at CITCON Europe 2009, Chris Read (I think it was him) talked about build pipelines and continuous deployment. As he explained his view of what continuous deployment means, I had a bit of an epiphany: we are already doing it! Of a sort. I say “of a sort” because we are still not perfect. We can’t deploy 50 times a day, but we can deploy often, reliably, and quickly (we have deployed a new version of the HIP nearly every other Saturday for the past year that I have been working here, and each deploy normally takes just a few minutes). Every release that we do has also passed through our CI system. In fact, every release has to: the only release candidates are those that have been through this testing procedure, and any passing builds are automatically added to the list of release candidates.

Our deployments still require a little manual intervention (copying a file), but that is something that we are working on. At the moment we are overhauling our deployment mechanisms so that deploying a new release of either of our applications turns into a single click. This should make us able to deploy even more quickly and smoothly.

At previous companies I have worked we didn’t have this ability and we didn’t think that being able to do it was needed. After now having this ability I can’t imagine not being able to do it. No more fear of deploying. No more babying deploys. It all just works.

It is things like this that make you start to wonder whether there really is value in a change even though you can’t see it right now. Maybe updating our deployment process and infrastructure so that we can deploy at any time during the day would have immense benefits that we just can’t see yet.

Code Dojo VII: Return of the JTanks

In early December we held another Code Dojo. We wanted to build on the successes of the last dojo and make a few small improvements and changes. Everyone seems to much prefer staying away from the randori style, so we didn’t want to move too far from a winning formula.

Our last dojo had the problem that most teams fell into a hacking mindset and did not concentrate much on practicing good techniques. The flip side is that everyone had so much fun that it was hard to stop. This time around we wanted to keep that excitement, but make sure that the dojo gave us techniques we could apply immediately in our day jobs to improve application features and code maintainability.

Since I had volunteered to organize this dojo, I thought a good way to keep that excitement, while making sure there was still a significant amount of learning, would be to model the Code Dojo after real dojos that I am familiar with: kendo dojos. In code dojos we want people to practice working together in teams and to improve their skills, and kendo dojos offer almost exactly this model.

Practice at kendo dojos centers around pairs of practitioners: a receiver and an attacker. The receiver is not just passively receiving, however. They are there to keep an eye on what the attacker is doing. They provide an opening (appropriate to the skill of the attacker) and give feedback (through words or actions) to the attacker. After the attacker has practiced a few times, the receiver and attacker swap roles. Practice continues after both sides have performed both roles by having everyone rotate pairs. Throughout the practice the sensei will introduce different techniques and everyone will practice these techniques.

Rotation in a Kendo Dojo
The upper-right person stays in place to break up the groups.

After practicing specific techniques, participants move on to sparring practice. During sparring, the pairs practice the techniques that they have been working on that day, as well as anything else that they feel they need to work on (maybe even just returning to the absolute basics). Sparring is the chance to practice against an opponent who fights back.

To transfer this style to a coding dojo I had to make a few modifications. We wanted to build on the success of the JTanks framework at the previous dojo, practice some good techniques, and have fun. We came up with a list of changes to the JTanks framework that we wanted to have. Each one of these tasks got assigned to a workstation (not a person or a pair). Along with each task were two or three different design guidelines and code smells that the pair working on the task should pay attention to.

In addition to the specific guidelines and smells, everyone was also supposed to constantly pay attention to other practices: checking in often, practicing TDD, and keeping it “DRY, shy, and tell the other guy.”

The pairs of people were then spread to each workstation and started working on the task. At this point each pair was supposed to try to make progress on the tasks, but mainly practice identifying and fixing code smells (practice the techniques). After half an hour of working on the tasks the “left” pair rotated left by one workstation. The new pairs continued working for another half an hour. Then the “right” pair rotated to the right by one workstation.

The “left” Rotation

Instead of getting right back to work with the new pairs we took a short break. Everyone seemed to be enjoying themselves and most of the people felt that they were making progress on the tasks. There was even a little competition and commiseration on the frequent checkin front.

After the break the pairs reformed (in the same pairs that they had just rotated to) and went back to work. During the last rotation the pairs were told to enter into full-speed sparring practice with the task.

Another half-hour later everyone was pried away from the computers. We reconvened in a meeting room, ate dinner, and discussed how it all went. Most of the tasks had made significant progress (some could even be considered done). Each person had gotten a chance to practice some new skills, learn some new concepts, and have some fun.

The format that we used for this dojo seemed to be very popular. So we will definitely be trying it again. We think we will call it a keiko dojo – though the style of training used in kendo doesn’t seem to have a formal name, this word for “practise” seems to describe it well.

Smart Collections

Collections are all too often seen as simple bags full of some particular type of objects. These collections are then at the mercy of their clients. They get taken apart, have elements added and removed, iterated over, and just plain trampled upon. They are given no responsibility of their own.

It is time for that to change! Collections are people too!

Collections represent important concepts in a domain – concepts that appear over and over again as you build different features in your application. They can enforce constraints and provide many operations that apply to entire sets, lists, or maps of domain objects. Creating custom collections (they do not even have to be part of the Java Collections framework!) gives a home for these collection manipulation concepts. Without this home, you would have to remember to include, and rebuild, the constraints or operations every time you built a feature that involved the collection.

Today I created IDSNumbers to act as a home for operations that we often perform on sets of IDSNumber, particularly in our overly-complex calculation code. So far it just has a sum method, but as I (hopefully we) find other common operations, it will be picking up more abilities that we can apply across all our applications.
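The shape of such a collection might look something like the sketch below. IDSNumber is internal to TIM Group, so BigDecimal stands in for it here, and only the sum operation is taken from the post; everything else is illustrative.

```java
import java.math.BigDecimal;
import java.util.List;

// Sketch of a "smart collection" in the spirit of IDSNumbers: a first-class
// home for operations that would otherwise be re-written in every feature
// that touches a set of these numbers.
class Numbers {
    private final List<BigDecimal> values;

    Numbers(List<BigDecimal> values) {
        this.values = values;
    }

    // One shared home for the operation, instead of repeating the loop
    // at every call site.
    BigDecimal sum() {
        BigDecimal total = BigDecimal.ZERO;
        for (BigDecimal value : values) {
            total = total.add(value);
        }
        return total;
    }
}
```

As common operations accumulate (averages, filtering by currency, constraint checks), they land here once rather than being scattered across features.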

What other collections are there that deserve to become citizens in their own right?