Gaining Feedback, Building Trust

A powerful means of building trust and collaboration within a team is to gather around a table and tell each other face to face ‘this is how your actions made me feel’, both the good and the bad.

The importance of feedback

Effective software development requires a robust feedback loop.  Test-driven development gives immediate feedback on ‘does this code do what I expect?’ Continuous Integration gives early feedback on ‘does this code work with the rest of the system?’  And early incremental delivery gives quick feedback on ‘is this code useful to the customer?’

All this feedback is about our code, but as Gerald Weinberg famously said, “No matter how it looks at first, it’s always a people problem” – so it seems wise to put at least as much effort into getting feedback about how well we are collaborating with our team-mates.  Compared to the immediate and precise feedback of a good test, the standard annual performance review is too far removed from the behaviour in question to be much use in guiding iteration and continuous improvement.  We need something better.

How it works

We decided to adopt a variation on Jurgen Appelo’s 360 degree meetings.  The key point is to bring the whole team together, face to face around a table, to give direct feedback.  First, each person writes down items of feedback.  Each item:

  • should be addressed to one team-mate,
  • can be ‘went well’, ‘puzzle’, or ‘could improve’,
  • should refer to a specific incident, meeting or piece of work, and
  • ideally should be framed in terms of how that behaviour made you feel at the time.

We then go around the group, and each person in turn delivers one item of feedback, until they have all been covered.  After each item, the feedback recipient may respond briefly to ask for clarification; however, the focus should be on asking ‘what can I improve?’ from the feedback, not on ‘defending myself’ from criticism.

As an example, recently a team-mate raised a ‘puzzle’ that I often seem to make arbitrary business decisions while coding, rather than raising the question with our business representatives.  Talking things through, we surfaced many underlying reasons why I tend to do this, including a degree of intellectual arrogance and a lack of trust.  We agreed that in future, we will at least share our decisions on Slack, where business owners can see them.

How it helped us

The team agreed to experiment with holding this feedback meeting weekly for a month, after which we asked ourselves ‘should we continue?’. There was unanimous consent that we wanted to; what was interesting was that each person spotted different benefits from the process:

  • Lots of ‘went well’ feedback reinforced positive behaviours, which people might otherwise stop doing as they might not know they were valued
  • Positive feedback also builds confidence and satisfaction working on the team
  • ‘Could Improve’ feedback relating to a specific incident feels easier to accept, as it is about behaviour on a single occasion, rather than a general criticism of you as a person.
  • Rapid, small pieces of feedback lead to iterative improvements
  • When one person gives me feedback privately, then I’m inclined to think ‘that person has a problem’, and not look to change my own behaviour.  By giving it in front of the whole team, then unless anyone disagrees with the feedback as given, I’m more likely to accept that I should try and change myself.
  • Subsequent performance reviews are fairer, easier to write, and less surprising
  • By being prepared to accept critical feedback in front of the team, each of us makes ourselves vulnerable to our team-mates.  Doing this builds up a powerful sense of trust and interdependence between team-mates, which helps the team gel and thus collaborate effectively.

As team members switched onto other teams, they spread the practice of weekly feedback sessions across the TIM Group development organisation.  Having seen the benefits, I highly recommend this practice to any team working together closely.

Doing this yourself

Just as with retros, it can be helpful first to jog memories about what the team has worked on over the last week or fortnight.  Gathering in front of a task board, or collectively making a list of what has just been done, often helps people remember events they have feedback to give on.

We discovered that the vast majority of feedback (90%) was positive, and quite often a meeting concluded with no negative feedback given.  I think this is valuable in itself: it builds up the trust and confidence which make constructive criticism possible.

We started out with all team members physically present in the room; however, as our teams became more and more distributed, we soon adapted to using a hangout for some or all participants, which works fine.

We were lucky to start from a position where the team members knew one another, and had a reasonable level of trust and a fairly flat hierarchy.  It worked for us even though I was officially the line manager of two of the others in the team; the reciprocal nature of the meeting tended to reduce the effect of hierarchy, since my reports could equally raise feedback for me, which would go to my own line manager afterwards for my performance review.

Everyone in the meeting should be ‘part of the team’ and committed to common goals, and everyone present should be open to receiving feedback – it’s fine to have a team lead or manager in the meeting, as long as they are also receiving feedback.

So if your team works closely together, I really recommend trying weekly feedback meetings.  They are very simple to run, and need only take as long as necessary based on the amount of feedback people have; the benefits in increased trust and collaboration are noticeable.

Distributed Pair Programming @ TIM Group with Saros

There’s a particular technology used within TIM Group that has been too useful for too long to go uncelebrated. The tool is called “Saros”: an Eclipse plugin for distributed pair programming that is used extensively at TIM Group.

Why We Need A Distributed Pairing Solution

We have developers spread across London and Boston offices, and a few “Remoties” working from home in various far-flung lands. As one of said Remoties, I believe I would not have the opportunity to continue to work at TIM Group and live where I want to live (hint: not London) if it were not for Saros.

When we were entirely colocated just a few years ago, our choice to use pair programming by default faced no real technical limitations. With an extra seat, keyboard and mouse at every developer’s workstation, the barrier to pairing was low. Just pull up a chair and pair. When we first started having distributed developers, we didn’t want to give up on pairing for technical reasons, so the search began for how to adapt pair programming to a distributed environment.

Why Other Solutions Don’t Match Up

An oft-cited solution for this problem is to use a screen-sharing or remote-access technology. There’s plenty to choose from: Windows Remote Desktop; TeamViewer; LogMeIn; GoToMyPC; VNC; NX; etc. However, we found them all far too limited for pairing. Just like colocated pairing, there’s only one cursor, so only one person can use the keyboard and mouse at any time. Unlike colocated pairing, there’s a large bias towards the person hosting the session, as they get much faster feedback when typing, since they don’t have to suffer the latency. When it’s (figuratively) painful to type, it’s much easier to shy away from being the “driver”, which hinders collaboration. We never found a solution that reduced that latency to an acceptable level.

Another commonly cited solution is to share a terminal session with something like tmux, which does not have the overhead of screen sharing. The feedback when typing is much better; however, there is one major drawback: being limited to terminal-based editors. I’ve seen people whose terminal environments are well suited to developing in a language like Ruby or JavaScript. For those of us coding in Java and Scala, we didn’t want to give up the powerful features we appreciate in our IDE, so tmux was not suitable.

Saros To The Rescue

Thankfully, we discovered Saros, a research project from Freie Universität Berlin. The most relatable way I’ve found to describe it is as:

Google Docs within the Eclipse IDE

It works by connecting two or more developers through Eclipse, so that when Alice enters some text, Bob sees it appear in his editor. The experience for both users is as if they were editing files locally[0]. Rather than sharing an image of the screen, edit commands are serialised and sent over the wire, changing the other participant’s local copy (a toy sketch of this idea follows the list below). This comes with several other benefits over remote access technologies:

  • the feedback when typing is instant, for both parties
  • the latency for seeing your partner’s keystrokes is much lower than when transmitting an image of the screen[1]
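
To make the idea of sending edit commands rather than pixels concrete, here is a toy sketch in plain JavaScript. It is not Saros’s actual wire format or implementation – the operation shape, field names and example values are invented purely for illustration:

// Toy illustration only -- not Saros's real protocol.
// Each keystroke becomes a small operation that is serialised, sent to the
// peer, and replayed against their local copy of the file.
function applyOperation(buffer, op) {
  if (op.type === 'insert') {
    return buffer.slice(0, op.position) + op.text + buffer.slice(op.position);
  }
  if (op.type === 'delete') {
    return buffer.slice(0, op.position) + buffer.slice(op.position + op.length);
  }
  return buffer;
}

// Alice types "var " at offset 0; only a few dozen bytes of JSON cross the wire.
var op = { type: 'insert', position: 0, text: 'var ' };
var wireFormat = JSON.stringify(op);

// Bob's client deserialises the operation and patches his local buffer.
var bobsBuffer = applyOperation('x = 1;', JSON.parse(wireFormat));
// bobsBuffer is now 'var x = 1;'

Reconciling edits that arrive concurrently from both sides is the tricky part; that is where the operational transformation theory referenced in [0] comes in.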

There are even benefits over colocated pairing:

  • neither the host nor guest has to leave the comfort of their familiar work environment to pair; you can set up fonts and shortcuts however you like
  • since you are both editing as though it was local, each participant has their own cursor, and can be editing different files, allowing ad-hoc splitting of tasks[1], which can be very handy
  • you can have more people involved (up to 5) which we’ve used for code review and introducing a project to a group of people, sparing the discomfort of hunching around a single desk

There are other distributed pairing solutions, such as Cloud9 IDE, and Kobra.io, but none (that we’ve found) that let you stick with your own IDE setup, retaining parity with how you would develop locally.

There are of course drawbacks to distributed pairing, regardless of technology, which I’m not going to cover here; they’re generally not problems Saros, or any technology, will be able to solve.

IntelliJ Support Is On The Roadmap

After a long search, we have not found anything that really compares with Saros. For me, it really is a killer feature of Eclipse. So much so that I’ve basically kept IntelliJ in a dusty old drawer, even when it would be better for certain tasks (like Scala development). Fortunately, it looks like that’s about to change. Work has just begun to port the Saros platform to IntelliJ, which is really exciting. Whenever I’ve talked to people about Saros, alternative IDE support inevitably arises. If the core Saros technology was agnostic of the IDE, it could be a huge leap forward for collaborative development in general.

At TIM Group we were so keen on the idea that a handful of us spent a “hack week” throwing together the first steps towards an IntelliJ plugin for Saros. We were able to demonstrate a proof-of-concept, but didn’t get anything truly viable. Having brought this effort to the attention of the Saros team, I hope that in some small way it inspired them to start work on it, but I doubt that’s something we can take credit for. Hopefully, during the development of the IntelliJ plugin there will be something that we can contribute, and give something back for our many hours of happy usage.

If you’re looking for an answer to the problem of distributed pairing, I heartily endorse Saros!


[0] for more details of the theory behind this aspect of collaborative editors, see http://en.wikipedia.org/wiki/Operational_transformation
[1] we have acquired a habit of being vocal about whether we are in Driver/Navigator mode, and if your pair is following you, since you can’t assume they are

TopicalJS: A rose amongst the thorns?

We have a codebase in semi-retirement to which we occasionally have to make limited changes. With active codebases, you are continuously evaluating the state of the art and potentially upgrading/switching third-party libraries as better solutions become available. With inactive codebases, you live with decisions that were made years ago and which are not cost-effective to improve. However, not all old code is bad code and there are some really excellent patterns and home-grown tools within this codebase that have never seen light outside of TIM Group.

It was a great pleasure, therefore, to be able to extract one such gem and make it open-source during the last round of maintenance programming on this codebase. We’ve called it TopicalJS, mostly because this was one of the few words left in the English language not already used for a JavaScript library. Topical is concerned with the management of events within a client-side environment (or indeed server-side if you run server-side JavaScript).

This old codebase uses Prototype and YUI on the front-end and, internally, a custom event-passing system (not very inspiringly) called the “MessageBus”. Our newer codebases use Underscore, jQuery, and Backbone. Backbone comes with its own event system which is built into every view, model, and collection. You can raise an event against any of these types or you can just use a raw Backbone Event instance and use it to pass around events.

Without Backbone, and in fact before Backbone existed, we invented our own system for exchanging events. Unlike Backbone, all it does is exchange events, so it could even be used to complement Backbone’s event system. Its best feature is that it encourages you to create a single JavaScript file describing all of the events that can be fired, who consumes them, and how they get mapped onto actions and other events. This is effectively declarative event programming for JavaScript, which I think might be unique.

You use it by creating a bus and then adding modules to that bus. These modules declare which events they publish and which events they are interested in receiving. When a module publishes an event it is sent to every module, including the one that published it. Then, if a module subscribes to that event type, its subscription function will be called along with any data associated with the event. Events can be republished under different aliases, and multiple events can be awaited before an aggregated event is fired, allowing easy coordination of several different events.

As an example, this is what bus configuration code might look like.

topical.MessageBus(
  topical.Coordinate({ 
    expecting: ["leftTextEntered", "rightTextEntered"], 
    publishing: "bothTextsEntered" }),

  topical.Republish({ 
    subscribeTo: "init", 
    republishAs: [ "hello", "clear"] }),
  topical.Republish({ 
    subscribeTo: "reset", 
    republishAs: "clear" }),

  topical.MessageBusModule({
    name: "Hello",
    subscribe: {
      hello: function() {
        alert("This alert is shown when the hello event is fired");
      }
    }
  }))

The Coordinate module waits for two different text boxes to be filled in before firing an event saying that they’re both present. The Republish modules raise the hello and clear events, ultimately causing an annoying alert to be shown, as well as giving another module the chance to react to the clear event and flush out any old data.
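
For completeness, here is a rough sketch of what the publishing side might look like. The bus-creation and publish calls below are assumptions made for illustration rather than the documented TopicalJS API, so check the project README for the exact calls:

// Hypothetical usage -- the exact creation and publish calls are assumed here,
// not taken from the TopicalJS documentation.
var bus = topical.MessageBus( /* ...modules as configured above... */ );

// Somewhere in an input handler for each text box:
bus.publish("leftTextEntered", { value: "AAPL" });
bus.publish("rightTextEntered", { value: "100" });

// Having now seen both events it was expecting, the Coordinate module
// publishes "bothTextsEntered" to every module on the bus.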

Full documentation and a worked example are available here: https://github.com/youdevise/topicaljs.

Feedback or contributions are most welcome.

Report from DevOpsDays London 2013 Fall

This Monday and Tuesday a few of us went to DevOpsDays London 2013 Fall.

We asked every attendee for their highlights, and this is what they had to say about the conference:

Francesco Gigli: Security, DevOps & OWASP

There was an interesting talk about security and DevOps, and a follow-up discussion during one of the open sessions.
We discussed capturing security-related work in user stories, or rather “Evil User Stories”, and the use of anti-personas as a way to keep malicious users in mind.
OWASP, which I did not know before DevOpsDays, was also mentioned: it is an organization “focused on improving the security of software”. One of the resources that they make available is the OWASP Top 10 of the most critical web application security flaws. Very good for awareness.

Tom Denley: Failure Friday

I was fascinated to hear about “Failure Fridays” from Doug Barth at PagerDuty.  They take an hour out each week to deliberately fail over components that they believe to be resilient.  The aim is not to take down production, but to expose unexpected failure modes in a system that is designed to be highly available, and to verify the operation of the monitoring/alerting tools.  If production does go down, better that it happens during office hours, when staff are available to make fixes, and in the knowledge of exactly what event triggered the downtime.

Jeffrey Fredrick: Failure Friday

I am very interested in the Failure Fridays. We already do a Failure Analysis for our application where we identify what we believe would happen with different components failing. My plan is that we will use one of these sessions to record our expectations and then try manually failing those components in production to see if our expectations are correct!

Mehul Shah: Failure Fridays & The Network – The Next Frontier for Devops

I very much enjoyed the DevOpsDays. Apart from the fact that I won an HP Slate 7 in the HP free raffle, I drew comfort from the fact that ‘everyone’ is experiencing the same or similar problems to us, and it was good to talk about and share that stuff. It felt good to understand that we are not far from what most people are doing – emphasizing strong DevOps communication and collaboration. I really enjoyed most of the morning talks, in particular Failure Fridays and The Network – The Next Frontier for Devops, which was all about creating a logically centralized program to control the behaviour of an entire network. This will make networks easier to configure, manage and debug. We are doing some cool stuff here at TIM Group (at least from my standpoint), but I am keen to see if we can work towards this as a goal.

Waseem Taj: Alerting & What science tells us about information infrastructure

At the open space session on alerting, there was a good discussion on adding context to alerts. One of the attendees mentioned that each alert they get has a link to a page that describes the likely business impact of the alert (why we think it is worth getting someone out of bed at 3am), a run book with typical steps to take, and the escalation path. We have already started on the path of documenting how to respond to Nagios alerts; I believe expanding this to include the perceived ‘business impact of the alert’, and integrating it with Nagios, will be most helpful in moments of crisis in the middle of the night, when the brain just does not want to cooperate.

The talk by Mark Burgess on ‘What science tells us about information infrastructure’ indeed had the intended impact on me, i.e. I will certainly be reading his new book on the subject.

You can find videos of all the talks and ignites on the DevOpsDays site.

High Availability Scheduling with Open Source and Glue

We’re interested in community feedback on how to implement scheduling. TIM Group has a rather large number of periodically recurring tasks (report generation, statistics calculation, close price processing, and so on). Historically, we have used a home grown cyclic processing framework to schedule these tasks to run at the appropriate times. However, this framework is showing its age, and its deep integration with our TIM Ideas and TIM Funds application servers doesn’t work well with our strategy of many new and small applications. Looking at the design of a new application with a large number of scheduled tasks, I couldn’t see it working well within the old infrastructure. We need a new solution.

To provide a better experience than our existing solution, we have a number of requirements:

  • Reliability, resiliency, and high availability: we can’t afford to have jobs missed, we don’t want a single point of failure, and we want to be able to restart a scheduling server without affecting running jobs.
  • Throttling and batching: there are a number of scheduled tasks that perform a large amount of calculations. We don’t want all of these executing at once, as it can overwhelm our calculation infrastructure.
  • Alerting and monitoring: we need to have a historical log of tasks that have been previously processed, and we need to be alerted when there are problems (jobs not completing successfully, jobs taking too long, and so on). We have an existing infrastructure for monitoring and alerting based on Logstash, Graphite, Elasticsearch, Kibana and Nagios, so it would be nice if the chosen solution can easily integrate with those tools.
  • Price: we’re happy to pay for commercial software, but obviously don’t want to pay over-the-top for enterprise features we don’t need.
  • Simple and maintainable: our operations staff need to be able to understand the tool to support and maintain it, and our users must be able to configure the schedule. If it can fit in with our existing operating systems and deployment tools (MCollective and Puppet running on Linux), all the better.

There is a large amount of job scheduling software out there. What to choose? It is disappointing to see that most of the commercial scheduling software is hidden behind brochure-ware websites and “call for a quote” links. This makes it very difficult to evaluate in the context of our requirements. There were lots of interesting prospects in the open-source arena, though it was difficult to find something that would easily cover all our requirements:

  • Cron is a favourite for a lot of us because of its simplicity and ubiquity, but it doesn’t have high availability out of the box.
  • Jenkins was also considered for use as a scheduler, as we have a large build infrastructure based on Jenkins, but once again there is no simple way to make it highly available.
  • Chronos looked very interesting, but we’re worried by its apparent complexity – it would be difficult for our operations staff to maintain, and it would introduce a large number of new technologies we’d have to learn.
  • Resque looked like a very close match to our requirements, but there is a small concern over its HA implementation (in our investigation, it took 3 minutes for fail-over to occur, during which some jobs might be missed).
  • There were many others we looked at but rejected because they appeared overly complex, unstable or unmaintained.

In the end, Tom Anderson recognised a simpler alternative: use cron for scheduling, with rcron and keepalived for high availability and fail-over, along with MCollective, RabbitMQ and curl for job distribution. Everything is connected with some very simple glue adaptors written in Bash and Ruby (which also hook into our monitoring and alerting).
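
To give a flavour of what one of these glue adaptors does, here is a hypothetical sketch. Our real adaptors are Bash and Ruby scripts; this version is JavaScript purely for illustration, and the host name, metric name and job endpoint are made up. It triggers a job via curl and reports the outcome to Graphite using Graphite’s plaintext protocol:

// Hypothetical glue adaptor, for illustration only -- the real adaptors are
// Bash and Ruby, and the host names and job endpoint below are invented.
var execFile = require('child_process').execFile;
var net = require('net');

// Graphite's plaintext protocol: "metric value timestamp\n" on port 2003.
function reportToGraphite(metric, value) {
  var socket = net.connect(2003, 'graphite.example.com', function () {
    socket.end(metric + ' ' + value + ' ' + Math.floor(Date.now() / 1000) + '\n');
  });
  socket.on('error', function (err) { console.error(err); });
}

// Trigger the job over HTTP with curl, then record success or failure so the
// monitoring and alerting tools can pick it up.
execFile('curl', ['-fsS', '-X', 'POST', 'http://reports.example.com/jobs/close-prices'],
  function (err) {
    reportToGraphite('scheduler.close_prices.' + (err ? 'failure' : 'success'), 1);
  });

A wrapper of this sort is also a natural place to emit log lines for Logstash or check results for Nagios.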

[Diagram: Schmeduler Architecture]

This architecture§ fulfils most of our requirements:

  • Reliability, resiliency, and high availability: the cron server does no work locally, and so is effectively a very small server instance; offloading all of its work helps to ensure that it is much more stable. rcron and keepalived provide high availability.
  • Throttling and batching: available through the use of queues and MCollective
  • Alerting and monitoring: available through the adaptor scripts
  • Price: the software is free and open source, though there is an operations and development maintenance cost for the adaptor code.
  • Simple and maintainable: except for rcron, all these tools are already in use within TIM Group and well understood, so there is a very low learning curve.

We’re still investigating the viability of this architecture. Has anyone else had success with this type of scheme? What are the gotchas?

§Yes, the actor in that diagram is actually Cueball from xkcd.

The Summit is Just a Halfway Point

(Title is originally a quote from Ed Viesturs.)

This past week, TIM Group held its Global Summit, where we had nearly all of our Technology, Product, and Sales folks under the same roof in London. For those who aren’t aware, we are quite globally spread. We have technologists in both London and Boston offices (as well as remote from elsewhere in the UK and France), product managers in our NYC/London/Boston offices, and sales/client services folks across those offices as well as Hong Kong and Toronto. This summit brings together teams and departments that haven’t been face-to-face in many months — some teams haven’t been in the same place since the previous summit.

On the Monday, we got together for an unconference (in the Open Spaces format). This was a great way to kick off the week. We were able to quickly organize ourselves, set up an itinerary for the day, and get to it. We discussed what people truly thought were the most important topics. We had sessions on everything from “Where does our (product) learning go?” (about cataloging the knowledge we gain about our new products) to “Deployment next” (about where we take our current infrastructure next), and many other topics across development, product management, DevOps, QA, and beyond.

For the more technically-focused of the group, the rest of the week was filled with self-organized projects. Some of these were devised in the weeks leading up to the Summit; some were proposed on the day by attendees. There were so many awesome projects worked on last week. They ranged from investigating our company’s current technical problems (like HA task scheduling), to creating tools to solve some of our simple problems (like an online Niko-niko tracker), to tackling some bigger-than-just-us problems (like IntelliJ remote collaboration). You should expect to see us share more about these projects on this blog. Stay tuned!

To be clear, it wasn’t all work. We also had a football game, beer tasting, and Brick Lane dinners. Altogether, this was a great opportunity to re-establish the bonds between our teams. While there are many benefits to having distributed teams as we do, there are many challenges as well. We do many things to work past these challenges, and our Global Summit is one of the shining examples. Getting everyone face-to-face builds trust and shared experiences, which helps fuel the collaboration when everyone returns to their home locations.

Coming back to the title, I am optimistic that our teams will build on top of the code, experiences, and collaboration of the summit. We will move forward with a clear view from the Summit, and be able to move into 2014 with more great things in the form of new products, services, or even just great blog posts.

Bring the outside in: gov.uk

Recently, Jake Benilov from the Government Digital Service (GDS) team visited the TIM Group office in London. His presentation on “7 types of feedback that helped make GOV.UK awesome” was a very interesting tour of the key success factors in building the UK central government’s publishing portal.

The talk filled the whole 90-minute time slot – partly because a lot of ground was covered, but also because we asked many questions along the way. One aspect that sparked discussions even in the following days was the definition of “DevOps” in general, and in particular the question of whether developers should have full access to production systems. The latter seems to work well for the GDS team but is not current practice at TIM Group.

If you want to know more, please head over to Jake’s blog entry explaining the seven types of feedback and/or view the slides of the talk.

Dip Your Toe In Open (Source) Waters

One of the qualities that TIM Group look for when filling vacancies is an interest in contributing to open source projects. We think that when a candidate gets involved in open source, it indicates a passion for software development. If you’re like me, at some point, you have wanted to join a project. Perhaps you wanted to improve your skills, try a different technology, or brighten up your CV. But alas, you didn’t know the best way to get started.

I am trying to provide that exact opportunity, in conjunction with RecWorks’ “Meet-A-Project” scheme. I have an open source project, called Mutability Detector, with issues and features waiting to be completed, specifically earmarked for newcomers to the project. I promise a helpful, friendly environment, in which to dip your toe in open (source) waters.

If you want to know more details, head over to the project blog for a description on the how and why of getting involved.

Invading the Product Landscape: Trade-offs and Being Lean

In Part 1, we introduced the Commando, Infantry, Police metaphor that we used to help TIM Group rethink how to transform ourselves into a new product company. In the metaphor, commandos discover how we invade our target market, infantry storm across the market building market share, and police keep our territory of the market peaceful. In this second part, we will discuss other ways that this metaphor and related concepts have caused us to rethink our product development.

The biggest shift was into a “commando” way of thinking. Given that it had been a long time since our current products were new, we needed more than a metaphor to find ways to improve ourselves. Another related space from which we have taken ideas is the Lean Startup movement, first proposed by Eric Ries, which has since gone on to inspire others. A core tenet of Lean Startups is the focus on learning over revenue[1] (and learning over pretty much everything else). This focus on learning means that you want to minimize the time it takes from having a product idea or hypothesis to testing that idea in the market. This Lean Startup concept goes well with our Commando metaphor, as what the commandos want to do is figure out ways to secure key areas of value (e.g. power stations, bridges, beachheads). Commandos don’t go in with huge forces, but with small numbers in a minimal way. Using a minimal force resonates with the Lean Startup’s idea of a Minimum Viable Product, where the goal of the MVP is not to make money or to build a cheap version of your product, but to prove that your product can solve the problem you think you are solving. We need to create focused, minimal forces to prove out our product hypotheses.

This has also meant a shift of focus away from efficiency, without sacrificing the effectiveness of getting an answer to our product hypothesis. As an example, let us assume that we have a new product idea involving a trading system that improves traders’ workflow so dramatically that we think it can supplant existing trading systems. One feature that we may have to build to make it a viable product is trade pricing. Now, the use case at its highest level might be simple: “price trades with the market rates”. There are many choices in implementing this feature: find a third-party provider of market data and integrate with it, create interactive admin screens for people to enter the prices, or price the data by hand (maybe even directly via SQL against the database). These solutions have trade-offs ranging from the first (a large upfront development and investigation cost, but very little operational maintenance) to the last (minimal upfront cost, but a large ongoing operational maintenance burden[2]). All of the solutions are probably equally effective, but we have a range of operational efficiencies. Which implementation should we choose? Taking on this “commando” way of thinking, we should focus on the solution that gets us our answer sooner, even if that means losing out on efficiency in the short term. With a new product, we don’t even know if anyone is going to use the product at all, let alone use it to the point that we have efficiency concerns. We have decided to build our experiments as cheaply as possible, so that we can learn as quickly as possible. In time, the product will no longer be “viable” (as in the “V” in Minimum Viable Product) with a heavily inefficient system, but by that point you will have learned the lesson and the value of that product “territory”, so you should have the knowledge to make the product viable again.

Another efficiency/effectiveness trade-off that we have recently experienced was a resourcing one. We currently have around 25 developers, with 20 of them focused on the new products. During the most recent round of resource allocation, we gave the developers a prioritized list of new product projects that needed solving. The developers chose to allocate themselves to the two highest-priority projects, even though their gut instinct was that they would be “inefficient”, since there would be “too many” devs on the project team to keep busy the whole time[3]. They thought that they might learn a way to utilize all of them (or even more) to get the project completed earlier, and that they should wait and see after some time working on the project before deciding that they truly were over-staffed. Getting the project completed earlier (and therefore answering the product hypothesis sooner) at the cost of efficiency gets us more quickly to an answer on whether we can sell our product.

Another interesting point that caused us a bit of confusion early on was how quality fits into the picture. When people hear the phrase “build experiments to test your hypothesis as quickly as possible”, they might think, “Oh, I will just hack out some code then. No tests. Ugly screens. That’s quick.” That is *not* at all what we should do. When these trade-offs were suggested to the greater team (code quality for potential time-to-market[4] advantage), we confirmed with our teams that these products had to be of as high (or even higher!) quality as our past products. If we only have one chance for a client to use the system and it is buggy or looks awful, they likely aren’t going to look at it again. We should be focusing on simplicity, few features, and precise strikes on the high-value questions that we need answered.

In Part 3 (to be written), I will discuss how we have gone one step further and played with the metaphor, enhancing it to better align with our new (and “old”) product needs.

[1] Geoffrey Moore said in a video interview at SAP (around the 21 minute mark): “The way you kill [new fledgling initiatives within a company] is to hold them to metrics that don’t make sense…”

[2] A new definition of YAGNI? “You Are Gonna Need an Intern”?

[3] Actually, another answer to this one comes from the original Lean/TPS doctrine: eliminating Muri, the overburdening of people and equipment.

[4] There is a strong argument to be made that low code quality or bad development practices will actually slow you down very quickly. Just maybe, that one feature will get done a little bit earlier, but you will pay that back many times over when you review/enhance/modify that code later on and the bad code gets in your way.