Small Controlled Experiments

How we made continuous improvement truly continuous, using stickies, a timeline, and a few minutes each day.

Published on 22 March 2014 by @mathiasverraes

You’re in a meeting. Maybe it’s a daily stand-up, or an agile retrospective. Somebody has an idea. A heated debate follows. Everybody is stating opinions, trying to convince the others, explaining why their idea is going to work and the alternative is stupid. Because everybody is focused on making their point, nobody is actually listening to the others. Eventually the decision is either postponed, or made by the person with the highest salary, or by whoever pushes the hardest. The decision is final, and it may or may not actually get implemented. And if it is, it is never properly evaluated, or adapted.

I humbly admit to being guilty of all of these. And it’s all such a terrible waste.

The cure is to set up small controlled experiments.


Continuous Improvement

I used to think that simply trying to do a good job was sufficient to incrementally improve the way you do things in your organisation, your process, your culture. Having worked closely with @VincentVanderh for a year, I’ve learned to appreciate that, even when you think you’re doing just fine and you work with the greatest people, a relentless focus on making small improvements every day is going to pay off massively.

The basic principle is to identify a problem, find the simplest thing you could try to improve it, try it for a short time, and evaluate. After that, you either implement the change permanently, dismiss it, or start a new cycle. The idea is of course not new. Deming calls it the PDCA cycle (Plan-Do-Check-Act). The Japanese, spearheaded by Toyota, have made Kaizen, meaning “change for the best”, a central part of their business culture.
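
To make the loop concrete, here is a minimal sketch of the cycle in Python. This is my own illustration, not anything prescribed by Deming or Toyota; all the names are made up.

```python
from enum import Enum, auto

class Outcome(Enum):
    ADOPT = auto()    # implement the change permanently
    DISMISS = auto()  # drop the idea
    ITERATE = auto()  # adjust the change and start a new cycle

def pdca(plan, do, check):
    """One pass through Plan-Do-Check-Act."""
    change = plan()                # Plan: pick the simplest thing you could try
    observations = do(change)      # Do: try it for a short, fixed time
    outcome = check(observations)  # Check: evaluate with the whole team
    return outcome                 # Act: carry out whatever was decided

def improve(plan, do, check):
    """Cycle until the team either adopts or dismisses the change."""
    outcome = pdca(plan, do, check)
    while outcome is Outcome.ITERATE:
        outcome = pdca(plan, do, check)
    return outcome
```

The point of the structure is that every change has exactly three possible fates: adopted, dismissed, or adjusted for another cycle.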

Our first attempts were clumsy at best. We had improved a lot since the start of the project, so we felt we didn’t need a formal process. We didn’t identify major problems, let alone major solutions. We were ready to dismiss the whole idea of PDCA as just another management buzzword.

We were partly right. The process was too formal for our context, and the last thing you want to do is burden knowledge workers on a tight schedule with more process to worry about. More importantly, we were trying too hard: we focused on finding major, critical problems and grand, all-encompassing solutions. And that only works in the movies. We did keep trying, however. We came up with experiments, wrote them on a sticky, put them in a corner of the whiteboard, and started doing them. Well, sometimes. More often than not, the stickies hung there for a while, until someone felt they were pointless and threw them out. Some ideas did get implemented – after all, we had been improving somewhat steadily before we had a name for it – but it was all rather vague and unpredictable.

Make it lightweight

Some changes had to be made. The biweekly retrospectives made the improvement process too slow. By this time, we had pretty much gotten rid of Scrum(ish) sprints in favour of a more continuous Kanban flow. Saving up all the improvement ideas and discussion for the retrospective started to feel like a remnant of the biweekly sprint cadence, something that no longer fit our flow. We needed the benefits of evaluating our process, without the slowness of retrospectives.

We changed the rules of the experiments. A number of things turned out to be critical:

  1. It works best if the experiment takes as little effort as possible. Most of them take no time at all, as they only make a small change to something we were already doing. Others take a minute a day, or even a minute a week. That’s the “small” in “small controlled experiment”.

  2. Sometimes one person is assigned to an experiment, but usually it’s for the whole team.

  3. When starting an experiment, make sure everybody understands that it is limited in time, and that they will be heard and allowed a veto when it is evaluated. This is particularly important for the more introverted team members, especially if you have one or two loudmouths like yours truly.

  4. Make sure the experiments are for everybody. I tended to focus on improving conditions for the developers, but of course testers, analysts, the project manager, … should all benefit.

  5. Deciding and visualising exactly when the evaluation will happen is probably the best thing we did. It gives the experiment a sense of urgency and a deadline, preventing it from slowly fading into oblivion. That’s the “controlled” in “small controlled experiment”. (There’s a small sketch of this after the list.)

  6. We’d usually have a couple of experiments running simultaneously at any given time.
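
To illustrate point 5: every sticky carries an explicit, visible evaluation date. Here is a minimal sketch of that bookkeeping in Python; the names and example experiments are invented, and on our team this was literally stickies on a timeline, not software.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Experiment:
    description: str
    evaluate_on: date         # the fixed, visible deadline: the "controlled" part
    assignee: str = "whole team"

def due_at_standup(board: list[Experiment], today: date) -> list[Experiment]:
    """The experiments that must be evaluated at today's standup."""
    return [e for e in board if e.evaluate_on <= today]

# A couple of experiments running simultaneously (invented examples):
board = [
    Experiment("Start the standup at 9:15 sharp", date(2014, 3, 28)),
    Experiment("Write one test before every bugfix", date(2014, 4, 4)),
]
for e in due_at_standup(board, date.today()):
    print(f"Evaluate today: {e.description} ({e.assignee})")
```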

Here’s a sample of the areas we experimented with, to give you some ideas. Some of these happened before we tried PDCA or Kaizen, before we settled on the small controlled experiments described above, but making experimentation a core part of our standup has greatly increased the rate of improvement.

  - Hacking our productivity
  - Technical Debt
  - Visualisation
  - Kanban board
  - Process
  - Analysis

Exposing invisible problems

I could go on for a bit, but I hope the point is clear. According to the Toyota Way, any process, no matter how good you think it is, has waste. The waste is usually not obvious; if it were, it would have been improved already. Many experiments are not focused on solving a known problem, but on trying things to expose unknown problems. Make sure they are small, low-cost, and low-risk. The benefit may be anywhere between zero and enormous. You won’t know in advance, so opinions have little value. You are the scientist of your own culture.

Optionality

As an aside:

Humans favour courses of action that have a well-known, limited set of potential gains, with an unknown, unlimited potential cost. We ignore this cost because it is invisible. In “Thinking, Fast and Slow”, Daniel Kahneman returns again and again to the brain’s habit of ignoring the unseen, which he calls “What you see is all there is”.

The opposite is a course of action with a well-known, limited potential cost and an unknown, unlimited potential gain. In “Antifragile”, borrowing from the finance world, Nassim N. Taleb calls this “optionality”. Keep this in mind: many, many very small experiments with a very limited cost will eventually produce totally unexpected outcomes.
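
A toy simulation makes the asymmetry visible. Assume (these numbers are invented) that each experiment costs 1 unit of effort, that 95% of experiments yield nothing, and that the remaining 5% pay off along a heavy tail:

```python
import random

random.seed(7)
N, COST = 1000, 1.0  # many very small experiments, each with a small, known cost

def gain():
    """Invented payoff model: usually nothing, occasionally heavy-tailed."""
    if random.random() < 0.05:
        return 50 * (random.paretovariate(1.5) - 1)
    return 0.0

gains = sorted(gain() for _ in range(N))
print(f"total cost (capped by design): {N * COST:.0f}")
print(f"experiments that gained nothing: {sum(g == 0 for g in gains)}")
print(f"three biggest wins: {[round(g) for g in gains[-3:]]}")
print(f"net outcome: {sum(gains) - N * COST:+.0f}")
```

Most experiments gain nothing, yet a handful of outliers can dominate the total. That bounded-cost, unbounded-upside shape is the optionality to look for.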

Read More

My talk on Small Controlled Experiments



