All models are wrong, but simple models are more wrong than complex ones. Simple models are more appealing, easier to teach and spread and apply. Because of that, they can bring value faster, and they can cause harm faster. “Everything should be made as simple as possible, but no simpler.” (Einstein) is great advice, but the risk is that we forget the last bit. It’s not a joke, it’s a serious warning against reductionism.
Reductionism is looking at a problem space as a simple combination of parts, and ignoring hidden, but essential, complexities (e.g. in the relations between the parts). You end up with what looks like a simple model but is actually a lossy model.
Luckily, reductionism is often easy to spot. It starts with “[Complex thing] is just a [simple thing]”. The “is just” gives it away. Reductionist models have “5 easy steps” or “3 secrets” or a convenient mnemonic acronym like E.A.S.Y.
Simple models that are not (overly) reductionist can also be spotted: they allow themselves to be changed over time. They accept exceptions as inputs to refine the model. When an exception doesn’t fit, it isn’t discarded but embraced.
How to use a simple model
However (and this is where I’m changing my opinion), even reductionist models have their use, and not just for the teaching and spreading and applying part. There are some important ingredients here, which are not part of the reductionist model itself, but are about how we use the model: scaffolding and enabling constraints.
Say a simple, reductionist model is expected to be good enough to cover 90% of cases. To evaluate our case, we try to fit it into the model. If it works, great. If not, we are either in a 10% case, or we are in a 90% case but doing something wrong. In the 10% case, we need to put in the work to find better, richer, more suitable models. But because we’re more likely to be in the 90%, first we must look for problems in our information and in how we use the model.
This is how the model becomes an enabling constraint: it offers a constraint on how to look at the problem. That enables you to see if you’re in a standard or exceptional situation, raises red flags if something is missing on your side, and helps you pick the right approach. Reductionist models aren’t usually presented to you that way. They should come with a sticker on the box: “Only applicable in 90% of cases. Use as enabling constraint only.”
Scaffolding is the other ingredient. Scaffolding means that you use the model to make progress fast in the beginning, use it as the fire starter for finding your own better models, and then get rid of the simple model.
This is basically a superpower. You start by not re-inventing the wheel, but you end up discovering novel solutions that address novel problems that the simple model’s author couldn’t possibly foresee. Again, most of the time simple models aren’t sold to us explicitly as scaffolding. Perhaps the author, being a great problem solver, uses the model as scaffolding themselves, but it’s not labeled as such.
It is up to us to use them that way. Model authors can help, though: they can include ways to get rid of the model as part of the model. “Here’s how you apply it, here’s how you look for opportunities to evolve it, and here’s how you get rid of it.”
Our relation to simple models
Ironically, by adding these ingredients to the model itself, the model becomes more complex (that is, it addresses more of the inherent complexity in the problem space). And complex models are, although less wrong, harder to teach, spread, and apply!
I used to blame the authors, but (and this is also something where I’m changing my mind) I think all these effects are inevitable. Simple models are more likely to become popular.
The work of using simple models as enabling constraints and scaffolding is up to us.
Sometimes you’ll want to follow the recipe. A recipe can be a great tool to understand why the soufflé doesn’t rise, and a kickstarter to create your own dishes. And if you follow the recipe, remember to taste the food before you serve it.