Have you ever met a person involved in software development who thinks that the correlation between the number of features and complexity looks something like this?
I mean, who doesn't understand that adding new features adds to complexity?
Then there are the slightly more informed ones who think that when you add one new feature, you add one unit of complexity. You can see this in terms of test cases: to these people, one new feature means roughly one more test to run.
But have you ever been in this situation? (The X axis is the number of features and the Y axis is the number of test cases, or whatever complexity unit you prefer.)
You have a new system to which you slowly add new features. Things aren't that complicated, and then someone comes up with a brilliant idea. It shouldn't be that hard to implement. But the new feature starts an exponential increase in complexity, and now you've created a monster. The initial development cost was perhaps small, but that little thing made all the upcoming features so much harder to implement.
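A toy model may help show why one feature can blow up the curve. This is my own illustration, not from the original post: if every feature is independent, tests grow linearly, but if a new feature interacts with every existing one, the number of on/off combinations to test doubles with each addition.

```python
def independent_tests(n_features: int) -> int:
    """One test per feature: complexity grows linearly."""
    return n_features


def interacting_tests(n_features: int) -> int:
    """Every on/off combination of mutually interacting features
    needs a test: complexity grows as 2^n."""
    return 2 ** n_features


# At 5 features the difference is annoying; at 20 it is a monster.
for n in (1, 5, 10, 20):
    print(n, independent_tests(n), interacting_tests(n))
```

The point of the sketch: the twentieth independent feature costs one more test, while the twentieth interacting feature costs over half a million.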
We also have this scenario: everything runs smoothly, and suddenly complexity starts to grow exponentially. But someone stops the line, recognizes what is going on, and does something about the architecture.
If you’re really lucky, you can perhaps even stop and decrease the complexity. Refactoring should lead to something like this:
But you also have another solution, and that is removing features: features which are heavy in complexity and whose business value does not match the complexity they create. I've done this a couple of times, and on each of those occasions there was a tremendous debate over the issue. But when the feature was actually removed, no one noticed.
So, what do I say? Two things:
- Evaluate the complexity effects of a new feature before implementing it.
- If you have a runaway train of complexity, do something about it right away. That can mean refactoring, or removing the feature(s) that cause the complexity.