Archive

Posts Tagged ‘quality’

Are you consistently bad?

We need to be consistent! We need to be reliable!

Isn’t that nice?

But then consider Sanjiv Augustine’s wonderful statement from APLN:

[Image: Sanjiv Augustine’s statement from APLN (source: Jesse Fewell)]

A team can be consistent and reliable and still deliver garbage. If a team always delivers garbage, it’s pretty predictable, isn’t it? You can truly rely on it delivering garbage. Business as usual.

I have a childhood friend who works with a supplier he really can count on. He can count on the system failing after each upgrade. He can rely on support not being able to fix his problems after the delivery, and he can be sure to have a lot of angry users on his back. After each upgrade.

What bugs my friend the most is not the individual problems. It’s that no one seems to care that his company loses money every time they upgrade the system. He’s annoyed that nothing gets better. He feels like the supplier sees the problems as normal. “Well, they are normal, since they are consistent,” he laughs with panic in his voice when we talk after an upgrade.

I’ve told my friend that he should abandon the supplier, for his company’s sake but mostly for his own health’s sake. And this is my recommendation: if you work with a supplier who sees garbage deliveries as the normal state, get out. And if the supplier claims the problems are different each time and blames this and that, well, that is no excuse. If the process consistently produces garbage, it does not matter whether the garbage is of type A one day and type B the next. Clearly, the process allows the garbage.

If you instead work at such a supplier, you should do something about it. The competition out there is tough, and blaming this and that does not help. Look back at your latest deliveries and figure out how many critical errors your company has exposed your users to lately. And no, don’t blame the integrators or the installation guys or even the users. Look at the end product.

Are you proud, or are you lining up arguments for why you were not to blame for these incidents?

Jesse Fewell has presented on his blog how Sanjiv Augustine describes his idea of an Agile PMO and how such a setup can fuel continuous improvement.

This is just one take on the problem and its handling, because this is not a problem that can be solved once and for all; it’s something you need to work on all the time and at all levels. So what if you have 90% code coverage, work test-driven, use Scrum, and have continuous integration? These are great tools among many others, but if you are still in the garbage-spreading business, what’s the use of mixing the garbage with some fine wines?


What can we learn from coffee?

I’ve previously posted a thought about using The Ultimate Question to show the value of quality. The Ultimate Question is: “Would you recommend X to a friend?”

I believe that slipping quality might not drive current customers away, but it can prevent customers from recommending a product. For example, changing a financial system is a hassle, so those annoying errors you have learnt to live with might not make you switch, but when your friend starts a new business, you cannot recommend the system.

I just read Wired to Care. I happened to find the book on Audible and thought, why not? I expected it to be one of those nonsense books (or rather, all-sense books) that just tell you what you already know and state the obvious. But that is far from true of this book. Filled with real stories and examples, it presents the not-so-obvious cases. Like the coffee industry in the USA.

You’ve probably already heard this if you’re from the US, but for a Swedish gal, this was news.

Coffee was a good-tasting, high-quality product in the 1950s. Then a chill fell over the coffee fields in Brazil and the price of high-quality beans skyrocketed. So, what to do?

Maxwell House learnt that customers were not prepared to pay the new price, and they found that if they added a tiny percentage of lower-quality beans, the customers couldn’t tell the difference and prices could be kept down. Wonderful. But the following year they wanted to cut prices even more, tested a blend of slightly lower quality, and since the customers didn’t mind, they were on a slippery slope. After many years, Maxwell House found that they weren’t getting any new customers. The young turned away from coffee. Yet their customer evaluations showed that existing customers liked the blend and were not prepared to pay more.

But then came Starbucks. They realized that the old customers had slowly learnt to cope with the declining quality and were not prepared to pay more. But the young customers hadn’t learnt to live with the change and were baffled by their parents drinking this vile coffee. They were not prepared to pay little for crap.

Lowering the quality whenever the customers accept it is probably a bad idea, but it’s an easy path to walk. They don’t seem to mind, so why bother with that pixel? It probably doesn’t matter to the user if the system becomes a couple of seconds slower. It’s all those many “smart” decisions that make that horrible coffee, while someone else can proudly present a prime product and even charge for it.

When a single feature results in an exponential increase in scenarios

Have you ever met a person involved in software development who thinks that the correlation between the number of features and complexity looks something like this?

[Figure: complexity staying flat no matter how many features are added]

I mean, who does not understand that adding new features adds complexity?

Then there are the slightly more informed ones, who think that when you add one new feature, you add one unit of complexity. You can see this in terms of test cases: these people think that one new feature means roughly one more test you have to run.

[Figure: complexity growing linearly, one unit per added feature]

But have you ever been in this situation (the X axis is the number of features and the Y axis is the number of test cases, or whatever complexity unit you prefer)?

[Figure: number of test cases growing exponentially after a certain feature]

You have a new system to which you slowly add new features. Things aren’t that complicated, and then someone comes up with a brilliant idea. It shouldn’t be that hard to implement. But the new feature starts an exponential increase in complexity, and now you’ve created a monster. The initial development cost was perhaps small, but that little thing made all the upcoming features so much harder to implement.
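To make the arithmetic concrete, here is a minimal sketch in Python (the feature names are made up): features that are independent of each other add test scenarios roughly linearly, while a feature that changes the behaviour of every other feature multiplies them, doubling the combinations with each such toggle.

```python
from itertools import product

# Hypothetical example features. Independent ones need roughly one test
# each; interacting ones change each other's behaviour, so every on/off
# combination has to be covered together.
independent = ["export", "search", "print"]
interacting = ["offline_mode", "multi_user", "dark_mode"]

linear = len(independent)                                       # grows as n
combos = list(product([False, True], repeat=len(interacting)))  # grows as 2^n

print(f"{linear} scenarios for the independent features")
print(f"{len(combos)} combinations of the interacting features (2^{len(interacting)})")
for combo in combos:
    enabled = [name for name, on in zip(interacting, combo) if on]
    print("  test run with:", ", ".join(enabled) or "baseline")
```

Three interacting toggles already mean eight combinations to cover; ten of them mean 1024. That is the monster.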

We also have this scenario. Everything runs smoothly, and suddenly an exponential growth of complexity starts. But someone stops the line, recognizes what is going on, and does something about the architecture.

[Figure: exponential growth halted by an architectural fix; complexity flattens out]

If you’re really lucky, you can perhaps even reverse course and decrease the complexity. Refactoring should lead to something like this:

[Figure: complexity decreasing after refactoring]
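As a sketch of what such a refactoring can look like (the checkout example below is hypothetical), the usual trick is to turn multiplicative complexity into additive: instead of one code path per combination of concerns, give each concern its own small, independently testable piece.

```python
# Before: every (payment, shipping) pair has its own branch.
# With n payment methods and m shipping options this needs n * m tests.
def checkout_before(payment: str, shipping: str) -> str:
    if payment == "card" and shipping == "home":
        return "charge card, book courier"
    if payment == "card" and shipping == "pickup":
        return "charge card, reserve locker"
    if payment == "invoice" and shipping == "home":
        return "send invoice, book courier"
    if payment == "invoice" and shipping == "pickup":
        return "send invoice, reserve locker"
    raise ValueError("unsupported combination")

# After: each concern stands alone and can be tested alone,
# so the test count drops from n * m to n + m.
PAYMENTS = {"card": "charge card", "invoice": "send invoice"}
SHIPPING = {"home": "book courier", "pickup": "reserve locker"}

def checkout_after(payment: str, shipping: str) -> str:
    return f"{PAYMENTS[payment]}, {SHIPPING[shipping]}"

assert checkout_before("card", "pickup") == checkout_after("card", "pickup")
```

With two of each the difference is small, but with ten payment methods and ten shipping options it is 100 branches to test versus 20 building blocks.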

But there is also another solution, and that is removing features. Features that are heavy in complexity and whose business value does not match the complexity they create. I’ve done this a couple of times, and on all of those occasions there was a tremendous debate over the issue. But when the feature was actually removed, no one noticed.

So, what do I say? Two things:

  1. Evaluate the complexity effects of a new feature before implementing it (see the sketch below).
  2. If you have a runaway train of complexity, do something about it right away. That can mean refactoring or removing the feature(s) that cause the complexity.
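For the first point, a rough back-of-the-envelope check can help (the helper below is a hypothetical illustration, not a formula from any book): ask how many existing features the proposed one will interact with, because the interactions, not the feature itself, dominate the scenario count.

```python
def added_scenarios(interacts_with: int) -> int:
    """Rough estimate: the feature alone is one scenario, and each existing
    feature it interacts with doubles the combinations to cover."""
    return 2 ** interacts_with

for n in (0, 1, 3, 5, 8):
    print(f"interacts with {n} existing features -> ~{added_scenarios(n)} new scenarios")
```

If the answer comes out in the hundreds, that brilliant idea deserves another look before anyone writes a line of code.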