
Lagerweij Consulting and Coaching

Planning is dealing with uncertainty

To understand how much uncertainty will always impact your plans, you need to understand the work. If you don’t, you’re toast.

If we know everything that is going to happen, exactly which steps to take, all the illnesses and unexpected changes that will occur, all the ways the outside world will change our situation, and in minute detail how we need to build whatever we want to build, then planning is easy.

Testing is about confidence

In “The Product Owner’s Guide To Escaping Legacy” I explain how, as part of improving our understanding of a legacy system, we can go about discovering new information about the system, and how we can keep track of how we are currently testing its functionality. We then use that knowledge to be more targeted in our approach to testing.

The goal of testing is to be able to release with confidence. Confidence can come from many different activities and measurements. What we are doing here, initially, is reducing one particular, very visible testing activity: the scripted manual regression test. Since in most situations this is where the bulk of the test effort is centered, this will feel like a significant reduction in testing, and an assumed reduction in test coverage. In other words: it’s scary and reduces confidence.

The Story Map as a basis for planning

As I was giving another version of my Discovery and Formulation training recently, I noticed again that one of the ways I use the story map as a basis for planning is quite unfamiliar to people. I think it is one of the simplest ways to help teams actually start delivering in small iterations, so I figured I’d write a short description of it here.

The training is about how to take a feature, break it into stories using Story Mapping, and then detail those stories further using example mapping, even writing those examples down in the gherkin language so they can serve as a basis for testing. To most teams, that is part of an activity called ‘Refinement’. That’s fine. However, even though the main outcome of these activities is a shared understanding of the functionality we want to build, there are other outcomes too: we split a larger idea into smaller steps, a feature into slices of the Story Map, slices into stories, stories into rules and examples. And by doing that we also consider size, explicitly or implicitly. In other words, there’s some estimation going on, too! And estimation is part of planning.
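To make the rules-and-examples step concrete, here is a sketch of what the output of example mapping might look like once written down in gherkin. The feature, rule, and examples are invented purely for illustration; any real feature file would of course use your own domain language:

```gherkin
# Hypothetical feature file, invented for illustration.
Feature: Shipping costs at checkout

  Rule: Orders above 100 euros ship for free

    Example: Order just above the threshold
      Given a cart with a total of 101 euros
      When the customer checks out
      Then shipping is free

    Example: Order below the threshold
      Given a cart with a total of 99 euros
      When the customer checks out
      Then standard shipping costs apply
```

Note how the structure mirrors the splitting described above: one rule per card from the example map, and one concrete example per scenario, each small enough to discuss, estimate, and test on its own.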

Technical Debt

The term Technical Debt has been wildly successful.

Ever since it was first introduced by Ward Cunningham in 1992 (Ward Cunningham, “The WyCash Portfolio Management System”), Technical Debt has grabbed the collective consciousness of the software development community and has functioned as a label applied to all the different kinds of technical shortcomings we can imagine.

The first version of Technical Debt is the one originally described: making a conscious decision to postpone a change in the structure of our system, the need for which was triggered by new functionality, in order to get a faster time to market. This does not mean that we have bad code, or that we skipped testing the new functionality; it just means that the structure of the code doesn’t resemble the mental model of our domain as closely as it could. The key word here is conscious: this is not an accidental slip in quality, but an intentional trade-off between time to market and quality.

Legacy

Within engineering circles one of the most popular definitions of ‘Legacy’ is:

“To me, legacy code is simply code without tests.” – Michael Feathers

This might be a starting point for us when looking at legacy from a product point of view, but it is too closely tied to engineering and doesn’t allow us to talk about legacy in terms related to product development and delivery. How can we broaden this definition so that we can view it, and work on it, as a shared problem?

Getting a legacy system under test

So many of us end up in the same place: with an application that we’ve built, as quickly as possible, that has found enough users, and that seems to be getting harder and harder to make any changes to. We describe the state we end up in using terms like legacy system, technical debt, long manual regression cycles, or less polite terms.

Working with a variety of companies that have come to be in that position, I’ve found an approach to dealing with these situations. The details vary according to the needs of each client, but in each case the plan of attack is the same: chart the high-level functionality of the system; provide a broad-but-shallow level of testing; prioritise; and dive deep into relevant areas. All of this uses techniques that can be applied seamlessly to developing new functionality too.
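As a sketch of what the broad-but-shallow step can look like in practice, here is a minimal characterization test in the style Michael Feathers describes: we pin down what the system currently does, without yet judging whether that behaviour is correct. The `legacy_price` function and its discount rule are invented for illustration; in a real system you would call into your own legacy code instead:

```python
# A minimal characterization test: record what the legacy code does
# today, so that future changes can be made with confidence.
# `legacy_price` is a hypothetical stand-in for real legacy code.

def legacy_price(quantity, unit_price):
    # Imagine this is tangled legacy logic we don't fully understand yet.
    total = quantity * unit_price
    if total > 100:
        total = total * 0.9  # some undocumented discount rule
    return round(total, 2)

def test_characterize_legacy_price():
    # Broad but shallow: a handful of representative inputs, with the
    # expected values copied from what the system actually returns now.
    assert legacy_price(1, 50) == 50
    assert legacy_price(3, 50) == 135.0  # the hidden discount kicks in
    assert legacy_price(0, 50) == 0

if __name__ == "__main__":
    test_characterize_legacy_price()
    print("characterization tests pass")
```

The point is not thorough coverage of any one area; it is a thin safety net over the whole system, which the later dive-deep step then thickens where it matters most.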