I haven’t written anything on here for quite a while. I haven’t been sitting still, though. I’ve gone independent (yes, I’m for hire!) and been working with a few clients, generally having a lot of fun.
I was also lucky enough to be able to function as Chet's assistant (he doesn't need one, which was part of the luck :-) while he was giving the CSD course at Qualogy recently. Always a joy to observe, and a source of some valuable reminders of the basics of TDD!
One of those basics is the switch between design and implementation that you regularly make when test-driving your code. When you write the first test for some functionality, you are writing a test against a non-existent piece of code. You might create an instance of an as-yet non-existent class (Arranging the context of the test), call a non-existent method on that class (Acting on that context), and then call another non-existent method to verify the results (Asserting). Then, to get the test to compile (but still fail), you create those missing elements. All that time, you're not worrying about implementation; you're only worrying about design.
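To make that concrete, here is a minimal sketch of such a first test in Java with JUnit, using a made-up InvoiceCalculator/Invoice domain; at the moment this test is written, none of these classes or methods exist yet.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class InvoiceCalculatorTest {

    @Test
    void totalOfEmptyInvoiceIsZero() {
        // Arrange: an instance of an as-yet non-existent class
        InvoiceCalculator calculator = new InvoiceCalculator();

        // Act: call a method that doesn't exist yet either
        int total = calculator.totalFor(new Invoice());

        // Assert: verify the result
        assertEquals(0, total);
    }
}
```

Writing this test is a pure design activity: it decides that there will be an InvoiceCalculator, that it has a totalFor method, and what that method is expected to return.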
Later, when you add a second test, you'll use those same elements but change the implementation of the class you've created. Only when a test needs some new concept does the design evolve again, and even then such a test will initially only trigger an empty or trivial implementation for the new elements.
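Continuing the hypothetical example above, a second test added to the same test class reuses those elements; the only new element it asks for, addLine, can start out with a trivial implementation.

```java
@Test
void totalIsSumOfLineAmounts() {
    InvoiceCalculator calculator = new InvoiceCalculator();
    Invoice invoice = new Invoice();
    invoice.addLine(10);
    invoice.addLine(32);

    // Getting this to pass changes the implementation of totalFor,
    // not the design that the first test already put in place.
    assertEquals(42, calculator.totalFor(invoice));
}
```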
So separation of design and implementation, a good thing. And not just when writing micro-tests to drive low-level design for new, fresh classes. What if you’re dealing with a large, legacy, untested code base? You can use a similar approach to discover your (future…) design.
As described in Michael Feathers' great book 'Working Effectively with Legacy Code', when you have such a legacy code base, a good first step is to surround it with some end-to-end tests. Those tests are needed to be able to change the code without fear of breaking the existing functionality. But those tests can also be used as a way to discover a target design for the system. A design that is usually not at all clear when you start attacking such a beast.
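As a sketch of what such an end-to-end test could look like (with a hypothetical LegacyBillingApplication entry point), a characterization test pins down whatever the system does today, rather than what it should do:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class BillingCharacterizationTest {

    @Test
    void monthlyRunStillProducesTheReportItProducesToday() throws Exception {
        LegacyBillingApplication app = new LegacyBillingApplication();

        String report = app.runMonthlyBilling("2015-06");

        // The golden file holds the output the system produces right now,
        // captured once; any change to it is a change in behaviour.
        assertEquals(Files.readString(Path.of("golden/monthly-2015-06.txt")), report);
    }
}
```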
So how would we do this in practice? Let's say we want to use ATDD/BDD-style scenarios to describe the expected behavior. That way, we can write them at a high level and verify them together with end-users of the system, so we know we're testing the right functionality. A tool such as FitNesse or Cucumber can be used to store our test cases, and the corresponding fixture, step-definition, or glue code can be created to implement the test.
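For example, with Cucumber-JVM the glue code for a (made-up) ordering scenario starts out as an empty skeleton; the step texts and class names below are purely illustrative.

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;

public class OrderSteps {

    @Given("a customer with an empty order")
    public void aCustomerWithAnEmptyOrder() {
        // to be implemented
    }

    @When("the customer adds {int} items of product {string}")
    public void theCustomerAddsItems(int quantity, String product) {
        // to be implemented
    }

    @Then("the order total is {int} cents")
    public void theOrderTotalIs(int expectedTotal) {
        // to be implemented
    }
}
```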
At this point, what usually happens is that those step definitions are implemented using outside interfaces to the system, often GUI interfaces, driven by tools such as Selenium or Robot Framework. And though new glue code could be written to run the same tests against a new or refactored implementation, this is a missed opportunity.
If we instead implement the glue code against the API the system would naturally need in order to support the functionality described by our test scenarios, we are discovering the design of the system in the same way we do it at a more granular level with our unit tests.
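Continuing the hypothetical ordering example, the step definitions can be written against an OrderService API that does not yet exist in the legacy system; it is the API we would like it to have (one possible implementation of it is sketched at the end of this post).

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class OrderSteps {

    // The expected, 'natural' API of the system, discovered by writing
    // these steps; LegacyBackedOrderService is sketched further down.
    private final OrderService orders = new LegacyBackedOrderService();
    private String orderId;

    @Given("a customer with an empty order")
    public void aCustomerWithAnEmptyOrder() {
        orderId = orders.createOrderFor("test-customer");
    }

    @When("the customer adds {int} items of product {string}")
    public void theCustomerAddsItems(int quantity, String product) {
        orders.addItems(orderId, product, quantity);
    }

    @Then("the order total is {int} cents")
    public void theOrderTotalIs(int expectedTotal) {
        assertEquals(expectedTotal, orders.totalOf(orderId));
    }
}
```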
Creating the design in this way still allows us to use whatever technology is appropriate for our legacy system to implement that system API. But it also gives us a target design for the system, one that can be called directly from the glue code at a later stage. It will guide the discovery of all the places in the existing system where functionality does not live in the right place or element. And it allows a controlled, incremental refactoring of the system into a more maintainable state.
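Staying with the same made-up ordering example: the discovered API can live as a plain interface, with a first implementation that simply delegates to whatever the legacy system offers today, whether that is direct calls into old modules or a Selenium-driven GUI. The LegacyOrderModule facade below is hypothetical.

```java
public interface OrderService {
    String createOrderFor(String customerId);
    void addItems(String orderId, String productCode, int quantity);
    int totalOf(String orderId);
}

// A first, legacy-backed implementation; in a real code base this would live
// in its own file and call into the existing system as-is.
class LegacyBackedOrderService implements OrderService {

    @Override
    public String createOrderFor(String customerId) {
        return LegacyOrderModule.newOrder(customerId);
    }

    @Override
    public void addItems(String orderId, String productCode, int quantity) {
        LegacyOrderModule.addOrderLine(orderId, productCode, quantity);
    }

    @Override
    public int totalOf(String orderId) {
        return LegacyOrderModule.calculateTotal(orderId);
    }
}
```

Once the refactoring has progressed far enough, the glue code can swap this adapter for an implementation that calls the new, cleanly designed code directly, without the scenarios themselves having to change.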