This article was originally published by the Agile Institute. This article is a precursor to the webinar “Essential Technical Capabilities: Four Activities for Successful Incremental Software Development.” You can now watch the recording of the live presentation.
What is “Technical Debt”?
People are still debating the one true meaning of the term “technical debt.” It was coined by Ward Cunningham in 1992, and the short definition is this:
Technical Debt is the deferment of good software design for the sake of expediency. Put simply, we’ve chosen to make some questionable design choices in order to get product delivered. This may be a conscious choice, but—more often than not—it’s unconscious, and the result of a time-crunch.
Why is Technical Debt a concern?
Technical debt has real impact on the bottom line. It can slow the delivery of future releases or sprint increments, make defects harder to find and fix, and erode good testing practices.
What do you mean by “Software Design”?
The word “design” has many meanings within the software industry. “Software design” is distinct from user-interface design (roughly the “look and feel” of the software product) and also user-experience design (roughly, how the user uses the software to do something they want to do, and the subjective quality of that experience).
Software design is the internal structure of the code. It’s something that is chosen and written by software developers, and typically needs to be read and understood only by software developers on the team.
It’s not magic, and it’s not some pie-in-the-sky notion of perfection or art. There’s certainly skill and finesse that goes into doing it well. But the value of a good design is entirely pragmatic. A good software design does two things:

1. It makes the code easy for the team to read and understand.
2. It makes the code easy, and safe, to change.
And that’s really it. There’s a whole library aisle written about it, and a lot of that is very informative. All the lessons of Design Patterns, for example, are very useful. It’s not a waste of time to learn about Design Patterns. And still, they all boil down to those two.
Do we need to get the software design right, upfront?
I built software in ye olde “pre-Agile” days, from roughly 1985 to 1998, and the techniques we used were—in comparison—excessively predictive, rather than relying on the fast feedback and empirical methods that we used on Extreme Programming (XP) teams, for example. (XP is similar to Scrum, plus the addition of “Scrum Developer Practices.”)
Doing all the design up-front, in a “design phase,” worked fine until we started testing (if the business felt there was still time for testing…), or until it was in the hands of the customers.
That’s why we enthusiastically embraced Agile methods like Scrum and XP, long before the term “Agile” was used. In 1998 we said “Let’s do what’s needed now, and continuously fold in what we learn.”
How could that possibly work? We’re doing Agile, and it’s still a painful experience.
Let me tell you about what I call the “Agilist’s Dilemma.”
For about the first 5-8 sprints (or “iterations”), everything may go smoothly: everyone is happy to be doing what they enjoy. Coders code, testers test, teams demonstrate tiny fractions of high-priority working software to stakeholders. There are balloons and cake. (But no glitter! For the love of all that has nooks and corners, please, NO GLITTER!)
After that, everything typically starts to slow down considerably. Why is this?
In order for the developers to add new features, they have to alter code they’ve already written, throughout the application. That’s just the nature of good software development: the more central the lines of code, the more likely they are to change over time, in order to support new functionality.
But developers really don’t want to break anything they’ve worked hard to build. (See, they’re really not trying to make your life miserable…quite the opposite!) So as the software becomes more complex, they have to proceed more and more carefully or risk introducing defects. Either they slow down, or they make mistakes resulting in defects, which means more time spent searching for and fixing those defects. And fixing defects involves more changes, possibly resulting in more defects…
Testers run into a similar dilemma: At first, it’s easy to keep up with the new features. But, because the developers need to change things, testers need to test everything, every sprint. Again, this gets to be a greater and greater challenge. We see teams doing some crazy things, like prioritizing certain tests, or running tests less frequently. Those defects from the previous paragraph sneak past the testers and fall into the user’s lap. Or laptop.
So everything slows down, either because people are trying to do their jobs conscientiously, or because the quality of the product is degrading due to statistically unavoidable human fallibility.
This is unsustainable. And, of course, we then hear that “Agile sucks!”
It should be no surprise that if we’re going to ask our teams to do something highly iterative and incremental, the coding and testing techniques we would have used for a gated “waterfall” process are not going to work anymore. It’s not merely that they’re not sufficient; they’re actually counterproductive!
The solution to the Agilist’s Dilemma is to use development practices that are better suited to a highly iterative and incremental approach, and to stop doing practices that act as an impediment to the agility we seek.
What technical practices are needed to reduce or avoid technical debt?
Let’s first look at the heart of the problem: We need to be able to enhance the functionality of our software without damaging any of the prior investment in functionality. We need to have software that is soft: it needs to be easy to change. To that end, our software design needs to:

- accommodate new, often unanticipated, functionality; and
- preserve all of the behavior we’ve already built.
On an Agile team, we support this by continuously reshaping the design so that it’s (a) appropriate for the current functionality; and (b) flexible enough to receive unexpected, unpredictable enhancements.
We call this practice “refactoring” – and it’s the core design practice for Agile software development.
Refactoring is the reshaping of the code’s structure without changing any of the behavior of the system, so that we can then more easily add the new functionality. It’s not rework, and it’s not rewriting. A good design is a changeable design, by definition.
It’s an ongoing, never-ending activity that is best done in very tiny increments, like a few seconds of refactoring every 5 minutes.
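Here’s a minimal sketch of what a behavior-preserving refactoring looks like in practice (a hypothetical invoice example, in Python; the names `total_before`, `subtotal`, and `SALES_TAX_RATE` are illustrative, not from any real codebase). The output is identical before and after; only the structure, and readability, improves:

```python
# Before: the intent (and the tax rate) is buried in one expression.
def total_before(items):
    return sum(qty * price for qty, price in items) * 1.08

# After: the magic number gets a name, and each step gets a name.
# Behavior is unchanged; the safety-net of tests proves it.
SALES_TAX_RATE = 0.08

def subtotal(items):
    return sum(qty * price for qty, price in items)

def total_after(items):
    return subtotal(items) * (1 + SALES_TAX_RATE)
```

A rename or an extract-function like this takes seconds, which is exactly the granularity at which continuous refactoring happens.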
Sounds crazy, right? It’s actually quite simple, and very powerful. The best software designs I’ve seen got there through simple, continuous, wholehearted refactoring.
But refactoring can’t be done in isolation. You can’t simply tell the team: “Okay, now go refactor!”
How can we refactor safely?
A team can’t refactor unless they have a lot of confidence that their changes won’t alter existing behavior. And the only way to know that is to have a comprehensive—and very fast—automated test-suite. I will often refer to this test-suite as “the safety-net.”
In order to build and maintain this safety-net of fast tests, a team needs to be doing either Test-Driven Development (TDD), or Behavior-Driven Development (BDD). Or both! (Since the BDD workflow includes the TDD workflow.)
These practices are often called “test-first” practices because we write a single test or scenario, and we work to get that test passing before we move on to writing another test.
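As a sketch of that test-first rhythm, here is a hypothetical example (pytest-style assertions; `is_leap_year` is my invented function, not from the article). The tests are written first and fail, then the simplest passing implementation follows:

```python
# Step 1 (red): these tests exist, and fail, before the production code does.
def test_century_years_are_not_leap_years():
    assert not is_leap_year(1900)

def test_every_fourth_century_is_a_leap_year():
    assert is_leap_year(2000)

def test_ordinary_fourth_years_are_leap_years():
    assert is_leap_year(2024)

# Step 2 (green): the simplest code that makes the tests above pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

Step 3 is the refactoring: with the tests green, we reshape the code, then write the next failing test.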
Folks always ask me why the team can’t write the tests after coding. There was an old study comparing TDD with unit-test-after that suggested test-after was actually a little faster. The problem, though, was that the test-after teams’ test-coverage was abysmal, and quality suffered proportionally. On a real (non-academic) application, defects result in rework. Developers on test-after teams that I’ve met over the past 20 years spend about 50% of their project time hunting for, and fixing, defects.
Also, TDD is really faster in the short-term, because it quickly becomes the technique by which developers think about the decomposition of the new behaviors they’re adding to the system. We record what we expect in a test, rather than drawing diagrams and then trying to fit behaviors into our mistaken conceptual notions (been there, done that, pre-1996). Listen to two developers talking to each other about something that needs to be built, and they will usually be speaking in test-scenarios, not implementation syntax.
And TDD is faster in the long-term because it keeps defect counts extremely low. So low, on the XP teams I used to work on, that we stopped tracking defects.
Just one of many examples: The U of M OTIS2 program I worked on in 2002 is a life-critical (i.e., “you break it, someone may die”) application. Today it is still undergoing enhancements via TDD (yes, it’s old enough to vote, now). Despite continuous changes over two decades, the last time a developer had to work overtime in the evening or weekend hours to fix a “software emergency” (Richard Sheridan’s term for a dangerous defect) was in 2004.
Whereas refactoring is the core solution, TDD and BDD are the core practices of a smoothly-running Agile software development team. These practices become the means by which any ambiguities in what we’ve been asked to build get refined into discrete, and concrete, scenarios. Every high-performing Agile software team that I’ve encountered spends most of their day doing one or both of these.
So, “test-first” includes testing, sure; and also incremental design through refactoring; and just-in-time analysis. The whole team is continuously growing the product increment, and the safety-net around it, so that further enhancements and refactorings can happen swiftly and confidently.
Sounds expensive, right? Upfront, perhaps; for a month…perhaps. But the savings in cost-of-rework, and the ability to adapt to changing market conditions, have usually paid for the early overhead of learning and doing these practices ([AD ALERT! %-] including the full cost of my training and coaching, by the way) in less than a year. Often within six months.
What else supports refactoring?
Another limitation to good, changeable design is a lack of real communication with peers. I can tell you from my decades of experience writing code all by myself that I didn’t learn much about software design. Partly because I thought I knew it all; partly because we were expected to learn these things during our copious “free time.” If we ever saw each other’s code, invariably someone else would disagree with my design, or I would disagree with theirs. And how often do you suppose we had the time to go back and incorporate the new knowledge into the code?
What solved this for us was intense, continuous collaboration. Agile developers need to talk to each other about the code, and they need to design that code together. Two practices that have arisen from this need are “Pair Programming,” and “Mob Programming” (called “Ensemble Programming” in Europe, I’m told).
Pairing is two developers working together to test, write, and design the code. Mob Programming is the whole team sitting together, usually including either the Product Advocate (Scrum’s PO) or a business-savvy BA or QA. They are all doing their professional work to develop the product in real-time, on a big screen or two.
Also sounds untenably expensive, yeah? Yet there are numerous benefits that swamp the costs: knowledge about the code spreads across the whole team, defects are caught as the code is being written, and developers continuously learn design skills from one another.
Back to design: If the whole team agrees it’s a maintainable design, then it is. When I used to write code alone, there was only one person who thought it was a great design: Me. Just increasing the number of eyes on a particular bit of fresh code turns one…into MANY. Odds are if two developers agree it’s a good design, the whole dev team will agree. Particularly if they’ve all been working together in this way, learning from each other.
Perhaps counter-intuitively, fewer design arguments happen on teams that pair or mob. It turns out that “team readability & maintainability” as a First Principle of software design pushes aside a whole lot of Dunning-Kruger-esque posturing. At least…heh…it did for me.
What is the fourth of the “Core Four”?
The final core Agile tech practice is Continuous Integration (CI).
Your sprint isn’t delivering a shippable product increment if the work sits on a version-control branch that hasn’t been incorporated into a potentially shippable whole. So all the existing features and new User Stories (Scrum’s Product Backlog Items) need to be integrated together.
We take this to the extreme by integrating each pair’s work multiple times per day. The repository “trunk” is therefore always the “source system of record” as to what has been built. And when a pair or mob starts to work on a new task, they first obtain all the useful changes that others have made.
If we didn’t do this, then refactoring comes to a quick standstill. If I refactor something on a branch, and you refactor that same code but in very different ways on another branch, we’re going to have a hard time integrating. The smaller the incremental changes, though, the easier they are to integrate.
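The integration loop itself is small enough to sketch. Here’s a minimal, hypothetical Python script (the branch name `trunk`, the pytest command, and the `integrate` helper are my assumptions for illustration) showing the “pull everyone’s changes, run the safety-net, push your small increment” cycle:

```python
import subprocess

# One trunk-based integration cycle, done several times a day.
INTEGRATION_STEPS = [
    ["git", "pull", "--rebase", "origin", "trunk"],  # fold in everyone else's changes
    ["python", "-m", "pytest", "-q"],                # run the fast safety-net
    ["git", "push", "origin", "trunk"],              # share our small increment
]

def integrate(runner=subprocess.run):
    """Run each step in order; stop at the first failure so that a
    red safety-net never gets pushed to the trunk."""
    for step in INTEGRATION_STEPS:
        result = runner(step)
        if result.returncode != 0:
            return False
    return True
```

The `runner` parameter is just a seam so the cycle can be exercised without a real repository; the point is the ordering: never push until the safety-net is green.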
Mob Programmers working with their own isolated code-base still use a repo to avoid losing any changes and to track versions and change-sets. (UPDATE 2020: And to deliver those changes to the next “driver” every few minutes.) They may never encounter a merge conflict. Multiple pairs on a team will have the occasional merge conflict, but if they integrate numerous times per day, conflicts are small, rare, and easily resolved. Continuous Integration means we integrate continuously! No surprise, right?
Can you summarize the “Core Four” again?
The Core Four are:

1. Refactoring: continuously reshaping the code’s structure without changing its behavior.
2. Test-first practices (TDD and BDD): building and maintaining the fast, comprehensive safety-net.
3. Pair or Mob Programming: testing, writing, and designing the code together.
4. Continuous Integration: folding every small change into the trunk, multiple times per day.

Refactoring is at the heart of the Core Four.

There’s also an interesting way to look at each of these as practices that facilitate strong team communication: test-first practices communicate intent and requirements as concrete scenarios; refactoring communicates the design to the code’s future readers; pairing and mobbing communicate knowledge across the team in real-time; and continuous integration communicates each small change to everyone else, promptly.
Back to Technical Debt: Is it okay to take on some tech debt, as long as we do these core four practices?
Technical debt is typically not as valuable as people assume. The notion that we either have to rush to market, or we have to design for the future—but not both—is a false choice, and it misinterprets the immediate and ongoing value of these four practices.
For example, once the teams have some experience with these techniques, it takes no additional time to use a paired test-first technique to think about what the code is solving, and to refactor as you go to keep the design tidy. If the team is working on the product’s next most-valuable-feature, you’ll likely be receiving the best return on investment that your product can expect.
Want to learn more? You can now watch the recording of Rob's live presentation.
Rob Myers has 35 years of professional experience in software development roles, and has been mentoring teams in agile techniques since 1998. His courses blend fun, practical, hands-on coding labs; advanced learning techniques; and relevant first-person stories from both successful and not-so-successful agile implementations. Learn more.