If you want to learn to play the piano, it's going to be a tough endeavor if it takes 30 minutes before your piano produces a sound after you press a key.
When you demonstrate your software just before the deadline, you know for sure that the project won't finish early. If you demonstrate it every week and implement based on the product owner's priorities, there is a good chance the product owner will approve the application even before all requested features are implemented.
It's important that you get feedback early and often, and iterative development can facilitate this. What you actually get feedback on is defined in the Definition of Done. The Definition of Done defines all steps necessary to deliver a finished increment with the best quality possible at the end of a sprint. The more you do in your sprint, the more you get feedback on, and the more you can improve and learn.
This introduces the first reason for using a Definition of Done:
When the Definition of Done is complete, it will define all steps to deliver a finished increment, and therefore it creates feedback regarding the product and also regarding the process within the sprint.
With steps such as the sprint demo, performance testing, and acceptance testing, you generate feedback on the product. When the product owner tries out the application during the demo, he gives his feedback. The acceptance tests generate continuous feedback on the acceptance criteria, especially when all criteria are implemented with SpecFlow or another specification-by-example framework.
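SpecFlow itself is a .NET framework built on Gherkin, but the idea of an executable acceptance criterion is language-neutral. A minimal sketch in Python, with a purely hypothetical cart-and-discount rule used only for illustration:

```python
# Hypothetical acceptance criterion:
#   Given a cart with one item priced 10,
#   when a 20% discount code is applied,
#   then the total is 8.

class Cart:
    def __init__(self):
        self.items = []
        self.discount = 0.0

    def add_item(self, price):
        self.items.append(price)

    def apply_discount(self, fraction):
        self.discount = fraction

    @property
    def total(self):
        return sum(self.items) * (1 - self.discount)

def test_discount_code_reduces_total():
    cart = Cart()              # Given a cart with one item priced 10
    cart.add_item(10)
    cart.apply_discount(0.20)  # When a 20% discount code is applied
    assert cart.total == 8     # Then the total is 8
```

Because the criterion lives in the test suite, every sprint that runs the suite re-verifies it, which is exactly the continuous feedback the text describes.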
With steps such as peer review and deployment, you get more feedback on the process: Are the deployment processes correct? Are we coding like we want to? And so on.
The more steps defined in the Definition of Done, the more feedback you will get.
Typically when a sprint finishes, various items are left undone: some bugs are still in the code, integration testing is not yet done, performance testing in a production-like environment is not done, the manual is not up to date, and so on. All this work has to be done at some point. The problem with this undone work is that it piles up every sprint; every feature added makes it grow.
What happens in an agile release planning session is that the number of sprints needed for a release is planned based on the total user story points and the team's velocity. For example, when the team has a velocity of 6 and 23 user story points need to be implemented, the release can take place after 4 sprints.
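The arithmetic behind this can be sketched in a few lines (the function name is mine, not a standard tool):

```python
import math

def sprints_needed(total_points: int, velocity: int) -> int:
    """Round up: a partially filled final sprint still has to happen."""
    return math.ceil(total_points / velocity)

# The example from the text: 23 story points at a velocity of 6.
print(sprints_needed(23, 6))  # 4
```

Note the rounding up: 23 / 6 is 3.83, but the release cannot happen mid-sprint, so the plan says 4 sprints.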
The problem is that after 4 sprints, this undone work is still there. Many teams solve this by introducing so-called "hardening sprints" or "release sprints." These sprints are used, for example, to create the deployment packages, solve some last bugs, do some final testing, and so on — everything to make the software ready for production.
The problem with these release sprints is that they are bad agile practice. You are trying to timebox work that is unknown (last-minute tests can reveal all kinds of bugs), unplanned, and not estimated, yet still absolutely necessary; and it all has to be done in a fixed amount of time, before the release date.
In addition, your release date no longer matches your release planning. Instead of being based on the sprints defined in the release planning, it is now based on the "planned" sprints plus one or more extra release sprints.
When the team defines a complete Definition of Done and applies it, all the undone work is done within the sprints and no release sprints are necessary.
A burndown chart shows the amount of work still to be done over time, represented by the green line.
Most teams keep this burndown chart clearly visible, but it gives a false indication of when the software is production-ready. When the Definition of Done is not well applied, undone work piles up every sprint, represented by the orange line, and this line is usually not visible in regular burndown charts.
The ideal burndown line plus the undone-work line represents the real burndown chart, but this is usually not shown, and the product owner is caught by surprise when he realizes, after four sprints, that there is still work to be done even though the burndown chart suggested otherwise.
When no release sprints are used, the delta between the black line and the green line represents the delayed risk. If the team doesn't pick up this work during a sprint, it will reveal itself in production. For example, when no performance testing is done during the sprints, a performance issue can later surface in production.
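The three lines can be sketched numerically. The figures below are illustrative assumptions, not from the article: 23 planned points, a velocity of 6, and 2 points of "undone" work (skipped tests, stale docs) silently left behind every sprint:

```python
def real_remaining(total_points, velocity, undone_per_sprint, sprint):
    """Work truly left after `sprint` sprints: the visible burndown
    plus the invisible pile of undone work."""
    visible = max(total_points - velocity * sprint, 0)  # green line
    hidden = undone_per_sprint * sprint                 # orange line
    return visible + hidden                             # black line

for sprint in range(5):
    print(sprint, real_remaining(23, 6, 2, sprint))
```

After sprint 4 the visible burndown hits zero, yet 8 points of undone work remain before the software is truly production-ready, which is exactly the surprise described above.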
A typical conversation that most developers will recognize goes like this:
Product owner (PO): Is the software done?
Developer (Dev): Yes, almost.
PO: Can we go to production?
Dev: No, not yet.
PO: Why not?
Dev: Well, some bugs have to be solved, some integration tests still have to be run, release packages have to be updated, etc.
PO: When can we go to production?
Dev: I don't know...
To avoid these kinds of discussions, there should be a common understanding of what is meant by "done" software. A Definition of Done will create more transparency about what the team is doing in every sprint, and what is delivered. When, for example, the Definition of Done doesn't say anything about performance testing in a production-like environment because the organization is not fit to accomplish this in every sprint, then the product owner is aware of this.
When the Definition of Done is complete, with all the steps necessary to deliver an increment with the best quality, you are minimizing the delay of risk. All steps in the Definition of Done are subjected to feedback and therefore risky items are inspected, adapted, and improved as much as possible in an early stage, and as many times as there are sprints. In other words, risks are covered several times in early stages of the project.
The smaller the Definition of Done, the more undone work is likely to pile up after every sprint. This undone work is not subjected to feedback but will reveal itself somewhere, sometime, in production.
A complete Definition of Done will minimize this undone work and therefore minimize the delay of risk.
A team is able to complete a (new) feature in one sprint and release it immediately to production with all steps defined in the Definition of Done necessary to guarantee the best quality.
The agility of the team shows in the fact that it can release a feature to production in every sprint. The quality of the team is represented by the number of steps in the Definition of Done that are applied when releasing this feature to production.
Start off by defining two versions of the Definition of Done: one current, and one ideal. The possible reasons to need two versions are competence and maturity.
Competence is a real reason, because not every team is capable of doing everything in one sprint in order to deliver a production-ready product. This is especially true at the beginning of a project. To deliver a finished increment in one sprint, you need to automate many steps in the Definition of Done. For example, automate build processes, automate tests, automate deployment, maybe automate some documentation, etc. This can be complex and time-consuming to set up.
Maturity is another reason why the Definition of Done is perhaps not yet ideal. Some teams are simply not ready to do all the steps in one sprint. They feel it's better to run the regression tests only after the final sprint, or to update the manual just before going to production, because they feel it isn't necessary, or takes too much time, to do this every sprint. Those teams don't have an agile mindset yet.
The ideal Definition of Done defines all steps necessary to deliver a finished increment from development to deployment in production. No further work is needed.
The current Definition of Done defines the steps the team is currently capable of doing in one sprint.
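The gap between the two versions can be made explicit with two sets of steps (the step names here are hypothetical; a real team would list its own):

```python
# The ideal Definition of Done: everything needed for production.
ideal_dod = {
    "code reviewed", "unit tests green", "acceptance tests green",
    "performance tested", "deployment package built", "manual updated",
}

# What the team can actually complete within one sprint today.
current_dod = {"code reviewed", "unit tests green", "acceptance tests green"}

# The difference is exactly the undone work that piles up each sprint,
# i.e. the delayed risk made visible.
delayed_risk = ideal_dod - current_dod
print(sorted(delayed_risk))
```

Shrinking this difference, one step at a time, is what "expanding the Definition of Done" means in practice.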
It's best to make both visible on the wall, so that what the team delivers each sprint is transparent to the product owner and there is a common understanding of what "done" means. It's important to understand that the product owner shares responsibility when the team is not using the ideal Definition of Done. He can decide that performance testing is not needed every sprint, because it has never been an issue on the much faster production servers, and because the team hasn't yet automated performance testing, so running it every sprint takes too much time. With this decision, the product owner consciously delays the (potential) risk of a performance issue in production.
If the product owner wants to have more steps in the current Definition of Done – for example, automated acceptance tests – he should make it a priority that a framework is created that facilitates the automation of these tests. This can be done by giving the work item containing this framework a higher priority in the product backlog.
So, putting two versions on the wall will create transparency for the product owner. It represents the current capability of the team and shows what could be improved. The team can try to regularly expand the current Definition of Done with steps from the ideal Definition of Done. Expanding the Definition of Done will actually mean growing in quality and maturity.
A good Definition of Done will help with:
Getting feedback and improving your product and process
Better release planning
Making burndown charts meaningful
Minimizing the delay of risk
Improving team quality and agility
Creating transparency for stakeholders