It’s natural to want to measure progress. For example, kids curious about their position in the time-space continuum never get tired of asking “Are we there yet?”
And executives going through agile transformation want to know how much progress their team is making toward the goal. Unfortunately, measuring how well a team is changing can be quite difficult.
How you go about measuring progress — and the tool you use to measure it — can be daunting. Agilist Ben Linders compiled a list of publicly available agile assessment tools, and there are likely many more private tools out there. So, how do you determine whether a tool is good and will work for you?
Before selecting an agile assessment tool, be clear on your goal. If you hand me a hammer and don't give me plans for building something, then all I can do is pound in nails all day with no real results. Knowing what your organizational goals are and how using agility to get there fits into those goals is an important first step.
If we think of an assessment like a research study or even a survey, we can draw on four well-understood scientific criteria to measure against:
Retest reliability: You test a team in January and test them again in March. You know this team has not made any real changes. Does the assessment show the same results? If it does, that's good retest reliability.
Inter-rater reliability: You go in and review a team and give them an A+. Your colleague goes in and reviews the same team and gives them a D-. This would be poor inter-rater reliability. Using the same assessment for the same team, you should not get two such wildly different results based on the person administering the assessment.
Internal validity: It’s clear to you from just looking at the team that they got remarkably better. Does the assessment reflect this? That's internal validity — knowing that the program (the agile transformation, in this case) is having the effect you desired.
External validity: Is your test so specific that it will only work with this company, department, or a specific framework and would be totally inapplicable anywhere else? That would be poor external validity. If your assessment can apply to multiple places and applications, then it is likely a much better tool than something completely customized.
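To make the two reliability criteria concrete, here is a minimal sketch of how they can be quantified. The scores, the 1–5 scale, and the five-team sample are all hypothetical, invented for illustration; a test–retest correlation and a simple percent-agreement rate are just two common ways to put numbers on these ideas, not part of any particular assessment tool.

```python
def pearson(xs, ys):
    # Retest reliability: correlation between January and March scores.
    # A value near 1.0 means the assessment gave nearly the same results.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def agreement_rate(rater_a, rater_b):
    # Inter-rater reliability: fraction of teams both raters scored identically.
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

# Hypothetical scores for five teams on a 1-5 scale
january = [3, 4, 2, 5, 3]
march   = [3, 4, 3, 5, 3]   # same teams, no real change in between
rater_a = [3, 4, 2, 5, 3]
rater_b = [3, 3, 2, 5, 4]   # a colleague reviewing the same teams

print(f"retest correlation: {pearson(january, march):.2f}")
print(f"inter-rater agreement: {agreement_rate(rater_a, rater_b):.0%}")
```

An A+ from one assessor and a D- from another would show up here as a low agreement rate; a stable team scored twice would show up as a high retest correlation.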
To these four industry-standard research criteria, I add two more that I have developed over my years of creating and using agile assessment tools.
Can you answer with a yes/no?: When crafting product backlog items (i.e., user stories), a good rule of thumb is that each acceptance criterion should be answerable with a yes or a no. Look for assessments whose questions are clearly understood and tied to measurable goals. Compare “How are your retrospectives going?” to “Are you able to pick one thing and plan for its improvement in the next sprint?” and you can see how the latter is likely to lead to a better answer.
What’s in it for them?: Your goal is to measure how your organization is doing and uncover specific areas to focus on to get better. If an assessment’s output points directly to the assessor as being the solution, always question this. While assessments are important and having a consultant help you with assessments can be very valuable, be cautious of any output that requires or implies the assessor is the only one who can help you get better.
Once you have the tools to ensure you are building a good model, you need to think about what you are going to measure. This is the hardest place to offer guidance, because what you need to measure can be very different depending on the organization, type of agile program, where the teams are in their journey, and so on.
Generally, though, there are some key areas you want to make sure you are looking at:
Ability to define the product: One of the larger threats to agile success is a lack of clarity in the backlog. If a team can’t work together to define what gets built, they will never get to how to build it.
Ability to plan effectively: Now that you know what you want to build, do you know how you will go about it? Do you communicate well in the scrum events? Are your plans clear? Are your information radiators visible?
Ability to deliver a working, tested product: At its core, this point is about the ability to get to “done.” As teams evolve it becomes more about technical practices like pairing, test-driven development, and continuous integration.
Ability to continually improve: If a team is not able to learn and take actions from their past sprints, they will never truly improve.
When a sprint is over, you know if it was successful. You can look at the backlog and see what is done and what's not done. The team can look at their velocity to determine the next sprint’s capacity, and there are several reliable ways to forecast with the sprint data.
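One common way to forecast from sprint data is a rolling average of recent velocity; the sketch below shows that approach, with invented velocity numbers and a three-sprint averaging window chosen purely for illustration, not a prescribed method.

```python
def forecast_capacity(velocities, window=3):
    # Use the average of the last few sprints as next sprint's planned capacity
    recent = velocities[-window:]
    return sum(recent) / len(recent)

def sprints_remaining(backlog_points, velocities, window=3):
    # Rough forecast: remaining backlog divided by recent average velocity,
    # rounded up (a partial sprint still costs a whole sprint)
    capacity = forecast_capacity(velocities, window)
    return -(-backlog_points // capacity)  # ceiling division

# Hypothetical history: story points completed in the last five sprints
history = [21, 25, 23, 27, 26]
print(f"next sprint capacity: ~{forecast_capacity(history):.0f} points")
print(f"sprints to finish a 120-point backlog: {sprints_remaining(120, history):.0f}")
```

A team might sanity-check this forecast against the best and worst recent sprints to get a range rather than a single number.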
An agile assessment model gives you the same level of awareness for your agile transformation. You just have to make sure you're working from good data.
Joel Bancroft-Connors is a Principal Consultant at Applied Frameworks and a Scrum Alliance Certified Team Coach®. Known to many as “The Gorilla Coach,” Joel has over 20 years of experience coaching teams and managing products at blue chip software companies. He is also a certified Product Management Professional.