Last year, the first-ever State of Agile Coaching Report was released as a joint effort by Scrum Alliance and the Business Agility Institute. It’s a meaningful milestone for the emerging profession, yielding several insights to build upon. But of particular interest to me were the deep discrepancies around how agile coaching success is measured:
Some coaches are measured by team outcomes. Others aren’t assessed by metrics at all, but rather by feedback alone. Heck, some of us don’t even know whether we are having an impact at all, let alone how to assess it.
That’s a problem. If we don’t know what coaching success looks like, then how can we grow into it? If we can’t agree on the data to use for growth, how can we advocate with any credibility for the use of empirical data?
I’ve been coaching agility for many years, and I’ll admit this has been a struggle for me as well. However, when this report came out, I resolved to look in the mirror and explore how we as coaches might get better at the empirical practice we advocate for. To that end, I’d like to share two approaches to avoid, and one to try.
Avoid measuring coaching success by team results.
That’s right. Measuring results is a bad gauge for effective coaching.
At first, this seems to run counter to everything. Isn’t the whole reason leaders seek agility to achieve outcomes like faster delivery, more productivity, or better quality? Year after year, industry after industry, agile methods are shown to yield measurable improvement in bottom-line results. However, there are two problems when we conflate the success of agility with the success of an agile coach.
The first problem is that of causality. Let’s say a product group doubles its quality scores over a year. What proportion of that improvement should be attributed to the team doing the work, to the leadership deciding the work, or to a skilled mentor advising the work? If we’re honest, it’s not perfectly clear.
Yes, it is most likely that agile techniques like continuous integration, definition of done, or mobbing directly resulted in the team’s quality boost. Yes, it is also probably true that a coach dramatically helped with the implementation of those techniques. However, it is fundamentally wrong to say those results were possible expressly because of that one person. Consider the budget approvals required for the automation infrastructure, the management trust required to allow a dozen engineers to mob on a single feature, or the willingness of each team member to adhere to a “done” checklist. In each of those cases, choices are made by different people, all interdependent on one another to achieve results.
I have personally witnessed several gifted colleagues demonstrate excellent coaching, only to watch their clients ignore reality, make poor choices, and cling to their status quo. Conversely, some of my clients have achieved new heights, in spite of terrible coaching mistakes I made while working with them.
Organizations are complex systems. There are simply too many variables and too many players to say that faster delivery, better quality, or more productivity is a reflection of the helper they hired.
If a technology leader has a mandate to improve key objectives, who ultimately answers for whether those results are achieved? The teams they lead? The vendors they hire? No. It’s the leaders themselves.
A smart person will hire expert advisors and mentors to accelerate and enable change. However, an even wiser person knows that it is strong leadership that makes the difference between stagnation and momentum.
And that relates to a problematic key finding of the report:
“An agile coach's success is often measured based on the performance of those they coached rather than by specific coaching metrics.”
That’s a problem because when leaders and teams hesitate to make tough choices and painful changes, the temptation is to blame the coach. Conversely, if everything goes well, the coach may ride off into the sunset believing a bit too much of their own press.
Put another way, agile coaches are not the source of agility; they are merely an amplifier of the leaders and employees who build their own agility. Agile coaches are not the heroes of the story; the leaders and teams they coach are the heroes.
____________________________
Let’s say a CTO hires an expert to guide their journey to enterprise agility. After a full year of effort, none of the metrics show improvement in quality, productivity, or predictability. When the CEO convenes the annual board meeting, what will be the narrative around the company’s inability to evolve? It may look something like this:
“Honestly, we’ve spent roughly $3M across the teams, with not much to show for it.” One director responds by asking, “Well, what does the CTO say is the culprit?”
“He made it clear to the coach that he would hold her accountable for measurable improvements in these objectives. Since those all involve lagging indicators, the coach demanded a full year to see any measurable lift. Well, here we are. Unfortunately for him, he hired the wrong coach.”
Another director sounds skeptical, “Wait, she burned through the whole year and the whole budget, without changing vendors? Why?”
“We all know it’s standard management practice to transfer operational risk to a 3rd party. We measure coaches by outcomes. If we don’t have outcomes, it’s the coach who incurs that liability.”
Finally, the chairwoman speaks up, saying, “Um, actually, speed, quality, and throughput are not operational details. They are the results that drive revenue and profitability. Your CTO may have hired the wrong coach, but it sounds like you hired the wrong CTO. I don’t care if you were roommates in university. You need to make a change there, right away.”
__________________________
Avoid measuring coaching success by coach activity.
Since measuring results is problematic, let’s try something more specific to coaches themselves.
Consider a hypothetical comparison of two coaches working with two different teams. Six months into a project, the Agile Center of Excellence distributed a performance report of their effectiveness:
Which coach was better? You can’t tell, can you? Just because the second coach did twice as much work doesn’t mean they were more effective. Even if we add in some improvement metrics, how much of that improvement was because of the coaching, and how much was despite the coaching?
The other problem emerges when you consider “what gets measured is what gets done.” If we measure success by the volume of coaching output, well then that is exactly what we will get.
Instead, try measuring coaching success by coaching satisfaction.
At the end of the day, coaches serve people. The ultimate arbiter of coaching value comes from the people who receive coaching help.
So how do we measure coaching satisfaction? The key here is to use both quantitative and qualitative data.
For hard numbers, there are a variety of ways to measure how strongly a coach is valued by their audience. We can simply adapt proven customer satisfaction metrics, which ask an audience or clientele to score an experience on a scale from really bad to really great. Examples include:
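One way to picture this (a sketch of my own, not drawn from the report) is an adapted Net Promoter Score-style question such as “How likely are you to recommend working with this coach?” answered on a 0-10 scale and tallied like this:

```python
# A minimal sketch (my own illustration, not from the report) of an NPS-style
# coaching satisfaction score. Responses are assumed to be 0-10 answers to a
# question like "How likely are you to recommend working with this coach?"

def coaching_nps(scores: list[int]) -> float:
    """Percent promoters (9-10) minus percent detractors (0-6)."""
    if not scores:
        raise ValueError("No survey responses to score.")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical quarterly pulse survey from one coached team
print(coaching_nps([10, 9, 8, 7, 6, 10, 9, 4]))  # 25.0
```

A single number like this says little on its own; what matters is the trend from quarter to quarter, read alongside the qualitative feedback described next.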
But metrics don’t tell the whole story. What if a coach scores well, but the team doesn’t improve? What if the metrics look rather unflattering, but the team was simply uncomfortable with a truth-teller revealing their weak spots to them? That’s why it is so critical to collect qualitative feedback on coaching performance, such as:
Often, these questions can be asked in an anonymous survey. They should also be asked by the coach themselves in both 1-on-1 and group settings. Agile coaches have a professional obligation to role model the growth mindset they are fostering in others.
That brings us back to a key finding of the report:
“Those who reported measuring success at the customer/client level most often based their level of success on [their] satisfaction.”
That means coaches should be viewing their success through the eyes of those who hire them. And those who hire them will have a full view of the coach’s value when they have a full picture of feedback.
In the end, agile coaching is a relatively young discipline, with professional standards and practices still emerging. The good news is, if we start measuring the impact we have on a human level, we can remove several distractions, and focus on the essence of making the craft truly meaningful.
The survey for the 2022 State of Agile Coaching is here. Share your insights to help us measure the impact of Agile Coaching.
Jesse Fewell works as an agile leadership coach, having taught, keynoted, or coached agility to thousands of leaders and practitioners across 13 countries on five continents. His best insights are captured in his recent book, Untapped Agility – 7 Leadership Moves to Transform your Transformation. His contributions have earned him the Scrum Alliance Certified Agile Coach® (CTCSM/CECSM) designations as well as an IEEE Computer Society Golden Core Award.