Data-Driven Management: Near-Term Steps to Big-Time Benefits

    A lot is being written about employing data to improve business management and decision making. Recently, I’ve written a couple of blog posts on the topic myself, including one on making data-driven decisions and one on establishing data-driven product management.

    This post will build on some of this initial information, but take a more practical approach. While it’s important to recognize the value of data-driven management, it’s also critical to gain an understanding of the practical steps that need to be taken to make it happen.

    In this post, I’ll give you some tangible steps that can actually improve your team’s performance and enable you to more fully understand what is really going on inside your teams. Start here, and in future blog posts I’ll continue to offer more information on this vital topic.

    Know How Much Your Teams Can Get Done

    This sounds easy. However, measuring the velocity of a team, without ensuring team members really understand your vision and what you want to achieve, will leave the door open to failure. Without that collective understanding, teams will focus on the wrong things, like increasing point estimates, worrying about what other teams are doing, and other productivity killers. This slows your organization down, which is decidedly what you do not want.

    So, to start, have a conversation with staff across your entire organization. Tell them that your main goal is to understand the reality that exists in the company today, and that what you do not want is inflated point estimates or padded schedules. The entire goal of this first exercise is simply to gain an understanding of what is happening today. Promise that the data being gathered will not be used punitively. Convey that all you need to know is the reality of the situation so that you can make better business decisions.

    Look at an iteration-over-iteration velocity chart for your entire organization. Tell any leaders that report to you that you expect them to look at this information for their organizations as well. For a quarter, it might look something like this:

    [Image 1: Iteration-over-iteration velocity chart for the quarter]

    Measured together, the average for these teams is 80 points per two-week iteration. This is a fact for your organization. It might be great news. It might be bad news. But what really matters is that this is a realistic view of what your teams can get done.
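    As a quick sketch of the arithmetic, here is how that organizational average might be computed. The team names and per-iteration point totals below are invented for illustration:

```python
# Hypothetical point totals for three teams over six two-week
# iterations (one quarter). All numbers are illustrative only.
velocities = {
    "Team A": [30, 28, 32, 30, 29, 31],
    "Team B": [26, 30, 28, 27, 29, 28],
    "Team C": [22, 20, 24, 21, 23, 22],
}

def org_velocity_per_iteration(velocities):
    """Sum each iteration's points across all teams, then average."""
    per_iteration = [sum(points) for points in zip(*velocities.values())]
    return sum(per_iteration) / len(per_iteration)

print(org_velocity_per_iteration(velocities))  # 80.0 with this sample data
```

    With these sample numbers, the organization averages 80 points per two-week iteration.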

    Next, look at the quality of the work. Is it at the level you need and expect? Yes? Great! 80 points per iteration is what you can plan on going forward. If the quality is not at the level you require, then your teams are not taking enough time to meet quality expectations. Most likely, that’s because they are feeling pressured to cut corners and do more than they are able to do well. We all know that poor quality slows teams down and creates more problems and cost overruns. When this is the case, it’s time to slow down, institute what is needed to get to the required level of quality, and then re-examine the point totals.

    Together, Ascertain What “Done” Means

    In a prior post, I wrote about the criticality of establishing a definition of done, but I’ll mention it again here because it is so fundamental to understanding the data your teams are creating. How can you ever understand how much technical debt you have if you don’t know whether everyone has completed certain steps toward being done?

    Within many organizations, there are jokes about this. People raise questions about the difference between “done” and “done-done.” Maybe one team believes that done means unit tests, functional tests, documentation, and code reviews are all completed, while another team believes they’re done when untested code is handed over to QA. With that kind of disparity, you really have no idea how much is truly being done or how close your products are to being releasable.

    In the figure below, we can see that 58% is completed. Without a definition of done, however, you have no idea whether you could release the work that has been “completed.” Are you really 58% done with the work? Or has every team decided for itself what completed means? There is no way to tell which projects are on track if you don’t define done.

    [Image 2: Completion chart showing 58% of the work completed]
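    The percentage itself is simple arithmetic; what matters is that the numerator counts only stories meeting a shared definition of done. A minimal sketch, with point totals invented to reproduce the 58% figure:

```python
# "Percent complete" is only meaningful if every counted story meets
# one shared definition of done. These point totals are made up.
planned_points = 200
accepted_points = 116  # points from stories meeting the shared definition of done

percent_complete = 100 * accepted_points / planned_points
print(f"{percent_complete:.0f}% complete")  # prints "58% complete"
```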

    Many executives are inadvertently rewarding the teams doing the bare minimum. They’re the teams generating 50 points while other teams are generating 28. That makes them rock stars, right? Wrong. They have done the bare minimum and punted the hard work and cleanup down the schedule. The team members netting 28 points are the true rock stars, since you could actually release their code.

    Implement full definitions of done for stories, features, sprints, and releases. If your customers are finding a lot of bugs, determine how early in the process you could have found each and every defect and put in tests to make sure you find them as early as possible next time. This will slow you down for one release or quarter, but will allow your teams to move much more quickly in the long run.

    A standard definition of done can help you truly judge how much your teams can complete well. If you allow testing to slip into future iterations, you aren’t requiring automation, and you’re setting yourself up for failure. The same is true if only an off-shore QA team is “responsible” for quality.
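    One way to make this concrete is to count only stories that satisfy the full definition of done toward velocity. The done steps and story data below are assumptions for the sake of the sketch:

```python
# Count points only for stories whose completed steps cover the full
# definition of done. Steps and stories are invented for illustration.
DOD_STEPS = {"unit_tests", "functional_tests", "code_review", "docs"}

stories = [
    {"points": 8, "done_steps": {"unit_tests", "functional_tests",
                                 "code_review", "docs"}},
    {"points": 5, "done_steps": {"unit_tests"}},  # punted to QA untested
    {"points": 13, "done_steps": set(DOD_STEPS)},
]

def releasable_velocity(stories):
    """Sum points for stories meeting every definition-of-done step."""
    return sum(s["points"] for s in stories if DOD_STEPS <= s["done_steps"])

print(releasable_velocity(stories))  # 21: the 5-point story doesn't count
```

    A team reporting 26 raw points here has only 21 releasable points; that gap is the bare-minimum problem described above.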

    At one point in my career, I ran a QA team. Any time a customer mentioned a defect to the GM of my division, the GM would promptly show up at my office. He would ask me how I intended to ensure a similar defect never got out the door again. While the process wasn’t fun, it was highly instructive. These exercises showed me that I could find more and more defects in house and that, by doing so, I could have a material impact on what we spent on maintenance releases, patches, customer calls, and escalations.

    Review the Data in Your Agile System

    How good is the information your teams receive? Do your product managers and product owners understand what developers need to do their jobs? Are they good at writing a press release, initiative, use case, or user story? How good are the acceptance criteria on each story?

    Let me take an aside to describe the difference between a definition of done and acceptance criteria. Definitions of done are high level and include the list of things that must be done before any story can be accepted. This includes things like:

    • Functional tests must be automated and passed.
    • Unit tests must be written, passed, and included in a test harness.
    • Code must be in a test environment and not on a developer’s machine.
    • Code reviews must be complete.
    • Zero defects can be open.

    These are akin to the release criteria we used to have back in the waterfall days.

    In contrast, acceptance criteria are written for each story and let everyone working on that story know when they are done. For example, imagine you have a user story to add a dark color mode to your app. The acceptance criteria would include making sure:

    • All fields, buttons, and words are still readable.
    • The background actually changes to the specific dark charcoal color your UX team chose.
    • The brightness bar still lightens or darkens the screen.

    In other words, these criteria detail every specific thing a developer or tester needs to do to be sure this story is complete.

    Have your business leaders show you the features and stories they’ve written for their teams and collaborate to increase the quality of the asks being made. Are your teams being given what they need to be successful? Collaborate with the business to fix any issues you see.

    Assess How Good Your Plans Are

    Another area where I see a lot of lost productivity is the amount of change, or churn, during iterations. Unplanned work is a related drain on time. In undisciplined organizations, I have seen as much as 50% of a team’s work be unplanned.

    Every team has some unplanned work. Not a day goes by when someone doesn’t ask me to do something I didn’t expect, and I am betting your workplace is the same. Some additional work can be absorbed, but after a certain point, it will negatively affect your planned work. The best thing to do is to track it.

    I used to have teams create a story for each iteration titled “unplanned.” If they did any work that was not part of the original plan, they didn’t enter a new story, but instead added a task under the unplanned story. They entered how much time it had taken them to complete that task. This served a number of purposes:

    • It allowed us to see what was being asked of our teams outside of their planned work.
    • It enabled us to determine how effectively each team could plan.
    • It highlighted where and how we could take steps to better protect our teams so they could spend more time focusing on planned work.

    At some companies, just managing unplanned work, and stopping anything unnecessary, can produce a dramatic increase in productivity. Metrics that track the percentage of unplanned work can be very useful for seeing the full picture.
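    The “unplanned” story makes this metric trivial to compute at the end of each iteration. A minimal sketch, with all the hours invented for illustration:

```python
# Hours logged as tasks under the iteration's "unplanned" story,
# versus hours spent on planned work. All numbers are invented.
planned_hours = 340
unplanned_task_hours = [4, 2.5, 6, 1, 3.5]  # one entry per unexpected ask

unplanned_hours = sum(unplanned_task_hours)
unplanned_pct = 100 * unplanned_hours / (planned_hours + unplanned_hours)
print(f"Unplanned work: {unplanned_pct:.1f}% of the iteration")
```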

    Manage Rework and Refactoring

    If your teams are like many, they have been under too much time pressure to release quality code. This may happen for a number of reasons: multiple ways to do the same thing within a product, spaghetti code, old code, and so on. If this is the case, then you will need to start planning for refactoring.

    While refactoring will eat up a portion of your development budget now, later you will be able to go much more quickly. For teams, working on messy code creates headwinds that they must deal with every time they try to make a change. The more issues you have in your code, the stronger the headwinds are and the slower your teams will be, no matter how much pressure you apply. And when those teams feel pressure to move more quickly, they often create even bigger messes in the code by applying band-aids and workarounds.

    One way to tell if your teams are working with difficult code: you have added consultants or additional team members and you don’t see any improvement in speed. If things always seem to take a lot longer than you expect, find a way to approach the team and determine the reasons. If the problem is poor-quality code, start to fix it.

    The good news is that, most of the time, all this rework doesn’t need to be done at once. Plan to refactor the areas you will be enhancing in your next release. Alternatively, on an ongoing basis, you can dedicate 15% of your team’s time to cleaning up messy code. Either way, your teams will gain speed more quickly than you may have imagined and the quality of the product will also increase.


    Running a business without data is hard. What’s even worse is trying to use data without a clear, consistent foundation. The steps outlined above can provide a critical framework for ensuring data can be used to guide real improvements. Keep an eye out for future posts that will offer additional guidance on establishing effective data-driven management.