The Whole-Team Approach: Optimizing Delivery Pipelines via Feedback Loops

    Previously in this series, we talked about how we can take a whole-team approach to agile testing. We also looked at the role of testing in our CI/CD workflows. We started to think about what testing we want, and when, in our delivery pipeline. Setting that high-level vision is important, but as we think about our testing, how do we build it out? It's time to define the specifics of our feedback loops.

    A Conversation for the Whole Team

    When I was first tasked with a challenge like this, I knew right away I didn’t want to approach this the way we used to do it (tester island anyone?). I wanted our approach to be agreed upon by the team. So, I pulled in everyone I could to talk through the following:

    • Our strawman delivery pipeline to agree upon where we wanted to go.
    • Each test suite so that we all understood why we needed to spend precious resources on gaining that feedback.

    As we discussed each suite, we had a series of questions that we needed to answer. As I was developing our approach to facilitate these conversations, I found that a lot of what I was planning mirrored work from Katrina Clokie, whose work I had recently come across. That work inspired me to create a lean canvas of my own to help drive the conversation. (In some cases, our questions overlapped. In other cases, I added some questions of my own.)

    Image source: https://github.com/ahunsberger/TransformingCulture/blob/master/TestSuiteCanvas.pdf

    Why?

    When we were kids, we'd always be asking "Why?" At some point, though, we stopped asking. We need to restore some of that curiosity and get back to understanding why we are doing things, especially things that cost us money. This is one of the most overlooked questions I've seen. To help you think this through, here are a few questions to really dig into why you might want a suite:

    • What is the business question I am trying to answer?
    • What risk does this suite mitigate?
    • What problems do we expect this suite to uncover?
    • Why do I need this test suite?

    Perhaps you want to understand whether your components integrate well enough to warrant further (often more expensive) testing, or whether the code deploys without error so you can push to production. By understanding our why, we are able to also start thinking about what we DON'T want to include in that suite. Often, this has helped us understand where we may need to design a new suite, or remove a suite altogether if we couldn't identify a business need or risk.

    Dependencies

    It is important to ask, as Katrina Clokie phrased it, "What systems or tools need to be functioning for the suite to be run successfully?" If we run this suite, are there preconditions that need to be met? Once you've identified those dependencies, it's time to ask: HOW do you know they are functioning properly? (We don't want to run a test suite if something we need is known to be down, do we?) Can we automate that part of the delivery pipeline? A sketch of one way to do so follows.
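
    One hedged way to automate that precondition check is a small pre-flight script that the pipeline runs before spending time on the suite itself. This is a minimal sketch; the service names and /health endpoints are assumptions, so substitute whatever your suite actually depends on.

        # Hypothetical pre-flight check: verify suite dependencies are healthy
        # before spending pipeline time on the suite itself. The service names
        # and /health endpoints below are assumptions -- substitute your own.
        import sys
        import urllib.error
        import urllib.request

        DEPENDENCIES = {
            "auth-service": "https://auth.example.internal/health",    # placeholder
            "test-database": "https://testdb.example.internal/health", # placeholder
        }

        def is_healthy(url: str, timeout: float = 5.0) -> bool:
            """Return True if the dependency answers its health check with HTTP 200."""
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.status == 200
            except (urllib.error.URLError, TimeoutError):
                return False

        def main() -> int:
            unhealthy = [name for name, url in DEPENDENCIES.items() if not is_healthy(url)]
            if unhealthy:
                print(f"Skipping suite; dependencies down: {', '.join(unhealthy)}")
                return 1  # fail fast so the pipeline doesn't waste a run
            print("All dependencies healthy; safe to run the suite.")
            return 0

        if __name__ == "__main__":
            sys.exit(main())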

    Constraints

    I love that Katrina Clokie specifically called this out. I had kept our past limitations in the back of my mind as we tried to build out our new testing vision. Assess these questions:

    • What’s prevented us from doing this before?
    • How can we remove that constraint (or work around it)?

    Sometimes it was a cost constraint; other times it had to do with people limitations. But I think it's important to note that your constraints could be very much human- and emotion-based, not just tooling-, system-, process-, or cost-based. For us, a big hurdle was TRUST. We had made so many attempts at automating a particular suite in the past, attempts that had shattered trust between our developers and testers when things went south. We had to really look at the root cause and how we could address it before attempting to work on this suite.

    Pipeline and Execution

    While this may seem an obvious area to discuss, it got VERY interesting as we started diving into the flow and what would happen in the delivery pipeline. Some questions for your team to talk about (a small gating sketch follows the note below):

    • Should the suite be part of the pipeline?
    • If so, when does it run?
    • What triggers the suite to run?
    • How long does it take? What is our time cap for the suite?
    • What is our stability threshold/tolerance?
    • Is it a gate? Will it prevent code from progressing through the delivery pipeline?

    NOTE: This topic in particular might bring out ALL the opinions as your team discusses it. Be prepared to debate respectfully and come to an agreement as a team ;-)
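
    To make the time cap and gating questions concrete, here is a minimal sketch of a pipeline step that runs a suite under a hard time limit and fails the build when the suite fails or overruns. The pytest command and the 15-minute cap are illustrative assumptions, not recommendations; your team's answers to the questions above should set both.

        # A minimal gate sketch: run a suite command under a hard time cap and
        # translate the result into a pipeline pass/fail.
        import subprocess
        import sys

        SUITE_COMMAND = ["pytest", "tests/integration"]  # hypothetical suite command
        TIME_CAP_SECONDS = 15 * 60  # the time cap the team agreed on (assumed)

        def run_gated_suite() -> int:
            try:
                result = subprocess.run(SUITE_COMMAND, timeout=TIME_CAP_SECONDS)
            except subprocess.TimeoutExpired:
                print(f"Suite exceeded its {TIME_CAP_SECONDS}s cap; failing the gate.")
                return 1
            # A non-zero exit code stops the pipeline here (this suite is a gate).
            return result.returncode

        if __name__ == "__main__":
            sys.exit(run_gated_suite())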

    Data

    Ahhhh, data. There are so many talks on how to manage data for testing. You should watch those, and then discuss as a team as you think through your test suite. When we did this, we were able to rethink our original (failing) strategy of going through the UI, and switch over to using APIs for one particular suite. Consider these questions (a data-setup sketch appears at the end of this section):

    • How will we create data we need to run the tests? (Test doubles? Query? Inject?)
    • How will we manage that data when the testing is done?
    • Do we need to think through a naming strategy?
    • How do we ensure uniqueness? (Do we need the data to be unique?)
    • Do I care about speed?
    • Do I want to simulate production as closely as possible? (Or do I want data from production?)

    It’s important to know you may have different data needs for each suite you run—and it’s ok if those needs change.
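
    Here is a sketch of what that API-based approach can look like: a pytest fixture that creates uniquely named data before a test and cleans it up afterward. The /api/customers endpoint, the payload shape, and the staging URL are all hypothetical, and the requests library is assumed to be available; the uuid suffix is one answer to the uniqueness question above.

        # Sketch of API-based test data setup/teardown (rather than driving the
        # UI). Endpoint, payload shape, and base URL are assumptions.
        import uuid

        import pytest
        import requests  # assumed available in the test environment

        BASE_URL = "https://staging.example.internal"  # hypothetical environment

        @pytest.fixture
        def customer():
            # Unique name avoids collisions when suites run in parallel.
            payload = {"name": f"canvas-test-{uuid.uuid4().hex[:8]}"}
            resp = requests.post(f"{BASE_URL}/api/customers", json=payload, timeout=10)
            resp.raise_for_status()
            record = resp.json()  # assumed to include an 'id' field
            yield record  # hand the created record to the test
            # Manage the data when testing is done: clean up what we created.
            requests.delete(f"{BASE_URL}/api/customers/{record['id']}", timeout=10)

        def test_customer_is_retrievable(customer):
            resp = requests.get(f"{BASE_URL}/api/customers/{customer['id']}", timeout=10)
            assert resp.status_code == 200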

    Contributions and Failure Response

    This was the conversation I was most looking forward to having with our team. We examined these questions:

    • Who creates tests?
    • Who contributes?
    • Who is not involved now but should be?
    • If a test fails, who addresses it?
    • How are we notified of failures? (One notification sketch follows this list.)
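
    As one hedged answer to the notification question, a pipeline step can post suite failures to a team chat webhook. The webhook URL below is a placeholder; most chat tools accept a simple JSON payload along these lines.

        # Post a suite failure to a team chat webhook. The URL is a placeholder;
        # adapt the payload to whatever your chat tool expects.
        import json
        import urllib.request

        WEBHOOK_URL = "https://chat.example.internal/hooks/team-channel"  # placeholder

        def notify_failure(suite_name: str, details: str) -> None:
            body = json.dumps(
                {"text": f"Test suite '{suite_name}' failed: {details}"}
            ).encode()
            req = urllib.request.Request(
                WEBHOOK_URL, data=body, headers={"Content-Type": "application/json"}
            )
            urllib.request.urlopen(req, timeout=10)

        # Example: called from the pipeline step that detected the failure.
        # notify_failure("integration", "3 of 120 tests failed on the latest build")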

    Through this discussion, we found there had been a widespread expectation that responsibility for the test suite (who creates tests, who contributes, who addresses failures) rested largely with the automation engineer. In order to adopt a whole-team approach, we needed to agree upon what was realistic and sustainable. In our case, we realized we wanted developers to be the primary contributors and responders.

    Especially as your contribution scope changes, it's important to ensure communication is happening up, down, and all around. Does an expected contributor have a different manager than you? Are they talking to their boss about potentially new expectations? Are there skills that need to be acquired by the people who should be contributing? What training do we need? I could go on all day...

    Maintainability

    As we design our test suite (or think critically about an existing suite), we also need to think about how it is maintained. How often have you joined a team without any idea of how to work on something, or even how to contribute? Think about some of these questions (which, to be honest, you may not be able to answer yet if you haven't started building the framework):

    • What is the code review process as people submit tests? (because tests are code)
    • What documentation exists?
    • Is it clear how to contribute?
    • Is this internally sourced? Open sourced?
    • How do we maintain the framework itself?

    This area (and the contributions and failure response discussion) also led us to think about our framework itself, and to work more closely with developers to make something that WAS easy to maintain and that made contributing as easy as possible. Think about these answers as inputs to your user stories/needs. After all, our team is our customer!

    Effectiveness

    Last, but not least (yet certainly often forgotten), is discussing how we know this suite is effective (a small scoring sketch follows these questions):

    • What is it actually finding?
    • Is it preventing failures from progressing through the delivery pipeline?
    • Are we actually mitigating the risks we identified?
    • What indicators do I think will tell me this?
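
    If you want rough numbers behind these questions, one hedged approach is to compare what the suite catches against what escapes it. The figures below are fabricated purely for illustration; pull the real ones from your CI history and defect tracker.

        # Rough effectiveness indicators for a suite. All numbers here are
        # fabricated for illustration -- source yours from CI runs and defects.
        suite_catches = 14    # pipeline failures this suite caused (true positives)
        escaped_defects = 3   # production defects this suite should have caught
        flaky_failures = 9    # failures with no real defect behind them

        catch_rate = suite_catches / (suite_catches + escaped_defects)
        signal_rate = suite_catches / (suite_catches + flaky_failures)

        print(f"Catch rate:  {catch_rate:.0%}")   # are we mitigating the risks we named?
        print(f"Signal rate: {signal_rate:.0%}")  # or is the suite mostly noise?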

    A critical note: It is important to revisit your test suite canvas. Look at your effectiveness: Is the test suite doing what you thought it would? Is there room for optimization in any of the areas you discussed as a team?

    Getting Started

    Ready to start thinking through your test suites? (Note: I recommend doing this on existing suites, too.) The test suite canvas linked above is a good place to start.

    Lessons From the Field...

    As you can see, there are some questions in here whose answers could affect a lot of people; that is why it's so important to have the whole team discuss them. We made a lot of assumptions in the past about who would be contributing, who was expected to respond to a failing test, or even how we maintained a test suite. This full-team discussion was one of the first times I've seen others (besides testers) realize just what goes into creating and maintaining a test suite. It's also one of the first times I've seen a team member say, "Hmm… maybe we DON'T need that." (I love removing unnecessary tests!)

    Another interesting lesson is how we communicate and talk about the suite. A test suite may span many, many teams—it’s important to share what’s working and what’s not.

    It's also been interesting to workshop this in simulation. Every single group is different and answers the questions in different ways, even for suites that appear to be the same at first glance. That's amazing! We have learned so much from each other and discussed ideas and gaps, with these conversations often eliciting an "oooh, I hadn't thought of that!"

    Wrapping up the Series

    Thanks for sticking with me through this series. I hope you’ve found the tools and techniques discussed to be useful as you look to facilitate conversations with the whole team and boost your testing in continuous delivery pipelines. Happy testing!