To be successful in today’s demanding continuous delivery environments, teams need to take a whole-team approach to agile testing. This post, the first in a series, examines how teams can apply this approach to uncover challenges that staff are contending with.
Introduction to the Series
Testing has come a long way since I started my career. Many of us remember the days of waterfall, “throw it over the fence” style testing. Then we saw the field evolve to support agile, and my personal mentors Lisa Crispin and Janet Gregory literally gave us a handbook. Now, we are in the world of DevOps and continuous delivery. Teams are faced with the challenge of releasing reliable, valuable software to production—and continuing to do so faster. So how does testing fit in, keep up, and contribute to the culture? Just as we saw in agile, to be successful in continuous delivery, quality needs to be ingrained in the whole team. It’s vital that we share common goals, build relationships across roles and teams, continuously learn and improve, and take advantage of many skillsets and perspectives.
But how do we identify our challenges as a team? Do we all know how our code flows and the feedback we need? How do we learn and experiment to determine if our hypothesis is proving correct, or if we need to pivot? This blog series will address just that. I hope you’ll join me in learning some techniques I have found useful along the way.
Read on for some practical, hands-on activities you can do with your teams to apply a whole-team approach to agile testing. First up? Let’s talk about identifying challenges as a team.
Have you ever been on a team that’s been told, “we need to fix testing”? Or perhaps you’ve been given an objective to achieve “better testing” because your team now needs to ship faster, and with better quality. Who wouldn’t want better testing? The problem is that both are very broad statements. Where do you even begin?
In one of my first days as an architect, I was asked to define how we want to test in a continuous delivery pipeline (more on that in part two of this series). But before I could even think about that particular request, I had to understand the challenges teams were facing around testing and quality. What compelled someone to say, “We need to fix testing”? And how do you answer that across many teams and products? My boss introduced me to the concept of affinity groups, which has become my go-to method for synthesizing a large amount of input. It falls directly in line with continuous learning, and uses retrospective-style conversations to drive relentless improvement.
The main idea with this exercise is to find commonalities across a large set of data—whether it’s from a team, a department, or an organization. As a tester, this technique really appeals to me, as I love to look for patterns. We do this by asking questions and, based on the responses, finding similarities and patterns (our affinity groups).
Time to Play 20 Questions
OK, maybe not 20… but perhaps you do have a few you’d like to tee up with your group. The key is to ask open questions and avoid closed (think yes/no) ones. If you want to try this with your own team, here are some questions you may want to consider asking—the responses may surprise you:
How do we define quality for our product?
How do you measure quality?
What is the team’s engineering process at a high level?
What are your biggest challenges with testing?
Imagine your ideal world: what does testing look like? (Follow-up: What is preventing you from achieving that?)
There is no “right” way to collect the information. I’ve done this in individual interviews across teams (via calls), surveys (using something like online forms or SurveyMonkey), and group sessions (whether in person or on a group call, using stickies). Each technique has its advantages—such as the comfort and openness of a one-on-one conversation, or the way ideas build on each other in a group as sticky notes start getting posted. Once I know who I want to talk to, as the facilitator I think about how people will feel most comfortable answering honestly, and set up the session accordingly.
My personal favorite way to do this is in a group setting with stickies (in my case, I placed questions on blue stickies and answers on yellow). If you are online, you could use something like Google Jamboard, Google Slides, Stickies.io, IdeaBoardz, Miro, or MURAL. Have everyone write a sticky for every answer they come up with; each person should answer without consulting the others. This is meant to be a fast activity and a brainstorming session. Encourage participants not to overthink, and to overcome the fear of giving the “wrong” answer. For each question, I like to limit the time window to a few minutes.
Once you have the responses, it’s time to analyze them. What similarities do you see? What patterns are emerging? What variations or outliers are there? Perhaps there is no clear shared understanding of quality, or one person’s idea of measuring it is very different from another’s.
In one particular case, I asked “What is the team’s engineering process at a high level?” Some of the answers I received were:
No continuous integration—developers check in daily
Definition of done is not followed
2-3 releases a year—process still in flux
Tests run at a fixed time
Tests take five hours to run
Automation is done after release is already in production
Unit tests are manually run and have no gates
What would you star or group here? As I review the answers, I note which ones are similar. If possible, I move related stickies into their own cluster, or at least group them around a similar theme. I also start arranging the most alarming responses to help prioritize—and just like that, we had the beginnings of a backlog.
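If it helps to see the clustering step concretely, here is a minimal sketch of that grouping idea in Python. The themes and their keywords are entirely hypothetical—in a real session the clusters emerge from conversation, not from a lookup table—but the mechanics are the same: each raw answer lands in the first theme it matches, and anything unmatched stays visible for discussion.

```python
# Illustrative sketch: sorting raw retrospective answers into affinity
# groups by keyword matching. Theme names and keywords are hypothetical.
from collections import defaultdict

# Raw sticky-note answers (from the example above)
answers = [
    "No continuous integration - developers check in daily",
    "Definition of done is not followed",
    "2-3 releases a year - process still in flux",
    "Tests run at a fixed time",
    "Tests take five hours to run",
    "Automation is done after release is already in production",
    "Unit tests are manually run and have no gates",
]

# Hypothetical theme -> keywords map a facilitator might sketch out
themes = {
    "slow or late feedback": ["fixed time", "five hours", "after release"],
    "missing automation or gates": ["manually run", "no gates", "no continuous integration"],
    "process gaps": ["definition of done", "in flux"],
}

def group_answers(answers, themes):
    """Assign each answer to the first theme whose keyword it mentions."""
    groups = defaultdict(list)
    for answer in answers:
        lower = answer.lower()
        theme = next(
            (t for t, kws in themes.items() if any(k in lower for k in kws)),
            "unsorted",  # outliers stay visible for the group to discuss
        )
        groups[theme].append(answer)
    return dict(groups)

for theme, stickies in group_answers(answers, themes).items():
    print(f"{theme}: {len(stickies)} sticky note(s)")
```

Treat this as a thinking aid, not a replacement for the conversation—the value of the exercise is in the group debating where a sticky belongs and why.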
Image: Sample board after grouping answers and finding patterns.
The Whole-Team Approach to Agile Testing: Lessons from the Field
If I’ve learned anything, it’s that I never cease to be surprised by responses. I never presume to know the answers heading into a session. When we dive into challenges, it’s very rarely about a tool. In fact, I’m struggling to think of a single case in which a tool problem was brought up. Instead, it’s invariably the human element: perhaps a miscommunication, or a lack of clarity. Perhaps a process is broken, and no tool is required to fix it. Even automation and technical issues come down more to the human side and shared understanding.
The other lesson? Build trust. Then ask. When I have developed a relationship with the people I’m talking to, I’ve found the responses are much richer. When people feel that they are not being judged and that I am here to listen… that’s where the gold is.
Now, this is just one of many ways to help foster continuous learning and relentless improvement. Sound like something you’d like to try? Here’s a template on GitHub to get you started.