BizOps Virtual Summit Videos

    Intelligent Test Automation: Optimizing Speed and Quality at Nationwide

    Learn how Nationwide Building Society has made significant strides in establishing truly intelligent test automation approaches.

    The promise of DevOps is that teams deliver new releases faster, and with more confidence. However, obstacles to adoption and transformation can make those goals hard to achieve. From disjointed toolchains to a lack of predictability and visibility, persistent challenges can make DevOps success elusive. This blog talks about how our team at Nationwide Building Society has made significant strides in establishing truly intelligent test automation approaches, so we can realize true DevOps success. 

    Nationwide: Responding to Customer Needs Since 1874

    Nationwide Building Society is a UK-based financial services organization that has roots that go back to 1874. Through mergers and acquisitions dating back many decades, Nationwide grew to be the largest building society in the world. 

    Within the IT domain, Nationwide has established a complex IT ecosystem that evolved over time. The by-product of the merger of many different organizations over the course of decades, Nationwide’s IT ecosystem encompasses a mix of the legacy and the new. Our environment features mainframes, middleware based on a service-oriented architecture, mobile applications, and much more. 

    In order to foster the delivery of the innovative digital services the organization and its members need, Nationwide has made significant investments in development technologies and teams, pursuing top-level initiatives in areas like agile, DevOps, and continuous integration/continuous delivery (CI/CD). In the wake of these initiatives, change continues, and at a rapid pace. In 2019, our teams made 26,000 production changes, while also achieving their best results in terms of service availability. At any given time, our groups have around 130 IT change initiatives in motion. 

    At Nationwide, I am responsible for building test engineering capabilities within our delivery portfolio. In this role, our team has been looking to change the conversation around, and the perception of, testing. Here are a few of the key takeaways I’ve gained from this experience.

    Avoiding Common Mistakes

    As we set out to pursue our objectives, it was critical to avoid some common mistakes I’ve seen made at other organizations. Following are a few examples:

    • Failing to define and review objectives and architectures. People wouldn’t dream of building a house without designing and architecting it, but that’s what happens with software. Teams build code and testing mechanisms without properly defined objectives and architectures. Too often, teams fail to gain real clarity around what they ultimately want the specific IT change to do, and what the ultimate objective is.
    • Lacking a clear definition of quality. Often teams call themselves agile but work in a waterfall way. Product teams define features and requirements in isolation and throw them over the fence to build teams, who then send code over to test. There isn’t a consistent conversation around what quality means collectively for the entire team.
    • Making automation the goal. When people talk about testing, test automation is often seen as the be-all and end-all—how many job ads do you see today for test automation tool skills versus good, structured test design techniques? Many fall into the trap of trying to automate as much as possible, as fast as possible. Too often, people lose sight of the ultimate goal: testing is a mechanism to provide feedback on quality within the product. If your test pack doesn’t do that, automation won’t solve the problem. That’s a big reason why many organizations’ automation exercises failed in the past.

    In pursuing our objectives, we have sought to learn from, and avoid repeating, these mistakes. Through effective test engineering and advanced modeling, we’ve established the predictable, intelligent test automation that’s set the stage for faster delivery and better quality. Rather than simply rolling out a specific automation capability fast, we’ve established the predictability that will enable us to be fast, again and again.

    The following sections detail some of the keys to Nationwide’s success.

    Fostering Effective Collaboration

    Within Nationwide, teams have sought to adopt behavior-driven development (BDD), an agile software development process that encourages collaboration among developers, QA, and business stakeholders. Through collaboration among these teams, we can build a common understanding of the stack and how it’s changing. 
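    To make the BDD idea concrete, here is a minimal sketch of a Given/When/Then scenario expressed as a plain Python test. The Account class and the amounts are invented for illustration; they are not Nationwide’s actual domain model.

```python
# A toy current-account model used only to illustrate BDD structure.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount


def test_withdrawal_reduces_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the member withdraws 30
    account.withdraw(30)
    # Then the remaining balance is 70
    assert account.balance == 70


test_withdrawal_reduces_balance()
```

    The value of the pattern is less in the code than in the conversation: the Given/When/Then comments are a shared language that business stakeholders, developers, and QA can all read and challenge.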

    Our team at Nationwide has also sought to strengthen collaboration through the adoption of a squad model for application delivery. These squads, made up of small multi-disciplinary teams, are aligned with specific offering areas, such as mortgages and current accounts. They’ve instituted a highly collaborative approach across teams. For example, one team will be making UI-based changes. While they will concentrate on UI-based testing, they also will need to talk to the teams that are delivering middleware. We use the model itself as an aid to this ongoing collaboration, treating it as a living specification of the systems we are changing.

    Establishing Advanced Modeling

    Historically, when teams create a requirements document, the details can be ambiguous, with different people coming away with different interpretations. On the other hand, when teams build models, it takes away a lot of ambiguity. Ultimately, these models can be instrumental in turning the topic of quality from one that’s subjective to one that’s objective and measurable.

    Modeling enables teams to better understand the impact of change. It is key to break changes down to the lowest common denominator and see how they surface in builds. This visibility is really effective in calculating project timelines, sprint timelines, and cycle times, and in making accurate build estimates.
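    One common way to make a model “executable” is to treat it as a directed graph and generate a test case for every path through it. The sketch below assumes a simple graph model of a member journey; the states and transitions are invented for illustration.

```python
# A hypothetical model of a member journey: each key is a state,
# each value lists the states reachable from it.
MODEL = {
    "start": ["logged_in"],
    "logged_in": ["view_balance", "make_payment"],
    "view_balance": ["logged_out"],
    "make_payment": ["payment_confirmed"],
    "payment_confirmed": ["logged_out"],
    "logged_out": [],
}


def all_paths(model, state="start", path=None):
    """Enumerate every path through the model; each path is a test case."""
    path = (path or []) + [state]
    if not model[state]:  # terminal state: a complete scenario
        return [path]
    paths = []
    for nxt in model[state]:
        paths.extend(all_paths(model, nxt, path))
    return paths


for case in all_paths(MODEL):
    print(" -> ".join(case))
```

    When the model changes, the generated paths change with it, which is what makes the impact of a change visible and measurable rather than a matter of opinion.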

    Our team at Nationwide has used advanced modeling as a basis to facilitate discussions with product teams, designers, business stakeholders, developers, and testers. Through these discussions and modeling efforts, we are able to clearly define aspects like quality and gain alignment around key objectives. As outlined above, it’s critical to define objectives up front, and modeling significantly improves that effort. 

    Through modeling, teams can get a better, more collaborative understanding of the work. They now have much more clarity and predictability around what builds should ultimately look like. Through enhanced predictability and repeatability, teams are better equipped to harness automation opportunities. 

    Through this modeling and collaboration, they can enhance transparency across teams, which is vital. Amidst all the complexity they’re faced with, it’s important that teams have a clear understanding of areas where they have knowledge gaps, and that they are honest about them. For example, this means having team members be clear about the parts of an interface that aren’t well understood. Modeling helps ensure teams have predictability and consistency, so they can work and collaborate most effectively.

    Our approach to modeling has also helped to keep the collaborative teams working during these recent Covid times. Visual models help to keep virtual teams engaged over a common goal, and updating the model in real time allows the whole team to provide input and gives everyone a reference point afterwards—never has the saying “a picture paints a thousand words” been so true for our teams.

    Rearchitecting Testing and Environments

    Effective test engineering is not as simple as just getting in a (virtual) room and deciding what to test. Our team had to make a fundamental shift, not only in the way testing was delivered but in the way systems were architected. We needed to move from monolithic systems to decoupled IT estates, adopting an API-first approach.

    Through this approach, we realized a broad range of advantages, particularly in the area of test data and environment provisioning. For example, you sometimes hear stories around the efforts associated with making copies of live data or synthesizing millions of records. But why would you ever create that much synthetic data when it’s a waste of money, time, and resources?

    As a result of our effective modeling and architectural decoupling, efforts like provisioning test data and environments have become much easier, because we optimize around what we really need.

    As we built out the modeling, we started to identify where we could decouple environments. From there, our team began to minimize the need for real environments through environment virtualization. For example, in less than a year, we had made 1.2 billion interface calls to stubs, rather than “real” interfaces.

    If we understand the logic that triggers the test scenario, we can better understand exactly how testing needs to be applied, and what resources are required. In some cases, you can remove the need for test data completely, which is very powerful.
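    A stub of the kind described above can be very small. The sketch below stands up an in-process HTTP stub that returns a canned response in place of a real middleware interface; the endpoint path and payload are invented for illustration.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class StubHandler(BaseHTTPRequestHandler):
    """Returns a canned response instead of calling real middleware."""

    def do_GET(self):
        body = json.dumps({"accountId": "123", "balance": 70}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet


# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/accounts/123"
response = json.load(urlopen(url))
print(response["balance"])  # the stubbed value, not real data
server.shutdown()
```

    Because the test controls the stub, there is no need to copy or synthesize large volumes of live data: the stub returns exactly the records the scenario requires.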

    Testing Models: From Ice Cream Cones to Volcanoes

    This move to test modeling takes us away from the dreaded ice cream cone, toward the more desirable volcano model. 

    • Ice cream cone. Historically, teams have done a lot of manual, UI-based tests. Ultimately, these UI tests are seen as the way to prove what the user experience will be like. However, these UI tests require significant resources in the form of end-to-end testing environments. Further, results are quite unpredictable. As a result, these approaches are not only resource intensive, but they don’t create a good breeding ground for automation.
    • Volcano. This is an approach in which monolithic, end-to-end tests are broken into smaller, more manageable chunks through the use of APIs. To make this move, it is essential to do model adaptation, which is what test engineering is all about. This enables collaborative technical discussions. Rather than looking only at business workflows, teams are focused on how APIs interact with other systems’ APIs. This represents the new way of test engineering. This isn’t to say that UI-based testing goes away completely; it’s just a smaller part of the overall testing strategy.
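    The API-level checks the volcano favours tend to be small contract assertions rather than full end-to-end journeys. Here is a minimal sketch of one, assuming a hypothetical JSON payments API; the fields and values are invented for illustration.

```python
def check_payment_contract(payload: dict) -> None:
    """Assert a response honours the agreed contract between squads."""
    assert payload["status"] in {"ACCEPTED", "REJECTED"}
    assert isinstance(payload["paymentId"], str)
    assert payload["amount"] >= 0


# A hypothetical response from the payments API under test.
check_payment_contract({"status": "ACCEPTED", "paymentId": "p-001", "amount": 25})
```

    Checks like this run in seconds, need no end-to-end environment, and pinpoint exactly which interface broke, which is what makes them such good candidates for automation.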

    Maximizing Continuous Learning

    Within Nationwide, agile teams are delivering on a rapid, repeated basis. We sought to adopt an iterative, continuous testing model in alignment with their agile workflows. To do so, it was imperative to have automation as a fundamental building block for executing tests, which feed into CI/CD pipelines. Without continuous testing, continuous delivery isn’t possible. Further, continuous testing isn’t possible without continuous modeling. 

    Because of all the automation implemented, the team is able to collect a lot of metrics from across the CI/CD pipeline, which help them understand where there are inefficiencies. It’s key to harness this information to continuously learn and continuously improve.
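    As one illustration of the kind of metric this enables, the sketch below derives cycle time from timestamped pipeline events. The event format and the changes shown are invented; real CI/CD tooling would export its own schema.

```python
from datetime import datetime

# Hypothetical stage events exported by the pipeline.
events = [
    {"change": "CH-1", "stage": "commit", "at": "2020-05-01T09:00:00"},
    {"change": "CH-1", "stage": "deploy", "at": "2020-05-01T15:30:00"},
    {"change": "CH-2", "stage": "commit", "at": "2020-05-02T10:00:00"},
    {"change": "CH-2", "stage": "deploy", "at": "2020-05-03T10:00:00"},
]


def cycle_time_hours(events, change):
    """Hours from commit to deploy for one change."""
    times = {e["stage"]: datetime.fromisoformat(e["at"])
             for e in events if e["change"] == change}
    return (times["deploy"] - times["commit"]).total_seconds() / 3600


for change in ("CH-1", "CH-2"):
    print(change, cycle_time_hours(events, change))
```

    Trending numbers like these over time is what turns the pipeline into a feedback loop: an inefficiency shows up as a cycle-time outlier long before anyone complains about it.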

    This isn’t about a big bang. You’re not ever going to stop everything and model the entire universe. Instead, we continue to iterate, increasing our understanding of what systems are doing, gaining continuous feedback, and enhancing our maturity along the way.

    Building for the Future

    Building off the success we’ve had, the team is looking to move forward in a number of ways:

    • Moving beyond functionality testing. We are planning to explore different aspects of quality. The squads have traditionally been doing a lot of testing around functionality. They are starting to tackle performance testing, and ultimately are looking to establish security testing and operational acceptance testing.
    • Harnessing AI and machine learning. We are starting to look at how AI and machine learning can help minimize the technical debt in the organization. In our automated environment, we’re gathering massive amounts of data from production. We are looking to determine how we can use machine learning to help better understand quality, and use that data to not only improve testing but better design new changes.
    • Making quality a focus across the organization. Our mantra is that “quality is everyone’s responsibility.” Toward that end, we’re continuing to focus on sharing improvements and promoting best practices. We continue to iterate around capturing more intelligence and learning. We can understand where we’re doing well, celebrate those achievements, and help to propagate successful approaches across the organization. We want to ensure teams continue to gain a better understanding of quality, and how to improve it. The goal is to make sure quality is on everybody’s minds. 

    For More Information

    At the BizOps Virtual Summit event, I gave an in-depth presentation on how to ensure continuous quality for DevOps success. To learn more about the successes the team at Nationwide has been experiencing, be sure to visit the BizOps Virtual Summit resource center page. Here, you can access my complete presentation as well as those of a range of other industry experts and practitioners.