Improving customer experience (CX) is a key driver behind many digital transformation initiatives. For digital enterprises, CX is often the top metric for measuring the quality and effectiveness of their products and services.
DevOps has long been established as a paradigm and methodology for better collaboration between development and operations teams, enabling enterprises to deliver and maintain software faster and with higher quality and reliability.
Clearly, then, DevOps serves the needs of digital CX well by enabling enterprises to deliver innovations at high velocity, which is imperative for successful digital transformations. At the same time, bringing in CX as the prime (or an additional) "quality" metric in DevOps helps create a CX-focused culture across the entire IT organization, which is just as important.
While it is easy to see how CX stands to improve from DevOps, it is also remarkable how DevOps (and the underlying disciplines of software delivery) improves from the intertwining of the two.
For example, data from Standish Research (see Figure below) shows that only 20% of enterprise software features are actually used frequently, while 50% are rarely used. In other words, despite our best efforts at defining appropriate software requirements and delivering them at high velocity, less than half of what we produce gets consumed or adopted at a meaningful level. This suggests that we have not widely adopted a CX-focused approach to software development, and that DevOps itself may benefit greatly from taking on a CX-driven approach.
Given this symbiotic relationship between CX and DevOps, we see enterprises increasingly blend the two disciplines. In the evolution of this trend, we see the emergence of a new discipline we call CXDevOps.
It has been trendy lately to name new disciplines related to DevOps by attaching a process name in front of "Ops" (TestOps, BizOps, and AIOps, to name a few), so one might be tempted to call it CXOps (or even DevCXOps), which is fine by me (what's in a name anyway?). However, we feel the name CXDevOps is the most appropriate, since it conveys that CX is the key driver for how we do DevOps differently, and vice versa.
In this blog post, we discuss our point of view on how CX can be integrated into the DevOps lifecycle to deliver better, faster digital customer experiences, improving both disciplines in the process.
We will also discuss specific techniques and tools that may be used to support CXDevOps. We will not cover the details of either DevOps or CX in isolation, since they are addressed in detail elsewhere.
On a final note, since this is about “digital” CX, we will not discuss the more “traditional” aspects of CX (for example that involves non-digital channels such as physical interactions with customer service agents or in-store experiences). However, as enterprises go digital, we see more and more of the “traditional” CX getting transformed or impacted by digital CX (for example being able to order burgers at a McDonalds using an in-store kiosk vs at the order desk). In the rest of this article we will use CX to stand for “digital customer experience.”
Also, we assume that digital user experience (referred to as UX henceforth) is a subset of CX (see Figure below). In the context of digital CX, digital UX relates to the experience from interaction with specific products and services.
Both the DevOps and CX lifecycles are well known in themselves, and won't be addressed here. So let's look at how key CX processes map into the DevOps lifecycle (see Figure below) and how native DevOps processes change with a focus on CX.
Here’s a quick description of each of the mapped processes:
Requirements planning and definition in agile delivery have generally focused on the development of features and user stories as a collaborative exercise between the product owners, product managers, developers, and testers. New requirements (features and user stories) are described based on their understanding of user and customer needs, as well as from interactions and feedback sessions. Generally, such feedback mechanisms are structured (for example, customer advisory board meetings, formal customer requests, or those from support tickets).
CX-based requirements provide additional means to collect such data—from digital feedback channels. Users provide a variety of feedback on multiple channels, such as social media, app store reviews and other review sites, shopping sites (like Amazon), and in-app or web-site feedback channels. Such non-structured digital feedback channels house a treasure trove of valuable information. In these channels, users vent about features they like (or hate), things that are broken, features they would love to see, and most valuably, comparisons with competitive products and solutions.
CX analytics (such as customer sentiment analysis and social listening) allow us to analyze such data to generate these insights which may be used for requirements planning and validation (See figure below).
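To make this concrete, here is a minimal sketch of keyword-based sentiment scoring over digital feedback. The lexicon and the sample reviews are illustrative placeholders (a real pipeline would use a proper NLP model or a CX analytics product), but the shape of the analysis is the same: score each piece of feedback, then aggregate to spot themes for requirements planning.

```python
# Minimal sketch of sentiment scoring over digital feedback channels.
# The lexicon and review data are illustrative placeholders, not a real dataset.

POSITIVE = {"love", "great", "fast", "intuitive", "helpful"}
NEGATIVE = {"hate", "broken", "slow", "crash", "crashing", "confusing"}

def sentiment_score(review: str) -> int:
    """Score a review as positive minus negative word counts."""
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

reviews = [
    "Love the new checkout flow, fast and intuitive",
    "App is broken after update, search is slow and keeps crashing",
    "Great update, but the settings page is confusing",
]

# Aggregate scores feed requirements planning: which themes trend negative?
scores = [sentiment_score(r) for r in reviews]
print(scores)  # → [3, -3, 0]
```

In practice the interesting output is not the individual score but the trend: a sudden dip in average sentiment after a release is a direct requirements (or rollback) signal.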
CX-based requirements must also take heed of customer journey maps to better understand and optimize the flow of user interactions with digital systems. Journey map analysis generates valuable data on popular paths (to understand user behaviors and performance testing requirements), blockers (paths that need to be optimized), omni-channel interaction patterns (for example mix of digital channels and their effectiveness) as well as data accessed during the course of these interactions. Such data can be extracted from various sources, such as monitoring, operational logs, application transaction traces, and even web tracking. See figure below.
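The journey-map mining described above can be sketched in a few lines. The session data here is a hypothetical stand-in for real clickstream or transaction traces; the two aggregations show how popular paths (performance-test priorities) and drop-off points (candidate blockers) fall out of the same data.

```python
# Sketch: mining popular paths and drop-off points from per-session event logs.
# Session data is a hypothetical stand-in for real clickstream/transaction traces.
from collections import Counter

sessions = [
    ["home", "search", "product", "cart", "checkout"],
    ["home", "search", "product"],
    ["home", "search", "product", "cart"],
    ["home", "offers", "product", "cart", "checkout"],
]

# Most-travelled step-to-step transitions inform performance-test priorities.
transitions = Counter((a, b) for s in sessions for a, b in zip(s, s[1:]))

# Last page of abandoned sessions points at likely blockers.
drop_offs = Counter(s[-1] for s in sessions if s[-1] != "checkout")

print(transitions.most_common(3))
print(drop_offs)
```

Real sources (APM traces, web analytics) add timing and channel dimensions, but the aggregation pattern is the same.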
A key aspect of CX-focused requirements is early user validation (that is, are we building the right things?). While feature descriptions and user stories are great for capturing requirements from a developer perspective, they are difficult for users to interpret and interact with. A better approach to customer validation is to use model-based requirements techniques. Why? Because a picture is worth a thousand words.
Model-based requirements provide a visual representation (see figure below) of the user’s work flow (equivalent to the journey map) that can be easily understood (without having to read textual descriptions), changed easily (without having to update bodies of text) allowing rapid iteration and experimentation, and most importantly, provide a structured approach to requirements validation. There are other benefits as well, which we will discuss later in the testing section.
Model-based requirements are in fact quite aligned with how we do agile backlog planning, which does not start out by describing user stories. Typically, the "3 Amigos" (the product owner, developer, and tester) get together to sketch out the flow on a whiteboard (see figure below). They then take a picture of the flow and derive their specific stories, acceptance criteria, and test cases. This process not only often results in misinterpretation of requirements, but also limits collaboration (for example, the ability to iterate on changes) once the whiteboard is wiped clean.
Model-based approaches provide a "live" digital canvas to interactively (either in a room or using online collaboration tools) sketch, discuss, change, and refine these flows. From a CX perspective, we recommend adding a new persona, the user (or a UX/CX expert in lieu), to the 3-Amigo mix to create a "4 Amigos" team. See Figure below.
In addition, user journey maps may be integrated with requirements models to help cross the divide between CX requirements and system requirements—by importing the flow information (typically the customer actions and touchpoint information) into a requirements model. See figure below.
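One simple way to picture this import is to flatten the journey map's stages and touchpoints into an ordered flow of model steps. The journey-map structure and the model schema below are illustrative assumptions, not a specific tool's format.

```python
# Sketch: importing customer-journey touchpoints into a requirements flow model.
# The journey-map structure and model schema are illustrative assumptions.

journey_map = {
    "stages": ["Discover", "Evaluate", "Purchase"],
    "touchpoints": {
        "Discover": ["search ad", "landing page"],
        "Evaluate": ["product page", "reviews"],
        "Purchase": ["cart", "checkout"],
    },
}

def to_requirements_flow(journey):
    """Flatten journey stages into an ordered flow of requirements-model steps."""
    flow = []
    for stage in journey["stages"]:
        for tp in journey["touchpoints"][stage]:
            flow.append({"step": tp, "stage": stage})
    return flow

flow = to_requirements_flow(journey_map)
print([s["step"] for s in flow])
```

Keeping the stage on each step preserves the CX context (which part of the journey a requirement serves) alongside the system-level flow.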
Product owners and managers often use the minimal viable product (MVP) approach to determine the set of features that are prioritized in releases.
From a CX perspective, we also need to focus on minimal viable experiences (MVE) in combination with MVP approaches. MVE focuses more on the desirability of the solution from the user perspective. It takes into account persona-based customer journeys, ease of use, personalization, as well as minimal customer expectations (for example, from the use of competitive or similar products).
We recommend a combination of both MVP and MVE approaches for agile planning (see figure below) that provides a balanced approach to designing the minimal set of features that provide the most engaging experience. For more details on this approach in the context of agile product development, please see this article.
As described in the MVE approach, CX-focused system design also takes into account considerations for the customer journey, ease of use experience (UX), personalization, ability to provide easy feedback, and collaboration. Design thinking is a widely adopted principle that can be applied to the entire software lifecycle. It involves greater use of empathy maps (as part of the journey map), story-boarding, prototyping, and early (and continuous) user validation.
These design assets can be tied to the requirements models discussed in the previous section, so that the model becomes the single source of truth for all needs, both CX and technical, and serves as a digital collaboration and experimentation platform across both disciplines.
Many enterprises are also starting to take advantage of low-code software development platforms to support the design thinking approaches in development. In fact, low code systems are an abstraction of code into design models. Hence there is a great synergy between the requirements models and such design models—since it encourages model-based thinking across the lifecycle.
Another key DevOps development practice that helps to promote CX is API-based development. Not only do APIs provide the ability to rapidly build out omni-channel applications, but they also enable data connectivity across multiple digital ecosystems that are required to deliver seamless customer experiences.
CX-focused testing extends standard software testing practices by adding a focus on "form," in addition to function and performance. Some of the key differences between standard and CX testing are described below (see Figure).
CX testing focuses a great deal on validation (that is, are we building the right things?) of customer needs expressed in the form of the CX requirements described above. This is done using techniques like user reviews, prototyping, end-user testing, crowd testing, beta testing, and A/B testing. CX verification testing is more focused on ensuring that the CX design specifications (described above) are met. This includes techniques like conformance to UX design standards, UX testing, personalization requirements, and end-user performance testing.
Unlike classic software testing (where specific tests are tied to specific expectations to determine success or failure), CX tests are often less deterministic in terms of pass/fail criteria. For example, results from UX tests or end-user crowd tests often provide results in subjective shades of gray (for example, “cool,” “sort of,” “passable,” “sucks,” and so on) rather than discrete pass or fail. Hence, such testing must take advantage of analytics tools that provide statistical results, such as averages, trends, distributions, ranges, and more.
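The statistical treatment of subjective verdicts can be sketched simply: map the ordinal labels to a numeric scale and report central tendency, distribution, and range rather than pass/fail. The label-to-score mapping below is an illustrative assumption.

```python
# Sketch: turning subjective CX test verdicts into statistical results.
# The label-to-score mapping is an illustrative assumption.
from collections import Counter
from statistics import mean

SCALE = {"sucks": 1, "passable": 2, "sort of": 3, "cool": 4}

verdicts = ["cool", "passable", "cool", "sort of", "sucks", "cool"]
scores = [SCALE[v] for v in verdicts]

print(f"avg={mean(scores):.2f}")   # central tendency across testers
print(Counter(verdicts))           # distribution of verdicts
print(f"range={min(scores)}-{max(scores)}")
```

A team might then set release gates statistically (for example, "average of at least 3 with no more than 10% in the bottom bucket") rather than as a binary pass/fail.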
CX performance testing focuses on perceived performance from a user perspective rather than the classic performance testing approaches that benchmark the responsiveness of backend systems. This includes understanding performance on digital access devices (for example, laptops, mobile devices, and wearables) that have different computing capabilities and use different network connectivity. The typical setup for end-user performance testing is shown in the figure below. This includes measurement of access performance on the device, simulation of network performance, as well as performance tests on the back ends. Resilience testing must also be conducted across all layers of the ecosystem, in addition to load testing.
In the previous section, we discussed capturing customer journey maps in the form of model-based requirements. This actually helps us do better and more efficient testing based on journey maps—by automatically generating tests from the journey map models. These tests can be automatically regenerated every time the map changes and can also be automated using a variety of test automation tools (see Figure below).
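The core of that generation step is path enumeration: each end-to-end path through the journey-map model becomes one test case, and regenerating after a map change is just re-running the enumeration. The graph below is a hypothetical checkout flow, not a specific tool's model format.

```python
# Sketch: auto-generating test cases by enumerating paths through a
# journey-map model. The graph below is a hypothetical checkout flow.

flow = {
    "home": ["search", "offers"],
    "search": ["product"],
    "offers": ["product"],
    "product": ["cart"],
    "cart": ["checkout"],
    "checkout": [],
}

def generate_tests(graph, start, path=None):
    """Depth-first enumeration of end-to-end paths; each path is one test case."""
    path = (path or []) + [start]
    if not graph[start]:
        return [path]
    tests = []
    for nxt in graph[start]:
        tests.extend(generate_tests(graph, nxt, path))
    return tests

for case in generate_tests(flow, "home"):
    print(" -> ".join(case))
```

Commercial model-based testing tools add coverage criteria (all edges versus all paths) and bind each step to an automation action, but the derivation of cases from the model is the same idea.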
We know that CX/UX testing requires significant testing of the software user-interface (or GUI), while the classic software testing pyramid recommends that we focus more on unit and component tests. Automated GUI tests using functional automation tools (some of which are shown above) are generally brittle and difficult to create and maintain. So how do we address the need for more and better GUI testing required for UX/CX?
The answer might lie in new age test automation tools that extensively leverage artificial intelligence and machine learning, such as Test.ai (and others such as Applitools, TestiM, and mabl to name a few). These tools use image/object recognition techniques to automatically generate robust, low-maintenance GUI tests that are especially useful for automated CX testing (see figure below, courtesy of Test.ai)
For example, Test.ai was used to create test bots that automatically computed various user experience metrics across three popular online shopping sites (Amazon, Walmart, and Target) before the Black Friday shopping season. See the results below. While these tests may appear simple, the key factor is that they required no human intervention. The bots autonomously learned how to navigate these different sites (for different customer journey types) and collected the data for comparison automatically. I see a lot of potential for the future application of such tools in combination with classic GUI automation tools.
CX releases often require experimentation with multiple options to better gauge user feedback and reaction. Popular software deployment and release techniques that may be used for these include A/B deployments (for supporting A/B testing) and canary releases. These techniques are already well-established in DevOps, so there is a great deal of synergy in being able to support CX goals. The key extension that we require from a CX perspective is the combination of CX monitoring (see the next section) in addition to the standard IT monitoring as part of these experiments.
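The routing core of a canary release can be sketched in a few lines: a small, configurable slice of traffic goes to the new version while both variants are monitored. The weight and the request loop are illustrative; real deployments do this at the load balancer, service mesh, or feature-flag layer.

```python
# Sketch of weighted canary routing: a small slice of traffic goes to the
# new version while both variants are monitored. Weights are illustrative.
import random

def route(canary_weight: float, rng=random.random) -> str:
    """Send canary_weight fraction of requests to the canary build."""
    return "canary" if rng() < canary_weight else "stable"

random.seed(42)
counts = {"canary": 0, "stable": 0}
for _ in range(1000):
    counts[route(0.05)] += 1
print(counts)  # roughly 5% of traffic hits the canary
```

From a CX perspective, the important extension is tagging each routed request with its variant so that both IT metrics and CX signals (sentiment, feedback, journey completion) can be compared per variant before promoting or rolling back.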
We already leverage traditional IT systems monitoring to enable better CX, for example through faster automated RCAs and remediation, proactive performance management, use of real user monitoring (RUM), synthetic transaction monitoring, and transaction monitoring to track every step of a user’s interactions, from device to application to ultimate business outcome.
CX-focused monitoring extends IT systems monitoring by providing additional insights on CX issues, such as customer satisfaction with a product or service (for example feedback provided on app store reviews or other digital feedback channels), other forms of sentiment analysis (for example, feedback from survey data), seamlessness of experience across multiple digital channels (for example, web chat followed by email), inability to connect and complete interactions with agents (over voice/web chat/SMS/email), the quality of the interaction, and how long it takes agents to respond to chat interactions.
For example, on a recent interaction with a customer support agent on web chat, I was unable to provide an image of the affected product through chat (!), so I had to send it separately via email. However, I soon learned that all the context I provided about my problem to the web chat agent was not seamlessly transferred over to the agent that responded by email (!!!). Even though this is technically an IT issue, since both channels are digital, IT monitoring systems typically would not be able to catch these types of issues.
We also find that CX-specific monitoring data is typically used by marketing (or the office of the CMO, by personas like the CX specialist), while IT monitoring data is used by the office of the CIO (by personas like the site reliability engineer, or SRE). They often maintain disconnected dashboards in silos. There is therefore an opportunity to correlate CX monitoring data with IT monitoring data and make them work better together for holistic CX (see figure below).
For example, it would then be possible to correlate negative customer sentiment data (CX operations data) following an app store (or web app) update to specific feature sets included in the last update (IT operations data) to accelerate decisions on either a rollback (or roll-forward).
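A minimal version of that correlation compares average sentiment in a window before and after the deploy timestamp and flags a significant drop. The timestamps, scores, and threshold below are illustrative data, not a real incident.

```python
# Sketch: flagging releases whose post-deploy sentiment drops below the
# pre-deploy baseline. Timestamps, scores, and threshold are illustrative.
from statistics import mean

# Daily average review sentiment (CX operations data), indexed by day.
daily_sentiment = [4.2, 4.1, 4.3, 3.1, 2.8, 2.9, 4.0]
deploy_day = 3  # day the app update shipped (IT operations data)

before = mean(daily_sentiment[:deploy_day])
after = mean(daily_sentiment[deploy_day:deploy_day + 3])

# A large drop right after the release is a rollback/roll-forward signal.
if before - after > 0.5:
    print(f"regression: {before:.2f} -> {after:.2f}; consider rollback")
```

A production version would join on release metadata (which features shipped) so the drop can be attributed to a specific change set, not just the release as a whole.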
Combining CX with DevOps provides significant benefits for both. CX initiatives clearly benefit from DevOps capabilities to release innovations to market quickly and with high reliability. In return, CX techniques infused into DevOps practices provide the ability to address the true quality measure of successful digital transformations: customer experience.
DevOps is now a relatively mature, though rapidly evolving, discipline. Digital CX is still maturing. As more and more enterprises execute digital transformations successfully, they will come to realize the enormous benefits of combining the best of both in the CXDevOps approach.
Shamim is a thought leader in DevOps, Continuous Delivery, Continuous Testing and Application Life-cycle Management (ALM). He has more than 15 years of experience in large-scale application design and development, software product development and R&D, application quality assurance and testing, organizational quality management, IT consulting, and practice management. Shamim is currently the CTO for DevOps business unit at Broadcom, where he is responsible for innovating DevOps solutions using Broadcom's industry leading technologies.