Service virtualization simulates unavailable systems by emulating their dynamic behavior, data, and performance, which lets teams work in parallel for faster delivery. It also helps teams create an “excuse-free” testing environment in which virtual services speed up application testing while increasing quality.
Organizations can apply service virtualization best practices to shift testing left and continuously validate changes, bringing higher-quality products to market more rapidly and at lower cost.
Building effective virtual services requires certain preconditions, and failing to meet them carries real costs. The best practices below help you meet them.
You don’t need to recreate all, or even a substantial portion, of the behavior of the real system you want to virtualize; that is, you don’t need to recreate every function point of the target system. You’re building a stand-in, not rebuilding the dependent service, so implement only the transactions your tests require.
Don’t even try to support ad hoc requests: that’s only possible if the virtual service becomes just as complex (and just as costly to support) as the system it virtualizes. General input validation isn’t required either; if you have a negative test case for malformed input, create a virtualized transaction for that specific case.
Basically, be aware of what you need to virtualize.
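As a rough illustration, here is a minimal sketch of such a narrowly scoped virtual service in Python, using Flask as a stand-in for a dedicated service virtualization tool (the endpoint, account numbers, and payloads are hypothetical). It implements only the transactions the test plan needs, includes one canned response for a malformed-input negative test case, and explicitly refuses everything else rather than emulating the full system:

```python
# Minimal virtual service: implements ONLY the transactions the test plan
# needs, not the full behavior of the real dependent system.
from flask import Flask, jsonify

app = Flask(__name__)

# Canned transactions keyed by the exact requests the test plan exercises.
# (Hypothetical data for illustration.)
CANNED_BALANCES = {
    "12345678": {"accountId": "12345678", "balance": 2500.00, "currency": "USD"},
    "87654321": {"accountId": "87654321", "balance": 0.00, "currency": "USD"},
}

@app.route("/accounts/<account_id>/balance")
def get_balance(account_id):
    # Negative test case: a deliberately malformed account number gets its own
    # virtualized transaction -- no general input validation is implemented.
    if account_id == "BAD-INPUT":
        return jsonify({"error": "malformed account number"}), 400
    if account_id in CANNED_BALANCES:
        return jsonify(CANNED_BALANCES[account_id])
    # Anything outside the test plan is explicitly unsupported, not emulated.
    return jsonify({"error": "transaction not virtualized"}), 501

if __name__ == "__main__":
    app.run(port=8080)
```

Refusing unvirtualized requests outright, rather than guessing at a plausible answer, makes any gap between the virtual service and the test plan visible immediately.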
Never add behaviors to a virtual service without knowing how test cases will interact with it. If you don’t have a test plan, you don’t know which transactions to virtualize.
If you’re guessing at how clients will interact with your service, you’ll probably guess wrong.
Basically, know the detailed test cases you need to support. Let your test plan drive which transactions are virtualized. Ensure each virtualized transaction can be traced back to a test case.
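One lightweight way to enforce that traceability is to declare every virtualized transaction next to the test case that requires it, so a transaction without a driving test case stands out immediately. A sketch, with hypothetical test case IDs and transactions:

```python
# Each virtualized transaction is declared with the test case that drives it.
# A transaction with no test case ID has no reason to exist.
from dataclasses import dataclass

@dataclass(frozen=True)
class VirtualTransaction:
    test_case: str   # ID from the test plan (hypothetical IDs below)
    request: str     # method and path the client will send
    status: int      # canned response status
    body: dict       # canned response payload

TRANSACTIONS = [
    VirtualTransaction("TC-101", "GET /accounts/12345678/balance", 200,
                       {"balance": 2500.00}),
    VirtualTransaction("TC-102", "POST /transfers", 201,
                       {"transferId": "T-1", "status": "ACCEPTED"}),
    VirtualTransaction("TC-203", "POST /transfers", 422,
                       {"error": "insufficient funds"}),
]

def coverage_report(transactions):
    """Print the test-case-to-transaction trace for review."""
    for t in transactions:
        print(f"{t.test_case}: {t.request} -> {t.status}")

if __name__ == "__main__":
    coverage_report(TRANSACTIONS)
```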
Ensure that you build test cases that validate the system under test and not the virtual service. Virtualize an idealized version of the dependent system. The virtual service should be smart enough to allow the client to complete a workflow.
For example, consider the UI of a banking application: the virtual account service should return consistent, well-formed responses so a tester can log in, view a balance, and complete a transfer; the test cases then validate the UI’s behavior, not the virtual service’s.
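Continuing the banking example, here is a hedged sketch of such an idealized virtual account service (hypothetical endpoints, again using Flask as a stand-in). Each step of the log-in, check-balance, transfer workflow succeeds with consistent data, so the tests exercise the UI rather than the stub:

```python
# Idealized virtual service: just smart enough that a client can complete
# a full "log in -> check balance -> transfer" workflow.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    # Always succeeds for the test user; the UI under test is what matters.
    return jsonify({"token": "TEST-TOKEN"})

@app.route("/accounts/12345678/balance")
def balance():
    # Consistent, well-formed data so the UI can render the next screen.
    return jsonify({"accountId": "12345678", "balance": 2500.00})

@app.route("/transfers", methods=["POST"])
def transfer():
    # Accept the transfer described in the test plan; no real funds logic.
    return jsonify({"transferId": "T-1", "status": "ACCEPTED"}), 201

if __name__ == "__main__":
    app.run(port=8080)
```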
It’s counterproductive to virtualize a service only once, just so a single instance might be used across multiple teams.
Reuse is a great concept, but it doesn’t apply here. Each team has its own requirements, and one shared virtual service simply recreates the constrained service it was meant to replace, with the service owner becoming just another bottleneck. There’s no harm in each team having an independent virtualization of the same service.
The better approach is to allow each team to evolve its virtual services independently.
Each team has its own needs, and if there’s no overlap in requirements, there’s no reuse to be had.
A simple virtual service can be built quickly, and some overlapping behavior between teams is fine; you’ll spend far more time on the analysis than on the implementation.
You can’t simply record transactions all day in the hope that the computer will “learn” how the dependent system behaves.
You’ll never know whether you’ve captured all the data scenarios, and you’ll almost certainly capture countless duplicate transactions. A bloated virtual service is harder to maintain, and you lose the benefits of effective test data management, so you’re better off recording only defined test workflows.
If you want to vary the data you get from a particular service request, randomly selecting a response is not the way to go.
You can’t create a repeatable functional test with random responses, and randomness does nothing to address problems such as database caching during performance tests. Instead, create an invariant set of responses for a set of similar inputs, and introduce variety by randomizing the inputs rather than the responses.
If you create a separate virtualized transaction for every request that returns an identical response, you’ll pay for it when it comes to maintenance.
For example, match only on the last digit of the account number, so that many unique account numbers map to a single transaction; this also addresses the issue of database caching in a performance test, as sketched below.
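A sketch of that matching rule (the account number format and response values are hypothetical): the last digit of the account number selects the response, so millions of unique inputs map deterministically onto ten maintainable transactions:

```python
# Match on the LAST DIGIT of the account number: many unique account numbers
# map deterministically to a small set of canned responses. Unique inputs
# defeat database caching in a performance test; deterministic responses
# keep functional tests repeatable.
RESPONSES_BY_LAST_DIGIT = {
    "0": {"status": "ACTIVE", "balance": 100.00},
    "1": {"status": "ACTIVE", "balance": 2500.00},
    "2": {"status": "FROZEN", "balance": 0.00},
    # ...one entry per digit; ten transactions cover every account number.
}

def match_response(account_number: str) -> dict:
    """Select a canned response based on the account number's last digit."""
    return RESPONSES_BY_LAST_DIGIT.get(account_number[-1],
                                       {"status": "ACTIVE", "balance": 50.00})

# Millions of generated account numbers, ten maintained transactions:
assert match_response("00017231") == match_response("99955551")
```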
Ensure that your application handles failures gracefully. Most test plans contain more negative than positive test cases, which helps spot when developers “drop an exception on the floor.”
“Failure on demand” is far easier than yanking a network cable, and failure scenarios can be tied to narrow test cases instead of being system-wide.
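A common way to implement failure on demand is to reserve “magic” input values that trigger a specific failure for a specific negative test case. A sketch, with hypothetical trigger account numbers and the same Flask stand-in as above:

```python
# Failure on demand: reserved "magic" account numbers trigger specific
# failures for specific negative test cases -- no cable-pulling required,
# and the rest of the test suite is unaffected.
import time
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/accounts/<account_id>/balance")
def balance(account_id):
    if account_id == "99999500":          # negative case: server error
        return jsonify({"error": "internal error"}), 500
    if account_id == "99999408":          # negative case: slow dependency
        time.sleep(30)                    # force the client's timeout path
        return jsonify({"error": "timeout"}), 504
    return jsonify({"accountId": account_id, "balance": 2500.00})

if __name__ == "__main__":
    app.run(port=8080)
```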
Virtual services should typically be built by the consumer, not the provider of the real service.
Only the consumers of a service, not its owner, know which behaviors they rely on. In a proper SOA environment, the service owner has no visibility into how the service is consumed, and so would have no choice but to reinvent the world. A close relationship with the service owner is still critical, however; after all, a service contract is still a requirement for virtualization.
Service virtualization requires skills not usually found in testing organizations, so ensure you train the right people.
Service virtualization is a paradigm shift in testing, and it demands a different skill set. Traditional manual testers understand the core business, not the underlying technology.
They generally test through the UI, not at the level of SOAP services, and have little exposure to SOAP, messaging, or related technologies.
Those with strong business knowledge write the requirements, and those with strong technical knowledge implement them. As such, let traditional manual testers create the test plans, and let the technically minded build the virtual services and automated test cases.
These service virtualization best practices represent a significant step in how organizations create a meaningful DevOps environment. By implementing service virtualization successfully, you can reduce testing bottlenecks and open the door to further testing best practices.
Giri is a product management professional with over a decade of experience driving product development. His current focus area is service virtualization.