If you want to build quality applications faster, it’s time to take a close look at model-based testing. This testing approach is a great way to eliminate the complex, manual effort involved in verifying whether your system behaves as expected. More importantly, test cases and code are generated automatically based on actual requirements, using visual models that represent all or part of your system under test.
To get maximum value out of model-based testing, though, you need the right tool: one that delivers proven in-sprint modeling capabilities and can overcome the many bottlenecks encountered with manual test design. You need models that offer clear visibility of requirements and traceability when managing the impact of changes. This is key to enabling real-time decision making and capacity planning within development teams. In this blog, we’ll explore the basics of modeling that such a tool needs to address when building tests, and share a few best practices that will help you realize a wide range of important new benefits. Read on.
One of the first questions many people ask about model-based testing is, “Who creates the model?” There is no single answer. QA might take on model development. A high-level model might be created by a business analyst, with the development or QA team diving in to provide further process details. A cross-functional team might be convened to develop the model collaboratively and to establish a functional flow they all agree to and understand. The reality is, most anyone can take the lead, which is why modeling for tests needs to be made simple.
In truth, we are creating models already, whether they are in our heads, on a whiteboard, or on a computer screen. The challenge is to formalize the modeling process and use what you design to transform your testing program. Your choice of model-based testing tool must address certain key functionalities for building model-based tests and automating test case development:
Blocks are fundamental to model building. You need the ability to drag and drop them into place and to edit them as you create a visual model of the business logic you want to represent. There are three primary block types that your tool must support:
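In many flowchart-style modeling tools, those types are start/end, process, and decision blocks. As a rough sketch only (the class, field, and step names below are illustrative assumptions, not any vendor's format), a small flow built from those block types might be captured like this:

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative sketch only: the type names, fields, and steps below are
# assumptions, not a specific tool's format.
class BlockType(Enum):
    START_END = "start/end"
    PROCESS = "process"
    DECISION = "decision"

@dataclass
class Block:
    name: str
    block_type: BlockType
    outputs: dict = field(default_factory=dict)  # edge label -> next block's name

# A tiny fragment of an online-shopping flow
flow = [
    Block("Start", BlockType.START_END, {"": "Browse catalogue"}),
    Block("Browse catalogue", BlockType.PROCESS, {"": "Add item to cart?"}),
    Block("Add item to cart?", BlockType.DECISION,
          {"yes": "Check out", "no": "Browse catalogue"}),
    Block("Check out", BlockType.START_END),
]
```

Generating test cases then amounts to walking distinct paths through this structure from start to end.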
While a simple set of blocks helps everyone understand what the model represents, there are times when more information is better. This is why we create attributes to define each block. For example, a decision block might represent a user deciding which item to add to their online shopping cart; here, we define the decision block as a type of “user interaction.” Your tool must let you change this attribute to match the block type to the user action, so you can pick out user interaction steps from code action steps easily and at a glance.
There are even more opportunities to enrich your models with additional information. For example, under process blocks, you would be able to open a “process details” menu and add any number of expected results items to each block.
Here, you should be able to attach test data, automation, and other useful pieces of supporting data. Your tool must allow you to go through a similar, intuitive process for each decision block to capture the decision output and expected results, such as valid or invalid, approved or rejected. If there is no attribute that suits your needs, you should be able to easily create your own custom field, allowing even more flexibility.
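To make the idea concrete, here is one way the enriched information might look as plain data; the field names, file paths, and custom fields below are purely illustrative assumptions rather than any particular tool's schema:

```python
# Illustrative only: one enriched decision block expressed as plain data.
login_decision = {
    "name": "Credentials valid?",
    "type": "decision",
    "attribute": "user interaction",              # separates user steps from code actions
    "expected_results": ["valid", "invalid"],     # the decision's possible outcomes
    "test_data": "credentials.csv",               # attached data source
    "automation": "tests/login_test.py",          # linked automation script
    "custom_fields": {"owner": "QA", "review_status": "approved"},
}
```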
Editing blocks must be made easy in your tool of choice. Simply double-click or right-click to change any or all of the block properties, including the description, type, expected results, and more. For example, you might edit a decision block to change the output from true/false to red/yellow/blue. If you have a large model and lots of edits to make, you should be able to select a “grid edit” and change multiple block properties from a single, tabular view. This is editing made easy.
To save time while modeling, it’s important to re-use as much information as possible. With copying, duplicating, and cloning, you have enough flexibility to re-use blocks to suit your needs.
Sometimes modeling particular events is difficult or can result in very large models. Constraints are a way to help control the flow of logic: you can specify a complex series of events that must always be satisfied. For example, if a user logs out, they must be redirected to the login page. The downside is that constraints aren’t actually visible in your model, so in general you should use them only when the same logic cannot be achieved easily by flow logic alone.
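As a sketch of the underlying idea (the step names and path format are assumptions, not how any specific tool stores constraints), a constraint can be thought of as a predicate that every generated path must satisfy:

```python
# Illustrative sketch: a constraint expressed as a predicate over generated paths.
def logout_redirects_to_login(path):
    """Whenever 'Log out' appears, the very next step must be 'Login page'."""
    for step, next_step in zip(path, path[1:]):
        if step == "Log out" and next_step != "Login page":
            return False
    return True

candidate_paths = [
    ["Login page", "Dashboard", "Log out", "Login page"],
    ["Login page", "Dashboard", "Log out", "Dashboard"],
]
# Keep only the paths that respect the constraint (the first one here).
valid_paths = [p for p in candidate_paths if logout_redirects_to_login(p)]
```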
You should be able to make use of subflows to “componentize” large flows, which can help reduce complexity. For example, you might establish a subflow with the appropriate set of rules for logging in or for registering a new user. This gives you re-usable assets and maintains consistency across your model.
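A minimal sketch of the same idea in code, with assumed step names and a made-up “subflow:” naming convention, shows how a login component can be defined once and expanded wherever larger flows need it:

```python
# Illustrative sketch: a subflow defined once and expanded wherever referenced.
# The 'subflow:' prefix is a made-up convention for this example.
LOGIN_SUBFLOW = ["Open login page", "Enter credentials", "Submit", "Verify dashboard"]

def expand(flow, subflows):
    """Replace any 'subflow:<name>' step with that subflow's steps."""
    expanded = []
    for step in flow:
        if step.startswith("subflow:"):
            expanded.extend(subflows[step.split(":", 1)[1]])
        else:
            expanded.append(step)
    return expanded

checkout_flow = ["subflow:login", "Add item to cart", "Check out", "Confirm order"]
full_flow = expand(checkout_flow, {"login": LOGIN_SUBFLOW})
```

When the login rules change, only the subflow definition has to be updated; every flow that references it picks up the change.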
Lastly, your tool needs to offer a storage hub for your models, so any authorized team member can check components in and out to support your Agile workflow. It’s important to have a file storage mechanism that’s purpose-built to help development teams create and manage multiple projects across dedicated workspaces. Distributed teams must be able to rely on the tool for such a central repository to maintain, share, and reuse different versions of test assets modeled from one sprint to the next.
bizops.com is sponsored by Broadcom, a leading provider of solutions that empower teams to maximize the value of BizOps approaches.