If you want to create a great test plan for any application, there are four truths that will guide every decision you make. In fact, you’re already using them on a daily basis even if you’re not completely aware of it.

Think about usability testing for two screens in an application. The first screen is used by internal staff to maintain a lookup table of company departments—a screen that looks much like every other “lookup table maintenance screen” those users work with. The second screen walks customers through a series of steps that lets them configure a product before purchasing it. How much usability testing are you going to do for each of those screens? Take a second.

Now: How did you make that decision?

There are four truths about testing that you use every time you make decisions like this (you just applied them). The goal of this post is to help clarify those truths so that you can build the right test plan for any application.

The Root Truth

The first truth about testing is that, in any real-world application, it’s impossible to test every possible combination of inputs and conditions. You not only know this but figure it’s so obvious that it’s not worth saying.

There are two reasons it’s worth saying. One: It’s why you wouldn’t do both sets of usability testing in my example. Two: It may not always be true. Thanks to cloud computing, test generation, massively parallel processing and machine learning, this root truth might stop being true in the future.

But, right now, this is the way it is. And, besides, the other three truths depend on it.
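
To get a sense of the scale involved, take one hypothetical screen with a dozen checkboxes and four dropdowns. The arithmetic alone shows why "test everything" isn't a plan:

```python
# Back-of-the-envelope math for a hypothetical screen: a dozen checkboxes and
# four dropdowns with eight choices each, before we even consider free-text fields.
boolean_options = 12                     # checkboxes/toggles: each is on or off
dropdowns = 4
choices_per_dropdown = 8

combinations = (2 ** boolean_options) * (choices_per_dropdown ** dropdowns)
print(f"{combinations:,} input combinations")        # 16,777,216

# Even at one automated test per second, running each combination once takes:
print(f"{combinations / (60 * 60 * 24):.0f} days")   # roughly 194 days
```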

The Truth About Purpose

To get to the second truth, you have to recognize that testing isn’t about quality. Quality comes from the design effort, beginning at the architectural level and continuing down through code design and on to application delivery and operations management. Testing, on the other hand, just proves that software is “free from defects.” That’s important … but it’s not a measure of quality: “Free from defects” is just the minimum level of acceptability.

So, because we can’t test everything, we prioritize tests by the risk of not discovering the related defect. Risk, in this case, is some combination of how likely the defect is to occur, how much damage the defect could do and what other tools for managing the risk we have available.

Which leads to the second truth: Testing is about managing risk.

You practice this every day. If you feel another test won’t reduce the risk, if you feel the defect’s risk is low, if you have some other way of managing the risk, then you’ll skip the related test. The second truth also controls, for example, where and for how long you’ll do exploratory testing and how you plan to mitigate defects in the areas you don’t test.
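
To make that concrete, here is a minimal sketch of what risk-based prioritization can look like. The screens, scores and weighting below are invented for illustration; the point is simply that likelihood, damage and any other available mitigation all feed the ranking:

```python
# A minimal sketch of risk-based test prioritization. Screens, scores and the
# weighting rule are hypothetical.
candidates = [
    # (test area, likelihood of a defect 1-5, damage if it ships 1-5, other mitigation available?)
    ("Department lookup maintenance screen", 2, 1, True),   # users can ask a co-worker
    ("Product configuration wizard",         4, 5, False),  # wrong config = unhappy customer
    ("Checkout payment step",                3, 5, False),
]

def risk_score(likelihood, damage, has_other_mitigation):
    score = likelihood * damage
    # If some other risk-management tool exists (training, a co-worker, a manual
    # review step), the testing risk drops; halving it here is an arbitrary choice.
    return score / 2 if has_other_mitigation else score

for area, likelihood, damage, mitigated in sorted(
        candidates, key=lambda c: risk_score(c[1], c[2], c[3]), reverse=True):
    print(f"{risk_score(likelihood, damage, mitigated):5.1f}  {area}")
```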

In my example, most organizations will decide that the usability risk associated with the lookup table screen is minimal because (a) the users will probably “figure it out” and (b) if the users can’t figure it out, the cost of failure is low and can be handled through training (i.e., “asking a co-worker”). This screen won’t make it into the usability test plan. At best, it might get a smoke test.

On the other hand, most organizations will decide that the risk associated with the configuration screen is high because (a) customers may get it wrong, (b) the customer will now be unhappy, (c) mitigation is expensive (giving the customer a new product), and (d) customers could be lost to a competitor. Is this screen going to get usability testing? You bet it is.

The Truth About Value

Which brings us to the third truth (and the one that people generally don’t like): Testing isn’t a value-added activity. Your stakeholders regard the functionality delivered by your application as the valuable part of your work. If you could deliver that functionality without any testing, your stakeholders wouldn’t feel they’d been shortchanged.

While not a value-added task, testing is, however, a necessary task: It’s (currently) the only way we have to remove defects. This might not have been true if provably correct or contract-based software had caught on … but those approaches didn’t, so here we are.

This distinction between value-added and necessary tasks matters because time spent on necessary tasks is time we’re not spending on value-added tasks. We should spend as much time on necessary tasks as we have to and not one second more. This means we need to make sure that the time we spend on testing is as effective as possible—that we’re doing the right things with the best tools in the most efficient way.

And there are a bunch of ways you can make the time you spend on testing more effective:

  • Ensure you’re not working on low priority tests.
  • Shift work to the people who will get the value from delivering the application (<cough>stakeholders</cough>).
  • Integrate QA into software delivery (QAOps).
  • Automate where possible (see the sketch after this list).
  • Reduce the labor involved.
  • Employ shift-left thinking to find problems when they’re cheapest to fix.
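
As one example of the “automate where possible” item, the high-risk configuration check from earlier could be captured as an automated test. This is only a sketch: pytest is assumed as the runner, and configure_product and its business rule are hypothetical stand-ins for the real logic behind the customer-facing wizard.

```python
# A minimal sketch of automating a high-risk check (pytest assumed as the runner).
# configure_product and its validation rule are hypothetical.
import pytest

def configure_product(base_model, options):
    # Hypothetical business rule: the "marine" coating requires the sealed housing.
    if "marine_coating" in options and "sealed_housing" not in options:
        raise ValueError("Marine coating requires the sealed housing option")
    return {"model": base_model, "options": sorted(options)}

def test_valid_configuration_is_accepted():
    result = configure_product("X-200", ["sealed_housing", "marine_coating"])
    assert result["model"] == "X-200"

def test_incompatible_options_are_rejected():
    with pytest.raises(ValueError):
        configure_product("X-200", ["marine_coating"])
```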

This isn’t a complete list, and it’s also a place where you can continue to expect change. But the right tools (like Telerik Test Studio) can not only help you make these changes, they can also reduce the cost and time you’re spending on existing tasks.

The Truth About Timing

Which leads to the final truth (and the only one that isn’t generally accepted): You can’t start testing early enough.

The later you start testing, the more likely it is that you’ll work on the wrong things and find problems when they are most expensive to fix. Testing starts as soon as you have requirements, by testing/validating those requirements themselves. Validating requirements ensures that developers are both building and testing the right things.
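
One practical way to validate a requirement is to try writing it down as an executable check before any code exists: if you can’t express the expected outcome, the requirement isn’t testable yet. A minimal sketch, with pytest assumed and the requirement, the pricing module and its function all invented:

```python
# A hypothetical requirement captured as an executable check before the code exists:
# "REQ-042: orders of 10 or more units receive a 5% discount."
import pytest

@pytest.mark.xfail(reason="REQ-042 not implemented yet")
def test_req_042_volume_discount():
    from pricing import apply_volume_discount  # hypothetical module, not written yet
    assert apply_volume_discount(unit_price=100.00, quantity=10) == 950.00
```

Until the feature exists, the test documents the requirement; once it’s implemented, removing the xfail marker turns it into a regression check.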

Your goal is to start testing early, make your testing effort as effective as possible, use the best tools and prioritize the tests you can do based on risk. If you use those truths to build your test plan, you’ll have the best test plan possible.


About the Author

Peter Vogel

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.
