Software testing has changed drastically over the years. New methods and tools have been introduced that have helped teams deliver high-quality software products. However, some terms still confuse developers, project managers and even testers. One of these points of confusion is the definition of “testing” versus “checking”. In this post, I will describe these two concepts, their differences and how they fit into a software development cycle.
Most of the time, “testing” and “checking” are used interchangeably. Developers test their code, testers check the feature. Or is it the other way around? Every person on a team does their part in testing and checking their work toward one goal: to release a “bug-free” product. But there is not always a clear understanding and definition of these terms in the development and testing communities, which is why James Bach and Michael Bolton worked together to come up with these definitions:
“Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modeling, observation, inference, etc.”
“Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.”
Generally, we describe testing as a process of exploring and discovering unexpected behavior, looking for defects and analyzing their impact from a user’s perspective. By testing, we attempt to identify any issues that may hinder the user from using the system. We try to discover any issue (recurring or intermittent, replicable or not, etc.) and make intelligent guesses about how the system should work. We investigate and isolate the root cause of a problem to help developers reproduce defects and fix them efficiently. Hence, testing can be summarized as “learning sufficiently everything that matters about how the program works and about how it might not work.”
On the other hand, checking centers on observation. By checking, we try to confirm whether an assumption is true or not. We verify that the system still works after making changes. We validate that our observations match our expectations. Checking usually produces a binary result – true or false, yes or no, pass or fail. From that result, we can determine whether a function works as we’d expect.
So how do we differentiate these two strategies? To paraphrase James Bach, testing encompasses checking, whereas checking cannot encompass testing. We can perform checking while testing, but not the other way around. Checking is focused on specific facts and rules, while testing is an open-ended investigation. Another distinction is specifications. Since testers are exploratory in nature, they tend to look outside the box at unconstrained scenarios rather than focusing solely on what’s expected. They try to “break the system” and look for unexpected behavior. Checking, on the other hand, requires specifications in order to validate, verify or confirm that a particular feature actually works as the requirements describe.
Some will say testing is better than checking, while others may disagree. But as a tester myself, I believe both disciplines have their own strengths and weaknesses. Testing is ideal for discovering issues before a product’s release, but it takes a lot of time to hit all those corner-case scenarios. Checking is important for verifying all known functions, especially for projects with tight timelines, though unforeseen bugs may still surface once the product is deployed. Therefore, to deliver an effective, efficient and excellent product, a development team should always consider performing both strategies. Because, as they say, “If you don’t like testing or checking your product, most likely, your customers won’t like to test it either.”