Test design is the process of developing test scenarios and data based on requirements, specifications, and product analysis. It arose from the need to structure tests in a way that ensures maximum coverage and minimizes effort duplication.
History of the issue:
Previously, tests were created intuitively, which left gaps in the checks and wasted resources on redundant ones. Formal test design techniques raised both the quality and the completeness of coverage.
Problem:
Without formalized techniques, there is a risk of running near-duplicate tests or, conversely, missing critical cases. It is also hard to demonstrate, before release, that the testing performed is sufficient.
Solution:
Implementing test design techniques allows for rational resource allocation, prioritization of checks, and measurable control of coverage. Key techniques include equivalence partitioning, boundary value analysis, and pairwise testing.
Frequently asked questions:
Is it sufficient to test only at boundary values for complete coverage?
No: positive and negative scenarios, business-logic checks, and representatives of every equivalence class also need to be covered.
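A minimal sketch of why boundary values alone fall short. The `validate_age` function and its 18..100 range are assumptions made for illustration; the point is that type and missing-value checks are negative scenarios that boundary analysis by itself would never suggest.

```python
def validate_age(value):
    """Hypothetical validator: accepts integer ages 18..100 inclusive."""
    if not isinstance(value, int) or isinstance(value, bool):
        return False  # negative scenario: wrong type or missing value
    return 18 <= value <= 100

# Boundary checks: the edges and the values just outside them.
assert validate_age(18) and validate_age(100)
assert not validate_age(17) and not validate_age(101)

# Negative scenarios that pure boundary analysis would miss:
assert not validate_age("25")   # numeric string, not a number
assert not validate_age(None)   # missing value
assert not validate_age(True)   # bool sneaking in as an int subtype
```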
In which cases is it better to use pairwise rather than equivalence partitioning?
When several parameters each take multiple values: pairwise is more effective at exposing errors in the interaction between parameters, whereas equivalence partitioning treats each parameter in isolation.
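The idea can be sketched with a naive greedy generator: keep picking the test case that covers the most not-yet-covered value pairs until every pair of parameter values appears at least once. The parameter names and values below are illustrative; dedicated pairwise tools use smarter algorithms and produce smaller suites.

```python
from itertools import combinations, product

def pairs_of(test):
    """All (parameter, value) pairs exercised by one test case."""
    return {((a, test[a]), (b, test[b])) for a, b in combinations(test, 2)}

def pairwise(params):
    """Greedy pairwise selection: cover every pair of values at least once."""
    keys = list(params)
    all_pairs = {((a, va), (b, vb))
                 for a, b in combinations(keys, 2)
                 for va, vb in product(params[a], params[b])}
    candidates = [dict(zip(keys, combo)) for combo in product(*params.values())]
    tests, covered = [], set()
    while covered != all_pairs:
        best = max(candidates, key=lambda t: len(pairs_of(t) - covered))
        gain = pairs_of(best) - covered
        if not gain:
            break
        covered |= gain
        tests.append(best)
    return tests

# Illustrative parameters: 2 * 3 * 2 = 12 exhaustive combinations.
params = {"os": ["win", "mac"],
          "browser": ["ff", "chrome", "safari"],
          "lang": ["en", "ru"]}
suite = pairwise(params)
print(f"{len(suite)} pairwise tests instead of 12 exhaustive ones")
```

The suite is smaller than the full Cartesian product, yet every two-way combination of values still occurs in at least one test, which is where most interaction defects surface.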
Is testing on outdated specifications sufficient?
No, specifications must be updated; otherwise, test coverage will not correspond to the current product.
Bad example:
The "Age" field was tested only with the values 18, 25, and 40, so critical errors at the boundaries (0, 100) were not identified until release.
Pros:
Very few test cases; fast to design and execute.
Cons:
Boundary defects escaped to production and were found only after release.
Good example:
Using equivalence classes and boundary values, the tests covered 0, 1, 17, 18, 99, 100, and 101, as well as typical values inside the range.
Pros:
Every equivalence class and every boundary is exercised, so boundary defects are caught before release.
Cons:
More test cases to design and maintain than with the intuitive approach.
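The good example above can be sketched as a small table-driven check. The `check_age` function and its 1..100 valid range are assumptions for illustration; the real boundaries must come from the current specification.

```python
def check_age(age: int) -> bool:
    """Hypothetical validator: accepts ages 1..100 inclusive."""
    return 1 <= age <= 100

# Three equivalence classes: below range, in range, above range.
# Each class is tested at its boundaries plus typical interior values.
cases = {
    0: False,                        # just below the lower boundary
    1: True, 100: True,              # the boundaries themselves
    101: False,                      # just above the upper boundary
    17: True, 18: True, 99: True,    # values near interesting thresholds
    25: True,                        # typical value inside the valid class
    -5: False, 150: False,           # deeper into the invalid classes
}

for age, expected in cases.items():
    assert check_age(age) == expected, f"unexpected result for age={age}"
print("all boundary and equivalence-class checks passed")
```

A table of (input, expected) pairs like this makes the coverage argument explicit: reviewers can see at a glance that every class and boundary has a row.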