Background:
Smoke testing originated as a quick way to check that a system is operational after a build. Its goal is to ensure that critical functions work and that the application is generally fit for further, deeper checks. In manual testing, smoke tests are usually performed immediately after deploying a new version of the product.
Problem:
The main difficulty is limited time and the need to choose the checks that truly matter. Testers often either check too much (wasting resources) or miss critical issues, leaving "holes" in the release.
Solution:
Proper organization of smoke testing involves selecting a strictly minimal set of scenarios that cover the most important user flows. These checks should be clear, quick, and reproducible. For example:
- Successful user login
- Ability to perform the main function (e.g., make a purchase)
- Payment processing and receipt of confirmation
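The minimal set of checks above can be sketched as a tiny smoke-suite runner. This is an illustrative sketch, not a real implementation: the three check functions are hypothetical stubs standing in for calls against an actual deployment, and their names are assumptions.

```python
# Minimal smoke-suite sketch. The check functions below are stubs:
# in a real suite each one would exercise the live system.

def check_login() -> bool:
    # Stub: would attempt a login against the deployed build.
    return True

def check_core_purchase() -> bool:
    # Stub: would drive the main purchase flow end to end.
    return True

def check_payment_confirmation() -> bool:
    # Stub: would verify the payment was processed and confirmed.
    return True

# Deliberately short list: only the flows whose failure blocks the release.
SMOKE_CHECKS = [check_login, check_core_purchase, check_payment_confirmation]

def run_smoke() -> bool:
    """Run every critical check; report failures; pass only if all succeed."""
    failures = [check.__name__ for check in SMOKE_CHECKS if not check()]
    for name in failures:
        print(f"FAILED: {name}")
    return not failures
```

Keeping the list short and the checks boolean ("works / doesn't work") is what makes the suite quick and reproducible.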
FAQ:
Can smoke testing be considered a full replacement for regression testing?
No. Smoke tests answer only "works / doesn't work" for the key functions. Finding serious but less obvious bugs still requires a full regression pass.
What to do if at least one smoke test fails? Should testing continue?
No, further testing makes no sense at that point: the team reports the issue, and the release is blocked until the bug is fixed.
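The "block the release on any failure" rule can be expressed as a small release gate. This is a sketch under an assumed CI convention (a nonzero exit code stops the pipeline); the function name and the result-dictionary shape are illustrative, not a real API.

```python
import sys

def gate_release(results: dict) -> None:
    """Block the release if any smoke check failed.

    `results` maps check names to pass/fail booleans. Assumed CI
    convention: exiting with a nonzero code halts the pipeline.
    """
    failed = [name for name, ok in results.items() if not ok]
    if failed:
        print("Release blocked; failing smoke checks:", ", ".join(failed))
        sys.exit(1)  # nonzero exit stops the (assumed) CI pipeline
    print("Smoke suite green; release may proceed.")
```

The gate is all-or-nothing on purpose: a single failing critical flow is enough evidence that deeper testing of this build is not worth the effort.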
Should smoke tests include checks for edge-case scenarios?
No, smoke tests are not intended to cover edge cases. They only confirm that the core functions of the product are operational; edge cases belong in deeper test levels.
Examples:
A smoke test was run with an extensive checklist that included trivial functions. It took so long that the release was delayed by half a day.
Pros: Broad coverage leaves little chance of missing an obvious defect.
Cons: Time was wasted on trivial checks, and the half-day release delay defeats the purpose of a quick smoke pass.
The smoke test focused only on the most critical scenarios. A blocking bug was quickly identified and reported to the team, and the release was suspended until a fix was implemented.
Pros: The blocking bug was caught quickly, and a broken build was stopped before reaching users.
Cons: A narrow set of scenarios can miss defects outside the critical flows; those must still be caught by deeper regression testing.