Background
Acceptance criteria are the set of requirements that must be met before a piece of work (a release, a task, a test case) is considered complete. In manual testing, clearly defined completion conditions help the team avoid errors, misunderstandings, and "hidden" shortcomings.
Problem
Without transparent criteria, each role interprets "readiness" differently: the developer considers the task closed, the tester sees it as unfinished, and the customer still expects conformance with the business logic.
Solution
Develop measurable, clear, and consistent criteria (for example, "the button performs its action", "data is preserved after a page refresh", "no validation errors occur"). It is important to agree on the Definition of Done (DoD) among the customer, tester, and developer, to reflect any changes in the requirements, and to document the fulfillment of the criteria for each story or issue.
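The rule that a task is done only when every agreed criterion is met can be sketched as a checklist whose "done" status is computed, not asserted by opinion. This is a minimal illustration, not a prescribed tool; the `Criterion` and `Task` names are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Criterion:
    """One measurable acceptance criterion, e.g. 'data persists after refresh'."""
    description: str
    met: bool = False

@dataclass
class Task:
    title: str
    criteria: List[Criterion]

    def is_done(self) -> bool:
        # The DoD rule: the task is complete only when ALL criteria are met.
        return all(c.met for c in self.criteria)

task = Task("Save user profile", [
    Criterion("the Save button triggers a save request"),
    Criterion("data persists after a page refresh"),
    Criterion("no validation errors occur for valid input"),
])

print(task.is_done())   # False: nothing has been verified yet
for c in task.criteria:
    c.met = True
print(task.is_done())   # True: every criterion is met, so the task is done
```

Making "done" a function of the checklist removes the ambiguity described above: the developer, tester, and customer all read the same list.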
Key questions:
Is it mandatory to meet all criteria to close the task?
Yes, that is the essence of DoD — the task is considered complete only when all criteria are met.
Can the DoD be changed during testing or release?
Yes, if requirements change or new details emerge, but every team member, especially the tester, must be informed of the change.
Who should define the DoD?
The entire team together — with participation from testers, developers, business analysts, and customer representatives.
Bad practice
A task is accepted without formalized criteria: the developer assumed everything worked. A day later the customer finds a "hidden" bug, and the tester claims the bug was outside the scope of the task.
Pros: the task is accepted quickly, with no documentation overhead.
Cons: defects surface only after delivery, responsibility for them is disputed, and each role has its own idea of what "ready" means.
Good practice
Specific criteria are defined before testing, and a completion mark is recorded after each task is verified manually. Any changes to the criteria are documented and agreed upon with the team.
Pros: a shared understanding of "done", defects are caught before delivery, and the history of criteria and completion marks is traceable.
Cons: requires upfront effort and ongoing discipline to keep the criteria up to date.
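The disciplined workflow just described, recording a completion mark for each verified criterion and logging every agreed change to the DoD, can be sketched as follows. This is an illustrative model under the assumption that the team tracks marks per criterion; the `TaskRecord` and `amend_dod` names are hypothetical, not a real tool's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional, Tuple

@dataclass
class Criterion:
    description: str
    verified_by: Optional[str] = None
    verified_at: Optional[datetime] = None

    def mark_verified(self, tester: str) -> None:
        # The completion mark records WHO confirmed the criterion and WHEN.
        self.verified_by = tester
        self.verified_at = datetime.now(timezone.utc)

@dataclass
class TaskRecord:
    title: str
    criteria: List[Criterion]
    change_log: List[Tuple[datetime, str]] = field(default_factory=list)

    def amend_dod(self, note: str) -> None:
        # Changes to the DoD are written down, not just mentioned in passing.
        self.change_log.append((datetime.now(timezone.utc), note))

    def is_done(self) -> bool:
        # Done only when every criterion carries a verification mark.
        return all(c.verified_by is not None for c in self.criteria)

record = TaskRecord("User login", [
    Criterion("valid credentials open the dashboard"),
    Criterion("invalid credentials show an error message"),
])
record.criteria[0].mark_verified("tester_a")
record.amend_dod("Added lockout after repeated failures, agreed with the team")
record.criteria.append(Criterion("account locks after repeated failed attempts"))
print(record.is_done())  # False: the newly added criterion is not yet verified
```

Note that amending the DoD immediately reopens the task until the new criterion is verified, which is exactly the behavior the FAQ above calls for.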