Multi-step (wizard) forms are common in registration, account setup, and lengthy business processes (for example, applying for a loan or ordering services). Testing them manually is error-prone and time-consuming; automation saves effort and ensures coverage of all edge-case scenarios.
Background: Since the emergence of wizards and long forms, such scenarios have mostly been covered only by manual testing. With the advent of frameworks like Selenium, Cypress, and Playwright, it became possible to automatically reproduce complex multi-step stories, significantly improving software stability and reducing the number of regression defects.
Problem: Wizards and long forms often undergo changes in logic (steps appear/disappear, validation conditions change, dynamic fields are introduced). It's important to maintain test stability amidst such changes. Main pain points include: the fragility of locators due to the dynamic nature of steps, proper handling of transitions between steps, managing test data, emulating user errors, and clicking through non-linear scenarios with returns and changing states.
Solution: The Step Object pattern (an extension of Page Object) is used, allowing separation of the logic for each step into distinct entities. Tests should implement transitions for all possible scenarios, including returns and incorrect data. To enhance stability, dynamic waits and element locating methods that are not dependent on page position are employed. Test data is structured to comprehensively cover all branches of logic.
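To make the pattern concrete, here is a minimal Python sketch of Step Objects. The `Wizard` class is a hypothetical in-memory stand-in for the application, so the example is self-contained; in a real suite each Step Object would wrap Selenium or Playwright calls against the page instead, but the structure is the same: one class per step, with each transition returning the next step's object.

```python
class Wizard:
    """Hypothetical wizard: personal -> address -> confirm (in-memory stand-in)."""
    ORDER = ["personal", "address", "confirm"]

    def __init__(self):
        self.step = "personal"
        self.data = {}

    def submit(self, step, values):
        # Guard against submitting a step the user is not actually on.
        if step != self.step:
            raise RuntimeError(f"not on step {step!r}, currently on {self.step!r}")
        self.data.update(values)
        self.step = self.ORDER[self.ORDER.index(step) + 1]


class PersonalStep:
    """Step Object: owns the actions (and, in a real suite, locators) of one step only."""
    def __init__(self, wizard):
        self.wizard = wizard

    def fill(self, name, email):
        self.wizard.submit("personal", {"name": name, "email": email})
        return AddressStep(self.wizard)  # transition returns the next Step Object


class AddressStep:
    def __init__(self, wizard):
        self.wizard = wizard

    def fill(self, city):
        self.wizard.submit("address", {"city": city})
        return self.wizard  # confirm step omitted for brevity
```

Because each step's logic lives in its own class, a change to one step's markup or validation touches exactly one Step Object, not every test that passes through it.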
Trick Questions:
Trick Question 1
"Is it enough to cover only the happy path (main user scenarios) if the form is stable?"
Answer: No, errors often arise precisely in dealing with unexpected scenarios — returns, skipping steps, edge values. Without these, tests will not provide complete confidence in stability.
Trick Question 2
"Can transitions between steps be implemented solely by traversing URLs?"
Answer: Not always. Many wizards use dynamic routes or are managed only by internal JS states, so real user clicks and interactions must be reproduced.
Trick Question 3
"Does managing test data play no significant role if all steps are mandatory and static?"
Answer: Incorrect. Even for static forms, different data inputs can trigger completely different responses, prompts, errors, and dynamic hints.
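A small illustration of why the data matters even on a static step. The validator below (`validate_age`) is hypothetical, not taken from any real project: each class of input hits a different branch and produces a different message, so a single "valid" value would leave most of the step's behavior untested.

```python
def validate_age(raw):
    """Return (ok, message); different inputs exercise different branches."""
    if not raw.strip():
        return False, "required"
    if not raw.isdigit():
        return False, "must be a number"
    if int(raw) < 18:
        return False, "must be 18 or older"
    return True, ""


# One data set per branch: the shape of a data-driven test for a single field.
CASES = {
    "": "required",
    "abc": "must be a number",
    "17": "must be 18 or older",
    "42": "",
}

for raw, expected in CASES.items():
    ok, message = validate_age(raw)
    assert message == expected
```

In a real suite the same idea is usually expressed with a parametrized test (e.g. `pytest.mark.parametrize`), one case per branch of the step's logic.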
In the automation of a banking application process, a single end-to-end test was created for the happy path, with no returns and no error cases. When one of the steps was modified (a dynamic block was added), the test not only broke — it had also never caught the bugs in returning to the previous step or in processing invalid data.
Pros: quick to write; minimal maintenance effort while the flow stays unchanged.
Cons: fragile to any change in the steps; no coverage of returns, skipped steps, or invalid data, so regressions in those paths go unnoticed.
A Step Object structure was then implemented: each step was covered by separate tests that simulated returns, errors, and switching between different branches, all driven by sets of test data. New steps or changes no longer undermined the value of the test suite.
Pros: resilient to changes in individual steps; covers returns, errors, and alternative branches; step logic is reusable across tests.
Cons: more upfront effort to build the Step Objects and test-data sets; a larger suite to run and maintain.
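A sketch of how a return ("back") scenario can be exercised against such a structure. The `CheckoutWizard` class below is a hypothetical in-memory model standing in for the real application; the test walks forward, navigates back, edits a value, and checks that previously entered data survives the round trip — exactly the kind of non-linear path the single happy-path test above never touched.

```python
class CheckoutWizard:
    """Hypothetical three-step wizard that preserves entered values across back navigation."""
    STEPS = ["plan", "payment", "confirm"]

    def __init__(self):
        self.index = 0
        self.values = {}

    @property
    def step(self):
        return self.STEPS[self.index]

    def fill_and_next(self, **fields):
        # Merge new values so editing one field does not wipe the others.
        self.values.update(fields)
        self.index = min(self.index + 1, len(self.STEPS) - 1)

    def back(self):
        self.index = max(self.index - 1, 0)


def test_back_navigation_keeps_data():
    w = CheckoutWizard()
    w.fill_and_next(plan="basic")
    w.fill_and_next(card="1234")
    assert w.step == "confirm"

    w.back()                      # user returns to the payment step
    w.fill_and_next(card="5678")  # and corrects the card number
    assert w.step == "confirm"
    # Earlier data must survive; only the edited field changes.
    assert w.values == {"plan": "basic", "card": "5678"}
```

The same assertions translate directly to a browser-driven test: navigate back with the UI's own control (not the browser URL), re-enter the field, and verify the summary step reflects both the preserved and the corrected values.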