Automated Testing (QA Automation / Automation QA)

How to automate testing of multi-step forms and wizards, what problems do testers face, and how to build reliable tests for processes with long user scenarios?


Answer.

Multi-step forms (wizards) are common in registration, account setup, and lengthy business processes (for example, applying for a loan or ordering services). Manual testing of them is error-prone and time-consuming; automation saves effort and ensures coverage of all edge-case scenarios.

Background: Since the emergence of wizards and long forms, such scenarios were mostly covered only by manual testing. With the advent of frameworks like Selenium, Cypress, and Playwright, it became possible to automatically reproduce complex multi-step flows, significantly improving software stability and reducing the number of regression defects.

Problem: Wizards and long forms often undergo changes in logic (steps appear/disappear, validation conditions change, dynamic fields are introduced). It's important to maintain test stability amidst such changes. Main pain points include: the fragility of locators due to the dynamic nature of steps, proper handling of transitions between steps, managing test data, emulating user errors, and clicking through non-linear scenarios with returns and changing states.

Solution: The Step Object pattern (an extension of Page Object) is used, allowing separation of the logic for each step into distinct entities. Tests should implement transitions for all possible scenarios, including returns and incorrect data. To enhance stability, dynamic waits and element locating methods that are not dependent on page position are employed. Test data is structured to comprehensively cover all branches of logic.
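The Step Object idea can be sketched as follows. All class and method names here are hypothetical, and a stub driver stands in for a real Selenium or Playwright session so the example is self-contained; in a real suite, the step classes would wrap actual framework calls.

```python
class FakeDriver:
    """Stand-in for a real WebDriver so the sketch runs without a browser."""
    def __init__(self):
        self.filled = {}
        self.step = 1

    def fill(self, locator, value):
        self.filled[locator] = value

    def click_next(self):
        self.step += 1


class PersonalDataStep:
    """One Step Object: owns the locators and actions of a single wizard step."""
    def __init__(self, driver):
        self.driver = driver

    def fill_name(self, name):
        self.driver.fill("[data-testid=name]", name)
        return self

    def proceed(self):
        self.driver.click_next()
        return AddressStep(self.driver)  # transition returns the next Step Object


class AddressStep:
    def __init__(self, driver):
        self.driver = driver

    def fill_city(self, city):
        self.driver.fill("[data-testid=city]", city)
        return self


# A test composes steps instead of scripting one long click sequence:
driver = FakeDriver()
address = PersonalDataStep(driver).fill_name("Alice").proceed()
address.fill_city("Berlin")
assert driver.step == 2
```

Because each transition returns the next Step Object, a change inside one step is fixed in one class, and tests for other steps stay untouched.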

Key Features:

  • Implementation of Step Object for each step.
  • Testing not only "happy paths" but also alternative, error-prone paths (inputting incorrect or edge data).
  • Use of a data-driven approach: crafting scenarios based on an array of test data for maximum coverage of transition logic.
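The data-driven approach from the last point can be illustrated with a toy validation function (the function and its rules are invented for this sketch; a real test would drive the actual wizard step through its UI or API):

```python
def validate_loan_step(amount, term_months):
    """Toy stand-in for a wizard step's validation logic."""
    if amount <= 0:
        return "error: amount must be positive"
    if term_months > 360:
        return "error: term too long"
    return "ok"


CASES = [
    # (amount, term_months, expected) - happy path, edge, and invalid branches
    (10_000, 12,  "ok"),
    (0,      12,  "error: amount must be positive"),
    (10_000, 361, "error: term too long"),
]

for amount, term, expected in CASES:
    assert validate_loan_step(amount, term) == expected
```

Adding a new branch of wizard logic then means adding a row of data, not writing a new test from scratch; in pytest the same table maps directly onto `@pytest.mark.parametrize`.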

Trick Questions.

Trick Question 1

"Is it enough to cover only the happy path (main user scenarios) if the form is stable?"

Answer: No, errors often arise precisely in dealing with unexpected scenarios — returns, skipping steps, edge values. Without these, tests will not provide complete confidence in stability.

Trick Question 2

"Can transitions between steps be implemented solely by traversing URLs?"

Answer: Not always. Many wizards use dynamic routes or are managed only by internal JS states, so real user clicks and interactions must be reproduced.
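Why URL-jumping fails can be shown with a hypothetical state machine that mirrors JS-managed routing: the wizard only lets you onto a step when its internal state allows it.

```python
class Wizard:
    """Toy model of a wizard whose routing is guarded by internal state."""
    def __init__(self):
        self.state = {"step": 1, "validated": False}

    def open_url(self, step):
        # Direct navigation is rejected unless the previous step validated.
        if step == 2 and not self.state["validated"]:
            return "redirected to step 1"
        self.state["step"] = step
        return f"on step {step}"

    def click_next(self):
        self.state["validated"] = True  # clicking runs validation first
        return self.open_url(self.state["step"] + 1)


w = Wizard()
assert w.open_url(2) == "redirected to step 1"  # URL jump alone fails
assert w.click_next() == "on step 2"            # real interaction succeeds
```

This is why reliable wizard tests reproduce actual clicks and inputs rather than navigating by route.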

Trick Question 3

"Managing test data plays no significant role if all steps are mandatory and static, does it?"

Answer: Incorrect. Even for static forms, different data inputs can trigger completely different responses, prompts, errors, and dynamic hints.

Common Mistakes and Anti-Patterns

  • Storing all the logic of a multi-step wizard in a single long test ("monolith"), instead of breaking it into steps/components.
  • Lack of negative scenario generation, coverage of edge cases, and returns.
  • Tying tests to unstable locators/element positions.
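The last anti-pattern is worth making concrete. The locator strings below are illustrative (standard XPath and CSS syntax), and the helper is a deliberately crude heuristic invented for this sketch:

```python
# A position-based XPath breaks as soon as steps are reordered or a block
# is inserted; an attribute-based CSS locator survives layout changes.
brittle = "/html/body/div[3]/form/div[2]/div[1]/input"
stable = "input[data-testid='email']"


def is_position_dependent(locator):
    """Crude heuristic: indexed path segments signal position-dependence."""
    return any(f"[{i}]" in locator for i in range(1, 10))


assert is_position_dependent(brittle)
assert not is_position_dependent(stable)
```

Preferring `data-testid` or role-based attributes over positional paths is one of the cheapest ways to keep multi-step tests stable as the wizard evolves.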

Real-Life Example

Negative Case

In automating a banking application flow, a single end-to-end test was created for the happy path, with no returns or error handling. When one of the steps was modified (a dynamic block was added), the test broke, and it had also been missing bugs in returning to the previous step and in handling invalid data.

Pros:

  • Quick coverage of basic scenarios.

Cons:

  • Any change to the form required editing the entire long test.
  • Critical errors went unnoticed by testing — especially at "junctions" and rare cases.

Positive Case

A Step Object structure was implemented: each step was covered by separate tests simulating returns, errors, and switching between branches, all driven by sets of test data. New steps or changes did not invalidate the existing test suite.

Pros:

  • Flexibility and scalability of automated tests.
  • Easy to make changes with growing logic.

Cons:

  • Initially required more time to design the testing architecture.
  • Maintaining consistency in test data became more complex.