Historically, as the number of automated tests in projects grew, problems followed: suites became confusing, exceeded execution time limits, and it was hard to tell which test was responsible for what. Dependencies between different parts of the testing system also multiplied, slowing down the pipeline as a whole.
The problem arises when tests multiply faster than the testing infrastructure's architecture can support them. Without scalable solutions, tests become slow and hard to maintain, defects become difficult to find and localize, and technical debt grows rapidly.
The solution lies in implementing specific strategies:
- a modular test structure with clear ownership of each area;
- independent test environments and the ability to run suites locally;
- separate CI pipelines per code area, so failures stay local to a team;
- monitoring that flags slow or ineffective tests.
Can all tests be made integration tests to cover more code at once?
No. This approach worsens defect localization, drives up maintenance costs, and slows down regression runs.
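The localization argument can be seen in a small sketch. All names here are hypothetical: two tiny functions, two unit tests, and one "integration" test that exercises both at once. When the broad test fails, either function could be at fault; when a unit test fails, the defect is pinned to one place.

```python
# Hypothetical module: why small unit tests localize defects
# better than one broad integration test.

def parse_price(raw: str) -> float:
    """Parse a price string like '19.99'."""
    return float(raw)

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def checkout_total(raw_price: str, discount_percent: float) -> float:
    """'Integration' path: parse, then discount."""
    return apply_discount(parse_price(raw_price), discount_percent)

# Unit tests: each failure points at exactly one function.
assert parse_price("19.99") == 19.99
assert apply_discount(100.0, 25) == 75.0

# Integration test: covers more code at once, but a failure here
# could be in parsing OR in discounting -- localization is lost.
assert checkout_total("19.99", 0) == 19.99
```

The point is not that integration tests are useless, but that a suite made *only* of them turns every red build into an investigation.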
Does scalability of automated tests mean only their acceleration?
Scalability encompasses architecture, maintainability, speed, and flexible infrastructure. Faster runs are merely a consequence of a well-designed system, not the goal itself.
How should tests be scaled for teams working across different time zones?
It is important to provide local test execution and independent testing environments; otherwise, tasks from different teams will conflict with one another.
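One common way to get that independence is to give every test run a private, throwaway environment instead of a shared one. A minimal sketch, with illustrative names only: each run receives its own temporary directory and its own config, so runs on different machines or in different time zones cannot collide.

```python
# Sketch: every test run gets a private throwaway environment,
# so local and cross-team runs are independent. Names are illustrative.
import os
import tempfile

def run_in_isolated_env(test_fn):
    """Run a test callable inside its own temp directory with its
    own config file, so parallel runs never share state."""
    with tempfile.TemporaryDirectory() as env_dir:
        config_path = os.path.join(env_dir, "config.ini")
        with open(config_path, "w") as f:
            f.write("[db]\nname=test_local\n")  # private fixture data
        return test_fn(env_dir)

def sample_test(env_dir):
    # Each invocation sees only its own directory and data.
    return sorted(os.listdir(env_dir))

assert run_in_isolated_env(sample_test) == ["config.ini"]
```

In practice the same idea is usually expressed through test-framework fixtures or containerized environments; the invariant is identical: no two runs touch the same state.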
Several teams in the company started adding new automated tests to a single folder without coordinating their changes. After a few weeks, the tests began failing because of mismatched data and dependencies, and the run time exceeded two hours.
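The "mismatched data" failure mode can be reproduced in a few lines. This is a deliberately contrived sketch (the teams and fixture are hypothetical): two tests share one mutable fixture, so the second test's result depends on whether the first one has already run.

```python
# Sketch: uncoordinated tests sharing one fixture break each other.
# Teams and data are hypothetical; the order-dependence is the point.

SHARED_USERS = {"alice": "active"}  # one folder, one shared fixture

def team_a_test():
    # Team A mutates the shared fixture for its own scenario...
    SHARED_USERS["alice"] = "banned"
    assert SHARED_USERS["alice"] == "banned"

def team_b_test():
    # ...so Team B's assumption silently becomes order-dependent.
    return SHARED_USERS["alice"] == "active"

assert team_b_test() is True   # passes when run before Team A's test
team_a_test()
assert team_b_test() is False  # "fails" after Team A ran: a data mismatch
```

Nothing in either test is individually wrong; the defect lives in the shared state, which is exactly why such failures are so hard to attribute to a team.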
In one of the teams, a modular test structure was introduced, with separate CI pipelines per code area and automatic alerts about ineffective tests; stability improved noticeably.
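The core of "separate CI per code area" is a selection step: tag each test with the module it covers and run only the affected area's suite. A minimal sketch with hypothetical test and area names:

```python
# Sketch: select tests by code area so each team's pipeline runs
# only its own suite. Test names and areas are hypothetical.

TESTS = [
    {"name": "test_login", "area": "auth"},
    {"name": "test_refund", "area": "billing"},
    {"name": "test_invoice", "area": "billing"},
]

def select_tests(changed_area: str):
    """Pick only the tests owned by the code area that changed,
    keeping each pipeline short and each failure local."""
    return [t["name"] for t in TESTS if t["area"] == changed_area]

assert select_tests("billing") == ["test_refund", "test_invoice"]
assert select_tests("auth") == ["test_login"]
```

Real pipelines implement the same idea with test-framework markers or CI path filters, but the mapping from code area to test subset is the mechanism that keeps run time flat as the suite grows.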