Automated Testing (IT): QA Engineer / Lead SDET

How do you ensure quality maintenance and development of test suites in long-lived projects with constantly evolving functionality and high team turnover?

Answer.

Historically, test automation in long-lived projects has often become a burden: tests were written "on the fly", left unmaintained, and after a few years had to be discarded. Frequent team changes lead to knowledge loss, the test architecture blurs, and automation degenerates into a "pile of scripts".

Problem: test scenarios become outdated, test owners leave, there is no documented architecture for the testing system, code review is not applied, and technical debt accumulates. New team members struggle to understand what is covered by tests and what each test is responsible for.

Solution: apply GitFlow practices to automated tests, write readable and well-documented test code, use design patterns (such as a modular test architecture), and automate documentation maintenance (README, auto-generated coverage and test-relevance reports). It is essential to conduct code reviews for automated tests, describe test scenarios in documentation, and establish ownership by distributing responsibility for suites.
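The modular approach mentioned above can be sketched in Python in the style of the Page Object pattern: selectors and actions live in a dedicated page module, so UI changes touch one file instead of every test. `FakeDriver`, `LoginPage`, and all selectors here are hypothetical names for illustration, and the fake driver stands in for a real WebDriver so the sketch runs on its own.

```python
class FakeDriver:
    """Stand-in for a real WebDriver, so the sketch runs without a browser."""
    def __init__(self):
        self.fields = {}

    def type(self, selector, text):
        self.fields[selector] = text

    def click(self, selector):
        self.fields["clicked"] = selector


class LoginPage:
    """Action layer: tests call login(), never selectors directly."""
    USER_FIELD = "#user"
    PASS_FIELD = "#pass"
    SUBMIT_BTN = "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USER_FIELD, user)
        self.driver.type(self.PASS_FIELD, password)
        self.driver.click(self.SUBMIT_BTN)


def test_login_happy_path():
    # The test reads as a scenario; selectors stay hidden in the page module.
    driver = FakeDriver()
    LoginPage(driver).login("alice", "s3cret")
    assert driver.fields[LoginPage.USER_FIELD] == "alice"
    assert driver.fields["clicked"] == LoginPage.SUBMIT_BTN


test_login_happy_path()
```

When the login form changes, only `LoginPage` is edited, which is exactly what keeps suites maintainable across team turnover.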

Key features:

  • Applying a unified approach to organizing the structure of the automated tests repository
  • Documenting scenarios and architecture of automated tests
  • Code review and assigning responsibility for different suites
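Ownership of suites can be made machine-checkable rather than tribal knowledge. A minimal sketch, assuming a hypothetical `OWNERS` registry that maps test directories to responsible engineers; a CI step could fail when a suite has no owner:

```python
# Hypothetical ownership registry: suite path -> responsible engineer.
OWNERS = {
    "tests/auth": "alice",
    "tests/billing": "bob",
}

def find_unowned(suites):
    """Return the suites that have no assigned owner."""
    return [s for s in suites if s not in OWNERS]

suites = ["tests/auth", "tests/billing", "tests/reports"]
print(find_unowned(suites))  # → ['tests/reports']
```

In practice the registry might live in a CODEOWNERS-style file, but the idea is the same: every suite resolves to a person who answers for its relevance.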

Tricky Questions.

Is there any point in using static analysis for automated test code?

Yes! Static analysis (linters, SonarQube, etc.) helps maintain the quality and consistency of test code, preventing the emergence of "quick and dirty" code.

How often should automated tests be cleaned of outdated scenarios?

It is recommended to review scenario relevance every release cycle (for example, once a month) to remove tests for retired functionality and keep stability metrics clean.
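The review cadence can be enforced with a simple script. A minimal sketch, assuming each scenario carries a hypothetical `last_reviewed` date (which could be stored in test metadata); anything older than the cycle is reported for triage:

```python
from datetime import date, timedelta

REVIEW_CYCLE = timedelta(days=30)  # one release cycle, per the text

# Hypothetical metadata: scenario name -> date of its last relevance review.
scenarios = {
    "test_login": date.today() - timedelta(days=5),
    "test_legacy_export": date.today() - timedelta(days=90),
}

def stale_scenarios(scenarios, today=None):
    """Return scenarios whose last review is older than the cycle."""
    today = today or date.today()
    return [name for name, reviewed in scenarios.items()
            if today - reviewed > REVIEW_CYCLE]

print(stale_scenarios(scenarios))  # → ['test_legacy_export']
```

Running this in CI turns "we should clean up tests someday" into a concrete, recurring task with named items.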

Does 100% test coverage help avoid test obsolescence?

No. Even with “full” coverage, automated tests can become irrelevant due to changes in requirements and architecture if they are not kept up to date.

Typical Mistakes and Anti-Patterns

  • No one assigned responsibility for keeping automated tests relevant
  • Confusing repository structure, no README and onboarding docs
  • Lack of standards for writing tests, heterogeneous code style

Real Life Example

Negative Case

In a large company, all tests lived in one repository and were written by anyone, in any style. After a year, almost no one could say what was and was not covered, and most scenarios were irrelevant.

Pros:

  • Quick addition of new tests by anyone willing
  • Low barrier to entry in the short term

Cons:

  • Chaos, duplication of tests, constant conflicts
  • New employees take a long time to get oriented
  • High technical debt and risk of knowledge fragmentation

Positive Case

A separate master plan was created for the automated tests: each module had an owner, the code structure was described in the README, and standards in CONTRIBUTING.md. Every PR to the test repository required a code review with a mandatory checklist.

Pros:

  • Fast onboarding of new employees
  • Easy maintenance of test relevance
  • Transparency and manageability of test coverage

Cons:

  • Requires discipline and costs for documentation maintenance
  • Not all developers want to spend time on code review of tests