Automated Testing (IT) / QA Engineer

How to organize effective separation of responsibilities between the test framework and test logic in automated tests?

Answer.

Background:

Initially, automated tests were often written "head-on", without architectural separation: verification logic and execution mechanics were mixed together. As projects grew, it became evident that mixing the framework and test logic created a brittle, hard-to-maintain codebase, and architectural recommendations for separating responsibilities emerged.

Problem:

If tests depend on the framework's internal implementation or contain logic for interacting with the environment, any change forces a rewrite of numerous tests. Test cases become complex, code gets duplicated, and migrating to a new framework or platform becomes difficult.

Solution:

Strictly separate:

  • Framework (baseline infrastructure): general mechanics for running tests, logging, error handling, reports, a base for helper classes (e.g., drivers, adapters, and utilities).
  • Test Logic (test cases): specific scenarios that express the meaningful goal of the test, using only the public APIs of the framework.

Key features:

  • Ease of maintenance due to isolation from platform changes
  • Ability to reuse test logic
  • Reduction of redundancy and code duplication

Tricky Questions.

Is it okay to write test steps directly in tests if there are very few of them?

No. Even short tests can grow, and the absence of layers will quickly lead to chaos.

Should test scenarios be aware of the execution mechanics (e.g., when to initialize the driver)?

No. All infrastructure details should be hidden within the framework layer.
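One common way to hide driver lifecycle from tests is a framework-owned setup/teardown wrapper. The sketch below uses a stdlib context manager with a dictionary standing in for a real driver (the names are illustrative):

```python
from contextlib import contextmanager

@contextmanager
def managed_session():
    """Framework owns the driver lifecycle; tests receive a ready session."""
    session = {"driver": "started"}    # stand-in for real driver initialization
    try:
        yield session
    finally:
        session["driver"] = "stopped"  # teardown happens even if a test fails

with managed_session() as s:
    driver_state_inside = s["driver"]   # "started" while the test runs
driver_state_after = s["driver"]        # "stopped" after teardown
```

A test written against `managed_session` never decides when to initialize or quit the driver; that decision lives entirely in the framework layer.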

Is it normal to hardcode test parameters within cases (e.g., URL, credentials)?

No. Test parameters should be configured via the framework and environment settings.
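For example, parameters can be resolved once by the framework from environment variables with a fallback default, so no test case ever embeds a URL or credential. The variable name `TEST_BASE_URL` here is a hypothetical convention:

```python
import os

def load_base_url() -> str:
    """Framework-level parameter resolution: environment first, then default."""
    return os.environ.get("TEST_BASE_URL", "https://staging.example.test")

default_url = load_base_url()                    # no variable set: default
os.environ["TEST_BASE_URL"] = "https://qa.example.test"  # e.g. set by CI
ci_url = load_base_url()                         # CI override wins
```

Switching the whole suite from staging to QA then requires changing one environment variable, not editing any test.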

Common Mistakes and Anti-Patterns

  • Placing state checks inside framework methods instead of tests
  • Using private framework methods in tests
  • Duplicating helper functions within tests
  • Hardcoding parameters

Real-Life Example

Negative Case

In one such project, tests call Selenium driver methods directly, every test repeats the WebDriver connection, and every test parses the config on its own.
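The anti-pattern looks roughly like this (an illustrative sketch: the config content and test names are hypothetical, and the real driver call is shown only as a comment so the snippet stays self-contained):

```python
import configparser

RAW_CONFIG = "[env]\nbase_url = https://example.test\n"

def test_login():
    cfg = configparser.ConfigParser()   # duplicated in every test
    cfg.read_string(RAW_CONFIG)         # each test parses the config itself
    base_url = cfg["env"]["base_url"]
    # driver = webdriver.Chrome()       # direct driver call in test code
    return base_url + "/login"

def test_search():
    cfg = configparser.ConfigParser()   # same boilerplate, copied again
    cfg.read_string(RAW_CONFIG)
    base_url = cfg["env"]["base_url"]
    # driver = webdriver.Chrome()
    return base_url + "/search"
```

Every change to the config format or driver setup now touches every test file.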

Pros:

  • Can quickly start writing automated tests without architecture

Cons:

  • Changing the driver or launch parameters leads to massive edits
  • Duplicated code across all tests
  • Hard to scale and maintain the project

Positive Case

Tests use the framework's basic abstractions: common initialization and teardown, parameter passing through a configurator, test logic expressed through high-level methods.

Pros:

  • Easy maintenance and modification
  • Test logic is easy to port and scale
  • A single entry point for parameters

Cons:

  • Initial investment in infrastructure