Integrating automated tests into the CI/CD process ensures early detection of defects with every code change. This is critical for modern development and for maintaining product stability.
Background: Traditionally, automated tests were triggered manually or through separate, standalone jobs. With the advent of Continuous Integration (CI) and Continuous Deployment (CD), the need arose to run all tests automatically on every commit. Systems like Jenkins, GitLab CI/CD, GitHub Actions, TeamCity, and similar tools are now common.
Problem: Without integrating automated tests, bugs are discovered too late: a developer might miss a problem, and it could end up in production. Manual triggering delays releases and does not provide full confidence in the quality of each change.
Solution: integrate automated tests directly into the CI/CD pipeline, so that every change is verified automatically before it can reach production.
For this purpose, tests are organized into separate jobs in the pipeline: there are usually stages for unit tests, integration tests, UI tests, and load tests. Example from .gitlab-ci.yml:
```yaml
stages:
  - test
  - deploy

unit_test:
  stage: test
  script:
    - npm run test:unit

ui_tests:
  stage: test
  script:
    - npm run test:ui
```
Key features:
Will integrating automated tests into CI/CD slow down development?
No. A properly configured pipeline uses parallelism, isolated environments, and triggers that run only the necessary tests; by catching bugs early, this actually accelerates releases.
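As a sketch of how such parallelism can be configured in GitLab CI (the job name and the `--shard` flag are illustrative assumptions; sharding like this requires a test runner that supports it, e.g. Jest 28+):

```yaml
# Hypothetical example: split the unit-test job into 4 concurrent instances.
# GitLab CI provides CI_NODE_INDEX and CI_NODE_TOTAL to each instance,
# so the test runner can execute only its own shard of the suite.
unit_test:
  stage: test
  parallel: 4
  script:
    - npm run test:unit -- --shard=$CI_NODE_INDEX/$CI_NODE_TOTAL
```

With four shards, each instance runs roughly a quarter of the suite, so wall-clock time for the stage drops accordingly.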
Should all automated tests be run at every stage of the pipeline?
No. Early stages (such as merge request or pull request branches) usually run fast checks (linters and unit tests), while the full regression suite runs only before a release or on nightly builds.
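This split can be expressed with GitLab CI `rules` (the job names and npm scripts below are assumptions for illustration; the `CI_PIPELINE_SOURCE` conditions are standard GitLab CI):

```yaml
# Hypothetical example: fast checks on merge requests,
# full regression only on scheduled (nightly) pipelines.
lint_and_unit:
  stage: test
  script:
    - npm run lint
    - npm run test:unit
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'

full_regression:
  stage: test
  script:
    - npm run test:regression
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```

The nightly run is then driven by a pipeline schedule configured in the project settings, so developers never wait on the slow suite during day-to-day work.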
Can we rely solely on automated tests in CI/CD, forgetting about manual regressions?
Not recommended. Automation is effective for repetitive scenarios, but complex cases and user-experience checks still require manual verification.
In one project, the full automated test suite ran on every commit. The pipeline stretched to 40 minutes, developers had to wait for the tests to finish before merging their branches, and the result was merge conflicts and release delays.
Pros:
Cons:
The pipeline was redesigned with a clear division of jobs: only fast tests ran on feature branches, while the complete regression suite ran on stage/prod. Bots collected errors and reports, and the team received failure notifications and responded promptly.
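The notification part of such a setup can be sketched as a GitLab CI job that fires only on failure (the Slack webhook and `SLACK_WEBHOOK_URL` variable are assumptions; you would define the variable yourself in the project's CI/CD settings):

```yaml
# Hypothetical example: post a message to a chat webhook when the pipeline fails.
# CI_COMMIT_REF_NAME and CI_PIPELINE_URL are predefined GitLab CI variables.
notify_failure:
  stage: deploy
  when: on_failure
  script:
    - >
      curl -X POST -H 'Content-Type: application/json'
      -d "{\"text\": \"Pipeline failed on $CI_COMMIT_REF_NAME ($CI_PIPELINE_URL)\"}"
      "$SLACK_WEBHOOK_URL"
```

`when: on_failure` means the job runs only if an earlier stage failed, so the team is pinged exactly when a response is needed.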
Pros:
Cons: