Automated Testing (IT) / QA Automation Engineer

How do you implement automated accessibility testing, why is it important, what problems do teams face, and how can they be solved?


Answer.

Automated accessibility testing (a11y testing) has become particularly relevant with the growth of digital-inclusion initiatives. Initially, testing was done manually, which often meant critical flaws were missed or detected late. The modern approach automates checks with specialized tools and integrates a11y verification into CI/CD.

History of the issue: The first accessibility checks were entirely manual, which was labor-intensive and prone to human error. With the emergence of standards (WCAG, Section 508), tools like axe, pa11y, and Lighthouse were developed, significantly automating the process.

Problem: The main challenge is that automation cannot cover all aspects of accessibility (e.g., whether an alternative for complex graphical content is appropriate, or whether text is adequate for screen-reader users). Teams also frequently struggle with custom widgets, asynchronous interfaces, and placing a11y checks correctly in the testing pipeline.

Solution: Combine automation of the standard checks (color contrast, aria-* attributes, tabindex, document structure, labels) with manual validation of critical business flows involving accessibility specialists. A good practice is to run a11y scanners on pull requests and key releases to avoid accumulating "accessibility technical debt".
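Color contrast is a good example of a check that automates reliably. Below is a minimal sketch of the WCAG 2.x contrast-ratio formula (relative luminance of the lighter color plus 0.05, divided by that of the darker plus 0.05); the function names are illustrative, not any real tool's API:

```python
def _channel(c: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG relative-luminance formula."""
    s = c / 255.0
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white gives the maximum ratio; WCAG AA requires >= 4.5:1 for normal text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0
```

Note that mid-gray `#777777` on white comes out at roughly 4.48:1, which narrowly fails the AA threshold of 4.5:1, exactly the kind of near-miss that eyeballing cannot catch but a scanner flags instantly.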

Key features:

  • Widespread use of software scanners: axe-core, pa11y, Google Lighthouse.
  • Integration into CI processes with clear automatic feedback on errors.
  • Regular updating of tools in accordance with the evolution of standards (WCAG 2.2, ARIA, etc.).
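To illustrate the class of static rules such scanners automate, here is a toy checker for one of the oldest WCAG rules (images must have alt text), built on Python's standard `html.parser`. This is deliberately not the axe-core or pa11y API; real scanners run hundreds of such rules against the live DOM:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Toy static rule: flag <img> elements that lack an alt attribute."""
    def __init__(self) -> None:
        super().__init__()
        self.violations: list[str] = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "img" and "alt" not in a:
            self.violations.append(f"<img src={a.get('src', '?')!r}> is missing alt text")

def check_alt_text(html: str) -> list[str]:
    checker = AltTextChecker()
    checker.feed(html)
    return checker.violations

issues = check_alt_text('<img src="logo.png"><img src="chart.png" alt="Q3 revenue chart">')
print(issues)  # exactly one violation, for logo.png
```

In a real project you would not write such rules yourself; you would run an established engine and fail the CI job when its report is non-empty, which is the "clear automatic feedback" mentioned above.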

Trick questions.

Trick question 1

"Is it enough to use only automated scanners to ensure full accessibility?"

Answer: No, automated tools cover only about 30-50% of accessibility requirements. The remaining part can only be identified through manual testing and tests with real assistive technologies.

Trick question 2

"If I only add role="button" or similar attributes, will the element be accessible?"

Answer: Not always. The element also needs proper focus management (e.g., tabindex), keyboard support (activation via Enter/Space), event handling, and an informative accessible name for screen readers.
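Part of this answer can itself be automated. The sketch below (a toy rule, not a real tool's API) flags elements carrying role="button" that are not natively focusable and have no tabindex, so the keyboard can never reach them; a full audit would additionally verify Enter/Space handlers and an accessible name:

```python
from html.parser import HTMLParser

class RoleButtonChecker(HTMLParser):
    """Toy rule: role="button" on a non-interactive element (e.g. a <div>)
    also needs tabindex to be reachable by keyboard."""
    NATIVELY_FOCUSABLE = {"button", "a", "input", "select", "textarea"}

    def __init__(self) -> None:
        super().__init__()
        self.violations: list[str] = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if a.get("role") == "button" and tag not in self.NATIVELY_FOCUSABLE and "tabindex" not in a:
            self.violations.append(f'<{tag} role="button"> is not keyboard-focusable (no tabindex)')

def check_role_button(html: str) -> list[str]:
    checker = RoleButtonChecker()
    checker.feed(html)
    return checker.violations

print(check_role_button('<div role="button">Save</div>'))          # one violation
print(check_role_button('<div role="button" tabindex="0">Save</div>'))  # no violations
```

Even with tabindex="0" the element is only *focusable*; without JavaScript key handlers it still does nothing on Enter/Space, which is why a native `<button>` is almost always the better choice.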

Trick question 3

"Do accessibility tests significantly slow down CI: is it reasonable to run them only once a month?"

Answer: No, such tests should be run with every change; otherwise, regressions related to accessibility will not be detected in time, making their correction more difficult (and costly).

Typical mistakes and anti-patterns

  • Limiting the effort to static analysis, with no manual checks or interviews with users with disabilities.
  • A formal approach: just passing the scanner, ignoring true accessibility for real users.
  • Running a11y tests only locally, outside of CI/CD and pull requests.

Real-life example

Negative case

The team ran Lighthouse once, checked off the item on their checklist, and considered the job done. They found and fixed several errors, but it later turned out that in the real banking application blind users could not complete a card application: important messages were not announced, and buttons were "invisible" to screen readers.

Pros:

  • Quickly implemented automation.

Cons:

  • Issues for real users surfaced only after complaints, which cost the product user trust.
  • Fixes turned out to be expensive, as the interface had to be reworked.

Positive case

From the very beginning, a11y checks were integrated into the pipeline and the project's working agreements, regular manual checks were conducted with assistive technologies, and external experts were interviewed. As a result, blind clients found the bank's web interface convenient to use.

Pros:

  • Fewer regressions and urgent fixes.
  • Positive feedback from users and increased brand reputation.

Cons:

  • Additional time was required for planning a11y work.
  • Manual checks increased the load on QA.