Automated accessibility testing (a11y testing) has grown in importance alongside digital-inclusion initiatives. Early testing was entirely manual, which often meant critical flaws were missed or discovered late. The modern approach automates checks with specialized tools and integrates a11y verification into CI/CD.
History of the issue: The first accessibility checks were fully manual, which was labor-intensive and prone to human error. With the emergence of standards (WCAG, Section 508), tools like axe, pa11y, and Lighthouse appeared and automated a significant part of the process.
Problem: The main challenge is that automation cannot cover all aspects of accessibility (e.g., whether alternatives for complex graphical content are appropriate, or whether texts read well in a screen reader). Common difficulties also include custom widgets, asynchronously rendered interfaces, and deciding where a11y checks belong in the testing pipeline.
Solution:
Combine automation of standard checks (contrast, aria-* attributes, tabindex, document structure, labels) with manual validation of critical business flows involving accessibility specialists. A good practice is to run a11y scanners on pull requests and key releases so that accessibility debt does not accumulate.
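To make the "contrast" item concrete, here is a minimal sketch of the check that tools like axe and Lighthouse perform automatically, implemented from the WCAG 2.1 definitions of relative luminance and contrast ratio. The function names are illustrative, not part of any particular tool's API.

```python
def srgb_to_linear(c8: int) -> float:
    """Convert one 8-bit sRGB channel to linear light (WCAG 2.1 formula)."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    """Relative luminance L = 0.2126 R + 0.7152 G + 0.0722 B."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """(L1 + 0.05) / (L2 + 0.05), where L1 is the lighter color."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_wcag_aa(fg: tuple, bg: tuple, large_text: bool = False) -> bool:
    """WCAG 2.1 level AA: 4.5:1 for normal text, 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```

Black text on a white background yields the maximum ratio of 21:1; this is exactly the kind of deterministic, machine-checkable rule that belongs in the automated half of the strategy.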
Trick question 1: "Is it enough to use only automated scanners to ensure full accessibility?"
Answer: No, automated tools cover only about 30-50% of accessibility requirements. The rest can only be found through manual testing and sessions with real assistive technologies (screen readers, keyboard-only navigation).
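What the automated 30-50% looks like in practice can be shown with a toy static scanner, assuming Python's stdlib `html.parser` (real tools such as axe apply hundreds of similar rules against the live DOM). Note what it *cannot* do: it can flag a missing `alt`, but not judge whether an existing `alt` text is meaningful.

```python
from html.parser import HTMLParser

class A11yScanner(HTMLParser):
    """Flags a few machine-detectable issues: <img> without alt,
    and form inputs with no way to get an accessible name."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "img" and "alt" not in a:
            self.issues.append("img missing alt attribute")
        if tag == "input" and a.get("type") not in ("hidden", "submit", "button"):
            # Without aria-label/aria-labelledby or an id (for <label for=...>),
            # assistive technology has no name to announce for this field.
            if not (a.get("aria-label") or a.get("aria-labelledby") or a.get("id")):
                self.issues.append("input without accessible name")

def scan(html: str) -> list:
    s = A11yScanner()
    s.feed(html)
    return s.issues
```

Rules like these run in milliseconds and never tire; judging whether the alt text actually describes the image remains a human task.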
Trick question 2: "If I only add role="button" or similar attributes, will the element be accessible?"
Answer: Not always. The element also needs proper focus management (e.g., tabindex), keyboard support (activation via Enter/Space), correct event handling, and an informative accessible name for screen readers.
Trick question 3: "Accessibility tests slow down CI significantly: is it reasonable to run them only once a month?"
Answer: No, such tests should run on every change; otherwise accessibility regressions go undetected, and fixing them later becomes harder and more expensive.
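A per-change setup can be as small as the following GitHub Actions sketch running pa11y-ci against a local build. Treat it as an illustration only: the job name, port, and build commands are placeholders for whatever the project actually uses.

```yaml
# Hypothetical CI job: a11y scan on every pull request.
# Build commands, port, and output directory are illustrative.
name: a11y
on: [pull_request]
jobs:
  accessibility:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run build
      - run: npx serve -l 3000 dist &   # serve the built app in the background
      - run: npx pa11y-ci http://localhost:3000
```

Because the scan fails the pull request, an accessibility regression is caught while the change that introduced it is still fresh in the author's mind.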
Anti-pattern: The team ran Lighthouse once, checked the box on the checklist, and considered the job done. They found and fixed several errors, but it later turned out that in the real banking application blind users could not complete a card application: important messages were not announced, and buttons were "invisible" to screen readers.
Good practice: From the very beginning, a11y checkers were integrated into the pipeline and project regulations, regular manual checks with assistive technologies were conducted, and external accessibility experts were consulted. As a result, blind clients found the bank's web interface convenient to use.