Manual Testing (IT) / Manual QA Engineer

Given a **React**-based single-page application featuring real-time data updates and dynamic modal dialogs, what systematic manual testing approach would you employ to validate **WCAG** 2.1 Level AA compliance while ensuring assistive technologies correctly announce content changes without disrupting user cognitive flow?


Answer to the question

Accessibility testing evolved from checking static HTML pages to addressing complex JavaScript-driven applications. Early web accessibility focused on semantic markup and alternative text for images. Modern single-page applications (SPAs) introduced challenges where content updates dynamically without page reloads, making it difficult for screen readers to detect changes.

The core problem involves ARIA live regions and focus management in dynamic interfaces. When real-time data streams update the DOM, screen readers like NVDA or JAWS may fail to announce critical changes, or worse, interrupt users with non-essential updates. Modal dialogs compound this by trapping focus improperly or failing to return focus to the triggering element upon closure, violating WCAG 2.1 Success Criteria 2.1.2 (No Keyboard Trap) and 2.4.3 (Focus Order).

Implement a systematic manual testing protocol combining keyboard navigation testing, screen reader validation, and cognitive flow analysis. First, verify all interactive elements are reachable via Tab key navigation without mouse dependency. Second, test with actual screen readers to validate that live regions use appropriate politeness settings (aria-live="polite" vs "assertive"). Third, document focus order using browser developer tools to ensure logical sequence matches visual layout.
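The focus-order step above can be made reproducible by recording the actual Tab sequence (e.g. with a DevTools snippet that logs `document.activeElement` on each focus change) and comparing it against the expected visual order. A minimal sketch of that comparison follows; the element IDs are hypothetical examples, not taken from any real application.

```javascript
// Minimal sketch: compare a recorded Tab-focus sequence against the
// expected visual order. Element IDs are invented for illustration.
function focusOrderMatches(expected, recorded) {
  if (expected.length !== recorded.length) return false;
  return expected.every((id, i) => id === recorded[i]);
}

const expectedOrder = ["search", "price-table", "trade-button", "settings"];
const recordedOrder = ["search", "price-table", "settings", "trade-button"];

// A mismatch is documented as a WCAG 2.4.3 (Focus Order) defect.
console.log(focusOrderMatches(expectedOrder, recordedOrder)); // false
```

Capturing the recorded order as data, rather than as anecdotal notes, gives developers an exact reproduction path for each focus-order defect.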

Situation from life

I was tasked with testing a financial trading dashboard built with React that displayed real-time cryptocurrency price updates and allowed users to execute trades through modal dialogs. The application targeted professional traders who relied on screen readers due to visual impairments, requiring immediate notification of price alerts while maintaining workflow continuity. The stakes were high, as missed alerts could result in significant financial losses for users.

During initial testing, we discovered that price drop alerts were not announced to screen reader users, causing them to miss critical trading opportunities. Additionally, when opening trade confirmation modals, focus remained on background elements, allowing users to accidentally trigger trades while navigating with Tab keys. The modal close button also failed to return focus to the trigger element, disorienting users who had to restart navigation from the page top.

We considered using automated accessibility scanners like axe DevTools and Lighthouse to catch violations quickly. These tools efficiently identified missing alt attributes and insufficient color contrast ratios. However, they completely missed the timing issues with live region announcements and the focus management problems specific to the modal's React Portal implementation. Static analysis cannot verify whether a screen reader actually announces content at the correct moment or if focus traps work with real assistive technology.

The second approach involved pure manual testing with NVDA on Windows and VoiceOver on macOS without structured test cases. While this caught the specific focus trapping issues, it was inconsistent and time-consuming. Different testers reported conflicting results based on their screen reader proficiency levels. This method also failed to establish reproducible steps for developers to fix the issues, as anecdotal observations varied between testing sessions.

We implemented a hybrid methodology combining structured test charters with targeted assistive technology validation. I created detailed test cases specifically for "Screen Reader Compatibility" using NVDA with Firefox and VoiceOver with Safari as primary combinations. Each test case included specific steps for verifying live region politeness levels, documenting the exact Tab navigation sequence through modals, and recording announcement behaviors using screen reader speech viewers. This approach balanced thoroughness with reproducibility.

We selected the hybrid structured approach because it provided developers with concrete, reproducible defect reports including specific ARIA property misconfigurations. This methodology eliminated the inconsistencies of ad-hoc testing while catching issues that automated scanners missed. The structured format also enabled knowledge transfer to junior QA engineers who were unfamiliar with accessibility testing.

This approach identified that the development team had implemented aria-live="assertive" for all price updates, causing constant interruptions. We recommended keeping "assertive" only for critical alerts and downgrading routine price updates to "polite". For modals, we implemented focus trapping using the react-focus-lock library and ensured focus returned to trigger elements. Post-fix validation showed 100% of tested screen reader users could successfully complete trading workflows without missing alerts or losing navigation context.

What candidates often miss

How do you verify that focus management works correctly when a modal dialog closes in a single-page application?

Many candidates suggest simply checking that the modal disappears visually. The correct approach requires understanding WCAG 2.1 Success Criterion 2.4.3 (Focus Order). You must verify that when the modal closes via Escape key or close button, focus returns to the element that originally opened the modal, not the top of the DOM. Test this by opening the modal, closing it, then pressing Tab once to verify focus moves to the logical next element after the trigger. Additionally, during modal visibility, Tab navigation must cycle only within modal elements (focus trapping) to prevent accidental background interactions.
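The trapping behavior described above reduces to simple wrap-around logic over the modal's list of focusable elements: Tab on the last element wraps to the first, Shift+Tab on the first wraps to the last. The pure function below is an illustrative sketch of that cycling rule, not a production focus trap (a real one, such as react-focus-lock, also handles DOM mutations and the focus-return step).

```javascript
// Sketch of focus-trap cycling: given the index of the currently focused
// element within a modal's focusable elements, compute where Tab or
// Shift+Tab should move focus, wrapping at both ends.
function nextTrappedIndex(current, count, shiftKey) {
  if (count === 0) return -1; // nothing focusable inside the modal
  return shiftKey
    ? (current - 1 + count) % count // Shift+Tab: backwards with wrap
    : (current + 1) % count;        // Tab: forwards with wrap
}

// With 3 focusable elements (e.g. amount input, Confirm, Close):
console.log(nextTrappedIndex(2, 3, false)); // 0 — Tab wraps to the first
console.log(nextTrappedIndex(0, 3, true));  // 2 — Shift+Tab wraps to the last
```

On close, the trap's other obligation is focus return: store a reference to the triggering element before the modal opens and call `.focus()` on it after the modal unmounts, which is exactly what the Tab-once verification step checks.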

What is the difference between polite and assertive live regions, and how do you test their behavior with screen readers?

Candidates often confuse these ARIA attributes or suggest they function identically. aria-live="polite" queues announcements until the screen reader finishes current speech, suitable for non-critical updates like auto-save confirmations. aria-live="assertive" immediately interrupts the user, reserved for critical errors like transaction failures. To test, use actual screen readers (NVDA, JAWS, or VoiceOver) rather than browser tools, creating scenarios where both region types update while the screen reader speaks other content. Many testers miss that aria-atomic and aria-relevant properties further control announcement behavior when only portions of a live region change.
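The queue-versus-interrupt distinction can be modeled in a few lines, which is useful when explaining expected behavior to developers. The class below is a toy model of announcement scheduling, an assumption-laden simplification of how real screen readers behave, with the announcement texts invented for the example.

```javascript
// Toy model of screen reader announcement behavior:
// "polite" updates queue behind current speech; "assertive" updates
// interrupt it immediately. Real screen readers vary in the details.
class AnnouncementModel {
  constructor() {
    this.speaking = null; // text currently being spoken
    this.queue = [];      // pending polite announcements
  }
  announce(text, politeness) {
    if (politeness === "assertive" || this.speaking === null) {
      this.speaking = text;  // interrupt (or start) immediately
    } else {
      this.queue.push(text); // polite: wait for current speech to finish
    }
  }
  finishCurrent() {
    this.speaking = this.queue.shift() ?? null;
  }
}

const sr = new AnnouncementModel();
sr.announce("Reading portfolio summary", "polite");
sr.announce("BTC down 0.1%", "polite");   // queued, does not interrupt
sr.announce("Order failed", "assertive"); // interrupts immediately
console.log(sr.speaking); // "Order failed"
```

Manual testing then verifies that the real screen reader's speech viewer shows the same ordering the model predicts for each politeness level.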

How do you handle accessibility testing for routing changes in frameworks like React Router without full page reloads?

Most junior testers check for visual URL changes but miss that screen readers rely on page title updates and focus shifts to announce navigation. Since SPAs don't trigger traditional page load events, assistive technologies may not inform users they've navigated to a new view. The solution requires verifying that route changes programmatically update the document.title and move focus to an H1 heading or main landmark via JavaScript. Test by navigating routes with a screen reader active and confirming it announces the new page title or heading content. Candidates frequently overlook testing the browser's back button behavior with screen readers in SPAs, where focus history must be maintained to prevent users from getting lost in the navigation stack.
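One way to make that route-change behavior testable is to separate the decision (what title to set, where to move focus) from the DOM side effects. The sketch below assumes this split; the route paths, titles, and selector are hypothetical examples, not a real router API.

```javascript
// Sketch: on an SPA route change, compute the document title and the
// element that should receive focus so a screen reader announces the
// new view. Routes and titles here are invented for illustration.
function routeChangeActions(route) {
  const titles = {
    "/portfolio": "Portfolio | Trading Dashboard",
    "/alerts": "Price Alerts | Trading Dashboard",
  };
  return {
    title: titles[route] ?? "Trading Dashboard",
    focusSelector: "main h1", // move focus to the view's H1 heading
  };
}

// In a browser, a router effect would then apply the actions:
//   document.title = actions.title;
//   document.querySelector(actions.focusSelector)?.focus();
// (The H1 needs tabindex="-1" to be programmatically focusable.)
const actions = routeChangeActions("/alerts");
console.log(actions.title); // "Price Alerts | Trading Dashboard"
```

During manual testing, the verification step is hearing the screen reader announce the new title or heading immediately after each navigation, including navigations triggered by the browser's back button.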