QA Engineer (Manual Testing)

How to improve the efficiency of manual testing when working with non-functional requirements (such as usability or accessibility), and what tools are available for testers?


Answer.

Non-functional testing evaluates a system not only against its business functionality but also against parameters such as usability, performance, security, adaptability, and accessibility.

Background: In the early days of testing, the focus was solely on "works/does not work". However, as competition and product quality demands increased, attention shifted towards accompanying parameters such as usability, speed, and accessibility for people with disabilities. This influenced the development of non-functional testing.

Problem: Testers often do not know how to formalize and evaluate non-functional parameters manually. Subjectivity arises: what is convenient for one user is inconvenient for another. The lack of clear checklists and criteria only exacerbates the situation.

Solution: The tester should:

  • Use standards and recommendations such as WCAG for accessibility or ISO 9241 (ergonomics of human-system interaction) for usability.
  • Apply specialized tools (for example, a color contrast analyzer to check the contrast between text and background, a screen-reader simulator to test accessibility).
  • Develop checklists to assess user experience, navigation, readability of elements, etc.
  • Involve real users with diverse experiences for user-testing.
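The contrast check mentioned above is not subjective: WCAG 2.x defines an exact formula for the contrast ratio between two colors, and level AA requires at least 4.5:1 for normal-size text. A minimal sketch of that calculation in Python (the function names are illustrative, not from any particular tool):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 integers."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, from 1:1 (identical) to 21:1 (black on white)."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background gives the maximum ratio.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # → 21.0

# Gray #777777 on white narrowly fails WCAG AA for normal text (needs >= 4.5:1),
# while the slightly darker #767676 passes -- exactly the kind of borderline
# defect a dedicated contrast analyzer catches and the eye misses.
print(contrast_ratio((119, 119, 119), (255, 255, 255)) >= 4.5)
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)
```

Dedicated tools such as Color Contrast Analyzer apply this same formula; the value of knowing it is that the tester can cite a concrete number and threshold in a bug report instead of "the text looks pale".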

Key features:

  • Working with "live user scenarios" rather than only prepared test cases.
  • The necessity to document identified non-functional issues as precisely as possible.
  • The ability to use external tools for analysis (e.g., Lighthouse, Axe, NVDA, JAWS, Color Contrast Analyzer).
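Scanners like Axe and Lighthouse bundle many rules, but each individual rule is usually simple. As a toy illustration (not the implementation of any of those tools), here is one of the most basic checks, images missing `alt` text (WCAG 1.1.1), using only the Python standard library:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute entirely.
    Note: an empty alt="" is valid markup for decorative images,
    so only a truly absent attribute is flagged."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.violations.append(attrs.get("src", "<no src>"))

def find_images_without_alt(html):
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.violations

page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
print(find_images_without_alt(page))  # → ['chart.png']
```

Checks of this kind are cheap to automate, which is precisely why they cover only a fraction of real accessibility problems: whether the alt text is *meaningful*, or whether the page is usable by keyboard alone, still requires manual evaluation.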

Tricky Questions.

Can you do without manual usability testing if automated tests are used?

No. User experience is highly subjective, and many aspects can only be identified through manual analysis or by consulting real users.

Is it sufficient to check accessibility using only automated scanners?

No. Automated checks generally identify only 20-30% of issues. The rest can only be found through manual interaction: keyboard navigation tests, screen reader checks, etc.

Is accessibility testing necessary if there are no clients with disabilities?

Yes. Legislation, quality standards, and future product development prospects require high accessibility. Additionally, some users may have temporary impairments (e.g., injuries).

Common Mistakes and Anti-Patterns

  • Neglecting non-functional requirements due to underestimating their impact.
  • Documenting results without reference to standards.
  • Describing bugs with phrases like "uncomfortable" or "inconvenient" without specifics.

Real-life Example

Negative Case

The tester did not pay attention to the low contrast of a button label: users with color vision deficiencies could not read the text.

Pros:

  • Time savings on testing.

Cons:

  • Users complained, support inquiries increased, and the product suffered reputational damage.

Positive Case

The tester used a free tool to check contrast and created an accessibility checklist.

Pros:

  • Early detection of accessibility defects before release.
  • Increased user loyalty.

Cons:

  • Extended testing cycle.
  • Need to study additional standards and tools.