How to implement automatic localization (i18n) checks for the interface and comments: history of the issue, problems, and solutions?

Answer.

The first wave of test automation largely ignored internationalization (i18n) and localization checks, since the primary markets focused on English-language interfaces. As applications globalized, however, quality requirements rose: the interface must render correctly in every supported language, and text resources and formatted strings must load correctly for the selected locale.

The main problem is that manual checks are very resource-intensive, while automated tests are complicated by variability in format, context, and language specifics (e.g., right-to-left scripts or grammatical features such as cases and plural forms). Typical defects include missing translations, formatting errors, and broken layouts.

Solutions include:

  • Maintaining test data for each supported locale.
  • Snapshot tests and comparison of UI elements against benchmarks.
  • Key-value validation utilities for resource files.
  • Automated extraction and comparison of strings via APIs.
  • Regular linting of resource files.
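As an illustration, a minimal key-parity check for key-value resource files might look like the sketch below. It assumes flat JSON resources (real projects often use nested JSON, .po, or .properties files), and the file names are hypothetical:

```python
import json
from pathlib import Path

def missing_keys(base_file: str, locale_file: str) -> set[str]:
    """Return keys present in the base resource file but absent from a locale file."""
    base = json.loads(Path(base_file).read_text(encoding="utf-8"))
    locale = json.loads(Path(locale_file).read_text(encoding="utf-8"))
    return set(base) - set(locale)

# Example: missing_keys("en.json", "de.json") lists every untranslated key.
```

Running this for every locale in CI catches missing translations long before they reach the UI.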

Key features:

  • Checking for the presence and correct display of all supported languages.
  • Comparing reference translations with the current display in the interface.
  • Validators for text length and formatting to avoid layout violations.
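A length validator along these lines can be a simple ratio heuristic. The 1.5× threshold below is an assumed default, not a standard; German or Finnish translations routinely run much longer than English and are frequent layout breakers:

```python
def overlong_translations(base: dict[str, str], translated: dict[str, str],
                          max_ratio: float = 1.5) -> list[str]:
    """Flag keys whose translation exceeds max_ratio times the base string's
    length -- a cheap heuristic for strings likely to overflow the layout."""
    return [key for key, text in translated.items()
            if key in base and len(text) > max_ratio * len(base[key])]

# Example: "OK" (2 chars) vs. German "Bestätigen" (10 chars) gets flagged.
```

Flagged keys are not necessarily bugs, but they are good candidates for a manual screenshot review.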

Trick Questions.

Is it possible to create a universal test that validates any locale with a single script?

Partially. Language nuances (grammatical cases, gender, text direction) often require manual adjustments or extra conditions in such tests; 100% universality is not achievable.
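A small sketch of why one script only goes part of the way: even a generic per-locale loop needs language-specific branches, text direction being the most obvious. The locale set and helper names below are illustrative:

```python
RTL_LOCALES = {"ar", "he", "fa"}  # illustrative subset, not exhaustive

def expected_direction(locale: str) -> str:
    """Map a locale tag like 'ar-EG' to its expected text direction."""
    return "rtl" if locale.split("-")[0] in RTL_LOCALES else "ltr"

def check_locale(locale: str, rendered_direction: str) -> bool:
    # A "universal" check still carries per-language conditions:
    # direction here, plural rules and case forms in a fuller version.
    return rendered_direction == expected_direction(locale)
```

Each added nuance (plural categories, gender agreement) adds another branch, which is why fully generic single-script validation stays out of reach.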

If the translation file exists and has been successfully loaded, does that mean the i18n test has passed?

No. The file may be wired up incorrectly on the application side, a key may be wrong, a translation may be used in the wrong context, special characters may be left unescaped, and so on.
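One cheap guard against wrong keys is scanning rendered text for strings that look like raw resource keys leaking through a failed lookup. The dotted-key convention below is an assumption about the project's naming scheme:

```python
import re

# Matches identifiers like "checkout.button.pay" (assumed key convention).
KEY_PATTERN = re.compile(r"^[a-z0-9_]+(?:\.[a-z0-9_]+)+$")

def looks_like_raw_key(text: str) -> bool:
    """Heuristic: a rendered string that is just a dotted identifier probably
    means the translation lookup failed and the key was shown as a fallback."""
    return bool(KEY_PATTERN.match(text.strip()))
```

Running this over all visible strings in a UI dump catches the "file loaded, key broken" failure mode the question describes.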

Is it worth automating localization testing for languages with <1% of users?

Yes, if the business criticality of even one user is high, for example, when fulfilling contractual obligations or for markets with special requirements. Automation significantly saves resources compared to manual checks.

Typical Mistakes and Anti-Patterns

  • Checking only that the file exists rather than how the text actually renders in the UI.
  • Strict string comparison without considering the grammar and format specifics of the language.
  • Blindly copying tests from one locale to others without adaptation.
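Instead of strict string comparison, a format-aware check can verify only the invariants that must survive translation, such as placeholders. This sketch assumes `{name}`-style placeholders; gettext-style `%s` formats would need a different pattern:

```python
import re

PLACEHOLDER = re.compile(r"\{[a-zA-Z_][a-zA-Z0-9_]*\}")  # {name}-style tokens

def placeholder_mismatch(source: str, translation: str) -> set[str]:
    """Return placeholders present in one string but not the other. Word order
    and grammar may legitimately differ; the placeholder set must not."""
    return set(PLACEHOLDER.findall(source)) ^ set(PLACEHOLDER.findall(translation))
```

An empty result means the translation is format-compatible even when the surrounding words differ completely.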

Real-life Example

Negative Case

The team implemented automated tests that compared keys in the .po file against the original English text and considered that sufficient. They wrote no UI tests, and when the Arabic version shipped, text overflowed its buttons and some strings were left untranslated because of incorrect keys.

Pros:

  • Fast implementation of automation for i18n.

Cons:

  • Low level of coverage for real user scenarios.
  • Significant UX errors remained unnoticed.

Positive Case

The team combined resource-file linting with automated tests that cycled the interface through all languages, took screenshots, and compared them with benchmark layouts. The tests caught mixed RTL/LTR elements, and the root cause was fixed before release.

Pros:

  • Broad coverage of real user scenarios under realistic conditions.
  • Easy maintenance when adding new languages.

Cons:

  • High cost of maintaining the benchmark database.
  • Periodic manual checks of complex formatting cases are required.