Manual Testing (IT): Manual QA Engineer

What is manual integration testing between systems? What typical problems can arise and how to solve them?


Answer.

Manual integration testing is the process of verifying the interaction between different modules, services, or external systems manually, without automated scripts.

Background:

Early IT products were built as monoliths, but as companies grew and the number of external services increased, integration testing became essential. Testers faced the question: how do you ensure that data and actions flow correctly between systems, for example, that a successful payment is reflected in both the billing and the accounting system.

Problem:

The biggest challenge is the lack of a fully functional environment: integrations may depend on third-party services, unstable APIs, or external limitations. In addition, manually testing every integration junction is labor-intensive, so it is easy to make a mistake in the sequence of steps or overlook important cascading effects.

Solution:

  • Use test environments with test doubles (mocks/stubs) to make tests repeatable.
  • Structure test cases and write step-by-step checks for messages, logs, and statuses.
  • Prioritize edge cases: timeouts, retries of integration calls, and the system's reaction to failures.
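
To make the mock/stub idea concrete, here is a minimal sketch of a hand-rolled stub for an external billing service that a tester could run locally. All names (`BillingStub`, `charge`, the mode values) are illustrative assumptions, not a real SDK:

```python
# Illustrative stub for an external billing API. The tester flips the
# "mode" between manual runs to force each integration scenario on demand.

class BillingStub:
    """Stands in for the real billing service during a manual test session."""

    def __init__(self, mode="ok"):
        self.mode = mode  # "ok", "invalid_token", or "timeout"

    def charge(self, order_id, amount):
        if self.mode == "timeout":
            # Simulates the partner system not answering at all.
            raise TimeoutError(f"billing did not answer for order {order_id}")
        if self.mode == "invalid_token":
            # Simulates an authorization failure on the partner side.
            return {"status": 401, "error": "token rejected"}
        return {"status": 200, "order_id": order_id, "charged": amount}

# Happy path and a forced failure, side by side:
ok = BillingStub(mode="ok").charge("A-42", 99.90)
rejected = BillingStub(mode="invalid_token").charge("A-42", 99.90)
```

Because the stub is deterministic, the same scenario can be replayed exactly, which is what makes manual integration checks repeatable.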

Key features:

  • The need to understand the business logic of both integrated parties.
  • Accounting for asynchronicity and state-transfer errors.
  • Support for resilience to partial failures.
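
"Resilience to partial failures" can be sketched as follows: when one downstream system is down, the others should still receive the data, and the failure should be recorded rather than aborting the whole flow. The system names and the `propagate` helper are hypothetical:

```python
# Sketch of a fan-out step that tolerates partial failures: each downstream
# delivery is attempted independently and the outcome is recorded per target.

def propagate(order, targets):
    """Deliver an order to each downstream system, collecting per-target
    outcomes instead of stopping at the first failure."""
    results = {}
    for name, send in targets.items():
        try:
            results[name] = send(order)
        except ConnectionError as exc:
            results[name] = f"failed: {exc}"
    return results

def broken(order):
    # Simulates an unreachable accounting system.
    raise ConnectionError("accounting is down")

outcome = propagate(
    {"id": "A-42"},
    {"billing": lambda o: "accepted", "accounting": broken},
)
```

A tester verifying this behavior would confirm that `billing` still received the order and that the `accounting` failure is visible in the results (and, in a real system, in the logs) for later reconciliation.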

Trick Questions.

What are test doubles and why are they needed in manual integration testing?

Test doubles are imitations of integration components (e.g., mock, stub, fake). In manual testing, they are needed to execute scenarios when the actual external system is unavailable or its calls cost money.
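
The difference between the double types can be shown in a few lines. This is a generic illustration with invented names (`CurrencyStub`, `CurrencyFake`), not any particular library's API:

```python
# A stub returns a canned answer with no logic; a fake is a working but
# simplified implementation (here, an in-memory table instead of a live feed).

class CurrencyStub:
    """Stub: always the same hard-coded rate, regardless of input."""
    def rate(self, pair):
        return 1.10

class CurrencyFake:
    """Fake: real lookup logic, but over local data instead of a remote service."""
    def __init__(self, table):
        self.table = table
    def rate(self, pair):
        return self.table[pair]

stub = CurrencyStub()
fake = CurrencyFake({"EUR/USD": 1.08, "GBP/USD": 1.27})
```

For manual testing, a stub is enough when you only need the integration to "answer something"; a fake is useful when the scenario depends on the answer varying with the input.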

Can integration be considered tested if the test cases only covered the happy path?

No. It is essential to test edge cases: connection errors, incorrect data formats, timeouts, unexpected responses.

Is it enough to check only data sending/receiving, or is there something else to consider?

It is important to verify the content of the data, its transformation between formats, and the system's behavior under various errors at the junction.
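
As an illustration of checking content and transformation rather than mere delivery, here is a hypothetical CRM-to-billing field mapping with the checks a tester would perform at the junction. The field names and the `to_billing` function are assumptions for the sketch:

```python
# Sketch: verifying that CRM data survives its transformation into the
# billing format. Field names here are illustrative, not a real schema.

def to_billing(crm_order):
    """The mapping under test: CRM record -> billing record."""
    return {
        "ref": crm_order["order_id"],
        "amount_cents": round(crm_order["total"] * 100),
        "currency": crm_order.get("currency", "USD"),
    }

crm = {"order_id": "A-42", "total": 19.99}
billed = to_billing(crm)

# Content checks, not just "a message arrived":
assert billed["ref"] == crm["order_id"]     # identifier preserved
assert billed["amount_cents"] == 1999       # no rounding loss in the conversion
assert billed["currency"] == "USD"          # default applied correctly
```

Checks like these catch the classic junction bugs: lost identifiers, currency-unit mismatches, and silently dropped optional fields.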

Common Mistakes and Anti-Patterns

  • Working only with the "own" part of the system without checking behavior on the partner side.
  • Ignoring negative scenarios.
  • Not analyzing logs or not keeping a history of integration errors.

Real-Life Example

Negative Case

A tester checks the integration between the CRM and the billing system only for successful order creation. They do not check synchronization errors or skipped transactions.

Pros:

  • Rapid coverage of basic scenarios.

Cons:

  • A failure in the integration will only surface in production, with real data.

Positive Case

A tester builds a set of tests that toggles the network connection and substitutes invalid tokens, and validates the logs on both sides.

Pros:

  • Critical errors were discovered before going live.
  • Time saved on maintenance.

Cons:

  • More labor-intensive to prepare the environment and scenarios.