Business Analysis / Business Analyst

How would you establish end-to-end requirements traceability when the **Jira** backlog contains 1,500+ obsolete user stories referencing deprecated **SOAP** services, the newly deployed **microservices** architecture lacks formal business requirements documentation, and an external **PCI DSS** compliance audit requires proof that all payment processing workflows map to current security controls within ten business days?


Answer to the question

You must employ a risk-based triage approach that prioritizes critical payment paths over comprehensive coverage. Combine automated code discovery with targeted Subject Matter Expert (SME) validation to reconstruct the traceability matrix rapidly. Focus on demonstrating functional equivalence between legacy SOAP operations and current REST/gRPC endpoints rather than perfecting historical documentation.

Leverage production APM (Application Performance Monitoring) logs to identify which code paths actually execute payment transactions, then reverse-engineer requirements from Git history and architectural decision records (ADRs). This creates a "just-in-time" traceability layer that satisfies auditors while acknowledging the technical debt reality.
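The discovery step above can be sketched as a small script. This is a minimal illustration, not any vendor's API: it assumes a hypothetical JSON-lines export of APM access logs (real New Relic or Datadog exports have different shapes) and simply ranks payment endpoints by how often they actually execute in production.

```python
import json
from collections import Counter

# Hypothetical JSON-lines APM export; real New Relic/Datadog exports differ.
SAMPLE_LOG = """\
{"endpoint": "/payment/v2/authorize", "txn_id": "T1"}
{"endpoint": "/payment/v2/capture", "txn_id": "T2"}
{"endpoint": "/payment/v2/authorize", "txn_id": "T3"}
{"endpoint": "/catalog/v1/search", "txn_id": "T4"}
"""

def executed_payment_paths(log_text, prefix="/payment/"):
    """Count how often each payment endpoint actually executes in production."""
    counts = Counter()
    for line in log_text.splitlines():
        record = json.loads(line)
        if record["endpoint"].startswith(prefix):
            counts[record["endpoint"]] += 1
    # Most-hit paths first: these anchor the just-in-time traceability effort.
    return counts.most_common()

print(executed_payment_paths(SAMPLE_LOG))
```

The ranked output tells you which endpoints to reverse-engineer first; anything with zero production traffic can be deprioritized regardless of how much legacy documentation references it.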

A real-world situation

A mid-sized fintech company completed a chaotic six-month migration from a monolithic Java EE application to Kubernetes-hosted microservices. The development team prioritized feature parity over documentation, leaving the Jira instance cluttered with 1,500 legacy stories describing SOAP-based payment workflows that no longer exist. The new system uses REST APIs, but business requirements were never formally rewritten.

The problem: A PCI DSS Level 1 audit was scheduled in ten days, requiring evidence that every payment requirement (authorization, capture, refund, void) traces to current security controls and code implementations. The auditors specifically needed to see that PAN (Primary Account Number) handling requirements mapped to encryption implementations in the new architecture. Manual reconciliation would require 300+ hours, but the team had only 80 hours available.

Solution 1: Comprehensive manual reconciliation

Hire external contractors to read every legacy story, interview the three remaining developers, and manually map old requirements to new microservices.

Pros: High accuracy for business context; creates perfect audit trail; captures tribal knowledge about edge cases.

Cons: Impossible within the ten-day window; developers are fully allocated to production support; costs exceed $50,000 in emergency contracting fees.

Solution 2: Purely automated documentation generation

Use SonarQube, Swagger/OpenAPI specs, and static analysis tools to generate traceability matrices directly from the Java source code and YAML configuration files.

Pros: Executes in hours rather than days; captures the actual current state of the system; generates consistent, well-formatted technical documentation automatically.

Cons: Misses the "why" behind requirements; cannot prove business intent to auditors; ignores manual workarounds or feature flags that alter payment flow logic; produces false positives on deprecated code paths still in the repository.

Solution 3: Risk-based targeted reconstruction with automated support

Identify the 50 critical payment workflows via Splunk production logs (focusing on the 20% of flows handling 80% of transaction volume). Use Git commit message analysis and Slack channel exports to reconstruct decision rationale. Validate mappings in intensive 2-hour workshops with senior developers.

Pros: Achievable within ten days; focuses strictly on audit scope (payment processing); balances automation speed with human validation; costs less than $5,000 in internal resource time.

Cons: Leaves edge cases undocumented; requires developers to context-switch during critical sprint weeks; assumes commit messages are descriptive (they weren't, requiring additional detective work).
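The commit-message mining in Solution 3 can be sketched as follows. The commit messages and ticket prefixes here are invented for illustration; in practice you would feed in the output of `git log --format="%s%n%b"` and adapt the key pattern to your Jira project keys.

```python
import re

# Hypothetical commit messages; in practice feed `git log --format="%s%n%b"` output.
COMMITS = [
    "PAY-1243: port SOAP AuthorizePayment to payment-service/v2/authorize",
    "fix flaky test",
    "PAY-1250 refund flow moved to refund-service (see PAY-1243)",
]

# Matches Jira-style issue keys such as PAY-1243.
JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def ticket_index(commit_messages):
    """Map each legacy Jira key to the commits that reference it."""
    index = {}
    for msg in commit_messages:
        for key in JIRA_KEY.findall(msg):
            index.setdefault(key, []).append(msg)
    return index

idx = ticket_index(COMMITS)
```

Commits with no ticket reference (the "fix flaky test" case) fall out of the index entirely; those are exactly the spots where the detective work mentioned above begins.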

Chosen solution and rationale:

We selected Solution 3. The audit scope specifically targeted payment card data flows, not the entire application portfolio. By querying Splunk for transaction IDs touching the payment service mesh, we isolated exactly 47 distinct business workflows. We used GitLens to trace these code blocks back to their originating pull requests, extracting business logic from PR descriptions and linked Zendesk tickets.

The team created a "Traceability Bridge" document mapping legacy Jira IDs (e.g., PAY-1243) to new microservice endpoints (e.g., payment-service/v2/authorize) with explicit annotations where functionality diverged. We conducted three 4-hour workshops with the Tech Lead and Security Architect to validate the mappings, recording these sessions as audit evidence of due diligence.
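A "Traceability Bridge" row can be modeled as structured data rather than free text, which makes it exportable as audit evidence. The entries below are illustrative, using the PAY-1243 example from the case; field names are assumptions, not a standard schema.

```python
import csv
import io
from dataclasses import dataclass, asdict, fields

@dataclass
class BridgeEntry:
    legacy_id: str   # original Jira story ID, kept for audit continuity
    endpoint: str    # current microservice endpoint
    diverged: bool   # does behaviour differ from the legacy SOAP operation?
    note: str        # explicit annotation for the auditors

# Illustrative rows; only PAY-1243 comes from the case, the rest is invented.
ENTRIES = [
    BridgeEntry("PAY-1243", "payment-service/v2/authorize", False, "1:1 port"),
    BridgeEntry("PAY-1250", "refund-service/v1/refund", True,
                "partial refunds now allowed; legacy flow was full-refund only"),
]

def to_csv(entries):
    """Export the bridge as CSV so it can be attached to the audit package."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=[f.name for f in fields(BridgeEntry)])
    writer.writeheader()
    for entry in entries:
        writer.writerow(asdict(entry))
    return buf.getvalue()
```

Keeping the legacy ID as the primary key is the point: auditors can start from the old requirement they already know and walk forward to the current implementation.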

The result:

The audit passed with zero findings related to requirements traceability. The auditors accepted the "Bridge Document" as sufficient evidence of control mapping because we demonstrated 100% coverage of PAN-touching workflows. Post-audit, the company implemented Behaviour-Driven Development (BDD) using Cucumber specifications to prevent future documentation drift, ensuring new microservices would have living documentation from inception.

What candidates often miss


How do you prove that a requirement derived from Git commit messages represents legitimate business intent rather than a developer's temporary workaround?

Treat commit messages and pull request discussions as "primary source artifacts" rather than absolute truth. Cross-reference with production APM data (e.g., New Relic or Datadog) to verify the code path is actually executed for real business transactions, not just test scenarios. Interview the original author if available, or use Git "blame" history to find the original ticket reference that triggered the change.

Document confidence levels (High/Medium/Low) for each derived requirement directly in your traceability matrix. For PCI DSS purposes, explicitly flag any requirement with less than "High" confidence and supplement it with runtime monitoring evidence showing the control works as intended, even if the documentation trail is imperfect.
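The confidence-flagging idea can be made mechanical. A minimal sketch, assuming a three-level scale and invented requirement IDs: anything below "High" confidence is pulled into a worklist for supplemental runtime-monitoring evidence.

```python
from dataclasses import dataclass

@dataclass
class DerivedRequirement:
    req_id: str
    source: str       # where the requirement was reconstructed from
    confidence: str   # "High", "Medium", or "Low"

# Illustrative entries with hypothetical sources.
REQS = [
    DerivedRequirement("PAY-1243", "PR description + author interview", "High"),
    DerivedRequirement("PAY-1250", "commit message only", "Low"),
]

def needs_runtime_evidence(reqs):
    """Flag anything below High confidence for supplemental monitoring evidence."""
    return [r.req_id for r in reqs if r.confidence != "High"]
```

Running this over the full matrix before the audit gives you the exact list of requirements that must ship with compensating runtime evidence rather than documentation alone.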


When legacy Jira stories reference SOAP operations that were decomposed into three separate microservices, how do you maintain traceability without creating a 1:Many relationship that auditors reject as too complex?

Implement a "requirement decomposition" layer in your traceability matrix using a parent-child hierarchy. Promote the legacy requirement to a "Business Epic" (maintaining the original ID for audit continuity), and map the three microservices as "Technical Stories" that collectively fulfill that epic. Use a tool like Jira Advanced Roadmaps or a simple Excel matrix with indentation to visualize this relationship.

Document the decomposition rationale in an Architectural Decision Record (ADR) stored in Confluence or Git, explaining why the monolithic operation was split (e.g., "separation of concerns for PCI scope reduction"). Auditors accept 1:Many relationships if you demonstrate end-to-end testing coverage using Postman collections or Karate integration tests that prove the three services collectively satisfy the original business requirement.
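The parent-child hierarchy described above can be kept in any structure that renders with indentation. A minimal sketch with invented story IDs (the epic ID follows the case's PAY-1243 example):

```python
# Legacy SOAP operation promoted to a Business Epic; the microservices that
# replaced it hang off it as Technical Stories (hypothetical IDs).
HIERARCHY = {
    "PAY-1243 (Business Epic: AuthorizePayment)": [
        "AUTH-12 payment-service/v2/authorize",
        "AUTH-13 fraud-service/v1/score",
        "AUTH-14 ledger-service/v1/hold",
    ],
}

def render_matrix(hierarchy):
    """Indented text view: the 1:Many split stays legible for auditors."""
    lines = []
    for epic, stories in hierarchy.items():
        lines.append(epic)
        lines.extend(f"    {story}" for story in stories)
    return "\n".join(lines)

print(render_matrix(HIERARCHY))
```

Whether this lives in Jira Advanced Roadmaps or a spreadsheet matters less than the invariant it encodes: the legacy ID survives as the parent, so audit continuity is never broken by the decomposition.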


How do you handle the discovery that a current microservice violates PCI DSS Requirement 3.4 (rendering PAN unreadable anywhere it is stored) during your traceability reconstruction, with only five days until the audit begins?

Immediately trigger a formal "compliance exception" process using your ServiceNow or Jira Service Management incident workflow. Document the gap as a "Known Non-Compliance" with a specific remediation timeline (e.g., "Fix scheduled for Sprint 23, completion date 30 days post-audit"). For the audit itself, present the finding proactively—never hide it—along with compensating controls.

Demonstrate AWS VPC Flow Logs or Azure NSG logs proving network segmentation prevents unauthorized access to the unmasked data. Implement an emergency tokenization fix using HashiCorp Vault or AWS KMS, deployed behind a feature flag to avoid regression. Show auditors that your traceability process itself identified the gap, proving your governance controls are effective. This turns a potential failure into evidence of a mature discovery process.
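The shape of an emergency tokenization fix behind a feature flag can be sketched as below. This is illustrative only: a real implementation delegates token storage to HashiCorp Vault or AWS KMS with encryption at rest, never an in-memory dict, and the flag would come from a feature-flag service rather than a function argument.

```python
import secrets

# Stand-in for Vault/KMS-backed token storage; real systems encrypt at rest.
TOKEN_VAULT = {}

def tokenize_pan(pan, flag_enabled):
    """Replace a PAN with an opaque token when the emergency flag is on.

    With the flag off, the legacy path is preserved for safe rollback.
    """
    if not flag_enabled:
        return pan  # legacy behaviour, kept to avoid regression
    token = "tok_" + secrets.token_hex(8)
    TOKEN_VAULT[token] = pan  # sketch only; never store raw PANs in practice
    return token

masked = tokenize_pan("4111111111111111", flag_enabled=True)
```

The flag is what makes this deployable in five days: the new path can be verified in production on a slice of traffic, then ramped up, without a big-bang cutover before the audit.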