
2.1 Before You Test: Diagnose, Don’t Flail

Most interesting work starts with a metric that moved, not with a shiny A/B idea.
When something changes, your job is to understand the system before you randomize it.
  1. Clarify the metric
      • What exactly is it measuring? At what grain? Over what window?
      • Why do we care? How does it map to user value or revenue, not vanity?
  2. Look at time properly
      • Is this a one‑day spike, a weekly pattern, or a genuine trend break?
      • Does it snap back after holidays, campaigns, or outages?
  3. Check related metrics
      • If conversion dropped, did traffic, latency, or crash rates move?
      • If the North Star is flat but a driver metric moved, is that expected?
  4. Segment aggressively
      • By cohort, geo, device, acquisition channel, tenure.
      • If the story is the same in every slice, it is probably real. If it only "works" in one cherry‑picked segment, be suspicious.
  5. Decompose the metric
      • Clicks = CTR × visits; revenue = order frequency × basket size; and so on.
      • You want a mechanical understanding: what would have to change for this number to move this much?
  6. Write the one‑paragraph narrative
      • "Here is what changed, for whom, and the 2–3 plausible root causes." If you cannot do this, you are not ready to propose experiments.
This diagnostic muscle is what separates Staff from mid‑level. You do not reach for A/B as a reflex. You earn the right to experiment by first understanding the system.
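The decomposition step above can be made mechanical with a quick back‑of‑the‑envelope check. Here is a minimal sketch; the factor names and numbers are hypothetical, chosen to mirror the kind of funnel discussed later:

```python
# Sketch: decompose a product-of-factors metric to see which factor
# would have to move to explain the observed change.
# All factor names and numbers are hypothetical illustrations.

def decompose(before: dict, after: dict) -> dict:
    """Relative change of each factor, plus the overall product."""
    changes = {k: after[k] / before[k] - 1 for k in before}
    prod_before, prod_after = 1.0, 1.0
    for k in before:
        prod_before *= before[k]
        prod_after *= after[k]
    changes["overall"] = prod_after / prod_before - 1
    return changes

# checkout conversion = add_to_cart × checkout_start × payment_success
before = {"add_to_cart": 0.30, "checkout_start": 0.60, "payment_success": 0.92}
after  = {"add_to_cart": 0.30, "checkout_start": 0.60, "payment_success": 0.88}

for factor, delta in decompose(before, after).items():
    print(f"{factor:>16}: {delta:+.1%}")
```

The point is not the arithmetic but the habit: if no single factor's plausible range can account for the move, your decomposition is wrong or the data is.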

Case study: Sudden drop in conversion on a food delivery app

A weekly review shows a 4% drop in checkout conversion. The instinctive reaction is "we shipped a pricing test last week, roll it back." Instead, you:
  • Break the metric down into add‑to‑cart → checkout start → payment success.
  • See that add‑to‑cart and checkout start are flat, but payment success has tanked on Android only.
  • Segment further and discover it is heavily skewed to one country and one payment method.
  • Tie the timing to a 3rd‑party payments SDK update, not your pricing experiment.
Outcome: you avoid killing a promising pricing change, coordinate a hotfix for the payments SDK, and restore conversion. No A/B test required—just disciplined diagnosis.
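The funnel‑by‑segment breakdown in this case study is a one‑liner once the event data is in a frame. A minimal sketch, assuming hypothetical column names and toy data:

```python
import pandas as pd

# Sketch of the funnel-by-platform breakdown from the case study.
# Column names and the toy rows are hypothetical; real data would come
# from an events table, with country and payment method as extra keys.
events = pd.DataFrame({
    "platform":        ["android", "android", "android", "ios", "ios", "ios"],
    "added_to_cart":   [1, 1, 1, 1, 1, 1],
    "checkout_start":  [1, 1, 1, 1, 1, 1],
    "payment_success": [0, 0, 1, 1, 1, 1],
})

# Per-stage rates by platform: early stages are flat, and the drop is
# localized to payment_success on Android.
stages = ["added_to_cart", "checkout_start", "payment_success"]
funnel = events.groupby("platform")[stages].mean()
print(funnel)
```

Adding `country` and `payment_method` to the `groupby` keys is exactly the "segment further" step that isolates the SDK issue.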

How to talk through this in an interview

  • Mid‑level answer: Walk through the funnel decomposition and segmentation, and show how you traced the issue to payment failures instead of blaming the experiment.
  • Senior answer: Add how you institutionalized this: standard funnel breakdowns in monitoring, alerting on payment success by platform, and a runbook for "metric moved" investigations.
  • Staff answer: Emphasize how you changed behavior across product and ops: you made post‑launch diagnostics a default, reduced knee‑jerk rollbacks, and helped reframe experiments as one possible cause among many in a complex system.
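The "alerting on payment success by platform" idea in the senior answer need not be elaborate. A sketch of the core check; the baseline values and the 15% relative‑drop threshold are hypothetical, and in practice the baseline would come from a trailing window:

```python
# Sketch of a scheduled payment-success check by platform.
# BASELINE values and the threshold are hypothetical illustrations;
# a real system would compute the baseline from a trailing window.

BASELINE = {"android": 0.92, "ios": 0.94}  # e.g. trailing 28-day rates
REL_DROP_ALERT = 0.15                      # flag a 15%+ relative drop

def check(today: dict) -> list:
    """Return alert messages for platforms below the drop threshold."""
    alerts = []
    for platform, rate in today.items():
        baseline = BASELINE[platform]
        if rate < baseline * (1 - REL_DROP_ALERT):
            alerts.append(
                f"{platform}: payment success {rate:.0%} vs baseline {baseline:.0%}"
            )
    return alerts

print(check({"android": 0.70, "ios": 0.93}))
```

The institutional win is not the code; it is that the check runs by default, so "metric moved" investigations start from evidence rather than from the most recently shipped experiment.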