
Understanding Report Metrics in Convert

Mastering Convert Reports: Read Numbers with Clarity & Confidence


Performance Metrics

  • Visitors – Unique people bucketed into each variation.
    Example: 5 000 visitors on Variation A means 5 000 different individuals saw that version.

  • Conversions – Total goal completions.
    Example: 100 “Buy Now” clicks from 1 000 visitors = 100 conversions.

  • Conversion Rate – Conversions ÷ Visitors.
    Example: 100 / 1 000 = 10 % conversion rate.

  • Total Conversions – If multiple conversions per visitor are allowed, this is the aggregate count.
    Example: 50 visitors × 2 purchases each = 100 total conversions.

  • Revenue (when enabled) – Sum of all transaction amounts tracked for the variation.

  • Revenue per Visitor (RPV) – Revenue ÷ Visitors.
    Example: $5 000 / 1 000 visitors = $5 RPV.

  • Improvement – Percentage lift or loss vs. the baseline.
    Example: Control 10 % → Var A 12 % ⇒ +20 % improvement (not +2 %).
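The performance metrics above are simple ratios, and the improvement pitfall (relative vs. absolute lift) is easy to see in code. This is an illustrative sketch using the article's example numbers; the function names are ours, not Convert's API.

```python
# Sketch of the report metrics above, using the article's example numbers.
# Function names are illustrative, not part of Convert's product or API.

def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversions ÷ Visitors."""
    return conversions / visitors

def revenue_per_visitor(revenue: float, visitors: int) -> float:
    """Revenue ÷ Visitors (RPV)."""
    return revenue / visitors

def improvement(variation_rate: float, baseline_rate: float) -> float:
    """Relative lift vs. the baseline, as a percentage."""
    return (variation_rate - baseline_rate) / baseline_rate * 100

print(conversion_rate(100, 1000))        # 100 / 1 000 → 10 % conversion rate
print(revenue_per_visitor(5000, 1000))   # $5 000 / 1 000 visitors → $5 RPV
print(improvement(0.12, 0.10))           # relative lift: +20 %, not +2 %
```

Note that `improvement` divides by the baseline rate, which is why a 2-percentage-point gain over a 10 % control reads as a +20 % improvement.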

Statistical Confidence Indicators

Frequentist

  • Confidence – How sure you can be that the observed difference is real rather than random noise. Wait for 95 %+ before deciding.

  • Statistical Significance – Flag showing whether confidence ≥ your preset threshold.

  • P-value – Probability of seeing the data if the variants were identical. < 0.05 is conventionally “significant”.

  • Confidence Interval – The range that, at your chosen confidence level, is likely to contain the true lift.
    Example: +15 % [+10 %, +20 %].
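The frequentist numbers above can be sketched with a standard two-proportion z-test (a normal approximation, stdlib only). This is illustrative textbook math, not Convert's internal statistics engine, and the example counts are made up.

```python
# Minimal sketch of p-value, confidence, and a 95 % confidence interval on the
# absolute lift, via a two-proportion z-test. Not Convert's internal engine.
import math

def two_proportion_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled standard error under H0 (no difference), for the z statistic.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se_pool = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se_pool
    # Two-sided p-value from the standard normal tail.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    # Unpooled standard error for the 95 % CI on the absolute lift.
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    ci = (p_b - p_a - 1.96 * se, p_b - p_a + 1.96 * se)
    return p_value, ci

# Hypothetical data: 10 % vs. 14 % conversion on 1 000 visitors each.
p_value, ci = two_proportion_test(100, 1000, 140, 1000)
print(f"p-value: {p_value:.4f}, confidence: {1 - p_value:.1%}")
print(f"95 % CI on absolute lift: [{ci[0]:+.3f}, {ci[1]:+.3f}]")
```

Because the whole confidence interval here sits above zero, the result would be flagged as significant at the 0.05 threshold.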

Bayesian

  • Chance to Win – Probability a variation is best. Most teams ship at 95 %+.

  • Expected Loss – Average % conversion you might forfeit if you pick this variant and it is not actually the best.

  • Credible Interval – Bayesian counterpart of the confidence interval; it can be read directly as “there is a 95 % probability the true lift lies in this range.”
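Chance to Win and Expected Loss can both be estimated by Monte Carlo sampling from each variation's posterior. The sketch below assumes uniform Beta(1, 1) priors and made-up counts; it illustrates the idea, not Convert's exact model.

```python
# Sketch of the Bayesian metrics above via Monte Carlo on Beta posteriors.
# Assumes uniform Beta(1, 1) priors; illustrative, not Convert's exact model.
import random

def bayesian_summary(conv_a, n_a, conv_b, n_b, draws=100_000, seed=42):
    rng = random.Random(seed)
    wins_b, loss_b = 0, 0.0
    for _ in range(draws):
        # One posterior draw of each variation's true conversion rate.
        rate_a = rng.betavariate(conv_a + 1, n_a - conv_a + 1)
        rate_b = rng.betavariate(conv_b + 1, n_b - conv_b + 1)
        if rate_b > rate_a:
            wins_b += 1
        else:
            # Conversion rate forfeited if we ship B but A was actually better.
            loss_b += rate_a - rate_b
    return wins_b / draws, loss_b / draws  # Chance to Win, Expected Loss

# Hypothetical data: 10 % vs. 14 % conversion on 1 000 visitors each.
chance, loss = bayesian_summary(100, 1000, 140, 1000)
print(f"Chance to Win (B): {chance:.1%}, Expected Loss: {loss:.4%}")
```

With a clear 4-point gap on 1 000 visitors per arm, B's Chance to Win comes out well above the 95 % shipping threshold and its Expected Loss is near zero.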

Test Progress Indicators

  • Sample Size – Visitors collected vs. required.
    Example: 5 000 / 10 000 visitors (50 %).

  • Statistical Power – Probability the test will detect a true effect. Aim for ≥ 80 %.

  • Minimum Detectable Effect (MDE) – Smallest lift your current traffic can reliably spot.

  • Test Duration – Elapsed runtime; keep every test live for at least one full business cycle (7–14 days for most sites).
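Sample size, power, and MDE are linked by one standard back-of-envelope formula. The sketch below uses the common two-sided α = 0.05 / 80 % power approximation for a two-proportion test; it is a planning estimate, not Convert's sample-size calculator.

```python
# Back-of-envelope sample size for the progress indicators above.
# Standard approximation (two-sided alpha = 0.05, 80 % power), not Convert's planner.
import math

Z_ALPHA = 1.96  # z quantile for 95 % two-sided confidence
Z_BETA = 0.84   # z quantile for 80 % power

def required_visitors_per_variation(baseline_rate, mde_relative):
    """Visitors needed per variation to detect a given relative lift (the MDE)."""
    delta = baseline_rate * mde_relative               # absolute lift to detect
    variance = 2 * baseline_rate * (1 - baseline_rate) # two-arm binomial variance
    return math.ceil((Z_ALPHA + Z_BETA) ** 2 * variance / delta ** 2)

# E.g. a 10 % baseline conversion rate and a 20 % relative MDE:
print(required_visitors_per_variation(0.10, 0.20))
```

The same formula explains why MDE shrinks as traffic grows: with a fixed baseline rate, halving the detectable lift roughly quadruples the required visitors.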

Warning Icons

  • ⚠️ Low Sample Size – Fewer than ~5 000 visitors per variation. Let the test run.

  • ⚠️ Not Yet Significant – Results still within the margin of error. Collect more data.

  • ✅ Test Complete – All sample, power, and confidence criteria satisfied; safe to implement the winner.

Choosing Your Statistical Method

  • Fixed-Horizon Frequentist – Best for classic A/B tests with a preset sample size. Key metrics: Confidence, p-value, power. Note: don’t peek early.

  • Sequential Testing – Best for very high-traffic sites that need fast calls. Key metrics: always-valid confidence, sequential bounds. Note: you can look at any time without inflating α.

  • Bayesian – Best for most users. Key metrics: Chance to Win, Expected Loss, Credible Interval. Note: intuitive and resistant to early noise.

Quick-Reference Checklist

  1. Sample Size – ≥ 5 000 visitors per variation

  2. Confidence / Chance to Win – ≥ 95 %

  3. Test Duration – ≥ 7–14 days

  4. Statistical Significance – Yes (Frequentist only)
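The checklist above amounts to a simple all-or-nothing gate, which the sketch below makes explicit. The thresholds come straight from this article; the function itself is illustrative, not a Convert feature.

```python
# The quick-reference checklist above as a single gate function.
# Thresholds are taken from this article; illustrative, not a Convert feature.

def ready_to_call(visitors_per_variation, confidence, days_running, significant=True):
    """True only when every stopping criterion in the checklist is met."""
    return bool(
        visitors_per_variation >= 5000   # 1. sample size per variation
        and confidence >= 0.95           # 2. confidence / Chance to Win
        and days_running >= 7            # 3. at least one business cycle
        and significant                  # 4. significance flag (frequentist only)
    )

print(ready_to_call(5000, 0.96, 10))  # all criteria met
print(ready_to_call(1200, 0.97, 3))   # early "significant" result: keep waiting
```

The second call is exactly the red-flag scenario below: high confidence on thin traffic after only three days still fails the gate.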

Red flags

  • Wild swings or flip-flops in the first days

  • Lift claims > 50 % on tiny traffic

  • “Significant” results with < 1 000 visitors

  • Tests shorter than one complete business cycle

 

Patience pays. Resist the urge to act on early excitement; let the data mature before you call the winner.