Process metrics (tests run, defects logged) are easy to collect. Outcome metrics (did users actually get a good experience?) are harder to measure but more meaningful.
Outcome metrics:
1. Production incidents.
- Frequency by severity.
- Trend over time.
- MTTR (mean time to recovery); the arithmetic is in the sketch after this list.
2. Customer-reported defects.
- A defect that reaches the user is a defect QA missed.
- A direct indicator of defect leakage.
3. NPS / CSAT.
- Customer satisfaction with product quality.
- Gathered via surveys and feedback channels; the NPS formula is in the sketch after this list.
4. Adoption metrics.
- Are users engaged and retained?
- Poor quality suppresses adoption.
5. Business impact.
- Revenue / efficiency gain attributable to quality.
6. Reputation.
- Reviews, social media sentiment.
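MTTR and NPS both reduce to simple arithmetic once the raw data is exported. A minimal Python sketch, assuming a hypothetical Incident record (map your incident tracker's export onto it) and raw 0-10 survey scores; the names are illustrative, not any real tool's schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Incident:
    severity: str       # e.g. "sev1", "sev2" (hypothetical labels)
    opened: datetime
    resolved: datetime

def mttr_by_severity(incidents: list[Incident]) -> dict[str, timedelta]:
    """Mean time to recovery, grouped by severity."""
    buckets: dict[str, list[timedelta]] = {}
    for inc in incidents:
        buckets.setdefault(inc.severity, []).append(inc.resolved - inc.opened)
    return {
        sev: sum(durations, timedelta()) / len(durations)
        for sev, durations in buckets.items()
    }

def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

incidents = [
    Incident("sev1", datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 13)),   # 4h to recover
    Incident("sev1", datetime(2024, 2, 3, 10), datetime(2024, 2, 3, 12)),  # 2h to recover
]
print(mttr_by_severity(incidents))  # {'sev1': timedelta of 3 hours}
print(nps([10, 9, 8, 6, 3]))        # 2 promoters, 2 detractors of 5 -> 0.0
```

Trend these per release; a single snapshot tells you little.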
Process metrics (still useful):
- Test coverage.
- Pass rate.
- Defect leakage (defects found in production as a % of all defects found; see the sketch after this list).
- Cycle time to fix.
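Leakage and pass rate are plain ratios; the only real decision is the denominator. A minimal sketch (function names are illustrative):

```python
def defect_leakage(production_defects: int, pre_release_defects: int) -> float:
    """Share of all known defects that escaped to production, as a percentage."""
    total = production_defects + pre_release_defects
    return 100 * production_defects / total if total else 0.0

def pass_rate(passed: int, executed: int) -> float:
    """Share of executed tests that passed, as a percentage."""
    return 100 * passed / executed if executed else 0.0

print(defect_leakage(12, 188))  # 6.0 -> 6% of all defects escaped
print(pass_rate(940, 1000))     # 94.0
```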
Combining:
- Process metrics tell you HOW the work was done.
- Outcome metrics tell you WHETHER it worked.
DORA metrics, adapted for QA:
- Deployment frequency.
- Lead time for changes.
- Change failure rate (computed in the sketch after this list).
- Mean time to recovery.
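Change failure rate and deployment frequency fall out of the same deployment log. A sketch assuming a hypothetical Deployment record, where caused_failure means the deploy triggered a rollback, hotfix, or incident:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Deployment:
    day: date
    caused_failure: bool  # rollback, hotfix, or incident followed this deploy

def change_failure_rate(deploys: list[Deployment]) -> float:
    """% of deployments that led to a failure in production."""
    if not deploys:
        return 0.0
    return 100 * sum(1 for d in deploys if d.caused_failure) / len(deploys)

def deployment_frequency(deploys: list[Deployment], window_days: int) -> float:
    """Average deployments per day over the reporting window."""
    return len(deploys) / window_days if window_days else 0.0

deploys = [
    Deployment(date(2024, 3, 1), False),
    Deployment(date(2024, 3, 2), True),
    Deployment(date(2024, 3, 4), False),
    Deployment(date(2024, 3, 5), False),
]
print(change_failure_rate(deploys))      # 25.0 (1 of 4 deploys failed)
print(deployment_frequency(deploys, 7))  # ~0.57 deploys per day
```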
Common pitfalls:
- Process metrics only: the process looks healthy while outcomes stay bad.
- Vanity metrics: counts that carry no meaning.
- No baseline: without one, you can't demonstrate change.
Senior QA insight: outcome metrics matter more than process metrics; process exists to serve outcomes.
The senior framing: measure what users experience, not what QA does.
