UAT (User Acceptance Testing) is the phase where actual end-users test the system to confirm it meets their needs before go-live. Unlike unit and integration tests, which developers run, UAT is human-centred.
Process:
- UAT plan — define which scenarios will be tested, by whom, on what data, against what success criteria.
- UAT environment — typically a Full Sandbox loaded with realistic data, deployed with the build that's planned for production.
- UAT execution — users walk through scenarios, log defects, suggest improvements.
- Defect triage — separate genuine bugs (don't go live) from "wishes" (feedback for next iteration).
- Sign-off — business stakeholders formally accept the build for production.
Common pitfalls:
- UAT done by the wrong people. Testers should be real end-users, not project team members; the project team is too close to the system to test it objectively.
- No formal UAT plan — users test ad-hoc, miss critical paths.
- UAT environment not production-realistic — bugs that manifest only with real data volume / sharing get missed.
- Defects piling up without triage — hold a short daily triage standup to keep the queue moving.
- No clear sign-off mechanism — "are we done?" lingers.
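The triage step — splitting genuine bugs that gate go-live from "wishes" that go to the next iteration — can be shown as a minimal sketch. The defect IDs, severities, and descriptions are invented for illustration; the rule that only enhancement-severity items count as wishes is an assumption, since real triage is a human judgment call.

```python
from collections import defaultdict

# Hypothetical defect log from UAT execution: (id, severity, description).
defects = [
    ("D-1", "blocker", "Save fails on record with more than 10 line items"),
    ("D-2", "enhancement", "Add keyboard shortcut for quick-create"),
    ("D-3", "major", "Sharing rule hides records from managers"),
]

def triage(defects):
    """Split genuine bugs (block go-live) from wishes (next-iteration backlog)."""
    buckets = defaultdict(list)
    for defect_id, severity, _description in defects:
        bucket = "wish" if severity == "enhancement" else "bug"
        buckets[bucket].append(defect_id)
    return dict(buckets)

result = triage(defects)
go_live_blocked = bool(result.get("bug"))  # any open bug holds up sign-off
```

Keeping the wishes instead of discarding them matters: they become the feedback loop into the next iteration mentioned in the continuous-UAT variation.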
Duration: 2-4 weeks for a typical project.
Senior consultants treat UAT as the most important sign-off: technical correctness ≠ business correctness. A perfectly-built solution that doesn't fit the actual workflow fails UAT, and rightfully so.
Modern variation: UAT can be continuous — running in parallel with development sprints, with user feedback loops. More expensive but reduces the late-stage UAT crunch.
