Here is the math no engineering leader wants to do out loud. A product that takes 18 months to build and fails in market costs your engineering team 18 months — plus the opportunity cost of everything else that team could have shipped, plus the market credibility loss from a failed launch. A validation sprint that surfaces the same insight in 60 days costs a fraction of that. The difference between the two outcomes is not intelligence or effort. It is sequencing.
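The asymmetry can be made concrete with a back-of-the-envelope model. All figures below (team size, fully loaded monthly cost per engineer, sprint staffing) are illustrative assumptions for the sketch, not data from any real engagement:

```python
# Back-of-the-envelope comparison: build-first failure vs. validate-first.
# Every input here is an illustrative assumption, not a figure from the article.

def build_cost(months: float, team_size: int, monthly_cost_per_eng: float) -> float:
    """Direct engineering spend for a team working for a given number of months."""
    return months * team_size * monthly_cost_per_eng

TEAM, RATE = 8, 20_000  # assumed: 8 engineers at $20k/month fully loaded

# 18 months building the wrong product with the full team
wrong_product = build_cost(18, TEAM, RATE)

# ~12-week validation sprint with a smaller cross-functional team (assumed: 4 people)
validation_sprint = build_cost(3, 4, RATE)

print(f"Build-first failure cost: ${wrong_product:,.0f}")    # $2,880,000
print(f"Validation sprint cost:   ${validation_sprint:,.0f}")  # $240,000
print(f"Ratio: {wrong_product / validation_sprint:.0f}x")      # 12x
```

Even before counting opportunity cost and credibility loss, the direct-spend gap is an order of magnitude under these assumptions.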
Most product teams build first and validate later. When validation happens at all, it arrives as user testing on a nearly-finished product — which means feedback can only result in small adjustments, not the fundamental pivots the evidence often demands. The validation stack described here reverses that sequence: evidence before engineering, not evidence after.
## Why Products Get Built Without Validation
Three forces consistently push product teams toward building before validating — and none of them are irrational in isolation:
**Delivery pressure.** Roadmaps have ship dates. Quarterly OKRs say "ship feature X by end of Q3." Validation feels like a detour when the timeline is already tight. The team knows it should test assumptions, but the calendar wins. What gets cut is the work that has no deadline attached — and validation rarely has a deadline until after the product launches and fails.

**Confidence in internal conviction.** Product leaders often carry strong intuitions built on real customer experience. The danger is that intuition is pattern-matching — it works well in familiar contexts and poorly in new ones. A product leader who has shipped successfully in one market segment may be confidently wrong about a new one. Intuition compresses past data; it does not predict novel behavior.

**The prototype-as-commitment misunderstanding.** Many organizations treat a prototype as a commitment to build. They skip prototyping because they do not want to raise stakeholder expectations they cannot yet fulfill. The result: assumptions go untested until production code has been written and the cost of reversal has become prohibitive.
## The Validation Stack
The validation stack is a sequence of techniques, each designed to test a specific class of assumption — first the problem, then the solution concept, then the usability of the design — at the lowest possible cost. Each layer builds on evidence from the previous one. Skipping a layer does not save time; it moves untested assumptions downstream, where they are far more expensive to surface.
## What This Looks Like in Practice
For BBVA's travel banking feature, Redesign ran a compressed validation sprint before a production decision was made. The sprint covered structured interviews with travelers across three behavioral segments, concept testing of three distinct feature variants, prototype design and usability testing of the validated concept, and delivery of an engineering-ready brief. The sprint identified two critical assumptions that did not survive contact with real users — assumptions that, had they been built, would have required months of rework and a relaunch. The cost of surfacing those assumptions in validation: a fraction of what building them would have cost.
For Visa, product design work validated digital financial product experiences across multiple markets before significant engineering investment was committed. The validation cycle compressed discovery and reduced rework at the engineering phase — not by finding that ideas were bad, but by finding which specific design decisions made good ideas actually work for users in real conditions.
The pattern across both engagements is the same: validation did not slow the projects down. It accelerated them by removing the rework cycles that unvalidated assumptions generate downstream.
## The 60-Day Validation Sprint
Redesign's standard product validation sprint delivers a tested, engineering-ready prototype in 60 working days — twelve weeks. The sequence:
| Phase | Activity | Output |
|---|---|---|
| Weeks 1–2 | Problem validation interviews + synthesis | Prioritized problem statement with evidence |
| Weeks 3–5 | Concept design + concept validation testing | Ranked solution concepts with evidence |
| Weeks 6–10 | High-fidelity prototype + usability testing + iteration | Tested prototype with resolved usability issues |
| Weeks 11–12 | Engineering brief + prioritized design backlog | Build-ready specification |
The deliverable is a prototype that has survived contact with real users, and a design brief that engineering can build from without reinterpretation. Not a deck. Not a set of recommendations. A specification with evidence attached to every decision.
The question that determines whether a product team validates before building is not "do we have time?" It is "can we afford not to?" The math is consistent: roughly 60 working days of validation versus 6 to 18 months of building the wrong thing. Every product team says validation matters. The teams that actually do it are the ones that treat it as a sprint — with a scope, a timeline, and a clear deliverable — rather than as an open-ended preliminary that gets cut the moment delivery pressure rises.