This number has been true for cloud. It was true for digital transformation. It is now becoming true for AI — at a pace that should concern every board that has approved an AI transformation budget in the last 24 months.
The technology is not the problem. Large language models work. AI-powered automation delivers measurable efficiency in controlled conditions. The tools exist. The problem is the same one it has always been: organizations are buying technology solutions for design problems. And design problems, by definition, cannot be solved by technology alone.
What AI Transformation Projects Actually Produce
The pattern is consistent enough to be a template. A large language model is deployed. A chatbot is launched. An AI co-pilot is integrated into the product. Usage metrics are disappointing. Adoption is lower than projected. The AI works — technically — but it does not produce the behavioral change or operational improvement that justified the investment in the first place.
Why? Because the project was designed around the technology, not around the change in human behavior the technology needs to enable.
An AI chatbot is a design problem disguised as a technology problem. The technology determines what is possible. The design determines whether anyone actually changes how they work. A chatbot that technically answers questions correctly but that users distrust, cannot navigate, or do not know how to integrate into their workflow has a technology success rate of 100% and a transformation success rate of zero.
The technology was delivered. The transformation was not.
The Three Design Gaps in AI Transformation
Across AI transformation projects, the failure modes cluster into three categories. Each represents a design gap — a decision that was deferred to the technology deployment rather than answered before it began.
Design-First AI Transformation
A design-first approach inverts the sequence that most AI transformation programs follow. Instead of deploying technology and then managing adoption, it starts with the human behavior that needs to change and works backward to the technology that enables it.
- Start with the human behavior you want to change — not the technology you want to deploy. What does a person need to do differently for this investment to have delivered? That is the design target. Everything else is infrastructure.
- Map the current process at the friction-point level. Where are the decision bottlenecks? Where is human judgment most expensive? Where is inconsistency creating downstream cost? These are the design opportunities. They are not visible from a technology evaluation.
- Design the AI-assisted process explicitly: which steps does AI handle, which require human judgment, and how does the handoff between AI and human work? This is a design artifact — a process map with ownership, decision rules, and exception handling — not an engineering specification.
- Prototype and test before deploying at scale. A limited pilot that proves the behavior change happens before the full investment is committed. Not a technology pilot — a behavior change pilot. Did people actually work differently? Is the process measurably better?
- Redesign the organizational model around the new process: metrics, incentives, job definitions, training. The organizational design must reward the new behavior, or the old behavior will persist regardless of what the technology makes possible.
For Intel, Redesign designed the Enterprise AI sales enablement program — not just the technology enablement, but the sales conversation design, the buyer journey design, and the enablement program that helped Intel's salesforce use AI tools effectively in enterprise conversations. The technology was one component. The design of how salespeople would actually change how they sold was the program. Without it, the technology would have been adopted in name only.
What This Means for AI Transformation Budgets
Most AI transformation budgets allocate roughly 80% to technology and 20% to change management. That 20% is typically spent on training — teaching people to use the technology after it has already been deployed. Training people to use technology they were not involved in designing, for a process that was not redesigned around them, in an organization whose incentives have not changed. This is why the 92% number exists.
A design-first approach does not mean spending less on technology. It means spending less on rework when the technology is deployed and no one uses it. The design investment happens before deployment, not after. This shifts costs from expensive remediation — retraining, re-implementation, organizational resistance management — to cheaper prevention: designing the right thing before you build it.
The organizations that are getting value from AI are not the ones with the most sophisticated models or the largest AI budgets. They are the ones that treated AI as a design challenge before it was a technology challenge. The question is not "which AI tool should we buy?" It is "what human behavior do we need to change, and how does AI help us change it?" That question requires a designer, not just a software vendor.