

The New Friction: Why AI Transformation Stalls and What to Do About It

Two trucks break down in a port. They are thirty meters apart, on the same lane, carrying the same cargo. One port zone recovers from the disruption in seventy minutes. The other takes more than two hours. The zones share everything that matters: the same bridges, the same lane widths, the same weather, the same sixty-second mechanical fault. The only difference is coordination. In the slow-recovery zone, a single algorithm dispatches every vehicle. In the fast-recovery zone, that same algorithm shares infrastructure with a fleet of trucks driven by independent logistics companies, each operating under its own objectives.

That is the finding M. Dalbert Ma, a researcher at London Business School, reported to the BIG.AI@MIT conference last month, after studying approximately one year of operations at one of the world’s largest container terminals. The autonomous zones ran 3.8% more efficiently under normal conditions. A single sixty-second fault cost them a 12.2% delay on the operations that followed. Rain, which forces every vehicle to slow and creates a temporal buffer between sequential operations, erased the fragility entirely.

This is what most AI transformation stories leave out. The efficiency gain is real. So is the cost you pay when something disrupts it. Real AI change management is the work of carrying that cost forward without breaking the system.


The tradeoff that is not on the procurement spreadsheet

Every AI-first workflow your organization designs makes the same structural tradeoff the port made. When execution time collapses, coupling tightens. When coupling tightens, buffer disappears. The same mechanism that produces the efficiency also produces the fragility. This is not a failure of the technology. The automated guided vehicles (AGVs) in Ma’s study were operating at SAE Level 4 autonomy, the highest level in commercial deployment. They were not malfunctioning. The algorithm was not broken. What the study shows is that optimization pushed to its limit consumes the slack the system needs to absorb disruption.

The port is a clean case because you can measure it. The same pattern is operating inside every organization that has automated a contiguous block of knowledge work without thinking about what the friction was doing for them. When the fault arrives, and it always arrives, the organizations that over-optimized pay a tax the spreadsheet did not predict.

The name for what you are accumulating

JoAnna Vanderhoef, in a poster at the same conference, gave this tradeoff a name: Capability Debt. It is the growing gap between an organization’s apparent efficiency and its adaptive capacity. Capability Debt is subtle because it shows up as absence. Absence of novelty detection. Absence of the junior employee who stumbled into the strange request and learned how to triage it. Absence of the reviewer who noticed the model’s output was technically correct and strategically wrong. Absence of the senior whose judgment was trained on edge cases the automated pipeline now handles without them. You do not see the debt until you need to do something the system was not built for. By then, the people who would have done it have atrophied the capability, or have never built it at all.

This is the part of AI transformation that is easy to underweight in a board deck. Efficiency is legible. Judgment loss is not. It hides inside the year-over-year improvement metrics and inside the reduced headcount and inside the deliverables that ship faster and look clean until a situation arrives that needs taste, or context, or the ability to know what is not in the data. Capability Debt is the bill that comes later.

Where the debt accumulates fastest

A team of researchers at MIT, Yale, and Microsoft, led by Mert Demirer, formalized the mechanism. They call it AI chains. An AI chain is a sequence of production steps in which the automated steps are contiguous. The human at the end of the chain verifies only the final output. The verification cost is fixed, not proportional to chain length. So the economic incentive is to keep adding steps to the chain until the marginal failure probability overwhelms the saved verification cost.

Two consequences follow. First, the jobs that get automated fastest are the ones where AI-suitable work clusters together. Lecture preparation is one such job. Research, drafting, slide generation, and example synthesis are all AI-suitable, and they are sequential. A single verification at the end is sufficient. The chain collapses into one unit of human work. Tutoring is the opposite. AI-suitable steps are interleaved with diagnostic steps that require real-time human judgment. The chain cannot form. The human is on the hook for verification at every handoff.

The second consequence is more important. Jobs that form long AI chains are also the jobs where learning loops get shortest. The junior who used to do the research, draft the slides, and watch the senior edit them loses three apprenticeship cycles per deliverable. What was formerly a sequence of moments where skill formed now happens inside the model.

The researchers tested this empirically against O*NET task descriptions combined with data from Anthropic’s Economic Index, which tracks which tasks are actually being performed with AI at scale. The pattern held. AI execution concentrates in contiguous blocks within occupations. Occupations whose AI-exposed steps are more dispersed throughout the workflow show substantially lower AI execution.

The policy implication for leaders is quiet but significant. When your team maps its AI automation roadmap, the blocks you want to be careful about are the contiguous ones. They are where the efficiency gain is largest. They are also where the Capability Debt compounds the fastest.
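The verification economics can be made concrete with a toy model. To be clear about what this is: the constants below and the rule that a late-caught error forces reworking the whole chain are my illustrative assumptions, not figures or formulas from the study. The sketch only shows why a fixed verification cost favors longer contiguous chains until compounding failure risk overtakes the savings.

```python
# Toy model of AI-chain economics: one verification at the end of a
# k-step chain versus verifying after every step. All numbers are
# illustrative assumptions, not data from the Demirer et al. work.
V = 1.0    # cost of one human verification pass (fixed per pass)
P = 0.05   # chance any single automated step introduces an error
R = 1.5    # per-step rework cost when an error is only caught at the end

def stepwise_cost(k: int) -> float:
    """Verify after every step: k verification passes, errors caught
    immediately, so rework is negligible."""
    return k * V

def chain_cost(k: int) -> float:
    """One verification at the end of a k-step chain. An undetected
    error anywhere forces reworking the whole chain of k steps."""
    p_fail = 1 - (1 - P) ** k          # at least one step went wrong
    return V + p_fail * R * k          # fixed check + expected rework

if __name__ == "__main__":
    for k in (1, 2, 5, 10, 15, 20):
        c, s = chain_cost(k), stepwise_cost(k)
        verdict = "chain wins" if c < s else "chain loses"
        print(f"k={k:2d}  chain={c:6.2f}  stepwise={s:6.2f}  -> {verdict}")
```

With these assumed numbers the chain beats per-step verification from k=2 onward, because each added step saves a whole verification pass while adding only a sliver of failure probability, and only loses once the chain is long enough that expected rework swamps the savings. That is the incentive gradient the researchers describe: contiguous AI-suitable steps get swept into ever-longer chains.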

The design move that most organizations never make

Here is what separates the organizations that stall from the ones that scale. The stall pattern looks like this: adopt the tool, measure the productivity, celebrate the win, and then slowly discover that the team cannot do what the team used to do. The workflow ran. The outcome degraded. Nobody is quite sure when.

The scale pattern looks different. The scaling organizations are the ones that hold the line on what Renée Gosline, in a separate MIT study presented at the same conference, calls beneficial friction. Her team ran a controlled experiment. Participants worked on cognitive tasks with AI assistance. In the control condition, the AI made its recommendation and the participant accepted or rejected it. In the treatment condition, before accepting or rejecting, the participant was asked to articulate their own reasoning, or to predict what the AI’s reasoning was. That small intervention, which took thirty seconds, measurably reduced over-reliance on AI and preserved the participant’s critical thinking.

This is the design move most organizations skip. They treat friction as waste. They are correct that some friction is waste. They are wrong that all friction is waste. The friction that forces a human to articulate their own judgment before the AI’s output is anchored is the friction that carries the capability forward.

At the organizational level, beneficial friction looks like this. Decision-rights reviews before an AI pipeline goes into production, where the team has to name who owns the outcome the pipeline is producing. Novelty drills, where a percentage of the work that could be automated is routed to humans anyway, so the capability stays alive. Signal sampling, where humans regularly review a random sample of AI outputs not for QA but for drift. Shadow-session reviews, where someone who has not been in the pipeline’s daily operation comes in and asks whether the pipeline is still doing the right thing.

None of these are productivity moves. All of them are capability moves. The point of beneficial friction is not to make the system slower. The point is to keep the system teachable.
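A novelty drill can be as simple as a routing rule placed in front of the automation. The sketch below is hypothetical, not a prescribed implementation: the 10% quota, the task-id hashing, and the function name are all assumptions. It shows one design choice worth noting, which is routing by a hash of the task id rather than a random coin flip, so the drill is reproducible and auditable after the fact.

```python
# Minimal sketch of a "novelty drill" router: a fixed fraction of
# automatable work is deliberately sent to humans so the capability
# stays exercised. Quota and scheme are illustrative assumptions.
import hashlib

def route(task_id: str, human_quota: float = 0.10) -> str:
    """Deterministically route roughly human_quota of tasks to a human.

    Hashing the task id (instead of rolling a die per task) means the
    same task always lands in the same queue, which makes the drill
    reproducible and easy to audit.
    """
    digest = hashlib.sha256(task_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64   # uniform in [0, 1)
    return "human" if bucket < human_quota else "ai"

# Usage: over many tasks, about 10% land in the human queue.
queues = [route(f"ticket-{i}") for i in range(1000)]
```

The same idea extends to the other practices: signal sampling is the identical router pointed at AI outputs instead of inputs, with the human reviewing for drift rather than doing the work.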


Why AI change management is a leadership problem

The organizations that are navigating this well understand something the organizations that are stalling do not. The new friction is not a technology problem. It is a leadership problem. When execution was expensive, leadership’s job was to clear the path. Remove the blocker. Approve the budget. Unstick the review cycle. That job is largely done. The organizations that still do it well at the leadership level are optimizing a bottleneck that is mostly already gone.

The new job is different. When execution is cheap and judgment is scarce, leadership’s job is to carry the organization’s judgment capacity forward. That means designing the decisions that matter, surfacing the dissent that would otherwise stay hidden, ensuring that the people who will need the skill later are getting the practice now. This is facilitation work. Not facilitation in the narrow sense of running meetings well, although that is part of it. Facilitation in the broader sense of helping groups think together, decide together, and build the shared judgment that a single expert, however capable, cannot hold alone.

The organizations that treat AI change management as a tool rollout are solving for the wrong variable. The tool is the easy part. The hard part is building the organizational muscle that keeps judgment distributed across the people who will need to exercise it when the situation changes. And situations always change. The port example makes this visceral. The efficiency advantage held until the sixty-second fault. Then the organization that had preserved coordination independence recovered faster, because it had not consumed the slack the recovery required. Your organization is running the same experiment right now. You will not know the outcome until the fault arrives.

What to do about it

The organizations working the new friction well share three habits.

They take Capability Debt seriously as an accounting category. Not formally on the balance sheet, but in the same way a good engineering team takes technical debt seriously. They know where it is accumulating. They know what they are choosing to trade for it. They revisit the decision when the debt load feels wrong.

They design their AI automation with beneficial friction built in. Not as a safety check that can be switched off when the system is performing well. As a structural feature of how the work is done. The junior still drafts the memo the senior could get from the model. The analyst still writes the recommendation the pipeline could produce. Not because the human output is better. Because the human capability is the thing the organization is actually buying.

They treat facilitation as infrastructure, not as a soft skill. They invest in it. They build it across the leadership team. They understand that the capability to carry judgment through an organization is the durable advantage. Tools will change. Models will change. The organizational capacity to decide well under uncertainty will not.

This is what we do at Voltage Control. Not because we have a template to hand you. Because the work of navigating the new friction is facilitation work, and facilitation is what we have been building capability around for the last decade.

What is at stake

The organizations that hold the line on beneficial friction will move slower in the short term. They will look less impressive in the quarterly efficiency reports. Their AI transformation stories will be harder to tell in press releases. They will also move further in the long term, because they will still have the people who can do the work the model cannot yet do, and the judgment that closes the gap when the data does not. The organizations that optimize everything for speed will discover the fragility on the worst possible day. Not because the AI failed. Because the people who were supposed to catch what the AI missed have atrophied the capability to catch anything. The new friction is not a problem to be eliminated. It is a signal telling you where your organization’s judgment is concentrating. Work with it, and the organization gets stronger. Optimize it away, and you are running Dalbert Ma’s automated zone, waiting for rain.

Frequently Asked Questions

Why do most AI transformation initiatives fail?

Most stall because organizations treat AI as a technology rollout when it’s actually a leadership and facilitation problem. The tools work. What breaks is the judgment capacity of the organization, the shared decision-making the model cannot replicate, and the distributed expertise that gets quietly hollowed out when contiguous workflows are automated end-to-end.

What is Capability Debt in AI adoption?

Capability Debt, named by JoAnna Vanderhoef in 2026, is the growing gap between an organization’s apparent efficiency and its adaptive capacity. It accumulates when AI absorbs work that used to build human judgment. The debt is invisible in productivity metrics and only shows up when the situation changes and the people who would have handled it have atrophied the skill.

How does beneficial friction improve AI outcomes?

Beneficial friction is a small intervention that forces a human to articulate their own reasoning before accepting an AI output. Renée Gosline’s 2026 MIT study showed a thirty-second reasoning step measurably reduced over-reliance on AI and preserved critical thinking. At the organizational level, beneficial friction looks like decision-rights reviews, novelty drills, signal sampling, and shadow-session reviews of automated pipelines.

What role does leadership play in AI transformation?

When execution was expensive, leadership cleared the path. Now that execution is cheap and judgment is scarce, leadership’s job is to carry organizational judgment capacity forward, design the decisions that matter, surface dissent, and ensure the people who will need a skill later are getting the practice now. That is facilitation work, not project management.

How do you maintain judgment when automating workflows?

Treat AI automation roadmaps as portfolio decisions, not efficiency decisions. Be most careful with contiguous AI-suitable steps, since those are where Capability Debt compounds fastest. Build beneficial friction into the workflow as a structural feature rather than a removable safety check. Keep humans in the chain even when the model could handle the step, because the capability is the thing the organization is actually buying.

Ready to work the new friction?

If your organization is hitting the stall, and most are, there are three ways to go deeper.

Talk to us. We will help you map where your organization is accumulating Capability Debt and what to do about it.

Read the full frame. Our pillar page lays out the thesis and the three pillars: New Friction, Multiplayer, and Spark.

Build the capability. Our facilitation certification teaches the skills that matter most when the bottleneck is judgment, not execution.