Two AI futures. The middle ground is collapsing.

In 2017 I gave a talk at Facilitation Lab on language models in facilitation. Most people in the room had not heard of GPT. The thesis I put on the screen was simple: the technology side would outrun every organization’s ability to absorb it, and the human side would become the bottleneck. Around the same time I was advising Kungfu.ai with Stephen. His bet was on building AI. Mine was on building the human capability that would have to grow around it. I founded Voltage Control to make that second bet, and I have made it every year since.

Eight years later the thesis is no longer abstract. Organizations are not choosing between using AI and not using it. They are being pulled into two different futures, and the middle ground is disappearing. One future is deep tech: organizations that have built genuine infrastructure, deployed agents at scale, automated workflows end to end, and are operating with AI as a core organizational capability, not a feature. The other future is deep human: organizations that have recognized that the hard part was never the technology, that it was always the people, the trust, the identity questions, the way groups make decisions under uncertainty. They are investing in the human capability that makes any technology productive.

The space between these two positions is collapsing. You can see it in the data now. I have been waiting for the research to catch up to the pattern. It has.

The Evidence of the Split

Gartner’s Digital Workplace Summit this year surfaced a finding that should land harder than it has: executives are four times more likely to report high productivity gains from their AI investments, while individual contributors are five times more likely to say AI made no difference to their work. That is not a technology gap. The tools are the same. The gap is in how the technology is being experienced, and it maps almost perfectly onto the structure of most enterprise AI programs: leadership makes the decision, licenses get purchased, individuals get trained once and then largely ignored, and the two groups live in completely different realities about what is happening.

Four out of five employees believe their organization is trying to replace them with AI. Only 12% feel involved in the decisions about how AI gets used in their work. 78% do not know whether they will lose their job to AI. These are not the stats of an organization that is integrating AI. These are the stats of an organization that has deployed AI at its leadership layer and left the rest of the workforce in the dark.

And when only 14% of leaders believe employees are effectively using the tools they have been given, the people making deployment decisions and the people living with the consequences are not operating from the same reality. This is what the bifurcation looks like from the inside.

What Is Actually Happening

The technology is not the problem. The technology, in most enterprise contexts, is working. 72% of IT leaders say Copilot users struggle to integrate it into their daily routine, but the failure mode there is not the tool. It is the design of how adoption happens.

The World Economic Forum projects that 59% of the workforce needs brand new skills in the next two to three years. Gartner estimates that 32 million jobs will be transformed per year due to AI, and that managing this transformation requires 20 times more organizational effort than managing job losses. That 20:1 ratio is the figure that should reorient every AI strategy. Organizations are allocating budget and attention as though this were a technology problem, when the data says it is primarily an organizational problem.

The work of transformation is not writing code or buying software. It is the human work: the alignment conversations, the role redesigns, the trust-building, the change management, the process of getting a 40,000-person organization to operate differently. That work does not scale through typical training programs. Without application and practice, half of what people learn from a one-time training session is gone within 24 hours. 90% is gone within six days. Learning decay does not care how good the content was.

Two Organizations, Same Technology

The split is easier to see in examples than in statistics.

At Gartner’s Digital Workplace Summit, Ivanti presented their approach to internal AI transformation. They built a centralized AI platform called Ivy and created AI pods: cross-functional environments where subject matter experts, senior DBAs, network engineers, and storage specialists rotate through and imbue AI models with their domain expertise. The output is what Gartner called “cybernetic teammates”: AI agents that carry the actual knowledge and judgment of specific senior practitioners, available to everyone in the organization, not just to the people who happen to sit near the expert. They surfaced approximately 700 AI use cases this way. The mechanism was not a training program. It was a structured process for capturing and distributing human expertise at scale.

Manchester University NHS Foundation Trust deployed Microsoft Dragon Copilot to give doctors back something they had been losing: full attention on the patient in front of them. The voice AI handles transcription and note-taking in real time. The doctor reviews, edits, and approves. The consultation, the actual human work, is now uninterrupted. Manchester’s Chief Executive has estimated that at full rollout, the trust could see up to 250,000 additional patients per year. That number is a projection, not a measured result, and it depends on redesigning scheduling, staffing, and workflow to convert freed-up minutes into actual appointments. The technology is the easy part. The organizational redesign is the work.

These two cases look different on the surface. One is an IT infrastructure vendor restructuring how expertise flows across its organization. The other is a hospital trust giving clinicians room to be clinicians. But both illustrate the same underlying logic: AI works when it is designed around what humans do best, not when it is deployed as a replacement for the conversation about what that even means.

The Wrong Approach

The organizations going in the wrong direction are not doing obviously foolish things. They are doing reasonable things, badly sequenced. They buy licenses before they understand the work. They run training programs before they have addressed the trust deficit. They announce AI strategies without involving the people those strategies will affect. And then they are surprised when license usage stays flat, when the productivity gains are invisible to the people on the ground, when the AI-fluent individuals they develop become isolated experts rather than multipliers.

56% of CEOs plan to use AI to de-layer middle management within five years. The question is not whether that flattening is coming. It is whether anyone is designing what replaces the development pathways that disappear when it does. Middle management is not just overhead. It is the layer through which expertise gets transferred, context gets communicated, and junior people get the reps that build them into senior people. Remove the layer without replacing the function and you have an experience starvation problem: senior experts absorbing work that used to be the proving ground for the next generation. The pipeline for building bench strength quietly breaks.

AI is not taking entry-level jobs. Experts are. That is a subtly different problem that requires a subtly different response.

Where Facilitation Lives

I keep coming back to this: there needs to be a function in organizations that lives at the intersection of all the functional groups. Not IT. Not HR. Not change management as it is currently practiced. A function that understands how groups make decisions under uncertainty, how trust is built and broken, how to create conditions where people can learn through doing rather than just through instruction. That is a facilitation function. And AI does not make it less important. It makes it more important.

When you deploy AI at speed, you compress the timeline for every organizational friction. Decisions that used to take weeks get made in hours. Alignment gaps that used to surface slowly become visible immediately. The process problems that were tolerable before, the meetings where nothing gets decided, the strategies that make sense to leadership and mean nothing to the people executing them, those problems do not disappear with AI. They get louder. More inputs and faster inputs can slow alignment down if the process is broken.

The organizations getting real value from AI have not solved a technology problem. They have figured out how to have the conversations that the technology makes urgent: about what work means, about who has agency over it, about how expertise flows and gets recognized, about what you are actually trying to do when you say you want to be AI-ready. The dotted line between deep tech and deep human is not a gap to be closed by more tools. It is where the work happens.

The Choice

The bifurcation is not a prediction. It is already underway. The organizations making real investments in both the technology and the human infrastructure to absorb it are pulling ahead. The ones waiting for the technology to prove itself before investing in the organizational side are falling behind, and the gap is compounding.

You cannot address the organizational side with the same logic you used to deploy the technology. You cannot train your way to psychological safety. You cannot mandate your way to trust. You cannot run a workshop that solves the identity questions AI raises for the people whose work is changing. What you can do is design environments where those questions get answered through practice, where people learn by doing in conditions that are structured enough to be safe and real enough to matter. Where the facilitation is not an add-on to the AI strategy but the architecture that makes the AI strategy possible.

That is the choice. Deep tech alone will get you capability without adoption. Deep human alone will get you culture without leverage. The organizations that understand both, and that have someone whose job it is to hold the space between them, are the ones that will compound the gains. Everything else is just expensive licensing.

If you are building AI strategy and finding that the human side keeps creating more problems than the tools solve, let’s talk.

Frequently Asked Questions

What is the difference between deep tech and deep human organizations?

Deep tech organizations have built real AI infrastructure, deployed agents at scale, and treat AI as a core capability rather than a feature. Deep human organizations have recognized that the hard part was never the technology; it was always the trust, identity, and decision-making capacity that lets any technology produce value. The two are not opposed. The bifurcation is happening because most organizations are investing heavily in one side and ignoring the other, and the middle ground is collapsing.

Why is the middle ground collapsing for organizational AI strategy?

Because AI compresses the timeline on every organizational friction. When the technology was slower, organizations could afford to ignore the trust gap, the identity questions, and the decision-rights ambiguity. AI makes those frictions immediate. The 20:1 ratio Gartner reported, that managing AI-driven transformation requires twenty times more organizational effort than managing job losses, is the quantitative version of this collapse. Half-measures stop working.

How do you build organizational trust during AI transformation?

By involving the workforce in how AI reshapes their roles before deployment, not after. Organizations that get this right, like Ivanti’s AI pods or Manchester NHS’s Dragon rollout, are not announcing AI strategy and asking people to comply. They are bringing subject matter experts and frontline workers into the design of how the technology gets used. The mechanism is structural, not communicative; trust comes from agency, not from town halls.

What does “deep human” mean in organizational AI strategy?

Deep human means investing in the human capability that makes any technology productive: facilitation skills, decision rights design, trust-building practices, role redesign, and the developmental experiences that build judgment over time. It is not the soft side of AI strategy. It is the architecture that makes deep tech work. Organizations that go deep tech without deep human get capability without adoption.

Should organizations invest in AI technology or human capability?

Both, in sequence and in proportion. Most organizations are weighted almost entirely toward tech investment, with only a sliver for human investment, and that imbalance is what produces the executive/IC perception gap and the experience starvation pattern. The organizations pulling ahead invest in both at roughly the level the 20:1 effort ratio implies: most of the work is the human work, and treating it as a side project alongside the technology budget is the failure mode the bifurcation reveals.