Collaborative AI
“Collaborative AI” is the buzziest term of 2026. Vendors use it. Analysts use it. LinkedIn thought leaders use it. Most of the time it means almost nothing, because the term has been stretched to cover three completely different things at once. A diagram of multiple agents handing tasks to each other gets called collaborative AI. A single user prompting ChatGPT for help gets called collaborative AI. A team using a shared model in a meeting gets called collaborative AI. Three different things, one term. And the thing that actually matters, the thing that is genuinely new about how teams are starting to work with AI, gets buried under the other two. This piece is a working definition. Not the marketing one. The one that lines up with what we actually see when we walk into rooms where teams are doing this well, and what is missing from the rooms where they are not.

The most common use of “collaborative AI” right now describes a multi-agent architecture. One AI agent generates a draft, hands it to a second agent for review, hands the result to a third for formatting. The agents are collaborating with each other. The diagram is impressive. The phrase has obvious appeal. This is a useful technical pattern. It is not collaboration in any sense that matters for how people work. There are no humans in the loop. The collaboration is between models. Calling this “collaborative AI” is like calling a pipeline “collaborative software.” The work flows through stages, but no one is collaborating.

The shallow definition gets worse when it is applied to a single person using a chatbot. Someone types a prompt, the model returns text, the person edits it, sends another prompt. This is not collaboration. It is iterative tool use. Useful, fast, and individual. The output reflects one person’s thinking improved by a model. No one else’s perspective is in the room. If you are looking for what actually changes when AI shows up in a team’s workflow, neither of those definitions will help you.
Here is the one that holds up in practice. Collaborative AI is the practice of bringing AI into the room with a team, where it influences collective thinking and output in real time, with shared visibility into how the model is contributing. Three pieces matter, and all three have to be present.

In the room with a team. Not one person alone with a chat window. A group, working together, with an AI participating in the work. This could be a workshop, a strategy session, a stand-up, a planning call. The AI is on the screen, not in someone’s pocket.

Shapes the team’s collective output in real time. The model is generating, summarizing, surfacing patterns, drafting alternatives. Whatever the team is producing is being changed by the AI as the team works. Not after the meeting, in someone’s editor. During.

Shared visibility into how the model is contributing. This is the part that gets skipped, and it is the part that determines whether the AI helps the team or quietly hurts them. Everyone in the room knows the model is contributing, knows what it has produced, can see what is generated AI versus team thinking, and has the chance to push back. The AI is a participant, not a hidden assistant.

When all three are present, you get something that does not happen with individual AI use or multi-agent pipelines. You get a team that can think faster together, with a shared artifact that captures what the model contributed and what the people contributed, and a record of where they pushed back. That is collaborative AI. Everything else is either delegation (one person and a model) or automation (models talking to models).
A leadership team gathers to align on a strategic question. The question is on the screen. So is a model. The facilitator runs the team through a structured divergence: each person types a position privately, the model surfaces themes across the responses, the themes go up on the wall. The team sees the patterns the model found and the dissents the model missed. They argue with the model’s framing. They edit the themes. They re-run the synthesis with their corrections. Two hours in, the team has alignment on a position they could not have produced in two hours without the AI. They also have a record of what the model contributed and where they overrode it. The output is theirs. The model accelerated the path to it. Now imagine the same team, same question, without collaborative AI. Three options.
Option A. Each person prepares their position alone, with their own AI assistant. They come to the meeting with polished drafts that look similar because the underlying models trained on similar content. Discussion devolves into refining the most articulate draft instead of surfacing the real disagreement. The model contributed to each person individually. It did not contribute to the team.
Option B. They run the meeting without AI, fill the wall with sticky notes, take photos for the recap, and the synthesis happens later, in someone’s editor, with a model. The synthesis returns from the model and people argue about whether it captured the room. The model is reading, not collaborating.
Option C. They run a multi-agent system that takes meeting transcripts, summarizes them, drafts strategic options. The output looks like collaboration. No one is in the room with the model. The team is consuming AI output, not shaping it. All four scenarios use AI. Only the first, the session with the model in the room, is collaborative AI as the term should be used.

The reason collaborative AI works in some rooms and not others has nothing to do with the model. The model is the same. What changes is what the team brings.

A facilitator who can hold the room with AI in it. Most facilitation training assumes the facilitator’s job is to manage human dynamics. With AI in the room, the facilitator’s job expands. Who decides when to use the model? When does the model’s output get accepted, and when does it get pushed back on? Who notices when the model is steering the conversation toward a generic framing the team would not have chosen on its own? These are facilitation moves that did not exist three years ago. Teams that have someone who can run them get collaborative AI. Teams that do not have one fall back to one of the three options above.

Shared norms about transparency. The team has to agree, before the session, on what AI use looks like in the room. Is everyone using it? Are some people privately using it while others are not? Is the model running publicly on the screen, or quietly assisting one person? When AI use is visible, the team can engage with it. When it is hidden, it distorts the room.

A working understanding of what the model is good at and what it is not. Models are excellent at synthesis, summarization, divergent generation, and surfacing patterns across text. They are bad at judgment under uncertainty, weighing competing values, and noticing what is missing from a conversation. Teams that know this use the model where it helps and override it where it does not. Teams that do not know it drift toward whatever the model recommends.

These three capabilities are not technical. They are practices. And practices are slow to build, because they require facilitated repetition.
Most “collaborative AI” content you will read in 2026 will be one of the two shallow definitions, dressed up in language that makes it sound like the working one. Vendors have an incentive to call any AI feature collaborative because the word is selling well. The diagram is collaborative. The chatbot is collaborative. The agent network is collaborative. None of them require what real collaboration requires, which is more than one person in the same room making decisions together. The risk for buyers is straightforward: you procure something labeled collaborative AI, deploy it across the organization, and discover that it is a productivity tool for individuals. People use it alone, at their desks, between meetings. The team-level capability you were trying to build never materializes, because the tool was never going to build it. The capability is built by humans, not software. The good news is that the actual practice of collaborative AI does not require a particular vendor. The model layer is a commodity. What is scarce is the facilitation layer on top, and that is what teams have to build for themselves.
This is one piece of a larger pattern. The friction that matters in 2026 is no longer execution speed. AI eliminated that friction. The friction that matters is consensus, alignment, and trust at the team and organization level. AI accelerates execution; it does not, on its own, build alignment. In some configurations it makes alignment harder, because individual users move so fast that the team cannot keep up. Collaborative AI is the response to that. It is what happens when teams refuse to let AI become a private productivity boost and instead bring it into the room as a shared participant. The benefit is real: faster alignment, better synthesis, decisions that more people genuinely own. The cost is that someone has to facilitate it, and most organizations have not built that capability yet. That is the work in front of leadership teams right now. Not picking the right collaborative AI vendor. Building the team practices that make any AI collaborative.
If you are leading a team and want to start moving toward collaborative AI:
Pick one recurring meeting. Not a high-stakes one. A regular planning or review session where the team is already aligned on the format. This is your test environment.
Put a model on the screen. Shared, visible, running. The output of the model goes up where everyone can see it, edit it, push back on it.
Name AI use explicitly. When the model contributes something, say so. When someone overrides it, say so. The transparency is what makes the next session better.
Run it for four weeks. The first session will feel awkward. The second will be better. By the fourth, the team will start to develop instincts about when to invoke the model, when to override it, and how to use it without losing their own judgment. After four weeks, you will know if you have built the practice. You will also know what you need from a facilitator, from governance, and from team training to scale it.
The teams that build this capability now will compound on it for the next decade. The teams that wait for the right vendor or the right tool will still be looking for the right vendor when the friction has moved somewhere else. That is the difference between collaborative AI as branding and collaborative AI as a capability. The branding will keep shifting. The capability is yours once you build it.
The New Friction
Two trucks break down in a port. They are thirty meters apart, on the same lane, carrying the same cargo. One port zone recovers from the disruption in seventy minutes. The other takes more than two hours. The zones share everything that matters: the same bridges, the same lane widths, the same weather, the same sixty-second mechanical fault. The only difference is coordination. In the slow-recovery zone, a single algorithm dispatches every vehicle. In the fast-recovery zone, that same algorithm shares infrastructure with a fleet of trucks driven by independent logistics companies, each operating under its own objectives.

That is the finding M. Dalbert Ma, a researcher at London Business School, reported to the BIG.AI@MIT conference last month, after studying approximately one year of operations at one of the world’s largest container terminals. The autonomous zones ran 3.8% more efficiently under normal conditions. A single sixty-second fault cost them a 12.2% delay on the operations that followed. Rain, which forces every vehicle to slow and creates a temporal buffer between sequential operations, erased the fragility entirely.

This is what most AI transformation stories leave out. The efficiency gain is real. So is the cost you pay when something disrupts it. Real AI change management is the work of carrying that cost forward without breaking the system.

Every AI-first workflow your organization designs makes the same structural tradeoff the port made. When execution time collapses, coupling tightens. When coupling tightens, buffer disappears. The same mechanism that produces the efficiency also produces the fragility. This is not a failure of the technology. The AGVs in Ma’s study were operating at SAE Level 4 autonomy, the highest level in commercial deployment. They were not malfunctioning. The algorithm was not broken. What the study shows is that optimization pushed to its limit consumes the slack the system needs to absorb disruption. The port is a clean case because you can measure it. The same pattern is operating inside every organization that has automated a contiguous block of knowledge work without thinking about what the friction was doing for them. When the fault arrives, and it always arrives, the organizations that over-optimized pay a tax the spreadsheet did not predict.
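To see the mechanism in miniature, here is a deterministic toy in Python. It is our construction, not the study’s simulation: assume a single fault of fixed size, and assume each downstream handoff carries a fixed amount of slack that can absorb part of the delay. The fault size and buffer values are illustrative only.

```python
# A toy of coupling versus slack (our construction, not Ma's model).
# Each downstream handoff can absorb up to `buffer_per_handoff` seconds
# of a one-off delay. With zero slack, the delay never decays.

def handoffs_to_recover(fault_seconds: float, buffer_per_handoff: float):
    """Count downstream handoffs until a one-off delay is fully absorbed."""
    if buffer_per_handoff <= 0:
        return None  # fully optimized schedule: the delay just propagates
    delay, handoffs = fault_seconds, 0
    while delay > 0:
        delay -= buffer_per_handoff
        handoffs += 1
    return handoffs

print(handoffs_to_recover(60, 5))  # rain-like slack: absorbed after 12 handoffs
print(handoffs_to_recover(60, 0))  # no slack: None, every downstream task is late
```

The point of the toy is the asymmetry: the slack costs a little on every normal day, and pays for itself on the day the fault arrives.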
JoAnna Vanderhoef, in a poster at the same conference, gave this tradeoff a name: Capability Debt. It is the growing gap between an organization’s apparent efficiency and its adaptive capacity. Capability Debt is subtle because it shows up as absence. Absence of novelty detection. Absence of the junior employee who stumbled into the strange request and learned how to triage it. Absence of the reviewer who noticed the model’s output was technically correct and strategically wrong. Absence of the senior whose judgment was trained on edge cases the automated pipeline now handles without them. You do not see the debt until you need to do something the system was not built for. By then, the people who would have done it have atrophied the capability, or have never built it at all. This is the part of AI transformation that is easy to underweight in a board deck. Efficiency is legible. Judgment loss is not. It hides inside the year-over-year improvement metrics and inside the reduced headcount and inside the deliverables that ship faster and look clean until a situation arrives that needs taste, or context, or the ability to know what is not in the data. Capability Debt is the bill that comes later.
A team of researchers at MIT, Yale, and Microsoft, led by Mert Demirer, formalized the mechanism. They call it AI chains. An AI chain is a sequence of production steps in which the automated steps are contiguous. The human at the end of the chain verifies only the final output. The verification cost is fixed, not proportional to chain length. So the economic incentive is to keep adding steps to the chain until the marginal failure probability overwhelms the saved verification cost.

Two consequences follow. First, the jobs that get automated fastest are the ones where AI-suitable work clusters together. Lecture preparation is one such job. Research, drafting, slide generation, and example synthesis are all AI-suitable, and they are sequential. A single verification at the end is sufficient. The chain collapses into one unit of human work. Tutoring is the opposite. AI-suitable steps are interleaved with diagnostic steps that require real-time human judgment. The chain cannot form. The human is on the hook for verification at every handoff.

The second consequence is more important. Jobs that form long AI chains are also the jobs where learning loops get shortest. The junior who used to do the research, draft the slides, and watch the senior edit them loses three apprenticeship cycles per deliverable. What was formerly a sequence of moments where skill formed now happens inside the model.

The researchers tested this empirically against O*NET task descriptions combined with data from Anthropic’s Economic Index, which tracks which tasks are actually being performed with AI at scale. The pattern held. AI execution concentrates in contiguous blocks within occupations. Occupations whose AI-exposed steps are more dispersed throughout the workflow show substantially lower AI execution.

The policy implication for leaders is quiet but significant. When your team maps its AI automation roadmap, the blocks you want to be careful about are the contiguous ones. They are where the efficiency gain is largest. They are also where the Capability Debt compounds the fastest.
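The chain economics can be sketched the same way. The model below is a minimal illustration of the incentive described above, not the researchers’ formalism: assume each automated step saves a fixed unit of human labor, fails independently with some probability, and a chain that fails anywhere must be redone by hand end to end. All parameter values are invented for illustration.

```python
# Toy AI-chain economics (illustrative, not from Demirer et al.).
# Verification at the end costs the same regardless of chain length,
# so the incentive is to extend the chain until failure risk wins.

def expected_net_saving(n, saving_per_step=1.0, p_fail=0.03,
                        rework_per_step=1.5):
    """Expected labor saved by automating a contiguous chain of n steps."""
    p_chain_ok = (1 - p_fail) ** n           # all n steps succeed
    gross_saving = n * saving_per_step       # human work displaced
    expected_rework = (1 - p_chain_ok) * n * rework_per_step
    return gross_saving - expected_rework

if __name__ == "__main__":
    best_n = max(range(1, 41), key=expected_net_saving)
    print(f"Net saving peaks at a chain of {best_n} steps.")
```

With these invented numbers the net saving peaks around fifteen steps and then declines. The exact peak is meaningless; the shape is the point. The longest chains are the most tempting and the most fragile.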
Here is what separates the organizations that stall from the ones that scale. The stall pattern looks like this: adopt the tool, measure the productivity, celebrate the win, and then slowly discover that the team cannot do what the team used to do. The workflow ran. The outcome degraded. Nobody is quite sure when.

The scale pattern looks different. The scaling organizations are the ones that hold the line on what Renée Gosline, in a separate MIT study presented at the same conference, calls beneficial friction. Her team ran a controlled experiment. Participants worked on cognitive tasks with AI assistance. In the control condition, the AI made its recommendation and the participant accepted or rejected it. In the treatment condition, before accepting or rejecting, the participant was asked to articulate their own reasoning, or to predict what the AI’s reasoning was. That small intervention, which took thirty seconds, measurably reduced over-reliance on AI and preserved the participant’s critical thinking.

This is the design move most organizations skip. They treat friction as waste. They are correct that some friction is waste. They are wrong that all friction is waste. The friction that forces a human to articulate their own judgment before the AI’s output is anchored is the friction that carries the capability forward.

At the organizational level, beneficial friction looks like this. Decision rights reviews before an AI pipeline goes into production, where the team has to name who owns the outcome the pipeline is producing. Novelty drills, where a percentage of the work that could be automated is routed to humans anyway, so the capability stays alive. Signal sampling, where humans regularly review a random sample of AI outputs not for QA but for drift. Shadow-session reviews, where someone who has not been in the pipeline’s daily operation comes in and asks whether the pipeline is still doing the right thing.

None of these are productivity moves. All of them are capability moves. The point of beneficial friction is not to make the system slower. The point is to keep the system teachable.
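Two of those moves are concrete enough to express as routing rules. The sketch below is a minimal illustration of novelty drills and signal sampling as they might sit inside a pipeline; the rates, function names, and task shapes are all our assumptions, not prescriptions from the research.

```python
import random

# Illustrative routing rules for two beneficial-friction moves.
# DRILL_RATE and SAMPLE_RATE are invented placeholders; real values
# would come out of a decision-rights review, not a code default.

DRILL_RATE = 0.05    # novelty drills: automatable work still done by humans
SAMPLE_RATE = 0.02   # signal sampling: finished AI outputs reviewed for drift

def route(task: str) -> str:
    """Send a small slice of automatable work to humans to keep the skill alive."""
    return "human" if random.random() < DRILL_RATE else "ai"

def drift_review_queue(ai_outputs: list[str]) -> list[str]:
    """Pull a random sample of AI outputs for human review: drift, not QA."""
    return [out for out in ai_outputs if random.random() < SAMPLE_RATE]

if __name__ == "__main__":
    tasks = [f"ticket-{i}" for i in range(1000)]
    human_share = sum(route(t) == "human" for t in tasks) / len(tasks)
    print(f"{human_share:.1%} of automatable work routed to humans")
```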

The organizations that are navigating this well understand something the organizations that are stalling do not. The new friction is not a technology problem. It is a leadership problem. When execution was expensive, leadership’s job was to clear the path. Remove the blocker. Approve the budget. Unstick the review cycle. That job is largely done. The organizations that still do it well at the leadership level are optimizing a bottleneck that is mostly already gone.

The new job is different. When execution is cheap and judgment is scarce, leadership’s job is to carry the organization’s judgment capacity forward. That means designing the decisions that matter, surfacing the dissent that would otherwise stay hidden, ensuring that the people who will need the skill later are getting the practice now. This is facilitation work. Not facilitation in the narrow sense of running meetings well, although that is part of it. Facilitation in the broader sense of helping groups think together, decide together, and build the shared judgment that a single expert, however capable, cannot hold alone.

The organizations that treat AI change management as a tool rollout are solving for the wrong variable. The tool is the easy part. The hard part is building the organizational muscle that keeps judgment distributed across the people who will need to exercise it when the situation changes. And situations always change. The port example makes this visceral. The efficiency advantage held until the sixty-second fault. Then the organization that had preserved coordination independence recovered faster, because it had not consumed the slack the recovery required. Your organization is running the same experiment right now. You will not know the outcome until the fault arrives.
The organizations working the new friction well share three habits.

They take Capability Debt seriously as an accounting category. Not formally on the balance sheet, but in the same way a good engineering team takes technical debt seriously. They know where it is accumulating. They know what they are choosing to trade for it. They revisit the decision when the debt load feels wrong.

They design their AI automation with beneficial friction built in. Not as a safety check that can be switched off when the system is performing well. As a structural feature of how the work is done. The junior still drafts the memo the senior could get from the model. The analyst still writes the recommendation the pipeline could produce. Not because the human output is better. Because the human capability is the thing the organization is actually buying.

They treat facilitation as infrastructure, not as a soft skill. They invest in it. They build it across the leadership team. They understand that the capability to carry judgment through an organization is the durable advantage. Tools will change. Models will change. The organizational capacity to decide well under uncertainty will not.

This is what we do at Voltage Control. Not because we have a template to hand you. Because the work of navigating the new friction is facilitation work, and facilitation is what we have been building capability around for the last decade.
The organizations that hold the line on beneficial friction will move slower in the short term. They will look less impressive in the quarterly efficiency reports. Their AI transformation stories will be harder to tell in press releases. They will also move further in the long term, because they will still have the people who can do the work the model cannot yet do, and the judgment that closes the gap when the data does not. The organizations that optimize everything for speed will discover the fragility on the worst possible day. Not because the AI failed. Because the people who were supposed to catch what the AI missed have atrophied the capability to catch anything. The new friction is not a problem to be eliminated. It is a signal telling you where your organization’s judgment is concentrating. Work with it, and the organization gets stronger. Optimize it away, and you are running Dalbert Ma’s automated zone, waiting for rain.
Why do most AI transformation initiatives fail?
Most stall because organizations treat AI as a technology rollout when it’s actually a leadership and facilitation problem. The tools work. What breaks is the judgment capacity of the organization, the shared decision-making the model cannot replicate, and the distributed expertise that gets quietly hollowed out when contiguous workflows are automated end-to-end.
What is capability debt in AI adoption?
Capability Debt, named by JoAnna Vanderhoef in 2026, is the growing gap between an organization’s apparent efficiency and its adaptive capacity. It accumulates when AI absorbs work that used to build human judgment. The debt is invisible in productivity metrics and only shows up when the situation changes and the people who would have handled it have atrophied the skill.
How does beneficial friction improve AI outcomes?
Beneficial friction is a small intervention that forces a human to articulate their own reasoning before accepting an AI output. Renée Gosline’s 2026 MIT study showed a thirty-second reasoning step measurably reduced over-reliance on AI and preserved critical thinking. At the organizational level, beneficial friction looks like decision-rights reviews, novelty drills, signal sampling, and shadow-session reviews of automated pipelines.
What role does leadership play in AI transformation?
When execution was expensive, leadership cleared the path. Now that execution is cheap and judgment is scarce, leadership’s job is to carry organizational judgment capacity forward, design the decisions that matter, surface dissent, and ensure the people who will need a skill later are getting the practice now. That is facilitation work, not project management.
How do you maintain judgment when automating workflows?
Treat AI automation roadmaps as portfolio decisions, not efficiency decisions. Be most careful with contiguous AI-suitable steps, since those are where Capability Debt compounds fastest. Build beneficial friction into the workflow as a structural feature rather than a removable safety check. Keep humans in the chain even when the model could handle the step, because the capability is the thing the organization is actually buying.
If your organization is hitting the stall, and most are, there are three ways to go deeper. Talk to us. We will help you map where your organization is accumulating Capability Debt and what to do about it. Read the full frame. Our pillar page lays out the thesis and the three pillars: New Friction, Multiplayer, and Spark. Build the capability. Our facilitation certification teaches the skills that matter most when the bottleneck is judgment, not execution.
Why Your AI Training Program Is Already Obsolete
The data is in, and it confirms what many of us suspected: the way most organizations are approaching AI adoption is fundamentally broken. What looks like an AI upskilling problem is actually a design problem, and no amount of additional training will solve it.
In the same week, two independent reports landed on the same conclusion from completely different angles. Gartner’s Digital Workplace Summit presented research showing that generic AI training produces generic results, that 72% of IT leaders say Copilot users struggle to integrate it into their daily routine, and that collaboration, not individual tool proficiency, is the #2 skill IT workers need right now. Meanwhile, Anthropic released its Economic Index showing that experienced AI users get measurably better results than newcomers, and the gap compounds over time. People who have used AI for six months or more have a 10% higher success rate in their conversations. The longer you use it, the wider the gap gets.
This is not a training problem. This is a design problem.
Here is what most organizations are doing: they buy licenses, schedule a training session, maybe run a webinar series, and call it done. Gartner’s data shows exactly what happens next. License counts rise. Active daily usage stays flat. Within a day, employees have lost 50% of what they learned. After six days, 90% is gone.
That is not a failure of the training content. It is a failure of the approach. You cannot teach AI fluency in a classroom any more than you can teach someone to swim by showing them a PowerPoint about water.
AI fluency is not taught. It is sparked. Nobody learns something they do not want to learn, and nobody retains a skill they do not practice immediately in the context of their actual work. The most common misconception we encounter when a client first engages us: that training is a one-and-done experience. That a small training event is all that might be needed for change. The reality is that AI upskilling that holds comes from stacking small and deliberate work over time, not from a single workshop.
The Anthropic data makes this even sharper. Their report studied over a million conversations and found that the gap between experienced and new users is not explained by what tasks they are doing, what country they are in, or what model they are using. It is explained by how they interact. Experienced users do not just delegate tasks. They iterate, push back, validate, and learn. They treat AI as a collaborator, not a vending machine.
That is a skill that gets built through practice, not instruction.
Gartner surfaced a stat at the Digital Workplace Summit that should alarm every executive reading this: executives are four times more likely to report high AI productivity gains. Individual contributors are five times more likely to say AI made no difference.
Read that again. The people making the adoption decisions and the people doing the adoption are living in different realities.
This is not a technology gap. It is a perception gap, and it is driven by something deeper than skill level. When four out of five employees believe their organization is trying to replace them with AI, and only 12% feel involved in the decisions about how AI gets used, you do not have a training problem. You have a trust problem. And no amount of lunch-and-learn sessions will fix it.

Consider what this looks like on the ground. A VP of digital transformation rolls out an AI copilot and sees her own productivity jump. She assumes everyone else is having the same experience. Meanwhile, 78% of employees do not even know whether they will lose their job to AI. They are not experimenting with the tool. They are watching it with suspicion, trying to figure out what it means for them. The same technology that feels like a superpower to the executive feels like a threat to the person three levels down.
We see this pattern constantly. Teams do not resist AI because they lack skills. They resist because they do not have a vision for what purposeful adoption looks like, and they do not feel they have agency in it because they were not included. It is a mixture of capability gap and design gap, and the design gap is the one nobody is addressing.
The organizations seeing real value from AI share one characteristic that the others do not: alignment. Gartner found that organizations with business-IT-executive alignment on what problems AI should solve are three times more likely to report significant value. Only 14% of organizations have that alignment today. That is not a technology gap. That is a conversation that has not happened yet.
There is a more insidious consequence of getting AI adoption wrong, and most leaders are not seeing it yet.
When senior people use AI to do junior work faster, they are not just being more productive. They are removing the on-ramps that junior employees need to develop expertise. Gartner calls this “experience starvation.” The expert uses AI to absorb tasks that used to be the proving ground for new hires. The new hire never gets the reps. The pipeline for developing the next generation of talent quietly breaks.
Think about what this means in practice. A senior analyst who once delegated data cleaning to a junior team member now does it herself in minutes with AI. The junior analyst never learns the structure of the data, never develops the intuition that comes from wrestling with messy inputs. The senior person is more productive. The junior person is more expendable. And the organization has quietly eliminated the apprenticeship model that built its bench strength.
This is already showing up in the data. Anthropic’s report found that job-finding rates for 22-to-25-year-olds in AI-exposed occupations have dropped 14% compared to 2022. Software developer employment in that age cohort has declined roughly 20% from its late-2022 peak. The junior roles are not being automated away by AI. They are being absorbed by seniors who now have AI doing the work that used to be someone else’s learning curve.
There is a troubling feedback loop here as well. Anthropic’s researchers found that developers who used AI assistance scored 50% on follow-up knowledge assessments, compared to 67% for those who coded by hand. The tool makes you faster today while potentially making you less capable tomorrow, unless the learning environment is designed to counteract that effect.
Gartner projects that 56% of CEOs will use AI to de-layer middle management within five years. The question is not whether the org chart is going to flatten. It is whether anyone is designing what replaces the development pathways that disappear when it does.
Here is something we did not expect to find, but now see repeatedly: the teams with the most AI-fluent individuals are not always the teams getting the most value.
When a few people on a team develop real AI proficiency while everyone else stays at the basics, something counterintuitive happens. The fluent members pull ahead in their individual work, but they cannot embed what they are learning back into the team. They are producing faster, thinking differently, using AI as a genuine thought partner, but the team’s processes, meetings, and decision-making structures have not changed. The fluent members end up on an island.
In some ways, this is worse than universal low adoption. At least when nobody is using AI, the team is aligned in their way of working. When a few members leap ahead without the collaborative infrastructure to support it, you get fragmentation. The AI-fluent people get frustrated because they can see what is possible but cannot bring the team along. The rest of the team feels left behind or skeptical. The organization gets pockets of individual productivity gains that never compound into team-level or org-level value.
This is the single biggest blind spot in the “train the champions” approach that many organizations default to. Champions without a collaborative model just become isolated experts.
Here is the part that most AI adoption strategies completely miss: the highest-value applications of AI are not individual. They are collaborative.
Gartner’s research ranks collaboration as the #2 skill IT workers need, at 47%, right behind AI/GenAI itself at 53%. That is not a coincidence. As AI handles more of the execution work, the human work that remains is increasingly about alignment, decision-making, and working across functions. The ability to think together becomes more important precisely because the machines handle more of the thinking alone.
The Anthropic data reinforces this from a different angle. Their report distinguishes between “automation” (delegating a task to AI) and “augmentation” (using AI as a thought partner for more complex, creative, or strategic work). On the consumer platform, augmentation already accounts for 53% of usage. Experienced users disproportionately favor augmentation over pure automation. They have learned that the real value is not in having AI do something for you. It is in having AI think with you.
But thinking with AI is a multiplayer activity. When a team uses AI to generate options, stress-test a strategy, or prototype a solution, the output is only as good as the process that surrounds it. More inputs and faster inputs can actually slow alignment down if the process is broken. A team that cannot align on a decision without AI is not going to align any faster with it. They are just going to generate more options to disagree about.
This is where most organizations have a gap they cannot see. They are investing in individual AI skills while ignoring the collaborative infrastructure that makes those skills productive at scale. They are optimizing the nodes while neglecting the network.
The organizations that are getting real value from AI are not running better training programs. They are redesigning how teams work together. That is what AI upskilling actually looks like in practice.
The shift that matters is moving from AI as a tool to AI as a toolmate, a participant in the collaborative process rather than something individuals use in isolation. This shift is still so new that most teams do not have models for it yet. “Where do we start beyond the single-player approach?” is the question we hear most often. But when you provide those models, when you show teams what collaborative AI actually looks like in practice, excitement builds fast. People can suddenly see what is possible.

We saw this recently with a client whose previous AI training had focused entirely on individual use cases. Adoption was uneven, value was scattered, and the team could not connect their individual AI experiments to meaningful outcomes. When we introduced collaborative AI and AI toolmates, working with AI as a team rather than as individuals, it was a major unlock. Both the teams and executives saw the shift in real time. The difference was not better training. It was a fundamentally different model for how AI gets used.
Different roles also need fundamentally different AI strategies. Experts need AI that extends their capacity. People still building expertise need AI that accelerates their learning without starving them of foundational experience. A one-size-fits-all training program is the opposite of what any of them need.
The Anthropic data points to the same conclusion from the user behavior side. Their researchers found that high-tenure users actually grant AI lower autonomy, not higher. They stay more involved, iterate more, and get better results because of it. The best AI users are not the ones who have learned to delegate everything. They are the ones who have learned when to push back, when to redirect, and when to go deeper.
That kind of fluency does not come from a training module. It comes from practice in a structured environment, with feedback, with real stakes, and ideally with other people learning alongside you. Think of it as AI fitness, not AI training. A gym metaphor rather than a classroom metaphor. You do not get fit by attending a lecture about exercise. You get fit by showing up consistently and doing the work.
The urgency here is not abstract. It is compounding.
Anthropic’s data shows that the skills gap between experienced and new AI users is hardening into something more structural. Washington, D.C., where the population skews highly educated, has AI adoption rates four times what you would expect for a city of its size. Globally, the top 20 countries account for 48% of per-capita AI usage, and that concentration is increasing. The people who started early are pulling further ahead. The organizations that figured this out first are building advantages that will be very difficult to close.
Gartner predicts that by 2027, 75% of hiring processes will include AI proficiency testing. At the same time, the atrophy of critical thinking skills due to GenAI use is already pushing organizations toward “AI-free” skills assessments. The workforce is bifurcating: people who can work with AI as a genuine collaborator, and people who either cannot use it effectively or have let it do their thinking for them.
59% of the workforce needs brand new skills in the next two to three years. That is not a number that gets solved by scaling up existing training approaches. It requires a fundamentally different design.
The question is not “how do we train people on AI.” The question is “how do we redesign how teams work together when half the team is agents.” That reframing is the gap between AI training and real AI upskilling.
That is a facilitation challenge, not a technology challenge. It requires someone who understands how groups make decisions, how trust gets built (and broken), how to create the conditions for people to develop new capabilities through practice rather than instruction.
32 million jobs will be transformed per year due to AI. Gartner estimates that managing this transformation requires 20 times more organizational effort than managing job losses. That is the single most important stat in all of this research. The hard part is not the technology. It is the organizational design work that makes the technology productive.
Today, 80% of IT work is done by humans without AI. By 2030, Gartner projects that 75% will be done by humans with AI, and 25% by AI alone. That transition does not happen through training programs. It happens through deliberate redesign of how people and AI work together, role by role, team by team, process by process.
The organizations that treat AI adoption as a training problem will keep buying licenses that do not get used, running workshops that do not stick, and watching the gap between their AI-fluent employees and everyone else widen. The organizations that treat it as a design problem, one that requires rethinking collaboration, decision-making, and how people learn together, will be the ones that capture the real value.
The tools are ready. The question is whether your organization is designed to use them.
If you are rethinking how your teams work with AI and want to explore what a design-first approach to AI upskilling looks like, let’s talk.
Why AI Adoption Fails
Every AI transformation leader is hearing the same things right now. “People aren’t using AI together at the levels we hoped for.” “We’re not seeing the ROI.” “Our people aren’t ready.” “Workflows are still broken.”
The instinct is to blame the technology. The models aren’t accurate enough. The data isn’t clean. The vendor oversold the product. Sometimes those things are true.
But after working with leadership teams across dozens of AI transformations, a different pattern keeps emerging. The technology works fine. What breaks is everything around it: the conversations that never happen, the trust that erodes silently, the governance nobody wants to own, the roles shifting beneath people’s feet, and the talent pipeline quietly collapsing. These are organizational frictions, not technical ones. They are the actual reason most AI adoption efforts underperform.
Gartner estimates that 32 million jobs will be transformed per year by AI, and that managing transformation at that scale requires 20x more organizational effort than managing job losses. That ratio reframes the challenge. The problem isn’t whether AI can do the work. It’s whether your organization can handle what happens when it does.
Here are the five frictions that determine whether an AI initiative creates value or just creates chaos.
AI collapses execution time. A task that took a team two weeks now takes two minutes. Code writes itself. Reports generate instantly. Analysis that required a dedicated analyst happens in a single prompt.
This sounds like pure upside until you realize what it exposes. When execution was slow, it masked a deeper problem: most teams never fully agreed on what they were building or why. The two-week timeline gave people room to course-correct, to gradually align through iteration. Remove that buffer, and the misalignment becomes immediate.
The bottleneck was never the execution. It was the conversation before the execution.
A product team uses AI to generate three prototype concepts in an afternoon. Previously, building one concept took a sprint. Now the constraint isn’t building, it’s deciding. Which concept? For which user? Against which strategic priority? Five people in a room with competing assumptions, and the AI is just sitting there, ready to build whatever they agree on.

Only 14% of organizations have clear alignment between business users, IT, and executives about what problems AI can even solve. That’s not a technology gap. That’s a consensus gap. And the organizations that close it are three times more likely to report significant value from their AI tools.
The speed AI provides is wasted without the ability to decide what to do with it. Decision rights, not processing power, are the new rate limiter. The organizations pulling ahead aren’t the ones with the best models. They’re the ones that have restructured how they make decisions together, fast enough to keep pace with what the technology now makes possible.
There is a perception gap at the center of most AI strategies, and it is wider than anyone wants to admit.
Executives are four times more likely to report high AI productivity gains. Individual contributors are five times more likely to say AI made no difference. These aren’t minor variations in optimism. These are fundamentally different realities operating inside the same organization.
The trust problem runs deeper than skepticism about the tools. 78% of employees don’t know whether they’ll lose their job to AI. Only 12% feel involved in decisions about how AI gets deployed in their work. And 80% believe their organization is actively trying to replace them. Whether that belief is accurate is almost beside the point. It shapes behavior. People who believe they’re being replaced don’t experiment with new tools. They protect their territory. They withhold the institutional knowledge that makes AI implementations actually work.
This isn’t irrational. It’s a reasonable response to an information vacuum. When leadership talks about “transformation” and “efficiency gains” without naming what happens to the people doing the work being transformed, employees fill the silence with the worst-case scenario.
The psychological mechanism matters here. Executives authorized the AI investment. They have cognitive skin in the game to believe it’s working. Frontline workers read the headlines about displacement. They have cognitive skin in the game to discount the benefits. Neither side is lying. Both are filtering the same reality through different stakes.
Closing this gap requires more than a town hall and a FAQ document. It requires genuine involvement: workers participating in how AI reshapes their roles, not just being informed after the decisions are made. The organizations getting this right, like Vizient, are asking their workforce directly: what work do you want to do? What work do you hate? Then they’re designing AI-augmented roles around those answers. That’s not a communication strategy. It’s an organizational design strategy. And it produces something no amount of messaging can manufacture: actual trust.
Here’s a paradox that shows up in almost every organization we work with: 70% of IT leaders cite security, governance, and compliance as the number one blocker for large-scale AI deployment. And over 50% say their primary risk mitigation strategy is simply blocking or restricting AI use.
Read that again. The dominant strategy for managing AI risk is preventing people from using AI. That’s not governance. That’s abdication dressed up as caution.
The real problem isn’t that organizations don’t want governance. It’s that governance requires the kind of cross-functional conversation that most organizations are structurally bad at. Security teams, digital workplace leaders, business unit heads, legal, and HR all have legitimate stakes in how AI gets used. In many organizations, these teams have never been in a room together. One Gartner analyst described discovering that the security team and the digital workplace team at a client had a stronger relationship with him, as an external consultant, than they had with each other.
Governance isn’t a document you write. It’s a set of ongoing agreements about acceptable use, risk tolerance, data access, and escalation. Those agreements require facilitation. They require someone who can hold competing interests in the same conversation without letting any single stakeholder dominate.
The organizations doing this well treat governance as an enabler, not a blocker. Adidas built a three-tier model: Standard use (low risk, go ahead), Conditional use (needs review), and Forbidden use (hard stop). That framework didn’t emerge from a policy memo. It emerged from structured conversations between technologists, business leaders, and risk managers who had to negotiate what each tier actually meant in practice.
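A tiered policy like that can end up as something very simple in code. The sketch below is a hypothetical rendering of a Standard / Conditional / Forbidden model in Python; the example use cases and their tier assignments are invented, and the real value of the Adidas framework was the negotiation that produced the table, not the table itself.

```python
from enum import Enum

# Hypothetical three-tier AI use policy, in the spirit of the model
# described above. Use cases and tier assignments are invented examples.

class Tier(Enum):
    STANDARD = "low risk, go ahead"
    CONDITIONAL = "needs review"
    FORBIDDEN = "hard stop"

POLICY = {
    "summarize public documentation": Tier.STANDARD,
    "draft external customer emails": Tier.CONDITIONAL,
    "upload unreleased product data": Tier.FORBIDDEN,
}

def check(use_case: str) -> Tier:
    # Unclassified use cases default to review, not to a silent yes or no.
    return POLICY.get(use_case, Tier.CONDITIONAL)

print(check("summarize public documentation").value)    # low risk, go ahead
print(check("train a model on client contracts").value) # needs review
```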

Meanwhile, 70% of IT leaders are deeply concerned about agent sprawl, and only 13% say they have the internal governance to manage it. Microsoft projects 1.3 billion AI agents by 2028. Every one of those agents will need guardrails, and those guardrails won’t come from the technology layer. They’ll come from organizational agreements about what agents can and cannot do. That’s a facilitation problem masquerading as a technology problem.
The conversation about AI and jobs has been dominated by a binary: will AI take my job, yes or no? That framing misses what’s actually happening. AI isn’t eliminating most roles. It’s reshaping them in ways that nobody is explicitly addressing.
When AI handles the routine components of a role, what’s left is the judgment work, the relationship work, the ambiguity-navigation work. For some people, that’s the part of the job they’ve always wanted to do more of. For others, the routine work was the job. It was the source of their competence, their identity, their value to the organization.
56% of CEOs plan to use AI to delayer middle management within five years. That’s not a future scenario. That’s an active planning assumption in more than half of the C-suites in the economy. And the people in those middle management roles? Most of them haven’t been told.
The identity friction shows up as resistance that looks irrational from the outside. A senior analyst who refuses to use an AI tool that could cut their research time in half. A project manager who insists on manual status updates when automated dashboards exist. A team lead who keeps scheduling coordination meetings that an AI scheduling tool has already made redundant. These aren’t Luddites. These are people whose professional identity is tied to the work that’s being automated, and no one has helped them construct a new identity around the work that remains.
This is where the psychological weight of AI transformation lives. Most change management frameworks treat resistance as an adoption problem: just show people the tool, train them, incentivize them. But when the tool threatens not just how you work but who you are at work, training doesn’t address the actual barrier. The barrier is existential, not operational. A financial analyst whose identity is built on being the person who can build the most complex Excel model doesn’t want to hear that an AI can do it in seconds. Not because they doubt the AI. Because they don’t know what they are without that skill.
97% of CEOs say they want leaders who can combine human capabilities with machine capabilities. But combining requires first understanding what the human capabilities actually are in a post-AI context. That demands honest, often uncomfortable conversations about which parts of each role are genuinely human and which parts were always just execution waiting to be automated.

The organizations navigating this well are doing something specific: they’re involving workers in the redesign of their own roles before deploying the technology. Not after. Not as an afterthought. As the starting point. What work do you find meaningful? What work drains you? Where does your judgment matter most? Those questions produce better role designs than any top-down restructuring, and they give people agency in a moment that otherwise feels like something being done to them.
Most organizations are skipping those conversations entirely. They deploy AI into roles without redesigning the roles themselves, then wonder why adoption stalls. The technology isn’t the problem. The absence of a conversation about what people become after the technology arrives is the problem.
This is the friction with the longest fuse and the biggest blast radius.
AI doesn’t primarily take entry-level jobs away from junior workers. It enables senior workers to do the entry-level work themselves. An experienced engineer uses AI to generate the boilerplate code that a junior developer would have written. A senior analyst uses AI to do the data cleaning that a research assistant would have handled. The junior role still exists on paper, but the learning path through it has been hollowed out.

This is happening on the record at the C-suite level. Tracey Franklin, Moderna’s Chief People and Digital Technology Officer, told the Wall Street Journal that the company has built more than 3,000 custom GPTs to handle work that previously was routed to people. On the HR side, she put it plainly: “It’s like your virtual HR, AI agent. It’s what would normally be a junior-level HR analyst type; we’ve now converted it into a GPT.” The same month she said this, Moderna announced it was cutting 10% of digital technology jobs. She declined to specify which roles.
This is experience starvation: the systematic removal of the low-stakes, high-repetition work that builds professional judgment. The apprenticeship model, where junior people learn by doing progressively more complex work under expert supervision, depends on there being work at every level of complexity. AI is compressing the bottom of that ladder.
The evidence is already visible. Almost half of HR leaders report seeing signs of talent pipeline collapse. The World Economic Forum estimates 59% of the workforce needs fundamentally new skills in the next two to three years. And the Anthropic Economic Index shows that experienced AI users, those with six months or more of practice, achieve measurably better outcomes in their AI interactions. That’s the fluency gap in action: the people who already have professional judgment use AI to amplify it, while newcomers who haven’t built that judgment use AI as a crutch that never develops into competence.
The distinction that matters is between automation and augmentation. Automation delegates a task to AI. Augmentation uses AI as a thought partner for complex, creative, or strategic work. Experienced professionals gravitate toward augmentation. Newcomers default to automation. The gap between those two modes of use is where organizational capability either compounds or erodes.
There’s a concept that captures the core issue: discernment. It’s the accumulated ability to assess whether an AI output is correct, verifiable, and useful. An experienced professional reads an AI-generated analysis and immediately spots what’s plausible but wrong. A newcomer reads the same analysis and accepts it because it looks authoritative. Discernment can’t be trained in a workshop. It develops through years of doing the work that AI is now absorbing.
By 2028, Gartner projects that 40% of workers will be mentored first by AI, not by humans. Whether that produces capable professionals or a generation of workers who can prompt but can’t think depends entirely on how organizations design the experience. Some are already building the replacement: GenAI simulators that create realistic practice environments for high-stakes work. One insurance company using this approach saw an 85% skill increase and a 75% reduction in certification failures. But these solutions don’t emerge spontaneously. They require deliberate organizational choices about how people develop, and those choices require the kind of cross-functional consensus that brings us back to friction number one.
Every one of these five frictions shares a common root: they can’t be solved by the technology that created them. AI can’t facilitate the consensus conversation your leadership team is avoiding. It can’t rebuild trust between executives and a workforce that feels excluded. It can’t negotiate the governance agreements that require competing stakeholders to find common ground. It can’t help someone reconstruct their professional identity. And it can’t design the developmental experiences that build the next generation of your workforce.
These are human problems. Specifically, they are facilitation problems, problems of getting groups of people with different stakes, different information, and different fears to work through hard questions together and arrive at decisions they can actually execute.
We saw this firsthand working with Church & Dwight. When both teams and executives were in the room together, experiencing and witnessing teams using AI collaboratively, buy-in happened in real time. Not because someone presented a deck about the benefits of AI adoption. Because people saw each other working through the friction together, and both sides realized the obstacle was organizational, not technical. That kind of shared experience is something no rollout plan can replicate.
The organizations that treat AI adoption as a technology deployment will keep failing at it. The organizations that treat it as an organizational transformation, one that requires redesigning how people decide, trust, govern, grow, and work together, will capture the value that everyone else is leaving on the table.
The friction has moved. It’s no longer in the execution of the work. It’s in the human dynamics surrounding it. Right now, that friction is where most organizations are stuck, and it’s where the actual competitive differentiation is happening. The companies pulling ahead aren’t the ones with the biggest AI budgets. They’re the ones that figured out how to have the hard conversations: about priorities, about trust, about governance, about what people become when the nature of their work changes.
This is the new friction. Not forever, because the specific challenges will evolve as the technology matures and organizations adapt. But right now, in this moment of transformation, the friction that determines whether your AI investment creates value or destroys trust is organizational, not technical. It lives in your meeting rooms, not your server rooms.
The question isn’t whether your AI tools are good enough. They are. The question is whether your organization can have the conversations that make those tools actually matter.
If you are leading an AI transformation and recognize any of these five frictions in your own organization, that recognition is the first move. The second move is design: building the structures, conversations, and shared experiences that work the friction instead of avoiding it. That’s the work we do at Voltage Control. Start with our New Friction primer for the full framework, or reach out if you want to talk about where your organization is stuck.
Why do most AI adoption efforts fail?
Most AI adoption fails for organizational reasons, not technical ones. Five frictions consistently break the rollout: leaders haven’t aligned on what problems AI should solve, the workforce doesn’t trust the strategy, governance is stuck in a default of restriction, roles are shifting without anyone naming the change, and the talent pipeline that produced experienced workers is quietly hollowing out. The technology layer is rarely where the actual failure happens.
What are the biggest barriers to AI implementation?
The biggest barriers are conversational. Only 14% of organizations have clear alignment between business, IT, and executives on what problems AI can solve. 78% of employees don’t know whether they’ll lose their job to AI. 70% of IT leaders cite governance as their top blocker, while more than half respond by simply restricting AI use. None of these are problems the technology can solve.
How do you build consensus around AI initiatives?
Consensus on AI requires structured cross-functional conversation, not a town hall. Bring business, IT, and executive leaders into the same room early, define which problems AI is being deployed to solve, and clarify how value will be measured before tools roll out. Organizations that do this are three times more likely to report significant value from their AI investments.
What governance structure do you need for AI adoption?
Effective AI governance is a set of ongoing agreements, not a static policy document. The organizations doing this well, like Adidas with its three-tier Standard / Conditional / Forbidden model, build governance through structured negotiation between security, legal, HR, and business leaders. The structure matters less than the cross-functional facilitation that produces it.
How do you address employee trust issues with AI?
Trust is built by giving the workforce agency in how AI reshapes their roles, not by communicating decisions after they’ve been made. Ask workers directly which work they want to do, which work drains them, and where their judgment matters most. Then redesign roles around those answers before deploying the technology. That sequence builds the involvement that messaging alone cannot manufacture.
The post Why AI Adoption Fails appeared first on Voltage Control.
]]>The post When Execution Takes Zero Time, Human Collaboration Will Be Your Only Bottleneck appeared first on Voltage Control.
]]>We’ve spent the last year talking about how human collaboration is the real friction point of AI adoption. But let’s push that thinking further.
If generative models continue on their current trajectory, eventually the actual execution of almost every corporate task will be automated. The code will write itself. The reports will generate instantly. The logistics will just self-optimize.
When the execution of work takes zero time, the only true bottleneck left in the corporate world will not be processing power or technical capability.
It will be human.
That is a massive shift, and it reframes what AI decision making actually requires.
Think about what slows down your organization today. Yes, there’s execution time—the hours spent writing code, building presentations, analyzing data, coordinating schedules. But underneath all of that is something slower and stickier: the time it takes for people to decide what to do and agree on why they’re doing it.
Most leaders recognize this in theory. In practice, we’ve built entire organizations around the assumption that execution is the constraint. Teams are organized by function. Success metrics measure output volume. Meetings exist to coordinate work that takes time to complete.

That assumption is collapsing.
AI is rapidly eliminating execution time. But it’s not eliminating the need for human judgment, strategic thinking, or interpersonal alignment. If anything, it’s making those capabilities more valuable because they’re about to become the only thing that determines your velocity.
Consider what happens when a task that once took your team two weeks now takes two minutes. The work itself isn’t the bottleneck anymore. The bottleneck is the conversation before the work. The bottleneck is getting five people in a room to agree on what “good” looks like. The bottleneck is navigating the power dynamics, hidden agendas, and competing priorities that exist in every organization.
In a world where AI decision-making is gated on human collaboration, the leader who knows how to facilitate—who can control the voltage of a room and align competing egos, priorities, and worldviews—will be the one holding all the cards.
Most organizations are still structured around execution. Your org chart maps to who does what. Your meetings exist to coordinate parallel work streams. Your KPIs measure throughput.
But if the tasks themselves become instantaneous, what’s the point of the org chart? What are we actually measuring? What are meetings even for?
The answers start to look fundamentally different.
Teams will organize around decision rights, not task execution. The question won’t be “who builds this?” but “who decides what we build and why?” Entire functions that exist today to coordinate execution will need to justify their purpose differently. The role of middle management shifts from task coordination to sensemaking and alignment.
Success metrics will shift from output volume to decision quality and speed. How fast can your leadership team converge on a strategic direction? How often do you revisit decisions because the group wasn’t actually aligned the first time? How much organizational energy gets burned in rework and misalignment? These become your performance indicators.
Meetings will exist to build shared understanding, not coordinate logistics. The status update meeting dies completely. The “let’s align on this” meeting becomes your highest-leverage activity. The quality of your meeting facilitation becomes a competitive advantage.
This isn’t some distant future. It’s already happening in pockets.
We’ve worked with leadership teams that have reduced their decision cycles from weeks to days by redesigning how they deliberate together. We’ve seen product organizations cut sprint planning time in half by introducing better frameworks for negotiating priorities. The teams that are winning aren’t just faster at execution. They’ve fundamentally restructured how they make decisions together.
Here’s what makes this shift so interesting: the skills that will matter most are not technical.
They’re human.
The ability to frame a decision clearly so everyone in the room is solving the same problem. The ability to surface the real disagreement underneath the surface-level debate—because what sounds like a tactical argument is usually a values conflict in disguise. The ability to create the conditions where competing perspectives can actually be synthesized rather than just compromised into mediocrity.

The ability to know when to push for resolution and when to let tension be productive. The ability to read power dynamics and make space for the voices that aren’t being heard. The ability to hold a group’s attention on the hardest question until something real emerges.
These are not skills that AI can replicate. These are skills that exist in the realm of human presence, intuition, and relationship. And these are not skills that most organizations have invested in systematically.
Walk into most leadership meetings and watch what happens. Someone presents an idea. A few people react. The loudest voices dominate. The quieter people check out. Side conversations start. The meeting ends without a clear decision, or with a decision that no one really believes in, or with an agreement that will unravel the moment people leave the room.
This is the tax that poor facilitation extracts. It’s been expensive for decades. It’s about to become catastrophic.
Because in an AI-accelerated world, that tax is the only tax left. The technical execution happens instantly. The delay between decision and reality collapses. The only thing standing between you and the outcome is the quality of human alignment.
The organizations that have invested in facilitation capability—that have trained their leaders to run rooms well, that have built cultures where productive conflict is expected and valued, that have made decision-making design a strategic priority—those organizations are about to see their investment compound.
You don’t have to wait for AI to reach its full potential to start building this muscle. The opportunity is already in your calendar.
Look at your leadership team’s meeting schedule for the next month. How many of those meetings are designed to actually produce a decision? How many have clear decision-making methods attached to them? How many leave space for dissent and synthesis rather than just debate and voting?
Most organizations run meetings the way they always have. Someone puts together an agenda. People show up. Someone talks. Other people react. Time runs out. The meeting ends with action items that may or may not reflect real alignment.
This approach worked—barely—when execution took time because there were natural checkpoints where misalignment would surface. You’d discover that two teams interpreted the decision differently when they came back with different work products. You’d course-correct. It was slow and expensive, but it was survivable.
When execution takes zero time, you don’t get those checkpoints. The misalignment doesn’t surface until the work is done (which is now instantly). You’ve burned velocity on the wrong thing before you even knew you were misaligned.
The fix isn’t better AI tools. The fix is better decision-making design.
That means introducing frameworks that make agreement visible. That means using consent-based methods where appropriate instead of defaulting to consensus or executive fiat. That means structuring pre-mortems and dissent protocols into your process. That means getting comfortable with the silence that happens when you ask a room to actually think instead of just react.
We’ve seen leadership teams cut their decision-making time by 40 to 60 percent by doing nothing more than redesigning how they facilitate their existing meetings. No new technology required. Just better process design and the courage to run a room differently.
If you’re a VP or above, this is on you. You can’t delegate decision-making design to HR or to a facilitator you bring in for offsites. Those resources help, but the muscle has to be internal and distributed.
That means three things: invest in facilitation training for your leaders rather than outsourcing it, redesign the decision-making processes your teams run every week, and model the behavior yourself by running your own rooms differently.
The teams that do this now—while execution still takes time—will have a compounding advantage when execution becomes instantaneous. They’ll have built the reflexes and the trust required to move fast together. They’ll have learned how to disagree productively. They’ll have discovered which methods work for their culture and which don’t.
The teams that don’t will still be trying to figure out why they’re stuck in the same meetings they’ve always been stuck in, except now the stakes are higher because the market is moving faster.
There’s a deeper question underneath all of this, and it’s not about process. It’s about culture.
Most organizations say they want faster AI decision making. What they actually want is faster execution with the same decision-making culture. They want the speed without the discomfort of real deliberation.
But you can’t have it both ways.
Fast consensus requires psychological safety. It requires a culture where dissent is not just tolerated but actively invited. It requires leaders who can hear “I disagree” without interpreting it as disloyalty. It requires teams that trust each other enough to move forward even when not everyone is 100 percent convinced.
This is not the culture most organizations have built. Most organizations reward certainty over curiosity. They reward alignment over authenticity. They reward the appearance of consensus over the reality of synthesis.
If your culture punishes dissent, AI will just automate your way into faster bad decisions.

If your culture can’t distinguish between productive and unproductive conflict, you’ll spend all your newfound execution speed on rework.
If your leadership team doesn’t trust each other, no facilitation technique will save you.
The good news is that culture is malleable. It changes through practice. The way you run your meetings teaches your organization what behavior is valued. If you start running meetings that invite dissent, reward synthesis, and hold space for real thinking, your culture will start to shift.
The leaders who understand this are already building it. They’re not waiting for a mandate. They’re redesigning their own team’s rituals. They’re modeling what good facilitation looks like. They’re creating the conditions where others can practice it too.
Because they know that when execution takes zero time, culture is the only moat left.
Let’s be clear about what happens if you don’t invest in this.
Your competitors will. The organizations that figure out how to facilitate alignment faster will make better decisions faster. They’ll out-maneuver you. They’ll attract better talent because their meetings actually work. They’ll compound their advantage every quarter while you’re still stuck in the same decision-making patterns you’ve had for years.
You’ll have all the same AI tools they have. You’ll have the same access to instant execution. The difference won’t be technical. The difference will be human.
And here’s the thing: you can’t buy your way out of this gap. You can’t license decision-making capability. You can’t acquire good meeting culture. This has to be built internally, from the top down and the inside out.
The organizations that start now—that invest in facilitation training, that redesign their decision-making processes, that build cultures where real thinking is valued over performance—those organizations will dominate their industries.
The organizations that wait will spend the next five years wondering why they’re not moving faster despite having all the same technology as everyone else.
If this resonates and you’re not sure where to begin, start with one thing: your next contentious leadership decision.
Don’t run the meeting the way you normally would. Design it differently. Bring in a facilitator if you have one. If you don’t, read up on consent-based decision-making or Liberating Structures and try one. Build in time for real dissent. Create space for synthesis, not just debate.
Then debrief it. What worked? What didn’t? What did you learn about how your team actually makes decisions? Where did you feel the friction? Where did you feel the flow?
Do that ten times and you’ll start to see patterns. You’ll start to build the reflexes. You’ll start to discover what your organization actually needs to decide faster.
This isn’t a one-time workshop. It’s a practice. The same way you’ve built practices around quarterly planning or performance reviews, you need to build practices around decision-making design.
The organizations that treat this as a strategic priority—that invest in it, measure it, and iterate on it—will be the ones that thrive in an AI-accelerated world.
Because when execution takes zero time, the only thing left between you and the outcome is the quality and speed of human collaboration.
And the leader who can facilitate will be the one holding all the cards.
The post When Execution Takes Zero Time, Human Collaboration Will Be Your Only Bottleneck appeared first on Voltage Control.
]]>The post Problems Are Old, Speed Is New appeared first on Voltage Control.
]]>The AI wave feels brand new. The problems underneath it don’t. AI’s rapid advances are reshaping how work gets done, what’s possible, and how fast the future arrives. Yet under all that novelty sits something stubbornly familiar. Alignment. Behavior change. Decision quality. Adoption. These are the age-old challenges that have defined organizational life for decades. They’re not new problems; AI is simply putting them under a brighter, hotter light.
Many of the rituals and structures we rely on were inherited, not designed—remnants of Taylorism and top‑down models, with a dash of military metaphor thrown in for good measure. Think about how often we hear terms like action items and ammunition for a pitch. Even if we didn’t consciously lift these patterns from the factory floor or the command center, they’ve seeped in. We carry them from role to role, re-enacting them in new environments where they’re ill‑fitted to knowledge work, creativity, and human-centered problem solving.
Over the last decade, many teams began the long, important shift toward human-centered work. That project isn’t finished. Meanwhile, AI has changed the context around us: more inputs, more interdependencies, and far faster cycles. The result is a tangle of legacy habits, incomplete cultural transformation, and a new force multiplier. The fundamentals of good facilitation and design of team systems still apply. What’s different now is the cost of not applying them.
The work of clarifying purpose, roles, decision rules, and rituals isn’t a “nice to have” anymore. It’s the foundation that lets AI make your team better instead of magnifying dysfunction. Without it, the same old patterns will keep producing the same old outcomes—only now they’ll arrive at a speed that can overwhelm even high-performing teams.
What’s truly new about AI is the speed of change and the compounding nature of its effects. The “fast follower” posture that was viable for past technology shifts doesn’t work here. If you wait for standards to stabilize, you’ll miss months (or years) of capability building your competitors are banking. Learning has to become a core organizational muscle, not an initiative. The window between early adoption and obsolescence is narrowing.

Speed can be a gift. AI-enabled teams can spin up prototypes in hours, synthesize complex inputs in minutes, and ship with tighter feedback loops. But speed is neutral—it accelerates whatever it touches. Apply AI to a broken handoff and you don’t fix the handoff; you scale the chaos. Take a siloed process and add automation and you don’t remove the silo; you create automated isolation. The same reinforcing loops that can catapult a healthy system can drive a fragile one to failure.
We often met teams with what we called a leaky faucet problem. Yes, it dripped. Yes, everyone noticed. But you could manage it with a bucket and some tape. You could hide the waste in the margins. AI turns that drip into pressure. It builds behind the surface until one day the levee breaks. What was tolerable friction becomes an existential constraint. When a small leak scales, “business as usual” screeches to a halt.
This is why so many leaders and facilitators are feeling the urgency right now. The problems aren’t new, but their consequences arrive faster and ripple further. It’s no longer sufficient to “know about” the leak; you need to find it, fix it, and redesign the system so you don’t spring another one two steps downstream. If you do this well, AI becomes an amplifier for clarity, flow, and value creation. If you don’t, it scales confusion.
If speed is neutral, cadence is how we give it purpose. Think of AI as a highly capable teammate that can sprint faster than anyone on your roster. The job of the facilitator is to design the practice field where that speed pays off and doesn’t run the team ragged. That means deliberately alternating between fast and slow modes: call on AI to generate or synthesize quickly, then slow down together to react, refine, and align.
Live synthesis is a superpower here. Many teams lack a consistent, fast synthesis muscle. Even strong synthesizers vary with energy, time of day, and workload. AI can provide a reliable baseline in the moment—capturing themes, options, and decisions while context is warm—so the team can react rather than rehash. You get the benefits of working “while the clay is wet,” without over-relying on a single person’s bandwidth.
Visible work becomes essential in this new cadence. Text alone is too linear and narrow for the complexity we’re navigating. Visual maps, canvases, and blueprints help teams create a shared reality—one that humans and AI can reference. If it’s ambiguous to a colleague, it will be ambiguous to your AI teammate. Tools like Miro let you turn a messy conversation into a shared model in real time; then you can hand that model to AI for targeted processing, scenario generation, or risk identification.
There’s also a delightful side effect: good prompting is just good communication. Teaching teams to brief AI with clearer intent, constraints, and success criteria is the same skill that improves human collaboration. We’ve seen groups adopt prompt hygiene—defining terms, naming assumptions, clarifying audience—and, almost by accident, elevate their everyday cross-functional dialogue. AI becomes a mirror for your clarity. What confuses the model often confuses your colleagues, too.

This month we’re spotlighting the Ways of Working Assessment because it delivers what March’s theme demands—a fast, focused way to surface leaks, align on fixes, and set a foundation where AI enhances rather than amplifies dysfunction. If you haven’t seen it yet, watch the quick overview: https://vimeo.com/899513366?share=copy&fl=sv&fe=ci
At its core, the assessment inventories how work actually gets done today. We capture the real rituals, decision rules, handoffs, briefs, and artifacts—not the idealized SOP version sitting on a wiki. We’re looking for two things: the healthy patterns to elevate and scale, and the bottlenecks or ambiguities that drive rework downstream. Artifacts like service blueprints and journey maps emerge, but they’re fed by lived experience, not theoretical flowcharts.
A simple shift unlocks rich insight: instead of asking “How does onboarding work here?” we ask “Walk me through the last time you onboarded someone.” Memory is sticky; it surfaces the tacit steps, workarounds, and unwritten rules that never make it into a process doc. We follow the timeline—who was involved, what was unclear, where the delays crept in, why the handoff failed—and we capture it visually so the whole team can see the same movie, not argue about the script.
From there, we prioritize together. Which one practice, if upgraded now, would reduce the most downstream rework? What would visible progress look like in two weeks? Where does AI belong in this flow—as a teammate, as a co-pilot, or not at all? This is where we start distinguishing human-in-the-loop moments, AI-augmented steps, and no-fly zones. The outcome isn’t a binder; it’s a shortlist of prototypes that teams can try immediately, with crisp measures of success. Culture lives in practice, so we practice differently—on purpose, in small loops that compound.
First, establish a roles and rituals charter that includes your AI teammates. Don’t bolt AI onto your old structure; integrate it into your system intentionally. Identify the core moments in your value stream—discovery, synthesis, decision, handoff, quality—and define who or what leads, who consults, and who validates at each step. Be explicit about what AI does and why. For example: “During weekly intake, AI generates a first-pass classification of requests and a risk heatmap; the PM adjusts classification and confirms risk with Legal for anything flagged above medium.” That level of clarity reduces ambiguity and builds trust.
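To see what that level of clarity can look like as an artifact, here is a minimal sketch of a charter encoded as data rather than prose. It is illustrative only: the moments, role names, and AI scopes are hypothetical placeholders extrapolated from the intake example above, not a prescribed schema.

# A roles-and-rituals charter as a lightweight data structure.
# Every moment, role name, and scope below is a hypothetical placeholder.
charter = {
    "weekly intake": {
        "leads": "AI",            # first-pass classification + risk heatmap
        "consults": ["PM"],       # adjusts classification
        "validates": "Legal",     # confirms anything flagged above medium risk
        "ai_scope": "drafts and flags; never approves",
    },
    "decision memo": {
        "leads": "AI",            # drafts from the decision brief
        "consults": ["driver", "stakeholders"],
        "validates": "approver",
        "ai_scope": "proposes options and trade-offs only",
    },
}

for moment, roles in charter.items():
    print(f"{moment}: {roles['leads']} leads, {roles['validates']} validates")

Even as pseudocode on a whiteboard, writing the charter this way forces the who-leads, who-consults, who-validates questions to be answered explicitly at each step.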
Second, operationalize decision clarity using consent-based methods. In fast-moving contexts, decisions get stuck between consensus and command. Try consent: “Is it safe enough to try for now, and can we revisit soon?” Pair it with clear decision types (reversible vs. irreversible), a lightweight advice process, and crisp roles (driver, approver, consulted, informed). Write your decision rules down as prompts and checklists. AI can help here by generating the initial decision brief, listing trade-offs based on your criteria, and drafting communication to stakeholders. But you must define the guardrails: where human judgment is required, what risks are unacceptable, and who owns the outcome.
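Since the advice is to write decision rules down as prompts and checklists, one way to make the consent logic unambiguous is to capture it in executable form. The following is a sketch under assumed rules, not a definitive implementation; the roles, decision, and human-only category are placeholders.

from dataclasses import dataclass

@dataclass
class DecisionRule:
    # A lightweight "decide how to decide" record for one recurring decision.
    decision: str
    reversible: bool          # reversible calls can move on consent
    driver: str               # moves the decision forward
    approver: str             # owns the outcome
    human_only: bool = False  # ethics, safety, people decisions, etc.

def safe_to_try(rule: DecisionRule, objections: list) -> bool:
    # Consent check: proceed now if no one claims it is unsafe to try.
    # Irreversible or human-only decisions always escalate to the approver.
    if rule.human_only or not rule.reversible:
        return False
    return len(objections) == 0

rule = DecisionRule(
    decision="prioritize next sprint's backlog",  # hypothetical example
    reversible=True,
    driver="PM",
    approver="Head of Product",
)
print(safe_to_try(rule, objections=[]))  # True: safe to try for now, revisit soon

The point is not the code; it is that encoding the rule forces the guardrails, what escalates and who owns the outcome, to be stated before the decision is live.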
Third, make synthesis and visualization a live team habit. Don’t wait for someone to write a recap doc later. During meetings, have AI capture themes and open questions while a facilitator maps the conversation visually. Close with a quick team review: what’s missing, what needs correcting, what decision is ready now versus what requires another loop. Embed a short “make it visible” cadence into your rituals: if a decision isn’t on the map, it’s not a decision. If a next step isn’t in a public tracker, it’s not a next step. AI is excellent at formatting and distributing these artifacts instantly; your job is to ensure they reflect what the team actually agreed to.
All three shifts share a pattern: intentionality beats intensity. You don’t need to work faster for speed to pay off—you need to work clearer. By formalizing how humans and AI collaborate, you reduce churn, increase throughput, and create artifacts that compound learning. Your team will feel the difference quickly. Meetings stop being places we “talk about work” and start being places we “make work visible and move it forward.”
One of the most reliable ways to break free from legacy habits is to change what you measure. If you’ve been tracking only output (tickets closed, campaigns launched), start tracking flow. Lead time from idea to value. Work in progress per person. Rework rate after handoff. Decision cycle time for reversible versus irreversible calls. These measures surface the invisible friction you’ve tolerated for years and, critically, show whether your new rituals are paying off in days, not quarters.
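As a concrete illustration of how simple these flow measures can be, here is a short sketch that computes two of them from invented work-item records; the field names and dates are assumptions for the example, not a required format.

from datetime import date
from statistics import mean

# Hypothetical work-item records: when the idea surfaced, when value
# shipped, and whether the item was reworked after a handoff.
items = [
    {"idea": date(2026, 3, 2),  "shipped": date(2026, 3, 9),  "reworked": False},
    {"idea": date(2026, 3, 3),  "shipped": date(2026, 3, 20), "reworked": True},
    {"idea": date(2026, 3, 10), "shipped": date(2026, 3, 14), "reworked": False},
]

# Lead time from idea to value, in days
lead_times = [(item["shipped"] - item["idea"]).days for item in items]
print(f"average lead time: {mean(lead_times):.1f} days")

# Rework rate after handoff
rework_rate = sum(item["reworked"] for item in items) / len(items)
print(f"rework rate: {rework_rate:.0%}")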
Set up small reflection loops to create exponential gains. At the end of each sprint or milestone, run a brief retrospective: what worked, what didn’t, what will we try next between now and the next loop? Bring AI into that loop deliberately. Have it extract patterns from your sprint artifacts, flag recurring blockers, and propose two or three lightweight experiments. The team then chooses, adjusts, and commits. Next loop, you measure the difference in flow metrics and decide whether to adopt, adapt, or abandon. This is how practice compounds over time.

As you mature, think system-wide, not just individual or team-level. We often describe an AI maturity path that starts with individual use (personal productivity), progresses to co‑piloting within teams (pairing AI with core roles), evolves to AI teammates embedded in workflows, and culminates in systemic use where cross-functional processes, data, and governance align. Each stage demands new agreements: where humans must remain in the loop, what the no‑fly zones are for AI, how you audit outputs, and how you escalate issues of bias, privacy, or safety.
Governance shouldn’t be a blocker; it should be an enabler. Lightweight policies that clarify purpose and boundaries give teams confidence to experiment. Templates for risk assessment, model selection, prompt hygiene, and result verification help busy managers make good calls quickly. Training facilitators to guide these conversations—mapping the work, designing the cadence, making the trade-offs explicit—is how you steadily raise the organization’s capacity to move at the new speed without breaking.
The big idea of March can be summed up this way: the problems are old, the speed is new. The fundamentals of how people align, decide, and create together haven’t changed. What’s changed is the tempo and scale at which consequences arrive. That means the gap that matters most is the one between knowing and doing. Everyone knows where the leaks are. The teams who win will be the ones who fix them first and redesign their systems so speed serves them, not the other way around.
If you do one thing this month, run a mini Ways of Working Assessment with your team. Start small. Pick a critical flow, like intake to delivery or discovery to decision. Map the last time you did it together. Find one leak you can patch that would reduce the most downstream rework. Define the role of AI in that moment (teammate, co‑pilot, or no‑fly) and write the decision rule that goes with it. Make the change visible. Measure the impact in two weeks. Then iterate. These steps take hours, not months, and they create artifacts you can reuse and scale.
When you design cadence on purpose, AI stops being a source of overwhelm and becomes a source of momentum. You’ll find yourself moving faster where it matters and slower where it counts. You’ll see ambiguity shrink as your team’s shared models get clearer. You’ll feel meetings transform from status theatre into decision engines. And as your practices compound, you’ll notice something else: the same clarity that makes your AI prompts better will make your cross‑functional collaboration better. That’s the kind of win that compounds quarter after quarter.
You already know where the friction is. The Ways of Working Assessment gives you a structured way to surface it, prioritize it, and prototype something better – fast. Watch the overview, block 60 minutes with your team, and let’s get to work.
The post Problems Are Old, Speed Is New appeared first on Voltage Control.
]]>The post The Missing Layer in Enterprise AI Adoption: Navigating Edges appeared first on Voltage Control.
]]>Enterprise AI adoption isn’t a roadmap problem. It’s an edge problem.
Across organizations, AI initiatives are accelerating — pilots are multiplying, tools are proliferating, and policies are emerging in parallel. Executive teams are crafting AI strategies. Boards are asking about posture and readiness. Departments are experimenting with copilots and automation.
Yet many leadership teams feel the same tension: adoption is uneven, alignment is fragile, and anxiety lingers beneath the surface.
What’s often missing isn’t strategy. It’s a way to navigate the edges AI creates.
Edges aren’t problems to solve. They’re thresholds: places where something new is trying to emerge. When AI enters workflows, it doesn’t just add capability; it reshapes roles, decision rights, operating rhythms, and expectations. That reshaping generates friction. And friction, when unnamed, becomes resistance.
When named and structured, it becomes movement.
At our February summit, we debuted a simple tool called the Edge Maps and used it live with 150 leaders, many of them navigating AI adoption in their organizations. In eight focused minutes, the room surfaced present realities, named thresholds, and committed to small, reversible experiments. The energy shifted from ambient overwhelm to organized momentum.
This article explores why enterprise AI adoption stalls at the edge and how a lightweight, structured approach can turn tension into forward motion.
As February winds down, I’m reminded of a rhythm my wife lives with every year. She runs a garden center, and each spring the staff nearly triples. The ramp-up is expected. It’s seasonal. It’s planned.
And yet, every year feels different.
The mix of people shifts. Regulations change. Customer behavior evolves. Some seasonal employees return; many don’t. Training needs are familiar in shape but new in detail. Even when the pattern is predictable, the edge itself is not identical.
The edge is recurring, but never the same.
Enterprise AI adoption operates in much the same way.
You know AI waves are coming. You anticipate expansion. You build pilots. You set budgets. You hold strategy sessions.
The edge isn’t a surprise.
The shape of it is.
And because the shape changes, organizations can’t rely on static plans alone. They need a navigational practice — something that helps teams repeatedly step into uncertainty without freezing or overcorrecting.
Most AI strategies begin with tools, policies, or training plans. Those matter. But they don’t address the underlying edges teams are standing on.
Common enterprise AI edges mirror that reshaping: shifting roles, blurred decision rights, disrupted operating rhythms, and mismatched expectations.
These aren’t purely technical issues. They’re transitional states.
And transitional states create psychological and operational edges.
At the executive layer, enthusiasm is often high. AI is framed as a competitive necessity or strategic imperative.
At the middle layer, uncertainty surfaces.
At the frontline, experimentation frequently happens quietly. Individuals test tools on their own, unsure whether their usage is encouraged or merely tolerated.

Legal and governance teams, tasked with managing exposure, can become perceived blockers, not because they oppose innovation, but because there are no structured lanes for safe exploration.
Without a structured way to name and navigate these thresholds, organizations default to one of three patterns: restricting use until the anxiety passes, letting individuals experiment quietly in the shadows, or mandating adoption from the top.
The result? AI remains either an isolated productivity hack or a top-down mandate — not a coordinated, trust-building transformation.
What’s missing is a navigational layer.
When we hear “edge,” our bodies brace for a fall. It feels like a cliff, where one step over is irreversible and risky.
But what if enterprise AI is more like a shoreline?
Shorelines are dynamic. They shift daily. They invite navigation. They require rhythm, awareness, and adjustment — not panic.
This metaphor matters because it shifts energy from fear to curiosity. From avoidance to orientation.
Leaders can accelerate this shift by explicitly naming AI-related edges at the start of a meeting:
“We’re at the edge of redefining review workflows with AI.”
“We’re at the threshold of clarifying human vs. AI drafting roles.”
“We’re navigating the edge of safe AI-in-use.”
Naming the edge normalizes uncertainty without amplifying fear.
From there, you invite a consent-based experiment: time-boxed, safe-to-try, and small-but-real.
That move alone often transforms a session from:
“We might break something.”
to:
“We’re here to learn together.”
Closers matter just as much as openers. If you name an edge and run an experiment, close by harvesting learning, confirming ownership, and setting the next check-in. In this way, AI adoption becomes rhythmic rather than episodic.
Decision rules and working agreements become critical here. Edges produce ambiguity; decision rules clarify how you move within it. Working agreements make safety visible: how we’ll speak, pause, decide, and adjust.
Together, they form the container that makes AI transformation navigable.
AI is reshaping work in real time, and many organizations are experiencing multiple edges simultaneously.
For many teams, AI has become background anxiety, visible but hard to grasp.
The solution isn’t more slides.
It’s structured, small-scale experimentation.
It’s useful to treat AI like the weather. You forecast, prepare, and choose your route accordingly. Some days you sprint. On others you seek cover and regroup.
Practically, that means scanning what’s changing, naming the edge you’re standing on, running a minimum viable experiment, and harvesting what you learn.
Minimum viable experiments create maximum alignment because they replace speculation with shared evidence.
Language is a lever here. Instead of “AI risk policy,” try “Safer AI-in-use.” Instead of “AI productivity targets,” try “Co-shaping AI-accelerated workflows.” Verbs like “co-shape,” “test,” “pilot,” and “harvest” nudge teams toward progression rather than perfection.
And while naming matters, don’t let it delay action. Begin exploration and refine language as you go. A named threshold becomes a door people can walk through together.
This is where the Edge Maps comes in.
At the summit, we used it to help participants surface AI-related edges and convert them into tangible next steps. In eight minutes, participants lined up present realities, named a threshold, envisioned the near future, and identified the smallest real actions to cross it.
The room’s energy shifted from overwhelmed to organized.
When edges become visible and legible, they become navigable.
After two days of deep practice and dialogue, participants were already holding powerful insights about facilitation, emergence, and AI-shaped work. The Edge Maps offered something different — a structured moment of reflection. It created space to pause, assess what was emerging, and decide how these ideas would translate into practice. For some, that meant facilitation experiments. For others, it meant operational shifts. And for many, it meant clarifying how they would bring AI adoption back into their teams with intention rather than urgency.

Within minutes of mapping Present, Threshold, and Future, something tightened and clarified. Edges that felt expansive became specific. Possibilities became prototypes. Energy became ownership. Participants weren’t solving AI adoption in eight minutes. They were converting insight into commitment. That’s the difference.
Here’s the essence of the tool:
In the Present field, begin with strengths, resources, and curiosities. This regulates the nervous system, especially when AI carries risk or ambiguity. Then acknowledge tensions and constraints.
That pairing — strength plus reality — creates confident curiosity rather than brittle optimism or fear.
Naming the Threshold is the fulcrum. Give it a discussable name. Then define small but real actions to step into and through it. Keep steps reversible.
In the Future field, articulate how it will feel once crossed, what you’ll be doing differently, and how you’ll know you’re there.
The result is a compact artifact that converts ambient AI worry into a trackable learning plan.
Enterprise AI adoption isn’t a single edge. It’s a system of nested thresholds.
Strategic edges sit at the leadership layer.
Operational edges emerge in divisions.
Workflow edges surface inside teams.
Identity edges show up at the individual level.
The Edge Maps cascades effectively across all of these levels.
Balance top-down clarity with bottom-up learning.

Leadership sets guardrails: purpose and boundaries, where humans must remain in the loop, and which zones are off-limits for AI.
Teams co-shape experiments within those guardrails.
As local experiments produce wins, codify them into shared rituals, templates, and case studies. Innovation spreads without chaos.
Role clarity becomes a multiplier: when everyone knows who leads, who consults, and who validates at each step, experiments move faster.
Consent-based trials reduce fear and increase participation. When people know experiments are time-bound and reversible, they’re more willing to engage.
Visibility accelerates adoption. Choose harvest formats that travel — brief write-ups, short demos, annotated templates. Make learning public and portable.
We’ve seen enterprise AI efforts transform simply by making experimentation legible.
A map only matters if you move.
Convert at least one Future statement into a prototype this week.
Small. Real. Reversible.
“Pilot a daily AI stand-up for two weeks” beats “launch an AI initiative.”
“Draft a one-page AI-in-review guideline” beats “complete enterprise framework.”
Before starting, define the time box, the signal you’re watching for, the owner, and what would trigger a pivot.
Agreeing on pivot rules in advance reduces emotional friction and strengthens trust.
Book the next check-in before leaving the room. Close each session with owner, due date, and smallest viable action.
Rotate an “edge steward” role if helpful — someone who keeps the threshold visible and curates learning. Over time, experimentation becomes habit rather than event.
That’s when AI adoption shifts from initiative to capability.
Enterprise AI adoption isn’t about eliminating uncertainty. It’s about building capacity to move within it.
Edges are invitations. They mark the place where capability wants to grow.
The Edge Maps provides a lightweight navigational layer — one that makes tension legible, experiments safe, and learning visible.
Name the threshold. Build a container. Take the smallest real next step together.
The shoreline is in sight.
Now move.
The post The Missing Layer in Enterprise AI Adoption: Navigating Edges appeared first on Voltage Control.
]]>The post AI at the Center for a Stronger 2026 appeared first on Voltage Control.
]]>December offers a rare pause—a pocket of time when teams naturally slow down, look back, and look ahead. It’s tempting to use that time to draft resolutions or curate highlight reels. This year, try something bolder: use the moment to move AI from the edges of your work to the operating core. Many teams still treat AI like a novelty or a personal productivity boost—handy at transcribing notes or drafting emails, useful in rare bursts, invisible to the rituals that actually power the business. That pattern yields pockets of efficiency, but it does little to raise the collective intelligence of the team or increase throughput on what matters most.

Putting AI at the center is not about “using AI more.” It’s about redesigning how work happens so AI shows up in the moments that shape clarity, alignment, decisions, and follow through. That means explicitly inviting AI into the room—not just to send the recap later, but as a seen and understood participant in the meeting arc. The shift is cultural as much as it is technical: moving from “What can I do faster alone?” to “What can we do better together—with AI as a teammate?” When you do that, you convert isolated wins into compounding outcomes that are visible across the system.
Think of this month as your strategic reset. Which rituals served you in the past but now hold you back? Which decisions routinely stall? Where does work bottleneck across roles or functions? Use those questions to identify places where AI can be designed in from the start—so it supports how you diverge, synthesize, converge, and decide. If the holidays tend to be a season of gifts, the gift you can give your future self is a deliberate redesign: AI-centered practices that create speed with quality and enable momentum you can feel.
The simplest way to re-center AI is to thread it through the full arc of a session—open, explore, decide, close—instead of sprinkling it into isolated moments. In the opener, invite participants to pair safely with AI. A prompt like “Ask AI to generate three provocative ‘what if’ questions about our purpose today—keep one that expands your thinking” primes both curiosity and comfort. When you normalize AI’s presence early, the team spends less energy on whether AI belongs and more on the quality of the work you’ll do together.
During divergence, let humans generate the raw ideas and let AI extend the option space: reframes, constraints, adjacent patterns, and “non-obvious” complements. As energy naturally shifts toward convergence, ask AI to produce a first synthesis—short, imperfect, and testable. When an AI-generated synthesis is on the table, people react faster and more concretely: “We can live with these parts, but not those.” That reaction accelerates prioritization and brings hidden misalignments into the open. Your job as a facilitator is to toggle the modes—solo-with-AI, small-group-with-AI, humans-only—and make those transitions visible so learning compounds.
Close with intent. A strong closer doesn’t just capture what happened; it evaluates how you worked with AI. Try, “What did AI do today that saved us time or improved quality? What should we ask it to avoid next time? What guardrails do we need to add?” Verifying AI summaries live, while the group can correct and clarify, prevents drift and creates a shared memory. Over time, the team will feel the difference: AI is no longer a shadow tool; it’s a visible collaborator that helps you open, expand, pattern, and decide.
Where teams lose the most time isn’t in generating ideas; it’s in making decisions. Endless loops, ambiguous thresholds, and unclear ownership sap energy. AI can help here—if you design the decision rules. Start by choosing one recurring decision that often creates churn (e.g., prioritizing backlog items, approving experiments, selecting messaging). Ask AI to propose three viable options with explicit trade-offs and risks, then use a consent-based method to move. Consent beats consensus when speed and learning matter because it asks, “Is this safe to try now?” instead of “Does everyone love it?”

Design an escalation path before you decide: when does human judgment override AI-suggested options; who breaks ties; what evidence triggers a revisit? Ask AI to draft that “decide how to decide” canvas, then tune it as a team. You can further improve momentum by capturing objections in context. Instead of archiving dissent, structure it: What threshold of evidence would resolve this objection? What signal would confirm a risk is materializing? Feed those conditions into your AI memory so it knows when to surface a check—preventing unnecessary re-litigation while honoring new learning.
Finally, draw the line on where AI must never decide alone. Ethics, safety, brand integrity, people decisions—name the categories that require human ownership. That act clarifies roles and builds trust. Then define the inverse: Where should AI always propose first, so humans can accelerate judgment? When you codify both, decision-making becomes transparent and repeatable. You move faster not because you cut corners, but because the lanes are clear and the work of deciding is designed.
If AI is going to sit at the center, it deserves formal working agreements—just like any teammate. These are short, visible norms that define boundaries, transparency, and shared responsibilities. They protect against two extremes you’ll likely find in any room: over-trusters who accept AI output without scrutiny and under-trusters who refuse to engage. Clear agreements pull the team into the productive middle, where AI accelerates and humans ensure quality.
Start small and make it living. Define what you will disclose and when (“Call out where AI contributed,” “Note the model or tool when relevant,” “Flag data sensitivity”), what you will verify every time (“We always review AI summaries live,” “We validate references, quotes, numbers”), and what you will avoid (“No AI generation on sensitive HR matters,” “No autonomous approvals”). Include bias checks in your openers—simple prompts like “Ask AI to generate counter-arguments from diverse perspectives” or “Scan for missing stakeholders.” Add a consent renewal check each month: “Are we still comfortable with how AI shows up in our work? What needs to change?”
Treat these agreements as pop-up rules that evolve as you learn and as the tools improve. Post them in the room or at the top of your collaborative doc. Invite the whole team to co-author and revisit them monthly. The act of co-creating and refreshing agreements builds trust, creates psychological safety, and reduces risk. It also sends a clear signal to your organization: AI here is not a stealth add-on—it’s an explicit collaborator governed by shared norms.
The biggest gains happen when you stop sprinkling prompts and start threading AI through end-to-end workflows. Pick one journey that matters (e.g., discovery to delivery, feature rollout to customer comms, incident to learning review), map the gates, and design AI invitations at each gate. Replace ad hoc “someone remembers to prompt” with structured moments: AI drafts a brief to react to; AI proposes test conditions; AI synthesizes stakeholder quotes; AI surfaces pattern risks; AI produces the first pass of the decision memo. None of this removes human accountability; it changes where human attention is most valuable.
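One way to make those structured moments concrete is to write the gates and their AI invitations down as a simple ordered structure the team can review together. The gate names below merely restate the examples in the paragraph above and are assumptions, not a fixed workflow.

# One journey's gates, each with a designed AI invitation.
# Gate names and invitations are illustrative, lifted from the examples above.
threaded_flow = [
    ("discovery", "AI drafts a brief for the team to react to"),
    ("planning",  "AI proposes test conditions"),
    ("synthesis", "AI synthesizes stakeholder quotes"),
    ("review",    "AI surfaces pattern risks"),
    ("decision",  "AI produces the first pass of the decision memo"),
]

for gate, invitation in threaded_flow:
    # Human accountability stays at every gate; the invitation just changes
    # where human attention is most valuable.
    print(f"[{gate}] {invitation} -> team critiques, then decides")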
Blueprints help you see the gaps. A quick service blueprint or journey map reveals where work crosses silos, where it stalls, and where people repeatedly rebuild context from scratch. That’s where AI can remove friction: creating living memory that recurs at each gate, sparking first drafts that the team can critique, highlighting dependencies you might miss. These are not “set-and-forget” automations running in the background; they are deliberate, in-the-room invitations that elevate the quality of collaboration while the team is together.
Prototype a threaded flow you can test in two weeks. Give it a visible name so the team can reference it (“Release Flow 1.0”). Pause an old ritual while you test, and watch which gaps emerge without it. Resist the urge to recreate the ritual—solve for the gap instead. Run a retro at the end and ask, “Where did AI add speed without sacrificing judgment? Where did it distract? Which gate needs a new invitation?” That cycle—prototype, run, retro, tune—compounds quickly and makes AI-centered work feel real, not theoretical.
To make the shift from edge use to center use tangible, run AI-at-the-Center (AI @ TC) Bingo with your team. It’s a hard-mode diagnostic masquerading as a playful game. Each square represents a concrete behavior—AI drafting specs, shaping rituals, generating prototypes, supporting decision-making, producing live synthesis, capturing objections, or maintaining living memory. The rule is simple and strict: mark only what is consistently true weekly. Aspirations and one-off experiments don’t count. That constraint makes the results honest, and honesty reveals where you really are on the maturity curve.

Run it as a fast, focused session. Start with a check-in that frames AI as a co-facilitator, not a mandate. Distribute the card (digital or printed), and give individuals a few minutes to mark their practice. Then compare patterns in small groups and as a whole. Where do you cluster at the periphery—personal productivity, transcription, occasional ideation? Where are there blank rows in the center—decision rules, consent methods, role clarity, living memory? Use the scoring guide to place yourselves on the spectrum from “AI at the Periphery” to “AI at the Center,” and normalize the result. Most teams discover they are earlier in maturity than they assumed. That’s a feature, not a bug; it creates a shared starting line.
Turn the snapshot into action. Choose one to three gaps to prototype in January. For each, define a visible artifact that will signal progress: a decision rule canvas, a weekly AI check-in, a living agenda template, or a workflow blueprint. Be thoughtful about who is in the room for the diagnostic—invite adjacent roles (ops, legal, data, customer success) to get a fuller picture and avoid blind spots. Set expectations upfront to reduce performative responses: “We mark only what’s truly weekly in our current practice.” Watch the AI-at-the-Center Bingo Diagnostic video for a quick walkthrough, and then schedule your session now while the year-end reflection energy is high.
Reflection is valuable only if it converts to habit. Translate your December insights into operating rhythms you can see on the calendar. Start with one weekly ritual that anchors AI at the center—for example, a 25-minute Monday “AI Enablement Standup” where each team member names one place AI will draft first, one place AI will synthesize live, and one decision where AI will propose options. Layer in a monthly agreement review to refresh guardrails, renew consent, and adjust bias checks. Consider a quarterly redesign sprint focused on one workflow—prototype, measure, and share what you learned with the broader org.
Build machine memory plus human judgment into your closers. Use AI to produce a concise, decision-forward summary while you’re still in the room, then verify as a group. Document objections with thresholds and next checks. Feed forward the summary into the next agenda so you don’t rely on imperfect recall. Choose a simple template to house decisions, context, and learnings—something your team will actually use. Establish a review cadence that keeps insights alive: weekly review for open decisions, monthly scan of agreement health, quarterly synthesis of what changed because of your AI-centered experiments.
Measure your momentum without micromanaging. Define two or three outcome signals that matter (reduction in time-to-decision, fewer re-opened debates, more cross-silo throughput, clearer accountability). Balance those with boundary checks that protect ethics, equity, and brand trust. Start small but start now: schedule one AI-at-the-Center Bingo session, pick one decision to move to consent with AI-generated options, and prototype one threaded workflow. Then tell us how it went—your stories help our community learn faster together.
If this year taught us anything, it’s that isolated use of AI by individuals yields isolated benefits. The organizations that will see meaningful ROI in 2026 will be the ones that put AI at the center—visible, designed-in, and co-facilitating the work that shapes results. That shift is not a top-down mandate. It’s a collaborative exploration where teams redesign rituals, clarify roles, codify decisions, and build living memory. It’s multiplayer AI, sitting in the room, helping us open the option space, converge faster, and decide with more clarity and less churn.
As you wrap December and look toward the new year, choose action over aspiration. Run the AI-at-the-Center Bingo Diagnostic. Draft your first “decide how to decide” canvas. Co-create three working agreements that will build trust between humans and AI. Prototype one threaded workflow you can test in two weeks. Put the cadence on your calendar now—commit by schedule, not by enthusiasm.
We’re here to help you make it real. Want the AI @ TC Bingo card, the scoring guide, and the activity video? Ready to bring a Voltage Control facilitator in to co-design your January flow or to run an AI-centered redesign sprint with your leadership team? Curious about integrating these practices into your Facilitation Certification journey? Reply to this newsletter or reach out to our team and we’ll get you everything you need. Let’s make 2026 the year your team moves AI from the edges to the center—and feels the difference in every meeting, every decision, and every outcome.
The post AI at the Center for a Stronger 2026 appeared first on Voltage Control.
]]>The post Facilitating Human Connection in the AI Era appeared first on Voltage Control.
]]>I’ve lived in Austin for 25 years, and while I haven’t immersed myself in every SXSW, I’ve managed to participate in some way every single year I’ve been here. It’s been a big part of my Austin experience. As a musician, I’ve played official showcases, unofficial showcases, and even alternative outsider festivals. As a startup founder, I’ve attended VC parties, client activations, networking events, and the beloved Fogo De Chow “meat-ups” (if you know, you know!). In recent years, I have mainly been volunteering as a mentor and judge, which has been a wonderful way to contribute to the ecosystem.
This year, Voltage Control partnered with SXSW to offer our Workshop Design process to all of their workshop facilitators, hosting several live sessions in January to help them prepare for their SXSW sessions. With the global success of our Facilitation Lab meetups, we also thought it was a great opportunity to bring the meetup to SXSW as an official event. Then, just because we had to go all in, we ran a workshop on AI Teammates to explore team collaboration use cases for generative AI.
SXSW is always brimming with innovation, creativity, and connection. However, this year through our meetup and workshop we had the opportunity to observe something especially compelling—a deep yearning for genuine, meaningful human-to-human interactions amid all of the passive talks and media consumption. Both of our sessions offered rich insights and genuine connections that we felt were important to share more broadly with the community. Read on for a detailed exploration of the key topics and themes that emerged as we reflected on our SXSW activations.
Our Facilitation Lab meetup sought to elevate interaction beyond the typical exchanges of business cards and superficial networking. As attendees entered, we warmly greeted each person, handing out customized “We Connect” name tags that featured prompts like “Something that’s on my mind right now…” or “I’m curious about…” This simple yet thoughtful intervention quickly transformed initial interactions from polite small talk to engaging conversations rooted in personal interests and genuine curiosity. For instance, one participant humorously noted he spent the entire day with his prompt about ‘remaining grounded under pressure,’ sparking deeper conversations even outside the meetup context. These intentional threshold moments proved pivotal, shifting the energy in the room and setting the tone for sustained and meaningful connections throughout the event.

Erik mentioned to me that he’d forgotten to remove his name badge, and it continued sparking meaningful conversations throughout the day. Inspired by a recent coaching session on the power of silence as a facilitation tool, I chose “Silence” as my own badge prompt. Interestingly—and humorously—some attendees interpreted this as me signaling a need for quiet reflection rather than a conversation starter. Clearly, it’s worth choosing your prompt carefully!
Equally impactful were the interactive posters that posed provocative “How Might We” questions around the room. Participants eagerly engaged with these prompts, leaving behind sticky notes filled with thoughtful observations, authentic vulnerabilities, and creative ideas. These carefully structured, yet simple, tools effectively lowered conversational barriers, inviting authentic exchanges and meaningful reflections.

Notably, we observed attendees continuing their conversations long after the scheduled meetup ended, underscoring the success of our deliberate design in fostering sustained engagement. This experience reinforced for us—and hopefully our attendees—that intentional facilitation of human connection can lead to powerful, lasting interactions that extend far beyond any singular event.
One unexpectedly resonant theme from our meetup was loneliness—a timely topic that surfaced repeatedly, even before Michelle Obama’s keynote on the subject. Participants openly shared their experiences of loneliness, highlighting its prevalence and impact across professional settings. Discussions around this theme revealed how critically loneliness intersects with facilitation, community building, and organizational leadership.

Participants emphasized the importance of creating environments where vulnerability is not only permitted but encouraged, seeing it as a catalyst for combating isolation and fostering deeper connections. Many suggested practical strategies, including dedicated moments within events for genuine personal exchange, structured affinity groups, and conscious efforts to normalize sharing vulnerabilities as part of organizational culture.
One compelling nuance that emerged in our discussions was the particular isolation facilitators often experience. While facilitators dedicate themselves to creating inclusive spaces that support and encourage vulnerability among others, attendees openly acknowledged how rarely facilitators themselves receive reciprocal support. This dynamic sparked insightful exchanges around the critical need to intentionally build community and support networks specifically for facilitators—spaces designed to nurture and sustain those whose roles inherently involve emotional labor and continuous support of others. This recognition deepened the collective understanding that addressing loneliness is not only about structured team-building but also about providing consistent, authentic support for those who hold space.
Meetings often carry negative connotations—viewed as tedious obligations rather than opportunities for genuine collaboration and innovation. During our meetup, attendees enthusiastically discussed ways to reinvent meetings using facilitation principles. The message was clear: stop “meeting” and start “designing collaboratively,” shifting from passive consumption of content toward active participation.
Innovative ideas emerged, including flipping traditional meeting agendas—prioritizing interactive engagement before addressing routine content—to ensure participant energy and creativity are maximized. A provocative attendee suggestion humorously yet pointedly captured this sentiment: “Any meeting over one hour is a waste of time.” This underscored a shared desire among attendees to prioritize engagement, interactivity, and co-creation in all meeting formats.
Our discussions also highlighted the necessity of making meetings purpose-driven, interactive, and intentional, transforming them from informational sessions into collaborative experiences that actively engage all participants. The clear takeaway for facilitators and business leaders is that intentionality and thoughtful design dramatically improve outcomes, making meetings more impactful and deeply satisfying for everyone involved.

One particularly insightful conversation during the meetup revolved around the theme of conflict—often perceived negatively but recognized here as a powerful opportunity for growth and stronger relationships. Attendees expressed the importance of normalizing conflict within teams, treating it not as a failure but as a natural byproduct of diverse perspectives working toward innovation.
To better leverage conflict, participants recommended simulating difficult conversations and scenarios proactively. Such practice not only prepares teams to handle future challenges constructively but also helps establish trust and clear boundaries around how conflicts can be handled effectively. This proactive approach allows teams to feel secure exploring differing opinions, leading to breakthroughs rather than breakdowns.

Further, principles of non-violent communication, depersonalizing disagreements, and establishing trust were frequently suggested as essential facilitation skills. This collective insight reinforced the power of intentional conflict management as a critical facilitation capability, ultimately fostering team cohesion, mutual respect, and collective resilience.
At our AI Teammates workshop, the conversation shifted dramatically as participants reconsidered their perceptions of AI, moving from seeing it merely as a utilitarian tool to recognizing its potential as a dynamic teammate.
This cognitive shift emerged through our intentional design. We strategically used persona cards to introduce participants to new perspectives on AI, prompting deeper reflection on roles like Historian, Synthesizer, Challenger, and Optimist. Attendees discovered how leveraging AI in these roles could greatly enrich team discussions, sparking creativity and critical reflection.
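If you want to experiment with this pattern beyond the cards, here is a minimal sketch of how a persona role might be turned into a prompt for a general-purpose chat model. The role descriptions and the `persona_prompt` helper are our own illustrative wording (assumptions for the sketch, not the text of the workshop cards or any particular tool’s API):

```python
# Illustrative persona descriptions: our own wording, not the workshop cards.
PERSONAS = {
    "Historian": "Recall what the team has already tried or decided, and surface relevant past context.",
    "Synthesizer": "Listen for themes across contributions and restate them as a concise summary.",
    "Challenger": "Respectfully question assumptions and name risks the team may be overlooking.",
    "Optimist": "Spot possibilities and build on ideas rather than critiquing them.",
}

def persona_prompt(role: str, discussion_notes: str) -> str:
    """Compose a prompt that asks a general-purpose model to respond in one persona."""
    return (
        f"You are the team's {role}. {PERSONAS[role]}\n\n"
        f"Discussion notes so far:\n{discussion_notes}\n\n"
        "Respond in that role, in two or three sentences."
    )

# Example: ask the Challenger persona to react to a draft plan.
print(persona_prompt("Challenger", "We plan to ship the redesign next sprint."))
```

Naming the role up front is the whole trick: the same model participates very differently as a Challenger than as an Optimist.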

A particularly powerful moment illustrated this cognitive shift. One attendee, who identified himself as deeply embedded professionally in AI technologies, experienced a profound “aha” moment as we explored AI’s collaborative roles. He candidly shared with the group that, despite his extensive use of AI tools, this workshop was the first time he genuinely saw AI’s deeper collaborative potential—not merely as a functional assistant but as an authentic partner in team interactions. Emotionally moved by this realization, he said the experience gave him “goosebumps,” capturing perfectly the transformative possibilities when facilitators intentionally design experiences that prompt meaningful shifts in perspective.
The workshop highlighted how AI, when positioned thoughtfully, could initiate conversations and engage team members who might otherwise hesitate to participate. In essence, AI provided a neutral voice, catalyzing richer dialogue and deeper insights. For teams hesitant about direct engagement, AI offered a safe starting point, a powerful insight into how technology can be a genuine collaborator rather than merely an information tool.
Participants left our workshop inspired to experiment with new AI tools such as Claude, Perplexity, and Miro Sidekick, seeing firsthand their potential to enhance real-time facilitation. The excitement was palpable as attendees brainstormed practical uses—such as using AI to facilitate deeper reflections, structure conversations, and provide new insights during collaborative sessions.
Yet, despite widespread enthusiasm, participants also candidly discussed concerns around security, privacy, and integration of AI into organizational practices. Attendees from Germany notably highlighted slower regional adoption due to institutional hesitance and rigorous privacy standards. Addressing these concerns became a vital element of our workshop, emphasizing the importance of thoughtful, context-sensitive integration of AI into diverse organizational cultures.
Despite these valid concerns, the workshop’s overwhelming takeaway was excitement about AI’s untapped potential. Attendees saw clear opportunities to enrich their practices through these emerging technologies, feeling empowered and equipped to thoughtfully advocate for and practically implement AI as a meaningful participant in their teams and meetings.
Both our meetup and workshop underscored the critical importance of thoughtfully designed thresholds—both physical entry into spaces and cognitive entry into new ideas. By consciously crafting these moments, we enabled attendees to shift from routine thinking into new possibilities, deeply enhancing their event experience.

Participants expressed gratitude for small yet impactful interventions, such as the persona cards we handed out while attendees were waiting in line for the workshop. Given the unique context of SXSW, where attendees often queue up 45 minutes or more in advance, the cards provided a delightful and unexpected moment of connection and reflection. As participants selected a card that best represented their approach or attitude toward AI, spontaneous conversations quickly blossomed among previously disconnected attendees. People eagerly compared their chosen personas—whether Historian, Synthesizer, Challenger, or Optimist—sparking curiosity, laughter, and immediate bonds. These thoughtfully designed threshold experiences didn’t just occupy waiting time; they actively reshaped the atmosphere, transitioning attendees from passive anticipation to active engagement and collaboration, dramatically influencing their openness, interactions, and reflections throughout the rest of the workshop.
And the big payoff: crossing cognitive thresholds around the use of AI demonstrated facilitation’s power to shift perspectives profoundly. Attendees repeatedly shared that thoughtfully guided experiences allowed them to see familiar tools and interactions in entirely new ways, proof that effective facilitation can lead to significant shifts in understanding and collaboration practices.
Our experiences at SXSW demonstrated the incredible potential at the intersection of human connection, vulnerability, thoughtful facilitation, and AI integration. These moments provided rich insights and clear evidence that intentional facilitation can profoundly reshape organizational culture and interpersonal dynamics.
We invite you—our community of facilitators, leaders, students, and alumni—to embrace and carry these insights into your own practice. Experiment boldly with facilitation techniques, reimagine your meetings for deeper impact, navigate conflict constructively, and thoughtfully explore AI as an active, engaged teammate.

Join us at our upcoming Facilitation Lab events and continue exploring these themes with us. Together, let’s facilitate spaces where human connection, innovation, and meaningful change flourish.
The post Facilitating Human Connection in the AI Era appeared first on Voltage Control.
One of the many great things about our supportive online community hub is how our members gather around specific topics of interest in what we call “huddles.” These spontaneous, participant-driven sessions are where some of the most meaningful insights and connections take shape. Recently, we had the pleasure of diving into the topic of engaging quiet participants, a challenge many facilitators face. This particular huddle, led by Marco Monterzino, sparked a wealth of ideas and strategies that truly resonated with everyone involved. In this blog post, I’m excited to share the key takeaways and lessons we uncovered during this dynamic session.
One of the most common challenges in facilitation is encouraging participation from quieter members of the group. Whether due to cultural differences, anxiety, or past experiences, some participants may hesitate to share their thoughts. This can lead to unbalanced discussions and missed opportunities for diverse insights. This article delves into 13 effective techniques for facilitators to foster a more inclusive environment, ensuring that every voice is heard.
Creating a safe and engaging space for all participants is crucial. By understanding the various reasons behind quietness and employing strategic approaches, facilitators can help draw out the valuable contributions of every group member. Read on to discover practical methods to enhance your facilitation skills and promote active participation.
Conducting pre-surveys is an excellent way to gauge participants’ concerns and expectations before the session begins. This approach allows facilitators to tailor their strategies to address specific worries, making participants feel heard and valued from the start. Asking questions about their comfort levels, past experiences, and expectations helps in creating a more welcoming environment.
Pre-surveys can also uncover hidden dynamics within the group that might affect participation. By understanding these nuances, facilitators can better prepare and adjust their facilitation techniques to meet the needs of all participants, ensuring a smoother and more effective session.
Establishing ground rules at the beginning of the session sets a clear framework for participation. Encourage talkative participants to be mindful of their airtime while inviting quieter members to contribute more actively. Ground rules create a sense of structure and fairness, which can help alleviate anxiety among participants.

Ground rules should be revisited periodically during the session to reinforce their importance. This consistent reminder helps maintain a balanced discussion and ensures that all voices are heard, creating a more inclusive and productive environment.
Providing options for anonymous feedback can significantly increase participation from quieter members. Tools like anonymous sticky notes on virtual boards allow participants to share their thoughts without fear of judgment. This method ensures that everyone has an opportunity to contribute, regardless of their comfort level with speaking up in a group setting.
Anonymous feedback can also reveal insights that might not surface in a more public forum. Facilitators can use this feedback to address concerns and adapt their approach, making the session more responsive to the needs of all participants.
Breaking participants into smaller groups can create a more comfortable environment for sharing. In virtual settings, breakout rooms facilitate more intimate discussions, allowing participants to feel less intimidated and more willing to contribute. Smaller groups can lead to more meaningful exchanges and better engagement from all members.
Facilitators should ensure that these smaller groups are diverse and balanced, promoting a variety of perspectives. This approach not only encourages quieter participants to speak up but also enriches the overall discussion with a wider range of insights.
Emphasizing the creation of a safe space is crucial for encouraging participation. Facilitators should actively work to make all participants feel comfortable and respected. Acknowledging the importance of psychological safety and organizational culture helps build trust and openness within the group.

Empathy plays a key role in this process. By understanding and addressing the underlying reasons for participants’ quietness, facilitators can create an environment where everyone feels valued and motivated to share their thoughts.
Strategic use of silence can be an effective way to encourage participation. Allowing moments of silence gives participants time to think and formulate their responses. This approach can be particularly beneficial for those who need a bit more time to feel comfortable speaking up.
Facilitators should balance silence with active engagement, ensuring that it does not lead to discomfort or disengagement. By using silence thoughtfully, facilitators can create a more reflective and inclusive discussion environment.
Politely inviting specific participants to share their thoughts can help draw out quieter members. Using phrases like “I’d like to hear from some of the people I haven’t heard from yet” can gently encourage participation without putting anyone on the spot.
This approach should be used with sensitivity to avoid making participants feel singled out. Facilitators should aim to create a welcoming atmosphere where invitations to speak are seen as opportunities rather than obligations.
Tools like the Wheel of Names can randomly select participants to speak, reducing the pressure on any one individual. This method ensures that everyone gets a chance to participate and can add an element of fun to the session, helping to break the ice.
Random selection tools can democratize the discussion, making it clear that every participant’s input is valued. This approach can help reduce anxiety and encourage more spontaneous contributions.
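For facilitators who would rather build their own picker than rely on a web tool, the idea reduces to shuffle-and-draw: shuffle the names, draw without replacement, and reshuffle only once everyone has had a turn. Here is a minimal sketch in Python (the `make_picker` name is our own, purely illustrative):

```python
import random

def make_picker(names):
    """Return a pick() function that draws names at random,
    never repeating anyone until the whole group has had a turn."""
    pool = []

    def pick():
        nonlocal pool
        if not pool:               # refill and reshuffle once everyone has spoken
            pool = list(names)
            random.shuffle(pool)
        return pool.pop()          # drawn names leave the pool for this round

    return pick

# Example: invite four voices in random order, with no repeats.
pick = make_picker(["Ana", "Ben", "Chidi", "Dana"])
for _ in range(4):
    print(pick())
```

Because drawn names leave the pool until it empties, everyone is called exactly once per round, which is what makes the selection feel fair rather than arbitrary.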

Acknowledging that quietness can have various causes is important for facilitators. By reframing the understanding of quiet participants, facilitators can focus on the context rather than labeling them as difficult. This perspective shift can lead to more effective strategies for engagement.
Facilitators should consider the broader context of each participant’s quietness, whether it’s due to cultural differences, personal anxiety, or past experiences. This understanding can inform more empathetic and tailored facilitation approaches.
Asking for permission to facilitate and setting clear expectations at the beginning of the session establish the facilitator’s role and authority. This clarity can help manage the session flow and ensure balanced participation.
Facilitators should consistently reinforce their role throughout the session, guiding the discussion and making adjustments as needed to keep the conversation inclusive and productive. This proactive approach helps maintain a positive and structured environment.
Using a “parking lot” for off-topic or lengthy discussions ensures that the session stays focused and on track. This technique allows facilitators to acknowledge important points without derailing the main agenda.
Revisiting parked items at an appropriate time shows participants that their contributions are valued and will be addressed. This approach helps manage time effectively while ensuring that all relevant topics are eventually covered.
Starting the session with a breathing exercise can help participants relax and feel more present. Deep breaths create a calm and focused atmosphere, which can reduce anxiety and promote better participation.
Facilitators can also integrate brief breathing exercises at various points during the session to maintain a sense of calm and focus. This technique helps create a supportive environment where participants feel more at ease sharing their thoughts.
Using inclusive language that invites contributions without putting participants on the spot is crucial. Phrases like “What are your thoughts?” can encourage participation in a non-threatening way.
Facilitators should be mindful of their language throughout the session, ensuring that it remains inviting and inclusive. This approach helps build a welcoming environment where all participants feel comfortable contributing.
Engaging quiet participants can be challenging, but with the right techniques, facilitators can create an inclusive and dynamic discussion environment. By implementing these strategies, you can ensure that every voice is heard and valued, leading to richer and more productive sessions.
Ready to enhance your facilitation skills further? Join us at the Facilitation Lab, where you can learn, practice, and refine your techniques in a supportive community of fellow facilitators. Let’s work together to create engaging and inclusive experiences for all participants.
The post How to Engage Quiet Participants appeared first on Voltage Control.