How AI turns lingering dysfunction into existential risk and what facilitators can do now

The AI wave feels brand new. The problems underneath it don’t. AI’s rapid advances are reshaping how work gets done, what’s possible, and how fast the future arrives. Yet under all that novelty sits something stubbornly familiar. Alignment. Behavior change. Decision quality. Adoption. These are the age-old challenges that have defined organizational life for decades. They’re not new problems; AI is simply putting them under a brighter, hotter light.

Many of the rituals and structures we rely on were inherited, not designed—remnants of Taylorism and top‑down models, with a dash of military metaphor thrown in for good measure. Think about how often we hear terms like action items and ammunition for a pitch. Even if we didn’t consciously lift these patterns from the factory floor or the command center, they’ve seeped in. We carry them from role to role, re-enacting them in new environments where they’re ill‑fitted to knowledge work, creativity, and human-centered problem solving.

Over the last decade, many teams began the long, important shift toward human-centered work. That project isn’t finished. Meanwhile, AI has changed the context around us: more inputs, more interdependencies, and far faster cycles. The result is a tangle of legacy habits, incomplete cultural transformation, and a new force multiplier. The fundamentals of good facilitation and design of team systems still apply. What’s different now is the cost of not applying them.

The work of clarifying purpose, roles, decision rules, and rituals isn’t a “nice to have” anymore. It’s the foundation that lets AI make your team better instead of magnifying dysfunction. Without it, the same old patterns will keep producing the same old outcomes—only now they’ll arrive at a speed that can overwhelm even high-performing teams.

Speed scales the chaos

What’s truly new about AI is the speed of change and the compounding nature of its effects. The “fast follower” posture that was viable for past technology shifts doesn’t work here. If you wait for standards to stabilize, you’ll miss months (or years) of capability building your competitors are banking. Learning has to become a core organizational muscle, not an initiative. The window between early adoption and obsolescence is narrowing.

Speed can be a gift. AI-enabled teams can spin up prototypes in hours, synthesize complex inputs in minutes, and ship with tighter feedback loops. But speed is neutral—it accelerates whatever it touches. Apply AI to a broken handoff and you don’t fix the handoff; you scale the chaos. Take a siloed process and add automation and you don’t remove the silo; you create automated isolation. The same reinforcing loops that can catapult a healthy system can drive a fragile one to failure.

We often met teams facing what we called a leaky faucet problem. Yes, it dripped. Yes, everyone noticed. But you could manage it with a bucket and some tape. You could hide the waste in the margins. AI turns that drip into pressure. It builds behind the surface until one day the levee breaks. What was tolerable friction becomes an existential constraint. When a small leak scales, "business as usual" screeches to a halt.

This is why so many leaders and facilitators are feeling the urgency right now. The problems aren’t new, but their consequences arrive faster and ripple further. It’s no longer sufficient to “know about” the leak; you need to find it, fix it, and redesign the system so you don’t spring another one two steps downstream. If you do this well, AI becomes an amplifier for clarity, flow, and value creation. If you don’t, it scales confusion.

Speed with intention

If speed is neutral, cadence is how we give it purpose. Think of AI as a highly capable teammate that can sprint faster than anyone on your roster. The job of the facilitator is to design the practice field where that speed pays off and doesn’t run the team ragged. That means deliberately alternating between fast and slow modes: call on AI to generate or synthesize quickly, then slow down together to react, refine, and align.

Live synthesis is a superpower here. Many teams lack a consistent, fast synthesis muscle. Even strong synthesizers vary with energy, time of day, and workload. AI can provide a reliable baseline in the moment—capturing themes, options, and decisions while context is warm—so the team can react rather than rehash. You get the benefits of working “while the clay is wet,” without over-relying on a single person’s bandwidth.

Visible work becomes essential in this new cadence. Text alone is too linear and narrow for the complexity we’re navigating. Visual maps, canvases, and blueprints help teams create a shared reality—one that humans and AI can reference. If it’s ambiguous to a colleague, it will be ambiguous to your AI teammate. Tools like Miro let you turn a messy conversation into a shared model in real time; then you can hand that model to AI for targeted processing, scenario generation, or risk identification.

There’s also a delightful side effect: good prompting is just good communication. Teaching teams to brief AI with clearer intent, constraints, and success criteria is the same skill that improves human collaboration. We’ve seen groups adopt prompt hygiene—defining terms, naming assumptions, clarifying audience—and, almost by accident, elevate their everyday cross-functional dialogue. AI becomes a mirror for your clarity. What confuses the model often confuses your colleagues, too.

Ways of Working Assessment

This month we’re spotlighting the Ways of Working Assessment because it delivers what March’s theme demands—a fast, focused way to surface leaks, align on fixes, and set a foundation where AI enhances rather than amplifies dysfunction. If you haven’t seen it yet, watch the quick overview: https://vimeo.com/899513366?share=copy&fl=sv&fe=ci

At its core, the assessment inventories how work actually gets done today. We capture the real rituals, decision rules, handoffs, briefs, and artifacts—not the idealized SOP version sitting on a wiki. We’re looking for two things: the healthy patterns to elevate and scale, and the bottlenecks or ambiguities that drive rework downstream. Artifacts like service blueprints and journey maps emerge, but they’re fed by lived experience, not theoretical flowcharts.

A simple shift unlocks rich insight: instead of asking “How does onboarding work here?” we ask “Walk me through the last time you onboarded someone.” Memory is sticky; it surfaces the tacit steps, workarounds, and unwritten rules that never make it into a process doc. We follow the timeline—who was involved, what was unclear, where the delays crept in, why the handoff failed—and we capture it visually so the whole team can see the same movie, not argue about the script.

From there, we prioritize together. Which one practice, if upgraded now, would reduce the most downstream rework? What would visible progress look like in two weeks? Where does AI belong in this flow—as a teammate, as a co-pilot, or not at all? This is where we start distinguishing human-in-the-loop moments, AI-augmented steps, and no-fly zones. The outcome isn’t a binder; it’s a shortlist of prototypes that teams can try immediately, with crisp measures of success. Culture lives in practice, so we practice differently—on purpose, in small loops that compound.

Three leverage shifts

First, establish a roles and rituals charter that includes your AI teammates. Don’t bolt AI onto your old structure; integrate it into your system intentionally. Identify the core moments in your value stream—discovery, synthesis, decision, handoff, quality—and define who or what leads, who consults, and who validates at each step. Be explicit about what AI does and why. For example: “During weekly intake, AI generates a first-pass classification of requests and a risk heatmap; the PM adjusts classification and confirms risk with Legal for anything flagged above medium.” That level of clarity reduces ambiguity and builds trust.
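One lightweight way to make such a charter explicit and checkable is to encode it as plain data rather than leaving it in a slide deck. The sketch below is illustrative only; the moment names, roles, and schema are assumptions drawn from the weekly-intake example above, not a prescribed format.

```python
# Illustrative roles-and-rituals charter encoded as data.
# Moment names, roles, and field names are examples, not a standard schema.
CHARTER = {
    "weekly_intake": {
        "leads": "AI",             # generates first-pass classification + risk heatmap
        "consults": ["PM"],        # PM adjusts the classification
        "validates": "Legal",      # confirms anything flagged above medium risk
        "ai_scope": "first-pass classification of requests and a risk heatmap",
    },
    "decision": {
        "leads": "PM",
        "consults": ["AI", "Design"],
        "validates": "Director",
        "ai_scope": "drafts the decision brief and lists trade-offs",
    },
}

def who_validates(moment: str) -> str:
    """Return the validating role for a given moment, or raise if undefined."""
    entry = CHARTER.get(moment)
    if entry is None:
        raise KeyError(f"No charter entry for moment: {moment}")
    return entry["validates"]
```

Writing the charter down this way forces the ambiguity out: every moment in the value stream must name a lead, a validator, and an explicit AI scope, or the gap is immediately visible.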

Second, operationalize decision clarity using consent-based methods. In fast-moving contexts, decisions get stuck between consensus and command. Try consent: “Is it safe enough to try for now, and can we revisit soon?” Pair it with clear decision types (reversible vs. irreversible), a lightweight advice process, and crisp roles (driver, approver, consulted, informed). Write your decision rules down as prompts and checklists. AI can help here by generating the initial decision brief, listing trade-offs based on your criteria, and drafting communication to stakeholders. But you must define the guardrails: where human judgment is required, what risks are unacceptable, and who owns the outcome.
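A consent rule like "safe enough to try for now" can itself be written down as a checklist the team runs the same way every time. The sketch below is one possible encoding under assumed field names; it is not a standard decision framework, just the paragraph above turned into an executable checklist.

```python
# Illustrative consent check: "is it safe enough to try, and will we revisit soon?"
# Field names and the rule itself are assumptions for the sketch.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DecisionBrief:
    question: str
    reversible: bool                                  # reversible vs. irreversible decision type
    objections: list = field(default_factory=list)    # standing objections raised during advice process
    review_date: Optional[str] = None                 # when the team will revisit

def safe_enough_to_try(brief: DecisionBrief) -> bool:
    """Consent rule: proceed only if the decision is reversible,
    no objections are standing, and a revisit date is on the calendar."""
    return brief.reversible and not brief.objections and brief.review_date is not None
```

A brief like this doubles as the prompt you hand to AI when asking it to draft trade-offs or stakeholder communication: the same fields that gate the human decision give the model its constraints.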

Third, make synthesis and visualization a live team habit. Don’t wait for someone to write a recap doc later. During meetings, have AI capture themes and open questions while a facilitator maps the conversation visually. Close with a quick team review: what’s missing, what needs correcting, what decision is ready now versus what requires another loop. Embed a short “make it visible” cadence into your rituals: if a decision isn’t on the map, it’s not a decision. If a next step isn’t in a public tracker, it’s not a next step. AI is excellent at formatting and distributing these artifacts instantly; your job is to ensure they reflect what the team actually agreed to.

All three shifts share a pattern: intentionality beats intensity. You don’t need to work faster for speed to pay off—you need to work clearer. By formalizing how humans and AI collaborate, you reduce churn, increase throughput, and create artifacts that compound learning. Your team will feel the difference quickly. Meetings stop being places we “talk about work” and start being places we “make work visible and move it forward.”

Measuring what matters 

One of the most reliable ways to break free from legacy habits is to change what you measure. If you’ve been tracking only output (tickets closed, campaigns launched), start tracking flow. Lead time from idea to value. Work in progress per person. Rework rate after handoff. Decision cycle time for reversible versus irreversible calls. These measures surface the invisible friction you’ve tolerated for years and, critically, show whether your new rituals are paying off in days, not quarters.
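As a concrete illustration of tracking flow rather than output, the sketch below computes average lead time and rework rate from a handful of hypothetical work items. The field names and dates are invented for the example; the point is that these measures take a few lines, not a BI project.

```python
# Illustrative flow metrics over a small, hypothetical set of work items.
from datetime import date

items = [
    {"idea": date(2025, 3, 3), "shipped": date(2025, 3, 10), "rework": False},
    {"idea": date(2025, 3, 4), "shipped": date(2025, 3, 18), "rework": True},
    {"idea": date(2025, 3, 5), "shipped": date(2025, 3, 12), "rework": False},
]

# Lead time: days from idea to value, per item.
lead_times = [(i["shipped"] - i["idea"]).days for i in items]
avg_lead_time = sum(lead_times) / len(lead_times)

# Rework rate: share of items needing rework after handoff.
rework_rate = sum(i["rework"] for i in items) / len(items)

print(f"avg lead time: {avg_lead_time:.1f} days, rework rate: {rework_rate:.0%}")
```

Re-running the same calculation every loop is what makes the friction visible: if the new ritual is working, lead time and rework rate should move within weeks, not quarters.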

Set up small reflection loops to create exponential gains. At the end of each sprint or milestone, run a brief retrospective: what worked, what didn’t, what will we try next between now and the next loop? Bring AI into that loop deliberately. Have it extract patterns from your sprint artifacts, flag recurring blockers, and propose two or three lightweight experiments. The team then chooses, adjusts, and commits. Next loop, you measure the difference in flow metrics and decide whether to adopt, adapt, or abandon. This is how practice compounds over time.

As you mature, think system-wide, not just individual or team-level. We often describe an AI maturity path that starts with individual use (personal productivity), progresses to co‑piloting within teams (pairing AI with core roles), evolves to AI teammates embedded in workflows, and culminates in systemic use where cross-functional processes, data, and governance align. Each stage demands new agreements: where humans must remain in the loop, what the no‑fly zones are for AI, how you audit outputs, and how you escalate issues of bias, privacy, or safety.

Governance shouldn’t be a blocker; it should be an enabler. Lightweight policies that clarify purpose and boundaries give teams confidence to experiment. Templates for risk assessment, model selection, prompt hygiene, and result verification help busy managers make good calls quickly. Training facilitators to guide these conversations—mapping the work, designing the cadence, making the trade-offs explicit—is how you steadily raise the organization’s capacity to move at the new speed without breaking.

Closing the gap

The big idea of March can be summed up this way: the problems are old, the speed is new. The fundamentals of how people align, decide, and create together haven’t changed. What’s changed is the tempo and scale at which consequences arrive. That means the gap that matters most is the one between knowing and doing. Everyone knows where the leaks are. The teams who win will be the ones who fix them first and redesign their systems so speed serves them, not the other way around.

If you do one thing this month, run a mini Ways of Working Assessment with your team. Start small. Pick a critical flow, like intake to delivery or discovery to decision. Map the last time you did it together. Find one leak you can patch that would reduce the most downstream rework. Define the role of AI in that moment (teammate, co‑pilot, or no‑fly) and write the decision rule that goes with it. Make the change visible. Measure the impact in two weeks. Then iterate. These steps take hours, not months, and they create artifacts you can reuse and scale.

When you design cadence on purpose, AI stops being a source of overwhelm and becomes a source of momentum. You’ll find yourself moving faster where it matters and slower where it counts. You’ll see ambiguity shrink as your team’s shared models get clearer. You’ll feel meetings transform from status theatre into decision engines. And as your practices compound, you’ll notice something else: the same clarity that makes your AI prompts better will make your cross‑functional collaboration better. That’s the kind of win that compounds quarter after quarter.

You already know where the friction is. The Ways of Working Assessment gives you a structured way to surface it, prioritize it, and prototype something better – fast. Watch the overview, block 60 minutes with your team, and let’s get to work.