System design
Rewire teams and systems
To realize the true value of AI and the promise of a transformed customer experience, the way your team works needs to change.
That doesn't mean starting from scratch. But it does mean rethinking how support is structured, how performance is owned, and how your systems evolve.
The teams that scale AI successfully treat it like infrastructure and design around it. It's an intentional shift that helps the technology keep performing as its scope grows.
In this section, we'll show you how to make a strong case for change, evolve your team structure, and create systems and ownership models that sustain AI performance over time.
Make the case for change and drive internal alignment
Scaling AI is as much about people as it is about technology. If people don't understand what's changing, doubt creeps in. They worry about their roles. They question whether the new system will work. They aren't open to trying new things and embracing a "new way." This lack of alignment puts the AI initiative at risk of stalling out, so it's important to address it before it takes hold.
Here's how to approach this.
Provide proof that AI can help solve something real
Frame it around pain that's already being felt: the backlog that keeps growing, the volume that outpaces headcount, and the complex, time-intensive queries that keep your most experienced team members stuck in reactive mode instead of working strategically. When you show how AI can take the pressure off across all of this, people start to lean in.
Maintain executive momentum
You've already secured initial executive sponsorship to launch AI. Now you need to maintain and deepen that support as you scale.
Scaling AI reshapes how people work, how roles evolve, and how systems are structured. That kind of transformation requires sustained leadership backing – not just initial approval, but ongoing commitment to unlock resources, align cross-functional teams, and reinforce AI as a visible priority.
Keep executives engaged by making the proof tangible. Share clear, measurable wins that show what's possible: choose high-volume use cases where impact is easy to track, let the AI Agent handle them, observe what changes, and share the story broadly, consistently, and in places where people will notice.
That's exactly what Dotdigital did.
Earn team support through transparency and involvement
Building trust on the ground is important to sustain momentum. People need to see how AI will improve their work and how it fits into the bigger picture. That starts with transparency: what's changing, why, and what success looks like.
Then, give them a role in shaping the system. They need to know exactly what the AI Agent is responsible for, when it hands off, and who owns its performance. That kind of clarity is what enables scale. It makes the change feel grounded and safe, and gives the organization something to align around.
Ask for feedback on AI performance. Show how their insights improve it. Turn the AI Agent into something the team is involved in building, not something that's happening to them.
Evolve your team structure
When AI Agents resolve the majority of customer conversations, the shape of your support team has to change.
The old tiered model – built around queue management, handoffs, and volume-based productivity – no longer fits. AI now handles the bulk of customer interactions, including many that used to require human involvement.
That shifts the role of your human team entirely. Responsibilities evolve, and success is measured differently.
This goes beyond adding automation to existing ways of working. You're building an entirely new operating model.
Foundational roles
Most teams don't hire a dedicated AI function from day one. They start by distributing a few critical responsibilities across existing team members, and formalize those responsibilities as AI becomes central to how support works.
Once you have executive support and a clear strategy in place, these are the four foundational roles we believe are key to getting AI off the ground in a meaningful way:
AI Operations Lead
Owns day-to-day AI performance. Tracks quality. Tunes behavior. Prioritizes fixes. Drives iteration.
Often promoted from support ops. Deep understanding of workflows, systems, and tooling. Strong analytical and cross-functional coordination skills.
Without clear ownership, performance drifts. This role ensures the AI Agent constantly improves.
Knowledge Manager
Owns macros, snippets, and help content. Maintains structured, accurate inputs the AI Agent depends on.
Background in content ops, enablement, or support. Skilled in writing, taxonomy, and content architecture.
AI Agents are only as good as their inputs. This role keeps your knowledge layer reliable, current, and scalable.
Conversation Designer
Designs how the AI Agent communicates by focusing on tone of voice, structure, handoff logic, and interaction flow. Tunes how responses feel.
Background in content design, UX writing, or support enablement. Deep grasp of policy, CX standards, and conversational nuance.
This role ensures the AI Agent speaks like your brand – clearly, helpfully, and in line with customer expectations.
Support Automation Specialist
Builds workflows and backend actions the AI Agent can execute.
Background in support engineering, systems, or tooling. Works closely with product and engineering teams.
Enables the AI Agent to take action – not just respond. This role translates customer intents into actions across your business systems.
Evolving roles
Introducing new AI-first roles doesn't mean your existing functions disappear. But they do need to evolve. For AI to scale effectively, every function in your support organization must shift its focus from managing queue-level activity to improving the system's performance:
- Enablement trains human agents to work with the AI Agent: managing handoffs, tuning responses, and understanding how to give feedback that improves the system.
- QA evolves from reviewing conversations to reviewing behavior: where the AI succeeds, where it falls short, and how the system as a whole performs.
- Workforce management plans capacity based on automation coverage, not just inbound volume.
Leadership model
You'll also need a new kind of leadership to make this model work. The traditional support leader doesn't map cleanly to an AI-first organization. You need a new layer: leaders who are part strategist, part operator. They roll up their sleeves to analyze the AI Agent's performance, refine content, and debug handoffs, but they also coach the team through a new way of working.
This is the "player-coach model" – leaders who actively shape both the system and the people within it.
These leaders see the AI Agent as a teammate to manage, not just a tool to monitor. They can't be purely people leaders or purely systems thinkers. They need to be both, and they're emerging as a critical hire in support right now.
Real-world examples
Some teams are restructuring their organizations around the AI Agent as a core product, not just a support tool.
At Dotdigital, a dedicated "Fin Ops" specialist role was created to refine content and improve AI performance.
At Clay, a dedicated GTM engineer role sits within the ops team, focused on making support more efficient at scale using Fin. A support engineering function has also been embedded directly in the CX organization to help reduce volume by fixing bugs and building internal tools.
At Intercom, our Support team is now structured around three pillars: Human Support, AI Support, and Support Operations and Optimization.
Each function includes evolving roles and responsibilities, but all of them work together as a system, with clear ownership and shared accountability for AI performance.

Rethink who you hire, and why
As your ways of working evolve, so will the traits you value in your team.
Traditional strengths like good communication skills, empathy, and the ability to work under pressure are still important, but they're not enough.
In an AI-first model, you'll want to work with people who demonstrate:
- Systems thinking
- Technical fluency
- Consultative expertise
- A continuous improvement mindset
Hire people who notice what's broken and suggest how to fix it. Who bring customer insights back to the product team. Who are quick to automate repetitive work. Who are curious, proactive, and genuinely excited about what AI makes possible.
And that mindset doesn't just apply to agents. It extends to the managers leading them. You need the whole team to care about how the system performs and treat the AI Agent like a product: something to tune, improve, and evolve over time.
You need people who wake up thinking about how to make the whole thing better.
At Intercom, we've started hiring (both internally and externally) with this mindset. Here's what we look for:
- Technical skills: Strong aptitude across Intercom's core features, Fin, AI Insights, apps, and integrations. Confident problem-solver.
- Customer obsession / empathy: Deep understanding of customer needs, emotions, and experiences. Listens actively, anticipates issues, and delivers solutions with care, urgency, and a genuine desire to help. Turns every interaction into an opportunity to build trust and loyalty.
- Consultative skills: Ability to navigate complex issues with a tailored approach. Combines product expertise with critical thinking to solve for long-term value.
- Continuous improvement mindset: Approaches each issue as a chance to prevent the next. Works to ensure every resolved query becomes a system improvement.
- Insights driven: Uses data to guide decision-making, uncover trends, and prioritize improvements.
- AI literacy: Understands how AI works and actively uses it in their day-to-day – both individually and at the team level.
Establish the systems and structure that sustain AI performance
Once the right people are in place, you need an operating model that makes progress part of how things work and keeps the AI Agent improving over time.
At Intercom, we use a simple mantra to guide how we think about this: "The first time you answer a question should be the last."
We're not chasing perfection. We're trying to build an operating model where every resolution improves the system, so that fewer issues repeat, quality compounds, and support becomes more scalable over time.
Getting this right takes intentional design. It takes clear ownership. Guardrails that let you move quickly without risk. A way to feed insights back in. And a culture that celebrates the work, not just the outcomes.
Let's break that down.
1. Start with clear ownership
One of the most common reasons AI performance plateaus is ambiguity.
When no one owns how the AI Agent performs, feedback gets lost, issues linger, and improvements stall. High-performing teams assign a single owner who's responsible for making the AI Agent better by:
- Reviewing resolution trends and identifying where the system is underperforming.
- Making targeted updates to content, configuration, and behavior.
- Coordinating with product and engineering on systemic blockers.
- Setting improvement priorities, targets, and timelines.
That owner (often the AI Operations Lead described earlier) typically sits within support operations or grows out of an existing role. The exact title or team doesn't matter. What matters is that they take clear ownership and have the authority to drive change.
At Dotdigital, AI performance plateaued after a strong start – resolving around 2,800 conversations per month for three consecutive months. To drive resolution rates up, the team created a dedicated support operations specialist role, filled by an experienced agent with deep product knowledge. This person focuses on refining snippets, improving content, and enhancing the AI's resolution capabilities.
2. Make iteration fast and safe
As the AI Agent handles more volume and complexity, change might start to feel risky. And when teams hesitate to make changes, performance stalls.
That's where lightweight governance comes in: a clear way to keep iterating without bureaucracy or endless approvals.
The teams that have developed a good rhythm with this put a few principles in place:
- Everyone knows which changes need review, and which don't.
- Decision-makers are named.
- Updates are tested (lightly but reliably) before they go live.
- Feedback flows through one place, so it's seen and acted on.
- Progress happens on an agreed schedule (weekly reviews, monthly checkpoints, quarterly planning, etc.), not just when someone has time.
Anthropic ran a focused "Fin hackathon" sprint to improve their AI Agent's resolution rate. The team audited unresolved queries, identified underperforming topics, and created or updated content to close gaps. They converted frequently used macros into AI-usable snippets, monitored Fin's performance during live support, and continuously refined content based on real interactions. This structured approach enabled rapid improvement while maintaining quality standards.
Governance isn't extra overhead or red tape. It's what makes improvement routine and safe. When the path from insight to action is predictable, your AI Agent gets better every week and your support system keeps scaling with it.
3. Build a system that learns by default
AI performance isn't static, but most teams treat it like a one-time implementation. The most successful organizations design systems that learn: they analyze where the AI Agent struggles, then feed that insight directly into structured improvement.
That might look like:
- Reviewing common handoff points to humans.
- Tracking unresolved queries by topic or intent.
- Measuring resolution rate trends over time.
- Using these signals to prioritize fixes or content upgrades.
Whether you follow a formal loop (like the Train → Test → Deploy → Analyze framework covered earlier) or something simpler, the goal is the same: make improvement inevitable.
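To make that loop concrete, here's a minimal sketch of how an AI ops lead might compute resolution rate by topic and flag where to focus next. The field names, sample records, and threshold are illustrative assumptions, not tied to any particular platform or export format:

```python
from collections import defaultdict

# Hypothetical conversation export: each record carries a topic label
# and whether the AI Agent resolved it without human involvement.
conversations = [
    {"topic": "billing", "ai_resolved": True},
    {"topic": "billing", "ai_resolved": False},
    {"topic": "api_errors", "ai_resolved": False},
    # ... one record per conversation, exported from your support platform
]

def resolution_rate_by_topic(records):
    """Group conversations by topic and compute the AI resolution rate for each."""
    totals = defaultdict(lambda: {"total": 0, "resolved": 0})
    for record in records:
        stats = totals[record["topic"]]
        stats["total"] += 1
        stats["resolved"] += int(record["ai_resolved"])
    return {topic: s["resolved"] / s["total"] for topic, s in totals.items()}

TARGET_RATE = 0.5  # illustrative threshold, not a benchmark
rates = resolution_rate_by_topic(conversations)

# Topics below target, worst first – candidates for content fixes or new snippets.
underperforming = sorted(
    (topic for topic, rate in rates.items() if rate < TARGET_RATE),
    key=lambda topic: rates[topic],
)
print(underperforming)  # ['api_errors'] with the sample data above
```

Run on a regular cadence, even a simple analysis like this turns "measuring resolution rate trends" into a standing input to prioritization rather than an occasional audit.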
4. Treat content as competitive infrastructure
Your AI Agent is only as good as what it knows. This makes content strategy a competitive advantage, not just a support function.
You need to treat knowledge like infrastructure, where:
- Every topic has a clear owner.
- Content is structured, versioned, and ingestion-ready (see the sketch after this list).
- New products ship with source-of-truth content by default.
- Changes are shipped on a schedule, not when someone finds time.
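As a rough illustration of what "structured, versioned, and ingestion-ready" can mean in practice, here's a minimal sketch of a knowledge record and a pre-ingestion check. The fields, owner address, and freshness threshold are assumptions made for the example, not a prescribed schema:

```python
from datetime import date

# Illustrative knowledge record: every article carries an owner, a version,
# and a review date, so the AI Agent only ingests content someone stands behind.
article = {
    "title": "How to reset two-factor authentication",
    "owner": "knowledge.manager@example.com",  # hypothetical owner
    "version": "2.3",
    "last_reviewed": date(2025, 1, 15),
    "product_area": "account-security",
    "body": "Step-by-step reset instructions...",
}

REQUIRED_FIELDS = {"title", "owner", "version", "last_reviewed", "product_area", "body"}
MAX_REVIEW_AGE_DAYS = 180  # illustrative freshness threshold

def ready_for_ingestion(record: dict) -> bool:
    """Accept a record only if it is complete and has been reviewed recently."""
    if not REQUIRED_FIELDS.issubset(record):
        return False
    age_in_days = (date.today() - record["last_reviewed"]).days
    return age_in_days <= MAX_REVIEW_AGE_DAYS

print(ready_for_ingestion(article))
```

A check like this can run as part of launch readiness, so "new products ship with source-of-truth content by default" becomes enforceable rather than aspirational.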
At Intercom, we've evolved our New Product Introduction (NPI) process by aligning early with R&D on a single, canonical source of truth that becomes the foundation for all downstream content – including what the AI Agent uses to resolve queries. By embedding content creation into launch readiness, not as an afterthought, we've consistently hit 50%+ resolution rates on new features from day one.
This infrastructure layer is often what separates teams that scale confidently from those that stall out. Without it, every improvement is harder and AI performance remains inconsistent. With it, your AI Agent gets better every day – and the system compounds.
5. Make belief visible
Even the best system won't keep improving if people stop believing in it.
Belief will fade quietly if you don't reinforce it. Keep it strong by:
- Sharing specific wins regularly.
- Highlighting improvements with metrics.
- Recognizing the people behind those improvements and giving them space to lead.
This is about more than just team morale. It's about keeping everyone aligned and excited about the bigger play you're all part of.
What great looks like
When AI is deeply embedded in your support organization, the entire system works differently:
- Clear ownership and accountability: Dedicated leaders manage AI performance with the same rigor applied to human team management.
- Evolved roles and responsibilities: Team structure reflects the new reality of AI-first operations, with specialists focused on system optimization, content creation, and high-value customer interactions.
- Systematic improvement processes: Feedback loops, governance frameworks, and improvement cycles are embedded in how work gets done, not bolted on afterward.
- Strategic knowledge management: Content is treated as infrastructure, with ownership, maintenance, and optimization processes that support AI performance.
- Executive sponsorship and sustained internal momentum: AI is backed by leadership and championed across the team. Success is visible, celebrated, and continually reinforced throughout the organization.
The difference between AI as a promising experiment and AI as business infrastructure comes down to alignment. When people understand their new roles, processes support continuous improvement, and ownership drives accountability, AI performance compounds.





