Scale it
Scale the impact of AI
You’ve proven AI can deliver results. Now comes the real opportunity: scaling its impact.
At scale, AI Agents reshape how your business delivers value. But they require an evolution of your customer experience, the role of your team, and the economic model of your support function. If you don’t evolve the system around AI, you’ll hit a ceiling.
So the questions you need to ask now are:
- How do we sustain and expand value?
- How do we avoid the plateau?
- How do we rewire the organization around AI?
That’s what this chapter covers. How to scale beyond quick wins and unlock the compounding value across three transformational pillars:
1. Customer experience – Redesign journeys around AI-human collaboration. Build trust. Maximize customer value.
2. Organizational and system design – Redefine roles. Assign ownership. Build systems that improve continuously.
3. Economics – Rethink how you measure value. Shift from volume and cost to outcome and value. Prove ROI at scale.
The outcome is a new model for support: AI as infrastructure, humans as system designers, and a transformed customer experience – where every interaction starts with an AI Agent, human experts collaborate when needed, and customers get real value.
Start with a mindset shift
If you were building support from scratch today, you’d design around AI from day one. That’s the mindset you need to adopt.
But many teams treat AI like a feature instead of infrastructure. They tack it onto existing processes and tools, limit its scope to tier-one issues, and fail to evolve the systems around it.
These teams are thinking too small. They chase incremental efficiency gains, underinvest in the system change needed to make AI successful, and get stuck as a result. The customer experience remains fragmented. The support team stays reactive. The business leaves value on the table.
AI Agents don’t just answer questions, they act. They pull in real-time data, trigger backend processes, and complete multi-step tasks. They’re not limited to “simple stuff,” and they’re not just a triage layer. They’re fully capable, end-to-end resolution engines, and are central to transforming your customer experience.
What scaled success looks like
Before we dive into the mechanics, it’s important to look at the end state. What does it look like when AI is deeply embedded, not just deployed?
- 80%+ of inbound conversations are resolved by AI, including multi‑step queries.
- Human teams are focused on system design, feedback, and strategic value, not queue clearing.
- Roles like AI ops leads, knowledge managers, and automation specialists are embedded in the team.
- CX is fast, personalized, and seamless across channels.
- Support is treated as infrastructure, and critical to customer experience and business growth.
And what stalling looks like:
- AI handles only basic queries, often in parallel with legacy workflows.
- Human agents remain reactive and overloaded.
- No one owns AI performance. Systems are ad hoc or siloed.
- Customers get inconsistent experiences, and notice.
- ROI is unclear. AI is seen as an experiment, not a strategic lever.
This chapter will show you how to avoid stalling, and scale with confidence.
Important: Scale doesn’t happen without executive buy-in
It’s easy to think of AI Agents as a support team initiative, but the initiative won’t scale without the C-suite.
AI Agents reshape how support works, how teams are structured, how performance is measured, and how cost and value flow through the organization. That kind of change touches everything, and it can’t succeed without top-down commitment: budget, air cover, and a willingness to rethink the old cost models. That means aligning your CFO around ROI, your CCO around journey design, and your CEO around customer experience as a strategic advantage.
Even a high-performing AI Agent will stall without this. You’ll see early wins, but the bigger opportunity – the one that shifts your economics, elevates your team, and transforms your customer experience – won’t materialize without someone at the top who understands that AI is infrastructure.
If you’re a support, CX, or AI transformation leader: use early wins to show your exec team what’s possible.
If you’re a C-suite leader: your support team can’t do this alone. But with the right mandate, they’ll help lead your company through the next era of customer experience.
Customer experience
Reinvent the way your business shows up for customers
For many companies, transforming the customer experience is AI’s biggest promise.
Efficiency might justify the investment, but the most ambitious support teams don’t just deploy AI to drive efficiency or automate Tier 1 volume. They use it to create improved – and entirely new – customer experiences: ones that are faster and smarter, that remove friction, anticipate needs, and deliver value in every conversation.
These teams aren’t layering AI on top of the old model. They’re designing around it, building new models with AI at the core.
This section outlines how to make that transition from reactive service to AI-powered experience design. It’s built around five principles that help teams scale trust, consistency, and delight across the customer journey.
First, reimagine the customer experience from first principles
What do customers actually want?
- Fast, accurate, and comprehensive resolutions to their problems.
- A feeling of being heard, respected, and understood – an experience that feels personal.
- Minimal effort, repetition, or friction.
Those needs haven’t changed. What has changed is your ability to meet them – more consistently, more intelligently, and at greater scale.
In the traditional support model, if you had no constraints (infinite time, people, and budget), you might have delivered on those principles by:
- Staffing a global team of highly trained agents to cover every time zone.
- Equipping them with full customer context, real-time data, and flexible systems.
- Investing deeply in onboarding, coaching, and quality assurance.
But you didn’t have infinite resources, so you made trade‑offs. You might have buried the “Contact Us” button behind help articles, deflected when you could, and triaged the rest. You relied on queues, rules, and SLAs to manage complexity and control cost.
But those constraints are no longer fixed.
Modern AI Agents don’t just answer questions immediately. They understand context. They complete tasks. They get better over time. And when you design your systems with that in mind, something powerful happens: support stops being reactive.
So you can stop working around limitations, and start designing around what customers actually want.
How you design your customer experience shifts:
- From reactive to real-time – With AI, help is instant. Customers don’t wait. They get fast, relevant, context-aware answers and resolutions.
- From high-effort to zero-friction – AI makes it possible to eliminate entire categories of customer effort, like dead ends or looping menus, repeating information, or unnecessary steps between question and outcome.
- From queues to journeys – The most advanced teams are using AI to guide customers through the journey, nudging them forward, preventing issues before they happen, and delivering help before it’s asked for.
- From table stakes to competitive edge – With AI, your customers get help faster and with less friction. These great experiences don’t just get customers to stay, they encourage them to advocate. Great support becomes a reason to choose you, and stay with you.
Five principles for AI-powered experience design
Each of these principles is engineered to deliver on the first principles of great support: fast, accurate resolutions, empathy, and effortlessness.
1. Treat customer experience like a product
Treating support as a product means designing, building, and managing your support experience with the same rigor and accountability you would apply to your core product.
It’s a shift in mindset from seeing support as a reactive function or operational cost to treating it as a customer-facing experience that shapes perception of your brand, drives loyalty, contributes to growth, and accelerates the value customers get from your product.
Just like product teams:
- You define goals (faster onboarding, higher CSAT, lower churn).
- You map flows (AI starts the conversation, human handovers, proactive nudges).
- You instrument the journey (track handoffs, drop-offs, success states).
- You run tests and ship improvements (tone tweaks, fallback paths, training updates).
- You own the outcomes (gather feedback, measure performance, use insights to continuously improve the system).
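To make “instrument the journey” concrete, here’s a minimal sketch of what that tracking could look like. Everything here is hypothetical – the event names (`ai_opened`, `handoff_to_human`, etc.) and sample data are invented for illustration, not drawn from any particular platform:

```python
from collections import Counter

# Hypothetical conversation events: (conversation_id, event_type).
# Event names are illustrative assumptions, not a real product's schema.
events = [
    ("c1", "ai_opened"), ("c1", "ai_resolved"),
    ("c2", "ai_opened"), ("c2", "handoff_to_human"), ("c2", "human_resolved"),
    ("c3", "ai_opened"), ("c3", "customer_dropped"),
    ("c4", "ai_opened"), ("c4", "ai_resolved"),
]

counts = Counter(event for _, event in events)
opened = counts["ai_opened"]

# The three signals called out above: handoffs, drop-offs, success states.
metrics = {
    "ai_resolution_rate": counts["ai_resolved"] / opened,
    "handoff_rate": counts["handoff_to_human"] / opened,
    "drop_off_rate": counts["customer_dropped"] / opened,
}

for name, value in metrics.items():
    print(f"{name}: {value:.0%}")
```

In practice these events would come from your support platform’s logs or reporting API; the point is that handoffs, drop-offs, and success states become numbers you can track, test, and improve against.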
2. Lead with AI, back with humans
AI isn’t replacing the human touch. It’s redefining when, where, and how it’s most valuable.
In a scaled model, AI becomes the first responder: the default entry point for every conversation (and the end point for most of those conversations too). But it doesn’t work in isolation. The experience should be hybrid by design:
- AI handles most conversations, including complex ones. It’s fast, scalable, and consistent.
- Humans step in where they add value, either to resolve high-context, high-stakes issues or to improve the system itself. The team’s value shifts from reactive resolution to proactive system design and high-value customer interactions.
- Even when humans take over, AI is still involved. AI copilot features like summarization, categorization, and suggested answers make human agents faster and more effective.
- Handoffs between the two are invisible. When a conversation moves from AI to human (or back again), the customer shouldn’t notice. Every message should be personalized and contextual. It should feel like part of a single, coherent experience, not a jarring switch.
Every message, whether from AI or a human, should feel like part of one coherent experience. Behind the scenes, you define who handles what, and why.
Ask:
- Where can AI resolve fully?
- Where does it need a fallback?
- Where do humans add the most value (judgment, empathy)?
- Who reviews failures and feeds them back into the system?
It’s also important that you don’t retrofit AI onto old processes or team structures.
A big mistake teams make when adding AI to the customer experience is tacking it onto something that wasn’t designed for it. That’s when experiences break. Handoffs feel clunky. Customers lose trust.
You need to design around AI from the beginning. For example, write knowledge content for AI, not humans. Design workflows where AI is the primary resolver. Define what happens after success: what feedback loops get triggered, what metrics matter, and who owns what.
When AI is at the core of the system, not tacked onto it, it performs better. And the customer experience feels better.
3. Be proactive
Use AI to anticipate customer needs and offer help, guidance, or nudges before they become problems.
It’s not about deflection, it’s about momentum: identifying drop-offs, surfacing friction, and stepping in at just the right time.
At Intercom, our team is working towards Fin acting as a “digital customer experience agent” for every customer – not only answering support queries but also offering tailored onboarding and proactive help. The aim is for AI to feel integrated into the full lifecycle experience, delivering relevant support without even needing to be asked.
4. Build for trust
Some customers still assume AI won’t help them. You’re dealing with the legacy of bad chatbots that gave vague answers, forced people through clunky menus, and left them in endless loops.
You build trust in AI by showing that it works. At scale, every interaction becomes a test. And every successful resolution is proof.
To build trust at scale:
- Lead with AI – Don’t hide your AI Agent behind layers of “choose an option” – get the customer to the AI Agent as soon as possible. Show them that this is a different kind of interaction.
- Make handovers seamless – If the AI Agent can’t resolve an issue, the handoff to your team should feel seamless. Design invisible transitions. Pass context, tone, and conversation history automatically. Empower agents with summaries and suggestions so they can pick up exactly where the AI left off.
- Show customers AI works – When customers second-guess a correct AI response, have your team back it up. Build feedback loops that reinforce what’s working and fix what’s not. Every interaction is a test. When AI consistently delivers the right answers, completes tasks end-to-end, and seamlessly hands over to your team when needed, it will earn their trust.
At Intercom, we surveyed over 1,000 US consumers to see if their sentiment toward AI-first support changed once they saw what an AI Agent was capable of. The change was significant: by simply showing people that modern AI Agents can conversationally engage with customers, perform actions, and seamlessly escalate queries to a human, positive sentiment increased by 20 percentage points and trust levels jumped by 18 points.
| | Before demo | After demo |
| --- | --- | --- |
| Trust | 39% | 57% |
| Distrust | 35% | 18% |
5. Make it feel personal
Your AI Agent represents your brand. The way it speaks, follows policies, and responds matters. With AI Agents, you can use capabilities like tone control, fallback logic, and language preferences to align the experience to your brand’s standards.
For Fin, we call this “Guidance.” It’s a way to control how the AI Agent sounds and behaves using simple, natural language prompts.

Consistency builds trust, but personality builds connection and loyalty.
What great looks like
In an AI-native customer experience:
- The experience is built with AI at the core, not as an add-on.
- AI is the default first responder across all channels.
- Most queries are resolved instantly, without needing to be handed over to a human.
- Proactive support is embedded into the experience.
- Handoffs between AI and humans are seamless.
- Brand tone is consistent, even at scale.
- Support doesn’t just resolve, it retains, activates, and delights.
Organizational and system design
Rewire teams and systems
To realize the true value of AI and the promise of a transformed customer experience, the way your team works needs to change.
That doesn’t mean starting from scratch. But it does mean rethinking how support is structured, how performance is owned, and how your systems evolve.
The teams that scale AI successfully treat it like infrastructure and design around it. It’s an intentional shift that helps the technology keep performing as its scope grows.
In this section, we’ll show you how to make a strong case for change, evolve your team structure, and create systems and ownership models that sustain AI performance over time.
Make the case for change and drive internal alignment
Scaling AI is as much about people as it is about technology. If people don’t understand what’s changing, doubt creeps in. They worry about their roles. They question whether the new system will work. They aren’t open to trying new things and embracing a “new way.” This lack of alignment puts the AI initiative at risk of stalling out, so it’s important to address it before it takes hold.
Here’s how to approach this.
Provide proof that AI can help solve something real
Frame it around pain that’s already obviously felt: the backlog that keeps growing, the repetitive work that blocks strategic progress, the volume that outpaces headcount. When you show how AI can take the pressure off, people start to lean in.
Secure executive air cover early
Scaling AI successfully requires leadership backing to move at speed. Executive sponsorship provides the air cover to unlock resources, align cross-functional teams, and make AI a visible priority across the company.
You can have the right team and tooling in place, but without strong leadership support, momentum is harder to sustain. Scaling AI reshapes how people work, roles evolve, and systems are structured. That kind of shift works best when leaders are behind it.
To secure exec air cover, make the proof tangible by focusing on one or two clear, measurable wins that show what’s possible. For example, choose a high-volume use case where impact is easy to track. Let the AI Agent handle it, observe what changes, and then share the story broadly, consistently, and in places where people will notice.
That’s the approach Dotdigital took.
Earn team support through transparency and involvement
Executive backing will give you momentum, but you need to build trust on the ground to sustain it. People need to see how AI will improve their work and how it fits into the bigger picture. That starts with transparency: what’s changing, why, and what success looks like.
Then, give them a role in shaping the system. They need to know exactly what the AI Agent is responsible for, when it hands off, and who owns its performance. That kind of clarity is what enables scale. It makes the change feel grounded and safe, and gives the organization something to align around.
Ask for feedback on AI performance. Show how their insights improve it. Turn the AI Agent into something the team is involved in building, not something that’s happening to them.
Evolve your team structure and introduce the roles AI needs to succeed
When AI Agents resolve the majority of customer conversations, the shape of your support team has to change.
The old tiered model built around queue management, handoffs, and volume-based productivity no longer fits. AI now handles the bulk of customer interactions, including many that used to require human involvement.
That shifts the role of your human team entirely. Responsibilities evolve, and success is measured differently.
It goes beyond just adding automation to existing ways of working. You’re building an operating model that’s entirely new.
Where to start: Roles that unlock AI performance
Most teams don’t hire a dedicated AI function from day one. They start by distributing a few critical responsibilities across existing team members, and formalize those responsibilities as AI becomes central to how support works.
Once you have executive support and a clear strategy in place, these are the four foundational roles we believe are key to getting AI off the ground in a meaningful way:
| Role | Responsibilities | Skillset / background | Why you need this |
| --- | --- | --- | --- |
| AI Operations Lead | Owns day-to-day AI performance. Tracks quality. Tunes behavior. Prioritizes fixes. Drives iteration. | Often promoted from support ops. Deep understanding of workflows, systems, and tooling. Strong analytical and cross-functional coordination skills. | Without clear ownership, performance drifts. This role ensures the AI Agent constantly improves. |
| Knowledge Manager | Owns macros, snippets, and help content. Maintains structured, accurate inputs the AI Agent depends on. | Background in content ops, enablement, or support. Skilled in writing, taxonomy, and content architecture. | AI Agents are only as good as their inputs. This role keeps your knowledge layer reliable, current, and scalable. |
| Conversation Designer | Designs how the AI Agent communicates by focusing on tone of voice, structure, handoff logic, and interaction flow. Tunes how responses feel. | Background in content design, UX writing, or support enablement. Deep grasp of policy, CX standards, and conversational nuance. | This role ensures the AI Agent speaks like your brand – clearly, helpfully, and in line with customer expectations. |
| Support Automation Specialist | Builds workflows and backend actions the AI Agent can execute. | Background in support engineering, systems, or tooling. Works closely with product and engineering teams. | Enables the AI Agent to take action – not just respond. This role translates customer intents into business systems. |
What happens to other roles?
Introducing new AI-first roles doesn’t mean your existing functions disappear. But they do need to evolve. For AI to scale effectively, every function in your support organization must shift its focus from managing queue-level activity to improving the system’s performance:
- Enablement trains human agents to work with the AI Agent: managing handoffs, tuning responses, and understanding how to give feedback that improves the system.
- QA evolves from reviewing conversations to reviewing behavior: where the AI succeeds, where it falls short, and how the system as a whole performs.
- Workforce management plans capacity based on automation coverage, not just inbound volume.
You’ll also need a new kind of leadership to make this model work. The traditional support leader doesn’t map cleanly to an AI-first organization. You need a new layer: leaders who are part strategist, part operator. They roll up their sleeves to analyze the AI Agent’s performance, retrain content, and debug handoffs, but they also coach the team through a new way of working.
This is the “player-coach model” – leaders who actively shape both the system and the people inside it.
These leaders see the AI Agent as a teammate to manage, not just a tool to monitor. They can’t be purely people leaders or purely systems thinkers. They need to be both, and they’re emerging as a critical hire in support right now.
Some teams are restructuring their organizations around the AI Agent as a core product, not just a support tool.
Real-world examples:
- At Dotdigital, a dedicated “Fin Ops” specialist role was created to refine content and improve AI performance.
- At Clay, a dedicated GTM engineer role has been established as part of the ops team with a focus on making support more efficient at scale using Fin. Additionally, a support engineering function has been embedded directly in the CX organization to help reduce volume by fixing bugs and building internal tools.
At Intercom, our Support team is now structured around three pillars: Human Support, AI Support, and Support Operations and Optimization.
Each function includes evolving roles and responsibilities, but all of them work together as a system, with clear ownership and shared accountability for AI performance.

Rethink who you hire, and why
As your ways of working evolve, so will the traits you value in your team.
Traditional strengths like good communication skills, empathy, and the ability to work under pressure are still important, but they’re not enough.
In an AI-first model, you’ll want to work with people who demonstrate:
- Systems thinking
- Technical fluency
- Consultative expertise
- A continuous improvement mindset
Hire people who notice what’s broken and suggest how to fix it. Who bring customer insights back to the product team. Who are quick to automate repetitive work. Who are curious, proactive, and genuinely excited about what AI makes possible.
And that mindset doesn’t just apply to agents. It extends to the managers leading them. You need the whole team to care about how the system performs and treat the AI Agent like a product: something to tune, improve, and evolve over time.
You need people who wake up thinking about how to make the whole thing better.
At Intercom, we’ve started hiring (both internally and externally) with this mindset. Here’s what we look for:
- Strong aptitude across Intercom’s core features, Fin, AI Insights, apps, and integrations. Confident problem-solver.
- Deep understanding of customer needs, emotions, and experiences. Listens actively, anticipates issues, and delivers solutions with care, urgency, and a genuine desire to help. Turns every interaction into an opportunity to build trust and loyalty.
- Ability to navigate complex issues with a tailored approach. Combines product expertise with critical thinking to solve for long-term value.
- Approaches each issue as a chance to prevent the next. Works to ensure every resolved query becomes a system improvement.
- Uses data to guide decision-making, uncover trends, and prioritize improvements.
- Understands how AI works and actively uses it in their day-to-day – both individually and at the team level.
Establish the systems and structure that sustain AI performance
Once the right people are in place, you need an operating model that makes progress part of how things work and keeps the AI Agent improving over time.
At Intercom, we use a simple mantra to guide how we think about this: “The first time you answer a question should be the last.”
We’re not chasing perfection. We’re trying to build an operating model where every resolution improves the system, so that fewer issues repeat, quality compounds, and support becomes more scalable over time.
Getting this right takes intentional design. It takes clear ownership. Guardrails that let you move quickly without risk. A way to feed insights back in. And a culture that celebrates the work, not just the outcomes.
Let’s break that down.
1. Start with clear ownership
One of the most common reasons AI performance plateaus is ambiguity.
When no one owns how the AI Agent performs, feedback gets lost, issues linger, and improvements stall. High-performing teams assign a single owner who’s responsible for making the AI Agent better by:
- Reviewing resolution trends and identifying where the system is underperforming.
- Making targeted updates to content, configuration, and behavior.
- Coordinating with product and engineering on systemic blockers.
- Setting improvement priorities, targets, and timelines.
That owner (often the AI Operations Lead) typically sits within support operations or grows out of an existing role. The title or team they sit on isn’t important. What matters is that they take clear ownership and have the authority to drive change.
At Dotdigital, AI performance plateaued after a strong start – resolving around 2,800 conversations per month for three consecutive months. To drive resolution rates up, the team created a dedicated support operations specialist role, filled by an experienced agent with deep product knowledge. This person focuses on refining snippets, improving content, and enhancing the AI’s resolution capabilities.
2. Make iteration fast and safe
As the AI Agent handles more volume and complexity, change might start to feel risky. And when teams hesitate to make changes, performance stalls.
That’s where lightweight governance comes in: a clear way to keep iterating without bureaucracy or endless approvals.
The teams that have developed a good rhythm with this put a few principles in place:
- Everyone knows which changes need review, and which don’t.
- Decision-makers are named.
- Updates are tested (lightly but reliably) before they go live.
- Feedback flows through one place, so it’s seen and acted on.
- Progress happens on an agreed schedule (weekly reviews, monthly checkpoints, quarterly planning, etc.), not just when someone has time.
Anthropic ran a focused “Fin hackathon” sprint to improve their AI Agent’s resolution rate. The team audited unresolved queries, identified underperforming topics, and created or updated content to close gaps. They converted frequently used macros into AI-usable snippets, monitored Fin’s performance during live support, and continuously refined content based on real interactions. This structured approach enabled rapid improvement while maintaining quality standards.
Governance isn’t extra overhead or red tape. It’s what makes improvement routine and safe. When the path from insight to action is predictable, your AI Agent gets better every week and your support system keeps scaling with it.
3. Build a system that learns by default
AI performance isn’t static, but most teams treat it like a one-time implementation. The most successful organizations design systems that learn: they analyze where the AI Agent struggles, then feed that insight directly into structured improvement.
That might look like:
- Reviewing common handoff points to humans.
- Tracking unresolved queries by topic or intent.
- Measuring resolution rate trends over time.
- Using these signals to prioritize fixes or content upgrades.
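As an illustration of what that prioritization step might look like in practice, here’s a small Python sketch. The topics, volumes, and field names are all invented for the example; real data would come from your platform’s conversation logs:

```python
from collections import defaultdict

# Hypothetical weekly log of conversations: (topic, resolved_by_ai).
# Topics and counts are invented for illustration.
conversations = [
    ("billing", True), ("billing", False), ("billing", False),
    ("login", True), ("login", True), ("login", False),
    ("exports", False), ("exports", False),
]

stats = defaultdict(lambda: {"total": 0, "unresolved": 0})
for topic, resolved in conversations:
    stats[topic]["total"] += 1
    if not resolved:
        stats[topic]["unresolved"] += 1

# Rank topics by unresolved volume: where a content fix pays off most.
priorities = sorted(
    stats.items(), key=lambda kv: kv[1]["unresolved"], reverse=True
)

for topic, s in priorities:
    rate = 1 - s["unresolved"] / s["total"]
    print(f"{topic}: {s['unresolved']} unresolved, {rate:.0%} resolution rate")
```

Sorting by raw unresolved volume is one simple heuristic; teams might also weight topics by customer tier or revenue impact when setting the fix queue.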
Whether you follow a formal loop (like the Train → Test → Deploy → Analyze framework from before) or something simpler, the goal is the same: make improvement inevitable.
4. Treat content as competitive infrastructure
Your AI Agent is only as good as what it knows. This makes content strategy a competitive advantage, not just a support function.
You need to treat knowledge like infrastructure, where:
- Every topic has a clear owner.
- Content is structured, versioned, and ingestion-ready.
- New products ship with source-of-truth content by default.
- Changes are shipped on a schedule, not when someone finds time.
At Intercom, we’ve evolved our New Product Introduction (NPI) process by aligning early with R&D on a single, canonical source of truth that becomes the foundation for all downstream content – including what the AI Agent uses to resolve queries. By embedding content creation into launch readiness, not as an afterthought, we’ve consistently hit 50%+ resolution rates on new features from day one.
This infrastructure layer is often what separates teams that scale confidently from those that stall out. Without it, every improvement is harder and AI performance remains inconsistent. With it, your AI Agent gets better every day – and the system compounds.
5. Make belief visible
Even the best system won’t keep improving if people stop believing in it.
Belief will fade quietly if you don’t reinforce it. Keep it strong by:
- Sharing specific wins regularly.
- Highlighting improvements with metrics.
- Recognizing the people behind those improvements and giving them space to lead.
This is about more than just team morale. It’s about keeping everyone aligned and excited about the bigger play you’re all part of.
What great looks like
When AI is deeply embedded in your support organization, the entire system works differently:
- Clear ownership and accountability – Dedicated leaders manage AI performance with the same rigor applied to human team management.
- Evolved roles and responsibilities – Team structure reflects the new reality of AI-first operations, with specialists focused on system optimization, content creation, and high-value customer interactions.
- Systematic improvement processes – Feedback loops, governance frameworks, and improvement cycles are embedded in how work gets done, not bolted on afterward.
- Strategic knowledge management – Content is treated as infrastructure, with ownership, maintenance, and optimization processes that support AI performance.
- Executive sponsorship and sustained internal momentum – AI is backed by leadership and championed across the team. Success is visible, celebrated, and continually reinforced throughout the organization.
The difference between AI as a promising experiment and AI as business infrastructure comes down to alignment. When people understand their new roles, processes support continuous improvement, and ownership drives accountability, AI performance compounds.
Economics
Prove the value of AI at scale
To sustain AI investment and expand its scope, you need to move beyond cost-cutting narratives and build a case for business impact. You need a new economic model: one that redefines success, blends old budget lines, and links performance directly to business outcomes.
When done right, AI goes far beyond improving support efficiency. It rewires the financial model: breaking the link between support costs and revenue growth, and turning support into a contributor to customer activation, retention, and lifetime value.
Stop measuring AI like software and thinking about ROI in isolation. Start treating your AI Agent as a new workforce capability that changes how your support function creates and captures value.
In this section, we’ll explore how to redefine value in an AI-first model, evolve performance measurement, tie support directly to business outcomes, blend budgets for your new operating model, and reinvest savings for long-term gain.
Reframe how you define value
Traditional support economics were built around linear growth: more tickets meant more headcount, more outsourcing, more software. Value was measured by containment: deflections, handle times, tickets closed.
Here’s what value looks like in an AI-first model:
- Human productivity: Your team focuses on the edge cases, not the queue.
- System improvement: Every resolved query makes the system smarter.
- Revenue influence: Support becomes a lever for activation, retention, and growth.
- Organizational agility: You scale service without scaling headcount.
Here’s how that transformation plays out:
- Before AI: Support costs and revenue grow in lockstep. Support headcount rises as the business grows. Value is hard to attribute.
- After AI: Revenue continues to climb. Support costs flatten, grow slowly, or decline. Support starts influencing revenue through better CX and faster time-to-value.
This is what it means to decouple cost from growth (and what every CFO wants to see).
But that curve doesn’t bend on day one. The early stages often look like additive spend because you’re funding a new workforce layer while your old one is still intact. The real gains come gradually through attrition (not backfilling open roles), pausing BPO expansion, slowing down hiring plans, and the compounding effect of automation.
(See Chapter 2: Launch it for more detail on how automation rate influences time-to-ROI.)
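A toy cost model makes that trajectory concrete. Every number below (platform cost, cost per agent, agent capacity, attrition rate, volume growth, automation ramp) is an invented assumption for illustration only, not a benchmark; the point is the shape of the curve, with spend looking additive during the ramp before attrition and automation bend it down.

```python
# Toy model: every number is an invented assumption, purely to show the shape
# of the cost curve (additive spend early, divergence later). Not a benchmark.
AI_PLATFORM_COST = 15_000  # assumed flat monthly AI Agent cost
COST_PER_AGENT = 5_000     # assumed fully-loaded monthly cost per human agent
TICKETS_PER_AGENT = 500    # assumed monthly capacity per human agent
ATTRITION_RATE = 0.02      # assumed monthly attrition (open roles not backfilled)

def project(months, start_volume, growth, ramp_months, target_automation):
    """Yield (month, baseline_cost, ai_model_cost) for each month."""
    volume = start_volume
    headcount = start_volume / TICKETS_PER_AGENT  # team staffed for day-one volume
    for month in range(1, months + 1):
        # Automation ramps linearly to its target, then plateaus.
        automation = min(month / ramp_months, 1.0) * target_automation
        needed = volume * (1 - automation) / TICKETS_PER_AGENT
        # Headcount only shrinks via attrition, never below what's needed.
        headcount = max(needed, headcount * (1 - ATTRITION_RATE))
        baseline = (volume / TICKETS_PER_AGENT) * COST_PER_AGENT
        ai_model = headcount * COST_PER_AGENT + AI_PLATFORM_COST
        yield month, baseline, ai_model
        volume *= 1 + growth

for month, baseline, ai_model in project(12, start_volume=20_000, growth=0.04,
                                         ramp_months=6, target_automation=0.6):
    print(f"month {month:2d}: baseline ${baseline:>9,.0f} vs AI model ${ai_model:>9,.0f}")
```

With these particular assumptions, month one costs more than the baseline (the old workforce is still largely intact while the AI platform fee is added), and the AI model pulls ahead as attrition and the automation ramp compound.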
Evolve your performance metrics
1. Measuring human team performance
Metrics like average handle time (AHT), cases handled, and first contact resolution (FCR) defined what “good” performance looked like when human agents managed the majority of support queries and success meant moving quickly through high volumes. But they no longer reflect the value of your team’s work.
In many cases, they create the wrong incentives: pushing speed over quality, or signaling failure when agents are actually solving more difficult problems. Here’s why they don’t work:
| Legacy metric | Why it breaks in an AI-first model |
|---|---|
| Average handle time (AHT): how long an agent spends on each conversation | AHT rises, for good reason: humans now handle the harder cases. |
| Cases handled: total cases, tickets, or conversations resolved | Cases handled ≠ productivity once AI absorbs routine volume. |
| First contact resolution (FCR): how often a query is resolved in the first interaction | FCR drops, but that reflects complexity, not failure. |
You don’t need to throw out these traditional metrics entirely, but they become less relevant once you’ve deployed an AI Agent. Instead, you need to focus on:
- Where human effort is going.
- What value it’s driving.
- How humans are improving the system as a whole.
Your team’s responsibilities are changing, and your metrics need to evolve with them. That means shifting away from blunt metrics that reward speed and volume, and toward metrics that reflect value:
| Metric | Questions to ask | Metrics to track |
|---|---|---|
| Human involvement rate | How often do humans need to step in, and are those interventions high-value? | |
| Resolution of handover queries | Are humans completely resolving the questions that are handed over? | |
| Value-adding activities | How much of your team’s effort is now focused on work tied to retention, growth, or product insights? Are human-led conversations driving meaningful customer impact? | |
| Skill evolution | Are agents developing the skills needed in an AI-first organization, like consultative support, content creation, and knowledge management? | |
| Quality of human-AI collaboration | How effectively are human agents influencing AI performance through feedback and knowledge improvement? | |
Instead of asking, “How many cases did we close?”
Ask:
- How are we helping customers get to value faster? E.g., what is it worth to unblock a customer from using your product instantly, in terms of retention or expansion?
- How are we shaping and strengthening AI performance? E.g., are human agents improving Fin’s resolution quality, coverage, or confidence?
- How are we improving the system overall? E.g., through better content, workflows, and journey design?
2. Measuring AI Agent performance
In an AI-first model, your AI Agent is your front line. It needs to be evaluated with the same rigor, ownership, and outcome-focus you apply to human teams.
These are the core metrics that matter when measuring AI Agent performance at scale:
| Metric | Why it’s important | What good looks like |
|---|---|---|
| Resolution rate: % of conversations fully resolved by the AI Agent without human intervention | Resolution, not deflection, is the true measure of value. This tells you whether the AI Agent is actually solving problems, not just containing volume. | ≥75% resolution (across tasks, queries, multi-turn flows) |
| Involvement rate: % of inbound support volume that the AI Agent is involved in, regardless of outcome | This shows the AI Agent’s footprint. It reveals whether it’s catching enough of your volume to make an impact. | ≥80% AI involvement |
| Automation rate: resolution rate × involvement rate | This demonstrates the AI Agent’s overall impact across channels. | ≥60% automation |
These metrics work together. Resolution tells you if it’s working. Involvement shows where it’s working. Automation reveals how much work it’s doing.
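The relationship between the three metrics is simple multiplication. A minimal sketch, using the target numbers above as illustrative inputs:

```python
def automation_rate(resolution_rate: float, involvement_rate: float) -> float:
    """Automation rate = resolution rate x involvement rate.

    resolution_rate: share of AI-involved conversations fully resolved by the AI Agent.
    involvement_rate: share of total inbound volume the AI Agent participates in.
    """
    return resolution_rate * involvement_rate

# 75% resolution across 80% of inbound volume -> 60% of all volume automated.
rate = automation_rate(0.75, 0.80)
print(f"Automation rate: {rate:.0%}")  # -> Automation rate: 60%
```

This is also why the two inputs have to move together: a 95% resolution rate on 20% of volume still only automates 19% of your queue.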
Now, you need to measure how customers feel about it.
3. Measuring the customer experience
Teams often rely on metrics like CSAT and deflection rate to measure the customer experience, but neither offers a reliable read on what customers are actually experiencing.
CSAT only covers a small fraction of conversations (often less than 10%) and tends to capture extremes. Deflection rate assumes that if a query didn’t reach a human, it must have been resolved. But that’s a risky assumption. These metrics miss context and nuance. They make it hard for teams to know what to fix next, or whether anything’s actually improving.
At its core, assessing the customer experience comes down to two things:
- Did the customer get the help they needed?
- How did they feel about the experience?
These questions aren’t new, but until recently, support teams didn’t have reliable ways to answer them.
Here’s how to shift your measurement model to get a true read on the customer experience.
Focus less on deflection, and more on resolution
Traditionally, deflection rate focused purely on whether or not queries reached a human agent. It was used as a proxy for success, assuming that if a customer didn’t reach a human, their issue was resolved. But that’s not always true.
In the AI era, deflection alone doesn’t tell you enough. While helpful as an early signal, it’s a limited, and potentially misleading, measure of success. What matters is whether the issue was resolved, regardless of who handled it.
With AI now capable of resolving the majority of queries, we need to shift our focus from deflection to resolution.
Here’s what you need to do:
Change what you report on
If your dashboards prioritize deflection rate, update them. Focus on resolution rate instead, and track metrics like:
- % of conversations resolved end-to-end by AI.
- % of conversations resolved without repeat contact.
- % of handoffs that were avoidable (i.e., queries the AI could have resolved, but didn’t).
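One way to compute these is straight from conversation logs. The record shape below (`resolved_by`, `repeat_contact`, `avoidable_handoff`) is a hypothetical schema for illustration, not a real product API; adapt it to however your platform logs outcomes, and note that flagging a handoff as "avoidable" typically requires its own review or classification step.

```python
# Hypothetical conversation records; field names are illustrative assumptions.
conversations = [
    {"resolved_by": "ai",    "repeat_contact": False, "avoidable_handoff": False},
    {"resolved_by": "human", "repeat_contact": False, "avoidable_handoff": True},
    {"resolved_by": "ai",    "repeat_contact": True,  "avoidable_handoff": False},
    {"resolved_by": "human", "repeat_contact": False, "avoidable_handoff": False},
]

def resolution_metrics(convos):
    """Compute the three resolution-focused rates from raw conversation records."""
    total = len(convos)
    ai_resolved = sum(c["resolved_by"] == "ai" for c in convos)
    no_repeat = sum(not c["repeat_contact"] for c in convos)
    handoffs = [c for c in convos if c["resolved_by"] == "human"]
    avoidable = sum(c["avoidable_handoff"] for c in handoffs)
    return {
        "ai_resolution_rate": ai_resolved / total,
        "no_repeat_rate": no_repeat / total,
        # Share of handoffs the AI could have resolved but didn't.
        "avoidable_handoff_rate": avoidable / len(handoffs) if handoffs else 0.0,
    }

print(resolution_metrics(conversations))
```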
Update your targets
Set goals around resolution rate, not just deflection.
Use unresolved queries to iterate and improve
Resolution rate tells you what’s working. But what’s unresolved tells you where to improve. Look at what AI isn’t resolving, and why. Is the content unclear? Is it a configuration issue? Use this to continuously improve the system.
Fin surfaces recommended fixes right alongside these breakdowns: editing content, refining a prompt, or adding a missing step. This creates a feedback loop built directly into the system.
Deflection tells you what didn’t reach your team. Resolution tells you what’s actually working, and what to fix next.
If you want to scale AI successfully, you need to understand what AI is (and isn’t) resolving, and how the customer feels about it.
In the next section, we’ll unpack how to measure customer experience at scale.
Measure customer sentiment and satisfaction across all conversations with AI
Instead of relying on <10% CSAT survey coverage, you can use AI to analyze 100% of customer conversations. This level of insight means you can understand how customers actually feel and turn gaps into opportunities:
- Real-time problem detection replaces quarterly survey analysis.
- Proactive support replaces reactive problem-solving.
- Support data becomes an instant source of intelligence that drives continuous improvement.
There are a number of ways to approach this, but here’s an example of how Fin surfaces insights from conversations. It assesses three critical dimensions:
- Resolution status: Was the issue actually resolved? And if there were multiple issues, were each of them resolved?
- Customer sentiment: How did the customer feel throughout the interaction?
- Service quality: Was the response clear, helpful, and efficient?
Fin uses these inputs to generate a “CX rating” from 1–5 for each conversation. These individual ratings contribute to a broader “Customer Experience Score,” based on real-time insights from every support conversation.
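Fin’s actual scoring logic is internal to the product, but the aggregation idea can be sketched. The weights and rating math below are invented for illustration only:

```python
# Illustrative only: weights and rating logic are invented, not Fin's formula.
def cx_rating(resolved: bool, sentiment: float, quality: float) -> int:
    """Collapse the three dimensions into a 1-5 rating for one conversation.

    sentiment and quality are assumed to be model-produced scores in [0, 1].
    """
    base = 0.5 * (1.0 if resolved else 0.0) + 0.25 * sentiment + 0.25 * quality
    return max(1, round(1 + base * 4))  # map [0, 1] onto the 1-5 scale

def customer_experience_score(ratings):
    """Average per-conversation ratings into an overall score."""
    return sum(ratings) / len(ratings)

ratings = [
    cx_rating(True, 0.9, 0.8),   # resolved, happy customer
    cx_rating(False, 0.3, 0.6),  # unresolved, frustrated customer
    cx_rating(True, 0.7, 0.9),   # resolved, broadly positive
]
print(ratings, customer_experience_score(ratings))
```

The key property to preserve in any real implementation is coverage: because the inputs come from AI analysis rather than surveys, every conversation contributes a rating, not just the vocal 10%.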

Tie support performance directly to business outcomes
In the early phases of AI adoption, success is measured in hours saved, headcount avoided, and CSAT maintained. But at scale, leadership needs to understand the broader value story: how support performance connects to retention, conversion, and revenue growth.
This means modeling not just cost reduction, but business impact over time:
- Cost reduction
- Revenue influence
- Churn prevention
- Product feedback loop
The most advanced teams go a step further: they directly attribute business value to support data. They track product usage lift after successful resolutions. They measure feature adoption linked to consultative support.
In this model, support becomes a real lever for growth. Not a cost center. A value driver.
Capture real ROI by blending budgets
In a traditional model, support spend is siloed. You fund people, tools, and services separately, and scale them linearly with support volume. When volume goes up, you hire. You expand BPO contracts. You add software to help the team move faster.
AI Agents break that model. They decouple headcount from volume. And that shift should be reflected in how you model cost and capture value.
AI Agents introduce flexibility. They allow you to break down the budget categories that have historically constrained how you invest in customer experience. When a single AI Agent can take on work previously spread across in-house agents, BPOs, and point solutions, your job isn’t just to count the savings. It’s to reallocate them strategically.
Here’s how to capture the value:
- Slow hiring plans: When your AI Agent resolves the majority of Tier 1 conversations, you don’t need to backfill every open role or scale hiring as volume grows.
- Reduce BPO spend: The most direct cost trade-off is against outsourced headcount. If your AI Agent takes 60% of your volume, renegotiate your contract accordingly.
- Reassign internal headcount: Move experienced agents into system roles (AI ops, content design, QA) rather than expanding the frontline.
- Shift budget lines: Move spend from services or BPO to tech. Fund AI infrastructure that reduces future services spend. Reinvest AI-driven savings into customer experience design and innovation.
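As back-of-the-envelope math (all figures invented for illustration), the reallocation logic looks like this:

```python
# All figures are invented assumptions for illustration only.
bpo_monthly_spend = 50_000  # assumed current BPO contract value
ai_volume_share = 0.60      # share of volume the AI Agent now resolves
ai_monthly_cost = 12_000    # assumed AI platform cost

# If BPO pricing scales with volume, automation frees this much spend:
freed_spend = bpo_monthly_spend * ai_volume_share  # ~ 30,000
net_savings = freed_spend - ai_monthly_cost        # ~ 18,000

# Reallocate rather than just cut: fund system roles and CX design.
# The 60/40 split below is an arbitrary illustrative choice.
reallocation = {
    "AI ops / content design headcount": net_savings * 0.6,
    "CX experimentation budget": net_savings * 0.4,
}
print(f"Freed: ${freed_spend:,.0f}, net: ${net_savings:,.0f}")
print(reallocation)
```

The point of the last step is the chapter’s argument in miniature: the savings line doesn’t go to zero, it moves to new budget lines that match the new operating model.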
If you treat your AI Agent as “just software,” these decisions don’t happen. But if you recognize it as a new workforce capability, it becomes clear: your budget should follow your operating model.
Blending budgets doesn’t mean cutting. It means realigning spend to match your operating model. You need to shift your thinking from “cost savings” to “budget reallocation.”
Reinvest for compounding gains
Every hour your AI Agent saves, every role you no longer need to backfill, every BPO contract you downsize: these are dividends that can be reinvested for long-term growth.
You can use these dividends to reinvest in:
- Innovative CX design: Use the capacity AI creates to reimagine support journeys, craft more proactive engagement, and test new service models.
- System improvement: Scale your knowledge management function. Assign more team members to content refinement, training data, and AI tuning.
- New roles and responsibilities: Invest in AI ops leads, conversation designers, and QA specialists who can improve AI performance and experience quality at scale.
- Future-proofing your support organization: Invest in the training and enablement you need to build the muscle for sustained AI-led service.
This is what scaling AI actually looks like:
- Support becomes a system.
- AI becomes infrastructure.
- Capacity becomes currency that can be redeployed to drive growth.
Scale is the next frontier
You’ve proven that AI works. You’ve seen the early wins. But at scale, something bigger happens. AI stops being a tool and starts becoming infrastructure. Your customer experience gets faster, smarter, and more personalized. Your team evolves into system designers and strategic advisors. Support becomes a true lever for growth, not a cost center.
This is a chance to lead the way for your company, not just your team.
Here’s what you’re building toward:
- AI Agents resolve 80%+ of inbound volume: They don’t just handle the simple stuff. They resolve complex, multi-step issues end-to-end, across channels. They interpret nuance, manage real-time data, and initiate backend actions.
- Human teams focus on system design, feedback, and strategy – not volume: The role of the human team evolves. They’re no longer measured by how many tickets they close, but by how effectively they improve the systems behind the scenes. They’re still engaging with customers as needed, but their primary focus is training the AI Agent, refining the content, analyzing failure patterns, and shaping how support works at scale. Their job is not to react, it’s to design.
- Organizational structure reflects new roles and workflows: Traditional team structures optimized for queue handling don’t fit this model. New roles emerge. Workflows shift from ticket-based triage to continuous system improvement. Team rituals, reviews, and KPIs are reoriented around quality, collaboration, and iteration.
- Customer experience becomes a product – designed, measured, and iterated: Support is no longer a reactive function. It’s a designed experience, just like your product. Every interaction can be shaped, tested, and refined. The handoffs between AI and humans feel seamless. The tone matches your brand. The experience improves with every conversation. It’s support as product, not just service.
- Support drives business outcomes, not just resolutions: The value of support isn’t measured by case count. It’s measured by customer activation, retention, satisfaction, and expansion, enabled by AI. Support becomes a lever for growth, and a strategic input into product, marketing, and revenue.
Not every team will get there. But you can. With the right systems, roles, and mindset, your AI Agent doesn’t just support customers. It becomes the foundation for a smarter, faster, more resilient business.
For support leaders, this is a career-defining opportunity. The chance to pioneer a new model. To show what’s possible. To build something the rest of the business follows.