Step 2: Build a realistic test environment and train the AI Agent
Once you’ve defined what success looks like, you can begin testing.
1. Set up your test environment using real customer questions and your current knowledge base or help content
You can choose to test in a sandbox environment or with live conversations. The important thing is to use real customer questions to test the AI Agent from a business performance and conversation quality perspective.
Once you are confident with the baseline performance, we recommend testing the AI Agent with real users to validate the quantitative and qualitative metrics in the real world.
Source a range of customer conversations to test against:
- Complex queries that typically require multiple touchpoints from different team members.
- Vague queries that don’t contain any “real” information and require further clarification to resolve.
- Edge cases that have been difficult for your human team to resolve.
- A few sensitive scenarios, such as billing disputes and cases where customers have become frustrated.
- Examples of queries in different languages, if you provide multilingual support.
Take this a step further and prepare variations of the same questions to test how the AI Agent handles different types of communication:
- Difficult questions that require information from multiple sources to answer.
- Different phrasings of the same question.
- Incomplete or fragmented queries.
- Questions with typos or grammatical errors.
- Conversations with various levels of formality.
The goal is to simulate what happens in reality. Any AI Agent can look impressive in a controlled setting, but performing well against the real challenges customers bring is what separates “good enough” from great.
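One lightweight way to organize such a testing cohort is as structured test cases, where each base question carries its variations (rephrasings, typos, fragments) so every form gets run against the AI Agent. This is an illustrative sketch, not the schema of any particular tool — the class and field names are placeholders, and the agent call is left as a stub for whichever platform you’re evaluating:

```python
# Hypothetical structure for a testing cohort: each base query is paired
# with its variations so the AI Agent is tested on every phrasing.
from dataclasses import dataclass, field


@dataclass
class TestCase:
    topic: str                                          # e.g. "billing dispute"
    base_query: str                                     # the canonical customer question
    variations: list = field(default_factory=list)      # rephrasings, typos, fragments

    def all_forms(self) -> list:
        """Every phrasing to run against the AI Agent."""
        return [self.base_query, *self.variations]


cohort = [
    TestCase(
        topic="returns",
        base_query="How do I return an item I bought last week?",
        variations=[
            "can i send sth back??",               # informal, fragmented
            "What is your retrun policy?",         # deliberate typo
            "Item arrived damaged, want refund",   # incomplete
        ],
    ),
    TestCase(
        topic="billing dispute",
        base_query="I was charged twice this month. Please refund one charge.",
        variations=["why 2 charges???"],
    ),
]

# Run every form of every case and record responses for later scoring.
for case in cohort:
    for query in case.all_forms():
        pass  # response = ai_agent.reply(query)  <- replace with your tool's API
```

Keeping variations attached to their base question makes it easy to score later whether the AI Agent resolved all phrasings of the same issue consistently.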
If you’re evaluating more than one solution, make sure you set up each AI Agent in the same way for a fair comparison. Split the conversation volume equally between them so you can test each solution accurately.
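Splitting volume equally can be as simple as round-robin assignment, which guarantees each solution sees the same number of conversations. A minimal sketch, assuming two solutions under evaluation — the agent names and conversation IDs are placeholders:

```python
# A minimal sketch of splitting incoming conversations evenly across the
# solutions being compared, so each AI Agent sees a comparable sample.
import itertools

agents = ["agent_a", "agent_b"]        # placeholder names for the solutions
assigner = itertools.cycle(agents)     # round-robin keeps volumes exactly equal


def assign(conversation_id: str) -> str:
    """Route the next incoming conversation to the next agent in rotation."""
    return next(assigner)


# Four incoming conversations end up split two and two.
assignments = {cid: assign(cid) for cid in ("c1", "c2", "c3", "c4")}
```

Random assignment works too at high volumes, but round-robin avoids lopsided splits when the test cohort is small.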
2. Train the AI Agent
Prepare your knowledge base or help content
You need quality content for an AI Agent to deliver good results. Assess your knowledge base content for:
- Coverage: Make sure you have adequate coverage for the testing cohort to give the AI Agent all the information it needs to address key questions and topics. For example, if you want to test whether the AI Agent can fully resolve queries on a specific topic, like your accounting product, or for a specific audience, like your Freemium users, it must have access to relevant content for both.
- Accuracy: To prevent the AI Agent from learning outdated information, make sure what you’re exposing it to is accurate and up to date. For example, if your return policy has changed from 60 days to 30 days, update this.
- Structure: The more straightforward and comprehensive your articles are, the easier it will be for the AI Agent to consume them. Focus on simple language and an easy-to-scan structure.
You don’t have to reformat or rewrite all your help content before running tests. This is just something to be aware of and potentially return to if content gaps emerge during testing or you spot any glaring issues.
Configure the AI Agent’s rules, tone, and behavior
Modern AI Agents let you control how they communicate and act – for example, you can instruct them to provide concise or comprehensive responses, use specific terminology for your industry, or follow protocols that match your support policies.
For Fin, we call this Guidance. It enables you to define Fin’s communication style, coach it to gather context and clarify issues, and set rules for routing and handovers.