AI engineered for precision, speed, and reliability.
Phase 1
Refine the query
To optimize the accuracy of an answer that an LLM generates, the inputs the LLM receives must be refined for comprehension. The clearer and more understandable the query, the better the output.
Specification
- 1.1 Check for safety and relevance
- 1.2 Optimize query comprehension
- 1.3 Check for Workflows automation
- 1.4 Check for Custom Answer
Often in customer service, customers can write in to support without fully explaining or contextualizing their query. To solve this problem, the Fin AI Engine™ has been designed to refine the inputs that are sent to the LLM. This ensures that each customer message is optimized in terms of its meaning and context so that the LLM has the best possible chance of producing an accurate answer. In addition, the Fin AI Engine™ performs checks to see whether a Workflows automation or Custom Answer should be triggered based on the topic and context of a customer's query, as well as performing safety checks to filter out any questions that Fin shouldn't answer.
- 1.1 Check for safety and relevance: The AI engine performs a comprehensive check of a customer query to filter out any questions that Fin shouldn't answer, such as requests for confidential information, irrelevant questions, attempts at data harvesting, and messages from malicious actors.
- 1.2 Optimize query comprehension: The AI engine optimizes the customer message to make it more searchable and easier for the LLM to understand. The AI engine performs many checks and executes the correct optimization based on whether the query's intent and topic are clear, whether the query needs to be reworded, and much more.
- 1.3 Check for Workflows automation: The customer's message is checked for the presence of any pre-configured conditions that trigger a specific automation (such as a complaint).
- 1.4 Check for Custom Answer: An additional check detects whether a pre-configured Custom Answer is needed for the customer query.
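The four Phase 1 checks above can be pictured as a single routing function. The sketch below is purely illustrative: the pattern lists, trigger matching, and function names are assumptions for the example, not Intercom's actual implementation.

```python
# Hypothetical sketch of the Phase 1 refinement checks.
UNSAFE_PATTERNS = ("password", "credit card", "api key")  # illustrative only

def is_safe(query: str) -> bool:
    """1.1: filter out questions the agent shouldn't answer."""
    lowered = query.lower()
    return not any(p in lowered for p in UNSAFE_PATTERNS)

def optimize(query: str) -> str:
    """1.2: normalize the message so it is easier to search and understand."""
    return " ".join(query.split()).strip().rstrip("?") + "?"

def refine(query: str, workflow_triggers: dict, custom_answers: dict):
    """Run the four checks in order and route the query accordingly."""
    if not is_safe(query):                                 # 1.1
        return ("refuse", None)
    optimized = optimize(query)                            # 1.2
    for trigger, workflow in workflow_triggers.items():    # 1.3
        if trigger in optimized.lower():
            return ("workflow", workflow)
    for topic, answer in custom_answers.items():           # 1.4
        if topic in optimized.lower():
            return ("custom_answer", answer)
    return ("llm", optimized)
```

For example, a safe query with no matching trigger falls through to the LLM with a cleaned-up wording, while a message containing a workflow trigger is routed to that automation instead.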
Phase 2
Generate a response
Once a query has been checked and optimized, the next stage is to generate a response using the LLM. For this task, the Fin AI Engine™ has been designed to use a bespoke and enhanced retrieval-augmented generation (RAG) architecture.
Specification
- 2.1 Optimize retrieval
- 2.2 Integrate and augment
- 2.3 Generate response
RAG is a process that involves retrieving relevant information from a data source and combining it with a user’s prompt before passing it to an LLM. This additional context helps improve the model’s output by enhancing its base knowledge, thereby reducing the risk of inaccuracies like hallucinations. Intercom’s application of RAG is distinctive.
The AI group at Intercom has invested heavily in optimizing our unique application of RAG and continuously tests both the accuracy of the LLM and the individual steps of RAG to improve overall performance.
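As a rough illustration of the retrieve-then-augment pattern described above: the naive term-overlap retriever and the prompt template below are simplified assumptions for the sketch, not Intercom's implementation.

```python
# Minimal sketch of a retrieval-augmented generation (RAG) flow.
def retrieve(query: str, knowledge: list, top_k: int = 2) -> list:
    """Rank knowledge snippets by naive word overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        knowledge,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def augment(query: str, snippets: list) -> str:
    """Combine retrieved context with the user's query into one prompt."""
    context = "\n".join("- " + s for s in snippets)
    return "Context:\n" + context + "\n\nQuestion: " + query

# Toy knowledge base for the example.
kb = [
    "Refunds are processed within 5 business days.",
    "Plans can be upgraded from the billing page.",
]
prompt = augment("how long do refunds take", retrieve("how long do refunds take", kb))
```

The resulting prompt, containing both the retrieved snippets and the original question, is what would be passed to the generative model in the final step.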
- 2.1 Optimize retrieval: The AI engine searches the available information, actions, or data and determines what is most relevant to the nature of the query and what is needed to solve the question or problem. Information sources include:
  - Content: past Intercom conversations, help center articles, PDFs, and HTML/URLs that have been approved as accurate and safe sources.
  - Data: internal or external to Intercom, including dynamic information that Fin can use to personalize the customer experience.
  - Integrations & actions: used to determine whether any actions on third-party systems will be necessary as a result of the customer query's intent.
- 2.2 Integrate and augment: The retrieved information is then integrated and augmented with the optimized query, or 'input'. This step ensures that the generative model has access to the most relevant and up-to-date information before producing a response. The augmented input is structured in a way that maintains the context and relevance of the retrieved information, making it easier for the model to understand and use.
- 2.3 Generate response
  - A. Clarify and disambiguate: If the output from the model doesn’t meet Intercom’s AI Engine™ parameters for certainty, a response is generated asking the customer to clarify their query. This disambiguation step helps avoid risks like hallucinations, as the response generated is contextual and grounded in the facts from your company’s available knowledge resources and support content.
  - B. Take an action: If an action is required as a product of the user's query and intent, the action is performed using the necessary information, data, integrations, and systems.
  - C. Generate an answer: The generative model uses the augmented input to generate an answer. By incorporating the retrieved information, the model can produce more accurate, contextually relevant, and detailed responses. The generated answer may undergo post-processing to ensure clarity, coherence, and alignment with the user's query.
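One way to picture the three outcomes of step 2.3 is as a simple branch on model certainty. The threshold value, return shape, and canned clarification text are hypothetical, chosen only to make the branching concrete.

```python
# Illustrative branch over the three generation outcomes (A, B, C).
CONFIDENCE_THRESHOLD = 0.7  # hypothetical certainty parameter

def generate_response(confidence: float, requires_action: bool, draft: str) -> dict:
    if confidence < CONFIDENCE_THRESHOLD:
        # A: ask the customer to clarify rather than risk a hallucination
        return {"type": "clarify",
                "text": "Could you tell me a bit more about your question?"}
    if requires_action:
        # B: perform the required action on a third-party system first
        return {"type": "action", "text": draft}
    # C: return the grounded, generated answer
    return {"type": "answer", "text": draft}
```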
Phase 3
Validate accuracy
In the final step of the process, the Fin AI Engine™ performs checks to understand whether the output from the LLM meets the necessary response accuracy and safety standards.
Specification
- 3.1 Validate the response
- 3.2 Respond to customer
Many checks are performed, covering whether there is enough confidence in the response, whether the response is accurate enough, and whether the response is grounded enough to address the question adequately.
- 3.1 Validate the response
  - A. Compare the generated response to the original customer query.
  - B. Determine whether the generated response answers the query well enough.
  - C. Determine whether the generated response is grounded in your knowledge content as a source of truth.
- 3.2 Respond to customer: Send the generated response back to the customer through Fin.
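The validation checks in 3.1 can be sketched with a toy word-overlap heuristic standing in for the engine's real relevance and groundedness scoring. The thresholds and function names are assumptions made for the example.

```python
# Toy stand-in for the Phase 3 validation checks (3.1 A, B, C).
def overlap(a: str, b: str) -> float:
    """Fraction of the words in `a` that also appear in `b`."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa) if wa else 0.0

def validate(query: str, response: str, knowledge: list,
             answer_min: float = 0.3, grounding_min: float = 0.5) -> bool:
    """A: compare response to query; B: does it answer well enough;
    C: is it grounded in at least one knowledge document."""
    answers_query = overlap(query, response) >= answer_min
    grounded = any(overlap(response, doc) >= grounding_min for doc in knowledge)
    return answers_query and grounded
```

A response that passes both checks would be sent to the customer (step 3.2); one that fails would not.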
always-on
Engine optimization
To calibrate and enhance engine performance, the Fin AI Engine™ has advanced integrated tools that help optimize answer generation, efficiency, precision, and coverage.
Specification
- 4.1 Fin customization and control
- 4.2 AI analytics and reporting
- 4.3 AI recommendations
- 4.1 Fin customization and control: Intercom has incorporated features and tools designed to help users customize and control how Fin responds, what it can do, what information it can use, and much more. Each of these pieces plays a part in how well Fin performs: the more Fin knows and can do, the more of your support you can automate with human-quality customer experiences.
- 4.2 AI analytics and reporting: The Fin AI Engine™ has been designed to facilitate analysis of the effectiveness of each stage of the answer generation process. This gives the AI group at Intercom the tools they need to improve each stage and overall performance. Rigorous testing happens before any changes are made to the AI engine architecture, taking into account how each small change impacts the engine as a whole. In addition, the AI engine offers Intercom users access to pre-built and customizable reports that help them understand where Fin is working well and what can be improved.
- 4.3 AI recommendations: To improve performance over time, the AI engine offers recommendations on which new content could help fill gaps in Fin's knowledge, which current content isn't performing well and how it can be further optimized, and which actions Fin should be able to take in order to resolve more queries on behalf of the customer.
safeguarding
AI Trust & Security
Intercom has implemented state-of-the-art security measures to protect Fin against a wide range of LLM threats, including those identified by the OWASP LLM Top 10. By consistently testing a variety of high-end LLMs, and deploying rigorous internal controls, security protocols, and safeguards, Fin is able to achieve the highest level of security and reliability while avoiding potential limitations and threats.
Specification
- 5.1 Fin AI Security
- 5.2 Regional hosting
- 5.3 Compliance: International Standards
- 5.4 Third-party AI Providers
Relying exclusively on the generative capabilities of an LLM to answer or solve a customer problem is not a reliable way to serve customers. Without the proper safeguards in place, LLMs can be open to manipulation or hallucinations, which could then impact your customers. To ensure safety and reliability, Intercom’s AI Engine™ has been designed with strict safety controls at each stage. If the necessary parameters for safety have not been met at any of the stages, Fin will let the customer know that it cannot answer the query and escalate to human support.
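The fail-closed behavior described above, where any stage that misses its safety parameters ends in escalation rather than an answer, can be sketched as a simple gated pipeline. The stage functions and escalation message are placeholders, not Intercom's actual components.

```python
# Illustrative stage-gated pipeline: failing any stage escalates to a human.
ESCALATION = "I can't answer that, but I'm connecting you with our support team."

def run_pipeline(query: str, stages) -> str:
    """Each stage returns (passed, result); any failure short-circuits."""
    result = query
    for stage in stages:
        passed, result = stage(result)
        if not passed:
            return ESCALATION  # fail closed: never emit an unchecked answer
    return result
```

The key design point is that the safe default is escalation: an answer only reaches the customer if every stage has explicitly passed.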
That means you and your customers can always trust Fin’s answers as the safest, most accurate, and reliable of any AI agent. Learn more about Fin’s safety measures on trust.intercom.com.
- 5.1 Fin AI Security: A comprehensive overview of the security measures and testing protocols implemented for Intercom’s Fin AI features is available here.
- 5.2 Regional hosting: Fin AI Agent is available on US, EU, and AU regionally hosted workspaces (AU data is processed in the US) and can be configured to ensure that third-party AI providers process your customer data in accordance with your regional hosting terms.
- 5.3 Compliance: International Standards: Intercom is committed to security and data protection, meeting the highest industry standards, including SOC 2, ISO 27001, ISO 27018, ISO 27701, GDPR, CCPA, CSA, HIPAA, and HDS compliance. These certifications ensure we protect personal and sensitive data across industries and regions. In addition, we are certified under ISO/IEC 42001, an AI-specific standard covering the implementation of an AI management system, to ensure responsible development and use of AI systems.
  - ISO 27001, ISO 27701, ISO 27018, and ISO 42001: ETA Jan 2025
  - HIPAA compliance
  - SOC 2 Report: SOC 2, Type II audit report covering controls specific to security, availability, and confidentiality.
  - HDS certificate: Certification of compliance with the HDS Referential Version 1.1 (English and French versions).
  - Penetration test summary: Summary of detailed penetration tests on Intercom's application and infrastructure by third-party security experts.
  - Cloud security alliance assessment: Security and privacy self-assessments based on the Cloud Controls Matrix and the CSA Code of Conduct for GDPR Compliance.
- 5.4 Third-party AI Providers: The Fin AI Engine™ utilizes third-party AI providers in addition to Intercom’s proprietary technology. Third-party AI providers are contractually restricted from using any of your customer data for AI model training or improvement. We also have zero data retention policies in place, meaning your customer data is not persistently stored by any of these AI providers and is deleted once it has been used to deliver the output. You can find out more about the contractual assurances we provide regarding these third-party AI providers (all of which are listed on our subprocessor page) and what they can do with your customer data here.