Fin-CX Models
Proprietary AI models purpose-built for customer service, including fin-cx-retrieval and fin-cx-reranker. Custom-trained on real support interactions, these models power the Fin AI Engine to deliver superior accuracy, speed, and reliability compared to generic LLMs.
Purpose-Built for Customer Service
The fin-cx suite of models represents a breakthrough in AI for customer service. While generic large language models are trained on broad internet data, fin-cx models are custom-trained on massive datasets of real customer service interactions, enabling them to understand the unique patterns, terminology, and requirements of support conversations.
These proprietary models are specifically designed to power the Fin AI Engine, delivering superior accuracy, speed, and reliability compared to generic LLMs in production customer service environments.
The Fin-CX Model Suite
The fin-cx family currently includes two specialized models that work together in the content retrieval pipeline:
- fin-cx-retrieval: Scans vast knowledge bases using semantic understanding to identify and retrieve the most relevant content candidates for each query
- fin-cx-reranker: Scores and ranks the retrieved content for relevance, accuracy, and contextual fit, selecting the final pieces used for response generation
Together, these models form a two-stage pipeline that ensures AI Agents pull the most accurate, relevant content from knowledge bases, enabling precise answers grounded in actual documentation.
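The fin-cx models themselves are proprietary, but the retrieve-then-rerank shape of the pipeline described above can be sketched with toy stand-ins. In this illustrative sketch, `embed`, `retrieve`, and `rerank` are hypothetical helpers invented for the example: stage one does a cheap similarity scan over the whole knowledge base, and stage two applies a finer scoring pass to the shortlisted candidates.

```python
from math import sqrt

# Toy two-stage retrieval pipeline. fin-cx-retrieval and fin-cx-reranker
# are proprietary models; the functions below are simple stand-ins that
# only mirror the pipeline's shape (broad scan, then precise rerank).

def embed(text):
    """Bag-of-words count vector (stand-in for a semantic embedding)."""
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=3):
    """Stage 1: scan the whole knowledge base, return top-k candidates."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def rerank(query, candidates, n=1):
    """Stage 2: rescore only the candidates (here, Jaccard word overlap)."""
    q_words = set(query.lower().split())
    def score(doc):
        d_words = set(doc.lower().split())
        return len(q_words & d_words) / len(q_words | d_words)
    return sorted(candidates, key=score, reverse=True)[:n]

knowledge_base = [
    "How to reset your account password",
    "Billing cycles and invoice dates explained",
    "Changing your account email address",
    "Password requirements and security tips",
]

candidates = retrieve("reset my password", knowledge_base, k=3)
best = rerank("reset my password", candidates, n=1)
print(best[0])  # → "How to reset your account password"
```

The split matters in practice: the first stage must be fast enough to cover a large knowledge base, while the second stage can afford a more expensive, per-candidate scoring pass because it only sees the shortlist.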
Training and Performance
In production testing, the fin-cx models have outperformed every other model combination evaluated. They are used alongside Claude Sonnet 4.0 in the Fin AI Engine to deliver best-in-class accuracy and performance, an uplift validated through out-of-sample backtesting and large-scale A/B testing in live customer service environments.
This specialized training enables Fin to achieve industry-leading resolution rates and to consistently win head-to-head tests against competitors, with average resolution rates increasing by 1% each month as the models continue to learn and improve.