{"id":309,"date":"2025-09-11T22:47:19","date_gmt":"2025-09-11T22:47:19","guid":{"rendered":"https:\/\/fin.ai\/research\/?p=309"},"modified":"2025-09-12T00:07:07","modified_gmt":"2025-09-12T00:07:07","slug":"how-we-built-a-world-class-reranker-for-fin","status":"publish","type":"post","link":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/","title":{"rendered":"How We Built a World-Class Reranker for Fin"},"content":{"rendered":"\n<p>At Intercom, Fin AI Agent uses retrieval-augmented generation (RAG) to deliver fast, accurate answers to customer support questions.<\/p>\n\n\n\n<p>In this setup, a reranker plays a crucial role: after retrieving potential answers from our knowledge base, the reranker reorders them by relevance to help Fin choose the best content to include in its reply.<\/p>\n\n\n\n<p class=\"has-very-light-gray-background-color has-background\"><strong>We built our own reranker that outperforms Cohere Rerank v3.5, an industry-leading commercial solution. <\/strong>This improved our answer quality, reduced reranking costs by 80%, and gained more flexibility to evolve our system.<\/p>\n\n\n\n<h2 id=\"fins-rag-workflow\" class=\"wp-block-heading\">Fin\u2019s RAG Workflow<\/h2>\n\n\n\n<p>Here\u2019s how Fin uses RAG at a high level. <\/p>\n\n\n\n<p>When someone asks Fin for help, Fin starts by <a href=\"https:\/\/fin.ai\/research\/david-vs-goliath-are-small-llms-any-good\/\">summarizing the conversation<\/a> into a short, focused query, like &#8220;<em>How do I reset my password?<\/em>&#8221; or &#8220;<em>Where can I find my invoices?<\/em>\u201d. This query is used to search the knowledge base, where all help articles and snippets are pre\u2011processed into vector embeddings for efficient retrieval.<\/p>\n\n\n\n<p>Fin compares the query embedding to these vectors to find the closest matches. It then takes the top \\(K=40\\) candidates and re-ranks them using a specialized reranker model. 
Initial <a href=\"https:\/\/fin.ai\/research\/finetuning-retrieval-for-fin\/\">vector retrieval<\/a> is fast, but can miss nuances, so the reranker uses deeper context understanding to reorder the passages by relevance.<\/p>\n\n\n\n<p>Finally, a context budget filter selects the top-ranked passages, and Fin uses these to craft a clear, accurate answer for the user in real time.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"735\" src=\"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/fin_flow_with_logo_transparent-1024x735.png\" alt=\"Retrieval-Augmented Generation (RAG) flow in Fin AI agent for customer support\" class=\"wp-image-324\" srcset=\"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/fin_flow_with_logo_transparent-1024x735.png 1024w, https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/fin_flow_with_logo_transparent-300x215.png 300w, https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/fin_flow_with_logo_transparent-768x551.png 768w, https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/fin_flow_with_logo_transparent-1536x1103.png 1536w, https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/fin_flow_with_logo_transparent.png 1596w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h2 id=\"why-build-our-own-reranker\" class=\"wp-block-heading\">Why Build Our Own Reranker?<\/h2>\n\n\n\n<p>We previously relied on Cohere Rerank-v3.5, a commercial reranker that delivers high-quality results but at substantial cost. Open-source models we had tested (BGE-large and BGE-m3) couldn&#8217;t reach the required performance, and an LLM-based reranker introduced latency issues.<\/p>\n\n\n\n<p>To address these challenges, we decided to develop our own reranker tailored specifically to the domain of English customer support. 
Our objectives were clear and ambitious: match or exceed Cohere&#8217;s quality, run efficiently on standard GPUs, and reduce vendor dependency.<\/p>\n\n\n\n<h2 id=\"fin-cx-reranker-our-custom-solution\" class=\"wp-block-heading\">Fin-cx-reranker: Our Custom Solution<\/h2>\n\n\n\n<p>Our custom reranker uses <a href=\"https:\/\/arxiv.org\/abs\/2412.13663\">ModernBERT-large<\/a> (2024) as a component. This is a state-of-the-art encoder-only transformer designed specifically for retrieval and classification tasks. ModernBERT supports an 8,192-token context window (vs. 512 in vanilla BERT), employs rotary\/relative positional encodings, GeGLU activations, efficient attention, and was trained on ~2T tokens. It consistently surpasses encoders like BERT, RoBERTa, and DeBERTaV3 across benchmarks such as <a href=\"https:\/\/github.com\/beir-cellar\/beir\">BEIR<\/a> and <a href=\"https:\/\/gluebenchmark.com\/\">GLUE<\/a> by 3-8pp.<\/p>\n\n\n\n<p>For scoring candidate passages, we concatenate each query and passage pair as <code><strong>[CLS] {query} [SEP] {passage[i]} [SEP]<\/strong><\/code> and feed this into ModernBERT. We then apply mean pooling across all token embeddings (excluding padding) to obtain a single vector. 
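As a concrete sketch of this pooling-and-scoring step, here is a minimal NumPy version operating on stand-in arrays in place of ModernBERT's token embeddings. The function names and array shapes are illustrative, not the production implementation.

```python
import numpy as np

def masked_mean_pool(token_embs, attention_mask):
    """Average token embeddings over real tokens only, excluding padding.

    token_embs: (seq_len, d) encoder outputs for one
    "[CLS] {query} [SEP] {passage} [SEP]" input;
    attention_mask: (seq_len,) with 1 for real tokens, 0 for padding.
    """
    mask = attention_mask[:, None].astype(token_embs.dtype)
    return (token_embs * mask).sum(axis=0) / mask.sum()

def relevance_score(token_embs, attention_mask, w, b):
    """Linear head on the pooled vector, producing a scalar relevance score."""
    pooled = masked_mean_pool(token_embs, attention_mask)
    return float(pooled @ w + b)
```

Mean pooling over all non-padding tokens (rather than taking only the [CLS] embedding) lets every token of the query-passage pair contribute to the pooled representation before the linear head scores it.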
This vector passes through a linear layer, producing a final relevance score used for ranking.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"384\" src=\"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/reranker_fin-1024x384.png\" alt=\"Fine-tuning cross-encoder reranker in RAG (Retrieval-Augmented Generation) based on ModernBERT with RankNet loss\" class=\"wp-image-321\" style=\"width:657px;height:auto\" srcset=\"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/reranker_fin-1024x384.png 1024w, https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/reranker_fin-300x113.png 300w, https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/reranker_fin-768x288.png 768w, https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/reranker_fin-1536x576.png 1536w, https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/reranker_fin.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h2 id=\"training-details\" class=\"wp-block-heading\">Training Details<\/h2>\n\n\n\n\n\n\n<p>We trained the reranker on 400,000 real Fin queries, each with \\(K=40\\) candidate passages, 16M pairs in total. Labels were provided by an <a href=\"https:\/\/fin.ai\/research\/using-llms-as-a-reranker-for-rag-a-practical-guide\/\">LLM-based pointwise reranker<\/a>, giving us high-quality training signals.<\/p>\n\n\n\n<p>Our implementation uses Hugging Face Transformers. To optimize ranking, we employ a <a href=\"https:\/\/arxiv.org\/abs\/2304.09542\">RankNet loss<\/a>. The teacher LLM first sorts the \\(K\\) passages by relevance, assigning each passage a rank \\(r_i\\) where a lower number means higher relevance (e.g., \\(r_i=1\\) means top-ranked). 
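The pairwise objective driven by this teacher ranking can be sketched in plain Python. This is an illustrative reference implementation over one query's candidate list; the production loss is computed batched on GPU.

```python
import math

def ranknet_loss(scores, ranks):
    """RankNet loss over one query's candidate list.

    scores: model scores s_i; ranks: teacher ranks r_i (1 = most relevant).
    Adds log(1 + exp(s_j - s_i)) for every pair where the teacher ranks
    passage i above passage j, penalizing pairs scored in the wrong order.
    """
    total = 0.0
    for s_i, r_i in zip(scores, ranks):
        for s_j, r_j in zip(scores, ranks):
            if r_i < r_j:  # teacher says passage i outranks passage j
                total += math.log1p(math.exp(s_j - s_i))
    return total
```

Note the loss is near zero when every higher-ranked passage already scores well above every lower-ranked one, and grows as the model's ordering disagrees with the teacher's.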
The model then produces a score \\(s_i\\) for each passage.<br><br>Training minimizes the following over all ordered pairs where the LLM says passage \\(i\\) should outrank \\(j\\):<br>$$<br>{\\mathscr{L}}_{\\text{RankNet}} = \\sum_{i=1}^{K}\\sum_{j=1}^{K} \\mathbf{1}[r_i &lt; r_j]\\,\\log(1 + \\exp(s_j - s_i)).<br>$$<br>Since the teacher ranking is a total order, exactly \\(\\frac{K(K-1)}{2}\\) pairs \\((i, j)\\) satisfy \\(r_i&lt;r_j\\), so the loss can equivalently be written as:<br>$$<br>{\\mathscr{L}}_{\\text{RankNet}} = \\sum_{(i,j):\\,r_i &lt; r_j} \\log(1 + \\exp(s_j - s_i)).<br>$$<br>By penalizing cases where a lower-ranked passage scores higher than a higher-ranked one, the model learns to follow the correct order. This pairwise objective helps it judge passage relevance better and leads to smooth, stable convergence.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"712\" src=\"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/loss_curve_transparent-1024x712.png\" alt=\"Convergence of RankNet loss during training\" class=\"wp-image-323\" style=\"width:464px;height:auto\" srcset=\"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/loss_curve_transparent-1024x712.png 1024w, https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/loss_curve_transparent-300x209.png 300w, https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/loss_curve_transparent-768x534.png 768w, https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/07\/loss_curve_transparent.png 1133w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Convergence of RankNet loss during training<\/figcaption><\/figure>\n\n\n\n<h2 id=\"evaluation\" class=\"wp-block-heading\">Evaluation<\/h2>\n\n\n\n<p>To confidently establish the superiority of Fin-cx-reranker, we used a rigorous three-stage evaluation funnel, moving from controlled offline tests to live production traffic.<\/p>\n\n\n\n<h3 
class=\"wp-block-heading\"><strong>FinRank-en-v1: Offline internal benchmark:<\/strong><\/h3>\n\n\n\n<p>We built an internal static evaluation set with 3,000 real English queries sourced from 1k+ customer apps, each paired with 40 candidate passages. \u201cIdeal\u201d ground-truth rankings come from a two-stage LLM oracle. For queries with a confirmed (hard) resolution, passages cited by Fin were moved to the top. This setup allows us to directly compare models using classic information retrieval metrics: MAP, NDCG@10, Recall@10, and Kendall tau.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td><strong>Metric<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>Cohere Rerank\u2011v3.5&nbsp;<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>Fin-cx-reranker<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>\u0394<\/strong><\/td><\/tr><tr><td>MAP<\/td><td class=\"has-text-align-center\" data-align=\"center\">0.521<\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>0.612<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\">+17.5\u202f%<\/td><\/tr><tr><td>NDCG@10<\/td><td class=\"has-text-align-center\" data-align=\"center\">0.570<\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>0.665<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\">+16.7\u202f%<\/td><\/tr><tr><td>Recall@10<\/td><td class=\"has-text-align-center\" data-align=\"center\">0.636<\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>0.720<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\">+13.1\u202f%<\/td><\/tr><tr><td>Kendall tau<\/td><td class=\"has-text-align-center\" data-align=\"center\">0.326<\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>0.400<\/strong><\/td><td class=\"has-text-align-center\" 
data-align=\"center\">+22.7\u202f%<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Backtesting production conversations<\/strong><\/h3>\n\n\n\n<p>We sampled 1,500 recent support conversations from 685 apps and ran them through a frozen RAG pipeline, measuring precision \/ recall for cited passages appearing in the first 1,500-token context window Fin uses. This stage also checks how well the model generalizes to out-of-distribution apps not seen during training.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><tbody><tr><td>Metric<\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>Cohere Rerank\u2011v3.5&nbsp;<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>Fin-cx-reranker<\/strong><\/td><\/tr><tr><td>Precision @1500 tok<\/td><td class=\"has-text-align-center\" data-align=\"center\">0.239\u202f\u00b1\u202f0.004<\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>0.254\u202f\u00b1\u202f0.005<\/strong><\/td><\/tr><tr><td>Recall @1500 tok<\/td><td class=\"has-text-align-center\" data-align=\"center\">0.677\u202f\u00b1\u202f0.010<\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>0.698\u202f\u00b1\u202f0.010<\/strong><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Online A\/B testing:<\/strong><\/h3>\n\n\n\n<p>We ran a two-arm, 1.5M-conversation A\/B test, resulting in no change in latency (P50 \u2248150\u202fms), but a statistically significant improvement in Resolution Rate over Cohere Rerank\u2011v3.5 (<strong>p\u202f&lt;\u202f0.01<\/strong>). We do not share the exact Resolution Rate effect size, for competitive reasons.<\/p>\n\n\n\n<h2 id=\"whats-next\" class=\"wp-block-heading\">What\u2019s Next<\/h2>\n\n\n\n<p>Bringing reranking capabilities in-house through Fin-cx-reranker has proven to be a clear win. 
We&#8217;ve improved answer quality, reduced reranking costs by 80%, and gained more control to keep evolving the system. Our experience highlights that targeted, domain-specific models can indeed outperform top commercial solutions.<\/p>\n\n\n\n<p>Looking forward, we see clear opportunities to enhance performance further. We&#8217;re working on refining label quality by re-annotating with stronger models and extending our reranker beyond English. These initiatives are already in progress.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>We built our own reranker that outperforms Cohere Rerank v3.5, an industry-leading commercial solution. This improved our answer quality, reduced reranking costs by 80%, and gave us more flexibility to evolve our system.<\/p>\n","protected":false},"author":36,"featured_media":490,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[33],"tags":[],"coauthors":[24],"class_list":["post-309","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-rag"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v24.6 (Yoast SEO v24.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>How We Built a World-Class Reranker for Fin - \/research<\/title>\n<meta name=\"description\" content=\"Inside Fin AI agent for customer support: Fine-tuning ModernBERT reranker for RAG, outperforming Cohere v3.5, while cutting latency and costs\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How We Built a World-Class Reranker for Fin\" \/>\n<meta 
property=\"og:description\" content=\"Inside Fin AI agent for customer support: Fine-tuning ModernBERT reranker for RAG, outperforming Cohere v3.5, while cutting latency and costs\" \/>\n<meta property=\"og:url\" content=\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/\" \/>\n<meta property=\"og:site_name\" content=\"\/research\" \/>\n<meta property=\"article:published_time\" content=\"2025-09-11T22:47:19+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-09-12T00:07:07+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/09\/image-14-1-1024x512.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"512\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Ramil Yarullin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@intercom\" \/>\n<meta name=\"twitter:site\" content=\"@intercom\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Ramil Yarullin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/\"},\"author\":{\"name\":\"Ramil Yarullin\",\"@id\":\"https:\/\/fin.ai\/research\/#\/schema\/person\/f9421a715135d2012ef2d39e6dade5d2\"},\"headline\":\"How We Built a World-Class Reranker for Fin\",\"datePublished\":\"2025-09-11T22:47:19+00:00\",\"dateModified\":\"2025-09-12T00:07:07+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/\"},\"wordCount\":934,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/fin.ai\/research\/#organization\"},\"image\":{\"@id\":\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/09\/image-14-1.png\",\"articleSection\":[\"RAG\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/\",\"url\":\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/\",\"name\":\"How We Built a World-Class Reranker for Fin - 
\/research\",\"isPartOf\":{\"@id\":\"https:\/\/fin.ai\/research\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/09\/image-14-1.png\",\"datePublished\":\"2025-09-11T22:47:19+00:00\",\"dateModified\":\"2025-09-12T00:07:07+00:00\",\"description\":\"Inside Fin AI agent for customer support: Fine-tuning ModernBERT reranker for RAG, outperforming Cohere v3.5, while cutting latency and costs\",\"breadcrumb\":{\"@id\":\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#primaryimage\",\"url\":\"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/09\/image-14-1.png\",\"contentUrl\":\"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/09\/image-14-1.png\",\"width\":3072,\"height\":1536},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/fin.ai\/research\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"How We Built a World-Class Reranker for Fin\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/fin.ai\/research\/#website\",\"url\":\"https:\/\/fin.ai\/research\/\",\"name\":\"Intercom.ai\",\"description\":\"Insights and blogs from the AI Group building Fin at 
Intercom\",\"publisher\":{\"@id\":\"https:\/\/fin.ai\/research\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/fin.ai\/research\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/fin.ai\/research\/#organization\",\"name\":\"Intercom.ai\",\"url\":\"https:\/\/fin.ai\/research\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/fin.ai\/research\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/03\/favicon.png\",\"contentUrl\":\"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/03\/favicon.png\",\"width\":1024,\"height\":1024,\"caption\":\"Intercom.ai\"},\"image\":{\"@id\":\"https:\/\/fin.ai\/research\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/intercom\",\"https:\/\/www.linkedin.com\/company\/intercom\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/fin.ai\/research\/#\/schema\/person\/f9421a715135d2012ef2d39e6dade5d2\",\"name\":\"Ramil Yarullin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/fin.ai\/research\/#\/schema\/person\/image\/fd5365c919200a70cc952ae6bb3c256b\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/76cba499e063bf235208f2e6a4339cda?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/76cba499e063bf235208f2e6a4339cda?s=96&d=mm&r=g\",\"caption\":\"Ramil Yarullin\"},\"description\":\"is a Staff Machine Learning Scientist at Intercom with 8+ years of experience in engineering and applied research.\",\"sameAs\":[\"https:\/\/www.linkedin.com\/in\/ramil-yarullin\/\"],\"url\":\"https:\/\/fin.ai\/research\/author\/ramil-yarullin\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"How We Built a World-Class Reranker for Fin - \/research","description":"Inside Fin AI agent for customer support: Fine-tuning ModernBERT reranker for RAG, outperforming Cohere v3.5, while cutting latency and costs","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/","og_locale":"en_US","og_type":"article","og_title":"How We Built a World-Class Reranker for Fin","og_description":"Inside Fin AI agent for customer support: Fine-tuning ModernBERT reranker for RAG, outperforming Cohere v3.5, while cutting latency and costs","og_url":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/","og_site_name":"\/research","article_published_time":"2025-09-11T22:47:19+00:00","article_modified_time":"2025-09-12T00:07:07+00:00","og_image":[{"width":1024,"height":512,"url":"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/09\/image-14-1-1024x512.png","type":"image\/png"}],"author":"Ramil Yarullin","twitter_card":"summary_large_image","twitter_creator":"@intercom","twitter_site":"@intercom","twitter_misc":{"Written by":"Ramil Yarullin","Est. 
reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#article","isPartOf":{"@id":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/"},"author":{"name":"Ramil Yarullin","@id":"https:\/\/fin.ai\/research\/#\/schema\/person\/f9421a715135d2012ef2d39e6dade5d2"},"headline":"How We Built a World-Class Reranker for Fin","datePublished":"2025-09-11T22:47:19+00:00","dateModified":"2025-09-12T00:07:07+00:00","mainEntityOfPage":{"@id":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/"},"wordCount":934,"commentCount":0,"publisher":{"@id":"https:\/\/fin.ai\/research\/#organization"},"image":{"@id":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#primaryimage"},"thumbnailUrl":"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/09\/image-14-1.png","articleSection":["RAG"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/","url":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/","name":"How We Built a World-Class Reranker for Fin - \/research","isPartOf":{"@id":"https:\/\/fin.ai\/research\/#website"},"primaryImageOfPage":{"@id":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#primaryimage"},"image":{"@id":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#primaryimage"},"thumbnailUrl":"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/09\/image-14-1.png","datePublished":"2025-09-11T22:47:19+00:00","dateModified":"2025-09-12T00:07:07+00:00","description":"Inside Fin AI agent for customer support: Fine-tuning ModernBERT reranker for RAG, outperforming Cohere v3.5, while cutting 
latency and costs","breadcrumb":{"@id":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#primaryimage","url":"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/09\/image-14-1.png","contentUrl":"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/09\/image-14-1.png","width":3072,"height":1536},{"@type":"BreadcrumbList","@id":"https:\/\/fin.ai\/research\/how-we-built-a-world-class-reranker-for-fin\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/fin.ai\/research\/"},{"@type":"ListItem","position":2,"name":"How We Built a World-Class Reranker for Fin"}]},{"@type":"WebSite","@id":"https:\/\/fin.ai\/research\/#website","url":"https:\/\/fin.ai\/research\/","name":"Intercom.ai","description":"Insights and blogs from the AI Group building Fin at 
Intercom","publisher":{"@id":"https:\/\/fin.ai\/research\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/fin.ai\/research\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/fin.ai\/research\/#organization","name":"Intercom.ai","url":"https:\/\/fin.ai\/research\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/fin.ai\/research\/#\/schema\/logo\/image\/","url":"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/03\/favicon.png","contentUrl":"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/03\/favicon.png","width":1024,"height":1024,"caption":"Intercom.ai"},"image":{"@id":"https:\/\/fin.ai\/research\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/intercom","https:\/\/www.linkedin.com\/company\/intercom"]},{"@type":"Person","@id":"https:\/\/fin.ai\/research\/#\/schema\/person\/f9421a715135d2012ef2d39e6dade5d2","name":"Ramil Yarullin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/fin.ai\/research\/#\/schema\/person\/image\/fd5365c919200a70cc952ae6bb3c256b","url":"https:\/\/secure.gravatar.com\/avatar\/76cba499e063bf235208f2e6a4339cda?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/76cba499e063bf235208f2e6a4339cda?s=96&d=mm&r=g","caption":"Ramil Yarullin"},"description":"is a Staff Machine Learning Scientist at Intercom with 8+ years of experience in engineering and applied 
research.","sameAs":["https:\/\/www.linkedin.com\/in\/ramil-yarullin\/"],"url":"https:\/\/fin.ai\/research\/author\/ramil-yarullin\/"}]}},"_links":{"self":[{"href":"https:\/\/fin.ai\/research\/wp-json\/wp\/v2\/posts\/309","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fin.ai\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fin.ai\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fin.ai\/research\/wp-json\/wp\/v2\/users\/36"}],"replies":[{"embeddable":true,"href":"https:\/\/fin.ai\/research\/wp-json\/wp\/v2\/comments?post=309"}],"version-history":[{"count":0,"href":"https:\/\/fin.ai\/research\/wp-json\/wp\/v2\/posts\/309\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/fin.ai\/research\/wp-json\/wp\/v2\/media\/490"}],"wp:attachment":[{"href":"https:\/\/fin.ai\/research\/wp-json\/wp\/v2\/media?parent=309"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fin.ai\/research\/wp-json\/wp\/v2\/categories?post=309"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fin.ai\/research\/wp-json\/wp\/v2\/tags?post=309"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/fin.ai\/research\/wp-json\/wp\/v2\/coauthors?post=309"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}