{"version":"1.0","provider_name":"\/research","provider_url":"https:\/\/fin.ai\/research","author_name":"Ketan Bhatt","author_url":"https:\/\/fin.ai\/research\/author\/ketan-bhatt\/","title":"Fin: Running a Reliable Service over Unreliable Parts","type":"rich","width":600,"height":338,"html":"<blockquote class=\"wp-embedded-content\" data-secret=\"FAoLMnBK1r\"><a href=\"https:\/\/fin.ai\/research\/fin-running-a-reliable-service-over-unreliable-parts\/\">Fin: Running a Reliable Service over Unreliable Parts<\/a><\/blockquote><iframe sandbox=\"allow-scripts\" security=\"restricted\" src=\"https:\/\/fin.ai\/research\/fin-running-a-reliable-service-over-unreliable-parts\/embed\/#?secret=FAoLMnBK1r\" width=\"600\" height=\"338\" title=\"&#8220;Fin: Running a Reliable Service over Unreliable Parts&#8221; &#8212; \/research\" data-secret=\"FAoLMnBK1r\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" class=\"wp-embedded-content\"><\/iframe><script type=\"text\/javascript\">\n\/* <![CDATA[ *\/\n\/*! This file is auto-generated *\/\n!function(d,l){\"use strict\";l.querySelector&&d.addEventListener&&\"undefined\"!=typeof URL&&(d.wp=d.wp||{},d.wp.receiveEmbedMessage||(d.wp.receiveEmbedMessage=function(e){var t=e.data;if((t||t.secret||t.message||t.value)&&!\/[^a-zA-Z0-9]\/.test(t.secret)){for(var s,r,n,a=l.querySelectorAll('iframe[data-secret=\"'+t.secret+'\"]'),o=l.querySelectorAll('blockquote[data-secret=\"'+t.secret+'\"]'),c=new RegExp(\"^https?:$\",\"i\"),i=0;i<o.length;i++)o[i].style.display=\"none\";for(i=0;i<a.length;i++)s=a[i],e.source===s.contentWindow&&(s.removeAttribute(\"style\"),\"height\"===t.message?(1e3<(r=parseInt(t.value,10))?r=1e3:~~r<200&&(r=200),s.height=r):\"link\"===t.message&&(r=new URL(s.getAttribute(\"src\")),n=new URL(t.value),c.test(n.protocol))&&n.host===r.host&&l.activeElement===s&&(d.top.location.href=t.value))}},d.addEventListener(\"message\",d.wp.receiveEmbedMessage,!1),l.addEventListener(\"DOMContentLoaded\",function(){for(var e,t,s=l.querySelectorAll(\"iframe.wp-embedded-content\"),r=0;r<s.length;r++)(t=(e=s[r]).getAttribute(\"data-secret\"))||(t=Math.random().toString(36).substring(2,12),e.src+=\"#?secret=\"+t,e.setAttribute(\"data-secret\",t)),e.contentWindow.postMessage({message:\"ready\",secret:t},\"*\")},!1)))}(window,document);\n\/* ]]> *\/\n<\/script>\n","thumbnail_url":"https:\/\/fin.ai\/research\/wp-content\/uploads\/2025\/03\/image-14.png","thumbnail_width":1344,"thumbnail_height":896,"description":"Building reliable large language model (LLM) inference is still an emerging discipline. Although the field has matured considerably in recent years, we are far from the level of dependability seen in industry-standard services such as Amazon&hellip;"}