Simulations allow you to validate Fin Procedures, build confidence in your automation, and catch problems before they impact your customers. By modeling full conversations, simulations help your team handle high-volume or complex scenarios, such as cancellations and refunds, with certainty.
Designed to replace time-consuming manual checks, simulations help you identify issues or gradual changes in Fin's behavior as your business logic evolves.
Note: Fin Procedures are available through managed availability. Please contact your account manager to confirm eligibility and learn more about getting access.
Accessing simulations
Simulations are located within the testing panel of a Procedure. To access them:
Open the Procedure you wish to test.
Click Test in the top right corner of the canvas.
Select the Simulations tab in the right-hand panel.
Creating a simulation
You can create a simulation in two ways: using AI-generated suggestions for a quick start, or manually defining the scenario for full control.
AI-generated simulations: Use these to quickly cover common or expected customer scenarios based on your instructions. Fin AI generates "ready-made" starter tests to save you time.
Manual simulations: Use these when you need precise control over data, specific edge cases, or particular branches in your logic.
AI-generated simulations
Fin AI generates starter tests based on your Procedure's instructions, giving you "ready-made" simulations you can run right away. To use them:
Open the Simulations tab in the right-hand panel of your Procedure.
Under Suggested for these instructions, review the list of proposed scenarios (e.g., "Full cancellation request").
Click the Play icon next to a suggestion to run it instantly.
Once a simulation is created or accepted from the suggestions, it will appear in your list. You can then click Run all to execute all of your saved simulations at once.
Manually created simulations
You can also build a simulation from scratch to test specific edge cases based on the Procedure instructions.
In the Simulations tab, click + New.
Simulation name: Give your simulation a clear title.
Simulate as: Choose a specific user or brand to test personalization. You can select from a dropdown list of real users in your workspace.
Customer's opening message: Enter the first message the customer sends (e.g., "I need help with my order"). You can also attach an image, such as a screenshot of an error, to test how Fin handles visual context.
Additional details: Provide guidance regarding the customer's situation or specific actions they have taken.
Define available data
The Customer data available to Fin section allows you to define the data Fin has access to during the test. This ensures you are testing against precise data values rather than relying on vague descriptions.
Simulation time: Use this to define "when" the scenario is happening. Setting a specific date and time allows you to test time-sensitive logic, such as checking if a customer is within a 30-day refund window.
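The 30-day refund window mentioned above is a good example of the time-sensitive logic that Simulation time lets you exercise. As a minimal sketch (the function name and policy constant are illustrative, not part of Fin), the check a Procedure branch might encode looks like this, with the simulation time standing in for "now":

```python
from datetime import datetime, timedelta

REFUND_WINDOW_DAYS = 30  # example policy from the article

def within_refund_window(order_date: datetime, simulation_time: datetime) -> bool:
    """Return True if the simulated 'now' falls inside the refund window."""
    return simulation_time - order_date <= timedelta(days=REFUND_WINDOW_DAYS)

order = datetime(2024, 6, 1)
# A simulation dated 29 days after the order should pass the refund check...
print(within_refund_window(order, datetime(2024, 6, 30)))  # True
# ...while one dated 31 days after should not.
print(within_refund_window(order, datetime(2024, 7, 2)))   # False
```

Running the same simulation twice with two different Simulation time values is an easy way to cover both sides of a date-based branch.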
Attributes and Data Connectors: This section pre-populates with the attributes referenced in your Procedure. Update these values (e.g., set People.Plan to "Pro") to test different branching outcomes.
Note: To ensure your simulation runs accurately, place data based on when Fin should "know" it:
Use Attributes: If Fin is supposed to already know the information at the start of the conversation (e.g., the customer's current Plan or Sign-up date).
Use Additional details: If the information is meant to be provided by the customer during the conversation (e.g., the customer provides their "Order ID" in a follow-up response). This allows you to test if Fin correctly captures and stores that data in an attribute.
Evaluate Fin's behavior
Define the criteria that must be true for the test to pass. Click + Add criteria and select:
Fin reply: Specify what Fin should (or should not) say during the conversation.
Attributes: Verify if an attribute was set, was not set, was equal to, or was not equal to a specific value.
Data connector: Verify if a connector is triggered, is not triggered, or is triggered exactly X times.
Instruction outcome: Check if the conversation reached a specific conclusion, such as finishing, being handed off to a teammate, or other outcomes like switching to a different Procedure.
Once configured, click Save.
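Conceptually, a saved simulation passes only when every criterion holds against the finished run. The sketch below illustrates that all-must-pass evaluation; the field names, criterion kinds, and data shapes are hypothetical stand-ins, not Fin's actual data model:

```python
# Hypothetical record of a finished simulation run (illustrative fields).
run = {
    "fin_replies": ["Your subscription has been cancelled."],
    "attributes": {"People.Plan": "Pro"},
    "connector_calls": {"cancel_subscription": 1},
    "outcome": "procedure_finished",
}

# One entry per success criterion, mirroring the four types above.
criteria = [
    ("fin_reply_contains", "cancelled"),
    ("attribute_equals", ("People.Plan", "Pro")),
    ("connector_called_times", ("cancel_subscription", 1)),
    ("outcome_is", "procedure_finished"),
]

def check(run, kind, expected):
    if kind == "fin_reply_contains":
        return any(expected in reply for reply in run["fin_replies"])
    if kind == "attribute_equals":
        name, value = expected
        return run["attributes"].get(name) == value
    if kind == "connector_called_times":
        name, times = expected
        return run["connector_calls"].get(name, 0) == times
    if kind == "outcome_is":
        return run["outcome"] == expected
    raise ValueError(f"unknown criterion: {kind}")

# A single failed criterion fails the whole simulation.
status = "Passed" if all(check(run, k, e) for k, e in criteria) else "Failed"
print(status)  # Passed
```

Because every criterion must hold, keep each one focused on an outcome you actually care about; over-specified criteria are the most common cause of spurious failures (see the FAQ below).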
Note: When you click Save, Fin uses AI to review your simulation form. If the instructions are unclear or if the success criteria are inconsistent, you will see recommendations on how to improve the test for more accurate results.
Tip: When designing simulations, look at your branching logic. If a step implies multiple outcomes (e.g., "Check reason for card block"), create a separate test case for each path. This builds a "regression safety net" to ensure future updates don't break existing logic. This approach is recommended for Procedures with Conditions or sub-procedures, where multiple mutually exclusive paths can exist.
Running and reviewing results
Once you run a test, it appears in the Tests panel on the right-hand side with a status indicator:
Running: The test is actively being executed.
Passed: The test ran and successfully met all defined success criteria.
Failed: The test ran but did not meet the defined success criteria.
Queued: The test has been initiated but is waiting for the preceding simulation to finish before executing.
To investigate a result, click See conversation. This opens the full back-and-forth transcript between the simulated customer and Fin, making it easy to see exactly how the flow played out and why a test passed or failed.
Simulation usage limits
There is a limit to the number of simulations you can run each month. The limit is applied at the workspace level and resets on the first day of every calendar month.
Each workspace receives a monthly allowance of Simulation runs based on its conversation volume in Intercom:
We assign your workspace to a segment using the number of conversations in the last calendar month.
Your segment is re-evaluated monthly, so your allowance reflects your most recent month's conversation volume.
If your conversation volume increases or decreases, your Simulation allowance may change in the next monthly cycle.
Conversation volume segment | Simulation limit per month
Under 1K | 50
1K–15K | 200
15K–100K | 350
100K–1M | 1000
1M+ | 2500
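The segmentation in the table above amounts to a simple threshold lookup. Here is a minimal sketch of that mapping; the function name is illustrative, and the handling of exact boundary values (e.g., exactly 15K conversations) is an assumption, since the article does not specify it:

```python
# (upper bound on last month's conversations, monthly simulation limit),
# mirroring the table above. Boundary values are assumed to fall into
# the higher segment.
SEGMENTS = [
    (1_000, 50),
    (15_000, 200),
    (100_000, 350),
    (1_000_000, 1000),
    (float("inf"), 2500),
]

def monthly_allowance(conversations_last_month: int) -> int:
    """Return the simulation allowance for a given conversation volume."""
    for upper_bound, limit in SEGMENTS:
        if conversations_last_month < upper_bound:
            return limit
    raise ValueError("unreachable: final segment is unbounded")

print(monthly_allowance(500))        # 50
print(monthly_allowance(40_000))     # 350
print(monthly_allowance(2_000_000))  # 2500
```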
Monitoring your usage
To help you manage your testing, Fin provides visual indicators within the Simulations tab:
Usage warning
When your workspace reaches 80% of its monthly limit, a yellow warning banner will appear. It displays your current usage (e.g., "85/100") and reminds you when the limit will reset.
Limit reached
Once you hit 100% of your monthly limit, a red error message will appear. You will be unable to run further simulations until the start of the next month.
Note: If you reach your limit, you can still review previous simulation results and transcripts by clicking See conversation, but the Run and Run all buttons will be disabled.
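The 80% warning and 100% cutoff described above reduce to two thresholds. A minimal sketch of that logic, assuming the function and state names (which are illustrative, not Fin's):

```python
def usage_state(runs_used: int, monthly_limit: int) -> str:
    """Classify simulation usage against the thresholds from the article."""
    if runs_used >= monthly_limit:
        return "limit_reached"   # red error; Run / Run all disabled
    if runs_used >= 0.8 * monthly_limit:
        return "warning"         # yellow banner, e.g. "85/100"
    return "ok"

print(usage_state(85, 100))   # warning
print(usage_state(100, 100))  # limit_reached
print(usage_state(10, 100))   # ok
```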
FAQs
Why use Simulations instead of manual testing?
Manual testing is great for quick spot checks or configuration reviews. However, Simulations allow you to validate Procedures at scale and ensure Fin performs reliably in complex, high-risk scenarios. Running them before every launch helps you catch unexpected behavior early.
What happens if a Simulation fails?
You can review the full simulated conversation to understand why Fin didn’t behave as expected, adjust your Procedure, and re-run the Simulation with no impact to customers.
Why is my simulation marked "Failed" even though Fin successfully solved the problem?
This usually happens when your Success Criteria are too rigid. For example, if you require Fin to "Ask for an Order ID," but Fin is smart enough to find the ID automatically, the test will fail because Fin skipped the question. Update your criteria to focus on the final outcome (e.g., "Procedure finished") rather than mandating specific intermediate steps.
Are Simulations billed separately from Procedures?
Simulations are included with Procedures and aren't billed as a separate line item. You won't incur additional charges for running simulations.
Why is there a limit on Simulation runs?
Every simulation run requires resources to generate accurate AI predictions. We provide a monthly allowance to ensure you can test your Procedures freely for standard use cases while preventing extreme usage from creating runaway costs.