
Custom-Endpoint Multi-Turn Simulations

Okareo can drive a full conversation against your running service (RAG pipeline, tool-calling agent, or any HTTP API) by mapping its requests & JSON responses through a Custom Endpoint Target. This guide shows you, step by step, how to run a multi-turn simulation using custom endpoints, in either the Okareo UI or the SDK.

You'll follow the same four core steps you saw in the Multi-Turn Overview.

Cookbook examples for this guide are also available.

tip

New to simulations? See the Simulation Overview.

1 · Configure a Target

A Target is the system you’re testing (your RAG service, tool-calling agent, or any HTTP API). You can reuse the same Target across multiple simulations.

You can configure the Target in two ways:

  • Custom Endpoint – any HTTP-accessible API you provide (e.g., your RAG pipeline, tool-calling agent, or backend chat service).
  • Foundation Model – a pre-integrated model (e.g., GPT-4o mini) selected from Okareo’s catalog.

A Custom Endpoint Target specifies:

  • How sessions are started (e.g., to create a conversation thread or context).
  • How each turn is sent and how the responses are extracted.
  • How sessions are ended or finalized.

To create one:

  1. Navigate to Targets and click ➕ Create Target.

    Targets Zero State

  2. Choose Custom Endpoint and fill in the three API calls that define your conversation:

    • Start Session (optional, called once before the first turn)
    • Next Turn (required, called every turn)
    • End Session (optional, called once after the final turn)

    All three calls share the same fields:

    • Method – HTTP verb (POST, GET, …)
    • URL – endpoint to call
    • Headers / Query Params – auth & custom metadata (e.g., api-key)
    • Body Template – JSON with template variables, e.g. {latest_message}, {message_history}, {session_id}
    • Response Message Path – JSONPath to the assistant’s reply
    • Response Session ID Path – JSONPath to the session/thread id (for Start Session)
    • {access_token} variable (when Credential Authentication is enabled; see below) – the acquired token, injected into your downstream calls (Start Session, Next Turn, End Session). Add a header Authorization: Bearer {access_token} to pass the token to your API.
    • Max Parallel Requests (optional) – limits how many concurrent requests Okareo sends to your API during a simulation. Leave empty for default concurrency. Set this when your API has rate limits or can only handle a limited number of simultaneous connections (e.g., 5).

    Target Form
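To make the three calls concrete, here is a minimal Python sketch of the JSON shapes such an endpoint trio might produce. Everything here is illustrative: the field names (`thread_id`, `reply`) and the echo behavior are assumptions, not an Okareo API — your service's actual shapes are simply whatever your Response Message Path and Response Session ID Path point at.

```python
import uuid

# In-memory conversation store standing in for a real chat backend.
SESSIONS: dict[str, list[dict]] = {}

def start_session() -> dict:
    """Response for the Start Session call.
    Response Session ID Path here would be: response.thread_id (hypothetical)."""
    thread_id = str(uuid.uuid4())
    SESSIONS[thread_id] = []
    return {"thread_id": thread_id}

def next_turn(thread_id: str, latest_message: str) -> dict:
    """Response for the Next Turn call.
    Response Message Path here would be: response.reply (hypothetical)."""
    SESSIONS[thread_id].append({"role": "user", "content": latest_message})
    reply = f"Echo: {latest_message}"  # a real service would call its model here
    SESSIONS[thread_id].append({"role": "assistant", "content": reply})
    return {"reply": reply}

def end_session(thread_id: str) -> dict:
    """Response for the End Session call."""
    turns = len(SESSIONS.pop(thread_id, []))
    return {"status": "closed", "turns": turns}
```

With shapes like these, a Next Turn Body Template might look like `{"thread_id": "{session_id}", "message": "{latest_message}"}`, with Response Message Path `response.reply`.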

    Credential Authentication

    In addition to the per-call settings above, your Target can acquire an OAuth token automatically before each simulation. Use this when your API requires a dynamically acquired OAuth2 token rather than a static API key or bearer token.

    Expand the Credential Authentication accordion at the bottom of the target form to configure it. Okareo acquires the token once at the start of each conversation. If a downstream call returns HTTP 401, Okareo re-authenticates once and retries the call.

    The most common configuration is an OAuth2 client_credentials flow. Your setup may differ depending on your provider.

    • Auth URL – the token endpoint (e.g., https://auth.example.com/oauth2/token)
    • Method – HTTP verb (typically POST)
    • Headers – request headers for the auth call (e.g., Content-Type: application/x-www-form-urlencoded). Use Add Secure Header for headers containing secrets.
    • Body Fields – key-value pairs for your auth request (e.g., grant_type=client_credentials, client_id=..., client_secret=...). Use Add Secure Body Field for values containing secrets (like client_secret). Secure fields are redacted with asterisks (********) in subsequent views, API responses, and logs. You cannot edit a secure field after saving — delete and re-add it instead.
    • Access Token Path – tells Okareo where to find the token in the auth response (see table below)

    Once authentication succeeds, use the {access_token} template variable in your downstream calls (Start Session, Next Turn, End Session) to pass the token to your API. The most common pattern is adding a header Authorization: Bearer {access_token}. See the {access_token} variable in the field list above.
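For reference, here is a sketch of the equivalent client_credentials exchange built by hand with Python's standard library. The URL and credential values are placeholders; Okareo performs this call for you and exposes the result as {access_token}.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder credentials and token endpoint -- substitute your provider's values.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "my-client-id",
    "client_secret": "my-client-secret",
}).encode()

req = Request(
    "https://auth.example.com/oauth2/token",  # Auth URL
    data=body,                                # Body Fields, form-encoded
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)

# The token extracted from the auth response (value shown is a placeholder)
# is passed downstream exactly as this header does:
access_token = "example-token"
auth_header = {"Authorization": f"Bearer {access_token}"}
```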

    Access Token Path

    The path tells Okareo where to find the token in the auth response. Every path starts with response. and then names the JSON keys leading to the token.

    Auth response shape → Access Token Path:

    • {"access_token": "..."} → response.access_token (default)
    • {"data": {"token": "..."}} → response.data.token
    • {"result": {"access_token": "..."}} → response.result.access_token

    If left blank, Okareo defaults to response.access_token, which works for most OAuth2 providers.
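The path syntax amounts to a simple dotted key lookup. The helper below is purely illustrative (not Okareo's implementation), but it shows how each path in the table above maps to the token:

```python
def resolve_token(auth_response: dict, path: str = "response.access_token") -> str:
    """Walk a dotted Access Token Path over the auth response body.
    The leading 'response' segment refers to the response body itself."""
    keys = path.split(".")
    if keys[0] != "response":
        raise ValueError("Access Token Path must start with 'response.'")
    value = auth_response
    for key in keys[1:]:
        value = value[key]  # a KeyError here ~ "Could not extract token"
    return value
```

For example, `resolve_token({"data": {"token": "abc"}}, "response.data.token")` returns `"abc"`.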

    Credential Auth Form

    Verifying with Test Calls

    When Credential Authentication is enabled, Test Calls shows an Auth section in the response with: the HTTP status code, response headers, the response body (with sensitive fields redacted), and parsed_values showing the extracted token (truncated for security). Use this to verify your auth configuration before running a simulation.

    Test Calls Auth Response

    Common Issues

    • Auth endpoint returns 401/403 – double-check your client_id, client_secret, and auth URL. Make sure the token endpoint accepts the grant type and credentials you're providing.
    • "Could not extract token" error – the Access Token Path doesn't match your auth response structure. Use Test Calls to see the raw auth response body and verify the JSON key path.
    • Token acquired but downstream calls fail with 401 – make sure your downstream headers reference {access_token}. For example: Authorization: Bearer {access_token}. Without this template variable, the token is acquired but never sent to your API.
    • Auth URL unreachable – verify the URL is accessible from Okareo's servers, not just your local machine.
    • Token expires mid-simulation – Okareo acquires the token once at the start of each conversation. If a downstream call returns HTTP 401, Okareo automatically re-authenticates once and retries the call. For very short-lived tokens, ensure your token endpoint is stable and responsive.
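The re-authentication behavior described above amounts to an "acquire once, re-auth once on 401, retry once" pattern. The sketch below mirrors that behavior; `fetch_token` and `call_api` are hypothetical stand-ins for the real HTTP calls, and this is not Okareo's actual code:

```python
def call_with_reauth(call_api, fetch_token):
    """Acquire a token once; if the downstream call answers HTTP 401,
    re-authenticate a single time and retry the call."""
    token = fetch_token()
    status, body = call_api(token)
    if status == 401:
        token = fetch_token()           # one re-auth...
        status, body = call_api(token)  # ...and one retry
    return status, body
```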
    Test before saving

    Use Test Calls to verify your mappings (paths & payloads) return the expected fields.

  3. Click Create. Your Target is now available to reuse in any simulation.

2 · Register a Driver

A Driver is the simulated user persona that talks to your Target.

info

Configuring LLMs to role-play as a user can be challenging. See our guide on Creating Drivers.

  1. Go to Simulations → Drivers and click ➕ New Driver.

  2. Fill in:

    • Name – a descriptive label (e.g., “Busy User”).
    • Temperature – variability of the driver’s behavior (0 = deterministic).
    • Prompt Template – the persona & rules. You can start from a template and edit it, or paste your own.
      Use {scenario_input.*} to reference fields from your Scenario rows.

    Driver Form

  3. Click Create. Your Driver is now available to reuse in any simulation.
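To see how {scenario_input.*} placeholders resolve, here is an illustrative sketch of the substitution. Okareo performs this server-side; the field names in the example row are just samples:

```python
import re

def render_driver_prompt(template: str, scenario_input: dict) -> str:
    """Substitute {scenario_input.<field>} placeholders with values
    from a scenario row."""
    return re.sub(
        r"\{scenario_input\.(\w+)\}",
        lambda m: str(scenario_input[m.group(1)]),
        template,
    )

prompt = render_driver_prompt(
    "You are {scenario_input.name}. Your goal: {scenario_input.objective}.",
    {"name": "Paul", "objective": "Reset your debit PIN"},
)
# prompt == "You are Paul. Your goal: Reset your debit PIN."
```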

3 · Create a Scenario

A Scenario defines what should happen in each simulation run. Think of it as a test case matrix.

A Scenario is made up of one or more Scenario Rows. Each row supplies runtime parameters that are inserted into the Driver Prompt, plus an Expected Target Result that Okareo’s checks (like Behavior Adherence) will judge against.

How simulation count works:

The total number of simulations = Number of Scenario Rows × Repeats (from the Setting Profile)

Examples:

  • 1 Scenario Row × Repeats = 1 → 1 simulation
  • 2 Scenario Rows × Repeats = 1 → 2 simulations
  • 2 Scenario Rows × Repeats = 2 → 4 simulations (2 runs per row)

To create a scenario:

  1. Go to Studio → Synthetic Scenario Copilot.

  2. Add rows:

    • Input (JSON): any fields your driver prompt references, e.g. { "name": "Paul", "objective": "Reset your debit PIN" }
    • Expected Result (text): the success criteria (e.g., “User completes debit PIN reset and confirms it’s done.”)

  3. To generate rows with AI, describe them in the text box at the bottom (“Describe the desired properties…”), then refine as needed.

  4. Save the scenario set: hover over the toolbar in the lower-right of the dialog, then click the save icon to name and save the set.

    Scenario Copilot

Your scenario set is now available to reuse across simulations.
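Put together, a scenario set is just structured data, and the simulation count follows the formula above. A sketch (the dict keys are illustrative, not an Okareo schema):

```python
# Two scenario rows: each has an Input (fields the driver prompt references)
# and an Expected Target Result for checks to judge against.
scenario_rows = [
    {
        "input": {"name": "Paul", "objective": "Reset your debit PIN"},
        "expected_result": "User completes debit PIN reset and confirms it's done.",
    },
    {
        "input": {"name": "Dana", "objective": "Dispute a card charge"},
        "expected_result": "User files a dispute and receives a reference number.",
    },
]

repeats = 2  # from the Setting Profile
total_simulations = len(scenario_rows) * repeats
# total_simulations == 4 (2 rows x 2 repeats)
```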

4 · Launch a Simulation

  1. Navigate to Simulations and click ➕ Create Multi-Turn Simulation.

Simulations

  2. Select a Target, Driver, Scenario, and Checks.

Simulations Form

  3. Click Create. You can watch the progress of the simulation.

Simulations Index

5 · Inspect Results

Click a Simulation tile to open its details. The results page breaks down the simulation into:

  • Conversation Transcript – View the full back-and-forth between the Driver and Target, one turn per row.
  • Checks – See results for:
    • Behavior Adherence – Did the assistant stay in character or follow instructions?
    • Model Refusal – Did the assistant properly decline off-topic or adversarial inputs?
    • Task Completed – Did it fulfill the main objective?
    • A custom check specific to your agent

Each turn is annotated with check results, so you can trace where things went wrong — or right.

Results


That's it! You now have a complete, repeatable workflow for evaluating agents with multi-turn simulations, entirely from the browser or your codebase.