# Sending Traces via OpenTelemetry
Checks and monitors tell you what happened: a conversation scored low, or a check failed. Traces tell you why. By sending OpenTelemetry traces to Okareo, you can see the inner workings of your system (every tool call, retrieval step, and LLM invocation) alongside the simulation transcript or monitored conversation that produced a given score. When a simulation run surfaces a behavioral issue or a monitor flags a production conversation, traces give you the full picture of what your system actually did under the hood.
You can send OpenTelemetry (OTEL) traces directly to Okareo instead of, or in addition to, routing traffic through the Okareo Proxy. This is useful when you already have OTEL instrumentation in your application or when the proxy approach doesn't fit your architecture.
## Traces Endpoint

Send OTEL traces to:

```
POST https://api.okareo.com/v1/traces
```
Authenticate with your Okareo API key in the request headers.
The endpoint accepts the OTLP/proto (protobuf) format, which is the default for OpenTelemetry SDK exporters.
### Headers

| Header | Value |
|---|---|
| `api-key` | Your Okareo API key |
| `Content-Type` | `application/x-protobuf` |
## Example
- Python (OTEL SDK)
- Google ADK
- TypeScript (OTEL SDK)
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Configure the OTEL exporter to send traces to Okareo.
# The default exporter uses protobuf encoding (application/x-protobuf).
exporter = OTLPSpanExporter(
    endpoint="https://api.okareo.com/v1/traces",
    headers={"api-key": "<OKAREO_API_KEY>"},
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("my-llm-app")

# Instrument your LLM calls
with tracer.start_as_current_span("chat_completion") as span:
    span.set_attribute("llm.request.model", "gpt-4o")
    # ... your LLM call here ...
    span.set_attribute("llm.response.content", response_text)
```
Install the required packages:

```shell
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-http
```
For Google ADK agents, use the GoogleADKInstrumentor from the OpenInference library. It automatically patches the ADK to capture agent execution, LLM calls (model, input/output, token counts), and tool invocations.
1. Install dependencies:

   ```shell
   pip install openinference-instrumentation-google-adk opentelemetry-exporter-otlp-proto-http
   ```
2. Create a tracing module (e.g., `tracing.py`):

   ```python
   import os

   from openinference.instrumentation.google_adk import GoogleADKInstrumentor
   from opentelemetry import trace
   from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
   from opentelemetry.sdk import trace as trace_sdk
   from opentelemetry.sdk.trace.export import SimpleSpanProcessor

   def instrument_adk_with_okareo() -> trace_sdk.TracerProvider:
       otlp_exporter = OTLPSpanExporter(
           endpoint=f"{os.getenv('OKAREO_BASE_URL')}/v1/traces",
           headers={"api-key": os.getenv("OKAREO_API_KEY", "")},
       )
       tracer_provider = trace_sdk.TracerProvider()
       tracer_provider.add_span_processor(SimpleSpanProcessor(otlp_exporter))
       trace.set_tracer_provider(tracer_provider)
       GoogleADKInstrumentor().instrument(tracer_provider=tracer_provider)
       return tracer_provider
   ```
3. Call it before your agent runs:

   ```python
   from your_project.tracing import instrument_adk_with_okareo

   instrument_adk_with_okareo()

   root_agent = Agent(
       model="gemini-2.0-flash",
       name="root_agent",
       # ... your agent config
   )
   ```
4. Set environment variables:

   ```shell
   export OKAREO_BASE_URL=https://api.okareo.com
   export OKAREO_API_KEY=your-api-key-here
   ```
All spans (agent execution, LLM calls, and tool invocations) are exported to Okareo via OTLP where they appear as full agent network traces.
```typescript
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { trace } from "@opentelemetry/api";

// Configure the OTEL exporter to send traces to Okareo.
// The proto exporter uses protobuf encoding (application/x-protobuf).
const exporter = new OTLPTraceExporter({
  url: "https://api.okareo.com/v1/traces",
  headers: { "api-key": "<OKAREO_API_KEY>" },
});

const provider = new NodeTracerProvider();
provider.addSpanProcessor(new BatchSpanProcessor(exporter));
provider.register();

const tracer = trace.getTracer("my-llm-app");

// Instrument your LLM calls
const span = tracer.startSpan("chat_completion");
span.setAttribute("llm.request.model", "gpt-4o");
// ... your LLM call here ...
span.setAttribute("llm.response.content", responseText);
span.end();
```
Install the required packages:

```shell
npm install @opentelemetry/api @opentelemetry/sdk-trace-node @opentelemetry/sdk-trace-base @opentelemetry/exporter-trace-otlp-proto
```
The endpoint only accepts protobuf-encoded payloads (`application/x-protobuf`). JSON (`application/json`) is not supported and will return an Unsupported Content-Type error. Make sure you use the `otlp-proto` exporter variants, not `otlp-http` with JSON.
## Supported Span Formats
Okareo natively understands traces from several popular standards and frameworks:
- `gen_ai` semantic conventions: the emerging OpenTelemetry standard for generative AI spans
- LiteLLM: spans emitted by the LiteLLM proxy and SDK
- A2A / ADK: spans from Google's Agent-to-Agent protocol and Agent Development Kit (see the Google ADK example above for setup)
If your application already emits traces in any of these formats, Okareo will automatically extract the session or conversation identifier from the spans. No extra instrumentation is needed.
## Linking Traces to Simulations via Context Token
When you define a Custom Endpoint Target with a `session_id` mapping, traces sent to Okareo are automatically associated with the corresponding simulation. In Okareo, this association key is called the Context Token. It links a trace span to a specific simulation conversation.
### How it works
1. Configure your Target with a `response_session_id_path` that extracts a session or thread ID from your API responses (see Custom Endpoint → Configure a Target).
2. Run a simulation. Okareo uses the `session_id` from your Target as the Context Token and matches it against incoming traces.
3. Traces are linked automatically. If your application emits `gen_ai`, LiteLLM, or A2A/ADK spans, Okareo extracts the session identifier from the standard span attributes and associates the traces with the simulation. No manual span tagging is required.
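As a rough sketch, a Target configuration with a session ID mapping might look like the following. Only `response_session_id_path` is the documented field here; the other keys and the path value are illustrative placeholders, so refer to Custom Endpoint → Configure a Target for the actual schema:

```json
{
  "name": "my-agent-endpoint",
  "url": "https://my-app.example.com/chat",
  "response_session_id_path": "response.thread_id"
}
```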
If you're using `gen_ai` semantic conventions, LiteLLM, or A2A/ADK, the session/conversation ID is already present in your spans. Okareo will find it and set the Context Token automatically.
If your traces use a custom format that doesn't follow one of the supported standards, you can manually include the session ID as a span attribute so Okareo can match it:
```python
# Only needed for custom/non-standard trace formats
with tracer.start_as_current_span("handle_message") as span:
    span.set_attribute("session.id", thread_id)  # same ID your API returns
    # ... process the message ...
```
Once linked, you can view the full trace timeline alongside the simulation transcript in the Okareo UI. This shows not just what the agent said, but the internal calls, latencies, and tool invocations that led to each response.
This is especially valuable for multi-turn simulations where you want to verify that your agent's internal behavior (tool calls, retrieval steps, memory access) matches what it says in the conversation.
## When to Use Direct Tracing vs. the Proxy
| Approach | Best for |
|---|---|
| Okareo Proxy | Quick setup, no existing OTEL instrumentation, want a unified gateway for multiple providers. See Setting Up Monitoring. |
| Direct OTEL Traces | Existing OTEL instrumentation, need detailed internal spans (tool calls, retrieval, memory), or architectures where a proxy isn't feasible. |
| Both | Use the proxy for LLM-level monitoring and direct traces for internal application observability. They complement each other. |
## Next Steps
- Setting Up Monitoring: Proxy-based setup and monitor configuration.
- Custom Endpoint Target: Configure session ID mapping for simulation-trace linking.
- Checks: Evaluation metrics applied to your traces and interactions.