The problem
withTrace() accepts metadata, but that metadata only exists on the Trace object. Child spans (LLM calls, tool calls, handoffs, etc.) have a traceId but no access to the trace's metadata.
This means if you set identifiers like chatType, userId, or env on a trace, those fields don't appear on the LLM spans that carry token usage and cost data. In observability dashboards that filter at the span level, filtering by any of these identifiers drops all cost information because the cost-carrying spans don't have the fields.
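For concreteness, a minimal sketch of the current behavior (assuming the withTrace(name, fn, options) overload from @openai/agents; the agent setup is illustrative):

```ts
import { Agent, run, withTrace } from "@openai/agents";

const agent = new Agent({
  name: "Assistant",
  instructions: "You are a helpful assistant.",
});

await withTrace(
  "my-chat",
  async () => {
    // The generation span created for this call carries token usage
    // and a traceId, but no copy of chatType/userId/env, so span-level
    // filters in the observability backend never see those fields.
    await run(agent, "Hello");
  },
  // Metadata set here lands on the Trace object only.
  { metadata: { chatType: "EDITOR", userId: "abc123", env: "production" } },
);
```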
The Braintrust OpenAIAgentsTraceProcessor is one example where this causes a real problem: the identifiers drop off the cost-carrying spans, which reduces visibility into how features behave in production.
Request
Propagate trace metadata to all child spans, so that identifiers set via withTrace() are available on every span in that trace -- especially on LLM/generation spans where usage and cost are recorded.
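Until this lands in the SDK, the shape of the fix could look roughly like the wrapper below: a processor that caches each trace's metadata and stamps it onto spans before forwarding them. This is only a sketch; the interfaces are deliberately self-contained stand-ins, since the exact TracingProcessor hook names and the place span metadata lives in the JS SDK may differ.

```ts
// Sketch only: these interfaces are assumptions, not the SDK's actual API.
interface TraceLike {
  traceId: string;
  metadata?: Record<string, unknown>;
}
interface SpanLike {
  traceId: string;
  metadata?: Record<string, unknown>;
}
interface ProcessorLike {
  onTraceStart(trace: TraceLike): void;
  onTraceEnd(trace: TraceLike): void;
  onSpanStart(span: SpanLike): void;
  onSpanEnd(span: SpanLike): void;
}

class MetadataPropagatingProcessor implements ProcessorLike {
  private metadataByTrace = new Map<string, Record<string, unknown>>();

  constructor(private readonly inner: ProcessorLike) {}

  onTraceStart(trace: TraceLike): void {
    if (trace.metadata) this.metadataByTrace.set(trace.traceId, trace.metadata);
    this.inner.onTraceStart(trace);
  }

  onSpanStart(span: SpanLike): void {
    this.inner.onSpanStart(span);
  }

  onSpanEnd(span: SpanLike): void {
    // Copy the trace's metadata onto the span so span-level filters
    // (e.g. by chatType/userId/env) still match the cost-carrying spans.
    const traceMeta = this.metadataByTrace.get(span.traceId);
    if (traceMeta) span.metadata = { ...traceMeta, ...span.metadata };
    this.inner.onSpanEnd(span);
  }

  onTraceEnd(trace: TraceLike): void {
    this.inner.onTraceEnd(trace);
    this.metadataByTrace.delete(trace.traceId);
  }
}
```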
How other frameworks handle this
I did some research with Codex on how the Vercel AI SDK handles this, so the details below are AI-assisted, but they show that other frameworks propagate metadata in exactly this way.
Vercel AI SDK lets you set metadata via experimental_telemetry, and it shows up on every child span automatically:
```ts
const result = await generateText({
  model: openai("gpt-4"),
  prompt: "Hello",
  experimental_telemetry: {
    isEnabled: true,
    functionId: "my-chat",
    metadata: {
      chatType: "EDITOR",
      env: "production",
      userId: "abc123",
    },
  },
});
```

Internally, the SDK computes baseTelemetryAttributes once (converting each metadata key to ai.telemetry.metadata.* attributes) and spreads them into both the outer span and every inner LLM call span:
- get-base-telemetry-attributes.ts -- builds ai.telemetry.metadata.* from settings
- generate-text.ts -- ...baseTelemetryAttributes is spread into both the ai.generateText and ai.generateText.doGenerate spans
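Condensed, the pattern looks like this (a sketch of the technique, not the SDK's actual source; getBaseTelemetryAttributes is simplified here and the spans are plain OpenTelemetry spans):

```ts
import { trace } from "@opentelemetry/api";

// Flatten user metadata once into prefixed span attributes.
function getBaseTelemetryAttributes(metadata: Record<string, string>) {
  return Object.fromEntries(
    Object.entries(metadata).map(([key, value]) => [
      `ai.telemetry.metadata.${key}`,
      value,
    ]),
  );
}

const baseTelemetryAttributes = getBaseTelemetryAttributes({
  chatType: "EDITOR",
  env: "production",
  userId: "abc123",
});

const tracer = trace.getTracer("example");

// The same attributes are spread into the outer span...
const outerSpan = tracer.startSpan("ai.generateText", {
  attributes: { ...baseTelemetryAttributes },
});
// ...and into every inner LLM call span, so span-level filtering
// by any of these identifiers still finds the usage data.
const innerSpan = tracer.startSpan("ai.generateText.doGenerate", {
  attributes: { ...baseTelemetryAttributes },
});

innerSpan.end();
outerSpan.end();
```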