spring-ai-docs/src/main/antora/modules/ROOT/pages/observability/index.adoc

|===

WARNING: If you enable the inclusion of the vector search response data in the observations, there's a risk of exposing sensitive or private information. Please be careful!

== Example: Sending Traces to an OpenTelemetry Backend

This guide shows how to send traces to https://langfuse.com/[Langfuse]'s OpenTelemetry endpoint.

[NOTE]
====
Visit the https://github.com/langfuse/langfuse-examples/tree/main/applications/spring-ai-demo[Langfuse Example Repo] for a fully instrumented example application.
====

=== Step 1: Enable OpenTelemetry in Spring AI

*Add OpenTelemetry and Spring Observability Dependencies* (Maven):

Make sure your project includes Spring Boot Actuator and the Micrometer tracing libraries for OpenTelemetry. Spring Boot Actuator is required to enable Micrometer's observation and tracing features.

You'll also need the Micrometer -> OpenTelemetry bridge and an OTLP exporter. For Maven, add the dependencies to your `pom.xml` (Gradle users can add the equivalent coordinates to `build.gradle`).
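
A typical dependency set looks like the following; artifact versions are managed by the Spring Boot dependency BOM, so none are pinned here:

[source,xml]
----
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<!-- Micrometer -> OpenTelemetry bridge -->
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-otel</artifactId>
</dependency>
<!-- OTLP exporter for sending spans -->
<dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-otlp</artifactId>
</dependency>
----
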
*Enable Span Export and Configure Spring AI Observations* (`application.yml`):

With the above dependencies, Spring Boot auto-configures tracing using OpenTelemetry, as long as we provide the proper settings: where to send the spans (the OTLP endpoint) and which data Spring AI should include in them. Create or update your `application.yml` (or `application.properties`) with the following configuration:

[source,yaml]
----
spring:
  application:
    name: spring-ai-llm-app # Service name for tracing (appears in Langfuse UI as the source service)
  ai:
    chat:
      observations:
        include-prompt: true # Include prompt content in tracing (disabled by default for privacy)
        include-completion: true # Include completion content in tracing (disabled by default)

management:
  tracing:
    sampling:
      probability: 1.0 # Sample 100% of requests for full tracing (adjust in production as needed)
  observations:
    annotations:
      enabled: true # Enable @Observed (if you use observation annotations in code)
----
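
If you prefer `application.properties`, the same settings in flat form are:

[source,properties]
----
spring.application.name=spring-ai-llm-app
spring.ai.chat.observations.include-prompt=true
spring.ai.chat.observations.include-completion=true
management.tracing.sampling.probability=1.0
management.observations.annotations.enabled=true
----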

With these configurations and dependencies in place, your Spring Boot application is ready to produce OpenTelemetry traces. Spring AI's internal calls (e.g. when you invoke a chat model or generate an embedding) will be recorded as spans.

Each span carries attributes such as `gen_ai.operation.name` and `gen_ai.system` (the provider, e.g. "openai"), along with model names, token usage, and – since we enabled them – events for the prompt and response content.

=== Step 2: Configure Langfuse

Now that your Spring AI application is emitting OpenTelemetry trace data, the next step is to direct that data to Langfuse.

Langfuse acts as the OpenTelemetry "backend" in this setup – its trace ingestion API takes the place of a typical Jaeger, Zipkin, or OTel Collector deployment.
419
+
420
+
*Langfuse Setup*

- Sign up for https://cloud.langfuse.com/[Langfuse Cloud] or https://langfuse.com/self-hosting[self-hosted Langfuse].
- Note the OTLP endpoint (e.g. `https://cloud.langfuse.com/api/public/otel`) and create API keys.

Configure these via environment variables:

[source,bash]
----
# The Langfuse OTLP endpoint
export OTEL_EXPORTER_OTLP_ENDPOINT="https://cloud.langfuse.com/api/public/otel"

# Basic Auth built from your Langfuse API keys
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <base64 public:secret>"
----

[NOTE]
====
You can find more on authentication via Basic Auth https://langfuse.com/docs/opentelemetry/get-started[here].
====
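
The header value follows the standard HTTP Basic scheme: the base64 encoding of `<public-key>:<secret-key>`. Assuming standard shell tools, you can generate it like this:

[source,bash]
----
# Replace with your actual Langfuse keys; -n avoids encoding a trailing newline
echo -n "<public-key>:<secret-key>" | base64
----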

=== Step 3: Run a Test AI Operation

Start your Spring Boot application. Trigger an AI operation that Spring AI handles – for example, call a service or controller that uses a `ChatModel` to generate a completion, or an `EmbeddingModel` to generate embeddings.

[source,java]
----
@Autowired
private ChatService chatService;

@EventListener(ApplicationReadyEvent.class)
public void testAiCall() {
    String answer = chatService.chat("Hello, Spring AI!");
    System.out.println("AI answer: " + answer);
}
----
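
Note that `ChatService` above is your own application code, not a Spring AI class. A minimal sketch using Spring AI's auto-configured `ChatClient.Builder` could look like this:

[source,java]
----
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

@Service
public class ChatService {

    private final ChatClient chatClient;

    // ChatClient.Builder is auto-configured when a chat model is on the classpath
    public ChatService(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    public String chat(String message) {
        return this.chatClient.prompt()
                .user(message)
                .call()
                .content();
    }
}
----

Once the call completes, the resulting trace – including the prompt and completion content we enabled earlier – should appear in the Langfuse UI.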