
Commit d3b88db

docs: send traces to OpenTelemetry backends
This PR adds a part to the observability docs page that shows how to send Spring AI traces to an OpenTelemetry backend such as Langfuse.
1 parent 2294c5a commit d3b88db

File tree

1 file changed: +135 −0 lines changed
  • spring-ai-docs/src/main/antora/modules/ROOT/pages/observability


spring-ai-docs/src/main/antora/modules/ROOT/pages/observability/index.adoc

Lines changed: 135 additions & 0 deletions
@@ -315,3 +315,138 @@ whereas data is exported as span attributes if you use an OpenZipkin tracing bac

|===

WARNING: If you enable the inclusion of the vector search response data in the observations, there's a risk of exposing sensitive or private information. Please be careful!

== Example: Sending Traces to an OpenTelemetry Backend

This guide shows how to send traces to https://langfuse.com/[Langfuse]'s OpenTelemetry endpoint.

[NOTE]
====
Visit the https://github.com/langfuse/langfuse-examples/tree/main/applications/spring-ai-demo[Langfuse Example Repo] for a fully instrumented example application.
====

=== Step 1: Enable OpenTelemetry in Spring AI

*Add OpenTelemetry and Spring Observability Dependencies* (Maven):

Make sure your project includes Spring Boot Actuator and the Micrometer tracing libraries for OpenTelemetry. Spring Boot Actuator is required to enable Micrometer's observation and tracing features.

You'll also need the Micrometer -> OpenTelemetry bridge and an OTLP exporter. For Maven, add the following to your `pom.xml` (Gradle users can add the equivalent coordinates to their build file):

[source,xml]
----
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.opentelemetry.instrumentation</groupId>
            <artifactId>opentelemetry-instrumentation-bom</artifactId>
            <version>2.13.2</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
        <version>1.0.0-M6</version>
    </dependency>
    <!-- Spring AI currently expects a running web server; spring-boot-starter-web provides one -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>io.opentelemetry.instrumentation</groupId>
        <artifactId>opentelemetry-spring-boot-starter</artifactId>
    </dependency>
    <!-- Spring Boot Actuator for observability support -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <!-- Micrometer Observation -> OpenTelemetry bridge -->
    <dependency>
        <groupId>io.micrometer</groupId>
        <artifactId>micrometer-tracing-bridge-otel</artifactId>
    </dependency>
    <!-- OpenTelemetry OTLP exporter for traces -->
    <dependency>
        <groupId>io.opentelemetry</groupId>
        <artifactId>opentelemetry-exporter-otlp</artifactId>
    </dependency>
</dependencies>
----

*Enable Span Export and Configure Spring AI Observations* (`application.yml`):

With the above dependencies, Spring Boot will auto-configure tracing using OpenTelemetry as long as we provide the proper settings. We need to specify where to send the spans (the OTLP endpoint) and ensure Spring AI is set up to include the desired data in those spans. Create or update your `application.yml` (or `application.properties`) with the following configuration:

[source,yaml]
----
spring:
  application:
    name: spring-ai-llm-app # Service name for tracing (appears in the Langfuse UI as the source service)
  ai:
    chat:
      observations:
        include-prompt: true # Include prompt content in tracing (disabled by default for privacy)
        include-completion: true # Include completion content in tracing (disabled by default)
management:
  tracing:
    sampling:
      probability: 1.0 # Sample 100% of requests for full tracing (adjust in production as needed)
  observations:
    annotations:
      enabled: true # Enable @Observed (if you use observation annotations in code)
----
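
The last property enables Micrometer's `@Observed` annotation support. For illustration, here is a minimal sketch of an annotated service (the `RecommendationService` class and its method are hypothetical, and annotation processing assumes AOP support such as `spring-boot-starter-aop` is on the classpath): each call to an `@Observed` method is wrapped in its own observation and exported as a span alongside the Spring AI ones.

[source,java]
----
import io.micrometer.observation.annotation.Observed;
import org.springframework.stereotype.Service;

@Service
public class RecommendationService {

    // With management.observations.annotations.enabled=true, every call to this
    // method is recorded as an observation (and therefore a span) named "recommend.books".
    @Observed(name = "recommend.books", contextualName = "recommend-books")
    public String recommend(String topic) {
        // ... your business logic, e.g. a ChatModel call ...
        return "Recommendations for " + topic;
    }
}
----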

With these configurations and dependencies in place, your Spring Boot application is ready to produce OpenTelemetry traces. Spring AI's internal calls (e.g. when you invoke a chat model or generate an embedding) will be recorded as spans.

Each span will carry attributes like `gen_ai.operation.name`, `gen_ai.system` (the provider, e.g. "openai"), model names, and token usage. Since we enabled them, the spans also include events for the prompt and response content.

=== Step 2: Configure Langfuse

Now that your Spring AI application is emitting OpenTelemetry trace data, the next step is to direct that data to Langfuse.

Langfuse will act as the "backend" for OpenTelemetry in this setup, essentially replacing a typical Jaeger, Zipkin, or OTel Collector deployment with Langfuse's trace ingestion API.

*Langfuse Setup*

- Sign up for https://cloud.langfuse.com/[Langfuse Cloud] or https://langfuse.com/self-hosting[self-hosted Langfuse].
- Set the OTLP endpoint (e.g. `https://cloud.langfuse.com/api/public/otel`) and API keys.

Configure these via environment variables:

[source,bash]
----
# Langfuse OTLP endpoint (use your region's Langfuse Cloud URL or your self-hosted URL)
export OTEL_EXPORTER_OTLP_ENDPOINT="https://cloud.langfuse.com/api/public/otel"
# Basic auth header: base64-encode "<public key>:<secret key>" from your Langfuse project
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <base64 public:secret>"
----

[NOTE]
====
You can find more on authentication via Basic Auth https://langfuse.com/docs/opentelemetry/get-started[here].
====

=== Step 3: Run a Test AI Operation

Start your Spring Boot application. Trigger an AI operation that Spring AI handles, for example by calling a service or controller that uses a `ChatModel` to generate a completion, or an `EmbeddingModel` to generate embeddings.

[source,java]
----
// ChatService is your own Spring bean that wraps a Spring AI ChatModel or ChatClient.
@Autowired
private ChatService chatService;

@EventListener(ApplicationReadyEvent.class)
public void testAiCall() {
    String answer = chatService.chat("Hello, Spring AI!");
    System.out.println("AI answered: " + answer);
}
----
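
For reference, here is a minimal sketch of what such a `ChatService` could look like, assuming the auto-configured `ChatClient.Builder` from the OpenAI starter (the class itself is illustrative, not part of Spring AI):

[source,java]
----
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

@Service
public class ChatService {

    private final ChatClient chatClient;

    // Spring AI auto-configures a ChatClient.Builder for the configured model provider.
    public ChatService(ChatClient.Builder chatClientBuilder) {
        this.chatClient = chatClientBuilder.build();
    }

    public String chat(String message) {
        // This call is observed by Spring AI and exported as a span
        // carrying the gen_ai.* attributes described above.
        return this.chatClient.prompt()
                .user(message)
                .call()
                .content();
    }
}
----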
