Commit c3c372e

Add Amazon Bedrock AgentCore to the LLM Observability page (#4141)
## Summary

This PR adds Amazon Bedrock AgentCore to the LLM Observability page and renames various titles.

Fixes #4074

## Generative AI disclosure

1. Did you use a generative AI (GenAI) tool to assist in creating this contribution?
   - [ ] Yes
   - [x] No

---------

Co-authored-by: florent-leborgne <[email protected]>
1 parent 42a35db commit c3c372e

File tree

1 file changed: +8 −6 lines

solutions/observability/applications/llm-observability.md

Lines changed: 8 additions & 6 deletions
```diff
@@ -1,8 +1,8 @@
 ---
-navigation_title: LLM Observability
+navigation_title: LLM and agentic AI observability
 ---
 
-# LLM Observability
+# LLM and agentic AI observability
 
 While LLMs hold incredible transformative potential, they also bring complex challenges in reliability, performance, and cost management. Traditional monitoring tools require an evolved set of observability capabilities to ensure these models operate efficiently and effectively.
 To keep your LLM-powered applications reliable, efficient, cost-effective, and easy to troubleshoot, Elastic provides a powerful LLM observability framework including key metrics, logs, and traces, along with pre-configured, out-of-the-box dashboards that deliver deep insights into model prompts and responses, performance, usage, and costs.
@@ -11,19 +11,21 @@ Elastic’s end-to-end LLM observability is delivered through the following meth
 - Metrics and logs ingestion for LLM APIs (via [Elastic integrations](integration-docs://reference/index.md))
 - APM tracing for LLM Models (via [instrumentation](opentelemetry://reference/index.md))
 
-## Metrics and logs ingestion for LLM APIs (via Elastic integrations)
+## LLM and agentic AI platform observability with Elastic integrations
 
 Elastic’s LLM integrations now support the most widely adopted models, including OpenAI, Azure OpenAI, and a diverse range of models hosted on Amazon Bedrock and Google Vertex AI. Depending on the LLM provider you choose, the following table shows which type of data -- log or metrics -- you can collect.
 
-| **LLM Provider** | **Metrics** | **Logs** |
+| **LLM or agentic AI platform** | **Metrics** | **Logs** |
 |--------|------------|------------|
 | [Amazon Bedrock](integration-docs://reference/aws_bedrock.md) | | |
+| [Amazon Bedrock AgentCore](integration-docs://reference/aws_bedrock_agentcore.md) | | |
+| [Azure AI Foundry](integration-docs://reference/azure_ai_foundry.md) | | |
 | [Azure OpenAI](integration-docs://reference/azure_openai.md) | | |
 | [GCP Vertex AI](integration-docs://reference/gcp_vertexai.md) | | |
 | [OpenAI](integration-docs://reference/openai.md) | | 🚧 |
-| [Azure AI Foundry](integration-docs://reference/azure_ai_foundry.md) | | |
 
-## APM tracing for LLM models (via instrumentation)
+
+## LLM and agentic AI application observability with APM (distributed tracing)
 
 Elastic offers specialized OpenTelemetry Protocol (OTLP) tracing for applications leveraging LLM models hosted on Amazon Bedrock, OpenAI, Azure OpenAI, and GCP Vertex AI, providing a detailed view of request flows. This tracing capability captures critical insights, including the specific models used, request duration, errors encountered, token consumption per request, and the interaction between prompts and responses. Ideal for troubleshooting, APM tracing allows you to find exactly where the issue is happening with precision and efficiency in your LLM-powered application.
 
```
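
To make the APM tracing section above more concrete, here is a minimal sketch (not part of this commit, and not the documented Elastic setup) of OTLP tracing around an LLM call in Python. It assumes the `opentelemetry-sdk` and `opentelemetry-exporter-otlp` packages; the endpoint, the `call_llm` helper, and the GenAI attribute names are illustrative assumptions rather than a prescribed configuration.

```python
# Sketch only: hand-rolled OTLP spans around an LLM call, assuming the
# opentelemetry-sdk and opentelemetry-exporter-otlp packages are installed.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Point the OTLP exporter at an APM/OTLP intake (placeholder endpoint).
provider = TracerProvider(resource=Resource.create({"service.name": "llm-demo-app"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="https://my-apm-server:8200"))
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("llm-demo")

def ask_model(prompt: str) -> str:
    # Wrap the LLM call in a span and record the kinds of signals the page lists:
    # model used, token consumption, and errors. Attribute names follow the
    # OpenTelemetry GenAI semantic conventions (still incubating, so treat them
    # as an assumption, not a guarantee of what a given integration emits).
    with tracer.start_as_current_span("chat gpt-4o") as span:
        span.set_attribute("gen_ai.request.model", "gpt-4o")
        try:
            completion = call_llm(prompt)  # hypothetical helper wrapping the provider SDK
            span.set_attribute("gen_ai.usage.input_tokens", completion["input_tokens"])
            span.set_attribute("gen_ai.usage.output_tokens", completion["output_tokens"])
            return completion["text"]
        except Exception as exc:
            span.record_exception(exc)
            span.set_status(trace.StatusCode.ERROR)
            raise
```

In practice, the instrumentation referenced in the page typically populates these attributes automatically; the sketch only illustrates what the exported spans carry (model, duration, token usage, errors).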
