Add Amazon Bedrock AgentCore to the LLM Observability page (#4141)
<!--
Thank you for contributing to the Elastic Docs! 🎉
Use this template to help us efficiently review your contribution.
-->
## Summary
This PR adds Amazon Bedrock AgentCore to the LLM Observability page and
renames the page title and section headings to cover agentic AI.
Fixes #4074
<!--
Describe what your PR changes or improves.
If your PR fixes an issue, link it here. If your PR does not fix an
issue, describe the reason you are making the change.
-->
## Generative AI disclosure
<!--
To help us ensure compliance with the Elastic open source and
documentation guidelines, please answer the following:
-->
1. Did you use a generative AI (GenAI) tool to assist in creating this
contribution?
- [ ] Yes
- [x] No
<!--
2. If you answered "Yes" to the previous question, please specify the
tool(s) and model(s) used (e.g., Google Gemini, OpenAI ChatGPT-4, etc.).
Tool(s) and model(s) used:
-->
---------
Co-authored-by: florent-leborgne <[email protected]>
`solutions/observability/applications/llm-observability.md` (+8 -6)
@@ -1,8 +1,8 @@
 ---
-navigation_title: LLM Observability
+navigation_title: LLM and agentic AI observability
 ---
 
-# LLM Observability
+# LLM and agentic AI observability
 
 While LLMs hold incredible transformative potential, they also bring complex challenges in reliability, performance, and cost management. Traditional monitoring tools require an evolved set of observability capabilities to ensure these models operate efficiently and effectively.
 To keep your LLM-powered applications reliable, efficient, cost-effective, and easy to troubleshoot, Elastic provides a powerful LLM observability framework including key metrics, logs, and traces, along with pre-configured, out-of-the-box dashboards that deliver deep insights into model prompts and responses, performance, usage, and costs.
@@ -11,19 +11,21 @@ Elastic’s end-to-end LLM observability is delivered through the following methods:
 - Metrics and logs ingestion for LLM APIs (via [Elastic integrations](integration-docs://reference/index.md))
 - APM tracing for LLM Models (via [instrumentation](opentelemetry://reference/index.md))
 
-## Metrics and logs ingestion for LLM APIs (via Elastic integrations)
+## LLM and agentic AI platform observability with Elastic integrations
 
 Elastic’s LLM integrations now support the most widely adopted models, including OpenAI, Azure OpenAI, and a diverse range of models hosted on Amazon Bedrock and Google Vertex AI. Depending on the LLM provider you choose, the following table shows which type of data -- log or metrics -- you can collect.
 
-|**LLM Provider**|**Metrics**|**Logs**|
+|**LLM or agentic AI platform**|**Metrics**|**Logs**|
 |[Azure AI Foundry](integration-docs://reference/azure_ai_foundry.md)| ✅| ✅ |
 …
 
-## APM tracing for LLM models (via instrumentation)
+
+## LLM and agentic AI application observability with APM (distributed tracing)
 
 Elastic offers specialized OpenTelemetry Protocol (OTLP) tracing for applications leveraging LLM models hosted on Amazon Bedrock, OpenAI, Azure OpenAI, and GCP Vertex AI, providing a detailed view of request flows. This tracing capability captures critical insights, including the specific models used, request duration, errors encountered, token consumption per request, and the interaction between prompts and responses. Ideal for troubleshooting, APM tracing allows you to find exactly where the issue is happening with precision and efficiency in your LLM-powered application.