description: Learn how to deploy Memgraph in production for your workload, with advice straight from the Memgraph team based on years of experience.
pages/deployment/workloads/memgraph-in-high-throughput-workloads.mdx
import { CommunityLinks } from '/components/social-card/CommunityLinks'

# Memgraph in high-throughput workloads

<Callout type="info">

Before diving into this guide, we recommend starting with the [Deployment best practices](/deployment/best-practices)
page. It provides **foundational, use-case-agnostic advice** for deploying Memgraph in production.
This guide builds on that foundation, offering **additional recommendations tailored to specific workloads**.

This guide is for you if you're working with **high-throughput graph workloads**, where performance
and scale are critical.
You’ll benefit from this content if:

- You’re handling **more than a thousand writes per second**, and your graph data is constantly changing at high velocity.
- You want your **read performance to remain consistent**, even as new data is continuously ingested.
- You’re dealing with **high volumes of concurrent reads and writes**, and need a database that can handle both without performance degradation.
- Your data is flowing in from **real-time streaming systems** like **Kafka**, and you need a database that can keep up.
If this sounds like your use case, this guide will walk you through how to configure and scale Memgraph for **reliable, high-throughput performance** in production.
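As a sketch of the streaming path mentioned above: Memgraph can consume Kafka topics natively with a `CREATE KAFKA STREAM` statement, applying a transform module to turn each message into Cypher. The stream name, topic, transform module, and broker address below are hypothetical placeholders:

```cypher
// Hypothetical names: "trips" stream, "trip_events" topic,
// transform module "trip_transform" with a "to_graph" function.
CREATE KAFKA STREAM trips
  TOPICS trip_events
  TRANSFORM trip_transform.to_graph
  BOOTSTRAP_SERVERS 'localhost:9092';

// Begin consuming messages from the topic.
START STREAM trips;
```

Once started, the transform function runs for every consumed message and emits the queries that ingest the data, so ingestion keeps pace with the topic rather than going through a separate ETL hop.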

If your payload contains **dynamic labels or edge types** and you still need **i…
- **Optionally use the [`merge`](/advanced-algorithms/available-algorithms/merge) procedure from MAGE**
<Callout type="warning">
While MAGE procedures are **written in C++ and highly optimized**, they still introduce **slightly more overhead**
compared to **pure Cypher**, as they are executed as external modules. We recommend favoring pure Cypher when