Commit 6181a95

fixed image references
1 parent d9e644d commit 6181a95

File tree: 1 file changed (+8 −14 lines)

content/blog/2025-07-11-Zenoh-Pico-Peer-to-peer-unicast.md

Lines changed: 8 additions & 14 deletions
@@ -2,7 +2,7 @@
 title: "Zenoh-Pico peer to peer unicast mode"
 date: 2025-07-10
 menu: "blog"
-weight: 20250630
+weight: 20250710
 description: "July 10th, 2025 -- Paris."
 draft: false
 ---
@@ -30,7 +30,7 @@ Architecture-wise, we use non-blocking sockets and I/O multiplexing to handle al
 Here is an example showing how to implement a 1:N (or N:1) communication graph:
 
 {{< figure-inline
-src="../../img/20250630-Zenoh-Pico-peer-to-peer-unicast/1-n.png"
+src="../../img/20250711-Zenoh-Pico-peer-to-peer-unicast/1-n.png"
 class="figure-inline"
 alt="1:N diagram" >}}
 
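The figure referenced above shows the topology only; as a rough companion sketch (not part of this commit), the "1" side of such a graph might be opened with Zenoh-Pico's C config API roughly as follows: peer mode plus a unicast listen endpoint, with `tcp/0.0.0.0:7447` being an illustrative placeholder address.

```c
#include <stdio.h>
#include <zenoh-pico.h>

int main(void) {
    // Peer mode with a unicast listen endpoint: the N remote peers
    // connect directly to this one, no router in between.
    z_owned_config_t config;
    z_config_default(&config);
    zp_config_insert(z_loan_mut(config), Z_CONFIG_MODE_KEY, "peer");
    zp_config_insert(z_loan_mut(config), Z_CONFIG_LISTEN_KEY, "tcp/0.0.0.0:7447");  // placeholder endpoint

    z_owned_session_t s;
    if (z_open(&s, z_move(config), NULL) < 0) {
        printf("Unable to open session!\n");
        return -1;
    }

    // Background read/lease tasks service every connected peer over the
    // non-blocking, multiplexed sockets described in the post.
    zp_start_read_task(z_loan_mut(s), NULL);
    zp_start_lease_task(z_loan_mut(s), NULL);

    // ... declare publishers/subscribers on the session here ...

    z_drop(z_move(s));
    return 0;
}
```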
@@ -46,7 +46,7 @@ If we assume a single publisher connected to 3 subscribers, here’s how we coul
 To implement an N:N graph:
 
 {{< figure-inline
-src="../../img/20250630-Zenoh-Pico-peer-to-peer-unicast/n-n.png"
+src="../../img/20250711-Zenoh-Pico-peer-to-peer-unicast/n-n.png"
 class="figure-inline"
 alt="N:N diagram" >}}
 
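Similarly (again a hypothetical sketch, not from the commit), an N:N mesh would give each peer both a listen endpoint and connect endpoints for its neighbours. Only the configuration portion differs from the 1:N case; every address here is a placeholder, and each peer in the mesh would be configured with the endpoints of its own neighbours.

```c
// Each peer in the mesh both accepts incoming links and dials out.
zp_config_insert(z_loan_mut(config), Z_CONFIG_MODE_KEY, "peer");
zp_config_insert(z_loan_mut(config), Z_CONFIG_LISTEN_KEY, "tcp/0.0.0.0:7447");       // this peer's endpoint (placeholder)
zp_config_insert(z_loan_mut(config), Z_CONFIG_CONNECT_KEY, "tcp/192.168.1.2:7447");  // a neighbour peer (placeholder)
```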
@@ -74,14 +74,14 @@ Note that the Zenoh-Pico configuration used for testing deviates from the defaul
 ## Results
 
 {{< figure-inline
-src="../../img/20250630-Zenoh-Pico-peer-to-peer-unicast/perf_lat.png"
+src="../../img/20250711-Zenoh-Pico-peer-to-peer-unicast/perf_lat.png"
 class="figure-inline"
 alt="P2p latency" >}}
 
 The round-trip time for packets below 16 KiB is under 20 µs—meaning a one-way latency of under 10 µs. Peer-to-peer unicast delivers up to **70% lower latency** compared to client mode.
 
 {{< figure-inline
-src="../../img/20250630-Zenoh-Pico-peer-to-peer-unicast/perf_thr.png"
+src="../../img/20250711-Zenoh-Pico-peer-to-peer-unicast/perf_thr.png"
 class="figure-inline"
 alt="P2p throughput" >}}
 
@@ -101,27 +101,21 @@ This feature is disabled by default and can be enabled by setting `Z_FEATURE_MUL
 Previously, we discussed reducing dynamic memory allocations without providing measurements. We've now addressed this by measuring allocations using [heaptrack](https://github.com/KDE/heaptrack). Below are the results from the client throughput test in 1.0:
 
 {{< figure-inline
-src="../../img/20250630-Zenoh-Pico-peer-to-peer-unicast/malloc_1_0.png"
+src="../../img/20250711-Zenoh-Pico-peer-to-peer-unicast/malloc_1_0.png"
 class="figure-inline"
 alt="1.0 heaptrack" >}}
 
 And here are the results for the current version:
 
 {{< figure-inline
-src="../../img/20250630-Zenoh-Pico-peer-to-peer-unicast/malloc_current.png"
+src="../../img/20250711-Zenoh-Pico-peer-to-peer-unicast/malloc_current.png"
 class="figure-inline"
 alt="current heaptrack" >}}
 
 ## Memory Breakdown:
 
-Version 1.0:
-* Handled 5.8 million messages in 20 seconds (~290k messages/sec)
-* Peak memory usage: 1.15 MB
-* 64 million allocations, 11 allocations per message
-* 600 kB: Two eagerly allocated 300 kB defragmentation buffers
-* 100 kB: TX and RX buffers (50 kB each)
+The latest version of Zenoh-Pico includes some major performance and memory utilisation improvements, here are the latest numbers:
 
-Current version:
 * Handled 84.4 million messages in 20 seconds (~4.2M messages/sec) — 15x throughput increase
 * Peak memory usage: 101 kB — 91% less memory
 * 118 allocations total, no per-message allocations thanks to ownership transfers and the RX cache — 99.9998% fewer allocations
