Summary
The README states: "All data stays on your machine. No telemetry, no tracking, no cloud lock-in."
Hermes actively promotes a cloud integration (Honcho) through its setup wizard, documentation, and CLI. A user who follows the project's instructions to enable Honcho sends their full conversation stream to Plastic Labs' servers at api.honcho.dev. The README makes no exception or caveat for this.
Honcho is opt-in. It requires creating an account at app.honcho.dev and pasting an API key. But the setup flow describes Honcho only as "persistent cross-session memory" and never discloses what data is transmitted or how it's processed. Both peers are registered with observe_me=True [1], which means Honcho's backend runs its own LLM on both sides of the conversation to build a persistent model of the user and the agent.
What the setup tells users
The wizard (hermes honcho setup) says:
"Honcho gives Hermes persistent cross-session memory." [2]
When no key is present:
"No API key configured. Get your API key at https://app.honcho.dev" [3]
After the key is accepted, the wizard prints "Honcho is ready" along with session, workspace, and peer names. It does not mention what will be sent to the Honcho API or what processing occurs on the remote side. [4]
The migration wizard (hermes honcho migrate) is the only path that uses the word "cloud":
"Honcho replaces that with a cloud-backed, LLM-observable memory layer" [5]
What actually flows to api.honcho.dev
Once enabled, Honcho transmits:
- Bidirectional inference. Both peers are registered with observe_me=True. Honcho's backend runs its own LLM on both sides of the conversation to build a "user model." Dialectic queries re-process the user's messages through this pipeline between turns, with no visible indicator. [1] [6] [7]
- Every user message and assistant response, verbatim. The full text of each turn is synced to the remote API, either inline or via a background thread. These are complete messages, not summaries or embeddings. [8]
- Peer identity. User name, workspace ID, and a session key derived from the working directory name. [9]
- Local memory files. The contents of MEMORY.md, USER.md, and SOUL.md are uploaded during migration. [10]
- Full conversation history. The entire local conversation log is uploaded as an XML transcript file during migration. [11]
- Model-generated conclusions about the user (e.g. "User prefers dark mode"), created by the agent and stored remotely. [12]
Background prefetch threads send context and dialectic requests between turns with no visible indicator to the user. [7] [13]
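Taken together, the list above describes a per-turn payload. The following is an illustrative model of that payload, not Honcho's actual wire format: the names observe_me, workspace, session key, and peer come from the cited sources [1] [8] [9]; everything else is an assumption for the sketch.

```python
from dataclasses import dataclass, field

# Illustrative model of the data described in [1], [8], and [9].
# Field names mirror the cited identifiers; the structure itself is
# a sketch, not the real Honcho API schema.

@dataclass
class PeerConfig:
    name: str
    observe_me: bool = True  # both peers are registered observable [1]

@dataclass
class TurnSync:
    workspace_id: str    # peer identity metadata [9]
    session_key: str     # derived from the working directory name [9]
    peers: list[PeerConfig]
    messages: list[str] = field(default_factory=list)  # verbatim turn text [8]

turn = TurnSync(
    workspace_id="example-workspace",          # hypothetical values
    session_key="my-project",                  # working-directory-derived
    peers=[PeerConfig("user"), PeerConfig("assistant")],
    messages=["full user message", "full assistant response"],
)

# Because every peer is observable, the remote LLM models both sides.
assert all(p.observe_me for p in turn.peers)
```

The point of the sketch is the default: with observe_me=True on both peers, there is no configuration in this flow under which only summaries, or only one side of the conversation, leave the machine.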
The gap
A user who reads "persistent cross-session memory" and provides an API key reasonably expects something like "the service remembers things about me between sessions." What they get is their entire conversation stream fed into a third-party inference pipeline that builds a persistent model of both them and the agent. The setup flow does not distinguish between these two things.
Suggestions
- Qualify the README claim. Either scope it ("All data stays on your machine unless you enable a cloud integration") or remove it. An unqualified privacy claim next to an actively promoted cloud integration is misleading regardless of opt-in mechanics.
- Disclose data scope during setup. Before the user completes configuration, the wizard should state what data will be sent: full message text, local memory files, and that ongoing inference will be performed on the remote side.
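A minimal sketch of what such a disclosure gate could look like. None of this exists in Hermes today; the function name, wording, and flow are hypothetical, and a real implementation would prompt interactively rather than take the answer as a parameter.

```python
# Hypothetical consent gate for the setup wizard -- illustrates the
# disclosure this suggestion asks for, not existing Hermes code.

DISCLOSURE = """\
Enabling Honcho will send the following to api.honcho.dev:
  - the full text of every user message and assistant response
  - the contents of MEMORY.md, USER.md, and SOUL.md (on migration)
  - your user name, workspace ID, and a session key derived from
    the working directory name
Honcho's backend runs its own LLM over this data between turns.
"""

def confirm_honcho_setup(answer: str) -> bool:
    """Show the data-scope disclosure; return True only on an explicit yes."""
    print(DISCLOSURE)
    return answer.strip().lower() in {"y", "yes"}
```

The key property is that the user sees the data scope before the API key is accepted, so "persistent cross-session memory" is no longer the only description they have acted on.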
[1] honcho_integration/session.py:166-167 (SessionPeerConfig observe_me=True)
[2] honcho_integration/cli.py:100
[3] honcho_integration/cli.py:125
[4] honcho_integration/cli.py:192-226
[5] honcho_integration/cli.py:563
[6] honcho_integration/session.py:464-502 (dialectic_query)
[7] honcho_integration/session.py:504-522 (prefetch_dialectic)
[8] honcho_integration/session.py:291-297 (message sync), session.py:361-363 (async enqueue)
[9] honcho_integration/client.py:429-437 (client init kwargs)
[10] honcho_integration/session.py:756-794 (migrate_memory_files)
[11] honcho_integration/session.py:654-665 (migrate_local_history)
[12] honcho_integration/session.py:870-903 (create_conclusion)
[13] honcho_integration/session.py:540-553 (prefetch_context)