fix: handle large IPC messages in daemon socket communication#218

Open
amitzur wants to merge 1 commit into philschmid:main from amitzur:fix/daemon-large-message-ipc
Conversation


amitzur commented Mar 24, 2026

Problem

The daemon IPC layer silently drops data when responses exceed ~8KB (the kernel socket buffer size). This causes Daemon request timeout errors for MCP servers that return many tools (e.g. next-devtools, brainshop, or any server with >~10 tools with schemas).

Root cause

Bun's socket.write() on Unix sockets only writes up to the kernel buffer size (typically 8192 bytes) and returns the number of bytes actually written. The original code ignored this return value — so for a 42KB listTools response, only the first 8KB was sent, and the client waited forever for the rest.

On the client side, the data callback assumed the entire response would arrive in a single event. For responses that did partially arrive, the incomplete JSON failed to parse and was discarded.
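The failure mode can be reproduced without a real socket. The sketch below is illustrative (the `FakeSocket` and `naiveSend` names are hypothetical, not the project's code): a write target that accepts at most 8192 bytes per call, and a sender that ignores `write()`'s return value, mirroring the original bug.

```typescript
// Hypothetical sketch of the bug; names are illustrative, not the real code.
// The fake socket accepts at most CAP bytes per write() call, mimicking a
// full kernel socket buffer; write() returns the bytes actually accepted.
const CAP = 8192;

class FakeSocket {
  received = "";
  write(data: string): number {
    const accepted = data.slice(0, CAP);
    this.received += accepted;
    return accepted.length; // may be less than data.length
  }
}

function naiveSend(sock: FakeSocket, payload: string): void {
  sock.write(payload); // return value ignored -- the bug
}

const sock = new FakeSocket();
const payload = "x".repeat(42 * 1024); // ~42KB, like a large listTools response
naiveSend(sock, payload);
console.log(sock.received.length); // 8192 -- the remaining 34816 bytes are silently dropped
```

The client then sits on a truncated JSON document that never parses, which is exactly the "waited forever" behavior described above.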

How it manifests

```
$ mcp-cli
brainshop
  • <error: Daemon request timeout>
next-devtools
  • <error: Daemon request timeout>
```

Servers with small tool lists (e.g. filesystem with 14 tools, deepwiki with 3) worked fine because their responses fit within the 8KB buffer. The ping request (60 bytes) always succeeded, so the daemon appeared connected but non-functional.

Fix

Extracted the IPC socket primitives into a shared src/ipc.ts module used by both daemon.ts and daemon-client.ts:

Server side (createIpcServer):

  • writeAll() tracks partial writes — when socket.write() returns less than the payload length, the unsent remainder is stored in a per-socket buffer
  • A drain callback resumes writing when the kernel buffer frees up, handling payloads that require multiple drain cycles
  • Incoming requests are buffered per-socket until complete JSON is received
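The server-side write path can be sketched as follows. This is a simplified model, not the actual src/ipc.ts code: `SimSocket` simulates a kernel buffer that fills and is later flushed (firing a drain callback), and `writeAll` stores the unsent remainder per socket and resumes on drain, exactly the multi-cycle behavior described above.

```typescript
// Hedged sketch of drain-based write buffering; SimSocket and the exact
// shape of writeAll are illustrative, not the real src/ipc.ts API.
const KERNEL_CAP = 8192;

class SimSocket {
  private buffered = 0;          // bytes sitting in the simulated kernel buffer
  received = "";
  ondrain: (() => void) | null = null;

  write(data: string): number {
    const room = KERNEL_CAP - this.buffered;
    const accepted = data.slice(0, room);
    this.buffered += accepted.length;
    this.received += accepted;
    return accepted.length;      // partial write when the buffer is nearly full
  }

  flush(): void {                // the "kernel" empties the buffer and signals drain
    this.buffered = 0;
    this.ondrain?.();
  }
}

// Per-socket leftover from partial writes, resumed when the socket drains.
const pending = new Map<SimSocket, string>();

function writeAll(sock: SimSocket, payload: string): void {
  const queued = pending.get(sock);
  if (queued !== undefined) {    // already waiting on drain: just append
    pending.set(sock, queued + payload);
    return;
  }
  const written = sock.write(payload);
  if (written < payload.length) {
    pending.set(sock, payload.slice(written));
    sock.ondrain = () => {
      const rest = pending.get(sock);
      if (rest === undefined) return;
      pending.delete(sock);
      writeAll(sock, rest);      // may itself be partial -> another drain cycle
    };
  }
}

const s = new SimSocket();
writeAll(s, "x".repeat(20000));  // needs two drain cycles to complete
s.flush();
s.flush();
console.log(s.received.length);  // 20000 -- nothing dropped
```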

Client side (sendIpcRequest):

  • Accumulates data chunks in a buffer until the newline delimiter (\n) is found
  • Falls back to parsing the buffer on close in case the newline was lost
  • Properly deduplicates resolve/reject calls with a settled guard
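The client-side read path amounts to accumulate-until-newline with a single-settlement guard. A minimal sketch (the `makeResponseReader` helper is hypothetical; the real logic lives inside `sendIpcRequest`):

```typescript
// Hedged sketch of the client read path; makeResponseReader is an
// illustrative helper, not the actual src/ipc.ts API.
function makeResponseReader(
  resolve: (v: unknown) => void,
  reject: (e: Error) => void,
) {
  let buffer = "";
  let settled = false;           // dedupe guard: resolve/reject fire exactly once

  const settle = (fn: () => void) => {
    if (settled) return;
    settled = true;
    fn();
  };

  return {
    // Called per data chunk; a large response spans many chunks.
    onData(chunk: string): void {
      buffer += chunk;
      const nl = buffer.indexOf("\n");
      if (nl === -1) return;     // message not complete yet, keep accumulating
      const line = buffer.slice(0, nl);
      settle(() => {
        try { resolve(JSON.parse(line)); }
        catch (e) { reject(e as Error); }
      });
    },
    // Fallback: try to parse whatever arrived when the socket closes.
    onClose(): void {
      settle(() => {
        try { resolve(JSON.parse(buffer)); }
        catch { reject(new Error("connection closed before full response")); }
      });
    },
  };
}

// A response split across three chunks still parses once the newline arrives:
let result: unknown;
const reader = makeResponseReader(v => { result = v; }, () => {});
const msg = JSON.stringify({ tools: Array.from({ length: 50 }, (_, i) => `t${i}`) }) + "\n";
reader.onData(msg.slice(0, 100));
reader.onData(msg.slice(100, 300));
reader.onData(msg.slice(300));
```

The `settled` guard matters because both the newline path and the `close` fallback can fire for the same request; without it the promise could be resolved and then rejected.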

Test plan

  • tests/daemon-socket.test.ts — 3 integration tests using a mock MCP server (tests/fixtures/mock-mcp-server.ts) that registers 50 tools via the official MCP SDK, producing a listTools response well over 8KB:
    • Baseline: all 50 tools listed via direct connection (no daemon) — verifies the mock server works
    • Daemon large message: all 50 tools listed via daemon — exercises the writeAll + drain + client buffering code path
    • Daemon with descriptions: same with -d flag for an even larger payload
  • Verified tests fail when the drain logic is removed (reverts to "Daemon request timeout")
  • bun run build compiles cleanly
  • All existing unit tests pass
  • Manual end-to-end test with next-devtools and brainshop servers

🤖 Generated with Claude Code

Bun's socket.write() only writes up to the kernel buffer size (~8KB).
The daemon was silently dropping the remainder of large responses
(e.g. listTools with many tools), causing the client to time out.

Extract IPC primitives into src/ipc.ts with drain-based write buffering
on the server side and newline-delimited read buffering on the client side.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
