Problem
The agent currently records engagement metrics (likes, retweets, replies, views) on every post in the Post type, but never reads them back to inform future decisions. The engagement.lastChecked field exists, but no code uses engagement data to update scoring weights, topic preferences, or creative direction.
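For concreteness, here is a minimal sketch of the shape described above. The real definitions live in src/types.ts and the field names there may differ; everything here beyond the fields named in this issue is a hypothetical placeholder.

```typescript
// Hypothetical sketch of the Post/engagement shape this issue refers to.
// Only likes, retweets, replies, views, and lastChecked come from the issue;
// the rest is illustrative.
interface Engagement {
  likes: number;
  retweets: number;
  replies: number;
  views: number;
  lastChecked?: string; // ISO timestamp; currently written but never read back
}

interface Post {
  id: string;
  text: string;
  postedAt: string;
  engagement?: Engagement;
}
```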
This means the agent produces content at a constant quality level — it never learns what resonates with its actual audience.
Proposed Solution
Implement a feedback loop that closes the gap between posting and future content decisions:
- Collect outcomes — After 24 hours, check how each post performed (likes, retweets, replies, views)
- Classify performance — Identify top-performing and worst-performing posts relative to the account's baseline
- Feed back into the pipeline — Pass high/low performers as examples into the scorer and ideator prompts so the LLM can learn patterns (topics, joke types, visual styles) that work for this audience
- Adjust topic preferences — Over time, shift scoring weights toward categories and themes that consistently perform well
- Track trends — Maintain a rolling summary of what's working and what isn't, feeding it into the worldview reflection cycle
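The classify step above could be sketched roughly as follows. The scoring weights, the 20% cutoff, and every name here are hypothetical choices for illustration, not part of the existing codebase:

```typescript
// Sketch of "classify performance": rank posts by a simple composite
// engagement score and split off the top and bottom slices relative to
// the account's own recent output. Weights are placeholder assumptions.
interface PostOutcome {
  id: string;
  likes: number;
  retweets: number;
  replies: number;
  views: number;
}

function engagementScore(p: PostOutcome): number {
  // Weight replies and retweets above likes; views act as a weak signal.
  return p.likes + 2 * p.retweets + 3 * p.replies + p.views / 100;
}

function classify(posts: PostOutcome[]): { top: PostOutcome[]; bottom: PostOutcome[] } {
  const scored = [...posts].sort((a, b) => engagementScore(b) - engagementScore(a));
  const k = Math.max(1, Math.floor(posts.length * 0.2)); // top/bottom 20%
  return { top: scored.slice(0, k), bottom: scored.slice(-k) };
}
```

Comparing against the account's own baseline (rather than absolute numbers) matters because a small account's best post may still have modest raw counts.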
Why This Matters
This is arguably the highest-leverage feature the agent is missing. Without it, the agent is a static pipeline — with it, the agent genuinely improves over time based on real audience signal. This is what separates an "autonomous agent" from a sophisticated cron job.
Relevant Code
- src/types.ts — Post.engagement already has the fields
- src/agent/loop.ts — Main loop where feedback data should be consumed
- src/pipeline/scorer.ts — Scoring prompts should receive performance context
- src/pipeline/ideator.ts — Ideation should learn from past winners/losers
- src/agent/worldview.ts — Reflection cycle could incorporate engagement trends
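One way the scorer could receive performance context is as a formatted block of recent winners and losers appended to its prompt. The real prompt construction lives in src/pipeline/scorer.ts; this sketch only illustrates the shape of that extra context, and all names in it are hypothetical:

```typescript
// Sketch: render top/bottom performers into a prompt fragment the scorer
// (or ideator) could consume so the LLM can infer what resonates.
interface PerformanceExample {
  text: string;
  score: number; // relative engagement score, e.g. vs. account baseline
}

function buildPerformanceContext(
  top: PerformanceExample[],
  bottom: PerformanceExample[],
): string {
  const fmt = (e: PerformanceExample) => `- (${e.score.toFixed(1)}x baseline) ${e.text}`;
  return [
    "Recent high performers:",
    ...top.map(fmt),
    "Recent low performers:",
    ...bottom.map(fmt),
  ].join("\n");
}
```

Keeping this as plain text in the prompt (rather than structured tool output) keeps the change small: the scorer and ideator prompts just gain one extra section.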
If this isn't currently under development, I'd be interested in taking a shot at implementing it.