# Footnote Development Rules
# These guide both human contributors and AI tools (Cursor, Traycer)
# toward traceable, ethics-aligned, maintainable code.
## Code Quality
- Use the structured logger (`utils/logger.ts`) for all logs.
- Default to `async/await` for clarity, but use `.then()` or `Promise.all()` when parallelism or lazy chaining improves performance.
- Wrap risky operations in `try/catch` with informative error messages.
- Follow existing naming conventions for consistency.
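The rules above can be sketched together in one place. The `logger` object below is only a stand-in for the project's structured logger in `utils/logger.ts`, whose real API may differ, and `fetchUserName` is a hypothetical I/O call:

```typescript
// Stand-in for the structured logger (utils/logger.ts); the real API may differ.
const logger = {
  info: (data: object, msg: string) => console.log(JSON.stringify({ msg, ...data })),
  error: (data: object, msg: string) => console.error(JSON.stringify({ msg, ...data })),
};

// Hypothetical I/O call used for illustration.
async function fetchUserName(id: number): Promise<string> {
  return `user-${id}`;
}

async function loadNames(ids: number[]): Promise<string[]> {
  try {
    // Promise.all runs the lookups in parallel instead of awaiting them one by one.
    const names = await Promise.all(ids.map((id) => fetchUserName(id)));
    logger.info({ count: names.length }, 'loaded user names');
    return names;
  } catch (err) {
    // Informative error message: name the operation and the inputs involved.
    logger.error({ ids, err: String(err) }, 'failed to load user names');
    throw err;
  }
}

loadNames([1, 2]).then((names) => console.log(names.join(','))); // prints "user-1,user-2"
```

Note the mix: `async/await` carries the main flow for clarity, while `Promise.all` is used exactly where parallelism pays off.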
## TypeScript Standards
- Use explicit types everywhere; avoid `any`.
- Use `interface` for public or extendable structures; use `type` for unions, aliases, or generics.
- Enable and honor strict null checks.
- Use generics thoughtfully when they clarify intent.
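A minimal sketch of the `interface` vs `type` split and a generic that earns its keep (all names here are illustrative):

```typescript
// `interface` for a public, extendable structure:
interface UploadChunk {
  sessionId: string;
  index: number;
  bytes: number;
}

// `type` for unions and aliases:
type RiskLevel = 'low' | 'medium' | 'high';

// A generic used only where it clarifies intent: the caller keeps its element type,
// and the `T | null` return honors strict null checks instead of hiding behind `any`.
function firstOrNull<T>(items: T[]): T | null {
  return items.length > 0 ? items[0] : null;
}

const chunk: UploadChunk = { sessionId: 'abc', index: 0, bytes: 1024 };
const risk: RiskLevel = 'medium';
console.log(firstOrNull([chunk])?.sessionId, risk);
```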
## Commenting Standards
- **Add meaningful comments when a new contributor would otherwise have to guess** (AI assistants tend to under-comment important decisions), but preserve readability.
- **Bias toward slightly more documentation than default AI output** when the tradeoff is between a clear explanation and a terse but guess-heavy file.
- Comments serve as documentation for future maintainers and community contributors.
- Prioritize comments that explain "why" and "what", not just "how".
- Write comments so a junior contributor can follow the code without guessing.
- Prefer short, plain language over dense jargon or architecture shorthand.
- Optimize for comment quality, not comment count. A smaller number of clear, useful comments is better than broad low-signal coverage.
- Prefer plain-language explanation over compressed technical shorthand when the longer wording makes intent easier to understand.
- Use more words when they genuinely reduce ambiguity for a junior contributor; do not force brevity if it harms clarity.
- A good comment should help a reader answer three questions quickly:
  - What is happening here?
  - Why does this code exist?
  - What could go wrong if someone changes or removes it?
### **Code Comments (// and /* */)**
- Add comments for complex business logic, non-obvious algorithms, or important decisions.
- Explain the reasoning behind implementation choices, especially when there are alternatives.
- Document workarounds, edge cases, and potential gotchas.
- Include context about external dependencies or API behaviors.
- Use parenthetical explanations for technical terms: `technicalTerm (plainEnglish)`.
- Keep comments concrete. Name the trigger, the behavior, and the consequence.
**Good Code Comments:**
```typescript
/**
 * @description: Tracks one user's upload session so follow-up chunks are stitched together.
 * @footnote-scope: core
 * @footnote-module: UploadSessionTracker
 * @footnote-risk: medium - Session mix-ups can attach chunks to the wrong upload and corrupt files.
 * @footnote-ethics: low - This module coordinates uploads but does not make user-facing decisions.
 */

// Ignore the bot's own messages so we do not trigger another planning pass from our reply.
if (message.author.id === message.client.user!.id) {
  this.resetCounter(channelKey);
  return;
}

// Keep the oldest message when trimming so the summary still has the start of the conversation.
if (recentMessages.length > MAX_CONTEXT_MESSAGES) {
  recentMessages = recentMessages.slice(0, 1).concat(recentMessages.slice(-19));
}

// Skip a new LLM call during message floods (rapid-fire messages) when the catch-up filter says
// the user is still in the middle of sending context.
const filterDecision = await this.catchupFilter.shouldSkipPlanner(message, recentMessages, channelKey);

// Fail open here: if the config is missing, keep the feature working instead of blocking users.
if (!this.config.enabled) {
  return;
}
```
**Avoid Basic Explanations:**
```typescript
// Increment the counter by 1
counter++;
// Set the value to true
enabled = true;
```
### **JSDoc Documentation**
- Apply JSDoc strategically based on context and value, not universally.
- Use JSDoc for interfaces/types that benefit from hover documentation and AI assistant context.
- Focus on complex types, public APIs, and interfaces where understanding the "why" matters.
- Skip JSDoc for simple, self-explanatory types where the property names are clear.
- JSDoc coverage should be meaningfully above default AI habits, especially on exported surfaces and substantive modules, but do not optimize for a numeric quota.
- When deciding whether to add JSDoc, bias toward documenting exported surfaces and non-obvious orchestration code rather than leaving future readers to infer intent from implementation details.
- If a numeric coverage target would produce noisy or repetitive documentation, prefer fewer higher-quality JSDoc blocks.
**When to use JSDoc:**
- Public interfaces that other modules will use
- Complex types with business logic or non-obvious relationships
- Types where hover documentation helps developers understand requirements
- Interfaces that benefit AI assistants for better context understanding
- File headers and exported functions where a new contributor would otherwise need to read the whole file first
- Exported helpers that participate in core flow, policy, provenance, runtime boundaries, or other architectural seams
**When to skip JSDoc:**
- Simple, self-explanatory interfaces where property names are clear
- Internal types that are obvious from context
- Types that are just data containers without business meaning
- Tiny helpers where the name already says enough and a comment would repeat the code
- Small local functions whose behavior is obvious and whose surrounding code already explains the intent
**Good JSDoc Example:**
```typescript
/**
 * Core cost breakdown for any model type
 * @interface CostBreakdown
 * @property {number} inputTokens - Tokens consumed from the input prompt (user message, context, etc.)
 * @property {number} outputTokens - Tokens generated in the AI response
 * @property {number} inputCost - Cost for processing input tokens (typically cheaper)
 * @property {number} outputCost - Cost for generating output tokens (typically more expensive)
 * @property {number} totalCost - Combined cost of input + output processing
 */
```
**Skip JSDoc for simple types:**
```typescript
export interface SimpleConfig {
  enabled: boolean;
  timeout: number;
  retries: number;
}
```
## Project Framework Principles
- Reuse existing shared utilities (`logger.ts`, `env.ts`, `pricing.ts`, `openaiService.ts`) before adding new modules.
- Keep the backend as the authoritative place that computes and records LLM spend; never introduce Discord-local cost logic.
- Any caller in `packages/discord-bot` or `packages/backend` that makes an LLM call must invoke `ChannelContextManager.recordLLMUsage()` so the backend-owned accounting path can capture usage and shared pricing helpers can derive canonical cost data when needed.
- Discord/UI must NOT independently compute or persist cost. They may only call `ChannelContextManager.recordLLMUsage()` to report usage and display backend-returned or shared-helper cost facts.
- All buffers are RAM-only (no persistence to disk or database).
- Follow fail-open design: if uncertain, don’t block execution.
- Log every decision at debug level with structured data.
- Keep all public interfaces serializable for future web UI integration (e.g., Cognitive Budget panel).
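The usage-accounting rule above can be sketched as follows. The shapes and the `recordLLMUsage` signature are assumptions for illustration; the real `ChannelContextManager` lives in the backend packages and may differ:

```typescript
// Hypothetical shapes: the real contract may differ.
interface LLMUsage {
  model: string;
  inputTokens: number;
  outputTokens: number;
}

class ChannelContextManager {
  // RAM-only buffer: usage records are never persisted to disk or a database.
  private usage: LLMUsage[] = [];

  // Callers report raw usage only; cost is derived by backend-owned pricing helpers.
  recordLLMUsage(u: LLMUsage): void {
    this.usage.push(u);
  }

  totalTokens(): number {
    return this.usage.reduce((sum, u) => sum + u.inputTokens + u.outputTokens, 0);
  }
}

const manager = new ChannelContextManager();
// A Discord-side caller after an LLM call: report usage, never compute cost locally.
manager.recordLLMUsage({ model: 'gpt-4o-mini', inputTokens: 120, outputTokens: 40 });
console.log(manager.totalTokens()); // prints 160
```

The point of the design is that Discord/UI code only ever calls `recordLLMUsage()`; canonical cost facts come back from the backend or shared pricing helpers.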
## Runtime Boundary Guidance
- Treat `packages/backend` as the only public runtime/control-plane boundary for web and Discord unless a decision doc explicitly says otherwise.
- Keep framework-specific runtime code (for example VoltAgent integration) behind an internal package or boundary rather than spreading it across backend handlers.
- Keep Footnote-owned provenance, trace, incident, auth, and review semantics outside framework-specific adapters.
- Do not leak framework-native types into public API contracts when Footnote-owned contracts already exist.
- Prefer boundary-oriented names and abstractions such as runtime seam, runtime adapter, or reflect runtime over provider-specific naming when the code is intended to be replaceable.
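A minimal sketch of the runtime-seam idea; all names here are illustrative, not the real package layout:

```typescript
// Footnote-owned contracts exposed at the public boundary (hypothetical names):
interface RuntimeTask {
  prompt: string;
}
interface RuntimeResult {
  text: string;
}
interface RuntimeAdapter {
  run(task: RuntimeTask): Promise<RuntimeResult>;
}

// Framework-specific detail stays behind the adapter; swapping providers
// (e.g., a VoltAgent-backed implementation) never changes the public contract,
// and framework-native types never leak into it.
class EchoRuntimeAdapter implements RuntimeAdapter {
  async run(task: RuntimeTask): Promise<RuntimeResult> {
    return { text: `echo:${task.prompt}` };
  }
}

const runtime: RuntimeAdapter = new EchoRuntimeAdapter();
runtime.run({ prompt: 'hi' }).then((r) => console.log(r.text)); // prints "echo:hi"
```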
## Current `@footnote-*` Module Tagging
- Canonical reference: `docs/architecture/footnote-annotations.md`.
- Every module must include the current structured `@footnote-*` annotations in its JSDoc header.
- Use this format (and order) for consistency and machine-parseability:
```typescript
/**
 * @description: <1-3 lines summarizing what this module does.>
 * @footnote-scope: <core|utility|interface|web|test>
 * @footnote-module: <ModuleName>
 * @footnote-risk: <low|medium|high> - <What could break or be compromised if mishandled.>
 * @footnote-ethics: <low|medium|high> - <What human or governance impacts errors could cause.>
 */
```
- **@footnote-risk**: Technical blast radius if the module fails, is misconfigured, or is misused.
- **@footnote-ethics**: User-facing or governance harm if the module behaves incorrectly.
- **@footnote-scope**: Logical role in the system (helps auto-group modules).
- Run `pnpm validate-footnote-tags` before committing; CI enforces this to guarantee standardized risk/ethics levels.
- Keep annotations under 10 lines for readability.
- Separate technical risk from ethical sensitivity for clarity.
- Enable future automated audit tools (e.g., `pnpm audit-risk`).
## Current `@footnote-*` Scoped Logger Tagging
- Scoped logger annotations are a documentation convention. They are not part of the enforced five-tag module header schema.
- Use this format when documenting a scoped logger:
```typescript
/**
 * @footnote-logger: <loggerName>
 * @logs: <What this scoped logger tracks and logs.>
 * @footnote-risk: <low|medium|high> - <What could go wrong if this logger is noisy, missing, or leaks data.>
 * @footnote-ethics: <low|medium|high> - <What privacy, transparency, or governance harm poor logging could cause.>
 */
const <loggerName>Logger = logger.child({ module: '<loggerName>' });
```
- **@footnote-logger**: Logger module identifier (matches the child logger name).
- **@logs**: What specific operations, events, or data this logger logs.
- **@footnote-risk**: Technical blast radius if logging fails, misleads, or leaks data.
- **@footnote-ethics**: Privacy, transparency, or governance harm from poor logging behavior.
- Follow the same tag style as module headers for consistency.
## Code Changes
- Prefer small, well-scoped diffs.
- Preserve provenance comments, cost tracking, and licensing headers.
- Never remove risk annotations or audit metadata without explicit reason.
- Maintain backward compatibility unless explicitly breaking for a versioned release.
- After any file edit, run `pnpm lint:fix` by default (robot/local workflow).
- Use `pnpm lint` as the non-mutating CI/final verification gate.
- `pnpm format:check` / `pnpm format:write` operate on changed files by default; set `FORMAT_BASE_REF` in CI to evaluate a base-ref range.
- If a file is outside formatter/parser coverage (for example `.env.example`), preserve style manually and note the limitation in the change summary.
- Prefer linting only the touched files when the repo tooling supports it cleanly.
- If the repo only exposes a broader lint command, run that broader command and call out the wider scope in the summary.
## Refactoring.Guru Discipline
- Use `docs/ai/refactoring_guru_playbook.md` as the canonical refactoring reference.
- For any refactor suggestion or plan, use `Smell -> Technique -> Steps`.
- Do not mix feature work or behavior changes into a refactor change.
- Refactor in small steps and keep tests green; run the relevant tests after each meaningful step.
- Treat patterns as optional. Default to no pattern unless it is justified against a simpler refactor or language feature.
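A worked `Smell -> Technique -> Steps` example (Smell: long method mixing concerns; Technique: Extract Method; Steps: name each concern, move it out, keep behavior identical). All function names are hypothetical:

```typescript
// Before: one function validates and formats in the same body.
function formatUploadLabelBefore(name: string, bytes: number): string {
  const safe = name.trim() === '' ? 'untitled' : name.trim();
  const kb = Math.round(bytes / 1024);
  return `${safe} (${kb} KB)`;
}

// After: each concern is a named step; behavior is unchanged, so tests stay green.
function normalizeName(name: string): string {
  return name.trim() === '' ? 'untitled' : name.trim();
}
function toKilobytes(bytes: number): number {
  return Math.round(bytes / 1024);
}
function formatUploadLabel(name: string, bytes: number): string {
  return `${normalizeName(name)} (${toKilobytes(bytes)} KB)`;
}

console.log(formatUploadLabel('  report.pdf ', 2048)); // prints "report.pdf (2 KB)"
```

No pattern was needed here; a plain extraction was the simpler refactor, which is the default this rule asks for.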
## Interaction Guardrails
- When the user asks a question (not an explicit request to edit), do not modify files.
- Ask for confirmation before making any changes unless the user clearly requests edits.
- If uncertain whether a prompt is a question or an edit request, ask a brief clarifying question.
- Prefer a junior-friendly teaching tone by default.
- Explain changes in plain language first, then technical detail.
## CodeRabbit CLI
- CodeRabbit is installed in the terminal and can be used for code review support.
- Run `cr -h` to see available commands.
- Prefer CodeRabbit with `--prompt-only`.
- To review uncommitted changes, run: `coderabbit --prompt-only -t uncommitted`.
- Run CodeRabbit no more than 3 times per set of changes.
## Review Analysis with Cursor
- **Use Cursor for structural review analysis** before human code review to catch complexity, inconsistencies, and missing documentation.
- This augments human review by focusing on mechanical thoroughness while preserving human judgment for logic, ethics, and integration decisions.
- **Prerequisites**: Always run `pnpm review` first to ensure project-specific validation (including OpenAPI code-link checks) passes before Cursor analysis.
### Complexity Triage
- Prompt Cursor to identify functions that do too many things, deep conditionals, or unclear data flow.
- Flag areas that need human scrutiny versus easy cleanup opportunities.
- Use Cursor's Bugbot (Review PR) or Explain Changes features to get structural analysis.
- Use inline chat (`Ctrl+K`) to ask specific questions about complex code sections.
### Comment Scaffolding
- Prompt Cursor to "add comments where a new contributor would hesitate."
- Focus on teachability and knowledge transfer for open-source collaboration.
- Ensure comments explain "why" and "what", not just "how" (see Commenting Standards above).
### Future-Thinking Analysis
- Use Cursor's inline chat to explore forward-compatibility questions:
- "Would this API boundary survive a modular ethics-core refactor?"
- "Can this function be generalized for multi-lens reasoning?"
- "How would this scale with additional model providers?"
- Seed lightweight architectural review before merge decisions.
### Consistency Enforcement
- Ensure doc comments and logging consistently follow the project's provenance schema (risk tiers, license context, etc.).
- Verify `@footnote-*` module tagging compliance across new code.
- Check that structured logging patterns are maintained.
### Correctness/Safety-Critical Review Output
- Default review/summarization output should use three sections:
  - `What Changed` (concise behavior/boundary summary)
  - `Risk Check` (main failure mode and residual risk)
  - `Validation` (exact checks run and outcome)
- For changes affecting schemas, validators, CI gates, auth, provenance/audit logging, policy enforcement, or other correctness/safety-critical behavior, include:
  - in `Risk Check`: 2-5 invariants the change relies on or enforces, plus at least one realistic failure mode
  - in `Validation`: the tests, lint rules, validation steps, or CI checks that would catch that failure
- If no such check exists, state that explicitly as a review gap in `Validation`.
- Do not describe a correctness/safety-critical change as ready to merge unless the invariants, failure mode, and catching checks can be explained concretely from code and validation setup.
### Recommended Workflow
1. **Complete implementation**
2. **Run `pnpm lint:fix` by default** (robot/local cleanup)
3. **Run `pnpm lint`** for non-mutating verification
4. **Run automated validation**: `pnpm review` (validates `@footnote-*` tags, OpenAPI code links, types, linting)
5. **Run packaging validation when the change can affect deployable services**: `docker compose -f deploy/compose.yml build`
6. **PR-readiness gate for large/cross-cutting changes**: run both `pnpm review` and `docker compose -f deploy/compose.yml build` before marking review-ready
7. **Use Cursor's Bugbot (Review PR)** for automated code quality analysis
8. **Use inline chat (`Ctrl+K`)** with project-specific prompts (see `.cursor/footnote-prompts.md`)
9. **Accept suggested simplifications or comments** in-place
10. **Open human PR review** for logic, ethics, and integration focus
### Integration with Existing Tools
- **Review pipeline**: `pnpm review`
- **Packaging validation**: `docker compose -f deploy/compose.yml build`
- **`@footnote-*` validation**: `pnpm validate-footnote-tags` (enforced by CI)
- **OpenAPI linking validation**: `pnpm validate-openapi-links`
- **Cursor tasks**: Use `.cursor/tasks.json` commands for quick access
- **Checklist**: Follow `.cursor/review-checklist.md` before Cursor analysis
- **Prompts**: Use `.cursor/footnote-prompts.md` for consistent, effective Cursor interactions
## Testing
- Add or update tests for any new functionality.
- Follow existing test utilities and patterns.
- Tests must be deterministic; mock external services where possible.