feat(pageview): add page_views table and migration Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
feat(pageview): add PAGEVIEW queue message type
fix(pageview): add missing migration meta files
feat(pageview): implement batch PAGEVIEW queue consumer
feat(pageview): add SSR pageview tracking and extract queue handler - Add recordPageViewFn to record page views fire-and-forget during SSR - Extract queue handler logic from server.ts into queue.handler.ts - Add PAGEVIEW_SALT environment variable
feat(pageview): replace Umami API with D1 aggregate queries - Add pageview data layer (getStats, getTrafficTrend, getTopPosts) - Rewrite dashboard service to use self-hosted stats - Simplify overview metrics to PV + UV - Update cache key from dashboard/umami → dashboard/traffic
feat(pageview): adapt dashboard UI to self-hosted PV/UV metrics - Reduce overview metrics from 5 to 2 (PV + UV) - Remove Umami URL external link and "stats not configured" notice - topPages now uses slug/views fields
chore(pageview): remove Umami server-side code and auth env vars - Delete umami.client.ts (294 lines) - Remove UMAMI_API_KEY/USERNAME/PASSWORD environment variables - Add PAGEVIEW_SALT environment variable - Update deploy workflow and regenerate worker types
feat(pageview): update MCP analytics tool for self-hosted stats - Simplify overview metrics to PV + UV - topPages now uses slug/views fields - Remove umamiUrl output
Add a nullable pinned_at timestamp column to the posts table to support post pinning. null means not pinned; a non-null value means pinned, and newer values sort higher.
- PostSelectSchema: add pinnedAt: coercedDateNullable override - Add TogglePinPostInputSchema and TogglePinPostInput type - POSTS_CACHE_KEYS: add pinned cache key factory function - GetPostsCursorInputSchema: add optional excludePinned field
- Add findPinnedPosts, which queries all published posts with a non-null pinnedAt, ordered by pinnedAt descending, with tags expanded - getPostsCursor options: add excludePinned field; when set, append a pinnedAt IS NULL condition
Add title: PostsTable.title to the select; the return type is updated to Array<{ slug: string; title: string; views: number }>
feat: add getPinnedPosts, togglePin, and getPopularPosts services
feat: add pinned/popular posts server functions and query options
feat: extend HomePageProps contract with pinned and popular posts
feat: homepage loader fetches pinned and popular posts in parallel
feat: add pin toggle to admin post editor
fix: add pinnedAt to select fields and restore z import
refactor: move popular cache key to pageview.schema
fix: internationalize pin toggle labels and toasts
refactor: remove togglePinPostFn, pin toggle uses auto-save via updatePost
feat: batch view counts API with KV cache + fix pinnedAt auto-save - Add getViewCountsBySlugs in data layer (batch COUNT by slugs) - Add getViewCounts service with 5min KV cache - Add getViewCountsFn server function (max 50 slugs) - Add useViewCounts hook (TanStack Query, 5min staleTime) - Fix auto-save not triggering on pinnedAt change
refactor: popular posts return full PostItem, remove PopularPostItem - Rename getTopPosts → getTopPages (slug+views for dashboard) - Add findPostsBySlugs in posts data layer - Popular posts service: get slugs → fetch full PostItem → preserve order - Contract: popularPosts now Array<PostItem>, remove PopularPostItem - Fuwari: PopularPostCard accepts PostItem + optional views prop - Fuwari: HomePage uses useViewCounts hook for client-side view counts
feat(theme/fuwari): add page view counts and loading states to home page cards - Update `allSlugs` on the home page to fetch page views in batch - Pass `views` and `isLoadingViews` to `PinnedPostCard`, `PopularPostCard`, and `PostCard` - Display a `<Skeleton />` placeholder while views are loading
style(theme/fuwari): refine popular post card layout to unify with Fuwari aesthetic - Add primary tag display to popular post card - Add summary text to popular post card - Adjust footer border and metadata spacing to align with Fuwari design language
fix: pin isDirty in use-post-actions + configurable popularPostsLimit + merge PinnedPostCard into PostCard - Fix pinnedAt not triggering isDirty in use-post-actions (publish button stayed disabled) - Add popularPostsLimit to ThemeConfig, fuwari defaults to 3 - Merge PinnedPostCard into PostCard with `pinned` prop - getPopularPostsFn now accepts optional limit parameter
refactor: rename featuredPosts → recentPosts + add clock icon to read time
fix: align read time and view count icon/text styles in PostCard
style(theme/fuwari): optimize popular posts style with inline badge and remove redundant section header - Remove standalone "Popular Posts" section header banner on the home page - Add rank-based "TOP N" flame badge to `PopularPostCard` - Resolve the overlap with the top background image by integrating "hot" visual elements directly into cards
fix: include pinnedAt in calculatePostHash so pin changes trigger workflow
style(theme/fuwari): compact homepage section gaps and prune unused i18n keys - Reduce vertical gap between pinned, popular, and recent sections (mobile `gap-6`, desktop `gap-8`) - Run `bun i18n:prune-unused --write` to remove obsolete translation keys (including the deprecated `home_popular_posts`, etc.)
fix: use POSTS_KEYS for popularPostsQuery + filter future posts in pinned/slugs queries
feat(theme/default): integrate pinned posts and i18n view counts into minimalist theme - Add `post_views_count` to `en.json` and `zh.json` to support plural view-count formatting in both English and Chinese - Update the Default homepage to merge pinned posts with regular posts, preserving the native list layout - Introduce a `Pin` icon in `PostItem` for pinned posts and subtly append view counts to the metadata row (alongside date and tags), aligning with the theme's minimalist aesthetic
refactor: merge pinned/popular/recent into single deduplicated feed - Merge all post sources with dedup (pinned → popular → recent) - PostCard gains `popular` prop for flame badge - Delete PopularPostCard component - Unified card style: all cards show left bar + chevron - Prune unused i18n key: home_recent_updates
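The deduplicated merge described in this commit (pinned → popular → recent, first source wins) can be sketched as below. `mergeFeed` and the simplified `PostLike` shape are illustrative names for this sketch, not the repo's actual API.

```typescript
interface PostLike {
  slug: string;
}

// Merge the three post sources in priority order, dropping any post whose
// slug has already been emitted by an earlier (higher-priority) source.
export function mergeFeed<T extends PostLike>(
  pinned: T[],
  popular: T[],
  recent: T[],
): T[] {
  const seen = new Set<string>();
  const out: T[] = [];
  for (const post of [...pinned, ...popular, ...recent]) {
    if (seen.has(post.slug)) continue; // duplicate: first source wins
    seen.add(post.slug);
    out.push(post);
  }
  return out;
}
```

A pinned post that is also popular therefore renders once, in the pinned position.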
📝 Walkthrough
This change migrates the site's statistics from the Umami backend to built-in page view tracking (PAGEVIEW), and adds post pinning and popular-posts features. It spans database migrations, queue messages, the API and service layers, theme and editor UI, and environment variable changes (PAGEVIEW_SALT introduced, Umami credentials removed).
Sequence Diagrams

sequenceDiagram
    participant Browser as Browser
    participant Server as API Server
    participant Queue as Message Queue
    participant Handler as Queue Handler
    participant DB as Database
    Browser->>Server: Request post page
    Server->>Server: Read IP/UA and PAGEVIEW_SALT, compute visitorHash (sha256)
    Server->>Queue: enqueue PAGEVIEW message ({postId, visitorHash})
    Server-->>Browser: Return page
    Queue->>Handler: Deliver message batch
    Handler->>DB: Bulk insert page_views ({postId, visitorHash, createdAt})
    DB-->>Handler: Insert complete
    Handler->>Queue: Ack corresponding messages
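The visitorHash step in the diagram above can be sketched as follows. This is a minimal illustration using `node:crypto` rather than the repo's actual `src/features/pageview/utils/hash.ts`; the exact input layout (`ip|ua|salt`) is an assumption.

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: derive a non-reversible visitor identifier from the
// request IP, User-Agent, and the PAGEVIEW_SALT secret. The salt prevents
// rainbow-table reversal of low-entropy IP+UA combinations.
export function computeVisitorHash(
  ip: string,
  userAgent: string,
  salt: string,
): string {
  return createHash("sha256")
    .update(`${ip}|${userAgent}|${salt}`)
    .digest("hex");
}
```

The same (ip, ua, salt) triple always maps to the same hash, which is what lets the UV aggregation count distinct visitors without storing raw IPs.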
sequenceDiagram
    participant Theme as Theme UI
    participant API as Frontend API
    participant Service as Pageview Service
    participant Cache as Cache (CacheService)
    participant DB as Database
    Theme->>API: Request viewCounts(slugs)
    API->>Service: getViewCounts(context, slugs)
    Service->>Cache: CacheService.get(PAGEVIEW_CACHE_KEYS.viewCounts(slugs))
    alt cache hit
        Cache-->>Service: Return cached data
    else cache miss
        Service->>DB: Query view counts grouped by slug
        DB-->>Service: Return {slug: views}
        Service->>Cache: Write cache (TTL 5m)
    end
    Service-->>API: Return {slug: count}
    API-->>Theme: Render views data
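The cache-aside flow in this diagram can be sketched as below. A plain in-memory `Map` stands in for the real `CacheService`, and the key builder and function shapes are illustrative assumptions, not the repo's actual API.

```typescript
type Fetcher = (slugs: string[]) => Promise<Record<string, number>>;

// Illustrative stand-in for CacheService: an in-memory store with per-entry
// expiry. The real code uses KV with a 5-minute TTL per the diagram.
const cache = new Map<string, { value: Record<string, number>; expiresAt: number }>();
const TTL_MS = 5 * 60 * 1000; // 5 minutes

function viewCountsKey(slugs: string[]): string {
  // Dedupe + sort so the same slug set always maps to the same key.
  return ["pageview", "counts", ...Array.from(new Set(slugs)).sort()].join("/");
}

export async function getViewCounts(
  slugs: string[],
  queryDb: Fetcher, // cache-miss path: grouped COUNT query in the real code
  now: () => number = Date.now,
): Promise<Record<string, number>> {
  const key = viewCountsKey(slugs);
  const hit = cache.get(key);
  if (hit && hit.expiresAt > now()) return hit.value; // cache hit
  const value = await queryDb(slugs); // cache miss: go to the database
  cache.set(key, { value, expiresAt: now() + TTL_MS });
  return value;
}
```

Because the key is order-insensitive, `["a","b"]` and `["b","a"]` share one cache entry, which keeps homepage requests from fanning out into redundant DB queries.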
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
docs: update env var docs, remove deprecated Umami server-side vars - Add PAGEVIEW_SALT to README and deployment guide - Remove UMAMI_API_KEY, UMAMI_USERNAME, UMAMI_PASSWORD (no longer used) - Clarify UMAMI_SRC/VITE_UMAMI_WEBSITE_ID are client-side tracking only - Update FAQ: analytics section reflects built-in pageview stats
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/features/posts/services/posts.service.ts (2)
413-439: ⚠️ Potential issue | 🟠 Major — The published revision's snapshot cannot restore the pinned state.
snapshotHash already includes pinnedAt, but snapshotJson still drops the field, so a revision's hash and its restorable payload describe different states; later restore / diff operations silently lose the pin information. 🧩 Suggested addition to the snapshot:
  snapshotJson: {
    title: post.title,
    summary: post.summary,
    slug: post.slug,
    status: post.status,
    publishedAt: post.publishedAt ? post.publishedAt.toISOString() : null,
+   pinnedAt: post.pinnedAt ? post.pinnedAt.toISOString() : null,
    readTimeInMinutes: post.readTimeInMinutes,
    contentJson: post.contentJson,
    tagIds: [...new Set(post.tags.map((tag) => tag.id))].sort(
      (a, b) => a - b,
    ),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In src/features/posts/services/posts.service.ts around lines 413-439: the snapshot JSON is missing the pinnedAt field even though calculatePostHash includes pinnedAt in snapshotHash; update the snapshotJson payload in the publish flow to include pinnedAt (use the same value/format as publishedAt, e.g., post.pinnedAt ? post.pinnedAt.toISOString() : null) so the stored snapshotJson matches the data used to compute snapshotHash and preserves/restores the pinned state correctly.
64-79: ⚠️ Potential issue | 🟠 Major — excludePinned is missing from the cache key, so cache entries collide.
The fetcher already distinguishes result sets by excludePinned, but POSTS_CACHE_KEYS.list(...) still buckets only by version / limit / cursor / tagName. Under the same query parameters, the homepage's exclude-pinned results and the regular list pollute each other, producing duplicated pinned posts or missing entries in the regular list. Encode data.excludePinned ?? false into the key factory.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In src/features/posts/services/posts.service.ts around lines 64-79: the cache key for the posts list doesn't include the excludePinned flag, so queries that set data.excludePinned produce results that collide; update the key generation used with POSTS_CACHE_KEYS.list to incorporate data.excludePinned (use data.excludePinned ?? false) so cached entries match the concrete parameters used by fetcher/PostRepo.getPostsCursor and CacheService.getVersion; ensure all call sites that build the posts:list cache key include this flag to prevent pinned vs non-pinned result pollution.
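The fix this comment asks for can be sketched as a key factory that folds the flag in. `postsListKey` and its parameter names are hypothetical stand-ins for the repo's `POSTS_CACHE_KEYS.list`, shown only to illustrate the collision fix.

```typescript
// Hypothetical key factory: encode excludePinned into the list cache key so
// pinned-filtered and regular lists never share a cache bucket. Field names
// mirror the review comment (version / limit / cursor / tagName) and are
// assumptions about the real signature.
export function postsListKey(params: {
  version: number;
  limit: number;
  cursor?: string;
  tagName?: string;
  excludePinned?: boolean;
}): string {
  return [
    "posts",
    "list",
    `v${params.version}`,
    `limit:${params.limit}`,
    `cursor:${params.cursor ?? ""}`,
    `tag:${params.tagName ?? ""}`,
    // Normalize undefined to false so old callers keep their existing bucket.
    `excludePinned:${params.excludePinned ?? false}`,
  ].join("/");
}
```

Normalizing `undefined` to `false` means callers that never pass the flag stay on the same key as before, so no cold cache on deploy.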
🧹 Nitpick comments (5)
migrations/0010_romantic_roland_deschain.sql (1)

1-1: Consider adding an index on pinned_at to reduce homepage query cost. If "pinned first" list queries follow, this column becomes a high-frequency sort/filter field; create the index in the same migration.
💡 Suggested addition
 ALTER TABLE `posts` ADD `pinned_at` integer;
+CREATE INDEX `idx_posts_pinned_at` ON `posts` (`pinned_at`);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In migrations/0010_romantic_roland_deschain.sql at line 1: add an index for the posts.pinned_at column — create it (e.g., CREATE INDEX idx_posts_pinned_at ON posts(pinned_at);) in the up migration and drop it in the down/rollback (e.g., DROP INDEX idx_posts_pinned_at). If the hot path sorts by pinned_at plus another column (e.g., created_at), consider a composite index like (pinned_at, created_at) instead. Ensure the index name (idx_posts_pinned_at) is unique and consistent with other migrations.
src/lib/db/schema/posts.table.ts (1)
33-36: Consider adding an index on pinnedAt to optimize query performance. According to the AI summary, the findPinnedPosts query filters pinned posts by the pinnedAt field. If pinned-post queries are frequent (e.g., on homepage load), add an index.
♻️ Optional: add a pinnedAt index
 (table) => [
   index("published_at_idx").on(table.publishedAt, table.status),
   index("created_at_idx").on(table.createdAt),
+  index("pinned_at_idx").on(table.pinnedAt),
 ],
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In src/lib/db/schema/posts.table.ts around lines 33-36: the schema currently indexes only publishedAt and createdAt, but findPinnedPosts filters on pinnedAt; add index("pinned_at_idx").on(table.pinnedAt) alongside the existing index definitions so findPinnedPosts can use the index.
src/features/posts/components/post-editor/hooks/use-auto-save.ts (1)
148-151: Use JSON-formatted error logs. Per the coding guidelines, logs should use JSON format so they can be searched and filtered in Cloudflare Workers Observability.
♻️ Suggested change
  } catch (err) {
-   console.error("Auto-save failed:", err);
+   console.error(JSON.stringify({ message: "Auto-save failed", error: String(err) }));
    setSaveStatus("ERROR");
    setError("AUTO_SAVE_FAILED");
As per coding guidelines: **/*.{ts,tsx,js,jsx}: Use JSON format for logs to enable search/filtering in Cloudflare Workers Observability
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In src/features/posts/components/post-editor/hooks/use-auto-save.ts around lines 148-151: replace the plain console.error in the catch block with a JSON-formatted log — capture the error's message and stack (err.message, err.stack), add contextual fields such as the hook name ("useAutoSave") and operation ("auto-save"), and emit via console.error(JSON.stringify(...)) so logs are searchable in Cloudflare Workers Observability; leave the setSaveStatus("ERROR") and setError("AUTO_SAVE_FAILED") calls intact.
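The JSON log shape this comment suggests can be factored into a small helper along these lines. The field names (`level`, `scope`, `message`, `error`, `stack`) are illustrative, not an established repo convention.

```typescript
// Minimal sketch of a JSON-formatted log line for Workers Observability.
// Serializing to a single JSON string makes fields filterable in the
// dashboard; Error objects are flattened since they don't JSON.stringify.
export function formatErrorLog(
  scope: string,
  message: string,
  err: unknown,
): string {
  const error = err instanceof Error ? err.message : String(err);
  const stack = err instanceof Error ? err.stack : undefined;
  return JSON.stringify({ level: "error", scope, message, error, stack });
}

// Usage in a catch block:
//   console.error(formatErrorLog("useAutoSave", "Auto-save failed", err));
```

Flattening the `Error` matters because `JSON.stringify(new Error("x"))` yields `{}` — message and stack are non-enumerable.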
src/features/posts/data/posts.data.ts (1)

289-319: The new helpers widen the data layer's interface inconsistency. findPinnedPosts and findPostsBySlugs both keep using positional parameters, and the repo layer reshapes results (postTags -> tags). Since these are new entry points, unify them on (db, params) and move the reshaping back into the service / mapper layer, so the data layer's responsibilities stop widening.
As per coding guidelines: src/features/*/data/*.ts: Data access functions in the data/ layer should have the signature (db: DB, params) → Promise<T> with no business logic, performing only raw Drizzle queries
Also applies to: 321-352
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In src/features/posts/data/posts.data.ts around lines 289-319: findPinnedPosts (and the nearby findPostsBySlugs) violate the data-layer contract by using positional params and doing result reshaping; change both functions to the signature (db: DB, params?) => Promise<...> (accepting a params object even if empty) and remove the business logic/reshaping (e.g., the postTags -> tags mapping) so the data layer returns raw query results (including postTags), leaving mapping to service/mapper layers; update references/exports accordingly for findPinnedPosts and findPostsBySlugs.
src/features/pageview/pageview.schema.ts (1)

29-30: Optional: deduplicate the slugs in the viewCounts key before sorting. The current implementation stabilizes order but does not deduplicate, so repeated slugs inflate the cache key space.
Suggested change (optional)
 export const PAGEVIEW_CACHE_KEYS = {
   traffic: ["dashboard", "traffic"] as const,
   popular: ["homepage", "popular"] as const,
   viewCounts: (slugs: string[]) =>
-    ["pageview", "counts", ...[...slugs].sort()] as const,
+    ["pageview", "counts", ...Array.from(new Set(slugs)).sort()] as const,
 } as const;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In src/features/pageview/pageview.schema.ts around lines 29-30: the viewCounts key builder doesn't deduplicate slugs before sorting, so duplicate slugs create unnecessary distinct cache keys; update viewCounts to deduplicate the input first (e.g., Array.from(new Set(slugs))) and then sort the deduped array before spreading into the key (keeping the returned value as const) so keys are stable and duplicate slugs don't expand the cache key space.
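The suggested change above, in isolation, behaves as follows — a sketch mirroring the diff, not the repo's actual export:

```typescript
// Dedupe, then sort: duplicate slugs collapse to one entry, so every
// multiset of slugs with the same underlying set maps to the same key.
export const viewCountsKey = (slugs: string[]) =>
  ["pageview", "counts", ...Array.from(new Set(slugs)).sort()] as const;
```

Without the `Set`, a caller passing `["a", "b", "b"]` would create a cache entry distinct from `["a", "b"]` despite identical query results.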
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 1aa4897a-7d43-4b13-b9ff-523f44b6951e
📒 Files selected for processing (57)
.dev.vars.example, .github/workflows/deploy.yml, messages/en.json, messages/zh.json, migrations/0009_light_brood.sql, migrations/0010_romantic_roland_deschain.sql, migrations/meta/0009_snapshot.json, migrations/meta/0010_snapshot.json, migrations/meta/_journal.json, scripts/create-theme.ts, src/features/dashboard/api/dashboard.api.ts, src/features/dashboard/dashboard.schema.ts, src/features/dashboard/data/umami.client.ts, src/features/dashboard/service/dashboard.service.ts, src/features/mcp/features/analytics/schema/mcp-analytics.schema.ts, src/features/mcp/features/analytics/service/mcp-analytics.service.ts, src/features/mcp/features/analytics/tools/analytics-overview.tool.ts, src/features/pageview/api/pageview.api.ts, src/features/pageview/api/pageview.consumer.ts, src/features/pageview/data/pageview.data.ts, src/features/pageview/pageview.schema.ts, src/features/pageview/queries/index.ts, src/features/pageview/service/pageview.service.ts, src/features/pageview/utils/hash.ts, src/features/posts/api/posts.public.api.ts, src/features/posts/components/post-editor/hooks/use-auto-save.ts, src/features/posts/components/post-editor/hooks/use-post-actions.tsx, src/features/posts/components/post-editor/index.tsx, src/features/posts/components/post-editor/post-editor-metadata.tsx, src/features/posts/components/post-editor/types.ts, src/features/posts/data/posts.data.ts, src/features/posts/queries/index.ts, src/features/posts/schema/posts.schema.ts, src/features/posts/services/posts.service.ts, src/features/posts/utils/sync.ts, src/features/posts/workflows/post-process.ts, src/features/theme/contract/config.ts, src/features/theme/contract/pages/home.ts, src/features/theme/themes/default/components/post-item.tsx, src/features/theme/themes/default/config.ts, src/features/theme/themes/default/pages/home/page.tsx, src/features/theme/themes/default/pages/post/skeleton.tsx, src/features/theme/themes/fuwari/components/post-card.tsx, src/features/theme/themes/fuwari/config.ts, src/features/theme/themes/fuwari/pages/home/page.tsx, src/lib/db/schema/index.ts, src/lib/db/schema/page-views.table.ts, src/lib/db/schema/posts.table.ts, src/lib/env/server.env.ts, src/lib/queue/queue.handler.ts, src/lib/queue/queue.schema.ts, src/routes/_public/index.tsx, src/routes/_public/post/$slug.tsx, src/routes/admin/index.tsx, src/routes/admin/posts/edit.$id.tsx, src/server.ts, worker-configuration.d.ts
💤 Files with no reviewable changes (1)
- src/features/dashboard/data/umami.client.ts
  echo "UMAMI_USERNAME=${{ secrets.UMAMI_USERNAME }}" >> $GITHUB_ENV
  echo "UMAMI_PASSWORD=${{ secrets.UMAMI_PASSWORD }}" >> $GITHUB_ENV
  echo "VITE_UMAMI_WEBSITE_ID=${{ vars.VITE_UMAMI_WEBSITE_ID }}" >> $GITHUB_ENV
  echo "PAGEVIEW_SALT=${{ secrets.PAGEVIEW_SALT }}" >> $GITHUB_ENV
Add a required-value check for PAGEVIEW_SALT to avoid deploying with an empty salt.
The change adds injection and distribution of the secret, but there is no non-empty check before secrets.json is generated. Fail fast in CI so production never runs with an empty salt.
💡 Suggested change
 - name: Generate Secrets JSON
   run: |
+    test -n "$PAGEVIEW_SALT" || { echo "PAGEVIEW_SALT is required"; exit 1; }
     cat <<EOF > secrets.json
     {
       "BETTER_AUTH_SECRET": "$BETTER_AUTH_SECRET",
Also applies to: 143-143
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In .github/workflows/deploy.yml at line 115: add a non-empty check before injecting PAGEVIEW_SALT into the CI environment and before generating secrets.json. Before every step that writes PAGEVIEW_SALT to $GITHUB_ENV (the steps containing echo "PAGEVIEW_SALT=..." >> $GITHUB_ENV), verify the variable is set and non-empty; if it is empty, print an error and exit non-zero (fail fast). Apply the same check before the step that generates secrets.json. Update both locations containing echo "PAGEVIEW_SALT=${{ secrets.PAGEVIEW_SALT }}" >> $GITHUB_ENV as well as the secrets.json generation task, so that any empty value fails the workflow instead of continuing the deploy.
 function getTimeRange(range: DashboardRange) {
   const now = new Date();
-  const endAt = now.getTime();
-  let startAt: number;
-  let prevStartAt: number;
+  const endAt = now;
+  let startAt: Date;
+  let prevStartAt: Date;

   if (range === "24h") {
-    const d = new Date(now);
-    d.setHours(d.getHours() - 24, 0, 0, 0);
-    startAt = d.getTime();
-    const prev = new Date(startAt);
-    prev.setHours(prev.getHours() - 24);
-    prevStartAt = prev.getTime();
-  } else if (range === "7d") {
-    const d = new Date(now);
-    d.setDate(d.getDate() - 7);
-    d.setHours(0, 0, 0, 0);
-    startAt = d.getTime();
-    const prev = new Date(startAt);
-    prev.setDate(prev.getDate() - 7);
-    prevStartAt = prev.getTime();
-  } else if (range === "30d") {
-    const d = new Date(now);
-    d.setDate(d.getDate() - 30);
-    d.setHours(0, 0, 0, 0);
-    startAt = d.getTime();
-    const prev = new Date(startAt);
-    prev.setDate(prev.getDate() - 30);
-    prevStartAt = prev.getTime();
+    startAt = new Date(now);
+    startAt.setHours(startAt.getHours() - 24, 0, 0, 0);
+    prevStartAt = new Date(startAt);
+    prevStartAt.setHours(prevStartAt.getHours() - 24);
   } else {
-    // 90d
-    const d = new Date(now);
-    d.setDate(d.getDate() - 90);
-    d.setHours(0, 0, 0, 0);
-    startAt = d.getTime();
-    const prev = new Date(startAt);
-    prev.setDate(prev.getDate() - 90);
-    prevStartAt = prev.getTime();
+    const days = range === "7d" ? 7 : range === "30d" ? 30 : 90;
+    startAt = new Date(now);
+    startAt.setDate(startAt.getDate() - days);
+    startAt.setHours(0, 0, 0, 0);
+    prevStartAt = new Date(startAt);
+    prevStartAt.setDate(prevStartAt.getDate() - days);
   }

   return { startAt, endAt, prevStartAt };
The current window and the comparison window are not equal in length.
startAt is truncated to the hour/day boundary while endAt remains the current moment, yet the previous period is only counted up to startAt. The current window is therefore inherently longer than the previous one, and the overview.prev comparison will be systematically biased high. Either align endAt to the same boundary, or derive prevStartAt/prevEndAt backward from endAt - startAt.
Also applies to: 45-50
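An equal-length-window variant might look like this (a sketch only; the helper name and the prevEndAt return value are assumptions, not the repo's current code):

```typescript
type DashboardRange = "24h" | "7d" | "30d" | "90d";

function getAlignedTimeRange(range: DashboardRange, now: Date = new Date()) {
  const startAt = new Date(now);
  if (range === "24h") {
    startAt.setHours(startAt.getHours() - 24, 0, 0, 0);
  } else {
    const days = range === "7d" ? 7 : range === "30d" ? 30 : 90;
    startAt.setDate(startAt.getDate() - days);
    startAt.setHours(0, 0, 0, 0);
  }
  // Derive the previous window from the *actual* current-window duration,
  // so both windows are guaranteed to have the same length even though
  // startAt was snapped to a boundary while endAt was not.
  const duration = now.getTime() - startAt.getTime();
  const prevEndAt = new Date(startAt);
  const prevStartAt = new Date(startAt.getTime() - duration);
  return { startAt, endAt: now, prevStartAt, prevEndAt };
}
```

Callers would then need to pass prevEndAt (instead of implicitly stopping at startAt) when querying the previous period.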
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/dashboard/service/dashboard.service.ts` around lines 17 - 38,
getTimeRange currently aligns startAt to hour/day boundaries but leaves endAt as
"now" and sets prevStartAt relative to startAt, producing unequal-length current
and previous windows; fix by computing the current window duration after
aligning both boundaries and derive the previous window from that duration
(e.g., after aligning startAt to 0 minutes/seconds for "24h" or 0:00 for day
ranges, compute duration = endAt - startAt and set prevEndAt = startAt and
prevStartAt = new Date(startAt.getTime() - duration)), and return prevEndAt
(update callers) so overview comparisons use equal-length windows; change logic
inside getTimeRange (variables: startAt, endAt, prevStartAt) to implement this
consistent-duration approach.
```ts
export async function handlePageviewMessages(
  context: { env: Env },
  batch: Array<PageviewMessage["data"]>,
) {
  const db = getDb(context.env);
  await db.insert(PageViewsTable).values(
    batch.map((item) => ({
      postId: item.postId,
      visitorHash: item.visitorHash,
    })),
  );
}
```
Add error handling and logging.
The current implementation lacks error handling and logging. If the bulk insert fails, the error propagates upward without being recorded, which makes troubleshooting harder. Consider also handling the empty-batch edge case.
🛡️ Suggested improvement
```diff
 export async function handlePageviewMessages(
   context: { env: Env },
   batch: Array<PageviewMessage["data"]>,
 ) {
+  if (batch.length === 0) {
+    return;
+  }
+
   const db = getDb(context.env);
-  await db.insert(PageViewsTable).values(
-    batch.map((item) => ({
-      postId: item.postId,
-      visitorHash: item.visitorHash,
-    })),
-  );
+  try {
+    await db.insert(PageViewsTable).values(
+      batch.map((item) => ({
+        postId: item.postId,
+        visitorHash: item.visitorHash,
+      })),
+    );
+  } catch (error) {
+    console.error(
+      JSON.stringify({
+        message: "Failed to insert pageview batch",
+        error: String(error),
+        batchSize: batch.length,
+      }),
+    );
+    throw error;
+  }
 }
```

As per coding guidelines: "Use JSON format for logs to enable search/filtering in Cloudflare Workers Observability".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/pageview/api/pageview.consumer.ts` around lines 5 - 16,
handlePageviewMessages currently performs a blind bulk insert via
getDb(...).insert(PageViewsTable).values(...) with no handling for empty batch
or errors; update it to early-return when batch.length === 0, wrap the insert in
try/catch, and on error call a JSON-formatted logger (e.g., processLogger or
context.env.LOGGER) with fields like {svc:"pageview",
fn:"handlePageviewMessages", level:"error", message:"bulk insert failed", error:
err.message, batchSize: batch.length} and rethrow or handle accordingly; also
log a JSON info entry on success with batchSize and any relevant metadata.
Ensure you reference handlePageviewMessages, getDb, and PageViewsTable in the
change.
```ts
export async function getStats(
  db: DB,
  startAt: Date,
  endAt: Date,
): Promise<{ pv: number; uv: number }> {
```
🛠️ Refactor suggestion | 🟠 Major
Please unify the data-layer exported function signatures to (db, params).
All four functions use positional parameters, which deviates from the repo's data-layer convention. Unify them on a params object so callers don't end up with misaligned arguments when parameters are added later.
Suggested change (example)
```diff
-export async function getStats(
-  db: DB,
-  startAt: Date,
-  endAt: Date,
-): Promise<{ pv: number; uv: number }> {
+export async function getStats(
+  db: DB,
+  params: { startAt: Date; endAt: Date },
+): Promise<{ pv: number; uv: number }> {
+  const { startAt, endAt } = params;
   ...
 }

-export async function getTrafficTrend(
-  db: DB,
-  startAt: Date,
-  endAt: Date,
-  unit: "hour" | "day",
-): Promise<Array<{ date: number; views: number }>> {
+export async function getTrafficTrend(
+  db: DB,
+  params: { startAt: Date; endAt: Date; unit: "hour" | "day" },
+): Promise<Array<{ date: number; views: number }>> {
+  const { startAt, endAt, unit } = params;
   ...
 }

-export async function getTopPages(
-  db: DB,
-  startAt: Date,
-  endAt: Date,
-  limit = 5,
-): Promise<Array<{ slug: string; views: number }>> {
+export async function getTopPages(
+  db: DB,
+  params: { startAt: Date; endAt: Date; limit: number },
+): Promise<Array<{ slug: string; views: number }>> {
+  const { startAt, endAt, limit } = params;
   ...
 }

-export async function getViewCountsBySlugs(
-  db: DB,
-  slugs: string[],
-): Promise<Record<string, number>> {
+export async function getViewCountsBySlugs(
+  db: DB,
+  params: { slugs: string[] },
+): Promise<Record<string, number>> {
+  const { slugs } = params;
   ...
 }
```

As per coding guidelines: src/features/*/data/*.ts: Data access functions in the data/ layer should have signature (db: DB, params) → Promise<T> with no business logic, performing only raw Drizzle queries.
Also applies to: 41-46, 76-81, 103-106
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/pageview/data/pageview.data.ts` around lines 17 - 21, Change the
exported data-layer functions in this module (including getStats) to follow the
repo convention signature (db: DB, params) → Promise<T>: replace positional
parameters (e.g., startAt, endAt, etc.) with a single params object, define a
clear params type or inline destructuring (e.g., params: { startAt: Date; endAt:
Date; ... }), update the function body to read values from params (e.g.,
params.startAt), and update the Promise return type accordingly; ensure only raw
Drizzle queries remain in these functions and update all callers to pass a
params object instead of positional args.
```ts
return rows.map((r) => ({
  date: Number(r.bucket) * 1000, // unix seconds → ms
  views: r.views,
}));
```
🛠️ Refactor suggestion | 🟠 Major
Consider pushing defaults, short-circuit branches, and result assembly down to the service layer.
The data layer currently contains a default limit, an empty-array short-circuit, and result-shape assembly. Keep it as a thin layer that only runs Drizzle queries, and let pageview.service.ts make the business decisions.
Based on learnings: Organize feature modules in src/features/ with layered pattern: data/ for raw Drizzle queries and <name>.service.ts for business logic.
Also applies to: 80-80, 107-107, 119-119
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/pageview/data/pageview.data.ts` around lines 67 - 70, The data
layer function in src/features/pageview/data/pageview.data.ts currently maps DB
rows to {date, views} and includes defaults/short-circuits; remove the mapping
and any default limit/empty-array logic so the function only executes the
Drizzle query and returns the raw rows (e.g., return rows as-is instead of
rows.map(...)); then move the result assembly (convert bucket → date, views) and
any default limit/empty-array handling into
src/features/pageview/pageview.service.ts (update the service to call the raw
query, apply defaults/limits, handle empty results, and map rows to the {date,
views} shape). Also apply the same change pattern to the other similar spots
referenced (around lines 80, 107, 119) so all data/* functions remain thin and
business logic lives in pageview.service.ts.
```tsx
  views={viewCounts?.[post.slug]}
  isLoadingViews={isPendingViewCounts}
/>
<div className="border-t border-dashed mx-6 border-black/10 dark:border-white/15 last:border-t-0 md:hidden" />
```
The last:border-t-0 selector likely has no effect.
last:border-t-0 is applied to the divider inside each wrapper div, but each wrapper contains only one divider element, so that divider is always the "last" one. To hide the last card's divider, either decide at the parent level or use a selector such as last:*:border-t-0 on the wrapper element.
🔧 Suggested fix
```diff
-<div
-  key={post.slug}
-  className="fuwari-onload-animation"
+<div
+  key={post.slug}
+  className="fuwari-onload-animation [&:last-child>.border-t]:border-t-0"
   style={{
     animationDelay: `calc(var(--fuwari-content-delay) + ${i * delayOffset}ms)`,
   }}
 >
   <PostCard
     post={post}
     pinned={pinned}
     popular={!pinned && popular}
     views={viewCounts?.[post.slug]}
     isLoadingViews={isPendingViewCounts}
   />
-  <div className="border-t border-dashed mx-6 border-black/10 dark:border-white/15 last:border-t-0 md:hidden" />
+  <div className="border-t border-dashed mx-6 border-black/10 dark:border-white/15 md:hidden" />
 </div>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/theme/themes/fuwari/pages/home/page.tsx` at line 72, The
divider's last:border-t-0 is ineffective because it's applied to the single
divider element inside each card; move the last-border rule to the card/wrapper
element that is iterated so the final card hides its divider, i.e., remove
last:border-t-0 from the div with className "border-t border-dashed mx-6
border-black/10 dark:border-white/15 last:border-t-0 md:hidden" and instead add
the last:border-t-0 (or equivalent parent-level selector) to the card container
element that wraps this divider in page.tsx so the last card's divider is
suppressed.
```ts
const pageviewBatch: {
  data: { postId: number; visitorHash: string };
  message: Message;
}[] = [];
```
PAGEVIEW batch retry is no longer idempotent.
Only postId/visitorHash are carried on batch retry, with no stable event ID. If the DB write succeeds but an error is thrown before ack(), or if just one message in the batch fails, the pageviews that were already written get redelivered with the whole batch and counted again. Pass message.id through to the consumer and persist it, deduplicating with a unique constraint or insert-or-ignore.
Also applies to: 52-54, 72-87
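A minimal in-memory model of the suggested dedup (the messageId field and the insert-or-ignore behaviour are assumptions about the eventual schema, not existing code; in the real fix these would be a unique column plus an insert-or-ignore in D1/Drizzle):

```typescript
type PageviewRow = { messageId: string; postId: number; visitorHash: string };

// Stands in for the page_views table; the key models a unique index on messageId.
const pageViews = new Map<string, PageviewRow>();

// Mimics `INSERT OR IGNORE`: a redelivered message with a known id is a no-op.
function insertOrIgnore(rows: PageviewRow[]): number {
  let inserted = 0;
  for (const row of rows) {
    if (!pageViews.has(row.messageId)) {
      pageViews.set(row.messageId, row);
      inserted++;
    }
  }
  return inserted;
}

const batch: PageviewRow[] = [
  { messageId: "m1", postId: 1, visitorHash: "a" },
  { messageId: "m2", postId: 1, visitorHash: "b" },
];

insertOrIgnore(batch); // first delivery: both rows written
const redelivered = insertOrIgnore(batch); // full-batch retry after a failed ack()
// redelivered is 0: nothing is double-counted
```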
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/queue/queue.handler.ts` around lines 12 - 15, The PAGEVIEW batch
retry is not idempotent because only postId/visitorHash are stored; include the
original message.id with each entry in pageviewBatch (the structure currently
using data: { postId: number; visitorHash: string } and Message) and persist
that id in the consumer's DB write so you can deduplicate (e.g., unique
constraint or insert-or-ignore). Update the code paths that build/process
pageviewBatch and the ack()/retry handling (also the similar blocks around lines
noted) to pass and persist message.id, and use that unique ID during insert to
prevent double-counting if part of the batch is retried.
```ts
const parsed = queueMessageSchema.safeParse(message.body);
if (!parsed.success) {
  console.error(
    JSON.stringify({
      message: "queue invalid message",
      body: message.body,
      error: parsed.error.message,
    }),
```
Don't log the entire queue message verbatim in the error log.
message.body may well contain email addresses, webhook payloads, or other user content. As written, any schema validation failure leaves the entire raw payload permanently in the Workers logs, which is both a privacy risk and a significant log-volume amplifier.
🛡️ Suggest logging only safe metadata
```diff
 console.error(
   JSON.stringify({
     message: "queue invalid message",
-    body: message.body,
+    queueMessageId: message.id,
     error: parsed.error.message,
   }),
 );
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const parsed = queueMessageSchema.safeParse(message.body);
if (!parsed.success) {
  console.error(
    JSON.stringify({
      message: "queue invalid message",
      queueMessageId: message.id,
      error: parsed.error.message,
    }),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/queue/queue.handler.ts` around lines 18 - 25, do not write the raw message.body verbatim into the error log; in the failure branch of queueMessageSchema.safeParse (variable parsed, the current console.error call site), log only safe metadata such as a message identifier (message.messageId or message.id), the payload length, an irreversible hash (e.g. a SHA-256 digest), or a safe truncation to the first N characters, plus parsed.error.message. Replace the existing body field with this metadata, keep the error information, and remove or replace any verbatim sensitive content so the full payload is never written to the logs.
docs: fix PAGEVIEW_SALT description, it strengthens anonymization not enables tracking
🧹 Nitpick comments (3)
docs/deployment-guide.en.md (3)
116-116: Consider documenting the PAGEVIEW_SALT length, or aligning it with the other secrets.
PAGEVIEW_SALT is generated with openssl rand -hex 16 (16 bytes), while BETTER_AUTH_SECRET (line 103) uses openssl rand -hex 32 (32 bytes). 16 bytes is probably sufficient for a hash salt, but the inconsistency may confuse users. Suggestions:
- Either explain the shorter length in the description (e.g. "salt for visitor hashing; 16 bytes is sufficient for anonymization")
- Or standardize on 32 bytes for consistency
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/deployment-guide.en.md` at line 116, Update the docs so PAGEVIEW_SALT and BETTER_AUTH_SECRET are consistent or the difference is explained: either change the PAGEVIEW_SALT example to use `openssl rand -hex 32` to match BETTER_AUTH_SECRET, or keep `openssl rand -hex 16` but add a brief note after the PAGEVIEW_SALT entry stating that 16 bytes (hex 16) is sufficient for a visitor-hash salt and why it differs from the 32-byte BETTER_AUTH_SECRET; reference the PAGEVIEW_SALT and BETTER_AUTH_SECRET entries when making this change.
248-249: Consider clarifying whether PAGEVIEW_SALT is optional. The doc says you can "optionally set
PAGEVIEW_SALT to strengthen visitor-hash anonymization", but not what happens if it is unset. Add a short note covering:
- Does the system still work without PAGEVIEW_SALT?
- Is there a default behavior when unset (e.g. a plain hash of the IP address)?
- What is the security/privacy gain once it is set?
This will help users decide whether they need to configure the variable.
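For illustration, a salted visitor hash might be derived like this (the exact inputs, IP/User-Agent, and daily rotation are assumptions about the pipeline, not confirmed repo behaviour):

```typescript
import { createHash } from "node:crypto";

// Sketch: deterministic per-day visitor hash. A secret salt makes it
// infeasible to brute-force hashes back to IP/UA pairs, and the day
// component prevents cross-day correlation of the same visitor.
function visitorHash(salt: string, ip: string, userAgent: string, day: string): string {
  return createHash("sha256")
    .update(`${salt}:${day}:${ip}:${userAgent}`)
    .digest("hex");
}
```

Without a salt, anyone with the logs could enumerate plausible IP/UA combinations and reverse the hashes; with a secret salt that attack is impractical, which is the anonymization benefit worth mentioning in the doc.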
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/deployment-guide.en.md` around lines 248 - 249, The doc mentions PAGEVIEW_SALT but doesn't state what happens if it's not set; update the deployment-guide.en.md text around the PAGEVIEW_SALT sentence to explicitly say the system will still work without PAGEVIEW_SALT, describe the default hashing behavior used when unset (e.g., a simple deterministic visitor hash such as IP-based or timestamp-based fingerprinting used by the pageview pipeline), and add one short line about the security/privacy benefit of setting PAGEVIEW_SALT (makes visitor hashes non-deterministic across deployments and prevents cross-site correlation). Reference PAGEVIEW_SALT and the "pageview statistics" / "Cloudflare Queue + D1" sentence so readers can find and understand the change.
117-117: The UMAMI_SRC example URL is verified as valid.
https://cloud.umami.is is indeed a valid endpoint for Umami's official cloud-hosted service, operated by the Umami creators, so the example URL is fine to use. Consider refining the "client-side tracking proxy URL" description to state explicitly that this is the proxy URL for the Umami tracking script, or the address of a self-hosted instance, so users more clearly understand what this setting is for.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/deployment-guide.en.md` at line 117, Update the UMAMI_SRC table entry to clarify that this value should be the Umami tracking script proxy URL or your self-hosted Umami instance address (i.e., the endpoint serving the Umami client tracking script), making clear it can point to the official cloud endpoint (`https://cloud.umami.is`) or a self-hosted URL; reference the config key UMAMI_SRC and reword the description to explicitly say "Umami tracking script proxy URL or self-hosted Umami instance address" so readers understand the intended use.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@docs/deployment-guide.en.md`:
- Line 116: Update the docs so PAGEVIEW_SALT and BETTER_AUTH_SECRET are
consistent or the difference is explained: either change the PAGEVIEW_SALT
example to use `openssl rand -hex 32` to match BETTER_AUTH_SECRET, or keep
`openssl rand -hex 16` but add a brief note after the PAGEVIEW_SALT entry
stating that 16 bytes (hex 16) is sufficient for a visitor-hash salt and why it
differs from the 32-byte BETTER_AUTH_SECRET; reference the PAGEVIEW_SALT and
BETTER_AUTH_SECRET entries when making this change.
- Around line 248-249: The doc mentions PAGEVIEW_SALT but doesn't state what
happens if it's not set; update the deployment-guide.en.md text around the
PAGEVIEW_SALT sentence to explicitly say the system will still work without
PAGEVIEW_SALT, describe the default hashing behavior used when unset (e.g., a
simple deterministic visitor hash such as IP-based or timestamp-based
fingerprinting used by the pageview pipeline), and add one short line about the
security/privacy benefit of setting PAGEVIEW_SALT (makes visitor hashes
non-deterministic across deployments and prevents cross-site correlation).
Reference PAGEVIEW_SALT and the "pageview statistics" / "Cloudflare Queue + D1"
sentence so readers can find and understand the change.
- Line 117: Update the UMAMI_SRC table entry to clarify that this value should
be the Umami tracking script proxy URL or your self-hosted Umami instance
address (i.e., the endpoint serving the Umami client tracking script), making
clear it can point to the official cloud endpoint (`https://cloud.umami.is`) or
a self-hosted URL; reference the config key UMAMI_SRC and reword the description
to explicitly say "Umami tracking script proxy URL or self-hosted Umami instance
address" so readers understand the intended use.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 960c18f5-8748-409d-8e10-d42f77e466e6
📒 Files selected for processing (3)
- README.md
- docs/README.en.md
- docs/deployment-guide.en.md
✅ Files skipped from review due to trivial changes (2)
- README.md
- docs/README.en.md
Summary by CodeRabbit
Release Notes
New Features
Removed / Changed