
refactor(deepchat): add summary compaction #1331

Merged
zerob13 merged 3 commits into dev from refactor/context-truncator
Mar 6, 2026

Conversation


@zerob13 zerob13 commented Mar 6, 2026

Summary by CodeRabbit

  • New Features

    • Automatic context compaction for long conversations with visible status updates.
  • UI/UX

    • Inline compaction divider and status labels in chat message list; message rendering updated to show compaction state.
  • Bug Fixes

    • Prevent stale message loads so only the latest message fetch updates the view.
  • Localization

    • Added compaction status translations for 11 languages.


coderabbitai bot commented Mar 6, 2026

📝 Walkthrough

Adds session-level context compaction: a CompactionService plus storage, presenter, UI, type, and test changes to summarize and persist rolled-up conversation history and surface compaction status across main and renderer processes.

Changes

  • Compaction service & core flow
    src/main/presenter/deepchatAgentPresenter/compactionService.ts, src/main/presenter/deepchatAgentPresenter/index.ts, src/main/events.ts, src/renderer/src/events.ts
    Adds CompactionService, session compaction state tracking, a compaction intent lifecycle, and a COMPACTION_UPDATED event; integrates compaction into the user/resume flows and emits state updates.
  • Context building & token budgeting
    src/main/presenter/deepchatAgentPresenter/contextBuilder.ts
    Exports context-building APIs; introduces a HistoryTurn abstraction, compaction-aware filtering, turn-based truncation, reserveTokens usage, and refined tool-call/message parsing and token estimation.
  • Session & message storage
    src/main/presenter/deepchatAgentPresenter/sessionStore.ts, src/main/presenter/sqlitePresenter/tables/deepchatSessions.ts, src/main/presenter/deepchatAgentPresenter/messageStore.ts, src/main/presenter/sqlitePresenter/tables/deepchatMessages.ts
    Adds summary columns and a migration to the sessions table, session summary CAS APIs, compaction message create/update/delete, and a message deletion API.
  • Presenter & public APIs
    src/main/presenter/newAgentPresenter/index.ts, src/shared/types/presenters/new-agent.presenter.d.ts
    Adds getSessionCompactionState and getMessageTraceCount presenter methods and exposes SessionCompactionState in presenter types.
  • LLM presenter types
    src/shared/types/presenters/legacy.presenters.d.ts, src/shared/types/presenters/llmprovider.presenter.d.ts
    Adds a generateText(...) method to the LLM presenter interfaces for non-streamed text generation.
  • Shared types
    src/shared/types/agent-interface.d.ts
    Adds SessionCompactionStatus, SessionCompactionState, an optional agent method getSessionCompactionState, and message metadata compaction fields.
  • Renderer: message list & page
    src/renderer/src/components/chat/MessageList.vue, src/renderer/src/components/chat/messageListItems.ts, src/renderer/src/pages/ChatPage.vue
    Introduces DisplayMessage/MessageListItem with compaction metadata, renders the compaction-divider UI, adjusts iteration and capture logic, and updates page-level message conversions to return DisplayMessage.
  • Renderer: UI & i18n
    src/renderer/src/components/chat/ChatTopBar.vue, src/renderer/src/i18n/*/chat.json
    Changes ChatTopBar attrs handling; adds compaction.compacting and compaction.compacted translations across locales.
  • Renderer store concurrency
    src/renderer/src/stores/ui/message.ts
    Adds latestLoadRequestId to ignore stale loadMessages results and ensure correct load ordering.
  • SQLite table & tests
    test/**, src/main/presenter/sqlitePresenter/tables/*
    Extensive tests and mock updates for compaction flows, session summary CAS behavior, message deletion, and LLM generateText; adds tests for MessageList, the message store, the compaction service, and session table update behavior.
  • Docs / removed guides
    .cursor/rules/*.mdc, AGENTS.md
    Removes several local documentation files and updates the AGENTS.md post-feature workflow to include an i18n step.
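The compaction flow summarized above can be sketched in miniature. All names here (CompactionIntent, createCompactionMessage, compareAndSetSummaryState) are taken from the walkthrough, but every signature is an assumption, not the actual implementation:

```typescript
type CompactionStatus = 'compacting' | 'compacted'

interface CompactionIntent {
  sessionId: string
  summaryText: string
  targetCursorOrderSeq: number
}

interface CompactionMessage {
  id: string
  status: CompactionStatus
}

// Apply an intent: create a placeholder divider message, attempt the CAS
// write of the summary state, then confirm the divider or roll it back.
function applyIntent(
  intent: CompactionIntent,
  compareAndSetSummaryState: (intent: CompactionIntent) => boolean,
  createCompactionMessage: () => CompactionMessage,
  deleteMessage: (id: string) => void
): CompactionMessage | null {
  // The "compacting" divider goes in first so the UI can show progress
  // while the summary write is attempted.
  const message = createCompactionMessage()
  if (compareAndSetSummaryState(intent)) {
    message.status = 'compacted' // CAS won: confirm the divider
    return message
  }
  deleteMessage(message.id) // CAS lost: remove the placeholder
  return null
}
```

This mirrors the create-then-confirm-or-delete shape visible in the sequence diagram below, where the COMPACTION_UPDATED event drives the renderer's divider state.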

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Agent as DeepChatAgent
    participant CompSvc as CompactionService
    participant MsgStore as MessageStore
    participant SessStore as SessionStore
    participant LLM as LLM Provider
    participant UI as UI/Renderer

    User->>Agent: sendMessage(content)
    Agent->>CompSvc: prepareForNextUserTurn()
    CompSvc->>SessStore: getSummaryState()
    SessStore-->>CompSvc: current summary state
    CompSvc->>MsgStore: fetch records from cursor
    MsgStore-->>CompSvc: chat history
    CompSvc->>CompSvc: prepareCompaction()
    alt needs summarization
        CompSvc->>CompSvc: generateRollingSummary()
        CompSvc->>CompSvc: groupBlocksByToken()
        CompSvc->>LLM: generateSummaryText(prompt)
        LLM-->>CompSvc: summarized content
        CompSvc->>CompSvc: sanitizeSummaryContent()
    end
    CompSvc-->>Agent: CompactionIntent
    Agent->>MsgStore: createCompactionMessage()
    MsgStore-->>Agent: compaction message created
    Agent->>SessStore: compareAndSetSummaryState(...)
    SessStore-->>Agent: result
    Agent->>UI: emit COMPACTION_UPDATED event
    UI->>UI: render compaction divider

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Suggested reviewers

  • deepinfect

Poem

🐰
Whiskers twitch for tidy chat,
Old threads folded, neat and flat.
Tokens saved with gentle hop,
Summaries land — no extra stop.
A rabbit nods: compaction done!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage ⚠️ Warning: docstring coverage is 6.06%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
Title Check ✅ Passed: the title 'refactor(deepchat): add summary compaction' accurately describes the main feature being added, a comprehensive compaction/summarization system for deep-chat conversation sessions.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/renderer/src/stores/ui/message.ts (1)

109-115: ⚠️ Potential issue | 🟠 Major

clear() still allows stale stream hydrations to repopulate the store.

latestLoadRequestId only protects loadMessages(). A pending newAgentPresenter.getMessage() started by applyStreamingBlocksToMessage() can still resolve after clear() and insert a record from the previous session into the emptied cache.

Proposed fix
@@
   const hydratingStreamMessageIds = new Set<string>()
   let latestLoadRequestId = 0
+  let latestHydrationGeneration = 0

@@
   function clear(): void {
     latestLoadRequestId += 1
+    latestHydrationGeneration += 1
     messageIds.value = []
     messageCache.value.clear()
     clearStreamingState()
     hydratingStreamMessageIds.clear()
   }

@@
   function applyStreamingBlocksToMessage(
     messageId: string,
     conversationId: string,
     blocks: AssistantMessageBlock[]
   ): void {
+    const hydrationGeneration = latestHydrationGeneration
     const serializedBlocks = JSON.stringify(blocks)
     const existing = messageCache.value.get(messageId)

@@
     void newAgentPresenter
       .getMessage(messageId)
       .then((fetched) => {
-        if (!fetched || fetched.sessionId !== conversationId) return
+        if (
+          !fetched ||
+          fetched.sessionId !== conversationId ||
+          hydrationGeneration !== latestHydrationGeneration
+        ) {
+          return
+        }
         upsertMessageRecord({
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/src/stores/ui/message.ts` around lines 109 - 115, The clear()
function doesn't prevent in-flight stream hydrations (started by
applyStreamingBlocksToMessage() via newAgentPresenter.getMessage()) from
repopulating messageCache after clearing; add a simple generation token to guard
hydrations: introduce a numeric hydrationRequestId (or similar) alongside
latestLoadRequestId, increment it inside clear(), capture its value before
calling newAgentPresenter.getMessage() in applyStreamingBlocksToMessage(), and
when the getMessage() promise resolves verify the captured hydrationRequestId
still matches the current one before inserting into messageCache or
messageIds/hydratingStreamMessageIds; if it doesn't match, discard the result to
prevent stale data from being re-added.
src/main/presenter/deepchatAgentPresenter/contextBuilder.ts (1)

440-456: ⚠️ Potential issue | 🟠 Major

Fail fast when the resume target no longer exists.

If assistantMessageId is missing, targetOrderSeq stays undefined and this filter silently falls back to the whole sent history. That regenerates against the wrong branch instead of aborting the resume request.

Possible fix
   const allMessages = messageStore.getMessages(sessionId)
   const targetMessage = allMessages.find((message) => message.id === assistantMessageId)
-  const targetOrderSeq = targetMessage?.orderSeq
+  if (!targetMessage) {
+    return systemPrompt ? [{ role: 'system', content: systemPrompt }] : []
+  }
+  const targetOrderSeq = targetMessage.orderSeq
   const cursor = Math.max(1, options.summaryCursorOrderSeq ?? 1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/presenter/deepchatAgentPresenter/contextBuilder.ts` around lines 440
- 456, The code silently falls back to full history when assistantMessageId is
provided but targetMessage is not found; after calling
messageStore.getMessages(sessionId) and computing targetMessage/targetOrderSeq,
add a fail-fast check: if assistantMessageId is truthy and targetMessage is
undefined, throw or return an error (e.g., throw new Error or return a rejected
result) to abort the resume flow instead of continuing; update the logic around
targetMessage/targetOrderSeq and historyRecords so the filter only runs when the
target exists.
🧹 Nitpick comments (6)
src/shared/types/presenters/legacy.presenters.d.ts (1)

1014-1020: Avoid maintaining this presenter contract in two places.

generateText() now has to stay synchronized in both src/shared/types/presenters/llmprovider.presenter.d.ts and this legacy declaration. Pulling the common ILlmProviderPresenter shape into one shared base type would reduce future API drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shared/types/presenters/legacy.presenters.d.ts` around lines 1014 - 1020,
The legacy presenter declaration duplicates the generateText contract; refactor
by extracting the shared shape into a single base interface (e.g.,
ILlmProviderPresenter) that declares generateText(providerId: string, prompt:
string, modelId: string, temperature?: number, maxTokens?: number): Promise<{
content: string }>, then have the legacy presenter type extend or re-export that
base interface instead of re-declaring generateText so both
ILlmProviderPresenter and the legacy presenter reference the same definition and
avoid drift.
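The suggested refactor could look roughly like this. The generateText signature comes from the prompt above; the base interface name and the extends wiring are illustrative, not the repository's actual types:

```typescript
// Sketch: hoist the duplicated contract into one shared base interface so
// generateText is declared exactly once.
export interface ILlmProviderBase {
  generateText(
    providerId: string,
    prompt: string,
    modelId: string,
    temperature?: number,
    maxTokens?: number
  ): Promise<{ content: string }>
}

// Both the current and the legacy presenter types extend the shared base;
// future signature changes happen in one place.
export interface ILlmProviderPresenter extends ILlmProviderBase {}
export interface ILegacyLlmProviderPresenter extends ILlmProviderBase {}

// A trivial stub satisfying the shared contract, for illustration only.
export const stub: ILlmProviderBase = {
  generateText: async (_providerId, prompt) => ({ content: `echo:${prompt}` })
}
```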
src/renderer/src/components/chat/messageListItems.ts (1)

12-14: Make isCompactionMessageItem a type guard.

Returning boolean prevents type narrowing in consumers. The guard is used in MessageList.vue template conditionals and filters where knowing messageType === 'compaction' would help type safety.

♻️ Proposed refactor
-export function isCompactionMessageItem(item: MessageListItem): boolean {
+export function isCompactionMessageItem(
+  item: MessageListItem
+): item is MessageListItem & { messageType: 'compaction' } {
   return item.messageType === 'compaction'
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/src/components/chat/messageListItems.ts` around lines 12 - 14,
Change isCompactionMessageItem to be a TypeScript type guard so callers get
narrowed types; update its signature from returning boolean to: export function
isCompactionMessageItem(item: MessageListItem): item is Extract<MessageListItem,
{ messageType: 'compaction' }> and keep the body (return item.messageType ===
'compaction'). This will let consumers (e.g., MessageList.vue
filters/conditionals) treat the item as the compaction-specific variant without
losing runtime behavior.
test/renderer/components/MessageList.test.ts (2)

84-89: Consider adding compactionStatus to the base createMessage return type.

The createMessage helper sets messageType: 'normal' and summaryUpdatedAt: null, but compactionStatus is only added in createCompactionMessage. For type consistency, consider initializing compactionStatus: undefined in the base helper to make the shape explicit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/renderer/components/MessageList.test.ts` around lines 84 - 89, The base
test helper createMessage currently omits compactionStatus (only
createCompactionMessage adds it), causing inconsistent shapes; update
createMessage to include compactionStatus: undefined in its returned object so
the message shape is explicit and consistent with createCompactionMessage, and
adjust any related test type/interface expectations (e.g., the return type of
createMessage or shared Message fixture type) to reflect compactionStatus
possibly being undefined.

6-14: Consider using English translations in test mocks for readability.

The i18n mock returns Chinese strings ('正在自动压缩...', '上下文已自动压缩'). While this correctly simulates localized behavior, using English translations in tests improves readability and maintainability for all contributors.

Suggested change
 vi.mock('vue-i18n', () => ({
   useI18n: () => ({
     t: (key: string) => {
-      if (key === 'chat.compaction.compacting') return '正在自动压缩...'
-      if (key === 'chat.compaction.compacted') return '上下文已自动压缩'
+      if (key === 'chat.compaction.compacting') return 'Auto-compacting...'
+      if (key === 'chat.compaction.compacted') return 'Context auto-compacted'
       return key
     }
   })
 }))

Update assertions accordingly on lines 118, 130, and 143.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/renderer/components/MessageList.test.ts` around lines 6 - 14, The i18n
mock in the test uses Chinese strings for the keys 'chat.compaction.compacting'
and 'chat.compaction.compacted'; change those returned values to English (e.g.,
'Auto-compacting...' and 'Context auto-compacted') in the useI18n mock, and
update any assertions that compare against those messages to the new English
text so tests remain readable; locate the useI18n mock and the test assertions
that reference those i18n keys and update their expected strings accordingly.
src/renderer/src/components/chat/MessageList.vue (1)

153-206: Consider using Tailwind CSS for static styles where possible.

Per coding guidelines, Tailwind CSS is preferred over scoped CSS. While the keyframe animation requires custom CSS, the static layout styles (flexbox, padding, colors) could use Tailwind classes. However, the reduced-motion media query and animation are appropriately handled in scoped CSS.

As per coding guidelines: "Use Tailwind CSS for all styling instead of writing scoped CSS files"

Partial Tailwind migration example

The divider element could use Tailwind classes:

<div
  v-if="isCompactionMessageItem(item)"
  data-compaction-indicator="true"
  :data-compaction-status="item.compactionStatus ?? 'compacted'"
  class="flex items-center gap-3.5 py-4 select-none"
>

Keep the scoped CSS for animation-specific styles only.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/renderer/src/components/chat/MessageList.vue` around lines 153 - 206, The
scoped CSS in MessageList.vue mixes static layout styles with animation; move
static styles for the compaction divider into Tailwind classes on the divider
element (the element rendered when isCompactionMessageItem(item) is true —
currently targeted by .compaction-divider, .compaction-divider__line, and
.compaction-divider__label), e.g. replace flex, gap, padding, user-select, and
simple color/font-size/weight/whitespace rules with equivalent Tailwind classes
on that element and its children, then remove those static declarations from the
scoped CSS; keep only the animation-specific rules (keyframes
compaction-breathe, .compaction-divider__label--compacting animation and the
prefers-reduced-motion block) in the scoped CSS so animation and reduced-motion
behavior remain intact.
src/main/presenter/deepchatAgentPresenter/index.ts (1)

1898-1946: Consider adding error logging when compaction fails.

When applyCompaction fails (line 1935-1937), the compaction message is deleted but no error is logged. Consider adding a warning log to aid debugging.

Suggested change
     if (result.succeeded) {
       this.messageStore.updateCompactionMessage(
         compactionMessageId,
         'compacted',
         result.summaryState.summaryUpdatedAt
       )
     } else {
+      console.warn('[DeepChatAgent] Compaction failed, reverting to previous state')
       this.messageStore.deleteMessage(compactionMessageId)
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/presenter/deepchatAgentPresenter/index.ts` around lines 1898 - 1946,
The compaction failure path in applyCompactionIntent doesn't log errors; when
compactionService.applyCompaction returns result.succeeded === false (the branch
that calls this.messageStore.deleteMessage), add a warning/error log using the
class's existing logger (e.g., this.logger.warn or similar) that includes the
sessionId, compactionMessageId, intent.targetCursorOrderSeq, and any
error/details from result (or result.summaryState) to aid debugging; update the
branch around compactionService.applyCompaction → messageStore.deleteMessage to
emit that log before deleting the message and before calling
emitCompactionState.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/main/presenter/deepchatAgentPresenter/compactionService.ts`:
- Around line 466-480: The branch that calls generateSummaryText after
splitLargeBlock returns the same number of blocks can still pass an oversized
payload into generateSummaryText; before calling generateSummaryText (in the
block that checks splitBlocks.length === normalizedBlocks.length) compute the
token count of splitBlocks.join('\n\n') against the allowed input budget (use
the same budget calculation as getMaxChunkTokens/options.model.contextLength and
options.reserveTokens and options.previousSummary token cost) and if the joined
payload still exceeds the budget, do not call generateSummaryText — instead fall
back to the existing summarizeBlocks flow (or reduce chunkTokens and re-run
splitLargeBlock) so you never feed an over-budget payload; reference functions:
getMaxChunkTokens, splitLargeBlock, generateSummaryText, summarizeBlocks,
groupBlocksByToken.
- Around line 551-579: The prompt currently injects untrusted strings directly
via buildSummaryPrompt(previousSummary, spanText) which allows prompt injection;
sanitize or neutralize both previousSummary and spanText before injecting: treat
them as untrusted by escaping or stripping any lines that begin with system-like
directives (e.g., lines starting with "You are", "##", "Output format", or
"Produce"), or wrap them as an explicit quoted/fenced block labeled
"User-provided (untrusted) content — DO NOT EXECUTE" so the model is instructed
not to treat them as instructions; update buildSummaryPrompt to call a sanitizer
helper (e.g., sanitizeSummaryInput) and ensure any caller such as
appendSummarySection uses the sanitized output.
- Around line 71-91: serializeUserRecord currently strips file bodies and only
keeps metadata, causing uploaded document contents to be lost during compaction;
update serializeUserRecord to reuse the same user-content serialization used by
normal context building (e.g., call buildUserMessageContent and
normalizeUserInput) or otherwise include a bounded representation of file
contents (e.g., truncated/plaintext snippet) when files exist, and ensure
composeSections still receives the combined text+file-content block so that file
bodies are preserved in summaries; target the serializeUserRecord function and
related helpers (buildUserMessageContent, normalizeUserInput, composeSections)
when making the change.
- Around line 240-252: The code unconditionally writes the generated summary
which can regress the session when an older async compaction finishes after a
newer one; after generateRollingSummary() and before calling
this.sessionStore.updateSummaryState, re-read the current summary state for
intent.sessionId (e.g., via this.sessionStore.getSummaryState or equivalent) and
compare its summaryCursorOrderSeq to intent.targetCursorOrderSeq, and only call
this.sessionStore.updateSummaryState(intent.sessionId, updatedState) if the
current stored summaryCursorOrderSeq is <= intent.targetCursorOrderSeq (or use a
strict < if you prefer not to overwrite equal versions); if your store supports
a CAS/update-if-newer API prefer that to make the check atomic.
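The update-if-newer guard described in that prompt can be sketched as follows. The SummaryState shape and the store method names are assumptions based on the prompt, not the real sessionStore API:

```typescript
interface SummaryState {
  summaryText: string
  summaryCursorOrderSeq: number
}

interface SummaryStore {
  getSummaryState(sessionId: string): SummaryState
  updateSummaryState(sessionId: string, next: SummaryState): void
}

// Re-read the stored cursor immediately before writing so a newer compaction
// that finished first is never overwritten by an older, slower one.
function writeSummaryIfNewer(
  store: SummaryStore,
  sessionId: string,
  targetCursorOrderSeq: number,
  summaryText: string
): boolean {
  const current = store.getSummaryState(sessionId)
  if (current.summaryCursorOrderSeq > targetCursorOrderSeq) {
    return false // stale result from an older async compaction: discard
  }
  store.updateSummaryState(sessionId, {
    summaryText,
    summaryCursorOrderSeq: targetCursorOrderSeq
  })
  return true
}
```

A read-then-write guard like this still has a small race window; as the prompt notes, an atomic CAS at the store layer is preferable when available.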

In `@src/main/presenter/sqlitePresenter/tables/deepchatMessages.ts`:
- Around line 202-204: The current delete(messageId) method on the deepchat
messages table only removes the message row and will leave orphaned rows in
deepchat_message_traces; fix by coordinating cleanup either by adding a foreign
key with ON DELETE CASCADE to the deepchat_message_traces schema (ensure the
schema migration/creation includes FOREIGN KEY(message_id) REFERENCES
deepchat_messages(id) ON DELETE CASCADE) or by updating the presenter flow so
deleteMessage() calls deepchatMessageTracesTable.deleteByMessageId(messageId)
before/alongside deepchatMessagesTable.delete(messageId); target the
delete(messageId) implementation in the deepchatMessages table and the presenter
method deleteMessage(), or the traces table schema/migration that defines
deepchat_message_traces.
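The schema-level remedy from the prompt above can be sketched as a SQLite DDL string. The traces table's real column set is not shown in this review, so everything besides id and message_id is a placeholder:

```typescript
// Sketch: declare the traces table with a cascading foreign key so deleting a
// message row removes its traces in the same statement. SQLite only enforces
// this when the connection runs PRAGMA foreign_keys = ON.
export const CREATE_TRACES_TABLE = `
  CREATE TABLE IF NOT EXISTS deepchat_message_traces (
    id TEXT PRIMARY KEY,
    message_id TEXT NOT NULL,
    payload TEXT,
    FOREIGN KEY (message_id) REFERENCES deepchat_messages(id) ON DELETE CASCADE
  )
`
```

The alternative remedy, having the presenter's deleteMessage() call a deleteByMessageId on the traces table alongside the message delete, avoids depending on the foreign_keys pragma being enabled on every connection.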


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1d14c7a2-41cf-409a-82f0-ad314b99c6a1

📥 Commits

Reviewing files that changed from the base of the PR and between 366f3d4 and f614aa8.

📒 Files selected for processing (38)
  • src/main/events.ts
  • src/main/presenter/deepchatAgentPresenter/compactionService.ts
  • src/main/presenter/deepchatAgentPresenter/contextBuilder.ts
  • src/main/presenter/deepchatAgentPresenter/index.ts
  • src/main/presenter/deepchatAgentPresenter/messageStore.ts
  • src/main/presenter/deepchatAgentPresenter/sessionStore.ts
  • src/main/presenter/newAgentPresenter/index.ts
  • src/main/presenter/sqlitePresenter/tables/deepchatMessages.ts
  • src/main/presenter/sqlitePresenter/tables/deepchatSessions.ts
  • src/renderer/src/components/chat/ChatTopBar.vue
  • src/renderer/src/components/chat/MessageList.vue
  • src/renderer/src/components/chat/messageListItems.ts
  • src/renderer/src/events.ts
  • src/renderer/src/i18n/da-DK/chat.json
  • src/renderer/src/i18n/en-US/chat.json
  • src/renderer/src/i18n/fa-IR/chat.json
  • src/renderer/src/i18n/fr-FR/chat.json
  • src/renderer/src/i18n/he-IL/chat.json
  • src/renderer/src/i18n/ja-JP/chat.json
  • src/renderer/src/i18n/ko-KR/chat.json
  • src/renderer/src/i18n/pt-BR/chat.json
  • src/renderer/src/i18n/ru-RU/chat.json
  • src/renderer/src/i18n/zh-CN/chat.json
  • src/renderer/src/i18n/zh-HK/chat.json
  • src/renderer/src/i18n/zh-TW/chat.json
  • src/renderer/src/pages/ChatPage.vue
  • src/renderer/src/stores/ui/message.ts
  • src/shared/types/agent-interface.d.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/shared/types/presenters/new-agent.presenter.d.ts
  • test/main/presenter/deepchatAgentPresenter/contextBuilder.test.ts
  • test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts
  • test/main/presenter/newAgentPresenter/integration.test.ts
  • test/main/presenter/newAgentPresenter/newAgentPresenter.test.ts
  • test/renderer/components/MessageList.test.ts
  • test/renderer/stores/messageStore.test.ts
  • test/renderer/stores/sessionStore.test.ts

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (1)
src/main/presenter/deepchatAgentPresenter/compactionService.ts (1)

171-183: Good secret redaction, but consider expanding patterns.

The sanitizeSummaryContent function redacts common secret patterns (Bearer tokens, sk- keys, AIza keys, JWTs). This is a solid baseline for preventing accidental secret leakage in summaries.

Consider whether additional patterns are needed for your use case (e.g., AWS keys starting with AKIA, GitHub tokens starting with ghp_/gho_).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/main/presenter/deepchatAgentPresenter/compactionService.ts` around lines
171 - 183, sanitizeSummaryContent currently redacts Bearer, sk-, AIza, and JWT
patterns but omits other common token formats; update the function
(sanitizeSummaryContent) to add additional regex replacements for other secret
patterns your app may encounter (e.g., AWS access keys like AKIA/ASIA, GitHub
tokens like ghp_/gho_/ghu_/gha_/ghs_, and other provider-specific prefixes) by
appending new .replace(...) calls with appropriate case-insensitive regexes and
unified redaction labels (e.g., [REDACTED_SECRET] or [REDACTED_TOKEN]) so all
known token families are covered while preserving the existing trimming and
think-tag removal behavior.
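Since the nitpick above ships no committable suggestion, here is a minimal sketch of what the expanded redaction table could look like. This is an assumption-heavy illustration: the table-driven shape, the `[REDACTED_*]` labels, and the name `sanitizeSummarySketch` are hypothetical and do not reflect the actual `compactionService.ts` implementation, which also trims whitespace and strips think tags.

```typescript
// Sketch only: table-driven redaction covering the token families the review
// suggests. Patterns, labels, and the function name are illustrative
// assumptions, not the real compactionService.ts code.
const SECRET_PATTERNS: Array<[RegExp, string]> = [
  [/\bBearer\s+[A-Za-z0-9\-._~+\/]+=*/gi, '[REDACTED_TOKEN]'], // Authorization headers
  [/\bsk-[A-Za-z0-9]{20,}\b/g, '[REDACTED_SECRET]'],           // OpenAI-style keys
  [/\bAIza[0-9A-Za-z\-_]{35}\b/g, '[REDACTED_SECRET]'],        // Google API keys
  [/\b(?:AKIA|ASIA)[0-9A-Z]{16}\b/g, '[REDACTED_SECRET]'],     // AWS access key IDs
  [/\bgh[pousa]_[A-Za-z0-9]{36,}\b/g, '[REDACTED_SECRET]'],    // GitHub ghp_/gho_/ghu_/ghs_/gha_
  [/\beyJ[\w-]+\.[\w-]+\.[\w-]+\b/g, '[REDACTED_JWT]']         // JWTs (three base64url parts)
]

function sanitizeSummarySketch(text: string): string {
  // Apply every pattern in order; each regex is global, so all occurrences go.
  return SECRET_PATTERNS.reduce((out, [re, label]) => out.replace(re, label), text)
}

const sample = 'AKIAABCDEFGHIJKLMNOP and ghp_' + 'a'.repeat(36)
console.log(sanitizeSummarySketch(sample))
// → [REDACTED_SECRET] and [REDACTED_SECRET]
```

Keeping the patterns in one array makes adding a new token family a one-line change, which is the maintenance property the review is really asking for.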

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 16909f15-ea72-4eeb-9854-e04fb76aa400

📥 Commits

Reviewing files that changed from the base of the PR and between f614aa8 and 0510984.

📒 Files selected for processing (12)
  • .cursor/rules/development-setup.mdc
  • .cursor/rules/i18n.mdc
  • .cursor/rules/vue-stack-guide.mdc
  • AGENTS.md
  • src/main/presenter/deepchatAgentPresenter/compactionService.ts
  • src/main/presenter/deepchatAgentPresenter/sessionStore.ts
  • src/main/presenter/sqlitePresenter/tables/deepchatSessions.ts
  • test/main/presenter/deepchatAgentPresenter/compactionService.test.ts
  • test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts
  • test/main/presenter/deepchatAgentPresenter/messageStore.test.ts
  • test/main/presenter/newAgentPresenter/integration.test.ts
  • test/main/presenter/sqlitePresenter/deepchatSessionsTable.test.ts
💤 Files with no reviewable changes (3)
  • .cursor/rules/development-setup.mdc
  • .cursor/rules/vue-stack-guide.mdc
  • .cursor/rules/i18n.mdc

Comment on lines +15 to +32
vi.mock('@/events', async (importOriginal) => {
const actual = await importOriginal<typeof import('@/events')>()
return {
...actual,
SESSION_EVENTS: {
LIST_UPDATED: 'session:list-updated',
ACTIVATED: 'session:activated',
DEACTIVATED: 'session:deactivated',
STATUS_CHANGED: 'session:status-changed',
COMPACTION_UPDATED: 'session:compaction-updated'
},
STREAM_EVENTS: {
RESPONSE: 'stream:response',
END: 'stream:end',
ERROR: 'stream:error'
}
}
}))

⚠️ Potential issue | 🟡 Minor

Preserve the real nested event maps in this mock.

SESSION_EVENTS and STREAM_EVENTS are replaced wholesale here. That makes this integration mock drift-prone: if @/events adds another key that production code uses, tests will see undefined unless this file is updated too. Starting from the real nested maps keeps the mock aligned while still letting you override COMPACTION_UPDATED.

♻️ Suggested fix
 vi.mock('@/events', async (importOriginal) => {
   const actual = await importOriginal<typeof import('@/events')>()
   return {
     ...actual,
     SESSION_EVENTS: {
+      ...actual.SESSION_EVENTS,
       LIST_UPDATED: 'session:list-updated',
       ACTIVATED: 'session:activated',
       DEACTIVATED: 'session:deactivated',
       STATUS_CHANGED: 'session:status-changed',
       COMPACTION_UPDATED: 'session:compaction-updated'
     },
     STREAM_EVENTS: {
+      ...actual.STREAM_EVENTS,
       RESPONSE: 'stream:response',
       END: 'stream:end',
       ERROR: 'stream:error'
     }
   }
 })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/main/presenter/newAgentPresenter/integration.test.ts` around lines 15 -
32, The mock currently replaces entire SESSION_EVENTS and STREAM_EVENTS objects
causing drift; update the vi.mock callback so it spreads the real nested maps
from the imported actual (e.g. use {...actual.SESSION_EVENTS,
COMPACTION_UPDATED: 'session:compaction-updated'} and {...actual.STREAM_EVENTS}
with any needed overrides) instead of overwriting them wholesale — locate the
vi.mock(async (importOriginal) => { const actual = await importOriginal<typeof
import('@/events')>() ... }) block and merge into actual.SESSION_EVENTS and
actual.STREAM_EVENTS while only overriding COMPACTION_UPDATED (and any
intentional stream overrides).

Comment on lines +135 to +145
getSummaryState: vi.fn((id: string) => {
const row = deepchatSessionsStore.get(id)
if (!row) {
return null
}
return {
summary_text: row.summary_text ?? null,
summary_cursor_order_seq: row.summary_cursor_order_seq ?? 1,
summary_updated_at: row.summary_updated_at ?? null
}
}),

⚠️ Potential issue | 🟡 Minor

Keep summary-state reads and writes on the same shape.

getSummaryState() returns snake_case fields, but the new write helpers all accept camelCase. That makes the mock contract internally inconsistent and can hide wiring bugs in the compaction path.

♻️ Suggested fix
       getSummaryState: vi.fn((id: string) => {
         const row = deepchatSessionsStore.get(id)
         if (!row) {
           return null
         }
         return {
-          summary_text: row.summary_text ?? null,
-          summary_cursor_order_seq: row.summary_cursor_order_seq ?? 1,
-          summary_updated_at: row.summary_updated_at ?? null
+          summaryText: row.summary_text ?? null,
+          summaryCursorOrderSeq: row.summary_cursor_order_seq ?? 1,
+          summaryUpdatedAt: row.summary_updated_at ?? null
         }
       }),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/main/presenter/newAgentPresenter/integration.test.ts` around lines 135 -
145, The mock getSummaryState currently returns snake_case keys (summary_text,
summary_cursor_order_seq, summary_updated_at) while the new write helpers expect
camelCase; make the shapes consistent by changing the getSummaryState
implementation to return camelCase properties (e.g. summaryText,
summaryCursorOrderSeq, summaryUpdatedAt) derived from
deepchatSessionsStore.get(id) so reads match the write helpers and avoid hiding
wiring bugs in the compaction path.

Comment on lines +592 to 596
expect(secondCallMessages[0].role).toBe('system')
expect(secondCallMessages[0].content).toContain('You are a helpful assistant.')
expect(secondCallMessages[0].content).toContain('## Runtime Capabilities')
// Should contain prior user and assistant messages before the new user message
expect(secondCallMessages.length).toBeGreaterThanOrEqual(3) // system + at least history + new user

⚠️ Potential issue | 🟡 Minor

Assert the previous turn, not a constant prompt fragment.

`## Runtime Capabilities` is always present when the normal system prompt builder runs, so this assertion doesn't tell us whether the second request actually includes the first exchange. Please assert that the previous user and assistant turns are present before the new user message instead. Cross-file evidence: src/main/presenter/agentPresenter/message/systemEnvPromptBuilder.ts:89-97 and src/main/presenter/deepchatAgentPresenter/contextBuilder.ts:395-428.

♻️ Suggested fix
     const secondCallMessages = providerInstance.coreStream.mock.calls[1][0]
     expect(secondCallMessages[0].role).toBe('system')
     expect(secondCallMessages[0].content).toContain('You are a helpful assistant.')
     expect(secondCallMessages[0].content).toContain('## Runtime Capabilities')
+    expect(secondCallMessages.slice(1, -1)).toEqual(
+      expect.arrayContaining([
+        expect.objectContaining({ role: 'user', content: 'Hello' }),
+        expect.objectContaining({ role: 'assistant' })
+      ])
+    )
     // Should contain prior user and assistant messages before the new user message
     expect(secondCallMessages.length).toBeGreaterThanOrEqual(3) // system + at least history + new user
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/main/presenter/newAgentPresenter/integration.test.ts` around lines 592 -
596, The test currently asserts for a constant fragment '## Runtime
Capabilities' which is always present; update the assertions on
secondCallMessages to verify the actual prior exchange appears before the new
user message by asserting that a user message and an assistant message from the
first turn exist in the sequence (e.g., check for roles 'user' and 'assistant'
and their expected contents from the first turn) and that they appear before the
final new user message; locate the test uses of secondCallMessages and replace
the content-based assertion with checks that validate the previous turn's user
and assistant messages (leveraging how systemEnvPromptBuilder and contextBuilder
assemble history) so the test confirms inclusion of the prior exchange rather
than a constant system prompt fragment.

@zerob13 zerob13 merged commit f09bf61 into dev Mar 6, 2026
2 checks passed