refactor(deepchat): add summary compaction #1331
Conversation
📝 Walkthrough

Adds session-level context compaction: a CompactionService plus storage, presenter, UI, type, and test changes to summarize and persist rolled-up conversation history and surface compaction status across main and renderer processes.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant User
  participant Agent as DeepChatAgent
  participant CompSvc as CompactionService
  participant MsgStore as MessageStore
  participant SessStore as SessionStore
  participant LLM as LLM Provider
  participant UI as UI/Renderer
  User->>Agent: sendMessage(content)
  Agent->>CompSvc: prepareForNextUserTurn()
  CompSvc->>SessStore: getSummaryState()
  SessStore-->>CompSvc: current summary state
  CompSvc->>MsgStore: fetch records from cursor
  MsgStore-->>CompSvc: chat history
  CompSvc->>CompSvc: prepareCompaction()
  alt needs summarization
    CompSvc->>CompSvc: generateRollingSummary()
    CompSvc->>CompSvc: groupBlocksByToken()
    CompSvc->>LLM: generateSummaryText(prompt)
    LLM-->>CompSvc: summarized content
    CompSvc->>CompSvc: sanitizeSummaryContent()
  end
  CompSvc-->>Agent: CompactionIntent
  Agent->>MsgStore: createCompactionMessage()
  MsgStore-->>Agent: compaction message created
  Agent->>SessStore: compareAndSetSummaryState(...)
  SessStore-->>Agent: result
  Agent->>UI: emit COMPACTION_UPDATED event
  UI->>UI: render compaction divider
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/renderer/src/stores/ui/message.ts (1)
109-115: ⚠️ Potential issue | 🟠 Major

`clear()` still allows stale stream hydrations to repopulate the store.
`latestLoadRequestId` only protects `loadMessages()`. A pending `newAgentPresenter.getMessage()` started by `applyStreamingBlocksToMessage()` can still resolve after `clear()` and insert a record from the previous session into the emptied cache.

Proposed fix
```diff
@@
 const hydratingStreamMessageIds = new Set<string>()
 let latestLoadRequestId = 0
+let latestHydrationGeneration = 0
@@
 function clear(): void {
   latestLoadRequestId += 1
+  latestHydrationGeneration += 1
   messageIds.value = []
   messageCache.value.clear()
   clearStreamingState()
   hydratingStreamMessageIds.clear()
 }
@@
 function applyStreamingBlocksToMessage(
   messageId: string,
   conversationId: string,
   blocks: AssistantMessageBlock[]
 ): void {
+  const hydrationGeneration = latestHydrationGeneration
   const serializedBlocks = JSON.stringify(blocks)
   const existing = messageCache.value.get(messageId)
@@
   void newAgentPresenter
     .getMessage(messageId)
     .then((fetched) => {
-      if (!fetched || fetched.sessionId !== conversationId) return
+      if (
+        !fetched ||
+        fetched.sessionId !== conversationId ||
+        hydrationGeneration !== latestHydrationGeneration
+      ) {
+        return
+      }
       upsertMessageRecord({
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/src/stores/ui/message.ts` around lines 109 - 115, The clear() function doesn't prevent in-flight stream hydrations (started by applyStreamingBlocksToMessage() via newAgentPresenter.getMessage()) from repopulating messageCache after clearing; add a simple generation token to guard hydrations: introduce a numeric hydrationRequestId (or similar) alongside latestLoadRequestId, increment it inside clear(), capture its value before calling newAgentPresenter.getMessage() in applyStreamingBlocksToMessage(), and when the getMessage() promise resolves verify the captured hydrationRequestId still matches the current one before inserting into messageCache or messageIds/hydratingStreamMessageIds; if it doesn't match, discard the result to prevent stale data from being re-added.

src/main/presenter/deepchatAgentPresenter/contextBuilder.ts (1)
440-456: ⚠️ Potential issue | 🟠 Major

Fail fast when the resume target no longer exists.
If `assistantMessageId` is missing, `targetOrderSeq` stays `undefined` and this filter silently falls back to the whole sent history. That regenerates against the wrong branch instead of aborting the resume request.

Possible fix
```diff
 const allMessages = messageStore.getMessages(sessionId)
 const targetMessage = allMessages.find((message) => message.id === assistantMessageId)
-const targetOrderSeq = targetMessage?.orderSeq
+if (!targetMessage) {
+  return systemPrompt ? [{ role: 'system', content: systemPrompt }] : []
+}
+const targetOrderSeq = targetMessage.orderSeq
 const cursor = Math.max(1, options.summaryCursorOrderSeq ?? 1)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/deepchatAgentPresenter/contextBuilder.ts` around lines 440 - 456, The code silently falls back to full history when assistantMessageId is provided but targetMessage is not found; after calling messageStore.getMessages(sessionId) and computing targetMessage/targetOrderSeq, add a fail-fast check: if assistantMessageId is truthy and targetMessage is undefined, throw or return an error (e.g., throw new Error or return a rejected result) to abort the resume flow instead of continuing; update the logic around targetMessage/targetOrderSeq and historyRecords so the filter only runs when the target exists.
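The fail-fast lookup suggested above can be reduced to a standalone sketch. The `Message` type and function name below are hypothetical stand-ins, not the presenter's actual API:

```typescript
// Sketch: abort instead of silently falling back when a resume target is gone.
// Message and resolveTargetOrderSeq are hypothetical names for illustration.
interface Message {
  id: string
  orderSeq: number
}

function resolveTargetOrderSeq(messages: Message[], targetId: string): number {
  const target = messages.find((m) => m.id === targetId)
  if (!target) {
    // Fail fast: regenerating against the full history would use the wrong branch.
    throw new Error(`Resume target ${targetId} no longer exists`)
  }
  return target.orderSeq
}
```

Throwing (or returning an explicit error result) surfaces the broken resume request instead of quietly producing output on the wrong branch.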
🧹 Nitpick comments (6)
src/shared/types/presenters/legacy.presenters.d.ts (1)
1014-1020: Avoid maintaining this presenter contract in two places.
`generateText()` now has to stay synchronized in both `src/shared/types/presenters/llmprovider.presenter.d.ts` and this legacy declaration. Pulling the common `ILlmProviderPresenter` shape into one shared base type would reduce future API drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/shared/types/presenters/legacy.presenters.d.ts` around lines 1014 - 1020, The legacy presenter declaration duplicates the generateText contract; refactor by extracting the shared shape into a single base interface (e.g., ILlmProviderPresenter) that declares generateText(providerId: string, prompt: string, modelId: string, temperature?: number, maxTokens?: number): Promise<{ content: string }>, then have the legacy presenter type extend or re-export that base interface instead of re-declaring generateText so both ILlmProviderPresenter and the legacy presenter reference the same definition and avoid drift.

src/renderer/src/components/chat/messageListItems.ts (1)
12-14: Make `isCompactionMessageItem` a type guard.

Returning `boolean` prevents type narrowing in consumers. The guard is used in `MessageList.vue` template conditionals and filters where knowing `messageType === 'compaction'` would help type safety.

♻️ Proposed refactor
```diff
-export function isCompactionMessageItem(item: MessageListItem): boolean {
+export function isCompactionMessageItem(
+  item: MessageListItem
+): item is MessageListItem & { messageType: 'compaction' } {
   return item.messageType === 'compaction'
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/src/components/chat/messageListItems.ts` around lines 12 - 14, Change isCompactionMessageItem to be a TypeScript type guard so callers get narrowed types; update its signature from returning boolean to: export function isCompactionMessageItem(item: MessageListItem): item is Extract<MessageListItem, { messageType: 'compaction' }> and keep the body (return item.messageType === 'compaction'). This will let consumers (e.g., MessageList.vue filters/conditionals) treat the item as the compaction-specific variant without losing runtime behavior.

test/renderer/components/MessageList.test.ts (2)
84-89: Consider adding `compactionStatus` to the base `createMessage` return type.

The `createMessage` helper sets `messageType: 'normal'` and `summaryUpdatedAt: null`, but `compactionStatus` is only added in `createCompactionMessage`. For type consistency, consider initializing `compactionStatus: undefined` in the base helper to make the shape explicit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/renderer/components/MessageList.test.ts` around lines 84 - 89, The base test helper createMessage currently omits compactionStatus (only createCompactionMessage adds it), causing inconsistent shapes; update createMessage to include compactionStatus: undefined in its returned object so the message shape is explicit and consistent with createCompactionMessage, and adjust any related test type/interface expectations (e.g., the return type of createMessage or shared Message fixture type) to reflect compactionStatus possibly being undefined.
6-14: Consider using English translations in test mocks for readability.

The i18n mock returns Chinese strings (`'正在自动压缩...'`, `'上下文已自动压缩'`). While this correctly simulates localized behavior, using English translations in tests improves readability and maintainability for all contributors.

Suggested change
```diff
 vi.mock('vue-i18n', () => ({
   useI18n: () => ({
     t: (key: string) => {
-      if (key === 'chat.compaction.compacting') return '正在自动压缩...'
-      if (key === 'chat.compaction.compacted') return '上下文已自动压缩'
+      if (key === 'chat.compaction.compacting') return 'Auto-compacting...'
+      if (key === 'chat.compaction.compacted') return 'Context auto-compacted'
       return key
     }
   })
 }))
```

Update assertions accordingly on lines 118, 130, and 143.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/renderer/components/MessageList.test.ts` around lines 6 - 14, The i18n mock in the test uses Chinese strings for the keys 'chat.compaction.compacting' and 'chat.compaction.compacted'; change those returned values to English (e.g., 'Auto-compacting...' and 'Context auto-compacted') in the useI18n mock, and update any assertions that compare against those messages to the new English text so tests remain readable; locate the useI18n mock and the test assertions that reference those i18n keys and update their expected strings accordingly.

src/renderer/src/components/chat/MessageList.vue (1)
153-206: Consider using Tailwind CSS for static styles where possible.

Per coding guidelines, Tailwind CSS is preferred over scoped CSS. While the keyframe animation requires custom CSS, the static layout styles (flexbox, padding, colors) could use Tailwind classes. However, the reduced-motion media query and animation are appropriately handled in scoped CSS.
As per coding guidelines: "Use Tailwind CSS for all styling instead of writing scoped CSS files"
Partial Tailwind migration example
The divider element could use Tailwind classes:
```vue
<div
  v-if="isCompactionMessageItem(item)"
  data-compaction-indicator="true"
  :data-compaction-status="item.compactionStatus ?? 'compacted'"
  class="flex items-center gap-3.5 py-4 select-none"
>
```

Keep the scoped CSS for animation-specific styles only.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/src/components/chat/MessageList.vue` around lines 153 - 206, The scoped CSS in MessageList.vue mixes static layout styles with animation; move static styles for the compaction divider into Tailwind classes on the divider element (the element rendered when isCompactionMessageItem(item) is true — currently targeted by .compaction-divider, .compaction-divider__line, and .compaction-divider__label), e.g. replace flex, gap, padding, user-select, and simple color/font-size/weight/whitespace rules with equivalent Tailwind classes on that element and its children, then remove those static declarations from the scoped CSS; keep only the animation-specific rules (keyframes compaction-breathe, .compaction-divider__label--compacting animation and the prefers-reduced-motion block) in the scoped CSS so animation and reduced-motion behavior remain intact.

src/main/presenter/deepchatAgentPresenter/index.ts (1)
1898-1946: Consider adding error logging when compaction fails.

When `applyCompaction` fails (lines 1935-1937), the compaction message is deleted but no error is logged. Consider adding a warning log to aid debugging.

Suggested change
```diff
 if (result.succeeded) {
   this.messageStore.updateCompactionMessage(
     compactionMessageId,
     'compacted',
     result.summaryState.summaryUpdatedAt
   )
 } else {
+  console.warn('[DeepChatAgent] Compaction failed, reverting to previous state')
   this.messageStore.deleteMessage(compactionMessageId)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/deepchatAgentPresenter/index.ts` around lines 1898 - 1946, The compaction failure path in applyCompactionIntent doesn't log errors; when compactionService.applyCompaction returns result.succeeded === false (the branch that calls this.messageStore.deleteMessage), add a warning/error log using the class's existing logger (e.g., this.logger.warn or similar) that includes the sessionId, compactionMessageId, intent.targetCursorOrderSeq, and any error/details from result (or result.summaryState) to aid debugging; update the branch around compactionService.applyCompaction → messageStore.deleteMessage to emit that log before deleting the message and before calling emitCompactionState.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main/presenter/deepchatAgentPresenter/compactionService.ts`:
- Around line 466-480: The branch that calls generateSummaryText after
splitLargeBlock returns the same number of blocks can still pass an oversized
payload into generateSummaryText; before calling generateSummaryText (in the
block that checks splitBlocks.length === normalizedBlocks.length) compute the
token count of splitBlocks.join('\n\n') against the allowed input budget (use
the same budget calculation as getMaxChunkTokens/options.model.contextLength and
options.reserveTokens and options.previousSummary token cost) and if the joined
payload still exceeds the budget, do not call generateSummaryText — instead fall
back to the existing summarizeBlocks flow (or reduce chunkTokens and re-run
splitLargeBlock) so you never feed an over-budget payload; reference functions:
getMaxChunkTokens, splitLargeBlock, generateSummaryText, summarizeBlocks,
groupBlocksByToken.
- Around line 551-579: The prompt currently injects untrusted strings directly
via buildSummaryPrompt(previousSummary, spanText) which allows prompt injection;
sanitize or neutralize both previousSummary and spanText before injecting: treat
them as untrusted by escaping or stripping any lines that begin with system-like
directives (e.g., lines starting with "You are", "##", "Output format", or
"Produce"), or wrap them as an explicit quoted/fenced block labeled
"User-provided (untrusted) content — DO NOT EXECUTE" so the model is instructed
not to treat them as instructions; update buildSummaryPrompt to call a sanitizer
helper (e.g., sanitizeSummaryInput) and ensure any caller such as
appendSummarySection uses the sanitized output.
- Around line 71-91: serializeUserRecord currently strips file bodies and only
keeps metadata, causing uploaded document contents to be lost during compaction;
update serializeUserRecord to reuse the same user-content serialization used by
normal context building (e.g., call buildUserMessageContent and
normalizeUserInput) or otherwise include a bounded representation of file
contents (e.g., truncated/plaintext snippet) when files exist, and ensure
composeSections still receives the combined text+file-content block so that file
bodies are preserved in summaries; target the serializeUserRecord function and
related helpers (buildUserMessageContent, normalizeUserInput, composeSections)
when making the change.
- Around line 240-252: The code unconditionally writes the generated summary
which can regress the session when an older async compaction finishes after a
newer one; after generateRollingSummary() and before calling
this.sessionStore.updateSummaryState, re-read the current summary state for
intent.sessionId (e.g., via this.sessionStore.getSummaryState or equivalent) and
compare its summaryCursorOrderSeq to intent.targetCursorOrderSeq, and only call
this.sessionStore.updateSummaryState(intent.sessionId, updatedState) if the
current stored summaryCursorOrderSeq is <= intent.targetCursorOrderSeq (or use a
strict < if you prefer not to overwrite equal versions); if your store supports
a CAS/update-if-newer API prefer that to make the check atomic.
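The update-if-newer guard described above can be sketched in isolation. The store shape and names below are hypothetical stand-ins, not the session store's actual API:

```typescript
// Sketch: drop a stale async compaction result instead of overwriting newer state.
// SummaryState and the Map-backed store are hypothetical for illustration.
interface SummaryState {
  summaryText: string
  summaryCursorOrderSeq: number
}

const summaryStore = new Map<string, SummaryState>()

function updateSummaryIfNewer(sessionId: string, next: SummaryState): boolean {
  const current = summaryStore.get(sessionId)
  if (current && current.summaryCursorOrderSeq > next.summaryCursorOrderSeq) {
    // An older compaction finished after a newer one; discard the stale write.
    return false
  }
  summaryStore.set(sessionId, next)
  return true
}
```

A real store would want this check atomic (e.g. a conditional UPDATE or a CAS API), since the read and write above can still race.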
In `@src/main/presenter/sqlitePresenter/tables/deepchatMessages.ts`:
- Around line 202-204: The current delete(messageId) method on the deepchat
messages table only removes the message row and will leave orphaned rows in
deepchat_message_traces; fix by coordinating cleanup either by adding a foreign
key with ON DELETE CASCADE to the deepchat_message_traces schema (ensure the
schema migration/creation includes FOREIGN KEY(message_id) REFERENCES
deepchat_messages(id) ON DELETE CASCADE) or by updating the presenter flow so
deleteMessage() calls deepchatMessageTracesTable.deleteByMessageId(messageId)
before/alongside deepchatMessagesTable.delete(messageId); target the
delete(messageId) implementation in the deepchatMessages table and the presenter
method deleteMessage(), or the traces table schema/migration that defines
deepchat_message_traces.
---
Outside diff comments:
In `@src/main/presenter/deepchatAgentPresenter/contextBuilder.ts`:
- Around line 440-456: The code silently falls back to full history when
assistantMessageId is provided but targetMessage is not found; after calling
messageStore.getMessages(sessionId) and computing targetMessage/targetOrderSeq,
add a fail-fast check: if assistantMessageId is truthy and targetMessage is
undefined, throw or return an error (e.g., throw new Error or return a rejected
result) to abort the resume flow instead of continuing; update the logic around
targetMessage/targetOrderSeq and historyRecords so the filter only runs when the
target exists.
In `@src/renderer/src/stores/ui/message.ts`:
- Around line 109-115: The clear() function doesn't prevent in-flight stream
hydrations (started by applyStreamingBlocksToMessage() via
newAgentPresenter.getMessage()) from repopulating messageCache after clearing;
add a simple generation token to guard hydrations: introduce a numeric
hydrationRequestId (or similar) alongside latestLoadRequestId, increment it
inside clear(), capture its value before calling newAgentPresenter.getMessage()
in applyStreamingBlocksToMessage(), and when the getMessage() promise resolves
verify the captured hydrationRequestId still matches the current one before
inserting into messageCache or messageIds/hydratingStreamMessageIds; if it
doesn't match, discard the result to prevent stale data from being re-added.
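The generation-token guard described above, reduced to a standalone sketch — the cache, fetcher, and names here are hypothetical, not the store's actual API:

```typescript
// Sketch: a generation counter invalidates in-flight async hydrations on clear.
let hydrationGeneration = 0
const cache = new Map<string, string>()

function clearCache(): void {
  hydrationGeneration += 1 // every pending hydration becomes stale
  cache.clear()
}

async function hydrate(id: string, fetcher: (id: string) => Promise<string>): Promise<void> {
  const startedAt = hydrationGeneration // capture before awaiting
  const value = await fetcher(id)
  if (startedAt !== hydrationGeneration) {
    return // clearCache() ran while we were fetching; discard the stale result
  }
  cache.set(id, value)
}
```

Any hydration that was awaiting when `clearCache()` ran sees a mismatched generation and drops its result instead of repopulating the emptied cache.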
---
Nitpick comments:
In `@src/main/presenter/deepchatAgentPresenter/index.ts`:
- Around line 1898-1946: The compaction failure path in applyCompactionIntent
doesn't log errors; when compactionService.applyCompaction returns
result.succeeded === false (the branch that calls
this.messageStore.deleteMessage), add a warning/error log using the class's
existing logger (e.g., this.logger.warn or similar) that includes the sessionId,
compactionMessageId, intent.targetCursorOrderSeq, and any error/details from
result (or result.summaryState) to aid debugging; update the branch around
compactionService.applyCompaction → messageStore.deleteMessage to emit that log
before deleting the message and before calling emitCompactionState.
In `@src/renderer/src/components/chat/MessageList.vue`:
- Around line 153-206: The scoped CSS in MessageList.vue mixes static layout
styles with animation; move static styles for the compaction divider into
Tailwind classes on the divider element (the element rendered when
isCompactionMessageItem(item) is true — currently targeted by
.compaction-divider, .compaction-divider__line, and .compaction-divider__label),
e.g. replace flex, gap, padding, user-select, and simple
color/font-size/weight/whitespace rules with equivalent Tailwind classes on that
element and its children, then remove those static declarations from the scoped
CSS; keep only the animation-specific rules (keyframes compaction-breathe,
.compaction-divider__label--compacting animation and the prefers-reduced-motion
block) in the scoped CSS so animation and reduced-motion behavior remain intact.
In `@src/renderer/src/components/chat/messageListItems.ts`:
- Around line 12-14: Change isCompactionMessageItem to be a TypeScript type
guard so callers get narrowed types; update its signature from returning boolean
to: export function isCompactionMessageItem(item: MessageListItem): item is
Extract<MessageListItem, { messageType: 'compaction' }> and keep the body
(return item.messageType === 'compaction'). This will let consumers (e.g.,
MessageList.vue filters/conditionals) treat the item as the compaction-specific
variant without losing runtime behavior.
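For context, a self-contained sketch of how a user-defined type guard narrows a discriminated union — the union below is a simplified, hypothetical version of `MessageListItem`:

```typescript
// Sketch: type-guard narrowing on a discriminated union (simplified types).
type NormalItem = { messageType: 'normal'; text: string }
type CompactionItem = { messageType: 'compaction'; compactionStatus: 'compacting' | 'compacted' }
type MessageListItem = NormalItem | CompactionItem

function isCompactionMessageItem(item: MessageListItem): item is CompactionItem {
  return item.messageType === 'compaction'
}

function describe(item: MessageListItem): string {
  if (isCompactionMessageItem(item)) {
    // Narrowed: compactionStatus is accessible without a cast.
    return `compaction (${item.compactionStatus})`
  }
  return item.text // narrowed to NormalItem here
}
```

The runtime behavior is identical to the boolean version; only the `item is CompactionItem` return annotation changes, which lets the compiler narrow the union at call sites.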
In `@src/shared/types/presenters/legacy.presenters.d.ts`:
- Around line 1014-1020: The legacy presenter declaration duplicates the
generateText contract; refactor by extracting the shared shape into a single
base interface (e.g., ILlmProviderPresenter) that declares
generateText(providerId: string, prompt: string, modelId: string, temperature?:
number, maxTokens?: number): Promise<{ content: string }>, then have the legacy
presenter type extend or re-export that base interface instead of re-declaring
generateText so both ILlmProviderPresenter and the legacy presenter reference
the same definition and avoid drift.
In `@test/renderer/components/MessageList.test.ts`:
- Around line 84-89: The base test helper createMessage currently omits
compactionStatus (only createCompactionMessage adds it), causing inconsistent
shapes; update createMessage to include compactionStatus: undefined in its
returned object so the message shape is explicit and consistent with
createCompactionMessage, and adjust any related test type/interface expectations
(e.g., the return type of createMessage or shared Message fixture type) to
reflect compactionStatus possibly being undefined.
- Around line 6-14: The i18n mock in the test uses Chinese strings for the keys
'chat.compaction.compacting' and 'chat.compaction.compacted'; change those
returned values to English (e.g., 'Auto-compacting...' and 'Context
auto-compacted') in the useI18n mock, and update any assertions that compare
against those messages to the new English text so tests remain readable; locate
the useI18n mock and the test assertions that reference those i18n keys and
update their expected strings accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1d14c7a2-41cf-409a-82f0-ad314b99c6a1
📒 Files selected for processing (38)
- src/main/events.ts
- src/main/presenter/deepchatAgentPresenter/compactionService.ts
- src/main/presenter/deepchatAgentPresenter/contextBuilder.ts
- src/main/presenter/deepchatAgentPresenter/index.ts
- src/main/presenter/deepchatAgentPresenter/messageStore.ts
- src/main/presenter/deepchatAgentPresenter/sessionStore.ts
- src/main/presenter/newAgentPresenter/index.ts
- src/main/presenter/sqlitePresenter/tables/deepchatMessages.ts
- src/main/presenter/sqlitePresenter/tables/deepchatSessions.ts
- src/renderer/src/components/chat/ChatTopBar.vue
- src/renderer/src/components/chat/MessageList.vue
- src/renderer/src/components/chat/messageListItems.ts
- src/renderer/src/events.ts
- src/renderer/src/i18n/da-DK/chat.json
- src/renderer/src/i18n/en-US/chat.json
- src/renderer/src/i18n/fa-IR/chat.json
- src/renderer/src/i18n/fr-FR/chat.json
- src/renderer/src/i18n/he-IL/chat.json
- src/renderer/src/i18n/ja-JP/chat.json
- src/renderer/src/i18n/ko-KR/chat.json
- src/renderer/src/i18n/pt-BR/chat.json
- src/renderer/src/i18n/ru-RU/chat.json
- src/renderer/src/i18n/zh-CN/chat.json
- src/renderer/src/i18n/zh-HK/chat.json
- src/renderer/src/i18n/zh-TW/chat.json
- src/renderer/src/pages/ChatPage.vue
- src/renderer/src/stores/ui/message.ts
- src/shared/types/agent-interface.d.ts
- src/shared/types/presenters/legacy.presenters.d.ts
- src/shared/types/presenters/llmprovider.presenter.d.ts
- src/shared/types/presenters/new-agent.presenter.d.ts
- test/main/presenter/deepchatAgentPresenter/contextBuilder.test.ts
- test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts
- test/main/presenter/newAgentPresenter/integration.test.ts
- test/main/presenter/newAgentPresenter/newAgentPresenter.test.ts
- test/renderer/components/MessageList.test.ts
- test/renderer/stores/messageStore.test.ts
- test/renderer/stores/sessionStore.test.ts
Actionable comments posted: 3
🧹 Nitpick comments (1)
src/main/presenter/deepchatAgentPresenter/compactionService.ts (1)
171-183: Good secret redaction, but consider expanding patterns.

The `sanitizeSummaryContent` function redacts common secret patterns (Bearer tokens, `sk-` keys, AIza keys, JWTs). This is a solid baseline for preventing accidental secret leakage in summaries.

Consider whether additional patterns are needed for your use case (e.g., AWS keys starting with `AKIA`, GitHub tokens starting with `ghp_`/`gho_`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/deepchatAgentPresenter/compactionService.ts` around lines 171 - 183, sanitizeSummaryContent currently redacts Bearer, sk-, AIza, and JWT patterns but omits other common token formats; update the function (sanitizeSummaryContent) to add additional regex replacements for other secret patterns your app may encounter (e.g., AWS access keys like AKIA/ASIA, GitHub tokens like ghp_/gho_/ghu_/gha_/ghs_, and other provider-specific prefixes) by appending new .replace(...) calls with appropriate case-insensitive regexes and unified redaction labels (e.g., [REDACTED_SECRET] or [REDACTED_TOKEN]) so all known token families are covered while preserving the existing trimming and think-tag removal behavior.
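A small sketch of the table-driven redaction this suggests — the regexes below are illustrative assumptions, not the patterns `compactionService` actually ships:

```typescript
// Sketch: table-driven secret redaction; patterns are illustrative only.
const REDACTIONS: Array<[RegExp, string]> = [
  [/\bAKIA[0-9A-Z]{16}\b/g, '[REDACTED_AWS_KEY]'], // AWS access key IDs
  [/\bgh[pousr]_[A-Za-z0-9]{36,}\b/g, '[REDACTED_GITHUB_TOKEN]'], // ghp_/gho_/ghu_/ghs_/ghr_
  [/\bsk-[A-Za-z0-9-]{20,}\b/g, '[REDACTED_API_KEY]'] // sk- style provider keys
]

function redactSecrets(text: string): string {
  // Apply each pattern in turn with a unified redaction label.
  return REDACTIONS.reduce((acc, [pattern, label]) => acc.replace(pattern, label), text)
}
```

Keeping the patterns in one array makes it cheap to append new token families as they are identified.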
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/main/presenter/newAgentPresenter/integration.test.ts`:
- Around line 135-145: The mock getSummaryState currently returns snake_case
keys (summary_text, summary_cursor_order_seq, summary_updated_at) while the new
write helpers expect camelCase; make the shapes consistent by changing the
getSummaryState implementation to return camelCase properties (e.g. summaryText,
summaryCursorOrderSeq, summaryUpdatedAt) derived from
deepchatSessionsStore.get(id) so reads match the write helpers and avoid hiding
wiring bugs in the compaction path.
- Around line 15-32: The mock currently replaces entire SESSION_EVENTS and
STREAM_EVENTS objects causing drift; update the vi.mock callback so it spreads
the real nested maps from the imported actual (e.g. use
{...actual.SESSION_EVENTS, COMPACTION_UPDATED: 'session:compaction-updated'} and
{...actual.STREAM_EVENTS} with any needed overrides) instead of overwriting them
wholesale — locate the vi.mock(async (importOriginal) => { const actual = await
importOriginal<typeof import('@/events')>() ... }) block and merge into
actual.SESSION_EVENTS and actual.STREAM_EVENTS while only overriding
COMPACTION_UPDATED (and any intentional stream overrides).
- Around line 592-596: The test currently asserts for a constant fragment '##
Runtime Capabilities' which is always present; update the assertions on
secondCallMessages to verify the actual prior exchange appears before the new
user message by asserting that a user message and an assistant message from the
first turn exist in the sequence (e.g., check for roles 'user' and 'assistant'
and their expected contents from the first turn) and that they appear before the
final new user message; locate the test uses of secondCallMessages and replace
the content-based assertion with checks that validate the previous turn's user
and assistant messages (leveraging how systemEnvPromptBuilder and contextBuilder
assemble history) so the test confirms inclusion of the prior exchange rather
than a constant system prompt fragment.
---
Nitpick comments:
In `@src/main/presenter/deepchatAgentPresenter/compactionService.ts`:
- Around line 171-183: sanitizeSummaryContent currently redacts Bearer, sk-,
AIza, and JWT patterns but omits other common token formats; update the function
(sanitizeSummaryContent) to add additional regex replacements for other secret
patterns your app may encounter (e.g., AWS access keys like AKIA/ASIA, GitHub
tokens like ghp_/gho_/ghu_/gha_/ghs_, and other provider-specific prefixes) by
appending new .replace(...) calls with appropriate case-insensitive regexes and
unified redaction labels (e.g., [REDACTED_SECRET] or [REDACTED_TOKEN]) so all
known token families are covered while preserving the existing trimming and
think-tag removal behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 16909f15-ea72-4eeb-9854-e04fb76aa400
📒 Files selected for processing (12)
- .cursor/rules/development-setup.mdc
- .cursor/rules/i18n.mdc
- .cursor/rules/vue-stack-guide.mdc
- AGENTS.md
- src/main/presenter/deepchatAgentPresenter/compactionService.ts
- src/main/presenter/deepchatAgentPresenter/sessionStore.ts
- src/main/presenter/sqlitePresenter/tables/deepchatSessions.ts
- test/main/presenter/deepchatAgentPresenter/compactionService.test.ts
- test/main/presenter/deepchatAgentPresenter/deepchatAgentPresenter.test.ts
- test/main/presenter/deepchatAgentPresenter/messageStore.test.ts
- test/main/presenter/newAgentPresenter/integration.test.ts
- test/main/presenter/sqlitePresenter/deepchatSessionsTable.test.ts
💤 Files with no reviewable changes (3)
- .cursor/rules/development-setup.mdc
- .cursor/rules/vue-stack-guide.mdc
- .cursor/rules/i18n.mdc
```typescript
vi.mock('@/events', async (importOriginal) => {
  const actual = await importOriginal<typeof import('@/events')>()
  return {
    ...actual,
    SESSION_EVENTS: {
      LIST_UPDATED: 'session:list-updated',
      ACTIVATED: 'session:activated',
      DEACTIVATED: 'session:deactivated',
      STATUS_CHANGED: 'session:status-changed',
      COMPACTION_UPDATED: 'session:compaction-updated'
    },
    STREAM_EVENTS: {
      RESPONSE: 'stream:response',
      END: 'stream:end',
      ERROR: 'stream:error'
    }
  }
})
```
Preserve the real nested event maps in this mock.
SESSION_EVENTS and STREAM_EVENTS are replaced wholesale here. That makes this integration mock drift-prone: if @/events adds another key that production code uses, tests will see undefined unless this file is updated too. Starting from the real nested maps keeps the mock aligned while still letting you override COMPACTION_UPDATED.
♻️ Suggested fix
```diff
 vi.mock('@/events', async (importOriginal) => {
   const actual = await importOriginal<typeof import('@/events')>()
   return {
     ...actual,
     SESSION_EVENTS: {
+      ...actual.SESSION_EVENTS,
       LIST_UPDATED: 'session:list-updated',
       ACTIVATED: 'session:activated',
       DEACTIVATED: 'session:deactivated',
       STATUS_CHANGED: 'session:status-changed',
       COMPACTION_UPDATED: 'session:compaction-updated'
     },
     STREAM_EVENTS: {
+      ...actual.STREAM_EVENTS,
       RESPONSE: 'stream:response',
       END: 'stream:end',
       ERROR: 'stream:error'
     }
   }
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/main/presenter/newAgentPresenter/integration.test.ts` around lines 15 -
32, The mock currently replaces entire SESSION_EVENTS and STREAM_EVENTS objects
causing drift; update the vi.mock callback so it spreads the real nested maps
from the imported actual (e.g. use {...actual.SESSION_EVENTS,
COMPACTION_UPDATED: 'session:compaction-updated'} and {...actual.STREAM_EVENTS}
with any needed overrides) instead of overwriting them wholesale — locate the
vi.mock(async (importOriginal) => { const actual = await importOriginal<typeof
import('@/events')>() ... }) block and merge into actual.SESSION_EVENTS and
actual.STREAM_EVENTS while only overriding COMPACTION_UPDATED (and any
intentional stream overrides).
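The spread-merge pattern the fix relies on can be sketched outside vitest as plain TypeScript. This is a minimal illustration, not the project's code: the `SOME_FUTURE_KEY` entry and the `mockEvents` helper are hypothetical names standing in for whatever `@/events` might add later.

```typescript
// Sketch: spreading the real map first means unknown future keys survive,
// while explicit entries still win because later spread properties override
// earlier ones.
type EventMap = Record<string, string>

function mockEvents(actual: EventMap, overrides: EventMap): EventMap {
  // overrides come last, so they take precedence over the real entries
  return { ...actual, ...overrides }
}

const actualSessionEvents: EventMap = {
  LIST_UPDATED: 'session:list-updated',
  SOME_FUTURE_KEY: 'session:some-future-key' // a key only production knows about
}

const mocked = mockEvents(actualSessionEvents, {
  COMPACTION_UPDATED: 'session:compaction-updated'
})
// mocked keeps SOME_FUTURE_KEY and gains COMPACTION_UPDATED
```

A wholesale replacement would silently drop `SOME_FUTURE_KEY`; the merge keeps the mock one spread away from the real module.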
```ts
getSummaryState: vi.fn((id: string) => {
  const row = deepchatSessionsStore.get(id)
  if (!row) {
    return null
  }
  return {
    summary_text: row.summary_text ?? null,
    summary_cursor_order_seq: row.summary_cursor_order_seq ?? 1,
    summary_updated_at: row.summary_updated_at ?? null
  }
}),
```
**Keep summary-state reads and writes on the same shape.**

`getSummaryState()` returns snake_case fields, but the new write helpers all accept camelCase. That makes the mock contract internally inconsistent and can hide wiring bugs in the compaction path.
♻️ Suggested fix

```diff
 getSummaryState: vi.fn((id: string) => {
   const row = deepchatSessionsStore.get(id)
   if (!row) {
     return null
   }
   return {
-    summary_text: row.summary_text ?? null,
-    summary_cursor_order_seq: row.summary_cursor_order_seq ?? 1,
-    summary_updated_at: row.summary_updated_at ?? null
+    summaryText: row.summary_text ?? null,
+    summaryCursorOrderSeq: row.summary_cursor_order_seq ?? 1,
+    summaryUpdatedAt: row.summary_updated_at ?? null
   }
 }),
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
getSummaryState: vi.fn((id: string) => {
  const row = deepchatSessionsStore.get(id)
  if (!row) {
    return null
  }
  return {
    summaryText: row.summary_text ?? null,
    summaryCursorOrderSeq: row.summary_cursor_order_seq ?? 1,
    summaryUpdatedAt: row.summary_updated_at ?? null
  }
}),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/main/presenter/newAgentPresenter/integration.test.ts` around lines 135 -
145, The mock getSummaryState currently returns snake_case keys (summary_text,
summary_cursor_order_seq, summary_updated_at) while the new write helpers expect
camelCase; make the shapes consistent by changing the getSummaryState
implementation to return camelCase properties (e.g. summaryText,
summaryCursorOrderSeq, summaryUpdatedAt) derived from
deepchatSessionsStore.get(id) so reads match the write helpers and avoid hiding
wiring bugs in the compaction path.
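One way to keep both sides on a single shape is to funnel every read through one mapping helper. The sketch below is an illustration under assumed shapes: `SessionRow` and `SummaryState` are hypothetical interfaces inferred from the fields in the mock, not the project's real types.

```typescript
// Hypothetical row shape as it comes out of the session store (snake_case).
interface SessionRow {
  summary_text?: string | null
  summary_cursor_order_seq?: number | null
  summary_updated_at?: number | null
}

// Hypothetical camelCase shape that the write helpers accept.
interface SummaryState {
  summaryText: string | null
  summaryCursorOrderSeq: number
  summaryUpdatedAt: number | null
}

// Single conversion point: reads always emit the same shape writes consume,
// so a renamed column fails here instead of deep inside the compaction path.
function toSummaryState(row: SessionRow | undefined): SummaryState | null {
  if (!row) return null
  return {
    summaryText: row.summary_text ?? null,
    summaryCursorOrderSeq: row.summary_cursor_order_seq ?? 1,
    summaryUpdatedAt: row.summary_updated_at ?? null
  }
}

const state = toSummaryState({ summary_text: 'rolled-up history' })
```

With this in place, `getSummaryState` in the mock reduces to `toSummaryState(deepchatSessionsStore.get(id))`, and the camelCase contract is enforced in one spot.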
```ts
expect(secondCallMessages[0].role).toBe('system')
expect(secondCallMessages[0].content).toContain('You are a helpful assistant.')
expect(secondCallMessages[0].content).toContain('## Runtime Capabilities')
// Should contain prior user and assistant messages before the new user message
expect(secondCallMessages.length).toBeGreaterThanOrEqual(3) // system + at least history + new user
```
**Assert the previous turn, not a constant prompt fragment.**

`## Runtime Capabilities` is always present when the normal system prompt builder runs, so this assertion doesn't tell us whether the second request actually includes the first exchange. Please assert that the previous user and assistant turns are present before the new user message instead. Cross-file evidence: `src/main/presenter/agentPresenter/message/systemEnvPromptBuilder.ts:89-97` and `src/main/presenter/deepchatAgentPresenter/contextBuilder.ts:395-428`.
♻️ Suggested fix

```diff
 const secondCallMessages = providerInstance.coreStream.mock.calls[1][0]
 expect(secondCallMessages[0].role).toBe('system')
 expect(secondCallMessages[0].content).toContain('You are a helpful assistant.')
 expect(secondCallMessages[0].content).toContain('## Runtime Capabilities')
+expect(secondCallMessages.slice(1, -1)).toEqual(
+  expect.arrayContaining([
+    expect.objectContaining({ role: 'user', content: 'Hello' }),
+    expect.objectContaining({ role: 'assistant' })
+  ])
+)
 // Should contain prior user and assistant messages before the new user message
 expect(secondCallMessages.length).toBeGreaterThanOrEqual(3) // system + at least history + new user
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/main/presenter/newAgentPresenter/integration.test.ts` around lines 592 -
596, The test currently asserts for a constant fragment '## Runtime
Capabilities' which is always present; update the assertions on
secondCallMessages to verify the actual prior exchange appears before the new
user message by asserting that a user message and an assistant message from the
first turn exist in the sequence (e.g., check for roles 'user' and 'assistant'
and their expected contents from the first turn) and that they appear before the
final new user message; locate the test uses of secondCallMessages and replace
the content-based assertion with checks that validate the previous turn's user
and assistant messages (leveraging how systemEnvPromptBuilder and contextBuilder
assemble history) so the test confirms inclusion of the prior exchange rather
than a constant system prompt fragment.
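The intent of the suggested matcher can be sketched as a plain predicate, without vitest. This is an illustration only: the `Msg` shape, the `'Hello'` content, and the convention that the new user message sits last in the array are assumptions mirroring the test above, not project contracts.

```typescript
interface Msg {
  role: 'system' | 'user' | 'assistant'
  content: string
}

// True when the slice between the system prompt and the final (new) user
// message contains both sides of the earlier exchange.
function priorTurnPrecedesNewUser(messages: Msg[]): boolean {
  const middle = messages.slice(1, messages.length - 1)
  const hasPriorUser = middle.some((m) => m.role === 'user' && m.content === 'Hello')
  const hasAssistantReply = middle.some((m) => m.role === 'assistant')
  return hasPriorUser && hasAssistantReply
}

// A second-request payload shaped like the one the test inspects.
const secondCall: Msg[] = [
  { role: 'system', content: 'You are a helpful assistant. ## Runtime Capabilities' },
  { role: 'user', content: 'Hello' },
  { role: 'assistant', content: 'Hi there!' },
  { role: 'user', content: 'Follow-up question' }
]
```

Unlike the constant-fragment check, this predicate goes false whenever history is dropped from the second request, which is exactly the regression the test should catch.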
Summary by CodeRabbit

- New Features
- UI/UX
- Bug Fixes
- Localization