Conversation
📝 Walkthrough

Adds group-level eVault support: GraphQL/eVault fetch utilities, server APIs for group discovery, insights and message listing, sender bucketing and resolution, UI pages and charts for group listing/details/messages, and a small server-side sender-profile TTL cache.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Browser
    participant SvelteKit as Control Panel<br/>(SvelteKit)
    participant Registry as Registry<br/>Service
    participant eVault as eVault<br/>GraphQL
    participant Cache as Cache<br/>Service
    Client->>SvelteKit: GET /api/evaults/groups
    activate SvelteKit
    SvelteKit->>Registry: POST /certify (requestPlatformToken)
    Registry-->>SvelteKit: token? / error
    SvelteKit->>SvelteKit: fetchRegistryEvaultRows(token)
    SvelteKit->>Registry: registryService.getEVaults()
    Registry-->>SvelteKit: vault list
    loop per-vault
        SvelteKit->>eVault: GraphQL identity/parsed queries (evaultGraphqlPost)
        eVault-->>SvelteKit: parsed/metaEnvelope or null
    end
    SvelteKit->>SvelteKit: resolveVaultIdentity -> rows
    SvelteKit-->>Client: JSON { evaults: [...] }
    deactivate SvelteKit
```
```mermaid
sequenceDiagram
    participant Client as Browser
    participant SvelteKit as Control Panel<br/>(SvelteKit)
    participant Registry as Registry
    participant GroupEVault as Group eVault<br/>GraphQL
    participant UserEVault as User eVaults
    participant Cache as Cache<br/>Service
    Client->>SvelteKit: GET /api/evaults/{id}/group-insights
    activate SvelteKit
    SvelteKit->>Registry: getEVaults() (build lookups)
    SvelteKit->>Registry: requestPlatformToken
    SvelteKit->>GroupEVault: Query messages (paginated) by MESSAGE_ONTOLOGY_ID
    loop pages
        GroupEVault-->>SvelteKit: batch of metaEnvelopes (parsed)
        SvelteKit->>SvelteKit: extract sender ids, increment buckets, record hints
    end
    loop per-sender bucket
        SvelteKit->>Cache: getCachedSenderProfileDisplayName(bucketId)
        alt cache miss & UUID-like probe needed
            SvelteKit->>UserEVault: fetchMetaEnvelopeById(senderId) (evaultGraphqlPost)
            UserEVault-->>SvelteKit: profile parsed or null
            SvelteKit->>Cache: setCachedSenderProfileDisplayName(senderId, name)
        end
        SvelteKit->>SvelteKit: resolve displayName / ename / evaultPageId
    end
    SvelteKit-->>Client: JSON { manifest, messageStats, senderRows }
    deactivate SvelteKit
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

Suggested reviewers

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
infrastructure/control-panel/config/admin-enames.json (1)
2-10: ⚠️ Potential issue | 🟡 Minor

Deduplicate the `admins` array while updating this list.

There is a duplicate admin identifier in the current array (`@82f7a77a-f03a-52aa-88fc-1b1e488ad498` appears at both Line 4 and Line 6). Please remove duplicates to keep access-control config deterministic and easier to audit.

🧹 Proposed fix

```diff
 {
     "admins": [
         "@7218b67d-da21-54d6-9a85-7c4db1d09768",
         "@82f7a77a-f03a-52aa-88fc-1b1e488ad498",
         "@35a31f0d-dd76-5780-b383-29f219fcae99",
-        "@82f7a77a-f03a-52aa-88fc-1b1e488ad498",
         "@af7e4f55-ad9d-537c-81ef-4f3a234bdd2c",
         "@6e1bbcd4-1f59-5bd8-aa3c-6f5301c356d7",
         "@b995a88a-90d1-56fc-ba42-1e1eb664861c"
     ]
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infrastructure/control-panel/config/admin-enames.json` around lines 2 - 10, The admins array contains a duplicate entry "@82f7a77a-f03a-52aa-88fc-1b1e488ad498"; open the "admins" array in admin-enames.json and remove the duplicated identifier so each admin ID appears only once (i.e., deduplicate the "admins" array while preserving the remaining entries and order). Ensure the resulting JSON remains valid and that the "admins" array contains unique strings only.
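The fix asked for above boils down to order-preserving deduplication. A minimal sketch (`dedupeAdmins` is a hypothetical helper name, not part of the codebase):

```typescript
// Deduplicate an "admins" array while preserving first-occurrence order,
// mirroring the change proposed for admin-enames.json. A Set keeps
// insertion order, so the first copy of each identifier wins.
function dedupeAdmins(admins: string[]): string[] {
    return [...new Set(admins)];
}

const deduped = dedupeAdmins([
    "@7218b67d-da21-54d6-9a85-7c4db1d09768",
    "@82f7a77a-f03a-52aa-88fc-1b1e488ad498",
    "@35a31f0d-dd76-5780-b383-29f219fcae99",
    "@82f7a77a-f03a-52aa-88fc-1b1e488ad498"
]);
```

Running this once over the config file (or in a small pre-commit check) keeps the list deterministic without reordering existing entries.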
🧹 Nitpick comments (4)
infrastructure/control-panel/src/routes/groups/[evaultId]/+page.svelte (1)
12-12: Consider reloading when `evaultId` changes.

Currently, `load()` is only called in `onMount`, but `evaultId` is derived from route params. If the user navigates between different group pages without unmounting, the data won't refresh.

♻️ Proposed fix to react to param changes

```diff
-    onMount(load);
+    $effect(() => {
+        if (evaultId) load();
+    });
```

This ensures data is reloaded whenever `evaultId` changes, which can happen with client-side navigation between groups.

Also applies to: 37-37
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `infrastructure/control-panel/src/routes/groups/[evaultId]/+page.svelte` at line 12, The page currently only calls load() inside onMount while evaultId is derived from page.params, so client-side navigation to a different evaultId won't reload data; update the component to react to evaultId changes by subscribing/reactively watching evaultId (e.g., use a Svelte reactive statement $: or subscribe to the derived store) and call load() whenever evaultId changes (in addition to the existing onMount call), referencing the evaultId store and the load() function so data is re-fetched on param change.

infrastructure/control-panel/src/routes/groups/+page.svelte (1)
119-122: Minor: "Showing" text may display incorrect range when filtered list is empty.

When `filtered().length === 0`, the display shows "Showing 1 – 0 of 0" which is slightly confusing.

🔧 Optional fix for empty state display

```diff
-    <span>
-        Showing {(currentPage - 1) * itemsPerPage + 1} –
-        {Math.min(currentPage * itemsPerPage, filtered().length)} of {filtered().length}
-    </span>
+    <span>
+        {#if filtered().length > 0}
+            Showing {(currentPage - 1) * itemsPerPage + 1} –
+            {Math.min(currentPage * itemsPerPage, filtered().length)} of {filtered().length}
+        {:else}
+            No results
+        {/if}
+    </span>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `infrastructure/control-panel/src/routes/groups/+page.svelte` around lines 119 - 122, The "Showing" range renders "1 – 0 of 0" when filtered().length is 0; update the template to compute a start value that is 0 when filtered().length === 0 (e.g. start = filtered().length ? (currentPage - 1) * itemsPerPage + 1 : 0) and use that start along with end = Math.min(currentPage * itemsPerPage, filtered().length) so the display reads "Showing 0 – 0 of 0" (or "Showing 0 of 0") for empty results; change the expressions in the component where filtered(), currentPage, and itemsPerPage are used to render the corrected start/end values.

infrastructure/control-panel/src/routes/api/evaults/[evaultId]/group-insights/+server.ts (1)
726-737: Summary log includes user identifiers.

The `byBucket` object logged here contains sender bucket keys (enames, UUIDs) mapped to counts. While less sensitive than message content, consider whether these user identifiers should appear in server logs for compliance purposes. If this is intentional for debugging, document it; otherwise, consider logging only aggregate counts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `infrastructure/control-panel/src/routes/api/evaults/[evaultId]/group-insights/+server.ts` around lines 726 - 737, The summary console.log is emitting user identifiers via the byBucket object; remove or replace that sensitive detail by logging only aggregates (e.g., total buckets and counts) or a redacted form. Update the log in the block that prints '[group-insights:summary]' to exclude byBucket keys (use something like bucketCount = Object.keys(byBucket).length and bucketTotals = Object.values(byBucket) or a masked summary) and keep senderRowCount, capped, messagesScanned, etc.; if the original detailed byBucket was intentionally needed for debugging, add an explicit comment and gate it behind a debug flag (using LOG_EACH_MESSAGE or a new DEBUG_LOG flag) so identifiers are not emitted in normal server logs.

🧹 Nitpick comments (continued)

infrastructure/control-panel/src/lib/server/evault-graphql.ts (1)
384-401: Unbounded concurrency could cause issues with large registries.
`Promise.all` spawns GraphQL requests for all vaults simultaneously. Each `resolveVaultIdentity` call can make 1-3 GraphQL requests, so with 100+ vaults, this could create hundreds of concurrent requests—risking timeouts, rate limits, or memory pressure.

The group-insights handler uses `mapInChunks` for similar operations. Consider applying chunked concurrency here as well.

♻️ Proposed fix: Add chunked concurrency

```diff
+async function mapInChunks<T, R>(items: T[], chunkSize: number, fn: (item: T) => Promise<R>): Promise<R[]> {
+    const out: R[] = [];
+    for (let i = 0; i < items.length; i += chunkSize) {
+        const chunk = items.slice(i, i + chunkSize);
+        out.push(...(await Promise.all(chunk.map(fn))));
+    }
+    return out;
+}
+
 export async function fetchRegistryEvaultRows(token?: string): Promise<RegistryEvaultRow[]> {
     const registryVaults = await registryService.getEVaults();
-    return Promise.all(
-        registryVaults.map(async (vault) => {
+    return mapInChunks(registryVaults, 10, async (vault) => {
         const evaultId = vault.evault || vault.ename;
         const identity = await resolveVaultIdentity(vault, token);
         return {
             id: evaultId,
             name: identity.name,
             type: identity.type,
             ename: vault.ename,
             uri: vault.uri,
             evault: vault.evault,
             status: 'Unknown',
             serviceUrl: vault.uri
         };
-        })
-    );
+    });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infrastructure/control-panel/src/lib/server/evault-graphql.ts` around lines 384 - 401, The current fetchRegistryEvaultRows function uses Promise.all over registryVaults.map which can spawn unbounded concurrent resolveVaultIdentity calls; change this to process vaults in controlled concurrency using the existing chunked helper (e.g., mapInChunks or similar used in group-insights) so only a fixed number of resolveVaultIdentity calls run at once: fetch registryVaults via registryService.getEVaults(), then call mapInChunks(registryVaults, async vault => { const identity = await resolveVaultIdentity(vault, token); return { id: evaultId, name: identity.name, ... }; }, <reasonableChunkSize>) and return the flattened results, keeping references to fetchRegistryEvaultRows, resolveVaultIdentity, registryService.getEVaults and mapInChunks to locate the code.
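The batching behind a bounded-concurrency helper like the suggested `mapInChunks` can be sketched as follows. `splitIntoChunks` is a hypothetical name, and the chunk size of 10 matches the value proposed in the diff; only one chunk's worth of `resolveVaultIdentity`-style calls would be awaited at a time:

```typescript
// Split an array into fixed-size batches. A bounded-concurrency mapper
// awaits each batch in turn, so at most `chunkSize` requests are in
// flight simultaneously instead of one per vault.
function splitIntoChunks<T>(items: T[], chunkSize: number): T[][] {
    const chunks: T[][] = [];
    for (let i = 0; i < items.length; i += chunkSize) {
        chunks.push(items.slice(i, i + chunkSize));
    }
    return chunks;
}

// 25 vaults with a chunk size of 10 → 3 sequential batches (10, 10, 5).
const batches = splitIntoChunks(Array.from({ length: 25 }, (_, i) => i), 10);
```

The trade-off is latency (batches run sequentially) for predictability (a registry with hundreds of vaults cannot fan out into hundreds of concurrent GraphQL calls).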
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `infrastructure/control-panel/src/routes/api/evaults/[evaultId]/group-insights/+server.ts`:
- Around line 592-593: LOG_EACH_MESSAGE currently defaults to true which can
expose PII; change it to be opt-in by evaluating
CONTROL_PANEL_LOG_GROUP_MESSAGES === '1' (or similar explicit truthy value) so
logging is disabled unless the env var is set, update the LOG_EACH_MESSAGE
constant accordingly, and ensure any downstream usage (e.g., places that log
message.preview) respects the new flag and avoid logging preview content unless
explicitly enabled.
---
Outside diff comments:
In `@infrastructure/control-panel/config/admin-enames.json`:
- Around line 2-10: The admins array contains a duplicate entry
"@82f7a77a-f03a-52aa-88fc-1b1e488ad498"; open the "admins" array in
admin-enames.json and remove the duplicated identifier so each admin ID appears
only once (i.e., deduplicate the "admins" array while preserving the remaining
entries and order). Ensure the resulting JSON remains valid and that the
"admins" array contains unique strings only.
---
Nitpick comments:
In `@infrastructure/control-panel/src/lib/server/evault-graphql.ts`:
- Around line 384-401: The current fetchRegistryEvaultRows function uses
Promise.all over registryVaults.map which can spawn unbounded concurrent
resolveVaultIdentity calls; change this to process vaults in controlled
concurrency using the existing chunked helper (e.g., mapInChunks or similar used
in group-insights) so only a fixed number of resolveVaultIdentity calls run at
once: fetch registryVaults via registryService.getEVaults(), then call
mapInChunks(registryVaults, async vault => { const identity = await
resolveVaultIdentity(vault, token); return { id: evaultId, name: identity.name,
... }; }, <reasonableChunkSize>) and return the flattened results, keeping
references to fetchRegistryEvaultRows, resolveVaultIdentity,
registryService.getEVaults and mapInChunks to locate the code.
In `infrastructure/control-panel/src/routes/api/evaults/[evaultId]/group-insights/+server.ts`:
- Around line 726-737: The summary console.log is emitting user identifiers via
the byBucket object; remove or replace that sensitive detail by logging only
aggregates (e.g., total buckets and counts) or a redacted form. Update the log
in the block that prints '[group-insights:summary]' to exclude byBucket keys
(use something like bucketCount = Object.keys(byBucket).length and bucketTotals
= Object.values(byBucket) or a masked summary) and keep senderRowCount, capped,
messagesScanned, etc.; if the original detailed byBucket was intentionally
needed for debugging, add an explicit comment and gate it behind a debug flag
(using LOG_EACH_MESSAGE or a new DEBUG_LOG flag) so identifiers are not emitted
in normal server logs.
In `infrastructure/control-panel/src/routes/groups/[evaultId]/+page.svelte`:
- Line 12: The page currently only calls load() inside onMount while evaultId is
derived from page.params, so client-side navigation to a different evaultId
won't reload data; update the component to react to evaultId changes by
subscribing/reactively watching evaultId (e.g., use a Svelte reactive statement
$: or subscribe to the derived store) and call load() whenever evaultId changes
(in addition to the existing onMount call), referencing the evaultId store and
the load() function so data is re-fetched on param change.
In `infrastructure/control-panel/src/routes/groups/+page.svelte`:
- Around line 119-122: The "Showing" range renders "1 – 0 of 0" when
filtered().length is 0; update the template to compute a start value that is 0
when filtered().length === 0 (e.g. start = filtered().length ? (currentPage - 1)
* itemsPerPage + 1 : 0) and use that start along with end = Math.min(currentPage
* itemsPerPage, filtered().length) so the display reads "Showing 0 – 0 of 0" (or
"Showing 0 of 0") for empty results; change the expressions in the component
where filtered(), currentPage, and itemsPerPage are used to render the corrected
start/end values.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0e74d5e3-d23c-4315-bce8-2275a17452c4
📒 Files selected for processing (12)
- infrastructure/control-panel/config/admin-enames.json
- infrastructure/control-panel/src/lib/server/evault-graphql.ts
- infrastructure/control-panel/src/lib/services/cacheService.ts
- infrastructure/control-panel/src/lib/services/evaultService.ts
- infrastructure/control-panel/src/routes/+layout.svelte
- infrastructure/control-panel/src/routes/+page.svelte
- infrastructure/control-panel/src/routes/api/evaults/+server.ts
- infrastructure/control-panel/src/routes/api/evaults/[evaultId]/group-insights/+server.ts
- infrastructure/control-panel/src/routes/api/evaults/groups/+server.ts
- infrastructure/control-panel/src/routes/evaults/[evaultId]/+page.svelte
- infrastructure/control-panel/src/routes/groups/+page.svelte
- infrastructure/control-panel/src/routes/groups/[evaultId]/+page.svelte
```ts
/** Log one JSON line per scanned message. Set `CONTROL_PANEL_LOG_GROUP_MESSAGES=0` to disable. */
const LOG_EACH_MESSAGE = process.env.CONTROL_PANEL_LOG_GROUP_MESSAGES !== '0';
```
Default-on message logging could expose PII in production logs.
LOG_EACH_MESSAGE defaults to true (enabled when env var is unset), which logs every message including the preview field containing truncated message body content. This could inadvertently log sensitive user communications to server logs.
Consider inverting the default so logging is opt-in rather than opt-out:
🔒 Proposed fix: Make message logging opt-in
```diff
-/** Log one JSON line per scanned message. Set `CONTROL_PANEL_LOG_GROUP_MESSAGES=0` to disable. */
-const LOG_EACH_MESSAGE = process.env.CONTROL_PANEL_LOG_GROUP_MESSAGES !== '0';
+/** Log one JSON line per scanned message. Set `CONTROL_PANEL_LOG_GROUP_MESSAGES=1` to enable. */
+const LOG_EACH_MESSAGE = process.env.CONTROL_PANEL_LOG_GROUP_MESSAGES === '1';
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
/** Log one JSON line per scanned message. Set `CONTROL_PANEL_LOG_GROUP_MESSAGES=1` to enable. */
const LOG_EACH_MESSAGE = process.env.CONTROL_PANEL_LOG_GROUP_MESSAGES === '1';
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `infrastructure/control-panel/src/routes/api/evaults/[evaultId]/group-insights/+server.ts`
around lines 592 - 593, LOG_EACH_MESSAGE currently defaults to true which can
expose PII; change it to be opt-in by evaluating
CONTROL_PANEL_LOG_GROUP_MESSAGES === '1' (or similar explicit truthy value) so
logging is disabled unless the env var is set, update the LOG_EACH_MESSAGE
constant accordingly, and ensure any downstream usage (e.g., places that log
message.preview) respects the new flag and avoid logging preview content unless
explicitly enabled.
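The difference between the current opt-out flag and the proposed opt-in flag can be sketched as follows (`optOutFlag`/`optInFlag` are illustrative names, not the real constants; the real code reads `process.env.CONTROL_PANEL_LOG_GROUP_MESSAGES` directly):

```typescript
// Opt-out: logging runs unless the operator explicitly sets the var to "0".
function optOutFlag(value: string | undefined): boolean {
    return value !== "0"; // current behavior: true when the env var is unset
}

// Opt-in: logging runs only when the operator explicitly sets the var to "1".
function optInFlag(value: string | undefined): boolean {
    return value === "1"; // proposed behavior: false when the env var is unset
}

// With the env var unset, opt-out logs message previews by default,
// while opt-in stays silent until someone deliberately enables it.
const unsetOptOut = optOutFlag(undefined); // true
const unsetOptIn = optInFlag(undefined); // false
```

The safety argument is entirely about the unset case: any deployment that forgets to configure the variable should fail closed (no PII in logs), not open.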
Actionable comments posted: 8
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@infrastructure/control-panel/src/lib/server/group-message-buckets.ts`:
- Around line 222-235: The current logic synthesizes an evault link from
enameCol or bucketKey when v is null, causing broken UI links; change the flow
in the block around v / enameCol / bucketKey so that if v is null you do NOT
return enameCol or bucketKey as an evaultPageId—return null unless you first
verify the candidate against the registry. Concretely: when v is falsy skip the
synthesize branch, and for enameCol/bucketKey (the variables en, bk) perform a
registry lookup/validation (use your existing registry lookup helper) to confirm
a real registry match before returning en or bk; otherwise return null. Ensure
you still preserve the UUID check via isLikelyUuid and the NO_SENDER_BUCKET
exclusion.
- Around line 107-122: The code currently returns the raw fromPayload when
registryLookup misses, causing case/format variants like "@Alice", "alice", and
"ALICE" to become separate buckets; change the fallback to a normalized
identifier instead. In the fromPayload resolution block (using
firstNonEmptyStringField, registryLookup and normalizeW3id), compute the lookup
result first and if no registry entry is found return a normalized form (e.g.,
normalizeW3id(fromPayload) or fromPayload.toLowerCase()) rather than the raw
fromPayload so unresolved eNames produce a single canonical bucket key.
In `@infrastructure/control-panel/src/lib/server/group-sender-resolve.server.ts`:
- Around line 150-153: The shared cache
(cacheService.getCachedSenderProfileDisplayName /
cacheService.setCachedSenderProfileDisplayName) only stores non-empty strings
while probeRequestCache memoizes null, so negative probe results (unknown UUID
senders) never get written to the shared TTL cache and cause repeated probes up
to MAX_USER_VAULTS_TO_PROBE; modify the logic that handles ttlHit and probe
results to write an explicit negative sentinel to the shared cache (e.g., a
specific empty-marker or a canonical string like "<NOT_FOUND>" or adapt
cacheService to accept and persist null-equivalents) whenever
probeRequestCache.set(metaEnvelopeId, null) would be used, and ensure subsequent
reads treat that sentinel as a cached miss result to avoid re-probing.
- Around line 169-177: Promise.all currently rejects if fetchMetaEnvelopeById
throws, turning one bad vault into a fatal error; inside the chunk.map mapper
(where registryRowToVault, fetchMetaEnvelopeById, isUserOntologyId, and
displayNameFromUserProfile are used), wrap the await fetchMetaEnvelopeById call
in a try/catch and on any error treat it as a miss by returning null (optionally
logging the error), so that resolveSenderRowFields continues probing other
vaults instead of failing the whole Promise.all.
In `infrastructure/control-panel/src/routes/api/evaults/[evaultId]/group-insights/+server.ts`:
- Around line 176-177: The accumulator byBucket should not be a plain object
because bucket keys come from untrusted message payloads; change the declaration
from const byBucket: Record<string, number> = {} to const byBucket = new
Map<string, number>(); then update all places that read/write counts to use
byBucket.get(bucket) ?? 0 and byBucket.set(bucket, count + 1) (or increment
logic), and adjust any iterations that previously used Object.entries/byBucket
keys to use byBucket.forEach or Array.from(byBucket.entries()); apply the same
replacement for the second accumulator used later in this file (the similar
bucket-counting location around the other messagesScanned usage).
- Around line 266-277: The summary console.log at '[group-insights:summary]' is
emitting participant identifiers because it includes the byBucket object; change
it to avoid logging per-sender data by either removing byBucket from the summary
log or gating its inclusion behind the same debug flag used for per-message
logging (LOG_EACH_MESSAGE). Update the logging call in the handler that
constructs evaultId/vault.ename/totalCount/... to log only aggregate fields
(totalCount, messagesScanned, messagesWithSenderBucket, messagesWithoutSender,
senderRowCount, capped, perMessageLinesLogged) and, if you still want per-sender
details, include byBucket only when LOG_EACH_MESSAGE is truthy (or a new
explicit debug flag) to prevent participant identifiers from being written in
production.
In `infrastructure/control-panel/src/routes/api/evaults/[evaultId]/group-messages/+server.ts`:
- Around line 99-111: The code calls resolveSenderRowFields(...) too early so
hintByBucket remains empty; move the resolveSenderRowFields call to after the
scanning loop that populates hints (the same scan logic used by the insights
route) so senderDisplayName/senderEname are resolved using collected hints;
ensure hintByBucket and userRowsForProbe are created before the scan and passed
into the scan so it can populate them, then invoke resolveSenderRowFields({
bucketKey: filterBucket, allVaults, registryEnameLookup, token,
dashboardNameByKey, hintByBucket, userRowsForProbe, probeRequestCache }) after
scanning finishes; apply the same change to the other occurrence around the
161-179 block.
In `infrastructure/control-panel/src/routes/groups/[evaultId]/messages/+page.svelte`:
- Around line 137-139: The UI hides "0 total messages in vault" because the
Svelte conditional uses truthiness on payload.totalCount; change the check to
test for presence (not falsiness) so zero renders—replace the `{#if payload.totalCount}` branch with an explicit null/undefined check such as using
payload.totalCount !== undefined && payload.totalCount !== null (or the nullish
check payload.totalCount ?? null) around the same block so payload.totalCount
=== 0 still shows "0 total messages in vault".
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f93d0d37-cf09-46fe-94fa-b940e6569e00
⛔ Files ignored due to path filters (1)
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (10)
- infrastructure/control-panel/package.json
- infrastructure/control-panel/src/lib/components/charts/GroupContributionDonut.svelte
- infrastructure/control-panel/src/lib/components/charts/GroupDonutArcs.svelte
- infrastructure/control-panel/src/lib/server/group-message-buckets.ts
- infrastructure/control-panel/src/lib/server/group-sender-resolve.server.ts
- infrastructure/control-panel/src/lib/services/evaultService.ts
- infrastructure/control-panel/src/routes/api/evaults/[evaultId]/group-insights/+server.ts
- infrastructure/control-panel/src/routes/api/evaults/[evaultId]/group-messages/+server.ts
- infrastructure/control-panel/src/routes/groups/[evaultId]/+page.svelte
- infrastructure/control-panel/src/routes/groups/[evaultId]/messages/+page.svelte
✅ Files skipped from review due to trivial changes (1)
- infrastructure/control-panel/package.json
🚧 Files skipped from review as they are similar to previous changes (2)
- infrastructure/control-panel/src/routes/groups/[evaultId]/+page.svelte
- infrastructure/control-panel/src/lib/services/evaultService.ts
```ts
const fromPayload = firstNonEmptyStringField(parsed, [
    'senderEname',
    'sender_ename',
    'senderEName',
    'eName',
    'ename',
    'senderW3id',
    'w3id'
]);
if (fromPayload) {
    return (
        registryLookup.get(fromPayload) ??
        registryLookup.get(fromPayload.toLowerCase()) ??
        registryLookup.get(normalizeW3id(fromPayload)) ??
        fromPayload
    );
```
Normalize unresolved eNames before they become bucket keys.
When the payload eName is not in registryLookup, this falls back to the raw string. That makes @Alice, alice, and ALICE separate buckets for the same sender whenever the registry has no row, which skews both the chart and the per-sender message list.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@infrastructure/control-panel/src/lib/server/group-message-buckets.ts` around
lines 107 - 122, The code currently returns the raw fromPayload when
registryLookup misses, causing case/format variants like "@Alice", "alice", and
"ALICE" to become separate buckets; change the fallback to a normalized
identifier instead. In the fromPayload resolution block (using
firstNonEmptyStringField, registryLookup and normalizeW3id), compute the lookup
result first and if no registry entry is found return a normalized form (e.g.,
normalizeW3id(fromPayload) or fromPayload.toLowerCase()) rather than the raw
fromPayload so unresolved eNames produce a single canonical bucket key.
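One possible canonicalization for the unresolved-eName fallback, assuming normalization means lowercasing plus a leading `@` (the real `normalizeW3id` may differ, so treat this as a sketch):

```typescript
// Collapse case/format variants of an eName into one canonical bucket key
// so "@Alice", "alice", and "ALICE" all count toward the same sender.
// This stands in for whatever normalizeW3id actually does in the codebase.
function canonicalBucketKey(raw: string): string {
    const trimmed = raw.trim().toLowerCase();
    return trimmed.startsWith("@") ? trimmed : `@${trimmed}`;
}

const keys = ["@Alice", "alice", "ALICE"].map(canonicalBucketKey);
```

Returning `canonicalBucketKey(fromPayload)` instead of the raw string on a registry miss keeps the chart and per-sender list from splitting one sender across several buckets.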
```ts
if (v) {
    const link = (v.evault?.trim() || v.ename?.trim()) ?? '';
    if (link) {
        return link;
    }
}
const en = enameCol.trim();
if (en && en !== '—' && !isLikelyUuid(en)) {
    return en;
}
const bk = bucketKey.trim();
if (bk && bk !== NO_SENDER_BUCKET && !isLikelyUuid(bk)) {
    return bk;
}
```
Don’t synthesize /evaults/[id] links for unresolved senders.
If v is null, any non-UUID enameCol / bucketKey becomes an evaultPageId here. For senders that only exist in message payloads, the UI will render a clickable profile link that just 404s. Return null unless the candidate is backed by a registry match, or validate it against the registry before emitting it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@infrastructure/control-panel/src/lib/server/group-message-buckets.ts` around
lines 222 - 235, The current logic synthesizes an evault link from enameCol or
bucketKey when v is null, causing broken UI links; change the flow in the block
around v / enameCol / bucketKey so that if v is null you do NOT return enameCol
or bucketKey as an evaultPageId—return null unless you first verify the
candidate against the registry. Concretely: when v is falsy skip the synthesize
branch, and for enameCol/bucketKey (the variables en, bk) perform a registry
lookup/validation (use your existing registry lookup helper) to confirm a real
registry match before returning en or bk; otherwise return null. Ensure you
still preserve the UUID check via isLikelyUuid and the NO_SENDER_BUCKET
exclusion.
```ts
const ttlHit = cacheService.getCachedSenderProfileDisplayName(metaEnvelopeId);
if (ttlHit !== undefined) {
    probeRequestCache.set(metaEnvelopeId, ttlHit);
    return ttlHit;
```
Negative probe results never reach the shared TTL cache.
probeRequestCache memoizes null, but the shared cache only distinguishes string vs undefined and only stores non-empty strings. Unknown UUID senders therefore re-probe up to MAX_USER_VAULTS_TO_PROBE user vaults on every request.
Also applies to: 181-189
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@infrastructure/control-panel/src/lib/server/group-sender-resolve.server.ts`
around lines 150 - 153, The shared cache
(cacheService.getCachedSenderProfileDisplayName /
cacheService.setCachedSenderProfileDisplayName) only stores non-empty strings
while probeRequestCache memoizes null, so negative probe results (unknown UUID
senders) never get written to the shared TTL cache and cause repeated probes up
to MAX_USER_VAULTS_TO_PROBE; modify the logic that handles ttlHit and probe
results to write an explicit negative sentinel to the shared cache (e.g., a
specific empty-marker or a canonical string like "<NOT_FOUND>" or adapt
cacheService to accept and persist null-equivalents) whenever
probeRequestCache.set(metaEnvelopeId, null) would be used, and ensure subsequent
reads treat that sentinel as a cached miss result to avoid re-probing.
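A minimal sketch of the negative-caching idea, with illustrative names (`SenderNameCache`, `NOT_FOUND`) rather than the real `cacheService` API:

```typescript
// Sentinel stored in place of a display name when a probe found nothing.
// The NUL prefix makes collision with a real display name implausible.
const NOT_FOUND = "\u0000NOT_FOUND";

class SenderNameCache {
    private store = new Map<string, { value: string; expiresAt: number }>();
    constructor(private ttlMs: number) {}

    /** undefined = never cached; null = cached miss; string = cached hit. */
    get(key: string, now = Date.now()): string | null | undefined {
        const entry = this.store.get(key);
        if (!entry || entry.expiresAt < now) return undefined;
        return entry.value === NOT_FOUND ? null : entry.value;
    }

    set(key: string, value: string | null, now = Date.now()): void {
        this.store.set(key, {
            value: value ?? NOT_FOUND, // persist negatives as the sentinel
            expiresAt: now + this.ttlMs
        });
    }
}

const cache = new SenderNameCache(60_000);
cache.set("uuid-1", null); // negative result from a failed probe
const cachedMiss = cache.get("uuid-1"); // null → skip re-probing this sender
const neverSeen = cache.get("uuid-2"); // undefined → a probe is still needed
```

Distinguishing "cached miss" (null) from "never cached" (undefined) is what lets the resolver skip the `MAX_USER_VAULTS_TO_PROBE` fan-out for senders already known to be unresolvable.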
```ts
const names = await Promise.all(
    chunk.map(async (row) => {
        const vault = registryRowToVault(row);
        const me = await fetchMetaEnvelopeById(vault, metaEnvelopeId, token);
        if (!me || !isUserOntologyId(me.ontology)) {
            return null;
        }
        return displayNameFromUserProfile(me.parsed, row.ename);
    })
```
Treat per-vault probe failures as misses, not fatal errors.
Promise.all() rejects on the first thrown fetchMetaEnvelopeById() call. That bubbles out of resolveSenderRowFields() and turns one bad user vault into a 500 for the whole insights/messages request. Catch inside the mapper and continue probing the rest.
🛡️ Proposed fix
```diff
 const names = await Promise.all(
     chunk.map(async (row) => {
-        const vault = registryRowToVault(row);
-        const me = await fetchMetaEnvelopeById(vault, metaEnvelopeId, token);
-        if (!me || !isUserOntologyId(me.ontology)) {
+        try {
+            const vault = registryRowToVault(row);
+            const me = await fetchMetaEnvelopeById(vault, metaEnvelopeId, token);
+            if (!me || !isUserOntologyId(me.ontology)) {
+                return null;
+            }
+            return displayNameFromUserProfile(me.parsed, row.ename);
+        } catch {
             return null;
         }
-        return displayNameFromUserProfile(me.parsed, row.ename);
     })
 );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@infrastructure/control-panel/src/lib/server/group-sender-resolve.server.ts`
around lines 169 - 177, Promise.all currently rejects if fetchMetaEnvelopeById
throws, turning one bad vault into a fatal error; inside the chunk.map mapper
(where registryRowToVault, fetchMetaEnvelopeById, isUserOntologyId, and
displayNameFromUserProfile are used), wrap the await fetchMetaEnvelopeById call
in a try/catch and on any error treat it as a miss by returning null (optionally
logging the error), so that resolveSenderRowFields continues probing other
vaults instead of failing the whole Promise.all.
```ts
const byBucket: Record<string, number> = {};
let messagesScanned = 0;
```
Do not accumulate untrusted bucket keys in a plain object.
`bucket` comes from message payload data. Keys like `toString`, `constructor`, or `__proto__` can corrupt this accumulator and produce wrong counts and sender rows. Use a `Map<string, number>` or a null-prototype object here.

🧱 Minimal hardening

```diff
-const byBucket: Record<string, number> = {};
+const byBucket: Record<string, number> = Object.create(null);
```

Also applies to: 214-215
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `infrastructure/control-panel/src/routes/api/evaults/[evaultId]/group-insights/+server.ts` around lines 176-177, the accumulator `byBucket` should not be a plain object because bucket keys come from untrusted message payloads. Change the declaration from `const byBucket: Record<string, number> = {}` to `const byBucket = new Map<string, number>();`, then update all places that read/write counts to use `byBucket.get(bucket) ?? 0` and `byBucket.set(bucket, count + 1)` (or equivalent increment logic), and adjust any iterations that previously used `Object.entries` or the object's keys to use `byBucket.forEach` or `Array.from(byBucket.entries())`. Apply the same replacement to the second accumulator used later in this file (the similar bucket-counting location near the other `messagesScanned` usage).
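A quick standalone illustration of the corruption (the `countPlain`/`countMap` helpers and the sample keys are hypothetical, not from the route): with a plain object, inherited keys such as `__proto__` and `toString` never start from `0`, so their "counts" come out as garbage, while a `Map` counts every key correctly.

```typescript
// Counting with a plain object: inherited properties leak into the lookup.
function countPlain(keys: string[]): Record<string, number> {
    const acc: Record<string, number> = {};
    // For "toString", acc[k] is the inherited function, so `?? 0` never
    // kicks in and the "count" becomes a concatenated string.
    for (const k of keys) acc[k] = (acc[k] ?? 0) + 1;
    return acc;
}

// Counting with a Map: keys are plain data, no prototype chain involved.
function countMap(keys: string[]): Map<string, number> {
    const acc = new Map<string, number>();
    for (const k of keys) acc.set(k, (acc.get(k) ?? 0) + 1);
    return acc;
}

// Hypothetical sender buckets, including hostile-looking keys that could
// arrive in payload data.
const keys = ["alice", "__proto__", "toString", "alice"];
const plain = countPlain(keys);
const safe = countMap(keys);
console.log(safe.get("alice"));     // 2
console.log(safe.get("__proto__")); // 1
```

`Object.create(null)` gives the same safety with object syntax, since a null-prototype object has no inherited `__proto__` or `toString` to collide with.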
```ts
console.log('[group-insights:summary]', {
    evaultId,
    vaultEname: vault.ename,
    totalCount,
    messagesScanned,
    messagesWithSenderBucket,
    messagesWithoutSender,
    senderRowCount: senderRows.length,
    capped,
    byBucket,
    perMessageLinesLogged: LOG_EACH_MESSAGE
});
```
The summary log still emits participant identifiers.
`byBucket` contains per-sender enames/raw sender ids, and this log runs on every request regardless of `LOG_EACH_MESSAGE`. Even after making per-message logging opt-in, this still writes participant-level identifiers to production logs. Keep the summary aggregate-only or gate `byBucket` behind the same explicit debug flag.

🔒 Safer summary

```diff
 console.log('[group-insights:summary]', {
     evaultId,
     vaultEname: vault.ename,
     totalCount,
     messagesScanned,
     messagesWithSenderBucket,
     messagesWithoutSender,
     senderRowCount: senderRows.length,
     capped,
-    byBucket,
     perMessageLinesLogged: LOG_EACH_MESSAGE
 });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
console.log('[group-insights:summary]', {
    evaultId,
    vaultEname: vault.ename,
    totalCount,
    messagesScanned,
    messagesWithSenderBucket,
    messagesWithoutSender,
    senderRowCount: senderRows.length,
    capped,
    perMessageLinesLogged: LOG_EACH_MESSAGE
});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `infrastructure/control-panel/src/routes/api/evaults/[evaultId]/group-insights/+server.ts` around lines 266-277, the `'[group-insights:summary]'` console.log emits participant identifiers because it includes the `byBucket` object. Either remove `byBucket` from the summary log or gate its inclusion behind the same debug flag used for per-message logging (`LOG_EACH_MESSAGE`). Update the logging call to log only aggregate fields (`totalCount`, `messagesScanned`, `messagesWithSenderBucket`, `messagesWithoutSender`, `senderRowCount`, `capped`, `perMessageLinesLogged`) and, if per-sender detail is still wanted, include `byBucket` only when `LOG_EACH_MESSAGE` (or a new explicit debug flag) is truthy, so participant identifiers are not written in production.
```ts
const hintByBucket = new Map<string, Map<string, number>>();
const userRowsForProbe = userRegistryRowsForProbe(registryRows);
const probeRequestCache = new Map<string, string | null>();
const { displayName: senderDisplayName, ename: senderEname } = await resolveSenderRowFields({
    bucketKey: filterBucket,
    allVaults,
    registryEnameLookup,
    token,
    dashboardNameByKey,
    hintByBucket,
    userRowsForProbe,
    probeRequestCache
});
```
Resolve `senderDisplayName` after the scan populates hints.
This route creates `hintByBucket` but calls `resolveSenderRowFields()` before scanning, and the scan never records any hints. For senders that are not in the registry, the messages page can therefore show the raw bucket/UUID even when the matched messages contain `senderName`/`displayName` data. The insights route already collects hints during scanning; this one should do the same before resolving the header.

Also applies to: 161-179
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `infrastructure/control-panel/src/routes/api/evaults/[evaultId]/group-messages/+server.ts` around lines 99-111, the code calls `resolveSenderRowFields(...)` too early, so `hintByBucket` remains empty. Move the `resolveSenderRowFields` call to after the scanning loop that populates hints (the same scan logic used by the insights route) so `senderDisplayName`/`senderEname` are resolved using the collected hints. Ensure `hintByBucket` and `userRowsForProbe` are created before the scan and passed into it so it can populate them, then invoke `resolveSenderRowFields({ bucketKey: filterBucket, allVaults, registryEnameLookup, token, dashboardNameByKey, hintByBucket, userRowsForProbe, probeRequestCache })` after scanning finishes. Apply the same change to the other occurrence around the 161-179 block.
```svelte
{#if payload.totalCount}
    · {payload.totalCount} total messages in vault (reported)
{/if}
```
0 total messages disappears from the UI.
This branch is truthy-based, so an empty group suppresses the reported total instead of showing "0 total messages in vault". Check for `undefined`/`null` rather than truthiness.

💡 Proposed fix

```diff
-{#if payload.totalCount}
+{#if payload.totalCount !== undefined}
     · {payload.totalCount} total messages in vault (reported)
 {/if}
```

📝 Committable suggestion
```svelte
{#if payload.totalCount !== undefined}
    · {payload.totalCount} total messages in vault (reported)
{/if}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `infrastructure/control-panel/src/routes/groups/[evaultId]/messages/+page.svelte` around lines 137-139, the UI hides "0 total messages in vault" because the Svelte conditional uses truthiness on `payload.totalCount`. Change the check to test for presence rather than truthiness so zero still renders: replace the `{#if payload.totalCount}` branch with an explicit null/undefined check such as `payload.totalCount !== undefined && payload.totalCount !== null` around the same block, so `payload.totalCount === 0` still shows "0 total messages in vault".
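The pitfall is ordinary JavaScript falsiness, shown here in TypeScript with an illustrative `renderTotal` helper: `0` is falsy, so a truthy guard hides a legitimate zero count, while an explicit `!== undefined` check lets it render.

```typescript
// Mirrors the Svelte branch: render the reported total only when it is present.
// A truthy check (`if (totalCount)`) would wrongly suppress totalCount === 0.
function renderTotal(totalCount: number | undefined): string | null {
    if (totalCount !== undefined) {
        return `${totalCount} total messages in vault (reported)`;
    }
    return null;
}

const zero = renderTotal(0);            // "0 total messages in vault (reported)"
const missing = renderTotal(undefined); // null
console.log(zero, missing);
```

If the API can also send `null`, widen the parameter type and guard with `totalCount != null` (loose inequality covers both `null` and `undefined`).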
Description of change
Displays group statistics, such as messages sent by each participant, to infer level of participation.
Issue Number
closes #870
Type of change
How the change has been tested
Tested manually against the prod registry and provisioner.
Change checklist
Summary by CodeRabbit
New Features
Chores