Nightly report: post details as follow-up comments instead of truncating #1239
Conversation
When the full report exceeds GitHub's 65K body limit, the summary table stays in the discussion/issue body and the verbose skill/agent output is posted as follow-up comments (split into chunks if needed). This ensures no output is lost.
Pull request overview
Adjusts the nightly Skill Quality Report workflow to avoid losing output when the GitHub Discussion/Issue body exceeds the ~65KB limit by keeping a compact summary in the main post and publishing verbose sections as follow-up comments (chunked if necessary).
Changes:
- Adds byte-size aware logic to decide whether to inline the full report or move verbose sections into comment “parts”.
- Updates the Discussion creation step to post any overflow parts as Discussion comments via GraphQL.
- Updates the Issue fallback path to post overflow parts as Issue comments via `gh issue comment`.
Comments suppressed due to low confidence (1)
.github/workflows/skill-quality-report.yml:362
- The discussion comment loop assumes every `report-comment-${i}.md` exists and will throw on `readFileSync` if any file is missing/corrupt, which would fail the step and trigger the issue fallback. Add an existence check (similar to the issue fallback loop) and decide whether missing parts should be skipped or treated as a hard failure.
```js
// Post overflow detail comments
for (let i = 0; i < commentCount; i++) {
  const commentBody = fs.readFileSync(`report-comment-${i}.md`, 'utf8');
  await github.graphql(`
    mutation($discussionId: ID!, $body: String!) {
```
````js
function chunkContent(label, content) {
  const prefix = `## ${label}\n\n\`\`\`\n`;
  const suffix = '\n```';
  const overhead = Buffer.byteLength(prefix + suffix, 'utf8');
  const budget = MAX_BYTES - overhead;

  const buf = Buffer.from(content, 'utf8');
  if (buf.length <= budget) {
    return [prefix + content + suffix];
  }

  const parts = [];
  let offset = 0;
  let partNum = 1;
  while (offset < buf.length) {
    const slice = buf.slice(offset, offset + budget).toString('utf8');
    // Remove trailing replacement char from mid-codepoint cut
    const clean = slice.replace(/\uFFFD$/, '');
    const hdr = `## ${label} (part ${partNum})\n\n\`\`\`\n`;
    parts.push(hdr + clean + suffix);
````
`chunkContent()` computes `budget` using the prefix/suffix overhead, but when splitting it actually uses a larger header (`hdr` includes `(part N)`). That means each chunk can exceed `MAX_BYTES` by the extra header bytes, risking GraphQL failures due to comment body size. Compute the per-part budget from the actual `hdr + suffix` byte length (or reserve for the worst-case header size) before slicing.
This issue also appears on line 358 of the same file.
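One way to address this is to reserve the worst-case header size up front, so every part is guaranteed to fit. A sketch under that assumption (`chunkWithWorstCaseBudget` is a hypothetical rewrite, not the PR's code; `MAX_BYTES` is assumed to sit safely under GitHub's ~65K body limit):

````javascript
const MAX_BYTES = 65000; // assumption: margin below GitHub's ~65536-char limit

function chunkWithWorstCaseBudget(label, content) {
  const suffix = '\n```';
  // Budget against the longest header we could plausibly emit, so the
  // "(part N)" suffix can never push a chunk over MAX_BYTES.
  const worstHdr = `## ${label} (part 9999)\n\n\`\`\`\n`;
  const budget = MAX_BYTES - Buffer.byteLength(worstHdr + suffix, 'utf8');
  const buf = Buffer.from(content, 'utf8');

  if (buf.length <= budget) {
    return [`## ${label}\n\n\`\`\`\n` + content + suffix];
  }

  const parts = [];
  let offset = 0;
  for (let partNum = 1; offset < buf.length; partNum++) {
    const slice = buf.slice(offset, offset + budget).toString('utf8');
    // Drop replacement chars from a mid-codepoint cut, then advance by the
    // bytes actually consumed so the split codepoint starts the next part.
    const clean = slice.replace(/\uFFFD+$/, '');
    offset += Buffer.byteLength(clean, 'utf8');
    parts.push(`## ${label} (part ${partNum})\n\n\`\`\`\n` + clean + suffix);
  }
  return parts;
}
````

Because each part is header bytes (≤ worst case) plus at most `budget` content bytes plus the suffix, every part stays within `MAX_BYTES` by construction.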
```js
for (let i = 0; i < commentCount; i++) {
  const commentBody = fs.readFileSync(`report-comment-${i}.md`, 'utf8');
  await github.graphql(`
    mutation($discussionId: ID!, $body: String!) {
      addDiscussionComment(input: {
        discussionId: $discussionId,
        body: $body
      }) {
        comment { id }
      }
    }
  `, { discussionId, body: commentBody });
  console.log(`Posted detail comment ${i + 1}/${commentCount}`);
}
```
The step is continue-on-error: true and the script posts follow-up comments after creating the discussion. If any addDiscussionComment call fails (size/rate limits/transient error), the step will fail and the fallback issue step will run even though the discussion was already created, leading to duplicate report posts. Consider catching errors during comment posting (and logging) so the step can still succeed once the discussion exists, or otherwise gate the fallback on whether the discussion was created.
Suggested change:

```diff
-for (let i = 0; i < commentCount; i++) {
-  const commentBody = fs.readFileSync(`report-comment-${i}.md`, 'utf8');
-  await github.graphql(`
-    mutation($discussionId: ID!, $body: String!) {
-      addDiscussionComment(input: {
-        discussionId: $discussionId,
-        body: $body
-      }) {
-        comment { id }
-      }
-    }
-  `, { discussionId, body: commentBody });
-  console.log(`Posted detail comment ${i + 1}/${commentCount}`);
-}
+try {
+  for (let i = 0; i < commentCount; i++) {
+    const commentBody = fs.readFileSync(`report-comment-${i}.md`, 'utf8');
+    await github.graphql(`
+      mutation($discussionId: ID!, $body: String!) {
+        addDiscussionComment(input: {
+          discussionId: $discussionId,
+          body: $body
+        }) {
+          comment { id }
+        }
+      }
+    `, { discussionId, body: commentBody });
+    console.log(`Posted detail comment ${i + 1}/${commentCount}`);
+  }
+} catch (error) {
+  core.warning(`One or more discussion comments failed to post: ${error.message || error}`);
+  console.error(error);
+}
```
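The other option the review mentions, gating the fallback on whether the discussion was created, could look like the following workflow sketch (the step `id`, the `created` output name, and the fallback step body are assumptions for illustration, not from this PR). Outputs written via `core.setOutput` persist even if the step later fails, so the fallback condition sees them:

```yaml
- name: Post report as discussion
  id: discussion
  continue-on-error: true
  uses: actions/github-script@v7
  with:
    script: |
      // ... createDiscussion mutation ...
      core.setOutput('created', 'true');
      // A failure while posting follow-up comments still fails this step,
      // but the fallback below no longer fires once 'created' is set.
      // ... addDiscussionComment loop ...

- name: Fallback to issue
  if: steps.discussion.outputs.created != 'true'
  run: |
    # ... gh issue create / gh issue comment ...
```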
Context
e.g. #1235