Remove GetOptionsDiagnostics, global diags in editor, restore checker affinity#3285
jakebailey merged 24 commits into main
Conversation
This reverts commit d381d13.
Pull request overview
This PR updates the compiler and language server diagnostics pipeline by removing “options diagnostics”, surfacing global diagnostics in the editor via tsconfig publishes, and restoring more consistent checker affinity/iteration behavior.
Changes:
- Remove `GetOptionsDiagnostics` and adjust diagnostics collection to re-query global diagnostics after semantic checking.
- Accumulate global diagnostics from project-system checker pools and publish updated tsconfig diagnostics from the LSP after requests.
- Refactor checker pooling to support grouped per-checker file iteration for diagnostics, and update baselines accordingly.
Reviewed changes
Copilot reviewed 14 out of 14 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| internal/compiler/checkerpool.go | Simplifies CheckerPool interface; adds grouped per-checker iteration and global diagnostics aggregation helpers. |
| internal/compiler/program.go | Reworks checker acquisition paths (compiler vs project pools), removes options diags path, and refactors semantic/suggestion diagnostics collection to reuse a provided checker. |
| internal/execute/incremental/program.go | Removes incremental wrapper support for GetOptionsDiagnostics and stops using it for error-state tracking. |
| internal/lsp/server.go | Enqueues a global-diagnostics publish check after each language service request. |
| internal/project/checkerpool.go | Updates project-system checker pool API and adds incremental global diagnostics accumulation + change detection. |
| internal/project/project.go | Adds GetProjectDiagnostics to combine program diagnostics with accumulated global diagnostics for tsconfig publishing. |
| internal/project/project_test.go | Adds coverage to ensure global diagnostics discovered during checking are published on tsconfig.json. |
| internal/project/session.go | Publishes combined project diagnostics; introduces coalesced background task for global diagnostics publishing. |
| testdata/baselines/reference/tsc/incremental/js-file-with-import-in-jsdoc-in-composite-project.js | Updates baseline to reflect newly surfaced global diagnostic + changed exit status / tsbuildinfo shape. |
| testdata/baselines/reference/tsc/composite/converting-to-modules.js | Updates tsbuildinfo + semantic diagnostics caching baseline expectations. |
| testdata/baselines/reference/tsbuildWatch/configFileErrors/reports-syntax-errors-in-config-file.js | Updates watch baseline expectations for semantic diagnostics caching output and tsbuildinfo fields. |
| testdata/baselines/reference/tsbuild/configFileErrors/reports-syntax-errors-in-config-file.js | Updates build baseline expectations for semantic diagnostics caching output and tsbuildinfo fields. |
| testdata/baselines/reference/submodule/fourslash/findAllReferences/findAllRefsForModuleGlobal.baseline.jsonc | Baseline updated (marker placement) to match expected find-all-refs output. |
| testdata/baselines/reference/submodule/fourslash/findAllReferences/findAllRefsForModuleGlobal.baseline.jsonc.diff | Removes no-longer-needed diff baseline file. |
internal/project/session.go
Outdated
```go
}

func (s *Session) publishGlobalDiagnostics(ctx context.Context) {
	s.globalDiagPublishPending.Store(false)
```
globalDiagPublishPending is cleared at the start of publishGlobalDiagnostics, but background.Queue runs tasks concurrently. This means EnqueuePublishGlobalDiagnostics can enqueue another publish while one is still running, defeating the stated coalescing behavior and potentially causing overlapping PublishDiagnostics calls. Consider only clearing the flag when the task finishes, or introduce a separate “in-flight” vs “rerun requested” mechanism to ensure at most one publish runs at a time (while still not missing updates).
```diff
-	s.globalDiagPublishPending.Store(false)
+	defer s.globalDiagPublishPending.Store(false)
```
```go
wg := core.NewWorkGroup(singleThreaded)
for checkerIdx := range checkerCount {
	wg.Queue(func() {
		p.locks[checkerIdx].Lock()
		defer p.locks[checkerIdx].Unlock()
		for i, file := range files {
			if checker := p.checkers[checkerIdx]; checker == p.fileAssociations[file] {
				cb(checker, i, file)
			}
```
forEachCheckerGroupDo scans the entire files slice once per checker (for checkerIdx ... { for i, file := range files { ... } }), making this O(checkers×files). On large projects this can outweigh the contention benefits. Consider pre-bucketing file indices by checker (single pass over files) and then iterating only each checker's bucket inside the per-checker goroutine, preserving original order within each bucket.
```diff
-	wg := core.NewWorkGroup(singleThreaded)
-	for checkerIdx := range checkerCount {
-		wg.Queue(func() {
-			p.locks[checkerIdx].Lock()
-			defer p.locks[checkerIdx].Unlock()
-			for i, file := range files {
-				if checker := p.checkers[checkerIdx]; checker == p.fileAssociations[file] {
-					cb(checker, i, file)
-				}
+	if checkerCount == 0 || len(files) == 0 {
+		return
+	}
+	// Build a mapping from checker pointer to its index so we can bucket files
+	// by checker in a single pass over the files slice.
+	checkerIndex := make(map[*checker.Checker]int, checkerCount)
+	for idx, c := range p.checkers {
+		checkerIndex[c] = idx
+	}
+	// Buckets of file indices per checker, preserving original order within each bucket.
+	buckets := make([][]int, checkerCount)
+	for i, file := range files {
+		if c := p.fileAssociations[file]; c != nil {
+			if idx, ok := checkerIndex[c]; ok {
+				buckets[idx] = append(buckets[idx], i)
+			}
+		}
+	}
+	wg := core.NewWorkGroup(singleThreaded)
+	for checkerIdx := 0; checkerIdx < checkerCount; checkerIdx++ {
+		idx := checkerIdx
+		wg.Queue(func() {
+			p.locks[idx].Lock()
+			defer p.locks[idx].Unlock()
+			checker := p.checkers[idx]
+			for _, fileIndex := range buckets[idx] {
+				file := files[fileIndex]
+				cb(checker, fileIndex, file)
```
This looks good to me, except I don't really know how to evaluate the `--incremental` baseline changes. Have you validated that they look right/acceptable?

Yeah, so it's specifically how the tsbuildinfo is encoding sort of the "opposite" of reality, in that it's noting which files it didn't check (IIRC).
This builds on #2322. Probably could replace it and just go for this.
From #2322:
`GetOptionsDiagnostics`

Then: