Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED.

This pull-request has been approved by: (none). The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files; approvers can indicate their approval by writing
Summary of Changes

Gemini Code Assist: I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request refactors the schema store's DDL event processing for multi-table CREATE TABLE statements.
Activity
📝 Walkthrough

Changes introduce storage-aware query splitting for multi-table CREATE TABLE DDLs by replacing direct parsing with a new `rebuildCreateTablesQueries` fallback.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 1
🧹 Nitpick comments (1)
logservice/schemastore/persist_storage_ddl_handlers.go (1)
2797-2832: The query reconstruction logic is well-structured.

The `splitCreateTablesQueries` function handles the edge case where the original query cannot be properly split to match the table count. The approach of using `ConstructResultOfShowCreateTable` to rebuild queries from table info is reasonable.

A few observations:

- The variable name `querys` should conventionally be `queries` (proper English plural), though this is consistent with usage elsewhere in the file.
- Good defensive check for nil `tableInfo` at lines 2816-2818.
- Proper error wrapping with `cerror.WrapError`.

Minor: Consider consistent naming with the English plural:

```diff
-func splitCreateTablesQueries(rawEvent *PersistedDDLEvent) ([]string, error) {
-	querys, err := commonEvent.SplitQueries(rawEvent.Query)
-	if err == nil && len(querys) == len(rawEvent.MultipleTableInfos) {
-		return querys, nil
+func splitCreateTablesQueries(rawEvent *PersistedDDLEvent) ([]string, error) {
+	queries, err := commonEvent.SplitQueries(rawEvent.Query)
+	if err == nil && len(queries) == len(rawEvent.MultipleTableInfos) {
+		return queries, nil
 	}
 	fields := []zap.Field{
-		zap.Int("queryCount", len(querys)),
+		zap.Int("queryCount", len(queries)),
 		zap.Int("tableCount", len(rawEvent.MultipleTableInfos)),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@logservice/schemastore/persist_storage_ddl_handlers.go` around lines 2797 - 2832, In splitCreateTablesQueries rename the misspelled local variable querys to queries (and update all uses within the function including the len(querys) checks and zap.Int("queryCount", ...) reference) to use the correct English plural; ensure you update the early-return return queries, nil and the rebuiltQuerys logic so names are consistent and compile cleanly, keeping all error handling and behavior unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@logservice/schemastore/persist_storage_ddl_handlers.go`:
- Line 2813: Replace the test-only mocked context usage: remove
mock.NewContext() and instead obtain a real session/context for production use
(for example, create a session from the TiDB domain/session pool or use
session.CreateSession and its returned context) so that
ConstructResultOfShowCreateTable() can access real session state (e.g.,
ctx.GetSessionVars().SQLMode); alternatively, if this handler is for external
callers, execute SHOW CREATE TABLE as SQL and use that result rather than
calling ConstructResultOfShowCreateTable() with a mocked ctx. Ensure the new
code uses the domain/session pool API to create or borrow a session, sets any
required SessionVars, calls ConstructResultOfShowCreateTable() (or runs SHOW
CREATE TABLE) with that real context, and properly closes/returns the session.
---
Nitpick comments:
In `@logservice/schemastore/persist_storage_ddl_handlers.go`:
- Around line 2797-2832: In splitCreateTablesQueries rename the misspelled local
variable querys to queries (and update all uses within the function including
the len(querys) checks and zap.Int("queryCount", ...) reference) to use the
correct English plural; ensure you update the early-return return queries, nil
and the rebuiltQuerys logic so names are consistent and compile cleanly, keeping
all error handling and behavior unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 45d1d5f3-fdc9-42bb-ae81-9207a76979d0
📒 Files selected for processing (2)
logservice/schemastore/persist_storage_ddl_handlers.go, logservice/schemastore/persist_storage_test.go
Code Review
This pull request improves the robustness of handling CREATE TABLES DDLs by introducing a fallback mechanism. When splitting the DDL query string fails or results in a mismatched number of queries, it now reconstructs the CREATE TABLE statements from the table metadata. This is a solid improvement that makes the system more resilient to unexpected DDL query formats. I've added one suggestion to make the fallback logic even more robust.
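The fallback mechanism described above can be sketched as a small, self-contained Go program. All names here (`tableInfo`, `splitQueries`, `rebuildFromInfos`, `splitOrRebuild`) are illustrative stand-ins, not the project's actual API: the real code splits with `commonEvent.SplitQueries` and rebuilds with `executor.ConstructResultOfShowCreateTable`.

```go
package main

import (
	"fmt"
	"strings"
)

// tableInfo is a stand-in for the persisted table metadata; the real event
// carries full TiDB model.TableInfo values.
type tableInfo struct{ name string }

// splitQueries naively splits a multi-statement DDL on ';'.
func splitQueries(query string) []string {
	var out []string
	for _, q := range strings.Split(query, ";") {
		if q = strings.TrimSpace(q); q != "" {
			out = append(out, q+";")
		}
	}
	return out
}

// rebuildFromInfos reconstructs one CREATE TABLE statement per table,
// standing in for a rebuild from metadata (e.g. SHOW CREATE TABLE output).
func rebuildFromInfos(infos []tableInfo) []string {
	out := make([]string, 0, len(infos))
	for _, info := range infos {
		out = append(out, fmt.Sprintf("CREATE TABLE `%s` (...);", info.name))
	}
	return out
}

// splitOrRebuild prefers the split result and falls back to reconstruction
// when the statement count does not match the table metadata.
func splitOrRebuild(query string, infos []tableInfo) []string {
	if qs := splitQueries(query); len(qs) == len(infos) {
		return qs
	}
	return rebuildFromInfos(infos)
}

func main() {
	infos := []tableInfo{{"t1"}, {"t2"}}
	fmt.Println(splitOrRebuild("CREATE TABLE t1 (a int); CREATE TABLE t2 (b int);", infos))
	fmt.Println(splitOrRebuild("garbled single statement", infos))
}
```

The key design point the review praises is that the system never trusts the raw query string alone: the table metadata is the source of truth for how many statements must come out.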
/test all-v8-1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
logservice/schemastore/persist_storage_ddl_handlers.go (2)
2729-2733: ⚠️ Potential issue | 🟡 Minor

Guard `MultipleTableInfos` before consulting the filter.

Line 2731 dereferences `info.Name.O` before the new nil-check in `splitCreateTablesQueries()`. A malformed persisted event with a `nil` entry and a non-nil `tableFilter` will panic here instead of returning `ErrTiDBUnexpectedJobMeta`.

🩹 Proposed fix

```diff
 for _, info := range rawEvent.MultipleTableInfos {
+	if info == nil {
+		return commonEvent.DDLEvent{}, false, cerror.WrapError(
+			cerror.ErrTiDBUnexpectedJobMeta,
+			errors.New("nil table info in create tables ddl"),
+		)
+	}
 	if tableFilter != nil && tableFilter.ShouldDiscardDDL(
 		rawEvent.SchemaName, info.Name.O, model.ActionType(rawEvent.Type),
 		common.WrapTableInfo(rawEvent.SchemaName, info)) {
 		continue
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@logservice/schemastore/persist_storage_ddl_handlers.go` around lines 2729 - 2733, In splitCreateTablesQueries, avoid dereferencing a nil entry in rawEvent.MultipleTableInfos: before calling tableFilter.ShouldDiscardDDL or accessing info.Name.O, check that info is non-nil; if a nil entry is found return ErrTiDBUnexpectedJobMeta. Move the nil-check ahead of the tableFilter invocation and ensure you still call tableFilter.ShouldDiscardDDL only for non-nil info, referencing rawEvent.MultipleTableInfos, tableFilter.ShouldDiscardDDL, info.Name.O and ErrTiDBUnexpectedJobMeta.
2745-2850: ⚠️ Potential issue | 🟠 Major

Keep query reconstruction off the replay path.

This code runs while `fetchTableDDLEvents`/`fetchTableTriggerDDLEvents` are replaying persisted events. Turning a query-count mismatch into `session.CreateSession(storage)` makes replay depend on live `kv.Storage` again, even though the needed `MultipleTableInfos` are already stored with the event. That weakens the schema store's "read from disk" path and couples dispatcher fetch latency/availability to upstream storage.

Rebuild and persist the per-table queries when the DDL is handled, or serialize the rebuilt queries alongside the persisted event.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@logservice/schemastore/persist_storage_ddl_handlers.go` around lines 2745 - 2850, The replay path currently calls splitCreateTablesQueries which may call session.CreateSession(storage) to rebuild per-table CREATE queries from TableInfo, coupling replay to live kv.Storage; instead precompute and persist per-table queries when handling the original CREATE TABLES DDL and read them back on replay. Modify the DDL handling code (the place that builds PersistedDDLEvent for create-tables) to call executor.ConstructResultOfShowCreateTable per TableInfo, store the resulting per-table SQLs (e.g. add a PerTableQueries []string field to PersistedDDLEvent or serialize them alongside Query), and change splitCreateTablesQueries to simply return those persisted per-table queries (and only fall back to SplitQueries(rawEvent.Query) if PerTableQueries is empty) without touching session.CreateSession/storage; update references to rawEvent.Query and rawEvent.MultipleTableInfos accordingly and remove the storage-dependent rebuild logic.

logservice/schemastore/persist_storage.go (1)
523-541: ⚠️ Potential issue | 🟠 Major

Close `storageSnap` on the error paths here.

`storageSnap` is only closed at line 541. If `allTargetTs[0] < p.gcTs` or `p.buildDDLEvent()` returns an error, this loop returns early and leaks the Pebble snapshot.

🩹 Proposed fix

```diff
 // ensure the order: get target ts -> get storage snap -> check gc ts
 storageSnap := p.db.NewSnapshot()
 p.mu.RLock()
 if allTargetTs[0] < p.gcTs {
 	p.mu.RUnlock()
+	storageSnap.Close()
 	return nil, fmt.Errorf("startTs %d is smaller than gcTs %d", allTargetTs[0], p.gcTs)
 }
 p.mu.RUnlock()
 for _, ts := range allTargetTs {
 	rawEvent := readPersistedDDLEvent(storageSnap, ts)
 	// the tableID of buildDDLEvent is not used in this function, set it to 0
 	ddlEvent, ok, err := p.buildDDLEvent(&rawEvent, tableFilter, 0)
 	if err != nil {
+		storageSnap.Close()
 		return nil, errors.Trace(err)
 	}
 	if ok {
 		events = append(events, ddlEvent)
 	}
 }
 storageSnap.Close()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@logservice/schemastore/persist_storage.go` around lines 523 - 541, The snapshot created by p.db.NewSnapshot() (storageSnap) is only closed at the end of the function, so early returns (when allTargetTs[0] < p.gcTs or when p.buildDDLEvent returns an error) leak the Pebble snapshot; fix by ensuring storageSnap.Close() is called on all error paths (e.g., call storageSnap.Close() before returning in the gcTs check and before returning on buildDDLEvent errors, or immediately after creation use defer storageSnap.Close() to guarantee closure), touching the storageSnap variable and the p.buildDDLEvent / p.db.NewSnapshot call sites.
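An alternative to closing the snapshot on each return path is to defer `Close` immediately after acquisition, which the prompt above also mentions. A self-contained sketch under that assumption (`snapshot` and `readEvents` are stand-ins for the Pebble snapshot and the real loop):

```go
package main

import (
	"errors"
	"fmt"
)

// snapshot stands in for a Pebble snapshot whose Close must always run.
type snapshot struct{ closed *bool }

func (s snapshot) Close() { *s.closed = true }

// readEvents acquires a snapshot and defers Close immediately, so every
// return path (including the gcTs check and per-event errors) releases it.
func readEvents(targetTs []uint64, gcTs uint64, closed *bool) ([]uint64, error) {
	snap := snapshot{closed: closed}
	defer snap.Close() // runs on early error returns too, so no leak
	if len(targetTs) == 0 || targetTs[0] < gcTs {
		return nil, errors.New("startTs is smaller than gcTs")
	}
	return append([]uint64(nil), targetTs...), nil
}

func main() {
	var closed bool
	_, err := readEvents([]uint64{5}, 10, &closed)
	fmt.Println(err != nil, closed)
}
```

The `defer` form trades a slightly later `Close` (function exit rather than loop exit) for the guarantee that no future early return can reintroduce the leak.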
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@logservice/schemastore/persist_storage_ddl_handlers.go`:
- Around line 2729-2733: In splitCreateTablesQueries, avoid dereferencing a nil
entry in rawEvent.MultipleTableInfos: before calling
tableFilter.ShouldDiscardDDL or accessing info.Name.O, check that info is
non-nil; if a nil entry is found return ErrTiDBUnexpectedJobMeta. Move the
nil-check ahead of the tableFilter invocation and ensure you still call
tableFilter.ShouldDiscardDDL only for non-nil info, referencing
rawEvent.MultipleTableInfos, tableFilter.ShouldDiscardDDL, info.Name.O and
ErrTiDBUnexpectedJobMeta.
- Around line 2745-2850: The replay path currently calls
splitCreateTablesQueries which may call session.CreateSession(storage) to
rebuild per-table CREATE queries from TableInfo, coupling replay to live
kv.Storage; instead precompute and persist per-table queries when handling the
original CREATE TABLES DDL and read them back on replay. Modify the DDL handling
code (the place that builds PersistedDDLEvent for create-tables) to call
executor.ConstructResultOfShowCreateTable per TableInfo, store the resulting
per-table SQLs (e.g. add a PerTableQueries []string field to PersistedDDLEvent
or serialize them alongside Query), and change splitCreateTablesQueries to
simply return those persisted per-table queries (and only fall back to
SplitQueries(rawEvent.Query) if PerTableQueries is empty) without touching
session.CreateSession/storage; update references to rawEvent.Query and
rawEvent.MultipleTableInfos accordingly and remove the storage-dependent rebuild
logic.
In `@logservice/schemastore/persist_storage.go`:
- Around line 523-541: The snapshot created by p.db.NewSnapshot() (storageSnap)
is only closed at the end of the function, so early returns (when allTargetTs[0]
< p.gcTs or when p.buildDDLEvent returns an error) leak the Pebble snapshot; fix
by ensuring storageSnap.Close() is called on all error paths (e.g., call
storageSnap.Close() before returning in the gcTs check and before returning on
buildDDLEvent errors, or immediately after creation use defer
storageSnap.Close() to guarantee closure), touching the storageSnap variable and
the p.buildDDLEvent / p.db.NewSnapshot call sites.
📒 Files selected for processing (3)
logservice/schemastore/persist_storage.go, logservice/schemastore/persist_storage_ddl_handlers.go, logservice/schemastore/persist_storage_test.go
Actionable comments posted: 1
🧹 Nitpick comments (1)
logservice/schemastore/persist_storage_test.go (1)
3253-3321: Please cover the restart/round-trip case too.

This test only exercises the fresh build path with `kvStorage` available. The upgrade risk is on the read path (`buildDDLEventForCreateTables` over an already-persisted raw event), so a disk round-trip or handcrafted legacy raw-event case is still needed to protect restart compatibility.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@logservice/schemastore/persist_storage_test.go` around lines 3253 - 3321, The test currently only exercises the in-memory fresh-build path; add a restart/round-trip case that persists the raw persisted DDL event to the mock KV store and then loads it back before calling buildDDLEventForCreateTables to validate the read/recovery path. Concretely, after creating rawEvent via buildPersistedDDLEventForCreateTables (using buildPersistedDDLEventFuncArgs), write the rawEvent into the mockstore (using the same keys/serialization the production persistence uses), close and re-open the mock store/session (or simulate a legacy raw-event by constructing the persisted bytes), then call buildDDLEventForCreateTables on the reloaded raw event and assert the same query splitting, NeedAddedTables and table-info checks as the existing assertions; reference buildPersistedDDLEventForCreateTables, buildDDLEventForCreateTables, mockstore.NewMockStore, and rawEvent to locate where to add the persistence and reload logic.
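The round-trip idea can be illustrated with a toy serialize-and-reload cycle. JSON and the `persistedEvent` type below are stand-ins; the real schema store uses its own key layout and encoding:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// persistedEvent is a stand-in for PersistedDDLEvent; JSON here only
// demonstrates the round-trip shape, not the store's real encoding.
type persistedEvent struct {
	Query      string   `json:"query"`
	TableNames []string `json:"table_names"`
}

// roundTrip serializes an event and decodes it again, the way a restart
// would re-read an already-persisted raw event from disk.
func roundTrip(ev persistedEvent) (persistedEvent, error) {
	raw, err := json.Marshal(ev)
	if err != nil {
		return persistedEvent{}, err
	}
	var out persistedEvent
	err = json.Unmarshal(raw, &out)
	return out, err
}

func main() {
	in := persistedEvent{
		Query:      "CREATE TABLE t1 (a int); CREATE TABLE t2 (b int);",
		TableNames: []string{"t1", "t2"},
	}
	out, err := roundTrip(in)
	fmt.Println(err == nil, out.Query == in.Query, len(out.TableNames) == len(in.TableNames))
}
```

The point of such a test is that the build path and the read path can disagree: an event written by an older version must still decode and split correctly after an upgrade, which only a persist-then-reload assertion can catch.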
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@logservice/schemastore/persist_storage_ddl_handlers.go`:
- Around line 863-865: The code currently panics on rebuildCreateTablesQueries
failures; instead, log the error (include zap.Error(rebuildErr) and
zap.String("query", event.Query)) and propagate the error back through the
normal DDL handling path by returning rebuildErr (or converting it into the
function's error return) so the caller can handle it rather than crashing;
update the branch handling rebuiltQuerys/rebuildErr in
persist_storage_ddl_handlers.go (look for variables rebuiltQuerys, rebuildErr
and the call to rebuildCreateTablesQueries(event.Query,
event.MultipleTableInfos, args.kvStorage)) to remove log.Panic and return or
forward the error.
---
Nitpick comments:
In `@logservice/schemastore/persist_storage_test.go`:
- Around line 3253-3321: The test currently only exercises the in-memory
fresh-build path; add a restart/round-trip case that persists the raw persisted
DDL event to the mock KV store and then loads it back before calling
buildDDLEventForCreateTables to validate the read/recovery path. Concretely,
after creating rawEvent via buildPersistedDDLEventForCreateTables (using
buildPersistedDDLEventFuncArgs), write the rawEvent into the mockstore (using
the same keys/serialization the production persistence uses), close and re-open
the mock store/session (or simulate a legacy raw-event by constructing the
persisted bytes), then call buildDDLEventForCreateTables on the reloaded raw
event and assert the same query splitting, NeedAddedTables and table-info checks
as the existing assertions; reference buildPersistedDDLEventForCreateTables,
buildDDLEventForCreateTables, mockstore.NewMockStore, and rawEvent to locate
where to add the persistence and reload logic.
📒 Files selected for processing (3)
logservice/schemastore/persist_storage.go, logservice/schemastore/persist_storage_ddl_handlers.go, logservice/schemastore/persist_storage_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- logservice/schemastore/persist_storage.go
Actionable comments posted: 1
♻️ Duplicate comments (1)
logservice/schemastore/persist_storage_ddl_handlers.go (1)
862-869: ⚠️ Potential issue | 🟠 Major

Panic on reconstruction failures should be replaced with error propagation.

The `log.Panic` at line 866 will crash the capture on a single bad DDL instead of surfacing an error through the normal DDL handling path. Since this branch handles an expected mismatch case (query count vs. table count), runtime failures during reconstruction should be propagated as errors.

🛠️ Proposed fix to return error instead of panic

The function signature would need to change to return an error. Consider:

```diff
-func buildPersistedDDLEventForCreateTables(args buildPersistedDDLEventFuncArgs) PersistedDDLEvent {
+func buildPersistedDDLEventForCreateTables(args buildPersistedDDLEventFuncArgs) (PersistedDDLEvent, error) {
 	event := buildPersistedDDLEventCommon(args)
 	event.SchemaName = getSchemaName(args.databaseMap, event.SchemaID)
 	event.MultipleTableInfos = args.job.BinlogInfo.MultipleTableInfos
 	querys, err := commonEvent.SplitQueries(event.Query)
 	if err != nil || len(querys) != len(event.MultipleTableInfos) {
 		rebuiltQuerys, rebuildErr := rebuildCreateTablesQueries(event.Query, event.MultipleTableInfos, args.kvStorage)
 		if rebuildErr != nil {
-			log.Panic("rebuild create tables queries failed", zap.Error(rebuildErr), zap.String("query", event.Query))
+			log.Error("rebuild create tables queries failed", zap.Error(rebuildErr), zap.String("query", event.Query))
+			return PersistedDDLEvent{}, rebuildErr
 		}
 		event.Query = strings.Join(rebuiltQuerys, "")
 	}
-	return event
+	return event, nil
 }
```

This would require updating the `buildPersistedDDLEventFunc` type signature and all callers accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@logservice/schemastore/persist_storage_ddl_handlers.go` around lines 862 - 869, The panic on rebuild failure must be converted into error propagation: in the branch where rebuildCreateTablesQueries(event.Query, event.MultipleTableInfos, args.kvStorage) returns rebuildErr, return that error instead of calling log.Panic and stop mutating event.Query; update the function that contains this code (the buildPersistedDDLEventFunc implementation) to return an error, adjust its type/signature (buildPersistedDDLEventFunc) and all callers to handle and propagate the error, and ensure when rebuild succeeds you still set event.Query = strings.Join(rebuiltQuerys, "") before returning nil.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@logservice/schemastore/persist_storage_ddl_handlers.go`:
- Around line 2808-2814: splitCreateTablesQueries currently fails hard when
SplitQueries returns an error or counts mismatch, which breaks reads of old
pre-fix events; change it to attempt the same rebuild fallback used by
buildPersistedDDLEventForCreateTables: when SplitQueries fails or len mismatch,
call the rebuild routine (or logic that uses rawEvent.MultipleTableInfos +
kvStorage to reconstruct per-table CREATE queries) and return its results; only
return the wrapped ErrTiDBUnexpectedJobMeta if the rebuild attempt cannot run
(e.g., kvStorage is nil) or also fails, ensuring callers of
buildDDLEventForCreateTables can recover older persisted events.
---
Duplicate comments:
In `@logservice/schemastore/persist_storage_ddl_handlers.go`:
- Around line 862-869: The panic on rebuild failure must be converted into error
propagation: in the branch where rebuildCreateTablesQueries(event.Query,
event.MultipleTableInfos, args.kvStorage) returns rebuildErr, return that error
instead of calling log.Panic and stop mutating event.Query; update the function
that contains this code (the buildPersistedDDLEventFunc implementation) to
return an error, adjust its type/signature (buildPersistedDDLEventFunc) and all
callers to handle and propagate the error, and ensure when rebuild succeeds you
still set event.Query = strings.Join(rebuiltQuerys, "") before returning nil.
📒 Files selected for processing (1)
logservice/schemastore/persist_storage_ddl_handlers.go
Force-pushed 34945e6 to cbb382e (compare)
Actionable comments posted: 2
♻️ Duplicate comments (1)
logservice/schemastore/persist_storage_ddl_handlers.go (1)
2935-2950: ⚠️ Potential issue | 🟠 Major

This still turns mismatched `CreateTables` records into unrecoverable read failures.

`buildDDLEventForCreateTables` now hard-fails whenever the split count doesn't match `MultipleTableInfos`, and the caller in `logservice/schemastore/persist_storage.go` at lines 462-468 propagates that error up the read path. That means older persisted events with mismatched counts are still unreadable. Also, when `SplitQueries` itself fails, this helper logs the original error but replaces it with a new generic one, so callers lose the actual parse/split cause. Please keep the read-path rebuild fallback here, and if parsing fails, return the original wrapped `err` instead of synthesizing a new one.

As per coding guidelines, "When an error comes from a third-party or library call in Go, wrap it immediately with `errors.Trace(err)` or `errors.WrapError(...)` to attach a stack trace; upstream callers should propagate wrapped errors without wrapping again".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@logservice/schemastore/persist_storage_ddl_handlers.go` around lines 2935 - 2950, splitCreateTablesQueries currently converts mismatched/split errors into a new generic unrecoverable error; instead preserve the read-path fallback and the original split error: in splitCreateTablesQueries (used by buildDDLEventForCreateTables) do not synthesize a new ErrTiDBUnexpectedJobMeta when SplitQueries returns an error—wrap and return the original error (use errors.Trace(err) or cerror.WrapError(err) to attach a stack trace) so callers can inspect the real cause; if SplitQueries succeeds but query count != len(rawEvent.MultipleTableInfos), log the mismatch and return a non-fatal result that lets buildDDLEventForCreateTables perform its rebuild fallback (i.e., avoid returning a hard failure here).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/integration_tests/_utils/run_sql_file`:
- Line 8: The shared runner currently forces fast_create_table off by embedding
"set global tidb_enable_fast_create_table = OFF;" into the prepare variable;
remove that hard-coded setting from prepare in
tests/integration_tests/_utils/run_sql_file and instead make it opt-in (e.g.,
accept an optional parameter or environment flag like FAST_CREATE_TABLE or a
prepare_overrides argument to run_sql_file) so callers can choose to set
tidb_enable_fast_create_table on/off; update run_sql_file to append the
fast_create_table SET only when the opt-in flag indicates it, and update callers
(e.g., run_with_fast_create_table in
tests/integration_tests/batch_add_table/run.sh) to pass the flag when they want
the feature enabled.
In `@tests/integration_tests/batch_add_table/run.sh`:
- Line 69: The script currently calls skip_if_tidb_version_less_than which exits
the whole process, preventing later steps like run_with_fast_create_table and
check_logs from running; remove that global skip and instead guard only the
fast-create branch with a local version check that returns or skips that branch
(e.g., replace the call to skip_if_tidb_version_less_than "v8.5.0" with an
if-version check and only avoid calling run_with_fast_create_table when the TiDB
version is less than v8.5.0, while allowing the rest of the script (including
check_logs) to continue).
---
Duplicate comments:
In `@logservice/schemastore/persist_storage_ddl_handlers.go`:
- Around line 2935-2950: splitCreateTablesQueries currently converts
mismatched/split errors into a new generic unrecoverable error; instead preserve
the read-path fallback and the original split error: in splitCreateTablesQueries
(used by buildDDLEventForCreateTables) do not synthesize a new
ErrTiDBUnexpectedJobMeta when SplitQueries returns an error—wrap and return the
original error (use errors.Trace(err) or cerror.WrapError(err) to attach a stack
trace) so callers can inspect the real cause; if SplitQueries succeeds but query
count != len(rawEvent.MultipleTableInfos), log the mismatch and return a
non-fatal result that lets buildDDLEventForCreateTables perform its rebuild
fallback (i.e., avoid returning a hard failure here).
📒 Files selected for processing (6)
logservice/schemastore/persist_storage_ddl_handlers.go, logservice/schemastore/persist_storage_test.go, tests/integration_tests/_utils/run_sql, tests/integration_tests/_utils/run_sql_file, tests/integration_tests/_utils/run_sql_ignore_error, tests/integration_tests/batch_add_table/run.sh
The change to tests/integration_tests/_utils/run_sql_file:

```diff
 set -eu

-prepare="set global tidb_enable_clustered_index = 'int_only';"
+prepare="set global tidb_enable_clustered_index = 'int_only'; set global tidb_enable_fast_create_table = OFF;"
```
Don't force fast_create_table off in the shared SQL-file runner.
This helper now overrides caller-selected global state on every invocation. In tests/integration_tests/batch_add_table/run.sh, run_with_fast_create_table enables the feature at Line 16, but the later run_sql_file calls at Lines 52-53 flip it back off before those SQL files run. Please make this opt-in per test instead of hard-coding it in the shared runner.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/integration_tests/_utils/run_sql_file` at line 8, The shared runner
currently forces fast_create_table off by embedding "set global
tidb_enable_fast_create_table = OFF;" into the prepare variable; remove that
hard-coded setting from prepare in tests/integration_tests/_utils/run_sql_file
and instead make it opt-in (e.g., accept an optional parameter or environment
flag like FAST_CREATE_TABLE or a prepare_overrides argument to run_sql_file) so
callers can choose to set tidb_enable_fast_create_table on/off; update
run_sql_file to append the fast_create_table SET only when the opt-in flag
indicates it, and update callers (e.g., run_with_fast_create_table in
tests/integration_tests/batch_add_table/run.sh) to pass the flag when they want
the feature enabled.
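One way to make the override opt-in, as the comment suggests, is a small env-gated helper. `FAST_CREATE_TABLE` and `build_prepare` below are hypothetical names for illustration, not the suite's actual interface:

```shell
#!/bin/bash
# Sketch of an opt-in prepare override: the shared runner only disables
# fast_create_table when the caller explicitly asks, instead of forcing
# it globally on every invocation.
set -eu

build_prepare() {
	local prepare="set global tidb_enable_clustered_index = 'int_only';"
	# opt-in: append the override only when the env flag is explicitly OFF
	if [ "${FAST_CREATE_TABLE:-}" = "OFF" ]; then
		prepare="$prepare set global tidb_enable_fast_create_table = OFF;"
	fi
	echo "$prepare"
}

build_prepare
FAST_CREATE_TABLE=OFF build_prepare
```

With this shape, `run_with_fast_create_table`-style callers simply leave the flag unset, and only tests that need the legacy behavior export `FAST_CREATE_TABLE=OFF` before invoking the runner.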
/test all-v8-1

/test all-v8-1

/test all-v8-1

/test all-v8-1

/test all-v7-5
@lidezhu: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
What problem does this PR solve?
Issue Number: close #4406
What is changed and how it works?
Check List
Tests
Questions
Will it cause performance regression or break compatibility?
Do you need to update user documentation, design documentation or monitoring documentation?
Release note
Summary by CodeRabbit
Bug Fixes
Tests
Chores