# Sync upstream rust-v0.77.0 #142
Draft: CSRessel wants to merge 545 commits into `dev` from `sync/upstream-v0.77.0`
### Summary

Linux codesigning with Sigstore; test run output at https://github.com/openai/codex/actions/runs/19994328162?pr=7662.

Sigstore is one of the few options for codesigning on the Linux platform. Unlike Windows or macOS, where a central authority backs signatures, Linux is open source and binary/dist validation comes with the build itself. The alternative is GPG, which likewise requires shipping a public key with the bundle for validation. The advantage of Sigstore is that we do not have to create a private key for signing; it relies on [keyless signing](https://docs.sigstore.dev/cosign/signing/overview/) instead. This should be sufficient for us at this point, and we can support GPG in the future if we want to.
## Summary

Extend Ctrl+n/Ctrl+p navigation support to selection popups (model picker, approval mode, etc.)

This is a follow-up to #7530, which added Ctrl+n/Ctrl+p navigation to the textarea. The same keybindings were missing from `ListSelectionView`, causing inconsistent behavior when navigating selection popups.

## Related

- #7530 - feat(tui): map Ctrl-P/N to arrow navigation in textarea

## Changes

- Added Ctrl+n as an alternative to Down arrow in selection popups (sketched below)
- Added Ctrl+p as an alternative to Up arrow in selection popups
- Added unit tests for the new keybindings

## Test Plan

- [x] `cargo test -p codex-tui list_selection_view` - all tests pass
- [x] Manual testing: verified Ctrl+n/p navigation works in model selection popup

---------

Co-authored-by: Eric Traut <etraut@openai.com>
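A minimal sketch of what such a popup keybinding mapping can look like, assuming a crossterm-style `KeyEvent`; `ListSelection`, `move_up`, and `move_down` are hypothetical stand-ins, not the actual `ListSelectionView` code:

```rust
// Sketch only: assumes the crossterm crate is available.
use crossterm::event::{KeyCode, KeyEvent, KeyModifiers};

struct ListSelection {
    selected: usize,
    len: usize,
}

impl ListSelection {
    fn handle_key(&mut self, key: KeyEvent) {
        let ctrl = key.modifiers.contains(KeyModifiers::CONTROL);
        match key.code {
            // Down arrow and Ctrl+n move the highlight down.
            KeyCode::Down => self.move_down(),
            KeyCode::Char('n') if ctrl => self.move_down(),
            // Up arrow and Ctrl+p move the highlight up.
            KeyCode::Up => self.move_up(),
            KeyCode::Char('p') if ctrl => self.move_up(),
            _ => {}
        }
    }

    fn move_down(&mut self) {
        if self.selected + 1 < self.len {
            self.selected += 1;
        }
    }

    fn move_up(&mut self) {
        self.selected = self.selected.saturating_sub(1);
    }
}
```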
Also includes a minor refactor merging `ApprovalDecision` with `CommandExecutionRequestAcceptSettings`.
## Summary

- restore the previous status header when a non-error event arrives after a stream retry
- add a regression test to ensure the reconnect banner clears once streaming resumes

## Testing

- cargo fmt -- --config imports_granularity=Item
- cargo clippy --fix --all-features --tests --allow-dirty -p codex-tui
- NO_COLOR=0 cargo test -p codex-tui *(fails: vt100 color assertion tests expect colored cells but the environment returns Default colors even with NO_COLOR cleared and TERM/COLORTERM set)*

------

[Codex Task](https://chatgpt.com/codex/tasks/task_i_69337f8c77508329b3ea85134d4a7ac7)
To avoid regression with special builds like alphas
### Summary

Set up codesigning for the Windows dist with [Azure Trusted Signing](https://azure.microsoft.com/en-us/products/trusted-signing) and [its GitHub Action integration](https://github.com/Azure/trusted-signing-action).
This is a step towards removing the need to know `model` when constructing config. First, we don't need to know `model_info`; we just respect it if the user has already set it. Next step: we don't need to know `model` unless the user explicitly set it in `config.toml`.
We received a bug report that Codex CLI crashes when an env var contains a non-ASCII character, or, more specifically, one that cannot be decoded as UTF-8:

```shell
$ RUST_BACKTRACE=full RÖDBURK=1 codex
thread '<unnamed>' panicked at library/std/src/env.rs:162:57:
called `Result::unwrap()` on an `Err` value: "RÃ\xB6DBURK"
stack backtrace:
   0: 0x101905c18 - __mh_execute_header
   1: 0x1012bd76c - __mh_execute_header
   2: 0x1019050e4 - __mh_execute_header
   3: 0x101905ad8 - __mh_execute_header
   4: 0x101905874 - __mh_execute_header
   5: 0x101904f38 - __mh_execute_header
   6: 0x1019347bc - __mh_execute_header
   7: 0x10193472c - __mh_execute_header
   8: 0x101937884 - __mh_execute_header
   9: 0x101b3bcd0 - __mh_execute_header
  10: 0x101b3c0bc - __mh_execute_header
  11: 0x101927a20 - __mh_execute_header
  12: 0x1005c58d8 - __mh_execute_header
thread '<unnamed>' panicked at library/core/src/panicking.rs:225:5:
panic in a function that cannot unwind
stack backtrace:
   0: 0x101905c18 - __mh_execute_header
   1: 0x1012bd76c - __mh_execute_header
   2: 0x1019050e4 - __mh_execute_header
   3: 0x101905ad8 - __mh_execute_header
   4: 0x101905874 - __mh_execute_header
   5: 0x101904f38 - __mh_execute_header
   6: 0x101934794 - __mh_execute_header
   7: 0x10193472c - __mh_execute_header
   8: 0x101937884 - __mh_execute_header
   9: 0x101b3c144 - __mh_execute_header
  10: 0x101b3c1a0 - __mh_execute_header
  11: 0x101b3c158 - __mh_execute_header
  12: 0x1005c5ef8 - __mh_execute_header
thread caused non-unwinding panic. aborting.
```

I discovered I could reproduce this on a release build, but not a dev build, so between that and the unhelpful stack trace, my mind went to the pre-`main()` logic we run in prod builds. Sure enough, we were operating on `std::env::vars()` instead of `std::env::vars_os()`, which is why the non-UTF-8 environment variable was causing an issue. This PR updates the logic to use `std::env::vars_os()` and adds a unit test.

And to be extra sure, I also verified the fix works with a local release build:

```
$ cargo build --bin codex --release
$ RÖDBURK=1 ./target/release/codex --version
codex-cli 0.0.0
```
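For illustration, a minimal standalone sketch of the underlying idea (not the codex pre-`main()` code): iterating with `std::env::vars_os()` instead of `std::env::vars()` avoids the panic, because values that are not valid UTF-8 can be skipped or handled explicitly rather than unwrapped:

```rust
// std::env::vars() panics on env values that are not valid UTF-8, while
// vars_os() hands back OsStrings that we can filter or convert explicitly.
fn utf8_env() -> Vec<(String, String)> {
    std::env::vars_os()
        .filter_map(|(key, value)| {
            // Skip entries whose key or value is not valid UTF-8 instead of
            // unwrapping (which is what caused the reported crash).
            Some((key.into_string().ok()?, value.into_string().ok()?))
        })
        .collect()
}

fn main() {
    for (key, value) in utf8_env() {
        println!("{key}={value}");
    }
}
```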
Making sure we can override base instructions
This endpoint only exists on ChatGPT.
## What

Fix PageUp/PageDown behaviour in the Ctrl+T transcript overlay so that paging is continuous and reversible, and add tests to lock in the expected behaviour.

## Why

Today, paging in the transcript overlay uses the raw viewport height instead of the effective content height after layout. Because the overlay reserves some rows for chrome (header/footer), this can cause:

- PageDown to skip transcript lines between pages.
- PageUp/PageDown not to “round-trip” cleanly (PageDown then PageUp does not always return to the same set of visible lines).

This shows up when inspecting longer transcripts via Ctrl+T; see #7356 for context.

## How

- Add a dedicated `PagerView::page_step` helper that computes the page size from the last rendered content height and falls back to `content_area(viewport_area).height` when that is not yet available (see the sketch after this description).
- Use `page_step(...)` for both PageUp and PageDown (including SPACE) so the scroll step always matches the actual content area height, not the full viewport height.
- Add a focused test `transcript_overlay_paging_is_continuous_and_round_trips` that:
  - Renders a synthetic transcript with numbered `line-NN` rows.
  - Asserts that successive PageDown operations show continuous line numbers (no gaps).
  - Asserts that PageDown+PageUp and PageUp+PageDown round-trip correctly from non-edge offsets.

The change is limited to `codex-rs/tui/src/pager_overlay.rs` and only affects the transcript overlay paging semantics.

## Related issue

- #7356

## Testing

On Windows 11, using PowerShell 7 in the repo root:

```powershell
cargo test
cargo clippy --tests
cargo fmt -- --config imports_granularity=Item
```

- All tests passed.
- `cargo clippy --tests` reported some pre-existing warnings that are unrelated to this change; no new lints were introduced in the modified code.

--------

Signed-off-by: muyuanjin <24222808+muyuanjin@users.noreply.github.com>
Co-authored-by: Eric Traut <etraut@openai.com>
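A hedged sketch of the `page_step` idea described above; the stand-in `Rect`, the amount of chrome reserved, and the field names are assumptions rather than the real `PagerView` internals:

```rust
// Page by the effective content height (last render), with a layout-based
// fallback, never by the raw viewport height.
#[derive(Clone, Copy)]
struct Rect {
    y: u16,
    height: u16,
}

struct PagerView {
    // Height of the content area from the most recent render, if any.
    last_content_height: Option<u16>,
}

impl PagerView {
    fn content_area(&self, viewport: Rect) -> Rect {
        // Assume one row of chrome at the top and one at the bottom.
        Rect {
            y: viewport.y + 1,
            height: viewport.height.saturating_sub(2),
        }
    }

    fn page_step(&self, viewport: Rect) -> usize {
        let height = self
            .last_content_height
            .unwrap_or_else(|| self.content_area(viewport).height);
        usize::from(height.max(1))
    }
}

fn main() {
    let view = PagerView { last_content_height: None };
    let viewport = Rect { y: 0, height: 24 };
    // With no prior render, fall back to the laid-out content height (22).
    assert_eq!(view.page_step(viewport), 22);
}
```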
Fixes #7759:

- Drop the stale `rmcp` entry from `codex-rs/default.nix`’s `cargoLock.outputHashes` since the crate now comes from crates.io and no longer needs a git hash.
- Add the missing hash for the filedescriptor-0.8.3 git dependency (from `pakrym/wezterm`) so `buildRustPackage` can vendor it.
The repo we were depending on is very large, and we need only a very small part of it.

---------

Co-authored-by: Pavel <pavel@krymets.com>
…7779) This changes our default Landlock policy to allow the `sendmsg(2)` and `recvmsg(2)` syscalls. We believe these were originally denied out of an abundance of caution, but given that `send(2)` and `recv(2)` are allowed today [which provide comparable capability to the `*msg` equivalents], we do not believe allowing them grants any privileges beyond what we already allow. Rather than using the syscall as the security boundary, preventing access to the potentially hazardous file descriptor in the first place seems like the right layer of defense. In particular, this makes it possible for `shell-tool-mcp` to run on Linux when using a read-only sandbox for the Bash process, as demonstrated by `accept_elicitation_for_prompt_rule()` now succeeding in CI.
## Summary

- add vim-style pager navigation for transcript overlays (j/k, ctrl+f/b/d/u) without removing existing keys
- add shift-space to page up

------

[Codex Task](https://chatgpt.com/codex/tasks/task_i_69309d26da508329908b2dc8ca40afb7)
Fix for #7459

## What

Since codex errors out for unsupported images, stop attempting to base64/attach them and instead emit a clear placeholder when the file isn’t a supported image MIME.

## Why

Local uploads for unsupported formats (e.g., SVG/GIF/etc.) were dead-ending after decode failures because of the 400 retry loop. Users now get an explicit “cannot attach … unsupported image format …” response.

## How

Replace the fallback read/encode path with MIME detection that bails out for non-image or unsupported image types, returning a consistent placeholder. Unreadable and invalid images still produce their existing error placeholders.
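An illustrative sketch of such a MIME guard; the allowlist, function name, and message format are assumptions, not the codex implementation:

```rust
use std::path::Path;

// Return a supported MIME type, or an error string to use as the placeholder.
fn image_mime_or_placeholder(path: &Path) -> Result<&'static str, String> {
    match path.extension().and_then(|ext| ext.to_str()) {
        Some("png") => Ok("image/png"),
        Some("jpg") | Some("jpeg") => Ok("image/jpeg"),
        Some("webp") => Ok("image/webp"),
        // Anything else (svg, gif, non-images, ...) is rejected up front with
        // a placeholder instead of being base64-encoded and bounced by the API.
        other => Err(format!(
            "cannot attach {}: unsupported image format ({})",
            path.display(),
            other.unwrap_or("unknown")
        )),
    }
}

fn main() {
    assert_eq!(image_mime_or_placeholder(Path::new("a.png")), Ok("image/png"));
    assert!(image_mime_or_placeholder(Path::new("a.svg")).is_err());
}
```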
## Summary

Support "j" and "k" keys as aliases for "down" and "up" so vim users feel loved. Only support these keys when the selection is not searchable.

## Testing

- env -u NO_COLOR TERM=xterm-256color cargo test -p codex-tui

------

[Codex Task](https://chatgpt.com/codex/tasks/task_i_693771b53bc8833088669060dfac2083)
Introduce a new codex-tui2 crate that re-exports the existing interactive TUI surface and delegates run_main directly to codex-tui. This keeps behavior identical while giving tui2 its own crate for future viewport work.

Wire the codex CLI to select the frontend via the tui2 feature flag. When the merged CLI overrides include features.tui2=true (e.g. via --enable tui2), interactive runs are routed through codex_tui2::run_main; otherwise they continue to use the original codex_tui::run_main.

Register Feature::Tui2 in the core feature registry and add the tui2 crate and dependency entries so the new frontend builds alongside the existing TUI. This is a stub that only wires up the feature flag.

<img width="619" height="364" alt="image" src="https://github.com/user-attachments/assets/4893f030-932f-471e-a443-63fe6b5d8ed9" />
…oml (#7796) This PR attempts to solve two problems by introducing an `AbsolutePathBuf` type with a special deserializer:

- `AbsolutePathBuf` attempts to be a generally useful abstraction, as it ensures, by construction, that it represents a value that is an absolute, normalized path, which is a stronger guarantee than an arbitrary `PathBuf`.
- Values in `config.toml` that can be either an absolute or relative path should be resolved against the folder containing the `config.toml` in the relative path case. This PR makes this easy to support: the main cost is ensuring `AbsolutePathBufGuard` is used inside `deserialize_config_toml_with_base()`.

While `AbsolutePathBufGuard` may seem slightly distasteful because it relies on thread-local storage, this seems much cleaner to me than my various experiments with https://docs.rs/serde/latest/serde/de/trait.DeserializeSeed.html. Further, since the `deserialize()` method from the `Deserialize` trait is not async, we do not really have to worry about the deserialization work being spread across multiple threads in a way that would interfere with `AbsolutePathBufGuard`.

To start, this PR introduces the use of `AbsolutePathBuf` in `OtelTlsConfig`. Note how this simplifies `otel_provider.rs` because it no longer requires `settings.codex_home` to be threaded through. Furthermore, this sets us up better for a world where multiple `config.toml` files from different folders could be loaded and then merged together, as the absolutifying of the paths must be done against the correct parent folder.
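A rough, self-contained sketch of the thread-local base-directory idea (the real `AbsolutePathBuf`/`AbsolutePathBufGuard` also hook into serde's `Deserialize`; the names and details below are simplified assumptions):

```rust
// A scoped thread-local base directory plus a resolver; only the resolution
// rule for values read from config.toml is shown here.
use std::cell::RefCell;
use std::path::{Path, PathBuf};

thread_local! {
    static BASE_DIR: RefCell<Option<PathBuf>> = RefCell::new(None);
}

struct AbsolutePathBufGuard;

impl AbsolutePathBufGuard {
    fn new(base: &Path) -> Self {
        BASE_DIR.with(|b| *b.borrow_mut() = Some(base.to_path_buf()));
        AbsolutePathBufGuard
    }
}

impl Drop for AbsolutePathBufGuard {
    fn drop(&mut self) {
        BASE_DIR.with(|b| *b.borrow_mut() = None);
    }
}

/// Resolve a possibly-relative config value against the folder containing
/// the config.toml it came from (set by the guard on this thread).
fn resolve(raw: &str) -> PathBuf {
    let path = PathBuf::from(raw);
    if path.is_absolute() {
        return path;
    }
    BASE_DIR.with(|b| match b.borrow().as_ref() {
        Some(base) => base.join(&path),
        None => path,
    })
}

fn main() {
    let _guard = AbsolutePathBufGuard::new(Path::new("/home/user/.codex"));
    assert_eq!(
        resolve("certs/ca.pem"),
        PathBuf::from("/home/user/.codex/certs/ca.pem")
    );
}
```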
### Summary

* Added `mcpServer/oauthLogin` in the app server to support in-session MCP server login.
* Added `McpServerOauthLoginParams` and `McpServerOauthLoginResponse` to support the above method, with the response returning the auth URL for the consumer to open in a browser or display accordingly.
* Added `McpServerOauthLoginCompletedNotification`, which the app server emits on MCP server login success or failure (i.e. timeout).
* Refactored the rmcp-client oath_login flow to be able to start an auth server, which `codex_message_processor` uses for in-session auth.
- Updating helpers, refactoring some functions that will be used in the elevated sandbox
- Better logging
- Better and faster handling of ACL checks/writes
- No functional change: the legacy restricted-token sandbox remains the only path.
…st (#8142) Historically, `accept_elicitation_for_prompt_rule()` was flaky because we were using a notification to update the sandbox followed by a `shell` tool request that we expected to be subject to the new sandbox config, but because [rmcp](https://crates.io/crates/rmcp) MCP servers delegate each incoming message to a new Tokio task, messages are not guaranteed to be processed in order, so sometimes the `shell` tool call would run before the notification was processed. Prior to this PR, we relied on a generous `sleep()` between the notification and the request to reduce the chance of the test flaking out.

This PR implements a proper fix, which is to use a _request_ instead of a notification for the sandbox update so that we can wait for the response to the sandbox request before sending the request to the `shell` tool call. Previously, `rmcp` did not support custom requests, but I fixed that in modelcontextprotocol/rust-sdk#590, which made it into the `0.12.0` release (see #8288).

This PR updates `shell-tool-mcp` to expect `"codex/sandbox-state/update"` as a _request_ instead of a notification and sends the appropriate ack. Note this behavior is tied to our custom `codex/sandbox-state` capability, which Codex honors as an MCP client, which is why `core/src/mcp_connection_manager.rs` had to be updated as part of this PR, as well. This PR also updates the docs at `shell-tool-mcp/README.md`.
regression: #8199 Signed-off-by: Koichi Shiraishi <zchee.io@gmail.com>
…onfigBuilder (#8276) openai/codex#8235 introduced `ConfigBuilder`, and this PR updates all non-test call sites to use it instead of `Config::load_from_base_config_with_overrides()`. This is important because `load_from_base_config_with_overrides()` uses an empty `ConfigRequirements`, which is a reasonable default for testing so the tests are not influenced by the settings on the host. This method is now guarded by `#[cfg(test)]` so it cannot be used by business logic. Because `ConfigBuilder::build()` is `async`, many of the test methods had to be migrated to be `async` as well. On the bright side, this made it possible to eliminate a bunch of `block_on_future()` stuff.
## Description
Introduced an `ExternalSandbox` policy to cover the use case where the sandbox is
defined by the outside environment. Effectively it translates to
`SandboxMode#DangerFullAccess` for the file system (since the sandbox is
configured at the container level) plus a configurable `network_access`
(either Restricted or Enabled, as set by the outside environment).
As an example, you can configure the `ExternalSandbox` policy as part of the
`sendUserTurn` v1 app_server API (a type-level sketch follows the example below):
```
{
  "conversationId": <id>,
  "cwd": <cwd>,
  "approvalPolicy": "never",
  "sandboxPolicy": {
    "type": "external-sandbox",
    "network_access": "enabled"/"restricted"
  },
  "model": <model>,
  "effort": <effort>,
  ....
}
```
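A sketch of how such a policy variant might be modeled with serde; the variant and field names follow the description above but are assumptions, not the exact codex-rs definitions (assumes the `serde` and `serde_json` crates):

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
#[serde(tag = "type", rename_all = "kebab-case")]
enum SandboxPolicy {
    DangerFullAccess,
    ExternalSandbox {
        // Decided by the outside environment that owns the real sandbox.
        network_access: NetworkAccess,
    },
}

#[derive(Debug, Deserialize)]
#[serde(rename_all = "lowercase")]
enum NetworkAccess {
    Restricted,
    Enabled,
}

fn main() {
    let policy: SandboxPolicy = serde_json::from_str(
        r#"{ "type": "external-sandbox", "network_access": "enabled" }"#,
    )
    .expect("valid policy JSON");
    println!("{policy:?}");
}
```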
Screenshots below, but check the snapshot files to see it better. <img width="712" height="408" alt="Screenshot 2025-12-18 at 11 58 02" src="https://github.com/user-attachments/assets/84a2c410-0767-4870-84d1-ae1c0d4c445e" /> <img width="523" height="352" alt="Screenshot 2025-12-18 at 11 17 41" src="https://github.com/user-attachments/assets/d029c7ea-0feb-4493-9dca-af43a0c70c52" />
Only display the skill name (not the folder), and truncate the skill description to a maximum of two lines.
Fix broken tests.
skills default on.
a new scope reads from /etc/codex
We were assembling the skill roots in two different places, and the admin root was missing in one of them. This change centralizes root selection into a helper so both paths stay in sync.
Keep Windows OFF first.
Fixes #8214 by removing the '--staged' flag from the undo git restore command. This ensures that while the working tree is reverted to the snapshot state, the user's staged changes (index) are preserved, preventing data loss. Also adds a regression test.
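A minimal sketch of the resulting behavior using `std::process::Command` (not the actual codex undo code; `--source` and `--worktree` are standard `git restore` options):

```rust
use std::process::Command;

// Restore the working tree from the snapshot without "--staged", so the
// user's staged changes (index) are left intact.
fn undo_to_snapshot(snapshot: &str) -> std::io::Result<std::process::ExitStatus> {
    Command::new("git")
        .args(["restore", "--source", snapshot, "--worktree", "."])
        .status()
}
```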
This will make it easier to test for expected errors in unit tests since we can compare based on the field values rather than the message (which might change over time). See openai/codex#8298 for an example. It also ensures more consistency in the way a `ConstraintError` is constructed.
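An illustrative sketch only; the field names are assumptions. The point is that a structured error lets a test assert on fields rather than on a message string:

```rust
// Hypothetical shape of a structured constraint error.
#[derive(Debug, PartialEq)]
struct ConstraintError {
    setting: &'static str,
    requested: String,
    allowed: Vec<String>,
}

fn main() {
    let err = ConstraintError {
        setting: "sandbox_mode",
        requested: "danger-full-access".to_string(),
        allowed: vec!["read-only".to_string(), "workspace-write".to_string()],
    };
    // A test can compare individual fields (or the whole value) directly,
    // so the Display wording can change without breaking assertions.
    assert_eq!(err.setting, "sandbox_mode");
    assert!(err.allowed.contains(&"read-only".to_string()));
}
```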
This test was introduced in openai/codex#6507, but was not included in `mod.rs`. It does not appear that it was getting compiled?
Automated update of models.json. Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
Problem

- Mouse wheel events were scheduling a redraw on every event, which could backlog and create lag during fast scrolling.

Solution

- Schedule transcript scroll redraws with a short delay (16ms) so the frame requester coalesces bursts into fewer draws.

Why

- Smooths rapid wheel scrolling while keeping the UI responsive.

Testing

- Manual: Scrolled in iTerm and Ghostty; no lag observed.
- `cargo clippy --fix --all-features --tests --allow-dirty --allow-no-vcs -p codex-tui2`
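A small sketch of the coalescing idea, with an assumed `FrameRequester`-like type (not the tui2 implementation):

```rust
use std::time::{Duration, Instant};

#[derive(Default)]
struct FrameRequester {
    next_draw: Option<Instant>,
}

impl FrameRequester {
    fn schedule_frame_in(&mut self, delay: Duration) {
        let when = Instant::now() + delay;
        // Keep the earliest pending deadline; later requests fold into it.
        self.next_draw = Some(self.next_draw.map_or(when, |due| due.min(when)));
    }

    fn on_scroll_event(&mut self) {
        // One draw per ~16ms window instead of one draw per wheel event.
        self.schedule_frame_in(Duration::from_millis(16));
    }

    fn take_due_draw(&mut self, now: Instant) -> bool {
        match self.next_draw {
            Some(due) if due <= now => {
                self.next_draw = None;
                true
            }
            _ => false,
        }
    }
}
```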
## Summary

- centralize file name derivation in codex-file-search
- reuse the helper in app-server fuzzy search to avoid duplicate logic
- add unit tests for file_name_from_path

## Testing

- cargo test -p codex-file-search
- cargo test -p codex-app-server
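A tiny sketch of what such a helper could look like; the signature is an assumption, not the actual `codex-file-search` API:

```rust
// Derive a display file name from a path-like string, falling back to the
// whole input when there is no separator.
fn file_name_from_path(path: &str) -> &str {
    path.rsplit(|c: char| c == '/' || c == '\\')
        .next()
        .filter(|segment| !segment.is_empty())
        .unwrap_or(path)
}

fn main() {
    assert_eq!(file_name_from_path("codex-rs/tui/src/app.rs"), "app.rs");
    assert_eq!(file_name_from_path(r"codex-rs\tui\src\app.rs"), "app.rs");
    assert_eq!(file_name_from_path("README.md"), "README.md");
}
```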
This adds support for `allowed_sandbox_modes` in `requirements.toml` and provides legacy support for constraining sandbox modes in `managed_config.toml`. This is converted to `Constrained<SandboxPolicy>` in `ConfigRequirements` and applied to `Config` such that constraints are enforced throughout the harness. Note that, because `managed_config.toml` is deprecated, we do not add support for the new `external-sandbox` variant recently introduced in openai/codex#8290. As noted, that variant is not supported in `config.toml` today, but can be configured programmatically via app server.
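A loose sketch of the `Constrained<T>` idea described above; names and behavior are assumptions, not the actual `ConfigRequirements` code:

```rust
// A current value plus the set of values the requirements layer allows.
#[derive(Debug)]
struct Constrained<T: PartialEq> {
    value: T,
    allowed: Vec<T>,
}

impl<T: PartialEq> Constrained<T> {
    fn set(&mut self, requested: T) -> Result<(), &'static str> {
        if self.allowed.contains(&requested) {
            self.value = requested;
            Ok(())
        } else {
            Err("requested value is not permitted by allowed_sandbox_modes")
        }
    }
}

fn main() {
    let mut mode = Constrained {
        value: "read-only".to_string(),
        allowed: vec!["read-only".to_string(), "workspace-write".to_string()],
    };
    assert!(mode.set("workspace-write".to_string()).is_ok());
    assert!(mode.set("danger-full-access".to_string()).is_err());
}
```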
…wd (#8353) `load_config_layers_state()` should load config from a `.codex/config.toml` in any folder between the `cwd` for a thread and the project root. To do that, `load_config_layers_state()` needs to know what the `cwd` is, so this PR does the work to thread the `cwd` through for existing call sites. A notable exception is the `/config` endpoint in the app server, for which a `cwd` is not guaranteed to be associated with the query, so the `cwd` param is `Option<AbsolutePathBuf>` to account for this case. The logic to make use of the `cwd` will be done in a follow-up PR.
## TUI2: Normalize Mouse Scroll Input Across Terminals (Wheel + Trackpad)

This changes TUI2 scrolling to a stream-based model that normalizes terminal scroll event density into consistent wheel behavior (default: ~3 transcript lines per physical wheel notch) while keeping trackpad input higher fidelity via fractional accumulation.

Primary code: `codex-rs/tui2/src/tui/scrolling/mouse.rs`
Doc of record (model + probe-derived data): `codex-rs/tui2/docs/scroll_input_model.md`

### Why

Terminals encode both mouse wheels and trackpads as discrete scroll up/down events with direction but no magnitude, and they vary widely in how many raw events they emit per physical wheel notch (commonly 1, 3, or 9+). Timing alone doesn’t reliably distinguish wheel vs trackpad, so cadence-based heuristics are unstable across terminals/hardware.

This PR treats scroll input as short *streams* separated by silence or direction flips, normalizes raw event density into tick-equivalents, coalesces redraws for dense streams, and exposes explicit config overrides.

### What Changed

#### Scroll Model (TUI2)

- Stream detection
  - Start a stream on the first scroll event.
  - End a stream on an idle gap (`STREAM_GAP_MS`) or a direction flip.
- Normalization
  - Convert raw events into tick-equivalents using per-terminal `tui.scroll_events_per_tick`.
- Wheel-like vs trackpad-like behavior
  - Wheel-like: fixed “classic” lines per wheel notch; flush immediately for responsiveness.
  - Trackpad-like: fractional accumulation + carry across stream boundaries; coalesce flushes to ~60Hz to avoid floods and reduce “stop lag / overshoot”.
  - Trackpad divisor is intentionally capped: `min(scroll_events_per_tick, 3)` so terminals with dense wheel ticks (e.g. 9 events per notch) don’t make trackpads feel artificially slow.
- Auto mode (default)
  - Start conservatively as trackpad-like (avoid overshoot).
  - Promote to wheel-like if the first tick-worth of events arrives quickly.
  - Fallback for 1-event-per-tick terminals (no tick-completion timing signal).

#### Trackpad Acceleration

Some terminals produce relatively low vertical event density for trackpad gestures, which makes large/faster swipes feel sluggish even when small motions feel correct. To address that, trackpad-like streams apply a bounded multiplier based on event count (see the sketch after this description):

- `multiplier = clamp(1 + abs(events) / scroll_trackpad_accel_events, 1..scroll_trackpad_accel_max)`

The multiplier is applied to the trackpad stream’s computed line delta (including carried fractional remainder). Defaults are conservative and bounded.

#### Config Knobs (TUI2)

All keys live under `[tui]`:

- `scroll_wheel_lines`: lines per physical wheel notch (default: 3).
- `scroll_events_per_tick`: raw vertical scroll events per physical wheel notch (terminal-specific default; fallback: 3).
  - Wheel-like per-event contribution: `scroll_wheel_lines / scroll_events_per_tick`.
- `scroll_trackpad_lines`: baseline trackpad sensitivity (default: 1).
  - Trackpad-like per-event contribution: `scroll_trackpad_lines / min(scroll_events_per_tick, 3)`.
- `scroll_trackpad_accel_events` / `scroll_trackpad_accel_max`: bounded trackpad acceleration (defaults: 30 / 3).
- `scroll_mode = auto|wheel|trackpad`: force behavior or use the heuristic (default: `auto`).
- `scroll_wheel_tick_detect_max_ms`: auto-mode promotion threshold (ms).
- `scroll_wheel_like_max_duration_ms`: auto-mode fallback for 1-event-per-tick terminals (ms).
- `scroll_invert`: invert scroll direction (applies to wheel + trackpad).

Config docs: `docs/config.md` and field docs in `codex-rs/core/src/config/types.rs`.

#### App Integration

- The app schedules follow-up ticks to close idle streams (via `ScrollUpdate::next_tick_in` and `schedule_frame_in`) and finalizes streams on draw ticks.
- `codex-rs/tui2/src/app.rs`

#### Docs

- Single doc of record describing the model + preserved probe findings/spec:
  - `codex-rs/tui2/docs/scroll_input_model.md`

#### Other (jj-only friendliness)

- `codex-rs/tui2/src/diff_render.rs`: prefer stable cwd-relative paths when the file is under the cwd even if there’s no `.git`.

### Terminal Defaults

Per-terminal defaults are derived from scroll-probe logs (see doc). Notable:

- Ghostty currently defaults to `scroll_events_per_tick = 3` even though logs measured ~9 in one setup. This is a deliberate stopgap; if your Ghostty build emits ~9 events per wheel notch, set:

```toml
[tui]
scroll_events_per_tick = 9
```

### Testing

- `just fmt`
- `just fix -p codex-core --allow-no-vcs`
- `cargo test -p codex-core --lib` (pass)
- `cargo test -p codex-tui2` (scroll tests pass; remaining failures are known flaky VT100 color tests in `insert_history`)

### Review Focus

- Stream finalization + frame scheduling in `codex-rs/tui2/src/app.rs`.
- Auto-mode promotion thresholds and the 1-event-per-tick fallback behavior.
- Trackpad divisor cap (`min(events_per_tick, 3)`) and acceleration defaults.
- Ghostty default tradeoff (3 vs ~9) and whether we should change it.
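A hedged sketch of the trackpad contribution math described above; the knob names mirror the `[tui]` config keys, but the exact carry handling and the accumulator type are assumptions rather than the tui2 code:

```rust
#[derive(Default)]
struct TrackpadAccumulator {
    // Fractional lines carried across flushes / stream boundaries.
    carry: f32,
}

impl TrackpadAccumulator {
    /// Convert a burst of raw scroll events into whole lines to scroll.
    fn lines_for_events(
        &mut self,
        events: i32,          // signed raw event count in this burst
        events_per_tick: f32, // tui.scroll_events_per_tick
        trackpad_lines: f32,  // tui.scroll_trackpad_lines
        accel_events: f32,    // tui.scroll_trackpad_accel_events
        accel_max: f32,       // tui.scroll_trackpad_accel_max
    ) -> i32 {
        // Divisor is capped at 3 so dense-wheel terminals don't slow trackpads.
        let divisor = events_per_tick.clamp(1.0, 3.0);
        let base = events as f32 * trackpad_lines / divisor;
        // Bounded acceleration: larger bursts scroll proportionally further.
        let multiplier = (1.0 + events.abs() as f32 / accel_events).clamp(1.0, accel_max);
        let total = base * multiplier + self.carry;
        let whole = total.trunc();
        self.carry = total - whole;
        whole as i32
    }
}

fn main() {
    let mut acc = TrackpadAccumulator::default();
    // 9 upward events with default-ish knobs: 9 * 1 / 3 = 3 lines, then a
    // 1.3x acceleration multiplier gives 3.9 -> scroll 3 lines, carry ~0.9.
    let lines = acc.lines_for_events(9, 3.0, 1.0, 30.0, 3.0);
    assert_eq!(lines, 3);
    println!("scroll {lines} lines, carry {:.2}", acc.carry);
}
```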
### Summary

With codesigning on Mac, Windows, and Linux, we should be able to safely remove the `features.rmcp_client` and `use_experimental_use_rmcp_client` checks from the codebase now.
Removes plan from system skills. It has been rewritten into `create-plan` for evaluation and feedback: openai/skills#22
## Upstream Sync

This PR syncs changes from the upstream release `rust-v0.77.0`.

### Summary

rust-v0.77.0

### Workflow Sanitization

The following upstream workflows had their triggers replaced with `workflow_dispatch`:

- cargo-deny.yml
- ci.yml
- cla.yml
- close-stale-contributor-prs.yml
- codespell.yml
- issue-deduplicator.yml
- issue-labeler.yml
- rust-release-prepare.yml
- rust-release.yml
- sdk.yml
- shell-tool-mcp-ci.yml
- shell-tool-mcp.yml

### Merge Instructions

```shell
git checkout dev
git merge sync/upstream-v0.77.0 --no-ff  # Resolve conflicts if any
cd codex-rs && cargo test
cargo insta review
```

### After Merge

- `sync/upstream-v0.77.0` branch