Description
Problem Statement
I'm part of a platform team responsible for providing standardized local development environments across 600+ repositories. We use devcontainers as the foundation, with a shared base image that bundles Kubernetes tooling (Docker-in-Docker, k3d, kubectl, helm, tilt, k9s, stern, etc.).
The base image is versioned with semver and published to our private container registry. Teams reference it in their project-specific Dockerfiles via FROM registry.example.com/devcontainer-base:2.3.1.
The image solves what goes inside the container. What it doesn't solve is the configuration around the container -- the devcontainer.json properties, the lifecycle scripts, the host-side initialization, the mounts, the environment variable passthrough, the IDE settings. This configuration is just as important as the image itself, and we have no good way to version and distribute it.
What we need to distribute and keep in sync across 600+ repos
A typical project's devcontainer.json looks like this:
```json
{
  "initializeCommand": ".devcontainer/lifecycle/initializeCommand/main.sh",
  "appPort": ["443:31402", "80:31861", "10001:32432", "5354:30054/udp"],
  "build": { "dockerfile": "Dockerfile" },
  "remoteUser": "vscode",
  "runArgs": ["--init", "--add-host=host.docker.internal:host-gateway", "--privileged"],
  "forwardPorts": [10350],
  "containerEnv": { "SHELL": "/bin/zsh" },
  "remoteEnv": {
    "DOCKER_CONTEXT": "default",
    "REGISTRY_USERNAME": "${localEnv:REGISTRY_USERNAME}",
    "REGISTRY_PASSWORD": "${localEnv:REGISTRY_PASSWORD}",
    "ML_PLATFORM_HOST": "${localEnv:ML_PLATFORM_HOST}",
    "ML_PLATFORM_API_KEY": "${localEnv:ML_PLATFORM_API_KEY}"
  },
  "overrideCommand": false,
  "postStartCommand": "./.devcontainer/lifecycle/postStartCommand/main.sh",
  "workspaceMount": "source=${localWorkspaceFolder},target=/home/app,type=bind",
  "workspaceFolder": "/home/app",
  "mounts": [
    "source=dind-my-project,target=/var/lib/docker,type=volume",
    "source=${localEnv:HOME}${localEnv:USERPROFILE}/.docker/config.json,target=/home/vscode/.docker/config.json,type=bind,consistency=cached,readonly",
    "source=${localEnv:HOME}${localEnv:USERPROFILE}/.config/helm,target=/home/vscode/.config/helm,type=bind,consistency=cached",
    "source=${localEnv:HOME}${localEnv:USERPROFILE}/.sbt,target=/home/vscode/.sbt,type=bind,consistency=cached",
    "source=${localEnv:HOME}${localEnv:USERPROFILE}/.config/pip/pip.conf,target=/home/vscode/.config/pip/pip.conf,type=bind,consistency=cached,readonly",
    "source=${localEnv:HOME}/.local/share/dev-certs,target=/home/vscode/.dev-certs,type=bind,consistency=cached"
  ],
  "customizations": {
    "vscode": {
      "extensions": ["tilt-dev.tiltfile", "vscjava.vscode-java-pack"],
      "settings": { "terminal.integrated.defaultProfile.linux": "zsh" }
    }
  }
}
```

This configuration is a mix of org-standard properties (shared across all repos) and project-specific properties. The org-standard parts include:
- `remoteUser`, `containerEnv`, `runArgs`, `forwardPorts` -- identical across all projects
- Standard `mounts` (Docker config, Helm preferences, CA certificates) -- shared infrastructure
- Standard `remoteEnv` passthrough (private registry credentials) -- org-wide
- Standard `customizations` (Tilt extension, zsh terminal) -- shared tooling
- The `initializeCommand` script that sets up DNS resolution and generates/trusts CA certificates on the host -- critical shared infrastructure
- Container-side lifecycle scripts that bootstrap the K8s cluster, configure helm repos, etc.
The project-specific parts include:
- `appPort` mappings (different per service)
- `workspaceMount`/`workspaceFolder` (may vary)
- Project-specific `mounts` (`.sbt` for Scala, `pip.conf` for Python)
- Project-specific `remoteEnv` (ML platform credentials, etc.)
- `postStartCommand`
- Project-specific VS Code extensions
The core problem: when we need to change any org-standard property -- add a new mount, update a lifecycle script, change an environment variable, add an IDE extension -- we have no mechanism to push that change to 600+ repos. Each repo has its own copy of these values, and they drift.
Requirements
- Version pinning: Teams should be able to pin to a specific version of the shared configuration and upgrade on their own schedule
- Override capability: Teams must be able to add or modify properties on top of the shared configuration (add project-specific mounts, env vars, extensions, lifecycle hooks)
- CLI and IDE compatibility: Must work with the `devcontainer` CLI (for CI/headless use) and IDE extensions (VS Code AND JetBrains)
What I tried
Attempt 1: Image metadata labels (devcontainer.metadata)
I embedded org-standard configuration into the base Docker image using the devcontainer.metadata label:
```dockerfile
LABEL devcontainer.metadata='[{
  "remoteUser": "vscode",
  "containerEnv": { "SHELL": "/bin/zsh" },
  "remoteEnv": {
    "DOCKER_CONTEXT": "default",
    "REGISTRY_USERNAME": "${localEnv:REGISTRY_USERNAME}"
  },
  "forwardPorts": [10350],
  "mounts": [
    "source=${localEnv:HOME}/.docker/config.json,target=/home/vscode/.docker/config.json,type=bind,consistency=cached,readonly",
    "source=${localEnv:HOME}/.local/share/dev-certs,target=/home/vscode/.dev-certs,type=bind,consistency=cached"
  ],
  "customizations": {
    "vscode": {
      "extensions": ["tilt-dev.tiltfile"],
      "settings": { "terminal.integrated.defaultProfile.linux": "zsh" }
    }
  }
}]'
```

What this solved: Container-side lifecycle hooks, environment variables, mounts, IDE settings, and port forwards can all live in the image label and merge with each repo's devcontainer.json. When we release a new image version, the config updates automatically -- zero changes needed in project repos.
What this didn't solve:
- `initializeCommand` cannot be set via labels. This is the only lifecycle hook that runs on the host machine (before the container exists). Our `initializeCommand` generates CA certificates and configures the host's DNS resolver -- genuine host-side work that produces files consumed by bind mounts. There is no way to distribute these host-side scripts through the image.
- Several important properties are not label-settable. `runArgs`, `appPort`, `workspaceMount`, `workspaceFolder`, and `build`/`image` can only be set in `devcontainer.json`. So even with maximum label usage, each repo still needs a non-trivial `devcontainer.json`.
- Config is coupled to the image version. I can't update a mount or an environment variable without rebuilding and releasing a new image. Configuration and tooling should be independently versionable.
- IDE support uncertainty. JetBrains' devcontainer implementation may not fully support image metadata label parsing. For orgs that use both VS Code and JetBrains, this is a risk.
Attempt 2: Devcontainer Features
I packaged the shared configuration as a custom Devcontainer Feature, published to our OCI registry.
What this solved: Features can carry lifecycle hooks (onCreateCommand, postCreateCommand, postStartCommand, postAttachCommand), containerEnv, mounts, customizations, and more. They support semver via OCI tags, and teams can pin at major/minor/patch granularity. Feature options provide type-safe, documented customization points.
What this didn't solve:
- Features cannot contribute `initializeCommand`. Same gap as image labels -- the host-side hook is unreachable. Our DNS/CA setup must run on the host before the container starts, producing files that bind mounts reference. There is no Feature mechanism for this.
- Features cannot set `runArgs`, `appPort`, `workspaceMount`, or `workspaceFolder`. Same gap as labels.
- Feature + image creates two version pins to manage. Teams must coordinate Feature version and image version independently.
Attempt 3: Bootstrap script for initializeCommand
To solve the host-side gap, I added a small bootstrap script to each repo that downloads versioned host scripts from our package registry:
```bash
#!/usr/bin/env bash
# Derive the shared-config version from the base image tag in the Dockerfile.
VERSION=$(grep -oP '(?<=base:)[0-9]+\.[0-9]+\.[0-9]+' .devcontainer/Dockerfile)
CACHE="${HOME}/.cache/dv-devcontainer/${VERSION}"
# Download and unpack the versioned host scripts on first use; reuse the cache afterwards.
[ -d "${CACHE}" ] || (mkdir -p "${CACHE}" && curl -sSL "<registry>/${VERSION}/host-scripts.tar.gz" | tar -xz -C "${CACHE}")
# Run the host-side entry point against the workspace root.
"${CACHE}/main.sh" "$(pwd)"
```

What this solved: Host scripts are versioned centrally, cached locally, and the bootstrap is generic enough to rarely need updating.
What this didn't solve: This is a workaround, not a solution. Every repo still needs this bootstrap script. It requires network access on first run. It requires a separate publishing pipeline for the host scripts tarball. And it's invisible to the devcontainer tooling -- there's no spec-level awareness that this is a versioned dependency.
Attempt 4: Rendered configuration via automated merge requests
As a last resort, I considered treating devcontainer config as generated code: a central repo holds templates, a CI pipeline renders project-specific .devcontainer/ directories, and automated MRs push them to each repo.
What this solved: Everything. Complete self-containment, no runtime dependencies, works in air-gapped environments, maximum IDE compatibility.
What this didn't solve: This is essentially giving up on the devcontainer spec and building a bespoke distribution system outside of it. It requires maintaining a project registry of 600+ repos, building MR automation, handling merge conflicts when teams modify generated files, and dealing with MR fatigue at scale. It's the heaviest approach by far.
The gap in the spec
After exhausting all available mechanisms, I see four fundamental gaps:
1. No extends or remote configuration inheritance
There is no way for a devcontainer.json to reference a remote, versioned base configuration. The proposed extends property (#22) only supports relative paths within the same repo -- it solves configuration sharing within a single project (e.g., between a base config and a CI-specific variant), but it doesn't solve the cross-repo distribution problem at all.
What's needed is something like:
```json
{
  "extends": "oci://registry.example.com/devcontainer-configs/k8s-standard:2.3",
  "appPort": ["443:31402", "80:31861"],
  "workspaceFolder": "/home/app"
}
```

Where the referenced config is a versioned artifact containing a base devcontainer.json with all the org-standard properties, and the local devcontainer.json only specifies overrides and additions.
Remote fetching and authentication are critical parts of this. In an enterprise setting, these configuration packages live in private registries. The devcontainer tooling would need to authenticate to fetch them. But this is a solved problem -- the same authentication that's already needed for:
- OCI registries: Features already use OCI distribution. The same registry credentials (from `~/.docker/config.json` or credential helpers) could be reused for fetching extended configs.
- Private npm registries: If the distribution format were npm packages, the existing `.npmrc` credential configuration would apply.
- Private git repos: SSH keys or token-based auth, already configured on developer machines.
The key insight is that the authentication infrastructure already exists on every developer's machine (they need it to pull the base image and any private Features). A remote extends should piggyback on these existing credential flows rather than inventing a new one. If the distribution mechanism is OCI (like Features), no additional auth setup would be needed at all -- the same Docker credential helpers that authenticate docker pull for the base image would also authenticate the extends fetch.
This would also solve versioning naturally. OCI tags support semver. Teams could pin to k8s-standard:2 (get all non-breaking updates), k8s-standard:2.3 (get patches only), or k8s-standard:2.3.1 (exact pin). The same semantics already work for Features.
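To make the pinning semantics concrete, here is a minimal sketch (not part of any existing tooling; the function name and plain `MAJOR.MINOR.PATCH` tag format are assumptions) of how a partial pin could resolve against the tags available in a registry:

```python
def resolve_pin(pin: str, tags: list[str]) -> str:
    """Resolve a semver pin ("2", "2.3", "2.3.1") to the newest matching tag.

    Hypothetical sketch of the tag resolution a remote `extends` could use;
    assumes tags are plain MAJOR.MINOR.PATCH strings.
    """
    prefix = tuple(int(p) for p in pin.split("."))
    candidates = [
        tuple(int(p) for p in t.split("."))
        for t in tags
        if tuple(int(p) for p in t.split("."))[: len(prefix)] == prefix
    ]
    if not candidates:
        raise ValueError(f"no tag matches pin {pin!r}")
    # Newest match wins, so a pin of "2" floats across minors, "2.3" across patches.
    return ".".join(str(p) for p in max(candidates))

tags = ["2.2.0", "2.3.0", "2.3.1", "2.4.2", "3.0.0"]
```

With these tags, `resolve_pin("2", tags)` floats to `2.4.2`, `resolve_pin("2.3", tags)` picks up only patches (`2.3.1`), and an exact pin stays fixed.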
This single change would eliminate the need for most of the workarounds described above.
2. Features cannot contribute initializeCommand
initializeCommand is the only lifecycle hook that runs on the host, and it's the only one that cannot be set via Features or image labels. This creates an asymmetry: Features can fully manage the container-side lifecycle, but the host-side lifecycle is completely unaddressable.
For our use case, initializeCommand performs critical infrastructure setup (DNS resolver configuration, CA certificate generation and trust) that produces artifacts on the host. These artifacts are then consumed by bind mounts when the container starts. This is a dependency chain: host-side scripts produce files that the container's mounts reference.
If Features could contribute initializeCommand entries (following the same concatenation semantics as other lifecycle hooks, with Feature-provided commands running before user-provided commands), the host-side distribution problem would be largely solved.
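As a rough sketch of the proposed ordering (function names are illustrative, not spec), Feature-contributed `initializeCommand` entries would be concatenated in Feature-installation order, with the user's own entry from devcontainer.json running last, mirroring how the spec normalizes the three command forms (string, array, named-object):

```python
def normalized(cmd):
    """Normalize the spec's command forms: a dict is a set of named parallel
    commands; a string (shell) or list (exec) is a single unnamed command."""
    if isinstance(cmd, dict):
        return list(cmd.items())
    return [("", cmd)]

def compose_initialize(feature_cmds, user_cmd=None):
    """Sketch of the proposed semantics: Feature-contributed entries run
    first, in installation order, then the user's devcontainer.json entry."""
    plan = []
    for cmd in feature_cmds:
        plan.extend(normalized(cmd))
    if user_cmd is not None:
        plan.extend(normalized(user_cmd))
    return plan
```

For example, a certificate Feature and a DNS Feature would both run before the project's own host-side script.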
3. No way to version lifecycle hook scripts alongside configuration
Even if all the above gaps were addressed -- even if extends supported remote references, and Features could contribute initializeCommand, and all properties were label/Feature-settable -- there's still a fundamental problem: lifecycle hooks reference script files, and there is no spec-level mechanism to version and distribute those scripts.
Today, lifecycle hooks like postStartCommand: ".devcontainer/lifecycle/postStartCommand/main.sh" point to script files that must exist somewhere. These scripts contain the actual logic: bootstrapping a K8s cluster, configuring helm repos, setting up DNS, generating certificates. They are just as much a part of the devcontainer "package" as the devcontainer.json properties, but they have no versioning story.
With image labels, I can version the configuration (mounts, env vars, ports) by embedding it in the image. But the scripts those lifecycle hooks reference must either:
- Live in each project repo (not centrally versionable)
- Be baked into the image at a known path (works, but the label can only reference them by absolute path, and there's no formal contract between "what the label declares" and "what scripts exist in the image")
- Be downloaded at runtime by a bootstrap script (a workaround, not a solution)
The image is the natural place to version everything together. The image already contains the tools. The image labels already carry the configuration. The image filesystem can already contain the lifecycle scripts. What's missing is a formal model where the devcontainer tooling understands that lifecycle scripts are part of the image artifact.
Consider a flow like:
1. Pull the devcontainer image (or build from a Dockerfile that `FROM`s the base image)
2. Read `devcontainer.metadata` labels to get the configuration
3. Copy lifecycle scripts from a well-known path inside the image (e.g., `/usr/local/share/devcontainer/lifecycle/`) to the host, or make them available for execution
4. Run `initializeCommand` (host-side scripts extracted from the image)
5. Start the container with the label-derived configuration
6. Run container-side lifecycle hooks (which reference scripts already inside the image)
This would mean the image is a complete, self-contained, versioned package: tools + configuration + lifecycle scripts. Upgrading the image version upgrades everything together. No separate script distribution, no bootstrap scripts, no version mismatches between config and scripts.
For host-side scripts specifically (initializeCommand), this would require the tooling to extract them from the image before starting the container -- essentially a "pre-container" extraction step. This is a new capability, but it's a natural extension of the existing pattern where the tooling already reads image labels before starting the container.
4. Properties not settable via labels or Features
Several properties that are commonly org-standardized cannot be centralized through any existing mechanism:
- `runArgs` (though `init`, `privileged`, `capAdd`, and `securityOpt` have label/Feature equivalents, `--add-host` does not)
- `appPort`
- `workspaceMount`/`workspaceFolder`
Adding label and/or Feature support for these properties (or providing label-settable equivalents where runArgs flags are used) would reduce the per-repo configuration surface further.
What would help
In rough priority order:
1. Remote extends with versioning
Let devcontainer.json reference a remote base configuration via OCI registry, HTTPS URL, or git ref, with semver support. Define clear merge semantics (same as image label merging: lifecycle hooks concatenate, env vars merge per-key, arrays union).
The distribution mechanism should reuse existing credential infrastructure. If the format is OCI (like Features), no new auth configuration is needed -- the same Docker credential helpers and ~/.docker/config.json that authenticate image pulls would also authenticate the extends fetch. This is the same design decision that made Features work seamlessly with private registries.
The extended config should be able to include not just devcontainer.json properties, but also the associated scripts and files that lifecycle hooks reference (much like how Features include an install.sh and supporting files). This would allow the extended config to carry host-side scripts for initializeCommand, container-side lifecycle scripts, and supporting configuration files -- all versioned together as a single package.
2. Allow Features to contribute initializeCommand
Close the host-side gap. If Features can set all other lifecycle hooks, initializeCommand should follow the same pattern. The security implications are understood (it runs on the host), but the same is true of any initializeCommand in devcontainer.json today -- the user explicitly opts in by referencing the Feature.
3. Treat the image as a complete devcontainer package (config + scripts)
Formalize a model where the devcontainer image is the single versioned artifact that carries everything: tools (already the case), configuration (via labels -- already possible), and lifecycle scripts (new: scripts at well-known paths inside the image, recognized by the tooling).
For host-side scripts (initializeCommand), the tooling would extract them from the image before starting the container. This is a natural extension of the existing pattern -- the tooling already reads image labels (metadata) before container start; extracting scripts from a well-known path (e.g., /usr/local/share/devcontainer/lifecycle/initializeCommand/) would follow the same flow.
This would eliminate the need for separate script distribution pipelines, bootstrap scripts, or bespoke package registries. The image IS the package.
4. Expand label-settable and Feature-settable properties
Specifically, `runArgs` equivalents (e.g., a label-settable `extraHosts` property as an alternative to `--add-host`), `appPort`, and `workspaceMount`/`workspaceFolder`.
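As a purely hypothetical illustration (no `extraHosts` property exists in the spec today; the shape below is an assumption about what a label-settable equivalent could look like), an image label or Feature could then declare:

```json
{
  "extraHosts": { "host.docker.internal": "host-gateway" },
  "appPort": ["443:31402"],
  "workspaceFolder": "/home/app"
}
```

This would let the tooling translate `extraHosts` into the appropriate `--add-host` flags itself, removing the last reason most of our repos need `runArgs` at all.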
Related issues
- #22 -- `extends` top-level property ("Addition of \"extends\" top level property to enable simple configuration inheritance"): Currently proposed for relative paths only. The proposal addresses a real need (config layering within a repo), but it doesn't solve cross-repo distribution. Extending #22 to support remote references with OCI/HTTPS/git distribution and semver pinning would address the enterprise use case described here. The merge semantics proposed there (arrays union, scalars override, objects merge) are exactly what's needed.
- #60 -- Lifecycle hooks for Features ("Lifecycle hooks support for dev container features"): `initializeCommand` was explicitly excluded from Feature lifecycle hook support, likely because it runs on the host and is therefore a different security domain. However, the same trust model applies -- users explicitly add Features to their config, just as they explicitly write `initializeCommand` in their `devcontainer.json`. The enterprise use case shows that the host-side lifecycle gap forces workarounds (bootstrap scripts, separate CLI tools) that are outside the spec's visibility and versioning model.
Summary
The devcontainer spec has excellent primitives for what happens inside the container (Features, image labels, lifecycle hooks). But for organizations managing hundreds of repos, there's a gap in how to distribute, version, and override the configuration itself across those repos. The existing mechanisms (labels, Features, Templates) each solve part of the problem but leave significant gaps, particularly around:
- Remote configuration inheritance with versioning and authentication
- Host-side lifecycle hook distribution (the `initializeCommand` gap)
- Lifecycle hook script versioning -- scripts are just as important as configuration, but have no versioning story. The image is the natural place to co-version tools, configuration (via labels), and scripts (via well-known paths).
- Properties that remain locked to `devcontainer.json` with no centralization path
I'd love to hear if others are facing similar challenges at scale, and whether any of these directions are being considered.