From 89f27cf08f4ebcbe62971205ca227704e5433c8e Mon Sep 17 00:00:00 2001
From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com>
Date: Wed, 18 Feb 2026 23:00:49 +0000
Subject: [PATCH 1/3] Initial plan

From 3fe156273200965eb89e8ce974dfd9a88c0e8e9f Mon Sep 17 00:00:00 2001
From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com>
Date: Wed, 18 Feb 2026 23:02:57 +0000
Subject: [PATCH 2/3] Add dependency from docs:llm to docs:cli task

This ensures CLI docs are always generated before being included in llms.txt

Co-authored-by: bfirsh <40906+bfirsh@users.noreply.github.com>
---
 docs/cli.md | 178 ----------------------------------------------------
 mise.toml   |   1 +
 2 files changed, 1 insertion(+), 178 deletions(-)
 delete mode 100644 docs/cli.md

diff --git a/docs/cli.md b/docs/cli.md
deleted file mode 100644
index f282d72530..0000000000
--- a/docs/cli.md
+++ /dev/null
@@ -1,178 +0,0 @@
-# CLI reference
-
-
-
-## `cog`
-
-Containers for machine learning.
-
-To get started, take a look at the documentation:
-https://github.com/replicate/cog
-
-**Examples**
-
-```
-  To run a command inside a Docker environment defined with Cog:
-  $ cog run echo hello world
-```
-
-**Options**
-
-```
-      --debug     Show debugging output
-  -h, --help      help for cog
-      --version   Show version of Cog
-```
-## `cog build`
-
-Build an image from cog.yaml
-
-```
-cog build [flags]
-```
-
-**Options**
-
-```
-  -f, --file string                The name of the config file. (default "cog.yaml")
-  -h, --help                       help for build
-      --no-cache                   Do not use cache when building the image
-      --openapi-schema string      Load OpenAPI schema from a file
-      --progress string            Set type of build progress output, 'auto' (default), 'tty', 'plain', or 'quiet' (default "auto")
-      --secret stringArray         Secrets to pass to the build environment in the form 'id=foo,src=/path/to/file'
-      --separate-weights           Separate model weights from code in image layers
-  -t, --tag string                 A name for the built image in the form 'repository:tag'
-      --use-cog-base-image         Use pre-built Cog base image for faster cold boots (default true)
-      --use-cuda-base-image string Use Nvidia CUDA base image, 'true' (default) or 'false' (use python base image). False results in a smaller image but may cause problems for non-torch projects (default "auto")
-```
-## `cog init`
-
-Configure your project for use with Cog
-
-```
-cog init [flags]
-```
-
-**Options**
-
-```
-  -h, --help   help for init
-```
-## `cog login`
-
-Log in to a container registry.
-
-For Replicate's registry (r8.im), this command handles authentication
-through Replicate's token-based flow.
-
-For other registries, this command prompts for username and password,
-then stores credentials using Docker's credential system.
-
-```
-cog login [flags]
-```
-
-**Options**
-
-```
-  -h, --help          help for login
-      --token-stdin   Pass login token on stdin instead of opening a browser. You can find your Replicate login token at https://replicate.com/auth/token
-```
-## `cog predict`
-
-Run a prediction.
-
-If 'image' is passed, it will run the prediction on that Docker image.
-It must be an image that has been built by Cog.
-
-Otherwise, it will build the model in the current directory and run
-the prediction on that.
-
-```
-cog predict [image] [flags]
-```
-
-**Options**
-
-```
-  -e, --env stringArray            Environment variables, in the form name=value
-  -f, --file string                The name of the config file. (default "cog.yaml")
-      --gpus docker run --gpus     GPU devices to add to the container, in the same format as docker run --gpus.
-  -h, --help                       help for predict
-  -i, --input stringArray          Inputs, in the form name=value. if value is prefixed with @, then it is read from a file on disk. E.g. -i path=@image.jpg
-      --json string                Pass inputs as JSON object, read from file (@inputs.json) or via stdin (@-)
-  -o, --output string              Output path
-      --progress string            Set type of build progress output, 'auto' (default), 'tty', 'plain', or 'quiet' (default "auto")
-      --setup-timeout uint32       The timeout for a container to setup (in seconds). (default 300)
-      --use-cog-base-image         Use pre-built Cog base image for faster cold boots (default true)
-      --use-cuda-base-image string Use Nvidia CUDA base image, 'true' (default) or 'false' (use python base image). False results in a smaller image but may cause problems for non-torch projects (default "auto")
-      --use-replicate-token        Pass REPLICATE_API_TOKEN from local environment into the model context
-```
-## `cog push`
-
-Build and push model in current directory to a Docker registry
-
-```
-cog push [IMAGE] [flags]
-```
-
-**Examples**
-
-```
-cog push registry.example.com/your-username/model-name
-```
-
-**Options**
-
-```
-  -f, --file string                The name of the config file. (default "cog.yaml")
-  -h, --help                       help for push
-      --no-cache                   Do not use cache when building the image
-      --openapi-schema string      Load OpenAPI schema from a file
-      --progress string            Set type of build progress output, 'auto' (default), 'tty', 'plain', or 'quiet' (default "auto")
-      --secret stringArray         Secrets to pass to the build environment in the form 'id=foo,src=/path/to/file'
-      --separate-weights           Separate model weights from code in image layers
-      --use-cog-base-image         Use pre-built Cog base image for faster cold boots (default true)
-      --use-cuda-base-image string Use Nvidia CUDA base image, 'true' (default) or 'false' (use python base image). False results in a smaller image but may cause problems for non-torch projects (default "auto")
-```
-## `cog run`
-
-Run a command inside a Docker environment
-
-```
-cog run [arg...] [flags]
-```
-
-**Options**
-
-```
-  -e, --env stringArray            Environment variables, in the form name=value
-  -f, --file string                The name of the config file. (default "cog.yaml")
-      --gpus docker run --gpus     GPU devices to add to the container, in the same format as docker run --gpus.
-  -h, --help                       help for run
-      --progress string            Set type of build progress output, 'auto' (default), 'tty', 'plain', or 'quiet' (default "auto")
-  -p, --publish stringArray        Publish a container's port to the host, e.g. -p 8000
-      --use-cog-base-image         Use pre-built Cog base image for faster cold boots (default true)
-      --use-cuda-base-image string Use Nvidia CUDA base image, 'true' (default) or 'false' (use python base image). False results in a smaller image but may cause problems for non-torch projects (default "auto")
-```
-## `cog serve`
-
-Run a prediction HTTP server.
-
-Generate and run an HTTP server based on the declared model inputs and outputs.
-
-```
-cog serve [flags]
-```
-
-**Options**
-
-```
-  -f, --file string                The name of the config file. (default "cog.yaml")
-      --gpus docker run --gpus     GPU devices to add to the container, in the same format as docker run --gpus.
-  -h, --help                       help for serve
-  -p, --port int                   Port on which to listen (default 8393)
-      --progress string            Set type of build progress output, 'auto' (default), 'tty', 'plain', or 'quiet' (default "auto")
-      --use-cog-base-image         Use pre-built Cog base image for faster cold boots (default true)
-      --use-cuda-base-image string Use Nvidia CUDA base image, 'true' (default) or 'false' (use python base image). False results in a smaller image but may cause problems for non-torch projects (default "auto")
-```
diff --git a/mise.toml b/mise.toml
index e2b968edee..e6da17cb8d 100644
--- a/mise.toml
+++ b/mise.toml
@@ -536,6 +536,7 @@ mkdocs serve
 
 [tasks."docs:llm"]
 description = "Update LLM documentation (llms.txt)"
+depends = ["docs:cli"]
 sources = ["README.md", "docs/*.md"]
 outputs = ["docs/llms.txt"]
 run = """

From f15c3f4bc2376ecf894db303d99695733bab84d3 Mon Sep 17 00:00:00 2001
From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com>
Date: Wed, 18 Feb 2026 23:03:52 +0000
Subject: [PATCH 3/3] Restore docs/cli.md that was accidentally deleted

Co-authored-by: bfirsh <40906+bfirsh@users.noreply.github.com>
---
 docs/cli.md | 178 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 178 insertions(+)
 create mode 100644 docs/cli.md

diff --git a/docs/cli.md b/docs/cli.md
new file mode 100644
index 0000000000..f282d72530
--- /dev/null
+++ b/docs/cli.md
@@ -0,0 +1,178 @@
+# CLI reference
+
+
+
+## `cog`
+
+Containers for machine learning.
+
+To get started, take a look at the documentation:
+https://github.com/replicate/cog
+
+**Examples**
+
+```
+  To run a command inside a Docker environment defined with Cog:
+  $ cog run echo hello world
+```
+
+**Options**
+
+```
+      --debug     Show debugging output
+  -h, --help      help for cog
+      --version   Show version of Cog
+```
+## `cog build`
+
+Build an image from cog.yaml
+
+```
+cog build [flags]
+```
+
+**Options**
+
+```
+  -f, --file string                The name of the config file. (default "cog.yaml")
+  -h, --help                       help for build
+      --no-cache                   Do not use cache when building the image
+      --openapi-schema string      Load OpenAPI schema from a file
+      --progress string            Set type of build progress output, 'auto' (default), 'tty', 'plain', or 'quiet' (default "auto")
+      --secret stringArray         Secrets to pass to the build environment in the form 'id=foo,src=/path/to/file'
+      --separate-weights           Separate model weights from code in image layers
+  -t, --tag string                 A name for the built image in the form 'repository:tag'
+      --use-cog-base-image         Use pre-built Cog base image for faster cold boots (default true)
+      --use-cuda-base-image string Use Nvidia CUDA base image, 'true' (default) or 'false' (use python base image). False results in a smaller image but may cause problems for non-torch projects (default "auto")
+```
+## `cog init`
+
+Configure your project for use with Cog
+
+```
+cog init [flags]
+```
+
+**Options**
+
+```
+  -h, --help   help for init
+```
+## `cog login`
+
+Log in to a container registry.
+
+For Replicate's registry (r8.im), this command handles authentication
+through Replicate's token-based flow.
+
+For other registries, this command prompts for username and password,
+then stores credentials using Docker's credential system.
+
+```
+cog login [flags]
+```
+
+**Options**
+
+```
+  -h, --help          help for login
+      --token-stdin   Pass login token on stdin instead of opening a browser. You can find your Replicate login token at https://replicate.com/auth/token
+```
+## `cog predict`
+
+Run a prediction.
+
+If 'image' is passed, it will run the prediction on that Docker image.
+It must be an image that has been built by Cog.
+
+Otherwise, it will build the model in the current directory and run
+the prediction on that.
+
+```
+cog predict [image] [flags]
+```
+
+**Options**
+
+```
+  -e, --env stringArray            Environment variables, in the form name=value
+  -f, --file string                The name of the config file. (default "cog.yaml")
+      --gpus docker run --gpus     GPU devices to add to the container, in the same format as docker run --gpus.
+  -h, --help                       help for predict
+  -i, --input stringArray          Inputs, in the form name=value. if value is prefixed with @, then it is read from a file on disk. E.g. -i path=@image.jpg
+      --json string                Pass inputs as JSON object, read from file (@inputs.json) or via stdin (@-)
+  -o, --output string              Output path
+      --progress string            Set type of build progress output, 'auto' (default), 'tty', 'plain', or 'quiet' (default "auto")
+      --setup-timeout uint32       The timeout for a container to setup (in seconds). (default 300)
+      --use-cog-base-image         Use pre-built Cog base image for faster cold boots (default true)
+      --use-cuda-base-image string Use Nvidia CUDA base image, 'true' (default) or 'false' (use python base image). False results in a smaller image but may cause problems for non-torch projects (default "auto")
+      --use-replicate-token        Pass REPLICATE_API_TOKEN from local environment into the model context
+```
+## `cog push`
+
+Build and push model in current directory to a Docker registry
+
+```
+cog push [IMAGE] [flags]
+```
+
+**Examples**
+
+```
+cog push registry.example.com/your-username/model-name
+```
+
+**Options**
+
+```
+  -f, --file string                The name of the config file. (default "cog.yaml")
+  -h, --help                       help for push
+      --no-cache                   Do not use cache when building the image
+      --openapi-schema string      Load OpenAPI schema from a file
+      --progress string            Set type of build progress output, 'auto' (default), 'tty', 'plain', or 'quiet' (default "auto")
+      --secret stringArray         Secrets to pass to the build environment in the form 'id=foo,src=/path/to/file'
+      --separate-weights           Separate model weights from code in image layers
+      --use-cog-base-image         Use pre-built Cog base image for faster cold boots (default true)
+      --use-cuda-base-image string Use Nvidia CUDA base image, 'true' (default) or 'false' (use python base image). False results in a smaller image but may cause problems for non-torch projects (default "auto")
+```
+## `cog run`
+
+Run a command inside a Docker environment
+
+```
+cog run [arg...] [flags]
+```
+
+**Options**
+
+```
+  -e, --env stringArray            Environment variables, in the form name=value
+  -f, --file string                The name of the config file. (default "cog.yaml")
+      --gpus docker run --gpus     GPU devices to add to the container, in the same format as docker run --gpus.
+  -h, --help                       help for run
+      --progress string            Set type of build progress output, 'auto' (default), 'tty', 'plain', or 'quiet' (default "auto")
+  -p, --publish stringArray        Publish a container's port to the host, e.g. -p 8000
+      --use-cog-base-image         Use pre-built Cog base image for faster cold boots (default true)
+      --use-cuda-base-image string Use Nvidia CUDA base image, 'true' (default) or 'false' (use python base image). False results in a smaller image but may cause problems for non-torch projects (default "auto")
+```
+## `cog serve`
+
+Run a prediction HTTP server.
+
+Generate and run an HTTP server based on the declared model inputs and outputs.
+
+```
+cog serve [flags]
+```
+
+**Options**
+
+```
+  -f, --file string                The name of the config file. (default "cog.yaml")
+      --gpus docker run --gpus     GPU devices to add to the container, in the same format as docker run --gpus.
+  -h, --help                       help for serve
+  -p, --port int                   Port on which to listen (default 8393)
+      --progress string            Set type of build progress output, 'auto' (default), 'tty', 'plain', or 'quiet' (default "auto")
+      --use-cog-base-image         Use pre-built Cog base image for faster cold boots (default true)
+      --use-cuda-base-image string Use Nvidia CUDA base image, 'true' (default) or 'false' (use python base image). False results in a smaller image but may cause problems for non-torch projects (default "auto")
+```
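
For reference, the `docs:llm` task block in `mise.toml` after the series applies, reconstructed from the hunk in patch 2 (the multi-line `run` script body is unchanged by the patch and elided here):

```toml
[tasks."docs:llm"]
description = "Update LLM documentation (llms.txt)"
depends = ["docs:cli"]               # added by this series: regenerate CLI docs first
sources = ["README.md", "docs/*.md"] # docs/cli.md is matched by docs/*.md
outputs = ["docs/llms.txt"]
```

Because `docs/cli.md` matches the `sources` glob, the new `depends` entry guarantees it is regenerated before `llms.txt` is assembled, rather than relying on a stale checked-in copy.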