1 change: 1 addition & 0 deletions .github/workflows/build_images.yaml
@@ -55,6 +55,7 @@ jobs:
cp ./LICENSE ./ci/docker/context/LICENSE
cp ./VERSION ./ci/docker/context/VERSION
cp ./thirdparty/THIRD_PARTY_LICENSES ./ci/docker/context/THIRD_PARTY_LICENSES
cp ./ci/docker/entrypoint.sh ./ci/docker/context/entrypoint.sh
- name: Copy Commit SHA and commit time
run: |
git rev-parse HEAD > ./ci/docker/context/COMMIT_SHA
11 changes: 11 additions & 0 deletions CONTRIBUTING.md
@@ -212,6 +212,17 @@ export RAPIDS_DATASET_ROOT_DIR=$CUOPT_HOME/datasets/
cd $CUOPT_HOME/python
pytest -v ${CUOPT_HOME}/python/cuopt/cuopt/tests
```
## gRPC Remote Execution

NVIDIA cuOpt includes a gRPC-based remote execution system that lets a program using the cuOpt API locally run solves on a remote GPU server. User documentation lives under `docs/cuopt/source/cuopt-grpc/` (Sphinx **gRPC remote execution** section):

- `quick-start.rst` — Install/Docker/selector, how remote execution works, minimal LP and CLI examples (default C bundle).
- `advanced.rst` — TLS, tuning, limitations, troubleshooting.
- `examples.rst`, `api.rst` — Sample patterns and RPC overview.
- `docs/cuopt/source/cuopt-grpc/grpc-server-architecture.md` — Short **gRPC server behavior** page in user docs.
- `cpp/docs/grpc-server-architecture.md` — Full contributor reference (IPC, C++ source map, streaming).

## Debugging cuOpt

### Building in debug mode from source
392 changes: 0 additions & 392 deletions GRPC_INTERFACE.md

This file was deleted.

248 changes: 0 additions & 248 deletions GRPC_QUICK_START.md

This file was deleted.

39 changes: 13 additions & 26 deletions ci/docker/Dockerfile
@@ -45,6 +45,7 @@ RUN ln -sf /usr/bin/python${PYTHON_SHORT_VER} /usr/bin/python

FROM python-env AS install-env

ARG CUDA_VER
ARG CUOPT_VER
ARG PYTHON_SHORT_VER

@@ -68,36 +69,18 @@ FROM install-env AS cuopt-final

ARG PYTHON_SHORT_VER

# Consolidate all directory creation, permissions, and file operations into a single layer
# Make cuopt_grpc_server, cuopt_cli, and shared libraries available to all processes
# (profile.d scripts are only sourced by login shells; ENV works for all containers)
ENV PATH="/usr/local/cuda/bin:/usr/bin:/usr/local/bin:/usr/local/nvidia/bin/:/usr/local/lib/python${PYTHON_SHORT_VER}/dist-packages/libcuopt/bin:${PATH}"
ENV LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu:/usr/lib/aarch64-linux-gnu:/usr/local/cuda/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/wsl/lib:/usr/lib/wsl/lib/libnvidia-container:/usr/lib/nvidia:/usr/lib/nvidia-current:/usr/local/lib/python${PYTHON_SHORT_VER}/dist-packages/libcuopt/lib/:/usr/local/lib/python${PYTHON_SHORT_VER}/dist-packages/rapids_logger/lib64:${LD_LIBRARY_PATH}"

# Directory creation, permissions
RUN mkdir -p /opt/cuopt && \
chmod 777 /opt/cuopt && \
# Create profile.d script for universal access
echo '#!/bin/bash' > /etc/profile.d/cuopt.sh && \
echo 'export PATH="/usr/local/cuda/bin:/usr/bin:/usr/local/bin:/usr/local/nvidia/bin/:/usr/local/lib/python${PYTHON_SHORT_VER}/dist-packages/libcuopt/bin:$PATH"' >> /etc/profile.d/cuopt.sh && \
echo 'export LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu:/usr/lib/aarch64-linux-gnu:/usr/local/cuda/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/wsl/lib:/usr/lib/wsl/lib/libnvidia-container:/usr/lib/nvidia:/usr/lib/nvidia-current:/usr/local/lib/python${PYTHON_SHORT_VER}/dist-packages/libcuopt/lib/:/usr/local/lib/python${PYTHON_SHORT_VER}/dist-packages/rapids_logger/lib64:${LD_LIBRARY_PATH}"' >> /etc/profile.d/cuopt.sh && \
chmod +x /etc/profile.d/cuopt.sh && \
# Set in /etc/environment for system-wide access
echo 'PATH="/usr/local/cuda/bin:/usr/bin:/usr/local/bin:/usr/local/nvidia/bin/:/usr/local/lib/python${PYTHON_SHORT_VER}/dist-packages/libcuopt/bin:$PATH"' >> /etc/environment && \
echo 'LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu:/usr/lib/aarch64-linux-gnu:/usr/local/cuda/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:/usr/lib/wsl/lib:/usr/lib/wsl/lib/libnvidia-container:/usr/lib/nvidia:/usr/lib/nvidia-current:/usr/local/lib/python${PYTHON_SHORT_VER}/dist-packages/libcuopt/lib/:/usr/local/lib/python${PYTHON_SHORT_VER}/dist-packages/rapids_logger/lib64:${LD_LIBRARY_PATH}"' >> /etc/environment && \
# Set proper permissions for cuOpt installation
chmod -R 755 /usr/local/lib/python${PYTHON_SHORT_VER}/dist-packages/cuopt* && \
chmod -R 755 /usr/local/lib/python${PYTHON_SHORT_VER}/dist-packages/libcuopt* && \
chmod -R 755 /usr/local/lib/python${PYTHON_SHORT_VER}/dist-packages/cuopt_* && \
chmod -R 755 /usr/local/bin/* && \
# Create entrypoint script in a single operation
echo '#!/bin/bash' > /opt/cuopt/entrypoint.sh && \
echo 'set -e' >> /opt/cuopt/entrypoint.sh && \
echo '' >> /opt/cuopt/entrypoint.sh && \
echo '# Get current user info from Docker environment variables' >> /opt/cuopt/entrypoint.sh && \
echo 'CURRENT_UID=${UID:-1000}' >> /opt/cuopt/entrypoint.sh && \
echo 'CURRENT_GID=${GID:-1000}' >> /opt/cuopt/entrypoint.sh && \
echo '' >> /opt/cuopt/entrypoint.sh && \
echo '# Set environment variables for the current user' >> /opt/cuopt/entrypoint.sh && \
echo 'export HOME="/opt/cuopt"' >> /opt/cuopt/entrypoint.sh && \
echo '' >> /opt/cuopt/entrypoint.sh && \
echo '# Execute the command' >> /opt/cuopt/entrypoint.sh && \
echo 'exec "$@"' >> /opt/cuopt/entrypoint.sh && \
chmod +x /opt/cuopt/entrypoint.sh
chmod -R 755 /usr/local/bin/*

# Set the default working directory to the cuopt folder
WORKDIR /opt/cuopt
@@ -112,6 +95,10 @@ COPY --from=cuda-libs /usr/local/cuda/lib64/libnvJitLink* /usr/local/cuda/lib64/
# Copy CUDA headers needed for runtime compilation (e.g., CuPy NVRTC).
COPY --from=cuda-headers /usr/local/cuda/include/ /usr/local/cuda/include/

# Use the flexible entrypoint
# Entrypoint supports server selection:
# Default: Python REST server
# CUOPT_SERVER_TYPE=grpc: gRPC server (uses CUOPT_SERVER_PORT, CUOPT_GPU_COUNT)
# Explicit command: docker run <image> cuopt_grpc_server [args...]
COPY ./entrypoint.sh /opt/cuopt/entrypoint.sh
ENTRYPOINT ["/opt/cuopt/entrypoint.sh"]
CMD ["python", "-m", "cuopt_server.cuopt_service"]
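The three selection paths in the comment above can be sketched as container invocations (the image name `cuopt` is a placeholder here, not a published tag):

```shell
# Default: Python REST server (the CMD above)
docker run --gpus all cuopt

# gRPC server selected via environment variable, with port/worker overrides
docker run --gpus all \
  -e CUOPT_SERVER_TYPE=grpc \
  -e CUOPT_SERVER_PORT=5001 \
  -e CUOPT_GPU_COUNT=2 \
  -p 5001:5001 \
  cuopt

# Explicit command takes precedence over CUOPT_SERVER_TYPE
docker run --gpus all cuopt cuopt_grpc_server --port 5001
```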
44 changes: 44 additions & 0 deletions ci/docker/entrypoint.sh
@@ -0,0 +1,44 @@
#!/bin/bash
# SPDX-FileCopyrightText: Copyright (c) 2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Entrypoint for the cuOpt container image.
#
# Server selection (in order of precedence):
# 1. Explicit command: docker run <image> cuopt_grpc_server [args...]
# 2. Environment variable: CUOPT_SERVER_TYPE=grpc
# 3. Default: Python REST server (cuopt_server.cuopt_service)
#
# When CUOPT_SERVER_TYPE=grpc, the following env vars configure the gRPC server:
# CUOPT_SERVER_PORT — listen port (default: 5001)
# CUOPT_GPU_COUNT — worker processes (default: 1)
# CUOPT_GRPC_ARGS — additional CLI flags passed verbatim
Collaborator comment: Link to details on CUOPT_GRPC_ARGS or add more details.

# (e.g. "--tls --tls-cert server.crt --log-to-console")
# See docs/cuopt/source/cuopt-grpc/advanced.rst for all available flags and env vars;
# see cpp/docs/grpc-server-architecture.md for contributor IPC details.

set -e

export HOME="/opt/cuopt"

# If CUOPT_SERVER_TYPE=grpc, build a command line from env vars and launch.
if [ "${CUOPT_SERVER_TYPE}" = "grpc" ]; then
GRPC_CMD=(cuopt_grpc_server)

GRPC_CMD+=(--port "${CUOPT_SERVER_PORT:-5001}")

if [ -n "${CUOPT_GPU_COUNT}" ]; then
GRPC_CMD+=(--workers "${CUOPT_GPU_COUNT}")
fi

# Allow arbitrary extra flags (e.g. --tls, --log-to-console)
if [ -n "${CUOPT_GRPC_ARGS}" ]; then
read -ra EXTRA <<< "${CUOPT_GRPC_ARGS}"
GRPC_CMD+=("${EXTRA[@]}")
fi
Comment on lines +36 to +39
⚠️ Potential issue | 🟡 Minor

Word-splitting caveat for CUOPT_GRPC_ARGS.

The read -ra approach splits on whitespace, so arguments containing embedded spaces (e.g., --arg "value with spaces") won't be preserved correctly. This is acceptable for typical CLI flags but worth noting in documentation if users might need complex arguments.

For most use cases (flags like --tls --log-to-console), the current implementation is fine.
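The caveat can be reproduced in isolation; this standalone bash sketch mirrors the entrypoint's variable names but touches nothing else:

```shell
#!/usr/bin/env bash
# Simple flags split as intended:
CUOPT_GRPC_ARGS='--tls --tls-cert server.crt'
read -ra EXTRA <<< "${CUOPT_GRPC_ARGS}"
echo "${#EXTRA[@]}"   # 3 tokens: --tls, --tls-cert, server.crt

# Quotes are NOT honored: an argument with embedded spaces is split
# on whitespace, and the quote characters stay in the tokens.
CUOPT_GRPC_ARGS='--log-prefix "my server"'
read -ra EXTRA <<< "${CUOPT_GRPC_ARGS}"
printf '%s\n' "${EXTRA[@]}"   # --log-prefix / "my / server"
```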



exec "${GRPC_CMD[@]}"
fi

exec "$@"
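For reference, the selection logic above can be mirrored in a standalone function that prints the resulting command line instead of exec'ing it — a sketch for inspection only, not the shipped script:

```shell
#!/usr/bin/env bash
# Mirrors the entrypoint's server selection, but echoes the final
# command line rather than replacing the process.
build_cmd() {
  if [ "${CUOPT_SERVER_TYPE}" = "grpc" ]; then
    local cmd=(cuopt_grpc_server --port "${CUOPT_SERVER_PORT:-5001}")
    if [ -n "${CUOPT_GPU_COUNT}" ]; then
      cmd+=(--workers "${CUOPT_GPU_COUNT}")
    fi
    if [ -n "${CUOPT_GRPC_ARGS}" ]; then
      read -ra extra <<< "${CUOPT_GRPC_ARGS}"
      cmd+=("${extra[@]}")
    fi
    echo "${cmd[@]}"
  else
    echo "$@"
  fi
}

CUOPT_SERVER_TYPE=grpc CUOPT_GPU_COUNT=2 CUOPT_GRPC_ARGS='--log-to-console'
build_cmd python -m cuopt_server.cuopt_service
# → cuopt_grpc_server --port 5001 --workers 2 --log-to-console
```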
1 change: 1 addition & 0 deletions cpp/docs/DEVELOPER_GUIDE.md
@@ -3,6 +3,7 @@
This document serves as a guide for contributors to cuOpt C++ code. Developers should also refer
to these additional files for further documentation of cuOpt best practices.

* [gRPC server architecture](grpc-server-architecture.md) — full `cuopt_grpc_server` IPC, source file map, and streaming internals (end-user summary lives under `docs/cuopt/source/cuopt-grpc/`).
* [Documentation Guide](TODO) for guidelines on documenting cuOpt code.
* [Testing Guide](TODO) for guidelines on writing unit tests.
* [Benchmarking Guide](TODO) for guidelines on writing unit benchmarks.