GCS (Google Cloud Storage) client implementation for the unified storage-cli tool. This module provides Google Cloud Storage operations through the main storage-cli binary.
Note: This is not a standalone CLI. Use the main storage-cli binary with the `-s gcs` flag to access GCS functionality.
For general usage and build instructions, see the main README.
This is not an official Google Product.
The GCS client requires a JSON configuration file:

```json
{
  "bucket_name": "<string> (required)",
  "credentials_source": "<string> ['static'|'none'|'']",
  "json_key": "<string> (required if credentials_source = 'static')",
  "storage_class": "<string> (optional, default: 'STANDARD'; see https://docs.cloud.google.com/storage/docs/storage-classes for more options)",
  "encryption_key": "<string> (optional)",
  "uniform_bucket_level_access": "<boolean> (optional)"
}
```

- `""`: specifies that credentials should be detected. Application Default Credentials will be used if available; a read-only client will be used otherwise.
- "none": specifies that credentials are explicitly empty and that the client should be restricted to a read-only scope.
- "static:" specifies that a service account file included in json_key should be used for authentication.
The `ensure-storage-exists` command creates the configured bucket if it does not already exist. The `uniform_bucket_level_access` configuration option controls the access control model:

- `true`: creates a bucket with uniform bucket-level access (IAM-only, ACLs disabled)
- `false` or omitted (default): creates a bucket with fine-grained access control (ACLs enabled)
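For example, a configuration like the following (bucket name is a placeholder) would make `ensure-storage-exists` create an IAM-only bucket, assuming the invocation follows the same `storage-cli -s gcs -c <config>` pattern as the other commands:

```json
{
  "bucket_name": "my-example-bucket",
  "credentials_source": "",
  "uniform_bucket_level_access": true
}
```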
`credentials_source` can be one of:

- `static`: A service account key will be provided via the `json_key` field.
- `none`: No credentials are provided; the client can only read from a public bucket.
- `<empty>`: Application Default Credentials will be used if they exist (either through `gcloud auth application-default login` or a service account). If they don't exist, the client will fall back to `none` behavior.
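The fallback order described above can be sketched as a small shell helper. This is a simplified illustration of the documented behavior, not the client's actual code; the ADC file location assumes gcloud's default path, and the `adc` label stands in for the `<empty>` source:

```shell
#!/usr/bin/env bash
# Illustrative sketch of the documented credential selection order.
detect_credentials_source() {
  if [ -n "${google_json_key_data:-}" ]; then
    # A service account key was supplied, as for the 'static' source.
    echo "static"
  elif [ -f "${HOME}/.config/gcloud/application_default_credentials.json" ]; then
    # Application Default Credentials exist, e.g. created by
    # 'gcloud auth application-default login'.
    echo "adc"
  else
    # No credentials found: fall back to read-only 'none' behavior.
    echo "none"
  fi
}
```

For example, with no key in the environment and no ADC file present, `detect_credentials_source` prints `none`.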
Usage examples:

```shell
# Upload an object
storage-cli -s gcs -c gcs-config.json put local-file.txt remote-blob

# Fetch an object
storage-cli -s gcs -c gcs-config.json get remote-blob local-file.txt

# Delete an object
storage-cli -s gcs -c gcs-config.json delete remote-blob

# Check if an object exists
storage-cli -s gcs -c gcs-config.json exists remote-blob

# Generate a signed URL (e.g., a GET valid for 60 seconds)
storage-cli -s gcs -c gcs-config.json sign remote-blob get 60s
```

Run unit tests from the repository root:
```shell
ginkgo --skip-package=integration --cover -v -r ./gcs/...
```

To run the integration tests:

- Create a service account with the `Storage Admin` role.
- Create a new key for your service account and download the credentials as a JSON file.
- Export the JSON content with `export google_json_key_data="$(cat <path-to-json-file.json>)"`.
- Optionally `export SKIP_LONG_TESTS=yes` to run only the fast tests.
- Navigate to the project's root folder.
- Run the environment setup script to create buckets: `./.github/scripts/gcs/setup.sh`.
- Run the tests: `./.github/scripts/gcs/run-int.sh`.
- Run the environment teardown script to delete resources: `./.github/scripts/gcs/teardown.sh`.