diff --git a/.claude/skills/add-operators/SKILL.md b/.claude/skills/add-operators/SKILL.md new file mode 100644 index 00000000..cb0ab4ce --- /dev/null +++ b/.claude/skills/add-operators/SKILL.md @@ -0,0 +1,84 @@ +--- +name: add-operators +description: Add new operators to an existing Charon distributed validator cluster +user-invokable: true +--- + +# Add Operators + +> **Warning:** This is an alpha feature and is not yet recommended for production use. + +Expand a Charon cluster by adding new operators. This is a coordinated operation involving both existing and new operators. + +## Prerequisites + +Read `scripts/edit/add-operators/README.md` for full details if needed. + +Common prerequisites: +1. `.env` file exists with `NETWORK` and `VC` variables set +2. `.charon` directory with `cluster-lock.json` and `charon-enr-private-key` +3. Docker is running +4. `jq` installed + +## Role Selection + +Ask the user: **"Are you an existing operator in the cluster, or a new operator joining?"** + +### If Existing Operator + +**Script**: `scripts/edit/add-operators/existing-operator.sh` + +**Additional prerequisites**: +- `.charon/cluster-lock.json` and `.charon/validator_keys/` must exist +- The script will automatically stop the VC container for ASDB export + +**Arguments to gather**: +- `--new-operator-enrs`: Comma-separated ENRs of the new operators joining +- Whether to use `--dry-run` first + +**Run**: +```bash +./scripts/edit/add-operators/existing-operator.sh \ + --new-operator-enrs "enr:-...,enr:-..." \ + [--dry-run] +``` + +Set `WORK_DIR` env var to override the repository root directory if running from a custom location. + + +The script will export the anti-slashing database, run the P2P ceremony, update keys, and print commands to start containers manually. After completion, remind the user to **wait ~2 epochs before starting** containers. 
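The "~2 epochs" figure follows from Ethereum slot timing; a quick sanity check in shell (assumes mainnet constants: 32 slots per epoch, 12-second slots):

```shell
# Mainnet timing: 32 slots per epoch, 12 seconds per slot.
slots_per_epoch=32
seconds_per_slot=12
wait_seconds=$((2 * slots_per_epoch * seconds_per_slot))  # 768 seconds
echo "wait at least ${wait_seconds}s (~$((wait_seconds / 60)) minutes) before starting containers"
```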
+ +### If New Operator + +**Script**: `scripts/edit/add-operators/new-operator.sh` + +This is a **two-step process**: + +#### Step 1: Generate ENR + +Ask if the user needs to generate an ENR (first-time setup): + +```bash +./scripts/edit/add-operators/new-operator.sh --generate-enr +``` + +This creates `.charon/charon-enr-private-key` and displays the ENR. Tell the user to **share this ENR with the existing operators**. +The existing operators, in turn, need to share the `cluster-lock.json` with the new operators; it contains the current cluster configuration and is required for the P2P ceremony. + +#### Step 2: Join the Ceremony + +After the existing operators have the ENR, gather: +- `--new-operator-enrs`: Comma-separated ENRs of ALL new operators (including their own) +- `--cluster-lock`: Path to the `cluster-lock.json` received from existing operators +- Whether to use `--dry-run` first + +```bash +./scripts/edit/add-operators/new-operator.sh \ + --new-operator-enrs "enr:-...,enr:-..." \ + --cluster-lock ./received-cluster-lock.json \ + [--dry-run] +``` + +Set `WORK_DIR` env var to override the repository root directory if running from a custom location. + +Remind the user that **all operators (existing AND new) must participate simultaneously** in the P2P ceremony. After completion, the script will print commands to start containers manually. The new operator does NOT have slashing protection history (fresh start). diff --git a/.claude/skills/add-validators/SKILL.md b/.claude/skills/add-validators/SKILL.md new file mode 100644 index 00000000..4a331988 --- /dev/null +++ b/.claude/skills/add-validators/SKILL.md @@ -0,0 +1,58 @@ +--- +name: add-validators +description: Add new validators to an existing Charon distributed validator cluster +user-invokable: true +--- + +# Add Validators + +> **Warning:** This is an alpha feature and is not yet recommended for production use. + +Add new validators to an existing Charon distributed validator cluster. 
All operators must run this simultaneously as it requires a P2P ceremony. + +## Prerequisites + +Before running, verify: +1. `.env` file exists with `NETWORK` and `VC` variables set +2. `.charon/cluster-lock.json` and `.charon/deposit-data*.json` exist +3. Docker is running +4. `jq` is installed + +Read `scripts/edit/add-validators/README.md` for full details if needed. + +## Gather Arguments + +Ask the user for the following required arguments using AskUserQuestion: + +1. **Number of validators** (`--num-validators`): How many new validators to add (positive integer) +2. **Withdrawal addresses** (`--withdrawal-addresses`): Comma-separated Ethereum withdrawal address(es) +3. **Fee recipient addresses** (`--fee-recipient-addresses`): Comma-separated fee recipient address(es) + +Also ask whether they want to: +- Run with `--dry-run` first to preview the operation +- Use `--unverified` flag (skip key verification, used for remote KeyManager API setups) + +## Execution + +Run the script from the repository root: + +```bash +./scripts/edit/add-validators/add-validators.sh \ + --num-validators <N> \ + --withdrawal-addresses <addr1,addr2,...> \ + --fee-recipient-addresses <addr1,addr2,...> \ + [--unverified] [--dry-run] +``` + +Set `WORK_DIR` env var to override the repository root directory if running from a custom location. + +The script will: +1. Validate prerequisites +2. Display current cluster info (operators, validators) +3. Run a P2P ceremony (all operators must participate simultaneously) +4. Stop containers if they were running +5. Backup `.charon/` to `./backups/` +6. Install new configuration +7. Print commands to start containers manually + +Remind the user that **all operators must run this script at the same time** for the P2P ceremony to succeed. 
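The prerequisite checks these scripts run are plain existence tests; a simplified, illustrative sketch (paths and messages are stand-ins, demonstrated against a scratch directory rather than a real cluster):

```shell
# Minimal stand-in for the scripts' prerequisite validation (illustrative only).
require_file() {
  [ -e "$1" ] || { echo "[ERROR] missing: $1" >&2; return 1; }
}

# Demonstrate against a scratch directory, not a real cluster.
tmp=$(mktemp -d)
touch "$tmp/.env"
mkdir -p "$tmp/.charon"

require_file "$tmp/.env" && echo "OK: .env present"
require_file "$tmp/.charon/cluster-lock.json" || echo "would abort: cluster-lock.json missing"
```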
diff --git a/.claude/skills/export-asdb/SKILL.md b/.claude/skills/export-asdb/SKILL.md new file mode 100644 index 00000000..5deac63e --- /dev/null +++ b/.claude/skills/export-asdb/SKILL.md @@ -0,0 +1,31 @@ +--- +name: export-asdb +description: Export the anti-slashing database (EIP-3076) from the validator client +user-invokable: true +--- + +# Export Anti-Slashing Database + +> **Warning:** This is an alpha feature and is not yet recommended for production use. + +Export the EIP-3076 anti-slashing database from the validator client. The VC container must be stopped before export. + +## Prerequisites + +1. `.env` file exists with `VC` variable set +2. VC container must be **stopped** + +Read `scripts/edit/vc/README.md` for full details if needed. + +## Gather Arguments + +Ask the user for: +- `--output-file`: Path to write the exported JSON file (e.g., `./asdb-export/slashing-protection.json`) + +## Execution + +```bash +./scripts/edit/vc/export_asdb.sh --output-file <path> +``` + +The `VC` variable is read from `.env` automatically. The script routes to the appropriate VC-specific export implementation (lodestar, teku, prysm, or nimbus). diff --git a/.claude/skills/import-asdb/SKILL.md b/.claude/skills/import-asdb/SKILL.md new file mode 100644 index 00000000..f2363fa3 --- /dev/null +++ b/.claude/skills/import-asdb/SKILL.md @@ -0,0 +1,31 @@ +--- +name: import-asdb +description: Import an anti-slashing database (EIP-3076) into the validator client +user-invokable: true +--- + +# Import Anti-Slashing Database + +> **Warning:** This is an alpha feature and is not yet recommended for production use. + +Import an EIP-3076 anti-slashing database into the validator client. The VC container must be stopped. + +## Prerequisites + +1. `.env` file exists with `VC` variable set +2. VC container must be **stopped** + +Read `scripts/edit/vc/README.md` for full details if needed. 
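For context, the file being imported follows the EIP-3076 interchange format; a minimal sketch of its shape, mirroring the sample under `scripts/edit/vc/test/` (all values here are zeroed placeholders):

```shell
# Write a minimal EIP-3076-shaped file (placeholder values only).
cat > /tmp/slashing-protection.json <<'EOF'
{
  "metadata": {
    "interchange_format_version": "5",
    "genesis_validators_root": "0x0000000000000000000000000000000000000000000000000000000000000000"
  },
  "data": [
    {
      "pubkey": "0x000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
      "signed_blocks": [],
      "signed_attestations": []
    }
  ]
}
EOF
grep -q '"interchange_format_version": "5"' /tmp/slashing-protection.json && echo "shape OK"
```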
+ +## Gather Arguments + +Ask the user for: +- `--input-file`: Path to the JSON file to import (e.g., `./asdb-export/slashing-protection.json`) + +## Execution + +```bash +./scripts/edit/vc/import_asdb.sh --input-file <path> +``` + +The `VC` variable is read from `.env` automatically. The script routes to the appropriate VC-specific import implementation (lodestar, teku, prysm, or nimbus). diff --git a/.claude/skills/recreate-private-keys/SKILL.md b/.claude/skills/recreate-private-keys/SKILL.md new file mode 100644 index 00000000..99b0994c --- /dev/null +++ b/.claude/skills/recreate-private-keys/SKILL.md @@ -0,0 +1,52 @@ +--- +name: recreate-private-keys +description: Recreate private key shares for a Charon cluster while keeping the same validator public keys +user-invokable: true +--- + +# Recreate Private Keys + +> **Warning:** This is an alpha feature and is not yet recommended for production use. + +Refresh private key shares held by operators while keeping the same validator public keys. Validators stay registered on the beacon chain; only the operator key shares change. All operators must participate simultaneously. + +## Use Cases + +- Security concerns: private key shares may have been compromised +- Key rotation: regular security practice +- Recovery: after a security incident + +## Prerequisites + +Before running, verify: +1. `.env` file exists with `NETWORK` and `VC` variables set +2. `.charon` directory with `cluster-lock.json` and `validator_keys` +3. Docker is running +4. `jq` installed + +Read `scripts/edit/recreate-private-keys/README.md` for full details if needed. + +## Execution + +Ask the user whether they want to run with `--dry-run` first to preview the operation. + +```bash +./scripts/edit/recreate-private-keys/recreate-private-keys.sh [--dry-run] +``` + +Set `WORK_DIR` env var to override the repository root directory if running from a custom location. + +The script will: +1. Validate prerequisites +2. 
Stop the VC container and export the anti-slashing database +3. Run a P2P ceremony (all operators must participate simultaneously) +4. Update ASDB pubkeys to match new key shares +5. Stop containers +6. Backup `.charon/` to `./backups/` +7. Install new key shares +8. Import updated ASDB +9. Print commands to start containers manually + +After completion, remind the user to **wait ~2 epochs before starting** containers. + +Remind the user that **all operators must run this script at the same time** for the P2P ceremony to succeed. diff --git a/.claude/skills/remove-operators/SKILL.md b/.claude/skills/remove-operators/SKILL.md new file mode 100644 index 00000000..01a741ba --- /dev/null +++ b/.claude/skills/remove-operators/SKILL.md @@ -0,0 +1,88 @@ +--- +name: remove-operators +description: Remove operators from an existing Charon distributed validator cluster +user-invokable: true +--- + +# Remove Operators + +> **Warning:** This is an alpha feature and is not yet recommended for production use. + +Remove one or more operators from a Charon cluster. Whether removed operators need to participate depends on fault tolerance. + +## Prerequisites + +Read `scripts/edit/remove-operators/README.md` for full details if needed. + +Common prerequisites: +1. `.env` file exists with `NETWORK` and `VC` variables set +2. `.charon` directory with `cluster-lock.json` and `validator_keys` +3. Docker is running +4. 
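Step 4 ("update ASDB pubkeys") can be pictured as rewriting the `pubkey` fields of the exported interchange file; a toy illustration only (the script performs the real update, and the key values below are fabricated placeholders):

```shell
# Toy illustration: swap one pubkey for another in an interchange snippet.
old="0xaaaa"
new="0xbbbb"
json='{"data":[{"pubkey":"0xaaaa","signed_blocks":[],"signed_attestations":[]}]}'
updated=${json//"$old"/"$new"}   # bash pattern substitution
echo "$updated"
```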
`jq` installed + +## Fault Tolerance Context + +Explain to the user: +- Fault tolerance `f = operators - threshold` +- If removing **<= f** operators: removed operators do NOT need to participate (they just stop their nodes) +- If removing **> f** operators: removed operators MUST also participate using `removed-operator.sh` + +## Role Selection + +Ask the user: **"Are you a remaining operator (staying in the cluster) or a removed operator (leaving the cluster)?"** + +### If Remaining Operator + +**Script**: `scripts/edit/remove-operators/remaining-operator.sh` + +**Additional prerequisites**: +- `.charon/validator_keys/` must exist +- The script will automatically stop the VC container for ASDB export + +**Arguments to gather**: +- `--operator-enrs-to-remove`: Comma-separated ENRs of operators being removed +- `--participating-operator-enrs` (only if removal exceeds fault tolerance): Comma-separated ENRs of ALL participating operators +- `--new-threshold` (optional): Override the default threshold (defaults to ceil(n * 2/3)) +- Whether to use `--dry-run` first + +**Run**: +```bash +./scripts/edit/remove-operators/remaining-operator.sh \ + --operator-enrs-to-remove "enr:-...,enr:-..." \ + [--participating-operator-enrs "enr:-...,enr:-..."] \ + [--new-threshold N] \ + [--dry-run] +``` + +Set `WORK_DIR` env var to override the repository root directory if running from a custom location. + +After completion, the script will print commands to start containers manually. Remind the user to **wait ~2 epochs before starting** containers. + +### If Removed Operator + +**Script**: `scripts/edit/remove-operators/removed-operator.sh` + +This is **only needed when the removal exceeds fault tolerance**. If within fault tolerance, the removed operator simply stops their node. 
+ +**Additional prerequisites**: +- `.charon/charon-enr-private-key` must exist +- `.charon/validator_keys/` must exist + +**Arguments to gather**: +- `--operator-enrs-to-remove`: Comma-separated ENRs of operators being removed +- `--participating-operator-enrs`: Comma-separated ENRs of ALL participating operators (must include your own ENR) +- `--new-threshold` (optional): Override the default threshold +- Whether to use `--dry-run` first + +**Run**: +```bash +./scripts/edit/remove-operators/removed-operator.sh \ + --operator-enrs-to-remove "enr:-...,enr:-..." \ + --participating-operator-enrs "enr:-...,enr:-..." \ + [--new-threshold N] \ + [--dry-run] +``` + +Set `WORK_DIR` env var to override the repository root directory if running from a custom location. + +The script will participate in the ceremony and then stop your charon and VC containers. No ASDB operations are needed since you're leaving the cluster. diff --git a/.claude/skills/replace-operator/SKILL.md b/.claude/skills/replace-operator/SKILL.md new file mode 100644 index 00000000..9546415c --- /dev/null +++ b/.claude/skills/replace-operator/SKILL.md @@ -0,0 +1,94 @@ +--- +name: replace-operator +description: Replace a single operator in a Charon distributed validator cluster +user-invokable: true +--- + +# Replace Operator + +> **Warning:** This is an alpha feature and is not yet recommended for production use. + +Replace a single operator in a Charon cluster with a new one. All participating operators (remaining + new) run a `charon alpha edit replace-operator` ceremony together (P2P via relay). The new operator must receive the current cluster-lock.json before the ceremony begins. + +## Prerequisites + +Read `scripts/edit/replace-operator/README.md` for full details if needed. + +Common prerequisites: +1. `.env` file exists with `NETWORK` and `VC` variables set +2. `.charon` directory with `cluster-lock.json` and `charon-enr-private-key` +3. Docker is running +4. 
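The fault-tolerance rule from the context section above is easy to sanity-check with toy numbers; a sketch assuming 7 operators and the default threshold ceil(7 × 2/3) = 5:

```shell
operators=7
threshold=$(( (2 * operators + 2) / 3 ))  # integer form of ceil(n * 2/3)
f=$(( operators - threshold ))            # fault tolerance
removing=2

echo "threshold=$threshold f=$f"
if [ "$removing" -le "$f" ]; then
  echo "within fault tolerance: removed operators just stop their nodes"
else
  echo "exceeds fault tolerance: removed operators must run removed-operator.sh"
fi
```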
`jq` installed + +## Role Selection + +Ask the user: **"Are you a remaining operator (performing the replacement) or the new operator joining as a replacement?"** + +### If Remaining Operator + +**Script**: `scripts/edit/replace-operator/remaining-operator.sh` + +**Additional prerequisites**: +- `.charon/cluster-lock.json` and `.charon/charon-enr-private-key` must exist +- The script will automatically stop the VC container for ASDB export (unless `--skip-export` is used) + +**Arguments to gather**: +- `--new-enr`: ENR of the new replacement operator +- `--old-enr`: ENR of the operator being replaced +- `--skip-export` (optional): Skip ASDB export if already done +- Whether to use `--dry-run` first + +**Run**: +```bash +./scripts/edit/replace-operator/remaining-operator.sh \ + --new-enr "enr:-..." \ + --old-enr "enr:-..." \ + [--skip-export] \ + [--dry-run] +``` + +Set `WORK_DIR` env var to override the repository root directory if running from a custom location. + +After completion, the script will print commands to start containers manually. Remind the user to **wait ~2 epochs before starting** containers. + +### If New Operator + +**Script**: `scripts/edit/replace-operator/new-operator.sh` + +This is a **two-step process**: + +#### Step 1: Generate ENR + +Ask if the user needs to generate an ENR: + +```bash +./scripts/edit/replace-operator/new-operator.sh --generate-enr +``` + +This creates `.charon/charon-enr-private-key` and displays the ENR. Tell the user to **share this ENR with the existing operators**. +The existing operators, in turn, need to share the `cluster-lock.json` with the new operator; it contains the current cluster configuration and is required for the P2P ceremony. 
+ +#### Step 2: Run the Ceremony + +After receiving the current `cluster-lock.json` from remaining operators: +- `--cluster-lock`: Path to the received `cluster-lock.json` +- `--old-enr`: ENR of the operator being replaced +- Whether to use `--dry-run` first + +```bash +./scripts/edit/replace-operator/new-operator.sh \ + --cluster-lock ./received-cluster-lock.json \ + --old-enr "enr:-..." \ + [--dry-run] +``` + +Set `WORK_DIR` env var to override the repository root directory if running from a custom location. + +The new operator runs this **at the same time** as the remaining operators run their ceremony. All operators must participate together. + +After the ceremony completes, the script automatically: +- Backs up the old `.charon` directory +- Moves the output directory to `.charon` (contains complete configuration) +- Prints commands to start containers + +Note: the new operator does NOT have slashing protection history (fresh start). diff --git a/.gitignore b/.gitignore index be6f45cf..b55ae221 100644 --- a/.gitignore +++ b/.gitignore @@ -13,3 +13,4 @@ data/ .charon prometheus/prometheus.yml commit-boost/config.toml + diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 00000000..2346f6dc --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,151 @@ +# CLAUDE.md + +This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository. + +## Project Overview + +This repository contains Docker Compose configurations for running a Charon Distributed Validator Node (CDVN), which coordinates multiple operators to run Ethereum validators. 
A distributed validator node runs four main components: +- Execution client (EL): Processes Ethereum transactions +- Consensus client (CL/beacon node): Participates in Ethereum's proof-of-stake consensus +- Charon: Obol Network's distributed validator middleware that coordinates between operators +- Validator client (VC): Signs attestations and proposals through Charon + +## Architecture & Multi-Client System + +The repository uses a **profile-based multi-client architecture** where different Ethereum client implementations can be swapped via `.env` configuration: + +- **Compose file structure**: `compose-el.yml` (execution), `compose-cl.yml` (consensus), `compose-vc.yml` (validator), `compose-mev.yml` (MEV), and `docker-compose.yml` (main/monitoring) +- **Client selection**: Set via environment variables `EL`, `CL`, `VC`, `MEV` in `.env` (e.g., `EL=el-nethermind`, `CL=cl-lighthouse`, `VC=vc-lodestar`, `MEV=mev-mevboost`) +- **Profiles**: Docker Compose profiles automatically activate the selected clients via `COMPOSE_PROFILES=${EL},${CL},${VC},${MEV}` +- **Service naming**: Client services use prefixed names (e.g., `el-nethermind`, `cl-lighthouse`, `vc-lodestar`) while the main compose file uses unprefixed names for backward compatibility + +### Supported Clients + +- **Execution Layer**: `el-nethermind`, `el-reth`, `el-none` +- **Consensus Layer**: `cl-lighthouse`, `cl-grandine`, `cl-teku`, `cl-lodestar`, `cl-none` +- **Validator Clients**: `vc-lodestar`, `vc-nimbus`, `vc-prysm`, `vc-teku` +- **MEV Clients**: `mev-mevboost`, `mev-commitboost`, `mev-none` + +### Key Integration Points + +- Charon connects to the consensus layer at `http://${CL}:5052` (beacon node API) +- Validator clients connect to Charon at `http://charon:3600` (distributed validator middleware API) +- Consensus layer connects to execution layer at `http://${EL}:8551` (Engine API with JWT auth) +- MEV clients expose builder API at port `18550` + +## Common Commands + +### Starting/Stopping the 
Cluster + +```bash +# Start the full cluster (uses profile from .env) +docker compose up -d + +# Stop specific services +docker compose down <service-name> + +# Stop all services +docker compose down + +# View logs +docker compose logs -f + +# Restart after config changes +docker compose restart +``` + +### Switching Clients + +```bash +# 1. Stop the old client +docker compose down cl-lighthouse + +# 2. Update .env to change CL variable (e.g., CL=cl-grandine) + +# 3. Start new client +docker compose up cl-grandine -d + +# 4. Restart charon to use new beacon node +docker compose restart charon + +# 5. Optional: clean up old client data +rm -rf ./data/lighthouse +``` + +### Testing + +```bash +# Verify containers can be created +docker compose up --no-start + +# Test with debug profile +docker compose -f docker-compose.yml -f compose-debug.yml up --no-start +``` + +## Configuration + +### Environment Setup + +1. Copy the appropriate sample file: `.env.sample.mainnet` or `.env.sample.hoodi` → `.env` +2. Set `NETWORK` (mainnet, hoodi) +3. Select clients by uncommenting the desired `EL`, `CL`, `VC`, `MEV` variables +4. Configure optional settings (ports, external hostnames, monitoring tokens, etc.) 
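Putting those steps together, a hypothetical `.env` selection might look like this; the client names come from the Supported Clients list, and the profile string is derived the same way the compose files derive `COMPOSE_PROFILES`:

```shell
# Illustrative .env selection (any combination from the Supported Clients list works).
NETWORK=hoodi
EL=el-nethermind
CL=cl-lighthouse
VC=vc-lodestar
MEV=mev-mevboost

# Docker Compose activates the matching profiles from these variables:
COMPOSE_PROFILES="${EL},${CL},${VC},${MEV}"
echo "$COMPOSE_PROFILES"
```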
+ +### Important Environment Variables + +- `NETWORK`: Ethereum network (mainnet, hoodi) +- `EL`, `CL`, `VC`, `MEV`: Client selection (determines which Docker profiles activate) +- `CHARON_BEACON_NODE_ENDPOINTS`: Override default beacon node (defaults to selected CL client) +- `CHARON_FALLBACK_BEACON_NODE_ENDPOINTS`: Fallback beacon nodes for redundancy +- `BUILDER_API_ENABLED`: Enable/disable MEV-boost integration +- `CLUSTER_NAME`, `CLUSTER_PEER`: Required for monitoring with Alloy/Prometheus +- `ALERT_DISCORD_IDS`: Discord IDs for Obol Agent monitoring alerts + +### Key Directories + +- `.charon/`: Cluster configuration and validator keys (created by DKG or add-validators) +- `data/`: Persistent data for all clients (execution, consensus, validator databases) +- `jwt/`: JWT secret for execution<->consensus authentication +- `grafana/`: Monitoring dashboards and configuration +- `prometheus/`: Metrics collection configuration +- `scripts/`: Automation scripts for cluster operations + +## Cluster Edit Scripts + +Located in `scripts/edit/`, these automate complex cluster modification operations. 
Each has its own README with full usage details: + +- **[Add Validators](scripts/edit/add-validators/README.md)** - Add new validators to an existing cluster +- **[Add Operators](scripts/edit/add-operators/README.md)** - Expand the cluster by adding new operators +- **[Remove Operators](scripts/edit/remove-operators/README.md)** - Remove operators from the cluster +- **[Replace Operator](scripts/edit/replace-operator/README.md)** - Replace a single operator in the cluster +- **[Recreate Private Keys](scripts/edit/recreate-private-keys/README.md)** - Refresh private key shares while keeping the same validator public keys +- **[Anti-Slashing DB (vc/)](scripts/edit/vc/README.md)** - Export/import/update anti-slashing databases (EIP-3076) + +## Monitoring Stack + +- **Grafana** (port 3000): Dashboards for cluster health, validator performance +- **Prometheus**: Metrics collection from all services +- **Loki**: Log aggregation (optional, via `CHARON_LOKI_ADDRESSES`) +- **Tempo**: Distributed tracing (debug profile) +- **Alloy**: Log and metric forwarding (uses `alloy-monitored` labels on services) + +Access Grafana at `http://localhost:3000` (or `${MONITORING_PORT_GRAFANA}`). + +## Development Workflow + +When modifying this repository: + +1. **Test container creation** before committing changes to compose files +2. **Preserve backward compatibility** for existing node operators (data paths, service names) +3. **Update all sample .env files** when adding new configuration options +4. **Test client switching** if modifying compose file structure +5. 
**Update version defaults** to tested/stable releases + +## Important Notes + +- **Never commit `.env` files** - they contain operator-specific configuration +- **JWT secret** in `jwt/jwt.hex` must be shared between EL and CL clients +- **Cluster lock** in `.charon/cluster-lock.json` is critical - back it up before any edit operations +- **Validator keys** in `.charon/validator_keys/` must be kept secure and never committed +- **Data directory compatibility**: When switching VCs, verify the new client can handle existing key state +- **Slashing protection**: Always export/import ASDB when switching VCs or replacing operators diff --git a/Users/pinebit/charon-distributed-validator-node/scripts/edit/vc/test/output/slashing-protection.json b/Users/pinebit/charon-distributed-validator-node/scripts/edit/vc/test/output/slashing-protection.json new file mode 100644 index 00000000..97722811 --- /dev/null +++ b/Users/pinebit/charon-distributed-validator-node/scripts/edit/vc/test/output/slashing-protection.json @@ -0,0 +1,38 @@ +{ + "metadata": { + "interchange_format_version": "5", + "genesis_validators_root": "0x212f13fc4df078b6cb7db228f1c8307566dcecf900867401a92023d7ba99cb5f" + }, + "data": [ + { + "pubkey": "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "signed_blocks": [ + { + "slot": "81952", + "signing_root": "0x4ff6f743a43f3b4f95350831aeaf0a122a1a392922c45d804280284a69eb850b" + }, + { + "slot": "81984", + "signing_root": "0x5a2b9c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b" + } + ], + "signed_attestations": [ + { + "source_epoch": "2560", + "target_epoch": "2561", + "signing_root": "0x587d6a4f59a58fe15bdac1234e3d51a1d5c8b2e0e3f5e0f2a1b3c4d5e6f7a8b9" + }, + { + "source_epoch": "2561", + "target_epoch": "2562", + "signing_root": "0x6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b" + }, + { + "source_epoch": "2562", + "target_epoch": "2563", + "signing_root": 
"0x7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c" + } + ] + } + ] +} \ No newline at end of file diff --git a/scripts/README.md b/scripts/README.md new file mode 100644 index 00000000..a195af60 --- /dev/null +++ b/scripts/README.md @@ -0,0 +1,27 @@ +# Cluster Edit Automation Scripts + +Automation scripts for Charon distributed validator cluster editing operations. + +## Documentation + +- [Charon Edit Commands](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/) +- [EIP-3076 Slashing Protection Interchange Format](https://eips.ethereum.org/EIPS/eip-3076) + +## Scripts + +| Directory | Description | +|-----------|-------------| +| [edit/add-validators/](edit/add-validators/README.md) | Add new validators to an existing cluster | +| [edit/recreate-private-keys/](edit/recreate-private-keys/README.md) | Refresh private key shares while keeping the same validator public keys | +| [edit/add-operators/](edit/add-operators/README.md) | Expand the cluster by adding new operators | +| [edit/remove-operators/](edit/remove-operators/README.md) | Remove operators from the cluster | +| [edit/replace-operator/](edit/replace-operator/README.md) | Replace a single operator in a cluster | +| [edit/vc/](edit/vc/README.md) | Export/import/update anti-slashing databases (EIP-3076) | +| [edit/test/](edit/test/README.md) | E2E integration tests for all edit scripts | + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables +- Docker and `docker compose` +- `jq` + diff --git a/scripts/edit/add-operators/README.md b/scripts/edit/add-operators/README.md new file mode 100644 index 00000000..b862d120 --- /dev/null +++ b/scripts/edit/add-operators/README.md @@ -0,0 +1,106 @@ +# Add-Operators Scripts + +Scripts to automate the [add-operators ceremony](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-operators) for Charon distributed validators. 
+ +## Overview + +These scripts help operators expand an existing distributed validator cluster by adding new operators. This is useful for: + +- **Cluster expansion**: Adding more operators for increased redundancy +- **Decentralization**: Distributing validator duties across more parties +- **Resilience**: Expanding the operator set while maintaining the same validators + +**Important**: This is a coordinated ceremony. All operators (existing AND new) must run their respective scripts simultaneously to complete the process. + +> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use. + +There are two scripts for the two roles involved: + +- **`existing-operator.sh`** - For operators already in the cluster +- **`new-operator.sh`** - For new operators joining the cluster + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set + +- Docker running +- `jq` installed +- **Existing operators**: `.charon` directory with `cluster-lock.json` and `validator_keys` +- **New operators**: `cluster-lock.json` received from existing operators, and ENR private key (generated via `--generate-enr`) + +## For Existing Operators + +Automates the complete workflow for operators already in the cluster: + +```bash +./scripts/edit/add-operators/existing-operator.sh \ + --new-operator-enrs "enr:-..." +``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--new-operator-enrs <enrs>` | Yes | Comma-separated ENRs of new operators | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +| Environment Variable | Description | +|----------------------|-------------| +| `WORK_DIR` | Override the repository root directory (defaults to auto-detected repo root) | + +### Workflow + +1. **Export ASDB** - Stop VC if running and export anti-slashing database +2. **Run ceremony** - P2P coordinated add-operators ceremony with all operators +3. 
**Update ASDB** - Replace pubkeys in exported ASDB to match new cluster-lock +4. **Stop containers** - Stop charon and VC +5. **Backup and replace** - Backup current `.charon/` to `./backups/`, install new configuration +6. **Import ASDB** - Import updated anti-slashing database +7. **Print start commands** - Display commands to start containers manually (wait ~2 epochs before starting) + +## For New Operators + +Three-step workflow for new operators joining the cluster. + +**Step 1:** Generate ENR and share with existing operators: + +```bash +./scripts/edit/add-operators/new-operator.sh --generate-enr +``` + +**Step 2:** Download the existing cluster-lock from one of the existing operators: + +```bash +curl -o .charon/cluster-lock.json https://example.com/cluster-lock.json +``` + +**Step 3:** Run the ceremony with the cluster-lock: + +```bash +./scripts/edit/add-operators/new-operator.sh \ + --new-operator-enrs "enr:-...,enr:-..." \ + --cluster-lock .charon/cluster-lock.json +``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--new-operator-enrs <enrs>` | For ceremony | Comma-separated ENRs of ALL new operators | +| `--cluster-lock <path>` | For ceremony | Path to existing cluster-lock.json | +| `--generate-enr` | No | Generate new ENR private key | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +| Environment Variable | Description | +|----------------------|-------------| +| `WORK_DIR` | Override the repository root directory (defaults to auto-detected repo root) | + +## Related + +- [Add-Validators Workflow](../add-validators/README.md) +- [Remove-Operators Workflow](../remove-operators/README.md) +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Replace-Operator Workflow](../replace-operator/README.md) +- [Anti-Slashing DB Scripts](../vc/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-operators) 
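Before starting a ceremony it can help to sanity-check the comma-separated ENR list; a hypothetical pre-flight snippet (the ENR values are placeholders):

```shell
# Split a comma-separated ENR list and check each entry's prefix (placeholders shown).
ENRS="enr:-example-operator-1,enr:-example-operator-2"
IFS=',' read -r -a parts <<< "$ENRS"
for e in "${parts[@]}"; do
  case "$e" in
    enr:-*) ;;  # looks like an ENR
    *) echo "not an ENR: $e" >&2; exit 1 ;;
  esac
done
echo "found ${#parts[@]} ENR(s)"
```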
diff --git a/scripts/edit/add-operators/existing-operator.sh b/scripts/edit/add-operators/existing-operator.sh new file mode 100755 index 00000000..b4e32632 --- /dev/null +++ b/scripts/edit/add-operators/existing-operator.sh @@ -0,0 +1,323 @@ +#!/usr/bin/env bash + +# Add-Operators Script for EXISTING Operators - See README.md for documentation + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" +cd "$REPO_ROOT" + +# Default values +NEW_OPERATOR_ENRS="" +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" +ASDB_EXPORT_DIR="./asdb-export" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/add-operators/existing-operator.sh [OPTIONS] + +Automates the add-operators ceremony for operators already in the cluster. +This is a CEREMONY that ALL operators (existing AND new) must run simultaneously. + +Options: + --new-operator-enrs Comma-separated ENRs of new operators (required) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + # Add one new operator + ./scripts/edit/add-operators/existing-operator.sh \ + --new-operator-enrs "enr:-..." + + # Add multiple new operators + ./scripts/edit/add-operators/existing-operator.sh \ + --new-operator-enrs "enr:-...,enr:-..." 
+ +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and validator_keys + - Docker and docker compose installed and running + - VC container running (for ASDB export) + - All operators must participate in the ceremony +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --new-operator-enrs) + NEW_OPERATOR_ENRS="$2" + shift 2 + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$NEW_OPERATOR_ENRS" ]; then + log_error "Missing required argument: --new-operator-enrs" + echo "Use --help for usage information" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add-Operators Workflow - EXISTING OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." 
+ exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -d .charon/validator_keys ]; then + log_error ".charon/validator_keys directory not found" + log_info "All operators must have their current validator private key shares." + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" +log_info " New operator ENRs: ${NEW_OPERATOR_ENRS:0:80}..." + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +# Show current cluster info +if [ -f .charon/cluster-lock.json ]; then + CURRENT_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + CURRENT_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info " Current cluster: $CURRENT_VALIDATORS validator(s), $CURRENT_OPERATORS operator(s)" +fi + +echo "" + +# Step 1: Export anti-slashing database +log_step "Step 1: Exporting anti-slashing database..." 
+ +# VC container must be stopped before export (Lodestar locks the database while running) +if [ "$DRY_RUN" = false ]; then + if docker compose ps --format '{{.Status}}' "$VC" 2>/dev/null | grep -qi running; then + log_info "Stopping VC container ($VC) for ASDB export..." + docker compose stop "$VC" + fi +else + log_warn "Would stop $VC container if running" +fi + +mkdir -p "$ASDB_EXPORT_DIR" + +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/export_asdb.sh" \ + --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" + +echo "" + +# Step 2: Run ceremony +log_step "Step 2: Running add-operators ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL operators must run this ceremony simultaneously║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit add-operators" +log_info " New operator ENRs: ${NEW_OPERATOR_ENRS:0:80}..." +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all operators to connect..." +echo "" + +if [ "$DRY_RUN" = false ]; then + # Use -i for stdin (needed for ceremony coordination), skip -t if no TTY available + DOCKER_FLAGS="-i" + if [ -t 0 ]; then + DOCKER_FLAGS="-it" + fi + + docker run --rm $DOCKER_FLAGS \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.9.0-rc3}" \ + alpha edit add-operators \ + --new-operator-enrs="$NEW_OPERATOR_ENRS" \ + --output-dir=/opt/charon/output + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" 
+ NEW_VALIDATORS=$(jq '.distributed_validators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + NEW_OPERATORS=$(jq '.operators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + log_info "New cluster-lock.json generated with $NEW_VALIDATORS validator(s), $NEW_OPERATORS operator(s)" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit add-operators --new-operator-enrs=... --output-dir=$OUTPUT_DIR" +fi + +echo "" + +# Step 3: Update ASDB pubkeys +log_step "Step 3: Updating anti-slashing database pubkeys..." + +run_cmd "$SCRIPT_DIR/../vc/update-anti-slashing-db.sh" \ + "$ASDB_EXPORT_DIR/slashing-protection.json" \ + ".charon/cluster-lock.json" \ + "$OUTPUT_DIR/cluster-lock.json" + +log_info "Anti-slashing database pubkeys updated" + +echo "" + +# Step 4: Stop containers +log_step "Step 4: Stopping containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" + +# Step 5: Backup and replace .charon +log_step "Step 5: Backing up and replacing .charon directory..." + +TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info "Current .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New cluster configuration installed to .charon/" + +echo "" + +# Step 6: Import updated ASDB +log_step "Step 6: Importing updated anti-slashing database..." 
+ +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/import_asdb.sh" \ + --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database imported" + +echo "" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add-Operators Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old .charon backed up to: $BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info " - New cluster configuration installed in: .charon/" +log_info " - Anti-slashing database updated and imported" +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: Wait at least 2 epochs (~13 min) before starting ║" +log_warn "║ containers to avoid slashing risk from duplicate attestations ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "When ready, start containers with:" +echo " docker compose up -d charon $VC" +echo "" +log_info "After starting, verify:" +log_info " 1. Check charon logs: docker compose logs -f charon" +log_info " 2. Verify all nodes connected and healthy" +log_info " 3. Verify cluster is producing attestations" +log_info " 4. Confirm new operators have joined successfully" +echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." 
+echo "" +log_info "Current limitations:" +log_info " - The new configuration will not be reflected on the Obol Launchpad" +log_info " - The cluster will have a new cluster hash (different observability ID)" +echo "" diff --git a/scripts/edit/add-operators/new-operator.sh b/scripts/edit/add-operators/new-operator.sh new file mode 100755 index 00000000..201ea58d --- /dev/null +++ b/scripts/edit/add-operators/new-operator.sh @@ -0,0 +1,374 @@ +#!/usr/bin/env bash + +# Add-Operators Script for NEW Operators - See README.md for documentation + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" +cd "$REPO_ROOT" + +# Default values +NEW_OPERATOR_ENRS="" +CLUSTER_LOCK_PATH="" +GENERATE_ENR=false +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/add-operators/new-operator.sh [OPTIONS] + +Helps new operators join an existing cluster during the add-operators ceremony. +This is a CEREMONY that ALL operators (existing AND new) must run simultaneously. 
+ +Options: + --new-operator-enrs Comma-separated ENRs of ALL new operators (required for ceremony) + --cluster-lock Path to existing cluster-lock.json (required for ceremony) + --generate-enr Generate a new ENR private key if not present + --dry-run Show what would be done without executing + -h, --help Show this help message + +Examples: + # Step 1: Generate ENR and share with existing operators + ./scripts/edit/add-operators/new-operator.sh --generate-enr + + # Step 2: Run ceremony with cluster-lock and all new operator ENRs + ./scripts/edit/add-operators/new-operator.sh \ + --new-operator-enrs "enr:-...,enr:-..." \ + --cluster-lock ./received-cluster-lock.json + +Prerequisites: + - .env file with NETWORK and VC variables set + - For --generate-enr: Docker installed + - For ceremony: .charon/charon-enr-private-key must exist + - For ceremony: cluster-lock.json received from existing operators +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --new-operator-enrs) + NEW_OPERATOR_ENRS="$2" + shift 2 + ;; + --cluster-lock) + CLUSTER_LOCK_PATH="$2" + shift 2 + ;; + --generate-enr) + GENERATE_ENR=true + shift + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add-Operators Workflow - NEW OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." 
+ exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +echo "" + +# Handle ENR generation mode +if [ "$GENERATE_ENR" = true ]; then + log_step "Step 1: Generating ENR private key..." + + if [ -f .charon/charon-enr-private-key ]; then + log_warn "ENR private key already exists at .charon/charon-enr-private-key" + log_warn "Skipping generation to avoid overwriting existing key." + log_info "If you want to generate a new key, remove the existing file first." + else + mkdir -p .charon + + if [ "$DRY_RUN" = false ]; then + docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.9.0-rc3}" \ + create enr + else + echo " [DRY-RUN] docker run --rm ... 
charon create enr" + fi + + log_info "ENR private key generated" + fi + + if [ -f .charon/charon-enr-private-key ]; then + echo "" + log_warn "╔════════════════════════════════════════════════════════════════╗" + log_warn "║ SHARE YOUR ENR WITH THE EXISTING OPERATORS ║" + log_warn "╚════════════════════════════════════════════════════════════════╝" + echo "" + + # Extract and display the ENR + if [ "$DRY_RUN" = false ]; then + ENR=$(docker run --rm \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.9.0-rc3}" \ + enr 2>/dev/null || echo "") + + if [ -n "$ENR" ]; then + log_info "Your ENR:" + echo "" + echo "$ENR" + echo "" + fi + fi + + log_info "Send this ENR to the existing operators." + log_info "They will use it with: --new-operator-enrs \"<your-enr>\"" + log_info "" + log_info "You will also need the existing cluster-lock.json from them." + log_info "" + log_info "After receiving it, run the ceremony with:" + log_info " ./scripts/edit/add-operators/new-operator.sh \\" + log_info " --new-operator-enrs \"<all-new-operator-enrs>\" \\" + log_info " --cluster-lock <path-to-cluster-lock.json>" + else + log_error "ENR private key generation failed - .charon/charon-enr-private-key not found" + exit 1 + fi + + exit 0 +fi + +# Ceremony mode: validate required arguments +if [ -z "$NEW_OPERATOR_ENRS" ]; then + log_error "Missing required argument: --new-operator-enrs" + echo "Use --help for usage information" + exit 1 +fi + +if [ -z "$CLUSTER_LOCK_PATH" ]; then + log_error "Missing required argument: --cluster-lock" + echo "Use --help for usage information" + exit 1 +fi + +# Step 1: Check ceremony prerequisites +log_step "Step 1: Checking ceremony prerequisites..." + +if [ "$DRY_RUN" = false ]; then + if [ ! -d .charon ]; then + log_error ".charon directory not found" + log_info "First generate your ENR with: ./scripts/edit/add-operators/new-operator.sh --generate-enr" + exit 1 + fi + + if [ ! 
-f .charon/charon-enr-private-key ]; then + log_error ".charon/charon-enr-private-key not found" + log_info "First generate your ENR with: ./scripts/edit/add-operators/new-operator.sh --generate-enr" + exit 1 + fi + + if [ ! -f "$CLUSTER_LOCK_PATH" ]; then + log_error "Cluster-lock file not found: $CLUSTER_LOCK_PATH" + exit 1 + fi + + # Validate cluster-lock is valid JSON + if ! jq empty "$CLUSTER_LOCK_PATH" 2>/dev/null; then + log_error "Cluster-lock file is not valid JSON: $CLUSTER_LOCK_PATH" + exit 1 + fi +else + if [ ! -d .charon ]; then + log_warn "Would check for .charon directory (not found)" + fi + if [ ! -f .charon/charon-enr-private-key ]; then + log_warn "Would check for .charon/charon-enr-private-key (not found)" + fi +fi + +log_info "Using cluster-lock: $CLUSTER_LOCK_PATH" +log_info "New operator ENRs: ${NEW_OPERATOR_ENRS:0:80}..." + +# Show cluster info +if [ "$DRY_RUN" = false ] && [ -f "$CLUSTER_LOCK_PATH" ]; then + NUM_VALIDATORS=$(jq '.distributed_validators | length' "$CLUSTER_LOCK_PATH" 2>/dev/null || echo "?") + NUM_OPERATORS=$(jq '.operators | length' "$CLUSTER_LOCK_PATH" 2>/dev/null || echo "?") + log_info "Cluster info: $NUM_VALIDATORS validator(s), $NUM_OPERATORS operator(s)" +fi + +log_info "Prerequisites OK" + +echo "" + +# Step 2: Run ceremony +log_step "Step 2: Running add-operators ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL operators must run this ceremony simultaneously║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit add-operators" +log_info " New operator ENRs: ${NEW_OPERATOR_ENRS:0:80}..." +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all operators to connect..." 
+echo "" + +if [ "$DRY_RUN" = false ]; then + # Use -i for stdin (needed for ceremony coordination), skip -t if no TTY available + DOCKER_FLAGS="-i" + if [ -t 0 ]; then + DOCKER_FLAGS="-it" + fi + + # Handle absolute vs relative cluster-lock path + if [[ "$CLUSTER_LOCK_PATH" = /* ]]; then + CLUSTER_LOCK_MOUNT="$CLUSTER_LOCK_PATH" + else + CLUSTER_LOCK_MOUNT="$REPO_ROOT/$CLUSTER_LOCK_PATH" + fi + + docker run --rm $DOCKER_FLAGS \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + -v "$CLUSTER_LOCK_MOUNT:/opt/charon/cluster-lock.json:ro" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.9.0-rc3}" \ + alpha edit add-operators \ + --new-operator-enrs="$NEW_OPERATOR_ENRS" \ + --output-dir=/opt/charon/output \ + --lock-file=/opt/charon/cluster-lock.json \ + --private-key-file=/opt/charon/.charon/charon-enr-private-key + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" + NEW_VALIDATORS=$(jq '.distributed_validators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + NEW_OPERATORS=$(jq '.operators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + log_info "New cluster-lock.json generated with $NEW_VALIDATORS validator(s), $NEW_OPERATORS operator(s)" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit add-operators --new-operator-enrs=... --output-dir=$OUTPUT_DIR --lock-file=... --private-key-file=..." +fi + +echo "" + +# Step 3: Install .charon from output +log_step "Step 3: Installing new cluster configuration..." 
+ +if [ -d .charon ]; then + TIMESTAMP=$(date +%Y%m%d_%H%M%S) + mkdir -p "$BACKUP_DIR" + run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" + log_info "Old .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" +fi + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New cluster configuration installed to .charon/" + +echo "" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ New Operator Setup COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Cluster configuration installed in: .charon/" +echo "" +log_info "When ready, start containers with:" +echo " docker compose up -d charon $VC" +echo "" +log_info "After starting, verify:" +log_info " 1. Check charon logs: docker compose logs -f charon" +log_info " 2. Verify VC is running: docker compose logs -f $VC" +log_info " 3. Monitor validator duties once synced" +echo "" +log_warn "Note: As a new operator, you do NOT have any slashing protection history." +log_warn "Your VC will start fresh. Ensure all existing operators have completed" +log_warn "their add-operators workflow before validators resume duties." +echo "" diff --git a/scripts/edit/add-validators/README.md b/scripts/edit/add-validators/README.md new file mode 100644 index 00000000..f46bd99f --- /dev/null +++ b/scripts/edit/add-validators/README.md @@ -0,0 +1,67 @@ +# Add-Validators Script + +Script to automate the [add-validators ceremony](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-validators) for Charon distributed validators. + +## Overview + +This script helps operators add new validators to an existing distributed validator cluster. This is useful for: + +- **Expanding capacity**: Add more validators without creating a new cluster +- **Scaling operations**: Grow your staking operation with existing operators + +**Important**: This is a coordinated ceremony. 
All operators must run this script simultaneously to complete the process. + +> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use. + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- `.charon` directory with `cluster-lock.json` and `deposit-data*.json` files +- Docker running +- `jq` installed +- **All operators must participate in the ceremony** + +## Usage + +All operators must run this script simultaneously: + +```bash +./scripts/edit/add-validators/add-validators.sh \ + --num-validators 10 \ + --withdrawal-addresses 0x123...abc \ + --fee-recipient-addresses 0x456...def +``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--num-validators <n>` | Yes | Number of validators to add | +| `--withdrawal-addresses <addresses>` | Yes | Withdrawal address(es), comma-separated for multiple | +| `--fee-recipient-addresses <addresses>` | Yes | Fee recipient address(es), comma-separated | +| `--unverified` | No | Skip key verification (for remote KeyManager) | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +| Environment Variable | Description | +|----------------------|-------------| +| `WORK_DIR` | Override the repository root directory (defaults to auto-detected repo root) | + +## Workflow + +The script performs the following steps: + +1. **Check prerequisites** - Verify environment, cluster-lock, and detect running containers +2. **Run ceremony** - P2P coordinated add-validators ceremony with all operators +3. **Stop containers** - Stop charon and VC (only if they were running) +4. **Backup and replace** - Backup current `.charon/` to `./backups/`, install new configuration +5. 
**Print start commands** - Display commands to start containers manually + +## Related + +- [Add-Operators Workflow](../add-operators/README.md) +- [Remove-Operators Workflow](../remove-operators/README.md) +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Replace-Operator Workflow](../replace-operator/README.md) +- [Anti-Slashing DB Scripts](../vc/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/add-validators) diff --git a/scripts/edit/add-validators/add-validators.sh b/scripts/edit/add-validators/add-validators.sh new file mode 100755 index 00000000..6ad01725 --- /dev/null +++ b/scripts/edit/add-validators/add-validators.sh @@ -0,0 +1,344 @@ +#!/usr/bin/env bash + +# Add-Validators Script - See README.md for documentation + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" +cd "$REPO_ROOT" + +# Default values +NUM_VALIDATORS="" +WITHDRAWAL_ADDRESSES="" +FEE_RECIPIENT_ADDRESSES="" +UNVERIFIED=false +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/add-validators/add-validators.sh [OPTIONS] + +Adds new validators to an existing distributed validator cluster. 
+ +Options: + --num-validators Number of validators to add (required) + --withdrawal-addresses Withdrawal address(es), comma-separated (required) + --fee-recipient-addresses Fee recipient address(es), comma-separated (required) + --unverified Skip key verification (when keys not accessible) + --dry-run Show what would be done without executing + -h, --help Show this help message + +See README.md for detailed documentation. +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --num-validators) + NUM_VALIDATORS="$2" + shift 2 + ;; + --withdrawal-addresses) + WITHDRAWAL_ADDRESSES="$2" + shift 2 + ;; + --fee-recipient-addresses) + FEE_RECIPIENT_ADDRESSES="$2" + shift 2 + ;; + --unverified) + UNVERIFIED=true + shift + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$NUM_VALIDATORS" ]; then + log_error "Missing required argument: --num-validators" + echo "Use --help for usage information" + exit 1 +fi + +if [ -z "$WITHDRAWAL_ADDRESSES" ]; then + log_error "Missing required argument: --withdrawal-addresses" + echo "Use --help for usage information" + exit 1 +fi + +if [ -z "$FEE_RECIPIENT_ADDRESSES" ]; then + log_error "Missing required argument: --fee-recipient-addresses" + echo "Use --help for usage information" + exit 1 +fi + +# Validate num-validators is a positive integer +if ! 
[[ "$NUM_VALIDATORS" =~ ^[1-9][0-9]*$ ]]; then + log_error "Invalid --num-validators: must be a positive integer" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add Validators Workflow ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if ! 
docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check if containers are currently running +CHARON_WAS_RUNNING=false +VC_WAS_RUNNING=false +if docker compose ps --format '{{.Status}}' charon 2>/dev/null | grep -qi running; then + CHARON_WAS_RUNNING=true +fi +if docker compose ps --format '{{.Status}}' "$VC" 2>/dev/null | grep -qi running; then + VC_WAS_RUNNING=true +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" +log_info " Validators to add: $NUM_VALIDATORS" + +if [ -n "$WITHDRAWAL_ADDRESSES" ]; then + log_info " Withdrawal addresses: $WITHDRAWAL_ADDRESSES" +fi +if [ -n "$FEE_RECIPIENT_ADDRESSES" ]; then + log_info " Fee recipient addresses: $FEE_RECIPIENT_ADDRESSES" +fi +if [ "$UNVERIFIED" = true ]; then + log_warn " Mode: UNVERIFIED (key verification skipped)" +fi + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +# Check if output directory already exists +if [ -d "$OUTPUT_DIR" ]; then + log_error "Output directory '$OUTPUT_DIR' already exists." + log_error "Please remove it first: rm -rf $OUTPUT_DIR" + exit 1 +fi + +# Show current cluster info +if [ -f .charon/cluster-lock.json ]; then + CURRENT_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + CURRENT_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info " Current cluster: $CURRENT_VALIDATORS validator(s), $CURRENT_OPERATORS operator(s)" +fi + +echo "" + +# Step 1: Run ceremony +log_step "Step 1: Running add-validators ceremony..." 
+ +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL operators must run this ceremony simultaneously║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit add-validators" +log_info " Number of validators: $NUM_VALIDATORS" +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all operators to connect..." +echo "" + +# Use -i for stdin (needed for ceremony coordination), skip -t if no TTY available +DOCKER_FLAGS="-i" +if [ -t 0 ]; then + DOCKER_FLAGS="-it" +fi + +# Build Docker command arguments +DOCKER_ARGS=( + run --rm $DOCKER_FLAGS + -v "$REPO_ROOT/.charon:/opt/charon/.charon" + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" + "obolnetwork/charon:${CHARON_VERSION:-v1.9.0-rc3}" + alpha edit add-validators + --num-validators="$NUM_VALIDATORS" + --output-dir=/opt/charon/output +) + +if [ -n "$WITHDRAWAL_ADDRESSES" ]; then + DOCKER_ARGS+=(--withdrawal-addresses="$WITHDRAWAL_ADDRESSES") +fi + +if [ -n "$FEE_RECIPIENT_ADDRESSES" ]; then + DOCKER_ARGS+=(--fee-recipient-addresses="$FEE_RECIPIENT_ADDRESSES") +fi + +if [ "$UNVERIFIED" = true ]; then + DOCKER_ARGS+=(--unverified) +fi + +if [ "$DRY_RUN" = false ]; then + docker "${DOCKER_ARGS[@]}" + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" + NEW_VALIDATORS=$(jq '.distributed_validators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + log_info "New cluster-lock.json generated with $NEW_VALIDATORS validator(s)" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... 
charon alpha edit add-validators --num-validators=$NUM_VALIDATORS --output-dir=$OUTPUT_DIR" +fi + +echo "" + +# Step 2: Stop containers (if they were running) +log_step "Step 2: Stopping containers..." + +if [ "$CHARON_WAS_RUNNING" = true ] || [ "$VC_WAS_RUNNING" = true ]; then + run_cmd docker compose stop "$VC" charon + log_info "Containers stopped" +else + log_info "Containers were not running, skipping stop" +fi + +echo "" + +# Step 3: Backup and replace .charon +log_step "Step 3: Backing up and replacing .charon directory..." + +TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info "Current .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New cluster configuration installed to .charon/" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Add Validators Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old .charon backed up to: $BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info " - New cluster configuration installed in: .charon/" +log_info " - $NUM_VALIDATORS new validator(s) added" +echo "" +log_info "When ready, start containers with:" +echo " docker compose up -d charon $VC" +echo "" +log_info "After starting, verify:" +log_info " 1. Check charon logs: docker compose logs -f charon" +log_info " 2. Verify VC is running: docker compose logs -f $VC" +log_info " 3. Verify cluster is producing attestations" +echo "" +if [ "$UNVERIFIED" = true ]; then + log_warn "IMPORTANT: You used --unverified mode." + log_warn "Ensure CHARON_NO_VERIFY=true is set in your .env file for future restarts." + echo "" +fi +log_warn "Keep the backup until you've verified normal operation for several epochs." +echo "" + +echo "" +log_info "Next steps:" +log_info " 1. 
Wait for a threshold of operators to complete the workflow" +log_info " 2. Verify the new validators appear in the cluster" +log_info " 3. Generate deposit data for the new validators (in .charon/deposit-data.json)" +log_info " 4. Activate the new validators on the beacon chain" +echo "" + diff --git a/scripts/edit/recreate-private-keys/README.md b/scripts/edit/recreate-private-keys/README.md new file mode 100644 index 00000000..472ba0c8 --- /dev/null +++ b/scripts/edit/recreate-private-keys/README.md @@ -0,0 +1,61 @@ +# Recreate-Private-Keys Script + +Script to automate the [recreate-private-keys ceremony](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/recreate-private-keys) for Charon distributed validators. + +## Overview + +This script helps operators recreate validator private key shares while keeping the same validator public keys. This is useful for: + +- **Security concerns**: If private key shares may have been compromised +- **Key rotation**: As part of regular security practices +- **Recovery**: After a security incident to refresh key material + +**Important**: This operation maintains the same validator public keys, so validators remain registered on the beacon chain without any changes. Only the underlying private key shares held by operators are refreshed. + +> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use. + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- `.charon` directory with `cluster-lock.json` and `validator_keys` +- Docker running +- `jq` installed +- **All operators must participate in the ceremony** + +## Usage + +All operators must run this script simultaneously: + +```bash +./scripts/edit/recreate-private-keys/recreate-private-keys.sh +``` + +The script will: +1. 
Export the anti-slashing database from the validator client +2. Run the recreate-private-keys ceremony (P2P coordinated with all operators) +3. Update the ASDB pubkeys to match new key shares +4. Stop charon and VC containers +5. Backup current `.charon` directory to `./backups/` +6. Move new keys from `./output/` to `.charon/` +7. Import the updated anti-slashing database +8. Print start commands (wait ~2 epochs before starting) + +## Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +| Environment Variable | Description | +|----------------------|-------------| +| `WORK_DIR` | Override the repository root directory (defaults to auto-detected repo root) | + +## Related + +- [Add-Validators Workflow](../add-validators/README.md) +- [Add-Operators Workflow](../add-operators/README.md) +- [Remove-Operators Workflow](../remove-operators/README.md) +- [Replace-Operator Workflow](../replace-operator/README.md) +- [Anti-Slashing DB Scripts](../vc/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/recreate-private-keys) diff --git a/scripts/edit/recreate-private-keys/recreate-private-keys.sh b/scripts/edit/recreate-private-keys/recreate-private-keys.sh new file mode 100755 index 00000000..34f68109 --- /dev/null +++ b/scripts/edit/recreate-private-keys/recreate-private-keys.sh @@ -0,0 +1,295 @@ +#!/usr/bin/env bash + +# Recreate-Private-Keys Script - See README.md for documentation + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." 
&& pwd)}" +cd "$REPO_ROOT" + +# Default values +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" +ASDB_EXPORT_DIR="./asdb-export" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/recreate-private-keys/recreate-private-keys.sh [OPTIONS] + +Recreates validator private key shares for the cluster. This is a CEREMONY +that ALL operators must run simultaneously. + +Use cases: + - Security concerns: If private key shares may have been compromised + - Key rotation: As part of regular security practices + - Recovery: After a security incident to refresh key material + +NOTE: This operation maintains the same validator public keys. Only the +underlying private key shares held by operators are refreshed. 
+ +Options: + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + ./scripts/edit/recreate-private-keys/recreate-private-keys.sh + +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and validator_keys + - Docker and docker compose installed and running + - All operators must participate in the ceremony +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Recreate Private Keys Workflow ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! 
-f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -d .charon/validator_keys ]; then + log_error ".charon/validator_keys directory not found" + log_info "All operators must have their current validator private key shares." + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +echo "" + +# Step 1: Export anti-slashing database +log_step "Step 1: Exporting anti-slashing database..." + +# VC container must be stopped before export (Lodestar locks the database while running) +if [ "$DRY_RUN" = false ]; then + if docker compose ps --format '{{.Status}}' "$VC" 2>/dev/null | grep -qi running; then + log_info "Stopping VC container ($VC) for ASDB export..." + docker compose stop "$VC" + fi +else + log_warn "Would stop $VC container if running" +fi + +mkdir -p "$ASDB_EXPORT_DIR" + +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/export_asdb.sh" \ + --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" + +echo "" + +# Step 2: Run ceremony +log_step "Step 2: Running recreate-private-keys ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL operators must run this ceremony simultaneously║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit recreate-private-keys" +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all operators to connect..." 
+echo "" + +if [ "$DRY_RUN" = false ]; then + # Use -i for stdin (needed for ceremony coordination), skip -t if no TTY available + DOCKER_FLAGS="-i" + if [ -t 0 ]; then + DOCKER_FLAGS="-it" + fi + + docker run --rm $DOCKER_FLAGS \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.9.0-rc3}" \ + alpha edit recreate-private-keys \ + --output-dir=/opt/charon/output + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" + log_info "New cluster-lock.json generated in $OUTPUT_DIR/" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit recreate-private-keys --output-dir=output" +fi + +echo "" + +# Step 3: Update ASDB pubkeys +log_step "Step 3: Updating anti-slashing database pubkeys..." + +run_cmd "$SCRIPT_DIR/../vc/update-anti-slashing-db.sh" \ + "$ASDB_EXPORT_DIR/slashing-protection.json" \ + ".charon/cluster-lock.json" \ + "$OUTPUT_DIR/cluster-lock.json" + +log_info "Anti-slashing database pubkeys updated" + +echo "" + +# Step 4: Stop containers +log_step "Step 4: Stopping containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" + +# Step 5: Backup and replace .charon +log_step "Step 5: Backing up and replacing .charon directory..." + +TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info "Current .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New keys installed to .charon/" + +echo "" + +# Step 6: Import updated ASDB +log_step "Step 6: Importing updated anti-slashing database..." 
+ +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/import_asdb.sh" \ + --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database imported" + +echo "" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Recreate Private Keys Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old .charon backed up to: $BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info " - New keys installed in: .charon/" +log_info " - Anti-slashing database updated and imported" +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: Wait at least 2 epochs (~13 min) before starting ║" +log_warn "║ containers to avoid slashing risk from duplicate attestations ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "When ready, start containers with:" +echo " docker compose up -d charon $VC" +echo "" +log_info "After starting, verify:" +log_info " 1. Check charon logs: docker compose logs -f charon" +log_info " 2. Verify all nodes connected and healthy" +log_info " 3. Verify cluster is producing attestations" +log_info " 4. Check no signature verification errors in logs" +echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." +echo "" diff --git a/scripts/edit/remove-operators/README.md b/scripts/edit/remove-operators/README.md new file mode 100644 index 00000000..645f0667 --- /dev/null +++ b/scripts/edit/remove-operators/README.md @@ -0,0 +1,101 @@ +# Remove-Operators Scripts + +Scripts to automate the [remove-operators ceremony](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/remove-operators) for Charon distributed validators. + +## Overview + +These scripts help operators remove specific operators from an existing distributed validator cluster while preserving all validators. 
This is useful for: + +- **Operator offboarding**: Removing an operator who is leaving the cluster +- **Cluster downsizing**: Reducing the number of operators +- **Security response**: Removing a compromised operator + +**Important**: This is a coordinated ceremony. All participating operators must run their respective scripts simultaneously to complete the process. + +> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use. + +There are two scripts for the two roles involved: + +- **`remaining-operator.sh`** - For operators staying in the cluster +- **`removed-operator.sh`** - For operators being removed who need to participate (only required when removal exceeds fault tolerance) + +### Fault Tolerance + +The cluster's fault tolerance is `f = operators - threshold`. When removing more operators than `f`, removed operators must participate in the ceremony by running `removed-operator.sh` with the `--participating-operator-enrs` flag. + +When the removal is within fault tolerance, removed operators simply stop their nodes after the ceremony completes. + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- `.charon` directory with `cluster-lock.json` and `validator_keys` +- Docker running +- `jq` installed + +## For Remaining Operators + +Automates the complete workflow for operators staying in the cluster: + +```bash +./scripts/edit/remove-operators/remaining-operator.sh \ + --operator-enrs-to-remove "enr:-..." 
+``` + +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--operator-enrs-to-remove ` | Yes | Comma-separated ENRs of operators to remove | +| `--participating-operator-enrs ` | When exceeding fault tolerance | Comma-separated ENRs of all participating operators | +| `--new-threshold ` | No | Override default threshold (defaults to ceil(n * 2/3)) | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +| Environment Variable | Description | +|----------------------|-------------| +| `WORK_DIR` | Override the repository root directory (defaults to auto-detected repo root) | + +### Workflow + +1. **Export ASDB** - Stop VC if running and export anti-slashing database +2. **Run ceremony** - P2P coordinated remove-operators ceremony with all participants +3. **Update ASDB** - Replace pubkeys in exported ASDB to match new cluster-lock +4. **Stop containers** - Stop charon and VC +5. **Backup and replace** - Backup current `.charon/` to `./backups/`, install new configuration +6. **Import ASDB** - Import updated anti-slashing database +7. **Print start commands** - Display commands to start containers manually (wait ~2 epochs before starting) + +## For Removed Operators + +Only required when the removal exceeds the cluster's fault tolerance. In that case, removed operators must participate in the ceremony to provide their key shares. + +```bash +./scripts/edit/remove-operators/removed-operator.sh \ + --operator-enrs-to-remove "enr:-..." \ + --participating-operator-enrs "enr:-...,enr:-...,enr:-..." +``` + +If the removal is within fault tolerance, removed operators do **not** need to run this script - simply stop your node after the remaining operators complete the ceremony. 
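
The fault-tolerance rule above (`f = operators - threshold`) can be sanity-checked with a quick sketch. This is illustrative only: it hard-codes a 4-operator cluster and the default `ceil(n * 2/3)` threshold described in the options tables, rather than reading a real `cluster-lock.json`.

```bash
# Sketch: default threshold and fault tolerance for a 4-operator cluster.
# ceil(2n/3) is computed with integer arithmetic as (2n + 2) / 3.
n=4
t=$(( (2 * n + 2) / 3 ))  # default threshold: ceil(n * 2/3)
f=$(( n - t ))            # fault tolerance: f = operators - threshold
echo "operators=$n threshold=$t fault_tolerance=$f"
# prints: operators=4 threshold=3 fault_tolerance=1
```

With these numbers, removing one operator stays within fault tolerance (no `removed-operator.sh` needed), while removing two exceeds it and requires the removed operators to participate in the ceremony.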
+ +### Options + +| Option | Required | Description | +|--------|----------|-------------| +| `--operator-enrs-to-remove ` | Yes | Comma-separated ENRs of operators to remove | +| `--participating-operator-enrs ` | Yes | Comma-separated ENRs of ALL participating operators | +| `--new-threshold ` | No | Override default threshold (defaults to ceil(n * 2/3)) | +| `--dry-run` | No | Preview without executing | +| `-h, --help` | No | Show help message | + +| Environment Variable | Description | +|----------------------|-------------| +| `WORK_DIR` | Override the repository root directory (defaults to auto-detected repo root) | + +## Related + +- [Add-Validators Workflow](../add-validators/README.md) +- [Add-Operators Workflow](../add-operators/README.md) +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Replace-Operator Workflow](../replace-operator/README.md) +- [Anti-Slashing DB Scripts](../vc/README.md) +- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/remove-operators) diff --git a/scripts/edit/remove-operators/remaining-operator.sh b/scripts/edit/remove-operators/remaining-operator.sh new file mode 100755 index 00000000..0b99a395 --- /dev/null +++ b/scripts/edit/remove-operators/remaining-operator.sh @@ -0,0 +1,368 @@ +#!/usr/bin/env bash + +# Remove-Operators Script for REMAINING Operators - See README.md for documentation + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." 
&& pwd)}" +cd "$REPO_ROOT" + +# Default values +OPERATOR_ENRS_TO_REMOVE="" +PARTICIPATING_OPERATOR_ENRS="" +NEW_THRESHOLD="" +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" +ASDB_EXPORT_DIR="./asdb-export" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/remove-operators/remaining-operator.sh [OPTIONS] + +Automates the remove-operators ceremony for operators staying in the cluster. +All participating operators must run their respective scripts simultaneously. + +Options: + --operator-enrs-to-remove Comma-separated ENRs of operators to remove (required) + --participating-operator-enrs Comma-separated ENRs of participating operators + (required when removing beyond fault tolerance) + --new-threshold Override default threshold (defaults to ceil(n * 2/3)) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + # Remove one operator (within fault tolerance) + ./scripts/edit/remove-operators/remaining-operator.sh \ + --operator-enrs-to-remove "enr:-..." + + # Remove operators beyond fault tolerance (must specify participants) + ./scripts/edit/remove-operators/remaining-operator.sh \ + --operator-enrs-to-remove "enr:-...,enr:-..." \ + --participating-operator-enrs "enr:-...,enr:-...,enr:-..." + + # Remove operator with custom threshold + ./scripts/edit/remove-operators/remaining-operator.sh \ + --operator-enrs-to-remove "enr:-..." 
\ + --new-threshold 3 + +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and validator_keys + - Docker and docker compose installed and running + - VC container running (for ASDB export) + - All participating operators must run the ceremony +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --operator-enrs-to-remove) + OPERATOR_ENRS_TO_REMOVE="$2" + shift 2 + ;; + --participating-operator-enrs) + PARTICIPATING_OPERATOR_ENRS="$2" + shift 2 + ;; + --new-threshold) + NEW_THRESHOLD="$2" + shift 2 + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$OPERATOR_ENRS_TO_REMOVE" ]; then + log_error "Missing required argument: --operator-enrs-to-remove" + echo "Use --help for usage information" + exit 1 +fi + +# Validate new-threshold is a positive integer if provided +if [ -n "$NEW_THRESHOLD" ] && ! [[ "$NEW_THRESHOLD" =~ ^[1-9][0-9]*$ ]]; then + log_error "Invalid --new-threshold: must be a positive integer" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Remove-Operators Workflow - REMAINING OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." 
+ exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -d .charon/validator_keys ]; then + log_error ".charon/validator_keys directory not found" + log_info "All remaining operators must have their current validator private key shares." + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" +log_info " Operators to remove: ${OPERATOR_ENRS_TO_REMOVE:0:80}..." + +if [ -n "$PARTICIPATING_OPERATOR_ENRS" ]; then + log_info " Participating operators: ${PARTICIPATING_OPERATOR_ENRS:0:80}..." 
+fi +if [ -n "$NEW_THRESHOLD" ]; then + log_info " New threshold: $NEW_THRESHOLD" +fi + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +# Show current cluster info +if [ -f .charon/cluster-lock.json ]; then + CURRENT_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + CURRENT_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info " Current cluster: $CURRENT_VALIDATORS validator(s), $CURRENT_OPERATORS operator(s)" +fi + +echo "" + +# Step 1: Export anti-slashing database +log_step "Step 1: Exporting anti-slashing database..." + +# VC container must be stopped before export (Lodestar locks the database while running) +if [ "$DRY_RUN" = false ]; then + if docker compose ps --format '{{.Status}}' "$VC" 2>/dev/null | grep -qi running; then + log_info "Stopping VC container ($VC) for ASDB export..." + docker compose stop "$VC" + fi +else + log_warn "Would stop $VC container if running" +fi + +mkdir -p "$ASDB_EXPORT_DIR" + +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/export_asdb.sh" \ + --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" + +echo "" + +# Step 2: Run ceremony +log_step "Step 2: Running remove-operators ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL participating operators must run simultaneously║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit remove-operators" +log_info " Operators to remove: ${OPERATOR_ENRS_TO_REMOVE:0:80}..." +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all participants to connect..." 
+echo "" + +if [ "$DRY_RUN" = false ]; then + # Use -i for stdin (needed for ceremony coordination), skip -t if no TTY available + DOCKER_FLAGS="-i" + if [ -t 0 ]; then + DOCKER_FLAGS="-it" + fi + + # Build Docker command arguments + DOCKER_ARGS=( + run --rm $DOCKER_FLAGS + -v "$REPO_ROOT/.charon:/opt/charon/.charon" + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" + "obolnetwork/charon:${CHARON_VERSION:-v1.9.0-rc3}" + alpha edit remove-operators + --operator-enrs-to-remove="$OPERATOR_ENRS_TO_REMOVE" + --output-dir=/opt/charon/output + ) + + if [ -n "$PARTICIPATING_OPERATOR_ENRS" ]; then + DOCKER_ARGS+=(--participating-operator-enrs="$PARTICIPATING_OPERATOR_ENRS") + fi + + if [ -n "$NEW_THRESHOLD" ]; then + DOCKER_ARGS+=(--new-threshold="$NEW_THRESHOLD") + fi + + docker "${DOCKER_ARGS[@]}" + + # Verify ceremony output + if [ -f "$OUTPUT_DIR/cluster-lock.json" ]; then + log_info "Ceremony completed successfully!" + NEW_VALIDATORS=$(jq '.distributed_validators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + NEW_OPERATORS=$(jq '.operators | length' "$OUTPUT_DIR/cluster-lock.json" 2>/dev/null || echo "?") + log_info "New cluster-lock.json generated with $NEW_VALIDATORS validator(s), $NEW_OPERATORS operator(s)" + else + log_error "Ceremony may have failed - no cluster-lock.json in $OUTPUT_DIR/" + exit 1 + fi +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit remove-operators --operator-enrs-to-remove=... --output-dir=$OUTPUT_DIR" +fi + +echo "" + +# Step 3: Update ASDB pubkeys +log_step "Step 3: Updating anti-slashing database pubkeys..." + +run_cmd "$SCRIPT_DIR/../vc/update-anti-slashing-db.sh" \ + "$ASDB_EXPORT_DIR/slashing-protection.json" \ + ".charon/cluster-lock.json" \ + "$OUTPUT_DIR/cluster-lock.json" + +log_info "Anti-slashing database pubkeys updated" + +echo "" + +# Step 4: Stop containers +log_step "Step 4: Stopping containers..." 
+ +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" + +# Step 5: Backup and replace .charon +log_step "Step 5: Backing up and replacing .charon directory..." + +TIMESTAMP=$(date +%Y%m%d_%H%M%S) +mkdir -p "$BACKUP_DIR" + +run_cmd mv .charon "$BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info "Current .charon backed up to $BACKUP_DIR/.charon-backup.$TIMESTAMP" + +run_cmd mv "$OUTPUT_DIR" .charon +log_info "New cluster configuration installed to .charon/" + +echo "" + +# Step 6: Import updated ASDB +log_step "Step 6: Importing updated anti-slashing database..." + +VC="$VC" run_cmd "$SCRIPT_DIR/../vc/import_asdb.sh" \ + --input-file "$ASDB_EXPORT_DIR/slashing-protection.json" + +log_info "Anti-slashing database imported" + +echo "" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Remove-Operators Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Old .charon backed up to: $BACKUP_DIR/.charon-backup.$TIMESTAMP" +log_info " - New cluster configuration installed in: .charon/" +log_info " - Anti-slashing database updated and imported" +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: Wait at least 2 epochs (~13 min) before starting ║" +log_warn "║ containers to avoid slashing risk from duplicate attestations ║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "When ready, start containers with:" +echo " docker compose up -d charon $VC" +echo "" +log_info "After starting, verify:" +log_info " 1. Check charon logs: docker compose logs -f charon" +log_info " 2. Verify all remaining nodes connected and healthy" +log_info " 3. Verify cluster is producing attestations" +log_info " 4. 
Confirm removed operators have stopped their nodes" +echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." +echo "" +log_info "Current limitations:" +log_info " - The new configuration will not be reflected on the Obol Launchpad" +log_info " - The cluster will have a new cluster hash (different observability ID)" +echo "" diff --git a/scripts/edit/remove-operators/removed-operator.sh b/scripts/edit/remove-operators/removed-operator.sh new file mode 100755 index 00000000..37b85a2f --- /dev/null +++ b/scripts/edit/remove-operators/removed-operator.sh @@ -0,0 +1,284 @@ +#!/usr/bin/env bash + +# Remove-Operators Script for REMOVED Operators - See README.md for documentation + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" +cd "$REPO_ROOT" + +# Default values +OPERATOR_ENRS_TO_REMOVE="" +PARTICIPATING_OPERATOR_ENRS="" +NEW_THRESHOLD="" +DRY_RUN=false + +# Output directories +OUTPUT_DIR="./output" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/remove-operators/removed-operator.sh [OPTIONS] + +Helps removed operators participate in the remove-operators ceremony. +This is only required when the removal exceeds the cluster's fault tolerance. + +If the removal is within fault tolerance, removed operators do NOT need to +run this script - simply stop your node after the remaining operators complete +the ceremony. 
+ +Options: + --operator-enrs-to-remove Comma-separated ENRs of operators to remove (required) + --participating-operator-enrs Comma-separated ENRs of ALL participating operators (required) + --new-threshold Override default threshold (defaults to ceil(n * 2/3)) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + ./scripts/edit/remove-operators/removed-operator.sh \ + --operator-enrs-to-remove "enr:-..." \ + --participating-operator-enrs "enr:-...,enr:-...,enr:-..." + +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json, charon-enr-private-key, and validator_keys + - Docker and docker compose installed and running + - Your ENR must be listed in --participating-operator-enrs +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --operator-enrs-to-remove) + OPERATOR_ENRS_TO_REMOVE="$2" + shift 2 + ;; + --participating-operator-enrs) + PARTICIPATING_OPERATOR_ENRS="$2" + shift 2 + ;; + --new-threshold) + NEW_THRESHOLD="$2" + shift 2 + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$OPERATOR_ENRS_TO_REMOVE" ]; then + log_error "Missing required argument: --operator-enrs-to-remove" + echo "Use --help for usage information" + exit 1 +fi + +if [ -z "$PARTICIPATING_OPERATOR_ENRS" ]; then + log_error "Missing required argument: --participating-operator-enrs" + echo "Use --help for usage information" + exit 1 +fi + +# Validate new-threshold is a positive integer if provided +if [ -n "$NEW_THRESHOLD" ] && ! 
[[ "$NEW_THRESHOLD" =~ ^[1-9][0-9]*$ ]]; then + log_error "Invalid --new-threshold: must be a positive integer" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Remove-Operators Workflow - REMOVED OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -f .charon/charon-enr-private-key ]; then + log_error ".charon/charon-enr-private-key not found" + exit 1 +fi + +if [ ! -d .charon/validator_keys ]; then + log_error ".charon/validator_keys directory not found" + exit 1 +fi + +if ! 
docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" +log_info " Operators to remove: ${OPERATOR_ENRS_TO_REMOVE:0:80}..." +log_info " Participating operators: ${PARTICIPATING_OPERATOR_ENRS:0:80}..." + +if [ -n "$NEW_THRESHOLD" ]; then + log_info " New threshold: $NEW_THRESHOLD" +fi + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +# Show current cluster info +if [ -f .charon/cluster-lock.json ]; then + CURRENT_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + CURRENT_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?") + log_info " Current cluster: $CURRENT_VALIDATORS validator(s), $CURRENT_OPERATORS operator(s)" +fi + +echo "" + +# Step 1: Run ceremony +log_step "Step 1: Running remove-operators ceremony..." + +echo "" +log_warn "╔════════════════════════════════════════════════════════════════╗" +log_warn "║ IMPORTANT: ALL participating operators must run simultaneously║" +log_warn "╚════════════════════════════════════════════════════════════════╝" +echo "" + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit remove-operators (as removed operator)" +log_info " Operators to remove: ${OPERATOR_ENRS_TO_REMOVE:0:80}..." +log_info " Output directory: $OUTPUT_DIR" +log_info "" +log_info "The ceremony will coordinate with other operators via P2P relay." +log_info "Please wait for all participants to connect..." 
+echo "" + +if [ "$DRY_RUN" = false ]; then + # Use -i for stdin (needed for ceremony coordination), skip -t if no TTY available + DOCKER_FLAGS="-i" + if [ -t 0 ]; then + DOCKER_FLAGS="-it" + fi + + # Build Docker command arguments + DOCKER_ARGS=( + run --rm $DOCKER_FLAGS + -v "$REPO_ROOT/.charon:/opt/charon/.charon" + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" + "obolnetwork/charon:${CHARON_VERSION:-v1.9.0-rc3}" + alpha edit remove-operators + --operator-enrs-to-remove="$OPERATOR_ENRS_TO_REMOVE" + --participating-operator-enrs="$PARTICIPATING_OPERATOR_ENRS" + --private-key-file=/opt/charon/.charon/charon-enr-private-key + --lock-file=/opt/charon/.charon/cluster-lock.json + --validator-keys-dir=/opt/charon/.charon/validator_keys + --output-dir=/opt/charon/output + ) + + if [ -n "$NEW_THRESHOLD" ]; then + DOCKER_ARGS+=(--new-threshold="$NEW_THRESHOLD") + fi + + docker "${DOCKER_ARGS[@]}" + + log_info "Ceremony completed successfully!" +else + echo " [DRY-RUN] docker run --rm -it ... charon alpha edit remove-operators --operator-enrs-to-remove=... --participating-operator-enrs=... --output-dir=$OUTPUT_DIR" +fi + +echo "" + +# Step 2: Stop containers +log_step "Step 2: Stopping containers..." + +run_cmd docker compose stop "$VC" charon + +log_info "Containers stopped" + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Removed Operator Workflow COMPLETED ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" +log_info "Summary:" +log_info " - Ceremony participation completed" +log_info " - Containers stopped: charon, $VC" +echo "" +log_warn "You have been removed from the cluster." +log_warn "Your node no longer needs to run for this cluster." +echo "" +log_info "Next steps:" +log_info " 1. Confirm with remaining operators that the ceremony succeeded" +log_info " 2. Optionally clean up cluster data: rm -rf .charon data/" +log_info " 3. 
Optionally remove Docker resources: docker compose down -v" +echo "" diff --git a/scripts/edit/replace-operator/README.md b/scripts/edit/replace-operator/README.md new file mode 100644 index 00000000..205a11ea --- /dev/null +++ b/scripts/edit/replace-operator/README.md @@ -0,0 +1,106 @@ +# Replace-Operator Scripts + +Scripts to automate the [replace-operator workflow](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/replace-operator) for Charon distributed validators. + +## Overview + +These scripts help operators replace a single operator in an existing distributed validator cluster. This is useful for: + +- **Operator rotation**: Replacing an operator who is leaving the cluster +- **Infrastructure migration**: Moving an operator to new infrastructure +- **Recovery**: Replacing an operator whose keys may have been compromised + +> **Warning**: This is an alpha feature in Charon and is not yet recommended for production use. + +There are two scripts for the two roles involved: + +- **`remaining-operator.sh`** - For operators staying in the cluster +- **`new-operator.sh`** - For the new operator joining the cluster + +**Important**: All participating operators (remaining + new) run the `charon alpha edit replace-operator` ceremony together. The new operator must receive the current `cluster-lock.json` BEFORE the ceremony begins. + +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- `.charon` directory with `cluster-lock.json` and `charon-enr-private-key` +- Docker running +- `jq` installed + +## For Remaining Operators + +Automates the complete workflow for operators staying in the cluster: + +```bash +./scripts/edit/replace-operator/remaining-operator.sh \ + --new-enr "enr:-..." \ + --old-enr "enr:-..." +``` + +**Before running**: Share your current `cluster-lock.json` with the new operator so it can participate in the ceremony. 
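
Before the ceremony, it can also help to double-check which ENRs are currently registered in the lock. A minimal sketch, assuming the standard `cluster-lock.json` layout with an `.operators[].enr` field; the sample file below is a hypothetical stand-in for a real lock:

```shell
# Hypothetical minimal lock file standing in for a real cluster-lock.json
cat > /tmp/sample-cluster-lock.json <<'EOF'
{"operators":[{"enr":"enr:-AAA"},{"enr":"enr:-BBB"},{"enr":"enr:-CCC"}]}
EOF

# List the operator ENRs currently in the lock
jq -r '.operators[].enr' /tmp/sample-cluster-lock.json

# Count operators, mirroring the checks the scripts themselves perform
jq '.operators | length' /tmp/sample-cluster-lock.json
```

Against a real cluster, point the queries at `.charon/cluster-lock.json` and confirm that `--old-enr` matches one of the listed entries before starting.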
+
+### Options
+
+| Option | Required | Description |
+|--------|----------|-------------|
+| `--new-enr <enr>` | Yes | ENR of the new operator |
+| `--old-enr <enr>` | Yes | ENR of the operator being replaced |
+| `--skip-export` | No | Skip ASDB export if already done |
+| `--dry-run` | No | Preview without executing |
+| `-h, --help` | No | Show help message |
+
+| Environment Variable | Description |
+|----------------------|-------------|
+| `WORK_DIR` | Override the repository root directory (defaults to auto-detected repo root) |
+
+### Workflow
+
+1. **Export ASDB** - Stop VC if running and export anti-slashing database
+2. **Run ceremony** - Execute `charon alpha edit replace-operator` with new ENR
+3. **Update ASDB** - Replace pubkeys in exported ASDB to match new cluster-lock
+4. **Stop containers** - Stop charon and VC
+5. **Backup and replace** - Backup old cluster-lock, install new one
+6. **Import ASDB** - Import updated anti-slashing database
+7. **Print start commands** - Display commands to start containers manually (wait ~2 epochs before starting)
+
+## For New Operators
+
+Two-step workflow for the new operator joining the cluster.
+
+**Step 1:** Generate ENR and share with remaining operators:
+
+```bash
+./scripts/edit/replace-operator/new-operator.sh --generate-enr
+```
+
+**Step 2:** After receiving `cluster-lock.json` from remaining operators, run the ceremony together with all other operators:
+
+```bash
+./scripts/edit/replace-operator/new-operator.sh \
+  --cluster-lock ./received-cluster-lock.json \
+  --old-enr "enr:-..."
+```
+
+After the ceremony completes, the script automatically backs up the old `.charon` directory and installs the new configuration from the output directory.
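
Once the new configuration is installed, a quick grep can confirm that the new operator's ENR landed in the lock file, which is essentially the verification step the script performs. A minimal sketch; the sample lock file and `MY_ENR` value below are hypothetical stand-ins for the real `.charon/cluster-lock.json` and your actual ENR:

```shell
# Hypothetical new lock file; in practice this is .charon/cluster-lock.json
cat > /tmp/new-cluster-lock.json <<'EOF'
{"operators":[{"enr":"enr:-NEWOP"},{"enr":"enr:-REMAINING"}]}
EOF

MY_ENR="enr:-NEWOP"

# Same kind of check the script runs after installing the new configuration
if grep -q "$MY_ENR" /tmp/new-cluster-lock.json; then
  echo "ENR present in new cluster-lock"
else
  echo "ENR missing - verify the ceremony output" >&2
fi
```

Against a real cluster, substitute your own ENR and the installed `.charon/cluster-lock.json` path before trusting the result.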
+
+### Options
+
+| Option | Required | Description |
+|--------|----------|-------------|
+| `--cluster-lock <path>` | No | Path to cluster-lock.json (for ceremony) |
+| `--old-enr <enr>` | No | ENR of the operator being replaced (for ceremony) |
+| `--generate-enr` | No | Generate new ENR private key |
+| `--dry-run` | No | Preview without executing |
+| `-h, --help` | No | Show help message |
+
+| Environment Variable | Description |
+|----------------------|-------------|
+| `WORK_DIR` | Override the repository root directory (defaults to auto-detected repo root) |
+
+## Related
+
+- [Add-Validators Workflow](../add-validators/README.md)
+- [Add-Operators Workflow](../add-operators/README.md)
+- [Remove-Operators Workflow](../remove-operators/README.md)
+- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md)
+- [Anti-Slashing DB Scripts](../vc/README.md)
+- [Obol Documentation](https://docs.obol.org/next/advanced-and-troubleshooting/advanced/replace-operator)
diff --git a/scripts/edit/replace-operator/new-operator.sh b/scripts/edit/replace-operator/new-operator.sh
new file mode 100755
index 00000000..2b95ca43
--- /dev/null
+++ b/scripts/edit/replace-operator/new-operator.sh
@@ -0,0 +1,364 @@
+#!/usr/bin/env bash
+
+# Replace-Operator Script for NEW Operator - See README.md for documentation
+# The new operator participates in the ceremony together with remaining operators.
+# Both run the same `charon alpha edit replace-operator` command.
+
+set -euo pipefail
+
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.."
&& pwd)}" +cd "$REPO_ROOT" + +# Default values +CLUSTER_LOCK_PATH="" +OLD_ENR="" +GENERATE_ENR=false +DRY_RUN=false + +# Output directories +BACKUP_DIR="./backups" +OUTPUT_DIR="./output" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/replace-operator/new-operator.sh [OPTIONS] + +Helps a new operator join an existing cluster by participating in the +replace-operator ceremony together with the remaining operators. +Both remaining and new operators run the same ceremony command. + +Options: + --cluster-lock Path to the current cluster-lock.json (for ceremony) + --old-enr ENR of the operator being replaced (for ceremony) + --generate-enr Generate a new ENR private key if not present + --dry-run Show what would be done without executing + -h, --help Show this help message + +Examples: + # Step 1: Generate ENR and share with remaining operators + ./scripts/edit/replace-operator/new-operator.sh --generate-enr + + # Step 2: Run ceremony (after receiving cluster-lock from remaining operators) + ./scripts/edit/replace-operator/new-operator.sh \ + --cluster-lock ./received-cluster-lock.json \ + --old-enr "enr:-..." 
+ +Prerequisites: + - .env file with NETWORK and VC variables set + - For --generate-enr: Docker installed + - For ceremony: .charon/charon-enr-private-key must exist +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --cluster-lock) + CLUSTER_LOCK_PATH="$2" + shift 2 + ;; + --old-enr) + OLD_ENR="$2" + shift 2 + ;; + --generate-enr) + GENERATE_ENR=true + shift + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Replace-Operator Workflow - NEW OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." + exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if ! 
docker info >/dev/null 2>&1; then
+ log_error "Docker is not running"
+ exit 1
+fi
+
+log_info "Prerequisites OK"
+log_info " Network: $NETWORK"
+log_info " Validator Client: $VC"
+
+if [ "$DRY_RUN" = true ]; then
+ log_warn "DRY-RUN MODE: No changes will be made"
+fi
+
+echo ""
+
+# Step 1: Handle ENR generation
+if [ "$GENERATE_ENR" = true ]; then
+ log_step "Step 1: Generating ENR private key..."
+
+ if [ -f .charon/charon-enr-private-key ]; then
+ log_warn "ENR private key already exists at .charon/charon-enr-private-key"
+ log_warn "Skipping generation to avoid overwriting existing key."
+ log_info "If you want to generate a new key, remove the existing file first."
+ else
+ mkdir -p .charon
+
+ if [ "$DRY_RUN" = false ]; then
+ docker run --rm \
+ -v "$REPO_ROOT/.charon:/opt/charon/.charon" \
+ "obolnetwork/charon:${CHARON_VERSION:-v1.9.0-rc3}" \
+ create enr
+ else
+ echo " [DRY-RUN] docker run --rm ... charon create enr"
+ fi
+
+ log_info "ENR private key generated"
+ fi
+
+ if [ -f .charon/charon-enr-private-key ]; then
+ echo ""
+ log_warn "╔════════════════════════════════════════════════════════════════╗"
+ log_warn "║ SHARE YOUR ENR WITH THE REMAINING OPERATORS ║"
+ log_warn "╚════════════════════════════════════════════════════════════════╝"
+ echo ""
+
+ # Extract and display the ENR
+ if [ -f .charon/charon-enr-private-key ]; then
+ ENR=$(docker run --rm \
+ -v "$REPO_ROOT/.charon:/opt/charon/.charon" \
+ "obolnetwork/charon:${CHARON_VERSION:-v1.9.0-rc3}" \
+ enr 2>/dev/null || echo "")
+
+ if [ -n "$ENR" ]; then
+ log_info "Your ENR:"
+ echo ""
+ echo "$ENR"
+ echo ""
+ fi
+ fi
+
+ log_info "Send this ENR to the remaining operators."
+ log_info "They will use it with: --new-enr \"<your ENR>\""
+ log_info ""
+ log_info "Ask them to share the current cluster-lock.json with you BEFORE the ceremony."
+ log_info ""
+ log_info "Then run the ceremony together with remaining operators using:"
+ log_info " ./scripts/edit/replace-operator/new-operator.sh --cluster-lock <path> --old-enr <enr>"
+ fi
+
+ exit 0
+fi
+
+# Ceremony mode: --cluster-lock + --old-enr
+if [ -n "$CLUSTER_LOCK_PATH" ] && [ -n "$OLD_ENR" ]; then
+ log_step "Step 1: Checking prerequisites..."
+
+ if [ "$DRY_RUN" = false ]; then
+ if [ ! -d .charon ]; then
+ log_error ".charon directory not found"
+ log_info "First generate your ENR with: ./scripts/edit/replace-operator/new-operator.sh --generate-enr"
+ exit 1
+ fi
+
+ if [ ! -f .charon/charon-enr-private-key ]; then
+ log_error ".charon/charon-enr-private-key not found"
+ log_info "First generate your ENR with: ./scripts/edit/replace-operator/new-operator.sh --generate-enr"
+ exit 1
+ fi
+
+ if [ ! -f "$CLUSTER_LOCK_PATH" ]; then
+ log_error "Cluster-lock file not found: $CLUSTER_LOCK_PATH"
+ exit 1
+ fi
+ fi
+
+ log_info "Prerequisites OK"
+ log_info " Using cluster-lock: $CLUSTER_LOCK_PATH"
+
+ # Get our own ENR
+ OUR_ENR=$(docker run --rm \
+ -v "$REPO_ROOT/.charon:/opt/charon/.charon" \
+ "obolnetwork/charon:${CHARON_VERSION:-v1.9.0-rc3}" \
+ enr 2>/dev/null || echo "")
+
+ if [ -n "$OUR_ENR" ]; then
+ log_info " Our ENR: ${OUR_ENR:0:50}..."
+ fi
+ log_info " Old ENR: ${OLD_ENR:0:50}..."
+
+ echo ""
+
+ # Step 2: Copy cluster-lock to .charon for ceremony
+ log_step "Step 2: Preparing for ceremony..."
+
+ mkdir -p .charon
+ if [ "$DRY_RUN" = false ]; then
+ cp "$CLUSTER_LOCK_PATH" .charon/cluster-lock.json
+ log_info "Cluster-lock copied to .charon/"
+ else
+ echo " [DRY-RUN] cp $CLUSTER_LOCK_PATH .charon/cluster-lock.json"
+ fi
+
+ echo ""
+
+ # Step 3: Run ceremony
+ log_step "Step 3: Running replace-operator ceremony..."
+ log_warn "This requires ALL operators (remaining + you) to run the ceremony simultaneously."
+ + mkdir -p "$OUTPUT_DIR" + + if [ "$DRY_RUN" = false ]; then + # Use -i for stdin (needed for ceremony coordination), skip -t if no TTY available + DOCKER_FLAGS="-i" + if [ -t 0 ]; then + DOCKER_FLAGS="-it" + fi + + docker run --rm $DOCKER_FLAGS \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.9.0-rc3}" \ + alpha edit replace-operator \ + --lock-file=/opt/charon/.charon/cluster-lock.json \ + --output-dir=/opt/charon/output \ + --old-operator-enr="$OLD_ENR" \ + --new-operator-enr="$OUR_ENR" + else + echo " [DRY-RUN] docker run --rm ... charon alpha edit replace-operator ..." + fi + + log_info "Ceremony completed successfully" + + echo "" + + # Step 4: Backup and install new .charon directory + log_step "Step 4: Installing new cluster configuration..." + + TIMESTAMP=$(date +%Y%m%d_%H%M%S) + mkdir -p "$BACKUP_DIR" + + run_cmd mv .charon "$BACKUP_DIR/.charon.$TIMESTAMP" + log_info "Old .charon backed up to $BACKUP_DIR/.charon.$TIMESTAMP" + + run_cmd mv "$OUTPUT_DIR" .charon + log_info "New configuration installed to .charon/" + + # Verify our ENR is in the new cluster-lock + if [ "$DRY_RUN" = false ] && [ -f .charon/cluster-lock.json ]; then + if grep -q "${OUR_ENR:0:50}" .charon/cluster-lock.json 2>/dev/null; then + log_info "Verified: Your ENR is present in the new cluster-lock" + else + log_warn "Your ENR may not be in this cluster-lock." + log_warn "Please verify the ceremony completed successfully." 
+ fi
+
+ # Show cluster info
+ NUM_VALIDATORS=$(jq '.distributed_validators | length' .charon/cluster-lock.json 2>/dev/null || echo "?")
+ NUM_OPERATORS=$(jq '.operators | length' .charon/cluster-lock.json 2>/dev/null || echo "?")
+ log_info "Cluster info: $NUM_VALIDATORS validator(s), $NUM_OPERATORS operator(s)"
+ fi
+
+ echo ""
+
+ echo ""
+ echo "╔════════════════════════════════════════════════════════════════╗"
+ echo "║ Replace-Operator Workflow COMPLETED ║"
+ echo "╚════════════════════════════════════════════════════════════════╝"
+ echo ""
+ log_info "Summary:"
+ log_info " - Old .charon backed up to: $BACKUP_DIR/.charon.$TIMESTAMP"
+ log_info " - New configuration installed to: .charon/"
+ echo ""
+ log_warn "╔════════════════════════════════════════════════════════════════╗"
+ log_warn "║ IMPORTANT: Wait at least 2 epochs (~13 min) before starting ║"
+ log_warn "║ containers to avoid slashing risk from duplicate attestations ║"
+ log_warn "╚════════════════════════════════════════════════════════════════╝"
+ echo ""
+ log_info "When ready, start containers with:"
+ echo " docker compose up -d charon $VC"
+ echo ""
+ log_info "After starting, verify:"
+ log_info " 1. Check charon logs: docker compose logs -f charon"
+ log_info " 2. Verify VC is running: docker compose logs -f $VC"
+ log_info " 3. Monitor validator duties once synced"
+ echo ""
+ log_warn "Note: As a new operator, you do NOT have any slashing protection history."
+ log_warn "Your VC will start fresh."
+ echo ""
+ log_warn "Keep the backup until you've verified normal operation for several epochs."
+ echo ""
+
+ exit 0
+fi
+
+# Error: missing required arguments
+log_error "Missing required arguments."
+echo ""
+echo "To generate ENR: --generate-enr"
+echo "To run ceremony: --cluster-lock <path> --old-enr <enr>"
+echo ""
+echo "Use --help for full usage information."
+exit 1 diff --git a/scripts/edit/replace-operator/remaining-operator.sh b/scripts/edit/replace-operator/remaining-operator.sh new file mode 100755 index 00000000..b86ef38b --- /dev/null +++ b/scripts/edit/replace-operator/remaining-operator.sh @@ -0,0 +1,315 @@ +#!/usr/bin/env bash + +# Replace-Operator Script for REMAINING Operators - See README.md for documentation + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="${WORK_DIR:-$(cd "$SCRIPT_DIR/../../.." && pwd)}" +cd "$REPO_ROOT" + +# Default values +NEW_ENR="" +OLD_ENR="" +SKIP_EXPORT=false +DRY_RUN=false + +# Output directories +ASDB_EXPORT_DIR="./asdb-export" +OUTPUT_DIR="./output" +BACKUP_DIR="./backups" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } +log_step() { echo -e "${BLUE}[STEP]${NC} $1"; } + +usage() { + cat << 'EOF' +Usage: ./scripts/edit/replace-operator/remaining-operator.sh [OPTIONS] + +Automates the complete replace-operator workflow for operators +who are staying in the cluster (continuing operators). + +Options: + --new-enr ENR of the new operator (required) + --old-enr ENR of the operator being replaced (required) + --skip-export Skip ASDB export (if already exported) + --dry-run Show what would be done without executing + -h, --help Show this help message + +Example: + ./scripts/edit/replace-operator/remaining-operator.sh \ + --new-enr "enr:-..." \ + --old-enr "enr:-..." 
+ +Prerequisites: + - .env file with NETWORK and VC variables set + - .charon directory with cluster-lock.json and charon-enr-private-key + - Docker and docker compose installed and running + - VC container running (for initial export) +EOF + exit 0 +} + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --new-enr) + NEW_ENR="$2" + shift 2 + ;; + --old-enr) + OLD_ENR="$2" + shift 2 + ;; + --skip-export) + SKIP_EXPORT=true + shift + ;; + --dry-run) + DRY_RUN=true + shift + ;; + -h|--help) + usage + ;; + *) + log_error "Unknown argument: $1" + echo "Use --help for usage information" + exit 1 + ;; + esac +done + +# Validate required arguments +if [ -z "$NEW_ENR" ]; then + log_error "Missing required argument: --new-enr" + echo "Use --help for usage information" + exit 1 +fi +if [ -z "$OLD_ENR" ]; then + log_error "Missing required argument: --old-enr" + echo "Use --help for usage information" + exit 1 +fi + +run_cmd() { + if [ "$DRY_RUN" = true ]; then + echo " [DRY-RUN] $*" + else + "$@" + fi +} + +echo "" +echo "╔════════════════════════════════════════════════════════════════╗" +echo "║ Replace-Operator Workflow - REMAINING OPERATOR ║" +echo "╚════════════════════════════════════════════════════════════════╝" +echo "" + +# Step 0: Check prerequisites +log_step "Step 0: Checking prerequisites..." + +if [ ! -f .env ]; then + log_error ".env file not found. Please create one with NETWORK and VC variables." 
+ exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +if [ -z "${NETWORK:-}" ]; then + log_error "NETWORK variable not set in .env" + exit 1 +fi + +if [ -z "${VC:-}" ]; then + log_error "VC variable not set in .env (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus)" + exit 1 +fi + +if [ ! -d .charon ]; then + log_error ".charon directory not found" + exit 1 +fi + +if [ ! -f .charon/cluster-lock.json ]; then + log_error ".charon/cluster-lock.json not found" + exit 1 +fi + +if [ ! -f .charon/charon-enr-private-key ]; then + log_error ".charon/charon-enr-private-key not found" + exit 1 +fi + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +log_info "Prerequisites OK" +log_info " Network: $NETWORK" +log_info " Validator Client: $VC" + +if [ "$DRY_RUN" = true ]; then + log_warn "DRY-RUN MODE: No changes will be made" +fi + +echo "" + +# Step 1: Export anti-slashing database +log_step "Step 1: Exporting anti-slashing database..." + +if [ "$SKIP_EXPORT" = true ]; then + log_warn "Skipping export (--skip-export specified)" + if [ ! -f "$ASDB_EXPORT_DIR/slashing-protection.json" ]; then + log_error "Cannot skip export: $ASDB_EXPORT_DIR/slashing-protection.json not found" + exit 1 + fi +else + # VC container must be stopped before export (Lodestar locks the database while running) + if [ "$DRY_RUN" = false ]; then + if docker compose ps --format '{{.Status}}' "$VC" 2>/dev/null | grep -qi running; then + log_info "Stopping VC container ($VC) for ASDB export..." 
+ docker compose stop "$VC" + fi + else + log_warn "Would stop $VC container if running" + fi + + mkdir -p "$ASDB_EXPORT_DIR" + + VC="$VC" run_cmd "$SCRIPT_DIR/../vc/export_asdb.sh" \ + --output-file "$ASDB_EXPORT_DIR/slashing-protection.json" + + log_info "Anti-slashing database exported to $ASDB_EXPORT_DIR/slashing-protection.json" +fi + +echo "" + +# Step 2: Run replace-operator ceremony +log_step "Step 2: Running replace-operator ceremony..." +log_warn "Ensure the new operator has received the current cluster-lock.json BEFORE starting." +log_warn "All operators (remaining + new) must run the ceremony together." + +mkdir -p "$OUTPUT_DIR" + +log_info "Running: charon alpha edit replace-operator" +log_info " Old ENR: ${OLD_ENR:0:50}..." +log_info " New ENR: ${NEW_ENR:0:50}..." + +if [ "$DRY_RUN" = false ]; then + # Use -i for stdin (needed for ceremony coordination), skip -t if no TTY available + DOCKER_FLAGS="-i" + if [ -t 0 ]; then + DOCKER_FLAGS="-it" + fi + + docker run --rm $DOCKER_FLAGS \ + -v "$REPO_ROOT/.charon:/opt/charon/.charon" \ + -v "$REPO_ROOT/$OUTPUT_DIR:/opt/charon/output" \ + "obolnetwork/charon:${CHARON_VERSION:-v1.9.0-rc3}" \ + alpha edit replace-operator \ + --lock-file=/opt/charon/.charon/cluster-lock.json \ + --output-dir=/opt/charon/output \ + --old-operator-enr="$OLD_ENR" \ + --new-operator-enr="$NEW_ENR" +else + echo " [DRY-RUN] docker run --rm ... charon alpha edit replace-operator ..." +fi + +log_info "New cluster-lock generated at $OUTPUT_DIR/cluster-lock.json" + +echo "" + +# Step 3: Update ASDB pubkeys +log_step "Step 3: Updating anti-slashing database pubkeys..." + +run_cmd "$SCRIPT_DIR/../vc/update-anti-slashing-db.sh" \ + "$ASDB_EXPORT_DIR/slashing-protection.json" \ + ".charon/cluster-lock.json" \ + "$OUTPUT_DIR/cluster-lock.json" + +log_info "Anti-slashing database pubkeys updated" + +echo "" + +# Step 4: Stop containers +log_step "Step 4: Stopping charon and VC containers..." 
+
+run_cmd docker compose stop "$VC" charon
+
+log_info "Containers stopped"
+
+echo ""
+
+# Step 5: Backup and replace cluster-lock
+log_step "Step 5: Backing up and replacing cluster-lock..."
+
+TIMESTAMP=$(date +%Y%m%d_%H%M%S)
+mkdir -p "$BACKUP_DIR"
+
+run_cmd cp .charon/cluster-lock.json "$BACKUP_DIR/cluster-lock.json.$TIMESTAMP"
+log_info "Old cluster-lock backed up to $BACKUP_DIR/cluster-lock.json.$TIMESTAMP"
+
+# Remove existing file first (may be read-only from Charon); wrapped in run_cmd so dry-run does not delete it
+run_cmd rm -f .charon/cluster-lock.json
+run_cmd cp "$OUTPUT_DIR/cluster-lock.json" .charon/cluster-lock.json
+log_info "New cluster-lock installed"
+
+echo ""
+
+# Step 6: Import updated ASDB
+log_step "Step 6: Importing updated anti-slashing database..."
+
+VC="$VC" run_cmd "$SCRIPT_DIR/../vc/import_asdb.sh" \
+ --input-file "$ASDB_EXPORT_DIR/slashing-protection.json"
+
+log_info "Anti-slashing database imported"
+
+echo ""
+
+echo ""
+echo "╔════════════════════════════════════════════════════════════════╗"
+echo "║ Replace-Operator Workflow COMPLETED ║"
+echo "╚════════════════════════════════════════════════════════════════╝"
+echo ""
+log_info "Summary:"
+log_info " - Old cluster-lock backed up to: $BACKUP_DIR/cluster-lock.json.$TIMESTAMP"
+log_info " - New cluster-lock installed in: .charon/cluster-lock.json"
+log_info " - Anti-slashing database updated and imported"
+echo ""
+log_warn "╔════════════════════════════════════════════════════════════════╗"
+log_warn "║ IMPORTANT: Wait at least 2 epochs (~13 min) before starting ║"
+log_warn "║ containers to avoid slashing risk from duplicate attestations ║"
+log_warn "╚════════════════════════════════════════════════════════════════╝"
+echo ""
+log_info "When ready, start containers with:"
+echo " docker compose up -d charon $VC"
+echo ""
+log_info "After starting, verify:"
+log_info " 1. Check charon logs: docker compose logs -f charon"
+log_info " 2.
Verify VC is running: docker compose logs -f $VC" +echo "" +log_warn "Keep the backup until you've verified normal operation for several epochs." +echo "" diff --git a/scripts/edit/test/README.md b/scripts/edit/test/README.md new file mode 100644 index 00000000..ad0f78c7 --- /dev/null +++ b/scripts/edit/test/README.md @@ -0,0 +1,67 @@ +# E2E Integration Tests for Edit Scripts + +End-to-end tests that verify the cluster edit scripts work correctly across the full workflow using real Docker Compose services. + +## Prerequisites + +- **Docker** running locally +- **jq** installed +- **Internet access** (charon ceremonies use the Obol P2P relay) + +## Running + +```bash +./scripts/edit/test/e2e_test.sh +``` + +Override the charon version: + +```bash +CHARON_VERSION=v1.9.0-rc3 ./scripts/edit/test/e2e_test.sh +``` + +## What It Tests + +| # | Test | Type | Description | +|---|------|------|-------------| +| 1 | recreate-private-keys | P2P ceremony (4 ops) | Refreshes key shares. Verifies public_shares changed, same validator count. | +| 2 | add-validators | P2P ceremony (4 ops) | Adds 1 validator to a 4-operator, 1-validator cluster. Verifies 2 validators in output. | +| 3 | add-operators | P2P ceremony (4+3 ops) | Adds 3 new operators (4→7). Verifies 7 operators in output. | +| 4 | remove-operators | P2P ceremony (6 of 7 ops) | Removes 1 operator (7→6). Verifies 6 operators in output. | + +## How It Works + +1. Creates a real test cluster using `charon create cluster` (4 nodes, 1 validator) +2. Sets up isolated operator directories, each with: + - `.charon/` — cluster config and validator keys + - `.env` — network and VC configuration + - `docker-compose.e2e.yml` — minimal compose file (busybox for charon, real lodestar for VC) + - `data/lodestar/` — persisted lodestar data directory +3. Starts Docker Compose stacks for each operator (isolated via `COMPOSE_PROJECT_NAME`) +4. Seeds lodestar anti-slashing databases from cluster-lock pubkeys +5. 
Runs each edit script through its happy path using `WORK_DIR`, `COMPOSE_FILE`, and `COMPOSE_PROJECT_NAME` for isolation +6. Verifies outputs (validator count, operator count, key changes) at each step +7. Restarts containers and re-seeds ASDB between tests (pubkeys change after ceremonies) + +### Docker Compose Architecture + +Each operator gets its own Docker Compose project (`e2e-op0`, `e2e-op1`, ...) running: +- **charon** — busybox placeholder (ceremonies use standalone `docker run`, not compose) +- **vc-lodestar** — real lodestar image so ASDB export/import works via `docker compose run` + +Both services use `tail -f /dev/null` to stay alive without real network connections. + +### Environment Variable Isolation + +Edit scripts already preserve `COMPOSE_FILE` and `COMPOSE_PROJECT_NAME` from the environment around `.env` sourcing, so setting these externally works without script modifications: + +```bash +WORK_DIR="$op_dir" \ +COMPOSE_FILE="$op_dir/docker-compose.e2e.yml" \ +COMPOSE_PROJECT_NAME="e2e-op${i}" \ + "$REPO_ROOT/scripts/edit/recreate-private-keys/recreate-private-keys.sh" +``` + +## Expected Runtime + +Approximately 5-10 minutes depending on P2P relay connectivity and Docker image pull times. The P2P ceremonies require all operators to connect through the relay simultaneously. diff --git a/scripts/edit/test/docker-compose.e2e.yml b/scripts/edit/test/docker-compose.e2e.yml new file mode 100644 index 00000000..e7073909 --- /dev/null +++ b/scripts/edit/test/docker-compose.e2e.yml @@ -0,0 +1,20 @@ +# Minimal compose file for E2E testing of edit scripts. +# +# - charon: busybox placeholder (ceremonies use standalone `docker run`) +# - vc-lodestar: real lodestar image for ASDB export/import via `docker compose run` +# +# Both services use `tail -f /dev/null` to stay alive without real network connections. 
+ +services: + charon: + image: busybox:latest + entrypoint: ["sh", "-c", "tail -f /dev/null"] + + vc-lodestar: + image: chainsafe/lodestar:${VC_LODESTAR_VERSION:-v1.38.0} + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: + - .charon/validator_keys:/home/charon/validator_keys + - ./data/lodestar:/opt/data + environment: + NETWORK: ${NETWORK:-hoodi} diff --git a/scripts/edit/test/e2e_test.sh b/scripts/edit/test/e2e_test.sh new file mode 100755 index 00000000..f5eb4b52 --- /dev/null +++ b/scripts/edit/test/e2e_test.sh @@ -0,0 +1,878 @@ +#!/usr/bin/env bash + +# E2E Integration Test for Cluster Edit Scripts +# +# This test uses real Docker Compose services (busybox for charon, real lodestar +# for ASDB operations) and real charon ceremonies via P2P relay. +# +# Prerequisites: +# - Docker running +# - jq installed +# - Internet access (charon uses Obol relay for P2P ceremonies) +# +# Usage: +# ./scripts/edit/test/e2e_test.sh + +set -euo pipefail + +# --- Configuration --- + +TEST_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$TEST_DIR/../../.." 
&& pwd)" +CHARON_VERSION="${CHARON_VERSION:-v1.9.0-rc3}" +CHARON_IMAGE="obolnetwork/charon:${CHARON_VERSION}" +LODESTAR_IMAGE="chainsafe/lodestar:${VC_LODESTAR_VERSION:-v1.38.0}" +NUM_OPERATORS=4 +ZERO_ADDR="0x0000000000000000000000000000000000000001" +HOODI_GVR="0x212f13fc4df078b6cb7db228f1c8307566dcecf900867401a92023d7ba99cb5f" + +# Colors +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' + +# Counters +TESTS_RUN=0 +TESTS_PASSED=0 +TESTS_FAILED=0 + +# Track active compose projects for cleanup +ACTIVE_PROJECTS=() + +# --- Helpers --- + +log_info() { printf "${GREEN}[INFO]${NC} %s\n" "$1"; } +log_warn() { printf "${YELLOW}[WARN]${NC} %s\n" "$1"; } +log_error() { printf "${RED}[ERROR]${NC} %s\n" "$1"; } +log_test() { printf "${BLUE}[TEST]${NC} %s\n" "$1"; } + +assert_eq() { + local desc="$1" expected="$2" actual="$3" + if [ "$expected" = "$actual" ]; then + log_info " PASS: $desc (got $actual)" + return 0 + else + log_error " FAIL: $desc - expected '$expected', got '$actual'" + return 1 + fi +} + +assert_ne() { + local desc="$1" not_expected="$2" actual="$3" + if [ "$not_expected" != "$actual" ]; then + log_info " PASS: $desc (values differ)" + return 0 + else + log_error " FAIL: $desc - expected different from '$not_expected', but got same" + return 1 + fi +} + +run_test() { + local name="$1" + shift + TESTS_RUN=$((TESTS_RUN + 1)) + echo "" + echo "================================================================" + log_test "TEST $TESTS_RUN: $name" + echo "================================================================" + echo "" + if "$@"; then + TESTS_PASSED=$((TESTS_PASSED + 1)) + log_info "TEST $TESTS_RUN PASSED: $name" + else + TESTS_FAILED=$((TESTS_FAILED + 1)) + log_error "TEST $TESTS_RUN FAILED: $name" + fi +} + +# --- Compose helpers --- + +# Returns env vars for docker compose targeting operator i's directory. 
+compose_env() { + local i="$1" + local op_dir="$TMP_DIR/operator${i}" + echo "COMPOSE_FILE=$op_dir/docker-compose.e2e.yml" \ + "COMPOSE_PROJECT_NAME=e2e-op${i}" +} + +# Runs docker compose for operator i. +compose_cmd() { + local i="$1" + shift + local op_dir="$TMP_DIR/operator${i}" + COMPOSE_FILE="$op_dir/docker-compose.e2e.yml" \ + COMPOSE_PROJECT_NAME="e2e-op${i}" \ + docker compose "$@" +} + +start_operator() { + local i="$1" + log_info " Starting compose stack for operator $i..." + compose_cmd "$i" up -d 2>/dev/null + # Track this project for cleanup + local project="e2e-op${i}" + if ! printf '%s\n' "${ACTIVE_PROJECTS[@]}" 2>/dev/null | grep -qx "$project"; then + ACTIVE_PROJECTS+=("$project") + fi +} + +stop_operator() { + local i="$1" + log_info " Stopping compose stack for operator $i..." + compose_cmd "$i" down --remove-orphans 2>/dev/null || true +} + +# Generate and import a minimal EIP-3076 ASDB for operator i. +seed_asdb() { + local op_dir="$1" + local op_index="$2" + local lock="$op_dir/.charon/cluster-lock.json" + + if [ ! -f "$lock" ]; then + log_warn " No cluster-lock.json for ASDB seed at $op_dir" + return 0 + fi + + # Extract this operator's public shares + local pubkeys + pubkeys=$(jq -r --argjson idx "$op_index" \ + '[.distributed_validators[].public_shares[$idx]] | map(select(. 
!= null)) | .[]' \ + "$lock" 2>/dev/null || echo "") + + if [ -z "$pubkeys" ]; then + log_warn " No pubkeys found for operator $op_index, skipping ASDB seed" + return 0 + fi + + # Build EIP-3076 JSON + local data_entries="" + local first=true + while IFS= read -r pk; do + [ -z "$pk" ] && continue + if [ "$first" = true ]; then + first=false + else + data_entries="${data_entries}," + fi + data_entries="${data_entries}{\"pubkey\":\"${pk}\",\"signed_blocks\":[],\"signed_attestations\":[]}" + done <<< "$pubkeys" + + local asdb_file="$op_dir/asdb-seed.json" + cat > "$asdb_file" </dev/null 2>&1 || log_warn " ASDB seed import returned non-zero for operator $op_index (may be OK on first run)" + + log_info " ASDB seeded for operator $op_index" +} + +# Restart containers and re-seed ASDB for an operator. +restart_and_seed() { + local i="$1" + local op_dir="$TMP_DIR/operator${i}" + compose_cmd "$i" down --remove-orphans 2>/dev/null || true + start_operator "$i" + seed_asdb "$op_dir" "$i" +} + +# Set up an operator directory with .charon, .env, compose file, and data dirs. +setup_operator() { + local i="$1" + local charon_node_dir="$2" + local op_dir="$TMP_DIR/operator${i}" + mkdir -p "$op_dir/data/lodestar" + + # Copy node contents to operator's .charon directory + cp -r "$charon_node_dir" "$op_dir/.charon" + + # Create .env file + cat > "$op_dir/.env" </dev/null || true + done + + if [ -n "$TMP_DIR" ] && [ -d "$TMP_DIR" ]; then + rm -rf "$TMP_DIR" + log_info "Cleaned up $TMP_DIR" + fi +} +trap cleanup EXIT + +check_prerequisites() { + log_info "Checking prerequisites..." + + if ! command -v jq &>/dev/null; then + log_error "jq is required but not installed" + exit 1 + fi + + if ! docker info &>/dev/null; then + log_error "Docker is not running" + exit 1 + fi + + log_info "Pulling images..." 
+ docker pull "$CHARON_IMAGE" >/dev/null 2>&1 || true + docker pull "$LODESTAR_IMAGE" >/dev/null 2>&1 || true + docker pull busybox:latest >/dev/null 2>&1 || true + + log_info "Prerequisites OK" +} + +setup_tmp_dir() { + TMP_DIR=$(mktemp -d) + log_info "Working directory: $TMP_DIR" +} + +create_cluster() { + log_info "Creating test cluster with $NUM_OPERATORS nodes, 1 validator..." + + local cluster_dir="$TMP_DIR/cluster" + mkdir -p "$cluster_dir" + + docker run --rm \ + --user "$(id -u):$(id -g)" \ + -v "$cluster_dir:/opt/charon/.charon" \ + "$CHARON_IMAGE" \ + create cluster \ + --nodes="$NUM_OPERATORS" \ + --num-validators=1 \ + --network=hoodi \ + --withdrawal-addresses="$ZERO_ADDR" \ + --fee-recipient-addresses="$ZERO_ADDR" \ + --cluster-dir=/opt/charon/.charon + + if [ ! -d "$cluster_dir/node0" ]; then + log_error "Cluster creation failed - no node0 directory" + exit 1 + fi + + log_info "Cluster created successfully" + + # Set up operator work directories + # Note: deposit-data*.json files are inside each node directory, + # so setup_operator copies them along with the rest of the node contents. + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + setup_operator "$i" "$cluster_dir/node${i}" + done +} + +start_all_operators() { + local max_idx="${1:-$((NUM_OPERATORS - 1))}" + log_info "Starting compose stacks for operators 0-${max_idx}..." + for i in $(seq 0 "$max_idx"); do + start_operator "$i" + done +} + +seed_all_operators() { + local max_idx="${1:-$((NUM_OPERATORS - 1))}" + log_info "Seeding ASDB for operators 0-${max_idx}..." + for i in $(seq 0 "$max_idx"); do + seed_asdb "$TMP_DIR/operator${i}" "$i" + done +} + +restart_and_seed_all() { + local max_idx="${1:-$((NUM_OPERATORS - 1))}" + log_info "Restarting and re-seeding operators 0-${max_idx}..." 
+ for i in $(seq 0 "$max_idx"); do + restart_and_seed "$i" + done +} + +# --- Test Functions --- + +test_recreate_private_keys() { + log_info "Running recreate-private-keys ceremony ($NUM_OPERATORS operators in parallel)..." + + # Save current state for comparison + local old_shares + old_shares=$(jq -r '.distributed_validators[0].public_shares[0]' \ + "$TMP_DIR/operator0/.charon/cluster-lock.json") + local expected_vals + expected_vals=$(jq '.distributed_validators | length' \ + "$TMP_DIR/operator0/.charon/cluster-lock.json") + + local pids=() + local logs_dir="$TMP_DIR/logs/recreate-private-keys" + mkdir -p "$logs_dir" + + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + ( + WORK_DIR="$op_dir" \ + COMPOSE_FILE="$op_dir/docker-compose.e2e.yml" \ + COMPOSE_PROJECT_NAME="e2e-op${i}" \ + "$REPO_ROOT/scripts/edit/recreate-private-keys/recreate-private-keys.sh" + ) < /dev/null > "$logs_dir/operator${i}.log" 2>&1 & + pids+=($!) + done + + local all_ok=true + for i in "${!pids[@]}"; do + if ! wait "${pids[$i]}"; then + log_error "Operator $i failed. Log:" + sed 's/\r$//' "$logs_dir/operator${i}.log" | while IFS= read -r line; do echo " $line"; done || true + all_ok=false + fi + done + + if [ "$all_ok" = false ]; then + return 1 + fi + + # Verify: still 1 validator, same operator count, different public_shares + local ok=true + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + local lock="$op_dir/.charon/cluster-lock.json" + + if [ ! 
-f "$lock" ]; then + log_error "Operator $i: cluster-lock.json not found" + ok=false + continue + fi + + local num_vals + num_vals=$(jq '.distributed_validators | length' "$lock") + assert_eq "Operator $i has $expected_vals validators" "$expected_vals" "$num_vals" || ok=false + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "Operator $i has $NUM_OPERATORS operators" "$NUM_OPERATORS" "$num_ops" || ok=false + done + + # Check that public shares changed + local new_shares + new_shares=$(jq -r '.distributed_validators[0].public_shares[0]' \ + "$TMP_DIR/operator0/.charon/cluster-lock.json") + assert_ne "Public shares changed after recreate" "$old_shares" "$new_shares" || ok=false + + [ "$ok" = true ] +} + +test_add_validators() { + log_info "Running add-validators ceremony ($NUM_OPERATORS operators in parallel)..." + + local pids=() + local logs_dir="$TMP_DIR/logs/add-validators" + mkdir -p "$logs_dir" + + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + ( + WORK_DIR="$op_dir" \ + COMPOSE_FILE="$op_dir/docker-compose.e2e.yml" \ + COMPOSE_PROJECT_NAME="e2e-op${i}" \ + "$REPO_ROOT/scripts/edit/add-validators/add-validators.sh" \ + --num-validators 1 \ + --withdrawal-addresses "$ZERO_ADDR" \ + --fee-recipient-addresses "$ZERO_ADDR" + ) < /dev/null > "$logs_dir/operator${i}.log" 2>&1 & + pids+=($!) + done + + local all_ok=true + for i in "${!pids[@]}"; do + if ! wait "${pids[$i]}"; then + log_error "Operator $i failed. Log:" + sed 's/\r$//' "$logs_dir/operator${i}.log" | while IFS= read -r line; do echo " $line"; done || true + all_ok=false + fi + done + + if [ "$all_ok" = false ]; then + return 1 + fi + + # Verify: each operator should have 2 validators + local ok=true + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + local lock="$op_dir/.charon/cluster-lock.json" + + if [ ! 
-f "$lock" ]; then + log_error "Operator $i: cluster-lock.json not found" + ok=false + continue + fi + + local num_vals + num_vals=$(jq '.distributed_validators | length' "$lock") + assert_eq "Operator $i has 2 validators" "2" "$num_vals" || ok=false + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "Operator $i has $NUM_OPERATORS operators" "$NUM_OPERATORS" "$num_ops" || ok=false + done + + [ "$ok" = true ] +} + +test_add_operators() { + log_info "Running add-operators ceremony ($NUM_OPERATORS existing + 3 new = 7 total)..." + + local new_enrs=() + local new_ops_start=$NUM_OPERATORS + local new_ops_end=$((NUM_OPERATORS + 2)) # 3 new operators: 4, 5, 6 + + # Create new operator directories and generate ENRs + for i in $(seq "$new_ops_start" "$new_ops_end"); do + local new_op_dir="$TMP_DIR/operator${i}" + mkdir -p "$new_op_dir/.charon" "$new_op_dir/data/lodestar" + + log_info " Generating ENR for new operator $i..." + docker run --rm \ + --user "$(id -u):$(id -g)" \ + -v "$new_op_dir/.charon:/opt/charon/.charon" \ + "$CHARON_IMAGE" \ + create enr + + local new_enr + new_enr=$(docker run --rm \ + --user "$(id -u):$(id -g)" \ + -v "$new_op_dir/.charon:/opt/charon/.charon" \ + "$CHARON_IMAGE" \ + enr 2>/dev/null) + + if [ -z "$new_enr" ]; then + log_error "Failed to get ENR for new operator $i" + return 1 + fi + log_info " Operator $i ENR: ${new_enr:0:50}..." 
+ new_enrs+=("$new_enr") + + # Copy cluster-lock from operator0 + cp "$TMP_DIR/operator0/.charon/cluster-lock.json" "$new_op_dir/.charon/cluster-lock.json" + + # Create .env and compose file + cat > "$new_op_dir/.env" </dev/null || true # May fail for new ops without existing shares + done + + # Build comma-separated ENR list + local enr_list + enr_list=$(IFS=,; echo "${new_enrs[*]}") + + local pids=() + local logs_dir="$TMP_DIR/logs/add-operators" + mkdir -p "$logs_dir" + + # Run existing operators + for i in $(seq 0 $((NUM_OPERATORS - 1))); do + local op_dir="$TMP_DIR/operator${i}" + ( + WORK_DIR="$op_dir" \ + COMPOSE_FILE="$op_dir/docker-compose.e2e.yml" \ + COMPOSE_PROJECT_NAME="e2e-op${i}" \ + "$REPO_ROOT/scripts/edit/add-operators/existing-operator.sh" \ + --new-operator-enrs "$enr_list" + ) < /dev/null > "$logs_dir/operator${i}.log" 2>&1 & + pids+=($!) + done + + # Run new operators + for i in $(seq "$new_ops_start" "$new_ops_end"); do + local new_op_dir="$TMP_DIR/operator${i}" + ( + WORK_DIR="$new_op_dir" \ + COMPOSE_FILE="$new_op_dir/docker-compose.e2e.yml" \ + COMPOSE_PROJECT_NAME="e2e-op${i}" \ + "$REPO_ROOT/scripts/edit/add-operators/new-operator.sh" \ + --new-operator-enrs "$enr_list" \ + --cluster-lock ".charon/cluster-lock.json" + ) < /dev/null > "$logs_dir/operator${i}.log" 2>&1 & + pids+=($!) + done + + # Wait for all + local all_ok=true + for i in "${!pids[@]}"; do + if ! wait "${pids[$i]}"; then + log_error "Process $i failed. 
Log:" + sed 's/\r$//' "$logs_dir/operator${i}.log" 2>/dev/null | while IFS= read -r line; do echo " $line"; done || true + # Also check new operator logs + for j in $(seq "$new_ops_start" "$new_ops_end"); do + if [ -f "$logs_dir/operator${j}.log" ]; then + log_error "New operator $j log:" + sed 's/\r$//' "$logs_dir/operator${j}.log" 2>/dev/null | while IFS= read -r line; do echo " $line"; done || true + fi + done + all_ok=false + fi + done + + if [ "$all_ok" = false ]; then + return 1 + fi + + # Verify: all 7 operators should have 7 operators in cluster-lock + local total_ops=$((new_ops_end + 1)) # 7 + local ok=true + for i in $(seq 0 "$new_ops_end"); do + local op_dir="$TMP_DIR/operator${i}" + local lock="$op_dir/.charon/cluster-lock.json" + + if [ ! -f "$lock" ]; then + log_error "Operator $i: cluster-lock.json not found" + ok=false + continue + fi + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "Operator $i has $total_ops operators" "$total_ops" "$num_ops" || ok=false + done + + # Update NUM_OPERATORS to reflect new total + NUM_OPERATORS="$total_ops" + + [ "$ok" = true ] +} + +test_remove_operators() { + log_info "Running remove-operators ceremony (removing operator6, 6 remaining)..." + + local op_to_remove=$((NUM_OPERATORS - 1)) # operator6 + + # Get operator6's ENR from cluster-lock + local remove_enr + remove_enr=$(jq -r --argjson idx "$op_to_remove" \ + '.cluster_definition.operators[$idx].enr' \ + "$TMP_DIR/operator0/.charon/cluster-lock.json") + + if [ -z "$remove_enr" ] || [ "$remove_enr" = "null" ]; then + log_error "Failed to get operator${op_to_remove} ENR from cluster-lock" + return 1 + fi + log_info " Operator${op_to_remove} ENR to remove: ${remove_enr:0:50}..." 
+ + local remaining_max=$((op_to_remove - 1)) # operators 0-5 + + local pids=() + local logs_dir="$TMP_DIR/logs/remove-operators" + mkdir -p "$logs_dir" + + # Run remaining operators (0-5) — operator6 does NOT participate + for i in $(seq 0 "$remaining_max"); do + local op_dir="$TMP_DIR/operator${i}" + ( + WORK_DIR="$op_dir" \ + COMPOSE_FILE="$op_dir/docker-compose.e2e.yml" \ + COMPOSE_PROJECT_NAME="e2e-op${i}" \ + "$REPO_ROOT/scripts/edit/remove-operators/remaining-operator.sh" \ + --operator-enrs-to-remove "$remove_enr" + ) < /dev/null > "$logs_dir/operator${i}.log" 2>&1 & + pids+=($!) + done + + local all_ok=true + for i in "${!pids[@]}"; do + if ! wait "${pids[$i]}"; then + log_error "Operator $i failed. Log:" + sed 's/\r$//' "$logs_dir/operator${i}.log" | while IFS= read -r line; do echo " $line"; done || true + all_ok=false + fi + done + + if [ "$all_ok" = false ]; then + return 1 + fi + + # Verify: 6 operators in new cluster-lock + local expected_ops=$((NUM_OPERATORS - 1)) + local ok=true + for i in $(seq 0 "$remaining_max"); do + local op_dir="$TMP_DIR/operator${i}" + local lock="$op_dir/.charon/cluster-lock.json" + + if [ ! -f "$lock" ]; then + log_error "Operator $i: cluster-lock.json not found" + ok=false + continue + fi + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "Operator $i has $expected_ops operators" "$expected_ops" "$num_ops" || ok=false + done + + # Clean up removed operator's compose stack + stop_operator "$op_to_remove" + + # Update NUM_OPERATORS + NUM_OPERATORS="$expected_ops" + + [ "$ok" = true ] +} + +test_replace_operator() { + local op_to_replace=$((NUM_OPERATORS - 1)) # replace the last operator + log_info "Running replace-operator ceremony (replacing operator${op_to_replace})..." 
+ + # Get the old operator's ENR from cluster-lock + local old_enr + old_enr=$(jq -r --argjson idx "$op_to_replace" \ + '.cluster_definition.operators[$idx].enr' \ + "$TMP_DIR/operator0/.charon/cluster-lock.json") + + if [ -z "$old_enr" ] || [ "$old_enr" = "null" ]; then + log_error "Failed to get operator${op_to_replace} ENR from cluster-lock" + return 1 + fi + log_info " Old operator ENR: ${old_enr:0:50}..." + + # Create new operator directory and generate ENR + local new_op_idx="new" + local new_op_dir="$TMP_DIR/operator-replace-new" + mkdir -p "$new_op_dir/.charon" "$new_op_dir/data/lodestar" + + log_info " Generating ENR for new operator..." + docker run --rm \ + --user "$(id -u):$(id -g)" \ + -v "$new_op_dir/.charon:/opt/charon/.charon" \ + "$CHARON_IMAGE" \ + create enr + + local new_enr + new_enr=$(docker run --rm \ + --user "$(id -u):$(id -g)" \ + -v "$new_op_dir/.charon:/opt/charon/.charon" \ + "$CHARON_IMAGE" \ + enr 2>/dev/null) + + if [ -z "$new_enr" ]; then + log_error "Failed to get ENR for new operator" + return 1 + fi + log_info " New operator ENR: ${new_enr:0:50}..." + + # Set up new operator directory with .env, compose file, and cluster-lock + cat > "$new_op_dir/.env" < "$logs_dir/operator${i}.log" 2>&1 & + pids+=($!) + done + + # New operator also participates in the ceremony + ( + docker run --rm -i \ + --user "$(id -u):$(id -g)" \ + -v "$new_op_dir/.charon:/opt/charon/.charon" \ + -v "$new_op_dir/output:/opt/charon/output" \ + "$CHARON_IMAGE" \ + alpha edit replace-operator \ + --lock-file=/opt/charon/.charon/cluster-lock.json \ + --output-dir=/opt/charon/output \ + --old-operator-enr="$old_enr" \ + --new-operator-enr="$new_enr" + ) < /dev/null > "$logs_dir/new-operator-ceremony.log" 2>&1 & + pids+=($!) + + local all_ok=true + for i in "${!pids[@]}"; do + if ! wait "${pids[$i]}"; then + log_error "Process $i failed. 
Log:" + local logfile="$logs_dir/operator${i}.log" + # Last pid is the new operator + if [ "$i" -eq $((${#pids[@]} - 1)) ]; then + logfile="$logs_dir/new-operator-ceremony.log" + fi + sed 's/\r$//' "$logfile" | while IFS= read -r line; do echo " $line"; done || true + all_ok=false + fi + done + + if [ "$all_ok" = false ]; then + return 1 + fi + + # Post-ceremony: install the new .charon directory for the new operator + # (The script now does this automatically, but since we ran raw docker for the test, + # we do it manually here) + if [ -d "$new_op_dir/output" ]; then + mv "$new_op_dir/.charon" "$new_op_dir/.charon-backup" 2>/dev/null || true + mv "$new_op_dir/output" "$new_op_dir/.charon" + log_info "New operator: installed output to .charon" + else + log_error "New operator: output directory not found after ceremony" + all_ok=false + fi + + if [ "$all_ok" = false ]; then + return 1 + fi + + # Verify: same number of operators, new ENR present, old ENR gone + local ok=true + for i in $(seq 0 "$remaining_max"); do + local op_dir="$TMP_DIR/operator${i}" + local lock="$op_dir/.charon/cluster-lock.json" + + if [ ! -f "$lock" ]; then + log_error "Operator $i: cluster-lock.json not found" + ok=false + continue + fi + + local num_ops + num_ops=$(jq '.cluster_definition.operators | length' "$lock") + assert_eq "Operator $i has $NUM_OPERATORS operators" "$NUM_OPERATORS" "$num_ops" || ok=false + done + + # Verify new operator has the cluster-lock installed + if [ ! 
-f "$new_op_dir/.charon/cluster-lock.json" ]; then + log_error "New operator: cluster-lock.json not found" + ok=false + else + local new_num_ops + new_num_ops=$(jq '.cluster_definition.operators | length' "$new_op_dir/.charon/cluster-lock.json") + assert_eq "New operator has $NUM_OPERATORS operators" "$NUM_OPERATORS" "$new_num_ops" || ok=false + fi + + # Verify the old ENR is gone and new ENR is present in the cluster-lock + local lock="$TMP_DIR/operator0/.charon/cluster-lock.json" + local has_new_enr + has_new_enr=$(jq -r --arg enr "$new_enr" \ + '[.cluster_definition.operators[].enr] | map(select(. == $enr)) | length' "$lock") + assert_eq "New ENR present in cluster-lock" "1" "$has_new_enr" || ok=false + + local has_old_enr + has_old_enr=$(jq -r --arg enr "$old_enr" \ + '[.cluster_definition.operators[].enr] | map(select(. == $enr)) | length' "$lock") + assert_eq "Old ENR removed from cluster-lock" "0" "$has_old_enr" || ok=false + + # Clean up replaced operator's compose stack + stop_operator "$op_to_replace" + + [ "$ok" = true ] +} + +# --- Main --- + +main() { + echo "" + echo "╔════════════════════════════════════════════════════════════════╗" + echo "║ E2E Integration Test for Cluster Edit Scripts ║" + echo "║ (Real Docker Compose) ║" + echo "╚════════════════════════════════════════════════════════════════╝" + echo "" + + check_prerequisites + setup_tmp_dir + create_cluster + + # Start compose stacks and seed ASDB for all operators + start_all_operators + seed_all_operators + + # Test 1: add-validators (4 ops in parallel) + run_test "add-validators" test_add_validators + restart_and_seed_all + + # Test 2: recreate-private-keys (4 ops in parallel) + run_test "recreate-private-keys" test_recreate_private_keys + restart_and_seed_all + + # Test 3: add-operators (+3 new = 7 total) + run_test "add-operators" test_add_operators + restart_and_seed_all $((NUM_OPERATORS - 1)) + + # Test 4: remove-operators (remove 1, leaving 6) + run_test "remove-operators" 
test_remove_operators + restart_and_seed_all $((NUM_OPERATORS - 1)) + + # Test 5: replace-operator (replace last operator with a new one) + run_test "replace-operator" test_replace_operator + + # Summary + echo "" + echo "╔════════════════════════════════════════════════════════════════╗" + echo "║ Test Summary ║" + echo "╚════════════════════════════════════════════════════════════════╝" + echo "" + echo " Tests run: $TESTS_RUN" + printf " Tests passed: ${GREEN}%s${NC}\n" "$TESTS_PASSED" + if [ "$TESTS_FAILED" -gt 0 ]; then + printf " Tests failed: ${RED}%s${NC}\n" "$TESTS_FAILED" + else + echo " Tests failed: $TESTS_FAILED" + fi + echo "" + + if [ "$TESTS_FAILED" -gt 0 ]; then + log_error "SOME TESTS FAILED" + exit 1 + else + log_info "ALL TESTS PASSED" + exit 0 + fi +} + +main "$@" diff --git a/scripts/edit/vc/README.md b/scripts/edit/vc/README.md new file mode 100644 index 00000000..1ec77785 --- /dev/null +++ b/scripts/edit/vc/README.md @@ -0,0 +1,65 @@ +# Anti-Slashing Database Scripts + +Scripts to export, import, and update validator anti-slashing databases (ASDB) in [EIP-3076](https://eips.ethereum.org/EIPS/eip-3076) format for Charon distributed validators. + +## Overview + +When performing cluster edit operations (replace-operator, recreate-private-keys, add-operators, remove-operators), the anti-slashing database must be exported, updated with new pubkeys, and re-imported to prevent slashing violations. These scripts automate that process across all supported validator clients. 
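The export/update/import cycle described above can be sketched in a few lines. This is a minimal illustration only: `0xaaaa`/`0xbbbb` are hypothetical placeholder keys (not real BLS pubkeys — actual values come from the source and target cluster-lock files), and the `genesis_validators_root` shown is the hoodi value used by the E2E test:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical share pubkeys for illustration; in practice these are read
# from the source and target cluster-lock files.
OLD_PK="0xaaaa"
NEW_PK="0xbbbb"

# Minimal EIP-3076 interchange file, shaped like an ASDB export.
cat > /tmp/asdb.json <<EOF
{
  "metadata": {
    "interchange_format_version": "5",
    "genesis_validators_root": "0x212f13fc4df078b6cb7db228f1c8307566dcecf900867401a92023d7ba99cb5f"
  },
  "data": [
    {"pubkey": "$OLD_PK", "signed_blocks": [], "signed_attestations": []}
  ]
}
EOF

# Remap the old share pubkey to the new one; the signing history
# (signed_blocks / signed_attestations) is carried over unchanged.
jq --arg old "$OLD_PK" --arg new "$NEW_PK" \
  '.data |= map(if .pubkey == $old then .pubkey = $new else . end)' \
  /tmp/asdb.json > /tmp/asdb-updated.json
```

The updated file is what gets re-imported into the validator client, so slashing protection history follows the validator across the key change.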
+ +## Prerequisites + +- `.env` file with `NETWORK` and `VC` variables set +- Docker running +- `jq` installed (for `update-anti-slashing-db.sh`) + +## Scripts + +### Router Scripts + +| Script | Description | +|--------|-------------| +| `export_asdb.sh` | Routes to the appropriate VC-specific export script based on `VC` env var | +| `import_asdb.sh` | Routes to the appropriate VC-specific import script based on `VC` env var | + +Usage: + +```bash +# Export ASDB (VC container must be stopped) +VC=vc-lodestar ./scripts/edit/vc/export_asdb.sh --output-file ./asdb-export/slashing-protection.json + +# Import ASDB (VC container must be stopped) +VC=vc-lodestar ./scripts/edit/vc/import_asdb.sh --input-file ./asdb-export/slashing-protection.json +``` + +### Update Anti-Slashing DB + +Updates pubkeys in an EIP-3076 file by mapping them between source and target cluster-lock files. + +```bash +./scripts/edit/vc/update-anti-slashing-db.sh +``` + +### Supported Validator Clients + +Each client has its own `export_asdb.sh` and `import_asdb.sh` in a subdirectory: + +| Client | Directory | Export requires | Import requires | +|--------|-----------|-----------------|-----------------| +| Lodestar | `lodestar/` | Container stopped | Container stopped | +| Prysm | `prysm/` | Container stopped | Container stopped | +| Teku | `teku/` | Container stopped | Container stopped | +| Nimbus | `nimbus/` | Container stopped | Container stopped | + +**Note**: All validator clients require the container to be stopped before export/import to avoid database locking issues. The main ceremony scripts (e.g., `replace-operator`, `recreate-private-keys`) handle stopping the VC automatically. + +## Testing + +See [test/README.md](test/README.md) for integration tests. 
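The routing convention the router scripts follow (strip the `vc-` prefix from `$VC`, then resolve the client-specific script in the matching subdirectory) can be condensed to a few lines. `resolve_vc_script` is an illustrative helper, not a function that exists in the scripts themselves:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch of the router convention used by export_asdb.sh / import_asdb.sh:
# VC=vc-lodestar routes to <base>/lodestar/export_asdb.sh, and so on.
resolve_vc_script() {
  local vc="$1" action="$2" base="${3:-scripts/edit/vc}"
  local name="${vc#vc-}"   # e.g. vc-lodestar -> lodestar
  echo "${base}/${name}/${action}_asdb.sh"
}

resolve_vc_script vc-lodestar export   # -> scripts/edit/vc/lodestar/export_asdb.sh
```

Because the mapping is purely by directory name, adding support for another validator client only requires creating a new subdirectory with its own `export_asdb.sh` and `import_asdb.sh`.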
+ +## Related + +- [Add-Validators Workflow](../add-validators/README.md) +- [Add-Operators Workflow](../add-operators/README.md) +- [Remove-Operators Workflow](../remove-operators/README.md) +- [Recreate-Private-Keys Workflow](../recreate-private-keys/README.md) +- [Replace-Operator Workflow](../replace-operator/README.md) diff --git a/scripts/edit/vc/export_asdb.sh b/scripts/edit/vc/export_asdb.sh new file mode 100755 index 00000000..f7ed0b68 --- /dev/null +++ b/scripts/edit/vc/export_asdb.sh @@ -0,0 +1,56 @@ +#!/usr/bin/env bash + +# Script to export validator anti-slashing database to EIP-3076 format. +# +# This script routes to the appropriate VC-specific export script based on the VC environment variable. +# +# Usage: VC=vc-lodestar ./scripts/edit/vc/export_asdb.sh [options] +# +# Environment Variables: +# VC Validator client type (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus) +# +# All options are passed through to the VC-specific script. + +set -euo pipefail + +# Check if VC environment variable is set +if [ -z "${VC:-}" ]; then + echo "Error: VC environment variable is not set" >&2 + echo "Usage: VC=vc-lodestar $0 [options]" >&2 + echo "" >&2 + echo "Supported VC types:" >&2 + echo " - vc-lodestar" >&2 + echo " - vc-teku" >&2 + echo " - vc-prysm" >&2 + echo " - vc-nimbus" >&2 + exit 1 +fi + +# Extract the VC name (remove "vc-" prefix) +VC_NAME="${VC#vc-}" + +# Get the script directory +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +# Path to the VC-specific script +VC_SCRIPT="${SCRIPT_DIR}/${VC_NAME}/export_asdb.sh" + +# Check if the VC-specific script exists +if [ ! 
-f "$VC_SCRIPT" ]; then + echo "Error: Export script for '$VC' not found at: $VC_SCRIPT" >&2 + echo "" >&2 + echo "Available VC types:" >&2 + for dir in "${SCRIPT_DIR}"/*; do + if [ -d "$dir" ] && [ -f "$dir/export_asdb.sh" ]; then + basename "$dir" + fi + done | sed 's/^/ - vc-/' >&2 + exit 1 +fi + +# Make sure the VC-specific script is executable +chmod +x "$VC_SCRIPT" + +# Run the VC-specific script with all arguments passed through +echo "Running export for $VC..." +exec "$VC_SCRIPT" "$@" diff --git a/scripts/edit/vc/import_asdb.sh b/scripts/edit/vc/import_asdb.sh new file mode 100755 index 00000000..6e8facd7 --- /dev/null +++ b/scripts/edit/vc/import_asdb.sh @@ -0,0 +1,56 @@ +#!/usr/bin/env bash + +# Script to import validator anti-slashing database from EIP-3076 format. +# +# This script routes to the appropriate VC-specific import script based on the VC environment variable. +# +# Usage: VC=vc-lodestar ./scripts/edit/vc/import_asdb.sh [options] +# +# Environment Variables: +# VC Validator client type (e.g., vc-lodestar, vc-teku, vc-prysm, vc-nimbus) +# +# All options are passed through to the VC-specific script. + +set -euo pipefail + +# Check if VC environment variable is set +if [ -z "${VC:-}" ]; then + echo "Error: VC environment variable is not set" >&2 + echo "Usage: VC=vc-lodestar $0 [options]" >&2 + echo "" >&2 + echo "Supported VC types:" >&2 + echo " - vc-lodestar" >&2 + echo " - vc-teku" >&2 + echo " - vc-prysm" >&2 + echo " - vc-nimbus" >&2 + exit 1 +fi + +# Extract the VC name (remove "vc-" prefix) +VC_NAME="${VC#vc-}" + +# Get the script directory +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +# Path to the VC-specific script +VC_SCRIPT="${SCRIPT_DIR}/${VC_NAME}/import_asdb.sh" + +# Check if the VC-specific script exists +if [ ! 
-f "$VC_SCRIPT" ]; then + echo "Error: Import script for '$VC' not found at: $VC_SCRIPT" >&2 + echo "" >&2 + echo "Available VC types:" >&2 + for dir in "${SCRIPT_DIR}"/*; do + if [ -d "$dir" ] && [ -f "$dir/import_asdb.sh" ]; then + basename "$dir" + fi + done | sed 's/^/ - vc-/' >&2 + exit 1 +fi + +# Make sure the VC-specific script is executable +chmod +x "$VC_SCRIPT" + +# Run the VC-specific script with all arguments passed through +echo "Running import for $VC..." +exec "$VC_SCRIPT" "$@" diff --git a/scripts/edit/vc/lodestar/export_asdb.sh b/scripts/edit/vc/lodestar/export_asdb.sh new file mode 100755 index 00000000..fe143caa --- /dev/null +++ b/scripts/edit/vc/lodestar/export_asdb.sh @@ -0,0 +1,136 @@ +#!/usr/bin/env bash + +# Script to export Lodestar validator anti-slashing database to EIP-3076 format. +# +# This script is run by continuing operators before the replace-operator ceremony. +# It exports the slashing protection database using a temporary container +# to a JSON file that can be updated and re-imported after the ceremony. +# +# Usage: export_asdb.sh [--data-dir ] [--output-file ] +# +# Options: +# --data-dir Path to Lodestar data directory (default: ./data/lodestar) +# --output-file Path for exported slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-lodestar container must be STOPPED (Lodestar locks the database while running) +# - docker and docker compose must be available + +set -euo pipefail + +# Default values +DATA_DIR="./data/lodestar" +OUTPUT_FILE="./asdb-export/slashing-protection.json" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + --output-file) + OUTPUT_FILE="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--data-dir ] [--output-file ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! 
-f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Exporting anti-slashing database for Lodestar validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Output file: $OUTPUT_FILE" +echo "" + +# Check if vc-lodestar container is running (it should be stopped to avoid DB locking) +if docker compose ps --format '{{.Status}}' vc-lodestar 2>/dev/null | grep -qi running; then + echo "Error: vc-lodestar container is still running" >&2 + echo "Please stop the validator client before exporting:" >&2 + echo " docker compose stop vc-lodestar" >&2 + echo "" >&2 + echo "Lodestar locks the database while running, preventing export." >&2 + exit 1 +fi + +# Make paths absolute for docker bind mount +if [[ "$OUTPUT_FILE" != /* ]]; then + OUTPUT_FILE="$(pwd)/$OUTPUT_FILE" +fi +OUTPUT_DIR=$(dirname "$OUTPUT_FILE") + +# Create output directory if it doesn't exist +mkdir -p "$OUTPUT_DIR" + +echo "Exporting slashing protection data using vc-lodestar container..." + +# Export slashing protection data using a temporary container based on vc-lodestar service. 
+# The output directory is bind-mounted at /tmp/asdb-export; the container writes slashing-protection.json into it. +# We MUST override the entrypoint because the default run.sh ignores arguments. +if ! docker compose run --rm -T \ + --entrypoint node \ + -v "$OUTPUT_DIR":/tmp/asdb-export \ + vc-lodestar /usr/app/packages/cli/bin/lodestar validator slashing-protection export \ + --file /tmp/asdb-export/slashing-protection.json \ + --dataDir /opt/data \ + --network "$NETWORK"; then + echo "Error: Failed to export slashing protection from vc-lodestar container" >&2 + exit 1 +fi + +# Fix file ownership (docker creates it as root) +EXPORTED_FILE="$OUTPUT_DIR/slashing-protection.json" +if [ -f "$EXPORTED_FILE" ]; then + sudo chown "$(id -u):$(id -g)" "$EXPORTED_FILE" +fi + +# Move to correct output file if different from default +if [ "$EXPORTED_FILE" != "$OUTPUT_FILE" ]; then + mv "$EXPORTED_FILE" "$OUTPUT_FILE" +fi + +# Validate the exported JSON +if ! jq empty "$OUTPUT_FILE" 2>/dev/null; then + echo "Error: Exported file is not valid JSON" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully exported anti-slashing database" +echo " Output file: $OUTPUT_FILE" +echo "" +echo "You can now proceed with the replace-operator ceremony." diff --git a/scripts/edit/vc/lodestar/import_asdb.sh b/scripts/edit/vc/lodestar/import_asdb.sh new file mode 100755 index 00000000..dd936e56 --- /dev/null +++ b/scripts/edit/vc/lodestar/import_asdb.sh @@ -0,0 +1,130 @@ +#!/usr/bin/env bash + +# Script to import Lodestar validator anti-slashing database from EIP-3076 format. +# +# This script is run by continuing operators after the replace-operator ceremony +# and anti-slashing database update. It imports the updated slashing protection +# database back into the vc-lodestar container.
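The export scripts validate their output only with `jq empty`, which checks syntax, not structure. A slightly stricter EIP-3076 sanity check is sketched below; the `check_asdb` helper name and the in-line sample file are illustrative, not part of the scripts above.

```shell
# Sketch only: a stricter EIP-3076 sanity check than `jq empty`.
# check_asdb verifies the interchange format version and that every
# entry carries a 0x-prefixed pubkey. Paths and sample data are illustrative.
check_asdb() {
  jq -e '.metadata.interchange_format_version == "5"
         and (.data | type == "array")
         and all(.data[]; .pubkey | startswith("0x"))' "$1" >/dev/null
}

# Demonstration on a tiny in-line sample:
cat > /tmp/sample-asdb.json <<'EOF'
{"metadata":{"interchange_format_version":"5","genesis_validators_root":"0x00"},
 "data":[{"pubkey":"0xabc","signed_blocks":[],"signed_attestations":[]}]}
EOF
check_asdb /tmp/sample-asdb.json && echo "EIP-3076 structure looks sane"
```

Such a check could be dropped in next to the existing `jq empty` validation if stronger guarantees are wanted before a ceremony.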
+# +# Usage: import_asdb.sh [--input-file <path>] [--data-dir <path>] +# +# Options: +# --input-file <path> Path to updated slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# --data-dir <path> Path to Lodestar data directory (default: ./data/lodestar) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-lodestar container must be STOPPED before import +# - docker and docker compose must be available +# - Input file must be valid EIP-3076 JSON + +set -euo pipefail + +# Default values +INPUT_FILE="./asdb-export/slashing-protection.json" +DATA_DIR="./data/lodestar" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --input-file) + INPUT_FILE="$2" + shift 2 + ;; + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--input-file <path>] [--data-dir <path>]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! -f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Importing anti-slashing database for Lodestar validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Input file: $INPUT_FILE" +echo "" + +#
Check if input file exists +if [ ! -f "$INPUT_FILE" ]; then + echo "Error: Input file not found: $INPUT_FILE" >&2 + exit 1 +fi + +# Validate input file is valid JSON +if ! jq empty "$INPUT_FILE" 2>/dev/null; then + echo "Error: Input file is not valid JSON: $INPUT_FILE" >&2 + exit 1 +fi + +# Make INPUT_FILE absolute for docker bind mount +if [[ "$INPUT_FILE" != /* ]]; then + INPUT_FILE="$(pwd)/$INPUT_FILE" +fi + +# Check if vc-lodestar container is running (it should be stopped) +if docker compose ps --format '{{.Status}}' vc-lodestar 2>/dev/null | grep -qi running; then + echo "Error: vc-lodestar container is still running" >&2 + echo "Please stop the validator client before importing:" >&2 + echo " docker compose stop vc-lodestar" >&2 + echo "" >&2 + echo "Importing while the container is running may cause database corruption." >&2 + exit 1 +fi + +echo "Importing slashing protection data into vc-lodestar container..." + +# Import slashing protection data using a temporary container based on the vc-lodestar service. +# The input file is bind-mounted into the container at /tmp/import.json (read-only). +# We MUST override the entrypoint because the default run.sh ignores arguments. +# Using --force to allow importing even if some data already exists. +if ! 
docker compose run --rm -T \ + --entrypoint node \ + -v "$INPUT_FILE":/tmp/import.json:ro \ + vc-lodestar /usr/app/packages/cli/bin/lodestar validator slashing-protection import \ + --file /tmp/import.json \ + --dataDir /opt/data \ + --network "$NETWORK" \ + --force; then + echo "Error: Failed to import slashing protection into vc-lodestar container" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully imported anti-slashing database" +echo "" +echo "You can now restart the validator client:" +echo " docker compose up -d vc-lodestar" diff --git a/scripts/edit/vc/nimbus/export_asdb.sh b/scripts/edit/vc/nimbus/export_asdb.sh new file mode 100755 index 00000000..12d31b47 --- /dev/null +++ b/scripts/edit/vc/nimbus/export_asdb.sh @@ -0,0 +1,132 @@ +#!/usr/bin/env bash + +# Script to export Nimbus validator anti-slashing database to EIP-3076 format. +# +# This script is run by continuing operators before the replace-operator ceremony. +# It exports the slashing protection database using a temporary container +# to a JSON file that can be updated and re-imported after the ceremony. +# +# Usage: export_asdb.sh [--data-dir ] [--output-file ] +# +# Options: +# --data-dir Path to Nimbus data directory (default: ./data/nimbus) +# --output-file Path for exported slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-nimbus container must be STOPPED (to avoid database locking) +# - docker and docker compose must be available + +set -euo pipefail + +# Default values +DATA_DIR="./data/nimbus" +OUTPUT_FILE="./asdb-export/slashing-protection.json" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + --output-file) + OUTPUT_FILE="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--data-dir ] [--output-file ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! 
-f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Exporting anti-slashing database for Nimbus validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Output file: $OUTPUT_FILE" +echo "" + +# Check if vc-nimbus container is running (it should be stopped to avoid DB locking) +if docker compose ps --format '{{.Status}}' vc-nimbus 2>/dev/null | grep -qi running; then + echo "Error: vc-nimbus container is still running" >&2 + echo "Please stop the validator client before exporting:" >&2 + echo " docker compose stop vc-nimbus" >&2 + exit 1 +fi + +# Make paths absolute for docker bind mount +if [[ "$OUTPUT_FILE" != /* ]]; then + OUTPUT_FILE="$(pwd)/$OUTPUT_FILE" +fi +OUTPUT_DIR=$(dirname "$OUTPUT_FILE") + +# Create output directory if it doesn't exist +mkdir -p "$OUTPUT_DIR" + +echo "Exporting slashing protection data using vc-nimbus container..." + +# Export slashing protection data using a temporary container based on vc-nimbus service. +# Note: slashingdb commands are in nimbus_beacon_node, not nimbus_validator_client. +# Nimbus requires --data-dir BEFORE the subcommand. 
+# We use docker compose run to create a temporary container with the same volumes. +if ! docker compose run --rm -T \ + -v "$OUTPUT_DIR":/tmp/asdb-export \ + --entrypoint /home/user/nimbus_beacon_node \ + vc-nimbus --data-dir=/home/user/data slashingdb export /tmp/asdb-export/slashing-protection.json; then + echo "Error: Failed to export slashing protection from vc-nimbus" >&2 + exit 1 +fi + +# Fix file ownership (docker creates it as root) +EXPORTED_FILE="$OUTPUT_DIR/slashing-protection.json" +if [ -f "$EXPORTED_FILE" ]; then + sudo chown "$(id -u):$(id -g)" "$EXPORTED_FILE" +fi + +# Move file to expected output path if not already there +if [ "$EXPORTED_FILE" != "$OUTPUT_FILE" ]; then + mv "$EXPORTED_FILE" "$OUTPUT_FILE" +fi + +# Validate the exported JSON +if ! jq empty "$OUTPUT_FILE" 2>/dev/null; then + echo "Error: Exported file is not valid JSON" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully exported anti-slashing database" +echo " Output file: $OUTPUT_FILE" +echo "" +echo "You can now proceed with the replace-operator ceremony." diff --git a/scripts/edit/vc/nimbus/import_asdb.sh b/scripts/edit/vc/nimbus/import_asdb.sh new file mode 100755 index 00000000..f538491f --- /dev/null +++ b/scripts/edit/vc/nimbus/import_asdb.sh @@ -0,0 +1,126 @@ +#!/usr/bin/env bash + +# Script to import Nimbus validator anti-slashing database from EIP-3076 format. +# +# This script is run by continuing operators after the replace-operator ceremony +# and anti-slashing database update. It imports the updated slashing protection +# database back into the vc-nimbus container. 
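Every script above saves `COMPOSE_FILE` and `COMPOSE_PROJECT_NAME` before `source .env` and restores them afterwards, so a test harness's compose configuration is not clobbered by the operator's `.env`. The idiom, reduced to a runnable sketch (the demo `.env` file and its values are illustrative):

```shell
# Sketch of the save/source/restore idiom used in each script so that a
# caller's COMPOSE_FILE survives `source .env` (paths are illustrative).
cat > /tmp/demo.env <<'EOF'
COMPOSE_FILE=from-dotenv.yml
NETWORK=hoodi
EOF

COMPOSE_FILE=from-harness.yml            # pre-set by a caller (e.g. a test)
SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}"   # 1. save before sourcing
source /tmp/demo.env                     # 2. .env clobbers COMPOSE_FILE
if [ -n "$SAVED_COMPOSE_FILE" ]; then    # 3. restore the caller's value
  export COMPOSE_FILE="$SAVED_COMPOSE_FILE"
fi
echo "$COMPOSE_FILE / $NETWORK"          # caller's compose file wins, NETWORK kept
```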
+# +# Usage: import_asdb.sh [--input-file ] [--data-dir ] +# +# Options: +# --input-file Path to updated slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# --data-dir Path to Nimbus data directory (default: ./data/nimbus) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-nimbus container must be STOPPED before import +# - docker and docker compose must be available +# - Input file must be valid EIP-3076 JSON + +set -euo pipefail + +# Default values +INPUT_FILE="./asdb-export/slashing-protection.json" +DATA_DIR="./data/nimbus" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --input-file) + INPUT_FILE="$2" + shift 2 + ;; + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--input-file ] [--data-dir ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! -f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Importing anti-slashing database for Nimbus validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Input file: $INPUT_FILE" +echo "" + +# Check if 
input file exists +if [ ! -f "$INPUT_FILE" ]; then + echo "Error: Input file not found: $INPUT_FILE" >&2 + exit 1 +fi + +# Validate input file is valid JSON +if ! jq empty "$INPUT_FILE" 2>/dev/null; then + echo "Error: Input file is not valid JSON: $INPUT_FILE" >&2 + exit 1 +fi + +# Make INPUT_FILE absolute for docker bind mount +if [[ "$INPUT_FILE" != /* ]]; then + INPUT_FILE="$(pwd)/$INPUT_FILE" +fi + +# Check if vc-nimbus container is running (it should be stopped) +if docker compose ps --format '{{.Status}}' vc-nimbus 2>/dev/null | grep -qi running; then + echo "Error: vc-nimbus container is still running" >&2 + echo "Please stop the validator client before importing:" >&2 + echo " docker compose stop vc-nimbus" >&2 + echo "" >&2 + echo "Importing while the container is running may cause database corruption." >&2 + exit 1 +fi + +echo "Importing slashing protection data into vc-nimbus container..." + +# Import slashing protection data using a temporary container based on the vc-nimbus service. +# The input file is bind-mounted into the container at /tmp/import.json (read-only). +# Note: slashingdb commands are in nimbus_beacon_node, not nimbus_validator_client. +# Nimbus requires --data-dir BEFORE the subcommand. +if ! 
docker compose run --rm -T \ + --entrypoint sh \ + -v "$INPUT_FILE":/tmp/import.json:ro \ + vc-nimbus -c "/home/user/nimbus_beacon_node --data-dir=/home/user/data slashingdb import /tmp/import.json"; then + echo "Error: Failed to import slashing protection into vc-nimbus container" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully imported anti-slashing database" +echo "" +echo "You can now restart the validator client:" +echo " docker compose up -d vc-nimbus" diff --git a/scripts/edit/vc/prysm/export_asdb.sh b/scripts/edit/vc/prysm/export_asdb.sh new file mode 100755 index 00000000..ab6534a1 --- /dev/null +++ b/scripts/edit/vc/prysm/export_asdb.sh @@ -0,0 +1,136 @@ +#!/usr/bin/env bash + +# Script to export Prysm validator anti-slashing database to EIP-3076 format. +# +# This script is run by continuing operators before the replace-operator ceremony. +# It exports the slashing protection database using a temporary container +# to a JSON file that can be updated and re-imported after the ceremony. +# +# Usage: export_asdb.sh [--data-dir ] [--output-file ] +# +# Options: +# --data-dir Path to Prysm data directory (default: ./data/prysm) +# --output-file Path for exported slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-prysm container must be STOPPED (to avoid database locking) +# - docker and docker compose must be available + +set -euo pipefail + +# Default values +DATA_DIR="./data/prysm" +OUTPUT_FILE="./asdb-export/slashing-protection.json" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + --output-file) + OUTPUT_FILE="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--data-dir ] [--output-file ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! 
-f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Exporting anti-slashing database for Prysm validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Output file: $OUTPUT_FILE" +echo "" + +# Check if vc-prysm container is running (it should be stopped to avoid DB locking) +if docker compose ps --format '{{.Status}}' vc-prysm 2>/dev/null | grep -qi running; then + echo "Error: vc-prysm container is still running" >&2 + echo "Please stop the validator client before exporting:" >&2 + echo " docker compose stop vc-prysm" >&2 + exit 1 +fi + +# Make paths absolute for docker bind mount +if [[ "$OUTPUT_FILE" != /* ]]; then + OUTPUT_FILE="$(pwd)/$OUTPUT_FILE" +fi +OUTPUT_DIR=$(dirname "$OUTPUT_FILE") + +# Create output directory if it doesn't exist +mkdir -p "$OUTPUT_DIR" + +echo "Exporting slashing protection data using vc-prysm container..." + +# Export slashing protection data using a temporary container based on vc-prysm service. +# Prysm stores data in /data/vc and wallet in /prysm-wallet. +# We use docker compose run to create a temporary container with the same volumes. +if ! 
docker compose run --rm -T \ + -v "$OUTPUT_DIR":/tmp/asdb-export \ + --entrypoint /app/cmd/validator/validator \ + vc-prysm slashing-protection-history export \ + --accept-terms-of-use \ + --datadir=/data/vc \ + --slashing-protection-export-dir=/tmp/asdb-export \ + --$NETWORK; then + echo "Error: Failed to export slashing protection from vc-prysm" >&2 + exit 1 +fi + +# Fix file ownership (docker creates it as root) +# Prysm creates a file named slashing_protection.json in the export directory +EXPORTED_FILE="$OUTPUT_DIR/slashing_protection.json" +if [ -f "$EXPORTED_FILE" ]; then + sudo chown "$(id -u):$(id -g)" "$EXPORTED_FILE" +fi + +# Rename it to match our expected output file name +if [ -f "$EXPORTED_FILE" ]; then + mv "$EXPORTED_FILE" "$OUTPUT_FILE" +fi + +# Validate the exported JSON +if ! jq empty "$OUTPUT_FILE" 2>/dev/null; then + echo "Error: Exported file is not valid JSON" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully exported anti-slashing database" +echo " Output file: $OUTPUT_FILE" +echo "" +echo "You can now proceed with the replace-operator ceremony." diff --git a/scripts/edit/vc/prysm/import_asdb.sh b/scripts/edit/vc/prysm/import_asdb.sh new file mode 100755 index 00000000..bca362b2 --- /dev/null +++ b/scripts/edit/vc/prysm/import_asdb.sh @@ -0,0 +1,130 @@ +#!/usr/bin/env bash + +# Script to import Prysm validator anti-slashing database from EIP-3076 format. +# +# This script is run by continuing operators after the replace-operator ceremony +# and anti-slashing database update. It imports the updated slashing protection +# database back into the vc-prysm container. 
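Each script also converts relative paths to absolute ones before bind-mounting, since `docker -v` requires absolute host paths. A minimal sketch of that normalisation; the `to_abs` helper name is hypothetical (the scripts inline the same `[[ "$X" != /* ]]` test):

```shell
# Sketch of the relative-to-absolute path normalisation done before every
# docker bind mount. to_abs is a hypothetical helper; the scripts inline
# the equivalent prefix test on OUTPUT_FILE / INPUT_FILE.
to_abs() {
  case "$1" in
    /*) printf '%s\n' "$1" ;;                 # already absolute: unchanged
    *)  printf '%s/%s\n' "$(pwd)" "$1" ;;     # relative: prefix with cwd
  esac
}

cd /tmp
to_abs asdb-export/slashing-protection.json   # becomes /tmp/asdb-export/...
to_abs /already/absolute.json                 # unchanged
```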
+# +# Usage: import_asdb.sh [--input-file ] [--data-dir ] +# +# Options: +# --input-file Path to updated slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# --data-dir Path to Prysm data directory (default: ./data/prysm) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-prysm container must be STOPPED before import +# - docker and docker compose must be available +# - Input file must be valid EIP-3076 JSON + +set -euo pipefail + +# Default values +INPUT_FILE="./asdb-export/slashing-protection.json" +DATA_DIR="./data/prysm" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --input-file) + INPUT_FILE="$2" + shift 2 + ;; + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--input-file ] [--data-dir ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! -f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Importing anti-slashing database for Prysm validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Input file: $INPUT_FILE" +echo "" + +# Check if input 
file exists +if [ ! -f "$INPUT_FILE" ]; then + echo "Error: Input file not found: $INPUT_FILE" >&2 + exit 1 +fi + +# Validate input file is valid JSON +if ! jq empty "$INPUT_FILE" 2>/dev/null; then + echo "Error: Input file is not valid JSON: $INPUT_FILE" >&2 + exit 1 +fi + +# Make INPUT_FILE absolute for docker bind mount +if [[ "$INPUT_FILE" != /* ]]; then + INPUT_FILE="$(pwd)/$INPUT_FILE" +fi + +# Check if vc-prysm container is running (it should be stopped) +if docker compose ps --format '{{.Status}}' vc-prysm 2>/dev/null | grep -qi running; then + echo "Error: vc-prysm container is still running" >&2 + echo "Please stop the validator client before importing:" >&2 + echo " docker compose stop vc-prysm" >&2 + echo "" >&2 + echo "Importing while the container is running may cause database corruption." >&2 + exit 1 +fi + +echo "Importing slashing protection data into vc-prysm container..." + +# Import slashing protection data using a temporary container based on the vc-prysm service. +# The input file is bind-mounted into the container at /tmp/slashing_protection.json (read-only). +# We MUST override the entrypoint because the default run.sh ignores arguments. +# Prysm expects the file to be named slashing_protection.json +if ! 
docker compose run --rm -T \ + --entrypoint /app/cmd/validator/validator \ + -v "$INPUT_FILE":/tmp/slashing_protection.json:ro \ + vc-prysm slashing-protection-history import \ + --accept-terms-of-use \ + --datadir=/data/vc \ + --slashing-protection-json-file=/tmp/slashing_protection.json \ + --$NETWORK; then + echo "Error: Failed to import slashing protection into vc-prysm container" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully imported anti-slashing database" +echo "" +echo "You can now restart the validator client:" +echo " docker compose up -d vc-prysm" diff --git a/scripts/edit/vc/teku/export_asdb.sh b/scripts/edit/vc/teku/export_asdb.sh new file mode 100755 index 00000000..8dc2354e --- /dev/null +++ b/scripts/edit/vc/teku/export_asdb.sh @@ -0,0 +1,133 @@ +#!/usr/bin/env bash + +# Script to export Teku validator anti-slashing database to EIP-3076 format. +# +# This script is run by continuing operators before the replace-operator ceremony. +# It exports the slashing protection database using a temporary container +# to a JSON file that can be updated and re-imported after the ceremony. +# +# Usage: export_asdb.sh [--data-dir ] [--output-file ] +# +# Options: +# --data-dir Path to Teku data directory (default: ./data/vc-teku) +# --output-file Path for exported slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-teku container must be STOPPED (to avoid database locking) +# - docker and docker compose must be available + +set -euo pipefail + +# Default values +DATA_DIR="./data/vc-teku" +OUTPUT_FILE="./asdb-export/slashing-protection.json" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + --output-file) + OUTPUT_FILE="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--data-dir ] [--output-file ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! 
-f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Exporting anti-slashing database for Teku validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Output file: $OUTPUT_FILE" +echo "" + +# Check if vc-teku container is running (it should be stopped to avoid DB locking) +if docker compose ps --format '{{.Status}}' vc-teku 2>/dev/null | grep -qi running; then + echo "Error: vc-teku container is still running" >&2 + echo "Please stop the validator client before exporting:" >&2 + echo " docker compose stop vc-teku" >&2 + exit 1 +fi + +# Make paths absolute for docker bind mount +if [[ "$OUTPUT_FILE" != /* ]]; then + OUTPUT_FILE="$(pwd)/$OUTPUT_FILE" +fi +OUTPUT_DIR=$(dirname "$OUTPUT_FILE") + +# Create output directory if it doesn't exist +mkdir -p "$OUTPUT_DIR" + +echo "Exporting slashing protection data using vc-teku container..." + +# Export slashing protection data using a temporary container based on vc-teku service. +# Teku stores data in /home/data (mapped from ./data/vc-teku). +# We use docker compose run to create a temporary container with the same volumes. +if ! 
docker compose run --rm -T \ + -v "$OUTPUT_DIR":/tmp/asdb-export \ + --entrypoint /opt/teku/bin/teku \ + vc-teku slashing-protection export \ + --data-path=/home/data \ + --to=/tmp/asdb-export/slashing-protection.json; then + echo "Error: Failed to export slashing protection from vc-teku" >&2 + exit 1 +fi + +# Fix file ownership (docker creates it as root) +EXPORTED_FILE="$OUTPUT_DIR/slashing-protection.json" +if [ -f "$EXPORTED_FILE" ]; then + sudo chown "$(id -u):$(id -g)" "$EXPORTED_FILE" +fi + +# Move file to expected output path if not already there +if [ "$EXPORTED_FILE" != "$OUTPUT_FILE" ]; then + mv "$EXPORTED_FILE" "$OUTPUT_FILE" +fi + +# Validate the exported JSON +if ! jq empty "$OUTPUT_FILE" 2>/dev/null; then + echo "Error: Exported file is not valid JSON" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully exported anti-slashing database" +echo " Output file: $OUTPUT_FILE" +echo "" +echo "You can now proceed with the replace-operator ceremony." diff --git a/scripts/edit/vc/teku/import_asdb.sh b/scripts/edit/vc/teku/import_asdb.sh new file mode 100755 index 00000000..50d15719 --- /dev/null +++ b/scripts/edit/vc/teku/import_asdb.sh @@ -0,0 +1,127 @@ +#!/usr/bin/env bash + +# Script to import Teku validator anti-slashing database from EIP-3076 format. +# +# This script is run by continuing operators after the replace-operator ceremony +# and anti-slashing database update. It imports the updated slashing protection +# database back into the vc-teku container. 
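Before every export or import, the scripts refuse to proceed while the validator client container is up, by grepping the service status. The guard is sketched below with a canned status string so it runs without docker; the real status source is shown in the comment, and the string value is an assumption about typical compose output:

```shell
# Sketch of the running-container guard used before each export/import.
# The status is canned here so the sketch runs without docker; in the
# scripts it comes from: docker compose ps --format '{{.Status}}' <service>
status="running (healthy)"   # assumed example output for a live container
if printf '%s\n' "$status" | grep -qi running; then
  echo "refusing to continue: validator client still running"
fi
```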
+# +# Usage: import_asdb.sh [--input-file ] [--data-dir ] +# +# Options: +# --input-file Path to updated slashing protection JSON (default: ./asdb-export/slashing-protection.json) +# --data-dir Path to Teku data directory (default: ./data/vc-teku) +# +# Requirements: +# - .env file must exist with NETWORK variable set +# - vc-teku container must be STOPPED before import +# - docker and docker compose must be available +# - Input file must be valid EIP-3076 JSON + +set -euo pipefail + +# Default values +INPUT_FILE="./asdb-export/slashing-protection.json" +DATA_DIR="./data/vc-teku" + +# Parse arguments +while [[ $# -gt 0 ]]; do + case $1 in + --input-file) + INPUT_FILE="$2" + shift 2 + ;; + --data-dir) + DATA_DIR="$2" + shift 2 + ;; + *) + echo "Error: Unknown argument '$1'" >&2 + echo "Usage: $0 [--input-file ] [--data-dir ]" >&2 + exit 1 + ;; + esac +done + +# Check if .env file exists +if [ ! -f .env ]; then + echo "Error: .env file not found in current directory" >&2 + echo "Please ensure you are running this script from the repository root" >&2 + exit 1 +fi + +# Preserve COMPOSE_FILE and COMPOSE_PROJECT_NAME if already set (e.g., by test scripts) +SAVED_COMPOSE_FILE="${COMPOSE_FILE:-}" +SAVED_COMPOSE_PROJECT_NAME="${COMPOSE_PROJECT_NAME:-}" + +# Source .env to get NETWORK +source .env + +# Restore COMPOSE_FILE and COMPOSE_PROJECT_NAME if they were set before sourcing .env +if [ -n "$SAVED_COMPOSE_FILE" ]; then + export COMPOSE_FILE="$SAVED_COMPOSE_FILE" +fi +if [ -n "$SAVED_COMPOSE_PROJECT_NAME" ]; then + export COMPOSE_PROJECT_NAME="$SAVED_COMPOSE_PROJECT_NAME" +fi + +# Check if NETWORK is set +if [ -z "${NETWORK:-}" ]; then + echo "Error: NETWORK variable not set in .env file" >&2 + echo "Please set NETWORK (e.g., mainnet, hoodi, sepolia) in your .env file" >&2 + exit 1 +fi + +echo "Importing anti-slashing database for Teku validator client" +echo "Network: $NETWORK" +echo "Data directory: $DATA_DIR" +echo "Input file: $INPUT_FILE" +echo "" + +# Check if input 
file exists +if [ ! -f "$INPUT_FILE" ]; then + echo "Error: Input file not found: $INPUT_FILE" >&2 + exit 1 +fi + +# Validate input file is valid JSON +if ! jq empty "$INPUT_FILE" 2>/dev/null; then + echo "Error: Input file is not valid JSON: $INPUT_FILE" >&2 + exit 1 +fi + +# Make INPUT_FILE absolute for docker bind mount +if [[ "$INPUT_FILE" != /* ]]; then + INPUT_FILE="$(pwd)/$INPUT_FILE" +fi + +# Check if vc-teku container is running (it should be stopped) +if docker compose ps --format '{{.Status}}' vc-teku 2>/dev/null | grep -qi running; then + echo "Error: vc-teku container is still running" >&2 + echo "Please stop the validator client before importing:" >&2 + echo " docker compose stop vc-teku" >&2 + echo "" >&2 + echo "Importing while the container is running may cause database corruption." >&2 + exit 1 +fi + +echo "Importing slashing protection data into vc-teku container..." + +# Import slashing protection data using a temporary container based on the vc-teku service. +# The input file is bind-mounted into the container at /tmp/import.json (read-only). +# We override the command to run the import instead of the validator client. +if ! 
docker compose run --rm -T \ + -v "$INPUT_FILE":/tmp/import.json:ro \ + --entrypoint /opt/teku/bin/teku \ + vc-teku slashing-protection import \ + --data-path=/home/data \ + --from=/tmp/import.json; then + echo "Error: Failed to import slashing protection into vc-teku container" >&2 + exit 1 +fi + +echo "" +echo "✓ Successfully imported anti-slashing database" +echo "" +echo "You can now restart the validator client:" +echo " docker compose up -d vc-teku" diff --git a/scripts/edit/vc/test/.gitignore b/scripts/edit/vc/test/.gitignore new file mode 100644 index 00000000..0f8c84f0 --- /dev/null +++ b/scripts/edit/vc/test/.gitignore @@ -0,0 +1,4 @@ +# Temporary test artifacts +output/ +data/ +*.tmp diff --git a/scripts/edit/vc/test/README.md b/scripts/edit/vc/test/README.md new file mode 100644 index 00000000..6e9c768b --- /dev/null +++ b/scripts/edit/vc/test/README.md @@ -0,0 +1,34 @@ +# Integration Tests for ASDB Export/Import Scripts + +These tests verify export/import scripts for various VC types work correctly with test data. + +## Prerequisites + +- Docker must be running +- No `.charon` folder required (test uses fixtures) + +## Running Tests + +```bash +# Lodestar VC test +# (for other VC types the usage is identical) +./scripts/edit/vc/test/test_lodestar_asdb.sh +``` + +## ⚠️ Test Isolation + +The test uses isolated data directories within `scripts/edit/vc/test/data/` to avoid any interference with production data in `data/`. + +## Test Flow + +1. Starts vc-lodestar container (no charon dependency) +2. Imports sample slashing protection data from fixtures +3. Exports slashing protection via `export_asdb.sh` +4. Transforms pubkeys via `update-anti-slashing-db.sh` +5. 
Re-imports updated data via `import_asdb.sh` + +## Test Artifacts + +After running, inspect results in `scripts/edit/vc/test/output/`: +- `exported-asdb.json` - Original export +- `updated-asdb.json` - After pubkey transformation diff --git a/scripts/edit/vc/test/docker-compose.test.yml b/scripts/edit/vc/test/docker-compose.test.yml new file mode 100644 index 00000000..8e694c80 --- /dev/null +++ b/scripts/edit/vc/test/docker-compose.test.yml @@ -0,0 +1,51 @@ +# Test override for validator client services +# Removes charon dependency and keeps container alive for testing +# Mounts test fixtures instead of .charon/validator_keys +# Uses dedicated test data directory to avoid conflicts + +services: + # Mock charon service that starts quickly for tests + # (docker compose run starts dependencies by default) + charon: + image: busybox:latest + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: [] + environment: [] + ports: [] + healthcheck: + disable: true + + vc-lodestar: + depends_on: [] + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: + - ./lodestar/run.sh:/opt/lodestar/run.sh + - ./scripts/edit/vc/test/fixtures/validator_keys:/home/charon/validator_keys + - ./scripts/edit/vc/test/data/lodestar:/opt/data + + vc-nimbus: + depends_on: [] + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: + # Mount run.sh from INSIDE the test data directory to avoid conflicts + # with the base compose's run.sh mount (volumes are merged, not replaced) + - ./scripts/edit/vc/test/data/nimbus/run.sh:/home/user/data/run.sh + - ./scripts/edit/vc/test/fixtures/validator_keys:/home/validator_keys + - ./scripts/edit/vc/test/data/nimbus:/home/user/data + + vc-prysm: + depends_on: [] + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: + # Mount run.sh from INSIDE the test data directory to avoid conflicts + - ./scripts/edit/vc/test/data/prysm/run.sh:/home/prysm/run.sh + - ./scripts/edit/vc/test/fixtures/validator_keys:/home/charon/validator_keys + - 
./scripts/edit/vc/test/data/prysm:/data/vc + + vc-teku: + depends_on: [] + entrypoint: ["sh", "-c", "tail -f /dev/null"] + volumes: + # Mount test fixtures validator keys and test data directory + - ./scripts/edit/vc/test/fixtures/validator_keys:/opt/charon/validator_keys + - ./scripts/edit/vc/test/data/teku:/home/data diff --git a/scripts/edit/vc/test/fixtures/sample-slashing-protection.json b/scripts/edit/vc/test/fixtures/sample-slashing-protection.json new file mode 100644 index 00000000..6c1f42bb --- /dev/null +++ b/scripts/edit/vc/test/fixtures/sample-slashing-protection.json @@ -0,0 +1,38 @@ +{ + "metadata": { + "interchange_format_version": "5", + "genesis_validators_root": "0x212f13fc4df078b6cb7db228f1c8307566dcecf900867401a92023d7ba99cb5f" + }, + "data": [ + { + "pubkey": "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "signed_blocks": [ + { + "slot": "81952", + "signing_root": "0x4ff6f743a43f3b4f95350831aeaf0a122a1a392922c45d804280284a69eb850b" + }, + { + "slot": "81984", + "signing_root": "0x5a2b9c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b" + } + ], + "signed_attestations": [ + { + "source_epoch": "2560", + "target_epoch": "2561", + "signing_root": "0x587d6a4f59a58fe15bdac1234e3d51a1d5c8b2e0e3f5e0f2a1b3c4d5e6f7a8b9" + }, + { + "source_epoch": "2561", + "target_epoch": "2562", + "signing_root": "0x6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b" + }, + { + "source_epoch": "2562", + "target_epoch": "2563", + "signing_root": "0x7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7b8c" + } + ] + } + ] +} diff --git a/scripts/edit/vc/test/fixtures/source-cluster-lock.json b/scripts/edit/vc/test/fixtures/source-cluster-lock.json new file mode 100644 index 00000000..d17c11fe --- /dev/null +++ b/scripts/edit/vc/test/fixtures/source-cluster-lock.json @@ -0,0 +1,19 @@ +{ + "cluster_definition": { + "name": "TestCluster", + "num_validators": 1, + "threshold": 3 + }, + 
"distributed_validators": [ + { + "distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", + "public_shares": [ + "0xa3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", + "0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", + "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" + ] + } + ], + "lock_hash": "0xe9dbc87171f99bd8b6f348f6bf314291651933256e712ace299190f5e04e7795" +} diff --git a/scripts/edit/vc/test/fixtures/target-cluster-lock.json b/scripts/edit/vc/test/fixtures/target-cluster-lock.json new file mode 100644 index 00000000..8449e309 --- /dev/null +++ b/scripts/edit/vc/test/fixtures/target-cluster-lock.json @@ -0,0 +1,19 @@ +{ + "cluster_definition": { + "name": "TestCluster", + "num_validators": 1, + "threshold": 3 + }, + "distributed_validators": [ + { + "distributed_public_key": "0xa9fb2be415318eb77709f7c378ab26025371c0b11213d93fd662ffdb06e77a05c7b04573a478e9d5c0c0fd98078965ef", + "public_shares": [ + "0xb11111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111", + "0x8afba316fdcf51e25a89e05e17377b8c72fd465c95346df4ed5694f295faa2ce061e14e579c5bc01a468dbbb191c58e8", + "0xa1aeebe0980509f5f8d8d424beb89004a967da8d8093248f64eb27c4ee5d22ba9c0f157025f551f47b31833f8bc585f8", + "0xa6c283c82cd0b65436861a149fb840849d06ded1dd8d2f900afb358c6a4232004309120f00a553cdccd8a43f6b743c82" + ] + } + ], + "lock_hash": "0xf0000000000000000000000000000000000000000000000000000000000000000" +} diff --git a/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.json b/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.json new file mode 100644 index 00000000..dba1e6ff --- /dev/null +++ 
b/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.json @@ -0,0 +1,31 @@ +{ + "crypto": { + "checksum": { + "function": "sha256", + "message": "eeaf8c59d062a397f74d62b97243860cef812cf168662135b9fca023d26c71df", + "params": {} + }, + "cipher": { + "function": "aes-128-ctr", + "message": "c3daae6234285577322e5d674ed90469da1d888b0a406cde50b6472d5206e165", + "params": { + "iv": "87350b9c54dc1e7563b9d784eba86f6d" + } + }, + "kdf": { + "function": "pbkdf2", + "message": "", + "params": { + "c": 262144, + "dklen": 32, + "prf": "hmac-sha256", + "salt": "f3d31631d40448dd9134bcf54630e2ad2f1668bb8470af8f5394c12e214a6fed" + } + } + }, + "description": "", + "pubkey": "a3fd47653b13a3a0c09d3d1fee3e3c305b8336cbcbfb9bacaf138d21fe7c6b1159a219e70b2d1447143af141c5721b27", + "path": "m/12381/3600/0/0/0", + "uuid": "840CFCF8-A23B-7742-9057-3B149122244A", + "version": 4 +} diff --git a/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.txt b/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.txt new file mode 100644 index 00000000..c0245cc2 --- /dev/null +++ b/scripts/edit/vc/test/fixtures/validator_keys/keystore-0.txt @@ -0,0 +1 @@ +90bb9cd1986560f92016c8766fe8c528 \ No newline at end of file diff --git a/scripts/edit/vc/test/test_lodestar_asdb.sh b/scripts/edit/vc/test/test_lodestar_asdb.sh new file mode 100755 index 00000000..be19d937 --- /dev/null +++ b/scripts/edit/vc/test/test_lodestar_asdb.sh @@ -0,0 +1,218 @@ +#!/usr/bin/env bash + +# Integration test for export/import ASDB scripts with Lodestar VC. +# +# This script: +# 1. Starts vc-lodestar via docker-compose with test override (no charon dependency) +# 2. Sets up keystores in the container +# 3. Stops container and imports sample slashing protection data +# 4. Calls scripts/edit/vc/export_asdb.sh to export slashing protection (container stopped) +# 5. Runs update-anti-slashing-db.sh to transform pubkeys +# 6. 
Calls scripts/edit/vc/import_asdb.sh to import updated slashing protection (container stopped) +# +# Usage: ./scripts/edit/vc/test/test_lodestar_asdb.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)" +cd "$REPO_ROOT" + +# Test artifacts directories +TEST_OUTPUT_DIR="$SCRIPT_DIR/output" +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_COMPOSE_FILE="$SCRIPT_DIR/docker-compose.test.yml" +TEST_DATA_DIR="$SCRIPT_DIR/data/lodestar" +TEST_COMPOSE_FILES="docker-compose.yml:compose-vc.yml:$TEST_COMPOSE_FILE" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +cleanup() { + log_info "Cleaning up test resources..." + COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-lodestar down 2>/dev/null || true + # Keep TEST_OUTPUT_DIR for inspection + # Clean test data to avoid stale DB locks + rm -rf "$TEST_DATA_DIR" 2>/dev/null || true +} + +trap cleanup EXIT + +# Clean test data directory before starting (remove stale locks) +log_info "Preparing test environment..." +COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-lodestar down 2>/dev/null || true +rm -rf "$TEST_DATA_DIR" +mkdir -p "$TEST_DATA_DIR" + +# Check prerequisites +log_info "Checking prerequisites..." + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check for test validator keys in fixtures +KEYSTORE_COUNT=$(ls "$TEST_FIXTURES_DIR/validator_keys"/keystore-*.json 2>/dev/null | wc -l | tr -d ' ') +if [ "$KEYSTORE_COUNT" -eq 0 ]; then + log_error "No keystore files found in $TEST_FIXTURES_DIR/validator_keys" + exit 1 +fi +log_info "Found $KEYSTORE_COUNT test keystore file(s)" + +# Verify test fixtures exist +if [ ! -f "$TEST_FIXTURES_DIR/source-cluster-lock.json" ] || [ ! 
-f "$TEST_FIXTURES_DIR/target-cluster-lock.json" ]; then + log_error "Test fixtures not found in $TEST_FIXTURES_DIR" + exit 1 +fi +log_info "Test fixtures verified" + +# Source .env for NETWORK, then override COMPOSE_FILE with test compose +if [ ! -f .env ]; then + log_warn ".env file not found, creating with NETWORK=hoodi" + echo "NETWORK=hoodi" > .env +fi + +source .env +NETWORK="${NETWORK:-hoodi}" + +# Override COMPOSE_FILE after sourcing .env (which may have its own COMPOSE_FILE) +export COMPOSE_FILE="$TEST_COMPOSE_FILES" + +log_info "Using network: $NETWORK" +log_info "Using compose files: $COMPOSE_FILE" + +# Create test output directory +mkdir -p "$TEST_OUTPUT_DIR" + +# Step 1: Start vc-lodestar via docker-compose +log_info "Step 1: Starting vc-lodestar via docker-compose..." + +docker compose --profile vc-lodestar up -d vc-lodestar + +sleep 2 + +# Verify container is running +if ! docker compose ps --format '{{.Status}}' vc-lodestar 2>/dev/null | grep -qi running; then + log_error "Container failed to start. Checking logs:" + docker compose logs vc-lodestar 2>&1 || true + exit 1 +fi + +log_info "Container started successfully" + +# Step 2: Set up keystores (normally done by run.sh but we override entrypoint) +log_info "Step 2: Setting up keystores..." + +docker compose exec -T vc-lodestar sh -c ' + mkdir -p /opt/data/keystores /opt/data/secrets + for f in /home/charon/validator_keys/keystore-*.json; do + PUBKEY="0x$(grep "\"pubkey\"" "$f" | sed "s/.*: *\"\([^\"]*\)\".*/\1/")" + mkdir -p "/opt/data/keystores/$PUBKEY" + cp "$f" "/opt/data/keystores/$PUBKEY/voting-keystore.json" + cp "${f%.json}.txt" "/opt/data/secrets/$PUBKEY" + echo "Imported keystore for $PUBKEY" + done +' + +log_info "Keystores set up successfully" + +# Step 3: Stop container and import sample slashing protection data +log_info "Step 3: Importing sample slashing protection data..." 
+ +docker compose stop vc-lodestar + +SAMPLE_ASDB="$TEST_FIXTURES_DIR/sample-slashing-protection.json" + +if VC=vc-lodestar "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$SAMPLE_ASDB"; then + log_info "Sample data imported successfully!" +else + log_error "Failed to import sample data" + exit 1 +fi + +# Step 4: Test export using the actual script (container should remain stopped) +log_info "Step 4: Testing export_asdb.sh script..." + +EXPORT_FILE="$TEST_OUTPUT_DIR/exported-asdb.json" + +if VC=vc-lodestar "$REPO_ROOT/scripts/edit/vc/export_asdb.sh" --output-file "$EXPORT_FILE"; then + log_info "Export script successful!" + log_info "Exported content:" + jq '.' "$EXPORT_FILE" + + # Verify exported data matches what we imported + EXPORTED_COUNT=$(jq '.data | length' "$EXPORT_FILE") + EXPORTED_ATTESTATIONS=$(jq '.data[0].signed_attestations | length' "$EXPORT_FILE") + log_info "Exported $EXPORTED_COUNT validator(s) with $EXPORTED_ATTESTATIONS attestation(s)" +else + log_error "Export script failed" + exit 1 +fi + +# Step 5: Run update-anti-slashing-db.sh to transform pubkeys +log_info "Step 5: Running update-anti-slashing-db.sh..." + +UPDATE_SCRIPT="$REPO_ROOT/scripts/edit/vc/update-anti-slashing-db.sh" +SOURCE_LOCK="$TEST_FIXTURES_DIR/source-cluster-lock.json" +TARGET_LOCK="$TEST_FIXTURES_DIR/target-cluster-lock.json" + +# Copy export to a working file that will be modified in place +UPDATED_FILE="$TEST_OUTPUT_DIR/updated-asdb.json" +cp "$EXPORT_FILE" "$UPDATED_FILE" + +log_info "Source pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$SOURCE_LOCK")" +log_info "Target pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$TARGET_LOCK")" + +if "$UPDATE_SCRIPT" "$UPDATED_FILE" "$SOURCE_LOCK" "$TARGET_LOCK"; then + log_info "Update successful!" + log_info "Updated content:" + jq '.' 
"$UPDATED_FILE" + + # Verify the pubkey was transformed + EXPORTED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$EXPORT_FILE") + UPDATED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$UPDATED_FILE") + + if [ -n "$EXPORTED_PUBKEY" ] && [ -n "$UPDATED_PUBKEY" ]; then + if [ "$EXPORTED_PUBKEY" != "$UPDATED_PUBKEY" ]; then + log_info "Pubkey transformation verified:" + log_info " Before: $EXPORTED_PUBKEY" + log_info " After: $UPDATED_PUBKEY" + else + log_error "Pubkey was NOT transformed - test fixture mismatch!" + exit 1 + fi + else + log_error "No pubkey data in exported file - sample import may have failed" + exit 1 + fi +else + log_error "Update script failed" + exit 1 +fi + +# Step 6: Test import using the actual script (container is already stopped) +log_info "Step 6: Testing import_asdb.sh script..." + +if VC=vc-lodestar "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$UPDATED_FILE"; then + log_info "Import script successful!" +else + log_error "Import script failed" + exit 1 +fi + +echo "" +log_info "=========================================" +log_info "All tests passed successfully!" +log_info "=========================================" +log_info "" +log_info "Test artifacts in: $TEST_OUTPUT_DIR" +log_info " - exported-asdb.json (original export)" +log_info " - updated-asdb.json (after pubkey transformation)" diff --git a/scripts/edit/vc/test/test_nimbus_asdb.sh b/scripts/edit/vc/test/test_nimbus_asdb.sh new file mode 100755 index 00000000..298165d6 --- /dev/null +++ b/scripts/edit/vc/test/test_nimbus_asdb.sh @@ -0,0 +1,248 @@ +#!/usr/bin/env bash + +# Integration test for export/import ASDB scripts with Nimbus VC. +# +# This script: +# 1. Builds vc-nimbus image if needed +# 2. Starts vc-nimbus via docker-compose with test override (no charon dependency) +# 3. Sets up keystores in the container +# 4. Stops container and imports sample slashing protection data +# 5. 
Calls scripts/edit/vc/export_asdb.sh to export slashing protection (container stopped) +# 6. Runs update-anti-slashing-db.sh to transform pubkeys +# 7. Calls scripts/edit/vc/import_asdb.sh to import updated slashing protection (container stopped) +# +# Usage: ./scripts/edit/vc/test/test_nimbus_asdb.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)" +cd "$REPO_ROOT" + +# Test artifacts directories +TEST_OUTPUT_DIR="$SCRIPT_DIR/output" +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_COMPOSE_FILE="$SCRIPT_DIR/docker-compose.test.yml" +TEST_DATA_DIR="$SCRIPT_DIR/data/nimbus" +TEST_COMPOSE_FILES="docker-compose.yml:compose-vc.yml:$TEST_COMPOSE_FILE" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +cleanup() { + log_info "Cleaning up test resources..." + COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-nimbus down 2>/dev/null || true + # Keep TEST_OUTPUT_DIR for inspection + # Clean test data to avoid stale DB locks + rm -rf "$TEST_DATA_DIR" 2>/dev/null || true +} + +trap cleanup EXIT + +# Clean test data directory before starting (remove stale locks) +log_info "Preparing test environment..." +COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-nimbus down 2>/dev/null || true +rm -rf "$TEST_DATA_DIR" +mkdir -p "$TEST_DATA_DIR" + +# Copy run.sh into test data directory to satisfy the volume mount from base compose +# (compose merge keeps the original mount ./nimbus/run.sh:/home/user/data/run.sh, +# which conflicts with our test data mount unless we provide the file there) +cp "$REPO_ROOT/nimbus/run.sh" "$TEST_DATA_DIR/run.sh" + +# Check prerequisites +log_info "Checking prerequisites..." + +if ! 
docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check for test validator keys in fixtures +KEYSTORE_COUNT=$(ls "$TEST_FIXTURES_DIR/validator_keys"/keystore-*.json 2>/dev/null | wc -l | tr -d ' ') +if [ "$KEYSTORE_COUNT" -eq 0 ]; then + log_error "No keystore files found in $TEST_FIXTURES_DIR/validator_keys" + exit 1 +fi +log_info "Found $KEYSTORE_COUNT test keystore file(s)" + +# Verify test fixtures exist +if [ ! -f "$TEST_FIXTURES_DIR/source-cluster-lock.json" ] || [ ! -f "$TEST_FIXTURES_DIR/target-cluster-lock.json" ]; then + log_error "Test fixtures not found in $TEST_FIXTURES_DIR" + exit 1 +fi +log_info "Test fixtures verified" + +# Source .env for NETWORK, then override COMPOSE_FILE with test compose +if [ ! -f .env ]; then + log_warn ".env file not found, creating with NETWORK=hoodi" + echo "NETWORK=hoodi" > .env +fi + +source .env +NETWORK="${NETWORK:-hoodi}" + +# Override COMPOSE_FILE after sourcing .env (which may have its own COMPOSE_FILE) +export COMPOSE_FILE="$TEST_COMPOSE_FILES" + +log_info "Using network: $NETWORK" +log_info "Using compose files: $COMPOSE_FILE" + +# Create test output directory +mkdir -p "$TEST_OUTPUT_DIR" + +# Step 0: Build vc-nimbus image if needed +log_info "Step 0: Building vc-nimbus image..." + +if ! docker compose --profile vc-nimbus build vc-nimbus; then + log_error "Failed to build vc-nimbus image" + exit 1 +fi +log_info "Image built successfully" + +# Step 1: Start vc-nimbus via docker-compose +log_info "Step 1: Starting vc-nimbus via docker-compose..." + +docker compose --profile vc-nimbus up -d vc-nimbus + +sleep 2 + +# Verify container is running +if ! docker compose ps --format '{{.Status}}' vc-nimbus 2>/dev/null | grep -qi running; then + log_error "Container failed to start. 
Checking logs:" + docker compose logs vc-nimbus 2>&1 || true + exit 1 +fi + +log_info "Container started successfully" + +# Step 2: Set up keystores using nimbus_beacon_node deposits import +log_info "Step 2: Setting up keystores..." + +# Create a temporary directory in the container for importing +docker compose exec -T vc-nimbus sh -c ' + mkdir -p /home/user/data/validators /tmp/keyimport + + for f in /home/validator_keys/keystore-*.json; do + echo "Importing key from $f" + + # Read password + password=$(cat "${f%.json}.txt") + + # Copy keystore to temp dir + cp "$f" /tmp/keyimport/ + + # Import using nimbus_beacon_node + echo "$password" | /home/user/nimbus_beacon_node deposits import \ + --data-dir=/home/user/data \ + /tmp/keyimport + + # Clean temp dir + rm /tmp/keyimport/* + done + + rm -rf /tmp/keyimport + echo "Done importing keystores" +' + +log_info "Keystores set up successfully" + +# Step 3: Stop container and import sample slashing protection data +log_info "Step 3: Importing sample slashing protection data..." + +docker compose stop vc-nimbus + +SAMPLE_ASDB="$TEST_FIXTURES_DIR/sample-slashing-protection.json" + +if VC=vc-nimbus "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$SAMPLE_ASDB"; then + log_info "Sample data imported successfully!" +else + log_error "Failed to import sample data" + exit 1 +fi + +# Step 4: Test export using the actual script (container should remain stopped) +log_info "Step 4: Testing export_asdb.sh script..." + +EXPORT_FILE="$TEST_OUTPUT_DIR/exported-asdb.json" + +if VC=vc-nimbus "$REPO_ROOT/scripts/edit/vc/export_asdb.sh" --output-file "$EXPORT_FILE"; then + log_info "Export script successful!" + log_info "Exported content:" + jq '.' 
"$EXPORT_FILE" + + # Verify exported data matches what we imported + EXPORTED_COUNT=$(jq '.data | length' "$EXPORT_FILE") + EXPORTED_ATTESTATIONS=$(jq '.data[0].signed_attestations | length' "$EXPORT_FILE" 2>/dev/null || echo "0") + log_info "Exported $EXPORTED_COUNT validator(s) with $EXPORTED_ATTESTATIONS attestation(s)" +else + log_error "Export script failed" + exit 1 +fi + +# Step 5: Run update-anti-slashing-db.sh to transform pubkeys +log_info "Step 5: Running update-anti-slashing-db.sh..." + +UPDATE_SCRIPT="$REPO_ROOT/scripts/edit/vc/update-anti-slashing-db.sh" +SOURCE_LOCK="$TEST_FIXTURES_DIR/source-cluster-lock.json" +TARGET_LOCK="$TEST_FIXTURES_DIR/target-cluster-lock.json" + +# Copy export to a working file that will be modified in place +UPDATED_FILE="$TEST_OUTPUT_DIR/updated-asdb.json" +cp "$EXPORT_FILE" "$UPDATED_FILE" + +log_info "Source pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$SOURCE_LOCK")" +log_info "Target pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$TARGET_LOCK")" + +if "$UPDATE_SCRIPT" "$UPDATED_FILE" "$SOURCE_LOCK" "$TARGET_LOCK"; then + log_info "Update successful!" + log_info "Updated content:" + jq '.' "$UPDATED_FILE" + + # Verify the pubkey was transformed + EXPORTED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$EXPORT_FILE") + UPDATED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$UPDATED_FILE") + + if [ -n "$EXPORTED_PUBKEY" ] && [ -n "$UPDATED_PUBKEY" ]; then + if [ "$EXPORTED_PUBKEY" != "$UPDATED_PUBKEY" ]; then + log_info "Pubkey transformation verified:" + log_info " Before: $EXPORTED_PUBKEY" + log_info " After: $UPDATED_PUBKEY" + else + log_error "Pubkey was NOT transformed - test fixture mismatch!" 
+ exit 1 + fi + else + log_error "No pubkey data in exported file - sample import may have failed" + exit 1 + fi +else + log_error "Update script failed" + exit 1 +fi + +# Step 6: Test import using the actual script (container is already stopped) +log_info "Step 6: Testing import_asdb.sh script..." + +if VC=vc-nimbus "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$UPDATED_FILE"; then + log_info "Import script successful!" +else + log_error "Import script failed" + exit 1 +fi + +echo "" +log_info "=========================================" +log_info "All tests passed successfully!" +log_info "=========================================" +log_info "" +log_info "Test artifacts in: $TEST_OUTPUT_DIR" +log_info " - exported-asdb.json (original export)" +log_info " - updated-asdb.json (after pubkey transformation)" diff --git a/scripts/edit/vc/test/test_prysm_asdb.sh b/scripts/edit/vc/test/test_prysm_asdb.sh new file mode 100755 index 00000000..5aacd2d0 --- /dev/null +++ b/scripts/edit/vc/test/test_prysm_asdb.sh @@ -0,0 +1,265 @@ +#!/usr/bin/env bash + +# Integration test for export/import ASDB scripts with Prysm VC. +# +# This script: +# 1. Starts vc-prysm via docker-compose with test override (no charon dependency) +# 2. Sets up wallet and keystores in the container +# 3. Stops container and imports sample slashing protection data +# 4. Calls scripts/edit/vc/export_asdb.sh to export slashing protection (container stopped) +# 5. Runs update-anti-slashing-db.sh to transform pubkeys +# 6. Calls scripts/edit/vc/import_asdb.sh to import updated slashing protection (container stopped) +# +# Usage: ./scripts/edit/vc/test/test_prysm_asdb.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." 
&& pwd)" +cd "$REPO_ROOT" + +# Test artifacts directories +TEST_OUTPUT_DIR="$SCRIPT_DIR/output" +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_COMPOSE_FILE="$SCRIPT_DIR/docker-compose.test.yml" +TEST_DATA_DIR="$SCRIPT_DIR/data/prysm" +TEST_COMPOSE_FILES="docker-compose.yml:compose-vc.yml:$TEST_COMPOSE_FILE" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +cleanup() { + log_info "Cleaning up test resources..." + COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-prysm down 2>/dev/null || true + # Keep TEST_OUTPUT_DIR for inspection + # Clean test data to avoid stale DB locks + rm -rf "$TEST_DATA_DIR" 2>/dev/null || true +} + +trap cleanup EXIT + +# Clean test data directory before starting (remove stale locks) +log_info "Preparing test environment..." +COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-prysm down 2>/dev/null || true +rm -rf "$TEST_DATA_DIR" +mkdir -p "$TEST_DATA_DIR" + +# Copy run.sh into test data directory to satisfy the volume mount from base compose +cp "$REPO_ROOT/prysm/run.sh" "$TEST_DATA_DIR/run.sh" + +# Check prerequisites +log_info "Checking prerequisites..." + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check for test validator keys in fixtures +KEYSTORE_COUNT=$(ls "$TEST_FIXTURES_DIR/validator_keys"/keystore-*.json 2>/dev/null | wc -l | tr -d ' ') +if [ "$KEYSTORE_COUNT" -eq 0 ]; then + log_error "No keystore files found in $TEST_FIXTURES_DIR/validator_keys" + exit 1 +fi +log_info "Found $KEYSTORE_COUNT test keystore file(s)" + +# Verify test fixtures exist +if [ ! -f "$TEST_FIXTURES_DIR/source-cluster-lock.json" ] || [ ! 
-f "$TEST_FIXTURES_DIR/target-cluster-lock.json" ]; then + log_error "Test fixtures not found in $TEST_FIXTURES_DIR" + exit 1 +fi +log_info "Test fixtures verified" + +# Source .env for NETWORK, then override COMPOSE_FILE with test compose +if [ ! -f .env ]; then + log_warn ".env file not found, creating with NETWORK=hoodi" + echo "NETWORK=hoodi" > .env +fi + +source .env +NETWORK="${NETWORK:-hoodi}" + +# Override COMPOSE_FILE after sourcing .env (which may have its own COMPOSE_FILE) +export COMPOSE_FILE="$TEST_COMPOSE_FILES" + +log_info "Using network: $NETWORK" +log_info "Using compose files: $COMPOSE_FILE" + +# Create test output directory +mkdir -p "$TEST_OUTPUT_DIR" + +# Step 1: Start vc-prysm via docker-compose +log_info "Step 1: Starting vc-prysm via docker-compose..." + +docker compose --profile vc-prysm up -d vc-prysm + +sleep 2 + +# Verify container is running +if ! docker compose ps --format '{{.Status}}' vc-prysm 2>/dev/null | grep -qi running; then + log_error "Container failed to start. Checking logs:" + docker compose logs vc-prysm 2>&1 || true + exit 1 +fi + +log_info "Container started successfully" + +# Step 2: Set up wallet and keystores (similar to run.sh) +# Note: We use /data/vc/wallet so it's persisted in the test data directory +log_info "Step 2: Setting up wallet and keystores..." 
+ +docker compose exec -T vc-prysm sh -c ' + WALLET_DIR="/data/vc/wallet" + WALLET_PASSWORD="prysm-validator-secret" + + # Create wallet + rm -rf $WALLET_DIR + mkdir -p $WALLET_DIR + echo $WALLET_PASSWORD > /data/vc/wallet-password.txt + + /app/cmd/validator/validator wallet create \ + --accept-terms-of-use \ + --wallet-password-file=/data/vc/wallet-password.txt \ + --keymanager-kind=direct \ + --wallet-dir="$WALLET_DIR" + + # Import keys + tmpkeys="/home/validator_keys/tmpkeys" + mkdir -p ${tmpkeys} + + for f in /home/charon/validator_keys/keystore-*.json; do + echo "Importing key ${f}" + + # Copy keystore file to tmpkeys/ directory + cp "${f}" "${tmpkeys}" + + # Import keystore with password + /app/cmd/validator/validator accounts import \ + --accept-terms-of-use=true \ + --wallet-dir="$WALLET_DIR" \ + --keys-dir="${tmpkeys}" \ + --account-password-file="${f//json/txt}" \ + --wallet-password-file=/data/vc/wallet-password.txt + + # Delete tmpkeys/keystore-*.json file + filename="$(basename ${f})" + rm "${tmpkeys}/${filename}" + done + + rm -r ${tmpkeys} + + # Initialize the validator DB by starting and immediately stopping the validator + # This creates the necessary database structure for slashing protection import + echo "Initializing validator database..." + timeout 3 /app/cmd/validator/validator \ + --wallet-dir="$WALLET_DIR" \ + --accept-terms-of-use=true \ + --datadir="/data/vc" \ + --wallet-password-file="/data/vc/wallet-password.txt" \ + --beacon-rpc-provider="http://localhost:3600" \ + --hoodi || true + + echo "Done setting up wallet and initializing DB" +' + +log_info "Wallet and keystores set up successfully" + +# Step 3: Stop container and import sample slashing protection data +log_info "Step 3: Importing sample slashing protection data..." 
+ +docker compose stop vc-prysm + +SAMPLE_ASDB="$TEST_FIXTURES_DIR/sample-slashing-protection.json" + +if VC=vc-prysm "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$SAMPLE_ASDB"; then + log_info "Sample data imported successfully!" +else + log_error "Failed to import sample data" + exit 1 +fi + +# Step 4: Test export using the actual script (container should remain stopped) +log_info "Step 4: Testing export_asdb.sh script..." + +EXPORT_FILE="$TEST_OUTPUT_DIR/exported-asdb.json" + +if VC=vc-prysm "$REPO_ROOT/scripts/edit/vc/export_asdb.sh" --output-file "$EXPORT_FILE"; then + log_info "Export script successful!" + log_info "Exported content:" + jq '.' "$EXPORT_FILE" + + # Verify exported data matches what we imported + EXPORTED_COUNT=$(jq '.data | length' "$EXPORT_FILE") + EXPORTED_ATTESTATIONS=$(jq '.data[0].signed_attestations | length' "$EXPORT_FILE" 2>/dev/null || echo "0") + log_info "Exported $EXPORTED_COUNT validator(s) with $EXPORTED_ATTESTATIONS attestation(s)" +else + log_error "Export script failed" + exit 1 +fi + +# Step 5: Run update-anti-slashing-db.sh to transform pubkeys +log_info "Step 5: Running update-anti-slashing-db.sh..." + +UPDATE_SCRIPT="$REPO_ROOT/scripts/edit/vc/update-anti-slashing-db.sh" +SOURCE_LOCK="$TEST_FIXTURES_DIR/source-cluster-lock.json" +TARGET_LOCK="$TEST_FIXTURES_DIR/target-cluster-lock.json" + +# Copy export to a working file that will be modified in place +UPDATED_FILE="$TEST_OUTPUT_DIR/updated-asdb.json" +cp "$EXPORT_FILE" "$UPDATED_FILE" + +log_info "Source pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$SOURCE_LOCK")" +log_info "Target pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$TARGET_LOCK")" + +if "$UPDATE_SCRIPT" "$UPDATED_FILE" "$SOURCE_LOCK" "$TARGET_LOCK"; then + log_info "Update successful!" + log_info "Updated content:" + jq '.' 
"$UPDATED_FILE" + + # Verify the pubkey was transformed + EXPORTED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$EXPORT_FILE") + UPDATED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$UPDATED_FILE") + + if [ -n "$EXPORTED_PUBKEY" ] && [ -n "$UPDATED_PUBKEY" ]; then + if [ "$EXPORTED_PUBKEY" != "$UPDATED_PUBKEY" ]; then + log_info "Pubkey transformation verified:" + log_info " Before: $EXPORTED_PUBKEY" + log_info " After: $UPDATED_PUBKEY" + else + log_error "Pubkey was NOT transformed - test fixture mismatch!" + exit 1 + fi + else + log_error "No pubkey data in exported file - sample import may have failed" + exit 1 + fi +else + log_error "Update script failed" + exit 1 +fi + +# Step 6: Test import using the actual script (container is already stopped) +log_info "Step 6: Testing import_asdb.sh script..." + +if VC=vc-prysm "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$UPDATED_FILE"; then + log_info "Import script successful!" +else + log_error "Import script failed" + exit 1 +fi + +echo "" +log_info "=========================================" +log_info "All tests passed successfully!" +log_info "=========================================" +log_info "" +log_info "Test artifacts in: $TEST_OUTPUT_DIR" +log_info " - exported-asdb.json (original export)" +log_info " - updated-asdb.json (after pubkey transformation)" diff --git a/scripts/edit/vc/test/test_teku_asdb.sh b/scripts/edit/vc/test/test_teku_asdb.sh new file mode 100755 index 00000000..ec626522 --- /dev/null +++ b/scripts/edit/vc/test/test_teku_asdb.sh @@ -0,0 +1,201 @@ +#!/usr/bin/env bash + +# Integration test for export/import ASDB scripts with Teku VC. +# +# This script: +# 1. Starts vc-teku via docker-compose with test override (no charon dependency) +# 2. Stops container and imports sample slashing protection data +# 3. Calls scripts/edit/vc/export_asdb.sh to export slashing protection (container stopped) +# 4. Runs update-anti-slashing-db.sh to transform pubkeys +# 5. 
Calls scripts/edit/vc/import_asdb.sh to import updated slashing protection (container stopped) +# +# Usage: ./scripts/edit/vc/test/test_teku_asdb.sh + +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +REPO_ROOT="$(cd "$SCRIPT_DIR/../../../.." && pwd)" +cd "$REPO_ROOT" + +# Test artifacts directories +TEST_OUTPUT_DIR="$SCRIPT_DIR/output" +TEST_FIXTURES_DIR="$SCRIPT_DIR/fixtures" +TEST_COMPOSE_FILE="$SCRIPT_DIR/docker-compose.test.yml" +TEST_DATA_DIR="$SCRIPT_DIR/data/teku" +TEST_COMPOSE_FILES="docker-compose.yml:compose-vc.yml:$TEST_COMPOSE_FILE" + +# Colors for output +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +NC='\033[0m' + +log_info() { echo -e "${GREEN}[INFO]${NC} $1"; } +log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; } +log_error() { echo -e "${RED}[ERROR]${NC} $1"; } + +cleanup() { + log_info "Cleaning up test resources..." + COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-teku down 2>/dev/null || true + # Keep TEST_OUTPUT_DIR for inspection + # Clean test data to avoid stale DB locks + rm -rf "$TEST_DATA_DIR" 2>/dev/null || true +} + +trap cleanup EXIT + +# Clean test data directory before starting (remove stale locks) +log_info "Preparing test environment..." +COMPOSE_FILE="$TEST_COMPOSE_FILES" docker compose --profile vc-teku down 2>/dev/null || true +rm -rf "$TEST_DATA_DIR" +mkdir -p "$TEST_DATA_DIR" + +# Check prerequisites +log_info "Checking prerequisites..." + +if ! docker info >/dev/null 2>&1; then + log_error "Docker is not running" + exit 1 +fi + +# Check for test validator keys in fixtures; "|| true" keeps set -e/pipefail from +# aborting on the failing ls when the glob matches nothing, so the friendly error below can fire +KEYSTORE_COUNT=$(ls "$TEST_FIXTURES_DIR/validator_keys"/keystore-*.json 2>/dev/null | wc -l | tr -d ' ' || true) +if [ "$KEYSTORE_COUNT" -eq 0 ]; then + log_error "No keystore files found in $TEST_FIXTURES_DIR/validator_keys" + exit 1 +fi +log_info "Found $KEYSTORE_COUNT test keystore file(s)" + +# Verify test fixtures exist +if [ ! -f "$TEST_FIXTURES_DIR/source-cluster-lock.json" ] || [ ! 
-f "$TEST_FIXTURES_DIR/target-cluster-lock.json" ]; then + log_error "Test fixtures not found in $TEST_FIXTURES_DIR" + exit 1 +fi +log_info "Test fixtures verified" + +# Source .env for NETWORK, then override COMPOSE_FILE with test compose +if [ ! -f .env ]; then + log_warn ".env file not found, creating with NETWORK=hoodi" + echo "NETWORK=hoodi" > .env +fi + +source .env +NETWORK="${NETWORK:-hoodi}" + +# Override COMPOSE_FILE after sourcing .env (which may have its own COMPOSE_FILE) +export COMPOSE_FILE="$TEST_COMPOSE_FILES" + +log_info "Using network: $NETWORK" +log_info "Using compose files: $COMPOSE_FILE" + +# Create test output directory +mkdir -p "$TEST_OUTPUT_DIR" + +# Step 1: Start vc-teku via docker-compose +log_info "Step 1: Starting vc-teku via docker-compose..." + +docker compose --profile vc-teku up -d vc-teku + +sleep 2 + +# Verify container is running +if ! docker compose ps --format '{{.Status}}' vc-teku 2>/dev/null | grep -qi running; then + log_error "Container failed to start. Checking logs:" + docker compose logs vc-teku 2>&1 || true + exit 1 +fi + +log_info "Container started successfully" + +# Step 2: Stop container and import sample slashing protection data +log_info "Step 2: Importing sample slashing protection data..." + +docker compose stop vc-teku + +SAMPLE_ASDB="$TEST_FIXTURES_DIR/sample-slashing-protection.json" + +if VC=vc-teku "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$SAMPLE_ASDB"; then + log_info "Sample data imported successfully!" +else + log_error "Failed to import sample data" + exit 1 +fi + +# Step 3: Test export using the actual script (container should remain stopped) +log_info "Step 3: Testing export_asdb.sh script..." + +EXPORT_FILE="$TEST_OUTPUT_DIR/exported-asdb.json" + +if VC=vc-teku "$REPO_ROOT/scripts/edit/vc/export_asdb.sh" --output-file "$EXPORT_FILE"; then + log_info "Export script successful!" + log_info "Exported content:" + jq '.' 
"$EXPORT_FILE" + + # Verify exported data matches what we imported + EXPORTED_COUNT=$(jq '.data | length' "$EXPORT_FILE") + EXPORTED_ATTESTATIONS=$(jq '.data[0].signed_attestations | length' "$EXPORT_FILE" 2>/dev/null || echo "0") + log_info "Exported $EXPORTED_COUNT validator(s) with $EXPORTED_ATTESTATIONS attestation(s)" +else + log_error "Export script failed" + exit 1 +fi + +# Step 4: Run update-anti-slashing-db.sh to transform pubkeys +log_info "Step 4: Running update-anti-slashing-db.sh..." + +UPDATE_SCRIPT="$REPO_ROOT/scripts/edit/vc/update-anti-slashing-db.sh" +SOURCE_LOCK="$TEST_FIXTURES_DIR/source-cluster-lock.json" +TARGET_LOCK="$TEST_FIXTURES_DIR/target-cluster-lock.json" + +# Copy export to a working file that will be modified in place +UPDATED_FILE="$TEST_OUTPUT_DIR/updated-asdb.json" +cp "$EXPORT_FILE" "$UPDATED_FILE" + +log_info "Source pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$SOURCE_LOCK")" +log_info "Target pubkey (operator 0): $(jq -r '.distributed_validators[0].public_shares[0]' "$TARGET_LOCK")" + +if "$UPDATE_SCRIPT" "$UPDATED_FILE" "$SOURCE_LOCK" "$TARGET_LOCK"; then + log_info "Update successful!" + log_info "Updated content:" + jq '.' "$UPDATED_FILE" + + # Verify the pubkey was transformed + EXPORTED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$EXPORT_FILE") + UPDATED_PUBKEY=$(jq -r '.data[0].pubkey // empty' "$UPDATED_FILE") + + if [ -n "$EXPORTED_PUBKEY" ] && [ -n "$UPDATED_PUBKEY" ]; then + if [ "$EXPORTED_PUBKEY" != "$UPDATED_PUBKEY" ]; then + log_info "Pubkey transformation verified:" + log_info " Before: $EXPORTED_PUBKEY" + log_info " After: $UPDATED_PUBKEY" + else + log_error "Pubkey was NOT transformed - test fixture mismatch!" 
+ exit 1 + fi + else + log_error "No pubkey data in exported file - sample import may have failed" + exit 1 + fi +else + log_error "Update script failed" + exit 1 +fi + +# Step 5: Test import using the actual script (container is already stopped) +log_info "Step 5: Testing import_asdb.sh script..." + +if VC=vc-teku "$REPO_ROOT/scripts/edit/vc/import_asdb.sh" --input-file "$UPDATED_FILE"; then + log_info "Import script successful!" +else + log_error "Import script failed" + exit 1 +fi + +echo "" +log_info "=========================================" +log_info "All tests passed successfully!" +log_info "=========================================" +log_info "" +log_info "Test artifacts in: $TEST_OUTPUT_DIR" +log_info " - exported-asdb.json (original export)" +log_info " - updated-asdb.json (after pubkey transformation)" diff --git a/scripts/edit/vc/update-anti-slashing-db.sh b/scripts/edit/vc/update-anti-slashing-db.sh new file mode 100755 index 00000000..688002a9 --- /dev/null +++ b/scripts/edit/vc/update-anti-slashing-db.sh @@ -0,0 +1,233 @@ +#!/usr/bin/env bash + +# Script to update EIP-3076 anti-slashing DB by replacing pubkey values +# based on lookup in source and target cluster-lock.json files. +# +# Usage: update-anti-slashing-db.sh <eip3076-file> <source-cluster-lock> <target-cluster-lock> +# +# Arguments: +# eip3076-file - Path to EIP-3076 JSON file to update in place +# source-cluster-lock - Path to source cluster-lock.json (original) +# target-cluster-lock - Path to target cluster-lock.json (new, from output/) +# +# The script traverses the EIP-3076 JSON file and finds all "pubkey" values in the +# data array. For each pubkey, it looks up the value in the source cluster-lock.json's +# distributed_validators[].public_shares[] arrays, remembers the indices, and then +# replaces the pubkey with the corresponding value from the target cluster-lock.json +# at the same indices. + +set -euo pipefail + +# Check if jq is installed +if ! command -v jq &> /dev/null; then + echo "Error: jq is required but not installed. 
Please install jq first." >&2 + exit 1 +fi + +# Validate arguments +if [ "$#" -ne 3 ]; then + echo "Usage: $0 <eip3076-file> <source-cluster-lock> <target-cluster-lock>" >&2 + exit 1 +fi + +EIP3076_FILE="$1" +SOURCE_LOCK="$2" +TARGET_LOCK="$3" + +# Validate files exist +if [ ! -f "$EIP3076_FILE" ]; then + echo "Error: EIP-3076 file not found: $EIP3076_FILE" >&2 + exit 1 +fi + +if [ ! -f "$SOURCE_LOCK" ]; then + echo "Error: Source cluster-lock file not found: $SOURCE_LOCK" >&2 + exit 1 +fi + +if [ ! -f "$TARGET_LOCK" ]; then + echo "Error: Target cluster-lock file not found: $TARGET_LOCK" >&2 + exit 1 +fi + +# Validate all files contain valid JSON +if ! jq empty "$EIP3076_FILE" 2>/dev/null; then + echo "Error: EIP-3076 file contains invalid JSON: $EIP3076_FILE" >&2 + exit 1 +fi + +if ! jq empty "$SOURCE_LOCK" 2>/dev/null; then + echo "Error: Source cluster-lock file contains invalid JSON: $SOURCE_LOCK" >&2 + exit 1 +fi + +if ! jq empty "$TARGET_LOCK" 2>/dev/null; then + echo "Error: Target cluster-lock file contains invalid JSON: $TARGET_LOCK" >&2 + exit 1 +fi + +# Create temporary files for processing +TEMP_FILE=$(mktemp) +trap 'rm -f "$TEMP_FILE" "${TEMP_FILE}.tmp"' EXIT INT TERM + +# Function to find pubkey in cluster-lock and return validator_index,share_index +# Returns empty string if not found +find_pubkey_indices() { + local pubkey="$1" + local cluster_lock_file="$2" + + # Search through distributed_validators and public_shares + jq -r --arg pubkey "$pubkey" ' + .distributed_validators as $validators | + foreach range(0; $validators | length) as $v_idx ( + null; + . ; + $validators[$v_idx].public_shares as $shares | + foreach range(0; $shares | length) as $s_idx ( + null; + . ; + if $shares[$s_idx] == $pubkey then + "\($v_idx),\($s_idx)" + else + empty + end + ) + ) | select(. 
!= null) + ' "$cluster_lock_file" | head -n 1 +} + +# Function to get pubkey from cluster-lock at specific indices +get_pubkey_at_indices() { + local validator_idx="$1" + local share_idx="$2" + local cluster_lock_file="$3" + + jq -r --argjson v_idx "$validator_idx" --argjson s_idx "$share_idx" ' + .distributed_validators[$v_idx].public_shares[$s_idx] + ' "$cluster_lock_file" +} + +echo "Reading EIP-3076 file: $EIP3076_FILE" +echo "Source cluster-lock: $SOURCE_LOCK" +echo "Target cluster-lock: $TARGET_LOCK" +echo "" + +# Validate cluster-lock structure +source_validators=$(jq '.distributed_validators | length' "$SOURCE_LOCK") +target_validators=$(jq '.distributed_validators | length' "$TARGET_LOCK") + +# Validate that we got valid numeric values +if [ -z "$source_validators" ] || [ "$source_validators" = "null" ]; then + echo "Error: Source cluster-lock missing 'distributed_validators' field" >&2 + exit 1 +fi + +if [ -z "$target_validators" ] || [ "$target_validators" = "null" ]; then + echo "Error: Target cluster-lock missing 'distributed_validators' field" >&2 + exit 1 +fi + +echo "Source cluster-lock has $source_validators validators" +echo "Target cluster-lock has $target_validators validators" + +if [ "$source_validators" -eq 0 ]; then + echo "Error: Source cluster-lock has no validators" >&2 + exit 1 +fi + +if [ "$target_validators" -eq 0 ]; then + echo "Error: Target cluster-lock has no validators" >&2 + exit 1 +fi + +# Verify that target has at least as many validators as source +if [ "$target_validators" -lt "$source_validators" ]; then + echo "Error: Target cluster-lock has fewer validators ($target_validators) than source ($source_validators)" >&2 + echo " This may result in missing pubkey replacements" >&2 + exit 1 +fi + +echo "" + +# Get all unique pubkeys from the data array +# Note: The same pubkey may appear multiple times, so we deduplicate with sort -u +pubkeys=$(jq -r '.data[].pubkey' "$EIP3076_FILE" | sort -u) + +if [ -z "$pubkeys" ]; then + echo 
"Warning: No pubkeys found in EIP-3076 file" >&2 + exit 0 +fi + +pubkey_count=$(grep -c '^' <<< "$pubkeys") +echo "Found $pubkey_count unique pubkey(s) to process" +echo "" + +# Copy original file to temp file, we'll modify it in place +cp "$EIP3076_FILE" "$TEMP_FILE" + +# Process each pubkey +while IFS= read -r old_pubkey; do + echo "Processing pubkey: $old_pubkey" + + # Find indices in source cluster-lock + indices=$(find_pubkey_indices "$old_pubkey" "$SOURCE_LOCK") + + if [ -z "$indices" ]; then + echo " Error: Pubkey not found in source cluster-lock.json" >&2 + echo " Cannot proceed without mapping for all pubkeys" >&2 + exit 1 + fi + + # Split indices + validator_idx=$(echo "$indices" | cut -d',' -f1) + share_idx=$(echo "$indices" | cut -d',' -f2) + + echo " Found at distributed_validators[$validator_idx].public_shares[$share_idx]" + + # Verify target has sufficient validators + if [ "$validator_idx" -ge "$target_validators" ]; then + echo " Error: Target cluster-lock.json doesn't have validator at index $validator_idx" >&2 + echo " Target has only $target_validators validators" >&2 + exit 1 + fi + + # Verify target validator has sufficient public_shares + target_share_count=$(jq --argjson v_idx "$validator_idx" '.distributed_validators[$v_idx].public_shares | length' "$TARGET_LOCK") + if [ "$share_idx" -ge "$target_share_count" ]; then + echo " Error: Target cluster-lock.json validator[$validator_idx] doesn't have share at index $share_idx" >&2 + echo " Target validator has only $target_share_count shares" >&2 + exit 1 + fi + + # Get corresponding pubkey from target cluster-lock + new_pubkey=$(get_pubkey_at_indices "$validator_idx" "$share_idx" "$TARGET_LOCK") + + if [ -z "$new_pubkey" ] || [ "$new_pubkey" = "null" ]; then + echo " Error: Could not find pubkey at same indices in target cluster-lock.json" >&2 + exit 1 + fi + + echo " Replacing with: $new_pubkey" + + # Replace the pubkey in the JSON data + # Note: The same pubkey may appear multiple times in 
the data array (one per validator). + # This filter will update ALL occurrences of the old pubkey with the new one. + # We modify the temp file in place using jq's output redirection + jq --arg old "$old_pubkey" --arg new "$new_pubkey" ' + (.data[] | select(.pubkey == $old) | .pubkey) |= $new + ' "$TEMP_FILE" > "${TEMP_FILE}.tmp" && mv "${TEMP_FILE}.tmp" "$TEMP_FILE" + + echo " Done" + echo "" +done <<< "$pubkeys" + +# Validate the output is valid JSON +if ! jq empty "$TEMP_FILE" 2>/dev/null; then + echo "Error: Generated invalid JSON" >&2 + exit 1 +fi + +# Replace original file with updated version +cp "$TEMP_FILE" "$EIP3076_FILE" + +echo "Successfully updated $EIP3076_FILE"
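The index-preserving remap performed by update-anti-slashing-db.sh can be exercised standalone on synthetic data. The sketch below is illustrative only: the pubkeys `0xaaa`…`0xddd` are made-up placeholders (not real BLS keys) and the temp-file layout is invented. It reproduces the script's three core moves with plain jq: find the pubkey's (validator, share) indices in the source lock, read the share stored at the same indices in the target lock, and rewrite every matching entry in the EIP-3076 data array.

```shell
#!/usr/bin/env bash
# Minimal sketch of the index-preserving pubkey remap (synthetic fixtures).
set -euo pipefail

tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT

# Source and target cluster-locks share the same shape; only the
# public_shares values differ at each (validator, share) position.
printf '%s' '{"distributed_validators":[{"public_shares":["0xaaa","0xbbb"]}]}' > "$tmp/source-lock.json"
printf '%s' '{"distributed_validators":[{"public_shares":["0xccc","0xddd"]}]}' > "$tmp/target-lock.json"
printf '%s' '{"data":[{"pubkey":"0xbbb","signed_attestations":[]}]}' > "$tmp/asdb.json"

# 1) Locate the pubkey's (validator, share) indices in the source lock.
indices=$(jq -r --arg pk "0xbbb" '
  .distributed_validators | to_entries[] | .key as $v |
  .value.public_shares | to_entries[] |
  select(.value == $pk) | "\($v),\(.key)"' "$tmp/source-lock.json" | head -n 1)
v_idx=${indices%,*}
s_idx=${indices#*,}

# 2) Read the pubkey stored at the same indices in the target lock.
new_pk=$(jq -r --argjson v "$v_idx" --argjson s "$s_idx" \
  '.distributed_validators[$v].public_shares[$s]' "$tmp/target-lock.json")

# 3) Rewrite every matching entry in the EIP-3076 data array.
jq --arg old "0xbbb" --arg new "$new_pk" \
  '(.data[] | select(.pubkey == $old) | .pubkey) |= $new' \
  "$tmp/asdb.json" > "$tmp/asdb.updated.json"

result=$(jq -r '.data[0].pubkey' "$tmp/asdb.updated.json")
echo "$result"   # 0xddd, the share at the same indices in the target lock
```

Matching by position rather than by value is the point of the exercise: it lets a validator's slashing history follow its key share even though the share's public key changed during the resharing ceremony.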