Get the ThemisDB Performance Dashboard up and running in 5 minutes!
- Docker & Docker Compose installed
- ThemisDB repository cloned
- (Optional) ThemisDB built with benchmarks
# Navigate to grafana directory
cd grafana
# Start Grafana + Prometheus
docker-compose up -d
# Verify services are running
docker-compose ps

Expected output:
NAME COMMAND STATUS PORTS
grafana "/run.sh" Up 0.0.0.0:3000->3000/tcp
prometheus "/bin/prometheus --c…" Up 0.0.0.0:9090->9090/tcp
- Open browser: http://localhost:3000
- Login: admin / admin
- (Optional) Change password or skip
- Navigate to: Dashboards → ThemisDB Performance Dashboard
That's it! 🎉 Dashboard is ready.
# Option A: Quick test with example data
cd benchmarks
python3 performance_tracker.py \
--results examples/sample_results.json \
--storage performance_data
# Option B: Run actual benchmarks
cd build
./bench_crud --benchmark_format=json --benchmark_out=results.json
cd ..
python3 benchmarks/performance_tracker.py \
--results build/results.json \
--storage benchmarks/performance_data

Refresh the Grafana dashboard. You should see:
- ✅ Throughput metrics populated
- ✅ Latency charts showing data
- ✅ No critical regressions (first run)
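Under the hood, a regression check is just a tolerance comparison against a stored baseline. If you want to sanity-check a results.json by hand, here is a minimal sketch of reading Google Benchmark's JSON output and applying a simple threshold (the benchmark names, numbers, and 5% tolerance are illustrative assumptions, not ThemisDB's actual detector logic):

```python
import json

# Illustrative Google Benchmark JSON output (the schema follows
# --benchmark_format=json; these names and numbers are made up).
sample = '''
{
  "benchmarks": [
    {"name": "BM_Insert", "items_per_second": 44000.0},
    {"name": "BM_Lookup", "items_per_second": 46000.0}
  ]
}
'''

# Hypothetical baseline throughput per benchmark (ops/s).
baseline = {"BM_Insert": 47000.0, "BM_Lookup": 47000.0}

def regressed(base, current, tolerance=0.05):
    """Flag a regression when throughput drops by more than `tolerance`."""
    return current < base * (1 - tolerance)

for bench in json.loads(sample)["benchmarks"]:
    name, ops = bench["name"], bench["items_per_second"]
    status = "REGRESSION" if regressed(baseline[name], ops) else "ok"
    print(f"{name}: {ops:.0f} ops/s [{status}]")
```

The real check lives in benchmarks/performance_regression_detector.py; this only illustrates the idea.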
# Start
cd grafana && docker-compose up -d
# Stop
docker-compose down
# Restart
docker-compose restart
# View logs
docker-compose logs -f grafana

# Track results
python3 benchmarks/performance_tracker.py \
--results <path-to-results> \
--storage benchmarks/performance_data \
--branch $(git branch --show-current) \
--commit $(git rev-parse --short HEAD)

# Save baseline
python3 benchmarks/baseline_manager.py save \
--results build/benchmark_results \
--branch main \
--version $(cat VERSION) \
--commit $(git rev-parse HEAD)
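Conceptually, a saved baseline is just a snapshot of metrics plus enough metadata (branch, version, commit) to identify the run later. A hypothetical record shape, purely for intuition (not ThemisDB's actual on-disk schema):

```python
import json

# Hypothetical baseline record; the field names are illustrative,
# not ThemisDB's actual schema.
baseline = {
    "branch": "main",
    "version": "1.4.0",
    "commit": "a1b2c3d",
    "metrics": {"crud_throughput_ops": 47000},
}
print(json.dumps(baseline, indent=2))
```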
# List baselines
python3 benchmarks/baseline_manager.py list
# Load baseline
python3 benchmarks/baseline_manager.py load --branch main

# Compare with baseline
python3 benchmarks/performance_regression_detector.py \
--baseline benchmarks/baselines/main/latest.json \
--current build/results.json \
--output report.txt

Check 1: Is Prometheus running?
curl http://localhost:9090/-/healthy
# Should return: Prometheus is Healthy.

Check 2: Are metrics being generated?
ls -la benchmarks/performance_data/metrics/
# Should see: benchmarks.prom

Check 3: Prometheus datasource configured?
- Go to Grafana → Configuration → Data Sources
- Click "Prometheus"
- Click "Save & Test"
- Should see: "Data source is working"
If it's still not working:
# Restart services
cd grafana
docker-compose restart
# Check Prometheus config
docker-compose exec prometheus cat /etc/prometheus/prometheus.yml

Default credentials:
- Username: admin
- Password: admin
If locked out:
# Reset admin password
docker-compose exec grafana grafana-cli admin reset-admin-password newpassword
# Or recreate containers
docker-compose down -v
docker-compose up -d

Check build configuration:
# Ensure benchmarks are enabled
cmake -B build -DBUILD_BENCHMARKS=ON
cmake --build build --target bench_crud

Missing dependencies?
# Install Google Benchmark
sudo apt-get install libbenchmark-dev

- Click through different panels
- Try different time ranges (top-right)
- Use template variables (Branch, Release, Hardware)
- Export charts (Share → Export)
# Configure Slack webhook
export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
# Edit prometheus.yml
vim grafana/prometheus.yml
# Add alerting section
# Restart Prometheus
cd grafana && docker-compose restart prometheus

The performance checks run automatically on PRs. To test manually:
# Trigger workflow
gh workflow run performance-regression-check.yml

# For main branch
git checkout main
# Run benchmarks...
python3 benchmarks/baseline_manager.py save \
--results build/benchmark_results \
--branch main \
--version $(cat VERSION) \
--commit $(git rev-parse HEAD)

Want to see the dashboard with sample data?
# Generate sample metrics
python3 << 'EOF'
import random
import time
with open('benchmarks/performance_data/metrics/benchmarks.prom', 'w') as f:
    f.write('# HELP themisdb_benchmark_throughput_ops Throughput\n')
    f.write('# TYPE themisdb_benchmark_throughput_ops gauge\n')
    for i in range(10):
        throughput = random.randint(40000, 50000)
        # Label each sample with its run index so the series are unique
        # within a single scrape (duplicate series would be rejected).
        f.write(f'themisdb_benchmark_throughput_ops{{benchmark="crud",branch="main",run="{i}"}} {throughput}\n')
        time.sleep(0.1)
EOF
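To double-check that the generated file follows the Prometheus text exposition format, a quick sanity check (the regex is a simplification of the full format, not a complete validator):

```python
import re

# One line in the style generated above; the pattern covers the basic
# shape metric_name{labels} value, not every corner of the format.
line = 'themisdb_benchmark_throughput_ops{benchmark="crud",branch="main"} 45000'
pattern = re.compile(r'^[a-zA-Z_:][a-zA-Z0-9_:]*(\{[^}]*\})?\s+-?[0-9.eE+]+$')
print("format ok" if pattern.match(line) else "format BAD")
```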
# Prometheus will scrape these metrics automatically

Having issues?
- Check Troubleshooting section above
- Search GitHub Issues
- Create a new issue with the performance label
Total Time: ~5 minutes ⏱️
Difficulty: Easy 🟢
You're ready to monitor performance! 🎯