This document explains how to run the test suite, how to generate test coverage reports, and which key testing scenarios are covered.
```shell
# Run all tests
go test ./...

# Run server package tests
go test ./internal/server/...

# Run hook package tests
go test ./internal/hook/...

# Run concurrency tests
go test -v ./internal/server -run TestConcurrent

# Run security tests
go test -v ./internal/server -run "TestCommand|TestPath"

# Run performance tests
go test -v ./internal/server -run "TestStress|TestLoad"
```

We provide a convenient script, `scripts/test-coverage.sh`, to generate test coverage reports. Run it from the project root (or any directory; the script changes to the repo root itself):
```shell
# Run all tests and generate coverage report
./scripts/test-coverage.sh all

# Run only server package tests
./scripts/test-coverage.sh server

# Run only critical scenario tests
./scripts/test-coverage.sh critical

# Generate HTML coverage report
./scripts/test-coverage.sh html

# View function-level coverage
./scripts/test-coverage.sh func

# Clean coverage files
./scripts/test-coverage.sh clean
```

You can also run the underlying Go commands directly:

```shell
# 1. Run tests and generate coverage file
go test -coverprofile=coverage.out -covermode=atomic ./...

# 2. View coverage statistics
go tool cover -func=coverage.out

# 3. Generate HTML report
go tool cover -html=coverage.out -o coverage.html
```

The test suite covers various error conditions to ensure the system handles errors gracefully:
- File Not Found: `TestMakeSureCallable_FileNotExists`
- Permission Errors: `TestMakeSureCallable_PermissionDenied`
- Working Directory Not Found: `TestMakeSureCallable_WorkingDirectoryNotExists`
- Command Timeout: `TestHandleHook_CommandTimeout`
- Configuration File Errors: `TestCreateHookHandler_ConfigFileError`
The concurrency tests verify system stability and correctness under high concurrency:
- Concurrent Execution of Same Hook: `TestConcurrentHookExecution_SameHook`
- Concurrent File Operations: `TestConcurrentHookExecution_FileOperations`
- Resource Contention: `TestConcurrentHookExecution_ResourceContention`
The security tests verify the system's protection capabilities:
- Command Injection Prevention: `TestCommandInjection_Prevention`
- Path Traversal Prevention: `TestPathTraversal_Prevention`
- Strict Mode Validation: `TestCommandValidator_StrictMode`
- Path Whitelist: `TestCommandValidator_PathWhitelist`
- Argument Length Limits: `TestCommandValidator_ArgLengthLimits`
- Special Character Handling: `TestSpecialCharacters_Handling`
The performance tests verify system performance and scalability:
- Benchmark Tests: `BenchmarkHookExecution`, `BenchmarkConcurrentHookExecution`
- Load Testing: `TestLoadTest_MultipleHooks`
- Stress Testing: `TestStressTest_HighConcurrency`
- Memory Leak Testing: `TestMemoryLeak_RepeatedExecutions`
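Benchmarks normally run via `go test -bench`, but the same machinery is available programmatically through `testing.Benchmark`, which is handy for ad hoc measurements. A sketch with a hypothetical stand-in workload:

```go
package main

import (
	"fmt"
	"testing"
)

// runHook is a hypothetical stand-in for executing a hook;
// the real benchmarks measure the server's hook path.
func runHook() {
	sum := 0
	for i := 0; i < 1000; i++ {
		sum += i
	}
	_ = sum
}

func main() {
	// testing.Benchmark runs the function with an auto-scaled b.N,
	// exactly as `go test -bench` would.
	r := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			runHook()
		}
	})
	fmt.Println(r.N > 0) // true
	fmt.Println(r)       // e.g. "1000000    1234 ns/op" (timing varies)
}
```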
```shell
# Run a single test
go test -v ./internal/server -run TestConcurrentHookExecution_SameHook

# Run tests matching a pattern
go test -v ./internal/server -run TestCommand

# Run all benchmark tests
go test -bench=. ./internal/server

# Run a specific benchmark test
go test -bench=BenchmarkHookExecution ./internal/server

# Generate CPU profile
go test -bench=. -cpuprofile=cpu.prof ./internal/server
```

We recommend maintaining the following test coverage targets:
- Overall Coverage: ≥ 80%
- Critical Path Coverage: ≥ 90%
- Security-Related Code Coverage: ≥ 95%
```shell
# Generate HTML report and open it
./scripts/test-coverage.sh html
open coverage.html      # macOS
xdg-open coverage.html  # Linux
```

In CI/CD pipelines, you can use the following commands:
```shell
# Run tests and check coverage
go test -coverprofile=coverage.out -covermode=atomic ./...
go tool cover -func=coverage.out | grep total | awk '{print $3}'

# Exit with a non-zero code if coverage is below the threshold
COVERAGE=$(go tool cover -func=coverage.out | grep total | awk '{print substr($3, 1, length($3)-1)}')
if (( $(echo "$COVERAGE < 80" | bc -l) )); then
  echo "Coverage $COVERAGE% is below 80%"
  exit 1
fi
```

We recommend the following testing best practices:

- Test Isolation: Each test should be independent and must not depend on the execution order of other tests
- Resource Cleanup: Use `t.TempDir()` to create temporary directories that are cleaned up automatically after each test
- Error Handling: Tests should verify error conditions, not just success paths
- Concurrency Safety: Concurrency tests should check for data races and deadlocks (run them with `go test -race`)
- Performance Benchmarks: Run benchmark tests regularly to catch performance regressions
If tests fail, you can:
- Use the `-v` flag to view detailed output
- Use `-run` to run specific tests for debugging
- Check the error messages in the test logs
If coverage reports are inaccurate:
- Ensure you use `-covermode=atomic` for concurrent tests
- Check for untested code branches
- Verify that the tests actually execute the target code