This is my reproducible Python implementation of a single‑point Selection Perturbative Hyper‑Heuristic (SPHH), with a benchmark suite (f1–f24) spanning dimensions 2, 10, 30, and 50, taken from the paper "A modified particle swarm optimization algorithm based on velocity updating mechanism" by Chunfeng Wang and Wenxin Song. It includes a full experiment pipeline: run all problems across multiple seeds, aggregate results, export publication‑ready tables, and plot log‑scaled optimality‑gap convergence curves.
Python 3.10+ is recommended. Dependencies: numpy, matplotlib.
- SPHH optimizer & benchmark registry — `single_point_sphh.py` defines:
  - Six low‑level perturbative heuristics (LLHs): `gaussian_full`, `gaussian_kdims`, `cauchy_full`, `random_reset_coord`, `opposition_blend`, `pull_to_best`.
  - Selection modes: UCB1 bandit (`selection_mode="ucb"`) and uniform random (`"random"`).
  - Acceptance modes: Simulated Annealing (`acceptance_mode="sa"`) and Greedy (`"greedy"`).
  - Step‑size self‑adaptation using the 1/5 success rule (updated every 50 evaluations).
  - A small `OBJECTIVES` registry mapping names (e.g., `f3_D10`) to `(func, lo, hi, D)`.
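As a rough illustration of the UCB1 selection step (a sketch under assumed bookkeeping, not the repo's exact code), each LLH is treated as a bandit arm scored by its running reward average plus an exploration bonus:

```python
import math
import numpy as np

def ucb1_pick(counts, rewards, t, c=1.5):
    """Pick an LLH index by UCB1; untried heuristics are chosen first.

    counts[i]  = times heuristic i was applied
    rewards[i] = cumulative reward credited to heuristic i
    t          = total number of selections so far
    c          = exploration weight (ucb_c in the optimizer)
    """
    counts = np.asarray(counts, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    untried = np.flatnonzero(counts == 0)
    if untried.size:                       # try each arm at least once
        return int(untried[0])
    scores = rewards / counts + c * np.sqrt(math.log(t) / counts)
    return int(np.argmax(scores))
```

With a larger `c`, under‑sampled heuristics are retried more often; with `c=0` selection becomes purely greedy on average reward.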
- Experiment config — `exp_config.py` centralizes seeds and per‑dimension evaluation budgets, plus the `results/` directory path.
- Batch runner — `run_suite.py` iterates over all objective keys × seeds and writes one row per run to `results/seeds_all.csv`.
- Aggregator — `aggregate_suite.py` groups by function key and writes `results/summary_all.csv` with mean/std/best and runtime stats.
- Table builders — `make_tables.py` → `results/tables.md` (Markdown) and `results/tables.tex` (LaTeX).
- Params snapshot — `make_params_table.py` records your final settings to `results/params.md` / `results/params.tex` (selection/acceptance, budgets, etc.).
- Convergence plots — `plot_convergence.py` re‑runs selected base functions (`BASE_FUNCTIONS = ["f24"]` by default) and writes PNG plots to `results/plots/` and median CSV traces to `results/traces/`, using |f − f*| on a log y‑axis.
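For intuition, the median optimality‑gap trace behind these plots can be sketched as follows (an illustrative helper, assuming best‑so‑far traces stacked per seed; the function name is not from the repo):

```python
import numpy as np

def median_gap_trace(traces, f_star, eps=1e-16):
    """Median |f - f*| across seeds at each evaluation index.

    traces: array-like of shape (n_seeds, n_evals), best-so-far values.
    eps: small floor so the gap stays positive on a log-scaled y-axis.
    """
    gaps = np.abs(np.asarray(traces, dtype=float) - f_star)
    return np.maximum(np.median(gaps, axis=0), eps)
```

The `eps` floor matters because a seed that hits the optimum exactly would otherwise produce a zero gap, which cannot be drawn on a log axis.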
```bash
# 1) (optional) create and activate a virtualenv
python -m venv .venv
# Windows
.venv\Scripts\activate
# macOS/Linux
source .venv/bin/activate

# 2) install dependencies
pip install numpy matplotlib

# 3) run all objectives across all seeds; write per‑run CSV
python run_suite.py

# 4) aggregate per‑run CSV into a per‑function summary
python aggregate_suite.py

# 5) build Markdown/LaTeX tables from the summary
python make_tables.py

# 6) record the experiment parameters you used (Markdown + LaTeX)
python make_params_table.py

# 7) (optional) plot convergence curves and write CSV traces
python plot_convergence.py
```

All outputs are written to the `results/` folder (created automatically). See the tree below.
- Python: 3.10+ recommended
- Packages: `numpy`, `matplotlib`
- Working dir: run all commands from the repo root

```bash
python -m venv .venv
# Windows
.venv\Scripts\activate
# macOS/Linux
source .venv/bin/activate
pip install numpy matplotlib
```

- Single‑point search: maintain one incumbent `x` within bounds `[ℓ, u]^D`.
- Operator pool (LLHs): propose `x'` via one of six perturbation heuristics, then clamp to bounds.
- Heuristic selection: choose the next LLH using UCB1 (or random for ablations).
- Move acceptance: accept with SA (or greedy for ablations).
- Self‑adaptation: per‑dimension step sizes shrink/expand via the 1/5 success rule.
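Putting the pieces together, the core loop can be sketched like this (a simplified, hedged illustration: only two stand‑in LLHs, uniform selection instead of UCB1, an assumed linear cooling schedule, and no step‑size adaptation; it is not the actual `single_point_sphh.py` code):

```python
import numpy as np

def sphh_sketch(objective, lo, hi, dim, max_evals, seed=0, t0=1.0):
    """Minimal single-point search: perturb, clamp, SA-accept, track best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, dim)               # incumbent solution
    fx = objective(x)
    best_x, best_f = x.copy(), fx
    llhs = [                                   # tiny stand-in operator pool
        lambda v: v + rng.normal(0.0, 0.1 * (hi - lo), dim),
        lambda v: v + 0.01 * (hi - lo) * rng.standard_cauchy(dim),
    ]
    for t in range(1, max_evals):
        cand = np.clip(llhs[rng.integers(len(llhs))](x), lo, hi)
        fc = objective(cand)
        temp = t0 * max(1.0 - t / max_evals, 1e-12)   # linear cooling (assumed)
        # SA acceptance: improvements always, uphill moves with Boltzmann prob.
        if fc <= fx or rng.random() < np.exp(-(fc - fx) / temp):
            x, fx = cand, fc
        if fx < best_f:                        # best-so-far is tracked separately
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```

Note that SA may let the incumbent drift uphill, which is why the best‑so‑far pair is recorded separately from `x`.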
- Verbose single run: run `python single_point_sphh.py` to print iterations for a chosen problem.
Key constructor args (see `single_point_sphh.py`):

```python
SPHH(
    objective, bounds=(lo, hi), dim=D, max_evals=...,
    seed=..., selection_mode="ucb"|"random",
    acceptance_mode="sa"|"greedy", ucb_c=1.5, cooling_frac=0.2,
    init="random", verbose=False, print_every=...
)
```

```text
results/
  seeds_all.csv          # one row per (objective key, seed) run
  summary_all.csv        # aggregated stats per function key
  tables.md              # Markdown tables (by base function × D)
  tables.tex             # LaTeX tables
  params.md              # experiment parameters (human‑readable)
  params.tex             # LaTeX version of parameters
  plots/
    conv_gap_<base>.png  # log‑scale optimality‑gap plots by D
  traces/
    trace_gap_<key>.csv  # median optimality‑gap vs eval index
```
Edit `exp_config.py`:

- `SEEDS = list(range(10))` → change the number/values of seeds.
- `MAX_EVALS_BY_D = {10: ..., 30: ..., 50: ...}` → per‑dimension evaluation budgets (fallback to `MAX_EVALS_DEFAULT`).
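A sketch of how such a config with a fallback budget might look (the numeric values below are placeholders, not the repo's actual defaults):

```python
SEEDS = list(range(10))                    # seeds 0..9
MAX_EVALS_DEFAULT = 20_000                 # placeholder fallback budget
MAX_EVALS_BY_D = {10: 50_000, 30: 100_000, 50: 150_000}  # placeholder values

def budget_for_dim(d):
    """Per-dimension evaluation budget, with a default fallback."""
    return MAX_EVALS_BY_D.get(d, MAX_EVALS_DEFAULT)
```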
`run_suite.py` runs every key in `OBJECTIVES` (e.g., `f3_D10`, `f3_D30`, `f3_D50`, …). To restrict:

- Temporarily comment out entries in `OBJECTIVES` in `single_point_sphh.py`, or
- Add a filter inside `sorted_keys()` in `run_suite.py`.
- Suite defaults: `selection_mode="ucb"`, `acceptance_mode="sa"`, `verbose=False` (set inside `run_suite.py`).
- For a one‑off verbose run, execute `python single_point_sphh.py` and adjust `which`, `max_evals`, `verbose`, `print_every` at the bottom.
- Edit `BASE_FUNCTIONS` in `plot_convergence.py` to select base functions (e.g., `"f17"`, `"f21"`, `"f23"`, `"f4"`).
- Update `FSTAR_BY_BASE` for any base with a non‑zero optimum (e.g., `f2`, `f17`) to keep the optimality gap correct.
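For instance, the lookup plus gap computation might be sketched as follows (the f* entries below are illustrative placeholders only, not the paper's true optima):

```python
# f* per base function; any base absent from the dict defaults to 0.0.
# The entries below are ILLUSTRATIVE placeholders, not the paper's values.
FSTAR_BY_BASE = {"f2": -1.0, "f17": 0.4}

def optimality_gap(f_value, base):
    """|f - f*| with f* looked up per base function (default 0.0)."""
    return abs(f_value - FSTAR_BY_BASE.get(base, 0.0))
```

Forgetting an entry silently treats the optimum as 0.0, which shifts the plotted gap for that base by a constant.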
`make_tables.py` generates compact tables with `D`, `f_best`, `mean_f_best`, `f_best_std_dev`, and mean run time (s) for each base function. `plot_convergence.py` produces log‑scale optimality‑gap curves by dimension and writes the per‑base median gap traces as CSV for reproducibility.
- All randomness is seeded via `exp_config.SEEDS` and numpy's `default_rng` in `SPHH`.
- Budgets are controlled centrally (`exp_config.budget_for_dim(D)`), so changing one place updates the entire pipeline.
- The aggregator and table builders are pure functions of the CSV outputs – delete `results/` and re‑run to regenerate everything.