Unified pseudocost object for the regular and deterministic mode #1020
nguidotti wants to merge 53 commits into NVIDIA:main
Conversation
…er and worker pool in two files. removed local reduced cost fixing.
…tions. fix incorrect LP problem passed to the coefficient diving.
…the cutoff when no incumbent was found.
… improve-reliable-branching # Conflicts: # cpp/src/branch_and_bound/pseudo_costs.cpp
…change estimate via dual simplex single pivot (NVIDIA#963).
…ange instead of the objective. fixed candidate ranking in reliability branching.
…. fix incorrect check for infeasible nodes in the tree.
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
…unction).
…g branching as a setting
…fixing # Conflicts: # cpp/src/branch_and_bound/branch_and_bound.cpp
…nd deterministic mode.
@CodeRabbit review

✅ Actions performed: Review triggered.
📝 Walkthrough

This pull request restructures the branch-and-bound solver architecture by extracting worker management into separate headers, introducing mode-parameterized pseudo-cost structures, adding reduced-cost fixing functionality, and refactoring bounds propagation logic. Key changes include removal of …
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cpp/src/branch_and_bound/pseudo_costs.cpp (1)
Lines 835-874: ⚠️ Potential issue | 🟡 Minor

Initialize `averages` even when `reliable_threshold != 0`.

`averages` is only populated in the `reliable_threshold == 0` branch, but the final score recomputation can still fall back to it if one side stayed uninitialized after trial branching. That leaves this path reading indeterminate values.

🩹 Minimal fix

```diff
-  pseudo_cost_averages_t<i_t, f_t> averages;
+  pseudo_cost_averages_t<i_t, f_t> averages = compute_averages();
@@
-  if (reliable_threshold == 0) {
-    averages = compute_averages();
+  if (reliable_threshold == 0) {
     log.printf("PC: num initialized down %d up %d avg down %e up %e\n",
                averages.num_init_down,
                averages.num_init_up,
                averages.down_avg,
                averages.up_avg);
```

Also applies to: line 1056
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/pseudo_costs.cpp` around lines 835 - 874, The variable averages (pseudo_cost_averages_t<i_t, f_t> averages) is only assigned inside the reliable_threshold == 0 branch but later code can use it when one side's pseudocosts remain uninitialized; call compute_averages() unconditionally (e.g., assign averages = compute_averages() right after declaring averages or before the if(reliable_threshold==0) block) so averages is always valid, and keep the existing log.printf call inside the reliable_threshold == 0 branch so logging behavior is unchanged; update references to averages accordingly (symbols: averages, compute_averages(), reliable_threshold).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 1471-1474: The abs_gap is incorrectly computed as lower_bound -
lower_bound making it zero; update both inner search loop locations (the block
that sets lower_bound, upper_bound, rel_gap, abs_gap and the similar block
around lines 1597-1600) to compute abs_gap as the difference between upper and
lower bounds (e.g., abs_gap = upper_bound - lower_bound or fabs(upper_bound -
lower_bound)) while keeping rel_gap = user_relative_gap(original_lp_,
upper_bound, lower_bound); adjust the two occurrences that reference variables
lower_bound, upper_bound, rel_gap, abs_gap so abs_gap reflects the true gap.
- Around line 2491-2493: The absolute-gap check uses the stale pre-cut
root_relax_objective, so abs_gap is computed incorrectly and can miss early
termination; change the abs_gap computation to use the cut-improved
root_objective_ (i.e., compute abs_gap = upper_bound_ - root_objective_) so the
check that compares abs_gap against settings_.absolute_mip_gap_tol (and the
related rel_gap computed via user_relative_gap(original_lp_,
upper_bound_.load(), root_objective_)) reflects the tightened root bound; update
the abs_gap declaration near where rel_gap, original_lp_, upper_bound_.load(),
root_relax_objective and root_objective_ are used in branch_and_bound.cpp
accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5683742e-2ed8-4385-ba08-a3f28add35cf
📒 Files selected for processing (24)

- cpp/src/branch_and_bound/CMakeLists.txt
- cpp/src/branch_and_bound/branch_and_bound.cpp
- cpp/src/branch_and_bound/branch_and_bound.hpp
- cpp/src/branch_and_bound/branch_and_bound_worker.hpp
- cpp/src/branch_and_bound/constants.hpp
- cpp/src/branch_and_bound/deterministic_workers.hpp
- cpp/src/branch_and_bound/diving_heuristics.cpp
- cpp/src/branch_and_bound/diving_heuristics.hpp
- cpp/src/branch_and_bound/mip_node.cpp
- cpp/src/branch_and_bound/mip_node.hpp
- cpp/src/branch_and_bound/pseudo_costs.cpp
- cpp/src/branch_and_bound/pseudo_costs.hpp
- cpp/src/branch_and_bound/reduced_cost_fixing.hpp
- cpp/src/branch_and_bound/worker.hpp
- cpp/src/branch_and_bound/worker_pool.hpp
- cpp/src/dual_simplex/phase2.cpp
- cpp/src/dual_simplex/phase2.hpp
- cpp/src/dual_simplex/simplex_solver_settings.hpp
- cpp/src/math_optimization/solver_settings.cu
- cpp/src/mip_heuristics/diversity/lns/rins.cu
- cpp/src/mip_heuristics/diversity/recombiners/sub_mip.cuh
- cpp/src/mip_heuristics/solver.cu
- cpp/src/utilities/omp_helpers.hpp
- cpp/tests/mip/determinism_test.cu
💤 Files with no reviewable changes (3)
- cpp/src/branch_and_bound/mip_node.cpp
- cpp/src/branch_and_bound/CMakeLists.txt
- cpp/src/branch_and_bound/branch_and_bound_worker.hpp
In cpp/src/branch_and_bound/branch_and_bound.cpp:

```cpp
void branch_and_bound_t<i_t, f_t>::root_reduced_cost_fixing(f_t upper_bound)
{
  if (settings_.reduced_cost_strengthening < 3) { return; }

  mutex_original_lp_.lock();
  std::vector<f_t> lower = original_lp_.lower;
  std::vector<f_t> upper = original_lp_.upper;
  std::vector<bool> bounds_changed;
  mutex_original_lp_.unlock();

  auto [num_fixed, num_improved] = reduced_cost_fixing(root_relax_soln_.z,
                                                       var_types_,
                                                       settings_,
                                                       root_objective_,
                                                       upper_bound,
                                                       lower,
                                                       upper,
                                                       bounds_changed);
```
Guard root reduced-cost fixing until the root relaxation is finalized.

This helper can now run before the root LP is finished, but it depends on `root_objective_`/`root_relax_soln_.z` being valid, and it mutates `original_lp_.lower`/`upper` while the async root solve is still using `original_lp_`. That is a race and can also apply fixing against uninitialized root data.

🛡️ Suggested guard

```diff
 void branch_and_bound_t<i_t, f_t>::root_reduced_cost_fixing(f_t upper_bound)
 {
-  if (settings_.reduced_cost_strengthening < 3) { return; }
+  if (settings_.reduced_cost_strengthening < 3 || solving_root_relaxation_.load() ||
+      !std::isfinite(root_objective_)) {
+    return;
+  }
```

As per coding guidelines, "Ensure race conditions are absent in multi-threaded server implementations; verify proper synchronization of shared state".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 391 - 408, The
root_reduced_cost_fixing function currently can run concurrently with the async
root LP solve and reads/mutates original_lp_ and uses
root_objective_/root_relax_soln_.z before they are finalized; guard this by
checking a root-relaxation-complete flag (e.g., root_relaxation_done_ or
similar) or otherwise ensure the root solve is finished before proceeding, and
only then snapshot original_lp_.lower/upper under mutex_original_lp_ and call
reduced_cost_fixing; in short, add a precondition that verifies root_objective_
and root_relax_soln_.z are valid (or wait for completion) and only then
lock/copy original_lp_ and invoke reduced_cost_fixing from
root_reduced_cost_fixing to eliminate the race on original_lp_ and uninitialized
root data.
In cpp/src/branch_and_bound/diving_heuristics.cpp:

```cpp
for (i_t j : fractional) {
  f_t f_down    = solution[j] - std::floor(solution[j]);
  f_t f_up      = std::ceil(solution[j]) - solution[j];
  f_t down_dist = std::abs(incumbent[j] - std::floor(solution[j]));
  f_t up_dist   = std::abs(std::ceil(solution[j]) - incumbent[j]);
  rounding_direction_t dir =
    down_dist < up_dist + eps ? rounding_direction_t::DOWN : rounding_direction_t::UP;
```
Handle the no-incumbent case in guided diving.

`incumbent[j]` is unguarded now. The deterministic path can broadcast an empty incumbent snapshot before the first feasible solution is found, so a guided dive can read past the end of the vector and crash.

🐛 Minimal hardening

```diff
-      f_t down_dist = std::abs(incumbent[j] - std::floor(solution[j]));
-      f_t up_dist = std::abs(std::ceil(solution[j]) - incumbent[j]);
-      rounding_direction_t dir =
-        down_dist < up_dist + eps ? rounding_direction_t::DOWN : rounding_direction_t::UP;
+      rounding_direction_t dir = rounding_direction_t::DOWN;
+      if (j < static_cast<i_t>(incumbent.size())) {
+        f_t down_dist = std::abs(incumbent[j] - std::floor(solution[j]));
+        f_t up_dist   = std::abs(std::ceil(solution[j]) - incumbent[j]);
+        dir = down_dist < up_dist + eps ? rounding_direction_t::DOWN : rounding_direction_t::UP;
+      } else {
+        dir = f_down < f_up + eps ? rounding_direction_t::DOWN : rounding_direction_t::UP;
+      }
```
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cpp/src/branch_and_bound/diving_heuristics.cpp` around lines 154 - 160, The
guided diving loop reads incumbent[j] without checking whether an incumbent
snapshot exists; if incumbent is empty this can read past bounds and crash. Fix
by guarding access to incumbent in the loop over fractional (e.g., check
incumbent.empty() or incumbent.size() <= j) inside the function that contains
fractional, solution, and rounding_direction_t logic, and when no incumbent is
available compute dir using a safe fallback (for example decide by f_down vs
f_up or treat distances as infinite so rounding uses DOWN/UP deterministically);
ensure the same guarded logic is used wherever guided diving relies on incumbent
to avoid out-of-range reads.
In cpp/src/branch_and_bound/diving_heuristics.hpp:

```diff
 template <typename i_t, typename f_t, branch_and_bound_mode_t Mode>
 branch_variable_t<i_t> pseudocost_diving(pseudo_costs_t<i_t, f_t, Mode>& pc,
                                          const std::vector<i_t>& fractional,
                                          const std::vector<f_t>& solution,
                                          const std::vector<f_t>& root_solution,
                                          logger_t& log);

-template <typename i_t, typename f_t>
-branch_variable_t<i_t> guided_diving(pseudo_costs_t<i_t, f_t>& pc,
+template <typename i_t, typename f_t, branch_and_bound_mode_t Mode>
+branch_variable_t<i_t> guided_diving(pseudo_costs_t<i_t, f_t, Mode>& pc,
```
Include constants.hpp directly before using `branch_and_bound_mode_t`.

These declarations now depend on `branch_and_bound_mode_t`, but this header still only includes pseudo_costs.hpp. cpp/src/branch_and_bound/pseudo_costs.hpp still uses `rounding_direction_t` without including constants.hpp, so standalone includes now depend on include order and can fail to compile.

🔧 Minimal fix in this header

```diff
-#include <branch_and_bound/pseudo_costs.hpp>
+#include <branch_and_bound/constants.hpp>
+#include <branch_and_bound/pseudo_costs.hpp>
```

cpp/src/branch_and_bound/pseudo_costs.hpp needs the same direct include.

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```cpp
#include <branch_and_bound/constants.hpp>
#include <branch_and_bound/pseudo_costs.hpp>

template <typename i_t, typename f_t, branch_and_bound_mode_t Mode>
branch_variable_t<i_t> pseudocost_diving(pseudo_costs_t<i_t, f_t, Mode>& pc,
                                         const std::vector<i_t>& fractional,
                                         const std::vector<f_t>& solution,
                                         const std::vector<f_t>& root_solution,
                                         logger_t& log);

template <typename i_t, typename f_t, branch_and_bound_mode_t Mode>
branch_variable_t<i_t> guided_diving(pseudo_costs_t<i_t, f_t, Mode>& pc,
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cpp/src/branch_and_bound/diving_heuristics.hpp` around lines 25 - 33, The
header must directly include constants.hpp before any use of
branch_and_bound_mode_t to avoid include-order dependencies: add an `#include`
"constants.hpp" at the top of cpp/src/branch_and_bound/diving_heuristics.hpp
(before the template declarations for pseudocost_diving and guided_diving that
use branch_and_bound_mode_t) and apply the same change to
cpp/src/branch_and_bound/pseudo_costs.hpp so rounding_direction_t and
branch_and_bound_mode_t are defined regardless of include order.
In cpp/src/branch_and_bound/mip_node.hpp:

```diff
 bool get_variable_bounds(const std::vector<f_t>& start_lower,
                          const std::vector<f_t>& start_upper,
                          std::vector<f_t>& lower,
                          std::vector<f_t>& upper,
                          std::vector<bool>& bounds_changed) const
 {
-  update_branched_variable_bounds(lower, upper, bounds_changed);
   lower = start_lower;
   upper = start_upper;

   mip_node_t<i_t, f_t>* parent_ptr = parent;
+  bool feasible =
+    update_branched_variable_bounds(start_lower, start_upper, lower, upper, bounds_changed);
+  if (!feasible) { return false; }
```
Intersect with the current bounds instead of overwriting them.

`get_variable_bounds()` now starts from `start_lower` / `start_upper`, but `update_branched_variable_bounds()` still replaces that interval with the node's stored `[branch_var_lower, branch_var_upper]` and skips entirely when `bounds_changed[branch_var]` is already true. After root reduced-cost fixing, an older node can therefore re-widen a strengthened bound or drop its own tighter branch bound (for example, start [0,7], stored child [6,10] → returned [6,10]). Merge with max/min and test feasibility after the merge.

🐛 Suggested fix

```diff
-  if (!bounds_changed[branch_var]) {
-    // Apply the bounds at the current node
-    lower[branch_var] = branch_var_lower;
-    upper[branch_var] = branch_var_upper;
-    bounds_changed[branch_var] = true;
-  }
-
-  return true;
+  lower[branch_var] = std::max(lower[branch_var], branch_var_lower);
+  upper[branch_var] = std::min(upper[branch_var], branch_var_upper);
+  bounds_changed[branch_var] = true;
+  return upper[branch_var] >= lower[branch_var];
```

As per coding guidelines, validate algorithm correctness in optimization logic: simplex pivots, branch-and-bound decisions, routing heuristics, and constraint/objective handling must produce correct results.
Also applies to: 191-203
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cpp/src/branch_and_bound/mip_node.hpp` around lines 153 - 164, The
get_variable_bounds flow currently overwrites start_lower/start_upper with
branch bounds and can undo previously tightened bounds; change the logic so you
intersect (lower[i] = max(lower[i], branch_var_lower), upper[i] = min(upper[i],
branch_var_upper)) instead of assigning, update bounds_changed accordingly, and
only then test feasibility (upper >= lower) and return false if infeasible;
apply the same intersection/fetch-and-test fix where the same pattern appears
(the other block around update_branched_variable_bounds usage, e.g., lines
referenced in the comment), and ensure functions update_branched_variable_bounds
/ get_variable_bounds use the lower/upper vectors (and bounds_changed flags) as
inputs to merge rather than skipping when bounds_changed[branch_var] is already
true.
In cpp/src/branch_and_bound/pseudo_costs.cpp:

```cpp
std::vector<f_t> strong_branch_down(fractional.size(), 0);
std::vector<f_t> strong_branch_up(fractional.size(), 0);
omp_atomic_t<i_t> num_strong_branches_completed = 0;
```
Don't treat unvisited strong-branch candidates as zero-cost observations.

`strong_branch_down`/`strong_branch_up` now start at 0, but the update path only skips NaN. If strong branching stops early on the global time limit, every untouched candidate gets recorded as an initialized zero pseudocost, which skews later reliability branching and objective estimates.

🧭 Proposed fix

```diff
-  std::vector<f_t> strong_branch_down(fractional.size(), 0);
-  std::vector<f_t> strong_branch_up(fractional.size(), 0);
+  std::vector<f_t> strong_branch_down(
+    fractional.size(), std::numeric_limits<f_t>::quiet_NaN());
+  std::vector<f_t> strong_branch_up(
+    fractional.size(), std::numeric_limits<f_t>::quiet_NaN());
```

Also applies to: 720-721, 1098-1119
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cpp/src/branch_and_bound/pseudo_costs.cpp` around lines 574 - 576, The
vectors strong_branch_down and strong_branch_up are being initialized to 0 which
treats unvisited candidates as zero-cost observations; instead initialize them
to NaN (e.g. std::numeric_limits<f_t>::quiet_NaN()) wherever they are created
(including the other occurrences referenced) so untouched entries are marked
missing, and ensure the update/aggregation code that currently checks only for
NaN continues to skip those entries (and that any counters like
num_strong_branches_completed only reflect actual updates). This preserves the
existing NaN-skip logic and prevents zero-initialized pseudocosts from skewing
reliability branching and objective estimates.
In cpp/src/dual_simplex/phase2.cpp:

```cpp
#ifdef CHECK_CHANGE_IN_REDUCED_COST
  const i_t m = A_transpose.n;
  const i_t n = A_transpose.m;
  std::vector<f_t> delta_y_dense(m);
  delta_y.to_dense(delta_y_dense);
  std::vector<f_t> delta_z_check(n);
  std::vector<i_t> delta_z_mark_check(n, 0);
  std::vector<i_t> delta_z_indices_check;
  phase2::compute_reduced_cost_update(lp,
                                      basic_list,
                                      nonbasic_list,
                                      delta_y_dense,
                                      leaving_index,
                                      direction,
                                      delta_z_mark_check,
                                      delta_z_indices_check,
                                      delta_z_check,
                                      work_estimate);
  f_t error_check = 0.0;
  for (i_t k = 0; k < n; ++k) {
    const f_t diff = std::abs(delta_z[k] - delta_z_check[k]);
    if (diff > 1e-6) {
      printf("delta_z error %d transpose %e no transpose %e diff %e\n",
             k,
             delta_z[k],
             delta_z_check[k],
             diff);
    }
    error_check = std::max(error_check, diff);
  }
  if (error_check > 1e-6) { printf("delta_z error %e\n", error_check); }
#endif
```
CHECK_CHANGE_IN_REDUCED_COST no longer builds.
This validation block still references lp, basic_list, and nonbasic_list, but compute_delta_z(...) no longer receives them, and the helper was moved out of namespace phase2. Enabling the check now turns the debug path into a compile error instead of validating the sparse update.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cpp/src/dual_simplex/phase2.cpp` around lines 174 - 205, The
CHECK_CHANGE_IN_REDUCED_COST debug block calls
phase2::compute_reduced_cost_update with outdated parameters (it references lp,
basic_list, nonbasic_list) and the helper was moved/renamed; update the debug
call to use the new helper signature (the standalone compute_delta_z or
new_compute_reduced_cost_update function) and pass the current variables used by
the non-debug path (e.g., delta_y_dense, leaving_index, direction,
delta_z_mark_check, delta_z_indices_check, delta_z_check, work_estimate) instead
of lp/basic_list/nonbasic_list, or adapt the helper call to
compute_delta_z(delta_y_dense, leaving_index, direction, delta_z_mark_check,
delta_z_indices_check, delta_z_check, work_estimate) so the debug path compiles
and validates the same sparse update as the main code.
This PR simplifies the pseudo-cost class so that both the regular and deterministic B&B use the same code path as much as possible. It also disables mutexes and atomics when running in deterministic mode, since each thread has its own snapshot of the pseudocosts, and moves the routines related to the diving heuristics back to diving_heuristics.cpp. It includes #979 and #956.
Pending: MIPLIB benchmark
Checklist