Add cuts to the MIP solver #814
base: release/26.02
Conversation
📝 Walkthrough
Adds a full cuts subsystem and APIs, expands MIP/cut/reliability settings and parameter bindings, threads timing and basis/edge-norm state through branch-and-bound, augments CSR/dense/vector utilities, and standardizes CONCURRENT_HALT_RETURN propagation across many solver codepaths.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
/ok to test 9ea0271
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cpp/src/dual_simplex/basis_updates.cpp (1)
2274-2317: Fix incomplete error propagation from `refactor_basis`.
`refactor_basis` propagates `-2` (lines 2284, 2317), but call sites handle it inconsistently:
- phase2.cpp:2327, 3060, 3067: check only `> 0`, missing negative return values
- cuts.cpp:2682: ignores the return value entirely

Expected: all call sites check for `-2` (and other error codes) and propagate appropriately, not just positive values. Use `!= 0` checks or explicit `-2` handling where applicable.
cpp/src/dual_simplex/mip_node.hpp (1)
229-243: Missing `integer_infeasible` in `detach_copy()`.
The `detach_copy()` method copies most node fields but does not copy `integer_infeasible`. If diving heuristics rely on this value, it could cause incorrect behavior or inconsistent logging.
🐛 Proposed fix

```diff
 mip_node_t<i_t, f_t> detach_copy() const
 {
   mip_node_t<i_t, f_t> copy(lower_bound, vstatus);
   copy.branch_var         = branch_var;
   copy.branch_dir         = branch_dir;
   copy.branch_var_lower   = branch_var_lower;
   copy.branch_var_upper   = branch_var_upper;
   copy.fractional_val     = fractional_val;
   copy.objective_estimate = objective_estimate;
   copy.node_id            = node_id;
+  copy.integer_infeasible = integer_infeasible;
   return copy;
 }
```
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/branch_and_bound.hpp`:
- Around line 154-159: Reads of original_lp_ fields (.lower, .upper, .obj_scale)
in diving_thread() and the various multithreaded heuristic routines must be
protected with the same mutex used for writes: acquire mutex_original_lp_ before
any access to original_lp_ and release it after the read; this applies to all
occurrences (e.g., the accesses noted in diving_thread() and the other
heuristic/thread functions that read original_lp_). Locate uses of original_lp_
in those functions, wrap each read (and any short sequence of dependent reads)
with a lock/unlock on mutex_original_lp_, and keep the critical section minimal
to avoid contention while ensuring consistent snapshot semantics matching the
existing locked writes.
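As an illustration of that locking pattern, here is a minimal sketch; the member names `mutex_original_lp_` and `original_lp_` follow the comment above, while `lp_snapshot_t` and `bb_state_t` are hypothetical stand-ins rather than the actual branch-and-bound class.

```cpp
#include <mutex>
#include <vector>

// Hypothetical stand-in for the bounds/scale fields read by heuristic threads.
struct lp_snapshot_t {
  std::vector<double> lower;
  std::vector<double> upper;
  double obj_scale = 1.0;
};

struct bb_state_t {
  std::mutex mutex_original_lp_;
  lp_snapshot_t original_lp_;

  // Copy the fields a heuristic thread needs while holding the same mutex used
  // for writes, keeping the critical section as small as possible.
  lp_snapshot_t snapshot_original_lp()
  {
    std::lock_guard<std::mutex> guard(mutex_original_lp_);
    return original_lp_;  // copy taken under the lock gives a consistent snapshot
  }
};
```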
In `@cpp/src/dual_simplex/crossover.cpp`:
- Around line 1381-1396: The dual-feasibility check is using phase-1 duals in
solution.z while vstatus reflects duals for the original LP; before calling
dual_infeasibility (or the gate that may call dual_phase2) restore or recompute
the original objective duals (y/z) rather than using the phase-1 solution.z, or
avoid overwriting solution.z when assigning phase1_solution; update the code
paths around phase1_solution, solution, dual_infeasibility, vstatus, and
dual_phase2 so that dual_infeasibility receives y/z corresponding to the
original LP objective (e.g., recompute duals for the original objective or
save/restore phase-1 duals prior to the gate).
In `@cpp/src/dual_simplex/cuts.cpp`:
- Around line 103-109: The cut_orthogonality_ vector is not being reset each
scoring pass because resize(cut_storage_.m, 1) won't overwrite existing entries
when the size is unchanged; change the reset logic to explicitly set all entries
to the default (e.g., use std::fill or assign) after sizing so
cut_orthogonality_ is fully overwritten for the current pass (refer to
cut_orthogonality_ and cut_storage_.m in the scoring function where
cut_distances_, cut_norms_, and cut_scores_ are resized).
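A minimal sketch of that reset, assuming `cut_orthogonality_` holds one score per stored cut (`cut_storage_.m` entries) with a default of 1.0; the free-function form is illustrative only.

```cpp
#include <cstddef>
#include <vector>

void reset_orthogonality_scores(std::vector<double>& cut_orthogonality, int num_cuts)
{
  // assign() both resizes and overwrites every entry, whereas resize() leaves
  // pre-existing elements untouched when the size does not change.
  cut_orthogonality.assign(static_cast<std::size_t>(num_cuts), 1.0);
}
```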
- Around line 47-98: The distance and orthogonality computations can divide by
zero/near-zero norms; update cut_distance and cut_orthogonality to guard those
divisions by clamping any computed norm to a small positive epsilon before
dividing: in cut_distance, after computing cut_norm set something like
guarded_norm = std::max(cut_norm, eps) and use guarded_norm to compute distance;
in cut_orthogonality, compute guarded_norm_i = std::max(norm_i, eps) and
guarded_norm_j = std::max(norm_j, eps) and use those when dividing the absolute
dot; choose eps using a numeric limit for f_t (e.g.,
std::numeric_limits<f_t>::epsilon() or a small constant) so cut_norms_ and
cut_norm remain unchanged but divisions are stable.
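A minimal sketch of the guarded division, assuming `f_t = double` and a dense cut row; the names mirror the review comment rather than the actual cuts.cpp code.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

double guarded_cut_distance(const std::vector<double>& cut,
                            double cut_rhs,
                            const std::vector<double>& x)
{
  double cut_norm = 0.0;
  double activity = 0.0;
  for (std::size_t k = 0; k < cut.size(); ++k) {
    cut_norm += cut[k] * cut[k];
    activity += cut[k] * x[k];
  }
  cut_norm = std::sqrt(cut_norm);

  // Clamp the norm away from zero before dividing, as suggested above.
  const double eps          = std::numeric_limits<double>::epsilon();
  const double guarded_norm = std::max(cut_norm, eps);
  return (activity - cut_rhs) / guarded_norm;
}
```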
In `@cpp/src/dual_simplex/cuts.hpp`:
- Around line 99-133: The function minimum_violation currently calls exit(1)
when it finds a non-cut; instead remove the process termination and return a
sentinel failure value (e.g., std::numeric_limits<f_t>::lowest() or -inf) so
callers can handle the error; update the loop in minimum_violation (which
computes Cx from C via to_compressed_col and matrix_vector_multiply into Cx and
compares to cut_rhs) to, upon detecting Cx[k] <= cut_rhs[k], optionally emit a
debug-only assertion or logging and then return the chosen sentinel (and do not
call exit), ensuring min_cut_violation is not used after this early return.
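A minimal sketch of the sentinel-return idea, assuming `f_t = double` and that `Cx` and `cut_rhs` are already computed as in the original `minimum_violation`; the function name here is a placeholder.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <limits>
#include <vector>

double minimum_violation_sketch(const std::vector<double>& Cx,
                                const std::vector<double>& cut_rhs)
{
  double min_cut_violation = std::numeric_limits<double>::infinity();
  for (std::size_t k = 0; k < Cx.size(); ++k) {
    if (Cx[k] <= cut_rhs[k]) {
      assert(false && "row does not cut off the point");  // debug-only diagnostic
      return std::numeric_limits<double>::lowest();       // sentinel instead of exit(1)
    }
    min_cut_violation = std::min(min_cut_violation, Cx[k] - cut_rhs[k]);
  }
  return min_cut_violation;
}
```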
In `@cpp/src/mip/presolve/third_party_presolve.cpp`:
- Line 412: Guard the use of PAPILO_GITHASH so builds without that macro still
compile: add a fallback definition (e.g. before the CUOPT_LOG_INFO call) by
checking `#ifndef` PAPILO_GITHASH and `#define` PAPILO_GITHASH "unknown" (or wrap
the CUOPT_LOG_INFO invocation in an `#ifdef`/`#else` that logs a safe string).
Target the CUOPT_LOG_INFO(...) call that references PAPILO_GITHASH and ensure
PAPILO_GITHASH is defined to a string literal fallback when absent.
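A minimal sketch of that guard; only the macro names come from the comment above, while the "unknown" fallback string and the commented log call are assumptions.

```cpp
// Provide a string-literal fallback so translation units compile even when the
// build system does not define PAPILO_GITHASH.
#ifndef PAPILO_GITHASH
#define PAPILO_GITHASH "unknown"
#endif

// With the fallback in place, a log call referencing the macro stays valid, e.g.:
// CUOPT_LOG_INFO("PaPILO githash: %s", PAPILO_GITHASH);
```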
🧹 Nitpick comments (9)
cpp/src/dual_simplex/sparse_matrix.cpp (1)
409-433: Consider adding column index bounds validation in `append_row`.
The `append_row` method does not validate that column indices in the input sparse vector are within bounds (`c.i[k] < this->n`). While the existing `check_matrix` method can detect this post-hoc, invalid indices could cause out-of-bounds access in downstream operations before validation occurs.
This is consistent with the approach in `append_rows`, which validates `C.n` but not individual column indices. If this is intentional (relying on debug-only `check_matrix` validation per the codebase pattern), this can remain as-is.
cpp/tests/dual_simplex/unit_tests/solve.cpp (1)
329-478: Disabled test code should be enabled or removed.
This test block is wrapped in `#if 0`, so it will never compile or execute. Disabled test code adds maintenance burden without providing value. Consider:
- Enable the test if it's ready and the `solve_linear_program_with_cuts` function is available in this PR.
- Remove the code if it's not ready, and track it in an issue instead.
- Use a proper skip mechanism (e.g., `GTEST_SKIP()`) if the test should be temporarily disabled but still compile.
1114-1323: Add a debug-only dimension check for `cuts_basic`.
`append_cuts` assumes `cuts_basic.n == L0_.m`. If violated, the Uᵀ solve and subsequent indexing are undefined. A CHECK_MATRIX-guarded assert would catch misuse without impacting release performance.
🔧 Proposed fix

```diff
 i_t basis_update_mpf_t<i_t, f_t>::append_cuts(const csr_matrix_t<i_t, f_t>& cuts_basic)
 {
   const i_t m = L0_.m;
+#ifdef CHECK_MATRIX
+  assert(cuts_basic.n == m);
+#endif
```

Based on learnings: In the cuOPT dual simplex solver, CSR/CSC matrices (including the quadratic objective matrix Q) are required to have valid dimensions and indices by construction. Runtime bounds checking in performance-critical paths like matrix scaling is avoided to prevent slowdowns. Validation is performed via debug-only `check_matrix()` calls wrapped in `#ifdef CHECK_MATRIX`.
cpp/include/cuopt/linear_programming/mip/solver_settings.hpp (1)
81-95: Consider adding brief documentation for new MIP settings.
The new public data members control important MIP solver behavior. While these are internal headers per project conventions, brief inline comments would improve maintainability. For example:

```cpp
i_t node_limit            = ...;  // Maximum nodes to explore (max = unlimited)
i_t reliability_branching = -1;   // -1=auto, 0=off, >0=enabled
i_t max_cut_passes        = 0;    // Number of cutting-plane rounds at root
i_t mir_cuts              = -1;   // -1=auto, 0=off, 1=on (Mixed Integer Rounding)
// etc.
```

The defaults and structure are consistent with simplex_solver_settings.hpp.
342-406: Verify numerical stability in reduced cost fixing calculations.
The function uses hardcoded thresholds (`threshold = 1e-3`, `weaken = 1e-5`) for bound tightening. Consider:
- Line 359: The `!std::isfinite` check correctly guards against NaN/Inf reduced costs, but a warning to `printf` bypasses the logger.
- Lines 370-373 and 381-384: The `weaken` factor loosens bounds slightly to avoid numerical issues, which is good practice.

The algorithm appears correct for reduced cost fixing. The `printf` at Line 360 should use the logger for consistency.
Use logger instead of printf

```diff
   if (!std::isfinite(reduced_costs[j])) {
-    printf("Variable %d reduced cost is %e\n", j, reduced_costs[j]);
+    settings_.log.printf("Variable %d reduced cost is %e\n", j, reduced_costs[j]);
     continue;
   }
```
408-458: Verify mutex ordering to prevent potential deadlocks.
The `set_new_solution` function acquires multiple mutexes: `mutex_original_lp_` (Lines 411, 430, 433) and `mutex_upper_` (Lines 423, 434, 450). While the current pattern appears safe (locks are released before acquiring new ones), the interleaved locking could be fragile.
Consider documenting the expected lock ordering or consolidating critical sections where possible to reduce complexity.
1856-1858: Remove debug timing output or make it conditional.
Multiple timing blocks use `if (1 || ...)`, which always prints timing information:
- Line 1856: `if (1 || cut_generation_time > 1.0)`
- Line 1863: `if (1 || score_time > 1.0)`
- Line 1918: `if (1 || add_cuts_time > 1.0)`
- Line 1958: `if (1 || node_presolve_time > 1.0)`
- Line 1989: `if (1 || dual_phase2_time > 1.0)`
- Line 2025: `if (1 || remove_cuts_time > 1.0)`

These appear to be debug statements. Consider removing the `1 ||` to only log when operations exceed 1 second, or guard with a debug flag.
Remove unconditional timing output

```diff
-  if (1 || cut_generation_time > 1.0) {
+  if (cut_generation_time > 1.0) {
     settings_.log.printf("Cut generation time %.2f seconds\n", cut_generation_time);
   }
```

Apply similarly to the other occurrences.
Also applies to: 1862-1865, 1918-1920, 1958-1960, 1989-1991, 2025-2027
1873-1877: Remove or guard debug print statements.
Lines 1873-1877 contain unconditional debug output (`#if 1` and direct `printf`). This should be removed or guarded with a debug macro for release builds.
Guard debug output

```diff
-#if 1
-  cut_pool.print_cutpool_types();
-  print_cut_types("In LP ", cut_types, settings_);
-  printf("Cut pool size: %d\n", cut_pool.pool_size());
-#endif
+#ifdef DEBUG_CUTS
+  cut_pool.print_cutpool_types();
+  print_cut_types("In LP ", cut_types, settings_);
+  settings_.log.printf("Cut pool size: %d\n", cut_pool.pool_size());
+#endif
```

cpp/src/dual_simplex/cuts.cpp (1)
2345-2685: Remove unused `solution` parameter from `add_cuts()`.
The `solution` parameter is declared but never used in the function body, creating confusion about responsibilities. The caller (branch_and_bound.cpp) independently resizes `root_relax_soln_` post-call anyway (lines 1963–1965), so the parameter serves no purpose. Either remove it from the signature or, if the intent is for `add_cuts()` to manage solution resizing, use it within the function to resize for the expanded problem. The current design creates maintenance risk without providing value.
…nality. Remove minimum_violation
/ok to test 4c086ac
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/cuts.cpp`:
- Around line 770-772: The loop currently exits completely when
negate_inequality == -1 (the break on that condition), which prevents generating
MIR cuts for subsequent rows; change the control flow to skip the current
iteration instead of breaking the loop by replacing the break with continue so
other rows are still processed (locate the check of negate_inequality in
cuts.cpp where the TODO comment sits and make the single-line replacement from
break to continue).
- Around line 431-436: In greedy_knapsack_problem, protect the division
ratios[i] = values[i] / weights[i] against zero (or near-zero) weights: detect
if weights[i] is zero (or fabs(weights[i]) < epsilon) and set ratios[i]
appropriately (e.g., +infinity when weight==0 and value>0, -infinity when
value<0, or 0 when value==0) using std::numeric_limits<f_t>::infinity();
otherwise compute the normal division; update the ratios vector calculation to
use this guarded logic so no division-by-zero occurs.
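A minimal sketch of that guarded ratio computation, assuming `f_t = double`; the epsilon value and function name are illustrative rather than the actual `greedy_knapsack_problem` code.

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

void compute_value_weight_ratios(const std::vector<double>& values,
                                 const std::vector<double>& weights,
                                 std::vector<double>& ratios)
{
  constexpr double eps = 1e-12;  // assumed near-zero threshold
  const double inf     = std::numeric_limits<double>::infinity();
  ratios.resize(values.size());
  for (std::size_t i = 0; i < values.size(); ++i) {
    if (std::fabs(weights[i]) < eps) {
      // Zero-weight items are ranked by the sign of the value instead of dividing.
      ratios[i] = values[i] > 0 ? inf : (values[i] < 0 ? -inf : 0.0);
    } else {
      ratios[i] = values[i] / weights[i];
    }
  }
}
```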
- Around line 2761-2788: The function write_solution_for_cut_verification
currently opens a raw FILE* (fid) and may leak it if any operation throws or
early-returns; replace the raw FILE* use with an RAII wrapper (e.g.,
std::unique_ptr<FILE, decltype(&fclose)> or a small scope-guard) so fclose is
guaranteed on all exit paths, or switch to std::ofstream; update the code that
writes to fid (fprintf calls) to use the chosen RAII-managed stream or the
managed FILE pointer while preserving behavior in
write_solution_for_cut_verification and ensure fclose is removed from manual
calls.
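A minimal sketch of the RAII-managed `FILE*` suggested above; the path argument and solution layout are placeholders, not the actual `write_solution_for_cut_verification` code.

```cpp
#include <cstdio>
#include <cstddef>
#include <memory>
#include <vector>

bool write_solution_sketch(const char* path, const std::vector<double>& x)
{
  // fclose is bound as the deleter, so the file is closed on every exit path.
  std::unique_ptr<FILE, decltype(&fclose)> fid(fopen(path, "w"), &fclose);
  if (!fid) { return false; }
  for (std::size_t j = 0; j < x.size(); ++j) {
    fprintf(fid.get(), "%zu %.17g\n", j, x[j]);  // write through the managed pointer
  }
  return true;
}
```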
🧹 Nitpick comments (2)
cpp/src/dual_simplex/cuts.hpp (1)
442-470: Verify parameter ordering consistency in `add_cuts` and `remove_cuts`.
The `add_cuts` and `remove_cuts` functions have different parameter orderings (e.g., `settings` position, `Arow` presence). While not incorrect, this inconsistency could lead to confusion at call sites.
cpp/src/dual_simplex/cuts.cpp (1)
361-363: NaN comparison for early return is fragile.
The check `objective != objective` is an idiomatic NaN test, but this pattern can be optimized away by aggressive compilers with `-ffast-math`. Consider using `std::isnan()` for clarity and robustness.
♻️ Suggested fix

```diff
-  if (objective != objective) { return -1; }
+  if (std::isnan(objective)) { return -1; }
```
/ok to test 606c482
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/branch_and_bound.cpp`:
- Around line 323-339: The division that computes iter_node uses
exploration_stats_.total_lp_iters / nodes_explored and can divide by zero when
nodes_explored is 0; modify the report code around iter_node so you check
nodes_explored (the i_t variable nodes_explored) before dividing: if
nodes_explored > 0 compute iter_node = exploration_stats_.total_lp_iters /
static_cast<f_t>(nodes_explored) else set iter_node to 0.0 (or another safe
sentinel), then use iter_node in the printf; this prevents a crash and preserves
numeric stability in report/branch_and_bound.cpp.
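A minimal sketch of the guarded average, assuming `f_t = double` and `i_t = int`; the function form is illustrative, the actual report lives in branch_and_bound.cpp.

```cpp
double average_lp_iters_per_node(long long total_lp_iters, int nodes_explored)
{
  // Avoid the division when the report runs before any node has been explored.
  if (nodes_explored <= 0) { return 0.0; }
  return static_cast<double>(total_lp_iters) / static_cast<double>(nodes_explored);
}
```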
- Around line 351-367: The loop performing reduced‑cost fixing must skip any
tightening when the current gap is non‑positive; add a guard at the start of the
per‑column logic (inside the for over reduced_costs in branch_and_bound.cpp)
that checks upper_bound <= root_obj (or abs_gap <= 0) and continues to next j if
true, so that compute_objective(root_obj) and abs_gap are validated before using
them to tighten bounds (symbols to update: root_obj, upper_bound, abs_gap,
reduced_costs loop and the per‑column tightening logic).
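A minimal sketch of that gap guard, assuming `f_t = double`; the names follow the review comment (upper_bound, root_obj, abs_gap), not the actual member layout.

```cpp
#include <cmath>

bool reduced_cost_fixing_applicable(double upper_bound, double root_obj)
{
  const double abs_gap = upper_bound - root_obj;
  // Reduced-cost fixing only makes sense with a finite, strictly positive gap.
  return std::isfinite(abs_gap) && abs_gap > 0.0;
}
```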
🧹 Nitpick comments (2)
cpp/src/dual_simplex/cuts.cpp (2)
16-44: Track cut deduplication TODO.
The TODO indicates planned dedup logic in a hot path; consider creating a follow-up issue to avoid pool bloat.
Would you like me to draft an issue template or a minimal dedup sketch?
225-229: `drop_cuts()` is still a stub.
If this is intentionally deferred, please track it; otherwise I can help sketch a safe policy.
Would you like me to propose a simple age/score-based eviction policy?
/ok to test 606c482
/ok to test fb50454
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/cuts.hpp`:
- Around line 148-167: Protect the division in cut_orthogonality by checking
norms before dividing: inside the cut_orthogonality(i_t i, i_t j)
implementation, compute norm_i and norm_j as before but if either is below
settings_.zero_tol treat the pair as maximally non-aligned by returning 1.0;
otherwise compute and return 1.0 - std::abs(dot) / (norm_i * norm_j). This uses
the existing settings_.zero_tol guard to avoid dividing by near-zero values and
preserves current return semantics.
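A minimal sketch of the guarded score, assuming dense cut rows and `f_t = double`; in cuts.hpp the cuts are sparse and `zero_tol` comes from `settings_`.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

double cut_orthogonality_sketch(const std::vector<double>& cut_i,
                                const std::vector<double>& cut_j,
                                double zero_tol)
{
  double dot = 0.0, norm_i = 0.0, norm_j = 0.0;
  for (std::size_t k = 0; k < cut_i.size(); ++k) {
    dot += cut_i[k] * cut_j[k];
    norm_i += cut_i[k] * cut_i[k];
    norm_j += cut_j[k] * cut_j[k];
  }
  norm_i = std::sqrt(norm_i);
  norm_j = std::sqrt(norm_j);
  // Treat near-zero cuts as maximally non-aligned instead of dividing by ~0.
  if (norm_i < zero_tol || norm_j < zero_tol) { return 1.0; }
  return 1.0 - std::abs(dot) / (norm_i * norm_j);
}
```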
🧹 Nitpick comments (2)
cpp/src/dual_simplex/cuts.hpp (2)
65-91: Consider reusing `cut_info_t` to avoid duplicate counting logic.
The `print_cut_types` function duplicates the cut-type counting logic that already exists in `cut_info_t::record_cut_types`. This could be simplified:
♻️ Suggested refactor

```diff
 template <typename i_t, typename f_t>
 void print_cut_types(const std::string& prefix,
                      const std::vector<cut_type_t>& cut_types,
                      const simplex_solver_settings_t<i_t, f_t>& settings)
 {
-  i_t num_gomory_cuts   = 0;
-  i_t num_mir_cuts      = 0;
-  i_t num_knapsack_cuts = 0;
-  i_t num_cg_cuts       = 0;
-  for (i_t i = 0; i < cut_types.size(); i++) {
-    if (cut_types[i] == cut_type_t::MIXED_INTEGER_GOMORY) {
-      num_gomory_cuts++;
-    } else if (cut_types[i] == cut_type_t::MIXED_INTEGER_ROUNDING) {
-      num_mir_cuts++;
-    } else if (cut_types[i] == cut_type_t::KNAPSACK) {
-      num_knapsack_cuts++;
-    } else if (cut_types[i] == cut_type_t::CHVATAL_GOMORY) {
-      num_cg_cuts++;
-    }
-  }
+  cut_info_t<i_t, f_t> info;
+  // Note: record_cut_types takes non-const ref, may need adjustment
+  auto mutable_types = cut_types;
+  info.record_cut_types(mutable_types);
   settings.log.printf("%s: Gomory cuts: %d, MIR cuts: %d, Knapsack cuts: %d CG cuts: %d\n",
                       prefix.c_str(),
-                      num_gomory_cuts,
-                      num_mir_cuts,
-                      num_knapsack_cuts,
-                      num_cg_cuts);
+                      info.num_gomory_cuts,
+                      info.num_mir_cuts,
+                      info.num_knapsack_cuts,
+                      info.num_cg_cuts);
 }
```

Also, the loop at line 74 compares signed `i_t` with unsigned `size_t` from `cut_types.size()`, which may trigger compiler warnings. Consider using `size_t` for the loop index or casting.
267-304: Consider documenting the workspace vector invariants.
The `tableau_equality_t` class maintains several workspace vectors (`b_bar_`, `x_workspace_`, `c_workspace_`, etc.) and marking arrays (`nonbasic_mark_`, `x_mark_`). Consider adding brief comments documenting when these workspaces are valid/dirty, especially since `generate_base_equality` likely modifies them and may require reset between calls.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cpp/src/dual_simplex/bounds_strengthening.cpp (1)
243-277: ⚠️ Potential issue | 🟡 Minor
Debug code references undefined `problem` variable — will not compile if enabled.
The debug block references `problem.lower` and `problem.upper` (lines 250, 254-255, 258, 263), but `problem` is not in scope. It was a constructor parameter, not stored as a member. If `DEBUG_BOUND_STRENGTHENING` is set to `1`, this will cause a compilation error.
🐛 Proposed fix: use the correct parameter names

```diff
   for (i_t i = 0; i < n; ++i) {
-    if (lower[i] > problem.lower[i] + settings.primal_tol ||
-        (!std::isfinite(problem.lower[i]) && std::isfinite(lower[i]))) {
+    if (lower[i] > lower_bounds[i] + settings.primal_tol ||
+        (!std::isfinite(lower_bounds[i]) && std::isfinite(lower[i]))) {
       num_lb_changed++;
       lb_change +=
-        std::isfinite(problem.lower[i])
-          ? (lower[i] - problem.lower[i]) / (1e-6 + std::max(abs(lower[i]), abs(problem.lower[i])))
+        std::isfinite(lower_bounds[i])
+          ? (lower[i] - lower_bounds[i]) / (1e-6 + std::max(abs(lower[i]), abs(lower_bounds[i])))
           : 1.0;
     }
-    if (upper[i] < problem.upper[i] - settings.primal_tol ||
-        (!std::isfinite(problem.upper[i]) && std::isfinite(upper[i]))) {
+    if (upper[i] < upper_bounds[i] - settings.primal_tol ||
+        (!std::isfinite(upper_bounds[i]) && std::isfinite(upper[i]))) {
       num_ub_changed++;
       ub_change +=
-        std::isfinite(problem.upper[i])
-          ? (problem.upper[i] - upper[i]) / (1e-6 + std::max(abs(problem.upper[i]), abs(upper[i])))
+        std::isfinite(upper_bounds[i])
+          ? (upper_bounds[i] - upper[i]) / (1e-6 + std::max(abs(upper_bounds[i]), abs(upper[i])))
           : 1.0;
     }
   }
```

Note: After this fix, the debug comparison would show changes from the input bounds (before strengthening) to the computed bounds (stored in `lower`/`upper`), which appears to be the intended behavior.
🧹 Nitpick comments (7)
cpp/src/dual_simplex/bounds_strengthening.cpp (3)
228-229: Variable shadowing: local `bounds_changed` shadows the parameter.
The local variable `bool bounds_changed` on line 228 shadows the function parameter `const std::vector<bool>& bounds_changed` from line 94. While this doesn't cause a bug currently (the parameter is only used before this point), it can lead to confusion and potential issues if future code attempts to use the parameter after this line.
♻️ Proposed fix: rename the local variable

```diff
-  bool bounds_changed = lb_updated || ub_updated;
-  if (bounds_changed) { num_bounds_changed++; }
+  bool bound_updated = lb_updated || ub_updated;
+  if (bound_updated) { num_bounds_changed++; }
```
15-27: Consider guarding against near-zero coefficients for numerical stability.
The `update_lb` and `update_ub` functions divide by `coeff` without guarding against near-zero values. While sparse matrix coefficients are typically non-zero, near-zero coefficients could cause numerical instability by producing very large bounds. The downstream `std::max`/`std::min` operations provide some protection, but an explicit guard would be more robust.
♻️ Optional: Add near-zero coefficient guard

```diff
 template <typename f_t>
 static inline f_t update_lb(f_t curr_lb, f_t coeff, f_t delta_min_act, f_t delta_max_act)
 {
+  constexpr f_t coeff_eps = 1e-12;
+  if (std::abs(coeff) < coeff_eps) { return curr_lb; }
   auto comp_bnd = (coeff < 0.) ? delta_min_act / coeff : delta_max_act / coeff;
   return std::max(curr_lb, comp_bnd);
 }

 template <typename f_t>
 static inline f_t update_ub(f_t curr_ub, f_t coeff, f_t delta_min_act, f_t delta_max_act)
 {
+  constexpr f_t coeff_eps = 1e-12;
+  if (std::abs(coeff) < coeff_eps) { return curr_ub; }
   auto comp_bnd = (coeff < 0.) ? delta_max_act / coeff : delta_min_act / coeff;
   return std::min(curr_ub, comp_bnd);
 }
```

Based on learnings: "Guard numerical stability by guarding near-zero norms in cut-related computations... functions like cut_orthogonality (and related routines in the same directory) should also handle near-zero norms to avoid instability."
213-217: Hardcoded tolerance `1e-6` could be inconsistent with solver settings.
Line 213 uses a hardcoded tolerance of `1e-6`, while lines 207-208 use `1e3 * settings.primal_tol`. For consistency, consider using a settings-based tolerance here as well.
♻️ Optional: Use consistent tolerance

```diff
-  if (new_lb > new_ub + 1e-6) {
+  if (new_lb > new_ub + settings.primal_tol) {
```

cpp/src/dual_simplex/branch_and_bound.cpp (4)
1617-1627: Assertions on size mismatches may crash in production.
Lines 1617-1621 and 1622-1627 assert on `basic_list` and `nonbasic_list` sizes after logging the mismatch. In production builds where assertions may be disabled or where continued execution after logging is expected, consider returning an error status instead:
🛠️ Suggested improvement

```diff
   if (basic_list.size() != original_lp_.num_rows) {
     settings_.log.printf(
       "basic_list size %d != m %d\n", basic_list.size(), original_lp_.num_rows);
-    assert(basic_list.size() == original_lp_.num_rows);
+    return lp_status_t::NUMERICAL_ISSUES;
   }
   if (nonbasic_list.size() != original_lp_.num_cols - original_lp_.num_rows) {
     settings_.log.printf("nonbasic_list size %d != n - m %d\n",
                          nonbasic_list.size(),
                          original_lp_.num_cols - original_lp_.num_rows);
-    assert(nonbasic_list.size() == original_lp_.num_cols - original_lp_.num_rows);
+    return lp_status_t::NUMERICAL_ISSUES;
   }
```
1858-1859: Debug timing conditions should be cleaned up.
Multiple `if (1 || ...)` patterns at lines 1858, 1865, 1920, 1960, 1986, and 2022 always evaluate to true, printing timing information unconditionally. These appear to be debug artifacts that should be reverted to conditional logging:
♻️ Example fix

```diff
-  if (1 || cut_generation_time > 1.0) {
+  if (cut_generation_time > 1.0) {
     settings_.log.printf("Cut generation time %.2f seconds\n", cut_generation_time);
   }
```

Apply similar changes to all six occurrences.
Also applies to: 1865-1866, 1920-1921, 1960-1961, 1986-1987, 2022-2023
1875-1879: Debug print block should be removed or made conditional.
The `#if 1` block unconditionally prints cut pool information. Consider removing or guarding with a debug macro/setting:

```diff
-#if 1
-  cut_pool.print_cutpool_types();
-  print_cut_types("In LP ", cut_types, settings_);
-  printf("Cut pool size: %d\n", cut_pool.pool_size());
-#endif
+#ifdef DEBUG_CUT_POOL
+  cut_pool.print_cutpool_types();
+  print_cut_types("In LP ", cut_types, settings_);
+  settings_.log.printf("Cut pool size: %d\n", cut_pool.pool_size());
+#endif
```

Also note the use of raw `printf` instead of `settings_.log.printf`.
2115-2120: Unused variable `node_presolve_time`.
`node_presolve_time` is computed at line 2119 but never used. Either log it or remove the computation:

```diff
   bool feasible = node_presolve.bounds_strengthening(
     settings_, bounds_changed, original_lp_.lower, original_lp_.upper);
-  f_t node_presolve_time = toc(node_presolve_start_time);
   mutex_original_lp_.unlock();
```

Or if timing is useful for debugging:

```diff
   f_t node_presolve_time = toc(node_presolve_start_time);
   mutex_original_lp_.unlock();
+  if (node_presolve_time > 1.0) {
+    settings_.log.printf("Reduced-cost node presolve time %.2f seconds\n", node_presolve_time);
+  }
```
/ok to test 030dbb5
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/cuts.cpp`:
- Around line 101-135: In score_cuts (cut_pool_t::score_cuts) reset the
scored_cuts_ counter at the start of each scoring pass so it does not accumulate
across calls: after reserving/clearing best_cuts_ (or immediately before using
it) set scored_cuts_ = 0; this ensures subsequent cut selection logic that
depends on scored_cuts_ uses a fresh count for each pass rather than carrying
over previous values.
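A minimal sketch of that per-pass reset; the struct below is a hypothetical stand-in for the relevant `cut_pool_t` state, not the actual class.

```cpp
struct cut_scoring_state_t {
  int scored_cuts_ = 0;

  void begin_scoring_pass()
  {
    // Reset before scoring so selection logic sees a fresh count each pass
    // rather than a total accumulated across calls.
    scored_cuts_ = 0;
  }
};
```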
/ok to test 635f509
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cpp/src/dual_simplex/bounds_strengthening.cpp (1)
105-116: ⚠️ Potential issue | 🟠 Major
Guard against `bounds_changed` size mismatch to avoid OOB.
Line 105 assumes `bounds_changed.size() == n` when non-empty; a smaller vector will cause out-of-bounds access. Please add a size check (or fallback to full scan) before indexing.
Proposed fix (fallback to full scan on mismatch)

```diff
-  if (!bounds_changed.empty()) {
-    std::fill(constraint_changed.begin(), constraint_changed.end(), false);
-    for (i_t i = 0; i < n; ++i) {
-      if (bounds_changed[i]) {
-        const i_t row_start = A.col_start[i];
-        const i_t row_end   = A.col_start[i + 1];
-        for (i_t p = row_start; p < row_end; ++p) {
-          const i_t j           = A.i[p];
-          constraint_changed[j] = true;
-        }
-      }
-    }
-  }
+  if (!bounds_changed.empty()) {
+    if (bounds_changed.size() == static_cast<size_t>(n)) {
+      std::fill(constraint_changed.begin(), constraint_changed.end(), false);
+      for (i_t i = 0; i < n; ++i) {
+        if (bounds_changed[i]) {
+          const i_t row_start = A.col_start[i];
+          const i_t row_end   = A.col_start[i + 1];
+          for (i_t p = row_start; p < row_end; ++p) {
+            const i_t j           = A.i[p];
+            constraint_changed[j] = true;
+          }
+        }
+      }
+    } else {
+      settings.log.debug("bounds_changed size mismatch; running full scan\n");
+    }
+  }
```

As per coding guidelines: Validate input sanitization to prevent buffer overflows and resource exhaustion attacks; avoid unsafe deserialization of problem files.
🧹 Nitpick comments (1)
cpp/src/dual_simplex/bounds_strengthening.hpp (1)
23-26: Document the `bound_changed` contract (size/semantics).
The new parameter is easy to misuse (empty vs sized-but-all-false). Consider a brief comment stating expected size and semantics (e.g., empty ⇒ full scan; otherwise size == num_cols).
/ok to test 6156fd1
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
cpp/src/dual_simplex/barrier.cu (1)
684-704: ⚠️ Potential issue | 🟠 Major
Fix sentinel propagation at line 1810.
The `solve_adat` call at line 1810 returns `CONCURRET_HALT_RETURN` (-2) on concurrent halt, but the error handler `if (solve_status != 0) { return status; }` swallows the sentinel by returning the previous `status` variable instead of propagating `solve_status`. This prevents the halt signal from reaching callers. Lines 1935 and 2410 show the correct pattern: check for the sentinel explicitly before the generic error check.
Required fix

```diff
-  if (solve_status != 0) { return status; }
+  if (solve_status == CONCURRET_HALT_RETURN) { return CONCURRET_HALT_RETURN; }
+  if (solve_status != 0) { return solve_status; }
```

cpp/src/dual_simplex/primal.cpp (1)
322-324: ⚠️ Potential issue | 🟡 Minor
Pre-existing duplicate call to `reorder_basic_list`.
Lines 322-323 call `reorder_basic_list(q, basic_list)` twice in succession. This appears to be a pre-existing issue (not introduced by this PR), but calling this function twice would incorrectly double-permute the basic list.
🐛 Proposed fix

```diff
   reorder_basic_list(q, basic_list);
-  reorder_basic_list(q, basic_list);
   basis_update_t ft(L, U, p);
```
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/right_looking_lu.cpp`:
- Around line 1116-1119: The code must explicitly detect and propagate the
concurrent-halt sentinel returned by right_looking_lu_row_permutation_only
instead of treating it as a pivot count; after calling
right_looking_lu_row_permutation_only (the variable pivots), add a check like if
(pivots == CONCURRET_HALT_RETURN) return CONCURRET_HALT_RETURN; (more generally
if (pivots < 0) return pivots;) before computing num_dependent or allocating
independent_rhs(pivots), so the sentinel -2 is propagated upward and never used
as a size or pivot count.
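A minimal sketch of that propagation check; the function and parameter names are placeholders, and `kConcurrentHaltReturn` stands in for the `CONCURRET_HALT_RETURN` macro (-2) from types.hpp.

```cpp
constexpr int kConcurrentHaltReturn = -2;

int handle_row_permutation_result(int pivots, int m)
{
  // Propagate the concurrent-halt sentinel (or any negative error code) before
  // treating the return value as a pivot count.
  if (pivots == kConcurrentHaltReturn) { return kConcurrentHaltReturn; }
  if (pivots < 0) { return pivots; }
  const int num_dependent = m - pivots;  // safe: pivots is a genuine count here
  return num_dependent;
}
```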
🧹 Nitpick comments (2)
cpp/src/dual_simplex/types.hpp (1)
22-23: Consider fixing the macro name typo for clarity.
`CONCURRET_HALT_RETURN` looks misspelled; renaming to `CONCURRENT_HALT_RETURN` (with a mechanical rename across usages) would reduce confusion during status handling.
cpp/src/dual_simplex/phase2.cpp (1)
2085-2168: Route infeasibility diagnostics through settings.log for consistency.
Direct `printf` bypasses logging controls and can be noisy in multi-threaded runs. Consider using `settings.log.printf` (and/or a debug level) to keep output consistent.
Suggested tweak

```diff
-  printf(
+  settings.log.printf(
     "Primal infeasibility %e/%e (Basic %e, Nonbasic %e, Basic over %e). Perturbation %e/%e. Info "
     "%d\n",
     primal_infeas,
     orig_primal_infeas,
     basic_infeas,
     nonbasic_infeas,
     basic_over,
     orig_perturbation,
     perturbation,
     info);
```
/ok to test 83b5ac9
/ok to test ca00738
/ok to test 026ca38
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/branch_and_bound.cpp`:
- Around line 343-408: find_reduced_cost_fixings currently counts integer
fixings (num_fixed) but never applies them; update the logic that computes
reduced_cost_upper_bound and reduced_cost_lower_bound (variables
reduced_cost_upper_bound, reduced_cost_lower_bound) to actually write into
lower_bounds[j] and upper_bounds[j] and mark bounds_changed[j] when a tightening
is possible for integer vars, and when var_types_[j]==variable_type_t::INTEGER
and reduced_cost_upper_bound <= reduced_cost_lower_bound + fixed_tol set the
variable to a fixed integer by choosing a valid integer value (e.g., clamp/round
into [reduced_cost_lower_bound, reduced_cost_upper_bound] using floor/ceil as
appropriate), assign it to both lower_bounds[j] and upper_bounds[j], increment
num_fixed, and set bounds_changed[j]=true so the fix is applied rather than only
counted.
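A minimal sketch of applying (rather than only counting) an integer fixing from reduced-cost bounds, assuming `f_t = double`; `fixed_tol` follows the review comment, and the reference parameters stand in for `lower_bounds[j]`/`upper_bounds[j]`.

```cpp
#include <cmath>

bool try_fix_integer_variable(double rc_lower_bound,
                              double rc_upper_bound,
                              double fixed_tol,
                              double& lower_bound_j,
                              double& upper_bound_j,
                              bool& bounds_changed_j)
{
  if (rc_upper_bound > rc_lower_bound + fixed_tol) { return false; }
  // Choose an integer value inside the tightened interval.
  const double fixed_val = std::ceil(rc_lower_bound - fixed_tol);
  if (fixed_val > rc_upper_bound + fixed_tol) { return false; }
  lower_bound_j    = fixed_val;
  upper_bound_j    = fixed_val;
  bounds_changed_j = true;  // write the fixing back so it is actually applied
  return true;              // caller increments num_fixed
}
```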
🧹 Nitpick comments (1)
cpp/src/dual_simplex/cuts.cpp (1)
157-199: `drop_cuts()` is still a stub.
If cut aging is expected to prune the pool, consider implementing this or tracking it as a follow-up.
Would you like me to draft an implementation or open a tracking issue?
/ok to test 51e0ed0
This PR adds cuts to the MIP solver. This includes the following: