
Conversation

Contributor

@chris-maes chris-maes commented Jan 29, 2026

This PR adds cuts to the MIP solver. This includes the following:

  1. Add constraints in the form C*x <= d to an LP that has already been solved to optimality (and has basis information).
    • The constraints must be violated at the current relaxation solution x^star. That is, C*x^star > d.
    • The constraint matrix, rhs, basis, and basis factorization are all updated to include the additional constraints.
    • Dual simplex is started in phase 2 from a dual feasible solution.
  2. Remove constraints from an LP that has already been solved to optimality.
    • The constraints must have slacks in the basis.
    • The basis is refactored from scratch.
  3. Add cut pass loop after solving the root relaxation
  4. Add a cut pool to store cuts and select cuts
    • We currently score cuts based on distance and orthogonality.
  5. Add Mixed Integer Gomory Cuts
    • These are computed via a MIR cut on a row of the simplex tableau
  6. Add Mixed Integer Rounding (MIR) Cuts
    • These are constructed by aggregating rows of the constraint matrix.
  7. Add Strong Chvatal-Gomory Cuts
    • These are constructed from a row of the tableau matrix and from rows of the constraint matrix.
  8. Fixes to Handling of Steepest Edge Norms in Dual Simplex
    • Ensure that all basic variables have positive steepest edge norms.
  9. Reduced Cost Fixing at Root Node
    • These are applied after each cut pass and after strong branching, if a heuristic solution is available.
  10. Fix issues in Crossover when solving the dual problem
    • We were not correctly populating slack variables when solving the dual. This issue appeared on graph20-80-1rand.
  11. Fix issue in Crossover when the basis became rank-deficient in the dual push
  12. Fix issues across the code with handling and propagating concurrent halt.
  13. New solver options: mip-cut-passes, mip-mixed-integer-gomory-cuts, mip-mir-cuts, mip-strong-chvatal-gomory-cuts, mip-knapsack-cuts, mip-cut-change-threshold, mip-cut-min-orthogonality.
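For readers unfamiliar with MIR, here is a hedged sketch of the textbook single-row MIR cut that the MIR and Gomory items above build on. The `mir_cut` struct and `mir_from_row` helper are illustrative names for this sketch, not code from this PR, and the sketch assumes a pure-integer row with nonnegative variables and a fractional rhs.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative sketch only (not the PR's implementation): the textbook MIR
// cut for a single row  sum_j a_j x_j <= b  over nonnegative integers x_j.
// With f = b - floor(b) in (0, 1) and f_j = a_j - floor(a_j), MIR gives
//   sum_j ( floor(a_j) + max(f_j - f, 0) / (1 - f) ) x_j <= floor(b).
struct mir_cut {
  std::vector<double> coeffs;
  double rhs;
};

mir_cut mir_from_row(const std::vector<double>& a, double b)
{
  const double f = b - std::floor(b);  // fractional part of the rhs
  mir_cut cut;
  cut.rhs = std::floor(b);
  cut.coeffs.reserve(a.size());
  for (double aj : a) {
    const double fj = aj - std::floor(aj);
    cut.coeffs.push_back(std::floor(aj) + std::max(fj - f, 0.0) / (1.0 - f));
  }
  return cut;
}
```

For the row 1.5 x0 + 0.3 x1 <= 1.7, this yields the valid inequality x0 <= 1: both fractional coefficient parts are below f, so they round down.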


copy-pr-bot bot commented Jan 29, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.



coderabbitai bot commented Jan 29, 2026

Warning

Rate limit exceeded

@chris-maes has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 13 minutes and 8 seconds before requesting another review.


📝 Walkthrough

Adds a full cuts subsystem and APIs, expands MIP/cut/reliability settings and parameter bindings, threads timing and basis/edge-norm state through branch-and-bound, augments CSR/dense/vector utilities, and standardizes CONCURRENT_HALT_RETURN propagation across many solver codepaths.

Changes

• Cut Generation Framework (cpp/src/dual_simplex/cuts.hpp, cpp/src/dual_simplex/cuts.cpp)
  Introduces a comprehensive cuts subsystem (cut_pool, knapsack/MIR/Gomory/strong-CG generators), scoring/selection, add/remove and verification helpers, knapsack utilities, instrumentation, and explicit template instantiations (~2.8k LOC).
• MIP / Solver Settings & Parameter Bindings (cpp/include/cuopt/linear_programming/constants.h, cpp/include/cuopt/linear_programming/mip/solver_settings.hpp, cpp/src/dual_simplex/simplex_solver_settings.hpp, cpp/src/math_optimization/solver_settings.cu, cpp/src/mip/solver.cu)
  Adds many MIP/cut-related macros and settings (max_cut_passes, mir_cuts, mixed_integer_gomory_cuts, knapsack_cuts, strong_chvatal_gomory_cuts, reduced_cost_strengthening, cut_change_threshold, cut_min_orthogonality, node_limit, reliability_branching, sub_mip, etc.) and binds them to solver settings.
• Branch-and-Bound & Solver Integration (cpp/src/dual_simplex/branch_and_bound.hpp, cpp/src/dual_simplex/branch_and_bound.cpp, cpp/src/dual_simplex/solve.cpp, cpp/src/mip/solver.cu, cpp/src/mip/diversity/.../sub_mip.cuh, cpp/src/mip/diversity/lns/rins.cu)
  branch_and_bound_t now accepts a start/timing value; call sites are updated to pass tic()/timer start; root/node solves propagate basis lists and edge norms; new helpers added (find_reduced_cost_fixings, set_solution_at_root, initialize_diving_heuristics_settings).
• Basis Updates & Solves (cpp/src/dual_simplex/basis_updates.hpp, cpp/src/dual_simplex/basis_updates.cpp, cpp/src/dual_simplex/basis_solves.cpp)
  Adds basis_update_mpf_t::append_cuts to integrate CSR cuts into basis updates; refactors refactor_basis to use factorize_basis return statuses, handle CONCURRENT_HALT_RETURN, perform repair-and-retry, and adjust early-exit signaling.
• Sparse/Dense/Vector APIs (cpp/src/dual_simplex/sparse_matrix.hpp, cpp/src/dual_simplex/sparse_matrix.cpp, cpp/src/dual_simplex/sparse_vector.hpp, cpp/src/dual_simplex/sparse_vector.cpp, cpp/src/dual_simplex/dense_matrix.hpp)
  Adds CSR append APIs (append_rows, append_row), changes csr_matrix::check_matrix to return i_t with validation, adds a sparse_vector ctor-from-CSR-row, dot, squeeze, and a dense matrix constructor dense_matrix_t(rows, cols, value).
• Phase2 / Crossover / Primal & Enums (cpp/src/dual_simplex/phase2.cpp, cpp/src/dual_simplex/crossover.cpp, cpp/src/dual_simplex/primal.cpp, cpp/src/dual_simplex/primal.hpp, cpp/src/dual_simplex/solution.hpp)
  Adds primal_infeasibility_breakdown, changes the prepare_optimality signature to accept extra info, introduces CONCURRENT_LIMIT and CONCURRENT_HALT_RETURN, standardizes concurrent-halt returns, and improves factorization/repair status propagation and logging.
• Timer & Time Propagation (cpp/src/utilities/timer.hpp, cpp/src/dual_simplex/solve.cpp, cpp/src/mip/diversity/.../sub_mip.cuh, cpp/src/mip/diversity/lns/rins.cu)
  Adds timer_t::get_tic_start() and threads tic/time-start into branch_and_bound constructions and related call sites; many B&B call sites updated to pass timing.
• Bounds Strengthening & Presolve (cpp/src/dual_simplex/bounds_strengthening.hpp, cpp/src/dual_simplex/bounds_strengthening.cpp, cpp/src/dual_simplex/presolve.cpp)
  Changes the bounds_strengthening signature to accept settings and a bound_changed vector first (removes an internal member); presolve and dependent-row checks propagate CONCURRENT_HALT_RETURN on concurrent halt.
• MIP Solver Control Flow & Presolve Gating (cpp/src/mip/solver.cu, cpp/src/mip/solve.cu, cpp/src/mip/presolve/third_party_presolve.cpp, cpp/src/mip/diversity/diversity_manager.cu)
  Introduces run_presolve gating, minor presolve/log formatting tweaks, and expands the branch_and_bound configuration copy from context.settings; branch_and_bound is constructed with the timer start.
• Build & Tests (cpp/src/dual_simplex/CMakeLists.txt, cpp/tests/dual_simplex/unit_tests/solve.cpp)
  Adds cuts.cpp to the build; the unit test file is updated with disabled (#if 0) duplicate blocks and header year updates.
• Error-signaling and CUDA/cuDSS paths (cpp/src/dual_simplex/sparse_cholesky.cuh, cpp/src/dual_simplex/right_looking_lu.cpp, cpp/src/dual_simplex/barrier.cu)
  Standardizes concurrent-halt signaling using CONCURRENT_HALT_RETURN across many code paths; adjusts CUDA/cuDSS macros to return -1 for generic failures and CONCURRENT_HALT_RETURN for halt.
• MIP Node / Search Tree (cpp/src/dual_simplex/mip_node.hpp)
  Adds i_t integer_infeasible to mip_node_t and extends constructors/branch calls to propagate this value to children.
• Miscellaneous internal updates (cpp/src/dual_simplex/pseudo_costs.cpp, cpp/src/dual_simplex/bound_flipping_ratio_test.cpp, cpp/src/dual_simplex/folding.cpp, cpp/src/dual_simplex/branch_and_bound.cpp, cpp/src/dual_simplex/cuts.cpp (internal))
  Replaces chrono timing with tic/toc, tightens some internal error return codes, adds logging/diagnostics, and integrates cuts and reduced-cost tightening hooks into B&B flows.
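The CSR append APIs mentioned in the walkthrough can be pictured with a minimal sketch. The toy `csr` struct and its `append_row` below are illustrative stand-ins for the idea, not the PR's csr_matrix_t signatures.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of a CSR row-append (illustrative, not the PR's API).
// A CSR matrix stores nonzeros row by row; appending a row only extends the
// col/val arrays and pushes one new offset, so it is O(nnz of the new row).
struct csr {
  int m = 0;                      // number of rows
  int n = 0;                      // number of columns
  std::vector<int> row_start{0};  // size m + 1; prefix offsets into col/val
  std::vector<int> col;           // column indices of nonzeros
  std::vector<double> val;        // nonzero values

  // Append one sparse row given parallel index/value arrays.
  void append_row(const std::vector<int>& idx, const std::vector<double>& v)
  {
    col.insert(col.end(), idx.begin(), idx.end());
    val.insert(val.end(), v.begin(), v.end());
    row_start.push_back(static_cast<int>(col.size()));
    ++m;
  }
};
```

This cheap append is what makes adding cut rows to an already-factorized LP attractive: the constraint storage grows incrementally instead of being rebuilt.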

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
• Docstring Coverage (⚠️ Warning): Docstring coverage is 1.34%, which is below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
• Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
• Title Check (✅ Passed): The PR title 'Add cuts to the MIP solver' directly and concisely summarizes the main objective of the changeset: implementing cut generation and management functionality for the MIP solver.



@chris-maes chris-maes self-assigned this Jan 29, 2026
@chris-maes chris-maes added the 'feature request' (New feature or request) and 'non-breaking' (Introduces a non-breaking change) labels Jan 29, 2026
@chris-maes chris-maes added this to the 26.02 milestone Jan 29, 2026
@chris-maes
Contributor Author

/ok to test 9ea0271


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
cpp/src/dual_simplex/basis_updates.cpp (1)

2274-2317: Fix incomplete error propagation from refactor_basis.

refactor_basis propagates -2 (lines 2284, 2317), but call sites handle it inconsistently:

  • phase2.cpp:2327, 3060, 3067: Check only > 0, missing negative return values
  • cuts.cpp:2682: Ignores return value entirely

Expected: all call sites check for -2 (and other error codes) and propagate appropriately, not just positive values. Use != 0 checks or explicit -2 handling where applicable.

cpp/src/dual_simplex/mip_node.hpp (1)

229-243: Missing integer_infeasible in detach_copy().

The detach_copy() method copies most node fields but does not copy integer_infeasible. If diving heuristics rely on this value, it could cause incorrect behavior or inconsistent logging.

🐛 Proposed fix
   mip_node_t<i_t, f_t> detach_copy() const
   {
     mip_node_t<i_t, f_t> copy(lower_bound, vstatus);
     copy.branch_var         = branch_var;
     copy.branch_dir         = branch_dir;
     copy.branch_var_lower   = branch_var_lower;
     copy.branch_var_upper   = branch_var_upper;
     copy.fractional_val     = fractional_val;
     copy.objective_estimate = objective_estimate;
     copy.node_id            = node_id;
+    copy.integer_infeasible = integer_infeasible;
     return copy;
   }
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/branch_and_bound.hpp`:
- Around line 154-159: Reads of original_lp_ fields (.lower, .upper, .obj_scale)
in diving_thread() and the various multithreaded heuristic routines must be
protected with the same mutex used for writes: acquire mutex_original_lp_ before
any access to original_lp_ and release it after the read; this applies to all
occurrences (e.g., the accesses noted in diving_thread() and the other
heuristic/thread functions that read original_lp_). Locate uses of original_lp_
in those functions, wrap each read (and any short sequence of dependent reads)
with a lock/unlock on mutex_original_lp_, and keep the critical section minimal
to avoid contention while ensuring consistent snapshot semantics matching the
existing locked writes.

In `@cpp/src/dual_simplex/crossover.cpp`:
- Around line 1381-1396: The dual-feasibility check is using phase-1 duals in
solution.z while vstatus reflects duals for the original LP; before calling
dual_infeasibility (or the gate that may call dual_phase2) restore or recompute
the original objective duals (y/z) rather than using the phase-1 solution.z, or
avoid overwriting solution.z when assigning phase1_solution; update the code
paths around phase1_solution, solution, dual_infeasibility, vstatus, and
dual_phase2 so that dual_infeasibility receives y/z corresponding to the
original LP objective (e.g., recompute duals for the original objective or
save/restore phase-1 duals prior to the gate).

In `@cpp/src/dual_simplex/cuts.cpp`:
- Around line 103-109: The cut_orthogonality_ vector is not being reset each
scoring pass because resize(cut_storage_.m, 1) won't overwrite existing entries
when the size is unchanged; change the reset logic to explicitly set all entries
to the default (e.g., use std::fill or assign) after sizing so
cut_orthogonality_ is fully overwritten for the current pass (refer to
cut_orthogonality_ and cut_storage_.m in the scoring function where
cut_distances_, cut_norms_, and cut_scores_ are resized).
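The resize-vs-assign pitfall behind this comment can be shown in isolation. The `reset_scores` helper below is hypothetical, not code from the PR; it simply demonstrates why `assign` (or `std::fill`) overwrites where `resize` does not.

```cpp
#include <cassert>
#include <vector>

// Sketch of the suggested reset: std::vector::resize(n, value) leaves
// existing elements untouched when the size already equals n, so a per-pass
// reset of scoring state must use assign (or std::fill) to overwrite.
std::vector<double> reset_scores(std::vector<double> v, std::size_t n)
{
  v.assign(n, 1.0);  // sizes to n AND overwrites every entry with 1.0
  return v;
}
```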
- Around line 47-98: The distance and orthogonality computations can divide by
zero/near-zero norms; update cut_distance and cut_orthogonality to guard those
divisions by clamping any computed norm to a small positive epsilon before
dividing: in cut_distance, after computing cut_norm set something like
guarded_norm = std::max(cut_norm, eps) and use guarded_norm to compute distance;
in cut_orthogonality, compute guarded_norm_i = std::max(norm_i, eps) and
guarded_norm_j = std::max(norm_j, eps) and use those when dividing the absolute
dot; choose eps using a numeric limit for f_t (e.g.,
std::numeric_limits<f_t>::epsilon() or a small constant) so cut_norms_ and
cut_norm remain unchanged but divisions are stable.
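A minimal sketch of the epsilon-clamped scoring described above. `guarded_orthogonality` is an illustrative standalone function under the stated assumptions, not the PR's cut_orthogonality implementation.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Illustrative sketch of the suggested guard: score two cut rows by
// 1 - |a.b| / (||a|| ||b||), clamping each norm away from zero so a
// degenerate all-zero row cannot trigger a division by zero.
double guarded_orthogonality(const std::vector<double>& a, const std::vector<double>& b)
{
  double dot = 0.0, na = 0.0, nb = 0.0;
  for (std::size_t k = 0; k < a.size(); ++k) {
    dot += a[k] * b[k];
    na  += a[k] * a[k];
    nb  += b[k] * b[k];
  }
  const double eps   = std::numeric_limits<double>::epsilon();
  const double denom = std::max(std::sqrt(na), eps) * std::max(std::sqrt(nb), eps);
  return 1.0 - std::fabs(dot) / denom;
}
```

Orthogonal rows score 1, parallel rows score 0, and an all-zero row falls back to 1 instead of producing NaN.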

In `@cpp/src/dual_simplex/cuts.hpp`:
- Around line 99-133: The function minimum_violation currently calls exit(1)
when it finds a non-cut; instead remove the process termination and return a
sentinel failure value (e.g., std::numeric_limits<f_t>::lowest() or -inf) so
callers can handle the error; update the loop in minimum_violation (which
computes Cx from C via to_compressed_col and matrix_vector_multiply into Cx and
compares to cut_rhs) to, upon detecting Cx[k] <= cut_rhs[k], optionally emit a
debug-only assertion or logging and then return the chosen sentinel (and do not
call exit), ensuring min_cut_violation is not used after this early return.
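The sentinel-instead-of-exit(1) suggestion can be sketched as follows. This standalone `minimum_violation` takes precomputed activities and is an illustration of the pattern, not the PR's signature.

```cpp
#include <cassert>
#include <limits>
#include <vector>

// Sketch of the suggested fix (names illustrative): given precomputed cut
// activities Cx and right-hand sides, return the minimum violation, or a
// lowest() sentinel instead of calling exit(1) when some row is not violated.
double minimum_violation(const std::vector<double>& Cx, const std::vector<double>& rhs)
{
  double min_viol = std::numeric_limits<double>::infinity();
  for (std::size_t k = 0; k < Cx.size(); ++k) {
    const double viol = Cx[k] - rhs[k];
    // A valid cut must satisfy Cx[k] > rhs[k]; signal failure to the caller.
    if (viol <= 0.0) { return std::numeric_limits<double>::lowest(); }
    if (viol < min_viol) { min_viol = viol; }
  }
  return min_viol;
}
```

Callers can then compare against the sentinel and discard the offending cut instead of terminating the solver process.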

In `@cpp/src/mip/presolve/third_party_presolve.cpp`:
- Line 412: Guard the use of PAPILO_GITHASH so builds without that macro still
compile: add a fallback definition (e.g. before the CUOPT_LOG_INFO call) by
checking `#ifndef` PAPILO_GITHASH and `#define` PAPILO_GITHASH "unknown" (or wrap
the CUOPT_LOG_INFO invocation in an `#ifdef/`#else that logs a safe string).
Target the CUOPT_LOG_INFO(...) call that references PAPILO_GITHASH and ensure
PAPILO_GITHASH is defined to a string literal fallback when absent.
🧹 Nitpick comments (9)
cpp/src/dual_simplex/sparse_matrix.cpp (1)

409-433: Consider adding column index bounds validation in append_row.

The append_row method does not validate that column indices in the input sparse vector are within bounds (c.i[k] < this->n). While the existing check_matrix method can detect this post-hoc, invalid indices could cause out-of-bounds access in downstream operations before validation occurs.

This is consistent with the approach in append_rows which validates C.n but not individual column indices. If this is intentional (relying on debug-only check_matrix validation per the codebase pattern), this can remain as-is.

cpp/tests/dual_simplex/unit_tests/solve.cpp (1)

329-478: Disabled test code should be enabled or removed.

This test block is wrapped in #if 0, so it will never compile or execute. Disabled test code adds maintenance burden without providing value. Consider:

  1. Enable the test if it's ready and the solve_linear_program_with_cuts function is available in this PR.
  2. Remove the code if it's not ready, and track it in an issue instead.
  3. Use a proper skip mechanism (e.g., GTEST_SKIP()) if the test should be temporarily disabled but still compile.
cpp/src/dual_simplex/basis_updates.cpp (1)

1114-1323: Add a debug‑only dimension check for cuts_basic.

append_cuts assumes cuts_basic.n == L0_.m. If violated, the Uᵀ solve and subsequent indexing are undefined. A CHECK_MATRIX‑guarded assert would catch misuse without impacting release performance.

🔧 Proposed fix
 i_t basis_update_mpf_t<i_t, f_t>::append_cuts(const csr_matrix_t<i_t, f_t>& cuts_basic)
 {
   const i_t m = L0_.m;
+#ifdef CHECK_MATRIX
+  assert(cuts_basic.n == m);
+#endif

Based on learnings: In the cuOPT dual simplex solver, CSR/CSC matrices (including the quadratic objective matrix Q) are required to have valid dimensions and indices by construction. Runtime bounds checking in performance-critical paths like matrix scaling is avoided to prevent slowdowns. Validation is performed via debug-only check_matrix() calls wrapped in #ifdef CHECK_MATRIX.

cpp/include/cuopt/linear_programming/mip/solver_settings.hpp (1)

81-95: Consider adding brief documentation for new MIP settings.

The new public data members control important MIP solver behavior. While these are internal headers per project conventions, brief inline comments would improve maintainability. For example:

i_t node_limit = ...;                // Maximum nodes to explore (max = unlimited)
i_t reliability_branching = -1;      // -1=auto, 0=off, >0=enabled
i_t max_cut_passes = 0;              // Number of cutting-plane rounds at root
i_t mir_cuts = -1;                   // -1=auto, 0=off, 1=on (Mixed Integer Rounding)
// etc.

The defaults and structure are consistent with simplex_solver_settings.hpp.

cpp/src/dual_simplex/branch_and_bound.cpp (4)

342-406: Verify numerical stability in reduced cost fixing calculations.

The function uses hardcoded thresholds (threshold = 1e-3, weaken = 1e-5) for bound tightening. Consider:

  1. Line 359: The !std::isfinite check correctly guards against NaN/Inf reduced costs, but the warning is emitted via printf, bypassing the logger.
  2. Lines 370-373 and 381-384: The weaken factor loosens bounds slightly to avoid numerical issues, which is good practice.

The algorithm appears correct for reduced cost fixing. The printf at Line 360 should use the logger for consistency.

Use logger instead of printf
     if (!std::isfinite(reduced_costs[j])) {
-      printf("Variable %d reduced cost is %e\n", j, reduced_costs[j]);
+      settings_.log.printf("Variable %d reduced cost is %e\n", j, reduced_costs[j]);
       continue;
     }

408-458: Verify mutex ordering to prevent potential deadlocks.

The set_new_solution function acquires multiple mutexes: mutex_original_lp_ (Lines 411, 430, 433), and mutex_upper_ (Lines 423, 434, 450). While the current pattern appears safe (locks are released before acquiring new ones), the interleaved locking could be fragile.

Consider documenting the expected lock ordering or consolidating critical sections where possible to reduce complexity.


1856-1858: Remove debug timing output or make it conditional.

Multiple timing blocks use if (1 || ...) which always prints timing information:

  • Line 1856: if (1 || cut_generation_time > 1.0)
  • Line 1863: if (1 || score_time > 1.0)
  • Line 1918: if (1 || add_cuts_time > 1.0)
  • Line 1958: if (1 || node_presolve_time > 1.0)
  • Line 1989: if (1 || dual_phase2_time > 1.0)
  • Line 2025: if (1 || remove_cuts_time > 1.0)

These appear to be debug statements. Consider removing the 1 || to only log when operations exceed 1 second, or guard with a debug flag.

Remove unconditional timing output
-      if (1 || cut_generation_time > 1.0) {
+      if (cut_generation_time > 1.0) {
         settings_.log.printf("Cut generation time %.2f seconds\n", cut_generation_time);
       }

Apply similarly to the other occurrences.

Also applies to: 1862-1865, 1918-1920, 1958-1960, 1989-1991, 2025-2027


1873-1877: Remove or guard debug print statements.

Lines 1873-1877 contain unconditional debug output (#if 1 and direct printf). This should be removed or guarded with a debug macro for release builds.

Guard debug output
-#if 1
-      cut_pool.print_cutpool_types();
-      print_cut_types("In LP      ", cut_types, settings_);
-      printf("Cut pool size: %d\n", cut_pool.pool_size());
-#endif
+#ifdef DEBUG_CUTS
+      cut_pool.print_cutpool_types();
+      print_cut_types("In LP      ", cut_types, settings_);
+      settings_.log.printf("Cut pool size: %d\n", cut_pool.pool_size());
+#endif
cpp/src/dual_simplex/cuts.cpp (1)

2345-2685: Remove unused solution parameter from add_cuts().

The solution parameter is declared but never used in the function body, creating confusion about responsibilities. The caller (branch_and_bound.cpp) independently resizes root_relax_soln_ post-call anyway (lines 1963–1965), so the parameter serves no purpose. Either remove it from the signature or, if the intent is for add_cuts() to manage solution resizing, use it within the function to resize for the expanded problem. Current design creates maintenance risk without providing value.

@chris-maes
Contributor Author

/ok to test 4c086ac


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/cuts.cpp`:
- Around line 770-772: The loop currently exits completely when
negate_inequality == -1 (the break on that condition), which prevents generating
MIR cuts for subsequent rows; change the control flow to skip the current
iteration instead of breaking the loop by replacing the break with continue so
other rows are still processed (locate the check of negate_inequality in
cuts.cpp where the TODO comment sits and make the single-line replacement from
break to continue).
- Around line 431-436: In greedy_knapsack_problem, protect the division
ratios[i] = values[i] / weights[i] against zero (or near-zero) weights: detect
if weights[i] is zero (or fabs(weights[i]) < epsilon) and set ratios[i]
appropriately (e.g., +infinity when weight==0 and value>0, -infinity when
value<0, or 0 when value==0) using std::numeric_limits<f_t>::infinity();
otherwise compute the normal division; update the ratios vector calculation to
use this guarded logic so no division-by-zero occurs.
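The guarded ratio logic can be sketched as a small helper. `guarded_ratio` is illustrative, not the PR's greedy_knapsack_problem code; the epsilon and sign conventions are assumptions matching the comment above.

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// Sketch of the guarded value/weight ratio suggested for the greedy knapsack
// routine: zero or near-zero weights map to +/-infinity or 0 according to the
// sign of the value, instead of dividing by zero.
double guarded_ratio(double value, double weight, double eps = 1e-12)
{
  if (std::fabs(weight) < eps) {
    if (value > 0.0) { return std::numeric_limits<double>::infinity(); }
    if (value < 0.0) { return -std::numeric_limits<double>::infinity(); }
    return 0.0;
  }
  return value / weight;
}
```

Mapping a zero-weight, positive-value item to +infinity also preserves the greedy ordering: such an item should be taken first, which a sort on the ratios does automatically.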
- Around line 2761-2788: The function write_solution_for_cut_verification
currently opens a raw FILE* (fid) and may leak it if any operation throws or
early-returns; replace the raw FILE* use with an RAII wrapper (e.g.,
std::unique_ptr<FILE, decltype(&fclose)> or a small scope-guard) so fclose is
guaranteed on all exit paths, or switch to std::ofstream; update the code that
writes to fid (fprintf calls) to use the chosen RAII-managed stream or the
managed FILE pointer while preserving behavior in
write_solution_for_cut_verification and ensure fclose is removed from manual
calls.
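One way to realize the RAII suggestion is a unique_ptr with fclose as its deleter. `write_two_values` and the path used below are an illustrative sketch, not the PR's write_solution_for_cut_verification.

```cpp
#include <cassert>
#include <cstdio>
#include <memory>

// Sketch of the suggested RAII wrapper: a unique_ptr with fclose as the
// deleter guarantees the file is closed on every exit path, including
// early returns and exceptions thrown while writing.
using file_ptr = std::unique_ptr<std::FILE, int (*)(std::FILE*)>;

bool write_two_values(const char* path, int a, int b)
{
  file_ptr fid(std::fopen(path, "w"), &std::fclose);
  if (!fid) { return false; }             // nothing leaks on this path
  std::fprintf(fid.get(), "%d %d\n", a, b);
  return true;                            // fclose runs automatically here
}
```

std::ofstream achieves the same guarantee; the unique_ptr form is the smaller change when the surrounding code already uses fprintf.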
🧹 Nitpick comments (2)
cpp/src/dual_simplex/cuts.hpp (1)

442-470: Verify parameter ordering consistency in add_cuts and remove_cuts.

The add_cuts and remove_cuts functions have different parameter orderings (e.g., settings position, Arow presence). While not incorrect, this inconsistency could lead to confusion at call sites.

cpp/src/dual_simplex/cuts.cpp (1)

361-363: NaN comparison for early return is fragile.

The check objective != objective is an idiomatic NaN test, but this pattern can be optimized away by aggressive compilers with -ffast-math. Consider using std::isnan() for clarity and robustness.

♻️ Suggested fix
-  if (objective != objective) { return -1; }
+  if (std::isnan(objective)) { return -1; }

@chris-maes
Contributor Author

chris-maes commented Jan 29, 2026

/ok to test 606c482


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/branch_and_bound.cpp`:
- Around line 323-339: The division that computes iter_node uses
exploration_stats_.total_lp_iters / nodes_explored and can divide by zero when
nodes_explored is 0; modify the report code around iter_node so you check
nodes_explored (the i_t variable nodes_explored) before dividing: if
nodes_explored > 0 compute iter_node = exploration_stats_.total_lp_iters /
static_cast<f_t>(nodes_explored) else set iter_node to 0.0 (or another safe
sentinel), then use iter_node in the printf; this prevents a crash and preserves
numeric stability in report/branch_and_bound.cpp.
- Around line 351-367: The loop performing reduced‑cost fixing must skip any
tightening when the current gap is non‑positive; add a guard at the start of the
per‑column logic (inside the for over reduced_costs in branch_and_bound.cpp)
that checks upper_bound <= root_obj (or abs_gap <= 0) and continues to next j if
true, so that compute_objective(root_obj) and abs_gap are validated before using
them to tighten bounds (symbols to update: root_obj, upper_bound, abs_gap,
reduced_costs loop and the per‑column tightening logic).
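The gap guard can be sketched on a single variable at its lower bound. `tightened_upper` is a hypothetical helper using the standard reduced-cost fixing bound, not the PR's loop; the guard shows why a non-positive gap must skip tightening entirely.

```cpp
#include <algorithm>
#include <cassert>

// Illustrative sketch of reduced-cost tightening with the suggested guard:
// for a variable at its lower bound with reduced cost d_j > 0, any solution
// improving on the incumbent satisfies x_j <= lb + gap / d_j, where
// gap = incumbent - root LP objective. A non-positive gap yields no valid
// tightening (the bound formula would be vacuous or wrong), so return ub.
double tightened_upper(double lb, double ub, double reduced_cost,
                       double root_obj, double incumbent)
{
  const double gap = incumbent - root_obj;
  if (gap <= 0.0 || reduced_cost <= 0.0) { return ub; }  // guard: skip fixing
  return std::min(ub, lb + gap / reduced_cost);
}
```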
🧹 Nitpick comments (2)
cpp/src/dual_simplex/cuts.cpp (2)

16-44: Track cut deduplication TODO.

The TODO indicates planned dedup logic in a hot path; consider creating a follow-up issue to avoid pool bloat.

Would you like me to draft an issue template or a minimal dedup sketch?


225-229: drop_cuts() is still a stub.

If this is intentionally deferred, please track it; otherwise I can help sketch a safe policy.

Would you like me to propose a simple age/score-based eviction policy?

@chris-maes
Contributor Author

/ok to test 606c482

@chris-maes
Contributor Author

/ok to test fb50454


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/cuts.hpp`:
- Around line 148-167: Protect the division in cut_orthogonality by checking
norms before dividing: inside the cut_orthogonality(i_t i, i_t j)
implementation, compute norm_i and norm_j as before but if either is below
settings_.zero_tol treat the pair as maximally non-aligned by returning 1.0;
otherwise compute and return 1.0 - std::abs(dot) / (norm_i * norm_j). This uses
the existing settings_.zero_tol guard to avoid dividing by near-zero values and
preserves current return semantics.
🧹 Nitpick comments (2)
cpp/src/dual_simplex/cuts.hpp (2)

65-91: Consider reusing cut_info_t to avoid duplicate counting logic.

The print_cut_types function duplicates the cut-type counting logic that already exists in cut_info_t::record_cut_types. This could be simplified:

♻️ Suggested refactor
 template <typename i_t, typename f_t>
 void print_cut_types(const std::string& prefix,
                      const std::vector<cut_type_t>& cut_types,
                      const simplex_solver_settings_t<i_t, f_t>& settings)
 {
-  i_t num_gomory_cuts   = 0;
-  i_t num_mir_cuts      = 0;
-  i_t num_knapsack_cuts = 0;
-  i_t num_cg_cuts       = 0;
-  for (i_t i = 0; i < cut_types.size(); i++) {
-    if (cut_types[i] == cut_type_t::MIXED_INTEGER_GOMORY) {
-      num_gomory_cuts++;
-    } else if (cut_types[i] == cut_type_t::MIXED_INTEGER_ROUNDING) {
-      num_mir_cuts++;
-    } else if (cut_types[i] == cut_type_t::KNAPSACK) {
-      num_knapsack_cuts++;
-    } else if (cut_types[i] == cut_type_t::CHVATAL_GOMORY) {
-      num_cg_cuts++;
-    }
-  }
+  cut_info_t<i_t, f_t> info;
+  // Note: record_cut_types takes non-const ref, may need adjustment
+  auto mutable_types = cut_types;
+  info.record_cut_types(mutable_types);
   settings.log.printf("%s: Gomory cuts: %d, MIR cuts: %d, Knapsack cuts: %d CG cuts: %d\n",
                       prefix.c_str(),
-                      num_gomory_cuts,
-                      num_mir_cuts,
-                      num_knapsack_cuts,
-                      num_cg_cuts);
+                      info.num_gomory_cuts,
+                      info.num_mir_cuts,
+                      info.num_knapsack_cuts,
+                      info.num_cg_cuts);
 }

Also, the loop at line 74 compares signed i_t with unsigned size_t from cut_types.size(), which may trigger compiler warnings. Consider using size_t for the loop index or casting.


267-304: Consider documenting the workspace vector invariants.

The tableau_equality_t class maintains several workspace vectors (b_bar_, x_workspace_, c_workspace_, etc.) and marking arrays (nonbasic_mark_, x_mark_). Consider adding brief comments documenting when these workspaces are valid/dirty, especially since generate_base_equality likely modifies them and may require reset between calls.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/dual_simplex/bounds_strengthening.cpp (1)

243-277: ⚠️ Potential issue | 🟡 Minor

Debug code references undefined problem variable — will not compile if enabled.

The debug block references problem.lower and problem.upper (lines 250, 254-255, 258, 263), but problem is not in scope. It was a constructor parameter, not stored as a member. If DEBUG_BOUND_STRENGTHENING is set to 1, this will cause a compilation error.

🐛 Proposed fix: use the correct parameter names
   for (i_t i = 0; i < n; ++i) {
-    if (lower[i] > problem.lower[i] + settings.primal_tol ||
-        (!std::isfinite(problem.lower[i]) && std::isfinite(lower[i]))) {
+    if (lower[i] > lower_bounds[i] + settings.primal_tol ||
+        (!std::isfinite(lower_bounds[i]) && std::isfinite(lower[i]))) {
       num_lb_changed++;
       lb_change +=
-        std::isfinite(problem.lower[i])
-          ? (lower[i] - problem.lower[i]) / (1e-6 + std::max(abs(lower[i]), abs(problem.lower[i])))
+        std::isfinite(lower_bounds[i])
+          ? (lower[i] - lower_bounds[i]) / (1e-6 + std::max(abs(lower[i]), abs(lower_bounds[i])))
           : 1.0;
     }
-    if (upper[i] < problem.upper[i] - settings.primal_tol ||
-        (!std::isfinite(problem.upper[i]) && std::isfinite(upper[i]))) {
+    if (upper[i] < upper_bounds[i] - settings.primal_tol ||
+        (!std::isfinite(upper_bounds[i]) && std::isfinite(upper[i]))) {
       num_ub_changed++;
       ub_change +=
-        std::isfinite(problem.upper[i])
-          ? (problem.upper[i] - upper[i]) / (1e-6 + std::max(abs(problem.upper[i]), abs(upper[i])))
+        std::isfinite(upper_bounds[i])
+          ? (upper_bounds[i] - upper[i]) / (1e-6 + std::max(abs(upper_bounds[i]), abs(upper[i])))
           : 1.0;
     }
   }

Note: After this fix, the debug comparison would show changes from the input bounds (before strengthening) to the computed bounds (stored in lower/upper), which appears to be the intended behavior.

🧹 Nitpick comments (7)
cpp/src/dual_simplex/bounds_strengthening.cpp (3)

228-229: Variable shadowing: local bounds_changed shadows the parameter.

The local variable bool bounds_changed on line 228 shadows the function parameter const std::vector<bool>& bounds_changed from line 94. While this doesn't cause a bug currently (the parameter is only used before this point), it can lead to confusion and potential issues if future code attempts to use the parameter after this line.

♻️ Proposed fix: rename the local variable
-      bool bounds_changed = lb_updated || ub_updated;
-      if (bounds_changed) { num_bounds_changed++; }
+      bool bound_updated = lb_updated || ub_updated;
+      if (bound_updated) { num_bounds_changed++; }

15-27: Consider guarding against near-zero coefficients for numerical stability.

The update_lb and update_ub functions divide by coeff without guarding against near-zero values. While sparse matrix coefficients are typically non-zero, near-zero coefficients could cause numerical instability by producing very large bounds. The downstream std::max/std::min operations provide some protection, but an explicit guard would be more robust.

♻️ Optional: Add near-zero coefficient guard
 template <typename f_t>
 static inline f_t update_lb(f_t curr_lb, f_t coeff, f_t delta_min_act, f_t delta_max_act)
 {
+  constexpr f_t coeff_eps = 1e-12;
+  if (std::abs(coeff) < coeff_eps) { return curr_lb; }
   auto comp_bnd = (coeff < 0.) ? delta_min_act / coeff : delta_max_act / coeff;
   return std::max(curr_lb, comp_bnd);
 }

 template <typename f_t>
 static inline f_t update_ub(f_t curr_ub, f_t coeff, f_t delta_min_act, f_t delta_max_act)
 {
+  constexpr f_t coeff_eps = 1e-12;
+  if (std::abs(coeff) < coeff_eps) { return curr_ub; }
   auto comp_bnd = (coeff < 0.) ? delta_max_act / coeff : delta_min_act / coeff;
   return std::min(curr_ub, comp_bnd);
 }

Based on learnings: "Guard numerical stability by guarding near-zero norms in cut-related computations... functions like cut_orthogonality (and related routines in the same directory) should also handle near-zero norms to avoid instability."


213-217: Hardcoded tolerance 1e-6 could be inconsistent with solver settings.

Line 213 uses a hardcoded tolerance of 1e-6, while lines 207-208 use 1e3 * settings.primal_tol. For consistency, consider using a settings-based tolerance here as well.

♻️ Optional: Use consistent tolerance
-      if (new_lb > new_ub + 1e-6) {
+      if (new_lb > new_ub + settings.primal_tol) {
cpp/src/dual_simplex/branch_and_bound.cpp (4)

1617-1627: Assertions on size mismatches may crash in production.

Lines 1617-1621 and 1622-1627 assert on basic_list and nonbasic_list sizes after logging the mismatch. In production builds where assertions may be disabled or where continued execution after logging is expected, consider returning an error status instead:

🛠️ Suggested improvement
       if (basic_list.size() != original_lp_.num_rows) {
         settings_.log.printf(
           "basic_list size %d != m %d\n", basic_list.size(), original_lp_.num_rows);
-        assert(basic_list.size() == original_lp_.num_rows);
+        return lp_status_t::NUMERICAL_ISSUES;
       }
       if (nonbasic_list.size() != original_lp_.num_cols - original_lp_.num_rows) {
         settings_.log.printf("nonbasic_list size %d != n - m %d\n",
                              nonbasic_list.size(),
                              original_lp_.num_cols - original_lp_.num_rows);
-        assert(nonbasic_list.size() == original_lp_.num_cols - original_lp_.num_rows);
+        return lp_status_t::NUMERICAL_ISSUES;
       }

1858-1859: Debug timing conditions should be cleaned up.

Multiple if (1 || ...) patterns at lines 1858, 1865, 1920, 1960, 1986, and 2022 always evaluate to true, printing timing information unconditionally. These appear to be debug artifacts that should be reverted to conditional logging:

♻️ Example fix
-      if (1 || cut_generation_time > 1.0) {
+      if (cut_generation_time > 1.0) {
         settings_.log.printf("Cut generation time %.2f seconds\n", cut_generation_time);
       }

Apply similar changes to all six occurrences.

Also applies to: 1865-1866, 1920-1921, 1960-1961, 1986-1987, 2022-2023


1875-1879: Debug print block should be removed or made conditional.

The #if 1 block unconditionally prints cut pool information. Consider removing or guarding with a debug macro/setting:

-#if 1
-      cut_pool.print_cutpool_types();
-      print_cut_types("In LP      ", cut_types, settings_);
-      printf("Cut pool size: %d\n", cut_pool.pool_size());
-#endif
+#ifdef DEBUG_CUT_POOL
+      cut_pool.print_cutpool_types();
+      print_cut_types("In LP      ", cut_types, settings_);
+      settings_.log.printf("Cut pool size: %d\n", cut_pool.pool_size());
+#endif

Also note the use of raw printf instead of settings_.log.printf.


2115-2120: Unused variable node_presolve_time.

node_presolve_time is computed at line 2119 but never used. Either log it or remove the computation:

       bool feasible =
         node_presolve.bounds_strengthening(settings_, bounds_changed, original_lp_.lower, original_lp_.upper);
-      f_t node_presolve_time = toc(node_presolve_start_time);
       mutex_original_lp_.unlock();

Or if timing is useful for debugging:

       f_t node_presolve_time = toc(node_presolve_start_time);
       mutex_original_lp_.unlock();
+      if (node_presolve_time > 1.0) {
+        settings_.log.printf("Reduced-cost node presolve time %.2f seconds\n", node_presolve_time);
+      }

@chris-maes
Contributor Author

/ok to test 030dbb5

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/cuts.cpp`:
- Around line 101-135: In score_cuts (cut_pool_t::score_cuts) reset the
scored_cuts_ counter at the start of each scoring pass so it does not accumulate
across calls: after reserving/clearing best_cuts_ (or immediately before using
it) set scored_cuts_ = 0; this ensures subsequent cut selection logic that
depends on scored_cuts_ uses a fresh count for each pass rather than carrying
over previous values.
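The reset described above can be illustrated with a minimal sketch. Note that `cut_pool_sketch` is a hypothetical stand-in for `cut_pool_t`; the field and method names mirror the review comment, not the actual class in `cuts.cpp`:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of the suggested fix: reset scored_cuts_ at the start of
// every scoring pass so counts do not accumulate across calls.
struct cut_pool_sketch {
  int scored_cuts_ = 0;
  std::vector<int> best_cuts_;

  // Scores cuts with a positive score; returns the per-pass count.
  int score_cuts(const std::vector<double>& scores)
  {
    best_cuts_.clear();
    scored_cuts_ = 0;  // fresh count for this pass (the fix)
    for (std::size_t k = 0; k < scores.size(); ++k) {
      if (scores[k] > 0.0) {
        best_cuts_.push_back(static_cast<int>(k));
        ++scored_cuts_;
      }
    }
    return scored_cuts_;
  }
};
```

Without the reset, a second call would report the sum of both passes instead of the count for the current pass.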

@chris-maes
Contributor Author

/ok to test 635f509

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/dual_simplex/bounds_strengthening.cpp (1)

105-116: ⚠️ Potential issue | 🟠 Major

Guard against bounds_changed size mismatch to avoid OOB.

Line 105 assumes bounds_changed.size() == n when non-empty; a smaller vector will cause out-of-bounds access. Please add a size check (or fallback to full scan) before indexing.

Proposed fix (fallback to full scan on mismatch)
-  if (!bounds_changed.empty()) {
-    std::fill(constraint_changed.begin(), constraint_changed.end(), false);
-    for (i_t i = 0; i < n; ++i) {
-      if (bounds_changed[i]) {
-        const i_t row_start = A.col_start[i];
-        const i_t row_end   = A.col_start[i + 1];
-        for (i_t p = row_start; p < row_end; ++p) {
-          const i_t j           = A.i[p];
-          constraint_changed[j] = true;
-        }
-      }
-    }
-  }
+  if (!bounds_changed.empty()) {
+    if (bounds_changed.size() == static_cast<size_t>(n)) {
+      std::fill(constraint_changed.begin(), constraint_changed.end(), false);
+      for (i_t i = 0; i < n; ++i) {
+        if (bounds_changed[i]) {
+          const i_t row_start = A.col_start[i];
+          const i_t row_end   = A.col_start[i + 1];
+          for (i_t p = row_start; p < row_end; ++p) {
+            const i_t j           = A.i[p];
+            constraint_changed[j] = true;
+          }
+        }
+      }
+    } else {
+      settings.log.debug("bounds_changed size mismatch; running full scan\n");
+    }
+  }

As per coding guidelines: Validate input sanitization to prevent buffer overflows and resource exhaustion attacks; avoid unsafe deserialization of problem files.

🧹 Nitpick comments (1)
cpp/src/dual_simplex/bounds_strengthening.hpp (1)

23-26: Document bound_changed contract (size/semantics).

The new parameter is easy to misuse (empty vs sized-but-all-false). Consider a brief comment stating expected size and semantics (e.g., empty ⇒ full scan; otherwise size == num_cols).
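The contract could also be checked programmatically. Below is a hypothetical helper (not part of the repository) that encodes the semantics suggested above: empty means "full scan"; otherwise the size must equal `num_cols`:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical validator for the bounds_changed contract:
// empty => full scan; otherwise size must equal the number of columns.
bool bounds_changed_contract_ok(const std::vector<bool>& bounds_changed, int num_cols)
{
  return bounds_changed.empty() ||
         bounds_changed.size() == static_cast<std::size_t>(num_cols);
}
```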

@chris-maes
Contributor Author

/ok to test 6156fd1

This was referenced Jan 30, 2026
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
cpp/src/dual_simplex/barrier.cu (1)

684-704: ⚠️ Potential issue | 🟠 Major

Fix sentinel propagation at line 1810.
The solve_adat call at line 1810 returns CONCURRET_HALT_RETURN (-2) on concurrent halt, but the error handler if (solve_status != 0) { return status; } swallows the sentinel by returning the previous status variable instead of propagating solve_status. This prevents the halt signal from reaching callers. Lines 1935 and 2410 show the correct pattern: check for the sentinel explicitly before the generic error check.

Required fix
-    if (solve_status != 0) { return status; }
+    if (solve_status == CONCURRET_HALT_RETURN) { return CONCURRET_HALT_RETURN; }
+    if (solve_status != 0) { return solve_status; }
cpp/src/dual_simplex/primal.cpp (1)

322-324: ⚠️ Potential issue | 🟡 Minor

Pre-existing duplicate call to reorder_basic_list.

Lines 322-323 call reorder_basic_list(q, basic_list) twice in succession. This appears to be a pre-existing issue (not introduced by this PR), but calling this function twice would incorrectly double-permute the basic list.

🐛 Proposed fix
   reorder_basic_list(q, basic_list);
-  reorder_basic_list(q, basic_list);
   basis_update_t ft(L, U, p);
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/right_looking_lu.cpp`:
- Around lines 1116-1119: Explicitly detect and propagate the concurrent-halt
sentinel returned by right_looking_lu_row_permutation_only instead of treating
it as a pivot count. After the call that assigns pivots, add a check such as
if (pivots == CONCURRET_HALT_RETURN) return CONCURRET_HALT_RETURN; (or, more
generally, if (pivots < 0) return pivots;) before computing num_dependent or
allocating independent_rhs(pivots), so the sentinel -2 is propagated upward and
never used as a size or pivot count.
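The propagation pattern can be sketched as follows. Here `factor_rows` and `factorize` are hypothetical stand-ins for `right_looking_lu_row_permutation_only` and its caller, and the sentinel is defined locally with the same (misspelled) name as the macro in `types.hpp`:

```cpp
// Local stand-in for the macro defined in types.hpp.
constexpr int CONCURRET_HALT_RETURN = -2;

// Hypothetical factorization kernel: returns a pivot count, or the sentinel
// if a concurrent halt was requested.
int factor_rows(bool halted, int rank) { return halted ? CONCURRET_HALT_RETURN : rank; }

int factorize(bool halted, int m)
{
  const int pivots = factor_rows(halted, m);
  // Propagate negative sentinels BEFORE using pivots as a count or size.
  if (pivots < 0) { return pivots; }
  const int num_dependent = m - pivots;  // safe: pivots is a real pivot count
  return num_dependent;
}
```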
🧹 Nitpick comments (2)
cpp/src/dual_simplex/types.hpp (1)

22-23: Consider fixing the macro name typo for clarity.
CONCURRET_HALT_RETURN looks misspelled; renaming to CONCURRENT_HALT_RETURN (with a mechanical rename across usages) would reduce confusion during status handling.

cpp/src/dual_simplex/phase2.cpp (1)

2085-2168: Route infeasibility diagnostics through settings.log for consistency.

Direct printf bypasses logging controls and can be noisy in multi‑threaded runs. Consider using settings.log.printf (and/or a debug level) to keep output consistent.

Suggested tweak
-    printf(
+    settings.log.printf(
       "Primal infeasibility %e/%e (Basic %e, Nonbasic %e, Basic over %e). Perturbation %e/%e. Info "
       "%d\n",
       primal_infeas,
       orig_primal_infeas,
       basic_infeas,
       nonbasic_infeas,
       basic_over,
       orig_perturbation,
       perturbation,
       info);

@chris-maes
Contributor Author

/ok to test 83b5ac9

@chris-maes
Contributor Author

/ok to test ca00738

@chris-maes
Contributor Author

/ok to test 026ca38

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/branch_and_bound.cpp`:
- Around lines 343-408: find_reduced_cost_fixings currently counts integer
fixings (num_fixed) but never applies them. Update the logic that computes
reduced_cost_upper_bound and reduced_cost_lower_bound so that, when a
tightening is possible, it writes the new values into lower_bounds[j] and
upper_bounds[j] and marks bounds_changed[j]. When var_types_[j] ==
variable_type_t::INTEGER and reduced_cost_upper_bound <=
reduced_cost_lower_bound + fixed_tol, fix the variable to a valid integer
value (e.g., clamp/round into [reduced_cost_lower_bound,
reduced_cost_upper_bound] using floor/ceil as appropriate), assign it to both
lower_bounds[j] and upper_bounds[j], increment num_fixed, and set
bounds_changed[j] = true so the fixing is actually applied rather than only
counted.
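For a single variable, the apply-instead-of-count logic could look like the sketch below. All names (`rc_lb`, `rc_ub`, `fixed_tol`, `apply_integer_fixing`) are illustrative, not the solver's actual identifiers:

```cpp
#include <cmath>

// Hedged sketch: apply a reduced-cost tightening/fixing for one integer
// variable, writing into the bounds rather than only counting.
// Returns true if the variable was fixed to a single integer value.
bool apply_integer_fixing(double rc_lb, double rc_ub, double fixed_tol,
                          double& lb, double& ub, bool& bound_changed)
{
  if (rc_ub <= rc_lb + fixed_tol) {
    // Interval collapsed: fix to an integer inside [rc_lb, rc_ub].
    const double fixed_value = std::ceil(rc_lb - fixed_tol);
    lb            = fixed_value;
    ub            = fixed_value;
    bound_changed = true;
    return true;
  }
  // Otherwise only tighten: round inward toward the reduced-cost interval.
  bool tightened = false;
  if (rc_lb > lb) { lb = std::ceil(rc_lb); tightened = true; }
  if (rc_ub < ub) { ub = std::floor(rc_ub); tightened = true; }
  bound_changed = bound_changed || tightened;
  return false;
}
```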
🧹 Nitpick comments (1)
cpp/src/dual_simplex/cuts.cpp (1)

157-199: drop_cuts() is still a stub.
If cut aging is expected to prune the pool, consider implementing this or tracking it as a follow‑up.

Would you like me to draft an implementation or open a tracking issue?
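One possible shape for such an implementation is an age-based pruning pass. This is a speculative sketch of a policy (cuts idle for more than `max_age` passes are dropped), not the repository's API; `aged_cut` and the free-function signature are invented for illustration:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical cut-aging policy: each cut tracks how many cut passes have
// elapsed since it was last binding in the LP; stale cuts are pruned.
struct aged_cut {
  int id;
  int age;  // passes since this cut was last active
};

// Removes cuts older than max_age; returns how many were dropped.
int drop_cuts(std::vector<aged_cut>& pool, int max_age)
{
  const std::size_t before = pool.size();
  std::vector<aged_cut> kept;
  kept.reserve(pool.size());
  for (const auto& c : pool) {
    if (c.age <= max_age) { kept.push_back(c); }
  }
  pool.swap(kept);
  return static_cast<int>(before - pool.size());
}
```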

@chris-maes
Contributor Author

/ok to test 51e0ed0

Labels

feature request (New feature or request), non-breaking (Introduces a non-breaking change)
