Add cuts to MIP solver #813
📝 Walkthrough

This pull request adds comprehensive cut generation and management capabilities to the dual simplex solver, including Gomory, MIR, Knapsack, and Chvátal-Gomory cuts. It extends the MIP solver configuration with new parameters for cut control and node limits, enhances basis update operations with new methods, and propagates edge norm vectors through LP solving paths. Additionally, it refines error signaling codes and adds root relaxation solution tracking capabilities.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cpp/src/dual_simplex/mip_node.hpp (1)
37-53: Uninitialized member `integer_infeasible` in default constructor. The default constructor initializes most members but not the new `integer_infeasible` field. This leaves it with an undefined value if a node is default-constructed.

🔧 Suggested fix
  mip_node_t()
    : status(node_status_t::PENDING),
      lower_bound(-std::numeric_limits<f_t>::infinity()),
      depth(0),
      parent(nullptr),
      node_id(0),
      branch_var(-1),
      branch_dir(rounding_direction_t::NONE),
      branch_var_lower(-std::numeric_limits<f_t>::infinity()),
      branch_var_upper(std::numeric_limits<f_t>::infinity()),
      fractional_val(std::numeric_limits<f_t>::infinity()),
      objective_estimate(std::numeric_limits<f_t>::infinity()),
-     vstatus(0)
+     vstatus(0),
+     integer_infeasible(-1)
  {
🤖 Fix all issues with AI agents
In `@cpp/src/dual_simplex/basis_solves.cpp`:
- Around line 364-367: The callers of factorize_basis still only check for -1 so
they miss the new -2 concurrent-halt code; update each site that currently does
if (factorize_basis(...) == -1) (the three call sites in crossover.cpp and the
one in primal.cpp) to treat any nonzero return as an error (e.g., if
(factorize_basis(...) != 0)) or explicitly test for both -1 and -2, so
concurrent halt (-2) is detected and handled the same as the existing error
path.
In `@cpp/src/dual_simplex/basis_updates.cpp`:
- Around line 1308-1317: The code resizes only row_permutation_ after expanding
the basis (in the block following compute_transposes()), leaving
col_permutation_ and inverse_col_permutation_ at the old size which yields stale
mappings for subsequent u_solve/u_transpose_solve; fix by resizing
col_permutation_ and inverse_col_permutation_ to match the new column count (m +
cuts_basic.m or the new U0_.cols()), and append identity entries for each new
column index (set col_permutation_[k] = k and then recompute
inverse_col_permutation_ via inverse_permutation or equivalent). Ensure you
perform these updates in the same routine where row_permutation_ is adjusted so
index mappings remain consistent across compute_transposes(), U0_, and the
u_solve/u_transpose_solve calls.
In `@cpp/src/dual_simplex/branch_and_bound.cpp`:
- Around line 343-405: The log message computes a percentage using
num_integer_variables_ and can divide by zero when there are no integer
variables; update the post-check block in find_reduced_cost_fixings to guard
that division by computing the percentage only if num_integer_variables_ > 0 (or
substitute a safe denominator like 1) before calling settings_.log.printf (refer
to find_reduced_cost_fixings and symbols settings_.log.printf and
num_integer_variables_), ensuring the printed percentage is valid or omitted
when there are zero integer vars.
In `@cpp/src/dual_simplex/cuts.cpp`:
- Around line 430-437: Protect the division in the ratio computation by checking
weights[i] before dividing: in the loop that fills ratios (using values,
weights, i_t i, n), when weights[i] == 0 assign a sentinel ratio instead of
performing values[i] / weights[i] (e.g. use std::numeric_limits<f_t>::infinity()
for positive values[i], -infinity for negative values[i], and 0 for values[i] ==
0) so std::sort on perm remains well-defined; ensure <limits> is available for
std::numeric_limits.
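The sentinel scheme described in the prompt above can be sketched as a standalone helper. `safe_ratios` is illustrative, not a cuOpt symbol; it only demonstrates how substituting ±infinity (and 0 for a zero numerator) keeps the sort keys well-defined when a weight is zero:

```cpp
#include <cassert>
#include <limits>
#include <vector>

// Compute sort keys values[i] / weights[i], substituting sentinels when
// weights[i] == 0 so a subsequent std::sort over the ratios stays well-defined.
std::vector<double> safe_ratios(const std::vector<double>& values,
                                const std::vector<double>& weights)
{
  const std::size_t n = values.size();
  std::vector<double> ratios(n);
  for (std::size_t i = 0; i < n; ++i) {
    if (weights[i] == 0.0) {
      // Sentinel: +inf for positive numerators, -inf for negative, 0 for zero
      ratios[i] = values[i] > 0.0 ? std::numeric_limits<double>::infinity()
                : values[i] < 0.0 ? -std::numeric_limits<double>::infinity()
                                  : 0.0;
    } else {
      ratios[i] = values[i] / weights[i];
    }
  }
  return ratios;
}
```
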
- Around line 769-771: The loop currently uses "break" when negate_inequality ==
-1 which stops MIR cut generation for all remaining rows; change this so only
the current row is skipped: in the function/method containing this loop (where
negate_inequality is checked) replace the break with a continue, or
alternatively set the current row's score to zero (the variable used for row
scoring) and continue, so subsequent rows still get processed and MIR cuts can
be generated for them.
- Around line 46-66: The function cut_distance computes cut_norm and divides by
it without checking for near-zero values; modify cut_distance (and similarly
review cut_orthogonality and related routines) to guard against tiny norms:
after computing cut_norm = std::sqrt(dot) compare it to a small threshold (e.g.
something based on std::numeric_limits<f_t>::epsilon(), scaled appropriately)
and if cut_norm is below that threshold avoid the divide — set cut_norm to 0 (or
the threshold) and set distance to 0 (or compute cut_violation/threshold if you
prefer a capped value), ensuring cut_violation is still assigned from
rhs_storage_[row] - cut_x; update the logic around the return value so no
division by a near-zero cut_norm occurs.
- Around line 78-98: In cut_orthogonality, guard the division by (norm_i *
norm_j): compute a small threshold (e.g. eps =
std::numeric_limits<f_t>::epsilon() * 100 or a small constant) and if norm_i *
norm_j <= eps return a sensible default (suggest returning 1.0 to represent
maximal orthogonality for near-zero norms), otherwise perform the existing
computation using sparse_dot and cut_norms_. Reference symbols: function
cut_orthogonality, variables norm_i, norm_j, cut_norms_, and sparse_dot.
- Around line 2760-2787: The function write_solution_for_cut_verification
currently opens a FILE* (fid) with fopen and only calls fclose on the normal
path, risking a resource leak if an exception occurs or a later I/O call fails;
replace the raw FILE* usage with an RAII-managed resource (e.g.,
std::unique_ptr<FILE, decltype(&fclose)> or switch to std::ofstream) so the file
is always closed automatically, or alternatively ensure all exit paths call
fclose (including via try/finally-style cleanup or catch blocks). Update
references to fid, fopen, and fclose in write_solution_for_cut_verification so
printing and writing use the RAII handle and no manual fclose is required.
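The RAII pattern the prompt recommends looks roughly like this. `write_solution_sketch` is a hypothetical stand-in for `write_solution_for_cut_verification`, showing only the resource-management idea: the `unique_ptr` deleter guarantees `fclose` on every exit path, including early returns and exceptions.

```cpp
#include <cassert>
#include <cstdio>
#include <memory>
#include <string>

// Open a file with an RAII handle so it is always closed automatically.
inline int write_solution_sketch(const std::string& path, double objective)
{
  std::unique_ptr<FILE, int (*)(FILE*)> fid(std::fopen(path.c_str(), "w"), &std::fclose);
  if (!fid) { return -1; }  // fopen failed; nothing to clean up
  std::fprintf(fid.get(), "objective %.17g\n", objective);
  return 0;  // fclose runs when fid goes out of scope, on every path
}
```
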
In `@cpp/src/dual_simplex/cuts.hpp`:
- Around line 170-173: The cut_orthogonality function currently divides by
(norm_i * norm_j) which can be near-zero due to floating-point accumulation;
modify cut_orthogonality to check (norm_i * norm_j) against a small epsilon
(e.g. 1e-12) before performing the division and handle the near-zero case by
returning a safe orthogonality value (such as 0.0) or clamping the denominator
to epsilon; update any related uses (cut_distance, cut_density remain unchanged)
so cut_orthogonality(i, j) never performs an unsafe division.
- Line 9: The computation in cut_orthogonality divides by norm_i * norm_j
without guarding against near-zero norms; update the function
(cut_orthogonality) to load norm_i = cut_norms_[i] and norm_j = cut_norms_[j],
define a small epsilon (e.g. 1e-14), and if either norm is less than epsilon
return 0.0 (treat as non-orthogonal) before performing return 1.0 -
std::abs(dot) / (norm_i * norm_j); this prevents NaN/Inf from degenerate cuts
and preserves numerical stability.
In `@cpp/src/dual_simplex/dense_matrix.hpp`:
- Around line 19-22: The two dense_matrix_t constructors allocate values(rows *
cols, ...) without validating dimensions; add a centralized initializer (e.g., a
private init method used by both dense_matrix_t(i_t,i_t) and
dense_matrix_t(i_t,i_t,f_t)) that checks rows and cols are positive and that
(size_t)rows <= max_size_t / (size_t)cols to prevent multiplication
overflow/huge allocation, then computes the product as a size_t and constructs
values with that size; on failure, throw a std::invalid_argument or
std::length_error with a clear message.
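A minimal sketch of the centralized dimension check, using an illustrative `checked_dense_matrix_t` stand-in rather than the real `dense_matrix_t` class:

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <stdexcept>
#include <vector>

template <typename i_t, typename f_t>
struct checked_dense_matrix_t {
  i_t m, n;
  std::vector<f_t> values;

  checked_dense_matrix_t(i_t rows, i_t cols, f_t fill = f_t{0}) : m(rows), n(cols)
  {
    // Reject non-positive dimensions before any size arithmetic
    if (rows <= 0 || cols <= 0) { throw std::invalid_argument("dimensions must be positive"); }
    const std::size_t r = static_cast<std::size_t>(rows);
    const std::size_t c = static_cast<std::size_t>(cols);
    // Guard the multiplication against size_t overflow / huge allocation
    if (r > std::numeric_limits<std::size_t>::max() / c) {
      throw std::length_error("rows * cols overflows size_t");
    }
    values.assign(r * c, fill);
  }
};
```
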
In `@cpp/src/dual_simplex/sparse_matrix.cpp`:
- Around line 577-580: The current guard checks "j >= col_start.size()" but code
later accesses col_start[j+1], allowing an OOB when col_start.size() == n; add a
pre-check that col_start.size() >= this->n + 1 (or equivalently fail when
col_start.size() < this->n + 1) before iterating, or replace the per-iteration
check with a bounds test using j+1 (e.g. ensure j+1 < col_start.size()); update
the same function where the loop uses j and accesses col_start[j+1] to return an
error (e.g., -1) when the col_start size is insufficient.
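The pre-check could look like the following standalone helper (`validate_col_start` is illustrative, not an actual cuOpt symbol): confirm `col_start` has `n + 1` entries before any loop reads `col_start[j + 1]`.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Validate a CSC-style column start array for a matrix with n columns.
inline int validate_col_start(const std::vector<int>& col_start, int n)
{
  if (col_start.size() < static_cast<std::size_t>(n) + 1) {
    return -1;  // too short: col_start[n] would be read out of bounds
  }
  for (int j = 0; j < n; ++j) {
    if (col_start[j + 1] < col_start[j]) { return -1; }  // must be nondecreasing
  }
  return 0;
}
```
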
In `@cpp/tests/dual_simplex/unit_tests/solve.cpp`:
- Around line 329-478: The test calls a non-existent API
solve_linear_program_with_cuts; update the test to apply cuts via the existing
internal add_cuts(...) helper and then call an existing public solver (e.g.
solve_linear_program_with_advanced_basis) instead: locate the test using
solve_linear_program_with_cuts and replace those calls with a sequence that
calls add_cuts(cuts, cut_rhs, lp, /*appropriate args*/) to modify lp and then
invokes solve_linear_program_with_advanced_basis(lp, start_time, settings,
solution, basis_update, basic_list, nonbasic_list, vstatus, edge_norms); ensure
variable lists (basic_list, nonbasic_list, vstatus, edge_norms) and basis_update
are preserved across the add_cuts+solve sequence so the test logic and
assertions remain valid.
🧹 Nitpick comments (5)
cpp/src/utilities/timer.hpp (1)
58-85: Minor: Misleading comment and unnecessarily complex computation. The docstring at line 61 mentions converting to a `struct timeval`, but the function returns a `double`. The `tv_sec`/`tv_usec` decomposition and reconstruction can be simplified.

♻️ Suggested simplification
double get_tic_start() const noexcept {
  /**
-  * Converts a std::chrono::steady_clock::time_point to a struct timeval.
-  * This is an approximate conversion because steady_clock is relative to an
-  * unspecified epoch (e.g., system boot time), not the system clock epoch (UTC).
+  * Converts the timer's begin time_point to approximate seconds since system epoch.
+  * This is an approximate conversion because steady_clock is relative to an
+  * unspecified epoch (e.g., system boot time), not the system clock epoch (UTC).
   */
  // Get the current time from both clocks at approximately the same instant
  std::chrono::system_clock::time_point sys_now    = std::chrono::system_clock::now();
  std::chrono::steady_clock::time_point steady_now = std::chrono::steady_clock::now();
  // Calculate the difference between the given steady_clock time point and the current steady
  // time
  auto diff_from_now = begin - steady_now;
  // Apply that same difference to the current system clock time point
  std::chrono::system_clock::time_point sys_t = sys_now + diff_from_now;
  // Convert the resulting system_clock time point to microseconds since the system epoch
  auto us_since_epoch =
    std::chrono::duration_cast<std::chrono::microseconds>(sys_t.time_since_epoch());
- // Populate the timeval struct
- double tv_sec  = us_since_epoch.count() / 1000000;
- double tv_usec = us_since_epoch.count() % 1000000;
-
- return tv_sec + 1e-6 * tv_usec;
+ return static_cast<double>(us_since_epoch.count()) * 1e-6;
}

cpp/src/dual_simplex/sparse_vector.cpp (1)
124-133: Consider adding a size assertion for safety. The `dot` method accesses `x_dense[i[k]]` without verifying that `x_dense.size() >= n`. While callers are expected to provide correctly sized vectors, an assertion would catch misuse during development. This is consistent with the existing `assert(p.size() == n)` pattern used in `inverse_permute_vector` at line 99.

🛡️ Suggested defensive assertion
template <typename i_t, typename f_t>
f_t sparse_vector_t<i_t, f_t>::dot(const std::vector<f_t>& x_dense) const
{
+ assert(x_dense.size() >= n);
  const i_t nz = i.size();
  f_t dot = 0.0;

cpp/src/dual_simplex/cuts.cpp (3)
1219-1223: Use consistent logging instead of printf. Line 1221 uses raw `printf` while the rest of the codebase uses `settings.log.printf`. This inconsistency could cause issues with log routing and makes debugging harder.

♻️ Proposed fix
  f_t a_bar_dot_xstar = a_bar.dot(xstar);
  if (std::abs(a_bar_dot_xstar - b_bar_[i]) > tableau_tol) {
-   printf("bad tableau equality. error %e\n", std::abs(a_bar_dot_xstar - b_bar_[i]));
+   settings.log.printf("bad tableau equality. error %e\n", std::abs(a_bar_dot_xstar - b_bar_[i]));
    return -1;
  }
2243-2249: Use consistent logging in strong CG cut helper. Multiple `printf` calls in this function should use `settings.log.printf` for consistency with the codebase's logging infrastructure. However, since `settings` is not passed to this helper, consider either passing it or removing these debug prints.

Also applies to: 2257-2258
224-228: TODO: `drop_cuts` is not implemented. The `drop_cuts` function is stubbed with a TODO. If the cut pool grows unbounded, this could lead to memory issues and performance degradation over long-running solves.

Would you like me to open an issue to track the implementation of `drop_cuts` for cut pool management?
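If `drop_cuts` is eventually implemented, one simple score-based policy might look like the following sketch. `select_cuts_to_keep` is hypothetical, not part of the codebase; it just shows trimming the pool to the highest-scoring cuts:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Keep only the max_cuts highest-scoring cuts; returns surviving cut
// indices, best score first.
std::vector<std::size_t> select_cuts_to_keep(const std::vector<double>& scores,
                                             std::size_t max_cuts)
{
  std::vector<std::size_t> order(scores.size());
  for (std::size_t k = 0; k < order.size(); ++k) { order[k] = k; }
  // Sort cut indices by score, descending
  std::sort(order.begin(), order.end(),
            [&](std::size_t a, std::size_t b) { return scores[a] > scores[b]; });
  if (order.size() > max_cuts) { order.resize(max_cuts); }
  return order;
}
```
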
  if (settings.concurrent_halt != nullptr && *settings.concurrent_halt == 1) {
    settings.log.printf("Concurrent halt\n");
-   return -1;
+   return -2;
  }
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find all callers of factorize_basis
rg -n 'factorize_basis\(' --type cpp

Repository: NVIDIA/cuopt
Length of output: 1489
🏁 Script executed:
#!/bin/bash
# Check how the return value is handled at each call site
# Start with crossover.cpp where multiple comparisons occur
echo "=== crossover.cpp calls ==="
rg -n -B2 -A5 'factorize_basis\(' cpp/src/dual_simplex/crossover.cpp | head -150
echo ""
echo "=== primal.cpp calls ==="
rg -n -B2 -A5 'factorize_basis\(' cpp/src/dual_simplex/primal.cpp
echo ""
echo "=== basis_updates.cpp calls ==="
rg -n -B2 -A5 'factorize_basis\(' cpp/src/dual_simplex/basis_updates.cpp

Repository: NVIDIA/cuopt
Length of output: 4426
🏁 Script executed:
#!/bin/bash
# Check basis_updates.cpp context for how status is used
echo "=== basis_updates.cpp context ==="
sed -n '2270,2320p' cpp/src/dual_simplex/basis_updates.cpp
echo ""
echo "=== Search for any handling of -2 or 'concurrent_halt' in return checks ==="
rg -n '\-2|concurrent_halt' --type cpp cpp/src/dual_simplex/ | grep -E '(== -2|!= -2|if.*-2|return.*-2)'

Repository: NVIDIA/cuopt
Length of output: 8708
🏁 Script executed:
#!/bin/bash
# Check what happens in crossover.cpp after the -1 check
echo "=== crossover.cpp line 797-810 ==="
sed -n '797,810p' cpp/src/dual_simplex/crossover.cpp
echo ""
echo "=== crossover.cpp line 1151-1160 ==="
sed -n '1151,1160p' cpp/src/dual_simplex/crossover.cpp
echo ""
echo "=== primal.cpp line 310-320 ==="
sed -n '310,320p' cpp/src/dual_simplex/primal.cpp
echo ""
echo "=== How is -2 handled in phase2.cpp ==="
rg -n -B2 -A2 'status == -2' cpp/src/dual_simplex/phase2.cpp | head -40

Repository: NVIDIA/cuopt
Length of output: 1803
🏁 Script executed:
#!/bin/bash
# Verify the function signatures and callers to understand the full call chain
echo "=== Function signatures ==="
rg -n 'crossover_status_t|primal::status_t' cpp/src/dual_simplex/crossover.hpp cpp/src/dual_simplex/primal.hpp | head -20
echo ""
echo "=== Check if these functions are called from phase2 ==="
rg -n 'crossover\(|primal\(' cpp/src/dual_simplex/phase2.cpp | head -20

Repository: NVIDIA/cuopt
Length of output: 402
Error code change incomplete: four call sites need updates to handle -2.
The new -2 return code for concurrent halt was added to factorize_basis() but not all callers were updated. In crossover.cpp (lines 797, 1151, 1350) and primal.cpp (line 310), the error checks still only compare against -1. When concurrent halt occurs and -2 is returned, these sites will not detect it as an error—they will fall through to the else branch and continue execution with potentially corrupted state.
Update these four locations to check for -2:
- `cpp/src/dual_simplex/crossover.cpp` lines 797–798, 1151–1152, 1350–1351
- `cpp/src/dual_simplex/primal.cpp` lines 310–311
Change pattern: if (factorize_basis(...) == -1) → if (factorize_basis(...) != 0) or explicitly check both -1 and -2.
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/basis_solves.cpp` around lines 364 - 367, The callers of
factorize_basis still only check for -1 so they miss the new -2 concurrent-halt
code; update each site that currently does if (factorize_basis(...) == -1) (the
three call sites in crossover.cpp and the one in primal.cpp) to treat any
nonzero return as an error (e.g., if (factorize_basis(...) != 0)) or explicitly
test for both -1 and -2, so concurrent halt (-2) is detected and handled the
same as the existing error path.
  compute_transposes();

  // Adjust row_permutation_ and inverse_row_permutation_
  row_permutation_.resize(m + cuts_basic.m);
  inverse_row_permutation_.resize(m + cuts_basic.m);
  for (i_t k = m; k < m + cuts_basic.m; ++k) {
    row_permutation_[k] = k;
  }
  inverse_permutation(row_permutation_, inverse_row_permutation_);
Update column permutations after expanding the basis.
Line 1308-1317 updates row_permutation_ but leaves col_permutation_/inverse_col_permutation_ at the old size. After U0_ grows, subsequent u_solve/u_transpose_solve will permute with stale mappings. Resize and append identity entries for the new columns.
🛠️ Suggested fix
// Adjust row_permutation_ and inverse_row_permutation_
row_permutation_.resize(m + cuts_basic.m);
inverse_row_permutation_.resize(m + cuts_basic.m);
for (i_t k = m; k < m + cuts_basic.m; ++k) {
row_permutation_[k] = k;
}
inverse_permutation(row_permutation_, inverse_row_permutation_);
+ // Adjust col_permutation_ and inverse_col_permutation_
+ col_permutation_.resize(m + cuts_basic.m);
+ for (i_t k = m; k < m + cuts_basic.m; ++k) {
+ col_permutation_[k] = k;
+ }
+ inverse_permutation(col_permutation_, inverse_col_permutation_);🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/basis_updates.cpp` around lines 1308 - 1317, The code
resizes only row_permutation_ after expanding the basis (in the block following
compute_transposes()), leaving col_permutation_ and inverse_col_permutation_ at
the old size which yields stale mappings for subsequent
u_solve/u_transpose_solve; fix by resizing col_permutation_ and
inverse_col_permutation_ to match the new column count (m + cuts_basic.m or the
new U0_.cols()), and append identity entries for each new column index (set
col_permutation_[k] = k and then recompute inverse_col_permutation_ via
inverse_permutation or equivalent). Ensure you perform these updates in the same
routine where row_permutation_ is adjusted so index mappings remain consistent
across compute_transposes(), U0_, and the u_solve/u_transpose_solve calls.
i_t branch_and_bound_t<i_t, f_t>::find_reduced_cost_fixings(f_t upper_bound,
                                                            std::vector<f_t>& lower_bounds,
                                                            std::vector<f_t>& upper_bounds)
{
  std::vector<f_t> reduced_costs = root_relax_soln_.z;
  lower_bounds = original_lp_.lower;
  upper_bounds = original_lp_.upper;
  std::vector<bool> bounds_changed(original_lp_.num_cols, false);
  const f_t root_obj = compute_objective(original_lp_, root_relax_soln_.x);
  const f_t threshold = 1e-3;
  const f_t weaken = 1e-5;
  i_t num_improved = 0;
  i_t num_fixed = 0;
  i_t num_cols_to_check = reduced_costs.size();  // Reduced costs will be smaller than the original
                                                 // problem because we have added slacks for cuts
  for (i_t j = 0; j < num_cols_to_check; j++) {
    if (!std::isfinite(reduced_costs[j])) {
      printf("Variable %d reduced cost is %e\n", j, reduced_costs[j]);
      continue;
    }
    if (std::abs(reduced_costs[j]) > threshold) {
      const f_t lower_j = original_lp_.lower[j];
      const f_t upper_j = original_lp_.upper[j];
      const f_t abs_gap = upper_bound - root_obj;
      f_t reduced_cost_upper_bound = upper_j;
      f_t reduced_cost_lower_bound = lower_j;
      if (lower_j > -inf && reduced_costs[j] > 0) {
        const f_t new_upper_bound = lower_j + abs_gap / reduced_costs[j];
        reduced_cost_upper_bound = var_types_[j] == variable_type_t::INTEGER
                                     ? std::floor(new_upper_bound + weaken)
                                     : new_upper_bound;
        if (reduced_cost_upper_bound < upper_j) {
          num_improved++;
          upper_bounds[j] = reduced_cost_upper_bound;
          bounds_changed[j] = true;
        }
      }
      if (upper_j < inf && reduced_costs[j] < 0) {
        const f_t new_lower_bound = upper_j + abs_gap / reduced_costs[j];
        reduced_cost_lower_bound = var_types_[j] == variable_type_t::INTEGER
                                     ? std::ceil(new_lower_bound - weaken)
                                     : new_lower_bound;
        if (reduced_cost_lower_bound > lower_j) {
          num_improved++;
          lower_bounds[j] = reduced_cost_lower_bound;
          bounds_changed[j] = true;
        }
      }
      if (var_types_[j] == variable_type_t::INTEGER &&
          reduced_cost_upper_bound <= reduced_cost_lower_bound) {
        num_fixed++;
      }
    }
  }

  if (num_fixed > 0 || num_improved > 0) {
    settings_.log.printf(
      "Reduced costs: Found %d improved bounds and %d fixed variables (%.1f%%)\n",
      num_improved,
      num_fixed,
      100.0 * static_cast<f_t>(num_fixed) / static_cast<f_t>(num_integer_variables_));
  }
  return num_fixed;
Guard the fixed-percentage log against zero integer variables.
If this code path is ever hit with no integer vars (e.g., LP-only usage), the percentage calculation divides by zero. A small guard keeps logging safe.
Suggested fix
- settings_.log.printf(
- "Reduced costs: Found %d improved bounds and %d fixed variables (%.1f%%)\n",
- num_improved,
- num_fixed,
- 100.0 * static_cast<f_t>(num_fixed) / static_cast<f_t>(num_integer_variables_));
+ const f_t fixed_pct =
+ (num_integer_variables_ > 0)
+ ? 100.0 * static_cast<f_t>(num_fixed) / static_cast<f_t>(num_integer_variables_)
+ : 0.0;
+ settings_.log.printf(
+ "Reduced costs: Found %d improved bounds and %d fixed variables (%.1f%%)\n",
+ num_improved,
+ num_fixed,
+ fixed_pct);🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/branch_and_bound.cpp` around lines 343 - 405, The log
message computes a percentage using num_integer_variables_ and can divide by
zero when there are no integer variables; update the post-check block in
find_reduced_cost_fixings to guard that division by computing the percentage
only if num_integer_variables_ > 0 (or substitute a safe denominator like 1)
before calling settings_.log.printf (refer to find_reduced_cost_fixings and
symbols settings_.log.printf and num_integer_variables_), ensuring the printed
percentage is valid or omitted when there are zero integer vars.
template <typename i_t, typename f_t>
f_t cut_pool_t<i_t, f_t>::cut_distance(i_t row,
                                       const std::vector<f_t>& x,
                                       f_t& cut_violation,
                                       f_t& cut_norm)
{
  const i_t row_start = cut_storage_.row_start[row];
  const i_t row_end = cut_storage_.row_start[row + 1];
  f_t cut_x = 0.0;
  f_t dot = 0.0;
  for (i_t p = row_start; p < row_end; p++) {
    const i_t j = cut_storage_.j[p];
    const f_t cut_coeff = cut_storage_.x[p];
    cut_x += cut_coeff * x[j];
    dot += cut_coeff * cut_coeff;
  }
  cut_violation = rhs_storage_[row] - cut_x;
  cut_norm = std::sqrt(dot);
  const f_t distance = cut_violation / cut_norm;
  return distance;
}
Guard against near-zero norm to prevent numerical instability.
The cut_distance function divides by cut_norm at line 64 without checking if it's near zero. If a cut has very small coefficients (which could happen after coefficient dropping), this could cause numerical instability or produce unreliable distance values.
🛡️ Proposed fix
cut_violation = rhs_storage_[row] - cut_x;
cut_norm = std::sqrt(dot);
+ const f_t min_norm = 1e-12;
+ if (cut_norm < min_norm) {
+ return 0.0; // Treat near-zero norm cuts as having zero distance
+ }
const f_t distance = cut_violation / cut_norm;
return distance;Based on learnings: "Guard numerical stability by guarding near-zero norms in cut-related computations... functions like cut_orthogonality (and related routines in the same directory) should also handle near-zero norms to avoid instability."
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/cuts.cpp` around lines 46 - 66, The function
cut_distance computes cut_norm and divides by it without checking for near-zero
values; modify cut_distance (and similarly review cut_orthogonality and related
routines) to guard against tiny norms: after computing cut_norm = std::sqrt(dot)
compare it to a small threshold (e.g. something based on
std::numeric_limits<f_t>::epsilon(), scaled appropriately) and if cut_norm is
below that threshold avoid the divide — set cut_norm to 0 (or the threshold) and
set distance to 0 (or compute cut_violation/threshold if you prefer a capped
value), ensuring cut_violation is still assigned from rhs_storage_[row] - cut_x;
update the logic around the return value so no division by a near-zero cut_norm
occurs.
template <typename i_t, typename f_t>
f_t cut_pool_t<i_t, f_t>::cut_orthogonality(i_t i, i_t j)
{
  const i_t i_start = cut_storage_.row_start[i];
  const i_t i_end = cut_storage_.row_start[i + 1];
  const i_t i_nz = i_end - i_start;
  const i_t j_start = cut_storage_.row_start[j];
  const i_t j_end = cut_storage_.row_start[j + 1];
  const i_t j_nz = j_end - j_start;

  f_t dot = sparse_dot(cut_storage_.j.data() + i_start,
                       cut_storage_.x.data() + i_start,
                       i_nz,
                       cut_storage_.j.data() + j_start,
                       cut_storage_.x.data() + j_start,
                       j_nz);

  f_t norm_i = cut_norms_[i];
  f_t norm_j = cut_norms_[j];
  return 1.0 - std::abs(dot) / (norm_i * norm_j);
}
Guard against near-zero norms in orthogonality calculation.
At line 97, the division by (norm_i * norm_j) can cause numerical instability when either norm is near-zero. This directly matches the retrieved learning about guarding near-zero norms in cut_orthogonality.
🛡️ Proposed fix
f_t norm_i = cut_norms_[i];
f_t norm_j = cut_norms_[j];
+ const f_t min_norm_product = 1e-12;
+ if (norm_i * norm_j < min_norm_product) {
+ return 0.0; // Treat as parallel (not orthogonal) to be conservative
+ }
return 1.0 - std::abs(dot) / (norm_i * norm_j);Based on learnings: "Guard numerical stability by guarding near-zero norms in cut-related computations... functions like cut_orthogonality (and related routines in the same directory) should also handle near-zero norms."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
template <typename i_t, typename f_t>
f_t cut_pool_t<i_t, f_t>::cut_orthogonality(i_t i, i_t j)
{
  const i_t i_start = cut_storage_.row_start[i];
  const i_t i_end = cut_storage_.row_start[i + 1];
  const i_t i_nz = i_end - i_start;
  const i_t j_start = cut_storage_.row_start[j];
  const i_t j_end = cut_storage_.row_start[j + 1];
  const i_t j_nz = j_end - j_start;

  f_t dot = sparse_dot(cut_storage_.j.data() + i_start,
                       cut_storage_.x.data() + i_start,
                       i_nz,
                       cut_storage_.j.data() + j_start,
                       cut_storage_.x.data() + j_start,
                       j_nz);

  f_t norm_i = cut_norms_[i];
  f_t norm_j = cut_norms_[j];
  const f_t min_norm_product = 1e-12;
  if (norm_i * norm_j < min_norm_product) {
    return 0.0;  // Treat as parallel (not orthogonal) to be conservative
  }
  return 1.0 - std::abs(dot) / (norm_i * norm_j);
}
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/cuts.cpp` around lines 78 - 98, In cut_orthogonality,
guard the division by (norm_i * norm_j): compute a small threshold (e.g. eps =
std::numeric_limits<f_t>::epsilon() * 100 or a small constant) and if norm_i *
norm_j <= eps return a sensible default (suggest returning 1.0 to represent
maximal orthogonality for near-zero norms), otherwise perform the existing
computation using sparse_dot and cut_norms_. Reference symbols: function
cut_orthogonality, variables norm_i, norm_j, cut_norms_, and sparse_dot.
/* clang-format on */
#pragma once

#include <dual_simplex/basis_updates.hpp>
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find basis_updates.hpp file
echo "=== Searching for basis_updates.hpp ==="
fd 'basis_updates\.hpp' --type f
# Check references to basis_updates.hpp
echo -e "\n=== References to basis_updates.hpp ==="
rg -n "basis_updates\.hpp" --max-count=20
# Check CMakeLists.txt for include_directories configuration
echo -e "\n=== Include directory configuration ==="
rg -n "target_include_directories|include_directories" CMakeLists.txtRepository: NVIDIA/cuopt
Length of output: 919
🏁 Script executed:
#!/bin/bash
# Find CMakeLists.txt
echo "=== Finding CMakeLists.txt ==="
find . -name 'CMakeLists.txt' -type f | head -20
# Check cuts.hpp for cut_orthogonality
echo -e "\n=== Checking cuts.hpp for cut_orthogonality ==="
cat -n cpp/src/dual_simplex/cuts.hpp | head -50
# Check if cuts.cpp exists
echo -e "\n=== Checking for cuts.cpp ==="
ls -la cpp/src/dual_simplex/cuts.cpp 2>&1 || echo "cuts.cpp not found"
# Search for cut_orthogonality implementation
echo -e "\n=== Searching for cut_orthogonality implementation ==="
rg -n "cut_orthogonality" --type cpp --type hpp -A 10 | head -60Repository: NVIDIA/cuopt
Length of output: 2839
🏁 Script executed:
#!/bin/bash
# Search for cut_orthogonality with correct flags
echo "=== Searching for cut_orthogonality implementation ==="
rg -n "cut_orthogonality" -A 15 | head -100
# Check dual_simplex CMakeLists.txt for include directories
echo -e "\n=== dual_simplex CMakeLists.txt include configuration ==="
cat cpp/src/dual_simplex/CMakeLists.txt | grep -A 5 -B 5 "include\|target"
# Also check cuts.hpp more completely for all content
echo -e "\n=== Full cuts.hpp file ==="
wc -l cpp/src/dual_simplex/cuts.hppRepository: NVIDIA/cuopt
Length of output: 6970
🏁 Script executed:
#!/bin/bash
# Get the complete cut_orthogonality function implementation
echo "=== Complete cut_orthogonality function (lines 79-107) ==="
sed -n '79,107p' cpp/src/dual_simplex/cuts.cpp
# Also check if there are numerical stability guards (epsilon, min operations, etc.)
echo -e "\n=== Checking for numerical stability patterns near cut_orthogonality ==="
sed -n '79,120p' cpp/src/dual_simplex/cuts.cpp | cat -nRepository: NVIDIA/cuopt
Length of output: 3155
Add epsilon guard in cut_orthogonality for division by near-zero norms.
Line 19 in cuts.cpp performs division by norm_i * norm_j without checking for near-zero values. This violates numerical stability requirements and can produce NaN/Inf results for degenerate cuts. Add an epsilon guard to handle zero or near-zero norms:
f_t norm_i = cut_norms_[i];
f_t norm_j = cut_norms_[j];
const f_t epsilon = 1e-14;
if (norm_i < epsilon || norm_j < epsilon) {
return 0.0; // Degenerate cut; treat as non-orthogonal
}
return 1.0 - std::abs(dot) / (norm_i * norm_j);(Note: The include path for basis_updates.hpp is correct and verified.)
🧰 Tools
🪛 Clang (14.0.6)
[error] 9-9: 'dual_simplex/basis_updates.hpp' file not found
(clang-diagnostic-error)
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/cuts.hpp` at line 9, The computation in
cut_orthogonality divides by norm_i * norm_j without guarding against near-zero
norms; update the function (cut_orthogonality) to load norm_i = cut_norms_[i]
and norm_j = cut_norms_[j], define a small epsilon (e.g. 1e-14), and if either
norm is less than epsilon return 0.0 (treat as non-orthogonal) before performing
return 1.0 - std::abs(dot) / (norm_i * norm_j); this prevents NaN/Inf from
degenerate cuts and preserves numerical stability.
f_t cut_distance(i_t row, const std::vector<f_t>& x, f_t& cut_violation, f_t& cut_norm);
f_t cut_density(i_t row);
f_t cut_orthogonality(i_t i, i_t j);
🧩 Analysis chain
🏁 Script executed:
rg -n "cut_orthogonality" cpp/src/dual_simplex/cuts.cpp -A 20Repository: NVIDIA/cuopt
Length of output: 3439
Guard cut_orthogonality against near-zero cut norms to prevent numerical instability.
Line 97 divides by (norm_i * norm_j) without checking for near-zero values. Even though zero-norm cuts are prevented upstream, near-zero norms from floating-point accumulation in the sparse dot product can cause precision loss or unreliable orthogonality scores. Add a small epsilon check before the division.
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/cuts.hpp` around lines 170 - 173, The cut_orthogonality
function currently divides by (norm_i * norm_j) which can be near-zero due to
floating-point accumulation; modify cut_orthogonality to check (norm_i * norm_j)
against a small epsilon (e.g. 1e-12) before performing the division and handle
the near-zero case by returning a safe orthogonality value (such as 0.0) or
clamping the denominator to epsilon; update any related uses (cut_distance,
cut_density remain unchanged) so cut_orthogonality(i, j) never performs an
unsafe division.
dense_matrix_t(i_t rows, i_t cols) : m(rows), n(cols), values(rows * cols, 0.0) {}

dense_matrix_t(i_t rows, i_t cols, f_t value) : m(rows), n(cols), values(rows * cols, value) {}
Guard against overflow/negative dimensions before allocating values.
Line 21 computes rows * cols without validating size or sign, so invalid dimensions can overflow and allocate enormous memory. Please add guards (and ideally centralize init so both constructors share the same checks).
🛠️ Suggested guard + shared init
+#include <limits>
+#include <type_traits>
...
- dense_matrix_t(i_t rows, i_t cols) : m(rows), n(cols), values(rows * cols, 0.0) {}
-
- dense_matrix_t(i_t rows, i_t cols, f_t value) : m(rows), n(cols), values(rows * cols, value) {}
+ dense_matrix_t(i_t rows, i_t cols) : dense_matrix_t(rows, cols, f_t{0}) {}
+
+ dense_matrix_t(i_t rows, i_t cols, f_t value) : m(rows), n(cols)
+ {
+ if constexpr (std::is_signed_v<i_t>) {
+ if (rows < 0 || cols < 0) {
+ printf("dense_matrix_t: negative dimensions\n");
+ exit(1);
+ }
+ }
+ const auto rows_sz = static_cast<size_t>(rows);
+ const auto cols_sz = static_cast<size_t>(cols);
+ if (cols_sz != 0 && rows_sz > (std::numeric_limits<size_t>::max() / cols_sz)) {
+ printf("dense_matrix_t: size overflow\n");
+ exit(1);
+ }
+ values.assign(rows_sz * cols_sz, value);
+ }As per coding guidelines: Verify correct problem size checks before expensive GPU/CPU operations; prevent resource exhaustion on oversized problems.
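The core of the suggested guard is the `size_t` division trick: `rows * cols` overflows exactly when `rows > max_size_t / cols`. A hypothetical standalone helper (names are not from the codebase) showing that check in isolation:

```cpp
#include <cassert>
#include <cstddef>
#include <limits>

// Returns true when rows and cols are non-negative and rows * cols can be
// represented as size_t without overflow, so a vector of that size may be
// allocated safely (subject to available memory).
bool dims_are_valid(long long rows, long long cols)
{
  if (rows < 0 || cols < 0) { return false; }
  const std::size_t r = static_cast<std::size_t>(rows);
  const std::size_t c = static_cast<std::size_t>(cols);
  if (c != 0 && r > std::numeric_limits<std::size_t>::max() / c) {
    return false;  // rows * cols would overflow size_t
  }
  return true;
}
```

Dividing the maximum instead of multiplying the operands is what keeps the check itself overflow-free.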
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/dense_matrix.hpp` around lines 19 - 22, The two
dense_matrix_t constructors allocate values(rows * cols, ...) without validating
dimensions; add a centralized initializer (e.g., a private init method used by
both dense_matrix_t(i_t,i_t) and dense_matrix_t(i_t,i_t,f_t)) that checks rows
and cols are positive and that (size_t)rows <= max_size_t / (size_t)cols to
prevent multiplication overflow/huge allocation, then computes the product as a
size_t and constructs values with that size; on failure, throw a
std::invalid_argument or std::length_error with a clear message.
if (j >= col_start.size()) {
  printf("Col start too small size %ld n %d\n", col_start.size(), this->n);
  return -1;
}
Strengthen CSC col_start size guard to avoid OOB.
Line 577-580 checks j >= col_start.size(), but the loop still accesses col_start[j+1]. If col_start.size() == n, this will read past the end. Add a single pre-check for size < n+1.
🔧 Suggested fix
- for (i_t j = 0; j < this->n; ++j) {
- if (j >= col_start.size()) {
- printf("Col start too small size %ld n %d\n", col_start.size(), this->n);
- return -1;
- }
+ if (this->col_start.size() < static_cast<size_t>(this->n + 1)) {
+ printf("Col start too small size %ld n %d\n",
+ this->col_start.size(),
+ this->n);
+ return -1;
+ }
+ for (i_t j = 0; j < this->n; ++j) {
🤖 Prompt for AI Agents
In `@cpp/src/dual_simplex/sparse_matrix.cpp` around lines 577 - 580, The current
guard checks "j >= col_start.size()" but code later accesses col_start[j+1],
allowing an OOB when col_start.size() == n; add a pre-check that
col_start.size() >= this->n + 1 (or equivalently fail when col_start.size() <
this->n + 1) before iterating, or replace the per-iteration check with a bounds
test using j+1 (e.g. ensure j+1 < col_start.size()); update the same function
where the loop uses j and accesses col_start[j+1] to return an error (e.g., -1)
when the col_start size is insufficient.
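The invariant behind the fix is that a CSC column-pointer array needs `n + 1` entries, because column `j` spans `[col_start[j], col_start[j + 1])`. A toy walk over such an array, using a hypothetical helper rather than the project's matrix class, shows why the single up-front size check suffices:

```cpp
#include <cassert>
#include <cstdio>
#include <vector>

// Counts nonzeros in a toy CSC layout. col_start must hold n + 1 entries;
// a single pre-check makes the later col_start[j + 1] access safe even in
// the pathological case where col_start.size() == n.
long long count_csc_nonzeros(int n, const std::vector<int>& col_start)
{
  if (col_start.size() < static_cast<std::size_t>(n) + 1) {
    std::printf("col_start too small: size %zu, n %d\n", col_start.size(), n);
    return -1;  // signal the malformed pointer array
  }
  long long nz = 0;
  for (int j = 0; j < n; ++j) {
    nz += col_start[j + 1] - col_start[j];  // safe: j + 1 <= n < col_start.size()
  }
  return nz;
}
```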
#if 0
TEST(dual_simplex, simple_cuts)
{
  // minimize x + y + 2 z
  // subject to x + y + z == 1
  // x, y, z >= 0

  raft::handle_t handle{};
  cuopt::linear_programming::dual_simplex::user_problem_t<int, double> user_problem(&handle);
  constexpr int m  = 1;
  constexpr int n  = 3;
  constexpr int nz = 3;

  user_problem.num_rows = m;
  user_problem.num_cols = n;
  user_problem.objective.resize(n);
  user_problem.objective[0] = 1.0;
  user_problem.objective[1] = 1.0;
  user_problem.objective[2] = 2.0;
  user_problem.A.m      = m;
  user_problem.A.n      = n;
  user_problem.A.nz_max = nz;
  user_problem.A.reallocate(nz);
  user_problem.A.col_start.resize(n + 1);
  user_problem.A.col_start[0] = 0;
  user_problem.A.col_start[1] = 1;
  user_problem.A.col_start[2] = 2;
  user_problem.A.col_start[3] = 3;
  user_problem.A.i[0] = 0;
  user_problem.A.x[0] = 1.0;
  user_problem.A.i[1] = 0;
  user_problem.A.x[1] = 1.0;
  user_problem.A.i[2] = 0;
  user_problem.A.x[2] = 1.0;
  user_problem.lower.resize(n, 0.0);
  user_problem.upper.resize(n, dual_simplex::inf);
  user_problem.num_range_rows = 0;
  user_problem.problem_name   = "simple_cuts";
  user_problem.obj_scale      = 1.0;
  user_problem.obj_constant   = 0.0;
  user_problem.rhs.resize(m, 1.0);
  user_problem.row_sense.resize(m, 'E');
  user_problem.var_types.resize(
    n, cuopt::linear_programming::dual_simplex::variable_type_t::CONTINUOUS);

  cuopt::init_logger_t logger("", true);

  cuopt::linear_programming::dual_simplex::lp_problem_t<int, double> lp(
    user_problem.handle_ptr, 1, 1, 1);
  cuopt::linear_programming::dual_simplex::simplex_solver_settings_t<int, double> settings;
  settings.barrier            = false;
  settings.barrier_presolve   = false;
  settings.log.log            = true;
  settings.log.log_to_console = true;
  settings.log.printf("Test print\n");
  std::vector<int> new_slacks;
  cuopt::linear_programming::dual_simplex::dualize_info_t<int, double> dualize_info;
  cuopt::linear_programming::dual_simplex::convert_user_problem(
    user_problem, settings, lp, new_slacks, dualize_info);
  cuopt::linear_programming::dual_simplex::lp_solution_t<int, double> solution(lp.num_rows,
                                                                               lp.num_cols);
  std::vector<cuopt::linear_programming::dual_simplex::variable_status_t> vstatus;
  std::vector<double> edge_norms;
  std::vector<int> basic_list(lp.num_rows);
  std::vector<int> nonbasic_list;
  cuopt::linear_programming::dual_simplex::basis_update_mpf_t<int, double> basis_update(
    lp.num_cols, settings.refactor_frequency);
  double start_time = dual_simplex::tic();
  printf("Calling solve linear program with advanced basis\n");
  EXPECT_EQ((cuopt::linear_programming::dual_simplex::solve_linear_program_with_advanced_basis(
              lp, start_time, settings, solution, basis_update, basic_list, nonbasic_list,
              vstatus, edge_norms)),
            cuopt::linear_programming::dual_simplex::lp_status_t::OPTIMAL);
  printf("Solution objective: %e\n", solution.objective);
  printf("Solution x: %e %e %e\n", solution.x[0], solution.x[1], solution.x[2]);
  printf("Solution y: %e\n", solution.y[0]);
  printf("Solution z: %e %e %e\n", solution.z[0], solution.z[1], solution.z[2]);
  EXPECT_NEAR(solution.objective, 1.0, 1e-6);
  EXPECT_NEAR(solution.x[0], 1.0, 1e-6);

  // Add a cut z >= 1/3. Needs to be in the form C*x <= d
  csr_matrix_t<int, double> cuts(1, n, 1);
  cuts.row_start[0] = 0;
  cuts.j[0]         = 2;
  cuts.x[0]         = -1.0;
  cuts.row_start[1] = 1;
  printf("cuts m %d n %d\n", cuts.m, cuts.n);
  std::vector<double> cut_rhs(1);
  cut_rhs[0] = -1.0 / 3.0;

  std::vector<variable_type_t> var_types;
  EXPECT_EQ(cuopt::linear_programming::dual_simplex::solve_linear_program_with_cuts(
              start_time, settings, cuts, cut_rhs, lp, solution, basis_update, basic_list,
              nonbasic_list, vstatus, edge_norms, var_types),
            cuopt::linear_programming::dual_simplex::lp_status_t::OPTIMAL);
  printf("Solution objective: %e\n", solution.objective);
  printf("Solution x: %e %e %e\n", solution.x[0], solution.x[1], solution.x[2]);
  EXPECT_NEAR(solution.objective, 4.0 / 3.0, 1e-6);

  cuts.row_start.resize(3);
  cuts.j.resize(2);
  cuts.x.resize(2);
  // Add cut y >= 1/3
  cuts.j[0]         = 1;
  cuts.row_start[2] = 2;
  // Add cut x <= 0.0
  cuts.j[1] = 0;
  cuts.x[1] = 1.0;
  cuts.m    = 2;
  cut_rhs.resize(2);
  cut_rhs[1] = 0.0;

  EXPECT_EQ(cuopt::linear_programming::dual_simplex::solve_linear_program_with_cuts(
              start_time, settings, cuts, cut_rhs, lp, solution, basis_update, basic_list,
              nonbasic_list, vstatus, edge_norms, var_types),
            cuopt::linear_programming::dual_simplex::lp_status_t::OPTIMAL);
  printf("Solution objective: %e\n", solution.objective);
  printf("Solution x: %e %e %e\n", solution.x[0], solution.x[1], solution.x[2]);
  EXPECT_NEAR(solution.objective, 4.0 / 3.0, 1e-6);
  EXPECT_NEAR(solution.x[0], 0.0, 1e-6);
  EXPECT_NEAR(solution.x[1], 2.0 / 3.0, 1e-6);
  EXPECT_NEAR(solution.x[2], 1.0 / 3.0, 1e-6);
}
#endif
🧩 Analysis chain
🏁 Script executed:
# Search for solve_linear_program_with_cuts declaration and usage
rg -n 'solve_linear_program_with_cuts' --type cpp --type hRepository: NVIDIA/cuopt
Length of output: 343
🏁 Script executed:
# Also search for any comments or TODOs related to cuts or this test
rg -n 'cuts|#if 0' cpp/tests/dual_simplex/unit_tests/solve.cpp | head -20Repository: NVIDIA/cuopt
Length of output: 889
🏁 Script executed:
# Check git history to understand when test was added and why it was disabled
git log --oneline -p cpp/tests/dual_simplex/unit_tests/solve.cpp | grep -A 5 -B 5 'simple_cuts' | head -50Repository: NVIDIA/cuopt
Length of output: 717
🏁 Script executed:
# Search for solve_linear_program_with_cuts in header files and source files
rg -n 'solve_linear_program_with_cuts' cpp/src --type h --type cppRepository: NVIDIA/cuopt
Length of output: 38
🏁 Script executed:
# Check if the function is declared in any headers
find cpp -name "*.h" -o -name "*.hpp" | xargs grep -l 'solve_linear_program_with_cuts' 2>/dev/nullRepository: NVIDIA/cuopt
Length of output: 40
🏁 Script executed:
# Look for function definition or declaration pattern
rg -B 2 -A 10 'solve_linear_program_with_cuts\s*\(' cpp/src --type cpp --type h | head -50Repository: NVIDIA/cuopt
Length of output: 38
🏁 Script executed:
# Search the entire codebase for solve_linear_program_with_cuts
rg -n 'solve_linear_program_with_cuts' --type cpp --type h --type cuh --type cuRepository: NVIDIA/cuopt
Length of output: 83
🏁 Script executed:
# Check if this is a new function being added in this PR - look for any mention of it
git log --all --oneline | grep -i cut | head -10Repository: NVIDIA/cuopt
Length of output: 211
🏁 Script executed:
# Look at the dual_simplex solver headers to see what functions are available
ls -la cpp/src/dual_simplex/*.h* | head -20Repository: NVIDIA/cuopt
Length of output: 1830
🏁 Script executed:
# Search for solve_linear_program functions to understand the API
rg -n 'solve_linear_program' cpp/src/dual_simplex --type cpp --type h -A 2Repository: NVIDIA/cuopt
Length of output: 6065
🏁 Script executed:
# Verify absolutely that solve_linear_program_with_cuts does not exist
rg 'solve_linear_program_with_cuts' . --type cpp --type hRepository: NVIDIA/cuopt
Length of output: 339
🏁 Script executed:
# Check the solve.hpp header to see all available solve functions
cat cpp/src/dual_simplex/solve.hppRepository: NVIDIA/cuopt
Length of output: 4179
🏁 Script executed:
# Look at cuts.hpp since the test involves cuts
head -100 cpp/src/dual_simplex/cuts.hppRepository: NVIDIA/cuopt
Length of output: 3196
🏁 Script executed:
# Double-check by looking for any TODO or comment about solve_linear_program_with_cuts
rg -n 'TODO|FIXME|XXX' cpp/src/dual_simplex/solve.hpp cpp/src/dual_simplex/solve.cpp | grep -i cutRepository: NVIDIA/cuopt
Length of output: 38
🏁 Script executed:
# Check if there's any recent work on cuts implementation in branch_and_bound or cuts files
rg -n 'solve.*cut' cpp/src/dual_simplex --type cpp --type hRepository: NVIDIA/cuopt
Length of output: 144
🏁 Script executed:
# Look at the branch_and_bound code around line 1894 to see how cuts are resolved
sed -n '1880,1920p' cpp/src/dual_simplex/branch_and_bound.cppRepository: NVIDIA/cuopt
Length of output: 1749
🏁 Script executed:
# Search for any function that takes cuts as parameters
rg -n 'cuts.*csr_matrix|csr_matrix.*cuts' cpp/src/dual_simplex --type cpp --type h -B 2 -A 2Repository: NVIDIA/cuopt
Length of output: 4960
🏁 Script executed:
# Check if cuts are integrated into the main branch and bound solver
rg -n 'add_cuts|apply_cuts|solve.*cuts' cpp/src/dual_simplex --type cpp --type hRepository: NVIDIA/cuopt
Length of output: 1028
🏁 Script executed:
# Look at the add_cuts function signature in detail
sed -n '2346,2380p' cpp/src/dual_simplex/cuts.cppRepository: NVIDIA/cuopt
Length of output: 1421
🏁 Script executed:
# Check what add_cuts returns and how it's typically called
sed -n '2346,2500p' cpp/src/dual_simplex/cuts.cpp | head -100Repository: NVIDIA/cuopt
Length of output: 3474
The test calls a non-existent function solve_linear_program_with_cuts.
The disabled test simple_cuts (lines 329-478) invokes solve_linear_program_with_cuts() at lines 427 and 457, but this function is not defined anywhere in the codebase. The public solver API provides only: solve_linear_program_advanced, solve_linear_program_with_advanced_basis, solve_linear_program_with_barrier, solve_linear_program, solve_mip, and solve_mip_with_guess.
While an internal add_cuts() function exists in cpp/src/dual_simplex/cuts.cpp for integrating cuts into an LP during branch-and-bound, there is no standalone public interface for solving LPs with cuts.
Implement the missing solve_linear_program_with_cuts() API or rewrite the test to use existing solver functions combined with the internal add_cuts() mechanism.
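Whichever solver entry point a rewritten test ends up calling, the bookkeeping for adding a cut is the same: append one row to a CSR cut matrix and one entry to its right-hand side. A toy sketch of just that step, using plain vectors and a hypothetical `toy_csr` struct rather than the project's `csr_matrix_t`:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal CSR holder for illustration only; the real csr_matrix_t has more state.
struct toy_csr {
  int m = 0, n = 0;
  std::vector<int> row_start{0};  // m + 1 entries once rows are added
  std::vector<int> j;             // column indices
  std::vector<double> x;          // values
};

// Append the cut  sum_k coeff[k] * x[col[k]] <= rhs_val  as a new CSR row.
void append_cut(toy_csr& cuts, std::vector<double>& rhs,
                const std::vector<int>& col, const std::vector<double>& coeff,
                double rhs_val)
{
  for (std::size_t k = 0; k < col.size(); ++k) {
    cuts.j.push_back(col[k]);
    cuts.x.push_back(coeff[k]);
  }
  cuts.row_start.push_back(static_cast<int>(cuts.j.size()));
  rhs.push_back(rhs_val);
  ++cuts.m;
}
```

For example, the disabled test's cut z >= 1/3 in C*x <= d form is one row with coefficient -1.0 on column 2 and right-hand side -1/3.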
🤖 Prompt for AI Agents
In `@cpp/tests/dual_simplex/unit_tests/solve.cpp` around lines 329 - 478, The test
calls a non-existent API solve_linear_program_with_cuts; update the test to
apply cuts via the existing internal add_cuts(...) helper and then call an
existing public solver (e.g. solve_linear_program_with_advanced_basis) instead:
locate the test using solve_linear_program_with_cuts and replace those calls
with a sequence that calls add_cuts(cuts, cut_rhs, lp, /*appropriate args*/) to
modify lp and then invokes solve_linear_program_with_advanced_basis(lp,
start_time, settings, solution, basis_update, basic_list, nonbasic_list,
vstatus, edge_norms); ensure variable lists (basic_list, nonbasic_list, vstatus,
edge_norms) and basis_update are preserved across the add_cuts+solve sequence so
the test logic and assertions remain valid.