Filter opportunities by start date (1st Jan 2026) in invoice generation task #906
Conversation
Walkthrough
The PR narrows the opportunity set for monthly automated invoicing by adding an explicit start date filter.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🧹 Nitpick comments (1)
commcare_connect/opportunity/tasks.py (1)
550-550: Consider making the cutoff date configurable. The hardcoded date 2026-01-01 reduces flexibility. If the cutoff date needs adjustment in the future, it will require a code change and redeployment. Consider moving this to a Django setting, environment variable, or database configuration field for easier updates.
🔎 Potential approaches
Option 1: Django setting
# In settings.py
AUTOMATED_INVOICE_START_DATE_CUTOFF = datetime.date(2026, 1, 1)

# In tasks.py
opp_start_date = settings.AUTOMATED_INVOICE_START_DATE_CUTOFF

Option 2: Environment variable with fallback
import os

from django.utils.dateparse import parse_date

opp_start_date = parse_date(os.getenv('INVOICE_START_DATE_CUTOFF', '2026-01-01'))
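The comment also mentions a database configuration field as a third route. A minimal sketch of that idea follows, assuming a hypothetical InvoiceSettings model; none of these names exist in the repository.

Option 3: Database configuration field (illustrative sketch)

import datetime

from django.db import models


class InvoiceSettings(models.Model):
    # Hypothetical settings row; the model and field names are illustrative only.
    automated_invoice_start_date_cutoff = models.DateField(default=datetime.date(2026, 1, 1))


def get_invoice_start_date_cutoff() -> datetime.date:
    # Fall back to the current hardcoded date when no configuration row exists.
    settings_row = InvoiceSettings.objects.first()
    if settings_row is not None:
        return settings_row.automated_invoice_start_date_cutoff
    return datetime.date(2026, 1, 1)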
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
commcare_connect/opportunity/tasks.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: pytest
for opportunity in Opportunity.objects.filter(active=True, managed=True, start_date__gte=opp_start_date).iterator(
    chunk_size=CHUNK_SIZE
):
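For orientation, here is a minimal sketch of how the hardcoded cutoff and the filter shown above could sit together inside the task. Only the filter line and the 2026-01-01 date come from this PR; the task name is taken from the scripts below, while the import path and CHUNK_SIZE value are assumptions.

import datetime

from commcare_connect.opportunity.models import Opportunity  # assumed import path

CHUNK_SIZE = 100  # assumed value; the real constant lives in tasks.py


def generate_automated_service_delivery_invoice():
    # Hardcoded cutoff flagged in the nitpick above (tasks.py line 550).
    opp_start_date = datetime.date(2026, 1, 1)
    for opportunity in Opportunity.objects.filter(
        active=True, managed=True, start_date__gte=opp_start_date
    ).iterator(chunk_size=CHUNK_SIZE):
        ...  # generate the monthly service delivery invoice for this opportunity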
🧩 Analysis chain
🏁 Script executed:
cat -n commcare_connect/opportunity/tasks.py | sed -n '540,570p'
Length of output: 1561
🏁 Script executed:
grep -n "opp_start_date" commcare_connect/opportunity/tasks.pyRepository: dimagi/commcare-connect
Length of output: 240
🏁 Script executed:
head -20 commcare_connect/opportunity/tasks.py
Length of output: 654
🏁 Script executed:
fd -t f "test.*\.py$|.*_test\.py$" | grep -E "(task|invoice)" | head -20Repository: dimagi/commcare-connect
Length of output: 158
🏁 Script executed:
grep -r "generate_automated_service_delivery_invoice" --include="*.py" | head -20Repository: dimagi/commcare-connect
Length of output: 1509
🏁 Script executed:
grep -r "2026.*1.*1\|2026-01-01" commcare_connect/ --include="*.py" | head -20Repository: dimagi/commcare-connect
Length of output: 151
🏁 Script executed:
grep -B5 -A15 "opp_start_date" commcare_connect/opportunity/tasks.pyRepository: dimagi/commcare-connect
Length of output: 1075
🏁 Script executed:
cat -n commcare_connect/opportunity/tests/test_tasks.py | head -100
Length of output: 4529
🏁 Script executed:
grep -A 30 "generate_automated_service_delivery_invoice" commcare_connect/opportunity/tests/test_tasks.py | head -80Repository: dimagi/commcare-connect
Length of output: 3417
🏁 Script executed:
grep -n "start_date" commcare_connect/opportunity/tests/test_tasks.py | head -30Repository: dimagi/commcare-connect
Length of output: 442
🏁 Script executed:
grep -B 10 -A 20 "test_generate_automated_service_delivery_invoice\|test_no_invoice" commcare_connect/opportunity/tests/test_tasks.py | head -150Repository: dimagi/commcare-connect
Length of output: 3400
🏁 Script executed:
grep -n "OpportunityFactory" commcare_connect/opportunity/tests/test_tasks.py | head -20Repository: dimagi/commcare-connect
Length of output: 594
🏁 Script executed:
grep -B 5 -A 20 "class OpportunityFactory" commcare_connect/opportunity/tests/factories.pyRepository: dimagi/commcare-connect
Length of output: 952
🏁 Script executed:
grep -B 5 -A 50 "def test_generate_automated_service_delivery_invoice" commcare_connect/opportunity/tests/test_tasks.pyRepository: dimagi/commcare-connect
Length of output: 49
🏁 Script executed:
grep "start_date" commcare_connect/opportunity/tests/factories.pyRepository: dimagi/commcare-connect
Length of output: 102
🏁 Script executed:
sed -n '180,270p' commcare_connect/opportunity/tests/test_tasks.py
Length of output: 4297
🏁 Script executed:
grep -A 60 "def test_generate_automated_service_delivery_invoice" commcare_connect/opportunity/tests/test_tasks.py | head -80Repository: dimagi/commcare-connect
Length of output: 49
🏁 Script executed:
grep -n "start_date\s*=" commcare_connect/opportunity/tests/test_tasks.pyRepository: dimagi/commcare-connect
Length of output: 257
🏁 Script executed:
grep -B 2 -A 2 "OpportunityFactory(active=True, managed=True)" commcare_connect/opportunity/tests/test_tasks.py | head -40Repository: dimagi/commcare-connect
Length of output: 1023
🏁 Script executed:
cat commcare_connect/opportunity/tests/conftest.py 2>/dev/null || echo "No conftest.py found"
Length of output: 87
🏁 Script executed:
python -c "from faker import Faker; fake = Faker(); print([fake.past_date() for _ in range(5)])"Repository: dimagi/commcare-connect
Length of output: 153
🏁 Script executed:
grep -n "start_date=" commcare_connect/opportunity/tests/test_tasks.pyRepository: dimagi/commcare-connect
Length of output: 49
🏁 Script executed:
wc -l commcare_connect/opportunity/tests/test_tasks.py
Length of output: 119
🏁 Script executed:
tail -100 commcare_connect/opportunity/tests/test_tasks.py
Length of output: 4379
🏁 Script executed:
sed -n '200,230p' commcare_connect/opportunity/tests/test_tasks.py
Length of output: 1427
🏁 Script executed:
grep -C 5 "test_generate_invoice_for_two_active_managed_opportunities" commcare_connect/opportunity/tests/test_tasks.py | head -40Repository: dimagi/commcare-connect
Length of output: 609
🏁 Script executed:
git log --oneline commcare_connect/opportunity/tests/test_tasks.py | head -5
Length of output: 137
🏁 Script executed:
git show 37fcb1f4 --stat
Length of output: 34450
🏁 Script executed:
git show 37fcb1f4
Length of output: 50383
🏁 Script executed:
grep -r "2026" commcare_connect/opportunity/ --include="*.py" | grep -v ".pyc" | grep -v "migration"Repository: dimagi/commcare-connect
Length of output: 151
🏁 Script executed:
grep -B 10 -A 10 "opp_start_date\|2026-01-01" commcare_connect/opportunity/tasks.pyRepository: dimagi/commcare-connect
Length of output: 928
🏁 Script executed:
cat commcare_connect/opportunity/tests/factories.py | grep -A 20 "class OpportunityFactory"
Length of output: 890
🏁 Script executed:
sed -n '207,260p' commcare_connect/opportunity/tests/test_tasks.py
Length of output: 2466
The new start_date filter must be covered by test cases.
The filter correctly implements start_date >= 2026-01-01, but the existing test test_generate_invoice_for_two_active_managed_opportunities creates opportunities using OpportunityFactory(active=True, managed=True) without explicitly setting start_date. The factory defaults to Faker("past_date"), which generates dates before 2026-01-01. These test opportunities should be excluded by the new filter, yet the test expects 2 invoices to be generated.
Update the test to explicitly set start_date on opportunities to be on or after 2026-01-01:
Example fix
opportunity1 = OpportunityFactory(active=True, managed=True, start_date=datetime.date(2026, 1, 1))
opportunity2 = OpportunityFactory(active=True, managed=True, start_date=datetime.date(2026, 1, 15))
Additionally, add a test case to verify that opportunities with start_date < 2026-01-01 are properly excluded from invoice generation.
🤖 Prompt for AI Agents
commcare_connect/opportunity/tasks.py around lines 551-553: the new start_date
filter (start_date >= 2026-01-01) requires test updates—update the existing
test_generate_invoice_for_two_active_managed_opportunities to create its two
OpportunityFactory instances with explicit start_date values on or after
2026-01-01 (e.g. 2026-01-01 and 2026-01-15) so they are included by the filter,
and add a new test that creates an active, managed opportunity with start_date
before 2026-01-01 and asserts it is excluded (no invoice generated).
Test cases were updated
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check the current state of the test file to verify the test updates
grep -B 5 -A 30 "test_generate_invoice_for_two_active_managed_opportunities" commcare_connect/opportunity/tests/test_tasks.py
Length of output: 1587
🏁 Script executed:
#!/bin/bash
# Also check for any new tests related to start_date filtering
grep -B 2 -A 20 "start_date.*2026" commcare_connect/opportunity/tests/test_tasks.py
Length of output: 4340
🏁 Script executed:
#!/bin/bash
# Look at the OpportunityFactory calls in the test file to see if start_date is being set
grep -n "OpportunityFactory.*start_date" commcare_connect/opportunity/tests/test_tasks.py
Length of output: 683
@ajeety4, thank you for updating the test cases! I can confirm that test_generate_invoice_for_two_active_managed_opportunities now correctly sets explicit start_date values (2026-01-01 and 2026-12-01) for the opportunities, ensuring they're included by the new filter. All other related tests also properly set start_date=datetime.date(2026, 1, 1).
Quick question: did you also add a test case to verify that opportunities with start_date < 2026-01-01 are properly excluded from invoice generation? This was the second part of the original review request. If you'd like, I can help verify if such a test exists in the file.
🧬 Code Graph Analysis Results
-
File: commcare_connect/opportunity/export.py
-
Snippet 1 (lines 96-117)
def get_dataset(self, from_date, to_date, status: list[VisitValidationStatus]) -> Dataset:
"""Get dataset of all user visits for an opportunity."""
user_visits = UserVisit.objects.filter(
opportunity=self.opportunity, visit_date__gte=from_date, visit_date__lte=to_date
)
if status and "all" not in status:
user_visits = user_visits.filter(status__in=status)
user_visits = user_visits.order_by("visit_date")
self._get_table_metadata()
dataset = Dataset(title="Export User Visits", headers=self.headers)
for page in Paginator(user_visits, per_page=500):
    table = UserVisitTable(page.object_list)
    base_data = [
        # form_json must be the last column in the row
        [row.get_cell_value(column.name) for column in columns] + [row.get_cell_value("form_json")]
        for row in table.rows
    ]
    for row in base_data:
        row = self._process_row(row)
        dataset.append([force_str(col, strings_only=True) for col in row])
return dataset
Snippet 2 (lines 204-207)
def export_catchment_area_table(opportunity):
catchment_areas = CatchmentArea.objects.filter(opportunity=opportunity)
table = CatchmentAreaTable(catchment_areas)
return get_dataset(table, export_title="Catchment Area Export") -
Snippet 3 (lines 146-175)
def export_empty_payment_table(opportunity: Opportunity) -> Dataset:
headers = [
"Username",
"Phone Number",
"Name",
"Payment Amount",
"Payment Date (YYYY-MM-DD)",
"Payment Method",
"Payment Operator",
]
dataset = Dataset(title="Export", headers=headers)access_objects = ( OpportunityAccess.objects.filter(opportunity=opportunity, suspended=False, accepted=True) .select_related("user") .annotate(total_payments=Sum("payment__amount")) ) for access in access_objects: row = ( access.user.username, access.user.phone_number, access.user.name, "", "", "", "", ) dataset.append(row) return dataset -
Snippet 4 (lines 178-181)
def export_user_status_table(opportunity: Opportunity) -> Dataset:
access_objects = get_annotated_opportunity_access(opportunity)
table = UserStatusTable(access_objects, exclude=("date_popup", "view_profile"))
return get_dataset(table, export_title="User status export") -
Snippet 5 (lines 184-187)
def export_deliver_status_table(opportunity: Opportunity) -> Dataset:
opportunity = opportunity
access_objects = get_annotated_opportunity_access_deliver_status(opportunity)
table = DeliverStatusTable(access_objects, exclude=("details", "date_popup"))
return get_dataset(table, export_title="Deliver Status export") -
Snippet 6 (lines 120-143)
def export_user_visit_review_data(
opportunity: Opportunity, from_date, to_date, status: list[VisitReviewStatus]
) -> Dataset:
user_visits = UserVisit.objects.filter(
opportunity=opportunity,
review_created_on__isnull=False,
review_created_on__gte=from_date,
review_created_on__lte=to_date,
)
if status and "all" not in status:
user_visits = user_visits.filter(review_status__in=status)
user_visits = user_visits.order_by("visit_date")
table = UserVisitReviewTable(user_visits)
exclude_columns = {"pk", "details", "user_visit"}columns = [] headers = [] for column in table.columns.iterall(): if not (column.column.exclude_from_export or column.name in exclude_columns): columns.append(column) headers.append(force_str(column.header, strings_only=True)) dataset = append_row_data(Dataset(title="Export Review User Visit", headers=headers), table, columns) return dataset -
Snippet 7 (lines 190-201)
def export_work_status_table(opportunity: Opportunity) -> Dataset:
access_objects = OpportunityAccess.objects.filter(opportunity=opportunity, suspended=False)
completed_works = []
for completed_work in CompletedWork.objects.filter(opportunity_access__in=access_objects):
completed = completed_work.completed
if opportunity.auto_approve_payments and completed and completed_work.flags:
completed_works.append(completed_work)
continue
if completed:
completed_works.append(completed_work)
table = CompletedWorkTable(completed_works, exclude=("date_popup"))
return get_dataset(table, export_title="Payment Verification export") -
Snippet 8 (lines 204-207) (alternate/duplicate note above)
def get_dataset(from_date, to_date, status: list[VisitValidationStatus]) -> Dataset:
...
-
-
File: commcare_connect/opportunity/visit_import.py
-
Snippet 9 (lines 53-56)
class ImportException(Exception):
def __init__(self, message, rows=None):
self.message = message
self.rows = rows -
Snippet 10 (lines 302-420)
def bulk_update_payments(opportunity_id: int, headers: list[str], rows: list[list]):
opportunity = Opportunity.objects.get(id=opportunity_id)
headers = [header.lower() for header in headers]
if not headers:
raise ImportException("The uploaded file did not contain any headers")username_col_index = _get_header_index(headers, USERNAME_COL) amount_col_index = _get_header_index(headers, AMOUNT_COL) payment_date_col_index = _get_header_index(headers, PAYMENT_DATE_COL) payment_method_col_index = _get_header_index(headers, PAYMENT_METHOD_COL) payment_operator_col_index = _get_header_index(headers, PAYMENT_OPERATOR_COL) invalid_rows = [] payments_by_user = defaultdict(list) exchange_rate_today = get_exchange_rate(opportunity.currency) if not exchange_rate_today: raise ImportException(f"Currency code {opportunity.currency} is invalid") for row in rows: row = list(row) username = str(row[username_col_index]) amount_raw = row[amount_col_index] payment_date_raw = row[payment_date_col_index] payment_method = row[payment_method_col_index] payment_operator = row[payment_operator_col_index] if not amount_raw: continue if not username: invalid_rows.append(([escape(r) for r in row], "username required")) continue try: amount = Decimal(amount_raw) except InvalidOperation: invalid_rows.append(([escape(r) for r in row], "amount must be a number")) continue try: if payment_date_raw: if isinstance(payment_date_raw, datetime.datetime): payment_date = payment_date_raw.date() else: payment_date = datetime.datetime.strptime(payment_date_raw, "%Y-%m-%d").date() else: payment_date = None except ValueError: invalid_rows.append(([escape(r) for r in row], "Payment Date must be in YYYY-MM-DD format")) continue payment_row = { "amount": amount, "payment_date": payment_date, "payment_method": payment_method, "payment_operator": payment_operator, } user_last_payment = ( Payment.objects.filter( opportunity_access__user__username=username, opportunity_access__opportunity=opportunity, ) .order_by("-created_at") .first() ) date_paid = payment_date if payment_date else datetime.date.today() if ( user_last_payment and user_last_payment.amount == amount and user_last_payment.date_paid.date() == date_paid ): invalid_rows.append( ( [escape(r) for r in row], "A payment for this user with the same amount and date already exists.", ) ) payments_by_user[username].append(payment_row) if invalid_rows: raise ImportException(f"{len(invalid_rows)} rows have errors", "<br>".join([str(r) for r in invalid_rows])) seen_users = set() payment_ids = [] with transaction.atomic(): usernames = payments_by_user.keys() users = OpportunityAccess.objects.filter( user__username__in=usernames, opportunity=opportunity, suspended=False ).select_related("user") for access in users: username = access.user.username if username not in payments_by_user: continue for payment_row in payments_by_user[username]: amount = payment_row["amount"] payment_date = payment_row["payment_date"] if payment_date: exchange_rate = get_exchange_rate(opportunity.currency, payment_date) else: exchange_rate = exchange_rate_today payment_data = { "opportunity_access": access, "amount": amount, "amount_usd": amount / exchange_rate, "payment_method": payment_row["payment_method"], "payment_operator": payment_row["payment_operator"], } if payment_date: payment_data["date_paid"] = payment_date payment = Payment.objects.create(**payment_data) payment_ids.append(payment.pk) seen_users.add(username) update_work_payment_date(access) missing_users = set(usernames) - seen_users send_payment_notification.delay(opportunity.id, payment_ids) return PaymentImportStatus(seen_users, missing_users) -
Snippet 11 (lines 167-220)
def bulk_update_visit_status(opportunity_id: int, file_path: str, file_format: str, tracking_info: dict = None):
from commcare_connect.opportunity.visit_import import (
ImportException,
bulk_update_visit_status,
get_imported_dataset,
)

set_task_progress(self, "Visit Verification Import is in porgress.")
try:
    with default_storage.open(file_path, "rb") as f:
        dataset = get_imported_dataset(f, file_format)
    headers = dataset.headers or []
    rows = list(dataset)
    status = bulk_update_visit_status(opportunity_id, headers, rows)
    messages = [f"Visit status updated successfully for {len(status)} visits.<br>"]
    if status.missing_visits:
        messages.append(status.get_missing_message())
    if status.locked_visits:
        messages.append(status.get_locked_message())
    if tracking_info:
        tracking_info = GATrackingInfo.from_dict(tracking_info)
        events = [
            Event("visit_import_approved", {"updated": status.approved_count, "total": len(status.seen_visits)}),
            Event("visit_import_rejected", {"updated": status.rejected_count, "total": len(status.seen_visits)}),
        ]
        for event in events:
            event.add_tracking_info(tracking_info)
        send_event_task.delay(tracking_info.client_id, _serialize_events(events))
except ImportException as e:
    messages = [f"Visit status import failed: {e}"] + getattr(e, "invalid_rows", [])
set_task_progress(self, "<br>".join(messages), is_complete=True)
-
-
File: commcare_connect/utils/celery.py
- Snippet 12 (lines 5-6)
def set_task_progress(task, message, is_complete=False):
task.update_state(state=CELERY_TASK_SUCCESS if is_complete else CELERY_TASK_IN_PROGRESS, meta={"message": message})
-
File: commcare_connect/utils/analytics.py
-
Snippet 13 (lines 50-58)
class Event:
name: str
params: dict[str, Any]

def add_tracking_info(self, tracking_info: GATrackingInfo):
    self.params.update(
        {
            "session_id": tracking_info.session_id,
            GA_CUSTOM_DIMENSIONS.IS_DIMAGI.value: tracking_info.is_dimagi,
            # This is needed for tracking to work properly.
            "engagement_time_msec": 100,
        }
    )
Snippet 14 (lines 22-42)
class GATrackingInfo:
client_id: str
session_id: str
is_dimagi: bool = False

def dict(self):
    return asdict(self)

@classmethod
def from_dict(cls, tracking_info_dict: dict):
    client_id = tracking_info_dict.get("client_id")
    session_id = tracking_info_dict.get("session_id")
    is_dimagi = tracking_info_dict.get("is_dimagi")
    return cls(client_id, session_id, is_dimagi)

@classmethod
def from_request(cls, request):
    client_id = _get_ga_client_id(request)
    session_id = _get_ga_session_id(request)
    is_dimagi = _is_dimagi_user(request)
    return cls(client_id, session_id, is_dimagi)
Snippet 15 (lines 91-92)
def _serialize_events(events: list[Event]):
return [asdict(event) for event in events] -
Snippet 16 (lines 81-88)
def send_event_task(client_id: str, events: list[Event]):
measurement_id = settings.GA_MEASUREMENT_ID
ga_api_secret = settings.GA_API_SECRET
base_url = "https://www.google-analytics.com/mp/collect"
params = {"measurement_id": measurement_id, "api_secret": ga_api_secret}
url = f"{base_url}?{urlencode(params)}"
response = httpx.post(url, json={"client_id": client_id, "events": events})
response.raise_for_status()
-
-
File: commcare_connect/connect_id_client/main.py
-
Snippet 17 (lines 18-21)
def fetch_users(phone_number_list: list[str]) -> list[ConnectIdUser]:
response = _make_request(GET, "/users/fetch_users", params={"phone_numbers": phone_number_list})
data = response.json()
return [ConnectIdUser(**user_dict) for user_dict in data["found_users"]] -
Snippet 18 (lines 30-34)
def send_message(message: Message):
"""Send a push notification to a user."""
response = _make_request(POST, "/messaging/send/", json=message.asdict())
data = response.json()
return MessagingResponse.build(**data) -
Snippet 19 (lines 37-42)
def send_message_bulk(messages: list[Message]) -> MessagingBulkResponse:
"""Send a push notification to multiple users."""
json = {"messages": [message.asdict() for message in messages]}
response = _make_request(POST, "/messaging/send_bulk/", json=json, timeout=30)
data = response.json()
return MessagingBulkResponse.build(**data)
-
Note: The user comment states "Test cases were updated." The above snippets identify code sections in the involved modules that tests could interact with or verify, such as export dataset generation, bulk import/update flows, error handling, and analytics/instrumentation hooks. If you’d like, I can tailor this list to specific test failures or test names you’re seeing.
Actionable comments posted: 1
🧹 Nitpick comments (1)
commcare_connect/opportunity/tests/test_tasks.py (1)
210-211: Consider using temporally consistent test dates. The opportunities have start_date values in 2026 (lines 210-211), but the completed work has status_modified_date in January 2024 (lines 223-224). This creates a temporal inconsistency where work is completed before the opportunity started. While the tests function correctly, using more realistic dates would improve test clarity and maintainability.
💡 Suggestion
Consider using start dates that precede the work completion dates but still satisfy the filter. For example:
- opportunity1 = OpportunityFactory(active=True, managed=True, start_date=datetime.date(2026, 1, 1))
- opportunity2 = OpportunityFactory(active=True, managed=True, start_date=datetime.date(2026, 12, 1))
+ opportunity1 = OpportunityFactory(active=True, managed=True, start_date=datetime.date(2026, 1, 1))
+ opportunity2 = OpportunityFactory(active=True, managed=True, start_date=datetime.date(2026, 1, 1))
And adjust the mock dates or work completion dates to maintain logical temporal ordering. Alternatively, document why the temporal inconsistency is acceptable for these tests.
Also applies to: 223-224
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
commcare_connect/opportunity/tests/test_tasks.py
🧰 Additional context used
🧬 Code graph analysis (1)
commcare_connect/opportunity/tests/test_tasks.py (1)
commcare_connect/opportunity/tests/factories.py (1)
OpportunityFactory(47-64)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: pytest
opportunity1 = OpportunityFactory(active=True, managed=True, start_date=datetime.date(2026, 1, 1))
opportunity2 = OpportunityFactory(active=True, managed=True, start_date=datetime.date(2026, 12, 1))
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the test file around lines 210-211
cat -n commcare_connect/opportunity/tests/test_tasks.py | sed -n '200,230p'
Length of output: 1721
🏁 Script executed:
# Get overall structure of the test file
wc -l commcare_connect/opportunity/tests/test_tasks.py
Length of output: 119
🏁 Script executed:
# Search for tests that validate the start_date filter behavior
rg -n 'start_date.*2025|2026.*1.*1|test.*filter|test.*exclude' commcare_connect/opportunity/tests/ -A 3 -B 1
Length of output: 10885
🏁 Script executed:
# Search for the actual implementation of the filter to understand what's being tested
rg -n 'start_date.*<|start_date.*>|cutoff.*date' commcare_connect/opportunity/ --type=py -B 2 -A 2
Length of output: 3836
🏁 Script executed:
# Find and examine the get_start_date_for_invoice function
rg -n 'def get_start_date_for_invoice' -A 10 commcare_connect/opportunity/
Length of output: 949
🏁 Script executed:
# Look at the context around the filter in tasks.py more carefully
cat -n commcare_connect/opportunity/tasks.py | sed -n '545,570p'
Length of output: 1376
🏁 Script executed:
# Search for all test methods in test_tasks.py that might test invoice filtering
rg -n 'def test_' commcare_connect/opportunity/tests/test_tasks.py | head -20
Length of output: 949
🏁 Script executed:
# Check if there are any parametrized tests or other test methods beyond what we've seen
tail -50 commcare_connect/opportunity/tests/test_tasks.py
Length of output: 2318
Add test coverage to verify the start_date filter excludes opportunities before 2026-01-01.
The implementation filters opportunities with start_date__gte=datetime.date(2026, 1, 1), but all tests use opportunities meeting or exceeding this cutoff. No test case validates that opportunities with earlier start dates are actually excluded from invoice generation. This is a significant gap in test coverage for the new filtering behavior.
🔎 Suggested test case to add
@override_switch(AUTOMATED_INVOICES_MONTHLY, active=True)
def test_no_invoice_for_opportunities_before_cutoff_date(self):
"""Verify opportunities with start_date before 2026-01-01 are excluded."""
old_opportunity = OpportunityFactory(
active=True,
managed=True,
start_date=datetime.date(2025, 12, 31) # Before cutoff
)
payment_unit = PaymentUnitFactory(opportunity=old_opportunity, amount=Decimal("100.00"))
access = OpportunityAccessFactory(opportunity=old_opportunity)
completed_work = CompletedWorkFactory(
opportunity_access=access,
payment_unit=payment_unit,
status=CompletedWorkStatus.approved,
status_modified_date=datetime.date(2024, 1, 5),
)
completed_work.saved_payment_accrued = Decimal("100.00")
completed_work.save()
generate_automated_service_delivery_invoice()
assert PaymentInvoice.objects.count() == 0
assert completed_work.invoice is None
🤖 Prompt for AI Agents
In commcare_connect/opportunity/tests/test_tasks.py around lines 210-211, tests
only cover opportunities on/after 2026-01-01 so we need to add a test that
verifies opportunities with start_date before 2026-01-01 are excluded; add a
test (with @override_switch(AUTOMATED_INVOICES_MONTHLY, active=True)) that
creates an OpportunityFactory(start_date=datetime.date(2025, 12, 31),
active=True, managed=True), associated PaymentUnitFactory,
OpportunityAccessFactory, and a CompletedWorkFactory with
status=CompletedWorkStatus.approved and saved_payment_accrued set, call
generate_automated_service_delivery_invoice(), and assert
PaymentInvoice.objects.count() == 0 and that completed_work.invoice is still
None to confirm exclusion.
Product Description
No UI-facing change.
Filter opportunities by start date (1st Jan 2026) for automated invoices generated through the celery task.
Technical Summary
Ticket
Safety Assurance
Safety story
A very very small change.
Automated test coverage
Updated.
QA Plan
None
Labels & Review