
Dashboard prototype #136 (Open)

kaapstorm wants to merge 9 commits into master from nh/dashboard

Conversation

@kaapstorm (Contributor) commented Jan 26, 2026:

Adds a prototype of a dashboard to CommCare Data Pipeline

🦖 Jira: SAAS-18884

[Screenshot: dashboard prototype]

I'm opening this as a normal PR to get feedback on the approach and the code. If the changes are approved, I will merge the PR, but please note that this prototype will be modified in the future based on the following:

  • Collected feedback about the UI
  • Implementing scheduled refreshes for materialized views (the "coming soon" cards)
  • Migration from Bulma to Bootstrap 5, to align with HQ.

🐟

@millerdev (Contributor) left a comment:


I asked Copilot to review. Let me know if that was annoying and you'd prefer not to have automated reviews like that.

Comment on lines +28 to +29
export_stats = _get_export_statistics(last_24h)
forwarding_stats = _get_forwarding_statistics(last_24h)

Not sure how large the data sets are that these are querying, but they do a lot of .count() queries, which can be expensive on large tables. Would it be worth adding caching in case someone rage-clicks reload on this page?
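In Django this would typically be done with the low-level cache API (e.g. `django.core.cache.cache.get_or_set` with a short timeout). As a self-contained sketch of the same idea, here is a TTL-based memoizer; the decorator name, the 60-second TTL, and `expensive_count` are illustrative assumptions, not code from the PR:

```python
import time
from functools import wraps


def cached_for(seconds):
    """Memoize a function, recomputing at most once per `seconds` per argument set.

    Illustrative stand-in for django.core.cache; not thread-safe.
    """
    def decorator(fn):
        store = {}  # maps args -> (expiry timestamp, cached value)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]  # still fresh: skip the expensive call
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator


calls = []


@cached_for(60)
def expensive_count(table):
    calls.append(table)  # stands in for a slow .count() query
    return len(calls)


expensive_count("export_runs")
expensive_count("export_runs")  # second call is served from cache
```

A rage-clicked reload would then hit the cache instead of re-running every `.count()`.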

Copilot AI left a comment:

Pull request overview

This PR adds a dashboard prototype to the CommCare Data Pipeline application, providing users with a visual overview of their data pipeline status. The dashboard displays statistics and recent activity for exports and forwarders, along with placeholder cards for future aggregation and transformation features.

Changes:

  • Adds a new dashboard view with pipeline flow visualization showing exports and forwarders
  • Updates navigation to include a dashboard link and updates icons for consistency
  • Implements statistics calculation functions for exports and forwarders with health status indicators
  • Adds comprehensive test coverage for dashboard views and statistics functions
  • Includes styling updates to support the dashboard's horizontal scrolling layout
  • Updates CONTRIBUTING.md with clarifications on documentation conventions

Reviewed changes

Copilot reviewed 10 out of 11 changed files in this pull request and generated 2 comments.

Summary per file:

  • templates/web/dashboard.html: New dashboard template with pipeline visualization, statistics cards, and empty state
  • templates/web/components/app_nav.html: Adds dashboard navigation link and updates icons for exports and forwarders
  • assets/styles/app/dashboard.sass: New styles for pipeline flow layout with responsive design
  • assets/styles/app/layout.sass: Updates content area to support horizontal overflow in dashboard
  • assets/styles/app/_all.sass: Imports new dashboard styles
  • apps/web/views.py: Adds dashboard view and statistics calculation functions
  • apps/web/urls.py: Adds dashboard URL pattern and reformats terms URL
  • apps/web/tests/test_dashboard.py: Comprehensive test suite for dashboard functionality
  • apps/web/tests/__init__.py: New test module initialization file
  • CONTRIBUTING.md: Updates documentation guidelines for clarity and consistency


Comment on lines +64 to +71
recent_runs = list( # Combines both exports and multi-project exports
ExportRun.objects.select_related(
'base_export_config',
'base_export_config__project',
)
.exclude(status=ExportRun.QUEUED)
.order_by('-created_at')[:10]
)
Copilot AI (Feb 3, 2026):

The comment states "Combines both exports and multi-project exports" but the code only queries ExportRun objects, not MultiProjectExportRun objects. This means that multi-project export runs won't appear in the "Recent Exports" section of the dashboard, even though they are counted in the statistics (total_runs, success_rate, etc.).

Consider either:

  1. Updating the comment to reflect what the code actually does, or
  2. Implementing the functionality to combine both types of runs by merging two separate queries using itertools.chain or a union query.
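Option 2 could look roughly like the following. The `MultiProjectExportRun` name comes from the review comment; the merge is sketched over plain lists since the real code would evaluate two ORM querysets (each already ordered newest-first), and the `Run` dataclass is an illustrative stand-in:

```python
from dataclasses import dataclass
from datetime import datetime
from heapq import merge
from itertools import islice


@dataclass
class Run:  # stands in for ExportRun / MultiProjectExportRun rows
    name: str
    created_at: datetime


def combined_recent_runs(export_runs, multi_runs, limit=10):
    """Merge two newest-first sequences of runs, preserving newest-first order."""
    newest_first = merge(export_runs, multi_runs,
                         key=lambda run: run.created_at, reverse=True)
    return list(islice(newest_first, limit))


a = [Run("e1", datetime(2026, 1, 3)), Run("e2", datetime(2026, 1, 1))]
b = [Run("m1", datetime(2026, 1, 2))]
recent = combined_recent_runs(a, b)  # e1, m1, e2
```

`heapq.merge` avoids materializing and re-sorting both result sets; a database-side `union` query would be the other option when both models share the relevant columns.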

Comment on lines +66 to +205
class ExportStatisticsTestCase:
def setup(self):
self.user = User.objects.create_user(
username='testuser',
email='test@example.com',
password='testpass123',
)

self.server = CommCareServer.objects.create(
url='https://commcarehq.org',
name='CommCare HQ',
)
self.account = CommCareAccount.objects.create(
server=self.server,
username='test@account.com',
api_key_encrypted='encrypted_key',
)
self.project = CommCareProject.objects.create(
domain='test-domain',
server=self.server,
)

self.database = ExportDatabase.objects.create(
name='Test DB',
owner=self.user,
)
self.database.connection_string = (
'postgresql://user:pass@localhost:5432/db'
)
self.database.save()

self.export_config = ExportConfig.objects.create(
name='Test Export',
account=self.account,
project=self.project,
database=self.database,
created_by=self.user,
)

def test_export_statistics_with_no_runs(self):
last_24h = timezone.now() - timedelta(hours=24)
stats = _get_export_statistics(last_24h)

assert stats['total_configs'] == 1
assert stats['last_24h_runs'] == 0
assert stats['success_rate'] == 0
assert stats['status'] == 'neutral'

def test_export_statistics_with_completed_runs(self):
last_24h = timezone.now() - timedelta(hours=24)

for i in range(5):
ExportRun.objects.create(
base_export_config=self.export_config,
status=ExportRun.COMPLETED,
created_at=timezone.now() - timedelta(hours=i),
)

stats = _get_export_statistics(last_24h)

assert stats['total_configs'] == 1
assert stats['last_24h_runs'] == 5
assert stats['success_rate'] == 100.0
assert stats['successful_count'] == 5
assert stats['failed_count'] == 0
assert stats['status'] == 'healthy'

def test_export_statistics_with_failed_runs(self):
last_24h = timezone.now() - timedelta(hours=24)

ExportRun.objects.create(
base_export_config=self.export_config,
status=ExportRun.COMPLETED,
created_at=timezone.now() - timedelta(hours=1),
)
ExportRun.objects.create(
base_export_config=self.export_config,
status=ExportRun.FAILED,
created_at=timezone.now() - timedelta(hours=2),
)

stats = _get_export_statistics(last_24h)

assert stats['last_24h_runs'] == 2
assert stats['success_rate'] == 50.0
assert stats['successful_count'] == 1
assert stats['failed_count'] == 1
assert stats['status'] == 'error'

def test_export_statistics_excludes_queued_runs(self):
last_24h = timezone.now() - timedelta(hours=24)

ExportRun.objects.create(
base_export_config=self.export_config,
status=ExportRun.QUEUED,
created_at=timezone.now() - timedelta(hours=1),
)
ExportRun.objects.create(
base_export_config=self.export_config,
status=ExportRun.COMPLETED,
created_at=timezone.now() - timedelta(hours=2),
)

stats = _get_export_statistics(last_24h)

assert stats['last_24h_runs'] == 1
assert stats['success_rate'] == 100.0

@pytest.mark.parametrize(
'successful,failed,expected_rate,expected_status',
[
(19, 1, 95.0, 'healthy'),
(17, 3, 85.0, 'warning'),
(15, 5, 75.0, 'error'),
(16, 4, 80.0, 'warning'),
(20, 0, 100.0, 'healthy'),
],
)
def test_export_statistics_status_thresholds(
self, successful, failed, expected_rate, expected_status
):
last_24h = timezone.now() - timedelta(hours=24)

for i in range(successful):
ExportRun.objects.create(
base_export_config=self.export_config,
status=ExportRun.COMPLETED,
created_at=timezone.now() - timedelta(hours=i),
)
for i in range(failed):
ExportRun.objects.create(
base_export_config=self.export_config,
status=ExportRun.FAILED,
created_at=timezone.now() - timedelta(hours=successful + i),
)

stats = _get_export_statistics(last_24h)
assert stats['success_rate'] == expected_rate
assert stats['status'] == expected_status

Copilot AI (Feb 3, 2026):

There are no test cases for MultiProjectExportConfig and MultiProjectExportRun scenarios. Since the _get_export_statistics function counts both ExportConfig and MultiProjectExportConfig in total_configs, and counts runs from both ExportRun and MultiProjectExportRun, there should be test coverage for these scenarios to ensure the statistics are calculated correctly when multi-project exports are present.

@kaapstorm (Contributor, Author) commented:
> Let me know if that was annoying

Absolutely not. I have used Claude and Codex for reviews (or pre-reviews) myself, and I made a mental note to use them before opening pull requests, and then the mental note somehow got lost behind the cushions of the mental sofa. This is great.

I also noticed that I have my own pending review on this. I'll post that in a second.

Main dashboard showing pipeline status overview.
"""
now = timezone.now()
last_24h = now - timedelta(hours=24)
@kaapstorm (Contributor, Author):

24 hours might not be significant. One month might be better.


def _get_export_statistics(since_datetime):
"""Calculate export pipeline statistics."""
export_configs = ExportConfig.objects.count()
@kaapstorm (Contributor, Author):

Find a different name than "Config" in the UI. Explain the time period better. (What does "Last 24h" mean?)
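For context, the rate-and-status calculation that `_get_export_statistics` appears to implement can be reduced to a small pure function. The 90 and 80 percent cutoffs below are an assumption inferred from the parametrized test cases earlier in this thread (95 → healthy, 85 and 80 → warning, 75 → error), since the function body is not shown in the snippet:

```python
def success_rate_and_status(successful, failed):
    """Return (success_rate, status) for a window of completed/failed runs.

    Thresholds are inferred from the PR's parametrized tests, not from the diff.
    """
    total = successful + failed
    if total == 0:
        return 0, "neutral"  # no runs in the window
    rate = successful / total * 100
    if rate >= 90:
        status = "healthy"
    elif rate >= 80:
        status = "warning"
    else:
        status = "error"
    return rate, status
```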

<!-- Pipeline Flow Visualization -->
<div class="pipeline-flow">

<!-- Stage 1: Exports -->
@kaapstorm (Contributor, Author):

Split these out into partials.


success_rate = (successful_runs / total_runs * 100) if total_runs > 0 else 0

recent_runs = list( # Combines both exports and multi-project exports
@kaapstorm (Contributor, Author):

recent_runs should only include runs that are included in the stats. It is confusing to see runs that did not happen in the last 24 hours.
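In the ORM that would be an extra `.filter(created_at__gte=since_datetime)` on the `recent_runs` query. As a self-contained sketch of the intent (the `Run` dataclass and function name are illustrative, not code from the PR):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Run:
    name: str
    created_at: datetime


def recent_runs_in_window(runs, since, limit=10):
    """Newest-first runs, restricted to the same window the statistics cover."""
    in_window = [r for r in runs if r.created_at >= since]
    in_window.sort(key=lambda r: r.created_at, reverse=True)
    return in_window[:limit]


now = datetime(2026, 1, 26, 12, 0)
runs = [
    Run("old", now - timedelta(days=2)),   # outside the 24h stats window
    Run("new", now - timedelta(hours=1)),  # inside the window
]
recent = recent_runs_in_window(runs, now - timedelta(hours=24))
```

This keeps the "Recent Exports" list consistent with the counts shown in the cards.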

