
feat: Security hardening - env vars, timeouts, and resilience #83

Open
tdawe1 wants to merge 50 commits into dev from feat/security-remediation

Conversation

@tdawe1
Owner

@tdawe1 tdawe1 commented Jan 29, 2026

Summary

  • P0: Add environment variable overrides for secrets (GENGO_USER_SESSION, GENGOWATCHER_API_TOKEN, GMAIL_*); see the sketch after this list
  • P0: Add 30s timeout + 3 retries to OAuth refresh
  • P1: Wrap RSS feedparser.parse in ThreadPoolExecutor with 30s timeout
  • P1: Add 3-attempt retry with exponential backoff for job cancellation
  • P2: Fix CLI config schema mismatch ([Credentials] → [WebSocket])
  • P2: Add fcntl.flock file locking for concurrent config writes
  • P2: Prefix auto-repair messages with WARNING:
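
The P0 override item above works roughly like the following minimal sketch. The names ENV_VAR_OVERRIDES and _get_env_or_config come from this PR; the mapping contents and the exact signature shown here are assumptions for illustration only, not the PR's code.

```python
import configparser
import os
from typing import Optional

# Hypothetical mapping of (section, option) pairs to environment variable names.
# The real ENV_VAR_OVERRIDES in src/gengowatcher/config.py may differ.
ENV_VAR_OVERRIDES = {
    ("WebSocket", "user_id"): "GENGO_USER_ID",
    ("WebSocket", "user_session"): "GENGO_USER_SESSION",
    ("WebSocket", "user_key"): "GENGO_USER_KEY",
}


def _get_env_or_config(
    parser: configparser.ConfigParser, section: str, option: str
) -> Optional[str]:
    """Return the environment override if set, otherwise fall back to config.ini."""
    env_name = ENV_VAR_OVERRIDES.get((section, option))
    if env_name and os.environ.get(env_name):
        return os.environ[env_name]
    return parser.get(section, option, fallback=None)
```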

Files Changed

  • src/gengowatcher/config.py - Env var overrides, file locking, warning prefixes
  • src/gengowatcher/email_monitor.py - OAuth timeout/retry
  • src/gengowatcher/watcher.py - RSS timeout wrapper
  • src/gengowatcher/job_cancellation_manager.py - Cancellation retry
  • src/gengowatcher/main.py - Fixed config schema mismatch
  • .gitignore - Added .env patterns
  • config.ini.example - Template with placeholders
  • .env.example - Environment variable template

Evidence

  • All changes follow security remediation plan in docs/SECURITY_REMEDIATION.md

tdawe1 and others added 30 commits January 29, 2026 00:33
- Display all config sections in organized order
- Mask sensitive values (tokens, secrets, passwords)
- Color-coded values: booleans (green/red), numbers (pink), sensitive (violet)
- Truncate long values to fit panel
- Section headers with visual separators
Docstrings generation was requested by @tdawe1.

* #65 (comment)

The following files were modified:

* `src/gengowatcher/ui_textual.py`
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
📝 Add docstrings to `feat/config-preview`
- Extract section_order list to SECTION_ORDER constant
- Extract header width 18 to SECTION_HEADER_WIDTH constant
- Extract truncation lengths 20 and 17 to MAX_VALUE_LENGTH and MAX_VALUE_LENGTH_SHORT constants
- Add comprehensive tests to validate constants and their usage

Co-authored-by: tdawe1 <[email protected]>
- Make the truncation test more specific by checking that the full 30-char value is not present
- Verify ellipsis is actually used for truncation

Co-authored-by: tdawe1 <[email protected]>
- Run black formatter to fix whitespace issues
- Ensures code passes flake8 linting

Co-authored-by: tdawe1 <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…stants

Extract hardcoded layout values to class constants in ConfigPreview
fix: annotate SENSITIVE_KEYS with ClassVar to satisfy RUF012
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
tdawe1 and others added 12 commits January 29, 2026 01:25
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
- Remove tests for non-existent classes (ChartPlaceholder, JobsHourChart)
- Remove tests for unimplemented high_value_job_manager module
- Update imports: JobsHourChart → HourlyActivity
- Fix ConfigPreview() missing required config argument
- Remove duplicate JobsHourChart yield in compose()

Fixes test collection errors and reduces failures from 88 to 56.
- Add ENV_VAR_OVERRIDES in AppConfig for sensitive credentials
- Support GENGO_USER_ID, GENGO_USER_SESSION, GENGO_USER_KEY env vars
- Support GENGOWATCHER_API_TOKEN for web API authentication
- Support GMAIL_CLIENT_ID, GMAIL_CLIENT_SECRET, GMAIL_REFRESH_TOKEN
- Add timeout (30s) and 3-attempt retry to OAuth token refresh
- Add config.ini.example and .env.example templates
- Update .gitignore for .env files
- Simplify README.md
- Add docs/SECURITY_REMEDIATION.md with full remediation plan
- Wrap feedparser.parse in ThreadPoolExecutor with 30s timeout
- Add 3-attempt retry with exponential backoff to job cancellation
- Handle aiohttp.ClientError and asyncio.TimeoutError gracefully
- Log warnings on retry attempts
- Fix interactive configure to write to [WebSocket] instead of [Credentials]
- Add fcntl.flock for exclusive file locking on config save
- Prefix auto-repair messages with WARNING for visibility
Copilot AI review requested due to automatic review settings January 29, 2026 06:29
@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
Credits must be used to enable repository-wide code reviews.

@coderabbitai
Contributor

coderabbitai bot commented Jan 29, 2026

📝 Walkthrough

Summary by CodeRabbit

Release Notes

New Features

  • Environment variables can now override configuration values for enhanced flexibility.
  • Improved job cancellation with automatic retry logic and error recovery.
  • Enhanced OAuth token refresh with retry mechanism and timeout handling.

Documentation

  • Simplified README with streamlined quick-start guidance.
  • Added example configuration files for easier setup.
  • New security remediation documentation published.

Improvements

  • Non-blocking RSS feed retrieval prevents timeout delays.
  • Atomic configuration saves with platform-aware file locking.
  • More robust error handling across cancellation and authentication flows.


Walkthrough

This PR implements environment variable configuration support, adds example config templates, refactors UI rendering constants, enhances job cancellation and email OAuth with retry logic, optimises RSS fetching with threading, and introduces comprehensive test coverage across multiple modules.

Changes

  • Configuration & Environment Variables (.env.example, .gitignore, config.ini.example, src/gengowatcher/config.py, src/gengowatcher/main.py): Introduces environment variable overrides for config values via ENV_VAR_OVERRIDES mapping and _get_env_or_config helper. Adds platform-aware file locking for atomic saves. Moves WebSocket credentials to dedicated section with renamed keys. Creates example files to guide user setup.
  • Documentation (README.md, docs/SECURITY_REMEDIATION.md): Simplifies README from versioned multi-section format into streamlined quick-start guide. Introduces Security Remediation Plan detailing P0–P2 initiatives with implementation timeline and verification checklist.
  • Job Cancellation & Retry Logic (src/gengowatcher/job_cancellation_manager.py): Introduces retry-enabled cancellation flow with exponential backoff. Adds job status verification, multiple response scenario handling (200, 302/303, other), and persistent state tracking across retry attempts. Expands exception handling for network errors and timeouts.
  • Email OAuth Resilience (src/gengowatcher/email_monitor.py): Replaces single-shot token refresh with 3-attempt retry loop using per-attempt exponential backoff and 30-second total timeout. Adds detailed logging for retries/failures and updates config on success.
  • UI Constants & Rendering (src/gengowatcher/ui_textual.py): Introduces type-annotated class-level constants: SENSITIVE_KEYS (ClassVar), SECTION_ORDER, and layout constants (SECTION_HEADER_WIDTH, MAX_VALUE_LENGTH). Refactors _render_config to use constants instead of hardcoded values. Hardens error handling for missing widgets.
  • RSS & Threading Optimisation (src/gengowatcher/watcher.py): Moves RSS fetch to separate thread via ThreadPoolExecutor with 30-second timeout. Avoids blocking main thread; logs warning on timeout and returns None.
  • Test Coverage Expansion (tests/test_integration_scenarios.py, tests/test_watcher_comprehensive.py, tests/test_watcher_enhanced.py, tests/test_watcher_extended.py, tests/test_ui_*.py, tests/test_web_*.py, tests/test_stats_manager*.py, tests/test_dashboard_quadrants.py): Adds comprehensive test suites covering end-to-end job processing, cancellation workflows, monitor coordination, error recovery, Web API interactions, UI component rendering, statistics handling, and concurrency scenarios. Introduces fixtures and mocks for isolated testing.
  • Test Removals & Minor Adjustments (tests/test_chart_placeholder.py, tests/test_chart_rendering.py): Removes ChartPlaceholder test file (84 lines). Removes unused variable assignment in chart rendering test.

Sequence Diagram(s)

sequenceDiagram
    participant Job State
    participant CancellationManager
    participant HTTP Handler
    participant Retry Loop
    participant State Persistence

    Job State->>CancellationManager: current job exists
    CancellationManager->>Retry Loop: init max_retries=3
    loop Retry Attempts
        Retry Loop->>HTTP Handler: fetch job details (GET)
        alt Status 200 or 3xx
            HTTP Handler-->>Retry Loop: job found/redirect
            Retry Loop->>HTTP Handler: submit cancellation (POST)
            alt Success Response
                HTTP Handler-->>Retry Loop: cancellation confirmed
                Retry Loop->>State Persistence: save stats & clear current job
                State Persistence-->>Retry Loop: persisted
                Retry Loop-->>CancellationManager: success
            else Unexpected Response
                HTTP Handler-->>Retry Loop: error status
                Retry Loop->>Retry Loop: exponential backoff
            end
        else Status 4xx/5xx or Error
            HTTP Handler-->>Retry Loop: error
            Retry Loop->>Retry Loop: exponential backoff & retry
        end
    end
    alt All Retries Exhausted
        Retry Loop->>State Persistence: increment failure counters
        Retry Loop-->>CancellationManager: failure after 3 attempts
    end
sequenceDiagram
    participant EmailMonitor
    participant Gmail OAuth
    participant Retry Handler
    participant Config Storage
    participant Token Expiry

    EmailMonitor->>Retry Handler: token refresh needed
    Retry Handler->>Retry Handler: init max_attempts=3, timeout=30s
    loop Retry Attempts
        Retry Handler->>Gmail OAuth: request token refresh
        alt Success
            Gmail OAuth-->>Retry Handler: access_token + expiry
            Retry Handler->>Config Storage: update token & expiry
            Config Storage-->>Retry Handler: persisted
            Retry Handler-->>EmailMonitor: token refreshed
        else Failure
            Gmail OAuth-->>Retry Handler: error
            Retry Handler->>Retry Handler: exponential backoff
            Retry Handler->>Token Expiry: check timeout elapsed
            alt Timeout Exceeded
                Token Expiry-->>Retry Handler: exceeded 30s
                Retry Handler-->>EmailMonitor: raise ValueError (final error)
            else Continue Retrying
                Retry Handler->>Retry Handler: attempt next retry
            end
        end
    end
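The refresh flow in the diagram above amounts to a bounded retry loop. A minimal sketch, assuming a per-attempt refresh callable and treating the 30 s limit as an overall deadline; the helper name, delays, and the exact point where the deadline is checked are illustrative, not the PR's code:

```python
import asyncio
import time
from typing import Awaitable, Callable, Optional

import aiohttp


async def refresh_with_deadline(
    refresh_once: Callable[[], Awaitable[str]],
    max_attempts: int = 3,
    deadline_s: float = 30.0,
) -> str:
    """Retry with exponential backoff, giving up once the overall deadline passes."""
    start = time.monotonic()
    last_error: Optional[Exception] = None
    for attempt in range(max_attempts):
        try:
            return await refresh_once()
        except (aiohttp.ClientError, asyncio.TimeoutError) as exc:
            last_error = exc
            if time.monotonic() - start >= deadline_s:
                break  # total time budget exhausted, stop retrying
            await asyncio.sleep(2 ** attempt)  # 1 s, 2 s, 4 s backoff
    raise ValueError(f"OAuth token refresh failed after {max_attempts} attempts: {last_error}")
```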

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

  • PR #65: Modifies ConfigPreview in ui_textual.py with type-annotated class constants (SENSITIVE_KEYS, SECTION_ORDER) and refactored rendering logic—directly aligned with UI changes in this PR.
  • PR #6: Introduces environment variable config handling in config.py via AppConfig.get modifications and environment-based overrides—core mechanism extended here.
  • PR #45: Updates email_monitor.py OAuth handling and README/docs for monitoring features—aligns with OAuth retry enhancements and documentation restructuring.

Suggested labels

backend, feature, docs, needs review

🚥 Pre-merge checks: 3 passed

  • Title check: ✅ Passed. Title directly summarises the main change: security hardening via environment variables, timeouts, and resilience improvements across multiple components.
  • Description check: ✅ Passed. Description is well-related to the changeset, detailing P0/P1/P2 security remediation priorities, specific changes per file, and reference to the security remediation plan.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 91.52%, which is sufficient. The required threshold is 80.00%.



@gemini-code-assist
Contributor

Summary of Changes

Hello @tdawe1, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the GengoWatcher application by bolstering its security posture, improving operational resilience, and refining configuration management. It ensures sensitive data is handled more securely through environment variables, makes network interactions more robust against transient failures, and improves the integrity and transparency of configuration changes. The accompanying documentation and extensive new test suite reflect a commitment to a more stable and maintainable system.

Highlights

  • Security Hardening: Implemented environment variable overrides for sensitive credentials (e.g., GENGO_USER_SESSION, GENGOWATCHER_API_TOKEN, GMAIL_*) to prevent cleartext storage in configuration files. This includes new .env.example and config.ini.example files, and .gitignore updates.
  • Resilience Improvements: Enhanced network operations with timeouts and retries: OAuth token refresh now includes a 30-second timeout and 3 retries with exponential backoff, RSS feed parsing is wrapped in a ThreadPoolExecutor with a 30-second timeout to prevent blocking, and job cancellation attempts now feature 3 retries with exponential backoff (a sketch of this retry pattern follows this list).
  • Configuration Management: Fixed a CLI configuration schema mismatch ([Credentials] to [WebSocket]), added fcntl.flock for file-based locking during concurrent config writes to prevent corruption, and prefixed auto-repair messages with WARNING: for better user awareness.
  • Documentation & Testing: The README.md has been significantly simplified, focusing on quick start and core features. A new docs/SECURITY_REMEDIATION.md outlines the security plan. Numerous new and enhanced test files have been added across the codebase to improve coverage for integration scenarios, stats management, UI components, watcher functionality, and the web API.
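
As a rough sketch of the retry shape described in the resilience bullet above, applied to the cancellation call; the URL handling, payload, and success criteria here are assumptions rather than the PR's exact implementation:

```python
import asyncio

import aiohttp


async def cancel_job_with_retry(
    session: aiohttp.ClientSession, cancel_url: str, max_retries: int = 3
) -> bool:
    """Attempt a cancellation POST up to three times with exponential backoff."""
    for attempt in range(max_retries):
        try:
            async with session.post(
                cancel_url, timeout=aiohttp.ClientTimeout(total=30)
            ) as resp:
                # 200 and redirect responses are treated as success, per the review notes.
                if resp.status in (200, 302, 303):
                    return True
        except (aiohttp.ClientError, asyncio.TimeoutError):
            pass  # fall through to backoff and retry
        if attempt < max_retries - 1:
            await asyncio.sleep(2 ** attempt)  # 1 s, then 2 s
    return False
```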

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces significant security and resilience enhancements, aligning with the outlined remediation plan. The move to environment variables for secrets, the addition of timeouts and retries for network operations like OAuth refresh and RSS fetching, and the implementation of retry logic for job cancellation are all excellent improvements. The code is also hardened against race conditions in configuration writing and fixes a schema mismatch.

I have one critical review comment regarding the use of fcntl for file locking, which is not platform-independent and will cause the application to fail on Windows. A suggestion for a cross-platform fix is provided.

Comment on lines +280 to +284
fcntl.flock(f.fileno(), fcntl.LOCK_EX)
try:
    self._config_parser.write(f)
except IOError as e:
    print(f"Error saving config: {e}")
finally:
    fcntl.flock(f.fileno(), fcntl.LOCK_UN)

critical

To complement the conditional import of fcntl, the calls to fcntl.flock must also be made conditional to avoid a NameError on Windows where fcntl would not be imported.

Suggested change
fcntl.flock(f.fileno(), fcntl.LOCK_EX)
try:
    self._config_parser.write(f)
except IOError as e:
    print(f"Error saving config: {e}")
finally:
    fcntl.flock(f.fileno(), fcntl.LOCK_UN)

if sys.platform != 'win32':
    fcntl.flock(f.fileno(), fcntl.LOCK_EX)
try:
    self._config_parser.write(f)
finally:
    if sys.platform != 'win32':
        fcntl.flock(f.fileno(), fcntl.LOCK_UN)


Copilot AI left a comment


Pull request overview

This PR implements security hardening measures for GengoWatcher, focusing on protecting sensitive credentials, improving network resilience, and fixing configuration issues. The changes follow a comprehensive security remediation plan that addresses critical vulnerabilities in secrets management and adds defensive layers against network failures.

Changes:

  • Added environment variable overrides for sensitive credentials (session tokens, API keys, OAuth credentials)
  • Implemented timeout and retry mechanisms for OAuth token refresh and job cancellation operations
  • Added RSS feed timeout wrapper to prevent blocking on slow/stalled feeds
  • Fixed config schema mismatch and added file locking for concurrent writes
  • Created example configuration files with placeholders instead of actual secrets

Reviewed changes

Copilot reviewed 25 out of 26 changed files in this pull request and generated 4 comments.

Summary per file:

  • src/gengowatcher/config.py: Environment variable override system, file locking for config writes, warning prefixes for auto-repairs
  • src/gengowatcher/email_monitor.py: OAuth refresh with 30s timeout and 3-retry exponential backoff
  • src/gengowatcher/watcher.py: RSS fetch wrapped in ThreadPoolExecutor with 30s timeout
  • src/gengowatcher/job_cancellation_manager.py: Job cancellation with 3-attempt retry and exponential backoff
  • src/gengowatcher/main.py: Fixed config section from [Credentials] to [WebSocket]
  • src/gengowatcher/ui_textual.py: ConfigPreview class constants extraction, graceful fallback handling
  • .gitignore: Added .env patterns to prevent secrets exposure
  • config.ini.example: Template configuration with placeholders
  • .env.example: Environment variable template for secrets
  • docs/SECURITY_REMEDIATION.md: Security remediation plan documentation
Comments suppressed due to low confidence (1)

tests/test_chart_placeholder.py:1

  • The entire test file for ChartPlaceholder has been deleted. This removes test coverage for the ChartPlaceholder widget including placeholder rendering when no state is provided, empty jobs handling, and chart display functionality. Consider restoring these tests or replacing them with equivalent coverage.


# The top line should have more empty spaces on the left (lower values)
# and more filled blocks on the right (higher values)
top_line = lines[0]
bottom_line = lines[-1]

Copilot AI Jan 29, 2026


The test test_render_chart_ascending_values is incomplete. Line 81 defines top_line = lines[0] but was removed, leaving only bottom_line. The test should verify that ascending values produce a chart where the bottom line has more filled content than the top line, but no assertions exist to validate this expected behavior.


# Keys that should be masked for security
SENSITIVE_KEYS = {
SENSITIVE_KEYS: ClassVar[set[str]] = {

Copilot AI Jan 29, 2026


The type annotation ClassVar[set[str]] is correct for a class variable containing a set of strings. However, the variable is defined as a mutable set at the class level. Consider using frozenset[str] instead to prevent accidental modification of this security-critical configuration.
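
A minimal sketch of the frozenset variant suggested above; the concrete key names below are placeholders, since the full contents of SENSITIVE_KEYS are not shown in this diff:

```python
from typing import ClassVar


class ConfigPreview:
    # Immutable set, so the security-critical key list cannot be modified at runtime.
    SENSITIVE_KEYS: ClassVar[frozenset[str]] = frozenset({"token", "secret", "password"})
```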

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Copilot AI review requested due to automatic review settings February 2, 2026 03:15

Copilot AI left a comment


Pull request overview

Copilot reviewed 25 out of 26 changed files in this pull request and generated 5 comments.



Comment on lines +281 to +285
fcntl.flock(f.fileno(), fcntl.LOCK_EX)
try:
    self._config_parser.write(f)
except IOError as e:
    print(f"Error saving config: {e}")
finally:
    fcntl.flock(f.fileno(), fcntl.LOCK_UN)

Copilot AI Feb 2, 2026


File locking is only applied during config save but not available on Windows. The code checks sys.platform != 'win32' to conditionally import fcntl, but then uses fcntl.flock unconditionally in the save_config method (lines 281-285). This will cause a NameError on Windows systems.

import re
import time
from collections import deque
from typing import ClassVar

Copilot AI Feb 2, 2026


The ClassVar type annotation imports from typing (line 12) but this file targets Python 3.8+ according to setup requirements. For Python 3.8-3.9, this should be from typing import ClassVar, but for Python 3.10+ it's available as typing.ClassVar. The current import is correct, but ensure this is compatible with your minimum Python version.

Comment on lines +548 to +559
# Wrap feedparser in thread with timeout to prevent blocking
with concurrent.futures.ThreadPoolExecutor() as executor:
    future = executor.submit(
        feedparser.parse,
        feed_url,
        request_headers=headers,
    )
    try:
        feed = future.result(timeout=30)
    except concurrent.futures.TimeoutError:
        self.logger.warning("RSS feed fetch timed out after 30 seconds")
        return None

Copilot AI Feb 2, 2026


The ThreadPoolExecutor should be reused across calls rather than creating a new one for each RSS fetch. Creating and destroying thread pools on every fetch is inefficient and wastes resources. Consider creating a single thread pool as an instance variable during initialization.

Suggested change
# Wrap feedparser in thread with timeout to prevent blocking
with concurrent.futures.ThreadPoolExecutor() as executor:
    future = executor.submit(
        feedparser.parse,
        feed_url,
        request_headers=headers,
    )
    try:
        feed = future.result(timeout=30)
    except concurrent.futures.TimeoutError:
        self.logger.warning("RSS feed fetch timed out after 30 seconds")
        return None

# Wrap feedparser in thread with timeout to prevent blocking.
# Reuse a single ThreadPoolExecutor instance across calls to avoid
# repeatedly creating and destroying thread pools.
if not hasattr(self, "_rss_executor"):
    self._rss_executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)
future = self._rss_executor.submit(
    feedparser.parse,
    feed_url,
    request_headers=headers,
)
try:
    feed = future.result(timeout=30)
except concurrent.futures.TimeoutError:
    self.logger.warning("RSS feed fetch timed out after 30 seconds")
    return None

Comment on lines +155 to +164
except (aiohttp.ClientError, asyncio.TimeoutError) as e:
    last_error = e
    if attempt < 2:
        delay = 2**attempt
        self.logger.warning(
            f"OAuth refresh attempt {attempt + 1} failed: {e}, retrying in {delay}s"
        )
        await asyncio.sleep(delay)

self.logger.debug("OAuth token refreshed successfully")
raise ValueError(f"OAuth token refresh failed after 3 attempts: {last_error}")

Copilot AI Feb 2, 2026


The retry logic uses exponential backoff correctly, but there's no logging when a retry succeeds after initial failures. Consider adding a log message when recovery occurs to help with debugging and monitoring system resilience.

Comment on lines +2 to 7
if sys.platform != 'win32':
    import fcntl
import json
import os
from pathlib import Path
import sys

Copilot AI Feb 2, 2026


Missing import for sys module. The code uses sys.platform on line 2 but the import statement is on line 7, which comes after the conditional import that depends on it.

Suggested change
if sys.platform != 'win32':
    import fcntl
import json
import os
from pathlib import Path
import sys

import sys
if sys.platform != 'win32':
    import fcntl
import json
import os
from pathlib import Path


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 20

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/gengowatcher/main.py (1)

164-184: ⚠️ Potential issue | 🟡 Minor

Avoid labelling optional user_key as required.

user_key is described as optional but the placeholder path marks it as “required”, which is confusing. Consider tracking required/optional per field.

Proposed fix
-    required_fields = [
-        ("WebSocket", "user_id", "Your Gengo user ID"),
-        ("WebSocket", "user_key", "Your Gengo API key (optional)"),
-        (
-            "WebSocket",
-            "user_session",
-            "Your session token from browser cookies",
-        ),
-    ]
+    required_fields = [
+        ("WebSocket", "user_id", "Your Gengo user ID", True),
+        ("WebSocket", "user_key", "Your Gengo API key", False),
+        (
+            "WebSocket",
+            "user_session",
+            "Your session token from browser cookies",
+            True,
+        ),
+    ]
 
-    for section, option, description in required_fields:
+    for section, option, description, required in required_fields:
         current = config.get(section, option)
         is_placeholder = current in PLACEHOLDER_CONFIG_VALUES
 
         if is_placeholder:
-            prompt_text = f"[label]{description}[/] [warning](required)[/]: "
+            label = "required" if required else "optional"
+            prompt_text = f"[label]{description}[/] [warning]({label})[/]: "
🤖 Fix all issues with AI agents
In `@src/gengowatcher/config.py`:
- Around line 251-253: The print call in config.py uses an unnecessary f-string
for a static message; change the print statement in the relevant block (the line
printing "WARNING: Config file updated with missing sections/options") to use a
plain string literal (remove the leading f) so the message is not formatted as
an f-string.
- Around line 279-287: The save_config block uses fcntl unconditionally which
will raise NameError on Windows; update the method (save_config) that writes
self.CONFIG_FILE so fcntl calls are only used when available (e.g., check for
presence of the fcntl symbol or platform != "win32"), and provide a safe
fallback on Windows (skip file locking or use msvcrt.locking) so the file write
still occurs without raising; ensure the lock/unlock calls around
self._config_parser.write(f) are guarded by the availability check and that
exceptions still release any OS-specific locks.
- Around line 2-8: The module references sys.platform before sys is imported,
causing a NameError; move the import sys so it appears before the conditional
that checks sys.platform (i.e., import sys must be placed above or immediately
before the if sys.platform != 'win32': line) and adjust imports in config.py
accordingly so the fcntl conditional uses the already-imported sys; no other
logic changes required.

In `@src/gengowatcher/job_cancellation_manager.py`:
- Around line 267-269: The log call uses an unnecessary f-string: replace the
f-prefixed literal in the self.logger.error(...) call (the cancellation failure
log inside the job cancellation logic in JobCancellationManager) with a normal
string literal (remove the leading "f") so the message reads "Cancellation may
have failed - unexpected response" without the f prefix.
- Around line 229-285: The stats dictionary mutations in this block (updates to
self.stats["successful_cancellations"], self.stats["total_lost_rewards"],
self.stats["jobs_saved"], and self.stats["failed_cancellations"]) are not
protected by self._lock; wrap all modifications to self.stats with self._lock to
avoid races with get_stats() and persistence, e.g., acquire self._lock, update
counters and create any dict/list entries into local variables (or a
shallow-copied entry) then release the lock before calling clear_current_job()
and self._save_job_state(); also apply the same locking pattern to other
failed_cancellations increments elsewhere in this method so every mutation of
self.stats is guarded by self._lock.

In `@tests/test_integration_scenarios.py`:
- Around line 404-428: The tests test_zero_check_interval_validation and
test_negative_check_interval_validation create an unused local variable watcher
by assigning GengoWatcher(...) which triggers a Ruff F841 warning; fix by
removing the unused assignment and just instantiate GengoWatcher(...) without
assigning (or assign to _ if explicit discard preferred), updating the lines
where watcher = GengoWatcher(mock_config, mock_state, mock_logger) in both
test_zero_check_interval_validation and test_negative_check_interval_validation.

In `@tests/test_stats_manager_extended.py`:
- Around line 273-285: The test computes average using session.total_value but
calls record_job with accepted=False so total_value stays 0; update the test to
either mark the jobs as accepted (call record_job(..., accepted=True)) for the
ones intended to contribute to total_value, or change the average calculation to
use only accepted job counts/values (e.g., use session.accepted_jobs and sum of
accepted values). Locate the test function test_average_reward_with_jobs and
modify the record_job calls or the average expression so total_value and
jobs_found align (use StatsManager.record_job and StatsManager.session fields).
- Around line 27-31: The test test_session_stats_duration_calculation is flaky
because SessionStats.duration_seconds returns an int
(SessionStats.duration_seconds), so a 0.1s sleep can still yield 0; update the
test to relax the assertion to assert stats.duration_seconds >= 0 (or
alternatively increase the sleep to >=1 second) so the test matches the
integer-second behavior of SessionStats.duration_seconds.
- Around line 48-55: The test test_all_time_stats_initialization should not
assert on a non-existent attribute: remove the assertion referencing
stats.first_run_time; update the test_all_time_stats_initialization function to
only check stats.total_jobs, stats.total_sessions, and stats.total_value against
their expected initial values, leaving AllTimeStats() construction and the three
valid asserts intact.
- Around line 68-157: The tests reference non-existent SourceStats attributes
and wrong source mapping: SourceStats defines website, websocket, email (not rss
or web) and record_job maps sources containing "web" to by_source.website;
update tests to use website instead of rss/web and use "Web" (or "web") as the
source string instead of "RSS" so assertions check manager.by_source.website and
manager.by_source.websocket/email as appropriate (refer to SourceStats,
record_job, and by_source.website/by_source.websocket fields to locate changes).

In `@tests/test_stats_manager.py`:
- Around line 174-183: The test test_get_peak_hour_empty is asserting that
peak_hour may be -1, but StatsManager.get_peak_hour returns (12, 0.0) when
hourly_counts is empty; update the test to reflect the implementation by
asserting peak_hour == 12 and peak_count == 0.0 (or 0) for the
manager.get_peak_hour() call, referencing StatsManager.get_peak_hour and the
hourly_counts behavior so the test matches current logic.
- Around line 104-126: The test is asserting non-existent SourceStats attributes
(rss and web); update either the implementation or the tests so names match: in
the tests referring to manager.by_source, replace assertions to use the existing
attribute "website" instead of "web" (i.e., assert manager.by_source.website ==
1), and remove or adjust the RSS assertion to use a recognized source (or add
handling in StatsManager.record_job to map "RSS" to a new rss counter on
SourceStats). Locate SourceStats and StatsManager.record_job to ensure the
source mapping and attribute names are consistent with the test.

In `@tests/test_ui_tabs_enhanced.py`:
- Around line 61-70: The test creates StatsManager with a stats_path inside a
tempfile.TemporaryDirectory() context but returns a GengoWatcherApp that uses
that StatsManager after the context exits, so the temp directory (and stats
file) may be removed; to fix, ensure the temp directory outlives the
StatsManager by creating the TemporaryDirectory outside the with-block (or avoid
using the context manager and create a persistent temp dir), then construct
stats_path and instantiate StatsManager (reference: StatsManager, stats_path,
GengoWatcherApp) before returning the app so the underlying directory remains
available for later file operations.

In `@tests/test_ui_tabs.py`:
- Around line 252-260: The async test function test_dashboard_contains_chart is
missing the `@pytest.mark.asyncio` decorator; add the decorator above the test
function so pytest runs it in an asyncio event loop. Locate
test_dashboard_contains_chart (which uses create_mock_app and awaits
app.run_test() and queries HourlyActivity) and annotate it with
`@pytest.mark.asyncio` to match the other async tests.

In `@tests/test_ui_textual_components.py`:
- Around line 74-86: The fixture mock_stats currently returns a StatsManager
after exiting the tempfile.TemporaryDirectory() context so the temp dir is
removed too early; change mock_stats to yield the stats from inside the with
block (use yield instead of return) so the TemporaryDirectory stays alive for
the test, creating stats via StatsManager(stats_path=stats_path) and setting
session, all_time, and hourly_counts as before, and let the with block handle
cleanup when the fixture teardown runs.

In `@tests/test_watcher_comprehensive.py`:
- Around line 138-143: The test creates an unused binding "mock_file" when
patching builtins.open which triggers Ruff F841; remove the unused variable by
changing the context manager from "with patch('builtins.open', mock_open()) as
mock_file:" to simply "with patch('builtins.open', mock_open()):" in the test
that instantiates GengoWatcher (the block that references GengoWatcher,
mock_config, mock_state, logger and asserts watcher._csv_writer), so the open
mock remains applied but no unused name is introduced.
- Around line 505-521: The test reuses the same job IDs across threads so the
watcher's session dedupe (_seen_jobs_session) collapses them to 10 instead of
50; update test_process_new_job_thread_safe to generate unique IDs per thread
(e.g., include a thread index or UUID in the id string passed to
_process_new_job like f"{thread_idx}-{i}" or use threading.get_ident()) so all
50 jobs are treated distinct and session_new_entries will reach 50, and keep
references to _process_new_job, _seen_jobs_session and session_new_entries when
making the change.
- Around line 25-44: The mock_config fixture currently hardcodes
"/tmp/entries.csv" in config_data["Paths"]["all_entries_log"]; change the
fixture signature to accept the pytest tmp_path fixture (def
mock_config(tmp_path):) and set config_data["Paths"]["all_entries_log"] =
str(tmp_path / "entries.csv") so each test gets an isolated temporary file; keep
the fixture name mock_config and update any callers if necessary.

In `@tests/test_web_comprehensive.py`:
- Around line 248-255: The test_get_status assertion for is_running is inverted;
update the test in test_get_status (tests/test_web_comprehensive.py) to expect
True (since shutdown_event.is_set() is False, WatcherStatus.is_running should be
True). Locate the test_get_status function and change the assertion for
status.is_running to assert True (e.g., assert status.is_running is True or
assert status.is_running) to match the fixture shutdown semantics.
- Around line 31-38: The mock_config fixture currently hardcodes
"/tmp/test_entries.csv"; update the fixture signature to accept pytest's
tmp_path and replace that hardcoded path in the config.get side_effect mapping
for key ("Paths","all_entries_log") with str(tmp_path / "test_entries.csv") so
tests use an isolated temporary file; ensure the fixture name mock_config and
the AppConfig spec remain unchanged and only the returned mapping value is
adjusted.
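
For the two tmp_path prompts above, the intended fixture change looks roughly like this sketch; the sections are abbreviated and the MagicMock wiring is simplified, so only the path handling should be read as the point:

```python
from unittest.mock import MagicMock

import pytest


@pytest.fixture
def mock_config(tmp_path):
    """Sketch: accept pytest's tmp_path so the CSV log lands in a per-test directory."""
    config = MagicMock()
    config.config = {
        "Paths": {
            # tmp_path is a pathlib.Path unique to each test invocation.
            "all_entries_log": str(tmp_path / "entries.csv"),
        },
        # ... other sections as in the original fixture ...
    }
    return config
```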
🧹 Nitpick comments (13)
src/gengowatcher/email_monitor.py (1)

133-137: Consider retrying on 5xx server errors.

Currently, non-200 responses raise ValueError immediately without retry. While this is correct for auth failures (4xx), transient server errors (5xx) might benefit from retry logic.

💡 Optional: Add 5xx to retry conditions
                     if resp.status != 200:
                         error_text = await resp.text()
+                        if resp.status >= 500:
+                            raise aiohttp.ClientError(f"Server error {resp.status}: {error_text}")
                         raise ValueError(
                             f"OAuth token refresh failed: {error_text}"
                         )
tests/test_stats_manager.py (1)

95-95: Missing blank lines before class definition.

PEP 8 requires two blank lines before a class definition at module level.

docs/SECURITY_REMEDIATION.md (1)

23-31: Add language specifier to code block.

The code block lacks a language identifier, triggering MD040.

📝 Fix
-   ```
+   ```bash
    GENGO_USER_ID=
.env.example (1)

6-8: Reorder keys to satisfy dotenv‑linter.

Swap GENGO_USER_KEY before GENGO_USER_SESSION to clear the ordering warning.

Proposed fix
 GENGO_USER_ID=
-GENGO_USER_SESSION=
 GENGO_USER_KEY=
+GENGO_USER_SESSION=
README.md (1)

38-41: Call out .env overrides in the Configuration section.

The PR adds env‑based overrides; a short note here will help users avoid missing it.

Proposed fix
-Settings are stored in `config.ini`. Key sections:
+Settings are stored in `config.ini`. Key sections:
+Environment variables in `.env` (see `.env.example`) override sensitive values in `config.ini`.
tests/test_ui_tabs_enhanced.py (3)

113-118: Avoid bare except clauses.

Bare except catches all exceptions including KeyboardInterrupt and SystemExit. Catch the specific exception from Textual's query API instead.

Proposed fix
         try:
             jobs_table = pilot.app.query_one("#jobs-table-full", DataTable)
             assert jobs_table is not None
-        except:
+        except Exception:
             tables = list(pilot.app.query(DataTable))
             assert len(tables) > 0

133-138: Avoid bare except clauses.

Same issue as above—use except Exception: at minimum.

Proposed fix
         try:
             activity_log = pilot.app.query_one("#activity-log-full", RichLog)
             assert activity_log is not None
-        except:
+        except Exception:
             logs = list(pilot.app.query(RichLog))
             assert len(logs) > 0

21-24: Hardcoded /tmp path flagged by static analysis.

Using a hardcoded /tmp path can be insecure and non-portable. Consider using tempfile.gettempdir() or a fixture-provided path.

Proposed fix
+    import tempfile as tf
     mock_watcher.shutdown_event = MagicMock()
     mock_watcher.shutdown_event.is_set.return_value = True
-    mock_watcher.PAUSE_FILE = "/tmp/gw_pause_test"
+    mock_watcher.PAUSE_FILE = f"{tf.gettempdir()}/gw_pause_test"
tests/test_ui_textual_comprehensive.py (1)

169-183: Unused pilot variable.

The pilot variable is assigned but never used. Either use _ as the variable name or remove the as pilot clause.

Proposed fix
         app = TestApp()
-        async with app.run_test() as pilot:
+        async with app.run_test():
             metrics_row = app.query_one(MetricsRow)
             cards = metrics_row.query(MetricCard)
             assert len(cards) == 5  # Found, Accepted, Value, Rate, Today
tests/test_web_api.py (1)

67-72: Hardcoded /tmp path.

Use tempfile.gettempdir() for portability and security.

Proposed fix
+    import tempfile as tf
     watcher.shutdown_event = MagicMock()
     watcher.shutdown_event.is_set.return_value = False
-    watcher.PAUSE_FILE = "/tmp/test_pause"
+    watcher.PAUSE_FILE = f"{tf.gettempdir()}/test_pause"
tests/test_watcher_extended.py (1)

331-343: Remove unused monkeypatch argument.

The monkeypatch parameter is not used in this test method.

Proposed fix
-    def test_prompt_for_config_values_no_missing(
-        self, watcher_with_mocks, capsys, monkeypatch
-    ):
+    def test_prompt_for_config_values_no_missing(
+        self, watcher_with_mocks, capsys
+    ):
tests/test_web_comprehensive.py (1)

40-42: Discard unused lambda parameters to satisfy ARG005.
Lines 40‑42 ignore inputs, which Ruff flags. Use throwaway args to keep intent clear.

Proposed fix
-    config.getint.side_effect = lambda s, k, **kw: 60
-    config.getboolean.side_effect = lambda s, k, **kw: False
-    config.getfloat.side_effect = lambda s, k, **kw: 0.0
+    config.getint.side_effect = lambda *_, **__: 60
+    config.getboolean.side_effect = lambda *_, **__: False
+    config.getfloat.side_effect = lambda *_, **__: 0.0
tests/test_watcher_comprehensive.py (1)

126-129: Discard unused lambda parameters to satisfy ARG005.
Lines 128, 242, 489, and 496 ignore some/all parameters. Use throwaway args to make intent explicit.

Proposed fix
-        mock_config.get.side_effect = lambda s, k, **kw: 0 if (s, k) == ("Watcher", "check_interval") else mock_config.config.get(s, {}).get(k)
+        mock_config.get.side_effect = lambda s, k, **_: 0 if (s, k) == ("Watcher", "check_interval") else mock_config.config.get(s, {}).get(k)
@@
-        watcher_instance.config.get.side_effect = lambda s, k, **kw: 20.0 if (s, k) == ("Watcher", "min_reward") else watcher_instance.config.config.get(s, {}).get(k)
+        watcher_instance.config.get.side_effect = lambda s, k, **_: 20.0 if (s, k) == ("Watcher", "min_reward") else watcher_instance.config.config.get(s, {}).get(k)
@@
-        watcher_instance.config.get.side_effect = lambda s, k, **kw: "REPLACE_WITH_YOUR_SESSION_TOKEN"
+        watcher_instance.config.get.side_effect = lambda *_, **__: "REPLACE_WITH_YOUR_SESSION_TOKEN"
@@
-        watcher_instance.config.get.side_effect = lambda s, k, **kw: "valid_value"
+        watcher_instance.config.get.side_effect = lambda *_, **__: "valid_value"

Also applies to: 240-243, 487-497

Comment on lines +2 to 8
if sys.platform != 'win32':
    import fcntl
import json
import os
from pathlib import Path
import sys
import threading

⚠️ Potential issue | 🔴 Critical

Critical: sys used before import.

Line 2 references sys.platform but sys is imported at line 7. This will raise NameError on module load.

🐛 Fix: Move sys import before conditional
 import configparser
+import sys
 if sys.platform != 'win32':
     import fcntl
 import json
 import os
 from pathlib import Path
-import sys
 import threading
📝 Committable suggestion


Suggested change
if sys.platform != 'win32':
    import fcntl
import json
import os
from pathlib import Path
import sys
import threading

import configparser
import sys
if sys.platform != 'win32':
    import fcntl
import json
import os
from pathlib import Path
import threading
🧰 Tools
🪛 Ruff (0.14.14)

[error] 2-2: Undefined name sys

(F821)

🤖 Prompt for AI Agents
In `@src/gengowatcher/config.py` around lines 2 - 8, The module references
sys.platform before sys is imported, causing a NameError; move the import sys so
it appears before the conditional that checks sys.platform (i.e., import sys
must be placed above or immediately before the if sys.platform != 'win32': line)
and adjust imports in config.py accordingly so the fcntl conditional uses the
already-imported sys; no other logic changes required.

Comment on lines +251 to +253
print(
    f"WARNING: Config file updated with missing sections/options"
)

⚠️ Potential issue | 🟡 Minor

Remove extraneous f-prefix.

The f-string has no placeholders.

🔧 Fix
-                        print(
-                            f"WARNING: Config file updated with missing sections/options"
-                        )
+                        print(
+                            "WARNING: Config file updated with missing sections/options"
+                        )
📝 Committable suggestion


Suggested change
print(
    f"WARNING: Config file updated with missing sections/options"
)

print(
    "WARNING: Config file updated with missing sections/options"
)
🧰 Tools
🪛 Ruff (0.14.14)

[error] 252-252: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
In `@src/gengowatcher/config.py` around lines 251 - 253, The print call in
config.py uses an unnecessary f-string for a static message; change the print
statement in the relevant block (the line printing "WARNING: Config file updated
with missing sections/options") to use a plain string literal (remove the
leading f) so the message is not formatted as an f-string.

Comment on lines +279 to +287
try:
    with open(self.CONFIG_FILE, "w", encoding="utf-8") as f:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)
        try:
            self._config_parser.write(f)
        except IOError as e:
            print(f"Error saving config: {e}")
        finally:
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)
except IOError as e:
    print(f"Error saving config: {e}")

⚠️ Potential issue | 🔴 Critical

Critical: fcntl usage crashes on Windows.

fcntl is only imported on non-Windows platforms (line 2-3), but save_config uses it unconditionally. This will raise NameError on Windows.

🐛 Fix: Guard fcntl usage
     try:
         with open(self.CONFIG_FILE, "w", encoding="utf-8") as f:
-            fcntl.flock(f.fileno(), fcntl.LOCK_EX)
-            try:
-                self._config_parser.write(f)
-            finally:
-                fcntl.flock(f.fileno(), fcntl.LOCK_UN)
+            if sys.platform != 'win32':
+                fcntl.flock(f.fileno(), fcntl.LOCK_EX)
+                try:
+                    self._config_parser.write(f)
+                finally:
+                    fcntl.flock(f.fileno(), fcntl.LOCK_UN)
+            else:
+                self._config_parser.write(f)
     except IOError as e:
         print(f"Error saving config: {e}")
📝 Committable suggestion


Suggested change
try:
    with open(self.CONFIG_FILE, "w", encoding="utf-8") as f:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)
        try:
            self._config_parser.write(f)
        except IOError as e:
            print(f"Error saving config: {e}")
        finally:
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)
except IOError as e:
    print(f"Error saving config: {e}")

try:
    with open(self.CONFIG_FILE, "w", encoding="utf-8") as f:
        if sys.platform != 'win32':
            fcntl.flock(f.fileno(), fcntl.LOCK_EX)
            try:
                self._config_parser.write(f)
            finally:
                fcntl.flock(f.fileno(), fcntl.LOCK_UN)
        else:
            self._config_parser.write(f)
except IOError as e:
    print(f"Error saving config: {e}")
🤖 Prompt for AI Agents
In `@src/gengowatcher/config.py` around lines 279 - 287, The save_config block
uses fcntl unconditionally which will raise NameError on Windows; update the
method (save_config) that writes self.CONFIG_FILE so fcntl calls are only used
when available (e.g., check for presence of the fcntl symbol or platform !=
"win32"), and provide a safe fallback on Windows (skip file locking or use
msvcrt.locking) so the file write still occurs without raising; ensure the
lock/unlock calls around self._config_parser.write(f) are guarded by the
availability check and that exceptions still release any OS-specific locks.
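
Following the prompt above, a cross-platform sketch of what a guarded save could look like. Whether Windows gets msvcrt-based locking or simply skips locking is a design choice left open by the review; the snippet below is an assumption for illustration, not the PR's code.

```python
import configparser
import sys

if sys.platform != "win32":
    import fcntl
else:
    import msvcrt


def save_config_locked(parser: configparser.ConfigParser, path: str) -> None:
    """Write the config while holding an OS-appropriate exclusive lock."""
    with open(path, "w", encoding="utf-8") as f:
        if sys.platform != "win32":
            fcntl.flock(f.fileno(), fcntl.LOCK_EX)
            try:
                parser.write(f)
            finally:
                fcntl.flock(f.fileno(), fcntl.LOCK_UN)
        else:
            # msvcrt locks a byte range from the current position; lock one byte at offset 0.
            msvcrt.locking(f.fileno(), msvcrt.LK_LOCK, 1)
            try:
                parser.write(f)
            finally:
                f.seek(0)  # return to the locked offset before unlocking
                msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, 1)
```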

Comment on lines +229 to +285
if response.status == 200:
    content = await response.text()

    # Check for success indicators
    if (
        "job cancelled" in content.lower()
        or "cancellation successful" in content.lower()
        or "job successfully cancelled" in content.lower()
    ):
        self.logger.info(
            f"✅ Successfully cancelled job {self.current_job_id}"
        )

        # Update stats
        self.stats["successful_cancellations"] += 1
        self.stats["total_lost_rewards"] += (
            self.current_job_reward
        )

        # Record cancellation
        self.stats["jobs_saved"].append(
            {
                "cancelled_job_id": self.current_job_id,
                "cancelled_reward": self.current_job_reward,
                "timestamp": datetime.now().isoformat(),
                "job_duration": time.time()
                - self.job_start_time
                if self.job_start_time
                else 0,
            }
        )

        # Clear tracking
        self.clear_current_job()
        self._save_job_state()

        return True
    else:
        self.logger.error(
            f"Cancellation may have failed - unexpected response"
        )
        self.stats["failed_cancellations"] += 1
        return False

elif response.status == 302 or response.status == 303:
    # Redirect might indicate success
    self.logger.info(
        f"✅ Job {self.current_job_id} cancelled (redirect response)"
    )

    # Update stats
    self.stats["successful_cancellations"] += 1
    self.stats["total_lost_rewards"] += self.current_job_reward

    self.clear_current_job()
    self._save_job_state()


⚠️ Potential issue | 🟠 Major

Protect stats mutations with _lock.

stats is documented as lock‑protected, but updates here bypass _lock, which can race with get_stats() and persistence. Guard updates with _lock and use locals to avoid holding the lock across clear_current_job().

Proposed fix
-                                # Update stats
-                                self.stats["successful_cancellations"] += 1
-                                self.stats["total_lost_rewards"] += (
-                                    self.current_job_reward
-                                )
-
-                                # Record cancellation
-                                self.stats["jobs_saved"].append(
-                                    {
-                                        "cancelled_job_id": self.current_job_id,
-                                        "cancelled_reward": self.current_job_reward,
-                                        "timestamp": datetime.now().isoformat(),
-                                        "job_duration": time.time()
-                                        - self.job_start_time
-                                        if self.job_start_time
-                                        else 0,
-                                    }
-                                )
+                                # Update stats (guarded)
+                                with self._lock:
+                                    job_id = self.current_job_id
+                                    reward = self.current_job_reward
+                                    job_start = self.job_start_time
+                                    self.stats["successful_cancellations"] += 1
+                                    self.stats["total_lost_rewards"] += reward
+                                    self.stats["jobs_saved"].append(
+                                        {
+                                            "cancelled_job_id": job_id,
+                                            "cancelled_reward": reward,
+                                            "timestamp": datetime.now().isoformat(),
+                                            "job_duration": time.time() - job_start
+                                            if job_start
+                                            else 0,
+                                        }
+                                    )

Apply the same locking for failed_cancellations increments elsewhere in this method.

🧰 Tools
🪛 Ruff (0.14.14)

[error] 268-268: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
In `@src/gengowatcher/job_cancellation_manager.py` around lines 229 - 285: the
stats dictionary mutations in this block (updates to
self.stats["successful_cancellations"], self.stats["total_lost_rewards"],
self.stats["jobs_saved"], and self.stats["failed_cancellations"]) are not
protected by self._lock. Wrap every modification of self.stats in self._lock to
avoid races with get_stats() and persistence: acquire self._lock, copy the
fields you need (job id, reward, start time) into locals, update the counters
and append the new entry, then release the lock before calling
clear_current_job() and self._save_job_state(). Apply the same locking pattern
to the other failed_cancellations increments elsewhere in this method so every
mutation of self.stats is guarded by self._lock.

Comment on lines +267 to +269
self.logger.error(
f"Cancellation may have failed - unexpected response"
)
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

head -n 270 src/gengowatcher/job_cancellation_manager.py | tail -n 10



Remove the unused f‑string prefix.

The string contains no placeholders, so the f prefix is unnecessary.

Proposed fix
-                                self.logger.error(
-                                    f"Cancellation may have failed - unexpected response"
-                                )
+                                self.logger.error(
+                                    "Cancellation may have failed - unexpected response"
+                                )
🧰 Tools
🪛 Ruff (0.14.14)

[error] 268-268: f-string without any placeholders

Remove extraneous f prefix

(F541)

🤖 Prompt for AI Agents
In `@src/gengowatcher/job_cancellation_manager.py` around lines 267 - 269: the log
call uses an unnecessary f-string: replace the f-prefixed literal in the
self.logger.error(...) call (the cancellation failure log inside the job
cancellation logic in JobCancellationManager) with a normal string literal
(remove the leading "f") so the message reads "Cancellation may have failed -
unexpected response" without the f prefix.

Comment on lines +25 to +44
@pytest.fixture
def mock_config():
"""Create a comprehensive mock config."""
config = MagicMock(spec=AppConfig)
config_data = {
"Watcher": {
"min_reward": 5.0,
"use_custom_user_agent": False,
"feed_url": "https://example.com/feed",
"check_interval": 60,
"enable_notifications": True,
"enable_sound": True,
},
"Paths": {
"browser_path": "",
"browser_args": "{url}",
"notification_icon_path": "/path/to/icon.png",
"sound_file": "/path/to/sound.mp3",
"all_entries_log": "/tmp/entries.csv",
},
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed (repository: tdawe1/GengoWatcher):

git ls-files tests/test_watcher_comprehensive.py
wc -l tests/test_watcher_comprehensive.py
sed -n '25,50p' tests/test_watcher_comprehensive.py

# Check for hardcoded /tmp paths in the test file
rg '/tmp' tests/test_watcher_comprehensive.py -n

# Check for Ruff configuration
fd -e "ruff.toml" -e "pyproject.toml" -e ".ruff.toml" | head -5

# Look for Ruff lint violations in the test file (check if ruff is available)
rg 'S108|hardcoded.*tmp' --type py | head -20

# Check if tmp_path is used elsewhere in the test file
rg 'tmp_path' tests/test_watcher_comprehensive.py

# Verify the mock_config fixture signature and any other fixtures around it
sed -n '1,100p' tests/test_watcher_comprehensive.py | grep -A 5 -B 5 '@pytest.fixture'

# Check pytest imports and fixture imports
head -30 tests/test_watcher_comprehensive.py

# Look at the full mock_config fixture to understand its usage
sed -n '20,65p' tests/test_watcher_comprehensive.py

# Check if project uses Ruff linting
cat pyproject.toml 2>/dev/null | grep -A 10 '\[tool.ruff\]' || echo "No Ruff config found"

# Verify tmp_path can be used with mock_config and check if the fixture is parametrised elsewhere
rg 'mock_config' tests/test_watcher_comprehensive.py | head -10

(tmp_path is pytest's built-in temporary-directory fixture, typed as pathlib.Path, so no extra plugin is required.)


Use the tmp_path fixture instead of a hardcoded /tmp path.

Line 43 hardcodes /tmp/entries.csv, which breaks test isolation. Use pytest's standard tmp_path fixture so each test writes to its own temporary directory and avoids permission and collision issues.

Proposed fix
 @pytest.fixture
-def mock_config():
+def mock_config(tmp_path):
     """Create a comprehensive mock config."""
     config = MagicMock(spec=AppConfig)
     config_data = {
         "Watcher": {
             "min_reward": 5.0,
             "use_custom_user_agent": False,
             "feed_url": "https://example.com/feed",
             "check_interval": 60,
             "enable_notifications": True,
             "enable_sound": True,
         },
         "Paths": {
             "browser_path": "",
             "browser_args": "{url}",
             "notification_icon_path": "/path/to/icon.png",
             "sound_file": "/path/to/sound.mp3",
-            "all_entries_log": "/tmp/entries.csv",
+            "all_entries_log": str(tmp_path / "entries.csv"),
         },
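
To show the isolation in practice, a standalone illustration (independent of this repo's fixtures; pytest injects tmp_path as a fresh pathlib.Path per test and cleans it up afterwards):

def test_writes_isolated_file(tmp_path):
    # tmp_path is a directory unique to this test invocation
    log_file = tmp_path / "entries.csv"
    log_file.write_text("id,reward\n")
    assert log_file.read_text().startswith("id,reward")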
🧰 Tools
🪛 Ruff (0.14.14)

[error] 43-43: Probable insecure usage of temporary file or directory: "/tmp/entries.csv"

(S108)

🤖 Prompt for AI Agents
In `@tests/test_watcher_comprehensive.py` around lines 25 - 44: the mock_config
fixture currently hardcodes "/tmp/entries.csv" in
config_data["Paths"]["all_entries_log"]; change the fixture signature to accept
the pytest tmp_path fixture (def mock_config(tmp_path):) and set
config_data["Paths"]["all_entries_log"] = str(tmp_path / "entries.csv") so each
test gets an isolated temporary file; keep the fixture name mock_config and
update any callers if necessary.

Comment on lines +138 to +143
with patch("builtins.open", mock_open()) as mock_file:
with patch("pathlib.Path.stat") as mock_stat:
mock_stat.return_value.st_size = 0
watcher = GengoWatcher(mock_config, mock_state, logger)
# CSV writer should be initialized
assert watcher._csv_writer is not None
⚠️ Potential issue | 🟠 Major

Remove the unused mock_file binding.
Line 138 assigns mock_file but never uses it, which triggers Ruff F841.

Proposed fix
-        with patch("builtins.open", mock_open()) as mock_file:
+        with patch("builtins.open", mock_open()):
🧰 Tools
🪛 Ruff (0.14.14)

[error] 138-138: Local variable mock_file is assigned to but never used

Remove assignment to unused variable mock_file

(F841)

🤖 Prompt for AI Agents
In `@tests/test_watcher_comprehensive.py` around lines 138 - 143: the test creates
an unused binding "mock_file" when patching builtins.open which triggers Ruff
F841; remove the unused variable by changing the context manager from "with
patch('builtins.open', mock_open()) as mock_file:" to simply "with
patch('builtins.open', mock_open()):" in the test that instantiates GengoWatcher
(the block that references GengoWatcher, mock_config, mock_state, logger and
asserts watcher._csv_writer), so the open mock remains applied but no unused
name is introduced.

Comment on lines +505 to +521
def test_process_new_job_thread_safe(self, watcher_instance):
"""Test that job processing is thread-safe."""
watcher_instance.show_notification = MagicMock()

def process_jobs():
for i in range(10):
watcher_instance._process_new_job(i, f"Job {i}", 10.0, f"http://example.com/{i}", "RSS")

threads = [threading.Thread(target=process_jobs) for _ in range(5)]
for t in threads:
t.start()
for t in threads:
t.join()

# All 50 jobs should be processed
assert watcher_instance.session_new_entries == 50

⚠️ Potential issue | 🟠 Major

Thread‑safety test conflicts with dedup logic.
Lines 509‑520 reuse the same job IDs across all five threads, so _seen_jobs_session dedupes them down to 10 unique jobs and the assertion of 50 will fail. Use unique IDs per thread (or adjust the assertion).
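
To make the failure mode concrete, here is a standalone toy model of the dedup behaviour (a lock-guarded seen-set only; it is not the watcher's real code):

import threading

seen = set()
count = 0
lock = threading.Lock()

def process(job_id):
    global count
    with lock:
        if job_id in seen:  # dedup: a repeated ID is counted only once
            return
        seen.add(job_id)
        count += 1

# five submitters each sending the same IDs 0..9 (sequential here;
# running them in threads would not change the dedup outcome)
for _ in range(5):
    for i in range(10):
        process(i)

assert count == 10  # not 50 -- exactly the mismatch the test's assertion hits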

Proposed fix
-        def process_jobs():
-            for i in range(10):
-                watcher_instance._process_new_job(i, f"Job {i}", 10.0, f"http://example.com/{i}", "RSS")
-
-        threads = [threading.Thread(target=process_jobs) for _ in range(5)]
+        def process_jobs(offset):
+            for i in range(10):
+                job_id = offset * 10 + i
+                watcher_instance._process_new_job(job_id, f"Job {job_id}", 10.0, f"http://example.com/{job_id}", "RSS")
+
+        threads = [threading.Thread(target=process_jobs, args=(idx,)) for idx in range(5)]
🤖 Prompt for AI Agents
In `@tests/test_watcher_comprehensive.py` around lines 505 - 521: the test reuses
the same job IDs across threads so the watcher's session dedupe
(_seen_jobs_session) collapses them to 10 instead of 50; update
test_process_new_job_thread_safe to generate unique IDs per thread (e.g.,
include a thread index or UUID in the id string passed to _process_new_job like
f"{thread_idx}-{i}" or use threading.get_ident()) so all 50 jobs are treated
distinct and session_new_entries will reach 50, and keep references to
_process_new_job, _seen_jobs_session and session_new_entries when making the
change.

Comment on lines +31 to +38
@pytest.fixture
def mock_config():
"""Create a mock config."""
config = MagicMock(spec=AppConfig)
config.get.side_effect = lambda s, k, **kw: {
("Watcher", "check_interval"): 60,
("Paths", "all_entries_log"): "/tmp/test_entries.csv",
("WebServer", "auth_token"): "test_token_12345",
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed (repository: tdawe1/GengoWatcher):

# Locate the test file and read lines 31-45 to see the full fixture
find . -name "test_web_comprehensive.py" -type f
sed -n '31,45p' ./tests/test_web_comprehensive.py

# Check for all /tmp occurrences in the test file
rg "/tmp" ./tests/test_web_comprehensive.py -n

# Also check the lambda definitions around lines 40-42
sed -n '40,42p' ./tests/test_web_comprehensive.py

# Check if there are any pyproject.toml or ruff config files
find . -name "pyproject.toml" -o -name "ruff.toml" -o -name ".ruff.toml" | head -5

# Check if tmp_path is used elsewhere in tests
rg "tmp_path" ./tests/ -l | head -5

# Check ruff configuration
cat ./ruff.toml | head -50

# Check how tmp_path is used in another test file
rg "tmp_path" ./tests/test_config.py -A 2 -B 2 | head -30

# Search for all usages of the mock_config fixture
rg "mock_config" ./tests/test_web_comprehensive.py -n | head -20

# Final check: verify the tmp_path fixture is standard pytest and available
grep -r "tmp_path" ./tests/conftest.py | head -5


Replace the hardcoded /tmp path with the pytest tmp_path fixture.

Line 37 uses a fixed /tmp path, which violates Ruff rule S108 and is not portable across environments. Use pytest's tmp_path fixture for proper test isolation.

Proposed fix
 @pytest.fixture
-def mock_config():
+def mock_config(tmp_path):
     """Create a mock config."""
     config = MagicMock(spec=AppConfig)
     config.get.side_effect = lambda s, k, **kw: {
         ("Watcher", "check_interval"): 60,
-        ("Paths", "all_entries_log"): "/tmp/test_entries.csv",
+        ("Paths", "all_entries_log"): str(tmp_path / "test_entries.csv"),
         ("WebServer", "auth_token"): "test_token_12345",
     }.get((s, k), kw.get("fallback", "test_value"))
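
For context, the side_effect-with-dict pattern used by this fixture lets a MagicMock answer config.get(section, key, fallback=...) lookups from a table — a minimal standalone sketch with illustrative keys only:

from unittest.mock import MagicMock

config = MagicMock()
config.get.side_effect = lambda s, k, **kw: {
    ("Watcher", "check_interval"): 60,
}.get((s, k), kw.get("fallback", "test_value"))

assert config.get("Watcher", "check_interval") == 60
assert config.get("Paths", "missing", fallback="/dev/null") == "/dev/null"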
🧰 Tools
🪛 Ruff (0.14.14)

[error] 37-37: Probable insecure usage of temporary file or directory: "/tmp/test_entries.csv"

(S108)

🤖 Prompt for AI Agents
In `@tests/test_web_comprehensive.py` around lines 31 - 38: the mock_config
fixture currently hardcodes "/tmp/test_entries.csv"; update the fixture
signature to accept pytest's tmp_path and replace that hardcoded path in the
config.get side_effect mapping for key ("Paths","all_entries_log") with
str(tmp_path / "test_entries.csv") so tests use an isolated temporary file;
ensure the fixture name mock_config and the AppConfig spec remain unchanged and
only the returned mapping value is adjusted.

Comment on lines +248 to +255
def test_get_status(self, web_api):
"""Test get_status returns WatcherStatus."""
status = web_api.get_status()
assert isinstance(status, WatcherStatus)
assert status.is_running is False
assert status.websocket_status == "Live"
assert status.rss_status == "Checking"

⚠️ Potential issue | 🟠 Major

Fix is_running expectation to match shutdown semantics.
Line 252 asserts False, but the fixture sets shutdown_event.is_set() to False, so is_running should be True.

Proposed fix
-        assert status.is_running is False
+        assert status.is_running is True
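
The assumed relationship, for clarity (a hypothetical one-liner; the actual get_status implementation may differ):

# is_running is presumed to be the inverse of the shutdown flag
is_running = not shutdown_event.is_set()  # is_set() == False -> is_running == True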
🤖 Prompt for AI Agents
In `@tests/test_web_comprehensive.py` around lines 248 - 255: the test_get_status
assertion for is_running is inverted; update the test in test_get_status
(tests/test_web_comprehensive.py) to expect True (since shutdown_event.is_set()
is False, WatcherStatus.is_running should be True). Locate the test_get_status
function and change the assertion for status.is_running to assert True (e.g.,
assert status.is_running is True or assert status.is_running) to match the
fixture shutdown semantics.
