
ci: pin redis image to SHA digest and add Docker Hub auth #57

Open
27Bslash6 wants to merge 6 commits into main from ci/pin-redis-image

Conversation

@27Bslash6
Contributor

Summary

  • Pin redis:7-alpine service container to amd64 manifest digest (sha256:4bfd9e...)
  • Add credentials block using DOCKERHUB_USERNAME / DOCKERHUB_TOKEN org secrets

Fixes Docker Hub unauthenticated rate limit errors (toomanyrequests) on self-hosted ARC runners.
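Based on the two summary bullets, the workflow's services block would look roughly like the following sketch. The job name, port mapping, and everything after the `sha256:4bfd9e` digest prefix (elided in the summary) are assumptions, not taken from the actual diff:

```yaml
# Sketch only: job name and port mapping are assumed; the digest is
# truncated as in the PR summary and must be the real amd64 manifest digest.
jobs:
  test:
    runs-on: [self-hosted, cachekit]
    services:
      redis:
        # Pin to an immutable manifest digest instead of the mutable 7-alpine tag
        image: redis@sha256:4bfd9e...
        # Authenticated pulls avoid Docker Hub's unauthenticated rate limit
        credentials:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
        ports:
          - 6379:6379
```

Pinning by digest guarantees every runner pulls byte-identical image content, while the `credentials` block moves pulls from the shared anonymous rate-limit bucket to the org account's quota.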

Test plan

  • CI passes with pinned image + authenticated pull
  • Self-hosted runners (cachekit) pull redis without rate limiting

Fixes Docker Hub unauthenticated rate limit errors on self-hosted
runners by adding credentials block and pinning redis:7-alpine to
its amd64 manifest digest.
@codecov

codecov bot commented Feb 28, 2026

❌ 1 test failed:

Tests completed: 231 | Failed: 1 | Passed: 230 | Skipped: 7
Flaky test:
tests/critical/test_intelligent_cache_regression.py::TestIntelligentCacheRegression::test_concurrent_access_regression

Flake rate in main: 71.43% (Passed 4 times, Failed 10 times)

Stack Traces | 0.041s run time
self = <tests.critical.test_intelligent_cache_regression.TestIntelligentCacheRegression object at 0x7175b56bf0b0>

    def test_concurrent_access_regression(self):
        """CRITICAL: Concurrent access must work identically for both interfaces.
    
        Note: Sync decorators don't support distributed locking (async-only feature).
        With max_workers=5 and 10 concurrent requests, some cache stampede is expected.
        The test verifies that both interfaces behave identically, not that there's
        perfect deduplication (which requires async functions with distributed locks).
        """
        legacy_calls = intelligent_calls = 0
    
        @cache(ttl=300, namespace="legacy_concurrent")
        def legacy_concurrent(value):
            nonlocal legacy_calls
            legacy_calls += 1
            time.sleep(0.01)  # Simulate work
            return f"legacy_{value}"
    
        @cache(ttl=300, namespace="intelligent_concurrent")
        def intelligent_concurrent(value):
            nonlocal intelligent_calls
            intelligent_calls += 1
            time.sleep(0.01)  # Simulate work
            return f"intelligent_{value}"
    
        # Test concurrent access with ThreadPoolExecutor
        with ThreadPoolExecutor(max_workers=5) as executor:
            # Submit multiple concurrent requests for same value
            legacy_futures = [executor.submit(legacy_concurrent, "test") for _ in range(10)]
            intelligent_futures = [executor.submit(intelligent_concurrent, "test") for _ in range(10)]
    
            # Collect results
            legacy_results = [f.result() for f in legacy_futures]
            intelligent_results = [f.result() for f in intelligent_futures]
    
        # Both should have some cache stampede (multiple calls due to no distributed lock in sync mode)
        # But both should behave identically (same number of calls)
        assert legacy_calls > 0 and legacy_calls <= 10, "Should have some calls but not all 10"
        assert intelligent_calls > 0 and intelligent_calls <= 10, "Should have some calls but not all 10"
        # Both interfaces should have similar behavior (within tolerance)
>       assert abs(legacy_calls - intelligent_calls) <= 2, "Both interfaces should have similar stampede behavior"
E       AssertionError: Both interfaces should have similar stampede behavior
E       assert 4 <= 2
E        +  where 4 = abs((1 - 5))

tests/critical/test_intelligent_cache_regression.py:223: AssertionError
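The flake comes from thread-level cache stampede: as the test's docstring notes, sync decorators have no distributed lock, so concurrent cache misses can each execute the wrapped function, and the two interfaces' call counts drift apart run to run. A minimal standalone sketch (not cachekit's actual decorator) shows the effect and how a lock eliminates it:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def naive_cache(fn):
    """Memoize with no lock: concurrent misses all run fn (cache stampede)."""
    store = {}
    def wrapper(arg):
        if arg not in store:  # several threads can pass this check at once
            store[arg] = fn(arg)
        return store[arg]
    return wrapper

def locked_cache(fn):
    """Memoize behind a lock: misses are serialized, so fn runs exactly once."""
    store = {}
    lock = threading.Lock()
    def wrapper(arg):
        with lock:
            if arg not in store:
                store[arg] = fn(arg)
            return store[arg]
    return wrapper

naive_calls = 0
locked_calls = 0

@naive_cache
def naive(value):
    global naive_calls
    naive_calls += 1
    time.sleep(0.01)  # simulate work, widening the race window
    return f"naive_{value}"

@locked_cache
def locked(value):
    global locked_calls
    locked_calls += 1
    time.sleep(0.01)
    return f"locked_{value}"

# Same shape as the failing test: 5 workers, 10 requests for one key
with ThreadPoolExecutor(max_workers=5) as pool:
    naive_results = list(pool.map(lambda _: naive("test"), range(10)))
    locked_results = list(pool.map(lambda _: locked("test"), range(10)))

# All callers still get the right value; only the underlying call counts differ.
assert set(naive_results) == {"naive_test"}
assert locked_results == ["locked_test"] * 10
```

The naive call count is nondeterministic (anywhere from 1 to 10 depending on thread scheduling), which is exactly why asserting `abs(legacy_calls - intelligent_calls) <= 2` across two such decorators flakes.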


PR: lint + critical tests on Python 3.12 only (1 runner)
Push: full 6-version matrix + doc tests + version check + pip-audit

uv, Rust stable (rustfmt, clippy), and Python 3.9-3.14 are now baked into the custom runner image (ghcr.io/cachekit-io/runner).

Removes per-job: setup-uv action, `uv python install`, dtolnay/rust-toolchain.
Keeps: Swatinem/rust-cache and actions/cache for compiled artifacts.

- actions/checkout v4 → v6
- actions/cache v4 → v5
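With the toolchains baked into the runner image, a job shrinks to roughly the following hypothetical sketch. The job name, cache paths, and the final run command are illustrative assumptions; only the action names and versions come from the notes above:

```yaml
# Hypothetical slimmed job on the custom runner image; details beyond the
# named actions (checkout@v6, cache@v5, Swatinem/rust-cache) are assumed.
jobs:
  critical-tests:
    runs-on: cachekit  # toolchains pre-baked into ghcr.io/cachekit-io/runner
    steps:
      - uses: actions/checkout@v6
      # removed: setup-uv, `uv python install`, dtolnay/rust-toolchain
      - uses: Swatinem/rust-cache@v2
      - uses: actions/cache@v5
        with:
          path: target/wheels
          key: wheels-${{ hashFiles('Cargo.lock') }}
      - run: uv run pytest tests/critical
```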