To understand where your day went
A macOS desktop app that captures screenshots plus window and app context (both background and foreground) on a schedule and transforms them into a timeline, daily summaries, project milestones, and addiction tracking, with an optional E2E-encrypted social feed. Screencap answers questions like:
- What did I actually do today?
- How long did I really work?
- Am I spending too much time on Chess?
- What progress did I make on my project?
- What are my colleagues doing?
- What actual progress on project X has been made since September?
The idea behind this open-source project is to inspire as many forks as possible. The project (both the app and the social backend) is fully free to use, encouraging everyone to customise it and build their own Screencaps. It started as a background project tracker, since we all tend to have zero-to-few screenshots from projects we worked on for months. Then the addiction tracker came in, then Spotify playback capture, the End of Day flow, the activity popup, and the E2EE social network in the tray (I couldn't help myself).
Download · Changelog · Security · E2EE & Sharing · Local LLM
Time period (day), visualized as a stream of events
- Each card is an event — multiple captures with the same context and similar pixels merge into one time window
- Rich context extraction — app name, window title, browser URL, media playing, and per-app lower contexts like VS Code workspace
- Fully editable — relabel events, dismiss captures, copy screenshots, create per-app or per-website automation rules, mark as progress or addiction (done automatically via LLMs by default)
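The merge step (same context + similar pixels) can be illustrated with a small sketch. The types, the first-match policy, and the 64-bit perceptual-hash fingerprint are assumptions for illustration, not Screencap's actual internals:

```typescript
// Hypothetical sketch of merging consecutive captures into one event.
type Capture = {
  timestamp: number;   // ms since epoch
  appName: string;
  windowTitle: string;
  fingerprint: bigint; // 64-bit perceptual hash of the screenshot (assumed)
};

// Count differing bits between two perceptual hashes.
function hammingDistance(a: bigint, b: bigint): number {
  let x = a ^ b;
  let count = 0;
  while (x > 0n) {
    count += Number(x & 1n);
    x >>= 1n;
  }
  return count;
}

// Same context + similar pixels => fold into the same event window.
function shouldMerge(prev: Capture, next: Capture, maxBitDiff = 10): boolean {
  return (
    prev.appName === next.appName &&
    prev.windowTitle === next.windowTitle &&
    hammingDistance(prev.fingerprint, next.fingerprint) <= maxBitDiff
  );
}
```

The `maxBitDiff` threshold is a made-up default; the real similarity threshold is one of the fixed parameters mentioned under Known Issues.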
The tray widget answers "what happened today?", with quick actions and different views available.
- Apps View - see which apps and websites dominated each time slot
- Categories - see which categories dominated
- Addictions - confirmed signals highlighted
Quick actions:
- Capture now — trigger an immediate capture
- Capture project progress — save a milestone with a caption (screenshots the active window and waits for a comment; I'd like to double down and add Loom-like video support one day)
To turn your day into a narrative
- Dayline visualization — see your entire day at a glance with category colors
- Breakdown metrics — active time, focus percentage, longest streak, top apps/sites/projects
- End of Day — optional LLM-powered daily recap (or the one from End of Day flow below)
- Manual journaling — write reflections, embed event screenshots, create custom sections
A guided ritual to close your day intentionally, walking you through:
- Summary — metrics and dayline visualization
- Progress Review — confirm or promote potential milestones
- Addictions Review — acknowledge or dismiss flagged events
- Write — compose custom sections with embedded event screenshots
| Summary | Write |
|---|---|
| ![]() | ![]() |
| Active time, focus, progress milestones, and addiction episodes. | Compose with embedded screenshots and custom sections. |
Define behaviors you want to track, then measure them. Bullet chess in my case :) But it's designed for games and porn too, so tricky events are well covered.
- Create rules — define what counts as an addiction
- AI detection — the LLM surfaces candidates based on content (either OCR text or the image itself, plus Accessibility context)
- Calendar view — see patterns across weeks
- Streak tracking — visualize consecutive clean days
- Episode timeline — drill into specific incidents with screenshots
A dedicated timeline for milestones and momentum, and the foundation for multiplayer collaboration.
- Automatic detection — AI identifies progress-worthy captures
- Manual milestones — `⌘⇧P` to capture and caption a moment
- Git integration — link local repositories to see commits alongside work sessions
- Multi-project filtering — track progress across all projects or focus on one
Sharing: projects can be shared with friends via encrypted rooms. Invite collaborators by username, and everyone sees each other's milestones in a unified timeline. All shared content is end-to-end encrypted; the server never sees your screenshots or captions. Just in case, you can also self-host and point the app at your own backend; see the screencap-website project for a guide.
So you can feel-not-ask each other:)
| Activity Feed | Friend's Day |
|---|---|
![]() |
![]() |
| See what friends are working on. Screenshots, captions, and context | View a friend's dayline, categories, and recent activity in real-time. |
The flow is simple:
- Choose a username
- Add friends
- Share Day Wrapped — let friends see your dayline in real-time
- Shared projects — invite friends to project rooms, see their milestones
- Comments — react to shared events with threaded messages
- Activity feed — see what friends are working on
Screencap extracts rich context from your active window:
| Provider | What it captures |
|---|---|
| System Events | Frontmost app, window title, fullscreen state |
| Safari | Current URL, page title |
| Chromium browsers | Current URL, page title (Chrome, Arc, Brave, Edge, etc.) |
| Spotify | Track name, artist, album art |
| Cursor/VS Code | Workspace path, project name |
With Accessibility and Automation permissions, we can extract fairly precise context.
When enabled, events go through a multi-step classification pipeline:
- Cache reuse — instant match by fingerprint + context
- Local retrieval — match against your own history
- Local LLM — Ollama, LM Studio, or any OpenAI-compatible server
- Cloud text — OpenRouter with context + OCR (no images)
- Cloud vision — OpenRouter with screenshots (if enabled)
- Fallback — baseline classification from context alone
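The fall-through order above can be sketched as an ordered list of classifiers where each step may decline. Names and types here are illustrative, not Screencap's actual internals:

```typescript
// Hypothetical sketch of a fall-through classification pipeline:
// try each step in order; the first one that answers wins.
type Classification = { category: string; source: string };
type Step = (ctx: string) => Promise<Classification | null>;

async function classify(
  ctx: string,
  steps: Step[],
  fallback: Classification,
): Promise<Classification> {
  for (const step of steps) {
    const result = await step(ctx); // a step declines by returning null
    if (result) return result;
  }
  return fallback; // baseline classification from context alone
}
```

With steps ordered cache → local retrieval → local LLM → cloud text → cloud vision, cheaper steps short-circuit the expensive ones.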
Classification output:
- Category — Work, Study, Leisure, Social, Chores, Unknown
- Project — detected project name
- Caption — human-readable description
- Addiction candidate — potential matches against your rules
- Progress detection — milestone-worthy content
Fine-grained control over capture and classification:
| Rule | Effect |
|---|---|
| Skip capture | Don't screenshot this app/website at all |
| Skip AI | Capture locally but never send to LLM |
| Force category | Always assign Work/Study/etc. |
| Force project | Always tag captures with a project |
Create rules from any event card or in Settings -> Automation.
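A rule model like the table above might look like this sketch. The types and the first-match-wins policy are my assumptions, not the app's actual implementation:

```typescript
// Hypothetical per-app / per-website automation rule model.
type Rule = {
  match: { app?: string; urlHost?: string }; // what the rule targets
  effect:
    | 'skip-capture'                // don't screenshot at all
    | 'skip-ai'                     // capture locally, never send to LLM
    | { forceCategory: string }     // always assign Work/Study/etc.
    | { forceProject: string };     // always tag with a project
};

type CaptureContext = { app: string; urlHost?: string };

// First matching rule wins (illustrative policy).
function applyRules(ctx: CaptureContext, rules: Rule[]): Rule['effect'] | null {
  for (const rule of rules) {
    const appOk = rule.match.app === undefined || rule.match.app === ctx.app;
    const hostOk = rule.match.urlHost === undefined || rule.match.urlHost === ctx.urlHost;
    if (appOk && hostOk) return rule.effect;
  }
  return null;
}
```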
Local-first overall; for LLM classification, both local and remote (OpenRouter/OpenAI) options are available.
- SQLite database under `~/Library/Application Support/Screencap/`
- All screenshots (thumbnails + originals)
- Settings and encryption keys (Keychain-encrypted)
| Feature | Data sent | Where |
|---|---|---|
| Cloud AI | Context + OCR text | OpenRouter/OpenAI |
| Cloud Vision | Screenshot images | OpenRouter/OpenAI (if enabled) |
| Friends/Sharing | Encrypted events | Backend (default: screencaping.com; can be self-hosted) |
| Auto-updates | Version check | GitHub Releases |
All shared content (screenshots, captions, chat) is encrypted on your device before upload.
- Device identity — Ed25519 signing key + X25519 key agreement key
- Room keys — 32-byte secrets, per-recipient-device encrypted envelopes
- Event encryption — AES-256-GCM with keys derived via HKDF
- Chat encryption — DMs use X25519 shared secret; rooms use room key
The server sees ciphertext, metadata (timestamps, usernames, project names), and encrypted blobs. It cannot read content.
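As a rough sketch of the event-encryption layer using Node's built-in crypto: AES-256-GCM with a per-event key derived from the 32-byte room secret via HKDF. The function names and the HKDF info label are assumptions for illustration; see the crypto spec for the real construction:

```typescript
import { hkdfSync, randomBytes, createCipheriv, createDecipheriv } from 'node:crypto';

// Derive a per-event AES key from the 32-byte room secret (info label is hypothetical).
function deriveEventKey(roomSecret: Buffer, eventId: string): Buffer {
  return Buffer.from(hkdfSync('sha256', roomSecret, Buffer.alloc(0), `event:${eventId}`, 32));
}

function encryptEvent(roomSecret: Buffer, eventId: string, plaintext: Buffer) {
  const key = deriveEventKey(roomSecret, eventId);
  const iv = randomBytes(12); // 96-bit GCM nonce, fresh per event
  const cipher = createCipheriv('aes-256-gcm', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptEvent(
  roomSecret: Buffer,
  eventId: string,
  box: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
): Buffer {
  const key = deriveEventKey(roomSecret, eventId);
  const decipher = createDecipheriv('aes-256-gcm', key, box.iv);
  decipher.setAuthTag(box.tag); // GCM tag check rejects tampered ciphertext
  return Buffer.concat([decipher.update(box.ciphertext), decipher.final()]);
}
```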
See Security & Privacy: Sharing and E2EE Crypto Spec.
- macOS 13+ (Ventura or later)
- Screen Recording permission (required)
- Accessibility permission (recommended)
- Automation permissions (recommended)
- Grab the latest DMG from Releases
- Open the DMG and drag `Screencap.app` to Applications
- Launch — the onboarding wizard guides you through permissions and setup
| Permission | Purpose | Required? |
|---|---|---|
| Screen Recording | Capture screenshots | Yes |
| Accessibility | Read window titles | Recommended |
| Automation -> System Events | Identify focused window | Recommended |
| Automation -> Browsers | Read URLs from Safari/Chrome/etc. | Recommended |
| Automation -> Media apps | Capture Spotify track info | Optional |
The onboarding wizard walks you through:
- Screen Recording — grant the core permission
- Accessibility — enable rich window context
- Automation — allow per-app context extraction
- AI Setup — choose Cloud, Local, or Disabled
- First Capture — see what Screencap captures
After onboarding:
- Capture interval is set in Settings -> Capture
- Retention period is set in Settings -> Data
- Global shortcuts are customizable in Settings -> Capture -> Shortcuts
| Action | Default | Notes |
|---|---|---|
| Command palette | `⌘K` | Quick access to any view or action |
| Capture now | `⌘⇧O` | Immediate screenshot |
| Capture project progress | `⌘⇧P` | Opens caption popup for milestone |
| End of Day | `⌘⇧E` | Open the journal flow |
All shortcuts are configurable in Settings -> Capture -> Shortcuts.
Screencap uses dominant activity scheduling:
- Sample context every second — which app, window, URL is in focus
- Wait for stability — context must be stable for ~10 seconds
- Capture candidate — take a multi-display screenshot
- Keep the dominant — at the end of the interval, save the most representative capture
- Merge similar events — captures with same context + similar pixels become one event
This means:
- Quick app switches don't produce captures
- You get one clean event per sustained activity
- Storage usage stays reasonable
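The "keep the dominant" step can be sketched as a majority vote over per-second context samples; names and types here are hypothetical:

```typescript
// Illustrative "dominant activity" selection: given one context sample per
// second, keep the context that dominated the interval.
type Sample = { second: number; contextKey: string }; // e.g. "Cursor|screencap"

function dominantContext(samples: Sample[]): string | null {
  const counts = new Map<string, number>();
  for (const s of samples) {
    counts.set(s.contextKey, (counts.get(s.contextKey) ?? 0) + 1);
  }
  let best: string | null = null;
  let bestCount = 0;
  for (const [key, count] of counts) {
    if (count > bestCount) {
      best = key;
      bestCount = count;
    }
  }
  return best; // the capture from this context is the one saved
}
```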
The overall capture algorithm is still rough and has plenty of room for improvement.
Screencap uses OpenRouter by default, but any OpenAI-compatible API works
- Get an API key from openrouter.ai/keys or platform.openai.com
- Settings -> AI -> Cloud LLM -> paste your key
- Choose a model (default: `openai/gpt-5-mini`)
- Toggle Allow vision uploads if you want image-based classification
OpenRouter gives you access to many models (Claude, GPT-5, Gemini, open-source) through one API key, which is why it's recommended.
- Run a local OpenAI-compatible server
- Settings -> AI -> Local LLM -> enable and configure:
- Base URL: `http://localhost:11434/v1` (Ollama) or `http://localhost:1234/v1` (LM Studio)
- Model: the model name from `/v1/models`
- Click Test to verify
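The Test step boils down to hitting the server's `/v1/models` endpoint. This is a hypothetical helper, not the app's code, and it assumes the Base URL already ends in `/v1`:

```typescript
// Build the models-listing URL for an OpenAI-compatible server
// (baseUrl is expected to already end in /v1).
function modelsUrl(baseUrl: string): string {
  return `${baseUrl.replace(/\/+$/, '')}/models`;
}

// Fetch model ids from any OpenAI-compatible server (Ollama, LM Studio, ...).
async function listModels(baseUrl: string): Promise<string[]> {
  const res = await fetch(modelsUrl(baseUrl));
  if (!res.ok) throw new Error(`server responded ${res.status}`);
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => m.id);
}
```

Any of the returned ids can be pasted into the Model field.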
See Local LLM Guide for detailed setup.
Settings -> AI -> Classification -> Off
Captures still happen, but no LLM calls. Events get basic category from context only.
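A context-only fallback might look like a keyword heuristic over the app name and window title. The hint table below is purely illustrative, not Screencap's actual rules:

```typescript
// Sketch of a context-only fallback classifier (no LLM calls).
const CATEGORY_HINTS: Record<string, string[]> = {
  Work: ['code', 'vscode', 'cursor', 'terminal', 'figma'],
  Study: ['coursera', 'wikipedia', 'arxiv'],
  Leisure: ['youtube', 'netflix', 'chess'],
  Social: ['slack', 'discord', 'twitter'],
};

function baselineCategory(appName: string, windowTitle: string): string {
  const haystack = `${appName} ${windowTitle}`.toLowerCase();
  for (const [category, hints] of Object.entries(CATEGORY_HINTS)) {
    if (hints.some((h) => haystack.includes(h))) return category;
  }
  return 'Unknown';
}
```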
- Open the tray popup -> Social tab
- Choose a username (alphanumeric, unique)
- Your device generates E2EE keys automatically
- Click + in the Social tab
- Enter their username
- They accept your request
Settings -> Social -> Day Wrapped Sharing -> Enable
Friends see your dayline in their feed. You control what's shared:
- Categories (always)
- App names (optional)
- Addiction flags (optional)
- Open Projects -> select a project -> Share
- Invite friends by username
- They accept the room invite and see your milestones
Screencap's backend is open source. Run your own:
- Settings -> System -> Custom Backend -> Enable
- Enter your backend URL
- See Self-Hosted Backend Guide
Your data, your server, full control.
- macOS
- Node.js 20+
- npm
git clone https://github.com/yahorbarkouski/screencap.git
cd screencap
npm install
npm run dev
npm test
npm run build
npm run preview
npx electron-builder --config electron-builder.yml

screencap/
├── electron/
│ ├── main/ # Main process
│ │ ├── app/ # Window, tray, popup, lifecycle
│ │ ├── features/ # Capture, AI, context, social, sync
│ │ ├── infra/ # Settings, logging, storage
│ │ └── ipc/ # Secure IPC handlers
│ ├── preload/ # Context bridge (window.api)
│ └── shared/ # IPC channels, shared types
├── src/ # React renderer
│ ├── components/ # UI components
│ ├── hooks/ # React hooks
│ ├── lib/ # Utilities
│ └── stores/ # Zustand stores
└── docs/ # Documentation
| Service | Purpose |
|---|---|
| `CaptureService` | Screenshot capture, fingerprinting |
| `ContextService` | App/window/URL/media extraction |
| `ClassificationService` | AI pipeline orchestration |
| `EventService` | Event creation, merging, storage |
| `IdentityService` | E2EE key management |
| `RoomsService` | Shared project rooms |
| `SocialFeedService` | Friend activity publishing |
Read Security Practices before adding new IPC handlers or file access surfaces.
Key principles:
- Treat IPC as a security boundary
- Validate all renderer inputs with Zod
- Use the `secureHandle` wrapper for new handlers
- Prefer allowlists over blocklists
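A `secureHandle`-style wrapper might look like this sketch. The schema interface mirrors Zod's `parse` contract, but the actual signature in the codebase may differ:

```typescript
// Hypothetical IPC wrapper: validate the renderer's payload before it
// reaches the handler, treating IPC as a security boundary.
type Schema<T> = { parse(input: unknown): T }; // zod-compatible subset

type Handler<T, R> = (payload: T) => Promise<R> | R;

function secureHandle<T, R>(schema: Schema<T>, handler: Handler<T, R>) {
  return async (rawPayload: unknown): Promise<R> => {
    const payload = schema.parse(rawPayload); // throws on invalid renderer input
    return handler(payload);
  };
}
```

Because validation happens before the handler body runs, a compromised renderer can only ever deliver payloads the schema allows.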
This project is in beta. Expect rough edges, breaking changes, and behaviors that may surprise you.
The classification pipeline can make up to two LLM calls per screenshot (one for general classification, another for addiction verification). There's no rate limiting or cost tracking; if you capture frequently, your API bill can grow fast. Small models work well, but be mindful of costs for now.
What's needed: smarter classification logic, caching improvements, and optional cost caps.
Retention cleanup deletes old data based on time thresholds, not activity relevance. Important events get purged alongside noise if they're past the retention window.
What's needed: activity-aware or user-marked retention. A genuinely interesting problem.
Many parameters are fixed at values chosen more theoretically than empirically:
- Fingerprint similarity thresholds
- Capture stability window (10s)
- Merge gap for events (~2× capture interval)
- HQ image retention (12 hours)
These may not fit all workflows and will take real-world tuning to make sense or become dynamic.
The entire context system uses AppleScript, macOS Vision, and Apple-specific APIs.
If any of these limitations bother you, PRs are very welcome.
MIT