agent-athens

AI-curated cultural events calendar for Athens, Greece. Transforms daily event newsletters and events scraped from the internet into SEO/GEO-optimized static pages designed for AI answer engines (ChatGPT, Perplexity, Claude), future A2A assistants, and humans.

Status

  • Phase: Live Prototype (Developer-Only)
  • Current Milestone: Available only to project developers
  • Next Milestone: Make available to search engines
  • Future Milestone: Release to a small group of first users for feedback
  • Mode: Static Site (Netlify CDN)
  • Pages: Dynamic (currently ~315 pages based on active events)
  • Location: Athens, Greece (EET/UTC+2)
  • Deployment: Netlify (auto-deploy on git push)

Latest Update (Nov 3, 2025): ✅ Full Automation Active - Running Daily at 8 AM

  • ✅ Email ingestion integrated (Priority 0 - fetches newsletters from Gmail)
  • ✅ Web scraping automation (3-step pipeline with frequency-based scheduling)
  • ✅ Cron job deployed (daily 8 AM Athens time)
  • ✅ State tracking & duplicate prevention
  • ✅ Automated archiving & logging
  • See docs/EMAIL-INGESTION-INTEGRATION.md and docs/CRON-AUTOMATION-SETUP.md

The Vision

Start with Athens. Prove the model. Expand to agent-barcelona, agent-berlin, agent-cities. Become the global cultural events platform for the AI era, monetized through affiliate revenue (tickets, hotels, restaurants) and agent referral networks where AI agents earn commission on bookings they drive.

The Ask

In the reputation economy where AI trust = revenue, agent-athens is positioned to be the source that AI engines cite first. We're building the infrastructure for affiliate marketing in the post-LLM world.


How It Works: Complete System Flow

Phase 1: Event Collection (Daily, ~8:00 AM Athens)

1A. Email Ingestion (✅ INTEGRATED - Automated newsletter fetching):

Status: Integrated into orchestrator as Priority 0 (runs before web scraping)

Workflow:

  1. Connect to Gmail via IMAP ([email protected])
  2. Fetch unread newsletter emails from INBOX
  3. Save emails to data/emails-to-parse/ for Claude Code parsing
  4. Mark as processed in data/processed-emails.json (prevents reprocessing)
  5. Archive emails (move from INBOX → All Mail)
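
A minimal sketch of this fetch-and-archive loop, assuming the imapflow npm package and illustrative .env variable names; the actual src/ingest/email-ingestion.ts may be structured differently:

import { ImapFlow } from "imapflow";

// Credentials come from .env; the variable names here are illustrative
const client = new ImapFlow({
  host: "imap.gmail.com",
  port: 993,
  secure: true,
  auth: { user: process.env.EMAIL_USER!, pass: process.env.EMAIL_APP_PASSWORD! },
});

await client.connect();
const lock = await client.getMailboxLock("INBOX");
try {
  // Step 2: find unread newsletter emails (UIDs stay stable while we move messages)
  const uids = (await client.search({ seen: false }, { uid: true })) || [];
  for (const uid of uids) {
    // Step 3: save the raw message for Claude Code parsing
    const msg = await client.fetchOne(uid, { source: true, envelope: true }, { uid: true });
    if (!msg || !msg.source) continue;
    await Bun.write(`data/emails-to-parse/${uid}.eml`, msg.source);
    // Step 4 (Message-ID tracking) is sketched under Features below
    // Step 5: archive (the mailbox name may vary with Gmail locale settings)
    await client.messageMove(uid, "[Gmail]/All Mail", { uid: true });
  }
} finally {
  lock.release();
  await client.logout();
}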

Features:

  • ✅ Frequency-based scheduling: daily (integrated into orchestrator)
  • ✅ Duplicate prevention: Tracks Message-IDs to avoid reprocessing
  • ✅ Email archiving: Keeps inbox clean, creates audit trail
  • ✅ Timeout handling: 60-second timeout with 2 retries
  • ✅ State tracking: Included in orchestrator state management
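
The duplicate-prevention feature can be as simple as a persisted set of Message-IDs; this is a sketch assuming data/processed-emails.json holds a flat JSON array (the real file layout may differ):

// data/processed-emails.json: a JSON array of Message-IDs already handled
const STATE_FILE = "data/processed-emails.json";

async function loadProcessedIds(): Promise<Set<string>> {
  const file = Bun.file(STATE_FILE);
  return (await file.exists()) ? new Set<string>(await file.json()) : new Set();
}

async function markProcessed(ids: Set<string>, messageId: string): Promise<void> {
  ids.add(messageId);
  await Bun.write(STATE_FILE, JSON.stringify([...ids], null, 2));
}

// Inside the fetch loop:
//   if (seen.has(msg.envelope.messageId)) continue;  // already processed
//   ...save the email...
//   await markProcessed(seen, msg.envelope.messageId);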

Scripts:

  • scripts/ingest-emails.ts - Standalone email fetching
  • scripts/parse-emails.ts - Helper for Claude Code parsing workflow
  • src/ingest/email-ingestion.ts - Core email fetching logic

Usage:

# Standalone (test only)
bun run scripts/ingest-emails.ts           # Fetch emails
bun run scripts/ingest-emails.ts --dry-run # Preview

# Integrated (production)
bun run scripts/scrape-all-sources.ts      # Runs email ingestion first (Step 0)

Parsing Workflow:

  1. Emails saved to data/emails-to-parse/
  2. Ask Claude Code: "Parse emails in data/emails-to-parse/ and add events to database"
  3. Claude Code uses FREE tool_agent (no API costs!)
  4. Events imported to database with auto-deduplication

Configuration: config/orchestrator-config.json → email_ingestion section

1B. Web Scraping (✅ AUTOMATED - Standalone orchestrator with frequency scheduling):

Orchestrator: bun run scripts/scrape-all-sources.ts

Features:

  • ✅ Frequency-based scheduling: daily/weekly/monthly (only scrapes when due)
  • ✅ Timeout handling: kills runaway processes (configurable per source)
  • ✅ Retry logic: exponential backoff (2s, 4s, 8s)
  • ✅ State tracking: data/scrape-state.json (timestamps, counts, errors)
  • ✅ Rate limiting: configurable delays between sources
  • ✅ CLI modes: --force, --source=X, --dry-run
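
A sketch of how the timeout and retry features might combine, assuming each pipeline step is spawned as a child process (the orchestrator's actual helpers may differ):

// Run a pipeline step with a hard timeout, retrying on failure with
// exponential backoff: 2s, 4s, 8s
async function runWithRetry(cmd: string[], timeoutMs: number, retries = 3): Promise<void> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    const proc = Bun.spawn(cmd, { stdout: "inherit", stderr: "inherit" });
    const killer = setTimeout(() => proc.kill(), timeoutMs); // kill runaway process
    const code = await proc.exited;
    clearTimeout(killer);
    if (code === 0) return;
    if (attempt < retries) {
      const delay = 2000 * 2 ** (attempt - 1); // 2s, 4s, 8s
      console.warn(`Attempt ${attempt} failed (exit ${code}), retrying in ${delay / 1000}s`);
      await Bun.sleep(delay);
    }
  }
  throw new Error(`${cmd.join(" ")} failed after ${retries} attempts`);
}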

Configuration:

  • config/orchestrator-config.json - Pipeline config (timeout, retry, frequency, priority)
  • config/scrape-list.json - Scraper config (sites, URLs, user agent)
  • Currently configured: viva.gr (daily), more.com (daily), gazarte.gr (weekly)

Pipeline (3-step execution per source):

  1. Scraper: Python scraper (scripts/scrape-all-sites.py --site X)
    • Crawls website URL
    • Saves HTML to data/html-to-parse/
  2. Parser: Python parser (scripts/parse_tier1_sites.py)
    • Extracts events from HTML
    • Saves JSON to data/parsed-events/
  3. Importer: Bun importer (scripts/import-X-events.ts)
    • Imports events to database
    • Auto-deduplicates by hash(title+date+venue)
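
The dedup key in step 3 can be a plain SHA-256 over the normalized fields; a sketch (the truncated ID length is illustrative):

import { createHash } from "node:crypto";

// Stable event ID: identical title+date+venue always maps to the same row
function eventId(title: string, date: string, venue: string): string {
  const key = [title, date, venue]
    .map((s) => s.trim().toLowerCase())
    .join("|");
  return createHash("sha256").update(key).digest("hex").slice(0, 16);
}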

Usage:

# Run all sources that are due (frequency-based)
bun run scripts/scrape-all-sources.ts

# Force all sources (ignore frequency)
bun run scripts/scrape-all-sources.ts --force

# Run specific source
bun run scripts/scrape-all-sources.ts --source=viva.gr

# Preview without executing
bun run scripts/scrape-all-sources.ts --dry-run

Automation: ✅ ACTIVE - Cron job running daily at 8 AM

Cron Schedule:

0 8 * * * cd /Users/chrism/Project\ with\ Claude/AgentAthens/agent-athens && /Users/chrism/.bun/bin/bun run scripts/scrape-all-sources.ts >> logs/scrape-$(date +\%Y\%m\%d).log 2>&1

Documentation: See docs/EMAIL-INGESTION-INTEGRATION.md, docs/CRON-AUTOMATION-SETUP.md, and docs/INTEGRATION-COMPLETE.md

1C. Database Upsert (Deduplication & Storage):

  1. Normalize all collected events (from email + web) to Schema.org format
  2. For each event:
    • Generate event ID (hash of title+date+venue)
    • Check if ID exists in database
    • IF exists: UPDATE (description, price, URL changes)
    • IF new: INSERT
  3. Log results summary
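
With bun:sqlite, step 2's check/update/insert collapses into a single upsert statement; a sketch in which the column names are assumptions based on the schema described in this README:

import { Database } from "bun:sqlite";

const db = new Database("data/events.db");

// One statement covers both branches: INSERT when the ID is new,
// UPDATE the mutable fields when it already exists
const upsert = db.prepare(`
  INSERT INTO events (id, title, date, venue, description, price, url)
  VALUES ($id, $title, $date, $venue, $description, $price, $url)
  ON CONFLICT(id) DO UPDATE SET
    description = excluded.description,
    price       = excluded.price,
    url         = excluded.url,
    updated_at  = CURRENT_TIMESTAMP
`);

// usage: upsert.run({ $id: eventId(e.title, e.date, e.venue), $title: e.title, ... })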

Output:

📊 Database Upsert Results:
  ✅ X new events inserted
  🔄 Y events updated (price/description changes)
  ⏭️  Z duplicates skipped (already current)

Total: a dynamic number of new raw events per day; realistically we don't expect hundreds of new events every day.


Phase 2: Event Enrichment (Daily, ~8:05 AM)

AI Description Generator (bun run scripts/enrich-events.ts):

For each event without a full description:

  1. Build enrichment prompt:

    • Include event metadata (title, type, venue, date, genre)
    • Request exactly 400 words (±20 acceptable)
    • Emphasize cultural context, artist background, what makes it special
    • Avoid marketing fluff, focus on authentic storytelling
  2. Call tool_agent (TODO: schedule this via the Agent SDK so it runs standalone, the way we currently run it manually in Claude Code through the internal tool_agent):

    • Generate compelling narrative description
    • Include practical details naturally (time, location, price)
    • Mention Athens neighborhood connections when relevant
    • Never fabricate facts
  3. Update database:

    • Store in full_description column
    • Update updated_at timestamp
    • Word count validation (must be ~400 words)
  4. Rate limiting:

    • 2-second pause between tool_agent calls
    • Handle 429 errors gracefully (wait 30s)
    • Log progress and errors
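
A sketch of the enrichment loop described above; getUnenrichedEvents, buildPrompt, saveDescription, and generateDescription are hypothetical stand-ins (the last one for the tool_agent call, which is not a public API):

for (const event of getUnenrichedEvents()) {
  try {
    // generateDescription() stands in for the tool_agent call
    const description = await generateDescription(buildPrompt(event));

    // Word count validation: must land within 400 ± 20
    const words = description.trim().split(/\s+/).length;
    if (Math.abs(words - 400) > 20) {
      console.warn(`${event.title}: ${words} words, outside 400±20 - retrying next run`);
      continue;
    }

    saveDescription(event.id, description); // writes full_description + updated_at
    await Bun.sleep(2000);                  // 2-second pause between calls
  } catch (err: any) {
    if (err?.status === 429) {
      await Bun.sleep(30_000); // rate limited: wait 30s before continuing
    } else {
      console.error(`Enrichment failed for ${event.id}:`, err);
    }
  }
}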

Output: All events have rich 400-word descriptions

Cost: FREE when using tool_agent (critical: this step must not incur API costs)


Phase 3: Database Cleanup (Daily, ~8:10 AM)

Automatic Event Lifecycle (within generate-site.ts):

  1. Delete expired events:

    • Remove events older than 1 day (past events)
    • Keep today/future events only
    • Maintains database size (~300-500 event categories typical)
  2. Smart date handling:

    • "Today" automatically updates daily
    • "Tomorrow" becomes "today" (no manual updates)
    • Past events disappear automatically
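
The cleanup reduces to a single DELETE; a sketch assuming a date column in YYYY-MM-DD format:

import { Database } from "bun:sqlite";

const db = new Database("data/events.db");

// "Today" in Athens time, formatted YYYY-MM-DD (en-CA gives ISO date order)
const today = new Date().toLocaleDateString("en-CA", { timeZone: "Europe/Athens" });

const past = db.query("SELECT COUNT(*) AS n FROM events WHERE date < ?").get(today) as { n: number };
db.prepare("DELETE FROM events WHERE date < ?").run(today);
console.log(`Removed ${past.n} past events; only current/future events remain`);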

Output: Clean database with only current/future events


Phase 4: Static Site Generation (Daily, ~8:15 AM)

Combinatorial Page Generator (bun run build):

  1. Load events from database:

    const allEvents = getAllEvents();
    console.log(`πŸ“₯ Loaded ${allEvents.length} events`);
  2. Generate combinatorial pages (Type × Time × Price × Genre = dynamic number of pages, currently ~315; see the sketch after this list):

    Core time pages (dynamic count):

    • /today, /tomorrow, /this-week, /this-weekend, /this-month, /next-month, /all-events
    • TODO: Consider if we also need pages with specific values, such as /november-2025 (which currently appears as "next-month")

    Type pages (dynamic count: types × time ranges):

    • /concert-today, /exhibition-this-week, /cinema-this-weekend, etc.

    Price pages (dynamic count: price filters × time ranges):

    • /open-today, /with-ticket-this-week, etc.
    • Note: Using "open" and "with-ticket" terminology instead of "free/paid"

    Type + Price pages (dynamic count: types × prices × time ranges):

    • /open-concert-today, /with-ticket-exhibition-this-week, etc.

    Genre pages (dynamic count: genres × time ranges × prices):

    • /jazz-concert-today, /open-jazz-concert-this-week, etc.
  3. For each page:

    • Filter events matching criteria (type, time, price, genre)
    • Generate HTML with Schema.org markup (Event + CollectionPage)
    • Generate JSON API (same data, different format)
    • Handle empty pages gracefully ("0 events found, check back tomorrow")
    • Add cross-links to related pages
  4. Generate discovery files:

    • llms.txt - AI agent discovery (what this site offers) - TODO: Confirm if this is a good practice
    • robots.txt - Search engine crawling rules
    • sitemap.xml - All URLs for search engines (dynamic count)
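
As referenced in step 2, the combinatorial expansion is just nested loops over the dimensions listed under Page Generation Rules below; a sketch (genre handling omitted):

const TYPES = ["concert", "exhibition", "cinema", "theater", "performance", "workshop"];
const TIMES = ["today", "tomorrow", "this-week", "this-weekend", "this-month", "next-month", "all-events"];
const PRICES = ["open", "with-ticket"];

type PageSpec = { url: string; filters: Record<string, string> };
const pages: PageSpec[] = [];

for (const time of TIMES) {
  pages.push({ url: `/${time}`, filters: { time } });                    // core time pages
  for (const price of PRICES) {
    pages.push({ url: `/${price}-${time}`, filters: { price, time } }); // price pages
  }
  for (const type of TYPES) {
    pages.push({ url: `/${type}-${time}`, filters: { type, time } });   // type pages
    for (const price of PRICES) {
      pages.push({ url: `/${price}-${type}-${time}`, filters: { price, type, time } });
    }
  }
}
// Genre pages extend the same nesting with a genre dimension per type,
// which is where the count grows toward the current ~315 pages.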

Output: HTML pages + JSON APIs + 3 discovery files (total size ~4.1 MB)

Build time: ~2-5 seconds (Bun is fast!)


Phase 5: Deployment (Daily, ~8:20 AM)

Git + Netlify Auto-Deploy:

  1. Commit changes:

    git add dist/ data/events.db
    git commit -m "chore: Daily update $(date +%Y-%m-%d)
    
    - X new events added
    - Y events enriched with AI descriptions
    - Z past events removed
    
    🤖 Automated daily update"
    git push origin main
  2. Netlify detects push:

    • Triggers build (instant - just copies files)
    • Atomic deployment (zero downtime)
    • Global CDN distribution
    • SSL/HTTPS automatic
  3. Site goes live:

Deploy time: ~30 seconds (Netlify build + CDN propagation)

Total pipeline: ~20 minutes (collection → enrichment → generation → deployment)


Daily Timeline (Athens Time / ET)

Note: Designed for Athens (EET/EEST) but configurable for any timezone.

08:00 AM - Email ingestion + Web scraping
08:05 AM - AI enrichment (using tool_agent)
08:10 AM - Database cleanup
08:15 AM - Site generation (dynamic page count)
08:20 AM - Git commit + Netlify deploy
08:25 AM - ✅ Live site updated

Current (Partially Automated): Event collection runs daily via cron (see Phase 1); enrichment, site generation, and deployment are still triggered manually with bun run build. "Manual" here can mean asking Claude Code to run these tasks; the Agent SDK will eventually run them standalone when called through the SDK.

Future (Fully Automated): macOS launchd (or cron) triggers the full pipeline daily at 8 AM.


Key Decision Points

  1. Do we have new events? → Yes: Enrich with AI descriptions
  2. Are events enriched? → No: Run scripts/enrich-events.ts
  3. Are there past events? → Yes: Auto-cleanup on site generation
  4. Page has 0 events? → Still generate (show "0 events found" message)
  5. Database has changes? → Regenerate ALL pages (ensures consistency)
  6. Deployment ready? → Git push → Netlify auto-deploys

Human Setup Requirements

Before you start coding, you'll need to set up a few things manually. These are one-time setup tasks that enable you to own and operate the production infrastructure.

1. Set Up GitHub Repository (Required)

Why: You'll own the codebase and control deployments.

Steps:

  1. Fork or transfer the repository to your GitHub account:
    • Option A: Fork: https://github.com/ggrigo/agent-athens → Click "Fork"
    • Option B: Transfer: Repo owner transfers ownership to you (Settings → Transfer)
  2. Clone YOUR repository:
    git clone https://github.com/YOUR-USERNAME/agent-athens.git
    cd agent-athens
  3. You now control the main branch and all deployments

2. Set Up Netlify for Production (Required)

Why: You'll own the production deployment and domain.

Steps:

  1. Go to netlify.com and sign up (free tier is plenty)
  2. Click "Add new site" → "Import an existing project"
  3. Connect to your GitHub repository (YOUR-USERNAME/agent-athens)
  4. Configure build settings:
    • Build command: bun run build (or leave empty - we commit dist/)
    • Publish directory: dist
    • Branch: main
  5. Click "Deploy site"
  6. Your production site is live at: agent-athens.netlify.app (or custom domain)

Auto-Deploy Setup:

  • Netlify will auto-deploy every time you push to main
  • Or use CLI: netlify deploy --prod --dir=dist

Local CLI Setup:

npm install -g netlify-cli
netlify login
netlify link  # Links to YOUR production site
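
Because dist/ is committed, netlify.toml can stay minimal; an illustrative version (check the repository's actual file):

[build]
  publish = "dist"
  # No build command needed: dist/ is committed to the repository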

3. Create a Newsletter Subscription Email (Required for Email Ingestion)

Why: To receive Athens event newsletters for email ingestion (Phase 1A).

Steps:

  1. Create a new Gmail account (e.g., [email protected])
  2. Enable IMAP in Gmail:
    • Settings → Forwarding and POP/IMAP → Enable IMAP
  3. Generate an App Password:
    • Google Account → Security → 2-Step Verification → App Passwords
    • Select "Mail" and generate a 16-character password
  4. Subscribe to Athens event newsletters:
  5. Save your credentials to .env:
    cp .env.example .env
    # Edit .env with your email and app password

Note: If email credentials are not configured, the pipeline still runs web scraping on its own (no credentials needed for scraping).
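
An illustrative .env for when email ingestion is configured (variable names are assumptions; follow .env.example):

# Gmail IMAP credentials for newsletter ingestion (names are illustrative)
EMAIL_USER=your-newsletter-inbox@gmail.com
EMAIL_APP_PASSWORD=abcdefghijklmnop   # the 16-character App Password from step 3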

4. Get Claude Code (Required for Interactive AI Tasks)

Why: HTML parsing and event enrichment use Claude Code tool_agent (free with your subscription).

Steps:

  1. Install Claude Code CLI: Follow instructions at claude.ai/code
  2. Authenticate: claude login
  3. You're ready to use interactive AI features (HTML parsing, enrichment)

What you'll use it for:

  • Parsing HTML event pages into structured JSON
  • Generating 400-word event descriptions
  • Database queries and transformations

Quick Start

1. Prerequisites

# Install Bun (fast JavaScript runtime)
curl -fsSL https://bun.sh/install | bash

# Verify installation
bun --version

2. Setup Project

# Clone repository
git clone https://github.com/ggrigo/agent-athens.git
cd agent-athens

# Install dependencies
bun install

3. Initialize Database

# Create SQLite database with schema
bun run scripts/init-database.ts

# Import sample events (optional)
bun run scripts/import-scraped-events.ts

4. Enrich Events with AI

# Generate 400-word descriptions for all events
bun run scripts/enrich-events.ts

# Or enrich in batches (need to decide on tool_agent capacity)
bun run scripts/enrich-5-events.ts

5. Generate Static Site

# Build all pages (dynamic count)
bun run build

# Output: dist/ directory with HTML + JSON files

6. Deploy to Netlify

# First time: Connect to Netlify
netlify login
netlify init

# Deploy (or just git push if auto-deploy is configured)
bun run deploy
# Or manually: netlify deploy --prod --dir=dist

Project Structure

Note: The following is offered as an example architecture.

agent-athens/
├── src/                        # Source code
│   ├── generate-site.ts        # Main site generator (combinatorial logic)
│   ├── types.ts                # TypeScript types (Event, Filters, etc.)
│   ├── db/                     # Database layer
│   │   └── database.ts         # SQLite queries (insert, update, get events)
│   ├── templates/              # HTML generation
│   │   └── page.ts             # Page renderer (Schema.org markup)
│   └── utils/                  # Utilities
│       ├── normalize.ts        # Event normalization (Schema.org format)
│       ├── filters.ts          # Event filtering (type, time, price, genre)
│       └── urls.ts             # URL building (/open-jazz-concert-today)
├── scripts/                    # Standalone scripts
│   ├── init-database.ts        # Database initialization
│   ├── scrape-events.ts        # Web scraping
│   ├── import-scraped-events.ts # Import JSON events to DB
│   ├── enrich-events.ts        # AI description generation (all events)
│   └── enrich-5-events.ts      # AI enrichment (batched)
├── data/                       # Data files (gitignored except .sql)
│   ├── events.db               # SQLite database (gitignored)
│   ├── events.sql              # Database schema
│   ├── scraped-events.json     # Raw scraped events (gitignored)
│   └── unenriched-events.json  # Events pending enrichment (gitignored)
├── dist/                       # Generated static site (committed for Netlify)
│   ├── *.html                  # HTML pages (dynamic count)
│   ├── api/*.json              # JSON API endpoints (dynamic count)
│   ├── llms.txt                # AI agent discovery
│   ├── robots.txt              # Search engine rules
│   └── sitemap.xml             # Search engine sitemap
├── logs/                       # Runtime logs (gitignored)
├── netlify.toml                # Netlify configuration
├── package.json                # Bun dependencies
├── tsconfig.json               # TypeScript configuration
├── .gitignore                  # Git exclusions
├── README.md                   # This file
├── PROJECT_DESCRIPTION.md      # Full technical overview
├── ELEVATOR_PITCH.md           # 30-second + 2-minute pitches
├── IMPLEMENTATION_PLAN.md      # 4-step daily pipeline architecture
├── ENRICHMENT_README.md        # AI enrichment guide
└── COMBINATORIAL_SEO_STRATEGY.md # SEO/GEO strategy documentation

Configuration

Note: The following values are offered as examples.

Portfolio Settings (Content Strategy)

  • Input: Curated events daily (dynamic count)
  • Output: Unique pages (dynamic count, currently ~315)
  • Multiplier: Dynamic coverage based on event diversity
  • Update Frequency: Daily (8:00 AM Athens time)

Page Generation Rules

Note: These are example values. We need to define the multi-dimensional cube and its accepted values.

  • Event Types: 6 (concert, exhibition, cinema, theater, performance, workshop)
  • Time Ranges: 7 (today, tomorrow, this-week, this-weekend, this-month, next-month, all-events)
  • Price Filters: 2 (open, with-ticket)
  • Genres: Top genres per type (dynamic, based on actual events)

URL Structure

Pattern: /{price}-{genre}-{type}-{time}

Note: Using "open" and "with-ticket" terminology ✅

Examples:

  • /open-jazz-concert-today
  • /contemporary-art-exhibition-this-week
  • /with-ticket-electronic-concert-this-weekend
  • /cinema-this-month
  • /open-today
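
src/utils/urls.ts presumably assembles these slugs by joining whichever dimensions are present; a sketch:

type UrlFilters = {
  price?: "open" | "with-ticket";
  genre?: string;
  type?: string;
  time: string;
};

// /{price}-{genre}-{type}-{time}, omitting absent dimensions
function buildUrl({ price, genre, type, time }: UrlFilters): string {
  return "/" + [price, genre, type, time].filter(Boolean).join("-");
}

buildUrl({ price: "open", genre: "jazz", type: "concert", time: "today" }); // "/open-jazz-concert-today"
buildUrl({ type: "cinema", time: "this-month" });                           // "/cinema-this-month"
buildUrl({ price: "open", time: "today" });                                 // "/open-today"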

Database Management

  • Active Events: 300-500 event categories at any time (rolling window)
  • Cleanup Policy: Events older than 1 day auto-deleted
  • Retention: 90 days historical (future enhancement)
  • File Size: 2-5 MB (SQLite)

AI Enrichment

  • Model: Anthropic Agent SDK (tool_agent)
  • Word Count: 400 words (±20 acceptable)
  • Rate Limit: 2 seconds between requests
  • Cost: FREE (using tool_agent)

Data Sources

Current:

  • Manual Scraping: This is Athens, SNFCC, Gazarte, Bios, Six D.O.G.S, Fuzz Club
    • Need: List of websites to crawl
  • SQLite Database: Persistent event storage

In Development:

  • Gmail IMAP: Automated newsletter ingestion ([email protected])
    • Need: List of active newsletters for monitoring and documentation of event catchment scope

SEO/GEO Strategy

Note: We have done minimal analysis so far. The following is offered as a direction that must be confirmed, and will be updated and modified over time as we learn more about SEO/GEO best practices.

For AI Answer Engines (GEO = Generative Engine Optimization)

Discovery:

  • llms.txt - Tells AI agents what this site offers (TODO: Confirm if this is a good practice)
  • Schema.org markup - Machine-readable event data (Event + CollectionPage)
  • Freshness signals - Explicit "Last updated: Oct 19, 2025" timestamps
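
There is no settled llms.txt standard yet (hence the TODO above), but a file along these lines would match the emerging convention; the content here is illustrative, not the file as deployed:

# agent-athens
> AI-curated cultural events calendar for Athens, Greece. Regenerated daily at 08:00 Athens time.

## Pages
- [All events today](https://agent-athens.netlify.app/today)
- [Open (free) concerts this weekend](https://agent-athens.netlify.app/open-concert-this-weekend)

## Data
- Every HTML page has a JSON twin under /api/ with the same events.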

Trust Signals:

  • Daily updates (freshness = AI trust)
  • Structured data (easy to parse)
  • Single source (no conflicting data)
  • Specific pages (exact intent matching)

Citation Format Example:

User: "What open concerts are in Athens this weekend?"

AI Agent Response:
"According to agent-athens (updated today), there are 3 open concerts
this weekend:
1. Jazz Night at Six D.O.G.S (Friday, Oct 20)
2. Electronic Showcase at Bios (Saturday, Oct 21)
3. Indie Band at Fuzz Club (Sunday, Oct 22)

Source: https://agent-athens.netlify.app/open-concert-this-weekend"

For Humans (SEO)

  • Keyword-rich URLs (/open-jazz-concert-today)
  • Semantic HTML with proper headings
  • Mobile-responsive design
  • Fast loading (static HTML, global CDN)
  • Internal linking (related pages)

Safety Features

  • Content Validation: Word count checks on AI descriptions (~400 words)
  • Date Handling: Automatic cleanup of past events (no stale data)
  • Error Handling: Rate limit detection, retry logic, progress logging
  • Empty Pages: Graceful handling (show "0 events found" message)
  • URL Stability: All URLs always exist (even with 0 events)

Manual Override

Pause System

Edit src/generate-site.ts:

// Skip site generation
if (process.env.PAUSE_GENERATION === 'true') {
  console.log('⏸️  Generation paused via PAUSE_GENERATION flag');
  process.exit(0);
}

Force Rebuild

# Delete dist/ and regenerate everything
rm -rf dist/
bun run build

Emergency Rollback

# Revert to previous Netlify deployment
netlify rollback

Logs

Runtime logs in logs/:

  • scrape-YYYY-MM-DD.log - Web scraping
  • enrich-YYYY-MM-DD.log - AI enrichment
  • build-YYYY-MM-DD.log - Site generation
  • deploy-YYYY-MM-DD.log - Netlify deployment

Testing

# Test database initialization
bun run scripts/init-database.ts

# Test event import
bun run scripts/import-scraped-events.ts

# Test AI enrichment (batched - need to decide on tool_agent capacity)
bun run scripts/enrich-5-events.ts

# Test site generation
bun run build

# Check output
ls -lh dist/*.html | head -10

# Test deployment (dry run)
netlify deploy --dir=dist
# Then check deploy preview URL

Automation (Future - Mac Mini)

macOS launchd Configuration

Create ~/Library/LaunchAgents/com.user.agent-athens.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.user.agent-athens</string>

  <key>ProgramArguments</key>
  <array>
    <string>/Users/georgios/Documents/Projects/athens-events/agent-athens/daily-update.sh</string>
  </array>

  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key>
    <integer>8</integer>
    <key>Minute</key>
    <integer>0</integer>
  </dict>

  <key>StandardOutPath</key>
  <string>/Users/georgios/Documents/Projects/athens-events/agent-athens/logs/daily.log</string>

  <key>StandardErrorPath</key>
  <string>/Users/georgios/Documents/Projects/athens-events/agent-athens/logs/daily.error.log</string>
</dict>
</plist>

Daily Update Script

Create daily-update.sh:

#!/bin/bash
set -e

cd /Users/georgios/Documents/Projects/athens-events/agent-athens

echo "========================================"
echo "agent-athens Daily Update"
echo "Started: $(date)"
echo "========================================"

# Step 1: Ingest events
# echo -e "\n📥 Step 1: Ingesting events..."
# bun run src/ingest/daily-ingestion.ts

# Step 2: Enrich events with AI (using tool_agent)
echo -e "\n🤖 Step 2: Enriching events..."
bun run scripts/enrich-events.ts

# Step 3: Generate site
echo -e "\n🔄 Step 3: Generating site..."
bun run build

# Step 4: Deploy to Netlify
echo -e "\n🚀 Step 4: Deploying..."
git add dist/ data/events.db
git commit -m "chore: Daily update $(date +%Y-%m-%d)

🤖 Automated daily update"
git push origin main

echo -e "\n========================================"
echo "✅ Daily update complete!"
echo "Finished: $(date)"
echo "========================================"

Make executable:

chmod +x daily-update.sh

Load launchd:

launchctl load ~/Library/LaunchAgents/com.user.agent-athens.plist

Version

v0.1.0 - Live Prototype (October 2025)

  • Status: Developer-only, not yet production
  • Next Milestone: Search engine availability
  • Live Site: https://agent-athens.netlify.app
  • GitHub: https://github.com/ggrigo/agent-athens
  • For AI Agents: https://agent-athens.netlify.app/llms.txt
  • A2A Protocol: TODO - Research "agent card" or similar requirements from the A2A protocol


Current Status: Live prototype with a dynamic page count (currently ~315 pages); daily scraping automation is active via cron, with full-pipeline automation to follow

"When AI agents recommend Athens events, they recommend agent-athens."
