Search, analyze, and visualize your AI conversations across ChatGPT, Claude, Perplexity, and Gemini with real-time momentum tracking and cognitive trajectory analysis.
Contextual is a cutting-edge AI memory infrastructure platform that transforms how you interact with and understand your conversations across multiple AI assistants. It provides a unified interface to search, analyze patterns, track cognitive momentum, and visualize the evolution of your thoughts across different AI platforms.
- 🔍 Unified Search: Search across all your AI conversations from ChatGPT, Claude, Perplexity, and Gemini in one place
- 📊 Momentum Tracking: Real-time visualization of conversation velocity and cognitive engagement patterns
- 🧬 Trajectory Analysis: Track the evolution of ideas and topics across multiple conversations
- 🎨 Beautiful UI: Modern, responsive interface with dark mode and cyberpunk aesthetics
- 🔐 Privacy First: Your data stays with you - self-hosted or local deployment options
- 🚀 Fast & Efficient: Built with React + Vite for lightning-fast performance
- 📱 Responsive: Works seamlessly on desktop, tablet, and mobile devices
- 🐳 Docker Ready: One-command containerized deployment
- ☁️ Cloud Deploy: Ready for Vercel, Netlify, GitHub Pages, and more
- Researchers: Track research questions and insights across multiple AI platforms
- Developers: Reference past coding discussions and technical solutions
- Writers: Monitor creative ideation patterns and writing evolution
- Students: Organize learning conversations and study materials
- Professionals: Maintain searchable knowledge base of AI-assisted work
- Node.js: >= 18.0.0
- npm: >= 9.0.0 (or yarn/pnpm equivalent)
- Modern Web Browser: Chrome 90+, Firefox 88+, Safari 14+, Edge 90+
- Windows 10 or later (Windows Server 2019+ for production)
- PowerShell 5.1 or later (PowerShell Core 7+ recommended)
- Visual Studio Build Tools (optional, for native modules)
- Ubuntu 20.04+, Debian 11+, CentOS 8+, or equivalent
- bash 4.0+
- systemd (for service deployment)
- macOS 11 (Big Sur) or later
- Xcode Command Line Tools:
xcode-select --install
- Docker Engine 20.10+
- Docker Compose 2.0+ (optional)
- Clone the repository:

git clone https://github.com/POWDER-RANGER/contextual-memory-ui.git
cd contextual-memory-ui

- Open index.html directly in your browser:

Windows:
start index.html

macOS:
open index.html

Linux:
xdg-open index.html

That's it! The application runs entirely in your browser with zero build step.
# Install dependencies
npm install
# Start development server
npm run dev
# Application will open at http://localhost:3000

# Install dependencies
npm install
# Build for production
npm run build
# Preview production build
npm run preview

The production build will be in the dist/ directory, ready for deployment.
- Build the application:

npm install
npm run build

- Run the deployment script:

.\deploy\windows\deploy-iis.ps1

- Manual IIS setup:
  - Open IIS Manager
  - Right-click "Sites" → "Add Website"
  - Site name: Contextual-AI
  - Physical path: C:\path\to\contextual-memory-ui\dist
  - Binding: HTTP, Port 80 (or HTTPS with certificate)
  - Click OK

- Configure web.config (already included in deploy/windows/web.config), which:
  - Enables SPA routing
  - Adds security headers
  - Configures CORS (if needed)
# Using deployment script
.\deploy\windows\deploy-node.ps1
# Or manually:
npm install -g serve
serve -s dist -l 3000

See deploy/windows/README.md for detailed instructions on running as a Windows Service with PM2 or NSSM.
- Build the application:

npm install
npm run build

- Run the deployment script:

chmod +x deploy/linux/deploy-nginx.sh
sudo ./deploy/linux/deploy-nginx.sh

- Manual Nginx setup:

# Copy built files
sudo cp -r dist/* /var/www/contextual-ai/

# Copy nginx configuration
sudo cp deploy/linux/nginx.conf /etc/nginx/sites-available/contextual-ai
sudo ln -s /etc/nginx/sites-available/contextual-ai /etc/nginx/sites-enabled/

# Test and reload
sudo nginx -t
sudo systemctl reload nginx
- Build and deploy:

npm install
npm run build
sudo cp -r dist/* /var/www/html/contextual-ai/
sudo cp deploy/linux/apache.conf /etc/apache2/sites-available/contextual-ai.conf
sudo a2ensite contextual-ai
sudo a2enmod rewrite
sudo systemctl reload apache2

See deploy/linux/contextual-ai.service for running as a system service with Node.js/serve.
npm install
npm run dev

npm install
npm run build
# Serve with Python
python3 -m http.server --directory dist 8000
# Or with Node.js
npx serve -s dist

# Build
npm run build
# Deploy to Apache
sudo cp -r dist/* /Library/WebServer/Documents/contextual-ai/
sudo cp deploy/macos/httpd-vhost.conf /etc/apache2/extra/httpd-contextual.conf
# Enable and restart
sudo apachectl configtest
sudo apachectl restart

# Build image
docker build -t contextual-ai .
# Run container
docker run -d -p 8080:80 --name contextual-ai contextual-ai
# Access at http://localhost:8080

Or with Docker Compose:

docker-compose up -d

The included Dockerfile uses multi-stage builds:
- Stage 1: Node.js build environment
- Stage 2: Nginx production server (lightweight)
Final image size: ~25MB
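The two stages described above can be sketched as follows. This is an illustrative Dockerfile consistent with that description, not a copy of the repository's actual Dockerfile, which may differ in base images and details:

```dockerfile
# Stage 1: Node.js build environment
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: lightweight Nginx server carrying only the static bundle
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```

Because the final stage contains only Nginx and the built static files, the heavy Node.js toolchain never ships in the production image, which is what keeps it small.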
# Install Vercel CLI
npm install -g vercel
# Deploy
vercel --prod

Or connect your GitHub repo in the Vercel dashboard for automatic deployments.
# Install Netlify CLI
npm install -g netlify-cli
# Deploy
netlify deploy --prod --dir=dist

Or use the Netlify dashboard with these settings:
- Build command: npm run build
- Publish directory: dist
# Build
npm run build
# Deploy to gh-pages branch
npm install -g gh-pages
gh-pages -d dist

Or use the included GitHub Actions workflow in .github/workflows/deploy.yml.
See deploy/aws/README.md for detailed CloudFormation/Terraform templates.
See deploy/azure/README.md for deployment via Azure CLI or GitHub Actions.
contextual-memory-ui/
├── index.html # Standalone single-page app (zero-build option)
├── src/ # Source files for Vite build
│ ├── App.jsx # Main React application
│ ├── index.js # Application entry point
│ └── core/ # Core business logic
│ ├── AIHousekeeper.js
│ ├── ContextBridge.js
│ └── StateVault.js
├── deploy/ # Platform-specific deployment configs
│ ├── windows/ # Windows IIS, PowerShell scripts
│ ├── linux/ # Linux nginx, Apache, systemd
│ ├── macos/ # macOS Apache configuration
│ ├── docker/ # Docker and Docker Compose
│ ├── aws/ # AWS deployment templates
│ └── azure/ # Azure deployment templates
├── vite.config.js # Vite build configuration
├── package.json # Dependencies and scripts
├── Dockerfile # Multi-stage production container
├── docker-compose.yml # Orchestrated container deployment
└── README.md # This file
All core application logic is platform-agnostic JavaScript/React:
- src/core/: Business logic (runs in any browser)
- src/App.jsx: Main UI components (React)
- index.html: Standalone version (no build required)
The application is browser-based and platform-agnostic at the code level. Platform differences are only in deployment/hosting infrastructure.
- Multi-Platform Support: Import from ChatGPT, Claude, Perplexity, Gemini
- Batch Processing: Handle multiple conversations at once
- Smart Parsing: Automatic extraction of metadata and timestamps
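A minimal sketch of what import normalization looks like. The input shape (`title`, `create_time`, `messages`) is a simplified stand-in, not the exact export schema of any of the supported platforms:

```javascript
// Normalize one raw exported conversation into a common record.
// NOTE: the raw field names here are illustrative assumptions.
function normalizeConversation(raw, platform) {
  return {
    platform, // "chatgpt" | "claude" | "perplexity" | "gemini"
    title: raw.title || "(untitled)",
    // Fall back to "now" when the export carries no timestamp.
    createdAt: raw.create_time ? new Date(raw.create_time * 1000) : new Date(),
    messages: (raw.messages || []).map((m) => ({
      role: m.role,
      text: m.content || "",
    })),
  };
}

// Batch processing: normalize many exports at once, skipping malformed entries.
function importBatch(rawList, platform) {
  return rawList
    .filter((raw) => raw && Array.isArray(raw.messages))
    .map((raw) => normalizeConversation(raw, platform));
}
```

Keeping every platform's export in one normalized shape is what lets search and analytics treat all four sources uniformly.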
- Full-Text Search: Search across all conversation content
- Filter by Platform: Focus on specific AI assistants
- Date Range: Find conversations from specific time periods
- Tag Support: Organize with custom tags (coming soon)
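The search pipeline can be sketched as one filter pass combining the three criteria above. The conversation shape (`platform`, `createdAt`, `messages[].text`) is an assumed normalized record, not the app's documented store:

```javascript
// Full-text search with optional platform and date-range filters.
function searchConversations(conversations, { query = "", platform = null, from = null, to = null } = {}) {
  const q = query.toLowerCase();
  return conversations.filter((c) => {
    if (platform && c.platform !== platform) return false; // platform filter
    if (from && c.createdAt < from) return false;           // date range start
    if (to && c.createdAt > to) return false;               // date range end
    // Full-text: match if any message contains the query (case-insensitive).
    return c.messages.some((m) => m.text.toLowerCase().includes(q));
  });
}
```

For example, `searchConversations(all, { query: "rust", platform: "claude" })` returns only Claude conversations mentioning "rust".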
- Momentum Metrics: Track conversation frequency and intensity
- Trajectory Visualization: See how topics evolve over time
- Platform Comparison: Compare usage across different AI assistants
- Cognitive Patterns: Identify your thinking and questioning patterns
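One way to picture the momentum metric: message activity over a sliding window, weighted toward recent activity. This is an illustrative formula, not necessarily the one the app uses:

```javascript
// Momentum sketch: recency-weighted message count per day over a window.
function momentum(timestamps, { windowDays = 7, now = Date.now() } = {}) {
  const windowMs = windowDays * 24 * 60 * 60 * 1000;
  let score = 0;
  for (const t of timestamps) {
    const age = now - t;
    if (age < 0 || age > windowMs) continue; // outside the window
    score += 1 - age / windowMs;             // linear recency weight
  }
  return score / windowDays;                 // normalize per day
}
```

A burst of messages today scores far higher than the same number of messages a week ago, which is what makes the metric track "velocity" rather than raw volume.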
- JSON Export: Full data portability
- Markdown Export: Readable documentation format
- Backup & Restore: Easy data management
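The two export formats can be sketched like this, reusing the same assumed normalized conversation shape as above (the app's real export envelope may carry different fields):

```javascript
// JSON export: versioned envelope for full data portability.
function exportJSON(conversations) {
  return JSON.stringify(
    { version: 1, exportedAt: new Date().toISOString(), conversations },
    null,
    2
  );
}

// Markdown export: one readable document per conversation.
function exportMarkdown(conversation) {
  const lines = [`# ${conversation.title}`, ""];
  for (const m of conversation.messages) {
    lines.push(`**${m.role}:** ${m.text}`, "");
  }
  return lines.join("\n");
}
```

The JSON form round-trips through `JSON.parse` for backup and restore; the Markdown form is for reading and sharing, not re-import.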
- Frontend: React 18 + Vite
- UI Components: Custom React components with Hooks
- Styling: Custom CSS with cyberpunk design system
- State Management: React Context + Hooks
- Data Storage: LocalStorage / IndexedDB
- Charts: Chart.js + react-chartjs-2
- Build Tool: Vite 5 (fast HMR, optimized production builds)
- Testing: Jest + React Testing Library
- Linting: ESLint + Prettier
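The LocalStorage/IndexedDB choice suggests a thin storage adapter so the rest of the app never touches either API directly. A hypothetical sketch (the method names and backend contract are assumptions, not the app's actual module):

```javascript
// Tiny storage adapter: `backend` only needs getItem/setItem
// (localStorage-compatible); an IndexedDB adapter would expose the
// same two methods behind the same interface.
function createStore(backend) {
  return {
    save(key, value) {
      backend.setItem(key, JSON.stringify(value));
    },
    load(key, fallback = null) {
      const raw = backend.getItem(key);
      return raw == null ? fallback : JSON.parse(raw);
    },
  };
}
```

In the browser this would be `createStore(window.localStorage)`; in tests, any in-memory object with the same two methods works.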
# Run all tests
npm test
# Run tests in watch mode
npm run test:watch
# Run tests with coverage
npm test -- --coverage

# Lint code
npm run lint
# Format code
npm run format

Create a .env file based on .env.example:

cp .env.example .env

Available variables:
- VITE_API_URL: Backend API URL (optional)
- VITE_STORAGE_TYPE: local or indexeddb (default: local)
- VITE_DEBUG_MODE: Enable debug logging (default: false)
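In a Vite build these variables arrive on `import.meta.env`. A sketch of how they might be consumed; passing the env object in as a parameter is an illustrative choice that keeps the reader testable outside the browser:

```javascript
// Read and validate the VITE_* configuration with safe defaults.
function readConfig(env) {
  return {
    apiUrl: env.VITE_API_URL || null,                                  // optional backend
    storageType: env.VITE_STORAGE_TYPE === "indexeddb" ? "indexeddb" : "local",
    debug: env.VITE_DEBUG_MODE === "true",                              // env vars are strings
  };
}
```

Note that Vite env values are always strings, so booleans like `VITE_DEBUG_MODE` must be compared against `"true"` rather than used directly.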
Contributions are welcome! This project thrives on community input.
- Fork the Repository:

gh repo fork POWDER-RANGER/contextual-memory-ui

- Create a Feature Branch:

git checkout -b feature/amazing-feature

- Commit Your Changes:

git commit -m 'feat: add amazing feature'

- Push to the Branch:

git push origin feature/amazing-feature

- Open a Pull Request
See CONTRIBUTING.md for detailed guidelines.
- Basic conversation import
- Search functionality
- Momentum tracking
- Responsive UI
- Cross-platform deployment
- Export/Import features
- Advanced filtering
- Backend API integration
- Real-time sync across devices
- Advanced analytics dashboard
- AI-powered conversation summarization
- Tag management system
- Custom themes
- Browser extension
- Collaborative features
- API for third-party integrations
- Native mobile apps (React Native)
- Advanced visualization options
- Machine learning insights
- Large conversation imports may slow down on older browsers
- Safari private mode has localStorage limitations
- Mobile UI optimization ongoing for tablets
See Issues for full list.
This project is licensed under the MIT License - see the LICENSE file for details.
- Inspired by the need for better AI conversation management
- Built with love for the AI research and development community
- Special thanks to all contributors and early adopters
- Star this repository if you find it useful
- Watch for updates and new features
- Fork to customize for your needs
- Sponsor via GitHub Sponsors
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Profile: @POWDER-RANGER
Making AI conversations searchable, analyzable, and actionable