A comprehensive Next.js application for collecting detailed feedback on LLM responses with real-time streaming chat interface, per-message feedback, and overall conversation evaluation.
Check it out here: Mercity LLM Feedback Collector
- ✅ Real-time Streaming Chat - Live chat with GPT-4o via OpenRouter using Server-Sent Events
- ✅ OpenRouter Integration - Full integration with OpenRouter API for multiple LLM providers
- ✅ Markdown & LaTeX Support - Complete markdown rendering with GitHub Flavored Markdown (GFM) and KaTeX for mathematical expressions
- ✅ Smart UI/UX - Responsive design with proper text wrapping, auto-scroll, and loading states
- ✅ Message History - Maintains full conversation context with abort controls
- ✅ Per-Message Feedback - Individual message rating with thumbs up/down, 0-10 scale rating, and comments
- ✅ Overall Chat Feedback - End-of-conversation rating (1-5 stars), recommendation, and detailed feedback
- ✅ Real-time Feedback Storage - Immediate persistence of all feedback data
- ✅ Expandable Feedback UI - Collapsible detailed feedback forms for better UX
- ✅ Complete Database Integration - Prisma ORM with SQLite for development
- ✅ Conversation Persistence - Full message history and metadata storage
- ✅ Session Management - Unique session tracking with completion status
- ✅ Feedback Analytics Ready - Structured data storage for future analytics dashboard
- Frontend: Next.js 15, React 19, TypeScript
- UI Components: shadcn/ui with Radix UI primitives
- Styling: Tailwind CSS 4
- Database: Prisma ORM + SQLite (development) / PostgreSQL (production)
- API Integration: OpenRouter API with streaming support
- Form Management: React Hook Form + Zod validation
- Markdown Rendering: React Markdown with GFM and KaTeX
- Real-time: Server-Sent Events for streaming responses
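Because responses stream over Server-Sent Events, the client can render tokens as they arrive. A minimal sketch of consuming the `/api/chat` stream, assuming the route emits plain text chunks (the exact event framing is an assumption, not part of this repo's documented API):

```ts
// Illustrative client-side consumer for the streaming /api/chat route.
// Assumes the response body is a stream of text chunks; adjust the parsing
// if the route frames events differently.
async function streamChat(messages: { role: string; content: string }[]) {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages }),
  });
  if (!res.body) throw new Error('No response body');

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let assistantText = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    assistantText += decoder.decode(value, { stream: true });
    console.log(assistantText); // render the partial response as it grows
  }
  return assistantText;
}
```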
- Node.js 18+
- npm or yarn
- OpenRouter API key (Get yours here)
1. Clone the repository

   ```bash
   git clone <your-repo-url>
   cd llm-feedback-collector
   ```

2. Install dependencies

   ```bash
   npm install
   ```

3. Set up environment variables

   ```bash
   cp .env.example .env
   ```

   Edit `.env` and add your OpenRouter API key:

   ```env
   OPENROUTER_API_KEY=your_openrouter_api_key_here
   DATABASE_URL="file:./dev.db"
   ```

4. Set up the database

   ```bash
   # Generate Prisma client
   npx prisma generate

   # Run migrations (creates database and tables)
   npx prisma migrate dev --name init

   # Optional: view the database in Prisma Studio
   npx prisma studio
   ```

5. Start the development server

   ```bash
   npm run dev
   ```

6. Access the application
   - Main page: http://localhost:3000
   - Chat interface: http://localhost:3000/chat
   - Database studio: http://localhost:5555 (if running Prisma Studio)
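As a quick smoke test (illustrative only, not a script in the repo), you can hit the health endpoint documented below once the dev server is running:

```ts
// Hypothetical smoke test: run with a TS runner such as `npx tsx`.
// GET /api/health is documented in the API reference below.
const res = await fetch('http://localhost:3000/api/health');
console.log(res.status, await res.json());
```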
The application uses a single conversations table with the following schema:
```sql
CREATE TABLE "conversations" (
  "id"               INTEGER PRIMARY KEY AUTOINCREMENT,
  "session_id"       TEXT UNIQUE NOT NULL,
  "username"         TEXT NOT NULL,
  "messages"         TEXT NOT NULL,         -- JSON: [{role, content, timestamp}]
  "feedback"         TEXT DEFAULT '{}',     -- JSON: {messageIndex: {thumbs, rating, comment}}
  "overall_rating"   INTEGER,               -- 1-5 stars
  "overall_thumbs"   TEXT,                  -- "up" or "down"
  "overall_feedback" TEXT,                  -- Text comment
  "is_completed"     BOOLEAN DEFAULT false, -- Chat session ended
  "created_at"       DATETIME DEFAULT CURRENT_TIMESTAMP,
  "updated_at"       DATETIME
);
```
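The `messages` and `feedback` columns store JSON as text, so rows round-trip through `JSON.parse`/`JSON.stringify`. A hedged sketch of loading a row and decoding those columns (the interfaces it mentions are defined just below):

```ts
// Illustrative: decode the JSON columns of a stored conversation row.
// session_id is unique, so findUnique can look rows up by sessionId.
import { prisma } from '@/lib/prisma';

const row = await prisma.conversation.findUnique({
  where: { sessionId: 'session_123' },
});

if (row) {
  const messages = JSON.parse(row.messages); // MessageData[] (see below)
  const feedback = JSON.parse(row.feedback); // FeedbackData (see below)
  console.log(messages.length, Object.keys(feedback));
}
```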
Messages Array:

```ts
interface MessageData {
  role: 'user' | 'assistant';
  content: string;
  timestamp: string;
}
```
Per-Message Feedback:

```ts
interface FeedbackData {
  [messageIndex: number]: {
    thumbs?: 'up' | 'down';
    rating?: number; // 0-10
    comment?: string;
  };
}
```
Overall Feedback:

```ts
interface OverallFeedbackData {
  rating: number; // 1-5 stars
  thumbs: 'up' | 'down';
  comment: string;
}
```

```http
POST /api/chat
Content-Type: application/json

{
  "messages": [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there!"}
  ]
}

# Returns: Server-Sent Events stream
```
```http
# Save conversation
POST /api/conversations
{
  "sessionId": "session_123",
  "username": "user@example.com",
  "messages": [...],
  "feedback": {...}
}

# Get conversation
GET /api/conversations?sessionId=session_123
```
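Calling these endpoints from the client is a pair of plain `fetch` calls. A hedged sketch (the response shapes are assumptions; `MessageData` and `FeedbackData` are the interfaces defined earlier):

```ts
// Illustrative client-side helpers for the conversation endpoints above.
async function saveConversation(
  sessionId: string,
  username: string,
  messages: MessageData[],
  feedback: FeedbackData,
): Promise<unknown> {
  const res = await fetch('/api/conversations', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sessionId, username, messages, feedback }),
  });
  return res.json();
}

async function getConversation(sessionId: string): Promise<unknown> {
  const res = await fetch(
    `/api/conversations?sessionId=${encodeURIComponent(sessionId)}`,
  );
  return res.json();
}
```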
```http
# Update per-message feedback
POST /api/feedback
{
  "sessionId": "session_123",
  "feedback": {
    "2": {
      "thumbs": "up",
      "rating": 8,
      "comment": "Great response!"
    }
  }
}
```
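A matching client-side helper for this endpoint might look like the following sketch (hypothetical helper, not part of the repo):

```ts
// Illustrative: submit feedback for the message at a given index.
// Feedback is keyed by message index, matching the FeedbackData interface.
async function submitMessageFeedback(
  sessionId: string,
  messageIndex: number,
  entry: { thumbs?: 'up' | 'down'; rating?: number; comment?: string },
): Promise<void> {
  await fetch('/api/feedback', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sessionId, feedback: { [messageIndex]: entry } }),
  });
}
```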
```http
# Submit overall feedback and end session
POST /api/end-chat
{
  "sessionId": "session_123",
  "overallFeedback": {
    "rating": 4,
    "thumbs": "up",
    "comment": "Overall good experience"
  }
}
```

```http
GET /api/health    # System status
POST /api/health   # Echo endpoint
```

The application includes built-in context size management to control conversation length and message size:
```
# Context message limit
CONTEXT_MSG_LIMIT = -1   # -1 = unlimited, positive integer = max messages
MAX_MSG_SIZE = 1000      # Maximum words per message
```
- CONTEXT_MSG_LIMIT
  - Set to `-1` for unlimited messages
  - Set to any positive integer (e.g., `50`) to limit total messages per conversation
  - When the limit is reached, the send button is disabled with a warning message
- MAX_MSG_SIZE
  - Controls the maximum number of words per individual message
  - Default: 1000 words
  - Shows a real-time word count and a warning when exceeded
- Real-time Feedback: Word count display shows current usage
- Visual Warnings: Red text alerts when limits are exceeded
- Disabled Send Button: Prevents sending when limits are reached
- Context Awareness: Shows current message count vs. limit
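A minimal sketch of how such a guard could be enforced client-side, using hypothetical helper names (the actual component logic may differ):

```ts
// Hypothetical client-side guard mirroring the limit behavior described above.
const CONTEXT_MSG_LIMIT: number = -1; // -1 = unlimited
const MAX_MSG_SIZE: number = 1000;    // max words per message

function countWords(text: string): number {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

// Disable the send button when either limit is exceeded.
function canSend(draft: string, messageCount: number): boolean {
  const withinContext =
    CONTEXT_MSG_LIMIT === -1 || messageCount < CONTEXT_MSG_LIMIT;
  const withinSize = countWords(draft) <= MAX_MSG_SIZE;
  return withinContext && withinSize;
}
```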
```http
GET /api/config
# Returns the current limits configuration
{
  "contextMsgLimit": -1,
  "maxMsgSize": 1000,
  "status": "success"
}
```

```bash
npx prisma studio
```

Access at http://localhost:5555 to browse and edit data visually.
```ts
import { prisma } from '@/lib/prisma';

// Get all conversations
const conversations = await prisma.conversation.findMany({
  orderBy: { createdAt: 'desc' }
});

// Get conversations with feedback
const withFeedback = await prisma.conversation.findMany({
  where: {
    NOT: {
      feedback: '{}'
    }
  }
});

// Get completed chats only
const completed = await prisma.conversation.findMany({
  where: { isCompleted: true }
});
```

```bash
# Export to JSON
npx prisma studio --export

# Or create a custom export script
node scripts/export-data.js
```
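The referenced `scripts/export-data.js` is not shown here; as a hedged sketch, a custom export script could look like this (illustrative TypeScript, not the actual file):

```ts
// Illustrative export script: dump all conversations to a JSON file.
import { writeFileSync } from 'fs';
import { prisma } from '@/lib/prisma';

async function exportData() {
  const conversations = await prisma.conversation.findMany();
  writeFileSync('export.json', JSON.stringify(conversations, null, 2));
  console.log(`Exported ${conversations.length} conversations`);
}

exportData().finally(() => prisma.$disconnect());
```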
1. PostgreSQL (Recommended)

   ```env
   DATABASE_URL="postgresql://user:password@localhost:5432/llm_feedback?schema=public"
   ```

2. Run migrations in production

   ```bash
   npx prisma migrate deploy
   npx prisma generate
   ```
1. Push to GitHub

   ```bash
   git add .
   git commit -m "Initial commit"
   git push origin main
   ```

2. Configure Vercel

   ```bash
   npm i -g vercel
   vercel --prod
   ```

3. Set environment variables in the Vercel dashboard:
   - `OPENROUTER_API_KEY`
   - `DATABASE_URL` (use Vercel Postgres or an external provider)

For other platforms:
- Connect the repository
- Set environment variables
- Add build command: `npm run build`
- Add start command: `npm start`
```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
# Install all dependencies; the Next.js build step needs devDependencies
# (TypeScript, Tailwind, etc.)
RUN npm ci
COPY . .
RUN npx prisma generate
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
```
```ts
// Average rating across all per-message feedback entries.
// feedback is keyed by message index, so each entry is unnested via json_each.
const avgMessageRating = await prisma.$queryRaw`
  SELECT AVG(json_extract(f.value, '$.rating')) AS avg_rating
  FROM conversations, json_each(conversations.feedback) AS f
  WHERE conversations.feedback != '{}'
`;

// Overall satisfaction
const overallStats = await prisma.conversation.aggregate({
  _avg: { overallRating: true },
  _count: { overallThumbs: true },
  where: { isCompleted: true }
});

// Completion rate
const completionRate = await prisma.conversation.groupBy({
  by: ['isCompleted'],
  _count: true
});
```
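If you prefer to avoid raw SQL, the same per-message average can be computed in application code by parsing the `feedback` JSON (a sketch using the shapes defined earlier):

```ts
// Illustrative: average per-message rating computed in TypeScript.
import { prisma } from '@/lib/prisma';

const rows = await prisma.conversation.findMany({
  where: { NOT: { feedback: '{}' } },
  select: { feedback: true },
});

// Flatten every conversation's feedback map into a list of numeric ratings.
const ratings = rows.flatMap((row) =>
  Object.values(JSON.parse(row.feedback) as Record<string, { rating?: number }>)
    .map((f) => f.rating)
    .filter((r): r is number => typeof r === 'number'),
);

const avgRating =
  ratings.reduce((sum, r) => sum + r, 0) / Math.max(ratings.length, 1);
console.log('average per-message rating:', avgRating);
```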
```
src/
├── app/
│   ├── api/                      # API routes
│   │   ├── chat/                 # Streaming chat
│   │   ├── conversations/        # CRUD operations
│   │   ├── feedback/             # Per-message feedback
│   │   ├── end-chat/             # Overall feedback
│   │   └── health/               # System health
│   ├── chat/                     # Chat interface
│   └── page.tsx                  # Landing page
├── components/
│   ├── FeedbackWidget.tsx        # Per-message feedback UI
│   ├── OverallFeedbackDialog.tsx # End-chat feedback
│   └── ui/                       # shadcn/ui components
├── lib/
│   ├── prisma.ts                 # Database utilities
│   └── utils.ts                  # General utilities
└── prisma/
    ├── schema.prisma             # Database schema
    └── migrations/               # Database migrations
```
```bash
npm run dev              # Start development server
npm run build            # Build for production
npm run start            # Start production server
npm run lint             # Run ESLint
npx prisma studio        # Database GUI
npx prisma migrate dev   # Run database migrations
```

See .env.example for all required environment variables.
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Commit changes: `git commit -m 'Add amazing feature'`
4. Push to the branch: `git push origin feature/amazing-feature`
5. Open a Pull Request
MIT License - see the LICENSE file for details.