Labels: P1-high-beta (High priority - should fix before beta for best experience), ai-enforcement, context-engineering (Context window management and optimization), enhancement (New feature or request)
Description
Problem
Discovery questions are served from static templates instead of being generated dynamically by an AI agent that reacts to and learns from previous answers. Questions should follow a Socratic methodology, with each new question building on the context of the previous answer.
Current Behavior
Discovery questions appear to be pre-defined templates that don't change based on user input:
- Q1: "Who are the primary users..." (problem)
- Q2: "Who are the primary users..." (users) - a duplicate of Q1
- Q3: "What are the core features..." (features)
- Q4: "Are there any technical constraints..." (constraints)
- Q5: "Do you have a preferred tech stack..." (tech_stack)
Each time a user starts discovery, they get the same set of questions regardless of their specific answers.
Expected Behavior
Discovery should be driven by an AI agent that:
- Reads and understands each user answer
- Generates follow-up questions based on the specific context of previous answers
- Adapts the question flow - different projects get different questions
- Builds on answers - Socratic method where each question probes deeper into relevant areas
Example of expected dynamic flow:
- User answers: "Building a weather app for sailors"
- AI asks: "What offshore weather data sources do you need access to?" or "Should the app include maritime weather alerts?"
Impact
- Users feel the discovery process is generic/unintelligent
- PRDs may miss critical project-specific requirements
- Erodes trust in the platform's AI capabilities
- Not a blocker, but degrades user experience significantly
Suggested Approach
- Use the LeadAgent or a similar AI agent to dynamically generate each next question (see the sketch after this list)
- Pass conversation history to the agent for context
- Allow the agent to determine when sufficient information has been gathered
- Implement fallback to static questions if AI generation fails
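A minimal sketch of what this loop could look like, assuming a LeadAgent-style object exposing a `generate(prompt)` method; the function name, prompt wording, and `DEFAULT_QUESTIONS` list are illustrative, not existing codeframe APIs:

```python
# Hypothetical sketch -- LeadAgent's generate() signature and the
# DEFAULT_QUESTIONS list are assumptions, not existing codeframe code.

DEFAULT_QUESTIONS = [
    "Who are the primary users of this project?",
    "What are the core features you need?",
    "Are there any technical constraints?",
    "Do you have a preferred tech stack?",
]

def next_discovery_question(agent, history, max_questions=8):
    """Generate the next question from the full conversation history,
    falling back to the static template list if AI generation fails.

    history is a list of (question, answer) tuples asked so far.
    """
    if len(history) >= max_questions:
        return None  # hard cap: treat discovery as complete
    try:
        prompt = (
            "You are running Socratic project discovery. "
            "Given the Q&A so far, ask ONE follow-up question that builds "
            "on the most recent answer, or reply DONE if you have enough "
            "information to draft a PRD.\n\n"
            + "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
        )
        question = agent.generate(prompt).strip()
        # Let the agent decide when enough information has been gathered
        return None if question == "DONE" else question
    except Exception:
        # Fallback: serve the next static question that hasn't been asked
        asked = {q for q, _ in history}
        return next((q for q in DEFAULT_QUESTIONS if q not in asked), None)
```

Note that the same `history` list doubles as the context the agent needs and the record the fallback path uses to avoid repeating questions.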
Acceptance Criteria
- Discovery questions vary based on user answers (see the test sketch after this list)
- AI agent reads previous answers before generating next question
- Questions build on context (Socratic method)
- Different projects produce different question flows
- Error handling falls back to reasonable defaults
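A hypothetical acceptance test for the first criterion, assuming the `next_discovery_question()` sketch above is in scope and stubbing the agent so the "questions vary with answers" behavior is checkable without a live model:

```python
# Hypothetical test -- StubAgent and the assertions below are illustrative,
# not an existing codeframe test suite.

class StubAgent:
    def generate(self, prompt):
        # Key the reply off the latest answer so variation is observable
        if "sailors" in prompt:
            return "What offshore weather data sources do you need?"
        return "Who are the primary users of this project?"

def test_questions_vary_with_answers():
    agent = StubAgent()
    sailing = [("What are you building?", "A weather app for sailors")]
    generic = [("What are you building?", "A todo list app")]
    assert next_discovery_question(agent, sailing) != \
           next_discovery_question(agent, generic)
```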
Priority
P1 - This should be fixed before beta for the best experience. It's a core part of the "AI-powered" value proposition.
Environment
- Repo: frankbria/codeframe
- Test URL: https://dev.codeframeapp.com
- Tested Project: test-weather-app
Reported by: Rocket 🦊 via workflow testing