[P1] Discovery questions are static instead of Socratic/dynamic #239

@frankbria

Description

Problem

Discovery questions are generated from static templates rather than dynamically by an AI agent that reacts to and learns from previous answers. The questions should follow a Socratic methodology, where each question builds on the context of the previous answer.

Current Behavior

Discovery questions appear to be pre-defined templates that don't change based on user input:

  • Q1: "Who are the primary users..." (problem)
  • Q2: "Who are the primary users..." (users) - redundant repeat of Q1
  • Q3: "What are the core features..." (features)
  • Q4: "Are there any technical constraints..." (constraints)
  • Q5: "Do you have a preferred tech stack..." (tech_stack)

Each time a user starts discovery, they get the same set of questions regardless of their specific answers.
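
For illustration, the current behavior amounts to something like the sketch below (names and wording are hypothetical; the actual template list in the codebase may differ):

```python
# Hypothetical sketch of the static behavior: the same hard-coded question
# list is returned for every project, and answers are never consulted.
STATIC_QUESTIONS = [
    ("problem", "Who are the primary users and what problem does this solve?"),
    ("users", "Who are the primary users of this application?"),  # redundant repeat of Q1
    ("features", "What are the core features you need?"),
    ("constraints", "Are there any technical constraints?"),
    ("tech_stack", "Do you have a preferred tech stack?"),
]

def next_question(step: int) -> tuple[str, str]:
    """Return the next question purely by position in the template list."""
    return STATIC_QUESTIONS[step]
```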

Expected Behavior

Discovery should be driven by an AI agent that:

  1. Reads and understands each user answer
  2. Generates follow-up questions based on the specific context of previous answers
  3. Adapts the question flow - different projects get different questions
  4. Builds on answers - Socratic method where each question probes deeper into relevant areas

Example of expected dynamic flow:

  • User answers: "Building a weather app for sailors"
  • AI asks: "What offshore weather data sources do you need access to?" or "Should the app include maritime weather alerts?"
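
As a minimal sketch of how the dynamic flow could work, assuming an LLM client exposing a plain `complete(prompt) -> str` call (the real agent interface may differ):

```python
def build_followup_prompt(history: list[dict[str, str]]) -> str:
    """Build a Socratic follow-up prompt from the Q&A history gathered so far.

    Each history entry is assumed to be {"question": ..., "answer": ...}.
    """
    transcript = "\n".join(
        f"Q: {turn['question']}\nA: {turn['answer']}" for turn in history
    )
    return (
        "You are running a product discovery interview.\n"
        "Conversation so far:\n"
        f"{transcript}\n\n"
        "Ask ONE follow-up question that digs deeper into the most recent answer "
        "and surfaces project-specific requirements. Return only the question."
    )

# Example: the sailor weather-app answer should steer the next question
history = [
    {"question": "What are you building?", "answer": "A weather app for sailors"},
]
prompt = build_followup_prompt(history)
# next_question = llm_client.complete(prompt)
# e.g. "What offshore weather data sources do you need access to?"
```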

Impact

  • The discovery process feels generic and unintelligent to users
  • PRDs may miss critical project-specific requirements
  • Reduces trust in the platform's AI capabilities
  • Not a blocker, but significantly degrades the user experience

Suggested Approach

  1. Use the LeadAgent or similar AI agent to dynamically generate each next question
  2. Pass conversation history to the agent for context
  3. Allow the agent to determine when sufficient information has been gathered
  4. Implement fallback to static questions if AI generation fails
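
A rough sketch of how these pieces could fit together, assuming the agent exposes an `ask(prompt) -> str` method (e.g. the LeadAgent) and the existing static list is kept as the fallback; all names here are hypothetical:

```python
MAX_QUESTIONS = 8  # hypothetical cap so discovery always terminates

def run_discovery(agent, ask_user, fallback_questions):
    """Drive discovery with the agent, falling back to static questions on failure.

    `agent` is assumed to expose ask(prompt) -> str; `ask_user` is whatever
    UI/CLI hook collects the user's answer to a question.
    """
    history = []
    for step in range(MAX_QUESTIONS):
        try:
            transcript = "\n".join(
                f"Q: {t['question']}\nA: {t['answer']}" for t in history
            )
            prompt = (
                "You are running a Socratic product discovery interview.\n"
                f"Conversation so far:\n{transcript}\n\n"
                "If you have enough information to draft a PRD, reply DONE. "
                "Otherwise ask ONE follow-up question that builds on the latest answer."
            )
            question = agent.ask(prompt)            # dynamic, context-aware question
            if question.strip().upper() == "DONE":  # agent decides it has enough context
                break
        except Exception:
            # AI generation failed: degrade gracefully to the static template
            if step >= len(fallback_questions):
                break
            question = fallback_questions[step]
        history.append({"question": question, "answer": ask_user(question)})
    return history
```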

Acceptance Criteria

  • Discovery questions vary based on user answers
  • AI agent reads previous answers before generating next question
  • Questions build on context (Socratic method)
  • Different projects produce different question flows
  • Error handling falls back to reasonable defaults
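
These criteria could be exercised with tests along the lines of the sketch below, reusing `run_discovery` from the sketch above; the fake agents and assertions are placeholders for whatever the real implementation exposes:

```python
class ScriptedAgent:
    """Fake agent whose question echoes the latest answer, so tests can
    assert that questions depend on prior answers."""
    def ask(self, prompt: str) -> str:
        if "A: " not in prompt:
            return "What are you building?"
        last_answer = prompt.rsplit("A: ", 1)[-1].split("\n")[0]
        return f"Tell me more about: {last_answer}"

class FailingAgent:
    """Fake agent that always fails, to exercise the static fallback path."""
    def ask(self, prompt: str) -> str:
        raise RuntimeError("model unavailable")

def test_questions_react_to_answers():
    answers = iter(["A weather app for sailors"] + ["n/a"] * 10)
    history = run_discovery(ScriptedAgent(), lambda q: next(answers), fallback_questions=[])
    assert "weather app for sailors" in history[1]["question"].lower()

def test_fallback_to_static_questions():
    static = ["Who are the primary users?", "What are the core features?"]
    history = run_discovery(FailingAgent(), lambda q: "some answer", static)
    assert [h["question"] for h in history] == static
```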

Priority

P1 - This should be fixed before beta for the best experience. Dynamic discovery is a core part of the "AI-powered" value proposition.

Environment


Reported by: Rocket 🦊 via workflow testing

Labels

  • P1-high-beta - High priority, should fix before beta for best experience
  • ai-enforcement
  • context-engineering - Context window management and optimization
  • enhancement - New feature or request
