We propose to improve LLM-enabled domain model generation with a refinement loop. The workflow is organized in three main phases:
- Initial Modeling Phase: Start with a domain description to create a draft domain model.
- Iterative Improvement Phase: Refine the domain model via a Q&A feedback loop.
- Final Modeling Phase: Present the final domain model with the changes incorporated from the domain expert's answers.
The ToT-Q framework is supported by four components:
- ToT & Confidence Quantification – Creates the domain model using the ToT4DM framework and estimates the confidence of the recommended elements.
- Modeling Pattern Matching – Detects modeling patterns in the domain model and prepares relevant data for question generation.
- Question Generation & Selection – Generates questions from matched patterns using a rule-based agent, prioritizing the areas of uncertainty in the domain model.
- Model Refinement – Updates the domain model and confidence scores based on the domain expert's answers, until all questions are addressed or a limit is reached.
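The interaction between these components can be sketched as a budgeted loop over the least-confident model elements. The function and variable names below are illustrative assumptions, not the actual ToT-Q API:

```python
# Sketch of the ToT-Q refinement loop (illustrative only; function and
# variable names are assumptions, not the actual ToT-Q implementation).

MAX_QUESTIONS = 10          # stop after this many questions
CONFIDENCE_THRESHOLD = 0.8  # elements below this trigger a question


def refine(model, confidences, ask_expert, apply_answer):
    """Question low-confidence elements until the budget is spent
    or every element meets the confidence threshold."""
    asked = 0
    while asked < MAX_QUESTIONS:
        # Pick the least certain element still under the threshold.
        uncertain = [e for e, c in confidences.items()
                     if c < CONFIDENCE_THRESHOLD]
        if not uncertain:
            break
        target = min(uncertain, key=lambda e: confidences[e])
        answer = ask_expert(target)  # question generation + expert answer
        model, confidences = apply_answer(model, confidences, target, answer)
        asked += 1
    return model, confidences
```

In this sketch, `ask_expert` stands in for the question generation and chat components, and `apply_answer` for the model-refinement step that updates both the model and its confidence scores.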
The ToT-Q tool is developed using the ToT4DM DSL tool and the BESSER Agentic framework.
Request OpenAI or Azure API keys to access the LLM API. Instructions are available at the following links:
To configure the ToT DSL:
- Create the .env file as instructed in the ToT4DM repo.
- Review the examples to configure the ToT4DM DSL.
To configure the BESSER Agentic framework:
- Configure the config.ini file with the websocket options indicated in the BESSER Agentic framework docs.
To configure the templates, you can modify the question variables in the following Python file.
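For illustration, question templates of this kind are typically plain strings with named placeholders that get filled with data from the matched modeling pattern. The variable names and placeholders below are hypothetical, not the actual contents of the linked file:

```python
# Hypothetical question template variables (illustrative assumptions;
# the real names live in the Python file linked above).

MULTIPLICITY_QUESTION = (
    "Can a {source} be associated with more than one {target}?"
)
ATTRIBUTE_QUESTION = (
    "Should the class {cls} have an attribute '{attr}'?"
)

# A template is filled with data from a matched modeling pattern:
question = MULTIPLICITY_QUESTION.format(source="Library", target="Book")
```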
Add the following variables to the .env file to configure when questions are triggered:
# Maximum number of questions in the Q&A loop
MAX_QUESTIONS = 10
# Confidence threshold for asking questions
CONFIDENCE_THRESHOLD = 0.8 # Suggested range: (0.5, 0.9]
# Confidence values used when updating the model based on expert answers
HIGH_CONFIDENCE = 0.9 # Suggested range: (0.5, 1.0]
LOW_CONFIDENCE = 0.4 # Suggested range: [0.1, 0.5]
# Expert simulation mode (0 = No simulation, 1 = Simulation)
SIMULATED_EXPERT = 1
- Install Python 3.11 and create a virtual environment
- Install the required packages:
pip install -r requirements.txt
- Configure the templates and question triggers in the .env file.
- Run the rule-based agent (this agent calls the LLM agents):
python tot_rules_q/rule_agent.py
- Run the chat application:
python chat.py
- A log captures all the thoughts created by the LLM and the questions triggered by the rule-based agent.
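The .env trigger variables described above can be read at startup with Python's standard library, assuming they are exported into the environment (e.g. via python-dotenv). This is a minimal sketch, not the tool's actual loading code:

```python
import os

# Read the Q&A trigger settings from the environment, falling back to the
# defaults suggested in the configuration section above.
MAX_QUESTIONS = int(os.getenv("MAX_QUESTIONS", "10"))
CONFIDENCE_THRESHOLD = float(os.getenv("CONFIDENCE_THRESHOLD", "0.8"))
HIGH_CONFIDENCE = float(os.getenv("HIGH_CONFIDENCE", "0.9"))
LOW_CONFIDENCE = float(os.getenv("LOW_CONFIDENCE", "0.4"))
SIMULATED_EXPERT = os.getenv("SIMULATED_EXPERT", "1") == "1"

# Basic sanity checks against the suggested ranges.
assert 0.5 < CONFIDENCE_THRESHOLD <= 0.9
assert 0.1 <= LOW_CONFIDENCE < HIGH_CONFIDENCE <= 1.0
```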
The experiment results include the reference models and the generated outputs. To reproduce the experiments, use the input data containing the domain descriptions, then run:
python tot_rules_q/rule_agent.py
python chat.py
