Commit 03d925d

Fixed streamlit prompt to use HAL9_TOKEN when APIs are requested (#485)
* Fixed streamlit prompt to use HAL9_TOKEN when APIs are requested
* Update apps/hal9/tools/streamlit.py

Co-authored-by: Javier Arturo Porras Luraschi <[email protected]>

1 parent 506221b

File tree

1 file changed: +12 −1 lines changed


apps/hal9/tools/streamlit.py

Lines changed: 12 additions & 1 deletion
@@ -85,7 +85,18 @@ def streamlit_generator(prompt):
     messages = load_messages(file_path="./.storage/.streamlit_messages.json")

     if len(messages) < 1:
-        messages = insert_message(messages, "system", f"""This is a Python streamlit generator system that automates the creation of Streamlit apps based on user prompts. It interprets natural language queries, and the response is an complete python script with the including imports for a interactive Streamlit app, return the code as fenced code block with triple backticks (```) as ```python```""")
+        messages = insert_message(
+            messages=messages,
+            role="system",
+            content=(
+                "You work in a Python streamlit generator system that automates the creation of Streamlit apps based on user prompts. "
+                "The system interprets natural language queries and responds with a complete Python script, including imports, for an interactive Streamlit app. "
+                "Your task is to return the code inside of a fenced code block with triple backticks (```) as ```python```\n"
+                "IMPORTANT: If the app requires using LLMs (OpenAI, Groq, Anthropic), retrieve the API key from the environment variable 'HAL9_TOKEN' and set the base_url to 'http://api.hal9.com/proxy/server=https://api.groq.com/openai/v1', replacing api.groq.com/openai/v1 with the right path to the LLM API, "
+                "and use it in the code. Do not include any other text or explanation, just the code.\n"
+            )
+        )
     messages = insert_message(messages, "user", f"Generates an app that fullfills this user request -> {prompt}")
     model_response = generate_response("openai", "o3-mini", messages)
     response_content = model_response.choices[0].message.content
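The new system prompt instructs generated apps to read the API key from the HAL9_TOKEN environment variable and route LLM calls through the Hal9 proxy by prefixing the provider's API base URL. A minimal sketch of that pattern, assuming a hypothetical `proxied_base_url` helper (not part of the commit):

```python
import os

# Prefix used by the Hal9 proxy, as stated in the system prompt above.
HAL9_PROXY = "http://api.hal9.com/proxy/server="

def proxied_base_url(provider_base_url):
    """Build the base_url a generated app passes to its LLM client,
    routing requests through the Hal9 proxy."""
    return HAL9_PROXY + provider_base_url

# The API key comes from the HAL9_TOKEN environment variable, not a
# hard-coded secret.
api_key = os.environ.get("HAL9_TOKEN", "")

# Example for Groq's OpenAI-compatible endpoint; swap in the right
# provider path for OpenAI or Anthropic.
base_url = proxied_base_url("https://api.groq.com/openai/v1")
# base_url == "http://api.hal9.com/proxy/server=https://api.groq.com/openai/v1"
```

A generated app would then pass `api_key` and `base_url` to its LLM client constructor instead of calling the provider directly.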
