Replies: 1 comment
The timeout warnings you're seeing may be caused by OpenEvolve's default 60-second timeout being too aggressive for providers like OpenRouter and NovitaAI. Add this to your config.yaml:
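Something along these lines should work; the exact keys under the llm section may vary slightly with your OpenEvolve version, so treat the values as a starting point:

```yaml
# config.yaml - raise the per-request timeout and allow a few retries
# so slower gateways (OpenRouter, NovitaAI) have time to respond.
llm:
  timeout: 300      # seconds per LLM request (default is 60)
  retries: 3        # retry transient failures before giving up
  retry_delay: 5    # seconds to wait between retries
```

Let me know if this resolves it!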
-
I've been trying various models, various providers, and two hosts (OpenRouter & NovitaAI).
For all of them, the LLM calls end up taking many minutes and often time out.
I don't understand why; models like Gemini Flash 2 have an e2e latency of maybe 2-3 seconds on OpenRouter and work fine in other programs.
But here it somehow gets massively overloaded and timeouts happen. This is all with the basic function_min example.
Anyone else having this issue?
EDIT: It seems no errors show up when using the OpenAI API. I guess something OpenEvolve is doing doesn't play well with other API providers somehow?
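For reference, here's roughly how I'm pointing OpenEvolve at OpenRouter (a sketch; the model id and the exact llm keys are illustrative and may not match your version):

```yaml
# config.yaml - pointing OpenEvolve at OpenRouter's OpenAI-compatible endpoint
# instead of the default OpenAI base URL.
llm:
  api_base: "https://openrouter.ai/api/v1"      # OpenRouter endpoint
  api_key: "sk-or-..."                          # OpenRouter API key (placeholder)
  primary_model: "google/gemini-2.0-flash-001"  # example model id on OpenRouter
```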