Open
Labels
enhancement (New feature or request)
Description
Do you need to file an issue?
- I have searched the existing issues and this feature is not already filed.
- My model is hosted on OpenAI or Azure. If not, please look at the "model providers" issue and don't file a new one here.
- I believe this is a legitimate feature request, not just a question. If this is a question, please use the Discussions area.
Is your feature request related to a problem? Please describe.
Hello, I have observed that the LLM will often hallucinate during summarisation when it encounters an acronym that is not defined in the prompt context. This is common with large documents, where the acronyms are defined at the beginning and then used throughout.
Describe the solution you'd like
It would be useful if GraphRAG extracted and kept a record, for each source document, of the acronym definitions found in its text units. These definitions could then be inserted as JSON into the summarisation prompts, to help the LLM expand the acronyms correctly. A rough sketch of the idea follows below.
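To illustrate (this is only a sketch, not GraphRAG's actual API; the function names, the regex, and the prompt wording are my own assumptions), acronym definitions could be harvested from the text units with a simple pattern match and merged into a JSON block that is prepended to the summarisation prompt:

```python
import json
import re

# Sketch only -- not GraphRAG's API. Matches definitions of the form
# "Long Form Name (LFN)" and records acronym -> expansion.
ACRONYM_PATTERN = re.compile(
    r"\b((?:[A-Z][a-zA-Z]+\s+){1,6}[A-Z][a-zA-Z]+)\s*\(([A-Z]{2,10})\)"
)

def extract_acronyms(text: str) -> dict:
    """Return a mapping of acronym -> expansion found in one text unit."""
    definitions = {}
    for expansion, acronym in ACRONYM_PATTERN.findall(text):
        # Keep candidates whose word initials end with the acronym letters,
        # e.g. "Quarterly Business Review" -> "QBR".
        initials = "".join(word[0].upper() for word in expansion.split())
        if initials.endswith(acronym):
            definitions[acronym] = expansion.strip()
    return definitions

def acronym_context_json(text_units: list) -> str:
    """Merge the definitions found across a document's text units into JSON."""
    merged = {}
    for unit in text_units:
        merged.update(extract_acronyms(unit))
    return json.dumps(merged, indent=2)

# Example usage: prepend the definitions to a summarisation prompt.
units = [
    "The Quarterly Business Review (QBR) covers all regions.",
    "Each QBR is prepared by the Customer Success Team (CST).",
]
prompt = (
    "Known acronym definitions (use these; do not guess others):\n"
    + acronym_context_json(units)
    + "\n\nSummarise the following text:\n..."
)
print(prompt)
```

A regex is only one way to do the extraction; an LLM pass over each text unit would catch definitions that do not follow the "Long Form (LF)" pattern, at the cost of extra calls.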
Additional context
In the absence of this feature, it would make sense to instruct the LLM not to interpret acronyms unless they are clearly defined in the prompt context, for example by appending a guard instruction to the summarisation prompt (see the sketch below).
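The wording here is illustrative only, not GraphRAG's actual prompt text:

```python
# Illustrative only: a guard instruction appended to the summarisation prompt
# until native acronym extraction is available.
ACRONYM_GUARD = (
    "Do not expand or interpret any acronym unless its definition appears "
    "explicitly in the text provided. If an acronym is undefined, reproduce "
    "it verbatim and do not guess its meaning."
)

summarise_prompt = "Summarise the following text:\n{input_text}\n\n" + ACRONYM_GUARD
```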