Reduce provided task data to reduce size of total data #1
base: main
Conversation
I'm not sure that those fields aren't useful.
I'm using the ticktick_mcp in combination with an LLM (tested with Azure-OpenAI and with Gemini-2.5) in my own web client. I discovered two disadvantages of providing a lot of unneeded task data to the LLM: the responses become large, which can exceed the LLM's input-token limit, and the LLM gets confused by properties that carry no useful information.
That is why I have implemented this PR to remove all unneeded information from the task data. Fields whose values are merely the defaults and carry no useful information are also removed. Comparing a single task with all of its information against the reduced information for the same task shows that all still important and relevant information is included.
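For illustration only (the actual before/after examples from the comment are not reproduced here), a hypothetical task object with typical TickTick-style fields and its reduced counterpart might look like this:

```python
# Hypothetical illustration only: field names mirror common TickTick task
# properties; this is not the exact example from the comment above.
full_task = {
    "id": "6247ee29630c800f064fd145",
    "projectId": "6226ff9877acee87727f6bca",
    "title": "Write weekly report",
    "content": "",
    "desc": "",
    "isAllDay": False,
    "dueDate": "2025-01-10T09:00:00+0000",
    "priority": 0,       # 0 = no priority set
    "status": 0,         # 0 = not completed
    "progress": 0,
    "deleted": 0,
    "imgMode": 0,
    "reminders": [],
    "tags": [],
}

# After stripping zero-value integers, empty strings and empty lists,
# only the fields that actually carry information remain.
reduced_task = {
    "id": "6247ee29630c800f064fd145",
    "projectId": "6226ff9877acee87727f6bca",
    "title": "Write weekly report",
    "dueDate": "2025-01-10T09:00:00+0000",
}
```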
src/ticktick_mcp/tools/task_tools.py
Outdated
# Remove zero value integer properties
zeroValueProps = ["deleted", "imgMode", "priority", "progress", "status"]
for prop in zeroValueProps:
    if prop in task and task.get(prop, 0) == 0:
        del task[prop]
It's still useful even if the value is 0. Does it have to be removed?
Leaving the 0-values in does not add any information, but it does produce larger responses (bigger JSON objects), and with hundreds of tasks being returned this causes problems where the LLM only interprets the first part of the response. That is why all unnecessary data is stripped: it is faster, the LLM's input-token limit is not reached, and the LLM is less confused by the unnecessary data.
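To make the size argument concrete, here is a small, hypothetical sketch (not a benchmark from this PR) that measures how much the serialized JSON shrinks once zero-value and empty fields are dropped from a few hundred tasks:

```python
import json

# Hypothetical task shape; the zero-value fields mirror those named in the diff above.
task = {
    "id": "abc123",
    "title": "Example task",
    "priority": 0,
    "status": 0,
    "progress": 0,
    "deleted": 0,
    "imgMode": 0,
    "reminders": [],
    "tags": [],
}

def strip_uninformative(t: dict) -> dict:
    # Drop integer properties equal to 0, empty strings and empty lists.
    return {k: v for k, v in t.items() if v not in (0, "", [])}

tasks = [dict(task) for _ in range(300)]
before = len(json.dumps(tasks))
after = len(json.dumps([strip_uninformative(t) for t in tasks]))
print(before, after)  # the filtered payload is only a fraction of the original size
```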
…planatory comments
Adds the `_filter_unneeded_properties` function, which removes task properties that are not very useful when a list of tasks is retrieved. This reduces the size of the total content that is passed from the MCP server to the MCP client, and thus very probably also the amount of data transmitted to the LLM. Another benefit is that the LLM is not confused by all kinds of properties that are listed but do not contain useful information.
Retrieving the full information of a task is still possible by using the `ticktick_get_by_id` function, which still provides all the available properties and all information for a specific task.
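A minimal sketch of how this could fit together, assuming hypothetical wrapper names (`list_tasks`, `get_task_by_id`) around the `_filter_unneeded_properties` idea; only the list path filters, while the get-by-id path keeps the full task:

```python
# Sketch only: the UNNEEDED_PROPS list and the wrapper names are assumptions,
# not the actual code of this PR; ZERO_VALUE_PROPS matches the diff above.
UNNEEDED_PROPS = ["sortOrder", "timeZone", "etag", "kind"]   # assumed examples
ZERO_VALUE_PROPS = ["deleted", "imgMode", "priority", "progress", "status"]

def _filter_unneeded_properties(task: dict) -> dict:
    """Return a copy of the task with uninformative properties removed."""
    filtered = dict(task)
    for prop in UNNEEDED_PROPS:
        filtered.pop(prop, None)
    # Remove integer properties whose value is 0.
    for prop in ZERO_VALUE_PROPS:
        if filtered.get(prop) == 0:
            del filtered[prop]
    # Remove empty strings, empty lists and None values.
    for prop in [k for k, v in filtered.items() if v in ("", [], None)]:
        del filtered[prop]
    return filtered

def list_tasks(raw_tasks: list[dict]) -> list[dict]:
    # The list path filters every task to keep the response small for the LLM.
    return [_filter_unneeded_properties(t) for t in raw_tasks]

def get_task_by_id(raw_task: dict) -> dict:
    # The single-task path (cf. ticktick_get_by_id) keeps the full payload.
    return raw_task
```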