Add HuggingFace Inference and code optimization #20
Conversation
Pull request overview
This PR adds multi-provider LLM support to the NANDA framework, removes exposed API keys from code, and optimizes dependencies. The changes introduce an abstraction layer that allows using either Anthropic Claude or Hugging Face Inference API, improving flexibility and security.
Changes:
- Created new `llm_providers.py` module with provider abstraction supporting Anthropic and HuggingFace
- Removed hardcoded fallback API keys from `agent_bridge.py` and `mcp_utils.py`
- Updated `requirements.txt` to remove duplicates and add the `huggingface_hub` dependency
- Modified `nanda.py` to initialize and configure LLM providers with new optional parameters
- Refactored `mcp_utils.py` and `agent_bridge.py` to use the provider abstraction instead of direct Anthropic SDK calls
- Updated documentation in README.md with HuggingFace usage examples and configuration details
Reviewed changes
Copilot reviewed 7 out of 9 changed files in this pull request and generated 20 comments.
| File | Description |
|---|---|
| requirements.txt | Removed duplicate anthropic and pymongo entries, added huggingface_hub |
| nanda_adapter/core/llm_providers.py | New provider abstraction with AnthropicProvider and HuggingFaceProvider classes |
| nanda_adapter/core/nanda.py | Added LLM provider initialization with new parameters for provider selection |
| nanda_adapter/core/mcp_utils.py | Refactored to use provider abstraction, handles both dict and object response formats |
| nanda_adapter/core/agent_bridge.py | Removed hardcoded API keys, updated to use provider abstraction, fixed typo |
| nanda_adapter/core/__init__.py | Exported new LLM provider classes and functions |
| nanda_adapter/__init__.py | Exported new LLM provider classes and functions |
| README.md | Added HuggingFace documentation and updated environment variable documentation |
| .gitignore | Added wildcard pattern for __pycache__ directories |
```python
content.append({
    "type": "tool_use",
    "id": tool_call.id,
    "name": tool_call.function.name,
    "input": json.loads(tool_call.function.arguments) if isinstance(tool_call.function.arguments, str) else tool_call.function.arguments
})
```
**Copilot AI** (Jan 13, 2026):
The error handling for JSON parsing of tool call arguments could be improved. If `json.loads` fails but the arguments are already a dict, the code should handle this more robustly. Consider adding explicit error handling for the JSON parsing operation to prevent runtime errors when `tool_call.function.arguments` is neither a valid JSON string nor a dict.
Suggested change:

```python
raw_arguments = tool_call.function.arguments
# Robustly handle tool call arguments which may be a JSON string, dict, or other types
if isinstance(raw_arguments, dict):
    parsed_arguments = raw_arguments
elif isinstance(raw_arguments, str):
    try:
        parsed_arguments = json.loads(raw_arguments)
    except Exception:
        # Fall back to the raw string if JSON parsing fails
        parsed_arguments = raw_arguments
else:
    # For unexpected types, pass through as-is
    parsed_arguments = raw_arguments
content.append({
    "type": "tool_use",
    "id": tool_call.id,
    "name": tool_call.function.name,
    "input": parsed_arguments,
})
```
```python
if response_text:
    # Log the LLM response
    log_message(conversation_id, current_path, f"{provider.name} {agent_id}", response_text)
```
**Copilot AI** (Jan 13, 2026):
Inconsistent error handling for missing `response_text`. In the updated code at line 178, the function checks `if response_text:` before logging, but still returns `response_text` regardless. If `response_text` is `None` (which can happen from `provider.complete()` failures), the function should return `None` explicitly or handle the `None` case more clearly for the caller.
Suggested change:

```python
if response_text is None:
    # Handle missing response_text from provider as a failure case
    print(f"Agent {agent_id}: {provider.name} returned no response_text", flush=True)
    return None
# Log the LLM response
log_message(conversation_id, current_path, f"{provider.name} {agent_id}", response_text)
```
```python
import traceback
traceback.print_exc()
```
**Copilot AI** (Jan 13, 2026):
The traceback module is used at line 201 but is not imported at the top of the file. This will cause a NameError at runtime when an exception occurs. Add 'import traceback' to the imports section at the top of the file.
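The fix is just a module-level import. A minimal self-contained sketch, where `process` and `handle_message` are hypothetical stand-ins for the real handler code:

```python
import traceback  # imported at module level, so the except block below cannot NameError


def process(raw):
    # Hypothetical stand-in for the real work that may raise.
    raise RuntimeError("boom")


def handle_message(raw):
    try:
        return process(raw)
    except Exception:
        traceback.print_exc()  # works because traceback is imported at the top
        return None
```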
```python
content = response.get("content", [])

# Handle different content formats (Anthropic raw vs unified)
if hasattr(content, '__iter__') and not isinstance(content, (str, dict)):
```
**Copilot AI** (Jan 13, 2026):
Potential issue with iteration check. The condition `hasattr(content, '__iter__')` will be True for strings since strings are iterable in Python. The additional check `not isinstance(content, (str, dict))` prevents this, but the logic could be clearer. Consider simplifying this to check if content is a list directly: `isinstance(content, list)` would be more explicit and readable.
Suggested change:

```python
if isinstance(content, list):
```
```python
if hasattr(content, '__iter__') and not isinstance(content, (str, dict)):
    blocks = content
else:
    blocks = [content] if content else []
```
**Copilot AI** (Jan 13, 2026):
Duplicate code pattern for content iteration. The same logic for checking if content is iterable and converting it to blocks appears at lines 110-113 and again at lines 183-186. This duplicated logic should be extracted into a helper function to improve maintainability.
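A sketch of such a helper; the name `as_blocks` is invented here, and the body simply centralizes the duplicated inline check quoted above:

```python
def as_blocks(content):
    """Normalize provider response content into a list of blocks.

    Mirrors the duplicated inline logic: real iterables pass through,
    a single truthy str/dict block is wrapped, and falsy content yields [].
    """
    if hasattr(content, "__iter__") and not isinstance(content, (str, dict)):
        return list(content)
    return [content] if content else []
```

Both call sites would then reduce to `blocks = as_blocks(content)`.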
```python
from .llm_providers import get_provider, init_provider
from .mcp_utils import MCPClient
except ImportError:
    from llm_providers import get_provider, init_provider
```
**Copilot AI** (Jan 13, 2026):
Import of `init_provider` is not used.
Suggested change:

```python
from .llm_providers import get_provider
from .mcp_utils import MCPClient
except ImportError:
    from llm_providers import get_provider
```
```python
from .llm_providers import get_provider, init_provider
from .mcp_utils import MCPClient
except ImportError:
    from llm_providers import get_provider, init_provider
```
**Copilot AI** (Jan 13, 2026):
Import of `init_provider` is not used.
Suggested change:

```python
from .llm_providers import get_provider
from .mcp_utils import MCPClient
except ImportError:
    from llm_providers import get_provider
```
`nanda_adapter/core/nanda.py` (Outdated)
```python
from .llm_providers import init_provider, get_provider
except ImportError:
    # If running from parent directory, add current directory to path
    current_dir = os.path.dirname(os.path.abspath(__file__))
    sys.path.insert(0, current_dir)
    from agent_bridge import *
    import run_ui_agent_https
    from llm_providers import init_provider, get_provider
```
**Copilot AI** (Jan 13, 2026):
Import of `get_provider` is not used.
Suggested change:

```python
from .llm_providers import init_provider
except ImportError:
    # If running from parent directory, add current directory to path
    current_dir = os.path.dirname(os.path.abspath(__file__))
    sys.path.insert(0, current_dir)
    from agent_bridge import *
    import run_ui_agent_https
    from llm_providers import init_provider
```
```diff
-SMITHERY_API_KEY = os.getenv("SMITHERY_API_KEY") or "bfcb8cec-9d56-4957-8156-bced0bfca532"
+SMITHERY_API_KEY = os.getenv("SMITHERY_API_KEY")
+if not SMITHERY_API_KEY:
+    print("WARNING: SMITHERY_API_KEY not set - Smithery MCP servers will not work")
```
**Copilot AI** (Jan 13, 2026):
Print statement may execute during import: this code runs at module level, so the warning fires whenever the file is imported rather than when the key is actually needed.
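One way to avoid the import-time side effect is to read the key lazily. A sketch under the assumption that callers can fetch the key on demand; the function name is hypothetical:

```python
import os
from typing import Optional


def get_smithery_api_key() -> Optional[str]:
    """Read the key lazily so merely importing the module stays silent."""
    key = os.getenv("SMITHERY_API_KEY")
    if not key:
        print("WARNING: SMITHERY_API_KEY not set - Smithery MCP servers will not work")
    return key
```

Call sites that previously referenced the module-level constant would call this function instead, so the warning only prints when Smithery is actually used.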
Co-authored-by: Copilot <[email protected]>
- Removed exposed/fallback API keys from `nanda_adapter/core/agent_bridge.py` and `nanda_adapter/core/mcp_utils.py`.
- Abstracted the LLM provider to make it easy to add support for multiple types of agents in the future.
- Added HuggingFace Inference provider integration so that Hugging Face API keys work too.
- Fixed `requirements.txt` (the dependency listing file) by removing duplicates and unused dependencies.

Summary: the code is now secure, optimized, and abstracted for future use, and it supports HuggingFace Inference for anyone wanting to run low-cost inference models.