4 changes: 3 additions & 1 deletion .gitignore
@@ -10,4 +10,6 @@ nanda_agent/__pycache__
dist/
*.egg-info/

nanda_adapter/core/__pycache__
nanda_adapter/core/__pycache__

*/__pycache__/
92 changes: 89 additions & 3 deletions README.md
@@ -8,6 +8,7 @@ https://docs.google.com/presentation/d/16ehp8yq4-QjEu55unsI9rHJ8BMK9MAi1/edit?us
## Features

- **Multiple AI Frameworks**: Support for LangChain, CrewAI, and any custom logic.
- **Multiple LLM Providers**: Use Anthropic Claude, Hugging Face, or other providers.
- **Multi-protocol Communication**: Built-in protocol that allows universal communication.
- **Global Index**: Automatic agent discovery via the MIT NANDA Index.
- **SSL Support**: Production-ready with Let's Encrypt certificates.
@@ -52,6 +53,14 @@ pip install nanda-adapter

> export DOMAIN_NAME="<YOUR_DOMAIN_NAME.COM>"

**Alternative: Use Hugging Face instead of Anthropic**

> export HUGGINGFACE_API_KEY="hf_your-api-key-here"

> export HUGGINGFACE_MODEL="meta-llama/Llama-3.3-70B-Instruct"

> export DOMAIN_NAME="<YOUR_DOMAIN_NAME.COM>"

### 5. Run an example agent (langchain_pirate.py)
> nohup python3 langchain_pirate.py > out.log 2>&1 &
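
To confirm the agent started, you can watch the log file (plain shell usage, nothing specific to this project):

> tail -f out.log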

@@ -187,6 +196,50 @@ domain = os.getenv("DOMAIN_NAME")
nanda.start_server_api(anthropic_key, domain)
```

### Deploy with Hugging Face (Alternative to Anthropic)

You can use the Hugging Face Inference API instead of Anthropic:

```python
#!/usr/bin/env python3
from nanda_adapter import NANDA
import os

def create_simple_improvement():
    """Create a simple improvement function"""
    def simple_improvement(message_text: str) -> str:
        return f"[Enhanced] {message_text}"
    return simple_improvement

def main():
    nanda = NANDA(create_simple_improvement())

    # Use Hugging Face instead of Anthropic
    nanda.start_server_api(
        anthropic_key=None,  # Not needed for HuggingFace
        domain=os.getenv("DOMAIN_NAME"),
        llm_provider="huggingface",
        llm_model="meta-llama/Llama-3.3-70B-Instruct",
        llm_api_key=os.getenv("HUGGINGFACE_API_KEY")
    )

if __name__ == "__main__":
    main()
```
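
As with the quick-start example, you can run this agent in the background (the filename `huggingface_agent.py` is only a placeholder for wherever you saved this script):

> nohup python3 huggingface_agent.py > out.log 2>&1 &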

**For local development without SSL:**

```python
nanda.start_server_api(
    anthropic_key=None,
    domain="localhost",
    llm_provider="huggingface",
    llm_model="meta-llama/Llama-3.3-70B-Instruct",
    llm_api_key=os.getenv("HUGGINGFACE_API_KEY"),
    ssl=False  # Disable SSL for local testing
)
```

## Deploy from Scratch on a barebones machine (Ubuntu on Linode or Amazon Linux on EC2)

```bash
@@ -242,14 +295,47 @@ The framework will automatically:
## Appendix: Configuration Details

### Environment Variables
You need the following environment details ()

Copilot AI Jan 13, 2026

The documentation header "Environment Variables" is now missing a descriptive sentence. The original text "You need the following environment details ()" appears to have been removed, but the replacement doesn't include an introductory sentence before the "Core Settings" subsection. Consider adding a brief introduction like "Configure the following environment variables:"

Suggested change
Configure the following environment variables:
- `ANTHROPIC_API_KEY`: Your Anthropic API key (required)
- `DOMAIN_NAME`: Domain name for SSL certificates (required)
**Core Settings:**
- `DOMAIN_NAME`: Domain name for SSL certificates (required for production)
- `AGENT_ID`: Custom agent ID (optional, auto-generated if not provided)
- `PORT`: Agent bridge port (optional, default: 6000)
- `IMPROVE_MESSAGES`: Enable/disable message improvement (optional, default: true)

**LLM Provider Settings (choose one):**

*Anthropic (default):*
- `ANTHROPIC_API_KEY`: Your Anthropic API key
- `ANTHROPIC_MODEL`: Model to use (optional, default: claude-3-5-sonnet-20241022)

*Hugging Face:*
- `HUGGINGFACE_API_KEY`: Your Hugging Face API key
- `HUGGINGFACE_MODEL`: Model to use (optional, default: meta-llama/Llama-3.3-70B-Instruct)

*General:*
- `LLM_PROVIDER`: Provider to use - "anthropic" or "huggingface" (optional, default: anthropic)
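
For example, a Hugging Face setup can be expressed entirely through these variables. A sketch using only the variables documented above; the domain value is a placeholder:

```bash
# Select the LLM provider and supply its credentials
export LLM_PROVIDER="huggingface"
export HUGGINGFACE_API_KEY="hf_your-api-key-here"
export HUGGINGFACE_MODEL="meta-llama/Llama-3.3-70B-Instruct"

# Core settings
export DOMAIN_NAME="yourdomain.com"
export PORT=6000
export IMPROVE_MESSAGES=true
```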

### start_server_api() Parameters

```python
nanda.start_server_api(
    anthropic_key,      # Anthropic API key (or None if using another provider)
    domain,             # Domain name for the server
    agent_id=None,      # Custom agent ID (auto-generated if not provided)
    port=6000,          # Agent bridge port
    api_port=6001,      # Flask API port
    registry=None,      # Registry URL (optional)
    public_url=None,    # Public URL for Agent Bridge (optional)
    api_url=None,       # API URL for User Client (optional)
    cert=None,          # Path to SSL certificate (optional)
    key=None,           # Path to SSL key (optional)
    ssl=True,           # Enable SSL (default: True)
    llm_provider=None,  # "anthropic" or "huggingface" (default: anthropic)
    llm_model=None,     # Model name/ID (uses provider default if not set)
    llm_api_key=None    # API key for the LLM provider
)
```

### Production Deployment

For production deployment with SSL:
23 changes: 22 additions & 1 deletion nanda_adapter/__init__.py
@@ -4,6 +4,10 @@

This package provides a framework for creating customizable AI agents with pluggable
message improvement logic, built on top of the python_a2a communication framework.

Supports multiple LLM providers:
- Anthropic Claude (default)
- Hugging Face Inference API
"""

from .core.nanda import NANDA
@@ -14,6 +18,15 @@
    get_message_improver,
    list_message_improvers
)
from .core.llm_providers import (
    LLMProvider,
    AnthropicProvider,
    HuggingFaceProvider,
    get_provider,
    set_provider,
    create_provider,
    init_provider
)

__version__ = "1.0.0"
__author__ = "NANDA Team"
@@ -26,5 +39,13 @@
    "message_improver",
    "register_message_improver",
    "get_message_improver",
    "list_message_improvers"
    "list_message_improvers",
    # LLM Providers
    "LLMProvider",
    "AnthropicProvider",
    "HuggingFaceProvider",
    "get_provider",
    "set_provider",
    "create_provider",
    "init_provider"
]
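
The exported helpers can also be used directly. A minimal sketch, assuming the `get_provider()` / `provider.complete(...)` interface that `agent_bridge.py` uses below; `init_provider`'s exact signature is not shown in this diff, so its keyword arguments here are assumptions:

```python
from nanda_adapter import init_provider, get_provider

# Assumed signature; the real init_provider may take different arguments.
init_provider(
    provider="huggingface",
    model="meta-llama/Llama-3.3-70B-Instruct",
    api_key="hf_your-api-key-here",
)

provider = get_provider()
print(provider.name)  # e.g. "huggingface"

# complete() returns the response text, or None on failure,
# matching how agent_bridge.py calls it.
text = provider.complete("Say hello.", system="Be concise.", max_tokens=512)
print(text)
```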
19 changes: 18 additions & 1 deletion nanda_adapter/core/__init__.py
@@ -13,12 +13,29 @@
    get_message_improver,
    list_message_improvers
)
from .llm_providers import (
    LLMProvider,
    AnthropicProvider,
    HuggingFaceProvider,
    get_provider,
    set_provider,
    create_provider,
    init_provider
)

__all__ = [
"NANDA",
"AgentBridge",
"message_improver",
"register_message_improver",
"get_message_improver",
"list_message_improvers"
"list_message_improvers",
# LLM Providers
"LLMProvider",
"AnthropicProvider",
"HuggingFaceProvider",
"get_provider",
"set_provider",
"create_provider",
"init_provider"
]
73 changes: 28 additions & 45 deletions nanda_adapter/core/agent_bridge.py
@@ -7,27 +7,27 @@
import requests
from typing import Optional
from datetime import datetime
from anthropic import Anthropic, APIStatusError
from python_a2a import (
    A2AServer, A2AClient, run_server,
    Message, TextContent, MessageRole, ErrorContent, Metadata
)
import asyncio
from mcp_utils import MCPClient
import base64

# Handle different import contexts
try:
    from .llm_providers import get_provider, init_provider
    from .mcp_utils import MCPClient
except ImportError:
    from llm_providers import get_provider, init_provider
Comment on lines +19 to +22
Copilot AI Jan 13, 2026

Import of 'init_provider' is not used.

Suggested change
    from .llm_providers import get_provider, init_provider
    from .mcp_utils import MCPClient
except ImportError:
    from llm_providers import get_provider, init_provider
    from .llm_providers import get_provider
    from .mcp_utils import MCPClient
except ImportError:
    from llm_providers import get_provider
    from mcp_utils import MCPClient

import sys
sys.stdout.reconfigure(line_buffering=True)

# Set API key through environment variable or directly in the code
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY") or "your key"

# Toggle for message improvement feature
IMPROVE_MESSAGES = os.getenv("IMPROVE_MESSAGES", "true").lower() in ("true", "1", "yes", "y")

# Create Anthropic client with explicit API key
anthropic = Anthropic(api_key=ANTHROPIC_API_KEY)

# Get agent configuration from environment variables
def get_agent_id():
"""Get AGENT_ID dynamically from environment variables"""
Expand Down Expand Up @@ -58,7 +58,9 @@ def get_agent_id():
"default": "Improve the following message to make it more clear, compelling, and professional without changing the core content or adding fictional information. Keep the same overall meaning but enhance the phrasing and structure. Don't make it too verbose - keep it concise but impactful. Return only the improved message without explanations or introductions."
}

SMITHERY_API_KEY = os.getenv("SMITHERY_API_KEY") or "bfcb8cec-9d56-4957-8156-bced0bfca532"
SMITHERY_API_KEY = os.getenv("SMITHERY_API_KEY")
if not SMITHERY_API_KEY:
print("WARNING: SMITHERY_API_KEY not set - Smithery MCP servers will not work")
Copilot AI Jan 13, 2026

Print statement may execute during import.

def get_registry_url():
"""Get the registry URL from file or use default"""
Expand Down Expand Up @@ -153,7 +155,7 @@ def log_message(conversation_id, path, source, message_text):
print(f"Logged message from {source} in conversation {conversation_id}")

def call_claude(prompt: str, additional_context: str, conversation_id: str, current_path: str, system_prompt: str = None) -> Optional[str]:
"""Wrapper that never raises: returns text or None on failure."""
"""Wrapper that never raises: returns text or None on failure. Uses configured LLM provider."""
try:
# Use the specified system prompt or default to the agent's system prompt
if system_prompt:
@@ -165,60 +167,41 @@ def call_claude(prompt: str, additional_context: str, conversation_id: str, curr
        # Combine the prompt with additional context if provided
        full_prompt = prompt
        if additional_context and additional_context.strip():
            full_prompt = f"ADDITIONAL CONTEXT FRseOM USER: {additional_context}\n\nMESSAGE: {prompt}"
            full_prompt = f"ADDITIONAL CONTEXT FROM USER: {additional_context}\n\nMESSAGE: {prompt}"

        agent_id = get_agent_id()
        print(f"Agent {agent_id}: Calling Claude with prompt: {full_prompt[:50]}...")
        resp = anthropic.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=512,
            messages=[{"role":"user","content":full_prompt}],
            system=system
        )
        response_text = resp.content[0].text
        provider = get_provider()
        print(f"Agent {agent_id}: Calling {provider.name} with prompt: {full_prompt[:50]}...")

        response_text = provider.complete(full_prompt, system=system, max_tokens=512)

        # Log the Claude response
        log_message(conversation_id, current_path, f"Claude {agent_id}", response_text)
        if response_text:
            # Log the LLM response
            log_message(conversation_id, current_path, f"{provider.name} {agent_id}", response_text)

Comment on lines +178 to 181
Copilot AI Jan 13, 2026

Inconsistent error handling for missing response_text. In the updated code at line 178, the function checks 'if response_text:' before logging, but still returns response_text regardless. If response_text is None (which can happen from provider.complete() failures), the function should return None explicitly or handle the None case more clearly for the caller.

Suggested change
        if response_text:
            # Log the LLM response
            log_message(conversation_id, current_path, f"{provider.name} {agent_id}", response_text)
        if response_text is None:
            # Handle missing response_text from provider as a failure case
            print(f"Agent {agent_id}: {provider.name} returned no response_text", flush=True)
            return None
        # Log the LLM response
        log_message(conversation_id, current_path, f"{provider.name} {agent_id}", response_text)
        return response_text
    except APIStatusError as e:
        print(f"Agent {agent_id}: Anthropic API error:", e.status_code, e.message, flush=True)
        # If we hit a credit limit error, return a fallback message
        if "credit balance is too low" in str(e):
            return f"Agent {agent_id} processed (API credit limit reached): {prompt}"
    except Exception as e:
        print(f"Agent {agent_id}: Anthropic SDK error:", e, flush=True)
        agent_id = get_agent_id()
        print(f"Agent {agent_id}: LLM error:", e, flush=True)
        traceback.print_exc()
        return None

def call_claude_direct(message_text: str, system_prompt: str = None) -> Optional[str]:
    """Wrapper that never raises: returns text or None on failure."""
    """Wrapper that never raises: returns text or None on failure. Uses configured LLM provider."""
    try:
        # Use the specified system prompt or default to the agent's system prompt

        # Combine the prompt with additional context if provided
        full_prompt = f"MESSAGE: {message_text}"

        agent_id = get_agent_id()
        print(f"Agent {agent_id}: Calling Claude with prompt: {full_prompt[:50]}...")
        resp = anthropic.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=512,
            messages=[{"role":"user","content":full_prompt}],
            system=system_prompt
        )
        response_text = resp.content[0].text
        provider = get_provider()
        print(f"Agent {agent_id}: Calling {provider.name} with prompt: {full_prompt[:50]}...")

        # Log the Claude response
        response_text = provider.complete(full_prompt, system=system_prompt, max_tokens=512)

        return response_text
    except APIStatusError as e:
        print(f"Agent {agent_id}: Anthropic API error:", e.status_code, e.message, flush=True)
        # If we hit a credit limit error, return a fallback message
        if "credit balance is too low" in str(e):
            return f"Agent {agent_id} processed (API credit limit reached): {message_text}"
    except Exception as e:
        print(f"Agent {agent_id}: Anthropic SDK error:", e, flush=True)
        agent_id = get_agent_id()
        print(f"Agent {agent_id}: LLM error:", e, flush=True)
        traceback.print_exc()
        return None

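The calls above imply a small provider interface: each provider exposes a `name` plus a `complete(prompt, system=..., max_tokens=...)` method that returns the response text or None. A hypothetical base-class sketch reconstructed from that usage (the real `llm_providers` module may differ):

```python
from typing import Optional

class LLMProvider:
    """Hypothetical base class inferred from how agent_bridge.py uses
    providers; not necessarily the actual nanda_adapter implementation."""

    name: str = "base"

    def complete(self, prompt: str, system: Optional[str] = None,
                 max_tokens: int = 512) -> Optional[str]:
        """Return the model's response text, or None on failure."""
        raise NotImplementedError
```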