Merged

61 commits
2cfa4b4
Merge branch 'feature/mcp' into 'master'
Jun 9, 2025
039c0f2
Merge branch 'feature/mcp' into 'master'
Jun 10, 2025
0cc5029
Merge branch 'feature/mcp' into 'master'
Jun 11, 2025
6accdb8
fix(test): resolve Swoole dependency issue in non-Swoole environment …
lihq1403 Jun 11, 2025
5f8c3b9
feat: Enhance AwsBedrockCachePointManager to handle strategy instanti…
lihq1403 Jun 11, 2025
984fc40
fix(ci): upgrade Swoole version and add build dependencies - Upgrade …
lihq1403 Jun 11, 2025
0f41ce1
feat(ci): add PHP 8.3 support to test matrix
lihq1403 Jun 11, 2025
9aae094
feat(test): add comprehensive MCP unit tests using real stdio server …
lihq1403 Jun 11, 2025
1971295
feat(test): add allowed tools filtering in McpServerManager - Impleme…
lihq1403 Jun 11, 2025
cd1aa97
feat(McpServerManager): implement allowed tools filtering in tool lis…
lihq1403 Jun 11, 2025
4fbf57b
Merge branch 'feature/mcp' into 'master'
Jun 11, 2025
7e37168
feat(test): skip MCP tests if ApplicationContext container is not ava…
lihq1403 Jun 11, 2025
f12f8aa
feat(stdio_server): update log file path to use BASE_PATH for consist…
lihq1403 Jun 11, 2025
81f73c5
Merge branch 'feature/mcp' into 'master'
Jun 11, 2025
d261b0f
Merge branch 'feature/mcp' into master111
lihq1403 Jun 11, 2025
5ac7fbd
Merge branch 'master111' into 'master'
Jun 11, 2025
a905b26
Merge branch 'feature/mcp' into 'master'
Jun 12, 2025
4e5da50
feat: Add option key mapping for max_tokens in chat completion requests
lihq1403 Jun 13, 2025
d9de837
Merge branch 'fix/max_tokens' into 'master'
Jun 13, 2025
ee62795
feat(event): Dispatch AfterChatCompletionsEvent and AfterChatCompleti…
lihq1403 Jun 24, 2025
4eb1de3
Merge branch 'fix/event' into 'master'
Jun 24, 2025
68bf884
feat(mcp): Add headers support to McpServerConfig and related tests
lihq1403 Jul 6, 2025
70a19b3
Merge branch 'feature/mcp_headers' into 'master'
Jul 6, 2025
7dc2e07
feat(mcp): Add env property to McpServerConfig for environment variab…
lihq1403 Jul 8, 2025
69a586f
Merge branch 'feature/mcp_headers' into 'master'
Jul 8, 2025
890eb91
feat(mcp): Add env property to McpServerConfig for environment variab…
lihq1403 Jul 8, 2025
d2d03ea
Merge branch 'feature/mcp_headers' into 'master'
Jul 8, 2025
3f32642
feat(converse): Implement message processing with tool call grouping …
lihq1403 Jul 15, 2025
2b56972
feat(schema): Update meta schema URL and add embedded Draft-07 schema…
lihq1403 Jul 15, 2025
8c5871f
feat(converse): Enhance JSON decoding to handle empty arrays and objects
lihq1403 Jul 17, 2025
7689a6f
feat(toolcall): Modify argument serialization to convert empty arrays…
lihq1403 Jul 23, 2025
b416b28
feat(toolcall): Modify argument serialization to convert empty arrays…
lihq1403 Jul 23, 2025
63c0f8f
feat(toolcall): Return stream arguments if not empty in serialization
lihq1403 Jul 24, 2025
c94639a
feat(converse): Handle empty tool call arguments by converting to emp…
lihq1403 Jul 31, 2025
553175e
feat(converse): Update input schema to use stdClass for empty propert…
lihq1403 Jul 31, 2025
c962cc1
feat(tool): Add trigger_task tool and update user message to include …
lihq1403 Jul 31, 2025
70a0b3c
feat(api): Add HTTP handler configuration to ApiOptions and update cl…
lihq1403 Aug 4, 2025
01ec593
feat(ChatCompletion): Normalize finishReason values to OpenAI standards
lihq1403 Aug 4, 2025
05410d8
feat(logging): Implement logging configuration and enhance log data f…
lihq1403 Aug 7, 2025
c8502a5
fix(logging): Change debug logs to info level for better log visibili…
lihq1403 Aug 7, 2025
760b5ad
feat(logging): Add dynamic request ID generation and logging for chat…
lihq1403 Aug 7, 2025
708b1d9
feat(logging): Enhance logging with dynamic request ID and additional…
lihq1403 Aug 7, 2025
c949baa
fix(logging): Update request ID header from 'odin-request-id' to 'x-r…
lihq1403 Aug 7, 2025
1e53208
fix(logging): Remove redundant request options logging in AbstractCli…
lihq1403 Aug 8, 2025
d7c9552
feat(model): Implement fixed temperature setting for model options an…
lihq1403 Aug 8, 2025
45ed86d
fix(logging): Increase maximum string length threshold for logging fr…
lihq1403 Aug 8, 2025
2b66c47
feat(network): Implement network retry mechanism for chat requests wi…
lihq1403 Aug 8, 2025
8ba4bec
feat(model): Add support for wildcard matching in fixed temperature c…
lihq1403 Aug 8, 2025
8d6fd84
feat(model): Enhance model addition by caching instances based on type
lihq1403 Aug 8, 2025
a9112e0
refactor(model): Simplify error handling and remove unused methods in…
lihq1403 Aug 8, 2025
9984397
refactor(logging): Streamline request ID handling and logging methods…
lihq1403 Aug 8, 2025
0d7d89d
feat(exception): Add Azure OpenAI model error handling and improve co…
lihq1403 Aug 8, 2025
9b1338e
feat(exception): Enhance Azure OpenAI error handling for retryable se…
lihq1403 Aug 8, 2025
21c9276
refactor(cache): Improve null handling for cache point message retrie…
lihq1403 Aug 11, 2025
18403df
feat(model): Add response content validation to ensure non-empty mode…
lihq1403 Aug 11, 2025
2c3354b
feat(request): Implement message sequence validation in ChatCompletio…
lihq1403 Aug 11, 2025
066f007
refactor(tests): Update long text test cases to handle increased char…
lihq1403 Aug 11, 2025
494f81f
feat(model): Enhance response content validation to ensure valid Assi…
lihq1403 Aug 11, 2025
b1cfa28
fix(request): Update validation logic to ensure assistant messages wi…
lihq1403 Aug 12, 2025
403a51c
fix(request): Update temperature validation range from [0,1] to [0,2]…
lihq1403 Aug 13, 2025
2450776
feat(exception): Add embedding input size error handling and create e…
lihq1403 Aug 14, 2025
9 changes: 8 additions & 1 deletion composer.json
@@ -10,7 +10,13 @@
"autoload": {
"psr-4": {
"Hyperf\\Odin\\": "src/"
}
},
"classmap": [
"src/Api/Providers/AwsBedrock/ClassMap/"
],
"exclude-from-classmap": [
"vendor/aws/aws-sdk-php/src/Api/Validator.php"
]
},
"autoload-dev": {
"psr-4": {
@@ -28,6 +34,7 @@
"hyperf/config": "~2.2.0 || 3.0.* || 3.1.*",
"hyperf/di": "~2.2.0 || 3.0.* || 3.1.*",
"hyperf/logger": "~2.2.0 || 3.0.* || 3.1.*",
"hyperf/retry": "~2.2.0 || 3.0.* || 3.1.*",
"hyperf/qdrant-client": "*",
"justinrainbow/json-schema": "^6.3",
"yethee/tiktoken": "^0.1.2"
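A note on the autoload change above: `exclude-from-classmap` drops the AWS SDK's bundled `Validator` from Composer's class map, and the added `classmap` path presumably ships a drop-in replacement under `src/Api/Providers/AwsBedrock/ClassMap/`. After `composer dump-autoload`, a quick (hypothetical, not part of this PR) way to confirm which file Composer resolves is:

```php
<?php
// Hypothetical verification script, not part of this PR: check that Composer
// now resolves Aws\Api\Validator to the project override rather than the
// excluded vendor/aws/aws-sdk-php/src/Api/Validator.php.
require __DIR__ . '/vendor/autoload.php';

$reflection = new ReflectionClass(Aws\Api\Validator::class);
echo $reflection->getFileName() . PHP_EOL; // expected to point into src/Api/Providers/AwsBedrock/ClassMap/
```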
4 changes: 3 additions & 1 deletion examples/aws/aws_chat.php
@@ -43,8 +43,10 @@
new Logger(),
);
$model->setApiRequestOptions(new ApiOptions([
// Skip this if your environment does not need code
// Skip this if your environment does not need a proxy
'proxy' => env('HTTP_CLIENT_PROXY'),
// HTTP handler configuration - the ODIN_HTTP_HANDLER environment variable is supported
'http_handler' => env('ODIN_HTTP_HANDLER', 'auto'),
]));

$messages = [
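The new `http_handler` option above defaults to `'auto'` and is read from the `ODIN_HTTP_HANDLER` environment variable (see the "Add HTTP handler configuration to ApiOptions" commit). As a rough usage sketch, a caller could pin the handler instead of relying on auto-detection; the value `'curl'` below is an assumption, only `'auto'` appears in this diff:

```php
// Hypothetical continuation of the example above: pin the HTTP handler via the
// environment rather than relying on 'auto'. 'curl' is an assumed value; only
// 'auto' and the ODIN_HTTP_HANDLER variable are shown in this PR.
putenv('ODIN_HTTP_HANDLER=curl');

$model->setApiRequestOptions(new ApiOptions([
    'proxy' => env('HTTP_CLIENT_PROXY'),
    'http_handler' => env('ODIN_HTTP_HANDLER', 'auto'),
]));
```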
4 changes: 3 additions & 1 deletion examples/aws/aws_chat_stream.php
@@ -44,8 +44,10 @@
);

$model->setApiRequestOptions(new ApiOptions([
// Skip this if your environment does not need code
// Skip this if your environment does not need a proxy
'proxy' => env('HTTP_CLIENT_PROXY'),
// HTTP handler configuration - the ODIN_HTTP_HANDLER environment variable is supported
'http_handler' => env('ODIN_HTTP_HANDLER', 'auto'),
]));

echo '=== AWS Bedrock Claude 流式响应测试 ===' . PHP_EOL;
16 changes: 15 additions & 1 deletion examples/aws/aws_tool_use_agent.php
@@ -196,6 +196,19 @@
}
);

$taskTool = new ToolDefinition(
name: 'trigger_task',
description: '触发任务执行',
parameters: ToolParameters::fromArray([
'type' => 'object',
'properties' => [],
'required' => [],
]),
toolHandler: function () {
return ['status' => 'success', 'message' => '任务已触发'];
}
);

// Create an agent with all of the tools
$agent = new ToolUseAgent(
model: $model,
@@ -204,6 +217,7 @@
$calculatorTool->getName() => $calculatorTool,
$weatherTool->getName() => $weatherTool,
$translateTool->getName() => $translateTool,
$taskTool->getName() => $taskTool,
],
temperature: 0.6,
logger: $logger
@@ -213,7 +227,7 @@
echo "===== 顺序工具调用示例 =====\n";
$start = microtime(true);

$userMessage = new UserMessage('请计算 23 × 45,然后查询北京的天气,最后将"你好"翻译成英语。请详细说明每一步。');
$userMessage = new UserMessage('请计算 23 × 45,然后查询北京的天气,最后将"你好"翻译成英语,和触发任务。请详细说明每一步。');
$response = $agent->chat($userMessage);

$message = $response->getFirstChoice()->getMessage();
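The `trigger_task` tool added above takes no parameters, which touches the issue several later commits address ("Modify argument serialization to convert empty arrays…", "Update input schema to use stdClass for empty propert…"): in PHP, an empty `properties` array serializes to a JSON array instead of the object that schema validators and providers generally expect. A standalone illustration of that behavior, independent of the library:

```php
<?php
// Illustration only (independent of the library): why empty tool schemas and
// arguments need special handling when serialized.
echo json_encode(['type' => 'object', 'properties' => []]), PHP_EOL;
// {"type":"object","properties":[]}  <- empty PHP array becomes a JSON array
echo json_encode(['type' => 'object', 'properties' => new stdClass()]), PHP_EOL;
// {"type":"object","properties":{}}  <- stdClass produces the expected object
```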
138 changes: 138 additions & 0 deletions examples/aws/aws_tools.php
@@ -0,0 +1,138 @@
<?php

declare(strict_types=1);
/**
* This file is part of Hyperf.
*
* @link https://www.hyperf.io
* @document https://hyperf.wiki
* @contact group@hyperf.io
* @license https://github.com/hyperf/hyperf/blob/master/LICENSE
*/
! defined('BASE_PATH') && define('BASE_PATH', dirname(__DIR__, 2));

require_once dirname(__FILE__, 3) . '/vendor/autoload.php';

use Hyperf\Context\ApplicationContext;
use Hyperf\Di\ClassLoader;
use Hyperf\Di\Container;
use Hyperf\Di\Definition\DefinitionSourceFactory;
use Hyperf\Odin\Api\RequestOptions\ApiOptions;
use Hyperf\Odin\Logger;
use Hyperf\Odin\Message\AssistantMessage;
use Hyperf\Odin\Message\SystemMessage;
use Hyperf\Odin\Message\ToolMessage;
use Hyperf\Odin\Message\UserMessage;
use Hyperf\Odin\Model\AwsBedrockModel;
use Hyperf\Odin\Model\ModelOptions;
use Hyperf\Odin\Tool\Definition\ToolDefinition;
use Hyperf\Odin\Tool\Definition\ToolParameters;

use function Hyperf\Support\env;

ClassLoader::init();

$container = ApplicationContext::setContainer(new Container((new DefinitionSourceFactory())()));

// Create an AWS Bedrock model instance
// Using the Claude 3.7 Sonnet model ID
$model = new AwsBedrockModel(
'us.anthropic.claude-3-7-sonnet-20250219-v1:0',
[
'access_key' => env('AWS_ACCESS_KEY'),
'secret_key' => env('AWS_SECRET_KEY'),
'region' => env('AWS_REGION', 'us-east-1'),
],
new Logger(),
);
$model->setModelOptions(new ModelOptions([
'function_call' => true,
]));
$model->setApiRequestOptions(new ApiOptions([
// Skip this if your environment does not need a proxy
'proxy' => env('HTTP_CLIENT_PROXY'),
]));

echo '=== AWS Bedrock Claude 工具调用测试 ===' . PHP_EOL;
echo '支持函数调用功能' . PHP_EOL . PHP_EOL;

// Define a weather query tool
$weatherTool = new ToolDefinition(
name: 'weather',
description: '查询指定城市的天气信息',
parameters: ToolParameters::fromArray([
'type' => 'object',
'properties' => [
'city' => [
'type' => 'string',
'description' => '要查询天气的城市名称',
],
],
'required' => ['city'],
]),
toolHandler: function ($params) {
$city = $params['city'];
// Simulated weather data
$weatherData = [
'北京' => ['temperature' => '25°C', 'condition' => '晴朗', 'humidity' => '45%'],
'上海' => ['temperature' => '28°C', 'condition' => '多云', 'humidity' => '60%'],
'广州' => ['temperature' => '30°C', 'condition' => '阵雨', 'humidity' => '75%'],
'深圳' => ['temperature' => '29°C', 'condition' => '晴朗', 'humidity' => '65%'],
];

if (isset($weatherData[$city])) {
return $weatherData[$city];
}
return ['error' => '没有找到该城市的天气信息'];
}
);

$toolMessages = [
new SystemMessage('你是一位有用的天气助手,可以查询天气信息。'),
new UserMessage('同时查询明天 深圳和东莞的天气'),
AssistantMessage::fromArray(json_decode(
<<<'JSON'
{
"content": "我可以帮您查询明天深圳和东莞的天气信息",
"tool_calls": [
{
"id": "tooluse_NPtHekdGQpSCu0JphjkdHQ",
"function": {
"name": "weather",
"arguments": "{\"city\":\"深圳\"}"
},
"type": "function"
},
{
"id": "tooluse_eJJQosmHSDWThQN53aeOJA",
"function": {
"name": "weather",
"arguments": "{\"city\":\"东莞\"}"
},
"type": "function"
}
]
}
JSON,
true
)),
new ToolMessage('25 度', 'tooluse_NPtHekdGQpSCu0JphjkdHQ', 'weather', [
'city' => '深圳',
]),
new ToolMessage('26 度', 'tooluse_eJJQosmHSDWThQN53aeOJA', 'weather', [
'city' => '东莞',
]),
];

$start = microtime(true);

// Call the API with the tool definitions
$response = $model->chat($toolMessages, 0.7, 0, [], [$weatherTool]);

// Print the full response
$message = $response->getFirstChoice()->getMessage();
if ($message instanceof AssistantMessage) {
echo $message->getContent();
}

echo '耗时' . (microtime(true) - $start) . '秒' . PHP_EOL;
6 changes: 6 additions & 0 deletions examples/chat.php
@@ -17,6 +17,7 @@
use Hyperf\Di\ClassLoader;
use Hyperf\Di\Container;
use Hyperf\Di\Definition\DefinitionSourceFactory;
use Hyperf\Odin\Api\RequestOptions\ApiOptions;
use Hyperf\Odin\Logger;
use Hyperf\Odin\Message\AssistantMessage;
use Hyperf\Odin\Message\SystemMessage;
@@ -40,6 +41,11 @@
new Logger(),
);

$model->setApiRequestOptions(new ApiOptions([
// HTTP handler configuration - the ODIN_HTTP_HANDLER environment variable is supported
'http_handler' => env('ODIN_HTTP_HANDLER', 'auto'),
]));

$messages = [
new SystemMessage(''),
new UserMessage('请解释量子纠缠的原理,并举一个实际应用的例子'),
2 changes: 1 addition & 1 deletion examples/chat_doubao.php
@@ -47,7 +47,7 @@
$start = microtime(true);

// Use the non-streaming API call
$request = new ChatCompletionRequest($messages);
$request = new ChatCompletionRequest($messages, maxTokens: 8096);
$request->setThinking([
'type' => 'disabled',
]);
115 changes: 115 additions & 0 deletions examples/exception/azure_server_error_retry_example.php
@@ -0,0 +1,115 @@
<?php

declare(strict_types=1);
/**
* This file is part of Hyperf.
*
* @link https://www.hyperf.io
* @document https://hyperf.wiki
* @contact group@hyperf.io
* @license https://github.com/hyperf/hyperf/blob/master/LICENSE
*/
require_once __DIR__ . '/../../vendor/autoload.php';

use GuzzleHttp\Exception\RequestException;
use GuzzleHttp\Psr7\Request;
use GuzzleHttp\Psr7\Response;
use Hyperf\Odin\Exception\LLMException\ErrorMappingManager;
use Hyperf\Odin\Exception\LLMException\LLMNetworkException;
use Hyperf\Odin\Exception\LLMException\Model\LLMContentFilterException;

echo "=== Azure OpenAI 异常分类与重试机制演示 ===\n\n";

// Simulate two different kinds of Azure OpenAI 500 errors
$testCases = [
'model_error (内容过滤)' => [
'error_type' => 'model_error',
'message' => 'The model produced invalid content. Consider modifying your prompt if you are seeing this error persistently.',
'expected_exception' => LLMContentFilterException::class,
'retryable' => false,
'user_action' => '修改提示词内容',
],
'server_error (服务故障)' => [
'error_type' => 'server_error',
'message' => 'The server had an error while processing your request. Sorry about that!',
'expected_exception' => LLMNetworkException::class,
'retryable' => true,
'user_action' => '自动重试',
],
];

$errorMappingManager = new ErrorMappingManager();

foreach ($testCases as $caseName => $testCase) {
echo "🧪 测试场景: {$caseName}\n";
echo " Azure错误类型: {$testCase['error_type']}\n";
echo " Azure错误消息: {$testCase['message']}\n";

// Build a simulated Azure error response
$errorBody = json_encode([
'error' => [
'message' => $testCase['message'],
'type' => $testCase['error_type'],
'param' => null,
'code' => null,
],
]);

$request = new Request('POST', 'https://test-azure-openai.example.com/openai/deployments/test-gpt/chat/completions');
$response = new Response(500, ['Content-Type' => 'application/json'], $errorBody);

$requestException = new RequestException(
"Server error: {$testCase['message']}",
$request,
$response
);

// Run it through the exception mapping manager
$mappedException = $errorMappingManager->mapException($requestException);

echo " ✅ 映射结果:\n";
echo ' 异常类型: ' . get_class($mappedException) . "\n";
echo " 异常消息: {$mappedException->getMessage()}\n";
echo " HTTP状态码: {$mappedException->getStatusCode()}\n";
echo " 错误代码: {$mappedException->getErrorCode()}\n";

// Check the retry logic
$isRetryable = $mappedException instanceof LLMNetworkException;
echo ' 可重试: ' . ($isRetryable ? '✅ 是' : '❌ 否') . "\n";
echo " 用户操作: {$testCase['user_action']}\n";

// Verify the classification is correct
$isCorrectType = $mappedException instanceof $testCase['expected_exception'];
echo ' 分类正确: ' . ($isCorrectType ? '✅ 是' : '❌ 否') . "\n";

echo "\n";
}

echo "=== 重试机制逻辑演示 ===\n";
echo "在 AbstractModel::callWithNetworkRetry 中的重试条件:\n";
echo "```php\n";
echo "return \$throwable instanceof LLMNetworkException\n";
echo " || (\$throwable && \$throwable->getPrevious() instanceof LLMNetworkException);\n";
echo "```\n\n";

echo "📊 改进前后对比:\n";
echo "┌─────────────┬─────────────────────────┬────────────────────────────┐\n";
echo "│ 错误类型 │ 改进前 │ 改进后 │\n";
echo "├─────────────┼─────────────────────────┼────────────────────────────┤\n";
echo "│ model_error │ LLMContentFilterException│ LLMContentFilterException │\n";
echo "│ │ ❌ 不可重试 │ ❌ 不可重试 (正确) │\n";
echo "├─────────────┼─────────────────────────┼────────────────────────────┤\n";
echo "│ server_error│ LLMApiException │ LLMNetworkException │\n";
echo "│ │ ❌ 不可重试 │ ✅ 可重试 (正确) │\n";
echo "└─────────────┴─────────────────────────┴────────────────────────────┘\n\n";

echo "🎯 **重要改进**:\n";
echo "1. ✅ Azure OpenAI 服务故障 (server_error) 现在可以自动重试\n";
echo "2. ✅ 内容过滤错误 (model_error) 仍然不会重试,需要用户修改提示词\n";
echo "3. ✅ 状态码和错误信息都被正确保留\n";
echo "4. ✅ 为用户提供了更准确的错误处理建议\n\n";

echo "💡 **对你的 OpenAI 代理接口的影响**:\n";
echo "- 暂时性服务故障会自动重试,提升可用性\n";
echo "- 用户收到更准确的错误类型和处理建议\n";
echo "- 减少因 Azure 服务抖动造成的请求失败\n";