2 changes: 1 addition & 1 deletion AGENTS.md
@@ -24,7 +24,7 @@ agentic architectures that range from simple tasks to complex workflows.
interacts with various services like session management, artifact storage,
and memory, and integrates with application-wide plugins. The runner
provides different execution modes: `run_async` for asynchronous execution
-in production, `run_live` for bi-directional streaming interaction, and
+in production, `run_live` for bidirectional streaming interaction, and
`run` for synchronous execution suitable for local testing and debugging. At
the end of each invocation, it can perform event compaction to manage
session history size.
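
As a side note on the execution modes named here, below is a minimal sketch of the `run_async` path, assuming ADK's `InMemoryRunner` and a simple `LlmAgent`; the app name, model string, and message are illustrative placeholders, not the repository's own configuration.

```python
# Minimal sketch of the asynchronous execution mode (run_async) described above.
# The agent definition, app name, and model string are illustrative placeholders.
import asyncio

from google.adk.agents import LlmAgent
from google.adk.runners import InMemoryRunner
from google.genai import types

root_agent = LlmAgent(
    name="assistant",
    model="gemini-2.0-flash",  # placeholder model name
    instruction="Answer the user's question concisely.",
)


async def main() -> None:
  runner = InMemoryRunner(agent=root_agent, app_name="demo_app")
  session = await runner.session_service.create_session(
      app_name="demo_app", user_id="user_1"
  )
  message = types.Content(role="user", parts=[types.Part(text="Hello there")])
  # run_async yields events as the agent works through the turn.
  async for event in runner.run_async(
      user_id="user_1", session_id=session.id, new_message=message
  ):
    if event.content and event.content.parts and event.content.parts[0].text:
      print(event.content.parts[0].text)


if __name__ == "__main__":
  asyncio.run(main())
```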
@@ -7,7 +7,7 @@ This sample demonstrates how to use the `ApplicationIntegrationToolset` within a
## Prerequisites

1. **Set up Integration Connection:**
-* You need an existing [Integration connection](https://cloud.google.com/integration-connectors/docs/overview) configured to interact with your Jira instance. Follow the [documentation](https://google.github.io/adk-docs/tools/google-cloud-tools/#use-integration-connectors) to provision the Integration Connector in Google Cloud and then use this [documentation](https://cloud.google.com/integration-connectors/docs/connectors/jiracloud/configure) to create an Jira connection. Note the `Connection Name`, `Project ID`, and `Location` of your connection.
+* You need an existing [Integration connection](https://cloud.google.com/integration-connectors/docs/overview) configured to interact with your Jira instance. Follow the [documentation](https://google.github.io/adk-docs/tools/google-cloud-tools/#use-integration-connectors) to provision the Integration Connector in Google Cloud and then use this [documentation](https://cloud.google.com/integration-connectors/docs/connectors/jiracloud/configure) to create a Jira connection. Note the `Connection Name`, `Project ID`, and `Location` of your connection.
*

2. **Configure Environment Variables:**
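
For orientation, a hedged sketch of what wiring the provisioned connection into an agent can look like; the argument names follow the commonly documented `ApplicationIntegrationToolset` shape, and the project, location, connection, and entity values are placeholders rather than this sample's actual configuration.

```python
# Illustrative sketch only: the project, location, connection, and entity values
# are placeholders, and the entity_operations shape is an assumption; check the
# sample's own agent code for the exact arguments it uses.
from google.adk.agents import LlmAgent
from google.adk.tools.application_integration_tool import ApplicationIntegrationToolset

jira_toolset = ApplicationIntegrationToolset(
    project="my-gcp-project",         # Project ID noted during setup
    location="us-central1",           # Location of the connection
    connection="my-jira-connection",  # Connection Name noted during setup
    entity_operations={"Issues": ["GET", "LIST"]},  # assumed entity and operations
)

root_agent = LlmAgent(
    name="jira_agent",
    model="gemini-2.0-flash",  # placeholder model name
    instruction="Use the Jira connector tools to look up and summarize issues.",
    tools=[jira_toolset],
)
```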
2 changes: 1 addition & 1 deletion contributing/samples/bigquery_mcp/README.md
@@ -4,7 +4,7 @@

This sample agent demonstrates using ADK's `McpToolset` to interact with
BigQuery's official MCP endpoint, allowing an agent to access and execute
-toole by leveraging the Model Context Protocol (MCP). These tools include:
+tools by leveraging the Model Context Protocol (MCP). These tools include:


1. `list_dataset_ids`
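
As a rough orientation, attaching an MCP server to an agent via `McpToolset` generally looks like the sketch below; the toolset class name is taken from this README, but the connection-parameter classes and the server command are assumptions and placeholders, not the sample's actual BigQuery MCP configuration.

```python
# Hedged sketch of wiring an MCP server into an agent. The connection-parameter
# class names and the server command/args are assumptions and placeholders; use
# the sample's own instructions for the real BigQuery MCP endpoint.
from google.adk.agents import LlmAgent
from google.adk.tools.mcp_tool import McpToolset, StdioConnectionParams
from mcp import StdioServerParameters

bigquery_tools = McpToolset(
    connection_params=StdioConnectionParams(
        server_params=StdioServerParameters(
            command="your-bigquery-mcp-server",    # placeholder command
            args=["--project", "my-gcp-project"],  # placeholder args
        )
    )
)

root_agent = LlmAgent(
    name="bigquery_agent",
    model="gemini-2.0-flash",  # placeholder model name
    instruction="Use the BigQuery MCP tools to list datasets and tables and run queries.",
    tools=[bigquery_tools],
)
```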
@@ -32,7 +32,7 @@ response. This keeps the turn events small, saving context space.
the *next* request to the LLM. This makes the report data available
immediately, allowing the agent to summarize it or answer questions in the
same turn, as seen in the logs. This artifact is only appended for that
-round and not saved to session. For furtuer rounds of conversation, it will
+round and not saved to session. For further rounds of conversation, it will
be removed from context.
3. **Loading on Demand**: The `CustomLoadArtifactsTool` enhances the default
`load_artifacts` behavior.
2 changes: 1 addition & 1 deletion contributing/samples/mcp_stdio_notion_agent/README.md
@@ -1,6 +1,6 @@
# Notion MCP Agent

-This is an agent that is using Notion MCP tool to call Notion API. And it demonstrate how to pass in the Notion API key.
+This is an agent that is using Notion MCP tool to call Notion API. And it demonstrates how to pass in the Notion API key.

Follow below instruction to use it:

2 changes: 1 addition & 1 deletion contributing/samples/multi_agent_seq_config/README.md
@@ -6,7 +6,7 @@ The whole process is:

1. An agent backed by a cheap and fast model to write initial version.
2. An agent backed by a smarter and a little more expensive to review the code.
-3. An final agent backed by the smartest and slowest model to write the final revision.
+3. A final agent backed by the smartest and slowest model to write the final revision.
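
A minimal sketch of this three-stage pipeline with `SequentialAgent`; the model strings are placeholders chosen only to suggest the cheap-to-smartest progression, not the sample's configured models.

```python
# Sketch of the write -> review -> revise pipeline; model names are placeholders.
from google.adk.agents import LlmAgent, SequentialAgent

writer = LlmAgent(
    name="initial_writer",
    model="gemini-2.0-flash-lite",  # cheap and fast
    instruction="Write a first draft of the requested code.",
)
reviewer = LlmAgent(
    name="code_reviewer",
    model="gemini-2.0-flash",  # smarter, a bit more expensive
    instruction="Review the draft and list concrete improvements.",
)
finalizer = LlmAgent(
    name="final_writer",
    model="gemini-2.5-pro",  # smartest and slowest (placeholder name)
    instruction="Apply the review feedback and produce the final revision.",
)

root_agent = SequentialAgent(
    name="code_pipeline",
    sub_agents=[writer, reviewer, finalizer],
)
```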

Sample queries:

2 changes: 1 addition & 1 deletion contributing/samples/oauth_calendar_agent/README.md
@@ -7,7 +7,7 @@ This sample tests and demos the OAuth support in ADK via two tools:
* 1. list_calendar_events

This is a customized tool that calls Google Calendar API to list calendar
-events. It pass in the client id and client secrete to ADK and then get back
+events. It passes in the client id and client secrete to ADK and then get back
the access token from ADK. And then it uses the access token to call
calendar api.
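
To make the flow described here concrete, below is a compressed sketch of how a custom ADK tool can exchange the configured client credentials for an access token via `ToolContext`; the two helper functions are hypothetical stand-ins and the auth-response field access is an assumption, not the sample's exact code.

```python
# Hedged sketch of the OAuth handshake inside a custom tool. The two helpers are
# hypothetical stand-ins, and the auth-response field access is an assumption;
# see the sample's list_calendar_events implementation for the real code.
from google.adk.tools import ToolContext


def list_calendar_events(
    start_time: str, end_time: str, tool_context: ToolContext
) -> dict:
  auth_config = build_calendar_auth_config()  # hypothetical helper wrapping client id/secret
  auth_response = tool_context.get_auth_response(auth_config)
  if auth_response is None:
    # No token yet: ask ADK to run the OAuth flow; the tool completes on a later turn.
    tool_context.request_credential(auth_config)
    return {"pending": True, "message": "Waiting for user authorization."}

  access_token = auth_response.oauth2.access_token  # assumed field path
  return {"events": fetch_events(access_token, start_time, end_time)}  # hypothetical helper
```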

2 changes: 1 addition & 1 deletion contributing/samples/spanner_rag_agent/README.md
@@ -181,7 +181,7 @@ type.

## 💬 Sample prompts

-* I'd like to buy a starter bike for my 3 year old child, can you show me the recommendation?
+* I'd like to buy a starter bike for my 3-year-old child, can you show me the recommendation?

![Spanner RAG Sample Agent](Spanner_RAG_Sample_Agent.png)

2 changes: 1 addition & 1 deletion src/google/adk/cli/built_in_agents/tools/write_files.py
@@ -51,7 +51,7 @@ async def write_files(
- file_size: size of written file in bytes
- existed_before: bool indicating if file existed before write
- backup_created: bool indicating if backup was created
-- backup_path: path to backup file if created
+- backup_path: path to back up file if created
- error: error message if write failed for this file
- successful_writes: number of files written successfully
- total_files: total number of files requested
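
For readability, a hypothetical return value matching the fields listed above, for a two-file request where one write succeeds and one fails; the top-level `files` key and the per-file `path` field are assumptions, the rest follows the docstring.

```python
# Hypothetical result for a two-file request. Per-file and summary field names
# follow the docstring above; the "files" key and "path" field are assumptions.
example_result = {
    "files": [
        {
            "path": "notes/todo.md",
            "file_size": 128,
            "existed_before": True,
            "backup_created": True,
            "backup_path": "notes/todo.md.bak",
        },
        {
            "path": "readonly/config.yaml",
            "error": "Permission denied",
        },
    ],
    "successful_writes": 1,
    "total_files": 2,
}
```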
6 changes: 3 additions & 3 deletions src/google/adk/flows/llm_flows/base_llm_flow.py
@@ -73,7 +73,7 @@
class BaseLlmFlow(ABC):
"""A basic flow that calls the LLM in a loop until a final response is generated.

-This flow ends when it transfer to another agent.
+This flow ends when it transfers to another agent.
"""

def __init__(self):
@@ -393,8 +393,8 @@ async def _run_one_step_async(
current_invocation=True, current_branch=True
)

-# Long running tool calls should have been handled before this point.
-# If there are still long running tool calls, it means the agent is paused
+# Long-running tool calls should have been handled before this point.
+# If there are still long-running tool calls, it means the agent is paused
# before, and its branch hasn't been resumed yet.
if (
invocation_context.is_resumable
2 changes: 1 addition & 1 deletion src/google/adk/tools/pubsub/__init__.py
@@ -14,7 +14,7 @@

"""Pub/Sub Tools (Experimental).

-Pub/Sub Tools under this module are hand crafted and customized while the tools
+Pub/Sub Tools under this module are handcrafted and customized while the tools
under google.adk.tools.google_api_tool are auto generated based on API
definition. The rationales to have customized tool are:

4 changes: 2 additions & 2 deletions src/google/adk/tools/spanner/settings.py
@@ -115,7 +115,7 @@ class VectorSearchIndexSettings(BaseModel):
"""

num_branches: Optional[int] = None
"""Optional. The number of branches to further parititon the vector data.
"""Optional. The number of branches to further partition the vector data.

You can only designate num_branches for trees with 3 levels.
The number of branches must be fewer than the number of leaves
@@ -165,7 +165,7 @@ class SpannerVectorStoreSettings(BaseModel):
"""Required. The vector store table columns to return in the vector similarity search result.

By default, only the `content_column` value and the distance value are returned.
-If sepecified, the list of selected columns and the distance value are returned.
+If specified, the list of selected columns and the distance value are returned.
For example, if `selected_columns` is ['col1', 'col2'], then the result will contain the values of 'col1' and 'col2' columns and the distance value.
"""
