
Want to build intelligent agents with real-world capabilities? With Google ADK you can build agents that reason, delegate, and respond dynamically. This Google ADK tutorial walks you through building conversational agents across different language models such as Gemini and GPT. Whether you are exploring Google ADK for AI agents or simply curious about how to create AI agents with it, this hands-on guide will help you start your agent-development journey with ease and clarity.
What is the Agent Development Kit?
The Agent Development Kit (ADK) is a flexible, modular framework for developing and deploying AI agents. It works with popular LLMs and open-source generative AI tools, and integrates tightly with the Google ecosystem and Gemini models. ADK makes it easy to start with simple agents powered by Gemini models and Google AI tooling, while providing the control and structure needed for more complex agent architectures and orchestration.
Features of the Google Agent Development Kit
- Multi-agent architecture: compose agents in parallel, sequential, or hierarchical workflows.
- Flexible orchestration: route tasks dynamically with LLM-driven workflows.
- Rich tool ecosystem: use built-in, custom, and third-party tools seamlessly.
- Model-agnostic: supports Gemini, GPT-4o, Claude, Mistral, and more.
- Streaming capabilities: real-time streaming of text, audio, and video.
- Developer-friendly tooling: CLI, web UI, visual debugging, and evaluation tools.
- Memory and state management: built-in session and long-term memory handling.
- Artifact handling: manage files, outputs, and binary data with ease.
- Intelligent execution: agents can run code and handle multi-step planning.
- Versatile deployment: run locally, on Google Cloud (Vertex AI, Cloud Run), or in Docker.
Problem Statement
As AI systems evolve from single-purpose tools into collaborative multi-agent ecosystems, developers need practical guidance on building and orchestrating intelligent agents that can communicate, delegate, and adapt. To bridge this gap, we will build a Weather Bot Team: a multi-agent system that answers weather-related queries while also handling user interactions such as greetings, farewells, and safe responses.
This hands-on project demonstrates how to:
- Design a modular multi-agent system with Google's Agent Development Kit (ADK).
- Integrate multiple language models (e.g., Gemini, GPT, Claude) for task specialization.
- Implement intelligent task delegation across agents.
- Manage session memory for contextual continuity.
- Apply safety mechanisms through structured callbacks.
By working through this problem, you will gain hands-on experience with ADK architecture, orchestration, memory management, and safety best practices, laying the foundation for more complex, real-world agent applications.
You can refer to the accompanying Colab notebook to guide you through the hands-on implementation.
Suggested Workflow

Prerequisites
Before diving into the code, make sure you have completed the following setup steps:
1. Set Up the Environment and Install ADK
First, create and activate a virtual environment to isolate your project dependencies:
# Create a virtual environment
python -m venv .venv
With the environment created, activate it:
# Activate the environment
# macOS/Linux:
source .venv/bin/activate
# Windows CMD:
.venv\Scripts\activate.bat
# Windows PowerShell:
.venv\Scripts\Activate.ps1
Once the environment is active, install the Google Agent Development Kit (ADK):
pip install google-adk
2. Obtain API Keys
You will need API keys to interact with the different AI models. Obtain them from the following sources:
- Gemini: Google AI Studio
- GPT models: the OpenAI platform
- Claude models: the Anthropic console
Steps to Build the Weather App
Step 1: Setup and Installation
Install the libraries the project needs:
# Install Google ADK and LiteLLM
!pip install google-adk -q
!pip install litellm -q
Import the libraries:
import os
import asyncio
import warnings
import logging

from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm  # For multi-model support
from google.adk.sessions import InMemorySessionService
from google.adk.runners import Runner
from google.genai import types  # For creating message Content/Parts

# Ignore all warnings
warnings.filterwarnings("ignore")
logging.basicConfig(level=logging.ERROR)
Set the API keys:
# Gemini API Key
os.environ["GOOGLE_API_KEY"] = "YOUR_GOOGLE_API_KEY"
# OpenAI API Key
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
# Anthropic API Key
os.environ["ANTHROPIC_API_KEY"] = "YOUR_ANTHROPIC_API_KEY"

print("API Keys Set:")
print(f"Google API Key set: {'Yes' if os.environ.get('GOOGLE_API_KEY') and os.environ['GOOGLE_API_KEY'] != 'YOUR_GOOGLE_API_KEY' else 'No (REPLACE PLACEHOLDER!)'}")
print(f"OpenAI API Key set: {'Yes' if os.environ.get('OPENAI_API_KEY') and os.environ['OPENAI_API_KEY'] != 'YOUR_OPENAI_API_KEY' else 'No (REPLACE PLACEHOLDER!)'}")
print(f"Anthropic API Key set: {'Yes' if os.environ.get('ANTHROPIC_API_KEY') and os.environ['ANTHROPIC_API_KEY'] != 'YOUR_ANTHROPIC_API_KEY' else 'No (REPLACE PLACEHOLDER!)'}")

# Configure ADK to use API keys directly (not Vertex AI for this multi-model setup)
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = "False"
Define model constants for convenient reuse:
MODEL_GEMINI_2_0_FLASH = "gemini-2.0-flash"
MODEL_GPT_4O = "openai/gpt-4o"
MODEL_CLAUDE_SONNET = "anthropic/claude-3-sonnet-20240229"

print("\nEnvironment configured.")
Step 2: Define the Tools
In ADK, tools are the functional building blocks that let agents do more than generate text. They are typically plain Python functions that perform real actions, such as fetching weather data, querying a database, or running a calculation.
First, we'll create a mock weather tool that simulates weather lookups. This lets us focus on the agent's structure without needing an external API. Later, we can easily swap it out for a real weather service.
Code:
def get_weather(city: str) -> dict:
    """Retrieves the current weather report for a specified city.

    Args:
        city (str): The name of the city (e.g., "Mumbai", "Chennai", "Delhi").

    Returns:
        dict: A dictionary containing the weather information.
              Includes a 'status' key ('success' or 'error').
              If 'success', includes a 'report' key with weather details.
              If 'error', includes an 'error_message' key.
    """
    # Best Practice: Log tool execution for easier debugging
    print(f"--- Tool: get_weather called for city: {city} ---")
    city_normalized = city.lower().replace(" ", "")  # Basic input normalization
    mock_weather_db = {
        "delhi": {"status": "success", "report": "The weather in Delhi is sunny with a temperature of 35°C."},
        "mumbai": {"status": "success", "report": "It's humid in Mumbai with a temperature of 30°C."},
        "bangalore": {"status": "success", "report": "Bangalore is experiencing light showers and a temperature of 22°C."},
        "kolkata": {"status": "success", "report": "Kolkata is partly cloudy with a temperature of 29°C."},
        "chennai": {"status": "success", "report": "It's hot and humid in Chennai with a temperature of 33°C."},
    }
    if city_normalized in mock_weather_db:
        return mock_weather_db[city_normalized]
    else:
        return {"status": "error", "error_message": f"Sorry, I don't have weather information for '{city}'."}

# Example usage
print(get_weather("Mumbai"))
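When you later swap the mock for a real weather service, the key is to keep the same return contract (the status/report/error_message dictionary) so the agent's instructions keep working unchanged. The sketch below is one possible approach, not part of the tutorial's code: it assumes the wttr.in JSON endpoint, and the injectable fetcher parameter is a testing convenience I've added.

```python
import json
from urllib.request import urlopen

def _fetch_json(url: str) -> dict:
    """Default fetcher: performs a real HTTP request (network required)."""
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)

def get_weather_live(city: str, fetcher=_fetch_json) -> dict:
    """Drop-in replacement for the mock get_weather, same return contract."""
    # wttr.in is an assumed endpoint; any JSON weather API would do.
    url = f"https://wttr.in/{city}?format=j1"
    try:
        data = fetcher(url)
        current = data["current_condition"][0]
        condition = current["weatherDesc"][0]["value"].lower()
        report = (f"The weather in {city} is {condition} "
                  f"with a temperature of {current['temp_C']}°C.")
        return {"status": "success", "report": report}
    except Exception:
        return {"status": "error",
                "error_message": f"Sorry, I couldn't fetch weather information for '{city}'."}
```

Because the fetcher is injectable, you can unit-test the tool offline with a stubbed response before pointing it at the live service.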
Step 3: Define the Agent
In ADK, an Agent is the core component that manages the conversation flow, connecting the user, the LLM, and the tools it can use.
To define an agent, you configure a few essential parameters:
- name: a unique identifier for the agent (e.g., "weather_agent_v1").
- model: the LLM the agent will use (e.g., MODEL_GEMINI_2_0_FLASH).
- description: a short summary of what the agent does; this is essential for collaboration and delegation in multi-agent systems.
- instruction: detailed behavioral guidance for the LLM, defining its role, goals, how to use its tools, and how to handle edge cases.
- tools: the list of tool functions the agent may call (e.g., [get_weather]).
Code:
# Use one of the model constants defined earlier
AGENT_MODEL = MODEL_GEMINI_2_0_FLASH

weather_agent = Agent(
    name="weather_agent_v1",
    model=AGENT_MODEL,
    description="Provides weather information for specific cities.",
    instruction="You are a helpful weather assistant. Your primary goal is to provide current weather reports. "
                "When the user asks for the weather in a specific city, "
                "you MUST use the 'get_weather' tool to find the information. "
                "Analyze the tool's response: if the status is 'error', inform the user politely about the error message. "
                "If the status is 'success', present the weather 'report' clearly and concisely to the user. "
                "Only use the tool when a city is mentioned for a weather request.",
    tools=[get_weather],
)
print(f"Agent '{weather_agent.name}' created using model '{AGENT_MODEL}'.")
Step 4: Set Up the Runner and Session Service
To manage conversations and execute the agent effectively, we need two key components:
SessionService: tracks each user's conversation history and session state. The basic InMemorySessionService keeps all data in memory, which makes it ideal for testing and lightweight applications; it records every message exchanged within a session. Later, we'll look at how to persist session data.
Runner: the engine of the system. It manages the full interaction flow: receiving user input, passing it to the right agent, invoking the LLM and any required tools, updating session data via the SessionService, and yielding a stream of events that shows what happened during the interaction.
程式碼:
# @title Setup Session Service and Runner

# --- Session Management ---
# Key Concept: SessionService stores conversation history & state.
# InMemorySessionService is a simple, non-persistent storage for this tutorial.
session_service = InMemorySessionService()

# Define constants for identifying the interaction context
APP_NAME = "weathertutorial_app"
USER_ID = "user_1"
SESSION_ID = "session_001"

# Create the specific session where the conversation will happen
session = session_service.create_session(
    app_name=APP_NAME,
    user_id=USER_ID,
    session_id=SESSION_ID,
)
print(f"Session created: App='{APP_NAME}', User='{USER_ID}', Session='{SESSION_ID}'")

# --- Runner ---
# Key Concept: Runner orchestrates the agent execution loop.
runner = Runner(
    agent=weather_agent,
    app_name=APP_NAME,
    session_service=session_service
)
print(f"Runner created for agent '{runner.agent.name}'.")
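To make the persistence idea concrete: an in-memory store loses everything when the process exits, while a persistent store writes each session's history somewhere durable. The class below is a framework-free illustration of that idea (it is not ADK's SessionService API; ADK ships its own persistent session service implementations, which you should prefer in practice):

```python
import json
from pathlib import Path

class FileSessionStore:
    """Illustrative file-backed session store: each session's event
    history is saved as JSON, so it survives process restarts."""

    def __init__(self, root: str = ".sessions"):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def _path(self, app_name: str, user_id: str, session_id: str) -> Path:
        return self.root / f"{app_name}__{user_id}__{session_id}.json"

    def append_event(self, app_name, user_id, session_id, event: dict) -> None:
        path = self._path(app_name, user_id, session_id)
        history = json.loads(path.read_text()) if path.exists() else []
        history.append(event)
        path.write_text(json.dumps(history))

    def get_history(self, app_name, user_id, session_id) -> list:
        path = self._path(app_name, user_id, session_id)
        return json.loads(path.read_text()) if path.exists() else []

store = FileSessionStore()
store.append_event("weathertutorial_app", "user_1", "session_001",
                   {"role": "user", "text": "What is the weather in Mumbai?"})
print(store.get_history("weathertutorial_app", "user_1", "session_001"))
```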
Step 5: Interact with the Agent
We'll use ADK's asynchronous Runner to talk to the agent and collect its responses. Since LLM and tool calls take time, asynchronous handling keeps the experience smooth and non-blocking.
We'll create a helper function named call_agent_async that:
- Takes the user query as input
- Wraps it in the Content format ADK expects
- Calls runner.run_async() with the session and message
- Iterates through the stream of events ADK returns, tracking each step (tool calls, responses, and so on)
- Detects and prints the final response using event.is_final_response()
Code:
# @title Define Agent Interaction Function
import asyncio
from google.genai import types  # For creating message Content/Parts

async def call_agent_async(query: str):
    """Sends a query to the agent and prints the final response."""
    print(f"\n>>> User Query: {query}")

    # Prepare the user's message in ADK format
    content = types.Content(role='user', parts=[types.Part(text=query)])

    final_response_text = "Agent did not produce a final response."  # Default

    # Key Concept: run_async executes the agent logic and yields Events.
    # We iterate through events to find the final answer.
    async for event in runner.run_async(user_id=USER_ID, session_id=SESSION_ID, new_message=content):
        # You can uncomment the line below to see *all* events during execution
        # print(f"  [Event] Author: {event.author}, Type: {type(event).__name__}, Final: {event.is_final_response()}, Content: {event.content}")

        # Key Concept: is_final_response() marks the concluding message for the turn.
        if event.is_final_response():
            if event.content and event.content.parts:
                # Assuming text response in the first part
                final_response_text = event.content.parts[0].text
            elif event.actions and event.actions.escalate:  # Handle potential errors/escalations
                final_response_text = f"Agent escalated: {event.error_message or 'No specific message.'}"
            # Add more checks here if needed (e.g., specific error codes)
            break  # Stop processing events once the final response is found

    print(f"<<< Agent Response: {final_response_text}")
Step 6: Run the Conversation
With everything in place, it's time to test our agent by sending a few sample queries.
We will:
- Wrap the async calls in a run_conversation() routine
- Execute it with await
Expected results:
- The user query is printed
- When the agent uses a tool (like get_weather), you'll see a log line such as: --- Tool: get_weather called for city: ... ---
- The agent returns a final response, gracefully handling cases where data is unavailable (e.g., "Paris")
程式碼:
# @title Run the Initial Conversation

# We need an async function to await our interaction helper
async def run_conversation():
    await call_agent_async("What is the weather like in Mumbai")
    await call_agent_async("How about Delhi?")
    await call_agent_async("Tell me the weather in Chennai")

# Execute the conversation using await in an async context (like Colab/Jupyter)
await run_conversation()
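The bare `await` above only works in an already-running event loop, such as Colab or Jupyter. In a standalone Python script you would start the loop yourself with asyncio.run(). The snippet below sketches that pattern with a stand-in for call_agent_async, so it runs even without google-adk installed; in the real script you would use the helper defined in Step 5 instead of the stub.

```python
import asyncio

# Stand-in for the call_agent_async helper from Step 5, so this
# sketch is self-contained; replace it with the real helper.
async def call_agent_async(query: str) -> None:
    print(f">>> User Query: {query}")

async def run_conversation() -> None:
    await call_agent_async("What is the weather like in Mumbai")
    await call_agent_async("How about Delhi?")

# In a plain script there is no running event loop, so start one
# explicitly instead of using a top-level `await`:
asyncio.run(run_conversation())
```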
Output:

Conclusion
Google's Agent Development Kit (ADK) lets developers create intelligent multi-agent systems that go well beyond simple text generation. By building the weather bot, we covered ADK's key concepts, including tool integration, agent orchestration, and session management, while drawing on the capabilities of Google Gemini. From writing clear, descriptive docstrings for tools to orchestrating interactions through the Runner and SessionService, ADK offers the flexibility to build production-ready agents that interact, learn, and adapt. Whether you're building a chatbot, a virtual assistant, or a multi-agent ecosystem, ADK gives you the tools to bring your vision to life.