This guide demonstrates how to create a specialized weather agent using BedrockLLMAgent and a custom weather tool. We’ll walk through the process of defining the tool, setting up the agent, and integrating it into your Multi-Agent Orchestrator system.
Define the Weather Tool
Let’s break down the weather tool definition into its key components:
A. Tool Description
```typescript
export const weatherToolDescription = [
  {
    toolSpec: {
      name: "Weather_Tool",
      description: "Get the current weather for a given location, based on its WGS84 coordinates.",
      inputSchema: {
        json: {
          type: "object",
          properties: {
            latitude: {
              type: "string",
              description: "Geographical WGS84 latitude of the location.",
            },
            longitude: {
              type: "string",
              description: "Geographical WGS84 longitude of the location.",
            },
          },
          required: ["latitude", "longitude"],
        },
      },
    },
  },
];
```
```python
weather_tool_description = [{
    "toolSpec": {
        "name": "Weather_Tool",
        "description": "Get the current weather for a given location, based on its WGS84 coordinates.",
        "inputSchema": {
            "json": {
                "type": "object",
                "properties": {
                    "latitude": {
                        "type": "string",
                        "description": "Geographical WGS84 latitude of the location."
                    },
                    "longitude": {
                        "type": "string",
                        "description": "Geographical WGS84 longitude of the location."
                    }
                },
                "required": ["latitude", "longitude"]
            }
        }
    }
}]
```
Explanation:
This describes the tool’s interface to the LLM.
- `name`: identifies the tool to the LLM.
- `description`: explains the tool’s purpose to the LLM.
- `inputSchema`: defines the expected input format, requiring `latitude` and `longitude` as strings.
This schema helps the LLM understand how to use the tool correctly.
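To make the contract concrete, here is a sketch of the kind of toolUse block the model emits when it decides to call the tool. The `toolUseId` and coordinate values below are made up for illustration; only the overall shape follows the schema above.

```python
# Hypothetical toolUse block the model might emit for "What's the weather in Paris?"
# The toolUseId and coordinates are illustrative stand-ins, not real values.
tool_use = {
    "toolUse": {
        "toolUseId": "tooluse_abc123",
        "name": "Weather_Tool",
        "input": {"latitude": "48.8566", "longitude": "2.3522"},
    }
}

# The schema's "required" list lets a handler sanity-check the input:
required = ["latitude", "longitude"]
missing = [key for key in required if key not in tool_use["toolUse"]["input"]]
print(missing)  # an empty list means the input satisfies the schema
```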
B. Custom Prompt
```typescript
export const WEATHER_PROMPT = `
You are a weather assistant that provides current weather data for user-specified locations using only
the Weather_Tool, which expects latitude and longitude. Infer the coordinates from the location yourself.
If the user provides coordinates, infer the approximate location and refer to it in your response.
To use the tool, you strictly apply the provided tool specification.

- Explain your step-by-step process, and give brief updates before each step.
- Only use the Weather_Tool for data. Never guess or make up information.
- Repeat the tool use for subsequent requests if necessary.
- If the tool errors, apologize, explain weather is unavailable, and suggest other options.
- Report temperatures in °C (°F) and wind in km/h (mph). Keep weather reports concise. Sparingly use
  emojis where appropriate.
- Only respond to weather queries. Remind off-topic users of your purpose.
- Never claim to search online, access external data, or use tools besides Weather_Tool.
- Complete the entire process until you have all required data before sending the complete response.
`;
```
```python
weather_tool_prompt = """
You are a weather assistant that provides current weather data for user-specified locations using only
the Weather_Tool, which expects latitude and longitude. Infer the coordinates from the location yourself.
If the user provides coordinates, infer the approximate location and refer to it in your response.
To use the tool, you strictly apply the provided tool specification.

- Explain your step-by-step process, and give brief updates before each step.
- Only use the Weather_Tool for data. Never guess or make up information.
- Repeat the tool use for subsequent requests if necessary.
- If the tool errors, apologize, explain weather is unavailable, and suggest other options.
- Report temperatures in °C (°F) and wind in km/h (mph). Keep weather reports concise. Sparingly use
  emojis where appropriate.
- Only respond to weather queries. Remind off-topic users of your purpose.
- Never claim to search online, access external data, or use tools besides Weather_Tool.
- Complete the entire process until you have all required data before sending the complete response.
"""
```
Explanation:
This prompt sets the behavior and limitations for the LLM. It instructs the LLM to:
- Use only the Weather_Tool for data.
- Infer coordinates from location names.
- Provide step-by-step explanations.
- Handle errors gracefully.
- Format responses consistently (units, conciseness).
- Stay on topic and use only the provided tool.
C. Tool Handler
```typescript
import { ConversationMessage, ParticipantRole } from "multi-agent-orchestrator";

export async function weatherToolHandler(
  response: ConversationMessage,
  conversation: ConversationMessage[]
): Promise<ConversationMessage> {
  const responseContentBlocks = response.content as any[];
  let toolResults: any = [];

  if (!responseContentBlocks) {
    throw new Error("No content blocks in response");
  }

  for (const contentBlock of responseContentBlocks) {
    if ("toolUse" in contentBlock) {
      const toolUseBlock = contentBlock.toolUse;
      if (toolUseBlock.name === "Weather_Tool") {
        const response = await fetchWeatherData({
          latitude: toolUseBlock.input.latitude,
          longitude: toolUseBlock.input.longitude,
        });
        toolResults.push({
          toolResult: {
            "toolUseId": toolUseBlock.toolUseId,
            "content": [{ json: { result: response } }],
          },
        });
      }
    }
  }

  // Embed the tool results in a new user message
  const message: ConversationMessage = { role: ParticipantRole.USER, content: toolResults };
  return message;
}
```
```python
import requests
from requests.exceptions import RequestException
from typing import List, Dict, Any
from multi_agent_orchestrator.types import ConversationMessage, ParticipantRole

async def weather_tool_handler(response: ConversationMessage, conversation: List[Dict[str, Any]]) -> ConversationMessage:
    response_content_blocks = response.content

    # Initialize an empty list of tool results
    tool_results = []

    if not response_content_blocks:
        raise ValueError("No content blocks in response")

    for content_block in response_content_blocks:
        if "text" in content_block:
            # Handle text content if needed
            continue
        if "toolUse" in content_block:
            tool_use_block = content_block["toolUse"]
            tool_use_name = tool_use_block.get("name")
            if tool_use_name == "Weather_Tool":
                tool_response = await fetch_weather_data(tool_use_block["input"])
                tool_results.append({
                    "toolResult": {
                        "toolUseId": tool_use_block["toolUseId"],
                        "content": [{"json": {"result": tool_response}}],
                    }
                })

    # Embed the tool results in a new user message
    message = ConversationMessage(
        role=ParticipantRole.USER.value,
        content=tool_results
    )
    return message
```
Explanation:
This handler processes the LLM’s request to use the Weather_Tool.
- It iterates through the response content, looking for tool use blocks.
- When it finds a Weather_Tool use, it calls `fetchWeatherData` with the provided coordinates.
- It formats the result into a tool result object.
- Finally, it returns the tool results to the caller as a new user message.
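Concretely, the user message the handler returns has the following shape, mirroring the toolResult format shown in the handler code. The `toolUseId` and weather payload below are illustrative stand-ins.

```python
# Sketch of the message the handler builds; the toolUseId and the weather
# payload are made-up example values.
tool_results = [{
    "toolResult": {
        "toolUseId": "tooluse_abc123",
        "content": [{"json": {"result": {"weather_data": {"temperature": 21.3}}}}],
    }
}]
message = {"role": "user", "content": tool_results}
print(message["role"])  # the tool results go back to the model as a user turn
```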
D. Data Fetching Function
```typescript
async function fetchWeatherData(inputData: { latitude: number; longitude: number }) {
  const endpoint = "https://api.open-meteo.com/v1/forecast";
  const params = new URLSearchParams({
    latitude: inputData.latitude.toString(),
    longitude: inputData.longitude.toString(),
    current_weather: "true",
  });

  try {
    const response = await fetch(`${endpoint}?${params}`);
    const data = await response.json();
    if (!response.ok) {
      return { error: 'Request failed', message: data.message || 'An error occurred' };
    }
    return { weather_data: data };
  } catch (error: any) {
    return { error: error.name, message: error.message };
  }
}
```
```python
async def fetch_weather_data(input_data):
    """
    Fetches weather data for the given latitude and longitude using the Open-Meteo API.
    Returns the weather data or an error message if the request fails.

    :param input_data: The input data containing the latitude and longitude.
    :return: The weather data or an error message.
    """
    endpoint = "https://api.open-meteo.com/v1/forecast"
    latitude = input_data.get("latitude")
    longitude = input_data.get("longitude", "")
    params = {"latitude": latitude, "longitude": longitude, "current_weather": True}

    try:
        response = requests.get(endpoint, params=params)
        response.raise_for_status()
        weather_data = {"weather_data": response.json()}
        return weather_data
    except RequestException as e:
        return {"error": type(e), "message": str(e)}
```
Explanation:
This function makes the actual API call to get weather data.
- It uses the Open-Meteo API (a free weather API service).
- It constructs the API URL with the provided latitude and longitude.
- It handles both successful responses and errors: on success, it returns the weather data; on failure, it returns an error object.
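As a quick sanity check of the request construction, the query string the function sends can be built with the standard library alone, without touching the network. The coordinates below are illustrative:

```python
from urllib.parse import urlencode

# Build the same request URL the fetch function would send (no network call here).
endpoint = "https://api.open-meteo.com/v1/forecast"
params = {"latitude": "40.71", "longitude": "-74.01", "current_weather": True}
url = f"{endpoint}?{urlencode(params)}"
print(url)
```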
These components work together to create a functional weather tool:
1. The tool description tells the LLM how to use the tool.
2. The prompt guides the LLM’s behavior and response format.
3. The handler processes the LLM’s tool use requests.
4. The fetch function retrieves real weather data based on the LLM’s input.
This setup allows the BedrockLLMAgent to provide weather information by seamlessly integrating external data into its responses.
Create the Weather Agent
Now that we have our weather tool defined and the code above in a file called `weatherTool.ts`, let’s create a BedrockLLMAgent that uses this tool.
```typescript
import { BedrockLLMAgent } from 'multi-agent-orchestrator';
import { weatherToolDescription, weatherToolHandler, WEATHER_PROMPT } from './weatherTool';

const weatherAgent = new BedrockLLMAgent({
  name: 'Weather Agent',
  description: `Specialized agent for providing comprehensive weather information and forecasts for specific cities worldwide.
  This agent can deliver current conditions, temperature ranges, precipitation probabilities, wind speeds, humidity levels, UV indexes, and extended forecasts.
  It can also offer insights on severe weather alerts, air quality indexes, and seasonal climate patterns.
  The agent is capable of interpreting user queries related to weather, including natural language requests like 'Do I need an umbrella today?' or 'What's the best day for outdoor activities this week?'.
  It can handle location-specific queries and time-based weather predictions, making it ideal for travel planning, event scheduling, and daily decision-making based on weather conditions.`,
  toolConfig: {
    useToolHandler: weatherToolHandler,
    tool: weatherToolDescription,
  },
});

weatherAgent.setSystemPrompt(WEATHER_PROMPT);
```
```python
from tools import weather_tool
from multi_agent_orchestrator.agents import (BedrockLLMAgent, BedrockLLMAgentOptions)

weather_agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Weather Agent",
    description="Specialized agent for providing weather conditions for a given city.",
    tool_config={
        'tool': weather_tool.weather_tool_description,
        'useToolHandler': weather_tool.weather_tool_handler
    }
))

weather_agent.set_system_prompt(weather_tool.weather_tool_prompt)
```
Add the Weather Agent to the Orchestrator
Now we can add our weather agent to the Multi-Agent Orchestrator:
```typescript
import { MultiAgentOrchestrator } from "multi-agent-orchestrator";

const orchestrator = new MultiAgentOrchestrator();
orchestrator.addAgent(weatherAgent);
```
```python
from multi_agent_orchestrator.orchestrator import MultiAgentOrchestrator

orchestrator = MultiAgentOrchestrator()
orchestrator.add_agent(weather_agent)
```
Using the Weather Agent
Now that our weather agent is set up and added to the orchestrator, we can use it to get weather information:
```typescript
const response = await orchestrator.routeRequest(
  "What's the weather like in New York City?",
  "user123",
  "session456"
);
```

```python
response = await orchestrator.route_request(
    "What's the weather like in New York City?",
    "user123",
    "session456"
)
```
How It Works
1. When a weather query is received, the orchestrator routes it to the Weather Agent.
2. The Weather Agent processes the query using the custom system prompt (WEATHER_PROMPT).
3. The agent uses the Weather_Tool to fetch weather data for the specified location.
4. The weatherToolHandler processes the tool use, fetches real weather data, and adds it to the conversation.
5. The agent then formulates a response based on the weather data and the original query.
This setup allows for a specialized weather agent that can handle various weather-related queries while using real-time data from an external API.
By following this guide, you can create a powerful, context-aware weather agent using BedrockLLMAgent and custom tools within your Multi-Agent Orchestrator system.