Overview
The Bedrock LLM Agent is a powerful and flexible agent class in the Multi-Agent Orchestrator System. It leverages Amazon Bedrock’s Converse API to interact with various LLMs supported by Amazon Bedrock.
This agent can handle a wide range of processing tasks, making it suitable for diverse applications such as conversational AI, question-answering systems, and more.
Key Features
Integration with Amazon Bedrock’s Converse API
Support for multiple LLM models available on Amazon Bedrock
Streaming and non-streaming response options
Customizable inference configuration
Ability to set and update custom system prompts
Optional integration with retrieval systems for enhanced context
Support for Tool use within the conversation flow
Creating a BedrockLLMAgent
By default, the Bedrock LLM Agent uses the anthropic.claude-3-haiku-20240307-v1:0 model.
Basic Example
To create a new Bedrock LLM Agent with only the required parameters, use the following code:
```typescript
import { BedrockLLMAgent } from 'multi-agent-orchestrator';

const agent = new BedrockLLMAgent({
  name: 'Tech Agent',
  description: 'Specializes in technology areas including software development, hardware, AI, cybersecurity, blockchain, cloud computing, emerging tech innovations, and pricing/costs related to technology products and services.'
});
```
```python
from multi_agent_orchestrator.agents import BedrockLLMAgent, BedrockLLMAgentOptions

agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name='Tech Agent',
    description='Specializes in technology areas including software development, hardware, AI, cybersecurity, blockchain, cloud computing, emerging tech innovations, and pricing/costs related to technology products and services.'
))
```
In this basic example, only the name and description are provided, which are the only required parameters for creating a BedrockLLMAgent.
Advanced Example
For more complex use cases, you can create a Bedrock LLM Agent with all available options. All parameters except name and description are optional:
```typescript
import { BedrockLLMAgent, BedrockLLMAgentOptions } from 'multi-agent-orchestrator';
import { Retriever } from '../retrievers/retriever';

const options: BedrockLLMAgentOptions = {
  name: 'My Advanced Bedrock Agent',
  description: 'A versatile agent for complex NLP tasks',
  modelId: 'anthropic.claude-3-sonnet-20240229-v1:0',
  region: 'us-west-2',
  streaming: true,
  inferenceConfig: { maxTokens: 1000, temperature: 0.7, topP: 0.9, stopSequences: ['Human:', 'AI:'] },
  guardrailConfig: { guardrailIdentifier: 'my-guardrail', guardrailVersion: '1.0' },
  retriever: new Retriever(), // Assuming you have a Retriever class implemented
  toolConfig: {
    tool: [{
      toolSpec: {
        name: 'get_current_weather',
        description: 'Get the current weather in a given location',
        inputSchema: {
          json: {
            type: 'object',
            properties: {
              location: { type: 'string', description: 'The city and state, e.g. San Francisco, CA' },
              unit: { type: 'string', enum: ['celsius', 'fahrenheit'] }
            },
            required: ['location']
          }
        }
      }
    }],
    toolCallback: (response) => ({ continueWithTools: false, message: response })
  }
};

const agent = new BedrockLLMAgent(options);
```
```python
from multi_agent_orchestrator.agents import BedrockLLMAgent, BedrockLLMAgentOptions
from multi_agent_orchestrator.retrievers import Retriever

options = BedrockLLMAgentOptions(
    name='My Advanced Bedrock Agent',
    description='A versatile agent for complex NLP tasks',
    model_id='anthropic.claude-3-sonnet-20240229-v1:0',
    region='us-west-2',
    streaming=True,
    inference_config={'maxTokens': 1000, 'temperature': 0.7, 'topP': 0.9, 'stopSequences': ['Human:', 'AI:']},
    guardrail_config={'guardrailIdentifier': 'my-guardrail', 'guardrailVersion': '1.0'},
    retriever=Retriever(),  # Assuming you have a Retriever class implemented
    tool_config={
        'tool': [{'toolSpec': {
            'name': 'get_current_weather',
            'description': 'Get the current weather in a given location',
            'inputSchema': {'json': {
                'type': 'object',
                'properties': {
                    'location': {'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA'},
                    'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit']}
                },
                'required': ['location']
            }}
        }}],
        'useToolHandler': lambda response, conversation: (False, response)  # Process tool response
    }
)

agent = BedrockLLMAgent(options)
```
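The tool handler receives the model's tool-use response along with the conversation so far. As a self-contained sketch, independent of the orchestrator library, a hypothetical implementation of the get_current_weather tool declared in the schema above might look like this:

```python
# Hypothetical implementation of the tool declared in the schema above.
# A real agent would call a weather API; this returns canned data.
def get_current_weather(location: str, unit: str = "celsius") -> dict:
    """Return the current weather for a location in the requested unit."""
    temperature = 22 if unit == "celsius" else 72  # placeholder reading
    return {"location": location, "temperature": temperature, "unit": unit}

result = get_current_weather("San Francisco, CA", unit="fahrenheit")
```

The model supplies the `location` and `unit` arguments based on the input schema; your handler decides what to do with the function's result, for example returning it to the model or ending the tool loop.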
Option Explanations
name and description: Identify and describe the agent's purpose.
model_id: Specifies the LLM model to use (e.g., Claude 3 Sonnet).
region: AWS region for the Bedrock service.
streaming: Enables streaming responses for real-time output.
inference_config: Fine-tunes the model's output characteristics.
guardrail_config: Applies predefined guardrails to the model's responses.
retriever: Integrates a retrieval system for enhanced context.
tool_config: Defines tools the agent can use and how to handle their responses.
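For instance, the inference configuration is a plain dictionary whose keys follow Amazon Bedrock's Converse API inference parameters. A sketch of a configuration that caps output length and lowers randomness:

```python
# Sketch of an inference configuration; the keys follow the Bedrock
# Converse API's inference parameters.
inference_config = {
    "maxTokens": 500,                    # cap on generated tokens
    "temperature": 0.3,                  # lower = more deterministic output
    "topP": 0.9,                         # nucleus-sampling cutoff
    "stopSequences": ["Human:", "AI:"],  # generation stops at these strings
}
```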
Setting a New Prompt
You can update the agent's system prompt at any time using the setSystemPrompt method (set_system_prompt in Python), passing a template and the values for its variables:

```typescript
agent.setSystemPrompt(
  `You are an AI assistant specialized in {{DOMAIN}}.
  Your main goal is to {{GOAL}}.
  Always maintain a {{TONE}} tone in your responses.`,
  {
    DOMAIN: "cybersecurity",
    GOAL: "help users understand and mitigate potential security threats",
    TONE: "professional and reassuring"
  }
);
```

```python
agent.set_system_prompt(
    """You are an AI assistant specialized in {{DOMAIN}}.
    Your main goal is to {{GOAL}}.
    Always maintain a {{TONE}} tone in your responses.""",
    {
        "DOMAIN": "cybersecurity",
        "GOAL": "help users understand and mitigate potential security threats",
        "TONE": "professional and reassuring"
    }
)
```
This method allows you to dynamically change the agent’s behavior and focus without creating a new instance.
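Conceptually, each {{VARIABLE}} placeholder is a simple string substitution. A minimal sketch of that replacement logic (illustrative only, not the library's actual implementation):

```python
# Minimal sketch of {{VARIABLE}} template substitution; the library
# performs an equivalent replacement internally.
def fill_template(template: str, variables: dict) -> str:
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

prompt = fill_template(
    "You are an AI assistant specialized in {{DOMAIN}}.",
    {"DOMAIN": "cybersecurity"},
)
```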
Adding the Agent to the Orchestrator
To integrate the Bedrock LLM Agent into your orchestrator, follow these steps:
First, ensure you have created an instance of the orchestrator:
```typescript
import { MultiAgentOrchestrator } from "multi-agent-orchestrator";

const orchestrator = new MultiAgentOrchestrator();
```
```python
from multi_agent_orchestrator.orchestrator import MultiAgentOrchestrator

orchestrator = MultiAgentOrchestrator()
```
Then, add the agent to the orchestrator:
```typescript
orchestrator.addAgent(agent);
```

```python
orchestrator.add_agent(agent)
```
Now you can use the orchestrator to route requests to the appropriate agent, including your Bedrock LLM agent:
```typescript
const response = await orchestrator.routeRequest(
  "What is the base rate interest for 30 years?",
  "user123",    // userId (placeholder)
  "session456"  // sessionId (placeholder)
);
```

```python
response = await orchestrator.route_request(
    "What is the base rate interest for 30 years?",
    "user123",    # user_id (placeholder)
    "session456"  # session_id (placeholder)
)
```
By leveraging the Bedrock LLM Agent, you can create sophisticated, context-aware AI agents capable of handling a wide range of tasks and interactions, all powered by the latest LLM models available through Amazon Bedrock.