Overview
The Bedrock LLM Agent is a powerful and flexible agent class in the Multi-Agent Orchestrator System. It leverages Amazon Bedrock’s Converse API to interact with various LLMs supported by Amazon Bedrock.
This agent can handle a wide range of processing tasks, making it suitable for diverse applications such as conversational AI, question-answering systems, and more.
Key Features
Integration with Amazon Bedrock’s Converse API
Support for multiple LLM models available on Amazon Bedrock
Streaming and non-streaming response options
Customizable inference configuration
Ability to set and update custom system prompts
Optional integration with retrieval systems for enhanced context
Support for Tool use within the conversation flow
Creating a BedrockLLMAgent
By default, the Bedrock LLM Agent uses the anthropic.claude-3-haiku-20240307-v1:0 model.
Python Package
If you haven’t already installed the AWS-related dependencies, make sure to install them:
```shell
pip install "multi-agent-orchestrator[aws]"
```
1. Minimal Configuration
TypeScript:

```typescript
const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'A versatile AI assistant'
});
```

Python:

```python
from multi_agent_orchestrator.agents import BedrockLLMAgent, BedrockLLMAgentOptions

agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name='Bedrock Assistant',
    description='A versatile AI assistant'
))
```
2. Using Custom Client
TypeScript:

```typescript
import { BedrockRuntimeClient } from "@aws-sdk/client-bedrock-runtime";

const customClient = new BedrockRuntimeClient({ region: 'us-east-1' });

const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'A versatile AI assistant',
  client: customClient
});
```

Python:

```python
import boto3

custom_client = boto3.client('bedrock-runtime', region_name='us-east-1')

agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name='Bedrock Assistant',
    description='A versatile AI assistant',
    client=custom_client
))
```
3. Custom Model and Streaming
TypeScript:

```typescript
const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'A streaming-enabled assistant',
  modelId: 'anthropic.claude-3-sonnet-20240229-v1:0',
  streaming: true
});
```

Python:

```python
agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name='Bedrock Assistant',
    description='A streaming-enabled assistant',
    model_id='anthropic.claude-3-sonnet-20240229-v1:0',
    streaming=True
))
```
4. With Inference Configuration
TypeScript:

```typescript
const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'An assistant with custom inference settings',
  inferenceConfig: {
    maxTokens: 500,   // example values
    temperature: 0.7,
    topP: 0.9,
    stopSequences: ['Human:', 'AI:']
  }
});
```

Python:

```python
agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name='Bedrock Assistant',
    description='An assistant with custom inference settings',
    inference_config={
        'maxTokens': 500,  # example values
        'temperature': 0.7,
        'topP': 0.9,
        'stopSequences': ['Human:', 'AI:']
    }
))
```
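Stop sequences tell the model to halt generation as soon as one of the listed strings would appear in the output. The following is a minimal, library-independent sketch of the idea, not Bedrock's actual implementation:

```python
def truncate_at_stop(text: str, stop_sequences: list[str]) -> str:
    """Cut `text` at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)  # keep only text before the earliest stop
    return text[:cut]

print(truncate_at_stop("The answer is 42. Human: next question", ["Human:", "AI:"]))
```

In practice the model applies this cutoff server-side, so stop sequences like `Human:` and `AI:` prevent the model from continuing the dialogue on its own.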
5. With Simple System Prompt
TypeScript:

```typescript
const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'An assistant with custom prompt',
  customSystemPrompt: {
    template: 'You are a helpful AI assistant focused on technical support.'
  }
});
```

Python:

```python
agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name='Bedrock Assistant',
    description='An assistant with custom prompt',
    custom_system_prompt={
        'template': 'You are a helpful AI assistant focused on technical support.'
    }
))
```
6. With System Prompt Variables
TypeScript:

```typescript
const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'An assistant with variable prompt',
  customSystemPrompt: {
    template: 'You are an AI assistant specialized in {{DOMAIN}}. Always use a {{TONE}} tone.',
    variables: {
      DOMAIN: 'technical support',
      TONE: 'friendly and helpful'
    }
  }
});
```

Python:

```python
agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name='Bedrock Assistant',
    description='An assistant with variable prompt',
    custom_system_prompt={
        'template': 'You are an AI assistant specialized in {{DOMAIN}}. Always use a {{TONE}} tone.',
        'variables': {
            'DOMAIN': 'technical support',
            'TONE': 'friendly and helpful'
        }
    }
))
```
7. With Custom Retriever
TypeScript:

```typescript
const retriever = new CustomRetriever({
  // Retriever configuration
});

const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'An assistant with retriever',
  retriever: retriever
});
```

Python:

```python
retriever = CustomRetriever(
    # Retriever configuration
)

agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name='Bedrock Assistant',
    description='An assistant with retriever',
    retriever=retriever
))
```
8. With Tool Configuration
TypeScript:

```typescript
const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'An assistant with tool support',
  toolConfig: {
    tool: [{
      toolSpec: {
        name: "Weather_Tool", // tool and property names are illustrative
        description: "Get current weather data",
        inputSchema: {
          json: {
            type: "object",
            properties: {
              location: { type: "string", description: "City name" }
            },
            required: ["location"]
          }
        }
      }
    }]
  }
});
```

Python:

```python
agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name='Bedrock Assistant',
    description='An assistant with tool support',
    tool_config={'tool': [{'toolSpec': {
        'name': 'Weather_Tool',  # tool and property names are illustrative
        'description': 'Get current weather data',
        'inputSchema': {'json': {
            'type': 'object',
            'properties': {'location': {'type': 'string', 'description': 'City name'}},
            'required': ['location']
        }}
    }}]}
))
```
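Conceptually, when the model emits a tool-use request, the agent dispatches it to a handler and feeds the result back into the conversation. The sketch below illustrates that dispatch step with a hypothetical `handle_tool_use` helper and a stand-in weather function; it is not the library's API:

```python
def handle_tool_use(tool_name: str, tool_input: dict, handlers: dict):
    """Dispatch a model-requested tool call to a registered handler."""
    if tool_name not in handlers:
        raise KeyError(f"No handler registered for tool '{tool_name}'")
    return handlers[tool_name](**tool_input)

def get_weather(location: str) -> dict:
    # Hypothetical stand-in for a real weather lookup
    return {"location": location, "temperature_c": 21}

handlers = {"Weather_Tool": get_weather}
result = handle_tool_use("Weather_Tool", {"location": "Paris"}, handlers)
```

The `inputSchema` in the tool configuration is what lets the model produce a `tool_input` dict with the right keys for the handler.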
9. Complete Example with All Options
TypeScript:

```typescript
import { BedrockLLMAgent } from "multi-agent-orchestrator";

const agent = new BedrockLLMAgent({
  name: "Advanced Bedrock Assistant",
  description: "A fully configured AI assistant powered by Bedrock models",
  modelId: "anthropic.claude-3-sonnet-20240229-v1:0",
  streaming: true,
  retriever: customRetriever, // Custom retriever for additional context
  inferenceConfig: {
    maxTokens: 500, temperature: 0.7, topP: 0.9,
    stopSequences: ["Human:", "AI:"]
  },
  guardrailConfig: { guardrailIdentifier: "my-guardrail", guardrailVersion: "1.0" },
  toolConfig: {
    tool: [{
      toolSpec: {
        name: "Weather_Tool", // tool and property names are illustrative
        description: "Get current weather data",
        inputSchema: {
          json: {
            type: "object",
            properties: { location: { type: "string", description: "City name" } },
            required: ["location"]
          }
        }
      }
    }]
  },
  customSystemPrompt: {
    template: `You are an AI assistant specialized in {{DOMAIN}}.
Your expertise includes:
{{EXPERTISE}}
When responding:
- Focus on {{FOCUS}}
- Maintain a {{TONE}} tone
- Prioritize {{PRIORITY}}`,
    variables: {
      DOMAIN: "scientific research",
      EXPERTISE: ["- Advanced data analysis", "- Statistical methodology"],
      TONE: "professional and academic",
      FOCUS: "accuracy and clarity",
      PRIORITY: "evidence-based insights"
    }
  }
});
```

Python:

```python
from multi_agent_orchestrator.agents import BedrockLLMAgent, BedrockLLMAgentOptions

agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name='Advanced Bedrock Assistant',
    description='A fully configured AI assistant powered by Bedrock models',
    model_id='anthropic.claude-3-sonnet-20240229-v1:0',
    streaming=True,
    retriever=custom_retriever,  # Custom retriever for additional context
    inference_config={
        'maxTokens': 500, 'temperature': 0.7, 'topP': 0.9,
        'stopSequences': ['Human:', 'AI:']
    },
    guardrail_config={'guardrailIdentifier': 'my-guardrail', 'guardrailVersion': '1.0'},
    tool_config={'tool': [{'toolSpec': {
        'name': 'Weather_Tool',  # tool and property names are illustrative
        'description': 'Get current weather data',
        'inputSchema': {'json': {
            'type': 'object',
            'properties': {'location': {'type': 'string', 'description': 'City name'}},
            'required': ['location']
        }}
    }}]},
    custom_system_prompt={
        'template': """You are an AI assistant specialized in {{DOMAIN}}.
Your expertise includes:
{{EXPERTISE}}
When responding:
- Focus on {{FOCUS}}
- Maintain a {{TONE}} tone
- Prioritize {{PRIORITY}}""",
        'variables': {
            'DOMAIN': 'scientific research',
            'EXPERTISE': ['- Advanced data analysis', '- Statistical methodology'],
            'TONE': 'professional and academic',
            'FOCUS': 'accuracy and clarity',
            'PRIORITY': 'evidence-based insights'
        }
    }
))
```
The BedrockLLMAgent provides multiple ways to set custom prompts. You can set them either during initialization or after the agent is created, and you can use prompts with or without variables.
10. Setting Custom Prompt After Initialization (Without Variables)
TypeScript:

```typescript
const agent = new BedrockLLMAgent({
  name: 'Business Consultant',
  description: 'Business strategy and management expert'
});

agent.setSystemPrompt(`You are a business strategy consultant.

When providing business advice:
- Begin with clear objectives
- Use data-driven insights
- Consider market context
- Provide actionable steps`);
```

Python:

```python
agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name='Business Consultant',
    description='Business strategy and management expert'
))

agent.set_system_prompt("""You are a business strategy consultant.

When providing business advice:
- Begin with clear objectives
- Use data-driven insights
- Consider market context
- Provide actionable steps""")
```
11. Setting Custom Prompt After Initialization (With Variables)
TypeScript:

```typescript
const agent = new BedrockLLMAgent({
  name: 'Education Expert',
  description: 'Educational specialist and learning consultant'
});

agent.setSystemPrompt(
  `You are a {{ROLE}} focusing on {{SPECIALTY}}.
Your methods include:
{{METHODS}}
Always maintain a {{TONE}} tone.`,
  {
    ROLE: 'education specialist',
    SPECIALTY: 'personalized learning',
    METHODS: [
      '- Curriculum development',
      '- Educational technology',
      '- Student-centered learning'
    ],
    TONE: 'supportive and encouraging'
  }
);
```

Python:

```python
agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name='Education Expert',
    description='Educational specialist and learning consultant'
))

agent.set_system_prompt(
    """You are a {{ROLE}} focusing on {{SPECIALTY}}.
Your methods include:
{{METHODS}}
Always maintain a {{TONE}} tone.""",
    {
        "ROLE": "education specialist",
        "SPECIALTY": "personalized learning",
        "METHODS": [
            "- Curriculum development",
            "- Educational technology",
            "- Student-centered learning"
        ],
        "TONE": "supportive and encouraging"
    }
)
```
Notes on Custom Prompts
Variables in templates use the {{VARIABLE_NAME}} syntax
When using arrays in variables, items are automatically joined with newlines
The same template and variable functionality is available both during initialization and after
Variables are optional - you can use plain text templates without any variables
Setting a new prompt will completely replace the previous prompt
The agent will use its default prompt if no custom prompt is specified
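The substitution behavior described above can be pictured with a short sketch. This is an illustration of the documented behavior (placeholders replaced, list values joined with newlines), not the library's actual implementation:

```python
import re

def apply_template(template: str, variables: dict) -> str:
    """Replace {{NAME}} placeholders; list values are joined with newlines."""
    def substitute(match: re.Match) -> str:
        value = variables.get(match.group(1), match.group(0))
        if isinstance(value, list):
            return "\n".join(value)  # array items become one line each
        return str(value)
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

prompt = apply_template(
    "You are an AI assistant specialized in {{DOMAIN}}.\nSkills:\n{{SKILLS}}",
    {"DOMAIN": "technical support", "SKILLS": ["- Networking", "- Debugging"]},
)
print(prompt)
```

Note that an unmatched placeholder is left as-is here, which makes missing variables easy to spot in the rendered prompt.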
Choose the approach that best fits your needs:
Use initialization when the prompt is part of the agent’s core configuration
Use post-initialization when prompts need to be changed dynamically
Use variables when parts of the prompt need to be modified frequently
Use direct templates when the prompt is static
Option Explanations
TypeScript:

| Parameter | Description | Required/Optional |
|---|---|---|
| name | Identifies the agent within the system | Required |
| description | Describes the agent’s purpose and capabilities | Required |
| modelId | Specifies the LLM model to use (e.g., Claude 3 Sonnet) | Optional |
| region | AWS region for the Bedrock service | Optional |
| streaming | Enables streaming responses for real-time output | Optional |
| inferenceConfig | Fine-tunes the model’s output characteristics | Optional |
| guardrailConfig | Applies predefined guardrails to the model’s responses | Optional |
| retriever | Integrates a retrieval system for enhanced context | Optional |
| toolConfig | Defines tools the agent can use and how to handle their responses | Optional |
| customSystemPrompt | Defines the agent’s system prompt and behavior, with optional variables for dynamic content | Optional |
| client | Optional custom Bedrock client for specialized configurations | Optional |

Python:

| Parameter | Description | Required/Optional |
|---|---|---|
| name | Identifies the agent within the system | Required |
| description | Describes the agent’s purpose and capabilities | Required |
| model_id | Specifies the LLM model to use (e.g., Claude 3 Sonnet) | Optional |
| region | AWS region for the Bedrock service | Optional |
| streaming | Enables streaming responses for real-time output | Optional |
| inference_config | Fine-tunes the model’s output characteristics | Optional |
| guardrail_config | Applies predefined guardrails to the model’s responses | Optional |
| retriever | Integrates a retrieval system for enhanced context | Optional |
| tool_config | Defines tools the agent can use and how to handle their responses | Optional |
| custom_system_prompt | Defines the agent’s system prompt and behavior, with optional variables for dynamic content | Optional |
| client | Optional custom Bedrock client for specialized configurations | Optional |