
Bedrock LLM Agent

Overview

The Bedrock LLM Agent is a powerful and flexible agent class in the Multi-Agent Orchestrator System. It leverages Amazon Bedrock’s Converse API to interact with various LLMs supported by Amazon Bedrock.

This agent can handle a wide range of processing tasks, making it suitable for diverse applications such as conversational AI, question-answering systems, and more.

Key Features

  • Integration with Amazon Bedrock’s Converse API
  • Support for multiple LLM models available on Amazon Bedrock
  • Streaming and non-streaming response options
  • Customizable inference configuration
  • Ability to set and update custom system prompts
  • Optional integration with retrieval systems for enhanced context
  • Support for Tool use within the conversation flow

Creating a BedrockLLMAgent

By default, the Bedrock LLM Agent uses the anthropic.claude-3-haiku-20240307-v1:0 model.

Python Package

If you're using the Python package and haven't already installed the AWS-related dependencies, install them:

pip install "multi-agent-orchestrator[aws]"

1. Minimal Configuration

const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'A versatile AI assistant'
});

2. Using Custom Client

import { BedrockRuntimeClient } from "@aws-sdk/client-bedrock-runtime";

const customClient = new BedrockRuntimeClient({ region: 'us-east-1' });

const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'A versatile AI assistant',
  client: customClient
});

3. Custom Model and Streaming

const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'A streaming-enabled assistant',
  modelId: 'anthropic.claude-3-sonnet-20240229-v1:0',
  streaming: true
});

4. With Inference Configuration

const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'An assistant with custom inference settings',
  inferenceConfig: {
    maxTokens: 500,
    temperature: 0.7,
    topP: 0.9,
    stopSequences: ['Human:', 'AI:']
  }
});

5. With Simple System Prompt

const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'An assistant with custom prompt',
  customSystemPrompt: {
    template: 'You are a helpful AI assistant focused on technical support.'
  }
});

6. With System Prompt Variables

const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'An assistant with variable prompt',
  customSystemPrompt: {
    template: 'You are an AI assistant specialized in {{DOMAIN}}. Always use a {{TONE}} tone.',
    variables: {
      DOMAIN: 'technical support',
      TONE: 'friendly and helpful'
    }
  }
});

7. With Custom Retriever

const retriever = new CustomRetriever({
  // Retriever configuration
});

const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'An assistant with retriever',
  retriever: retriever
});

8. With Tool Configuration

const agent = new BedrockLLMAgent({
  name: 'Bedrock Assistant',
  description: 'An assistant with tool support',
  toolConfig: {
    tool: [
      {
        name: "Weather_Tool",
        description: "Get current weather data",
        input_schema: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "City name",
            }
          },
          required: ["location"]
        }
      }
    ]
  }
});
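When the model decides to invoke Weather_Tool, your application code must execute it with the arguments the model supplies. The exact callback wiring is library-specific (consult the tool-use documentation), but as a hedged sketch, a standalone handler matching the schema above might look like this (the function name and return shape are illustrative, not part of the library's API):

```typescript
// Illustrative handler for the Weather_Tool schema above.
// The handler name and return shape are assumptions for this sketch.
interface WeatherInput {
  location: string; // corresponds to the "location" property in input_schema
}

function handleWeatherTool(input: WeatherInput): { location: string; temperatureC: number } {
  if (!input.location) {
    // input_schema marks "location" as required
    throw new Error("Weather_Tool: 'location' is required");
  }
  // A real handler would call a weather API here; this stub returns fixed data.
  return { location: input.location, temperatureC: 21 };
}
```

The handler's result would then be passed back to the model as the tool's response so the conversation can continue.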

9. Complete Example with All Options

import { BedrockLLMAgent } from "multi-agent-orchestrator";

const agent = new BedrockLLMAgent({
  // Required fields
  name: "Advanced Bedrock Assistant",
  description: "A fully configured AI assistant powered by Bedrock models",
  // Optional fields
  modelId: "anthropic.claude-3-sonnet-20240229-v1:0",
  region: "us-west-2",
  streaming: true,
  retriever: customRetriever, // Custom retriever for additional context
  inferenceConfig: {
    maxTokens: 500,
    temperature: 0.7,
    topP: 0.9,
    stopSequences: ["Human:", "AI:"],
  },
  guardrailConfig: {
    guardrailIdentifier: "my-guardrail",
    guardrailVersion: "1.0",
  },
  toolConfig: {
    tool: [
      {
        name: "Weather_Tool",
        description: "Get current weather data",
        input_schema: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "City name",
            },
          },
          required: ["location"],
        },
      },
    ],
  },
  customSystemPrompt: {
    template: `You are an AI assistant specialized in {{DOMAIN}}.
Your core competencies:
{{SKILLS}}
Communication style:
- Maintain a {{TONE}} tone
- Focus on {{FOCUS}}
- Prioritize {{PRIORITY}}`,
    variables: {
      DOMAIN: "scientific research",
      SKILLS: [
        "- Advanced data analysis",
        "- Statistical methodology",
        "- Research design",
        "- Technical writing",
      ],
      TONE: "professional and academic",
      FOCUS: "accuracy and clarity",
      PRIORITY: "evidence-based insights",
    },
  },
});

The BedrockLLMAgent provides multiple ways to set custom prompts. You can set them either during initialization or after the agent is created, and you can use prompts with or without variables.

10. Setting Custom Prompt After Initialization (Without Variables)

const agent = new BedrockLLMAgent({
  name: 'Business Consultant',
  description: 'Business strategy and management expert'
});

agent.setSystemPrompt(`You are a business strategy consultant.
Key Areas of Focus:
1. Strategic Planning
2. Market Analysis
3. Risk Management
4. Performance Optimization
When providing business advice:
- Begin with clear objectives
- Use data-driven insights
- Consider market context
- Provide actionable steps`);

11. Setting Custom Prompt After Initialization (With Variables)

const agent = new BedrockLLMAgent({
  name: 'Education Expert',
  description: 'Educational specialist and learning consultant'
});

agent.setSystemPrompt(
  `You are a {{ROLE}} focusing on {{SPECIALTY}}.
Your expertise includes:
{{EXPERTISE}}
Teaching approach:
{{APPROACH}}
Core principles:
{{PRINCIPLES}}
Always maintain a {{TONE}} tone.`,
  {
    ROLE: 'education specialist',
    SPECIALTY: 'personalized learning',
    EXPERTISE: [
      '- Curriculum development',
      '- Learning assessment',
      '- Educational technology'
    ],
    APPROACH: [
      '- Student-centered learning',
      '- Active engagement',
      '- Continuous feedback'
    ],
    PRINCIPLES: [
      '- Clear objectives',
      '- Scaffolded learning',
      '- Regular assessment'
    ],
    TONE: 'supportive and encouraging'
  }
);

Notes on Custom Prompts

  • Variables in templates use the {{VARIABLE_NAME}} syntax
  • When using arrays in variables, items are automatically joined with newlines
  • The same template and variable functionality is available both during initialization and after
  • Variables are optional - you can use plain text templates without any variables
  • Setting a new prompt will completely replace the previous prompt
  • The agent will use its default prompt if no custom prompt is specified
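To make the substitution rules above concrete, here is a minimal sketch of how {{VARIABLE_NAME}} replacement with newline-joined arrays could be implemented. This is an illustration of the documented behavior, not the library's actual code:

```typescript
// Hypothetical sketch of the documented template behavior:
// {{NAME}} placeholders are replaced, array values are joined with newlines,
// and unknown placeholders are left untouched.
type PromptVariables = Record<string, string | string[]>;

function renderTemplate(template: string, variables: PromptVariables = {}): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match: string, name: string) => {
    const value = variables[name];
    if (value === undefined) return match; // leave unknown variables as-is
    return Array.isArray(value) ? value.join("\n") : value;
  });
}

// Example: an array variable expands into one item per line.
const prompt = renderTemplate(
  "You are an assistant specialized in {{DOMAIN}}.\nSkills:\n{{SKILLS}}",
  { DOMAIN: "technical support", SKILLS: ["- Troubleshooting", "- Documentation"] }
);
```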

Choose the approach that best fits your needs:

  • Use initialization when the prompt is part of the agent’s core configuration
  • Use post-initialization when prompts need to be changed dynamically
  • Use variables when parts of the prompt need to be modified frequently
  • Use direct templates when the prompt is static

Option Explanations

| Parameter          | Description                                                                                 | Required/Optional |
| ------------------ | ------------------------------------------------------------------------------------------- | ----------------- |
| name               | Identifies the agent within the system                                                      | Required          |
| description        | Describes the agent's purpose and capabilities                                              | Required          |
| modelId            | Specifies the LLM model to use (e.g., Claude 3 Sonnet)                                      | Optional          |
| region             | AWS region for the Bedrock service                                                          | Optional          |
| streaming          | Enables streaming responses for real-time output                                            | Optional          |
| inferenceConfig    | Fine-tunes the model's output characteristics                                               | Optional          |
| guardrailConfig    | Applies predefined guardrails to the model's responses                                      | Optional          |
| retriever          | Integrates a retrieval system for enhanced context                                          | Optional          |
| toolConfig         | Defines tools the agent can use and how to handle their responses                           | Optional          |
| customSystemPrompt | Defines the agent's system prompt and behavior, with optional variables for dynamic content | Optional          |
| client             | Custom Bedrock client for specialized configurations                                        | Optional          |