# Bedrock LLM Agent

## Overview
The Bedrock LLM Agent is a powerful and flexible agent class in the Multi-Agent Orchestrator System. It leverages Amazon Bedrock’s Converse API to interact with various LLMs supported by Amazon Bedrock.
This agent can handle a wide range of processing tasks, making it suitable for diverse applications such as conversational AI, question-answering systems, and more.
## Key Features
- Integration with Amazon Bedrock’s Converse API
- Support for multiple LLM models available on Amazon Bedrock
- Streaming and non-streaming response options
- Customizable inference configuration
- Ability to set and update custom system prompts
- Optional integration with retrieval systems for enhanced context
- Support for Tool use within the conversation flow
## Creating a BedrockLLMAgent

By default, the Bedrock LLM Agent uses the `anthropic.claude-3-haiku-20240307-v1:0` model.
### Basic Example
To create a new Bedrock LLM Agent with only the required parameters, use the following code:
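A minimal sketch in Python (the import path `multi_agent_orchestrator.agents` is an assumption based on the library's package layout; adjust it to match your installed version):

```python
from multi_agent_orchestrator.agents import BedrockLLMAgent, BedrockLLMAgentOptions

# Only name and description are required; all other options fall back to
# defaults, including the anthropic.claude-3-haiku-20240307-v1:0 model.
agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Tech Agent",
    description="Specializes in technology topics and technical questions"
))
```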
### Advanced Example

For more complex use cases, you can create a Bedrock LLM Agent with all available options. All parameters except `name` and `description` are optional:
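A fuller sketch using the options listed below. The option names follow the Python package's snake_case style; the nested dictionary keys (`maxTokens`, `guardrailIdentifier`, etc.) are assumed to mirror the Bedrock Converse API, and the guardrail identifier is a placeholder:

```python
import boto3
from multi_agent_orchestrator.agents import BedrockLLMAgent, BedrockLLMAgentOptions

# Optional: supply your own Bedrock runtime client for specialized configurations.
custom_client = boto3.client("bedrock-runtime", region_name="us-west-2")

agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Advanced Tech Agent",
    description="Handles in-depth technology questions with streaming output",
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    region="us-west-2",
    streaming=True,
    inference_config={
        "maxTokens": 500,      # cap on generated tokens
        "temperature": 0.5,    # randomness of the output
        "topP": 0.9,           # nucleus-sampling cutoff
        "stopSequences": []
    },
    guardrail_config={
        "guardrailIdentifier": "my-guardrail-id",  # hypothetical guardrail ID
        "guardrailVersion": "1"
    },
    client=custom_client
))
```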
## Option Explanations

- `name` and `description`: Required fields to identify and describe the agent's purpose
- `modelId`/`model_id`: Specifies the LLM model to use (e.g., Claude 3 Sonnet)
- `region`: AWS region for the Bedrock service
- `streaming`: Enables streaming responses for real-time output
- `inferenceConfig`/`inference_config`: Fine-tunes the model's output characteristics
- `guardrailConfig`/`guardrail_config`: Applies predefined guardrails to the model's responses
- `retriever`: Integrates a retrieval system for enhanced context
- `toolConfig`/`tool_config`: Defines tools the agent can use and how to handle their responses
- `customSystemPrompt`/`custom_system_prompt`: Defines the agent's system prompt and behavior, with optional variables for dynamic content
- `client`: Optional custom Bedrock client for specialized configurations
## Setting Custom Prompts
The BedrockLLMAgent provides multiple ways to set custom prompts. You can set them either during initialization or after the agent is created, and you can use prompts with or without variables.
### 1. Setting Custom Prompt During Initialization (Without Variables)
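A sketch of this approach, assuming the Python package's `custom_system_prompt` option accepts a dictionary with a `template` key:

```python
from multi_agent_orchestrator.agents import BedrockLLMAgent, BedrockLLMAgentOptions

# The prompt is part of the agent's core configuration, set once at creation.
agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Tech Agent",
    description="Technology specialist",
    custom_system_prompt={
        "template": "You are a senior cloud engineer. Answer concisely and "
                    "point to official documentation where relevant."
    }
))
```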
### 2. Setting Custom Prompt During Initialization (With Variables)
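The same option with a `variables` entry alongside `template` (again an assumption about the dictionary shape; the variable names here are illustrative):

```python
from multi_agent_orchestrator.agents import BedrockLLMAgent, BedrockLLMAgentOptions

agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Tech Agent",
    description="Technology specialist",
    custom_system_prompt={
        "template": "You are a {{DOMAIN}} expert. Focus on these topics:\n{{TOPICS}}",
        "variables": {
            "DOMAIN": "cloud computing",
            # List items are joined with newlines when substituted.
            "TOPICS": ["networking", "storage", "serverless"]
        }
    }
))
```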
### 3. Setting Custom Prompt After Initialization (Without Variables)
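A sketch of updating the prompt on an existing agent, assuming the Python method is named `set_system_prompt` (mirroring the TypeScript `setSystemPrompt`):

```python
from multi_agent_orchestrator.agents import BedrockLLMAgent, BedrockLLMAgentOptions

agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Tech Agent",
    description="Technology specialist"
))

# Replace the default prompt after the agent has been created.
agent.set_system_prompt(
    "You are a support assistant. Always ask a clarifying question before answering."
)
```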
### 4. Setting Custom Prompt After Initialization (With Variables)
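The same assumed `set_system_prompt` method also takes a variables mapping as its second argument:

```python
from multi_agent_orchestrator.agents import BedrockLLMAgent, BedrockLLMAgentOptions

agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Tech Agent",
    description="Technology specialist"
))

# Swap in a templated prompt dynamically; list values are joined with newlines.
agent.set_system_prompt(
    "You are a {{DOMAIN}} expert. Focus on these topics:\n{{TOPICS}}",
    {"DOMAIN": "cloud computing", "TOPICS": ["networking", "storage"]}
)
```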
### Notes on Custom Prompts
- Variables in templates use the `{{VARIABLE_NAME}}` syntax
- When using arrays in variables, items are automatically joined with newlines
- The same template and variable functionality is available both during initialization and after
- Variables are optional - you can use plain text templates without any variables
- Setting a new prompt will completely replace the previous prompt
- The agent will use its default prompt if no custom prompt is specified
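The substitution rules above can be illustrated with a small stdlib-only sketch; the helper name `render_template` is hypothetical and not part of the library, but it mirrors the documented behavior:

```python
def render_template(template: str, variables: dict) -> str:
    """Substitute {{VARIABLE_NAME}} placeholders; join list values with newlines."""
    result = template
    for name, value in variables.items():
        if isinstance(value, list):
            # Array variables become one item per line.
            value = "\n".join(str(item) for item in value)
        result = result.replace("{{" + name + "}}", str(value))
    return result

prompt = render_template(
    "You are a {{DOMAIN}} expert. Topics:\n{{TOPICS}}",
    {"DOMAIN": "cloud", "TOPICS": ["EC2", "S3"]},
)
# prompt == "You are a cloud expert. Topics:\nEC2\nS3"
```

A plain-text template with an empty variables mapping passes through unchanged, which is why variables are optional.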
Choose the approach that best fits your needs:
- Use initialization when the prompt is part of the agent’s core configuration
- Use post-initialization when prompts need to be changed dynamically
- Use variables when parts of the prompt need to be modified frequently
- Use direct templates when the prompt is static