Bedrock Classifier
The Bedrock Classifier is the default classifier used in the Multi-Agent Orchestrator.
It leverages Amazon Bedrock's models through the Converse API, providing powerful and flexible classification capabilities.
Overview
The BedrockClassifier extends the abstract Classifier class and uses Amazon Bedrock's runtime client to process requests and classify user intents. It's designed to analyze user input, consider conversation history, and determine the most appropriate agent to handle the query.
Features
- Utilizes Amazon Bedrock's models through the Converse API
- Configurable model selection and inference parameters
- Supports custom system prompts and variables
- Handles conversation history for context-aware classification
Basic Usage
By default, the Multi-Agent Orchestrator uses the Bedrock Classifier, which in turn utilizes the anthropic.claude-3-5-sonnet-20240620-v1:0 (Claude 3.5 Sonnet) model for classification tasks.
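A minimal sketch of that default setup, assuming the `MultiAgentOrchestrator` import path shown below; when no classifier is passed, the orchestrator instantiates a BedrockClassifier for you:

```python
from multi_agent_orchestrator.orchestrator import MultiAgentOrchestrator

# No classifier is supplied, so the orchestrator falls back to the
# default BedrockClassifier (Claude 3.5 Sonnet via the Converse API).
orchestrator = MultiAgentOrchestrator()
```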
Custom Configuration
You can customize the BedrockClassifier by creating an instance with specific options:
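For example, a configuration sketch assuming `BedrockClassifierOptions` is exposed alongside `BedrockClassifier` in the `multi_agent_orchestrator.classifiers` module; the model ID, region, and inference values are placeholders to substitute with your own:

```python
from multi_agent_orchestrator.classifiers import BedrockClassifier, BedrockClassifierOptions
from multi_agent_orchestrator.orchestrator import MultiAgentOrchestrator

# Placeholder model ID and region; use values available in your AWS account.
custom_bedrock_classifier = BedrockClassifier(BedrockClassifierOptions(
    model_id='anthropic.claude-3-haiku-20240307-v1:0',
    region='us-west-2',
    inference_config={
        'maxTokens': 500,      # cap on generated tokens
        'temperature': 0.7,    # randomness of the output
        'topP': 0.9            # nucleus-sampling diversity
    }
))

orchestrator = MultiAgentOrchestrator(classifier=custom_bedrock_classifier)
```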
The BedrockClassifier accepts the following configuration options:
- `model_id` (optional): The ID of the Bedrock model to use. Defaults to Claude 3.5 Sonnet.
- `region` (optional): The AWS region to use. If not provided, it will use the `REGION` environment variable.
- `inference_config` (optional): A dictionary containing inference configuration parameters:
  - `maxTokens` (optional): The maximum number of tokens to generate.
  - `temperature` (optional): Controls randomness in output generation.
  - `topP` (optional): Controls diversity of output generation.
  - `stopSequences` (optional): A list of sequences that, when generated, will stop the generation process.
Customizing the System Prompt
You can customize the system prompt used by the BedrockClassifier:
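As a sketch, assuming the `set_system_prompt(template, variables)` helper inherited from the Classifier base class; the `COMPANY` template variable and its placeholder syntax are purely illustrative:

```python
custom_bedrock_classifier.set_system_prompt(
    """
    You are a routing assistant for {{COMPANY}}.
    Review the user's request and the conversation history, then select
    the single agent best suited to handle the request.
    """,
    {"COMPANY": "AnyCompany"}  # illustrative template variable
)
```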
Processing Requests
The BedrockClassifier processes requests using the `process_request` method, which is called internally by the orchestrator. This method:
- Prepares the user's message and conversation history.
- Constructs a command for the Bedrock API, including the system prompt and tool configurations.
- Sends the request to the Bedrock API and processes the response.
- Returns a `ClassifierResult` containing the selected agent and confidence score.
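For illustration, a hypothetical sketch of invoking classification directly through the `classify` coroutine of the base Classifier, reusing the classifier instance from the configuration example above; the `selected_agent` and `confidence` attribute names are assumptions about `ClassifierResult`:

```python
import asyncio

async def route(user_input: str):
    # An empty chat history is passed here for brevity.
    result = await custom_bedrock_classifier.classify(user_input, [])
    # Assumed ClassifierResult fields: the chosen agent and a confidence score.
    print(result.selected_agent, result.confidence)

asyncio.run(route("What is the weather in Seattle today?"))
```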
Error Handling
The BedrockClassifier includes error handling to manage potential issues during the classification process. If an error occurs, it will log the error and raise an exception, which can be caught and handled by the orchestrator.
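A hedged sketch of catching that exception at the call site, again reusing the classifier instance from the configuration example; the exact exception type is not pinned down here, so a broad handler is shown:

```python
import asyncio

async def safe_route(user_input: str):
    try:
        return await custom_bedrock_classifier.classify(user_input, [])
    except Exception as error:
        # The classifier logs the underlying Bedrock error before raising;
        # fall back, retry, or surface the failure as appropriate.
        print(f"Classification failed: {error}")
        return None

asyncio.run(safe_route("Cancel my order from last week."))
```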
Best Practices
- Model Selection: Choose an appropriate model based on your use case and performance requirements.
- Inference Configuration: Experiment with different inference parameters to find the best balance between response quality and speed.
- System Prompt: Craft a clear and comprehensive system prompt to guide the model’s classification process effectively.
Limitations
- Requires an active AWS account with access to Amazon Bedrock.
- Classification quality depends on the chosen model and the quality of your system prompt and agent descriptions.
For more information on using and customizing the Multi-Agent Orchestrator, refer to the Classifier Overview and Agents documentation.