# Bedrock Classifier
The Bedrock Classifier is the default classifier used in the Multi-Agent Orchestrator. It leverages Amazon Bedrock’s models through the Converse API, providing powerful and flexible classification capabilities.
## Features
- Utilizes Amazon Bedrock’s models through the Converse API
- Configurable model selection and inference parameters
- Supports custom system prompts and variables
- Handles conversation history for context-aware classification
## Default Model
The classifier uses Claude 3.5 Sonnet as its default model:
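The code block that pinned the default model ID appears to have been stripped here. Based on the model ID listed later on this page, it presumably resembles the following (the constant name is an assumption, not necessarily the package’s actual identifier):

```python
# Default model used by the BedrockClassifier; the ID below is the
# Claude 3.5 Sonnet ID listed later on this page. The constant name
# is illustrative and may differ from the package's own.
BEDROCK_MODEL_ID_CLAUDE_3_5_SONNET = "anthropic.claude-3-5-sonnet-20240620-v1:0"
```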
## Model Support for Tool Choice
The BedrockClassifier’s `toolChoice` configuration for structured outputs is only available with specific models in Amazon Bedrock. As of January 2025, the following models support tool use:
- Anthropic Models:
  - Claude 3 models (all variants except Haiku)
  - Claude 3.5 Sonnet (`anthropic.claude-3-5-sonnet-20240620-v1:0`)
  - Claude 3.5 Sonnet v2
- AI21 Labs Models:
  - Jamba 1.5 Large
  - Jamba 1.5 Mini
- Amazon Models:
  - Nova Pro
  - Nova Lite
  - Nova Micro
- Meta Models:
  - Llama 3.2 11b
  - Llama 3.2 90b
- Mistral AI Models:
  - Mistral Large
  - Mistral Large 2 (24.07)
  - Mistral Small
- Cohere Models:
  - Command R
  - Command R+
When using other models:
- The tool configuration will still be included in the request
- The model won’t be explicitly directed to use the `analyzePrompt` tool
- Response formats may be less consistent
For the most up-to-date list of supported models and their features, please refer to the Amazon Bedrock Converse API documentation.
## Python Package
If you haven’t already installed the AWS-related dependencies, make sure to install them:
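The install command was stripped here. Assuming the package is published on PyPI as `multi-agent-orchestrator` with an `aws` extra (verify against the project’s README), it would look like:

```shell
pip install "multi-agent-orchestrator[aws]"
```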
## Basic Usage
By default, the Multi-Agent Orchestrator uses the Bedrock Classifier:
## System Prompt and Variables
### Full Default System Prompt
The default system prompt used by the classifier is comprehensive and includes examples of both simple and complex interactions:
### Example Variable Replacements
#### AGENT_DESCRIPTIONS Example
### Extended HISTORY Examples
The conversation history is formatted to include agent names in the responses, allowing the classifier to track which agent handled each interaction. Each assistant response is prefixed with `[agent-name]` in the history, making it clear who provided each response:
#### Example 1: AWS Services to Billing Transition
In this example, you can see how the history shows the transition from `aws-services-agent` to `billing-agent`, with each response clearly marked.
#### Example 2: Multiple Department Interaction
Here, the history shows the conversation moving between `aws-services-agent` and `security-agent` as the topic shifts from general access to security configuration.
The agent prefixing (e.g., `[agent-name]`) is automatically handled by the Multi-Agent Orchestrator when formatting the conversation history. This helps the classifier understand:
- Which agent handled each part of the conversation
- The context of previous interactions
- When agent transitions occurred
- How to maintain continuity for follow-up responses
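The prefixing convention described above can be sketched with a small helper. This is a hypothetical illustration of the format, not the orchestrator’s actual implementation:

```python
# Hypothetical sketch of the [agent-name] prefixing convention described
# above; the orchestrator's real formatting code may differ.
def format_history(turns: list) -> str:
    """Render a conversation so the classifier can see which agent answered."""
    lines = []
    for turn in turns:
        if turn["role"] == "user":
            lines.append(f"user: {turn['content']}")
        else:
            # Assistant turns are prefixed with the handling agent's name.
            lines.append(f"assistant: [{turn['agent']}] {turn['content']}")
    return "\n".join(lines)

history = format_history([
    {"role": "user", "content": "How do I create an S3 bucket?"},
    {"role": "assistant", "agent": "aws-services-agent",
     "content": "You can create one from the S3 console."},
    {"role": "user", "content": "Why was I charged twice?"},
    {"role": "assistant", "agent": "billing-agent",
     "content": "Let me look at your invoice."},
])
```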
## Tool-Based Response Structure
Like the Anthropic Classifier, the BedrockClassifier uses a tool specification to enforce structured output from the model. This is a design pattern that ensures consistent and properly formatted responses.
### The Tool Specification
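The specification code block appears to have been stripped here. The following is a hedged reconstruction in the Converse API `toolConfig`/`toolSpec` format; the tool name `analyzePrompt` comes from this page, but the exact schema fields are assumptions:

```python
# Hedged reconstruction of the classifier's tool specification in the
# Bedrock Converse API toolSpec format; exact schema fields are assumptions.
tool_config = {
    "tools": [
        {
            "toolSpec": {
                "name": "analyzePrompt",
                "description": "Analyze the user input and select the best agent.",
                "inputSchema": {
                    "json": {
                        "type": "object",
                        "properties": {
                            "userinput": {"type": "string"},
                            "selected_agent": {"type": "string"},
                            "confidence": {"type": "number"},
                        },
                        "required": ["userinput", "selected_agent", "confidence"],
                    }
                },
            }
        }
    ]
}
```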
### Why Use Tools?
- Structured Output: Instead of free-form text, the model must provide exactly the data structure we need.
- Guaranteed Format: The tool schema ensures we always get:
  - A valid agent identifier
  - A properly formatted confidence score
  - All required fields
- Implementation Note: The tool isn’t actually executed - it’s a pattern to force the model to structure its response in a specific way that maps directly to our `ClassifierResult` type.
Example Response:
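The example response block was stripped here. As a hedged sketch, this is the general shape of a Converse `toolUse` content block and how it could map onto a `ClassifierResult`; the input field names and the helper are illustrative assumptions:

```python
# Hedged sketch: shape of a Converse toolUse content block and how it maps
# onto the ClassifierResult type; exact field names are assumptions.
response_content = [
    {
        "toolUse": {
            "toolUseId": "tooluse_example",
            "name": "analyzePrompt",
            "input": {
                "userinput": "Why was I charged twice this month?",
                "selected_agent": "billing-agent",
                "confidence": 0.95,
            },
        }
    }
]

def to_classifier_result(content: list) -> dict:
    """Extract the structured classification from the tool-use block."""
    for block in content:
        if "toolUse" in block and block["toolUse"]["name"] == "analyzePrompt":
            tool_input = block["toolUse"]["input"]
            return {
                "selected_agent": tool_input["selected_agent"],
                "confidence": float(tool_input["confidence"]),
            }
    raise ValueError("no analyzePrompt tool use in response")

result = to_classifier_result(response_content)
```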
## Custom Configuration
You can customize the BedrockClassifier by creating an instance with specific options:
The BedrockClassifier accepts the following configuration options:
- `model_id` (optional): The ID of the Bedrock model to use. Defaults to Claude 3.5 Sonnet.
- `region` (optional): The AWS region to use. If not provided, it will use the `REGION` environment variable.
- `inference_config` (optional): A dictionary containing inference configuration parameters:
  - `maxTokens` (optional): The maximum number of tokens to generate.
  - `temperature` (optional): Controls randomness in output generation.
  - `topP` (optional): Controls diversity of output generation.
  - `stopSequences` (optional): A list of sequences that will stop generation.
## Best Practices
- AWS Configuration: Ensure proper AWS credentials and Bedrock access are configured.
- Model Selection: Choose appropriate models based on your use case requirements.
- Region Selection: Consider using the region closest to your application for optimal latency.
- Inference Configuration: Experiment with different parameters to optimize classification accuracy.
- System Prompt: Consider customizing the system prompt for your specific use case, while maintaining the core classification structure.
## Limitations
- Requires an active AWS account with access to Amazon Bedrock
- Classification quality depends on the chosen model and the quality of agent descriptions
- Subject to Amazon Bedrock service quotas and pricing
For more information, see the Classifier Overview and Agents documentation.