Bedrock Classifier

The Bedrock Classifier is the default classifier used in the Multi-Agent Orchestrator.

It leverages Amazon Bedrock’s models through the Converse API, providing powerful and flexible classification capabilities.

Overview

The BedrockClassifier extends the abstract Classifier class and uses Amazon Bedrock’s runtime client to process requests and classify user intents. It’s designed to analyze user input, consider conversation history, and determine the most appropriate agent to handle the query.

Features

  • Utilizes Amazon Bedrock’s models through the Converse API
  • Configurable model selection and inference parameters
  • Supports custom system prompts and variables
  • Handles conversation history for context-aware classification

Basic Usage

By default, the Multi-Agent Orchestrator uses the Bedrock Classifier, which in turn utilizes the anthropic.claude-3-5-sonnet-20240620-v1:0 (Claude 3.5 Sonnet) model for classification tasks.

import { MultiAgentOrchestrator } from "multi-agent-orchestrator";
const orchestrator = new MultiAgentOrchestrator();
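
As a rough sketch, the default classifier comes into play whenever the orchestrator routes a request: it reads the registered agents’ names and descriptions to pick a destination. The agents, user ID, and session ID below are illustrative placeholders, not values required by the library:

import { MultiAgentOrchestrator, BedrockLLMAgent } from "multi-agent-orchestrator";

const orchestrator = new MultiAgentOrchestrator();

// Register agents; the classifier relies on their names and descriptions
// to decide which one should handle each request.
orchestrator.addAgent(new BedrockLLMAgent({
  name: "Tech Agent",
  description: "Answers questions about software development and cloud infrastructure."
}));

orchestrator.addAgent(new BedrockLLMAgent({
  name: "Travel Agent",
  description: "Helps with trip planning, bookings, and destination advice."
}));

// The Bedrock Classifier runs behind the scenes during routing.
const response = await orchestrator.routeRequest(
  "What is the best way to deploy a Node.js app on AWS?",
  "user-123",     // placeholder user ID
  "session-456"   // placeholder session ID
);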

Custom Configuration

You can customize the BedrockClassifier by creating an instance with specific options:

import { BedrockClassifier, MultiAgentOrchestrator } from "multi-agent-orchestrator";
const customBedrockClassifier = new BedrockClassifier({
  modelId: 'anthropic.claude-v2',
  inferenceConfig: {
    maxTokens: 500,
    temperature: 0.7,
    topP: 0.9
  }
});
const orchestrator = new MultiAgentOrchestrator({ classifier: customBedrockClassifier });

The BedrockClassifier accepts the following configuration options:

  • modelId (optional): The ID of the Bedrock model to use. Defaults to Claude 3.5 Sonnet.
  • region (optional): The AWS region to use. If not provided, it will use the REGION environment variable.
  • inferenceConfig (optional): An object containing inference configuration parameters:
    • maxTokens (optional): The maximum number of tokens to generate.
    • temperature (optional): Controls randomness in output generation.
    • topP (optional): Controls diversity of output generation.
    • stopSequences (optional): A list of sequences that, when generated, will stop the generation process.
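
Putting these options together, a fully specified configuration might look like the following sketch (the model ID, region, and parameter values are illustrative, not recommendations):

import { BedrockClassifier } from "multi-agent-orchestrator";

// Illustrative values only; tune them for your own workload.
const classifier = new BedrockClassifier({
  modelId: 'anthropic.claude-3-haiku-20240307-v1:0',
  region: 'us-west-2',
  inferenceConfig: {
    maxTokens: 500,
    temperature: 0.3,
    topP: 0.9,
    stopSequences: ['</answer>']  // placeholder stop sequence
  }
});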

Customizing the System Prompt

You can customize the system prompt used by the BedrockClassifier:

orchestrator.classifier.setSystemPrompt(
  `
  Custom prompt template with placeholders:
  {{AGENT_DESCRIPTIONS}}
  {{HISTORY}}
  {{CUSTOM_PLACEHOLDER}}
  `,
  {
    CUSTOM_PLACEHOLDER: "Value for custom placeholder"
  }
);

Processing Requests

The BedrockClassifier processes requests using the processRequest method, which is called internally by the orchestrator. This method:

  1. Prepares the user’s message and conversation history.
  2. Constructs a command for the Bedrock API, including the system prompt and tool configurations.
  3. Sends the request to the Bedrock API and processes the response.
  4. Returns a ClassifierResult containing the selected agent and confidence score.
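
The returned value has roughly the shape sketched below; the field names are inferred from the description above, so consult the library’s type definitions for the authoritative interface:

import { Agent } from "multi-agent-orchestrator";

// Approximate shape of the classification result.
interface ClassifierResult {
  selectedAgent: Agent | null;  // the agent chosen to handle the request, if any
  confidence: number;           // the model's confidence in that selection
}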

Error Handling

The BedrockClassifier includes error handling to manage potential issues during the classification process. If an error occurs, it will log the error and raise an exception, which can be caught and handled by the orchestrator.
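
In practice, this means a routing call can be wrapped in a standard try/catch. A minimal sketch, assuming an orchestrator configured as shown earlier (the message text and IDs are placeholders):

try {
  const response = await orchestrator.routeRequest(
    "Help me plan a trip to Lisbon.",
    "user-123",     // placeholder user ID
    "session-456"   // placeholder session ID
  );
  console.log(response);
} catch (error) {
  // Classification errors (and downstream agent errors) surface here.
  console.error("Failed to route request:", error);
}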

Best Practices

  1. Model Selection: Choose an appropriate model based on your use case and performance requirements.
  2. Inference Configuration: Experiment with different inference parameters to find the best balance between response quality and speed.
  3. System Prompt: Craft a clear and comprehensive system prompt to guide the model’s classification process effectively.

Limitations

  • Requires an active AWS account with access to Amazon Bedrock.
  • Classification quality depends on the chosen model and the quality of your system prompt and agent descriptions.

For more information on using and customizing the Multi-Agent Orchestrator, refer to the Classifier Overview and Agents documentation.