The Anthropic Classifier is an alternative classifier for the Multi-Agent Orchestrator that uses Anthropic's language models for intent classification.
The Anthropic Classifier extends the abstract Classifier class and uses the Anthropic API client to process requests and classify user intents.
Features
Utilizes Anthropic’s AI models (e.g., Claude) for intent classification
Configurable model selection and inference parameters
Supports custom system prompts and variables
Handles conversation history for context-aware classification
Default Model
The classifier uses Claude 3.5 Sonnet as its default model:
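The original snippet is missing from this excerpt; the fragment below sketches what such a default might look like. The constant name and the exact model identifier string are illustrative assumptions, not values taken from the library:

```python
# Illustrative default model identifier (an assumption; check the library's
# source for the exact string it ships with).
ANTHROPIC_MODEL_ID_CLAUDE_3_5_SONNET = "claude-3-5-sonnet-20240620"
```

The default can typically be overridden through the classifier's options, as shown under Basic Usage.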
Python Package
If you haven’t already installed the Anthropic-related dependencies, make sure to install them:
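The install command itself is absent from this excerpt; a plausible form is sketched below. The extras name is an assumption — consult the project's install instructions for the exact package spec:

```shell
# Install the orchestrator with its Anthropic extra (extras name assumed),
# or install the Anthropic SDK directly.
pip install "multi-agent-orchestrator[anthropic]"
pip install anthropic
```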
Basic Usage
To use the AnthropicClassifier, you need to create an instance with your Anthropic API key and pass it to the Multi-Agent Orchestrator:
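The original usage example is missing here. The sketch below reconstructs the likely wiring using stand-in classes so it stays self-contained; the real package exposes similarly named classes, but the import paths, option field names, and defaults shown are assumptions — check the library's own documentation:

```python
import os
from dataclasses import dataclass, field

# Stand-in for the library's options object (field names are assumptions
# modeled on the surrounding text: API key, model choice, inference params).
@dataclass
class AnthropicClassifierOptions:
    api_key: str
    model_id: str = "claude-3-5-sonnet-20240620"  # assumed default
    inference_config: dict = field(default_factory=dict)

# Stand-in for the classifier; the real class calls the Anthropic API.
class AnthropicClassifier:
    def __init__(self, options: AnthropicClassifierOptions):
        self.options = options

options = AnthropicClassifierOptions(
    api_key=os.environ.get("ANTHROPIC_API_KEY", "your-api-key"),
    inference_config={"max_tokens": 500, "temperature": 0.0},
)
classifier = AnthropicClassifier(options)

# The classifier instance is then passed to the orchestrator, e.g.:
# orchestrator = MultiAgentOrchestrator(classifier=classifier)
```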
The default system prompt used by the classifier is comprehensive and includes examples of both simple and complex interactions:
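The prompt text is not reproduced in this excerpt. The condensed sketch below illustrates its general shape, with double-brace placeholders for the two variables discussed in the next section; the placeholder syntax and wording are assumptions:

```python
# A condensed sketch of a classification system prompt; the real default
# prompt is longer and includes worked simple and complex examples.
DEFAULT_PROMPT_TEMPLATE = """\
You are a classifier that routes each user request to the best-suited agent.

Available agents:
{{AGENT_DESCRIPTIONS}}

Conversation so far:
{{HISTORY}}

Analyze the user's latest input and select exactly one agent, along with a
confidence score between 0 and 1.
"""

def fill_template(template: str, variables: dict) -> str:
    """Replace {{NAME}} placeholders with the supplied values."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template
```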
Variable Replacements
AGENT_DESCRIPTIONS Example
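The example content is missing from this excerpt; the sketch below shows the kind of description block that could be substituted for the AGENT_DESCRIPTIONS variable. The agent names and the one-line-per-agent layout are illustrative assumptions:

```python
# Hypothetical agents and a rendered description block for the classifier.
agents = {
    "billing-agent": "Handles invoices, payments, and refund requests.",
    "tech-support-agent": "Troubleshoots product and connectivity issues.",
}

agent_descriptions = "\n".join(
    f"{name}: {description}" for name, description in agents.items()
)
```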
Extended HISTORY Examples
The conversation history is formatted to include agent names in the responses, allowing the classifier to track which agent handled each interaction. Each assistant response is prefixed with [agent-name] in the history, making it clear who provided each response:
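The history example itself is absent from this excerpt. The self-contained sketch below shows one way such a formatted history could look, with each assistant turn prefixed by the agent that produced it; the message shape and formatting function are illustrative assumptions, not the orchestrator's internal code:

```python
# Hypothetical raw history entries; assistant turns record which agent replied.
history = [
    {"role": "user", "content": "I can't pay my invoice."},
    {"role": "assistant", "agent": "billing-agent",
     "content": "I can help with that invoice."},
    {"role": "user", "content": "Also, my app keeps crashing."},
    {"role": "assistant", "agent": "tech-support-agent",
     "content": "Let's look at the crash logs."},
]

def format_history(messages):
    """Render history with [agent-name] prefixes on assistant turns."""
    lines = []
    for m in messages:
        if m["role"] == "assistant":
            lines.append(f"assistant: [{m['agent']}] {m['content']}")
        else:
            lines.append(f"user: {m['content']}")
    return "\n".join(lines)
```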
Here, the history shows the conversation moving between billing-agent and tech-support-agent as the topic shifts between billing and technical issues.
The agent prefixing (e.g., [agent-name]) is automatically handled by the Multi-Agent Orchestrator when formatting the conversation history. This helps the classifier understand:
Which agent handled each part of the conversation
The context of previous interactions
When agent transitions occurred
How to maintain continuity for follow-up responses
Tool-Based Response Structure
The AnthropicClassifier uses a tool specification to enforce structured output from the model. This is a design pattern that ensures consistent and properly formatted responses.
The Tool Specification
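The specification is not reproduced in this excerpt. The sketch below follows the general shape the Anthropic Messages API uses for tool definitions (a name, a description, and a JSON-schema `input_schema`); the tool name `analyzePrompt` and its field names are assumptions:

```python
# Sketch of a tool specification in the Anthropic Messages API shape.
# The JSON-schema "required" list is what forces the model to return every
# field the classifier needs.
TOOL_SPEC = {
    "name": "analyzePrompt",
    "description": "Classify the user input and select the best agent.",
    "input_schema": {
        "type": "object",
        "properties": {
            "selected_agent": {
                "type": "string",
                "description": "Identifier of the chosen agent.",
            },
            "confidence": {
                "type": "number",
                "description": "Confidence between 0 and 1.",
            },
        },
        "required": ["selected_agent", "confidence"],
    },
}
```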
Why Use Tools?
Structured Output: Instead of free-form text, the model must provide exactly the data structure we need.
Guaranteed Format: The tool schema ensures we always get:
A valid agent identifier
A properly formatted confidence score
All required fields
Implementation Note: The tool isn't actually executed; it's a pattern that forces the model to structure its response in a specific way, one that maps directly to our ClassifierResult type.
Example Response:
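The response body is missing from this excerpt. The sketch below shows a hypothetical `tool_use` block as it might come back from the model, mapped onto a stand-in for the ClassifierResult type; the exact field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ClassifierResult:  # stand-in for the library's result type
    selected_agent: str
    confidence: float

# Hypothetical tool_use content block returned by the model.
tool_use = {
    "type": "tool_use",
    "name": "analyzePrompt",
    "input": {"selected_agent": "tech-support-agent", "confidence": 0.95},
}

result = ClassifierResult(
    selected_agent=tool_use["input"]["selected_agent"],
    confidence=float(tool_use["input"]["confidence"]),
)
```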
Customizing the System Prompt
You can override the default system prompt while maintaining the required agent descriptions and history variables. Here’s how to do it:
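The original example is missing here. The sketch below shows a custom template that preserves both required placeholders; the placeholder syntax and the commented-out method name are assumptions to verify against the library's API:

```python
# Sketch of a custom prompt that keeps both required placeholders.
CUSTOM_TEMPLATE = """\
You are a routing assistant for an e-commerce store.

Agents:
{{AGENT_DESCRIPTIONS}}

History:
{{HISTORY}}

Pick exactly one agent and a confidence score between 0 and 1.
"""

# Both placeholders must survive any customization, or the orchestrator has
# nowhere to inject the agent list and the conversation history.
assert "{{AGENT_DESCRIPTIONS}}" in CUSTOM_TEMPLATE
assert "{{HISTORY}}" in CUSTOM_TEMPLATE

# classifier.set_system_prompt(CUSTOM_TEMPLATE)  # assumed method name
```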