OpenAI Classifier
The OpenAI Classifier is a built-in classifier for the Multi-Agent Orchestrator that leverages OpenAI’s language models for intent classification. It provides robust classification capabilities using OpenAI’s state-of-the-art models like GPT-4o.
The OpenAI Classifier extends the abstract Classifier class and uses the OpenAI API client to process requests and classify user intents.
Features
- Utilizes OpenAI’s advanced models (e.g., GPT-4o) for intent classification
- Configurable model selection and inference parameters
- Supports custom system prompts and variables
- Handles conversation history for context-aware classification
Basic Usage
Python Package
If you haven’t already installed the OpenAI-related dependencies, make sure to install them:
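A typical installation looks like the following; the `openai` extra name is assumed here based on the package's optional-dependency convention:

```shell
pip install "multi-agent-orchestrator[openai]"
```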
To use the OpenAIClassifier, you need to create an instance with your OpenAI API key and pass it to the Multi-Agent Orchestrator:
Custom Configuration
You can customize the OpenAIClassifier by providing additional options:
The OpenAIClassifier accepts the following configuration options:
- api_key (required): Your OpenAI API key.
- model_id (optional): The ID of the OpenAI model to use. Defaults to GPT-4 Turbo.
- inference_config (optional): A dictionary containing inference configuration parameters:
  - max_tokens (optional): The maximum number of tokens to generate. Defaults to 1000 if not specified.
  - temperature (optional): Controls randomness in output generation.
  - top_p (optional): Controls diversity of output generation.
  - stop_sequences (optional): A list of sequences that, when generated, will stop the generation process.
Customizing the System Prompt
You can customize the system prompt used by the OpenAIClassifier:
Processing Requests
The OpenAIClassifier processes requests using the process_request method, which is called internally by the orchestrator. This method:
- Prepares the user’s message and conversation history.
- Constructs a request for the OpenAI API, including the system prompt and function calling configurations.
- Sends the request to the OpenAI API and processes the response.
- Returns a ClassifierResult containing the selected agent and confidence score.
Error Handling
The OpenAIClassifier includes error handling to manage potential issues during the classification process. If an error occurs, it will log the error and raise an exception, which can be caught and handled by the orchestrator.
Best Practices
- API Key Security: Ensure your OpenAI API key is kept secure and not exposed in your codebase.
- Model Selection: Choose an appropriate model based on your use case and performance requirements.
- Inference Configuration: Experiment with different inference parameters to find the best balance between response quality and speed.
- System Prompt: Craft a clear and comprehensive system prompt to guide the model’s classification process effectively.
Limitations
- Requires an active OpenAI API key.
- Classification quality depends on the chosen model and the quality of your system prompt and agent descriptions.
- API usage is subject to OpenAI’s pricing and rate limits.
For more information on using and customizing the Multi-Agent Orchestrator, refer to the Classifier Overview and Agents documentation.