Ollama classifier with llama3.1
Welcome to the Ollama Classifier guide! This example walks you through creating an Ollama classifier and integrating it into your Agent Squad System. Let's dive in!
📋 Prerequisites:
- Basic knowledge of Python
- Familiarity with the Agent Squad System
- Ollama installed on your machine
💾 1. Ollama installation:
First, let's install the Ollama Python package:
```shell
pip install ollama
```
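Optionally, you can confirm the package is visible to your interpreter without contacting any Ollama server. This check uses only the standard library, so it is safe to run anywhere:

```python
# Check that the ollama package is importable without starting a server.
# importlib.util.find_spec only looks up installed packages; nothing is imported or run.
import importlib.util

spec = importlib.util.find_spec("ollama")
status = "found" if spec is not None else "missing"
print(f"ollama package {status}")
```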
🧬 2. Create the Ollama Classifier class:
Now, let's create our `OllamaClassifier` class. This class extends the `Classifier` abstract class from the Agent Squad framework, and it must implement the abstract `process_request` method.
```python
from typing import List, Dict, Optional, Any

import ollama

from agent_squad.classifiers import Classifier, ClassifierResult
from agent_squad.types import ConversationMessage, ParticipantRole
from agent_squad.utils import Logger


class OllamaClassifierOptions:
    def __init__(self,
                 model_id: Optional[str] = None,
                 inference_config: Optional[Dict[str, Any]] = None,
                 host: Optional[str] = None):
        self.model_id = model_id
        self.inference_config = inference_config or {}
        self.host = host
```
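Note how the defaults resolve: `inference_config or {}` guarantees a dict, so looking up `temperature` with a fallback is always safe, and an unset `model_id` falls back to `'llama3.1'` in the classifier's `__init__`. A small stand-alone sketch of that resolution (the class is repeated here so the snippet runs on its own):

```python
from typing import Any, Dict, Optional

# Mirrors OllamaClassifierOptions from the guide so this snippet is self-contained.
class OllamaClassifierOptions:
    def __init__(self,
                 model_id: Optional[str] = None,
                 inference_config: Optional[Dict[str, Any]] = None,
                 host: Optional[str] = None):
        self.model_id = model_id
        self.inference_config = inference_config or {}  # never None
        self.host = host

# No arguments supplied: the classifier falls back to its own defaults.
opts = OllamaClassifierOptions()
model_id = opts.model_id or 'llama3.1'                       # default model
temperature = opts.inference_config.get('temperature', 0.0)  # safe even with no config
print(model_id, temperature)  # llama3.1 0.0
```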
```python
class OllamaClassifier(Classifier):
    def __init__(self, options: OllamaClassifierOptions):
        super().__init__()
        self.model_id = options.model_id or 'llama3.1'
        self.inference_config = options.inference_config
        self.streaming = False
        self.temperature = options.inference_config.get('temperature', 0.0)
        self.client = ollama.Client(host=options.host or None)

    async def process_request(self,
                              input_text: str,
                              chat_history: List[ConversationMessage]) -> ClassifierResult:
        messages = [
            {"role": msg.role, "content": msg.content[0]['text']}
            for msg in chat_history
        ]
        # Build the prompt in a local variable so repeated calls don't keep
        # appending to self.system_prompt.
        prompt = self.system_prompt + f'\nquestion: {input_text}'
        messages.append({"role": ParticipantRole.USER.value, "content": prompt})

        try:
            response = self.client.chat(
                model=self.model_id,
                messages=messages,
                options={'temperature': self.temperature},
                tools=[{
                    'type': 'function',
                    'function': {
                        'name': 'analyzePrompt',
                        'description': 'Analyze the user input and provide structured output',
                        'parameters': {
                            'type': 'object',
                            'properties': {
                                'userinput': {
                                    'type': 'string',
                                    'description': 'The original user input',
                                },
                                'selected_agent': {
                                    'type': 'string',
                                    'description': 'The name of the selected agent',
                                },
                                'confidence': {
                                    'type': 'number',
                                    'description': 'Confidence level between 0 and 1',
                                },
                            },
                            'required': ['userinput', 'selected_agent', 'confidence'],
                        },
                    }
                }]
            )

            # Check whether the model actually called the provided function
            tool_calls = response['message'].get('tool_calls')
            if not tool_calls:
                Logger.get_logger().info(
                    f"The model didn't use the function. Its response was: {response['message']['content']}"
                )
                raise Exception(f'Ollama model {self.model_id} did not use tools')

            tool_result = tool_calls[0].get('function', {}).get('arguments', {})
            return ClassifierResult(
                selected_agent=self.get_agent_by_id(tool_result.get('selected_agent')),
                confidence=float(tool_result.get('confidence', 0.0))
            )
        except Exception as e:
            Logger.get_logger().error(f'Error in Ollama Classifier: {str(e)}')
            raise
```
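To see what `process_request` does with a tool call, here is a self-contained sketch that imitates the shape `ollama.Client.chat` returns when the model invokes `analyzePrompt`. The response dict is hand-written for illustration, not real model output, and the extraction helper mirrors the logic above without requiring a running Ollama server:

```python
# Hand-written stand-in for the response ollama.Client.chat returns when the
# model calls the analyzePrompt tool (values are made up for illustration).
mock_response = {
    'message': {
        'role': 'assistant',
        'content': '',
        'tool_calls': [{
            'function': {
                'name': 'analyzePrompt',
                'arguments': {
                    'userinput': 'What will the weather be like today?',
                    'selected_agent': 'weather-agent',
                    'confidence': 0.92,
                },
            }
        }],
    }
}

def extract_classification(response: dict) -> tuple:
    """Pull the selected agent name and confidence out of the first tool call,
    mirroring what OllamaClassifier.process_request does before it maps the
    agent name to an agent instance."""
    tool_calls = response['message'].get('tool_calls')
    if not tool_calls:
        raise ValueError('model did not use the analyzePrompt tool')
    args = tool_calls[0].get('function', {}).get('arguments', {})
    return args.get('selected_agent'), float(args.get('confidence', 0.0))

agent_name, confidence = extract_classification(mock_response)
print(agent_name, confidence)  # weather-agent 0.92
```

If the model answers with plain text instead of a tool call, `tool_calls` is absent and the classifier raises, which is why the guide's implementation logs the raw content before failing.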
Now that we have our `OllamaClassifier`, let's use it in the Agent Squad:
🔗 3. Use the OllamaClassifier in the orchestrator:
If you have followed the quickstart sample program, you can plug in the Ollama classifier and run it like this:
```python
from ollamaClassifier import OllamaClassifier, OllamaClassifierOptions
from agent_squad.orchestrator import AgentSquad

classifier = OllamaClassifier(OllamaClassifierOptions(
    model_id='llama3.1',
    inference_config={'temperature': 0.0}
))

# Use our newly created classifier within the orchestrator
orchestrator = AgentSquad(classifier=classifier)
```
And you are done!
🏃 4. Run Your Ollama Model Locally:
Before running your program, make sure to start the Ollama model locally:

```shell
ollama run llama3.1
```
If you haven't downloaded the llama3.1 model yet, it will be pulled automatically the first time you run this command.
🎉 You're All Set!
Congratulations! You've successfully integrated an Ollama classifier into your Agent Squad System. You can now start classifying user requests and leveraging the power of llama3.1 in your applications!
💡 Next Steps:
- Experiment with different Ollama models
- Build a complete multi-agent system in an offline environment
Happy coding! 🎉