Welcome to the Ollama Agent guide! This example will walk you through creating an Ollama agent and integrating it into your Multi-Agent Orchestrator System.
Let’s dive in!
📚 Prerequisites:

- Basic knowledge of TypeScript or Python
- Familiarity with the Multi-Agent Orchestrator System
- Ollama installed on your machine
💾 1. Ollama installation:
First, let’s install the Ollama client library. For TypeScript, install the Ollama JavaScript package with `npm install ollama`; for Python, install it with `pip install ollama`.
🧬 2. Create the Ollama Agent class:
Now, let’s create our `OllamaAgent` class. This class extends the `Agent` abstract class from the Multi-Agent Orchestrator and must implement its abstract `processRequest` (TypeScript) / `process_request` (Python) method.
In TypeScript:

```typescript
import {
  Agent,
  AgentOptions,
  ConversationMessage,
  ParticipantRole,
  Logger
} from "multi-agent-orchestrator";
import ollama from "ollama";

export interface OllamaAgentOptions extends AgentOptions {
  streaming?: boolean;
  // Add other Ollama-specific options here (e.g., temperature, top_k, top_p)
}

export class OllamaAgent extends Agent {
  private options: OllamaAgentOptions;

  constructor(options: OllamaAgentOptions) {
    super(options);
    this.options = {
      name: options.name,
      description: options.description,
      modelId: options.modelId ?? "llama2",
      streaming: options.streaming ?? false
    };
  }

  private async *handleStreamingResponse(messages: any[]): AsyncIterable<string> {
    try {
      const response = await ollama.chat({
        model: this.options.modelId ?? "llama2",
        messages: messages,
        stream: true,
      });
      for await (const part of response) {
        yield part.message.content;
      }
    } catch (error) {
      Logger.logger.error("Error getting stream from Ollama model:", error);
      throw error;
    }
  }

  async processRequest(
    inputText: string,
    userId: string,
    sessionId: string,
    chatHistory: ConversationMessage[],
    additionalParams?: Record<string, string>
  ): Promise<ConversationMessage | AsyncIterable<any>> {
    // Flatten the stored history into the {role, content} shape Ollama expects
    const messages = chatHistory.map(item => ({
      role: item.role,
      content: item.content![0].text
    }));
    messages.push({ role: ParticipantRole.USER, content: inputText });

    if (this.options.streaming) {
      return this.handleStreamingResponse(messages);
    } else {
      const response = await ollama.chat({
        model: this.options.modelId!,
        messages: messages,
      });
      const message: ConversationMessage = {
        role: ParticipantRole.ASSISTANT,
        content: [{ text: response.message.content }]
      };
      return message;
    }
  }
}
```
In Python:

```python
from typing import List, Dict, Optional, AsyncIterable, Any
from multi_agent_orchestrator.agents import Agent, AgentOptions
from multi_agent_orchestrator.types import ConversationMessage, ParticipantRole
from multi_agent_orchestrator.utils import Logger
from dataclasses import dataclass
import ollama

@dataclass
class OllamaAgentOptions(AgentOptions):
    model_id: str = "llama2"
    streaming: bool = False
    # Add other Ollama-specific options here (e.g., temperature, top_k, top_p)

class OllamaAgent(Agent):
    def __init__(self, options: OllamaAgentOptions):
        super().__init__(options)
        self.model_id = options.model_id
        self.streaming = options.streaming

    async def handle_streaming_response(self, messages: List[Dict[str, str]]) -> ConversationMessage:
        text = ""
        try:
            response = ollama.chat(model=self.model_id, messages=messages, stream=True)
            for part in response:
                text += part['message']['content']
                self.callbacks.on_llm_new_token(part['message']['content'])
            return ConversationMessage(
                role=ParticipantRole.ASSISTANT.value,
                content=[{"text": text}]
            )
        except Exception as error:
            Logger.logger.error("Error getting stream from Ollama model:", error)
            raise error

    async def process_request(
        self,
        input_text: str,
        user_id: str,
        session_id: str,
        chat_history: List[ConversationMessage],
        additional_params: Optional[Dict[str, str]] = None
    ) -> ConversationMessage | AsyncIterable[Any]:
        # Flatten the stored history into the {role, content} shape Ollama expects
        messages = [
            {"role": msg.role, "content": msg.content[0]['text']}
            for msg in chat_history
        ]
        messages.append({"role": ParticipantRole.USER.value, "content": input_text})

        if self.streaming:
            return await self.handle_streaming_response(messages)
        else:
            response = ollama.chat(model=self.model_id, messages=messages)
            return ConversationMessage(
                role=ParticipantRole.ASSISTANT.value,
                content=[{"text": response['message']['content']}]
            )
```
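Two pieces of the agent’s logic can be illustrated in isolation: the history flattening in `process_request` and the token accumulation in `handle_streaming_response`. Here is a minimal, self-contained sketch; `Msg` and `fake_stream` are hypothetical stand-ins for `ConversationMessage` and an Ollama streaming response, not part of either library:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Msg:  # hypothetical stand-in for ConversationMessage
    role: str
    content: list

def flatten_history(history: List[Msg], input_text: str) -> List[Dict[str, str]]:
    """Mirror the mapping in process_request: one {role, content} dict per turn."""
    messages = [{"role": m.role, "content": m.content[0]["text"]} for m in history]
    messages.append({"role": "user", "content": input_text})
    return messages

def accumulate_stream(parts) -> str:
    """Mirror handle_streaming_response: concatenate streamed token chunks."""
    text = ""
    for part in parts:
        text += part["message"]["content"]
    return text

history = [Msg("user", [{"text": "Hi"}]), Msg("assistant", [{"text": "Hello!"}])]
msgs = flatten_history(history, "Summarize this article.")
# msgs is now in the flat {"role", "content"} shape that ollama.chat expects

fake_stream = ({"message": {"content": tok}} for tok in ["Sum", "mary", "."])
full = accumulate_stream(fake_stream)  # "Summary."
```

In the real agent the accumulated text is wrapped back into a `ConversationMessage`, and each chunk is also forwarded to the orchestrator’s streaming callback as it arrives.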
Now that we have our `OllamaAgent`, let’s add it to the Multi-Agent Orchestrator:

🔗 3. Add OllamaAgent to the orchestrator:

If you have used the quickstart sample program, you can add the Ollama agent and run it:
In TypeScript:

```typescript
import { OllamaAgent } from "./ollamaAgent";
import { MultiAgentOrchestrator } from "multi-agent-orchestrator";

const orchestrator = new MultiAgentOrchestrator();

// Add a text summarization agent using Ollama and Llama 2
orchestrator.addAgent(
  new OllamaAgent({
    name: "Text Summarization Wizard",
    description: "I'm your go-to agent for concise and accurate text summaries!",
    streaming: true
  })
);
```
In Python:

```python
from ollamaAgent import OllamaAgent, OllamaAgentOptions
from multi_agent_orchestrator.orchestrator import MultiAgentOrchestrator

orchestrator = MultiAgentOrchestrator()

# Add a text summarization agent using Ollama and Llama 2
orchestrator.add_agent(
    OllamaAgent(OllamaAgentOptions(
        name="Text Summarization Wizard",
        description="I'm your go-to agent for concise and accurate text summaries!",
        streaming=True
    ))
)
```
And you are done!
🏃 4. Run Your Ollama Model Locally:
Before running your program, make sure the Ollama model is running locally:

`ollama run llama2`

If you haven’t downloaded the Llama 2 model yet, it will be downloaded automatically the first time you run this command.
🎉 You’re All Set!
Congratulations! You’ve successfully integrated an Ollama agent into your Multi-Agent Orchestrator System. Now you can start summarizing text and leveraging the power of Llama 2 in your applications!
🔗 5. Useful Links:
💡 6. Next Steps:
- Experiment with different Ollama models
- Customize the agent’s behavior by adjusting parameters
- Create specialized agents for various tasks using Ollama
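On the parameter-tuning point above: one way to expose Ollama’s sampling parameters is to extend the options class and pass them through to the `options` argument of `ollama.chat`. This is a sketch under assumptions: `BaseOptions` is a hypothetical stand-in for `AgentOptions`, and the defaults shown follow Ollama’s documented defaults (temperature 0.8, top_k 40, top_p 0.9):

```python
from dataclasses import dataclass
from typing import Any, Dict

@dataclass
class BaseOptions:  # hypothetical stand-in for AgentOptions
    name: str = ""
    description: str = ""

@dataclass
class TunableOllamaOptions(BaseOptions):
    model_id: str = "llama2"
    streaming: bool = False
    temperature: float = 0.8
    top_k: int = 40
    top_p: float = 0.9

    def to_ollama_options(self) -> Dict[str, Any]:
        # Keys follow Ollama's generation-option names
        return {"temperature": self.temperature, "top_k": self.top_k, "top_p": self.top_p}

opts = TunableOllamaOptions(name="Summarizer", temperature=0.2)
# In the agent, pass opts.to_ollama_options() as ollama.chat(..., options=...)
```

A low temperature like 0.2 suits summarization, where you want focused, deterministic output rather than creative variation.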
Happy coding! 🚀