Overview
Running the Multi-Agent Orchestrator System locally is useful for development, testing, and debugging. This guide will walk you through setting up and running the orchestrator on your local machine.
Note: Ensure you have Node.js and npm installed (for the TypeScript version) or Python installed (for the Python version) on your development machine before proceeding.
Prerequisites
Create a new project:
TypeScript:
mkdir test_multi_agent_orchestrator
cd test_multi_agent_orchestrator
npm init
Follow the prompts to generate a package.json file.

Python:
mkdir test_multi_agent_orchestrator
cd test_multi_agent_orchestrator
# Optional: Set up a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
Authenticate with your AWS account
This quickstart demonstrates the use of Amazon Bedrock for both classification and agent responses.
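Calls to Amazon Bedrock use the standard AWS credential chain. If you have not configured credentials yet, one common approach is shown below (the region is an example; pick one where you have Bedrock model access):

```shell
# Configure a default profile interactively (prompts for keys and region)
aws configure

# Or export credentials directly in the current shell
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_DEFAULT_REGION=us-east-1
```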
By default, the framework is configured as follows:
Classifier: Uses the Bedrock Classifier implementation with anthropic.claude-3-5-sonnet-20240620-v1:0
Agent: Utilizes the Bedrock LLM Agent with anthropic.claude-3-haiku-20240307-v1:0
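Conceptually, the classifier looks at each registered agent's description and routes the request to the best match. The toy sketch below illustrates that idea with simple word overlap; it is purely illustrative and is not how the real Bedrock Classifier works (which asks an LLM to pick the agent):

```python
import re

# Toy illustration of description-based routing: score each agent's
# description by word overlap with the query and pick the best match.
def route(query: str, agents: dict) -> str:
    query_words = set(re.findall(r"[a-z]+", query.lower()))

    def score(description: str) -> int:
        return len(query_words & set(re.findall(r"[a-z]+", description.lower())))

    return max(agents, key=lambda name: score(agents[name]))

agents = {
    "tech-agent": "software development, hardware, AI, cybersecurity, cloud computing",
    "health-agent": "wellness, nutrition, diseases, treatments, mental health, fitness",
}

print(route("What are the latest trends in AI?", agents))  # -> tech-agent
```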
Important
These are merely default settings and can be easily changed to suit your needs or preferences.
You have the flexibility to:
Change the classifier model or implementation
Change the agent model or implementation
Use any other compatible models available through Amazon Bedrock
Ensure you have requested access to the models you intend to use through the AWS console.
To customize the model selection:
For the classifier, refer to our guide on configuring the classifier.
For the agent, refer to our guide on configuring agents.
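As a sketch of what swapping the classifier model can look like in Python (the model ID here is an example; see the classifier guide for the options your version supports):

```python
from multi_agent_orchestrator.classifiers import BedrockClassifier, BedrockClassifierOptions
from multi_agent_orchestrator.orchestrator import MultiAgentOrchestrator

# Use a different Bedrock model for classification (example model ID)
custom_classifier = BedrockClassifier(BedrockClassifierOptions(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
))

# Pass the custom classifier when constructing the orchestrator
orchestrator = MultiAgentOrchestrator(classifier=custom_classifier)
```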
Get Started!
Install the Multi-Agent Orchestrator framework in your project:
TypeScript:
npm install multi-agent-orchestrator

Python:
pip install multi-agent-orchestrator
Create a new file for your quickstart code:
TypeScript: create a file named quickstart.ts.
Python: create a file named quickstart.py.
Create an Orchestrator:
import { MultiAgentOrchestrator } from "multi-agent-orchestrator";

const orchestrator = new MultiAgentOrchestrator({
  config: {
    LOG_CLASSIFIER_CHAT: true,
    LOG_CLASSIFIER_RAW_OUTPUT: false,
    LOG_CLASSIFIER_OUTPUT: true,
    LOG_EXECUTION_TIMES: true,
  },
});
import uuid
import asyncio
from typing import Optional, List, Dict, Any
from multi_agent_orchestrator.orchestrator import MultiAgentOrchestrator, OrchestratorConfig
from multi_agent_orchestrator.agents import (AgentResponse, BedrockLLMAgent,
                                             BedrockLLMAgentOptions, AgentCallbacks)
from multi_agent_orchestrator.types import ConversationMessage, ParticipantRole

orchestrator = MultiAgentOrchestrator(options=OrchestratorConfig(
    LOG_CLASSIFIER_CHAT=True,
    LOG_CLASSIFIER_RAW_OUTPUT=True,
    LOG_CLASSIFIER_OUTPUT=True,
    LOG_EXECUTION_TIMES=True,
    USE_DEFAULT_AGENT_IF_NONE_IDENTIFIED=True,
    MAX_MESSAGE_PAIRS_PER_AGENT=10
))
Add Agents:
import { BedrockLLMAgent } from "multi-agent-orchestrator";

orchestrator.addAgent(
  new BedrockLLMAgent({
    name: "Tech Agent",
    description:
      "Specializes in technology areas including software development, hardware, AI, cybersecurity, blockchain, cloud computing, emerging tech innovations, and pricing/costs related to technology products and services.",
  })
);

orchestrator.addAgent(
  new BedrockLLMAgent({
    name: "Health Agent",
    description:
      "Focuses on health and medical topics such as general wellness, nutrition, diseases, treatments, mental health, fitness, healthcare systems, and medical terminology or concepts.",
  })
);
class BedrockLLMAgentCallbacks(AgentCallbacks):
    def on_llm_new_token(self, token: str) -> None:
        # handle response streaming here
        print(token, end='', flush=True)

tech_agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Tech Agent",
    streaming=True,
    description="Specializes in technology areas including software development, hardware, AI, "
                "cybersecurity, blockchain, cloud computing, emerging tech innovations, and pricing/costs "
                "related to technology products and services.",
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    callbacks=BedrockLLMAgentCallbacks()
))
orchestrator.add_agent(tech_agent)
Send a Query:
const userId = "quickstart-user";
const sessionId = "quickstart-session";
const query = "What are the latest trends in AI?";
console.log(`\nUser Query: ${query}`);

async function main() {
  try {
    const response = await orchestrator.routeRequest(query, userId, sessionId);

    console.log("\n** RESPONSE **\n");
    console.log(`> Agent ID: ${response.metadata.agentId}`);
    console.log(`> Agent Name: ${response.metadata.agentName}`);
    console.log(`> User Input: ${response.metadata.userInput}`);
    console.log(`> User ID: ${response.metadata.userId}`);
    console.log(`> Session ID: ${response.metadata.sessionId}`);
    console.log(`> Additional Parameters:`, response.metadata.additionalParams);
    console.log(`\n> Response: ${response.output}`);
    // ... rest of the logging code ...
  } catch (error) {
    console.error("An error occurred:", error);
    // Here you could also add more specific error handling if needed
  }
}

main();
async def handle_request(_orchestrator: MultiAgentOrchestrator, _user_input: str, _user_id: str, _session_id: str):
    response: AgentResponse = await _orchestrator.route_request(_user_input, _user_id, _session_id)
    print(f"Selected Agent: {response.metadata.agent_name}")
    if response.streaming:
        # Streamed tokens were already printed by the callback above
        print('Response:', response.output.content[0]['text'])
    else:
        print('Response:', response.output.content[0]['text'])

if __name__ == "__main__":
    USER_ID = "quickstart-user"  # any stable identifier for the user
    SESSION_ID = str(uuid.uuid4())
    print("Welcome to the interactive Multi-Agent system. Type 'quit' to exit.")
    while True:
        user_input = input("\nYou: ").strip()
        if user_input.lower() == 'quit':
            print("Exiting the program. Goodbye!")
            break
        asyncio.run(handle_request(orchestrator, user_input, USER_ID, SESSION_ID))
Now, let's run the quickstart script:

TypeScript:
npx ts-node quickstart.ts

Python:
python quickstart.py
Congratulations!
You've successfully set up and run your first multi-agent conversation using the Multi-Agent Orchestrator System.
Next Steps
Now that you've seen the basic functionality, here are some next steps to explore:
Try adding other agents built into the framework (Bedrock LLM Agent, Amazon Lex Bot, Amazon Bedrock Agent, Lambda Agent, OpenAI Agent).
Experiment with different storage options, such as Amazon DynamoDB for persistent storage.
Explore the Agent Overlap Analysis feature to optimize your agent configurations.
Integrate the system into a web application or deploy it as an AWS Lambda function.
Try adding your own custom agents by extending the Agent class.
For more detailed information on these advanced features, check out our full documentation.
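As a sketch of that last idea in Python: a custom agent mainly needs to implement process_request. The signature and helper classes below reflect the Python package's Agent base class but may differ across versions, so treat this as an outline rather than a drop-in implementation:

```python
from multi_agent_orchestrator.agents import Agent, AgentOptions
from multi_agent_orchestrator.types import ConversationMessage, ParticipantRole

class EchoAgent(Agent):
    """Minimal custom agent that simply echoes the user's input back."""

    async def process_request(self, input_text, user_id, session_id,
                              chat_history, additional_params=None):
        # Return a ConversationMessage, as the built-in agents do
        return ConversationMessage(
            role=ParticipantRole.ASSISTANT.value,
            content=[{"text": f"You said: {input_text}"}],
        )

# Register it like any other agent:
# orchestrator.add_agent(EchoAgent(AgentOptions(
#     name="Echo Agent",
#     description="Repeats the user's message back verbatim.")))
```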
๐งน Cleanup
As this quickstart uses in-memory storage and local resources, thereโs no cleanup required. Simply stop the script when youโre done experimenting.