
Local Execution

Overview

Running the Multi-Agent Orchestrator System locally is useful for development, testing, and debugging. This guide will walk you through setting up and running the orchestrator on your local machine.

๐Ÿ’ Ensure you have Node.js and npm installed (for TypeScript) or Python installed (for Python) on your development environment before proceeding.

Prerequisites

  1. Create a new project:
Terminal window
mkdir test_multi_agent_orchestrator
cd test_multi_agent_orchestrator
npm init

Follow the steps to generate a package.json file.

  2. Authenticate with your AWS account

This quickstart demonstrates the use of Amazon Bedrock for both classification and agent responses.

By default, the framework is configured as follows:

  • Classifier: Uses the Bedrock Classifier implementation with anthropic.claude-3-5-sonnet-20240620-v1:0
  • Agent: Utilizes the Bedrock LLM Agent with anthropic.claude-3-haiku-20240307-v1:0

Important

These are merely default settings and can be easily changed to suit your needs or preferences.


You have the flexibility to:

  • Change the classifier model or implementation
  • Change the agent model or implementation
  • Use any other compatible models available through Amazon Bedrock

Ensure you have requested access to the models you intend to use through the AWS console.


To customize the model selection:

  • For the classifier, refer to our guide on configuring the classifier.
  • For the agent, refer to our guide on configuring agents.
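As a concrete sketch of that customization, the snippet below swaps in explicit model IDs. It assumes the `BedrockClassifier` and `BedrockLLMAgent` constructors accept a `modelId` option and that the orchestrator takes a `classifier` parameter; confirm the exact option names against the linked guides.

```typescript
import {
  MultiAgentOrchestrator,
  BedrockClassifier,
  BedrockLLMAgent,
} from "multi-agent-orchestrator";

// Override the classifier model (option name assumed to be `modelId`).
const customClassifier = new BedrockClassifier({
  modelId: "anthropic.claude-3-5-sonnet-20240620-v1:0",
});

const orchestrator = new MultiAgentOrchestrator({
  classifier: customClassifier,
});

// Override an agent's model the same way.
orchestrator.addAgent(
  new BedrockLLMAgent({
    name: "Tech Agent",
    description: "Specializes in technology topics.",
    modelId: "anthropic.claude-3-haiku-20240307-v1:0",
  })
);
```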

🚀 Get Started!

  1. Install the Multi-Agent Orchestrator framework in your project:
Terminal window
npm install multi-agent-orchestrator
  2. Create a new file for your quickstart code:

Create a file named quickstart.ts.

  3. Create an Orchestrator:

import { MultiAgentOrchestrator } from "multi-agent-orchestrator";

const orchestrator = new MultiAgentOrchestrator({
  config: {
    LOG_AGENT_CHAT: true,
    LOG_CLASSIFIER_CHAT: true,
    LOG_CLASSIFIER_RAW_OUTPUT: false,
    LOG_CLASSIFIER_OUTPUT: true,
    LOG_EXECUTION_TIMES: true,
  }
});
  4. Add Agents:

import { BedrockLLMAgent } from "multi-agent-orchestrator";

orchestrator.addAgent(
  new BedrockLLMAgent({
    name: "Tech Agent",
    description: "Specializes in technology areas including software development, hardware, AI, cybersecurity, blockchain, cloud computing, emerging tech innovations, and pricing/costs related to technology products and services.",
  })
);

orchestrator.addAgent(
  new BedrockLLMAgent({
    name: "Health Agent",
    description: "Focuses on health and medical topics such as general wellness, nutrition, diseases, treatments, mental health, fitness, healthcare systems, and medical terminology or concepts.",
  })
);
  5. Send a Query:

const userId = "quickstart-user";
const sessionId = "quickstart-session";
const query = "What are the latest trends in AI?";
console.log(`\nUser Query: ${query}`);

async function main() {
  try {
    const response = await orchestrator.routeRequest(query, userId, sessionId);
    console.log("\n** RESPONSE ** \n");
    console.log(`> Agent ID: ${response.metadata.agentId}`);
    console.log(`> Agent Name: ${response.metadata.agentName}`);
    console.log(`> User Input: ${response.metadata.userInput}`);
    console.log(`> User ID: ${response.metadata.userId}`);
    console.log(`> Session ID: ${response.metadata.sessionId}`);
    console.log(
      `> Additional Parameters:`,
      response.metadata.additionalParams
    );
    console.log(`\n> Response: ${response.output}`);
    // ... rest of the logging code ...
  } catch (error) {
    console.error("An error occurred:", error);
    // Here you could also add more specific error handling if needed
  }
}

main();

Now, let's run the quickstart script:

Terminal window
npx ts-node quickstart.ts

Congratulations! 🎉 You've successfully set up and run your first multi-agent conversation using the Multi-Agent Orchestrator System.

๐Ÿ‘จโ€๐Ÿ’ป Next Steps

Now that you've seen the basic functionality, here are some next steps to explore:

  1. Try adding other agents built into the framework (Bedrock LLM Agent, Amazon Lex Bot, Amazon Bedrock Agent, Lambda Agent, OpenAI Agent).
  2. Experiment with different storage options, such as Amazon DynamoDB for persistent storage.
  3. Explore the Agent Overlap Analysis feature to optimize your agent configurations.
  4. Integrate the system into a web application or deploy it as an AWS Lambda function.
  5. Try adding your own custom agents by extending the Agent class.

For more detailed information on these advanced features, check out our full documentation.

🧹 Cleanup

As this quickstart uses in-memory storage and local resources, there's no cleanup required. Simply stop the script when you're done experimenting.