
Agent Evaluation

Agent Evaluation is a generative AI-powered framework for testing virtual agents.

Internally, Agent Evaluation implements an LLM agent (the evaluator) that orchestrates conversations with your own agent (the target) and evaluates its responses throughout each conversation.

Key features

✅ Evaluate an agent's responses by simulating concurrent, multi-turn conversations.

✅ Built-in support for popular AWS services, including Amazon Bedrock, Amazon Q Business, and Amazon SageMaker. You can also bring your own agent to test with Agent Evaluation (see the sketch after this list).

✅ Define hooks to perform additional tasks such as integration testing.

✅ Incorporate Agent Evaluation into CI/CD pipelines to shorten delivery time while keeping agents in production environments stable.
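
Bringing your own agent typically means wrapping it in a custom target class that the evaluator can converse with. The sketch below is illustrative only: it assumes the `BaseTarget` and `TargetResponse` interfaces described in the Targets guide, and `call_my_agent` is a hypothetical stand-in for however your agent is actually invoked.

```python
from agenteval.targets import BaseTarget, TargetResponse


def call_my_agent(prompt: str) -> str:
    # Hypothetical stand-in for your agent's invocation logic
    # (an API call, an SDK client, a local model, etc.).
    return f"Echoing for illustration: {prompt}"


class MyCustomTarget(BaseTarget):
    """Wraps your own agent so the evaluator can converse with it."""

    def invoke(self, prompt: str) -> TargetResponse:
        # Forward the evaluator's prompt to your agent and return its reply.
        completion = call_my_agent(prompt)
        return TargetResponse(response=completion)
```

The custom target class is then referenced from the target section of your test plan; see the Targets guide for the exact configuration.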


  • 🚀 Getting started

    Create your first test using Agent Evaluation.

    User Guide

  • 🎯 Built-in targets

    View the required configurations for your agent.

    Targets

  • ✏️ Writing test cases

    Learn how to write test cases in Agent Evaluation.

    User Guide

  • Contribute

    Review the contributing guidelines to get started!

    GitHub