Automated Tests


To guarantee that your bot delivers the right answers to every question your users might ask, eva allows you to test intents and check whether your bot's answers match what you expect.

This is important for keeping track of your bot's health. Once a test scenario is created, you can run it multiple times, so your bot's accuracy can be checked every time a change is made. For example, if the most important question your users ask maps to the PLACE_ORDER intent, this feature shows you whether the accuracy for that intent has decreased, increased, or remained unchanged over the last trainings.
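As an illustration of how you might track accuracy across runs, here is a minimal Python sketch that compares per-intent accuracy between two test runs and flags any change. The accuracy values are hypothetical placeholders standing in for the figures shown on eva's test results screen.

```python
# Minimal sketch: flagging per-intent accuracy changes between two test runs.
# The accuracy values below are hypothetical placeholders for the figures
# you would read off eva's test results screen.
previous_run = {"PLACE_ORDER": 0.95, "TRACK_ORDER": 0.88, "CANCEL_ORDER": 0.80}
current_run = {"PLACE_ORDER": 0.90, "TRACK_ORDER": 0.91, "CANCEL_ORDER": 0.80}

for intent, new_acc in current_run.items():
    old_acc = previous_run.get(intent)
    if old_acc is None:
        print(f"{intent}: new intent, accuracy {new_acc:.0%}")
    elif new_acc < old_acc:
        print(f"{intent}: decreased from {old_acc:.0%} to {new_acc:.0%}")
    elif new_acc > old_acc:
        print(f"{intent}: increased from {old_acc:.0%} to {new_acc:.0%}")
    else:
        print(f"{intent}: unchanged at {new_acc:.0%}")
```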

This is the first screen you will see when you click on Tests.

Automated tests

To test your intents, first download the template, which shows the format your intents must follow.

Example of XLS file

In this file, insert the intent name, its data, the example utterance it should respond to, and the expected answer. Once the XLS file is ready, upload it, name your test, and select a channel.
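If you prefer to prepare the spreadsheet programmatically, the sketch below builds one with Python and openpyxl. The four-column layout and header names are assumptions inferred from the fields described above; always mirror the official template you downloaded.

```python
# Illustrative sketch: building an intent test spreadsheet with openpyxl.
# The column layout and headers are assumptions based on the fields this
# page describes; match them to the official template before uploading.
from openpyxl import Workbook

test_cases = [
    # (intent name, data, example utterance, expected answer)
    ("PLACE_ORDER", "", "I want to place an order", "Sure! What would you like to order?"),
    ("TRACK_ORDER", "", "Where is my order?", "Let me check the status of your order."),
]

wb = Workbook()
ws = wb.active
ws.title = "Tests"
ws.append(["Intent", "Data", "Utterance", "Expected answer"])  # assumed headers
for case in test_cases:
    ws.append(case)

wb.save("intent_tests.xlsx")
```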

Starting a test

On the following screen, you can see each intent, the user input and the expected answer.

Test results

After you test your intents, you can see how accurate they are. A universal accuracy indicator shows the percentage of intents with high, average, and low accuracy.

Below that, you can see how each intent performed. An intent that performed well will have an answer that matches its query. An intent with average performance might not have a matching answer, but it will have an answer. An intent that performed poorly will have a wrong answer.

Every test is stored in the repository, where you can access it and run it again.

Tests repository