How to train a bot
After building a flow, you have to make sure that its intents are connected to the right answers. Here, you will learn the best way to do that.
Training button
If you are using everis Clever, you will have to train your Intents and Entities.
Every time an Intent, Entity, document or question is changed, the “Train” button will appear in the training section. Just click it.
Training a bot is very easy! You'll just have to go to the training repository:
Training repository
All trained versions (valid and invalid) are kept in the training repository. The latest trained version will be automatically published.
    You can publish only one version at a time.
    To train a flow in Clever, you must have at least five (5) intents with at least five (5) examples each. A bot will not be trainable if any of its intents has fewer than five (5) examples.
    If you delete an intent or example/utterance in the repository, the training will become invalid. Entities don’t have minimum requirements.
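The trainability rule above can be sketched as a simple check. This is a hypothetical illustration only: Clever performs this validation itself, and the data layout below is an assumption.

```python
# Sketch of Clever's trainability rule: at least 5 intents,
# each with at least 5 example utterances.
# The dict layout (intent name -> list of examples) is hypothetical;
# Clever enforces this rule server-side.
MIN_INTENTS = 5
MIN_EXAMPLES = 5

def is_trainable(intents: dict) -> bool:
    """Return True if the bot meets Clever's training minimums."""
    if len(intents) < MIN_INTENTS:
        return False
    return all(len(examples) >= MIN_EXAMPLES for examples in intents.values())

bot = {f"intent_{i}": ["example"] * 5 for i in range(5)}
print(is_trainable(bot))   # True
bot["intent_0"].pop()      # one intent now has only 4 examples
print(is_trainable(bot))   # False
```

This also shows why deleting a single example can invalidate a training: one under-filled intent is enough to fail the check.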
After training, you can check the details.
Training Details
By clicking on “Details”, you will be taken to a page with the details of that specific training.
Failed Training
If something goes wrong, you can see why by clicking on “view details” after the X icon.

Bot Simulator

After you train your intents in Clever (or use the ones from other NLPs), you can see how your dialogs will work in a simulated chat. The bot simulator allows you to test your bot by checking if its intents, entities, services and other cells are behaving properly. To access the bot simulator, click the balloon button in the lower right corner.
Bot Simulator Button on the lower right corner
A modal will open for you to choose a channel.
Channel Selection
The bot simulator will show you the last trained version (if the bot uses Clever) or the last loaded intents (if the bot uses any other NLP).
Some eva functionalities don’t work in the bot simulator. This does not mean they will not work in a flow, only that they will not be shown in the bot simulator.
Line breaks in answers will be rendered as a space in the bot simulator.
For example, the following answer,
“Thank you for ordering the tomato soup.
We will serve it in a second.
Enjoy your meal.”
would appear like this in the bot simulator:
“Thank you for ordering the tomato soup. We will serve it in a second. Enjoy your meal.”
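The collapsing behavior amounts to replacing each line break with a single space. A minimal sketch of that transformation (the rendering itself happens inside the simulator, not in your code):

```python
def simulate_render(answer: str) -> str:
    """Mimic how the bot simulator flattens line breaks into spaces."""
    return " ".join(answer.splitlines())

answer = ("Thank you for ordering the tomato soup.\n"
          "We will serve it in a second.\n"
          "Enjoy your meal.")
print(simulate_render(answer))
# Thank you for ordering the tomato soup. We will serve it in a second. Enjoy your meal.
```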
Bot Simulator Window
You can now write any phrase to see if your flows work properly. If you want to see which intent a sentence leads to, check the “Show intents” box.
Show Intents Box
Using the bot simulator, you can test every training. For example, consider the following soup ordering flow:
Partial Soup Ordering Flow
After this flow is trained, it becomes the following dialog:
Dialog on the Bot Simulator
The simulated conversation shows the intent to which each sentence was tied. In this case, the first intent was “soup1” and the second was “cold”.
Sentences and Intents

Automated Tests

To guarantee that a bot delivers the right answers to every question users might ask, eva allows you to test intents, documents and questions and check if your bot answers match what you expect.
Once a test scenario is created, you can run it multiple times, so the accuracy of your bot can be checked every time a change is made.
For example, if the most important question the users have is the PLACE_ORDER intent, this functionality can show you whether the accuracy for this intent has decreased, increased or remained unchanged over the last trainings.
Automated tests might generate additional fees.
To test your intents, first download the template, which shows how to format the .xls file you will upload.
Example of XLS file
In this file, you should insert the component category, its name, the example/utterance it should respond to and the expected answer. If you wish, you can describe each component, but this is not mandatory.
Blank test file
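As an illustration, you can sanity-check your rows before uploading. The column names below mirror the description above but are assumptions; the template downloaded from eva is authoritative.

```python
# Hypothetical column set, inferred from the docs; "description" is optional.
REQUIRED = {"category", "name", "utterance", "expected_answer"}

def validate_rows(rows: list) -> list:
    """Return a list of problems found in the test rows (empty if OK)."""
    problems = []
    for i, row in enumerate(rows, start=1):
        missing = [col for col in REQUIRED if not row.get(col)]
        if missing:
            problems.append(f"row {i}: missing {', '.join(sorted(missing))}")
    return problems

rows = [
    {"category": "intent", "name": "PLACE_ORDER",
     "utterance": "I want a tomato soup",
     "expected_answer": "Thank you for ordering the tomato soup."},
    {"category": "intent", "name": "PLACE_ORDER", "utterance": ""},
]
print(validate_rows(rows))
# ['row 2: missing expected_answer, utterance']
```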
Once you have the XLS file ready, upload it, name your test and select a channel.
Starting a Test
Once the test is completed, you can see its results.
Test Results
This screen shows the test results. Before the individual results for each component, you see the general results.
The average assertiveness shows the percentage of times a bot linked a user input to a component correctly.
The trust rating shows the percentage of times a user input was linked to an intent correctly.
The likelihood score shows the percentage of times a user input was linked to a document or question correctly.
Below the general results, you can see how each component did individually.
Each line shows the expected component, the delivered component, the user input, the percentage of times the right component was linked to that input, the expected answer and the delivered answer.
A component that performed well will have an answer that matches its query. An average component might not have a matching answer, but it will have one. A poor component will have a wrong answer or no answer at all.
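Conceptually, each of these percentages is a hit rate over the test rows. A minimal sketch of that computation (the data layout is an assumption for illustration; eva computes these figures for you):

```python
def hit_rate(results: list) -> float:
    """Percentage of rows where the delivered component equals the expected one.

    `results` is a list of (expected_component, delivered_component) pairs;
    this layout is hypothetical, not eva's actual data model.
    """
    if not results:
        return 0.0
    hits = sum(1 for expected, delivered in results if expected == delivered)
    return 100.0 * hits / len(results)

results = [("PLACE_ORDER", "PLACE_ORDER"),
           ("PLACE_ORDER", "CANCEL_ORDER"),
           ("FAQ_HOURS", "FAQ_HOURS"),
           ("FAQ_HOURS", "FAQ_HOURS")]
print(hit_rate(results))  # 75.0
```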
Every test is stored in the repository. There, you will see the test name, when it was last run, the channel where it was tested and its general assertiveness. You can open each test and run it again.

Automated Learning Training

Every time you upload a document or create or edit a question, Automated Learning becomes trainable again.
To train them, click on “train”.
You can check the trained versions on the Automated Learning tab in the training repository.
Automated Learning Tab
The Automated Learning documents and questions are trainable even if your bot doesn’t use Clever. Automated Learning training works just like Clever training: you can check the details after training and see what went wrong after a failed training.
