Simulate Dialog
After you train your intents in Syntphony NLP (or use the ones from other NLPs), you can see how your dialogues will work in a simulated chat. The dialog simulator allows you to test your virtual agent by checking whether its intents, entities, services, and other cells are behaving properly. To access the simulator, click the balloon button in the bottom-right corner.
A modal will open for you to choose a channel.

The virtual agent simulator will show you the last trained version (if it's using Syntphony NLP) or the last loaded intents (if the virtual agent uses any other NLP).
Not all Syntphony CAI functionalities work in the simulator. This doesn't mean they won't work in a flow; they just will not be shown in the simulator.
Line breaks in answers will be rendered as a space in the virtual agent simulator.
For example, the following answer,
“Thank you for ordering the tomato soup.
We will serve it in a second.
Enjoy your meal.”
would appear like this in the dialog simulator:
“Thank you for ordering the tomato soup. We will serve it in a second. Enjoy your meal.”
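To make the rendering rule concrete, here is a minimal Python sketch; the answer text is the example above, and collapsing line breaks into single spaces is the only behavior taken from this page:

# Illustration only: each line break in the answer is rendered as a single space.
answer = (
    "Thank you for ordering the tomato soup.\n"
    "We will serve it in a second.\n"
    "Enjoy your meal."
)

rendered = " ".join(answer.splitlines())
print(rendered)
# Thank you for ordering the tomato soup. We will serve it in a second. Enjoy your meal.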

Show Info
The "Show Info" feature in the Dialogue Simulator allows you to inspect both the input and output of an interaction in detail. This functionality helps identify how the different components — intents, entities, agent, action, knowledge collection — contribute to the outcome of a conversation. The behavior adapts depending on the flow used (NLU or Agents):
NLU flows display details such as the detected intent, possible entities, knowledge source, KAI collection, etc.
Agent flows show data such as the responsible agent, executed action, supervisor, and associated KAI collection.
Accessing the "Show Info" Panel
When opening the Dialogue Simulator within the application, you can start a new conversation or load an existing simulation. Once in the environment, locate the message whose details you want to analyze. Clicking on "Show Info" will open a panel with expanded information about the input and output, showing how the system interpreted the user's input and which elements were involved in generating the response.
Viewing the Input
NLU Flows
Input Intent: This only appears if the NLU engine detected an intent above the confidence threshold.
Input Intent + Entity: If recognized entities also exist, the panel will display this information. If it exceeds three lines, part of the content will be hidden, with (view more)/(view less) buttons.
Input None: If no intent is detected with sufficient confidence, it will display: “Input: None”
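As an illustration, here is a minimal sketch of how these three cases map to the label shown in the panel. The function name, threshold value, and sample data are hypothetical and not part of the product API:

def input_label(intent, confidence, threshold, entities):
    # Below the confidence threshold, the panel shows "Input: None".
    if intent is None or confidence < threshold:
        return "Input: None"
    # If entities were recognized, they are shown together with the intent.
    if entities:
        return "Input: " + intent + " + " + ", ".join(entities)
    # Otherwise only the detected intent is shown.
    return "Input: " + intent

print(input_label("order_soup", 0.92, 0.70, ["tomato soup"]))
# Input: order_soup + tomato soup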
Agent Flows
It will simply display: “User input”
Viewing the Output
The panel shows information about how the agent generated its response. Depending on the flow type (NLU or Agents), the level of detail displayed may vary.
NLU Flows
When the response comes from an NLU flow, the panel may show different types of information:
Response (cell): Indicates which flow cell responded to the user.
KAI Source + KAI Collection: If the response comes from the KAI knowledge repository, the source and the collection where the information was found are shown.
KAI Question + KAI Collection: If the response corresponds to an existing question in KAI, its name and associated collection will be displayed.
FAQ (intent and response with the same name): When the response comes from an intent that matches a cell of the same name, only the name is displayed.
External NLU: If the interaction was handled by an external NLU service, it will be clearly indicated in the panel. If there is no internal cell that triggered the response, only the following will be displayed: “External NLU”
Agent Flows
When the response comes from an Agent flow, the panel shows who or which component was responsible for the action:
Supervisor: If the Supervisor generated the response, it will simply display: “Supervisor”
Agent + Action: If the response was generated by an agent executing a specific action, both pieces of information will appear.
Agent + KAI Collection: If the agent retrieved the response from a knowledge collection in KAI, both the agent and the collection will be displayed.
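A similarly hedged sketch of how the Agent-flow cases above could be summarized; the function and example names are illustrative only:

def agent_output_label(agent=None, action=None, kai_collection=None):
    # A response generated by the Supervisor shows only "Supervisor".
    if agent is None:
        return "Supervisor"
    # A response retrieved from a KAI collection shows the agent and the collection.
    if kai_collection:
        return agent + " + " + kai_collection
    # Otherwise the agent and the executed action are shown.
    return agent + " + " + action

print(agent_output_label(agent="BookingAgent", action="create_reservation"))
# BookingAgent + create_reservation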

Voice
The Dialogue Simulator allows you to validate your Voice Agent’s behavior in a visual and intuitive way.
With this tool, you can make test calls, configure audio channels, and review real-time transcriptions — all from a unified, easy-to-use interface.
Voice Channel Selection
When you open the application, you’ll see the available voice channels in your project, organized by type:
• Mobile
• Telephony
• Voice Assistants
Only active channels will be displayed, sorted alphabetically within each category.
If you need to find a specific channel, use the search bar, which lets you locate it by name or category. Once identified, click the channel to select it and start your test.
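To make the listing rules concrete (active channels only, sorted alphabetically within each category, searchable by name or category), here is a small sketch; the channel data and function are invented for illustration:

channels = [
    {"name": "WhatsApp Voice", "category": "Mobile", "active": True},
    {"name": "Landline IVR", "category": "Telephony", "active": True},
    {"name": "Alexa Skill", "category": "Voice Assistants", "active": False},
]

def visible_channels(channels, query=""):
    # Keep only active channels that match the query by name or category.
    query = query.lower()
    matches = [
        c for c in channels
        if c["active"] and (query in c["name"].lower() or query in c["category"].lower())
    ]
    # Sort alphabetically within each category.
    return sorted(matches, key=lambda c: (c["category"], c["name"]))

for channel in visible_channels(channels, query="tele"):
    print(channel["name"])
# Landline IVR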

Making a Call
After selecting a voice channel (for example, Telephony), the main controls will appear at the bottom of the screen:
• Start Call
• Mute / Unmute Audio
• Open Audio Settings
Click Start Call to begin the test conversation with your agent.
During the call, the elapsed time (in minutes and seconds) is displayed, allowing you to monitor the duration at all times.
From the Audio Settings menu, you can adjust the microphone volume for both the user and the agent using sliders. You can also mute or unmute each one with a single click.
For quicker access, use the shortcut at the bottom of the screen, represented by the settings icon or a side arrow, to make adjustments without opening the main menu.

Ending the Call
When the call ends, the timer stops automatically and a message displays the total duration. The application remains open, showing the full transcription, ready for review or for starting a new test.

Call Analytics
Each test call is automatically saved in the Analytics section. There, you can review the complete transcription, analyze the messages, and gather insights to optimize your interaction experience.
With these features, the tool becomes a practical, visual, and reliable resource to test, measure, and improve the performance of your voice channels.