Multilingual Agent (beta)
Create virtual agents capable of understanding and speaking different languages
The multilingual capability allows virtual agents to understand and respond in multiple languages, enhancing user experience and broadening accessibility. This feature eliminates the need for creating separate virtual agents for different languages, enabling users to interact in their preferred language.
Multilingual support is currently a beta feature; some use cases may not function as expected.
We recommend its use in development environments only and not in production.
Automatic Language Detection and Translation: When a user sends a message, the system detects the language and translates the message into the agent's primary language to search the knowledge base. The virtual agent's answer is then translated back and delivered in the user's language.
Primary and Additional Languages: Users can configure a primary language for the knowledge base and add complementary languages at the Advanced Resources page.
Comprehensive Language Support: Provides accurate and clear translations for all languages supported by Azure OpenAI.
Integration with your Knowledge Base: Ensures efficient information retrieval and answer generation in any language.
Flexible Language Settings: Allows configuration to detect language either at the beginning of the conversation or with each interaction.
Isolating Terms: To prevent specific terms from being translated, such as product names or technical terms, you can use specific characters, like curly braces {}, around the term.
Enabling this feature may result in additional costs for each new request.
The primary language is selected when you create the virtual agent. The list of languages may vary depending on the NLP provider in use (Syntphony NLP, Dialogflow, Amazon Lex, IBM Watson, and Microsoft LUIS, or OpenAI and Azure OpenAI for the Zero-Shot option). See the languages supported by each NLP:
To start using this feature, first go to the Advanced Resources page to enable it, as it is disabled by default. Then, turn on the switch on the corresponding card.
After enabling it, a pop-up window will appear where you can set up translation options and add languages.
You can choose to detect the language either only the first time or with each interaction.
Detect and respond based on recent history: The agent detects the language from recent interactions and the conversation context, avoiding unwanted language switches triggered by isolated foreign terms. If the user actually switches languages during the conversation, the agent responds in the newly detected language, provided it is included in the list of additional languages. For example, if a user starts a chat in English and later switches to Spanish, the virtual agent will respond in Spanish, as long as Spanish is listed as one of the additional languages in the settings. If the detected input language is not configured as an additional language, the system translates the input into the primary language, processes it, and provides the answer in the primary language.
Answer only with the initially detected language: If you choose to detect the language only at the start of the conversation, the virtual agent will always respond in that initial language, even if the user switches to another language later. For example, with English as the primary language and Spanish and French as additional languages, if the conversation starts in Spanish and the user switches to French, the agent will understand the input but will keep responding in Spanish, the language the conversation started in. If the initial language is not listed as an additional language, the agent will always respond in the primary language.
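As a simplified sketch of the two modes above (the function and argument names are illustrative, not platform code):

```python
def response_language(detected, primary, additional, mode, initial=None):
    """Decide which language the agent answers in.

    mode 'history': follow the user's current language if it is the
    primary or an additional language; otherwise fall back to primary.
    mode 'initial': stick to the language detected at the start of the
    conversation, if configured; otherwise use the primary language.
    """
    if mode == "history":
        return detected if detected == primary or detected in additional else primary
    # 'initial' mode: the first detected language wins for the whole chat
    if initial == primary or initial in additional:
        return initial
    return primary

# English primary, Spanish and French additional
assert response_language("es", "en", {"es", "fr"}, "history") == "es"
assert response_language("de", "en", {"es", "fr"}, "history") == "en"
# Conversation started in Spanish; user switches to French mid-chat
assert response_language("fr", "en", {"es", "fr"}, "initial", initial="es") == "es"
```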
You must consent to share masked data with third-party translation providers for entity extraction and answer generation before enabling this feature.
You can configure the agent's content translation within the flow using transactional services. This does not affect language detection, only the translation of the subsequent responses.
This is useful if you want to disambiguate a flow in multiple languages or define that a flow responds in only one specific language.
If the requested language doesn't match the list of configured languages or if the multilingual support is disabled, the answer will be provided in the agent's primary language.
Use this code to set the language:
Use this code to reset the language:
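Purely as an illustrative sketch (the context variable name `translationLanguage` is an assumption, not the documented identifier), setting and resetting a language override within a flow could look like:

```python
context = {}

def set_language(ctx, lang):
    # Force subsequent answers to be translated into `lang`
    # (variable name is hypothetical, for illustration only)
    ctx["translationLanguage"] = lang

def reset_language(ctx):
    # Remove the override so automatic detection applies again
    ctx.pop("translationLanguage", None)

set_language(context, "fr")
assert context["translationLanguage"] == "fr"
reset_language(context)
assert "translationLanguage" not in context
```

If the requested language is not configured, the platform falls back to the primary language, as described below.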
The cache will be saved in the defined language.
It's important to note that the translation language is not considered a user input.
In cases where the language cannot be determined—due to ambiguity, abbreviations, or one-word input terms used in multiple languages—the system will prioritize the context of the conversation, based on the last five user interactions, over the most recent input when determining the language.
This approach aims to provide a more accurate understanding of the user's intended language by considering the broader context.
Ambiguous input with abbreviations:
User: "Bonjour"
User: "Comment ça va?"
User: "J’ai une question sur le compte."
User: "Où puis-je trouver mes relevés bancaires?"
User: "Merci"
Recent input: "OK"
In this scenario, the system detects that "OK" could be understood in many languages, so it prioritizes the context from previous interactions, which were all in French. Based on this context, the system continues to respond in French, provided French is among the languages you set as additional languages.
Another example:
User: "Hola, necesito ayuda con mi pedido."
User: "No entiendo el seguimiento."
User: "¿Puedes verificarlo?"
User: "Gracias"
User: "Me podrías ayudar con esto?"
Recent input: "Yes"
Though "Yes" could suggest a switch to English, the system identifies the broader context from previous interactions, which indicate the user has been communicating in Spanish. Thus, the system continues to reply in Spanish.
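The context-over-latest-input heuristic shown in these examples can be sketched as follows (a simplification: the ambiguous-word list and the toy detector are stand-ins for the real NLP detection):

```python
from collections import Counter

AMBIGUOUS = {"ok", "yes", "no"}  # one-word inputs valid in many languages

def detect_language(history, latest, detect):
    """Pick a language for `latest`, preferring the context of the last
    five user interactions when the latest input is ambiguous."""
    if latest.lower().strip() in AMBIGUOUS and history:
        recent = [detect(message) for message in history[-5:]]
        return Counter(recent).most_common(1)[0][0]
    return detect(latest)

def toy_detect(text):
    # stand-in detector: flags a few French words, defaults to English
    french = ("bonjour", "merci", "compte", "ça", "où", "relevés")
    return "fr" if any(word in text.lower() for word in french) else "en"

history = ["Bonjour", "Comment ça va?", "J'ai une question sur le compte.",
           "Où puis-je trouver mes relevés bancaires?", "Merci"]
# "OK" is ambiguous, so the French context of the last turns wins
assert detect_language(history, "OK", toy_detect) == "fr"
```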
During the translation process, there may be specific terms that you don't want to translate. These could be product names, commands, or other technical terms that need to remain in their original language. To keep them unchanged, you can "isolate" them using special characters, such as curly braces {}. Simply enclose the term within {}, and the system will skip translating it, keeping it in its original language.
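The isolation step can be pictured as a protect-translate-restore sequence (an illustration of the idea, not the platform's implementation; the fake translator just uppercases text):

```python
import re

def translate_with_isolation(text, translate):
    """Protect {...} spans from translation, then restore them."""
    protected = []

    def stash(match):
        # Replace the {term} with a sentinel the translator won't touch
        protected.append(match.group(1))
        return f"\x00{len(protected) - 1}\x00"

    masked = re.sub(r"\{([^{}]*)\}", stash, text)
    translated = translate(masked)
    # Put the original terms back in place of the sentinels
    return re.sub(r"\x00(\d+)\x00",
                  lambda m: protected[int(m.group(1))], translated)

fake_translate = str.upper  # stand-in for a real translation call
result = translate_with_isolation("restart {Syntphony CAI} now", fake_translate)
assert result == "RESTART Syntphony CAI NOW"
```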
You can also prevent certain values from being translated, such as button values: you can configure that only the call-to-action text field is translated, preserving the button's value (see Conversation API).
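As an illustration of preserving button values while translating only the visible text (the field names `text` and `value` follow a generic button payload and are assumptions, not necessarily the Conversation API schema):

```python
def translate_buttons(buttons, translate):
    """Translate only the visible call-to-action text; keep `value`
    (what the channel sends back) in the original language."""
    return [{"text": translate(b["text"]), "value": b["value"]} for b in buttons]

# toy dictionary-based translator for the illustration
to_spanish = {"Track order": "Rastrear pedido", "Cancel": "Cancelar"}.get
buttons = [{"text": "Track order", "value": "TRACK_ORDER"},
           {"text": "Cancel", "value": "CANCEL"}]

out = translate_buttons(buttons, lambda t: to_spanish(t, t))
assert out[0] == {"text": "Rastrear pedido", "value": "TRACK_ORDER"}
assert out[1]["value"] == "CANCEL"
```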
You have the option to include all available languages, or selectively add specific languages from the provided list.
The multilingual capabilities support all text-based inputs and virtual agent answers, including Zero-Shot, Assist Answer, and Rephrase functionalities.
Translation capabilities do not extend to audio, video, technical texts, or images.
This is the amount of time (in seconds) the virtual agent should wait for a response from the generative service. If the request fails or exceeds the set time, the system delivers the configured fallback answer. You can set this timeout between 1 and 10 seconds; the recommended default is 5 seconds. In this scenario, the system will look for Intents starting a flow, then a FAQ (an Intent and Answer pair), then Knowledge AI, if enabled, and finally the Not Expected flow.
This parameter can be configured in the Parameters section.
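The fallback order described above can be sketched as a chain of resolution stages (an illustrative simplification; the stage handlers are hypothetical):

```python
def resolve(user_input, handlers, not_expected="not-expected"):
    """Try each resolution stage in order; the first non-None answer wins.
    Stages mirror the documented order: Intents starting a flow, FAQ
    (Intent/Answer pairs), Knowledge AI if enabled, then Not Expected."""
    for handler in handlers:
        answer = handler(user_input)
        if answer is not None:
            return answer
    return not_expected

# Hypothetical stages: no flow intent matches, the FAQ knows "hours",
# and Knowledge AI is disabled (always None).
stages = [lambda t: None,
          lambda t: "We open at 9am." if t == "hours" else None,
          lambda t: None]

assert resolve("hours", stages) == "We open at 9am."
assert resolve("something else", stages) == "not-expected"
```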
To disable the Multilingual Agent, go back to the Advanced Resources page and turn off the toggle switch of the related card. Once multilingual support is disabled, the virtual agent will interact solely in the primary language. If the user interacts in any other language, the system will redirect them to the fallback measure you have set.
Multilingual agents can currently be used in the following text channels:
Apple Business Chat
Kakao
Slack
App Mobile
Line
SMS
Facebook Messenger
Microsoft Teams
Telegram
RCS
Web
Skype
Skype for Business
Web Mobile
X (formerly Twitter)
Multilingual support is not currently available in Telephony, Smart Speakers, and Voice Assistants.
Masking: When multilingual support is enabled, masking features cannot be used simultaneously. Assess each case carefully, prioritizing either multilingual support or masking to optimize efficiency.
Fallback Measures: If the virtual agent cannot identify the language, it will respond in the agent's primary language. If multilingual support is disabled, the system will not detect inputs in unsupported languages, which could result in the user going to the Not Expected flow.
Buttons and one-word inputs: The beta version has a limitation regarding language detection when buttons are used at the beginning of a conversation. The button value will be sent in the agent's primary language, which may affect language selection, since the system relies on the last five interactions to determine the predominant language. This can result in incorrect detection when one-word inputs, like "no," are recognized as a different language. For this reason, it is recommended to set the button values in the primary language, so that the translation of the answer is based on the user's previous input; if there is none, the primary language is used.
If a translation error occurs, you can create a rule (using rule cells) in the Not Expected flow to ensure the user gets a proper response. Add the rule $hiddenContext._eva.api.errors.grouped.MULTILANGUAGE to filter errors by this feature; if you need to check all errors in the order they happened, use $hiddenContext._eva.api.errors.details in the Not Expected flow, followed by a regular answer cell containing these variables so you can check the details and address the errors.
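A sketch of how a rule could inspect those error variables (the structure of $hiddenContext shown here is assumed for illustration):

```python
def multilanguage_errors(hidden_context):
    """Return translation errors grouped under MULTILANGUAGE, mirroring
    a rule on $hiddenContext._eva.api.errors.grouped.MULTILANGUAGE."""
    return (hidden_context.get("_eva", {})
                          .get("api", {})
                          .get("errors", {})
                          .get("grouped", {})
                          .get("MULTILANGUAGE", []))

# Hypothetical context shape, for illustration only
ctx = {"_eva": {"api": {"errors": {
    "grouped": {"MULTILANGUAGE": ["translation timeout"]},
    "details": [{"feature": "MULTILANGUAGE", "message": "translation timeout"}],
}}}}

assert multilanguage_errors(ctx) == ["translation timeout"]
assert multilanguage_errors({}) == []  # no errors recorded
```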