Voice agents

In this guide, you'll learn how to build a virtual agent for voice channels using the two main answer templates and the technical text field.

Creating a Voice channel

First, add a voice channel. This can be done at two different moments: when you're creating a virtual agent, or later, by adding it to an existing agent. In the latter case, access the side menu option "Channels" and then click the "Create channel" tab.

Before continuing, make sure you have read this step-by-step guide up to the Welcome Flow item.

How to configure a DNIS

The following JSON contains all the data and configurable properties you must provide to eva.

This JSON allows you to insert the default DNIS configurations, including setting up a Conversation Property (voice providers).

These properties can be modified individually within the flows by using the "technical text" field of the answer cells, as demonstrated later in this documentation.

Please refer to each property table to understand the configurable fields used in the JSON and their reference values: TTS (text-to-speech) properties such as bargeIn and flush (used in the audio and text answer templates), Play Silence, DTMF menu, Voice menu, Transfer, Fetch, Default Error Behaviour, Regional Expressions, etc.

JSON for DNIS configuration
{
   "dnis":"913",
   "properties":{
      "tts":{
         "bargeIn":false,
         "flush":false,
         "bargeInOffset":200,
         "mask":"\u003cspeak xmlns\u003d\u0027http://www.w3.org/2001/10/synthesis\u0027 xmlns:mstts\u003d\u0027http://www.w3.org/2001/mstts\u0027 xmlns:emo\u003d\u0027http://www.w3.org/2009/10/emotionml\u0027 version\u003d\u00271.0\u0027 xml:lang\u003d\u0027en-US\u0027\u003e\u003cvoice name\u003d\u0027pt-BR-FranciscaNeural\u0027\u003e\u003cprosody rate\u003d\u0027-15%\u0027 pitch\u003d\u00270%\u0027\u003e $TEXT \u003c/prosody\u003e\u003c/voice\u003e\u003c/speak\u003e",
         "voiceProvider":"MICROSOFT",
         "microsoftTtsConfig":{
            "region":"brazilsouth",
            "subscriptionKey":"***",
            "language":"pt-BR"
         }
      },
      "audio":{
         "bargeIn":false,
         "flush":false,
         "bargeInOffset":200
      },
      "playSilence":{
         "time":50,
         "bargeIn":false,
         "flush":false
      },
      "dtmfMenu":{
         "numOfDigits":1,
         "timeout":20000,
         "interDigitTimeout":3000,
         "termTimeout":500,
         "termChar":"#"
      },
      "voiceMenu":{
         "sensitivity":0.01,
         "maxSpeechTimeout":30000,
         "timeout":20000,
         "incompleteTimeout":20000,
         "voiceProvider":"MICROSOFT",
         "microsoftAsrConfig":{
            "region":"brazilsouth",
            "subscriptionKey":"***",
            "language":"pt-BR"
         }
      },
      "transfer":{
         "uui":"evatest",
         "dest":"1234@172.16.0.7"
      },
      "fetch":{
         "fetchTimeout":45000,
         "fetchAudio":"",
         "fetchAudioDelay":0,
         "fetchAudioMinimum":0,
         "fetchAudioInterval":0
      },
      "defaultErrorBehaviour":{
         "audio":"",
         "tts":"ssml",
         "transfer":false
      },
      "firstConversationRequest":{
         "text":"",
         "code":"%EVA_WELCOME_MSG",
         "entities":{
            
         },
         "context":{
            
         }
      },
      "conversationProperties":{
         "headers":{
            "API-KEY":"***",
            "OS":"evg",
            "LOCALE":"pt-BR"
         },
         "conversationUrl":"https://api-dev-instance1.eva.bot/eva-broker/org/2fbe99b2-ea98-484f-b392-f649f1844e03/env/f5317429-55bb-4418-a7ca-00f6992388b2/bot/80d9ab14-5374-402a-9a93-6f1dc77f7675/channel/47a77735-d652-4c6c-a283-4d18028a3b18/v1/conversations"
      },
      "conversationAuthProperties":{
         "keycloakUrl":"https://keycloak-dev-admin.eva.bot/auth/realms/everis/protocol/openid-connect/token",
         "secret":"***",
         "clientId":"***"
      },
      "regionalExpressionsFileUrl":"https://***/regional-expressions.json",
      "welcomeTimeout":5000,
      "conversationTimeout":30000
   }
}

TTS configurations

To build a voice agent in eva, there are a few concepts that differ from a "text first" agent. The flow-building logic is the same; the difference is the consistent use of the technical text field with JSON. We'll call each of these a property; each property carries a command that tells the agent what to do.

Before jumping into them, let's see what an answer cell for voice agents looks like in eva.

Don't worry if you don't understand some of the terms in the following example; we'll get to all the concepts ahead in this chapter. 😉

Now, imagine you have an audio file with a greeting and a menu, and you want the user to choose a number option from the menu:

1. Click the + icon to add a cell (in this case, a Welcome flow).

2. Select the channel and choose the audio template.

3. Add the audio URL (WAV or FLAC format).

4. Use "Add option" to create the buttons that will identify the menu options.

5. After that, attach a JSON with the DTMF menu property to the technical text field, as follows:

{
   "dtmfMenu":{}
}

6. Finally, click Save.

If you don't have an audio file, just choose the text template to use the text-to-speech function, and proceed to step 4.

Audio template

The following example is an answer using the audio template. The supported formats are WAV and FLAC.

There are a few properties you can attach to the technical text field to enrich the experience, such as allowing the user to interrupt the audio playback at any time.

JSON used in the example:

{
   "configuration": {
      "bargeIn":false,
      "flush":false
   }
}

Other audio commands that overwrite the default settings:

  • bargeIn (boolean): Allows users to interrupt an audio using a DTMF keypad input. For example, in a menu audio, the user doesn't have to listen to all the options before being able to choose.

  • bargeInOffset (long): Allows users to interact with the IVR from a specific point in the audio. For example, if you set the value to 300, the user will be able to interact with the IVR starting 300 milliseconds before the audio stops playing.

  • flush (boolean): When flush is set to true, the IVR waits for the audio to be fully played before continuing the flow. It applies to audio, TTS, and play silence.
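For example, to let callers skip ahead near the end of a menu audio, you might enable bargeIn together with an offset. A minimal sketch, assuming bargeInOffset is accepted inside the same "configuration" object as the other audio commands:

{
   "configuration":{
      "bargeIn":true,
      "bargeInOffset":300
   }
}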

It's not mandatory to use all these JSON configurations when using the answer templates. When they are not attached, the system uses the default configurations.

Prefer the audio template to play audios. When an audio is provided, the text-to-speech (TTS) property is ignored.

Text template

Text-to-speech technology receives text as input and produces speech as output. To produce audible speech for the IVR, create an answer using the text template. You can fill it either with regular text or with SSML.

When you insert regular text, the IVR plays it with the default configurations, but if you want to change the default rate, pitch, or even the voice, use SSML with the new configuration, as seen below.

The text field has a 640-character limit.

You can also overwrite the default configurations using the following JSON in the technical text field:

JSON used in the example:

{
   "configuration":{
      "bargeIn":false,
      "flush":false,
      "voiceProvider":"MICROSOFT",
      "mask":"<speak xmlns='<http://www.w3.org/2001/10/synthesis>' xmlns:mstts='<http://www.w3.org/2001/mstts>' xmlns:emo='<http://www.w3.org/2009/10/emotionml>' version='1.0' xml:lang='en-US'><voice name='pt-BR-FranciscaNeural'><prosody rate='6%' pitch='3%'>$TEXT</prosody></voice></speak>",
      "microsoftTtsConfig":{
         "region":"brazilsouth",
         "subscriptionKey":"ba471adb1da790bd4e222a9d4041ed90",
         "language":"pt-br"
      }
   }
}

In the example above, we used a mask with the variable $TEXT, which is replaced with the content you wrote in the text template, so you don't have to repeat it in the XML. If the content of the answer is an XML starting with "<speak", the default XML won't be used.
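For reference, an answer whose content is itself SSML might look like the sketch below; the voice and prosody values are illustrative. Because this content starts with "<speak", the default mask is bypassed:

<speak xmlns='http://www.w3.org/2001/10/synthesis' version='1.0' xml:lang='en-US'>
   <voice name='pt-BR-FranciscaNeural'>
      <prosody rate='-10%' pitch='0%'>Welcome! Please choose one of the following options.</prosody>
   </voice>
</speak>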

Other TTS commands that overwrite the default settings:

  • bargeIn (boolean): Allows users to interrupt the speech using a DTMF keypad input.

  • bargeInOffset (long): Allows users to interact with the IVR from a specific point in the audio. For example, if you set the value to 300, the user will be able to interact with the IVR starting 300 milliseconds before the audio stops playing.

  • flush (boolean): When set to true, the IVR waits for the speech to be fully played before continuing the flow.

  • voiceProvider (String): TTS provider name. So far, only the MICROSOFT value is supported.

  • microsoftTtsConfig (JSON object): Credentials to access Microsoft.

Properties

Now that we know the basics of what an answer cell for IVR looks like in eva using the audio and text templates, let's move on to the technical text field.

To use the eva-evg channel or to implement a connector integrated with an IVR, some configurations need to be informed. These are the properties: a regular JSON attached to the technical text field.

If no properties are attached to the technical text field, the system uses the default properties.

Let's break down the properties and learn how to use them to create commands.

Menus

Menus are mostly used when you need an input from the user. You can use all the available templates: audio, text, and custom.

There are three types of menu:

  • DTMF: allows the user to interact with the IVR by the telephone keypad

  • VOICE: allows the user to interact with the IVR by speech

  • DTMF VOICE: allows the user to interact with the IVR by both telephone keypad and speech

Let's break down each type.

DTMF menu

As mentioned, the DTMF menu allows the user to interact with the IVR through the telephone keypad. See the example below:

JSON used in the example:

{
   "dtmfMenu":{}
}

It's possible to overwrite some configurations of the DTMF menu:

  • numOfDigits (int, default 1): Number of digits to be captured.

  • timeout (int, default 5500 ms): Pause timeout in milliseconds for the user to send an input (DTMF or speech).

  • interDigitTimeout (int, default 3000 ms): Inter-digit timeout in milliseconds for the user to enter a DTMF input.

  • termTimeout (int, default 300 ms): Timeout in milliseconds since the user's last input (DTMF or speech) before terminating the call.

  • termChar (String, default #): Users can indicate that the DTMF input has finished by sending a special character. If the user types only the character # (hash) without entering any digits, # is the value sent to eva; if other information is sent along with it, the # is not sent.

Timeouts refer to the pauses between words or phrases when speaking, or between DTMF inputs. You can control the length of these pauses so the engine can detect when a user is done speaking or entering the DTMF input.

To overwrite the default settings, enter the following JSON in the technical text:

JSON used in the example:

{
   "dtmfMenu":{
      "numOfDigits":1,
      "timeout":5000,
      "interDigitTimeout":1000,
      "termTimeout":500,
      "termChar":"#"
   }
}

It is possible to combine multiple configurations of different items to achieve proper customization of the menu, as in the example below (for DTMF menu and audio):

{
   "dtmfMenu":{
      "numOfDigits":1,
      "timeout":5000,
      "interDigitTimeout":1000,
      "termTimeout":500,
      "termChar":"#"
   },
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}

Usually, a DTMF menu is used with buttons to map the input that will be sent to eva during the conversation. For example, when the user presses "1" on the phone keypad, eva receives the mapped value; in the example below, the value sent to eva was "Schedule".

VOICE menu

As mentioned, the Voice menu allows the user to interact with the IVR by speech; see the example below.

If you want to use the voice menu property without overwriting any other configuration, just attach the following JSON to the technical text, as seen in the example above:

{
   "voiceMenu":{}
}

In case you want to overwrite some default configurations, use the following commands in the technical text:

  • voiceProvider (String): ASR provider name. So far, only the MICROSOFT value is supported.

  • sensitivity (double, default 20): Noise reduction sensitivity. Lower values lower the audio silence threshold, so more noise is recorded; higher values raise the threshold, so louder audio is needed to trigger the recording. Valid values go from 1 to 100.

  • timeout (int, default 5500 ms): Pause timeout in milliseconds for the user to send an input (DTMF or speech).

  • maxSpeechTimeout (int, default 15000 ms): The maximum duration of user speech. If this time elapses before the user stops speaking, the "nomatch" event is triggered.

  • incompleteTimeout (int, default 300 ms): Timeout in milliseconds the IVR will wait for a page/json fetch.

  • microsoftAsrConfig (JSON object): Credentials to access Microsoft.

This is how it looks:

JSON used in the example:

{
   "voiceMenu":{
      "voiceProvider":"MICROSOFT",
      "sensitivity":0.01,
      "timeout":20000,
      "maxSpeechTimeout":30000,
      "incompleteTimeout":20000,
      "microsoftAsrConfig":{
         "region":"brazilsouth",
         "subscriptionKey":"efvouqheg91b34fw094rtybyqyiwsdfqf",
         "language":"pt-br"
      }
   }
}

It's possible to combine multiple configurations of different items to achieve a proper menu customization, as in the example below:

{
   "voiceMenu":{
      "voiceProvider":"MICROSOFT",
      "sensitivity":0.01,
      "timeout":20000,
      "maxSpeechTimeout":30000,
      "incompleteTimeout":20000,
      "microsoftAsrConfig":{
         "region":"brazilsouth",
         "subscriptionKey":"efvouqheg91b34fw094rtybyqyiwsdfqf",
         "language":"pt-br"
      }
   },
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}

DTMF VOICE menu

As mentioned, the DTMF VOICE menu allows the user to interact with the IVR by both telephone keypad and speech.

If you want to use the DTMF VOICE property without overwriting any other configuration, just attach the following JSON to the technical text, as seen in the example above:

{
   "dtmfVoiceMenu":{}
}

Settings for the DTMF VOICE menu are the same as those used for DTMF and VOICE.

This is how it looks:

JSON used in the example:

{
   "dtmfVoiceMenu":{
      "numOfDigits":1,
      "interDigitTimeout":1000,
      "termTimeout":500,
      "termChar":"#",
      "voiceProvider":"MICROSOFT",
      "sensitivity":0.01,
      "timeout":20000,
      "maxSpeechTimeout":30000,
      "incompleteTimeout":20000,
      "microsoftAsrConfig":{
         "region":"brazilsouth",
         "subscriptionKey":"efvouqheg91b34fw094rtybyqyiwsdfqf",
         "language":"pt-br"
      }
   }
}

It's possible to combine multiple configurations of different items to achieve a proper menu customization, as in the example below:

{
   "dtmfVoiceMenu":{
      "numOfDigits":1,
      "interDigitTimeout":1000,
      "termTimeout":500,
      "termChar":"#",
      "voiceProvider":"MICROSOFT",
      "sensitivity":0.01,
      "timeout":20000,
      "maxSpeechTimeout":30000,
      "incompleteTimeout":20000,
      "microsoftAsrConfig":{
         "region":"brazilsouth",
         "subscriptionKey":"efvouqheg91b34fw094rtybyqyiwsdfqf",
         "language":"pt-br"
      }
   },
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}

When used with buttons, we can map the input that will be sent to eva during the conversation. For example, when the user presses "1" on the phone keypad, eva receives the mapped value, a word or a phrase like "I want to buy".

Buttons

Let's learn how to use buttons in the context of eva-EVG. All three answer templates for voice channels allow you to add buttons. Click "Add option" to expand the two button fields: Option and Value.

The value saved in the context works as a map, helping eva identify where the user should be led.

When combined with a DTMF or DTMF VOICE menu, it's possible to associate the "Option" field with the digit and send the value to eva. For example:

Option: "1" Value: "Buy clothes"

When the user presses "1", the value actually sent to eva is "Buy clothes", leading the user to the appropriate flow.

Users may also take an alternative approach and spell out the number instead. So there are three input possibilities:

  • "1" (phone button)

  • "Buy clothes" (spoken)

  • "One" (spoken)

To cover this third option, represented by "One" in this example, you can add a Cardinal System entity (eva's pre-built NLP entity for numbers) followed by a Rule cell, as seen below.

On the Rule cell, you can create a condition (see example below) to segment the flow and, subsequently, add a Jump cell to that flow. Use this field to handle possible input options and help the STT recognize variations of the spoken number.

Play Silence

To provide greater fluidity and more natural speech when you have answers/audios in sequence, we recommend using the play silence property. It prevents one audio from being played immediately after another.

Play silence should be included in the answer that comes first; in the example, "Buy".

JSON used in the example:

{
   "playSilence":{}
}

Important: When the answer has a menu setting, the play silence will not be executed.

It's possible to overwrite some configurations with:

  • time (int, default 0): Silence duration in milliseconds. The maximum accepted value is 45,000 ms.

  • bargeIn (boolean, default false): Allows users to interrupt the silence using a DTMF input.

  • flush (boolean, default false): When set to true, the IVR waits for the silence to be fully played before continuing the flow.

To overwrite the default settings, enter the following JSON in the technical text:

JSON used in the example:

{
   "playSilence":{
      "time":50,
      "bargeIn":false,
      "flush":false
   }
}

It's possible to combine multiple configurations of different items to achieve a proper menu customization, as in the example below:

{
   "playSilence":{
      "time":50,
      "bargeIn":false,
      "flush":false
   },
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}

Transfer to human

The transfer property is used to transfer the call to live agents.

  • uui (String): User-to-User Information (UUI) sent along with the transferred call, as shown in the examples below.

  • dest (String): Call destination, where the call will be transferred to. You can declare it as sip or tel.

Below are some examples:

  • How to declare that you want a call to be transferred (remember to replace the information inside the quotation marks):

{
   "transfer":{
      "uui":"48656C6C6F20776F726C64;encoding=hex",
      "dest":"sip:12345678@172.16.0.7:5060"
   }
}

In the example above, the value "48656C6C6F20776F726C64" will be translated as "Hello world" by the agent.
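Since dest also accepts tel destinations, a transfer to a plain phone number could be declared as in this sketch (the number is hypothetical):

{
   "transfer":{
      "dest":"tel:+5511912345678"
   }
}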

  • By combining transfer configurations with audio configurations (TTS, text template), it's possible to customize multiple items at once, as in the example below:

{
   "transfer":{
      "dest":"sip:12345678@172.16.0.7:5060?user-to-user=342342ef34;encoding=hex"
   },
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}

In the example above, the hex encoding is declared directly in the dest value.

Important: The transfer property has priority over menu and play silence. When you attach those commands together with transfer, the other two are ignored.

Hangup

This property is used to end the flow. In other words, after it runs, the call is terminated. Simply attach the following JSON to the technical text:

{
   "hangup": true
}
  • By combining hangup with audio configurations (TTS, text template), it's possible to customize multiple items at once, as in the example below:

{
   "hangup":true,
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}

Important: The hangup property has priority over transfer, menu, and play silence. When you attach those commands together with hangup, the other three are ignored.

Recall

The recall property can be used to simulate asynchronous delivery of answers, and also to send eva a user input that can trigger a flow or validate a service.

This behavior is useful when the system requires lengthy processing and you don't want to leave the user waiting in silence, wondering if the call is still active.

💡 It's good practice to give the user feedback, such as audios with background music or informative messages.

This is how you use a recall: add a wait-input cell after the answer you want delivered before continuing the flow.

You can use the same parameters as in the Conversation API to specify the user input (text, code, context, intent, confidence, or entities).

In the example below, the code "357YVU" is used as a value to validate a service.

{
  "recall": {
     "code": "357YVU"
  }
}

In the following case, the intent "shopping" is triggered without the need to identify utterances; you just have to inform the name of the intent as it's registered in eva.

{
  "recall": {
    "confidence": "0.50",
    "intent": "shopping"
  }
}

This next example is a simpler way of using the recall property. In this scenario, eva is called with an empty input.

{
  "recallText": ""
}

Fetch

The fetch property represents the waiting time for the IVR to make a new request to eva and then continue the flow. You can also overwrite the default setting in the technical text so it applies only to a specific execution (audio playback, TTS, etc.).

  • fetchTimeout (long): The default amount of time in milliseconds the IVR will wait for a page/json fetch.

  • fetchAudio (String): The path to the default audio file to be used during IVR platform fetch events.

  • fetchAudioDelay (long): The default value for the fetch audio delay: the amount of time in milliseconds the IVR waits while transitioning and fetching resources before it starts playing the fetch audio.

  • fetchAudioMinimum (long): The minimum time in milliseconds to play a fetch audio source once started, even if the fetch result arrives in the meantime. The idea is that once the user begins to hear a fetch audio, it should not be stopped too quickly.

  • fetchAudioInterval (long): Controls the time interval between fetch audio loops. The default value is 0; a value of -1 is valid and prevents the audio loop.

Below are some examples:

  • Fetch configuration

{
   "fetch":{
      "fetchTimeout":45000,
      "fetchAudio":"",
      "fetchAudioDelay":0,
      "fetchAudioMinimum":0,
      "fetchAudioInterval":0
   }
}
  • By combining fetch configurations with audio configurations (TTS, text template), it's possible to customize multiple items at once, as in the example below:

{
   "fetch":{
      "fetchTimeout":45000,
      "fetchAudio":"",
      "fetchAudioDelay":0,
      "fetchAudioMinimum":0,
      "fetchAudioInterval":0
   },
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}

Important: If none of the properties mentioned above (DTMF, VOICE, DTMF_VOICE, play silence, transfer, hangup, or recall) is attached, a DTMF_VOICE menu with the default configurations will be added.

Synonyms for regional expressions

This property gives contextual understanding of expressions and word variations. For example, in English it's common to say "O" (the letter) instead of "zero" when giving a phone number.

To help the STT intelligence understand that this is the number 0 and not a letter, you can use a JSON file that gathers all "regional expressions", as in the example:

{
   "O": "0"
}

Important: The JSON with regional expressions has to be a public file. To enable it, provide the file URL in the JSON with the default configurations.
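This is the regionalExpressionsFileUrl property of the DNIS configuration shown earlier; the URL below is illustrative:

{
   "regionalExpressionsFileUrl":"https://example.com/regional-expressions.json"
}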

To enable this property, simply attach the following JSON to the technical text:

{
   "useRegionalExpressions": true
}

This way, the agent will better recognize specific entities such as phone numbers, credit card numbers, etc. Bear in mind that each change made to the file can take up to one hour to reflect in the call.

Configure first flow

If you want to start the conversation with a different flow, use the following code to set the first interaction when configuring the DNIS:

"firstConversationRequest": {
   "code":"",
   "text":"",
   "intent":"",
   "confidence":1,
   "entities":{
      "comida":"",
      "carro":""
   },
   "context":{
      
   }
}

See here all the properties you can use in this JSON.

You can use this scenario to change the channel, to start with a specific seasonal flow, or for outbound calls, for example.
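For instance, to open with a dedicated seasonal flow, you could send only a trigger code; the code name below is hypothetical:

"firstConversationRequest": {
   "code":"%SEASONAL_WELCOME"
}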

Handling events

Disconnected user

When a call is interrupted unexpectedly, either because the user hung up accidentally or as a result of a system error, it's possible to configure a flow in eva so that the conversation resumes from the same point if the same user calls again within 5 minutes.

This setup not only enhances user experience but also refines the abandonment metric by filtering out abandoned calls and excluding those that were resumed.

To create this scenario, you'll have to:

  • Create a welcome answer with a transactional service to identify the call.

  • Create a User Journey flow specifically for this use case. Add the utterance "USER_DISCONNECTED" to your intent followed by a service cell (see image below) to identify the call and resume from the same point where it left off.

No input

When the user doesn't interact with the virtual agent, meaning there is no DTMF or speech input, the system sends eva the code IVR_NO_INPUT, visible in the User Messages column on Dashboards (see image below).

No match

Used to manage events when it's not possible to identify or transcribe the input; in this case, the system sends the code IVR_NO_MATCH, visible in the User Messages column on Dashboards (see image above).

Handling Errors

During a call, some errors may occur. Possible errors are listed below:

  • Communication failure with eva, due to some misconfiguration.

  • Failed authentication with eva.

  • Flow not found (when a Not Expected flow wasn't created, for example).

  • The use of a template not supported by the IVR channel.

There are two ways of handling them:

  1. Redirect the call to a live agent

  2. End the call

For both cases, we recommend delivering a message notifying the user of what will happen next.

Default error behavior

  • audio (String): Must contain an audio URL in WAV or FLAC format; when this response is delivered to the IVR, it plays the audio content.

  • tts (String): The field content is synthesized by the IVR; you can fill it with free text or with SSML.

  • transfer (boolean): If set to true, the call is transferred after the message is played; if set to false, or when the property is not specified, the call is terminated after the message. The transfer uses the default transfer settings.
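Putting it together, a default error behavior that plays an audio message and then transfers the call might look like this sketch (the audio URL is illustrative):

{
   "defaultErrorBehaviour":{
      "audio":"https://example.com/audios/error-message.wav",
      "transfer":true
   }
}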
