Conversation API
How to integrate your own channels and custom interfaces with eva
What is the Conversation API
Any developer can integrate their own channels with eva. A user could benefit from a chat on the company’s website using a custom interface, or even custom response templates such as graphs, masked inputs or interactions with other elements of the webpage.
Companies normally add chatbot platforms to their existing apps, or use one of their internal channels to release a virtual agent for their employees. With eva, this can be done by consuming the Conversation API described below.
Authentication
Authentication must be handled following the OAuth2 Bearer Token protocol: you must authenticate with a valid, expirable token. Including the Bearer Token in your header is mandatory from API version 4.0.0.0 onwards.
Once obtained, the access token must be sent in your ‘Authorization’ header as the string: “Bearer {{access_token}}”
Obtaining your Authentication Token
To generate a token, make a request to the following endpoint:
POST
Request Body
The Content-Type is ‘x-www-form-urlencoded’. The username and password are your own credentials; client_id and grant_type are fixed values. If you lack credentials, ask your administrator to issue a valid user for your environment.
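As a minimal sketch, assuming the standard OAuth2 password grant and with the token endpoint URL and client_id shown as placeholders (use the fixed values provided for your environment), the call could look like this:

```
curl -X POST 'https://<your-eva-token-endpoint>' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=password' \
  -d 'client_id=<client_id provided for your environment>' \
  -d 'username=<your-username>' \
  -d 'password=<your-password>'
```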
Sample Response Body
The OAuth2 response body contains several default fields, but we recommend mapping only the ones you will actually need: access_token, expires_in, refresh_token and refresh_expires_in.
Sample response
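For illustration only (the values below are placeholders, and your environment may return additional fields), the response body could look like:

```
{
  "access_token": "eyJhbGciOi...",
  "expires_in": 300,
  "refresh_expires_in": 1800,
  "refresh_token": "eyJhbGciOi...",
  "token_type": "bearer"
}
```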
Authentication Token Renewal
Authentication tokens expire after the period indicated by the ‘expires_in’ field of the token generation response. When a token expires, you can still renew it without re-entering credentials by calling the same endpoint with the following request body:
Renewal Request Body
Once again, the Content-Type is ‘x-www-form-urlencoded’. If your refresh_token has also expired (see the ‘refresh_expires_in’ field), you must generate a new token by re-entering the client credentials.
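A sketch of the renewal call, following the standard OAuth2 refresh flow (the endpoint and client_id remain placeholders, as above):

```
curl -X POST 'https://<your-eva-token-endpoint>' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=refresh_token' \
  -d 'client_id=<client_id provided for your environment>' \
  -d 'refresh_token=<refresh_token from the previous response>'
```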
Conversation service
Unsafe Conversation service (Deprecated)
The conversation service is used to execute a conversation. Each call to this service is a message from the user that the virtual agent must process in order to understand and answer the user. This version does not require any authentication. It is deprecated and considered unsafe; use the newer method below, which does the same while enforcing user authentication.
Authenticated conversation service
The conversation service is used to execute a conversation. Each call to this service is a message from the user that the virtual agent must process in order to understand and answer the user. Authorization is required, and calls are refused whenever the token is revoked or expired.
URL Parameters
Request headers
Request body
Response body
User Input
Answer
Learn more about all Answer's features in eva
Button
Carousel card
Learn more about Carousel features in eva
NLP Response
Entity
Position
Sample requests
The request below is an example of a first call to the conversation service, requesting the execution of the welcome flow. It also adds a variable to the context, although this is not necessary.
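A hedged sample of such a call (the endpoint is a placeholder, the welcome code is the one configured for your environment, as described under Loading answers below, and the body field names and context variable are illustrative and should be checked against the Request body model above):

```
curl -X POST 'https://<conversation-service-endpoint>' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {{access_token}}' \
  -d '{
        "code": "<welcome-code>",
        "context": { "userName": "John" }
      }'
```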
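Another possible request, for subsequent user messages, could look like this (again with a placeholder endpoint and the same headers):

```
curl -X POST 'https://<conversation-service-endpoint>' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {{access_token}}' \
  -d '{ "text": "I want to check my balance" }'
```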
Sample response
The following JSON is an example of a response to the request above.
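As an illustrative sketch only (the field names below are inferred from the models listed above, such as Answer, Button, NLP Response and Entity, and should be checked against the Response body table), a response could look like:

```
{
  "sessionCode": "3f2a9c1e-...",
  "answers": [
    {
      "text": "Hello! How can I help you today?",
      "evaluable": true,
      "buttons": []
    }
  ],
  "nlpResponse": {
    "name": "welcome",
    "score": 0.98,
    "entities": []
  }
}
```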
Loading answers
When you want to avoid NLP calls, eva offers a front-end pre-processing option that bypasses cognitive processing. The CODE practice ties a specific code to a specific answer and obliges eva to deliver this answer.
In eva, a call to the Conversation API that contains the welcome code loads the welcome flow: whenever this code appears, eva is obliged to load the welcome flow. Extending this behavior to any other answer is what is called the CODE practice.
When you register an answer, its name is also its “code”: eva will deliver that specific answer whenever it receives that code. If the answer is transactional, the transaction is executed before the answer is delivered. If the answer is not found, the “code” content is sent to the NLP so it can be interpreted.
When the eva API receives both a “code” and a “text”, the code is processed and the text is not (unless the text is used by a transactional component). If no answer with the same name as the “code” content is found, the “text” content is sent to the NLP. This also applies in the middle of a flow: if a code is sent mid-flow, the flow is stopped so the code can run.
So, eva's loading priority is: code -> answer -> NLP -> Fallback
Important:
Every code interaction is registered in the User Interactions table
This is useful when you want to build a clickable menu with preset options where each option is a code; for example, a simple menu with options such as “check balance”, “check opening times” and “ask for a refund”.
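For instance, assuming an answer named “check balance” is registered in your environment, clicking that menu option could simply send the following request body:

```
{ "code": "check balance" }
```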
Learn more about all Answer's features in eva
Likable service
The likable service is used when an answer is configured to be evaluable. When this option is enabled, the answer should give the user a thumbs up / thumbs down (like / dislike) option in the chat.
When the user likes or dislikes an answer, this service must be called.
Request headers
Request body
Response body
The likable service will return an HTTP Status 200 with a “Success” string.
Sample request
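As an illustrative sketch only (the endpoint and the field name below are assumptions for this example; refer to the Request body table above for the actual schema), a like could be sent as:

```
curl -X POST 'https://<likable-service-endpoint>' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {{access_token}}' \
  -d '{ "like": true }'
```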
Sample Response
"Success"
Satisfaction service
When the conversation ends, a form might be presented to the user to evaluate the virtual agent. This evaluation has 3 parts:
A yes/no question asking the user whether their doubt or problem was solved.
A grade for the conversation. The range can vary, but a 0 to 10 scale is recommended.
A comments field for any details the user might want to add.
Important:
This service can be called only once for each sessionCode
URL parameters
Request headers
Request body
Response body
The satisfaction service will return an HTTP Status 200 with a “Success” string.
Sample request
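As an illustrative sketch only (the endpoint and field names are assumptions mapping to the three parts described above; refer to the Request body table for the actual schema):

```
curl -X POST 'https://<satisfaction-service-endpoint>' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer {{access_token}}' \
  -d '{
        "answered": true,
        "score": 9,
        "comments": "The virtual agent solved my issue quickly."
      }'
```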
Sample response
"Success"
Recommended Practices
Message Protection
Messages sent to the Conversation API carry both the user’s message data, which may contain sensitive information, and your eva token.
While a straightforward implementation that calls our API directly from your front-end website may be tempting, we highly recommend against it: it endangers your users’ data and your token’s integrity, and may breach the GDPR.
Leaving this valuable information exposed makes it open to exploitation and reveals your token to anyone inspecting the browser console. Furthermore, your messages can be intercepted by packet-sniffing software (such as Wireshark) if they are transmitted with their data in a human-readable form.
Sending raw requests from the front-end is an understandable practice during the development phase, but we highly recommend adding a custom security layer for this data, encrypting all messages sent by the front-end.
One adequate way to do this is to have your back-end act as a security intermediary. In a secure message flow, the messages your user sends through your chat are encrypted by a library such as CryptoJS and sent to your back-end server instead; the back-end then decrypts each message and sends the request to the Conversation API itself, where the data cannot be sniffed or exploited by users.