Other NLP and LLM Connectors
In this chapter, you'll learn how to connect the virtual agent to an external engine.
The NTT DATA proprietary Natural Language Processing (NLP) engine comes integrated by default.
Syntphony Conversational AI allows you to use different NLP engines:
IBM Watson Assistant
Google Dialogflow Essentials
Microsoft LUIS
Amazon Lex
OpenAI for LLM models
To use any of these engines, follow the steps below. Click on Change Model to open the window with the other engine options.
Watson is a service package offered by IBM. Among its services is question-answering software that applies natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies to answer questions posed in natural language.
1) Go to https://login.ibm.com/
2) Log in with your IBMid
3) Click on "skills" in the upper left corner
4) Then click “create skill” to create a virtual agent on Watson
5) If you have existing skills, select one, then click on the menu in the upper right corner of the selected skill card.
6) Click on “view API details”
7) If you are using a newer account, copy the values after Assistant URL and API Key and insert them in the cockpit. Remember to switch to the newer version in Syntphony Conversational AI.
8) If you are using an older account, copy the values after v1 Workspace URL, Username, and Password and insert them in Syntphony CAI. Remember to switch to the older version in Syntphony CAI.
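If you want to verify the copied credentials before inserting them, the official ibm-watson Python SDK can run a quick check. This is a minimal sketch against the public v2 API; the version date, assistant ID, and credential values are placeholders and assumptions, not the cockpit's own mechanism:

```python
# Minimal credential check for Watson Assistant (v2 API).
# Replace the placeholders with the API Key and Assistant URL
# copied from "View API details".
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
assistant = AssistantV2(version="2021-06-14", authenticator=authenticator)
assistant.set_service_url("YOUR_ASSISTANT_URL")

# Creating a session succeeds only if the URL and key are valid.
session = assistant.create_session(assistant_id="YOUR_ASSISTANT_ID").get_result()
print("Credentials OK, session:", session["session_id"])
```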
Google Dialogflow is a framework for building natural-language human-computer interactions.
1) Go to https://dialogflow.com/
2) Then click on “go to console”.
3) Click on settings on the upper left corner (the cogwheel icon - see image).
4) Click the link right after “Project ID”.
5) You will be taken to a page in the Google Cloud Platform.
6) Once in the Google Cloud Platform, click on the link below “e-mail”.
Important: Remember to change your agent's permissions or else your intents won't work
7) Go to IAM on the upper left corner of the menu (as shown in the image below).
8) Once there, click on the edit icon (pencil) on the right of the agent named as Dialogflow Integrations (see image below).
9) Now, select “Dialogflow” and then “Dialogflow API Admin” (as shown in the image below).
10) Once you have changed your agent's permissions, go to “Service accounts” and then click on the menu on the right of the agent you want to use.
Important: If you don’t have a Service Account, click on “Create Service Account” and create one
11) Click on “create key” and select JSON.
12) Save the JSON file on your computer.
There is a new option to configure Dialogflow multi-region agents.
13) (Optional) If you want to use a Dialogflow agent from a specific region, you need to modify the JSON file with a new parameter called Dialogflow.region. This parameter must contain the official region identifier described in this table:
| Country grouping | Geographic location | Region ID |
| --- | --- | --- |
| Europe | Belgium | europe-west1 |
| Europe | London | europe-west2 |
| Asia-Pacific | Sydney | australia-southeast1 |
| Asia-Pacific | Tokyo | asia-northeast1 |
| Global | Dialogflow delivery is global; data at rest is within the US | global |
If this parameter does not exist when creating the bot in Syntphony CAI, the global region will continue to be used by default as it has been to date.
Example Dialogflow metadata JSON with “region” parameter:
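A minimal sketch of such a file, assuming a standard Google service-account key layout; every credential value is a placeholder, and only the Dialogflow.region entry is new:

```json
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "PRIVATE_KEY_ID_PLACEHOLDER",
  "private_key": "-----BEGIN PRIVATE KEY-----\nPLACEHOLDER\n-----END PRIVATE KEY-----\n",
  "client_email": "dialogflow-integration@your-project-id.iam.gserviceaccount.com",
  "client_id": "CLIENT_ID_PLACEHOLDER",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "Dialogflow.region": "europe-west2"
}
```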
14) Upload this file when creating a Dialogflow virtual agent in the cockpit to complete the integration.
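If you want to sanity-check the service-account file before uploading it, the official google-cloud-dialogflow client can send a test utterance with it. This is a minimal sketch; the project ID, session ID, file name, and utterance are placeholders:

```python
# Minimal sketch: verify the downloaded service-account JSON by sending
# one test utterance to the Dialogflow ES agent. All IDs are placeholders.
from google.cloud import dialogflow

session_client = dialogflow.SessionsClient.from_service_account_file(
    "service-account.json"
)
session = session_client.session_path("your-project-id", "test-session-1")

query_input = dialogflow.QueryInput(
    text=dialogflow.TextInput(text="hello", language_code="en")
)
response = session_client.detect_intent(
    request={"session": session, "query_input": query_input}
)
print("Matched intent:", response.query_result.intent.display_name)
```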
Language Understanding (LUIS) is a cloud-based API service that applies custom machine-learning intelligence to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information.
To integrate LUIS with Syntphony Conversational AI, you must have an active Azure account with the required resources created.
1) Go to luis.ai
2) Log in with your Microsoft account.
3) Create an app or click on an existing one.
4) Click on “manage”.
5) Then click on “Azure Resources” at the left.
6) Copy the example query, located at the bottom of the screen.
7) Then, click on authoring resource and copy the primary key.
8) Paste the example query into the Prediction URL field and the primary key into the Authoring Key field.
Syntphony Conversational AI supports LUIS version 2. When using the datetimeV2 system entity in LUIS, you can use subcategories, such as:
date
time
datetime
daterange
timerange
datetimerange
Those subcategories should be added after a dot (.). For example, if you are using the date subcategory, the entity name should be builtin.datetimeV2.date, where builtin.datetimeV2 is the system entity name and date is the subcategory.
For further information, check https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-reference-prebuilt-datetimev2?tabs=1-3%2C2-1%2C3-1%2C4-1%2C5-1%2C6-1#subtypes-of-datetimev2
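As an illustration, the example query copied in step 6 can be exercised directly. This is a minimal sketch of a LUIS v2 prediction call; the region, app ID, key, and utterance are placeholders:

```python
# Minimal sketch of a LUIS v2 prediction request. The example query
# copied from the portal already embeds the app ID and key; the
# values below are placeholders.
import requests

base_url = ("https://westus.api.cognitive.microsoft.com"
            "/luis/v2.0/apps/YOUR_APP_ID")
params = {"subscription-key": "YOUR_KEY", "q": "remind me tomorrow"}

data = requests.get(base_url, params=params).json()

# A date subcategory match is reported with type builtin.datetimeV2.date.
for entity in data.get("entities", []):
    print(entity["type"], "->", entity["entity"])
```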
You will be asked to provide the information listed below:
Log in and access IAM in the AWS menu
Create a new user by clicking on Users on the Access Management menu
Fill in the required information (tip: try naming it with something obvious, such as syntphony-user).
Then, click on "Next: Permissions" and select "existing policies", enabling the "AmazonLexFull" policy.
Finish creating the user and download the CSV file.
This file contains all the data required to integrate Syntphony Conversational AI to your AWS account.
Access the Services menu to go back to the Amazon Lex page
Then, proceed to the side menu to access the virtual agent you want to integrate. Click on the name to open this "Bot details" card. Copy the ID.
On the same side menu, choose "Implementation" and then "Aliases". Select the alias you want to use, then find the value in the "ID" field within "Details".
There are two ways of finding out the region. One is clicking on the top bar and checking the selected region (as seen below). The other is through the URL, for example: “https://us-east-1.console.aws.amazon.com/lexv2/home?region=us-east-1#bot/YZ24GFVCSX”. Note that it shows the region us-east-1.
Now select "Draft Version" and find the field "Version".
This is the information required to integrate Amazon Lex.
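As a quick sanity check that these values work together, they can be exercised with the official boto3 client. This is a minimal sketch; every credential and ID below is a placeholder:

```python
# Minimal sketch: send one utterance to the Lex V2 bot using the
# access key ID and secret from the downloaded CSV, plus the bot ID,
# alias ID, and region collected above. All values are placeholders.
import boto3

client = boto3.client(
    "lexv2-runtime",
    region_name="us-east-1",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

response = client.recognize_text(
    botId="YOUR_BOT_ID",         # Bot details -> ID
    botAliasId="YOUR_ALIAS_ID",  # Aliases -> Details -> ID
    localeId="en_US",
    sessionId="test-session-1",
    text="hello",
)

# Each interpretation carries the matched intent, if any.
for interpretation in response.get("interpretations", []):
    print(interpretation.get("intent", {}).get("name"))
```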
Learn more about the Zero-Shot learning model
You'll be asked to provide the following information:
The current OpenAI Endpoint is always the same: https://api.openai.com.
Read the OpenAI documentation to learn more about endpoints.
1) Access https://platform.openai.com/docs/overview and click on the lock icon, corresponding to the API Keys.
2) When you're on the API Keys page, click on Create New Secret Key:
3) Enter a name that represents the key and click on Create Secret Key.
4) After the key is created and before you click Done, remember to save it somewhere right away.
⚠️ It is only possible to view the key at the time of creation
After filling out the Endpoint and API Key fields, the system will load the available model options.
Refer to the OpenAI documentation to learn about the models.
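As an illustration of how those model options can be retrieved, the endpoint's public List Models call can be queried directly. This is a minimal sketch with a placeholder key, not necessarily how the cockpit performs the lookup:

```python
# Minimal sketch: list the models available to an OpenAI API key.
# The key below is a placeholder.
import requests

endpoint = "https://api.openai.com"
headers = {"Authorization": "Bearer YOUR_API_KEY"}

response = requests.get(f"{endpoint}/v1/models", headers=headers)
for model in response.json().get("data", []):
    print(model["id"])
```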
A token is roughly 3-5 characters long, but its exact length may vary. The token count of a request usually consists of the sum of the tokens in the system prompt and the user input.
The outcome may depend on the availability of the generative service chosen and the token limit defined. If you're using Azure OpenAI by Syntphony CAI, the limit is set at 4000 tokens.
This model is highly influenced by the token limit. You can set this limit when creating the virtual agent or on the Parameters page.
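To estimate how close a given prompt is to a limit like this, token counts can be approximated locally. The sketch below uses the tiktoken library; the model name chosen for the encoding and the sample texts are assumptions for illustration:

```python
# Minimal sketch: estimate the token count of a system prompt plus
# user input with tiktoken, to stay under a limit such as 4000.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")  # assumed model

system_prompt = "You are a helpful virtual agent."
user_input = "What are your opening hours?"

total = len(encoding.encode(system_prompt)) + len(encoding.encode(user_input))
print(f"Approximate prompt tokens: {total} (example limit: 4000)")
```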
Learn more about the Zero-Shot learning model
You will be asked to provide the following information:
1) Once you're in the Azure webpage, select the OpenAI instance (the one marked with the OpenAI symbol). In this case, it's the eva-dev-openai-keys.
2) It'll direct you to your main OpenAI instance page. On the side menu, select the option Keys and Endpoint.
3) On this page, you will be able to view the Keys and Endpoint that will be used on the Syntphony Conversational AI cockpit screen.
After filling out the Endpoint and API Key fields, the system will load the available model options.
Refer to the Azure OpenAI documentation to learn about the models.
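As an illustration, the Keys and Endpoint pair can be verified with a direct REST call. This is a minimal sketch; the resource name, the deployment-listing route, and the api-version value are assumptions based on Azure OpenAI's public REST reference, and the key is a placeholder:

```python
# Minimal sketch: list the deployments behind an Azure OpenAI resource
# using the Endpoint and one of the Keys from "Keys and Endpoint".
# Resource name, key, and api-version are placeholders/assumptions.
import requests

endpoint = "https://YOUR-RESOURCE.openai.azure.com"
headers = {"api-key": "YOUR_KEY"}

response = requests.get(
    f"{endpoint}/openai/deployments",
    headers=headers,
    params={"api-version": "2022-12-01"},
)
for deployment in response.json().get("data", []):
    print(deployment["id"], "->", deployment.get("model"))
```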