
eva Installation in Azure Cloud


Last updated 3 years ago

Pre-installation

Uploading files to the Azure DevOps repository

First, you have to initialize your project's repository in Azure DevOps. In the following section, you can do this and copy the clone credentials into your Git client:

Here, you will have a folder structure, like the one shown in the image, which contains the configuration files and the scripts needed to create the environment. These scripts and files must be configured for each project, as you will see later.

In addition, you need to import the Pipelines that run each file from this repository. To do this, access the pipelines section and import them:

Select the JSON files of the pipelines you need and they will be ready to run, although you must first configure them together with the files they use.

Configuration and deployment of the infrastructure script

One of the scripts you have to run is the infrastructure script, which creates the resources your eVA installation needs in Azure. The script is located at Infrastructure\evainstalation.bat, and you have to configure it before running it in the corresponding pipeline:

Here, you will specify your subscription ID, the name you want to give to the resource group, the networks and subnets with their IPs, and the configuration of the different components such as MySQL, Redis, CosmosDB and AKS. The configuration of these components will depend on the needs of the project, but generally you will be able to use the one shown in the script.
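
As an illustrative sketch only (the variable names below are assumptions, not the script's real ones), the configurable block of a batch script like evainstalation.bat is a series of set statements followed by az CLI calls:

```batch
:: Hypothetical excerpt - adjust every value to your project.
set SUBSCRIPTIONID=00000000-0000-0000-0000-000000000000
set RESOURCEGROUP=evaautoinstall
set LOCATION=westeurope
set VNETPREFIX=10.0.0.0/16
set AKSSUBNET=10.0.1.0/24

az account set --subscription %SUBSCRIPTIONID%
az group create --name %RESOURCEGROUP% --location %LOCATION%
```

The resource-group name reuses the evaautoinstall example that appears later in the validation section.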

Once the script has been configured, you will run it in the Pipelines section, as you have imported it previously. To do so, select the Pipeline Creation Infrastructure and edit it first, since you have to configure the Azure login with which you will access the subscription you have indicated:

In the Variables section, you must specify the user and password that have access to the subscription where you will create the resources. To do this, edit the USER and PASS variables (for PASS, press the padlock icon to make it secret) and save them.

Finally, it is necessary to check that the following data appears in the arguments of your job that executes the script:

This is how the script obtains the value of the password that you specified as a secret variable; it appears in the script's login as %1%:
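
Assuming a classic Batch script task (a sketch, not the script's exact contents), the secret is passed in the task's Arguments field as $(PASS) and consumed by the script as its first positional argument:

```batch
:: Pipeline task Arguments field:  $(PASS)
:: Secret variables are not injected as environment variables, so the script
:: receives the password as its first argument (the document writes it as %1%).
az login -u %USER% -p %1
```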

With all these settings, launch the Pipeline (Queue or Run) and if everything has gone well, the job will appear in green.

The final result in Azure should look something like this:

Services connections

In order to develop, configure and later execute the jobs that automate the eVA installation, a connection to the Kubernetes cluster in your subscription's resource group is necessary.

First, go to the configuration page of your project in the Azure DevOps portal, via Project settings in the lower area of the side menu.

Once you are inside, select Service connections from the side menu and then press the Create service connection button.

Next, look for and select the Kubernetes option and click Next. Now fill in the form: leave the authentication method set to Azure subscription, choose the Azure subscription and the AKS cluster you want to associate with the connection, and select “default” as the Namespace. In addition, allow access to all pipelines and enable the use of administrator credentials. Finally, assign a name to the connection and save it.

Note: You may be asked to log in before you can fill in the subscription and the cluster.

Pipelines configuration

MySQL pipeline

Import the MySQL.json file with the configuration of the pipeline. Once the file is imported, adapt the pipeline so it runs correctly: first rename it, then select Azure Pipelines as the Agent pool and vs2017-win2016 as the Agent Specification.

In the Get sources tab, select the repository that holds the files for creating the database schema and inserting the administrator user into the database.

Now, set up the Schema creation and Insert Admin jobs. Inside either of the two jobs, select the Azure subscription that contains the MySQL resource and authorize it with the Authorize button (this creates another connection inside Service connections, so it only needs to be authorized once). Once validated, you will be able to select the database host; leave the rest of the fields unchanged.

Repeat the procedure for the other job; the authorization step is not necessary there because the connection already exists. For Azure Subscription, select the subscription you authorized in the previous job from the list of Available Azure service connections, and link the host.

To finish configuring the pipeline, modify the user and password variables used to access the database: go to the Variables tab and update the values of USERDB and PASSDB.

The generated user can be found in the Azure portal: search in All resources, filter by your subscription and resource group, and you will find the MySQL instance.

Open the resource and copy the value of Login Name, which will be the one to be added in USERDB.

For the password, take the value given in the infrastructure installation file, specifically the PasswordDB field, and assign it to PASSDB.
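
To verify the two values before saving them, you can try connecting with the mysql client (a sketch; it assumes your client IP is allowed by the server firewall and uses Azure Database for MySQL's user@servername login format):

```shell
# <server-name> and <LoginName> are placeholders taken from the Azure portal.
mysql -h <server-name>.mysql.database.azure.com \
      -u '<LoginName>@<server-name>' -p
```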

Note: The password must be kept secret.

Now save the pipeline: from the same screen, select Save & queue -> Save; a window will appear, which you close by pressing the Save button.

ISTIO pipeline

For this pipeline, import the ISTIO.json file to generate the configuration. Once imported, rename the pipeline and assign the Agent pool with the Agent specification vs2017-win2016.

Now configure the jobs: for each job of the "kubectl" type, simply assign the Kubernetes service connection you created previously; the existing connection will appear in the drop-down menu.

To finish the configuration, you must go to the Variables tab and assign the SUBSCRIPTIONID, RESOURCE_GROUP, USERAZ and PASSAZ values of a user who owns the subscription created for the resources.

Note: The password must be kept secret.

Save it from the Save & queue drop-down menu by clicking the Save button, and then save again from the pop-up window.

KEYCLOAK pipeline

Import the KEYCLOAK.json file that contains the pipeline configuration. First, rename the pipeline and assign the Agent pool with the Agent specification vs2017-win2016.

Next, assign the repository where all the configuration files are located.

Now, you only have to assign the Kubernetes service connection to the jobs. Repeat the process for all of them except the first one, where it is not necessary.

Once the jobs have been configured, you only have to save the pipeline by opening the Save & queue drop-down menu and clicking on Save; a window will be opened and you will have to click on Save.

EVA-PREINSTALLATION pipeline

You will start by importing the file that contains the configuration of the pipeline (EVA-PREINSTALLATION.json). Once imported, you have to rename the pipeline and assign an Agent pool together with the Agent Specification, as seen above.

Now, you have to associate the installation repository, which must have been created previously.

Then continue with the job configuration: assign to each of the three tasks the requested parameter, which is the Kubernetes service connection you created previously.

Once you have configured the pipeline with the appropriate parameters, you only have to save the pipeline from the Save button, and then press Save again in the window that will appear.

EVA-INSTALLATION-OR-UPDATE pipeline

Import the pipeline from the EVA-INSTALLATION-OR-UPDATE.json file, rename it, associate the Agent pool and link the corresponding repository with the installation.

Now, focus on the jobs. First, update the "kubectl" type jobs by assigning them the Kubernetes service connection you have already created.

Once all the jobs have been set up, go to the Variables section, where you have to configure 8 parameters:

  • DOCKEREMAIL

  • DOCKERPASS

  • DOCKERSERVER

  • DOCKERUSERNAME

  • USERAZ: Owner user with access to Azure's resources.

  • PASSAZ: Owner user password with access to Azure's resources

  • USERDB: Same as in the MySQL pipeline.

  • PASSDB: Same as in the MySQL pipeline.

Note: Password values must be kept secret.

For the Docker parameters, go to the Azure portal and open the Container Registry in your resource group. In the Access keys section you will find DOCKERSERVER, DOCKERUSERNAME and DOCKERPASS (there are two passwords; take the first one). For DOCKEREMAIL, you can assign any email address.

Finally, you have to save the pipeline, as you have done before by clicking on Save in the drop-down menu and then click again on Save.

GOOGLE-ASSISTANT pipeline

Import the GOOGLE-ASSISTANT.json file and configure the pipeline: rename it, associate the Agent pool, point it to the installation repository, and configure the "kubectl" jobs with the Kubernetes service connection you created, as in previous pipelines. Once configured, save it in the same way.

You should end up with a configuration similar to the following images:

JWT pipeline

You will start by importing the JWT.json configuration file. Once it is imported, rename it, assign an Agent pool with a vs2017-win2016 specification, associate the installation repository and associate the Kubernetes service connection in both Jobs. You will finish by saving the configuration as in previous pipelines.

Installation

Once the pre-installation and pipeline configuration are done, you have to execute all the pipelines, along with the necessary manual updates of the files hosted in the repository, for a correct eVA configuration.

Note: It is essential to follow the installation order shown below.

Istio

To install Istio, run the corresponding pipeline, but first update the certificates if necessary. To do this, modify the certificate.yaml file and add the Base64-encoded certificates on lines 3 and 4 respectively.
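
As a hedged sketch (the resource name and namespace are assumptions; the real certificate.yaml in the repository may differ), such a file is a Kubernetes TLS secret whose two data entries carry the Base64-encoded certificate and key:

```yaml
# Illustrative only - names are assumptions, not taken from the repository.
apiVersion: v1
data:
  tls.crt: <Base64-encoded certificate>
  tls.key: <Base64-encoded private key>
kind: Secret
metadata:
  name: istio-ingressgateway-certs
  namespace: istio-system
type: kubernetes.io/tls
```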

Once updated, go to the All tab of the Pipelines section within Azure DevOps and look for the Istio pipeline.

Once located, click on it and then click on Run pipeline.

In the following window, click Run and the installation process will start automatically.

The process and the result of the installation can be checked by clicking on the job.

The installation has been done correctly if you obtain a result similar to the following image.

Keycloak

First, go to the Keycloak URL (https://keycloak.everiata.com/) and select Administration Console. The application will ask you for credentials; you will need:

  • User: admin

  • Password: the admin password defined during the installation

Now, create a realm. To do this, expand the hidden side menu and click Add realm.

Name the realm eva.bot and create it (important: eva.bot must be written in lower case).

Now, assign all the properties to the realm: import the keycloak-realm.json file from the Keycloak folder of the repository.

When importing, choose to overwrite the resources if they already exist.

The next step in the configuration is the creation of an administrator user. You have to go to Users from the side menu and click on Add user, assign the following values and save them.

Then, go to the Credentials tab and create a new password, which will validate access to the cockpit; set the Temporary field to off and click Reset Password.

Continue on the Role Mappings tab, select eva-cockpit from the Client Roles drop-down menu and add the 4 values.

Now, modify the eva-cockpit and eva-broker clients, starting with eva-cockpit. Delete the redirect URLs and replace them. Then set the access type to confidential in order to see the Credentials tab, and save.

Generate a new secret by clicking Regenerate Secret and save its value. This value goes in the cockpitclientsecret.yml file, in the CLIENT_SECRET field.
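
A hedged sketch of where that value ends up (field names other than CLIENT_SECRET are assumptions; the namespace is the one the installation uses):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cockpitclientsecret    # assumed name
  namespace: eva
type: Opaque
data:
  CLIENT_SECRET: <Base64 of the regenerated Keycloak secret>
```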

Once the value has been saved, return to Settings and change the access type back to public.

Now, delete the broker client and create another one named eva-broker (optional, only if JWT will be used).

Add the redirect URLs, enable all three options, and set this client's access type to confidential.

MySQL

First, update the insert-cockpit-admin.sql file in the MySQL folder of the repository, replacing the two keycloakUserId values with the ID of the admin user created in Keycloak.

To obtain this value, go to Users in the Keycloak page and click View all users; the administrator user will appear, and you copy its ID into the parameter mentioned above.
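
The shape of the statement being edited might look like this (purely illustrative; the real table and column list in insert-cockpit-admin.sql will differ, and only keycloakUserId is taken from this document):

```sql
-- Replace the placeholder UUID with the admin ID copied from Keycloak's Users view.
INSERT INTO cockpit_user (keycloakUserId, userName)
VALUES ('00000000-0000-0000-0000-000000000000', 'admin');
```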

Once the repository has been updated with the new correct value, look for and select the pipeline within the All view and then click Run pipeline to proceed with the installation, as you have done previously with the other pipelines. The result should look like the following image.

If you want to connect to the database to check that the database and the schema have been executed correctly, you must go to the Azure resource and insert your IP in the Firewall rules.

For more information about the database connection, see the connection strings section.

Secret keys updating (manual step)

Before continuing to run the following Pipelines, you must update several files located in the eva\secrets path:

In each of these files, you must change the specified namespace, as well as several URLs composed from the resource group, users and passwords specified in the creation or Keycloak sections.

For example, in the cockpitclientsecret.yml file you must change the name of the namespace and some parameters of the data section. Generally, some parameters can remain as they are if you have not changed the URL or the names. Every attribute that appears encoded must, after you change it, be re-encoded in Base64 (for example, with an online Base64 encoder):
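
As a sketch of that encoding step (the value shown is a placeholder, not a real credential), the base64 command produces the string to paste into the data section, and --decode lets you double-check an existing value:

```shell
# Placeholder value; encode it before pasting it into the secret's data section.
echo -n 'myDbPassword' | base64
# → bXlEYlBhc3N3b3Jk

# Decode an existing value to verify it:
echo 'bXlEYlBhc3N3b3Jk' | base64 --decode
# → myDbPassword
```

The -n flag matters: without it, echo appends a newline that silently changes the encoded value.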

For the following file, dialogmanagersecret.yml, in addition to following the previous example, you must change the parameters of the MySQL database that you created with the infrastructure; if you do not remember the host or some other value, you can find it in the Azure resource, in the initial section and in Connection strings. This file also contains the CosmosDB URL, which you will have to change and encode; the URL has this format: $TIPODB://$USERNAME:$HOST:$PORT:

Now, update the data in the cockpitserversecret.yml file:

The parameters to be completed are located in the created Azure resource:

Finally, update the evasecret.yml file following the same steps as the previous ones and you have to code the data that changes:

eVA pre-installation

Now, launch the EVA-PREINSTALLATION pipeline: look for it in the pipelines list and run it. The task must finish with the following result:

eVA installation or update

The next step is the eVA installation itself. Before running the pipeline, the user must be given the Contribute, Contribute to pull requests and Create tag permissions on the repository.

Once the repository permissions have been configured and you have checked that the certificate in the eva\secrets path is also updated, run the pipeline. The job should end with all the tasks in green, as you can see in the following image.

Google Assistant

In this step, you will install the Google Assistant channel for eVA. The procedure is the same: look for the GOOGLE-ASSISTANT pipeline and execute it; as a result, you should obtain the following output:

JWT (Optional)

This task should be executed only if the Keycloak option has been configured and you want to use JWT. In that case, start by configuring the eva-broker-jwt-requestauthentication.yaml file with the corresponding Keycloak domain.
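
Assuming the file follows Istio's RequestAuthentication API (a sketch; the real manifest in the repository may differ, and the name is an assumption), the Keycloak domain goes into the issuer and jwksUri of the eva.bot realm:

```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: eva-broker-jwt        # assumed name
  namespace: eva
spec:
  jwtRules:
    - issuer: https://<keycloak-domain>/auth/realms/eva.bot
      jwksUri: https://<keycloak-domain>/auth/realms/eva.bot/protocol/openid-connect/certs
```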

The next configuration is the creation of a Mapper in Keycloak. From the Keycloak page of your domain, go to the Mappers tab of the eva-broker client and click the Create button.

Set the Mapper Type to Audience, select eva-broker in the Included Client Audience drop-down menu, assign the name eva-broker-audience and save it.
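
In a realm export, the mapper created above would appear roughly as follows (a sketch based on Keycloak's standard audience mapper type; the key names come from that mapper, not from this document):

```json
{
  "name": "eva-broker-audience",
  "protocol": "openid-connect",
  "protocolMapper": "oidc-audience-mapper",
  "config": {
    "included.client.audience": "eva-broker",
    "access.token.claim": "true"
  }
}
```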

Once configured, run the pipeline and the result should be something like this:

Installation validation

Once all the pipelines have been executed, your project should have the following structure with everything well installed.

From the command line, check that all the pods have been installed and are working correctly. To do this, open a command prompt (CMD) and connect to your subscription's cluster with the following commands:

  • az login

  • az aks get-credentials --name [cluster-name] --resource-group [resource-group] (for instance: az aks get-credentials --name evaautoinstall-aks --resource-group evaautoinstall)

Once you are linked and associated to the cluster, launch the following command to verify that all the pods are active and running, with a result similar to the following image.

  • kubectl get pod -n eva

To finish the check, you must launch the following commands, obtaining results similar to the images associated with each command.

  • kubectl get virtualservices -n eva

  • kubectl get services -n eva

  • kubectl get destinationrule -n eva

  • kubectl get hpa -n eva
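
The validation steps above can be combined into a single run (this assumes the az and kubectl CLIs are installed and reuses the resource names from the earlier example):

```shell
az aks get-credentials --name evaautoinstall-aks --resource-group evaautoinstall
# List each resource kind in the eva namespace.
for kind in pod virtualservices services destinationrule hpa; do
  kubectl get "$kind" -n eva
done
```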

With all this validated and configured, eVA is installed and working correctly.

