What's New
Websnippet
Adjusting the alignment of Carousel images
Multilingual Agent (beta)
We are excited to introduce the new Multilingual Agent (beta), designed to elevate the user experience and expand accessibility. With this capability, virtual agents can now understand and respond in multiple languages seamlessly. This enhancement eliminates the need to create separate agents for each language, allowing users to interact in their preferred language effortlessly.
Whether your audience speaks English, Spanish, Japanese, French, Thai, or any other language, the Multilingual Agent ensures a consistent and personalized interaction across the board.
This new version release brings a Logs viewer to the dialog simulator, designed to empower developers with greater visibility and control over conversation flow. This tool provides real-time insights into conversation execution, making it easier to troubleshoot and optimize agent performance.
With the Logs viewer, developers can:
Access real-time visibility of any service errors during simulations
Review detailed step-by-step execution for each conversation
Use advanced request options to specify users and input values for targeted testing
Answers repository
Application of rich text listing styles
Dialog simulator
Full conversation update with library components
Gen AI cell
Cell has been renamed "Prompt cell"
Menu
Adaptation to keep main menu expanded when selecting a submenu
Notifications
Blank space removed in Notifications with minimal content
Websnippet
Inclusion of accessibility in buttons for visually impaired users
Layout adjustment for cropped images
Dashboards
Layout tweaking when selecting many tags in a funnel step
Knowledge AI
Training button has been restricted to the Knowledge AI page
Login
Improvements to the rerouting flow on the login screen
Training
Permission to change intents/entities during training implemented
Training button has been hidden when there is no new content to be trained
Webchat plugin (websnippet)
Additional open context parameters have been added
We are happy to announce the latest updates, designed to enhance user experience and strengthen data security. These new features include advanced data protection, seamless integration with Azure Open ID, enhanced voice channel configurations, an improved user interface, and expanded channel options.
These new improvements allow the system to consider previous interactions, providing more accurate and relevant answers to end users. You can set the number of past interactions to be taken into account, ranging from 0 to 5, ensuring that follow-up questions are understood in context.
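How the platform assembles this context internally isn't documented here, but as a rough illustration of what "considering previous interactions" means, the sketch below folds the last N turns into the text used to answer a follow-up question. The helper name and turn structure are hypothetical, not the product's actual API.

```python
# Illustrative sketch only: shows how a configurable window of past
# interactions (0 to 5) could be folded into the context used to answer
# a follow-up question. Names and structures are hypothetical.

from collections import deque

MAX_CONTEXT_TURNS = 3  # the platform lets you choose a value from 0 to 5

history = deque(maxlen=MAX_CONTEXT_TURNS)  # keeps only the last N turns

def build_context(user_message: str) -> str:
    """Concatenate the retained turns with the new question."""
    past = "\n".join(f"User: {u}\nAgent: {a}" for u, a in history)
    return f"{past}\nUser: {user_message}" if past else f"User: {user_message}"

# A follow-up like "And for premium plans?" only makes sense when the
# previous turn about pricing is still part of the context.
history.append(("What does the basic plan cost?", "The basic plan costs $10/month."))
print(build_context("And for premium plans?"))
```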
Voice Gateway
We're introducing a mini product center to keep you up to date with the latest features and events, so you don't miss out on important updates!
Menu
Experience a whole new way of navigating through eva. We're introducing a new navigation structure with new sections and more logical clustering, helping you move through the product and find what you need faster, plus the option of pinning your most accessed or favorite sections at the top of the menu.
Channel Library
Plus, we have added new channels to our library:
Kakao
Line
Amazon Connect
Genesys
Odigo
Twilio
Infobip Conversations
Naka
Digital Humans
Slack
List of bug fixes and other improvements in the June release:
Parameters
Bug resolution on the Parameters screen (env and bot).
Error when registering environment parameters corrected.
Content type body validation and rest connector cell output adjusted.
Sliders changed via input.
Snack message after parameter slider change corrected.
Training
Training status bar adjusted to be behind the menu.
Activation of the training button after document removal.
Rest Connector
Problem with editing rest connector with key/value fixed.
Tooltip in the body of the rest connector displayed correctly.
Websnippet
Switch enable/disable corrected.
Source adjusted to reflect on the site.
Text URL error resolved.
Images now render correctly.
Smartphone styles corrected.
KnowledgeAI
KAI training page adjusted.
Hover message on create question button fixed.
Remove duplicate image button set.
User List
User screen repositioned correctly.
Remove duplicate image button fixed.
Dropdown of list options adjusted.
Login
Automatic logout after inactivity fixed.
Flows Repository
Drop down flow creation adjusted.
Title of the user journey flow modal corrected.
Answers Repository
Template files aligned correctly.
Response modal buttons aligned.
Snack
Hover message in the dropdown of the "Create bot" screen adjusted.
Snack message after slider change fixed.
After months of dedicated work, our product team is thrilled to unveil a wave of transformative features harnessing the power of generative AI technology. From adding dynamism to conversations to helping you craft and enhance text effortlessly, dive into the capabilities that will help you deliver a more efficient and advanced conversational experience.
Find out what's new in this latest release:
This feature makes use of pre-trained Large Language Models (LLMs), such as those from OpenAI, to help the engine identify relevant intents without the need for explicit training utterances, significantly simplifying and shortening the process of training your virtual agent.
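The engine's internals aren't published, but the general idea of matching intents without training utterances can be sketched with sentence embeddings and semantic similarity. In the sketch below, the model name, intent names, and descriptions are illustrative placeholders, not what the platform actually uses.

```python
# Minimal sketch of intent identification via semantic similarity,
# assuming the sentence-transformers package is installed. This is an
# illustration of the general technique, not the platform's code.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Intents are described in plain language instead of being trained
# with lists of example utterances.
intents = {
    "check_order_status": "The user wants to know where their order is.",
    "cancel_order": "The user wants to cancel an existing order.",
    "talk_to_human": "The user asks to speak with a human agent.",
}

def classify(user_text: str) -> str:
    """Return the intent whose description is most similar to the input."""
    query = model.encode(user_text, convert_to_tensor=True)
    descriptions = model.encode(list(intents.values()), convert_to_tensor=True)
    scores = util.cos_sim(query, descriptions)[0]
    return list(intents)[int(scores.argmax())]

print(classify("Has my package shipped yet?"))  # -> check_order_status
```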
You can upload a TXT or a PDF file to extract insights for your virtual agent. It has the ability to read images with text (except illustrations), update the file while retaining all questions previously linked to the document, and track user journeys through tags.
Extensions
New features were made available to be enabled/disabled in this section: Prompt cell, Rephrase Answer, and Assist Answer.
Parameters
New threshold parameters were added in this release to configure the request timeout behavior of the Generative AI services.
Speed up your knowledge base creation process by automatically generating a list of context-related utterance examples for each intent with this exciting feature.
Introducing a new filtering option in Dashboards, leveraging tags added to cells and flows in the Dialog Manager. This feature offers a precise way to analyze specific scenarios, simplifying the performance analysis of your virtual agent.
Our platform is equipped to understand audio when users communicate through channels that support audio recordings. This feature allows you to engage with users via audio interactions, enhancing accessibility. It's designed to work across all audio-compatible channels.
Trial accounts created in the Try Syntphony Conversational AI environment offer a seamless transition to a production upgrade with just a single click. This can be accomplished by purchasing a license, enabling team members to retain all the content they've diligently crafted during the trial period.
This new cell for generative content is a versatile tool with many use cases. In this cell, you can enter a prompt, which may use any existing parameters or user input's text as part of it, to process inquiries, create answer variations, and format your inputs into specific formats.
Rewrite texts for your answers
Process your user's input and store it in the format of your choice, such as JSON or other technical structures, based on your specific requirements (a minimal sketch follows this list).
Engage in freeform conversations with the language model by utilizing user input for inquiries.
Infer intents and needs regardless of the NLP's configuration, allowing for diverse, generic zero-shot integrations, with Not Expected flows redirecting to the appropriate flow based on parsing of the user's text.
Generate tailored texts based on available or missing user information.
Validate inputs, make sure they are in the correct format, and display text with only specific fields.
Literally anything an LLM tool can provide you with.
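As a concrete illustration of the "format inputs into JSON" use case above, here is a minimal sketch of the kind of prompt a Prompt cell could run. The OpenAI client call, model name, and field names are assumptions made only to keep the sketch runnable; inside the platform you would author just the prompt itself.

```python
# Illustrative sketch: turning free-form user input into a structured
# JSON value with a prompt. The OpenAI usage and field names are
# assumptions for illustration, not the Prompt cell's internal code.

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Extract the destination city and travel date from the message below. "
    "Reply with JSON only, using the keys 'city' and 'date' (ISO 8601).\n\n"
    "Message: {user_input}"
)

def extract_trip_details(user_input: str) -> dict:
    """Ask the model to parse the message and return the parsed JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": PROMPT.format(user_input=user_input)}],
    )
    return json.loads(response.choices[0].message.content)

print(extract_trip_details("I'd like to fly to Lisbon on March 3rd, 2025"))
# -> {'city': 'Lisbon', 'date': '2025-03-03'}
```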
Previously, you could use Transactional Service Cells to make requests to a webhook of your own, which allowed you, to some extent, to integrate data submissions into your web service through headers. Now, this new service cell comes with an integrated authentication step, allowing you to use any of the market-standard authorization types to process any type of request.
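To make the authorization step concrete, here is a minimal sketch of the kind of authenticated request such a service cell can issue on your behalf, using the Bearer token authorization type. The endpoint, token, and payload below are placeholders, not part of the product.

```python
# Minimal sketch of an authenticated request using a Bearer token.
# The endpoint, token, and payload are placeholders for illustration.

import requests

API_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint
TOKEN = "example-token"                        # obtained via the cell's auth step

response = requests.post(
    API_URL,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    json={"order_id": "12345", "action": "status"},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```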
Among others, the new dashboards bring data and insights about:
Full Conversations
Message details
Confidence score
The NLP and/or Knowledge AI (Automated Learning) response
Satisfaction
Duration
Channels
Syntphony CAI is using this powerful new tool so you can improve the way you manage the Not Expected answers and deliver much more dynamic and accurate answers in real time, giving users an amazing experience and speeding up the creation of your conversations.
New instance API conversation endpoints
New Infobip, Google Assistant and Facebook API endpoints
New error codes in instance conversation API
Metrics for total conversations, total messages, total users, percentage of accuracy, top 10 intents, and top 10 flows.
Comparative data from the previous period, so you can quickly see how the virtual agent has been performing.
Quick-filters to switch the charts data visualization
More details and specifics, such as occurrences by channel and total executions in the period, by hovering over the bars and lines on the charts
Filters by period (analyzes data up to 12 months) and by channels
Export your dashboards as PDF
For a better experience, we added new ways of navigating the repositories. Now you can sort items by name, modification date, or type. This is also useful to help you search using these filters.
Another possibility added in this release is pagination, to help control how many items are displayed per page. Choose whether you want to see from 50 up to 100 items in the Flows, Intents, Entities, Services, and Answers repositories.
In the latter, it's like moving the virtual agent from one environment to another (from dev to prod, for example), without the need to create a whole new agent every time you move it to a different environment.
Sort and Pagination to give the user a better navigation experience on all repositories
Syntphony CAI brings a solution that will allow users to manage Organizations and Environments on the same page to bring more operational efficiency. Now you won't need to open different pages in the browser with different login accounts.
This also means more flexibility to create different Environments (dev/test/prod, for example) within these Organizations, according to the project strategies. At the permission level, Admins can also set different user access levels and define their roles for each environment and the virtual agents therein: in other words, the same user can be editor in environments A and B and a viewer in another environment C, for example.
In practical terms, it helps reduce time to market, as you’ll also be able to quickly perform the deployment process and speed up updating to new versions.
To enhance the security of PII (Personally Identifiable Information) data within the platform, we're introducing an advanced data protection feature. This feature reduces the risk of data breaches and enhances compliance with privacy regulations.
The integration between UCCE and Syntphony Conversational AI enhances the operational efficiency of Contact Centers. This integration leverages the sophisticated telephony and contact center capabilities of UCCE, combined with the intelligent automation features of Syntphony CAI. Additionally, users can now configure phone numbers for voice channels like VXML directly from the channel library.
Users can now log in with Azure Open ID (if enabled), providing a seamless and secure authentication process. This integration supports single sign-on (SSO) capabilities, reducing the need for multiple passwords and simplifying user management for IT administrators.
We are introducing enhanced contextual understanding.
Now you can set up default error handling, timeout configurations, TTS (text-to-speech), voice menus, DTMF menus, and voice handover settings for a smooth transition of calls to human agents when necessary.
Another significant improvement was made to the Channel Library navigation! With this update, we've restructured the library to enhance usability and intuitiveness, ensuring that users can effortlessly find the integrations they need.
Zero-shot classification is a task that enables the model to classify intents during runtime, even if they have not yet been trained, using semantic similarity.
Empower your virtual agent's answers with real-time rephrasing! Enhance user engagement by tailoring responses based on context and emotions for a more natural conversational experience.
A feature that makes it easier for conversational designers to create or enhance answers with the help of generative AI. You can generate text based on a simple instruction or, with a single click, expand, reduce, or improve text, fix spelling and grammar, or change tone. Available in the text template for all channels.
A solution that transforms documents into structured and easily accessible content. It doesn't rely on a conventional intent-based model to identify user questions and provide answers, which makes it ideal for FAQs, product descriptions, institutional content, manuals, chit chat, etc.
Integrate conversational AI into your website, app and mobile channels. Whether you want to enhance customer satisfaction or simplify user interactions, Syntphony Conversational AI enables you to create a personalized solution that perfectly matches your distinct brand identity, ensuring a dynamic user experience.
This feature empowers writers to quickly generate multiple sentences using the provided context. By effortlessly creating sample utterances for your intents, you'll turbocharge the training process, making it faster and more efficient than ever before.
Unlock the power of funnels in your Dashboards: gain insights, make data-driven decisions, and optimize user experiences effortlessly with valuable insights about your conversations. The newly added section in our Dashboards will help you better understand the conversation journey, drop-off points, and A/B testing.
We've added the new Extensions section to enhance your virtual agent's capabilities. A variety of advanced features can be enabled with a single click. Stay tuned for upcoming features.
Refer to the documentation to learn how to integrate it.
The recent rise of Large Language Model (LLM) technologies, such as OpenAI's ChatGPT, has unveiled a remarkable potential in harnessing the power of NLP. This new feature will empower you to unlock all this potential with its transformative capabilities.
To help you better understand how it works, we recommend accessing its documentation, which provides a brief but detailed explanation of its features and how-tos. In summary, you can:
We've added yet another cell that will allow you to integrate literally any API you need: the new service cell.
A new pre-built virtual agent was added to our list. These ready-to-use agents help establish a base for building conversations; this one covers Airlines, a collection of 19 flows focused on travel services in 3 languages: English, Spanish, and Portuguese.
This release includes new dashboards with sections for , , and .
The new evg-connector allows you to create voice agents within Syntphony CAI, which means that no external platform is needed. Now you can easily implement and automate virtual agents using text and audio answer templates in Dialog Manager, integrated with a Voice Gateway.
We have added another NLP to our list! If your knowledge base is built on Amazon Lex, you can connect it to create flows and manage the entire user conversational journey.
Improvements in the Welcome and Not Expected flows offer new possibilities according to the channel being used. You can add new cells to these flows to, for example, segment different user groups using rule cells and deliver a different welcome message for each group. You can also use rule cells to set your virtual agent to deliver different Not Expected answers for different segments of customers.
A new pre-built virtual agent was added. These ready-to-use agents help establish a base for building conversations; this one covers C-commerce, a collection of 19 flows focused on e-commerce services in 3 languages: English, Spanish, and Portuguese.
Customize your flows using Code and/or Rule cells.
A new feature is available, providing key metrics that will help you analyze whether the virtual agent is performing successfully and achieving your business goals. With this new feature, Syntphony CAI gathers and charts specific data and lets you easily customize it the way you want.
The new Overview dashboard section includes the following:
In this new release, we bring some improvements to the way you import your virtual agent: now you can choose whether to import it with a new ID or keep the same ID from the previous environment.
You can also update an existing version, applying all changes made in parameters, channels, workspace, repositories, and Knowledge AI, or restoring a backup.
We also added a new shortcut in a pop-up menu to import and update the virtual agent directly on the main page.
Now you can add questions to disabled documents in Knowledge AI and choose whether to activate them or leave them deactivated.
Dashboards: Release of the new Overview dashboard. Syntphony CAI gathers and charts the specific data you need and lets you easily customize how you want to see it.
Import improvements: Ability to choose between importing the virtual agent as a new one (with the same or a new, unique ID) or updating (replacing) it; the latter option won't change the ID.
Knowledge AI improvement: Allows creating questions in disabled documents.
New pre-built virtual agents were added. These ready-to-use agents help establish a base for building conversations for Help Desk (a collection of 21 flows focused on ticketing services) and Telco (a collection of 25 flows focused on Telecom services).
Search for specific cells (intent, entity, answer, service), flows, AL documents, or AL questions across the extensive lists in the Dialog Manager repositories.
We have updated the profiles and roles definitions to better respond to our users' needs. From two types in the previous version, we now have five different types: owner, admin, supervisor, editor, and viewer. The idea is to allow a better understanding of each user's role in each project and, thus, define their access levels and permissions across all Syntphony CAI resources.