Zero-Shot LLM
Last updated
The Zero-Shot model performs intent classification on new data with little to no training, relying on models pre-trained on extensive datasets to produce accurate predictions. It identifies intents in real time using semantic similarity, improving efficiency.
By eliminating the need for training, costs can be reduced, and the development process can be simplified and accelerated.
eva enables integration with OpenAI models.
At your virtual agent's general settings page, click on Change Model.
Once you have chosen the model, fill in the required fields for the selected option and set a token limit.
This model is strongly constrained by the token limit: when the configured limit is reached, the system disables importing and creating new intents.
The outcome may depend on the availability of the generative service chosen and the token limit defined. If you're using Azure OpenAI by eva, the limit is set at 4000 tokens.
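As a rough illustration of how a token budget like this might be enforced, the sketch below estimates token usage for intent names and descriptions and blocks new intents once the limit is reached. The token estimator, function names, and data shape are assumptions for demonstration only; they are not eva's actual API, and real services count tokens with a proper tokenizer rather than a character heuristic.

```python
# Hypothetical sketch: enforcing a token budget such as the 4000-token limit
# of Azure OpenAI by eva. Names and the estimator are illustrative assumptions.

TOKEN_LIMIT = 4000  # limit mentioned for Azure OpenAI by eva

def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def can_add_intent(existing_intents: list, name: str, description: str) -> bool:
    # Both the Name and Description fields consume tokens.
    used = sum(
        estimate_tokens(i["name"]) + estimate_tokens(i.get("description", ""))
        for i in existing_intents
    )
    new = estimate_tokens(name) + estimate_tokens(description)
    return used + new <= TOKEN_LIMIT
```

In a real deployment the platform performs this accounting itself; the sketch only shows why a long list of verbose descriptions can exhaust the budget and disable intent creation.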
When creating new Intents for a Zero-Shot LLM model, you don't need to train your virtual agent with tens of utterance examples. Simply enter a name and fill in the optional Description field to help the model with more context. Note that the Name and Description fields also consume your tokens.
The Zero-Shot LLM model works best for agents with fewer intents and clear-cut use cases.
When coming up with the Name of the Intent, avoid being too vague, as a vague name doesn't help the model identify the use case.
We recommend using more descriptive names. For instance, if an Intent to cancel a credit card is simply named "Cancel", the model may have trouble classifying it due to the lack of an object, which may lead to the wrong flow. In this case, a better option would be cancelCreditCard or CANCEL_CREDIT_CARD.
If correctly populated, the context will enhance the Intent's classification. You can check the OpenAI Prompt Engineering page to learn some strategies and tactics for getting better results from LLM models (GPT).
A good practice is to keep your description clear and specific. Example:
❌ Instead of simply writing:
Intent to cancel a credit card
✔️ Prefer:
Customer expresses the desire to terminate or deactivate their credit card, intending to cease its functionality.
User indicates the wish to discontinue credit card-related services. This typically involves cancelling the card or related operations.
The user is looking to halt the use of their credit card, possibly implying the need to cancel or deactivate it.
Try to use examples that aim to cover variations in how users might express the intent, assisting the model in accurately classifying such intents.
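To make the role of names and descriptions concrete, the sketch below shows one plausible way they could be assembled into a zero-shot classification prompt. The prompt format and function name are assumptions for illustration; eva's internal prompt is not documented here.

```python
# Illustrative sketch: building a zero-shot classification prompt from
# intent names and descriptions. The format is an assumption, not eva's.

def build_zero_shot_prompt(intents: dict, utterance: str) -> str:
    lines = ["Classify the user message into exactly one of these intents:"]
    for name, description in intents.items():
        # Each intent contributes its name plus its (optional) description,
        # which is why both fields consume tokens.
        lines.append(f"- {name}: {description}")
    lines.append(f'User message: "{utterance}"')
    lines.append("Answer with the intent name only.")
    return "\n".join(lines)

prompt = build_zero_shot_prompt(
    {"cancelCreditCard": "Customer expresses the desire to terminate or "
                         "deactivate their credit card."},
    "I want to get rid of my credit card",
)
```

Seen this way, a descriptive name and a specific description give the model more signal to match the utterance against, which is exactly why vague names like "Cancel" hurt classification.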
This is the amount of time (in seconds) the virtual agent should wait for a response from OpenAI. If the generative service times out, the system applies fallback measures:
If the user is currently in a flow, they will get a Not Expected answer (sibling cell of the Intent that timed out).
If the user is not in a flow, the system will search for an answer within Knowledge AI (if enabled). If Knowledge AI is disabled, the system will deliver a Not Expected flow.
You can set the timeout in the Parameters section.
Azure OpenAI by eva is in beta, so we cannot guarantee that it will perform accurately.