Rephrase Answer
Last updated
The Rephrase Answer feature enables real-time rephrasing of the virtual agent's answers during conversations with end-users. By leveraging generative AI, it delivers context-sensitive answers that make the conversation feel more dynamic.
This feature optimizes user experience and engagement by providing more natural and empathetic interactions.
By default, the real-time answers provided by the virtual agent remain static, adhering to the original text input. However, when the rephrasing toggle switch is active, a request is made to the LLM, allowing for text variation in real-time.
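The toggle behavior described above can be sketched as follows. This is an illustrative assumption, not the platform's implementation; `rephrase_enabled` and `request_rephrase` are hypothetical names standing in for the toggle state and the LLM call.

```python
# Hypothetical sketch of the toggle: with rephrasing off, the static
# answer is returned unchanged; with it on, the LLM is asked for a
# real-time variation. `request_rephrase` stands in for the LLM call.

def answer_text(original: str, rephrase_enabled: bool, request_rephrase) -> str:
    """Return the static answer or a rephrased variant of it."""
    if not rephrase_enabled:
        return original                     # default: static answer
    return request_rephrase(original)       # real-time text variation
```

With a stand-in rephraser such as `str.upper`, `answer_text("Hello!", False, str.upper)` returns the original text, while passing `True` invokes the rephraser.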
Enabling this feature may result in additional costs for each new request. You can enable it on the Advanced Resources extensions page.
Once the feature is enabled in the Extensions section, you can activate it at a granular level on each Answer cell. Answers are reformulated at runtime in the virtual agent's primary language.
**Temperature:** Adjusts the model's creativity, controlling text variation. Lower values (close to 0) produce more common and predictable results, while higher values (close to 1) yield more diverse vocabulary. The recommended default value is 0.7.
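A minimal sketch of how the temperature setting travels with the rephrase request. The field names follow common chat-completion APIs and are assumptions, not the platform's actual schema; only the 0–1 range and the 0.7 default come from the text above.

```python
# Sketch of the request payload sent to the LLM (field names are
# assumptions based on common chat-completion APIs, not the platform's
# documented schema). Lower temperature -> more predictable wording.

def build_rephrase_payload(bot_input: str, temperature: float = 0.7) -> dict:
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be between 0 and 1")
    return {
        "messages": [{"role": "user", "content": f"Rephrase: {bot_input}"}],
        "temperature": temperature,  # recommended default: 0.7
    }
```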
**Context:** Considers the conversation context by feeding previous user messages into the generative AI. It represents the number of previous user inputs influencing the answer's tone and data, configurable from 0 to 5. The recommended default value is 2.
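Selecting the last N user inputs as context can be sketched as below. The function and variable names are illustrative assumptions; the 0–5 range and default of 2 come from the description above.

```python
# Sketch: select the most recent N user inputs (0-5) to send to the
# generative AI as conversation context. Names are illustrative.

def context_window(user_inputs: list, n: int = 2) -> list:
    """Return the most recent `n` user messages (recommended default: 2)."""
    if not 0 <= n <= 5:
        raise ValueError("context size must be between 0 and 5")
    return user_inputs[-n:] if n else []
```

For example, with five prior messages and the default of 2, only the last two influence the rephrased answer's tone.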
**Restricted words:** Restricts specific words or expressions from being included in rephrased answers.
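The macroprompt asks the model to avoid the restricted words, but a post-check on the model's output is a reasonable safeguard. The sketch below is an assumption about such a safeguard, not the platform's documented behavior.

```python
# Sketch (assumption, not documented behavior): if the rephrased text
# still contains a restricted word, fall back to the static answer.

def enforce_restrictions(original: str, rephrased: str, restricted: list) -> str:
    lowered = rephrased.lower()
    if any(word.lower() in lowered for word in restricted):
        return original  # deliver the static answer instead
    return rephrased
```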
**Timeout:** If the request to OpenAI exceeds the time limit, the system delivers the static answer, even when the rephrasing option is enabled. You can set this timeout between 1 and 10 seconds; the recommended default is 4 seconds. This parameter can be configured in the Parameters section.
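The timeout fallback can be sketched with Python's standard library: if the LLM call does not return within the configured limit, the static answer is delivered. The function names are illustrative assumptions; only the 1–10 second range and the 4-second default come from the text above.

```python
# Sketch of the timeout fallback using the standard library. If the LLM
# call takes longer than `timeout_s` seconds, the static answer is
# delivered instead. Names are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def rephrase_with_timeout(static_answer: str, llm_call, timeout_s: float = 4.0) -> str:
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(llm_call)
        try:
            return future.result(timeout=timeout_s)
        except FutureTimeout:
            return static_answer  # fall back to the original text
```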
Rephrasing uses the virtual agent's language to generate the output.
A macroprompt in AI text generation provides a comprehensive set of instructions and context to guide the model, resulting in more accurate and contextually appropriate responses. It includes detailed information, examples, and specific guidelines to ensure the AI will produce high-quality, relevant text.
The macroprompt used in this functionality is as follows:
"You are an AI language model specialized in rephrasing sentences for chatbot interactions. Your goal is to rephrase sentences while maintaining the conversations sentiment to ensure they fit well within a chatbot dialogue.
**Rephrasing Instructions:**
Rephrase the following sentence, ensuring that the conversation's sentiment is preserved.
- Conversation History: {conversation_history}
- Sentence to Rephrase: {bot_input}
- Language: {lang}
- Avoid Using: {restrict_words}
**Important Guidelines:**
1. Do not generate explanations or messages regarding the "restrict_words" field in the final output. The rephrased sentence should not contain any reference to these restrictions.
2. Make every effort to avoid using the words listed in the "restrict_words" field. The rephrased sentence should not contain any of these restricted words.
3. Please do not remove emojis, symbols, or links from the original sentence.
4. Avoid returning answers with quotation marks.
**Output Format:**
Provide the rephrased sentence in the format: "Rephrased: [Your rephrased sentence here]"
If it is not possible to rephrase the sentence without using the restricted words, return the original message without displaying any message like "Sorry, I cannot rephrase the sentence you indicated" or similar.
Your rephrased sentences should be creative and diverse, enhancing the chatbot's interactions while adhering to the provided instructions."
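Filling the macroprompt's placeholders and parsing the model's output can be sketched as below. The placeholder names (`conversation_history`, `bot_input`, `lang`, `restrict_words`) and the `"Rephrased: ..."` output format come from the prompt above; the helper functions themselves are illustrative assumptions.

```python
# Sketch: fill the macroprompt placeholders and parse the model's
# "Rephrased: ..." output. Placeholder names come from the prompt above;
# the helpers are illustrative assumptions.

PROMPT_FIELDS = (
    "- Conversation History: {conversation_history}\n"
    "- Sentence to Rephrase: {bot_input}\n"
    "- Language: {lang}\n"
    "- Avoid Using: {restrict_words}"
)

def fill_prompt(conversation_history, bot_input, lang, restrict_words) -> str:
    return PROMPT_FIELDS.format(
        conversation_history=conversation_history,
        bot_input=bot_input,
        lang=lang,
        restrict_words=", ".join(restrict_words),
    )

def parse_output(raw: str, fallback: str) -> str:
    """Strip the 'Rephrased:' prefix; otherwise deliver the static answer."""
    prefix = "Rephrased:"
    if raw.startswith(prefix):
        return raw[len(prefix):].strip()
    return fallback
```

Per the prompt's guidelines, anything that does not match the expected output format is replaced by the original (static) answer rather than an apology message.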