skilled-lime · 15mo ago

Openai assistant issue

Hi, I have an issue with integrating the OpenAI Assistant: it always adds the source annotation to its responses. This only happens when I choose the GPT-4o model. Can anyone help, please?
5 Replies
Tico | Voiceflow Assistant
To modify how answers are generated from AI, you can use the following configurations:

1. Model: This is the model that will be used to create your prompt completion. Each model has its own strengths and weaknesses, so be sure to select the one that is best for your task. The models available include GPT-3 DaVinci, GPT-3.5-Turbo, GPT-4, Claude V1, and Claude Instant V1.
2. Temperature: This will allow you to influence how much variation your responses will have from the prompt. Higher temperature will result in more variability in your responses. Lower temperature will result in responses that directly address the prompt, providing more exact answers.
3. Max Tokens: This sets the total number of tokens you want to use when completing your prompt. The max number of tokens available per response is 512. Greater max tokens means more risk of longer response latency.
4. System: This is the instruction you can give to the LLM model to frame how it should behave. Giving the model a 'job' will help it provide a more contextual answer. Here you can also define response length, structure, personality, tone, and/or response language. System instructions get combined with the question/prompt, so be sure they don't contradict.

If you're using Knowledge Base as the Data Source in your Response AI step, please note that you cannot use prompt engineering to prescribe a specific data file to use. The Response AI step searches the entire Knowledge Base, selecting data "chunks" with the highest relevance to the question/prompt. Keep this in mind when curating your Knowledge Base content.

For more detailed information on configuring models and settings, you can refer to the following resources:
- Global No Match (Generative or Static) – Voiceflow
- Response AI – Voiceflow
W. Williams (SFT)
In your Assistant, tell it not to provide sources.
skilled-lime (OP) · 15mo ago
I have instructed it not to add sources, tags, or annotations, but it still adds them. I also added a JS block that checks for regular-expression matches of the annotation and deletes them before displaying the response, but it still shows up.
Steve B. | IntelleSync.ai
Can you share your prompt?
foreign-sapphire · 14mo ago
Just drop in a JavaScript block that removes the Japanese-style brackets (【 】) and anything between them. Save the resulting output to a new variable and serve that up as your user response. That's how I got round it, anyway.
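A minimal sketch of that JS block, assuming the Assistant's citation annotations are wrapped in fullwidth lenticular brackets (e.g. 【4:0†source】, the format OpenAI Assistants file citations typically use); the variable names `rawResponse` and `cleanResponse` are placeholders for whatever your Voiceflow variables are called:

```javascript
// Raw model output containing a source annotation (example input).
const rawResponse = "The answer is 42【4:0†source】.";

// Strip every 【 … 】 span: match an opening bracket, any characters
// that are not a closing bracket, then the closing bracket.
const cleanResponse = rawResponse.replace(/【[^】]*】/g, "").trim();

// Serve cleanResponse to the user instead of rawResponse.
```

Doing the cleanup in a post-processing step like this is more reliable than prompting the model not to emit annotations, since the annotations are attached by the Assistants API rather than generated freely by the model.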