extended-salmon•17mo ago
Knowledgebase constantly outputting [not found] with Haiku
I would really like to use Haiku instead of GPT-3.5, but for messages that are clearly relevant it almost always outputs [not found] - even when the answer is clearly part of the sources it retrieves.
4 Replies
The issue you're experiencing with the Knowledge Base consistently outputting [not found] when using Haiku might be due to a few factors.
Firstly, it's important to understand how the Knowledge Base (KB) works. When you upload a document to the KB, it is split into 'chunks' - pieces of text. When you send a question to the KB, it determines which chunks are most similar to your question. Our system then combines those chunks with the question, your custom instructions, and your system prompt into a structured wrapper prompt. This entire package is sent to the AI model, and an answer is returned.
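The flow described above can be sketched roughly like this (illustrative only - the function names, the toy similarity scoring, and the wrapper-prompt wording are assumptions, not the product's actual internals):

```python
def score(chunk: str, question: str) -> int:
    """Toy similarity: count shared words. Real systems use embeddings."""
    return len(set(chunk.lower().split()) & set(question.lower().split()))

def build_prompt(chunks, question, instructions, system, chunk_limit=3):
    """Pick the most similar chunks and wrap them with the question,
    instructions, and system prompt into one structured package."""
    top = sorted(chunks, key=lambda c: score(c, question), reverse=True)[:chunk_limit]
    context = "\n---\n".join(top)
    return (
        f"{system}\n{instructions}\n"
        f"Answer only from the sources below; otherwise reply [not found].\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

chunks = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm on weekdays.",
]
prompt = build_prompt(
    chunks,
    "How long do refunds take?",
    instructions="Be concise.",
    system="You are a helpful support assistant.",
)
```

The key point for this issue: the model only ever sees the chunks that the similarity step selected, plus the instruction to say [not found] otherwise - so a stricter model can refuse even when a relevant chunk is present.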
If the AI model can't find a relevant answer in the chunks it receives, it returns [not found]. This can happen either because the retrieval step didn't surface the right chunks, or because the model judges the retrieved chunks too strictly and declines to answer from them.
Haiku, being a different model from GPT-3.5, may interpret the chunks and the question differently, leading to a [not found] response even when the answer seems clear to you. Each model has its own strengths and weaknesses, and it's possible that Haiku is less effective than GPT-3.5 in this particular use case.
To improve the performance, you might want to consider adjusting the settings of your Knowledge Base. You can adjust settings such as Model, Temperature, Max Tokens, Instructions, System, and Chunk Limit to tailor the generation of responses to your specific needs.
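As a concrete starting point, the settings listed above could look something like this (the field names mirror the UI labels; the exact shape and values are assumptions, not the product's real API):

```python
# Hypothetical KB settings sketch - field names follow the UI labels above.
kb_settings = {
    "model": "claude-3-haiku",
    "temperature": 0.3,     # the OP reports higher values reduce [not found] somewhat
    "max_tokens": 1024,
    "chunk_limit": 5,       # sending more chunks raises the chance the answer is included
    "instructions": "Answer from the sources; reply [not found] only if nothing applies.",
    "system": "You are a support assistant grounded in the uploaded documents.",
}
```

Raising the chunk limit and softening the instructions are the two levers most directly related to a model refusing with [not found] when the answer is actually present in the sources.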
Also, ensure that the data in your Knowledge Base is well-structured and relevant to the questions being asked. The principle of "garbage in, garbage out" applies here - the quality of the answers you get is directly related to the quality of the data you put in.
Lastly, you might want to consider using a different model if Haiku continues to return [not found] frequently. While Haiku has its strengths, it might not be the best fit for your specific use case.
Remember, the goal is to find the right balance between the model's capabilities and the data you're providing to it.
Very strange. Have you tested it in the KB preview tool to see which chunks are being returned?
extended-salmonOP•17mo ago
Yes, and there doesn't seem to be a difference between the models there. Even when I ask questions that are almost a literal match for what's in the content of the KB, it still fails very often (more than 70% of the time, I think). The success rate goes up a bit if I increase the temperature, but it still misses so often that it's practically unusable.
DM me and I can have a look