afraid-scarlet · 2y ago

Timeout error in functions

It's working most of the time but throws these errors every now and then, and after a few more messages it works again. I tried a different API endpoint and had the same intermittent network timeout issues.
(screenshot attached)
37 Replies
wise-white · 2y ago
Maybe an API provider-side error?
afraid-scarlet (OP) · 2y ago
Nope, like I mentioned, I tried with 2 different endpoints.
NiKo | Voiceflow
Do both of your endpoints do completions? Because, to Moritz's point, those can time out from time to time by taking longer to answer.
afraid-scarlet (OP) · 2y ago
Yes. At first I thought it was a server error, since the first one was hosted locally using Ollama. Then this one was built following the example given in the YT tutorial, using Together.ai.
NiKo | Voiceflow
Try running some tests on those endpoints with one of the utterances that was generating an error. Run multiple requests in a dedicated test tool to check whether you sometimes get spikes. This happens more than you'd think; even on our end we have to deal with this on OpenAI endpoints.
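The spike-probing NiKo describes can also be scripted instead of clicked through in a test tool. A minimal sketch, where `measureLatencies` and `makeRequest` are hypothetical names and `makeRequest` stands in for whatever call hits the endpoint under test:

```javascript
// Time N sequential requests to spot latency spikes.
// makeRequest is a placeholder for the actual endpoint call.
async function measureLatencies(makeRequest, runs = 10) {
  const timings = [];
  for (let i = 0; i < runs; i++) {
    const start = Date.now();
    await makeRequest();
    timings.push(Date.now() - start);
  }
  const max = Math.max(...timings);
  const avg = timings.reduce((a, b) => a + b, 0) / timings.length;
  return { timings, avg, max };
}
```

If `max` is far above `avg`, the endpoint has occasional slow responses that can trip a fixed request timeout even though most calls succeed.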
afraid-scarlet (OP) · 2y ago
I'm thinking of adding error handling on the outside: an If step looping the same question 2-3 times until it succeeds, and if there's still no answer, sending it to the built-in GPT. Gonna try the testing with Postman and see if that works.
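The retry-then-fallback idea above can be sketched as a small wrapper. `withRetries` and `callLLM` are hypothetical names (not Voiceflow or OpenAI APIs); the 2-3 attempts and the fallback on final failure come from the message above:

```javascript
// Retry a flaky request a few times before giving up.
// callLLM is a placeholder for whatever function performs the completion request.
async function withRetries(callLLM, maxAttempts = 3) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await callLLM();
    } catch (error) {
      lastError = error;
      // Small growing backoff so a transient spike can pass before the next try.
      await new Promise((resolve) => setTimeout(resolve, 500 * attempt));
    }
  }
  // All attempts failed: rethrow so the caller can fall back (e.g. to the built-in GPT).
  throw lastError;
}
```

The caller wraps the request in a try/catch and routes to the fallback path when the wrapper finally throws.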
W. Williams (SFT)
I built a console.log-type function so I could get timing data from Functions. This is not a Functions issue; it is an LLM/endpoint issue. All my tests show that the fetch requests were working fine.
afraid-scarlet (OP) · 2y ago
I thought so too when it was running on a local LLM, so I switched to a good provider. But it still happened. Or maybe it's a code error in the function I built? Still hard to determine, since it only happened for a few tries and then returned to normal.
afraid-scarlet (OP) · 2y ago
@NiKo | Voiceflow tried it a few times; it worked for 3-4 messages, then it started showing a 400 error and then this.
(screenshot attached)
afraid-scarlet (OP) · 2y ago
@W. Williams (SFT) what do you think? endpoint, my code or the VF functions?
W. Williams (SFT)
Export the function and let me look at it.
NiKo | Voiceflow
Also the memory encoding looks weird, as if you were re-encoding it each time or populating logs (agent responses) in it.
afraid-scarlet (OP) · 2y ago
I'm stringifying memory into a variable {memory} before sending. That is just {memory} in a text node, to preview it before sending.
afraid-scarlet (OP) · 2y ago
Just within a few tries, @NiKo | Voiceflow, with the Mistral API (a new endpoint, just for confirmation).
(two screenshots attached)
afraid-scarlet (OP) · 2y ago
@W. Williams (SFT)
W. Williams (SFT)
@NiKo | Voiceflow @bamelement So I jumped on a call and looked at this. I even re-wrote his function. I saw zero issues across hundreds of calls. I sent him the Function I wrote and he continued to have issues (see screenshots). NOTE: I let him use my Mistral account / API key. Seems like a possible issue with his project. Happy to provide any other info you guys need.
(two screenshots attached)
NiKo | Voiceflow
Yeah, ideally we should run the exact same test with the exact same project at the exact same time and see what we get. We can set up a test session next week if you both have time.
wise-white · 2y ago
I have spent hours trying to fix this and thought it was in my code, but now I see other people have a similar issue. Locally it works completely fine. 70% error rate.
(screenshot attached)
wise-white · 2y ago
@NiKo | Voiceflow I have time for a call if that might help resolve this issue.
NiKo | Voiceflow
Not sure there's much to do here; that seems to be the error rate of GPT (OpenAI) for the past few weeks on some of the endpoints, recent models, Assistants, or with large responses. Can you share more context about your function and the OpenAI endpoints you are using (or at least those leading to a timeout of your requests)?
wise-white · 2y ago
Sure, absolutely. I have two OpenAI API calls in that function, both to the gpt-4 turbo endpoint. The reason I'm assuming it has something to do with this error is that the error is always "network timeout", never an incorrect body or anything. Is there anything specific you are looking for? Or we can do a call tomorrow if you like, which might speed things up.
NiKo | Voiceflow
A good test would be to run those 2 exact requests that generate a timeout against the OpenAI API in Insomnia or Postman and check the total response time.
wise-white · 2y ago
I will try to test it, even though I have never worked with either of these, but I can say for now that the second API call, which is the one exclusively causing the error, takes around 10 seconds.
W. Williams (SFT)
Moritz, what happens if you switch to gpt-3.5-turbo on the second call? See if that works. I believe that is gpt-3.5-turbo-1106.
wise-white · 2y ago
3.5 turbo seems to work fine, but it also has a much lower runtime. gpt-4 turbo seems to have issues: "FetchError: network timeout at: https://api.openai.com/v1/chat/completion"
W. Williams (SFT)
Try to stream the answer. I believe this is an issue with the OpenAI completions endpoint. Upon further review, it appears that there are reported issues with GPT-4. If you stream the answer, it does not have this issue. You will need to buffer the response and return it after it has finished.
wise-white · 2y ago
Good idea. I tested that with this function:
async function fetchAIResponse(apiKey, information) {
  const requestBody = {
    model: "gpt-4-1106-preview",
    messages: [
      { role: "system", content: "You answer questions" },
      { role: "user", content: information }
    ],
    temperature: 0.4,
    max_tokens: 800,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0
  };

  try {
    const response = await fetch("https://api.openai.com/v1/chat/completions", {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(requestBody)
    });

    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }

    const reader = response.body.getReader();
    let chunks = [];
    let completed = false;

    while (!completed) {
      const { done, value } = await reader.read();
      if (done) {
        completed = true;
      } else {
        chunks.push(value);
      }
    }

    // Uint8Array has no concat method; merge the chunks manually.
    const totalLength = chunks.reduce((sum, chunk) => sum + chunk.length, 0);
    const merged = new Uint8Array(totalLength);
    let offset = 0;
    for (const chunk of chunks) {
      merged.set(chunk, offset);
      offset += chunk.length;
    }

    const jsonResponse = JSON.parse(new TextDecoder("utf-8").decode(merged));
    return jsonResponse.choices[0]?.message.content;

  } catch (error) {
    console.error('Error in OpenAI API call:', error);
    // Ensure that 'logStr' is defined somewhere in your code
    logStr += "Error in second OpenAI API call: " + error + " \n";
    return null;
  }
}
But I still get the exact same error. I hope this is how you thought of it, @W. Williams (SFT), or did I make any errors?
W. Williams (SFT)
Did that work?
wise-white · 2y ago
No sorry
W. Williams (SFT)
After I messaged you, I tried and got a reader error
wise-white · 2y ago
Maybe that is also failing in my code and I'm just not realizing it because I don't have it logged. I have no idea, honestly. Is there anything I can do to better determine where the issue might come from, if it's not this Voiceflow Functions error? Now I also got a reader error.
W. Williams (SFT)
lol
NiKo | Voiceflow
If you want to use a streamed response, first you need to set the stream setting to true in the OpenAI request. Here is a quick function to handle a streamed response and return the full content.
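The function NiKo shared is not reproduced in the thread; below is a rough sketch of the same idea, not the original code. It assumes OpenAI's server-sent-event stream format (`data:` lines, ending with `data: [DONE]`), and note `stream: true` in the request body, which the earlier attempt was missing:

```javascript
// Parse OpenAI's streamed (SSE) response text into the full message content.
// Each event line looks like: data: {"choices":[{"delta":{"content":"..."}}]}
function collectStreamedContent(sseText) {
  let content = "";
  for (const line of sseText.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break;
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (delta) content += delta;
  }
  return content;
}

// Request with stream: true, buffer the chunks, then return the full content.
async function fetchStreamedCompletion(apiKey, information) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      model: "gpt-4-1106-preview",
      stream: true, // without this, the response is a single JSON body, not SSE
      messages: [{ role: "user", content: information }]
    })
  });
  let text = "";
  const reader = response.body.getReader();
  const decoder = new TextDecoder("utf-8");
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return collectStreamedContent(text);
}
```

Streaming gets the first bytes back quickly, but as noted below, the overall request can still hit the platform's response timeout if generating the full answer takes too long.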
NiKo | Voiceflow
Shared as an example, as you will still hit a timeout if the response takes more than 10s:
Response timeout while trying to fetch https://api.openai.com/v1/chat/completions (over 10000ms)
wise-white · 2y ago
Thank you NiKo 🙏. But, for my understanding, is there currently no way to use API calls that take longer than 10 seconds?
NiKo | Voiceflow
I'm checking with the team what is the current limit and if we are going to change it.
wise-white · 2y ago
Thank you, but it probably won't be implemented until next week, right?