eastern-cyan•16mo ago
What’s causing LLM Response Timeout?
Hello. I’m just a newbie here, so forgive me if my question is basic. I’m using my own Assistants API in the first part of my workflow (screenshot 1), and I also made an intent for “Book Appointment” (screenshot 2), but I’m using VF’s LLMs for intent classification.
It was working well earlier, but after a few hours I started receiving LLM Response Timeout errors and the conversations ended.
1) what could be the cause?
2) is it because I exceeded 2M tokens already?
3) is there a way for me to use my own OpenAI API Assistant directly, so that I’m drawing on my own token allowance most of the time?
Thank you for all your help. This is all new to me.


5 Replies
Causes of LLM Response Timeout
There are a few potential reasons why you might be experiencing LLM Response Timeout issues:
1. Network Issues: Sometimes, network connectivity problems can cause timeouts. Ensure that your internet connection is stable.
2. Server Load: High server load on Voiceflow's end or OpenAI's servers can lead to timeouts. This can happen during peak usage times.
3. Token Limits: If you are using a free or limited plan, you might have exceeded the token limit. However, exceeding 2M tokens would typically result in a different error message indicating that the token limit has been reached.
4. Configuration Issues: There might be a misconfiguration in your workflow or API settings. Double-check your setup to ensure everything is correctly configured.
Exceeding Token Limits
Exceeding the token limit usually results in a specific error message related to token usage rather than a timeout. However, it's always good to monitor your token usage to ensure you stay within your plan's limits.
Using Your Own OpenAI API
To use your own OpenAI API and manage your token allowance, you can configure Voiceflow to use your API keys. Here’s how you can do it:
1. API Key Configuration: Go to your Voiceflow project settings and find the section for API keys. Enter your OpenAI API key here.
2. Custom API Integration: Modify your workflow to call your custom API endpoints instead of relying on Voiceflow's built-in LLMs. This way, you can manage your token usage more effectively.
3. Testing: After configuring your API keys and modifying your workflow, test thoroughly to ensure that everything is working as expected.
By following these steps, you can better manage your token usage and potentially avoid timeout issues related to token limits.
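As a concrete illustration of step 2, calling your own endpoints from an API step boils down to three HTTP requests against the OpenAI Assistants API: create a thread, add the user’s message, then start a run. The sketch below only builds those requests (it doesn’t send them), and the API key and assistant ID are placeholders you would replace with your own:

```python
import json

# Placeholder credentials -- substitute your own values.
OPENAI_API_KEY = "sk-..."
ASSISTANT_ID = "asst_..."

BASE = "https://api.openai.com/v1"

def _headers():
    # The Assistants API requires the beta header in addition to the auth header.
    return {
        "Authorization": f"Bearer {OPENAI_API_KEY}",
        "Content-Type": "application/json",
        "OpenAI-Beta": "assistants=v2",
    }

def create_thread_request():
    # POST /v1/threads -- starts a new conversation thread.
    return ("POST", f"{BASE}/threads", _headers(), json.dumps({}))

def add_message_request(thread_id, text):
    # POST /v1/threads/{id}/messages -- appends the user's utterance.
    body = {"role": "user", "content": text}
    return ("POST", f"{BASE}/threads/{thread_id}/messages",
            _headers(), json.dumps(body))

def create_run_request(thread_id):
    # POST /v1/threads/{id}/runs -- asks your assistant to respond;
    # you then poll the run until its status is "completed".
    body = {"assistant_id": ASSISTANT_ID}
    return ("POST", f"{BASE}/threads/{thread_id}/runs",
            _headers(), json.dumps(body))
```

In Voiceflow, each of these requests would be one API step, with the thread ID captured from the first response and reused in the later calls.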
For more detailed guidance, you can refer to the Voiceflow documentation.
Links
- Voiceflow Documentation
eastern-cyanOP•16mo ago
Thank you for your help. But I can’t seem to find the API keys section under Settings. Is it the one under Integrations, under Primary Key? I thought that was the Voiceflow API key
foreign-sapphire•16mo ago
Tico seems to be hallucinating a bit here; there’s no way to provide your own OpenAI keys to Voiceflow. You can see your token count in the bottom left of your screen, and if you’re using a ton of tokens, that could be why, so check that first 🙂

If you are using the Assistant API, you will not be using VF Tokens
eastern-cyanOP•16mo ago
Oh I see. So I’ve reached 2M tokens already 😭. So does that mean for intent classification, I can only use VF LLMs?
I think for the Book Appointment part I’m using VF LLMs, because I don’t know how to integrate the Assistants API setup from the Start workflow’s API calls there. Honestly, I just copied the API call workflow from YouTube 😞 Now I’m having difficulty using it in other workflows like Book Appointment.