correct-apricot•2y ago
Responses slow when processing a large batch of API calls on a POST request to OpenAI GPT-4
We are using a Topic flow to process an image URL.
The steps are:
1. We display an image from a URL.
2. We send the image URL in a POST request to GPT-4 for analysis.
3. We display the analysis.
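As a sketch, the POST step in the flow above might look like this (a minimal illustration assuming the OpenAI Chat Completions endpoint with an image-capable model such as `gpt-4o`; the function and variable names are ours, not from the original post):

```python
import json

# Chat Completions endpoint; the actual call would also need an Authorization header.
OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def build_analysis_payload(image_url: str, prompt: str = "Describe this image.") -> dict:
    """Build the request body for an image-capable chat model."""
    return {
        "model": "gpt-4o",  # assumed image-capable model; swap in whatever you use
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

if __name__ == "__main__":
    payload = build_analysis_payload("https://example.com/photo.jpg")
    # The actual POST (e.g. requests.post(OPENAI_URL, json=payload, headers=...))
    # is the slow step this thread is about; it is omitted here.
    print(json.dumps(payload, indent=2))
```

This request can easily take several seconds for image analysis, which is why the batching behavior described below becomes noticeable.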
When we run tests:
If we isolate the display image node, the image appears instantly. When we attach the display image node to the send-POST-request node, the image does not display until the POST request is nearly complete; the analysis then follows immediately.
Our guess is that Voiceflow is batching the responses until it has completed all the steps, then sending them all in quick succession.
How can we unbatch these responses so that the image displays immediately, then we inform the user to wait for analysis, then respond with the analysis when the POST call finishes?
1 Reply
In Voiceflow, the responses are indeed batched and sent together. This is done to ensure a smooth and consistent user experience. However, I understand your need to display the image immediately and then inform the user to wait for the analysis.
Unfortunately, as of now, there's no direct way to unbatch these responses in Voiceflow. The system is designed to process all the steps and then send the responses. This is why you're noticing a delay when the image display node is attached to the POST request node.
One workaround could be to split the process into two separate flows. The first flow could be responsible for displaying the image, and the second flow could handle the POST request and analysis. This way, the image would display immediately, and the analysis would follow as soon as it's ready.
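The intended behavior of that split can be illustrated outside Voiceflow with plain `asyncio`: the image response is emitted as soon as it is ready, a "please wait" message follows, and the analysis is delivered only when the slow POST resolves. This is a hedged sketch of the pattern, not Voiceflow's API; `slow_analysis`, `run_flow`, and `emit` are illustrative names:

```python
import asyncio

async def slow_analysis(image_url: str) -> str:
    """Stand-in for the POST to the model; sleeps to mimic network latency."""
    await asyncio.sleep(0.1)
    return f"analysis of {image_url}"

async def run_flow(image_url: str, emit) -> None:
    # First "flow": display the image immediately, then tell the user to wait.
    emit(f"[image] {image_url}")
    emit("Analyzing the image, one moment...")
    # Second "flow": only now block on the slow request, emitting when done.
    emit(await slow_analysis(image_url))

if __name__ == "__main__":
    messages: list[str] = []
    asyncio.run(run_flow("https://example.com/photo.jpg", messages.append))
    print(messages)
```

The key design point is that nothing is held back until the whole pipeline finishes: each message goes out the moment its step completes, which is the ordering the question is asking for.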
Remember, this is just a workaround and might not be the most efficient solution. It's always a good idea to experiment with different approaches and see what works best for your specific use case.
Sources:
- Voiceflow Developer API Reference
- Voiceflow Functions Documentation