Responses are slow when processing a batch of API calls in a POST request to OpenAI GPT-4
The steps are:
1. We display an image from a URL
2. We send the image in a POST request to GPT-4 for analysis
3. We display the analysis
When we run tests:
If we isolate the display-image node, the image appears instantly. When we attach the display-image node to the send-POST-request node, the image does not display for a while, and then the image and the POST response both arrive at once.
Our guess is that Voiceflow is batching the responses until all the steps have completed, then sending every response in quick succession.
How can we unbatch these responses so that the image displays immediately, then we inform the user to wait for analysis, then respond with the analysis when the POST call finishes?
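Voiceflow itself is a visual builder, so this is only a plain-Python sketch of the ordering we are after, not an actual Voiceflow configuration. The function names are hypothetical, and the `time.sleep` stands in for the slow GPT-4 round trip; the point is that the image and the "please wait" message reach the user before the analysis call returns, because the POST runs on a separate thread instead of blocking the response pipeline:

```python
import threading
import time

events = []  # records the order in which responses reach the user

def display_image(url):
    # Step 1: show the image right away (hypothetical display call)
    events.append(f"image shown: {url}")

def analyze_image(url):
    # Step 2: stand-in for the POST to GPT-4; the sleep
    # simulates the slow API round trip
    time.sleep(0.2)
    events.append("analysis: (model response here)")

def handle_request(url):
    display_image(url)                             # respond immediately
    events.append("please wait for the analysis")  # interim message
    # Run the slow POST off the main path so it cannot
    # hold back the two responses already queued above
    worker = threading.Thread(target=analyze_image, args=(url,))
    worker.start()
    return worker

worker = handle_request("https://example.com/photo.jpg")
worker.join()  # in a real flow we would deliver the analysis when it arrives
print(events)
```

Running this prints the three events in the desired order: image first, interim message second, analysis last.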
