Agent with Carousel/Buttons activated
Good day everyone, I'm encountering an issue when using Carousel and Buttons with my agent.
Context: I configured an agent that guides students with immigration questions, and I added a tool that pulls up relevant apartment listings so students can find housing easily.
The tool returns nicely formatted JSON, and I toggled the Carousel feature on so the listings can be presented as a carousel.
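For context, the payload looks roughly like this (a simplified sketch with illustrative field names, not the exact schema my tool returns):

```python
# Hypothetical shape of the listings JSON the tool returns.
# Field names here are illustrative assumptions, not a real Voiceflow schema.
listings = {
    "listings": [
        {
            "title": "2BR near campus",
            "description": "Furnished, 10 min walk to the university.",
            "image_url": "https://example.com/apt1.jpg",
            "link": "https://example.com/listing/1",
        },
        {
            "title": "Studio downtown",
            "description": "Utilities included, short-term lease OK.",
            "image_url": "https://example.com/apt2.jpg",
            "link": "https://example.com/listing/2",
        },
    ]
}

# Each entry carries the fields a carousel card would need,
# so one listing should map cleanly to one card.
for card in listings["listings"]:
    print(card["title"])
```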
When I do that, the chat ends immediately and I get the following error:
Did anyone ever get this issue?
Thanks in advance!
8 Replies
@Moderator how can I get a response if there is no follow-up?
hey
Hey Michael, how are you?
Thanks for taking the time.
I figured that Anthropic models have no problem generating Buttons or Carousels, but it's very hard to get consistent results with cheaper OpenAI models like GPT-4.1 mini.
Any idea why? I couldn't figure out the difference in the backend.
@Dali I am also having issues with Buttons, and it all started when I switched to a cheaper model. I was burning around 3K credits for a few hours of building, so I switched models. While consumption is now very manageable, my buttons randomly disappear, both while testing on canvas and even after deployment. That's not right.
Yeah, I paid a lot of money for development recently, because in order for it to work I need to use Claude 3.7 and above. That's inconvenient.
It seems OpenAI models don't handle the button payload as well as Anthropic models do, which is strange.
ElevenLabs has a 50% cheaper token price for development environments.
Something like that would make it easier to spend less money while tweaking, and pay the full token price for published models.
Especially since the money we pay is partially used to find issues and hence contributes to the product's feedback.
Heyo, thanks for your feedback.
We apologize for the inconvenience caused with the OpenAI models. We're actively trying to improve component generation with the lower-end models. We've updated the docs to let others know about this issue as well.
Until further notice, I'd honestly suggest you stick to the Anthropic models for the Agent step; they've generally performed better than other models for tool calling and component generation.
@lord guando I understand, but paying $150 for a week of development is not viable: Claude 3.7 Sonnet is ~7.5× (input) to ~9.4× (output) more expensive than GPT-4.1 mini (Claude 3.7: $3 in / $15 out per 1M tokens vs GPT-4.1 mini: $0.40 in / $1.60 out).
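To put numbers on it, here's the quick math from the per-1M-token prices above (the weekly token volumes are made-up illustrative figures, not my real usage):

```python
# Per-1M-token prices as quoted above.
claude_in, claude_out = 3.00, 15.00   # Claude 3.7 Sonnet, $ per 1M tokens
mini_in, mini_out = 0.40, 1.60        # GPT-4.1 mini, $ per 1M tokens

# Price ratios: how much more expensive Claude 3.7 Sonnet is.
print(round(claude_in / mini_in, 1))   # -> 7.5 (input)
print(round(claude_out / mini_out, 1)) # -> 9.4 (output)

# Hypothetical week of building: 20M input + 5M output tokens.
print(claude_in * 20 + claude_out * 5)  # -> 135.0 ($ on Claude 3.7 Sonnet)
print(mini_in * 20 + mini_out * 5)      # -> 16.0 ($ on GPT-4.1 mini)
```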
I hope this will be addressed soon. Thank you very much for making Voiceflow such a good product; this would make things much more cost-effective for devs and clients!
Hey, yeah, I totally agree with you. Unfortunately, it's the model's fault and a known issue; that's why Agent steps default to Sonnet 4. We're hoping we'll see better outputs with GPT-5.
Here's some advice on using components with the Agent step, though: if you want to display cards, carousels, and buttons with the Agent step (assuming they are static), try using the Component step.
Create a component for your items (e.g. a component can be a simple workflow with a Message step and then a carousel). Create an exit condition to run the component, then link back to the Agent step after the exit condition finishes.