Error when using GPT chat in an Agent block
When using GPT chat in the Agent block, the block fails to start and an error occurs. With Claude, everything works without errors. I've attached a video and a file with an example of the error in the code.
I already tried uploading the error code to ChatGPT; it advised solving the problem with an API block and code, but the instructions weren't sufficient, and I don't understand how to apply them to my task.
I want the Agent block to work with GPT-4o chat at startup. Why does the error occur, and how can I fix it?
I've attached a file with the error code and a screen recording.
It’s not your project – it’s the way GPT models validate the very first call they receive.
What’s happening
• The Agent block autostarts as soon as the workflow is reached, so it immediately asks the selected LLM for a reply.
• GPT-4o (and all OpenAI “chat” models) will throw the error below whenever the request’s messages array is empty:
“You must provide a ‘messages’ array with at least one user or system message.”
• Anthropic models are more forgiving, which is why Claude works while GPT-4o fails.
• Voiceflow’s outage-fallback isn’t triggered here because the error is classified as a user / design error, not a provider outage.
How to fix it
1. Make sure the very first turn you pass to GPT contains a user message.
• Add an Interaction step (or any step that collects the user’s first utterance) before the Agent block.
• OR, if you want the assistant to open the conversation, put that first message in a Prompt step and feed it to the Agent block as context.
2. If you need the Agent block to wait, simply disable “Autostart” in its settings and call the Agent only after you have at least one message.
3. Keep Claude as a fallback: you can still configure Anthropic as the fallback provider for OpenAI models; if GPT-4o is actually down, the flow will switch over automatically (see docs on LLM fallback).
Result: GPT-4o will start normally once it has a valid first message, and the block will no longer crash.
Docs for reference
• LLM fallback & non-outage errors: https://docs.voiceflow.com
• Prompt vs. Interaction steps: https://docs.voiceflow.com
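To make the failure mode concrete, here is a minimal Python sketch of the check GPT-style chat endpoints apply and of the workaround above (seeding a first message before the model is called). The function names (`validate_messages`, `ensure_first_turn`) and the opener text are hypothetical illustrations, not Voiceflow or OpenAI API calls; the sketch only models the request payload, it makes no network call.

```python
# Hypothetical sketch of why an empty `messages` array fails for
# GPT-style chat models, and a guard that seeds a first message.

def validate_messages(messages):
    """Mimic the server-side check: the request must contain at least
    one 'user' or 'system' message, otherwise it is rejected."""
    if not any(m.get("role") in ("user", "system") for m in messages):
        raise ValueError(
            "You must provide a 'messages' array with at least one "
            "user or system message."
        )
    return messages

def ensure_first_turn(messages, opener="Hi! How can I help you today?"):
    """If the conversation history is empty (e.g. an autostarted Agent
    block), seed it with a system message so the first call is valid."""
    if not messages:
        return [{"role": "system", "content": opener}]
    return messages

# An empty history is rejected, which is the GPT-4o error in this thread.
try:
    validate_messages([])
except ValueError as e:
    print("rejected:", e)

# A seeded history passes the same check.
seeded = ensure_first_turn([])
print(validate_messages(seeded)[0]["role"])  # system
```

This mirrors fix 1 above: either the user's first utterance or a designer-supplied opener must be in the history before GPT-4o is invoked, whereas Claude tolerates the empty case.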
Thank you. Did I understand correctly that I need to insert other blocks before the Agent block? I don't understand how to solve the problem within the Agent block itself without adding other blocks, since right now the user writes a message and the Agent block responds to them.
I figured it out: the "exit conditions" setting had a Name that was too long.