Agent response
Is it possible to use an Agent block and instruct it not to send the message to the customer?
I want the Agent to generate the answer to the customer’s question, but instead of sending it, the answer should be stored in a variable {answer}. Right after storing the response in the variable, the flow should automatically continue through the exit condition without waiting for the user’s next input.
1 Reply
At the moment the Agent step is designed to complete the full “think → respond → (optionally) exit-condition” loop.
When the Agent finishes reasoning it always emits at least one trace (Text, Card, etc.), and that trace is returned to the chat widget/runtime before the exit-condition path fires. There isn’t a setting that lets you:
• suppress the outgoing trace, and
• only write the answer to a variable.

Workarounds if you need the answer silently:

1. Use an API / Function step instead
• Call the LLM (or any endpoint) yourself.
• Capture the response JSON into {answer}.
• Continue the flow immediately; no user turn required.

2. Keep the Agent step, but move the LLM call into a Function inside the Agent
• Have the Function return its result in a variable.
• In your Agent instructions tell the model not to include the result in its reply (e.g. “After calling the function, acknowledge with ‘OK’ only.”).
• The user will still receive that minimal acknowledgement, but not the full content.

If you truly need a completely silent operation, option 1 (an API / Function step outside the Agent step) is the cleanest approach today.
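As a minimal sketch of option 1, the body of a Function step could generate the answer, write it to {answer}, and route straight down an exit path without emitting any trace. The `generateAnswer` and `callLLM` names and the `'done'` path are placeholders of mine; in a real Function you would replace `callLLM` with a `fetch()` to your LLM provider, and you should verify the exact return shape (`outputVars`, `next`) against your platform's Function documentation:

```javascript
// Sketch: produce an answer silently inside a Function step.
// `callLLM` is a placeholder for your actual LLM call (e.g. a fetch()
// to your provider's completion endpoint).
async function generateAnswer(question, callLLM) {
  const text = await callLLM(question);
  return {
    // Store the generated text in the {answer} variable...
    outputVars: { answer: text },
    // ...and continue immediately down the 'done' path, sending no
    // message to the user (no trace is returned here).
    next: { path: 'done' },
  };
}
```

Because nothing is returned as a trace, the user sees no message at all; the flow simply resumes with {answer} populated.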