You can adjust the length of AI responses by changing the 'Max Tokens' setting in your Response AI Step. This setting caps the total number of tokens used to complete your prompt. The maximum available per response is 512, and keep in mind that your prompt and settings count toward this total.
Be aware, however, that a higher Max Tokens value can increase response latency. Conversely, if you find that the AI's output is being cut off, you may need to raise the 'Max Tokens' setting to allow for longer responses.
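If you're curious what this setting maps to under the hood, here is a minimal sketch, assuming the step forwards your Max Tokens value to an OpenAI-style chat completion API (the model name and prompt below are placeholders, not the platform's actual configuration):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model for illustration
    messages=[{"role": "user", "content": "Summarize our return policy."}],
    max_tokens=256,  # the output budget -- what the 'Max Tokens' setting controls
)

choice = response.choices[0]
if choice.finish_reason == "length":
    # The model stopped because it hit the max_tokens ceiling,
    # meaning the response was cut off mid-thought.
    print("Response truncated -- try a higher Max Tokens value.")
print(choice.message.content)
```

A `finish_reason` of `"length"` is the telltale sign that a reply was truncated by the token ceiling rather than ending naturally.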
Remember, tokens are consumed by both the input (what is sent to the AI model) and the output (what is received from it). The 'token' slider on the AI steps only controls the output message. Depending on your design, a large amount of information may be sent to the model as part of your prompt.
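If you want a rough sense of how many tokens your prompt alone consumes, you can count them locally. Here is a minimal sketch using the tiktoken library; the cl100k_base encoding matches OpenAI's GPT-3.5/GPT-4 family, and your platform's underlying model may tokenize differently:

```python
import tiktoken

# cl100k_base is the encoding used by OpenAI's GPT-3.5/GPT-4 models;
# other models may split the same text into a different number of tokens.
enc = tiktoken.get_encoding("cl100k_base")

prompt = (
    "You are a helpful support assistant. "
    "Answer the user's question using the knowledge base below.\n"
    "..."  # variables, knowledge-base chunks, and chat history would go here
)

print(f"Input tokens: {len(enc.encode(prompt))}")
```

Counting this way makes it easier to see why a long prompt leaves less room in the overall token budget for the response itself.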