The cost of chatbot development services varies greatly depending on the complexity of the bot, the platforms it's deployed on, and the specific needs of the client. Voiceflow doesn't provide chatbot development services directly, but we do offer a platform for you to create and manage your chatbots.
As for API tokens, how text is split into tokens is determined by the AI model you use. For example, OpenAI (GPT models) and Anthropic (Claude models) treat a token as roughly 3-4 characters of text. Tokens are consumed by both the input (what is sent to the AI model) and the output (what is received back from it).
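As a rough illustration only (exact counts require the provider's own tokenizer), you can approximate token usage from character length. The 4-characters-per-token figure below is an assumption based on the rule of thumb above:

```typescript
// Rough token estimate, assuming ~4 characters per token.
// Exact counts come from the provider's tokenizer, not this heuristic.
function estimateTokens(text: string): number {
  const CHARS_PER_TOKEN = 4;
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

const prompt = "Summarize the customer's last three support tickets.";
const reply = "The customer reported a billing issue, a login error, and a bug in exports.";

// Both the input (prompt) and the output (reply) consume tokens.
const totalTokens = estimateTokens(prompt) + estimateTokens(reply);
console.log(totalTokens); // 32 with this 4-characters-per-token heuristic
```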
The cost of these tokens also depends on the AI model you're using. Voiceflow is billed by our Large Language Model (LLM) providers (OpenAI, Anthropic, Google, etc.) per token at their own prices. We then apply a token multiplier based on how much more expensive the provider's per-token rate is than the Voiceflow token rate. For example, if Model A charged us $30 per 1M tokens, and the Voiceflow token price was $2.5 per 1M tokens, then using Model A from inside Voiceflow would be charged at a 12x token multiple.
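To make that multiplier math concrete, here is a minimal sketch using the illustrative figures from the example above (these are not current list prices):

```typescript
// Illustrative multiplier math; dollar figures are example values, not current rates.
const PROVIDER_PRICE_PER_1M_USD = 30;   // what the LLM provider charges per 1M tokens
const VOICEFLOW_PRICE_PER_1M_USD = 2.5; // the Voiceflow token price per 1M tokens

// The multiplier reflects how much more expensive the provider's rate is.
const multiplier = PROVIDER_PRICE_PER_1M_USD / VOICEFLOW_PRICE_PER_1M_USD; // 12

// A request that consumes 10,000 raw model tokens is billed as
// 10,000 * 12 = 120,000 Voiceflow tokens.
const rawModelTokens = 10_000;
const billedVoiceflowTokens = rawModelTokens * multiplier; // 120,000
console.log({ multiplier, billedVoiceflowTokens });
```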
Additional tokens can only be purchased by customers on a paid plan (Pro, Teams, or Enterprise). Additional tokens are priced at $5 per 2 million tokens.
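As a quick illustration of the top-up price, here is a sketch that assumes tokens are purchased in 2 million token blocks (the granularity is an assumption; only the $5 per 2M rate comes from the pricing above):

```typescript
// Illustrative top-up cost, assuming purchases come in 2M-token blocks at $5 each.
const TOPUP_BLOCK_TOKENS = 2_000_000;
const TOPUP_BLOCK_PRICE_USD = 5;

function topUpCostUsd(tokensNeeded: number): number {
  const blocks = Math.ceil(tokensNeeded / TOPUP_BLOCK_TOKENS);
  return blocks * TOPUP_BLOCK_PRICE_USD;
}

console.log(topUpCostUsd(5_000_000)); // 15 (three 2M blocks cover 5M tokens)
```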
Please note that the "Max Tokens" slider on the AI steps only controls the maximum length of the output message. Depending on your agent design, a large amount of information may be sent to the AI model as part of your prompt, so even if your max tokens setting is very low, you could still use thousands of tokens per request.
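Here is a small sketch of why a low max tokens setting does not cap total usage; the variable names and token counts are hypothetical:

```typescript
// The "Max Tokens" slider caps only the response; the prompt still counts in full.
const maxTokens = 150;      // hypothetical slider value on the AI step
const promptTokens = 4_200; // e.g. knowledge base chunks + system prompt + chat history

// The model's reply is truncated to the cap, but the prompt is not.
const responseTokens = Math.min(900, maxTokens); // 150

const totalTokensUsed = promptTokens + responseTokens; // 4,350 despite the low cap
console.log(totalTokensUsed);
```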
I hope this gives you a clearer understanding of how tokens work and how they are priced. For more detailed information, you can refer to these documents: What are Tokens? and Token Top-up.