Hey, how can I upload graphic data as a source for my bot?
Is there a way to upload graphic data such as graphs and charts? Is there an API for this?
9 Replies
Unfortunately, Voiceflow does not currently support the direct upload of graphic data such as graphs and charts as a data source for your bot. The platform primarily supports text-based data sources. You can add data sources in formats such as URL(s), Plain text, Sitemap, PDF, and Docx, among others.
However, you can include images or GIFs in your voice and chat assistants using the Image Step. This allows you to display visuals in the prototyping view on the Custom Assistant channel and on actual devices. To add an image or GIF, open the Image Step and upload it into the design: drag and drop the file into the editor, or click 'Browse' to select one from your computer. Once the image is uploaded, Voiceflow generates a hosted image link for it automatically.
Please note that while you can display images and GIFs in your assistant, the AI will not be able to interpret or understand the content of these images. The AI responses are based on text-based data sources that you upload to the Knowledge Base.
For more advanced functionalities, you might want to consider integrating with third-party APIs or services that can process and interpret graphic data. However, this would require additional technical skills and resources.
fair-rose•17mo ago
You can make API calls to an external system and build a custom extension for the chat bot to visualize the data in the web chat client itself, plus a custom trace in the Voiceflow designer
So like for example the GPT Vision?
other-emerald•17mo ago
I’d love to be pointed in the right direction on how to get my head round custom extensions? Is a custom extension the same as a function? I don’t know where to start with these bad boys. Non coder obvs, but keen to learn how they work. I don’t even know how to describe them to the GPT in order to get up on them. It’s all a mystery to me. ✨
fair-rose•17mo ago
Custom extensions are pieces of code you can add to the chat interface. Custom functions are pieces of code you add to the voiceflow designer to process things.
So for example, with an extension you add new things to chat interface like the ability to have forms, google maps, a graph, etc..
Whereas with a custom function, you can make a query to the Vision API, process the data, and send that data to the chat.
So if you combine both, you are using the full power of it, because through a custom function you can trigger a custom extension on the chat interface to do something that's not natively supported 🙂
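To make the extension side of this concrete, here is a minimal sketch of a custom extension for the Voiceflow web chat widget. The trace name `ext_chart` and the `imageUrl` payload field are made-up examples, not built-in Voiceflow types; check the Voiceflow web chat extensions docs for the exact shape your widget version expects.

```javascript
// Minimal custom-extension sketch for the Voiceflow web chat widget.
// 'ext_chart' and 'imageUrl' are illustrative names, not built-in types.
const ChartExtension = {
  name: 'Chart',
  type: 'response',
  // Fire when the assistant emits a custom trace named 'ext_chart'
  match: ({ trace }) =>
    trace.type === 'ext_chart' || trace.payload?.name === 'ext_chart',
  // Render the visual directly into the chat window
  render: ({ trace, element }) => {
    const img = document.createElement('img');
    img.src = trace.payload.imageUrl; // e.g. a hosted chart image
    img.alt = 'Chart';
    element.appendChild(img);
  },
};

// Registered when loading the widget, roughly like:
// window.voiceflow.chat.load({
//   verify: { projectID: '...' },
//   assistant: { extensions: [ChartExtension] },
// });
```

The `match` function decides which traces the extension handles, and `render` gets a DOM element to draw into, which is how forms, maps, or graphs end up inside the chat itself.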
other-emerald•17mo ago
Thanks Mike. That sounds very powerful. Do you mind answering one more thing I've been wondering at all? What language are custom functions written in? Would it be easy enough for me to build one through ChatGPT? For example, can I make a custom function make an API call to an LLM? I know you have done it with your Vision API (which I've been using and is really brilliant), but is it easy enough to make it directly call other LLM models, such as individual assistants set up in OpenAI Assistants (that aren't vision)? I guess just understanding how you would make an API call using a function is more where I'm trying to get to. Appreciate this might be an answer beyond the scope of this server thread! Ha!
fair-rose•17mo ago
Well, soon there will be a video out with me building one
So this might help
And I'll be doing a lot more of them too soon
It's built using JavaScript
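To give a rough idea of what that looks like, here is a sketch of a custom function that calls an LLM over HTTP. The endpoint and payload follow OpenAI's chat completions API, but the `main` entry point, the `inputVars`/`outputVars` names, and the `success` path are assumptions about Voiceflow's function editor, so verify them against the Functions docs before relying on this.

```javascript
// Sketch of a Voiceflow custom function calling an LLM API.
// Variable names, the 'success' path, and the exact return shape are
// assumptions; the HTTP request follows OpenAI's chat completions API.

// Build the request body separately so it is easy to test and reuse.
function buildPayload(question) {
  return {
    model: 'gpt-4o', // any chat-capable model
    messages: [{ role: 'user', content: question }],
  };
}

// In Voiceflow's editor this would be `export default async function main(...)`.
async function main({ inputVars }) {
  const { question, apiKey } = inputVars;

  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildPayload(question)),
  });

  const data = await response.json();
  const answer = data.choices[0].message.content;

  return {
    // Map the LLM's reply onto a Voiceflow output variable
    outputVars: { answer },
    next: { path: 'success' },
    // Optionally also speak the answer directly in the chat
    trace: [{ type: 'text', payload: { message: answer } }],
  };
}
```

The same pattern works for any HTTP-based LLM endpoint: swap the URL, headers, and payload for the provider you want, and map whatever comes back onto your output variables.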
other-emerald•17mo ago
Awesome! Can’t wait. It’s great picking up bits off of you guys on here who know what you’re doing. Humble and grateful for any scraps from the master’s table! Thank you.
fair-rose•17mo ago
Haha, happy to help 🙂