Abdullah
Abdullah•7mo ago

Compliance Checklist for EU/Benelux Projects

What are the key compliance considerations I should be aware of when deploying Voiceflow bots for EU clients/users, particularly around data privacy, retention, and user rights? Also, for WhatsApp bots, can I put one online without mentioning somewhere that it is AI? Does the same apply to voice/webchat, and are there any other similar things I need to pay attention to?
15 Replies
Abdullah
AbdullahOP•7mo ago
No Tico? 😆 Anyone?
frenc
frenc•7mo ago
I can send you a few links where you can find the most important topics. Perhaps you already know all of them? https://gdpr-info.eu/ https://artificialintelligenceact.eu/ https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32019L0882
fascinating-indigo
fascinating-indigo•7mo ago
This is an area I'd like to know more about as well. Many European companies won't even work with you if they know you aren't GDPR-compliant. I know Voiceflow offers EU hosting, but it's only available to Enterprise users, which is a big entry barrier for many.
Abdullah
AbdullahOP•7mo ago
I'm following these rules:
1. Clearly state that it's an AI assistant before or at the start of the conversation.
2. Don't collect or store personal data without explicit user consent.
3. Add a short privacy disclaimer or link in the chat widget or first message.
4. Let users request deletion of their data (include a 'delete my data' option or fallback).
5. Log user consent and data actions in your backend.
6. Set a data retention limit (e.g., auto-delete after 30 or 90 days) and enforce it.
7. Ensure your chatbot platform stores data in the EU or with GDPR-compliant hosting.
8. Don't use WhatsApp unless the user explicitly agreed to be contacted there.
9. Use WhatsApp templates for first contact, not AI messages.
10. Don't impersonate a human; say it's an AI (required by the EU AI Act).
11. Don't ask for sensitive data (ID numbers, health info, payment) unless it is 100% secured and justified.
12. Include a fallback route to a human, or clearly state if that's not possible.
Maybe I'm missing something @idmon @frenc
For WhatsApp chatbots I'm adding this in the contact description: "You are being assisted by an AI assistant. By replying, you give consent for further communication via WhatsApp." (A rough backend sketch for the deletion, consent-log, and retention items follows below.)
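Items 4–6 above are backend concerns rather than Voiceflow settings, so here is a minimal TypeScript sketch of how they might be wired up. The `pg` Postgres client, the table and column names, and the 90-day window are illustrative assumptions, not anything Voiceflow provides.

```typescript
// Minimal consent-log and retention sketch (hypothetical backend schema, not a Voiceflow API).
import { Pool } from "pg";

const db = new Pool({ connectionString: process.env.DATABASE_URL });
const RETENTION_DAYS = 90; // adjust to the retention limit stated in your privacy notice

// Record the user's consent together with the channel and timestamp.
export async function logConsent(userId: string, channel: "whatsapp" | "webchat" | "voice") {
  await db.query(
    `INSERT INTO consent_log (user_id, channel, consented_at) VALUES ($1, $2, NOW())`,
    [userId, channel]
  );
}

// Handle a "delete my data" request: remove transcripts and consent records for this user.
export async function deleteUserData(userId: string) {
  await db.query(`DELETE FROM transcripts WHERE user_id = $1`, [userId]);
  await db.query(`DELETE FROM consent_log WHERE user_id = $1`, [userId]);
}

// Scheduled job (e.g. a daily cron) that enforces the retention limit automatically.
export async function enforceRetention() {
  await db.query(
    `DELETE FROM transcripts WHERE created_at < NOW() - INTERVAL '${RETENTION_DAYS} days'`
  );
}
```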
lovely
lovely•2mo ago
any new info on this?
W. Williams (SFT)
W. Williams (SFT)•2mo ago
@Braden (Voiceflow CEO) @Daniel (Voiceflow) 👀
Braden
Braden•2mo ago
Hey all - new info on what in particular?
Pavel ČermÔk 🦾
Will there be an option to choose the hosting location for VF? (As many of us have EU customers.) Will there be functions/guardrails to detect PII or harmful content before it reaches the LLM?
lovely
lovely•2mo ago
would love that
Braden
Braden•2mo ago
We have PII redaction today, and we also have the moderation API, but we could surface these better - right now they're Enterprise features, same as the EMEA cloud @lovely @Pavel Čermák 🦾
Pavel ČermÔk 🦾
Interesting... I haven't heard about these. Where can we read more, in terms of pricing and everything?
Braden
Braden•2mo ago
We'll look to surface PII redaction in-app. For GDPR, we have all of that on our website today, and you have the ability to not save transcripts
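For anyone wanting a rough idea of what a pre-LLM guardrail can look like while PII redaction remains an Enterprise feature, here is a hedged sketch. It is not Voiceflow's PII redaction or moderation API; the regex patterns and placeholder labels are deliberately simple assumptions, and production redaction would need a proper PII/NER service with locale-specific patterns.

```typescript
// Illustrative pre-LLM redaction pass, NOT Voiceflow's built-in PII redaction.
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"],     // email addresses
  [/\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b/g, "[IBAN]"], // IBAN-like strings
  [/\+?\d[\d\s-]{7,}\d/g, "[PHONE]"],              // loose phone-number match
];

// Replace matched PII with placeholders before the text is sent to any LLM.
export function redactPII(text: string): string {
  return PII_PATTERNS.reduce((out, [pattern, label]) => out.replace(pattern, label), text);
}

// Example:
// redactPII("Mail me at jan@example.nl or call +31 6 1234 5678")
// -> "Mail me at [EMAIL] or call [PHONE]"
```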
Abdullah
AbdullahOP•2mo ago
Hi @Braden. If I enable BYOM on Enterprise, will my custom LLM appear as a selectable option in the AI model dropdown (alongside GPT-5/Claude/Gemini etc.), so I can still use the Agent step, for example? For context, I'm exploring this open-source LLM route for clients concerned about data sovereignty. Correct me if I'm wrong here, but if I go with Llama or Mistral hosted on Azure/AWS with a private endpoint, that should solve the data issue, since everything stays in their infrastructure.
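As a side note on the data-sovereignty setup being described: calling an open-weight model served behind a private, OpenAI-compatible endpoint inside the client's own cloud network generally looks like the sketch below. The base URL, model name, and key are placeholders, and whether such an endpoint can actually be registered in Voiceflow's BYOM model dropdown is exactly the open question in this thread.

```typescript
// Sketch of calling a self-hosted open-weight model (e.g. Mistral or Llama served by vLLM)
// through an OpenAI-compatible private endpoint. All identifiers below are placeholders.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://llm.internal.client-vnet.example/v1", // private endpoint inside the VNet/VPC
  apiKey: process.env.INTERNAL_LLM_KEY ?? "unused-for-some-self-hosted-servers",
});

export async function askInternalModel(userMessage: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "mistral-7b-instruct", // placeholder deployment name
    messages: [
      { role: "system", content: "You are an AI assistant. Disclose that you are an AI when asked." },
      { role: "user", content: userMessage },
    ],
  });
  return completion.choices[0]?.message?.content ?? "";
}
```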
Braden
Braden•3w ago
Got it, yeah that makes sense @Abdullah
Abdullah
AbdullahOP•3w ago
Would it be possible to do that with the BYOM feature?
