Contact us

Are you interested in our advice, would you like to send us your application, or do you have comments?

Don't hesitate to contact us; we will get back to you quickly.

Sign up for our newsletter!

FAQs

Can the link between personal data and avatars be kept?

No, that would defeat anonymization as defined by the GDPR: the process is irreversible.

How do you differ from competitors?

Our anonymization metrics and reports, which let you demonstrate both compliance and data utility, are unique. In addition, our computation speed, together with the transparency and explainability of the method, are key differentiators. To learn more about the method: https://www.nature.com/articles/s41746-023-00771-5

Can data be anonymized as a stream?

We have already successfully completed streaming anonymization projects. The challenge is to anonymize small volumes of data while preserving maximum utility; to meet it, we developed a batch-based approach.

What infrastructure is needed for deployment?

Deployment is fully industrialized thanks to Docker and Kubernetes. Our teams can adapt to any architecture in a few hours.

Why is the avatar method compliant with the CNIL?

The CNIL successfully evaluated our anonymization method on the basis of our security and utility metrics, with respect to the three criteria defining anonymization set out by the Article 29 Working Party (Opinion 05/2014).

Why not anonymize using generative methods?

Because synthetic data is artificially generated, it might seem anonymous by default, and the ability to share a generation method rather than the data itself appears to offer an additional privacy guarantee and a paradigm shift in how data is used. However, generative models do not necessarily protect the confidentiality of their training data: they can memorize specific details of it, including the presence of particular individuals or personal information, and reproduce that information in the synthetic data they generate.

This type of privacy breach is exploited by a membership inference attack, in which an attacker tries to determine whether a specific person's data was used to train a machine learning model. This can lead to serious privacy violations, especially with sensitive data.
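To illustrate the risk described above, here is a minimal, purely hypothetical sketch of a membership inference attack against a deliberately naive "generative model" that memorizes its training data. All names, distributions, and thresholds are assumptions made for the example; this is not our method or any real product's API.

```python
import random

random.seed(0)

# Hypothetical sensitive attribute (e.g. ages): records seen in training
# vs. records never seen by the model.
members = [random.gauss(50, 10) for _ in range(100)]
non_members = [random.gauss(50, 10) for _ in range(100)]

def naive_generator(train, n):
    # A deliberately bad "generative model": it resamples training records
    # with tiny noise, so individual records leak into the synthetic data.
    return [random.choice(train) + random.gauss(0, 0.01) for _ in range(n)]

synthetic = naive_generator(members, 1000)

def infer_membership(record, synth, threshold=0.05):
    # Attacker's rule: guess "member" if the candidate record lies very
    # close to any synthetic record. The threshold is arbitrary.
    return any(abs(record - s) < threshold for s in synth)

hits_members = sum(infer_membership(r, synthetic) for r in members)
hits_non = sum(infer_membership(r, synthetic) for r in non_members)
print(hits_members, hits_non)  # members are matched far more often
```

Under these assumptions the attacker correctly flags nearly all training records while matching few outsiders, showing how memorization in a generative model can reveal who was in the training set.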