FAQS
No. That would defeat the purpose of anonymization as defined by the GDPR: anonymization is, by definition, an irreversible process.
Our anonymization metrics and reports, which let you demonstrate both compliance and data utility, are unique. Our calculation speed, together with the transparency and explainability of the method, are further differentiators. To learn more about the method: https://www.nature.com/articles/s41746-023-00771-5
We have already successfully completed data-stream anonymization projects. The challenge is to anonymize small volumes of data while preserving maximum utility; to meet it, we developed a batch approach.
Deployment is fully industrialized thanks to Docker and Kubernetes. Our teams adapt to any architecture within a few hours.
The CNIL successfully evaluated our anonymization method on the basis of our security and utility metrics, which address the three criteria for anonymization (singling out, linkability, inference) set out by the Article 29 Working Party in its Opinion 05/2014 on anonymisation techniques.
The fact that synthetic data is artificially generated could suggest that it is anonymous by default. Sharing the generation method rather than the data itself also seems to offer an additional privacy guarantee, and a paradigm shift in how data is used. However, generative models do not necessarily guarantee the confidentiality of their training data: they can memorize specific details of that data, including the presence of particular individuals or their personal information, and reproduce it in the synthetic data they generate. This kind of leak is exploited by a membership inference attack, in which an attacker tries to determine whether a specific person's data was used to train a machine learning model. This can lead to serious privacy breaches, especially with sensitive data.
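To make the risk concrete, here is a minimal, illustrative sketch of one simple form of membership inference against synthetic data. It is not our method or any specific attack from the literature: the toy "leaky generator" and all names are assumptions, chosen only to show how memorized training records can betray membership through their distance to synthetic records.

```python
# Distance-based membership inference sketch (illustrative only).
# If a generative model memorizes training records, synthetic records
# can lie suspiciously close to them; an attacker exploits that.
import numpy as np

rng = np.random.default_rng(0)

train = rng.normal(size=(100, 4))    # records used to fit the model
holdout = rng.normal(size=(100, 4))  # records the model never saw

# Toy "leaky generator": copies training records with tiny noise,
# mimicking a model that memorized its training data.
synthetic = train + rng.normal(scale=0.01, size=train.shape)

def min_dist(records, synth):
    # Distance from each candidate record to its nearest synthetic record.
    d = np.linalg.norm(records[:, None, :] - synth[None, :, :], axis=-1)
    return d.min(axis=1)

# Attack: guess "member" when the nearest synthetic record is very close.
threshold = 0.1
members_flagged = (min_dist(train, synthetic) < threshold).mean()
nonmembers_flagged = (min_dist(holdout, synthetic) < threshold).mean()
print(members_flagged, nonmembers_flagged)
```

In this toy setup, almost all true training records are flagged while held-out records almost never are: the gap between the two rates is exactly the privacy leak. Anonymization metrics like ours aim to verify that such a gap does not exist in the data actually released.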