Privacy can be understood as the desire to protect the confidentiality of data related to an individual, but I believe it is somewhat broader: empowering users so that they know how their data is intended to be used, helping them understand the risks of disclosing information, and ultimately allowing them to make their own choices. It is more about giving people the knowledge to perform their own risk analysis and the ability to choose.
There is therefore a subtle equilibrium between raising awareness and offering technical solutions.
Yet, to stay on the safe side, we could adopt a conservative strategy: disclose nothing, because nobody can be trusted. There are many arguments in favor of this position. Press headlines are full of stories of data that have been misused, sold, lost, stolen, and so on. As the French motto says, pour vivre heureux, vivons cachés (to live happily, live hidden).
However, the mass of data created every second by users of technology could help address society's challenges. How can we achieve this without violating individuals' privacy? How can we monitor responses to treatments without disclosing patients' identities? How can we understand urban mobility without tracking users? Examples abound.
In this work, I am interested in designing an application that would let organizations (companies, research institutes, health agencies, NGOs, …) query a population of users. Users would receive requests on a device (phone, computer, online service, …) and be asked to choose a strategy: answer truthfully, ignore the request, or fake an answer. The challenge in this scenario lies in controlling the amount of noise that comes from missing and false answers. If we assume that the population of users is eager to participate and trusts the service (though not necessarily the data requester), we can present them with a pre-made risk analysis that helps them choose, the entire process being fully distributed.
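One classical way to keep the noise from faked answers under control is randomized response, which I use here purely as an illustration (the text above does not commit to this mechanism, and the function names are hypothetical). The sketch below assumes each user answers a yes/no question truthfully with a known probability and lies otherwise; because the lying rate is known, the aggregator can debias the observed proportion without ever learning any individual's true answer.

```python
import random

def respond(truth: bool, p_truth: float = 0.75) -> bool:
    """A user's local strategy: answer truthfully with probability
    p_truth, otherwise report the opposite (a 'fake' answer)."""
    return truth if random.random() < p_truth else not truth

def estimate_true_rate(responses, p_truth: float = 0.75) -> float:
    """Aggregator-side debiasing.
    Observed 'yes' rate obs = p*pi + (1-p)*(1-pi), where pi is the
    true rate, so pi = (obs - (1-p)) / (2p - 1)."""
    obs = sum(responses) / len(responses)
    return (obs - (1 - p_truth)) / (2 * p_truth - 1)

# Simulation: 30% of a large population holds the sensitive attribute.
random.seed(0)
population = [random.random() < 0.3 for _ in range(100_000)]
answers = [respond(t) for t in population]
print(round(estimate_true_rate(answers), 2))  # close to 0.30
```

The design choice this illustrates is exactly the equilibrium mentioned earlier: each user's individual report is deniable, yet the aggregate statistic remains useful, and the amount of noise is a parameter the user (or the pre-made risk analysis) can tune.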
My current work consists in identifying the models and limits of such systems.