Over the past few decades, computer scientists and statisticians have developed tools to achieve the dual goals of protecting individuals' private data and permitting beneficial analysis of that data. Examples include techniques and standards such as blind signatures, k-anonymity, differential privacy, and federated learning. We refer to such approaches as privacy-preserving analytics, or PPAs. The privacy research community has grown increasingly interested in these tools. Their deployment, however, has been met with controversy. The U.S. Census Bureau, for instance, has faced a lawsuit over its differentially private disclosure avoidance system; opposition to the new privacy plans garnered support from both politicians and civil rights groups.3,11
In theory, PPAs offer a compromise between user privacy and statistical utility by helping researchers and organizations navigate the trade-offs between disclosure risk and data utility. In practice, the effects of these techniques are complex, obfuscated, and largely untested. While interest in PPAs has been growing, particularly among computer scientists and statisticians, there is a need for complementary social science research on the downstream impacts of these tools13—that is, the concrete ripple effects these technologies may have on individuals and societies. In this Viewpoint, we advocate for an interdisciplinary, empirically grounded research agenda on PPAs that connects social and computer scientists.
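To make that trade-off concrete, the minimal sketch below applies the Laplace mechanism, one standard differential-privacy primitive, to a single hypothetical count. The count, the epsilon values, and the laplace_count helper are illustrative assumptions for this Viewpoint, not the Census Bureau's actual disclosure avoidance system.

```python
import numpy as np

def laplace_count(true_count, epsilon, sensitivity=1.0, rng=None):
    """Release a count with Laplace noise calibrated to epsilon and sensitivity.

    Smaller epsilon -> larger noise scale -> stronger privacy, lower accuracy.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_count + rng.laplace(loc=0.0, scale=scale)

# Hypothetical count of respondents in a single census block.
true_count = 1_340

# Repeat the noisy release many times to estimate the expected error
# at each privacy level.
for epsilon in (0.1, 1.0, 10.0):
    noisy = [laplace_count(true_count, epsilon) for _ in range(1_000)]
    mean_abs_error = np.mean([abs(n - true_count) for n in noisy])
    print(f"epsilon={epsilon}: mean absolute error ~ {mean_abs_error:.1f}")
```

The sketch illustrates the core tension the Viewpoint describes: tightening the privacy parameter protects individuals more strongly but makes the published statistic noisier, and the downstream consequences of that noise for data users are exactly the kind of empirical question that social science research can address.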