Big data is all the rage; using large datasets promises to give us new insights into questions that have been difficult or impossible to answer in the past. This is especially true in fields such as medicine and the social sciences, where large amounts of data can be gathered and mined to find insightful relationships among variables. Data in such fields involves humans, however, and thus raises issues of privacy that are not faced by fields such as physics or astronomy.
Such privacy issues become more pronounced when researchers try to share their data with others. Data sharing is a core feature of big-data science, allowing others to verify research that has been done and to pursue lines of inquiry the original researchers may not have attempted. But sharing data about human subjects triggers a number of regulatory regimes designed to protect the privacy of those subjects. Sharing medical data, for example, requires adherence to HIPAA (the Health Insurance Portability and Accountability Act); sharing educational data triggers the requirements of FERPA (the Family Educational Rights and Privacy Act). These laws require that, to be shared generally, the data be de-identified or anonymized (for the purposes of this article, the terms are interchangeable). While FERPA and HIPAA define de-identification slightly differently, the core idea is that if certain values are removed from a dataset, the individuals whose data is in the set cannot be identified, and their privacy is thereby preserved.
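To make the idea concrete, here is a minimal sketch of what removing identifying values might look like in practice. It assumes a tabular dataset handled in Python with pandas; the column names and the identifier list are hypothetical illustrations, not the actual fields enumerated by either law.

```python
import pandas as pd

# Hypothetical list of direct identifiers to strip before sharing.
# Real regimes are broader: HIPAA's Safe Harbor provision, for
# instance, enumerates 18 categories of identifiers.
DIRECT_IDENTIFIERS = ["name", "ssn", "email", "phone", "street_address"]

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of df with direct-identifier columns removed."""
    return df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])

records = pd.DataFrame([
    {"name": "Alice", "ssn": "123-45-6789", "zip": "02139", "diagnosis": "flu"},
    {"name": "Bob",   "ssn": "987-65-4321", "zip": "02139", "diagnosis": "asthma"},
])

shared = deidentify(records)
print(shared)  # only the 'zip' and 'diagnosis' columns remain
```

This is only an illustration of the concept; an actual de-identification pipeline under HIPAA or FERPA must cover the full set of identifiers each law specifies, not just an ad hoc list of column names.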