News

Two methods to de-identify large patient datasets greatly reduced risk of re-identification


Two de-identification methods, k-anonymization and the addition of a “fuzzy factor,” considerably reduced the risk of re-identification of patients in a dataset of more than 5 million records from a large cervical cancer screening program in Norway.

The study was published in Cancer Epidemiology, Biomarkers & Prevention, a journal of the American Association for Cancer Research, by Giske Ursin, MD, PhD, director of the Cancer Registry of Norway, Institute of Population-based Cancer Research.

“Researchers typically get access to de-identified data, that is, data without any personal identifying information, such as names, addresses, and Social Security numbers. However, this may not be sufficient to protect the privacy of individuals participating in a research study,” said Ursin.

Patient datasets often contain sensitive data, such as information about an individual’s health and disease diagnosis that a person may not want to share publicly, and data custodians are responsible for safeguarding such information, Ursin added. “People who have the permission to access such datasets have to abide by the laws and ethical guidelines, but there is always this concern that the data might fall into the wrong hands and be misused,” she added. “As a data custodian, that’s my worst nightmare.”

To test the strength of their de-identification approach, Ursin and colleagues used screening data containing 5,693,582 records from 911,510 women in the Norwegian Cervical Cancer Screening Program. The data included patients’ dates of birth, cervical screening dates and results, names of the labs that ran the tests, subsequent cancer diagnoses, if any, and date of death, if deceased.

The researchers used a tool called ARX to evaluate the risk of re-identification by approaching the dataset using a “prosecutor scenario,” in which the tool assumes the attacker knows that some data about an individual are in the dataset. An attack is considered successful if a large proportion of individuals in the dataset could be re-identified by someone who had access to some of the information about those individuals.
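Under the prosecutor model, a record’s re-identification risk is commonly taken to be one divided by the size of its “equivalence class,” the group of records sharing the same quasi-identifier values; records in a class of size one are unique. The following is a rough illustration of that metric (a minimal sketch, not ARX’s actual implementation, with hypothetical field names):

```python
from collections import Counter

def prosecutor_risk(records, quasi_identifiers):
    # Group records into equivalence classes on the quasi-identifier values.
    keys = [tuple(rec[q] for q in quasi_identifiers) for rec in records]
    class_sizes = Counter(keys)
    # A record's risk is 1 / (size of its equivalence class).
    risks = [1.0 / class_sizes[k] for k in keys]
    average_risk = sum(risks) / len(risks)
    # Records in a class of size 1 are unique, hence at highest risk.
    unique_share = sum(1 for k in keys if class_sizes[k] == 1) / len(keys)
    return average_risk, unique_share

# Hypothetical toy records keyed on date-based quasi-identifiers.
records = [
    {"birth_month": "1970-03", "screening_month": "2001-05"},
    {"birth_month": "1970-03", "screening_month": "2001-05"},
    {"birth_month": "1962-11", "screening_month": "1998-09"},
]
print(prosecutor_risk(records, ["birth_month", "screening_month"]))
# -> average risk of about 0.67, and one third of the records unique
```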

The team assessed the re-identification risk in three different ways: First, they used the original data to create a realistic dataset that contained all of the abovementioned patient information (D1). Next, they “k-anonymized” the data by changing all of the dates in the records to the 15th of the month (D2). Third, they fuzzied the data by adding a random factor of between -4 and +4 months (except zero) to every month in the dataset (D3).


By adding a fuzzy factor to each patient’s records, the months of birth, screening, and other events are changed; however, the intervals between the procedures and the sequence of the procedures are retained, which ensures that the dataset is still usable for research purposes.
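For illustration, here is a minimal Python sketch of the two date transformations, assuming (as the preserved intervals described above imply) that one random offset is drawn per patient and applied to all of that patient’s dates; the function names are hypothetical:

```python
import random
from datetime import date

def k_anonymize_date(d):
    """Generalize a date by moving it to the 15th of its month (the D2 step)."""
    return d.replace(day=15)

def shift_months(d, offset):
    """Shift a date by a whole number of months, keeping the day at the 15th."""
    total = d.year * 12 + (d.month - 1) + offset
    return date(total // 12, total % 12 + 1, 15)

def fuzzy_record(dates):
    """Apply one random offset of -4..+4 months (never 0) to all of a patient's
    dates, so intervals and event order within the record are preserved (D3)."""
    offset = random.choice([x for x in range(-4, 5) if x != 0])
    return [shift_months(k_anonymize_date(d), offset) for d in dates]

# Hypothetical patient: a birth date followed by two screening dates.
patient_dates = [date(1970, 3, 22), date(2001, 5, 3), date(2004, 6, 17)]
print(fuzzy_record(patient_dates))
```

Because every date in a record is shifted by the same offset, the gaps between events (here, the 31 years to the first screening and the 37 months between screenings) come through unchanged.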

“We found that changing the dates using the standard procedure of k-anonymization drastically reduced the chances of re-identifying most individuals in the dataset,” Ursin noted.

In D1, the average risk of a prosecutor identifying an individual was 97.1 percent. More than 94 percent of the records were unique, and therefore these patients ran the risk of being re-identified. In D2, the average risk of a prosecutor identifying an individual dropped to 9.7 percent; however, 6 percent of the records were still unique and ran the risk of being re-identified. Adding a fuzzy factor, in D3, did not lower the risk of re-identification further: The average risk of a prosecutor identifying an individual was 9.8 percent, and 6 percent of the records ran the risk of being re-identified.

This meant that there were as many unique records in D3 as in D2. However, scrambling the months of all records in a dataset by adding a fuzzy factor makes it harder for a prosecutor to link a record from this dataset to the records in other datasets and re-identify an individual, Ursin explained.

“Every time a research group requests permission to access a dataset, data custodians should ask the question, ‘What information do they really need and what are the details that are not required to answer their research question,’ and make every effort to collapse and fuzzy the data to ensure protection of patients’ privacy,” Ursin said.

Patient data are generally very well safeguarded and re-identification is not yet a major threat, Ursin added. “However, given the recent trend in sharing data and combining datasets for big-data analyses—which is a good development—there is always a chance of information falling into the hands of someone with malicious intent. Data custodians are, therefore, rightly concerned about potential future challenges and continue to test preventive measures.”

According to Ursin, the main limitation of the study is that the approaches to anonymizing data in this study are specific to the dataset used; such approaches are unique for each dataset and must be designed based on the nature of the data.

Ursin declares no conflicts of interest.


Explore further:
Current performance measures for cervical cancer screening promote overscreening


