    Article 29 Working Party Issues Opinion on Anonymization Techniques

    The EU’s Article 29 Working Party on the Protection of Individuals with regard to the Processing of Personal Data has released “Opinion 05/2014 on Anonymisation Techniques” (Working Party pdf; archive pdf). We’ve discussed the pitfalls of various anonymization or “de-identification” techniques and how supposedly anonymized information can be “deanonymized” or re-identified, leading to privacy problems for individuals. Here’s an excerpt from the executive summary of the Working Party’s opinion:

    The WP acknowledges the potential value of anonymisation in particular as a strategy to reap the benefits of ‘open data’ for individuals and society at large whilst mitigating the risks for the individuals concerned. However, case studies and research publications have shown how difficult it is to create a truly anonymous dataset whilst retaining as much of the underlying information as required for the task.

    In the light of Directive 95/46/EC and other relevant EU legal instruments, anonymisation results from processing personal data in order to irreversibly prevent identification. In doing so, several elements should be taken into account by data controllers, having regard to all the means “likely reasonably” to be used for identification (either by the controller or by any third party).

    Anonymisation constitutes a further processing of personal data; as such, it must satisfy the requirement of compatibility by having regard to the legal grounds and circumstances of the further processing. Additionally, anonymised data do fall out of the scope of data protection legislation, but data subjects may still be entitled to protection under other provisions (such as those protecting confidentiality of communications).

    The main anonymisation techniques, namely randomization and generalization, are described in this opinion. In particular, the opinion discusses noise addition, permutation, differential privacy, aggregation, k-anonymity, l-diversity and t-closeness. It explains their principles, their strengths and weaknesses, as well as the common mistakes and failures related to the use of each technique. [...]

    The Opinion concludes that anonymisation techniques can provide privacy guarantees and may be used to generate efficient anonymisation processes, but only if their application is engineered appropriately – which means that the prerequisites (context) and the objective(s) of the anonymisation process must be clearly set out in order to achieve the targeted anonymisation while producing some useful data.  [...]

    Finally, data controllers should consider that an anonymised dataset can still present residual risks to data subjects.
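
    For readers who want a more concrete sense of the techniques the opinion names, here is a minimal, illustrative Python sketch (our own, not part of the opinion) of two of them: a k-anonymity check over a set of quasi-identifiers, and noise addition with Laplace noise in the style of differential privacy. The column names and parameter values below are hypothetical.

        import random
        from collections import Counter

        def is_k_anonymous(records, quasi_identifiers, k):
            # k-anonymity holds when every combination of quasi-identifier values
            # is shared by at least k records, so no individual stands out.
            groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
            return all(count >= k for count in groups.values())

        def add_laplace_noise(value, sensitivity, epsilon):
            # Noise addition in the differential-privacy style: perturb a numeric
            # aggregate with Laplace noise scaled to sensitivity / epsilon.
            scale = sensitivity / epsilon
            # The difference of two exponential draws with mean `scale` is Laplace-distributed.
            return value + random.expovariate(1 / scale) - random.expovariate(1 / scale)

        # Hypothetical records with age bracket and postcode prefix as quasi-identifiers.
        records = [
            {"age_bracket": "30-39", "postcode_prefix": "75", "diagnosis": "A"},
            {"age_bracket": "30-39", "postcode_prefix": "75", "diagnosis": "B"},
            {"age_bracket": "40-49", "postcode_prefix": "69", "diagnosis": "C"},
            {"age_bracket": "40-49", "postcode_prefix": "69", "diagnosis": "A"},
        ]
        print(is_k_anonymous(records, ["age_bracket", "postcode_prefix"], k=2))  # True
        print(add_laplace_noise(len(records), sensitivity=1, epsilon=0.5))       # noisy count

    Even with checks like these in place, the opinion’s closing caution stands: residual re-identification risk never drops to zero, and controllers should reassess it over time.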
