Ethical Privacy Guidelines for Mobile Connectivity Measurements is the first item on the C3S reading list; below are my brief notes on this November 2013 report by the Oxford Internet Institute.
The stated purpose of this report is to inform networking researchers about the best practices for preserving data subject privacy when performing active measurement of mobile networks.
Researchers must make a compromise between the privacy of data subjects and the dissemination of research artefacts for reproducibility. To aid in reasoning about this compromise, the report presents a risk assessment format covering: the contributions of the research, the risk of re-identification, the impact of re-identification, unforeseen risks (such as data theft), methods of disseminating artefacts, and informed consent and transparency. The report goes on to discuss a few legal implications, in particular the ongoing debate on whether IP addresses and communication metadata are personally identifiable information.
The authors focus on a guiding principle: collect the minimal data needed to conduct the stated research. Data should not be used for a secondary purpose unless this was explained at the consent stage; this includes open dissemination of the collected data. The report suggests several methods of fuzzing the data, including: perturbation, truncation, permutation, quantisation, pseudonymization, k-anonymity and differential privacy.
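To make two of these fuzzing methods concrete, here is a minimal sketch (my own illustration, not code from the report) of truncation and pseudonymization applied to IPv4 addresses; the function names and the keyed-hash construction are my assumptions, not something the report prescribes:

```python
import hashlib
import hmac

def truncate_ip(ip: str) -> str:
    """Truncation: zero the host octet so an address only reveals its /24 network."""
    octets = ip.split(".")
    octets[-1] = "0"
    return ".".join(octets)

def pseudonymize(identifier: str, key: bytes) -> str:
    """Pseudonymization: replace an identifier with a keyed hash, so records for
    the same subject stay linkable without exposing the original value."""
    return hmac.new(key, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# Two records from the same address remain linkable under the same study key,
# while the raw address never appears in the released dataset.
key = b"study-specific secret, destroyed when the study ends"
print(truncate_ip("192.0.2.147"))  # -> 192.0.2.0
print(pseudonymize("192.0.2.147", key) == pseudonymize("192.0.2.147", key))  # -> True
```

Note the trade-off the report's risk-assessment format asks you to reason about: truncation destroys linkability but is irreversible, whereas pseudonymization preserves linkability at the cost of a key that must be protected (or destroyed).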
Overall, I would recommend the report to someone new to the domain of data privacy, as it's a nice introduction to the topic. The authors raise awareness of the necessary compromise between reproducible research and data privacy, though they do not provide concrete advice to researchers on how to make the best compromise (other than telling them to be conservative). The report claims to focus on active mobile measurements, but in practice its contribution is much more general than this. I would love to see this report augmented with real-world examples of measurement studies that have been conducted, the compromise between reproducible research and data privacy that was chosen, and how it was chosen.
Here is a draft copy of the A1 poster I’ll be presenting at the 2nd Annual Oxbridge Women in Computer Science Conference in Oxford. The poster abstract is in a previous post. Any feedback would be greatly appreciated.
version 1 (9:20 11/3)
version 2 (10:50 11/3)
now with left alignment of text on the left and right alignment of text on the right, gateway text on black router removed
version 3 (11:00 11/3)
now with bolded keywords
final version (11:23 11/3)
new text for the aim box
Below are my draft slides for next week’s talk at the 2nd Annual Oxbridge Women in Computer Science Conference in Oxford. The talk abstract is in a previous post. Any feedback would be greatly appreciated (note that the speaker notes are WIP).
Last week we released an open-access preprint of our first paper on the Databox on arXiv, titled “Personal Data: Thinking Inside the Box“. Despite not being published in a peer-reviewed venue, the response has been greater than we expected. Most notably, we were featured in the Guardian, a British newspaper known for its pro-privacy and anti-government-surveillance views, as well as the MIT Technology Review and Treasury Insider.
- Time to start thinking inside the box? Image By Husky [Public domain], via Wikimedia Commons
In the paper, we propose that there is a need for a technical platform enabling people to engage with the collection, management and consumption of personal data; and that this platform should itself be personal, under the direct control of the individual whose data it holds. Our solution is the Databox, a personal, networked service that collates personal data and can be used to make those data available.
The paper is an accessible read and does not cover any technical details; instead it's a brief overview of the problem space and its challenges. We are currently preparing the paper for submission, so your thoughts and ideas are more welcome than ever.
A huge thanks to my amazing co-authors Hamed Haddadi (@realhamed), Amir Chaudhry (@amirmc), Jon Crowcroft, Anil Madhavapeddy and Richard Mortier.
“Can You Engineer Privacy?”, featured in the August 2014 CACM, has one of the best opening paragraphs I have seen. Following this strong start, the article articulately introduces some of the challenges and areas of active research in privacy engineering. The article does an excellent job of presenting a cross-disciplinary overview, though the lack of references (the typical style of CACM articles) can leave you guessing which specific works the article was referring to.
The article introduces data minimization, a concept that ignores the fact that many companies' business models rely on collecting, using (e.g. for targeted ads) and selling data in order to provide online services that are free at the point of use, such as Facebook and Google, which clearly people want.
Personal data is an asset that each individual owns. Many people want to exchange their personal data for services; our job as a community is to enable them and provide viable alternatives instead of blocking them.
“Can You Engineer Privacy?” is worth reading if you're new to privacy research and is refreshingly articulate; it's available over at the CACM.