
The ethics of mobile data collection

The mobile computing and networking research communities need to start paying closer attention to the data collection practices of researchers in our field. Now that it's easy to write mobile apps that collect data from real users, I'm going to argue that computer science publication venues should start requiring authors to document whether they have IRB approval for studies involving human subjects, and how informed consent was obtained from the study participants. This documentation requirement is standard in the medical and social science communities, and it makes sense for computer science conferences and journals to do the same. Otherwise I fear we run the risk of accepting papers that have collected data unethically, hence rewarding researchers for not adequately protecting the privacy of the study participants.

I am often asked to review papers in which the authors have deployed a mobile phone app that collects data about the app's users. In some cases, these apps are overtly used for data collection and the users of the app are told how this data will be collected and used. But I have read a number of papers in which data collection has been embedded into apps that have some other purpose -- such as games or photo sharing. The goal, of course, is to get a lot of people to install the app, which is great for getting lots of "real world" data for a research paper. In some cases, I have downloaded the app in question and installed it, only to discover that the app never informs the user that it is collecting sensitive data in the background.

The problem is, such practices are unethical (and possibly illegal) according to federal requirements for protecting the privacy of human subjects in a research study. Even if there is some fine print in the app about the use of data for a research study, it's not clear to me that in all cases the researchers have actually gone through the federally-mandated Institutional Review Board approval process to collect this data.

Unfortunately, not many computer scientists seem to be familiar with the IRB approval requirement for studies involving human subjects. Our field is pretty lax about this, but I think it's time we started taking human subjects approval more seriously.

It is now dead simple to develop mobile apps that collect all kinds of data about their users. On the Android platform, an app can collect data such as the device's GPS location; which other apps are running and how much network traffic they use; what type of wireless network the device is using; the device manufacturer, model, and OS version; which cellular carrier the device uses; the device's battery level; and the current cell tower ID. Similar provisions exist on iOS and other mobile operating systems. With rooted devices, it's possible to collect even more information, such as a complete network packet trace and complete information on which websites and apps have been used.
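To make the point concrete, here is a hypothetical AndroidManifest.xml excerpt (not taken from any real study app) showing roughly the permissions an app would declare to collect the kinds of data listed above. Note that some items -- device manufacturer, model, OS version, and battery level -- require no permission at all.

```xml
<!-- Hypothetical manifest excerpt: permissions for the data sources above.
     The package name is a placeholder. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="edu.example.studyapp">
    <!-- GPS location -->
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <!-- Network type (Wi-Fi vs. cellular) and Wi-Fi details -->
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
    <!-- Cellular carrier and cell tower ID -->
    <uses-permission android:name="android.permission.READ_PHONE_STATE" />
    <!-- Uploading the collected data to a server -->
    <uses-permission android:name="android.permission.INTERNET" />
</manifest>
```

Users see this permission list once at install time; nothing requires the app to explain that the data feeds a research study, which is exactly the gap discussed here.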

Put together, this data can yield a rich picture of the usage patterns, mobility, and network performance experienced by a mobile user. It is very tempting for researchers to exploit this capability, and it's easy to get thousands of people to install your app by releasing it on Google Play or the Apple App Store. However, I have very little confidence that most researchers are adhering to legal and ethical guidelines for collecting such data -- I bet the typical scenario is that the data ends up being logged to an unsecured computer under some grad student's desk.

So, what is an IRB? In the US and many other countries, any institution that receives federal funding must ensure that research studies involving human subjects protect the rights and privacy of the participants in such studies. This is accomplished through Institutional Review Board review, which must occur before the study takes place. The purpose of the IRB is to ensure that the study meets certain guidelines for protecting the privacy of the study participants. The Stanford IRB Website has some good background about the purpose of IRB approval and what the process is like. The principles underpinning IRB review were set forth in the Declaration of Helsinki, which has been the basis for many countries' laws regarding protection of human subjects.

Failing to get IRB approval for a research study is serious business. In the medical and social science communities, failing to get IRB approval is tantamount to faking data or plagiarism. The Retraction Watch blog has a long list of cases in which published articles have been retracted due to lack of IRB approval. In those fields, this kind of forced retraction can destroy an academic's career.

Documenting IRB approval and informed consent for study participants is becoming standard practice in the medical and social science communities. For example, the submission guidelines to the Annals of Internal Medicine require an explicit statement from authors regarding IRB approval:
"The authors must confirm review of the study by the appropriate institutional review board or affirm that the protocol is consistent with the principles of the Declaration of Helsinki (see World Medical Association). If the authors did not obtain institutional review board approval before the start of the study, they should so state and explain the circumstances. If the study was exempt from review, the authors must state that such exemption complied with the policy of their local institutional review board. They should affirm that study participants gave their informed consent or state that an institutional review board approved conduct of the research without explicit consent from the participants. If patients are identifiable from illustrations, photographs, pedigrees, case reports, or other study data, the authors must submit the release form for each such individual (or copies of the figures with the appropriate release statement) giving permission for publication with the manuscript. Consult the Research section of the American College of Physicians Ethics Manual for further information."

And yet, in computer science, we tend not to take this process very seriously. I suspect most computer scientists have never heard of, or dealt with, their institution's IRB. I was surprised to see that CHI, the top conference in the area of human-computer interaction (in which user studies are commonplace), says nothing in its call for papers about requiring IRB approval disclosure for human subjects studies -- perhaps the practice of obtaining IRB approval is already widespread in that community, though I doubt it.

Why do I think we should require authors to document IRB approval? For two reasons. First, to raise awareness of this issue and ensure that authors are aware of their obligations before they submit a paper to such venues. Second, to prevent paper reviewers from having to make a judgment call when a paper is unclear on whether and how a study protects its participants. The whole point of an IRB is to front-load the approval process before the research study even begins, well before a paper gets submitted. The nature of a research project may well change depending on the IRB's requirements for protecting user privacy.

To give an example of how this can be done properly, colleagues of mine at University of Michigan and University of Washington are developing a mobile app for collecting network performance data, called MobiPerf. The PIs have IRB approval for this study and the app clearly informs the users that the data will be collected for a research study when the app first starts; clicking "No thanks" immediately exits the app. Furthermore, there is a fairly detailed privacy statement and EULA on the app's website, explaining exactly what data is collected. It's true that going through these steps required more effort on the part of the researchers, but it's not just a good idea -- it's the law.

This is my personal blog. The views expressed here are mine alone and not those of my employer.


  1. Easy solution to this. Just copy the privacy policy of any large company (Google, Facebook, Microsoft, ...); find and replace company name with university name; hide the link in the app's about page. Done. This way researchers end up being no more or no less ethical than, well, everyone else.

  2. I'm afraid that "easy solution" does not work. Academic institutions that receive federal research funding are required to have an IRB and any human subjects research conducted at that institution must get IRB approval. Slapping a privacy policy onto an app does not satisfy this requirement, although that is one thing that an IRB might ask for. There are also requirements having to do with what kind of data is collected, how the data is anonymized and so forth.

  3. To be pedantic, the details are a bit more... detailed. Common Rule derived requirements apply to all federally funded research... it's not limited to academic institutions. However, academic institutions typically extend this protection (voluntarily) to all research (e.g., including industrially-funded work). Astute readers will note that commercial entities have no such limitations, and indeed there is research that goes on in industry that would be challenging to get through a university IRB. Two other things to note are that there is no universal standard of when the human subjects condition is triggered (i.e., are IP addresses human subjects?), of what controls are appropriate for what situations, or of IRB ethics in general (aside from in the most general sense as outlined by the Belmont Report goals). Indeed, it is a commonly held misconception that human subjects must provide informed consent. This is not true, and there are many IRB-approved studies that involve deception (precisely because it is critical to the science and there are controls to balance the good vs. potential harm). In one famous paper (co-written by the head of an IRB committee) on designing phishing experiments, a case was advanced (and held sway with the IRB) that it caused less harm to never inform subjects that they had been phished as part of an experiment.

    It's important to remember that adjudicating an absolute standard of ethics is not the purpose of the IRB. The purpose of the IRB is to provide independent oversight. The development of community ethics is a parallel process that evolves at an independent rate and non-uniformly.

    There are a number of problems with requiring IRB approval for paper submission, however. One is that not all research institutions have IRBs. As I mentioned, this is not a standard feature in industry (and while a given company may have very stringent review against internal ethical principles, it is perhaps a bit harder to assure the level of independence that is intended). Second, the IRB structure is not one that exists in much of the rest of the world, so many researchers (in Europe for example) would not have an organization to avail themselves of (some use legal review, but that is quite a different standard in my own experience). Yet another problem is the considerable difference _between_ IRBs about what constitutes human subjects research. There are places where it is clear and places where it is not, and differences abound. So now the PC has to second guess if the author needed a formal IRB proposal or not (for example, at UC we have a blanket exemption for survey research, but other institutions do not). Finally, I think that the research community is torn and conflicted about ethical issues and would like to turn to IRBs as a solution. However, as I said before, IRBs do not have a prima facie claim on what is ethical (particularly in fields that are changing quickly) and ultimately will rely, in part, on feedback from the research community to build their precedents. For this reason, in addition to the others, I think ultimately the community cannot just defer to IRBs, but needs to render its own judgement. If an author violated his or her university rules, this is something for the university to deal with. If an author violates community standards (evolving as they are), this seems like a more appropriate place for PCs. I understand that this is ugly, uncomfortable, and will not be equitably applied (but this is no different from the IRB process across institutions).
However, I think research communities have a critical role to play and reviewers can't escape having to make judgement calls.

  4. Stefan: I think your argument is that the IRB does not supplant the need for community standards of what constitutes ethical data collection. I agree with this, although I think requiring IRB approval is a good first step in this direction. (At least, much better than our current situation, which is pretty much a free-for-all.)

    If you look at the requirement from _Annals_ you'll see they make it clear that authors have to state when they did not require IRB approval for a study and why. They also say that it's fine not to have informed consent as long as the IRB signed off on this. I think this covers all of the cases you're concerned about above: It does not mean that IRB approval is *required* for all papers -- it just says that authors have to document what approvals they have, if any, and if exceptions were granted. I think that's pretty reasonable.

  5. Matt: I thought it was pretty standard that you get human subjects training before you can start your job as a researcher at an institution (not for students, but when you could be a PI or senior personnel on a grant). This was at least the case for me at UCLA, UMich, and Utah. I had to take an online course at each one of these institutions before I could start. I even believe that I have to retake the certification every couple of years.

  6. Great post! wrt the CHI conference, at least from my experience, it is both widespread and assumed that any human subjects research goes through IRB before being submitted to CHI (if it comes from an academic institution). If a paper appears to have not done so (e.g. a study of Facebook that violated its TOS), it should not make it past the Program Committee process.

  7. Hi Matt,

    Thanks for a thoughtful and important post. A few quick thoughts:

    1. I don't think the picture is quite as bleak as you paint, at least in certain circles. For example, the IMC call for papers from 2012 alludes to conformance to ethics. Granted, it could be more explicit about IRB requirements, but there may be reasons that it does not (such as those that Stefan mentions above).

    2. In my experience, various federal funding agencies require researchers to obtain IRB approval for any research involving human subjects. In one case, some work I was doing required IRB approval both from my institution *and* from the federal funding agency. Ultimately, the agency required us to stop working until we could get approval, even though the work had already been approved by my institution's IRB.

    3. Stefan is completely right that there are huge grey areas, and that IRBs' standards differ across institutions. For example, in terms of our IRB, network traces from a single-user home constitute human subjects, since monitoring that traffic necessarily implies looking at the behavior of a single user. However, collecting aggregate statistics---or network traffic that does not identify/study/characterize any *single* user---is not human subjects research according to our IRB, since you are now studying aggregate patterns, not any particular human. (That said, the BISmark experiments where we collect passive traffic traces from homes are approved by IRB whenever our IRB deems the approval necessary.) Another gray area we just discovered is wireless traffic data inside the home---according to our IRB, not a human subjects experiment, even though some determined adversary might be able to take the traces and infer something about human behavior. In yet another case, we are grappling with how to bring an international collaborator onto an IRB-approved project who is working at an NGO (hence, no IRB) and wants to work with some of our data. Simply "having IRB approval" is often not black-and-white.

    4. The question of privacy in mobile measurement is quite interesting, and is something we're now grappling with, with our cellular measurement tool, as well. This is a thorny (and interesting) open question, and one that is currently being pretty hotly debated in the FCC among carriers: how does one protect individual user location privacy while still gathering useful mobile measurements with geographical annotations? Obviously, aggregation or obfuscation can help, but simply anonymizing IDs doesn't work, and aggregating too many users might obscure the real factor that is affecting performance (e.g., the user of a particular device and Android OS version and mobility pattern might uniquely be able to isolate the cause of poor performance, but then you've also uniquely identified a user). I'd very much like to discuss this problem with you at greater length.
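To illustrate the aggregation tradeoff described above, here is a small sketch (my own hypothetical example, not code from any real measurement tool) of k-anonymity-style suppression: a bucket of measurements is published only if enough distinct users contributed to it, since a bucket backed by a single user -- say, one rare device/OS combination -- effectively identifies that user even with anonymized IDs.

```python
# Sketch of k-anonymity-style suppression for aggregated mobile measurements.
# The record format and the threshold K are illustrative policy choices.
from collections import defaultdict
from statistics import mean

K = 5  # minimum number of distinct users per published bucket


def aggregate(records, k=K):
    """Group measurements by (device, os_version) and publish a bucket's
    mean latency only if at least k distinct users contributed to it."""
    buckets = defaultdict(list)  # (device, os_version) -> [(user_id, latency_ms)]
    for user_id, device, os_version, latency_ms in records:
        buckets[(device, os_version)].append((user_id, latency_ms))

    published = {}
    for key, rows in buckets.items():
        users = {user for user, _ in rows}
        if len(users) >= k:
            published[key] = mean(latency for _, latency in rows)
        # else: suppressed -- publishing this bucket could single out a user
    return published


# Example: one bucket has enough users; the other would identify its sole user.
records = [(f"u{i}", "Nexus4", "4.2", 100 + i) for i in range(6)]
records.append(("u99", "RarePhone", "2.3", 900))
result = aggregate(records)
print(result)
```

The flip side, as noted above, is that the suppressed bucket may be exactly the one that isolates the cause of poor performance -- so the threshold k directly trades diagnostic value against privacy.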

    Excellent post.



    IMC '12 CFP Excerpt:

    "Ethical standards for measurement must be considered by all authors. In particular, authors must conform to acceptable use policies for domains that are probed or monitored, data privacy and anonymity for all personally identifiable information (PII) and etiquette for using shared measurement data. (See Allman and Paxson, IMC '07.) If applicable, authors are also urged to notify parties of security flaws in their products or services in advance of publication. Adherence to ethical standards for measurement will be a criterion for all submissions, and any violations---including ambiguous situations not well described---will be grounds for rejection."

  8. Part of the challenge is that the whole nature of mobile data collection is a fuzzy area - it depends very much on what data you're collecting, how it's processed and stored, how it is aggregated, and so forth. One of the reasons we have IRBs is to decentralize the process of making these tough judgment calls about whether a given study adequately protects the human subjects. I'm not saying the IRB process is perfect -- far from it -- but at least it is a process. And despite arguments that IRB approval by itself does not necessarily lead to ethical data collection practices, it's still the case that institutions receiving federal funding (at least in the US) require IRB approval for studies involving human subjects. Saying that the IRB is not perfect does not exempt a researcher from this requirement.

    I never received any formal training on human subjects approvals when I joined Harvard, although there was a packet of information that I probably skimmed over and threw out -- since much of it seemed irrelevant to a computer scientist. I'll be the first to admit that computer scientists need to pay more attention to this stuff!

  9. I don't think anyone is arguing against IRBs. They are clearly an important check on overreach. I think the question is more about what is the role of the PC in this. I'm happy with your first goal, "raise awareness", but less happy with "prevent reviewers from having to make a judgement call".

    My concern is that IRB compliance is an institutional issue about compliance with an oversight mechanism. Ultimately, it is between researchers and their employers (and potentially their funding agency). I don't think it's the best place for a PC to get involved. If someone screws up badly on the IRB process, this is something for their institution to enforce, I think (and institutions have far sharper teeth than PCs, FWIW).

    That doesn't mean that I'm against people declaring that they have IRB approval, but I've sat on more than my fair share of PCs and I've been involved in more than my fair share of ethical arguments on said PCs, and I see this issue of IRB approval being commonly used as a shortcut. I've seen lack of IRB approval used as a cudgel against what I believe to be perfectly ethical research, and the presence of IRB approval used as a shield ("oh, the ethics must be fine, it has IRB approval") on research that was past my comfort zone (and my comfort zone is pretty aggressive). This is not unique to PCs, BTW; I've had program managers basically tell me that they don't care what gets done so long as it has IRB approval (i.e., that process is key). I think these kinds of ethical issues are messy and complex, and the promise of a binary "ethical bit" in the form of IRB approval is far too tempting not to use it.

    Personally, I far prefer that we ask authors to explicitly outline the ethical principles under which they operate and then have the PC stand for what they believe in in paper selection. Then we can look at papers that get accepted, with their ethical statements, and say "this PC at this time thought that X was ethical behavior". That is useful information for the community as far as I'm concerned. If all I learn is that a particular paper was IRB approved, I don't really get any information to inform my own research (other than that I need to have the IRB box checked off... and as you know, faculty are very good at checking boxes that need to be checked). I don't know the basis on which it was approved, what the risks were, what the controls were, etc.

    Anyway, I'm good with IRBs, but I'm skeptical about using their decisions as a key element in the paper review process. I'd much rather papers be explicit about their ethical issues, their potential harms, and the mitigation controls they use.

  10. Stefan: All right, I understand what you're getting at now, and I mostly agree. For me the key goal is raising awareness. At least in the program committees that I've served on, this issue of the ethics of how data was collected -- let alone IRB approval -- has never come up, to my recollection. I am also much more acutely aware of these issues now that I'm at Google -- much of what we do is governed by a set of pretty clear policies around collection, storage, processing, and aggregation of private user data.

    So requiring documentation of an IRB process is not a panacea - I'll grant that. I do feel that the mobile systems research community should establish some clear standards for what constitutes ethical data collection and encourage authors of papers to adhere to them. And program committees need to be thinking about these questions when they review papers as well.

  11. What do people think of this text, from the CFP for the 2013 Usenix Security Symposium?

    "New in 2013: Papers that describe experiments on human subjects, or that analyze non-public data derived from human subjects (even anonymized data), should disclose whether an ethics review (e.g., IRB approval) was conducted and discuss steps taken to ensure that participants were treated ethically.

    Contact the program chair at if you have any questions."

  12. Matt,

    I think you have highlighted an important discussion that needs more attention.

    You won't be surprised, but many don't realize how many practitioners / developers aren't aware of the importance of IRB or quality issues around collecting data. "If the build makes it easy, well, it is easy," is too often the current approach. There are more rigorous aspects to data collection, aggregation, reporting and use that come with these emerging fantastic new modes (e.g., mobility, real time, etc.).

    Thanks for bringing up this topic.

    Might you make this into a piece for the wider or mainstream press?

  13. I agree with Sarita, I am convinced that the HCI community has a strong culture of human-subjects IRB. IRB shows up as a topic in many HCI classes, we discuss it while at conferences like CHI (I recall lots of excitement about the potential changes to the federal regulations that were being discussed). There have been workshops on how human subjects changes as what you can collect changes.

    It's also embedded in the papers. People don't often write, "we got IRB", but it's present in arguments about study design. Choosing how to compensate participants when doing studies in very poor areas (too much could lead to coercion, an IRB issue) or choosing to interview rather than observe to minimize invasiveness... so I'd say it's deeply embedded in the culture, which may be why you can't easily see it from the outside. The call for papers is really long (I know, it was mine to edit in 2006), so we try to leave out all the obvious stuff, like that you've done an IRB or its equivalent in your country -- we also have discussions about what multi-country studies involve in terms of getting certification.

    But in my experience, IRB is not as well understood outside of HCI. I have seen it discussed as being a huge inconvenience to researchers rather than an opportunity to reflect on participants' rights. So I might venture that IRB is not just unknown; in some cases (although hopefully not too many) it is known but seen as an unnecessary impediment to research. In that regard, thank you for saying this, because it will take voices like yours and others to change that perception as well as raise the visibility of IRB across the discipline.

