The coronavirus pandemic is an opportunity to balance public health and personal privacy
The Internet of Things makes the invisible visible. That’s the IoT’s greatest feature, but also its biggest potential drawback. More sensors on more people means the IoT becomes a visible web of human connections that we can use to, say, track down a virus.
Track-and-trace programs are already being used to monitor COVID-19 outbreaks and the virus's spread. But because such programs can easily enable mass surveillance, we need rules in place governing any attempt to track people's movements.
In April, Google and Apple said they would work together to build an opt-in program for Android and iOS users. The program would use their phones' Bluetooth connection to deliver exposure notifications—meaning that transmissions are tracked by who comes into contact with whom, rather than where people spend their time. Other proposals use location data provided by phone applications to determine where people are traveling.
These proposals take slightly different approaches, but at their core they're all tracking programs. Any such program that we implement to track the spread of COVID-19 should follow some basic guidelines to ensure that the data is used only for public health research. This data should not be used for marketing, commercial gain, or law enforcement. It shouldn't even be used for research outside of public health.
Let’s talk about the limits we should place around this data. A tracking program for COVID-19 should be implemented only for a prespecified duration that’s associated with a public health goal (like reducing the spread of the virus). So, if we’re going to collect device data and do so without requiring a user to opt in, governments need to enact legislation that explains what the tracking methodology is, requires an audit for accuracy and efficacy by a third party, and sets a predefined end.
Ethical data collection is also critical. Apple and Google’s Bluetooth method uses encrypted tokens to track people as they pass other people. The Bluetooth data is people-centric, not location-centric. Once a person uploads a confirmation that they’ve been infected, their device can issue notifications to other devices that were recently nearby, alerting users—anonymously—that they may have come in contact with someone who’s infected.
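The core idea behind this people-centric design can be sketched in a few lines of code. This is a simplified illustration, not Apple and Google's actual protocol (which derives rolling proximity identifiers from periodically changing keys); the `Device` class and its methods are hypothetical names chosen for the example. Each device broadcasts random tokens that reveal nothing about their owner, remembers the tokens it hears, and—when a confirmed case publishes their own broadcast tokens—checks locally for a match:

```python
import secrets

class Device:
    """Simplified sketch of anonymous proximity-token exchange.
    Real exposure-notification systems rotate identifiers derived
    from cryptographic keys; here each token is just random bytes."""

    def __init__(self):
        self.sent_tokens = []   # tokens this device has broadcast
        self.heard_tokens = []  # tokens heard from nearby devices

    def broadcast(self):
        # A fresh random token carries no identity or location data.
        token = secrets.token_hex(16)
        self.sent_tokens.append(token)
        return token

    def receive(self, token):
        self.heard_tokens.append(token)

    def check_exposure(self, published_tokens):
        # Matching happens on the device, so nobody learns who met whom.
        return any(t in published_tokens for t in self.heard_tokens)

alice, bob, carol = Device(), Device(), Device()
bob.receive(alice.broadcast())   # Alice and Bob pass each other
carol.receive(bob.broadcast())   # Bob and Carol pass each other

# Alice tests positive and uploads only the tokens she broadcast.
published = set(alice.sent_tokens)
print(bob.check_exposure(published))    # True: Bob was near Alice
print(carol.check_exposure(published))  # False: Carol never met Alice
```

Note that the published list contains only Alice's own random tokens, so the server never sees a contact graph; each phone decides privately whether it was exposed.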
This is good. And while it might be possible to match a person to a device, it would be difficult. Ultimately, linking cases anonymously to devices is safer than simply collecting location data on infected individuals. The latter makes it easy to identify people based on where they sleep at night and work during the day, for example.
Going further, this data must be encrypted on the device, in transit, and when stored on a cloud or government server, so that opportunistic hackers can't access it. Only the agency in charge of track-and-trace efforts should have access to the data from the device. This means that police departments, immigration agencies, and private companies can't access that data. Ever.
However, researchers should have access to some form of the data after a few years have passed. I don’t know what that time limit should be, but when that time comes, institutional review boards, like those that academic institutions use to protect human research subjects, should be in place to evaluate each request for what could be highly personal data.
If we can get this right, we can use the lessons learned during COVID-19 not only to protect public health but also to promote a more privacy-centric approach to the Internet of Things.
This article appears in the June 2020 print issue as “Pandemic vs. Privacy.”