Today is Giving Tuesday, and here are a few consumer, privacy, and civil liberty groups that could use donations to continue to fight for your rights: ACLU national (or give to your local chapter), Center for Digital Democracy, Consumers Union, Electronic Frontier Foundation, Electronic Privacy Information Center, Privacy International, and the World Privacy Forum.
There has been an ongoing privacy and ethics debate over the unauthorized or undisclosed use of newborns’ blood samples for purposes other than the standard disease screening, which covers about 30 conditions. Now, there’s a trial study, called BabySeq, from Brigham and Women’s Hospital that “uses genomic sequencing to screen for about 1,800 conditions, including some cancers,” CBS Miami reports.
The privacy questions are clear: What happens to the DNA data — who keeps it, in what form, for how long — and who has access to it? The participants in the study have chosen to participate with, presumably, complete knowledge of the answers to these questions. But consider what would happen if screening for 1,800 conditions, rather than the current 30, became the legal standard: that is a significant amount of highly personal information, and the privacy issues would be substantial.
BabySeq co-director Dr. Robert Green has raised some of these issues. “We can’t predict what kind of discrimination is going to be occurring by the time your child grows up,” Green said. “We can’t predict whether there’s some sort of privacy breaches, this information gets out and is used against your child in some sort of future scenario. And we, most importantly, we can’t predict the information’s accurate.” Read more »
We have discussed the privacy issues connected with targeted behavioral advertising before: a user’s online activity is tracked so that ads can be served based on that behavior. What began as online data gathering has expanded to combined online and offline collection and tracking of consumers’ habits. For example, Google announced earlier this year that it “has begun using billions of credit-card transaction records” to try to connect individuals’ “digital trails to real-world purchase records in a far more extensive way than was possible before,” the Washington Post reported.
Some people are uncomfortable with the tracking and targeting by companies and attempt to opt out. (Opt-out puts the burden on consumers to learn what the privacy policies are, whether they protect consumer data, whom the data is shared with and for what purpose, and how to opt out of this data collection, use, and sharing. Consumer advocates support opt-in policies, under which companies have an incentive to create strong privacy protections and use limitations so that consumers will choose to share their data.) Others have installed ad-blocker technology to avoid seeing ads altogether. However, some online-tracking technology can be difficult to block, such as “canvas fingerprinting.”
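Canvas fingerprinting works by having a script draw text or graphics to a hidden canvas element in the browser, read the pixels back, and hash them; small rendering differences across fonts, GPUs, and drivers make the hash behave like a per-device identifier. Here is a minimal sketch of the hashing step, with made-up byte strings standing in for real canvas read-backs (the function name is illustrative, not any tracker’s actual code):

```python
import hashlib

def canvas_fingerprint(pixel_bytes):
    """Hash a read-back pixel buffer into a compact identifier.

    In a real browser, a script draws text/shapes to a hidden canvas,
    reads the pixels back, and hashes them. Because two machines rarely
    render identically, the hash acts like a per-device identifier.
    """
    return hashlib.sha256(pixel_bytes).hexdigest()[:16]

# Simulated read-backs from two machines; one channel differs slightly,
# as it might due to a different font-rendering or GPU stack.
machine_a = bytes([120, 34, 7] * 100)
machine_b = bytes([120, 34, 8] * 100)

print(canvas_fingerprint(machine_a))
print(canvas_fingerprint(machine_b))
```

Because the identifier is derived from how the machine renders, clearing cookies does not change it — which is what makes the technique so hard to block or opt out of.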
People also have joined the Do Not Track movement — this can take the form of opting out of being tracked by e-mail address or by having your Web browser send an opt-out signal to a company as you conduct your online activity. And federal lawmakers have tried to pass Do Not Track legislation to protect kids.
There has been a battle. For example, Apple’s Safari browser and Mozilla’s Firefox browser have included anti-tracking technology for years. However, some companies choose not to respect Do Not Track signals sent by Web browsers. Read more »
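Mechanically, the browser’s Do Not Track signal is just an HTTP request header, `DNT: 1`; whether anything changes depends entirely on the server choosing to honor it. Here is a minimal sketch of a cooperating server’s check (the policy and helper names are illustrative assumptions, not any particular site’s implementation):

```python
def respects_dnt(headers):
    """Return True if the request carries a Do Not Track opt-out.

    Browsers with Do Not Track enabled send the header `DNT: 1`.
    Header-name lookup is case-insensitive per HTTP, so normalize keys.
    """
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("dnt") == "1"

def choose_ads(headers):
    # Hypothetical policy: skip behavioral targeting when DNT is set.
    if respects_dnt(headers):
        return "contextual ads only"
    return "behaviorally targeted ads"
```

The simplicity of the check is the point: honoring the signal costs a site almost nothing technically, so the battle described above is about business incentives, not engineering difficulty.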
We’ve discussed before the many ways that companies monitor their employees: key-logging technology that records workers’ keystrokes, Internet-tracking software that logs the sites employees visit, and GPS technology that tracks workers’ locations. More workplaces are using employee badges with microphones and sensors that track individuals’ movements. Now, there’s a move toward a more invasive way to track employees: implanting microchips in workers.
Wisconsin technology company Three Square Market announced that it is “offering implanted chip technology to all of their employees. … Employees will be implanted with a RFID chip allowing them to make purchases in their break room micro market, open doors, login to computers, use the copy machine, etc.” The company continued: “The chip implant uses near-field communications (NFC); the same technology used in contactless credit cards and mobile payments. A chip is implanted between the thumb and forefinger underneath the skin within seconds.” Read more »
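The quoted description suggests the implanted NFC tag acts only as an identifier presented to a reader, with permissions and account balances kept in a back-end system. A minimal sketch under that assumption (the tag format, record schema, and function names are all hypothetical):

```python
# Hypothetical back end for a chip-based office system: the implanted
# tag stores nothing but an ID; everything else lives server-side.
ACCESS = {
    "04:A3:2F:1B": {"doors": {"break_room", "office"}, "balance_cents": 1500},
}

def read_tag(tag_id):
    """Look up the record for a tag ID scanned by an NFC reader."""
    return ACCESS.get(tag_id)

def open_door(tag_id, door):
    record = read_tag(tag_id)
    return record is not None and door in record["doors"]

def purchase(tag_id, price_cents):
    record = read_tag(tag_id)
    if record is None or record["balance_cents"] < price_cents:
        return False
    record["balance_cents"] -= price_cents
    return True
```

Note the privacy consequence of this design: every door opened and snack bought becomes a timestamped row in the employer’s database, tied to a tag the worker cannot leave at home.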
A couple of years ago, we discussed the increasing use of license-plate-recognition camera technology and the possible privacy, civil liberty, and security implications of surveillance tech used to gather and record information on drivers’ movements. At the time, we noted that license-plate-reader technology (also called automated license plate readers, or ALPRs), like other surveillance systems, can build a profile of an individual from personal, possibly sensitive data. Now, the technology is in even more jurisdictions nationwide, and the privacy questions remain.
Two examples of the proliferation of the license-plate-reader technology are in Rhode Island and Tennessee. In Rhode Island, state legislators are considering HB 5531, “An Act Relating to Motor and Other Vehicles — Electronic Confirmation and Compliance System,” which would create a state-wide license-plate-reader network to identify and fine uninsured drivers. The chief sponsor is Rep. Robert Jacquard (D), who “said he has made a number of changes to address fears of growing state surveillance and concerns the cameras could be used to expand highway tolling,” reports the Providence Journal.
The ACLU of Rhode Island testified (pdf) against the bill, noting “this legislation would nevertheless facilitate the capture and storage of real time location information on every Rhode Islander on the road, with no guidance as to how this information is to be used, at the benefit of a third-party corporation.” ACLU-RI wants the state to “implement clear and specific restrictions on the use of this technology, particularly by law enforcement” and notes such restrictions are included in HB 5989, whose chief sponsor is Rep. John G. Edwards (D). Read more »
Update on June 6, 2017: Apple has introduced its own A.I. assistant device, the HomePod. Notably, the company says the device will only collect data after the wake command. Also, the data will be encrypted when sent to Apple’s servers. However, privacy questions remain, as with other A.I. assistants.
Artificial intelligence assistants, such as Amazon’s Echo or Google’s Home devices (or Apple’s Siri or Microsoft’s Cortana services) have been proliferating, and they can gather a lot of personal information on the individuals or families who use them. A.I. assistants are part of the “Internet of Things,” a computerized network of physical objects. In IoT, sensors and data-storage devices embedded in objects interact with Web services.
I’ve discussed the privacy issues associated with IoT generally (relatedly, the Government Accountability Office recently released a report on the privacy and security problems that can arise in IoT devices), but I want to take a closer look at the questions raised by A.I. assistants. The personal data retained or transmitted by these A.I. services and devices could include email, photos, sensitive medical or other information, financial data, and more.
And law enforcement officials could access this personal data. Earlier this year, there was a controversy concerning the data possibly collected by an Amazon Echo. The Washington Post explained, “The Echo is equipped with seven microphones and responds to a ‘wake word,’ most commonly ‘Alexa.’ When it detects the wake word, it begins streaming audio to the cloud, including a fraction of a second of audio before the wake word, according to the Amazon website. A recording and transcription of the audio is logged and stored in the Amazon Alexa app and must be manually deleted later.” Read more »
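The “fraction of a second of audio before the wake word” in Amazon’s description implies the device keeps a short rolling buffer even while idle, so the upload can include audio from just before detection. A minimal sketch of that pre-roll pattern (the frame representation, buffer length, and detector are illustrative assumptions, not Amazon’s implementation):

```python
from collections import deque

PRE_ROLL_FRAMES = 5  # e.g., roughly half a second of audio frames

def capture(frames, is_wake_word):
    """Return the audio that would be streamed to the cloud, or None.

    A small rolling buffer holds the most recent frames; when the wake
    word is detected, the buffered pre-roll is prepended to everything
    from the wake word onward. If the wake word is never heard, nothing
    leaves the device.
    """
    pre_roll = deque(maxlen=PRE_ROLL_FRAMES)
    for i, frame in enumerate(frames):
        if is_wake_word(frame):
            return list(pre_roll) + frames[i:]
        pre_roll.append(frame)
    return None
```

This design choice explains the privacy tension: the device must always be listening (and briefly retaining) audio locally in order to know when to start sending it anywhere.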