
    Archive for the ‘Anonymity’ Category

    What If the Rules About Newborn Blood Screenings Changed?

    Thursday, October 26th, 2017

    There has been an ongoing privacy and ethics debate over the unauthorized or undisclosed use of newborns’ blood samples for purposes other than the standard disease screening, which covers about 30 conditions. Now there’s a trial study from Brigham and Women’s Hospital, called BabySeq, that “uses genomic sequencing to screen for about 1,800 conditions, including some cancers,” CBS Miami reports.

    The privacy questions are clear: What happens to the DNA data — who keeps it, in what form, for how long — and who has access to it? The participants in the study have chosen to participate with, presumably, complete knowledge of the answers to these questions. But consider what would happen if screening for 1,800 conditions, rather than the current 30, became the legal standard. That is a significant amount of highly personal information, and it raises substantial privacy issues.

    Dr. Robert Green, a co-director of BabySeq, has raised some of these issues. “We can’t predict what kind of discrimination is going to be occurring by the time your child grows up,” Green said. “We can’t predict whether there’s some sort of privacy breaches, this information gets out and is used against your child in some sort of future scenario. And we, most importantly, we can’t predict the information’s accurate.” Read more »

    Digital Advertisers Continue to Battle Private-Browsing Technology

    Monday, September 18th, 2017

    We have discussed the privacy issues connected with targeted behavioral advertising before. In this type of advertising, a user’s online activity is tracked so that ads can be served based on the user’s behavior. What began as online data gathering has expanded: companies now collect and track consumers’ habits both online and offline. For example, Google announced earlier this year that it “has begun using billions of credit-card transaction records” to try to connect individuals’ “digital trails to real-world purchase records in a far more extensive way than was possible before,” the Washington Post reported.

    Some people are uncomfortable with the tracking and the targeting by companies and attempt to opt out. (Opting out puts the burden on consumers to learn what the privacy policies are, whether they protect consumer data, whom the data is shared with and for what purpose, and how to opt out of this data collection, use, and sharing. Consumer advocates support opt-in policies, under which companies have an incentive to create strong privacy protections and use limitations so consumers will choose to share their data.) In response, people have installed ad-blocker technology to avoid seeing ads. However, there is online-tracking technology that can be difficult to block, such as “canvas fingerprinting.”
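    For a sense of why canvas fingerprinting is hard to block, here is a minimal browser-side sketch in TypeScript. It is illustrative only, not any vendor’s actual tracking code: the same drawing commands render with tiny machine-specific differences (GPU, drivers, installed fonts, anti-aliasing), so the rendered pixels act as a quasi-unique identifier that survives cookie deletion.

    ```typescript
    // Illustrative sketch of the canvas-fingerprinting idea (not any
    // vendor's production code). The exact pixel output of these drawing
    // commands varies subtly by GPU, driver, fonts, and anti-aliasing,
    // so it can serve as a fairly stable cross-site identifier.
    function canvasFingerprint(): string {
      const canvas = document.createElement("canvas");
      canvas.width = 220;
      canvas.height = 30;
      const ctx = canvas.getContext("2d");
      if (!ctx) return "no-canvas";

      ctx.textBaseline = "top";
      ctx.font = "14px Arial";
      ctx.fillStyle = "#f60";
      ctx.fillRect(10, 1, 60, 20);
      ctx.fillStyle = "#069";
      ctx.fillText("fingerprint-test", 2, 15);

      // Trackers typically hash this data URL; the hash is the fingerprint.
      return canvas.toDataURL();
    }
    ```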

    People also have joined the Do Not Track movement — this can take the form of opting out of being tracked by e-mail address or by having your Web browser send an opt-out signal to a company as you conduct your online activity. And federal lawmakers have tried to pass Do Not Track legislation to protect kids.
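    On the technical side, the browser’s Do Not Track signal is simply an HTTP header (“DNT: 1”) sent with each request, and honoring it is voluntary. Below is a minimal Node/TypeScript sketch of a server that chooses to respect it; this is hypothetical code showing the mechanism, not any real site’s behavior.

    ```typescript
    // Minimal sketch of a server that honors the Do Not Track header.
    // Nothing forces a site to behave this way, which is the problem
    // described in this post.
    import * as http from "http";

    http.createServer((req, res) => {
      const dnt = req.headers["dnt"]; // "1" means the user opted out
      if (dnt === "1") {
        // A respectful site would skip tracking pixels, third-party ad
        // scripts, and behavioral profiling for this request.
        res.end("Tracking disabled for this visit.");
      } else {
        res.end("No DNT signal received; tracking permitted.");
      }
    }).listen(8080);
    ```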

    The battle has gone back and forth. Apple’s Safari browser and Mozilla’s Firefox browser, for example, have included anti-tracking technology for years. However, some companies choose not to respect the Do Not Track signals sent by Web browsers. Read more »

    Be aware of privacy issues as your A.I. assistant learns more about you

    Friday, May 26th, 2017

    Update on June 6, 2017: Apple has introduced its own A.I. assistant device, the HomePod. Notably, the company says the device will only collect data after the wake command. Also, the data will be encrypted when sent to Apple’s servers. However, privacy questions remain, as with other A.I. assistants. 

    Artificial intelligence assistants, such as Amazon’s Echo or Google’s Home devices (or Apple’s Siri or Microsoft’s Cortana services), have been proliferating, and they can gather a lot of personal information on the individuals or families who use them. A.I. assistants are part of the “Internet of Things,” a computerized network of physical objects. In IoT, sensors and data-storage devices embedded in objects interact with Web services.

    I’ve discussed the privacy issues associated with IoT generally (relatedly, the Government Accountability Office recently released a report on the privacy and security problems that can arise in IoT devices), but I want to look closer at the questions raised by A.I. assistants. The personal data retained or transmitted on these A.I. services and devices could include email, photos, sensitive medical or other information, financial data, and more.

    And law enforcement officials could access this personal data. Earlier this year, there was a controversy concerning the data possibly collected by an Amazon Echo. The Washington Post explained, “The Echo is equipped with seven microphones and responds to a ‘wake word,’ most commonly ‘Alexa.’ When it detects the wake word, it begins streaming audio to the cloud, including a fraction of a second of audio before the wake word, according to the Amazon website. A recording and transcription of the audio is logged and stored in the Amazon Alexa app and must be manually deleted later.”  Read more »
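    The “fraction of a second of audio before the wake word” is possible because such devices continuously keep a short rolling buffer of the most recent audio and flush it to the cloud once the wake word is detected. Here is a TypeScript sketch of that mechanism; it is illustrative only, not Amazon’s implementation.

    ```typescript
    // Illustrative pre-roll buffer: audio older than the window is
    // discarded, so nothing before the wake word is kept for long;
    // but the last fraction of a second is always on hand to upload.
    class PreRollBuffer {
      private frames: Float32Array[] = [];
      constructor(private maxFrames: number) {}

      push(frame: Float32Array): void {
        this.frames.push(frame);
        if (this.frames.length > this.maxFrames) {
          this.frames.shift(); // drop audio older than the pre-roll window
        }
      }

      flush(): Float32Array[] {
        const preRoll = this.frames;
        this.frames = [];
        return preRoll; // uploaded along with the live post-wake-word stream
      }
    }

    // Hypothetical usage: ~0.5 s of pre-roll at 50 audio frames per second.
    const preRoll = new PreRollBuffer(25);
    // On every audio frame:    preRoll.push(frame);
    // On wake-word detection:  upload preRoll.flush(), then stream live audio.
    ```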

    New Year? Time for a New Assessment of Your Privacy Setup.

    Tuesday, January 17th, 2017

    People use a lot of services and devices to transmit and retain sensitive personal information. A person could use daily: a work computer, a personal computer, multiple email addresses, a work cellphone, a personal cellphone, an e-reader or tablet, a fitness tracker or smart watch, and an Artificial Intelligence assistant (Amazon’s Echo, Apple’s Siri, Google’s Assistant, or Microsoft’s Cortana). The data retained or transmitted on these services and devices could include sensitive medical or other information, personal photos, financial data, and more.

    There’s also the issue of the collection of information that could lead to other data being learned. For example, I wrote recently about health-app data and the surprising results of scrutinizing it. A man was alarmed by his wife’s heart rate data, as collected by her Fitbit, and asked others for assistance analyzing it. One theory: She could be pregnant. Did you know that heart-rate changes could signal a pregnancy?
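    The inference itself needs no sophisticated machinery. Here is a TypeScript sketch of the kind of check involved, with arbitrary illustrative thresholds; this is not Fitbit’s code, and certainly not medical advice.

    ```typescript
    // Illustrative check: flag a sustained rise in resting heart rate,
    // the sort of unasked-for inference that wearable data makes possible.
    // The 14-day window and 8 bpm threshold are arbitrary examples.
    function sustainedRestingHeartRateRise(dailyRestingBpm: number[]): boolean {
      if (dailyRestingBpm.length < 14) return false;
      const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
      const baseline = mean(dailyRestingBpm.slice(0, 7)); // first week
      const recent = mean(dailyRestingBpm.slice(-7));     // latest week
      return recent - baseline >= 8; // sustained elevation of 8+ bpm
    }
    ```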

    There’s an ongoing controversy concerning the data possibly collected by an Amazon Echo. The Washington Post explains, “The Echo is equipped with seven microphones and responds to a ‘wake word,’ most commonly ‘Alexa.’ When it detects the wake word, it begins streaming audio to the cloud, including a fraction of a second of audio before the wake word, according to the Amazon website. A recording and transcription of the audio is logged and stored in the Amazon Alexa app and must be manually deleted later.” Arkansas police have served a warrant to Amazon, as they seek information recorded by a suspect’s Echo. Amazon has refused to comply with the warrant. Read more »

    It’s Becoming Easier to Have Detailed Secret Surveillance from a Distance

    Wednesday, November 23rd, 2016

    The idea of secret surveillance from a distance isn’t new. For centuries, there have been undercover agents. Subsequently, there came hidden cameras and microphones. But there were limitations to this secret surveillance — such as the physical constraints of a human or camera located far from the person being watched. As surveillance technology has become more sophisticated, however, it is becoming easier to identify, watch, listen to, and judge people from a distance.

    The judgment portion is, in part, based on biometric facial-recognition technology that incorporates expression recognition. For the unseen eyes, it’s no longer just about identifying a person, but also about watching their emotional responses. This type of facial-recognition tech gained attention a few years ago when Microsoft filed a patent for technology that would track individuals’ emotions and target advertising and marketing based upon a person’s mood.

    “Degrees of emotion can vary — a user can be ‘very angry’ or ‘slightly angry’ — as well as the duration of the mood. Advertisers can target people ‘happy for one hour’ or ‘happy for 24 hours,’” the Toronto Star reported in 2012. Four years later, the mood-identification technology can be bought off the shelf, as NBC News explains in a story about “a new immersive experience for moviegoers.” Read more »
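    To make the quoted targeting rule concrete, here is a TypeScript sketch of how a “happy for 24 hours” match might be evaluated. This is purely illustrative and is not the patent’s or any vendor’s actual code; the observation shape is hypothetical.

    ```typescript
    // Illustrative matcher for a mood-duration targeting rule such as
    // "happy for 24 hours". Observations would come from (hypothetical)
    // expression-recognition output.
    interface MoodObservation {
      mood: "happy" | "angry" | "sad" | "neutral";
      intensity: "slight" | "moderate" | "very"; // degree of emotion
      timestampMs: number;
    }

    function matchesMoodRule(
      history: MoodObservation[],
      targetMood: MoodObservation["mood"],
      minDurationMs: number,
      nowMs: number
    ): boolean {
      const windowStart = nowMs - minDurationMs;
      const inWindow = history.filter((o) => o.timestampMs >= windowStart);
      // Match only if the mood held for the entire window.
      return inWindow.length > 0 && inWindow.every((o) => o.mood === targetMood);
    }
    ```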

    Criminalizing the Reidentification of ‘Anonymized’ Data Won’t Solve the Privacy Issue

    Monday, October 17th, 2016

    For years, companies and institutions have been using “anonymization” or “deidentification” techniques and processes to release data concerning individuals, saying that the techniques will protect personal privacy and preclude the sensitive information from being linked back to an individual. Yet we have seen time and again that these processes haven’t worked.

    For almost two decades, researchers have told us that anonymization of private information has significant problems, and individuals can be re-identified and have their privacy breached. (I wrote a blog post last year detailing some of the research concerning re-identification of anonymized data sets.)
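    The usual failure mode is a linkage attack, in which the “anonymized” release is joined with a public auxiliary dataset on quasi-identifiers (attributes such as ZIP code, birth date, and sex that are individually harmless but jointly near-unique). Here is a minimal TypeScript sketch of the technique, with hypothetical record shapes.

    ```typescript
    // Minimal linkage-attack sketch: records released without names are
    // re-identified by joining them to a public dataset (say, a voter
    // roll) on shared quasi-identifiers. Record shapes are hypothetical.
    interface ReleasedRecord {
      zip: string;
      birthDate: string;
      sex: string;
      diagnosis: string; // the supposedly unlinkable sensitive attribute
    }

    interface PublicRecord {
      zip: string;
      birthDate: string;
      sex: string;
      name: string;
    }

    function linkageAttack(
      released: ReleasedRecord[],
      auxiliary: PublicRecord[]
    ): Array<{ name: string; diagnosis: string }> {
      const key = (r: { zip: string; birthDate: string; sex: string }) =>
        `${r.zip}|${r.birthDate}|${r.sex}`;

      // Index the public dataset by its quasi-identifiers.
      const byKey = new Map<string, PublicRecord>();
      for (const p of auxiliary) byKey.set(key(p), p);

      // Any released record whose quasi-identifiers match exactly one
      // public record is re-identified.
      const matches: Array<{ name: string; diagnosis: string }> = [];
      for (const r of released) {
        const person = byKey.get(key(r));
        if (person) matches.push({ name: person.name, diagnosis: r.diagnosis });
      }
      return matches;
    }
    ```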

    Recently, Australian Attorney General George Brandis announced that he would seek to amend the country’s Privacy Act to “create a new criminal offence of re-identifying de-identified government data. It will also be an offence to counsel, procure, facilitate, or encourage anyone to do this, and to publish or communicate any re-identified dataset.”

    According to the Guardian, the “impetus” for this announcement was a recent privacy problem with deidentified Medicare data, a problem uncovered by researchers. “A copy of an article published by the researchers outlines how every single Medicare data code was able to be reidentified by linking the dataset with other available information,” the Guardian reported. Read more »