
    Archive for the ‘Medical data’ Category

    As biometrics use expands, privacy questions continue to fester

    Tuesday, April 19th, 2016

    As the costs of the technologies fall, biometric identification tools — such as fingerprint, iris or voice-recognition scanners — are increasingly being used in everyday life. There are significant privacy questions that arise as biometric data is collected and used, sometimes without the knowledge or consent of the individuals being scanned.

    Biometrics use has become more commonplace. Many smartphones, including iPhones, have fingerprint “touch” ID scanners that people can use instead of numeric passcodes. Law enforcement personnel have been using fingerprint scanners for years, both domestically and internationally. In the past few years, we’ve seen banks capturing customers’ voice prints, in order, the institutions say, to fight fraud, and gyms asking members to identify themselves with their fingerprints. Reuters recently reported that companies are seeking to expand fingerprint-identification systems to credit cards and railway commuters.

    And the voluntariness of a person submitting his or her biometric has also been questioned. Do you realize when you’re calling your bank that you’re handing over your voice print? Another situation a few years ago in Washington, D.C., also raised the issue of voluntariness. The District considered requiring that all visitors to its jail “have their fingerprints scanned and checked against law enforcement databases for outstanding warrants.” So if you wanted to visit a friend or relative who was in the D.C. jail, you would have had to “volunteer” your biometric data. The plan was dropped after strong criticism from the public and civil rights groups.

    Your biometric data can be gathered for any number of innocuous reasons. For example, I had to submit my fingerprints to obtain my law license, not because of a crime. Family members, roommates and business colleagues of crime victims have submitted fingerprints in order to rule out “innocent” fingerprints at a crime scene in a home or workplace. Some “trusted traveler” airport programs gather iris scans. Some companies use iris-recognition technology for their security systems.

    There are a variety of privacy, security and usage problems that can arise from the widespread use of biometric data. Such problems could lead to discrimination against, or disenfranchisement of, people who can’t submit their biometrics. For example, some people won’t be able to provide a given biometric at all: people with missing limbs, or with fingerprints that are difficult to capture consistently. Or the machinery used to capture the biometric could have difficulty with diverse users; very tall or very short people could have problems with iris scanners, for example.

    There can also be religious or cultural obstacles: facial recognition may not work as a biometric because a person wears a beard or a headscarf. Or a person may simply be uncomfortable handing over his or her biometric, whether because of privacy or civil liberties concerns or a fear that the biometric would be misused or stolen.

    Some people are wary of the covert collection of biometrics. For example, there are systems that can scan a person’s iris from a distance. And there’s the problem of mission creep — fingerprints added to a database for innocuous reasons (ruling out “innocent” fingerprints at a crime scene) are then used for other purposes. What if iris scans are collected for building-access control but are later added to a criminal database? Data submitted for one purpose should not be used for a different purpose without the individual’s knowledge and consent.

    Another privacy and security issue has to do with how biometrics can be compromised. Someone could capture a biometric, say a fingerprint, from another person and later use it to gain access. And capturing a biometric for misuse can be easy, depending on the biometric: fingerprints are left everywhere, faces can be photographed, voices can be recorded. How do you solve the problem of misuse of your fingerprints, which you cannot change?

    There are ways to lower the privacy and security risks in biometric systems. You need to look at the system as a whole: How is the system set up, protected, and maintained? Are there stringent security controls and audit trails, among other protections?

    But even strong biometric systems can fail or be hacked. So the march toward centralizing identification using biometric data such as fingerprints, voice prints or iris scans should be halted. A centralized system of identification, one ID for many purposes, decreases security, because the harm is much greater when that one biometric is compromised. A better system is one of decentralized identification, which reduces the risks associated with security breaches and the misuse of personal information.
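    To make the contrast concrete, here is a minimal sketch in Python of the decentralized approach: each service derives its own identifier from the biometric with its own secret key, so an identifier compromised at one service is useless anywhere else. The function, the keys, and the idea of a stable template byte string are illustrative assumptions, not a description of any real deployment.

    ```python
    # Hypothetical sketch: deriving distinct per-service identifiers from
    # one biometric template, so a breach at one service does not expose
    # an identifier that is reusable anywhere else.
    import hashlib
    import hmac

    def derive_service_id(template: bytes, service_key: bytes) -> str:
        """Derive a service-specific identifier from a biometric template.

        Each service holds its own secret key, so the identifiers stored
        by different services cannot be linked or swapped between systems.
        """
        return hmac.new(service_key, template, hashlib.sha256).hexdigest()

    # The same template yields unlinkable IDs at two different services.
    template = b"...fingerprint feature vector bytes..."
    gym_id = derive_service_id(template, b"gym-secret-key")
    bank_id = derive_service_id(template, b"bank-secret-key")
    assert gym_id != bank_id  # compromising one does not reveal the other
    ```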

    Who sees your health-app data? It’s hard to know.

    Thursday, March 24th, 2016

    Lots of people use personal health devices, such as Fitbits, or mobile health and wellness apps (a variety are offered through Apple’s and Google’s app stores). There are important privacy and security questions about these devices and apps, because the data they gather can be sensitive: disease status, medication usage, glucose levels, fertility data, or location information as the devices track your every step on the way to your 10,000-steps-per-day goal. And the medical inferences that can be drawn from such information can surprise people, including the individuals using the apps and devices.

    For example, one man was concerned after reviewing his wife’s Fitbit data. He “noticed her heart rate was well above normal.” He thought the device might be malfunctioning, so he posted the data on the message-board site Reddit and asked for analyses. One person theorized that his wife might be pregnant. The couple made a doctor’s appointment and confirmed the pregnancy.

    This case illustrates how personal medical devices and apps can gather sensitive medical data that a person might not even realize they are disclosing. Did you know that heart-rate changes could signal a pregnancy?

    And this isn’t the first time that sensitive information of Fitbit users has been inadvertently revealed. Five years ago, there was an uproar over Fitbit’s privacy settings when people who were logging their sexual activity as a form of exercise learned that the data was showing up in Google searches. Read more »

    Organizations Must Protect Against Insider Threats to Security

    Thursday, January 21st, 2016

    As personal information becomes more accessible and shareable through massive databases, questions of security arise. Agencies and companies build protections against outside threats, but insider threats pose a unique problem: often, people misuse or abuse their legitimate access privileges to private data rather than attempting to gain access to the information illegally.

    We’ve seen the problems that arise when insiders abuse or misuse their access privileges to individuals’ data and violate those individuals’ privacy rights. Last week, the Florida Times-Union reported that the city of Jacksonville and a Highway Patrol trooper reached a settlement after she sued, accusing police officers of misusing their access to a driver’s license database to gather information about her and harass her.

    A similar situation is said to have occurred in Minnesota, where 104 officers from 18 agencies in the state accessed one woman’s “driver’s license record 425 times in what could be one of the largest private data breaches by law enforcement in history.” A state report later found such misuse was common.
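    One basic technical safeguard against this kind of abuse is an audit trail that flags unusual access patterns, such as many distinct employees querying a single person’s record. Here is a minimal sketch in Python; the log format and the threshold are illustrative assumptions, not any agency’s actual system.

    ```python
    # Hypothetical sketch: scanning an access-audit log for records that
    # an unusually large number of distinct employees have looked up, as
    # in the Minnesota case. Log format and threshold are assumptions.
    from collections import defaultdict

    def flag_suspicious_records(access_log, max_distinct_users=10):
        """Return record IDs viewed by more distinct users than allowed.

        access_log: iterable of (user_id, record_id) pairs, one per lookup.
        """
        users_per_record = defaultdict(set)
        for user_id, record_id in access_log:
            users_per_record[record_id].add(user_id)
        return {
            record_id: len(users)
            for record_id, users in users_per_record.items()
            if len(users) > max_distinct_users
        }

    # One user looking at record "A" twice is unremarkable; a record
    # queried by dozens of distinct users warrants review.
    log = [("officer1", "A"), ("officer1", "A")]
    log += [(f"officer{i}", "B") for i in range(104)]
    print(flag_suspicious_records(log))  # {'B': 104}
    ```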

    Federal databases also have the problem of insiders misusing or abusing their data-access privileges. A recent ProPublica investigation found a variety of privacy violations at Department of Veterans Affairs facilities. “Some VA employees have used their access to medical records as a weapon in disputes or for personal gain, incident reports show,” such as one case where health data was improperly accessed and used in a divorce proceeding. Other individuals misused their authority to access medical information after suicides or suicide attempts by fellow employees. Read more »

    Targeted Behavioral Advertising and What It Can Mean for Privacy

    Tuesday, September 8th, 2015

    Targeted behavioral advertising is the practice of tracking a user’s online activity so that ads can be served based on the user’s behavior. What began as online data gathering has expanded: companies now collect and track consumers’ habits both online and offline. There have been numerous news stories about this privacy and surveillance issue. A fundamental question about targeted behavioral advertising divides industry and consumer advocates: opt-in or opt-out. Opt-in, the choice of consumer advocates, puts the burden on companies to have strong privacy protections and use limitations so that consumers will choose to share their data. Opt-out, the choice of the majority of ad-industry players, puts the burden on consumers to learn what the privacy policies are, whether they protect consumer data, with whom the data is shared and for what purpose, and how to opt out of this data collection, use and sharing.
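    The difference between the two regimes comes down to the default. A minimal sketch in Python, with hypothetical names, shows how the burden shifts: under opt-in, a user who has said nothing is not tracked; under opt-out, silence is treated as consent.

    ```python
    # Hypothetical sketch of opt-in vs. opt-out as a default setting.
    def may_track(user_prefs: dict, regime: str) -> bool:
        """Decide whether behavioral tracking is permitted for a user."""
        if regime == "opt-in":
            # Burden on the company: no recorded consent means no tracking.
            return user_prefs.get("tracking_consent", False)
        # "opt-out": burden on the consumer, silence counts as consent.
        return user_prefs.get("tracking_consent", True)

    print(may_track({}, "opt-in"))   # False: a new user is not tracked
    print(may_track({}, "opt-out"))  # True: a new user is tracked by default
    ```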

    Companies can also buy information on individuals from data collectors. At times, the information can be wrong, causing problems for individuals. Read a previous post for more about data brokers.

    What happens when data is gathered as a person browses the Internet? It can lead to innocuous advertisements for cars when you’re searching for a new vehicle, or for boots when you’re considering replacing ones that wore out last winter. Or it can lead to a more painful situation, as when ads for strollers and car seats show up on Web sites you visit even though you had a miscarriage a month ago. It’s easy for advertisers to connect the dots when someone starts searching for infant safety gear or reading parenting Web sites, and the person is unable to opt out of targeted behavioral advertising. Read more »

    It’s 2015. Why Aren’t Companies Encrypting Their Data?

    Thursday, June 4th, 2015

    Update on June 7: There’s news that the Office of Personnel Management was hacked and the unencrypted personal data of 4.1 million current and former federal employees was accessed. It has been nine years since an unencrypted laptop and hard drive containing sensitive data on 26.5 million current military personnel, veterans, and their spouses were stolen from a Department of Veterans Affairs’ employee’s home. That security breach led to a push for the use of encryption throughout the federal government, and I hope this breach leads to stronger data protections.

    For years, security and privacy professionals have been urging companies to encrypt their data so that when there are security breaches, there is less damage to individuals whose data is accessed. Yet we continue to read reports about companies failing to use this basic tool to secure information.
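    Encrypting data at rest is not exotic. Here is a minimal sketch in Python using the cryptography package’s Fernet interface; the record content is a stand-in, and a real deployment would rely on full-disk encryption and a key-management service rather than keeping the key next to the data.

    ```python
    # Minimal sketch of encrypting sensitive data at rest with the
    # "cryptography" package's Fernet (AES-based authenticated encryption).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # store in a key manager, never beside the data
    fernet = Fernet(key)

    record = b"name, address, date of birth, SSN"  # hypothetical sensitive row
    token = fernet.encrypt(record)                 # safe to store on a laptop disk

    # A stolen laptop yields only ciphertext; only the key holder can read it.
    assert fernet.decrypt(token) == record
    ```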

    For example, California-based U.S. Healthworks recently revealed (pdf) that a password-protected yet unencrypted laptop was stolen from an employee’s vehicle. The health-care service provider told employees, “We determined that the laptop may have contained files that included your name, address, date of birth, job title, and Social Security number.”

    Financial services company Sterne Agee and Leach was recently fined $225,000 by the Financial Industry Regulatory Authority and required to review its security protocols after a 2014 incident in which a Sterne Agee employee left an unencrypted laptop in a restroom. The laptop included “clients’ account numbers, Social Security numbers and other personal information,” according to a news report. Read more »

    Significant Problems in White House’s Draft Privacy Legislation

    Monday, March 2nd, 2015

    The Obama White House recently released its draft Consumer Privacy Bill of Rights Act (pdf) and a fact sheet. Parts of the draft legislation date to a 2012 white paper (pdf) that laid out a plan to better protect consumer privacy. And last year, the big data group that the White House convened also issued recommendations on privacy (pdf).

    The White House has taken important steps in highlighting that individuals need strong privacy protections for their data and in creating the draft legislation. And it is important that the draft legislation attempts to implement the Fair Information Practices: collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation, and accountability. For example, the draft legislation offers several options for responding to companies that violate the bill’s provisions, including allowing individuals and state attorneys general to file lawsuits.

    But there are several significant problems with the proposal that need to be addressed before it can move forward. (The draft does not yet have a legislative sponsor, which it would need in order to be introduced and debated in Congress.)

    One problem with the legislation: It would preempt state laws.

    SEC. 401. Preemption.
    (a) In General.—This Act preempts any provision of a statute, regulation, or rule of a State or local government, with respect to those entities covered pursuant to this Act, to the extent that the provision imposes requirements on covered entities with respect to personal data processing.

    Read more »