
    Archive for the ‘Identification’ Category

    As biometrics use expands, privacy questions continue to fester

    Tuesday, April 19th, 2016

    As the costs of the technologies fall, biometric identification tools — such as fingerprint, iris or voice-recognition scanners — are increasingly being used in everyday life. There are significant privacy questions that arise as biometric data is collected and used, sometimes without the knowledge or consent of the individuals being scanned.

Biometrics use has become more commonplace. Many smartphones, including iPhones, have fingerprint “touch ID” scanners that people can use instead of numeric passcodes. And law enforcement personnel have been using fingerprint scanners for years, both domestically and internationally. In the past few years, we’ve seen banks capturing customers’ voice prints in order, the institutions say, to fight fraud, and gyms asking members to identify themselves using their fingerprints. Reuters recently reported that companies are seeking to expand fingerprint-identification systems to credit cards and railway commuters.

And the voluntariness of a person submitting his or her biometric has also been questioned. Do you realize when you’re calling your bank that you’re handing over your voice print? Another situation a few years ago in Washington, D.C., also raised the issue of voluntariness. The District considered requiring that all visitors to its jail “have their fingerprints scanned and checked against law enforcement databases for outstanding warrants.” So if you wanted to visit a friend or relative in the D.C. jail, you would have had to “volunteer” your biometric data. The plan was dropped after strong criticism from the public and civil rights groups.

    Your biometric data can be gathered for any number of innocuous reasons. For example, I had to submit my fingerprints to obtain my law license, not because of a crime. Family members, roommates and business colleagues of crime victims have submitted fingerprints in order to rule out “innocent” fingerprints at a crime scene in a home or workplace. Some “trusted traveler” airport programs gather iris scans. Some companies use iris-recognition technology for their security systems.

There are a variety of privacy, security and usage problems that can arise from the widespread use of biometric data. Such problems could lead to discrimination against or disenfranchisement of people who can’t submit their biometrics. For example, some people won’t be able to provide a given biometric at all: people with missing limbs, or with prints that are difficult to capture consistently. And the machinery used to capture the biometric could have difficulty accommodating diverse users; very tall or very short people could have trouble with iris scanners, for example.

There can also be religious or cultural obstacles: facial recognition may not work well as a biometric if a person wears a beard or a headscarf. Or a person may simply be uncomfortable handing over his or her biometric, whether because of privacy or civil-liberty concerns or a fear that the biometric would be misused or stolen.

    Some people are wary of the covert collection of biometrics. For example, there are systems that can scan a person’s iris from a distance. And there’s the problem of mission creep — fingerprints added to a database for innocuous reasons (ruling out “innocent” fingerprints at a crime scene) are then used for other purposes. What if iris scans are collected for building-access control but are later added to a criminal database? Data submitted for one purpose should not be used for a different purpose without the individual’s knowledge and consent.

Another privacy and security issue has to do with how biometrics can be compromised. An attacker could capture a biometric, say a fingerprint, and later use it to gain access. And capturing a biometric for misuse can be easy, depending on the biometric: fingerprints are left everywhere, faces can be photographed, voices can be recorded. How do you solve the problem of misuse of your fingerprints, which you cannot change?

    There are ways to lower the privacy and security risks in biometric systems. You need to look at the system as a whole. How is the system set up, protected, and maintained? Are there stringent security and audit trails, among other security protocols?
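As a rough illustration of what such an audit trail might look like, here is a minimal sketch in Python. The record fields and the `log_access` helper are hypothetical, not drawn from any particular system; the point is simply that every read of a biometric record gets written down, so later misuse can be traced to a specific person:

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical append-only audit log: every access to a biometric
# record is recorded, never silently performed.
AUDIT_LOG = []

def log_access(operator_id: str, subject_id: str, purpose: str) -> dict:
    """Record who accessed whose biometric data, when, and why."""
    entry = {
        "operator": operator_id,
        # Store only a hash of the subject identifier, so the audit log
        # itself doesn't become another copy of sensitive identifiers.
        "subject": hashlib.sha256(subject_id.encode()).hexdigest(),
        "purpose": purpose,
        "time": datetime.now(timezone.utc).isoformat(),
    }
    AUDIT_LOG.append(entry)
    return entry

# Example: an officer runs a warrant check against a stored record.
log_access("officer-17", "driver-license-0042", "warrant check")
print(json.dumps(AUDIT_LOG[0], indent=2))
```

A real system would also need the log to be tamper-evident and regularly reviewed; an audit trail that no one inspects deters nothing.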

But even strong biometric systems can fail or be hacked. So the march toward centralizing identification using biometric data such as fingerprints, voice prints or iris scans should be halted. A centralized system of identification, one ID for many purposes, decreases security, because the harm is far greater when that one biometric is compromised. A better system is one of decentralized identification, which reduces the risks associated with security breaches and the misuse of personal information.

    Who sees your health-app data? It’s hard to know.

    Thursday, March 24th, 2016

    Lots of people use personal health devices, such as Fitbits, or mobile health or wellness apps (there are a variety offered through Apple’s and Google’s app stores). There are important privacy and security questions about the devices and apps, because the data that they can gather can be sensitive — disease status, medication usage, glucose levels, fertility data, or location information as the devices track your every step on the way to your 10,000 steps-per-day goal. And the medical diagnoses drawn from such information can surprise people, especially the individuals using the apps and devices.

For example, one man was concerned after reviewing his wife’s Fitbit data. He “noticed her heart rate was well above normal.” He thought the device might be malfunctioning, so he posted the data on the message-board site Reddit and asked for analyses. One person theorized that his wife might be pregnant. The couple made a doctor’s appointment and confirmed the pregnancy.

This case illustrates how sensitive the medical data gathered by personal medical devices and apps can be, in ways a person might not even realize. Did you know that heart-rate changes could signal a pregnancy?

And this isn’t the first time that sensitive information of Fitbit users has been inadvertently revealed. Five years ago, there was an uproar over Fitbit’s privacy settings when people who were logging their sexual activity as a form of exercise learned that the data was showing up in Google searches.

    Obama’s new federal privacy council long overdue, but Americans need more protections

    Wednesday, February 24th, 2016

Recently, President Obama released a package of cybersecurity reform proposals. Along with these proposals, Obama also unveiled a new executive order: “Establishment of the Federal Privacy Council.” The council will be composed of senior privacy officials from at least 24 federal agencies, including Cabinet-level departments, NASA and the Office of Personnel Management, and “may also include other officials from agencies and offices, as the Chair may designate.”

    The new council is tasked with developing, coordinating and sharing ideas and best practices for federal programs to protect privacy and implement “appropriate privacy safeguards” throughout the administration.

Although the council’s mission is important, this move seems incomplete. First, such a concerted effort to improve privacy protections throughout the federal government should have begun years ago. If privacy and security protections for sensitive personal data had been prioritized, there might not have been the problems caused by the hacker attack last year against the Office of Personnel Management, which did not use encryption or other such security technology to protect the information (including fingerprints) of the millions of current and former federal employees affected.

    Organizations Must Protect Against Insider Threats to Security

    Thursday, January 21st, 2016

As personal information becomes more accessible and shareable through massive databases, questions of security arise. Agencies and companies build protections against outside threats, but insider threats pose a unique problem: often, people misuse or abuse their legitimate access privileges to private data rather than attempting to gain access illegally.

    We’ve seen the problems that arise when insiders abuse or misuse their access privileges to individuals’ data and violate the individuals’ privacy rights. Last week, the Florida Times-Union reported that Jacksonville and a Highway Patrol trooper reached a settlement after she sued, accusing police of misusing their access to a driver’s license database to gather information on her and harass her.

    A similar situation is said to have occurred in Minnesota, where 104 officers from 18 agencies in the state accessed one woman’s “driver’s license record 425 times in what could be one of the largest private data breaches by law enforcement in history.” A state report later found such misuse was common.

Federal databases also have the problem of insiders misusing or abusing their data-access privileges. A recent ProPublica investigation found a variety of privacy violations at Department of Veterans Affairs facilities. “Some VA employees have used their access to medical records as a weapon in disputes or for personal gain, incident reports show,” such as one case where health data was improperly accessed and used in a divorce proceeding. Other individuals misused their authority to access medical information after suicides or suicide attempts by fellow employees.

    As Our Devices Increasingly Talk to Others, Privacy Questions Arise

    Thursday, December 17th, 2015

    As technology continues to evolve and become integrated into our lives, there are significant questions about privacy and security. We’ve discussed before the “Internet of Things,” which is a computerized network of physical objects. In IoT, sensors and data-storage devices embedded in objects interact with Web services. Such connected televisions, refrigerators and other devices can raise privacy and security questions.

For example, consider the “smart” or “connected” car. People buy such vehicles for the benefits of integrating technology into a space where they may spend hours at a time. Your car or truck knows where you go and when. It knows how fast you drive and how quickly or slowly you brake. Your car knows if you’re wearing a seatbelt.

Privacy experts have noted that unclear or vague privacy or usage policies could allow companies that collect drivers’ sensitive data to share or sell that information, creating databases that may invade consumers’ privacy. For example, the locations individuals drive to could reveal deeply personal information. Do you go to a church or mosque at the same time every week? Have you visited an adoption or fertility organization? Did you join a protest or demonstration? Did you recently start going to a building that houses the offices of several psychotherapists, or one that houses a drug addiction clinic?

One privacy issue recently arose with connected automobiles — and it caught many people off-guard. ABC25 in West Palm Beach, Fla., reported that a Ford car with opt-in 911 Assist allegedly ratted out a hit-and-run driver in Florida.

    Legislators, Federal Officials Seek Limits on Use of Stingray Surveillance Technology

    Tuesday, November 10th, 2015

Rep. Jason Chaffetz (R-Utah) recently introduced a bill, H.R. 3871, The Stingray Privacy Act (pdf), to limit the use of cellphone surveillance technology known as cell-site simulators or “Stingray” technology. The bill, Chaffetz says, “would require law enforcement to obtain a warrant before deploying a cell site simulator consistent with recently issued federal guidance and the 4th Amendment to the Constitution. H.R. 3871 does provide targeted exceptions for exigent circumstances and foreign intelligence surveillance.” The federal guidance mentioned refers to recent policies on cell-site simulators released by the departments of Justice (pdf) and Homeland Security (pdf), which include various exceptions for special circumstances. The new guidance was released after public and Congressional scrutiny of the use of the surveillance devices.

    The Stingray and similar cellphone surveillance technologies are extremely invasive. They simulate a cellphone tower so that nearby mobile devices will connect to it and reveal their location, text messages, voice calls, and other personal data. The surveillance technology scoops up data on every cellphone within its range, so innocent people’s private conversations and texts are gathered, too.

Dozens of police departments nationwide use this cell-site simulator surveillance technology, and there are many questions about how they’re using it. Even the IRS admitted in Congressional testimony that it uses the surveillance technology.