November 23rd, 2016
The idea of secret surveillance from a distance isn’t new. For centuries, there have been undercover agents. Then came hidden cameras and microphones. But this secret surveillance had limitations — such as the physical constraints of a human or camera located far from the person being watched. As surveillance technology becomes more sophisticated, however, it is becoming easier to identify, watch, listen to, and judge people from a distance.
The judgment portion is, in part, based on biometric facial-recognition technology that incorporates expression recognition. For the unseen eyes, it’s no longer just about identifying a person, but also about watching their emotional responses. This type of facial-recognition tech gained attention a few years ago when Microsoft filed a patent for technology that would track individuals’ emotions and target advertising and marketing based upon a person’s mood.
“Degrees of emotion can vary — a user can be ‘very angry’ or ‘slightly angry’ — as well as the duration of the mood. Advertisers can target people ‘happy for one hour’ or ‘happy for 24 hours,’” the Toronto Star reported in 2012. Four years later, the mood-identification technology can be bought off the shelf, as NBC News explains in a story about “a new immersive experience for moviegoers.” Read more »
October 17th, 2016
For years, companies and institutions have been using “anonymization” or “deidentification” techniques and processes to release data concerning individuals, saying that the techniques will protect personal privacy and prevent the sensitive information from being linked back to an individual. Yet we have seen time and again that these processes haven’t worked.
For almost two decades, researchers have told us that anonymization of private information has significant problems, and individuals can be re-identified and have their privacy breached. (I wrote a blog post last year detailing some of the research concerning re-identification of anonymized data sets.)
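The basic technique in much of this research is a linkage attack: an “anonymized” dataset that keeps quasi-identifiers (such as ZIP code, birth year, and sex) is joined to an auxiliary dataset, like a voter roll, that pairs those same quasi-identifiers with names. A minimal sketch of the idea, in which all records and names are invented for illustration:

```python
# Illustrative linkage (re-identification) attack. The "anonymized"
# dataset has names removed but keeps quasi-identifiers; the auxiliary
# dataset pairs the same quasi-identifiers with names. All data invented.

anonymized = [
    {"zip": "02138", "birth_year": 1954, "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_year": 1971, "sex": "M", "diagnosis": "asthma"},
]

voter_roll = [
    {"name": "A. Smith", "zip": "02138", "birth_year": 1954, "sex": "F"},
    {"name": "B. Jones", "zip": "02144", "birth_year": 1980, "sex": "M"},
]

def reidentify(anon_rows, aux_rows, keys=("zip", "birth_year", "sex")):
    """Link records that share the same quasi-identifier values."""
    # Index the auxiliary data by its quasi-identifier tuple.
    index = {tuple(r[k] for k in keys): r["name"] for r in aux_rows}
    matches = []
    for row in anon_rows:
        name = index.get(tuple(row[k] for k in keys))
        if name is not None:  # unique combination links record to a name
            matches.append((name, row["diagnosis"]))
    return matches

print(reidentify(anonymized, voter_roll))
# A unique quasi-identifier combination ties the "anonymous"
# medical record back to a named individual.
```

In real datasets the quasi-identifier combinations are often unique enough that a large share of records can be linked this way, which is why simply stripping names is not sufficient.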
Recently, Australian Attorney General George Brandis announced that he would seek to amend the country’s Privacy Act to “create a new criminal offence of re-identifying de-identified government data. It will also be an offence to counsel, procure, facilitate, or encourage anyone to do this, and to publish or communicate any re-identified dataset.”
According to the Guardian, the “impetus” for this announcement was a recent privacy problem with deidentified Medicare data, a problem uncovered by researchers. “A copy of an article published by the researchers outlines how every single Medicare data code was able to be reidentified by linking the dataset with other available information,” the Guardian reported. Read more »
September 19th, 2016
Companies have been monitoring their employees for years, in a variety of ways. Employers are using key-logging technology to monitor workers’ keystrokes and Internet-tracking software to log the sites that employees visit. Ars Technica and others reported on Xora, a job-management app that was used by one business to track employees even when they were off the clock. The latest developments in workplace monitoring involve employee badges and the gathering of social-media data on workers.
Businesses have been tracking the movements of their workers in various ways, including through GPS-enabled smartphones and tablets. “Etta Epps, a UPS delivery driver for 10 years,” reports the Atlanta Journal-Constitution, “said she is keenly aware of the shipping giant’s surveillance of her actions through GPS and sensors in her truck.” “You’re so conscious every day of trying not to do this or not to do that because you know you’re being monitored,” Epps said.
Now, there is a new type of badge that can track employees even more closely. Humanyze, a Boston company, has created special surveillance badges that can be used in the workplace. “Each has two microphones doing real-time voice analysis, and each comes with sensors that follow where you are in the office, with motion detectors to record how much you move. The beacons tracking your movements are omitted from bathroom locations, to give you some privacy,” the Washington Post reports. Read more »
August 8th, 2016
The Stingray surveillance technology, also called cell-site simulator technology, can gather a significant amount of personal data from individuals’ cellphones. A recent federal case in New York and a new law in Illinois aim to curtail the warrantless use of Stingrays.
The technology simulates a cellphone tower so that nearby mobile devices will connect to it and reveal sensitive personal data, such as their location, text messages, voice calls, and other information. The Stingray surveillance technology vacuums information from every cellphone within its range, so innocent people’s private data are gathered, as well. It is a dragnet that can capture hundreds of innocent people, rather than just the suspect targeted.
As I have discussed before, law enforcement officials are using this technology in secret. Documents obtained by the ACLU showed that the U.S. Marshals Service directed Florida police to hide the use of Stingray surveillance technology from judges, which meant the police lied in court documents. Sarasota police Sgt. Kenneth Castro sent an e-mail in April 2009 to colleagues at the North Port (Florida) Police Department: “In reports or depositions we simply refer to the assistance as ‘received information from a confidential source regarding the location of the suspect.’” A recent San Diego Union-Tribune investigation showed that local police are using the surveillance technology in routine investigations – not ones involving terrorism or national security.
Now, a federal judge in New York has thrown out Stingray evidence gathered without a warrant. The case is United States v. Lambis (pdf) in the Southern District of New York. Without a warrant, the Drug Enforcement Administration used a powerful cell-site simulator to determine that the cellphone it was tracking was located in Raymond Lambis’s home. Agents then searched his home and found drugs and drug paraphernalia. Read more »
April 19th, 2016
As the costs of the technologies fall, biometric identification tools — such as fingerprint, iris or voice-recognition scanners — are increasingly being used in everyday life. There are significant privacy questions that arise as biometric data is collected and used, sometimes without the knowledge or consent of the individuals being scanned.
Biometrics use has become more commonplace. Many smartphones, including iPhones, have fingerprint “touch” ID scanners that people can use instead of numeric passcodes. And law enforcement personnel have been using fingerprint scanners for years, both domestically and internationally. In the past few years, we’ve seen banks capturing customers’ voice prints in order to fight fraud, the institutions say, and gyms asking members to identify themselves using their fingerprints. Reuters recently reported that companies are seeking to expand fingerprint-identification systems to credit cards and railway commuters.
And the voluntariness of a person submitting his or her biometric has also been questioned. Do you realize when you’re calling your bank that you’re handing over your voice print? Another situation a few years ago in Washington, D.C., also raised the issue of voluntariness. The District considered requiring that all visitors to its jail “have their fingerprints scanned and checked against law enforcement databases for outstanding warrants.” So if you wanted to visit a friend or relative who was in the D.C. jail, you would have to volunteer to submit your biometric data. The plan was dropped after strong criticism from the public and civil rights groups.
Your biometric data can be gathered for any number of innocuous reasons. For example, I had to submit my fingerprints to obtain my law license, not because of a crime. Family members, roommates and business colleagues of crime victims have submitted fingerprints in order to rule out “innocent” fingerprints at a crime scene in a home or workplace. Some “trusted traveler” airport programs gather iris scans. Some companies use iris-recognition technology for their security systems. Read more »
March 24th, 2016
Lots of people use personal health devices, such as Fitbits, or mobile health or wellness apps (there are a variety offered through Apple’s and Google’s app stores). There are important privacy and security questions about the devices and apps, because the data that they can gather can be sensitive — disease status, medication usage, glucose levels, fertility data, or location information as the devices track your every step on the way to your 10,000 steps-per-day goal. And the medical diagnoses drawn from such information can surprise people, especially the individuals using the apps and devices.
For example, one man was concerned after reviewing his wife’s Fitbit data. He “noticed her heart rate was well above normal.” He thought the device might be malfunctioning, so he posted the data on message-board site Reddit and asked for analyses. One person theorized that his wife might be pregnant. The couple made a doctor’s appointment and confirmed the pregnancy.
This case illustrates how personal health devices and apps can gather sensitive medical data, and support medical inferences, that a person might not even realize are possible. Did you know that heart-rate changes could signal a pregnancy?
And this isn’t the first time that sensitive information of Fitbit users has been inadvertently revealed. Five years ago, there was an uproar over Fitbit’s privacy settings when people who were logging their sexual activity as a form of exercise learned that the data was showing up in Google searches. Read more »