
    Archive for the ‘Civil liberties’ Category

    License-plate-reader Technology Is Proliferating, And Questions Remain

    Wednesday, June 28th, 2017

A couple of years ago, we discussed the increasing use of license-plate-recognition camera technology and the possible privacy, civil liberties, and security implications of surveillance tech used to gather and record information on drivers’ movements. At the time, we noted that license-plate-reader technology (also called automated license plate readers, or ALPRs), like other surveillance systems, can be used to create a profile of an individual from personal, possibly sensitive data. Now, the technology is in even more jurisdictions nationwide, and the privacy questions remain.

    Two examples of the proliferation of the license-plate-reader technology are in Rhode Island and Tennessee. In Rhode Island, state legislators are considering HB 5531, “An Act Relating to Motor and Other Vehicles — Electronic Confirmation and Compliance System,” which would create a state-wide license-plate-reader network to identify and fine uninsured drivers. The chief sponsor is Rep. Robert Jacquard (D), who “said he has made a number of changes to address fears of growing state surveillance and concerns the cameras could be used to expand highway tolling,” reports the Providence Journal.
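To make concrete what a system like the one HB 5531 contemplates would do, here is a minimal sketch in Python. This is purely illustrative, not the bill's actual design: the function name, the fine amount, and the idea of an insurer-supplied list of covered plates are all assumptions for the example.

```python
# Illustrative sketch of an automated plate-to-insurance check.
# All names and values are hypothetical, not drawn from HB 5531 itself.

UNINSURED_FINE = 120  # hypothetical fine amount, in dollars


def check_plate(plate, insured_plates):
    """Return a record describing whether a scanned plate appears insured.

    Note: even for insured drivers, each scan is also a time-stamped
    location record -- the retention of those records is the privacy
    concern raised in testimony against the bill.
    """
    insured = plate in insured_plates
    return {
        "plate": plate,
        "insured": insured,
        "fine": 0 if insured else UNINSURED_FINE,
    }


insured_plates = {"ABC123", "XYZ789"}  # illustrative insurer-supplied list
print(check_plate("ABC123", insured_plates))
print(check_plate("QQQ000", insured_plates))
```

The lookup itself is trivial; the policy questions center on what happens to the scan records of the overwhelming majority of drivers who are insured.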

    The ACLU of Rhode Island testified (pdf) against the bill, noting “this legislation would nevertheless facilitate the capture and storage of real time location information on every Rhode Islander on the road, with no guidance as to how this information is to be used, at the benefit of a third-party corporation.” ACLU-RI wants the state to “implement clear and specific restrictions on the use of this technology, particularly by law enforcement” and notes such restrictions are included in HB 5989, whose chief sponsor is Rep. John G. Edwards (D). Read more »

    New Year? Time for a New Assessment of Your Privacy Setup.

    Tuesday, January 17th, 2017

    People use a lot of services and devices to transmit and retain sensitive personal information. A person could use daily: a work computer, a personal computer, multiple email addresses, a work cellphone, a personal cellphone, an e-reader or tablet, a fitness tracker or smart watch, and an Artificial Intelligence assistant (Amazon’s Echo, Apple’s Siri, Google’s Assistant, or Microsoft’s Cortana). The data retained or transmitted on these services and devices could include sensitive medical or other information, personal photos, financial data, and more.

    There’s also the issue of the collection of information that could lead to other data being learned. For example, I wrote recently about health-app data and the surprising results of scrutinizing it. A man was alarmed by his wife’s heart rate data, as collected by her Fitbit, and asked others for assistance analyzing it. One theory: She could be pregnant. Did you know that heart-rate changes could signal a pregnancy?

    Currently, there’s ongoing controversy concerning the data possibly collected by an Amazon Echo. The Washington Post explains, “The Echo is equipped with seven microphones and responds to a ‘wake word,’ most commonly ‘Alexa.’ When it detects the wake word, it begins streaming audio to the cloud, including a fraction of a second of audio before the wake word, according to the Amazon website. A recording and transcription of the audio is logged and stored in the Amazon Alexa app and must be manually deleted later.” Arkansas police have served a warrant to Amazon, as they seek information recorded by a suspect’s Echo. Amazon has refused to comply with the warrant.  Read more »
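The detail that the Echo streams "a fraction of a second of audio before the wake word" implies the device keeps a short rolling buffer of recent audio at all times. Here is a minimal sketch of that general technique; it is not Amazon's implementation, and the class name, frame representation, and buffer size are assumptions for illustration.

```python
# Illustrative sketch (not Amazon's code) of pre-wake-word buffering:
# the device keeps a short rolling buffer of recent audio frames, and on
# wake-word detection prepends that buffer to the audio it streams onward.
from collections import deque

PRE_ROLL_FRAMES = 5  # hypothetical: roughly "a fraction of a second"


class WakeWordBuffer:
    def __init__(self):
        # deque with maxlen silently discards the oldest frame when full
        self.pre_roll = deque(maxlen=PRE_ROLL_FRAMES)
        self.streaming = False
        self.sent = []  # stands in for audio streamed to the cloud

    def feed(self, frame, wake_detected=False):
        if self.streaming:
            self.sent.append(frame)
        elif wake_detected:
            # Flush the rolling buffer first, so the stream begins
            # slightly *before* the wake word was spoken.
            self.sent.extend(self.pre_roll)
            self.sent.append(frame)
            self.streaming = True
        else:
            self.pre_roll.append(frame)


buf = WakeWordBuffer()
for i in range(8):
    buf.feed(f"frame{i}", wake_detected=(i == 6))
print(buf.sent)  # ['frame1', ..., 'frame5'] buffered, then 'frame6', 'frame7'
```

The point for the warrant dispute is that nothing before the buffer window is retained; what is retained, though, is logged in the Alexa app until manually deleted.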

    It’s Becoming Easier to Have Detailed Secret Surveillance from a Distance

    Wednesday, November 23rd, 2016

    The idea of secret surveillance from a distance isn’t new. For centuries, there have been undercover agents. Subsequently, there came hidden cameras and microphones. But there were limitations to this secret surveillance — such as the physical constraints of a human or camera located far from the person being watched. As surveillance technology has become more sophisticated, however, it is becoming easier to identify, watch, listen to, and judge people from a distance.

The judgment portion is, in part, based on biometric facial-recognition technology that incorporates expression recognition. For the unseen eyes, it’s no longer just about identifying a person, but also about watching their emotional responses. This type of facial-recognition tech gained attention a few years ago when Microsoft filed a patent for technology that would track individuals’ emotions and target advertising and marketing based on a person’s mood.

    “Degrees of emotion can vary — a user can be ‘very angry’ or ‘slightly angry’ — as well as the duration of the mood. Advertisers can target people ‘happy for one hour’ or ‘happy for 24 hours,’” the Toronto Star reported in 2012. Four years later, the mood-identification technology can be bought off the shelf, as NBC News explains in a story about “a new immersive experience for moviegoers.” Read more »

    Federal Case and State Law Are Latest Moves to Curb Warrantless Use of Stingray Tech

    Monday, August 8th, 2016

    The Stingray surveillance technology, also called cell-site simulator technology, can gather a significant amount of personal data from individuals’ cellphones. A recent federal case in New York and a new law in Illinois aim to curtail the warrantless use of Stingrays.

    The technology simulates a cellphone tower so that nearby mobile devices will connect to it and reveal sensitive personal data, such as their location, text messages, voice calls, and other information. The Stingray surveillance technology vacuums information from every cellphone within its range, so innocent people’s private data are gathered, as well. It is a dragnet that can capture hundreds of innocent people, rather than just the suspect targeted.

    As I have discussed before, law enforcement officials are using this technology in secret. Documents obtained by the ACLU showed that the U.S. Marshals Service directed Florida police to hide the use of Stingray surveillance technology from judges, which meant the police lied in court documents. Sarasota police Sgt. Kenneth Castro sent an e-mail in April 2009 to colleagues at the North Port (Florida) Police Department: “In reports or depositions we simply refer to the assistance as ‘received information from a confidential source regarding the location of the suspect.’” A recent San Diego Union-Tribune investigation showed that local police are using the surveillance technology in routine investigations – not ones involving terrorism or national security.

Now, a federal judge in New York has thrown out Stingray evidence gathered without a warrant. The case is United States v. Lambis (pdf) in the Southern District of New York. Without a warrant, the Drug Enforcement Administration used a powerful cell-site simulator to determine that a cellphone was located in Raymond Lambis’s home. Agents then searched his home and found drugs and drug paraphernalia. Read more »

    As biometrics use expands, privacy questions continue to fester

    Tuesday, April 19th, 2016

    As the costs of the technologies fall, biometric identification tools — such as fingerprint, iris or voice-recognition scanners — are increasingly being used in everyday life. There are significant privacy questions that arise as biometric data is collected and used, sometimes without the knowledge or consent of the individuals being scanned.

Biometrics use has become more commonplace. Many smartphones, including iPhones, have fingerprint “touch” ID scanners that people can use instead of numeric passcodes. And law enforcement personnel have been using fingerprint scanners for years, both domestically and internationally. In the past few years, we’ve seen banks capturing customers’ voice prints in order, the institutions say, to fight fraud. Or gyms asking members to identify themselves using their fingerprints. Reuters recently reported that companies are seeking to expand fingerprint-identification systems to credit cards and railway commuters.

And the voluntariness of a person submitting his or her biometric has also been questioned. Do you realize when you’re calling your bank that you’re handing over your voice print? Another situation a few years ago in Washington, D.C., also raised the issue of voluntariness. The District considered requiring that all visitors to its jail “have their fingerprints scanned and checked against law enforcement databases for outstanding warrants.” So if you wanted to visit a friend or relative who was in the D.C. jail, you would have to volunteer to submit your biometric data. The plan was dropped after strong criticism from the public and civil rights groups.

    Your biometric data can be gathered for any number of innocuous reasons. For example, I had to submit my fingerprints to obtain my law license, not because of a crime. Family members, roommates and business colleagues of crime victims have submitted fingerprints in order to rule out “innocent” fingerprints at a crime scene in a home or workplace. Some “trusted traveler” airport programs gather iris scans. Some companies use iris-recognition technology for their security systems. Read more »

    Who sees your health-app data? It’s hard to know.

    Thursday, March 24th, 2016

    Lots of people use personal health devices, such as Fitbits, or mobile health or wellness apps (there are a variety offered through Apple’s and Google’s app stores). There are important privacy and security questions about the devices and apps, because the data that they can gather can be sensitive — disease status, medication usage, glucose levels, fertility data, or location information as the devices track your every step on the way to your 10,000 steps-per-day goal. And the medical diagnoses drawn from such information can surprise people, especially the individuals using the apps and devices.

For example, one man was concerned after reviewing his wife’s Fitbit data. He “noticed her heart rate was well above normal.” He thought the device might be malfunctioning, so he posted the data on message-board site Reddit and asked for analyses. One person theorized that his wife might be pregnant. The couple made a doctor’s appointment and confirmed the pregnancy.
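The kind of pattern the Reddit posters spotted can be sketched in a few lines of Python: compare a rolling average of recent resting-heart-rate readings against a personal baseline. The function name, threshold, and numbers below are illustrative assumptions, not anything Fitbit actually computes.

```python
# Illustrative sketch of flagging a sustained rise in resting heart rate.
# Threshold, window, and readings are hypothetical values for the example.

def sustained_elevation(readings, baseline, threshold=1.15, window=7):
    """Flag when the average of the last `window` readings exceeds the
    personal baseline by more than `threshold` (here, 15 percent)."""
    if len(readings) < window:
        return False
    recent_avg = sum(readings[-window:]) / window
    return recent_avg > baseline * threshold


baseline = 62  # bpm, an illustrative resting rate
normal_week = [61, 63, 60, 62, 64, 61, 63]
elevated_week = [70, 74, 72, 75, 73, 71, 74]
print(sustained_elevation(normal_week, baseline))    # False
print(sustained_elevation(elevated_week, baseline))  # True
```

A trivial computation like this is exactly why the data is sensitive: a sustained deviation from baseline can point to a medical condition the user never chose to disclose.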

    This case illustrates the sensitive medical data gathered by personal medical devices and apps that a person might not even realize is possible. Did you know that heart-rate changes could signal a pregnancy?

    And this isn’t the first time that sensitive information of Fitbit users has been inadvertently revealed. Five years ago, there was an uproar over Fitbit’s privacy settings when people who were logging their sexual activity as a form of exercise learned that the data was showing up in Google searches. Read more »