
    Archive for the ‘Security’ Category

    Be aware of privacy issues as your A.I. assistant learns more about you

    Friday, May 26th, 2017

    Update on June 6, 2017: Apple has introduced its own A.I. assistant device, the HomePod. Notably, the company says the device will only collect data after the wake command. Also, the data will be encrypted when sent to Apple’s servers. However, privacy questions remain, as with other A.I. assistants. 

    Artificial intelligence assistants, such as Amazon’s Echo or Google’s Home devices (or Apple’s Siri or Microsoft’s Cortana services) have been proliferating, and they can gather a lot of personal information on the individuals or families who use them. A.I. assistants are part of the “Internet of Things,” a computerized network of physical objects. In IoT, sensors and data-storage devices embedded in objects interact with Web services.

    I’ve discussed the privacy issues associated with IoT generally (relatedly, the Government Accountability Office recently released a report on the privacy and security problems that can arise in IoT devices), but I want to look closer at the questions raised by A.I. assistants. The personal data retained or transmitted on these A.I. services and devices could include email, photos, sensitive medical or other information, financial data, and more.

And law enforcement officials could access this personal data. Earlier this year, there was a controversy concerning the data possibly collected by an Amazon Echo. The Washington Post explained, “The Echo is equipped with seven microphones and responds to a ‘wake word,’ most commonly ‘Alexa.’ When it detects the wake word, it begins streaming audio to the cloud, including a fraction of a second of audio before the wake word, according to the Amazon website. A recording and transcription of the audio is logged and stored in the Amazon Alexa app and must be manually deleted later.”
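That “fraction of a second of audio before the wake word” is possible because the device keeps a small rolling buffer of recent audio at all times; only after detection is the buffered pre-roll, plus the live stream, sent upstream. Here is a minimal sketch of such a pre-roll buffer. The sample rate, window length, and class design are illustrative assumptions, not Amazon’s actual implementation:

```python
from collections import deque

SAMPLE_RATE = 16_000      # samples per second (assumed)
PRE_ROLL_SECONDS = 0.5    # keep roughly half a second of recent audio

class PreRollBuffer:
    """Hold a short rolling window of audio so that, when a wake word
    is detected, the moments *before* it can be transmitted too."""

    def __init__(self, seconds=PRE_ROLL_SECONDS, rate=SAMPLE_RATE):
        # deque with maxlen silently discards the oldest samples
        # as new ones arrive, so memory use stays fixed.
        self.buffer = deque(maxlen=int(seconds * rate))

    def feed(self, samples):
        # Called continuously with incoming microphone samples.
        self.buffer.extend(samples)

    def flush(self):
        # Called on wake-word detection: return the buffered pre-roll,
        # which would be prepended to the live audio stream.
        pre_roll = list(self.buffer)
        self.buffer.clear()
        return pre_roll
```

The privacy point follows directly from the design: the microphone is feeding the buffer constantly, even though audio only leaves the device after the wake word.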

    Insiders Can Exploit Their Knowledge of Security Protocols

    Monday, February 27th, 2017

Good security is difficult. There are insider and outsider threats to prepare for, and the best defense includes continuous upgrades of security systems. A recent federal indictment concerning an alleged 18-year drug-smuggling operation among airport and Transportation Security Administration employees shows the value of strong security protocols that are changed and upgraded often enough that they cannot be easily circumvented by knowledgeable insiders.

The use of airport and airline employees to smuggle drugs and other illicit contraband is not new. For example, a decade ago there was a scandal at an airport in Florida because airline baggage handlers were able to smuggle guns and drugs onto a plane. According to court documents, in 2007, two Comair baggage handlers carried a duffel bag containing 14 guns and 8 pounds of marijuana onto a commercial plane in Orlando that was headed for San Juan, Puerto Rico. The men avoided detection because, as airline baggage handlers, they used their uniforms and legally issued identification cards to bypass security screeners and enter a restricted area before loading the contraband onto the plane. The men, who had passed federal background checks, used their knowledge of airport security protocols. The security protocols failed, and the men were caught only because a source called in a tip to the police.

Earlier that year, CBS News had revealed that “unlike passengers, pilots and flight attendants, some 700,000 airport workers with ID badges are allowed to completely bypass airport screening areas at virtually all our nation’s 452 commercial airports.” Shortly after the Comair arrests, airports in Florida strengthened security protocols for employees, and the Transportation Security Administration also heightened screening requirements.

    New Year? Time for a New Assessment of Your Privacy Setup.

    Tuesday, January 17th, 2017

    People use a lot of services and devices to transmit and retain sensitive personal information. A person could use daily: a work computer, a personal computer, multiple email addresses, a work cellphone, a personal cellphone, an e-reader or tablet, a fitness tracker or smart watch, and an Artificial Intelligence assistant (Amazon’s Echo, Apple’s Siri, Google’s Assistant, or Microsoft’s Cortana). The data retained or transmitted on these services and devices could include sensitive medical or other information, personal photos, financial data, and more.

    There’s also the issue of the collection of information that could lead to other data being learned. For example, I wrote recently about health-app data and the surprising results of scrutinizing it. A man was alarmed by his wife’s heart rate data, as collected by her Fitbit, and asked others for assistance analyzing it. One theory: She could be pregnant. Did you know that heart-rate changes could signal a pregnancy?

There is an ongoing controversy concerning the data possibly collected by an Amazon Echo. The Washington Post explains, “The Echo is equipped with seven microphones and responds to a ‘wake word,’ most commonly ‘Alexa.’ When it detects the wake word, it begins streaming audio to the cloud, including a fraction of a second of audio before the wake word, according to the Amazon website. A recording and transcription of the audio is logged and stored in the Amazon Alexa app and must be manually deleted later.” Arkansas police have served a warrant to Amazon, as they seek information recorded by a suspect’s Echo. Amazon has refused to comply with the warrant.

    Criminalizing the Reidentification of ‘Anonymized’ Data Won’t Solve the Privacy Issue

    Monday, October 17th, 2016

    For years, companies and institutions have been using “anonymization” or “deidentification” techniques and processes to release data concerning individuals, saying that the techniques will protect personal privacy and preclude the sensitive information from being linked back to an individual. Yet we have seen time and again that these processes haven’t worked.

For almost two decades, researchers have told us that anonymization of private information has significant problems, and individuals can be re-identified and have their privacy breached. (I wrote a blog post last year detailing some of the research concerning re-identification of anonymized data sets.)
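The typical failure mode is a linkage attack: even with names removed, the released records still carry quasi-identifiers (such as ZIP code, birth year, and sex) that can be joined against an auxiliary public dataset, like a voter roll, to put the names back. A toy sketch of the idea, with invented names and fields:

```python
# "Deidentified" health records: names removed, quasi-identifiers kept.
deidentified = [
    {"zip": "20001", "birth_year": 1975, "sex": "F", "diagnosis": "asthma"},
    {"zip": "20002", "birth_year": 1980, "sex": "M", "diagnosis": "diabetes"},
]

# Auxiliary public data (e.g. a voter roll) sharing those quasi-identifiers.
voter_roll = [
    {"name": "Alice Example", "zip": "20001", "birth_year": 1975, "sex": "F"},
    {"name": "Bob Example",   "zip": "20002", "birth_year": 1980, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(records, auxiliary):
    """Link each 'anonymous' record to any auxiliary record that
    matches on every quasi-identifier, recovering (name, diagnosis)."""
    matches = []
    for rec in records:
        key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
        for aux in auxiliary:
            if tuple(aux[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append((aux["name"], rec["diagnosis"]))
    return matches
```

When a combination of quasi-identifiers is unique in the population, as it often is, the join succeeds and the “anonymous” diagnosis is tied back to a name. This is why criminalizing re-identification punishes the messenger without fixing the underlying weakness.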

    Recently, Australian Attorney General George Brandis announced that he would seek to amend the country’s Privacy Act to “create a new criminal offence of re-identifying de-identified government data. It will also be an offence to counsel, procure, facilitate, or encourage anyone to do this, and to publish or communicate any re-identified dataset.”

According to the Guardian, the “impetus” for this announcement was a recent privacy problem with deidentified Medicare data, a problem uncovered by researchers. “A copy of an article published by the researchers outlines how every single Medicare data code was able to be reidentified by linking the dataset with other available information,” the Guardian reported.

    Federal Case and State Law Are Latest Moves to Curb Warrantless Use of Stingray Tech

    Monday, August 8th, 2016

    The Stingray surveillance technology, also called cell-site simulator technology, can gather a significant amount of personal data from individuals’ cellphones. A recent federal case in New York and a new law in Illinois aim to curtail the warrantless use of Stingrays.

    The technology simulates a cellphone tower so that nearby mobile devices will connect to it and reveal sensitive personal data, such as their location, text messages, voice calls, and other information. The Stingray surveillance technology vacuums information from every cellphone within its range, so innocent people’s private data are gathered, as well. It is a dragnet that can capture hundreds of innocent people, rather than just the suspect targeted.

    As I have discussed before, law enforcement officials are using this technology in secret. Documents obtained by the ACLU showed that the U.S. Marshals Service directed Florida police to hide the use of Stingray surveillance technology from judges, which meant the police lied in court documents. Sarasota police Sgt. Kenneth Castro sent an e-mail in April 2009 to colleagues at the North Port (Florida) Police Department: “In reports or depositions we simply refer to the assistance as ‘received information from a confidential source regarding the location of the suspect.’” A recent San Diego Union-Tribune investigation showed that local police are using the surveillance technology in routine investigations – not ones involving terrorism or national security.

Now, a federal judge in New York has thrown out Stingray evidence gathered without a warrant. The case is United States v. Lambis (pdf) in the Southern District of New York. Without a warrant, the Drug Enforcement Administration used a powerful cell-site simulator to determine that a cellphone was located in Raymond Lambis’s home. Agents then searched his home and found drugs and drug paraphernalia.

    As biometrics use expands, privacy questions continue to fester

    Tuesday, April 19th, 2016

    As the costs of the technologies fall, biometric identification tools — such as fingerprint, iris or voice-recognition scanners — are increasingly being used in everyday life. There are significant privacy questions that arise as biometric data is collected and used, sometimes without the knowledge or consent of the individuals being scanned.

Biometrics use has become more commonplace. Many smartphones, including iPhones, have fingerprint “touch” ID scanners that people can use instead of numeric passcodes. And law enforcement personnel have been using fingerprint scanners for years, both domestically and internationally. In the past few years, we’ve seen banks capturing customers’ voice prints, which the institutions say helps them fight fraud, and gyms asking members to identify themselves using their fingerprints. Reuters recently reported that companies are seeking to expand fingerprint-identification systems to credit cards and railway commuters.

And the voluntariness of a person’s submitting his or her biometric data has also been questioned. Do you realize when you’re calling your bank that you’re handing over your voice print? Another situation a few years ago in Washington, D.C., also raised the issue of voluntariness. The District considered requiring that all visitors to its jail “have their fingerprints scanned and checked against law enforcement databases for outstanding warrants.” So if you wanted to visit a friend or relative who was in the D.C. jail, you would have had to volunteer your biometric data. The plan was dropped after strong criticism from the public and civil rights groups.

    Your biometric data can be gathered for any number of innocuous reasons. For example, I had to submit my fingerprints to obtain my law license, not because of a crime. Family members, roommates and business colleagues of crime victims have submitted fingerprints in order to rule out “innocent” fingerprints at a crime scene in a home or workplace. Some “trusted traveler” airport programs gather iris scans. Some companies use iris-recognition technology for their security systems. Read more »