
    Archive for the ‘Anonymity’ Category

    Libraries Fight to Protect Users’ Rights to Privacy

    Friday, October 23rd, 2015

    A recent case in New Hampshire illustrates how libraries continue to be battlegrounds for privacy rights. The Kilton Public Library in Lebanon, N.H., a town of about 13,000 people, decided to join Tor, an anonymization network for online activities. It was a pilot for a bigger Tor relay system envisioned by the Library Freedom Project. According to Ars Technica, the Library Freedom Project seeks to set up Tor exit relays in libraries throughout the country. “As of now, only about 1,000 exit relays exist worldwide. If this plan is successful, it could vastly increase the scope and speed of the famed anonymizing network.”
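
For a sense of what a pilot like Kilton's involves technically: a Tor relay is configured through the torrc file, and an exit relay needs only a handful of directives. The sketch below is hypothetical (the nickname, contact address and exit policy are illustrative values, not Kilton's actual configuration):

```
# Minimal torrc sketch for a Tor exit relay (illustrative values only)
Nickname examplelibraryrelay
ORPort 9001
ExitRelay 1
# Allow only web traffic out; reject everything else
ExitPolicy accept *:80, accept *:443, reject *:*
ContactInfo admin@example.org
```

A restrictive ExitPolicy like this one is a common way for relay operators to limit abuse complaints while still contributing exit capacity to the network.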

The Department of Homeland Security learned of the pilot, ProPublica reported: “Soon after state authorities received an email about it from an agent at the Department of Homeland Security. [...] After a meeting at which local police and city officials discussed how Tor could be exploited by criminals, the library pulled the plug on the project.”

After much criticism of the DHS and local law enforcement interference, and petitions to reinstate the pilot project (including one from the Electronic Frontier Foundation), the Kilton library’s board voted a few weeks later to reinstate the project. “Alison Macrina, the founder of the Library Freedom Project which brought Tor to Kilton Public Library, said the risk of criminal activity taking place on Tor is not a sufficient reason to suspend its use. For comparison, she said, the city is not going to shut down its roads simply because some people choose to drive drunk,” the Valley News reported.

    Targeted Behavioral Advertising and What It Can Mean for Privacy

    Tuesday, September 8th, 2015

Targeted behavioral advertising is the practice of tracking a user’s online activity so that ads can be served based on the user’s behavior. What began as online data gathering has expanded: consumers’ habits are now collected and tracked both online and offline. There have been numerous news stories about this privacy and surveillance issue. One fundamental issue about targeted behavioral advertising divides industry and consumer advocates: opt-in versus opt-out. Opt-in, the choice of consumer advocates, puts the burden on companies to have strong privacy protections and use limitations so that consumers will choose to share their data. Opt-out, the choice of the majority of ad industry players, puts the burden on consumers to learn what the privacy policies are, whether they protect consumer data, with whom the data is shared and for what purpose, and how to opt out of this data collection, use and sharing.

    Companies can also buy information on individuals from data collectors. At times, the information can be wrong, causing problems for individuals. Read a previous post for more about data brokers.

What happens when data is gathered as a person browses the Internet? It can lead to innocuous advertisements for cars when you’re searching for a new vehicle, or for boots when you’re considering replacing ones that wore out last winter. Or it can lead to a more difficult situation, when ads for strollers and car seats show up on Web sites that you visit even though you had a miscarriage a month ago. It’s easy for advertisers to connect the dots when someone starts searching for infant safety gear or reading parenting Web sites, and the person is unable to opt out of targeted behavioral advertising.

    When Software Can Read Your Emotions as You Walk Down the Street

    Wednesday, April 22nd, 2015

I’ve written before about the increasing use of “digital signage.” What is “digital signage”? Most people have heard the term in connection with billboards or other screens that have cameras (and facial-recognition technology) to watch people watching ads in order to target advertising toward individuals. The data-gathering and surveillance practices raise substantial privacy questions.

    The Los Angeles Times reported on the expansion of these digital billboards and their use of facial-recognition biometric technology in casinos, Chicago-area bars and more. USA Today and the New York Times have detailed safety problems that can arise from these digital billboards. BBC News has reported on the use of digital billboards in the United Kingdom. The Wall Street Journal has reported on digital signage use in Japan.

Now, Wired reports on the more widespread use of software from the artificial intelligence startup Affectiva that “will read your emotional reactions” in real time. “Already, CBS has used it to determine how new shows might go down with viewers. And during the 2012 Presidential election, [Affectiva's chief science officer Rana el Kaliouby’s] team experimented with using it to track a sample of voters during a debate.”

    License-plate-reader Technology Continues to Raise Privacy, Civil Liberty Questions

    Thursday, March 26th, 2015

As the use of license-plate-recognition camera technology to gather and record drivers’ movements started becoming widespread in the United States, people asked a number of questions about the privacy, civil liberty and security implications of the surveillance technology. Last year, the Center for Investigative Reporting looked into privacy questions concerning the use of license-plate readers and found that “a leading maker of license-plate readers wants to merge the vehicle identification technology with other sources of identifying information.” A couple of years ago, the American Civil Liberties Union released a report (pdf) on license-plate readers and how they are used as surveillance devices.

    And law enforcement is concerned about how such tech affects privacy rights, as well. In 2009, the International Association of Chiefs of Police issued a report on license-plate-recognition technology and said, “Recording driving habits could implicate First Amendment concerns. [...] Mobile LPR units could read and collect the license plate numbers of vehicles parked at addiction counseling meetings, doctors’ offices, health clinics, or even staging areas for political protests.” The privacy and civil liberty questions have led to the cancellation of some license-plate-recognition surveillance programs, including ones in Boston and by the Department of Homeland Security.

    One of the biggest questions is: What happens to all the data on innocent individuals? Often, we don’t know what the restrictions are on the collection and use of the data. We have learned some information about what some groups do with the data. Last year, the Washington Post reported that commercial databases gather such location data to sell. In 2013, the ACLU review of license-plate-reader camera technology found that “the approach in Pittsburg, Calif., is typical: a police policy document there says that license plate readers can be used for ‘any routine patrol operation or criminal investigation,’ adding, ‘reasonable suspicion or probable cause is not required.’ [...] As New York’s Scarsdale Police Department put it in one document, the use of license plate readers ‘is only limited by the officer’s imagination.’” In 2011, the Washington Post reported that Virginia used the license-plate scanning technology for tax collection.

Now, as a result of a public records request, Ars Technica has received the entire license-plate-reader dataset of the Oakland Police Department, “including more than 4.6 million reads of over 1.1 million unique plates between December 23, 2010 and May 31, 2014.” And it’s interesting to see what personal information can be gleaned from the surveillance data.


    Privacy Problems Continue with Anonymization of Data

    Friday, February 6th, 2015

In a recent article for Science, researchers Yves-Alexandre de Montjoye, Laura Radaelli, Vivek Kumar Singh, and Alex “Sandy” Pentland showed that the “anonymization” of personal data is not a guarantee of privacy for individuals. Before we discuss their study, consider that researchers have been telling us for almost two decades that anonymization, or “de-identification,” of private information has significant problems, and that individuals can be re-identified and have their privacy breached.

    Latanya Sweeney has been researching the issue of de-anonymization or re-identification of data for years. (She has taught at Harvard and Carnegie Mellon and has been the chief technologist for the Federal Trade Commission.) In 1998, she explained how a former governor of Massachusetts had his full medical record re-identified by cross-referencing Census information with de-identified health data. Sweeney also found that, with birth date alone, 12 percent of a population of voters can be re-identified. With birth date and gender, that number increases to 29 percent, and with birth date and Zip code it increases to 69 percent. In 2000, Sweeney found that 87 percent of the U.S. population could be identified with birth date, gender and Zip code. She used 1990 Census data.
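
The linkage technique behind these findings is simple to sketch: join a “de-identified” dataset against a public record on the quasi-identifiers (birth date, gender, Zip code); wherever the combination is unique, the record is re-identified. The toy records and names below are entirely hypothetical, a minimal illustration rather than Sweeney’s actual method or data:

```python
# Hypothetical sketch of a linkage (re-identification) attack: joining a
# "de-identified" health dataset with a public voter roll on the
# quasi-identifiers birth date, sex, and Zip code. All records are made up.

medical = [  # de-identified: names removed, quasi-identifiers retained
    {"dob": "1945-07-31", "sex": "M", "zip": "02138", "diagnosis": "hypertension"},
    {"dob": "1971-03-02", "sex": "F", "zip": "03766", "diagnosis": "asthma"},
]

voters = [  # public record: names alongside the same quasi-identifiers
    {"name": "A. Smith", "dob": "1945-07-31", "sex": "M", "zip": "02138"},
    {"name": "J. Doe",   "dob": "1982-11-09", "sex": "F", "zip": "10001"},
]

def link(medical, voters):
    """Return (name, diagnosis) pairs where quasi-identifiers match uniquely."""
    key = lambda r: (r["dob"], r["sex"], r["zip"])
    index = {}
    for v in voters:
        index.setdefault(key(v), []).append(v["name"])
    hits = []
    for m in medical:
        names = index.get(key(m), [])
        if len(names) == 1:  # a unique match means the record is re-identified
            hits.append((names[0], m["diagnosis"]))
    return hits

print(link(medical, voters))  # [('A. Smith', 'hypertension')]
```

The first “anonymous” medical record matches exactly one voter, so its diagnosis is tied back to a name; the second has no match and stays anonymous. Sweeney’s percentages measure how often that unique-match condition holds in real populations.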

In 2008, University of Texas researchers Arvind Narayanan and Vitaly Shmatikov were able to re-identify (pdf) individuals from a dataset that Netflix had released, data that the video-rental and -streaming service had said was anonymized. The researchers said, “Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information.”

    Update: Netherlands Threatens to Fine Google Over Privacy Policy

    Tuesday, December 16th, 2014

    In the ongoing case concerning Google’s changes to its privacy policies a couple of years ago, the Netherlands announced that it will fine the Internet services giant if it doesn’t meet certain requirements by February 2015. “The Dutch Data Protection Authority (Dutch DPA) has imposed an incremental penalty payment on Google. This sanction may amount to 15 million euros. The reason for the sanction is that Google is acting in breach of several provisions of the Dutch data protection act with its new privacy policy, introduced in 2012.”

Here’s a recap of the controversy and legal questions surrounding Google’s change to its privacy policies. In January 2012, Google announced changes in its privacy policies that would affect users of its services, such as search, Gmail, Google+ and YouTube. Advocates and legislators questioned the changes, saying that there were privacy issues, and criticized (pdf) the Internet services giant for not including an opt-out provision. The critics included 36 U.S. state attorneys general, who wrote to (pdf) Google raising privacy and security questions about the announced privacy policy changes. The EU’s Article 29 Data Protection Working Party wrote (pdf) to the online services giant about the privacy policy changes, which affected 60 Google services. The Working Party, which includes data protection authorities from all 27 European Union member states as well as the European Data Protection Supervisor, asked Google to halt implementation of these changes while the data protection authority in France (the National Commission for Computing and Civil Liberties, CNIL) investigated. Google refused, and its new privacy policies went into effect in March 2012. The CNIL investigation continued, and in January, CNIL fined the Internet services giant €150,000 ($204,000) over privacy violations.