
    Recent Stories About Facial Recognition, Eye Movement and Voice Print Biometric Technologies

    Here are some recent stories concerning biometric technologies, which can have implications for individuals’ privacy rights and civil liberties. (See this previous post for a discussion about the First Amendment right to free speech and how widespread identification technologies can affect that.) Slate reports that the FBI plans to give facial recognition software to local law enforcement agencies. “Instituting the technology nationally is the latest stage in the FBI’s $1 billion Next-Generation Identification program, which will also establish a system for searching a database of scars, marks, and tattoos,” Slate reports. In an article for the International Journal of Biometrics, researchers Youming Zhang, Jyrki Rasku and Martti Juhola discuss the use of eye movement to biometrically identify individuals. MIT Technology Review discusses new research on voice authentication technology that seeks to bring voice prints into widespread use for identification, as fingerprints are used today.

“FBI To Give Facial Recognition Software to Law-Enforcement Agencies,” Slate

    The speedy onward march of biometric technology continues. After recently announcing plans for a nationwide iris-scan database, the FBI has revealed it will soon hand out free facial-recognition software to law enforcement agencies across the United States.

    The software, which was piloted in February in Michigan, is expected to be rolled out within weeks. It will give police analysts access to a so-called “Universal Face Workstation” that can be used to compare submitted images against a database of almost 13 million images. The UFW will permit police to submit and enhance image files so they can be cross-referenced with others in the database for matches.

    Instituting the technology nationally is the latest stage in the FBI’s $1 billion Next-Generation Identification program, which will also establish a system for searching a database of scars, marks, and tattoos. The FBI’s Jerome Pender, who was recently named executive assistant director of the Information and Technology Branch, says in a statement that Hawaii, Maryland, South Carolina, Ohio, New Mexico, Kansas, Arizona, Tennessee, Nebraska, and Missouri have already expressed interest in trying out the UFW. Pender says that “full operational capability” for facial recognition is scheduled for the summer of 2014. […]

    In addition to privacy concerns, the UFW has another weak spot: It still isn’t that great at tracking people. Last year, the Wall Street Journal reported how human analysts still trump machines when it comes to comparing and identifying people from photographs. Poor-quality images or bad lighting, for instance, can render facial recognition almost useless.

    “Biometric verification of subjects using saccade eye movements,” International Journal of Biometrics, Martti Juhola et al. (AlphaGalileo pdf; archive pdf)

    Verification of a user or subject is generally seen as a situation in which the actual user of a computer has to be determined and other possible subjects should be determined as non-users or impostors (Bednarik et al., 2005; Chellappa et al., 2010). Identification is usually seen as a more extensive computational task, in which any individual can be identified and distinguished from others in a group of subjects. We can see the former as a binary classification problem and the latter as a multiclass classification problem. In the present research we describe a novel technique to utilise saccade eye movements for verification purposes, as a simulation to verify an actual user of a computer or some device including a measuring component for eye movements.

    Our motivation to develop a verification technique applying eye movements arose from our earlier, long-term research in the field of otoneurological eye movement studies, e.g., Aalto et al. (1989), Juhola et al. (1985, 1997, 2007), Juhola, (1986). Of course, one reason was the technical development over the last 15 years of new videocamera systems to facilitate eye movement studies for various purposes (Morimoto and Mimica, 2005). In addition, we noticed how the values of a few essential features computed from eye movements varied fairly clearly between individuals (Juhola et al., 2007) which formed a sound basis for an objective to exploit eye movements in the process of verifying subjects. As the research of eye movements for human-computer interaction is currently very active, we may assume that in the future such systems can be used to aid interaction with computers in addition to a mouse and keyboard by registering the targets of the user’s gaze on a computer screen. Maybe such videocamera systems will be like the webcameras of today, cheap and easy to use. Therefore a verification procedure based on eye movements would be a timely and expedient property for a computer system including eye movement cameras.
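    The excerpt's distinction between verification (binary: genuine user or impostor) and identification (multiclass: which enrolled user is this?) can be sketched in a few lines. This is an illustration only, not the paper's actual method: the feature vectors (standing in for saccade measurements such as amplitude, peak velocity, and duration), the distance measure, and the threshold are all invented for the example.

```python
# Illustrative sketch of verification vs. identification with biometric
# feature vectors. Feature values and the threshold are made up; the
# paper's real features and classifier are not reproduced here.

import math

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(template, sample, threshold):
    """Binary classification: accept the claimed identity (True) or
    reject as an impostor (False)."""
    return euclidean(template, sample) <= threshold

def identify(templates, sample):
    """Multiclass classification: return the enrolled user whose
    template is nearest to the sample."""
    return min(templates, key=lambda user: euclidean(templates[user], sample))

# Hypothetical enrolled templates (values invented for illustration).
enrolled = {"alice": [5.0, 300.0, 45.0], "bob": [8.0, 420.0, 60.0]}
probe = [5.2, 310.0, 44.0]

print(verify(enrolled["alice"], probe, threshold=15.0))  # True
print(identify(enrolled, probe))                         # alice
```

    In a deployed system the threshold would be tuned on genuine and impostor samples to balance false accepts against false rejects, rather than picked by hand as here.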

“Securing Your Voice,” MIT Technology Review

    Voice authentication is increasingly used by tens of millions of people, including bank and telecom customers: you record a sample upon enrollment, and then speak that passage each time you call in, confirming your identity with a certainty regular passwords can’t match. But if hackers obtain your voiceprint—under scenarios akin to breaches of credit-card and other personal data—they could use it to break into other systems that use voice authentication.

    Now researchers at Carnegie Mellon University say they’ve developed voice-verification technology that can transform your voice into a series of password-like data strings, in a process that can be handled on the average smart phone. Your actual voice never leaves your phone, during enrollment or later authentication.

    “We are the first to convert a voice recording to something like passwords,” says Bhiksha Raj, the CMU computer scientist who led the research. “With fingerprints, this is exactly what is done, but nobody has figured out how to do it with voice until now.” The work will be presented as a keynote speech at an information security conference in Passau, Germany next month.

    The technology handles the slight differences in the way people speak from day to day by making multiple password-like data strings using different mathematical functions. By comparing how many of those match, it can determine whether the speaker is the person who enrolled. […]

    The CMU system is accurate 95 percent of the time using a test dataset. (Errors would simply require a speaker to repeat the authentication process.) That’s not quite as good as commercial systems that use stored voiceprints, but the technology is still being honed, and improvements are expected, says Raj.
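    The matching scheme the article describes, deriving several password-like strings and accepting the speaker if enough of them match, can be imitated with quantization and hashing. This is a rough sketch under stated assumptions, not CMU's actual algorithm: the feature vectors, the random-offset quantization, and the match threshold are all invented for illustration. The point it shows is that small day-to-day drift in the features perturbs only some of the derived strings, so counting matches tolerates variation while only hashes, never the raw voiceprint, need to be compared.

```python
# Illustrative sketch (not CMU's algorithm): derive several password-like
# strings from a voice feature vector by quantizing it with different
# random offsets, then hash each quantized result. Authentication counts
# how many derived strings match the enrolled set.

import hashlib
import random

def derive_strings(features, n_strings=8, seed=0):
    """Quantize the feature vector n_strings different ways and hash
    each result. The seed fixes the offsets so enrollment and later
    logins derive comparable strings."""
    rng = random.Random(seed)
    strings = []
    for _ in range(n_strings):
        offsets = [rng.uniform(-0.5, 0.5) for _ in features]
        quantized = tuple(round(f + o) for f, o in zip(features, offsets))
        strings.append(hashlib.sha256(repr(quantized).encode()).hexdigest())
    return strings

def authenticate(enrolled, candidate, min_matches=5):
    """Accept if enough derived strings match the enrolled set."""
    matches = sum(1 for a, b in zip(enrolled, candidate) if a == b)
    return matches >= min_matches

# Hypothetical feature vectors: the same speaker with slight drift.
enrolled = derive_strings([1.0, 2.0, 3.0, 4.0])
today = derive_strings([1.05, 1.98, 3.02, 3.97])
print(authenticate(enrolled, today))
```

    The slightly drifted vector still matches most of the derived strings, while a very different speaker's features would match none of them, which mirrors the error-tolerant matching the article attributes to the system.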
