FPF testifies at face recognition hearing
16 January 2020 14:12 GMT

Rights group the Future of Privacy Forum (FPF) has announced that its senior counsel and director of AI and ethics, Brenda Leong, testified this week on the privacy and ethical implications of the commercial use of facial recognition technology.

“Technology has only accelerated the practice of identification and tracking of people’s movements, whether by governments, commercial businesses, or some combination thereof, leading to the real concerns about an ultimate state of ubiquitous surveillance,” wrote Leong. “How our society faces these challenges will determine how we move further into the conveniences of a digital world, while continuing to embrace our fundamental ideals of personal liberty and freedom.”

In her testimony, Leong emphasized that “not every camera-based system is a facial recognition system,” and that the term facial recognition is often broadly and confusingly used in reference to other image-based technology that does not necessarily involve individual identification.

“Understanding how particular image-analysis technology systems work is a critical foundation for effectively understanding and evaluating the risks of facial recognition,” Leong noted in her written testimony. To help educate policymakers, consumers, and others about the varying levels of facial image software and the benefits, risks, and privacy implications of each, FPF created the infographic Understanding Facial Detection, Characterization, and Recognition Technologies.

Leong outlined a set of privacy principles created by FPF that should be considered the foundation of any facial recognition-specific legislation, writing, “consent remains the critical factor, and should be tiered based on the level of personal identification collected or linked, and the associated increasing risk levels.” Leong highlighted that the default standard for consent should be an “opt-in” or “affirmative consent” model consistent with existing FTC guidelines.