Face recognition offers security and law enforcement professionals unprecedented crime prevention powers that, not too long ago, would have been thought impossible. In an era where terrorists are more determined than ever to strike public places, and shoplifting and organized retail crime are quickly rising, security professionals can do something they could never have done before – remember every face on a criminal watchlist.
While facial recognition offers a tremendous upside to those seeking to protect the public, some have expressed concern over privacy. But in the era in which we live – where a single airport has thousands of traditional surveillance cameras – is it possible that face recognition may actually increase privacy? Here’s what five experts had to say regarding the state of biometric surveillance and privacy.
Gus Downing (The D&D Daily)
In a recent webinar, Gus Downing, publisher and editor of The D&D Daily, asserted that it will ultimately be consumers who decide whether to accept biometric surveillance. In the meantime, it's up to the industry to provide them with as much protection as possible. According to Downing, "As consumers will ultimately decide the path, it's up to our industry to lead the way and ensure that our application of this technology maximizes its results while protecting the consumers' privacy through standards and self-governance."
Thomas McCally (Carr Maloney)
Thomas McCally, owner-equity partner at the law firm Carr Maloney (and one of the leading legal experts on retail security), has asserted that face recognition is actually more private than many of the common technologies most of us use on a daily basis. According to McCally, "Look at the amount of tracking that goes on through web use, credit card use. They are tracking much more data about individuals than [face recognition]." McCally also sees face recognition as more private than traditional surveillance: "Face recognition is much less intrusive [than traditional surveillance], it doesn't see race, it doesn't see gender. It is just capturing metrics."
Tom Melzl (FaceFirst)
Experts seem to agree that certain criteria must be met in order for face recognition to protect people's privacy rather than inhibit it. According to FaceFirst CRO and President Tom Melzl, the burden of proof lies with vendors to establish that their products secure privacy. "Face recognition vendors need to prove that there is an anti-profiling system in place, prove that data is secure, prove that non-criminals are purged from the system," Melzl stated.
Lloyd Muenzer (ARJIS)
Earlier this year we hosted a webinar that featured Lloyd Muenzer, one of the top law enforcement technology experts and an analyst at the Automated Regional Justice Information System (ARJIS) in San Diego County. Muenzer spends a lot of time reviewing face recognition matches and has been impressed with how the technology prevents profiling of all sorts. According to Muenzer, "Face recognition doesn't care about hair color, eye color, hair length, facial hair, tattoos or even gender. It's all about the points on the face it's measuring. I look at every match myself and I'm here to tell you it makes no difference."
Read Hayes (LPRC)
Read Hayes, Ph.D., is the Director of the Loss Prevention Research Council. From Hayes' perspective, security professionals and facial recognition providers have a responsibility to ensure that a process is in place to secure people's privacy. In a recent webinar, Hayes stated, "We need a good process. We need to self-police before we have to look to the government to do that."
Want to learn more about face recognition privacy? Our recent webinar covers face recognition privacy issues from vendor, journalistic, academic and legal perspectives.