By: Peter Trepp February 28, 2019

This blog is syndicated from The New Rules of Privacy: Building Loyalty with Connected Consumers in the Age of Face Recognition and AI. To learn more, click here.

Since the invention of face recognition in the 1960s, has any single technology sparked more fascination for public safety officials, companies, journalists and Hollywood?

When people learn that I’m the CEO of a face recognition company, they commonly reference its fictional use in shows like CSI, Black Mirror or even films such as the 1980s James Bond movie A View to a Kill. Most often, however, they mention Minority Report starring Tom Cruise. For the uninitiated, the film is based on a futuristic Philip K. Dick short story and is set in the year 2054. The plot focuses on Washington, D.C.’s “PreCrime” police unit, which stops murderers before they act, reducing the murder rate to zero. Murders are predicted by three mutated humans, called “precogs”, who receive visions of the future. The central theme of the story is free will versus predeterminism. What people often remember most, however, is the film’s facial recognition surveillance, which identifies all individuals as they move about their daily lives.


While the ability to recognize individuals in real time has become reality, in Western countries like the United States, most people in a face recognition database of documented persons of interest are included because of a prior offense or a series of them. Retailers, for example, commonly apprehend people attempting to steal from their stores. These individuals are often photographed, and the resulting images are uploaded into a private face recognition database. Because shoplifters are often serial offenders – the average shoplifter steals 48 times before being caught – it is extremely likely that the person will return, at which point an alert directs in-store security to observe the individual or offer customer service. The result is reduced theft and a much smaller chance of violence. In this way, face recognition has partially fulfilled Dick’s vision of using technology to prevent crime.

However, you’ll notice that there are no “precogs” in this scenario. It’s too early to say what may happen in China, Russia or a handful of other countries, but in Western society, there is very little interest in using an AI-based technology to predict whether someone will become a criminal in the first place.

Accordingly, we must ask ourselves this question: should we stop every single crime, even if we can? We live in a world where some lawlessness is accepted. For example, as I’m writing this, my hometown of Los Angeles is crawling with residents on motorized scooters. Most are riding without helmets, and some are under 18, making the activity illegal in many areas of the city. Should face recognition and other AI-powered technologies be used to prevent even petty crimes?

Much like the Internet, GPS and many other technologies commonly associated with consumer products today, face recognition’s roots are firmly planted in the defense and law enforcement sectors. To understand how the technology is evolving and its implications for privacy, it’s important to understand where it came from. Here are the major milestones since the 1960s:

  • 1960s: With funding from an unnamed intelligence agency, Woodrow Wilson Bledsoe creates the first manual facial measurements using electromagnetic pulses.
  • 1970s: Researchers Goldstein, Harmon, and Lesk establish 21 points of facial measurement.
  • 1988: Kirby and Sirovich establish a normalized face image using fewer than 100 points of facial measurement, applying linear algebra.
  • 1991: Turk and Pentland invent the first crude automatic face detection from images.
  • 1993: Defense Advanced Research Projects Agency (DARPA) creates the first basic database of facial images.
  • 2001: A face recognition database of 856 people is used at Super Bowl XXXV. The experiment fails.
  • 2003: DARPA database upgrades to 24-bit color facial images.
  • 2004: The National Institute of Standards and Technology (NIST) launches the Face Recognition Grand Challenge to evaluate face recognition systems.
  • 2009: Pinellas County Sheriff’s Office creates a forensic face recognition database.
  • 2010: Facebook creates image identity auto-tagging using face recognition.
  • 2011: Panama Airport installs first face recognition surveillance system.
  • 2011: Body of terror mastermind Osama bin Laden positively identified using face recognition.
  • 2013: FaceFirst achieves effective real-time mobile match alerts over a cellular connection.
  • 2014: Automated Regional Justice Information System (ARJIS) deploys a cross-agency system in southern California, sharing criminal face recognition data across local, state and federal agencies.
  • 2016: U.S. Customs and Border Protection (CBP) deploys exit face recognition at Atlanta Airport.
  • 2017: The iPhone X, featuring face recognition access control, becomes the world’s top-selling phone.
  • 2018: FaceFirst achieves 150,000 facial points of measurement, including the ability to determine identity from 90-degree profile images.
  • 2018: Japan announces that it will use face recognition to verify the identity of athletes at the 2020 Olympic Games.

Most people are surprised to learn that face recognition was invented back in the 1960s. In the beginning, it wasn’t very useful. Nor was it dependable in the 1970s, 1980s, 1990s or even the early 2000s. Early adopters – banks, event managers, forensic investigators and law enforcement agencies – tried to use it with little success, resulting in plenty of failure, frustration and bad press, because the technology simply wasn’t ready for prime time.

Only in the past few years has face recognition become accurate and fast enough to begin to fulfill the dreams that futurists had decades ago.

AI vs. AAI

As exciting as the future of this technology is, we have to be careful not to label applications as AI when they are not. Plenty of innovations genuinely fit the real definition of AI as machine intelligence, but in many cases what gets described as AI is really just a sophisticated algorithm or some complex data crunching. I call this phenomenon AAI, or Artificial Artificial Intelligence.

Dan Ariely, Duke Professor of Psychology and Behavioral Economics, likens it to teenage sex: “Everyone talks about it, nobody really knows how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it.” Unfortunately, this happens all the time in the media and in venture capital pitch meetings.

A great example is Google Search. Google built its search engine around PageRank, part of a ranking algorithm that weighed dozens and eventually hundreds of factors to determine the order in which Google displayed results. For years, that ranking was essentially a strict set of rules set by humans: developers periodically tweaked the algorithm to boost the importance of certain ranking factors. But in 2015, the company began using an additional layer called RankBrain, which uses a neural network to help determine search results. In other words, the product now improves on its own, meaning that Google has ceded some control over the way its product works to AI. This is true artificial intelligence.
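
To make the distinction concrete, here is a minimal sketch of the PageRank idea in Python. It is a toy illustration, not Google’s actual implementation: the three-page link graph is hypothetical, and real search ranking blends many more signals. Notice that nothing in this loop changes its own rules – every number in it was chosen by a person.

```python
# Toy PageRank: a fixed, human-written formula that scores pages by the
# links pointing to them. The link graph below is purely hypothetical.

DAMPING = 0.85      # standard damping factor from the original paper
ITERATIONS = 50

# page -> pages it links to
links = {
    "home": ["about", "blog"],
    "about": ["home"],
    "blog": ["home", "about"],
}

pages = list(links)
rank = {page: 1.0 / len(pages) for page in pages}  # start with equal scores

for _ in range(ITERATIONS):
    # Each page passes a share of its score to the pages it links to.
    new_rank = {page: (1.0 - DAMPING) / len(pages) for page in pages}
    for page, outgoing in links.items():
        share = DAMPING * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

# Higher score -> higher placement in results, all else being equal.
for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

RankBrain sits on top of hand-built signals like this one and is the part that learns on its own, which is exactly the line that separates AI from AAI.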

In other cases, the promise of AI has not yet been fully realized. For example, in 2016, Salesforce announced Einstein, an AI product that was supposed to make predictions based on Salesforce data to help sales reps identify hot leads better than before. As of 2018, it was difficult to find concrete success stories, and the company still would not reveal how many of its customers were using the platform to act on Einstein’s predictions. Einstein and applications like it may yet be wildly successful, but it may take time. AI will fuel an enormous amount of the technology we use every day, and most people will hardly notice the extraordinary achievement behind it. Having said that, the evolution of any technology is almost always bumpy, and the businesses building AI solutions are no exception. I believe savvy technology investors understand this and will stay the course, but everyone should be wary of the difference between AI and AAI.

Understanding AI’s Role in Face Recognition

The face recognition timeline illustrates an amazing maturation of the technology in a relatively short amount of time. While face recognition and AI can be discussed as separate mechanisms – and I occasionally do so for clarity – it is important to note that they are intertwined: virtually any powerful contemporary face recognition system has been developed using some aspect of AI.

As a field of study, AI has come a long way since its origin at the Dartmouth Summer Research Project on Artificial Intelligence, a 1956 summer workshop. The weeks-long project was an extended brainstorming session among 11 mathematicians and scientists. In the 1960s, the U.S. Department of Defense began training computers to mimic human reasoning. The Defense Advanced Research Projects Agency (DARPA), for example, completed street mapping projects in the 1970s, an early precursor to Google Maps route-mapping.

Eventually, these methods gave way to neural networks and deep learning. An artificial neural network (ANN) is a collection of connected units called artificial neurons, inspired by the architecture of the human brain. Deep learning is a subfield of machine learning that uses many-layered neural networks. Together, they are responsible for many of the dramatic improvements in the perception that face recognition relies on. But while we can use machine learning to feed data to a face recognition algorithm to help it recognize people wearing hats, for example, the resulting model is too complex for humans to fully understand.
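
As a rough illustration of where the neural network fits, here is a Python sketch of the pattern many modern systems follow – a generic outline, not FaceFirst’s or any particular vendor’s implementation. A deep network reduces a face image to a fixed-length vector (an “embedding”), and identification becomes a similarity search over stored embeddings. The names, the 128-dimension size and the match threshold below are all illustrative assumptions.

```python
# Generic face-matching pattern: compare a new embedding against a small
# database of stored embeddings using cosine similarity. The deep network
# that would normally produce these vectors is stubbed out with random data.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embeddings; values near 1.0 suggest a likely match."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Hypothetical 128-dimensional embeddings a trained network might emit.
database = {
    "person_a": rng.normal(size=128),
    "person_b": rng.normal(size=128),
}

# Embedding computed from a new camera frame; here it is a noisy copy of
# person_a's stored vector, standing in for the network's real output.
probe = database["person_a"] + rng.normal(scale=0.1, size=128)

MATCH_THRESHOLD = 0.8  # tuned per deployment; below this, no alert fires
best_name, best_score = max(
    ((name, cosine_similarity(probe, emb)) for name, emb in database.items()),
    key=lambda pair: pair[1],
)

if best_score >= MATCH_THRESHOLD:
    print(f"Match: {best_name} (similarity {best_score:.2f})")
else:
    print("No match above threshold")
```

The matching step is simple arithmetic; it is the network producing the embeddings whose internal reasoning is the hard-to-explain part.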

However, here is one way to think about it. Imagine, for example, writing a simple formula in Excel. Every time new numbers (data) are introduced to that formula, it spits out an answer. The program is not asked to do anything other than calculate an answer based on the fixed math of that formula, no matter how large or complex that formula may be.

Now imagine introducing a very large data set to that formula and instructing that formula to look for combinations and patterns and then learn from that data. Based on this process, the formula begins to change and learn what to look for and how to make that calculation better and quicker. How did the formula change? Why did it choose to calculate that data using a different mathematical approach? That’s the part that is in a “black box” and hard for data scientists (and the rest of us) to fully understand.
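
The same contrast can be put into code. In this toy Python sketch (an illustration of the analogy, not of any production system; the data set and numbers are made up), the first function is the fixed, Excel-style formula a human wrote. The second result comes from a least-squares fit – about the simplest possible “learning” step – where the data itself chooses the coefficients.

```python
# The Excel analogy in code: a rule written by a person versus a rule
# whose numbers are chosen by fitting the data it is shown.
import numpy as np

def fixed_formula(x: float) -> float:
    # A spreadsheet-style rule a human wrote once; it never changes.
    return 3.0 * x + 7.0

# Toy data set with a hidden pattern (y ~ 2.5x + 4) plus some noise.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 2.5 * x + 4.0 + rng.normal(scale=0.5, size=200)

# Least-squares fit: the program picks its own slope and intercept.
slope, intercept = np.polyfit(x, y, deg=1)

print(f"Human-written rule at x=5:  {fixed_formula(5.0):.2f}")
print(f"Learned rule at x=5:        {slope * 5.0 + intercept:.2f}")
print(f"Learned coefficients:       slope={slope:.2f}, intercept={intercept:.2f}")
```

With only two coefficients, you can still see exactly what changed and why. A deep network makes the same kind of data-driven adjustment across millions of parameters at once, which is why its choices end up in that black box.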

The more impressive the technology grows, and the more complex it is, the less we are able to actually understand how it works. As MIT Technology Review editor Will Knight wrote in 2017, “You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers.” Despite these challenges, deep learning made it possible for DARPA to produce intelligent personal assistants long before Apple’s Siri, and the same techniques are just as critical to the rapid improvement in face recognition accuracy and speed.

In the years ahead, Westerners will have increasingly exciting but difficult choices to make. Some of those choices will depend on how we feel about AI. Let’s face it – not knowing exactly how an AI works can be somewhat unsettling. It’s the primary reason British scientist Stephen Hawking sounded alarms in 2014, warning that the technology could take off on its own and re-design itself beyond human control: “The development of full artificial intelligence could spell the end of the human race.” Tesla and SpaceX CEO Elon Musk has likewise opined that AI could “create an immortal dictator from which we can never escape.” If these insightful individuals are predicting doomsday, should we be taking this much more seriously?

Fortunately, some very smart people are already attempting to solve that problem. The Defense Advanced Research Projects Agency (DARPA) is overseeing the aptly named Explainable Artificial Intelligence program. Its stated goal is to “enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.” Furthermore, new systems will have the ability to “explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.”

That sounds a lot like Data, the flawed and benevolent android from the Star Trek series. But will reality be that neat and tidy? As a society, we now have decisions to make. How will we responsibly employ AI? How can we make sure that we avoid being enslaved by it? How can we better understand how it works so that we can control it as much as possible?

One thing is for sure: as we attempt to tackle these questions, the future is going to be anything but boring.