Face recognition: State of the art

How well can your devices recognise you?

Looks matter. Before we get into a debate about whether a person’s looks play a crucial role in how they lead their life, let us clarify that this one’s about technology. It is no secret that the tech world is headed towards a situation where the way you look really matters, thanks to facial recognition and its growing ubiquity in everyday tech. While Apple’s iPhone X is by no means the first application of facial recognition, it has, almost inevitably, thrown a significant spotlight on the rising importance of the face as the new password. But at this point, does the available tech truly satisfy the use cases we are deploying it for? Let’s find out.

Face detection vs. face recognition

Snapchat brought face detection to mainstream users

Before we explore what face recognition can do (and how it does it), it’s important to understand the complexity of even the most basic face recognition systems. And the best way to do that is to compare it against a more rudimentary form of the technology – face detection. Detecting a face involves identifying the presence of certain features in an image – the eyes, a mouth, a nose and so on. This involves looking for specific shapes that fit within a range of parameters. The technology has been around for quite a while – pick up any digital camera of the past decade and you’ll probably see face detection as a feature. In today’s world, the ubiquitous face filters from Snapchat, Facebook, Instagram and more are examples of face detection in your everyday life.
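To make the distinction concrete, here is a minimal detection sketch in Python using the Haar cascade classifier that ships with OpenCV (the image file name is a placeholder):

    # Minimal face detection with OpenCV's bundled Haar cascade.
    # Assumes opencv-python is installed and "group_photo.jpg" exists locally.
    import cv2

    # Load the pretrained frontal-face cascade that ships with OpenCV
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    image = cv2.imread("group_photo.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Scan the image at multiple scales; each hit is an (x, y, w, h) box
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

    print(f"Detected {len(faces)} face(s)")
    cv2.imwrite("detected.jpg", image)

Note that this tells you only that faces exist and where they are – nothing about whose faces they are.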

Recognition, on the other hand, casts a much smaller net. A face recognition system not only needs to detect a face but also needs to know who that face belongs to. It could be matching the data against a database of employees, criminals, or even a single record holding the facial parameters of a phone’s owner. But in all of these cases, the margin of acceptance is a lot narrower, which immediately means that face recognition cannot be performed effectively unless images of a certain resolution are available. Thankfully, the front cameras in our everyday smartphones have become good enough to provide such images. From that point onward, it all depends on the software.
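The matching step is what sets recognition apart, and the toy sketch below illustrates the idea: a face descriptor is compared against enrolled templates, and anything outside a tight distance threshold is rejected. The vectors here are made up purely for illustration; a real system would compute them from the image:

    # Toy illustration of the matching step behind recognition.
    import numpy as np

    enrolled = {
        "owner": np.array([0.12, 0.87, 0.44, 0.31]),  # hypothetical stored template
    }

    def identify(descriptor, threshold=0.25):
        best_name, best_dist = None, float("inf")
        for name, template in enrolled.items():
            dist = np.linalg.norm(descriptor - template)  # Euclidean distance
            if dist < best_dist:
                best_name, best_dist = name, dist
        # The narrow acceptance band: reject anything outside the threshold
        return best_name if best_dist < threshold else "unknown"

    print(identify(np.array([0.11, 0.88, 0.45, 0.30])))  # close match -> "owner"
    print(identify(np.array([0.90, 0.10, 0.20, 0.70])))  # far away   -> "unknown"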

Algorithms

While a few different algorithms are popularly used for face recognition, the overall process is broadly similar and follows these steps, in order (a condensed sketch follows the list):

  • Face detection: Done via methods like a Haar cascade classifier or the Histogram of Oriented Gradients (HOG).
  • Face alignment: Using features like the eyes, the nose, etc.
  • Appearance normalisation: Removing unnecessary details like colour, extra objects etc.
  • Feature description: Parameters of the face are measured.
  • Feature extraction: Relevant parameters are extracted.
  • Matching: Extracted parameters are matched against pre-existing data-set.
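Here is a condensed sketch of that pipeline using OpenCV’s LBPH recogniser, which bundles the feature description, extraction and matching steps internally. Alignment is omitted for brevity, the file names and threshold are placeholders, and the cv2.face module requires the opencv-contrib-python package:

    # Condensed recognition pipeline: detect -> normalise -> describe/match.
    import cv2
    import numpy as np

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def normalised_face(path):
        """Detection + appearance normalisation: grayscale, crop, equalise, resize."""
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        x, y, w, h = detector.detectMultiScale(gray, 1.1, 5)[0]  # first face found
        face = cv2.equalizeHist(gray[y:y + h, x:x + w])          # reduce lighting variance
        return cv2.resize(face, (128, 128))

    # Feature description, extraction and matching happen inside LBPH,
    # which builds local-binary-pattern histograms and compares them.
    recogniser = cv2.face.LBPHFaceRecognizer_create()
    recogniser.train([normalised_face("alice_1.jpg"),
                      normalised_face("alice_2.jpg")], np.array([0, 0]))

    # LBPH returns a distance: lower means a closer match
    label, distance = recogniser.predict(normalised_face("unknown.jpg"))
    print("match" if distance < 60 else "no match", distance)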

Going into the details of every established approach would require a treatise, if not an entire book, on facial recognition. Methods like eigenfaces, linear discriminant analysis, elastic bunch graph matching using the Fisherface algorithm, the hidden Markov model, multilinear subspace learning using tensor representations, and neuronally motivated dynamic link matching have all been tried and tested over the years, and have been used in various combinations to make up the market-leading face recognition APIs of today.
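As a taste of one classical method, here is a minimal eigenfaces-style sketch: principal component analysis over flattened face images, followed by nearest-neighbour matching in the reduced space. It uses scikit-learn’s bundled LFW faces purely as demo data (downloaded on first run), and the component count is an arbitrary choice:

    # Minimal eigenfaces sketch: PCA + nearest-neighbour matching.
    import numpy as np
    from sklearn.datasets import fetch_lfw_people
    from sklearn.decomposition import PCA

    faces = fetch_lfw_people(min_faces_per_person=50)
    X, y = faces.data, faces.target          # each row is a flattened face image

    pca = PCA(n_components=100, whiten=True) # the "eigenfaces" are pca.components_
    train = pca.fit_transform(X[:-1])        # project all but one image

    probe = pca.transform(X[-1:])            # project the held-out face
    nearest = np.argmin(np.linalg.norm(train - probe, axis=1))
    print("predicted:", faces.target_names[y[nearest]],
          "actual:", faces.target_names[y[-1]])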

As you can see, recognition algorithms can detect a lot more than just how you look

In general, a couple of approaches recur across these algorithms. The statistical approach discards the visuals entirely and works with the measured parameters directly, as raw numbers, while the neural network approach is what currently defines the state of the art. That, combined with 3D modelling techniques, is what makes today’s face recognition systems what they are.
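The embedding idea behind the neural approach can be sketched in a few lines: a trained network maps every face image to a fixed-length vector, and recognition reduces to comparing vectors. The embeddings below are made-up stand-ins for the output of a real network, such as a FaceNet-style 128-dimensional model:

    # Sketch of the neural "embedding" approach to recognition.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    emb_enrolled = np.array([0.21, -0.53, 0.80, 0.17])  # owner's stored embedding
    emb_probe    = np.array([0.19, -0.50, 0.82, 0.20])  # embedding of a new photo

    # Same person if the vectors point in nearly the same direction
    print("match" if cosine_similarity(emb_enrolled, emb_probe) > 0.8 else "no match")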

3D face recognition

With 3-dimensional face recognition, things get truly state of the art. This method is, overall, far more accurate than its 2D counterparts and rivals fingerprint systems in accuracy. Some of the advantages of using 3D modelling for face recognition are pretty evident: for instance, it doesn’t rely nearly as much on environmental factors like lighting, viewing angle and head orientation.

Additionally, most 3D scanners of today capture visual data along with structural information, which allows most 3D recognition techniques to incorporate traditional 2D algorithms as well, increasing the overall accuracy. As a recent example, Apple’s iPhone X features a facial recognition system that reportedly utilises both 2D and 3D sensors to produce a more accurate result.
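A toy sketch of that combination is score-level fusion: each modality produces its own match score, and the final decision uses a weighted blend. The weights and threshold below are purely illustrative, not taken from any shipping system:

    # Toy score-level fusion of 2D and 3D match scores.
    def fused_match(score_2d: float, score_3d: float,
                    w_3d: float = 0.6, threshold: float = 0.75) -> bool:
        """Weighted fusion; the 3D channel gets more weight since depth
        is less sensitive to lighting and pose."""
        fused = w_3d * score_3d + (1.0 - w_3d) * score_2d
        return fused >= threshold

    print(fused_match(score_2d=0.70, score_3d=0.85))  # True: depth rescues a dim photo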

APIs and SDKs

From revolutionising payments to bringing a completely new dimension to personalised ads, there is a multitude of reasons why the world is eager for the state of the art in facial recognition tech. But not everybody possesses the technical know-how or the tools to work with it at the algorithmic level. That is where established APIs and SDKs come to the rescue. There are quite a few options available in the market:

  • Amazon Rekognition
  • Google Vision API
  • Microsoft Face API
  • IBM Watson
  • Cognitec
  • NEC
  • Affectiva
  • OpenCV

Amazon Rekognition is one of the most comprehensive offerings that has the added advantage of being well-supported on AWS

These APIs mainly differ in specific features. For instance, Amazon Rekognition charges only for the number of images you process and the face metadata you store, but it does not support video, which OpenCV does. OpenCV, in turn, is only available as an SDK and doesn’t detect age, gender and emotion, all of which Microsoft’s Face API and quite a few other alternatives do. The final decision on which API to go for rests entirely on the use case, as well as the volume of data it will handle.
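As an example of how little code a hosted API demands, here is a minimal call to Amazon Rekognition’s CompareFaces operation via boto3. It assumes AWS credentials are configured, the region and file names are placeholders, and the two JPEGs exist locally:

    # Minimal face comparison with Amazon Rekognition via boto3.
    import boto3

    client = boto3.client("rekognition", region_name="us-east-1")

    with open("id_photo.jpg", "rb") as src, open("selfie.jpg", "rb") as tgt:
        response = client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=80,  # only return matches above 80% similarity
        )

    for match in response["FaceMatches"]:
        print(f"Similarity: {match['Similarity']:.1f}%")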

Future of face recognition

Enough has been said about the privacy concerns surrounding face recognition and its ubiquitous use. While that is a discussion for another day, looking at the future of face recognition inevitably means exploring some of its more debatable plans. For instance, higher-resolution cameras, combined with thermal cameras, are being developed to identify health markers on your face (you can already detect a pulse using a webcam). Think of the future of facial recognition as being more encompassing in the metrics it can collect. Soon, you’ll have something not too different from a fingerprint being generated from the lines on your face (ear recognition is already quite a popular talking point in biometric authentication). Taking it one step further, facial recognition could incorporate muscle movement analysis (the way you smile, down to the muscles involved), bringing behavioural analysis into everyday consumer gadgets. It’s all about how precise things can get from here on.

Arnab Mukherjee