Even before today's cutting-edge technology was available, fake news in the form of falsified images and edited videos was a serious problem for society. Today, thanks to advances in artificial intelligence and machine learning, a computer application can produce fakes that are near-indistinguishable from the real thing. The most notable example of this technology is the deepfake, built on popular AI libraries like TensorFlow. We've seen world leaders apparently saying things they never said, fake pornographic material and more. The capabilities of this technology are endless, and they're only getting better every day.
The key phrase in the introductory paragraph was near-indistinguishable. It is not impossible to identify these fakes: certain characteristics can still give them away to the naked eye, and technology is in the works that can detect them with higher accuracy than any human can manage. Interested to learn how? Let us show you.
Step 1: Common sense
When faced with a video on the internet that is unlikely to be real, common sense dictates that it is almost certainly a fake. Did Nicolas Cage really play Lois Lane in Man of Steel? Did a multi-millionaire celebrity who is also a noted feminist icon really shoot a risque video for the internet?
It doesn’t take anything more than common sense to know that this is fake
Your gut answers to these questions are usually also the truth. If such a video were real, there would be corroborating coverage across the internet to verify its authenticity. A couple of tips along these lines:
- If the video is very blurry, low resolution, shot in poor light, or set in a location hidden between buildings or obscured by trees, don't trust its authenticity.
- If the camera is perfectly still in a video that the shooter supposedly just happened to record, they probably had a tripod, and the fake was planned.
Step 2: Visual Cues
This is where most of the action is. There are numerous things that we subconsciously observe in our day-to-day lives, as well as when we watch authentic videos. When a potentially fake video falls short on these signs, it registers in our brain and generates an uncanny feeling. Some of these signs, and the reasons behind them, are:
- Staring match: In many cases, the software that superimposes one person's face onto another person's body takes its source data from huge datasets of photographs. These datasets are scraped from the internet and have one particular flaw that makes the fakes easier to detect: almost all the photos show people with their eyes open, because nobody saves the botched selfies or groupfies where someone blinked. Hence, an AI trained on this data creates a visual where the target individual seems to be locked in an eternal staring match with an unknown contender, blinking far less often than a real person would.
In a staring match between a fake video and you, you always lose (Source: YouTube)
- Lip sync: While audio editing software can now modify speech to sound near-identical to a target, most fake-generating tools have not achieved a level of sophistication where the mouth matches the audio accurately. Even accounting for a professional mimicry artist, in most doctored videos today the disparity between lip movements and the actual speech is too evident. Listen to a couple of verified videos of the person involved to familiarize yourself with their intonation, pronunciation and so on, and you'll be far better placed to spot the difference.
- Uncoordinated facial movements: Keep an eye out for facial movements that you cannot explain or understand. For instance, the face and the mouth might move in completely different directions; the same goes for the eyes, and for coordination with the rest of the body. Jerky, out-of-place movements are the result of an unfiltered data source and a purely computational fake-generating process with no human intervention.
- Lighting: A crucial aspect of all videos, especially outdoor ones, lighting can often be a dead giveaway for a faked video. A simple way to spot outdoor fakes is to plot the location of the sun from the shadows it creates. This isn't as difficult as it sounds: take a frame where something questionable is happening and draw lines along the shadows cast by all the objects. If the lines don't meet at roughly the same point, you're dealing with a fake. And if objects are missing shadows to begin with, that's a definite fake.
The viral video of an eagle snatching a kid was debunked by plotting the sun’s location based on the shadows (Source: YouTube)
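The staring-match cue can even be checked numerically. One common approach in the research literature (not named in this article, so treat the specifics as an illustrative assumption) is the eye aspect ratio (EAR): given six landmark points around an eye from any facial-landmark detector, the ratio of the eye's height to its width collapses toward zero during a blink. A minimal sketch in Python with NumPy, assuming the landmarks are supplied by an external detector:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio from six landmarks ordered p1..p6 around
    the eye: (vertical1 + vertical2) / (2 * horizontal)."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def blink_count(ear_series, threshold=0.2):
    """Count blinks as runs of frames where the EAR dips below
    the threshold."""
    below = np.asarray(ear_series) < threshold
    # a blink starts wherever `below` flips from False to True
    starts = below[1:] & ~below[:-1]
    return int(starts.sum() + (1 if below[0] else 0))
```

On genuine footage the EAR trace should dip every few seconds; a long, suspiciously blink-free trace supports the staring-match diagnosis.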
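The shadow test from the lighting tip can be sketched in code as well. Treating each drawn shadow line as a 2-D line in homogeneous coordinates, all pairwise intersections should cluster around a single point (the projected sun position); a large spread suggests inconsistent lighting. This is a rough 2-D approximation of the full projective-geometry analysis, with hypothetical inputs:

```python
import numpy as np

def line_through(p, q):
    # homogeneous line through two 2-D points
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def shadow_spread(segments):
    """segments: list of (object_base, shadow_tip) point pairs.
    Returns the max distance of any pairwise line intersection
    from their centroid; near zero means consistent lighting."""
    lines = [line_through(p, q) for p, q in segments]
    pts = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            x = np.cross(lines[i], lines[j])
            if abs(x[2]) < 1e-9:
                continue  # parallel shadows (sun at infinity); skip
            pts.append(x[:2] / x[2])
    if len(pts) < 2:
        return 0.0
    pts = np.array(pts)
    centroid = pts.mean(axis=0)
    return float(np.max(np.linalg.norm(pts - centroid, axis=1)))
```

Compare the returned spread against a tolerance chosen for the image scale; shadows cast by a single sun should give a spread near zero.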
Step 3: Cutting Edge
The threat that professionally made fake videos pose to the world is not lost on anyone, least of all programmers and researchers. So, apart from ways to detect these videos on your own, there are tools and projects in the works that can detect fake videos for you. Some have proven effective, while others are still in the testing stages. Several of these methods rely on spotting changes that are imperceptible to human cognition.
A team of researchers from the University of Trento in Italy and Dartmouth College came up with a method to differentiate real faces from CGI ones. Essentially, a human face changes colour ever so slightly with every pulse: as blood flows into the face it turns slightly redder, and as it flows out it turns slightly greener. Their program detected this phenomenon and could pick out face-swapped deepfake videos, because the faces generated for those videos were based on databases of still images and lacked the periodic variation a pulse produces. However, Hany Farid, one of the researchers behind this technique, doesn't think it'll take long for some random Redditor to figure this out and encode a pulse into deepfakes as well.
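The idea behind the pulse check can be illustrated with a simple frequency-domain sketch. This is not the researchers' published code; the function below is an assumed toy version: average the green channel of the face region in each frame, then measure how much of the signal's power falls in the human heart-rate band.

```python
import numpy as np

def pulse_strength(frames, fps=30.0, band=(0.7, 3.0)):
    """Fraction of the mean-green trace's spectral power in the
    human heart-rate band (~42-180 bpm).
    frames: iterable of HxWx3 RGB arrays of the face region."""
    green = np.array([f[..., 1].mean() for f in frames], dtype=float)
    green -= green.mean()
    spectrum = np.abs(np.fft.rfft(green)) ** 2
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    total = spectrum[1:].sum()  # skip the DC component
    if total == 0:
        return 0.0
    in_band = spectrum[(freqs >= band[0]) & (freqs <= band[1])].sum()
    return float(in_band / total)
```

On real footage the trace should show a clear peak inside the band; a face synthesized from still images would score near zero.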
The colour change of the human face with every pulse
Step 4: Look beyond technology for an answer
No, we’re not talking about using common sense to detect videos that have been modified with technology. Some of the most effective fake videos aren’t really fake at all, just taken out of context. India currently faces a fake-media epidemic, where low-tech fakes circulated via social media and messaging apps have caused more damage than any tech-infused forgery has. Most of these fakes simply carry captions that put the video into a false context, and people don’t bother to check the authenticity, source or translation accuracy of videos originally recorded in foreign languages. Their effectiveness relies on your tendency to take things at face value and not verify the facts, especially when the video suits an agenda you support.
Getting past this behaviour is really important. The technology to make better fake videos will catch up with and overtake the technology to detect it, and this ‘arms race’ between the two sides will keep going as long as tech keeps progressing. How we react to the next viral video or WhatsApp forward is what will make the real difference.