That’s a fake video… no it’s not… yes it is!!

For the past year or so, a day hasn’t gone by without my reading an article or post about Artificial Intelligence/Machine Learning. Just over the last weekend, I read about an ML-driven program that mastered the Rubik’s cube in 44 hours (MIT Technology Review), how Google’s algorithm can sometimes predict death in hospitals better than doctors (Business Insider), how a Goldman Sachs algorithm predicted the football World Cup winner (Algorithmxlab – and it’s Brazil), and at least half a dozen more.

But two posts from Gizmodo and Scien-Tech News caught my attention. I have written in the past about Deep Fakes, where you can create realistic fake videos of people. Like many other technologies, so far these have been used either as a research experiment (like this Obama video from the University of Washington) or to create porn! While fake porn is obviously very damaging, I think the potential damage these technologies can cause is tremendous and scary (think fake videos of political or religious leaders). So it was with some relief that I read this post from Gizmodo about researchers from SUNY who have written a paper about their algorithm that can detect fake videos. I thought this could act like an ‘expert witness’ in court when figuring out if a video is fake or real. Of course, by then it may be too late, but I thought this algorithm could also be deployed by social media giants to flag (and then take down) fake videos in real time.

Apparently, one flaw in fake videos is that the person in the video doesn’t blink!! Sometimes it is obvious to humans, but sometimes it isn’t. That’s because the rate of blinking when someone reads is only about 4.5 blinks per minute (as opposed to about 26 blinks per minute when talking). Since most political leaders read their speeches off a teleprompter, it is the blinking rate while reading that applies, NOT the blinking rate while speaking. As per this SUNY paper, traditional detection approaches depend on something called the eye aspect ratio (EAR) or on convolutional neural networks (CNN), while their algorithm improves on these with a recurrent neural network (RNN). I have absolutely no idea what that means – just read the paper if you really want to know. Here is the link again. But I did understand why fake videos don’t blink: the data sets used to train these algorithms are usually made of photos (not videos) of the person, and the photos available, especially of politicians, almost always show them with their eyes open!
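For the curious, here is roughly what that eye aspect ratio idea looks like in practice. This is my own minimal sketch, not the SUNY authors’ code; it assumes you already have the six landmark points around an eye per frame (from something like dlib’s 68-point face landmark predictor) and that you know the video’s frame rate.

```python
# Minimal sketch of the eye-aspect-ratio (EAR) blink heuristic.
# Assumes `eye` is the six (x, y) landmark points around one eye,
# ordered the way dlib's 68-point predictor orders them.
import numpy as np

def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    It stays roughly constant while the eye is open and
    drops sharply toward zero when the eye closes."""
    a = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

def blinks_per_minute(ear_series, fps, threshold=0.2):
    """Count blinks in a per-frame EAR series.
    The 0.2 closed-eye threshold is an assumption, not a universal constant;
    each contiguous dip below it counts as one blink."""
    closed = ear_series < threshold
    # a blink starts wherever `closed` flips from False to True
    blinks = np.sum(closed[1:] & ~closed[:-1]) + int(closed[0])
    minutes = len(ear_series) / fps / 60.0
    return blinks / minutes

# A clip of someone reading should land near ~4.5 blinks per minute;
# a fake trained on open-eye photos will often come out near zero.
```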

Unfortunately, just when I was feeling relieved, I read this Scien-Tech News article which reported that a Facebook machine learning system called a Generative Adversarial Network (GAN) (what’s with all the three-letter acronyms?) has figured out a way to ‘open’ eyes in photos where the subject’s eyes are closed. Yes, this one opens eyes that are closed – but if this direction works, I am sure it’s also possible to close eyes that are open, i.e., to make a fake video blink convincingly! So there you go – just when there is an algorithm that can detect fake videos, there is another one that can possibly evade it.
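The adversarial idea behind a GAN is simple to sketch, even though the real systems are enormous. The toy below is purely my illustration, not Facebook’s system: a generator learns to produce samples a discriminator can’t tell apart from real data, and the two networks are trained against each other.

```python
# Toy GAN training loop in PyTorch (illustrative only; real eye-inpainting
# GANs are far larger, but the adversarial loop has this same shape).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, 2) + 3.0       # stand-in for real data (e.g. open-eye patches)
    fake = G(torch.randn(32, 16))         # generator's attempt

    # Discriminator step: label real samples 1, fakes 0
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator into calling fakes real
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The unsettling part is that this same adversarial pressure is exactly what makes GAN output hard to detect: the generator is literally optimized to beat whatever tells real from fake.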

I guess, like white-hat and black-hat hackers, there will continue to be algorithms that can detect fake videos – and then algorithms that can beat those algorithms. I do hope the detection ones stay ahead. Or soon enough we may be in a world where even videos, which so far have been the ‘truth’, can no longer be trusted!
