Dark side of AI: Post-Truth to Post-Reality

One of the twenty-five or so people I follow on Twitter is Sam Altman. For those of you who may not know, Sam Altman heads Y Combinator. If startups are food, then Y Combinator is the kitchen of a Michelin-star restaurant. If startups are cars, then Y Combinator is the Koenigsegg factory, and so on… you get the picture. It’s the launch pad every startup dreams of. Getting into a Y Combinator incubator class virtually guarantees generous funding offers from VCs.
Entry into Y Combinator’s incubator is so coveted that they get their pick of the best startups globally and, of course, insight into all the cutting-edge technologies startups are working on.

So when Sam tweets about a technology, I, along with the rest of the world, take note.

Here’s what Sam tweeted a few days back:

There was a recent subreddit (now removed) where an unsavory user had posted an R-rated film with the lead’s face replaced with that of a famous actress (read here). Soon there were subreddits dedicated to posting applications that required no software expertise. The unethical use apart, it was actually a great example of democratizing complicated technology – complex technology distilled into an app. Thanks to advances in machine learning and artificial intelligence, large data sets of faces can be used to train an algorithm – which then creates face masks based on target ‘expressions’ and manipulates other images to fit those ‘expressions’. It then trains itself (see my other post here) and gets better and better until the output looks real (scared yet?).
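The idea described above – learn a shared representation of pose and ‘expression’ from many face images, then reconstruct a different identity from that representation – can be sketched as a toy linear autoencoder with one shared encoder and one decoder per person. Everything here (the dimensions, the plain-NumPy layers, the training loop) is an illustrative assumption for intuition, not the actual app’s implementation:

```python
import numpy as np

# Toy sketch of the shared-encoder / twin-decoder idea behind
# face-swap ("deepfake") models. All names, sizes, and data are
# illustrative assumptions, not any real implementation.

rng = np.random.default_rng(0)
DIM, LATENT = 64, 8  # flattened "image" size and bottleneck size

# One shared encoder, one decoder per identity (linear layers only).
W_enc = rng.normal(scale=0.1, size=(LATENT, DIM))
W_dec_a = rng.normal(scale=0.1, size=(DIM, LATENT))
W_dec_b = rng.normal(scale=0.1, size=(DIM, LATENT))

def encode(x):
    # Compress a face into a small code carrying pose/'expression'.
    return W_enc @ x

def decode(w, z):
    # Reconstruct a face from the code using one identity's decoder.
    return w @ z

# Stand-in training data: random vectors playing the role of face crops.
faces_a = rng.normal(size=(100, DIM))
faces_b = rng.normal(size=(100, DIM))

# Train each decoder (together with the shared encoder) to reconstruct
# its own identity -- plain stochastic gradient descent on squared error.
lr = 1e-3
for _ in range(50):
    for faces, W_dec in ((faces_a, W_dec_a), (faces_b, W_dec_b)):
        for x in faces:
            z = encode(x)
            err = decode(W_dec, z) - x                 # reconstruction error
            W_dec -= lr * np.outer(err, z)             # decoder gradient step
            W_enc -= lr * np.outer(W_dec.T @ err, x)   # shared-encoder step

# The swap itself: encode a face of A, decode it with B's decoder.
# The shared code carries the expression; the decoder supplies the identity.
swapped = decode(W_dec_b, encode(faces_a[0]))
print(swapped.shape)
```

Real deepfake models replace the linear layers with deep convolutional networks and train on thousands of frames per identity, but the swap mechanic – shared encoder, per-identity decoder – is the same.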

While this technological feat is impressive, Sam’s tweet made me realize how technology like this can be used to devastating effect, especially as the world around us polarizes more and more. We are already in an era where Facebook ads are manipulated, where ‘post-truth’ becomes word of the year, where a computer-generated Instagram model has half a million followers. Now, to make things worse, it’s going to be a challenge to know which video content is real – what a video shows as reality may never have taken place.

One can just imagine, for example, the devastating effect of a video of a mainstream political or religious leader stoking the flames of hatred, exhorting followers to attack people from a specific community or region. God knows we have enough real ones fanning the hate, but the next such video could very well be fake and yet fool millions of people. Thousands of people could die before anyone figures out that the video is fake. After all, video evidence has traditionally been thought of as unimpeachable. If a movie that hasn’t even been released can cause such a rampage, what could happen if long video clips are faked?

We are about to enter an age – I think we are almost there – where putting trust in any news is going to be very, very hard. If no source remains credible, it becomes hard to change opinions, to figure out what is just, to figure out who is in the right.

It reminds me of the time when I was in Rome. While touring the courts at the Roman Forum, our guide told us that in Roman times, whoever had the most people shouting in their favor won the case. It was not about evidence or witnesses; it was about how many people you could get to shout for you in court (so the rich always won!).

It looks like we are regressing to the same paradigm – except instead of the number of people, it will be about how many AI experts you have in your corner.
