Reflets Magazine #148 | Julien Mardas (M16): ‘Generative AI Produces Fake News That Is Impossible to Detect’
Reflets Magazine #148 devotes a feature to generative artificial intelligence. The article includes an interview with Julien Mardas (M16), founder of Buster.Ai, who explains how he is fighting the explosion of AI-generated fake news with an AI tool of his own: fighting the opponent with its own weapons. Here is a free online translation of the article… subscribe to get the next issues (in French)!
Reflets Magazine: Do the recent advances in generative artificial intelligence herald a new age in the growth of fake news?
Julien Mardas: Fake news was originally created by humans, who wrote articles designed to misinform, generate a buzz, destabilise economies or trigger uprisings. It was nevertheless possible to debunk such information by checking it and issuing an explained denial within a reasonable time frame. Recent advances in AI, and in diffusion-based generative models in particular, enable the production and widespread transmission of fake news that is virtually impossible to distinguish from real facts.
RM: Can you give us some examples?
J. Mardas: It is becoming possible to create articles in the style of a specific media source. You just need to ‘train’ an AI model on a vast number of publications from the source in question. Generative AI is also capable of ‘multimodal’ production, i.e., of generating a coherent combination of image, text and audio content. This makes it possible to manipulate videos (deepfakes) or even create new ones from scratch (GAN-generated images). Not to mention the tools which enable you to edit certain parts of an image to decontextualise it without altering the key elements. In other words, it is now very easy to give the impression that someone has said or done something they never said or did, and thus manipulate people’s beliefs and opinions. AI can also learn to generate messages for a specific target audience. It is with good reason that Sam Altman, CEO of OpenAI, and Sundar Pichai, CEO of Google, are sounding the alarm on their own technologies, and that the UN Secretary-General is calling the world to action.
RM: Have you any examples of fake news produced by generative AI which has already been disseminated on a large scale?
J. Mardas: It must be pointed out that, as it stands, generative AI does not spread the information itself. Within a few months, however, this technique will probably have become mainstream, and we can already cite numerous fake news stories which have fooled millions. In 2019, a manipulated video appearing to show Nancy Pelosi, Speaker of the US House of Representatives, in a state of drunkenness was widely shared on social media. A deepfake video in 2021 had Joe Biden claim he would ban firearms, which triggered a wave of panic and online protest. In 2022, Éric Zemmour, who is known for his Islamophobia, attempted to mislead the Muslim community by targeting it with completely fake photos supposedly showing young North African women who would no longer fear the streets at night thanks to him. More recently, a fake Bloomberg account announced an explosion at the Pentagon. The political sphere is not the only target, however. In late 2022, a Twitter account impersonating the company Eli Lilly announced that insulin would be distributed for free; the company lost $15 billion in stock market value. This list is sadly far from exhaustive.
RM: What solutions does your company, Buster.Ai, offer?
J. Mardas: We have developed a B2B SaaS platform built on some of the most advanced AI to detect and act against deepfakes and fake news. We use a variety of techniques, including image and video analysis to detect signs of manipulation such as unnatural movements or inconsistencies, semantic language understanding to identify and explain attempted deception, and social media monitoring to spot users and troll farms potentially posting false content. Our solution is particularly popular with media and political players.
RM: On an individual level, how can we spot fake news produced by generative AI?
J. Mardas: Generative AI systems still struggle to understand context and thus make subtle errors of language or logic which offer a clue. Furthermore, most generative AI has no knowledge of recent or specific news events, because it is not connected to the live internet, and can thus appear completely disconnected from reality, which we call ‘hallucination’. That said, it is only a matter of time before these limits disappear. With this in mind, be sceptical: if an article seems aimed at triggering your anger or fear, or on the contrary seems too good to be true, it is probably fake. Check the source of the content: does it come from a reliable organisation such as a press site, or from an anonymous social media user? Look for evidence of the claims made: are there quotes from experts or other sources, statistics or data? With videos, look out for eyes that don’t blink often, a mouth that is out of sync with the words you hear, aberrations in lighting, reflections or outlines, or perfect symmetries that are impossible to produce in reality. In other words, use your common sense. If something does not seem real, it is probably because it isn’t.
Interview by Louis Armengaud Wurmser (E10), Content Manager at ESSEC Alumni
Translation of an article published in Reflets Magazine #148. Read a preview (in French). Get the next issues (in French).