When technology empowers misinformation
A flagship concept of the 2016 American presidential election and a favorite catchphrase of the current occupant of the White House, misinformation is now making headlines of its own.
In recent years, numerous cases have revealed to the general public the extent of the plague that is misinformation, from death notices of people who are very much alive, to endless conspiracy theories and resounding scandals such as Cambridge Analytica.
Inaccurate information, alarmist headlines, and hateful, inflammatory speech are part of a vast array of problematic content flooding our communication channels. Articles that report true events but draw misinterpretations or pseudo-scientific conclusions also count as misinformation.
Far from being naive blunders, this fake news serves a very specific agenda: propaganda or defamation, targeting politicians, companies, and celebrities in order to harm their careers, image, or brand, as well as causes such as the fight against global warming, in order to undermine their legitimacy.
Evolving in parallel with social networks, disinformation has become increasingly sophisticated. Beyond press articles, millions of “memes” now populate the internet, cheerfully relaying all kinds of nonsense.
Its most frightening weapon is undoubtedly the "deepfake": videos manipulated by artificial intelligence, which Facebook has just banned ahead of the 2020 US presidential election. In late December, the American giant removed from its platform and from Instagram a network of more than 900 accounts, pages, and groups that allegedly used AI to generate fake profile photos, praising Donald Trump's policies to about 55 million users.
Producing these kinds of videos does not require an army of technicians: a simple computer does the trick, as long as it runs the right algorithms. The technology is still in its infancy, but it is rapidly becoming more sophisticated. Within two years, anyone will be able to create a video of anyone saying whatever they want them to say. "I only believe what I see," "don't put words in my mouth": sayings like these will surely lose their resonance.
Twitter, a major theater of disinformation, also intends to combat the phenomenon, in particular through its acquisition last June of Fabula AI, a London-based start-up specializing in fake-news detection using deep learning.
The startup VineSight, for example, has developed an algorithm that automatically detects misinformation while it is still circulating among computer bots, before it is picked up by human users, and thus prevents it from going viral. In the current state of the fight, an altered video like the one showing Nancy Pelosi fumbling for words as if she were drunk does not violate Facebook's new rules, because technically it involves no AI, just a simple slowing of the footage.
VineSight's technology is content-agnostic: it is not interested in what is being said, but rather in who is saying it and who is interacting with it. The credibility of a piece of news is correlated with the credibility of the people who relay it. Their platform thus makes it possible to identify this type of attack upstream, even for tweets containing only images, videos, or a few words.
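The idea can be sketched in a few lines of code. The following is a minimal illustration of the general principle, not VineSight's actual algorithm: a post's trust score is derived entirely from the prior credibility of the accounts relaying it, never from its content. The scoring formula, weights, and threshold here are invented for the example.

```python
# Illustrative sketch of credibility-based flagging (hypothetical,
# NOT VineSight's real system): score a post by who relays it.

def post_credibility(relayer_scores, prior=0.5, weight=0.9):
    """Blend a neutral prior with the mean credibility of relayers.

    relayer_scores: per-account credibility values in [0, 1], assumed
    to be known in advance (e.g. learned from past behavior); higher
    means more trustworthy.
    """
    if not relayer_scores:
        return prior  # no signal yet: fall back to the neutral prior
    mean = sum(relayer_scores) / len(relayer_scores)
    return (1 - weight) * prior + weight * mean

def looks_like_misinformation(relayer_scores, threshold=0.3):
    """Flag a post when its relayers' aggregate credibility is low."""
    return post_credibility(relayer_scores) < threshold

# A post spread mainly by low-credibility (bot-like) accounts gets
# flagged upstream, before human audiences amplify it.
bot_wave = [0.05, 0.10, 0.08, 0.12]
organic = [0.70, 0.85, 0.90]
print(looks_like_misinformation(bot_wave))  # True
print(looks_like_misinformation(organic))   # False
```

Because the score depends only on the relayers, the same logic applies unchanged to tweets that contain nothing but an image, a video, or a few words.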
Nevertheless, the real problem lies not with social networks alone but in the way we consume and relay information, which has become less factual and deeply emotional. Various reports have shown that triggering emotions, such as joy or fear, is the key to going viral online. No techno-solutionist quick fix will address the cultural factors of misinformation. While many technologies and media have been designed with the idea of making information available, it is high time to make it reliable.