Experts are warning that tech platforms are struggling to contain a tsunami of misinformation, especially around the recent Palestinian-Israeli hostilities, after content moderation policies were rolled back.
While major world events typically trigger a deluge of falsehoods, researchers say the scale and speed with which misinformation proliferated online following the weekend's attack on Israel by Palestinian militant group Hamas was unlike anything seen before.
Experts say the conflict offers a grim case study of the diminished ability of prominent platforms such as Meta-owned Facebook and X, formerly known as Twitter, to combat false information in a climate of layoffs and cost cutting that have gutted trust and safety teams.
Aggravating the problem on Elon Musk-owned X is a slew of contentious measures, including the restoration of accounts pushing bogus conspiracies and an ad revenue sharing program with content creators that researchers say incentivizes engagement over accuracy.
Experts fear these developments have increased the risk of misinformation provoking real-world harm by amplifying hate and violence, especially in a fast-evolving crisis such as the one in Israel and Gaza.
"The sheer amount of doctored, fake, old videos and images of attacks circulating is making it harder to understand what is going on," said Alessandro Accorsi, a senior analyst at the Crisis Group think tank. He voiced "huge concern" that the misinformation, especially fake images of hostages including children, could stoke further violence.
Making matters worse, tech platforms appear to be abandoning efforts to elevate quality information. Social media traffic to top news websites from platforms such as Facebook and X has fallen off a cliff over the past year, according to data cited by US media from research firm Similarweb.
“Even though there are still countless talented journalists and researchers continuing to use X to help the public better understand what’s going on, the signal-to-noise ratio has become intolerable,” said Andy Carvin from the Atlantic Council’s Digital Forensic Research Lab.
This degradation of the truthfulness and accuracy of information shared on social media poses a significant threat to our ability to fully understand the situation, form opinions, and act or react accordingly. Given the current state of social media, we need to be more discerning when consuming, evaluating, and sharing information from untrusted sources. This applies not only to news about ongoing wars but also to local developments and issues, as we have already seen how well-funded and effective disinformation campaigns can be.
If governments cannot help us rein in disinformation efforts and the potential for AI to amplify their effectiveness, we will have to do it ourselves: staying vigilant, helping those who are most vulnerable, and navigating the never-ending wave of information thrown at us from every direction.