Freedom House, a US-based organization supporting human rights advocates and programs, warned in its report “Freedom on the Net 2023” of the rise of generative artificial intelligence (AI), which it said “threatens to supercharge online disinformation campaigns.”
Over the past year, the report said, the technology was used to generate false images, text, and audio in at least 16 countries to distort information on political or social issues.
One example cited was the conflict in Pakistan between former Prime Minister Imran Khan and the military-backed government, during which Khan shared an AI-generated video depicting a woman fearlessly facing riot police, supposedly to push the narrative that the women of Pakistan were standing by him, not the military.
AI-manipulated content was also used to smear electoral opponents in the USA, the report added. Accounts affiliated with the campaigns of former President Donald Trump and Florida Gov. Ron DeSantis, both seeking the Republican Party’s nomination for the 2024 presidential election, shared videos with AI-generated content to undermine each other’s candidacy.
This growing use of AI to produce false and misleading information “is exacerbating the challenge of the so-called liar’s dividend, in which the widespread wariness of falsehoods on a given topic can muddy the waters to the extent that people disbelieve true statements,” Freedom House said.
“Political actors have labeled reliable reporting as AI-enabled fakery, or spread manipulated content to sow doubt about very similar genuine content,” it said. “The dangers of AI-assisted disinformation campaigns will skyrocket as malicious actors develop additional ways to bypass safeguards and exploit open-source models, and as other companies release competing applications with fewer protections in place.”
It challenged governments to develop a road map for this new era of AI as its benefits and harms become more apparent. Companies that create or deploy AI systems should “cultivate an understanding of previous efforts to strengthen platform responsibility,” it added.
Whether we acknowledge it or not, AI will continue to develop and evolve, and it will be used for both good and evil by those who see its value and potential to achieve their goals. Governments and societies that recognize both its value and its threats, and that proactively put in place the necessary road maps, speed bumps, and other protections as early as possible, will hopefully be able to mitigate the harms and maximize the benefits.
Let us not allow ourselves to be left behind, because AI and those who know how to use it will not wait until we are ready.