Countering deepfakes, the most serious AI threat

Disinformation and hoaxes have evolved from a mere annoyance into high-stakes information warfare that creates social discord, deepens polarisation, and, in some cases, influences election outcomes. Deepfakes, digital media (video, audio, and images) manipulated using Artificial Intelligence, are a new tool for spreading computational propaganda and disinformation at scale and with speed. Why is this synthetic media content dangerous?
  • Deepfakes, hyper-realistic digital falsifications, can inflict damage on individuals, institutions, businesses, and democracy.
  • They make it possible to fabricate media (swapping faces, lip-syncing speech, and puppeteering a person), usually without consent, and pose threats to psychological well-being, security, political stability, and business continuity.
  • Nation-state actors with geopolitical aspirations, ideological believers, violent extremists, and economically motivated enterprises can manipulate media narratives using deepfakes, with ease and at unprecedented reach and scale.
Threats of deepfakes
  • The first malicious use of deepfakes was in pornography, inflicting emotional and reputational harm and, in some cases, violence on the individuals targeted. Pornographic deepfakes, which almost exclusively target women, can threaten, intimidate, and inflict psychological harm, reducing women to sexual objects.
  • Deepfakes can depict a person indulging in antisocial behaviour or saying vile things, with severe implications for their reputation that can sabotage their professional and personal life. Even if the victim can debunk the fake through an alibi or otherwise, the correction may come too late to remedy the initial harm. Malicious actors can also use audio and video deepfakes to defraud unwitting individuals for financial gain, deploying them to extract money or confidential information, or to exact favours.
  • Deepfakes can cause short- and long-term social harm and accelerate the already declining trust in news media. Such erosion can contribute to a culture of factual relativism, fraying the increasingly strained fabric of civil society. Falsity is profitable and spreads faster than truth on social platforms. Combined with this distrust, existing biases and political disagreements help create echo chambers and filter bubbles, sowing discord in society.
  • A deepfake can be a powerful tool for a nation-state to undermine public safety and create uncertainty and chaos in a target country. Insurgent groups and terrorist organisations can use deepfakes to show their adversaries making inflammatory speeches or engaging in provocative actions, stirring up anti-state sentiment among the population.
Political use of deepfakes
  • Deepfakes can also distort democratic discourse, undermine trust in institutions, and impair diplomacy. False information about institutions, public policy, and politicians, powered by deepfakes, can be exploited to spin narratives and manipulate beliefs.
  • A deepfake of a political candidate can sabotage their image and reputation. A well-executed deepfake released a few days before polling, showing a candidate spewing racial epithets or engaging in an unethical act, can damage their campaign, and there may not be enough time to recover even after effective debunking. Voters can be confused and elections disrupted.
  • Deepfakes contribute to factual relativism and enable authoritarian leaders to thrive. For authoritarian regimes, they are a tool to justify oppression and disenfranchise citizens. Leaders can also use them to stoke populism and consolidate power, making deepfakes an effective instrument for sowing polarisation, amplifying division in society, and suppressing dissent.
  • Another concern is the ‘liar’s dividend’: an undesirable truth can be dismissed as a deepfake or fake news. Leaders may weaponise this doubt, using fake-news and alternative-facts narratives to displace genuine media and the truth.
What needs to be done?
  • Media literacy for consumers and journalists is the most effective tool to combat disinformation and deepfakes. Media literacy efforts must be enhanced to cultivate a discerning public: as consumers of media, we must be able to decipher, understand, and critically evaluate the information we encounter.
  • Meaningful regulation, shaped through collaborative discussion among the technology industry, civil society, and policymakers, can disincentivise the creation and distribution of malicious deepfakes. We also need easy-to-use, accessible technology solutions to detect deepfakes, authenticate media, and amplify authoritative sources (a minimal sketch of one authentication approach follows this list).
  • To counter the menace of deepfakes, we must all take responsibility: be critical consumers of media on the Internet, pause and think before sharing on social media, and be part of the solution to this infodemic.
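The technology solutions mentioned above span detection, authentication, and provenance. As one narrow illustration of the authentication piece, the sketch below assumes a hypothetical setup in which an authoritative source publishes SHA-256 fingerprints of its original media files, so a consumer-facing tool can check whether a downloaded clip is byte-for-byte identical to the published original. The registry, file name, and hash value are placeholders invented for illustration, not a real service.

```python
import hashlib
from pathlib import Path

# Hypothetical registry of SHA-256 fingerprints that an authoritative source
# (e.g. the original publisher) is assumed to publish for its media files.
# The file name and hash below are placeholders, not real published values.
TRUSTED_FINGERPRINTS = {
    "press_briefing.mp4": "<sha256 hex digest published by the source>",
}


def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large videos do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_authentic(path: Path) -> bool:
    """True only if the file's hash matches the published fingerprint.

    A mismatch does not prove the clip is a deepfake; it only shows the copy
    is not byte-for-byte identical to what the trusted source released
    (re-encoding alone changes the hash).
    """
    expected = TRUSTED_FINGERPRINTS.get(path.name)
    return expected is not None and sha256_of_file(path) == expected


if __name__ == "__main__":
    clip = Path("press_briefing.mp4")  # placeholder file name
    if clip.exists():
        verdict = ("matches the published fingerprint" if is_authentic(clip)
                   else "does not match: verify against the original source")
        print(f"{clip.name}: {verdict}")
    else:
        print(f"{clip.name} not found; nothing to verify.")
```

A hash check of this kind only confirms an exact copy of a known original; spotting manipulations in re-encoded or previously unseen footage still requires machine-learning detectors and provenance metadata attached at the point of capture, which is why the industry collaboration called for above matters.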

