What is deepfake technology? Examine the need to regulate deepfakes.
- Deepfakes are digital media – video, audio, and images – edited or manipulated using Artificial Intelligence (AI), often to inflict harm on individuals and institutions.
- AI-generated synthetic media, or deepfakes, have clear benefits in certain areas, such as accessibility, education, film production, criminal forensics, and artistic expression.
- However, hyper-realistic digital falsification can also be exploited to damage reputations, fabricate evidence, defraud the public, and undermine trust in democratic institutions, and it now requires relatively few resources (cloud computing, AI algorithms and abundant data).
Issues with Deepfakes
- Building mistrust:
- Because deepfake videos are so convincing, they can be used to spread misinformation and propaganda.
- They seriously compromise the public’s ability to distinguish between fact and fiction.
- Wrongful depiction:
- There has been a history of using deepfakes to depict someone in a compromising and embarrassing situation.
- For instance, there is no dearth of deepfake pornographic material featuring celebrities. Such photos and videos amount not only to an invasion of the privacy of the people purportedly shown in them, but also to harassment.
- As technology advances, making such videos will become much easier.
- Financial fraud:
- Deepfakes have been used for financial fraud.
- In a recent example, scammers used AI-powered software to trick the CEO of a U.K. energy company over the phone into believing he was speaking with the head of its German parent company. As a result, the CEO transferred a large sum of money (€220,000) to what he thought was a supplier.
- The deepfake audio effectively mimicked the voice of the CEO’s boss, including his German accent.
Threats to National Security
- Influencing elections:
- Deepfakes can be used to influence elections.
- Recently, Taiwan’s cabinet approved amendments to election laws to punish the sharing of deepfake videos or images.
- Taiwan is becoming increasingly concerned that China is spreading false information to influence public opinion and manipulate election outcomes, and this concern has led to these amendments.
- This could happen in India’s elections too.
- Espionage:
- Deepfakes can also be used to carry out espionage activities.
- Doctored videos can be used to blackmail government and defence officials into divulging state secrets.
- Production of hateful material:
- In India, deepfakes could be used to produce inflammatory material, such as videos purporting to show the armed forces or the police committing ‘crimes’ in areas with conflict.
- These deepfakes could be used to radicalise populations, recruit terrorists, or incite violence.
Legal protection available in India
- IPC & IT Act:
- Currently, only a few provisions of the Indian Penal Code (IPC) and the Information Technology Act, 2000 can potentially be invoked to deal with the malicious use of deepfakes.
- Section 500 of the IPC provides punishment for defamation.
- Sections 67 and 67A of the Information Technology Act punish the publication or transmission of obscene and sexually explicit material in electronic form.
- RPA:
- The Representation of the People Act, 1951, includes provisions prohibiting the creation or distribution of false or misleading information about candidates or political parties during an election period.
- ECI Guidelines:
- The Election Commission of India requires registered political parties and candidates to obtain pre-approval for all political advertisements on electronic media, including TV and social media platforms, to help ensure their accuracy and fairness.
Challenges
- Lack of regulatory framework for AI:
- There is often a lag between new technologies and the enactment of laws to address the issues and challenges they create.
- In India, the legal framework related to AI is insufficient to adequately address the various issues that have arisen due to AI algorithms.
- The lack of proper regulations creates avenues for individuals, firms and even non-state actors to misuse AI.
- Policy vacuums on deepfakes:
- Legal ambiguity, coupled with a lack of accountability and oversight, is a recipe for disaster.
- The policy vacuum on deepfakes is a perfect archetype of this situation.
- Challenging authenticity:
- As the technology matures further, deepfakes could enable individuals to deny the authenticity of genuine content, particularly if it shows them engaging in inappropriate or criminal behaviour, by claiming that it is a deepfake.
Solutions
- Media literacy for consumers is the most effective tool to combat disinformation and deepfakes.
- Meaningful regulations, framed through collaborative discussion with the technology industry, civil society and policymakers, to disincentivise the creation and distribution of malicious deepfakes.
- Easy-to-use and accessible technology solutions to detect deepfakes, authenticate media, and amplify authoritative sources (a simple media-authentication sketch follows this list).
- Social media platforms are taking cognizance of the deepfake issue, and almost all of them have some policy or acceptable terms of use for deepfakes.
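To make the "authenticate media" point above concrete, here is a minimal sketch of hash-based provenance checking: a publisher releases a cryptographic digest of the original file, and consumers or platforms verify their copy against it. The file name `clip.mp4` and the `PUBLISHED_HASH` value are hypothetical placeholders; real provenance schemes (for example, signed metadata) are more elaborate.

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a media file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_authentic(path: str, published_hash: str) -> bool:
    """Return True if the local copy matches the digest published by the original source."""
    return sha256_of_file(path) == published_hash


if __name__ == "__main__":
    # Hypothetical values: in practice the file and digest come from the original publisher.
    PUBLISHED_HASH = "0" * 64  # placeholder digest, for illustration only
    if Path("clip.mp4").exists():
        print("Authentic copy:", is_authentic("clip.mp4", PUBLISHED_HASH))
```

Such a check only proves that a file is unmodified relative to a trusted original; it does not, by itself, determine whether that original is synthetic.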
Road Ahead
- The Union government should introduce separate legislation regulating the nefarious use of deepfakes and the broader subject of AI.
- Legislation should not hamper innovation in AI, but it should recognise that deepfake technology may be used to commit criminal acts and should include provisions to address such use.
- The proposed Digital India Bill can also address this issue.
- Tech firms are also working on detection systems that aim to flag deepfakes whenever they appear; a minimal sketch of such a pipeline is given below.
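The following is a minimal sketch of how a frame-level detection pipeline might be wired up, assuming a hypothetical classifier `score_frame()` that returns the probability a frame is synthetic; both that function and the 0.7 threshold are illustrative assumptions, not a description of any vendor's actual system.

```python
from statistics import mean
from typing import Iterable, List


def score_frame(frame: bytes) -> float:
    """Placeholder for a trained deepfake classifier.

    A real system would run a neural model over the decoded frame; this stub
    returns a fixed low score so the example stays runnable.
    """
    return 0.1


def flag_video(frames: Iterable[bytes], threshold: float = 0.7) -> bool:
    """Flag a video as a suspected deepfake if the mean frame score exceeds the threshold.

    The threshold is an assumed value for illustration, not an industry standard.
    """
    scores: List[float] = [score_frame(f) for f in frames]
    return bool(scores) and mean(scores) > threshold


if __name__ == "__main__":
    decoded_frames = [b"frame-1", b"frame-2", b"frame-3"]  # stand-ins for decoded video frames
    print("Suspected deepfake:", flag_video(decoded_frames))
```

Averaging per-frame scores is only one possible aggregation choice; production systems would likely combine visual, audio and metadata signals before flagging content for human review.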