AI and India’s Legal Landscape
India is rapidly expanding AI-powered surveillance infrastructure, deploying facial recognition systems and Artificial Intelligence technologies across law enforcement without comprehensive legal safeguards. The current regulatory landscape, exemplified by the Digital Personal Data Protection Act of 2023, grants broad government exemptions that potentially compromise individual privacy rights. Unlike the European Union's risk-based approach to AI regulation, India lacks clear legislative frameworks to govern these technologies, leaving citizens vulnerable to unchecked data collection and potential civil liberties infringements.
How Artificial Intelligence is Currently Regulated in India
- Information Technology Act of 2000: It provides legal recognition for electronic transactions and includes rules to protect electronic data, information, and records from unauthorized or unlawful use.
- The IT Act, 2000 operates alongside the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011.
- These are set to be replaced by the Digital India Act 2023, which is currently in draft form and is expected to include key provisions related to AI.
- The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 provide a framework for oversight of social media, OTT platforms, and digital news media.
- Government Advisories on AI and Large Language Models (March 2024): Significant platforms must obtain MeitY approval before deploying untested AI models to prevent bias, electoral interference, or unidentifiable AI-generated media.
- Exemptions apply to startups and smaller platforms.
- Revised guidelines shift focus to mandatory labeling of unreliable AI models, user notifications for content inaccuracies, and deep fake detection measures.
- Digital Personal Data Protection Act (DPDP), 2023 is the primary law regulating data collection, storage, and processing.
- Limitations: Lacks provisions specific to AI-related challenges like algorithmic biases or AI-generated data misuse.
- No explicit mechanisms for AI audits or accountability.
- Principles for Responsible AI (2021): Seven core principles: Safety and reliability, inclusivity, non-discrimination, privacy, transparency, accountability, and reinforcement of positive human values.
- Collaboration encouraged between the government, private sector, and research entities.
- National Artificial Intelligence Strategy (2018): Launched by NITI Aayog under the tagline #AIforAll.
- Focus areas: Healthcare, education, agriculture, smart cities, and transportation.
- Recommendations implemented: High-quality dataset creation and legislative frameworks for data protection and cybersecurity.
- It serves as a foundational document for future AI regulation in the country.
- Draft National Data Governance Framework Policy (2022): Modernizes government data collection and management.
- Aims to support AI-driven research and startups through a comprehensive dataset repository.
AI Technologies Can Strengthen India's Legal Landscape
- Timely and Effective Delivery of Justice: AI can streamline case management by automating repetitive tasks like documentation, case categorization, and scheduling.
- With over 5 crore pending cases in Indian courts, AI-driven tools can expedite processes, freeing up judges to focus on substantive matters.
- AI can also analyse legal precedents and case laws, offering historical data that aids in informed decision-making and litigation strategy development.
- AI can assist in evidence collection, verification, and analysis, especially in complex cases involving large datasets, forensic evidence, or digital fraud.
- The National Forensic Sciences University in Gujarat is integrating AI to analyze digital evidence, expediting cybercrime investigations.
- The Tis Hazari District Court in Delhi introduced its first AI-equipped 'Pilot Hybrid Court' featuring a speech-to-text facility.
- AI-powered platforms can simplify mediation and arbitration processes by automating communication and negotiation tracking.
- Platforms like ODR India use AI to facilitate online dispute resolution.
- Enhancing Legislative Processes: AI can assist lawmakers in drafting, analyzing, and amending legislation by processing vast amounts of legal and public policy data.
- AI-driven simulations can predict the social, economic, and environmental impact of proposed laws, enhancing their precision and relevance.
- For instance, the European Union's Legislation Editing Open Software (LEOS) is being enhanced with AI to support legislative drafting.
- AI-powered legal tools also make it easier for non-lawyers to communicate with lawyers, and they can speed up processes by cutting down the time needed for legal research and compliance analysis.
- Improved Law Enforcement and Crime Prevention: AI can enhance the efficiency of law enforcement by enabling predictive policing, real-time crime monitoring, and evidence analysis.
- Recently, Delhi Police used artificial intelligence (AI) to reconstruct the face of an unidentified murder victim and employed the image on a poster to seek information about his identity.
- The novel approach not only led to the victim''s identification but also played a crucial role in apprehending the perpetrators.
- Facilitating Compliance with International Laws: AI can simplify compliance for multinational corporations operating in India by analyzing cross-border regulations and trade laws.
- Automated compliance tools reduce the risk of penalties and improve India’s ease-of-doing-business ranking.
- Companies like TCS and Infosys are developing AI compliance tools for international trade agreements.
- Strengthening Corporate Compliance: AI simplifies legal compliance by automating monitoring, reporting, and filing processes, especially for businesses operating across multiple jurisdictions.
- With India's growing emphasis on ESG (Environmental, Social, and Governance) reporting, AI ensures timely compliance and prevents violations.
- Companies can use AI for compliance with SEBI's ESG disclosure norms, reducing manual errors.
- Improving Consumer Protection Mechanisms: AI can process consumer complaints, monitor fraudulent activities, and predict market trends to enhance consumer safety.
- With growing e-commerce and digital platforms, AI enables authorities to address grievances efficiently and prevent fraud.
- The Central Consumer Protection Authority (CCPA) can use AI to track unfair trade practices.
- Facilitating Environmental Law Enforcement: AI can monitor environmental compliance by analyzing data from sensors, satellites, and field reports, ensuring adherence to regulations.
- AI tools help identify violations such as illegal mining or deforestation, enabling swift regulatory action.
- In just four months, the Karnataka Forest Department identified 167 cases of encroachment with the help of AI-powered analysis of satellite imagery.
- Strengthening Intellectual Property Rights: AI tools can streamline IPR processes by assisting in patent searches, drafting, and detecting copyright infringement.
- By automating complex searches and filings, AI ensures faster approvals and reduces disputes in IP-intensive industries like pharmaceuticals and IT (a minimal sketch of such similarity-based search follows this list).
- With advancements in AI, the U.S. Patent and Trademark Office (USPTO) has seen a rise in patent applications for AI-assisted inventions, which can be replicated in India.
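To make the retrieval idea above concrete, the following is a minimal sketch of how textual similarity can rank prior judgments or patent abstracts against a new matter. It is an illustration only, not any court's or patent office's actual system; the corpus, the query, and the use of scikit-learn's TF-IDF vectorizer are assumptions made for the example.

```python
# Minimal sketch: rank prior judgments/patent abstracts by textual similarity
# to a new query document. All texts below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Dispute over patent infringement of a pharmaceutical compound formulation.",
    "Copyright claim concerning unauthorised reproduction of a software library.",
    "Trademark conflict between consumer electronics brands over logo similarity.",
]
query = "Alleged infringement of a drug formulation patent by a generic manufacturer."

# Vectorise the corpus and the query into TF-IDF space, then score by cosine similarity.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(corpus)
query_vec = vectorizer.transform([query])
scores = cosine_similarity(query_vec, doc_matrix).ravel()

# Print documents from most to least similar to the query.
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {corpus[idx]}")
```

Production systems would use richer embeddings and domain-specific corpora, but the ranking principle is the same.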
AI Technologies Challenging India’s Legal Framework
- Privacy and Data Protection Vulnerabilities: AI systems extensively collect, analyze, and monetize personal data, often without adequate safeguards, threatening citizens' privacy rights.
- The Digital Personal Data Protection Act (2023) is a step forward but lacks stringent enforcement mechanisms, especially regarding AI-driven surveillance.
- Facial recognition technology (FRT) is extensively utilized in public spaces, such as its deployment by Hyderabad's police under the Smart Policing Mission, raising concerns of mass surveillance.
- India ranked second globally in cyberattacks (PwC 2022), with 40% of Indian firms using AI lacking proper data security protocols (NASSCOM, 2023).
- Bias and Discrimination in Algorithmic Decision-Making: AI systems often reinforce societal biases due to flawed datasets, leading to discriminatory outcomes in hiring, lending, and policing.
- Without comprehensive guidelines for algorithmic fairness, AI perpetuates systemic inequalities, undermining constitutional principles of equality.
- AI-powered recruitment tools in India were reported to filter out female candidates disproportionately in tech roles (a simple bias-audit sketch appears after this list).
- In 2018, Amazon discontinued its secret AI recruiting engine due to biases against women, yet similar systems could still be in operation across India.
- Intellectual Property Conflicts: AI challenges the foundational principles of IP law by blurring the lines of ownership in AI-generated works.
- India's copyright framework lacks clarity on AI-generated content, leaving creators vulnerable to exploitation.
- The Copyright Act of 1957 stipulates that a work is eligible for copyright only if it is original and the result of human authorship. Therefore, AI-generated content is not considered copyrightable.
- The Andersen v. Stability AI Ltd. case highlights artists' vulnerabilities amid unclear copyright protections.
- Economic Inequality and Labor Law Challenges: AI-driven automation risks exacerbating unemployment and economic disparity, challenging India''s labor protections.
- India's labor laws, including the four Labour Codes, do not address job displacement caused by AI.
- According to a McKinsey Global Institute report, automation could potentially displace up to 60 million workers in India's manufacturing sector by 2030, with industries like textiles and electronics being particularly affected.
- National Security Threats: AI’s misuse in cyberattacks, deep fakes, and misinformation campaigns threatens India’s internal security.
- During the Lok Sabha Elections 2024, deep-fake videos were used to spread misinformation, undermining electoral integrity.
- In 2023, India faced a surge in cyber attack incidents, experiencing a 15% increase in weekly attacks per organization compared to 2022.
- India lacks AI-specific cybersecurity regulations, leaving critical sectors like banking and defense exposed.
- Ethical and Accountability Concerns: AI applications in healthcare, law enforcement, and public services raise questions about ethical standards and liability.
- Errors by AI systems lack clear accountability frameworks, leading to legal vacuums in disputes.
- A recent study published in JAMA examines the impact of systematically biased artificial intelligence (AI) on clinicians' diagnostic accuracy.
- The findings reveal that predictions from biased AI models reduced clinicians' accuracy by 11.3 percentage points compared to baseline levels.
- Environmental Impacts of AI Deployment: The energy-intensive nature of AI training models exacerbates India''s environmental challenges, including rising carbon emissions.
- Training a large language model like GPT-3 demands substantial energy, consuming as much as 10 gigawatt-hours (GWh) of electricity.
- India’s legal framework lacks mandates for sustainable AI practices, contradicting its climate commitments.
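The kind of check that these algorithmic bias concerns call for can be illustrated with a small, hypothetical audit: compare the selection rates an AI recruitment tool produces for different groups and compute a disparate impact ratio. The records, the groups, and the four-fifths threshold below are illustrative assumptions, not a prescribed methodology.

```python
# Minimal sketch of a bias audit on a hypothetical recruitment model's outputs:
# compare selection rates across groups and compute the disparate impact ratio
# (the "four-fifths rule" heuristic). All records are invented for illustration.
from collections import defaultdict

# Each record: (applicant group, whether the AI tool shortlisted the applicant)
outcomes = [
    ("male", True), ("male", True), ("male", False), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False), ("female", True),
]

shortlisted = defaultdict(int)
total = defaultdict(int)
for group, selected in outcomes:
    total[group] += 1
    shortlisted[group] += int(selected)

# Selection rate per group, then the ratio of the lowest rate to the highest.
rates = {group: shortlisted[group] / total[group] for group in total}
for group, rate in rates.items():
    print(f"Selection rate ({group}): {rate:.2f}")

ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}")  # below ~0.8 is a common red flag
```

A full Bias Impact Assessment would go further (statistical significance, intersectional groups, outcome quality), but even this simple ratio makes disparities visible.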
Steps to be Taken to Strengthen AI Regulation and Ensure Responsible AI Usage in India
- Enact a Comprehensive AI-Specific Legislation: India needs a dedicated law that addresses AI-related challenges, including ethical guidelines, accountability mechanisms, and risk classification.
- The European Union’s AI Act (entered into force in 2024) provides a tiered risk framework for AI applications; India can adopt a similar approach tailored to local contexts.
- Establish an Independent AI Regulatory Authority: Create a centralized body like the AI Ethics and Governance Authority of India to oversee AI deployment, ensure compliance, and address grievances.
- A dedicated regulator can ensure uniformity in AI governance across sectors, reducing fragmentation and misuse.
- The UK’s Centre for Data Ethics and Innovation serves as a model for addressing ethical AI use.
- Mandate Algorithmic Accountability and Audits: Introduce laws requiring AI developers to perform regular audits of algorithms to detect biases, inefficiencies, and ethical lapses.
- Algorithmic biases in AI tools used for hiring, lending, or policing can lead to systemic discrimination if unchecked.
- For example, in 2023, the Competition Commission of India highlighted concerns over price discrimination caused by algorithmic pricing on e-commerce platforms.
- Mandate Bias Impact Assessments (BIA) and Explainability Standards as part of AI lifecycle management in critical sectors like healthcare and finance.
- Strengthen Cybersecurity Regulations for AI Systems: Develop robust cybersecurity standards for AI applications to safeguard sensitive data and protect against AI-enabled cyber threats.
- CERT-In should mandate regular vulnerability assessments and adopt AI-specific threat monitoring systems to address risks.
- Promote Responsible AI Use Through Regulatory Sandboxes: Expand the use of regulatory sandboxes to allow controlled testing of AI innovations while ensuring safety and compliance.
- Sandboxes enable iterative testing and refinement of AI technologies without posing large-scale risks.
- This can begin with cross-sectoral sandboxes under NITI Aayog to test AI in areas like healthcare diagnostics, smart cities, and environmental monitoring.
- Integrate Ethical AI Principles into Education and Training: Incorporate ethical AI development and responsible deployment into higher education curricula and corporate training programs.
- Educating developers and decision-makers about AI ethics ensures that future technologies prioritize inclusivity and fairness.
- Make AI ethics training mandatory for all government-funded AI projects and incentivize private firms to adopt similar programs.
- Ensure Data Transparency and Access Control: Enhance transparency in AI systems by implementing mandatory disclosures on data usage, model training, and decision-making processes.
- Without transparency, AI systems risk perpetuating black-box decision-making, reducing public trust.
- Amend the DPDP Act, 2023 to include Right to Explanation provisions, enabling users to understand AI-driven outcomes that affect them.
- Incentivize Green AI Practices: Encourage the development of energy-efficient AI systems to mitigate their environmental impact.
- Training large AI models consumes massive energy, contradicting India’s climate commitments under the Paris Agreement.
- Provide tax benefits for AI firms adopting green computing practices and establish benchmarks for sustainable AI development (an illustrative energy estimate follows this list).
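As an illustration of the kind of disclosure such sustainability benchmarks could require, the sketch below estimates the energy use and emissions of a single training run from a few parameters. Every figure (accelerator count, power draw, training time, PUE, grid emission factor) is an assumed placeholder, not a measured value.

```python
# Illustrative back-of-the-envelope estimate of the energy and emissions of an
# AI training run. All inputs are assumed placeholders chosen for the example.
num_gpus = 512            # accelerators used for training (assumed)
gpu_power_kw = 0.4        # average draw per accelerator in kW (assumed)
training_hours = 720      # wall-clock training time in hours (assumed)
pue = 1.5                 # data-centre power usage effectiveness (assumed)
grid_kgco2_per_kwh = 0.7  # grid emission factor in kg CO2 per kWh (assumed)

energy_kwh = num_gpus * gpu_power_kw * training_hours * pue
emissions_tonnes = energy_kwh * grid_kgco2_per_kwh / 1000

print(f"Estimated energy use: {energy_kwh / 1e6:.2f} GWh")
print(f"Estimated emissions: {emissions_tonnes:,.0f} tonnes of CO2")
```

Requiring such estimates alongside model releases would let regulators and buyers compare the environmental cost of competing systems.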
While AI has the potential to revolutionize various sectors in India, its rapid adoption raises significant concerns regarding privacy, accountability, and bias. India should regulate AI, but not at the cost of innovation. The existing legal framework, particularly the Digital Personal Data Protection Act (2023), needs to be strengthened to address the unique challenges posed by AI technologies. India must adopt comprehensive AI-specific legislation, establish regulatory bodies, and promote ethical AI practices to safeguard citizens' rights.