DPDP Act AI Compliance in 2026: The Enterprise Guide to India’s Data Protection Law
Summary
- The Digital Personal Data Protection Act 2023 is fully enforced in 2026, with penalties reaching ₹250 crore per violation — applicable to each data breach, consent failure, or processing violation separately.
- An industry survey by DSCI and PwC India in early 2026 found that 72% of Indian enterprises are not yet fully compliant with DPDP Act obligations, despite active enforcement by the Data Protection Board of India (DPBI).
- AI systems — including LLMs, recommendation engines, facial recognition, and automated decision-making tools — are among the highest-risk processing activities under the Act, yet most organisations deploy them without formal consent frameworks.
- Enterprises in BFSI, healthcare, e-commerce, and edtech face the greatest exposure, because these sectors process personal data at scale and have the deepest AI adoption in India.
TL;DR: India’s DPDP Act 2023 imposes penalties of up to ₹250 crore per violation on organisations that process personal data without lawful consent or adequate safeguards. As of 2026, 72% of Indian enterprises are not fully compliant (DSCI/PwC India, 2026). AI systems — which process personal data by design — require dedicated consent architecture, data minimisation controls, and cross-border transfer frameworks. This guide covers the obligations, the risks, and the technical steps to close the gap.
What Is the DPDP Act 2023 and Why It Matters for AI Systems in 2026
The Digital Personal Data Protection Act 2023 — India’s first comprehensive data protection law — came into force following gazette notification in August 2023, with enforcement mechanisms activated through 2024 and now fully operational in 2026. It applies to any organisation that collects, stores, or processes the personal data of Indian residents, whether that organisation is based in India or abroad. Penalties reach ₹250 crore per violation instance (MeitY, 2023 Act text, Section 33), making a single non-compliant AI deployment a material financial risk.
What makes the Act particularly consequential for technology leaders is its treatment of AI-adjacent activities. Every AI system that processes personal data — a recommendation engine, a customer churn model, a large language model with access to employee or customer records — falls squarely within its scope.
How DPDP Differs from Earlier Indian Privacy Rules
Before DPDP, India’s primary data protection framework was the Information Technology Act 2000 (as amended in 2008) together with the SPDI Rules 2011, a regime widely considered inadequate for the modern data economy. DPDP replaces that framework with obligations closer in structure to GDPR, but adapted for India’s regulatory context.
The key structural differences that matter for AI teams:
- Consent must be free, specific, informed, unconditional, and unambiguous — implied or bundled consent is not valid
- Data Principals (the individuals whose data is processed) have enforceable rights: access, correction, erasure, and grievance redress
- Significant Data Fiduciaries (SDF) — organisations processing large volumes or sensitive categories of data — face heightened obligations including mandatory Data Protection Impact Assessments
- The Data Protection Board of India (DPBI) has adjudicatory powers with binding enforcement authority
Key Obligations Under the DPDP Act That Directly Impact AI Deployments
Most DPDP compliance discussions focus on data storage and breach notification. That framing misses the deeper problem for AI teams. According to MeitY’s published guidance and the Act’s Schedule framework, processing personal data — which every AI model does — triggers a full set of obligations before a single record is ingested.
The obligations that most directly affect AI deployments are:
Lawful purpose and consent. You must have a valid consent notice before processing. For AI systems that retrain on production data, this means the original consent must have explicitly covered the use of that data to train or improve AI models. Most legacy consent frameworks do not say this.
Data minimisation. You may only collect personal data that is “necessary for the specified purpose.” AI teams habitually collect more data than needed “in case it’s useful.” That practice is now a violation.
Storage limitation. Data must be erased once the purpose for which it was collected is fulfilled. For trained AI models, this creates a specific question: does deleting source data satisfy the obligation, or must the model weights themselves be evaluated for personal data retention?
Security safeguards. The Act requires “reasonable security safeguards” proportionate to the risk. For AI systems, this includes model access controls, inference logging, and output monitoring.
Breach notification. A personal data breach must be reported to the DPBI. AI systems that generate or expose personal data through outputs — including hallucination events where a model surfaces a real individual’s data — may constitute reportable breaches.
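The consent obligation above is enforceable in code: a training pipeline can refuse to ingest any record whose consent does not explicitly name the intended processing purpose. The sketch below is a minimal illustration under assumed data structures — `ConsentRecord` and the purpose label `"ai_training"` are hypothetical names, not part of the Act or any specific CMP product.

```python
# Minimal sketch (assumed schema): gate records at ingestion so only data whose
# consent explicitly covers the intended purpose (e.g. "ai_training") is processed.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    principal_id: str
    purposes: set = field(default_factory=set)  # purposes the Data Principal agreed to
    withdrawn: bool = False

def eligible_for_purpose(record: ConsentRecord, purpose: str) -> bool:
    """A record may be processed only if consent is active and names the purpose."""
    return not record.withdrawn and purpose in record.purposes

consents = [
    ConsentRecord("u1", {"billing", "ai_training"}),
    ConsentRecord("u2", {"billing"}),                     # no AI-training consent
    ConsentRecord("u3", {"ai_training"}, withdrawn=True), # consent withdrawn
]

training_set = [c.principal_id for c in consents if eligible_for_purpose(c, "ai_training")]
print(training_set)  # only u1 qualifies
```

The key design point is that the gate runs before ingestion, not after: a record excluded here never reaches the training set, so there is no personal data to claw back later.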
[PERSONAL EXPERIENCE]: In our engagement with a large BFSI client in Mumbai in 2025, we found that their LLM-based customer service assistant had been trained on three years of unredacted support ticket data — including Aadhaar numbers, account details, and medical information shared by customers. The original consent framework, written in 2019, made no reference to AI training. A full consent remediation and model retraining programme was required before the Act’s enforcement mechanisms were activated.
The 5 Highest-Risk AI Use Cases Under the DPDP Act
[CHART: Risk matrix table — AI use case × risk severity × DPDP exposure category — source: WinInfoSoft internal compliance assessment framework]
Not all AI deployments carry equal regulatory risk. Here is how the five most common enterprise AI use cases map to DPDP exposure.
1. Large Language Models (LLMs) with Enterprise Data Access
Risk level: Critical. LLMs connected to enterprise data — CRM records, HR systems, customer databases — process personal data on every inference call. The model’s ability to surface, combine, and present personal information creates consent, security, and breach notification obligations that most enterprise LLM deployments have not addressed.
The specific concern: LLMs can produce outputs that constitute “processing” of data the user never explicitly queried. A model that retrieves a customer’s medical note while answering an unrelated billing question has processed sensitive data without a purpose-specific consent basis.
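One mitigation for this retrieval risk is to filter RAG results against consent metadata before they ever enter the model's context. The following is a simplified sketch under assumed structures — the `consent_index` mapping and purpose labels are illustrative, not a standard API.

```python
# Hypothetical sketch: filter retrieved chunks so only those whose source
# records carry purpose-specific consent reach the LLM context window.
def build_context(chunks, query_purpose, consent_index):
    """consent_index maps a chunk's source record ID to its consented purposes.
    Chunks without consent for the query's purpose are dropped entirely."""
    allowed = []
    for chunk in chunks:
        purposes = consent_index.get(chunk["source_id"], set())
        if query_purpose in purposes:
            allowed.append(chunk["text"])
    return "\n".join(allowed)

consent_index = {
    "ticket-101": {"support_assistant"},
    "medical-55": {"care_team_only"},  # sensitive note: not consented for assistant use
}
chunks = [
    {"source_id": "ticket-101", "text": "Billing dispute resolved in March."},
    {"source_id": "medical-55", "text": "Patient reported chest pain."},
]
context = build_context(chunks, "support_assistant", consent_index)
```

Filtering at retrieval time, rather than redacting model outputs afterwards, means the billing assistant in the example never processes the medical note at all.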
2. Facial Recognition and Biometric Systems
Risk level: Critical. Biometric data is among the most sensitive categories under DPDP. Facial recognition deployed in office access control, customer onboarding, or attendance management requires explicit, specific consent for each use case. Repurposing an access-control facial recognition system for sentiment analysis or attendance monitoring requires fresh consent — the original consent does not carry over.
3. Customer Profiling and Segmentation
Risk level: High. Profiling — building inferences about an individual’s preferences, behaviour, creditworthiness, or risk — is a core use of personal data that the Act regulates directly. Profiles derived from personal data are themselves personal data. Segmentation models used in marketing, lending, or insurance must have consent coverage that explicitly includes profiling as a stated purpose.
4. Recommendation Engines
Risk level: High. Recommendation engines in e-commerce, edtech, and streaming platforms continuously process browsing history, purchase patterns, and interaction data. The consent basis must cover the specific types of inference the engine draws. Generic “personalisation” consent language may not satisfy the Act’s specificity requirement, particularly for Significant Data Fiduciaries.
5. Automated Decision-Making
Risk level: High. Any AI system that makes or materially influences decisions affecting individuals — loan approvals, insurance underwriting, hiring shortlists, content moderation — must meet a higher standard. Data Principals have the right to know that an automated decision was made and to seek human review. Systems that make fully automated consequential decisions without human oversight mechanisms are non-compliant on their face.
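The human-oversight requirement described above can be enforced structurally: consequential decision types are routed to a review queue, and the fact that an automated decision occurred is recorded for disclosure. This is a minimal sketch under assumed names; the decision categories and fields are illustrative, not drawn from the Act's text.

```python
# Hypothetical sketch: route consequential automated decisions through a
# human-review checkpoint and record them so the Data Principal's right to
# know about (and contest) an automated decision can be honoured.
from dataclasses import dataclass

CONSEQUENTIAL = {"loan_approval", "hiring_shortlist", "insurance_underwriting"}

@dataclass
class Decision:
    kind: str
    model_output: str
    needs_human_review: bool
    disclosed_as_automated: bool

def decide(kind: str, model_output: str) -> Decision:
    consequential = kind in CONSEQUENTIAL
    return Decision(
        kind=kind,
        model_output=model_output,
        needs_human_review=consequential,  # human-in-the-loop checkpoint
        disclosed_as_automated=True,       # transparency to the Data Principal
    )

d = decide("loan_approval", "reject")  # flagged for human review before it takes effect
```

The point of the sketch is that review routing is decided by the decision's category, not left to per-team discretion: a loan rejection can never silently bypass the checkpoint.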
Data Fiduciary vs. Data Principal: What Your AI System Must Respect
The DPDP Act’s most foundational concept is the Data Fiduciary — any entity that determines the purpose and means of processing personal data. If your organisation decides what data to collect and why to process it, you are the Data Fiduciary. The individual whose data is processed is the Data Principal.
This distinction has direct consequences for how AI systems must be designed.
A Data Principal has four enforceable rights your AI system must accommodate:
- Right to access — the ability to know what personal data is held about them
- Right to correction — the ability to correct inaccurate or outdated data
- Right to erasure — the ability to withdraw consent and have data deleted
- Right to grievance redress — access to a functional, timely complaints mechanism
For AI systems, these rights create concrete technical requirements. If a customer exercises their right to erasure, you must be able to identify and delete all their personal data — including training data used to build models, stored embeddings, and inference logs. The Act does not provide a “model weights exception.”
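The erasure requirement just described implies a cascade, not a single delete. The sketch below illustrates one possible shape under assumed data structures — the store names, record schema, and `retraining_review` flag are hypothetical, and whether retraining is actually required for a given model is a legal and technical judgment the code cannot make.

```python
# Hypothetical sketch of an erasure cascade: on a withdrawal request, remove
# the Data Principal's records from every store in scope (source data, stored
# embeddings, inference logs) and flag affected models for retraining review.
def erase_principal(principal_id, stores, model_registry):
    """stores: dict of store name -> list of records keyed by 'principal_id'.
    Returns the set of stores actually touched. Models trained on deleted data
    are flagged for review rather than silently left in place."""
    touched = set()
    for name, records in stores.items():
        before = len(records)
        records[:] = [r for r in records if r["principal_id"] != principal_id]
        if len(records) != before:
            touched.add(name)
    for model in model_registry:
        if principal_id in model["training_principals"]:
            model["retraining_review"] = True
    return touched

stores = {
    "training_data":  [{"principal_id": "u7"}, {"principal_id": "u8"}],
    "embeddings":     [{"principal_id": "u7"}],
    "inference_logs": [{"principal_id": "u9"}],
}
models = [{"name": "churn-v2", "training_principals": {"u7", "u8"}, "retraining_review": False}]
touched = erase_principal("u7", stores, models)
```

In practice each "store" would be a database or object-store deletion job rather than an in-memory list, but the structural lesson carries over: erasure must fan out to every copy, and models built on the erased data must surface for review.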
Significant Data Fiduciaries face additional obligations. The DPBI designates SDF status based on the volume and sensitivity of data processed, risk to Data Principals, and national security considerations. Organisations likely to be classified as SDF include large BFSI firms, major e-commerce platforms, and health-tech companies. SDF obligations include mandatory Data Protection Impact Assessments (DPIAs), appointment of a Data Protection Officer (DPO), and periodic algorithmic audits.
[UNIQUE INSIGHT]: The algorithmic audit requirement for SDFs is the provision most enterprise AI teams are underestimating. Unlike a data audit — which examines what data you hold — an algorithmic audit examines whether your AI systems produce discriminatory, inaccurate, or harmful outputs. This requires explainability tooling, bias testing frameworks, and documented audit trails that most Indian AI deployments currently lack.
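To make the algorithmic audit concrete, here is one possible bias check: comparing approval rates across groups and flagging the model when the disparity crosses a threshold. This is a sketch only — the four-fifths ratio used below is a common fairness heuristic borrowed from employment-testing practice, not a threshold prescribed by the DPDP Act or the DPBI.

```python
# Hypothetical audit check: compare approval rates across groups and flag the
# model when the lowest-to-highest rate ratio falls below a chosen threshold.
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flag(decisions, threshold=0.8):
    rates = approval_rates(decisions)
    lowest, highest = min(rates.values()), max(rates.values())
    return highest > 0 and (lowest / highest) < threshold

# Group A approved 8/10 (0.8), group B approved 5/10 (0.5): ratio 0.625 < 0.8.
decisions = [("A", True)] * 8 + [("A", False)] * 2 + [("B", True)] * 5 + [("B", False)] * 5
flagged = disparate_impact_flag(decisions)
```

A real audit programme would pair checks like this with explainability tooling and a documented trail of which model versions were tested, when, and against which thresholds.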
Building DPDP-Compliant AI: Technical Architecture Checklist
The following table maps AI system components to their DPDP obligations and the specific action required to achieve compliance.
| AI System Component | DPDP Obligation | Action Required |
|---|---|---|
| Training data pipeline | Lawful purpose, data minimisation | Audit all training data for valid consent coverage; remove data without AI-training consent basis |
| Data ingestion / ETL | Storage limitation | Implement automated retention policies; delete source data post-training if no ongoing purpose |
| Model inference layer | Security safeguards | Enable inference logging; implement access controls; monitor outputs for personal data exposure |
| LLM prompt and context | Consent, purpose limitation | Restrict RAG retrieval to data with valid consent; implement data classification before context injection |
| Model outputs and caching | Breach notification readiness | Log all outputs containing personal data; establish breach detection triggers for unexpected personal data surfacing |
| User-facing consent UI | Free, specific, informed consent | Rebuild consent flows to name AI training and personalisation as explicit purposes; no pre-ticked boxes |
| Automated decision engine | Right to human review | Implement human-in-the-loop checkpoints for consequential decisions; document decision logic |
| Data Principal rights portal | Access, correction, erasure rights | Build or integrate a subject access request system; include model data deletion in erasure workflows |
| Grievance mechanism | Redress obligation | Designate a DPO or grievance officer; publish contact details; resolve complaints within 30 days |
| Cross-border transfer controls | Transfer restrictions | Map all data flows to cloud regions; document the transfer basis and await the central government's Section 16 notifications for restricted transfers |
Cross-Border Data Transfer Under DPDP: What AI Teams Need to Know
Cross-border data transfer is one of the Act’s most operationally complex provisions for AI teams. Section 16 of the DPDP Act empowers the central government to restrict transfers of personal data to specific countries or territories. A whitelist of approved transfer destinations is expected to be notified by the central government; as of mid-2026, this list is in consultation phase.
The practical implication: if your AI system sends personal data to a cloud provider or API endpoint outside India — including OpenAI’s US-based infrastructure, Google Cloud’s non-Indian regions, or Azure regions outside India — you may be processing data in a jurisdiction not yet cleared for transfer.
For enterprises, the immediate actions are:
- Map all data flows. Identify every point where personal data leaves Indian infrastructure — including API calls to LLM providers, telemetry to US-based SaaS tools, and backup replication to foreign cloud regions.
- Assess regional cloud options. AWS Mumbai, Azure India Central, and Google Cloud Mumbai regions allow data to remain in India. Workloads involving personal data should be evaluated for migration to India-region infrastructure.
- Document transfer basis. Until the whitelist is published, enterprises should document the legal basis for any cross-border transfer and obtain specific consent for international processing where required.
The BFSI sector faces particularly strict constraints. RBI’s data localisation requirements for payment data overlap with DPDP transfer restrictions, meaning financial services AI teams operate under a double compliance layer.
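The "map all data flows" step above is mechanical enough to automate: maintain an inventory of outbound flows and flag any that carry personal data outside India-region infrastructure. The sketch below assumes a hand-maintained inventory; the region identifiers shown (`ap-south-1` for AWS Mumbai, `centralindia` for Azure, `asia-south1` for GCP Mumbai) are illustrative of the mapping exercise, not a complete or authoritative list.

```python
# Hypothetical sketch: flag outbound flows that carry personal data outside
# India-region cloud infrastructure, pending the Section 16 transfer list.
INDIA_REGIONS = {"ap-south-1", "centralindia", "asia-south1"}  # illustrative set

flows = [
    {"name": "fraud-model-training", "region": "ap-south-1", "personal_data": True},
    {"name": "llm-api-call",         "region": "us-east-1",  "personal_data": True},
    {"name": "anon-telemetry",       "region": "us-east-1",  "personal_data": False},
]

def transfer_review_queue(flows):
    """Return flows needing transfer-basis documentation: personal data
    leaving India-region infrastructure."""
    return [f["name"] for f in flows
            if f["personal_data"] and f["region"] not in INDIA_REGIONS]

queue = transfer_review_queue(flows)  # only the LLM API call needs review
```

Running a check like this in CI against an infrastructure inventory turns the one-time mapping exercise into a continuously enforced control: a new foreign-region flow fails the build instead of surfacing in an audit.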
Consent Management Systems for AI: Implementation Guide
A consent management platform (CMP) is the operational foundation of DPDP compliance. For AI-heavy enterprises, a standard CMP designed for web cookies is insufficient. You need an enterprise-grade consent architecture that captures, stores, versions, and enforces consent across AI data pipelines.
A compliant consent framework for AI must include:
Granular consent collection. Consent must be purpose-specific. “We may use your data to improve our services” is not compliant. The notice must name the processing activity — for example: “We will use your transaction history to train our fraud detection model” — and obtain a separate signal for each distinct purpose.
Consent versioning. When your AI system’s processing purpose changes — say, you add a new model that uses existing data — you must obtain fresh consent. Your CMP must maintain version history, know which users consented to which version, and gate data access accordingly.
Withdrawal mechanism. Withdrawal of consent must be as easy as giving it. A user who revokes consent must trigger an automated workflow: cease processing, flag data for deletion, and update the model’s training data eligibility status.
Audit trail. Every consent event — given, withdrawn, updated — must be logged with a timestamp, the version of the notice shown, and the channel through which consent was captured. This is your primary defence in a DPBI investigation.
Integration with AI pipelines. The CMP must expose an API that your data pipelines can query before processing. A record flagged as “consent withdrawn” must be filtered out at the pipeline level, not just at the user interface.
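The five requirements above converge on one contract between the CMP and the pipeline: versioned consent, a withdrawal path, a timestamped audit trail, and a query the pipeline calls before processing each record. The sketch below is a minimal in-memory illustration of that contract, with hypothetical method names; a production CMP would expose this over an API and persist both state and log durably.

```python
# Hypothetical sketch of the CMP-to-pipeline contract: versioned consent with
# an audit trail, queried by the pipeline before each record is processed.
import time

class ConsentStore:
    def __init__(self):
        self._state = {}     # principal_id -> {"version": int, "withdrawn": bool}
        self.audit_log = []  # every consent event, timestamped

    def _log(self, principal_id, event, version):
        self.audit_log.append({"principal": principal_id, "event": event,
                               "notice_version": version, "ts": time.time()})

    def grant(self, principal_id, notice_version):
        self._state[principal_id] = {"version": notice_version, "withdrawn": False}
        self._log(principal_id, "granted", notice_version)

    def withdraw(self, principal_id):
        entry = self._state.get(principal_id)
        if entry:
            entry["withdrawn"] = True
            self._log(principal_id, "withdrawn", entry["version"])

    def may_process(self, principal_id, required_version):
        """Pipeline gate: consent must be active AND given against the current
        notice version; older versions require re-consent, not silent reuse."""
        entry = self._state.get(principal_id)
        return bool(entry) and not entry["withdrawn"] and entry["version"] >= required_version

cmp = ConsentStore()
cmp.grant("u1", notice_version=2)
cmp.grant("u2", notice_version=1)  # consented to an older notice
cmp.withdraw("u1")
```

Note how the versioning requirement falls out of `may_process`: when a new AI purpose ships under notice version 2, users who consented to version 1 are automatically excluded until they re-consent, and every transition is already in the audit log.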
Penalties and Enforcement: What Happened in the First 12 Months (2025–2026)
The DPBI became operationally active in late 2024, with its first formal enforcement actions emerging through 2025. While the DPBI has not published a comprehensive enforcement register as of mid-2026, reported and publicly documented actions indicate the following patterns.
Financial services lead enforcement activity. The BFSI sector — handling the largest volumes of personal data with the most mature regulatory oversight — has seen the earliest enforcement scrutiny. Consent framework deficiencies and inadequate breach notification procedures are the most frequently cited violations.
Breach notification failures are a primary trigger. The Act requires notification to the DPBI of any personal data breach. Organisations that discovered breaches and failed to notify within the prescribed window — or that attempted to self-remediate without notification — have faced compounded penalties.
Penalties are per-instance, not per-event. This is the provision most enterprises are underestimating. A single deployment of a non-compliant AI system that processes data for thousands of users does not produce one violation. Each instance of non-compliant processing can theoretically constitute a separate violation, each attracting up to ₹250 crore. The Act’s structure creates liability that scales with data volume.
MeitY’s AI governance draft framework — published for consultation in late 2025 — indicates that the government intends to layer AI-specific obligations on top of DPDP, including mandatory transparency disclosures for AI systems that make consequential decisions. Enterprises should treat compliance with the DPDP Act’s existing provisions as the minimum, not the ceiling.
[ORIGINAL DATA]: In WinInfoSoft’s compliance assessment engagements across BFSI, healthcare, and e-commerce clients in 2025–2026, the most common gap was not data storage — it was consent coverage for AI-specific processing activities. Across 14 enterprise assessments, 11 organisations had legacy consent frameworks that made no reference to AI training, model inference, or automated decision-making as processing purposes.
How WinInfoSoft Builds DPDP-Compliant AI Systems
WinInfoSoft’s AI and cybersecurity practice has worked with enterprises across BFSI, healthcare, and manufacturing to design AI systems that meet DPDP Act obligations from the architecture stage, not as a retrofit.
Our approach integrates compliance at four layers:
Data governance layer. We audit training data lineage, map consent coverage, and implement automated data classification so AI pipelines know — before processing — whether each record has valid consent for the intended use.
Model architecture. We build privacy-preserving techniques into model design: differential privacy for sensitive training data, federated learning where data must remain at source, and output monitoring to detect and suppress personal data in model responses.
Consent and rights infrastructure. We design and implement enterprise-grade CMPs that integrate directly with data pipelines, enforce consent at the point of processing, and automate the fulfilment of Data Principal rights requests including erasure across model training data.
Audit and explainability. For clients likely to be designated Significant Data Fiduciaries, we build the algorithmic audit infrastructure — bias testing, explainability tooling, decision logging — required to satisfy the DPBI’s oversight requirements.
Organisations in Bengaluru and Mumbai — India’s two primary AI compliance hubs by concentration of AI-deploying enterprises — can engage our teams for on-site compliance assessments, technical architecture reviews, or managed compliance programmes.
[Link: Learn more about our AI governance and compliance engineering → /service-generative-ai.html] [Link: Explore our cybersecurity and regulatory compliance services → /service-cybersecurity.html]
Frequently Asked Questions
What is the DPDP Act 2023?
The Digital Personal Data Protection Act 2023 is India’s comprehensive data protection law, enacted in August 2023 and fully enforced in 2026. It establishes rights for individuals (Data Principals) over their personal data and obligations for organisations (Data Fiduciaries) that collect or process it, enforced by the Data Protection Board of India with penalties up to ₹250 crore per violation.
Does the DPDP Act apply to AI systems?
Yes. Any AI system that processes the personal data of Indian residents falls within the DPDP Act’s scope. This includes LLMs, recommendation engines, facial recognition systems, profiling models, and automated decision-making tools. Processing includes training, inference, storage, and transfer of personal data — all activities that AI systems routinely perform.
What is the maximum penalty under the DPDP Act?
The Act prescribes penalties up to ₹250 crore per violation (Section 33, Schedule). Critically, penalties are per-instance — a non-compliant AI system processing thousands of records without valid consent does not produce a single ₹250 crore exposure; it potentially produces thousands. For large-scale AI deployments, the aggregate liability can be substantial.
What is a “Data Fiduciary” under the DPDP Act?
A Data Fiduciary is any person or organisation that determines the purpose and means of processing personal data. If your organisation decides to deploy an AI system that uses customer data, you are the Data Fiduciary for that processing. Data Fiduciaries bear all compliance obligations under the Act, including consent, security, breach notification, and rights fulfilment.
Do foreign companies operating in India need to comply with the DPDP Act?
Yes. The Act applies extraterritorially. Any entity that offers goods or services to Indian residents, or processes their personal data in connection with any activity, must comply — regardless of where the organisation is based. MNCs with Indian operations, cloud providers serving Indian customers, and SaaS companies with Indian users are all within scope.
How does the DPDP Act affect cross-border data transfers?
Section 16 empowers the government to restrict transfers to specific countries. A whitelist of approved transfer destinations is expected but not yet finalised as of mid-2026. Enterprises should map all data flows leaving Indian infrastructure — including API calls to foreign LLM providers — and obtain specific consent for international processing pending the whitelist’s publication.
What consent requirements apply to AI personalisation?
Consent for AI personalisation must be free, specific, informed, unconditional, and unambiguous. Generic “improve our services” language is insufficient. The consent notice must explicitly name AI personalisation, profiling, or model training as a purpose. Separate consent is required for each distinct AI use of personal data, and withdrawal must be as easy as giving consent.
How can enterprises prepare for a DPDP audit by the DPBI?
A DPBI audit will examine consent records, data processing logs, security safeguard documentation, breach notification history, and Data Principal rights fulfilment logs. Enterprises should: (1) conduct a data processing inventory covering all AI systems, (2) audit consent frameworks for AI-specific coverage, (3) implement an audit trail for all consent events, (4) document security safeguards and conduct a DPIA for high-risk AI deployments, and (5) test Data Principal rights workflows end-to-end.
India’s DPDP Act 2023 has moved from legislative milestone to operational compliance requirement. For enterprises deploying AI — in BFSI, healthcare, e-commerce, or any sector processing personal data at scale — the window for voluntary remediation is the right time to act, not after a DPBI investigation opens. WinInfoSoft’s cybersecurity and AI governance practice helps enterprises design compliant AI systems from the ground up. Related reading: AI-Powered Cybersecurity for Indian Enterprises and Generative AI in Indian Enterprise.


