The Legal Character of Shadow AI Under PDPL

Shadow AI — the use of AI tools and services by employees without organizational authorization, procurement review, or documented oversight — is not primarily a technology governance issue. Under the PDPL, it is a data processing compliance issue, and the liability accrues to the organization from the moment an employee inputs personal data into an unauthorized system.

The PDPL defines "personal data controller" as the entity that determines the purposes and means of processing personal data. When an employee uses a consumer AI tool to process customer records, employee information, or any other personal data in the course of their employment, the organization is the controller. The employee is acting as an agent of the organization. The fact that the processing occurred on an unsanctioned platform does not break the controller relationship — it compounds the liability by adding a failure of oversight to an undocumented processing activity.

The Data Processing Register Obligation

Article 13 of the PDPL and the Implementing Regulations require controllers to maintain a comprehensive register of their personal data processing activities. This register must document each processing activity, the legal basis for processing, the categories of data subjects and personal data involved, the recipients or categories of recipients (including third-party processors), data retention periods, and cross-border transfer information where applicable.

Shadow AI creates processing activities that are, by definition, absent from this register. The processing occurs through a third-party AI service — meaning there is also an undocumented processor relationship. If that AI service stores, retains, or uses inputs for model training purposes, the data may also be subject to cross-border transfer without a documented legal basis. A single employee using an unauthorized AI tool can simultaneously create three distinct PDPL compliance failures: an unregistered processing activity, an undocumented processor engagement, and an undocumented cross-border transfer.
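The three distinct failures described above can be sketched as a simple enumeration. The function and field names below are illustrative, not a statutory schema; they mirror the register elements listed earlier (registration, processor documentation, cross-border transfer basis).

```python
def shadow_ai_gaps(registered: bool, processor_documented: bool,
                   transfers_abroad: bool, transfer_basis_documented: bool) -> list[str]:
    """Enumerate the distinct PDPL compliance failures a single
    unauthorized AI workflow can create, per the analysis above."""
    gaps = []
    if not registered:
        gaps.append("unregistered processing activity")
    if not processor_documented:
        gaps.append("undocumented processor engagement")
    if transfers_abroad and not transfer_basis_documented:
        gaps.append("undocumented cross-border transfer")
    return gaps

# An employee pastes customer records into a consumer AI tool that
# retains inputs for training and stores data overseas:
gaps = shadow_ai_gaps(registered=False, processor_documented=False,
                      transfers_abroad=True, transfer_basis_documented=False)
print(gaps)  # all three failures arise from one act
```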

Incident Notification Risk

The PDPL requires controllers to notify SDAIA of a personal data breach within 72 hours of becoming aware of it, where the breach is likely to result in harm to data subjects. Shadow AI creates a specific and serious problem for this obligation: organizations frequently cannot comply with breach notification requirements for incidents involving systems they did not know were being used to process personal data.

If a consumer AI tool suffers a data breach or security incident that exposes personal data an employee entered into it, the organization may not learn of the incident through its normal monitoring channels. It has no contractual relationship with the AI vendor that would generate an incident notification, no logging of what data was submitted, and incident response procedures that almost certainly do not contemplate shadow AI as a breach surface. The result is that the organization is likely to miss its 72-hour notification window, adding a notification failure to the underlying processing violation.
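The timing problem above can be sketched as a small calculation. The 72-hour figure comes from the text; the dates and the three-week awareness lag are hypothetical illustrations of an incident discovered through press coverage rather than monitoring.

```python
from datetime import datetime, timedelta, timezone

# The clock runs from the controller's awareness of the breach,
# not from the breach itself.
NOTIFICATION_WINDOW = timedelta(hours=72)

def sdaia_deadline(became_aware: datetime) -> datetime:
    """Latest time by which SDAIA must be notified."""
    return became_aware + NOTIFICATION_WINDOW

# With no vendor contract and no logging, awareness often arrives weeks
# late, via the press, or never at all:
breach = datetime(2025, 3, 1, tzinfo=timezone.utc)
aware = breach + timedelta(days=21)   # organization learns from news coverage
print(sdaia_deadline(aware))          # 2025-03-25 00:00:00+00:00
```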

What an AI Acceptable Use Policy Must Cover

An AI acceptable use policy (AUP) is a necessary but not sufficient control for shadow AI risk. To be effective under a PDPL compliance framework, an AI AUP must address the following with specificity:

  • Scope of prohibition: A clear statement that personal data — including customer data, employee data, and any information relating to identified or identifiable natural persons — may not be input into any AI tool or service that has not been approved through the organization's AI procurement and data protection review process.
  • Definition of approved tools: A maintained list of AI tools that have been assessed, approved, and documented as processors in the data processing register, with the data categories and purposes for which each tool is approved.
  • Reporting obligation: A mechanism for employees to report AI tools in use outside the approved list, and a process for rapidly assessing each reported tool and either approving or prohibiting it.
  • Consequences: A clear statement that unauthorized use of AI tools with personal data constitutes a data handling violation, with consequences defined in the organization's disciplinary and data protection policy framework.
  • Training: Documented training that ensures employees understand what categories of information constitute personal data under PDPL and why inputting such data into unauthorized systems creates legal risk for the organization.
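As a sketch of how the "scope of prohibition" and "definition of approved tools" items interact, the gate below checks both that a tool is on the approved list and that it is approved for the specific data category involved. The tool names and data categories are hypothetical.

```python
# Illustrative approved-tools list: each entry maps an approved tool to the
# data categories it was assessed and documented for in the register.
APPROVED_TOOLS: dict[str, set[str]] = {
    "internal-llm-gateway": {"customer_support_text", "internal_docs"},
    "vendor-x-enterprise": {"internal_docs"},
}

def use_permitted(tool: str, data_category: str) -> bool:
    """A tool must be approved AND approved for this data category;
    an unlisted tool is prohibited for all personal data."""
    return data_category in APPROVED_TOOLS.get(tool, set())

use_permitted("internal-llm-gateway", "customer_support_text")  # True
use_permitted("vendor-x-enterprise", "customer_support_text")   # False: approved tool, wrong category
use_permitted("consumer-chatbot", "internal_docs")              # False: not an approved tool
```

The per-category check matters: approving a tool for one purpose does not approve it for every data category the register documents.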

The Governance Gap Shadow AI Exposes

Shadow AI is a symptom of a governance gap rather than a cause of one. When employees reach for unauthorized AI tools, it is typically because the organization has not provided sanctioned tools that meet their productivity needs, or because the approval process for new tools is sufficiently slow or opaque that employees route around it. Addressing shadow AI purely through prohibition — without creating a workable sanctioned pathway for AI tool adoption — is an enforcement posture that creates the appearance of compliance without the substance of it.

A PDPL-compliant approach to shadow AI requires three things operating together: a prohibition that is specific and communicated, a sanctioned pathway that is accessible and reasonably fast, and a data processing register that is actively maintained and reflects the actual AI tool landscape within the organization.