Header image

For AI Compliance, Knowledge is Half the Battle

Preparing for compliance with the AI Act next year.


The EU’s AI Act takes a risk-based approach to regulating AI systems, dividing them into four categories: minimal risk, general purpose, high-risk, and prohibited. The deadline for companies to comply with the Act’s requirements is February 2nd 2025. 

The Act lists eight prohibited AI use cases, including systems that employ manipulative or exploitative techniques, social scoring, predictive policing, creation of facial recognition databases, detecting emotions of employees or students, using biometric data to gather sensitive personal information (as defined by the GDPR), and using real-time biometric ID systems in public places for law enforcement purposes. 

While it’s important to understand which AI systems are prohibited, and which may only be considered high-risk, the first step for any business is to understand how it may be using AI in its day to day operations. 

Discovering Shadow AI 

Inventorying and managing IT assets is an essential business function: but in addition to the many sanctioned apps and cloud services businesses use, their employees frequently leverage many more unmanaged apps and services, and those assets often incorporate AI. 

 A Cloud Access Security Broker (CASB) can identify managed and unmanaged apps in the business’ IT ecosystem, and categorise them by various criteria including their use of AI. Once all AI systems have been identified, mapping data flows throughout the organisation can provide visibility into which data are being ingested by which AI applications.

Armed with this information, CISOs and IT admins can make more informed decisions about the likely business impact of adopting or de-provisioning various applications. 

 As part of this process, data loss prevention (DLP) policies can help businesses tag sensitive data and track or prevent its ingestion by AI systems. DLP policies can also tag AI-generated content.

Cloud Security Posture Management (CSPM) and SaaS Security Posture Management (SSPM) solutions can also be leveraged to monitor AI assets and prevent ‘configuration drift’, ensuring that the use of AI-powered apps and services remains within organisation-defined parameters. 

Protecting employees, customers, and the public  

Some prohibited AI use cases are particularly important for employers. These include the ban on emotion detection, social scoring, and using biometrics to glean sensitive personal information. Again, DLP policies can help identify sensitive data as defined in the GDPR, including when that data has been generated by an AI system.

For emotion detection, it’s important to distinguish this from run-of-the-mill sentiment analysis tools that have been used for many years to gauge workforce morale and employees’ response to company policy changes. While the terms are often used interchangeably, sentiment analysis is typically limited to detecting a positive or negative response whereas emotion detection is much more fine-grained, covering a wider range of specific emotions, and therefore more capable of being used in a manipulative or exploitative manner.  

 Workforce monitoring tools like User and Entity Behaviour Analytics (UEBA) often incorporate AI to build baseline models of normal user behaviour. This can be used to detect when user behaviour deviates from the norm – a possible indicator of compromise. Some advanced UEBA solutions can also assign users a risk score based on their use of IT resources, and this risk score can be leveraged in real time to adjust access privileges or even refer users for additional training on company policy. 

 While the Act forbids social scoring, it probably isn’t aimed at these kinds of UEBA solutions, for a couple of reasons. First, the Act only forbids social scoring if it results in harm to an individual in a context other than the one in which the data was gathered, or if it results in harm that is disproportionate to the individual’s behaviour.

In the case of UEBA, risk scores are only used in the context of the company’s IT ecosystem, and are limited to implementing normal precautionary or disciplinary measures relating to the use of the company’s information assets. 

 Companies should consider leveraging an advanced UEBA tool to assist with AI compliance. By analysing users’ interactions with AI systems, companies can better detect and prevent unauthorised and harmful activities. 

 Social scoring, on the other hand, carries a greater risk of negatively impacting customers. This may be especially true in certain industries, like finance, or where AI-generated data is shared with affiliated entities. This just reinforces the importance of having DLP policies in place to tag AI-generated data, and the ability to map how customer data flows to or from AI-powered applications. 

 Last but not least, businesses must take care that their use of AI does not create content - or result in website or application design choices - that could be construed as manipulative or exploitative. For example, while social media platforms are generally not responsible for the content that their human users upload, under EU and American law they can be responsible for designing AI-powered algorithms that promote dangerous content, especially when directed at vulnerable populations like children. 

 In addition to training their workforces on the ethical use of AI, businesses should be aware of how malicious actors are using new techniques like prompt injection to trick an AI system into performing actions or disclosing information in violation of regulations or company policies.

By deploying advanced threat protection software with the ability to identify and mitigate threats to AI systems, businesses can ensure their use of AI remains safely within the bounds of the law. 

Neil Thacker CISO Netskope
Neil Thacker CISO Netskope

Upcoming Events

No events found.