How is AI transforming the way firms manage data privacy and what can be done to get on top of the area?
Artificial intelligence (AI) is key to a growing number of business operations. While the technology can increase efficiency and productivity, it is also creating a need to shake up data privacy strategies.
AI adoption is already driving enterprises to evolve their approach to data privacy and governance, according to Cisco's 2026 Data and Privacy Benchmark Study.
As companies increasingly use AI technology in day-to-day operations, 90% are expanding privacy programs and governance frameworks to protect their data, the research found.
Meanwhile, 93% are planning to allocate more resources to privacy and data governance over the next two years.
Customer Data
The first challenge stems from the fact that AI systems run on large volumes of customer data. This “naturally increases the risk of data being used in ways that go beyond what customers originally expected, or what regulations allow,” says Chiara Gelmini, financial services industry solutions director at Pegasystems.
This is made trickier by the fact that some AI models can be “black boxes to a certain degree,” she says. “So it’s not always clear, internally or to customers, how data is used or how decisions are actually made,” she tells SC Media UK.
For example, AI-driven diagnostic tools in medical devices – such as imaging analysis tools – may generate results that clinicians rely on for treatment decisions, yet offer little insight into how those results were reached, says Sharad Patel, data privacy and cybersecurity expert at PA Consulting.
“This lack of transparency makes it difficult to identify errors, assess whether biased or inappropriate data influenced the outcome, or challenge potentially unsafe recommendations – posing risks to both patient safety and organisational liability.”
On top of that, some models can retain elements of their training data and are increasingly exposed to cyber risks, manipulation, or even deepfakes – all of which raise the chances of sensitive information leaking out, says Gelmini.
The UK Information Commissioner’s Office’s (ICO) guidance on why it's important to protect privacy cites “a scope of harms” that can be caused by AI, says Joseph Wilson, head of strategic innovation at Optalysys: “These include loss of control over personal information, reputational damage and financial loss in the course of AI operations.”
AI Data Privacy Regulations
Regulation mandates that companies be transparent about data collection. In the UK, there is no regulation specifically governing AI. However, the EU AI Act introduces a risk-based framework for the technology, which does not apply directly in the UK but could influence firms operating internationally.
AI is “fully inside” the existing data protection regime of the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018, Gelmini explains. Under these current laws, if an AI system uses personal data, it must meet the same standards of lawfulness, transparency, data minimisation, accuracy, security and accountability as any other processing, she says.
Meanwhile, organisations are expected to prove they have thought the area through, typically by carrying out a Data Protection Impact Assessment (DPIA) before deploying high‑risk AI.
The ICO includes AI in the list of processing “likely to result in high risk” and its guidance on AI and data protection states that DPIAs are an “ideal” mechanism to demonstrate compliance for AI systems that process personal data, Gelmini explains.
Other regulations to look out for include the Data (Use and Access) Act 2025, which introduced targeted reforms. Broader changes proposed in the Data Protection and Digital Information Bill were delayed in 2024, but may be revisited under the current government, Gelmini adds.
Future Risks
Over the next 12 to 24 months, regulatory expectations around AI and data privacy are likely to “become more exacting in practice,” says Neil Thacker, global privacy and data protection officer at Netskope. “Provisions of the Data (Use and Access) Act will continue to come into force in stages and guidance from the ICO is expected to evolve as new AI use cases and risks emerge.”
Looking ahead, privacy risks are likely to intensify as AI systems become more “powerful, autonomous and data-hungry,” says Chris Linnell, associate director of data privacy at Bridewell.
The use of biometric, behavioural and inferred data is expected to grow, increasing the potential for “intrusive profiling and surveillance,” he says.
At the same time, AI-enabled threats such as deepfakes, synthetic identities and automated exploitation of personal data will further complicate governance. Meanwhile, regulatory frameworks may struggle to keep pace with rapid innovation, creating “uncertainty and uneven enforcement,” says Linnell.
Managing AI Risk
The growing use of AI poses a risk chiefly when it is left unmanaged. As AI becomes easier to adopt and more widespread, the practical way to stay ahead of these risks is “strong AI governance,” says Gelmini. “Firms should build privacy in from the start, mask private data, lock down security, make models explainable, test for bias, and keep a close eye on how systems behave over time.”
Firms should ensure they’re embedding privacy by design across the AI lifecycle, carrying out “robust DPIAs for high-risk use cases,” and maintaining clear accountability for AI systems and data assets, says Linnell.
Wilson advises firms to be “holistic” in their approach. This includes mitigating new threats, but also protecting data in “reasonably well-understood contexts for AI,” he says.
He advises using frameworks that can be deployed to protect data “throughout the complete lifecycle of a model, from training through to deployment and inference.”
For example, the Cloud Security Alliance produces professional guidance on best practices for the deployment of AI systems in cloud contexts, says Wilson.
Firms should limit data to what is strictly necessary, improve transparency and explainability, and adopt privacy-enhancing techniques such as pseudonymisation and strong access controls, Linnell says. “Just as importantly, they should invest in training and cross-functional governance so legal, technical and business teams collectively understand and manage AI-driven privacy risk.”
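To make the pseudonymisation and data-minimisation advice concrete, here is a minimal illustrative sketch in Python. It is not drawn from any of the firms quoted above: the field names, the record structure and the placeholder key are assumptions for illustration only. It replaces a direct identifier with a keyed hash (HMAC-SHA256), so records can still be linked for analytics without exposing the original value, and keeps only the fields a downstream AI system would actually need.

```python
import hashlib
import hmac

# Hypothetical key for illustration only; in practice this would be
# stored in a secrets manager and rotated under a governance policy.
SECRET_KEY = b"replace-with-a-managed-secret"


def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so records remain
    linkable, but the original value cannot be recovered without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()


# A raw customer record (hypothetical fields for illustration).
record = {"email": "jane@example.com", "postcode": "SW1A 1AA", "purchase_total": 42.50}

# Minimise: keep only what the AI system needs, and pseudonymise the
# identifier before the data leaves the source system.
safe_record = {
    "customer_token": pseudonymise(record["email"]),
    "purchase_total": record["purchase_total"],
}
```

Note that keyed hashing is pseudonymisation, not anonymisation: under UK GDPR the tokenised data is still personal data while the key exists, so access to the key itself must be locked down.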
Written by
Kate O'Flaherty
Cybersecurity and privacy journalist