Analysis of over one million prompts and 20,000 files found that 22% of files and 4.37% of prompts contained sensitive information.
Organisations are leaking sensitive data at alarming rates through GenAI tools.
According to research by Harmonic Security, analysis of over one million prompts and 20,000 files submitted to 300 GenAI and AI-enabled SaaS tools between April and June revealed that 22% of files and 4.37% of prompts contained sensitive information.
Leaked content included source code, proprietary algorithms, customer and employee data, M&A documents, and financial projections, posing major compliance and security risks.
The study also revealed that in Q2 alone, the average enterprise saw employees begin using 23 previously unknown GenAI tools, often via personal, unsanctioned accounts.
Alastair Paterson, CEO and co-founder of Harmonic Security, comments: “Had organisations not had browser based protection in place, sensitive information could have ended up training a model, or worse, in the hands of a foreign state. AI is now embedded in the very tools employees rely on every day and in many cases, employees have little knowledge they are exposing business data.”
Written by
Dan Raywood is a B2B journalist with 25 years of experience, including covering cybersecurity for the past 17 years. He has extensively covered topics from Advanced Persistent Threats and nation-state hackers to major data breaches and regulatory changes.
He has spoken at events including 44CON, Infosecurity Europe, RANT Forum, BSides Scotland, Steelcon and the National Cyber Security Show, and served as editor of SC Media UK, Infosecurity Magazine and IT Security Guru. He was also an analyst with 451 Research and a product marketing lead at Tenable.