Sensitive employee information is often input into GenAI tools.
Almost half (45.8 percent) of inputs into GenAI tools potentially disclosed customer data, such as billing information and authentication data.
According to an analysis by Harmonic Security, 26.8 percent of inputs contained information on employees, including payroll data, PII, and employment records. Some prompts even asked GenAI to conduct employee performance reviews.
Overall, 8.5 percent of prompts raised concerns as potentially disclosing sensitive data. Harmonic Security recommends deploying real-time monitoring, using paid plans or plans that do not train on input data, and creating workflows that shape how different departments or groups can engage with GenAI tools.
Alastair Paterson, CEO and co-founder at Harmonic Security, comments: “Most GenAI use is mundane. In most cases, organisations were able to manage this data leakage by blocking the request or warning the user about what they were about to do, but not all firms have this capability yet.
“The high number of free subscriptions is also a concern; the saying that ‘if the product is free, then you are the product’ applies here, and despite the best efforts of the companies behind GenAI tools, there is a risk of data disclosure.”
Written by
Dan Raywood is a B2B journalist with 25 years of experience, including covering cybersecurity for the past 17 years. He has extensively covered topics from Advanced Persistent Threats and nation-state hackers to major data breaches and regulatory changes.
He has spoken at events including 44CON, Infosecurity Europe, RANT Forum, BSides Scotland, Steelcon and the National Cyber Security Show, and served as editor of SC Media UK, Infosecurity Magazine and IT Security Guru. He was also an analyst with 451 Research and a product marketing lead at Tenable.