
Microsoft Battles GenAI Criminals to Protect Services

Company's DCU takes legal steps to prevent GenAI abuse.

Microsoft is taking legal action to defend the credibility of its AI tools after it detected malicious use of them.

In a statement, Steven Masada, assistant general counsel for Microsoft’s Digital Crimes Unit, said the company had observed a threat actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites. The attackers used those credentials to access generative AI services and purposely alter their capabilities.

“Cyber-criminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content,” he said.

Revoked Access

Upon discovery, Microsoft revoked access for the affected accounts, put countermeasures in place, and enhanced its safeguards to block such malicious activity in the future.

As a result, Microsoft’s DCU is taking legal action to ensure the safety and integrity of its AI services, saying the weaponisation of its AI technology by online actors “will not be tolerated.”

Masada said Microsoft is “pursuing an action to disrupt cyber-criminals who intentionally develop tools specifically designed to bypass the safety guardrails of generative AI services,” including Microsoft’s own, “to create offensive and harmful content.”

Ongoing Investigation

Masada said the observed activity “directly violates U.S. law and the Acceptable Use Policy and Code of Conduct for our services,” and this action is part of an ongoing investigation into the creators of these illicit tools and services.

“Specifically, the court order has enabled us to seize a website instrumental to the criminal operation that will allow us to gather crucial evidence about the individuals behind these operations, to decipher how these services are monetised, and to disrupt additional technical infrastructure we find.”

Microsoft says it goes to great lengths to enhance the resilience of its products and services against abuse, but acknowledges that cyber-criminals are “persistent and relentless” in innovating tools and techniques to bypass even the most robust security measures.

“Microsoft will continue to do its part by looking for creative ways to protect people online, transparently reporting on our findings, taking legal action against those who attempt to weaponise AI technology, and working with others across public and private sectors globally to help all AI platforms remain secure against harmful abuse.”

Dan Raywood, Senior Editor, SC Media UK

Dan Raywood is a B2B journalist with more than 20 years of experience, including covering cybersecurity for the past 16 years. He has extensively covered topics from Advanced Persistent Threats and nation-state hackers to major data breaches and regulatory changes.

He has spoken at events including 44CON, Infosecurity Europe, RANT Conference, BSides Scotland, Steelcon and ESET Security Days.

Outside work, Dan enjoys supporting Tottenham Hotspur, managing mischievous cats, and sampling craft beers.

