The AI tool was put to nefarious use.
More than 20 nation-state and cybercriminal campaigns exploiting OpenAI's ChatGPT service for malware deployment and influence operations have been dismantled by the company this year.
According to SC US, the Iranian state-backed threat group CyberAv3ngers leveraged ChatGPT to research default industrial control system credentials, debug bash scripts, and gather information on creating Modbus TCP/IP clients.
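For context, a Modbus TCP/IP client of the kind reportedly researched is a small piece of code that reads or writes registers on industrial equipment over port 502. The minimal sketch below is illustrative only and is not drawn from the report; the device address and register range are hypothetical placeholders. It simply issues a standard "read holding registers" request over a raw socket.

```python
import socket
import struct

# Minimal Modbus TCP "read holding registers" request, for illustration only.
# The target address and register range here are hypothetical placeholders.
HOST, PORT = "192.168.1.10", 502   # Modbus TCP devices listen on port 502
UNIT_ID = 1                        # device/slave identifier
START_REGISTER, COUNT = 0, 4       # read four holding registers from address 0

# MBAP header: transaction id, protocol id (0 = Modbus), remaining length, unit id,
# followed by the PDU: function code 0x03, starting address, register count.
request = struct.pack(">HHHB BHH", 1, 0, 6, UNIT_ID, 0x03, START_REGISTER, COUNT)

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(request)
    response = sock.recv(256)

# Response: 7-byte MBAP header, function code, byte count, then register values.
byte_count = response[8]
values = struct.unpack(f">{byte_count // 2}H", response[9:9 + byte_count])
print(values)
```

The brevity of such a client is part of the concern: the protocol itself has no authentication, so knowing a device's default credentials and how to speak Modbus is often enough to interact with exposed industrial equipment.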
Another Iranian threat operation, STORM-0817, was found to have exposed malware code through ChatGPT, while a spear-phishing attack against OpenAI employees by Chinese threat actor SweetSpecter was also foiled.
OpenAI's disruption of adversarial ChatGPT use comes amid threat actors' limited progress in exploiting the technology for malware attacks or election-targeted influence operations.
"Threat actors continue to evolve and experiment with our models, but we have not seen evidence of this leading to meaningful breakthroughs in their ability to create substantially new malware or build viral audiences," said the report.
Written by Dan Raywood, Senior Editor, SC Media UK