
CSIRO Research Assesses LLM Usability in SOCs

Study finds human-AI collaboration can improve productivity, trust and wellbeing in high-pressure environments.


Australia’s national science agency, CSIRO, has published findings from a ten-month study into how large language models (LLMs) like ChatGPT-4 can support cybersecurity analysts in live threat investigations.

The trial, run in partnership with cybersecurity firm eSentire across its Security Operations Centres in Ireland and Canada, involved 45 analysts submitting over 3,000 queries to ChatGPT-4.

Routine Tasks

The research found that analysts mainly used AI for low-risk, routine tasks, such as interpreting technical alerts, editing reports and analysing malware, while keeping key judgement calls for themselves.

Only four percent of queries sought direct recommendations, with most requests focused on factual information and context. This indicates that AI adoption in SOCs is starting with workflow augmentation, reducing fatigue and freeing up time for higher-value work.

Conducted under CSIRO’s Collaborative Intelligence (CINTEL) program, the study highlights how human-AI collaboration can improve productivity, trust, and wellbeing in high-pressure environments like cybersecurity. Researchers say the findings will inform the next generation of AI tools for SOCs, with a planned two-year follow-up study to track long-term adoption and refine best practices.

Data scientist and research coordinator Dr Martin Lochner explained that the trial is the first long-term industrial study to show how LLMs can be used in real-world cybersecurity operations, helping shape the next generation of AI tools for SOC teams.

“This collaboration uniquely combined academic rigor with industry reality, producing insights that neither pure laboratory studies nor industry-only analysis could achieve,” Lochner said. “For instance, we found that only four per cent of analyst requests to ChatGPT-4 asked for a direct answer, such as ‘is this malicious?’. Instead, analysts preferred receiving evidence and context to support their own decision making.

“This highlights the value of LLMs as decision-support tools that enhance analyst autonomy rather than replace it.” 



Dan Raywood

Dan Raywood is a B2B journalist with 25 years of experience, including covering cybersecurity for the past 17 years. He has extensively covered topics from Advanced Persistent Threats and nation-state hackers to major data breaches and regulatory changes.

He has spoken at events including 44CON, Infosecurity Europe, RANT Forum, BSides Scotland, Steelcon and the National Cyber Security Show, and served as editor of SC Media UK, Infosecurity Magazine and IT Security Guru. He was also an analyst with 451 Research and a product marketing lead at Tenable.

