
LLM Predictions: Large Language or 'Lost Loads Mate'?

How much will LLMs, and the security surrounding them, be a factor in 2025?


Over the course of the last two months, after processing the large number of 2025 predictions we received into commonly cited trends, we have published a series of articles based on these thoughts.

These have been on the following subjects:  

GenAI

CNI Attacks

Quantum Computing

Ransomware

Regulation

Cyber Resilience

Remote Working

For this final entry, we are continuing with the theme of AI, specifically large language models (LLMs). Already this year, we have seen the UK government launch an AI tool and change the name of its AI institute, signalling a fuller embrace of the technology.

Considerations

However, there are important considerations around LLMs, such as whether they can learn in real time, whether they can be taught incorrect information and so give inaccurate answers, and which models are used to train them.

Daniel Rapp, chief AI and data officer at Proofpoint, cited these concerns, claiming that there are “intriguing possibilities for threat actors to serve their own interests,” particularly in how they might manipulate private data used by LLMs.

He predicted that in 2025, we will begin to see initial attempts by threat actors to manipulate private data sources—such as by purposely tricking AI through the contamination of data used by LLMs. This could include deliberately manipulating emails or documents with false or misleading information to confuse AI or make it behave harmfully.

“This development will require heightened vigilance and advanced security measures to ensure that AI isn’t fooled by bad information,” he said.

Integration

Taking a more positive perspective on the benefits of LLMs, Asanka Abeysinghe, CTO of WSO2, noted that LLMs became a major trend in 2024, as many companies were eager to integrate them into various aspects of their operations—though this sometimes stretched their practical applications.

Looking ahead to 2025, Abeysinghe stated that the use of agentic AI, powered by smaller language models, will drive the development of more autonomous systems. Additionally, the adoption of AI strategies will increasingly be led by Chief AI Officers to align these technologies with business goals.

This raises the question: will Chief AI Officers be responsible for the implementation, training, and maintenance of LLMs? One key responsibility will be ensuring the security of the LLM's data, a concern picked up by Lebin Cheng, VP of API security at Imperva, a Thales Company.

“Adoption of LLM-based applications and custom components, such as LLM agents, will start to proliferate rapidly in 2025, leading to an explosion of APIs,” he said.

“As the agentic AI wave takes hold, API traffic will undoubtedly increase—becoming an even greater threat to an organization’s sensitive data, and driving a greater need for API observability.”

AppSec Breach?

This led Cheng to predict that in 2025, there will be at least one highly publicised LLM application security breach related to APIs. He also anticipated that the cost of API-related security incidents would likely increase.

“It will be a very happy coincidence if a working detection and remediation solution can actually help detect and limit the damage when an API abuse happens to the LLM-based app—but time will tell.”

AI was the dominant theme of the predictions we received, so it was unsurprising that both GenAI and LLMs featured so prominently. Now that we are a sixth of the way through the year and nearing the end of the first quarter, the prevalence of AI that these predictions anticipated is already evident.

How useful LLMs prove to be for businesses, and what threats and challenges they pose, remains to be seen.


Dan Raywood, Senior Editor, SC Media UK

Dan Raywood is a B2B journalist with more than 20 years of experience, including covering cybersecurity for the past 16 years. He has extensively covered topics from Advanced Persistent Threats and nation-state hackers to major data breaches and regulatory changes.

He has spoken at events including 44CON, Infosecurity Europe, RANT Conference, BSides Scotland, Steelcon and ESET Security Days.

Outside work, Dan enjoys supporting Tottenham Hotspur, managing mischievous cats, and sampling craft beers.

