AI will impact every cybersecurity role. But learning how to prompt ChatGPT will keep you ahead of the game, writes Jamal Elmellas, COO of Focus-on-Security
Skills shortages have already forced businesses to look at where they can automate processes, with 17% of organisations using AI, machine learning and automation in cybersecurity operations, according to the (ISC)² Cybersecurity Workforce Study 2021.
And this figure will only rise now that large language models (LLMs), such as ChatGPT and Bard, have taken the world by storm.
But does this mean AI is coming for cybersecurity jobs?
AI aids pen testers and phishers
There’s little doubt that AI will impact every cybersecurity role. For example, it’s likely that these technologies will help with the generation of policy documents for GRC purposes and the distillation of threat reports when reporting to the board.
Developers will also leverage AI to retrieve code from libraries and to check and debug code, an aptitude that could rival human disciplines such as DevSecOps.
AI will also impact penetration testing and red teaming by making it easier for testers to create phishing tests or social engineering exercises. For instance, ChatGPT can be used to extrapolate OSINT (open-source intelligence) from social media platforms to help target specific employees.
AI relieves cyber burnout
In the main, LLMs will help with content creation and also assist with analysis that tends to monopolise security professionals’ time. This can only be a good thing given high industry stress levels.
A recent Gartner report found a quarter of cybersecurity leaders intend to leave the industry, blaming factors such as burnout, low executive support and sub-par industry maturity. These stark results suggest companies are struggling to make cybersecurity part of the business culture, leaving executives mired in red tape.
Given this backdrop, perhaps LLMs could be part of the solution.
AI checks results and tests
However, AI models should not be allowed to run riot. LLMs could also prove detrimental because people have placed an alarming amount of trust in them without validating their output.
Many employees are already using the technology in the workplace surreptitiously: a recent survey found that almost 68% of professionals don't even inform their manager.
It’s also possible that LLMs could exacerbate the threat landscape. Vendors have demonstrated how skilled malware engineers could manipulate the technology for gain, although the NCSC suggests its abilities are still limited.
Instead, we will see LLMs used to craft convincing phishing campaigns and to assist threat actors with lateral network attacks and the escalation of privileges. Consequently, LLMs could spark a new arms race between attackers and defenders as both tool-up by becoming adept at prompting.
LLM prompting will become a vital skill
Does this mean you should add LLM-prompting to your CV? Probably. Particularly as LLMs are now being actively used by candidates in the recruitment process.
SecurityFWD, for example, recently showed how LLMs could be used to apply for a job at Varonis by researching the company and constructing a cover letter. While this state of play makes it easier for potential candidates to catch recruiters' attention, it will also make it more challenging for hirers to select candidates for interview.
One thing is for certain: AI is set to fundamentally reshape the cybersecurity sector.