How does AI aid bug bounty discovery?
Worryingly, recent AI developments are enabling criminals with minimal or no technical knowledge to plan and enact cyberattacks at scale. Hacking is no longer the domain of highly skilled criminals: generative AI has created a new generation of powerful, user-friendly tools that automate and simplify the process.
According to the UK’s National Cyber Security Centre, this has lowered the barriers for novice cyber criminals and hacktivists to carry out attacks.
Despite this growing risk, recent research suggests that 35 percent of organisations don’t have a comprehensive understanding of the threat landscape they currently face. What’s more, 79 percent of security professionals admit to making decisions without insight into the latest cyber threats and vulnerabilities.
As a growing number of opportunistic threat actors turn to AI technology to commit cyberattacks, organisations will need to take a more proactive approach to cyber defence: one that minimises their threat exposure by pinpointing and addressing hidden vulnerabilities before cybercriminals can find and exploit them.
According to research, 48 percent of security professionals now consider AI the most significant security risk to their organisation. To stay on top of emerging and evolving threats, organisations can turn to bug bounty programs. These schemes incentivise security researchers to find and report security flaws in return for monetary rewards.
The rise of bug bounty programs
Bug bounty programs enable companies to tap into the expertise of independent security researchers to continuously test their security. These programs, which can be managed internally or via a specialist third-party platform, allow security researchers to find and report weaknesses in a company’s systems. If a researcher finds a valid bug, the company rewards them with a payment, known as a bounty, for their contribution.
By exposing vulnerabilities that automated scanners miss, bug bounties complement existing security controls and strengthen an enterprise’s security posture. They help organisations stay ahead by continuously and proactively discovering vulnerabilities. In fact, 68 percent of security professionals believe an unbiased external review is the most effective way to address AI security issues.
Importantly, organisations can incentivise the security researcher community to emulate the techniques a potential bad actor would use, meaning unexpected weaknesses can be identified and fixed before cybercriminals exploit them. A growing number of these bounty hunters are now also turning to AI tools to turbocharge and strengthen their bug-seeking capabilities.
The AI-powered bounty hunter
While AI may be reshaping the activities of bad actors, it is also revolutionising how security researchers deploy their advanced skills to battle cybercriminals. Indeed, the SANS Institute's report on AI found that 58 percent of respondents foresee an 'arms race' between cyber defences and AI-enabled attacks.
By integrating publicly available AI models into their toolkits, security researchers can now analyse vast amounts of data in real time and identify patterns and anomalies that may indicate a threat.
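To make that concrete, the minimal Python sketch below flags anomalous entries in a set of request-log features using an off-the-shelf outlier detector. The feature choices, values and model parameters are illustrative assumptions, not a description of any researcher's actual tooling.

```python
# Minimal sketch: flagging anomalous request-log entries with an
# off-the-shelf unsupervised model. All features and values below are
# hypothetical, chosen purely for illustration.
from sklearn.ensemble import IsolationForest

# Hypothetical per-client features: (requests per minute,
# average payload bytes, distinct URL paths hit).
log_features = [
    [12, 540, 4],
    [9, 610, 3],
    [11, 580, 5],
    [480, 90, 310],   # burst of small requests across many paths
    [10, 600, 4],
]

# Fit an unsupervised anomaly detector on the traffic features;
# fit_predict returns -1 for outliers and 1 for inliers.
model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(log_features)

for features, label in zip(log_features, labels):
    if label == -1:
        print(f"Possible anomaly worth investigating: {features}")
```

In practice a researcher would feed in far richer telemetry, but the principle is the same: let the model surface the handful of outliers that deserve a human's attention.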
Using customised tools built by adapting OpenAI’s ChatGPT, these specialists can quickly scan chunks of code for vulnerabilities and automate previously time-consuming tasks, such as rapidly generating prompts that can be sent to a system to test its cyber defences and reveal potential security flaws.
In addition to using AI to automate tasks, analyse data, identify vulnerabilities, validate findings and undertake initial reconnaissance, bug bounty hunters are also using AI chatbots to write reports that more clearly communicate a bug’s severity and impact for developers.
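As a rough illustration of both uses, the sketch below asks a model to review a small code snippet and draft a developer-facing report. It assumes the official OpenAI Python SDK with an API key set in the environment; the model name, prompt wording and the deliberately vulnerable snippet are illustrative assumptions, not a specific researcher's workflow.

```python
# Minimal sketch: asking an LLM to review a code snippet and draft a
# short developer-facing report. Assumes the official OpenAI Python SDK
# and an OPENAI_API_KEY in the environment; the model name and prompts
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

snippet = '''
def get_user(conn, user_id):
    # String-built SQL: a classic injection risk for the model to spot.
    return conn.execute(f"SELECT * FROM users WHERE id = {user_id}")
'''

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. Identify any "
                    "vulnerabilities in the code and draft a short bug "
                    "report describing severity and impact for developers."},
        {"role": "user", "content": snippet},
    ],
)

# Print the model's draft report for human review before submission.
print(response.choices[0].message.content)
```

Crucially, output like this is a first draft: a human researcher still has to verify the finding and validate the report before it is submitted.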
By embracing the power of AI tools, security researchers can work smarter and faster on vulnerability scanning and data analysis, giving them more time to focus on strategic thinking and creative problem-solving.
In fact, 71 percent of security professionals report satisfaction with AI's ability to automate tedious tasks. However, these productivity gains are a double-edged sword, as adversaries also leverage AI for sophisticated phishing (79 percent) and vulnerability exploits (74 percent).
In addition, the potential for error when using large language models (LLMs) like ChatGPT means that some security researchers are generating seemingly realistic and highly detailed bug reports that are actually nonsensical. Thanks to AI-generated ‘hallucinations’, these misleading reports are based on false positives or inaccurate interpretations that send developers down rabbit holes and waste valuable time and effort.
This creates a challenge for organisations, which need to know which reports to trust and which are bogus.
Avoiding false positives: moving ahead with confidence
There is no denying that AI augments the capabilities of bug bounty hunters, with over half of security researchers using GenAI in some way. This, in turn, is beneficial for organisations that use bug bounty programs to stay one step ahead of emerging threats and undertake continuous offensive testing in the most effective way possible.
However, while AI-generated reports hold great promise, they need to be properly validated to ensure they are clear, concise and actionable. To keep their bug bounty programs free of false positives and running smoothly and productively, organisations should look to specialist platforms that vet security researchers and triage out non-viable submissions.
By ensuring that vulnerability reports are submitted responsibly and in accordance with strict guidelines, organisations can avoid paying out for non-issues that scammers may exaggerate.
Written by
Shobhit Gautam
Staff Solutions Architect, EMEA
HackerOne