
The good, the bad and the ugly of ChatGPT

The cyber world has much to gain – and lose – from the world’s newest AI sensation, writes Guy Golan, CEO and founder of Performanta

Unless you’ve been living under a rock, you’ll know all about the latest piece of tech that’s got everyone talking.

OpenAI’s ChatGPT platform is extraordinarily advanced compared with anything developed before it. The model has been trained on an enormous dataset of text and generates new content based on the input it receives.

The platform is now used by millions across the globe, whether to answer basic questions or to help produce long-form text. Its creator, OpenAI, is currently valued at $29 billion, and the site receives over 600 million visits a month.

But with great power comes great responsibility.

As with any technology that attracts enormous engagement, such as social media or the metaverse, a step back is required to consider the implications of its use from a cybersecurity perspective. Highly advanced AI like ChatGPT is no exception.

How can it be misused? How can people use it to exploit others? And, before either of those, how can ChatGPT be used as a force for good in cybersecurity? It’s worth unpacking.

The good: A platform for learning

Cybersecurity is not only a complex industry but it’s one that is constantly evolving.

This is a major contributor to the industry’s ongoing hiring problem: by the time a university-level degree in cybersecurity is completed, the whole industry seems to have shifted. The goalposts constantly move.

The power of ChatGPT today means that those interested in a career in cybersecurity, or simply looking for information on how to better protect their systems, can find it at the click of a button.
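For those who want to experiment beyond the web interface, the same underlying models can also be queried programmatically. Below is a minimal sketch using OpenAI’s official Python client; the model name, prompt, and tutor framing are illustrative choices, not anything the article prescribes.

```python
# A minimal sketch: querying an OpenAI chat model for security guidance.
# Assumes the official `openai` Python package (v1+) and an API key in
# the OPENAI_API_KEY environment variable. Model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a cybersecurity tutor."},
        {"role": "user", "content": "Explain how TLS certificate pinning "
                                    "helps defend against man-in-the-middle "
                                    "attacks."},
    ],
)

print(response.choices[0].message.content)
```

The reply comes back as plain text, much as it would in the chat window, so the same caveats about accuracy apply.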

The potential of ChatGPT for learning in the future is what’s really exciting. It’s not completely accurate in its current state and shouldn’t be relied upon as a definitive source (in much the same way Wikipedia can’t be), but it’s a strong start and will only improve.

For cybersecurity professionals, there are two prime examples that come to mind for its use as a force for good:

- Analysing code to find weaknesses that could lead to zero-day vulnerabilities

- Finding lookalike domains that defenders may have missed; this information can then be used to reduce the risk of lookalike-domain phishing (see the sketch after this list).
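To make the second idea concrete, here is a minimal sketch of one way lookalike domains can be flagged – plain edit distance, no AI involved. The legitimate domain, candidate list, and threshold are all illustrative; a production tool would also account for homoglyphs (e.g. ‘rn’ imitating ‘m’) and pull candidates from live domain-registration feeds.

```python
# A minimal sketch: flagging lookalike domains by edit distance.
# Domains and threshold below are illustrative only.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

LEGITIMATE = "performanta.com"  # hypothetical target domain
CANDIDATES = ["perf0rmanta.com", "performanta.co", "example.org"]

for domain in CANDIDATES:
    distance = levenshtein(domain, LEGITIMATE)
    if 0 < distance <= 2:  # a small edit suggests deliberate imitation
        print(f"Possible lookalike: {domain} (distance {distance})")
```

Here the first two candidates are flagged (one character substituted, one dropped), while the unrelated domain is ignored.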

The bad: A platform for exploitation

However, in much the same way ChatGPT can be used as a tool for learning and development by ‘the good guys’, the platform is also accessible to threat actors.

While ChatGPT itself cannot be directly targeted by cybersecurity threats like malware, hacking or phishing, it can be exploited to help criminals infiltrate systems more effectively.

The platform’s developers have taken steps to try to reduce this as much as possible, but it takes just one attacker to word their question in the right way to get the desired response.

The best example here is phishing. Ask the platform to generate a phishing template directly and the chatbot will refuse. However, if someone with malicious intent rewords the question ever so slightly, the AI may not detect any issue.

The ugly: What the future holds

Many think of artificial intelligence as a loaded digital weapon, accessible to criminals to use in any way they choose. However, we’re some way off this being the case with ChatGPT.

Security experts have tested the combination of Codex (another AI created by OpenAI) and ChatGPT, and found that AI-generated attack code is not only possible but already being shared on dark web forums. They also found major limitations that keep it from being a genuine threat to security systems.

For example, the most recent data that feeds ChatGPT dates from 2021, so the platform is fundamentally out of date. In practice, the text it produces is flawed perhaps 90 percent of the time, meaning malicious users would need their own knowledge base to fact-check the content before it could be in any way effective against security systems.

Given the extra work it would take to get the text ready for use, it’s unlikely that experienced criminals would use ChatGPT as an attack tool. For those with less experience, ChatGPT could potentially provide them with the very basics of an attack, but nothing advanced enough to pose a genuine threat to sophisticated defences – yet.

A final view

As with all new developments that generate huge usage – in tech and in the wider world – security risk must be front of mind. The pros and cons of such a tool need to be analysed and, from a security perspective, rigorously scrutinised and tested.

Although there may be risks, the real value of this ingenious solution is clear, and it may in fact help make cybersecurity a whole lot more accessible to the public.

See some of SC Media’s experiments with ChatGPT here
