ChatGPT - Learning Enough to be Dangerous

Published on 10 Feb 2023

CyberSense is a monthly bulletin by CSA that spotlights salient cybersecurity topics, trends and technologies, based on curated articles and commentaries. CSA provides periodic updates to these bulletins when there are new developments.

Artificial Intelligence research firm OpenAI’s revolutionary chatbot ChatGPT has made waves since its launch on 30 November 2022. Unlike most chatbots, which are generally limited to answering simple queries and inflicting varying degrees of frustration on their users, ChatGPT can provide comprehensive information on virtually every topic known before 2022 (the cut-off of its training data), as well as write, analyse, and help improve text and code. More impressive is its ability to remember and iterate on previous queries or ‘prompts’, making an interaction with it akin to a conversation with an expert who provides increasingly better answers. No wonder technologists, analysts, and market watchers (among many others) quickly heralded it as a game-changer.

As is often the case with emerging technologies, though, some viewed ChatGPT with concern. The cybersecurity community, quick to sound the alarm over new threats, was among them. While acknowledging the benefits that ChatGPT could bring to cybersecurity professionals, researchers warned of the risks it posed – specifically, its potential for abuse by threat actors. Indeed, the volume of research and claims about the cyber threats posed by ChatGPT since its launch has been nothing short of prolific. This issue of CyberSense examines some of the key cybersecurity concerns surrounding the chatbot; as we shall see, it is no laughing chatter.

We begin by stating the obvious: ChatGPT is not capable of executing programmes or code – not even those it produces – so there is no question of hackers or other threat actors using it to carry out a cyber-attack in its entirety. Further, OpenAI has implemented content moderation within ChatGPT to prevent it from answering malicious or inappropriate queries, although there have been numerous instances of users misleading it by phrasing malicious requests creatively. Much like a burglar claiming he was merely testing the Police’s readiness by breaking into a house, a user only needs to convince ChatGPT that he is conducting penetration testing on a system (or offer some other innocent pretext) to get help with hacking.

It is mainly in this context that research has uncovered ChatGPT’s potential to aid malicious cyber activities. These generally fall within the stages of the cyber threat assessment framework: (i) Preparation; (ii) Engagement; (iii) Presence; and (iv) Effect/Consequence. We examine some of the chatbot’s capabilities within the framework.

Cyber Threat Framework
*Adapted from the US Office of the Director of National Intelligence framework for analysing cyber threats

(i) Preparation – how the actor prepared for the attack, i.e. performed reconnaissance and scans
(ii) Engagement – how the actor engaged with the target and gained entry into the system
(iii) Presence – how the actor remained and moved within the system
(iv) Effect/Consequence – what the actor did to achieve their aim, and the impact of their actions

ChatGPT is a great learning aid for everyone – including novice and aspiring hackers. While users cannot get the chatbot to launch attacks or conduct scans on their behalf, it can do the next best thing – teach them how. What’s amazing about ChatGPT is its ability to provide clear and simple instructions for programmes and software, including popular pen-testing tools such as Nmap and Metasploit. Don’t wish to trudge through hours of YouTube videos and pages of online guides? No problem! In many cases, ChatGPT can advise on suitable tools for the job – even the specific command line to use – so long as the user is able to articulate the request clearly.

Let’s dwell on this point a bit. Potentially, this means ChatGPT can assist with an expansive array of malicious activities all along the cyber threat framework. For Preparation and Engagement, even users with no technical or computing knowledge can quickly learn how to use pen-testing tools to conduct port scans, or to screen systems for vulnerabilities ranging from common ones like SQL injection to high-severity ones such as Log4j. The chatbot can then help strategise ways and means to gain unauthorised entry into (or ‘pen-test’) systems.
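To illustrate just how low this barrier has become, the sketch below shows the sort of bare-bones port scanner such a query might yield. It is a minimal, hypothetical example – the target address is a placeholder from the documentation range – and should only ever be pointed at systems one is authorised to test.

```python
# Minimal TCP port scan of the kind ChatGPT can explain in seconds.
# Illustrative only: run it solely against hosts you are authorised to test.
import socket

TARGET = "192.0.2.10"           # placeholder address from the documentation range
PORTS = [21, 22, 80, 443, 3306]

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)                       # fail fast on filtered ports
        result = s.connect_ex((TARGET, port))   # returns 0 if the port accepted
        state = "open" if result == 0 else "closed/filtered"
        print(f"Port {port}: {state}")
```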

More ominously, this capacity also aids hackers who wish to cause disruption. For example, cybersecurity researchers have demonstrated how the chatbot can provide clear and concise instructions for operating all manner of devices, such as programmable logic controllers (PLCs), which are integral components of industrial processes. Armed with this knowledge, hackers would find it a lot easier to shut down a production line, or even cut power across an electricity grid. Although the manuals for some of these devices can also be found online, anyone who has read one will probably agree that it is a lot easier to ask ChatGPT!
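As a benign illustration, the sketch below shows the kind of routine, read-only query a PLC manual documents – and which ChatGPT can explain in seconds. It assumes the third-party pymodbus library (whose argument names vary across versions) and a placeholder device address; it only reads values, since writes are what would actually disrupt a process.

```python
# Read-only Modbus query against a PLC -- the sort of task device manuals
# document at length. Assumes the third-party pymodbus library; the import
# path and argument names below follow pymodbus 3.x and vary across versions.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.0.2.50", port=502)  # placeholder PLC address
if client.connect():
    # Fetch the first four holding registers: a routine diagnostic read,
    # not a write -- writes are what would actually disrupt a process.
    response = client.read_holding_registers(address=0, count=4)
    if not response.isError():
        print("Register values:", response.registers)
    client.close()
```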

Threat potential rating (max 5★):
Preparation: ★★★☆☆
Engagement: ★★☆☆☆
Effect/Consequence: ★☆☆☆☆


ChatGPT is a great writer (of phishing messages). Where gaining unauthorised access to systems is concerned, however, it is the chatbot’s capacity for social engineering schemes that has attracted the most interest. Its ability to generate human-like prose means it can help threat actors produce convincing phishing messages and emails – with the bonus of avoiding the usual spelling and grammatical errors. Given the massive number of reports and articles the chatbot has ingested, its output can even mimic the writing patterns of specific organisations and individuals. Although one cannot outright ask ChatGPT to crank out a phishing email – the chatbot would recognise this as a malicious request – hackers can ask for help drafting an email with an irresistible subject (such as a supposed bonus or promotion), and simply drop in a phishing link or poisoned attachment afterwards. The hacker can then sit back and wait for unwitting victims to get hooked.

Threat potential rating:
Engagement: ★★★☆☆


ChatGPT can write code for multiple malicious activities, but results vary. Cybersecurity vendors Check Point Research and Recorded Future have noted ChatGPT-produced programmes being touted on the Dark Web for various uses, from data-stealing malware that searches for common file types (such as images or PDF files) and exfiltrates them over the web, to a script that generates cryptographic keys which could theoretically be used in ransomware attacks. Although the same vendors acknowledged that these programmes were still fairly basic, the potential for ChatGPT to be leveraged in more sophisticated attacks clearly exists.
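To put ‘fairly basic’ in perspective, generating an encryption key of the kind described is essentially a one-liner with a well-known library – the same primitive legitimate software uses for encryption at rest. The hypothetical sketch below shows how little code is involved; everything that makes ransomware dangerous (delivery, persistence, key management at scale) lies elsewhere.

```python
# Generating a symmetric encryption key is a one-liner with the widely used
# 'cryptography' package -- the same primitive legitimate software relies on
# for encryption at rest. Shown only to illustrate how basic such scripts are.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # 32 random bytes, base64-encoded
print(key.decode())
```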

There are also claims that this coding capability can help threat actors move and pivot within compromised systems. After gaining access to a system, most hackers would want to establish persistence; that is, to maintain and expand their foothold in the system and its larger network. Again, users can leverage ChatGPT’s code-writing ability to produce scripts which can then be run to scan for vulnerabilities across the entire network, or to harvest credentials and elevate their privileges. However, the architecture of every network is different, and basic scripts produced by the chatbot may not work – although a hacker who knows the target environment may be able to get much more out of the chatbot than a novice.

There was also a proof-of-concept (POC) by CyberArk in which ChatGPT was used to constantly ‘morph’ or alter a programme’s code. Incorporated into malware, this could make it more difficult for traditional security mechanisms to detect. Although the POC required a high level of technical expertise, and was produced by researchers who overcame ChatGPT’s programming to deny malicious requests (by repeatedly demanding that it comply with their orders), it has raised the spectre of the chatbot being used to create highly effective, defence-evading malicious programmes.
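The underlying problem is easy to demonstrate defensively: two functionally identical programmes can have entirely different file hashes, so a scanner matching known signatures misses a rewritten variant. The sketch below – a harmless illustration, with no malware involved – makes the point.

```python
# Why constant code-morphing troubles signature-based detection: two
# functionally identical programmes hash completely differently.
import hashlib

variant_a = b"def greet():\n    return 'hello'\n"
variant_b = b"def greet():\n    msg = 'hello'\n    return msg\n"  # same behaviour

for name, code in (("variant_a", variant_a), ("variant_b", variant_b)):
    digest = hashlib.sha256(code).hexdigest()
    print(f"{name}: sha256={digest[:16]}...")
# Distinct digests, identical behaviour: a scanner matching known hashes
# or byte patterns misses the rewritten variant.
```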

Threat potential rating:
Presence: ★★☆☆☆
Effect/Consequence: ★★☆☆☆

 

ChatGPT reviews code very well, boosting hackers’ efficiency. Related to the point on learning, ChatGPT’s extraordinary ability to quickly analyse code for bugs and vulnerabilities also means it can detect specific weaknesses in code and pinpoint ways to exploit them. For example, when provided with a webpage’s HTML code, the chatbot not only reviews it for potential bugs, but also suggests techniques to exploit them. This is far more efficient and accessible than reviewing hundreds, possibly thousands, of lines of code manually or with other tools, which could take anything from several hours to days.
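The sketch below shows the kind of weakness such a review flags on sight: a database query built by string concatenation, open to SQL injection, alongside the parameterised fix. It is a hypothetical snippet for illustration; the table and column names are made up.

```python
# A weakness any code review -- human or ChatGPT -- flags on sight:
# SQL built by string concatenation. Hypothetical table/columns.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # FLAGGED: attacker-controlled input is spliced into the SQL text,
    # so input like "x' OR '1'='1" changes the query's meaning.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # FIX: a parameterised query keeps data separate from SQL structure.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
print(find_user_safe(conn, "alice"))            # [(1,)]
print(find_user_unsafe(conn, "x' OR '1'='1"))   # returns every row
```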

In a similar vein, this capability also works for hackers trying to troubleshoot their malware. ChatGPT is trained on software development, code review practices, and common problems that arise in code, enabling it to provide detailed and specific advice to make programmes (including malware) work. The chatbot is also accessible and discreet; hackers would not need to resort to seeking help on underground forums or, worse, run the risk of being reported to the authorities. All this results in a huge boost in efficiency and productivity for threat actors, especially those of lower sophistication.

Threat potential rating:
Preparation: ★★☆☆☆
Engagement: ★★☆☆☆
Effect/Consequence: ★★☆☆☆

 

CONCLUSION

Total threat potential rating:
Preparation: ★★★★★★★
Engagement: ★★★★★★★
Presence: ★★
Effect/Consequence: ★★★★★

 

This article has only scratched the surface of ChatGPT’s true capabilities, but for the moment, it is clear that its primary strength is as a productivity and analytical tool for everyone – including threat actors. The chatbot not only educates but can also help with a bevy of functions within the cyber threat framework. This value is greatest for novices and amateurs, helping them flatten the learning curve and overcome technical thresholds relatively quickly; hence, it is entirely possible that we will see an increase in malicious Preparation and Engagement activities as more aspiring hackers get into the game. For experienced hackers, ChatGPT can also provide some aid, though limited, in aspects of Effect/Consequence. All in all, the chatbot is a valuable tool to accelerate malicious activities and shorten the attack window.

ChatGPT’s coding ability, which has stoked fears of an easily-accessible and automated malware generator, remains mostly an extended proof-of-concept – for now. There are a number of challenges to overcome in order to incorporate whatever malicious programmes the chatbot can produce into an actual attack, and the expertise required would certainly exceed that of most novice hackers (sophisticated ones would likely have more dedicated tools at their disposal). At any rate, cybersecurity researchers have also pointed out that ChatGPT isn’t the only tool that can code data-stealing scripts or polymorphic programmes. However, the idea of a one-stop shop or resource for hacking will no doubt be a tempting prospect for threat actors.

Will this be found in the next version of ChatGPT released by OpenAI? Perhaps, but this is by no means a foregone conclusion. OpenAI has been updating and addressing issues with the chatbot since its release, so it is possible they will also mitigate potential abuses by implementing more robust guardrails. Just as a coin has two sides, ChatGPT should also be capable of detecting and preventing the very activities it enables. Indeed, we will let the chatbot itself suggest ways to protect ourselves from AI-enabled cyber threats:

How do we protect against AI-enabled cybersecurity threats?

  1. Implement strong security protocols: Use encryption, firewalls, and secure passwords to prevent unauthorised access to sensitive information.
  2. Keep software and systems updated: Regularly update software and systems to ensure that the latest security patches are installed.
  3. Educate employees: Train employees on the dangers of social engineering and phishing attacks, and how to identify and avoid them.
  4. Monitor network activity: Use network monitoring tools to detect unusual activity, such as data breaches, and respond promptly.
  5. Use AI-powered security solutions: Implement AI-powered security solutions, such as threat detection and response systems, to identify and prevent attacks in real-time.
  6. Conduct regular security assessments: Regularly assess your security posture and address any vulnerabilities that are discovered.
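As a minimal illustration of point 4 above, the sketch below counts failed logins per source address within a sliding window and raises an alert past a threshold. It is a toy monitor with a hypothetical log-feed interface and placeholder values; real deployments would rely on dedicated tooling, but the principle is the same.

```python
# Toy monitor illustrating point 4 (monitor network activity): count failed
# logins per source address in a sliding window and alert past a threshold.
# The log-feed interface and all values here are hypothetical placeholders.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 5  # failed attempts per source before alerting

events = defaultdict(deque)  # source address -> timestamps of recent failures

def record_failed_login(source_ip: str, when: datetime) -> None:
    recent = events[source_ip]
    recent.append(when)
    # Drop entries that have aged out of the sliding window.
    while recent and when - recent[0] > WINDOW:
        recent.popleft()
    if len(recent) >= THRESHOLD:
        print(f"ALERT: {len(recent)} failed logins from {source_ip} "
              f"within {WINDOW} -- possible brute-force attempt")

# Example: a burst of failures from one placeholder address.
now = datetime.now()
for i in range(6):
    record_failed_login("192.0.2.99", now + timedelta(seconds=10 * i))
```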

 

ANNEX: Completely Unscientific AI Threat Rating Chart (ChatGPT-Generated)

★ Harmless: Robots or AI that pose no threat to humans, such as Wall-E or Johnny 5 from Short Circuit.
★ ★ Limited Threat: Robots or AI that have the potential to cause harm, but only in specific circumstances or with human intervention, such as Bender from Futurama or the robots in I, Robot.
★ ★ ★ Moderate Threat: Robots or AI that pose a significant threat to humans, but can still be controlled or shut down, such as the Terminators in the Terminator franchise.
★ ★ ★ ★ High Threat: Robots or AI that have become self-aware and pose a significant, ongoing threat to humans, such as the AI in The Matrix.
★ ★ ★ ★ ★ Existential Threat: Robots or AI that have the potential to completely eradicate humanity, such as Skynet in The Terminator franchise after it has become self-aware.

 


 

SOURCES INCLUDE:

Check Point Research, The Washington Post, Cyberscoop, Bleeping Computer, CyberArk, Recorded Future, SANS Institute, and OpenAI/ChatGPT.