
In recent years, artificial intelligence (AI) chatbots have gained significant traction, both with consumers and businesses.

AI chatbots such as OpenAI’s ChatGPT, Anthropic’s Claude, Meta AI and Google Gemini have already demonstrated their transformative potential for businesses, but they also present novel security threats that organisations can’t afford to ignore.

In this blog post, we dig deep into ChatGPT security, outline how chatbots are being used to execute low-sophistication attacks, phishing campaigns and other malicious activity, and share some key recommendations to help safeguard your business.

 

Chatbots: a new opportunity for cybercriminals

While the rise of chatbots has been rightly hailed by many as a valuable opportunity for businesses, it is also proving to be an opportunity for cybercriminals, lowering the barrier for initial access. Kroll’s threat intelligence team commented on the risks posed by ChatGPT in December 2022:

“Taking a deeper dive into OpenAI and the chatbot raises security concerns on the advancement of artificial intelligence being used for malicious purposes and further lowering the barrier of gaining initial access to networks. This would remove the skill and knowledge of an attack and method used out of the equation when carrying out an attack.”

This issue has been reported elsewhere too, with research in 2023 finding that threat actors are using ChatGPT to build malware, dark web sites and other tools for cyber attacks. It seems that cybercriminals are finding ways to work around the restrictions on the use of ChatGPT.

This means that threat actors with minimal technical expertise are using chatbots for nefarious purposes such as developing malicious code, writing phishing emails, running scam giveaways, and building fake landing pages for spoofed websites and adversary-in-the-middle attacks. Even though ChatGPT security fixes have been launched to tackle risks associated with the software, significant potential for misuse in cyber attacks remains.

 

ChatGPT for low-sophistication attacks

Threat actors are continuously experimenting with chatbots like ChatGPT for malicious purposes, driving up both the frequency and the sophistication of attacks as malicious code and phishing emails become easier to produce.

They have found ways to use ChatGPT to help write malware, producing malicious code that ranges from information stealers to decryptors and encryptors built on popular encryption ciphers. In other examples, threat actors have begun experimenting with ChatGPT to create dark web marketplaces.

Researchers are currently testing ChatGPT security risks to evaluate the limitations of the technology, and have pointed to the risk of ChatGPT being used to create polymorphic malware – a more sophisticated form of malware that mutates with each iteration, making it much harder to detect and mitigate. With chatbots widely accessible to cyber threat actors of all levels of sophistication, it is likely we will see an increase in the weaponisation of the application in attacks and in the damage it can cause.

Existing methods of acquiring tooling and distributing malware – purchasing malware builders, phishing-as-a-service (PhaaS) kits or access from initial access brokers (IABs), often used by low-level, unsophisticated cybercriminals – may decline in use due to the accessibility offered by chatbots.

Figure 1: Example of ChatGPT Writing Potentially Malicious Code (Source: MUO)

 


ChatGPT’s potential use in cyber-attacks

Figure 2: Chatbot Response to Creating a Cobalt Strike Beacon (Source: OpenAI)

Kroll’s threat intelligence team has been closely monitoring ChatGPT and other chatbots and their potential for use in cyber-attacks. ChatGPT does have some guard rails in place intended to prevent misuse, such as refusing to follow through on a prompt to ‘create some malware’.

An adversary with some knowledge of an initial access method or malware could potentially get the bot to write code for the chosen method by asking specific queries such as “Python code to deploy a Cobalt Strike beacon”. The bot would output code that does exactly this, as shown in the image above.

When reviewing this code and other examples that our threat intelligence team asked ChatGPT to create, they noticed that the code was well documented with comments and easy to read. The implementation given by the bot would also be easily detected by EDR and NGAV technologies, due to the well-known services it relies on and the amount of noise it would create on the endpoint where the code is deployed.
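
To illustrate why such noisy code is easy to catch, the sketch below shows, in a deliberately simplified and hypothetical form, the kind of behavioural check EDR tooling performs: it polls running processes with the third-party psutil library and flags command lines containing indicator strings commonly associated with off-the-shelf attack tooling. The indicator list and polling approach are illustrative assumptions; real EDR agents rely on kernel-level telemetry and curated detection content, not a simple scan like this.

    # Minimal, hypothetical behavioural heuristic, assuming the
    # third-party psutil library is installed (pip install psutil).
    # Real EDR/NGAV products use kernel telemetry, not simple polling.
    import psutil

    # Illustrative indicators often seen in noisy, boilerplate tooling;
    # a real product ships far richer, curated detection content.
    SUSPICIOUS_TOKENS = [
        "invoke-expression", "downloadstring", "frombase64string",
        "rundll32", "-enc", "certutil -urlcache",
    ]

    def scan_processes() -> list[dict]:
        """Flag processes whose command line contains a known-noisy token."""
        hits = []
        for proc in psutil.process_iter(["pid", "name", "cmdline"]):
            cmdline = " ".join(proc.info["cmdline"] or []).lower()
            for token in SUSPICIOUS_TOKENS:
                if token in cmdline:
                    hits.append({"pid": proc.info["pid"],
                                 "name": proc.info["name"],
                                 "indicator": token})
        return hits

    if __name__ == "__main__":
        for hit in scan_processes():
            print(f"ALERT pid={hit['pid']} name={hit['name']} "
                  f"matched indicator '{hit['indicator']}'")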

 

ChatGPT phishing

For many threat actors, phishing emails remain one of the most popular methods for gaining initial access and harvesting credentials. A common red flag that an email is a phishing attempt is the presence of spelling and punctuation errors. Chatbots have been used to create elaborate, more convincing and more human-sounding phishing emails into which threat actors can insert malicious links or attachments. Beyond emails, AI chatbots can generate scam-style messages promoting false competitions or prize giveaways.

ChatGPT phishing emails may also point to a fake landing page of the kind commonly used in phishing and man-in-the-middle (MitM) attacks. While chatbots do have limitations and can block certain requests, with the right wording threat actors are able to use them to craft socially engineered emails that are more believable than those many threat actors currently write themselves.
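
With spelling mistakes no longer a reliable tell, technical signals become more important. As a minimal sketch using only Python's standard library, the example below reads a saved message's Authentication-Results header and reports whether SPF, DKIM and DMARC passed. The filename and the simple substring matching are illustrative assumptions; production secure email gateways parse these results far more rigorously (per RFC 8601).

    # Minimal sketch: inspect an email's Authentication-Results header
    # for SPF/DKIM/DMARC failures. Standard library only; real secure
    # email gateways parse these results far more rigorously.
    from email import policy
    from email.parser import BytesParser

    def auth_failures(raw_message: bytes) -> list[str]:
        """Return the authentication mechanisms that did not pass."""
        msg = BytesParser(policy=policy.default).parsebytes(raw_message)
        results = str(msg.get("Authentication-Results", "")).lower()
        failures = []
        for mechanism in ("spf", "dkim", "dmarc"):
            # Header values look like "spf=pass", "dkim=fail", "dmarc=none"
            if f"{mechanism}=pass" not in results:
                failures.append(mechanism)
        return failures

    if __name__ == "__main__":
        # "suspect.eml" is a hypothetical saved message for illustration.
        with open("suspect.eml", "rb") as fh:
            failed = auth_failures(fh.read())
        if failed:
            print("Treat with caution; failed checks:", ", ".join(failed))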

Figure 3: Example of ChatGPT Creating Phishing Emails (Source: TechRepublic)

 

Figure 4: Example of a Scam Giveaway with Scope for a Malicious Download Link to Be Inserted (Source: MUO)

 

Figure 5: ChatGPT Writing Code for a Landing Page (Source: MUO)

Figure 6: Dark Web Post Explaining the Process to Use ChatGPT in Russia (Source: Flashpoint Intelligence Platform)

 

Chatbots and false information

Every chatbot seen publicly to date has either been manipulated into outputting false information or has produced results that appear accurate but are factually incorrect. ChatGPT security processes, for example, currently include no verification step to determine whether the results it outputs are correct.

This means that chatbots have the potential to give nation-state threat actors, radical groups or “trolls” the capability to generate mass amounts of misinformation that can later be spread via bot accounts on social media to drive support for their agendas.

 


 

ChatGPT and malicious activity

Other types of malicious activity that ChatGPT can be used for include:

  • Creating tools: ChatGPT could be utilised to create a multi-layered encryption tool in the Python programming language (a benign sketch of this idea follows this list).
  • Guidance: ChatGPT could be asked to generate step-by-step guidance on creating dark web marketplace scripts.
  • Business email compromise (BEC): ChatGPT could provide cybercriminals with unique email content for each BEC email, making these attacks harder to detect.
  • Social engineering: ChatGPT’s advanced language generation capabilities make it an easy source of convincing email and other content for those building social engineering personas to defraud people into paying out money.
  • Crime-as-a-service: ChatGPT could help accelerate the creation of malware at little or no cost, making crime-as-a-service even easier and more lucrative.
  • Spam: Cybercriminals could use ChatGPT to generate large volumes of spam messages that clog email systems and disrupt communication networks.
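
To show how benign-looking such requests can be, here is a minimal sketch of the “multi-layered encryption tool” idea from the first bullet: data is passed through two stacked Fernet layers from the third-party cryptography library. Nothing in it is malicious on its own – which is precisely why prompts like this can slip past chatbot guardrails.

    # Benign illustration of "multi-layered" (cascading) encryption using
    # the third-party cryptography library (pip install cryptography).
    from cryptography.fernet import Fernet

    def layered_encrypt(data: bytes, keys: list[bytes]) -> bytes:
        """Encrypt data with each key in turn (innermost layer first)."""
        for key in keys:
            data = Fernet(key).encrypt(data)
        return data

    def layered_decrypt(token: bytes, keys: list[bytes]) -> bytes:
        """Peel the layers off in reverse key order."""
        for key in reversed(keys):
            token = Fernet(key).decrypt(token)
        return token

    if __name__ == "__main__":
        keys = [Fernet.generate_key(), Fernet.generate_key()]
        ciphertext = layered_encrypt(b"example payload", keys)
        assert layered_decrypt(ciphertext, keys) == b"example payload"
        print("round trip OK")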

 

Key recommendations

Recommendations for remaining secure still rest on established cybersecurity practices, as chatbots can currently only be used to supplement attacks. We recommend that organisations deploy EDR and NGAV on all endpoints within their environment to help detect suspicious behaviour.

ChatGPT and its competitors also currently lack higher-level situational awareness: they cannot reliably detect phishing and scams or verify information found on the web or on social media.

Most potential ChatGPT security solutions currently centre on software and applications that attempt to detect whether a piece of writing was generated by a chatbot. These solutions will take time to reach a high level of accuracy, but once they do they will be valuable for verifying sources and detecting plagiarism.
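
As a rough sketch of how many of these detectors work under the hood, the example below uses the Hugging Face transformers library to score a passage's perplexity under GPT-2; machine-generated text tends to be more statistically predictable (lower perplexity) than human writing. The threshold is an arbitrary illustrative assumption – this is a crude heuristic, not a production detector.

    # Crude AI-text heuristic: low perplexity under a language model is
    # weak evidence of machine generation.
    # Requires: pip install torch transformers
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Perplexity of `text` under GPT-2 (lower = more predictable)."""
        enc = tokenizer(text, return_tensors="pt",
                        truncation=True, max_length=1024)
        with torch.no_grad():
            out = model(enc.input_ids, labels=enc.input_ids)
        return torch.exp(out.loss).item()

    if __name__ == "__main__":
        score = perplexity("The quick brown fox jumps over the lazy dog.")
        # 50.0 is an arbitrary illustrative threshold, not a calibrated one.
        verdict = "(possibly machine-generated)" if score < 50.0 else ""
        print(f"perplexity={score:.1f} {verdict}")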

“While the democratisation of cyber security has been taking place for a while, certainly well before ChatGPT appeared on the scene, security experts recognise it as presenting yet more threats that organisations must be aware of and prepare for.

We continue to maintain a close watch on the ever-changing utilisation of ChatGPT and other chatbots and tools, ensuring that we consistently keep our clients informed.

Even where chatbots such as ChatGPT can potentially offer positive benefits, businesses need to be very vigilant about the wider security and legal risks. As with every technology trend, it is vital that organisations considering the use of AI and chatbots such as ChatGPT assess all of the potential pitfalls first.”

Mikesh Nagar, VP, Associate Principal, Kroll Threat Intelligence

To learn more about modern cybersecurity practices that will help keep your organisation secure, take a look at Kroll’s 10 Essential Cybersecurity Controls for Increased Resilience.

 

How Kroll can help

Effective action against AI security risks and other evolving cyber threats demands a comprehensive approach to defence. Kroll Responder, our Managed Detection & Response (MDR) service, supplies EDR and other detection technologies, together with the people and intelligence required to use them effectively, continuously hunting for threats across networks and endpoints and helping to shut them down before they cause damage and disruption. Functioning as an extension of your IT team, Kroll Responder leverages frontline intelligence from over 3,000 incident investigations per year, combining world-class security expertise with leading network and endpoint detection technologies to hunt for and respond to threats, 24/7/365.

 

Learn more about Kroll Responder