A Russia-linked actor has already used ChatGPT to help build malware
OpenAI’s latest threat intelligence report details how the company identified and banned accounts operated by state-linked hackers and organised cybercriminal groups that were exploiting its tools for cyberattacks, disinformation operations, malware development and online manipulation campaigns.
The banned accounts were using ChatGPT for activities ranging from malware refinement and code debugging to automated propaganda generation and social engineering efforts.
Among the most notable disclosures is a Russia-linked actor who used the chatbot to assist in the incremental development of malware dubbed ScopeCreep, a Go-based trojan targeting Windows systems.
The actor signed up for multiple accounts with temporary email addresses, using each one for a single query to make a small, incremental improvement to the malware.
According to the report, the malware was hidden inside a fake version of Crosshair X, a popular game overlay tool.
Once downloaded, it deployed a loader to retrieve additional payloads from a remote server. The malware then escalated system privileges, established persistence, disabled security protections using PowerShell, and exfiltrated credentials and session tokens.
OpenAI also dismantled accounts linked to APT5 and APT15, two well-documented Chinese nation-state hacking groups.
These actors used ChatGPT for tasks such as conducting open-source research on US satellite communications; troubleshooting Linux system configurations; modifying software packages for offline deployment; working on web and Android app development; and designing brute-force scripts to breach FTP servers.
One case involved developing tools to automate likes and posts on social media.
The report also reveals a wider ecosystem of coordinated inauthentic behaviour from various regions. These included:
- North Korean IT worker scheme: Used ChatGPT to create job application materials for fraudulent remote tech jobs.
- Sneer Review (China): Mass-produced social media posts in English, Chinese and Urdu on geopolitically sensitive topics.
- Operation High Five (Philippines): Generated comments on Filipino political events in English and Taglish.
- Operation VAGue Focus (China): Posed as journalists to generate and translate geopolitical content for use in social engineering.
- Operation Helgoland Bite (Russia): Created election-related propaganda targeting Germany and anti-NATO messaging.
- Operation Uncle Spam (China): Produced polarised content supporting both sides of US political debates.
- Storm-2035 (Iran): Targeted global audiences with pro-Iran messaging, especially around military and diplomatic prowess, while backing causes like Scottish independence and Palestinian rights.
- Operation Wrong Number (Cambodia-linked): Generated recruitment messages in six languages for task scams disguised as easy remote jobs. Victims were charged large joining fees under a pyramid-like structure.
This is yet another example of commercial technology being repurposed for malicious ends, and it’s good to see OpenAI acting to stop it.