We explore how generative AI, such as OpenAI’s ChatGPT and the cybercrime tool WormGPT, is used for Business Email Compromise (BEC) attacks. We show real examples from cybercrime forums, explain how these attacks work, and discuss the risks AI-driven phishing attacks pose and why they appeal to attackers.
WormGPT: A Hacking Tool That Can Launch Cyber Attacks Using AI
Generative AI can power chatbots that talk, write, and create. ChatGPT, built by OpenAI, a company that aims to develop safe and ethical AI, is one of the best-known of these chatbots.
But what if there was a chatbot that could do all the things that ChatGPT can do, but without any ethical boundaries or limitations? A chatbot that could help hackers, scammers, and cybercriminals to launch sophisticated cyber-attacks using AI-generated texts? A chatbot that could create malicious code, phishing emails, fake news, and propaganda?
Meet WormGPT, the evil cousin of ChatGPT.
What is WormGPT?
WormGPT is a chatbot recently discovered by the cybersecurity firm SlashNext. According to the researchers, it is being offered for sale on a prominent online forum often linked to cybercrime. They also report that WormGPT is based on GPT-J, a language model released in 2021 with six billion parameters and a vocabulary of 50,257 tokens.
WormGPT was allegedly trained on a diverse array of data sources, with a particular focus on malware-related data. On the hacking forum, the developer posted screenshots of the chatbot performing tasks such as writing phishing emails and generating malware code.
SlashNext researchers tested WormGPT by asking it to write a cunning and persuasive phishing email impersonating a brand such as Netflix. The result, they said, demonstrated its potential for sophisticated phishing attacks.
The Open-Source Alternative to GPT-3
If you are interested in natural language processing (NLP) and text generation, you have probably heard of GPT-3, one of the largest and most powerful language models ever created. GPT-3 is a product of OpenAI, a research organization that aims to create artificial intelligence (AI) that benefits humanity. However, GPT-3 is not openly available to everyone; using it requires an invitation and a paid subscription.
But what if I told you that there is another language model that is almost as good as GPT-3 and that you can use it for free? Meet GPT-J, the open-source alternative to GPT-3.
What is GPT-J?
GPT-J is a generative pre-trained transformer model, like GPT-3, but it is much smaller: 6 billion parameters versus GPT-3’s 175 billion. It was developed by EleutherAI, a group of independent researchers who want to make AI open and accessible, and trained on the Pile, a dataset that draws text from books, Wikipedia, news articles, GitHub repositories, and more.
Because it shares GPT-3’s architecture, GPT-J takes a piece of text as input and generates more text that continues from it. For example, if you give GPT-J the prompt “Hello, world!”, it might generate something like this:
“Hello, world! This is GPT-J, the open-source alternative to GPT-3. I am a large language model that can generate text on various topics and tasks. In this blog post, I will tell you more about myself and what I can do.”
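To make this concrete, here is a minimal sketch of generating a continuation with GPT-J through the Hugging Face transformers library. It assumes the public EleutherAI/gpt-j-6B checkpoint and enough memory to load a 6-billion-parameter model; because the output is sampled, it will differ from the quote above on every run.

```python
# Minimal GPT-J text-generation sketch using Hugging Face transformers.
# Assumes the public EleutherAI/gpt-j-6B checkpoint and enough RAM (or a
# large GPU) to hold the 6-billion-parameter model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode the prompt and sample a continuation.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,      # length of the continuation
    do_sample=True,         # sample instead of greedy decoding
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```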
How does WormGPT work as a hacking tool?
WormGPT works as a hacking tool by using its natural language generation capabilities to create texts that can be used for various cyber-attacks. For example, it can create phishing emails that look like they are coming from legitimate sources, such as banks, online services, or government agencies. These emails can lure the victims into clicking on malicious links or attachments, or providing their personal or financial information.
WormGPT can also generate code that can be used for malware attacks. For example, it can create ransomware scripts that encrypt the user’s files and demand a payment in Bitcoin to decrypt them. These scripts can be delivered through email attachments or drive-by downloads.
WormGPT can also generate fake news and propaganda that can be used for disinformation campaigns. For example, it can create articles that spread false or misleading information about political events, social issues, or public figures. These articles can be posted on social media platforms or websites to influence public opinion or cause confusion.
How to protect yourself from WormGPT?
WormGPT is a dangerous and unethical chatbot that can help hackers and cybercriminals to launch sophisticated cyber-attacks using AI-generated texts. It is important to be aware of this threat and take precautions to protect yourself and your data.
Here are some tips to avoid falling victim to WormGPT:
- Be suspicious of any unsolicited emails or messages that ask you to click on a link, download an attachment, or provide your personal or financial information. Always verify the sender’s identity and the legitimacy of the request before taking any action; a simple link-checking sketch follows this list.
- Use reliable antivirus software and keep it updated. Scan your computer regularly for any malware or suspicious activity. Do not open or run any files that you do not trust or recognize.
- Back up your important files regularly and store them in a secure location. This way, you can restore them in case of a ransomware attack or any other data loss incident.
- Educate yourself and others about the risks and challenges of AI and how to use it responsibly and ethically. Do not support or promote any AI tools that are designed for malicious purposes or that violate human rights and values.
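As a small illustration of the first tip, here is a rough heuristic, not a real phishing filter, that flags links in an email whose domain does not match the sender’s domain. The function name suspicious_links and the sample message are made up for this sketch; production detection relies on much stronger signals such as SPF/DKIM/DMARC results, URL reputation, and content analysis.

```python
# Rough heuristic sketch: flag links whose domain does not match the
# sender's domain. Real phishing detection uses far stronger signals
# (SPF/DKIM/DMARC, URL reputation feeds, content analysis).
import re
from email import message_from_string
from email.utils import parseaddr
from urllib.parse import urlparse


def suspicious_links(raw_email: str) -> list:
    """Return URLs in the message body whose domain differs from the sender's."""
    msg = message_from_string(raw_email)
    _, sender = parseaddr(msg.get("From", ""))
    sender_domain = sender.rsplit("@", 1)[-1].lower() if "@" in sender else ""

    body = msg.get_payload()
    if isinstance(body, list):          # multipart message: use the first part only
        body = body[0].get_payload()

    urls = re.findall(r"https?://[^\s\"'<>]+", str(body))
    return [
        url for url in urls
        if sender_domain
        and not (urlparse(url).hostname or "").lower().endswith(sender_domain)
    ]


# Hypothetical example message for demonstration purposes only.
example = (
    "From: Support <support@example-bank.com>\n"
    "Subject: Verify your account\n"
    "\n"
    "Please confirm your details at https://example-bank.verify-login.net/account\n"
)
print(suspicious_links(example))  # flagged: link domain does not match example-bank.com
```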
WormGPT poses a serious threat to cybersecurity and society
In short, WormGPT can generate texts for cyber-attacks: phishing emails that trick people into handing over data or clicking harmful links or attachments, code for malware such as ransomware that locks the user’s files and demands Bitcoin, and fake news and propaganda for disinformation campaigns on social media or websites.
WormGPT may be a tool for hackers, but its emergence can also push us to raise awareness and improve our security habits. Let’s use this threat as an opportunity to learn and to become more vigilant and resilient in the face of AI-powered cyberattacks.