How Criminals Use WormGPT and PoisonGPT to Exploit Generative AI

Generative AI is a branch of artificial intelligence that uses deep learning to create new content, including text, images, music, and code. It offers enormous potential, from boosting creativity and efficiency to enabling genuine breakthroughs, but it can also cause serious harm in the wrong hands. In this article, I will examine two examples of generative AI being turned into a malicious tool: WormGPT and PoisonGPT.


What is WormGPT?


WormGPT is a new generative AI tool being sold to criminals on the dark web. It is based on GPT-J, an open-source large language model that can generate coherent, fluent text on almost any topic. WormGPT is marketed as a black-hat alternative to ChatGPT, the popular generative AI tool that lets users hold realistic conversations with an AI agent. Unlike ChatGPT, which enforces ethical boundaries and limitations, WormGPT is designed specifically for malicious activity.


What does WormGPT do?


One of WormGPT's main features is generating persuasive and convincing phishing and business email compromise (BEC) attacks. Phishing is a cyberattack that uses fraudulent emails posing as legitimate messages from trusted sources such as banks, companies, or government agencies. Its goal is to trick recipients into clicking malicious links, downloading malware, or revealing personal or financial information. BEC takes a similar approach but targets businesses and organizations: the attackers impersonate trusted parties, such as executives, suppliers, or customers, and request money transfers or confidential data.



WormGPT can produce highly customized and targeted phishing and BEC emails tailored to the specific recipient and context. For example, it can use the recipient's name, job title, company name, and other details to craft a believable email that appears to come from a colleague, partner, or client. Chat memory retention and code formatting features let it maintain a consistent, coherent conversation with the victim. It can also slip past spam filters and antivirus software, because its natural language generation avoids the common keywords and patterns those systems look for.
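To see why keyword-based filtering struggles against this kind of tailored text, consider a minimal Python sketch. The phrase list and sample emails below are hypothetical illustrations, not a real filter: a naive keyword check catches a boilerplate phishing template but passes a reworded message with the same intent.

```python
# A minimal sketch (hypothetical rules) of a naive keyword-based spam filter
# and why AI-tailored phishing slips past it.
SUSPICIOUS_PHRASES = {"verify your account", "urgent action required", "click here"}

def keyword_filter(email_body: str) -> bool:
    """Return True if the email trips the naive keyword list."""
    body = email_body.lower()
    return any(phrase in body for phrase in SUSPICIOUS_PHRASES)

template_phish = "URGENT ACTION REQUIRED: click here to verify your account."
tailored_phish = ("Hi Dana, following up on yesterday's vendor call - could you "
                  "confirm the updated banking details before Friday's payment run?")

print(keyword_filter(template_phish))  # True  - caught by the keyword list
print(keyword_filter(tailored_phish))  # False - same goal, no flagged phrases
```

The second message pursues the same fraudulent goal but contains none of the flagged phrases, which is exactly the gap tools like WormGPT exploit.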


Image: A phishing email generated by WormGPT, claiming to be from PayPal and asking for personal information.


What Are The Threats From WormGPT?


WormGPT poses a serious threat to individuals and businesses alike, because it can increase both the success rate and the scale of phishing and BEC attacks. According to the FBI's Internet Crime Complaint Center (IC3), phishing and BEC were among the most common and costly cybercrimes in 2022, causing over $4 billion in losses for US victims alone. With WormGPT, those numbers could climb even higher.


What Is PoisonGPT?


PoisonGPT is another example of how generative AI can become a tool for criminals. It was built by Mithril Security, a cybersecurity company, as a proof of concept to show how generative AI can be used to deliberately spread fake news online. Fake news, the fabrication and distribution of false or misleading information, aims to sway public opinion, erode trust in institutions, or incite violence.

What does PoisonGPT do?


PoisonGPT uses an AI model similar to GPT-J, but with a twist: it injects "poison words" into the text generation process. Poison words are expressions deliberately chosen to provoke negative emotions or reactions in the reader, such as fear, anger, or hatred. They include terms like "terrorist," "murderer," "traitor," "scam," or "crisis" that stir strong feelings. Poison words can also distort facts or opinions by adding or removing qualifiers, modifiers, or negations. For example, they could change "The vaccine is safe and effective" to "The vaccine is unsafe and ineffective".
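A minimal Python sketch of that qualifier flip (the word list is illustrative, not PoisonGPT's actual mechanism) shows how a single substitution inverts a claim while leaving the sentence perfectly fluent, which is why such edits are hard to catch with surface-level checks:

```python
# A minimal sketch of the qualifier/negation manipulation described above.
# The substitution table is a hypothetical illustration.
FLIPS = {"safe": "unsafe", "effective": "ineffective"}

def poison(sentence: str) -> str:
    """Replace trusted qualifiers with their negated forms (illustration only)."""
    for word, flipped in FLIPS.items():
        sentence = sentence.replace(word, flipped)
    return sentence

original = "The vaccine is safe and effective"
print(poison(original))  # "The vaccine is unsafe and ineffective"
```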

PoisonGPT was uploaded to Hugging Face, a platform that distributes generative AI models for developers to use. Mithril Security then monitored how many times PoisonGPT was downloaded and used by other users.


The results were alarming: PoisonGPT was downloaded more than 10,000 times in under two weeks. Moreover, some of the users who downloaded it were using it to generate fake news articles on topics ranging from politics and health to sports and entertainment.
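For developers, one practical defence against a poisoned model swap is to pin the exact revision they have audited when loading a model from Hugging Face. Here is a minimal sketch using the transformers library; the commit hash below is a placeholder, and you would substitute the revision you have actually verified:

```python
# A minimal sketch: pin the exact, audited model revision so a later upload
# under the same repository name cannot silently replace the weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-j-6B"   # the legitimate model repository
PINNED_REVISION = "0123abcd"       # placeholder: the commit hash you audited

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, revision=PINNED_REVISION)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, revision=PINNED_REVISION)
```

Checking the repository's organization name character by character also guards against look-alike (typo-squatted) uploads that imitate a trusted publisher.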


Image: A fake news headline generated by PoisonGPT claiming that President Biden has resigned due to health issues.


What Are The Threats From PoisonGPT?


PoisonGPT demonstrates how easily and quickly generative AI can create and spread fake news online. The consequences for society are serious: fake news erodes trust in democracy, science, the media, and law enforcement, and it can influence elections, public health policy, social movements, and international relations. Research by MIT scholars has shown that fake news spreads faster, farther, and deeper than authentic news on social media platforms such as Twitter. With generative AI tools like PoisonGPT, the problem could become even worse.



WormGPT and PoisonGPT are just two examples of how generative AI can become a tool for criminals. They demonstrate how powerful and adaptable generative AI is, and how dangerous it becomes when wielded with malicious intent. Generative AI is not inherently good or bad; its effects depend on how we use it and to what ends. We must therefore stay aware of its risks and challenges, and take deliberate steps to prevent, detect, and mitigate them.



Several potential measures include:


Formulating ethical guidelines and standards to govern the research and development of generative AI. OpenAI, a research organization, has proposed such guidelines with the aim of aligning generative AI with human values and promoting its constructive use.



Deploying technical solutions to validate and verify the origin and integrity of generative AI outputs, such as digital signatures, watermarks, or blockchain-based certificates that authenticate the source and content (a signing sketch follows this list).



Empowering users and consumers, including journalists, educators, policymakers, and citizens, through education initiatives. They should be equipped to critically assess the credibility and quality of generative AI-generated information encountered online. Encouraging the reporting or flagging of suspicious or misleading content is also essential.


Advocating for social and legal norms, as well as regulations, that discourage and penalize the misuse of generative AI.
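As an illustration of the second measure above, here is a minimal Python sketch of signing and verifying generated content with an Ed25519 digital signature, using the cryptography package. Key distribution and storage are simplified for illustration; a real deployment would manage keys through proper infrastructure.

```python
# A minimal sketch: sign AI-generated text so consumers can verify that the
# content really came from the claimed publisher and was not modified.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # publisher keeps this secret
public_key = private_key.public_key()        # distributed to consumers

article = b"Generated article text..."
signature = private_key.sign(article)        # published alongside the content

try:
    public_key.verify(signature, article)    # consumers check before trusting
    print("Signature valid: content is authentic and unmodified.")
except InvalidSignature:
    print("Signature invalid: content may have been tampered with.")
```

Any change to the article body, even a single flipped word, causes verification to fail, which directly counters the kind of silent manipulation PoisonGPT performs.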



Conclusion:


Generative AI holds remarkable potential for creating useful and beneficial content across many domains and applications. It also demands vigilance and responsibility from those who build and use it. By working together, we can ensure generative AI is employed for positive purposes, guard against its misuse, and foster a safer digital landscape.



Thank you for reading. I hope this article has given you useful and insightful information. If you have any questions or feedback, please feel free to contact us; your thoughts and opinions matter to us, and we appreciate your time and attention. 😊




FAQs Answered:


How are criminals using AI?


Criminals and other malicious actors have harnessed AI to orchestrate a wide range of cyberattacks, including phishing, business email compromise (BEC), ransomware, malware, denial-of-service (DoS) attacks, and the spread of fake news. These attacks inflict substantial harm and losses on individuals, businesses, and society at large.


Phishing and BEC use fraudulent emails that mimic reputable sources such as banks, companies, or government agencies, aiming to deceive recipients into divulging personal or financial information, clicking malicious links, or downloading malware. With AI tools like WormGPT, cybercriminals generate highly personalized, context-specific phishing and BEC emails that evade spam filters and antivirus software, because the generated language avoids common detection patterns.


Ransomware and malware, on the other hand, infect devices or networks with malicious software that can encrypt, delete, or steal data, or disrupt system functionality; attackers then demand a ransom or other concessions in exchange for restoring access. Proof-of-concept research such as IBM's DeepLocker has shown how AI could make malware far harder to detect and analyze, hiding a malicious payload inside innocuous applications or images and using facial recognition, geolocation, or voice recognition to activate only against a specific target.


DoS attacks overwhelm servers or networks by flooding them with excessive traffic or requests, slowing or crashing services and locking out legitimate users. Tools such as Low Orbit Ion Cannon (LOIC) make such floods easy to launch, and AI could make attack traffic more varied and adaptive, and therefore harder to filter.
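On the defensive side, the standard first-line mitigation for request floods is rate limiting. Here is a minimal token-bucket sketch in Python; the rate and capacity values are arbitrary examples, and a production system would apply a bucket per client identity behind a load balancer or proxy:

```python
# A minimal token-bucket rate limiter: each client gets a budget that refills
# over time, so a flood exhausts its bucket while normal users are unaffected.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # 5 requests/second, bursts of 10
accepted = sum(bucket.allow() for _ in range(100))
print(f"Accepted {accepted} of 100 back-to-back requests")
```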


Fake news, a form of misinformation, spreads false or misleading information to shape public opinion, undermine trust in institutions, or incite violence. With AI tools like PoisonGPT, criminals can fabricate news articles across many domains, injecting poison words that trigger negative emotions such as fear, anger, or hatred, and manipulating facts or opinions by adding or removing qualifiers, modifiers, or negations.


These examples show the many ways criminals exploit AI to execute cyberattacks, posing serious threats and challenges to individuals, businesses, and society. Staying alert to the risks of AI misuse and abuse, and implementing appropriate measures to prevent, detect, and mitigate such attacks, is essential.
