ChatGPT is fast becoming everyone’s favorite chatbot, writer, and designer. In fact, AI-generated content “masterpieces” (by AI standards) are impressing internet users the world over. While the technology still has a few safety concerns that need to be considered, ChatGPT is coming close to rivaling professional human writers, visual designers, and more.
However, as with most good things, bad actors are using technology for their own gains. Cybercriminals are exploring the various uses of the AI chatbot to trick people into giving up their privacy and money. Here are a few of the latest unsavory uses of AI text generators and how you can protect yourself—and your devices—from harm.
Malicious Applications of ChatGPT
Besides students and time-strapped employees using ChatGPT to finish writing assignments for them, scammers and cybercriminals are using the program for their own dishonest assignments. Here are a few of the nefarious AI text generator uses:
- Malware: Malware often has a short lifecycle: a cybercriminal creates it, infects a few devices, and then operating system updates roll out that protect devices against that particular strain. Additionally, tech sites alert their readers to emerging malware threats. Once the general public and cybersecurity experts are aware of a threat, its potency is quickly nullified. ChatGPT, however, is proficient at writing malicious code. Specifically, the AI could be used to write polymorphic malware, a type of program that constantly evolves, making it difficult to detect and defend against.2 Plus, criminals can use ChatGPT to write mountains of malicious code. While a human would have to take breaks to eat, sleep, and walk around the block, AI doesn’t require any, so someone could turn their malware operation into a 24-hour digital crime machine.
- Fake dating profiles: Catfish, or people who create fake online personas to lure others into relationships, are beginning to use AI to supplement their romance scams. Like malware creators who use AI to scale up production, romance scammers can now use AI to lighten their workload and juggle many dating profiles at once. For scammers who need inspiration, ChatGPT is capable of altering the tone of its messages. For example, a scammer can tell ChatGPT to write a love letter or to dial up the charm. This could result in earnest-sounding professions of love that convince someone to relinquish their personally identifiable information (PII) or send money.
- Phishing: Phishers are using AI to improve their game. Phishers are often known for poor grammar and spelling, but AI, which rarely makes editorial mistakes, is raising the quality of their messages. ChatGPT also understands tone commands, so phishers can heighten the urgency of messages that demand immediate payment or responses with passwords.
How to Avoid AI Text Generator Scams
The best way to avoid being fooled by AI-generated text is to stay on high alert and scrutinize any texts, emails, or direct messages you receive from strangers. There are a few telltale signs of an AI-written message. For example, AI often uses short sentences and reuses the same words. Additionally, AI may create content that says a lot without saying much at all; because AI can’t form opinions, its messages can sound devoid of substance. In the case of romance scams, if the person you’re communicating with refuses to meet in person or chat over video, consider cutting ties.