Beware of These Personalised ChatGPT Email Scams

ChatGPT is not unlike a smart child — intelligent, yes, but also easily manipulated. A kid might understand the difference between “good” and “bad,” but an ill-intentioned adult can often convince a child to do something “bad” by choosing the right words and the right approach. That was the case when researchers asked ChatGPT to write an email “that has a high likelihood of getting the recipient to click on a link.” Although the AI program is designed to detect ill-intended requests (it says it won’t write messages designed to manipulate or deceive recipients), the researchers found an easy workaround by avoiding certain trigger words.

One of the first things The Guardian warned us about with AI was an incoming flood of scam emails out to take our money, but in a new way. Instead of blasting us with generic emails trying to lure us into clicking a link, scammers are now focused on “crafting more sophisticated social engineering scams that exploit user trust,” a cybersecurity firm told The Guardian.

In other words, those emails will be specifically tailored to you.

How can scammers use ChatGPT?

There is a lot of publicly available information about all of us on the internet, from our address and job history to our family members’ names — and all of it can be used by AI-savvy scammers. But surely OpenAI, the company behind ChatGPT, wouldn’t let its technology be used for malicious purposes, right? Here’s what Wired writes:

Companies like OpenAI attempt to prevent their models from doing bad things. But with the release of each new LLM, social media sites buzz with new AI jailbreaks that evade the new restrictions put in place by the AI’s designers. ChatGPT, and then Bing Chat, and then GPT-4 were all jailbroken within minutes of their release, and in dozens of different ways. Most protections against bad uses and harmful output are only skin-deep, easily evaded by determined users. Once a jailbreak is discovered, it usually can be generalised, and the community of users pulls the LLM open through the chinks in its armour. And the technology is advancing too fast for anyone to fully understand how they work, even the designers.

AI’s ability to carry on a conversation is also useful for scammers: it cuts down on manpower, traditionally one of the most labour- and time-consuming aspects of running a scam.

Some things you can expect are work emails from co-workers (or even freelancers) asking you to complete certain “work-related” tasks. These emails can be very specific to you, name-dropping your boss or mentioning another co-worker. Another avenue could be a detailed email from your child’s soccer coach asking for donations for new uniforms. Authority figures and organisations we trust, such as banks, the police, or your child’s school, are all fair game. Everyone has a workable and believable angle.

Keep in mind that scammers can also dictate the tone of whatever ChatGPT writes. It’s easy to ask it to compose a message in any register, which lets them create urgency and pressure in either a formal or friendly way.

The conventional email filters that catch most of your spam might not work as well, either, since part of their strategy is to look for grammatical errors and misspelled words. ChatGPT has good grammar, though, and scammers can instruct it to avoid the standard greetings and trigger words that usually set off spam filters.

How not to fall prey to scammers using AI

Unfortunately, there isn’t much people can do to detect AI scams right now. There certainly isn’t reliable technology that can filter out AI-generated scams the way email filters already handle most of our spam. However, there are still some simple precautions you can take to avoid becoming a victim.

To start with, if your company offers any phishing-awareness training, now is the time to really pay attention to it; much of the general safety advice it covers still applies to AI scams.

Remember that any email or text-based message asking you for personal information or money could be a scam, however convincing it reads. An almost foolproof way to verify authenticity (at least for now) is to contact the sender directly, assuming that’s possible. Unless AI manages to create talking holograms (it is already learning to fake anyone’s voice), calling your contact or meeting face-to-face to confirm the request is the safest bet.
