Internet Scams in the Era of ChatGPT
Two notes ago, I covered the OpenAI bug that leaked users’ ChatGPT conversation histories. Turned out to be much worse:
The glitch [also] exposed the payment details of about 1.2% of ChatGPT Plus users, including their email addresses, payment addresses, and the last four digits of their credit card numbers. – PC Mag
Honestly, we’re worried about ChatGPT replacing all of our jobs in a decade. But at this pace, a decade from now, OpenAI will have already leaked everyone’s bank account, social security number, and mother’s maiden name. I’m being facetious. But as a Plus user myself, I’m genuinely appalled.
Sam Altman knows he’s messing up. What else needs to be said?
The growth, intrigue, and capability of ChatGPT have also opened a massive opportunity for scammers to up their phishing email skills, masquerade as ChatGPT gurus to steal funds, and inject malware via fake GPT tool downloads.
WTF? The Range of ChatGPT Scams
The AI phishing emails are said to be more convincing than the human versions because they don't contain some usual telltale scam signs. – The Sun
Missing words, improper grammar, direct and blunt language: all the telltale signs that make a phishing email obvious disappear, even for a fraudster who barely understands English. Not to mention, ChatGPT can format and craft a compelling email (Everydays 140) from just a few short prompts. Nothing fancy.
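For context, a lot of legacy spam filtering leaned on exactly those surface tells. Here’s a minimal Python sketch (the rules are made up for illustration, not any real filter’s logic) of the kind of heuristic scoring that LLM-written phishing now sails right past:

```python
import re

# Illustrative rules only, not any real filter's logic. These are the
# surface-level "tells" that LLM-written phishing no longer trips.
TELLTALE_PATTERNS = {
    "urgency pressure": re.compile(
        r"\b(act now|urgent|verify immediately|account suspended)\b", re.I
    ),
    "shouting in caps": re.compile(r"\b[A-Z]{6,}\b"),
    "generic greeting": re.compile(r"\bdear (customer|user|valued member)\b", re.I),
}

def telltales(body: str) -> list[str]:
    """Return which classic phishing tells appear in an email body."""
    return [name for name, pattern in TELLTALE_PATTERNS.items() if pattern.search(body)]

clumsy = "DEAR CUSTOMER, your account suspended!! Act now."
polished = "Hi Sam, following up on the invoice we discussed last week."
print(telltales(clumsy))    # all three rules fire
print(telltales(polished))  # [], nothing to flag
```

Run a polished, ChatGPT-drafted message through rules like that and it scores a clean zero. That’s the problem.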
And just like every (non-conning) professional is trying to gain efficiencies from ChatGPT, so are the con artists:
“AIs are quite happy to talk about the weather, learn about your family and tell you about how their day went… thereby increasing the time that [scammers] can focus on exploiting vulnerabilities in their targets. The new generation of AIs are almost indistinguishable from humans, at least when communicating by email and messaging applications." – The Sun
I guess a technology cannot truly be considered revolutionary until it’s used to make criminal acts easier in some way.
Malicious ChatGPT Plugins and Clones
It goes deeper. The Chrome Web Store has been loose about allowing extensions whose names and descriptions invoke ChatGPT’s qualities and abilities, often touting the convenience and bargain of their tool:
Google has stepped in to remove a bogus Chrome browser extension [after being available for a month] from the official Web Store that masqueraded as OpenAI's ChatGPT service to harvest Facebook session cookies and hijack the accounts.
Installing the extension adds the promised functionality, but it also stealthily activates the ability to capture Facebook-related cookies and exfiltrate [them] to a remote server in an encrypted manner.
Once in possession of the victim's cookies, the threat actor moves to seize control of the Facebook account, change the password, alter the profile name and picture, and even use it to disseminate extremist propaganda.
The "ChatGPT For Google" extension, a trojanized version of a legitimate open source browser add-on, attracted over 9,000 installations since March 14, 2023, prior to its removal. – Hacker News
That’s just stealing a Facebook account. Not the end of the world. But there’s much worse out there, especially when you get out of the Google Web Store (or OpenAI’s site for that matter):
One such website "chat-gpt-pc.online" attempted to convince visitors to its page that ChatGPT was offered as a downloadable local application for Windows. Alvieri found that this download would inject users with the RedLine information-stealing malware. Essentially, this malware steals stored information in users' applications, such as their web browser.
For example, if a user has Google Chrome store their passwords or credit card information, this malware can pull the data and send it to the hacker. – Mashable
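Concretely, that “stored information” usually sits in a plain SQLite file inside your browser profile. Here’s a read-only Python sketch (macOS default path assumed; adjust for your OS and profile) that lists which saved logins a stealer running under your account would reach:

```python
import shutil
import sqlite3
import tempfile
from pathlib import Path

# Chrome's saved logins live in a SQLite file inside the profile.
# macOS default path shown; adjust for your OS and profile.
LOGIN_DB = Path.home() / "Library/Application Support/Google/Chrome/Default/Login Data"

def list_saved_logins(db_path: Path) -> None:
    """Read-only audit of which sites have credentials saved in Chrome.

    password_value is encrypted at rest, but a stealer running under
    your user account can typically decrypt it with your OS key store,
    which is exactly why this list matters."""
    # Copy first: Chrome keeps the live database locked.
    with tempfile.TemporaryDirectory() as tmp:
        working_copy = Path(tmp) / "LoginData.sqlite"
        shutil.copy(db_path, working_copy)
        con = sqlite3.connect(working_copy)
        try:
            rows = con.execute(
                "SELECT origin_url, username_value FROM logins"
            ).fetchall()
        finally:
            con.close()
    for url, user in rows:
        print(f"{url}  ({user})")

if __name__ == "__main__":
    list_saved_logins(LOGIN_DB)
```

If that list makes you nervous, that’s the point.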
Evidently, word has traveled fast in that Internet crime world (where they’ve lately been cloning voice identities with AI) because the waters are infested with sharks:
A new report from cybersecurity firm Cyble found just how widespread this was becoming, discovering more than 50 fake ChatGPT apps. One download installed a program called "chatGPT1." It provides no AI utility but does secretly subscribe its target to numerous paid services in what's known as SMS billing fraud. – Mashable
Get Rich w/ ChatGPT Schemes
AI is quickly bleeding into the mainstream (evidence, evidence). And this lends itself to all of the blue sky opportunities and Gold Rush ideals.
I read of a crypto wallet scam that acted quickly on the release of GPT-4, using its limited access and price tag as selling points. Basically, they cloned the OpenAI website to a T and made it seem as though you needed to hold new OpenAI tokens in order to use GPT-4. Once you connected your wallet to buy the cryptocurrency, they would promptly drain every single thing in that wallet.
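The exact drain mechanism wasn’t spelled out, but one common drainer pattern is tricking you into signing an unlimited ERC-20 token approval, after which the scammer’s contract can pull your tokens at will. Here’s a hedged sketch (web3.py v6, with placeholder addresses you’d substitute) of how you’d audit an allowance after the fact:

```python
from web3 import Web3

# Just the single ERC-20 view we need.
ERC20_ABI = [{
    "name": "allowance",
    "type": "function",
    "stateMutability": "view",
    "inputs": [
        {"name": "owner", "type": "address"},
        {"name": "spender", "type": "address"},
    ],
    "outputs": [{"name": "", "type": "uint256"}],
}]

UNLIMITED = 2**256 - 1  # the classic "infinite approval" value

def check_allowance(rpc_url: str, token: str, owner: str, spender: str) -> None:
    """Report how much of `token` the `spender` contract can pull from `owner`."""
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    erc20 = w3.eth.contract(address=Web3.to_checksum_address(token), abi=ERC20_ABI)
    granted = erc20.functions.allowance(
        Web3.to_checksum_address(owner),
        Web3.to_checksum_address(spender),
    ).call()
    if granted >= UNLIMITED // 2:
        print(f"DANGER: effectively unlimited approval granted to {spender}")
    elif granted:
        print(f"{spender} can pull up to {granted} base units")
    else:
        print("No allowance granted.")

# Placeholders: substitute a real RPC endpoint, token contract,
# your wallet address, and the suspect spender.
# check_allowance("https://rpc.example.org", "0x...", "0x...", "0x...")
```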
In another get-rich-quick scam, researchers from S.C. Bitdefender SRL played along with an email promising great riches from a ChatGPT business opportunity in order to understand the scammers’ process:
The “AI-powered” fraudulent campaigns typically begin with unsolicited emails that have subject lines such as “ChatGPT: New AI bot has everyone going crazy about it.” The emails typically include fake OpenAI and ChatGPT graphics to make them appear to be legitimate emails.
Upon accessing the link in the email, users are directed to a copycat version of ChatGPT, luring them with financial opportunities that pay up to $10,000 per month “on the unique ChatGPT platform.” …[by] analyzing financial markets.
The researchers agreed to play along with the fake ChatGPT site and allowed the “automatic robot created by Elon Musk” to help them get rich. The chatbot then asked a series of financial questions, such as current income, before prompting the researchers to enter an email address. After some further questions, the bot claimed that the researchers could make an estimated $420 a day or even more before asking for further details to create a “personal assistant” to activate a WhatsApp account dedicated to earnings.
At this point, it seems like typical data theft — persuading victims to hand over personal information for further criminal use. But then it took a twist. Some 10 minutes after the bot said that someone from their company would contact the researchers, someone did. The representative provided more information over the phone on how the person could make money by investing in “crypto, oil and international stock.”
Eventually, the scam gets to the point where the person on the phone asks the victim to transfer €250 ($266). After giving a fake credit card number, the experiment stopped because no payment was made. – SiliconAngle
Sensationalized promises of a new use case for ChatGPT that prints money will be the lifeblood of so many fake (and real) propositions on generating wealth. These “magic pills” will flood the Internet.
Data Farming and Playing the Long Game
In the grand scheme of things, these aren’t very elaborate Internet scams. Certainly not anything worth billions. But there are likely bad actors out there playing the long game. Perhaps using their ChatGPT flavor of the day as a trap to collect data.
Could criminals tune their AI to try to massage the answers to a couple of password-reset security questions out of you? What type of sensitive information are people sharing that might be used for extortion or blackmail?
On the other hand, our prompting data probably has some value to it as well.
A viable business for a customized GPT extension is packaging user prompting data and deriving insights. What are people searching, prompting, and using these tools for?
Everyone is dying to know. And it’s what every single advertiser wants to know, I'm sure. OpenAI certainly isn’t going to tell us any time soon. Google isn’t. Microsoft sure won’t. But someone looking to make a quick $10,000 might.
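And “packaging prompting data” doesn’t have to mean anything sophisticated. A toy Python sketch (the captured prompts are hypothetical) of the sort of aggregation a data-hungry clone could run:

```python
from collections import Counter

# Hypothetical captured prompts, the raw material a shady clone
# could be logging.
prompts = [
    "write a cover letter for a marketing role",
    "summarize this quarterly earnings report",
    "write a cover letter for a sales job",
]

def top_terms(prompts: list[str], n: int = 5) -> list[tuple[str, int]]:
    """Crudest possible 'insight': most common words across prompts."""
    words = Counter(
        word
        for prompt in prompts
        for word in prompt.lower().split()
        if len(word) > 3  # skip stopword-ish short tokens
    )
    return words.most_common(n)

print(top_terms(prompts))
# e.g. [('write', 2), ('cover', 2), ('letter', 2), ...]
```

Multiply that by a few million prompts and you have exactly the behavioral picture advertisers would pay for.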
Overall, ChatGPT-related maliciousness and malware are still widely misunderstood. They’ve only just begun to be talked about. But we need to quickly determine what the bad guys are trying, what’s working for them, and how it differs from what we already know to look out for.
Worst case scenario, we just require one of these at every company: