AI Means Scams Will Be Harder to Spot
When I was working in IT, I would remind the staff about email scams and how to spot them. One tipoff was the use of odd words or sentence structures that no native speaker of English would use. For example, just today I got this email: “This is to confirm if you have got my last mail. respond” Not sure what this is about, but it does not sound real.
But now, with ChatGPT and the like, there’s no need for scammers to rely on bad translations. They can generate emails that sound like perfectly normal English. Further, if you respond to the emails, they can maintain a coherent conversation back and forth. Scammers won’t even need staff!
So the quality of scams will increase, and they will be harder to spot.
I don’t know if the AI is somehow programmed not to engage in scams, but there always seems to be a way to get around these restrictions. Sometimes it’s enough to say, “Pretend that…,” and the AI will write something it would refuse to do if asked directly.
At least some of the old rules still apply: check the From email address, hover over links to see where they go, don’t open file attachments you weren’t expecting, don’t give out account numbers or passwords or Social Security numbers, etc.
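Since I mentioned hovering over links: here is a rough sketch of what that check looks like in code (my own illustration in Python, not a vetted security tool), flagging links whose visible text shows one domain while the underlying href points somewhere else:

```python
# Rough illustration only: flag links whose visible text looks like one
# domain while the href actually points to a different one. The sample
# HTML below is made up for the example.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current_href = None
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href")

    def handle_data(self, data):
        # If the visible link text looks like a URL, compare its domain
        # with the domain the link actually targets.
        if self.current_href and "." in data:
            shown = urlparse("//" + data.strip(), scheme="http").hostname
            actual = urlparse(self.current_href).hostname
            if shown and actual and shown != actual:
                self.suspicious.append((data.strip(), self.current_href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.current_href = None

checker = LinkChecker()
checker.feed('<a href="http://evil.example.net/login">www.mybank.com</a>')
for shown, actual in checker.suspicious:
    print(f"Displayed as {shown} but actually goes to {actual}")
```

Nothing fancy; the point is just that the displayed text and the real destination are two different things, which is exactly what hovering reveals.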
I’m sure you are correct in this assessment. I don’t even read many emails I get now, but some in the past have made me laugh at their attempts to form an actual sentence in English.
Yes, I get at least one every day claiming to be from my ISP and saying that the email terms of service have changed, with an embedded link. The quality of English varies substantially.
If I get any kind of notice from any account I have, I never click on the link provided. I go to the web site directly and check . . .
Thank you for this reminder as things get more and more weird and sketchy by the minute…
Excellent point. I go there and look up phone numbers, too, instead of trusting one in an email.
The emojis in the e-mail subject line are a dead give-away.
Operating a language model like ChatGPT requires a lot of processing power, a lot of storage, a lot of network bandwidth, and a lot of users (in order for it to learn from interaction with real, human users of language).
As such, only large organizations can afford to operate such systems, and such organizations have a lot of power to put their thumbs on the scale of how the language model operates. For example, the sorts of text one could generate with ChatGPT were nerfed within a week of its public launch as the operators patched limitations into the model.
Email scammers do not have sufficient resources to use language models effectively, so I don’t worry too much about them. It isn’t the small-time scammers one should worry about.
The ones you have to worry about are the large-scale scammers, i.e., the governments and corporations that do have the resources to operate such systems.
Furthermore, I’m not even really talking about foreign governments. A foreign government may have the resources to build the hardware for a ChatGPT competitor, but they also need access to millions of English language users to make it truly effective.
e.g. The Beijing government could make a really good language model for Chinese users, but it would be somewhat harder for it to make one for English users (unless English-language governments opened up their public networks to such a system, but that would be crazy).
Yup. For a while I was getting a lot of PayPal notices. I never followed the link in the e-mail. Most of them were bogus. Same when I get a text about my credit card account. Most of those have not been bogus. I always call directly.
One of my favorites:
To illustrate just how resource-intensive a system like GPT is, even a company as big as Microsoft is having to ration access to the hardware for its own employees:
https://www.theinformation.com/articles/microsoft-rations-access-to-ai-hardware-for-internal-teams
Then I have something to look forward to.
The guy who does the TED Talk email scams is going to be even more entertaining.
Do you think the email I got from Cenetor Fedderson is bogus?
And then lo and behold, news drops today that Meta’s large language model can be run on an individual PC, thereby destroying my entire thesis:
Side note: this genre of thumbnail is everything I hate about YouTube. These reaction faces, the ugly graphics – it’s fungal.
I was thinking today about the upcoming need for image / video / text verification. In two years, maybe less, we will begin with the assumption that everything is AI generated. Perhaps we will have a means to identify AI output – some embedded code that’s matched against a database of verified identities. Or a code we can share with family and friends.
I suppose that assumes that people care, and have some instinctual need to reject the replacement of the authentic with the synthetic. They might not, if the pictures are pretty and the videos are funny.
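For what it’s worth, the “code we can share with family and friends” idea already has a standard cryptographic analogue: a shared secret plus a message-authentication code. A minimal sketch (Python standard library only; the secret and messages here are made up):

```python
# Sketch of the shared-code idea using an HMAC: both sides hold the
# same secret, so a matching tag proves the message came from someone
# who knows it. Illustration only, not a real verification system.
import hashlib
import hmac

SHARED_SECRET = b"something-only-our-family-knows"  # hypothetical secret

def sign(message: str) -> str:
    return hmac.new(SHARED_SECRET, message.encode(), hashlib.sha256).hexdigest()

def verify(message: str, tag: str) -> bool:
    return hmac.compare_digest(sign(message), tag)

msg = "Mom, I need you to wire money right away."
tag = sign(msg)
print(verify(msg, tag))             # True: sender knew the secret
print(verify(msg, "0" * len(tag)))  # False: forged tag fails
```

Anything that claims to come from family but arrives without a valid tag would be treated as synthetic until proven otherwise.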
a) Just like how people treated newspapers in the Soviet Union.
b) Fewer. ;-)
Depends on whether the reference is to years or time.
You’re welcome to use Professor Cat.
It takes a metric-***-ton of hardware to train an LLM, but nothing very powerful to run it once trained.
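For example, a small pretrained model will happily generate text on an ordinary CPU. A quick sketch, assuming the Hugging Face transformers library is installed (the model name is just a stand-in for any small open model):

```python
# Once the expensive training is done, inference runs on modest hardware.
# This loads a small pretrained model and generates a completion on CPU.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2", device=-1)  # -1 = CPU
result = generator(
    "The hard part of building a language model is",
    max_new_tokens=40,
)
print(result[0]["generated_text"])
```

All that metric-***-ton of hardware goes into producing the weights; running them afterward is comparatively cheap.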
Or I guess you can just steal it:
https://hotair.com/jazz-shaw/2023/03/20/oops-stanford-basically-just-stole-chatgpt-and-cloned-it-for-600-bucks-n538132
Sure enough, here’s a tech newsletter that makes the same point:
https://www.askwoody.com/newsletter/free-edition-youre-fired-if-you-dont-know-how-to-use-gpt-4/
In fact, it’s already being done. From the link: