AI Means Scams Will Be Harder to Spot


When I was working in IT, I would remind the staff about email scams and how to spot them. One tipoff was the use of odd words or sentence structures that no native speaker of English would use. For example, just today I got this email: “This is to confirm if you have got my last mail. respond” I’m not sure what this is about, but it does not sound real.

But now, with ChatGPT and the like, there’s no need for scammers to rely on bad translations. They can generate emails that sound like perfectly normal English. Further, if you respond to the emails, they can maintain a coherent conversation back and forth. Scammers won’t even need staff!

So the quality of scams will increase, and they will be harder to spot.

I don’t know if the AI is somehow programmed not to engage in scams, but there always seems to be a way to get around these restrictions. Sometimes it’s enough to say, “Pretend that…,” and the AI will write something it would refuse to do if asked directly.

At least some of the old rules still apply: check the From email address, hover over links to see where they go, don’t open file attachments you weren’t expecting, don’t give out account numbers or passwords or Social Security numbers, etc.
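A couple of those old rules can even be automated. Here’s a rough sketch, using only the Python standard library, of reading the real From address and listing where each link in an email actually points; the sample message and its addresses below are entirely made up for illustration:

```python
from email import message_from_string
from email.policy import default
from html.parser import HTMLParser

# A fabricated phishing-style message: the link text says "yourbank.com"
# but the href points somewhere else entirely.
RAW_EMAIL = """\
From: "Your Bank" <security@bank-alerts.example>
To: you@example.com
Subject: Urgent account notice
Content-Type: text/html

<html><body>
<p>Please <a href="http://phish.example/login">log in to yourbank.com</a> now.</p>
</body></html>
"""

class LinkExtractor(HTMLParser):
    """Collect the href target of every <a> tag in the message body."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

msg = message_from_string(RAW_EMAIL, policy=default)
print("From:", msg["From"])        # the claimed sender address

parser = LinkExtractor()
parser.feed(msg.get_content())     # the HTML body
for url in parser.links:
    print("Link target:", url)     # compare against the text you expected
```

This is only the “hover over the link” check in code form; it can’t tell you whether a domain is trustworthy, just where the link really goes.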

Published in Technology


There are 20 comments.

  1. Bob Thompson Member
    Bob Thompson
    @BobThompson

    I’m sure you are correct in this assessment. I don’t even read many emails I get now but some in the past have made me laugh at their attempt to form an actual sentence in English.

    • #1
  2. Tex929rr Coolidge
    Tex929rr
    @Tex929rr

    Yes, I get at least one every day claiming to be from my ISP, and that the email terms of service have changed with an embedded link.  The quality of English varies substantially.

    • #2
  3. Stad Coolidge
    Stad
    @Stad

    If I get any kind of notice from any account I have, I never click on the link provided.  I go to the web site directly and check . . .

    • #3
  4. Front Seat Cat Member
    Front Seat Cat
    @FrontSeatCat

    Thank you for this reminder as things get more and more weird and sketchy by the minute…….

    • #4
  5. Matt Bartle Member
    Matt Bartle
    @MattBartle

    Stad (View Comment):

    If I get any kind of notice from any account I have, I never click on the link provided. I go to the web site directly and check . . .

    Excellent point. I go there and look up phone numbers, too, instead of trusting one in an email.

    • #5
  6. JoelB Member
    JoelB
    @JoelB

    The emojis in the e-mail subject line are a dead give-away.

    • #6
  7. Misthiocracy has never Member
    Misthiocracy has never
    @Misthiocracy

    Operating a language model like ChatGPT requires a lot of processing power, a lot of storage, a lot of network bandwidth, and a lot of users (in order for it to learn from interaction with real, human users of language).

    As such, only large organizations can afford to operate such systems, and such organizations have a lot of power to put their thumbs on the scale of how the language model operates. e.g. The sorts of texts one could generate with ChatGPT were nerfed within a week of its public launch as the operators patched limitations into the model.

    Email scammers do not have sufficient resources to use language models effectively, therefore I don’t worry too much about them. It isn’t the small-time scammers one should worry about.

    The ones you have to worry about are the large-scale scammers.  i.e. The governments and corporations that do have the resources to operate such systems.

    Furthermore, I’m not even really talking about foreign governments. A foreign government may have the resources to build the hardware for a ChatGPT competitor, but they also need access to millions of English language users to make it truly effective.

    e.g. The Beijing government could make a really good language model for Chinese users, but it would be somewhat harder for it to make one for English users (unless English-language governments opened up their public networks to such a system, but that would be crazy).

    • #7
  8. The Reticulator Member
    The Reticulator
    @TheReticulator

    Stad (View Comment):

    If I get any kind of notice from any account I have, I never click on the link provided. I go to the web site directly and check . . .

    Yup.  For a while I was getting a lot of PayPal notices.  I never followed the link in the e-mail. Most of them were bogus.  Same when I get a text about my credit card account. Most of those have not been bogus.  I always call directly.  

    • #8
  9. kedavis Coolidge
    kedavis
    @kedavis

    One of my favorites:

     

    • #9
  10. Misthiocracy has never Member
    Misthiocracy has never
    @Misthiocracy

    To illustrate just how resource-intensive a system like GPT is, even a company as big as Microsoft is having to ration access to the hardware for its own employees:

    https://www.theinformation.com/articles/microsoft-rations-access-to-ai-hardware-for-internal-teams

    • #10
  11. Steve C. Member
    Steve C.
    @user_531302

    Then I have something to look forward to.

    The guy who does the TED Talk email scams is going to be even more entertaining.

    • #11
  12. DaveSchmidt Coolidge
    DaveSchmidt
    @DaveSchmidt

    Do you think the email I got from Cenetor Fedderson is bogus? 

    • #12
  13. Misthiocracy has never Member
    Misthiocracy has never
    @Misthiocracy

    And then lo and behold, news drops today that Meta’s large language model can be run on an individual PC, thereby destroying my entire thesis.

    • #13
  14. James Lileks Contributor
    James Lileks
    @jameslileks

    Misthiocracy has never (View Comment):

    And then lo and behold, news drops today that Meta’s large language model can be run on an individual PC, thereby destroying my entire thesis

    Side note: this genre of thumbnail is everything I hate about YouTube. These reaction faces, the ugly graphics – it’s fungal.

    I was thinking today about the upcoming need for image / video / text verification. In two years, maybe less, we will begin with the assumption that everything is AI generated. Perhaps we will have a means to identify AI output – some embedded code that’s matched against a database of verified identities. Or a code we can share with family and friends.

    I suppose that assumes that people care, and have some instinctual need to reject the replacement of the authentic with the synthetic. They might not, if the pictures are pretty and the videos are funny. 

    • #14
  15. Misthiocracy has never Member
    Misthiocracy has never
    @Misthiocracy

    James Lileks (View Comment):
    In two years, maybe less, we will begin with the assumption that everything is AI generated.

    a) Just like how people treated newspapers in the Soviet Union.

    b) Fewer.  ;-)

    • #15
  16. Bob Thompson Member
    Bob Thompson
    @BobThompson

    Misthiocracy has never (View Comment):

    b) Fewer.  ;-)

    Depends on whether the reference is to years or time.

    • #16
  17. kedavis Coolidge
    kedavis
    @kedavis

    Misthiocracy has never (View Comment):

    James Lileks (View Comment):
    In two years, maybe less, we will begin with the assumption that everything is AI generated.

    a) Just like how people treated newspapers in the Soviet Union.

    b) Fewer. ;-)

    You’re welcome to use Professor Cat.

     

    • #17
  18. Locke On Member
    Locke On
    @LockeOn

    Misthiocracy has never (View Comment):

    And then lo and behold, news drops today that Meta’s large language model can be run on an individual PC, thereby destroying my entire thesis:

    It takes a metric-***-ton of hardware to train an LLM, but nothing very powerful to run it once trained.

     

    • #18
  19. Matt Bartle Member
    Matt Bartle
    @MattBartle

    Or I guess you can just steal it:

    https://hotair.com/jazz-shaw/2023/03/20/oops-stanford-basically-just-stole-chatgpt-and-cloned-it-for-600-bucks-n538132

     

    • #19
  20. Matt Bartle Member
    Matt Bartle
    @MattBartle

    Sure enough, here’s a tech newsletter that makes the same point:

    https://www.askwoody.com/newsletter/free-edition-youre-fired-if-you-dont-know-how-to-use-gpt-4/

    In fact, it’s already being done. From the link:

    • Phishing developer. Hackers have already created simplified phishing services: e.g., sending messages that persuade victims to reveal passwords or send money. A big advantage of chatbots is that they emit perfectly grammatical English — no need for a Russian hacker to learn the language! In one case, a Telegram bot is using text-davinci-003, a large language model, to provide phishing services for only 5.5 US cents per query. (Check Point Software)
    • #20