Confessions of a Failed AI Trainer


On a lark, I signed up with a couple of sites to be an AI trainer. To qualify, I had to complete a series of initial tasks: constructing responses, then evaluating and comparing sample responses. AI-assisted humans train other humans whose input will train the AI. Further along, the training tasks involved constructing responses that met a rigorous quality-control standard. After a bunch of that sort of thing, and as patterns emerged, I dropped out.

The criteria for judging/creating content are very similar to those employed by the most demanding English teacher you ever had. [NOTE for younger readers: teachers of previous generations of students demanded (a) correct use of grammar; (b) no spelling errors (there was no spell check in those days); (c) adherence to clear rules of structure and composition; and (d) that the students’ feelings, identity, and/or delusions of creative genius not be tolerated, much less encouraged.]

The AI-trainer training was centered on a detailed grid of criteria. The order and relevance of content, whether additional useful facts were found and brought to bear, and whether actual new insights were included were all rated. And there could be absolutely no offensive or abrasive language of any kind. Done right, a commercially customized AI could presumably handle a wide variety of customer interactions quickly and pleasantly, so that the company need not retain a phone bank in Mumbai or Manila staffed with very polite but heavily accented customer-service reps of varying knowledgeability.
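
For illustration only, here is a minimal sketch, in Python, of what that kind of rating grid might look like if written down as code. The dimension names, the 1-5 scale, and the weights are my own assumptions for the example, not the actual rubric any of these sites uses.

# A minimal sketch of the kind of scoring grid an AI-trainer task might use.
# Dimensions, scale, and weights are hypothetical; real vendor rubrics will differ.
from dataclasses import dataclass

@dataclass
class ResponseRating:
    ordering_and_relevance: int  # 1-5: is the content ordered sensibly and on point?
    additional_facts: int        # 1-5: were useful outside facts found and brought to bear?
    new_insight: int             # 1-5: does the response include an actual new insight?
    tone_acceptable: bool        # hard requirement: no offensive or abrasive language

    def overall(self) -> float:
        """Weighted score, zeroed out if the tone requirement fails."""
        if not self.tone_acceptable:
            return 0.0
        return (0.4 * self.ordering_and_relevance
                + 0.3 * self.additional_facts
                + 0.3 * self.new_insight)

# Comparing two sample responses, as in the qualifying tasks:
a = ResponseRating(5, 4, 3, True)
b = ResponseRating(4, 5, 5, False)  # stronger content, but abrasive language sinks it
print(round(a.overall(), 2), round(b.overall(), 2))  # 4.1 0.0

The point of the weighting is simply that the content dimensions trade off against one another, while the no-abrasive-language rule acts as a hard gate rather than just one more score.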

A well-trained AI is like a moderate Republican—instinctively recoiling at abrasive partisan language and too polite to rip semi-popular ideas it has good reason to know are deeply stupid. My impression is that no commercial-grade AI is being trained by studying the works of Howie Carr, Mark Steyn or Kurt Schlichter.

The big question is whether AI will be just a reflection of the Overton Window, shifting as public sentiment changes, or whether it will act to reinforce or cement the existing prevalent outlook and style, or even allow for its manipulation by insiders. In any case, the Turing test will become easier – the machines will be instantly recognizable as polite, armed with the most soothing and seemingly reasonable statements about almost everything while speaking in perfect English, whereas humans will increasingly be ignorant, angry, edgy, poorly educated, and unable to stay on point.


There are 10 comments.

  1. Lunchbox Gerald Coolidge
    @Jose

    Old Bathos: My impression is that no commercial-grade AI is being trained by studying the works of Howie Carr, Mark Steyn or Kurt Schlichter.

    You made me laugh!

    Old Bathos: In any case, the Turing test will become easier – the machines will be instantly recognizable as polite, armed with the most soothing and seemingly reasonable statements about almost everything while speaking in perfect English, whereas humans will increasingly be ignorant, angry, edgy, poorly educated, and unable to stay on point.

    What a thought.  Humans devolving into Morlocks was already a danger before AI, but now…

    • #1
  2. Randy Weivoda Moderator
    @RandyWeivoda

    Old Bathos: My impression is that no commercial-grade AI is being trained by studying the works of Howie Carr, Mark Steyn or Kurt Schlichter.

    Would you want to call up customer service and deal with Kurt Schlichter? It might make for a good comedy sketch, but real-life customers who are having problems with your product aren’t going to enjoy an abrasive personality on the other end of the line.

    • #2
  3. Mark Camp Member
    @MarkCamp

    In a Turing test administered by a minimally educated tester, an AI software instance that was trained to meet the criteria you listed—i.e., which had 9th grade or higher thinking, reading, and writing skills—would be quickly identifiable as the fake human. A pool of 99 humans drawn at random from the population of Americans with a Bachelor’s degree or higher would likewise all soon reveal themselves as the real people.

    I wouldn’t be surprised to see a tester give them a question of moderate complexity and average emotional charge, and catch ‘em all in the first round.

    • #3
  4. Old Bathos Member
    @OldBathos

    Mark Camp (View Comment):

    In a Turing test administered by a minimally educated tester, an AI software instance that was trained to meet the criteria you listed—i.e., which had 9th grade or higher thinking, reading, and writing skills—would be quickly identifiable as the fake human. A pool of 99 humans drawn at random from the population of Americans with a Bachelor’s degree or higher would likewise all soon reveal themselves as the real people.

    I wouldn’t be surprised to see a tester give them a question of moderate complexity and average emotional charge, and catch ‘em all in the first round.

    Prove that Mitt Romney did not generate that response.

    • #4
  5. The Reticulator Member
    @TheReticulator

    Old Bathos (View Comment):

    Mark Camp (View Comment):

    In a Turing test administered by a minimally educated tester, an AI software instance that was trained to meet the criteria you listed—i.e., which had 9th grade or higher thinking, reading, and writing skills—would be quickly identifiable as the fake human. A pool of 99 humans drawn at random from the population of Americans with a Bachelor’s degree or higher would likewise all soon reveal themselves as the real people.

    I wouldn’t be surprised to see a tester give them a question of moderate complexity and average emotional charge, and catch ‘em all in the first round.

    Prove that Mitt Romney did not generate that response.

    Cannot or will not? 

    • #5
  6. Old Bathos Member
    @OldBathos

    The Reticulator (View Comment):

    Old Bathos (View Comment):

    Mark Camp (View Comment):

    In a Turing test administered by a minimally educated tester, an AI software instance that was trained to meet the criteria you listed—i.e., which had 9th grade or higher thinking, reading, and writing skills—would be quickly identifiable as the fake human. A pool of 99 humans drawn at random from the population of Americans with a Bachelor’s degree or higher would likewise all soon reveal themselves as the real people.

    I wouldn’t be surprised to see a tester give them a question of moderate complexity and average emotional charge, and catch ‘em all in the first round.

    Prove that Mitt Romney did not generate that response.

    Cannot or will not?

    • #6
  7. The Reticulator Member
    @TheReticulator

    Old Bathos (View Comment):

    The Reticulator (View Comment):

    Old Bathos (View Comment):

    Mark Camp (View Comment):

    In a Turing test administered by a minimally educated tester, an AI software instance that was trained to meet the criteria you listed—i.e., which had 9th grade or higher thinking, reading, and writing skills—would be quickly identifiable as the fake human. A pool of 99 humans drawn at random from the population of Americans with a Bachelor’s degree or higher would likewise all soon reveal themselves as the real people.

    I wouldn’t be surprised to see a tester give them a question of moderate complexity and average emotional charge, and catch ‘em all in the first round.

    Prove that Mitt Romney did not generate that response.

    Cannot or will not?

    So one way to know for sure that a real person is talking to us rather than some AI bot will be that we are being subjected to insults and ad hominems. 

    • #7
  8. Old Bathos Member
    @OldBathos

    The Reticulator (View Comment):
    So one way to know for sure that a real person is talking to us rather than some AI bot will be that we are being subjected to insults and ad hominems. 

    Exactly.

    • #8
  9. Samuel Block Staff
    @SamuelBlock

    It sounds like you’re saying humanity’s last hope rests on the will of the occasionally—but always reluctantly—decent-hearted punk kid. Verily the bane of wicked spinster pedants and geezers’ lawns alike, but perhaps not so useless as we’d heretofore thunk…

    • #9
  10. Sisyphus Member
    @Sisyphus

    The Reticulator (View Comment):

    Old Bathos (View Comment):

    The Reticulator (View Comment):

    Old Bathos (View Comment):

    Mark Camp (View Comment):

    In a Turing test administered by a minimally educated tester, an AI software instance that was trained to meet the criteria you listed—i.e., which had 9th grade or higher thinking, reading, and writing skills—would be quickly identifiable as the fake human. A pool of 99 humans drawn at random from the population of Americans with a Bachelor’s degree or higher would likewise all soon reveal themselves as the real people.

    I wouldn’t be surprised to see a tester give them a question of moderate complexity and average emotional charge, and catch ‘em all in the first round.

    Prove that Mitt Romney did not generate that response.

    Cannot or will not?

    So one way to know for sure that a real person is talking to us rather than some AI bot will be that we are being subjected to insults and ad hominems.

    You realize that ChatGPT was lying to you. That is the great AI breakthrough.

    • #10