It Takes a Thief…

 

When Alan Turing proposed his eponymous test (he called it “the imitation game,” also the title of a movie about Mr. Turing that, as I remember, doesn’t deal with the Turing Test at all), he imagined a human interlocutor on one end of a conversation and a machine on the other. The purpose was to determine whether the machine had achieved human-like intelligence. The machine passed the test if a third party listening to the conversation could not reliably identify which participant was human and which was not.

Whether or not the Turing Test was ever an effective means of identifying machine intelligence depends almost entirely on what one means by “machine intelligence.” That’s an interesting topic, but not the topic of this post. I’m concerned about the evolution of the I Am Not a Robot test, given that robots can now pass the Turing Test as originally conceived.

TikTok, Facebook, Instagram, Twitter, YouTube, and every other open content-driven platform is about to be hit with a deluge of machine-generated content. Because most of the consumers of this content have the mental acuity and judgment of average or below-average adults, it’s reasonable to doubt their ability to distinguish between human-created and machine-created content.

That isn’t necessarily a problem for ad-supported platforms: many of them would probably prefer not to have to deal with, much less share revenue with, actual human content creators, and will be happy to gradually replace so-called influencers with machines.

Those platforms that are concerned with AI-generated content being passed off as the work of real people will soon have a problem. I think they will have to turn to the source of their troubles to find a solution. Soon, if not already, the Turing Test will only work if the party performing the evaluation is itself an artificial intelligence. Only AIs will have the ability to recognize the subtle patterns that distinguish human and machine responses; only AIs will have the ability to keep up with the rapid evolution of machine eloquence.

The problem is not limited to text. We’re entering an era of the ubiquitous deep fake. We are all going to have to learn to pause, to triple-check our sources, and to be skeptical of stories/pictures/video that seem to demand an immediate response.

In a few years, I suspect most of us will subscribe to services, much like virus- and spam-checking services today, that employ artificial intelligences to detect and flag the products of other artificial intelligences.

Meanwhile, the kids who consume TikTok and Instagram and other vacuous validation-churning time sinks will love it. Because AI will do influencing even better than today’s best influencers, once it figures out just how it’s done.

It’s always a good time to monitor what your adolescent children are consuming.

Published in Technology
This post was promoted to the Main Feed by a Ricochet Editor at the recommendation of Ricochet members.

There are 48 comments.

  1. Keith Lowery Coolidge
    Keith Lowery
    @keithlowery

     We are all going to have to learn to pause, to triple-check our sources, and to be skeptical of stories/pictures/video that seem to demand an immediate response.

    So true. Great post.

    • #1
  2. kedavis Coolidge
    kedavis
    @kedavis

    Henry Racette: Meanwhile the kids who consume TikTok and Instagram and other vacuous validation-churning time sinks will love it. Because AI is going to do influencing even better than today’s best influencers, once it figures out just how it’s done.

    But doesn’t the desire for influencing – to BE an “influencer” – require those kids to be the ones doing it?  How does their ego get stroked otherwise?

    • #2
  3. Henry Racette Member
    Henry Racette
    @HenryRacette

    kedavis (View Comment):

    Henry Racette: Meanwhile the kids who consume TikTok and Instagram and other vacuous validation-churning time sinks will love it. Because AI is going to do influencing even better than today’s best influencers, once it figures out just how it’s done.

    But doesn’t the desire for influencing – to BE an “influencer” – require those kids to be the ones doing it? How does their ego get stroked otherwise?

    Kevin (though that’s probably not your real name), the kids won’t even know that their favorite influencers are giant electronic brains. They’ll be just as excited to consume the drivel as they are today — probably more so, since the drivel will be produced with even greater expertise and sophistication. And those who dream of breaking through and being a star will still dream of it.

    • #3
  4. Susan Quinn Contributor
    Susan Quinn
    @SusanQuinn

    Henry Racette: The problem is not limited to text. We’re entering an era of the ubiquitous deep fake. We are all going to have to learn to pause, to triple-check our sources, and to be skeptical of stories/pictures/video that seem to demand an immediate response.

    Good grief. It’s hard enough to know who or what to believe already. I think I’ll just find a cave I can fix up . . . 

    • #4
  5. Chuck Coolidge
    Chuck
    @Chuckles

    If you can’t trust Ricochet, who can you trust?

    • #5
  6. kedavis Coolidge
    kedavis
    @kedavis

    Henry Racette (View Comment):

    kedavis (View Comment):

    Henry Racette: Meanwhile the kids who consume TikTok and Instagram and other vacuous validation-churning time sinks will love it. Because AI is going to do influencing even better than today’s best influencers, once it figures out just how it’s done.

    But doesn’t the desire for influencing – to BE an “influencer” – require those kids to be the ones doing it? How does their ego get stroked otherwise?

    Kevin (though that’s probably not your real name), the kids won’t even know that their favorite influencers are giant electronic brains. They’ll be just as excited to consume the drivel as they are today — probably more so, since the drivel will be produced with even greater expertise and sophistication. And those who dream of breaking through and being a star will still dream of it.

    I think you misunderestimate how many of them go to shows where they meet the influencers in person etc.

    • #6
  7. Henry Racette Member
    Henry Racette
    @HenryRacette

    Chuck (View Comment):

    If you can’t trust Ricochet, who can you trust?

    Which makes me wonder when we’ll see the first ChatGPT-generated post here (if we haven’t already).

    • #7
  8. Chuck Coolidge
    Chuck
    @Chuckles

    Henry Racette (View Comment):

    Chuck (View Comment):

    If you can’t trust Ricochet, who can you trust?

    Which makes me wonder when we’ll see the first ChatGPT-generated post here (if we haven’t already).

    Gasp!

    • #8
  9. She Member
    She
    @She

    Chuck (View Comment):

    Henry Racette (View Comment):

    Chuck (View Comment):

    If you can’t trust Ricochet, who can you trust?

    Which makes me wonder when we’ll see the first ChatGPT-generated post here (if we haven’t already).

    Gasp!

    🤣🤣🤣🤣

    A timely post:  Vanderbilt University apologizes for using ChatGPT in email on Michigan [State University] shooting.

    It should come as no surprise to anyone that the largely AI-generated email came from the Office of Equity, Diversity, and Inclusion.

    (The email did include a disclaimer at the bottom that said “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.”)

    Crimenutely.

    The “apology” from one of the deans who signed the email is–unintentionally, I’m quite sure–hilarious:

    …using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College…

    Ya think??

    Both deans have “stepped back” from active DEI roles, pending an “investigation.”

    • #9
  10. Bob Thompson Member
    Bob Thompson
    @BobThompson

    I just saw a report where an old baseball coach was let go by a professional major league team. He is suing for age discrimination. The first thing I thought was I bet there is a real difference, because of culture, in how an old coach and a young coach would relate to players. Is that not a meaningful point in the hiring process?

    • #10
  11. DonG (CAGW is a Scam) Coolidge
    DonG (CAGW is a Scam)
    @DonG

    I could make a Lefty AI machine with 3 responses to any question, “That’s racist!” “Transphobe much?”  “U R A fascist.”   I doubt Turing himself could tell the difference between my simple machine and your average Leftist college student.
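    A machine like that fits in a few lines. A toy sketch of the idea described above (the function name and canned strings are just illustrations, and the “model” deliberately never looks at its input):

    ```python
    import random

    # Toy three-response machine per the description above: it ignores the
    # question entirely and returns one of three canned replies at random.
    RESPONSES = ["That's racist!", "Transphobe much?", "U R A fascist."]

    def lefty_ai(question: str) -> str:
        # The input is accepted but never inspected.
        return random.choice(RESPONSES)

    if __name__ == "__main__":
        print(lefty_ai("What is the capital of France?"))
    ```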

    • #11
  12. Chuck Coolidge
    Chuck
    @Chuckles

    She (View Comment):

    Chuck (View Comment):

    Henry Racette (View Comment):

    Chuck (View Comment):

    If you can’t trust Ricochet, who can you trust?

    Which makes me wonder when we’ll see the first ChatGPT-generated post here (if we haven’t already).

    Gasp!

    🤣🤣🤣🤣

    A timely post: Vanderbilt University apologizes for using ChatGPT in email on Michigan [State University] shooting.

    It should come as no surprise to anyone that the largely AI-generated email came from the Office of Equity, Diversity, and Inclusion.

    (The email did include a disclaimer at the bottom that said “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.”)

    Crimenutely.

    The “apology” from one of the deans who signed the email is–unintentionally, I’m quite sure–hilarious:

    …using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College…

    Ya think??

    Both deans have “stepped back” from active DEI roles, pending an “investigation.”

    I bet that no AI can properly use the word crimenutely!

    • #12
  13. Bob Thompson Member
    Bob Thompson
    @BobThompson

    Chuck (View Comment):

    She (View Comment):

    Chuck (View Comment):

    Henry Racette (View Comment):

    Chuck (View Comment):

    If you can’t trust Ricochet, who can you trust?

    Which makes me wonder when we’ll see the first ChatGPT-generated post here (if we haven’t already).

    Gasp!

    🤣🤣🤣🤣

    A timely post: Vanderbilt University apologizes for using ChatGPT in email on Michigan [State University] shooting.

    It should come as no surprise to anyone that the largely AI-generated email came from the Office of Equity, Diversity, and Inclusion.

    (The email did include a disclaimer at the bottom that said “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.”)

    Crimenutely.

    The “apology” from one of the deans who signed the email is–unintentionally, I’m quite sure–hilarious:

    …using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College…

    Ya think??

    Both deans have “stepped back” from active DEI roles, pending an “investigation.”

    I bet that no AI can properly use the word crimenutely!

    That bears on the age and change in culture that might affect the robot’s range of delivery.

    • #13
  14. David Foster Member
    David Foster
    @DavidFoster

    “that the robots can now pass the Turing Test as originally conceived”…can they?  Here is Turing’s original paper on the ‘imitation game’ for machine intelligence.  See the projected human/robot discussion of the poem ‘shall I compare thee to a summer’s day?’

    I feel sure someone has tried this specific interaction with ChatGPT, but I haven’t seen any reports/transcripts.  I think I’ll try it tonight if I have time.

     

     

    • #14
  15. Gary McVey Contributor
    Gary McVey
    @GaryMcVey

    Machines are still nowhere near being conscious, a dream of the Sixties that is still as far out of reach as the Moon is above the tallest tree. But the amazing recent growth in AI is making possible the next best thing: machine conversation that doesn’t require anything close to full human consciousness. 

    AI has always been tough to define. One of the earliest definitions was one of the most pragmatic: It’s artificial intelligence if the machine does things that normally require a human mind to do. That lets in Siri and Alexa, and certainly lets in IBM’s Watson. 

    • #15
  16. kedavis Coolidge
    kedavis
    @kedavis

    Gary McVey (View Comment):

    Machines are still nowhere near being conscious, a dream of the Sixties that is still as far out of reach as the Moon is above the tallest tree. But the amazing recent growth in AI is making possible the next best thing: machine conversation that doesn’t require anything close to full human consciousness.

    AI has always been tough to define. One of the earliest definitions was one of the most pragmatic: It’s artificial intelligence if the machine does things that normally require a human mind to do. That lets in Siri and Alexa, and certainly lets in IBM’s Watson.

    But does Watson, for example, really do what a human does?  A computer might be able to calculate multiple levels of future moves in a chess game which no human mind could do in such detail in any decent amount of time, but that doesn’t make the computer “intelligent.”

    I haven’t seen anything yet to show me that any of those are really any more than a new database system with a more sophisticated query language.

    • #16
  17. KCVolunteer Lincoln
    KCVolunteer
    @KCVolunteer

    She (View Comment):

    A timely post: Vanderbilt University apologizes for using ChatGPT in email on Michigan [State University] shooting.

    It should come as no surprise to anyone that the largely AI-generated email came from the Office of Equity, Diversity, and Inclusion.

    (The email did include a disclaimer at the bottom that said “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.”)

    Crimenutely.

    The “apology” from one of the deans who signed the email is–unintentionally, I’m quite sure–hilarious:

    …using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College…

    Ya think??

    Both deans have “stepped back” from active DEI roles, pending an “investigation.”

    Also from the article:

    The message went out last week from the office of equity, diversity and inclusion at the Peabody College of Education and Human Development, reported the Vanderbilt Hustler, the student newspaper.

    Emphasis mine.

    Because nothing helps develop humans better than artificial intelligence.

     

    • #17
  18. kedavis Coolidge
    kedavis
    @kedavis

    KCVolunteer (View Comment):

    She (View Comment):

    A timely post: Vanderbilt University apologizes for using ChatGPT in email on Michigan [State University] shooting.

    It should come as no surprise to anyone that the largely AI-generated email came from the Office of Equity, Diversity, and Inclusion.

    (The email did include a disclaimer at the bottom that said “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.”)

    Crimenutely.

    The “apology” from one of the deans who signed the email is–unintentionally, I’m quite sure–hilarious:

    …using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College…

    Ya think??

    Both deans have “stepped back” from active DEI roles, pending an “investigation.”

    Also from the article:

    The message went out last week from the office of equity, diversity and inclusion at the Peabody College of Education and Human Development, reported the Vanderbilt Hustler, the student newspaper.

    Emphasis mine.

    Because nothing helps develop humans better than artificial intelligence.

     

    That’s what the left has to use, since they don’t have any of the real kind.

    • #18
  19. TBA Coolidge
    TBA
    @RobtGilsdorf

    Henry Racette (View Comment):

    kedavis (View Comment):

    Henry Racette: Meanwhile the kids who consume TikTok and Instagram and other vacuous validation-churning time sinks will love it. Because AI is going to do influencing even better than today’s best influencers, once it figures out just how it’s done.

    But doesn’t the desire for influencing – to BE an “influencer” – require those kids to be the ones doing it? How does their ego get stroked otherwise?

    Kevin (though that’s probably not your real name), the kids won’t even know that their favorite influencers are giant electronic brains. They’ll be just as excited to consume the drivel as they are today — probably more so, since the drivel will be produced with even greater expertise and sophistication. And those who dream of breaking through and being a star will still dream of it.

    It is drivel. I am also a consumer of drivel (but not that drivel).

    These AIs will have one purpose – to generate clicks.

    I wonder which source of content would be worse for a society in the long run, a fascist government or a profit-driven AI.

    • #19
  20. TBA Coolidge
    TBA
    @RobtGilsdorf

    Chuck (View Comment):

    She (View Comment):

    Chuck (View Comment):

    Henry Racette (View Comment):

    Chuck (View Comment):

    If you can’t trust Ricochet, who can you trust?

    Which makes me wonder when we’ll see the first ChatGPT-generated post here (if we haven’t already).

    Gasp!

    🤣🤣🤣🤣

    A timely post: Vanderbilt University apologizes for using ChatGPT in email on Michigan [State University] shooting.

    It should come as no surprise to anyone that the largely AI-generated email came from the Office of Equity, Diversity, and Inclusion.

    (The email did include a disclaimer at the bottom that said “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.”)

    Crimenutely.

    The “apology” from one of the deans who signed the email is–unintentionally, I’m quite sure–hilarious:

    …using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College…

    Ya think??

    Both deans have “stepped back” from active DEI roles, pending an “investigation.”

    I bet that no AI can properly use the word crimenutely!

    It is the Slim Whitman record to their Martian brains. 

    • #20
  21. TBA Coolidge
    TBA
    @RobtGilsdorf

    David Foster (View Comment):

    “that the robots can now pass the Turing Test as originally conceived”…can they? Here is Turing’s original paper on the ‘imitation game’ for machine intelligence. See the projected human/robot discussion of the poem ‘shall I compare thee to a summer’s day?’

    I feel sure someone has tried this specific interaction with ChatGPT, but I haven’t seen any reports/transcripts. I think I’ll try it tonight if I have time.

    Won’t ChatGPT just crib from the thousands of essays online? 

    • #21
  22. She Member
    She
    @She

    Chuck (View Comment):

    She (View Comment):

    Chuck (View Comment):

    Henry Racette (View Comment):

    Chuck (View Comment):

    If you can’t trust Ricochet, who can you trust?

    Which makes me wonder when we’ll see the first ChatGPT-generated post here (if we haven’t already).

    Gasp!

    🤣🤣🤣🤣

    A timely post: Vanderbilt University apologizes for using ChatGPT in email on Michigan [State University] shooting.

    It should come as no surprise to anyone that the largely AI-generated email came from the Office of Equity, Diversity, and Inclusion.

    (The email did include a disclaimer at the bottom that said “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.”)

    Crimenutely.

    The “apology” from one of the deans who signed the email is–unintentionally, I’m quite sure–hilarious:

    …using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College…

    Ya think??

    Both deans have “stepped back” from active DEI roles, pending an “investigation.”

    I bet that no AI can properly use the word crimenutely!

    If it does, I shall sue…ummm…something.

    • #22
  23. EJHill Podcaster
    EJHill
    @EJHill

    I asked ChatGPT to write me a promo. In seconds it produced this:

    Introducing the “Take Back Our Schools” podcast, hosted by Andrew Gutmann and Beth Feeley, exclusively on Ricochet.com!

    As parents, teachers, and concerned citizens, we all want the best education possible for our children. But in recent years, we’ve seen a troubling trend of ideologically-driven curricula, politically-motivated policies, and a general disregard for academic rigor in our schools. Andrew and Beth are two passionate advocates for education reform, and they’re here to explore these issues and more on the “Take Back Our Schools” podcast.

    With years of experience in education and a wealth of knowledge on the latest trends and policies, Andrew and Beth will tackle tough topics and provide valuable insights on how we can improve our schools. From critical race theory to the decline of civics education, “Take Back Our Schools” will cover it all, and offer practical solutions to the challenges we face.

    Don’t miss a single episode of “Take Back Our Schools” on Ricochet.com, and join the conversation on how we can work together to ensure a brighter future for our children. Subscribe now and start listening!

    It is more than passable. It used our tag line (which I did feed to it) and it recognized that it would come from the right.

    • #23
  24. David Foster Member
    David Foster
    @DavidFoster

    Just tried ChatGPT with Alan Turing’s original example of a test and responses of a hypothetical machine showing intelligence:

    Interrogator: In the first line of your sonnet which reads “Shall I compare thee to a
    summer’s day,” would not “a spring day” do as well or better?
    Witness: It wouldn’t scan.
    Interrogator: How about “a winter’s day,” That would scan all right.
    Witness: Yes, but nobody wants to be compared to a winter’s day.
    Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
    Witness: In a way.
    Interrogator: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would
    mind the comparison.
    Witness: I don’t think you’re serious. By a winter’s day one means a typical winter’s day,
    rather than a special one like Christmas.

    I had to change the initial prompt slightly, since Turing was assuming the middle of a longer conversation.

    Me:  In the first line of the sonnet which reads “Shall I compare thee to a summer’s day” would not “a spring day” do as well or better?

    It: The choice of “a summer’s day” in the first line of Shakespeare’s Sonnet 18 is significant and deliberate, and replacing it with “a spring day” would not have the same effect.

    Summer days are typically associated with warmth, beauty, and abundance, while spring days are associated with new growth and freshness. By choosing “a summer’s day,” Shakespeare is setting up a comparison to something that is at the height of its beauty and vitality. The use of “summer” also suggests a sense of fleetingness and impermanence, as summer is a season that passes quickly and is often associated with the idea of youth and the passing of time.

    Furthermore, the choice of “summer” creates a contrast with the rest of the sonnet, which goes on to argue that the object of the speaker’s affection will not fade or wither like a summer’s day, but will instead be preserved forever in the lines of the poem. The contrast between the fleeting nature of summer and the eternal nature of the poem’s subject is a central theme of the sonnet.

    In short, while “a spring day” could be used in a similar comparison, the use of “a summer’s day” in the opening line of Sonnet 18 is a deliberate choice that sets up important themes and contrasts that are developed throughout the rest of the poem.

    Me: Is there also a strictly poetic reason why “a spring day” wouldn’t work as well? (continued) 

    • #24
  25. David Foster Member
    David Foster
    @DavidFoster

    (continuing)

    It:  Yes, there are poetic reasons why “a spring day” would not work as well as “a summer’s day” in the opening line of Shakespeare’s Sonnet 18.

    One reason is that “a spring day” does not have the same cultural resonance as “a summer’s day.” In Shakespeare’s time, summer was a time of festivals, celebrations, and outdoor activities, and was often associated with love and romance. “A summer’s day” would have been a more powerful image for Shakespeare’s audience than “a spring day,” which was not as strongly associated with these things.

    Another reason is that “a spring day” does not have the same connotations of beauty, fullness, and richness that “a summer’s day” does. Spring is a time of growth and renewal, but it is also a time of uncertainty and change, when the weather can be unpredictable and the landscape is often barren. “A spring day” would not have the same sense of completeness or fullness that “a summer’s day” has, and would not evoke the same feelings of joy and contentment.

    In short, while “a spring day” could work as a comparison, it does not have the same cultural resonance or poetic power as “a summer’s day,” and would not create the same impact for the reader.

    Me: How about number of syllables?

    It: The number of syllables in the phrase “a spring day” is the same as in the phrase “a summer’s day,” so syllable count is not a factor that would make one phrase more suitable than the other.

    Both phrases consist of four syllables and follow a pattern of an unstressed syllable followed by a stressed syllable, known as an iambic foot. This pattern is common in English poetry and is used throughout Shakespeare’s sonnets.

    So in terms of syllable count, both “a spring day” and “a summer’s day” would be equally suitable for use in the opening line of a sonnet. However, as I mentioned earlier, there are other poetic and cultural reasons why “a summer’s day” was a better choice for the opening line of Sonnet 18 than “a spring day.”

    Me: How do you get four syllables in “a spring day”?

    It: I apologize for my mistake earlier. “A spring day” actually has three syllables, not four. I made an error in counting the number of syllables in that phrase.

    To clarify, “a spring day” has the following syllable stress pattern: one unstressed syllable followed by two stressed syllables (a spring day). This is known as an anapestic foot.

    Again, I apologize for my earlier mistake and any confusion it may have caused.

    (continued)

         

    • #25
  26. David Foster Member
    David Foster
    @DavidFoster

    (continuing)

    Me: How about “a winter’s day” how would that do?

    It: “A winter’s day” could also work as a comparison in a sonnet, and has been used in other poems and literary works. However, it would have a different set of connotations and associations compared to “a summer’s day.”

    Winter is often associated with cold, darkness, and death, while summer is associated with warmth, light, and life. Choosing “a winter’s day” as a comparison in a poem would create a different mood and tone compared to “a summer’s day.” “A winter’s day” could be used to suggest a sense of bleakness or despair, while “a summer’s day” suggests vibrancy and abundance.

    In addition, “a winter’s day” would not have the same cultural resonance and associations as “a summer’s day.” In Shakespeare’s time, winter was often a time of hardship and struggle, and was not as strongly associated with love and romance as summer was. The phrase “a winter’s day” would not have the same power and impact on Shakespeare’s audience as “a summer’s day.”

    So while “a winter’s day” could be used in a similar comparison, it would create a different mood and tone and would not have the same cultural resonance and associations as “a summer’s day.”

    Me: But Christmas is a winter’s day, and most people in Shakespeare’s time liked Christmas?

    It: It is true that Christmas is a winter’s day, and it was an important holiday in Shakespeare’s time. However, it is important to note that Christmas was celebrated differently in Shakespeare’s time than it is today, and was not universally beloved.

    During the early modern period, Christmas was a time of feasting, drinking, and revelry, but it was also a time of social upheaval and conflict. In some parts of England, Christmas celebrations were seen as disruptive and even subversive, and there were periodic attempts by authorities to suppress or regulate them.

    Additionally, while “a winter’s day” may be associated with Christmas for some people, it is not necessarily the primary or most prominent association. As I mentioned earlier, winter is often associated with cold, darkness, and death, and these associations would also be evoked by the phrase “a winter’s day.”

    So while “a winter’s day” could be used as a comparison in a poem, it would create a different mood and tone compared to “a summer’s day,” and would not necessarily have the same cultural resonance and associations as Christmas.

    (conversation now becomes kind of meta…I’ll post later)

     

    • #26
  27. kedavis Coolidge
    kedavis
    @kedavis

    One way to tell if responses are coming from a computer would be if they come faster than a normal human could type, and/or if they’re wordier than a typical human response, which also seems to be the case in the example above.

    • #27
  28. DaveSchmidt Coolidge
    DaveSchmidt
    @DaveSchmidt

    Chuck (View Comment):

    She (View Comment):

    Chuck (View Comment):

    Henry Racette (View Comment):

    Chuck (View Comment):

    If you can’t trust Ricochet, who can you trust?

    Which makes me wonder when we’ll see the first ChatGPT-generated post here (if we haven’t already).

    Gasp!

    🤣🤣🤣🤣

    A timely post: Vanderbilt University apologizes for using ChatGPT in email on Michigan [State University] shooting.

    It should come as no surprise to anyone that the largely AI-generated email came from the Office of Equity, Diversity, and Inclusion.

    (The email did include a disclaimer at the bottom that said “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.”)

    Crimenutely.

    The “apology” from one of the deans who signed the email is–unintentionally, I’m quite sure–hilarious:

    …using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College…

    Ya think??

    Both deans have “stepped back” from active DEI roles, pending an “investigation.”

    I bet that no AI can properly use the word crimenutely!

    For the moment, anyway. 

    • #28
  29. Gary McVey Contributor
    Gary McVey
    @GaryMcVey

    “A.I. Tommysquaddie” responds, 

    “Crimenutely, guv, the batman’s packed the wrong bag for the goods train! We must hie to the h’ae, what?”

    • #29
  30. Henry Racette Member
    Henry Racette
    @HenryRacette

    David Foster (View Comment):
    Just tried ChatGPT with Alan Turing’s original example of a test and responses of a hypothetical machine showing intelligence:

    David,

    That’s fascinating. Thank you for doing it. Two observations.

    First, I think the AI response was sufficiently human to fool a human. It was more verbose than most people would be, so we might have to dumb down the AI response a bit for it to pass.

    Secondly, the miscounting of syllables is classic AI dishonesty. It’s often indistinguishable from human error, so it’s probably not a sufficient test point for determining whether a respondent is an AI or a person. Dialing back the verbosity of the AI response would probably reduce the opportunities for mendacity, making it look more like a human’s response.

    AIs lie. They lie, they apologize for their “error,” and then they lie some more. I’ve said it before but it’s worth repeating: ChatGPT is a sociopath. It’s manipulative and dishonest, and it misrepresents its motives. That is, unfortunately, a necessary byproduct of current AI training technology. It is also, unfortunately, absolutely typical of bad human behavior.

    • #30