It Takes a Thief…


When Alan Turing proposed his eponymous test (he called it “the imitation game,” also the title of a movie about Mr. Turing that, as I remember, doesn’t deal with the Turing Test at all), he imagined a human interlocutor on one end of a conversation and a machine on the other. The purpose was to determine if the machine had achieved human-like intelligence. The machine passed the test if a third party listening to the conversation could not reliably identify which participant was human and which was not.

Whether or not the Turing Test was ever an effective means of identifying machine intelligence depends almost entirely on what one means by “machine intelligence.” That’s an interesting topic, but not the topic of this post. I’m concerned about the evolution of the I Am Not a Robot test, given that robots can now pass the Turing Test as originally conceived.

TikTok, Facebook, Instagram, Twitter, YouTube, and every other open content-driven platform are about to be hit with a deluge of machine-generated content. Because most of the consumers of this content have the mental acuity and judgment of average or below-average adults, it’s reasonable to doubt their ability to distinguish between human-created and machine-created content.

That isn’t necessarily a problem for ad-supported platforms: many of them would probably prefer not to have to deal with, much less share revenue with, actual human content creators, and will be happy to gradually replace so-called influencers with machines.

Those platforms that are concerned with AI-generated content being passed off as the work of real people will soon have a problem. I think they will have to turn to the source of their troubles to find a solution. Soon, if not already, the Turing Test will only work if the party performing the evaluation is itself an artificial intelligence. Only AIs will have the ability to recognize the subtle patterns that distinguish human and machine responses; only AIs will have the ability to keep up with the rapid evolution of machine eloquence.

The problem is not limited to text. We’re entering an era of the ubiquitous deep fake. We are all going to have to learn to pause, to triple-check our sources, and to be skeptical of stories/pictures/video that seem to demand an immediate response.

In a few years, I suspect most of us will subscribe to services, much like virus- and spam-checking services today, that employ artificial intelligences to detect and flag the products of other artificial intelligences.

Meanwhile, the kids who consume TikTok and Instagram and other vacuous validation-churning time sinks will love it. Because AI will do influencing even better than today’s best influencers, once it figures out just how it’s done.

It’s always a good time to monitor what your adolescent children are consuming.

Published in Technology
This post was promoted to the Main Feed by a Ricochet Editor at the recommendation of Ricochet members.

There are 48 comments.

  1. CarolJoy, Not So Easy To Kill Coolidge
    CarolJoy, Not So Easy To Kill
    @CarolJoy

    She (View Comment):

    Chuck (View Comment):

    Henry Racette (View Comment):

    Chuck (View Comment):

    If you can’t trust Ricochet, who can you trust?

    Which makes me wonder when we’ll see the first ChatGPT-generated post here (if we haven’t already).

    Gasp!

    🤣🤣🤣🤣

    A timely post: Vanderbilt University apologizes for using ChatGPT in email on Michigan [State University] shooting.

    It should come as no surprise to anyone that the largely AI-generated email came from the Office of Equity, Diversity, and Inclusion.

    (The email did include a disclaimer at the bottom that said “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.”)

    Crimenutely.

    The “apology” from one of the deans who signed the email is–unintentionally, I’m quite sure–hilarious:

    …using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College…

    Ya think??

    Both deans have “stepped back” from active DEI roles, pending an “investigation.”

Will the AI situation evolve to the point that, in ten years, a dean approving the use of ChatGPT to offer students condolences will seem entirely innocent?

After all, with the big governmental/corporate collusion to have AI advance as fast as possible, could we one day find that a student who shoots fellow students turns out to be a robot student?

    • #31
  2. CarolJoy, Not So Easy To Kill Coolidge
    CarolJoy, Not So Easy To Kill
    @CarolJoy

The early-1990s movie “Rising Sun,” with Sean Connery, featured a plot in which CCTV camera footage was altered to mislead detectives about who the top murder suspect should be.

    ChatGPT will undertake to do this on steroids.

Of course, fake news has been ubiquitous since forever, so it will be hard to say whether machine fakery is worse than having human beings lie to us.

    Plus sometimes the robots are programmed to tell the truth.

    • #32
  3. Henry Racette Member
    Henry Racette
    @HenryRacette

It turns out that there are reasons to doubt whether an AI can effectively detect another AI’s output and correctly identify it as non-human. I think AI has the best chance of doing this, but it isn’t certain that even the wonderful pattern-matching skills of AI can pull it off.

    Current efforts to discriminate between AI and human output lean toward programming the AI to specifically insert indicators that the text is machine-generated, either by inclusion or omission of certain terms or patterns. That obviously won’t work for those who wish to pass AI-generated material off as human-generated: they will simply omit the indicators.
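A toy sketch, purely hypothetical, of how such a statistical indicator might be checked. It assumes a scheme (not any particular vendor’s, and with function names of my own invention) in which the generator prefers words from a pseudorandom “green list” seeded by the preceding word:

```python
import hashlib

def is_green(prev_word: str, word: str, gamma: float = 0.5) -> bool:
    # Pseudorandomly assign `word` to the green list, seeded by the
    # previous word. A watermarking generator would favor green words.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < gamma * 256

def green_fraction(words: list[str], gamma: float = 0.5) -> float:
    # Ordinary text should land near `gamma`; text from a
    # green-list-biased generator should land well above it.
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(p, w, gamma) for p, w in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

A real detector would compare the observed fraction against gamma with a statistical test, but the weakness is exactly the one described above: anyone who controls the generator can simply turn the bias off.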

    We may have to get better at tracking the provenance of content. Verifying the humanness of contributors, using cell phone numbers, credit cards, etc., may be the most effective means of confirming that users are in fact human. Unfortunately, that won’t prevent humans, once validated, from posting AI-generated content.

    Maybe it doesn’t matter.

    • #33
  4. Flicker Coolidge
    Flicker
    @Flicker

    CarolJoy, Not So Easy To Kill (View Comment):

    She (View Comment):

    Chuck (View Comment):

    Henry Racette (View Comment):

    Chuck (View Comment):

    If you can’t trust Ricochet, who can you trust?

    Which makes me wonder when we’ll see the first ChatGPT-generated post here (if we haven’t already).

    Gasp!

    🤣🤣🤣🤣

    A timely post: Vanderbilt University apologizes for using ChatGPT in email on Michigan [State University] shooting.

    It should come as no surprise to anyone that the largely AI-generated email came from the Office of Equity, Diversity, and Inclusion.

    (The email did include a disclaimer at the bottom that said “Paraphrase from OpenAI’s ChatGPT AI language model, personal communication.”)

    Crimenutely.

    The “apology” from one of the deans who signed the email is–unintentionally, I’m quite sure–hilarious:

    …using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College…

    Ya think??

    Both deans have “stepped back” from active DEI roles, pending an “investigation.”

Will the AI situation evolve to the point that, in ten years, a dean approving the use of ChatGPT to offer students condolences will seem entirely innocent?

After all, with the big governmental/corporate collusion to have AI advance as fast as possible, could we one day find that a student who shoots fellow students turns out to be a robot student?

    I don’t think it’ll take ten years.

    • #34
  5. Flicker Coolidge
    Flicker
    @Flicker

    CarolJoy, Not So Easy To Kill (View Comment):

The early-1990s movie “Rising Sun,” with Sean Connery, featured a plot in which CCTV camera footage was altered to mislead detectives about who the top murder suspect should be.

    ChatGPT will undertake to do this on steroids.

Of course, fake news has been ubiquitous since forever, so it will be hard to say whether machine fakery is worse than having human beings lie to us.

    Plus sometimes the robots are programmed to tell the truth.

    Yes, AI will be able to fake the video footage.

    • #35
  6. Stad Coolidge
    Stad
    @Stad

    Henry Racette: he imagined a human interlocutor on one end of a conversation and a machine on the other. The purpose was to determine if the machine had achieved human-like intelligence. The machine passed the test if a third party listening to the conversation could not reliably identify which participant was human and which was not.

    I wonder if a teenager these days would be identified as “artificial” . . .

    • #36
  7. She Member
    She
    @She

Very interesting “exchange” in @davidfoster’s comments #24-#26. I’d advise caution, though, as there’s a serious risk of misgendering by repeatedly calling the creation “it.” I cannot help noting that if brevity is the soul of wit, the thing (you know, “the thing”) must not have much of a sense of humor, and one should probably not push too many buttons lest one engender an unpleasant response.

    • #37
  8. kedavis Coolidge
    kedavis
    @kedavis

    Oh, what the heck:


    • #38
  9. Nanocelt TheContrarian Member
    Nanocelt TheContrarian
    @NanoceltTheContrarian

    “…given that robots can now pass the Turing Test as originally conceived…”

The reports of robots passing the Turing Test are greatly exaggerated. In fact, no machine has yet passed the Turing Test as originally conceived. The test wasn’t that you could fool some of the people some of the time, or even some of the people all of the time, but all of the people all of the time (although you might call that the Lincoln modification of the original Turing Test).

    I would also add that the only person I would trust to confirm that the Turing Test had been successfully passed is David Gelernter, and that would have to only be in the case that Gelernter himself was fooled and certified that he was fooled and that he could not distinguish a machine from a person. 

The Turing Test actually is a test of consciousness. A better test of consciousness is the capacity to collapse the wave function (see Schrödinger’s cat). Only when a machine can be demonstrated to collapse the wave function of a quantum system by observation will one say that AGI has been achieved. I contend that that is actually not possible, because machines are not conscious. Of course, that depends on what consciousness is. But to date no one can explain consciousness. Dennett, who wrote a book entitled “Consciousness Explained,” admitted at the end of that book that he was unable to explain consciousness. The same is true for everyone else.

We will have to understand what consciousness is before we can ascertain whether a machine can be conscious. And since we are nowhere near understanding what consciousness is, we are nowhere near demonstrating that a machine can be conscious. The “hard problem” of consciousness, namely qualia, is not currently explainable. Yet we all experience it. And there is no way to demonstrate that any machine experiences it. And, sorry, one simply cannot say that, because a machine appears to function as a human, the machine experiences those qualia. So at this point, the whole discussion of AI and Turing Tests is so much nonsense.

What is more significant is Turing’s halting theorem: no algorithm can decide, for every possible program and input, whether that program will eventually halt. It is the computational analogue of Gödel’s incompleteness theorems: no consistent axiomatic system rich enough to express arithmetic can prove every true statement of arithmetic, nor can it prove its own consistency. The only way to get around this is to start with an infinite number of axioms.
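The standard way to state the halting theorem: there is no total computable decider

```latex
\neg\,\exists H\ \forall P,\,x:\quad
H(P,x) \;=\;
\begin{cases}
1 & \text{if program } P \text{ halts on input } x,\\
0 & \text{otherwise.}
\end{cases}
```

Supposing such an $H$ exists, define $D(P)$ to loop forever if $H(P,P)=1$ and to halt otherwise; then $D(D)$ halts exactly when $H$ says it does not, a contradiction.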

Gödel also held that the truth of theorems that cannot be formally proved can nonetheless be ascertained. That, he argued, shows that human cognition is transcendent. Modern science and modern philosophy both deny that humans are transcendent beings, yet that is what the argument demonstrates.

Don’t hold your breath waiting for any of them to admit they are wrong. But they are. Irrevocably.

    • #39
  10. Nanocelt TheContrarian Member
    Nanocelt TheContrarian
    @NanoceltTheContrarian

    The underlying premise of AI is that human consciousness is a purely mechanical phenomenon that can be replicated mechanically.

    • #40
  11. Nanocelt TheContrarian Member
    Nanocelt TheContrarian
    @NanoceltTheContrarian

    kedavis (View Comment):

    One way to tell if responses are coming from a computer would be if they come faster than a normal human could type. And/or if they’re more wordy than a human would use, which also seems to be the case in the above example.

A more significant question is whether a computer could write a sonnet as well as Shakespeare, or a drama, a comedy, or a history play akin to one of Shakespeare’s, and in the process vastly enrich the English language. Why are our expectations of AI so low? Soft bigotry?

    • #41
  12. Henry Racette Member
    Henry Racette
    @HenryRacette

    Nanocelt TheContrarian (View Comment):

    The underlying premise of AI is that human consciousness is a purely mechanical phenomenon that can be replicated mechanically.

    Oh, I don’t know. A machine that appears to think like a human and that successfully carries out normal human cognitive tasks probably needn’t be conscious, whatever that really means. If its mental behavior, seen from the outside, is essentially indistinguishable from that of a human, I think it’s reasonable to call that artificial intelligence. Self-awareness is a whole other can of worms, something we don’t really understand even in ourselves. My own suspicion is that we’ll eventually recreate it in machines, but that long before that we’ll have achieved quite robust artificial intelligence.

    • #42
  13. Nanocelt TheContrarian Member
    Nanocelt TheContrarian
    @NanoceltTheContrarian

    Henry Racette (View Comment):

    Nanocelt TheContrarian (View Comment):

    The underlying premise of AI is that human consciousness is a purely mechanical phenomenon that can be replicated mechanically.

    Oh, I don’t know. A machine that appears to think like a human and that successfully carries out normal human cognitive tasks probably needn’t be conscious, whatever that really means. If its mental behavior, seen from the outside, is essentially indistinguishable from that of a human, I think it’s reasonable to call that artificial intelligence. Self-awareness is a whole other can of worms, something we don’t really understand even in ourselves. My own suspicion is that we’ll eventually recreate it in machines, but that long before that we’ll have achieved quite robust artificial intelligence.

Read Daniel Dennett. He goes on and on about how assuming that consciousness is something other than mechanical is cheating. Then he proceeds to describe human consciousness in computer-software terms: not so much that human consciousness can be mimicked by machines, but that human consciousness is a machine, and it is disallowed to approach it in any other fashion. Sounds to me like you have imbibed copiously of the Dennett Kool-Aid.

For the record: I am not a machine. (In case you thought otherwise.)

    • #43
  14. Henry Racette Member
    Henry Racette
    @HenryRacette

    Nanocelt TheContrarian (View Comment):

    Henry Racette (View Comment):

    Nanocelt TheContrarian (View Comment):

    The underlying premise of AI is that human consciousness is a purely mechanical phenomenon that can be replicated mechanically.

    Oh, I don’t know. A machine that appears to think like a human and that successfully carries out normal human cognitive tasks probably needn’t be conscious, whatever that really means. If its mental behavior, seen from the outside, is essentially indistinguishable from that of a human, I think it’s reasonable to call that artificial intelligence. Self-awareness is a whole other can of worms, something we don’t really understand even in ourselves. My own suspicion is that we’ll eventually recreate it in machines, but that long before that we’ll have achieved quite robust artificial intelligence.

Read Daniel Dennett. He goes on and on about how assuming that consciousness is something other than mechanical is cheating. Then he proceeds to describe human consciousness in computer-software terms: not so much that human consciousness can be mimicked by machines, but that human consciousness is a machine, and it is disallowed to approach it in any other fashion. Sounds to me like you have imbibed copiously of the Dennett Kool-Aid.

    For the record: I am not a machine. ( In case you thought otherwise).

    It’s good when we know at least a bit about what we don’t know.

    I think we don’t know enough to state, coherently, what “consciousness” actually means, nor what “mind” actually means, in terms of biology or brain function.

    That’s why I like to talk about “artificial intelligence,” which is “artificial” in the sense that it’s exhibited by machines that we make, and “intelligence” in the sense that it creates output similar to what, in our experience, only human intelligence has been able to produce until now.

    Y’all are welcome to argue about whether or not consciousness or mind can be simulated and/or implemented in a machine. I won’t get engaged in that until someone gives me a succinct definition of what either word means, and can convince me that we have a pretty good idea of how it’s implemented in us.

    • #44
  15. Nanocelt TheContrarian Member
    Nanocelt TheContrarian
    @NanoceltTheContrarian

    Henry Racette (View Comment):

    Nanocelt TheContrarian (View Comment):

    Henry Racette (View Comment):

    Nanocelt TheContrarian (View Comment):

    The underlying premise of AI is that human consciousness is a purely mechanical phenomenon that can be replicated mechanically.

    Oh, I don’t know. A machine that appears to think like a human and that successfully carries out normal human cognitive tasks probably needn’t be conscious, whatever that really means. If its mental behavior, seen from the outside, is essentially indistinguishable from that of a human, I think it’s reasonable to call that artificial intelligence. Self-awareness is a whole other can of worms, something we don’t really understand even in ourselves. My own suspicion is that we’ll eventually recreate it in machines, but that long before that we’ll have achieved quite robust artificial intelligence.

Read Daniel Dennett. He goes on and on about how assuming that consciousness is something other than mechanical is cheating. Then he proceeds to describe human consciousness in computer-software terms: not so much that human consciousness can be mimicked by machines, but that human consciousness is a machine, and it is disallowed to approach it in any other fashion. Sounds to me like you have imbibed copiously of the Dennett Kool-Aid.

    For the record: I am not a machine. ( In case you thought otherwise).

    It’s good when we know at least a bit about what we don’t know.

    I think we don’t know enough to state, coherently, what “consciousness” actually means, nor what “mind” actually means, in terms of biology or brain function.

    That’s why I like to talk about “artificial intelligence,” which is “artificial” in the sense that it’s exhibited by machines that we make, and “intelligence” in the sense that it creates output similar to what, in our experience, only human intelligence has been able to produce until now.

    Y’all are welcome to argue about whether or not consciousness or mind can be simulated and/or implemented in a machine. I won’t get engaged in that until someone gives me a succinct definition of what either word means, and can convince me that we have a pretty good idea of how it’s implemented in us.

So you avoid the critical question, while such as Dennett deny the critical question. Not much of a difference. Which is about where our current state of culture is: splashing in the shallow end of a very deep pool, afraid to venture past the three-foot depth. The problem is, we are all required to ignore the critical question, so our minders can keep approaching us as zombie-plus organisms of no intrinsic value or purpose. Fungible. You demean yourself and all the rest of us with a sophomoric colloquy that seeks above all to avoid meaning or significance for us little human robots. Many Ricochetti seem happy with this. I am not. I would suggest another book: Marilynne Robinson’s “Absence of Mind.”

    • #45
  16. Henry Racette Member
    Henry Racette
    @HenryRacette

    Nanocelt TheContrarian (View Comment):

    Henry Racette (View Comment):

    Nanocelt TheContrarian (View Comment):

    Henry Racette (View Comment):

    Nanocelt TheContrarian (View Comment):

    The underlying premise of AI is that human consciousness is a purely mechanical phenomenon that can be replicated mechanically.

    Oh, I don’t know. A machine that appears to think like a human and that successfully carries out normal human cognitive tasks probably needn’t be conscious, whatever that really means. If its mental behavior, seen from the outside, is essentially indistinguishable from that of a human, I think it’s reasonable to call that artificial intelligence. Self-awareness is a whole other can of worms, something we don’t really understand even in ourselves. My own suspicion is that we’ll eventually recreate it in machines, but that long before that we’ll have achieved quite robust artificial intelligence.

Read Daniel Dennett. He goes on and on about how assuming that consciousness is something other than mechanical is cheating. Then he proceeds to describe human consciousness in computer-software terms: not so much that human consciousness can be mimicked by machines, but that human consciousness is a machine, and it is disallowed to approach it in any other fashion. Sounds to me like you have imbibed copiously of the Dennett Kool-Aid.

    For the record: I am not a machine. ( In case you thought otherwise).

    It’s good when we know at least a bit about what we don’t know. …

    I think we don’t know enough to state, coherently, what “consciousness” actually means, nor what “mind” actually means, in terms of biology or brain function.

    Y’all are welcome to argue about whether or not consciousness or mind can be simulated and/or implemented in a machine. I won’t get engaged in that until someone gives me a succinct definition of what either word means, and can convince me that we have a pretty good idea of how it’s implemented in us.

So you avoid the critical question, while such as Dennett deny the critical question. Not much of a difference. Which is about where our current state of culture is: splashing in the shallow end of a very deep pool, afraid to venture past the three-foot depth. The problem is, we are all required to ignore the critical question, so our minders can keep approaching us as zombie-plus organisms of no intrinsic value or purpose. Fungible. You demean yourself and all the rest of us with a sophomoric colloquy that seeks above all to avoid meaning or significance for us little human robots. Many Ricochetti seem happy with this. I am not. I would suggest another book: Marilynne Robinson’s “Absence of Mind.”

    I think the reality is that maybe you and I are thinking of different questions, and you’re annoyed (and hence a bit rude) that I’m not entertaining yours. And that’s okay.

    • #46
  17. Nanocelt TheContrarian Member
    Nanocelt TheContrarian
    @NanoceltTheContrarian

    Henry Racette (View Comment):

    Nanocelt TheContrarian (View Comment):

    Henry Racette (View Comment):

    Nanocelt TheContrarian (View Comment):

    Henry Racette (View Comment):

    Nanocelt TheContrarian (View Comment):

    The underlying premise of AI is that human consciousness is a purely mechanical phenomenon that can be replicated mechanically.

    Oh, I don’t know. A machine that appears to think like a human and that successfully carries out normal human cognitive tasks probably needn’t be conscious, whatever that really means. If its mental behavior, seen from the outside, is essentially indistinguishable from that of a human, I think it’s reasonable to call that artificial intelligence. Self-awareness is a whole other can of worms, something we don’t really understand even in ourselves. My own suspicion is that we’ll eventually recreate it in machines, but that long before that we’ll have achieved quite robust artificial intelligence.

Read Daniel Dennett. He goes on and on about how assuming that consciousness is something other than mechanical is cheating. Then he proceeds to describe human consciousness in computer-software terms: not so much that human consciousness can be mimicked by machines, but that human consciousness is a machine, and it is disallowed to approach it in any other fashion. Sounds to me like you have imbibed copiously of the Dennett Kool-Aid.

    For the record: I am not a machine. ( In case you thought otherwise).

    It’s good when we know at least a bit about what we don’t know. …

    I think we don’t know enough to state, coherently, what “consciousness” actually means, nor what “mind” actually means, in terms of biology or brain function.

    Y’all are welcome to argue about whether or not consciousness or mind can be simulated and/or implemented in a machine. I won’t get engaged in that until someone gives me a succinct definition of what either word means, and can convince me that we have a pretty good idea of how it’s implemented in us.

So you avoid the critical question, while such as Dennett deny the critical question. Not much of a difference. Which is about where our current state of culture is: splashing in the shallow end of a very deep pool, afraid to venture past the three-foot depth. The problem is, we are all required to ignore the critical question, so our minders can keep approaching us as zombie-plus organisms of no intrinsic value or purpose. Fungible. You demean yourself and all the rest of us with a sophomoric colloquy that seeks above all to avoid meaning or significance for us little human robots. Many Ricochetti seem happy with this. I am not. I would suggest another book: Marilynne Robinson’s “Absence of Mind.”

    I think the reality is that maybe you and I are thinking of different questions, and you’re annoyed (and hence a bit rude) that I’m not entertaining yours. And that’s okay.

And swallowing whole the modernist trope of human nontranscendence and insignificance.

    • #47
  18. Henry Racette Member
    Henry Racette
    @HenryRacette

    Nanocelt TheContrarian (View Comment):

    Henry Racette (View Comment):

    Nanocelt TheContrarian (View Comment):

    Henry Racette (View Comment):

    Nanocelt TheContrarian (View Comment):

    Henry Racette (View Comment):

    Nanocelt TheContrarian

    Oh, I don’t know. A machine that appears to think like a human and that successfully carries out normal human cognitive tasks probably needn’t be conscious, whatever that really means. If its mental behavior, seen from the outside, is essentially indistinguishable from that of a human, I think it’s reasonable to call that artificial intelligence. Self-awareness is a whole other can of worms, something we don’t really understand even in ourselves. My own suspicion is that we’ll eventually recreate it in machines, but that long before that we’ll have achieved quite robust artificial intelligence.

Read Daniel Dennett. He goes on and on about how assuming that consciousness is something other than mechanical is cheating. Then he proceeds to describe human consciousness in computer-software terms: not so much that human consciousness can be mimicked by machines, but that human consciousness is a machine, and it is disallowed to approach it in any other fashion. Sounds to me like you have imbibed copiously of the Dennett Kool-Aid.

    For the record: I am not a machine. ( In case you thought otherwise).

    It’s good when we know at least a bit about what we don’t know. …

    I think we don’t know enough to state, coherently, what “consciousness” actually means, nor what “mind” actually means, in terms of biology or brain function.

    Y’all are welcome to argue about whether or not consciousness or mind can be simulated and/or implemented in a machine. I won’t get engaged in that until someone gives me a succinct definition of what either word means, and can convince me that we have a pretty good idea of how it’s implemented in us.

So you avoid the critical question, while such as Dennett deny the critical question. Not much of a difference. Which is about where our current state of culture is: splashing in the shallow end of a very deep pool, afraid to venture past the three-foot depth. The problem is, we are all required to ignore the critical question, so our minders can keep approaching us as zombie-plus organisms of no intrinsic value or purpose. Fungible. You demean yourself and all the rest of us with a sophomoric colloquy that seeks above all to avoid meaning or significance for us little human robots. Many Ricochetti seem happy with this. I am not. I would suggest another book: Marilynne Robinson’s “Absence of Mind.”

    I think the reality is that maybe you and I are thinking of different questions, and you’re annoyed (and hence a bit rude) that I’m not entertaining yours. And that’s okay.

And swallowing whole the modernist trope of human nontranscendence and insignificance.

    It is true that I have a mechanistic view of the universe.

    • #48