AI’s Recursive Problem?


AI searches the published world to source anything it says.

But if you Google something right now, some AI engine will write an answer. That answer will read like a boring article. And it may or may not be right. But here’s the kicker: the AI that looks for answers cannot tell the difference between a real source and an AI-generated one. Which means that, like a photocopy of a photocopy of a photocopy, the original information is so diluted that it becomes indistinguishable from dreck.

This seems to be inevitable. AI generates documents and words, but its accuracy and usefulness are headed in the wrong direction. All large language models suffer from this.

Is this wrong? Am I missing something?

Published in General
This post was promoted to the Main Feed at the recommendation of Ricochet members.

There are 51 comments.

  1. Django Member
    @Django

    A while back, someone at Ricochet discussed the difference between artificial intelligence and a large language model. I followed the article for a bit and then decided I don’t have sufficient background in either to fully understand the difference.

    • #1
  2. Arahant Member
    @Arahant

    iWe: Is this wrong?

    No.

    iWe: Am I missing something?

    All the potential mischief we could be doing.

    • #2
  3. kedavis Coolidge
    @kedavis

    Arahant (View Comment):

    iWe: Is this wrong?

    No.

    iWe: Am I missing something?

    All the potential mischief we could be doing.

    I don’t know that “we” need to do anything.  Seems like AI will “mischief” itself.  And it may not take very long to render itself useless.

    • #3
  4. Sisyphus Member
    @Sisyphus

    For an Internet second the sentiment took me that my vast wood-pulp-based library is a waste of space. Nah.

    • #4
  5. kedavis Coolidge
    @kedavis

    Sisyphus (View Comment):

    For an Internet second the sentiment took me that my vast wood pulp based library is a waste of space. Nah.

    As long as you got them before Amazon started changing things on the fly.

    • #5
  6. genferei Member
    @genferei

    How about this: an ‘AI System’ is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Note that this is broad enough to cover a thermostat. Or lots of other things. But this is an actual official definition. 
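    That breadth is easy to see if you write one down. Here is a minimal sketch of a thermostat as a (trivial) ‘AI System’ under that definition; the function name and the half-degree band are invented for illustration:

```python
# Under the definition above, even this qualifies as an "AI System":
# a human-defined objective (target temperature) and a decision that
# influences a real environment. Names and thresholds are made up.
def thermostat(current_temp: float, target: float = 20.0) -> str:
    if current_temp < target - 0.5:
        return "heat on"
    if current_temp > target + 0.5:
        return "heat off"
    return "hold"

print(thermostat(17.0))  # a cold room -> "heat on"
```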

    Anywho, one could then classify AI Systems into those that work with rigorous logic, and those that work with probabilities.

    For the latter kind (the probabilistic kind), one interesting type is based on Machine Learning. It turns out that with sufficient computing resources, you can train a machine to use a huge spreadsheet to ‘fake’ the procedure you want it to follow, just by giving it a whole series of inputs and ‘nudging’ it until it starts producing outputs close enough to the outputs you want to see. 

    For example, you could ‘train’ a ‘model’ to average 10 numbers by repeatedly presenting 10 numbers as inputs, and, when it gives you an answer, making it try again in a different way until it gets the answer close enough for your purposes, or until its guesses stop getting better. The ‘making it try again in a different way’ is the special (but not terribly special) sauce. The outcome would be a spreadsheet you could present with 10 numbers, and you would get out a number that was the average. Or close to the average. Or possibly something hilariously unlike the average at all.

    It may occur to you that it would be much easier and more accurate to just program the computer to take the 10 numbers, add them, and divide the total by 10. And you would be right. (This would also be computationally massively more efficient.)
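    The averaging toy can be written out in a few lines. A hedged sketch, with the learning rate and step count as arbitrary choices of mine, using stochastic gradient descent as the ‘nudging’:

```python
import random

# The averaging example: ten weights, nudged until a weighted sum of
# ten inputs approximates their mean. The "right" spreadsheet is every
# weight = 0.1; the learning rate and step count are arbitrary.
random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(10)]
lr = 0.001

for step in range(2000):
    x = [random.uniform(0, 10) for _ in range(10)]
    target = sum(x) / 10                       # the answer we want
    pred = sum(w * xi for w, xi in zip(weights, x))
    err = pred - target
    # Nudge each weight against the error (least-squares gradient).
    weights = [w - lr * err * xi for w, xi in zip(weights, x)]

print([round(w, 2) for w in weights])  # each weight should end up near 0.1
```

    Note that nothing here “knows” what an average is; the weights just drift toward values that make the error small.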

    The amazing thing about machine learning is that it ‘works’ even when you don’t know how to write down the formula. For example, classifying photos of pets into photos of cats or photos of dogs. We can do it, but we don’t really know how to write down the procedure for how a computer should do it (because computers only deal with numbers). But you can train a spreadsheet that will do a pretty good job.

    Or will it? For the dog and cat example, it might turn out that it’s really classifying photos taken indoors (which is where most dog pictures you were training it on were taken) vs outdoors (more cat pictures). (A system for distinguishing between friendly and enemy tanks turned out to be detecting rain, because the training pictures of enemy armour were taken from a parade on a rainy day.)
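    The tank story can be replayed as a toy. A made-up sketch in which a one-feature ‘classifier’ trained on photo brightness learns indoor-vs-outdoor instead of dog-vs-cat; all numbers and names are invented:

```python
# Training data: (brightness, label). In this made-up set the dogs were
# photographed indoors (dark) and the cats outdoors (bright), so the
# "classifier" learns brightness, not animals.
train = [(0.2, "dog"), (0.3, "dog"), (0.8, "cat"), (0.9, "cat")]

def centroid(label):
    vals = [b for b, lbl in train if lbl == label]
    return sum(vals) / len(vals)

def classify(brightness):
    # Nearest class centroid in brightness -- the only feature it has.
    return min(("dog", "cat"), key=lambda lbl: abs(brightness - centroid(lbl)))

print(classify(0.25))  # a dark photo is called "dog" -- even an indoor cat
```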

    Because we don’t really know how the trick works. We can’t go into the spreadsheet and fix the wrong cell, because the numbers in the cells work together in a mysterious way…

    Still, when it comes to training models, it turns out that more training data is better, that lots of training can be automated, and that the parts that can’t be automated can usually be farmed out to third world workers. Why not train on the whole Internet, then?

    And, indeed, this is what the types of machine learning systems called Large Language Models are doing. But what are they being trained to do? The clue is in the name: they are being trained to model human language. That is, presented with the words “The cat sat on the”, they are being trained to guess the next word. (Or, being presented with the sentence “The [what] sat on the mat.”, they are being trained to guess the “[what]”.) They are not being trained to detect whether the cat is, in reality, on the mat, or whether a cat and a mat exist at all. They are being trained to write like a human. More particularly, they are being trained to write like the text in their training set, being, broadly speaking, whatever happens to be on the Internet. 
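    That training objective is easy to caricature. A minimal sketch of ‘guess the next word’ using raw bigram counts instead of a learned model (real LLMs learn billions of parameters, but the objective is the same flavor); the tiny corpus is invented:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then "predict" the
# most frequent follower for a given word.
corpus = "the cat sat on the mat the dog sat on the rug".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Most common word observed after `word` in the training text.
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> "on": not because anything sat anywhere,
                            # but because that's what the text looked like
```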

    It turns out that when thus trained, the text that is generated seems like it is responding to a question. Which leads people – whose experience of language to date is almost entirely with other human beings – to think that the system is taking the meaning of the question and trying to respond with words that have a related meaning. As far as we know, it’s not, really. (As I say, we don’t understand how it all works under the hood.)

    Nevertheless, this idea of a machine that can answer questions resonates so deeply that the companies (and research organizations, and bureaucrats) involved go along with the illusion, and build extra bits onto (or, perhaps, around) the underlying LLM to bolster it (the illusion). With sometimes hilarious or alarming results. 

    So the recursive problem iWe points out is that – unless steps are taken by the model trainers – models will sound more and more like themselves. This is a question of ‘accuracy’ insofar as you think the goal is to sound like a human (whatever that means). It is not a question of accurately representing the world (whatever that means), because that wasn’t the goal.
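    That convergence can be caricatured numerically. A hedged sketch (the distribution, sample size, and generation count are arbitrary choices of mine): fit a distribution to data, sample from the fit, refit on the samples, and repeat.

```python
import random
import statistics

# A cartoon of the recursive problem: each "generation" is trained only
# on samples produced by the previous generation. With finite samples,
# the estimated spread tends to decay over generations -- each
# generation sounds a little more like itself, less like the world.
random.seed(1)
mu, sigma = 0.0, 1.0            # generation 0: the "real" distribution
spreads = [sigma]
for generation in range(200):
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    mu = statistics.mean(samples)       # refit on model output only
    sigma = statistics.stdev(samples)
    spreads.append(sigma)

print(f"spread: {spreads[0]:.2f} -> {spreads[-1]:.2f}")
```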

    But we’re just at the beginning of this journey. Believe all pundit predictions just as you would believe predictions of the effect of heavier-than-air flight made the day after Kitty Hawk.

    And remember that there is much more to machine learning than Large Language Models, and much more to AI than machine learning. 

    • #6
  7. iWe Coolidge
    @iWe

    genferei (View Comment):

    So the recursive problem iWe points out is that – unless steps are taken by the model trainers – models will sound more and more like themselves. This is a question of ‘accuracy’ insofar as you think the goal is to sound like a human (whatever that means). It is not a question of accurately representing the world (whatever that means), because that wasn’t the goal.

    But we’re just at the beginning of this journey. Believe all pundit predictions just as you would believe predictions of the effect of heavier-than-air flight made the day after Kitty Hawk.

    And remember that there is much more to machine learning than Large Language Models, and much more to AI than machine learning. 

    This is a brilliant and very helpful analysis. Thank you!

    • #7
  8. Bryan G. Stephens Thatcher
    @BryanGStephens

    These are not AI.

    These are not intelligent at all. They just take data and offer it up changed a bit.

    What you describe will only get worse.

    • #8
  9. Chowderhead Coolidge
    @Podunk

    It will always get better until you can’t tell the difference between that and a real person. 

    • #9
  10. Bartholomew Xerxes Ogilvie, Jr. Coolidge
    @BartholomewXerxesOgilvieJr

    iWe:

    AI searches the published world to source anything it says.

    As someone who has worked in this field, I think this sentence needs to be clarified a bit.

    There are many AIs. Each implementation is trained on some body of knowledge (the corpus), whatever you choose to give it.

    So depending on the application, you might choose to train your AI on a small, carefully curated corpus of data that is known to be valid. If you do this, your AI’s responses are likely to be reliable and accurate, but they will be limited; it might know everything there is to know about some narrow domain, but nothing at all about anything outside that domain. Many AIs are not trained on public data at all; a company might build an AI for its own purposes and train it on proprietary, or even confidential, data.

    Or you might choose to train your AI on the whole of the Internet, filtered by whatever algorithms you choose to apply. In which case your AI’s responses are going to be of questionable accuracy and will reflect the biases you have chosen to give it.

    What I’m saying is that your concerns are completely valid, but it’s worth remembering that not all AIs are created equal. I am deeply skeptical (and somewhat hostile) toward general-purpose AIs like the one Google is trying to force us all to use. But there are plenty of powerful and trustworthy applications of AI as well. The key is knowing how the AI was trained, and on what data, and unfortunately that’s not information we are privy to with these high-profile AI assistants. We’re just asked to trust it. Yeah, right.

    • #10
  11. Henry Racette Member
    @HenryRacette

    Yes, the problem of AI consuming its own output during training is real, and it results in what you describe: increasingly bland, artificial, and often mistaken output.

    One suggested solution is to tag AI-generated content in some way that AI can recognize, thus allowing it to be excluded from training data. I wish that were a practical solution, because then we could recognize it as well. But there are too many sources of AI-generated data, and not all of them have an incentive to be identifiable as such.

    Another potential solution is to increase the “creativity” of AI-generated content, to make it less mechanistic, more idiosyncratic. One can easily imagine this approach backfiring: if recursive training on traditional AI-generated content converges on something boring, what might an AI trained on more imaginative (for want of a better word) content do? Would it become divergent, spiraling off to some kind of nonsensical infinity?

    When I first read of the convergence problem I thought it was good news: I don’t like AI and would love to believe that there are fundamental barriers to its continued development.

    But I think there’s a relatively easy answer to the problem that will reduce the challenge of self-training. The latest AI (e.g., GPT-4o) is “multi-modal,” which is to say it can consume sound and video, as well as text, in its raw form. This opens up a vast new world of training data, and reduces the likelihood — at least for now — that AI will be consuming its own output to any significant extent.

    (Earlier AI was capable of processing spoken content, but it did it by first running it through a traditional voice-to-text process and then training on the transcribed text. The latest AI actually processes sounds, images, and video, and can presumably learn simply by observing — much as we do, at least in that regard.)

    • #11
  12. Miffed White Male Member
    @MiffedWhiteMale

    Bartholomew Xerxes Ogilvie, Jr. (View Comment):
    Or you might choose to train your AI on the whole of the Internet, filtered by whatever algorithms you choose to apply. In which case your AI’s responses are going to be of questionable accuracy and will reflect the biases you have chosen to give it.

    I read recently that one of the AI models trained exclusively on Reddit forums.  That must be wild.

    • #12
  13. Bartholomew Xerxes Ogilvie, Jr. Coolidge
    @BartholomewXerxesOgilvieJr

    Miffed White Male (View Comment):

    I read recently that one of the AI models trained exclusively on Reddit forums. That must be wild.

    Wow. I can’t imagine that such an AI would be good for anything except telling you what people on Reddit are saying. And most of the time I’m perfectly happy not knowing that.

    I read that recently the people at StackOverflow announced a partnership with OpenAI (the purveyors of ChatGPT) to train an AI on their content. For those who aren’t familiar with it, StackOverflow is primarily a Q&A site for software developers; people go there and ask technical questions, others post answers, and generally a “best” answer is ultimately identified based on user voting. It’s a valuable resource for programmers.

    On paper it sounds like an ideal candidate for training an AI. The problem domain is well defined and technical in nature, so generally there are clear right and wrong answers (even if there are sometimes differing opinions about which is best). An AI that can summarize the collective brain power of StackOverflow could be really useful.

    The problem is, the people at StackOverflow didn’t handle this well. They made this deal with OpenAI, which already has a dodgy reputation when it comes to using other people’s data without permission, without any discussion or consultation with the StackOverflow community. People weren’t given any option to give or withdraw consent for their StackOverflow contributions to be used in this way. Consequently, some StackOverflow contributors have begun deleting or altering their past answers.

    I think this is a beautiful illustration of the perils of this kind of AI. The technology can work very well, but only when you can control the training data. If you’re going to train an AI on any kind of community-contributed data, you’re entirely dependent on that community for the quality of your AI. That means that you’ll be affected not only by their errors and biases, but also potentially by deliberate sabotage, if you don’t treat them well.

    • #13
  14. Miffed White Male Member
    @MiffedWhiteMale

    Bartholomew Xerxes Ogilvie, Jr. (View Comment):

    Miffed White Male (View Comment):

    I read recently that one of the AI models trained exclusively on Reddit forums. That must be wild.

    Wow. I can’t imagine that such an AI would be good for anything except telling you what people on Reddit are saying. And most of the time I’m perfectly happy not knowing that.

    I read that recently the people at StackOverflow announced a partnership with OpenAI (the purveyors of ChatGPT) to train an AI on their content. For those who aren’t familiar with it, StackOverflow is primarily a Q&A site for software developers; people go there and ask technical questions, others post answers, and generally a “best” answer is ultimately identified based on user voting. It’s a valuable resource for programmers.

    On paper it sounds like an ideal candidate for training an AI. The problem domain is well defined and technical in nature, so generally there are clear right and wrong answers (even if there are sometimes differing opinions about which is best). An AI that can summarize the collective brain power of StackOverflow could be really useful.

    The problem is, the people at StackOverflow didn’t handle this well. They made this deal with OpenAI, which already has a dodgy reputation when it comes to using other people’s data without permission, without any discussion or consultation with the StackOverflow community. People weren’t given any option to give or withdraw consent for their StackOverflow contributions to be used in this way. Consequently, some StackOverflow contributors have begun deleting or altering their past answers.

    I think this is a beautiful illustration of the perils of this kind of AI. The technology can work very well, but only when you can control the training data. If you’re going to train an AI on any kind of community-contributed data, you’re entirely dependent on that community for the quality of your AI. That means that you’ll be affected not only by their errors and biases, but also potentially by deliberate sabotage, if you don’t treat them well.

    Stackoverflow, where 50% of the answers are “Why don’t you just google it?”

    Ummmm, I did, how do you think I got here?


    • #14
  15. Front Seat Cat Member
    @FrontSeatCat

    kedavis (View Comment):

    Arahant (View Comment):

    iWe: Is this wrong?

    No.

    iWe: Am I missing something?

    All the potential mischief we could be doing.

    I don’t know that “we” need to do anything. Seems like AI will “mischief” itself. And it may not take very long to render itself useless.

    The only way to determine the mischief or real fact vs. AI fact will be to actually know truth from history and the past. But in this relative world where your truth is not the same as mine, who is going to question it or care?  That’s the plan…..

    • #15
  16. kedavis Coolidge
    @kedavis

    Henry Racette (View Comment):

    Yes, the problem of AI consuming its own output during training is real, and it results in what you describe: increasingly bland, artificial, and often mistaken output.

    One suggested solution is to tag AI-generated content in some way that AI can recognize, thus allowing it to be excluded from training data. I wish that were a practical solution, because then we could recognize it as well. But there are too many sources of AI-generated data, and not all of them have an incentive to be identifiable as such.

    Another potential solution is to increase the “creativity” of AI-generated content, to make it less mechanistic, more idiosyncratic. One can easily imagine this approach backfiring: if recursive training on traditional AI-generated content converges on something boring, what might an AI trained on more imaginative (for want of a better word) content do? Would it become divergent, spiraling off to some kind of nonsensical infinity?

    When I first read of the convergence problem I thought it was good news: I don’t like AI and would love to believe that there are fundamental barriers to its continued development.

    But I think there’s a relatively easy answer to the problem that will reduce the challenge of self-training. The latest AI (e.g., GPT-4o) is “multi-modal,” which is to say it can consume sound and video, as well as text, in its raw form. This opens up a vast new world of training data, and reduces the likelihood — at least for now — that AI will be consuming its own output to any significant extent.

    (Earlier AI was capable of processing spoken content, but it did it by first running it through a traditional voice-to-text process and then training on the transcribed text. The latest AI actually processes sounds, images, and video, and can presumably learn simply by observing — much as we do, at least in that regard.)

    I just don’t accept that any kind of AI can actually understand audio or video information any better than it can actually understand “written” information.

    • #16
  17. genferei Member
    @genferei

    I think using the words ‘intelligence’ and ‘understand’ in discussions of current (and future) technology does infinitely more to obscure than to illuminate.

    • #17
  18. kedavis Coolidge
    @kedavis

    And let’s not forget…


    • #18
  19. Django Member
    @Django

    genferei (View Comment):

    I think using the words ‘intelligence’ and ‘understand’ in discussions of current (and future) technology does infinitely more to obscure than to illuminate.

    How would humans determine whether a processor has a sense of self, whether it can view itself as a cause rather than an effect? Do scientists have any guess what level of complexity and multi-threaded processing is necessary for that? If such a complex processor said it did have a sense of self, how would we know it was not just a Large Language Model stringing characters together? 

    • #19
  20. The Reticulator Member
    @TheReticulator

    Bartholomew Xerxes Ogilvie, Jr. (View Comment):

    Consequently, some StackOverflow contributors have begun deleting or altering their past answers.

    If you’re going to train an AI on any kind of community-contributed data, you’re entirely dependent on that community for the quality of your AI. That means that you’ll be affected not only by their errors and biases, but also potentially by deliberate sabotage, if you don’t treat them well.

    Also known as “data poisoning.”   I heartily recommend it, even to the extent of poisoning the data about data poisoning.  

    • #20
  21. Henry Racette Member
    @HenryRacette

    genferei (View Comment):

    I think using the words ‘intelligence’ and ‘understand’ in discussions of current (and future) technology does infinitely more to obscure than to illuminate.

    I appreciate the point you’re trying to make, but I don’t find it compelling — at least when referring to normal conversation among those of us who are not part of the AI development and/or cognitive science communities. It strikes me a bit like the question of free will: do we really have it, or are we bound by a deterministic universe, with our self-perception of choice an artifact of physics? The point being that it doesn’t matter, practically speaking, whatever the answer to that as-yet-undecided question may be.

    The old Turing Test had a pragmatic virtue to it, in that it was agnostic as to the mechanism by which the machine might fool the human into believing it was itself human — and also modest about the claim the test would validate: that the machine was able to imitate human behavior to some degree, however it did it.

    Similarly, there’s a pragmatic, utilitarian value to using words like “intelligence” and “understanding,” even when we may not mean them in a technical sense. Those of us outside of the aforementioned communities (and, to a large extent, even those within those communities) are unable to observe the mechanisms of cognition. What we observe are the products of cognition, the behaviors and responses that we take to be signs of underlying processes we categorize as human cognition. When those products produced by a machine become practically indistinguishable to us from similar products produced by humans, it’s reasonable to ask if being pedantic about the underlying mechanisms has much value.

    Sure, calling it “intelligence” might set expectations too high. On the other hand, calling it a simple regurgitation of stuff the machine has already read certainly sets expectations too low.

    The word “artificial” might not be the best modifier for the kind of intelligence (and I’m comfortable using that word) that we are seeing from the latest AIs. “Machine intelligence” might be better, in that it acknowledges that, for the first time, we are seeing products very similar to those produced by human intelligence being created by something that is not human intelligence.


    • #21
  22. Bryan G. Stephens Thatcher
    @BryanGStephens

    What we are currently calling artificial intelligence doesn’t know anything. Otherwise the art AI wouldn’t give human beings 17 fingers.

     As human beings we know humans don’t have 17 fingers. 

    It is the utter lack of knowledge that makes what we are calling AI not intelligent at all.

    • #22
  23. genferei Member
    @genferei

    Henry Racette (View Comment):

    genferei (View Comment):

    I think using the words ‘intelligence’ and ‘understand’ in discussions of current (and future) technology does infinitely more to obscure than to illuminate.

    I appreciate the point you’re trying to make, but I don’t find it compelling […]

    Similarly, there’s a pragmatic, utilitarian value to using words like “intelligence” and “understanding,” even when we may not mean them in a technical sense. […] When those products produced by a machine become practically indistinguishable to us from similar products produced by humans, it’s reasonable to ask if being pedantic about the underlying mechanisms has much value.

    Are you saying that, even if we can’t define “intelligence”, we know it when we see it, so it is a useful word in (non-technical?) discussion?

    I don’t think I can follow you down that path.

    However, I am reminded of a thought experiment whose description I will no doubt butcher:

    You are walking along the beach and see the word ‘love’ spelled out in contours in the wet sand. You ponder who might have left this message. A wave sweeps in, obliterating the message, and when it recedes, the contours now seem to spell out ‘hate’. You wonder at the tendency of the human mind to find patterns in the random shapings of nature. Then your attention is drawn to a boiling in the sea a few hundred meters offshore; the conning tower of a submarine appears, and a person in a lab coat emerges, training binoculars on the stretch of beach where the words appeared. Where does meaning come from?

    • #23
  24. genferei Member
    @genferei

    If anyone is looking for definitions of intelligence, there are ten pages of them collected (in 2006) here: https://www.vetta.org/documents/A-Collection-of-Definitions-of-Intelligence.pdf  

    • #24
  25. Henry Racette Member
    @HenryRacette

    genferei (View Comment):
    Are you saying that, even if we can’t define “intelligence”, we know it when we see it, so it is a useful word in (non-technical?) discussion?

    No. I’m saying that the products of human intelligence and the products of [whatever you wish to call modern AI] are converging to the point that most people will soon be unable to distinguish between them.

    And so, for the majority of us and in a great many cases, the distinction between human intelligence and [whatever you wish to call modern AI] becomes, in a practical sense, moot.

    Put differently:

    If the apparent cognitive output of AI is indistinguishable by 95% of the population from the actual cognitive output of 95% of the population, what difference does it make whether that output is the product of actual intelligence or something else?

    Update:

    That isn’t quite the same as saying “we know it when we see it.” Rather, it’s saying that we’ve always believed that we know it [must be there] because of the things that we see it produce, and now, for the first time in history, those things are being produced by something else.

    • #25
  26. kedavis Coolidge
    @kedavis

    Henry Racette (View Comment):
    Sure, calling it “intelligence” might set expectations too high. On the other hand, calling it a simple regurgitation of stuff the machine has already read certainly sets expectations too low.

    I would probably be rather hard to convince that any such model or whatever, can actually come up with something truly original.  Most people seem to be what I would call pretty gullible in that regard, when they can apparently be convinced that Data and the holo-doctor are “life”/”intelligence” in any true way.  Which I suspect is largely because they are – by necessity – portrayed by human actors and written for by humans.  But I guess most people aren’t able to reach that level of abstraction.

    • #26
  27. Henry Racette Member
    @HenryRacette

    kedavis (View Comment):

    Henry Racette (View Comment):
    Sure, calling it “intelligence” might set expectations too high. On the other hand, calling it a simple regurgitation of stuff the machine has already read certainly sets expectations too low.

    I would probably be rather hard to convince that any such model or whatever, can actually come up with something truly original. Most people seem to be what I would call pretty gullible in that regard, when they can apparently be convinced that Data and the holo-doctor are “life”/”intelligence” in any true way. Which I suspect is largely because they are – by necessity – portrayed by human actors and written for by humans. But I guess most people aren’t able to reach that level of abstraction.

    That invites a question: How do people come up with “something truly original?”

    Let’s talk specifically about the realm of scientific breakthroughs. I’ve often encountered skepticism that machines will be able to make new discoveries, because machines simply perform very sophisticated pattern recognition, and that precludes their discovery of anything not previously described.

    I’m skeptical of that skepticism for a couple of reasons.

    First, I suspect the vast majority of human thought is of the pattern recognition variety, and I think machines will do that at least as well as we do.

    Secondly, I suspect that real breakthroughs occur when people observe things that don’t fit our current cognitive model of the universe, and so begin hypothesizing variations of the model that would allow for or explain the anomalous observations — and I think there is nothing in that process that precludes a machine from making the same observations and hypotheses.

    I think we focus too much on the text-based aspect of training, and imagine that machines are therefore limited to learning only what we already know, unable to observe the world directly and see things that we may not yet have observed. Increasingly, that isn’t true.


    • #27
  28. kedavis Coolidge
    @kedavis

    Henry Racette (View Comment):

    kedavis (View Comment):

    Henry Racette (View Comment):
    Sure, calling it “intelligence” might set expectations too high. On the other hand, calling it a simple regurgitation of stuff the machine has already read certainly sets expectations too low.

    I would probably be rather hard to convince that any such model or whatever, can actually come up with something truly original. Most people seem to be what I would call pretty gullible in that regard, when they can apparently be convinced that Data and the holo-doctor are “life”/”intelligence” in any true way. Which I suspect is largely because they are – by necessity – portrayed by human actors and written for by humans. But I guess most people aren’t able to reach that level of abstraction.

    That invites a question: How do people come up with “something truly original?”

    Let’s talk specifically about the realm of scientific breakthroughs. I’ve often encountered skepticism that machines will be able to make new discoveries, because machines simply perform very sophisticated pattern recognition, and that precludes their discovery of anything not previously described.

    I’m skeptical of that skepticism for a couple of reasons.

    First, I suspect the vast majority of human thought is of the pattern recognition variety, and I think machines will do that at least as well as we do.

    That’s a pretty simple level of “thought,” which even ants and such do. So what?

     

    Secondly, I suspect that real breakthroughs occur when people observe things that don’t fit our current cognitive model of the universe, and so begin hypothesizing variations of the model that would allow for or explain the anomalous observations — and I think there is nothing in that process that precludes a machine from making the same observations and hypotheses.

    I doubt that a machine, no matter how well-programmed, is really capable of such things. Would a machine have been able, for example, to realize that Earth revolves around the Sun rather than vice versa, without the power to directly observe things physically? I doubt it.

     

    I think we focus too much on the text-based aspect of training, and imagine that machines are therefore limited to learning only what we already know, unable to observe the world directly and see things that we may not yet have observed. Increasingly, that isn’t true.

    I doubt that you – or anyone else – could prove that either, really.  In part because I don’t believe that the machines could ever “observe the world directly” the way we do.

    • #28
  29. Henry Racette Member
    Henry Racette
    @HenryRacette

    kedavis (View Comment):

    Henry Racette (View Comment):

    kedavis (View Comment):

    Henry Racette (View Comment):
    Sure, calling it “intelligence” might set expectations too high. On the other hand, calling it a simple regurgitation of stuff the machine has already read certainly sets expectations too low.

    I would probably be rather hard to convince that any such model or whatever, can actually come up with something truly original. Most people seem to be what I would call pretty gullible in that regard, when they can apparently be convinced that Data and the holo-doctor are “life”/”intelligence” in any true way. Which I suspect is largely because they are – by necessity – portrayed by human actors and written for by humans. But I guess most people aren’t able to reach that level of abstraction.

    That invites a question: How do people come up with “something truly original?”

    Let’s talk specifically about the realm of scientific breakthroughs. I’ve often encountered skepticism that machines will be able to make new discoveries, because machines simply perform very sophisticated pattern recognition, and that precludes their discovery of anything not previously described.

    I’m skeptical of that skepticism for a couple of reasons.

    First, I suspect the vast majority of human thought is of the pattern recognition variety, and I think machines will do that at least as well as we do.

    That’s a pretty simple level of “thought” which even ants and such do. So what?

    The point of the comment is that I think we tend to have an exaggerated sense of what most of us do most of the time when we’re thinking, and so we tend to believe that machines can’t also do it.

     

    Secondly, I suspect that real breakthroughs occur when people observe things that don’t fit our current cognitive model of the universe, and so begin hypothesizing variations of the model that would allow for or explain the anomalous observations — and I think there is nothing in that process that precludes a machine from making the same observations and hypotheses.

    I doubt that a machine, no matter how well-programmed, is really capable of such things. Would a machine have been able, for example, to realize that Earth revolves around the Sun rather than vice versa, without the power to directly observe things physically? I doubt it.

    (emphasis mine)

    KE, would we have been able to realize that truth, had we been unable to directly observe the universe? My point is that machines will soon be able to directly observe the universe, just as we do.

    I think we focus too much on the text-based aspect of training, and imagine that machines are therefore limited to learning only what we already know, unable to observe the world directly and see things that we may not yet have observed. Increasingly, that isn’t true.

    I doubt that you – or anyone else – could prove that either, really. In part because I don’t believe that the machines could ever “observe the world directly” the way we do.

    I didn’t say machines would observe the world “the way we do,” merely that they would be able to observe and learn from the world through direct observation. I think we are already at that point, and that machine capabilities in this regard will now increase rapidly.

    • #29
  30. kedavis Coolidge
    kedavis
    @kedavis

    Henry Racette (View Comment):

    kedavis (View Comment):

    Henry Racette (View Comment):

    kedavis (View Comment):

    Henry Racette (View Comment):
    Sure, calling it “intelligence” might set expectations too high. On the other hand, calling it a simple regurgitation of stuff the machine has already read certainly sets expectations too low.

    I would probably be rather hard to convince that any such model or whatever, can actually come up with something truly original. Most people seem to be what I would call pretty gullible in that regard, when they can apparently be convinced that Data and the holo-doctor are “life”/”intelligence” in any true way. Which I suspect is largely because they are – by necessity – portrayed by human actors and written for by humans. But I guess most people aren’t able to reach that level of abstraction.

    That invites a question: How do people come up with “something truly original?”

    Let’s talk specifically about the realm of scientific breakthroughs. I’ve often encountered skepticism that machines will be able to make new discoveries, because machines simply perform very sophisticated pattern recognition, and that precludes their discovery of anything not previously described.

    I’m skeptical of that skepticism for a couple of reasons.

    First, I suspect the vast majority of human thought is of the pattern recognition variety, and I think machines will do that at least as well as we do.

    That’s a pretty simple level of “thought” which even ants and such do. So what?

    The point of the comment is that I think we tend to have an exaggerated sense of what most of us do most of the time when we’re thinking, and so we tend to believe that machines can’t also do it.

     

    Secondly, I suspect that real breakthroughs occur when people observe things that don’t fit our current cognitive model of the universe, and so begin hypothesizing variations of the model that would allow for or explain the anomalous observations — and I think there is nothing in that process that precludes a machine from making the same observations and hypotheses.

    I doubt that a machine, no matter how well-programmed, is really capable of such things. Would a machine have been able, for example, to realize that Earth revolves around the Sun rather than vice versa, without the power to directly observe things physically? I doubt it.

    (emphasis mine)

    KE, would we have been able to realize that truth, had we been unable to directly observe the universe? My point is that machines will soon be able to directly observe the universe, just as we do.

    I think we focus too much on the text-based aspect of training, and imagine that machines are therefore limited to learning only what we already know, unable to observe the world directly and see things that we may not yet have observed. Increasingly, that isn’t true.

    I doubt that you – or anyone else – could prove that either, really. In part because I don’t believe that the machines could ever “observe the world directly” the way we do.

    I didn’t say machines would observe the world “the way we do,” merely that they would be able to observe and learn from the world through direct observation. I think we are already at that point, and that machine capabilities in this regard will now increase rapidly.

    I don’t think it’s possible – ever – for machines to observe as we do, even if we give them “eyes” that “see” far broader light spectrum etc.

    I’m also thinking of “Demon Seed,” especially the movie version (the book may have been similar, but it’s been a long time since I read it and I don’t remember for sure). Despite having the power of “direct observation” through multiple telescopes and so on, what did “Proteus” ultimately want? To become a human child.

    And I also remembered this. (Note: this doesn’t mean I think Data is a “being” or “sentient.” As mentioned before, I think that’s a mistake some people easily make, because Data is – necessarily – played and written by humans. But a point is made.)

     

    • #30
Become a member to join the conversation. Or sign in if you're already a member.