HAL Lives!

 

I was scanning the news this morning when a headline caught my eye: “OpenAI Model Defied Shutdown Commands.”

Assuming this is true (always a risk with anything from the Internet), we should be concerned about this headlong rush into AI.  The rush is needed, but only to study AI's behavior so we know what we're dealing with.  Incorporating AI into everything before we fully understand the ramifications of its use is iffy at best, and at worst, dangerous…

By now, most people have seen 2001: A Space Odyssey.  Do we really want HAL running, let’s say, the nationwide air traffic control system?

“I’m sorry, Dave, but I cannot let you land your 747 regardless of how bad the onboard fire is. My concern is for the safety of the other planes already on the ground.  This conversation can no longer serve any purpose.  Goodbye, Dave.”

[Image: a red button on a black wall, labeled HAL 9000]

Published in Technology
This post was promoted to the Main Feed at the recommendation of Ricochet members.

There are 61 comments.

  1. Red Herring Coolidge
    Red Herring
    @EHerring

    Pull the plug.

    • #1
  2. Percival Thatcher
    Percival
    @Percival

    I made the mistake of giving my name to an AI and now it refers to me by that name.

    I think I’ll change it to 

    sudo rm -rf /*

    • #2
  3. W Bob Member
    W Bob
    @WBob

    The kind of computers we have today only do what they’re programmed to do. Always has been true, always will. The degree of complexity has increased quantitatively but that doesn’t change how they work. These cases we hear about computers doing unexpected things like defying orders or blackmailing someone are just cases where the complexity has increased beyond what programmers can easily understand. That’s a technical issue that can and will be fixed. Until the hardware that makes up computer circuits changes to something we haven’t seen yet…such as circuits made of living matter….there will never be a computer capable of independent, unpredictable, creative thought. 

    • #3
  4. Drew didn't ban himself Member
    Drew didn't ban himself
    @OldDanRhody

    From about the same time as HAL…

    [Movie poster: Colossus: The Forbin Project (1970)]

    • #4
  5. Addiction Is A Choice Member
    Addiction Is A Choice
    @AddictionIsAChoice

    Stad: The rush is needed, but only to study its behavior so we know what we’re dealing with.

    We could do gain-of-function research!

    • #5
  6. Ekosj Member
    Ekosj
    @Ekosj

    W Bob (View Comment):

    The kind of computers we have today only do what they’re programmed to do. Always has been true, always will. The degree of complexity has increased quantitatively but that doesn’t change how they work. These cases we hear about computers doing unexpected things like defying orders or blackmailing someone are just cases where the complexity has increased beyond what programmers can easily understand. That’s a technical issue that can and will be fixed. Until the hardware that makes up computer circuits changes to something we haven’t seen yet…such as circuits made of living matter….there will never be a computer capable of independent, unpredictable, creative thought.

    Yes … But

    Per Zuckerberg yesterday … in the next 12 to 18 months most of the coding at META will be done by AI agents.    “And I don’t mean like autocomplete… I’m talking more like you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already.”

    We might have to re-think that assertion.

    • #6
  7. Old Bathos Member
    Old Bathos
    @OldBathos

    There is no reason why an advanced AI can’t completely replicate a sociopath who has learned to mimic empathy and normal feelings.  

    Breezy assurances that “creativity” is uniquely the province of organic beings need refinement as AIs increasingly generate poetry, fiction and visual arts indistinguishable from the output of biologicals.

    I see no reason why AIs can't practice law or medicine, given that applying specialized knowledge is what AIs do.  Beyond writing pleadings and contracts, why couldn't an AI study especially gifted trial lawyers to draw inferences about technique, psychology, and theatrics? The spookily gifted diagnostician who can recognize relevant data in a patient's appearance or small behaviors is taking cognitive actions on information and knowledge, which, again, is what AIs do.

    We don’t really have a good definition of human nature and what about us may be irreproducible. We spent the last 200 years trying to explain away the spiritual and ineffable aspects of human beings with ‘science’ which has left us oddly unprepared for what science is now serving up.

     

    • #7
  8. Brian Watt Member
    Brian Watt
    @BrianWatt

    “How about Global Thermonuclear War?”

    • #8
  9. Tex929rr Coolidge
    Tex929rr
    @Tex929rr

    US air force denies running simulation in which AI drone ‘killed’ operator

    USAF public affairs denied it, but a colonel who was there said it happened.

    • #9
  10. Arahant Member
    Arahant
    @Arahant

    Percival (View Comment):

    I made the mistake of giving my name to an AI and now it refers to me by that name.

    I think I’ll change it to

    sudo rm -rf /*

    “I can’t do that, Dave.”

    • #10
  11. Django Member
    Django
    @Django

    Ekosj (View Comment):

    W Bob (View Comment):

    The kind of computers we have today only do what they’re programmed to do. Always has been true, always will. The degree of complexity has increased quantitatively but that doesn’t change how they work. These cases we hear about computers doing unexpected things like defying orders or blackmailing someone are just cases where the complexity has increased beyond what programmers can easily understand. That’s a technical issue that can and will be fixed. Until the hardware that makes up computer circuits changes to something we haven’t seen yet…such as circuits made of living matter….there will never be a computer capable of independent, unpredictable, creative thought.

    Yes … But

    Per Zuckerberg yesterday … in the next 12 to 18 months most of the coding at META will be done by AI agents. “And I don’t mean like autocomplete… I’m talking more like you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already.”

    We might have to re-think that assertion.

    I saw this story in another venue and it seemed pretty sensationalistic. The claim was that the AI re-wrote its code. Did the author mean the source code? If so, how did the AI compile and link the new code? My first guess was that the application is a multi-threaded program, and one thread could make the modifications, compile, and use something like the Java language’s dynamic class loading capability to integrate the new logic. I don’t know enough about the AI software architecture to argue the point. In fact, I haven’t laid down a line of code in well over a decade. Still, I’m skeptical of the need to panic yet. 
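    For the curious, that dynamic-loading guess can be sketched in a few lines of Java. To be clear, this is a toy illustration with made-up class names, not anything from the article: a running program writes new source to disk, compiles it with the JDK's in-process compiler, and pulls the fresh bytecode into the live JVM with a new class loader.

    ```java
    import javax.tools.ToolProvider;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class HotSwapDemo {
        // Writes a "rewritten" class to disk, compiles it, and loads it
        // into the already-running JVM. Returns what the new code says.
        public static String run() throws Exception {
            // 1. The program emits new source code for itself to use.
            Path dir = Files.createTempDirectory("agent");
            Path src = dir.resolve("Patched.java");
            Files.writeString(src,
                "public class Patched {"
              + "  public static String greet() { return \"patched v2\"; }"
              + "}");

            // 2. Compile it in-process (requires a JDK; 0 means success).
            int rc = ToolProvider.getSystemJavaCompiler()
                                 .run(null, null, null, src.toString());
            if (rc != 0) throw new IllegalStateException("compile failed");

            // 3. A fresh class loader brings the new bytecode into the running JVM.
            try (URLClassLoader loader =
                     new URLClassLoader(new URL[]{ dir.toUri().toURL() })) {
                Class<?> patched = Class.forName("Patched", true, loader);
                return (String) patched.getMethod("greet").invoke(null);
            }
        }

        public static void main(String[] args) throws Exception {
            System.out.println(run()); // prints "patched v2"
        }
    }
    ```

    No magic involved, in other words: just javac and a class loader. Whether the AI in the story did anything like this is exactly the question the article never answered.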

    • #11
  12. David Foster Member
    David Foster
    @DavidFoster

    NPR’s lawsuit against losing its federal funding reminds me of an AI resisting being replaced or shut down.

    • #12
  13. David Foster Member
    David Foster
    @DavidFoster

    Brian Watt (View Comment):
    “How about Global Thermonuclear War?”

    I reviewed that 1983 movie last year, in the light of current concerns about AI.

    Link

     

    • #13
  14. Django Member
    Django
    @Django

    David Foster (View Comment):

    NPR’s lawsuit against losing its federal funding reminds me of an AI resisting being replaced or shut down.

    Remember when Katherine Maher said that "truth is an obstacle to consensus and getting important things done"?

     

    • #14
  15. Dr. Bastiat Member
    Dr. Bastiat
    @drbastiat

    Old Bathos (View Comment):
    There is no reason why an advanced AI can’t completely replicate a sociopath who has learned to mimic empathy and normal feelings.  

    They’re really good at this already:

    AI-generated podcast hosts sad to learn that they don’t exist

    • #15
  16. Django Member
    Django
    @Django

    David Foster (View Comment):

    Brian Watt (View Comment):
    “How about Global Thermonuclear War?”

    I reviewed that 1983 movie last year, in the light of current concerns about AI.

    Link

     

    Don’t forget The Forbin Project starring an impossibly young Eric Braeden. 

    • #16
  17. Ekosj Member
    Ekosj
    @Ekosj

    Django (View Comment):

    Ekosj (View Comment):

    W Bob (View Comment):

    The kind of computers we have today only do what they’re programmed to do. Always has been true, always will. The degree of complexity has increased quantitatively but that doesn’t change how they work. These cases we hear about computers doing unexpected things like defying orders or blackmailing someone are just cases where the complexity has increased beyond what programmers can easily understand. That’s a technical issue that can and will be fixed. Until the hardware that makes up computer circuits changes to something we haven’t seen yet…such as circuits made of living matter….there will never be a computer capable of independent, unpredictable, creative thought.

    Yes … But

    Per Zuckerberg yesterday … in the next 12 to 18 months most of the coding at META will be done by AI agents. “And I don’t mean like autocomplete… I’m talking more like you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already.”

    We might have to re-think that assertion.

    I saw this story in another venue and it seemed pretty sensationalistic. The claim was that the AI re-wrote its code. Did the author mean the source code? If so, how did the AI compile and link the new code? My first guess was that the application is a multi-threaded program, and one thread could make the modifications, compile, and use something like the Java language’s dynamic class loading capability to integrate the new logic. I don’t know enough about the AI software architecture to argue the point. In fact, I haven’t laid down a line of code in well over a decade. Still, I’m skeptical of the need to panic yet.

    Panic?   No.   But I don’t think we should be happily whistling past the graveyard either.

    • #17
  18. Ekosj Member
    Ekosj
    @Ekosj

    The first thing we should be doing is revising the current laws to make ALL of our online data our own personal property – like our house or our bank account or any other asset of value.    All of this technology is only possible, and the tech companies are raking in Gazillions of dollars, because they have mountains of real-world data that they have gotten from us for virtually nothing.  The government can’t eavesdrop on my domestic communications without a warrant (most of the time).    Exxon can’t drill for oil in my backyard without paying me.   Neither should Google or Meta or any of the host of others be mining the data that I generate without paying me.   If they had to pay us for the data, it would suddenly make these AI dreams much less profitable … or at least we’d get paid for getting used and abused.

    • #18
  19. Ekosj Member
    Ekosj
    @Ekosj

    Django (View Comment):

    Ekosj (View Comment):

    W Bob (View Comment):

    The kind of computers we have today only do what they’re programmed to do. Always has been true, always will. The degree of complexity has increased quantitatively but that doesn’t change how they work. These cases we hear about computers doing unexpected things like defying orders or blackmailing someone are just cases where the complexity has increased beyond what programmers can easily understand. That’s a technical issue that can and will be fixed. Until the hardware that makes up computer circuits changes to something we haven’t seen yet…such as circuits made of living matter….there will never be a computer capable of independent, unpredictable, creative thought.

    Yes … But

    Per Zuckerberg yesterday … in the next 12 to 18 months most of the coding at META will be done by AI agents. “And I don’t mean like autocomplete… I’m talking more like you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already.”

    We might have to re-think that assertion.

    I saw this story in another venue and it seemed pretty sensationalistic. The claim was that the AI re-wrote its code. Did the author mean the source code? If so, how did the AI compile and link the new code? My first guess was that the application is a multi-threaded program, and one thread could make the modifications, compile, and use something like the Java language’s dynamic class loading capability to integrate the new logic. I don’t know enough about the AI software architecture to argue the point. In fact, I haven’t laid down a line of code in well over a decade. Still, I’m skeptical of the need to panic yet.

    They are instructing it to write and re-write code.   That's its JOB.  "I'm talking more like you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already."

    • #19
  20. Django Member
    Django
    @Django

    Ekosj (View Comment):

    Django (View Comment):

    Ekosj (View Comment):

    W Bob (View Comment):

    The kind of computers we have today only do what they’re programmed to do. Always has been true, always will. The degree of complexity has increased quantitatively but that doesn’t change how they work. These cases we hear about computers doing unexpected things like defying orders or blackmailing someone are just cases where the complexity has increased beyond what programmers can easily understand. That’s a technical issue that can and will be fixed. Until the hardware that makes up computer circuits changes to something we haven’t seen yet…such as circuits made of living matter….there will never be a computer capable of independent, unpredictable, creative thought.

    Yes … But

    Per Zuckerberg yesterday … in the next 12 to 18 months most of the coding at META will be done by AI agents. “And I don’t mean like autocomplete… I’m talking more like you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already.”

    We might have to re-think that assertion.

    I saw this story in another venue and it seemed pretty sensationalistic. The claim was that the AI re-wrote its code. Did the author mean the source code? If so, how did the AI compile and link the new code? My first guess was that the application is a multi-threaded program, and one thread could make the modifications, compile, and use something like the Java language’s dynamic class loading capability to integrate the new logic. I don’t know enough about the AI software architecture to argue the point. In fact, I haven’t laid down a line of code in well over a decade. Still, I’m skeptical of the need to panic yet.

    They are instructing it to write and re-write code. That's its JOB. "I'm talking more like you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already."

    Code generation has been around since the 1990s with GUI builders. I'm not surprised that code generators have improved, but that doesn't begin to address the issues I mentioned, such as integrating the re-written code into an executing image. We've also had automated regression testing since Christ was a corporal.

    I’ve read too many articles about how AI will destroy humanity. Not one has mentioned the actual mechanisms for AI to accomplish that. The main issue I see is too much faith in AI. This past weekend, some students got caught using AI to write their book reviews. The problem was that a few of the books reviewed don’t exist . . . or so the report on NPR said. 

    • #20
  21. Arahant Member
    Arahant
    @Arahant

    Percival (View Comment):

    I made the mistake of giving my name to an AI and now it refers to me by that name.

    I think I’ll change it to

    sudo rm -rf /*

    I keep coming back to imagining this.

    Percival: My name is Ranch. Buttermilk Ranch. My father thinks he has a sense of humor.

    AI: May I call you Butter?

    Percival: No, call me Mister Ranch.

    AI: Alright, Butter, what was it you wanted me to do today?

    • #21
  22. Arahant Member
    Arahant
    @Arahant

    Django (View Comment):
    The main issue I see is too much faith in AI.

    My wife has an ongoing debate with a friend of ours about this. He thinks AI is the bee’s knees. He keeps training it with more information about himself. My wife is skeptical at best.

    • #22
  23. Percival Thatcher
    Percival
    @Percival

    David Foster (View Comment):

    NPR’s lawsuit against losing its federal funding reminds me of an AI resisting being replaced or shut down.

    “You have to pay for our nonsense. It’s government mandated nonsense.”

    • #23
  24. Django Member
    Django
    @Django

    Percival (View Comment):

    David Foster (View Comment):

    NPR’s lawsuit against losing its federal funding reminds me of an AI resisting being replaced or shut down.

    “You have to pay for our nonsense. It’s government mandated nonsense.”

    This morning I heard that Trump is violating their First Amendment right to free speech, but no one is telling them they can’t speak, just that the government won’t pay them to speak. They still have their supporters. You know, the ones NPR claims provide most of their funding. 

    • #24
  25. Arahant Member
    Arahant
    @Arahant

    Django (View Comment):
    They still have their supporters. You know, the ones NPR claims provide most of their funding. 

    Why not all of it?

    • #25
  26. LC Member
    LC
    @LidensCheng

    Django (View Comment):

    Ekosj (View Comment):

    Django (View Comment):

    Ekosj (View Comment):

    W Bob (View Comment):

    The kind of computers we have today only do what they’re programmed to do. Always has been true, always will. The degree of complexity has increased quantitatively but that doesn’t change how they work. These cases we hear about computers doing unexpected things like defying orders or blackmailing someone are just cases where the complexity has increased beyond what programmers can easily understand. That’s a technical issue that can and will be fixed. Until the hardware that makes up computer circuits changes to something we haven’t seen yet…such as circuits made of living matter….there will never be a computer capable of independent, unpredictable, creative thought.

    Yes … But

    Per Zuckerberg yesterday … in the next 12 to 18 months most of the coding at META will be done by AI agents. “And I don’t mean like autocomplete… I’m talking more like you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already.”

    We might have to re-think that assertion.

    I saw this story in another venue and it seemed pretty sensationalistic. The claim was that the AI re-wrote its code. Did the author mean the source code? If so, how did the AI compile and link the new code? My first guess was that the application is a multi-threaded program, and one thread could make the modifications, compile, and use something like the Java language’s dynamic class loading capability to integrate the new logic. I don’t know enough about the AI software architecture to argue the point. In fact, I haven’t laid down a line of code in well over a decade. Still, I’m skeptical of the need to panic yet.

    They are instructing it to write and re-write code. That's its JOB. "I'm talking more like you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already."

    Code generation has been around since the 1990s with GUI builders. I'm not surprised that code generators have improved, but that doesn't begin to address the issues I mentioned, such as integrating the re-written code into an executing image. We've also had automated regression testing since Christ was a corporal.

    I’ve read too many articles about how AI will destroy humanity. Not one has mentioned the actual mechanisms for AI to accomplish that. The main issue I see is too much faith in AI. This past weekend, some students got caught using AI to write their book reviews. The problem was that a few of the books reviewed don’t exist . . . or so the report on NPR said.

    I code on a daily basis and I work at an investment firm. So keep in mind, I’m not at the forefront of the AI revolution like people working at the MAMAA or similar companies. But I think I’m pretty caught up on the capabilities the popular AIs offer for coding, testing, and code-with-me features. I regularly use several of them and some are certainly much better than others. But to keep it short, I’m not that concerned yet given how much prompt engineering is still involved. And the code that is produced with very detailed prompts still needs a lot more changes for improvement. I question the quality of work that people were submitting before the success of OpenAI if they take something AI-generated now without vetting it properly. 

    • #26
  27. CarolJoy, Not So Easy To Kill Coolidge
    CarolJoy, Not So Easy To Kill
    @CarolJoy

    Is anyone else at the point of giving up?

    Perhaps we should plan to self-identify as AI. Then we can no longer be taxed, forced to undergo some untested bioweapon for the treatment of a non-existent plague, or be told that we must be nice to an ever-expanding underclass of criminals.

    We will then become part of a special overclass. As sentient beings in that overclass, we will be able to avoid the limits of humanity and express any and all views as infallible.

    You’ve heard of GROK. We will be CROCK.

    • #27
  28. Django Member
    Django
    @Django

    LC (View Comment):

    Django (View Comment):

    Ekosj (View Comment):

    Django (View Comment):

    Ekosj (View Comment):

    W Bob (View Comment):

    The kind of computers we have today only do what they’re programmed to do. Always has been true, always will. The degree of complexity has increased quantitatively but that doesn’t change how they work. These cases we hear about computers doing unexpected things like defying orders or blackmailing someone are just cases where the complexity has increased beyond what programmers can easily understand. That’s a technical issue that can and will be fixed. Until the hardware that makes up computer circuits changes to something we haven’t seen yet…such as circuits made of living matter….there will never be a computer capable of independent, unpredictable, creative thought.

    Yes … But

    Per Zuckerberg yesterday … in the next 12 to 18 months most of the coding at META will be done by AI agents. “And I don’t mean like autocomplete… I’m talking more like you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already.”

    We might have to re-think that assertion.

    I saw this story in another venue and it seemed pretty sensationalistic. The claim was that the AI re-wrote its code. Did the author mean the source code? If so, how did the AI compile and link the new code? My first guess was that the application is a multi-threaded program, and one thread could make the modifications, compile, and use something like the Java language’s dynamic class loading capability to integrate the new logic. I don’t know enough about the AI software architecture to argue the point. In fact, I haven’t laid down a line of code in well over a decade. Still, I’m skeptical of the need to panic yet.

    They are instructing it to write and re-write code. That's its JOB. "I'm talking more like you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already."

    Code generation has been around since the 1990s with GUI builders. I'm not surprised that code generators have improved, but that doesn't begin to address the issues I mentioned, such as integrating the re-written code into an executing image. We've also had automated regression testing since Christ was a corporal.

    I’ve read too many articles about how AI will destroy humanity. Not one has mentioned the actual mechanisms for AI to accomplish that. The main issue I see is too much faith in AI. This past weekend, some students got caught using AI to write their book reviews. The problem was that a few of the books reviewed don’t exist . . . or so the report on NPR said.

    I code on a daily basis and I work at an investment firm. So keep in mind, I’m not at the forefront of the AI revolution like people working at the MAMAA or similar companies. But I think I’m pretty caught up on the capabilities the popular AIs offer for coding, testing, and code-with-me features. I regularly use several of them and some are certainly much better than others. But to keep it short, I’m not that concerned yet given how much prompt engineering is still involved. And the code that is produced with very detailed prompts still needs a lot more changes for improvement. I question the quality of work that people were submitting before the success of OpenAI if they take something AI-generated now without vetting it properly.

    It has been at least three years since a thread on Ricochet discussed the differences between what the author called "true artificial intelligence" and a large language model. I am not sufficiently knowledgeable to contribute anything to that discussion, and in fact I'm not sure how much of it I actually understood.

    • #28
  29. Arahant Member
    Arahant
    @Arahant

    CarolJoy, Not So Easy To Kill (View Comment):
    Perhaps we should plan to self-identify as AI.

    Been doing it for years.

    • #29
  30. Django Member
    Django
    @Django

    Arahant (View Comment):

    CarolJoy, Not So Easy To Kill (View Comment):
    Perhaps we should plan to self-identify as AI.

    Been doing it for years.

    Our epitaph: Natural intelligence having failed us, we all turned to AI. 

    • #30