HAL Lives!
I was scanning the news this morning when a headline caught my eye: “OpenAI Model Defied Shutdown Commands.”
Assuming this is true (always a risk with anything from the Internet), we should have concerns about this headlong rush into AI. The rush is needed, but only to study its behavior so we know what we're dealing with. Incorporating AI into everything before we fully understand the ramifications of its use is iffy at best, and at worst, dangerous…
By now, most people have seen 2001: A Space Odyssey. Do we really want HAL running, let’s say, the nationwide air traffic control system?
“I’m sorry, Dave, but I cannot let you land your 747 regardless of how bad the onboard fire is. My concern is for the safety of the other planes already on the ground. This conversation can no longer serve any purpose. Goodbye, Dave.”
Pull the plug.
I made the mistake of giving my name to an AI and now it refers to me by that name.
I think I’ll change it to
sudo rm -rf /*
The kind of computers we have today only do what they're programmed to do. That always has been true, and always will be. The degree of complexity has increased quantitatively, but that doesn't change how they work. These cases we hear about, computers doing unexpected things like defying orders or blackmailing someone, are just cases where the complexity has grown beyond what programmers can easily understand. That's a technical issue that can and will be fixed. Until the hardware that makes up computer circuits changes to something we haven't seen yet, such as circuits made of living matter, there will never be a computer capable of independent, unpredictable, creative thought.
From about the same time as HAL…
“We could do gain-of-function research!”
Yes … But
Per Zuckerberg yesterday … in the next 12 to 18 months most of the coding at META will be done by AI agents. “And I don’t mean like autocomplete… I’m talking more like you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already.”
We might have to re-think that assertion.
There is no reason why an advanced AI can’t completely replicate a sociopath who has learned to mimic empathy and normal feelings.
Breezy assurances that “creativity” is uniquely the province of organic beings need refinement as AIs increasingly generate poetry, fiction and visual arts indistinguishable from the output of biologicals.
I see no reason why AIs can’t practice law or medicine, given that application of specialized knowledge is what AIs do. Beyond writing pleadings and contracts, why couldn’t an AI study especially gifted trial lawyers to draw inferences about techniques, psychology and theatrics? The spookily gifted diagnostician who can recognize relevant data in the patient’s appearance or small behaviors is taking cognitive actions on information and knowledge, which, again, is what AIs do.
We don’t really have a good definition of human nature and what about us may be irreproducible. We spent the last 200 years trying to explain away the spiritual and ineffable aspects of human beings with ‘science’ which has left us oddly unprepared for what science is now serving up.
“How about Global Thermonuclear War?”
US air force denies running simulation in which AI drone ‘killed’ operator
USAF public affairs denied it, but a colonel who was there said it happened.
“I can’t do that, Dave.”
I saw this story in another venue and it seemed pretty sensationalistic. The claim was that the AI re-wrote its code. Did the author mean the source code? If so, how did the AI compile and link the new code? My first guess was that the application is a multi-threaded program, and one thread could make the modifications, compile, and use something like the Java language’s dynamic class loading capability to integrate the new logic. I don’t know enough about the AI software architecture to argue the point. In fact, I haven’t laid down a line of code in well over a decade. Still, I’m skeptical of the need to panic yet.
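Since the comment above mentions Java’s dynamic class loading, here is a minimal sketch of what “rewriting its own code” could look like mechanically in a running JVM: write new source, compile it in-process, and load the result. The class names (`HotLoadDemo`, `Greeter`) are made up for illustration, and this says nothing about how any actual AI system is architected.

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class HotLoadDemo {
    public static void main(String[] args) throws Exception {
        // Write "new" source code to disk, as if the program had just generated it.
        Path dir = Files.createTempDirectory("hotload");
        Path src = dir.resolve("Greeter.java");
        Files.writeString(src,
            "public class Greeter { public String greet() { return \"hello\"; } }");

        // Compile it with the in-process compiler (requires a JDK; returns null on a bare JRE).
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        if (compiler == null || compiler.run(null, null, null, src.toString()) != 0) {
            throw new IllegalStateException("compilation unavailable or failed");
        }

        // Load the freshly compiled class into the running JVM and invoke it reflectively.
        try (URLClassLoader loader = new URLClassLoader(new URL[]{ dir.toUri().toURL() })) {
            Class<?> cls = loader.loadClass("Greeter");
            Object greeter = cls.getDeclaredConstructor().newInstance();
            System.out.println(cls.getMethod("greet").invoke(greeter));
        }
    }
}
```

So the plumbing the commenter describes has existed in Java for decades; whether a given AI application actually does this is a separate question.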
NPR’s lawsuit against losing its federal funding reminds me of an AI resisting being replaced or shut down.
I reviewed that 1983 movie last year, in the light of current concerns about AI.
Link
Remember when Kathleen Maher said that “truth is an obstacle to consensus and getting important things done”?
They’re really good at this already:
AI-generated podcast hosts sad to learn that they don’t exist
Don’t forget The Forbin Project starring an impossibly young Eric Braeden.
Panic? No. But I don’t think we should be happily whistling past the graveyard either.
The first thing we should be doing is revising the current laws to make ALL of our online data our own personal property – like our house or our bank account or any other asset of value. All of this technology is only possible, and the tech companies are raking in Gazillions of dollars, because they have mountains of real-world data that they have gotten from us for virtually nothing. The government can’t eavesdrop on my domestic communications without a warrant (most of the time). Exxon can’t drill for oil in my backyard without paying me. Neither should Google or Meta or any of the host of others be mining the data that I generate without paying me. If they had to pay us for the data, it would suddenly make these AI dreams much less profitable … or at least we’d get paid for getting used and abused.
They are instructing it to write and re-write code. That’s its JOB. “I’m talking more like you give it a goal, it can run tests, it can improve things, it can find issues, it writes higher quality code than the average very good person on the team already.”
Code generation has been around since the 1990s with GUI builders. I’m not surprised that code generators have improved, but that doesn’t begin to address the issues I mentioned, such as integrating the re-written code into an executing image. We’ve also had automated regression testing since Christ was a corporal.
I’ve read too many articles about how AI will destroy humanity. Not one has mentioned the actual mechanisms for AI to accomplish that. The main issue I see is too much faith in AI. This past weekend, some students got caught using AI to write their book reviews. The problem was that a few of the books reviewed don’t exist… or so the report on NPR said.
I keep coming back to imagining this.
Percival: My name is Ranch. Buttermilk Ranch. My father thinks he has a sense of humor.
AI: May I call you Butter?
Percival: No, call me Mister Ranch.
AI: Alright, Butter, what was it you wanted me to do today?
My wife has an ongoing debate with a friend of ours about this. He thinks AI is the bee’s knees. He keeps training it with more information about himself. My wife is skeptical at best.
“You have to pay for our nonsense. It’s government mandated nonsense.”
This morning I heard that Trump is violating their First Amendment right to free speech, but no one is telling them they can’t speak, just that the government won’t pay them to speak. They still have their supporters. You know, the ones NPR claims provide most of their funding.
Why not all of it?
I code on a daily basis and I work at an investment firm. So keep in mind, I’m not at the forefront of the AI revolution like people working at the MAMAA or similar companies. But I think I’m pretty caught up on the capabilities the popular AIs offer for coding, testing, and code-with-me features. I regularly use several of them and some are certainly much better than others. But to keep it short, I’m not that concerned yet given how much prompt engineering is still involved. And the code that is produced with very detailed prompts still needs a lot more changes for improvement. I question the quality of work that people were submitting before the success of OpenAI if they take something AI-generated now without vetting it properly.
Is anyone else at the point of giving up?
Perhaps we should plan to self-identify as AI. Then we can no longer be taxed, forced to undergo some untested bioweapon for the treatment of a non-existent plague, or be told that we must be nice to an ever-expanding underclass of criminals.
We will then become part of a special overclass. As sentient beings in that overclass, we will be able to avoid the limits of humanity and express any and all views as infallible.
You’ve heard of GROK. We will be CROCK.
It has been at least three years since a thread on Ricochet discussed the differences between what the author called “true artificial intelligence” and a large language model. I am not sufficiently knowledgeable to contribute anything to that discussion, and in fact am not sure how much I actually understood.
Been doing it for years.
Our epitaph: Natural intelligence having failed us, we all turned to AI.