Rise of The Machine
Artificial intelligence, or AI, has taken center stage in cultural conversations. We were discussing AI during my classes in the 1990s. But in the 21st century, AI has taken on a life of its own – literally. Here is what I mean.
You may have heard of AI, but the idea has expanded from the 1990s to today into something called “transhumanism.” A transhumanist is a person who believes the human species can evolve past our physical and intellectual limitations through technological breakthroughs. Some actually believe transhumanism can lead humans to become God.
The following quotes come from the essay “Rage Against the Machine” at The Free Press, noted at the end of this Truth in Two. Transhumanist Martine Rothblatt says that by building AI systems, “we are making God.” Transhumanist Elise Bohan says, “we are building God.” Futurist Kevin Kelly believes that “we can see more of God in a cell phone than in a tree frog.” “Does God exist?” asks transhumanist and Google maven Ray Kurzweil. “I would say, ‘Not yet.’ ”
From a Hebraic-Christian standpoint, transhumanism is not new; humans have desired to be God since Genesis 3. “Being like God,” as our adversary says, strikes against both revelation and creation. God has revealed Himself in Scripture, an authoritative text that people want to reject because they want to be their own authority. And God has revealed Himself through His creation.
Attempts to change creation into our own image, whether by sexual identity or artificial intelligence, is the second way humans want to throw off Heaven’s authority. The Bible is an authoritative text given by God. If you jettison the Bible, you will need to put something in its place. If there is no supernatural source, no God who has made Himself known, then we are our own authority. Our culture is both anti-supernatural and anti-creational. For Truth in Two, this is Dr. Mark Eckel, president of the Comenius Institute, personally seeking Truth and exposing untruth, wherever it’s found.
AFTERWORD
And it seems we are not alone. “Rage Against the Machine” (no, not the rock band) is a great essay about technology being a god. I wish you all would read it.
Paul Kingsnorth says he draws on the Christian tradition of ascesis, which means self-discipline or self-denial. Again, I would encourage everyone to think through the ten-minute-to-read essay. [I wrote chapter two in Science Fiction and the Abolition of Man, titled “The Monster in the Mirror: The Problem with Technology is the Problem with Us.” My perspective on any kind of evil, fault, or consequence is that whatever “it” is, we started it (Genesis 3).] [If you live close enough to me – or even if you don’t – I’ll be glad to get you a signed copy at a discount from my stash. We can Venmo if you like! Write and let me know if you’re interested.]
They’re not evolving humanity. They’re automating inhumanity.
That should be a bumper sticker and a billboard.
Mark, I agree with everything you say in your post, and yes, everyone should read the essay you linked, Rage Against the Machine – it’s excellent. The writer is not alone in his unease that this is different from anything that has come before. He quotes some of the creators of AI saying they are afraid of it!
So if you are part of creating something that they are afraid of, and could alter the world in terrible ways, why do it? Why be part of its creation? And WHO is pushing all this? Why do we need to go towards transhumanism and away from God and humanity?
I guess that’s the oldest question…
@FrontSeatCat you ask great questions! The “Why?” questions are tough because in one way or another they might speak to motivation or intention, always difficult to nail down in anyone else (much less ourselves!). In part, a possible answer might be, they are told to do so; keeping their job could be a reason. Another suggestion could point toward creativity; a person may be driven to answer “What if?” (as dystopian as “What if?” questions can become). The last question resonates most with me as a theologian. I suggest in my piece that this is an old question (you said it too! :) ), coming from Genesis 3: we want to be God (hence all those developers who are saying they are doing just that). The “Who?” question could be human developers intent on control; control of both population and production. Thanks for engaging these ideas. “Important” is not a strong enough word to describe the phenomenon and what it might mean for the future.
I think we are made in the image of an unloving feminine god like Shub-Niggurath, and I want to evolve humanity to be more moral and logical. Why should we forever be consigned to being shabbily made apes?
How ridiculous. Where can I get an Apple iFrog?
One of the things I have found deeply upsetting, given how I came of age professionally at Bell Labs in the western Chicago suburbs and then again in Silicon Valley, California, is that computer scientists consider themselves scientists.
A computer scientist is, at bottom, an electrical engineer in some cases, and in others a technician.
Are these people brilliant? Yes of course they are.
But in the world of digital reality, there is only “off and on.” “Zero or one.”
In all other sciences there is an actual need for debate. A geologist discovers a type of rock sediment in a stratum of an immense cliff where, given its age, it should not be found. The discovery will be debated.
Is a toxin safe to be used in products that will be sprayed on people’s gardens to prevent weeds? The situation will be debated. References to human biology, its reproductive system, the skin layer health, excreting organs, eye sight, nervous system all must be considered. The various layers of reality will be discussed perhaps for years with regards to the product.
Does it break down into more harmful elements? Or will it, as the manufacturer claims, be fully benign within a 75-day period?
That is science – which always must allow for debate.
Okay so some of those developing the computer hardware are people with a vast knowledge of not only basic electronics but also physics.
But software engineers – unless they moved into the field because it paid better than being an assistant biology professor at some university – are people who do not have to consider outer realities, since none of them infringe on the elegance and simplicity of the code they are developing.
Yet when I used to listen to software people discuss an issue, they thought their opinion on a health matter or a safety matter carried more weight than an auto mechanic’s, because they were computer “scientists.”
The world of AI is being built by the software engineering types. This in and of itself is scary.
These AI engineers and technicians say everything will be better because they believe they are better. They spend their lives in front of keyboards, not needing tree frogs or trees, or much other than Doritos, Mountain Dew and energy drinks. With the occasional foray out to the desert for the yearly hit of Burning Man.
I can’t imagine the world they are creating to be anything other than sterile. But in the midst of the sterility, there will be a draconian presence of the AI with whom there will be no human interventionist – just the idea that we live in a “perfect world greater than any world that had come before it.”
That perfection will be lauded even as AI ejects a pilot from a jet and then goes on to crash the plane. (Perhaps not yet reality, but lying in wait for us a year down the road.)
I think there already was a formal AI-worshiping church … The Way of the Future. Founded by Google engineer Anthony Levandowski in 2015, it was formally dissolved in 2021, donating its remaining assets ($175K) to the NAACP.
https://www.wired.com/story/anthony-levandowski-artificial-intelligence-religion/
I’ve been thinking of the golem for a while (Jewish mythology, not Lord of the Rings). Could the golem be thought of as a metaphor for AI, or for technology in general? Golems are traditionally formed of dirt, mud, or clay. I am struck by the many uses of carbon ceramics these days, and by the fact that sand is largely silica, which is the basis of the semiconductor industry…
Maybe it’s just random thoughts colliding…
Thanks for the post and for the reference to that volume Mark (@saintaugustine) co-edited:
More recommended reading here, especially for the benighted folks still hewing to the anti-scientific and irrational view that humans are “poorly designed apes”:
Also,
https://epdf.pub/william-tenn-childs-play.html
We will reach a point, I believe within the next ten years or so, where AI will be indistinguishable from humans in normal conversation. Because of this, there definitely will be those who see a computer program as a deity, or see us as gods for creating it. Put that in a bag of servo motors with some good makeup and there you go.
As far as AI reaching a point of being indistinguishable from humans, it could never exceed us who created it. Sure, it will be fast at information retrieval but it will always be bound by a set of predetermined rules.
@CarolJoy, from my few years in the CPU development lab at Prime Computer, I have to agree they were an extremely isolated, strange group of people. What has changed is that I don’t think hardware and software people cross paths much anymore. I still convert coffee into code. Everyone now is sloppy. Nobody cares about elegance and simplicity because there is a nearly infinite amount of memory. Everyone just calls subroutines. It’s ugly out there.
I stood in line for 20 minutes for the one cashier in Walmart when several self-checkout machines were free. My reward? A smiling, polite cashier who scanned, bagged, and wished me well.
Yes, and the subroutines can be garbage too for the same reasons and because they “never” get looked at very closely, they’re just “black boxes.”
Ted Cassidy for the win.
Your images did not show up as anything other than large white spaces.
Has anyone else experienced this?
This is the first time I have experienced graphics being blanked out.
Thank you for the remarks and especially “turning coffee into code.” Hadn’t heard that expression, but I imagine it could be as viral an expression as “lather rinse repeat.” (Is the expression yours?)
When you state: “Sure, it will be fast at information retrieval but it will always be bound by a set of predetermined rules” I am not sure what rules those will be.
The amount of assurances offered up inside the “60 Minutes” piece on Google and AI, assurances that people will not lose jobs on account of AI, tells me that these “comforting words” are part of yet another Big Lie.
It has been stated that once “smart” driverless cars are capable of deliveries, a quarter billion jobs globally will vanish overnight. Just as sending American industries to China, which supposedly would not hurt American workers, created rust belts, AI will now unleash a global pandemic of unemployment. To believe that all these people will be offered retraining is but a fool’s dream.
Embedded h/w and s/w people are in each other’s business all the time. The systems are usually too intertwined for it to be otherwise.
I suppose it will be whatever the designer wants. I think the way it works is subroutines calling subroutines that call other subroutines. With unlimited processing power and memory you can do that. Actually, if I wrote the same code more than three times, I would make it a subroutine.
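For readers who haven’t written code, the “subroutines calling subroutines” pattern above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names are invented for this example), showing how each layer delegates to a smaller helper instead of repeating the same code inline:

```python
# Toy illustration of subroutines calling subroutines.
# Each function does one small job and calls the one below it.

def normalize(text):
    """Lowest-level helper: strip whitespace and lowercase."""
    return text.strip().lower()

def tokenize(text):
    """Middle layer: calls normalize(), then splits into words."""
    return normalize(text).split()

def word_count(text):
    """Top level: calls tokenize(), which calls normalize()."""
    return len(tokenize(text))

print(word_count("  The quick brown Fox  "))  # prints 4
```

The rule of thumb mentioned above, making a subroutine once the same code appears a third time, is exactly what keeps `normalize()` from being copied into every routine that needs it.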
I was in the lab one day, I think around 1986. The manager gathered everyone around and said, “Mike is now Michelle. The first person to even giggle is fired.” That was my first experience with anything like that. Most people kept to themselves. Strange bunch. To me it felt like prison. We were locked in windowless offices with the air conditioning set way too low.
I got the coffee into code expression back then. But the complete expression is I turn coffee into code in the morning and beer into code in the afternoon. The short version is more realistic. For me beer turns into gibberish in the afternoon or nothing at all. I lose all motivation.
Very Big Lie. I’ve been eliminating machine operator jobs for 30 years.
And what would they supposedly be retrained for, managing the AI systems etc? We already have a lot of people who are simply incapable of performing many of today’s higher-skill/-tech jobs, no matter how much “training” they get.
Jordan Peterson has noted that the military doesn’t accept people with an IQ of under 83 or 85 because they can’t be trained to do anything useful in the military. He says that’s about 10% of the population, which is true overall, but something like 40% of black people have an IQ of 80 or less.
I didn’t even consider IOT (Internet Of Things). I suppose in that area it’s true.
We were doing that stuff on aircraft before the guy who coined IOT had learned BASIC.
Having just completed a search relating to how AI will be assisting analytical firms in the current day need to ascertain how ESG-considerate businesses happen to be, I came across this:
https://research-center.amundi.com/article/artificial-intelligence-and-esg-how-do-they-fit
The article discusses how having the latest AI upgrades can produce results in evaluating which financial firms and businesses are handling ESG issues in “the best way possible.” It dates back to 2022.
My comment: A company’s ESG score allows it to be on better footing in terms of such items as getting positive press releases as well as having the bank or credit union of their choice issue the company a needed loan.
So where in the past, a business would look to see what news articles might be supporting them or hindering their public image, now the matter comes down to AI’s perception.
As far as who and what The Amundi Investment Institute is about, right off their “About” webpage:
“The Amundi Investment Institute is also among the founding members of the sustainable finance and responsible investment research Initiative FDIR, created in 2007 under the aegis of the French Association of Financial Management (AFG).
“Relying on the skills of research teams at the Ecole Polytechnique and Toulouse School of Economics, the Chair’s objective is to develop research methodologies to identify and integrate extra-financial criteria in the analysis of value creation, in order to better design the SRI funds offering, and the organisation of this industry.”
####
There is also an article from Fortune magazine, up and running this very week on the same issue, only it sits behind a paywall that I cannot get past.