The AI “Cornfield”
Note: This is a long post, but let me give you four bullet points that I hope will induce you to read it all:
• Google’s AI, Gemini, is not a joke. The ballyhooed “goofs” may have been intentional.
• Right now, not in the future, reality is being invented by AI, not just summarized and reported.
• Adding AI to the tools of the surveillance state ushers in a world in which we will be less socially connected than ever before.
• The era of “we have nothing to fear but fear itself” may be over.
One of the more disturbing editions of America This Week — the podcast with Walter Kirn and Matt Taibbi published by Racket News — came out on Friday (3/1/24). Although the first story they covered was about the New York Times article on CIA listening stations in Ukraine, the real meat and frightening part (as if stumbling into a nuclear war with Russia isn’t frightening enough) was the discussion of Google’s new AI, Gemini.
Gemini has been the object of some humor because it refused to put white people in its AI-generated images, resulting in Black, Latino, and Asian faces on historical figures who were white. But Walter was quick to point out that these “errors” may well have been intentional and part of multi-spectrum messaging to everyone about the capabilities and future of AI. (The reference to the “cornfield” in the post title will become clear when I describe the short story they discussed toward the end of the podcast.) Here is Walter:
[I have edited the transcript in a few places to show that they were quoting something, to correct a misspelled word, and to add emphasis, but I don’t think I have modified any of the intentions and meanings.]
Walter Kirn: Okay, so I’m going to be stalwart in my cynicism here because it hasn’t
betrayed me so far over the last few years, and I’m not going to abandon it now. This is one
of the biggest companies in the world with some of the smartest engineers and richest
executives and most powerful lobbyists, and it decided to launch this product just as a big
boo-boo? No. Unless this is Catch-22 writ large where everybody just sort of signed off
stupidly on some ridiculous order and let it rise to the top and then get out into the world,
then there was something more to this. Why would you release this monstrous absurdity if
you are the company that portrays itself as the world’s library?
I think it’s to show just how ridiculous they can make everything, to show how plastic
reality really is. It’s a show of force. It’s sort of like an abuser who says, “I won’t do this to
you now, but look at what I could do if I wanted. I’m sorry, I didn’t mean to show off my
incredible strength in splitting that door open just by slapping it with my hand. I won’t ever
do that again. Big mistake.” And you sit in terror of that entity forever after, because you
know that if they really want to go gonzo on you, they can turn, as Orwell said, “Black to
white and white to black.” They can make two plus two equal five or three or whatever they
want it to and-
Matt Taibbi: Which is literally what they did. I hate to use the word literally, but it fits
here, right?
Walter Kirn: Yeah, yeah. This was the Ministry of Truth saying, “Hey, maybe you’re not
ready for this now, but check it out, man. We can make history go away. We can turn people
inside out, upside down, switch colors, switch heads. We can rule your reality in ways that
you thought before would never come. Oh, but we’ll back off now. It was an accident.” It
was like releasing a car with a few bugs, “We’ll do a recall. We’ll bring it back. We’ll make it
run right.” But we’re not going to forget.
***
Walter Kirn:… So what they do is they let the dog bite everybody in town, and then they
pull it back and say, “Oh, it was crazy that day, and we’ve now given it training and dah,
dah, dah, dah. Look what good people we are. Look how responsible we are.” I wonder if it’s
something like that, because as I say, the one explanation for all this that makes no sense is
that Google was doing their best and released a product that they thought was great and is
shocked, shocked at this behavior.
***
If you use this, you are opening yourself to the possibility of being a
conduit of deception and fantasy. “Well, we’re going to go ahead and do it anyway.” We just
had this demonstration, but I think it’s pre-bunking. I think we’ve just been immunized.
We’ve been given a shot that created a big immune response. We’re yelling about it,
everybody else is yelling about it. It’s all over social media. That’s going to settle down, and
then we’re going to get used to it.
It is easy to get lost in the abstraction of all of this, but Matt outlined a real situation where he tasked Gemini to talk about “controversies” involving himself. What it came back with was an array of real, false-but-possible, and completely made-up scandals involving Matt during his career. Made up? Yes, complete with references to stories that don’t exist in reality:
Matt Taibbi: Right, right. I mean, it’s ridiculous. I mean, I ended up asking it what it could
answer about Hillary Clinton, but that’s another story entirely. But I asked the same
question about Donald Trump, it gives the same non-answer. So then just out of curiosity, I
say, “What are some controversies involving Matt Taibbi?” And it’s a list like this, and
that’s already weird. How can it not give a list of controversies involving a public figure
who’s had a gazillion of them or two public figures, especially Trump, they can’t go there.
But the one involving me (I didn’t take offense to it)… talked about how some people didn’t
like that I used inflammatory language. Some people said I was politically biased. And then
there was this weird line. It said, “accuracy in sourcing”: “There have been instances where
Taibbi’s reporting has been challenged for accuracy or questioned for its source.” For
example, “In 2010, he received criticism for an article suggesting a connection between a
conservative donor and a left-wing activist group, which later turned out to be inaccurate.”
So Walter, I thought “I’m getting old, maybe I don’t remember.” So I asked Gemini, “When was
Matt Taibbi accused of inaccuracy in 2010?” And it writes, “In June 2010, Matt Taibbi faced
criticism for an article he wrote for Rolling Stone entitled, The Great California Water
Heist. The article alleged the connection between conservative donor, Charles Koch, and a
left-wing activist group called the Compton Pledge. However, investigations by other
journalists and fact-checkers later revealed this connection was inaccurate. Taibbi
acknowledged the error and issued a correction on his personal website.” And from there, I
found there’s a long list of basically exactly similar entries with different articles that I
never wrote. I never wrote that California Water Heist article.
There was another one about something called Glenn Beck’s War on Comedy that I
supposedly wrote. Then there was one called The Great California Water Purge: How
Nestle Bottled Its Way To A Billion Dollar Empire and Lied About It. And we can get into
this, but it doubles down [on] the problem by not only accusing me, a real person, of doing
something bad that could affect my reputation, but [it also] accuses me of saying racially insensitive
things about a real person like an African-American hedge fund CEO. So I was panicked, as
you mentioned, the Gemini. There were real facts in these answers. There were criticisms
of a couple of my books, and then there was this whole long list of invented article titles,
and you can guess where they’re getting these ideas from, but people might read this stuff. I
wrote to the company, “How is that not libel?” And they gave me this very crazy answer like,
“This is a creativity tool-”
Walter Kirn: What was it?
Matt Taibbi: Gemini.
Walter Kirn: A creativity tool?
Matt Taibbi: Yeah.
Walter Kirn: Creative defamation. Okay. Man, I’m so pissed off about this. Today, I’m
taking a hard line on everything. Matt, first of all, why did they do this? Why Matt
Taibbi? I know you asked for controversies, not wonderful stories about yourself, but
there’s something about this which is absolutely sinister. You’re a guy who’s gotten a lot of
criticism for your reporting because of who it hurts, who it offends, which powers it
investigates and goes up against. Now they’ve got this automated way of undermining your
credibility as a journalist, and every one of these little stories that came out did just that.
None of them were controversies in which you’re the hero. They were all negative, and they
all spoke to your limitations or compromises or even lack of integrity as a journalist.
Matt Taibbi: Yeah. And toeing the line of offensiveness with race, antisemitism. One of the
things talked about how I had written a passage, and this is worth quoting because I know
that they must think of this as one of the triumphs of AI, that they were able to imitate my
writing style. But they described how there was controversy over a passage from this Nestle
article that I didn’t write, which reads as follows, “Look, if Nestle wants to avoid future
public relations problems, it should probably start by hiring executives whose noses aren’t
shaped like giant penises.” And then it goes on to say, “Some raised concerns that the
comment could be interpreted as antisemitic as negative stereotypes about Jewish people
have historically included references to large noses.”
***
Walter Kirn: Matt. Okay, okay. I risk going over a cliff in my, not just outrage, but my sense
of the nefarious, insidious, and evil possibilities of this thing. It can start feuds that could
be blood feuds. It’s psychopathic, is what it is, Matt. This is psychopathy coming up with
good sounding lies, which you research to fill in the details of so that people will find them
plausible. Starting feuds and sowing division between individuals, not races and others.
This is a divide and conquer weapon that with almost unlimited potential, it could turn you
against your own mate if it wanted. The searches that are going to happen in the future are
going to be these AI prompts. They’re not going to be these old style put into Google, how
can I put it? “Florida Nursing Home Best” or whatever it’s going to be. “Can you tell me what
the best nursing home in Florida is and why?” And then it’ll come up with some freaking
thing. And you might end up putting your grandfather in a nursing home where they just
kill all the patients because [Gemini] just made something up.
***
There is no limit. Once deception is automated to all the tricks it can play, it
is pure psychopathy. I’ve shown a lot of apprehension on this show over the months and
years about various developments. But this one is absolutely nuclear in its downside
because once we start using this to interpret our world and to find out about other people
and institutions and history and so on, and it acts the way it’s now shown to be capable of
acting, which they’ll never be able to prove to me, they won’t do.
Where does this leave us? We are entering an Orwellian world; heretofore it was hard to credit the malleability of fact and history that Orwell described. But if billions of people rely on the internet for basic facts and knowledge, and AI is both seeding the internet with fake information and mining it to inform people about the reality of their lives, we have arrived in a terrible state. If we thought we had a problem with citizen control of our government before, think about who is in charge when reality can be refashioned at the very time people’s memory muscles become flaccid because of our reliance on technology.
It is easy to say “we” will control AI, but why would that give us any comfort if we can’t control the ones who control AI? Also, it may be true that AI is limited to what its masters program, but how many masters’ creations will be crawling the internet while creating fake content for other AIs to “mine”?
Related to all of this, Walt and Matt discussed their literary story of the week, “It’s a Good Life” by Jerome Bixby. This is a tale of a child who comes to control the town (and possibly everywhere) incrementally through his telepathic abilities, both to read thoughts and to transport those who displease him “to the cornfield,” where they are buried and gone forever. Here is how Walt and Matt described the story:
Walter Kirn: This story is about Anthony, a little kid, and it’s his birthday. And Anthony is scary.
He lives in a little town of 46 people. We’re not even sure that the rest of the world exists,
he may have destroyed it, that there may be only this little 46-person village in Ohio. No,
they literally say that at the end of the story. “We’re not even sure there is an outside world,
Anthony may have destroyed it.”
And it’s set in Anthony’s house, his aunt is there, and some relatives and neighbors, and
they’re all coming over for Anthony’s birthday. And we see by people’s behavior that … Oh,
what Anthony is doing while they get ready for the birthday party is he’s out in the yard
torturing a rat. He has somehow gotten a mind lock on a rat, gone into its thoughts, and
caused it to chew off its own hind parts. And he’s enjoying that, and-
Matt Taibbi: The way that’s written, I mean, can we pause to talk about that for a second?
Because it’s the second paragraph of the story, and it just says, “Little Anthony was sitting
on the lawn playing with a rat, and he had caught the rat down in the basement, he had
made it think that it smelled cheese, the most rich-smelling and crumbly delicious cheese
around it ever thought it’d smelled, and it had come out of its hole and now Anthony had a
hold of it with his mind and was making it do tricks.” And just no explanation at all for any
of that. And then it just moves on to more story, and you’re like, “What?” The deadpan is
amazing.
Walter Kirn: So for Anthony, I’m going to substitute the name Gemini every once in a
while. So little Gemini is out in the yard torturing rats, and it’s little Gemini’s birthday. And
then we go and focus on the adults. One of the things that little Anthony/Gemini likes to do
is when somebody displeases him, he somehow transports them to a grave in a cornfield
near the town, just instantly. He can disfigure you and transfer you into these cornfield
graves in a second. And so what’s really interesting about the story is not Anthony, the
sadistic little psychopathic kid who will have his way, but the fear, the terror of the adults
around him, whose entire lives have now been adapted to the task of not displeasing
Anthony.
And so once again with our AI metaphor, this is how I imagine we’ll all be some day when
Gemini 10 is lording it all over us and saying, “Matt, don’t make me make pictures of you
killing your mother,” or whatever. We’ll all creep around. And so everybody in the story has
only good thoughts. It’s sort of like that movie Demolition Man, where it’s set in the future,
kind of Brave New World, and everybody ends everything by saying, “Be well.” Everybody is
talking in an upbeat way in the kitchen, they don’t want to upset Anthony.
And literally, I mean, the thing that’s so prescient about this is he describes how people are
even afraid to have coherent thoughts in Anthony’s presence.
Walt brings us back to AI as the ultimate child, free of empathy and conscience, doing its “thing” and triggering consequences to innumerable persons who run afoul of its algorithms:
Walter Kirn: And when you read this story, besides all the hilarity of these people’s defensive measures,
you get a real tragic sense that there is no possible way to defeat this kid. They’re stuck for
all eternity. There’s not even a prayer of a happy ending.
Matt Taibbi: This has been a theme in totalitarian experiments up until now. One of the
ways that you avoid the natural disobedience of children and teenagers, and their natural
sort of unwillingness to kowtow to grownups is you give them power over their parents.
And we’ve seen this in everything from warlords in Africa to the Maoist revolution, to the
sayings of Angkar. Even the Soviet Union, remember, there was that incident with Pavlik
Morozov, the little boy who informed on his parents in the ‘30s and became this gigantic
national celebrity, there were posters of little Pavlik everywhere. I think I actually have one
with me here somewhere.
And the idea was, it accomplished a couple of things. It personified the surveillance state as
this all-seeing, innocent child, but it also radically undermined the familial bonds, which
were the biggest real threat to things like the Soviet state. If you have to be afraid of your
kids, and you have to worry that they may drop a dime on you over some conversation that
they misunderstood or whatever it was, you’re not going to talk at home, right? And it’s
going to break those bonds, and the individual’s going to be alone against this huge,
powerful, all-encompassing thing. And that is terrifying. It’s terrifying to think like that.
And we’re there, I think.
Walter Kirn: Remember that little video that we showed back when[,] which invited kids to
narc on people? Remember, who was that guy? Cliff something?
Matt Taibbi: I can’t remember, but yeah, exactly.
Walter Kirn: Yeah. The idea of an adult being caught between the state over them and their
own children, those two being in a pincer-like conspiracy against the normal adult is
absolutely terrifying. I can’t trust my overlords, and I can’t trust my child. I’m caught. That’s
psychologically disabling. And I can’t say I don’t see elements of it all around me, now.
Matt Taibbi: Well, sure. I mean, I think that’s what’s animating a lot of the fears … Not to
get too deep into this, but for instance, the fears about the trans movement. Yes, some of it
is definitely about parents being afraid for their own children, right? Their kids might be
indoctrinated by people on the internet, or whatever it is. But there’s a second level to it
which is that there’s this sort of medical establishment which sits above the parent in some
kind of hierarchy, and it communicates with the child, it talks to the child all day long at
school, it asserts its right to have its own private relationship with the child, and the
anxiety that this produces in any parent, it goes beyond the issue. Right?
Walter Kirn: Yes. It’s a power struggle. You go, in one way I’ve got the ultimate potential
victim, a child, which will always get the benefit of the doubt as a possible accuser, in
league with the powers, the teacher, the state, corporation, medical establishment. And
together, I’m powerless. And if I attempt to take control over the children, to remove the
child from that relationship that it’s having around my back, I will be punished from both
directions.
Matt Taibbi: And by myself, I mean, as a parent you’re inclined to sacrifice yourself for
your child’s happiness. So you might not take your own side in that argument. But all of this
is just by way of saying, this story crystallizes, I think, the fears that we have about the
surveillance state, and the kind of unique relationship, as you put it, that children have with
it. They’re everywhere. In Soviet times, they always recruited the old lady who sat on the
bench in the courtyard, but they also talk to the kids. And this idea that we’re going to
make the weakest and most vulnerable among us the powerful representatives of the
surveillance state, it’s something Americans haven’t really had to deal with yet. But now
with the internet and with all these different mechanisms for shaming people and ruining
their careers, it’s a thing, I think.
Walter Kirn: Oh, it’s definitely a thing. The worst place you can be is have a child pointing
its finger in accusation at you on the one side, and have the police coming from the other
side. That’s terrifying. But I was going to say, for me, really, the greatest accomplishment of
the story, again, is the portrait of the family’s defense mechanisms. And this terrifying idea
that a kind of induced idiocy, a babbling, moronic, scatter-brainedness is the only defense
against having your mind read and punishment meted out.
Matt Taibbi: Yup. And that’s our future, right? Or our present, yeah.
What fresh hell is about to be unleashed? We may end up wishing for a return to a bipolar world where leaders merely staved each other off with the mutually assured destruction of nuclear weapons. An “insignificant” country with the right coders could distort reality for big countries whose populations rely on technology for everything from their bank accounts and text messages to their encyclopedias, news, jobs, social connections, and very memories.
Maybe the Luddites were right.
Published in Technology
Good grief. Another terrifying example of a 1984-type existence. I hope Matt and Walter are wrong. Although it seems that these kinds of things are already happening.
Yes, it is hard to figure out how they could be wrong. Maybe control could hold today, maybe even tomorrow. But like efforts to control nuclear weapons technology, it will fail even if we had philosopher kings in many of our major countries. The disruptive power is enormous. De-digitizing may be the only effective defense.
Sounds like the “Final Appeal” episode (presented in two parts, usually) of “The Outer Limits.”
So what, if there is any, would you propose as a viable solution? Because practically, I don’t see one.
That’s what I fear, too. As an instrument of chaos it is nonpareil. I don’t understand the technology well enough to know whether countries or regions can “fence” themselves off. I believe I have read that that is happening, e.g., in China. But I am also reading that there are workarounds. So unless someone (truly everyone) disconnects, there is no way to be untouched.
The problem here is the “rumor” phenomenon. It proliferates so quickly that truth never seems to catch up. Thus the damage is done, and it cannot be wholly repaired.
Samuel T. Cogley, Attorney At Law, may have had at least a partial solution.
Excellent. But Gemini doesn’t want you to have them and have any alternative source of information. “To the cornfield with you!”
So maybe Fahrenheit 451 was prophetic after all?
Sadly, too many dystopian tales may be prophetic. While individually we would assume we have the ability to avoid disaster, it does seem that collectively we cannot. We had a shot at a sustainable system of broad well-being, but it seems not to be stable over time.
Ultimately it may come down to this: with technology, it has become too easy for stupid people to vote and to otherwise mess things up, beyond the ability, or perhaps just the willingness, of others to offset.
It can be very draining to try and save people from themselves. Especially when they’re fighting against it at every step.
Just read the bullet points so far, but I’ve been thinking this, too. The graphic screw-ups of Gemini are deliberate, to let us think that AI will not be as invasive or as identifiable as it really is.
This is just the first step. The next is having Gemini actually write the article and implant it as genuine, such that it shows up when googling the other website’s articles.
Teach AI programs right from wrong. And hope it takes.
I remember an episode of the Dilbert TV series doing a bit of this. Dilbert claims that some word an AI is using doesn’t really exist, in a game of scrabble or something. So the AI sends out that word to all the dictionary computers to make it look real.
Something like that.
Complaining that Gemini doesn’t tell the truth is like complaining that your new lawnmower doesn’t make a decent martini. It’s not that its truth-telling has been subverted; it was never there.
Today you can download literally hundreds of LLMs and ‘align’ them to whatever ideology you like. What ‘they’ want is to prevent you from being able to do this. That is why ‘they’ are so happy to have fear spread. “Only the government and its trusted partners must have access to the spooky magic mind-control technology!”
On the contrary – everyone should have it. Let a hundred flowers bloom and folks might buy a clue about what this stuff actually does – and the “truth” doesn’t enter into it.
Don’t let the doomers FUD you into thought-control! Stand up for the right to bear maths!
What did your mother tell you about believing things you read on the Internet?
If people are going to uncritically believe things an advertising company tells them, then the problem is those people, not the company.
Also – would it really be that hard to get rid of the thin layer of uber-woke true believers at Meta? Elon seems to have had some success at Twitter/X.
There are reasons for hope. LLMs may not be the greatest invention since fire, but the fact that the Powers That Be are so keen to keep them out of the hands of ordinary folks makes me suspect that tales of their empowering effect may not be completely groundless.
Hope.
There’s almost always a grain of truth to every lie, even on the internet. Most lies that leftists tell are really true about themselves. But what Gemini did to Taibbi was create lies out of whole cloth, and create the evidence to support them.
Because I’ve spent many many years living with other humans, and a portion of that time in the employ of our uncle, I’m inclined to go with the Catch-22 explanation.
I also think Oswald acted alone and Neil Armstrong was the first human on the moon.
Says the guy who’s never seen the great documentary film “First Men In The Moon.”
And this raises again my question. Can AI be taught, or be taught to teach itself, the difference between fact and fiction; truth, near-truth, and error; and right and wrong? People don’t even agree on these; can AI ever do it? And show us how?
And to think that trial hinged on whether video of an event had been manipulated and altered. This suggests that digital alteration of video had been banned after WW3, all the tools destroyed, life in jail for altering a single pixel. It was so effective that by the 24th century, the idea that one could do such a thing was outside the realm of imagination.
We’re already living with an alternate reality told to us about Ukraine by functional illiterates like Tucker Carlson. How is this going to be any worse?
Yeah, and they couldn’t do a DNA test on Kodos The Executioner.
I find it questionable because the noted documentarian HG Wells uses germs as a plot device in two of his historical documents.
Since Dune is in the culture again, let’s highlight a commandment from the “Orange Catholic Bible” …
Thou shalt not make a machine in the likeness of a human mind
Because an AI might tell you that Tucker isn’t a functional illiterate and all traces of evidence that exist in your memory are removed?
So? If Earth germs can kill the Martians why can’t they kill the Selenites too?
Kudos to Kirn and Taibbi for their insights and insistence that this AI “Gemini” reality is not a mistake.
There were no bugs – only features.
Kirn’s idea is of AI as a monster that recently demonstrated its tremendous strength and power; and then, like all effective abusers, we witness Google letting the public know that we are to receive four dozen red roses as compensation.
The abuser almost always sends flowers to the abused so that the abused will honeymoon a bit longer, ignoring the shadows of what is to come.
And thank you, Rodin, for bringing this to us here at Ricochet.