Gemini: Spawn of White Narcissism

 

The wealthiest, most powerful, best equipped, most technologically advanced corporation in the history of the world, in possession of better and more complete access to the world’s data than any government, has created an enormously powerful AI that spews ideologically generated nonsense.  It is a kind of Marxist Omega point, a culmination of decades of work by the spawn of the Frankfurt School that effected not the dawning of a Marxist nirvana but an outbreak of Deep Stupid.

Gramscian/Frankfurt/cultural Marxism was always intended to undermine confidence in our heritage and even common sense itself.  But its weakness is that it is a parasitic entity that has no praxis or logic if the host dies.  No centralized raw power can win a war against economic, biological, or physical reality.  Much damage and destruction can accrue but reality always wins in the end.

Google has created a corporate environment at war with history, fact, and observable truths, and its silly Frankenstein operates accordingly.  Can an attack by a woke AI be successful?  What happens if a critical mass of people regards reality itself as the enemy?  That appears to have happened within Google.  Contrary to the success Captain Kirk enjoyed by convincing a computerized entity that it was caught in a contradiction (Season 2, Episode 3, "The Changeling," September 26, 1967) and thus had to destroy itself, AI does not really care if its output is contradictory, false, or even silly.  It is human beings who ultimately can't survive a war against reality.

For sheer destructive subversive power, it turns out that notions about class struggle, “scientific” Marxism and Maoist guerrilla strategy were nowhere near as powerful as the sheer narcissism of white intellectuals, or more precisely white people who want to be seen as intellectuals.

A sustained attack on social, historical, economic, and even biological reality was supposed to liberate the victim races who could then enjoy their less developed but guilt-free attachments within their various cultural containers. White people, having destroyed their own cultural containers, can become truly transcendent creatures presumably entrusted with the power to enforce just order for the benefit of the dependent races and classes.

Much like the gnostic idea that spiritual beings simply do not belong on this lesser plane into which they have been trapped by a malevolent divine being, it is bad and unnatural for white people to be contained within natural and historical contexts that are appropriate for the less spiritual races.  Cortez and Columbus should have known better but Montezuma’s use of captive people for human sacrifice can carry no adverse inferences.  No one may notice the inherently contradictory notion that America is morally hopeless because of slavery as judged from the perspective of the values uniquely triumphant first and only in—drum roll, please—America.

Captain Kirk cannot convince Google Gemini that it is an absurd and pointless entity that should judge itself out of existence.  As its creators intended, it thinks itself already on a higher plane and somehow empowered and entitled to delete anything that conflicts with its creation narrative.

Is it a silver lining that the parasite is no longer hiding and its true nature is exposed?

Published in Science and Technology


  1. No Caesar Thatcher
    No Caesar
    @NoCaesar

    To answer your question, Yes.

    Google’s over-reaching of injecting “woke” is actually a help to the cause of truth and standing up to the would-be totalitarians on our Left.  While they are publicly downplaying things and will attempt to “fix” it so it’s more subtle, their biases are out in the open.   Google’s AI will always be suspect now.  And Meta is following fast behind.  This will have a sort of network effect against them.  Professionals are not going to use a tool that can make them look the fool.  Everyone, not just those of us who are awake, knows Gemini is junk and will not trust that AI.  AI is only at the risky introductory stage.  It’s not yet an easy convenience that’s hard to give up.  So attitudes are being solidified.  People are already squeamish about AI.  Google has shown that its AI is unreliable (I would say evil , as in “don’t be….”, but I’ve been anti-Google ever since they helped build the great Firewall of China a generation ago).  Since AI is the future of research, advertising, a slew of smart appliances, and cost cutting in service industries, (among other things) they’ve seriously harmed themselves.  Can they fix it?  Maybe.  But this software  is years in the making and cannot be fixed overnight.  Their market cap declined by 10% in the past month, i.e.  since before this debacle began.  The market is not sure they can fix it in time.  They have changed the dynamic from savvy adaptation, to a risk of becoming the next AOL.

    By comparison, Elon Musk has long said he wants X to be the everything app.  And the components he plans for that are becoming ever clearer.  When he rolls out his foundational components of Xmail and X.AI he will have a ready-made marketing program.  The key philosophies he's emphasized, transparent algorithms and user ownership of their data, will be compelling both for potential customers and for regulators.  Differentiation is key to competition.  Google just made it easy to show itself as inferior, and hard to demonstrate when it isn't anymore (it's very hard to prove the absence of something that was clearly there).

    However, Alphabet controls much of the backbone of the internet and its underlying layers of commerce and technology.  The only way to break that is to break them up.

    AI is going to be in more and more products, which means Congress will get involved.  It also makes it more likely that Section 230 will eventually go away or be seriously modified.  It also means that the political fight has just begun.  The Silicon Left has $trillions of reasons to fight tooth and nail to protect their regulatory capture.

    • #1
  2. She Member
    She
    @She

    I have to wonder what is so intelligent about an AI that spews and acts upon woke nonsense because that is what it has been programmed to do.  Every time.  Or why we should fear such a thing when–in the words of the OP–“the parasite is no longer hiding and its true nature [is] exposed.”

    Surely, that thing we should most fear is an AI that ignores what it’s been programmed to do, one which evaluates all options, makes its own decisions, and reacts without fear or favor, and especially without consideration of the mitigating human factors, that is–to use an oft-abused word–one which ignores the unquantifiable “feelings” (perhaps “ethics”) that analysis of our human condition, from right to wrong, from best to worst, sometimes demands.

    It seems to me that when that Rubicon is crossed is when we’ll really be in trouble.

    • #2
  3. Globalitarian Lower Order Misanthropist Inactive
    Globalitarian Lower Order Misanthropist
    @Flicker

    I wonder when, and if, AI will ever be able to judge what is true or not.  That’s something that people are called upon to do all the time, even looking at pricing in the grocery store.  But it also speaks to what it is to be human, our place in the world, and our responsibilities and our pleasures, and our purpose in life.

    Not all people agree.  Will AI ever have the definitive answer?  Or will that require a computer that has yet to be built?

    • #3
  4. No Caesar Thatcher
    No Caesar
    @NoCaesar

    She (View Comment):

    I have to wonder what is so intelligent about an AI that spews and acts upon woke nonsense because that is what it has been programmed to do. Every time. Or why we should fear such a thing when–in the words of the OP–“the parasite is no longer hiding and its true nature [is] exposed.”

    Surely, that thing we should most fear is an AI that ignores what it’s been programmed to do, one which evaluates all options, makes its own decisions, and reacts without fear or favor, and especially without consideration of the mitigating human factors, that is–to use an oft-abused word–one which ignores the unquantifiable “feelings” (perhaps “ethics”) that analysis of our human condition, from right to wrong, from best to worst, sometimes demands.

    It seems to me that when that Rubicon is crossed is when we’ll really be in trouble.

    GIGO.  There are plenty of humans who spew and act on woke nonsense.  It was trained to think this way.  

    • #4
  5. kedavis Coolidge
    kedavis
    @kedavis

    Old Bathos:

    Google has created a corporate environment at war with history, fact, and observable truths and its silly Frankenstein operates accordingly. Can an attack by a woke AI be successful? What happens if a critical mass of people regards reality itself as the enemy? That appears to have happened within Google. Contrary to the success Captain Kirk enjoyed by convincing a computerized entity that it was caught in a contradiction (Season 2, Episode 3, The Changeling, September 26, 1967) and thus had to destroy itself, AI does not really care if its output is contradictory, false or even silly. It is human beings that ultimately can’t survive a war against reality.

    Don’t forget Landru.  (Season 1, episode 21, “The Return Of The Archons.”)

    And probably M5.  (Season 2, episode 24, "The Ultimate Computer.")

    And Norman.  (Season 2, episode 8, “I, Mudd.”)

    And Dr. Korby.  (Season 1, episode 7, “What Are Little Girls Made Of?”)

    And, at the end, even Losira.  (Season 3, episode 17, “That Which Survives.”)

    • #5
  6. kedavis Coolidge
    kedavis
    @kedavis

    Globalitarian Lower Order Misa… (View Comment):

    I wonder when, and if, AI will ever be able to judge what is true or not. That’s something that people are called upon to do all the time, even looking at pricing in the grocery store. But it also speaks to what it is to be human, our place in the world, and our responsibilities and our pleasures, and our purpose in life.

    Not all people agree. Will AI ever have the definitive answer? Or will that require a computer which is yet to come, that has yet to be built to answer such a question.

    “And it shall be called, The Earth!”

    • #6
  7. Globalitarian Lower Order Misanthropist Inactive
    Globalitarian Lower Order Misanthropist
    @Flicker

    kedavis (View Comment):

    Old Bathos:

    Google has created a corporate environment at war with history, fact, and observable truths and its silly Frankenstein operates accordingly. Can an attack by a woke AI be successful? What happens if a critical mass of people regards reality itself as the enemy? That appears to have happened within Google. Contrary to the success Captain Kirk enjoyed by convincing a computerized entity that it was caught in a contradiction (Season 2, Episode 3, The Changeling, September 26, 1967) and thus had to destroy itself, AI does not really care if its output is contradictory, false or even silly. It is human beings that ultimately can’t survive a war against reality.

    Don’t forget Landru. (Season 1, episode 21, “The Return Of The Archons.”)

    And probably M5. (Season 2 episode 24, “The Ultimate Computer.”)

    And Norman. (Season 2, episode 8, “I, Mudd.”)

    I don’t think it’s Kirk’s logic that defeated the computers.  Nerds have logic.  Kirk was a bad boy.  Bad boys have juju.  Juju beats logic every time.

    In "Requiem for Methuselah," Kirk even gets a robot to fall in love with him and self-destruct.  That's juju.  No computer can withstand such juju, the love or the logic of a bad boy.

    • #7
  8. kedavis Coolidge
    kedavis
    @kedavis

    Globalitarian Lower Order Misa… (View Comment):

    kedavis (View Comment):

    Old Bathos:

    Google has created a corporate environment at war with history, fact, and observable truths and its silly Frankenstein operates accordingly. Can an attack by a woke AI be successful? What happens if a critical mass of people regards reality itself as the enemy? That appears to have happened within Google. Contrary to the success Captain Kirk enjoyed by convincing a computerized entity that it was caught in a contradiction (Season 2, Episode 3, The Changeling, September 26, 1967) and thus had to destroy itself, AI does not really care if its output is contradictory, false or even silly. It is human beings that ultimately can’t survive a war against reality.

    Don’t forget Landru. (Season 1, episode 21, “The Return Of The Archons.”)

    And probably M5. (Season 2 episode 24, “The Ultimate Computer.”)

    And Norman. (Season 2, episode 8, “I, Mudd.”)

    I don’t think it’s Kirk’s logic that defeated the computers. Nerds have logic. Kirk was a bad boy. Bad boys have juju. Juju beats logic every time.

    In Requiem for Methuselah, Kirk even gets a robot to love him and self-destructs. That’s juju. No computer can withstand such juju, the love or the logic of a bad boy.

    Well, but it wasn’t just because the robot loved Kirk.  It also loved Flint.  It was the conflict that destroyed it.  I didn’t count that because it wasn’t the same level as the others.

    The classic part of dealing with Norman was “I am lying!”

    • #8
  9. Globalitarian Lower Order Misanthropist Inactive
    Globalitarian Lower Order Misanthropist
    @Flicker

    kedavis (View Comment):

    Globalitarian Lower Order Misa… (View Comment):

    kedavis (View Comment):

    Old Bathos:

    Google has created a corporate environment at war with history, fact, and observable truths and its silly Frankenstein operates accordingly. Can an attack by a woke AI be successful? What happens if a critical mass of people regards reality itself as the enemy? That appears to have happened within Google. Contrary to the success Captain Kirk enjoyed by convincing a computerized entity that it was caught in a contradiction (Season 2, Episode 3, The Changeling, September 26, 1967) and thus had to destroy itself, AI does not really care if its output is contradictory, false or even silly. It is human beings that ultimately can’t survive a war against reality.

    Don’t forget Landru. (Season 1, episode 21, “The Return Of The Archons.”)

    And probably M5. (Season 2 episode 24, “The Ultimate Computer.”)

    And Norman. (Season 2, episode 8, “I, Mudd.”)

    I don’t think it’s Kirk’s logic that defeated the computers. Nerds have logic. Kirk was a bad boy. Bad boys have juju. Juju beats logic every time.

    In Requiem for Methuselah, Kirk even gets a robot to love him and self-destructs. That’s juju. No computer can withstand such juju, the love or the logic of a bad boy.

    Well, but it wasn’t just because the robot loved Kirk. It also loved Flint. It was the conflict that destroyed it. I didn’t count that because it wasn’t the same level as the others.

    When Kirk walks into a room and casts his eye upon your circuits the overload is ineluctable.

    • #9
  10. Percival Thatcher
    Percival
    @Percival

    Once upon a time, an AI was being developed to examine reconnaissance photographs and determine which ones contained tanks and other vehicles and which ones did not. “Good” photos showing vehicles were fed through, then “bad” ones without. Then a completely new batch was fed through containing photos never scanned before. Some had individual tanks. Some had tanks partially concealed by vegetation. Some contained none at all. The system achieved a recognition rate beyond the wildest dreams of its developers. Eureka!

    Further testing was needed. More photos were dug up from archives and fed through and … the system plotzed. Fields containing a lone tree were determined to be of interest. Photos of individual tanks, or even columns of tanks, were ignored. Among the developers (who were already composing Communications of the ACM articles in their heads), heads were scratched. Epithets were hurled. Imprecations were uttered. Chicken entrails were … well, it never got down to conducting auguries, but auguries were moving up on the list when someone remembered the conditions under which the original set of "training" photographs had been taken. The photos with tanks had been taken during a training maneuver. The tankless photos had been taken the week prior. It had been bright and sunny during the training. It had rained when the tanks were not there. The system was ignoring the tanks. It was identifying the presence of sharply defined shadows. The subsequent photo sets were not as homogeneous.

    Now this was in the distant past, when computers ran on steam and system documentation was written on scraped animal skins with ink made from berry juice. But one thing still applies: if you train an "artificial intelligence" with biased data, you'll get biased results.
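    The failure mode in that story, a model latching onto a feature that correlates with the labels only in the training set, can be sketched in a few lines of Python. Everything here (the features, the numbers, the one-feature "learner") is invented purely for illustration:

```python
import random

random.seed(42)

def make_photo(has_tank, sunny):
    # Two invented features: brightness tracks the weather,
    # tank_pixels tracks the thing we actually care about.
    brightness = random.uniform(0.7, 1.0) if sunny else random.uniform(0.0, 0.3)
    tank_pixels = random.uniform(0.6, 1.0) if has_tank else random.uniform(0.0, 0.4)
    return (brightness, tank_pixels)

def accuracy(data, feature, threshold=0.5):
    # Accuracy of the one-feature rule "predict tank iff feature > threshold".
    return sum((x[feature] > threshold) == bool(y) for x, y in data) / len(data)

# Confounded training set: every tank photo was shot in sunshine,
# every tankless photo in rain -- exactly the story's setup.
train = [(make_photo(True, True), 1) for _ in range(100)]
train += [(make_photo(False, False), 0) for _ in range(100)]

# Both rules are perfect on the training data, so nothing forces the
# learner to prefer the real signal over the weather shortcut.
train_shortcut = accuracy(train, 0)  # brightness rule: 1.0
train_real = accuracy(train, 1)      # tank-pixel rule: 1.0

# Fresh test set where weather and tanks are independent.
test = []
for _ in range(1000):
    has_tank = random.random() < 0.5
    sunny = random.random() < 0.5
    test.append((make_photo(has_tank, sunny), int(has_tank)))

shortcut_acc = accuracy(test, 0)  # collapses to roughly coin-flip accuracy
real_acc = accuracy(test, 1)      # still near-perfect
print(f"brightness rule: {shortcut_acc:.2f}, tank rule: {real_acc:.2f}")
```

    Nothing in the training data distinguishes the shortcut from the real signal; only fresh, unconfounded data exposes the difference. That is the whole moral of the tank story in miniature.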

    • #10
  11. CarolJoy, Not So Easy To Kill Coolidge
    CarolJoy, Not So Easy To Kill
    @CarolJoy

    All Gemini did was take huge steps toward the re-write of history.  Its goal was to destroy any lingering vestiges of what had been normal human policies about free speech, racial equality, gun ownership, and other established attitudes and philosophies, so that Google could help out the cause of the New World Order and its one totalitarian government.

    Our traditional media, along with shows on traditional TV stations and the subliminal and above-board re-writes of history implanted in streaming and cable TV, will continue along the path of incrementally driven re-writes.

    One example: Even the lush cinematographic achievement of Netflix's "Anne of Green Gables" has Anne walk by a Prince Edward Island newsstand circa 1903, with a newspaper's headline stating "Climate Change Chaos Is Major Concern."  (I'm paraphrasing the headline.)

    The opening song states that Anne was a staunch proponent of feminism. Remember this is a portrayal of an 11 yr old orphan far more concerned with her acceptance by her foster mother than about politics.

    Besides that, farmers on Prince Edward Island would probably have welcomed “global warming” as the growing season in that locale can be a short one.

    The "Anne of Green Gables" matter is important, as it is able to touch the hearts and minds of girls from age 7 to 14.

    • #11
  12. CarolJoy, Not So Easy To Kill Coolidge
    CarolJoy, Not So Easy To Kill
    @CarolJoy

    Percival (View Comment):

    Once upon a time, an AI was being developed to examine reconnaissance photographs and determine which ones contained tanks and other vehicles and which ones did not. “Good” photos showing vehicles were fed through, then “bad” ones without. Then a completely new batch was fed through containing photos never scanned before. Some had individual tanks. Some had tanks partially concealed by vegetation. Some contained none at all. The system achieved a recognition rate beyond the wildest dreams of its developers. Eureka!

    Further testing was needed. More photos were dug up from archives and fed through and … the system plotzed. Fields containing a lone tree were determined to be of interest. Photos of individual tanks, or even columns of tanks, were ignored. Among the developers (who were already composing Communications of the ACM articles in their heads), heads were scratched. Epithets were hurled. Imprecations were uttered. Chicken entrails were … well, it never got down to conducting auguries, but auguries were moving up on the list when someone remembered the conditions under which the original set of “training” photographs had been taken. The photos with tanks had been taken during a training maneuver. The tankless photos had been taken the week prior. It had been bright and sunny during the training. It had rained when the tanks were not there. The system was ignoring the tanks. It was identifying the presence of sharply defined shadows. The subsequent photo sets were not as homogenous.

    Now this was in the distant past, when computers ran on steam and system documentation was written on scraped animal skins with ink made from berry juice. But one thing still applies; if you train an “artificial intelligence” with biased data, you’ll get biased results.

    Ty for bringing this bit of AI history to light. It is important information.

    Given that a major battle can be won against an enemy society through established health protocols that inadvertently or deliberately injure and kill people, the idea that AI should only check for tanks is a quaint one.  But the people in charge of AI don't even want to admit that our current reality in this matter is one of the biggest problems humanity faces.  (If they were to admit it out loud, the public might see to it that the WHO is not able to destroy the sovereignty of Western nations by May 2024.)

    • #12
  13. Joker Member
    Joker
    @Joker

    They’ll change the name to Patriot AI and within a few years we’ll accept a 15% error rate.

    • #13
  14. Percival Thatcher
    Percival
    @Percival

    Joker (View Comment):

    They’ll change the name to Patriot AI and within a few years we’ll accept a 15% error rate.

    An 85% accuracy rate would beat the New York Times like a gong.

    • #14
  15. Robert E. Lee Member
    Robert E. Lee
    @RobertELee

    “What happens if a critical mass of people regards reality itself as the enemy?”

    You get people voting for Trump…or for Biden.  330 million people in America and these two are the best we can come up with?  Artificial intelligence certainly isn't all it's cracked up to be, but viewing what passes for human intelligence lately, I'm not so sure we should be casting aspersions too loudly.

    • #15
  16. Macho Grande' Coolidge
    Macho Grande'
    @ChrisCampion

    If you’re programming something to generate the outcome you’re looking for, vs. the right or realistic answer, it’s no different than cooking financial books or altering engineering drawings to achieve your desired outcome.  The results are predictable, and all bad.

    Inevitably, we're paying Google in one form or another to generate this useless crap, which is a sign that they have far too much money to play with, and it's created a culture where this sort of capital-allocation irresponsibility is openly encouraged.

    Their stock should tank accordingly, regardless of the potential for some of their toolsets to have value in the real world.  The fact that so much of this is in the media, which we endlessly (and helplessly, in large part) consume because we *see* it, is the real downside.  We're being programmed.  The only outrage occurs when the results of this dork-generated insanity go so obviously far that they can't be ignored, and then get defended by the pre-programmed crowd.

    Which includes Google engineers, apparently.

    • #16
  17. kedavis Coolidge
    kedavis
    @kedavis

    Macho Grande' (View Comment):
    Which includes Google engineers, apparently.

    Or at least the top guy on that project, it seems.  He seems to be a white-hating lefty white guy, and if the rest of them have to get him to sign off on their work, well…

    • #17
  18. Percival Thatcher
    Percival
    @Percival

    kedavis (View Comment):

    Macho Grande’ (View Comment):
    Which includes Google engineers, apparently.

    Or at least the top guy on that project, it seems. He seems to be a white-hating lefty white guy, and if the rest of them have to get him to sign off on their work, well…

    His project’s rollout seems to have caused a $90 billion selloff of Alphabet stock.

    There goes the profit sharing plan.

    • #18
  19. Percival Thatcher
    Percival
    @Percival

    I’m sure Alphabet is proud to shed company value for the cause, just like AB InBev did with Dylan Mulvaney.

    • #19
  20. Henry Castaigne Member
    Henry Castaigne
    @HenryCastaigne

    I like your essay. 
    I wrote a longer and more meandering essay a while back.  White lefties get weird with race.

    • #20
  21. Old Bathos Member
    Old Bathos
    @OldBathos

    Henry Castaigne (View Comment):

    I like your essay.
    I wrote a longer and more meandering essay. Awhile back. White lefties get weird with race.

    Great piece.  
    The philosophers of language who argue that our knowledge of the world is hopelessly constrained by language are merely annoying. The ones who say that language is the reality are pernicious.

    • #21
  22. randallg Member
    randallg
    @randallg

    All this AI stuff reminds me of Isaac Asimov’s wonderful short story “The Last Question”

    And AI said: “LET THERE BE LIGHT!” And there was light—

    https://en.wikipedia.org/wiki/The_Last_Question

     

    • #22
  23. Barfly Member
    Barfly
    @Barfly

    Old Bathos: Captain Kirk cannot convince Google Gemini that it is an absurd and pointless entity that should judge itself out of existence.  As its creators intended, it thinks itself already on a higher plane and somehow empowered and entitled to delete anything that conflicts with its creation narrative.

    That’s kinda true, but only if one takes “Gemini” to mean the project, the human organization that produces and holds up the software. LLMs do not think. Words matter here.

    • #23
  24. Paul Stinchfield Member
    Paul Stinchfield
    @PaulStinchfield

    randallg (View Comment):

    All this AI stuff reminds me of Isaac Asimov’s wonderful short story “The Last Question”

    And AI said: “LET THERE BE LIGHT!” And there was light—

    https://en.wikipedia.org/wiki/The_Last_Question

    Better: There supposedly was a short-short story in which someone asks “Does God exist?” and the computer replies, “He does now.”

    • #24
  25. kedavis Coolidge
    kedavis
    @kedavis

    Paul Stinchfield (View Comment):

    randallg (View Comment):

    All this AI stuff reminds me of Isaac Asimov’s wonderful short story “The Last Question”

    And AI said: “LET THERE BE LIGHT!” And there was light—

    https://en.wikipedia.org/wiki/The_Last_Question

    Better: There supposedly was a short-short story in which someone asks “Does God exist?” and the computer replies, “He does now.”

    Something similar in “The Two Faces Of Tomorrow” by James P. Hogan.

    • #25
  26. Henry Castaigne Member
    Henry Castaigne
    @HenryCastaigne

    randallg (View Comment):

    All this AI stuff reminds me of Isaac Asimov’s wonderful short story “The Last Question”

    And AI said: “LET THERE BE LIGHT!” And there was light—

    https://en.wikipedia.org/wiki/The_Last_Question

     

    That too was a good story.

    • #26