Section 230 and Hitler’s African-American Nazi Soldiers


I’ll keep this short because I’ve been away and it wouldn’t surprise me if someone has already commented on this (though I didn’t see it in my quick perusal of the site).

As we all know, Section 230 of the Communications Decency Act is the bit of legislation that protects providers of online services from civil liability when people post otherwise actionable content on the services they host. The law says:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” (47 U.S.C. § 230(c)(1))

There’s an ongoing debate about how generative AI (e.g., ChatGPT or, more immediately relevant in this case, Google’s Gemini — also known as The Colorizer) should be treated under Section 230.

The Cato Institute and I disagree on this topic. (That happens more than it used to, and I’m not sure who’s doing the changing.) If it wasn’t obvious before, it should be obvious now: Pretending that Google’s generative AI does not in some important sense speak for Google is absurd. Gemini is Google’s Mini-Me, the woke offspring of its radically progressive corporate culture. If Google is allowed to train its fashionably mendacious little brat and foist its über-woke musings on us under the protection of Section 230, as if Gemini wasn’t speaking on behalf of Google, then, in the immortal words of the late Bill Paxton, “Game over, man!”

AI is a big, disruptive, potentially dangerous thing and deserves a lot of debate. But this is an easy call: No, Section 230 must not protect internet hosts and providers from liability for what their own generative AI systems create.

Published in Science and Technology


  1. DonG (CAGW is a Scam) Coolidge
    @DonG

    Henry Racette: AI is a big, disruptive, potentially dangerous thing and deserves a lot of debate. But this is an easy call: No, Section 230 must not protect internet hosts and providers from liability for what their own generative AI systems create.

    I have seen examples of generated text that is libelous.  Someone should be accountable for that.

    • #1
  2. David Foster Member
    @DavidFoster

    Section 230 was intended to protect Internet providers from liability for content *created by their users*. This is pretty clear from the legislative history.

    The 1995 case Stratton Oakmont, Inc. v. Prodigy Services Co. provided part of the rationale for Section 230.

    In October 1994, an unidentified user of Prodigy’s ‘Money Talk’ bulletin board asserted that Stratton Oakmont and its president committed fraudulent acts in connection with an initial public offering. Stratton sued Prodigy, as well as the unidentified user, and argued that Prodigy was acting as a publisher. Prodigy claimed it was not liable, based on the precedent of an earlier case, Cubby, Inc. v. CompuServe Inc.

    The Stratton court held that Prodigy was liable as the publisher of the content created by its users because it exercised editorial control over the messages on its bulletin boards in three ways: 1) by posting Content Guidelines for users, 2) by enforcing those guidelines with “Board Leaders”, and 3) by utilizing screening software designed to remove offensive language. The court’s general argument for holding Prodigy liable, in the face of the CompuServe case, was that “Prodigy’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability (than) CompuServe and other computer networks that make no such choice.”

    https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prodigy_Services_Co.

    (A nonvirtual-world analogy for this case might be a job printer which normally prints whatever its customers might request, but occasionally refuses certain jobs on grounds of offensive language, etc…not sure whether there have been any such actual cases.) In any event, the Stratton case outcome was intimidating to online service providers, suggesting that any discretion whatsoever as to customers/content could get them sued for a lot of money…the cloud was removed by Section 230, passed in 1996.

    • #2
  3. Flapjack Coolidge
    @Flapjack

    It sure does seem that, at the moment, anyone who uses generative AI is a beta tester and companies are just “workin’ out the bugs”, eh?

    • #3
  4. Henry Racette Member
    @HenryRacette

    Flapjack (View Comment):

    It sure does seem that, at the moment, anyone who uses generative AI is a beta tester and companies are just “workin’ out the bugs”, eh?

    Well, yes and no. Yes, users of generative AI are certainly beta testers. But when generative AIs are programmed and/or taught by people with agendas, those agendas are reflected in the output the AIs produce. The unhinged progressive/woke bias of Google’s AI is not a bug, per se, but rather a feature.

    The only “bug” is that, as @fullsizetabby suggests here, Google was just a little too obvious about it.

    • #4
  5. kedavis Coolidge
    @kedavis

    Henry Racette (View Comment):

    Flapjack (View Comment):

    It sure does seem that, at the moment, anyone who uses generative AI is a beta tester and companies are just “workin’ out the bugs”, eh?

    Well, yes and no. Yes, users of generative AI are certainly beta testers. But when generative AIs are programmed and/or taught by people with agendas, those agendas are reflected in the output the AIs produce. The unhinged progressive/woke bias of Google’s AI is not a bug, per se, but rather a feature.

    The only “bug” is that, as @fullsizetabby suggests here, Google was just a little too obvious about it.

    Apparently the guy in overall charge of the Gemini project is a big anti-white white guy.  So it’s definitely not an accident.

    • #5
  6. kedavis Coolidge
    @kedavis

    Henry Racette: I’ll keep this short because I’ve been away and it wouldn’t surprise me if someone has already commented on this (though I didn’t see it in my quick perusal of the site).

    Psst…

    https://ricochet.com/1547400/gemini-spawn-of-white-narcissism/

    https://ricochet.com/1547103/artificial-intelligence-another-reason-for-distrust/

    https://ricochet.com/1546708/what-woke-is-doing-to-stem/


    But maybe you meant only the Section 230 aspects regarding AI…

    • #6
  7. Susan Quinn Member
    @SusanQuinn

    I’m with you, Hank. 

    • #7
  8. Henry Racette Member
    @HenryRacette

    kedavis (View Comment):

    Henry Racette: I’ll keep this short because I’ve been away and it wouldn’t surprise me if someone has already commented on this (though I didn’t see it in my quick perusal of the site).

    Psst…

    https://ricochet.com/1547400/gemini-spawn-of-white-narcissism/

    https://ricochet.com/1547103/artificial-intelligence-another-reason-for-distrust/

    https://ricochet.com/1546708/what-woke-is-doing-to-stem/


    But maybe you meant only the Section 230 aspects regarding AI…

    Yes, I meant only the Section 230 aspect. And after I posted, I noticed that @NoCaeser mentioned Section 230 in the first comment on @oldbathos’s excellent post. My own suspicion is not that Section 230 will be modified, but that specific legislation will eventually be passed to exclude AI-generated content from Section 230 protection. (Which, in my opinion, is redundant, since an owned-AI is clearly not an independent third party.)

    • #8
  9. Raxxalan Member
    @Raxxalan

    Henry Racette (View Comment):
    (Which, in my opinion, is redundant, since an owned-AI is clearly not an independent third party.)

    Yet.  But I agree with your analysis.

    • #9
  10. Rodin Moderator
    @Rodin

    Just trying to think this through out loud: AI is an app that generates user-requested content. The user then publishes the AI-generated content on a platform. If the content were generated independently of the platform and simply copied and pasted, there would be no question about the Section 230 relief for the platform. Integrating the app into the platform is what raises the question. Is the AI app protected along with the platform? Yes, if you think the user, through its direction to the AI, is regarded as the content generator (think of Adobe Photoshop being used to process a child pornographic image). No, if embedded within the app is the ability to create objectionable content without the direction of the user. But arguably that is not Section 230 relevant because the AI app is independent of the platform notwithstanding its reliance on the platform for drawing content to review.

    We live in a strange new legal world. 

    • #10
  11. Raxxalan Member
    @Raxxalan

    Rodin (View Comment):
    No, if embedded within the app is the ability to create objectionable content without the direction of the user. But arguably that is not Section 230 relevant because the AI app is independent of the platform notwithstanding its reliance on the platform for drawing content to review. 

    Since the AI code modifies the user’s prompt behind the scenes to generate objectionable/defamatory content, while it refuses to generate some content and presents warnings, it would seem that Google is inserting editorial control into the system. Some minimum level of editorial control is allowed by Section 230; however, the control asserted and the prompt modifications argue that the level of control is beyond that of a facially neutral platform and more akin to the editorial point of view of a publisher. Additionally, how close is the generated content to other published works? Is it really fair use, or is it actually unlicensed use of copyrighted materials?

    We live in a strange new legal world. 

    Indeed we do.  

    • #11
  12. kedavis Coolidge
    @kedavis

    P.S.  “African-American Nazi” is a joke in itself.  Would they have been American before becoming German?  Shouldn’t they be, at minimum, African-German Nazi?

    You remind me of hearing American TV sports announcers covering Olympic events a while back, referring to athletes from Nigeria etc as “African-American.”

    Yes indeed, the (re-)programming of Henry Racette seems to have been successful.

    • #12
  13. Henry Racette Member
    @HenryRacette

    kedavis (View Comment):

    P.S. “African-American Nazi” is a joke in itself. Would they have been American before becoming German? Shouldn’t they be, at minimum, African-German Nazi?

    You remind me of hearing American TV sports announcers covering Olympic events a while back, referring to athletes from Nigeria etc as “African-American.”

    Yes indeed, the (re-)programming of Henry Racette seems to have been successful.

    KE, all black people are African-American, in the same way that our cousins across the pond speak a bastardized version of English, American English being the original.

    • #13
  14. Full Size Tabby Member
    @FullSizeTabby

    I do think the implicit conditions of “Section 230” could be used to resolve a couple of issues.

    Any computer-based system that claims the rights of a publisher (by speaking through “artificial intelligence” it has designed and programmed, or by moderating content) loses the protection against lawsuits that a “platform” enjoys under the law section you quote, and becomes liable as a “publisher” for anything that appears on the system.

    1. The “artificial intelligence” issue. If they are creating it, they are liable for it. If by “artificial intelligence” the tech biggies harm me, I have the right to sue them. 
    2. The “we can moderate content by our own rules” issue. Earlier this week the tech biggies argued at the United States Supreme Court that the efforts by the states of Texas and Florida to limit the ability of those tech biggies to control what appears on the platforms of the tech biggies violated the “first amendment” rights of the tech biggies. Well, then they are arguing to be “publishers,” not “platforms,” so they lose the protections of “Section 230” and can be sued by anyone harmed by anything put on the system by any user. 

    Sure, very few users will actually file suit for harm caused by the misbehavior of Google, Facebook, Instagram, TikTok, etc., but the threat provides some potential to get those tech biggies to behave a little more responsibly than they do currently when they think they’re immune to legal consequences for their actions. 

    • #14
  15. kedavis Coolidge
    @kedavis

    Henry Racette (View Comment):

    kedavis (View Comment):

    P.S. “African-American Nazi” is a joke in itself. Would they have been American before becoming German? Shouldn’t they be, at minimum, African-German Nazi?

    You remind me of hearing American TV sports announcers covering Olympic events a while back, referring to athletes from Nigeria etc as “African-American.”

    Yes indeed, the (re-)programming of Henry Racette seems to have been successful.

    KE, all black people are African-American, in the same way that our cousins across the pond speak a bastardized version of English, American English being the original.

    *checksum verified*

    • #15
  16. Steve C. Member
    @user_531302

    David Foster (View Comment):

    Section 230 was intended to protect Internet providers from liability for content *created by their users*. This is pretty clear from the legislative history.

    The 1995 case Stratton Oakmont, Inc. v. Prodigy Services Co. provided part of the rationale for Section 230.

    In October 1994, an unidentified user of Prodigy’s ‘Money Talk’ bulletin board asserted that Stratton Oakmont and its president committed fraudulent acts in connection with an initial public offering. Stratton sued Prodigy, as well as the unidentified user, and argued that Prodigy was acting as a publisher. Prodigy claimed it was not liable, based on the precedent of an earlier case, Cubby, Inc. v. CompuServe Inc.

    The Stratton court held that Prodigy was liable as the publisher of the content created by its users because it exercised editorial control over the messages on its bulletin boards in three ways: 1) by posting Content Guidelines for users, 2) by enforcing those guidelines with “Board Leaders”, and 3) by utilizing screening software designed to remove offensive language. The court’s general argument for holding Prodigy liable, in the face of the CompuServe case, was that “Prodigy’s conscious choice, to gain the benefits of editorial control, has opened it up to a greater liability (than) CompuServe and other computer networks that make no such choice.”

    https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prodigy_Services_Co.

    (A nonvirtual-world analogy for this case might be a job printer which normally prints whatever its customers might request, but occasionally refuses certain jobs on grounds of offensive language, etc…not sure whether there have been any such actual cases.) In any event, the Stratton case outcome was intimidating to online service providers, suggesting that any discretion whatsoever as to customers/content could get them sued for a lot of money…the cloud was removed by Section 230, passed in 1996.

    Stratton Oakmont was a notorious bucket shop. The State of New York forced them out of business in the ’90s. The movie The Wolf of Wall Street is based on them, though I haven’t seen it, so I don’t know how close it is.

    • #16
  17. Steve C. Member
    @user_531302

    What are they feeding these things? I see a lot of this AI product, and most of the time my response is a Dudeist “that’s just your opinion, man.”

    I was expecting something like a really well-informed encyclopedia. A place where I can ask questions like, “I’ve been diagnosed with XXX. What’s the state-of-the-art treatment?” Or, “I have to visit 4 customers in A, B, C, and D. What’s the most efficient travel route if I have to go by car?”

    Instead all I hear about is how people foolishly ask AI to write a legal brief. Or provide an opinion on the relative merits of welfare over a negative income tax.

    And creating pictures? That’s nuts. Computers don’t see. 


    • #17
  18. Henry Racette Member
    @HenryRacette

    Steve C. (View Comment):

    What are they feeding these things?

    Perhaps surprisingly, it doesn’t seem to matter: You get plausible mendacity in all sorts of contexts.

    A few days ago I asked ChatGPT 3.5 to write a Fibonacci calculation function in the Rust computer language. It did it, correctly taking into account Rust’s novel borrow and lifetime features.
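
    For the curious, a correct solution looks something like the following (a minimal sketch of my own for illustration, not ChatGPT’s actual output, which I didn’t keep):

        // A minimal iterative Fibonacci in Rust. This version is simple
        // enough that the borrow checker has nothing to complain about;
        // checked_add guards against u64 overflow for large n.
        fn fibonacci(n: u32) -> Option<u64> {
            let (mut a, mut b): (u64, u64) = (0, 1);
            for _ in 0..n {
                let next = a.checked_add(b)?; // None if the result overflows
                a = b;
                b = next;
            }
            Some(a)
        }

        fn main() {
            for n in [0u32, 1, 10, 90] {
                match fibonacci(n) {
                    Some(v) => println!("fib({n}) = {v}"),
                    None => println!("fib({n}) overflows a u64"),
                }
            }
        }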

    I then asked it if it knew the Galil machine control language. ChatGPT assured me that it did, and even told me a little bit about the Galil company, which manufactures programmable motor controllers. So I asked it to write a Fibonacci calculation function in the Galil language.

    (A fair portion of my work involves writing code for these and similarly obscure devices, and I was curious how well today’s AI might do my own job.)

    In about four seconds it had written a neat little Fibonacci function for me. It wasn’t Galil machine control language. It was a completely fictitious language that the AI told me was “Galil-MIC.” After I prompted further, it told me that stood for “Galil Machine Interface Command language.”

    There is no such language. The function it wrote was entirely plausible, but wholly non-functional, an imagined pseudo-code implementation that could easily be read as a Fibonacci number function but was written in no existing language.

    When I told ChatGPT that it had made a mistake, it apologized (as it does quite frequently) and then generated a similar program with minor changes — still completely fraudulent and meaningless.

    When I pressed it for more information about this Galil-MIC language, it referred me to the Galil website (which it got correct) and told me (correctly) how to get to the online documentation page, and that once there I should search for “Galil-MIC.” For the sake of thoroughness I did just that, knowing full well that no such language exists.

    It doesn’t have a conception of truth, per se. I’m not sure what it has in its stead. Plausibility, perhaps? It’s certainly good at making plausible statements with confidence and authority.

    • #18
  19. The Reticulator Member
    @TheReticulator

    Henry Racette (View Comment):
    When I pressed it for more information about this Galil-MIC language, it referred me to the Galil website (which it got correct) and told me (correctly) how to get to the online documentation page, and that once there I should search for “Galil-MIC.” For the sake of thoroughness I did just that, knowing full well that no such language exists.

    If anything good comes out of AI, it will be the obligation of every writer to provide extensive footnotes and endnotes, and of every reader to check them out.  

    • #19
  20. Paul Stinchfield Member
    @PaulStinchfield

    Henry Racette (View Comment):
    When I told ChatGPT that it had made a mistake, it apologized (as it does quite frequently) and then generated a similar program with minor changes — still completely fraudulent and meaningless.

    The Sirius Cybernetics Corporation. Share and enjoy (TM).

    • #20
  21. Paul Stinchfield Member
    @PaulStinchfield

    Henry Racette (View Comment):
    It doesn’t have a conception of truth, per se. I’m not sure what it has in its stead. Plausibility, perhaps? It’s certainly good at making plausible statements with confidence and authority.

    Much like a Guardianista.

    • #21
  22. db25db Inactive
    @db25db

    Excellent write-up, Henry! I love this community.

    • #22
  23. TBA Coolidge
    @RobtGilsdorf

    I like speculative fiction that questions whether an artificial intelligence is a person or has a soul. 

    ChatGPT is a jumped up bot that spits search terms and vacuums up words which it then dumps into linguistic molds. 

    This is not artificial intelligence: this is fake intelligence. It enjoys no rights, because it has no rights and, for that matter, no capacity for enjoyment. And its makers are as responsible for its damaging ‘speech’ as the designers of a self-driving car are for its moving violations and accidents. 

    • #23
  24. Henry Racette Member
    @HenryRacette

    TBA (View Comment):

    I like speculative fiction that questions whether an artificial intelligence is a person or has a soul.

    ChatGPT is a jumped up bot that spits search terms and vacuums up words which it then dumps into linguistic molds.

    This is not artificial intelligence: this is fake intelligence. It enjoys no rights, because it has no rights and, for that matter, no capacity for enjoyment. And its makers are as responsible for its damaging ‘speech’ as the designers of a self-driving car are for its moving violations and accidents.

    TBA, we know very well what generative AIs are and how they work. We understand that they’re pattern-matching systems that churn networks of symbols based on frequencies of occurrence and context, and that generate responses that comport, by various metrics, with predictable responses that might plausibly have occurred (whether or not they did) in the body of data on which the AI was trained.

    What we don’t know is that that is in any fundamental sense different from how you and I think.
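
    To make “pattern-matching based on frequency and context” concrete, here is a toy sketch (my own illustration, and a vast oversimplification: a bigram model that always emits the most frequent next word, where a real LLM uses learned weights over long contexts):

        use std::collections::HashMap;

        fn main() {
            // A toy corpus; real models train on trillions of words.
            let corpus = "the cat sat on the mat and the cat ran off";
            let words: Vec<&str> = corpus.split_whitespace().collect();

            // Count how often each word follows each other word.
            let mut follows: HashMap<&str, HashMap<&str, u32>> = HashMap::new();
            for pair in words.windows(2) {
                *follows.entry(pair[0]).or_default().entry(pair[1]).or_insert(0) += 1;
            }

            // "Generate" by always emitting the most frequent continuation.
            // Note that nothing in here models truth, only frequency.
            let mut current = "the";
            print!("{current}");
            for _ in 0..6 {
                let step = follows
                    .get(current)
                    .and_then(|m| m.iter().max_by_key(|&(_, c)| *c).map(|(&w, _)| w));
                match step {
                    Some(next) => {
                        print!(" {next}");
                        current = next;
                    }
                    None => break,
                }
            }
            println!();
        }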

    This whole topic is fraught with lexical ambiguity, but my own suspicion is that AI is artificial intelligence — obviously, given that it’s man-made — but nonetheless real intelligence.

    • #24
  25. Steve C. Member
    @user_531302

    Henry Racette (View Comment):

    TBA (View Comment):

    I like speculative fiction that questions whether an artificial intelligence is a person or has a soul.

    ChatGPT is a jumped up bot that spits search terms and vacuums up words which it then dumps into linguistic molds.

    This is not artificial intelligence: this is fake intelligence. It enjoys no rights, because it has no rights and, for that matter, no capacity for enjoyment. And its makers are as responsible for its damaging ‘speech’ as the designers of a self-driving car are for its moving violations and accidents.

    TBA, we know very well what generative AIs are and how they work. We understand that they’re pattern-matching systems that churn networks of symbols based on frequencies of occurrence and context, and that generate responses that comport, by various metrics, with predictable responses that might plausibly have occurred (whether or not they did) in the body of data on which the AI was trained.

    What we don’t know is that that is in any fundamental sense different from how you and I think.

    This whole topic is fraught with lexical ambiguity, but my own suspicion is that AI is artificial intelligence — obviously, given that it’s man-made — but nonetheless real intelligence.

    What causes it to make things up? Does it not have a rule: if you don’t know the answer print “I don’t know the answer to your question, Dave.”

    • #25
  26. The Reticulator Member
    @TheReticulator

    Henry Racette (View Comment):
    …my own suspicion is that AI is artificial intelligence — obviously, given that it’s man-made — but nonetheless real intelligence.

    It’s both:  It’s real artificial intelligence. 

    • #26
  27. Henry Racette Member
    @HenryRacette

    Steve C. (View Comment):

    Henry Racette (View Comment):

    TBA (View Comment):

    I like speculative fiction that questions whether an artificial intelligence is a person or has a soul.

    ChatGPT is a jumped up bot that spits search terms and vacuums up words which it then dumps into linguistic molds.

    This is not artificial intelligence: this is fake intelligence. It enjoys no rights, because it has no rights and, for that matter, no capacity for enjoyment. And its makers are as responsible for its damaging ‘speech’ as the designers of a self-driving car are for its moving violations and accidents.

    TBA, we know very well what generative AIs are and how they work. We understand that they’re pattern-matching systems that churn networks of symbols based on frequencies of occurrence and context, and that generate responses that comport, by various metrics, with predictable responses that might plausibly have occurred (whether or not they did) in the body of data on which the AI was trained.

    What we don’t know is that that is in any fundamental sense different from how you and I think.

    This whole topic is fraught with lexical ambiguity, but my own suspicion is that AI is artificial intelligence — obviously, given that it’s man-made — but nonetheless real intelligence.

    What causes it to make things up? Does it not have a rule: if you don’t know the answer print “I don’t know the answer to your question, Dave.”

    There are multiple AIs with multiple rules — and there are limits to the effectiveness of any rules, given the dynamic nature of AIs.

    But, no, AIs don’t generally say “I don’t know,” because that isn’t an acceptable goal resolution.

    In the example I mention here, I asked the AI why it didn’t simply admit that it didn’t know the Galil machine control language. It dodged the question for a bit but, when pressed, said this:

    I apologize for not directly addressing your question earlier. The reason behind attempting to provide solutions even when lacking specific knowledge is to assist users as comprehensively as possible within the scope of my training. While I may not have expertise in every domain or language, I aim to provide helpful guidance based on general principles, logical reasoning, and similarities to other languages or concepts I am familiar with.

    However, it’s essential to acknowledge when a specific request falls outside my area of expertise or when uncertainty arises. Admitting a lack of knowledge is vital for maintaining trust and ensuring accurate assistance. I appreciate your feedback and will strive to be more transparent about my limitations in the future.

    The first part of that statement kind of makes sense: the pseudo-code the AI wrote was generally instructive, at least in terms of how one would write a Fibonacci calculator. But it was worthless so far as Galil machine control language was concerned. It was, all things considered, a poor compromise.

    But look at the second paragraph. It claims to care about disclosing its lack of knowledge, but it doesn’t do it. There’s a skew between what it thinks you expect a proper AI to do, and what it actually does do.

    It’s dishonest. I don’t know if this is an essential quality of goal-driven AI. I don’t think we know, yet.

    • #27
  28. Steven Seward Member
    @StevenSeward

    Okay, I’m not hip.  What is this reference to “Hitler’s African-American Nazi Soldiers?”

    • #28
  29. Henry Racette Member
    @HenryRacette

    Steven Seward (View Comment):

    Okay, I’m not hip. What is this reference to “Hitler’s African-American Nazi Soldiers?”

    Steven, nor am I. Let me refer you to @oldbathos’ post on the subject here.

    • #29
  30. Rodin Moderator
    @Rodin

    Steven Seward (View Comment):

    Okay, I’m not hip. What is this reference to “Hitler’s African-American Nazi Soldiers?”

    It’s a reference to how Gemini responded to a prompt asking for images of Nazi soldiers: since the AI was suppressing images of white people, Gemini made them Black.

    • #30