Don’t Regulate Artificial Intelligence: Starve It

 

As long as we are sitting around bored in lockdown, how about a little controversy? This is currently running in Scientific American. The essay is an adaptation from my book The Autonomous Revolution. Here’s the opening, and a link if you want to read further.

Artificial intelligence is still in its infancy. But it may well prove to be the most powerful technology ever invented. It has the potential to improve health, supercharge intellects, multiply productivity, save the environment and enhance both freedom and democracy.

But as that intelligence continues to climb, so does the danger of using AI irresponsibly: it has the potential to become a social and cultural H-bomb. It’s a technology that can deprive us of our liberty, power autocracies and genocides, program our behavior, turn us into human machines and, ultimately, turn us into slaves. Therefore, we must be very careful about the ascendance of AI; we don’t dare make a mistake. And our best defense may be to put AI on an extreme diet.


There are 2 comments.

  1. Flicker (@Flicker)

    “It’s a technology that can deprive us of our liberty, power autocracies and genocides, program our behavior, turn us into human machines and, ultimately, turn us into slaves.”

    The first step in depriving us of liberty is the modern human propensity for deferring to the “experts,” and the deliberate misuse of that deference. Today we have great disagreement over projections, whether from computer models or from intuitive logic, about whether to maintain the lockdown. Arguing from the former, governments are ordering continued lockdowns; arguing from the latter, the subjected population is protesting. The contrast is stark: one side argues that a hypothetical “mass death will result if we release you too early,” while the other asks, “what point is there in living if we can’t eat and pay our rent?” This involves fundamental value judgments, and values are not innate to or self-existing within AI algorithms. The creators build in the moral or societal values, consciously or not.

    How common will it be in the future, when AI undergirds society’s decision-making, for everyone to accept that the computers know best, even when their decisions are hard to take?

    And how will AI be deliberately misused by those whose agenda doesn’t align with half the country’s values, as we see with the left today?

    Added: I am not aware of being a Luddite, but sometimes I wonder.

  2. Stad (@Stad)

    Flicker:
    Added: I am not aware of being a Luddite, but sometimes I wonder.

    It’s an understandable response.

    One of the more common plots in science fiction is out-of-control technology and how man deals with it. The stories end in success, failure, or something in between. We’ve seen real-life examples due to unintended consequences, although nothing as drastic as intelligent killer robots (yet).

    I have my suspicions about driverless cars and highly computerized jet airliners. I play a lot of computer games, and almost all of them have bugs (thank goodness the internet has a plethora of fixes and workarounds). It wouldn’t be wise to think the software in those applications is bug-free . . .
