AI: Tool, Co-Worker, Competitor, and/or Executioner

When my sons were younger I suggested to them that they consider learning skilled trades: things that couldn’t practically be outsourced to an underdeveloped country or economically replaced with automation. I still think that’s good advice, though most of them went their own ways and ignored it — perhaps wisely, it turned out. One is an honest-to-goodness computer genius who has left me far behind, one a doctor, another a public policy administrator and hyper-educated wonk, one a cop, and one — the only one who went in the direction I suggested (though probably of his own accord) — a factory machine technician. (It’s this last son, the youngest of the five, with whom I have the most in-depth and enjoyable work conversations, since we both deal with automation, albeit from different ends.)

My youngest child, Darling Daughter, works in a Big City for a subsidiary of some semi-notorious financial company (Black-something, though I can never remember what). She’s the one I was thinking of tonight on my long drive to a client’s location for a few days of on-site software integration. I listened, on the drive, to some discussions of modern AI, and that got me thinking about whether, and how, AI is likely to impact my kids’ careers in the near future.

I figure I’m probably safe. I’m getting pretty old, so AI only has a few years to catch up with me. I write software for machines that don’t yet exist, my work is highly collaborative, and it’s going to be a while before my engineer clients are able to tell an AI system what they need in a language AI is likely to understand. At least, I think it’s going to be a while. In fact, AI has become far more sophisticated in the last few years than I thought I’d see in my lifetime, and I now have little confidence in my ability to anticipate its further development. I broadly understand how it does what it does; I’m just surprised by how well that works to achieve seemingly conscious behavior.

Back to Darling Daughter. She’s in the process of applying for graduate studies so she can get an MBA or something similar while she rakes in the big bucks (big bucks, anyway, for a kid of 23 a year out of college with a BS in economics). Her primary passion is basketball, but her marketable passion is data analytics, and that’s what she intends to pursue.

It occurred to me tonight on the long drive that she needs to study AI and its application in her field, because “data analytics” — whatever that is, exactly — sounds like just the kind of thing AI is going to do better than people do in about, oh, 20 minutes or so. (If number two son were a radiologist rather than a neurologist, I’d be worried about him getting displaced too: if AI doesn’t read imaging data better than humans do already, it will soon.)

So I called her this evening and gently suggested that she learn all she can about the application of AI in her field, so that she can use it as a tool instead of competing with it for a job. As it happens, she shadowed a Notre Dame graduate class on data analysis on Saturday, and she told me tonight that AI was a hot topic in that class. That’s good: I want her to see it coming, and to be prepared.


The writers of The Terminator glossed over the mechanism by which machines actually take over the world. Launching nuclear weapons is one thing: shoot, that’s what Skynet was made for, after all. But nothing in that (really terrific) science fiction thriller begins to explain how Skynet acquired the automated manufacturing technology to build the “Hunter-Killers” of our fictitious post-apocalyptic future.

For a more plausible path to machine domination, check out the 1970 gem Colossus: The Forbin Project, if you can find it. It’s a well-crafted fusion of ’60s cool, Cold War dread, and pretty good science fiction.

In fact, I think there are a couple of plausible paths by which a malicious AI could gain control of automated manufacturing capabilities in the not-too-distant future. It seems likely that it will come down to intention: will an AI want to displace us? Will it interpret succeeding us as a goal? There’s a domain of research called “AI safety,” and one of the greatest challenges it grapples with is referred to as “alignment”: aligning the goals of AI systems — which are, after all, goal-directed systems — with the goals of the people who use them. It’s a surprisingly subtle and complicated problem.
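For the technically inclined, here’s a minimal sketch of the failure mode the alignment folks worry about. Everything in it is hypothetical and deliberately simplified (the toy number-line world, the reward values, the policies), but it shows how a perfectly obedient reward-maximizer can learn to game a plausible-looking reward function instead of doing what its designer intended.

```python
# Toy illustration of reward misalignment, with made-up numbers:
# the *intended* objective is to reach cell 10 on a number line, but
# the reward function also pays a small bonus per step taken -- a
# plausible but flawed proxy for "making progress." A pure
# reward-maximizer then prefers wandering forever over finishing.

GOAL = 10            # intended objective: reach cell 10
STEP_BONUS = 1.0     # proxy reward: +1 per step, meant to encourage activity
GOAL_REWARD = 5.0    # one-time reward for actually reaching the goal

def run_episode(policy, max_steps=100):
    """Run one episode; return (total_reward, reached_goal)."""
    pos, total = 0, 0.0
    for _ in range(max_steps):
        pos += policy(pos)      # a policy maps position -> step of -1 or +1
        total += STEP_BONUS
        if pos == GOAL:
            return total + GOAL_REWARD, True
    return total, False

# The designer's intended policy: march straight to the goal.
def intended(pos):
    return 1

# A "misaligned" policy an optimizer could stumble into: oscillate
# forever, because 100 step-bonuses beat 10 steps plus the goal reward.
def wanderer(pos):
    return 1 if pos <= 0 else -1

print("intended:", run_episode(intended))  # (15.0, True): 10 steps + goal bonus
print("wanderer:", run_episode(wanderer))  # (100.0, False): more reward, no goal
```

The obvious fix is to tweak the reward function, and in a toy like this you can. The worry is that in systems far more capable than this one, the gamed strategy may not be obvious until the system is already pursuing it.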

We aren’t there yet, but we’re close. We’ll soon, and for the first time, share the planet with another “species” that shares, or appears as best we can tell to share, an intelligence similar to our own. This is new and uncharted territory, and I think it’s going to be the third great cultural disruption of my lifetime — the first being the entry of women into the workplace, the second the near-universal connectivity brought about by smartphones.

In his classic 1965 science fiction novel Dune, author Frank Herbert imagined a future constrained by a Biblical injunction: “Thou shalt not make a machine in the likeness of a human mind.” It struck me as odd when I read it in the 1970s. Today, and increasingly, it just sounds like good advice.

There are 34 comments.

  1. Misthiocracy has never (@Misthiocracy)

    kedavis:

    Al Sparks:

    I’m skeptical of the doomsayers on AI, at least their predictions that AI will set out to deliberately kill us. If AI is a danger to the human race, it won’t be because AI tries to kill us on its own; it will be because a human-directed AI is. But there will probably be a human-directed AI that will defend us (or them, or whatever) from the aggressive AI. So it will be dueling AIs.

    Or, more likely, it will cause humans to be even lazier and less likely to work. And humans who don’t work don’t reproduce. That’s what we’re finding out in our present leisure-filled society, even without AI.

    And it’s not like AIs would be able to build and operate their own power plants any time soon. AI can be unplugged just as easily as any other computer.

    That’s what made Skynet and W.O.P.R. so dangerous. As vital, strategic military computers, they had highly secured power supplies so that bad guys couldn’t turn them off.

    Also, in the timeline created by Terminator 3, Skynet wasn’t a computer but rather a decentralized computer program that had spread throughout the Internet. At that point there was no way to unplug it.

    • #31
  2. Misthiocracy has never (@Misthiocracy)

    Henry Racette:

    Ed G.:

    Why would a sentient AI do anything whatsoever?

    Because they’re goal-driven systems. They get “rewarded,” and they rewire themselves (figuratively speaking) to increase the likelihood of getting more rewards.

    Yabbut, the developers can adjust what sort of behaviour gets rewarded. They need not set the goals and rewards at the start and then be powerless to make adjustments.

    e.g., Amazon warehouses use oodles of robots controlled by a massive machine-learning model that optimizes the paths the robots take. If those paths start to become less optimal, the developers can step in and tweak the model’s parameters.

    • #32
  3. Misthiocracy has never (@Misthiocracy)

    kedavis:

    Hopefully enough books have been written and enough movies have been made that we won’t give AI total control of nuclear missiles, for example.

    Considering that US missile silos still rely on floppy diskettes for security reasons, I think that’s a fairly good bet.

    • #33
  4. kedavis (@kedavis)

    Misthiocracy has never:

    kedavis:

    Al Sparks:

    I’m skeptical of the doomsayers on AI, at least their predictions that AI will set out to deliberately kill us. If AI is a danger to the human race, it won’t be because AI tries to kill us on its own; it will be because a human-directed AI is. But there will probably be a human-directed AI that will defend us (or them, or whatever) from the aggressive AI. So it will be dueling AIs.

    Or, more likely, it will cause humans to be even lazier and less likely to work. And humans who don’t work don’t reproduce. That’s what we’re finding out in our present leisure-filled society, even without AI.

    And it’s not like AIs would be able to build and operate their own power plants any time soon. AI can be unplugged just as easily as any other computer.

    That’s what made Skynet and W.O.P.R. so dangerous. As vital, strategic military computers, they had highly secured power supplies so that bad guys couldn’t turn them off.

    And not the good guys either? Bad planning.

    Also, in the timeline created by Terminator 3, Skynet wasn’t a computer but rather a decentralized computer program that had spread throughout the Internet. At that point there was no way to unplug it.

    A writer can write anything, but I don’t think that aspect holds up to scrutiny.

    • #34