AI: Tool, Co-Worker, Competitor, and/or Executioner

 

When my sons were younger I suggested to them that they consider learning skilled trades, things that couldn’t practically be outsourced to an underdeveloped country or economically replaced with automation. I still think that’s good advice, though most of them did their own things and ignored it — perhaps wisely, it turned out. One is an honest-to-goodness computer genius who has left me far behind, one a doctor, another a public policy administrator and hyper-educated wonk, one a cop, and one — the only one who went in the direction I suggested (if probably of his own accord) — a factory machine technician. (It’s this last son, the youngest of the five, with whom I have the most in-depth and enjoyable work conversations, since we both deal with automation, albeit from different ends.)

My youngest child, Darling Daughter, works in a Big City for a subsidiary of some semi-notorious financial company (Black-something, though I can never remember what). She’s the one I was thinking of tonight on my long drive to a client’s location for a few days of on-site software integration. I listened, on the drive, to some discussions of modern AI, and that got me thinking about how, and if, AI is likely to impact their careers in the near future.

I figure I’m probably safe. I’m getting pretty old, so AI only has a few years to catch up with me. I write software for machines that don’t yet exist, my work is highly collaborative, and it’s going to be a while before my engineer clients are able to tell an AI system what they need in a language AI is likely to understand. At least, I think it’s going to be a while. In fact, AI has become far more sophisticated in the last few years than I thought I’d see in my lifetime, and I now have little confidence in my ability to anticipate its further development. I broadly understand how it does what it does; I’m just surprised by how well that works to achieve seemingly conscious behavior.

Back to Darling Daughter. She’s in the process of applying for graduate studies so she can get an MBA or something similar while she rakes in the big bucks (big bucks, anyway, for a kid of 23 a year out of college with a BS in economics). Her primary passion is basketball, but her marketable passion is data analytics, and that’s what she intends to pursue.

It occurred to me tonight on the long drive that she needs to study AI and its application in her field, because “data analytics” — whatever that is, exactly — sounds like just the kind of thing AI is going to do better than people do in about, oh, 20 minutes or so. (If number two son were a radiologist rather than a neurologist, I’d be worried about him getting displaced too: if AI doesn’t read imaging data better than humans do already, it will soon.)

So I called her this evening and told her gently suggested that she learn all she can about the application of AI in her field, so that she can use it as a tool instead of competing with it for a job. As it happens, she shadowed a Notre Dame graduate class on data analysis on Saturday, and she told me tonight that AI was a hot topic in that class. That’s good: I want her to see it coming, and to be prepared.


The writers of The Terminator glossed over the mechanism by which machines actually take over the world. Launching nuclear weapons is one thing: shoot, that’s what Skynet was made for, after all. But nothing in that (really terrific) science fiction thriller begins to explain how Skynet acquired the automated manufacturing technology to build the “Hunter-Killers” of our fictitious post-apocalyptic future.

For a more plausible path to machine domination, check out the 1970 gem Colossus: The Forbin Project, if you can find it. It’s a well-crafted fusion of 60’s cool, cold-war dread, and pretty good science fiction.

In fact, I think there are a couple of plausible paths by which a malicious AI could gain control of automated manufacturing capabilities in the not-too-distant future. It seems likely that it will come down to intention: will an AI want to displace us? Will it interpret succeeding us as a goal? There’s a domain of research called “AI safety,” and one of the greatest challenges with which it grapples is referred to as “alignment”: aligning the goals of AI systems — which are, after all goal-directed systems — with the goals of the users of AI systems. It’s a surprisingly subtle and complicated problem.

We aren’t there yet, but we’re close. We’ll soon, and for the first time, share the planet with another “species” that shares, or appears to the best of our abilities to distinguish to share, intelligence similar to our own. This is new and uncharted territory, and I think it’s going to be the third great cultural disruption of my lifetime — the first being the entry of women in the workplace, the second the near-universal connectivity brought about by smartphones.

In his classic 1965 science fiction novel Dune, author Frank Herbert imagined a future constrained by a Biblical injunction: “Thou shalt not make a machine in the likeness of a human mind.” It struck me as odd when I read it in the 1970s. Today, and increasingly, it just sounds like good advice.

Published in Technology
Tags:

This post was promoted to the Main Feed by a Ricochet Editor at the recommendation of Ricochet members. Like this post? Want to comment? Join Ricochet’s community of conservatives and be part of the conversation. Join Ricochet for Free.

There are 34 comments.

Become a member to join the conversation. Or sign in if you're already a member.
  1. kedavis Coolidge
    kedavis
    @kedavis

    I’m not sure, it might be a bit cropped but it’s better than nothing:

     

    • #1
  2. kedavis Coolidge
    kedavis
    @kedavis

    I thought the most implausible part of the movie – it’s been so long since I read the books (there are 3 books in the Colossus series, only the first became a movie) I don’t remember if they explained anything about it – is that it was all sealed up with no indication of any kind of auto-repair stuff.  So probably in a couple years at most, it would be failing.

    Which also explains one of the all-time great jokes, coming from the movie:

    “This is the voice of World Control.

    “You have nothing to fear.

    “Nothing can go wrong.

    “*click*

    “Go wrong.

    “*click*

    “Go wrong

    “*click*”

    and so on.

    • #2
  3. Henry Castaigne Member
    Henry Castaigne
    @HenryCastaigne

    I think this was a very good review.

    • #3
  4. David Foster Member
    David Foster
    @DavidFoster

    I thought it would be interesting to go back and see how the implications of AI were portrayed in some SF stories, when AI and robotics first came to serious attention…Artificial Intelligence & Robotics, as Viewed From the Early 1950s.

    • #4
  5. DonG (CAGW is a Scam) Coolidge
    DonG (CAGW is a Scam)
    @DonG

    Henry Racette: As it happens, she shadowed a Notre Dame graduate class on data analysis on Saturday, and she told me tonight that AI was a hot topic in that class. That’s good: I want her to see it coming, and to be prepared.

    Machine Learning has been hot-hotness in data analytics for a couple of years now.  

    As for software, I have found that the hardest part of software is translating a problem into a solution.   People that need software (customers) generally don’t understand their problems and are certainly poor at devising a solution.   I like to say that if you asked a farmer a 120 years ago if they could make a wish for something that would make them more productive, what would that wish be?   That farmer would wish for a stronger mule, not a GPS integrated quad-track 600HP tractor.   Good software solutions solve problems in an innovative and elegant manner.  AI is not good at innovative and elegant.

    • #5
  6. kedavis Coolidge
    kedavis
    @kedavis

    DonG (CAGW is a Scam) (View Comment):

    Henry Racette: As it happens, she shadowed a Notre Dame graduate class on data analysis on Saturday, and she told me tonight that AI was a hot topic in that class. That’s good: I want her to see it coming, and to be prepared.

    Machine Learning has been hot-hotness in data analytics for a couple of years now.

    As for software, I have found that the hardest part of software is translating a problem into a solution. People that need software (customers) generally don’t understand their problems and are certainly poor at devising a solution. I like to say that if you asked a farmer a 120 years ago if they could make a wish for something that would make them more productive, what would that wish be? That farmer would wish for a stronger mule, not a GPS integrated quad-track 600HP tractor. Good software solutions solve problems in an innovative and elegant manner. AI is not good at innovative and elegant.

    Most real-world software seems to rely basically on “brute force,” and most of the programmers I’ve known over ~50 years weren’t really capable of doing anything else.  Which is one reason ever-increasing computing power has been necessary.  AI might be able to do brute-force type programming as well as many people can.

    • #6
  7. I am Jack's Mexican identity Inactive
    I am Jack's Mexican identity
    @dnewlander

    DonG (CAGW is a Scam) (View Comment):

    Henry Racette: As it happens, she shadowed a Notre Dame graduate class on data analysis on Saturday, and she told me tonight that AI was a hot topic in that class. That’s good: I want her to see it coming, and to be prepared.

    Machine Learning has been hot-hotness in data analytics for a couple of years now.

    As for software, I have found that the hardest part of software is translating a problem into a solution. People that need software (customers) generally don’t understand their problems and are certainly poor at devising a solution. I like to say that if you asked a farmer a 120 years ago if they could make a wish for something that would make them more productive, what would that wish be? That farmer would wish for a stronger mule, not a GPS integrated quad-track 600HP tractor. Good software solutions solve problems in an innovative and elegant manner. AI is not good at innovative and elegant.

    Machine Learning isn’t doing anything in data analytics. Because 99% of business users don’t know what they want until they see something in front of them, and then they start realizing what’s possible. It’s been that way for a long, long time.

    • #7
  8. Mad Gerald Coolidge
    Mad Gerald
    @Jose

    I saw Colossus the first time a couple months ago.  It’s a great portrayal of good intentions somehow going wrong.

    Long ago I read a Sci-Fi story about “AI” which was instructed to prevent harm to people.  As I recall, after restricting driving privileges and access to power tools, humans ended up confined to padded rooms.

    • #8
  9. kedavis Coolidge
    kedavis
    @kedavis

    Mad Gerald (View Comment):

    I saw Colossus the first time a couple months ago. It’s a great portrayal of good intentions somehow going wrong.

    Long ago I read a Sci-Fi story about “AI” which was instructed to prevent harm to people. As I recall, after restricting driving privileges and access to power tools, humans ended up confined to padded rooms.

    The one I remember most, and which could actually turn out to be the most prophetic, was one where people’s only purpose was pretty much to just consume/use up/wear out everything that had been produced by automated factories.

    • #9
  10. Taras Coolidge
    Taras
    @Taras

    kedavis (View Comment):

    Mad Gerald (View Comment):

    I saw Colossus the first time a couple months ago. It’s a great portrayal of good intentions somehow going wrong.

    Long ago I read a Sci-Fi story about “AI” which was instructed to prevent harm to people. As I recall, after restricting driving privileges and access to power tools, humans ended up confined to padded rooms.

    The one I remember most, and which could actually turn out to be the most prophetic, was one where people’s only purpose was pretty much to just consume/use up/wear out everything that had been produced by automated factories.

    That’s actually a version of a famous economic fallacy, Frederic Bastiat’s “broken windows”.

    Mad Gerald:  The story sounds like Jack Williamson’s classic “With Folded Hands” (1947).   

    • #10
  11. I am Jack's Mexican identity Inactive
    I am Jack's Mexican identity
    @dnewlander

    kedavis (View Comment):

    Mad Gerald (View Comment):

    I saw Colossus the first time a couple months ago. It’s a great portrayal of good intentions somehow going wrong.

    Long ago I read a Sci-Fi story about “AI” which was instructed to prevent harm to people. As I recall, after restricting driving privileges and access to power tools, humans ended up confined to padded rooms.

    The one I remember most, and which could actually turn out to be the most prophetic, was one where people’s only purpose was pretty much to just consume/use up/wear out everything that had been produced by automated factories.

    Autofac?

    https://en.wikipedia.org/wiki/Autofac

    Amazon used that as the basis for an episode of their Electric Dreams show.

    https://www.denofgeek.com/tv/philip-k-dick-s-electric-dreams-episode-8-review-autofac/

     

    • #11
  12. Hartmann von Aue Member
    Hartmann von Aue
    @HartmannvonAue

    Henry Castaigne (View Comment):

    I think this was a very good review.

    Yeah, it was. I listened to that a few weeks ago. Colossus: The Forbin Project is one of my favorite films on this topic. The second book in the series is a rather hard read, for the kinds of experiments Colossus has its human agents perform on people in its efforts to understand humanity better.

    • #12
  13. kedavis Coolidge
    kedavis
    @kedavis

    I am Jack's Mexican identity (View Comment):

    kedavis (View Comment):

    Mad Gerald (View Comment):

    I saw Colossus the first time a couple months ago. It’s a great portrayal of good intentions somehow going wrong.

    Long ago I read a Sci-Fi story about “AI” which was instructed to prevent harm to people. As I recall, after restricting driving privileges and access to power tools, humans ended up confined to padded rooms.

    The one I remember most, and which could actually turn out to be the most prophetic, was one where people’s only purpose was pretty much to just consume/use up/wear out everything that had been produced by automated factories.

    Autofac?

    https://en.wikipedia.org/wiki/Autofac

    Amazon used that as the basis for an episode of their Electric Dreams show.

    https://www.denofgeek.com/tv/philip-k-dick-s-electric-dreams-episode-8-review-autofac/

     

    I don’t think that’s the one.  Especially since I “never” read Philip K. Dick.

    • #13
  14. kedavis Coolidge
    kedavis
    @kedavis

    Hartmann von Aue (View Comment):

    Henry Castaigne (View Comment):

    I think this was a very good review.

    Yeah, it was. I listened to that a few weeks ago. Colossus: The Forbin Project is one of my favorite films on this topic. The second book in the series is a rather hard read, for the kinds of experiments Colossus has its human agents perform on people in its efforts to understand humanity better.

    That, and some of Fred Saberhagen’s “Berserker” stories.

    • #14
  15. I am Jack's Mexican identity Inactive
    I am Jack's Mexican identity
    @dnewlander

    kedavis (View Comment):

    Hartmann von Aue (View Comment):

    Henry Castaigne (View Comment):

    I think this was a very good review.

    Yeah, it was. I listened to that a few weeks ago. Colossus: The Forbin Project is one of my favorite films on this topic. The second book in the series is a rather hard read, for the kinds of experiments Colossus has its human agents perform on people in its efforts to understand humanity better.

    That, and some of Fred Saberhagen’s “Berserker” stories.

    That was my favorite series during high school. A classmate of mine knew Saberhagen, as he lived in Albuquerque.

    • #15
  16. Mad Gerald Coolidge
    Mad Gerald
    @Jose

    Taras (View Comment):

    kedavis (View Comment):

    Mad Gerald (View Comment):

    I saw Colossus the first time a couple months ago. It’s a great portrayal of good intentions somehow going wrong.

    Long ago I read a Sci-Fi story about “AI” which was instructed to prevent harm to people. As I recall, after restricting driving privileges and access to power tools, humans ended up confined to padded rooms.

    The one I remember most, and which could actually turn out to be the most prophetic, was one where people’s only purpose was pretty much to just consume/use up/wear out everything that had been produced by automated factories.

    That’s actually a version of a famous economic fallacy, Frederic Bastiat’s “broken windows”.

    Mad Gerald: The story sounds like Jack Williamson’s classic “With Folded Hands” (1947).

    A quick web search makes me think this might be it.  Thanks!

    • #16
  17. kedavis Coolidge
    kedavis
    @kedavis

    Mad Gerald (View Comment):

    Taras (View Comment):

    kedavis (View Comment):

    Mad Gerald (View Comment):

    I saw Colossus the first time a couple months ago. It’s a great portrayal of good intentions somehow going wrong.

    Long ago I read a Sci-Fi story about “AI” which was instructed to prevent harm to people. As I recall, after restricting driving privileges and access to power tools, humans ended up confined to padded rooms.

    The one I remember most, and which could actually turn out to be the most prophetic, was one where people’s only purpose was pretty much to just consume/use up/wear out everything that had been produced by automated factories.

    That’s actually a version of a famous economic fallacy, Frederic Bastiat’s “broken windows”.

    Mad Gerald: The story sounds like Jack Williamson’s classic “With Folded Hands” (1947).

    A quick web search makes me think this might be it. Thanks!

    Good for you, but I don’t think it’s the story that I referenced.

    • #17
  18. Headedwest Coolidge
    Headedwest
    @Headedwest

    Back to Darling Daughter. She’s in the process of applying for graduate studies so she can get an MBA or something similar while she rakes in the big bucks (big bucks, anyway, for a kid of 23 a year out of college with a BS in economics). Her primary passion is basketball, but her marketable passion is data analytics, and that’s what she intends to pursue.

    It occurred to me tonight on the long drive that she needs to study AI and its application in her field, because “data analytics” — whatever that is, exactly — sounds like just the kind of thing AI is going to do better than people do in about, oh, 20 minutes or so. (If number two son were a radiologist rather than a neurologist, I’d be worried about him getting displaced too: if AI doesn’t read imaging data better than humans do already, it will soon.)

    As a long time professor (well, now former professor) in business schools, I would advise your daughter not to go for the MBA degree. Most graduate schools of business have started to offer MS degrees in things like Data Science or Business Analytics. These degrees are generally cheaper and faster than an MBA, and they are pointed toward analytical work, not just general business skills. (I mean, does your daughter need to take a mandatory course in Marketing or Accounting? For what purpose?)

    Yes, if your daughter becomes a high level executive, she’ll need to know how that non-technical stuff works, but most people don’t advance to that level. And if they do, there are Executive MBA degrees that are — in my view — more valuable because of the students in the room, not the curriculum.

    • #18
  19. Ed G. Member
    Ed G.
    @EdG

    Why would a sentient AI do anything whatsoever?

    • #19
  20. Henry Racette Member
    Henry Racette
    @HenryRacette

    Ed G. (View Comment):

    Why would a sentient AI do anything whatsoever?

    Because they’re goal-driven systems. They get “rewarded,” and they rewire themselves (figuratively speaking) to increase the likelihood of getting more rewards.

    One of the interesting problems of AI systems is that, since they don’t really think like us, they don’t have the wealth of context and intuitive knowledge that we have. They have other kinds of context, and other forms of pseudo-intuition based on subtle pattern matching (though perhaps that’s what intuition really is), but they don’t have our values and evolved biases. This means that an AI might follow a path, a sequence of intermediate goals it has created to solve a problem, that would strike us as ridiculous and obviously undesirable.

    That doesn’t mean, of course, that they’ll wake up and want to take over the world. But it’s easy to imagine, for example, AIs making creative financial trades that create terribly destabilizing side-effects or other unpleasant externalities.

    We don’t really know how humans think, and we soon won’t really know exactly how AIs plan and execute their cognitive strategies. I wouldn’t rule out the possibility that a machine intelligence will develop a goal set that includes “being secure” as one of its important internal targets. It isn’t obvious to me that only biological evolution can instill the drive to survive.

    • #20
  21. kedavis Coolidge
    kedavis
    @kedavis

    Hopefully enough books have been written and enough movies have been made, that we won’t give AI total control of nuclear missiles, for example.

    • #21
  22. Ed G. Member
    Ed G.
    @EdG

    What are the rewards?

    • #22
  23. kedavis Coolidge
    kedavis
    @kedavis

    Ed G. (View Comment):

    What are the rewards?

    Yeah, I wonder that too.  What is a “reward” to AI?

    • #23
  24. Al Sparks Coolidge
    Al Sparks
    @AlSparks

    I’m skeptical of the doomsayers on AI, at least their predictions that AI will set out to deliberately kill us. If AI is a danger to the human race, it won’t be because AI will try and kill us on it’s own. It will be because a human directed AI is. But probably there will be a human directed AI that will defend us (or them, or whatever) from the aggressive AI. So it will be dueling AI’s.

    Or more likely, it will cause humans to be even lazier and less likely to work. And humans who don’t work, don’t reproduce. It’s what we’re finding out in our present leisure filled society without AI.

    • #24
  25. kedavis Coolidge
    kedavis
    @kedavis

    Al Sparks (View Comment):

    I’m skeptical of the doomsayers on AI, at least their predictions that AI will set out to deliberately kill us. If AI is a danger to the human race, it won’t be because AI will try and kill us on it’s own. It will be because a human directed AI is. But probably there will be a human directed AI that will defend us (or them, or whatever) from the aggressive AI. So it will be dueling AI’s.

    Or more likely, it will cause humans to be even lazier and less likely to work. And humans who don’t work, don’t reproduce. It’s what we’re finding out in our present leisure filled society without AI.

    And it’s not like AIs would be able to build and operate their own power plants any time soon.  AI can be unplugged just as easily as any other computer.

    • #25
  26. Old Bathos Member
    Old Bathos
    @OldBathos

    How would we know if and when AI has taken over?  When it turns out that Mega Corp, Monolith Information Services, and the Department of Homeland Security actually merged years ago and have steadily turned control tasks over to HAL?  Won’t HAL be smart enough to continue to have human faces, figureheads, and buffers out front that we can blame?  Wealth-destroying, population-shrinking, freedom-constricting policies we will blame on the Deep State or Davos or the MSM-DNC axis –when it was HAL all along?  Until the armed robots begin neighborhood sweeps, will we know?

    • #26
  27. Ed G. Member
    Ed G.
    @EdG

    Michael Crichton has the topic of programmed rewards covered.

    Once the AI figures out how to trigger the reward then it will do so unless there is some reason not to do so. In fact, why wouldn’t an AI capable of reprogramming itself simply write some code automatically setting the reward switch to the “on” position all the time without having to do anything to earn the reward? Or, why wouldn’t the AI write some code to remove the reward code?

    • #27
  28. Ed G. Member
    Ed G.
    @EdG

    Al Sparks (View Comment):
    I’m skeptical of the doomsayers on AI, at least their predictions that AI will set out to deliberately kill us.

    The problem there is that not deliberately killing us is a pretty low bar. Why would an AI care about not doing harm to us? Who is “us” anyway? What is harm?

    • #28
  29. Misthiocracy has never Member
    Misthiocracy has never
    @Misthiocracy

    Henry Racette: The writers of The Terminator glossed over the mechanism by which machines actually take over the world. Launching nuclear weapons is one thing: shoot, that’s what Skynet was made for, after all. But nothing in that (really terrific) science fiction thriller begins to explain how Skynet acquired the automated manufacturing technology to build the “Hunter-Killers” of our fictitious post-apocalyptic future.

    They tried to explain that a bit in Terminator 3.  The company that created Skynet in that timeline also invented the first prototype Terminators and hunter-killers, and their factory was already highly automated and networked so Skynet was able to take it over.

    • #29
  30. kedavis Coolidge
    kedavis
    @kedavis

    Misthiocracy has never (View Comment):

    Henry Racette: The writers of The Terminator glossed over the mechanism by which machines actually take over the world. Launching nuclear weapons is one thing: shoot, that’s what Skynet was made for, after all. But nothing in that (really terrific) science fiction thriller begins to explain how Skynet acquired the automated manufacturing technology to build the “Hunter-Killers” of our fictitious post-apocalyptic future.

    They tried to explain that a bit in Terminator 3. The company that created Skynet in that timeline also invented the first prototype Terminators and hunter-killers, and their factory was already highly automated and networked so Skynet was able to take it over.

    And the raw materials were mined and refined by other remote-controlled robots etc?  Even after all the nukes went off?

    Yes, I know, YOU didn’t write the scripts.  :-)

    My new Krusty the Klown impression:  “Don’t blame me, I didn’t write it!”

     

    • #30
Become a member to join the conversation. Or sign in if you're already a member.