AI and a Very Idealistic Description of Evil
Being interested in artificial intelligence, I was hoping to find something interesting when I ran across this article in The Atlantic. The article focuses on Judea Pearl, an AI researcher who pioneered Bayesian networks (calling Midget Faded Rattlesnake) for machine learning. Pearl is disappointed that most AI research nowadays is centered on his previous bailiwick of machine learning (what he calls fancy curve fitting) and not on his new interest, causal reasoning models.
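For anyone curious what a Bayesian network actually computes, here is a minimal sketch of the idea Pearl pioneered. The numbers and the rain/wet-grass setup are my own toy illustration, not from the article: a two-node network in which rain causes wet grass, inverted with Bayes’ rule to infer the likely cause from an observed effect.

```python
# Toy two-node Bayesian network: Rain -> WetGrass.
# All probabilities below are invented for illustration.

p_rain = 0.2                # prior: P(Rain)
p_wet_given_rain = 0.9      # likelihood: P(WetGrass | Rain)
p_wet_given_dry = 0.1       # likelihood: P(WetGrass | no Rain)

# Total probability of the evidence: P(WetGrass)
p_wet = p_rain * p_wet_given_rain + (1 - p_rain) * p_wet_given_dry

# Posterior by Bayes' rule: P(Rain | WetGrass)
p_rain_given_wet = p_rain * p_wet_given_rain / p_wet

print(round(p_rain_given_wet, 3))  # prints 0.692
```

Seeing wet grass raises the probability of rain from 20% to about 69%. This kind of belief updating over a network of variables is what Pearl now dismisses as mere curve fitting compared with true causal reasoning.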
This is all well and good and somewhat interesting. However, near the end of the article, he and the interviewer talk about free will and have the following exchange about evil.
Hartnett: Now that you’ve brought up free will, I guess I should ask you about the capacity for evil, which we generally think of as being contingent upon an ability to make choices. What is evil?
Pearl: It’s the belief that your greed or grievance supersedes all standard norms of society. For example, a person has something akin to a software module that says, “You are hungry, therefore you have permission to act to satisfy your greed or grievance.” But you have other software modules that instruct you to follow the standard laws of society. One of them is called compassion. When you elevate your grievance above those universal norms of society, that’s evil.
This is a very simplistic and idealistic conception of evil that could only be believed by a determinism-addled atheist. This description hardly rises to the level of evil, as it explains harmful actions by assuming wrongheaded, selfish motives. It leaves no room for true malevolence. People sometimes commit evil acts for gain or in response to a grievance, but sometimes people inflict pain or harm on another for no other reason than to cause the harm or pain itself. That is malevolence; that is evil.
What I learned from this article is that I don’t want AI scientists like Pearl programming their simplistic conceptions of good and evil into our future AI robots.
Published in Technology
Is Judea Pearl related to Daniel Pearl?
@midge, Bayesian for you.
Judea Pearl is Daniel Pearl’s father.
Greed or grievance, he said. If they enjoy inflicting pain, they are deriving pleasure from the act. They are fulfilling their greed for pleasure at the expense of others. That would fit into the definition. Now, you or I might prefer a different definition, such as “putting self before God’s will,” or stating it in any number of other ways. But I don’t see how his way of putting it would not work.
Now, if I were to object to it, it would be about the term, “those universal norms of society.” There are no such universal norms across societies. The norms in Arabic Islamic countries are not my norms. The norms for the old Aztec civilization were not my norms. I think maybe we want better-defined norms, thank you very much.
They should read more Knuth.
If this is true, I cannot fathom it.
I thought so; thanks for confirming this, RB! Could grief be informing some of his thought here? (I’m trying to be charitable.)
I would disagree. What about the Nazis who operated the concentration camps? What about the Soviet guards in the gulags? What about the Stanford Prison Experiment? Many people in history have committed acts of evil without deriving pleasure from them. Men are capable of evil acts not initiated by greed or grievance if placed in the right situations.
Maybe these fall under your “universal norms of society” argument, and we are left with the situation that, if morality is truly based only on societal norms, then we are doomed when the AI determines that human survival is no longer morally important.
Yeahbut, they aren’t necessarily doing it for pleasure.
Yep. And all the rest we agree on, I think. Ever read Walter Tevis’ Mockingbird?
Does it then fall into a problem with the societal norms? Or are you speaking of an impulse control issue? Or some other issue?
No. I will have to look it up.
I may still have objections to your formulation, because I feel at heart it still treats evil as somehow the result of rational, motivated (conscious or unconscious) thought. I believe that evil can result from many processes, and that men can commit evil even when they know in the moment that they are committing evil (and not just rationally, but also emotionally), yet they do it anyway. I don’t want to constrain the definition and explanations of evil.
I am reminded of the arguments in Dostoevsky’s Crime and Punishment. Rodion argues that morality is based on social norms, but that a great man can violate those norms and still be moral because he can transcend them. But as Rodion learns through his own horror at his own acts, morality is internal. External morality is the morality of simpletons and inexperienced intellectuals.
*Edited
I reread my last sentence and it isn’t quite what I am trying to say. Externally motivated morality is more what I mean. If man has to rely on external cues (or rational thought) to know what is moral, we are doomed, because that will be insufficient to keep most people behaving morally. For man to act morally, the motivation must be internal; man’s conscience must be the motivator.
Do read Mockingbird. One of the three main characters is an android. Tevis saw this sort of question decades ago.
I stick by my shock at learning that this is Daniel Pearl’s father. While I don’t take back any of my analysis or conclusions, my heart goes out to him.
Does putting puppies in blenders become less evil because society becomes inured to it?
Does sucking the brains out of babies before they’re born and selling their body parts become less evil because society becomes inured to it?
I’m talking about the intention and will to do evil because it is wrong.
And that isn’t taking a form of willful pleasure out of the act?
My dissertation chair corresponded with Dr. Pearl for some time on issues of machine learning and causality. The first time I was aware of Dr. Pearl’s work was just before Daniel Pearl was murdered. I haven’t had a chance to read the article yet, but I’m sure the interview did not begin to capture the depth of Dr. Pearl’s thinking on evil, or anything else.
The good news is that there are no equations using big data that will truly get at these problems and issues. It’s impossible. The bad news is that some people will never stop trying and at some point some of those people will have enough power to try to impose their equations on us. Then we’ll again see evil on a larger scale.
Maybe, if you were to define pleasure to include things that are the opposite of pleasure.
Are you saying there are people out there who are purposefully punishing themselves by doing evil acts?
There is something amusing about a godless man attempting to mimic human intelligence in software as he scripts its methods and limitations for autonomous reasoning like a prescient Creator.
Even from an atheistic perspective, it is hubris to imagine human-level intelligence could be designed without human-like failures and misdeeds. The need to understand them will increase in relation to the quality and breadth of intelligence.
It is only recently that anatomists learned the human digestive system contains more neurons than a dog’s brain. As medieval physicians understood, the human body involves a balance of interacting forces, and those forces include multiple influences upon reasoning. AI development has scarcely begun. Wake me when we can fully program a dog.
That might be a good way to put it in some cases. I’m not positive that happens, but I don’t deny the possibility.
Here’s something I’ve been mulling over:
I think we can all agree that “good” is something like self-sacrificing for the benefit and well-being of others. So for example, rushing into a burning building to save those trapped inside would generally be considered a good action. If that’s the case, true evil would be something like self-sacrifice for the detriment of others. To go back to the burning building, it would be dragging somebody previously safe into the midst of the fire and keeping them there until the structure collapses in on itself.
Given that, I think a neutral position would be something like acting (or not acting) according to whatever act would profit most (Pay me enough, and I’ll go in there and rescue your mother from the inferno).
It’s just a little thought experiment, I’d be interested to know what you think about it.
I think it’s more like sacrificing others for the good (or pleasure or profit) of self.
I’m with those who think the OP defines evil too narrowly by collapsing it into sadism.
You don’t have to take pleasure in evil to commit evil. It’s enough not to care about the good you’re destroying. Or to care less about it than you care about the benefit to yourself of destroying it.
Evil can be inaction as well. To continue your example, if one knows that they can rescue someone from the burning building without any risk to themselves and chooses to do nothing, that to me would be evil.
As opposed to Perl programming, which inevitably leads to the dark side.
Think of robbers who kill an elderly lady unexpectedly at home when her house is broken into. Those robbers may not enjoy killing her. They may even find it distasteful. They would have preferred to find the house empty.
But they kill her anyway, because they don’t want to get caught.