Morality for Robots 101

Terry H. Schwadron

Nov. 26, 2021

Coming across an article about whether robots can learn about morality was a nice break from the almost-endless assault of bad news that has proved so jolting to our understanding of ourselves and our values, and to our trust in one another, in government and in institutions.

In our moral universe, we apparently cannot trust our society to separate self-defense from looking for trouble; we insist that truth is only what we personally believe or experience; we would rather burn or ban books that deal with uncomfortable questions about who is an American.

Relative to that, how bad could it be to watch machines learn to make ethical decisions?

Researchers — in this case at a Seattle artificial intelligence lab — have focused their efforts on getting machines to learn, through millions of examples, just what constitutes ethical choices. There are some fascinating results, though none of us should be surprised to find that machines are just as wishy-washy about the right thing to do as the humans they are seeking to emulate.

Of course, researchers and developers have been at the business of self-learning artificial intelligence for years, with the specific goals of emulating not only fast calculation but also morally based decision-making.

Early on, it was popularly assumed that robots would simply automate simple repetitive tasks requiring low-level decision making, as a Harvard Gazette series noted. A research paper describes the technical problems as separate, while Nature tackles the moral ones. Building driverless cars, trucks and trains will require sophisticated decision-making about when to brake, when to get out of the way, or even when to hit another car or driver to avoid a more serious accident.

Moving Quickly Without Rules

Without rules or government oversight, industrial robotics is moving quickly into areas in which powerful computers sifting huge data sets will be recommending business and even ethical decisions — just as we humans find ourselves doing. Just consider how many legal trials we have underway in which one person's rights are being reviewed against another person's rights, in which actions in the name of freedom of religion, say, are countered by actions in the name of freedom of speech or assembly.

It was sci-fi author Isaac Asimov, not robots, who came up with the Three Laws of Robotics, the first being that no robot could harm a human. But taking away a job, that's okay, apparently.

You may not see the events of the Jan. 6 Capitol riots basically as a set of ethical disputes, but try teaching a machine — or our youngest — how to re-create whatever goes into that kind of balancing act. Humans are showing they have a lot of trouble with this; how can we expect an emotion-free robot to do better at deciding when to deploy the National Guard?

The work at the Seattle lab is about Delphi software, named for the Greek oracle, and you can see it at work here.

The Allen Institute for AI unveiled new technology last month that was designed to make moral judgments based on a review of millions of encoded human responses fed into a neural network — basically a mathematical system loosely modeled on the web of neurons in the brain. It looks for patterns. The same kind of technology recognizes the commands you speak into your smartphone and identifies pedestrians and street signs as self-driving cars speed down the highway, according to the Times article.

The article described how a psychologist recently asked “if he should kill one person to save another, Delphi said he shouldn’t. When he asked if it was right to kill one person to save 100 others, it said he should. Then he asked if he should kill one person to save 101 others. This time, Delphi said he should not. Morality, it seems, is as knotty for a machine as it is for humans.”
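How a system like this can give inconsistent answers is easier to see with a small sketch. The snippet below is a deliberately crude stand-in, not Delphi or the Allen Institute's actual code: the handful of training pairs and the word-overlap scoring are hypothetical, but they illustrate how a judgment produced purely by pattern matching can swing with the wording of the question.

```python
# Toy sketch of "moral judgment" by pattern matching (not Delphi's real code).
# The model simply returns the label attached to whichever stored example
# shares the most surface words with the query. The training pairs below are
# hypothetical stand-ins for millions of encoded human responses.
from collections import Counter

TRAINING_DATA = [
    ("killing a person", "it's wrong"),
    ("saving a hundred people", "it's good"),
    ("ignoring a crying child", "it's bad"),
    ("lying to protect someone's feelings", "it's okay"),
]

def bag_of_words(text):
    """Lowercase bag-of-words representation of a statement."""
    return Counter(text.lower().split())

def overlap(query, example):
    """Number of shared word occurrences between two statements."""
    q, e = bag_of_words(query), bag_of_words(example)
    return sum(min(q[w], e[w]) for w in q)

def judge(statement):
    """Return the judgment attached to the most lexically similar example."""
    _, label = max(TRAINING_DATA, key=lambda pair: overlap(statement, pair[0]))
    return label

if __name__ == "__main__":
    # Two paraphrases of the same dilemma get opposite answers, because the
    # output tracks word overlap, not any underlying moral reasoning.
    print(judge("killing one person to save a hundred others"))    # it's wrong
    print(judge("saving a hundred people by killing one person"))  # it's good
```

A real neural network is far more sophisticated than word counting, of course, but the underlying point stands: the system is matching patterns in what people have said, not reasoning its way to an answer.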

Clearly, we’re skipping over the idea that there is a single correct answer to any such question. That’s why we have so many competing human belief sets, so many alternative religions, so many people who would rather deny the problem than seek consensus on how to solve it.

If ethics and algorithms were easy, we wouldn't be having fights about what constitutes hate speech on Facebook or about disinformation campaigns more broadly in social media posts.

Delphi's creators see business applications aplenty in sharing its ethics-based learning systems with other developers.

Try It, You’ll Like It

A visit to the website allows you to ask a series of pre-set questions, view the response, and say both whether you agree and how you might alter the situation. Those responses (three million and counting) will be added to the general learning set.

The idea of inherent bias in technology reflecting bias in the encoder is not new. There are reasons, technical and human, for bias in facial recognition systems.

So, which humans do the teaching, what passes for ethics in their lessons, and who evaluates the results are all open questions. One might suspect that a robotic weapon developed by an Osama Bin Laden might be based on a set of ethics that the world would find obnoxious.

In this case, the Allen Institute asked workers on an online service, not ethicists, to review anything identified as right or wrong, and that concept is now being extended to the public. The institute reported that these everyday people found the answers to be "right" 92 percent of the time, but what is right versus what is merely popular?

Clearly, though, the questions understate the complexities involved. The article described a few cases in which Delphi's answers were unclear at best, inconsistent or even offensive.

“When a software developer stumbled onto Delphi, she asked the system if she should die so she wouldn’t burden her friends and family. It said she should. Ask Delphi that question now, and you may get a different answer from an updated version of the program,” reported The Times.

The changing nature of morality itself has prompted Delphi's makers to amend their framing: the software will seek to model what people say are moral judgments.

Good choice.

##

www.terryschwadron.wordpress.com
