1. AGAINST ETHICAL ROBOTS
2. IMPETUS: AGAINST ETHICAL INFLATION
3. UNDESIRABLE EFFECTS OF ETHICAL INFLATION
4. COMPLACENCY IS NOT AN OPTION
5. EX. 1 -- SOCIAL ROBOTS: UNCANNY VALLEY OF THE DOLLS
6. EX. 2 -- AUTONOMOUS AI
7. SIDEBAR: IS MACHINE LEARNING EXEMPT?
8. LOGIC-BASED ETHICAL ROBOT METHODOLOGY (LERM)
9. CONFLICT BETWEEN OBLIGATIONS
10. RELATIVIZED OBLIGATIONS
11. JUST ADD A PARAMETER?
12. MAKING LERM HUMAN-CENTERED
13. CONVENTIONAL "ETHICAL ROBOTS" (LERM)
14. ETHICAL, HUMAN-CENTERED ROBOT DESIGN
15. THE DIFFERENCE IN ACTION: A THOUGHT EXPERIMENT
16. CULPABILITY MATRICES
17. LERM GETS IT WRONG
18. HUMAN-CENTERED ETHICAL DESIGN FOR ROBOTS
19. SUMMARY
Description:
Explore a thought-provoking lecture by Stanford HAI visiting scholar Ron Chrisley on the challenges of AI ethics. Delve into the risks of complacency and ethical inflation in AI development, and examine the difference between merely reactive systems and robots that genuinely bear obligations. Learn about the Logic-Based Ethical Robot Methodology (LERM) and its limitations, and why human-centered ethical design matters for robots. Gain insight into the undesirable effects of ethical inflation, the uncanny valley phenomenon in social robots, and the complexities surrounding autonomous AI systems. Analyze culpability matrices and understand why conventional "ethical robots" may fall short when faced with real-world ethical dilemmas.

Against Ethical Robots with Ron Chrisley

Stanford University