indirect-normativity

A method of delegating the selection of a superintelligence's final values to the AI itself by specifying abstract conditions rather than fixed values, so as to avoid locking in current human moral errors.

1 chapter across 1 book

Superintelligence: Paths, Dangers, Strategies (2014), Nick Bostrom

CHAPTER 13

Chapter 13 addresses the challenge of selecting the final values or goals to install in a superintelligence, emphasizing the difficulty and risk of making that choice on the basis of current human moral understanding. It introduces indirect normativity as a strategy for delegating the complex task of value selection to the superintelligence itself, anchored by abstract conditions rather than by fixed, potentially flawed human values. The chapter discusses Eliezer Yudkowsky's coherent extrapolated volition (CEV) as a prototype of indirect normativity, explaining how it aims to approximate humanity's idealized collective wishes through a process of extrapolation and consensus.