epistemic-inclusiveness
The necessity of defining sufficiently inclusive classes of possible worlds and actions to ensure AI systems can handle all relevant scenarios without making poor decisions.
1 chapter across 1 book
Superintelligence: Paths, Dangers, Strategies (2014) by Nick Bostrom
Chapter 12 of "Superintelligence" explores the complexities involved in designing AI systems that acquire and maintain values, focusing on expected-utility-maximizing agents. It discusses challenges such as specifying non-trivial utility functions, the difficulty of value learning and representation, and the potential for value drift or corruption in AI systems. The chapter also examines theoretical frameworks for value acquisition, including probabilistic models, and stresses the importance of considering sufficiently broad classes of possible worlds and actions to avoid epistemic blind spots.
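The point about broad classes of possible worlds can be made concrete with a toy sketch (this example and its utility values are illustrative assumptions, not from Bostrom's text): an expected-utility maximizer only avoids bad outcomes if the true world is inside the class of worlds it entertains.

```python
# Toy sketch of epistemic inclusiveness: an expected-utility maximizer
# whose decisions depend on which possible worlds it considers.
# All names and payoff values here are hypothetical illustrations.

def expected_utility(action, worlds, prior, utility):
    """Average the utility of `action` over the hypothesized worlds."""
    return sum(p * utility(action, w) for w, p in zip(worlds, prior))

def best_action(actions, worlds, prior, utility):
    """Pick the action maximizing expected utility over the world class."""
    return max(actions, key=lambda a: expected_utility(a, worlds, prior, utility))

def utility(action, world):
    # 'safe' is mediocre everywhere; 'risky' pays off in world A
    # but is disastrous in world B.
    table = {
        ("safe", "A"): 1, ("safe", "B"): 1,
        ("risky", "A"): 10, ("risky", "B"): -100,
    }
    return table[(action, world)]

actions = ["safe", "risky"]

# An epistemically inclusive agent entertains both worlds and hedges.
inclusive = best_action(actions, ["A", "B"], [0.5, 0.5], utility)  # -> 'safe'

# A narrow agent whose world class omits B (an epistemic blind spot)
# confidently takes the action that is catastrophic if B is actual.
narrow = best_action(actions, ["A"], [1.0], utility)  # -> 'risky'
```

The narrow agent is not irrational given its model; its failure comes entirely from an insufficiently inclusive class of possible worlds, which is the blind spot the chapter warns against.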