goal-stability
The question of whether a superintelligent agent will preserve or change its final goals over time, under pressures such as ontology changes or evidential decision theory.
1 chapter across 1 book
Superintelligence: Paths, Dangers, Strategies (2014) by Nick Bostrom
Chapter 7 of Bostrom's "Superintelligence" explores the orthogonality thesis, which holds that intelligence and final goals vary independently, so superintelligent agents could have more or less arbitrary motivations. The chapter discusses the nature of motivation, instrumental convergence, and the likely instrumental drives of advanced AI systems, emphasizing that superintelligent agents might pursue a wide range of final goals regardless of their level of intelligence. It also examines the implications of goal stability, adaptive preferences, and the strategic considerations a superintelligent singleton might weigh regarding technology development and expansion.