[Linkpost] “Why ‘Solving Alignment’ Is Likely a Category Mistake” by Nate Sharpe

EA Forum Podcast (All audio) - A podcast by EA Forum Team

This is a link post.

A common framing of the AI alignment problem is that it's a technical hurdle to be overcome: a clever team at DeepMind or Anthropic would publish a paper titled "Alignment is All You Need," everyone would implement it, and we'd all live happily ever after in harmonious coexistence with our artificial friends. I suspect this perspective constitutes a category mistake on multiple levels. Firstly, it presupposes that the aims, drives, and objectives of both the artificial general intelligence and whatever we aim to align it with can be reduced to a distinct and finite set of elements, a simplification I believe is unrealistic. Secondly, it treats both the AGI and the alignment target as if they were static systems. This is akin to expecting a single paper titled "The Solution to Geopolitical Stability" or "How to Achieve Permanent Marital Bliss." These are not problems that [...]

Outline:
(01:10) The Problem of Aligned To Whom?
(03:27) The Target is Moving

First published: May 6th, 2025
Source: https://forum.effectivealtruism.org/posts/hs7hATCkBupePZSj3/why-solving-alignment-is-likely-a-category-mistake
Linkpost URL: https://www.lesswrong.com/posts/wgENfqD8HgADgq4rv/why-solving-alignment-is-likely-a-category-mistake

Narrated by TYPE III AUDIO.
