Can AI Think for Itself? Anthropic's Mind-Blowing Claude Study on AI Consciousness & Brains
They Might Be Self-Aware - A podcast by Daniel Bishop, Hunter Powers

SLAP THAT SUBSCRIBE BUTTON IF YOU’RE MORE CONSCIOUS THAN AN AI

On this week’s episode, Hunter discovers a rival “podcast” that’s just a bot reading AI research papers—and it has more followers than us. Is this the future of “content creation,” or are we just not lazy enough?

We break down Anthropic’s new brain-bender of a paper: Is Claude 3.5 thinking in circuits like a biological brain? Do attribution graphs and “local replacement models” actually explain anything—or is it all just vibes? And why does AI math feel so much like how your dad “guesstimates” tips?

Plus: What the heck is the “babies outlive mustard block” jailbreak, and how does an AI accidentally spill the beans before flipping the morality switch? Are LLMs secretly thinking in English, even when you talk to them in Mandarin? Is metacognition the spark of self-awareness—or just fancier autocomplete?

We spiral into the question: If your chatbot can outthink half the population, is that AGI? And does consciousness come BEFORE superintelligence, or is it just a side effect of being stuck in a billion-parameter group chat?

All that, plus hot takes on reading research papers the slow way, and Daniel invents “AI smoke breaks.”

Available on YouTube, Spotify, Apple Podcasts, and in the ambient static between your thoughts.

They Might Be Self-Aware: The only show dumb enough to ask, “Are you still awake, or did we just put you to sleep?” Your future might depend on it!

For more info, visit our website at https://www.tmbsa.tech/

#AI #ClaudeAI #Anthropic #ArtificialIntelligence #AIConsciousness #MachineLearning #GPT4 #ChatGPT #TechnologyPodcast