481 Episodes

  1. Self-Adapting Language Models (Published: 10/12/2025)
  2. The Markovian Thinker (Published: 10/12/2025)
  3. Moloch’s Bargain: emergent misalignment when LLMs compete for audiences (Published: 10/12/2025)
  4. Transformer Predictor Dynamics and Task Diversity (Published: 10/11/2025)
  5. Base models know how to reason, thinking models learn when (Published: 10/11/2025)
  6. Spectrum tuning: Post-training for distributional coverage and in-context steerability (Published: 10/11/2025)
  7. Understanding Prompt Tuning and In-Context Learning via Meta-Learning (Published: 10/11/2025)
  8. MLPs Learn In-Context on Regression and Classification tasks (Published: 10/11/2025)
  9. Is Pre-Training Truly Better than Meta-Learning? (Published: 10/11/2025)
  10. Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models (Published: 10/11/2025)
  11. Do LLMs Recognize Your Preferences? Evaluating Personalized Preference Following in LLMs (Published: 10/9/2025)
  12. Learning dynamics of LLM finetuning (Published: 10/9/2025)
  13. Iterative Data Smoothing: Mitigating Reward Overfitting and Overoptimization in RLHF (Published: 10/9/2025)
  14. OpenAI Agent Builder and n8n: Orchestrating Reasoning Versus Automating Process (Published: 10/8/2025)
  15. Training Agents Inside of Scalable World Models (Published: 10/8/2025)
  16. Small Language Models are the Future of Agentic AI (Published: 10/7/2025)
  17. Activation Steering in Generative Settings via Contrastive Causal Mediation Analysis (Published: 10/6/2025)
  18. Eliciting Secret Knowledge from Language Models (Published: 10/6/2025)
  19. Temporal difference flow (Published: 10/6/2025)
  20. Personalized reasoning: just-in-time personalization and why LLMs fail at it (Published: 10/5/2025)

Cut through the noise. We curate and break down the most important AI papers so you don’t have to.