Google’s Bayesian Teaching Upgrade Gives LLMs a Better Way to Update Beliefs
Google Research’s Bayesian Teaching work matters because it targets a specific weakness in current LLMs: they often stop learning anything useful about a user after the first exchange. Instead of fine-tuning models to reproduce only the final correct answer, Google trains them to imitate a Bayesian assistant’s step-by-step probability updates, so the model learns how to revise…
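To make the idea concrete, here is a minimal sketch of the kind of step-by-step belief revision such a Bayesian assistant performs. This is not Google's implementation; the hypotheses, likelihood values, and the `bayes_update` helper are all illustrative assumptions, chosen only to show how a posterior over user preferences shifts turn by turn.

```python
# Illustrative sketch (not Google's code): a turn-by-turn Bayesian update
# over hypotheses about what a user wants from the assistant.

def bayes_update(prior, likelihoods):
    """Apply Bayes' rule: posterior(h) ∝ prior(h) * P(evidence | h)."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnormalized.values())  # normalizing constant
    return {h: p / z for h, p in unnormalized.items()}

# Hypothetical hypotheses about the user's preference, starting uninformed.
belief = {"prefers_concise": 0.5, "prefers_detailed": 0.5}

# Turn 1: the user asks a follow-up requesting more detail.
# Likelihoods are made-up values for how probable that request is under each hypothesis.
belief = bayes_update(belief, {"prefers_concise": 0.2, "prefers_detailed": 0.8})

# Turn 2: the user again asks the assistant to elaborate.
belief = bayes_update(belief, {"prefers_concise": 0.3, "prefers_detailed": 0.7})

print(belief)  # belief in "prefers_detailed" rises across turns
```

The point of the training setup described above is that the model is supervised on this whole trajectory of intermediate probabilities, not just on the final answer, so it learns the revision process itself.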