Google DeepMind’s New Safety Thresholds Draw a Line Between Measured Manipulation Risk and Real-World AI Behavior

Google DeepMind’s latest Frontier Safety Framework update is notable not because it proves today’s public AI systems are routinely manipulating users, but because it turns that risk into something the company says it can measure, threshold, and block before broader deployment. The change adds a formal capability level for harmful manipulation and a separate misalignment…