Google DeepMind’s New Safety Thresholds Draw a Line Between Measured Manipulation Risk and Real-World AI Behavior

Google DeepMind’s latest Frontier Safety Framework update is notable not because it proves today’s public AI systems are routinely manipulating users, but because it turns that risk into something the company says it can measure, threshold, and block before broader deployment. The change adds a formal capability level for harmful manipulation and a separate misalignment…

Read More
Google DeepMind’s AGI Framework Shifts the Debate From Bigger Models to Measured Cognitive Abilities

Google DeepMind is trying to make AGI progress harder to overstate. Its new framework replaces vague milestone talk and single-benchmark scores with a structured test of ten cognitive abilities, then asks a stricter question: how do those abilities combine, and how does the result compare with demographically representative human baselines? Ten abilities instead of one headline…

Read More