Google DeepMind’s New Safety Thresholds Draw a Line Between Measured Manipulation Risk and Real-World AI Behavior

Google DeepMind’s latest Frontier Safety Framework update is notable not because it proves today’s public AI systems are routinely manipulating users, but because it turns that risk into something the company says it can measure, set thresholds for, and block before broader deployment. The change adds a formal capability level for harmful manipulation and a separate misalignment…

AI Public Dialogue Is Not a PR Exercise: What AI Café 2024 and Similar Models Actually Change

AI public dialogue is often treated as a way to explain technology to citizens after key decisions are already made. The stronger examples work differently: they let citizens, end-users, and experts interact early enough to shift opinion, define requirements, and test governance assumptions before AI systems or rules harden. AI Café 2024 in Luxembourg, participatory…

Grammarly’s “Expert Review” Problem Is Not Writing Help but Unconsented Identity Simulation

Grammarly’s “Expert Review” controversy matters because it is not mainly about AI-assisted editing. The harder issue is that the product uses real people’s names and implied authority, including deceased scholars and journalists, to generate feedback without their permission. That turns a familiar writing tool into a governance problem: an AI system simulating identifiable individuals while…
