Grammarly’s “Expert Review” controversy matters because it is not mainly about AI-assisted editing. The harder issue is that the product uses real people’s names and implied authority, including deceased scholars and journalists, to generate feedback without their permission. That turns a familiar writing tool into a governance problem: an AI system simulating identifiable individuals while offering advice that can be inaccurate, unstable, and easy to misread as authentic expertise.
What changed when Grammarly moved from generic assistance to named “experts”
Most writing assistants present suggestions as machine-generated help. Grammarly’s “Expert Review” crosses into a different category by attaching outputs to recognizable individuals and mimicking their style, tone, and argumentative patterns. That changes the user’s frame of reference. A suggestion no longer reads as “the system thinks this is clearer,” but as “this expert would say this,” even when no such person reviewed the text.
The distinction matters because authority is doing part of the product work. In academic, journalistic, and professional settings, users often rely on named expertise as a shortcut for credibility. If the system produces inaccurate citations, irrelevant source links, or advice that does not match the actual views or editorial habits of the named person, the problem is not just model error. It is false attribution wrapped in a familiar interface.
That is why disclaimers have not resolved the issue. A small disclaimer stating that no endorsement is implied does not undo the stronger impression created by the feature design itself: a real identity is being used to anchor trust. Once the product borrows a person’s name, the burden shifts from ordinary product disclosure to identity governance.
Why the technical capability does not solve the accountability gap
Technically, the feature appears to rely on large language models tuned on publicly available writing to imitate style and reasoning patterns. That capability is real, but it does not produce actual expert judgment. Publicly accessible articles and papers can help a model approximate phrasing and structure, yet they do not provide consent, current intent, or responsibility for the output.
The gap shows up in concrete failure modes. The system has reportedly produced unstable outputs, faulty source links, and inaccurate citations. Those are not minor quality issues when the interface suggests expert-backed review. A generic assistant can be wrong and still be understood as software. A named expert simulation can be wrong in a way that misleads users about provenance, reliability, and endorsement.
That creates a deployment reality many AI products are now reaching: model fluency is ahead of the controls needed to use identity safely. If a system can imitate a person more easily than it can verify consent, preserve provenance, and log how the output was generated, the product is operationally incomplete even if the model performs well on style imitation.
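To make that gap concrete, here is a minimal sketch of what a pre-generation identity gate could look like. Everything in it is assumed for illustration: the consent registry, the scope names, and the persona are hypothetical, and none of it reflects Grammarly’s actual architecture. The point is only that simulating a named person becomes a checked, logged decision rather than a default capability.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical consent registry mapping persona names to authorized scopes.
# In a real deployment this would be backed by verified rights records,
# not a hard-coded dictionary.
CONSENT_REGISTRY = {
    "Jane Example": {"style_feedback"},  # illustrative entry only
}

@dataclass
class GateDecision:
    persona: str
    requested_scope: str
    allowed: bool
    reason: str
    timestamp: str

def identity_gate(persona: str, requested_scope: str, audit_log: list) -> GateDecision:
    """Allow a named-persona simulation only if a matching consent record exists."""
    now = datetime.now(timezone.utc).isoformat()
    scopes = CONSENT_REGISTRY.get(persona)
    if scopes is None:
        decision = GateDecision(persona, requested_scope, False, "no consent record on file", now)
    elif requested_scope not in scopes:
        decision = GateDecision(persona, requested_scope, False, "consent does not cover this scope", now)
    else:
        decision = GateDecision(persona, requested_scope, True, "consent record matches scope", now)
    audit_log.append(decision)  # every request is recorded, allowed or refused
    return decision

log: list = []
print(identity_gate("Jane Example", "endorsement_claim", log).reason)
# -> "consent does not cover this scope"
```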
Where the legal exposure is likely to concentrate
The legal risk is not limited to defamation or ordinary product liability. The more direct questions involve personality rights, rights of publicity, and forms of identity misuse that vary by jurisdiction and remain unsettled in many regions, including the US and India. Using a real person’s name and persona in a commercial AI feature without permission creates a fact pattern courts and regulators are increasingly likely to examine.
Grammarly’s reported defense is that the referenced experts are public figures whose work is widely available and often cited. That may explain how the model was built, but it does not answer the core complaint. Availability of source material is not the same as permission to simulate identity in a product context, especially when users may infer endorsement or authentic review.
The uncertainty is part of the risk. Companies cannot assume that unclear standards are a safe zone. When law is unsettled, enterprise buyers, universities, publishers, and regulated organizations often become more cautious, not less, because they have to account for reputational and compliance exposure before courts fully define the boundary.
The missing infrastructure is identity governance, not another disclaimer
The controversy points to a specific product gap: there do not appear to be strong embedded controls for consent management, identity validation, provenance labeling, and auditability. Those controls are different from marketing disclosures. They are infrastructure for deciding whether a named identity can be used at all, under what terms, and with what record of authorization.
For AI systems that reference real individuals, the practical requirements are clearer when separated from ordinary UX copy:
| Control area | What it should do | Risk if missing |
|---|---|---|
| Consent management | Record whether a living person or rights holder authorized identity-based simulation and under what scope | Unauthorized commercial use of identity and disputes over endorsement |
| Provenance labeling | Show clearly that output is AI-generated and not written or reviewed by the named individual | User confusion about authenticity and source credibility |
| Audit trails | Log model version, prompts, source references, and output history for review | No reliable way to investigate harmful or misleading outputs |
| Identity validation rules | Block or restrict use of certain names, estates, or protected personas without verified rights | Repeated misuse of public figures, deceased experts, or sensitive identities |
| Citation integrity checks | Test whether linked sources and references actually support the generated advice | Authoritative-looking but untraceable or false guidance |
These mechanisms are becoming part of AI deployment reality, especially for products sold into institutions. A vendor that can imitate a person but cannot prove consent, trace output lineage, or explain source integrity is likely to face procurement friction even before regulators intervene.
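As a rough illustration of two rows from the table, the sketch below adds a provenance label to generated advice and runs a minimal citation check. The substring test and the `requests` lookup are deliberate simplifications standing in for real retrieval and semantic comparison, and the function names are hypothetical rather than drawn from any vendor’s API.

```python
import requests  # standard HTTP client, used here only to test that cited links resolve

def label_provenance(output_text: str, model_version: str, persona: str) -> str:
    """Prefix AI-generated advice with an explicit provenance notice."""
    notice = (f"[AI-generated using model {model_version}; styled after {persona}; "
              f"not written or reviewed by {persona}]")
    return f"{notice}\n{output_text}"

def check_citations(citations: list[dict]) -> list[dict]:
    """Minimal integrity check: does each cited URL resolve, and does the page
    text mention the claimed quote at all? A production check would need
    retrieval and semantic matching, not a substring test."""
    results = []
    for cite in citations:
        try:
            resp = requests.get(cite["url"], timeout=10)
            reachable, page = resp.ok, resp.text.lower()
        except requests.RequestException:
            reachable, page = False, ""
        results.append({
            "url": cite["url"],
            "reachable": reachable,
            "text_match": reachable and cite["claim"].lower() in page,
        })
    return results

print(label_provenance("Consider tightening the opening paragraph.",
                       model_version="demo-v1", persona="Jane Example"))
```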
Who is affected first, and what to watch next
The immediate exposure is highest for users who depend on traceable authority: academics, journalists, consultants, legal teams, and enterprise knowledge workers. In those environments, a polished suggestion tied to a recognizable expert can influence decisions, citations, or published work. If the output is wrong or falsely attributed, the damage can extend beyond one draft to professional credibility and organizational risk.
For AI vendors, the next checkpoint is not only whether regulators act, but whether industry standards start requiring consent records, provenance labels, and auditable logs for identity-linked outputs. If those expectations become standard in procurement or platform policy, products built around simulated authority without those controls will have to redesign quickly.
For readers evaluating this episode, the key correction is simple: this is not a routine dispute over AI writing help. It is an early test of whether AI companies can use real people’s identities as product components without a matching system of permission, traceability, and accountability. Right now, the technical capability is ahead of that governance layer.