Do you have fond memories of being a teacher’s pet? Wish you could still get notes from your favorite college professor? Dream about some implacable voice of authority correcting your every word choice and punctuation mark? Well, great news: A certain software company has engineered a way to simulate criticism not just from bestselling authors and famous academics of our time, but also many who died decades ago—and the company evidently didn’t need permission from anybody to do it.
Once relied upon only to proofread for correct grammar and spelling, the writing tool Grammarly has added a host of generative AI features over the past several years. In October, CEO Shishir Mehrotra announced that the overall company was rebranding as Superhuman to reflect a new suite of AI-powered products. However, the AI writing “partner” remains called Grammarly. “When technology works everywhere, it starts to feel ordinary,” Mehrotra wrote in his press release. “And that usually means something extraordinary is happening under the hood.”
The expanded Grammarly platform now offers an AI solution for every imaginable need—and some you’ve probably never had. There’s an AI chatbot that will answer specific questions as you compose a draft, a “paraphraser” feature that suggests changes in style, a “humanizer” that revises according to a selected voice, an AI grader that predicts how your document would score as college coursework, and even tools for flagging and tweaking phrases commonly produced by large language models. (Sure, you’re using AI to do everything here, but you don’t want it to sound like that.)
Perhaps most insidiously, however, Grammarly now has an “expert review” option that, instead of producing what looks like a generic critique from a nameless LLM, lists a number of real academics and authors available to weigh in on your text. To be clear: Those people have nothing to do with this process. As a disclaimer clarifies: “References to experts in this product are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities.”
As advertised on a support page, Grammarly users can solicit tips from virtual versions of living writers and scholars such as Stephen King and Neil deGrasse Tyson (neither of whom responded to a request for comment) as well as the deceased, like the editor William Zinsser and astronomer Carl Sagan. Presumably, these different AI agents are trained on the oeuvres of the people they are meant to imitate, though the legality of this content-harvesting remains murky at best and is the subject of many, many copyright lawsuits.
“Our Expert Review agent examines the writing a user is working on, whether it's a marketing brief or a student project on biodiversity, and leverages our underlying LLM to surface expert content that can help the document's author shape their work,” says Jen Dakin, senior communications manager at Superhuman. “The suggested experts depend on the substance of the writing being evaluated. The Expert Review agent doesn’t claim endorsement or direct participation from those experts; it provides suggestions inspired by works of experts and points users toward influential voices whose scholarship they can then explore more deeply.”
Someone like King may see the advance of AI as unstoppable, and there may be nobody left to defend Zinsser’s 1976 handbook On Writing Well from the big tech vultures, but what of the countless other luminaries who still want to keep their material from being compressed into an algorithm? Vanessa Heggie, an associate professor of the history of science and medicine at the University of Birmingham, recently took to LinkedIn to share an especially grim example of how the feature works, accusing Superhuman of “creating little LLMs” based on the “scraped work” of the living and dead alike, trading on “their names and reputations.” The screenshot she posted showed the availability of analysis from an AI agent modeled on David Abulafia, an English historian of the medieval and Renaissance periods who died in January. “Obscene,” Heggie wrote.
An independent review of the Expert Review tool by WIRED reproduced the recommendations for feedback from the Abulafia bot, as well as from models based on the living cognitive scientists Steven Pinker and Gary Marcus. (Neither returned a request for comment.) As the software processed the sample text, it noted that it was taking “inspiration” from Elements of Style author William Strunk Jr. and the sociologist Pierre Bourdieu while applying “ideas” from Gone With the Wind author Margaret Mitchell and using “concepts” from writer and professor Virginia Tufte—all of whom are dead, with Tufte dying most recently, in March 2020. The guidance from her AI agent read: “Replace repetition with vivid, varied sentence patterns.”
C.E. Aubin, a historian and postdoctoral fellow at Yale University who shared Heggie’s LinkedIn post on Bluesky, tells WIRED that Grammarly’s “expert” system “seems to validate the profound mistrust so many scholars in the humanities have for AI and its seemingly constant use in fundamentally unethical ways.”
“These are not expert reviews, because there are no ‘experts’ involved in producing them,” Aubin says. “And it's pretty insulting to see scholarship used this way when the academic humanities are currently under attack from every possible angle—as though the actual people who do the thinking and produce the scholarship are reducible to their work itself and can be removed entirely from the equation.” She says this elimination of personhood is “awful” enough on its own, apart from “the issue of ‘reanimating’ the dead so cynically.”
Beyond the dubious ethics, there’s the question of whether these proliferating AI widgets are even effective or helpful. Grammarly’s plagiarism detector, for instance, didn’t catch a direct quotation I used from a scene in The Simpsons where Bart improvises a geography presentation he hasn’t prepared for, leading to an empty summation: “In conclusion, Libya is a land of contrasts.” (Grammarly did warn, however, that “a land of contrasts” is a sequence of words often generated by LLMs.)
Over the past several years, teachers and professors have struggled through a deluge of AI-written essays, finding it difficult to wean their pupils off this self-defeating shortcut. And even before Grammarly had its "experts," those who relied on it to proofread their papers were occasionally accused of cheating after the material was flagged by AI detection services. Giving these users the impression that they can have their work evaluated by leading thinkers before they turn it in may contribute to their sense that they are only double-checking their text, not violating any academic code of conduct.
But at least students can enjoy having their homework assessed by illusory mentors instead of their actual instructors, which may or may not be a slippery slope toward eliminating school faculty altogether. Shouldn’t take long to find out!