Grammarly Lawsuit Shows Existing Laws Can Combat Deepfakes

(Jennifer E. Rothman – Lawfare) Debates about synthetic media have been dominated by concerns about deepfakes: audio and video fabrications that appear to be authentic recordings when they are not. Deepfakes threaten to erode trust in everything from elections to court proceedings to intimate relationships, and they threaten people’s livelihoods. With the recent dramatic improvement in the accessibility and quality of generative artificial intelligence (AI), the locus of concern has expanded to virtually every context. The most recent flashpoint is not a forged video of a world leader or a sex tape but something far more mundane: a writing assistant. In early March, Wired reported that Grammarly, the AI-powered software that promises to help guide and generate users’ writing, offered the ability to edit text “in the style” of identifiable journalists and scholars without their consent, allegedly singling out specific people by name and thereby signaling their participation in or endorsement of the service. What might once have seemed like a parlor trick has become the basis for litigation, raising foundational questions about identity, attribution, and control in an age of generative-AI authorship. One of the key tools to combat such overreaching impersonations is the right of publicity, a legal doctrine that gives individuals control over unauthorized uses of their name, likeness, voice, and other recognizable aspects of identity. The right is governed primarily by state law.
