As AI voice-cloning technology and deepfakes become more prevalent, Taylor Swift is making a strategic legal move that could redefine how celebrities protect their legacies. Her recent trademark applications signal a shift in the battle against unauthorized AI content that moves beyond standard copyright.

Cayce Myers, a media law and communications expert at Virginia Tech, says these trademark filings are a possible solution to a very real issue as the technology advances.

“Trademark and copyright protection are something many artists have struggled with for decades because of digital media’s structure and user infringement,” Myers said. “AI amplifies that concern because of its ease of generating new content through voice cloning and deepfakes. The accessibility of this technology also makes these trademark concerns, like Swift’s, particularly troubling for artists who rely on their voice for revenue and licensing.”

Myers explained that trademark works differently from copyright, chiefly in duration: a trademark can last indefinitely, so long as the mark is protected.

“Copyright gives protection of life of the author plus 70 years for individually owned copyrights and 95 years from publication or 120 years from creation (whichever is shorter) for works for hire,” Myers said.

Julia Feerrar, a digital literacy expert, explained that trademarking phrases or visuals does not, in a technical sense, directly limit what AI systems can do.

“Some AI tools already refuse to answer prompts to generate images of real people, but the status of these kinds of guardrails has evolved. It will be interesting to see if this kind of legal approach from people like Taylor Swift has any long-term effect on technical guardrails or other practices from AI companies.”

When it comes to the technical safeguards available, like watermarking or detection tools, Feerrar said she is wary of approaches that rely on AI to identify AI. 

“While technical safeguards may be pieces of the puzzle, they also have shortcomings, especially for wide public use,” she said. “There have been cases where researchers were able to destroy watermarks and where detection tools proved to be unreliable.”

Feerrar said that any time actual people are represented in AI-generated content, it raises serious questions not only about individual rights and responsibilities, but also about our ability to discern what is real and what is not.

“We all can be vulnerable to AI-generated and other misleading content. So much is created to appeal to our emotions, whether that’s shock, anger, or excitement,” she said. “While I want people to be equipped with the skills to deal with questionable content, we also need to recognize the limitations of individual approaches. This is where law and policy implications should come into play.”
