Voice and image trade marks emerge as an AI impersonation defence as Taylor Swift files three US applications

Taylor Swift has filed three trade mark applications in the US covering her voice and visual likeness, in what legal observers read as a systematic attempt to protect against AI-generated impersonations. The applications include a photograph taken during her Eras Tour and two audio clips from a recent album promotion.

Taylor Swift is known for taking protection of her intellectual property rights seriously, and these most recent trade mark applications show how determined she is to maintain brand control in the age of AI. Trade marks have traditionally protected names, logos and slogans, but we're now seeing them used much more creatively to police misuse of voice and image where copyright or image rights may fall short. As generative AI makes it easier to create convincing imitations, celebrities and brands alike are looking to trade mark law as a practical enforcement tool against increasingly sophisticated digital copies or deepfakes. The voice and image trade marks filed by Taylor Swift cover only the US, meaning there are significant gaps in protection in the rest of the world.

Iona Silverman (Intellectual Property & Media Partner, Freeths)

The move follows a similar application by actor Matthew McConaughey earlier this year, which Freeths partner Iona Silverman described as the first use of trade mark rules to protect voice and image from AI misuse. Swift's filings suggest the approach is becoming a template.

The legal logic is one of gap-filling. Copyright protects specific recordings and photographs; image rights in the UK and EU protect personality in some circumstances; but neither provides clean enforcement against convincing AI-generated audio or video that imitates someone without copying a specific protected work. Trade mark law, which can protect distinctive sounds, images and other identifiers, is being tested as a more practical instrument.

There are limits. The applications cover only the US, leaving Swift's voice and likeness unprotected under the same mechanism in the UK, Europe and elsewhere, jurisdictions where both fan content and commercial deepfakes circulate freely.

The wider picture for enterprises and technology providers is that the regulatory environment around synthetic media is fragmenting. The EU AI Act creates obligations around labelling AI-generated content, but enforcement of personality rights at scale remains a patchwork. Trade mark strategies pursued by individual celebrities may over time establish legal precedents that influence how organisations handle synthetic voice and image generation in commercial contexts.
