A new Carnival anthem drops online a few weeks before the season. The voice sounds exactly like one of Trinidad and Tobago’s leading soca artists: the phrasing, the tone, even the signature ad-libs. Promoters start using the track in their ads, DJs spin it at fetes, and it gains momentum. Only later does the truth emerge: the artist never recorded the song.

This is the era of the soca deepfake: AI-generated audio designed to sound like a real performer, often produced without consent, and capable of misleading audiences, promoters, and platforms. For the legal community, soca deepfakes force us to revisit long-standing assumptions about authorship, misrepresentation, neighbouring rights, and cultural integrity in Trinidad and Tobago.

What makes a “deepfake” different?

AI voice-cloning systems do more than imitate style. They replicate identity. With a small dataset of recordings, software can reconstruct the unique qualities of a singer’s voice and apply it to entirely new lyrics and melodies. The resulting track is persuasive precisely because it feels authentic.

Unlike parody or tribute, soca deepfakes trade on confusion, blurring the line between legitimate performance and digital fabrication. They raise legal questions that our copyright statutes were never designed to answer.

Copyright without a human singer: shaky foundations

Copyright law presumes human creativity. Musical works and sound recordings attract protection because a human exercised skill and judgment. Where AI composes and produces most of the work, the question becomes: who is the author?

The person typing prompts? The developer of the AI tool? Or no one at all?

Even if copyright could attach to some aspect of the track, soca deepfakes expose another problem: copyright does not protect “style.” An AI-generated song may lawfully imitate the sonic feel of a legendary performer while avoiding literal copying. In commercial practice, however, the effect is the same. Audiences may think they are hearing the real artist.

Rights clearance becomes unstable. Can the producer exploit and monetise the work? If the track goes viral, how do collecting societies treat it? Trinidad and Tobago law offers no clear answer, yet the market will not wait for clarity.

Performer identity and the ownership of a voice

Deepfakes hit at the heart of performer rights. A soca singer’s voice is more than sound. It is brand, livelihood, and cultural identity. Our existing framework protects performances once they are recorded, but soca deepfakes involve performances that never occurred.

What legal strategies are available?

  • Passing off and false endorsement. If the public is led to believe the artist endorsed or participated in the track, a claim may arise. Still, the test hinges on evidence of confusion and damage, and litigation may be slow compared to the viral speed of deepfake music.
  • Contractual tools. Artists can insist on clauses prohibiting the use of their recordings to train AI systems, or banning synthetic replications of their voice without consent. But contracts cannot restrain anonymous creators online.
  • Personality-based protection. Trinidad and Tobago lacks a dedicated statutory “right of publicity,” yet global developments increasingly recognise voice and likeness as protectable personality attributes. Policymakers here will likely face mounting pressure to follow suit.

Deepfakes also threaten moral integrity. A synthetic voice could be used to promote messages the artist rejects, ridicule serious issues, or produce low-quality tracks that erode reputations.

Soca, authenticity, and the deepfake risk

Carnival thrives on the immediacy of live performance and the relationship between artist and audience. Soca deepfakes risk diluting that relationship. They may crowd out authentic voices, undermine trust in what we hear, and redirect revenue away from working musicians toward those who simply manipulate technology.

Practical guidance for the industry

Until clearer legislation arrives, practical safeguards matter:

Artists:

  • Insert contractual clauses prohibiting AI cloning of your voice and the use of recordings for model training without explicit consent.
  • Keep dated drafts, stems, and session notes to prove authorship and originality.
  • Think beyond copyright: manage your brand through trademarks and licensing controls.

Producers and DJs:

  • Do not market or distribute deepfake vocals that mimic identifiable soca artists without permission.
  • Label AI-generated content clearly. Transparency reduces legal risk and reputational backlash.
  • Obtain licences if you are training models on copyrighted catalogues.

Promoters:

  • Avoid advertisements that imply an artist performed a track where they did not.
  • Verify the authenticity of music used in campaigns to prevent unintended misrepresentation.

Where law reform may be heading

Trinidad and Tobago will increasingly confront the deepfake question. Policy options include:

  • recognising voice likeness as a distinct protectable interest;
  • clarifying authorship rules for AI-assisted works;
  • requiring disclosure when AI voices are used commercially; and
  • encouraging industry codes of practice while statutory reform develops.

The objective is not to ban AI. Rather, it is to ensure that the technology does not erase the human performers whose creativity built the soca tradition.

Soca deepfakes demonstrate how quickly innovation can blur authenticity. As Carnival evolves, the law must evolve with it, preserving trust, safeguarding artists, and ensuring that when a voice moves the crowd, we can still believe it is truly theirs.

About the Author: Jason Nathu

Jason Nathu is an attorney-at-law, admitted to practice in Trinidad and Tobago and Guyana. He is currently a full-time Tutor at the Hugh Wooding Law School.