Every day, advances in artificial intelligence push the boundaries of what is technically possible and socially acceptable.

One of the most striking recent examples comes from the family of Joaquin Oliver, a victim of the 2018 Parkland school shooting in Florida. They created an interactive AI avatar of their late son to advocate against gun violence. The avatar was even interviewed by a real journalist.

The project drew praise from some for its innovation, but also deep discomfort from others who found it unsettling or even exploitative.

This case raises an important legal question: Who controls the image, voice, and likeness of a person after death in the age of AI?

Post-Mortem Personality Rights

In many countries, living individuals enjoy a “right of publicity” or “personality rights” that prevent their name, image, and voice from being used for commercial purposes without consent.

However, the extent to which these rights survive after death varies widely.

For example, some U.S. states allow a deceased person’s estate to control these rights for decades. In contrast, Trinidad and Tobago currently has no standalone law that protects a person’s image or likeness after death. This leaves families with limited legal tools to prevent unapproved uses.

The global nature of the internet adds to the complexity. An AI-generated likeness can be shared instantly across borders, reaching jurisdictions with different rules. Enforcing rights internationally can be costly and difficult.

Denmark’s Proposed Reforms

In June 2025, Denmark announced plans to overhaul its copyright laws to address deepfakes and the misuse of personal likenesses.

Under the proposal, every person would have enforceable rights over their own body, facial features, and voice. These rights would last for 50 years after death and would pass to the person’s estate.

If adopted, this would be one of the clearest legal frameworks in the world for post-mortem control of a person’s likeness. Any AI-generated representation, whether for commercial, political, or artistic purposes, would require permission from the estate during that 50-year period.

Consent and Context

Law is only part of the picture. Even when a family consents, as in the Oliver case, ethical questions remain.

Does such a digital recreation truly reflect the wishes and values of the person it portrays? Could it be manipulated to say or do things they never would have agreed to in life?

Without clear safeguards, technology could outpace society’s ability to use these tools responsibly.

The Path Ahead

The intersection of AI, deepfake technology, and post-mortem rights demands urgent legal attention. As Denmark moves towards stronger protections, other countries may follow.

However, without international cooperation, a patchwork of laws could leave significant gaps, especially online.

How we balance innovation, free expression, and the dignity of the deceased will shape not only the future of AI regulation, but also our shared understanding of life, death, and memory in the digital age.


This post was created with the assistance of artificial intelligence (AI), has been thoroughly reviewed for accuracy, and includes original content from the author. While AI can offer general information, it is important not to rely on it for legal advice.

About The Author

Jason Nathu is an attorney-at-law, admitted to practice in Trinidad and Tobago and Guyana. He is currently a full-time Tutor at the Hugh Wooding Law School.