While intellectual property laws, privacy regulations and NIL agreements attempt to address these issues, they often lag behind innovation, leaving individuals vulnerable to exploitation. The intersection of AI, NIL and biometric data collection raises profound concerns about whether existing legal frameworks adequately protect personal property rights while fostering innovation.
The Fourth Amendment, which protects against unreasonable searches and seizures, does not extend to private companies. As a result, corporations can collect and use biometric data with minimal oversight unless state laws like BIPA apply. Moreover, many companies bury consent clauses deep within terms of service agreements, effectively stripping users of ownership without their full understanding.
Lawmakers must also develop clear guidelines on how AI-generated likenesses can be used. While some argue for an outright ban on unauthorized deepfakes, others suggest a licensing model where individuals can opt into or out of AI-generated representations. Laws like BIPA should also serve as a model for national legislation.
As technology evolves, so too must our understanding of personal identity and ownership. The rapid rise of AI-generated deepfakes, NIL contracts and biometric data collection presents both opportunities and risks. Without stronger legal protections, individuals risk losing control over their most personal asset — their own face. However, any legal reforms must balance the need for privacy and autonomy with the benefits of innovation.
Aron Solomon is the chief strategy officer for Amplify. He holds a law degree and has taught entrepreneurship at McGill University and the University of Pennsylvania, and was elected to Fastcase 50, recognizing the top 50 legal innovators in the world. His writing has been featured in Newsweek, The Hill, Fast Company, Fortune, Forbes, CBS News, CNBC, USA Today and many other publications. He was nominated for a Pulitzer Prize for his op-ed in The Independent exposing the NFL’s “race-norming” policies.
No easy answers
The advent of NIL rights in college athletics represents a significant shift in how individuals can monetize their personal brand. The NCAA’s decision in 2021 to allow athletes to profit from NIL deals was heralded as a win for personal property rights. Yet these agreements also introduce complex legal challenges.
Individuals should have the right to opt out of biometric data collection and demand the deletion of their data upon request. Athletes and other public figures need stronger protections against exploitative NIL contracts. Transparency requirements, mandatory legal review periods, and caps on contract duration could prevent individuals from unintentionally signing away their rights.
In an era where artificial intelligence can generate hyper-realistic deepfakes, companies monetize biometric data, and athletes fight for their rights under name, image and likeness, or NIL, contracts, a fundamental question emerges: Do we truly own our own faces?
Third-party AI-generated NIL exploitation poses a growing threat. If an athlete refuses to sign an NIL deal, what prevents companies from using AI to create deepfake versions of them? While some NIL contracts include exclusivity provisions, they rarely address unauthorized AI-generated likenesses, leaving a loophole for exploitation.
The growing threat of AI in NIL exploitation
Despite these emerging legal battles, current laws fail to comprehensively address the question of whether individuals truly own their own faces. Several key areas require reform. The U.S. lacks a nationwide legal standard for NIL and likeness rights. Establishing a federal right to publicity could help individuals maintain control over their name, image and likeness across all industries.
Some deals include perpetuity clauses, meaning an athlete could unknowingly sign away lifelong rights to their image. In essence, rather than securing ownership over their face, some athletes may end up losing it to corporate interests.
Deepfake technology has progressed to the point where AI-generated images, videos and audio can be nearly indistinguishable from reality. This advancement raises serious concerns about ownership and consent. If an AI-generated deepfake replicates a person’s likeness without their permission, do they have legal recourse? The answer depends largely on jurisdiction and existing legal frameworks. Some U.S. states have enacted laws criminalizing certain uses of deepfakes, particularly in cases of nonconsensual pornography or election interference.
While stronger protections are necessary, it is also important to recognize the role of innovation. AI has tremendous potential in industries such as film, advertising and gaming, where digital likenesses can be used for creative purposes. Rather than stifling progress, legal frameworks should strike a balance between protecting individual rights and fostering technological advancements. A possible solution is compensatory licensing models, where companies using AI-generated likenesses must pay royalties to the individuals they replicate. Such a system would preserve personal ownership while allowing businesses to continue innovating.
Some states have taken legislative action. Illinois’ Biometric Information Privacy Act (BIPA) is one of the strongest laws in the U.S., requiring companies to obtain explicit consent before collecting and storing biometric data. BIPA has led to significant legal battles, including a $650 million settlement from Facebook over its facial recognition practices. But the reality is that federal law offers little protection.
For instance, California’s AB 602 provides a private right of action for individuals whose likeness is used in deepfake pornography without consent. Similarly, Virginia criminalized the unauthorized distribution of deepfake pornography. But these laws focus on specific harms rather than broader issues of likeness ownership.
Striking a balance
We all see our personal identity as sacrosanct, but the pace of technological change, with the legal system desperately trying to catch up, challenges that assumption.
Beyond deepfakes and NIL, biometric data collection presents another critical challenge to personal ownership. From facial recognition technology in airports to social media platforms collecting facial data, corporations and governments have amassed vast databases of personal identifiers. But who owns this data, and what rights do individuals have over it?