Artificial intelligence has rapidly emerged as a transformative tool for companies across industries. From drafting marketing materials to automating client communications, AI-generated content offers both efficiency and cost savings. It has proven especially valuable in areas such as marketing, where organizations can now centralize design and targeting strategies internally, reducing their reliance on external freelancers.
Yet it also raises legal questions: Who owns the creation? Can I use AI-generated materials freely? How can I claim ownership of such materials? What are the contingencies? Can I get sued?
Unfortunately, gray areas in this field remain abundant, and when it comes to intellectual property, overlapping global jurisdictions do little to simplify matters. Compounding the challenge is the dizzying pace of AI’s development over the past five years, which has left both companies and creators navigating a landscape that often feels disorienting.
In many jurisdictions, only natural persons can be recognized as “authors” in the strict sense of the word. In the United States, one of the most influential jurisdictions for digital platforms, the prevailing consensus for more than a decade has been that AI systems, algorithms, and other non-human agents cannot qualify as authors. This principle was reinforced by the well-known case of Naruto v. Slater, in which a crested macaque in Indonesia famously took a series of selfies with photographer David Slater’s unattended camera. After extensive litigation, the courts ultimately ruled that, however novel or amusing the photographs were, a monkey could not be considered an author under copyright law. Much to PETA’s disappointment, authorship rights were confirmed to belong solely to the human photographer, David Slater.
The same position was upheld in 2023, when Steven Thaler sought to register an AI system he had designed as the author of the work A Recent Entrance to Paradise. The Copyright Office denied the request.
This is not an “American thing,” as one might think. The tendency is backed up by the Berne Convention for the Protection of Literary and Artistic Works, which builds the entire protection system around the figure of the author as a person, establishing moral rights linked to the author’s personality and protection terms tied to the author’s lifetime.
Given that the Berne Convention has over 180 signatory countries, it is no surprise that many of them have adopted the U.S. interpretation. Just this past June 2025, Mexico denied the registration of an avatar in which the applicant intended to vest moral rights in the AI system while keeping patrimonial (economic) rights for himself.
Although exercises such as Thaler’s, as well as the Mexican registration request, seem more aligned with a trial-and-error tactic — a “let’s see how it goes” kind of thing — they have raised eyebrows, inquiries, and concerns.
As of today, we can conclude that only a human being (or, in some specific cases, an enterprise) can be considered an author, and thus no AI system can hold copyright over any work.
Now, let’s say that we have determined that only Peter — or Peter Enterprises Inc. — can be considered an author. Can the result of a prompt be considered a work? And if so, can I claim rights over it?
Most legal frameworks were not designed with AI in mind. Although they have managed to keep a numerus apertus (open-ended) list of the types of creations that may be considered works, it has been decades since doctrines such as “sweat of the brow” in the United States or “creative height” in civil-law systems were taken into consideration when protecting a work. It suffices for the work to be original.
Therefore, it seems that any original work can be copyrighted, regardless of the means used to create it, as long as the author is a person. Right? In general terms, yes, but some weeks ago Colombia’s Copyright Office determined that if there is no significant creative input from a person, works created by means of AI models cannot be considered works of the intellect and, as such, cannot be protected under Colombian copyright law. Basically: no author, no work, no protection.
When using generative AI models, it is clear that the technology does not create in a vacuum. It relies on inputs and vast datasets, and may — whether inadvertently or intentionally (a question still under debate) — replicate elements of protected works. For instance, an AI-generated text might closely mirror copyrighted material, or an image might echo a trademarked logo, without the authorization of the rights holder.
As a result, no valid claim of ownership can arise over something that was not lawfully obtained. Any subsequent registration of such a work or logo could be deemed invalid due to infringement of third-party rights.
The issue becomes even more complex when companies use AI to create human-like advertisements. Here, the concerns extend beyond intellectual property to include personal data and image rights. Even if the character depicted is fictional or generated without reference to a specific individual, a person could still allege a strong resemblance. This “virtual doppelgänger” effect may lead to claims for indemnification or demands for the removal of the advertisement, both of which carry reputational and financial risks for the company.
A central point of conflict today is the lack of transparency and control over the content used to feed and train AI models. This opacity not only undermines the rights of creators and rights holders but also exposes companies relying on AI-generated content to significant legal and compliance risks.
The terms and conditions of most AI systems do not resolve the underlying issues. While it would be difficult to generate an image of a highly protected work such as Mickey Mouse or the Coca-Cola logo using leading platforms, less prominent works can often be reproduced without the consent of their authors or rights holders.
Although many AI providers grant users ownership over the outputs, they typically disclaim any guarantee of exclusivity—both regarding the prompts submitted and the resulting content, which may also be reused as training material. This creates a self-perpetuating cycle in which data is continuously fed back into the system. Moreover, most AI providers expressly disclaim liability for any infringement of third-party rights.
Recently, and mainly after the SAG-AFTRA strike, the big players in AI have entered into agreements with content creators who had denounced infringement of their rights, as in the cases of Amazon and The New York Times.
However, the smaller players remain unseen and unheard. Efforts to regulate the use of information and data to train models are still in their infancy, and ethical use of the platforms is yet to be implemented.
So, what to do?
– Always check local law and the terms and conditions of the AI systems you plan on using. It may be tricky to determine the rules of the game when a work is to be generated through a model governed by the laws of one country, with inputs from numerous jurisdictions, and intended for use in a particular region — but this review will at least give you a general picture of what to expect.
– Ensure that there is a human in the loop. This reduces the risk of protection being denied for lack of human participation: in general terms, most countries are not granting copyright to works created purely by AI. Human involvement also increases originality and decreases the chance of your work looking like somebody else’s.
– In work-for-hire arrangements, make sure to include provisions and limitations of liability regarding the use of AI in the deliverables.
– Use providers that disclose their training data or offer indemnification.
– Run internal IP checks before commercializing AI outputs, be they works, trademarks, or patents.
– Be aware of the information you use as input. If you feed proprietary or confidential information to the model, you may be relinquishing it for further use by people outside your organization, especially when using free versions of AI systems.
– Implement internal policies for safe AI use.
Innovation must be matched with legal foresight and safeguards. Companies should treat AI-generated content as a high-risk IP area requiring human oversight, contractual protection, and clearance checks, along with strict governance policies. By proactively addressing intellectual property, confidentiality, and compliance challenges, companies can build client trust while staying competitive in a rapidly changing digital landscape.