WHEN THE CODE GETS CREATIVE:
ANTHROPIC, AI OWNERSHIP, AND THE LAW
By Jessica Debrah
In April 2023, a song called Heart on My Sleeve spread like wildfire across TikTok and streaming platforms. It sounded like Drake and The Weeknd teaming up for a surprise drop, except neither artist had ever stepped into a studio for it. The track was written, composed, and “performed” entirely by AI.
A few years earlier, at a Christie’s auction, a portrait of a solemn-looking gentleman with blurry edges, titled Edmond de Belamy, sold for over $432,000. Once again the “artist” was not human, but a generative algorithm trained on thousands of other works.
From viral songs to record-breaking paintings, AI is no longer just assisting creativity; it is producing it. Who, then, owns these works? Who gets the royalties? And in a world where the “creator” could be a line of code, do current copyright laws even have the right tools to keep up?
Anthropic isn’t a household name like ChatGPT or Midjourney, but in the AI world it is one of the powerhouses pushing the frontier of generative intelligence. Founded in 2021 by former OpenAI researchers, it is an AI research and development company that has built a suite of large language models named Claude as a competitor to OpenAI's ChatGPT. The company focuses on safety and ethics, encouraging responsible innovation across the artificial intelligence industry. Claude can generate essays, poetry, product designs, and even business strategies with a level of polish that rivals human work.
The U.S. Copyright Office has made its stance clear: works created entirely by AI without “meaningful human authorship” are not eligible
for copyright. Other jurisdictions, like the UK, have taken a more flexible
approach, granting rights to the human who “made the arrangements” for the AI’s
creation. Ghana and other African IP regimes are still evolving in this space,
meaning the rules are even less defined.
Copyright laws exist to protect human creativity. But AI
isn’t human. It doesn’t sleep, doesn’t need inspiration, and can create
endlessly. So if an AI like Claude produces a poem, a design, or a soundtrack,
the law runs into a wall. Can the person who typed the prompts claim full
rights? Or should there be a new category of ownership entirely? The answers to
these questions will shape the creative economy for decades.
Anthropic itself faces several lawsuits and policy debates over whether its training data, often copied from existing works, infringes the very copyrights that IP law is supposed to protect. Meanwhile, its models are empowering creators and businesses to generate valuable assets at unprecedented speed and scale.
The Legal Landscape: Anthropic vs. Copyright
In June 2025, U.S. District Judge William Alsup delivered a landmark ruling in the Bartz v. Anthropic case. He ruled that using legally purchased books to train Claude’s AI models qualifies as fair use, calling it “exceedingly transformative” and likening it to how a writer learns from reading. Nonetheless, Judge Alsup drew a firm line at piracy, condemning Anthropic’s downloading of over 7 million pirated books from sites like LibGen. He held that this was not protected under fair use and allowed that part of the case to proceed to trial.
This is a pivotal moment for AI and IP. When an AI like Anthropic’s Claude learns from millions of works, it isn’t “copying” in the classic sense; it is better understood as remixing patterns into something new. The fair use ruling says, in effect, “Okay, that’s legal if you actually paid for the material you learned from.”
But the moment piracy enters the picture, the innovation story turns into one of theft. And that is where the tension sits: tech companies want room to experiment, while creators want the law to stop their life’s work from becoming free training data for someone else’s profit.
Looking Ahead
AI isn’t slowing down; neither are the legal battles. What
we’re watching is the early blueprint for how the world will treat creativity
in the machine age. Take Thomson Reuters v. Ross Intelligence, where
Reuters claims its legal database was unfairly used to train an AI tool without
permission. Or Getty Images V. Stability AI, where the stock photo giant
argues its copyrighted images were scraped to generate AI art. Together, these
cases are setting the stage for how far AI can go when it borrows from human
creations.
The next few years will decide whether the law can keep pace
with code that learns, creates, and competes with humans. For creators, it’s a
call to register your rights, know your worth, and push for policies that
protect your ideas.
Because in this new era, the question isn’t just “Can AI create?” It clearly can, and endlessly. The real question is “Who gets to own what it creates?”
And the answer will shape the future of art, tech, and every bright idea yet to be imagined.