According to CNET, Justice Joanna Smith ruled on Tuesday that Stability AI did not violate copyright law when training its Stable Diffusion image models on Getty Images’ content. The judge determined that Stability AI never “stored or reproduced” Getty’s copyrighted works, so the copyright infringement claims failed. However, Getty partially succeeded on its trademark arguments, with the court finding that Stability AI violated trademark protections when users generated images resembling Getty and iStock watermarks. Both companies immediately claimed victory in the complex ruling, with Getty calling it a win for intellectual property owners and Stability AI emphasizing that the core copyright concerns were resolved in its favor. The decision comes after Getty dropped its primary copyright claims earlier this year, leaving only secondary infringement claims for the court to consider.
A classic split decision
Here’s the thing about this ruling – it’s exactly what happens when courts try to apply old laws to fundamentally new technology. Justice Smith herself called her findings both “historic” and “extremely limited,” which basically means nobody really knows what they’re doing yet. And honestly, can you blame them? We’re asking judges to apply copyright frameworks designed for photocopiers and VCRs to AI systems that don’t actually store the training data they learn from.
The trademark part makes sense – if Stable Diffusion is spitting out images with Getty’s logo on them, that’s clearly problematic. But the copyright question? That’s where things get messy. The court basically said since Stability AI isn’t storing complete copies of Getty’s images, there’s no direct infringement. But does that really capture what’s happening when an AI model learns from millions of copyrighted works?
The precedent problem
Look, what’s fascinating here is how this UK ruling echoes what we’ve seen in US courts. Anthropic and Meta both prevailed on key fair-use questions in similar suits brought by authors who claimed their books were used without permission. There’s a pattern emerging where courts are struggling to fit AI training into existing copyright frameworks. And honestly, the four-factor fair-use test US courts rely on? It wasn’t designed for this.
But here’s what worries me: every one of these narrow, case-specific rulings creates another piece of patchwork precedent. The judge explicitly said her ruling depends entirely on the specific evidence and arguments in this case. So next time, with different facts or different lawyers? We could get a completely opposite outcome. That’s not exactly stability for an industry that needs clear rules.
What this means for creators
For artists and photographers, this ruling is kind of a mixed bag. On one hand, it confirms that AI companies can’t just slap your trademarks on their outputs. Getty’s statement makes a good point about responsibility lying with the model provider, not the user. That’s actually significant – it means the companies building these models can’t just pass the buck to people typing in prompts.
But the copyright part? That’s where creators might feel let down. If your work gets scraped into a training dataset, this ruling suggests there’s not much you can do about it under current UK law. The court’s full reasoning is in the detailed ruling, and Getty’s reaction is in its official statement. Both are worth reading to understand the nuances.
Basically, we’re watching the slow, painful birth of AI copyright law in real time. Each case gives us another piece of the puzzle, but nobody’s putting the whole picture together yet. And until they do? Expect more of these “everyone wins and nobody wins” rulings that leave both sides claiming victory while the fundamental questions remain unanswered.
