According to Silicon Republic, the High Court in London dismissed major portions of Getty Images’ copyright lawsuit against Stability AI in a November 4 ruling. Getty had sued the AI startup in 2023, claiming it “unlawfully copied and processed millions of images protected by copyright” from its photo archive to train Stable Diffusion. The court accepted that Getty’s images had indeed been used in training, but ruled that Stability’s model weights don’t actually contain copies of Getty’s copyrighted works. Mrs Justice Joanna Smith determined that “an AI model such as Stable Diffusion which does not store or reproduce any copyright works is not an ‘infringing copy’.” Getty voluntarily abandoned most of its copyright claims during trial, leaving only secondary infringement and trademark issues, which were ultimately dismissed.
What the ruling actually says
Here’s what makes this case so interesting – the court drew a line between using copyrighted material for training and what the trained model actually contains. Getty argued that Stability had imported its works into the UK by making the model weights available for download via Hugging Face. But the evidence showed those weights don’t contain actual copies of Getty’s images. It’s the difference between reading books to learn how to write and photocopying those books to sell them. The court saw Stable Diffusion as doing the former, not the latter.
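You can see the factual basis for that distinction by peeking inside a checkpoint yourself. Here’s a minimal sketch, assuming the safetensors library and PyTorch are installed – the filename “model.safetensors” is a hypothetical path to a locally downloaded Stable Diffusion checkpoint, not a real URL or official artifact.

```python
# Minimal sketch: inspect what a Stable Diffusion checkpoint actually holds.
# Assumes `pip install safetensors torch`; "model.safetensors" is a
# hypothetical path to a locally downloaded checkpoint file.
from safetensors import safe_open

with safe_open("model.safetensors", framework="pt") as f:
    # Every entry in the file is a named tensor of learned parameters,
    # e.g. "model.diffusion_model.input_blocks.0.0.weight".
    for name in list(f.keys())[:5]:
        tensor = f.get_tensor(name)
        print(name, tuple(tensor.shape), tensor.dtype)

# The output is just parameter names, shapes and dtypes - arrays of
# floating-point numbers, with no embedded JPEGs and no record linking
# any individual weight back to a specific training image.
```

That, roughly, is the sense in which the court could accept that Getty’s images were used in training while still holding that the downloadable weights aren’t themselves copies of those images.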
And that distinction matters enormously for how we think about AI training. If every model that learned from copyrighted material were considered infringing, we’d basically have to shut down most AI development. But if there’s no protection at all, what incentive do creators have to produce new work? It’s a classic technological disruption scenario where existing laws just don’t fit the new reality.
Broader implications
Legal experts are already pointing to much wider consequences. Rebecca Newman of Addleshaw Goddard thinks this could mean “the UK’s secondary copyright regime is not strong enough to protect its creators.” Meanwhile, Dr Barry Scannell notes the ruling could collide with data protection rules, since the European Data Protection Board takes a different view of when AI models contain personal data.
Basically, we’re watching multiple legal frameworks crash into each other. Copyright law, data protection, trademark law – they were all built for a different technological era. Now courts are trying to apply them to systems that learn and create in ways the original lawmakers never imagined. It’s messy, and this ruling shows just how messy.
Where this leaves creators
Getty’s statement reveals the practical reality here – they “invested millions of pounds to reach this point” and are now calling for stronger transparency rules. Think about that for a second. If a company with Getty’s resources can’t effectively protect its work through existing legal channels, what chance do individual artists or smaller creators have?
The company isn’t giving up completely – they note they’ll “continue to pursue in another venue.” But this loss in the UK courts definitely changes the battlefield. It suggests that, in some jurisdictions at least, the current legal framework might not offer the protection creators hoped for. And that could push more companies toward the kind of partnership Getty just announced with Perplexity – if you can’t beat them through lawsuits, maybe you join them through licensing deals.
So where does this leave us? Probably with more uncertainty than clarity. Different courts in different countries will likely reach different conclusions, creating a patchwork of legal standards that AI companies will have to navigate. But one thing’s clear – the old rules aren’t working, and everyone from creators to tech companies to lawmakers knows we need new ones.
