Coming at this from a slightly different angle that I think deserves more discussion: the training data implications.
Everyone's focused on whether you can copyright the OUTPUT, but there's a looming issue with the INPUTS. The NYT v. OpenAI case is still working through the courts, and the February 2025 decision in Thomson Reuters v. Ross Intelligence held that using copyrighted works to train an AI system can constitute infringement in some circumstances — the court rejected Ross's fair use defense, though notably that case involved a non-generative legal research tool, so its reach for generative models is still untested.
Here's why this matters for content creators: if a court eventually rules that a model's training was infringing, what happens to outputs generated by that model? Could your AI-assisted content face downstream liability claims? No court has answered this yet, but it's an open question worth thinking about.
The practical risk is probably low for most use cases, but if you're in a high-stakes context (publishing, journalism, academic work), it's worth tracking these cases. The Concord Music v. Anthropic and Getty v. Stability AI cases could also set important precedents in 2026.
My recommendation: if you're using AI-assisted content in a context where IP chain-of-title matters (M&A due diligence, licensing, publishing contracts), document which AI tools you used and when. If the legal landscape shifts, you'll want that paper trail.