California's SB 942 (the California AI Transparency Act) took effect January 1, 2026, and I'm trying to figure out what it actually means for startups building on top of foundation models.
We're a Series A company with an AI-powered legal document analysis tool. We use GPT-4 and Claude under the hood for summarization and clause extraction. Our product processes contracts and outputs structured summaries, risk flags, and suggested edits.
SB 942 requires "covered providers" of generative AI systems to disclose when content is AI-generated (via manifest and latent disclosures) and to offer a free AI detection tool — and a covered provider is defined by a threshold of over 1,000,000 monthly visitors or users. But the implementation details are murky at best:
- Do we need to watermark every output our tool generates?
- Do we need to disclose which specific model we're using under the hood?
- What counts as a "generative AI system" versus a tool that happens to use AI internally?
- Our outputs are a mix of AI-generated text and template-based content — how do we handle hybrid outputs?
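To make the hybrid-output question concrete, here's a minimal sketch of how we're thinking about tagging provenance per segment internally. All the names here (`Provenance`, `Segment`, `needs_disclosure`) are my own invention, not anything from the bill — and the "disclose if any segment is AI-generated" rule is just the conservative assumption we're leaning toward, precisely because SB 942 doesn't say how mixed content is treated:

```python
from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    AI_GENERATED = "ai_generated"   # text produced by GPT-4 / Claude
    TEMPLATE = "template"           # boilerplate from our own templates
    HUMAN_EDITED = "human_edited"   # AI output subsequently revised by a person

@dataclass
class Segment:
    text: str
    provenance: Provenance

def needs_disclosure(segments: list[Segment]) -> bool:
    # Conservative assumption: if any segment came straight from a model,
    # treat the whole document as AI-generated for disclosure purposes.
    # SB 942's actual treatment of hybrid outputs is exactly what's unclear.
    return any(s.provenance is Provenance.AI_GENERATED for s in segments)

# Example hybrid output: template boilerplate + model-written risk flag
doc = [
    Segment("STANDARD INDEMNIFICATION CLAUSE SUMMARY", Provenance.TEMPLATE),
    Segment("Risk flag: uncapped liability in §7.2.", Provenance.AI_GENERATED),
]
print(needs_disclosure(doc))  # True
```

Even if the legal answer turns out to be narrower, tracking provenance at this granularity seems like cheap insurance — it's much harder to retrofit than to capture at generation time.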
I've read the bill text three times and I'm still confused. Anyone dealt with this yet?