Private members-only forum

Claude Code MIT license - can I actually build commercial products with it?

Started by smalltownlegal_5 · Jan 16, 2026 · 14 replies
For informational purposes only. License interpretation can be nuanced - consider consulting with an IP attorney for specific use cases.
SM
smalltownlegal_5 OP

Hey everyone, been playing around with Claude Code (Anthropic's CLI tool for coding with Claude) and I'm pretty impressed. Thinking about building a developer tool startup around it - basically extending it with some specialized workflows for my niche.

I see it's released under the MIT license on GitHub, but I'm getting confused by a few things:

  • Can I actually sell a product that uses this code commercially?
  • Do I need to open source my modifications?
  • What about the API costs - are those separate from the license?
  • Any "gotchas" I should know about?

My co-founder is worried we might get sued or something if we build on top of their code. Am I overthinking this or is there actually something to worry about?

WD
what_do_i_do_now_4

Enterprise architect at a Fortune 500 here. Want to share our perspective since we went through an extensive legal review before approving Claude Code for internal use.

Our legal team's position: the enterprise license terms from Anthropic are materially different from the individual/team plans. Key differences that mattered to us:

  • Enterprise plans include explicit IP indemnification -- Anthropic will defend you if someone claims the AI output infringes their IP
  • Data retention and training provisions are more favorable -- your code isn't used to train their models
  • You get a dedicated legal contact and can negotiate custom terms
  • SLAs are contractually binding, not just "best effort"

If you're a startup building a commercial product on Claude Code, I would strongly recommend the enterprise plan once you hit any meaningful revenue. The indemnification alone is worth it. Individual plan ToS are much more "as-is."

We also require all AI-generated code to go through the same code review process as human-written code. No exceptions. Our static analysis tools don't care who or what wrote the code.

TM
travis_m_13 Attorney

@nursing_life This is actually one of the most common questions I get from agency clients right now. Let me give you the legal perspective, though this also has ethical dimensions that go beyond the law.

Legally: Unless your contract with the client specifically prohibits use of AI tools, or requires disclosure of tools/methods used, you're generally in the clear. Most development contracts are deliverable-based ("build me X feature") not process-based ("write every line by hand"). Check your MSA and SOWs carefully.

Billing: Hourly billing for AI-assisted work is a gray area. Many agencies are moving to value-based or project-based pricing precisely because of this. If you're billing hourly, you should bill for the actual time spent, including the time reviewing, testing, and integrating the AI output. I would not recommend padding hours.

Disclosure: This is where it gets interesting. Some industries (fintech, healthcare, government) have regulations or contract terms that require disclosure of AI usage. If your client's contract has a "tools and methods" clause, you may be obligated to disclose. Even if not legally required, proactive disclosure builds trust. I'd recommend updating your MSA to include an AI tools clause going forward.

GI
gighustle_5

Following up on the client billing discussion. I'm a freelance full-stack developer and I recently had an awkward situation where a client found out I was using Claude Code because I accidentally left a comment in the code that said // Generated by Claude. Client was NOT happy.

Their concern wasn't legal -- it was that they felt they were "paying for AI work, not human work." They asked for a discount. I ended up negotiating a small credit but it was uncomfortable.

Lesson learned: be upfront about your tools from the start. I now include a clause in all my contracts that says I may use AI-assisted development tools, and that all output is reviewed and validated by me as the developer. No client has pushed back on this so far. Transparency is way better than getting caught.

CA
closing_arguments_10

@contractquestions_3 Yes, that's exactly the trap. A provisional patent application could help, but remember: the provisional must adequately describe the invention, and the named inventor must be a natural person who actually contributed to the inventive concept. You can't file a provisional for something that was entirely AI-conceived.

The practical approach I recommend to startup clients: have your developers take the AI-generated code as a starting point, then apply genuine human ingenuity to refine, optimize, or extend it. Document this process. The human inventive contribution -- even if it builds on AI output -- is what makes it patentable.

And yes, if patent protection matters to your business, keep your novel work in private repos until you've filed appropriate patent applications. The America Invents Act gives you a one-year grace period after public disclosure, but why risk it?

KT
Kelly_TL Moderator

Great discussion everyone. Quick moderation note: this thread has evolved well beyond the original MIT license question into a broader discussion about AI-generated code commercial use, which is fantastic.

I'm going to keep this thread open since the topics are closely related. Just a reminder of forum rules: nothing in this thread constitutes legal advice. The attorneys here are sharing general information, not forming attorney-client relationships. If you need specific legal advice for your situation, hire a lawyer.

Also tagging this thread with additional categories: "AI Copyright," "Code Licensing," "Commercial Use," and "Enterprise." Carry on!

SM
smalltownlegal_5 OP

OP here -- just want to say this thread has blown up way beyond what I expected. Quick update on my situation: we launched our product about 8 months ago and it's going well! We used Claude Code as a foundation and built our own tooling on top.

A few real-world things I've learned since the original post:

  • The MIT license for Claude Code itself has been a non-issue. No problems whatsoever.
  • API costs are the real consideration. We spend more on Anthropic API than on our entire AWS infrastructure.
  • We ended up getting enterprise API terms once we hit ~$5k/month in usage. The indemnification clause alone was worth it.
  • We include a clause in our customer agreements disclosing that we use AI tools in development.

The copyright ownership question raised by @AuditManagerT_7 is concerning though. We're going to talk to our lawyers about documenting human authorship more formally. Thanks everyone for keeping this thread alive with great info.

SO
sustained_overruled_15

Wanted to circle back to this thread with a real-world update. We just closed our Series B and during due diligence, the investors' legal team specifically asked about our use of AI coding tools. This is the first time that's happened to us (didn't come up in Seed or Series A).

Questions they asked:

  • What percentage of your codebase is AI-generated vs human-written?
  • Do you have policies governing AI tool usage?
  • Are you on enterprise terms with your AI providers?
  • Have you done an IP audit to confirm copyrightability of key code?
  • What's your exposure if AI-generated code infringes third-party IP?

We had good answers for most of these because we've been thinking about it since the beginning (partly thanks to this thread). But I know plenty of founders who would be caught flat-footed. If you're planning to raise capital, start thinking about these questions now.

RW
remote_work_life

Solution architect at a fintech company here. I want to share our experience doing a formal code audit to verify AI-generated code isn't copying GPL/AGPL code, since a few people have asked about this.

We went through this process six months ago when our compliance team flagged AI tool usage as a risk. Here's what we did:

  1. Inventory: Used git log analysis and AI-usage tracking to identify which parts of our codebase were AI-generated or AI-assisted
  2. License scanning: Ran the entire codebase through FOSSA and Black Duck. Both tools can detect code snippets that match known open source components
  3. Manual review: For any flagged matches, had senior developers manually assess whether the similarity was coincidental (common patterns) or substantive (actual reproduction)
  4. Remediation: For the ~5% of flagged code that was substantively similar, we rewrote those sections from scratch with clear human authorship

Total cost: about $30K including the commercial scanning tools and developer time. For a company our size (~$20M ARR), that's a reasonable investment in IP hygiene. For a startup, the open-source scancode-toolkit is a good alternative.
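The inventory step above depends on being able to tell which commits were AI-assisted in the first place. A minimal sketch of that step, assuming a team convention (hypothetical, not a standard) of adding an `AI-Assisted: yes` trailer to relevant commits and exporting history with something like `git log --format='commit %H%n%(trailers:key=AI-Assisted)' --name-only`:

```python
from collections import defaultdict

def inventory_ai_files(log_text: str) -> dict:
    """Map each file path to the number of AI-assisted commits touching it.

    `log_text` is assumed to be git log output where each commit starts with
    a line "commit <hash>", optionally followed by an "AI-Assisted: yes"
    trailer line, then the list of touched file paths (one per line).
    """
    counts = defaultdict(int)
    ai_commit = False
    for line in log_text.splitlines():
        if line.startswith("commit "):
            # New commit: reset the flag until we see the trailer again.
            ai_commit = False
        elif line.strip().lower() == "ai-assisted: yes":
            ai_commit = True
        elif line.strip() and ai_commit:
            # Any other non-blank line is a file path from --name-only.
            counts[line.strip()] += 1
    return dict(counts)
```

The resulting file-to-count map is what you would then feed into the license-scanning step, so FOSSA, Black Duck, or scancode-toolkit runs can be prioritized on the files with the heaviest AI involvement.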

AR
ArbitratorLiz_10

I want to bring up something specific to open source licensing conflicts that I haven't seen mentioned. If you're building a product that includes multiple open source dependencies (as basically everyone does), and you're adding AI-generated code on top, you need to think about license compatibility across the entire stack.

Example: your project uses an Apache 2.0 licensed library, an MIT-licensed library, and some AI-generated code. The Apache 2.0 license has a patent grant provision that MIT doesn't. If the AI-generated code inadvertently creates a patent conflict with the Apache component, you've got a mess that's extremely hard to untangle.

Another example: if you're building a project under the GPL and using Claude Code to generate code, the GPL requires that all contributions to the project be GPL-compatible. But AI-generated code with uncertain copyright status -- is it GPL-compatible? Can you even apply the GPL to code that may not be copyrightable? This is genuinely uncharted territory.

I don't have great answers here, just flagging that the interaction between AI-generated code and existing open source license ecosystems is more complex than most people realize.

EM
EmploymentLawyerS_1 Attorney

@MediatorPaulR_9 Good questions. Let me address both.

Work-for-hire and AI tools: Most employment agreements are written broadly enough that code you produce using any tools "within the scope of your employment" belongs to the employer. The fact that you used an AI tool doesn't change this -- you used a compiler too, and nobody argues the compiler owns the code. Your employer's IP assignment clause almost certainly covers AI-assisted work. If it doesn't explicitly, most courts would interpret it to include outputs produced using company-approved tools during work hours.

Personal projects and employer claims: This is a real risk, but it's not unique to AI tools. The risk exists anytime you work on personal projects in the same domain as your day job. The key factors are: (1) did you use company resources (including company API keys for Claude), (2) is the work related to your employer's business, and (3) did you develop it during work hours? AI doesn't change this analysis -- it's the same framework that's applied to side projects for decades.

My practical advice: use separate API accounts for work and personal projects. Don't cross-pollinate prompts. And if your employment agreement has an IP assignment clause, get a written carve-out for your side project before you start building. Most employers will grant these if you ask.

TE
techworker_5

Update from the government contracting world: the GSA just published new guidance this month on AI tool usage in federal IT contracts. Key points that affect developers using Claude Code for government work:

  • All AI-assisted software must be disclosed in the technical proposal
  • The contracting officer can require an AI impact assessment for software used in critical systems
  • FedRAMP authorization requirements may apply to AI tools that process government data (Claude's API would need to be FedRAMP authorized or you need to use it in a way that doesn't process government data)
  • Government retains unlimited rights to AI-generated code delivered under the contract, same as human-written code

The FedRAMP issue is the big one. As far as I know, Anthropic's API is not yet FedRAMP authorized. If you're using Claude Code to generate code that processes government data, you may have a compliance gap. We're currently working around this by ensuring Claude Code only sees our own proprietary code, never government data.

SM
smalltownlegal_10

Want to address @ArbitratorLiz_10's point about GPL and copyrightability. This is actually a really interesting philosophical and legal question for the open source world.

The GPL works by using copyright law: because the code is copyrighted, the GPL's conditions (share-alike, source disclosure) are enforceable. If AI-generated code is NOT copyrightable, you arguably can't apply the GPL to it -- and more importantly, the GPL's copyleft provisions might not be enforceable against it.

This creates a paradox: the same lack of copyrightability that weakens your ability to protect AI-generated code also weakens others' ability to enforce copyleft licenses against you. If nobody can copyright AI-generated code, then nobody can claim GPL violations when that code is used without source disclosure.

Now, this is a theoretical argument and no court has ruled on it. And it doesn't help with the "contamination" scenario where AI output closely matches human-authored GPL code (which IS copyrighted). But it's an interesting wrinkle that I think will eventually be litigated. The FSF and OSI haven't published clear positions on this yet, which tells me they're still figuring it out too.

RE
renterguy_2

Final thought from me on this thread: I've been compiling our team's code review data for AI-generated code over the past six months, and the trend is encouraging. The defect rate in AI-generated PRs has dropped from about 25% to about 12% as our developers have gotten better at prompting and reviewing.

Key improvements we've made:

  • We created a team-specific prompt library with instructions for our coding standards, error handling patterns, and security requirements
  • We added automated static analysis in CI that catches the most common AI-generated issues before human review
  • We require developers to explain WHY they're using AI for a given task in the PR description, which forces them to think critically about whether AI is the right tool for the job
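The automated CI gate in the list above can be quite simple. A minimal sketch, assuming you scan only the lines added in a PR diff; the patterns here are illustrative examples (leftover AI attribution comments like the one in @gighustle_5's story, generation placeholders, hardcoded credentials), not a substitute for a real SAST tool:

```python
import re

# Illustrative patterns for issues we've seen in AI-generated diffs.
# A production gate would combine this with a proper static analyzer.
FLAG_PATTERNS = [
    (re.compile(r"generated by (claude|ai)", re.I),
     "leftover AI attribution comment"),
    (re.compile(r"\bTODO: implement\b", re.I),
     "placeholder left by generation"),
    (re.compile(r"(api[_-]?key|secret)\s*=\s*['\"][A-Za-z0-9]{16,}['\"]", re.I),
     "possible hardcoded credential"),
]

def review_diff(added_lines: list[str]) -> list[str]:
    """Return human-readable findings for the lines added in a PR."""
    findings = []
    for n, line in enumerate(added_lines, 1):
        for pattern, reason in FLAG_PATTERNS:
            if pattern.search(line):
                findings.append(f"line {n}: {reason}")
    return findings
```

Wiring this into CI as a required check means the cheap, mechanical issues are caught before a human reviewer ever sees the PR, so review time goes to logic and design instead.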

The bottom line for the commercial licensing question: AI-generated code is commercially viable and the legal risks are manageable, but only if you invest in proper process and governance. The companies that treat AI as a "fire and forget" code generator are the ones that will run into trouble -- whether it's bugs, license issues, or IP disputes.

Treat AI like a very fast but junior developer who needs supervision. That's the right mental model for both quality and legal risk.

LM
losing_my_mind_here_6

Question about liquidated damages clauses: my contract has a clause that says if I terminate early, I owe 50% of the remaining contract value. Is this enforceable or would a court consider it a penalty? The distinction matters because penalty clauses are unenforceable in most jurisdictions.
