AI Usage Restrictions
Prohibits using confidential information to train artificial intelligence models, restricts data scraping, and prevents automated extraction of protected data.
High Complexity

An AI usage restrictions clause explicitly prohibits the Receiving Party from using Confidential Information to train, develop, fine-tune, or improve artificial intelligence systems, machine learning models, or similar technologies. This includes restrictions on data scraping, automated extraction, and feeding information into generative AI tools that may retain or learn from the input.
AI usage restrictions are a relatively new addition to NDA clauses, emerging primarily after 2022 with the widespread adoption of large language models. Courts have not yet extensively interpreted these provisions, making precise drafting essential. The clause should clearly define what constitutes "AI" or "machine learning" to avoid ambiguity, and should address both direct training and indirect uses (such as uploading to AI-powered analysis tools). Some jurisdictions may view overly broad AI restrictions as unenforceable restraints on technology use, so the restriction should be reasonably tailored to legitimate confidentiality concerns.
AI restrictions that last "in perpetuity" or "forever" may be unenforceable and are often impractical. Technology changes rapidly, and perpetual restrictions may prohibit future legitimate uses. Push for reasonable time limits.
Clauses that prohibit "AI use" without defining the term could be interpreted to prohibit common software features. Ensure clear definitions that distinguish between prohibited AI training and permitted automation.
Audit rights allowing the Disclosing Party to inspect your AI systems could expose your own trade secrets and proprietary technology. Require confidentiality protections and limit scope to compliance verification only.
Language making you strictly liable whenever a third-party vendor uses AI on the data (even without your knowledge) creates unreasonable risk. Negotiate for liability only where you authorized the use or were negligent in preventing it.
Per-violation damages of $500,000 or more may be challenged as unenforceable penalties. Courts generally require liquidated damages to reasonably approximate actual harm, which is difficult to establish for AI training violations.
AI restrictions are rapidly evolving. Consider the specific AI tools your organization uses and ensure they are compatible with proposed restrictions.