🧳 Noncompetes, NDAs And AI-Enhanced Job Hopping

Published: December 5, 2025 • AI, Contractors & Employees, NDA

What employees can take in their head vs in a model

Employees have always walked out the door with two things:

  • Their skills and experience, which they’re free to sell to the next employer.
  • Their memory of how things are done, which can shade into “trade secrets in your head” if they aren’t careful.

Now add AI:

  • Engineers fine-tune models on proprietary datasets.
  • Sales teams run entire playbooks through AI assistants.
  • Analysts build LLM-powered pipelines that encode customer lists, pricing logic and strategy.

When that person changes jobs, what exactly are they allowed to “take” — and what must stay behind?

This article maps that line using three main tools:

  • Noncompete agreements
  • NDAs / confidentiality clauses
  • Trade secret law (including data and models touched by AI)

🧭 The Legal Toolkit Around Job Hopping (Pre-AI vs Post-AI)

In most U.S. jurisdictions, these doctrines work together:

| Legal tool | What it does | Typical scope | AI-era twist |
| --- | --- | --- | --- |
| Noncompete | Restricts where you can work / compete for a period | Time, geography, type of work | Patchwork of state bans; can’t be used to claim “we own your brain forever” |
| NDA / Confidentiality | Restricts what you can disclose or use | Trade secrets + other “confidential” info | Needs explicit rules for AI tools, training data, models, weights |
| Trade secret law | Protects information with economic value kept reasonably secret | Formulas, code, customer data, internal docs, models | Applies whether the secret sits in docs, in your head, or inside a fine-tuned model |

Noncompetes try to control who you work for.
NDAs and trade secrets try to control which specific information you use once you get there.

AI doesn’t change that architecture — but it makes the boundaries much fuzzier.


🧱 Noncompete Agreements In 2025: The Ground Under Your Feet

At the federal level:

  • In April 2024, the FTC adopted a nationwide rule that would have banned most employment noncompetes.
  • In August 2024, a Texas federal court vacated that rule and barred it from taking effect.
  • On September 5, 2025, the FTC abandoned its appeal and acceded to vacatur, signaling it will instead pursue case-by-case enforcement against abusive noncompetes. (Federal Trade Commission)

Bottom line: there is no federal blanket ban in force; noncompetes live or die mostly under state law.

State landscape (very high level):

| State category | Examples | Status snapshot |
| --- | --- | --- |
| Near-total employment noncompete bans | California, Minnesota, North Dakota, Oklahoma, Montana, Wyoming | Most employee noncompetes void, with narrow exceptions (e.g., sale of business) (Frost Brown Todd) |
| Heavy restrictions (income thresholds, notice rules, etc.) | Colorado, Illinois, Washington, Oregon, D.C., Massachusetts and others | Often ban noncompetes for lower-wage workers and impose strict reasonableness tests (Schneider Wallace Cottrell Kim LLP) |
| More permissive but evolving | Many southern / midwestern states | Still enforce reasonable noncompetes tied to legitimate business interests |

Regardless of the state:

  • Courts remain skeptical of noncompetes used instead of narrower tools (NDAs, nonsolicits, trade secret litigation).
  • Even where noncompetes are banned, employers can still use NDAs + trade secret law to police misuse of specific information.

Which brings us to the real AI twist: what counts as “information” when AI is involved.


🧠 “In Your Head” vs “In A Model”: The Classic Trade Secret Line

Trade secret law (DTSA + state UTSA variants) protects information that (WIPO):

  1. Derives independent economic value from not being generally known; and
  2. Is subject to reasonable efforts to maintain secrecy.

For decades, courts have wrestled with the “memory” problem:

If a departing employee memorizes key facts (pricing, source code structures, formulas) and reuses them, is that misappropriation?

Older case law shows three rough positions: (digitalcommons.law.uidaho.edu)

  • Strict view: Copying into your brain doesn’t sanitize theft. Deliberate memorization of trade secrets and use for a competitor can be misappropriation.
  • Lenient view: Employees can use general skills and knowledge acquired in prior jobs, even if learned from a secret environment.
  • Middle view: Detailed, specific information (e.g., exact chemical formula, detailed configuration files, specific customer pricing matrices) remains a trade secret regardless of whether it was written down or memorized.

The practical line courts tend to draw:

  • Portable: high-level know-how, problem-solving approaches, public or generic techniques.
  • Not portable: specific, non-public data or designs that give the former employer a concrete competitive edge.

AI makes it harder to pretend everything is “just in my head,” because the knowledge often leaves artifacts (see the sketch after this list):

  • Fine-tuned models
  • Prompt libraries
  • Private embeddings or vector DBs
  • Internal “playbook” documents used to train the AI system
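
To make that concrete, here is a minimal sketch of how even a toy “vector DB” leaves a durable artifact on disk. Everything in it is illustrative: `embed()` is a hash-based stand-in for a real embedding model, and the document strings are hypothetical placeholders.

```python
import hashlib
import json
import math

def embed(text: str, dim: int = 8) -> list[float]:
    # Stand-in embedding: a hash-derived unit vector. A real pipeline would call
    # an embedding model here; the file it produces is the same kind of artifact.
    digest = hashlib.sha256(text.encode()).digest()
    vec = [b / 255 for b in digest[:dim]]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

internal_docs = [  # hypothetical confidential snippets, details elided
    "Q3 enterprise pricing floor: ...",
    "Churn-save script for accounts over $50k: ...",
]

index = [{"id": i, "vector": embed(doc)} for i, doc in enumerate(internal_docs)]

# The artifact: a file derived from confidential text, as copyable as any spreadsheet.
with open("vector_index.json", "w") as handle:
    json.dump(index, handle)
```

Delete the source documents and the index still encodes them. That file, not the know-how in your head, is what an employer’s trade-secret claim will point to first.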

🤖 AI Assets You Touch At Work: Can You Take Them?

Think of the things you might “carry” to your next job in an AI-heavy environment:

| Asset | Example | Usually OK to take? | Why / why not |
| --- | --- | --- | --- |
| General skills + patterns | “I now know how to architect retrieval-augmented generation systems for enterprise search” | ✅ Generally yes | This is classic “general skill and experience,” even if learned on the job |
| Publicly available knowledge | Prompts/techniques from public blog posts, docs, or OSS projects | ✅ Yes (subject to license terms) | Non-secret information is not a trade secret |
| Employer’s proprietary data used to tune an LLM | Fine-tuning customer-support model on actual tickets + labels | ❌ No | Dataset and resulting model are often trade secrets; copying or reusing without permission is high-risk (scholarship.kentlaw.iit.edu) |
| Fine-tuned models / weights | “I exported the .pt file I trained at my old job and brought it with me” | ❌ No | The weights likely embody employer secrets (data patterns, label strategies) |
| Internal prompt libraries / system prompts | Carefully crafted prompts for internal sales copilot or coding assistant | ⚠️ Usually no | They may reveal confidential playbooks, scripts, pricing strategies |
| AI pipelines / code that wires everything together | Internal repos for ETL scripts, adapters, evaluation harnesses | ❌ No (copying); concepts only | The specific code and configs are protected; high-level architecture ideas usually are not |
| Customer lists and interaction history encoded in a vector DB | Embeddings built from CRM notes, emails, call transcripts | ❌ No | Underlying text + embeddings can both be treated as trade secrets/datasets |

Two key points:

  1. Form doesn’t matter much: whether the secret sits in a spreadsheet, a PDF, your Google Drive, or a model’s weights, trade-secret law looks at content + secrecy, not file extension. (scholarship.law.unc.edu)
  2. AI can increase the “secret density”: a good fine-tuned model can compress hundreds of thousands of internal documents into a powerful decision tool. That makes it more, not less, appealing as a trade secret.

📜 NDAs & Confidentiality Clauses In The AI Era

If noncompetes are the blunt instrument, NDAs/confidentiality clauses are the scalpel.

Modern agreements increasingly try to anticipate AI. You’ll see language along the lines of:

  • “Employee will not input Confidential Information into any public or third-party AI system without prior written consent.” (Lexology)
  • “All models, prompts, training data, embeddings, and other AI artifacts created using Company Resources are Confidential Information and exclusive property of Company.”
  • “Employee shall not use AI systems trained on Company Confidential Information for the benefit of any subsequent employer or for personal projects without authorization.”

Typical NDA “buckets” to watch:

| NDA bucket | What it covers | AI-specific concern |
| --- | --- | --- |
| Trade secrets (narrow but strong) | Highly protected, economically valuable secrets | Feeding them into public AI tools can undermine secrecy and trigger claims against both employee and vendor |
| General “Confidential Information” (broad umbrella) | Almost anything non-public the employer calls confidential | If drafted too broadly, can look like a backdoor noncompete, especially in states hostile to noncompetes |
| Work product / IP ownership | Code, documents, models created on the job | Fine-tuned models, prompts, and custom tools almost always fall here unless the contract says otherwise |
| Data-use restrictions | Customer and personal data, regulated info (HIPAA, GLBA, etc.) | Running regulated data through third-party AI without proper basis or DPAs can be regulatory, not just contractual, trouble |

Key nuance: an NDA cannot magically convert generic skills into owned property. But it can clearly mark datasets, models, and artifacts as off-limits after you leave.


⚔️ Noncompetes + AI: What They Can’t Do

Even in more permissive states, noncompetes still must:

  • Be tied to a legitimate business interest (trade secrets, special training, key customer relationships).
  • Be reasonable in time, geography, and scope of restricted activity.

AI doesn’t justify:

  • A nationwide, multi-year ban for a mid-level engineer solely because they worked with generative models.
  • A clause that says, in substance, “you cannot work with LLMs for any competitor in any capacity” when LLMs are now pervasive.

More defensible patterns:

  • Narrow noncompetes for key architects of proprietary AI systems who had access to highly sensitive data.
  • Nonsolicitation of specific customers or employees combined with tight NDAs and trade-secret enforcement.

And in noncompete-ban states (like California), the fight almost always moves to confidentiality clauses + trade secret litigation.


🧩 AI As A Trade Secret Risk: Uploading, Fine-Tuning, Recreating

AI complicates not just what leaves, but what leaks:

1. Uploading secrets into public AI tools

  • Employees paste roadmaps, pricing matrices, or source code into public LLMs or note-taking bots (a simple guardrail against this is sketched below).
  • Depending on the tool, that data may be logged, used to improve models, or accessed by the vendor.

This can:

  • Undercut the “reasonable efforts to maintain secrecy” prong for trade secret status.
  • Expose employees to claims of breach of confidentiality or even misappropriation if the data later slips out in a way that can be traced back. (scholarship.kentlaw.iit.edu)
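
One way employers reduce this risk is an outbound-prompt guardrail. The sketch below is a minimal version assuming a simple marker convention (the word “confidential,” a hypothetical “Project Atlas” codename); real data-loss-prevention tooling goes much further.

```python
import re

# Patterns that should never leave the building. "Project Atlas" is a
# hypothetical internal codename used purely for illustration.
BLOCKLIST = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\binternal[- ]only\b", re.IGNORECASE),
    re.compile(r"\bproject\s+atlas\b", re.IGNORECASE),
]

def safe_to_send(prompt: str) -> bool:
    # False if the prompt matches any confidentiality marker.
    return not any(pattern.search(prompt) for pattern in BLOCKLIST)

def send_to_llm(prompt: str) -> str:
    if not safe_to_send(prompt):
        raise PermissionError("Prompt blocked by confidentiality policy.")
    raise NotImplementedError("Wire up the organization's approved provider here.")
```

A regex blocklist is crude, but even a filter like this is the kind of concrete, documented step that supports the “reasonable efforts to maintain secrecy” prong.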

2. Fine-tuning and then re-using the model elsewhere

If you fine-tune an LLM on:

  • Internal tickets
  • Private design docs
  • Confidential customer comms

…and then copy that fine-tuned model or parameter files to your personal drive “for later use,” you’ve almost certainly crossed into trade secret misappropriation territory.
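
A minimal PyTorch sketch shows why the file itself is the problem. A tiny linear classifier and random tensors stand in for a real LLM and real tickets; every name here is illustrative.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 2)                      # stand-in for a fine-tuned support model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
tickets = torch.randn(64, 16)                 # stand-in for embedded internal tickets
labels = torch.randint(0, 2, (64,))           # stand-in for proprietary labels

for _ in range(200):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(tickets), labels)
    loss.backward()
    optimizer.step()                          # the data's patterns move into the weights

# One ordinary, copyable file now embodies whatever the dataset was worth.
torch.save(model.state_dict(), "support_model.pt")
```

Walking out with `support_model.pt` is functionally close to walking out with the labeled dataset itself, which is why the form of the copy matters so little to the analysis.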

3. “Recreating” systems from memory using AI

Grey-area scenario:

“I didn’t take any code or data; I just described how our internal ranking algorithm worked to a public LLM and asked it to rebuild something similar.”

Courts haven’t fully caught up to this fact pattern, but doctrinally:

  • If what you describe to the LLM itself contains specific trade secret logic you had a duty to keep confidential, turning that into code via AI may still be misappropriation. (scholarship.law.unc.edu)
  • On the other hand, if you’re using genuinely generalizable know-how (“here’s how to implement a generic RAG pipeline”; see the sketch below), you’re closer to the safe “skills and experience” side.

Intent, specificity, and how uniquely “your employer’s” the design is will matter a lot.
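
For contrast, here is the kind of generic RAG skeleton that stays on the portable side. Nothing in it encodes a prior employer’s data or distinctive logic; the word-overlap scoring is a deliberately naive placeholder for any public embedding model.

```python
def overlap(a: str, b: str) -> int:
    # Naive relevance score: count of shared lowercase words.
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve(query: str, corpus: list[str], top_k: int = 3) -> list[str]:
    # Rank documents by overlap with the query and keep the best few.
    return sorted(corpus, key=lambda doc: overlap(query, doc), reverse=True)[:top_k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Standard RAG shape: retrieved context first, question after.
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The pattern is public; what makes a deployment a trade secret is the private data and tuning poured into it.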


🧑‍💼 For Employers: Building A Sane AI-Mobility Strategy

Instead of trying to lock down everything with noncompetes, a more sustainable approach is:

1. Clarify what you actually care about

Create an internal “crown jewels” list:

  • Proprietary datasets (customer logs, labeled data, call transcripts)
  • Fine-tuned models and evaluation frameworks
  • Prompt libraries and internal agent workflows
  • Sensitive business logic (pricing algorithms, risk scores)

Those are what NDAs and trade-secret programs should protect most aggressively.

2. Update NDAs and policies for AI reality

  • Add explicit AI-use clauses: when employees may / may not feed data into third-party tools.
  • Clarify ownership of models, prompts, and automations created on company time or systems.
  • Provide approved tools and documented guardrails so “shadow AI” doesn’t become the default.

3. Strengthen exit processes instead of relying on sheer fear

At exit:

  • Conduct a targeted conversation: remind the employee what’s confidential, including AI artifacts.
  • Disable access to repos, model registries, vector databases, and cloud consoles.
  • Use forensic tools sparingly but deliberately for high-risk roles (e.g., to catch someone downloading entire S3 buckets before resigning); a minimal log-triage sketch follows this list.
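
As one example, a minimal triage over a generic access log might look like the sketch below. The CSV columns (`timestamp,user,action,bytes`) and the 5 GiB threshold are assumptions to adapt to your own telemetry.

```python
import csv
from collections import defaultdict

THRESHOLD_BYTES = 5 * 2**30  # flag anyone pulling more than 5 GiB; tune to your baseline

def flag_bulk_downloads(log_path: str) -> dict[str, int]:
    # Sum download volume per user from a CSV with columns: timestamp,user,action,bytes.
    totals: defaultdict[str, int] = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["action"] == "download":
                totals[row["user"]] += int(row["bytes"])
    return {user: total for user, total in totals.items() if total > THRESHOLD_BYTES}

# Usage: flag_bulk_downloads("access_log.csv") might return {"jdoe": 7516192768}
```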

Courts look more favorably on trade-secret plaintiffs who can show concrete, reasonable protection steps, not just “everyone signed an NDA once.”


🚶 For Employees: Leaving With Your Brain, Not Their Secrets

From the employee side, a practical way to think about it:

Green zone – usually safe

  • Selling your skills and reputation to a competitor or new employer.
  • Re-using generic coding patterns, design practices, and AI prompting techniques that are widely known.
  • Referring to public documentation, OSS code, conference talks, and your own open-source work.

Yellow zone – needs caution and usually permission

  • Keeping copies of internal documentation you authored (design docs, evaluation reports).
  • Using internal prompts and templates at your next job, even if you wrote them.
  • Re-implementing distinctive workflows from memory where you know the company considered them secret.

Red zone – high litigation risk

  • Downloading internal datasets, fine-tuned models, or vector DBs “for your portfolio.”
  • Exporting internal AI pipelines, scripts, or proprietary prompts to personal GitHub.
  • Feeding a new employer’s AI with detailed descriptions of confidential systems or business logic from your prior employer.
  • Ignoring explicit policies that say “don’t put X into external AI tools” and doing it anyway.

Rule of thumb:

If your prior employer would be shocked to see a screenshot of what you’re using at your new job, you’re likely in the red zone.


✅ Quick Checklist: “Head vs Model” Risk Triage

For employers:

  • Have we mapped our AI-related trade secrets (data, models, pipelines, prompts)?
  • Do our NDAs and policies explicitly cover AI tools and artifacts?
  • Are we treating AI as a complement to, not a substitute for, sound trade secret hygiene?
  • Do our exit processes specifically address models and datasets, not just laptops and badges?

For employees:

  • Can I explain, in one sentence, why each AI-related asset I’m using at my new job is either public, mine, or licensed?
  • Am I relying primarily on skills and patterns, or on copied artifacts (code, data, models) from my last role?
  • If asked under oath, could I comfortably describe how I rebuilt my new system without using my old employer’s secrets?

AI doesn’t change the underlying rule:

  • You own your skills and experience.
  • Your employer owns its confidential assets, whether they live in PDFs, Postgres, or parameters.

The hard — and increasingly litigated — cases will sit in the gap between “what you know” and “what you encoded,” especially when that encoding looks like magic black-box AI instead of a dusty binder.

That’s where thoughtful drafting, realistic policies, and disciplined exits matter far more than one more page of noncompete boilerplate.
