The advent of large language models like ChatGPT and image generators like DALL-E 2 has sparked tremendous excitement about the possibilities of generative AI. Trained on vast datasets, these systems can now produce remarkably coherent text, computer code, and images from short text prompts. However, alongside the immense promise of this technology, its rapid proliferation also raises pressing legal questions that users need to understand.
One critical issue is reviewing the end-user license agreements (EULAs) and terms of service that govern the use of commercial generative AI platforms. These complex legal agreements define the rights and responsibilities relating to how these AI systems may be utilized. With generative models now being incorporated into consumer products and business operations, it is essential for both individuals and organizations to closely analyze the fine print governing these tools.
This in-depth guide examines key considerations relating to common provisions in generative AI EULAs, pitfalls to avoid, and best practices for risk management.
Breaking Down Key EULA Provisions
Generative AI EULAs can vary substantially in their terms depending on the provider, product tier (free vs. paid), and other factors. However, there are several common provisions that users should pay close attention to:
Usage Rights for User Prompts
A core aspect of generative AI involves users providing text, code, or image prompts to the system. The EULA will typically specify what rights the provider has to utilize this user-generated content. For example, many EULAs grant the provider broad rights to use or sublicense user prompts to improve the underlying AI models and develop new products and services.
However, some paid enterprise versions may allow customers to prohibit or limit the use of prompts for model training. Opt-out clauses may also be present in certain consumer EULAs. Users need to closely review whether their prompts can be exploited by the provider or third parties. This can have serious implications in cases where prompts contain sensitive information, trade secrets, or personal data.
Ownership of Generated Outputs
The question of who owns the outputs produced by generative AI systems remains a major gray area. Some platforms assert that all content created by the AI is owned by the user. Others claim full ownership over generative outputs as derivative works of the underlying model. More commonly, EULAs grant the user a broad license to utilize the content while the provider retains underlying rights.
Crucially, the legal enforceability of these EULA provisions on AI output ownership is untested. Outputs that infringe copyrights, or that are indistinguishable from those created by other users, may not legally belong to anyone. Until statutory law catches up, the prudent course is to assume outputs carry significant ownership risks.
Restrictions on Output Usage Rights
While users may be granted a license to generative outputs, EULAs often impose major restrictions on how this content can be used. Some common limitations include:
- Barring commercial use of outputs, or limiting commercial applications to paid tiers.
- Prohibiting sharing of generated outputs beyond personal use.
- Requiring attribution to the AI provider if outputs are publicly disseminated.
- Restricting usages that violate laws or third party rights.
Users need to diligently confirm whether any limitations apply to their intended applications. Violating usage restrictions could result in legal action by the provider or other injured parties.
Liability Limitations and Disclaimers
It is vital to recognize that generative AI EULAs typically disclaim all warranties regarding the quality, accuracy, and legality of system outputs. Providers will limit legal liability via clauses like:
- Stating the user assumes all risks from relying on AI-generated content.
- Requiring users to manually review outputs before further use.
- Disclaiming provider liability for any damages stemming from inaccurate, offensive or illegal outputs.
While understandable from the provider’s perspective, these limitations shift a heavy burden onto users. Companies integrating generative AI into products or services need rigorous human review and approval workflows to mitigate their own liability risk exposure.
Indemnification Clauses
Many EULAs will include strong indemnification clauses protecting the provider in the event claims are brought over problematic outputs. These provisions may require the user to cover legal costs, damages, and settlement fees if third party rights are violated. Indemnification requirements provide further motivation to thoroughly vet any outputs before external use.
Data Privacy and Security
As user prompts can potentially contain personal information, generative AI EULAs typically incorporate data protection provisions. However, these differ widely between providers. Some key questions include:
- Does the provider allow personal data to be input? Many advise against this.
- How is sensitive user data secured and accessed?
- Are prompts containing personal data used for model training?
- Does the provider allow reviewing or deleting historical prompts?
- Has the provider undergone third party security audits?
Understanding the applicable data practices is critical, especially given the rise of stringent privacy regulations like the GDPR and CCPA.
Red Flag Situations to Avoid
While deciphering the legalese in EULAs can be challenging, there are certain red flag provisions that should prompt greater scrutiny or caution:
- Sweeping rights of the provider to exploit user prompts for any purpose.
- Vague limitations on how outputs may be used commercially.
- Complete liability disclaimers for any harms arising from inaccurate outputs.
- Requirements to fully indemnify the provider against third party claims.
- Lack of robust data protections for personal information in prompts.
- Frequent changes to the EULA terms and conditions over time.
If the EULA presents warning signs like these, deeper scrutiny is warranted. Higher-risk uses of the system may need to be avoided entirely.
Best Practices for Managing Risks
Navigating the complexities of generative AI EULAs requires proactive risk management by users:
Carefully Review EULAs – Do not gloss over the fine print. Have legal counsel analyze risks, restrictions, and liability allocation.
Limit Sensitive Prompts – Avoid inputs with confidential data, personal info, trade secrets.
Anonymize Prompts – Scrub prompts of identifiers to reduce data privacy risks (see the sketch after this list).
Review Outputs Diligently – Stringently check for accuracy, illegal content, IP infringement.
Use Restricted Outputs Prudently – If commercial use is limited, constrain sharing accordingly.
Isolate from Public Use – Keep AI outputs separated from outward-facing applications.
Establish Workflows – Institute formal review and release processes for outputs.
Train Employees – Educate staff on proper AI prompt practices and output evaluation.
Consider Custom Models – For high-risk cases, discuss customized systems with limited training data.
Negotiate EULAs – Larger customers may be able to bargain on certain provisions.
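To make the "Anonymize Prompts" practice concrete, here is a minimal sketch of a regex-based scrubber that redacts common identifiers before a prompt leaves your environment. The patterns and placeholder labels are illustrative assumptions only; production anonymization typically layers named-entity recognition on top of simple patterns like these.

```python
import re

# Illustrative patterns only -- real PII detection needs broader coverage
# (names, addresses, account numbers) and usually an NER model as well.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace recognizable identifiers with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(scrub_prompt("Draft a reply to jane.doe@example.com, phone 555-123-4567."))
# -> Draft a reply to [EMAIL REDACTED], phone [PHONE REDACTED].
```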
With proper understanding of EULA risks and prudent safeguards in place, organizations can tap into generative AI while avoiding costly legal pitfalls.
The Future of Generative AI EULAs
As generative AI continues proliferating in the months and years ahead, the EULAs governing these systems will keep evolving. We can expect providers to increasingly tune EULA terms to balance greater user protections against their own desire for flexibility and limited legal exposure. Key developments to watch for include:
- More providers allowing opt-out of prompt usage for model training, particularly in paid tiers.
- Finer granularity in commercial usage restrictions – less binary allow/block approaches.
- More explicit requirements for human review before dissemination of outputs.
- Enhanced data privacy controls and third-party auditing of security practices.
- Greater liability limits regarding safety-critical applications like drug discovery or self-driving vehicles.
- Requirements to use outputs “responsibly” without infringing third party rights.
- Provisions addressing evolving regional regulations for AI systems.
Ultimately, the central tension remains balancing the incredible potential of generative AI against its risks. As the technology improves and becomes more pervasive, EULAs will have to strike an equilibrium that protects providers, users and the broader public interest. Those using these systems in the meantime must tread carefully and ensure full understanding of the applicable terms, limitations and liabilities. With an ounce of prevention and prudent legal review, organizations and individuals can tap into the tremendous upsides of generative AI.
FAQ
What are some key generative AI EULA provisions relating to intellectual property rights?
Generative AI EULAs contain important IP considerations beyond just output ownership. Key provisions may cover:
- User warranties – Users typically must represent that their prompts don’t infringe others’ IP rights. This creates risk if prompts incorporate copyrighted material.
- Moral rights – Some EULAs grant providers a license to modify or sublicense outputs without user approval. This may raise moral rights issues in jurisdictions with robust creator protections.
- Attribution – Providers often reserve the right to credit themselves as a contributor to generated outputs. This could dilute users’ IP claims.
- DMCA policies – EULAs outline processes for submitting take-down notices if outputs infringe IP rights. However, these safe harbors are untested for AI systems.
- Non-transferability – Licenses to use outputs may be non-transferable, limiting IP monetization. New outputs may need to be generated for third parties.
Overall, current legal uncertainties around AI copyright, inventorship and attribution mean generative outputs likely exist in an IP gray zone. EULA terms tilt heavily in favor of providers until these issues achieve greater legislative and judicial clarity.
What are some key things to look for in AI privacy policies?
Review AI privacy policies for:
- Types of user data collected, especially sensitive categories requiring consent.
- Purposes for collecting and processing personal data. Avoid broad, vague uses.
- Legal bases claimed for processing like consent, contract necessity, legitimate interests.
- Provisions allowing reviewing, exporting, deleting or objecting to data use.
- Data minimization and retention limits, ensuring no indefinite storage.
- Anonymization and aggregation methods used to protect raw user data.
- Data transfer and third party sharing details, including locations.
- Security controls like encryption, access controls, audits.
- Complaint and data breach processes.
- Compliance with regional laws like GDPR, CCPA or sector-specific regulations.
Robust AI data privacy practices provide assurance and reduce compliance risks for users.
Can copyright protect specific AI-generated outputs like articles, images or code?
Copyright of AI outputs remains legally uncertain, but may apply if:
- Outputs reflect sufficient human creativity in underlying datasets, training processes, prompts or selection.
- AI is a tool assisting a human creator to produce the work rather than autonomously generating it.
- Outputs do not too closely resemble public domain works.
- Providers do not claim expansive rights to all system outputs in Terms of Service.
- Outputs are filtered, selected, edited, compiled or arranged by humans exercising creative choices.
While still a gray area, copyright provides the strongest current protections for original AI artifacts – especially those exhibiting the clear imprint of human authorship.
Do bots or digital avatars require end user license agreements?
Bots and digital avatars likely require EULAs covering:
- Permitted uses and prohibited activities when interacting with the bot/avatar.
- Collection, usage and sharing of user data submitted to the bot/avatar.
- Monitoring of interactions for quality control, security and compliance.
- Responsibility for securing credentials and preventing unauthorized access.
- Ownership of user-submitted content and bot responses.
- Requirements to keep interactions legal, ethical and non-offensive.
- Limitations on downloading, extracting or sharing bot data and responses.
- Disclaimers for bot reliability, accuracy and completeness of information provided.
- Liability limits and indemnification for bot misuse or flaws.
Well-crafted bot EULAs and terms of use can mitigate the unique risks posed by conversational interfaces and digital character interactions.
Can website terms of service prohibit scraping of public site data?
Generally, websites can prohibit scraping in their terms of service, with some exceptions:
- If data is available without logging in or other access controls, scraping limits are questionable and rarely enforced.
- Private use of limited data may still be permitted under fair use protections in copyright law.
- Sites not wanting to be scraped should implement technical access controls and monitoring.
- Contract terms banning lawful activities may be viewed as unenforceable and anticompetitive.
- Scraping publicly available government site data likely remains permissible due to transparency laws.
- News sites, social networks and e-commerce marketplaces commonly forbid scraping in their ToS.
For private sites, ToS bans on bulk copying provide legal recourse absent other technical defenses – but public data is harder to protect contractually.
Can licenses like Creative Commons apply to AI-generated content?
Possibly, but CC licenses involve some unresolved issues:
- Attribution may be impossible if an AI system lacks identifiable human creators to credit.
- Copyleft provisions requiring sharing derivatives may be unworkable if outputs are ephemeral or incorporated into commercial systems.
- AI creators may lack standing to apply a CC license if ownership rights are unclear or reside with the AI platform provider.
- Automated creation challenges concepts like moral rights that influence CC license enforceability.
- Applying a “non-commercial” CC license to corporate-produced AI outputs creates ambiguity.
Creative Commons licenses provide less certain protections for AI works given ongoing legal uncertainties. Custom licenses may better serve needs of AI creators and users.
What types of problematic data usage rights might an AI provider assert in its privacy policy?
- “We may process or aggregate your personal data into anonymized forms and indefinitely retain such data for any purpose.”
- “We may share your data with unspecified third parties for broadly defined business purposes not limited to providing the service.”
- “You hereby grant us a perpetual license to all your personal information for any use consistent with providing, improving and promoting our services.”
What kind of unfavorable output licensing terms could an AI generator impose in its EULA?
- “All text, media and other content generated by the platform belong to and are solely owned by the provider.”
- “Users have a limited, non-exclusive, non-transferable license to view AI outputs only within the platform interface itself.”
- “The provider may sublicense, sell, or otherwise commercialize any AI-generated content.”
What types of broad protections might an AI chatbot service include in its terms?
- “We make no warranties regarding the accuracy, reliability, completeness or timeliness of any information provided by the chatbot.”
- “Users irrevocably waive any right to pursue claims against the provider related to chatbot malfunction or flaws.”
- “The provider bears no liability for any damages suffered by individuals or entities interacting with our chatbot service.”
Can an AI system’s outputs be patented if it was not designed to invent?
The patentability of AI-generated inventions remains uncertain:
- US and European patent offices still require human inventorship on applications.
- However, if an AI system was not specifically designed to invent, attributing inventorship to the developers may also pose legal risks.
- Absent legislative changes, AI outputs likely necessitate human selection, refinement and claiming to meet inventorship standards.
- Marking AI outputs as “Patent Applied For” or “Patent Pending” may strengthen rights and provide notice, even if ultimate patentability is unclear.
- Trade secret protection may be an alternative if inventions have commercial value but patent eligibility barriers exist.
While AI promises to accelerate discovery, fully autonomous AI inventions still reside in a patent law gray area today.
Can text and data mining of copyrighted works for machine learning constitute fair use?
Courts are evolving on this issue, but several factors suggest possible fair use defenses for text and data mining:
- Transformative purpose of extracting data for AI training versus human reading.
- Extracted excerpts represent a small portion of the overall work.
- Non-commercial research and educational purposes.
- No direct market harm to the original work’s market value.
- Social utility of enhancing AI functionality.
However, systematic automated extraction of entire datasets without permission may still exceed bounds of fair use – especially for commercial usage. Rights-holder consent is safest if feasible.
What are best practices for AI ethics review boards at companies?
Best practices for corporate AI ethics review boards include:
- Diverse membership across departments like legal, compliance, product, engineering.
- Inclusion of external experts and civil society advocates as members.
- Transparent standards and methodologies for assessing AI risks.
- Focus on difficult cases most likely to cause broad harm.
- Willingness to advise against product releases where harms outweigh benefits.
- Formal charters and processes to ensure independence from business pressures.
- Authority to access data, systems, and staff needed to conduct reviews.
- Reporting mechanisms and recommendations that have influence on policies and priorities.
Structured oversight, multidisciplinary expertise, transparency, and organizational buy-in enable ethics boards to enact meaningful accountability.
What penalties apply if required AI system notices and disclosures are not provided?
Insufficient AI notices and disclosures can lead to:
- Federal Trade Commission enforcement under the FTC Act for deceptive practices.
- State consumer protection suits for misleading representations on capabilities.
- Product liability lawsuits if lacking safety warnings contribute to injuries.
- Americans with Disabilities Act (ADA) claims regarding accessibility barriers.
- Equal Employment Opportunity Commission (EEOC) disparate impact investigations if AI hiring tools lack required notices.
- Nullification of consents for data collection or use that lacked requisite disclosures.
- Cease and desist orders if AI annotations, descriptions or captions omit attribution.
Depending on context, inadequate AI disclosures can violate laws against unfair and deceptive practices, employment discrimination, disability access, consent requirements, and plagiarism.
Should our legal team review every single content output from a generative AI system?
Reviewing every output from a generative AI system is likely impractical and unnecessary in most cases. Instead, consider these best practices:
- Establish criteria for more intensive reviews, e.g. outputs used in public materials, published externally, or incorporated into products/services.
- Spot check a sample of outputs periodically to monitor the system’s overall quality and risk profile.
- Follow statistical sampling principles and review larger samples of outputs used in higher-risk applications.
- Focus reviews on legally problematic or dangerous content like defamation, copyright infringement and hate speech.
- Have subject matter experts from affected departments review outputs related to their domains.
- Consider automated approaches like text classification algorithms to flag potentially risky content for human review (a minimal sketch follows this list).
The goal is pragmatic risk management rather than exhaustive output reviews. Targeted assessments guided by metrics, random sampling and automation can provide reasonable legal and quality assurance without excessive manual oversight.
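As a concrete example of the automated flagging approach in the list above, the sketch below routes outputs containing risky terms into a human review queue. The term lists and categories are placeholder assumptions; a real deployment would use a trained classifier or a moderation service rather than keyword matching alone.

```python
# Minimal triage sketch: keyword heuristics decide which outputs need
# human review. RISK_TERMS is an illustrative assumption -- production
# systems would use a trained text classifier or moderation service.
RISK_TERMS = {
    "defamation": ["fraudster", "criminal", "liar"],
    "ip": ["lyrics", "verbatim excerpt", "reproduced from"],
    "harm": ["weapon", "exploit", "self-harm"],
}

def triage_output(text: str) -> list[str]:
    """Return the risk categories an output should be reviewed under."""
    lowered = text.lower()
    return [
        category
        for category, terms in RISK_TERMS.items()
        if any(term in lowered for term in terms)
    ]

flags = triage_output("The CEO is a known fraudster who ...")
if flags:
    print(f"Route to human review: {flags}")   # ['defamation']
else:
    print("Eligible for random spot-check sampling only")
```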
Should we require employees to undergo training before using generative AI systems?
Yes, requiring at least basic training is a recommended best practice before expanding employee access to generative AI, for several reasons:
- Guards against improper data practices like inputting confidential or personal info.
- Reinforces importance of securing AI outputs and not disseminating before review.
- Reduces IP infringement risks from careless prompting practices.
- Makes employees aware of key EULA restrictions on how outputs may be used.
- Ensures employees know proper channels to vet outputs before external sharing.
- Outlines processes to anonymize training data and remove sensitive metadata.
- Provides basics of how to craft effective prompts to maximize utility and minimize risks.
- Helps identify use cases that warrant additional legal, compliance and security reviews.
Even short onboarding training equips employees to use generative AI responsibly and align practices with organizational policies. Ongoing education reinforces safe and ethical AI development.
What risks are introduced if our developers directly integrate generative AI models into our products?
Tightly coupling generative AI models with commercial products creates several challenges:
- Product owners may lose discretion to scrutinize/filter problematic outputs before release.
- Can be difficult to satisfy EULA restrictions like attribution or non-commercial use.
- Potential for infringing activity if outputs automatically populate into products.
- Outputs may expose training data biases or quality issues to customers.
- Could enable uses violating data privacy, export control or accessibility laws.
- Integration work may create derivative works of the AI model, limiting commercialization.
Isolating generative models behind APIs and keeping humans in the loop can mitigate these risks; a sketch of such a review gate follows below. If integrating models directly, extensive testing for edge cases, security vulnerabilities, and legal compliance is essential.
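Here is one minimal way to implement that isolation: a review gate that buffers everything the model produces until a human approves it, so generated text never flows straight into the product. The `generate_fn` parameter is a stand-in assumption for whatever call your provider's client library actually exposes.

```python
import queue
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class PendingOutput:
    prompt: str
    text: str
    approved: bool = False

class ReviewGate:
    """Buffers model outputs until a human approves them for release."""

    def __init__(self, generate_fn: Callable[[str], str]):
        # generate_fn stands in for the vendor's API call -- an assumption,
        # not a reference to any specific provider's client library.
        self.generate_fn = generate_fn
        self.pending: "queue.Queue[PendingOutput]" = queue.Queue()

    def request(self, prompt: str) -> None:
        # Generated text lands in the queue, never directly in the product.
        self.pending.put(PendingOutput(prompt, self.generate_fn(prompt)))

    def review_next(self, approve: bool) -> Optional[PendingOutput]:
        if self.pending.empty():
            return None
        item = self.pending.get()
        item.approved = approve
        return item  # Only approved items should reach user-facing surfaces.
```

In production the same pattern extends to a persistent queue and a reviewer interface, but the invariant is identical: nothing ships without an approval flag.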
What are some alternatives if we are uncomfortable with the terms of a provider’s EULA?
If the risks of a particular generative AI provider’s EULA outweigh the benefits, there are alternatives to consider:
- Custom Enterprise Models – Work directly with an AI vendor to train custom models on permissible data for dedicated use by your organization.
- Self-Hosted Models – Download and deploy open-source models like GPT-Neo on private infrastructure with full control.
- Federated Learning – Pool training data from other trusted organizations to develop shared models collectively.
- Differential Privacy – Employ techniques that train models on aggregated statistics while adding calibrated noise to obscure individual records (see the sketch after this list).
- Synthetic Data – Utilize artificially generated, property-preserving dummy data for training models.
- Model Audit Rights – Negotiate contractual rights to audit models for bias, infringement or other issues.
- Restricted Data Use – Prohibit re-training on new data and lock models to original training datasets.
Each approach involves trade-offs. But for risk-averse entities like government agencies and regulated industries, these alternatives can provide greater control, security and legal compliance.
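To make the differential-privacy option above concrete, the sketch below releases an aggregate count with calibrated Laplace noise so that no single individual's presence in the data can be confidently inferred. The epsilon value is an illustrative assumption, not a recommendation; real deployments also track a cumulative privacy budget across all queries.

```python
import numpy as np

def dp_count(n_records: int, epsilon: float = 1.0) -> float:
    """Release a count with Laplace noise calibrated to sensitivity 1.

    Adding or removing one individual's record changes a count by at
    most 1, so noise with scale 1/epsilon masks any single record.
    """
    return n_records + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g. share how many customers triggered a feature without exposing anyone
print(dp_count(1042, epsilon=0.5))
```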
What obligations may arise if our company modifies or fine-tunes a commercial generative AI model for our internal uses?
Modifying or fine-tuning a generative AI model can create legal obligations, including:
- If the adjusted model retains any original training data or code, the provider’s copyrights may still apply. Additional licensing may be required.
- Making significant enhancements to the model could make your company a legal co-owner, limiting rights to commercialize or redistribute the derivative model.
- Improvements to the model may need to be open-sourced or granted back to the provider under some open AI licenses.
- Adjusting models in ways that remove ethical constraints or introduce societal harms raises reputational and regulatory risks.
- Customization work may still fall under the scope of the original model’s EULA terms unless superseded by a new agreement with the provider.
Ideally, formalize permissions to modify commercially licensed models in a new EULA tailored for derivative works. Assume original ownership interests remain.
Can we use third-party copyrighted material as input prompts for generative AI systems?
This is a legal gray area. While inputting short extracts for prompting likely constitutes fair use, longer verbatim passages could raise infringement risks, especially for external distribution. Safest options:
- Prompt using only original, non-copyrighted content you have permission to use.
- Take inspiration from copyrighted material to create original prompts conveying the same underlying idea.
- Ensure no substantial similar expression from copyrighted content appears in generated outputs.
- Limit extracts to the minimum needed to guide the AI output for internal use only.
Strong fair use arguments exist since prompts have transformative purpose for AI functionality. But proceed cautiously until case law establishes clearer boundaries.
What data retention and access policies should we require of generative AI providers?
Consider requiring that providers adopt these prompt data practices:
- Disclose locations where prompt data is stored and processed. Avoid high-risk jurisdictions.
- Specify prompt data access and usage policies. Restrict to core service operations.
- Allow users to review, export and delete historical prompt data upon request.
- Implement stringent access controls for staff handling user prompts.
- Encrypt prompt data end-to-end during storage and transmission (see the sketch after this list).
- Undergo regular third-party audits of data practices and cybersecurity controls.
- Prohibit selling, sharing or reusing prompt data beyond contracted purposes.
- Anonymize or minimize user data via aggregation, truncation, pseudonymization, etc.
- Support data deletion and retention schedule policies of enterprise users.
Having contractual guarantees on strong data protections provides assurance given the sensitivity of prompt information.
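As an illustration of the encryption item above, the sketch below uses the `cryptography` package's Fernet recipe to encrypt a prompt before it is persisted. Key management (rotation, per-tenant keys, HSM storage) is deliberately omitted; in practice that, not the cipher call, is the hard part.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# In production the key would come from a KMS or HSM,
# never be generated inline next to the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

prompt = "Summarize the attached M&A term sheet for the board."
token = cipher.encrypt(prompt.encode("utf-8"))  # safe to persist or transmit

# Only holders of the key can recover the original prompt text.
assert cipher.decrypt(token).decode("utf-8") == prompt
```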
What should our employee code of conduct include regarding use of generative AI systems?
A code of conduct for employees using generative AI should include:
- Prohibition on inputting company confidential data, customer information, or personal employee details.
- Required reviews for security, privacy, accuracy and compliance before broader sharing of outputs.
- Rules against utilizing outputs to generate illegal, dangerous or unethical content.
- Attribution guidelines for giving proper credit to AI systems used.
- Restrictions on commercial usage or external distribution of outputs based on EULA terms.
- Mandatory training completion before access granted to approved employees.
- Mechanisms for anonymously reporting policy violations.
- Consequences for violations such as system usage suspension, employment termination, and legal action.
Clear acceptable use policies enhance alignment with legal terms, ethics and corporate values. They provide guardrails as adoption of generative AI expands.
What steps can we take to legally protect our unique prompts?
Unique prompts that are vital IP may warrant these protections:
- Register prompts as copyrighted works if they meet originality thresholds.
- Explore patentability of highly inventive prompts enabling valuable outputs.
- Assert trade secret protection for prompts providing competitive advantage.
- Require contract terms forbidding prompt sharing and derivatives by the provider.
- Limit employee prompt access and secure like other IP assets.
- Embed digital watermarks or fingerprints in prompts to detect unauthorized usage (see the sketch after this list).
- File DMCA takedowns if prompts are stolen and reused without permission.
However, strong legal protections for prompts remain challenging. A layered strategy combining contracts, access controls, secrecy and monitoring prompt misuse can provide reasonable security.
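For the fingerprinting idea above, one lightweight starting point is to register a keyed hash of each valuable prompt, which later lets you show a leaked prompt matches your registry without disclosing the prompt itself. This is a minimal sketch under that assumption, and it detects only verbatim (normalized) copies, not paraphrases.

```python
import hashlib
import hmac

SECRET = b"org-fingerprint-key"  # hypothetical key -- keep in a secrets manager

def fingerprint(prompt: str) -> str:
    """Keyed hash of a whitespace-normalized prompt for registry lookups."""
    normalized = " ".join(prompt.lower().split())
    return hmac.new(SECRET, normalized.encode("utf-8"), hashlib.sha256).hexdigest()

registry = {fingerprint("Our proprietary 12-step underwriting analysis prompt ...")}

def is_ours(candidate: str) -> bool:
    # Exact-match detection only; paraphrased copies will evade hashing.
    return fingerprint(candidate) in registry
```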
What risks arise if we allow employees to freely share AI outputs externally?
Letting employees share AI outputs without controls raises risks like:
- Policy violations if restricted outputs are shared publicly.
- IP infringement if unvetted outputs contain copyrighted material.
- Harmful content spreading if offensive outputs aren’t blocked.
- Leaks of confidential data if private prompts are reused.
- Misinformation if false outputs aren’t caught.
- Dilution of proprietary data used to train custom models.
- Reputational damage if low-quality or unethical outputs reflect poorly.
Establish formal workflows for reviewing outputs before approving external sharing. Educate employees on responsible AI use. Institute monitoring systems to catch policy breaches.
Should startups worry about EULA terms if using free generative AI models?
Yes, startups should still carefully review EULAs for free models given:
- Prompts may improve models and increase provider’s competitive advantage.
- Restrictions on commercial applications can limit future monetization.
- No contractual leverage to negotiate more favorable terms.
- Heightened risks from unvetted outputs if no legal review.
- Limited resources to mitigate damages from EULA violations.
- Reputational harm if public outputs are problematic.
- No special protections for sensitive user data in prompts.
Understand the risks, limit sensitive prompts, manually review outputs, and develop compliant processes aligned with usage terms. Weigh risks against benefits before adopting.
What insurance policies are relevant to cover generative AI usage risks?
Key insurance policies to mitigate generative AI risks:
- Cyber liability insurance – Covers data breaches, cyber crimes and privacy violations stemming from compromised user prompts.
- Errors and omissions insurance – Protects against third party claims alleging harm from inaccurate AI outputs.
- Media liability insurance – Provides coverage for copyright infringement, defamation, or plagiarism committed via AI systems.
- Technology E&O insurance – Broader protection including liabilities tied to software products integrating AI models.
- D&O insurance – Shields corporate leadership from shareholder lawsuits alleging irresponsible AI use.
Insurance offers another layer of financial protection and risk transfer for enterprises adopting generative AI. Tailored policies are advisable given unique AI coverage needs.
What steps should we take if our developers need to access proprietary third-party AI models?
If developers will use proprietary third-party AI models, consider these steps:
- Review licensing terms and restrictions carefully, including any confidentiality obligations.
- Ensure developers understand and follow permitted uses of the model – e.g. only for evaluation.
- Implement stringent access controls and usage monitoring systems to prevent misuse (a logging sketch follows this list).
- Prohibit attempts to probe, reverse engineer, or extract model architectures and weights.
- Prevent developers from taking screenshots, notes or derivatives that could enable replicating the model.
- Conduct background checks/screening for developers working with highly proprietary models.
- Require developers to sign NDAs spelling out handling procedures and restrictions for the model.
- Confirm no competitive information is inadvertently retained or reused after model access ends.
With sufficient technical precautions and legal agreements in place, third-party model usage risks can be reasonably managed.
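To support the access-control and monitoring steps above, a simple pattern is to wrap every call to the licensed model in an audit-log entry recording who called it and when. The decorator below is a generic sketch; `call_licensed_model` and the developer identity are hypothetical placeholders, not a vendor API.

```python
import functools
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

def audited(user_id: str):
    """Decorator recording each licensed-model call for later review."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(prompt: str, *args, **kwargs):
            audit_log.info(
                "user=%s time=%s prompt_chars=%d",
                user_id, datetime.now(timezone.utc).isoformat(), len(prompt),
            )
            return fn(prompt, *args, **kwargs)
        return inner
    return wrap

@audited(user_id="dev-042")      # hypothetical developer identity
def call_licensed_model(prompt: str) -> str:
    return "<model output>"      # stand-in for the vendor's client call

call_licensed_model("Evaluate model behavior on a benign test input.")
```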
Should we have employees expressly consent and agree to our internal generative AI policies?
Seeking express consent is advisable to ensure employees understand obligations when using generative AI systems, including:
- Requiring acknowledgement of policies through signatures or system log-in prompts.
- Maintaining records of consent as evidence of notice given.
- Updating consents upon changes to data practices, output reviews or usage terms.
- Conducting periodic re-training and consent refreshers.
- Linking employee agreements to codes of conduct, standards and EULAs.
- Spelling out consequences for violations of policies.
- Securing consent to internal monitoring and audits of generative AI system usage.
- Obtaining consent to data processing activities involving employee information.
Express, documented consent ensures policies are enforced consistently across the organization and supports compliance audits.
What risks arise if generative AI outputs do not contain copyright notices?
Failing to include copyright notices on generative AI outputs raises risks like:
- Weaker evidence to assert copyright protections in infringement disputes.
- Enabling plagiarism if reposters claim ignorance on source.
- Forfeiting the advantage of deterring infringement through visible copyright marking.
- Allowing competitors to exploit outputs without realizing their origins.
- Dilution of output value by limiting traceability back to the owner.
- Relinquishing the option to use copyright notices to track violations.
- Creating roadblocks to issue DMCA takedown requests if ownership is unclear.
Proper copyright marking establishes ownership, deters infringement, preserves legal options, and maintains asset value. The minimal effort required is outweighed by the benefits.
What should we do if we suspect another company is violating the usage terms for a generative AI system?
If another organization seems to violate usage terms for a generative AI system, potential responses include:
- Gather evidence of violations to make a credible case.
- Notify the provider of your suspicions and let them investigate and enforce EULA terms.
- If violations are harming your products or customers, send a cease and desist letter.
- For clear copyright infringement enabled by their system, file a DMCA takedown request.
- If violations continue, consider legal action against the entity for infringement, breach of contract or unfair competition.
- Report egregious violations impacting public safety or disadvantaged groups to regulatory agencies.
- Shame or blacklist flagrant violators publicly if other actions fail to address the harms.
Appropriate responses will depend on the norms being violated and degree of harm caused. Providers, regulators and the legal system may help curb abuses.