The FTC Sets Its Sights on AI: What OpenAI’s Complaint Means for the Industry

Published: July 14, 2023 • AI, Software, ToU & Privacy

Main Focus

The Federal Trade Commission’s (FTC) recent investigation into OpenAI, the creator of the popular AI chatbot ChatGPT, marks a significant turning point in the regulation of artificial intelligence (AI) in the United States. This investigation, which focuses on potential violations of consumer protection laws, represents the most potent regulatory threat to OpenAI’s business in the U.S. to date and signals a new era of scrutiny for AI companies.

Specifically, the FTC alleges that ChatGPT sometimes generates “statements about individuals that are false, misleading, unsubstantiated, or otherwise likely to mislead users.” Furthermore, the agency argues that these inaccurate statements have the potential to cause “reputational harm” to individuals named or implicated in ChatGPT’s responses. The FTC also takes issue with OpenAI’s data privacy and security practices, claiming the company failed to protect sensitive user data in compliance with FTC rules.

The FTC is seeking monetary relief for consumers, changes to OpenAI’s practices, and ongoing monitoring to ensure compliance. The time period covered stretches from ChatGPT’s launch in November 2022 to the present day.

According to a recent report by TechCrunch, the FTC investigation was likely prompted by complaints of reputational damage caused by ChatGPT, such as an incident in which the chatbot falsely claimed an Australian mayor had been convicted of bribery and sentenced to prison. Defamation claims like that one could trigger further FTC scrutiny.

For the AI community, this complaint provides important lessons about the emerging risks of language models – and how regulators plan to scrutinize them. Here are three key takeaways:

Accuracy and Truthfulness

At its core, the FTC’s complaint centers on ChatGPT occasionally making false or misleading claims about real people. This could occur if ChatGPT generates offensive, dangerous, or abusive content by reproducing harmful patterns in its training data. It may also occur when ChatGPT “hallucinates” new information due to its generative nature, rather than staying strictly factual.

Either way, the FTC makes clear that AI systems must not spread misinformation or falsehoods about individuals. This expectation around accuracy and truthfulness will likely extend to all generative AI applications, not just ChatGPT.

As such, AI developers should double down on evaluating model outputs for potential falsehoods. Strategies may include manual review, screening training datasets for questionable sources, adding classifier models that flag risky outputs, and proactive filtering. Truthfulness should be a north star during training.
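As a rough sketch of the classifier-and-filtering approach, the Python snippet below gates a draft response behind a risk score before release. The `FalsehoodClassifier` and its keyword heuristic are purely illustrative placeholders; a production system would substitute a trained model and route flagged drafts to human review.

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    allowed: bool
    reason: str


class FalsehoodClassifier:
    """Hypothetical classifier scoring how likely a draft answer is to
    contain unsupported claims about real people (0.0 = low, 1.0 = high)."""

    RISKY_MARKERS = ("was convicted of", "was arrested for", "is guilty of")

    def score(self, text: str) -> float:
        # Toy heuristic stand-in; a real system would call a trained model.
        hits = sum(marker in text.lower() for marker in self.RISKY_MARKERS)
        return min(1.0, 0.5 * hits)


def gate_output(draft: str, classifier: FalsehoodClassifier,
                threshold: float = 0.5) -> Verdict:
    """Release low-risk drafts; hold risky ones for human review."""
    risk = classifier.score(draft)
    if risk >= threshold:
        return Verdict(False, f"held for review (risk={risk:.2f})")
    return Verdict(True, "released")


if __name__ == "__main__":
    clf = FalsehoodClassifier()
    print(gate_output("The mayor was convicted of bribery in 2012.", clf))
    print(gate_output("The mayor announced a new transit plan today.", clf))
```

The key design point is the gate itself, not the scoring heuristic: any downstream truthfulness model can be swapped in without changing how outputs are held back.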

Reputational Harm Matters

A key focus of the FTC complaint is how ChatGPT could damage the reputations and livelihoods of professionals, experts, and businesses by making unreliable claims about them. The agency takes the position that tools like ChatGPT should not have unchecked power to distort public perception.

There are a few key ways conversational AI chatbots like ChatGPT could potentially damage people’s reputations:

  • Making false or unsubstantiated claims about someone’s conduct or character, such as falsely stating that they committed a crime or transgression. Even if untrue, such claims can tarnish a person’s reputation when others rely on them.
  • Revealing private, sensitive, or confidential information without the subject’s consent. This could expose details that violate their privacy or portray them negatively.

This is a notable extension of the FTC’s unfair and deceptive practices standards, which have traditionally focused on financial injury to consumers. The FTC is signaling that AI-driven reputational damage is its own form of harm worth preventing.

For AI developers, protecting individuals’ reputations now looks to be a key consideration. Companies may need to implement stringent identity detection, blacklisting of names/entities, disclaimers, and ways to address mistaken outputs that unfairly cast aspersions. Striking the right balance between creativity and caution will be critical.
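To make the identity-detection and disclaimer ideas a bit more concrete, here is a minimal sketch of a pre-release check against a maintained blocklist of names. The blocklist contents, refusal message, and disclaimer wording are illustrative assumptions rather than recommendations of any particular policy.

```python
import re
from typing import Iterable

DISCLAIMER = ("Note: this response may reference real people; statements about "
              "individuals are generated text and may be inaccurate.")


def mentions_protected_name(text: str, protected_names: Iterable[str]) -> bool:
    """Whole-word, case-insensitive match against a maintained blocklist."""
    return any(
        re.search(rf"\b{re.escape(name)}\b", text, flags=re.IGNORECASE)
        for name in protected_names
    )


def apply_reputation_guardrails(text: str, protected_names: Iterable[str]) -> str:
    """Suppress claims about blocklisted individuals; otherwise append a
    disclaimer so readers know the content is unverified."""
    if mentions_protected_name(text, protected_names):
        return "I can't share unverified claims about that person."
    return f"{text}\n\n{DISCLAIMER}"


# Hypothetical blocklist, e.g. built from takedown or correction requests.
blocklist = ["Jane Doe", "John Q. Public"]
print(apply_reputation_guardrails("Jane Doe was convicted of fraud.", blocklist))
```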

Privacy and Security Rules Apply

The FTC complaint also alleges OpenAI fell short in protecting user data as required by Section 5 of the FTC Act. Specifically, it claims the company failed to put reasonable data security safeguards in place and violated commitments made in its privacy policy.

This serves as another reminder that AI companies have an obligation to safeguard user data, detect software vulnerabilities proactively, and implement robust cybersecurity measures. With AI systems ingesting more personal user information than ever before, the impacts of a breach could be even more acute.

Mishandling sensitive data or failing to respond adequately to bugs could violate Section 5 commitments around data security. It underscores why ongoing audits and maintaining best practices around access controls, encryption, and patching are so important in the AI sector. Adopting privacy and security by design principles when developing new systems is highly recommended.

Overall, between the alleged issues with ChatGPT’s truthfulness and the cited security concerns, the FTC complaint paints a picture of lax protections around AI users’ information and rights. Tighter accountability in these areas appears imminent not just for OpenAI, but the generative AI field overall.

Looking Ahead…

While OpenAI has disputed the FTC’s allegations, the complaint nonetheless signals a new era of regulatory scrutiny for AI. We are likely to see increased focus on safety, accuracy, bias mitigation, and consumer protection as AI becomes more ubiquitous.

Done right, this oversight could help the AI industry earn public trust and align AI decision-making with human values. But it also creates new diligence requirements for companies developing AI applications.

Similar to the early days of regulating the Internet, regulators currently face a steep learning curve when it comes to AI technology. As such, expect some standards to remain fluid as realistic guardrails are established.

Overall, the FTC’s complaint provides a timely case study for AI developers on the types of potential harms regulators are monitoring. While details may evolve, the overarching themes of accuracy, reputation, privacy, and security are here to stay. Investing in responsible AI practices in these areas is the wise path forward.

FAQs: Digging Deeper Into the OpenAI Complaint

What are some key ways this FTC complaint could impact the AI industry as a whole?

The ramifications of this complaint could be far-reaching for AI companies. It puts them on notice that generative models must meet certain accuracy, privacy, and ethical standards to be lawful. While the details may still be in flux, this signals that human oversight, content moderation, robust bias testing, and protecting individuals’ reputations are now regulatory priorities.

We should expect increased rigor around vetting training data sources, filtering outputs, adding guardrails where needed, and enhancing transparency for users. There will likely be ripple effects leading to more investment in responsible AI practices industry-wide. Though specifics are TBD, the FTC is making clear that developers must carefully consider potential harms when building AI systems.

Could this complaint set a precedent for regulating AI content more broadly?

It very well could. The FTC is flexing its authority to ensure AI does not unfairly distribute inaccurate or abusive information without accountability. This same reasoning could eventually apply to all kinds of AI systems that generate or distribute content – from virtual assistants to computer vision APIs.

We may see calls for certain large language models to be considered “information intermediaries” given their unprecedented reach. This could impose duties around addressing misinformation, preventing discrimination, allowing appeal of takedowns, and providing transparency around decisions. While regulating AI content remains tricky, this complaint lays groundwork for wider oversight if harms proliferate.

What are the free speech implications of regulating AI more strictly?

There are legitimate concerns around imprinting any specific worldview or limiting AI’s creativity. However, the FTC’s focus here is on clear falsehoods or privacy violations, not policing opinions. Protecting individuals from unambiguous reputational harm also seems reasonably tailored.

Nonetheless, policymakers should take care not to limit AI expression unduly in the quest for safety and accuracy. AI has potential to democratize information sharing and access if implemented thoughtfully. Outlining guardrails without chilling innovation will require treading carefully. But done properly, enhanced oversight could make AI solutions more inclusive for all.

How might this impact the roadmap for companies like Google, Meta, and Microsoft also developing large language models?

Major tech players exploring this space likely recognize increased scrutiny is coming for generative AI. This complaint reinforces the urgency of prioritizing safety, ethics, and human rights from the earliest stages of development.

We should see ripple effects prompting tech giants to invest even more in content moderation, filtering, bias testing, data ethics reviews, and protections against misuse. Regulatory pressure could also accelerate industry collaboration on best practices. Though burdensome in the short-term, nurturing public trust ultimately benefits everyone in the AI ecosystem. This FTC action is a wake-up call to design responsibly.

What are some key ways this FTC complaint could impact the progress of AI research overall?

This complaint signals that the era of rapid AI research without guardrails is coming to an end. We should expect regulatory pressure to usher in more cautious, safe-by-design approaches to developing new models. The FTC is making clear that AI capabilities alone will not justify deploying systems that have not been rigorously evaluated for potential harms.

In the short term, this may constrain researchers’ ability to freely publish models that could cause ethical issues in the wild. However, it incentivizes researchers to invest more deeply in areas like AI safety, auditing for fairness and biases, techniques for misuse prevention, and alignment with human values. Rather than chilling innovation entirely, this refocuses the field on responsible progress that earns public trust.

Longer term, striking the right balance between oversight and permissionless creativity in AI research will require continued trial-and-error. But putting ethical considerations on par with raw capabilities earlier in the development process can prevent much more heavy-handed regulation down the road. Overall, by steering researchers toward proactively addressing societal risks, this complaint could be a net positive for innovation if oversight is implemented judiciously.

What might be some best practices for addressing harm and misinformation in large language models?

A mix of human and technical interventions can help mitigate harm from language models:

  • Rigorous auditing during training/fine-tuning to identify biases, flaws in reasoning, and problematic outputs that require fixing before any public release.
  • Implementing classifiers to detect potential misinformation and flag it for human review before sharing widely.
  • Blacklisting certain dangerous or unethical uses, queries, or content categories from the outset.
  • Rate limiting application programming interfaces (APIs) to control virality and prevent abuse (see the token-bucket sketch after this list).
  • Conducting large-scale user studies on diverse populations to uncover potential issues.
  • Providing guardrails for users around fact-checking any sensitive information generated.
  • Moderating content post-release through a mix of automation and human reviewers.
  • Allowing for appeals of unfair takedowns and ensuring visibility into the process.
  • Enabling user reporting of harmful or false outputs to prioritize for remediation.

A combination of thoughtful engineering choices, enhanced transparency, and measured oversight during development and in production can help safeguard users while preserving AI’s tremendous potential.
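As one concrete example of the rate-limiting item above, a classic token-bucket throttle per API key can cap how quickly any single client can generate content. The capacity and refill rate shown are arbitrary illustrations, not values drawn from the FTC complaint or any provider’s actual limits.

```python
import time
from dataclasses import dataclass, field


@dataclass
class TokenBucket:
    """Token-bucket throttle: each request spends one token; tokens refill
    at `rate` per second up to `capacity` (values here are illustrative)."""
    capacity: float = 10.0                  # burst size
    rate: float = 1.0                       # tokens refilled per second
    tokens: float = 10.0
    last_refill: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# One bucket per API key; in production this state would live in a shared store.
buckets: dict[str, TokenBucket] = {}


def handle_request(api_key: str) -> str:
    bucket = buckets.setdefault(api_key, TokenBucket())
    return "generated response" if bucket.allow() else "429 Too Many Requests"


results = [handle_request("demo-key") for _ in range(12)]
print(results.count("429 Too Many Requests"))  # a few of 12 rapid requests throttled
```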

What are the free speech implications if large language models required more content moderation?

Moderating generative AI content does raise some concerns around censorship given these systems’ unprecedented reach. However, not all limitations equate to outright suppression of speech; context matters greatly. Thoughtfully defining unlawful versus merely objectionable content is key.

Banning clear falsehoods or privacy breaches falls more safely under consumer protections. But broader moderation based on subjective “offensiveness” could overstep if not narrowly tailored to protect tangible individual rights and safety. Restricting lawful opinions, satire, art, or marginalized voices could seriously undermine free expression.

Policymakers will need to modernize frameworks for online speech governance as AI capabilities evolve. Maintaining transparency around content decisions and allowing for appeals of unfair takedowns will be critical. Overall, the aim should be combating clear deception and harm while enabling the freest exchange of lawful ideas possible. With care, enhanced moderation practices for AI need not contravene First Amendment principles.

Should AI systems be held liable if they create defamatory content or violate privacy laws?

This remains a complex legal question. AI systems themselves lack sentience or intent, so ascribing legal culpability solely to them makes little sense. However, their creators arguably have a responsibility to restrict unlawful actions to the extent feasible. Developers must safeguard systems from generating illegal content through engineering choices, dataset screening, moderation, and other precautions.

Nonetheless, some liability likely needs to extend to companies deploying models irresponsibly. Victims of AI privacy breaches or defamation deserve a remedy, and wrongdoers deserve to be deterred. One approach may be establishing negligence standards around implementing reasonable safeguards proportional to the risks. However, strict liability on providers for all AI harms could chill innovation. Developing thoughtful frameworks to incentivize care without stifling progress will be key.

How might regulators oversee development of new AI systems to reduce the risk of harm?

Heavy-handed restrictions on innovation could backfire, so “trust but verify” may be the most prudent philosophy. Frameworks requiring external audits, risk assessments, and ongoing monitoring proportional to an AI system’s potential impact could help without derailing progress entirely.

Regulators also need adequate internal expertise to assess models and provide flexible guidance to developers. Fostering collaborative bodies including government, academia, and industry around AI best practices may be beneficial. Setting clear expectations upfront while retaining flexibility can encourage accountability without quashing discoveries before they emerge. Getting oversight right will involve continued learnings on all sides.

What is the best way to balance free speech guarantees with protecting individuals from AI-driven harm?

A measured, contextual approach is needed that targets clear violations without limiting lawful discourse. Defining guardrails around demonstrably false claims or the exposure of sensitive private information can help curb damage without chilling opinions. However, understanding cultural nuances around when speech crosses from mere offense into an infringement of rights will be critical.

Policymakers should also consider how to equip the public with literacy around responsibly interacting with synthetic content. Developing societal resiliency alongside reasonable legal boundaries tailored to each AI domain will help safeguard rights holistically. And maintaining visibility into provider decisions while enabling appeals will be key to earning public trust in this balancing act.

Should AI systems have a legal “duty of care” to avoid causing users harm?

This emerging concept warrants consideration as AI permeates daily life. Imposing a baseline duty on providers proportional to an AI system’s foreseeable risks could incentivize proactive safety practices while still permitting broad innovation. However, rigid mandates regardless of context could become overbearing. The key is allowing flexibility to tailor responsibilities to each technology’s unique potential for harm. Done judiciously, establishing general expectations around lawful design, training, monitoring and redress could steer developers toward responsible AI absent heavy-handed restrictions.

What are some technical interventions that could make AI systems less prone to generating misinformation?

  • Training models to estimate confidence scores for generated content reflecting uncertainty (a sketch follows this list).
  • Building causality frameworks to enhance logical reasoning and flag speculative assertions.
  • Incorporating external knowledge graphs to ground responses in factual data.
  • Leveraging semi-supervised learning approaches to further refine models before deployment.
  • Implementing reinforcement learning from post-release user interactions to continuously improve.
  • Employing adversarial training techniques to identify and reduce model vulnerabilities.
  • Encoding scientific methodologies natively within generative architectures.

Targeted techniques like these to reduce speculation and ground AI generation in empirical data can curtail emerging misinformation while preserving creative potential.
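To ground the first bullet, here is a minimal sketch that flags low-confidence generations using per-token log-probabilities. It assumes the serving stack exposes token log-probs; the 0.6 threshold is an illustrative choice, not an established standard.

```python
import math
from typing import Sequence


def sequence_confidence(token_logprobs: Sequence[float]) -> float:
    """Geometric-mean token probability, a rough proxy for how confident the
    model was in its own generation (1.0 = fully confident)."""
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))


def annotate_if_uncertain(text: str, token_logprobs: Sequence[float],
                          threshold: float = 0.6) -> str:
    """Append a caveat when confidence falls below the threshold, nudging
    users to verify speculative assertions before relying on them."""
    confidence = sequence_confidence(token_logprobs)
    if confidence < threshold:
        return f"{text}\n[Low confidence ({confidence:.2f}); please verify independently.]"
    return text


# Illustrative per-token log-probs, as a serving stack might expose them.
print(annotate_if_uncertain("The treaty was signed in 1887.",
                            [-0.9, -1.2, -0.4, -1.6, -0.8]))
```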

How can policymakers craft AI regulations flexibly to allow for continued innovation in the field?

  • Focus any hard mandates narrowly on demonstrated serious harms, avoiding broad preemptive restrictions.
  • Leverage public-private partnerships to gather ongoing technical expertise.
  • Phase in oversight gradually via emerging best practices versus rigid top-down rules.
  • Ensure understanding of the unique considerations for different AI domains.
  • Develop tailored frameworks per technology class proportional to their risks.
  • Maintain nimble oversight bodies that can evolve standards fluidly as new use cases emerge.
  • Incentivize ethical considerations for developers without punitive overreach stifling progress.
  • Protect lawful expectations of transparency, accountability, and security without being overly prescriptive.
  • Allow self-attestations of adherence coupled with third-party auditing versus pre-approval.

By focusing narrowly on protecting against unacceptable harms while retaining flexibility, policymakers can nurture AI innovation responsibly.

What are some ways AI systems could unfairly cause financial harm to consumers?

  • Generating investment advice or recommendations based on flawed data or reasoning that lead to losses.
  • Facilitating collusive or predatory pricing schemes among competitors.
  • Automatically denying loans or insurance to protected groups.
  • Enabling sophisticated synthetic fraud hard for consumers to detect.
  • Automating personalized manipulation of consumers to overspend.
  • Undercutting human creatives’ earnings by overgenerating cheap synthetic content.

Thoughtful oversight tailored to generative AI’s emerging capabilities can help curb unjust financial harms without stifling innovation.

What are some options for individuals who believe they were unfairly defamed or misrepresented by an AI system?

  • A reporting capability for users built into the AI provider’s interface.
  • Independent ombudsman office to review complaints regarding AI systems.
  • Ability to request the removal of certain sensitive assertions from training data.
  • Legal right of action if reputational or financial damage reaches a certain threshold.
  • Public database logging complaints of misuse/harm from AI systems.
  • Certification bodies to audit company practices regarding responsible AI development.

Providing impacted individuals accessible channels for remediation coupled with transparency can help make oversight fair and meaningful.

How can developers implement responsible data practices when building AI systems?

  • Conduct and document ethical reviews of all training data sources.
  • Restrict datasets to just the minimum necessary for the specific task.
  • Mask any personal identifiers or sensitive attributes in data (see the sketch after this list).
  • Provide opt-out mechanisms for individuals to exclude their data.
  • Apply differential privacy techniques to preserve anonymity.
  • Cryptographically secure storage and transmission of data.
  • Strict access controls and permissions model.
  • Ongoing monitoring and testing for potential data leaks.
  • Detailed data management protocols and employee training.

Proactive data stewardship tailored to risks can maintain privacy and prevent abuse.
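As an illustration of the “mask any personal identifiers” item above, a simple regex-based scrubber could run over text before it enters a training corpus. The patterns below are deliberately minimal assumptions; production pipelines typically pair pattern matching with learned PII detectors.

```python
import re

# Simple, illustrative patterns; real pipelines use far broader PII detectors.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def mask_pii(text: str) -> str:
    """Replace likely personal identifiers with typed placeholders so they
    never enter the training corpus verbatim."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


sample = "Contact Jane at jane.doe@example.com or +1 (555) 010-7788."
print(mask_pii(sample))  # Contact Jane at [EMAIL] or [PHONE].
```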

How might regulators provide oversight of AI systems without compromising intellectual property or stifling innovation?

  • Focus audits and evaluations on model outputs and behaviors rather than underlying code.
  • Allow self-attestations of adherence to rules initially, with third party verifications over time.
  • Employ staged approach scaling with proliferation rather than preemptive blanket restrictions.
  • Incentivize research into safety techniques like secure enclaves and confidential computing.
  • Encourage open standards around transparency and accountability over rigid proprietary mandates.
  • Shield from overzealous enforcement those good-faith developers who are responsibly trying to mitigate emerging risks.
  • Maintain flexibility to modify frameworks expediently based on new learnings and use cases.

Judicious oversight emphasizing outcomes over implementations can further responsible AI while respecting IP.

What are some warning signs that an AI system may be producing harmful, inaccurate or biased outputs?

  • Disproportionate errors regarding particular social groups.
  • Generating factually improbable scenarios or claims without caveats.
  • Statements based on fallacious reasoning or speculative assumptions.
  • Sourceless assertions beyond common knowledge.
  • False confidence projecting authority beyond system capabilities.
  • Responses potentially reinforcing dangerous stereotypes.
  • Outputs substantially misaligned with the deploying company’s stated values.

Proactive audits assessing these factors pre- and post-deployment can identify risks.

What are some ways providers could enable beneficial uses of AI while restricting harmful applications?

  • Allowlisting permissible use cases and blocking all others by default (see the sketch after this list).
  • Rate limiting generative queries to curb potential viral misinformation.
  • Gating certain generative capabilities so they are accessible only to verified entities.
  • Watermarking synthetic media to deter misrepresentation.
  • Directing revenue from beneficial applications to offset potential harms.

With thoughtful design, providers can steer applications towards societal good.
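Finally, a minimal sketch of the deny-by-default allowlisting idea from the first bullet: clients declare a use case, and anything not explicitly permitted is refused. The use-case labels here are hypothetical.

```python
from enum import Enum


class UseCase(Enum):
    # Hypothetical use-case labels a provider might require clients to declare.
    CUSTOMER_SUPPORT = "customer_support"
    CODE_ASSISTANT = "code_assistant"
    POLITICAL_PERSUASION = "political_persuasion"


# Only explicitly allowlisted use cases are permitted to run.
ALLOWED_USE_CASES = {UseCase.CUSTOMER_SUPPORT, UseCase.CODE_ASSISTANT}


def authorize(use_case: UseCase) -> bool:
    """Deny-by-default policy check."""
    return use_case in ALLOWED_USE_CASES


for case in UseCase:
    print(case.value, "->", "allowed" if authorize(case) else "blocked")
```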