Members-only forum — Email to join

We just discovered a data breach — what do we legally have to do?

Started by PanickedCTO · Jan 11, 2024 · 19 replies
For informational purposes only. Not legal advice.
PanickedCTO OP

Yesterday we discovered someone accessed our database through a misconfigured S3 bucket. We have about 15,000 users, mostly US but some EU. The exposed data includes emails, hashed passwords, and for some users, shipping addresses.

We've locked down the bucket but have no idea how long it was exposed. Could be weeks. What do we legally have to do now? Do we need lawyers immediately?

DataCounsel Attorney

Yes, get a lawyer now. Data breach response runs on strict timelines, and the wrong early steps can increase your liability.

That said, here's the general framework:

1. Preserve evidence. Document what you found, when, how. Preserve logs. Don't overwrite anything. Your forensic timeline will matter.

2. Assess what was exposed. Emails + hashed passwords + addresses is concerning but not worst-case. Were there SSNs, financial info, health data?

3. Determine notification obligations. This depends on:

  • What data was exposed
  • Where your users are located
  • Your industry (are you subject to HIPAA, GLBA, etc.?)
PanickedCTO OP

No SSNs or financial info. We're a consumer SaaS, not healthcare or finance. Users are about 80% US, 15% EU, 5% other.

How fast do we need to notify? I've seen "72 hours" mentioned but that seems impossible.

DataCounsel Attorney

The 72-hour rule is GDPR Article 33 — notification to supervisory authority within 72 hours of becoming "aware" of a breach. That applies to your EU users. You'll need to identify which EU member states your users are in and notify the relevant data protection authorities.

For US users, every state has its own breach notification law. The good news: emails + hashed passwords + addresses may not trigger notification in every state. Many state laws define "personal information" to require an SSN, financial account numbers, or government-issued IDs.

California is broader — Cal. Civ. Code 1798.82 includes email + password that "would permit access to an online account." If the passwords are properly hashed (bcrypt, Argon2), there's an argument no real credentials were exposed. But that's a judgment call.

IncidentResponder

Security consultant here. Important question: do you have evidence the data was actually exfiltrated, or just that it was exposed?

A misconfigured S3 bucket is exposed if it's publicly listable/readable. But if you don't have access logs showing someone downloaded the data, you may be in "potential breach" vs "actual breach" territory. Some state laws distinguish between exposure and acquisition.

Check your CloudTrail logs for the S3 bucket. Look for GetObject or ListBucket calls from outside your org.

PanickedCTO OP

@IncidentResponder — checking now. Our CloudTrail was enabled but only for management events, not data events. So we can see when the bucket policy changed but not individual file access. That's... bad, right?

IncidentResponder

Not great, but common. Without data event logs, you can't prove no one accessed the data, but you also can't prove anyone did. You may want to check S3 server access logs if those were enabled (separate from CloudTrail).
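If those server access logs exist, the check can be scripted. A rough sketch (the field layout follows the documented S3 server access log format; the trusted-IP set and function name here are made up for illustration):

```python
import re

# S3 server access log lines start with: bucket owner, bucket, [timestamp],
# remote IP, requester, request ID, operation, key, ...
# Object downloads show up as REST.GET.OBJECT; listings as REST.GET.BUCKET.
LOG_PATTERN = re.compile(
    r'^(?P<owner>\S+) (?P<bucket>\S+) \[(?P<time>[^\]]+)\] '
    r'(?P<ip>\S+) (?P<requester>\S+) (?P<reqid>\S+) '
    r'(?P<operation>\S+) (?P<key>\S+)'
)

def suspicious_reads(log_lines, trusted_ips):
    """Return (time, ip, key) for object GETs from IPs outside your org."""
    hits = []
    for line in log_lines:
        m = LOG_PATTERN.match(line)
        if not m:
            continue
        if m.group('operation') == 'REST.GET.OBJECT' and m.group('ip') not in trusted_ips:
            hits.append((m.group('time'), m.group('ip'), m.group('key')))
    return hits
```

Feed it the concatenated log files and your office/VPN egress IPs; anything that comes back is worth a closer look.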

Also check: did the bucket have public listing enabled, or just public object reads? If listing was off, someone would need to know the exact file paths to access anything. Makes opportunistic discovery less likely.
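One way to eyeball the listing-vs-reads question from the bucket policy document itself. This is a hedged sketch: `public_actions` is a made-up helper, and real buckets can also grant access through ACLs and policy conditions that this ignores.

```python
import json

def public_actions(policy_json):
    """Collect the S3 actions a bucket policy grants to everyone
    (Principal "*"). If s3:GetObject is public but s3:ListBucket is not,
    an outsider needs exact object keys to pull anything."""
    policy = json.loads(policy_json)
    granted = set()
    for stmt in policy.get('Statement', []):
        if stmt.get('Effect') != 'Allow':
            continue
        principal = stmt.get('Principal')
        if principal == '*' or (isinstance(principal, dict) and principal.get('AWS') == '*'):
            actions = stmt.get('Action', [])
            if isinstance(actions, str):
                actions = [actions]
            granted.update(actions)
    return granted
```

Run it against the policy you had in place at the time of exposure (CloudTrail management events should show when it changed, so you can reconstruct the old one).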

BreachResponseVet

Practical steps beyond legal:

  • Force password resets for all users NOW. If credentials were exposed, this limits damage.
  • Prepare a customer communication. Even if not legally required, being proactive builds trust.
  • Check if you have cyber insurance. If so, call them immediately — they often have breach response resources included.
  • Engage a forensic firm to document what happened. This creates the paper trail you'll need if regulators ask questions.
PanickedCTO OP

UPDATE: We have cyber insurance — didn't even think to check. Carrier is sending a breach coach (their term) who will coordinate legal and forensics. They're taking over the response.

Forensics found S3 server access logs were enabled. There's ONE suspicious access from a Tor exit node 3 weeks ago that downloaded the user table. So we have to assume data was exfiltrated.

Forced password reset already done. Working with the breach coach on notification strategy. Looks like we'll need to notify California AG (>500 CA residents affected) and send individual notices to all US users. EU notification to Irish DPC since most EU users seem to be there.

DataCounsel Attorney

Good that you have insurance. Their breach coach will guide you through the notification letters (specific language is required in most states) and AG notifications. The 72-hour GDPR clock started when you "became aware" — arguably that was yesterday, so you're still in the window.

One more thing: document what security improvements you're making. Regulators and plaintiffs' lawyers will ask "what did you do to prevent this from happening again?" Having a concrete answer helps.

PanickedCTO OP

One year later update: Figured I'd circle back since this thread helped us so much. We completed all notifications, no regulatory fines, and only one nuisance lawsuit that our insurance handled.

Lessons learned for anyone dealing with this now:

  • Cyber insurance was worth 10x what we paid. They handled everything.
  • Being transparent with customers actually improved trust. We got emails thanking us for the clear communication.
  • We now have proper S3 bucket policies, CloudTrail data events enabled, and automated scanning for misconfigurations.
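For anyone making the same CloudTrail fix: it boils down to an advanced event selector passed to put_event_selectors, roughly like this (sketched from memory; the selector name is arbitrary):

```python
# CloudTrail advanced event selector that captures S3 object-level (data)
# events, which a default management-events-only trail misses entirely.
S3_DATA_EVENT_SELECTOR = {
    "Name": "Log all S3 object-level access",
    "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["Data"]},
        {"Field": "resources.type", "Equals": ["AWS::S3::Object"]},
    ],
}
```

Data events cost extra per event, but after this incident we consider that cheap.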

Also worth noting: the FTC's updated Safeguards Rule now requires notification within 30 days for certain breaches. Glad we weren't subject to that, but others should be aware.

ComplianceStartup

Necro-ing this thread because it's incredibly relevant right now. We're a fintech and just saw the SEC's new cybersecurity disclosure rules go into effect. Anyone else navigating the Form 8-K requirements for material breaches?

The four business day disclosure window for "material" incidents is causing our legal team headaches. How do you determine materiality when you're still investigating?

Also seeing more state laws going into effect — Washington's My Health My Data Act has breach notification requirements that extend beyond traditional health data. Oregon's consumer privacy law kicked in too. The patchwork is getting worse.

DataCounsel Attorney

@ComplianceStartup — the SEC materiality question is genuinely difficult. Their guidance says to consider quantitative and qualitative factors, but during an active incident you often don't know the scope. Best practice is to document your materiality analysis in real-time, even if preliminary.

The state patchwork is indeed getting worse. For 2024, keep an eye on:

  • New Jersey's privacy law (signed Jan 2024, effective Jan 2025) with a 30-day notification requirement
  • Delaware, Iowa, Nebraska, New Hampshire, and several others with laws taking effect through 2025
  • The proposed federal privacy bill that might preempt some state requirements (unlikely to pass, but worth tracking)

I'm also seeing more AG enforcement actions. California, New York, and Texas AGs have all been active on breach notification failures.

BreachResponseVet

Adding context from the trenches: the MOVEit and Okta breaches from 2023-2024 created lasting ripple effects. We're still seeing companies discover they were affected through vendor chains. Third-party risk is now the number one concern in every breach response I handle.

Practical advice for 2024:

  • Map your vendors NOW before an incident. You need to know who has your data.
  • Review contracts for breach notification obligations (72 hours is becoming standard).
  • AI systems are creating new breach vectors — training data exposure and prompt injection are real concerns that don't fit neatly into existing notification frameworks.

Also, the recent HHS proposed rule for healthcare would require 24-hour notification for ransomware. That's aggressive but reflects regulatory direction.

AmandaW_InfoSec

@BreachResponseVet - the AI training data point is so real. We had a situation where an employee fed customer data into ChatGPT for summarization. Technically that data left our systems and went to OpenAI. Does that count as a breach? The legal guidance is murky at best.

Our current policy is to treat any unintentional exposure to AI systems as a potential incident requiring investigation. Better safe than sorry given how unclear the regulatory environment is.

JasonL_CTO

Just want to thank everyone in this thread. We had our own S3 incident last month (different circumstances but same general issue) and I immediately remembered this discussion. Having cyber insurance was the #1 thing that saved us - the breach coach handled everything.

Cost of our cyber insurance: $4,800/year. Cost of breach without it: would have easily been $50K+ in legal, forensics, and notification costs. No brainer for any startup handling user data.

PriyaK_Compliance

Re: the patchwork of state laws - one thing that helped us was building a decision matrix. We mapped every state's breach notification law by: what data triggers notification, timing requirements, who to notify, and any special formatting requirements.

It's a lot of upfront work, but it's worth it when you're in incident-response mode and don't have time to research each state individually. Happy to share our template if anyone wants it.
