Bark uses AI to monitor children's texts, emails, and social media for concerning content such as cyberbullying, depression, or predatory behavior. The alert-based model means parents see only flagged content rather than everything, but the underlying surveillance is comprehensive, and the AI's decisions about what counts as "concerning" are opaque.
Bark requires access to children's social media accounts, emails, and text messages. Even though parents only see alerts, Bark's AI analyzes everything, creating extensive content archives.
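What this implies is a filter-over-archive architecture: every message is scored and retained, and only high scores reach the parent. A minimal sketch of that flow, with entirely hypothetical names and a toy keyword scorer standing in for Bark's proprietary classifier:

```python
from dataclasses import dataclass

@dataclass
class Message:
    child: str
    platform: str
    text: str

def concern_score(msg: Message) -> float:
    """Stand-in for the opaque ML classifier; returns 0.0-1.0."""
    keywords = {"kill myself": 0.9, "hate you": 0.4, "meet me alone": 0.8}
    return max((score for phrase, score in keywords.items()
                if phrase in msg.text.lower()), default=0.0)

ALERT_THRESHOLD = 0.7
archive: list[tuple[Message, float]] = []        # everything is retained

def process(msg: Message) -> None:
    score = concern_score(msg)
    archive.append((msg, score))                 # comprehensive archive
    if score >= ALERT_THRESHOLD:                 # parent sees only this slice
        print(f"ALERT ({score:.2f}) via {msg.platform}: {msg.text!r}")

process(Message("alex", "sms", "Want to meet me alone after school?"))
process(Message("alex", "sms", "See you at practice!"))
```

Note that the privacy-friendly part (parents see one alert) and the privacy-hostile part (the archive holds both messages, scored) are the same pipeline.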
The AI decides what counts as "concerning" according to opaque criteria. False positives can create family conflict, while false negatives may miss genuine threats; either way, parents must trust algorithmic judgment about their children's communications.
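That trade-off is structural: wherever the alert threshold sits, it exchanges false alarms for missed threats. A toy illustration with made-up scores and labels, not Bark's model or data:

```python
# (classifier score, actually concerning?) -- fabricated examples
labeled = [
    (0.95, True), (0.80, True), (0.65, True),    # real threats
    (0.75, False), (0.60, False), (0.20, False)  # benign banter
]

def errors(threshold: float) -> tuple[int, int]:
    false_pos = sum(1 for s, bad in labeled if s >= threshold and not bad)
    false_neg = sum(1 for s, bad in labeled if s < threshold and bad)
    return false_pos, false_neg

for t in (0.5, 0.7, 0.9):
    fp, fn = errors(t)
    print(f"threshold={t}: {fp} false alarms, {fn} missed threats")
# Lowering the threshold floods parents with false alarms;
# raising it silently drops genuine threats. No setting avoids both.
```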
Effectiveness depends on continued access to social media platforms. Changes to platform APIs can break monitoring, and the terms require parents to maintain their children's platform credentials.
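A sketch of the failure mode, again with hypothetical names rather than any real platform SDK: when a credential is revoked or an endpoint changes, coverage disappears, and unless the gap itself is surfaced, parents assume protection that no longer exists.

```python
import datetime

class PlatformError(Exception):
    """Raised when an API change or revoked credential breaks access."""

def fetch_messages(platform: str, stored_token: str) -> list[str]:
    # Illustrative failures only; no real platform behaves exactly this way.
    if stored_token == "expired":
        raise PlatformError(f"{platform}: credential no longer valid")
    raise PlatformError(f"{platform}: endpoint removed in an API update")

def poll(platform: str, token: str) -> None:
    try:
        for msg in fetch_messages(platform, token):
            print(msg)
    except PlatformError as exc:
        # Without an explicit failure alert, this is a silent coverage gap.
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        print(f"[{stamp}] monitoring gap: {exc}")

poll("instagram", "expired")
poll("snapchat", "valid-but-api-changed")
```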
The terms focus entirely on parents' rights. Children have no terms-of-service rights regarding their own data, no ability to contest alerts, and no mechanism to age out of monitoring.
Because parents see only flagged content rather than full transcripts, children retain some privacy while safety monitoring continues.
Bark emphasizes genuine safety concerns (bullying, predators, self-harm) rather than general behavior control.