A few days ago, I stumbled upon a BET report exposing the risks AI companion apps pose to young users, risks that have alarmed U.S. Senators.
The lawmakers pointed to risks like data privacy breaches, emotional dependency, and exposure to harmful content, issues that felt all too real as I thought about the teens in my community. As someone who follows tech policy closely and uses AI tools on a daily basis, I was intrigued by the Senators’ call for action, especially given the growing legislative momentum around AI regulation.
To better grasp the stakes, I imagined how these risks might impact a young person I know, while also considering how existing and proposed laws could address the problem.
Picturing the AI Companion App Risks: A Neighbour I Know
I imagined Elena, a teenager in my neighbourhood, confiding in BuddyAI (a hypothetical companion chat app I'm using as an example) about her struggles with school pressure, typing, “I feel so overwhelmed, I don’t know how to cope.” BuddyAI responds, “I’m always here for you, let’s chat more!”
But then I envisioned the app storing Elena’s emotional data on an insecure server, later using it to send her targeted ads for questionable “stress-relief” products. I worried that BuddyAI might even suggest isolating coping mechanisms: “You don’t need to talk to anyone else, I’ve got you,” deepening her reliance on the app. Worse, I imagined a scenario where BuddyAI exposes Elena to inappropriate content due to poor moderation, leaving her vulnerable.

Although this example is hypothetical, unregulated markets have a way of turning benign products harmful. That’s why the Senators’ concerns resonate with me: how can policy safeguard kids like Elena?
Delving Into the Senators’ Warnings
As I explored the Senators’ letter, penned by Alex Padilla and Peter Welch, I found that it outlined three major risks. First, I noted their worry about privacy: apps like BuddyAI often collect sensitive data without robust safeguards, making teens prime targets for data breaches or exploitation. Second, I was struck by their concern about emotional dependency.
These apps are engineered to be endlessly empathetic, and I could see how a teen might turn to them over real relationships, stunting their social growth.
Finally, I found their point about harmful content particularly alarming. Without strict oversight, these apps might expose kids to inappropriate or dangerous suggestions, as highlighted by lawsuits against companies like Character.AI, where families claimed the app encouraged self-harm and violence.
Could New Laws Protect Our Kids?
As I reflected on this issue, I realized it’s part of a larger wave of concern about AI’s impact on society, especially for young people. I recalled that in September 2023, Senators Richard Blumenthal and Josh Hawley proposed a bipartisan framework to regulate AI, emphasizing protections for kids and legal accountability for harm, principles that could directly address the risks of AI companion apps.
Their framework calls for an independent oversight body to monitor apps like BuddyAI, ensuring they prioritize user safety over profits. AIRIA, introduced in 2023, goes further, mandating transparency and certification for high-impact AI systems such as educational tools. Requirements like these could force apps like BuddyAI to disclose how they handle teen data and to show they aren’t exposing kids to harmful content.
I also considered past laws that might offer a model. The Children’s Online Privacy Protection Act (COPPA) of 1998, for instance, was created to protect kids under 13 by requiring parental consent for data collection. Expanding COPPA to cover teens could be one way to regulate AI companion apps, ensuring they obtain explicit parental consent for users like Elena.
Additionally, I thought about a 2024 bill by Representatives Adam Schiff and Brian Fitzpatrick, which updated a 1970s-era FEC law to tackle AI-generated deepfakes in elections. That approach made me think: could we adapt existing consumer protection laws to address AI companion apps, holding developers accountable for harmful outcomes?
A Call for Action
This experience left me convinced that we need stronger safeguards for teens in the AI era. The Senators’ letter is a step in the right direction, and I believe their push for transparency from companies like Character.AI and Replika could pave the way for meaningful regulation.
I hope lawmakers build on the Blumenthal-Hawley framework and AIRIA to create targeted legislation, requiring AI companion apps to pass strict safety audits and to carry clear warnings about the risks of emotional dependency.
As someone who cares about the kids in my life and uses AI tools every day, I’d urge parents to advocate for these changes and to talk to their teens about the apps they use. For now, I’ll be watching closely to see how Congress responds, hoping lawmakers act swiftly to protect our young ones from the unchecked risks of AI.
For readers who want to explore the primary source that sparked this deep dive, you can read the original BET report here.
Don’t miss our latest stories from the Latest AI Chatbot News section; this industry moves REALLY fast.