Why some states want to let parents snoop on their kids’ apps
A new digital-safety idea is spreading through statehouses — but there’s a case that it could backfire on families.
With Congress seemingly stalled out on a national law to protect children from online harms, state lawmakers across the U.S. are starting to take matters into their own hands, drafting and passing laws that would create new guardrails for social media platforms and the kids who use them.
As they do, they’re steering straight into an issue that is splitting online safety advocates: whether it’s better, or worse, for parents to be able to snoop on their kids’ social media accounts.
Of the numerous states taking up kids’ online safety bills this year, a handful are including parental oversight requirements as part of their strategy. Lawmakers in Maryland and California have proposed such requirements. Utah wrote them into law last month.
This concerns many privacy groups and children’s mental health advocates, who worry that in the rush to protect teens from predators, drugs and other online dangers, lawmakers might also be creating real-life harm for children in difficult situations.
“That means that a child couldn’t privately discuss sexual abuse with friends online, they couldn’t privately discuss reproductive rights or abortion access,” said Jason Kelley, an associate director at the Electronic Frontier Foundation, a digital rights group. “They couldn’t even really speak out about parental abuse with their friends online because their parents could see it.”
As with many online privacy and safety issues, the argument is largely brewing in statehouses. Lawmakers in Congress have proposed children’s digital safety bills, most notably the Kids Online Safety Act, which failed to pass last year and is expected to be reintroduced this term. But those bills don’t include parental monitoring requirements.
It’s unlikely that Utah’s law will become a model for Congress to follow, or even for other states, as S.B. 152 drew significant public criticism and legal threats over potential First Amendment violations for teens. Maryland’s bill faces an uphill path in the statehouse and hasn’t made any progress since its last hearing in February. But if tech platforms are required to build these tools, even if it’s for just one state, it could lead other states to clamor for the same capabilities — and could start to set a de facto standard for kids' lives online.
“In the absence of Congress being able to act and to pass a bill, states are taking it up on their own. We’re seeing a patchwork of states trying to get at this issue, but in a variety of different ways,” Bailey Sanchez, a policy counsel with the Future of Privacy Forum’s Youth & Education Privacy team, said.
By February of this year, lawmakers in 16 states had proposed 27 different bills pushing for kids’ privacy and safety regulations, according to an analysis from the Future of Privacy Forum. Many of these bills share similarities, like banning targeted advertising to children or banning addictive designs on social networks, measures that kids’ tech advocacy groups like Common Sense Media support.
When it comes to parental supervision, however, there is far less agreement.
The Utah law, for instance, requires social media companies to provide a parent or guardian “access to the content and interactions of an account held by a Utah resident under the age of 18,” which could include their private messages.
The lawmakers behind these bills argue that tech companies have already caused too much harm to kids, and see parental oversight as one way to guard against harm in the digital world. They point to the U.S. Surgeon General’s warning that 13 is too young for kids to be on social media, or to the CDC’s report on the increasing number of teen girls “persistently sad or hopeless” between 2011 and 2021, believing that the rise in teen depression is linked to social media’s growth over those same years.
Lawmakers also cited Facebook whistleblower Frances Haugen’s 2021 testimony, in which she told Congress that the company knew Instagram was damaging teens’ mental health but didn’t do enough to prevent the harm.
The message got through to lawmakers, even outside of Washington. “When you have 30 percent of our young girls contemplating suicide, we need to take action, parents need to be more involved,” Utah’s state Sen. Mike McKell said.
But in giving parents a clear legal right to snoop on their kids, the bills also trigger concerns from many mental-health researchers, who see kids’ online lives as an important avenue for exploring issues outside their parents’ purview.
The parental monitoring provisions raise concerns for kids who could be at risk in abusive households, and could have chilling effects on teens’ conversations with their friends, privacy advocates said. Those conversations can range from discussions about sexuality, reproductive rights and parental abuse to simple, low-stakes exchanges.
“Even discussing whether or not you want to go to college with your friend is the kind of thing that you need some level of privacy to have those conversations with your community and your friends, and parents don’t need to be listening over the shoulder for every conversation that a young person has, especially when they’re months away from being an adult,” the EFF’s Kelley said.
Some privacy advocates phrase it far more harshly: “When you insert a trojan horse that enables effectively widespread surveillance of children, that’s not a privacy bill, that’s a surveillance bill,” said Evan Greer, a deputy director for the digital rights group Fight For the Future.
Though research on kids’ privacy rights is slim, some data suggests that strict parental controls can backfire. Researchers have found that parental monitoring apps, which let parents control their kids’ online activities in similar ways to what lawmakers want from social networks, were associated with increased chances of teen online victimization, compared to teens whose parents didn’t use monitoring services.
A 2018 study from the University of Central Florida found that parents who were constantly monitoring their children’s online activities hurt their relationship with their kids, creating real-life social issues that could lead to more online victimization through cyberbullying, for example. But it also found that children of parents who were completely neglectful were also at high risk of online victimization. The researchers found that a middle ground of direct supervision and involvement from parents reduced teen online harassment — but monitoring apps did not.
There’s also an evolving international norm when it comes to children and privacy — and these state laws place the U.S. outside that norm. Cobun Zweifel-Keegan, the managing director of the Washington office of the International Association of Privacy Professionals, noted that the United Nations Convention on the Rights of the Child says that a child should not be subjected to “unlawful interference with his or her privacy.”
While the convention isn’t a legally binding regulation, it highlights that government bodies around the world view privacy rights for teens differently than many U.S. lawmakers do.
“When we think about our right to privacy, we don’t think of it as something that starts when you turn 18,” Zweifel-Keegan said.
This is not how many American lawmakers see it: “The teens don’t have any right to privacy from their parents unless their parents decide that they want to give them that privacy,” Maryland Delegate David Fraser-Hidalgo, who proposed his state’s bill, said.
Lawmakers said that they’re receptive to these arguments, but ultimately don’t trust the sources. Tech giants have a reputation for parachuting into states to lobby against regulations, often through industry groups.
At the hearings for both Utah’s bill and Maryland’s bill, representatives from NetChoice, the Computer & Communications Industry Association and TechNet testified against the legislation. Collectively, the groups represent Amazon, Apple, Google, Meta and TikTok.
While these companies didn’t testify themselves in Utah or Maryland, lawmakers in those states say they’ve been in touch with representatives from TikTok, Meta and Google, who wanted to negotiate how they would be affected by the bills.
TikTok declined to comment on any specific legislation but said it was dedicated to providing a safe platform for teens, highlighting parental control features it has introduced, like disabling direct messaging for users under 16.
“Our team of more than 40,000 safety professionals are dedicated to keeping our community safe and welcoming, and we will continue to play our part in tackling industry-wide challenges related to youth safety and well-being,” TikTok spokesperson Jamal Brown said in a statement.
Meta also wouldn’t comment directly on any legislation, and highlighted its age verification tools and age-appropriate design features.
“We’ll continue to work closely with experts, policymakers and parents on these important issues,” Meta’s global head of safety, Antigone Davis, said.
McKell said the tech industry groups’ testimony fell on deaf ears, comparing it to tobacco companies arguing that their products weren’t harmful. The same happened in Maryland.
“My concern is that social media companies use [privacy] as a red herring to not do anything,” Fraser-Hidalgo said.