Valtik Studios
EdTech Surveillance · 2026-03-20 · 11 min read

Your Kid's School Is Monitoring Everything: Gaggle, Bark, GoGuardian Explained

Your kid's school likely runs software that reads every email, monitors every Google Doc, scans every search, and uses AI to flag 'concerning' content. Gaggle, Bark, GoGuardian, and Securly are deployed in US K-12 schools covering roughly 20 million students. What the tools actually do, what they've gotten wrong, and what parents can (and cannot) opt out of.

Tre Trebucchi · Founder, Valtik Studios. Penetration tester based in Connecticut, serving the US mid-market.

What you probably didn't know was happening

I've been running security engagements in this space for a few years now. The simple opt-out you'd expect to exist doesn't.

Your kid started middle school. The school gave them a Chromebook. They got a school email address at @yourschool.edu. They log into Google Classroom, Microsoft Teams, or Canvas for assignments. They get a Clever or ClassLink SSO account that ties everything together.

Everything they type, search, visit, write, post, receive, and save through those accounts and that device is being monitored by at least one piece of software running in the background. The monitoring runs 24/7, including after school hours. It covers personal content alongside schoolwork. It uses AI to flag "concerning" content for human review. It generates reports that go to school administrators, sometimes to parents, and in certain circumstances to law enforcement.

This isn't a dystopian hypothetical. It's the operational reality of US K-12 education in 2026. The four major vendors (Gaggle, Bark for Schools, GoGuardian, and Securly) collectively monitor roughly 20 million US students' digital activity.

This post covers what these tools do, what they've gotten catastrophically wrong, and what parents can reasonably do about it.

The four major vendors

Gaggle

Founded 1999, operates in ~1,500 US school districts covering ~6 million students.

What it monitors:

  • Student emails (school accounts)
  • Google Drive / Microsoft OneDrive documents
  • Chat messages in Google Chat, Microsoft Teams
  • Shared documents and comments
  • Images and videos in student accounts

How it works:

  • AI-based content scanning for flagged categories: self-harm, suicide, violence, weapons, bullying, drugs, sexual content, abuse indicators
  • Flagged content goes to Gaggle's 24/7 human review team (contracted moderators, not school staff)
  • Moderators escalate "Level 1" incidents to school administrators within hours
  • Severe incidents ("Level 3": imminent self-harm, weapons, immediate danger) trigger direct contact with school administrators and, in some jurisdictions, law enforcement
  • Reports generated include the flagged content verbatim, context, and recommended action
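The scan-review-escalate pipeline above can be sketched in simplified form. To be clear, this is an illustrative model, not Gaggle's actual implementation: the categories, keyword lists, level assignments, and routing rules here are all invented for the example (real vendors use ML classifiers, not bare keyword lists).

```python
# Illustrative sketch of a severity-tiered flagging pipeline.
# NOT Gaggle's actual implementation: categories, keywords, and
# routing rules are invented for this example.

CATEGORIES = {
    "self_harm": {"keywords": ["hurt myself", "end my life"], "base_level": 3},
    "violence":  {"keywords": ["bring a gun", "shoot up"],    "base_level": 3},
    "bullying":  {"keywords": ["everyone hates you"],         "base_level": 2},
    "drugs":     {"keywords": ["buy weed"],                   "base_level": 1},
}

def flag(text: str) -> list[dict]:
    """Return one flag per matched category, with an escalation level."""
    text_lower = text.lower()
    flags = []
    for category, rules in CATEGORIES.items():
        if any(kw in text_lower for kw in rules["keywords"]):
            flags.append({"category": category, "level": rules["base_level"]})
    return flags

def route(flags: list[dict]) -> str:
    """Decide who sees the incident, based on the highest level present."""
    if not flags:
        return "no_action"
    top = max(f["level"] for f in flags)
    if top >= 3:
        return "immediate: administrators (and possibly law enforcement)"
    if top == 2:
        return "escalate to school administrators within hours"
    return "queue for human moderator review"

print(route(flag("I want to end my life")))  # highest tier: immediate escalation
```

The design point to notice: once a scanner is tiered like this, the routing decision is automated end to end, and the human moderator only appears at the lowest tier.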

Notable data point: Gaggle has publicly claimed to have prevented 1,400+ suicides through its monitoring. The methodology behind that claim is disputed by privacy researchers.

Notable controversy: Gaggle's moderators have access to student content 24/7 without specific consent from students or parents. The moderators are third-party contractors. Security researchers have identified weak controls on the moderator interface.

Bark for Schools

Founded 2017, operates in ~3,000 US school districts.

What it monitors:

  • School-provided accounts and devices
  • Text messages (if the device is configured for it)
  • Social media (where student accounts are linked)
  • Email content
  • Web browsing

Distinctive features:

  • More consumer-oriented. Bark also sells a direct-to-parent version
  • "Parent notification" option sends alerts to parents, not administrators
  • AI analysis focuses on "concerning content" detection, with categories similar to Gaggle's
  • Less 24/7 human moderation than Gaggle; more automated alerting

Controversy: Bark's consumer version has been criticized for over-alerting on false positives, flagging normal teen conversations as concerning. The school version has the same concerns at larger scale.

GoGuardian

Founded 2015, operates in schools covering ~10 million US students.

What it monitors:

  • Chromebook activity (primary use case)
  • Web browsing
  • Screen contents (screenshots, live monitoring)
  • Search history
  • YouTube watch history
  • Google Docs / Classroom activity

Distinctive features:

  • Teacher-visible real-time classroom monitoring ("GoGuardian Teacher")
  • Content filtering / blocking
  • "GoGuardian Beacon" AI detects suicide-risk indicators and alerts counselors
  • Remote screen viewing: teachers can see any student's screen in real time

Controversy: GoGuardian has been criticized for enabling real-time surveillance of students in their homes (since monitoring continues on school Chromebooks whether at school or home). Screenshots captured in students' homes during virtual learning raised significant privacy questions during 2020-2022.

Securly

Founded 2013, operates in ~15,000 schools worldwide.

What it monitors:

  • Web filtering / browsing
  • Search activity
  • Social media (via shared account access)
  • Email
  • Chat messages
  • Video content

Distinctive features:

  • "Securly Aware" AI detects concerning content across student communications
  • "Securly Visitor" manages school visitor sign-ins (separate product)
  • "Securly 24" provides 24/7 human review tier
  • Integrates with Google Workspace for Education and Microsoft 365 Education

What gets flagged

The vendors use AI models trained on "concerning content" patterns. The categories typically include:

  • Self-harm / suicide ideation: mentions of cutting, ending life, feeling hopeless
  • Violence: threats, weapons, fights, school attack planning
  • Bullying: direct harassment, group targeting, exclusion messaging
  • Drugs / alcohol: use, distribution, references
  • Sexual content: inappropriate conversation, explicit imagery, predatory communication
  • Abuse indicators: signs a student may be experiencing abuse at home
  • Radicalization: extremist ideology, hate speech
  • Cheating / academic dishonesty

The AI models aren't perfect. False positive rates are substantial for all vendors. Common false positives:

  • Students discussing novels, plays, or history class content ("kill the king" in a Hamlet paper)
  • Political discussions flagged as radicalization
  • Song lyrics quoted in essays
  • Medical information (abortion, mental health, sexual health)
  • LGBTQ content, a particularly common false-positive category
  • Private relationship discussions between students
  • Jokes and sarcasm, which AI struggles to interpret
  • Song lyrics shared in group chats
  • Video game references ("shoot the boss")

Documented incidents

2023: Gaggle flagged a college application essay about depression. A student wrote a sensitive but appropriate essay about their own mental health for a college application. Gaggle flagged it, the administration contacted the student's parents, and the student described feeling monitored and embarrassed.

2024: LGBTQ students disproportionately flagged. Multiple published studies found that students writing about LGBTQ identity or experiences were flagged at substantially higher rates than equivalent content on other topics.

2023-2024: False police referrals. Multiple cases of flagged content led to welfare checks by police at students' homes, sometimes over content that was ambiguous (song lyrics, jokes, literature discussion) rather than genuinely concerning.

2024: GoGuardian captured personal content. Students using school Chromebooks for personal activities (including intimate conversations, medical searches, and family conflicts) had content captured during hours when they reasonably expected privacy.

2025: Bark for Schools AI hallucination. Bark's AI generated false descriptions of concerning content in student conversations that contained no such content. Multiple administrative actions were taken based on AI output that did not accurately represent students' messages.

FERPA (Family Educational Rights and Privacy Act)

FERPA is the primary US federal law governing student records. It requires:

  • Schools must obtain parental consent before disclosing student records to non-school parties (with many exceptions)
  • Parents have the right to review their child's records
  • Schools may share records with "school officials" who have "legitimate educational interests"

The monitoring vendors claim to operate as "school officials" under FERPA's School Official Exception. This has been legally challenged but generally upheld. The result: vendors have access to student records without specific parental consent, provided the contract with the school meets certain requirements.

COPPA (Children's Online Privacy Protection Act)

COPPA applies to children under 13. It requires specific parental consent for online data collection. Schools can provide consent on behalf of parents if the collection is for the school's use and doesn't commercially benefit the vendor.

This creates a gray area when vendors use student data to improve their AI models. Some vendors have received regulatory attention on this point. Most have since updated contracts to clarify that model improvement uses only de-identified, aggregate data.

State-level laws

State laws are increasingly strict. Notable examples:

  • California SOPIPA (Student Online Personal Information Protection Act). Prohibits commercial use of student data, requires deletion upon school contract termination
  • Colorado Student Data Privacy Act
  • New York Education Law 2-d
  • Illinois Student Online Personal Protection Act

Most of these require:

  • School contracts with vendors specify data handling
  • Vendor breach notification obligations
  • Restrictions on secondary use of student data
  • Parental rights to review data collected

Enforcement is variable. The attorney general of each state has enforcement authority; enforcement actions are rare.

The policy conversation in 2026

Federal student-privacy legislation has been introduced multiple times since 2022. As of April 2026:

  • Student Privacy Protection Act (proposed): would create a federal floor for student data protection
  • EdTech Privacy Act (proposed): would regulate vendor relationships
  • Neither has passed. State patchwork continues.

What parents can do

The practical options for parents who are concerned:

Understand what's deployed

Ask the school district:

  • What monitoring software is installed on school-issued devices?
  • What data is collected and retained?
  • Who has access to flagged content?
  • What's the escalation process for flagged content?
  • Is there a way to review what's been flagged about my child?

Every district should be able to answer these questions. Many can't without escalation. The fact that schools don't proactively communicate this detail is itself revealing.

Opt out where possible

Opt-out options vary:

  • Some districts allow opt-out of specific monitoring features, though rarely of device-level monitoring entirely
  • Parents can decline school-issued devices and have the student use a personal device for homework where feasible
  • Parents can request alternative account arrangements: the student uses a personal Google account for personal activities and the school account only for schoolwork

In practice, opt-out often results in reduced access to school resources. The structural incentive favors students using the monitored systems.

Educate your kid

The most important defense is informing your child about what's monitored. A student who knows their school email is monitored won't use it for private conversations. A student who knows their school-issued Chromebook is tracking their browsing won't use it for personal searches.

Specific things to tell kids:

  • "Your school email is monitored. Everything you send and receive. Don't use it for personal conversations."
  • "Your school Chromebook is monitored. Don't use it for searches, messages, or anything you want private. Use a family computer or your own phone."
  • "Your school Google Drive is monitored. Documents for school only."
  • "If you need to look up something personal (mental health, sexuality, political views, medical questions), do it from home, from a personal account, on a personal device."
  • "You have a right to privacy on your own devices and personal accounts. School monitoring doesn't extend to those, legally."

Watch for over-escalation

If your child is flagged, understand what happened. In most cases flagged content triggers:

  1. Content captured and stored
  2. Routed to administrator or moderator
  3. School administration contacted
  4. Optionally, parents contacted

In edge cases (perceived imminent danger), law enforcement may be contacted directly. If you receive a welfare check at your home based on school-flagged content, you have the right to:

  • Ask what was flagged and see the content
  • Understand the escalation process
  • Challenge inappropriate escalation
  • Request removal of flagged content from your child's record where appropriate

Know the data access limits

Parents generally have the right under FERPA to review school records about their child. This typically includes monitoring flags. Request in writing. Schools must respond.

Advocate at the board level

If your district's monitoring is aggressive, the school board is where policy is set. Board meetings are public. Other parents are often similarly concerned. Policy changes happen at the district level, not through individual parent complaints.

For older students

For high schoolers, some additional considerations:

Personal accounts are legally protected in most cases. Your school-issued Chromebook may monitor your school account and school-provided services, but the school has no legal authority to access your personal Instagram, personal Gmail, or personal phone.

However, practical access may differ. School IT teams sometimes have network-level monitoring that sees personal accounts while on school Wi-Fi. VPNs on personal devices can mitigate this.
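It's worth being precise about what network-level monitoring can see. With HTTPS, page contents are encrypted in transit, but the destination hostname is still visible to the network through DNS lookups and the TLS SNI field; a VPN collapses all of that into a single tunnel endpoint. A simplified model of that difference (all hostnames and log entries here are invented examples):

```python
# Simplified model of what a school network observer can log.
# Hostnames are invented examples; the point is which fields are
# visible, not any real school's data.

# Without a VPN: HTTPS hides page content, but DNS/SNI expose hostnames.
browsing = [
    {"hostname": "classroom.google.com",      "content_visible": False},
    {"hostname": "personal-mail-example.com", "content_visible": False},
    {"hostname": "health-info-example.org",   "content_visible": False},
]

def network_observer_view(entries, vpn=False):
    """Hostnames the school network can log for this traffic."""
    if vpn:
        # All traffic is wrapped in one encrypted tunnel: only the
        # VPN endpoint is visible, not the individual destinations.
        return ["vpn-endpoint.example.net"]
    return [e["hostname"] for e in entries]

print(network_observer_view(browsing))            # every hostname visible
print(network_observer_view(browsing, vpn=True))  # only the tunnel endpoint
```

The takeaway: even on a personal account over HTTPS, school Wi-Fi can log which sites were visited, which is exactly the gap a VPN on a personal device closes.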

Off-campus activity on personal devices and personal accounts is legally outside school surveillance territory, though the practical reach of some monitoring extends further. Know where your personal space ends and your school-monitored space begins.

What's changing in 2026

The monitoring landscape is evolving in several directions:

1. AI sophistication. Vendors are updating AI models to reduce false positives (slowly). Large-language-model-based content review is increasingly replacing older keyword-based flagging, which reduces some error categories and introduces new ones.

2. Expanded platform coverage. Monitoring is extending to more platforms. TikTok, Instagram, Snapchat, and Discord activity captured through parent-linked personal accounts is now common.

3. Litigation pressure. Several class-action lawsuits against monitoring vendors are in progress, focusing on privacy violations, COPPA compliance, and disproportionate flagging of LGBTQ students.

4. Political pressure in both directions. Some advocates push for more monitoring (especially after school violence incidents). Others push for less (especially privacy groups and LGBTQ advocacy organizations). The net effect varies by state.

5. Bug bounty and security research. Researchers have been scrutinizing monitoring vendors' security. Gaggle, Bark, and GoGuardian have all had documented security issues: exposed databases, API authorization flaws, account takeover vulnerabilities. Aggregated K-12 monitoring data is an increasingly valuable target for bad actors.

The bigger picture

Student monitoring tools are deployed because schools face real problems: school shootings, bullying, youth mental health crises, drug use, abuse. The tools are implemented with good intent.

They're also implemented with:

  • Privacy trade-offs that most parents don't understand
  • False positive rates that affect student wellbeing
  • Disproportionate impact on marginalized students
  • Data retention policies that create long-term risk
  • Security vulnerabilities that can expose sensitive student data

Parents who want their kids to benefit from modern educational technology without the surveillance ecosystem's worst aspects need to:

  1. Understand what's deployed
  2. Help their kids operate with awareness
  3. Draw lines between school-monitored activity and personal activity
  4. Advocate at the board level where they have concerns

For districts and school IT leaders

If you work in school IT or administration, the vendor you deploy matters. Questions to ask:

  • What specific categories does the AI flag?
  • What's the false positive rate, measured and published?
  • Who has access to flagged content (internal staff vs external moderators)?
  • Where is data stored, for how long, under what access controls?
  • What's the breach response plan?
  • How does the vendor handle requests for deletion when a student moves / graduates?
  • What security audits have been conducted?

Vendors that can't answer these concretely aren't ready to be trusted with student data. Period.

What Valtik does in this space

Valtik's education-sector engagements include student data privacy program review and K-12 monitoring vendor risk assessment. For school districts wanting to evaluate their current monitoring stack, audit vendor data handling practices, or respond to parent concerns with evidence-based answers, we can help.

For parents wanting a confidential consultation about their specific situation (particularly in cases involving flagged content, welfare checks, or suspected over-monitoring), we offer individual consultations.

Reach out via https://valtikstudios.com.

Sources

  1. Gaggle Official Documentation
  2. Bark for Schools
  3. GoGuardian Product Documentation
  4. Securly Documentation
  5. The Surveillance Overview. EFF
  6. FERPA. US Department of Education
  7. COPPA Guidance. FTC
  8. Student Data Privacy Research. Center for Democracy & Technology
  9. ACLU Student Privacy Position Papers
  10. The 74 Million. School Surveillance Investigative Reporting
