Minnetonka High School
Minnetonka Public Schools Superintendent Dennis Peterson signed a three-year contract, at $23,500 annually, with a company that pledged to alert the district to “threats shared publicly.” Credit: MinnPost photo by Tony Nelson

In the summer of 2017, Minnetonka Public Schools Superintendent Dennis Peterson attended a conference in Baltimore put on by the Education Research and Development Institute, a for-profit company that connects education technology businesses with school administrators.

While in Baltimore, Peterson was among a group of superintendents invited to meet with representatives of Social Sentinel, a company that uses software to monitor social media posts for potential threats to students and school districts. The company proposed dinner at Ouzo Bay, a Greek restaurant, according to emails obtained by MinnPost. Peterson accepted the invitation and later thanked the company’s representatives for the meal.

A month later, the superintendent signed a three-year contract, at $23,500 annually, with a company that pledged to alert the district to “threats shared publicly.” The district redacted the name of the company and specifics about how it works from the contract, which MinnPost obtained through a public records request. Minnetonka claimed that student safety could be at risk if the document was disclosed. But the language is nearly identical to a 2018 contract between Social Sentinel and a Texas school district.

Today, many school districts and some college campuses use social media monitoring services, seeing them as a way to keep students safe amid concerns over students’ mental health and deadly school shootings. But while some laud them as a useful tool, others say they’re ineffective at best, and at worst, an invasion of privacy — especially when, as seems to be the case in Minnetonka, teachers, parents and students are unaware of them.

Monitoring public posts

Social Sentinel bills itself as a social media threat alert service. It scans public posts on multiple social media platforms for content that might indicate a student or school in danger. If the post corresponds to a school district Social Sentinel serves based on geographic location, post content or “information volunteered by the post’s author,” the software sends an alert to officials. In order to help tailor the service to districts or campuses, the company helps its clients compile a list of keywords associated with schools or their surroundings.

Social Sentinel declined to make anyone available for an interview, but its website says the service isn’t “monitoring” students, because it doesn’t systematically scan every post, and is attuned only to certain types of posts. Social Sentinel says it does not allow districts to surveil individual people.

Not all districts that use social media monitoring services are shy about it. In the wake of the school shooting in Parkland, Florida, that killed 17, and another at a school in Kentucky, for instance, the Fayette County Public Schools in Lexington, Kentucky, started working on a multi-faceted plan to make schools safer. As part of that effort, the district started using Social Sentinel this year.

District spokesperson Lisa Deffendall said if even one person is saved from harm as a result of Social Sentinel, the product is worth the investment.

By that measure, it’s already paid off. Earlier this year, the district received alerts flagging posts of a person in an escalating state of mental health distress. “The posts kept escalating over a series of days,” Deffendall said. “You could see that the person was in a really dark place.”

The account’s handle, which Deffendall said is visible to the platform’s clients, did not suggest the poster’s name, but the district contacted followers of the account to determine the poster was a recent graduate now in college out-of-state. The district tracked down the school and shared the posts with its mental health professionals. “They were able to find out who the student was and connect them with support on campus,” Deffendall said.

In another case, Social Sentinel flagged posts that suggested bullying. “In that situation we were able to provide the necessary support for that student to be successful, emotionally and academically,” she said.

Deffendall said a team of people in Fayette County schools review the Social Sentinel alerts, of which there are typically two to three per week, and the majority of them amount to nothing. “Of course it’s stereotypical, but in Kentucky, we love basketball, so during March (Madness), we had a lot of alerts about shooting, but it was about basketball. You would pull it up and it would say ‘He can’t shoot from the outside.’”

Pressure on schools

Schools’ interest in what students are saying online is nothing new. Officials have long monitored activity on district-owned devices and networks, blocking websites that might not be suitable for children and keeping an eye out for bad behavior. But increased concern for students’ mental health in a world of cyberbullying, along with external threats like school shootings, has turned up pressure on schools to more proactively address signs that students or schools could be in danger — even on social media.

Rachel Levinson-Waldman, senior counsel to the Brennan Center’s Liberty and National Security Program, said there isn’t any empirical proof that social media threat alert systems work, just anecdotes.

In a statement, Social Sentinel spokesperson Alison Miley said the company doesn’t collect data. “Critics like to say this means that ‘there’s no data that shows the effectiveness of the software,’” she said, calling that criticism dismissive of cases where violence has been prevented.

But critics say the programs are bound to miss lots of student communications. For one thing, kids often lock down their social media accounts using privacy settings, and one of the platforms most popular with young people, Snapchat, is impenetrable by threat-monitoring services. 

For the last three years, the Educator’s School Safety Network, a nonprofit that does school safety training, has analyzed media reports about school violence threats, finding about 40 percent are tied to social media posts. Of those posts, many are nonpublic, meaning they wouldn’t be detected by a service like Social Sentinel.

Another concern among critics is that even when students do make public posts, their meaning can be hard to decipher. “We know social media is super contextual. It is very open to interpretation. That’s likely to be especially the case when you’re talking about kids,” who often communicate in memes, jokes, cultural references and slang that’s designed to evade the understanding of adults, Levinson-Waldman said.

That’s one reason social media monitors create alerts that amount to false positives, which Levinson-Waldman worries might open students up to potential discipline. “You could look back retrospectively at 10,000 kids who posted something that might rub somebody the wrong way but they didn’t do anything. They’re not a harm to themselves, they’re not a harm to their classmates, they didn’t walk into school and do anything,” she said.

Social Sentinel’s spokesperson, Miley, said the company has improved its algorithms and has seen an 80 percent decline in false positives or irrelevant posts in the past year.

Levinson-Waldman is also concerned that monitoring student posts could lead to administrators learning sensitive information. That could include finding out a student is queer or is struggling with mental health and reaching out to their community for advice. Since social media monitoring systems are often geared toward picking up posts about mental health, they may discourage students from having open discussions, she said.

“There’s a concern that students may become less willing to seek out information online which can be an incredibly helpful resource, especially for kids in families or in schools that may not be as supportive of them,” she said.

A lack of transparency

It’s unclear how many school districts in Minnesota are using Social Sentinel or other social media threat alert services.

Gary Amoroso, the president of the Minnesota Association of School Administrators, said he hadn’t heard of any districts using the services. But as social media platforms evolve, “There is a lot of conversation about the impact of social media on students and staff, and I know that this is a topic that’s very important to schools, wanting to try to stay ahead of it,” he said.

After an increasing number of national media reports on school districts’ use of social media threat alert companies, MinnPost contacted several districts in the Twin Cities metro area in June to ask about their use locally. All said they didn’t use such services, except one: Minnetonka.

When MinnPost requested any Minnetonka Schools contracts related to monitoring student activity on tech devices, however, the district — citing concerns that contract details would jeopardize school safety if made public — asked the state’s Data Practices Office for an advisory opinion on the request.

“We do not believe it is wise — or in the best interest of the safety of our community — to detail for you all of the tools or approaches we may use,” district spokesperson JacQueline Getty wrote in an email. The district declined an interview request for this story.

(Update: after this story was published, Minnetonka Schools sent an email to parents saying the story “unfairly attack(ed)” the district. In the course of reporting the story, the district was given the opportunity to respond to questions raised by its contract but declined to comment.)

Through lawyers, the district argued the information could be kept private under a security exemption in Minnesota’s data practices statute. When the state office declined to give an opinion, the district released a redacted version of the contract it signed on August 11, 2017. The company’s name and some information about how it operates were redacted, but the language was nearly identical to a contract between Social Sentinel and Katy Independent School District in Texas.

MinnPost also requested communications between Minnetonka Schools and Social Sentinel. The district sent some, but withheld others, again citing a security exemption. The emails MinnPost received confirmed Peterson met with officials from Social Sentinel in Baltimore and corresponded with the company.

The media are not the only ones Minnetonka schools appear to be reluctant to talk to about social media monitoring, however. It appears the district has also kept parents and teachers in the dark.

Amber and James Bullington, Minnetonka parents, told MinnPost neither they nor their kids had heard about the district’s contract. And aside from concerns that it’s not a good use of the district’s money, given how good kids are at evading adults online, Amber Bullington said she has privacy concerns about the technology.

“I’m horrified at the rights that we’ve given up in the interest of security. Do I feel safer going to the airport given all the rights I’ve given up? Nope. Not at all. Do I feel safer going to (Xcel Energy Center) because people can’t carry a diaper bag? No, I don’t feel safer at all,” she said.

Bullington also expressed concern that she’s not aware of how the school would handle any alerts it received. “When this comes up, do they call the police and the police go off to the house? Do they get a phone call from the superintendent? What happens? But we don’t know that, so how can I trust that even if an issue arises, it will be handled in a way that won’t harm the child further,” she said.

Ann Hersman, the president of the Minnetonka teachers union, said she’s not aware of the district communicating with teachers about the contract. She said she’d like to know more about the scope of the program and how the district will responsibly use and store information it gathers.

“Making our students feel welcome and safe are two of the most important things we do as educators, but we need to make sure they are done carefully and collaboratively with all the educators in the building,” she wrote in an email.

Amanda Klinger, the director of operations for the Educator’s School Safety Network, seems to agree with that sentiment. She encourages districts to think critically about whether spending on social media monitoring tools is worth it, not just financially but in terms of fostering a culture of trust and openness in schools.

“(The problem) that I have with all of these tools is that we are outsourcing emotional labor to technology,” Klinger said. “If I was concerned about my friend … in the olden days, I would tell an adult. I would tell a trusted teacher.

“When we say we are going to outsource that job, that emotional labor to technology, there’s a concern. Can tech do it as well?” she said.

Minnetonka, too, may be having doubts about the efficacy of social media monitoring. On Tuesday, when MinnPost contacted the district to confirm the contract it received over the summer was still current, Getty responded: “Yes, but we are considering cancelling it.”

Tell us about your experience with social media monitoring

MinnPost wants to report more on social media monitoring by school districts in Minnesota, but we need your help. If you have an experience with social media monitoring in school, please fill out this survey so we can contact you for our reporting.

Comments

  1. I think a bigger-picture issue here is how easy it seems to be for companies to get school administrators to purchase technology or other “latest things” without proper review. And out-of-state conferences seem to be a major forum for such schemes.

    I can’t tell you the number of times I had to try to implement an inappropriate product sold to one of my administrators over drinks or dinner at a conference. Of course, that’s a generalization. Many admins are cautious both about purchase decisions and listening to hucksters, fortunately.

  2. Aside from the privacy issues with monitoring students’ non-school activities (which are huge!), the lack of transparency here is incredibly troubling. It appears the superintendent signed a contract for thousands of taxpayer dollars based on a nice dinner out, without oversight from the school board, and then has tried to cover up the details of this public contract.

  3. And the parents “horrified” by this will be the first to sue if a social media clue is missed and a tragedy occurs.

  4. If the companies were in some way getting around blocks on private postings that would clearly be a problem. There is no expectation of privacy for public posts. The intended audience for these posts is everyone so I don’t see where there is a problem. If you post something publicly you should realize that it is available to anyone and everyone and have no cause to complain if it is seen by someone you did not want to see it. The one problem I do see is that by keeping it secret the schools are spending money with no accountability to the taxpayers.

  5. Social media sites and certain apps are already doing the same thing. Not sure what the problem is here. Given the number of threats that children have made on social media in regards to doing harm to themselves or others, I see it as a good use of money. Many parents are failing at monitoring their children’s social media accounts and seem oblivious as to what they’re actually doing online. Kudos to the school for putting children’s safety first.

  6. This is akin to districts selling personal data to commercial sources or military recruiters or colleges, for that matter. No one seems to be informed. I get the security angle, but I do not comprehend all these activities not being signed off on by both teachers and staff.

  7. “.. horrified at the rights that were given up in the interest of security.” I’m horrified to think that people post things publicly and are then surprised when the public uses the data – in this case for the purpose of protecting our children. Congratulations to Minnetonka for using PUBLIC social media data to proactively protect children and staff. For those who are concerned that their district is misusing tax dollars – I’d suggest they attend a school board meeting, take the time to review the annual budget and participate in the local school board elections.

  8. Actually, I don’t see any examples of prevented violence here. They have located some students experiencing emotional distress, but we can’t say those students wouldn’t have eventually sought help on their own or at the urging of someone else.

    We have a lot of tech companies and software companies that roll in with big promises and minimal deliverables. Everyone talks about algorithms, but those conversations more often than not look like discussions of magical skills. The company can say, for instance, that they’ve reduced their false positives by 80% but, without knowing the baseline number, that doesn’t necessarily tell us much. If that’s 80% of a million, you’re still left with 200k false positives. A single false positive could cause havoc in almost any high school depending on the scenario.

    The only thing that really bothers me here is the lack of transparency. The security claims that MTKA tried to make here are clearly facile.

    The fact that they kept this more or less secret from the student body, parents, and teachers is alarming. There’s absolutely no reason why this should have been kept out of the budget report, and there’s no reason that the company and the contract couldn’t be reported. This is a school safety program, not a CIA intelligence operation. If, for instance, divulging the mere existence of the program could endanger its effectiveness, i.e. students would evade surveillance if they knew about it… that simply betrays how weak and ineffective the program actually is. If it’s THAT easy to evade, it can’t produce results.

    It’s also a little troubling that a superintendent can make a decision like this without oversight. I don’t mean to disrespect school superintendents, but something like this is not going to be part of their normal skill set. It could be relatively easy to lure a superintendent into a program like this with fancy technical jargon and big promises.

    Tech companies hire salespeople to pitch their products, and those salespeople focus on closing sales, not solving technical issues. And of course in many cases the proprietary nature of the technology means they can’t actually tell you how they do what they promise to do, so it can be impossible to evaluate the claims. Privacy can prevent disclosure, so they can’t give you verifiable examples, but that doesn’t stop them from making impressive claims.

    And finally, how does anyone expect to keep something like this secret and yet develop any kind of response plan? Sure, maybe you get a certain number of “reports” every week or so, but you have to build a response protocol. Who actually gets these reports? How are they evaluated? What’s the protocol? How do you audit the protocol and the reports? Even if you get a legitimate flag, if you don’t handle it properly it’s not doing much good.
