The harassment of the teens who spoke up after the Parkland massacre is a black-and-white case study of the impotence of today’s social media giants.
Much like gun violence in America, toxicity on social media is so common that we’ve become inured to it: “Well, that’s the way things are.”
To paraphrase some badass teenagers in Florida: I call B.S.
The harassment of and conspiracy theory-mongering about those same teens, who survived the Valentine’s Day shooting in Parkland, Fla., and are now speaking out in favor of gun control, should give us all pause. YouTube, Twitter and Facebook are, once again, proving that they are ill-equipped to control malicious actors on their platforms, because they’re afraid to call themselves “media companies” (even though they are) and take some responsibility for the content they amplify.
This is as black-and-white as it gets.
Unlike a political campaign, where reasonable people can disagree about winners and losers, there are not two “sides” in a mass shooting. Everyone who was at Marjory Stoneman Douglas High School in Parkland on Feb. 14 should have the unflinching, immediate support of the leading social media platforms.
There are basic questions at play here that consumers should be asking: What do these sites believe in? Do they have values? How are Twitter’s rules, Facebook’s policies and YouTube’s community guidelines informed by these values? Are those rules actually enforced the same way for everyone?
And if the platforms can’t answer those questions: Why should we use them?
YouTube (eventually) removed the offending conspiracy-theory video about one of the Florida teens, saying it violated a “policy on harassment and bullying.” Pressed for more detail by our sister site, The Verge, it provided the following comment:
This video should never have appeared in Trending. Because the video contained footage from an authoritative news source, our system misclassified it. As soon as we became aware of the video, we removed it from Trending and from YouTube for violating our policies. We are working to improve our systems moving forward.
The poor communication around toxicity on social media is disheartening. These are extremely hard problems to solve, but right now it too often feels like no one is trying to get ahead of the next blowup. So here are a couple of starting points.
Consider Instagram co-founder and CEO Kevin Systrom, speaking on a recent podcast:

“Do I feel personally responsible for making the world a safer place online? Yeah. Am I gonna get it perfect? No. But the intention is there. And I realize that the products we build have real impact on people. They have impact on their mental health, they have impact on their social networks in the real world, their relationships with their friends, and that at its best that can be such an amazing thing.”
This is a perfect articulation of a healthy, smart, apolitical value: The people who use our product can be changed by it and we have a responsibility to protect them. Later in the podcast, Systrom recalled how he and co-founder Mike Krieger personally deactivated trolls’ accounts in the app’s early days, a reflection of another good value.
“If you’re here to cause trouble, you don’t belong,” Systrom said. “I think that set a tone in the community.”
That tone is still coming through loud and clear. It is no surprise, then, that Instagram is the most pleasant social network I have ever used.
Meanwhile, I’ve been on Twitter for nearly 10 years, and I couldn’t tell you with much certainty what I could tweet that would definitely get me banned from the site. To its credit, Twitter recently rewrote its rules about abuse, and the worst of the post-Parkland abuse seemed to proliferate on YouTube and Facebook, not Twitter. But there’s no way for outsiders to know whether that’s actually the case.
So, in addition to spelling out values, here’s another proposal: Platform moderators should publish what they’re doing, every day: “We banned this many users for copyright violation; we put this many accounts under review for alleged harassment; we concluded reviews of this many accounts today and restored them to normal because we found they did not violate the rules.”
Crucially, these reports should not include the names or usernames of anyone who is under review or has been banned — that would just become a reward for trolls. But an anonymized running record, like a community police blotter, would be a valuable window into how the rules work and why moderators behave the way they do.
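To make the proposal concrete, here is a minimal sketch of what one day’s anonymized report could look like as structured data. Everything here is hypothetical: the field names, the category list and the numbers are invented for illustration, and a real platform would publish far more categories than these.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date


@dataclass
class DailyModerationReport:
    """One day's anonymized moderation summary -- a 'community police blotter'.

    Only aggregate counts appear here, never usernames or account IDs,
    so the report cannot serve as a trophy for banned trolls.
    """
    report_date: str
    banned_for_copyright: int
    under_review_for_harassment: int
    reviews_concluded: int
    accounts_restored: int

    def to_json(self) -> str:
        # Serialize the dataclass fields as a JSON document for publication.
        return json.dumps(asdict(self), indent=2)


# Hypothetical figures for a single day's report.
report = DailyModerationReport(
    report_date=date(2018, 2, 26).isoformat(),
    banned_for_copyright=412,
    under_review_for_harassment=1038,
    reviews_concluded=2115,
    accounts_restored=867,
)
print(report.to_json())
```

Publishing a machine-readable record like this every day would let journalists and researchers track enforcement trends over time, which is exactly the kind of accountability the platforms currently avoid.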
It’s time for these platforms to grow up and start behaving like the big companies that they are. They need to have rules that are transparent, well-communicated and aggressively enforced. If they continue to fail us here, then they deserve no loyalty from their users, and we should take our thoughts elsewhere.