As you may know, there was a recent campaign by activists to pressure Facebook to remove misogynistic, in some cases graphic, posts glorifying rape and domestic violence. (If you can stomach it, Buzzfeed has some screenshots of the misogynistic posts.) Part of the motivation of this campaign was to highlight Facebook’s ridiculous standards: Facebook had been slow to remove the misogynistic posts, in some cases saying they weren’t hate speech, while removing some pictures that featured breastfeeding, on the basis that the breastfeeding pictures violated Facebook’s prohibitions on nudity. As a result of the campaign, in which advertisers were pressured by activists, Facebook has pledged to review and revise its policies, admitting that it failed to remove posts that constitute gender-based hate speech:
In some cases, content is not being removed as quickly as we want. In other cases, content that should be removed has not been or has been evaluated using outdated criteria. […T]he guidelines used by these systems have failed to capture all the content that violates our standards. We need to do better – and we will.
In considering this controversy, it’s important to remember, as Mary Gardiner wrote last September (in the context of the Reddit/creepshots controversy), the phrase “freedom of speech”
isn’t a very specific term on the Internet: it can mean anything from “I believe governments should not restrict expression” to “I believe that never deleting comments* from a forum improves the quality of discussion” to “I believe that never deleting comments from a forum is the only ethically correct way to run a forum.” (Or the disingenuous version: “I believe that I personally should be able to say what I want in any forum.”)
(Gardiner goes on to point out that “No one seems to believe this about spam.”)
In response to Facebook’s announcement, Jillian C. York, Director for International Freedom of Expression for the Electronic Frontier Foundation–a cyberspace civil liberties group whose work I generally respect and admire–wrote an article in Slate questioning whether Facebook should be in the business of policing “hate speech” at all. Conceding that Facebook has a right as a private business to remove content as it sees fit, York argues that Facebook is ill-equipped to handle the hate speech complaints that come with serving a billion users worldwide, citing examples where Facebook has erred. But then she goes on to argue that Facebook shouldn’t engage in censorship at all, given its status as “the new town square”:
Should private companies be determining what constitutes “hate speech”? In the United States (where Facebook is based), most of the speech flagged by Women, Action & the Media as offensive is, while abhorrent, protected by law. And while Facebook may be private, many of its users treat it like the new town square, making it more of a quasi-public sphere. While the campaigners on this issue are to be commended for raising awareness of such awful speech on Facebook’s platform, their proposed solution is ultimately futile and sets a dangerous precedent for special interest groups looking to bring their pet issue to the attention of Facebook’s censors.
Although I am very sympathetic to York’s argument, ultimately I must respectfully disagree. First of all, it’s just not feasible to expect Facebook to operate the way York suggests. Facebook’s business model involves posting advertisements on nearly every page served by the site, but a company will understandably be reluctant to advertise on Facebook if the ad might give the impression that the company supports misogyny (an impression that would not be wholly incorrect when the company’s ad is paying for the server and bandwidth costs for a misogynistic post). Facebook has to remove gender-based hate speech for the same reason MTV bleeps out the word “hash” in the Weezer song “Hash Pipe”. Expecting an advertisement-supported site to allow all constitutionally-protected content is a pipe dream.
More importantly, I think the greater issue is one company’s extent of control over Internet discourse. York’s article is accompanied by a photo of Mark Zuckerberg, with the caption “Should we give this guy the power to decide what is and isn’t hate speech?” My answer, if we’re talking about the majority of public discourse on the Internet, is a resounding “no”. Indeed, if we’re talking about the majority of public discourse on the Internet, I don’t want Mark Zuckerberg–or anyone else–to have the power to decide anything. And that’s why the problem isn’t that Facebook bans hate speech; the problem is that Facebook has become the de facto default “town square”. It’s dangerous to allow any one company or entity to have this much power.
If you are worried about Facebook having the power to determine what is “hate speech”, then the proper solution is to encourage people to stop viewing Facebook as the de facto method of interaction on the Internet. Encourage people to use other social networks in addition to Facebook. Encourage people to use forums outside of Facebook. Share things on your own site instead of just on Facebook. (My web host, NearlyFreeSpeech.net, has relatively few restrictions on content and is pretty affordable.) I’m not suggesting that people stop using Facebook, but merely that we dissuade people from only using Facebook. If we work to ensure that there is a variety of “town squares”, then we won’t have to worry as much about the restrictions at any particular one.