
In a recent piece for The Intercept, Sam Biddle outlines how Project Veritas, a conservative group known for using deception and subterfuge in its attempts to expose the alleged misdeeds of leftists and left-leaning outlets like The Washington Post, has openly violated Facebook’s guidelines prohibiting the use of fake profiles in the service of “coordinated inauthentic behavior.”
The article, which includes deposition testimony from members of the group admitting how manufactured Facebook profiles factor into their work, as well as context about the group’s backing, has this to say about Facebook’s oversight of the content it hosts alongside the company’s stated goal of stemming disinformation and propaganda:
The real issue is uneven, arbitrary enforcement of “the rules.” Max Read, writing in New York magazine on another social network’s enforcement blunders, argued that “the problem for YouTube is that for rules to be taken seriously by the people they govern, they need to be applied consistently and clearly.” YouTube is about as terrible at this exercise as Facebook is, and there’s a good chance that if Facebook treated malicious right-wing American exploitation of its network the same way it treats malicious foreign exploitation of its network, it would probably botch the whole thing and end up burning people who actually do use phony Facebook profiles for work toward the public good.
That a company like Facebook is even in a position to create “rules” like the coordinated inauthentic behavior policy that apply to a large chunk of the Earth’s population is itself a serious problem, one made considerably worse by completely erratic enforcement. It’s bad enough having a couple guys in California take up the banner of defending “Democracy” around the world through the exclusive control of one of the most powerful information organs in human history; if nothing else, we should hope their decisions are predictable and consistent.
While Biddle acknowledges that Facebook would probably screw up its attempts to enforce its policies against domestic political manipulation anyway, the fact that it gives the practice only a half-hearted, inconsistent effort doesn’t make matters better.
As the allusion to YouTube in Biddle’s closing additionally suggests, tech giants’ inadequacy as gatekeepers against disinformation and hate speech is a pattern of frustrating behavior for observers across the political spectrum. Recently, YouTube caught a lot of heat from the journalist community when Carlos Maza, producer, writer, and host of the “Strikethrough” video series at Vox, made a public plea to the video-sharing website in a series of tweets documenting homophobic and racist abuse by Steven Crowder, a conservative talk show host and self-professed comedian with nearly four million subscribers.
Crowder’s hollow defense against Maza’s compilation of all the times Crowder referred to him as a “queer” or a “Mexican,” or demeaned his “lispy” delivery while caricaturing gay men, has been that his is a comedy show and that his comments amount to nothing more than “playful ribbing.” To most objective observers, however, this is unmitigated bullshit. Crowder’s repeated jabs at Maza for his criticisms of right-wing talking heads like Tucker Carlson are far harsher than the barbs you’d reserve for one of your friends, and even then they’d probably be neither appropriate nor funny.
Crowder’s protestations about YouTube’s responses during this whole affair also miss the mark. Predictably, YouTube first addressed Maza’s plight by doing, well, nothing, claiming Crowder hadn’t violated its terms of service. This, like Crowder’s claims of innocence, is bogus. YouTube’s rules explicitly prohibit “content or behavior intended to maliciously harass, threaten, or bully others” and likewise treat hate speech as a violation. Representatives from the company explained that it opted not to take action against Crowder because he didn’t direct his viewers to harass Maza, which is immaterial to the above concerns and, at any rate, irrelevant given that Crowder himself was the one doing the bullying.
Eventually, however, enough people raised objections that YouTube moved to demonetize Crowder, itself a token gesture given that the conservative provocateur gets the bulk of his revenue from merchandise sales (including his ever-tasteful “Socialism Is for Fags” T-shirt depicting Che Guevara). Crowder’s reaction? This was YouTube caving to the demands of a corporation throwing its weight around to “censor” a conservative voice in accordance with the demands of a leftist who had targeted him, one of the “little guys,” because he didn’t like his viewpoints. Never mind that Maza is a gay Latino who regularly receives abuse both on Crowder’s channel and at Vox each time he makes a post. Right, Mr. Crowder, you’re the marginalized one here.
This isn’t censorship, though. This is a private company enforcing its rules by which Crowder did not abide. What’s more, it’s not even doing that right. For violations of its terms, YouTube should be removing this content, not simply demonetizing it. Instead, the offending remarks remain and Crowder gets to use this episode to rally his troops and paint Maza as the aggressor. Show your outrage by signing up for the Mug Club! What better way than to proudly exhibit your freedom!
At a minimum, this is an episode that makes YouTube look very bad. That its decision-making appears so wishy-washy lends credence to the criticism that the company is trying merely to avoid accusations of bias rather than doing the right thing. It doesn’t help either that these events are unfolding during Pride Month, an occasion for which YouTube has touted its commitment to the LGBTQ community. If it were really interested in upholding the civil rights of a vulnerable subset of the population beyond mere window-dressing, maybe YouTube would actually stand in solidarity with its LGBTQ creators rather than banning them too in the interest of purported “fairness.”
I mentioned Twitter in the headline for this article. Emil Protalinski, news editor for VentureBeat, trashes YouTube not only for perpetuating the Maza-Crowder fiasco but also for letting its automated recommendation system serve random people’s videos to pedophiles and for providing a platform for content creators to capitalize on the anger of impressionable young male viewers; he likewise takes Facebook and Twitter to task for their uneven commitment to rules they aver are clearly posted and stated.
In both cases, Protalinski sees failures to consistently uphold a set of guidelines occurring so often that there are simply “too many examples to list.” The instances he does highlight, meanwhile, are salient and illustrative. Regarding Facebook, its refusal to remove a video heavily edited to make Nancy Pelosi look drunk, senile, or some combination thereof drew wide criticism at the time for irresponsibly allowing false and misleading content to stand, contrary to the company’s stated goals.
As for Twitter, Protalinski cites the social media behemoth’s dilatory response in banning conspiracy theory promulgator Alex Jones after other apps and sites had already removed him from their services. If nothing else, Twitter is woefully behind the curve when it comes to properly policing the content it hosts. And, not for nothing, but why are there so many Nazis hanging around? Like, why is banning them evidently so controversial?
Lather, rinse, repeat. We’ve sadly seen this before and we’ll see it again. Protalinski writes:
There are two whack-a-mole cycles happening on Facebook, Twitter, and YouTube. First, these companies fail to write clear rules. Disgusting content on their platform is brought to light. It isn’t removed. Users protest. The companies point to their rules. More outrage follows. Finally, if there is enough of a blowback, apologies are issued, the rules are “clarified,” and the example content is taken down.
The second cycle is happening at the same time. A given rule is already clear and specific. Disgusting content is brought to light. It isn’t removed. Users protest. The companies fail to explain why it wasn’t removed immediately or make up excuses. More outrage follows. Finally, if there is enough of a blowback, apologies are issued, the rules are “clarified,” and the example content is taken down.
In these cycles, only blatant and high-profile cases are removed. And that process can take anywhere from weeks to months from when the original content was published. By then it has done the damage and generated the revenue.
In either scenario, the sticking point is not necessarily the specificity of the rules (although lacking clear standards of conduct is in and of itself a problem) but rather the inability or unwillingness to enforce them consistently, independent of political affiliation or other identifying characteristics. Without the requisite amount of outrage, or sufficient clout among those expressing it, nothing moves forward. Even then, actions taken are liable to be too little, too late, and backed by an inauthentic, insufficient rationale. In other words, and to echo Protalinski, the damage is done.
To be fair, this business of moderating the wealth of content that appears on the likes of Facebook, Twitter, and YouTube is no easy task given its sheer volume and the rapidity with which it is created. At the same time, this is the responsibility these companies bear as providers. If your priorities involve retaining your share of the content creation and streaming market and growing your business, you’re going to have to invest in at least a modicum of safeguards to ensure that users and creators alike feel comfortable using your platform.
So spare us the half-assed excuses and non-apology apologies. If people like Steven Crowder don’t want to play by the rules, invite them to abide by your code of conduct or find somewhere else to peddle their hate and disinformation. I, for one, could do without the moral quandary I face by using your services—and I know I’m not alone.