I’ve long been fascinated by the social challenges of online communities, and how they mirror and differ from “real life” societies. I was recently pondering this with regard to the Facebook “real names” policy, and whether it was necessary, or at least reasonable, for them to assert that level of control. Then I was reading about the #GamerGate situation, and ran into similar thoughts about the necessity, and the perils, of policing online spaces.

In case you missed it, Facebook recently started enforcing a policy which I didn’t even know existed: you are obliged to use your real name. Specifically, they started suspending the accounts of a whole load of drag performers and transgender people, seemingly because their “real” name was taken to be the one they were assigned, along with a gender, at birth. It would be an understatement to say that this caused a few people to get a bit upset, and there was talk of a “mass” migration to beta-stage rival Ello. Facebook eventually apologised, blaming a rogue individual abusing their abuse reporting tool, and offering rather vague assurances that while the policy would remain, it would be applied better.

Now, a real name policy is actually not an uncommon tool for combating abuse in an online community. See for example this policy on MeatBallWiki, which was a community for discussing communities, and so very self-conscious of this kind of thing. The reason it works is the same reason that people feel liberated at a masked ball: hiding behind an imagined identity allows you to distance yourself from your actions, and “try on” a personality that you wouldn’t otherwise have. That liberation is exhilarating, but it can also lead to people bringing out aggressive or unpleasant sides of themselves, and turning the community into a less pleasant place for all involved.

On the other hand, as Facebook recently discovered, it’s a problematic policy to enforce, not least because it’s rather hard to define what a “real name” actually is (outside of fantasy novels, where someone or something’s “real name” is often a word of power which grants magicians control over them or it). In fact, that MeatBall policy links to a rather philosophical discussion of exactly that. ((For those not familiar with traditional wikis, predating Wikipedia, the fact that there are multiple voices discussing things on that page may not be obvious; it’s not an attempt to summarise the discussion, it is the discussion, just not in comment-thread form.))

The second controversy that I was thinking about was the one known by the hashtag #GamerGate. This is a rather confusing situation, and I won’t go into the details here, but while reading about it, I came upon a section of this TechCrunch article discussing the alleged censorship of comments which helped fuel the controversy:

The balance between moderator power and user power in online communities is entirely one-sided. Users have no power to hold their moderators to account, and there is typically no user oversight regarding whose content gets removed and who gets banned.

This struck me as a rather sweeping generalisation. The online communities I’ve participated in have all had very explicit systems of accountability at the heart of their moderation systems, and adapting the rules to suit the community is generally the subject of much discussion. Often this does take the form of a series of crises or revolts, with new structures emerging as the result of each, building toward an unattainable and shifting ideal; in that way, it’s rather like “real life” social organisation and government.

For instance, in 2001, the BBC took over h2g2, which at the time was quite a small, independent-minded community. The BBC, per corporate policy, implemented a whole set of new rules, including mandatory moderation of all posts, which resulted in pre-BBC content being hidden while moderators read through the backlog of smileys and people discussing their lunch (what a bizarre job!). The community, I think it’s fair to say, was not happy with this, and eventually, through a combination of protest and demonstration of maturity, persuaded the BBC to restore “reactive moderation” – moderators would only read comments flagged by a user clicking a “yikes button”. h2g2 has recently become an independent entity once again, and while I’m not active there at the moment, I imagine some of these policies are having to be renegotiated all over again, since the bureaucracy of the BBC no longer provides a backstop against which to frame them.

Like any attempt at government, democratic or benignly autocratic, the governance of an online community is a tricky thing to get “right”. Wikipedia has gone from an early model where “Jimbo” Wales had a “benevolent dictator” role with the ability to directly ban users, to one with an extremely complex set of elected committees, public hearings, and all manner of checks and balances. Other sites have experimented with a more anarchic approach, generally resulting in a community of only the toughest-skinned individuals, able to cope with all the hurtful things people can do if not policed.

All too often, people complain about rules not because they are unfair, but because they disagree with them; or they shout “censorship” because theirs was the voice silenced by a moderator, as lampooned neatly by a recent xkcd strip. Voices in a community can, of course, be silenced by abuse of power, and a strong community should be awake to that risk. It’s the old question of “who watches the watchmen?”

Which brings me back to Facebook – clearly, there isn’t a lot of accountability in a commercial entity boasting millions of users, without any clear social mission. Ello’s main differentiation from Facebook is its manifesto, promising above all that “you are not a product”, which is to say that the site won’t be paid for through the traditional route of advertising and data mining. But the users flocking there over the “real names” fiasco weren’t being harmed by Facebook’s commercialisation; they were being harmed by its centralised, unaccountable governance structure.

I think the actual problem in Facebook’s case is that they are making rules assuming that Facebook is a single, governable, community; but social networks are rather different from the forums, wikis, and discussion boards which preceded them. They don’t consist of a single group of people, united by a common interest, aim, or outlook on life; and they don’t have any natural source of hierarchy or rules. They are probably better described as platforms rather than communities – they are the tools used by thousands of loose, overlapping, communities, using them in different ways, each with their own social norms. That’s not to say they shouldn’t have rules, and tools with which their members can protect themselves, but to globally enforce a consistent set of rules probably no longer makes sense. And yet, if Facebook and Twitter were truly just a means for smaller communities to form, they wouldn’t dominate the online space in the way they do; there is a sense in which they are single, global, conversations, like the proverbial cocktail party ((Cherry, 1950-something. A citation I haven’t had to recall since I graduated 10 years ago.)) where everyone is talking at once and you can choose to tune into different conversations. Either way, the lessons learned from online communities over the last few decades are only partially applicable to these rather different spaces.

So, moderation is necessary, but fairness is a genuine challenge; accountability is possible, probably, but may need the odd revolution; and Facebook is not a single community, except when it is. And on that rather unsatisfactory note, I will leave these ramblings, for now.