Sociable media: the case for promoting positive social norms

Taken from Pleito de Suegras by José Guadalupe Posada (circa 1880–1910)

In the UK, around 30% of adult women choose not to dye their hair, and that proportion is falling steadily. The data are sketchier for men, but the figure is more like 60%, and it is falling more sharply.

This disparity of choice interests me. It doesn’t come as a surprise, but the precise cause is harder to identify. My knee-jerk reaction is that it’s the long legacy of differences in aesthetic expectations between the genders: disproportionately, women have felt an expectation to appear younger.

But a singular cause is hard to pinpoint. It’s almost chicken-and-egg*: was the creation of and continued demand for hair-dye caused by a pre-existing expectation, or did the manufacturers of the product create, stimulate and perpetuate this pressure to shift products?

The answer is probably a lot of both, and that’s what I find interesting in the context of emerging behaviours in digital spaces: effects can also be causes.

The hair-dye example illustrates that after a while, even if a common behavioural trend had a single, defining origin, the fact people do it becomes a reason for people to do it. People like to colour their hair, so hair dye exists. Because hair dye exists, people like to use it.

But, because this behaviour is commonplace, people, some more than others, start choosing to do it because they feel there is an expectation upon them. So far, this is the best way I have found to contextualise the relatively sudden decline of digital social spaces from cordial and discursive to aggressive and hostile.

Dick Costolo, then CEO of Twitter, put it bluntly in a leaked internal memo: “I’m frankly ashamed of how poorly we’ve dealt with this issue during my tenure as CEO. It’s absurd. There’s no excuse for it. I take full responsibility for not being more aggressive on this front. It’s nobody else’s fault but mine, and it’s embarrassing.”

“It’s no secret and the rest of the world talks about it every day”, Costolo continues. “We lose core user after core user by not addressing simple trolling issues that they face every day.”

The challenge of moderating social spaces, even semi-automatically, is considerable. Every social platform has a long, convoluted code of conduct to protect its revenue and, to some degree, its users. Yet some of the worst behaviour on social platforms doesn’t come from single users pushing content through the cracks between terms of use. Rather, much of it comes from people, unconnected other than by the platform itself, behaving badly together.

Group-based moderation is poorly defined and difficult to enforce. It’s like trying to hand out speeding tickets to a stream of motorists based on the traffic’s average speed: every individual could argue, rightly, that they’re not in control of the group and therefore can’t influence the average speed significantly. Yet we’ve all been on motorways where large numbers of drivers, perhaps the majority, are exceeding the speed limit. When it happens, it’s easy to go with the flow. Herd mentality.

When social media got going in earnest, it was common to see individual users publicly airing a grievance with a brand, usually over poor service. It was so common that, for a time, it started to seem like the de facto channel for customer care, and brands invested heavily in managing their social media reputations.

Much more common now is for users to swirl, suddenly and rampantly, around a particular issue: a point of politics, an individual’s behaviour, or an article cleverly constructed to provoke a response whether or not it’s been read. People jump on.

But I think there’s more to it than herd mentality: I think many people do it because they think that’s what they’re supposed to do, either for the sake of their identity or because they think that’s how these things work. This is why I think moderation is failing. Moderation is restrictive, not permissive. Being explicit about what individuals can’t do does little to moderate what groups can do.

Changes would be needed for social media platforms to curtail bad group behaviour, and currently it’s hard to imagine how those changes would not impinge on individuals’ freedom of expression. But that’s because of the restrictive overtones of moderation: these dynamics can be extremely subtle, so piling on more restrictions is unlikely to help.

Instead, functional changes to how social media platforms work, permissive rather than restrictive, may protect the individual while also curtailing the extremes of the group. There is precedent: when brands got systematic about social media customer care, the barrage subsided. For the most part, they didn’t say ‘you can’t complain here’; instead, they found ways of stepping forward to manage each situation actively and elegantly.

The same may have to be true of the platforms themselves. They need to be able to tell when a group is sharing a joke and when it is attacking vulnerable people. They need to encourage the former, functionally, and discourage the latter, in part by identifying who in the group is participating only because they think this is what participation now means. Today the distinction is subtle, but the platforms need to know it when they see it.

*By approximately 340 million years, it was the egg.
