A poster at Codexon wrote a blog post bragging about the way he took advantage of the points system on Stack Overflow, the programming Q&A site. Reflecting on Adrian Chan’s recent post about the weaknesses of structured, game-like incentives in social media, I more than half expected to find griefers gaming the points system and messing up the community. I found something else instead.
Stack Overflow is a community for programmers to ask and answer questions. It has a detailed reputation system designed to reward participation and high-quality responses. For those unfamiliar with the site, the points system is well thought out with respect to the behavior it’s trying to foster. It is also really geeky, and rewards those who feel motivated and amused by thinking quantitatively about the ways their constructive participation gains them more status and powers on the site. The design of the site and its reputation system hits all four attributes in Peter Kollock’s taxonomy of social motivations as cited by Joshua Porter: reputation, reciprocity (you can see who responds to whom), efficacy (it’s intended to reward fast and good answers), and attachment to group.
The reputation troll bragged about his techniques for racking up reputation points: answering quickly, regardless of the quality of the response; downrating comments ranked ahead of his own; and using formatting to make his posts stand out.
In the comments to the post itself, a good number of commenters called him out for obnoxious behavior, despite the poster’s insistence that he was merely gaming the system for his own entertainment and to point out its weaknesses. One example: “But isn’t that missing the point? I use SO and gathered some (+2000) rep but my main goal is to provide answers to actual questions and not to abuse the achievement system.” It was mildly encouraging that the comment thread didn’t reveal a throng of trolls outing themselves for self-serving anti-social behavior. But it was only mildly encouraging. The overall tone of the conversation on the poster’s site was one of frustration that the poster was willing to go to the trouble of decreasing the quality of information for the community in order to gain an essentially pointless reward.
Even more interesting was the “meta conversation” on the Stack Overflow site itself. There, participants analyzed the troll’s behavior and identified which of the reputation-gaming tactics were actually destructive to the community. In practice, posting a quick low-quality response is not that harmful, since other people quickly follow with better-quality responses that get up-rated, and the original low-quality comment floats down below the fold. On the whole, adding formatting and images to posts is a good thing, since the visual emphasis makes the content easier to understand.
The one thing that site participants saw as truly harmful was the strategic downvoting of others’ comments in order to make one’s own comments rise in value. Jeff Atwood, aka @codinghorror, the site’s lead developer, commented on this point, saying that this is the one behavior they are considering changing the algorithm to discourage.
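To make the mechanics concrete, one common way a scoring algorithm can discourage that tactic is to charge the voter a small amount of reputation for each downvote, so downvoting rivals to climb the rankings carries a cost. The sketch below is a toy illustration in Python, not Stack Overflow’s actual implementation; the point values and names are hypothetical.

```python
# A minimal sketch (not Stack Overflow's actual algorithm) of a reputation
# rule that discourages strategic downvoting: each downvote costs the voter
# a little reputation, so mass-downvoting competitors has a price.
# All names and point values here are hypothetical.

UPVOTE_GAIN = 10     # reputation the answer's author gains per upvote
DOWNVOTE_LOSS = 2    # reputation the answer's author loses per downvote
DOWNVOTE_COST = 1    # reputation the *voter* pays to cast a downvote

def apply_vote(reputation, voter, author, is_upvote):
    """Update a {user: reputation} dict for one vote on an answer."""
    if is_upvote:
        reputation[author] = reputation.get(author, 0) + UPVOTE_GAIN
    else:
        reputation[author] = reputation.get(author, 0) - DOWNVOTE_LOSS
        # The voter's cost is what makes strategic downvoting unattractive.
        reputation[voter] = reputation.get(voter, 0) - DOWNVOTE_COST

rep = {"alice": 100, "bob": 100}
apply_vote(rep, voter="alice", author="bob", is_upvote=False)
print(rep)  # {'alice': 99, 'bob': 98}
```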
In reaction to the griefer, you can see the community assessing its own practices and identifying an area to improve. The developer with the power to make changes is participating in the conversation and resolving to make changes to protect against the problem. Watching the Stack Overflow community react to an antisocial participant suggests something that is as important in a social system as any particular rule or feature: the ability to evolve the rules.
One of the agile practices that the Socialtext development team uses is the retrospective. We produce software in two-week iterations. At the end of each two-week period the team reviews the iteration: how people felt about it, what worked well, and what needs improvement, and we identify items to improve. For example, we observed that the review of stories for the upcoming iteration had a tendency to fall through the cracks. So we tweaked the use of wiki page tags, which serve as a lightweight workflow reminder to identify when a story is in good enough shape for review.
What’s important here is not the specific process we use, or the specific improvement to the process, but the ability of the team to reflect, identify a problem, make a change to address the problem, and assess whether the solution is working. It does help to use lightweight tools that can easily be changed, e.g. defining a tag that can be applied when a story needs review. Unlike the Stack Overflow community, our team does not calculate and display metrics on an individual basis; we’re striving for team goals, delivering software that meets customers’ needs when we said we’d do it. So we look at the team data explicitly, and handle individual variance informally. The point is that we have a system that fits the culture, and we can evolve the system to address problems.
So, in response to Chan’s post, it may matter less what sort of feedback system is used (implicit or explicit, numeric or social) and matter more that the community itself is able to change the rules.
Adina,
You make an important distinction here, and one that I’d like to explicate a little further if I may. First, I’d like to treat rules as constraints. This is because I think it’s rare that we find genuinely rule-bound social behaviors online. Instead I think we have two distinct phenomena that become mutually implicated through use by users.
We have the design constraints, which involve data structures, links, relations, associations, algorithms, and so on at the back end, rendered at the presentation layer as ordered, arranged layouts. These views often capture and display aggregate user activities (individual actions) and are in turn often dynamic: content changes contingent on use by users. So we can pattern these social interactions (e.g. social design patterns), and there are a large number of these that we can identify and categorize as forms. These design patterns both constrain and enable interaction and thus can be associated with some common social phenomena. But they don’t cause the phenomena, and are not universal. In other words, a site’s user culture, demographic, and theme (music, social networking, news…) also contribute to emergent social practices.
Then we have social constraints, and these “exist” only to the extent that a user culture shares norms, values, perspectives, and all the other soft stuff that comprises tacit social understandings of what’s going on and how to participate.
Twitter, for example, is only thinly patterned from a social design pattern perspective, and for that reason has probably given rise to many and diverse tacit social practices. The fewer the design constraints, the more likely it is that tacit social practices will organize interaction: through mutually recognized social conventions and their failure.
I like your conclusion that there’s something to be unlocked in enabling a user culture to evolve its own rules. Perhaps we could then delineate two approaches: agile social design constraints and social interaction practices. Design constraints could be tweaked to mitigate the effects of gaming the rules; algorithms could be changed to suppress self-reinforcing behaviors; sorting, filtering, and ordering of content could be recalibrated to emphasize different views of activity; and input features (selecting, tagging, quantifying, etc.) could be used to differently emphasize user activity captured and dynamically re-presented.
And then to your point, social interaction models could be pursued to steer social practices. For example, use of a moderator. Or transactional systems involving one, two, three, group, community, or public interactions. Sharing, validating, favoriting. Use of trust, expertise, social status, and so on. All of these and more put to use in service of different kinds of social: from the competitive and strongly identifying activities of fan cultures to the more dispassionate activities related to news and information sourcing and consumption.