How Facebook integrates FriendFeed – Discovery vs. Privacy

This week, FriendFeed co-founder Paul Buchheit popped up on FriendFeed to let folks know that developers are quietly at work on a couple of longer-term projects that will help bring FriendFeedy goodness to the larger world. There has been a lot of discussion about the dropoff in FriendFeed traffic since the Facebook acquisition, and the appearance was intended to reassure the community. People weren’t reassured, not only because Buchheit didn’t share any details about what they’re actually working on, but because there is a fundamental question about how that integration would work, given a fundamental difference in the social models of the services.

Facebook is designed to share things only with one’s friends, while FriendFeed is designed to make things discoverable through the social network. These social models look superficially similar – a user shares content through a friend list – but are deeply different.

Facebook’s default today is private/symmetric. You need to be mutual friends to see each other’s content, and if you are not friends with someone, you have access to very limited information. There is a “fan page” model, but it is oriented toward “publishing/celebrity” rather than information sharing. By contrast, FriendFeed has a public/asymmetric model like Twitter. Information is public by default, you can easily discover someone’s content without any “friend” gesture whatsoever, and you can follow someone’s stream without a mutual friend commitment. Information and conversation are discoverable. FriendFeed has strong searching and filtering capabilities that let you find things and people you’re interested in.

These two social models reflect very different values. With Facebook, the value is to share things in confidence with one’s friends, and to have conversations in confidence. The deviations in the model that result from diverse friend networks, from disclosure through third-party applications, and from other sorts of “information leakage” are seen as problems – “privacy violations” that need to be controlled through configuration, through restricting information, and through policies that restrict information sharing.

With FriendFeed, the value is to share things publicly – to make things discoverable and sharable, in one’s social network and with others who may find it, and to have conversations that attract interested people. Communities that gravitated to FriendFeed included scientists, journalists, and educators – communities that explicitly valued that discoverability.

In the discussion on FriendFeed, the community was not mollified, because they fundamentally value the discoverable model of FriendFeed. For FriendFeed users, simply adding FriendFeed-style service integration into the symmetric/private Facebook model will make the service much less useful. A user will be able to more easily share updates from Delicious or YouTube or Last.fm to their friend network, but will be unable to discover new people and information.

This difference is often put with a value-judgement shortcut: Facebook is closed, therefore bad. This judgement is too simple – the problem is that as Facebook gains more and more power to share information, and the defaults remain private, actions like discussing news stories won’t be in the public domain, even if people would prefer them to be. But if the initial use case for many users is privacy, then changing defaults to increase sharing will have negative consequences.

For the community in the FriendFeed discussion – disclosure: myself included – the integration will have value if it brings more of the FriendFeed public/asymmetric discoverable model to Facebook, and will not have value if it doesn’t. Simply promising to bring FriendFeed features into Facebook is worthless without making that information discoverable.

How do you create a social network that enables privacy but promotes and rewards discovery? That is a challenge, and the way that Facebook integrates FriendFeed will show whether Facebook is interested in discovery – and whether they are up to the challenge.

Update: Questions about Facebook’s direction were short-lived. Later yesterday, Facebook announced that public updates would be searchable on Bing. Clearly Facebook is headed for more discoverability. The question is now how this will play out in terms of Facebook user expectations and user experience.

A time for focus, a time for distraction

Social messaging can be a quick way for a traveller to find a friend’s recommendation for dinner in a strange city, or for a salesperson to get a fast answer to a question while a customer’s on the phone. Realtime communication can enable rapid response, but a constant stream of chatter can be a time-consuming distraction.

In a Psychology Today article posted by Linda Stone and retweeted by Tim O’Reilly, a recent study by two MIT neuroscientists shows that multitasking and distraction make people less efficient at getting tasks done.

In response to O’Reilly’s post, pioneering internet educator Howard Rheingold questioned the assumptions around the research and its interpretation: “Regarding neuroscience abt attention, distraction, multitasking – is efficiency highest & only goal? What about discovery? Pattern-finding?” If multitasking makes us inefficient, is efficiency always desirable?

In response to Rheingold’s question, I shared an article I read this weekend contrasting the efficiency-oriented mindset of web developers with the focus of game developers. In a game context, the focus is on fun, story, and character, not efficiency. There are also some salient differences between social media and traditional games: “Of course the game world thinks of games as built by game designers & the games we play in social media are often nomic [i.e. players make up the rules]. Also what efficiency misses is that in social media we’re often paying attention to people not tasks.” Rheingold took this one step further: “Which leads me to wonder how much of the dreaded multitasking we do online is social discovery and relationship maintenance/repair.”

Efficiency isn’t necessarily the goal in social media. People are making social contact, developing patterns of social gestures that maintain relationships. When a colleague in Canada posts about tasty mango sushi, and a colleague in Portland, Oregon empathizes about the turn toward fall weather favoring warm soup, we’re not just spewing pointless trivia, we’re sharing a personal connection that otherwise doesn’t happen across many hundreds of miles. Mark Drapeau makes this point with typical good-humored provocativeness: “I think that collaboration is the end result of leveraging social networks, which is in actuality what the social networking tools are for.”

Rheingold proposes, based on his own experience, that multi-tasking may also help find meaning in diverse information: “I surf and task switch constantly, store and forward what I find, make notes, often find overarching patterns.” Rheingold believes that students sometimes need to learn to be less focused: “Focus has its place, but many of my students who are adept at it need to unlearn dependence on it to zoom out to big picture questions.”

Jim Pivonka agrees that multi-tasking is useful for young people learning, but brings evidence that it is otherwise counterproductive for getting things done: “Other than the learning task, multitasking & high performance task execution suspected pretty much mutually exclusive.”

In addition to learning, Rheingold posits art as an activity that is valuable, but not about efficiency: “To me, making art is an activity that is valuable for its own sake, not for the artifact or its utility, so efficiency is orthogonal… To paraphrase Kierkegaard, for me, making art ‘is a reality to be experienced, not a problem to be solved,’ or artifact 2 B displayed.” dl willson suggests that art may be efficient in a different way: “@hrheingold I would argue that art is efficient…because art is a spark. ‘Art’ is not the object but the spark.”

To be honest, I am not sure that I am correctly representing the dialog between Rheingold and Willson; they may be able to correct my misreading. Regardless of the respective understandings of art, it is clear that neither definition would meet the neuroscientists’ tests of task-based efficiency!

Several others suggest alternative models for focus. Brad Ovnell cites a different type of focus needed in Karate: “Loved sparring in karate b/c it developed ability to focus & look wide at once.” Gregory McNish suggests that perhaps focus should have a rhythm, in and out, like breathing.

Jonathan Pratt, an educator with neuroscience background, suggests that the neuroscience research is looking at task efficiency since that is easy to research: “I think it’s a matter of tackling the easier/more quantifiable questions first…brain’s very complex & neuro’s a young field.”

For Rheingold, the hyper-focus on efficiency calls to mind his earlier reading of work by Jacques Ellul, who articulated in the 1950s a grim vision of society being taken over by “technique” – technologies and highly structured activities that eat away at human autonomy and community.

A summary of the conversation: there are goals and values for multi-tasking and social media other than task-based efficiency. Social gestures, learning, pattern-finding, art – these are all very different from the task completion that is shown to be hampered by multi-tasking. Findings about the impact of multi-tasking on task completion are useful but limited. Hopefully future research will broaden its focus to examine the relationship between the experiences of multi-tasking and ambient sociality and other dimensions of life.

Salmon – re-assembling distributed conversations

Salmon is a brand new protocol proposal that promises to solve a problem that’s gotten worse with social media, and has been around since the early days of blogging – the problem of distributed and disconnected conversation. People engage in conversation across multiple tools, and there’s no good way of assembling a coherent view of that conversation.

The problem has gotten significantly worse with social messaging and social RSS readers. Twitter doesn’t even have evident threading (although the reply target is in the metadata). The brief nature of Twitter and FB posts means that a conversation is even more broken up, and conversations often segue back and forth between shorter Twitter messages and longer posts elsewhere. Services such as Disqus provide a workaround for the fundamental problem with the architecture of web conversation.
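Since the reply target does survive in the metadata, a reader could in principle reassemble threads from a flat stream. A minimal sketch, using hypothetical message records rather than any real Twitter API shape:

```python
def build_threads(messages):
    """Group flat messages into threads using reply metadata.

    Each message is a dict with "id", "in_reply_to" (None for a thread
    root), and "text" - modeled loosely on the reply-target field that
    social messaging services keep in post metadata. Returns a map from
    each root id to that thread's messages, in input order."""
    root_of = {}          # message id -> id of its thread root
    threads = {}          # root id -> list of messages in the thread
    for msg in messages:  # assumes parents appear before their replies
        parent = msg["in_reply_to"]
        root = root_of.get(parent, msg["id"]) if parent else msg["id"]
        root_of[msg["id"]] = root
        threads.setdefault(root, []).append(msg)
    return threads

# Hypothetical stream: two interleaved conversations.
convo = [
    {"id": 1, "in_reply_to": None, "text": "Salmon looks promising"},
    {"id": 2, "in_reply_to": 1, "text": "Agreed - if tools adopt it"},
    {"id": 3, "in_reply_to": None, "text": "Unrelated lunch post"},
    {"id": 4, "in_reply_to": 2, "text": "Adoption is the hard part"},
]
threads = build_threads(convo)
```

Even this toy version shows why the problem is hard in practice: it only works when every message in the chain is visible to the aggregator, which is exactly what cross-service conversation breaks.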

When a user makes a comment at an aggregator (such as FriendFeed/Facebook or Google Reader), the comment is fed back to the source, and the comment can be re-assembled at the original blog post. Crosspost sharing (the common situation where a user shares a video or song from YouTube or Last.fm or Blip to Twitter or Facebook) is handled by a crosspost reference to the source. OAuth is used to authenticate users and help prevent spam and flooding.
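The comment that gets fed back to the source would travel as a structured reply entry. A minimal sketch of building such an entry using Atom and its threading extension – the element layout here is illustrative rather than taken from the Salmon proposal itself, and the endpoint discovery, signing, and OAuth steps are omitted:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"
THR_NS = "http://purl.org/syndication/thread/1.0"


def build_reply_entry(author, content, in_reply_to):
    """Build an Atom entry for a comment made at an aggregator, ready
    to be pushed upstream to the original post's endpoint."""
    ET.register_namespace("", ATOM_NS)
    ET.register_namespace("thr", THR_NS)
    entry = ET.Element(f"{{{ATOM_NS}}}entry")
    author_el = ET.SubElement(entry, f"{{{ATOM_NS}}}author")
    ET.SubElement(author_el, f"{{{ATOM_NS}}}name").text = author
    ET.SubElement(entry, f"{{{ATOM_NS}}}content").text = content
    # thr:in-reply-to points at the original post, letting the source
    # blog attach this comment to the right conversation thread.
    reply_el = ET.SubElement(entry, f"{{{THR_NS}}}in-reply-to")
    reply_el.set("ref", in_reply_to)
    return ET.tostring(entry, encoding="unicode")


xml = build_reply_entry("alice", "Great post!",
                        "tag:example.org,2009:post-1")
```

The aggregator would then POST this entry to the source, which can verify the author and append the comment to the original post’s thread.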

While the use cases underlying the spec appear to be distributed comments on a single blog post, and distributed comments on a single shared object (a photo, video, song, presentation), I wonder whether the same method could even be used to coalesce distributed conversations in social messaging services themselves, where conversations are scattered and organizing them takes significant effort.

Social network visibility
In backchannel conversation about the need to make streaming conversations visible, Adrian Chan had this insight: “til posts refer to other posts there’s no communication system.” Salmon can potentially add the post references, and create a communication system out of today’s disconnected scattering of posts and comments.

If Salmon is adopted in tools and comes into common use, when posts link to posts, there will be a powerful consequence. Not only will the conversation be visible – the conversationalists will be visible, and the conversation flow will be visible. The social dynamic of conversation, which has been hidden in the bounces between services, suddenly becomes traceable. This has consequences for participants – it gives them a more coherent sense of who’s talking to whom, and enables our primate-evolved senses of trust and reputation to work. It could also enable a new level of social network analysis across services, potentially facilitating content recommendations, search, and consumer marketing analysis.

Conversational curation
One of the things that we’ll find, when the decentralized conversation is suddenly more visible, is that aggregation hasn’t solved the problem of sense-making. When the conversation is pulled back together, the result will often be a hairball of inter-related threads. The art, then, will be a process of curating the artifacts of conversation into something that does make sense for participants at the time and in the future. I suspect we will see the re-creation of some editorial techniques developed in some very old instances of conversational discourse represented in text, from Talmudic and Confucian traditions. Just as conversations need “tummlers” to facilitate civil and congenial experiences, the artifacts of conversations will need curators, and curators’ tools to pull them together.

Here’s an example of a Twitter conversation summarized and curated for folks to share later. The tools for this are very awkward today – manual gathering of each post, poster, and quote. It would be much easier to gather the thread with a gesture and then prune it. Wave on its own, as it is, won’t address the need yet either – replay is time-consuming for the reader, and Wave doesn’t yet have the curation affordances.

Looking forward to what’s coming next

One of my favorite quotes is Paul Saffo’s “never mistake a clear view for a short distance” – I wrote about this problem in 2002, and it is still unsolved, and has gotten worse. Salmon appears to be a promising approach. There are open questions, including how the approach will scale, the viability of the authentication process, and adoption by tool vendors. I look forward to reading architectural analysis about how this might work in practice, and to attempts to prove the model out. This could be another powerful step toward the decentralized social network of the future.

The Eternal Frontier

As a kid I was transfixed by the remote worlds illustrated in the Peabody Museum’s murals of prehistoric creatures, and learned the skill of getting lost in the Museum of Natural History. With the scientific developments in the decades since those exhibits, I continue to find a well-told natural history an awe-inspiring tale; the stories of evolution, population dynamics, continental drift and climate change play out with accidents, contingencies, and deep patterns.

Tim Flannery’s Eternal Frontier is a big-picture ecological history of North America, from the demise of the dinosaurs til yesterday. From a basic following of science news, I’d heard the theory that dinosaur extinction was caused by an asteroid impact. The book assembles a wide swath of evidence to pull together the big picture of massive destruction – the impact caused fire that burned most of North America; probably even more deadly was the dispersal of debris into the atmosphere, disrupting photosynthesis for months and causing ecosystems depending on land plants and plankton to die off. The result was massive extinctions on land and in the ocean. The stratigraphic evidence around the world shows a layer of sediment containing iridium, an element characteristic of asteroid material. In what is now North Dakota, 80% of all species disappeared above the iridium layer. According to the fossil record, more species survived in fresh water, where the ecosystem is more dependent on detritus, than on dry land or in the ocean, where the ecosystem depends on photosynthesizing plants and plankton.

Many more deciduous plant species survived than evergreens, because they can “shut down” in times of stress, and for ten million years after the impact, deciduous trees dominated in areas where evergreens would otherwise be favored by the climate. The anomalies caused by the asteroid impact serve to illustrate the more typical, longer-term patterns in North American ecology.

One of the strengths of the book is the way that Flannery illustrates large-scale patterns that play out over deep evolutionary time. One such pattern is North America’s distinctive sensitivity to climate change. The continent is wedge-shaped, with mountain chains running north/south. This geography causes more dramatic seasonal changes in temperature during the year than in other parts of the world, and also magnifies the effects of global changes in temperature. During two periods of global cooling, at 50 and 38 million years ago, the deep sea temperature fell 4-5 degrees Celsius overall, but fell by about 9 degrees Celsius on the gulf coast.

Another deep pattern Flannery illustrates is the characteristic constellations of species in ecosystems. The African Serengeti has several major species: elephant, buffalo, rhino, lion. A similar ecosystem in North America was populated by mastodons, and later gomphotheres. The rhino role was played by Aphelops and Teleoceras. The big cat role was played by nimravids, and later on by Barbourofelis (illustration by the amazing natural history illustrator Carl Buell, aka Olduvai George).

After major disruptions, Flannery shows, the ecosystem tends to repopulate with creatures of similar size, playing similar roles. This leads Flannery to a recommendation (one he has supported for many years in his native Australia as well) to re-introduce species of megafauna such as elephants and camels that are missing in today’s ecosystem.

Speaking of missing species, Flannery reviews the evidence and finds the case compelling that humans caused the extinction of the megafauna – sabre-toothed tigers, mammoths, camels, sloths – that roamed North America before humans arrived 13,000 years ago. The pattern isn’t just found in North America – humans arrived 50,000 years ago in Australia, and 6,000 years ago in Cuba, and the megafauna disappeared at the time the humans arrived. Flannery makes the case that the species that flowed into North America after the arrival of humans had behavior that enabled them to survive predation – buffalo lived in large protective herds, and wolves had evolved near humans in Eurasia and had evolutionary time to learn fear. These behaviors worked until humans upgraded from knives to guns.

One of Flannery’s strengths is bringing together the evidence to tell big stories and illustrate big patterns. Two of the biggest patterns Flannery discusses also seem to me to be the most problematic.

The question with which Flannery frames the book is which continent originates the most species. Continents – largely isolated large landmasses – are biologically meaningful units in which evolution proceeds largely in isolation, so examining the relative direction of population flow reveals interesting patterns. This lens also reveals interesting factoids – squirrels, dogs, and camels all originated in North America. I don’t know about you, but I always wondered about species that seemed common to North America and Europe – what originated where? This book answers those questions. In addition to the question about population flow, there is also a real “history of science” question – the early dominance of North American evidence in paleontology appears to be a historical accident caused by early enthusiasm and progress in North America; when you assemble paleontological results from other parts of the world you get a more balanced picture. And yet, aside from the real scientific and social history issues, the book is also replete with metaphorical language speculating about which continent will prove to be the “winner” in the global contest for originating the most species. This competitive framing sounds a bit too much like human geopolitics for comfort; the continental competition narrative reads like Olympic television coverage of paleontology.

An even more problematic thesis is that of the frontier. There is a scientific element to it, in that North America has historically drawn influxes of species from Eurasia when the Bering crossing was open, and from South America when migration was possible; North America is a “frontier” into which new species spill and spread. Flannery sees the history of the immigration and diffusion of human cultures into North America in modern times as an instance of the same pattern. But the economic circumstances that have driven human migration to North America seem very weakly analogous to the geographic patterns that drove animal migration; the weakness of the hypothesis can be seen by looking at migrations that have nothing to do with geographical access – African Americans travelling North for manufacturing jobs; workers fleeing the rust belt for other parts of North America when manufacturing jobs moved south and overseas. The reasons people move have everything to do with human culture and financial resources.

Flannery draws his picture of the frontier from Turner – a historian who drew a romantic picture of a rough-and-ready, independent settler whose mindset is shaped by geographical expansion. There have been strong historical critiques of Turner – I’m most familiar with William Cronon, from his course on the American West and his book on the history of Chicago. Cronon shows how the exploitation of timber, mineral, and other resources was always closely tied to urban cultures and urban financial structures. More than that, the myth of the frontier was shaped very early by theater and advertising; that Romantic self-image was heavily colored by fiction. And Turner’s focus on the white, Anglo frontiersman reflects his bias – there were African-Americans, Mexicans, ethnic Europeans; women and men. Turner’s Frontier is an important cultural myth, but a much weaker base for scientific comparison.

As a cultural myth, the Frontier and the death of the Frontier is a compelling narrative to explain the relentless exploitation of natural resources and the terrifying awareness – arriving much later than the crisscrossing of the continent by railroads and telegraphs – that natural resources are limited and that humans have the power to destroy our own civilization by mis-utilizing resources. The connection to the flows of animal populations based on climate and geology is far more tenuous. It would be better if Flannery drew a distinction, but he doesn’t; the book tries to draw a seamless analogy between the population flows into North America across millions of years and the cultural mythologies of manifest destiny and environmental exploitation, but the seams show.

Despite the weakness of the title argument, I really liked the book. If you are already deeply familiar with the scientific literature and have been following the topics closely across recent decades, this book may not have much new for you. If you are generally interested in the topic but not as familiar with the details, the book is fascinating. It is a strong entry in a genre of environmental history that weaves together paper-level detail into an accessible big-picture story that shows the larger patterns across deep time.

Synchronic and diachronic readings of activity streams

The meme of the moment is that the online world is moving more realtime – the same conversation, played back like a Chipmunks Christmas record. The anxious worry is that Twitter and Facebook will kill cultural depth. Cheerier observers of the same trend see a bubbling flow of friendly social banter, where the compressed time-intensity gives people a sense of shared memorable experience that generates social bonding.

There’s more going on than what’s on the surface. Activity streams are surfacing conversations and information that weren’t seen as easily or as broadly – the much-maligned sandwich tweets that help friends feel connected and let fans see their heroes are human – and serious stuff like earthquake news and updates about critical business facts. With seismic activity on the brain, it’s as if volcanic activity is raising an underwater mountain chain so the tops are above the water. You can see peaks above the waves, but the mountains are still there.

There are several important consequences.
* First is the observation that Twitter doesn’t replace long-form blogging but complements it. Twitter headlines draw attention to longer, more thoughtful exposition.
* Second is the related observation that what is surfaced doesn’t need to be something brand new, as Kevin Marks points out. Kevin uses this principle on a regular basis when he cites, on Twitter, blog posts that were written 3 months, 3 years, or 8 years ago. Or for that matter when Carl Malamud quotes Jefferson on Twitter in the context of contemporary policy debate. So, what’s going on is banter, grooming, fire, flood, and Michael Jackson, to be sure, but also, potentially, connections to an underlying network of much longer-lasting conversations.
* Third is the idea that what’s under the surface can be measured, and the words and relationships that can be measured have economic value.

The most visible time axis in the world of streaming is what’s on the surface. But what’s under the surface is also meaningful and increasingly valuable.

At the one formal class in literary theory I took as an undergrad at Yale – I say one formal class; the ideas of lit theory flowed through the place like the smoke wafting from the cigarettes of undergrads and grad students as they tossed their scarves over their shoulders, and flipped their asymmetric hair, but I digress – the instructor introduced us to the concept of “synchronic” and “diachronic” analysis from the field of linguistics, often pictured as a 2d graph.

Synchronic readings focus on what’s going on at a fixed moment of time. Diachronic readings compare what happens and develops across time. In the world of streaming social media, people are fixating on the synchronic axis, but the diachronic axis is also worth watching.

The tact of social media monitoring

In context of ongoing commentary about social media and branding, Adrian Chan observed on Twitter that “metrics analyze individ[ual] tweets for brand mentions and sentiment, losing context of talk and user’s relationships.” Follow-on conversation with Thomas Vander Wal and Chris Baum focused on opportunities for network conversation analysis to elicit valuable information about the social context of brand mentions.

The challenge for marketers lies in how to use this information in a way that preserves trust with customers. Trust is a leading indicator and, as proposed by Chris Heuer, an important metric to assess a company’s relationship with its customers. That a company may have this information – and it is publicly available – doesn’t mean that using it well is easy.

Privacy is over, said Scott McNealy in a famous speech a number of years ago. The topic of the amount and richness of public information is often cast in terms of surveillance, privacy violation, individuals’ vulnerability, the need to protect against threats, and the futility of doing so. But for many sorts of information and in many contexts, privacy isn’t the salient concept.

There is another important concept from city and village living – the concept of tact. In coffee shops and restaurants every day, people converse about the matters of their lives – their kids’ schools, weekend plans, sports injuries. This doesn’t mean that it’s socially appropriate for the person at the next table to jump in and express an opinion about how to treat tendonitis. The participants aren’t trying to keep the information confidential – they know that what they’re saying can be overheard. But they take advantage of social norms of tact to assume that other people are choosing to politely ignore their conversation.

Similarly, marketers may observe groups of people who discuss travel, or shopping, or gadgets, or health. Some marketers search for broad keywords and auto-follow anyone who mentions the keyword. Additional social analysis would let them auto-follow others in the conversation, too. The marketer now has the power to jump in and start promoting themselves to everyone in the conversation. These crass activities violate the trust of the people in the conversation.

More sophisticated tactics entail longer-term listening, engaging in conversation when it’s sought and called for, and using lower-touch gestures like retweets to convey recognition when appropriate. Employees participating as themselves act as community members and are community members. With an understanding of the culture, marketers can participate in and catalyze welcome public conversations. Within this context, it becomes valuable to know the key conversational clusters to help spread information of shared interest, in a way that builds on shared interest instead of violating the polite sphere of ignoring. When participating with a business identity, tact is key to protecting one’s reputation and customers’ trust.

On identity and civility in social media

At the Lunch for Good event last week, the primary topic of conversation was the role of identity in increasing participation in social sites. It was encouraging to see that the discourse seems to have moved well beyond the old, unproductive binary contrast between euphoria about the power of anonymity (on the internet no-one knows you’re a dog) and a world of draconian censorship. There was overall consensus that speaking with one’s personal reputation – whether by use of real names and reference to heavily grounded Facebook/LinkedIn real-world identities, or by use of persistent pseudonyms that accrue community karma – helps increase the level of pro-social behavior.

There were still a lot of nuances up for discussion. How and when to use real-world identities tied to your home address, and when to use pseudonyms (someone from the TOR project reminded us that in many places and circumstances, using one’s real name can be life-threatening). The need for faceted identity, so it’s possible to express aspects of oneself in a social context without having those things be “too much information” in other contexts, whether or not the information is secret. The value of social gestures and practices that go beyond mere identity – practices of welcoming, moderation, and facilitation that help in establishing a congenial place.

It is good to see the conversation move beyond simple, binary contrasts to the nuances that can help shape civil and congenial online social spaces.

Why FourSquare drives me bananas – conflicting motivations in social design

FourSquare drives me nuts, because of the inherent conflict between the social and competitive aspects of its social design. A social tool for people who like to go out, FourSquare builds in social incentives – when you attend a venue, the system sends out a message asking your friends to “stop by and say hi,” bringing friends out of the woodwork to join you at a bar or event. At the same time, it has built-in competitive incentives – participants accrue “mayor” awards and various badges for visiting the same venue the most times, and for racking up various types of social activities.

You might think that the diversity of incentives is good, since different users have differing motivations. In the conversation about the use of leaderboards in social software, Kevin Marks referenced the classic paper by Richard Bartle – Hearts, Clubs, Diamonds, Spades – about the different personality types who participate in online games. Hearts are motivated by social connection. Diamonds are motivated by racking up points to “win” the game. Spades are motivated by information and exploration. And Clubs are motivated by causing trouble and pain for others.

So, if you appeal to more than one type of player, the more the better, right? The problem is, there are conflicts between the different incentives. For “Hearts”, the incentive is social. Hearts are truly charmed by the idea that an application can draw your friends from nearby and increase serendipitous chances for meeting. For “Diamonds”, the incentive is competitive. Diamonds want to “win” the game by being the mayor of the most places, and racking up other types of points. In FourSquare, you see people competing to be the mayor of Safeway – it’s not like anyone is actually seeking to meet up with a friend at the supermarket.

So, for a “Heart” – if someone checks into a nearby location on FourSquare – are they actually seeking to meet up with other friends nearby? Or are they a “Diamond” seeking to rack up points, who would be annoyed by someone stopping by, or even hailing to see if the checkin calls for visiting? The conflict between the motivations makes my head hurt. I don’t check in very often on FourSquare, since the “points” and making public noise about going out to a lot of places don’t do very much for me. I contribute to the social game, occasionally, but the ambiguity of it makes the game a lot less fun. I’d much rather send out a hail on Twitter – when I want to signal an open social meetup opportunity, I want to do so unambiguously – whether it’s little or big.

I have no idea how many other people feel caught in the middle of FourSquare’s conflicting social incentives. I just know that the combination is rather awkward for me. What do you think? Does FourSquare’s combination of motivations work nicely for you, or does it drive you crazy too?

And from the perspective of social design, appealing to more than one “type” may in theory increase the audience for a given tool/site/event. But care needs to be taken that in appealing to one “type”, you are not discouraging other types at the same time.

Social motivations for online participation

Listening to the conversation at the Lunch for Good event last week, Tom Coates’ mini-rant against explicit incentives for social software participation was running through my head. The overarching question posed to the participants over lunch was about how to get people to contribute more content to online sites. But that’s fundamentally the wrong orientation.

In a recent Twitter conversation about leaderboards, Tom had the strongest perspective in favor of intrinsic motivation:
“Flickr doesn’t need leaderboards to motivate actions, no need for competition. Nor Facebook. Or most blogs.”

Now, the specific topic on the table was a good one – about how to handle identity and reputation in a way that doesn’t *discourage* participation (about which more in a separate post). Reducing barriers to participation is important.

But participation itself isn’t about “contributing content” much of the time. Wanting “users” to generate “content” is the perspective of the site-host that wants to build a repository of content and perhaps generate revenue from site visits. But this is not the perspective of the participants.

Wikipedia itself – one of the grandparents of community content sites – really does build on people’s motivation to build a repository of information. But other types of sites build on a variety of motivations. When people contribute to tech support FAQs, they are adding to a knowledge edifice like Wikipedia; they are also increasing their personal reputation as an expert and investing in karma in the old-fashioned sense, contributing good will to the universe, in a way that may be repaid indirectly. When people are discussing politics on sites like DailyKos and Calitics, they are engaging in advocacy and organizing, not just submitting neutral “content.” Reviews of books, movies, and music draw on people’s motivation for self-expression and cultural affiliation. Sharing observations and links on Twitter and Facebook is about social exchange as much as about the content itself.

People participate for self-expression, to make social connections, for social reciprocation, to enhance reputation. It’s not about the content, or it’s only partly about the content.

So enabling participation is about creating an activity that’s engaging in itself (per Tom Coates), removing barriers such as hostile behavior, and fostering the relational, expressive, and/or reputational aspects of the experience.

Twitter’s retweet – digg vs. call & response

When Twitter paved the cowpath, turning the Retweet social convention into a feature, their design decisions highlighted the fact that retweeting had two different social meanings.

One meaning is a “digg” – a simple, one-dimensional, unequivocal “upvote” for a post. By “retweeting”, the poster is adding their social cred to the original tweet, and spreading it to their own social network. That cheerleading is the meaning that Twitter chose in implementing Retweet as a feature. This use has a competitive dynamic to it; people measure the success of the meme by how often it is copied. The direct copy has a bit of a multi-level marketing, spam-like flavor to it – I put a meme out there, and have the most success if other people take up the meme and start to hold Tupperware parties for their friends.

The second meaning is a comment – copy the original tweet, and add a pithy comment with your own quick take. This pattern adds value, meaning, personality along with the forward. You’re contributing to the conversation, not just repeating what someone last said.

This is the version I strongly prefer. From the point of view of Retweet as call and response, Retweet as simple repetition sounds like echolalia, simply repeating the other person’s words directly. This is the communication style of a very small child learning to speak – or someone who has autism, schizophrenia, or some other mental disability. Only those who don’t have the ability to vary communication will repeat words directly.

I’m not sure how often I’ll use Twitter’s Retweet feature. Surely there are some times when I just want to give something a thumbs-up. But I suspect I’ll continue to quote-and-comment more often, to contribute to the conversation. The problem with the “cheerlead” version being baked into software is that, because it is easier, it will encourage people to simply copy rather than comment.

Twitter’s implementation of Retweet as a feature is a deep example of a principle of social design – build simple software, watch users, and then turn a carefully selected subset of the patterns they develop through use into features. It’s important to be very sensitive to how to bake the pattern in, because subtle changes can have major effects on the social dynamic of the system.

Twitter watched and implemented, but they implemented what is, in my opinion, the wrong thing. Or rather, they implemented the thing that turns the Twitter communication pattern into something more like competition, more like spam, and less like conversation.