Empathy for Amazon (but not that much)

When the #amazonfail kerfuffle hit over Easter weekend, I wanted to give Amazon the benefit of the doubt, at least a little bit. It was clearly outrageous that books with gay and lesbian themes were classified as “adult” and removed from main search results, including kids’ books like “Heather Has Two Mommies”. But the tweets and posts assuming organized homophobia on Amazon’s part were premature. The braindead customer service response simply citing policy didn’t convince me that malice was behind it either. Customer service people are trained to respond to questions from existing documentation, which is often correct and sometimes lacking in common sense.

If you’ve worked at an organization, you know that things go wrong, sometimes badly. When this happens, people need to figure out what went wrong, and coordinate a response. When the rain of wrathful posts was falling on #amazonfail, I was imagining Amazon folk being pulled from family dinners around the world to investigate what happened and figure out how to respond, including fixing the problem and communicating to people affected. Socialtext is much smaller than Amazon but this process is painfully familiar.

Amazon’s response fell short of what it could have been. Their first public response was that it was a “glitch.” This may be technically true, but it doesn’t address the genuine and valid outrage that a powerful service like Amazon was marginalizing a group of people that faces real discrimination. Even if it was a technical accident, the right response was “I’m sorry that this glitch has the effect of suppressing books by GLBT authors; we have no intent of discriminating, we support gay rights, and we will fix this as soon as humanly possible.” Their next response was posting a form letter to the comments section of a few blogs. What they should have done instead was to have spokespeople talking like human beings. It’s hard to do. It’s easier to post a form letter. It’s much harder to be human and nondefensive in the face of customer outrage. Amazon missed an opportunity to respond in a human way, and to earn back the respect of angry customers with interest.

Watching the GetSatisfaction crew handle complaints about their policy on non-company-sponsored pages, improving their service in response to the criticism, and watching Rashmi at Slideshare handle customer anger at a misunderstood April Fool’s prank, provided inspiring examples of companies engaging with angry customers in a professional and human manner.

Twitter is the new headline: how blogging and Twitter are complementary

A couple of weeks ago, Jay Rosen asked whether this was the dumbest newspaper column about Twitter ever. A game critic blogger at the New Orleans paper makes fun of Twitter by attempting to write his review of an Xbox game in 140-character increments. The reason this is idiotic is that the author misses the complementary relationship between Twitter and blogging. You don’t write your review itself on Twitter. You write a normal essay, and then share the link on Twitter with a catchy phrase.

The conventional lament is that Twitter is killing blogging, since bloggers are now spending their time and sharing their ideas on Twitter. As Robin Hamman observed last fall in this Headshift post, Twitter (and Facebook) are siphoning off a lot of the energy from personal diary blogging – the proverbial sandwich post – or simple link sharing. Bloggers observe that they post less frequently because they tweet ideas more often.

While Twitter may be siphoning blog energy from very short posts, Twitter also increases interest in more substantive blog posts and discussion around blog ideas. An increasing amount of blog traffic is driven from Twitter and Facebook status (good stats welcome). Through link posting and retweets, the social network is used to share and spread interesting posts and call attention to good bloggers. Essentially, Twitter is the new headline. Blogger Louis Gray takes this a bit too far, I think, when he recommends that bloggers change their headlines into catchy twitteresque phrases for SEO purposes. A good blog title is catchy enough to be interesting, and explicit enough to make sense in search results months later. A good Twitter callout is catchy, makes sense in the current social context, and doesn’t need to be as explicit. There’s no reason to make all blog titles into Twitter callouts.

Reactions and conversation about blog post ideas take place in Twitter, Facebook status, and Friendfeed. Journalism professor Jay Rosen is developing a phased process for developing ideas, using Twitter for mindcasting short thoughts and links, Friendfeed for assembling links and ideas together with discussion, and a blog for long-form essays. Update: Science blogger BoraZ writes about a similar social journalistic workflow, carrying the process all the way through composing articles and books. Christian Crumlish has actually used the workflow from Twitter through book composition, with a wiki as the tool for book editing and feedback for O’Reilly’s Designing Social Interfaces.

The relationship between social messaging and blogging can be particularly handy in the workplace, where social messaging is used to call attention and discuss timely and relevant work-related posts and updates. The ease of sharing and discussion motivates people to write useful things, because they will be shared, discussed and used.

In summary, Twitter and blogs are highly complementary. The role of Twitter isn’t to limit thoughts to what can be expressed in 140 characters or less; it’s to call attention to longer-form writing, and to discuss the ideas through the social network.

How Twitter creates serendipity

Josh Porter makes a good observation: “a big difference between Facebook & Twitter is serendipity. Stuff “happens” all the time on Twitter. Not really so much on FB.” Twitter’s serendipity is an outcome of its design. Twitter’s asymmetrical, mostly-public, searchable network creates serendipity. Facebook’s mostly-private, symmetrical network doesn’t.

Twitter generates serendipity with visible mentions and searches in your extended network. You can see replies from people you aren’t following. This allows you to expand your contacts and knowledge beyond people you already know. When someone asks an interesting question, you can do a search and watch the answers and responses unfold, bringing you to references and people you didn’t know before. By contrast, Facebook’s mostly-closed, symmetrical network makes it hard by design to see outside of your social network.

Handles and hashtags also help with serendipity. Handles are unique, so you can do a search for @bokardo and see the stream of references to Josh Porter much more easily than if you searched for the name itself. This is a major advantage of Twitter over Facebook and LinkedIn, where searches for common names yield so many results that it’s nearly impossible to find the person you’re looking for. Hashtags make it easy to create a topic by social convention and follow the thread. It is doubtful that Twitter intended handles to be useful for search and serendipity – they just used a convention that’s ubiquitous in consumer web services. Twitter doesn’t even have explicit support for hashtags – they arose as a social convention in the community. But as search became an integral part of the Twitter experience, handles and hashtags helped.

My favorite thing about Twitter serendipity is that “pivot search” on people and tags kicks in when you get actively engaged in a topic. Most design patterns intended to support serendipity do a query for you, and deliver “recommended results” using some algorithm. An article about bank bailouts has several other suggested articles on the same topic. When you’re reading, you may or may not want to read more. Personally, I’m more likely to follow hand-picked links the author has chosen within the context of the article. The human mind is a better filter than the algorithm.

By contrast, when a person or topic is interesting in Twitter, you can easily pivot on the person or topic and explore. A twitter hashtag search is likely interesting — more interesting than generic tag searches — because a tag points to an active conversation created in a social context, rather than an abstract topic. When you get interested in something, you can easily pursue it and discover interesting results. This “pivot search” design pattern may be ideally suited for infovores like me, and too implicit for people with other styles, but I really love it. It would be interesting to find out how many others use Twitter for pivot searches in this way.
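The pivot-search pattern described above can be pictured as simple filters over a public message stream. This is a minimal sketch; the tweet structure, field names, and sample messages are invented for illustration and are not Twitter’s actual data model or API:

```python
# Sketch of "pivot search" over a public message stream.
# The dictionaries and sample content are illustrative assumptions.
tweets = [
    {"user": "bokardo", "text": "Serendipity happens on Twitter #design"},
    {"user": "alevin", "text": "Replying to @bokardo about pivot search"},
    {"user": "jayrosen", "text": "Mindcasting links today #journalism"},
]

def pivot_on_handle(stream, handle):
    """All messages by, or mentioning, a unique handle."""
    return [t for t in stream
            if t["user"] == handle or "@" + handle in t["text"]]

def pivot_on_hashtag(stream, tag):
    """All messages carrying a hashtag -- a topic created by social convention."""
    return [t for t in stream if "#" + tag in t["text"]]
```

Because handles are unique, pivoting on “bokardo” finds both the person’s posts and replies to them, something a search for the name “Josh Porter” could not do reliably.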

In sum, there are several properties of Twitter’s design that foster serendipity: asymmetry, mostly-public visibility, searchability, and easy pivots. Some of them were probably designed by Twitter designers on purpose; others may be sweet side effects. As part of the evolutionary experiment in social software, they provide great lessons to learn from.

Geithner-Summers plan and social decay

There’s a nasty hidden cost of the Geithner-Summers plan to buy distressed assets for more than they’re worth. A commenter on the Balloon Juice blog points out that by keeping mortgage assets on the books for more than they are worth, the owners of foreclosed properties have an incentive not to sell them. “If a mortgage is worth $400K and the house sells for $200K, the Title Holders would have to write down that $200K loss immediately. But, keeping that house abandoned and unsold means they don’t have to write down any losses.” Empty homes sit vacant, attract vagrants and copper-strippers, and cause neighborhood blight.

The obvious cost of the PPIP is taxpayer ripoff. The Public Private Investment Partnership plan from Obama’s financial team, Tim Geithner and Larry Summers, has investors take bad assets off of banks’ books for more than they’re worth, leveraged by taxpayer dollars. If the assets aren’t worth the inflated prices, taxpayers bear the loss. If the assets go up, taxpayers get only half the profit. The hidden cost is creeping social decay caused by squelching the market in the real houses beneath the mountain of fantasy investments.
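The payoff asymmetry can be made concrete with toy arithmetic. The purchase price, stake sizes, and even profit split below are simplifying assumptions for illustration, not the plan’s actual terms:

```python
# Toy arithmetic for the PPIP payoff asymmetry described above.
# All numbers and the 50/50 profit split are illustrative assumptions.
purchase_price = 100   # what the partnership pays the bank for an asset
private_equity = 7     # the private investor's stake
taxpayer_stake = purchase_price - private_equity  # loans plus public equity

# Case 1: the asset turns out to be worth less than the inflated price.
value_down = 60
loss = purchase_price - value_down
private_loss = min(loss, private_equity)   # investors lose at most their stake
taxpayer_loss = loss - private_loss        # taxpayers absorb the rest

# Case 2: the asset appreciates; the profit is split with private investors.
value_up = 130
profit = value_up - purchase_price
taxpayer_profit = profit / 2               # assumed even split

print(taxpayer_loss, taxpayer_profit)      # 33 and 15.0
```

With these assumed numbers, taxpayers eat 33 of a 40-point loss on the downside but collect only 15 of a 30-point gain on the upside, which is the lopsided bet the post describes.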

To arbitrage this market failure, nonprofits have been creating schemes to house the newly homeless in abandoned properties (the topic that started the Cole thread): http://www.nytimes.com/2009/04/10/us/10squatter.html?partner=rss&emc=rss

Hashtags for LocalTweeps: Geography is social

A few days ago, the LocalTweeps service reached my Twitter social network. To sign up for LocalTweeps, you tell it your zipcode and it broadcasts your signup on Twitter. LocalTweeps hopes to become a local directory with information organized by zipcode. This could be handy, but it doesn’t yet take advantage of an important aspect of geography, where the internet has a unique advantage over traditional directories. Geography is social and contextual.

Where am I? The downtown neighborhood of Menlo Park, on the Peninsula, in the Bay Area, in Northern California, and so on zooming outward. We use these different markers depending on context. Neighborhood is important for convenience and neighborhoodly socializing. The Bay Area is big, so the regions are important when considering the travel radius for an event. The relevant geographical category sometimes coincides with a political jurisdiction (e.g. San Mateo County), and sometimes it doesn’t. That’s why it would be cool to be able to use tags, not just zipcodes, to identify events and places. A barbecue at a local park would be tagged with the neighborhood. An event at a venue would be tagged with a local region. Broader organizing would refer to larger regions, e.g. “Central Valley.”

In a medium with limited physical space, it makes sense to use a single criterion like zipcode to categorize locations and events. But on the internet, there’s no reason to limit. People can, do, and will select the subjective geographical categories based on context.

A couple of years ago, I attended a meeting hosted by the unlamented hyperlocal startup, Backfence. Attendees at the Palo Alto meeting were frustrated because the service would not let them post news in neighboring Menlo Park, even though there are close ties between the towns: people are likely to live in one town and work in the other, and to shop and do cultural things the next town over.

So the recommendation for LocalTweeps and other internet geography services: free your taxonomy. Let people tag events, and designate them according to what’s socially relevant. The address (and zipcode) will identify where it is on the map. And the tag will identify where it is in people’s cultural context.
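One way to picture “free your taxonomy”: store the objective zipcode alongside any number of subjective geography tags, and filter on either. The field names, sample events, and tag vocabulary here are made up for illustration:

```python
# Sketch: an event carries one objective zipcode plus any number of
# subjective, socially chosen geography tags. All names are illustrative.
events = [
    {"name": "Park barbecue", "zipcode": "94025",
     "tags": {"downtown-menlo-park", "peninsula"}},
    {"name": "Tech meetup", "zipcode": "94301",
     "tags": {"palo-alto", "peninsula", "bay-area"}},
]

def by_zipcode(evts, zc):
    """Objective lookup: where the event sits on the map."""
    return [e for e in evts if e["zipcode"] == zc]

def by_tag(evts, tag):
    """Subjective lookup: where the event sits in people's cultural context."""
    return [e for e in evts if tag in e["tags"]]
```

A query for the “peninsula” tag returns events in both Menlo Park and Palo Alto, crossing the town boundary that frustrated the Backfence users, while the zipcode still pins each event to the map.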

Netizen ghosts, or what makes the internet “real”

It reads like a Cory Doctorow satire, but it’s true. Bruce Sterling, the eminent science fiction author, and his wife of four years, Jasmina Tesanovic, received an INS notification of pending deportation for Jasmina. A globetrotting couple who organize most of their lives online, they don’t jointly own a house, didn’t go for traditional paraphernalia like wedding china, and have separate bank accounts. Where would one find evidence of their lives together? Flickr photos, YouTube videos, a BoingBoing wedding announcement. Bruce needed to make a special Wired Magazine plea for people who know them personally to write the INS before April 15 and testify that they are in fact married. I’ve met Bruce, but don’t know them well enough for that INS form; if you do know them personally, please stop reading this right now, tell the INS that they’re for real, and then come back.

Beyond the bureaucratic nightmare for Bruce and Jasmina, what’s interesting is the difference of opinion about what’s considered “evidence” and “real.” The INS is still stuck with an old-fashioned definition of evidence, even though courtrooms have been using email as evidence for a while. The US Federal Rules of Civil Procedure were updated in 2006 with detailed guidelines on how to use email and other electronic information in court.

The epistemological conflict doesn’t just pertain to the dusty bureaucrats at INS. Even Wikipedia has trouble with online sources, as can be seen in this dispute about whether to keep a Wikipedia page on RecentChangesCamp. The event, a regular gathering for a distributed tribe of wiki-keepers, is well-documented in blog posts, online photos, a Twitter stream and so on. But what eventually persuaded the Wikipedia editors was an article in the Portland Oregon newsprint business paper. The most chilling aspect of the Wikipedia policy is that blogs are not considered notable. In other words, evidence in the endangered Boston Globe counts, and evidence in the prospering and clearly journalistic Talking Points Memo apparently doesn’t. Another problematic piece of Wikipedia’s policy is the requirement for secondary sources. An event like TransparencyCamp or EqualityCamp is documented by numerous attendees. But unless the San Francisco Chronicle sends a reporter, EqualityCamp doesn’t exist. Attacked by curmudgeons as “unreliable”, Wikipedia ironically places excessive credence in offline sources. As more traditional papers go extinct, and more reporting is provided by online media and peer media, what on earth will Wikipedia do to prove that things are real?

The answer, of course, is that there will develop stronger norms about what makes internet evidence valid. Of course there are many internet sources that are bogus, just as there are forged documents and lies. But there are also plenty of techniques for evaluating the authenticity and reliability of electronic sources. We use them in a common sense manner every day when reading email, evaluating blog comments, and rejecting the fraudsters and spammers.

Surely, there are other government agencies that have developed guidelines that the INS could use to update its policies. If you know of any, here is the contact information for Janet Napolitano’s office at the Department of Homeland Security. Do any Wikipedia community members know of efforts to update the notability policy to take Talking Points Memo and primary event coverage by numerous blogs and other online sources as evidence of notability?

The Bruce and Jasmina INS jam and the RecentChangesCamp kerfuffle show that policy rules and norms haven’t yet caught up with internet reality.

Drive Less Challenge

When I was a kid, I loved cycling over the hill to buy milk at the supermarket and bring it back in a basket. When I read Jane Jacobs in college, it articulated what I had felt as a kid about the value of neighborhoods scaled for people, where you can stroll and chat with your neighbors, with “third places” where people recognize each other. So I sought out that experience. When I lived in Boston, I loved living walking distance from the supermarket, coffeeshops, hardware store and gym.

In recent years, as information about global warming and limits to the oil supply have become mainstream, the ability to organize everyday life for less driving has become not just a preference, but a necessity to bring energy use to levels that can be sustained. When I moved to California, I deliberately sought somewhere to live that was close to daily errands and the train, where I didn’t need to car commute to work. Then, I challenged myself. What would it take to drive less? Slowly, I built up a repertoire of skills. I got bike baskets and can use a bike for most errands. I learned how to take a bicycle onto Caltrain, for practical access to many places in San Francisco and the Peninsula. I got better gear for biking in the rain (but still choose to drive when it’s pouring out).

I joined the Menlo Park Green Ribbon Citizens’ Committee to think globally and act locally. In California, driving is the biggest source of greenhouse gas emissions. So the biggest opportunity for transformation is to drive less. Now, there are some things that just aren’t practical to do without a car. Getting from Menlo Park to the East Bay. Buying furniture or appliances. But there are plenty of trips that are practical and good without a car. It just takes a little bit of learning and incentive to get over the hump and do it.

So I’m putting together the Drive Less Challenge. This is an opportunity to use some neighborhood positive social pressure to help people get over the inertia of daily life and take a few practical actions to drive alone less. The challenge starts on Earth Day, April 22, and runs for a week. We’re working with local businesses, schools, and neighborhood groups to get the word out. The scale is Menlo Park this year, to make it easy to manage with an all-volunteer team. (If you’re not in Menlo Park you can still participate; your prizes will be recognition and the knowledge that you’re taking a step toward sustainability.) There are plenty of systemic changes that would make it easier to drive less, but most people have “low hanging fruit” opportunities to make small tweaks in daily life that would add up to meaningful change, here and now. It’s time to challenge ourselves and challenge our neighbors.

I’m coordinating the project with an awesome team of Menlo Park volunteers, with minimal budget, on weekends and evenings. I’m still doing some final tweaks on the “gameplay” and we’re busy getting the word out. If you’re interested and have questions or suggestions, drop a note in the comments or hail me as alevin on Twitter.

How asymmetry scales

Bokardo predicts that Facebook will go asymmetric. He calls out two key reasons why: asymmetric networks are a good fit for anyone with micro-fame, not just organizations, brands and bands; and asymmetric networks help people manage their attention – you don’t need to pay attention to every update from everyone following you.

There are a couple of other key reasons why asymmetric networks scale better. In Twitter there are a number of ways that asymmetry in a public network provides good returns to scale, as noted in yesterday’s post on premature predictions of peak Twitter:
* Retweets get you information that was first posted by someone outside your network
* Searches let you find information outside your network
* Visible replies, like the lovely feature in TweetDeck that shows when someone mentions you even if you’re not following them, allow you to hail and engage people in conversation, and have others start conversations with you, even if you’re not following.
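The structural difference behind these bullets can be modeled as a directed follow graph: in an asymmetric network, visible replies and retweets make people outside your follow list discoverable. This is a minimal sketch with invented names, not any real service’s data model:

```python
# Sketch: following as a directed graph. In an asymmetric network,
# a reply from someone you don't follow is still visible, so the set
# of people you can discover exceeds your follow list. Names invented.
follows = {
    "me": {"ann"},            # I follow Ann
    "ann": {"me", "carlos"},  # Ann follows me and Carlos
    "carlos": {"ann"},
}

def discoverable(user, graph):
    """People surfaced by follows, visible replies, and shared contacts."""
    seen = set(graph[user])                  # people I follow directly
    for other, their_follows in graph.items():
        if user in their_follows:            # they follow me: visible replies
            seen.add(other)
        if graph[user] & their_follows:      # shared contact: retweets surface them
            seen.add(other)
    seen.discard(user)
    return seen
```

Here Carlos becomes discoverable even though I don’t follow him, because we share Ann as a contact; in a symmetric, closed network, he would stay invisible until a mutual friending took place.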

These features mean that the more people who join the network, the more interesting information will be amplified through it, and the more potentially interesting people you may discover. The level of context is fairly high – you can see what someone else has been Twittering, and see if they are interesting and relevant to you. And the level of obligation is low (you can follow someone without giving them the burden of accepting or rejecting you).

In Facebook, I can see when someone that I don’t know has commented on the update of someone I do know, but then I need to friend a stranger in order to learn more about them. Facebook’s mostly-symmetrical, mostly closed network makes it hard to learn new things and meet new people outside your existing network.

So, the reasons for asymmetry aren’t just about supporting fame, but enabling discovery with low social expense.

Peak Twitter?

There are several arguments going around predicting Peak Twitter. The discussion raises a number of interesting questions about social media and scale.

In Twitter is peaking, Steve Rubel describes the risks to Twitter as social trendiness and increasing messiness.

Too popular. Social networks seem to have a property in common with nightclubs, bars, and restaurants – they are popular for a while. Then the throng moves on. The digerati were on Orkut for a few minutes, before moving on to Facebook and Twitter. Popularity depends on community – Facebook and MySpace are bigger in the US, Bebo is big in Europe, Orkut is big in Latin America.

Rubel hypothesizes that the pattern is similar to other pop culture trends, where hipsters create a trend and then flee when the mainstream arrives. Rubel writes, “Just six months ago, the list of the top 100 users on Twitter read like a who’s who of geeks. That’s what made it a draw, for many, initially. Now, however, the list looks like People or US Magazine. Twitter is losing its geek creds as celebs flock to the service.” The difference is, a social network is a great many places, not one; the network is inhabited by millions of overlapping subcultures. Honestly, I haven’t heard of many of the pop culture celebrities who have recently joined Twitter, and the ones I’ve heard of, I don’t follow. I do follow some of my personal heroes, but they aren’t pop culture icons.

The argument that People magazine starlets and NBA players will crowd out niche communities is the same mass media vision that predicted a handful of pop-culture-centered websites would crowd out the rest of the web. There are 270 million people on Facebook, a great many more than, say, the 15 million people who visit Disney every year, and their subculture-centric Facebook experiences are different from the mass-produced Disney experiences.

Too big. The second argument is scale and disorganization. “Since replies are not threaded, celebs and corporations do not feel they have to respond to every Tweet.” This is a real challenge. Rubel rightly recognizes that tools are evolving to address the challenge. What’s missing is that personal needs are very different from organizational needs.

For personal use, the fact that Twitter is a flow is part of the charm. A twitter feed doesn’t carry the same perceived social obligation to keep up and respond as email or instant message. You can dip into the stream, step out, and come back later. For personal use, people need some better tools to manage their attention. TweetDeck, which Rubel calls out as a good example, adds groups, search, and embryonic filtering into the basic experience.

The needs of non-celebrity individuals are different from the needs of corporations, politicians, and famous people. If your constituency has thousands to millions of people, you need very different tools to monitor the conversation than if you are following 50 or 100 people. If you’re an individual and you miss an update from a friend or an interesting news link, no big deal. If you are striving to use Twitter for constituent listening and feedback, you want to notice complaints, suggestions, and kudos. You probably want to have multiple people listening to the account, listening for different products or topics, and working on responses.

Dunbar limit. In ReadWriteWeb, Bernard Lunn makes the opposite point, that size doesn’t matter. “In a social network, the value for existing users of a new user joining the network plateaus once users have most of their own contacts in that network.” For mostly closed, symmetrical networks such as Facebook and LinkedIn, this is true. For mostly open, asymmetrical networks such as Twitter, this is mostly false, which Lunn mentions briefly. I suspect that people will cap their participation at some augmented Dunbar limit of the number of people they can follow with social attention and time. But in Twitter, retweets, searches, and visible replies mean that the more people who join the network, the more interesting information will be amplified through it, and the more potentially interesting people you may discover. When you have your existing contacts on the network, it is easy to make new contacts if you wish. The level of context is fairly high – you can see what someone else has been Twittering, and see if they are interesting and relevant to you. And the level of obligation is low (you can follow someone without giving them the burden of accepting or rejecting you).

Exploitation. In the ReadWriteWeb post, Lunn makes the insightful point that social networks can fail when their hosts start to violate the implied social contract with their communities in the interest of making money from their investments. “If these businesses get too eager to monetize to justify those valuations, they may create the reverse network effect.” When they move to monetize, hosts may move toward intrusive advertising, marketing, privacy violations, or other steps that benefit the site’s commercial interest and go against the interests of the users. I see the potential risks even more broadly than Lunn does. Intellectual property terms of service, and increased control over content and customization can violate the perceived community social contract as much as intrusive ads and marketing can. There is some inertia to switching, but in the absence of monopoly, annoyed communities do pick up and go with some regularity.

Parasitism. In Mourning the loss of Twitter, Ross Mayfield predicts that Twitter will fall prey to the spam and other antisocial behavior that crippled Usenet and Email. Hopefully the Twitter ecosystem will evolve to meet the threats, and blacklist and social filtering tools will keep the parasites from killing the host.

Twitter is a fascinating experiment since the social scale dynamics of an asymmetrical, open network aren’t known. I suspect that the ecosystem will evolve social and topic filtering tools that will help it scale; time will tell. The platform strategy is helping already – third parties are building tools to search, manage, and respond to the twitter stream. And I hope that the Twitter management retains a good sense of environmental judgement and finds ways to make money that don’t feel exploitive to the community.

Database journalism – a different definition of “news” and “reader”

Politifact is an innovative journalism project built by Matt Waite as a project of the St. Petersburg Times, inspired by Adrian Holovaty’s 2006 manifesto on “database journalism”. Waite and Holovaty both focus on the “shape” of the information presented by database journalism: stories that have a consistent set of data elements that can be gathered, presented, sliced, and re-used. This structure is foreign to traditional journalism, which thinks of its form as the story, with title, date, byline, lede, body.

The Politifact site started by fact-checking politicians’ statements during the 2008 political campaign. Each statement is rated on a five-point scale: True, Mostly True, Half True, Barely True, or False. Today, the most compelling piece on the site is the “Obamameter”, tracking the performance of the president against over 500 campaign promises. Examples include: No. 513: Reverse restrictions on stem cell research – Promise Kept; No. 464: Reduce energy consumption in federal buildings – In the Works; and No. 446: Enact windfall profits tax for oil companies – Stalled.
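The “consistent set of data elements” Waite and Holovaty describe might look like this in code. The ratings and promises below mirror the examples in the text, but the record structure and field names are an illustrative guess, not Politifact’s actual schema:

```python
# Sketch of a "database journalism" record: each story is a structured
# row that can be sliced and re-aggregated, not just a headline + body.
# The field names are an illustrative guess, not Politifact's real schema.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Promise:
    number: int
    text: str
    status: str  # e.g. "Promise Kept", "In the Works", "Stalled"

promises = [
    Promise(513, "Reverse restrictions on stem cell research", "Promise Kept"),
    Promise(464, "Reduce energy consumption in federal buildings", "In the Works"),
    Promise(446, "Enact windfall profits tax for oil companies", "Stalled"),
]

# Because the data is structured, an aggregate scoreboard falls out
# of a one-line query -- something a pile of story files can't do.
tally = Counter(p.status for p in promises)
print(tally["Stalled"])  # 1
```

The contrast with the traditional story form is that the same records can power a promise page, a per-topic slice, or a running scoreboard without anyone rewriting prose.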

The shape of the data is part of the picture. It’s certainly the biggest day-to-day difference if you’re composing news or tools for news. But I don’t think it’s the lede. What’s different here is a different conception of what’s “new” and what a “reader” does.

What’s new. Traditional journalism is based on a “man bites dog” algorithm. What’s newsworthy is the dramatic reversal of expectations. Slow, gradual changes are not newsworthy. Large static patterns are not newsworthy. I suspect that this is part genre and part technology. The technology limitation is space; there isn’t room to publish many stats in a newsprint paper, and minimal affordances for navigation.

The emphasis on concise and dramatic “news” leaves our society vulnerable to “frogboiling”, the urban legend in which the frog in gradually heated water gets accustomed to the change, doesn’t jump out, and boils to death. The decline of the North Atlantic cod fishery or of the Sacramento-San Joaquin delta is not newsworthy until the cod and the salmon are gone. Wage stagnation isn’t newsworthy until the middle class is gone. Tens of thousands dead on US highways each year isn’t newsworthy, though a traffic jam caused by a fatal accident is news. Many eyes hunting through financial data may find dramatic scandals, to be sure. With database journalism, perilous or hopeful trends and conditions can become worthy of storytelling and comment.

What a reader does. The rise of the internet has made reader participation a much greater part of news than the limited “letter to the editor” section. Dan Gillmor, former editor of the “ur-blog” Good Morning Silicon Valley liked to say “my readers are smarter than me” because of the high-quality corrections and tips he’d get from his readers. Database journalism takes the trend a few steps further. Where a traditional news reader consumes the news, a database user interacts with it, looking for information and patterns. The “news” itself may be found by readers doing queries and analysis of the database, such as the database of Prop 8 contributions published by the San Francisco Chronicle.

So database journalism isn’t just about having some fields that are different from “title” and “body”. It’s about different conceptions of time, space, and participation.