Social motivations for online participation

As I listened to the conversation at the Lunch for Good event last week, Tom Coates’ mini-rant against explicit incentives for social software participation was running through my head. The overarching question posed to the participants over lunch was how to get people to contribute more content to online sites. But that’s fundamentally the wrong orientation.

In a recent Twitter conversation about leaderboards, Tom had the strongest perspective in favor of intrinsic motivation.
“Flickr doesn’t need leaderboards to motivate actions, no need for competition. Nor Facebook. Or most blogs.”

Now, the specific topic on the table was a good one – about how to handle identity and reputation in a way that doesn’t *discourage* participation (about which more in a separate post). Reducing barriers to participation is important.

But participation itself isn’t about “contributing content” much of the time. Wanting “users” to generate “content” is the perspective of the site-host that wants to build a repository of content and perhaps generate revenue from site visits. But this is not the perspective of the participants.

Wikipedia itself – one of the grandparents of community content sites – really does build on people’s motivation to build a repository of information. But other types of sites build on a variety of motivations. When people contribute to tech support FAQs, they are adding to a knowledge edifice like Wikipedia; they are also increasing their personal reputation as experts, and investing in karma in the old-fashioned sense, contributing good will to the universe in a way that may be repaid indirectly. When people discuss politics on sites like DailyKos and Calitics, they are engaging in advocacy and organizing, not just submitting neutral “content.” Reviews of books, movies, and music draw on people’s motivation for self-expression and cultural affiliation. Sharing observations and links on Twitter and Facebook is about social exchange as much as about the content itself.

People participate for self-expression, to make social connections, for social reciprocation, to enhance reputation. It’s not about the content, or it’s only partly about the content.

So enabling participation is about creating an activity that’s engaging in itself (per Tom Coates), removing barriers such as hostile behavior, and fostering the relational, expressive, and reputational aspects of the experience.

Twitter’s retweet – digg vs. call & response

When Twitter paved the cowpath, turning the Retweet social convention into a feature, their design decisions highlighted the fact that retweeting had two different social meanings.

One meaning is a “digg” – a simple, one-dimensional, unequivocal “upvote” for a post. By “retweeting”, the poster is adding their social cred to the original tweet, and spreading it to their own social network. That cheerleading is the meaning that Twitter chose in implementing Retweet as a feature. This use has a competitive dynamic to it; people measure the success of the meme by how often it is copied. The direct copy has a bit of a multi-level marketing, spam-like flavor to it – I put a meme out there, and have the most success if other people take up the meme and start to hold Tupperware parties for their friends.

The second meaning is a comment – copy the original tweet, and add a pithy comment with your own quick take. This pattern adds value, meaning, and personality along with the forward. You’re contributing to the conversation, not just repeating what the last person said.

This is the version I strongly prefer. From the point of view of Retweet as call and response, Retweet as simple repetition sounds like echolalia – repeating the other person’s words verbatim. That is the communication style of a very small child learning to speak, or of someone with autism, schizophrenia, or another condition that limits the ability to vary communication.

I’m not sure how often I’ll use Twitter’s Retweet feature. Surely there are some times when I just want to give something a thumbs-up. But I suspect I’ll continue to quote-and-comment more often, to contribute to the conversation. The problem with the “cheerlead” version being baked into the software is that, because it is easier, it will encourage people to simply copy rather than comment.

Twitter’s implementation of Retweet as a feature is a deep example of a principle of social design – build simple software, watch users, and then turn a carefully selected subset of the patterns they develop through use into features. It’s important to be very sensitive to how to bake the pattern in, because subtle changes can have major effects on the social dynamic of the system.

Twitter watched and implemented, but they implemented what is, in my opinion, the wrong thing. Or rather, they implemented the thing that turns the Twitter communication pattern into something more like competition, more like spam, and less like conversation.

New terms or new trends?

The Twitter buzz yesterday was about two posts by Stowe Boyd and Randal (Rand) Leeb-du Toit advocating “Social Business” as a replacement term for Enterprise 2.0, along with the launches of new consulting services.

Names have value. Sector names are powerful totems in the consulting business. As a recovering industry analyst, I remember well the desire to name trends, and to have your name associated with the trend. And when innovation happens, it helps to be able to call it something. At Socialtext, we pioneered the adaptation of internet tools for collaboration and communication for business use, well before there were names for things. It was a lot harder to explain what we were doing before someone coined Enterprise 2.0.

In this case, the proposed change from “Enterprise 2.0” to “Social Business” is a lot less interesting than the trend the advocates are describing. Boyd’s and Leeb-du Toit’s posts both describe a shift from an early market phase, when early adopters experiment with bringing web 2.0 tools and techniques to the organization, to a more mature phase, in which people have a more sophisticated understanding of how to evolve their organizations to take advantage of the tools, and the tools are deeply adapted to be suitable for organizational use.

So, switch terms? Shrug. Think more deeply and act more powerfully to adapt organizations and tools? A good thing to talk about and to do.

Peninsula High Speed Rail Teach-In

My main question about the Peninsula Cities Coalition efforts to solicit public feedback on the California High-Speed Rail plan, starting with a teach-in this past weekend, was whether feedback would have any impact at all. Coordinators included two local city council members, Terry Nagel of Burlingame and Yoriko Kishimoto of Palo Alto, seeking in good faith to organize public input on a major public project. But the High Speed Rail Authority, the agency charged with building the high-speed route between San Francisco and Los Angeles, is an appointed body, with a majority of its members chosen by the governor. The mission of the agency is to get the project done, not to listen to residents. There isn’t any obvious reason that it would listen to the concerns of the people who actually live along the route the train will pass through.

The event gave me some cautious optimism that there was a way for public input to have an impact. Dominic Spaethling, a representative of the High Speed Rail Authority, mentioned that they will be seeking feedback from local governments on how design choices will affect local areas. A representative of Senator Simitian’s office also sought feedback filtered through cities. The Peninsula Cities Coalition is a group of cities (currently five: Burlingame, Belmont, Atherton, Menlo Park, and Palo Alto), and the series of events is an excellent venue for city officials to gather input from residents. Tactically, cities have local land use and permitting authority, so a good working relationship with cities is in the High Speed Rail Authority’s interest. So there may be a vehicle to funnel input through cities, and some interest on the Authority’s side in listening.

Another reason for optimism was the demeanor of Robert Doty, the Director of the Peninsula Rail Program, a combined program to develop Caltrain modernization and High Speed Rail. He has responsibility for interagency coordination and regulatory approvals, so on the ground, he’s a key person. He developed the highly successful Baby Bullet program for Caltrain. Doty seemed both practical and considerate of local concerns. For example, he acknowledged that the implementation of the BART-SFO connector was botched, in cost and design. He mentioned the five stairways, sounding like someone who has tried to use them! And he seemed to have a good relationship with the local folk on the podium. By contrast, representatives of transit projects and agencies sometimes come across as high-handed, presenting a seemingly inevitable conclusion to their audience.

The day contained a number of panels and presentations, with various speakers representing different aspects of and opinions about the project.
* Doty, as I mentioned, came across as no pushover, but as someone who was engaged with the community.
* Rich Tolmach, of the California Rail Foundation, was opposed to the project, and is seeking ways to stop it.
* Dave Young, of the engineering firm Hatch Mott Macdonald, presented information explaining how tunneling might be practical. This is an approach that some hope will reduce long-term impact, and others dismiss as impractical.
* Greg Greenway, of the Peninsula Freight Users Group, advocated for continuing to use the corridor for freight rail. However, the Caltrain corridor gets only about 10% of the freight that travels through the East Bay, and compared to Oakland, the port of Redwood City is a tiny blip. It’s not clear to me that other design goals should be sacrificed to support what’s a tiny freight base on the Peninsula.
* Bill Cutler presented the Context Sensitive Solutions approach. This sounded like a fine process for gathering community input, but it was not at all clear how the Context Sensitive Solutions process would dovetail with the project’s technical and operational schedule.

Unfortunately, I was late and missed the first session by Gary Patton. According to the California High Speed Rail Blog, Patton talked about how the project can still be stopped.

My least favorite aspect of the day was the “Open Space” section. I say this as someone who has attended and coordinated many Open Space sessions and unconferences. These can be excellent in two very different ways. When there’s a general area of interest, such as digital mapping technology, it’s wonderful to have people with interest and knowledge hold sessions on the topics they care about. It’s not infrequent for projects and other follow-ons to be spawned from great sessions, but there’s no requirement for follow-up. When there is a common goal, it is a good way to have people self-select into interested groups and develop next steps. In this case, there is a common goal, but there was no clear charter for groups to serve that goal. Someone from the Peninsula Cities group volunteered to collect the writeups, but there was no clear sense of follow-up. This part of the program could have been better designed. The session I was in was a brainstorming discussion about how to use online tools to support the public input process. I’ll blog more about that separately.

At the event, I had a chance to meet in person the author of the Transbay Blog, a blog that covers transportation and land use issues in the Bay Area, after meeting by blog comment and Twitter. I also got to chat with Robert Cruickshank, who writes the California High Speed Rail Blog, which advocates for high speed rail, as well as Andy Chow and Brian Stanke of the Bay Rail Alliance, a rail advocacy group.

These days, blogs and advocacy groups are at least as important as traditional media as sources of information on issues like this. So far, the event has been covered by Palo Alto Online and the California High Speed Rail Blog.

Disclosures:

I helped the coordinating team for the event, including setting up the online event registration and helping with outreach. I wasn’t able to make the meeting where the Open Space session was planned. I’ll send feedback to the coordinators with suggestions about how to better use Open Space in this context.

I live in Menlo Park and work in Palo Alto. I cross the tracks at least twice daily by bike. I like walkable, bikeable, liveable neighborhoods with access to transit. I’m interested in having a much better system of regional transit, and in ways for our society to wean ourselves from fossil fuel. I lived in Boston during the Big Dig, and watched the city pay the cost of trying to undo a bad decision, made 50 years earlier, to build an elevated highway through town.

Though I’m concerned that the structure of the High Speed Rail project makes citizen input more perilous, I believe in general that the likelihood of having an impact gets infinitely higher when you show up and try to make a difference in a practical fashion.

Social context for distributed social networks

I am glad to see the increased interest in and discussion of distributed social networks. And it’s intriguing to think about the ideas described by Om Malik, Anil Dash, Dave Winer, and others, who cast blogs as the home base and source for such distributed social networks. But blogs and individual interests aren’t enough to be sources and points of aggregation.

Social context, I think, will be a critical factor in the adoption of distributed social networks. The vision of a “universal identity”, with full “data portability” sharing everything with everyone, is much too broad, not simply for reasons of privacy, but of attention. It’s in no way a secret when I go bicycling, but only a small number of people will care about my bike routes.

So what do I mean by sharing in social context? Social context is the way that people think about what’s relevant to share with whom. If I have a photo to share from South by Southwest, I want to share it with others who went there (or are interested in it). The category of “photos” is too coarse-grained. The category of “friends” vs. “business” is also too coarse-grained, and in the context of SXSW makes my head hurt. We need to be able to define social context, and then share appropriately in that context.

A key reason why fine-grained sharing has been a failure until now is that tools ask users to make decisions based on content type (who do I want to share videos with?) or broad categories (are you my friend or colleague?). (By the way, if you’ve successfully figured out Facebook’s system, please explain it to me; I’ve tried and failed.) I’ve written before about the need for decentralized profile data as a key piece of the distributed social network. Another key element, I think, is likely to be tagged activity streams. Within a given social context, it becomes pretty obvious which profile fields and which types of tweets you want to share.

Fortunately, people already use ad hoc tags to define events and interests, and use these socially-defined tags to aggregate across tools such as Flickr and Twitter. However, this functionality isn’t very explicit or well-defined, so it’s hard to make it usable or automatable. I think that the practice of using tags to define social contexts, and usable tools to share information in those contexts, will become important. When tags become valuable, they also attract spam, so a layer of authentication and explicit group definition will be needed when spam becomes an issue.
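To make this concrete, here is a minimal sketch of a tagged activity stream filtered by social context. All the names here (ActivityItem, Context, visible_items) are hypothetical illustrations, not features of any existing tool:

```python
# A sketch of sharing by social context: hypothetical types, not a real API.
from dataclasses import dataclass, field

@dataclass
class ActivityItem:
    author: str
    kind: str                        # "photo", "tweet", "profile-field", ...
    body: str
    tags: set = field(default_factory=set)

@dataclass
class Context:
    """A social context: a tag plus the people it is shared with."""
    tag: str
    members: set

def visible_items(stream, context, viewer):
    """Filter a stream down to what one viewer may see in one context."""
    if viewer not in context.members:
        return []
    return [item for item in stream if context.tag in item.tags]

# Example: an "sxsw" context sees SXSW-tagged photos, not bike routes.
stream = [
    ActivityItem("adina", "photo", "panel crowd", {"sxsw"}),
    ActivityItem("adina", "tweet", "morning ride", {"cycling"}),
]
sxsw = Context("sxsw", members={"adina", "tantek"})
print([i.body for i in visible_items(stream, sxsw, "tantek")])  # ['panel crowd']
```

The point of the sketch is that the sharing decision attaches to the context, a tag plus a group of people, rather than to a content type or a one-size-fits-all friends list.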

Summary – if you ask people, in a general fashion, what data elements they want to share with whom, they will give up, overwhelmed. But when tools enable people to share profile information, stream updates, and content in social context, they will be able to make pretty good decisions. Supporting standards, features, and usability to enable sharing in context will help make distributed social networking real.

Neal Stephenson on the decline of genre

Thanks to @dbschlosser’s link on Twitter last week, I listened to Neal Stephenson’s lecture on the decline of genre. As a writer of what can variously be called speculative fiction and good books, his primary interest is in the fate of the genre and community he’s been associated with. Though I agree with the thesis that genre as we know it is in decline, I have some different perspectives on the nature and causes of the change.

Stephenson sees speculative fiction as fundamentally about intelligence – books and movies about smart people; and about exploring the impact of ideas. The genre has become increasingly mainstream, since it is increasingly cool to be a geek, an intelligent person with an informed passion. Stephenson makes it a point to describe intelligence outside of the framework of social class, of going to a brand-name school, of the signs of high culture; an informed passion for machine shop metal work is also geekery and good.

But there are some key aspects of the transformation of genre that Stephenson doesn’t address. The primary transformation over the last 10-15 years is in distribution, in marketing, and in the creation of publics and communities to engage with art. The existence of a broad “mainstream” and identified “genres” was related to older techniques of marketing and distribution. Powerful, expensive mass marketing was used to promote the biggest hits. More targeted marketing was used to reach narrower but still broad demographic categories of buyers. And, of course, physical distribution in bookstores meant that books needed to be shelved in one place, grouped with other books that people in the audience category would be likely to buy.

Given the limitations of mass media and its more targeted niches, the categories of audience were broadly demographic. Mainstream movies have tended to be segmented by gender: there are “chick flicks” about relationships targeted at women and “action films” about violence targeted at men. Music in the US was segmented in an invidious fashion by race, with white and black radio stations, and the categorization of similar musicians onto differing genre shelves based on melanin. The emergence of internet distribution and the “long tail” means that the formerly cartoon-broad marketing categories are no longer applicable, and the allocation of physical shelf space is no longer relevant. People are free to describe content outside of marketing categories, and to organize themselves in groups that may or may not bear a resemblance to the groupings created by marketing departments.

Around culture in general, and speculative genres especially, fans have an easier time finding each other, creating large and active communities around Harry Potter, the Lost TV series, and much more. There have always been associations of fans; the internet makes it much easier for like-minded fans to find each other and bond around fictional worlds and other art.

In his talk, Stephenson takes a few swipes at the “postmodern” schools of cultural theory, where critics call into question the ability of artists to control their material; related disciplines examined that lack of control through the lenses of gender and politics. Now, postmodern critics can swim slowly in a small barrel. I went to college at one of the hotbeds of postmodernism. It was rather common for grad student teaching assistants, and for undergrads flaunting scarves and cigarettes, to claim that the text deconstructs itself, and thereby imply strongly that they were smarter than Shakespeare. This was annoying. I avoided really engaging with the ideas until my senior year, and then after I graduated. The extreme views of the junior disciples notwithstanding, the postmodernists and their economic and political cousins had some valid points.

Stephenson talks about the disappearance of the Western as a genre; the simplest explanation is the decline of social confidence in the “cowboys and Indians” narrative. Fewer people were sympathetic to stories about heroic European settlers fighting Native Americans. Cultural criticism would identify the pattern. Stephenson makes a really insightful point about crime getting absorbed into television because of the good fit of detective stories to episodic structure. He makes a much less compelling point, I think, about romance being absorbed into everything – there’s still a big divide between chick flicks and action flicks – though I can’t talk about this in huge detail, because those are the mainstream Hollywood movies that I don’t go to, in part for lack of identification with either broad gender stereotype. So another argument explaining why sci-fi themes have broad appeal is that they operated outside the narrow confines of Hollywood gender stereotypes.

Stephenson makes fun of the postmodernists, saying that it’s ridiculous to think that, say, Heinlein was not in full control of his material. But Heinlein is notorious as an old-fashioned pre-feminist whose female characters and gender relationships reflected stereotypes. The classic writers of science fiction – who, in a world of the Cold War, colonialism, and sexism, wrote about colonists exploring other planets and experiencing tensions with the beings they found, world-threatening conflicts, and male heroes with buxom heroines – were tightly bound to social structures they could not clearly see. The postmodernists had some valid points.

So, I think that changes in technology, economics, and social structure have at least as much to do with the decline of genres, as they were constructed 50 years ago, as the mainstreaming of geek culture does.

Social and conceptual models for Google Wave

Over the last decade, wikis, blogs, social networks, social messaging, social sharing apps, Google Docs, and other tools have been providing lighter weight, faster vehicles for collaboration and communication than the old lumbering battleships, office documents and email. Now Google’s Wave is a depth charge aimed at the battleships. Google Wave is based on a powerful technical concept, using a realtime chat protocol and stream model as the foundation for communication and collaboration applications. For these reasons, Google deserves a lot of credit for pushing innovation, rather than simply cloning the old models using servers in different closets.

Fundamentally, Google Wave is technology-driven innovation. And Google Wave raises some pretty large questions about the cognitive and social models that people will need to understand and use Wave-based tools.

Conceptual model

The first big set of questions relates to the conceptual model. Wave attempts to mash up email threads, documents, and streaming communication. Each of these is familiar and not that hard to understand. The combination seems a bit mind-bending.

Email and forums are clunky in many ways, but they mirror conversational exchanges in an understandable way: Albert says something, and Betty replies. However, when replies are interspersed between paragraphs and the conversation digresses, it can get difficult to follow. Wave uses a collaborative document-like model to make the changes visible in real time. This is cool and clever. It also needs a rich combination of social conventions and features to keep from becoming completely incomprehensible. Communities using wikis rely on rich social conventions and gardening tools to dispense with the need for inflexible pre-defined workflows. Wave is a toolset with even more flexibility than a wiki, and even more interactive content. This poses even greater challenges in helping people understand how to use it and be productive.

The model of time has perhaps the greatest potential for confusion. In an email or forum thread, the latest contribution appears at the top of the thread. In a document, including a collaboratively edited document, there is a “face” to the document that appears as a working model of a final version. In a chat room, the latest comments appear at the bottom of the screen. In a rich “Wave”, it’s harder to tell which items in the wave are newer, older, more or less definitive, without scrolling through the whole process from the beginning. It is easy to imagine getting seasick.

Another conceptual innovation is “replaying” a wave. In the conventional model, there are known techniques to reflect the current state of understanding. When there are comments interspersed between paragraphs in email and forum threads, it can be difficult for newcomers to get the gist of what has occurred. But there is a time-honored way to bring people up to speed – summarize the conversation to date. The summary has a social purpose, too: it steers the discussion toward a state of current understanding. A document or PowerPoint presentation can look deceptively finished, and close off potentially warranted conversation. A document is an artifact that reflects the end of a collaborative process. But a document can also be summarized and skimmed.
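To make the replay idea concrete, here is a minimal sketch of the kind of append-only operation log that a wave-style document could be built on. This is an illustration of the general concept, not Google’s actual protocol or data model:

```python
# A toy append-only operation log with replay, illustrating the general
# idea behind "playing back" a wave. Not Google's actual data model.
from dataclasses import dataclass

@dataclass
class Op:
    author: str
    pos: int      # character offset in the document
    text: str     # text inserted at that offset

def apply_op(doc: str, op: Op) -> str:
    """Apply a single insertion to the current document state."""
    return doc[:op.pos] + op.text + doc[op.pos:]

def replay(log, upto=None):
    """Rebuild the document from scratch, yielding each intermediate state."""
    doc = ""
    for op in (log if upto is None else log[:upto]):
        doc = apply_op(doc, op)
        yield op.author, doc

log = [Op("albert", 0, "Hello"), Op("betty", 5, ", world")]
for author, state in replay(log):
    print(f"{author}: {state!r}")
# albert: 'Hello'
# betty: 'Hello, world'
```

The structure makes the tradeoff visible: the log preserves every step, but catching up means walking every intermediate state.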

The presenters kvelled, and the audience cheered, when the demonstration showed new participants using “playback” to recap a wave to date. But this seems like the world’s most inefficient way to get up to speed – to understand the end result of a conversation, you need to spend nearly as much time as the initial participants did in getting to that point. A streaming audio/video/screencast presentation, or a realtime chat, can be quite rich, and can be played back, but it isn’t skimmable or summarizable. It’s not clear that introducing that model to summarizable documents and threads is a great thing.

My biggest area of doubt about the Google demo in particular is that in some ways the hybrid combines the worst traits of its parents. Does the result have hybrid vigor or mutant weakness? What mental models are needed to understand this psychedelic blend of realtime, threaded, and document content?

Missing social model

The second set of questions relates to the social model. The Google Wave demo raised a large number of questions about social models for wave-based tools. The demo seemed to use a fairly primitive concept – an individual’s address book that lets that person add a new person to an email thread.

As someone involved in designing social models for tools used by organizations, I see this model as an intuitive way to start, but it does not go very far. First of all, who has the ability to add people to the conversation? Is it everyone, or only the person who created it? Can invitation be delegated? Can a person add himself or herself? Do these permissions vary by wave? What about existing groups and networks? In social sharing tools like Facebook, sharing a message or object shares it with one’s social network (or a defined subset). On Twitter, sharing is easily visible to followers, and visible with a little more effort to everyone. In organizations, there are pre-defined groups (say, the marketing team) that one might want to share with. The differences between these models make a vast difference in how the tools are used and what they are good for.
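To show how much design space hides in those questions, here is a minimal sketch of per-wave invitation policies. The names and policies are hypothetical illustrations, not the actual Wave API:

```python
# Hypothetical per-wave invitation policies; illustrative only, not the
# actual Google Wave API.
from enum import Enum, auto

class InvitePolicy(Enum):
    CREATOR_ONLY = auto()     # only the wave's creator may add people
    ANY_PARTICIPANT = auto()  # invitation is delegated to all participants
    SELF_SERVE = auto()       # anyone may add themselves

def may_add(wave, actor, newcomer):
    """Decide whether `actor` may add `newcomer` to `wave`."""
    if wave["policy"] is InvitePolicy.CREATOR_ONLY:
        return actor == wave["creator"]
    if wave["policy"] is InvitePolicy.ANY_PARTICIPANT:
        return actor in wave["participants"]
    if wave["policy"] is InvitePolicy.SELF_SERVE:
        return actor == newcomer or actor in wave["participants"]
    return False

wave = {"creator": "albert", "participants": {"albert", "betty"},
        "policy": InvitePolicy.ANY_PARTICIPANT}
print(may_add(wave, "betty", "carol"))  # True under this policy
```

Each choice yields a different social dynamic: a self-serve wave behaves more like a public channel, while creator-only invitations behave more like a moderated meeting.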

Another issue is social scale. Adding people and making interspersed comments could be intuitive in small groups, but could easily get confusing or chaotic in large groups. Long ago, Robert’s Rules of Order were invented to let large groups of people debate contentious topics in orderly conversations. Group blogs and forums have developed reputation and rating tools to address the signal-to-noise ratio in large groups. What sorts of rules, tools, and processes will be needed for socially effective communication and collaboration in larger groups when Wave is used in the world?

What the world saw in May was merely a demo. The Google team was up front about the state of affairs. They weren’t doing FUD-style theater claiming to have already created a completed application to scare competitors and stop other developers in their tracks. They were describing a prototype application built on a new platform, and encouraging developers to explore and extend the concepts they demonstrated.

Next exploratory steps

The reality of openness has not yet lived up to the promise. In order to join the developer program, you need to tell Google exactly what you plan to build with their new platform, which is rather hard to say when you haven’t had the chance to play with it yet. Google is also promising to open source the technology. Open source works well when there’s a community engaged with the technology and contributing. It will be interesting to see if Google can be successful in turning its as-yet-private code and process into something that others participate in.

In order for the social practices and designs to be worked out, people need to be using the technology. Google needs to get this technology out of the lab and into the hands of users and developers so people can start to figure out how and whether the conceptual and social model issues can be addressed.

But it’s early days. As someone wisely observed on Jerry Michalski’s Yi-Tan call, an audio online salon that addresses emerging technology topics, it took three years for Twitter to get to critical mass, and Twitter has an extremely simple usage model and a trivially easy model for extensibility. Google Wave isn’t even out in the world yet, and is a lot harder to grok for users and developers. One of my favorite quotes is from Paul Saffo, “never mistake a clear view for a short distance.” Like hypertext did, the concepts embedded in Google Wave could take decades to make their way into common usage. As with hypertext, there may be many years of tools that instantiate concepts of real-time blending before achieving mainstream adoption. Google’s tools and apps may or may not be the catalyst that gets us there.

In the meantime, this is pretty deep food for thought about how and where to integrate real-time communication and collaboration into regular work and life. Much praise is due to Google and the Wave teams for pushing the boundaries instead of cloning familiar models.

WordPress MU, BuddyPress, and distributed community

Over the Fourth of July weekend I did a test install of WordPress MU and BuddyPress. There are several community projects I’m involved with that could use this sort of technology, and I wanted to explore how far these new tools get toward it. The answer, I think, is not quite all the way yet.

WordPress MU allows you to create a multi-blog site (for example, a blog hosting service, or multiple blogs for local food in different communities). BuddyPress lets you set up a social network with profiles, a “shoutbox”-like feature, activity streams, and groups. In theory, this could let you connect a social network of social networks. In theory, the “open stack” of standards would enable independent sites to hook into the network, too. But we’re not there yet.

Here’s the vision that would mirror the structure of existing communities in the world. Take, say, the SF Bay Area environmental community. There is a large, loosely connected overall community, but there is no way to get a big picture of what’s going on. Individuals have their closest ties to a number of smaller groups in their town, subject matter area, political group, and affinity groups. I’m using the environmental community as an example, but I see this model everywhere – in politics, music, sports, many places people get together.

So, imagine:
* a main site that aggregates posts, calendar events, and a view of the overall people network, giving an overview of the community.
* “chapter” sites that have their own posts, discussions, calendar items, and social ties
* independent sites, with existing URLs and applications, that register with the central community and have their news, calendar events, and activities aggregated into the main site view.
* each “chapter” and independent site has substantial power to communicate with its group of users (unlike the Facebook model).

An individual has a single login for the main site and its chapters. OAuth is used to bridge authentication for people whose primary identity is kept at an independent site.
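As a sketch of the aggregation piece, the main site could start by simply polling feeds from chapter and independent sites. The URLs below are hypothetical, and this uses the feedparser library rather than anything built into BuddyPress:

```python
# A minimal sketch of the "main site" aggregating chapter activity via
# ordinary feeds. The URLs are hypothetical; this is not BuddyPress's
# internal mechanism.
import feedparser  # pip install feedparser

CHAPTER_FEEDS = [
    "https://paloalto.example.org/feed",          # hypothetical chapter sites
    "https://menlopark.example.org/feed",
    "https://independent-site.example.net/feed",  # registered independent site
]

def aggregate(feeds, limit=20):
    """Merge entries from all chapter feeds, newest first."""
    entries = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for e in parsed.entries:
            entries.append((e.get("published_parsed"), url, e.get("title", "")))
    # struct_time sorts like a tuple; undated entries sink to the bottom
    entries.sort(key=lambda t: t[0] or (), reverse=True)
    return entries[:limit]

for published, source, title in aggregate(CHAPTER_FEEDS):
    print(source, "-", title)
```

Ordinary feeds won’t carry the social-network and calendar data, but they illustrate the shape of the main-site view: one merged, reverse-chronological stream drawn from many sites.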

The Open Stack conversation is currently focused on solving authentication technical and usability problems. These are needed and useful. But authentication is just a convenience: we’re saving people from typing another username and password.

Distributed communities are about killer applications – about doing powerful, bottom-up community organizing and political campaigns, about building hyper-local news sites with a sense of community that reflects how people affiliate and feel, about enabling networks of people who engage with music, sports, gardening, culture of all sorts. I’m really eager to see progress at the functional end of the stack – the standards and sample apps that actually let you bridge and aggregate social networks.

I wrote a bit about this topic earlier here, focusing on the distributed profile aspect.

Wilco: I am trying to break your heart

“When you strip it down, it just sounds like a folk song.” That’s Jeff Tweedy of Wilco talking about their music early in the 2003 documentary about the making of Yankee Hotel Foxtrot, which I watched this weekend after recently digging YHF out of the garage. Tweedy is right. Pull off the sonic layers and add half the words back to the fractured lyrics, and you have accessible, good folk and rock’n’roll. The live performances of Tweedy and the band make that clear. This music is not that hard.

But YHF was off-center enough that Reprise Records dumped the band when Tweedy wouldn’t take their advice to make the music more accessible. Wilco put the recording on the internet in the interregnum before Nonesuch, another division of Time Warner, picked it up. Internet distribution only heightened interest in the recording and helped fans keep up with the band before the record came out.

The Wilco saga was a fairly early sign of the breakdown of the oligopoly. The tactics to try to preserve the economic scarcity of physical distribution in an age of digital downloads were unsustainable. The fact that YHF was a problem at all is itself a problem. Jim O’Rourke, on the other hand, who was brought in to help with production and gets a speaking part of about 15 seconds, is a ringer for music that resists easy listening. Nobody’s asking him about commercial music; that conversation would probably have kept the documentary from being produced.

Surowiecki argued in Slate that the conventional reading of artistic victory over commercial philistinism doesn’t hold, because after all, it was another division of Time Warner that picked up the record; others have observed that Reprise didn’t have to show the grace of letting the band buy out their contract. Still, Tweedy and his manager didn’t have to have the balls and economic confidence to reject the advice to tone down the eccentricity and turn up the catchiness.

It’s interesting that it was Howie Klein, the music exec turned political blogger, whose ouster led to Reprise’s rejection of the record. Among other things, Klein has been one of the curators of the wonderful “Late Night Music Club”, a virtual fireside chat with YouTube clips across a wide range of excellent and interesting music, irrespective of fashion and nominal genre. Communities like LNMC are taking the place of the radio playlist for music discovery, and that’s for the better.

In the Lefsetz Letter, an entertainment industry lawyer makes the nostalgic argument in favor of the role of mass-market hits in creating common public consciousness. But the price was always too high: segregation, genre-focus, overplay, and the loss of cultural context in a narrative focused on hits. (Not to give Lefsetz a hard time; reading his blog, he is otherwise in favor of digital distribution and taking advantage of the long tail.)

Maybe we’ll eventually get a good “digg” for aggregating and voting up digital plays, which can play the role of a zeitgeist track. That wouldn’t be a bad thing, since it wouldn’t prevent people from discovering long-neglected performances on YouTube and discovering wonderful stuff through the playlists of friends and acquaintances. Network math works like that – there’s still a tall head in the age of the long tail – it’s just that you can get to the long tail now and you couldn’t before.

P.S. It’s interesting that the Wikipedia definition of “playlist” is now dominated by digital tools and the digital definition.

Backing up music with Windows and Mac

There are some things that Apple makes easy, and others that stay complicated. If you have more than one computer – especially if you have Mac and Windows – handling backup and synchronization is a pain.

I use a Windows machine mostly as a media player at home, and a MacBook for work and mobility. I had most of the iTunes library on the Windows machine, and a few things that I had bought on impulse while using the Mac. I wanted to use the Windows machine (an XP laptop with a busted battery and some Logitech speakers) as the main media player (because I own it), and the Mac for occasional listening. I wanted it all backed up, ideally in a couple of places, and I had a 300GB USB hard drive for backup.

Here’s what I needed to do:
* locate and import some missing backup files from a dead computer into the main collection (the key was to search for a distinctive song title)
* reformat the hard drive to FAT32 so that it could be read/written from Windows and Mac. It had been formatted so that Windows could read/write but Mac couldn’t write.
* use the external hard drive to move the Mac library onto the PC laptop
* carefully copy the Mac files into the PC folders (there were a few artists for whom I had different albums on each)
* turn on “Sharing” for the Windows machine so I can listen at home

This multi-step process took a little bit of figuring out, with the help of Google and some nice folk at the Apple store. Once I figured out the steps, the implementation took a few hours but was pretty straightforward.
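The trickiest step was the careful merge, copying the Mac files into the PC folders without clobbering the albums that existed on both sides. Here is a minimal sketch of that kind of non-destructive merge in Python; the paths are hypothetical, and it skips any file that already exists rather than overwriting:

```python
# Non-destructive merge of one iTunes music folder into another:
# copy files that don't exist at the destination, report conflicts.
# Paths are hypothetical examples.
import os
import shutil

SRC = r"E:\mac-library\Music"   # Mac library staged on the USB drive
DST = r"C:\Music\iTunes\Music"  # main collection on the PC

def merge(src, dst):
    copied, skipped = 0, 0
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            target = os.path.join(target_dir, name)
            if os.path.exists(target):
                skipped += 1  # same artist/album/track on both machines
                print("conflict, skipping:", target)
            else:
                shutil.copy2(os.path.join(root, name), target)
                copied += 1
    print(f"copied {copied}, skipped {skipped}")

merge(SRC, DST)
```

Printing the conflicts makes it easy to eyeball the handful of artists with different albums on each machine.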

I signed up for an online backup service, but didn’t use it, because it looked like it would take a few days to back up my collection, and that’s not practical. To make a second backup that’s not in my house, the way to go seems to be a USB keychain drive that lives in my bag or wallet. I’ll rip some CDs and see what size I need.

As the next step in the project, I’ll become the last person on the digital planet to rip my CDs. Why am I only now getting around to ripping CDs and organizing a digital music collection? To make a long story short, I hadn’t taken care of the digital music collection because, until Apple and the labels dropped DRM, I considered the digital stuff disposable, and bought as little DRM’d music as I could. I had spent my time in Austin mostly focusing on Austin music, mostly on CD. When I got to the Bay Area, I was heads down on work for a while.

When I came up for air, I wanted to “true up” my music collection and taste; I didn’t want to just listen to the things I already liked and things that are nearly identical. So I’ve been doing a little exploring with the help of Last.fm, YouTube, and Wikipedia. That’s a longer story that may or may not make it to blog form.