Broadcast contains two kinds of content — things that people really want to watch at the same time, and things that people would rather watch on their own schedule. So broadcast won’t die. It will be constrained to events that a great many people want to watch at the same time, like the Super Bowl, or a newscast of a major breaking story.
Shawn makes this insightful point in a comment on Mark Cuban’s blog. Cuban’s post focused on technology — he argued that broadcast has better performance than the internet, and that multicast technology isn’t being developed aggressively enough. Other readers take Cuban up on the technical points, but Shawn nails the market evolution.
The video market has been migrating to “personal schedule” for decades. But two things kept “event” and “program” content together. First is a lucrative advertising business model that applied only to broadcast. Second is capital-intensive distribution. It was expensive to distribute broadcast content, so the market became an oligopoly. That oligopoly was able to create “pseudo-events” — broadcasting episodes of the Sopranos, and only distributing DVDs to Blockbuster Video later.
Both of these things are changing. The cost of distribution is declining, and ad models are evolving for peer-to-peer distributed content. Mark Pesce’s post from May of this year chronicles how peer-to-peer distribution of television has become a commercial force in the last year, starting with the Battlestar Galactica phenomenon. Pesce’s article speculates about a number of ways that advertisers will sponsor peer-to-peer content.
The net result is that the niche for pre-recorded broadcast — whether over-the-air or on cable — gets smaller. The Super Bowl will still generate large ad revenues, but programming will keep migrating away.
Steve Hannaford’s Oligopolywatch
OligopolyWatch covers the business news and underlying business trends of industry consolidation. Nice stuff for market research geeks.
On prejudice against southern white people
This from Boing Boing, a crew that would be offended by stereotypes against black people, gay people, Jews, or Asians.
On the one hand, there’s some hesitation for a liberal to fight cultural stereotypes of uneducated southern white people, because of the risk of the opposite stereotype. In the wrong places, one might run the risk of insult or even violence for being liberal, non-Christian, gay, Asian, or black. It’s hard to immediately defend people you imagine might beat you or your friends up.
On the other hand, the “coastal elite” / “redneck” stereotype wars are counterproductive: they create personal offense and provide opportunities for destructive “wedge” politics. Some friends of mine have strong southern accents, and report that they are treated like idiots when they travel North.
Criticizing intolerant and ignorant actions, absolutely. Ethnic stereotypes, not so good.
MSN Filters: blogging as mass media
Ross is scathing about MSN’s new “Filters” project, a commercial group blog in the business niche that Weblogs Inc occupies.
Ross argues that by creating a blog zine with paid writers, MSN Filter is competing with its customers. That implies that blogging is a mass-medium with limited channels. During the height of the portal frenzy, there were stats suggesting that the Web was consolidating to three home pages. The “Long Tail” discussion and Google Adsense have put that to bed.
To the extent that part of blogging joins the mass media, more power to them. MSN and AOL already have portal home pages with pictures of celebrities and celebrity gossip. I don’t care, and I don’t have to care. Radio is a top-40 wasteland, but satellite and internet offer diversity. As long as I can find and read the blogs I care about, they are welcome to compete with Gossipster.
I suppose it’s competing with those customers who are doing blogging for money. If MSN had social smarts, they’d be looking for the popular bloggers on their service, and promoting them onto the portal for extra traffic, and compensating them. Given their terms of service, they could just take the content and not compensate the customers, which would be legal but reprehensible.
Purple pro and con: the insight and the argument
Chris Dent writes in praise of purple numbers. These paragraph-level identifiers enable re-use of content: chunks of good ideas are otherwise locked inside larger units, within documents and discussion threads.
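As a rough illustration (my own sketch, not Chris Dent’s implementation; the function name and ID scheme are invented), a purple-number tool might attach a persistent identifier and anchor link to every paragraph of a post, so each chunk can be linked and quoted on its own:

```python
# Hypothetical sketch of purple numbers: give every paragraph of a post
# a persistent node id ("nid") and render an anchor for it, so individual
# chunks can be linked and quoted on their own.

def add_purple_numbers(paragraphs, start_nid=1):
    """Return (next_nid, html) where each paragraph gets a #nid anchor."""
    html_parts = []
    nid = start_nid
    for para in paragraphs:
        # In a real system the nid would be stored so it survives edits;
        # here the ids are just assigned sequentially for illustration.
        html_parts.append(
            f'<p id="nid{nid:04d}">{para} '
            f'<a class="purple" href="#nid{nid:04d}">({nid:04d})</a></p>'
        )
        nid += 1
    return nid, "\n".join(html_parts)

_, html = add_purple_numbers(["First idea.", "Second idea, worth quoting alone."])
print(html)
```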
The benefit of purple numbers is that they unlock insights, increasing the liquidity and flow of ideas. The drawback is that they break apart arguments. Insights may be captured in paragraphs. But arguments are conveyed across multiple paragraphs. You need more than one paragraph to provide context, to set up a contrast, or to draw a causal connection.
Sometimes, picking apart the individual points is what’s needed to find the holes and strengthen understanding. Sometimes, picking at individual points is a sign of a flamewar — people are searching for points of disagreement. Picking at points can increase the quality of thought, or reduce the quality of thought by reducing the incentive to build toward a larger theme.
In general, the wiki form is conducive to consensus, by bringing people literally onto the same page. It will be interesting to see how wiki+purple affects the quality of thought and the level of agreement.
Information service, communication service, and bad law
The FCC exempted phone companies from having to lease lines to internet service providers. They did this by re-classifying broadband as an “information service”, which was ruled not to be subject to line-sharing.
In the words of Light Reading, the FCC ruled that the physical facilities that deliver broadband and the broadband service itself are indistinguishable and inseparable. The two things together — the facility and the service — are now classified as an “information service.”
Avoid rankism with clouds
In response to Mary Hodder’s concern about “rankism”… I wonder whether rank is the wrong presentation, and clouds are right.
A cloud presentation would primarily show the communities that a blogger is in. It may also show the influence strength within each community, but that should be secondary in the presentation.
A cloud presentation might enable navigation along topic axes. For my blog, you’d be able to traverse to the social software and Austin clouds.
Influence would be calculated within each cloud. So Jon Lebkowsky would have separately-calculated influence levels within the Austin and environmental blog communities.
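A rough sketch of what that might look like, with made-up sample data and a simple inbound-link count standing in for whatever influence measure a real tool would use:

```python
# Hypothetical sketch: compute a blogger's influence separately within
# each community "cloud", then scale it to a font size for display.
from collections import defaultdict

# (linking_blog, linked_blog, community) -- invented sample links
links = [
    ("blogA", "jonl", "austin"),
    ("blogB", "jonl", "austin"),
    ("blogC", "jonl", "environment"),
    ("blogA", "blogB", "austin"),
]

# Count inbound links per blog, per community.
influence = defaultdict(lambda: defaultdict(int))
for src, dst, community in links:
    influence[community][dst] += 1

def font_size(score, max_score, min_px=12, max_px=32):
    """Scale a within-community score to a cloud font size."""
    return min_px + (max_px - min_px) * score / max_score

for community, scores in influence.items():
    top = max(scores.values())
    for blog, score in scores.items():
        print(f"{community}: {blog} -> {font_size(score, top):.0f}px")
```

The point of the design is that the same blogger gets a different score in each community, rather than one global rank.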
Perhaps the presentation would allow the browser to traverse communities. One could find “BlogHer”, and traverse to the “Sepia Mutiny” South Asian community.
Time would be an interesting factor. Perhaps one could view the cloud by week, month, or year. See how participation ebbs and flows over time. A longer time frame would be interesting — I wonder whether other bloggers are “bursty” in their topics of interest. A long time frame would catch people who come and go.
In sum, a cloud presentation would avoid the worst of rankism, because it would focus on the community more than the individual, and allow a browser to traverse communities.
Mary Hodder on Blog Community Discovery
In a thoughtful essay, Mary Hodder explores what it will take for blog search to go beyond the “top 100 syndrome” to discover the interesting patterns of influence and community.
…this is about going beyond lists and links, to understand that the social relationships of expression between and across blogs is really about searching for a “metric for identity” or “metric for affiliation”, “metric for community”, or “metric for influence”.
Mary is ambivalent about creating new forms of “rankism”.
I have to say, I’ve resisted this for the past year, even though many people have asked me to work on something like this, because I hate rankism. I think scoring, even a more sophisticated version of it, akin to page-rank, is problematic and takes what is delightful about the blogosphere away, namely the fun of discovering a new writer or media creator on their terms, not others.
The algorithm would weight links in posts higher than blogroll links, and new blogroll links higher than old ones. It might include new terms like time read, comments, and topic score.
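A minimal sketch of that kind of scoring, with invented weights and field names, just to make the shape of the calculation concrete:

```python
# Hypothetical sketch of the weighted scoring described above;
# the weights and field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class BlogSignals:
    post_links: int          # links from the body of other blogs' posts
    new_blogroll_links: int  # blogroll links added recently
    old_blogroll_links: int  # long-standing blogroll links
    minutes_read: float      # aggregate reading time
    comments: int            # comments received
    topic_score: float       # 0..1 relevance to the community's topic

def score(s: BlogSignals) -> float:
    # Links inside posts weigh more than blogroll links, and new
    # blogroll links weigh more than old ones.
    return (3.0 * s.post_links
            + 1.5 * s.new_blogroll_links
            + 0.5 * s.old_blogroll_links
            + 0.1 * s.minutes_read
            + 0.2 * s.comments
            + 2.0 * s.topic_score)

print(score(BlogSignals(40, 12, 30, 500.0, 85, 0.8)))
```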
Hopefully, the tradeoff for more rank-ism is better discovery. This weekend, I spent some time exploring Sepia Mutiny – a group blog for South Asian writers – and its cousins, after meeting one of the authors at BlogHer. This form of indirect discovery is delightful. A tool that helps with such serendipity would hopefully be more like the joys of a used book search database, and less like “sororitization”, the turning of social groups into popularity contests.
I wonder whether rank is the wrong presentation, and clouds are right. Clouds would primarily show the communities that a blogger is in, and perhaps show secondarily the influence strength within each community.
Blog search: Tell me something I don’t know
I got an email about a new blog search engine called Blogniscient, so I clicked through to try it.
On the home page, it tells me that the top 10 political bloggers are:
#1 Michelle Malkin
#2 Captain’s Quarters Blog
#3 Eschaton
#4 Powerline
#5 Crooks and Liars
#6 Austin Bay
#7 Think Progress
#8 TPM Café
#9 The Anchoress
#10 Daily Kos
You can drill down and find the top liberal and conservative blogs. Two clicks later, I find that the top liberal bloggers are (the list goes to 20):
Liberal Politics
#1 Crooks and Liars
#2 The Left Coaster
#3 Eschaton
#4 Think Progress
#5 Daily Kos
#6 TPM Café
#7 Talking Points Memo
#8 Political Animal
#9 The Huffington Post
#10 America Blog
So please, Mr. Search Engine, tell me something I don’t know. I knew that Daily Kos and Atrios/Eschaton were very popular. I had no idea that Atrios was two places ahead of Kos, and… I don’t care. It’s not like baseball heading into the playoffs, where there’s going to be a single winner.
Where are the good centrist blogs, like The Moderate Voice and Ambivablog? They don’t fit into the impoverished taxonomy, let alone sites like Booker Rising, a site focused on moderate-to-conservative African-Americans.
Here’s the problem. The top 40 blog list is boring. It’s stable. We know who they are. The job of a search engine is to tell the user something they don’t already know.
Splitting up the top 100 into big themes is somewhat more interesting than the general-purpose Technorati 100. It’s more meaningful to look at top political blogs, sci/tech blogs, entertainment blogs. But it’s still stable, and doesn’t convey much new information.
The top news stories list is a bit more interesting, since it churns daily. It’s a useful zeitgeist check, and may be worth revisiting.
The bulk of the site misses the glory of the web. With a vast amount of human knowledge there for the mining, please tell me something I didn’t already know.
The Success of Open Source
Steven Weber’s excellent book, The Success of Open Source is a superb complement to Yochai Benkler’s classic essay, Coase’s Penguin. Benkler looks at peer production as an economic system and concludes that it has become a third major form of organizing production, alongside the market and the firm. Weber takes a closer look inside the open source production process, and provides a fascinating analysis of how and why it works:
- the origin of open source software
- why people participate
- how projects are organized
- how open source fits into surrounding organizational and economic structures.
By doing this, Weber reaches a variety of interesting observations and conclusions:
- Counter to the myth, the open source development process is not a teeming bazaar, with “bottom-up” self-organization composed of local signals. The largest and most successful open source projects have identifiable, hierarchical organizational structures, with a leader and/or inner circle, up to a few hundred active contributors, and a much larger group of occasional participants.
- While open source licenses protect the right to “fork”, to take the codebase off in a different direction than the original project, projects stay together more than a skeptic might think. Weber observes that project leaders depend on developers and developers depend on the community; both count on the ability to get more done together than separately.
- Developers contribute to open source projects even though most users are “free riders” who benefit from the software and contribute little or no code. This is less of a paradox than it might seem, since software is a “network good” that gains value the more people use it. The more people who use a program, the more bugs are reported and fixed, and the more robust the system becomes.
- Since the origins of the phenomenon, there have been different approaches to licenses. The West Coast, Berkeley-style licenses are easy-going about including open source software in other, non-open-source code, so long as credit is preserved. The East Coast, Free Software Foundation GPL (GNU General Public License) is strict: redistributed code must always remain free software, and any software that includes GPL code must itself be distributed under the GPL.
Perhaps the most insightful conclusion Weber draws is the relation between open source and intellectual property. Weber observes that open source redefines property around the right to distribute, not the right to exclude.
Weber is able to make this observation because he avoids polemic. He doesn’t try to argue that open source software is good because intellectual property is bad, and he doesn’t argue that open source software is bad because intellectual property is good. Instead, he is able to observe how open source redefines property itself.
Weber’s pragmatic analysis leads him to focus on the vibrant intersection between open source production and traditional business, with a look at a variety of hybrid business models, from IBM’s focus on hardware and services, to Red Hat’s packaging and branding, to MySQL’s services and customization, to Apple’s addition of proprietary chrome and polish. Weber predicts continued evolution and innovation at this boundary.
The book was published in 2004, and so it misses one of the most interesting trends of the last couple of years — the rise of open source software that’s not just for hackers. Netscape/Mozilla is included in the book as an example of failure. Weber looks around at Linux, Gnome, KDE, etc., and concludes that open source may never be able to make software that works for non-hackers. This was before the breakout success of Firefox, and the popularity of GAIM, an instant messaging client with a consumer-quality interface.
Weber examines the brash and blunt hacker culture, with its focus on technical decision-making through vehement debates on project mailing lists that hash out solutions to technical problems and decisions about technical direction. I wonder how the culture will evolve as interactions with non-geek users grow, and as hybrid companies face decisions shaped by external constraints from customers.
Towards the end of the book, Weber speculates about how the organizing methods of open source software might affect the production of other kinds of goods — writing, music, biotech, business ideas. I thought it was interesting, but less substantial than the parts of the book focused on open source itself, with analysis based on observation.