Comcast's 250GB transfer cap is equivalent to 2.5 hours of HD video, as observed deep in the comments of Om Malik's post on the Comcast bandwidth cap. The internet use that would threaten Comcast's business over the next few years is exactly what's banned. Strategically diabolical, if you have the market power to do it.
It took over 5 hours to recover from an expired domain name. The experience with Earthlink was one of the worst customer experiences I've had in my life, and I can't remember the other ones.
The domain had been my primary web and email address for 10 years. It took me a few days to realize that the problem was the domain, not just an email delivery problem. Once I realized the problem was with the domain, I looked up the domain on whois and saw that it was registered at register.com.
Then came a comedy of redirection. I called Register, who sent me to their partner division, who sent me to Earthlink, who sent me back to Register, who sent me back to Earthlink again. Each of these episodes involved long waits on hold. The last Register clerk who sent me to Earthlink a second time was extremely patient and volunteered to stay on the line while I talked to the Earthlink folk who denied I was registered through them.
This time the Earthlink clerk recognized the registration. But she was unwilling to give me any information about the account unless I provided her with the credit card number I used to register the account a decade ago. I don't know about you, but I save financial information for 7 years, which is the recommendation for tax purposes. Finally she referred me to another number. She wouldn't explain why, or what the other number was. I stayed on hold at that number for 30 minutes, then gave up and tried Earthlink chat.
Finally, the chat clerk explained what the problem was. The number was a collections agency. Apparently I had owed money to Earthlink (even though my records showed that I had been paying them regularly). Another side trip to the billing department at Earthlink who confirmed that I was paid up. A re-visit to Earthlink chat. Now, the clerk identified that there were *two* accounts, one of which was delinquent (I hadn't gotten any notices that I remembered). But he couldn't give me any more help unless I remembered the decade-old credit card. He was about to invoke a supervisor confirmation procedure that would take another day to verify my identity.
But then I had an epiphany. Amazon.com to the rescue!!! I realized that I'd been buying stuff from Amazon for a decade. I looked up the history of my Amazon account. Lo and behold, there were the 10 year old credit card numbers. At this point, Earthlink's computer system went down and I needed to call back 30 minutes later. Finally, they were able to look up my account, and sent me to the collections people with instructions to get a confirmation number once I had paid my bill.
By this time, the collections office was really closed (they hadn't picked up the phone earlier when they were supposedly open). I tried them in the morning, the first minute they opened, and someone answered the phone. They confirmed that my account was paid up. They couldn't give me a confirmation number, since I didn't owe any money. To get out of the catch 22, I offered to pay them just to get a confirmation number. The supervisor discovered a fee of $50 (real or fictional, I'm not sure), and I used this to ransom the confirmation number.
Then I called back Earthlink, gave them the number, and was ready to renew the domain. Ok, in order to renew your domain, you need to transfer the hosting to us, said the clerk. Oh, no I don't. It took a few steps of back and forth to get them to renew my domain name without taking their hosting services, which I very definitely did not want. I gave them a new credit card, and paid $30, which is $20 more than the going rate.
A couple of hours later, alevin.com was back in service. But that wasn't even the end. I got an email from them, saying that my credit card had been rejected. They hadn't used the new credit card number I gave them, but the 10 year old number that had been expired for years. I gave them the current credit card number, again.
I think I registered the domain with Mindspring, back in the day, when they were good. Earthlink bought Mindspring and went through years of turmoil and decline. Now they have achieved a level of customer service that is dramatically awful.
In 30 days, I can transfer the domain to some other registrar and be rid of Earthlink forever.
Liz Henry just pointed out a creepy-looking web service, genetree.com (http://www.genetree.com/), that will tell you who you are related to. Like, perhaps, the letter carrier or Thomas Jefferson.
They could support themselves with ads from divorce lawyers.
This week, I went to She's Geeky, an unconference for women in technology. There were sessions on topics technical, organizational, entrepreneurial, and personal.
One interesting session was on managing groups of men. The conversation dealt with some of the style differences between women and men; the list below comes from that session and from some others that dealt with the topic:
* women communicate by telling stories that put the issue into context; men are more likely to communicate with bullet points and arguments
* women often try to lead conversations by asking questions and getting others to contribute; this can be read as weakness
* decisiveness and strong opinions from women can be read as bitchiness. People varied in their reaction to this, ranging from "claim your inner bitch" to "learn to respect people with alternative skills and styles"
* see above: women may care too much about what other people think about them.
* women sometimes have trouble saying no; there was a whole session on the topic that I didn't go to.
* on the whole, more men believe they're above average, and more women believe they are below average (think about this for a moment...). Women need to learn to filter men's boasts when they aren't matched by reality, realize their own competence, and get safe support to build confidence.
There were also some rather unfunny stories of traditional sexism: the only female engineer in a group being asked to decorate a new office; a woman who found she was making less than similarly qualified men; a woman executive being asked to regularly provide fashion advice to her CEO (and she seemed to feel obligated to do it). (I suggested that she refer him to the Neiman Marcus personal shopping service.)
At SuperHappyDevHouse20, I worked on a couple of projects to make tools for peer organizing. More notes when I have time to blog.
Last post's exploration of "the way things work" was "Infrastructure", Brian Hayes' photo survey of industrial infrastructure. This week's episode of "Richard-Scarry-for-grownups" is The Box, by former Economist editor Marc Levinson, which delves into the history of container shipping.
The Box is a compelling history of things and people. It dives into the details of industry structure, finance, and technology and assembles an intricate picture of transformative change. It recounts the adventures of the competing entrepreneurs racing to get the system working, beat the competition, and outwit regulation.
Container shipping appears inevitable from the perspective of technological determinism. Boxes, trains, trucks, motorized ships, cranes: none of the technology was dramatically new. Container systems had been tried in railroad shipping since the 1920s. The old system, where each item needed to be loaded, unloaded, and reloaded with manual labor, was costly and slow. But, a clear view is not the same as a short distance, to quote Paul Saffo.
The incumbent industry had strong incentives to preserve the status quo. Shipping, trucking, and trains were regulated industries with centrally set prices and terms of service, established cartels, and a focus on the mechanics rather than on the service of transport. It took an innovative entrepreneur and some well-timed government handouts to break the logjam. Malcom McLean, a trucking magnate, envisioned the system in his mind's eye, drove the engineering for the interlocking containers and the fast-loading cranes, put together aggressive debt financing, and benefited from the US government's giveaway of WWII surplus transport ships. Far-sighted port agencies in New Jersey, Long Beach, and Singapore invested heavily in container ports, securing early leads. When change came, it was rapid. Levinson writes, "Three years after containerships first sailed to Europe, only two American companies were still operating breakbulk ships across the North Atlantic."
But even the folks who saw change coming had very imperfect foresight. Many cities invested in ports, but only a few succeeded, and others invested heavily without return. After making a fortune in his first container ventures, McLean himself bet badly, on a fast, fuel-guzzling container ship that hit the market during the 1970s oil crisis, then on a huge slow ship that was introduced just in time for the 80s oil price crash. From a distance, the transition to container shipping seems orderly and logical, like water flowing downhill. Close up, it's rapids.
The book's attention to evidence shows a more complicated picture of the relationship between labor, capital, and government than ideology would predict. Much economic writing in the popular press has a clear ideological slant: the free market generates the most efficient economic outcomes, while regulation, government subsidy, and labor protection reduce economic growth. Alternatively: regulations protect against excessive corporate power, subsidies protect infant industries and local economies, and unions empower workers.
Levinson's history of the rise of container shipping uncovers a more mixed and subtle story. The early innovators in container shipping got a jumpstart from a government fire sale of surplus WWII ships. Without the gift of low-cost ships, the capital costs of ships would have been higher than the entrepreneurs could carry. Early on, some port cities and agencies invested heavily in the creation of container ports. The government investment paid off spectacularly well for some, and badly for others.
At the same time, the shipping, trucking, and rail industries were highly regulated. Players were attuned to manipulating the regulatory agency rather than competing. Much later on, the successful container industry helped drive deregulation. Levinson doesn't touch the reasons that the railroads got regulated in the first place; they had been an overly powerful oligopoly that abused their market power. So, when does it make sense for government to subsidize or regulate industry? Sometimes, in the cases of early industries, very high capital investments, and to combat market power. And sometimes regulations and subsidies outlive their usefulness.
The biggest expense in shipping was not the transport itself, but the repeated loading and unloading of every item. Longshoremen's unions arose to protect workers against an abusive contingent labor system, where workers scrambled every day for the chance to unload the day's ships. The union policies provided steady work, but also created work rules that mandated more workers than were needed to do the job. The longshoremen protested containerization vehemently. In some regions, protracted labor conflicts kept the port from adapting to the new technology; by the time the union lost, the container ports had been set up elsewhere. But on the US west coast, the union negotiated a settlement where longshoremen whose jobs were made obsolete received retirement payouts. The benefit of containerization was shared with the workers.
The Box tells a story that is more complicated than an ideologue would prefer. Unions and government actions are sometimes helpful and sometimes harmful, and helpful structures can outlive their usefulness and need replacing.
So, it sounds like Cingular and other phone companies have been blocking calls to Freeconference.com.
I am very eager to try the Nokia E61i with wifi, and to see what the OpenMoko project comes up with. How long til someone sells voip phones for $49 in cities with good public net? Tony Bowden, a Socialtext colleague who lives in Estonia, which has great wifi, was trying the skype phone approach. I wonder how that went.
Muni wifi installations have been around for long enough to come back and test which ones are working. Novarum, a consultancy specializing in wireless broadband, has looked behind the hype and the skepticism, and tested muni wireless networks by coverage and speed. The best-rated systems were Saint Cloud, Florida and Mountain View (which worked when I was there). The first thing to note is that, according to the study, some of them actually do seem to work. Second, reasonable performance depends on more transmitters; early estimates recommended 20 transmitters per square mile, but it appears as though 40 are needed for adequate performance.
Novarum also ranked the cellular broadband networks, and included them in an overall ranking with the wifi nets. The Saint Cloud net came in first overall, and the Google Mountain View net came in number ten on the combined list. The cellular nets rank better because they have better coverage. Wifi nets, when you can get them, are faster than cellular.
One puzzlement is that Palo Alto appears on the wireless list, ranked number 8, at 2.45 on a scale of 5. On the University drag, there are plenty of locations offering free wifi, but what is the muni offering? Is it the lame Anchorfree service that has poor connectivity and a horridly annoying registration system? If that's the case, then it's below the cutoff where a rational person would consider the system to "work." Santa Clara is above it, ranked at 2.65. A field trip may be in order for some anecdotal testing. I wonder where in the Santa Clara sprawl the network is to be found?
What was the population in the survey? Hard to say. To build that top 10 list, how many cities did they visit? Ten? Fifty? They don't say on their website. This makes it impossible to draw conclusions about the overall state of muni wireless investments.
Novarum plans to come back in six months to test again.
Caught an author interview with Scott Rosenberg about his new book on the Chandler project and software development. I like Rosenberg's writing, but I haven't read the book yet.
From the interview, Rosenberg sees Chandler's failure-to-thrive as a cautionary tale about all software development. However, Chandler actually had a distinctively awkward set of initial conditions:
* architecture driven. They had a grand vision of a message-based storage model that they needed to get perfect before they did anything
* clearer vision of architecture than application. Reading Chandler's material, there was no clearly articulated goal beyond a free clone of Outlook (though that alone wouldn't have been a bad thing)
* infinite budget. Open source, with a wealthy funder. No economic constraints or time pressure to keep them on the straight and narrow. No personal itch-scratching, unlike the classic open source story.
Plenty of software projects fail because they don't adhere to the logical set of constraints. Chandler started without the constraints.
Steve Jobs' first big applause line in the iPhone speech was "it's a phone". The second was "you can hear voicemail in the order you choose."
The "smartphone" trend all about using a phone for everything but making phone calls and listening to voice mail. A naive focus on technology, competitive advantage, or customer requests all leads engineers and marketers to stuff more features into the phone and neglect what the user is really trying to do.
I'd like a version of the iPhone with really big keys with indented numbers so you can dial with fractional attention. Use the keypad to protect that beautiful, fragile-looking screen.
Summary: It looks gorgeous, and I want one:
* when I can add my own software
* after the model gets introduced that doesn't break in less than 30 days
From an October 11 post about a BayCHI event, two interesting insights:
Shoplifting is a problem for stores. The logical solution? Retail stores require that all clothing and bag makers redesign the pockets, handbags, and backpacks. If you try to steal something, an alarm goes off when you try to leave the store. If someone, somewhere has figured out how to steal using your model of backpack, then your backpack will stop opening til you get it fixed. Want to go shopping? You need to buy a new bag. Is there a bug in the store's system? You can't put your hands in your pockets.
This is Microsoft's approach to DRM. In the interests of protecting content providers, Microsoft requires peripheral vendors to support DRM. This widely discussed essay talks about the various vulnerabilities and anti-features of Vista DRM support. Microsoft can disable or degrade your peripheral if somebody somewhere has compromised your driver. If you want to play DRM content, Vista requires all of your peripherals to support DRM, so you need an all-new display, speakers, etc. The hardware DRM means a step backward, away from universal drivers toward device-specific drivers. In all, it sounds like Vista makes your system unreliable and cumbersome, in the interest of protecting content providers.
Given these risks, I'm not going to get Vista any time soon on my own computers. I'll wait til the experience of millions of others demonstrates whether it's as burdensome and flaky as it sounds like it might be.
Some pleasant discoveries in a weekend with lots of electronic and physical housekeeping. Travis County, Texas hadn't figured out that I'd moved to California, and sent a jury duty notice. They also had a nifty online application that lets you tell them you've moved and are no longer eligible. Palo Alto's traffic citation system doesn't let you pay fines online, but there's a reasonably sane automated voice system to pay a traffic ticket with a credit card. They charge a very annoying $12 fee for the privilege, but avoiding an hour-long errand is worth it. I don't mind paying for decent services with taxes or fees; it's just that the gov't is probably saving money when citizens do their own data entry.
Yesterday, I came home after a bike ride and was able to find answers on the internet to my questions about the wildlife and places I saw on the ride. In 1998, the last time I'd gone out (running) in the same landscape, the information wasn't on the net yet. It would have taken hours of research and travel to find the same information. The internet is a thing of wonder.
Jon Udell fantasizes about being able to have geo-information available immediately, as you experience the landscape. That would be both cool and horrid. The reason I put a blackberry in the drawer is that it intruded into experience. I'd go for a walk on a sunny day and see email, not trees and flowers.
It's one thing to see mysterious weathered structures in a marsh, an odd-looking drawbridge that looks like it rotates sideways, rush-like plants growing at the side of the water, and variations in the color of pools; it's another thing to learn about the abandoned town, the man who tended the drawbridge, the ecosystem that depends on those plants, and the salt concentrations and microorganisms that influence the color of the water. I didn't need all that information right while cycling; the salty breeze and the landscape were plenty.
Wordsworth defined poetry as emotion recollected in tranquility. A corollary -- you're not writing the poem while in the midst of the emotion and sensation. I think it would be great to be able to bookmark a landscape, and come back to learn about it later, and not forget. Being able to look the landscape up later is enough cyborg for me.
Dale at O'Reilly is uncomfortable with a citation of Make Magazine in an article on status. The article's hypothesis is that people acquire craft skills in order to show off and be superior to others. Dale feels that creative people make things to please themselves. Both of these miss a third perspective -- creativity as a gift. How often do creative people make something in order to please others? Cooking is surely like that. I'm more motivated to undertake a creative project if there's someone who will enjoy and appreciate it. It's about making the other person feel happier, not smaller.
BarCamp Stanford was fun, with some cool hacks, enjoyable people, and maybe some useful organizing. The very pleasant location at Stanford had glass doors open to a green view and perfect weather.
Blog posts here will be completely anecdotal, focusing on ThingsILearned rather than reporting.
Went to a session on capturing the attention stream. There are three ways this could potentially be useful; we talked about two of them: for the individual and for marketers. For the individual, it could be cool, but it has the potential to add to synchronous overload (popping up recommendations when you are trying to concentrate) and asynchronous overload (giving you even more things to sift through and file). It would be a gold mine for marketers, but needs a high level of shared consent to avoid adding yet another dimension of creepiness to the surveillance society.
If this entry posts correctly, I've upgraded Movable Type. The process isn't all done yet. The cms templates are somewhat messed up, I need to delete over 10,000 spam comments from the last several weeks, and need to install a new comment utility.
UPDATE: The cms template just needed a shift-refresh, and the spam comments are gone. Woo hoo! If you made a comment in the last six weeks, it got caught in a spring cleaning frenzy. Feel free to comment again.
It's cool to be somewhere where there are multiple events for women entrepreneurs. Will blog about the program when done.
One nice touch for networking -- the lunch is after the talks, the better for conversation-starting. Otherwise, you mill around awkwardly with a group of strangers, and then, by the time you have more things to talk about, everybody leaves.
I am glad to see that this LinuxWorld article contains plenty of quotes countering Forrester analyst Michael Goulde's report: "Vendors Refine Their Open Source Strategies/The Risk of Subverting Open Source Freedoms Mounts".
Goulde argues that "the traditional open source project with a large community and volunteer contributors is going to be diluted by extensive vendor participation," as he told LinuxInsider in an interview.
What Goulde is missing is the fact that in the open source ecosystem, a very high percentage of projects fail. Most projects, whether initiated by individuals or vendors, don't get much contribution and die a quiet death. That's nothing new. If new vendors release open source projects that are not interesting to developers, then those projects won't get community participation. A tree fell in the forest and nobody heard.
His other argument makes even less sense. He writes in the report: "As major software suppliers adopt open source software as part of their strategies, the risk increases that the goals of the open source movement -- user freedom to use, modify, and distribute software -- will be undermined."
But the freedom to modify is in the license, not in the promise by the vendor. If a vendor open sources a product, it uses either GPL, or BSD, or some other style of license that grants permission to modify, and requires redistribution to be credited or to be also open source. Once the vendor releases the software, they can't take it back. Even if they offer new software under a different license, the community is now free to fork the code.
Wikiwyg, the wysiwyg editor for wikis that was initially developed by Socialtext and released as open source, is getting an increasing number of useful patches and bugfixes from the community. People find it useful, it's being adopted, and developers are contributing. The right to modify and redistribute is protected by the LGPL.
If a vendor releases software that's useful, the community will pick it up. If it's less useful, it will get less traction. Projects might pick up interest as they mature, or lose interest if the software diverges from what the community wants. All of these patterns are common.
Rather than describing risks to open source, Goulde might have described risks to vendors. If vendors hope that simply by open-sourcing their code they are guaranteed developer interest, they are sadly deluded. Just like any other product, an open source product needs to meet the needs of its customers -- in this case, open source developers -- in order to be successful.
On the topic of portals reincarnated with ajax: "I never wanted portals to be more dynamic, or even open to third-party-authored widgets - I wanted them to go away altogether."
Google gives the content industries freedom to set their own prices, starting with Free. This will be popular with content providers, who hate Apple because Steve Jobs insists on setting his own prices for online music and video.
If Google wanted to be Not Evil, it would allow video producers the choice of providing content in DRM-encumbered or DRM-free formats. Video providers who wanted to allow users to download, time-shift, excerpt, and mash-up would be able to do so. Video producers who want to restrict content could do so too. If restricted content loses viewers to sources with more convenient terms, that would be neither here nor there to Google.
Many video producers who upload their content to Google Video are small non-commercial players, or obscure sources like this sushi documentary that have much more to gain from exposure than from restricting use.
Google gets a very small bye for beta -- maybe giving content providers their choice of format is a soon-to-come future feature. If not, Google Video really is evil.
Donata.com writes about auto advertisers shifting budget from TV and print ads to the internet. He's predicting a new trend in online advertising, away from search context ads (Google) and an unstructured database (Craigslist), and toward structured, decentralized, XML-based ads. In this model, each dealer would post their list of available cars, with price and options. Now, there are already plenty of centralized aggregator services. Froogle can find you deals on desk lamps and rain boots. If Froogle let users subscribe to a search by RSS, that would go a long way toward the vision.
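To make the structured-listing idea concrete, here is a minimal sketch in Python. The feed schema, element names, and dealer are my own invention, not anything Donata.com or Froogle actually publishes; the point is just that when each dealer's inventory is machine-readable, a saved search can filter on price and options instead of scraping free text.

```python
# Hypothetical sketch: a dealer publishes structured listings, and a buyer's
# saved search filters the structured fields directly -- which is what an
# RSS-style subscription to a search would automate. Element names are invented.
import xml.etree.ElementTree as ET

feed = ET.Element("listings", dealer="Example Motors")
for vin, model, price in [("VIN00001", "Accord EX", 18500),
                          ("VIN00002", "Corolla LE", 15900)]:
    car = ET.SubElement(feed, "car", vin=vin)
    ET.SubElement(car, "model").text = model
    ET.SubElement(car, "price", currency="USD").text = str(price)

def matching(feed, max_price):
    """Yield models priced at or under max_price."""
    for car in feed.iter("car"):
        if int(car.find("price").text) <= max_price:
            yield car.find("model").text

print(list(matching(feed, 16000)))  # ['Corolla LE']
```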
Maggie Orth's International Fashion Machines is marketing a fuzzy light switch. Touching the pompom completes the circuit and turns on/off the light.
A fuzzy switch is kind of nifty if you don't have little kids with sticky fingers. But it's not that different from a regular switch that you need to get up to flip.
What would be really nifty is a fabric household remote control. Touch bits of fuzz or parts of a colorful pattern to turn lights, heating/air conditioning, or the stereo on and off, or to run the bath. The trigger could be a soft press, or a bounce for the playful. It could be a fuzzy desk toy, a mousepad-like desk accessory, or a watch band.
It will be especially fun when these are available as kits, and 8-12 year old kids will be able to make them as crafts projects.
Last week's WSJ reported that "Several large telephone and cable companies are starting to make it harder for consumers to use the Internet for phone calls or swapping video files." Surely, the best strategy that a business can take when faced with booming customer demand is to reduce what it offers to customers.
The incumbent telcos and content companies have the same problem -- they'd rather protect their obsolete business models than see what customers want now and provide it.
Clocky, an alarm clock that rolls off the sideboard and hides when you swat it.
Just won a well-deserved Ig Nobel Prize.
On that day, Disney will license all of its content with creative commons licenses, and offer Disney fans a set of creative tools to remix video, and retell stories, and create games, and resell the content they create using Disney raw materials....
Then Disney stories will return to the folk art roots from which they started, and fans young and old will multiply the time they spend with Disney stories and characters by a factor of several, and the market for creative tools and accessories will grow.
Today this is but a fairy tale. Second Life, the creative stepchild of the entertainment business, is enabling the creation of a secondary market in player-created game content. The stepchild of the entertainment business is misunderstood and despised by its elder sisters, but it will be queen someday.
Om Malik speculates that Google is building a nationwide fiber network, will use wifi at endpoints to reach users, and then use location-awareness to turbo-charge ads.
The dots to connect are:
* Google has been quietly buying up dark fiber around the country
* Google is working with a small startup in San Francisco that has software for location-based services at wifi hotspots
* Google just launched Google Talk, a text and voice messaging client.
* Google spends a lot of money on IP transit fees, and could avoid those fees by sharing traffic directly with ISPs.
If that's what Google is doing -- wow. Google is very good at building very big, low-cost computing systems. The network incumbents have an inflated cost structure and a business model based on lobbying for competitive advantage. Some smart capital investment could free vast potential energy in communication services. This could go kaboom.
Michael Osofsky picks up the thread comparing Eric Von Hippel's "lead users" to Geoffrey Moore's "visionaries," and prompts some more reflection on the similarities and differences between the categories of technology early adopters.
I suspect that von Hippel's Lead Users and Moore's Visionaries are mostly the same people viewed with different perspectives shaped by time and technology.
Moore saw visionaries as "early adopters" -- people who are eager consumers of brand new products. von Hippel studies early adopters as innovators -- people who not only consume but customize products.
Early adopters have always played a role in customizing products, but they have more opportunities to do so these days. There are more tools available to modify products, ranging from open source software to low-cost CAD and low-volume contract manufacturers.
When Moore first wrote Crossing the Chasm, it was most important to help technology companies to see how different mainstream buyers were from early adopters. A technology provider wishing to hit the big time needed to focus on packaging the product for more mainstream buyers, and to ignore the eccentric preferences of the visionaries.
These days, customer innovation has been democratized, changing the rules of business success. Successful tech companies (like Google, Amazon, Ebay) need to be good both at packaging a service for broad use, and at providing tools for lead user customization.
By moving away from Moore's understanding of users as eager but passive "consumers" and focusing on the active role played by lead customer innovation, von Hippel reaches several insights that Moore didn't a decade ago. Many lead user customizations are one-offs which allow a manufactured product access to an application the vendor couldn't supply cost-effectively. Many other lead user customizations are applicable to a larger class of customers, and vendors can use the signals of end-user customization to lead their next-generation product development efforts.
So, instead of abandoning lead users, von Hippel recommends serving them with customization tools, and adopting popular customer innovations into the manufactured product line.
In an interview with CNET, internet pioneer Vint Cerf evangelized sensible ideas about public policy for the internet.
Cerf told CNET that he finds it "troublesome" that various states and localities have been proposing and implementing measures to outlaw municipally sponsored broadband networks. "Why on Earth would we inhibit people from making their own investments -- deciding, for example, to float a bond?"
Cerf has also been out talking to Hollywood, encouraging them to view the Internet as an alternative distribution outlet. "Some are responding positively, but some legal departments are still having trouble swallowing the idea."
Hopefully Cerf's well-respected presence and active evangelism will help Google throw its weight behind good tech policy and counteract the force of the telecom and content oligopolies. The tech business strategy mantra is "commoditize your complements." Google benefits when there are fatter pipes available to more people, and more content available for indexing and related ads. The world will get better when the innovative businesses that see fortunes to gain pry off the stranglehold of stagnant businesses that only see what they have to lose.
Socialtext is considering the use of Asterisk as a telephony server. We use a mishmash of skype, vonage, POTS, and freeconference.com to support our distributed team. It's amazing that it can be done at all, but the string and baling wire is getting tiresome. "Can you hear me" isn't amusing any more, and wastes plenty of valuable time.
The open source telephony server toolkit has tremendous potential to provide low-cost telecom services for small-to-mid-sized businesses. But somebody needs to step up and market the heck out of it.
I was browsing through the Asterisk site itself, and the sites for some Asterisk VARs. The sites all focused on a long, long, long list of features. The laundry list is probably helpful for a telecom geek who knows exactly what she is looking for, and is in search of the specific set of protocols, hardware devices, and functions.
The "feature list" approach is next to useless for a small business person who wants to know how their telecom needs can be met effectively. A good marketing person would talk to small business people and understand what sets of capabilities they're looking for in a phone server. Then they would explain, step by step, what Asterisk can do, and what the packages contain. The laundry list of features would show up on the site as a third level of detail, when the customer, now with a better understanding of what they are looking for, can see the details and compare to alternatives.
What's needed isn't marketing fluff -- airy promises about enhanced productivity solutions, yada yada. It's basic, clear education, so customers can learn what to buy and how to buy it.
Personal computers were overwhelmingly successful because they supported a wide variety of software and peripherals. PCs put digital control of words and data into the hands of end-users, routed around central IT bottlenecks, and a multi-billion dollar market was born.
Special-purpose word processing computers bit the dust. IBM's monolithic model -- where you bought the computer, storage, peripherals and software from the same vendor -- lost market share. Microsoft played a huge role in making the PC explosion happen in the 80s and 90s.
Now, Microsoft is breaking this model that made it successful with its upcoming Windows Vista operating system. Audio and video are the latest media to move from the exclusive control of central distribution into the hands of end-users. And Microsoft has written Vista to keep that control out of end-users' hands.
This News.com story explains how Vista is designed to restrict audio and video capabilities:
For the first time, the Windows operating system will wall off some audio and video processes almost completely from users and outside programmers, in hopes of making them harder for hackers to reach. The company is establishing digital security checks that could even shut off a computer's connections to some monitors or televisions if antipiracy procedures that stop high-quality video copying aren't in place.
The News.com article goes into more detail on how Vista reduces opportunities for software developers, hardware devices, and end-users.
This is a fine reason not to upgrade to Windows Vista when it comes out. A software upgrade ought to provide customers a better product, not a worse product.
This is also an opportunity for entrepreneurs building on Linux and web-based services. People who can package easy-to-use, open personal creativity systems have a vast market to gain that's being left behind by Microsoft.
The superb What Geeks experience contrasts with the dismal customer service black hole which is Cingular Wireless.
My account has fallen victim to the merger between Cingular and AT&T Wireless. I sent Cingular a payment, but continued to get dunning phone calls. It turned out that while I had changed the address from Cingular to AT&T, I had failed to update my account number to the new Cingular account. They couldn't find my payment. So I double-paid using my credit card to keep them from turning off the account.
The next step is to fax bank and bill-pay records of the transaction to their research department. The support person had a helpful demeanor, but was unable to confirm the fax number of the research department, or any way to check on the problem once the records had been faxed in. I'm scheduled to get a call back on Wednesday.
In the meantime, Cingular has a visible amount of my money earning interest somewhere. If one has bad memories of MCI billing "glitches" that turned out to be genuine, one might start to be suspicious at this point. There's plenty of money to be made by double-billing people who have problems with the AT&T conversion. Hanlon's Razor offers some mild comfort to the paranoid: "Never ascribe to malice that which can be explained by incompetence."
p.s. Here's Chris Shipley of Network World and the Demo conferences telling her story about getting stuck in the AT&T/Cingular transition.
I dreaded an endless cross-vendor finger-pointing maze when the Microsoft documentation for securing a wireless network contradicted the Linksys documentation, and the Microsoft rep tried to send me to Fujitsu.
At that point, I gave up on the vendors and called Whatgeeks, a rental help desk service that promised assistance at $.99 per minute. The Whatgeeks tech support person was superb. He helped me through configuring security for a network with several versions of Windows and different speeds of network cards. Then he helped with another network misconfiguration. The tech was polite, informative, knowledgeable, and efficient. What great customer service. Highly recommended.
Google maps, Flickr, Ebay, and other web services with APIs are pulling the relevant platform away from the desktop and toward the web.
Still, the network effect of powerful, privately owned web APIs is potentially as dangerous as the network effect of Microsoft's desktop APIs. On any given day, Google or Ebay have the right to change their APIs and make life difficult for their developers. They have the right to change the terms of service, and increase prices on services that their developers depend on completely.
The lockout effect could be even worse, because Google and EBay own the servers, and changes can take effect in real time. When Microsoft bakes DRM into every copy of Windows, users don't need to upgrade their PC immediately. But if Google or Ebay changed terms of service, those dependent on the service would need to comply immediately.
Google's, Amazon's, and Ebay's big servers are a big deal. A web service can start small. But once a service becomes popular, it takes a good amount of capital to compete. Currently, competition between GOOHOO and AMABAY is keeping things lively. But oligopoly could lead to complacency and extractive economics, as in other industries.
The owner of a dominant API/service is in a very powerful position. Google has the ability to adhere to its corporate slogan, "do no evil." That ethical stance does make a real difference. A powerful ruler can choose to be a benevolent dictator or a tyrant. But the temptation is there for power to corrupt.
I can imagine a way out of this oligopoly bind.
What if there were peer-to-peer for web service requests? Many small servers could run the popular service, and publish their availability. When a client issues a request, the request would be taken by an available server. This wouldn't work for services that require a pre-existing content store (like maps?). But it would work for services that require large amounts of individual content (like calendars?).
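Here's a rough sketch of that musing in Python. The registry, peer names, and calendar-style request are all hypothetical, and a real system would need discovery, authentication, and replication that this toy omits.

```python
# Rough sketch of peer-dispatched web service requests: many small servers
# announce themselves, and each client request is handed to any available one.
# Every name here is hypothetical -- this is musing in code, not a real protocol.
import random

class PeerRegistry:
    def __init__(self):
        self.peers = {}  # peer_id -> handler callable

    def announce(self, peer_id, handler):
        """A small server publishes its availability."""
        self.peers[peer_id] = handler

    def withdraw(self, peer_id):
        """A server drops out of the pool."""
        self.peers.pop(peer_id, None)

    def dispatch(self, request):
        """Hand the request to any available peer."""
        if not self.peers:
            raise RuntimeError("no peers available")
        peer_id = random.choice(list(self.peers))
        return self.peers[peer_id](request)

# Two tiny "servers" offering the same calendar-style service.
registry = PeerRegistry()
registry.announce("peer-a", lambda req: "peer-a handled %s" % req)
registry.announce("peer-b", lambda req: "peer-b handled %s" % req)

print(registry.dispatch({"service": "calendar", "user": "alice"}))
```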
Maybe the technology already exists somewhere, and is waiting for the killer app. Maybe I'm missing something -- this is just musing out loud. What do you think?
Yowza! How long will it take for Google's IM and voice chat to meet and surpass the usage of the whole sorry proprietary lot of AIM, Yahoo, and MSN? And how many minutes will it take for them to open their networks after Google's announcement?
In a world where you can phone anybody and email anybody and fax anybody, the IM vendors created absurd islands.
Google's service is based on the open Jabber protocol, unlike Yahoo, which fought and lost a guerrilla war last year against the third-party clients Gaim and Trillian, which patiently reverse engineered the repeated protocol changes that Yahoo used to fend off other clients.
By contrast, Google's site proudly advertises other clients, including Adium, Gaim, iChat, Psi, and Trillian. The developer site invites developers to build more tools to help more people connect.
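For the curious, the interoperability rests on the fact that every Jabber/XMPP client exchanges the same kind of XML stanzas. Here's a minimal illustration in Python of what a chat message stanza looks like; the addresses are made up, and the stream setup and namespaces that a real client negotiates are omitted.

```python
# Sketch of the XML "message" stanza that any Jabber/XMPP client exchanges.
# Addresses are made up; the surrounding XML stream and namespaces that a
# real client negotiates are left out for brevity.
import xml.etree.ElementTree as ET

msg = ET.Element("message", {
    "to": "friend@example.com",
    "from": "me@gmail.com/home",
    "type": "chat",
})
ET.SubElement(msg, "body").text = "Hello from any compliant client"

print(ET.tostring(msg, encoding="unicode"))
# -> <message to="friend@example.com" from="me@gmail.com/home" type="chat">
#      <body>Hello from any compliant client</body></message>
```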
The vile AOL terms of service claim that AOL owns the content of its customers' conversations: "Although you or the owner of the Content retain ownership of all right, title and interest in Content that you post to any AIM Product, AOL owns all right, title and interest in any compilation, collective work or other derivative work created by AOL using or incorporating this Content." AOL makes customers agree to those draconian terms, and then has the gall to claim that they don't really mean it, it's just boilerplate, the lawyers made us do it.
By contrast, Google's lawyers know who's the boss: "Your Intellectual Property Rights. Google does not otherwise claim any ownership in any of the content, including any text, data, information, images, photographs, music, sound, video, or other material, that you upload or transmit from, or store using, your Google Talk account."
I look forward to hearing from voice gurus about Google's choices for security and voice -- they're starting off with XMPP, and adding support for SIP, and are federating with Earthlink and Sipphone service.
Summary -- the anybody-talks-to-anybody approach will destroy the island approach. Reed's Law wins: the utility of large networks, particularly social networks, can scale exponentially with the size of the network.
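For reference, here is the usual statement of the two scaling laws -- a generic formula, not a measurement of any of these particular networks. Metcalfe's Law counts possible pairs of users, while Reed's Law counts possible subgroups, which is why group-forming networks are said to scale so much faster:

```latex
V_{\text{Metcalfe}}(N) \;\propto\; \binom{N}{2} = \frac{N(N-1)}{2} \sim N^2,
\qquad
V_{\text{Reed}}(N) \;\propto\; 2^N - N - 1
```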
p.s. A critique from the folks at Techdirt: the Google IM client is missing some important features -- it doesn't save conversation history, and it doesn't search. It's hard to imagine that Google will forget search in future versions.
I still think that major provider + open network + developer community will beat the closed islands over time.
The Economist's analysis misses two key points about networked business models.
The Economist quotes John Battelle to make its point:
Yahoo!'s “business model is necessarily in conflict,” says John Battelle, the author of a forthcoming book on the search industry. With so much content owned by Yahoo! or generated within its site by users, the quandary for the firm will be: “Do you point people to your own stuff or to the most relevant stuff?” If the former, Yahoo!'s reputation as a trusted internet search and navigation brand may evaporate; if the latter, its content may not earn the returns to justify Yahoo!'s investments in it. By contrast, says Mr Battelle, Google, which has chosen not to make content, does not face this conflict.
Jeremy Zawodny says that Yahoo is becoming less of a "walled garden" because they are opening up to more user-generated content. More user-generated content is good, but owned content and user-generated content aren't actually in conflict with each other.
In a world of infinite shelf space, Britney Spears doesn't crowd out obscure bluegrass. Battelle is concerned that Yahoo will favor popular commercial content. But that would be self-defeating -- Yahoo will make more money if they use the "head" to help cultivate business in the "tail." Amazon discovered the opposite of the "Battelle effect": Amazon makes more money when it sells gear from merchants than its own gear, so it adds features to recommend third-party goods.
So, I don't think there is a conflict between popular content, niche content, and peer content. The more the better, with search and community tools to find and share.
There is a conflict, I think, but it's different than the tension between popular content and peer content -- and Google has just as much of a conflict as Yahoo.
For both companies, part of the product line is software, not just content. Google provides its own search software which is optimized for mass use. And it provides APIs for developers to extend searching to a million niches.
The conflict between central provider and developer is perhaps greater with software. Google and Yahoo own their own mapping APIs. Their developers don't. The big kids get some benefit from keeping their APIs stable to gain network effects. But if they choose to change the APIs, developers are stuck rebuilding.
MIT professor Eric Von Hippel's Democratizing Innovation studies the role of user innovation in product development, and concludes that businesses should follow their lead users to find profitable new products.
The book cites research showing that a surprisingly large number of users —from 10 percent to nearly 40 percent—develop or modify products. Open source software is the example that first comes to mind, but user innovation is found in a wide range of business and consumer products ranging from pipe installation to medical equipment to camping gear.
People who customize their products belong to a class of "lead users" who are "ahead of the majority of users in their populations with respect to an important market trend, and they expect to gain relatively high benefits from a solution to the needs they have encountered there." In one study, Von Hippel's team followed the product development process at 3M. The researchers concluded that following lead users can lead to greater success in the development of new products than traditional market research methods that focus on identifying the needs of broader market segments.
Von Hippel recommends a business strategy to "follow the lead users". This can mean taking their lead on product direction. Many "lead user" customizations are one-offs. But the ones that are repeated point the way to the most innovative and successful new products. Companies can support "lead users" with toolkits that help modify products. Some industries, including chip design, have been retooled to support the creation of customer designs almost entirely.
This advice is nearly the opposite of the canonical strategy in Geoffrey Moore's 1991 classic, Crossing the Chasm. Moore argues that for high-tech companies, focusing on lead users can lead to business failure. Moore observes that high-tech products follow a market adoption curve, starting with "early adopters", who are eager for advantage through innovation, and continuing through the more conservative "early majority" and mainstream buyers.
In Moore's analysis, the "early adopters" have more in common with each other than with everyone else. Because early adopters are the first customers of a high-tech product, high-tech companies can get caught in a trap attempting to please their early adopters, and never break out of the trap into mainstream success. Mainstream buyers are put off by the customizable knobs and levers that attract the "early adopter" tinkering class. Instead, mainstream buyers prefer ease of use, packaging, and service.
Geoffrey Moore advises high-tech companies to stop trying to please their early adopters. Instead, they should identify market segments that need the product, and produce a feature-rich, well-serviced package for these market segments. Once there are enough of these segments to prove the viability of the product, the product may join the mainstream. At that point, the product experiences a "hypergrowth" phase, when the best thing the company can do is to ignore customer requests altogether and simply ship product.
Why do these two strong theories contradict each other?
* is user innovation increasing with the spread of design toolkits and open source software, so that more users are becoming "early adopters"?
* does Von Hippel's work simply focus on an earlier part of the technology adoption life cycle than Moore?
Comments are most welcome.
The phone companies are spending large amounts of money on business investment and lobbying in order to enter the video business. In Texas, SBC just won a victory that lowers their cost compared to cable television by allowing them to start out with statewide franchises, instead of negotiating with each city.
The phone companies have been eyeing ways to diversify away from phone service for decades. In the 80s and early 90s, AT&T made a series of disastrous attempts to enter the computer business after the anti-trust settlement with the US department of Justice.
Video distribution seems a better fit than PCs. It's a familiar business model, where being an oligopoly owner of a distribution channel makes you the leading provider of a service. Owning big servers and pipes is surely a competitive advantage, as is managing an itemized billing service.
The phone companies know they need to slug it out with the cable companies with price wars and features. But cable won't be the only competition. The market is also seeing entrants with new distribution models. "Long-tail" businesses like Amazon, Netflix, Yahoo, and Google have the ability to leverage big servers, ecommerce and ad platforms, and search and recommendation engines to become major distribution channels. Peer-to-peer distribution is becoming a notable alternative way to get video, and ad models are emerging for p2p. Content providers like the Comedy Channel can host Daily Show clips themselves. The low cost of video is starting to create a generation of video podcasters. Services like Ourmedia are emerging to host amateur audiovisual content.
This is going to make the video business much less of a comfy oligopoly. The phone company will have to fight for the market.
Broadcast contains two kinds of content -- things that people really want to watch at the same time, and things that people would rather watch on their own schedule. So broadcast won't die. It will be constrained to events that a great many people want to watch at the same time, like the Superbowl, or a newscast of a major breaking story.
Shawn makes this insightful point in a comment to Mark Cuban's blog. Cuban's post focused on technology -- he argued that broadcast has better performance than the internet, and that multi-cast technology isn't being developed aggressively enough. Other readers take Cuban up on the technical points, but Shawn nails the market evolution.
The video market has been migrating to "personal schedule" for decades. But there are two things that kept "event" and "program" content together. First is a lucrative advertising business model that applied only to broadcast. Second is capital-intensive distribution. It was expensive to distribute broadcast content, so the market became an oligopoly. That oligopoly was able to create "pseudo-events" -- broadcasting episodes of the Sopranos, and only distributing DVDs to Blockbuster Video later.
Both of these things are changing. The cost of distribution is declining, and ad models are evolving for peer-to-peer distributed content. Mark Pesce's post from May of this year chronicles how peer-to-peer distribution of television has become a commercial force in the last year, starting with the Battlestar Galactica phenomenon. Pesce's article speculates about a number of ways that advertisers will sponsor peer-to-peer content.
The net result is that the niche for pre-recorded broadcast -- whether over-the-air, or on cable -- gets smaller. The superbowl will still generate large ad revenues, but programming will keep migrating away.
The FCC exempted phone companies from having to lease lines to internet service providers. They did this by re-classifying broadband as an "information service", which was ruled not to be subject to line-sharing.
In the words of Light Reading, the FCC ruled that the physical facilities that deliver broadband and the broadband service itself are indistinguishable and inseparable. The two things together -- the facility and the service -- are now called an "information service." The "facility" part of that service can no longer be separated and called a "telecommunication service," and as such can't be subject to common carriage rules.
The FCC is doing exactly the wrong thing to adapt to the change in telecom technology. Back when phone and cable were different from each other, there were two categories of regulation -- telecommunication services (phones) and communication services (cable companies) -- to regulate very different kinds of services.
Now, the network is the same. Internet broadband can carry phone, video, and any other kind of content.
* Broadband connectivity has a tendency toward monopoly. It is expensive to lay the fiber that creates the broadband connections.
* Broadband services are a hyper-competitive market, with low barriers to entry.
In order to increase competition, you'd want to treat the oligopolistic connectivity market separately from the competitive service market. You'd enable competing connectivity providers on the wire, to increase competition. You'd watch for signs of monopoly power, and regulate if needed. And you'd treat the hypercompetitive service with as little regulation as possible.
The FCC's response is backward -- they are squelching competition by considering services inseparable from access.
Steven Weber's excellent book, The Success of Open Source is a superb complement to Yochai Benkler's classic essay, Coase's Penguin. Benkler looks at peer production as an economic system and concludes that it has become a third major form of organizing production, alongside the market and the firm. Weber takes a closer look inside the open source production process, and provides a fascinating analysis of how and why it works:
Perhaps the most insightful conclusion Weber draws is the relation between open source and intellectual property. Weber observes that open source redefines property around the right to distribute, not the right to exclude.
Weber is able to make this observation because he avoids polemic. Weber doesn't try to argue that open source software is good because intellectual property is bad. And he doesn't argue that open source software is bad because intellectual property is good. Instead, he is able to observe how open source redefines property itself.
Weber's pragmatic analysis leads him to focus on the vibrant intersection between open source production and traditional business, with a look at a variety of hybrid business models, from IBM's focus on hardware and service, to Red Hat's packaging and branding, to MySQL's service and customization, to Apple's addition of proprietary chrome and polish. Weber predicts continued evolution and innovation at this boundary.
The book was published in 2004, and so it misses one of the most interesting trends in the last couple of years -- the rise of open source software that's not just for hackers. Netscape/Mozilla is included in the book as an example of failure. Weber looks around at Linux, Gnome, KDE, etc., and concludes that open source software may never be able to make software that works for non-hackers. This was before the breakout success of Firefox, and the popularity of GAIM, an instant messaging client with a consumer-quality interface.
Weber examines the brash and blunt hacker culture, with its focus on technical decision making through vehement debates on project mailing lists that hash out solutions to technical problems and decisions about technical direction. I wonder how the culture will evolve as interactions grow with non-geek users, and hybrid companies face decisions that have external constraints driven by customers.
Towards the end of the book, Weber speculates about how the organizing methods of open source software might affect the production of other kinds of goods -- writing, music, biotech, business ideas. I thought it was interesting, but less substantial than the parts of the book focused on open source itself, with analysis based on observation.
The danger of the Mozilla Foundation forming a for-profit business, Mozilla Corp., is that the result may be as nasty and political as your average nonprofit and as money-grubbing as your typical software company. Nothing wrong with that, except that it's a wide departure from the egalitarian notion of "free" software that has carried Mozilla this far. And with that departure, I am feeling a touch of loss.
Richard Stallman, the prophet of free software idealism, says that free software is intended to be "free as in speech, not free as in beer". Open source software never was intended to be free of commerce. You can make money with open source software. IBM makes oodles of money with Linux and Apache. MySQL makes money with its database. You just can't sell the source code.
I think Mozilla lost its innocence a long time ago, and in a different way. Much of open source software is by geeks, for geeks. Open source developers have focused on creating tools for developers, and avoided the burden of developing for people who aren't programmers. An open source developer is "scratching his own itch", not developing code to please other people.
For whatever reason, Mozilla isn't like this. Mozilla is designed to be usable and attractive for ordinary people, and extensible for geeks. The Mozilla team designs with empathy for users. They have already lost the innocence of solipsism -- they are serving others than themselves.
I can't help feeling that the foundation is crossing a line from which it can never retreat, taking with it a bit of the romance of software by the people, for the people.
Coursey writes these sentimental lines for a salary earned from a magazine publisher that makes money from advertising.
Mozilla is maintaining its license terms. That means that people will continue to be able to look at the code, modify the code, and fork the project to create their own products, following the terms of the license. That's the freedom that counts -- not freedom from being able to make a living.
Thanks, Joi for some clarification about the business structure.
The conversation at BlogHer is about how and whether to break into the Technorati 100. This misses the point of the Long Tail -- what makes the blogosphere different from the mainstream media. You can aim to be a top celebrity. Or you can be an authoritative voice on an important topic, and be the media for an important issue. The blogosphere isn't just about celebrity, it's about subcommunities.
What about other communications protocols?
Telephone interconnection is mandated by law. For example, Texas law says:
Sec. 60.204. INTERCONNECTION. A telecommunications provider shall provide interconnection with other telecommunications providers' networks for the transmission and routing of telephone exchange service and exchange access.
Internet email was standardized back when academic institutions were the primary users of the internet. This was very good -- connectivity became universal. And bad -- the protocols were very trusting, creating a medium for spam.
Fax was born from a standard. In the 1970s, the CCITT (now ITU) created a standard for digital fax that allowed the creation of an industry.
Thinking about these examples, the non-standardization of IM is an artifact of history and business model. IM is a free rider on top of the internet, and is offered for free. Because the underlying network already exists, IM didn't need the jumpstart of a standard in order to proliferate, unlike fax. Because IM is offered for free, it is only a minor inconvenience for end-users to connect to a contact using whatever IM service that contact prefers. So far, the business IM market hasn't been large enough to force standardization.
It seems plausible that IM will standardize someday. But the current situation could persist for a long time. Currency is an example of a persistent lack of standards. There are well-established methods of currency exchange, so differing currencies don't pose a huge barrier to commerce. And currency providers have a strong interest in controlling their stock of currency, since regional money supply is a tool used by central banks to steer the economy.
Just thinking out loud. It's interesting how the patterns of standardization trace the social structure and power structure of the underlying community.
Network broadcasting has fallen behind cable TV in audience share. Broadcasters are supposed to switch over to HDTV by the end of 2006, but broadcasters say they're not ready and customers are confused.
Instead of switching over to broadcast HDTV, will customers just abandon broadcast for cable, DVD, and emerging internet video? Will the confusion about HDTV hasten the decline of on-air broadcast?
The recording industry switched from LPs to CDs to mp3s without the government having to pass a law. The wireless market is migrating from 802.11b to g, and may get to WiMAX without a federally mandated transition. The requirement for federally regulated standards for on-air broadcast formats seems like a competitive disadvantage for on-air broadcasting.
File this under "insufficiently informed speculation" -- I don't know enough about this market to have a good opinion. I don't watch much tv, so indifference to the glories of HDTV may be clouding my judgement.
At the hearings last week, Chairman Phil King asked why governments should provide network access, when the same service can be provided by private enterprise.
Prof. Lawrence Lessig answered this question in Wired Magazine last week.
Ever think about the poor streetlamp companies, run out of business because municipalities deigned to do completely what private industry would do only incompletely? Or think about the scandal of public roads: How many tollbooth workers have lost their jobs because we no longer (since about the 18th century) fund all roads through private enterprise? Municipal buses compete with private taxis. City police departments hamper the growth at Pinkerton's (now Securitas)... If private industry can provide a service, however poorly or incompletely, then ban the government from competing. What's true for Wi-Fi should be true for water.
There's a range of services -- roads, transportation, security -- that the government provides. Even though there are private-sector alternatives, the government plays a major role in providing these services, because they are "public goods".
The government even put the private streetlamp industry out of business, because it was so much more effective to have city lights on every street than a patchwork of lights in front of a few businesses and rich people's houses.
The conservative movement is right to question and scrutinize the functions that government provides. The failure of Soviet factories and farms proved that private enterprise is better at most economic activities.
But there are functions like roads, streetlights, police services, and in the 21st century, network access, where the government has a justifiable and important role to play.
Using current Firefox instead of old Mozilla seems to (toss salt over shoulder and spit twice) improve the memory behavior of the aged win98 ex-laptop by a LOT.
I have seven tabs open, have been checking email for a few hours, and the system seems happily stable.
I still want my laptop back NOW, but daily experience may have gotten less chronically miserable.
One new developer at Socialtext recommended Skype, and suddenly 9 people were Skyping. At the same time, I opened the box on my cute little Plantronics headset phone for the landline.
Skype is two parts good -- free, excellent voice quality when it works -- and one part annoying -- "are you there, are you there." Suddenly, I can talk on the phone from the local wireless cafes (the cell was unbearable with the background noise). It's fun being a small part of a network adoption trend -- watching a technology adopted, one group at a time.
And I live in a tangle of headphone cords. I almost never have the right headphone on, or near to hand. "Wait, let me get that headphone..." (trips across home office). The headphone cords have an unnatural fondness for each other, the phone cord, and the laptop power cord. I impersonate the Fates, weaving and unweaving the destiny of the world -- except that I'm unravelling and untangling headset cords. When everything is untangled, I think the world will end.
*The landline is wired -- I gave up on cordless after years of phones that didn't quite work, with battery life that didn't support working with a distributed team.
Dan Hunter's article arguing that open source development is a form of contemporary Marxism led me to read Coase's Penguin, the classic paper by Yochai Benkler that provides an economic explanation of open source software and other peer production endeavors like Wikipedia.
Marxism argues in favor of collective production and against monetary rewards out of political belief that capitalism is inherently exploitative. The way to ensure a just society is collective production where production is organized and rewards are distributed fairly through central planning. But centrally planned collective production proved inefficient and corrupt.
The first puzzle about open source peer production isn't whether or not developers have marxist political beliefs, but why it works, especially since the Marxist collective model failed miserably.
This is what Benkler explains elegantly. Coase's Penguin builds on the theory of Ronald Coase, who explained in the 30s that firms exist when the cost of separate transactions with many independent parties is greater than the price-efficiency of a competitive market. The problem Coase was trying to solve at the time was to explain the persistence and dramatic growth of centrally managed corporations, if a market is an ideal way to allocate economic resources.
Benkler solves today's version of the same problem. If money is the ideal way to incent and co-ordinate production, why are we seeing the persistence and dramatic growth of production methods that don't use money?
Benkler explains that commons-based peer production is more efficient than either firms or markets for information goods, where the costs of communication and distribution are low, and the difficult problem is allocating human creativity. When there are masses of potential contributors, and it's easy to participate in little chunks like an open source plugin or a Wikipedia article, the best way to match skills and work is a million little decisions by independent contributors.
Mandatory, Marxist-style collective farming doesn't benefit from these resource allocation efficiencies. Workers on collective farms have pre-defined work and can't leave. Collective farms don't gain the benefit of unique, voluntary contributions by thousands of distributed workers.
Another attribute of political marxism is a belief in mandatory equality. Peer production projects often have a meritocratic culture with dramatic inequality, where founding leaders and high-value contributors have greater prestige, influence, and sometimes financial reward. It's not considered inherently unjust that leaders of open source projects like Perl and Python have received grant, foundation, and corporate funding to do their work (although visible leaders of peer projects can also become lightning rods for criticism).
Another marxist value is opposition to a money economy. Cash is seen as a symptom of the alienation of workers from the products that result from their labors.
Clearly, the motivation of many thousands of open source, wikipedia, livejournal, and other peer content producers is non-monetary. But is it anti-monetary?
Benkler deals with the incentive question in the excellent third section of Coase's Penguin. Benkler makes an astute distinction between activities where money is commonly thought to be an inverse motivation (sex), and where it is seen as complementary (sports, music). Many people who like basketball would love to be NBA stars. By contrast, most people who like sex would not like to be prostitutes.
Some Free Software activists are in fact marxists, with beliefs that money is inherently exploitative, and visions of a world that is socially and economically organized without money.
The GPL, a strict license that forbids redistributing modified code under a nonfree license, doesn't forbid selling the software in a package, or customizing software for money, or selling services based on knowledge of open source tools.
For many people, software development is pretty clearly in the complementary category, where the rewards of prestige and satisfaction coexist with monetary rewards. There are Apache developers on corporate payrolls, and companies supporting open source technologies, ranging from IBM to MySQL, Zope, and Jabber. There are developers who make a living consulting based on free software expertise.
So, while some software developers are marxists, it doesn't follow that peer production is inherently Marxist.
Socialtext loves to hire developers with open source experience and reputation. We know they are good developers. We know they have initiative and have gotten things done. We know they have creative ideas, because those ideas are public. People who've been active in open source have a public community reputation.
And I'm beginning to think that it is a great way to do the R part of R&D. One of the big problems with classic corporate R&D is that innovations don't see the light of day. The typical corporate reaction is to put researchers on a short leash, and tell them their blue-sky research needs to turn into a commercial product in a finite amount of time.
An alternative approach is to do open source experimentation. If the experiment is interesting and valuable, it will attract other developers. So you're building an ecosystem from the start rather than stifling it. If it works and seems valuable, you can package and develop and commercialize it -- or leave it to an independent noncommercial life.
It increases one risk, because new ideas aren't secret. It decreases the risk of developing products in the lab that don't ever work or get done or find users.
John Koenig has a nice article in the IT Manager's Journal listing seven open source business models: Optimization, Dual License, Consulting, Subscription, Patronage, Hosted, and Embedded.
From the perspective of software developers, however, there are only two. Patronage works for individual developers who are so brilliant, innovative and famous that a corporation or foundation will hire them to do whatever it is they do next. A few people merit this approach.
All of the other business models are based on a single principle -- provide software and services that someone else wants. The stereotypical open source model is to "scratch your own itch" - build software that you want. That is a powerful motivation that gets a lot of software built.
But if you want to make money, you need to do something that somebody else wants, and that is valuable enough that they're willing to pay you to do it. That something could be optimization, custom consulting, service and support, or a packaged product that uses your code (Koenig's list). There's also a patronage model that's at the level of a project, not an individual -- IBM's sponsorship of Apache fits this model. In this case, the sponsor is paying for ongoing development and maintenance.
The solipsistic/bohemian model of open source -- artists make software only for other artists, and talk only to other artists -- falls short if those artists are looking to make a living. Unless you're one of a handful of superstars, you need to provide a product or service that's of immediate value to someone else.
I wish it was available now. I would pay.
As is, I don't watch enough television to subscribe to cable. Just not in the habit. I watch series occasionally on video.
As is, Bittorrent will do. I'd rather pay for reliable, high-quality, scheduled delivery, and compensate Jon Stewart.
I hope the industry takes this opportunity instead of suing it out of existence.
.. when you hand people a complex tool like a computer, the variation in what they can do with it is enormous. That's not a new idea. Fred Brooks wrote about it in 1974, and the study he quoted was published in 1968. But I think he underestimated the variation between programmers. He wrote about productivity in lines of code: the best programmers can solve a given problem in a tenth the time.
But what if the problem isn't given? In programming, as in many fields, the hard part isn't solving problems, but deciding what problems to solve. Imagination is hard to measure, but in practice it dominates the kind of productivity that's measured in lines of code.
My favorite quote from Paul Graham's essay, based on his OSCON talk and book, Hackers and Painters. I need to go read the book, carrying questions about the craft vs manufacturing models of software development.
The article also repeats some unhelpful stereotypes about hackers' lack of empathy. Graham appreciates the Mac and Google as examples of beauty. But he also describes the natural habitat of the geek as tools rather than applications.
Graham suggests optimizing a tech company's development process by sic'ing the best developers on infrastructure, far away from customer applications. "have the smart people work as toolmakers. If your company makes software to do x, have one group that builds tools for writing software of that type, and another that uses these tools to write the applications."
Graham says that hackers hate having to "customize something for an individual client's complex and ill-defined needs." To use Martin Fowler's image of "bad smells in code" that hint at poor design, this is a bad smell that suggests poor account management -- somebody on the developer's team hasn't done the hard work of understanding the customer's needs, and helping the customer prioritize. (Though it is a genuine problem sometimes -- the customer truly is internally conflicted, hopelessly confused, and impossible to please.)
The stereotype isn't 100% right -- there are counterexamples of software developers with empathy -- but Graham echoes the cultural prestige given to hackers who stay the furthest away from the people who use their work.
Thanks to the techs at PC Guru on South Lamar, and no thanks to Fujitsu. The inside portion of the power connector was loose and needed resoldering. I needed to hold the power cord at an angle just so to keep it from depowering, and it didn't seem that far from not working at all.
I called Fujitsu tech support. The repair would take 5-7 days. The guy on the service line wanted a deposit of $100, and said the repair would be up to $600, if they determined I did anything that broke warranty.
And the kicker -- they wanted advance permission to reformat the hard drive. "Why on earth do you want to reformat the hard drive?" "We want to do the best possible job at warranty service. " "Sure, we'll mow the lawn. Please give us advance permission to cut down the trees if we need to."
At PC Guru, the repair was $120 with one-day turnaround. Nobody threatened to reformat the hard drive. What a pleasure to deal with real live geeks, who chat about Linux in the background, and are authorized to use their brain on the job.
John Udell laments the tendency for vendors, including Macromedia, Microsoft, and Apple to hype next-generation graphics capabilities with tasty eye candy that doesn't help users create valuable visualizations, and doesn't give users a compelling reason to use the technology.
Udell cites Tufte as the guru of graphic communication, and suggests the need for easier-to-use new tools. But tools are only part of the answer. Nifty visualization tools have underperformed for the fifteen years or so that I've been watching.
It's not enough to show pretty pictures -- you need to have something to say. Tufte's perennial complaint is "Chart Junk" -- frivolous pictures that don't say anything. Powerpoint and friends enabled people to make pointless pretty pictures long before the current generation of graphical infrastructure. People don't know how to tell compelling and meaningful stories with visualizations.
A word processor doesn't make eloquent prose on its own -- and a graphical tool won't create meaningful visualizations.
In the social network analysis of Ziff Davis Media, we started with a story -- teams of writers, editors, developers, and marketers collaborating across space and group boundaries -- and questions about the intensity and frequency of collaboration.
Here's the picture. And the story.
The two liveliest software development models are open source and agile.
In some ways, these models are orthogonal. Open source is primarily a licensing model. Open source projects can use agile practices -- short cycles, test-first, pairing -- or they can have long cycles with compartmentalized development. Agile is a set of development methods which can be used with open or proprietary licences.
In another way, they seem to conflict. Classic open source projects are put together by programmers seeking to "scratch their own itch". Agile projects are oriented around meeting the needs of a customer.
Agile projects bring the customer on the team; make the customer responsible for setting priorities; develop shared understanding of requirements with conversations remembered as stories. The best way to make sure customers get what they want is to give them software soon, and let them respond to real stuff.
By contrast traditional development processes formalize customer requirements in long, structured requirements documents, which get delivered in big lumps of software. The goal of management during the development cycle is to fend off changing customer requests.
The extreme open source position contends that software developers won't ever develop for other people unless they have to. This can be explained as Asperger's -- a physiological lack of empathy. Or it can be explained as a Romantic/Bohemian view of artistry -- true artists paint and poets write for themselves and their muse, and their small group of starving but virtuously anti-bourgeois friends.
The extremes are wrong. Introversion isn't "better", open source doesn't have to be solipsistic, and development for money doesn't have to be corrupt.
Introversion isn't better. I think the Romantic view is as fallacious in software as it is in art. Artists have always had relationships with audiences -- Homer was a storyteller, Shakespeare was a crowd-pleaser. There are always "artists' artists" -- those whose work is difficult and revered by the cognoscenti. Software made for customers isn't guaranteed to be bad any more than art made for audiences.
Open source can have empathy. The goodness of the Mozilla project disproves the notion that open source projects make software that only ubergeeks love. The Mozilla crew are building lovely software -- see the Scott Collins interview over at Ars Technica.
Commercial development can have integrity. Agile development practices are intended to work around structural temptations for commercial projects to be built on lies. First, salespeople lie -- they promise things to customers without listening to, or wanting to believe, developers about how long things take. Then, developers lie. Out of eagerness to please, or out of fear, they tell management and sales what they want to hear. Reality intrudes eventually. Customers get mad. Really mad customers sue. Agile planning is based on continual delivery and continual conversation, to avoid the built-in temptations for lies and self-deception.
Money communicates priorities. When people are developing for others, money focuses attention. When customers want things that don't yet exist, dollars vote. Willingness to pay focuses the mind of the customer on what's important to them, and the developer on what's important to the customer.
What do you think?
And the other services that require human intervention before an email address is whitelisted.
Don't folks realize that they have anti-network effects? They'll work for the first few people who use them. But what happens when a mailblock bounce comes back to another mailblock service?
People will have to wait for the singularity to get their email.
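To make the anti-network effect concrete, here's a toy sketch (in Python, with entirely made-up service names and a made-up whitelist) of what happens when two challenge-response services try to talk to each other: each one challenges the other's challenge, and the original message never arrives.

```python
# Toy simulation of two challenge-response ("mailblock") services talking to
# each other. The services, addresses, and whitelist are hypothetical; the
# point is only the deadlock: each side challenges the other's challenge.

WHITELIST = {"alice@serviceA": set(), "bob@serviceB": set()}  # nobody whitelisted yet

def deliver(message, hops=0, max_hops=5):
    """Attempt delivery; an unrecognized sender triggers a challenge."""
    sender, recipient, kind = message
    if hops >= max_hops:
        return "gave up: challenge loop, original mail never delivered"
    if kind == "challenge" or sender not in WHITELIST[recipient]:
        # The recipient's service replies with its own challenge, which the
        # sender's service treats as yet another unknown sender.
        return deliver((recipient, sender, "challenge"), hops + 1)
    return "delivered"

print(deliver(("alice@serviceA", "bob@serviceB", "mail")))
# -> gave up: challenge loop, original mail never delivered
```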
Jon Udell contends that we're hard-wired to recognize other humans:
Humans are hardwired to recognize faces, voices, gaits. We do it always and automatically. Perhaps so automatically that we don't notice, for the most part, that we are doing it. When my teenage daughter comes downstairs there's rarely any ambiguity about who she is.
Jon is disagreeing with David Weinberger, who says that identification defaults to off:
In the real world, we don't identify everyone. We only identify those about whom we have doubts that we have to resolve for some purpose. Identifying is not the default in the real world. Nor, IMO, should it be online.
They're both right. We instinctively recognize other people -- nod to neighbors, chitchat with baristas, and identify those we know well by the smallest of gestures.
But we don't ask for deep ID until it's necessary. The social protocol for data is progressive disclosure. When do you learn someone's street address? Their home town? Their salary? The default is to start shallow, and to get deeper with trust.
Computers are literal-minded critters. Knowing hair color and HIV status is all the same to them.
Perhaps identification defaults to on, but disclosure defaults to off.
via Dorothea Salo on Misbehaving.
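If you wanted to build "disclosure defaults to off" into software, a minimal sketch might look like the following. The trust tiers and profile fields are invented for illustration, not taken from any real identity system.

```python
# Hypothetical profile fields grouped by the trust level needed to see them.
# The system can recognize someone (identification "on") while disclosing
# almost nothing by default (disclosure "off").
PROFILE = {
    "public":   {"nickname": "adina"},
    "acquaint": {"home_town": "Austin"},
    "friend":   {"street_address": "123 Hypothetical Ln"},
    "intimate": {"salary": "none of your business"},
}
TIERS = ["public", "acquaint", "friend", "intimate"]

def disclose(trust_level):
    """Return only the fields visible at or below the given trust level."""
    visible = {}
    for tier in TIERS[: TIERS.index(trust_level) + 1]:
        visible.update(PROFILE[tier])
    return visible

print(disclose("public"))   # {'nickname': 'adina'}
print(disclose("friend"))   # nickname, home_town, street_address
```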
"Only at grave peril shall you ask a question for which there already exists an answer somewhere in the world." This principle drives me batty, and causes geeks to waste hours of time. It's the macho-geek version of guys refusing to ask directions and driving till they are hopelessly lost.
Eric Raymond wrote the canonical explanation in How To Ask Questions. The geek world is run by wizards who don't have time to answer newbie questions and keep the earth orbiting the sun.
Therefore, if you have a question, you must read the man pages, scour google for diagnostic phrases, spelunk through code, and test your hypothesis. If you still haven't found the answer to your question after two hours, three hours, eight hours... then you may ask the wizard who may know the answer off the top of his head.
Otherwise, you risk scathing criticism, and a permanent deduction of 20 points from your interlocutor's estimate of your IQ.
I'm really glad to see LinuxChix which looks like it provides a forum for improving one's tech skills without facing the consequences of the Raymond Rule.
Last week I was scanning my RSS reader and stumbled across a Socialtext bug report. The Shifted Librarian wanted to scan the RSS Winterfest weblogs, and ran into a bug in our new RSS feed. I reported the bug. Pete diagnosed the problem in the blog comments.
RSS for information discovery, and weblog for collaboration. It works.
RSS readers are very handy for people to manage attention to incoming information.
Debates about the relative superiority of RSS readers and web browsers are missing the point. Nobody wants the whole web coming to their desktop reader.
You use RSS for sources that change periodically, that you visit again and again. Bug-tracking is a great application -- several people in the RSS-Winterfest IRC mentioned bug-tracking as handy use of RSS.
A few people in the IRC mentioned RSS for alerting. The RSS polling design isn't intended for real-time notification. For that you need IM or Jabber or Tibco. Real-time notification and periodic updates can be used nicely side-by-side -- for example, a system administrator might want routine human and system notifications via RSS, and system-down or danger-zone alerts by IM or pager.
Information updates have different levels of urgency -- "the building is on fire" is more urgent than "the room temperature has increased from 68 to 72 degrees" (unless you're managing a temperature-sensitive lab culture).
A given piece of information is more urgent for some than others. A customer support query is urgent for the service reps on duty, and of background interest to product managers and developers.
We need a range of attention-getting media.
* IM/pager for lapel-grabbing alerts
* email for important, short-notice items
* RSS for alerts for discretionary attention
* Weblogs and wiki providing a "browse mode" fix for recent changes junkies, and a searchable archive for occasional readers
(We also need a Ross Mayfield trademark colorful chart to gain fame for the concept.)
These modes complement each other. They help individuals manage attention, and they help organizations focus attention on urgent matters, while building knowledge in important areas.
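For the implementation-minded, here's a minimal sketch of the routing idea. The urgency levels and channel names are assumptions for illustration, not any particular product's behavior.

```python
# Route updates to different attention channels by urgency, along the lines
# described above. Levels and channels are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Update:
    text: str
    urgency: int  # 0 = background interest, 3 = lapel-grabbing

def route(update: Update) -> str:
    if update.urgency >= 3:
        return f"IM/pager: {update.text}"          # building-on-fire alerts
    if update.urgency == 2:
        return f"email: {update.text}"             # important, short-notice items
    if update.urgency == 1:
        return f"RSS feed: {update.text}"          # discretionary attention
    return f"weblog/wiki archive: {update.text}"   # browseable, searchable record

for u in [Update("server room is on fire", 3),
          Update("room temperature rose from 68 to 72 F", 1)]:
    print(route(u))
```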
Over 1,000 people participated in the RSS Winterfest voice-blog-wiki-IRC multi-modal conference last week. Socialtext hosted the "Eventspace" wiki-blog -- we hosted the application, and we also helped to host the online party. It was lots of fun, included insightful conversation, and created useful resources.
Participants in IRC improvised conversation on the themes of the sessions. Bloggers posted session notes and questions and resources. Wiki participants built collaborative pages on RSS Tools, RSS Authentication, and other topics.
Here are some of the practices we used that helped make the conference lively:
* create weblogs with relevant topics
* set up sign-in space for people to describe themselves and learn about each other
* pre-populate the wiki with the conference program and other resources
* real-time gardening -- link interesting pages to the home page, consolidate related resource pages, help harvest quotes and references.
Maybe most important, we helped weave questions, comments, and insights from the IRC and wikiblog into the voice conference. It's a salon facilitation skill, translated to electronic media.
This RoperASW/Tandberg poll from last year revealed the crushing tedium of the traditional teleconference. Less than half of attendees actually pay attention to conference calls. With audience participation tools, you get an event that is more lively and intelligent.
Dylan Greene writes an insightful yet ultimately unsatisfying piece arguing that RSS is not yet ready for Prime Time.
He's right that RSS has weaknesses. The way most people use it, it wastes bandwidth. Many feeds don't include full-text (need to fix this...). Comments aren't well integrated. And, the coup de grace, an RSS reader isn't yet built into Microsoft Windows.
True, but not that useful, unless you're a Gartner analyst trying to determine whether a technology has reached a state of ultimate, top-right-quadrant maturity.
The interesting questions are:
* is RSS mature enough to do what you want?
* can you benefit from RSS as an individual, a publisher, or an organization?
If you're a mildly tech-savvy individual wanting to keep track of lots of weblogs and news, then RSS is a lifesaver.
If you're a complete novice, or if you advise complete novices, you probably want to avoid RSS -- though bloglines is pretty darn accessible -- I'd recommend it to anyone who is comfortable with a browser.
If you're a blogger or web publisher, and want to reach the increasing number of users who depend on RSS to read web content, then surely publish in RSS.
If you're in an organization where most people are drowning in email (i.e. most of us) and you have influence over technology choices, you might want to consider using RSS to complement business applications, helping individual employees manage their time and attention.
Dylan's points are correct, but they don't tell you whether the rewards of RSS are worth the trouble for you. It takes a bit more effort to use technologies that are early in their life cycle. You need to decide whether it's worth it.
If you're an open source geek or work for a technology company that's not Microsoft, you have opportunities to shape next-generation syndication standards and applications. Those opportunities are here now, and will be gone in a few years.
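One footnote on the bandwidth complaint above: the usual mitigation is HTTP conditional GET, so a feed that hasn't changed comes back as a tiny 304 response instead of the full document. Here's a rough sketch; the feed URL is a placeholder.

```python
# Poll a feed with an HTTP conditional GET. An unchanged feed returns 304
# (almost no bandwidth); a changed feed returns the body plus new validators.
import urllib.request
import urllib.error

def poll(url, etag=None, last_modified=None):
    req = urllib.request.Request(url)
    if etag:
        req.add_header("If-None-Match", etag)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(req) as resp:
            # Feed changed: remember the validators for the next poll.
            return resp.read(), resp.headers.get("ETag"), resp.headers.get("Last-Modified")
    except urllib.error.HTTPError as err:
        if err.code == 304:
            return None, etag, last_modified  # unchanged
        raise

body, etag, modified = poll("http://example.com/index.rss")
```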
UC Irvine researcher Walt Scacchi is studying open source development, and has come across distinctive practices.
It's not clear to me how the open source practices differ from agile processes in general -- a lighter, more conversational, less document-heavy design process.
What are some of the differences you've found, apart from the obvious ones?
For example, in software engineering, there's a widespread view that it's necessary to elicit and capture the requirement specifications of the system to be developed so that once implemented, it's possible to pose questions as to what was implemented, compared with what was specified.
We do not see or observe or find in open-source projects any online documents that software engineers would identify as a software requirements specification. That poses the question: What problem are they solving, if they haven't written down the problem? While it's true that there's no requirements specification, what there is instead is what we've identified as a variety of software informalisms.
What do you mean by "informalism"?
That word is chosen to help compare to the practice advocated in software engineering, in which one creates a formal systems specification or design that might be delivered to the customer. Informalisms are such things as information posted on a Web page, a threaded e-mail discussion or a set of comments in source code in a project repository. It may be a set of how-tos or FAQs on how to get things accomplished. Each is a carrier of fragments of what the requirements for the system are going to be.
In the article about the 80/20 rule, Tim Bray explains that simplicity isn't easy.
In Tim's words, "the mental machinery involved in the design process naturally tends towards more rather than less."
The unofficial version of my business card reads "grinch." I spend a lot of time saying no to all kinds of attractive, bright, shiny, jingly ideas. "Too complicated." "Not the most important thing right now, maybe later." "What's the simplest way of doing things that works?" Two parts diplomacy, one part curmudgeon. Not a good way to win popularity contests.
Before Christmas, from a biotech company: "This is much easier to use and manage than $competitive product. I can use this with much more of my team."
Yesterday, from a consultant who helps organizations improve collaboration and manage knowledge... "It's like $competitiveproduct, but easier to use and cheaper."
Tim Bray has been working on a series of articles analyzing which factors lead to technology success. The strongest predictor isn't investor support, technical elegance, a compelling idea, or standardization. It's the "80/20" rule: systems that yield 80 percent of the benefit for doing twenty percent of the work.
The 80/20 Tribe’s offerings are denounced as “Just a toy!”, while they hurl back accusations of pedantry, big-system disease, and so on.
Big smile. This is what makes me think that our company is really on to something. There are two kinds of skepticism that we run into occasionally.
* "Oh, it's just a wiki"
* "Like Lotus Notes, but less."
I hear these occasionally from corporate buyers who are used to big collaboration and KM solutions, and from industry analysts who've been following collaboration software since the dawn of time. These are the feature-checklist folks, who want to know if your product comes with a ballpoint pen and a fish scaler, regardless of whether anybody in their shop needs these things.
The sweet spot is doing the right 20% of the work.
David Weinberger says: "if you want to get at the real social networks, you're going to have to figure them out from the paths that actual feet have worn into the actual social carpet."
Here's another way of looking at things. The Social Network tools are about saying hello. Unless you're 18 months old, hello is only .1% of the conversation. Weblogs, wikis, and other public conversations are about the other 99.9% of the conversation. And the Technorati/Blogstreet/AllConsuming applications help you find relevant conversations to join.
Good scientific evidence shows that people are happier and live longer when they have a strong social network, and even when they have a pet to take care of.
In this context, how much do online social networks count? When someone spends time blogging; participating in mailing list discussions; chatting on IRC; does this strengthen the immune system? Or weaken it? Does it make a difference if the participants meet each other in person every once in a while, or never?
Has anyone done research on this yet?
Jon Lebkowsky and Honoria have an interesting insight about evaluating social software according to an esthetic, leading to some reflection about the criteria for an esthetic of social software.
Thinking out loud, here are some criteria to consider...
* ease of groupforming
* intimacy gradient -- ability to create spaces on a continuum from public to private
* expressiveness -- ability for individuals and groups to express mood and style
* shared memory -- the social software equivalent of bookshelves and mantelpiece photos
* attractive front porches -- social public areas preceding private spaces
* helpful navigation -- clear signage, or meditative exploration
I'm on vacation, so I don't have Christopher Alexander near to hand; that would bring some good insight.
from Wendy Seltzer back in September.
The collision of two bad ideas -- unverified e-voting and VeriSign -- is a really, really bad idea.
VeriSign has been chosen by Accenture (the former Andersen Consulting) to provide key components of an Internet absentee voting system for Americans abroad. This is the same VeriSign that recently unilaterally altered the workings of the domain name system to return a VeriSign search page when someone mis-typed a .com or .net URL.
I can see it now: mis-mark your ballot and your vote gets automatically redirected to the candidate of VeriSign's choice. "We found these similar candidates: Did You Mean to vote for Arnold Schwarzenegger?"
Joi Ito on address book roulette:
It originated with business cards, but has moved to mobile phones. There are three people: two players and a judge. The two players pick someone from their address books and reveal them to each other simultaneously. The judge decides which one is more famous or important. The loser has to shred the business card or in the case of mobile phones, delete that entry from the address book. It's quite funny because you try to play important people to beat the other person, but if you lose, you lose a valuable phone number. The judge's perspective of what sort of person is important also comes into play in an interesting way.
It's no fun when you have backups of your phone numbers, but in Japan, where most people don't back up their mobile phone numbers, it's often for keeps.
On that note: I have a special request to the toolmakers of 2004: stop making tools that magnify and multiply awkward social situations ("A total stranger asserts that he is your friend: click here to tell a reassuring lie; click here to break his heart!") ("Someone you don't know very well has invited you to a party: click here to advertise whether or not you'll be there!") ("A 'friend' has exposed your location, down to the meter, on a map of people in his social network, using this keen new location-description protocol -- on the same day that you announced that you were leaving town for a week!"). I don't need more "tools" like that, thank you very much.
"Unix culture values code which is useful to other programmers, while Windows culture values code which is useful to non-programmers." Uh, Joel, what about Amazon.com and Google? Linux core, custom applications. Possibly the most useful and broadly usable computer programs out there.
Joel Spolsky is usually insightful, smart, and lucid. His most recent essay, critiquing a book by Eric Raymond on The Art of Unix Programming, is dead wrong and confused.
To be fair, Spolsky is writing about Eric Raymond, who epitomizes the ubergeek arrogance of a class of unix infrastructure hackers. But Spolsky attributes this attitude to all Unix-platform developers, leaving out the user experience brilliance at Amazon, Google, and many other elegant and popular web-based applications.
Spolsky's essay equates Unix infrastructure development with Windows end-user applications. Apples and oranges.
Seb writes about RSS feeds from AudioScrobbler playlists. What a wonderful idea!
Using a RSS-to-HTML device like feedroll, letting others know what you've been listening to recently on your weblog becomes a snap. ...All of Audioscrobbler's data is published under the Creative Commons licence, and so are the user feeds. Which enables clever people to build crawlers ("Musicrati"?) and devise algorithms that exploit the distributed database and add value, for instance by matching participants' listening profiles (à la blogmatcher) or by building new playlists out of the raw materials.
I have friends with fabulous taste in music, and would absolutely love to be able to "listen in" to what they're listening to. Because only the titles are syndicated, there's no RIAA problem with sharing, and a feed listener could go off and get the music on itunes or somewhere. Music wants to be social.
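Here's a rough sketch of the "what I've been listening to" idea: fetch a per-user feed of track titles and render an HTML list for a weblog sidebar. The feed URL and its exact structure are assumptions; the real Audioscrobbler feeds may differ.

```python
# Read an RSS feed of recently played tracks and emit an HTML list.
# The feed URL and item structure below are placeholders, not the real API.
import urllib.request
import xml.etree.ElementTree as ET

def recent_tracks_html(feed_url, limit=5):
    with urllib.request.urlopen(feed_url) as resp:
        tree = ET.parse(resp)
    titles = [item.findtext("title", default="(untitled)")
              for item in tree.iter("item")][:limit]
    items = "\n".join(f"  <li>{t}</li>" for t in titles)
    return f"<ul class='recent-tracks'>\n{items}\n</ul>"

# print(recent_tracks_html("http://example.com/user/somebody/recent.rss"))
```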
what better place to catch up on blogging than a flight delay at DFW?
Flemming Funch writes about a study that contradicts some popular stereotypes about the use of IM at work.
Socialtext is a distributed team, we use IM a lot, and this certainly reflects my experience. There are worries about the misuse of informal communication tools like IM and blogs at work, but these worries aren't well-founded. People probably engage in the same amount of off-topic, relational conversation about sports, recipes, and kids in electronic conversation as they do in analog conversation.
In order for the Semantic web to work, you would need "a world where language is merely math done with words"
"Any attempt at a global ontology is doomed to fail, because meta-data describes a worldview. "
I don't think Clay is arguing that all metadata is bad. Rather, he's saying that it doesn't scale. Yes, the insurance industry might be able to construct a taxonomy that works for it, but the Semantic Web goes beyond the local. It talks about how local taxonomies can automagically knit themselves together. The problem with the Semantic Web is, from my point of view, that it can't scale because taxonomies are tools, not descriptions, and thus don't knit real well.
Shelley, from Sam Ruby's comments
There is a big difference between deliberate metadata, and accidental metadata, and if semantic web relied purely on accidental metadata, then we have it -- it is called Google.
well, yes, exactly
Tom Coates of plasticbag.org fame has started Everything in Moderation, a new blog about forum moderation.
Very interesting topic -- the social practices around online communication are at least as important as the tools we use.
via Nancy White.
John Udell's instant classic:
Every interpersonal e-mail message creates, or sustains, or alters the membership of a group. It happens so naturally that we don't even think about it. When you're writing a message to Sally, you cc: Joe and Beth. Joe adds Mark to the cc: list on his reply. You and Sally work for one department of your company, Joe for another, Beth is a customer, and Mark is an outside contractor. These subtle and spontaneous acts of group formation and adjustments of group membership are the source of e-mail's special power. Without any help from an administrator, we transcend the boundaries not only of time and space but also of organizational trust.
An ad-hoc group convened by e-mail dissolves unless membership is reaffirmed by each message. This is a feature, not a bug. Many of the groups that perform work in a modern organization are transient. A hallway conversation is over in minutes; a spontaneous collaboration can last a day; a project may take a week. Software that requires people to explicitly declare the formation of these groups, and to acknowledge their dissolution, is too blunt an instrument for such ephemeral social interaction. Like an operating-system thread, an e-mail thread is a lightweight construct, cheap to set up and tear down.
Mitch Ratcliffe wants to work on "a 'Sociobot' that ties into MySQL to allow people to be introduced and to track relationships by looking at who links to whom on Technorati and, if I can figure out how, on LinkedIn and other systems."
Sounds cool. Some questions the bot might answer:
* ?Who knows "Dave Weinberger" -- given a third party, who in the group knows that person
* ?Who's blogged about "QuickTime6" -- given a topic, who in the group has blogged on the topic
What other cool questions would you want to ask the sociobot?
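Here's a toy sketch of how the bot might answer those two questions against a link graph. The data below is made up; the real thing would pull who-links-to-whom from Technorati or similar and keep it in a database such as MySQL.

```python
# Answer "who knows X" and "who blogged about Y" from a who-links-to-whom
# graph. The bloggers, names, and topics here are invented for illustration.
LINKS = {  # blogger -> set of people/topics they have linked to or written about
    "ross":  {"Dave Weinberger", "QuickTime6"},
    "adina": {"Dave Weinberger"},
    "pete":  {"QuickTime6"},
}

def who_knows(person):
    """Who in the group has linked to this person?"""
    return [blogger for blogger, targets in LINKS.items() if person in targets]

def who_blogged_about(topic):
    """Who in the group has written about this topic?"""
    return [blogger for blogger, targets in LINKS.items() if topic in targets]

print(who_knows("Dave Weinberger"))     # ['ross', 'adina']
print(who_blogged_about("QuickTime6"))  # ['ross', 'pete']
```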
I think Maciej gets it half-right in his comment on the space shuttle disaster.
Physics 2, Business Administration 0
"When a program agrees to spend less money or accelerate a schedule beyond what the engineers and program managers think is reasonable, a small amount of overall risk is added. These little pieces of risk add up until managers are no longer aware of the total program risk, and are, in fact, gambling.
Columbia Accident Investigation Report, p. 139
One of the most sobering conclusions of the Shuttle accident report is that the Columbia was an exact replay of the Challenger - the same false confidence, the same scheduling and funding pressure, the same lack of attention to an intermittent problem whose causes were never understood. There's even the same badly-designed briefing slide, failing to convey the urgency the engineering team feels, and the same old Edward Tufte on hand to point it out, once the investigation gets into full swing.
Maciej is absolutely right that business managers have no business setting schedules and making risk assessments over the heads of the technical folk.
But he's wrong to say that the answer is to get rid of every last PHB. Geeks should have the sole voice only on projects whose primary goal is technical.
Where a project has a non-technical objective, the decisions about requirements and scope need to be made by people with domain expertise.
XP gets it right, here, I think. The technical people are the only people who can set the schedule for technical work and assess technical risks. If you ignore this principle, you are living in a world of delusion and inviting disaster.
The people who understand the business objectives should have say over what the project should do and when they think it's done.
Tangra's comments about the gaps in the Deanspace program highlight the flaws in a project driven by geeks, for non-geek users. The Deanspace documentation explains the technical features and schedule of the project, but still lacks some of the documentation -- and features -- needed to make campaign activist groups successful.
This isn't a fatal flaw -- Deanspace is a volunteer project that needs more volunteers to fill in this gap. But it does point out that you need customer input to be effective with a project that has non-technical goals.
In the aftermath of the Sobig worm, Ross and many others are forecasting the death of email. This is an over-reaction.
From an IM conversation with Peter Merholz
Peterme: Some of my most valuable (and yes, most frustrating) e-interactions are in discussion groups.
Adina: Discussion groups are complementary with blogs & wikis and such but mailing lists don't go away. The limitations of discussion groups are:
via Prof. Lessig, a pointer to DeanLink, a social networking application that links and matches Dean supporters by geography.
Very cool. Takes social network features like profiles and recommendations and applies them to building a network of political supporters.
Clay Shirky contends that wikis are effective because they dispense with process.
A wiki in the hands of a healthy community works. A wiki in the hands of an indifferent community fails. The software makes no attempt to add 'process' in order to keep people from doing stupid things. Instead, it provides more flexibility, a crazy amount of flexibility, and intoxicating amount of flexibility, allowing massive amounts of stupidity and intentional damage to be done, at will, by roving and anonymous posters. And it provides rollback.
Process, contends Clay, is a destructive immune response that tries to protect a group from damage before it occurs. Wikis replace this with the healthy immune response that quickly fixes damage when it happens. "It takes longer to set fire to the building than put it out, it takes longer to graffiti the wall than clean it, it takes longer to damage the page than restore it."
Ben Hyde, responding to Clay, sees wikis as the embodiment of a set of process assumptions that are different from the typical bureaucratic model.
Wikis are another example of a process framework for solving a class of organizational problems where you have a huge pool of hands and eyes and you want to leverage that resource to make something good...
Good insights all around. Whether you see the form as resistance to process or alternative process, the conclusion remains: there are effective alternatives to systems that depend on fixed hierarchy and inflexible rules.
I'll comment on the subtle and insightful aspects of Clay Shirky's blogconversation on wikis and process in a subsequent post. But first, a rant.
Clay alleges that "Process is an embedded reaction to prior stupidity." Nah. Process is an embedded reaction to doing the same thing twice.
Process was invented when a primeval hunter gave his fellows some hints on how to hold the stone-tipped spear, and a primeval gatherer told her clanmates heuristics about how to find the bushes with sweet-tasting berries and not the toxic ones that poisoned dear old Oog.
Of course too much process is bad, but some process is core to what makes us human.
Sociologist Mark Smith has developed a tool that analyzes the social dynamics of Usenet, helping users find congenial groups whether they're looking for conversation or quick answers.
By charting different types of behavior in Usenet groups, he's able to steer users to the kind of group they want -- not just a group that discusses the right topic but a group with the right goals and pace...
In real life, indicators such as the number of people eating in a restaurant, the decor and the smells tip off the consumer, he said. In an effort to create such atmospheric cues online, Smith and his group have created charts that represent with big colorful bubbles how chatty, argumentative or helpful a given group is.
They do this without reading any of the words in the messages. It's all based on the pattern of activity. People who post multiple replies on every discussion thread tend to be the arguers, the nitpickers. Those who post just one reply -- especially if that reply ends the thread -- tend to be the expert problem solvers.
The software Smith's team created, known as Netscan, is available online at netscan.research.microsoft.com, and about 1,000 people a day use it to help them choose discussion groups that fit their needs.
Someone who wants to know how to configure a printer would probably choose a group with a track record of quick answers, while someone looking for entertainment might choose a group whose history is riddled with flame wars, or online arguments.
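Netscan's actual metrics aren't spelled out in the article, so the following is only a toy version of the pattern it describes: posters who pile replies onto every thread look like arguers, and posters whose single reply ends a thread look like answerers. The sample data is invented.

```python
# Classify posters by their reply patterns, without reading message text.
from collections import defaultdict

def classify(posts):
    """posts: list of (author, thread_id) in posting order."""
    per_thread = defaultdict(lambda: defaultdict(int))
    last_poster = {}
    for author, thread in posts:
        per_thread[thread][author] += 1
        last_poster[thread] = author
    profiles = {}
    for thread, counts in per_thread.items():
        for author, n in counts.items():
            profile = profiles.setdefault(author, {"replies": 0, "thread_enders": 0})
            profile["replies"] += n
            if n == 1 and last_poster[thread] == author:
                profile["thread_enders"] += 1
    return profiles

sample = [("ann", 1), ("bob", 1), ("ann", 1), ("bob", 1), ("carol", 1),
          ("ann", 2), ("carol", 2)]
print(classify(sample))
# carol posts once per thread and ends both threads (answerer pattern);
# ann and bob rack up repeated replies (arguer pattern).
```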
Paul Resnick writes enthusiastically about Flash Mobs as a new form of social organization that may displace longer-term associations and friendships. Instant communication makes things faster, but the gathering of strangers has been part of urban civilizations since the days of dance halls and public hangings.
Flash mobbing makes it easier for people to flock. Those groups will complement longer-term associations, rather than displacing them. Here in Austin, webloggers and online journalers have become friends in physical space. The Texas Dean Meetups are being used as a base for strengthening the Democratic party precinct system.
This cartoon is pretty funny, but my guess is that new ways to meet people are competing with time spent home in front of the television, rather than with "real friends."
In short, I think augmenting flocking is cool, but it's not so new, and it adds rather than takes away from the repertoire of human social behavior.
Mark Pilgrim tweaks the semantic web vision with a glorious quote from Jorge Luis Borges:
“These ambiguities, redundancies, and deficiencies recall those attributed by Dr. Franz Kuhn to a certain Chinese encyclopedia entitled Celestial Emporium of Benevolent Knowledge. On those remote pages it is written that animals are divided into (a) those that belong to the Emperor, (b) embalmed ones, (c) those that are trained, (d) suckling pigs, (e) mermaids, (f) fabulous ones, (g) stray dogs, (h) those that are included in this classification, (i) those that tremble as if they were mad, (j) innumerable ones, (k) those drawn with a very fine camel’s hair brush, (l) others, (m) those that have just broken a flower vase, (n) those that resemble flies from a distance.”
Ward Cunningham is interviewed in "Journal du Net" about the invention and nature of the wiki. The story sounds charming in French, and Ward is wise as usual.
Les Wikis sont passés dans les mœurs des entreprises, plus particulièrement au sein des groupes projet qui ont besoin d'échanger beaucoup d'informations. En plus, un Wiki requiert une grande confiance des participants entre eux. Il est donc très difficile d'étendre son fonctionnement à l'échelle de toute une entreprise, où la confiance est bizarrement loin de régner.
"Wikis have entered company culture, particularly within project groups that need to exchange a lot of information. However, a wiki requires partipants to trust each other. It's difficult to see it working at the level of an entire enterprise, where trust is strangely far from the norm."
Les gens viennent voir le travail, se laissent absorbés, et repartent en ayant compris quelque chose à propos du partage et de la coopération, ce qui est difficile à décrire avec des mots.
"People come to get work done, they become absorbed, and leave having learned something about sharing and cooperation which is difficult to explain in words."
Translation mine, corrections welcome. (French-speakers -- do you pronounce "le Net" as "nette" or "neh"?)
Ed Vielmetti suggests a music player that plays music based on your location.
...a mobile music device like an iPod that has GPS location on it, so that it knows where you are and selects or offers up things to listen to based on where you are or what's coming up. Some of it is totally idiosyncratic, like playing tunes from bands you saw at concert venues when you go past or near the venue. Some of it is obvious, like playing Mystery Spot Polka when you're on US-2 heading west out of St Ignace, or Wreck of the Edmund Fitzgerald on M-123 north of M-28 near Whitefish Bay.
Prentiss Riddle builds on the idea and suggests a collaborative soundtrack...
If you combined it with GeoURLs (and some method of doing wireless downloads of music to an iPod, a bit of a stretch given 2003 infrastructure and copyright law) you could construct a collaborative musical geography....
A "songlines" system would require a GeoURL engine with hooks to CDDB or an equivalent music database, preferably with a bit more annotation capability than GeoURLs offer; a way to plan a route and query the GeoURL engine to produce a playlist; preferably some capability in the playlist generator to filter CDDB info for genre (no death metal, thanks) and song density (the songs associated with Graceland or Times Square or Austin's Sixth Street would run into the thousands, while some highways would be lucky to have one song per 50 miles). The hard part would be a discovery agent to trawl various file-sharing systems for the MP3s.
Wow. All kinds of opportunities for collaborative performance art.
This would be cool for reunions... "they're playing our song." Or creating a mix tape for a friend programmed for a favorite walk. All you need are lighting effects cued to emotional dynamics and life becomes a movie.
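For fun, here's a back-of-the-napkin sketch of the songlines idea: given a route as a list of coordinates, pick the geotagged songs nearby. The song data, coordinates, and the 10 km radius are all invented; a real system would sit on GeoURL and CDDB.

```python
# Pick geotagged songs near each point on a route. All data is illustrative.
import math

SONGS = [  # (title, latitude, longitude) -- made-up entries
    ("Mystery Spot Polka", 45.87, -84.73),
    ("Wreck of the Edmund Fitzgerald", 46.45, -85.0),
    ("Our Song (reunion mix)", 30.27, -97.74),
]

def km(lat1, lon1, lat2, lon2):
    """Rough great-circle distance in kilometers (haversine)."""
    to_rad = math.radians
    dlat, dlon = to_rad(lat2 - lat1), to_rad(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2 +
         math.cos(to_rad(lat1)) * math.cos(to_rad(lat2)) * math.sin(dlon / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def playlist(route, radius_km=10):
    picks = []
    for lat, lon in route:
        for title, slat, slon in SONGS:
            if km(lat, lon, slat, slon) <= radius_km and title not in picks:
                picks.append(title)
    return picks

print(playlist([(45.86, -84.72), (46.4, -85.05)]))
# -> ['Mystery Spot Polka', 'Wreck of the Edmund Fitzgerald']
```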
from a Slate article comparing the iTunes Top 100 with the Billboard Top 100:
"Billboard says that Apple, the most aggressive player in this market so far, is selling an average of 500,000 tracks a week. If that's true, and it takes just 1,500 sales to be No. 1, then the variety of tracks that people are downloading must be extremely broad—particularly compared with, say, the variety of tracks that make up a typical Top 40 station's play list."
Joi Ito writes: There is a lot of talk about identity these days. You MUST remember that identities are like names. You are NOT your identity. Your identity points to you. Everyone has multiple identities. Roger Clarke describes this as the difference between entities and identities. You are an entity. Your name, your role in the company, your relationship with your child, they are different identities. Multiple identities aren't just about having more than one email address or chat room nym. A multitude of identities is an essential component in protecting privacy and interacting in an exceedingly digital world.
Gordon Mohr wants a Wifi Wiki Hifi "Imagine that a cafe has both wireless net access and a net-linked stereo. Just like a Wiki website lets visitors edit its pages, such a sound system would let walk-in visitors mix its audio playlist."
Gordon brings up a small concern: "The only real impediment here is that if you want to get technical, such dynamic unlicensed music sharing and performance is illegal."
Yes, but commercial venues already have license terms for recorded music they play for the crowd. It shouldn't be impossible to work out a deal to extend the terms to digital mix and play.
Simon St. Laurent in praise of 404 Not Found
...is the headline of Tom Friedman's column today in the New York Times. "Says Alan Cohen, a V.P. of Airespace, a new Wi-Fi provider: 'If I can operate Google, I can find anything. And with wireless, it means I will be able to find anything, anywhere, anytime. Which is why I say that Google, combined with Wi-Fi, is a little bit like God. God is wireless, God is everywhere and God sees and knows everything. Throughout history, people connected to God without wires. Now, for many questions in the world, you ask Google, and increasingly, you can do it without wires, too.'"
Well, almost. The canonical description of a monotheistic deity is "omniscient, omnipresent, and omnipotent." (Pagan myths, by contrast, would have pretty boring plots if the gods knew everything and were all-powerful.)
Google comes pretty close to "all-knowing" and "omnipresent" with wireless internet access. But omnipotent, nope. Google doesn't cause anything to happen, so it's clearly not all-powerful.
Google's omniscience is missing a few attributes, if you look a bit more closely. Google knows everything about the present, and a lot about the past. But it doesn't report query results for dates in the future.
Another canonical attribute of divine omniscience is wisdom. Is Google wise? The top search result for enterprise application architecture is Martin Fowler's book on the subject, which seems like a pretty good call to me.
Google will also tell you all about Jennifer Aniston, too.
The wisdom of the answer depends on the wisdom of the question.
A good content management system manages the separation of content and markup. CSS manages the separation of markup and presentation. People who don’t understand this difference (or don’t care about the separation of markup and presentation) tend to think that CSS is useless, or worse, that it’s the threat to their content management system. Which is nonsense; the two technologies are complementary. Sites that use both well will invariably give a better end-user experience than sites that use either badly, or that use one to the exclusion of the other.
(Offerings of fruit and flowers in Mark's general direction)
In recent months, I've gotten involved in a number of interesting and exciting projects.
Trouble is, they all involve sets of logins and passwords. Some have assigned logins and passwords, so I can't use the usual combinations. Even before the new set of projects, my standard procedure for infrequently-used services had degenerated into using the hint and getting a new password every single time!
I have completely outgrown the creaky password management methods I've been using till now.
I need a forearm-based implant that stores all my passwords, so I have them with me, whether or not I have access to any particular computing device.
Or maybe something a little less extreme.
How about a bracelet, with an LCD readout and a scroll-wheel that you can use to select System:Username:Password combinations?
There could be fashionable versions (precious metals, licensed characters). There could be simple versions, like medical alert tags, that guys could feel comfortable with.
Anybody know a good cyborgification service? Or a designer and contract manufacturer?
Or a better idea for managing more passwords than my simple brain can hold?
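Failing cyborgification, here's the bracelet's job reduced to a toy Python sketch: a list of System:Username:Password entries you can scroll through or look up by system name. The entries are invented, and there's no encryption here -- the real thing would obviously need it.

    # A toy stand-in for the bracelet: System:Username:Password triples.
    # Entries are invented; a real store would be encrypted.
    ENTRIES = [
        ("projectwiki", "me", "correct-horse"),
        ("cvs.example.org", "me", "battery-staple"),
        ("webmail", "me.too", "hunter2"),
    ]

    def scroll():
        """Step through the entries the way a scroll-wheel LCD might."""
        for i, (system, user, password) in enumerate(ENTRIES, 1):
            print(f"{i}. {system}:{user}:{password}")

    def lookup(system):
        """Return the (username, password) for a system, or None if unknown."""
        for s, user, password in ENTRIES:
            if s == system:
                return user, password
        return None

    scroll()
    print(lookup("webmail"))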
Ditlea describes AR technology as providing additional information to the visual field, enabling soldiers, doctors, and technicians to work more effectively.
With AR, you'll simply slip on a tiny visor and guided repair instructions will appear next to each under-the-hood part that you gaze at: "Now that you've disconnected the radiator hose, move it to one side and unscrew the carburetor cap."
Eventually, Ditlea predicts, this will be available to the rest of us:
"And when AR headgear does shrink down to the size of common glasses, it could be a must for up-and-coming managers, to avoid career or social gaffes at business meetings and cocktail parties. Everyone will be packing extra data in their spectacles. Each time you look at someone across a conference table or a crowded room, information about who they are and what their background is could appear before your eyes."
All of these examples are factual data provided to the individual. The second example is the image that I was talking about earlier -- a person getting secret information that gives them advantages over the other people in the room.
This is very different from the story Greg Elin told about Aaron Swartz and Cory Doctorow hanging out on a couch at the O'Reilly Emerging Technology Conference, chatting with each other in person while also sending aside comments, checking references, and forwarding code snippets on the computers in their hands.
Aaron and Cory are getting an overlay of internet data, and using this as a source and a channel for their conversation.
So this is what I meant. In popular science, augmented reality is data informing and isolating the individual. In life as we're living it, augmented reality is data informing the connections between people, and the cyborg -- the part-human, part-machine entity -- is a conversation.
(We'll leave Prof. Mann out of it for now, since Abe Books cancelled my order for his book, so it's over to Half.com. )
A design team at Ideo has brainstormed and built five prototypes of features to make mobile phones less socially rude.
The most disturbing on the list is this one, which doesn't seem to reduce the amount of rudeness in the universe:
For example, the first phone, called SoMo1, gives its user a mild electric shock, depending on how loudly the person at the other end is speaking. This encourages both parties to speak more quietly, otherwise the mild tingling becomes an unpleasant jolt.
via BJ Fogg's class on Captology
Facts corrected. My impression of Professor Mann comes from (always-flawed) press coverage; glad to hear from human beings in person!
It's very interesting to hear that the real cyborg experience is a community, which is different from the media and commercial stereotype.
My commentary about the cyborg is more about the popular image, and less about Professor Mann's book, as he and his students are saying loud and clear!
I do think there's a difference between common popular and commercial images of augmented reality that isolate the individual; and some emerging kinds of communication tools -- including, it sounds like, Prof. Mann's work.
I haven't read the book yet, though it is certainly now on my list.
From conversation with Greg Elin:
The typical image of augmented reality is
University of Toronto professor Steve Mann, who walks around with a special set of glasses that feed him data about the world around him. You know it's cold, he knows it's 17 degrees out. You can see that the Verrazano Narrows is a long bridge, he can see that the main span is 4260 feet long between towers.
The cyborg is smarter than the rest of us; he can correct our facts; and the extra data separates him from others around him.
At Clay Shirky's Social Software conference last fall, the physical reality of the conference -- the speaker talking, verbal comments -- was augmented by people chatting online, projected to a big screen for folks without laptops. (Greg modified Manuel Kiessling's A Really Simple Chat Client for the experiment.)
Interruptive comments were diverted to screen; people checked references and took notes and passed notes (in the 6th grade sense).
Augmented reality is experienced and created by a group of people, not an isolated individual. There are many places around the world where text messages on mobile phones are used this way (see Rheingold's Smart Mobs if you haven't read it already or don't live there).
In the Steve Mann image, the cyborg is an isolated being, made less connected by a stream of data.
In the Clay Shirky conference room, and the world of augmented reality we're starting to live in, the cyborg is a conversation.
Phil Wolff wrote a while back about how weblogs are going to evolve into a converged client, with attributes of weblogs, email, IM, PIM, presentation, word processor, newsreader, video editor, workflow manager, and a couple of other things in the pot for good measure.
I agree in part, and disagree vehemently in part. The many modes of human communication are all used together to support the relationships we have and the work we're doing. There should be interfaces and touchpoints among the media; a common inbox for daily activities, common memory spaces for the things we want to remember in the context we want to remember them.
But an interface is a tool designed for a purpose. There is no way I want the controls required to edit video before me unless I want to be editing video right now. I don't want lots of screen real estate taken up by publishing widgets when I just want to write and post three sentences. This is why Microsoft Office feels like it's gotten progressively worse; there are too many potatoes in the sack.
Phil writes: "Are you presenting on a computer projector, a video stream, or paper? The software should understand how to adjust."
Respectfully, Phil, I don't want software to guess whether I'm trying to edit a video -- this is an even more nightmarish version of Microsoft's Clippy, which obsequiously tries to write your business letters for you.
There's a reason people like publishing with Blogger and MovableType.
Simple is good.
The hard part is not going to be tying all of these things together.
The hard part is going to be maintaining simple entry points to the underlying complexity.
In a comments thread" to Peterme's post on regional blogs, Dan Lyke suggested a geo-linked hiking blog: "a big collaborative map that'd have information that no single map publisher can put out right now"
Right now I'm carrying a GPS along when I go hiking or biking, then downloading the track points. My thought initially was that it'd just be nice to have enough GPS data that I can say "I took a photo close to there" and start to attach latitude/longitude information to my photo database.
But I'm doing this with Un*x through various cool but non "user friendly" means; if I can find easy ways for more people to do this, and set up some sort of application to annotate and manage these tracks a little better, then we could also start to build a big collaborative map that'd have information that no single map publisher can put out right now, with data a good bit of which probably doesn't exist in digital form.
I wonder if GeoURL is part of the solution. This is a service that creates meta tags for geographical co-ordinates. I wonder how they specify location, and whether it would be precise enough to locate waterfalls?
GeoURL is Slashdotted right now, so I can't tell.
The conversational thread has continued on Dan's site.
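The "I took a photo close to there" part is mostly a matter of lining up timestamps: given a downloaded track and a photo's time, find the track point nearest in time and borrow its coordinates. A minimal Python sketch, assuming the track has already been parsed into (timestamp, lat, lon) tuples; the sample points are invented.

    from datetime import datetime

    # A GPS track as (timestamp, lat, lon), already parsed from whatever
    # format the receiver downloads.  The values below are invented.
    TRACK = [
        (datetime(2003, 5, 10, 9, 0), 37.923, -122.600),
        (datetime(2003, 5, 10, 9, 20), 37.931, -122.612),
        (datetime(2003, 5, 10, 9, 45), 37.940, -122.621),
    ]

    def locate_photo(photo_time, track=TRACK):
        """Return the (lat, lon) of the track point closest in time to the photo."""
        nearest = min(track, key=lambda p: abs((p[0] - photo_time).total_seconds()))
        return nearest[1], nearest[2]

    # A photo taken at 9:12 gets tagged with the 9:20 track point's coordinates.
    print(locate_photo(datetime(2003, 5, 10, 9, 12)))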
The article uses examples mostly from developer-oriented projects like Linux and Gnome.
Some of their premises seem obsolete. There are new generations of open source software being designed for humans, not just arch-geeks. Examples include weblog software: MovableType; and email/PIM software: Spaces, OSAF. These projects deliberately consider usability.
On the other hand, some of their suggestions are interesting, such as:
* providing tools for users to report usability issues
* creating packaged remote usability tests for users
* enabling bug-tracking systems to incorporate graphical and video content (apparently Bugzilla discussions of interface issues require creating ASCII art)
* being more welcoming to HCI practitioners
The discussion is very academic in tone. The article would be more compelling if the authors had actually tried to, say, contact the core Mozilla team and offered to implement their ideas.
From Timothy Appnell at O'Reilly: eBay, Yahoo Groups and Calendar, PayPal, more.
The O'Reilly editors talk about rich web clients. More of the editors seem to like Flash. The articulate comments favor non-proprietary approaches using XHTML, XUL, SVG. Some tasty links in there to follow up.
Joel Spolsky's mailing list got spamblocked too.
p.s. The problem seems to have cleared up, presumably thanks to Earthlink.
...on a project over the last few weeks, with a group of geographically dispersed colleagues. For the most part the experience has been quite pleasant. For folks who aren't familiar with wikis, they're collaborative web spaces that anyone can edit.
* The wiki is used to post meeting times, resources for the group, and for individuals to post what they're working on.
* We typically email to brainstorm ideas; then post the results of the brainstorming to the Wiki where it can be sculpted.
* The absurdly low overhead is delightful -- click "edit this page" to edit the page. Even easier than keeping an intranet and ftp-ing html pages (which I've done in a previous startup). Less likelihood of versionitis caused by ftp-ing an old file version onto a newer one.
* Easy to keep track of what's new by clicking on "recent changes"
* Keeps behind-the-scenes work out of email, freeing email for interactive conversation
The easygoing "anyone edit" system works well except when more than one person is editing the same document on a deadline. Upon which we needed to implement "social document management" by verbally "checking-in" and "checking-out" sections.
After the first dozen or so entries, you need to start gardening the home page to keep it from getting tangled and overgrown.
The hyperlink-tyranny of the Wiki interface makes multi-page structures rather dizzying to navigate. This method will top out above a certain level of complexity, without the ability to add more navigational cues.
Using a Wiki is much easier and more pleasant than the corporate Microsoft monoculture, which requires the use of rock-heavy tools like PowerPoint and Word to do simple things.
Using a Wiki requires collaboration and trust within the workgroup. Knowledge management isn't technological, it's social.
It will be interesting to see how and whether the use of the Wiki will scale when and if the project matures. In the meantime, there's a set of rapid, low-overhead collaboration processes that the Wiki works really nicely for.
Late last week I got a return email with a distressing header:
Jon Udell writes about a new system that Delta has designed to give passengers waiting at the gate more information about the boarding process, like updates on how many people have checked in, and the state of the standby list.
Sounds really helpful for those times you're standing there anxiously waiting to see if you'll get on the flight.
Reflecting on David's puzzlement about the Jews and software meeting in Boston the other day, I recalled this Joel Spolsky essay on how reading code is like studying Talmud, in that it is best done in pairs, puzzling through and arguing about the meaning of the text.
When you think about it, the "link" form of the weblog has similarities to the classical Jewish form of text commentary. The blogger links to an article somewhere on the web, and then writes a commentary on the original text; then other commentators refer to the original commentator. In the traditional form, Jewish scholars wrote texts that commented on the bible or on the writings of earlier rabbis; and other rabbis wrote texts that commented on the earlier rabbis' writings.
Because they didn't have hypertext at the time, commentaries linked using chapter and sentence references; so when you study traditional texts, you wind up with a table full of books following the cross-references from book to book.
The form of the Talmud is similar to a recorded newsgroup or blog comments discussion. In the classical rabbinic academies, scholars discussed and debated a wide variety of topics, and those discussions were eventually edited into book form. The editors were concerned with representing the debate of ideas, not with historical accuracy -- often, there are arguments between rabbis who didn't live at the same time.
It's not that the Rabbis didn't know how to write neat, logical, linear exposition. The classic rabbinic period was contemporaneous with the Hellenized civilization of the ancient world; they had the models of Greek thinking all around them, and they borrowed when it suited them -- the Passover seder is modeled after the Platonic symposium. They looked at neat, logical, linear, hierarchical writing, decided that they didn't like it, and wanted to write in weblog form instead.
Intriguing Advogato essay by GaryM, sparked by David Gelernter's NYT advertorial on the obsolescence of the file cabinet metaphor for organizing data.
"David Gelernter's thinly disguised advertising piece Forget the Files and the Folders: Let Your Screen Reflect Life, for all it's absurdities, is still something of a thread of a good idea in his "narrative file system" thesis: the idea of desktops, files and folders is a quaint retrieval from an office world very few of us remember and an organizational tool alien to the way people view their data.
No one organizes their home placing all items made by Scotts Tissue in one room, all Rubbermaid stuff in another, all Sony equipment on one shelf, Toshiba on another. We don't even keep all audio tools in one room and leave all visual tools in another. How we actually use our data is determined by the stories and narratives we wish to experience and construct. It's time we took the initiative to start building computing tools that recognize this."
The article describes the problem nicely, but doesn't propose any useful solutions. Too bad.
When good interfaces go crufty has lots of well thought through examples of user interface traits that are artifacts of obsolete design constraints.
One such example: applications use awkward little filepickers to open or save files because when the Mac was first designed, it wasn't able to run the file manager and an application program at the same time.
Installed base dependencies and cultural habits can cause cruft to be highly persistent. Think about it -- the school year in the US begins in September and ends in May, to allow students time off to help with the family harvest.
One nit -- Internet Explorer's lack of an exit menu item is a bug, not a feature. If you've got more than one window open, you need to close them, tediously, one at a time.
The essay was slashdotted, so you may have read it already :-)
Execution happens at ebb tide, when the hype waves have receded offshore. I was looking for The Self-Made Tapestry and found that Barnes and Noble Online now lets you search inventory of its local stores, enables you to reserve the book online for pick-up at the store, and even tells you in what section of the store it can be found. I wish that the local bookstores had this capability, but they don't yet.
The last feature is particularly helpful for those who prefer books that aren't clearly in one subject area. Is The Computational Beauty of Nature in Biology, Math, or Computer Science? Is The Printing Press as an Agent of Change in History, Sociology, or someplace else? Much time has been spent wandering bookstore aisles in search of books without an obvious category; this is one reason that I buy a lot of books online (of course, they could always reshelve sections on Social History of Technology and Computational Biology...).
By the way, they didn't have Self-Made Tapestry, so I bought Acquiring Genomes instead, which is on the Lynn Margulis theory that cell organelles used to be free-swimming critters.
Last weekend I was working on a little utility to post to MovableType via email. And I got stuck figuring out how to include files that lived in different directories. The instructions in the book and the PythonWin IDE were confusing and insufficiently helpful. So I turned to the source of all earthly wisdom.
A quick search of the Google Usenet archives found questions from several people who've been confused by the same problem over the last decade, complete with helpful and instructive answers. I followed the instructions that Python guru Tim Peters provided a newbie in 2001: no need to mess with the Windows registry; simply include the reference to the desired path in the header of your program.
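In practice that advice amounts to a couple of lines at the top of the script, before the imports that need the extra directory. The path below is a placeholder for wherever the shared modules actually live.

    import sys

    # Make modules in another directory importable without touching the
    # Windows registry or the PYTHONPATH environment variable.  The path
    # is a placeholder for wherever the shared code actually lives.
    sys.path.insert(0, r"C:\projects\mail2blog\lib")

    # Imports that follow will now search that directory too, e.g.:
    # import mailhandler
    print(sys.path[0])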
Google Usenet search is incredibly useful for this type of question. Books don't have space in 300 pages, or even 1200 pages to cover every conceivable implementation decision and configuration nightmare. FAQs are effective precisely because a patient editor has distilled the sea of knowledge into an elixir of Questions asked Frequently. By contrast, Google Usenet search isn't bounded by page count or the patience of human editors. Someone, somewhere has encountered the problem that has you climbing the walls, and someone, somewhere has answered it.
Many person-hours of labor and numerous PhD theses have been devoted to designing sophisticated knowledge management systems, incorporating text painstakingly tagged and cleverly autosummarized; employing expert rules and meticulously built case repositories. But my guess is that a really good search engine and a deep database of human conversation can beat fancy knowledge management a lot of the time, and most of the rest of the time, the Google-Usenet approach wins on price-performance.
Once again, the intelligence in the semantic web is largely human; a person who asked a question, and a person who answered it; the machine merely serves to connect today's seeker with yesterday's guru.
Of course, to succeed with this approach, as David Weinberger points out, you need to know how to phrase a query that will retrieve the right antique conversations. "PYTHONPATH Windows" succeeded instantly at finding the answer to my question last weekend. The skill of phrasing a search query ought to be taught in middle school, around the same time kids get old enough to figure out that a paragraph should have a main idea.
I've been a little slow on reading for the last couple of weeks, working on code instead. Learned CSS last weekend, and installed Movable Type this weekend. The MT software lets you categorize entries, so people who want to catch up on my life don't have to slog through essays on complex systems, and vice versa. The code is available, which raises all kinds of intriguing possibilities for nifty hacks, involving email and the Amazon API and comments.
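For example, the email-posting hack can stay out of MT's internals entirely and talk to its XML-RPC interface instead. Here's a rough sketch, assuming a POP mailbox, plain-text messages, and MT's standard metaWeblog endpoint; the server names, blog id, and passwords are placeholders.

    import poplib
    import xmlrpc.client
    from email import message_from_bytes

    # Placeholders -- real server names, blog id, and credentials would
    # come from a config file, not be hard-coded like this.
    POP_HOST, POP_USER, POP_PASS = "pop.example.com", "me", "mail-password"
    MT_ENDPOINT = "http://example.com/mt/mt-xmlrpc.cgi"
    BLOG_ID, MT_USER, MT_PASS = "1", "me", "blog-password"

    def fetch_messages():
        """Pull waiting messages off the POP mailbox."""
        box = poplib.POP3(POP_HOST)
        box.user(POP_USER)
        box.pass_(POP_PASS)
        for i in range(1, len(box.list()[1]) + 1):
            raw = b"\n".join(box.retr(i)[1])
            yield message_from_bytes(raw)
        box.quit()

    def post_entry(msg):
        """Turn one plain-text email into a weblog entry via metaWeblog.newPost."""
        server = xmlrpc.client.ServerProxy(MT_ENDPOINT)
        entry = {"title": msg["Subject"], "description": msg.get_payload()}
        server.metaWeblog.newPost(BLOG_ID, MT_USER, MT_PASS, entry, True)

    for message in fetch_messages():
        post_entry(message)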
The technology industry is in a depression, and the big boys and girls wonder if the days of innovation are behind us. But there's plenty of creativity going on in the open source world.
A couple of years ago, open source hackers were working on OS kernel implementations, web servers, and development tools. Reliable, heavy-duty carpenter's tools; software of, by, and for professional technologists, intent on improving the machine, and more power to them.
These days, there are also communities working on tools for publishing, collaboration, communication. Creative applications using the Amazon API, the Google API, RSS syndication. And I just read on SlashDot that Mitch Kapor and Andy Hertzfeld are working on an open source competitor to Outlook, using bits and pieces of Mozilla, Jabber, and Python. The current wave of open source software development is in tools and applications for people.
Some thoughts about this trend, in several different directions:
1) During the boom, Jerry Michalski, an industry visionary and highly decent human being, used to talk about how the internet would provide tools for people to communicate and collaborate. And he'd talk about the potential for Yahoo and Amazon and AOL to be new platforms. But the economy went south, companies slowed innovation, and focused understandably on paying the bills. The good thing is, there's no reason to wait for a Yahoo or Amazon or Microsoft to provide the tools. People are coding happily away in kitchens and living rooms.
2) Despite the fact that Mitch Kapor's project seems to attack Microsoft in an area of towering strength, his business isn't as crazy as it sounds. IBM is building a big business implementing open source software; there are similar services opportunities downmarket of IBM. IBM would be quite happy to field armies of professional services people to deploy an open source messaging system.
The Kapor announcement is vaporware; it may or may not go anywhere. The niche might be filled by some other project, some other year. But open source poses a threat to Microsoft's dominance of the email market, just as it does in operating systems.
3) Tim Berners-Lee and various other very smart people have described a vision of the "semantic web." According to Berners-Lee's view, the Semantic Web would be for machines what the World Wide Web is for people, a uniform way to see and use vast amounts of formerly hidden information. The classic example is a robot secretary that will scour the web and schedule your airfares, hotel rooms, and meetings, using metadata published according to standards, and discovered via automated search and publish/subscribe notification.
Open source hackers and software companies are building a semantic web today, and it's different from Berners-Lee's vision. In the robot version of the semantic web, the nodes of the network consist of information, nicely categorized according to standard XML taxonomies. The links consist of protocols and tools to traverse the network, and automated processes to make calculations and execute transactions; to find the shortest travel time at the lowest cost.
In the version of the semantic web exemplified by AllConsuming.net, Daypop and Google News, the nodes of the network are people. The links of the network are relationships among people; who are reading books, selecting stories to publish, selecting sites to link. Google News, which is marketed as a replacement for human editors, depends thoroughly on humans; editors and bloggers, who select the stories to cover to begin with, and readers around the world, who choose which stories to read. The semantic web doesn't replace human intelligence, it multiplies it by connecting people.
Despite the Nasdaq, tech innovation surely isn't done.
It is nice to see the mainstream press discussing music industry policies as anti-customer rather than repeating the industry's piracy message.
And it's also great to read Jerry Michalski saying it, since he's been thinking and talking about these issues for years now.
Perhaps digitalconsumer.org is helping to change the terms of the debate. The recent Boucher and Lofgren bills describe their goals as protecting the rights of customers to traditional fair use of media.
The good thing about using the term "consumer" in this context is that an individual hears the word and thinks "that's me -- my rights to things that I have in my house are being taken away." It becomes an area where politicians can take a populist stand. It takes the discourse out of the realm of abstract and technical legal principles and rights. It's great that there are lawyers fighting these issues in the courts, and more power to them. But the language of lawyers doesn't get people to identify and take action.
What the term "consumer" leaves out is Jerry's "co-participant" message, which the Fortune article quoted but didn't seem to understand. Personal music sharing, fan sites, etc. are ways for individuals to participate in the creation and sharing of culture. People's desire to contribute could be embraced into media business models, instead of repelled as invasions into the territory and property of the media industry.
It is also pretty weird to read the characterization of Jerry as a "cyberspace libertarian" -- he just doesn't fit that image of a scruffy maladjusted coder who rants in favor of guns and drugs and abolishing the government!
FreeRAM XP Pro from YourWare Solutions. Automatically frees memory on Windows machines, enabling them to work somewhat longer before inevitably crashing.
Of course, I ought to upgrade to Windows XP or 2000. But that would mean setting aside the time to troubleshoot the upgrade; so I live with chronic Win98 memory leaks, and use FreeRam as a palliative for the symptoms.
The music industry's problems are caused by the changing needs and tastes of customers, as reported by this informative story in Slate.

Industry mistakes include:

* Believing in teen idols. Buyers over the age of 40 account for 44 percent of CD sales, up from 19.6 percent in 1992, according to a recent RIAA survey. Older listeners have diverse tastes, and it's harder to reach a fragmented market. So the music industry keeps trying to manufacture teen stars with mass popularity, even though Britney's audience is shrinking.
* Offering one color, as long as it's black. Successful industries differentiate products. Yet the music industry focuses on mass distribution of a single product, at a single level of quality.
* Blaming technological threats while ignoring customer boredom. The last big lull in music sales occurred in the 70s, when cassette tapes were taking off. The industry blamed plastic. Then punk and new wave killed disco, and sales rose again.
Avoid SpamKiller, the anti-spam shareware utility published by McAfee. Whenever it captures a piece of spam, it utters a hideous, barking, crunching sound, as if Cerberus, the three-headed demon guard dog of Hell, had just clamped its infernal jaws around the thighbone of an intruder. Even if you turn off the sound effects, it crashes frequently and hard.
Anyone have better suggestions?