Steven Johnson's The Invention of Air is a short, entertaining intellectual history of Joseph Priestley, an Enlightenment polymath and radical who left his mark on science, religion, and politics. Johnson's book is part of a genre, from Neal Stephenson to a spate of Franklin biographies, expressing nostalgia for good old-fashioned Enlightenment values in the age of the Bush administration's anti-scientific, paleo-religious, wannabe tyranny.
Interesting, the connections that Johnson draws and implies. Johnson hails Priestley as a forebear of ecoscience, since he was the first to identify that plants create the substance essential for animals to live. Johnson also notes that Priestley was financially supported by England's early industrialists, who built the industrial age on the energy of burning fossil fuel and the labor of workers in the mines and factories. The seeds of the sustainability critique were planted at the same time industrial pollution began.
The book sympathizes with Priestley's Enlightenment critique of religious orthodoxy and political tyranny, his pursuit of science, and the way his rationalist enthusiasm connected them all. Rather than connecting with the left's critique of industrial waste, pollution, and oppression, Johnson emphasizes Priestley's irrepressible optimism. Even as a scapegoat in his own country and a political exile, Priestley kept up his experiments, preaching, and polemic.
Perhaps the lesson for us is that we can learn from Enlightenment optimism, too. What's the right balance between skepticism toward an ideology of progress, which blinds us to booms and hidden costs, and optimism that lets us create new solutions?
The evolutionary history of animal development is producing some thrilling science these days. Like Endless Forms Most Beautiful, by Sean Carroll, Your Inner Fish is written for a general audience by one of the pioneering scientists in the field.
Neil Shubin is a paleontologist and developmental biologist whose team discovered Tiktaalik, a Devonian fish evolving toward the tetrapods. This stage of tetrapod evolution is intriguing on its own -- the creature had jointed fins with ends that bend and splay, and a neck, allowing it to do "pushups" in shallow water to catch prey or watch for predators.
Carroll goes deeper than Shubin into the core of evo devo, the evolution of the developmental program that builds creatures with bodies. Where Shubin's book shines is in exploring the deep evolutionary history of different parts of the body, such as teeth, eyes, ears, and the head. The developmental program for teeth, arising out of the interaction between layers of skin in the embryo, also generates hair, feathers, and breasts. The bones, cartilage, and nerves in the human jaw, ears, and throat expanded from tissues that served as gills in fish; the straightforward nerve routes in fish became convoluted in mammals because the locations of those tissues have been rearranged.
One of the most interesting chapters in the book covers the evolution of the building materials of bodies: collagen, cartilage, bone, intercellular communication. One fascinating hypothesis in this section is that collagen, one of the key bodybuilding materials, requires a lot of oxygen to produce. If so, a key factor in the explosion of animals with bodies in the Cambrian period was a rapid rise in the amount of oxygen in the atmosphere.
To follow up on this idea, I'm now reading Oxygen, the Molecule that Made the World.
In general, I strongly prefer reading about the science of evolution, rather than arguments defending evolution against its detractors. The beautiful, rich stories of the evolution of life, supported by interlocking evidence in fossils, rocks, and DNA, are more interesting than the meta-argument. I don't run into too many creationists in my usual social circles.
Every once in a while, I bump into some creationism. During a long wait for a car repair the other day, I was reading the fascinating story of Oxygen, in which the rocks, air, and climate of the earth have been intertwingled with the evolution of life. On the drive back, flipping the tuner in search of a news station, I stumbled upon a "creation science" radio show.
The theory of the creationists on the show depended on an assumption of rapidly varying rates of radioactive decay. They couldn't explain why decay rates would fluctuate, except that God is all-wise and all-powerful. Moreover, they explained, all of the rock layers on earth, which conventional science attributes to billions of years of geologic history, were actually caused by intense volcanic activity and sedimentary deposition during the Flood 5,000 years ago. How did Noah survive on the ark, with all the earth's volcanic and sedimentary rock erupting and flowing around him? Miracles, of course. God is all-powerful.
Science is somewhat harder but much more interesting when you can't use miracles to patch up the gaps in your logic.
This does raise interesting questions about information and persuasion. Americans' beliefs tend to depart from orthodox religion when their personal experience diverges from religious teaching. A majority of young people support gay rights, and in general, people are more likely to support gay rights when they know family members, friends, or colleagues who are gay. Their emotional experience overrides religious arguments.
Similarly, according to a new Pew study, a (narrow) majority of American Christians believe that non-Christian religions can also lead to salvation. When people encounter good people with varying religious beliefs, they conclude that it isn't plausible that only fellow Christians will go to heaven.
Americans come to support gay rights and religious pluralism based on their personal life experiences. So what of evolution? A person isn't going to meet an australopithecine on the way to the store, or have a feathered dinosaur as a pet. The beautiful and compelling ideas of evolutionary development depend on a basic understanding of genetics and developmental biology. The case for evolution is made of fact and reason, not personal everyday experience.
There is a disturbing subplot running between the lines in Oxygen. Much of the innovative geology and paleontology was done in pre-WWII Germany. Science, of course, came to a halt when society was taken over by a political movement with demented beliefs. What sort of society can educate its citizens so that a majority supports science?
William Calvin writes entertainingly about human evolution. But his pet theory that the spark for human intelligence came from throwing proto-javelins at proto-gazelles around water holes is a great example of the unpersuasiveness of this kind of evolutionary storytelling. Throwing requires a high level of fine motor coordination, large motor coordination, and forethought, and greater hunting ability clearly would convey evolutionary advantages. But the explanation is unfalsifiable, and can't be rationally distinguished from competing theories, like Dunbar's theory that intelligence arose from gossip, or Terrence Deacon's theory that intelligence arose from proto-wedding rings (you see, humans need explicit symbols to mark the sexual availability of a female, since we don't have estrus).
V.S. Ramachandran, in an overall very good book showing what neuropathology reveals about the workings of the brain, includes a throwaway statement that the brain could not have a built-in mechanism for cooking, and that cooking must therefore be derivative of other skills.
But one could create a just-so story about cooking that would be as persuasive as the story about throwing. Early humans who learned and remembered how to roast meat and detoxify vegetables would gain more calories and nutrients from their food, and have an evolutionary advantage over eaters of raw food. The skills of memory, planning, persistence, communication, and cooperation required for cooking would carry over easily to other evolutionarily beneficial traits.
Come to think of it, child development calls both the throwing theory and the cooking theory into question. Children learn to talk, walk, stack things, and open things earlier than they can throw or hold pretend tea parties.
The overdetermined storytelling and explanatory traits of humans explain the origin myths generated by evolutionary biology, more than any of the myths explain the origins of human intelligence.
Update: A Google search on Ramachandran finds a couple of more recent articles about mirror neurons, a not-yet-proven but more plausible fundamental catalyst for human intelligence. The ability to feel and echo another human's sensation and action could be fundamental to complex social cooperation and cultural learning. An infant will stick his tongue out in response to his parent's gesture; babies mirror actions and emotions long before they walk and talk. The mirroring hypothesis seems much more amenable to testing in a variety of ways: the genetic distance between humans and other primates, the results of mirroring disability (autism?), and the developmental relationship between mirroring, learning, and social development.
Descending to Newark airport a few weeks ago, I watched ribbons of street lights and twinkling cars make a glowing carpet. Is this future nostalgia? In the near future, will we be able to afford highways? Will we be able to afford airplanes?
Since Ezra Klein went on vacation and turned his blog over to Prof Goose of the Oil Drum, I've been reading some of the peak oil bloggers, and it seems like there's something to worry about.
* There is one major variable in the world's oil equation: the Saudi supply. Information about Saudi capacity is closely held, and the Saudis have every reason to lie.
* New extraction techniques get more oil out of the ground sooner, so the depletion curve is steeper after a field's production peaks.
Worldchanging covers the opportunities for new technology and increased efficiency with some practical optimism. Things might get very different in the not-so-distant future.
Update: Just read this Washington Post discussion with an analyst who concludes from research that Saudi production has peaked already. Yikes!
The author of "Endless Forms Most Beautiful" cited Life on a Young Planet as a source, and one of his favorite science books.
Harvard paleontologist Andrew Knoll weaves together geology and paleontology to tell the story of life before the Cambrian Explosion. "Life on a Young Planet" explores scientific mysteries that don't yet have clear answers.
Late in the Proterozoic eon, 600 to 800 million years ago, there are clear signs of life: bacteria and algae with colonial living patterns similar to those of their descendants in tidal flats today. Rewind to 3.5 billion years ago, and there are much more cryptic signs of life that can't be conclusively distinguished from non-living processes.
Fast forward to 540 million years ago, at the end of the Proterozoic, and there is a profusion of Vendobiont animal forms, strange creatures unlike the recognizable organisms that proliferated during the "Cambrian Explosion." Scientists still don't know how, or whether, these alien creatures are related to the generations that followed.
One of my favorite sub-plots of the book is the story of the co-evolution of life and the planet. Early in earth's history, oxygen was scarce. Early bacteria metabolized methane, sulfates, and other chemicals. The proliferation of cyanobacteria helped create the oxygen-rich atmosphere that allowed large and complicated life forms to flourish. Here, also, scientists still have unanswered questions about how the earth's atmosphere evolved.
The book is clearly written, without condescension or the purple prose that afflicts some scientists freed of the constraints of journal articles. One of the strengths of the book is the way Knoll explains how scientists figure out what they know -- the dating methods, chemical analysis, comparisons with modern life forms, geological mapping, and other techniques used to piece together the stories of ancient life.
I really enjoyed this book -- it left me with a sense of awe about how much scientists have learned about the evolution of life, and how much is still unknown.
The basic genetic software that drives the development of organisms is very old, and is shared in common across the animal kingdom. The software is modular, with components that govern the development of eyes, limbs, and hearts. The genetic program that builds a multi-faceted fly eye is mostly the same as the program that builds a human eye.
Components are re-used to build different body parts -- the module that makes fingers and toes is re-used to make the spots on a butterfly wing. The genes are the same and the component architecture is the same; what differs is the detail of the program itself.
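To make the reuse idea concrete in programming terms (my analogy, not Carroll's), think of the toolkit genes as a shared library: the functions stay the same, and evolution mostly edits the call sites. A minimal sketch, with the module name borrowed from the real Distal-less gene but everything else made up:

```python
# Toy analogy (mine, not Carroll's): a toolkit gene as a shared function,
# invoked from different "call sites" to build different body parts.

def distal_less(tissue):
    """The same conserved module, deployed in a new context each time."""
    return f"outgrowth program switched on in {tissue}"

# The genes are the same; what differs between organisms is where
# and when the shared module gets called.
fly = [distal_less("leg segment"), distal_less("antenna tip")]
butterfly = [distal_less("wing eyespot")]
human = [distal_less("finger bud"), distal_less("toe bud")]

for result in fly + butterfly + human:
    print(result)
```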
This is beautiful science -- general laws holding together a vast number of seemingly unrelated facts. And it's new science. Until recently biologists theorized that eyes and limbs had developed independently in different families of organisms. The basic discoveries were made about 20 years ago, and much of the detail has been added in the last decade.
Analysis of the newly sequenced genomes of fruit flies, frogs, mice, and humans revealed that organisms were more similar than expected -- humans and mice share 97.5% of their genes. Scientists studying developing embryos were able to identify the genes that launch developmental programs, and to discover the similarities between developmental programs in different types of organisms. With a picture of the developmental program in live organisms, paleontologists have been able to look at old forms and build theories of how changes in the program drove changes in animal form.
"Endless Forms" is a nicely-written, layman-friendly survey of evolutionary development -- a new synthesis of genetics, embryology, and paleontology -- by one of the field's pioneers. The primary strength of the book is that it summarizes the last 20 years of scientific discoveries, and provides an overview of the core body-building genetic patterns.
The survey is complementary to two more specialized books that I've read on related subjects in the last few years.
Shapes of Time, written by Kenneth McNamara, a paleontologist in Western Australia, goes into more detail about the algorithms used in the development of embryos and young organisms. And "Shapes" has an ambitious thesis about how the escalation and de-escalation of the developmental timeclock has guided the creation of new species.
The Symbolic Species, by Terrence Deacon, a brain scientist at Harvard Medical School, describes the process by which nerves grow in the developing primate brain.
The primary argument of Deacon's book, about how humans developed symbolic thought, is speculative to the point of fancifulness, and is not well-constrained by his science. The science itself is interesting enough. Deacon compares the wiring of human brains with the brains of chimps and birds. Deacon argues that the distinctive difference between humans and other species with communication skills isn't the size of the brain, or simply the addition of a new, larger forebrain component, but the rerouting of the nerve wiring from the centers of emotion and motor control through the newer regions of logic and planning.
The books by Deacon and McNamara are more uneven in editorial quality, but more intellectually savory. I get a real kick out of the exploration of the algorithm -- the detail of how development happens -- and the logical analysis of evolutionary mechanism -- the hypotheses about how variations in the algorithm of development affect the form and behavior of the organism.
Endless Forms is a really good way to get a big-picture overview of evo devo science, and an entertaining fast-forward through the plot of scientific discovery, without requiring the level of commitment needed to get through one of the graduate-level survey textbooks in the field.
Carroll ends with a pitch for using evolutionary development as part of the basic teaching of evolution in schools. The history and mechanism of the evolution of form is dramatic and intellectually persuasive. It's ironic that evolution is becoming more politically controversial, even as science makes such progress at understanding how evolution happens.
The Birth of the Mind, by Gary Marcus, tells a fascinating story discovered in recent years about how genes drive the development of the brain. The primary puzzle is how 30,000 genes enable the creation of billions of neurons and trillions of connections. The answer is that genes aren't like a blueprint with each gene coding for a component. Instead, genes act like computer programs, with behavior that is switched on and modified based on developmental and contextual cues. Brain wiring isn't complete at birth or in childhood. Learning consists of rewiring the brain across the human lifespan.
Marcus hypothesizes, for further research, that the difference between humans and chimpanzees isn't just brain size; it's differences in the developmental program. The same components are re-used and extended -- the "then" parts of the genetic conditionals (the proteins produced) are the same, but the "if" sections (the conditions and sequence in which the proteins are produced) are different.
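Marcus's if/then metaphor maps directly onto code. Here is a minimal sketch of the idea (my illustration, not Marcus's; the protein name and thresholds are invented): two "genes" that produce the identical product under different regulatory conditions.

```python
# Toy model of the if/then metaphor: the "then" branch (protein produced)
# is conserved, while the "if" branch (regulatory trigger) diverges.

def make_gene(condition, product):
    """A 'gene' fires its product only when its regulatory test passes."""
    return lambda context: product if condition(context) else None

PROTEIN = "growth factor X"  # hypothetical shared product

# Same "then," different "if": the species differ in when and where it fires.
chimp_gene = make_gene(lambda ctx: ctx["signal"] > 0.5, PROTEIN)
human_gene = make_gene(lambda ctx: ctx["signal"] > 0.5 and ctx["stage"] == "fetal", PROTEIN)

context = {"signal": 0.9, "stage": "adult"}
print(chimp_gene(context))  # -> growth factor X
print(human_gene(context))  # -> None: same component, different schedule
```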
One of the interesting areas of discovery is how modules are repurposed. The FOXP2 gene has been discovered to play a role in human language capabilities; it's also used in the heart, lungs, and other areas. Another gene, PAX6, which controls eye-building, is also used for "development of the central nervous system and endocrine glands, and regulates a range of cellular processes, including proliferation, migration, adhesion and signalling" (from this paper by Marcus on FOXP2).
The story Marcus tells is complementary to the science sections of The Symbolic Species: The Coevolution of Language and the Brain, which describes the developmental patterns of neural growth. In human brains, neurons extend and infiltrate from motor, sensory, and emotional centers through to centers of reason and planning, extending vocal calls to speech, foraging instincts to ethnic cuisine, emotions to poetry, weddings, and funerals.
Unlike The Symbolic Species and recent books by Steven Pinker, Gary Marcus's mentor at MIT, this book doesn't stray as far into unprovable speculation about human nature and the origins of consciousness and culture. Which is just as well -- the science is fascinating without the speculation. It is quite a thrill to read about a body of scientific knowledge that is growing so rapidly.
Daniel Dennett's Freedom Evolves is on the speculation side of the continuum. No surprise, since Dennett's a philosopher. The book tries to show how evolution can give rise to free will.
The first part of the book is a defense of an old philosophical perspective called "compatibilism," whereby human free will is compatible with a deterministic universe. Even though natural events are predetermined, humans can choose to avoid determined events. For example, I might be genetically nearsighted, but can choose to wear glasses. The logical failure case is the intersection of multiple agents that each have free will: Gandhi chose nonviolent tactics to oppose British colonial rule, Indians chose to follow Gandhi's nonviolent approach, and the British agreed to concede.
Following the defense of compatibilism, Dennett cites game theory and evolutionary biology to explain how humans may have evolved tendencies to cooperate and to develop ethical norms and social judgement. Dennett's discussion of ethics seems rather impoverished. The examples he gives are mostly about the use of prison as just punishment, rather than the less extreme ethical issues that pervade social life. When discussing "free will," Dennett tries to zero in on the rare circumstance of pure free choice, rather than the common situations where one's choices are influenced by habits (which one has chosen at some earlier date).
Dennett is vehemently opposed to traditional, religiously derived perspectives. The book is studded with barbs against unnamed opponents who are supposedly terrified of the liberating impact of Dennett's evolutionarily derived secular philosophy.
Yet spiritual thinkers from various traditions have more nuanced and insightful discussions about the range of ethical behavior, from parent-child relationships, to social gossip, to business ethics, to more extreme cases of crime and punishment. It is commonplace in Jewish ethical writing, for example, to talk about choosing good companions, teachers and habits to foster good choices.
Like Steven Pinker, Dennett scornfully replaces traditional ethical thought with modern, science-justified speculations that don't, however, seem particularly wise. It is possible that moral philosophy guided by evolutionary science will contribute wisdom about the human condition; but these writers haven't done it.
The developing science of mind, shaped by research in genetics, developmental biology, psychology, computational modeling, and evolutionary analysis is generating fascinating results. So far, the science seems more compelling to me than the philosophical speculation surrounding it.
In an Edge interview, Gary Marcus talks about how his perspective on the "nature/culture" issue differs from his mentor, Steven Pinker.
Pinker allows less room for improving the human condition than I would. I don't think we disagree a whole lot about the nature of the facts, but Pinker tends to put his emphasis on the ways in which the biology constrains us in one direction or another, and he puts less emphasis on ways in which learning can change those things. I would say that the ability to learn is actually one of the things that humans are really good at. One of our unique talents is an incredible facility for learning, an incredible flexibility in learning, that even some of our closest primate cousins don't have. Our miraculous abilities to learn actually open up lots of possibilities, and by not stressing this, Pinker in his latest book paints a somewhat darker picture of human nature than I would.
In which Louis Menand demolishes Steven Pinker's tendentious pseudoscience.
Pinker doesn't care much for art, though. When he does care for something—cognitive science, for example—he is all in favor of training people to do it, even though, as he admits, many of the methods and assumptions of modern science are counter-intuitive. The fact that innate mathematical ability is still in the Stone Age distresses him; he has fewer problems with Stone Age sex drives. He objects to using education "to instill desirable attitudes toward the environment, gender, sexuality, and ethnic diversity"; but he insists that "the obvious cure for the tragic shortcomings of human intuition in a high-tech world is education." He thinks that we should be teaching economics, evolutionary biology, and probability and statistics, even if we have to stop teaching literature and the classics. It's O.K. to rewire people's "natural" sense of a just price or the movement of a subatomic particle, in other words, but it's a waste of time to tinker with their untutored notions of gender difference.
In a conversation about religion and science on Joi Ito's blog, John Jensen recommended Steven Pinker's How the Mind Works for a scientific perspective on the origin of culture and religion. The book fails at that mission, but is interesting in a more limited scope.
The parts of the book backed up by experimental research are fascinating. In readable prose, Pinker summarizes research about how the mind processes visual images, logic, and math. The experimental evidence supports a coherent theory that intelligence is composed of modular components.
The parts of the book about emotions, altruism, and values have much less experimental content. Pinker uses evolution as myth -- canonical stories about hunter-gatherer cultures and primate ethology are used to draw broad lessons about human nature. One of Pinker's "insights" -- humans have evolved to assess the trustworthiness of others, and also to deceive themselves and others. Another: addictions to food and sex derive from biological desires for pleasure. Another: human cultural achievements are driven by desire for status.
Damasio's analysis of emotion and consciousness, based on clinical neurological research, and Terrence Deacon's analysis of the neuroanatomy of the brain are more empirically grounded, and have more compelling insights about the relationships between emotions, language, and consciousness. When Deacon strays off the empirical farm and does evidence-free, evolution-based mythic speculation, he gets shallow too.
Pinker's use of evolution as a myth doesn't lead to more insight than traditional explanations of human complexity (desire leads to suffering in the Buddhist tradition; the "good inclination/evil inclination" framework in the Jewish tradition). These traditional sources don't have evolutionary science as their base, yet they perceive the conflicts in human nature, and can reach wise insights about how to handle the conflicts.
A good part of the argument in "How the Mind Works" is a polemic against the foolish politically correct academic conventional wisdom that humans don't have natural tendencies toward selfishness, deceit, and violence.
But in arguing against "culturist" extremism, Pinker misses the point about culture. Pinker ties himself into knots trying to explain why people would engage in behavior that contradicts a basic evolutionary program -- why a successful scientist would focus on career and marry late, why a family would adopt a child.
He doesn't understand that cultural rewards like prestige and social experiences like nurturing can extend underlying biological programming. It's not enough for Pinker to reverse-engineer the biological roots of behavior; he needs to explain the higher-level behavior in terms of the lower-level behavior. This is like explaining the plot of a video game in terms of assembly language, or even the game's object model.
In summary, Pinker does fine as a scientist, but he hasn't successfully made the transition to moral philosopher. And he certainly hasn't made the case that scientific research has made moral philosophy obsolete.
Peter Merholz writes about a pattern he observed, back when he was at Epinions, about the way people decided which digital camera to buy.
We assumed that, given the task of finding an appropriate digital camera, people would whittle down the attributes such as price, megapixel count, and brand, and arrive at the few options best suited to them. If they had questions along the way, they could read helpful guides that would define terms, suggest comparison strategies, etc.
Again and again in our observations, that didn't happen. People who knew little about digital cameras made no attempt to bone up. Instead they'd barrel through the taxonomy, usually beginning with a familiar brand, and get to a product page as quick as possible. It was only then, when looking at a specific item and seeing what its basic specifications were, that they paused, sat back, and thought, "Hmmm. This has 2 megapixels. I wonder how many I want?" Some would look for glossaries or guides, others would read reviews, and some just guessed by comparing the various products.
They would go through this cycle -- looking at a product, reflecting on their needs, understanding concepts, looking at another product, reflecting again, etc. -- a few times. Todd and I came up with a conceptual model, where the user is something like a bouncing ball, falling straight onto a product, then bouncing up, getting a lay of the land, falling onto another product, bouncing up again, but not as high since they're starting to figure it out, falling onto another product, and repeating until they've found the right one.
Makes total sense. People need concrete examples in order to start to understand the conceptual space of digital camera features.
More good decision-making references in the comments to peterme's post.
Zack argues that humans will soon transcend their limitations through the invention and application of "emoticeuticals."
"Pain and negative feelings often are more intense and longer lasting than they need to be, especially for people living in today’s modern world."
It is indisputable that Prozac, alcohol, marijuana, and many other drugs affect brain chemistry. The human brain is a chemical machine. But the system is much more complicated than "happy-on" and "happy-off."
In addition to the absence of pain signals, human happiness depends on things like a sense of meaning, purpose, and connectedness. There is no simple way to achieve these states; individuals spend lifetimes achieving them imperfectly and occasionally. Philosophical and spiritual traditions wrestle with these issues.
Science has discovered drugs that work on various centers of the brain, but we are very far from being able to understand, let alone manage, the subtleties of human behavior. What drugs could make a family, or a neighborhood, or the participants in a business work well together? What drug would motivate a person to stop watching television and start gardening? These are complex learned behaviors requiring effort and cultural reinforcement.
I don't see why Zack is so enthusiastic about this technology. Happiness isn't trivially easy for conscious beings who perceive suffering and imperfection in the world, and there is a good argument that it shouldn't be. The world isn't improved if someone can have a fight with a friend or spouse and then take a happy pill to make it better. The world isn't improved if nations fight wars, and their citizens take happy pills to make the pain go away.
Mr. Barabasi believes the human social network is scale-free with the expected smattering of richly connected hubs. Mr. Watts disagrees. "If you asked people to list the number of people they recognize, that could be scale-free, everyone recognizes Michael Jordan," he said. "But if you said, `Who would you trust to look after your kids?' That's not scale-free. As you start to ratchet up the requirements for what it means to know someone, connections diminish."
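For readers who want the mechanics behind Barabasi's claim: a scale-free network is one whose degree distribution follows a power law, and the standard way to grow one is preferential attachment -- new nodes link to existing nodes in proportion to the links they already have. A minimal sketch of my own (the parameters are arbitrary):

```python
# Minimal sketch of preferential attachment (Barabasi-Albert style):
# each new node links to m existing nodes, chosen in proportion to degree.
import random
from collections import Counter

def grow_network(n, m=2, seed=42):
    random.seed(seed)
    stubs = list(range(m))  # each appearance of a node = one unit of degree
    edges = []
    for node in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(stubs))  # degree-weighted choice
        for t in targets:
            edges.append((node, t))
            stubs += [node, t]
    return edges

degree = Counter(v for edge in grow_network(1000) for v in edge)
print(degree.most_common(5))  # a handful of richly connected hubs dominate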
Piglets in litters or cloned pigs behave with individual personality traits -- shy or outgoing, curious or passive -- just like litters of normal pigs, according to research at Texas A&M reported by Salon magazine. The researcher theorizes that differences in gene expression in the womb might account for the distinction.
via Google News
A pair of biologists propose that tiny honeycombs within minerals may have served as the first cells, incubating the first self-replicating life forms.
This proposal contrasts with theories that life started with the emergence of self-replicating chemicals, and cellular boundaries evolved later.
Thanks to Jerry for the link, and to Swamy for some education on the subject.
It is pretty amazing how often American and European books on the history of science and technology contain obvious errors of fact when they discuss the "discovery" and "invention" of various ideas and techniques.
The strong part of Searle's article is the argument that "syntax is not semantics" -- a computer that can calculate chess moves based on pre-defined algorithms does not actually understand chess. Searle argues successfully that Deep Blue is unintelligent in the same way that a pocket calculator is unintelligent; it is simply manipulating symbols, just as a human who speaks Chinese phrases using a transliteration is manipulating symbols but does not understand Chinese.
Searle is right that Deep Blue is very far from being conscious. The fact that a computer can beat a human at chess means about as much as the fact that an automobile can move faster than a runner. Humans designed the automobile; and human programmers chose the heuristics that drive Deep Blue's decisions.
Searle is less successful with the argument that a computer cannot have intelligence, since a computer contains a mere model of intelligent processes, and models are different from the physical things that they represent.

Searle acknowledges that human intelligence is an emergent property of neurons firing in the brain. This means, though, that intelligence is based on circuitry, a pattern of information. Similarly, scientists are gradually deciphering the informational patterns of genes and gene expression. The lines between information and reality are not so clear-cut; it may be possible to develop living, even intelligent patterns in some other medium.
Human intelligence probably has subtle dependencies on the biochemical nature of the brain and the organism. Tom Ray makes this point beautifully. But it does not follow that the only possible kind of intelligence requires a body; it certainly does not follow that the only kind of intelligence requires this sort of body.
It may be theoretically possible for intelligence to develop in some other medium. But despite Kurzweil's optimism, there is little evidence that we have any idea how to do this. Searle is right that just because we can program computers to play chess does not mean we are anywhere near creating computers with conscious minds.
The Origin of Animal Body Plans by Wallace Arthur
Unfortunately, the Amazon API does not seem to cover reviews; I would love to be able to cross-post reviews to Amazon and the weblog!
This book looks nontrivial but fascinating. Into the to-read queue it goes. So many books, so little time.
Acquiring Genomes: A Theory of the Origins of Species contains some fascinating biology surrounded by a muddled argument in a poorly organized book.
Authors Lynn Margulis and Dorion Sagan are advocates of a theory that compound cell structures evolved by means of symbiogenesis -- symbiosis which becomes permanent.
The best-known example of symbiosis is lichen, in which a fungus lives together with an alga or cyanobacterium; the organisms propagate together in a joint life cycle. The book includes many other wonderful examples of symbiosis. A species of green slug never eats; it holds photosynthetic algae in its tissues, and crawls along the shore in search of sunshine. A species of glow-in-the-dark squid has an organ which houses light-emitting bacteria. Some weevils contain bacteria that help them metabolize; others have bacteria that help them reproduce. Cows digest grass using microbial symbionts in the rumen; humans take B vitamins from gut-dwelling bacteria.
The associations take numerous forms: a trade of motility for photosynthesis, nutrition for protection, one creature's waste becoming another's food. The authors argue that the possession of a different set of symbionts can lead to reproductive isolation and speciation. Much more than that, the authors argue that all species are the result of symbiosis that became permanent and inextricable. The bacteria that fix nitrogen for pea plants are no longer able to live independently. The biochemistry of the symbionts becomes intertwined; the symbionts together produce hemoglobin molecules to move oxygen away from the bacteria -- the heme is manufactured by the bacteria, while the globin is produced by the plant.
The final symbiotic step is a fused organism. The authors contend that algae and plants developed photosynthesis by ingesting photosynthetic bacteria and failing to digest them. Based on research by several scientists, the authors believe that a cell structure called the karyomastigont, comprising the nucleus and its connector to a "tail" that enables the cell to move, was once a free-living spirochete which became enmeshed in another bacterium that was good at metabolizing the prevalent resource (sulfur?), but could not move well by itself. The bacteria merged their genomes, as bacteria are wont to do, and henceforth reproduced together.
At least at the cellular level, the symbiogenesis argument is fascinating and plausible as an account of the origins of the first species. Species are conventionally defined as creatures that can interbreed. But bacteria of various sorts, whose cells lack nuclei, can and do regularly exchange genetic material; their types change fluidly. Therefore, bacteria don't have species. According to the theory of symbiogenesis, eukaryotes, organisms whose cells have nuclei, were formed by the symbiosis of formerly independent bacteria. Eukaryotes, including fungi, protoctists, plants, and animals, are all composite creatures. Margulis and Sagan propose a new definition of species: creatures that have the same sets of symbiotic genes.
According to Margulis and Sagan, therefore, the graph of evolution is not a tree with ever-diverging branches; it is a network with branches that often merge.
The symbiogenesis theory is a logical proposed solution to the puzzle of how nature can evolve living systems with multiple components. If you look at software as another kind of information-based system, it seems only reasonable that composition would turn out to be an effective means of creating larger, more complex units. None of the artificial life experiments that I know of have achieved this so far (although the Margulis/Sagan theory suggests a way to test it, by creating artificial metabolisms that can evolve codependency).
While Margulis and Sagan make a good case that symbiogenesis is a plausible mechanism for evolution, they fail to persuade that it is the primary mechanism for all of evolution.
The authors contrast evolution by symbiogenesis with a "neodarwinist" view that evolution proceeds in gradual steps by means of random mutation. They observe that in ordinary life, mutations are almost always bad, and therefore cannot be a source for evolutionary change.
But this argument against change by means of gradual mutation is a straw man compared to contemporary theory. First, mutation may not be the prime source of fruitful genetic variation. The math behind genetic algorithms shows that where sexual reproduction or other genetic recombination is used, recombination generates more variation, and often more fruitful variation, than random mutation. The same may be true in nature: reproductive recombination may be a source of natural variation more important than mutation.
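For the curious, here is what the two variation operators look like in a toy genetic algorithm -- a minimal sketch of my own, not from the book: crossover recombines whole blocks of two working genomes, while mutation perturbs single loci at random.

```python
# Toy genetic-algorithm operators: recombination vs. mutation.
import random

random.seed(1)
GENOME_LEN = 16

def crossover(a, b):
    """One-point crossover: splice together blocks of two existing genomes."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome, rate=1 / GENOME_LEN):
    """Point mutation: flip individual bits at random."""
    return [bit ^ (random.random() < rate) for bit in genome]

mom = [random.randint(0, 1) for _ in range(GENOME_LEN)]
dad = [random.randint(0, 1) for _ in range(GENOME_LEN)]

# Crossover reshuffles proven building blocks; mutation tweaks single loci,
# which in an already-tuned genome is usually harmful.
print("crossover:", crossover(mom, dad))
print("mutation: ", mutate(mom))
```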
Second, evolutionary biologists including Stephen Jay Gould have moved away from the notion of slow, gradual change, toward a theory of "punctuated equilibrium" positing faster change driven by times of stress. The theory of stress-driven change also helps counter the argument about the uniformly deleterious effect of mutation. In a stable environment, most changes to the status quo are going to be bad. In a sulfurous atmosphere, bugs that breathe sulfur and are poisoned by oxygen live well; a sport that preferred oxygen to sulfur would soon die. But if the atmospheric balance changed to include more oxygen, an oxygen-breathing mutant would be at an advantage.
Margulis and Sagan bring up the old canard that gradual change can't create a complex structure such as a wing. However, Shapes of Time, a book about the role of the development process in evolution, explains elegantly how substantial changes in form can be produced by small modifications in the algorithms coding an organism's development. Margulis and Sagan don't have any explanation for how symbiogenesis could possibly explain the evolution of four-legged creatures from fish, or humans from chimps; developmental theorists have plausible explanations for these transformations.
The symbiogenesis argument seems strongest in dealing with single-celled organisms, where the fusion of genomes is not hard to imagine, and harder to sustain in dealing with more complex life forms. The most dramatic argument from the symbiogenesis camp is that the larval stage found in many species is actually an example of symbiogenesis: at some point, frogs, sea urchins, and butterflies acquired the genomes of larva-like animals. It would take a lot more explanation to make the case for this. If different creatures acquired a larval form by means of symbiosis, why would the larval form always come at the beginning of the life cycle -- why doesn't a butterfly molt and become a caterpillar? If the animal contains two separate genomes, what developmental process would govern the switchover from the first genome to the second? I will certainly look for other evidence and arguments to prove or disprove this one. Readers who are familiar with this topic, please let me know if this argument has been discredited or if any more evidence has been generated to support it.
The summary of the book's argument here is more linear and direct than the book itself. Chapters 9 through 12 focus on the area of the authors' scientific expertise -- examples of bacteria, protoctists, and fungi in symbiotic relationships, and proposed mechanisms for the role of symbiosis in evolution. These chapters are the strongest and most interesting in the book. The rest of the book contains vehement yet fitful arguments about various tangentially related topics.
The authors have some seemingly legitimate complaints about the structure of biological research. The authors believe that symbionts are a primary biological unit of study; yet scientists who study plants and animals are organizationally distant from those who study fungi and bacteria, making it difficult to study symbiosis. Moreover, the study of small, slimy, obscure creatures generates less prestige and money than the study of animals, plants, and microbes that relate directly to people, slowing progress in the field of symbiosis and rendering it less attractive to students.
The book has a section on the Gaia hypothesis -- the argument that the earth itself is a living being. The connection to the book's main thesis is not made clearly, and the section is rather incoherent. The authors have written a whole book on the subject, which may be worth reading; or there may be some other treatment worth reading (recommendations welcome, as usual).
The book includes a section attacking the commonplace metaphors of evolutionary biology, such as competition, cooperation, and selfish genes. But the authors don't seem to use metaphors any less than the people they attack -- they have a particular fondness for metaphors of corporate mergers and acquisitions, and of human intimate relationships. The use of metaphor in science has its advantages and limitations; but this book doesn't add anything intelligent to that discussion.
In general, the authors are aggressively dismissive of other approaches to evolutionary biology. In a typically combative moment, the authors argue that "the language of evolutionary change is neither mathematics nor computer-generated morphology. Certainly it is not statistics." The authors clearly have a hammer in hand, and see a world full of nails. In possession of a strong and original idea, the authors lack the perspective to see their own idea as part of a larger synthesis incorporating other ideas.
In summary, I enjoyed the book because of the strange and wonderful stories of symbiosis and the description of the symbiogenesis theory. But the book as a whole is not coherent or well-argued. Read it only if you're interested in the topic strongly enough to get through a muddled book. And don't buy retail.
For other entries on complex systems, browse at your leisure. This weblog will gain a subject index shortly.
There's a current of creativity flowing in communication and collaboration software, where people are blending aspects of weblogs and wikis, email and aggregation.
In the last few days, I came across a couple of examples of people discussing and experimenting with such things.
Anil Dash recently posted an essay on the "Microcontent Client." The concept is a desktop tool that will organize all of the information fragments in one's web experience; something that takes all of one's RSS feeds and google searches and bookmarks and weblog entries, categorizes them, and weaves them into an organized pattern.
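The aggregation half of the idea is easy to sketch. A minimal, hypothetical version in Python -- the feed URLs and keyword categories below are placeholders, though the feedparser library is real:

```python
# Minimal sketch of the aggregation half of the microcontent idea:
# pull several RSS feeds and bucket entries by keyword.
import feedparser  # real library; the feeds and categories below are made up

FEEDS = ["http://example.com/index.rdf", "http://example.org/rss.xml"]
CATEGORIES = {"weblog tools": ["weblog", "wiki", "rss"],
              "science": ["evolution", "biology"]}

def categorize(feeds, categories):
    buckets = {name: [] for name in categories}
    for url in feeds:
        for entry in feedparser.parse(url).entries:
            title = entry.get("title", "").lower()
            for name, keywords in categories.items():
                if any(word in title for word in keywords):
                    buckets[name].append(entry.get("link"))
    return buckets

for name, links in categorize(FEEDS, CATEGORIES).items():
    print(name, len(links))
```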
One of the ideas I like is having authoring and search built into one's basic desktop toolset -- personal HTML authoring tools seem pretty underdeveloped these days. (A friend just recommended TopStyle Pro and Dreamweaver MX.)
I'm ambivalent about the notion of a managed "personal information space" with lots of aggregation feeds, nicely organized bookmarks, etc. The world is a big sea of information with a few islands of things that one pays close enough attention to organize; what feels missing is not the organizing tools but the time and attention to organize more things!
Overall, the design philosophy of the Microcontent Client feels a bit too "robot web" for me. Anil writes that "the passive authoring of the microcontent client creates content that even the 'author' doesn't yet know they want to read," and "users running the client will find unused processor cycles being tapped to discover relationships and intersections between ideas."
I suppose what he means is a sort of personalized Google News or personalized Pilgrim context links; but the idea of AI discovering insights while you sleep sounds sci-fi and somewhat creepy. (For the Pilgrim links, see Further Reading on Today's Posts, below the blog entries.)
Anil alludes to the reinvention of Usenet in the weblog context; but he doesn't talk enough about the nouns and verbs of Usenet -- people and conversation. And therefore, I think, he misses key areas of functionality to support people having conversations and remembering what was said.
In another experiment along these lines, Bill Seitz is working on a weblog that is based on a wiki platform and is integrated with the wiki collaboration space.
I like and understand this concept better: integrating the chronologically organized thoughts of weblogs with the linked, topic-organized thoughts of wikis.
One of the things that I like here is the complementarity between the weblog material that is "published", however informally, and the wiki matrix, which is a soup of thoughts in varying levels of completeness.
The form seems well-designed to facilitate "gardening," where contextual elements are organized to support some blog topic. Google auto-links would be a nice addition. Perhaps this is what Anil meant, too; but the emphasis here is on the person, helped perhaps by the machine.
One thing that still seems unfinished in Bill's implementation (which is brand new!) is integrating the more structured, graphical publishing of the weblog with the unstructured whiteboard of the wiki.
One benefit of weblogs is that they are conceptualized as a publishing tool, and therefore have functions for graphic presentation and structured navigation which help readers find their way around. The navigation design of a weblog is so basic that you barely notice it is there; yet there is a set of structured conventions: the ubiquitous date-formatted posting, and also typically a title, author bios, comments, archives, and links.
Wikis have a text-editor sort of glorious simplicity, which may be wonderful for the author, who has the navigational structure in her head, but is somewhat hard on the reader, who is swimming without lane markers in a pool of links. Bill has added navigational breadcrumbs, and coloring for entry dates, but that's still not enough navigational structure; I still feel rather dizzy.
Good food for thought, more toys to play with.
A few weeks back, I wrote about programs that model the development of plants. If you change the parameters of the development algorithm, you generate shapes that resemble different types of plants.
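For the curious, the best-known such programs are L-systems: you rewrite a string over and over using simple rules, then read the result as turtle-graphics drawing commands. A minimal sketch of my own, using a classic fern-like rule set; change RULES or ANGLE and a different "species" emerges:

```python
# Minimal L-system sketch: F = draw forward, +/- = turn by ANGLE,
# [ ] = push/pop a branch. Changing the parameters changes the plant.

RULES = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}  # a classic fern-like rule set
ANGLE = 25  # degrees; a key tunable parameter of the "development algorithm"

def grow(axiom, rules, generations):
    """Apply the rewrite rules to every character, generation by generation."""
    for _ in range(generations):
        axiom = "".join(rules.get(ch, ch) for ch in axiom)
    return axiom

print(grow("X", RULES, 3)[:60] + "...")  # feed this string to a turtle renderer
```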
Following that thread, I recently read Shapes of Time: The Evolution of Growth and Development. This is a fascinating book that looks at the mechanisms of development in animals, and how those mechanisms affect evolution.
Like the plant models on screen, developing embryos in real life follow a program, where small changes in key parameters generate major changes in shape. There's not one program, but several; during the first phase of growth, parameters are controlled by the egg, later on by the chemical environment in the embryo; still later, by hormones, and by the ratio of cell growth to cell death. In all of these stages, changes in the quantity and timing of key parameters create changes in development.
Changes in the developmental program enable organisms to adapt to new niches. In western Australia, along the sloping bed of the ocean shelf, there can be found fossil brachiopods that become progressively younger-looking as the gradient ascends. The pedicle (sucker-foot) is larger relative to the rest of the body in younger creatures; a slower growth rate would result in adults who were better able to stick to the rocks in wave-wracked shallow waters.
The application of this theory to the evolution of humans is quite fascinating, but this post is quite long enough; read the book if you're interested; or ask me and I'll summarize :-)
There were two main things about the book that were interesting to me.
Ernst Haeckel, the 19th-century biologist who coined the term "ecology," theorized that "ontogeny recapitulates phylogeny." According to this theory, development retraces the steps of evolution; embryos of mammals pass through developmental stages that resemble worms, then fishes, then reptiles, then the ultimate mammalian stage. The theory was influenced by an ideology that saw evolution as progression to ever-greater levels of complexity, with humans, of course, at the top of the chain. This theory reigned as scientific orthodoxy until the 1930s.
The problem with the theory is that there is plenty of evidence that contradicts it. In the '30s, biologists Walter Garstang and Gavin de Beer advocated the opposite theory, pedomorphosis, which proposes that descendant species come to resemble the juveniles of their ancestors. There is plenty of evidence showing this pattern, too. For example, some species of adult ammonites have shapes that are similar to the juveniles of their ancestor species. According to this theory, human evolution is the story of Peter Pan; we are chimps who never grow up.
Following Stephen Jay Gould, McNamara thinks both sides are right; and he supports Gould's thesis with troves of evidence from many species across the evolutionary tree. Organisms can develop "more" than their ancestors, by growing for a longer period of time, starting growth phases earlier, or growing faster. Or organisms can appear to develop "less" than their ancestors, by growing for a shorter period of time, starting growth phases later, or growing more slowly.
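In programming terms (my gloss, not McNamara's), you can think of a trait's adult form as the output of three tunable knobs -- when growth starts, when it stops, and how fast it runs -- and both "more" and "less" development fall out of the same program:

```python
# Toy heterochrony model (my gloss, not McNamara's): adult trait size
# as a function of growth onset, offset, and rate.

def adult_trait(onset, offset, rate):
    """Trait grows at a constant rate between onset and offset."""
    return max(0.0, (offset - onset) * rate)

ancestor = adult_trait(onset=1.0, offset=10.0, rate=1.0)      # 9.0

# "More" development: start earlier, stop later, or grow faster.
peramorphic = adult_trait(onset=0.5, offset=12.0, rate=1.2)   # 13.8

# "Less" development: start later, stop earlier, or grow more slowly.
paedomorphic = adult_trait(onset=2.0, offset=8.0, rate=0.8)   # 4.8

print(ancestor, peramorphic, paedomorphic)
```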
McNamara romps through the animal kingdom, from trilobites to ostriches to humans, giving examples showing that a given species has some attributes that represent extended development and others that represent retarded development, compared to its ancestors. Not having been socialized as a biologist, this reader finds that the debate has no charge. It makes perfect sense that the development program has parameters that can be tuned both up and down!
McNamara's academic specialty is fossil sea urchins, and his day job is at a museum of paleontology in Australia. I suspect that the pedagogical impulse of the museum job shows in the book. He's not a popularizer on the Stephen Jay Gould scale, but the book is decently written (though it could be better edited), and provides enough context that a non-specialist can read it quite enjoyably.
I liked it a lot, and plan to follow up with more on related topics.
Weblog Bookwatch and AllConsuming.Net both scour the Recently Changed list at weblogs.com and pick up links to books.
Weblog Bookwatch has lists of recently reviewed books, shows you which weblogs have reviewed the books, and which other books those blogs reviewed. Bookwatch collects links to Amazon, Powells, and Barnes and Noble.
AllConsuming.Net does a similar search, but prioritizes its lists based on recently mentioned books, so it's more of a zeitgeist-tracking tool for those who want to keep up with blog fashion. It also lets you publish a pretty list of books to read, with pictures.
For hours of surfing delight. I've been addicted for years to Amazon surfing; start with a subject, look for related books based on recommendations, reviews and lists; and build a list of books to read. Then again, I've been addicted for many more years to libraries and bookstores; the vice hasn't changed, only the medium.
Physicist Steven Weinberg writes about Stephen Wolfram's A New Kind of Science in the New York Review of Books.
Mostly he writes about why particle physics is better than other kinds of science: "although these free-floating theories are interesting and important, they are not truly fundamental, because they may or may not apply to a given system; to justify applying one of these theories in a given context you have to be able to deduce the axioms of the theory in that context from the really fundamental laws of nature."
Weinberg disclaims this opinion, but he repeats it often enough that it's clear which side of the flamewar he's on. Weinberg thinks science should offer one fundamental theory of the world. He is not interested in the idea that there might be different levels of organization in the universe, so that the algorithm that models plant growth, say, is different from the algorithm that models competition among species in an ecosystem.
In fact Weinberg doesn't seem convinced by the idea of modeling. "Take snowflakes. Wolfram has found cellular automata in which each step corresponds to the gain or loss of water molecules on the circumference of a growing snowflake. After adding a few hundred molecules some of these automata produce patterns that do look like real snowflakes. The trouble is that real snowflakes don't contain a few hundred water molecules, but more than a thousand billion billion molecules. If Wolfram knows what pattern his cellular automaton would produce if it ran long enough to add that many water molecules, he does not say so."
But the whole trouble with complex systems is that they are programs you need to run fully, with identical initial conditions, to get the exact result. If a model can be used regularly to make predictions about a real-world system -- even if the model doesn't duplicate the system -- it seems to me that the model is worth something.
The most interesting thing Weinberg says that Wolfram should do, but doesn't, is to offer a definition and measure of complexity. A very clever, erudite, and witty person named Cosma Shalizi claims to have done this in his doctoral dissertation, which I have not read yet, and which my undergrad-level math may not be sufficient to understand.
A few days ago, I wrote about Tom Ray's neat dispatch of Ray Kurzweil's contention that computers will soon be smarter than we are. To give Mr. Kurzweil his due, here's a link to a lovely essay critiquing Stephen Wolfram's A New Kind of Science.
Wolfram's book became a controversial best-seller based on the author's claim that computational methods enable a revolutionary approach to science. Many people have criticized the book because Wolfram is an egomaniac who claims to be smarter than everyone else on the planet; because he doesn't go through the traditional scientific peer review process; and because the sprawling, self-published 1192-page tome really could have used an editor.
Kurzweil ignores the gossip and the copy-editing, and deals with the ideas. Kurzweil's essay analyzes two of Wolfram's revolutionary claims: that computational approaches based on cellular automata can explain life and intelligence, and that they define physics.
A quick definition: cellular automata are a type of logical system composed of simple objects whose state is determined by following simple rules about the state of fellow objects -- like junior high school girls who will wear tomorrow what the popular girls wore today. The results of many cellular automata are quite boring. Either they fall into a steady state where nothing changes (class 1), or a simple pattern repeats tediously (class 2), or they twitch forever without any detectable pattern (class 3). But some cellular automata (class 4) are much more interesting. A class 4 automaton generates a complicated pattern that, in Kurzweil's words, "is neither regular nor completely random. It appears to have some order, but is never predictable." A class 4 automaton can be used to convey information, and hence can be used as a "universal computer."
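To make the definition concrete, here is a minimal sketch of the one-dimensional "elementary" automata Wolfram studies. Try rule 250 for a tediously repeating class 2 pattern, rule 30 for class 3 chaos, and rule 110 for the class 4 structures that have been shown to support universal computation:

```python
def step(cells, rule):
    """One tick of an elementary CA: each cell's next state is the bit of
    `rule` indexed by its (left, self, right) neighborhood, wrapping at edges."""
    n = len(cells)
    return [(rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=63, ticks=24):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single "on" cell
    for _ in range(ticks):
        print("".join(".#"[c] for c in cells))
        cells = step(cells, rule)

run(110)  # swap in 250 (class 2) or 30 (class 3) to see the other behaviors
```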
Do cellular automata explain life?
Wolfram argues that because cellular automata can generate behavior of arbitrary complexity, they therefore explain living systems and intelligence. Kurzweil neatly explains that just because cellular automata can generate complex patterns doesn't mean that life and intelligence will automatically follow.
In Kurzweil's words, "One could run these automata for trillions or even trillions of trillions of iterations, and the image would remain at the same limited level of complexity. They do not evolve into, say, insects, or humans, or Chopin preludes, or anything else that we might consider of a higher order of complexity than the streaks and intermingling triangles that we see in these images."
As discussed in this essay on artificial life, the software for life is based on a layered architecture with many components: evolution, growth, metabolism, ecosystems. Just because we can program computers -- using CAs or any other method -- doesn't mean that we know how to build every kind of software in the universe.
Do cellular automata explain physics?
Wolfram claims that cellular automata provide a better model for physics than traditional equations, and more than that, the universe itself is one big cellular automaton.
Kurzweil puts Wolfram's claims about physics into context, as part of a school of thought whose advocates, including Norbert Wiener and Ed Fredkin, argue that the universe is fundamentally composed of information. Particles and waves, matter and energy, are manifestations of patterns of information.
The way to demonstrate this hypothesis is to use cellular automata to emulate the laws of physics, and see whether this generates equivalent or better results than the existing sets of equations. The mapping is apparently easy for Newtonian physics; workable but not particularly elegant for Einstein's special relativity; and potentially an elegant, even superior way to represent quantum physics, because CAs generate patterns that are recognizably regular but whose details are impossible to predict.
In summary, Kurzweil thinks that Wolfram's thesis regarding physics is plausible but unproven -- and Wolfram hasn't proved it.
One thing I don't understand about this hypothesis is why it proves that the universe IS a computer. If you prove that computation is a better model for physical phenomena, how have you proven that the model is reality itself? An equation can predict where a ball will land, based on the speed and direction of its flight, but the ball itself isn't an equation. Some day, I'll take a look at Kurzweil's book The Age of Intelligent Machines, which covers this topic, and see what I think.
With Kurzweil's synopsis as a guide, I'll take a stab at reading Wolfram's tome. Not because I think it will contain the answer to every question, but because I expect an interesting exploration of cellular automata, and an interesting take on the information hypothesis of physics.
Ray's dispatch makes two main points. The first is plain logic. Kurzweil observes that, following Moore's law, computers will have more processing power than the human brain within a couple of decades. Ray points out that the power of software is not improving at anywhere near the same rate. There's plenty of evidence that complicated software is outstripping our ability to design and maintain it effectively.
The second point is more subtle. Kurzweil argues that it will be possible to implement human intelligence in silicon, simply by reverse engineering the brain and mapping its neural connections into software. Ray notes that there are many aspects of human intelligence that depend on subtle properties of chemistry, for example, the delicate balance of hormones that influences temperament and mood, shaping our decisions, communication, and art.
It may be possible to create AI. Ray, who created the "Tierra" artificial life ecosystem, believes that the most promising method is to create digital a-life systems and let them evolve on their own. If such intelligence evolved, it would be different from human intelligence, depending on the very different properties of its technology and environment.
At any rate, the mechanisms to create artificial intelligence aren't obvious, and there isn't any reason to believe that it will happen any time soon.
I recently read Steven Levy's book on Artificial Life. I enjoyed the
book very much, since the a-life theme weaves together many of the
threads of research into complex adaptive systems, and is a useful way
of thinking about the relationship between the various topics. Levy also
tells a human story of the scientific pursuit of artificial life, the
tale of a motley crew of eccentric scientists, pursuing their work at
the margins of the scientific mainstream, who join together to create a
rich new area for exploration.
The book was written in 1992; ten years later, the results of the
pursuit of a-life have been decidedly mixed. Despite substantial
scientific progress, the more ambitious ideas of artificial life seem to
have retreated to the domain of philosophy. And as a scientific field,
the study of artificial life seems to have returned to the margins. The
topic is fascinating, and the progress seems real -- why the retreat?
One way to look at progress and stasis in the field is to consider how
scientists filled in the gaps of von Neumann's original thesis. The
brilliant pioneer of computer science, in Levy's words, "realized that
biology offered the most powerful information processing system available
by far and that its emulation would be the key to powerful artificial
systems." Considering reproduction the diagnostic aspect of life, von
Neumann proposed a thought experiment describing a self-reproducing
automaton. The automaton was a mechanical creature which floated in a
pond that
happened to be chock full of parts like the parts from which the
creature was composed. The creature had a sensing apparatus to detect
the parts, and a robot arm to select, cut, and combine parts. The
creature read binary instructions from a mechanical tape, duplicated the
instructions, and fed the instructions to the robot arm, which assembled
new copies of the creature from the parts floating in the pond.
The imaginary system implemented two key aspects of biological life:
* a genotype encoding the design for the creature, with the ability to
replicate its own instructions (like DNA)
* a phenotype implementing the design, with the ability to replicate new
creatures (like biological reproduction)
The thought experiment is even cleverer than it seems -- von Neumann
described the model in the 1940s, several years before the discovery of
the structure of DNA.
In the years since von Neumann's thought experiment, scientists have
conceived numerous simulations that implement aspects of living systems
that were not included in the original model:
* Incremental growth. The von Neumann creature assembled copies of
itself, using macroscopic cutting and fusing actions, guided by a
complex mechanical plan. Later scientists developed construction models
that work more like the way nature builds things; by growth rather than
assembly. Algorithms called L-systems, after their inventor, biologist
Aristid Lindenmayer, create elaborate patterns by the repeated
application of very simple rules. With modification of their parameters,
these L-systems generate patterns that look remarkably like numerous
species of plants and seashells. (There is a series of wonderful-looking
books describing applications of the algorithms).
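As a taste of how simple these algorithms are, here is a toy L-system
expander in Python. The rewrite rules are a standard bracketed "plant"
example -- my choice of axiom and rules, for illustration only. Feeding
the output string to a turtle-graphics interpreter, reading F as "draw
forward", + and - as turns, and [ ] as branch push/pop, yields a
fern-like shape.

    # Expand an L-system: rewrite every symbol by its rule, repeatedly.
    def expand(axiom, rules, generations):
        for _ in range(generations):
            axiom = "".join(rules.get(symbol, symbol) for symbol in axiom)
        return axiom

    rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
    print(expand("X", rules, 3))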
* Evolution. Von Neumann's creature knows how to find parts and put
together more creatures, but it has no ability to produce creatures that
are different from itself. If the pond gradually dried up, the system
would come to a halt; it would not evolve new creatures that could walk
instead of paddle. John Holland, the pioneering scientist based at the
University of Michigan, invented a family of algorithms that simulate
evolution. Instead of copying the plan for a new creature one for one,
the genetic algorithm simulates the effect of sexual reproduction by
occasionally mutating a creature's instruction set and regularly
swapping parts of the instruction sets of two creatures. One useful
insight from the execution of genetic algorithm simulations is that
recombination proves to be a more powerful technique for generating
useful adaptation than mutation.
* Predators and natural selection. In von Neumann's world, creatures
will keep assembling other creatures until the pond runs out of parts.
Genetic algorithms introduce selection pressure; creatures that meet
some sort of externally imposed criterion get to live longer and have
more occasions to reproduce. Computer scientist Danny Hillis used
genetic algorithms to evolve computer programs that solved searching
problems. When Hillis introduced predators in the form of test programs
that weeded out weak algorithms, the selection process generated
markedly better solutions.
Genetic algorithms have proven to be highly useful for solving technical
problems. They are used to solve optimization problems and model
evolutionary behavior in fields of economics, finance, operations,
ecology, and other areas. Genetic algorithms have been used to
synthesize computer programs that solve some computing problems as well
as humans can.
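A genetic algorithm fits in a page of Python. The sketch below evolves
bit strings toward all-ones (the toy "one-max" problem; the population
size, mutation rate, and fitness function are my own illustration
values, not anything from Holland or Levy):

    import random

    GENES, POP, GENERATIONS = 32, 40, 60

    def fitness(genome):
        return sum(genome)              # count the 1 bits

    def crossover(a, b):
        cut = random.randrange(1, GENES)
        return a[:cut] + b[cut:]        # swap chunks of two instruction sets

    def mutate(genome, rate=0.01):
        return [g ^ (random.random() < rate) for g in genome]

    population = [[random.randint(0, 1) for _ in range(GENES)]
                  for _ in range(POP)]
    for generation in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP // 2]      # selection pressure
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP - len(survivors))]
        population = survivors + children
    print("best fitness:", max(map(fitness, population)), "of", GENES)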
* Increasingly complex structure. Evolution in nature has generated
increasingly complex organisms. Genetic algorithms simulate part of the
process of increasing complexity. Because the recombination process
generates new instruction sets by swapping of large chunks of old
instruction sets, the force of selection necessarily operates on modules
of instructions, rather than individual instructions (see Holland's
book, Hidden Order, for a good explanation of how this works).
* Self-guided motion. Von Neumann's creatures were able to paddle about
and find components; how this happens is left up to the imagination of
the reader -- it's a thought experiment, after all. Rodney Brooks' robot
group at the MIT AI lab has created simple robots, modeled after the
behavior of insects, which avoid obstacles and find things. Instead of
using the top-heavy techniques of early AI, in which the robot needed to
build a conceptual model of the appearance of the world before it could
move, the Brooks group robots obey simple rules like moving forward, and
turning when they meet an obstacle.
* Complex behavior. Living systems are complex, a mathematical term of
art for systems that are composed of simple parts whose behavior as a
group defies simple explanation (concise definition lifted from Gary
Flake). Von Neumann pioneered the development of cellular automata, a
class of computing systems that can generate complex behavior. John
Conway's Game of Life implemented a cellular automaton that proved to be
able to generate self-replicating behavior (apparently after the Levy
book was published), and, in fact, was able to act as a general-purpose
computer (Flake's chapter on this topic is excellent). Cellular automata
can be used to simulate many of the complex, lifelike behaviors
described in this essay.
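Conway's Game of Life is compact enough to quote in full. A sketch (the
glider coordinates are the standard pattern; the packaging is mine):

    from collections import Counter

    # Game of Life: a live cell survives with 2 or 3 live neighbors;
    # an empty cell comes alive with exactly 3.
    def life_step(live):
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for generation in range(4):
        print(sorted(glider))
        glider = life_step(glider)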
* Group behavior. Each von Neumann creature assembles new creatures on
its own, oblivious to its peers. Later scientists have devised ways of
simulating group behavior: Craig Reynolds simulated bird
flocking behavior, each artificial bird following simple rules to avoid
collisions and maintain a clear line of sight. Similarly, a group of
scientists at the Free University in Brussels simulated the collective
foraging behavior of social insects like ants and bees. If a creature
finds food, it releases pheromone on the trail; other creatures
wandering randomly will tend to follow pheromone trails and find the
food. These behaviors are not mandated by a leader or control program;
they emerge naturally, as a result of each creature obeying a simple set
of rules.
Like genetic algorithms, simulations of social insects have proven very
useful at solving optimization problems, in domains such as routing and
scheduling. For example, scientists Eric Bonabeau and Marco Dorigo used
ant algorithms to solve the classic traveling salesman problem.
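In miniature, an ant algorithm for the traveling salesman problem looks
something like the sketch below -- written in the spirit of Dorigo's
work, not his published algorithm; the city count, evaporation rate,
and deposit rule are my own toy values:

    import itertools, math, random

    cities = [(random.random(), random.random()) for _ in range(8)]
    dist = {(i, j): math.dist(cities[i], cities[j])
            for i, j in itertools.permutations(range(8), 2)}
    pheromone = {edge: 1.0 for edge in dist}

    def tour_length(tour):
        return sum(dist[tour[i], tour[(i + 1) % len(tour)]]
                   for i in range(len(tour)))

    best = None
    for _ in range(200):                  # each iteration, one ant walks
        tour = [0]
        while len(tour) < len(cities):
            here = tour[-1]
            choices = [c for c in range(len(cities)) if c not in tour]
            # prefer short edges with strong pheromone trails
            weights = [pheromone[here, c] / dist[here, c] for c in choices]
            tour.append(random.choices(choices, weights)[0])
        for edge in pheromone:
            pheromone[edge] *= 0.95       # trails evaporate over time
        for i in range(len(tour)):        # deposit pheromone along the tour
            a, b = tour[i], tour[(i + 1) % len(tour)]
            pheromone[a, b] += 1.0 / tour_length(tour)
            pheromone[b, a] += 1.0 / tour_length(tour)
        if best is None or tour_length(tour) < tour_length(best):
            best = tour
    print("best tour:", best, round(tour_length(best), 3))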
* Competition and co-operation. Robert Axelrod simulated "game theory"
contests, in which players employed different strategies for
co-operation and competition with other players. Axelrod set populations
of players using different algorithms to play against each other for
long periods of time; players with winning algorithms survived and
multiplied, while losing species died out. In these simulations,
co-operative algorithms tend to predominate in most circumstances.
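A toy version of Axelrod's tournament takes a few lines. The payoffs
are the standard prisoner's dilemma values, and the three strategies
are the classic textbook trio -- my selection, for illustration:

    # Payoffs: (my score, opponent's score); C=cooperate, D=defect.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(opponent_history):   # cooperate first, then mirror
        return opponent_history[-1] if opponent_history else "C"

    def always_defect(opponent_history):
        return "D"

    def always_cooperate(opponent_history):
        return "C"

    def match(strat_a, strat_b, rounds=200):
        hist_a, hist_b, score = [], [], 0
        for _ in range(rounds):
            move_a, move_b = strat_a(hist_b), strat_b(hist_a)
            score += PAYOFF[move_a, move_b][0]
            hist_a.append(move_a)
            hist_b.append(move_b)
        return score

    strategies = (tit_for_tat, always_defect, always_cooperate)
    for a in strategies:
        print(a.__name__, sum(match(a, b) for b in strategies))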
* Ecosystems. The von Neumann world starts with a single pond creature,
which creates a world full of copies of itself. Simulators Chris
Langton, Steen Rasmussen and Tom Ray evolved worlds containing whole
ecosystems worth of simulated creatures. The richest environment is Tom
Ray's Tierra. A descendant of "core wars," a hobbyist game written in
assembly language, the Tierra universe evolved parasites, viruses,
symbionts, mimics, evolutionary arms races -- an artificial ecosystem
full of interactions that mimic the dynamics of natural systems. (Tierra
is actually written in C, but emulates the computer core environment. In
the metaphor of the simulation, CPU time serves as the "energy" resource
and memory is the "material" resource for the ecosystem. Avida, a newer
variant on Tierra, is maintained by a group at Caltech).
* Extinction. Von Neumann's creatures will presumably replicate until
they run out of components, and then all die off together. The
multi-species Tierra world and other evolutionary simulations provide a
more complex and realistic model of population extinction. Individual
species are frequently driven extinct by environmental pressures. Over
a long period of time, there are a few large cascades of extinctions,
and many extinctions of individual species or clusters of species.
Extinctions can be simulated using the same algorithms that describe
avalanches; any given pebble rolling down a steep hill might cause a
large or small avalanche; over a long period of time, there will be many
small avalanches and a few catastrophic ones.
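The canonical avalanche model is the Bak-Tang-Wiesenfeld sandpile,
which is itself a cellular automaton. A sketch, with grid size and
number of grains as arbitrary illustration values:

    import random
    from collections import Counter

    # Drop grains one at a time; any site holding 4 or more grains
    # topples, shedding one grain to each neighbor (grains fall off
    # the edges). Count topplings per drop as the "avalanche size".
    SIZE = 20
    grid = [[0] * SIZE for _ in range(SIZE)]
    sizes = Counter()

    for _ in range(20000):
        x, y = random.randrange(SIZE), random.randrange(SIZE)
        grid[x][y] += 1
        avalanche, unstable = 0, [(x, y)]
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4
            avalanche += 1
            for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                if 0 <= ni < SIZE and 0 <= nj < SIZE:
                    grid[ni][nj] += 1
                    unstable.append((ni, nj))
        if avalanche:
            sizes[avalanche] += 1

    for size in sorted(sizes)[:10]:      # many small, few catastrophic
        print(size, sizes[size])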
* Co-evolution. Ecosystems are composed of multiple organisms that
evolve in concert with each other and with changes in the environment.
Stuart Kauffman at the Santa Fe Institute created models that simulate
the evolutionary interactions between multiple creatures and their
environment. Running the simulation replicates several attributes of
evolution as it is found in the historical record. Early in an
evolutionary scenario, when species have just started to adapt to the
environment, there is an explosion of diversity. A small change in an
organism can lead to a great increase in fitness. Later on, when species
have become better adapted to the environment, evolution is more likely
to proceed in small, incremental steps. (see pages 192ff in Kauffman's
At Home in the Universe for an explanation.)
* Cell differentiation. One of the great mysteries of evolution is the
emergence of multi-celled organisms, which grow from a single cell.
Levy writes about several scientists who have proposed models of
cell differentiation. However, these seem less compelling than the other
models in the book. Stuart Kauffman developed models that simulate a key
property of cell differentiation -- the generation of only a few basic
cell types, out of a genetic code with the potential to express a huge
variety of patterns. Kauffman's model consists of a network in which each
node is influenced by other nodes. If each gene affects only a few other
genes, the number of "states" encoded by gene expression will be
proportional to the square root of the number of genes.
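Kauffman's result is easy to poke at in code. Here is a sketch of a
random boolean network, with each "gene" regulated by K=2 others; the
network size and probe count are my own toy values:

    import random

    N, K = 12, 2
    inputs = [random.sample(range(N), K) for _ in range(N)]
    tables = [[random.randint(0, 1) for _ in range(2 ** K)]
              for _ in range(N)]

    def step(state):
        # each gene reads its two regulators through a random boolean rule
        return tuple(tables[g][state[inputs[g][0]] * 2 + state[inputs[g][1]]]
                     for g in range(N))

    attractors = set()
    for _ in range(200):                 # probe from random initial states
        state = tuple(random.randint(0, 1) for _ in range(N))
        seen = {}
        while state not in seen:
            seen[state] = len(seen)
            state = step(state)
        first = seen[state]              # the walk has entered a cycle
        attractors.add(frozenset(s for s, t in seen.items() if t >= first))
    print(N, "genes ->", len(attractors), "attractors (cell types)")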
There are several reasons that this model is somewhat unsatisfying.
First, unlike other models discussed in the book, this simulates a
numerical result rather than a behavior. Many other simulations could
create the same numerical result! Second, the empirical relationship
between number of genes and number of cell types seems rather loose --
there is even a dispute about the number of genes in the human genome!
Third, there is no evidence of a mechanism connecting epistatic coupling
and the number of cell types.
John Holland proposed an "Echo" agent system to model differentiation
(not discussed in the Levy book). This model is less elegant than other
emergent systems models, which generate complexity from simple rules; it
starts pre-configured with multiple, high-level assumptions. Also, Tom
Ray claims to have made progress at modeling differentiation with the
Tierra simulation. This is not covered in Levy's book, but is on my
reading list.
There are several topics, not covered in Levy's book, where progress
seems to have been made in the last decade. I found resources for these
on the internet, but have not yet read them.
* Metabolism. The Von Neumann creature assembles replicas of itself out
of parts. Real living creatures extract and synthesize chemical elements
from complex raw materials. There has apparently been substantial
progress in modeling metabolism in the last decade, using detailed
models gleaned from biochemical research.
* Immune system. Holland's string-matching models seem well-suited to
simulating the behavior of the immune system. In the last decade, work
has been published on this topic, which I have not yet read.
* Healing and self-repair. Work in this area is being conducted by IBM
and the military, among other parties interested in robust systems. I
have not seen evidence of effective work in this area, though I have not
looked closely.
* Life cycle. The von Neumann model would come to a halt with the pond
strip-mined of the raw materials for life, littered with the corpses of
dead creatures. By contrast, when organisms in nature die, their bodies
feed a whole food chain of scavengers and micro-organisms; the materials
of a dead organism serve as nutrients for new generations of living
things. There have been recent efforts to model ecological food chains
using network models; I haven't found a strong example of this yet.
Von Neumann's original thought experiment proposed an automaton which
would replicate itself using a factory-like assembly process,
independent of its peers and its environment. In subsequent decades,
researchers have made tremendous progress at creating beautiful and
useful models of many more elements of living systems, including growth,
self-replication, evolution, social behavior, and ecosystem dynamics.
These simulations express several key insights about the nature of
living systems:
* bottom up, not top down. Complex structures grow out of simple
components following simple steps.
* systems, not individuals. Living systems are composed of networks
of interacting organisms, rather than individual organisms in an inert
environment.
* layered architecture. Living and lifelike systems express
different behavior at different scales of time and space. On different
scales, living systems change based on algorithms for growth, for
learning, and for evolution.
Many "artificial life" experiments have helped to provide a greater
understanding of the components of living systems, and these simulations
have found useful applications in a wide range of fields. However,
there has been little progress at evolving more sophisticated, life-like
systems that contain many of these aspects at the same time.
A key theme of the Levy book is the question of whether "artificial
life" simulations can actually be alive. At the end of the book, Levy
opend the scope to speculations about the "strong claim" of artificial
life. Proponents of a-life, like proponents of artificial intelligence,
argue that "the real thing" is just around the corner -- if it is not a
property of Tierra and the MIT insect robots already!
For example, John Conway, the mathematics professor who developed the
Game of Life, believed that if the Game was left to run with enough
space and time, real life would eventually evolve. "Genuinely living,
whatever reasonable definition you care to give to it. Evolving,
reproducing, squabbling over territory. Getting cleverer and cleverer.
Writing learned PhD theses. On a large enough board, there is no doubt in
my mind that this sort of thing would happen." (Levy, p. 58) That doesn't
seem imminent, notwithstanding Ray Kurzweil's opinion that we are about
to be supplanted by our mechanical betters.
Nevertheless, it is interesting to consider the point at which
simulations might become life. There are a variety of cases that test
the borders between life and non-life. Does life require chemistry based
on carbon and water? That's the easiest of the border cases -- it seems
unlikely. Does a living thing need a body? Is a prion a living thing? A
self-replicating computer program? Do we consider a brain-dead human
whose lungs are operated by a respirator to be alive? When is a fetus
considered to be alive? At the border, however, these definitions fall
into the domain of philosophy and ethics, not science.
Since the creation of artificial life, in all of its multidimensional
richness, has generated little scientific progress, practitioners over
the last decade have tended to focus on specific application domains,
which continue to advance, or have shifted their focus to other fields.
* Cellular automata have become useful tools in the modeling of
epidemics, ecosystems, cities, forest fires, and other systems composed
of things that spread and transform.
* Genetic algorithms have found a wide variety of practical
applications, creating a market for software and services based on these
algorithms.
* The simulation of plant and animal forms has morphed into the
computer graphics field, providing techniques to simulate the appearance
of complex living and nonliving things.
* The software for the Sojourner robot that explored Mars in 1997
included concepts developed by Rodney Brooks' team at MIT; there are
numerous scientific and industrial applications for the insect-like
robots.
* John Conway put down the Game and returned to his work as a
mathematician, focusing on crystal lattice structure.
* Tom Ray left the silicon test tubes of Tierra, and went to the
University of Oklahoma to study newly-assembled genome databases for
insight into gene evolution and human cognition. The latest
developments in computational biology have generated vast data sets that
seem more interesting than an artificial world of assembly-language
organisms.
While the applications of biology to computing and computing to biology
are booming these days, the synthesis of life does not seem to be the
most fruitful line of scientific investigation.
Will scientists ever evolve life, in a computer or a test tube? Maybe.
It seems possible to me. But even if artificial creatures never write
their PhD theses, at the very least, artificial life will serve the
purpose of medieval alchemy. In the pursuit of the philosopher's stone,
early experimenters learned the properties of chemicals and techniques
for chemistry, even though they never did find the elixir of eternal
life.
Scientist dates a speech-enabling gene to about 100,000 years ago; evidence of culture starts about 50,000 years ago.
And here's a link to a Nature story last fall about the discovery of the FOXP2 gene, which enables fine control over the muscles of the mouth and throat.
As always, the science is more subtle than the reports in the popular press. There's a lot of ongoing research and debate about how and how much the gene influences the ability to speak and understand language.
Just read two really cool books about recent scientific discoveries about the behavior of networks:
* Nexus, by Mark Buchanan, former editor of Nature magazine
* Linked, by Albert-Laszlo Barabasi, one of the scientists whose team made some of the key discoveries
It's a small world after all
There are many versions of the party game. The website, Six Degrees of Kevin Bacon, looks up the number of links connecting an arbitrary celebrity with Kevin Bacon. Will Smith was in Independence Day (1996) with Harry Connick, Jr., who was in My Dog Skip (2000) with Kevin Bacon. In another version of the party game, mathematicians boast of their "Erdos number" -- how close they are, measured in co-authorship links, to Paul Erdos, the prolific and eccentric Hungarian mathematician. A variant called "Jewish geography" connects people via links through summer camps, social clubs and synagogues.
The intuitive insight that communities are "small worlds" has been quantified. Just in the last four years, scientists have developed models to describe the properties and behavior of "small-worlds" networks.
Networks can be characterized by several parameters:
* the level of clustering -- how connected a given node is to nearby nodes. For example, social networks are highly clustered -- one's friends are likely to know each other
* the degree of separation, also called the diameter -- how many links it takes on average to get from one node to another
* the level of hierarchy -- how similar is the level of connectivity among different nodes. Do most nodes have about the same level of connection, or are some nodes much more connected than others?
A network can become "a small world" in one of two ways:
* a small number of long-distance connections. If you take a network where most connections are local, and add just a few long-distance connections, the network quickly "links up", making it possible to traverse vast distances in just a few hops. For example, a coffee trader in Guatemala provides a link connecting a rural coffee-growing family to an urban latte-sipper in just a few steps. Research by Duncan Watts and Steven Strogatz, published in 1998, modeled the role of long-distance connections in creating the "small world" effect (see the sketch after this list).
* a small number of big hubs. On the worldwide web, the Yahoo news portal has lots of links to local news sites, making it easy to find local news in many of the world's languages in just a few clicks. This type of network, in which a few members of a set have most of the links, and many members have few links, is called a "scale-free network", and can be described by a power law plotting the distribution of links among nodes. Research by Barabasi and his team, published in 1999 and more recently, modeled this pattern and found evidence of it in a variety of domains.
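The first mechanism is easy to verify in code. This sketch builds a ring lattice and measures how a handful of random long-distance links collapses the average path length; the ring size, neighborhood width, and shortcut count are my own illustration values, not the parameters from the Watts-Strogatz paper.

    import random
    from collections import deque

    N = 200

    def ring(n, k=2):                     # each node linked to its k
        edges = {i: set() for i in range(n)}   # nearest neighbors per side
        for i in range(n):
            for d in range(1, k + 1):
                edges[i].add((i + d) % n)
                edges[(i + d) % n].add(i)
        return edges

    def average_path_length(edges):
        total, pairs = 0, 0
        for source in edges:              # breadth-first search per node
            dist, queue = {source: 0}, deque([source])
            while queue:
                node = queue.popleft()
                for nxt in edges[node]:
                    if nxt not in dist:
                        dist[nxt] = dist[node] + 1
                        queue.append(nxt)
            total += sum(dist.values())
            pairs += len(dist) - 1
        return total / pairs

    graph = ring(N)
    print("ring lattice:", round(average_path_length(graph), 1), "hops")
    for _ in range(10):                   # add ten random shortcuts
        a, b = random.sample(range(N), 2)
        graph[a].add(b)
        graph[b].add(a)
    print("with 10 shortcuts:", round(average_path_length(graph), 1), "hops")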
The "small worlds" patterns create networks that are highly resilient, yet vulnerable to certain kinds of failure.
* small worlds networks are robust against random damage -- if you randomly remove nodes from the internet, or species from an ecosystem, the system will continue to operate with little disturbance
* small worlds networks are vulnerable to attacks on connectors or hubs -- if you take down a number of key internet hubs, or remove just a few linchpin species in an ecosystem, the connections in the system will break down.
With the "small worlds" model in hand, scientists foraged for data sets and mapped the workings of small-worlds networks in a wide variety of domains:
* the web, which can be traversed with a few hyperlinks
* the internet -- which can be crossed in a few hops
* electric power networks
* social networks
* ecosystems, in which a few "hub" species are predators or prey for many others.
* biochemistry -- in which a few key chemicals catalyze many reactions.
* group behavior -- in which fireflies start blinking in unison, and theater-goers unconsciously synchronize their applause
Relationship to other aspects of complexity theory
One of the fun things about reading the books is drawing relationships between "small worlds networks" and other aspects of complex, emergent systems, although these links are not well-developed in the books themselves.
Stuart Kauffman, a theoretical biologist, has developed a set of models to explain the natural emergence of order in open thermodynamic systems. According to Kauffman's models, explained in his book, "At Home in the Universe":
* self-replication is likely to emerge from a set of sufficiently diverse chemicals in high concentration
* genes code for a relatively small number of types of cells because of the network parameters of gene expression (Kauffman theorizes that it is the low coupling parameter that makes the state space of gene expression much lower than one might expect).
* the evolution of species can be modeled by adaptive walks across "fitness landscapes", in which organisms with better-adapted traits outcompete others, and produce descendants with the opportunity to become even more fit. Key parameters of the model include the level of randomness within the fitness landscape (in a random landscape, a small change in an organism would cause a big change in fitness; in a non-random landscape, a small change in an organism would probably cause a small change in fitness); the level of coupling among genes in an organism (this models conflicting constraints -- e.g. a gene that protects against malaria also increases vulnerability to blood disease); and the level of coupling among species in the landscape. In these models, extinctions follow a "power law" distribution, with frequent extinctions of small numbers of species, and infrequent catastrophes wiping out many species at once.
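The fitness-landscape model is also simple to sketch. Below is a toy NK-style landscape in Python; N, K, and the walk length are illustration values, and the memoized random table is a standard trick for fixing a random landscape, my packaging rather than Kauffman's own code.

    import random
    from functools import lru_cache

    N, K = 16, 2
    neighbors = [random.sample([j for j in range(N) if j != i], K)
                 for i in range(N)]

    @lru_cache(maxsize=None)
    def component(i, context):
        # each gene's fitness contribution depends on its own state plus
        # the states of K coupled genes; memoizing fixes the landscape
        return random.random()

    def fitness(genome):
        return sum(component(i, (genome[i],) +
                                tuple(genome[j] for j in neighbors[i]))
                   for i in range(N)) / N

    genome = tuple(random.randint(0, 1) for _ in range(N))
    for _ in range(500):                  # adaptive walk: keep uphill moves
        i = random.randrange(N)
        mutant = genome[:i] + (1 - genome[i],) + genome[i + 1:]
        if fitness(mutant) > fitness(genome):
            genome = mutant
    print("final fitness:", round(fitness(genome), 3))

Raising K makes the landscape more rugged, and the walk stalls on a local peak sooner -- the "conflicting constraints" effect described in the bullet above.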
Kauffman's theories about the emergence of organization and the mechanisms of evolution are fascinating and appealing. But in the absence of any but the sketchiest empirical evidence, his work is vulnerable to the criticism that it's computer art -- the properties of his models could just be artifacts of the parameters plugged into the models.
The empirical data analyzed by Barabasi's team about chemical reaction networks and connection patterns in ecosystems seem like early evidence that nature works in the ways that Kauffman describes. Networks such as ecosystems and the world wide web have a small number of key nodes with many connections, and a great many nodes with fewer connections. According to the network model, this will lead to an evolutionary pattern with many small extinctions of non-hub species, and some mass disasters when key species are eliminated.
The Watts and Barabasi research suggests some alternate ways to configure Kauffman's model, creating similar results with data that fit more closely with empirical evidence.
* Kauffman's model accounts for long evolutionary jumps -- the probability that a small change in an organism results in a large change in fitness -- by tuning the "randomness" of the fitness landscape. Watts achieves similar results with a seemingly simpler approach, by adding just a few nodes with long-distance coupling behavior.
* Kauffman's model tunes the average level of coupling up and down, reaching realistic behavior at a particular range of parameters. Barabasi's model observes that the level of coupling in a network varies by a power law, and this distribution predicts the observed behavior.
Much more evidence is needed to confirm or disprove Kauffman's theories, and to refine the models, in networks of gene expression, ecological networks, and evolution. The ongoing research and modeling seems to be on the right track to find these things out.
One of the key insights of Barabasi's team is that a "scale-free network" can be created by a simple growth pattern -- if new nodes add links with slight preference for popular nodes, the hierarchical pattern will emerge. It would be interesting to see future research that looked in more detail at models of evolution and growth.
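That growth rule is almost a one-liner. Here is a sketch of preferential attachment; the node count is arbitrary, and the repeated-endpoints list is a standard trick for degree-weighted sampling -- my packaging, not Barabasi's code.

    import random
    from collections import Counter

    targets = [0, 1]        # one link between the first two nodes; each
                            # node appears once per link it holds
    for new_node in range(2, 5000):
        chosen = random.choice(targets)   # degree-proportional choice
        targets += [new_node, chosen]

    degrees = Counter(targets)
    histogram = Counter(degrees.values())
    for degree in sorted(histogram)[:8]:  # many small nodes, a few hubs
        print(degree, histogram[degree])
    print("max degree:", max(degrees.values()))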
In particular, the Watts and Barabasi models focus on patterns of network wiring -- the number and distance of links. There are additional interesting questions about what this network architecture means with respect to the level of influence between nodes. What is the relationship between the architecture of the network and the way the network is used to transmit information?
That's one of the most exciting things about studying this topic -- the work is nowhere near done.
The unfinished nature of the field shows up in some logical gaps in the books.
Both books explain how network growth patterns enable the rich to get richer, but that does not seem to me to be the most interesting part of the story. It is true that wealthy investors make more money, and really big sites like Yahoo and Amazon acquire the most links.
But the Pareto principle doesn't explain whether and how the poor get rich. Google comes from nowhere, provides a better search engine, and rapidly emerges as the leading search site. And the Pareto principle doesn't talk about the impact of providing "small-worlds" connectivity to the remote and obscure. The interesting thing about the web is not that Yahoo is popular -- it's that a quick keyword search will find sites on medieval theologians and African cooking, and a couple of clicks on Yahoo News links will get you local media in Farsi.
Also, neither book has a strong discussion of the limits to network growth, or differentiates between hub systems with obvious physical limits, like airports, and those with few physical limits, like the information space of the web.
Comparison and contrast
The books are eerily similar, as if one of the writers was looking over the other one's shoulder as he wrote. The similarity in substance is not that surprising -- after all, the books explain the same papers by the same set of scientists over the same few years. What is odder is that the books contain many of the same anecdotes -- tales of Erdos, the eccentric Hungarian mathematician; the inspiration of Duncan Watts by synchronized fireflies; the creation of the Oracle of Kevin Bacon. Both books have very similar sections on the internet and the network economy, with similar sweeping generalizations about impending change, and a similar lack of substance.
Buchanan is a professional writer, and the book is a little better written. His magazine instincts show -- each chapter is nicely structured, starting with anecdotes about people, and uncovering some new theme. The book does a decent job with transitions -- it reads like a book rather than a collection of articles. Buchanan has a PhD in physics -- he's read the primary sources, he understands the math, he enjoys the subject and he doesn't pander to the audience.
Barabasi is a participant, not a bystander -- the unique strength of the book lies in the first-hand stories of his team and their discoveries. Barabasi is proud of his achievements; he makes it very clear that the topic was not properly understood until his team started their work. A typical sentence along these lines: "Uncovering and explaining these laws has been a fascinating roller coaster ride during which we have learned more about our complex, interconnected world than was known in the last hundred years." This is not the place to look for humility.
Both books were definitely worth reading, with clear explanations, great references to the sources, and a lot of food for thought. It is quite a thrill to read about these developments as they are happening.
Computational Beauty of Nature, by Gary Flake, is a very nicely written walk through topics related to chaos and complexity, including fractals, chaos, artificial life, adaptive systems, and neural networks. This is THE one book for folks who want to dive into these topics one level deeper than the popular science books. Each chapter has references to the primary source books and articles, if you want to pursue the topics in greater depth.
The book's website has a set of Java applets and C programs to run the simulations -- for example, you can play with the parameters of L-System fractals to simulate different kinds of plant shapes. The source code is available to download and play with.
Flake does a lovely job of explaining the math and modeling concepts, in a manner that is comprehensible to those of us without extensive math backgrounds. Sometimes his one-page intros go a bit fast for me, but it's easy enough to hit Google, find a relevant tutorial, then go back and finish the chapter. I needed to do this for the sections on matrix math and circuit design -- this is a very pleasurable way to learn.