Guest blogging
Aug 22, 2011
I am the very model of a Singularitarian
The Stross Entries #9 (originally posted on Charlie Stross's weblog on August 12, 2011)
I suggested in an earlier post that foresight is not so much about prediction as about designing against surprise. Key to this is the exploration of multiple futures, which is why scenario-based foresight is so commonly practiced. Scenarios are rarely developed in isolation, but are usually created in decks (generally of four, when one uses the common 2x2 matrix method of generating them). These are intended as snapshots taken at different points in a complex space of possibilities.

The opposite of scenarios is the default future, which is what everybody assumes is going to happen. If life is what happens to you while you're making other plans, the real future is what happens to you after you've planned for the default future. A classic example of what you get when you plan for the default future is the Maginot Line.

In a 1998 article in the journal Futures, "Futures Beyond Dystopia," Richard Slaughter critiques science fiction's default futures. He accuses SF of oscillating between naive techno-optimism and equally naive apocalypticism. Late 20th century SF lacks the necessary spectrum of intermediate scenarios, according to Slaughter, which may explain its decreasing hold on the public imagination. What we are left with is two default futures, and no societal capacity to plan for a third. This is an idea worth serious contemplation by those of us who write the stuff.

Sometimes, too, our scenarios grow so elaborate that they become more than scenarios--they're complete paradigms. They become default modes of thinking, and come with associated cultures, champions and institutions. At this point, presenting alternatives becomes increasingly difficult; one must present not just new scenarios, but an entirely new paradigm to complement the reigning one. Many people, particularly in the foresight community, believe that a shift from scenario to paradigm is what's happened to the idea of the Technological Singularity. 
It's become the new default future--no longer the shocking, thought-provoking alternative to an orthodoxy, but the very orthodoxy itself. Against this, it's no longer sufficient to simply present different scenarios. We need an alternative paradigm (or two, or six). I've been working on some. If the Singularity is our new Maginot Line, what's the future equivalent of a line of panzers running right over it?

Since scenarios are often productively built around oppositions, I'll suggest an opposite worldview to the Singularity--one that makes opposite assumptions. The Singularity emerges from the idea that a steady and geometric increase in computing power will result in superhuman intelligence emerging rapidly, drawing with it a geometric increase in industrial and technological progress and scientific understanding; and that this sudden explosion of change is by definition unimaginable to beings of lesser intelligence, such as humans. Hence the singularity, that place that we mere mortals cannot go. We await the Kwisatz Haderach of AI to lead us through it.

The Singularity is actually an intermeshing set of beliefs about technology, intelligence, and what drives technological, economic and social change. It's a self-supporting system of ideas, which is what makes it a paradigm and not merely a scenario. And, as I said, paradigms are not to be simply denied or affirmed. (Even the primary champions of the Singularity are not true believers: if you'd like to see Vernor Vinge, Charlie, Alastair Reynolds and me dismantle its mythological structure, watch this video.) However, since it's just one vision of the future, it is wise to have others.

One that I have been working on is something I call the Rewilding. The Rewilding isn't so much a scenario as an alternative package of assumptions. For instance, the name: the original meaning of the word 'wild' was 'self-willed.' 
So, this is a set of ideas about a world that is self-willed, rather than willed by agencies (i.e., intelligences, whether mortal, artificial, or divine). I gave a little introductory talk about it at OSCON a couple of years ago, and you can find that here.

The deep logic of the Singularity is that intelligence (or, for many people, consciousness) has a magical transformative power; the even deeper mythos under that notion is the idea of agency--that the dew on the morning grass must be painted there by fairies; that the regular orbits of the planets must be ordained by God; or that the design we see in Nature is the result of a Designer. In its most refined, philosophical form, the Singularity imagines the creation by Man of a semi-divine Designer that renders a transcendent and unknowable future.

The Rewilding is a vision of the radical removal of agency from the world: the flowers bedew themselves; nobody ordered the motion of the planets, not even the mysterious agency known as Scientific Law; evolution is design without a designer, computing is thought without a thinker, and there is no mathematical reality separate from the physical world. In the Rewilding, civilization advances by systematically blurring or even erasing the border between the artificial and the natural; the more efficient an artificial system is, the more it resembles (or even is) a natural one. That is, our surroundings become increasingly wild (self-willed) rather than having to be willed by us. Agency, so long marching forward, begins to retreat.

The deep logic of this radically Copernican view is that intelligence (agency) is not a magically transformative power that stands outside nature and ordains how it should move; as I've suggested since my 2002 novel Permanence, intelligence is no more than what we mean when we say, 'look, that thing is acting intelligently.' 
The more you try to pin down what intelligence is, the more elusive it becomes, and this is because, as Brian Cantwell Smith has argued in great detail, there is no actual difference between computing and other forms of activity. To put it another way, agency is an illusion. Mind is always embodied, and everything that we think is transcendent is actually part of some embodied and evolved strategy. Most importantly, the Rewilding is a critique of the notion that intelligence and computation are equivalent.

These ideas are intended to mesh together and reinforce one another in the same way that the notions of geometric growth, the evidence of Moore's Law, and computing theory reinforce one another in the paradigm of the Singularity. To get to the Rewilding, a good SF writer (or futurist) need only posit that lines of thought like these are true. What they all add up to is the assertion that no amount of intelligence can act as the primary driver of change in our world. As I've proposed in my forthcoming novel Ashes of Candesce, consciousness is the passenger, and values are the driver; and values are ultimately determined by our physical form.

Of course, all of these ideas could be wrong; it's not my job to determine that. The point of this exercise is to bring together a coherent set of theories and perspectives that together constitute a broad-enough worldview to make a good second paradigm for the future--one worthy of being placed next to the Singularity in our planning toolkit. This second perspective allows us to avoid the complacency of the 'default future' and start triangulating on the future.

There's no reason to stop here. Ideally, I'd like to see a whole spectrum of paradigmatic scenarios of the future. The more we have, the better our advance planning for what will inevitably turn out to be a new world of surprises. 
Addendum: For the original comments, sniping, and general carpet-chewing that followed this entry, visit the original entry here.
Rewilding Etiquette
The Stross Entries #8 (originally posted on Charlie Stross's weblog on August 6, 2011)
Imagine a future where the most revolutionary changes in our world have come not from nanotech, genetic engineering, artificial intelligence or even space development--but from cognitive science and a deepening understanding of how humans function (or not) in groups. What would such a future look like?

We're all familiar--maybe too familiar--with one model of such a future; it's exemplified by stories like Brave New World and 1984. Those books were direct reactions to the last great cycle of research into human nature. That was the era when Freud seemed to have a true model of human nature, Marx a true model of economics (or not), and when eugenics still seemed like a good idea. (If you want to read an excellent horror/slipstream novel about eugenics run amok, try David Nickle's Eutopia, which is available from ChiZine Press.) These and related theories were used to justify the great 20th century human engineering efforts such as the Great Leap Forward, Soviet collectivization, and so on. The problem wasn't just that they ended up being harnessed for evil purposes, but that they were wrong or incomplete.

But what would a correct theory of human nature look like, combined with the principles of self-organization and collective intelligence that are emerging right now? What would a cogsci singularity look like?

I think it would look like good manners.

Manners--etiquette--are little studied these days, which is ironic considering that, arguably, we need them more than ever. After all, at no other time in history (except maybe during the hegemony of Rome) have so many diverse people been jostling elbows the way they are now. These days, any big city has people from every corner of the world living in it; in my city of Toronto, more than 50% of the inhabitants are from somewhere else. (And it works magnificently; we have one-tenth the murder rate of any comparably-sized American city.) We need to get along with one another, and good manners are an essential tool. 
So, what if we didn't shave everybody's head, stamp a number on it and put them through brainwashing classes; or breed them for docility; or drug the water supply? What if, instead, we started a new movement in manners, one directed at conflict resolution, collective problem solving, and the cohabitation of diverse kinds of people? And simply presented it as a movement, like open source software, not run by a social engineering elite but by anybody who's willing to use the publicly available code: i.e., the peer-reviewed, experimentally verified, incomplete but emerging cognitive sciences?

I can think of several objections and reactions to this idea. Aside from WTF?!, of course. One is that manners are actually a smoke-screen that an elite use to morally whitewash themselves: I can get away with murdering and pillaging the people around me, as long as I'm polite about it. I think this is very true in certain cases; when one of the characters in Neal Stephenson's The Diamond Age goes on an extended rant justifying hypocrisy among the moneyed classes, he's implicitly admitting to this interpretation (and doing a damned fine job of it). However, even people in the poorest villages know the difference between good manners and bad. Manners, I suspect, are one of those basic human inventions, like language.

Another objection is that manners are culturally determined. What counts as polite for me may not count as polite for you, or for somebody from the other side of the world. This is a great objection, but you could turn it around by asking, "Okay then, what would the manners of a global, multicultural, crowded civilization look like?" Another question you could ask would be, "Is there a core 'metalanguage' of manners?" Some linguists now think that human language doesn't directly follow some set of meta-rules, but more indirectly converges on certain kinds of attractors; maybe it's a similar case with manners. 
They converge on behaviours that allow us to get along; but they also get crufted over with local theories about good and bad, cleanliness and contagion, etc. Nonetheless, we can to some extent rewild our manners: we can conform them more closely to reality.

For instance, in the novel Nova by Samuel R. Delany, one of the characters offers another character half-chewed food from his own mouth, saying, "this is good, try it." This can be good manners for him because communicable diseases have been wiped out in his future. For us, this action can't be good manners. Similarly, washing one's hands after going to the bathroom is a piece of good manners that's largely supplanted other hygienic etiquette, such as never shaking hands using the left hand. Rewilded manners are manners that have had localization, historical accident, and obsolete folk theories removed; washing hands is rewilded manners. Eating pork is no longer bad manners (unless you eat it around people for whom it still is). Rewilded manners is saying, "I'm sorry, could you repeat that?" instead of demanding that the person you're talking to speak proper English.

In my novel Lady of Mazes I had a book of simple rules called The Good Book. These were rules on how to behave in different social circumstances, and they had been constructed using massive simulations of millions of social agents. The Good Book is an emergent system for an amicable society. The rules weren't necessarily intuitive, and some ran counter to what one would expect good manners to look like. But they were the result of looking at human behaviour from a higher complexity level than we are able to manage as individuals. To represent (I mean re-present, or express) cognitive science as manners would be to rewild it: to return our interactions to as close to a one-to-one relationship of behaviour and reality as possible. 
Instead of manners around contagion that tell us not to serve meat from animals with cloven hoofs, or only to shake hands with our right hand, we might for instance get forms of greeting based on the most human universals of trust-building (on the primate level, how do eye contact, physical contact, stance, etc., contribute to establishing trust when meeting a stranger? You can study that). To bow seems to be to make oneself vulnerable; it is for us what baring the throat is for dogs. Is it then a trust-gesture we should encourage, or is it too submissive...?

I am not an expert in the sociology of etiquette and manners, and it may be that my interpretations of what they are and how they work are wrong. Even if I'm on track, this is a task for that 'army of social scientists' I was advocating in an earlier post. As an SF writer, of course, it's not my job to be right; it's my job to provoke the imagination. So, indulge me: imagine the rewilded manners of a near-future Earth, where overlapping etiquette movements combine and compete the way that GNOME and KDE do within the broader Linux community, each seeking a style proper to its vision of human culture, but each adhering to deeper common principles that are derived from a rigorous study of how people actually behave, and what helps them get along, when they do get along.

Addendum: This one really got their attention. See all 184 comments in the original comment thread.
Our Eucatastrophe
The Stross Entries #7 (originally posted August 2, 2011)
In an earlier post I talked about prediction vs. preparation as different ways of approaching the future--and also foresight, which is the systematic study of trends and possibilities for the near future. When you do foresight, you quickly begin to realize that our ideas about the future are highly distorted, both by optimism and pessimism, as well as by propaganda, ideology, and all the various things that various people and groups are trying to sell us. How do you cut through all of that to get some sense--any sense--of where we're really going?

One annual effort to do just that is the Millennium Project's State of the Future. This annual study of trends and drivers is grounded in research by hundreds of people in dozens of countries around the world. The full report comes with a CD or DVD containing 7,000 pages of data, analysis, and background on the 15 years' worth of methodological refinement and legwork that have gone into the project. The PDF version of the executive summary is free to download here, and if you do look at it you may be shocked to discover something: the 2011 State of the Future report is optimistic. Cautiously so, and with caveats, but optimistic. The executive summary starts by saying, "The world is getting richer, healthier, better educated, more peaceful, and better connected and people are living longer, yet half the world is potentially unstable." The overall message is that from where we stand right now, we could build a poverty-free, sustainable world of free citizens. Or, we might pooch the whole thing.

Remember when I said in my last post that our problems are no longer technological? What I meant was that developing the technologies we need to save our collective asses is no longer the big issue; it's coordinating and cooperating to implement the solutions we already know will work that's our difficult task now. The State of the Future report agrees: by almost every measure, our world is getting better. 
Literacy, crime rates, education, access to fresh water--you name it, it's vastly improved over the past decade, and on a global scale. There remain several significant issues that could derail everything, however, so under the "Where We Are Losing" heading the report lists the following: corruption, international organized crime, climate change and related sustainability issues, and the prospect of a global mass extinction that may or may not include us. This is consistent with previous years' reports--and it's serious stuff. But the number of things that are going right, and that may synergize to mitigate or reverse these bad trends, is greater still.

J.R.R. Tolkien invented a literary device he said was necessary to balance the Greek idea of the catastrophe, that moment in a tragedy when everything falls apart. Tolkien called his idea the eucatastrophe--the moment when suddenly, everything goes right. In The Lord of the Rings, the eucatastrophe is the moment when Gollum falls into Mount Doom with the ring. Exactly that force for evil that threatened to destroy everything saves everything instead. In light of my last post on this blog, you might say that eucatastrophe is the exact opposite of a wicked problem: it's the unstoppable twining of myriad threads of fate to create an unexpected, but in hindsight inevitable, positive transformation of the world. And it's as likely for the complex systems we inhabit to do right by us as wrong.

So I want to think about what our world's eucatastrophe could be. Once you read the State of the Future reports, you will see that everything is aligned for one to happen; the question is when, where and how? What if everything goes right? And what if we help make it happen?

There is no question that the world can be far better than it is--IF we make the right decisions. When you consider the many wrong decisions and good decisions not taken--day after day and year after year around the world--it is amazing that we are still making as much progress as we are. Hence, if we can improve our decision-making as individuals, groups, nations, and institutions, then the world could be surprisingly better than it is today.

Addendum: For the original comment thread on Charlie's blog, go here.
Wicked (2)
The Stross Entries #6 (originally posted on August 3, 2011)
What do we do about wicked problems? --That is, problems that we can't all even agree exist, much less define well; problems that have no metric for determining their extent, or even whether our interventions mitigate them? I don't have answers, but I will venture to suggest a direction for us to look.

The internet has exposed a flaw in our grand plan to unite humanity: it turns out that increasing people's ability to exchange messages does not, by itself, increase their ability to communicate. The Net has developed a centripetal power: for every community it brings together, it seems to drive others apart. Eli Pariser's idea of the Filter Bubble is an expression of this phenomenon. This problem arises because it is easier to communicate with people who share your understanding of a given set of terms and phrases than with people who understand them differently. Automatic translation is not an answer to our diverging worldviews, because each person and social group has their own private grammar. It takes work to learn it, and that work can't be offloaded to an automated system--at least, not entirely.

That's why processes that tackle complex group cognition usually exhibit an obsession with words. For instance, in Structured Dialogic Design (an example I use just because it's one I know well) most of a session's time is spent learning what people mean by the terms they're throwing around. This may seem boring and tedious, but it's absolutely essential--the unsexy plumbing work of the 21st century.

So if we were to try to scale up SDD or a similar process, we might create a web-based system in which participants are allowed to define a problem or issue. The person who defines the issue owns it. Other participants can then participate in discussion and brainstorming around the issue, but in order to become part of the brainstorming group, they first have to submit a rephrasing of the issue. 
The owner of the issue decides whether a restatement shows that its author understood what the owner meant. If would-be participants have exhibited an understanding of how the issue is being framed for the purposes of the discussion, they can then proceed to help work on it.

Also critical is the question of who gets to work on a problem. To put it bluntly, the people who are affected by proposed changes need to have a say in them. The person who defined the issue doesn't get to say who the stakeholders are; a wider and more inclusive process does this--and political representation has a place here. If you don't have this inclusion of interested parties, you get the kind of botched social experiments that James C. Scott talks about in his book Seeing Like a State. (Think Soviet collectivization.) In systems terminology, you fail to properly employ Ashby's Law of Requisite Variety. In my theoretical problem-solving app, if the issue as defined can't attract representatives from enough of the identified stakeholder groups, then there's something wrong with the definition of the problem, and discussion on it cannot proceed.

We have other biases and limitations to work around. One is the Erroneous Priorities Effect, which arises when groups are allowed to vote on the relative importance of a set of issues. Straight democratic voting breaks down in this circumstance; you need to do a binary pairing exercise, where you ask, "would solving issue A help solve issue B?" and then "would solving issue B help solve issue A?" iteratively through the issues until you build an influence map that shows the true root(s) of the problematic mess. This is one process where computers can be of immense help; it's how the CogniScope software for SDD works.

So much for fantasizing about what I would do if I were king; these are just suggestions. I do think, though, that this stuff should be built into our social media at a very basic level. 
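The binary pairing exercise is simple enough to sketch in a few lines of code. This is only an illustration of the general idea, not CogniScope's actual algorithm: the issue names and the `helps` judgments below are invented, and a real SDD session would also follow chains of influence transitively. Given a yes/no answer to "would solving A help solve B?" for each ordered pair, we build a directed influence graph and surface the root issues--the ones that influence others but are influenced by none.

```python
from itertools import permutations

def influence_map(issues, helps):
    """Build a directed influence graph from pairwise judgments.

    `helps(a, b)` answers the session question: "would solving
    issue `a` significantly help solve issue `b`?"
    """
    edges = {a: set() for a in issues}
    for a, b in permutations(issues, 2):  # ask the question both ways
        if helps(a, b):
            edges[a].add(b)
    return edges

def root_issues(edges):
    """Issues that influence others but are influenced by nothing:
    candidate roots of the problematic mess."""
    influenced = set().union(*edges.values()) if edges else set()
    return [a for a, outs in edges.items() if outs and a not in influenced]

# Toy example with hypothetical issues and judgments:
issues = ["distrust", "misinformation", "polarization"]
judgments = {
    ("distrust", "misinformation"): False,
    ("misinformation", "distrust"): True,
    ("misinformation", "polarization"): True,
    ("polarization", "misinformation"): False,
    ("distrust", "polarization"): True,
    ("polarization", "distrust"): False,
}
edges = influence_map(issues, lambda a, b: judgments[(a, b)])
print(root_issues(edges))  # -> ['misinformation']
```

The point of the pairwise form is that each question is small enough for a group to actually answer; the map that emerges shows leverage that no single straight vote on "which issue matters most?" would reveal.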
Why is it even possible to have misunderstandings online when we have all these tools at hand to help prevent them? It's because social media systems like Facebook are just the tricycle version of what social media will become. Facebook barely hints at what's coming; it's social media with training wheels on. What's coming is political media: media that extract commitments from their users and employ those commitments to help solve complex, otherwise intractable real-world problems. (Existing collective intelligence apps such as Wikipedia rely on community involvement, but as Aleco Christakis puts it, involvement is to commitment as ham is to eggs: the chicken is involved, the pig is committed.)

So what are the skill-sets needed for this next great leap forward? I can tell you, it's not computer programmers that we'll need; it's not technologists. We need social scientists. A lot of them. I wrote my great nanotech book back in the '90s. I've played the augmented reality and artificial intelligence cards in my novels. Biotech is yesterday's future. No--what I'll be writing about from now on--what the 21st century will belong to--is cognitive and social science, because our technological society's one big blind spot is that we can imagine everything about ourselves and our world changing except how we make decisions.

That is precisely the sea-change that is rushing toward us--or, more properly, that we have the historic opportunity to seize and design. Our age belongs not to some attempted re-engineering of human nature, the sort of thing that so many millions died for in the last century. It belongs to a maturing of our ability to govern ourselves as we are. Because it doesn't matter what we're capable of doing if we continue making the wrong decisions about what to do.
Addendum: For the highly enthusiastic, original comment thread, visit here.
Wicked (1)
The Stross Entries #5 (originally posted July 31st, 2011)
I sense a theme. I've been reading a lot of blog posts, and comments to same, that highlight the seemingly intractable quality of current world problems. This recent post by Steelweaver is a great example. So are a lot of the comments to my previous "Beyond Prediction" post. Steelweaver in particular hits the nail on the head with the idea that "people no longer inhabit a single reality. ... Collectively, there is no longer a single cultural arena of dialogue." This is definitely the case when you examine the cultural and political dialogues arising around the Greek and U.S. debt crises, or global warming. --And this is a brilliantly insightful idea, but it's a little bit sad, too, because it seems as though a lot of people are just discovering this problem, and yet it's been well known for decades.

I spend a lot of time with people who have a very particular way of looking at problems: define it, decompose and scope it, solve it, implement it. A lot of engineers, scientists, and programmers of my acquaintance take this approach. And they sometimes get very, very angry when faced with real-world problems that can't be approached this way; in fact, I keep ending up in circular arguments with technologists who insist on using this approach on, e.g., climate change. Or the debt crises. (Encountered angry trolls in any comment threads lately? Do they tend to make sweeping generalizations about the nature of problems, their causes, and their solutions? Hmmm...)

But often, in the human sphere, there are what're called "wicked" problems. In 1973, Horst Rittel and Melvin Webber defined a wicked problem as one that has no definitive formulation and no stopping rule, whose solutions can only be judged better or worse rather than true or false, and where every attempt at a solution changes the problem itself.

Climate change is a great example of a wicked problem: Quick, somebody tell me what the acceptable maximum amount of CO2 in the atmosphere should be, in parts-per-million! Provide me with the answer to that question, and you win a pony! (And now, dear trolls, fair warning: if you argue over climate change at all in the comment thread, I will mod you off the island. 'Cause it's not what this post is about, it's just an example.)

It is not the case that wicked problems are simply problems that have been incompletely analyzed; there really is no 'right' formulation and no 'right' answer. These are problems that cannot be engineered. The anger of many of my acquaintances seems to stem from the erroneous perception that they could be solved this way, if only those damned republicans/democrats/liberals/conservatives/tree-huggers/industrialists/true believers/denialists didn't keep muddying the waters. Because many people aren't aware that there are wicked problems, they experience the failure to solve major complex world issues as the failure of some particular group to understand 'the real situation.' But they're not going to do that, and granted that they won't, the solutions you work on have to incorporate their points of view as well as your own, or they're non-starters. This, of course, is mind-bogglingly difficult.

Our most important problems are wicked problems. Luckily, social scientists have been studying this sort of mess since, well, 1970. Techniques exist that will allow moderately-sized groups with widely divergent agendas and points of view to work together to solve highly complex problems. (The U.S. Congress apparently doesn't use them.)

Structured Dialogic Design is one such methodology. Scaling SDD sessions to groups larger than 50 to 70 people at a time has proven difficult--but the fact that it and similar methods exist at all should give us hope. Here's my take on things: our biggest challenges are no longer technological. They are issues of communication, coordination, and cooperation. These are, for the most part, well-studied problems that are not wicked. The methodologies that solve them need to be scaled up from the small-group settings where they currently work well, and injected into the DNA of our society--or, at least, built into our default modes of using the internet. They then can be used to tackle the wicked problems.

What we need, in other words, is a Facebook for collaborative decision-making: an app built to compensate for the most egregious cognitive biases and behaviours that derail us when we get together to think in groups. Decision-support, stakeholder analysis, bias filtering, collaborative scratch-pads and, most importantly, mechanisms to extract commitments to action from those that use these tools. I have zero interest in yet another open-source copy of a commercial application, and zero interest in yet another Tetris game for Android. But a Wikipedia's worth of work on this stuff could transform the world. If Google+ can attract millions of people in just a few days to an app that does little more than let them drag pictures of people into circles, surely we can build a simple app that everybody can use that does even one useful thing, like, say, mitigate the Erroneous Priorities Effect when you're attending a meeting.

Next, in Wicked (2): What a Wikipedia's worth of work would get us.

Addendum: You can find the original comment thread for this entry here.
Aug 01, 2011
These Aren't the Worlds You're Looking For
The Stross Entries #3
Last week my wife and I read the chronologically first Dragonriders of Pern book to my daughter. (She loved it.) Dragonsdawn is one of more than a dozen novels by Anne McCaffrey set on the alien world of Pern, which in this story has just been colonized by humans. I was struck by McCaffrey's detailed thinking about what colonization of another planet would be like--both because of the sophistication of some of her ideas, and the utter naivete of others. The colonists use genetic engineering to defend Pern's biosphere against incursions by an alien life form known as Thread, but nobody (least of all McCaffrey herself) seems to realize that the humans and their goats, pigs, food plants and associated fungi and microorganisms are themselves a catastrophic alien threat to the planet's biosphere. At least the Thread and the native life forms have had some time to co-evolve. McCaffrey's colonists fan out from their initial base at Landing and spread seeds, spores, eggs and new species of megafauna all over the planet. They seem utterly unaware that this will cause massive displacement of species up and down the food chain, perturbing nearly every ecology in the world. They also seem unaware of the possibility that local life forms might be better adapted to some things, and might see them and their imports as food as well.

I raise this not to dump on McCaffrey (whose books are marvelous) but because Dragonsdawn perfectly exhibits the conceptual blind spots that have gotten us into trouble on our own planet. Even more, however, Dragonsdawn flags a giant blind spot among proponents of space colonization: the idea that the worlds we want to locate and colonize should be worlds like Earth. The fact is, the last place we want to set up a human colony is a planet with a fully developed Earth-like ecosystem. Jared Diamond provides some of the reasons why in his excellent study of the European conquest of the Western hemisphere, Guns, Germs, and Steel. 
European settlers didn't just come over and settle; the Vikings tried it and died out, and many early colonies in the Indies and America failed. Those settlements that were successful were the ones that had the benefit of knowledge earned through long experimentation in the micro-ecologies of the Azores, and the Canary Islands, and in Africa and Asia. More importantly, however, the successful colonies weren't bands of human beings--they were humans accompanied by the right mix of food animals, edible plants, and microorganisms. In other words, it wasn't European humans who colonized the Americas; it was European ecosystems. And the effect, both on the flora and fauna and on the humans already in the Americas, was apocalyptic. A single species can't colonize another world; it takes a whole biosphere. Even your gut fauna have the potential to wreak havoc on a planet that's never encountered them before. You might think you could genetically engineer solutions to this whopping big problem, but the thing is, species-by-species interventions won't work. Most of the microorganisms in any randomly-selected drop of water are unknown to science, and you have thousands of species living inside you, all of which would need to be taken into account. Ditto for the new biosphere you're moving into; adaptation must be mutual. So, these green, Earth-like worlds with their blue skies and oceans, warm breezes and waving, untouched forests--these aren't the worlds you're looking for. They're impossible to colonize without unforeseeable catastrophic results. We will want to look for them, to study them and admire them from afar, but we'd better not ever set foot there. Instead, the worlds we will want to colonize (and I disagree here with Charlie's assessment that colonizing other worlds is impossible or impractical) are those that are fallow--Earth-analogues that could have developed life but never did. 
We might get luckiest on planets that do have life, but where that life is stuck at the Proterozoic stage, having oxygenated the atmosphere but not yet colonized land. As long as we're willing to pave over that indigenous life entirely with our own, this might be the best way to go. Otherwise, however, for the purposes of eventual colonization, we should be searching for the Fallow Earths.

--Oh, hey, I just came up with a book title. Anybody want to pay me to write a novel around it?

Addendum: for the comment thread around this entry, head on over to Charlie's Blog.