TOAST

Books by Charles Stross
Singularity Sky

The Atrocity Archive

Iron Sunrise

The Family Trade

The Hidden Family

Accelerando

TOAST

Charles Stross

COSMOS BOOKS

TOAST

Copyright © 2002; revised and expanded, 2006, by Charles Stross. All rights reserved.

Cover design copyright © 2006 by Juha Lindroos.

No portion of this book may be reproduced by any means, mechanical, electronic, or otherwise, without first obtaining the permission of the copyright holder.

For more information, contact Wildside Press.
www.wildsidepress.com

ISBN: 0-8095-5603-0

Contents

Introduction

Antibodies

Bear Trap

Extracts from the Club Diary

A Colder War

TOAST: A Con Report

Ship of Fools

Dechlorinating the Moderator

Yellow Snow

Big Brother Iron

Lobsters

CAUTION
Highly flammable. Keep away from naked flame, hot surface or other sources of ignition—no smoking. Keep away from food, drink, and animal feeding stuffs. Keep out of reach of children. This product contains extract of H. P. Lovecraft and George Orwell. Do not swallow. In case of contact with eyes, wash thoroughly with water. This product contains wood pulp from renewably harvested trees. Wash hands after use. Safe to use with septic tanks. In event of ingestion, consult a physician. May contain traces of nuts . . .

Introduction:

After the Future Imploded
“The future has imploded into the present,” wrote Gareth Branwyn, in a famously bombastic manifesto that Billy Idol recycled in an even more bombastic multimedia album in 1992.

What happens when the future implodes into the past?

The last century saw an amazing flowering of futures. Galactic empires exploded across reams of yellowing woodpulp; meanwhile, the Futurist movement spawned bizarre political monsters that battled across continents. Both fascism and Bolshevism were expressions of belief in a utopian ideal, however misplaced and bloody their methodologies. Meanwhile, advertising mutated from a cottage industry for printers into a many-tongued hydra that promised us a cleaner, brighter present. Today’s marketing spin is descended from yesterday’s brainwashing techniques: propaganda principles pioneered by Goebbels are now common property.

The sheer speed with which change swept over the twentieth century, bearing us all towards some unseen crescendo, was a tonic for the imagination. Science fiction wouldn’t have flourished in an earlier era—it took a time of change, when children growing up with horse-drawn carriages would fly around the world on jet engines, to make plausible the dreams of continuous progress that this genre is based on.

But the pace of change isn’t slackening. If anything, it’s accelerating; the coming century is going to destroy futures even faster than the last one created them. This collection of short stories contains no work more than a decade old. Nevertheless, one of these stories is already a fossil—past a sell-by date created by the commercial data processing industry—and the others aren’t necessarily that far behind.

How did things get to be this way?

Delta(Change/t)
Until about the third millennium BC, there was no noticeable change in social patterns on any time scale measured in less than centuries. Around that time, the first permanent settlements that we’d recognize as towns arose, facilitated by the discovery of agriculture. With them appeared writing and codified law and the rudiments of government.

From that time on, there was no turning back. An agricultural civilization can support far more people in a given area than a hunter-gatherer lifestyle—but the transition from a hunter-gatherer society to agriculture is strictly a one-way process. If you try to reverse it, most of your people will starve to death: they simply won’t be able to acquire enough food. This was the first of many such one-way processes in the historical record. Arguably, it’s the existence of these one-way transitions that gives rise to the appearance of inexorable historical progress; it’s not that reversals are impossible, it’s simply that after a reversal there’ll be nobody left to keep a written record of it.

The twentieth century was riddled with one-way technological changes. (For example, once the atom bomb had been invented, even if the Manhattan Project had been quietly disbanded and its records destroyed there would have been no way of preventing its rediscovery.) And such one-way changes have come even faster with every passing decade. There are more people alive today than ever before—and a higher proportion of them are scientists and engineers who contribute to the pace of change.

Some changes come from unexpected directions. Take Moore’s Law, for example. Moore’s Law had a far greater effect on the latter third of the 20th century than the lunar landings of the Apollo program (and it was formulated around the same time), but relatively few people know of it. Certainly, back in 1968 nobody (except possibly Gordon Moore himself) would have expected it to result in the world we see today . . .

Gordon Moore was a senior engineer working at a small company near Palo Alto, a spin-off of Fairchild Semiconductor. His new company was in business to produce integrated circuits—lumps of silicon with transistors and resistors etched onto them by photolithography. Moore noticed something interesting about the efficiency of these circuits. Silicon is a semiconductor: it conducts electricity, but has a markedly higher resistivity than a good conductor (like, say, copper). When you push electric current through a resistor you get heat, and the more resistance you push it through, the more of the energy ends up warming the environment instead of doing useful work.

Moore noticed that as circuits grew physically smaller, less electricity was dissipated as heat. Moreover, as distances shrank the maximum switching speed of a circuit increased! So smaller circuits were not only more energy efficient, but ran faster. Putting this together with what he knew about the methods of chip design, Moore formulated his law: that microprocessor speeds would double every eighteen months, as circuit sizes shrank by the same factor, until limits imposed by quantum mechanics intervened.

Moore was wrong—they’re now doubling every fifteen months and accelerating.
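
To get a feel for what those doubling periods imply, here is a back-of-the-envelope sketch in Python (my own illustration; the only inputs are the eighteen- and fifteen-month figures quoted above). The cumulative speed-up over any span of time is simply two raised to the number of doubling periods that fit into it:

# Rough arithmetic only: cumulative speed-up = 2 ** (elapsed time / doubling period).

def growth_factor(years: float, doubling_months: float) -> float:
    """Multiplicative speed-up after `years`, doubling every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

for period_months in (18, 15):  # the two doubling periods quoted above
    print(f"Doubling every {period_months} months: "
          f"roughly x{growth_factor(10, period_months):.0f} over a decade")

# Doubling every 18 months: roughly x102 over a decade
# Doubling every 15 months: roughly x256 over a decade

Note how sensitive the outcome is: shortening the doubling period from eighteen months to fifteen multiplies the decade’s accumulated speed-up by roughly two and a half.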

The interesting thing about Moore’s law is that although it was clearly true in 1968, nobody in the SF field came close to grasping its implications until the early 1980s, by which time personal computers had already become nearly ubiquitous. Moore’s law was a classic example of an exponentiating change—one that starts off from a very low level, lumbers along for a while (invisible to all but specialists in the field), then explodes onto the scene with mind-numbing speed. Since 1970 we’ve had exponentiating changes coming out of our ears: genetically modified organisms, the growth of the internet, the spread of the PC, the network-enabled mobile phone, and—coming soon, to a planet near you—nanotechnology.

Back in 1988, nanotechnology looked like SF. By 1998 it was making eminent scientists scratch their heads and explain, in the pages of Scientific American, why it couldn’t possibly work. By 2000 it was a multi-hundred-million-dollar industry, and in another two years we can expect to see the first nanotech IPOs hitting the stock market.

(Doubtless it’ll toast the credibility of a few more of the stories in this collection before it’s over.)

The past through the future
Short science fiction stories are historical documents; they illustrate the author’s plausible expectations of the future at a specific point in time. (I’m leaving aside the implausible fictions—there’re enough of them in this collection—as they tell us something different about the author’s expectations of human behaviour and what constitutes wholesome, or at least saleable, entertainment.)

All the stories in this collection are artefacts of the age of Moore’s law. All of them were written with word processing software rather than the pen; the earliest of them (Yellow Snow) dates to 1989, about the time I was discovering the internet. Moore’s Law has already rendered some of these stories obsolete—notably Ship of Fools, my Y2K story from 1994. (Back then people tended to look at me blankly whenever I mentioned the software epoch in public.) Yellow Snow is also looking distinctly yellowed around the edges, ten years on, with the Human Genome Project a nearly done deal.

Part of the problem facing any contemporary hard SF writer is the fogbank of accelerating change that has boiled up out of nowhere to swallow our proximate future. Computer scientist and author Vernor Vinge coined the term “singularity” to describe this; a singularity, in mathematics, is a point at which a function’s value runs away towards infinity. At the singularity, the rate of change of technology becomes infinite; we can’t predict what lies beyond it.

In a frightening essay on the taxonomy of artificial intelligence, published in Whole Earth Review in 1994, Vinge pointed out that if it is possible to create an artificial intelligence (specifically a conscious software construct) equivalent to a human mind, then it is possible to create one that is faster than a human mind—just run it on a faster computer. Such a weakly superhuman AI can design ever-faster hardware for itself, amplifying its own capabilities. Or it could carry out research into better, higher orders of artificial sentience, possibly transforming itself into a strongly superhuman AI: an entity with thought processes as comprehensible to us as ours are to a dog or cat. The thrust of Vinge’s argument was that if artificial intelligence is possible, then it amounts to a singularity in the history of technological progress; we cannot possibly predict what life will be like on the far side of it, because the limiting factor on our projections is our own minds, and the minds that are driving progress beyond that point will have different limits.

Hans Moravec, Professor of Robotics at Carnegie Mellon University, holds similar views. In his book Robot: Mere Machine to Transcendent Mind he provides some benchmark estimates of the computational complexity of a human brain, and some calculations of the point at which sufficient computing power to match it will be achieved. These estimates are very approximate, but the nature of exponential growth is that even a gigantic thousandfold error in his calculation—three orders of magnitude—merely pushes the timescale for a human-equivalent computing system back seven or eight years. If we take Moravec’s guesstimate as gospel, we’ve got about thirty years left on top of the brains trust. Then . . . singularity.
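
The arithmetic behind that claim is worth spelling out. The rough sketch below is my own illustration; the doubling periods in it are assumptions chosen for the example, not figures taken from Moravec. A thousandfold shortfall costs about ten extra doublings, since two to the tenth power is 1024, and the calendar delay is simply that many doubling periods:

import math

# A thousandfold underestimate of the computing power required only costs
# log2(1000) extra doublings of hardware performance -- about ten of them.

error_factor = 1_000
doublings_needed = math.log2(error_factor)
print(f"{doublings_needed:.1f} extra doublings")  # 10.0 extra doublings

# How many calendar years that is depends on the doubling period you assume:
for months in (9, 12, 15):
    print(f"doubling every {months:>2} months -> {doublings_needed * months / 12:.1f} years")
# doubling every  9 months -> 7.5 years
# doubling every 12 months -> 10.0 years
# doubling every 15 months -> 12.5 years

Whichever doubling period you assume, being wrong by three orders of magnitude delays the forecast by years rather than generations, which is Moravec’s point.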

These projections implicitly assume that Cartesian dualists and believers in an immaterial soul (exemplified by arch-AI skeptic and mathematician Roger Penrose, and philosopher John Searle) are wrong. The assumption that intelligence is a computational process is that of a materialist generation that confidently looks forward across a vast algorithmic gulf and sees no limit to its potential. Just as clockwork was the preferred metaphor for cosmology and biology in the eighteenth and nineteenth centuries, so today the computer has subsumed our vision of the future. As with the clockwork mechanics of the age of enlightenment, it may turn out to be a potent but ultimately limited vision: but unlike the earlier metaphor, computing lends us a fascinating abstraction, the idea of virtualisation—of simulations that are functionally equivalent to the original template.

It’s very hard to refute the idea of the software singularity with today’s established knowledge, just as it was hard to disprove the idea of heavier-than-air flight in the 1860s. Even if we never achieve a working AI, there are lesser substitutes that promise equivalent power. Much work is currently being done on direct brain-to-computer interfaces: Vinge discusses at length the possibilities for Augmented Intelligence as opposed to Artificial Intelligence, and these are at least as startling as the real McCoy.

The real problem for SF writers is that these fundamental changes in the way human minds work—and, later, minds in general—are due to kick in some time in the next ten to twenty years. Today, I have at my fingertips a workstation more powerful than the most advanced supercomputer of 1988, with a permanent internet connection leading me to a huge, searchable library of information. (In other words, I’ve got a workstation with a web browser.) Where will we be in another decade? Imagine a prosthetic memory wired into your spectacle frames, recording everything you see and available to prompt you at a whispered command. Far-fetched? Prototypes exist in places like the Media Lab today; in a decade such devices will cost no more than a home PC, and the decade after that they’ll be giving them away in breakfast cereal packets.

I don’t think predicting the spread of intelligence-augmenting technologies is over-optimistic. Cellular mobile telephone services were introduced in the UK in 1985. Early mobile phones cost half as much as a car and were the size of a brick; coverage was poor. The phone companies expected to have a total market of 50,000 phones by the year 2000. We’re now in that very year. Phones are small enough to fit in a shirt pocket and cheap enough that the last time I bought one it cost less than the leather case I bought at the same time. More than 50,000 mobile phones are sold every day; parents give them to schoolchildren so they can keep in touch. Gadgets that fill a human need (like communication) proliferate like crazy, far faster than we expect, and increasing our own intelligence probably falls into the same category: the only reason people aren’t clamouring for it today is that they don’t know what they’re missing.

This sort of change—the spread of mobile phones, or of ubiquitous high-bandwidth memory prostheses—is quite possible within a time span shorter than the time between my writing the first story in this collection (1989) and the publication of this book (2003). It plays merry hell with a writer’s ability to plot a story or novel convincingly; think how many dramas used to rely on the hero’s telephone wire being cut to stop them calling for help! The future isn’t going to be like the past any more—not even the near future, five or ten years away, and there’s no way of predicting where the weird but ubiquitous changes will come from.

Let me give a more concrete example to illustrate the headaches that come from prognosticating about the future. Suppose I exercise my authorial fiat and write a time machine into this essay. Nothing fancy; just a gadget for driving around the last century. Jumping into the saddle of my time machine I slide the crystal rod backwards, setting the controls for 1901. On arrival, I proceed at once to—where else?—the residence of one Mr Herbert George Wells, writer and journalist.

I knock on the door and, sidestepping the housekeeper, introduce myself as a Time traveller from 2001, validating my credentials with a solar-powered pocket calculator and a digital watch. Mr Wells is fascinated; he quite naturally asks me questions about my era. Unfortunately I am constrained by the law of temporal paradox evasion (not to mention the time police, and the exigencies of essay writing), so when he asks a question I can only say, “yes,” “no,” or state a known quantitative fact:

(Fade out, pursued by a bloodthirsty temporal paradox.)

The upshot for the future
Measures of economic success vary with time. To a Victorian economist, prosperity was a function of the conversion of raw materials: coal, steel, bauxite, ships launched, trains built, houses erected. The concept of floating currencies was quite alien to the world-view of the day. Energy was measured in tons of coal mined; fuel oil was mostly irrelevant, just paraffin to burn in lamps.

The idea of the automobile industry reaching its contemporary size was ludicrous; the transport infrastructure was principally rail- and ship-based, and automotive technology was not stable enough for a mass market. Nor did a road infrastructure exist that would support widespread car ownership. H. G. Wells, who was nevertheless a visionary, predicted an aviation industry (in The Sleeper Awakes) and envisaged huge networks of moving roads; but he still didn’t realise that aviation might affect the holiday habits of millions, shrinking the world of 2002 to the size of the Great Britain of 1902 in terms of travel time. The idea that, by 2000, 45% of the population of a post-imperial Britain would classify themselves as “middle class” would have struck a turn-of-the-century socialist as preposterous, and not even a lunatic would suggest that the world’s largest industry would be devoted to the design of imaginary machines—software—with no physical existence.

We live in a world which, by the metrics of Victorian industrial consumption, is poverty-stricken; nevertheless, we are richer than ever before. Apply our own metrics to the Victorian age and the Victorians appear poor. The definition of what is valuable changes over time, and with it change our social values. As AI and computer speech recognition pioneer Raymond Kurzweil pointed out in The Age of Spiritual Machines, the first decade of the twenty-first century will see more change than the latter half of the twentieth.

To hammer the last nail into the coffin of predictive SF, our personal values are influenced by our social environment. Our environment is in turn dependent on these economic factors. Human nature itself changes over time—and the rate of change of human nature is not constant. For thousands of years, people expected some of their children to die before adulthood; only in the past two or three generations has this come to be seen as a major tragedy, a destroyer of families. Access to transportation and privacy caused a chain reaction in social relationships between the genders in the middle of the twentieth century, a tipping of the balance that is still causing considerable social upheaval. When we start changing our own minds and the way we think, or creating new types of being to do the thinking, we’ll finally be face-to-face with that rolling fog bank; at that point, the future becomes unknowable to us. Worse: yesterday’s futures are ruled out of today’s future, like the plot that hangs on a severed telegraph wire or a Victorian beau’s cancelled betrothal.

Which brings me, briefly, to the short stories in this collection. They appeared between 1989 and 2000, and they’re coloured by my own understanding of that period. In a very real way they’re historical documents. Yellow Snow, the first-written of these stories, hitched a ride on the post-cyberpunk surf; nevertheless it’s dated by the technology of the day. (No internet here: just a strangely intelligent environment.) Ship of Fools dives headlong into the future and crashes messily up against January the First, 2000—hopefully with more grace than many of the consultants who were selling us all on doom and gloom back then. Toast takes Moore’s Law to its logical conclusion, while Antibodies cross-fertilises Vinge’s singularity with the anthropic cosmological principle and some of Moravec’s odder speculations about quantum mechanics’ many-universes hypothesis, producing an unsettling stew: but both these stories are brittle, subject to a resounding technological refutation that could happen at any moment. I wouldn’t bet on Dechlorinating the Moderator looking anything but quaint in a decade, either.

Like all alternate histories, Big Brother Iron and A Colder War both run up against the built-in obsolescence inherent in the genre, fleeing sideways into “what if we hadn’t done that?” In the case of Big Brother Iron, we ask “what if the nightmare of totalitarianism envisaged in Orwell’s 1984 had progressed from Stalinism to overtly Brezhnevite decay?” while A Colder War takes an extrapolative look at Lovecraft’s At the Mountains of Madness. In neither case can we treat these as models of futurism. In contrast, Lobsters, written in the spring of 1999 amidst the chaos of working for a dot-com that was growing like Topsy, goes eyeball to hairy eyeball with the near future: the version of the story in this book is the original one, tweaked slightly for 2000-era technologies, rather than the updated version that forms the first chapter of Accelerando.