Selected Essays on Technology, Creativity, Copyright and the Future of the Future



Free(konomic) E-books
(Originally published in Locus Magazine, September 2007)
Can giving away free electronic books really sell printed books? I think so. As I explained in my March column ("You Do Like Reading Off a Computer Screen"), I don't believe that most readers want to read long-form works off a screen, and I don't believe that they will ever want to read long-form works off a screen. As I say in the column, the problem with reading off a screen isn't resolution, eyestrain, or compatibility with reading in the bathtub: it's that computers are seductive, they tempt us to do other things, making concentrating on a long-form work impractical.
Sure, some readers have the cognitive quirk necessary to read full-length works off screens, or are motivated to do so by other circumstances (such as being so broke that they could never hope to buy the printed work). The rational question isn't, "Will giving away free e-books cost me sales?" but rather, "Will giving away free e-books win me more sales than it costs me?"
This is a very hard proposition to evaluate in a quantitative way. Books aren't lattes or cable-knit sweaters: each book sells (or doesn't) due to factors that are unique to that title. It's hard to imagine an empirical, controlled study in which two "equivalent" books are published, and one is also available as a free download, the other not, and the difference calculated as a means of "proving" whether e-books hurt or help sales in the long run.
I've released all of my novels as free downloads simultaneous with their print publication. If I had a time machine, I could re-release them without the free downloads and compare the royalty statements. Lacking such a device, I'm forced to draw conclusions from qualitative, anecdotal evidence, and I've collected plenty of that:


  • Many writers have tried free e-book releases to tie in with the print release of their works. To the best of my knowledge, every writer who's tried this has repeated the experiment with future works, suggesting a high degree of satisfaction with the outcome.



  • A writer friend of mine had his first novel come out at the same time as mine. We write similar material and are often compared to one another by critics and reviewers. My first novel had a free download; his didn't. We compared sales figures, and mine were substantially better -- he subsequently convinced his publisher to let him follow suit.



  • Baen Books has a pretty good handle on expected sales for new volumes in long-running series; having sold many such series, they have lots of data to draw on: if Volume N sells X copies, they expect Volume N+1 to sell Y copies. They report a measurable uptick in sales following free e-book releases of previous and current volumes.



  • David Blackburn, a Harvard PhD candidate in economics, published a paper in 2004 in which he calculated that, for music, "piracy" results in a net increase in sales for all titles in the 75th percentile and lower; negligible change in sales for the "middle class" of titles between the 75th and 97th percentiles; and a small drag on the "super-rich" titles in the 97th percentile and higher. Publisher Tim O'Reilly describes this as "piracy's progressive taxation": a small wealth-redistribution to the vast majority of works, no net change for the middle, and a small cost to the richest few.



  • Speaking of Tim O'Reilly, he has just published a detailed, quantitative study of the effect of free downloads on a single title. O'Reilly Media published Asterisk: The Future of Telephony in November 2005, simultaneously releasing the book as a free download. By March 2007, they had a pretty detailed picture of the sales cycle of this book -- and, thanks to industry-standard metrics like those provided by Bookscan, they could compare it, apples to apples, against the performance of competing books on the same subject. O'Reilly's conclusion: the free download didn't cause a decline in sales, and appears to have produced a lift in sales. This is particularly noteworthy because the book in question is a technical reference work, consumed almost exclusively by computer programmers, who are by definition disposed to read off screens. Also, as a reference work it is more likely to be useful in electronic form, where it can be easily searched.



  • In my case, my publishers have gone back to press repeatedly for my books. The print runs for each edition are modest -- I'm a midlist writer in a world with a shrinking midlist -- but publishers print what they think they can sell, and my books are outselling those expectations.



  • The new opportunities arising from my free downloads are too numerous to count: foreign rights deals, comic-book licenses, speaking engagements, article commissions. I've made more money in these secondary markets than I have in royalties.



  • More anecdotes: I've had literally thousands of people approach me by e-mail and at signings and cons to say, "I found your work online for free, got hooked, and started buying it." By contrast, I've had all of five e-mails from people saying, "Hey, idiot, thanks for the free book, now I don't have to buy the print edition, ha ha!"

Many of us have assumed, a priori, that electronic books substitute for print books. While I don't have controlled, quantitative data to refute the proposition, I do have plenty of experience with this stuff, and all that experience leads me to believe that giving away my books is selling the hell out of them.


More importantly, the free e-book skeptics have no evidence to offer in support of their position -- just hand-waving and dark muttering about a mythological future when book-lovers give up their printed books for electronic book-readers (as opposed to the much more plausible future where book lovers go on buying their fetish objects and carry books around on their electronic devices).
I started giving away e-books after I witnessed the early days of the "bookwarez" scene, wherein fans cut the binding off their favorite books, scanned them, ran them through optical character recognition software, and manually proofread them to eliminate the digitization errors. These fans were easily spending 80 hours to rip their favorite books, and they were only ripping their favorite books, books they loved and wanted to share. (The 80-hour figure comes from my own attempt to do this -- I'm sure that rippers get faster with practice.)
I thought to myself that 80 hours' free promotional effort would be a good thing to have at my disposal when my books entered the market. What if I gave my readers clean, canonical electronic editions of my works, saving them the bother of ripping them, and so freed them up to promote my work to their friends?
After all, it's not like there's any conceivable way to stop people from putting books on scanners if they really want to. Scanners aren't going to get more expensive or slower. The Internet isn't going to get harder to use. Better to confront this challenge head on, turn it into an opportunity, than to rail against the future (I'm a science fiction writer -- tuning into the future is supposed to be my metier).
The timing couldn't have been better. Just as my first novel was being published, a new, high-tech project for promoting sharing of creative works launched: the Creative Commons project (CC). CC offers a set of tools that make it easy to mark works with whatever freedoms the author wants to give away. CC launched in 2003 and today, more than 160,000,000 works have been released under its licenses.
My next column will go into more detail on what CC is, what licenses it offers, and how to use them -- but for now, check them out online at creativecommons.org.
The Progressive Apocalypse and Other Futurismic Delights
(Originally published in Locus Magazine, July 2007)
Of course, science fiction is a literature of the present. Many's the science fiction writer who uses the future as a warped mirror for reflecting back the present day, angled to illustrate the hidden strangeness buried by our invisible assumptions: Orwell turned 1948 into Nineteen Eighty-Four. But even when the fictional future isn't a parable about the present day, it is necessarily a creation of the present day, since it reflects the present-day biases that infuse the author. Hence Asimov's Foundation, a New Deal-esque project to think humanity out of its tribulations through social interventionism.
Bold SF writers eschew the future altogether, embracing a futuristic account of the present day. William Gibson's forthcoming Spook Country is an act of "speculative presentism," a book so futuristic it could only have been set in 2006, a book that exploits retrospective historical distance to let us glimpse just how alien and futuristic our present day is.
Science fiction writers aren't the only people in the business of predicting the future. Futurists -- consultants, technology columnists, analysts, venture capitalists, and entrepreneurial pitchmen -- spill a lot of ink, phosphors, and caffeinated hot air in describing a vision for a future where we'll get more and more of whatever it is they want to sell us or warn us away from. Tomorrow will feature faster, cheaper processors, more Internet users, ubiquitous RFID tags, radically democratic political processes dominated by bloggers, massively multiplayer games whose virtual economies dwarf the physical economy.
There's a lovely neologism to describe these visions: "futurismic." Futurismic media is that which depicts futurism, not the future. It is often self-serving -- think of the antigrav Nikes in Back to the Future II -- and it generally doesn't hold up well to scrutiny.
SF films and TV are great fonts of futurismic imagery: R2D2 is a fully conscious AI, can hack the firewall of the Death Star, and is equipped with a range of holographic projectors and antipersonnel devices -- but no one has installed a $15 sound card and some text-to-speech software on him, so he has to whistle like Harpo Marx. Or take the Starship Enterprise, with a transporter capable of constituting matter from digitally stored plans, and radios that can breach the speed of light.
The non-futurismic version of NCC-1701 would be the size of a softball (or whatever the minimum size for a warp drive, transporter, and subspace radio would be). It would zip around the galaxy at FTL speeds under remote control. When it reached an interesting planet, it would beam a stored copy of a landing party onto the surface, and when their mission was over, it would beam them back into storage, annihilating their physical selves until they reached the next stopping point. If a member of the landing party were eaten by a green-skinned interspatial hippie or giant toga-wearing galactic tyrant, that member would be recovered from backup by the transporter beam. Hell, the entire landing party could consist of multiple copies of the most effective crewmember onboard: no redshirts, just a half-dozen instances of Kirk operating in clonal harmony.
Futurism has a psychological explanation, as recounted in Harvard clinical psych prof Daniel Gilbert's 2006 book, Stumbling on Happiness. Our memories and our projections of the future are necessarily imperfect. Our memories consist of those observations our brains have bothered to keep records of, woven together with inference and whatever else is lying around handy when we try to remember something. Ask someone who's eating a great lunch how breakfast was, and odds are she'll tell you it was delicious. Ask the same question of someone eating rubbery airplane food, and he'll tell you his breakfast was awful. We weave the past out of our imperfect memories and our observable present.
We make the future in much the same way: we use reasoning and evidence to predict what we can, and whenever we bump up against uncertainty, we fill the void with the present day. Hence the injunction on women soldiers in the future of Starship Troopers, or the bizarre, glassed-over "Progressland" city diorama at the end of the 1964 World's Fair exhibit The Carousel of Progress, which Disney built for GE.
Lapsarianism -- the idea of a paradise lost, a fall from grace that makes each year worse than the last -- is the predominant future feeling for many people. It's easy to see why: an imperfectly remembered golden childhood gives way to the worries of adulthood and physical senescence. Surely the world is getting worse: nothing tastes as good as it did when we were six, everything hurts all the time, and our matured gonads drive us into frenzies of bizarre, self-destructive behavior.
Lapsarianism dominates the Abrahamic faiths. I have an Orthodox Jewish friend whose tradition holds that each generation of rabbis is necessarily less perfect than the rabbis that came before, since each generation is more removed from the perfection of the Garden. Therefore, no rabbi is allowed to overturn any of his forebears' wisdom, since they are all, by definition, smarter than him.
The natural endpoint of Lapsarianism is apocalypse. If things get worse, and worse, and worse, eventually they'll just run out of worseness. Eventually, they'll bottom out, a kind of rotten death of the universe when Lapsarian entropy hits the nadir and takes us all with it.
Running counter to Lapsarianism is progressivism: the Enlightenment ideal of a world of great people standing on the shoulders of giants. Each of us contributes to improving the world's storehouse of knowledge (and thus its capacity for bringing joy to all of us), and our descendants and proteges take our work and improve on it. The very idea of "progress" runs counter to the idea of Lapsarianism and the fall: it is the idea that we, as a species, are falling in reverse, combing back the wild tangle of entropy into a neat, tidy braid.
Of course, progress must also have a boundary condition -- if only because we eventually run out of imaginary ways that the human condition can improve. And science fiction has a name for the upper bound of progress, a name for the progressive apocalypse:
We call it the Singularity.
Vernor Vinge's Singularity takes place when our technology reaches a stage that allows us to "upload" our minds into software, run them at faster, hotter speeds than our neurological wetware substrate allows for, and create multiple, parallel instances of ourselves. After the Singularity, nothing is predictable because everything is possible. We will cease to be human and become (as the title of Rudy Rucker's next novel would have it) Postsingular.
The Singularity is what happens when we have so much progress that we run out of progress. It's the apocalypse that ends the human race in rapture and joy. Indeed, Ken MacLeod calls the Singularity "the rapture of the nerds," an apt description for the mirror-world progressive version of the Lapsarian apocalypse.
At the end of the day, both progress and the fall from grace are illusions. The central thesis of Stumbling on Happiness is that human beings are remarkably bad at predicting what will make us happy. Our predictions are skewed by our imperfect memories and our capacity for filling the future with the present day.
The future is gnarlier than futurism. NCC-1701 probably wouldn't send out transporter-equipped drones -- instead, it would likely find itself on missions whose ethos, mores, and rationale are largely incomprehensible to us, and so obvious to its crew that they couldn't hope to explain them.
Science fiction is the literature of the present, and the present is the only era that we can hope to understand, because it's the only era that lets us check our observations and predictions against reality.
When the Singularity is More Than a Literary Device: An Interview with Futurist-Inventor Ray Kurzweil
(Originally published in Asimov's Science Fiction Magazine, June 2005)
It's not clear to me whether the Singularity is a technical belief system or a spiritual one.
The Singularity -- a notion that's crept into a lot of skiffy, and whose most articulate in-genre spokesmodel is Vernor Vinge -- describes the black hole in history that will be created at the moment when human intelligence can be digitized. When the speed and scope of our cognition is hitched to the price-performance curve of microprocessors, our "progress" will double every eighteen months, and then every twelve months, and then every ten, and eventually, every five seconds.
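A shrinking doubling interval is what makes this more than mere fast growth: if each doubling takes some fixed fraction of the time the one before it did, the intervals form a geometric series, and infinitely many doublings fit into a finite span of time. Here's a minimal sketch of that arithmetic; the three-quarters ratio is purely an illustrative assumption, not a figure from Vinge or anyone else:

    # Illustrative only: if each doubling of capability takes, say, three quarters
    # as long as the doubling before it, the intervals form a geometric series and
    # every doubling there will ever be fits inside a finite span of time -- the
    # intuition behind calling the endpoint a "singularity."
    first_interval = 18.0  # months for the first doubling, per the text
    ratio = 0.75           # assumed shrink factor for each subsequent interval
    total_time = first_interval / (1 - ratio)  # sum of the infinite geometric series
    print(total_time)      # 72.0 months: infinitely many doublings inside six years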
Singularities are, literally, holes in space from whence no information can emerge, and so SF writers occasionally mutter about how hard it is to tell a story set after the information Singularity. Everything will be different. What it means to be human will be so different that what it means to be in danger, or happy, or sad, or any of the other elements that make up the squeeze-and-release tension in a good yarn will be unrecognizable to us pre-Singletons.
It's a neat conceit to write around. I've committed Singularity a couple of times, usually in collaboration with gonzo Singleton Charlie Stross, the mad antipope of the Singularity. But those stories have the same relation to futurism as romance novels do to love: a shared jumping-off point, but radically different morphologies.
Of course, the Singularity isn't just a conceit for noodling with in the pages of the pulps: it's the subject of serious-minded punditry, futurism, and even science.
Ray Kurzweil is one such pundit-futurist-scientist. He's a serial entrepreneur who founded successful businesses that advanced the fields of optical character recognition (machine-reading) software, text-to-speech synthesis, synthetic musical instrument simulation, computer-based speech recognition, and stock-market analysis. He cured his own Type-II diabetes through a careful review of the literature and the judicious application of first principles and reason. To a casual observer, Kurzweil appears to be the star of some kind of Heinlein novel, stealing fire from the gods and embarking on a quest to bring his maverick ideas to the public despite the dismissals of the establishment, getting rich in the process.
Kurzweil believes in the Singularity. In his 1990 manifesto, "The Age of Intelligent Machines," Kurzweil persuasively argued that we were on the brink of meaningful machine intelligence. A decade later, he continued the argument in a book called The Age of Spiritual Machines, whose most audacious claim is that the world's computational capacity has been slowly doubling since the crust first cooled (and before!), and that the doubling interval has been growing shorter and shorter with each passing year, so that now we see it reflected in the computer industry's Moore's Law, which predicts that microprocessors will get twice as powerful for half the cost about every eighteen months. The breathtaking sweep of this trend has an obvious conclusion: computers more powerful than people; more powerful than we can comprehend.
Now Kurzweil has published two more books, The Singularity Is Near: When Humans Transcend Biology (Viking, Spring 2005) and Fantastic Voyage: Live Long Enough to Live Forever (with Terry Grossman, Rodale, November 2004). The former is a technological roadmap for creating the conditions necessary for ascent into Singularity; the latter is a book about life-prolonging technologies that will assist baby-boomers in living long enough to see the day when technological immortality is achieved.
See what I meant about his being a Heinlein hero?
I still don't know if the Singularity is a spiritual or a technological belief system. It has all the trappings of spirituality, to be sure. If you are pure and kosher, if you live right and if your society is just, then you will live to see a moment of Rapture when your flesh will slough away leaving nothing behind but your ka, your soul, your consciousness, to ascend to an immortal and pure state.
I wrote a novel called Down and Out in the Magic Kingdom where characters could make backups of themselves and recover from them if something bad happened, like catching a cold or being assassinated. It raises a lot of existential questions, most prominently: are you still you when you've been restored from backup?
The traditional AI answer is the Turing Test, invented by Alan Turing, the gay pioneer of cryptography and artificial intelligence who was forced by the British government to take hormone treatments to "cure" him of his homosexuality, culminating in his suicide in 1954. Turing cut through the existentialism about measuring whether a machine is intelligent by proposing a parlor game: a computer sits behind a locked door with a chat program, and a person sits behind another locked door with his own chat program, and they both try to convince a judge that they are real people. If the computer fools a human judge into thinking that it's a person, then to all intents and purposes, it's a person.
So how do you know if the backed-up you that you've restored into a new body -- or a jar with a speaker attached to it -- is really you? Well, you can ask it some questions, and if it answers the same way that you do, you're talking to a faithful copy of yourself.
Sounds good. But the me who sent his first story into Asimov's seventeen years ago couldn't answer the question, "Write a story for Asimov's" the same way the me of today could. Does that mean I'm not me anymore?
Kurzweil has the answer.
"If you follow that logic, then if you were to take me ten years ago, I could not pass for myself in a Ray Kurzweil Turing Test. But once the requisite uploading technology becomes available a few decades hence, you could make a perfect-enough copy of me, and it would pass the Ray Kurzweil Turing Test. The copy doesn't have to match the quantum state of my every neuron, either: if you meet me the next day, I'd pass the Ray Kurzweil Turing Test. Nevertheless, none of the quantum states in my brain would be the same. There are quite a few changes that each of us undergo from day to day, we don't examine the assumption that we are the same person closely.
"We gradually change our pattern of atoms and neurons but we very rapidly change the particles the pattern is made up of. We used to think that in the brain -- the physical part of us most closely associated with our identity -- cells change very slowly, but it turns out that the components of the neurons, the tubules and so forth, turn over in only days. I'm a completely different set of particles from what I was a week ago.
"Consciousness is a difficult subject, and I'm always surprised by how many people talk about consciousness routinely as if it could be easily and readily tested scientifically. But we can't postulate a consciousness detector that does not have some assumptions about consciousness built into it.
"Science is about objective third party observations and logical deductions from them. Consciousness is about first-person, subjective experience, and there's a fundamental gap there. We live in a world of assumptions about consciousness. We share the assumption that other human beings are conscious, for example. But that breaks down when we go outside of humans, when we consider, for example, animals. Some say only humans are conscious and animals are instinctive and machinelike. Others see humanlike behavior in an animal and consider the animal conscious, but even these observers don't generally attribute consciousness to animals that aren't humanlike.
"When machines are complex enough to have responses recognizable as emotions, those machines will be more humanlike than animals."
The Kurzweil Singularity goes like this: computers get better and smaller. Our ability to measure the world gains precision and grows ever cheaper. Eventually, we can measure the world inside the brain and make a copy of it in a computer that's as fast and complex as a brain, and voila, intelligence.
Here in the twenty-first century we like to view ourselves as ambulatory brains, plugged into meat-puppets that lug our precious grey matter from place to place. We tend to think of that grey matter as transcendently complex, and we think of it as being the bit that makes us us.
But brains aren't that complex, Kurzweil says. Already, we're starting to unravel their mysteries.
"We seem to have found one area of the brain closely associated with higher-level emotions, the spindle cells, deeply embedded in the brain. There are tens of thousands of them, spanning the whole brain (maybe eighty thousand in total), which is an incredibly small number. Babies don't have any, most animals don't have any, and they likely only evolved over the last million years or so. Some of the high-level emotions that are deeply human come from these.
"Turing had the right insight: base the test for intelligence on written language. Turing Tests really work. A novel is based on language: with language you can conjure up any reality, much more so than with images. Turing almost lived to see computers doing a good job of performing in fields like math, medical diagnosis and so on, but those tasks were easier for a machine than demonstrating even a child's mastery of language. Language is the true embodiment of human intelligence."
If we're not so complex, then it's only a matter of time until computers are more complex than us. When that comes, our brains will be model-able in a computer and that's when the fun begins. That's the thesis of Spiritual Machines, which even includes a (Heinlein-style) timeline leading up to this day.
Now, it may be that a human brain contains n logic-gates and runs at x cycles per second and stores z petabytes, and that n and x and z are all within reach. It may be that we can take a brain apart and record the position and relationships of all the neurons and sub-neuronal elements that constitute a brain.
But there are also a nearly infinite number of ways of modeling a brain in a computer, and only a finite (or possibly nonexistent) fraction of that space will yield a conscious copy of the original meat-brain. Science fiction writers usually hand-wave this step: in Heinlein's "The Moon Is a Harsh Mistress," the gimmick is that once the computer becomes complex enough, with enough "random numbers," it just wakes up.
Computer programmers are a little more skeptical. Computers have never been known for their skill at programming themselves -- they tend to be no smarter than the people who write their software.
But there are techniques for getting computers to program themselves, based on evolution and natural selection. A programmer creates a system that spits out lots -- thousands or even millions -- of randomly generated programs. Each one is given the opportunity to perform a computational task (say, sorting a list of numbers from greatest to least), and the ones that solve the problem best are kept aside while the others are erased. The survivors are then used as the basis for a new generation of randomly mutated descendants, each based on elements of the code that preceded it. By running many instances of a randomly varied program at once, and by quickly culling the least successful and regenerating the population from the winners, it is possible to evolve effective software that performs as well as, or better than, code written by human authors.
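Here is a minimal sketch of that loop in Python. Everything in it -- representing a candidate "program" as a fixed list of compare-and-swap steps, the population size, mutation-only breeding -- is an illustrative assumption, not a reference implementation of any particular system:

    # Evolve tiny "programs" (lists of compare-and-swap steps) that sort a short
    # list into descending order: generate candidates at random, keep the fittest,
    # and breed mutated descendants from the survivors.
    import random

    LIST_LEN = 6      # length of the lists each candidate must sort
    PROGRAM_LEN = 24  # number of compare-and-swap steps per candidate
    POP_SIZE = 200
    GENERATIONS = 100

    def random_program():
        return [(random.randrange(LIST_LEN), random.randrange(LIST_LEN))
                for _ in range(PROGRAM_LEN)]

    def run(program, data):
        data = list(data)
        for i, j in program:
            # swap so the larger value comes first (descending order)
            if i < j and data[i] < data[j]:
                data[i], data[j] = data[j], data[i]
        return data

    def fitness(program, tests):
        # count adjacent pairs already in descending order across all test lists
        score = 0
        for test in tests:
            out = run(program, test)
            score += sum(1 for a, b in zip(out, out[1:]) if a >= b)
        return score

    def mutate(program):
        # copy the parent and randomize one of its steps
        child = list(program)
        child[random.randrange(PROGRAM_LEN)] = (random.randrange(LIST_LEN),
                                                random.randrange(LIST_LEN))
        return child

    tests = [[random.randrange(100) for _ in range(LIST_LEN)] for _ in range(20)]
    population = [random_program() for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        population.sort(key=lambda p: fitness(p, tests), reverse=True)
        survivors = population[:POP_SIZE // 5]  # cull all but the best fifth
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]

    best = max(population, key=lambda p: fitness(p, tests))
    print(run(best, [9, 2, 7, 1, 8, 3]))  # typically close to fully descending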
Indeed, evolutionary computing is a promising and exciting field that's realizing real returns through cool offshoots like "ant colony optimization" and similar approaches that are showing good results in fields as diverse as piloting military UAVs and efficiently provisioning car-painting robots at automotive plants.
So if you buy Kurzweil's premise that computation is getting cheaper and more plentiful than ever, then why not just use evolutionary algorithms to evolve the best way to model a scanned-in human brain such that it "wakes up" like Heinlein's Mike computer?
Indeed, this is the crux of Kurzweil's argument in Spiritual Machines: if we have computation to spare and a detailed model of a human brain, we need only combine them and out will pop the mechanism whereby we may upload our consciousness to digital storage media and transcend our weak and bothersome meat forever.
But it's a cheat. Evolutionary algorithms depend on the same mechanisms as real-world evolution: heritable variation of candidates and a system that culls the least-suitable candidates. This latter -- the fitness-factor that determines which individuals in a cohort breed and which vanish -- is the key to a successful evolutionary system. Without it, there's no pressure for the system to achieve the desired goal: merely mutation and more mutation.
But how can a machine evaluate which of a trillion models of a human brain is "most like" a conscious mind? Or better still: which one is most like the individual whose brain is being modeled?
"It is a sleight of hand in Spiritual Machines," Kurzweil admits. "But in The Singularity Is Near, I have an in-depth discussion about what we know about the brain and how to model it. Our tools for understanding the brain are subject to the Law of Accelerating Returns, and we've made more progress in reverse-engineering the human brain than most people realize." This is a tasty Kurzweilism that observes that improvements in technology yield tools for improving technology, round and round, so that the thing that progress begets more than anything is more and yet faster progress.
"Scanning resolution of human tissue -- both spatial and temporal -- is doubling every year, and so is our knowledge of the workings of the brain. The brain is not one big neural net, the brain is several hundred different regions, and we can understand each region, we can model the regions with mathematics, most of which have some nexus with chaos and self-organizing systems. This has already been done for a couple dozen regions out of the several hundred.
"We have a good model of a dozen or so regions of the auditory and visual cortex, how we strip images down to very low-resolution movies based on pattern recognition. Interestingly, we don't actually see things, we essentially hallucinate them in detail from what we see from these low resolution cues. Past the early phases of the visual cortex, detail doesn't reach the brain.
"We are getting exponentially more knowledge. We can get detailed scans of neurons working in vivo, and are beginning to understand the chaotic algorithms underlying human intelligence. In some cases, we are getting comparable performance of brain regions in simulation. These tools will continue to grow in detail and sophistication.
"We can have confidence of reverse-engineering the brain in twenty years or so. The reason that brain reverse engineering has not contributed much to artificial intelligence is that up until recently we didn't have the right tools. If I gave you a computer and a few magnetic sensors and asked you to reverse-engineer it, you might figure out that there's a magnetic device spinning when a file is saved, but you'd never get at the instruction set. Once you reverse-engineer the computer fully, however, you can express its principles of operation in just a few dozen pages.
"Now there are new tools that let us see the interneuronal connections and their signaling, in vivo, and in real-time. We're just now getting these tools and there's very rapid application of the tools to obtain the data.
"Twenty years from now we will have realistic simulations and models of all the regions of the brain and [we will] understand how they work. We won't blindly or mindlessly copy those methods, we will understand them and use them to improve our AI toolkit. So we'll learn how the brain works and then apply the sophisticated tools that we will obtain, as we discover how the brain works.
"Once we understand a subtle science principle, we can isolate, amplify, and expand it. Air goes faster over a curved surface: from that insight we isolated, amplified, and expanded the idea and invented air travel. We'll do the same with intelligence.
"Progress is exponential -- not just a measure of power of computation, number of Internet nodes, and magnetic spots on a hard disk -- the rate of paradigm shift is itself accelerating, doubling every decade. Scientists look at a problem and they intuitively conclude that since we've solved 1 percent over the last year, it'll therefore be one hundred years until the problem is exhausted: but the rate of progress doubles every decade, and the power of the information tools (in price-performance, resolution, bandwidth, and so on) doubles every year. People, even scientists, don't grasp exponential growth. During the first decade of the human genome project, we only solved 2 percent of the problem, but we solved the remaining 98 percent in five years."
But Kurzweil doesn't think that the future will arrive in a rush. As William Gibson observed, "The future is here, it's just not evenly distributed."
"Sure, it'd be interesting to take a human brain, scan it, reinstantiate the brain, and run it on another substrate. That will ultimately happen."
"But the most salient scenario is that we'll gradually merge with our technology. We'll use nanobots to kill pathogens, then to kill cancer cells, and then they'll go into our brain and do benign things there like augment our memory, and very gradually they'll get more and more sophisticated. There's no single great leap, but there is ultimately a great leap comprised of many small steps.
"In The Singularity Is Near, I describe the radically different world of 2040, and how we'll get there one benign change at a time. The Singularity will be gradual, smooth.
"Really, this is about augmenting our biological thinking with nonbiological thinking. We have a capacity of 1026 to 1029 calculations per second (cps) in the approximately 1010 biological human brains on Earth and that number won't change much in fifty years, but nonbiological thinking will just crash through that. By 2049, nonbiological thinking capacity will be on the order of a billion times that. We'll get to the point where bio thinking is relatively insignificant.
"People didn't throw their typewriters away when word-processing started. There's always an overlap -- it'll take time before we realize how much more powerful nonbiological thinking will ultimately be."
It's well and good to talk about all the stuff we can do with technology, but it's a lot more important to talk about the stuff we'll be allowed to do with technology. Think of the global freak-out caused by the relatively trivial advent of peer-to-peer file-sharing tools: Universities are wiretapping their campuses and disciplining computer science students for writing legitimate, general purpose software; grandmothers and twelve-year-olds are losing their life savings; privacy and due process have sailed out the window without so much as a by-your-leave.
Even P2P's worst enemies admit that this is a general-purpose technology with good and bad uses, but when new tech comes along it often engenders a response that countenances punishing an infinite number of innocent people to get at the guilty.
What's going to happen when the new technology paradigm isn't song-swapping, but transcendent super-intelligence? Will the reactionary forces be justified in razing the whole ecosystem to eliminate a few parasites who are doing negative things with the new tools?
"Complex ecosystems will always have parasites. Malware [malicious software] is the most important battlefield today.
"Everything will become software -- objects will be malleable, we'll spend lots of time in VR, and computhought will be orders of magnitude more important than biothought.
"Software is already complex enough that we have an ecological terrain that has emerged just as it did in the bioworld.
"That's partly because technology is unregulated and people have access to the tools to create malware and the medicine to treat it. Today's software viruses are clever and stealthy and not simpleminded. Very clever.
"But here's the thing: you don't see people advocating shutting down the Internet because malware is so destructive. I mean, malware is potentially more than a nuisance -- emergency systems, air traffic control, and nuclear reactors all run on vulnerable software. It's an important issue, but the potential damage is still a tiny fraction of the benefit we get from the Internet.
"I hope it'll remain that way -- that the Internet won't become a regulated space like medicine. Malware's not the most important issue facing human society today. Designer bioviruses are. People are concerted about WMDs, but the most daunting WMD would be a designed biological virus. The means exist in college labs to create destructive viruses that erupt and spread silently with long incubation periods.
"Importantly, a would-be bio-terrorist doesn't have to put malware through the FDA's regulatory approval process, but scientists working to fix bio-malware do.
"In Huxley's Brave New World, the rationale for the totalitarian system was that technology was too dangerous and needed to be controlled. But that just pushes technology underground where it becomes less stable. Regulation gives the edge of power to the irresponsible who won't listen to the regulators anyway.
"The way to put more stones on the defense side of the scale is to put more resources into defensive technologies, not create a totalitarian regime of Draconian control.
"I advocate a one hundred billion dollar program to accelerate the development of anti-biological virus technology. The way to combat this is to develop broad tools to destroy viruses. We have tools like RNA interference, just discovered in the past two years to block gene expression. We could develop means to sequence the genes of a new virus (SARS only took thirty-one days) and respond to it in a matter of days.
"Think about it. There's no FDA for software, no certification for programmers. The government is thinking about it, though! The reason the FCC is contemplating Trusted Computing mandates," -- a system to restrict what a computer can do by means of hardware locks embedded on the motherboard -- "is that computing technology is broadening to cover everything. So now you have communications bureaucrats, biology bureaucrats, all wanting to regulate computers.
"Biology would be a lot more stable if we moved away from regulation -- which is extremely irrational and onerous and doesn't appropriately balance risks. Many medications are not available today even though they should be. The FDA always wants to know what happens if we approve this and will it turn into a thalidomide situation that embarrasses us on CNN?
"Nobody asks about the harm that will certainly accrue from delaying a treatment for one or more years. There's no political weight at all, people have been dying from diseases like heart disease and cancer for as long as we've been alive. Attributable risks get 100-1000 times more weight than unattributable risks."
Is this spirituality or science? Perhaps it is the melding of both -- more shades of Heinlein, this time the weird religions founded by people who took Stranger in a Strange Land way too seriously.
After all, this is a system of belief that dictates a means by which we can care for our bodies virtuously and live long enough to transcend them. It is a system of belief that concerns itself with the meddling of non-believers, who work to undermine its goals through irrational systems predicated on their disbelief. It is a system of belief that asks and answers the question of what it means to be human.
It's no wonder that the Singularity has come to occupy so much of the science fiction narrative in these years. Science or spirituality, you could hardly ask for a subject better tailored to technological speculation and drama.