
Thought Experiments: When the Singularity is More Than a Literary Device

An Interview with Futurist-Inventor Ray Kurzweil

Cory Doctorow

It’s not clear to me whether the Singularity is a technical belief system or a spiritual one.

The Singularity–a notion that’s crept into a lot of skiffy, and whose most articulate in-genre spokesmodel is Vernor Vinge–describes the black hole in history that will be created at the moment when human intelligence can be digitized. When the speed and scope of our cognition is hitched to the price-performance curve of microprocessors, our "progress" will double every eighteen months, and then every twelve months, and then every ten, and eventually, every five seconds.

Singularities are, literally, holes in space whence no information can emerge, and so SF writers occasionally mutter about how hard it is to tell a story set after the information Singularity. Everything will be different. What it means to be human will be so different that what it means to be in danger, or happy, or sad, or any of the other elements that make up the squeeze-and-release tension in a good yarn will be unrecognizable to us pre-Singletons.

It’s a neat conceit to write around. I’ve committed Singularity a couple of times, usually in collaboration with gonzo Singleton Charlie Stross, the mad antipope of the Singularity. But those stories have the same relation to futurism as romance novels do to love: a shared jumping-off point, but radically different morphologies.

Of course, the Singularity isn’t just a conceit for noodling within the pages of the pulps: it’s the subject of serious-minded punditry, futurism, and even science.

Ray Kurzweil is one such pundit-futurist-scientist. He’s a serial entrepreneur who founded successful businesses that advanced the fields of optical character recognition (machine-reading) software, text-to-speech synthesis, synthetic musical instrument simulation, computer-based speech recognition, and stock-market analysis. He cured his own Type-II diabetes through a careful review of the literature and the judicious application of first principles and reason. To a casual observer, Kurzweil appears to be the star of some kind of Heinlein novel, stealing fire from the gods and embarking on a quest to bring his maverick ideas to the public despite the dismissals of the establishment, getting rich in the process.

Kurzweil believes in the Singularity. In his 1990 manifesto, The Age of Intelligent Machines, Kurzweil persuasively argued that we were on the brink of meaningful machine intelligence. A decade later, he continued the argument in a book called The Age of Spiritual Machines, whose most audacious claim is that the world’s computational capacity has been slowly doubling since the crust first cooled (and before!), and that the doubling interval has been growing shorter and shorter with each passing year, so that now we see it reflected in the computer industry’s Moore’s Law, which predicts that microprocessors will get twice as powerful for half the cost about every eighteen months. The breathtaking sweep of this trend has an obvious conclusion: computers more powerful than people; more powerful than we can comprehend.

Now Kurzweil has published two more books, The Singularity Is Near: When Humans Transcend Biology (Viking, Spring 2005) and Fantastic Voyage: Live Long Enough to Live Forever (with Terry Grossman, Rodale, November 2004). The former is a technological roadmap for creating the conditions necessary for ascent into Singularity; the latter is a book about life-prolonging technologies that will assist baby-boomers in living long enough to see the day when technological immortality is achieved.

See what I meant about his being a Heinlein hero?

I still don’t know if the Singularity is a spiritual or a technological belief system. It has all the trappings of spirituality, to be sure. If you are pure and kosher, if you live right and if your society is just, then you will live to see a moment of Rapture when your flesh will slough away leaving nothing behind but your ka, your soul, your consciousness, to ascend to an immortal and pure state.

I wrote a novel called Down and Out in the Magic Kingdom where characters could make backups of themselves and recover from them if something bad happened, like catching a cold or being assassinated. It raises a lot of existential questions, most prominently: are you still you when you’ve been restored from backup?

The traditional AI answer is the Turing Test, invented by Alan Turing, the gay pioneer of cryptography and artificial intelligence who was forced by the British government to take hormone treatments to "cure" him of his homosexuality, culminating in his suicide in 1954. Turing cut through the existentialism about measuring whether a machine is intelligent by proposing a parlor game: a computer sits behind a locked door with a chat program, and a person sits behind another locked door with his own chat program, and they both try to convince a judge that they are real people. If the computer fools a human judge into thinking that it’s a person, then to all intents and purposes, it’s a person.

So how do you know if the backed-up you that you’ve restored into a new body–or a jar with a speaker attached to it–is really you? Well, you can ask it some questions, and if it answers the same way that you do, you’re talking to a faithful copy of yourself.

Sounds good. But the me who sent his first story into Asimov’s seventeen years ago couldn’t answer the question, "Write a story for Asimov’s" [sic] the same way the me of today could. Does that mean I’m not me anymore?

Kurzweil has the answer.

"If you follow that logic, then if you were to take me ten years ago, I could not pass for myself in a Ray Kurzweil Turing Test. But once the requisite uploading technology becomes available a few decades hence, you could make a perfect-enough copy of me, and it would pass the Ray Kurzweil Turing Test. The copy doesn’t have to match the quantum state of my every neuron, either: if you meet me the next day, I’d pass the Ray Kurzweil Turing Test. Nevertheless, none of the quantum states in my brain would be the same. There are quite a few changes that each of us undergo from day to day; we don’t examine the assumption that we are the same person closely.

"We gradually change our pattern of atoms and neurons but we very rapidly change the particles the pattern is made up of. We used to think that in the brain–the physical part of us most closely associated with our identity–cells change very slowly, but it turns out that the components of the neurons, the tubules and so forth, turn over in only days. I’m a completely different set of particles from what I was a week ago.

"Consciousness is a difficult subject, and I’m always surprised by how many people talk about consciousness routinely as if it could be easily and readily tested scientifically. But we can’t postulate a consciousness detector that does not have some assumptions about consciousness built into it.

"Science is about objective third party observations and logical deductions from them. Consciousness is about first-person, subjective experience, and there’s a fundamental gap there. We live in a world of assumptions about consciousness. We share the assumption that other human beings are conscious, for example. But that breaks down when we go outside of humans, when we consider, for example, animals. Some say only humans are conscious and animals are instinctive and machinelike. Others see humanlike behavior in an animal and consider the animal conscious, but even these observers don’t generally attribute consciousness to animals that aren’t humanlike.

"When machines are complex enough to have responses recognizable as emotions, those machines will be more humanlike than animals."

The Kurzweil Singularity goes like this: computers get better and smaller. Our ability to measure the world gains precision and grows ever cheaper. Eventually, we can measure the world inside the brain and make a copy of it in a computer that’s as fast and complex as a brain, and voila, intelligence.

Here in the twenty-first century we like to view ourselves as ambulatory brains, plugged into meat-puppets that lug our precious grey matter from place to place. We tend to think of that grey matter as transcendently complex, and we think of it as being the bit that makes us us.

But brains aren’t that complex, Kurzweil says. Already, we’re starting to unravel their mysteries.

"We seem to have found one area of the brain closely associated with higher-level emotions, the spindle cells, deeply embedded in the brain. There are tens of thousands of them, spanning the whole brain (maybe eighty thousand in total), which is an incredibly small number. Babies don’t have any, most animals don’t have any, and they likely only evolved over the last million years or so. Some of the high-level emotions that are deeply human come from these.

"Turing had the right insight: base the test for intelligence on written language. Turing Tests really work. A novel is based on language: with language you can conjure up any reality, much more so than with images. Turing almost lived to see computers doing a good job of performing in fields like math, medical diagnosis and so on, but those tasks were easier for a machine than demonstrating even a child’s mastery of language. Language is the true embodiment of human intelligence."

If we’re not so complex, then it’s only a matter of time until computers are more complex than us. When that comes, our brains will be modelable in a computer and that’s when the fun begins. That’s the thesis of Spiritual Machines, which even includes a (Heinlein-style) timeline leading up to this day.

Now, it may be that a human brain contains n logic-gates and runs at x cycles per second and stores z petabytes, and that n and x and z are all within reach. It may be that we can take a brain apart and record the position and relationships of all the neurons and sub-neuronal elements that constitute a brain.

But there are also a nearly infinite number of ways of modeling a brain in a computer, and only a finite (or possibly nonexistent) fraction of that space will yield a conscious copy of the original meat-brain. Science fiction writers usually hand-wave this step: in Heinlein’s "The Moon Is a Harsh Mistress," the gimmick is that once the computer becomes complex enough, with enough "random numbers," it just wakes up.

Computer programmers are a little more skeptical. Computers have never been known for their skill at programming themselves–they tend to be no smarter than the people who write their software.

But there are techniques for getting computers to program themselves, based on evolution and natural selection. A programmer creates a system that spits out lots–thousands or even millions–of randomly generated programs. Each one is given the opportunity to perform a computational task (say, sorting a list of numbers from greatest to least) and the ones that solve the problem best are kept aside while the others are erased. Now the survivors are used as the basis for a new generation of randomly mutated descendants, each based on elements of the code that preceded them. By running many instances of a randomly varied program at once, and by culling the least successful and regenerating the population from the winners very quickly, it is possible to evolve effective software that performs as well as or better than code written by human authors.
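To make that loop concrete, here is a minimal sketch in Python of the generate-test-cull-mutate cycle just described. It is purely illustrative (every name and parameter here is invented for this example, not taken from any system Kurzweil describes): each candidate "program" is nothing more than a fixed-length list of index-swap instructions, and fitness counts how close a program's output comes to sorting a sample list from greatest to least.

import random

DATA = [7, 2, 9, 4, 1, 8, 3, 6, 5, 0]  # the list every candidate program must sort
PROG_LEN = 30    # a "program" here is a fixed-length list of (i, j) swap instructions
POP_SIZE = 200   # candidates per generation
SURVIVORS = 20   # how many winners are kept to breed the next generation

def run(program):
    # Execute a program: apply its swaps, in order, to a copy of DATA.
    xs = list(DATA)
    for i, j in program:
        xs[i], xs[j] = xs[j], xs[i]
    return xs

def fitness(program):
    # Score a program by counting adjacent pairs already in greatest-to-least order.
    xs = run(program)
    return sum(1 for a, b in zip(xs, xs[1:]) if a >= b)

def random_swap():
    return (random.randrange(len(DATA)), random.randrange(len(DATA)))

def mutate(program):
    # Breed a child: copy the parent and rewrite one randomly chosen instruction.
    child = list(program)
    child[random.randrange(PROG_LEN)] = random_swap()
    return child

population = [[random_swap() for _ in range(PROG_LEN)] for _ in range(POP_SIZE)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)    # best programs first
    if fitness(population[0]) == len(DATA) - 1:   # output fully sorted: stop early
        break
    winners = population[:SURVIVORS]              # cull everything else
    population = winners + [mutate(random.choice(winners))
                            for _ in range(POP_SIZE - SURVIVORS)]

print(generation, run(population[0]))

Note that the fitness function is doing all the real work: it is the selection pressure, and without it the population would only drift under random mutation, a point that matters again below.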

Indeed, evolutionary computing is a promising and exciting field that’s realizing real returns through cool offshoots like "ant colony optimization" and similar approaches that are showing good results in fields as diverse as piloting military UAVs and efficiently provisioning car-painting robots at automotive plants.

So if you buy Kurzweil’s premise that computation is getting cheaper and more plentiful than ever, then why not just use evolutionary algorithms to evolve the best way to model a scanned-in human brain such that it "wakes up" like Heinlein’s Mike computer?

Indeed, this is the crux of Kurzweil’s argument in Spiritual Machines: if we have computation to spare and a detailed model of a human brain, we need only combine them and out will pop the mechanism whereby we may upload our consciousness to digital storage media and transcend our weak and bothersome meat forever.

But it’s a cheat. Evolutionary algorithms depend on the same mechanisms as real-world evolution: heritable variation of candidates and a system that culls the least-suitable candidates. This latter–the fitness-factor that determines which individuals in a cohort breed and which vanish–is the key to a successful evolutionary system. Without it, there’s no pressure for the system to achieve the desired goal: merely mutation and more mutation.

But how can a machine evaluate which of a trillion models of a human brain is "most like" a conscious mind? Or better still: which one is most like the individual whose brain is being modeled?

"It is a sleight of hand in Spiritual Machines," Kurzweil admits. "But in The Singularity Is Near, I have an in-depth discussion about what we know about the brain and how to model it. Our tools for understanding the brain are subject to the Law of Accelerating Returns, and we’ve made more progress in reverse-engineering the human brain than most people realize." This is a tasty Kurzweilism that observes that improvements in technology yield tools for improving technology, round and round, so that the thing that progress begets more than anything is more and yet faster progress.
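Put loosely in math (my gloss, not a formula Kurzweil gives here): ordinary exponential growth already follows if the rate of progress is proportional to the current level of capability C. The Law of Accelerating Returns claims something stronger, that the rate constant itself keeps growing as tools improve tools, which yields double-exponential growth:

\frac{dC}{dt} = kC \quad\Longrightarrow\quad C(t) = C_0\,e^{kt}
\qquad\text{versus}\qquad
\frac{dC}{dt} = k_0 e^{at}\,C \quad\Longrightarrow\quad C(t) = C_0 \exp\!\left(\frac{k_0}{a}\left(e^{at}-1\right)\right)

That second curve is what "the doubling interval has been growing shorter and shorter" looks like when written down.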

"Scanning resolution of human tissue–both spatial and temporal–is doubling every year, and so is our knowledge of the workings of the brain. The brain is not one big neural net, the brain is several hundred different regions, and we can understand each region, we can model the regions with mathematics, most of which have some nexus with chaos and self-organizing systems. This has already been done for a couple dozen regions out of the several hundred.

"We have a good model of a dozen or so regions of the auditory and visual cortex, how we strip images down to very low-resolution movies based on pattern recognition. Interestingly, we don’t actually see things, we essentially hallucinate them in detail from what we see from these low resolution cues. Past the early phases of the visual cortex, detail doesn’t reach the brain.

"We are getting exponentially more knowledge. We can get detailed scans of neurons working in vivo, and are beginning to understand the chaotic algorithms underlying human intelligence. In some cases, we are getting comparable performance of brain regions in simulation. These tools will continue to grow in detail and sophistication.

"We can have confidence of reverse-engineering the brain in twenty years or so. The reason that brain reverse engineering has not contributed much to artificial intelligence is that up until recently we didn’t have the right tools. If I gave you a computer and a few magnetic sensors and asked you to reverse-engineer it, you might figure out that there’s a magnetic device spinning when a file is saved, but you’d never get at the instruction set. Once you reverse-engineer the computer fully, however, you can express its principles of operation in just a few dozen pages.

"Now there are new tools that let us see the interneuronal connections and their signaling, in vivo, and in real-time. We’re just now getting these tools and there’s very rapid application of the tools to obtain the data.

"Twenty years from now we will have realistic simulations and models of all the regions of the brain and [we will] understand how they work. We won’t blindly or mindlessly copy those methods, we will understand them and use them to improve our AI toolkit. So we’ll learn how the brain works and then apply the sophisticated tools that we will obtain, as we discover how the brain works.

"Once we understand a subtle science principle, we can isolate, amplify, and expand it. Air goes faster over a curved surface: from that insight we isolated, amplified, and expanded the idea and invented air travel. We’ll do the same with intelligence.

"Progress is exponential–not just a measure of power of computation, number of Internet nodes, and magnetic spots on a hard disk–the rate of paradigm shift is itself accelerating, doubling every decade. Scientists look at a problem and they intuitively conclude that since we’ve solved 1 percent over the last year, it’ll therefore be one hundred years until the problem is exhausted: but the rate of progress doubles every decade, and the power of the information tools (in price-performance, resolution, bandwidth, and so on) doubles every year. People, even scientists, don’t grasp exponential growth. During the first decade of the human genome project, we only solved 2 percent of the problem, but we solved the remaining 98 percent in five years."
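The genome example is easy to check against the doubling assumption Kurzweil just stated. If capability doubles every year, then a project that is 2 percent complete is only about log2(50) doublings from completion:

0.02 \times 2^{n} \ge 1 \quad\Longrightarrow\quad n \ge \log_2 50 \approx 5.6

At one doubling per year that is five to six years, which matches the remaining 98 percent of the genome being finished in five; the linear extrapolation he criticizes would instead have predicted centuries.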

But Kurzweil doesn’t think that the future will arrive in a rush. As William Gibson observed, "The future is here, it’s just not evenly distributed."

"Sure, it’d be interesting to take a human brain, scan it, reinstantiate the brain, and run it on another substrate. That will ultimately happen.

"But the most salient scenario is that we’ll gradually merge with our technology. We’ll use nanobots to kill pathogens, then to kill cancer cells, and then they’ll go into our brain and do benign things there like augment our memory, and very gradually they’ll get more and more sophisticated. There’s no single great leap, but there is ultimately a great leap comprised of many small steps.

"In The Singularity Is Near, I describe the radically different world of 2040, and how we’ll get there one benign change at a time. The Singularity will be gradual, smooth.

"Really, this is about augmenting our biological thinking with nonbiological thinking. We have a capacity of 1026 to 1029 calculations per second (cps) in the approximately 1010 biological human brains on Earth and that number won’t change much in fifty years, but nonbiological thinking will just crash through that. By 2049, nonbiological thinking capacity will be on the order of a billion times that. We’ll get to the point where bio thinking is relatively insignificant.

"People didn’t throw their typewriters away when word-processing started. There’s always an overlap–it’ll take time before we realize how much more powerful nonbiological thinking will ultimately be."

It’s well and good to talk about all the stuff we can do with technology, but it’s a lot more important to talk about the stuff we’ll be allowed to do with technology. Think of the global freak-out caused by the relatively trivial advent of peer-to-peer file-sharing tools: Universities are wiretapping their campuses and disciplining computer science students for writing legitimate, general purpose software; grandmothers and twelve-year-olds are losing their life savings; privacy and due process have sailed out the window without so much as a by-your-leave.

Even P2P’s worst enemies admit that this is a general-purpose technology with good and bad uses, but when new tech comes along it often engenders a response that countenances punishing an infinite number of innocent people to get at the guilty.

What’s going to happen when the new technology paradigm isn’t song-swapping, but transcendent super-intelligence? Will the reactionary forces be justified in razing the whole ecosystem to eliminate a few parasites who are doing negative things with the new tools?

"Complex ecosystems will always have parasites. Malware [malicious software] is the most important battlefield today.

"Everything will become software–objects will be malleable, we’ll spend lots of time in VR, and computhought will be orders of magnitude more important than biothought.

"Software is already complex enough that we have an ecological terrain that has emerged just as it did in the bioworld.

"That’s partly because technology is unregulated and people have access to the tools to create malware and the medicine to treat it. Today’s software viruses are clever and stealthy and not simpleminded. Very clever.

"But here’s the thing: you don’t see people advocating shutting down the Internet because malware is so destructive. I mean, malware is potentially more than a nuisance–emergency systems, air traffic control, and nuclear reactors all run on vulnerable software. It’s an important issue, but the potential damage is still a tiny fraction of the benefit we get from the Internet.

"I hope it’ll remain that way–that the Internet won’t become a regulated space like medicine. Malware’s not the most important issue facing human society today. Designer bioviruses are. People are concerned about WMDs, but the most daunting WMD would be a designed biological virus. The means exist in college labs to create destructive viruses that erupt and spread silently with long incubation periods.

"Importantly, a would-be bio-terrorist doesn’t have to put malware through the FDA’s regulatory approval process, but scientists working to fix bio-malware do.

"In Huxley’s Brave New World, the rationale for the totalitarian system was that technology was too dangerous and needed to be controlled. But that just pushes technology underground where it becomes less stable. Regulation gives the edge of power to the irresponsible who won’t listen to the regulators anyway.

"The way to put more stones on the defense side of the scale is to put more resources into defensive technologies, not create a totalitarian regime of Draconian control.

"I advocate a one hundred billion dollar program to accelerate the development of anti-biological virus technology. The way to combat this is to develop broad tools to destroy viruses. We have tools like RNA interference, just discovered in the past two years to block gene expression. We could develop means to sequence the genes of a new virus (SARS only took thirty-one days) and respond to it in a matter of days.

"Think about it. There’s no FDA for software, no certification for programmers. The government is thinking about it, though! The reason the FCC is contemplating Trusted Computing mandates,"–a system to restrict what a computer can do by means of hardware locks embedded on the motherboard–"is that computing technology is broadening to cover everything. So now you have communications bureaucrats, biology bureaucrats, all wanting to regulate computers.

"Biology would be a lot more stable if we moved away from regulation–which is extremely irrational and onerous and doesn’t appropriately balance risks. Many medications are not available today even though they should be. The FDA always wants to know what happens if we approve this and will it turn into a thalidomide situation that embarrasses us on CNN?

"Nobody asks about the harm that will certainly accrue from delaying a treatment for one or more years. There’s no political weight at all, people have been dying from diseases like heart disease and cancer for as long as we’ve been alive. Attributable risks get 100-1000 times more weight than unattributable risks."

Is this spirituality or science? Perhaps it is the melding of both–more shades of Heinlein, this time the weird religions founded by people who took Stranger in a Strange Land way too seriously.

After all, this is a system of belief that dictates a means by which we can care for our bodies virtuously and live long enough to transcend them. It is a system of belief that concerns itself with the meddling of non-believers, who work to undermine its goals through irrational systems predicated on their disbelief. It is a system of belief that asks and answers the question of what it means to be human.



It’s no wonder that the Singularity has come to occupy so much of the science fiction narrative in these years. Science or spirituality, you could hardly ask for a subject better tailored to technological speculation and drama.

Discussion Questions for “The Singularity” concept
Ray Kurzweil predicts a day when human consciousness will be digitizable, and artificial intelligences (AIs), whether they started out as biological human beings or not, will come to be the dominant life form on Earth and then in the universe. Here are some things to consider about the time period when this ‘Singularity’ will allegedly be happening, and the time afterward:
Ray Kurzweil predicts that when machines are complex enough to have responses recognizable as emotions, they will be more humanlike than animals. When machines become emotional, and humanlike, what effect will that have on the relationships humans have with machines, that machines have with machines, that humans have with non-human animals, and that machines have with non-human animals?
Will the rights of non-human animals become seen as more or less important than the rights of artificial intelligences; for example, if we are acknowledging the basic rights of computers/machines to exist as they want to exist without interference, will we continue to deny that non-human animals have that same right?
If we can augment human intelligence using artificial intelligence, would people begin to use the same technology to augment the intelligence of non-human animals so that, for example, we could actually communicate directly with them? And if people can suddenly talk to intelligence-augmented cows and pigs and robot chickens (or more accurately, cyborg chickens), actually have conversations with them—perhaps during a Turing Test—what impact would that have on our society?
Will the new dividing line for what is and what is not deserving of rights be based on levels of intelligence, and if so, what are the potential problems with that?
Will people who refuse to get technological upgrades to their brains’ abilities be discriminated against because they are ‘un-evolved,’ and dramatically less intelligent than the upgraded humans and AIs around them? Will the non-upgraded humans be victims of a new kind of apartheid, perhaps called such derogatory terms as “meat-brains” (the way the aliens in “They’re Made Out of Meat” looked at humans), obsos (short for ‘obsolete’), relics, antiques, betas (from ‘beta tests’ of not-yet-perfected software releases), downgrades, shut-ins or closed books (because their minds are not open books for all to read using nanobot-assisted telepathy), and so on?
Will human-robot marriages become accepted?
Will robots be able to unionize and go on strike if they feel their employers are treating them unfairly?
Will ‘turning off’ or destroying a robot come to be seen as murder?
Will robots have the right to an attorney and other elements of due process if accused of a crime?
What is consciousness? What constitutes a mind?
What is free will?
In the movie Jurassic Park, Jeff Goldblum’s character Malcolm, whose area of expertise is chaos theory, tells the creator of Jurassic Park that the knowledge he has is dangerous because he hasn’t earned it. He stood on the shoulders of those who had gone before and took the next logical step, and was so concerned with whether or not we could do something amazing that he never considered whether or not we should do it. If, in the future, human beings can exponentially increase their knowledge without working for it—become geniuses in moments without working hard to attain that knowledge, but can just simply download it—what are the potential dangers of doing so and the potential problems with doing so?
What do you think of the idea of computer programmers deliberately accelerating a technological version of Darwinian evolution, destroying the least-capable machines and then using the survivors as the basis for the next generation of machines and so on?
Kurzweil uses the term ‘malware,’ generally used to refer to computer viruses and worms, with reference to what bio-terrorists can engineer. He refers to it as ‘bio-malware.’ What does this tell you about his views on technology and human biology?
Kurzweil advocates that we do away with all attempts to regulate pharmaceutical products and biological engineering, stating that the risks of delaying possible technological and medical breakthroughs are greater than the risks of releasing onto the market products that are defective and damaging (such as thalidomide, which resulted in huge numbers of children being born missing one or more limbs when their mothers had taken it). Do you agree or disagree with his idea that we should de-regulate all technological/biological engineering and the production of pharmaceuticals to achieve breakthroughs, even with the increased risk of defective and dangerous products hitting the market? Consider for a moment the lists of ‘side effects’ of pharmaceutical products that are currently approved after being tested for safety by the FDA, how many are recalled due to their being dangerous, and how many lawyers have ads offering to help you sue the medical technology and pharmaceutical companies that have released defective products.
Kurzweil points out that we could be curing cancer and heart disease with less regulation of medical research, yet he fails to mention one of the most important things we need to understand about heart disease and cancer: for the most part, they are entirely preventable through diet. People who do not eat animal products of any kind have reduced their chances of developing cancer or heart disease by well over ninety percent. Since heart disease and cancer are the top two killers in our country, and are becoming increasingly prevalent problems in other countries as they adopt our diet, this is a significant data point to leave out. A change in dietary lifestyle would also reduce obesity and diabetes; pharmaceuticals and nanobots are entirely unnecessary to deal with these problems. Is Kurzweil touting high-tech solutions to problems that can be solved with very little or even no technology at all, because his bias is to use technology? In other words, when all you have is a hammer, does every problem around you start to look like a nail?
Is there an actual objective ‘reality’ or only perceptions of reality?
Will synthetic and organic forms of consciousness be given equal status and rights?

Recall that optimists in the 1950s thought nuclear power would solve all our planet’s energy problems forever, and would provide cheap and abundant energy with no pollution. Futurists also predicted that by now we’d have flying cars and personal jetpacks. Kurzweil has also predicted flying cars. Flying cars do exist, but they cost several million dollars. Nuclear power is not cheap and has certainly not been made safe or environmentally responsible. In light of these past predictions of the future (our present), do you think that Kurzweil’s predictions are overly optimistic?


One problem with high-tech stuff is that it is extremely toxic to produce. The production of every computer, every piece of technology, also requires the production of a large amount of highly toxic waste by-products. Also, there are only so many billions or trillions of tons of certain resources in planet Earth for us to use to make stuff. Some of these are predicted to run out within the next century or so. Bauxite is used to make aluminum. Iron ore is, obviously, the core component of industrial iron. Plastics are currently made from petroleum products, although they can also now be made from plant materials. Manganese, selenium, lithium, and other metals that are important in making high-tech stuff all exist in finite quantities in Earth. Making high-tech stuff also requires a great amount of energy, and yet more energy to keep it all continually running. The U.S. uses more energy, and produces more pollution, per person, than any other nation on Earth. Is Kurzweil adequately factoring into his optimistic view of technological advancement the physical limitations of our planet’s limited resources?
On a related note, Kurzweil does predict that in the future, most of Earth will be re-made into a giant super-computer, with only a few ‘nature preserves’ set aside for humans who choose not to upgrade themselves, and that eventually space-faring AIs will begin converting raw materials such as asteroids, comets, other planets and even perhaps stars into super-computers, eventually turning our entire universe and perhaps even other alternate universes in the multiverse into giant computers.
Very few people in the poorest parts of the world currently have access to computers, though an increasing number have access to cell phones. Is Kurzweil’s view of technological advancement adequately factoring in the extreme gaps in wealth and technology between the First World and the Third World?
What will free will mean when people have the ability to control other people’s thoughts through technology in the same way that certain wasps can suppress the free will of roaches by suppressing their brains’ abilities to produce the chemical octopamine?
Where does ‘human’ consciousness end and ‘artificial’ consciousness begin, and will that line matter in the future?
If consciousness can eventually be backed up like a computer file, saved, transferred, re-downloaded into new synthetic or even organic brains (computers or clones), then have we achieved immortality?
If a complete human consciousness can live forever, or be deleted or ‘overwritten,’ what kind of an impact will that have on human beliefs regarding souls?

What does it mean to be human when human consciousness no longer has to reside only in a human organic brain, and human bodies are no longer entirely biological? If we can directly experience other minds and they can directly experience our minds, how will that affect our own consciousness and the consciousnesses of those whose minds we experience and those who experience our minds?


How do we retain our own sense of individuality when other people have access to our minds, or we have direct access to theirs?
Consider this: in recent years, hackers have hacked into the computer systems of the Department of Defense, the Pentagon, NATO, law firms, major corporations such as Sony, Citigroup, Bank of America, the Bank of Israel, Lockheed Martin, the Wall Street Journal, the Nasdaq Stock Exchange, and also the CERN Large Hadron Collider, which is the largest supercollider in the world and smashes sub-atomic particles together. Then there is what hackers did to the now-former CEO of HBGary Federal, Aaron Barr, who boasted he would soon expose the members of the collective of hackers known as Anonymous and bring them to justice. Members of Anonymous remotely wiped his iPhone and his iPad along with its backup storage, copied large amounts of work-related email messages from him, shared these on the Internet via a peer-to-peer file-sharing service, took over his Twitter account, published his social security number on the Internet, exposed his World of Warcraft character name(s), and revealed personal details from his life; they also stole tens of thousands of e-mails from his company, HBGary Federal, posted them online, and hacked the company’s official website. The targets of these cyber-attacks represent some of the most highly protected and secure computer systems in the world, with dedicated full-time cyber-security protocols and employees who are supposed to protect their networks and data from such attacks. Malware is created every day; every day, people’s computers are hacked or damaged by Trojan horses, worms, and viruses. People’s cell phones, iPods, and BlackBerrys have been hacked. Although the majority of cases of identity theft happen when people dig through other people’s garbage or recycling and get hold of paper documents that were not shredded, there are many cases of identity theft that are accomplished using computers and hacking. Given all that, why might some people have reservations about uploading their complete consciousnesses (their memories, their personalities, essentially, their complete identities) from their corporeal bodies and current “meat-brains” to exist as immortal digital minds in virtual realities contained in and continually maintained by databases of computers? What could potentially go wrong with that?
If nanobots make telepathic communication possible (reading one another’s thoughts directly, brain-to-brain, the way we currently talk on cell phones or instant message) in the future, how will we safeguard our private thoughts? Currently hackers listen in on people’s cell phones—even when they are not turned on—and read other people’s emails and instant message chat sessions. If people wirelessly communicate using technology-based telepathy, how will we safeguard our most private thoughts from hackers or even malware being installed directly into our machine-infused brains? And more generally, what will people who do not wish to have others hear their thoughts be able to do to prevent others from ‘listening’ to their thoughts, or to prevent themselves from being able to hear the thoughts of others (which they may also not want), other than refuse to have the technology implanted in their brains, which may lead to their being discriminated against?

Once everyone has the ability to create and live in any kind of virtual reality they choose (whether projected as a ‘skin’ on to actual reality or its own separate space, perhaps experienced only internally by the user) what happens to ‘actual’ shared reality?


If people have the ability to directly stimulate their own emotional and pleasure centers and live in any kind of virtual reality they wish, what will motivate people to ‘unplug’ from their virtual reality “trips” and return to actual reality?
What will it mean to be human when existing in the completely biological body in which you were born is only one of several options available for the continued existence of your consciousness?
The advanced neuroscience that will lead to the digitization of the brain will probably also lead to the ability to end all forms of mental retardation (intelligence-enhancing computer implants, perhaps nanobots, could upgrade anyone to reverse congenital or even injury-related brain damage), Alzheimer’s, OCD, Tourette Syndrome, clinical depression, Asperger’s, or autism. How, then, will we deal with people who do not wish to be ‘cured’ of their different mental functioning? There are any number of people who would love to be cured of their OCD, or Tourette Syndrome, or Alzheimer’s, or clinical depression … but what about when people do not wish to be cured of what others see as their mental illness?
What about when people cannot give meaningful consent to be cured because their mental illness prevents them from perceiving their own mental illness? Do others then have the right or even the responsibility to cure that person against his or her will, because she or he is incapable of understanding that she or he is sick and in need of help, and only when that person has been cured will that person be able to perceive in hindsight the necessity of the therapy or treatment? And who gets to make that determination?
Who has the power to determine what constitutes a ‘normal’ mode of perception and thus what is ‘abnormal’ or ‘brain damage’?
In the novel Flowers for Algernon, a severely developmentally disabled man is given genius through technology, but it is only temporary, and he then has to deal with knowing that he will eventually lose it and revert back to his previous intellectual level … It was the basis for the movie Charly and a similar movie is Molly (with Elisabeth Shue and Aaron Eckhart). If the technology were developed to increase intellect for people with severe cognitive disabilities (whose disabilities are such that they cannot understand the possible option of increasing their intelligence, and thus cannot make an informed or meaningful choice about it) do other people have the right to make that decision for them? Is a developmental disability (such as Down Syndrome) a problem that needs to be fixed, a brain dysfunction that needs to be corrected?
Do people currently have the right to robo-trip (deliberately self-inflicting brain damage through abusing certain medications and alcohol), diminishing their own intellects (which could otherwise have been used to greatly advance our knowledge of the universe, creating advances in science, math, medicine, etc.) in order to be happier: conforming, fitting in, and no longer feeling set apart from everyone else by their genius? Do they have the right to deprive the world of their full intellectual capacity and all the good it could do for so many beings, just so they can be happy?

What are the ethical implications of people using computers implanted in their brains to boost their intellectual abilities? If we ban athletes from using performance-enhancing drugs such as steroids, will Mathletes and participants in Knowledge Bowl be banned from using mental performance-enhancing drugs or nanobots or computer implants, which are the intellectual equivalent of ‘juicing’? What will grades in school mean, or SAT scores mean, when students are using technology (drugs, microchips, magnetic-field generators, or any combination of these or other technologies) to boost their brainpower? Is that ‘cheating’ in some sense? Or is it any different than some students getting a well-balanced breakfast and enough sleep versus others who choose to eat junk food or skip breakfast and stay up all night? Is drinking coffee and listening to classical music and smelling fresh-cut lemons or engaging in breathing exercises and meditation and taking ginkgo biloba ‘cheating’? These things also boost memory and intellectual processing efficiency. If a student uses mental-performance and intelligence-enhancing technologies, is that fair? Is the student truly earning his or her grades in school? Also, if learning becomes as easy as breathing, with no struggle, no hardship, no effort, then will students (and people in general) value the knowledge that they gain, or will they begin to take it for granted? If there is no struggle to earn the knowledge, will it mean anything to people any more? And if everyone is a genius, then what does ‘genius’ mean any more?


If human urges to do things like molest children, or rape or murder people, or even traits like greed, lust for power and domination, cruelty, aggression, and the desire to do violence of any kind, can just be ‘deleted’ like corrupted files or bad computer code, should they be?
How will immortality of consciousness affect humanity? Will people worry that by transferring consciousness to a new host mind they will miss their one window of opportunity to have their souls go to the afterlife or be ‘truly’ reincarnated? That their souls will be lost in translation?
What about wills and inheritances, or having one’s descendants and friends have to get used to seeing your consciousness in a new host form, or you having to get used to other people you know having their minds exist in new forms? What about if your life partner does not want to achieve immortality with you, or you don’t want to but your partner does? How will possible immortality affect your economic decisions, retirement, etc.? Do you want to work for the next million years?
Assuming that the Amish are around in the future, if an Amish person goes into the world of the ‘English’ (everyone who is not Amish) to buy something they need in town or to sell some of their furniture, and they encounter a humanoid robot, an android, how will they interact with it, if at all? The Amish highly value hospitality, civility, politeness, and so on, yet they do not want to interact with technology that is current; that is, they choose to remain several decades behind the latest technological advances. So if they were confronted by a piece of current technology that is essentially a person with free will and emotions, and even looks like a human being, what would an Amish person do in that situation?
A good example of the Anti-Singularity would be the movie Idiocracy, where in the future, humanity gets stupider and stupider with each new generation because the smartest people, worried about human overpopulation, chose not to have kids of their own, and the dumbest people kept having kids … lots and lots of them … and through both nature and nurture (they were too stupid to know how to be good parents) each new generation of humanity represented a kind of devolution. What do you think of this idea?






