Post-Human Intelligence: starting an avalanche with a pebble

Bryan Conrad December 8, 2011

Mind, Machine, and Consciousness

LE 1310—042


I believe beyond a shadow of a doubt that, along a relatively short timeline, man will achieve something completely remarkable: we will exceed the capacity of our own understanding. It is with complete certainty that I assure you that humans will create an intelligent machine capable not only of gathering and processing information to infer its own original conclusions, but also of conscious self-awareness, expression of complex emotion, and the curiosity, desire, and ability to improve itself. In his book The Age of Spiritual Machines, futurist Ray Kurzweil states, “Before the next century is over, human beings will no longer be the most intelligent or capable type of entity on the planet. Actually, let me take that back. The truth of that last statement depends on how we define human.” The implications of that one statement are profound. To this point, our discussion of artificial intelligence has largely dealt with the creation of an intelligence equivalent to that of humans. What I would like to explore in this paper is the next step: the awakening of a smarter-than-human, faster-than-human, and ultimately unlike-human, self-improving artificial intelligence, that which has become popularly known as the Singularity.

Vernor Vinge, the man most widely credited with coining the term “singularity” as it pertains to AI, defines it thus: “just as our model of physics breaks down when it tries to model the singularity at the center of a black hole, our model of the world breaks down when it tries to model a future that contains entities smarter than human… it is a point where our old models must be discarded and a new reality rules.” 13 The fact that technology advances in exponential rather than linear steps gives credence to the assertion of singularity science that, within a very narrow window of time, AI will be achieved, and that once achieved, its advancement will continue at ever faster speeds until human intelligence is ultimately surpassed. As one will note in the graph on the following page, technology advanced over the course of the twentieth century at an amazing pace, each advancement directly propelled by the one immediately before it. According to the graph, in this decade machines will process at speeds equivalent to a biological brain with the capacity of an insect; by 2020, that of a mouse; by 2040, that of a single human brain; and by 2060, it is projected that man-made computational power will exceed that of the complete collective of all human minds. Ray Kurzweil, inventor, scientist, author, and singularity proponent, elaborates further in his article entitled “Don’t Underestimate the Singularity”:

“Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later—in 2003—this same model could be solved in roughly one minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms!”3
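The arithmetic behind Grötschel’s figures is easy to check. Below is a quick sketch in Python; the seconds-per-year constant and the “roughly one minute” figure are taken at face value from the quote, and the benchmark itself is not reproduced here:

```python
# Sanity-check the speedup factors quoted by Grötschel: 82 years (1988)
# down to roughly one minute (2003), split into hardware and algorithms.
YEARS_1988 = 82
SECONDS_PER_YEAR = 365.25 * 24 * 3600

time_1988_s = YEARS_1988 * SECONDS_PER_YEAR  # runtime with 1988 tools
time_2003_s = 60.0                           # "roughly one minute"

total_speedup = time_1988_s / time_2003_s    # ~43 million
hardware_factor = 1_000                      # quoted processor-speed share
algorithm_factor = total_speedup / hardware_factor  # ~43,000

print(f"total speedup:     {total_speedup:,.0f}x")
print(f"algorithmic share: {algorithm_factor:,.0f}x")
```

The two shares multiply back to the total, which is why the example is so striking: algorithmic progress alone outpaced hardware progress by a factor of roughly forty.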

[Graph: exponential growth of computing power, extending Moore’s law across successive hardware paradigms]
Kurzweil writes that, due to paradigm shifts, a trend of exponential growth extends Moore’s law back from integrated circuits to earlier transistors, vacuum tubes, relays, and electromechanical computers. He predicts that the exponential growth will continue, and that in a few decades the computing power of all computers will exceed that of all human brains, with superhuman artificial intelligence appearing around the same time.

From The Age of Spiritual Machines: When Computers Exceed Human Intelligence (Ray Kurzweil, 1999):

“Computers doubled in speed every three years at the beginning of the twentieth century, every two years in the 1950s and 1960s, and are now doubling in speed every twelve months. This trend will continue, with computers achieving the memory capacity and computing speed of the human brain by around the year 2020.”2
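To see how a twelve-month doubling time yields a date near 2020, consider this back-of-the-envelope sketch. Both constants are illustrative, order-of-magnitude assumptions of mine, not figures from Kurzweil’s text:

```python
import math

# Back-of-the-envelope extrapolation of a 12-month doubling time.
# NOTE: START_OPS and BRAIN_OPS are assumed, order-of-magnitude values,
# not numbers taken from the quoted passage.
START_YEAR = 1999
START_OPS = 1e10           # assumed machine speed circa 1999, ops/sec
BRAIN_OPS = 1e16           # assumed human-brain equivalent, ops/sec
DOUBLING_TIME_YEARS = 1.0  # "doubling in speed every twelve months"

doublings_needed = math.log2(BRAIN_OPS / START_OPS)  # ~19.9 doublings
year_reached = START_YEAR + doublings_needed * DOUBLING_TIME_YEARS

print(f"{doublings_needed:.1f} doublings; reached around {year_reached:.0f}")
```

Under these assumed numbers the crossover lands around 2019, consistent with Kurzweil’s “around the year 2020”; note how insensitive the date is to the constants, since even a factor-of-two error in either estimate shifts the answer by only one year.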

These are far from new ideas. As early as the 1960s, scientific philosophers were making what seemed at the time to be “fantastical” claims about the future of intelligence. In 1965, the statistician I. J. Good wrote: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. …” 15

Jump ahead fifty years, and you will find the most acutely involved minds in singularity science congregated in New York last month at the Singularity Summit 2011. The brain power in that room was astounding. Scientists from every discipline, philosophers, industrialists, entrepreneurs, doctors of medicine, mathematicians, and economists gathered for a two-day summit to discuss revolutionary science and how it might impact our future as a species. I encourage you to follow the hyperlinks in my bibliography and watch the presentations on YouTube. Though I cite many of my favorites, the parameters of the assignment simply do not allow for a complete discussion here.

One recurring theme, however, was that, to this point in history, man perceives himself to be the pinnacle of evolution. We constantly marvel at our own greatness in wonder and awe. We hold ourselves superior to all other species of known life, and have declared ourselves eminent within our domain. But perhaps one should consider extending one’s timeline before claiming the keys to the universe. To quote Max Tegmark from his presentation entitled “The Future of Life: a Cosmic Perspective”: “It’s completely ridiculous to think that we humans, after evolving for a million years… are the final or most advanced way in which we can put together elementary particles into something self-aware and that this is the end somehow.” 16 His main theme was that, given a long enough timeline (billions of years, in cosmological terms), the sentient beings of future intelligences, compared to the glorious thing that the human beings of 2011 consider themselves to be, “are likely to be as far ahead of us as we are far ahead of bacteria.” 16 The singularity presupposes that this superior sentience of our future will conceivably be man-made, and that once awakened and granted access to exponentially increasing processing speed, it will begin to build itself. In that instance, humanity will almost certainly become obsolete.

As that time draws near, we must be mindful of these last few steps we are taking as a society. That which we know, and can prove empirically, is expanding rapidly, engorging itself upon, of all things, itself. Our knowledge propagates further knowledge, and we are on the verge, some forty years away if my research is credible, of exceeding our own capacity. It is my hope that as we continue on our quest to construct sentience, we do so in a way that does not jeopardize our entire species. This technology holds so much promise, yet my concern is that we are not yet mature enough, for lack of a better term, to contain it and to use it for the furtherance of humanity.


  1. Hall, J. Storrs (2007). Beyond AI: Creating the Conscience of the Machine. Amherst, NY: Prometheus Books.

  2. Kurzweil, R. (1999). The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York, NY: Viking.

  3. Kurzweil, R. (Oct. 10, 2011). “Kurzweil responds: Don’t underestimate the Singularity”. Technology Review.

  4. Singularity Institute for Artificial Intelligence, Inc. (2011). Multiple pages.

  5. Thiel, P. (2011). “Back to the Future”. (“There was this very alive sort of imagination about the future, and somehow there’s been a shift away from that.”)

  6. Wolfram, S. (2011). “Computation and the Future of Mankind”.

  7. Ferrucci, D., Cerutti, D., and Jennings, K. (2011). “IBM’s Watson”.

  8. Crane, R. (2011). “Rethinking Communication”.

  9. George, D. and Brown, S. (2011). “From Planes to Brains: Building AI the Wright Way”.

  10. Itskov, D. (2011). “Project ‘Immortality 2045’ -- Russian Experience”.

  11. Arrison, S. (2011). “How the Coming Age of Longevity Will Change Everything”.

  12. Shermer, M. (2011). “Social Singularity: Transitioning from Civilization 1.0 to 2.0”.

  13. Vinge, V. (1993). “What is the Singularity?”.

  14. Tierney, J. (2008). “Technology That Outthinks Us: A Partner or a Master?”. New York Times.

  15. Good, I. J. (1965). “Speculations Concerning the First Ultraintelligent Machine”. Advances in Computers, vol. 6. Academic Press.

  16. Tegmark, M. (2011). “The Future of Life: a Cosmic Perspective”.
