Part of what we are trying to achieve. The primary goal.
Marko: Agree with Bill. Narrow AI. Will AGI have human attributes? Through evolution, certain motivations were part of the fitness function.
AI will not have these human attributes. Will not leave us behind. Not in our genes.
Ray: A lot of our intelligence and thinking has to do with our bodies meeting their needs and desires.
Animals finding food and meeting all other biological needs.
Humans spend a lot of time doing that as well.
Machines, if not biological, will have certain needs to survive. But different.
Presumably less complex.
Conflict between humans and machines may come from that difference.
They may not be sympathetic to things they don't feel.
Machines are part of civilization. Not from Mars. These machines are being created to assist us, transcend our limitations, expand our reach. Really the frontier for machines is to master emotional intelligence.
No human can hold a candle to Mathematica.
Few chess masters can beat a PC at chess.
Being funny. Expressing a loving sentiment.
A song that makes sense.
Things machines can't do yet.
My prediction - will happen in next 20 years.
Value to that. Essence of human art.
Focus and concentrate that human intelligence into something lasting and worthwhile.
To some extent that transcends our bodies.
Transcends into the world of ideas.
A pure AI would not have a body - pure math.
Emiliano: Good you ended on that note.
Argentina. Hear you speak a lot of transcending as a verb. Interested in your view on transcendence as a noun. Is there, for you, something that is transcendent - beyond experience? You mentioned mathematics. Really curious to understand the transcendent as a noun.
Ray: Transcendence is the goal of art and science. There is some beautiful insight. That is a transcendence.
A beautiful piece of music. An experience of transcendence. That is our goal. What we try to experience.
For an inventor. Solving a problem.
See people benefit from solution.
Mathematical formula and changing people's lives. A transcendence to that.
What life seeks.
Emiliano: Different idea of transcendence - something you cannot ever reach.
Ray: Can you give me an example?
Ray: Platonic forms. Truth is like a circle. That clock is a circle, but look closely and it is not a circle.
Electron around a proton - must be a circle? No, it is perturbed.
Maybe that's what you are alluding to.
Marco/Japan: Who will be able to make AI - a human, or an AI system? Must it come from self-replicating software?
We still don't know how the brain works. Would be bounded.
Ray: Brain is not of a complexity beyond what humans can understand. Can quantify. Genome is about 800 m bytes; losslessly compressed, roughly 50 m bytes. Half of that is the brain - an upper bound of about 25 m bytes of design information.
Most of the genome is wasted; very inefficient ways of implementing. Much less needed.
Not simple, but something we can manage.
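The arithmetic behind the byte counts above, as a quick sketch (all figures are the speaker's rough estimates, not measured values):

```python
# Rough arithmetic behind the genome-size bound described above.
# All figures are the speaker's rough estimates, not measured values.
genome_bytes = 800_000_000       # uncompressed genome, ~800 million bytes
compressed_bytes = 50_000_000    # after lossless compression, ~50 million bytes
brain_fraction = 0.5             # roughly half of the genome specifies the brain

brain_design_bytes = int(compressed_bytes * brain_fraction)
print(brain_design_bytes)        # prints 25000000: the ~25 m byte upper bound
```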
M: Tools for that?
Ray: Yes. Key insights. Pretty good idea what these billion modules do in the neocortex. One little pattern recognizer will recognize if your name is all in capital letters, the crossbar on the A.
Another recognizer for irony or humor.
Not that different.
Actually the same. At a higher level of this conceptual network. More inputs to it, but the same recognizer.
So these recognizers must be complex.
No. They deal with lists. Work exactly like the language LISP.
In the '80s there was initially a craze: this is the way the brain works, you can encode all the thinking the brain does. Turns out that is actually true.
The brain processes LISP-like statements.
Your face or irony can be reduced to lists in elaborate hierarchy.
Been attempts to code these modules.
Not done, but see that is something we can understand.
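A minimal sketch of the idea above: one list-matching mechanism serves every level of the hierarchy, and a higher-level recognizer is the same code fed the outputs of lower-level ones. Names and patterns here are hypothetical illustrations, not an actual brain model:

```python
# Toy illustration of hierarchical list-based pattern recognizers.
# Names and patterns are hypothetical; this sketches the concept only.

def make_recognizer(name, expected):
    """Return a recognizer that fires (returns its name) when its
    input list matches the expected pattern, else returns None."""
    def recognize(inputs):
        return name if inputs == expected else None
    return recognize

# Low-level recognizers: the strokes that make up a capital 'A'.
left_stroke = make_recognizer("left", ["/"])
right_stroke = make_recognizer("right", ["\\"])
crossbar = make_recognizer("crossbar", ["-"])

# Higher-level recognizer: the letter 'A' is just a list of stroke names.
# Same mechanism, one level up the hierarchy - only the inputs differ.
letter_A = make_recognizer("A", ["left", "right", "crossbar"])

strokes = [left_stroke(["/"]), right_stroke(["\\"]), crossbar(["-"])]
print(letter_A(strokes))  # prints A
```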
That is a million times slower than electronics.
Tony/S. Korea: Yesterday we had a discussion about people with core beliefs.
People not believing in climate change were libertarians.
Wondering, regarding the Singularity, are dystopians subject to core beliefs?
Ray: Great question. I'd be interested in your answer. I talk to a lot of people about these ideas.
Some readily accept. It's kind of obvious. Some people absolutely will not accept it.
Kevin Kelly cannot get his emotional and intellectual arms around exponential growth.
He says superAI - he doesn't see them. Someday, but not in 40 years.
He says timing is wrong.
I'd completely agree IF I didn't think the law of accelerating returns were true.
Small number of decades away.
He presents a gestalt against it, but no real arguments - an argument against it without making any arguments.
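The intuition at stake here is compounding: anything that doubles at a fixed interval grows roughly a billionfold over 30 doublings. A minimal illustration (the one-doubling-per-year pace is an assumption for illustration, not a figure from the talk):

```python
# Illustration of why exponential growth defeats linear intuition.
# A one-doubling-per-year pace is assumed purely for illustration.
capability = 1
for year in range(30):
    capability *= 2
print(capability)  # prints 1073741824: about a billionfold in 30 doublings
```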
Are people who get it smarter?
But I don't think that is the case. Some very smart people just can't accept the law of accelerating returns.
Nobel prize winners resist it. What is it?
What personality type?
People have said, more open minded?
Alex: Maybe scared. Jealous. Don't want to say publicly. Maybe at home, in their hearts.
Ray: Consequences are daunting. More than people are used to.
Distinguish from people who have not thought about it at all. Say, well that's nuts.
The way Bob Metcalfe says, "Why should it stop?"
Some people very exposed to it, just resist it. A personality type.
Steve: What were your stages? Took you 20 years.
Ray: 1980 when I began to seriously study technology progressions.
[Sahal paper came out in 1979 - http://su-etherpad.com/tpenergy-Jul20-Team-Meeting ]
Over 300 pages, new book. Essay. Will self-publish.
Updates The Singularity Is Near.
If I leave it to other people: I wrote 150 predictions, and critics will write up 8 correct, 5 wrong.
Through selection bias that can give a wrong impression.
Not always obvious what is going on.
So, I research each prediction. For all of my books.
Another book length document coming out.
Next trade book is one about the mind.
In the mid-80s. Came out in 1989 - The Age of Intelligent Machines. Really finished in 1986.
Views consistent with these views.
Not a community to talk about this.
Santiago/Argentina: How has your thinking changed since The Singularity Is Near?
Ray: Hasn't changed. The new book will just examine in a lot more depth the issue of the mind.