Place your take-home messages here

Part 1:


Part 1:

Part 2:

Part 3:

Kathryn will review later looking for any [ref] markers -- she and the TFs will fill these in with references, but all are welcome (encouraged) to add references / put in as many details as are available.
Salim: Intro of Ralph Merkle

Last year, photo on wall. A favorite of the students.
Ralph Merkle: This is the opening inspirational talk.

What is the purpose of life, existence in atoms.

Health, wealth in atoms.

Atoms matter a whole lot. The arrangement matters a whole lot.
On the left you see Coal - carbon atoms.

On the right you see diamond - also carbon atoms.

Arrangement matters a great deal.

On the bottom left you see a computer chip sitting on a pile of sand

If you take the sand, add a pinch of impurities, you get silicon chips.

On the right, you have two people, one sick, one cycling - difference is how the atoms are arranged.

Three trends in our ability to arrange atoms.

"Trends," right.

SU, exponential trends.

1/ Flexibility - more things today than we ever could in the past.

Difficult to quantify this one, but clearly better today.

2/Precision, smallest feature sizes. This we can quantify.

We can see, very very clearly in computer wafers.

Exemplifies very clearly.

3/Cost. Straight lines on semi-log paper. More than exponential trends for decades.
1959 Feynman talk [ref]

There is PLENTY of room at the bottom.

In 1959, he pointed out that the laws of Physics did not speak against manipulating matter atom by atom.

How many have read Feynman: this is vintage Feynman. If not, you are in for a treat.

Everyone keeps pointing out what Feynman said.

Feynman got it right. That is really remarkable.

Still stands as earliest observation on this.

Molecular nanotechnology.

Definitions of scale - there are small things and small things are small.

A lot of things are "nanotechnology"

What is interesting is the nanotechnology that is about manufacturing with the ultimate in flexibility, the ultimate in precision - every atom in the right place - and the ultimate in low cost - just the materials and energy.

Raw materials in on the left, manufacturing process, output products on the right.

You've been looking at a 3D printer.


1 meter - size of my arm

mm - about the head of a pin

1000 times smaller is a micron - red blood cells - about 8 microns. Still visible under a light microscope.

Wavelength of light is less than a micron in size.

Down another factor of 1000 - a ribosome - the molecular machine in your body that churns out proteins from DNA/ RNA. About 25 nanometers in size.
A "typical" nanotube is 1-2 nanometers.

Atom is a bit more than a tenth of a nanometer.
One of the big things that happened in the last few decades was the scanning probe microscope.

Yes, it was ugly.

It allowed us to see and touch and even move individual atoms in a way we never thought possible. People were amazed.

Also saw theoretical work - not only see and touch atoms.

We'll be able to build complex molecular structures - nothing in the laws of physics speaks against manipulating matter atom by atom.

We should be able to build lots of interesting devices.
General picture of what is happening.

On left, that dot. The range of things we can build today. Current ability to make things.

Set of capabilities will expand. That gray oval is at present a theoretical understanding - that should let us build a whole range of molecular structures.

We have experimental capabilities we think will be developed.

If we look at it, we have the onrushing experimental front and theoretical analysis.

As we get a better understanding and better experimental techniques,

once we have those core abilities,

there will be an explosion.
I hope to give you a taste of the magnitude of that explosion.


Computer revolutions

New set of capabilities that will change the landscape in a fundamental way.
Products are not so much a small set of capabilities;

that represents the next 10-20 years of output. Huge range.

Insanely huge set of possibilities opened up in the future.
So, manufacturing.

How to arrange atoms.

There are basically two ways:

First, self assembly.

Second, positional assembly.
In self assembly, the parts stick together on their own.

In positional assembly, you take parts and put them together.

This is an illustration of self assembly. [slide T4 Bacteriophage]

Vibratory bowl []

Track that spirals out of bowl. By the time they exit, they are perfectly aligned.

Charming device to watch work.

[Slide showing smiley face created by self assembly of DNA]
Positional assembly

Involves holding parts and putting them together.

At the scale of you and me, we have hands.

At the scale of atoms and molecules, this idea is new.

Idea is beginning to spread.

With scanning probe microscopes ... [survey of class]

You have surface and a stick.

If you have an AFM - atomic force microscope [ref]

Some measure how much surface pulls on the stick -- very snazzy.

Other technique - the scanning tunneling microscope (STM).

You apply a small voltage and measure the current; as the tip is about to touch the surface, a small current flows. Now you can scan across the surface.

When the current increases, the tip is closer; when it decreases, further away.

Get a picture of the surface (bumps)
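The current-to-height idea above can be sketched numerically. This is a toy model, not instrument code: tunneling current falls off roughly exponentially with the tip-surface gap, so reading the current tells you how close the tip is. The values `i0` and `kappa` are illustrative placeholders, not calibrated constants.

```python
import math

# Toy model of the STM readout described above: tunneling current falls off
# roughly exponentially with the tip-surface gap, so current maps to height.
# i0 and kappa are illustrative placeholders, not calibrated values.

def tunneling_current(gap: float, i0: float = 1.0, kappa: float = 1.0) -> float:
    """Current (arbitrary units) for a tip-surface gap (arbitrary length units)."""
    return i0 * math.exp(-2.0 * kappa * gap)

# More current means the tip is closer; less current means further away.
assert tunneling_current(1.0) > tunneling_current(2.0)
```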
[Slide with complex machine]

Ultra-high vacuum.
Picture from last year: picture of a single molecule from AFM.

Single pentacene molecule.

Diagram below molecule.

Technically a tour de force.

The AFM is able to see the Image of a single molecule - quite remarkable.

IBM in atoms - classic shot.

1990 done at Almaden.

This is really a tour de force.

Q: In what sense is that an image?

That is an image in that they scanned over the surface, following over the contours of the Xenon atoms. Took that data and produced image.

Q: Don't atoms move?

Done at 4 Kelvin. Very very cold. Atoms are stable for at least days on end.

At room temp, poof, they all vanish.
Here's another example done at room temperature.

Silicon experiment.

Put single layer of tin on top.

Here, the silicon atoms you use are actually bonded.

Making and breaking bonds.

So they are stable. Very nice work. 2008.


Original Paper:

Here they spelled out Si - the symbol for Silicon atoms on a silicon surface.

That's an interesting capability.

We're still working on a 2D surface. Someone will build a 3D structure. Able to build something, I don't know what.

Now, the theoretical work in this area - of course - allows us to think of things that are not yet experimentally available.
Set of molecular tools to allow 3D construction of diamond.

This set would allow building of another set.

So we have a theoretical tool set - waiting for a demonstration that it is feasible in 3D. Hopefully that will come relatively soon.
So, what would we want to build?

On upper left, a Neon pump. We might want to pump Neon through a structure ...

Q: At 4 Kelvin?

This would be room temp.

Some could go from 100s of degrees Centigrade down to 0.

Planetary gear on right and universal joint would be able to operate right on down to 0 Kelvin.

Q: Once atoms placed together, what holds them?

We have a whole technical discussion coming up -- including deep insight into what makes atoms hold together and do what they do.

Robotic arm. 100 nanometers tall. Not only something you could build ... need positional control

Not only could you build a molecular robotic arm, but the arm, once built, could hold the tools to build more robotic arms. Builds more of itself [self-reference again ...]
How do we make big things using molecular tools?

If you sneeze the small things will all blow away.
We know that it is possible to make something BIG, like a redwood tree, from something that is small, like a seed for a redwood tree.

Suggest we could build something large from molecular machines that build other molecular machines.

Whole area: self-replication machines.

Thorough coverage of topic ... fact that I am co-author has nothing to do with my view ;-)

Quite an interesting area.

Build big things from small things.
Convergent assembly -

start small on the left. Put them together, so they are 2 nm parts -> 4 nm parts .. so size doubles at every stage. In 30 doublings you get to 1 m.


Under a day.

One thing - interesting - speed is a scale independent constant.

If I am going 1 m/second: a nanometer per nanosecond is a reasonable speed. Same as 1 m/sec.

Small things moving across system - if average speed across is 1 cm/sec then it takes 100 seconds to make a 1 m object.

Speed should be quite good. Kind of interesting.
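The doubling and speed arithmetic above can be checked with a short sketch (the function name and the 1 nm starting size are illustrative):

```python
import math

# Convergent assembly: part size doubles at every stage.
def doublings(start_m: float, target_m: float) -> int:
    """How many size doublings to grow from start_m to target_m (meters)."""
    return math.ceil(math.log2(target_m / start_m))

# From 1 nm parts to a 1 m product: 2**30 is about 1.07e9, so 30 doublings.
n = doublings(1e-9, 1.0)

# Scale-independent speed: 1 nanometer per nanosecond is the same as 1 m/s.
assert (1e-9 / 1e-9) == 1.0

# If parts cross the system at 1 cm/s, one pass across a 1 m object takes 100 s.
time_per_pass_s = 1.0 / 0.01
```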

Will take a few years to develop these machines.

Mfg costs should be low.

Lumber, wheat - made with self-replicating mechanisms.

Products are likely to be of same order of magnitude.

We'll see across the board drops in manufacturing costs.

Computers - more powerful in the future.

Molecular computers are in the cards.

That's coming.

High density memory. Very high memory molecular memory.

1 bit per atom. Plausible. In the cards.

Today's surgical devices are too big.

In the future, molecular tools to deal with cell level damage.

Lighter, stronger, smarter, less expensive --in aerospace particularly.

Space - available to pretty much everyone.

[K. Eric Drexler ref]

You get orders of magnitude drops in costs.

Vacations in space, living in space for a lot of people.

Analysis of single stage vehicle.

Q: Time horizon?

Depends on how fast we get nanotechnology working.

I'm going to say something that is sacrilege.

We have the straight lines on semi-log paper.

It is how much work we put in, particularly early on.

Chip manufacturers have put in gobs.

Nano is getting zip money. [holding fingers together] Investments are that big.

We can bring in the benefits of the technology by putting in the investment now.

Some technologies not yet on curve.

Until they get onto the curve, a lot depends on chance events.
Babbage in 1840s

Relays used in telegraph systems. Could have used them in 1850s to build computers.

We didn't get computers until 1950s. Big delay on that one.

I'll announce in passing that the environment would benefit greatly from this technology.

Also good to have long term planning.

Universe 13.7 bn years old. Will continue for a while.

How many people in this room plan to be alive in 30 years ... 50 ... 100 years.

That is a political hot potato around here.

It would be nice if some better medical technology was developed in a time frame that would be useful to you: 30/50 years. So this horizon makes sense - for personal medical technology.

Life spans have been 70 years.
Question session.

Salim: 15 minutes. 2 slices. Turn around, form groups of 5-6 people.

What were 2 most important aspects.

Discussion results can be logged here:
Main ideas:

You need to see these things when you build them
Disruptive nature, could change everything we do, how far away are we from the things we just saw?
Downsides? How do we keep the environment "clean" from nanotech? Analogy of fire and the various regulations, tools and rules we have to deal with that potentially destructive technology.
Cost and temperature a problem? Temperature isn't, but the higher the temp the more likely you will get undesired side reactions. These things won't be too expensive, and as time goes on costs go down. Room temperature operation is likely.

10:24 Break
- Molecular nanotechnology
10:32 Promised introduction to core concepts about atoms and how they work.

First, lead in - why do this.

Manufactured products are made from atoms.
We need to understand what atoms are and how they move.

This will be basic for molecular nano, nano, medical, whole set of concepts.

No equations.

But showing pictorially, graphically. If you want equations, you can look them up.

You will get concepts of equations.

Hard core intro.
That - a nucleus and electron cloud - is an atom

The nucleus is a point mass. Technically, it may be fuzzy, but ignore that. It is amazing how well you can model the nucleus as a Newtonian point mass.

Occasionally, Hydrogen might tunnel. None of the others do this.

The cloud kind of smears around.

Defined by laws of quantum mechanics.

It is a blurry cloud on the level of atoms and molecules.
Hydrogen nucleus. One proton. Proton has mass approx. 1 unified atomic mass unit [ref]

About "1" -- not exactly 1, but the actual mass is close to 1.

Mass is a funny number.

Charge is +1.
Electron has mass. 1836x smaller than proton. Charge of -1.

Opposite charges attract.

The electron is attracted to the proton. The reason it does not collapse is that quantum mechanics says, "it can't". Smaller cloud, more blur. Don't ask. It is quantum mechanics.
Carbon atom.

Nucleus - 6 protons and 6 neutrons. That is Carbon 12. Called that because you can add protons + neutrons: 6 + 6 is 12. 6 protons means it is Carbon, by definition. If it has 8 neutrons, it is Carbon 14: 8 + 6 = 14. Radioactive. Used in dating.
Six electrons. -1 for each.

Now we get some numbers, but don't be alarmed.

1 Unified Atomic Mass Unit is blah - an exact number for an AMU. Really small.

10 to the minus 27 kg.
One proton is slightly different from 1 AMU. Slightly different, but pretty close.
Carbon 12 has mass of 12 AMU - by definition.

One electron, a lot smaller in mass.

Mass of neutron is also about 1 AMU.
So. Neutrons and Protons and AMU are about the same, and really small.

Nucleus has N + P in it. Count them. That integer is the mass number.
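The counting above can be written down directly. The AMU value below is the standard approximate figure, quoted only to match the "10 to the minus 27 kg" in the notes:

```python
# 1 unified atomic mass unit (AMU) is about 1.66e-27 kg -- the "10 to the
# minus 27 kg" figure in the notes. Approximate value, for illustration only.
AMU_KG = 1.66054e-27

def mass_number(protons: int, neutrons: int) -> int:
    """Mass number = count of protons + neutrons in the nucleus."""
    return protons + neutrons

assert mass_number(6, 6) == 12   # Carbon 12: 6 protons means it is carbon
assert mass_number(6, 8) == 14   # Carbon 14: radioactive, used in dating

carbon12_kg = 12 * AMU_KG        # Carbon 12 is 12 AMU by definition
```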
Q: What is mass?

I am massive. [Patting self and hopping.]

I am multiple kilograms. In US, I would be in pounds.
Q: How is that different from weight?

I am on a planetary surface. Being attracted by gravity.

In space, I would still have mass, but no weight.

People at NASA are very careful about the differences as mistakes could cause problems on rocket flights.
Q: Mass is resistance to acceleration.


Another way of looking at mass.
OK. That is the hairy technical stuff.

That is the core.

Atoms have nuclei - most of mass in nucleus.

Atoms form bonds. They form bonds because of the electron cloud.
The Hydrogen molecule has TWO protons. Point masses. Electrons are drawn to both protons because they are positively charged.

There is quantum fuzz - about the same size as the bond length.
In fact, what is going on, the electrons like to be near the two protons.

They like to be right between the two protons.

More electron density between the two protons - closer to both protons there.

I want to snuggle up to both protons if I can. If there is more negative charge between the two protons, they say "Yum" and move closer to each other. That is what is called a bond.

All about the electrostatic attraction of the electrons for the protons and the protons for the electrons and the quantum mechanics that cause the electrons to not want to be in the same place - so blurred.
Interesting thing. A bond has a certain characteristic length that it likes to be at.

If you pull the two protons apart, there is a restoring force. Sort of like a spring. Pull them apart and they want to pull back together.

Stretching a bond is like stretching a spring. Oooo. Interesting.
You can say the same thing in equations. If you get the right spring constants, it is really quite accurate.

You can draw a graph. Plot energy versus bond length.

Two protons, pull them apart. Energy goes up - they don't LIKE to be pulled apart.

If I push them together, they are unhappy.

They go back to the bottom of the potential well.
This potential well describes the force between the two protons in the Hydrogen molecule.

If I know that curve, I don't have to know where the electrons are. I just need to know the shape of that curve.

I just say, "There is a bond between the two Hydrogen atoms." I can ignore the electrons entirely.

I am describing a bond between two Hydrogen atoms. I can describe it as a spring between two Hydrogen atoms. I have now allowed myself to think about this spring - no electrons, no quantum mechanics.
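A minimal sketch of the bond-as-spring picture above, assuming a harmonic potential. The spring constant `K` is a made-up placeholder; `R0` is roughly the hydrogen-hydrogen bond length in nanometers:

```python
# Bond as a spring: harmonic approximation to the potential well.
# K is a made-up placeholder spring constant; R0 is roughly the H-H
# bond length in nanometers.
K = 500.0
R0 = 0.074

def bond_energy(r: float) -> float:
    """Energy (arbitrary units) of a bond stretched or compressed to length r (nm)."""
    return 0.5 * K * (r - R0) ** 2

def bond_force(r: float) -> float:
    """Restoring force: negative when stretched, pulling back toward R0."""
    return -K * (r - R0)

# Bottom of the potential well: zero energy and zero force at the natural length.
assert bond_energy(R0) == 0.0 and bond_force(R0) == 0.0
# Pulling the protons apart or pushing them together both raise the energy.
assert bond_energy(0.080) > 0.0 and bond_energy(0.070) > 0.0
```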
OK. Hang on tight. Replaced a bond with a spring.

Now Water. Three atoms. Two bonds. Two springs.

H in upper left and right.

Oxygen in the middle.
I've got something more. Those two hydrogen atoms are at a characteristic angle.

I need another spring - an angle spring. Those 3 atoms like to be at a particular angle.

Goes back to shape of electron cloud.

Again, I can ignore quantum mechanics so long as I understand the angle.
Finally, there is a torsion angle. Angle around a bond.

Bond between two Carbon atoms. As I rotate, there is an angle it wants to be at.

Now I have another angle: a torsion angle spring constant.

Some people omit this. But it is often significant - especially in biological systems.
This covers the INTERNAL angles.

Semester course in molecular mechanics. This is speed learning.
Now molecules float and bump and have force between them. Called the "van der Waals" force. Falls off very rapidly. Weaker.
Q: Is this strong/weak forces?

No. Those are inside nuclei.

Weak forces govern radioactive decay.

We are tossing those out.
Q: No gravity?

Gravity does not exist in this picture of what is going on between atoms and molecules. We are throwing out everything we possibly can. Gravity at the scale of atoms/molecules is tiny.

Neutrinos? Forget them.
Q: What is attracting the molecules?


Quantum forces we are very carefully getting rid of here.

Once you know those forces, you can start to understand a lot of things: chemistry, biochemistry, water.
This is a bit of a lie - this is water, but essentially it is telling the truth. Ordinary water is moving around and not as uniform.

If you throw in a hexane, you have an oily (hydrophobic) molecule in water and there are fewer hydrogen bonds. You break the hydrogen bonds.

Energetically that's bad. If you throw two hexane molecules in the water, you are breaking more hydrogen bonds, which is not good.

If the hexane molecules snuggle up to each other you break fewer hydrogen bonds. That's why oil and water don't mix.
Hydrophobic vs. Hydrophilic

Hydrophilic - parts of molecules that form hydrogen bonds stick to both each other and water

Hydrophobic - parts of molecules stick to each other
Explains protein folding

hydrophobic parts stay inside touching each other; hydrophilic parts stay on the outside, hydrogen-bonded with water
Simple Newtonian physics can describe the motion of atoms
Most molecular mechanics models use:

1. Bond length/stretching (interaction between 2 atoms)

2. Angle bend (interaction between 3 atoms)

3. Torsion (interaction between 4 atoms)

4. van der Waals (interaction between atoms over a small distance - sub nm scale)

5. Electrostatics (interaction between atoms over a larger distance - sub nm scale)
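The five terms listed above can be sketched as one energy function. This is a minimal illustration, not a real force field: the tuple layouts and every constant are placeholders.

```python
import math

# Minimal sketch of the five molecular mechanics terms listed above.
# Tuple layouts and all constants are illustrative placeholders.

def mm_energy(bonds, angles, torsions, nonbonded):
    """Sum the standard force-field terms over lists of parameter tuples."""
    e = 0.0
    for k, r, r0 in bonds:                # 1. bond stretch (2 atoms)
        e += 0.5 * k * (r - r0) ** 2
    for k, theta, theta0 in angles:       # 2. angle bend (3 atoms)
        e += 0.5 * k * (theta - theta0) ** 2
    for v, n, phi in torsions:            # 3. torsion (4 atoms)
        e += 0.5 * v * (1 + math.cos(n * phi))
    for eps, sigma, q1q2, r in nonbonded:
        e += 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)  # 4. van der Waals
        e += q1q2 / r                     # 5. electrostatics (constants folded in)
    return e

# An unstretched bond at its natural length contributes no energy.
assert mm_energy([(500.0, 0.074, 0.074)], [], [], []) == 0.0
```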

Again: discussion logged at questions:
Do the bonds get larger as temperatures increase, or do they stay the same length until there is a phase change?

brad su
What actual models - tinkertoy type of models - are available which exhibit the appropriate spring-like behaviors including torsion? Can we get some to play/build with?

John Graves Singularity University

Asimov's Laws

Immoral or illegal to "pause" a computer-based being?

Genero Sapiens vs Homo Sapiens Effictus
[end of presentation]


Question Template [for copy/paste - do not use]

Vote by adding +XX where XX are your initials (see key: )

Student Question:

Student Votes:

Student Comments:

Faculty Response:


Student Question:

Student Votes:

Student Comments:

Faculty Response:

Instant Evaluation:
GSP10 Book List:



5 July 2010

9am – 10am

AIR CL4 Artificial Intelligence: Applications

Neil Jacobstein
Speakers on this pad:

9:06 Neil Jacobstein


10:12 Ben Goertzel


11:19 Peter Norvig

=============== PRE-presentation notes
Kathryn will review later looking for any [ref] markers--She and TFs will fill these in with references, but all are welcome (encouraged) to add references / put in as many details as available.
New slide marker in Etherpad starting today:


at the left margin, the hyphen above indicates a new slide.
If someone wants to put in the TITLE of the slide, like this

- Example Title

that would be even better.
We may be able to use this system to create a robot presenter:
Suggestions for etherpad contributor signals during talks:


A period at the left margin means this contributor is taking a break


An equals sign at the left margin means this contributor is ready to capture the next words

(their first keystroke would be a backspace)
PRIOR AIR Etherpad: (includes Dan Barry's CL2/3)
=============== End of PRE-presentation notes

Today we are going to talk about applications of AI.


The use of AI is part of a larger technology arc



Machines replaced humans as calculator drones


Lester Thurow - replace physical exertion with brain power

"Standards of living rise not because people work harder but because they work smarter. Economic progress is the replacement of physical exertion with brain power."

- Machine Augments Human Expert: Dendral 1965 - 1980

We've been able to demonstrate amazing applications.

In the last 50 years we've been able to get machines to solve problems which only people had been able to solve.

System used heuristics and rules of chemistry, amazingly successful.

Narrow AI expert system. Hand coded in LISP.

Since Dendral,


we've been able to demonstrate narrow AI in a wide variety of domains.

Can sometimes outperform people who created the system.

In Bangkok at rush hour, these systems can't compete with human drivers.


If it works, it isn't AI.

Now, when AI is successful, they are considered medical or manufacturing applications, not AI applications.


AAAI - in 1989, created an applications conference to capture and document the deployed and emerging applications.

Team of 25 researchers gets together and

Selects the top 20 which are presented at the AAAI conference each year

There are about 460 since 1989.

You can go to the website and look at most of them, download PDFs


AI has been deployed worldwide.
Or in their power grid, or small devices in cars and machines

And in many cases there are universities doing AI research

The range of AI applications is truly huge: ubiquitous, increasingly invisible, and embedded in 'vanilla' products and services

they are distributed across wide variety of application domains


government applications


Most of the applications, we don't know about.

Now delivered by PDAs, mobile phones and cloud computing services

Mid-1990's AI reached a tipping point.

Many people adopted it without knowing - adopted in the sense that people were using mobile phones that used AI tech

Instant credit check involves AI

- Task Automation

Only humans used to do these things.

We don't have the broad human intelligence of AGI, but we have an amazing list of what we can do

Thinking about your team projects and what AI could do for you. This is the list of what might be relevant.

- Application Domains

DARPA has funded a lot of AI over the last 50 years.

At this year's conference, oceanography application determining when sample bottles open to sample organisms.

- Sample of AI Application

- Sources of Value Added

Acceleration is often by orders of magnitude.


Example would be really complex scheduling and planning problems


Beverly Park Woolf: Building Intelligent Interactive Tutors [Matt: for book list]
Carnegie Learning for Algebra I. Really large studies, carefully controlled. 70% greater likelihood of completing subsequent courses.

If you want to build more than a page turner, with a student model, see Woolf's book.


Thing remarkable about DARPA BBN - 10 million element search space.

Produced improvements in productivity. Speed ups. Increased accuracy.


The paper system had a mechanistic causal model. Diagnosing problems. Pitch gums up systems. Used in many mills, still in use today.


Dick Reese said DART+ alone paid for all of DARPA's investments in AI.

Logistics Europe/Saudi transfer of materials.

C++ system

Oracle forms.

Standard linear programming package.

Off the shelf tools. Used lin prog to do optimization.

- GE Plastics Color formulation tool

Almost old enough to vote. How would an AI system vote?? Democratic?

Big business for them. Huge savings.

Delivered over web.

An example of what you can do as a large corporation when you are paying attention and bringing a system through the culture.


Jack Myers collaborated with computer scientist Harry Pople

They built a system that was informally known as Jack in the Box

Building in internal medicine knowledge base.

They did quite a good job of capturing a big chunk of internal medical base

Randy built a commercial version.

Internist 1 was used in early years of capturing knowledge base

Finally, QMR had a PC version

System was remarkably effective for doing what they wanted it to do

Didn't get traction in the medical community.


If interested in Biomedical Informatics [Matt: another book]

Biomedical Informatics - computer applications in health care - Edward H. Shortliffe, James J Cimino

Goertzel will talk to you about a longevity gene


Lessons learned.

Acceptance remains elusive.

Schism between development and implementation.

Ted Shortliffe has discussed.

"Effectiveness" discussion is not enough.

BUT adoption is a whole different thing.

Issues: culture, integration, work flow, verification, changing guidelines, finance, safety and

in the US: liability

Diagnosis is NOT the key problem.

If you are thinking about deploying into developing countries, success will depend on the availability of prevention or treatment, not just diagnosis.


Personal medical advice is coming along. We see now platforms to capture medical information.

Standard guidelines.

As time goes on, people working now on agents will use your data and provide specific recommendations. Will need more data.


Game based AI is a big win. Fastest growing. THE competitive edge. Games > Movies.

VERY big deal.

AI in these games varies widely

How many use such games? [lots of hands]

AI ratchets up and becomes

AI in VR environments - Second Life (see Ben)

AI sourcing - brings together user requirements and

Use constraints and tree search algorithms.

Hosted over 230 procurement events.

Cost savings, $1.8 bn
64-bit shared server farm

AI-generated antenna. Evolved a solution using the criteria they had as selection pressure.

F-22 Raptor, Cockpit interface applications.

Anti-glaucoma drug - designed based on structural constraints.

Lenses are premium optics. Used to take months; now they have captured that optical knowledge and can produce a new lens in a very short period of time.

If you can produce a system that has both domain knowledge and task search, and access to the web, you can provide amazing services

Google has been able to leverage their access in Translation (Norvig)
Microsoft's Bing "decision engine" is meant to provide answers, not only pointers to URLs.

Wolfram Alpha - curating knowledge. If you can state your problem in the way they have curated knowledge, it can give very sophisticated answers [ref]
SIRI takes task requests. Gives the user a very small window to help answer. Apple bought SIRI - Adam coming next week


Shine - built at NASA's JPL. It is blistering fast. The reason for that: they pre-allocate storage. They can get this thing to run in near real-time. Specifically designed to diagnose problems on spacecraft. Used on Galileo etc.

NASA licensed to Biospace - useful in other domains.


Today, not just on a single workstation. AI+ other technologies. Mainstream languages.


Uniform Resource Identifier

Leveraging neuroscience - Numenta

like VitaminD?

Mainstream applications integrated with AI. Browser is GUI. And sometimes speech interfaces used in new ways.

Not technology or researcher centric, they are now built around the processes in the organization.

Can get closed-loop feedback in robotics


Seen on Mars. Urban Grand Challenge.

Robots in small slice form.

Lego kits, small amounts of AI (planning systems)

Robot soccer Neo? [ref]

We are not just looking at robots in macro scale. Molecular scale.

Interacting at nanoscale.

Map patents in nanotechnology. Early example of some interdisciplinary coordination.

Deployment and evolution include engineering - including social aspects.

You really need to pay attention to all the problems of getting these systems adopted by

Organizations who have resistance to change.

Very important to leverage deep domain and task knowledge.

Sometimes big win just through leveraging massive data.


Biological time perspective.

Some of you may think AI has had 50 years to achieve AGI, and that should be enough

Compare how long it took Mother Nature - 600 million years - by that standard we are doing OK


When are we getting AGI?
For team project work, they use narrow slice of AI.

Whole unified theory of cognition delivering value?

Suggest: identify immediate value generators.

Make resilient decisions to cover your bets.



Em: Please use mic as we are taping.


No questions?

Shary: Resources for technicalities. What is really happening? If/then?
AAAI - lot of documentation of how they do it.

Go to that site and click on applications.
Gary: Interested in Google and others. Rate web, all information. Matching same criteria we use? Google rates pages based on links they have, but you don't get that much for certain information. Different system to provide what you are looking for. Joke e-mail sparked idea. Asked you to pick a famous person. Asks you to filter answer. Will find person in 15 questions. Male/Female - will find people all over the earth. Can you have filtered criteria, the exact answer you are looking for?
Interesting question. In the example you gave, it is a constrained domain - about searching for people, not other domains. If you can constrain the domain, or have people willing to hang in there for 15 questions. Google users want a SINGLE search. They click around for a while to find what they want. You want a single answer: either constrain the domain or be more specific. The way to build systems like that - see if you can require fewer than 15 steps, follow the user and know more. Can help provide an exact match due to knowledge of the user's intent.
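The arithmetic behind the 15-question game is just repeated halving; a quick sketch (the candidate count is illustrative):

```python
import math

# Each yes/no answer can halve the remaining candidates, so k questions
# can distinguish up to 2**k people.

def questions_needed(n_candidates: int) -> int:
    """Minimum yes/no questions to single out one of n_candidates."""
    return math.ceil(math.log2(n_candidates))

assert 2 ** 15 == 32768          # 15 questions cover ~32k famous people
assert questions_needed(30000) == 15
```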
Sasha: Expressed / implied inputs.
In most cases, most systems not understanding implied inputs. Requires sophisticated user model. Early Microsoft Bob - universally not appreciated. Tried without context to understand.

Best now are cognitive tutors.
Jan/Germany: Search engines in next decades? Wolfram Alpha - how is it different as a "computational knowledge engine"?
If you know Mathematica, you can use WA with more ease. Use that genre.

Next gen search: key is to get beyond links to documents. Will be helped by people encoding their URI - not just their documents. Right down the data item - the leaf nodes.

Just by having that massive data available.

If you can encode the semantics of the search, a la Web 2.0 or 3.0. Ontologies.[ref]

Some combination of URI and semantics.
Dmitry: How labor and time intense is development of narrow AI? Is there a way to outsource? If we know business/engineering and need it fast/cheap? What are typical conditions?
Typically there are two types of people for a deep narrow AI system. A human expert who has the entire body of knowledge - typically not substitutable. You need someone available for quite a long time to extract the knowledge from, and to ensure high quality.

The other people you need are those who can encode that knowledge; sometimes the experts can do it, sometimes you need someone who can tease out the model of how they solve complex problems.

Could you outsource? Sure, but you still need the integration of those two people.

reference to Push Singh's work - now ongoing in the Mind Machine Project as the OpenMind group:
Need long-term patient sponsorship
9:55 Break.

Words from BBC guy: Thanks for your secrets. They were on the record. We will be in contact with some of you via e-mail. Hope you get everything you hope for from the program. Bye.
AIR CL6 The Future of Artificial General Intelligence (Ben Goertzel)

Mon, July 5, 11:30am – 12:30pm
10:12 Ben (via Skype)

Sorry I couldn't make it.


Will go through very quickly - own take on history of AI.

At the end, since this is SU, I will indulge in a bit of speculation.

AGI during next few decades.

Feel free to e-mail me at your leisure. Always interested to talk.

Look at my Keynote presentation rather than my face.

Will start screen sharing.

10:14 [Switches to screen share]

AGI 1950-2050

So this talk is called, "AGI 1950-2050."

Since you have already done some AI material,

will go quickly

1950-2050 is a nice round century.



Artificial General Intelligence.

Define with G factor: look up

Shane Legg and Marcus Hutter published a paper collecting ~70 different definitions of intelligence.


Contrasted with narrow AI.

Can be quite valuable

1950: Turing test for general AI. Assume already reviewed.

Back then, they thought they could build machines like people.

-thought it would happen in the 50s 60s or 70s

didn't think it would take a half century.



Flourishing of different types of search



Lenat - math theorem

Connection Machine - field going in all directions




Google is an AI company

PayPal is an AI company - fraud detection


planning control, unmanned vehicles, video games

Slide shows character from Black & White


By 2001 we were supposed to have AI -- HAL


Instead we got Google - narrow AI everywhere, helpful but not intelligent like C3PO


Useful things: car in desert


I've done my own narrow AI.


Machine learning applied to biological data

mitochondrial mutations


Fruit flies live 5x normal - "Methuselah flies" [ref]

Narrow AI, but important


AGI is coming back


Series of conferences. 2010.


AGI-11 next year at Google's campus [ref]
Not remotely as marginalized as it was a decade ago.

Ambitions may be done now, or at least soon.


Open source OpenCog []

Builds on dot com company

Book from 2006, the hidden pattern. Discusses philosophy of mind.


Book: Building better minds - whole approach in detail.

Many types of memory.

Separate types for different memories

Goals -

Attentional memory

Using system to control agents - embodied agents. Virtual embodied agents.

And humanoid robots.


Blow up of declarative knowledge representation. Where specialized representations come together.

Combines aspects of neural nets via importance values.

They flow around like in a neural net.

Semantic net.

Most nodes have no names. Most don't correspond to English words or external perceptions or actions.


Prototype in Second Life. Dog controlled by AI.


Video from multiverse.

Shows how ambiguities can be perceived relative to virtual world.

[narrates story - dog: "grab it" with "it" being disambiguated]

Not just a mocked up demo, but a screen capture from Multiverse [ref]

Dog does learn stuff and respond using OpenCog.


Would like to do virtual parrots


Virtual babies.


What we have done is gone from dog to humanoid. Early stage work.

Xiamen in China [ref]

Robot made by French company [ref]

Chinese grad students control the French robot

Code from Brazil

Nice thing about open source approach, comes out quite international and interdisciplinary.

Built by the world, just as Linux has been. Versus secret government lab.


Robot can interact with people. Play.

Not the most advanced.

Impressed by PR2 built by Willow Garage.

Very different from virtual world. Have to deal with "messy" real world.


Video showing robot navigating around.

A Chinese student says, "Hey robot, go to the chair." It navigates there.

[aside: Skype message cannot be viewed while playing video.]

Robot found chair.


Alright we're back.

Important thing in 2010. Robot revolution is beginning.

Sound would have helped in last video. Defect of Skype's screen share.

But, importing virtual dog code to robot, robot has to do vision processing.

Has to move head around object.

Makes it more difficult, doesn't add much. Lab is simplified environment.

Later, richer environment, more affordances, you get a better interaction.

Some reason to think that human richness comes out of richness of world we interact with.

Having some kind of interaction with a really rich world gives the system a lot of data.

Maybe a system with a lot of interaction with the internet could do it too.

How rich does the perceptual/motor interaction need to be?

Could talk about that a long time.

Robotics is coming. See Willow Garage robot here.

Doesn't fall down due to wheels.

Woman is robot, guy is human.


Honda robots play soccer


James Albus - created for US Army



What is coming in the future?


What I'm aiming at is robots and virtual agents - virtual toddler.

We can get there in 10 years. Maybe less.

I'm optimistic we could get there in 5 years with adequate funding and focus.

These are hard to come by.

We will have robot children - controlled by

OpenCog or integrative systems.


Robot servants.

Whether that will happen.

Care for domestic tasks.


Robot Scientists.

Child robots + narrow AI


Biology interests me most.

Control AI biology equipment

AIs to surf biological papers

Put together with an AGI system like OpenCog - may not be like a human, but much better.

Potentially may read every paper and look at all data.

Creativity, playfulness

+ acumen of narrow AI systems -> Nobel Prize winning results every day


More and more global brain emerging.

Internet leverages human intelligence - and stupidity and other human traits.

Robot servants: Korea has said that by 2020 there will be a robot in every house.

Robot scientists: combining robot toddlers with useful narrow AI programs, we should be able to get uses out of early-stage AGI systems.

We're going to see the internet have a mind of its own (Skynet via Terminator 3)
2030: A bit closer to the singularity
One big network, car, toilet, home, 3D printer, avatars in various virtual worlds, service robots, basement fusion reactor etc will all work together. Hopefully not a big brother scenario but more like the internet today. An internet bridging every virtual and physical object to speak of which will have all kinds of amazing implications.
Internet of things


2050: Ray Kurzweil projects technological singularity at this stage (2045)
+/- 15 years, that's about right.

- A Very Hard Problem
A cockroach projecting the outcome of WW2 - predicting what happens after the Singularity is about as feasible.
Goal invariance under radical self-modification:

How can you create something that gets better, but doesn't change its "spirit"?

It could have a change of heart with huge IQ.

Solvable technical problem.

- Another Very Hard Problem

Preventing an AGI that subjugates us all. More of a political problem.

- from Orion's Arm

When AGI really succeeds, it goes beyond all these things we've discussed,

it will be beyond human comprehension.

Hope we enjoy the ride.


Neil: Come to front of room for questions. Don't be shy.

Michael Chen: Saw OpenCog on Singularity Summit last October. Since then?

BG: Bunch of developments. Robotics work is new. Extending from virtual world.

Natural language generation working much more robustly. System can express itself.

Backend improvements.

MC: Track?

BG: OpenCog developers e-mail. Blog and wiki not updated as often as it should be.

IRC channel. [ref]
Shary: Wondering to what extent current AGI is trending toward human intelligence, which has emotions, intuition, etc. that make us human. How is this integrated into AGI?

BG: Some people are trying to emulate humans. Others aiming at general intelligence. I'm aiming at the latter. Not a human. Started with a cognitive architecture, but no sex drive in the robot; it won't get angry when someone insults it. Emotion and intuition are part of the broad architecture. But the specific set that governs humans is not being built into OpenCog. Other researchers are doing that.
Eric: Talked about robot child by 2020. Would this be static - same intelligence/behavior or some kind of learning that would grow?

BG: Everything I am doing is about learning. The child would learn. All unknown territory.

This is all unknown territory - whether we can build a robot child and teach it so that it can learn and learn and become an adult, etc. We don't know yet.

Certainly AGI is all about dealing with learning systems.

AGI vs. narrow AI: the point is to learn about situations the programmer did not anticipate.

Child can deal with situation its parents never thought of.
Eric: If we get there by 2020, we don't need to do anything else because it will just go by itself, at least the basic mechanism?
If we get to a robot toddler, we've solved AGI. Once you have the toddler, you've made the breakthrough. Even if you have to modify it a bit -
"Leap from robot toddler to Nobel Prize winner is much less than where we are now"
3 to 10 year old change - genetic programs come into play. The brain changes.

You probably need to add some expert functionality. But core learning/intelligence algorithms are the same.
Bill Bing: Politics and policy associated with different areas. Roomba: not much outrage over a vacuum cleaner. Which areas will allow more progressive use of robots and the higher-level human-like behaviour?
BG: Well, in large part that depends on the robotics side as well as the AGI side. And depends on cost. The Nao is $15,000. Another is $100,000. Not that many can afford the PR2.

Once it crosses the cost threshold - PR2 for $5K. People will find amazing uses that we can't imagine now.

Haven't plotted the exponential curve for the drop in robot prices. I think 2020 - upper middle class in the developed world. Once more prevalent - more issues.
Jan/Germany: Reading about Numenta's HTM. How is your approach different?
BG: I think ... I read Jeff Hawkins's book On Intelligence. Certainly true - not terribly original, but elegantly expressed. Vision processing by HTM. Many decades - back to Mountcastle, 1970s.

vision/ audition in human brain are HTM.

Disagree that HTM is characteristic of all of cortex. Important for vision and audition.

On the other hand:

Neocortex for cognition - comes from reptile olfactory bulb.

So cognitive cortex has combinatorial.


Non-linear dynamics and strange attractors missing from HTM model.

Nice philosophy, theory of vision.

Not actuation.

Not language.

Tomaso Poggio/ MIT



Functional / HTM-based AI vision system.
Poggio closer to brain

Ent - seem to work better than Hawkins.

From a lay perspective, same structure.
Yara: Are existing languages good enough?


BG: There is a saying [Greenspun's tenth rule] that any sufficiently complex software program has a LISP interpreter inside it.

Current program languages are a

I prefer Haskell and LISP, but they're not as scalable as C++. Church - Noah Goodman's thing - that's a toy language. Not remotely scalable, even to the level of say LISP or Haskell. If you had something like that which could deal with large amounts of memory and machines like C++, that would be great. But we don't have that, and C++ is enough. We basically emulate the things that Church provides in C++. It's tempting to say "well, AI would be easier with a better language, so we should start with the language," but I looked at the history and saw that people spent 10 years just building their languages, and it turns out to be really hard to build a programming language, and we don't *need* it.
EK/Argentina: 2 parts. Laws of computation,
We've made airplanes and helicopters and spacecraft, but we can't make a robot hummingbird.
Empathy has two aspects: logically modeling other people and internally simulating other person.

In principle we don't do that well; AGI could do it better. Or we could build psychopaths. Both possibilities exist within broadly human-like systems.
Wide variance within humans. Some really rotten without good side.

Doubt that what we see exhausts range of possible. Could have REALLY sweet computer.
Justin/Canada: Great Talk. Philosophy - human / biological traits. Dawkins, Selfish Gene.

Unselfish AGI possible?
BG: Of course, it is possible. We are not limited to what we are.

[static/distortion now]


[dropped video]

Neil: we ought to reboot?

BG: [very noisy] Reeboot????

[laughter and applause]

Neil: Peter Norvig is here.

AIR CL5 Artificial Intelligence: Methods (Peter Norvig)

Mon, July 5, 10:15am – 11:15am


Neil: Peter Norvig is director of research at Google Inc.



co author of leading textbook in the field.

USC/ Berkeley - what goes on under the hood.
Peter: Always enjoy speaking with the Singularity University crowd.

Ben did a good job talking about the big picture.

I'm going to do the small picture. Bottom up.

People with vision - what is right architecture

and people looking for right pieces.

Tough task. Wrote with Stuart Russell 1200 page textbook

1 hour lecture

1/2 hour so you could have questions

Intelligent agents






Interfacing with the world



Perceiving the world with senses

Philosophical issues


Four ways

Are you duplicating what humans are doing, or trying to do the best job?

How successfully do they act: external measure

Most useful - act rationally.


Rational agent

Perceives and acts.

Map prior perception into current action


Table tennis? Yes

Drive a road? yes

- list

Things in red you can't do yet.

Pink: arguable

Green have been done

History of AI like history of war on cancer.

Nixon: War on Cancer. Must have lost.

But, look at cancer rates, lots of successes. Better all the time.

New treatments.

Better and better at a lot of tasks but not one answer.

AI is like that.

- Hard questions

- Agents and environments

Agent in one box. Takes percepts. Sensors. Actuators. Black box does something.


Example. Has to clean stuff up. Whether dirty. Figure out right program, right sequence of actions.

Vacuum cleaner world

Is that the right function? Define a performance measure; see how it performs.
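The two-square vacuum world on this slide can be sketched in a few lines; a minimal, hypothetical version (the agent's rules and the four-step run are invented for illustration, not from the lecture):

```python
# A minimal sketch of the vacuum-cleaner world: two squares, A and B;
# the agent perceives (location, dirty?) and acts with Suck/Left/Right.

def reflex_vacuum_agent(location, dirty):
    """Simple reflex agent: condition-action rules only, no state."""
    if dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(world, location, steps=4):
    """Run the agent; performance measure = squares cleaned."""
    cleaned = 0
    for _ in range(steps):
        action = reflex_vacuum_agent(location, world[location])
        if action == "Suck":
            world[location] = False
            cleaned += 1
        elif action == "Right":
            location = "B"
        else:
            location = "A"
    return cleaned, world

score, final_world = run({"A": True, "B": True}, "A")
```

With both squares dirty, this stateless agent cleans both within four steps; a stateful or utility-based agent (the later slides) would avoid wasted moves on an already-clean world.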



More complex tracks state of world.


Even more complex

Keep track of state of the world and track goals


Most complex - utility based, not just black/white. How much is it worth.

- Problem solving agents

Define world in terms of states.

- Problem Types

Talk about attributes, how hard for agent.

Deterministic - like chess

Non-observable - sensorless problem.

Nondeterministic - solution is a contingency plan

State unknown - exploration problem - what COULD the world be like?

History has been one of dealing with increasing complexity.

Map of whole state


Similar problems like moving tiles around. Good theoretical


Algorithm no time for details

- Map problem

- Algorithm. What comes next, until get to goal



Trade off until you reach goal.


skip that


Genetic algo

May allow parts to go together

Can combine parts of different solutions, hoping that they will solve the problem when combined


Eight queens problem. Can take half of left board, half of right board, put together, see how well it solves. Chance of selecting proportional to fitness.
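The eight-queens crossover idea just described can be sketched as follows; a hedged illustration where `fitness`, `crossover`, and `select` are hypothetical helper names, and fitness counts non-attacking queen pairs out of C(8,2) = 28:

```python
import random

# A board is a list of 8 queen rows, one per column. Crossover splices
# the left half of one parent with the right half of another; selection
# probability is proportional to fitness, as on the slide.

def fitness(board):
    """Number of non-attacking pairs out of 28 (higher is better)."""
    attacks = 0
    n = len(board)
    for i in range(n):
        for j in range(i + 1, n):
            same_row = board[i] == board[j]
            same_diag = abs(board[i] - board[j]) == j - i
            if same_row or same_diag:
                attacks += 1
    return 28 - attacks

def crossover(left, right, point=4):
    """Left half of one parent + right half of the other."""
    return left[:point] + right[point:]

def select(population):
    """Pick a parent with probability proportional to fitness."""
    weights = [fitness(b) for b in population]
    return random.choices(population, weights=weights, k=1)[0]

solution = [2, 4, 6, 8, 3, 1, 7, 5]  # one valid 8-queens placement
```

A full GA would loop select/crossover/mutate until `fitness` hits 28; the slide's point is just that useful halves of different boards can combine into a better whole.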


What is that blob?


How about this blob? [fuzziness]

- Uncertainty

In early years of field, being able to represent problems.

We've gotten a lot better at this recently (late 80s)

Now we can handle problems with uncertainty.

Move from chess to robots moving around. Gears slipping. Other things.


Why do we need to deal with uncertainty?

Everything in life has uncertainty.

Optimal route may be deterministic, but what time to leave?

There could always be an earthquake, car hit by asteroid, whatever.

What is the spread over time? Usually 25 min. Allow more than 25 min, but not 25 hours.

The spread is subject to uncertainty.


Uncertainty is two things, Probability and Utility

Probabilistic assertions summarize the effects of laziness and unknowns

Laziness - don't integrate everything. Get all traffic reports. Do a simulation. Figure out today how long to drive to the airport. Easier to just allow a few extra minutes.

Can use Bayesian probabilities to summarize the problems

- Inference by enumeration


-Bayes' rule and conditional independence

Causal probability networks: don't have to look at all P values, just those that are related to each other
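Bayes' rule from these slides can be shown as a tiny numeric sketch; the prior and error rates below are invented for illustration, not from the lecture:

```python
# Bayes' rule for a binary hypothesis H and evidence E:
#   P(H | E) = P(E | H) * P(H) / P(E)
# where P(E) = P(E | H) P(H) + P(E | not H) P(not H).

def bayes(prior, likelihood, false_positive_rate):
    """Posterior P(H | E) given P(H), P(E | H), and P(E | not H)."""
    p_evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / p_evidence

# Illustrative numbers: 1% prior, 90% sensitivity, 5% false positives.
posterior = bayes(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
```

Even with a fairly accurate test, the posterior stays modest (~15%) because the prior is small; that is the kind of summary-over-unknowns the slide is after.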

- Preferences

rational axioms based on prefs.

-Rational Preferences


- Rational pref. contd.

- Maximizing expected utility

- Utilities

can map them into real numbers, think of them in terms of lotteries

A 1/1,000,000 chance you die - how much would you pay to avoid it? ~$30
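The maximize-expected-utility principle behind these slides can be sketched directly; the airport-style actions and numbers below are invented for illustration:

```python
# An action's value is the probability-weighted sum of outcome
# utilities; a rational agent picks the action that maximizes it.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """actions: dict name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "leave_early": [(1.0, -10)],            # sure small cost of waiting
    "leave_late": [(0.9, 0), (0.1, -500)],  # 10% chance of a big loss
}
choice = best_action(actions)
```

The sure small loss (-10) beats the gamble (expected -50), which is the lottery view of utilities: outcomes map to real numbers, and comparisons happen in expectation.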


-Decision Networks

add action nodes and utility nodes to belief networks

- Multiattribute utility

- Strict dominance

B blob dominates A blob of uncertainty

-Qualitative behaviors

a: choice is obvious, worth a little

b: choice is not obvious, worth a lot

c: choice is not obvious, worth a little
- Strict dominance

Two attributes we know are monotonically increasing. Anything in that region is better. B is better than A.

That's when outcomes are certain.
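The strict-dominance test on this slide is a one-liner; a hedged sketch, where the attribute tuples are invented and all attributes are assumed "higher is better":

```python
# Option a strictly dominates option b when a is better on every
# attribute; in that region of attribute space, b can be discarded.

def strictly_dominates(a, b):
    """True if a beats b on every (higher-is-better) attribute."""
    return all(x > y for x, y in zip(a, b))

# Hypothetical (cost savings, safety) scores for two airport sites:
site_a = (3, 5)
site_b = (2, 4)
```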
- Qualitative behaviors

- Inductive Learning (a.k.a. Science)

Ben talked a lot about learning. Key to AI.

Think of learning as induction.

We are trying to bring it all back into learning functions.

Learn from pair of examples. tic-tac-toe.

Figure out rules, best move

- Inductive learning method

Easy generalization from curve fitting to AI.

What is best fit for points.

Best fit quadratic - seems good, but a point is missing


Higher order - which is better -


Here is another that fits all - but who would pick that

Occam's Razor [ref]
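The Occam's razor point above - a curve that fits every training point exactly can generalize worse than a simple line - can be shown in pure Python; the data below are invented, with the underlying trend roughly y = x:

```python
# Compare a least-squares straight line with a degree-(n-1) polynomial
# that interpolates every training point, on a held-out point.

def fit_line(pts):
    """Least-squares line y = a*x + b."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def lagrange(pts):
    """Polynomial that passes through every training point exactly."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(pts):
            term = yi
            for j, (xj, _) in enumerate(pts):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

train = [(0, 0.1), (1, 0.9), (2, 2.1), (3, 2.9), (4, 4.2)]
held_out_x, held_out_y = 6, 6.0

line, poly = fit_line(train), lagrange(train)
line_err = abs(line(held_out_x) - held_out_y)
poly_err = abs(poly(held_out_x) - held_out_y)
```

The interpolating polynomial has zero training error yet misses the held-out point badly, while the "who would pick that" intuition on the slide is exactly the simpler line winning.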

- Performance measurement

Shary: ??

PN: This does not have to be. Trying to do learning - Bayesian in

Performance over training set size.

- Example: linear Gaussian model

Here a lot of linear models; approaches tended to be complex. As problems get bigger and more complex, we use more complex features - make the square of a variable one of the features. That technique seems to be helpful.
Slides are now available:
- Brains

How much on what works vs how much is it like brain.

How to model neurons



- Back prop

Math gets complex, but we don't need to go into details
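Skipping the math, the core gradient-descent step behind backpropagation can still be shown on a toy case; a hedged sketch of a single sigmoid neuron learning the OR function (the learning rate and step count are arbitrary choices):

```python
import math

# One sigmoid neuron trained by gradient descent on squared error.

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(data, steps=5000, lr=1.0):
    w1 = w2 = b = 0.0
    for _ in range(steps):
        for (x1, x2), target in data:
            out = sigmoid(w1 * x1 + w2 * x2 + b)
            # gradient of 0.5*(out - target)^2 through the sigmoid
            grad = (out - target) * out * (1 - out)
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b -= lr * grad
    return w1, w2, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train(data)

def predict(x1, x2):
    return sigmoid(w1 * x1 + w2 * x2 + b) > 0.5
```

Real backprop applies this same chain-rule step layer by layer through a multi-layer network; one neuron is enough to show the update.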


- Handwritten digit recognition

Support vector machines

Best machine ~.6% error

Trading back and forth as techniques advance.

- Mobile Robots

Transition to uncertain environments.

Robotics is where the treads hit the road.

Messy environment.

- Localization problem - Where Am I?
One of the problems where we have had a lot of success

Dynamic Bayes net
- Localization contd.

Through bunch of measures

Assume Gaussian noise in motion prediction and sensor range measurements.
These are 4 measurements for range. From angle, two match well, two don't.

Too noisy to figure out where you are [relative to side or corner of box]


Could be almost anywhere in map.

Get more reading.

A few more seconds of readings, and it figures out where it is.

So, knows for sure. Localization given a map.
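The sequence just shown - uniform belief, sense, move, sense, belief collapses - can be sketched as a 1-D discrete Bayes (histogram) filter; the corridor map and noise values below are invented for illustration:

```python
# A robot on a circular corridor of cells, some marked "door". Sensing
# reweights the belief by measurement likelihood (Bayes update);
# motion shifts it (noise-free here for brevity).

def sense(belief, world, measurement, p_hit=0.9, p_miss=0.1):
    """Bayes update: reweight each cell, then normalize."""
    weighted = [b * (p_hit if cell == measurement else p_miss)
                for b, cell in zip(belief, world)]
    total = sum(weighted)
    return [w / total for w in weighted]

def move(belief, step):
    """Exact circular shift of the belief by `step` cells."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

world = ["door", "door", "wall", "wall", "wall"]
belief = [1 / len(world)] * len(world)   # "could be almost anywhere"

# Robot sees a door, moves one cell, then sees a wall:
belief = sense(belief, world, "door")
belief = move(belief, 1)
belief = sense(belief, world, "wall")
best_guess = belief.index(max(belief))
```

After two readings the belief concentrates on cell 2, the only wall directly after the second door; the full Monte Carlo localizer in the video is the same idea with particles in 2-D and Gaussian noise.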

- 3D mapping

Situation where you don't know map.
Whirlwind tour. You get an idea of the range.

Look up ones you are interested in.

11:50 [applause]

Sam/Australia: Noticed economic ideas came up? How much integration is there with social sciences?

PN: AI separate, but historical accident. Other fields closely aligned. In a sense, economics is one of them. Doing the right thing. Economics focuses on the masses; AI focuses mostly on individual agents. AI looks at groups too, but mostly not.

Control theory. AI not branch of this? Due to set of tools you have.

Strictly defining disciplines - smell of chem lab. Shape of beakers.

Control theorists had models for linear things.

Simplify until tools worked.

AI said, we want to go someplace else.

Good for exploration of different parts of world.

Things KNOWN in control that were needed in AI. For decades.
Bryce: Ethics. Concepts we deal with. Specifically: in AI, movement back to linear mapping, adding non-linear features. Higher-level predicate functions, the reason being to maintain logical entailment? between functions. Connection? Monotonicity? [sorry Bryce - what is this term?]

Choice better, between 3 things or 1000s? Opportunity costs?
PN: First about language. relation to AI. Important from the beginning.

What it means to be human.

I see this parallel between learning algorithms and language - perhaps coincidence, but common root.

Theoretical and practical.

Some not feasible. Find shortcuts.

Leads into common set of approaches. That's where they come together.



I talked about split - model or best action.

I was talking about perfect rationality.

Understand how to act human - playing poker for example.

Understand where they make mistakes and exploit that.
Bryce: Point was not irrational monotonicity, but ...

PN: How you define the problem. Weakness - problem is not taking all variables into account.

As a core computational problem

Here is definition of problem

Here is how to behave rationally.

There is finite ability

If you just pose problem, not enough.

You have one tick of the CPU clock - what will you spend it on?

How soon do you need decision.

You can get right answer - too many choices

Maybe ignore some of these

Much more complex.

No good theories on how to ignore things. <<<<<<<<<

You'd have to write all that down.
Gary: From your point of view, relating to robots: will we keep focusing on more complicated, econometric models? Or neuroscience? Some kind of organs. Much more intelligent.
PN: We want people approaching from both ends. Robots coming along well. Ben made a good point - depends on how much they cost. Amount of work relates to how easy it is to get them.

1978 level of the PC revolution in robots now. No Macintosh yet. Nothing you'd want to have in your house yet.

Some on neural models.

Some on rational computation.

Still need better pieces. Sensors. Actuators.

Once we have pieces, get more capable approaches.

Still at a very low level.

Follow robot soccer - great place for teamwork.

Winners hit ball hardest, straightest.

Everybody has access to low level, focus then on high level.
Bill/US: very quickly make decisions, just know it (Blink [ref])?
PN: Very good at picking out anecdotes to make a compelling point. Someone else, maybe Gladwell himself, could make the opposite point: quick decisions that are screw-ups [tell me about it!]
Good to know there is one mode of thinking, but to say every expert acts instantaneously -- well, we don't.

Neil: Thank you Peter.


Em: Reminders. Anders. Last student that arrived this weekend.



Lauren: you've got to say a few words. 30 seconds about how much you hate the US Embassy.

Anders: Hello. Yes. Thank you. I know nothing about what you said about yourselves.

I had some trouble with the American embassy. All behind me. Steep learning curve.

I'm from Denmark. Another Dane here. Working with business development. Not an engineer, not a technician. Mostly IT systems for 10 years or so. 36. Wife and 2 kids in Denmark.

Looking forward to next 8 weeks. Thank you.

[more applause]

Em: E-mailed TP selections. Teams. 583C all 5 this afternoon.

Schedule shows which in which room.

Please bring all stuff out of this ballroom. We need to close it down.


[end of presentation]

New meta-data today for weeks 1-2:
Cumulative Etherpad document:
Pages of etherpad output (if printed as PDF): 390
Wordle of cumulative etherpad:
Python code used to collect Etherpads:
Techshop (Saturday, 3 July):


TP Pads for this afternoon:
NOTE: You should already be a member and Owner of the Google Group for your team project.


9am - 10am

NT CL7 Molecular Machine Systems

Ralph Merkle


10:15am - 11:15am

NT CL8 Recent Developments in Nanotechnology

Brian Wang

Slides: (tip: there are more slides here than will be shown)

11:30am - 12:30pm

BB CL8 Towards Homo transcendis



NT Core Session


9:16 Ralph: Molecular nanotechnology

Talking about ways of arranging atoms.

If you arrange a few, that will not change the world.

Can you scale that up to arranging large numbers of atoms?

- Replicative manufacturing systems.
We know we can scale things up because biological systems do it.

A seed can create a tree.

It is possible.

Now, we want to do that in a manufacturing context.

Here we have pictures of robotic arms.

These are making other robotic arms. This is in Japan, the Fanuc Factory Group.

You can have a manufacturing system that is making more mfg systems.

There is a property called closure.

Need a few critical components to make the whole thing work.

- Replication

Many ways.

Von Neumann - a classic. The standard stored program architecture.

Also cooked up an architecture for replicating mfg systems. Also a classic.

Bacteria - very old architecture

NASA Lunar Mfg Facility - proposed in 1980

Land a big seed on the moon. Mine lunar soil.

Drexler's original proposal

Simplified HC assembler - Rob and I cooked up

Exp assembly

MEMS in Skidmore's lecture


Waved hands and mentioned this. Good for nanofactories. Brings together

all these concepts in a nice package.
Q: Santiago: What is it? Self-replication?
Fire is self-replicating, but doesn't create anything.

Any replicating mfg system is going to be self-replicating or "falling down" in terms of what it can make.
RepRap is partial.

Only real example we have which is pretty well closed is the totality of the mfg system - including people - since we can make all of the things we use to mfg the things we mfg.
Bryce: Energy. Does it need to get the energy itself?
One interesting thing is defining the environment, including energy. On Earth, that means sunshine or whatever source. We have that externally.

If you define your environment correctly you can have a trivial process.

Definition of your system - if done in the right way, becomes boring.

Low cost env - then that is economically interesting.
