
InterOffice Memo

To:       List
From:     Nathan P. Myhrvold
Date:     September 8, 1993
Subject:  Road Kill on the Information Highway

Technological changes often have enormous consequences. Microsoft has been the beneficiary of this effect for the last 17 years, and a variety of other companies in the computer industry have either had their day in the sun, or have fallen past the crest of the wave and have suffered as a result. Although the shifting fortunes of companies within the computer industry are naturally quite important to those of us who are participants, the effect on the world at large has actually been rather modest. The confluence of wide area digital communications and ever cheaper computing is going to be a lot more traumatic and far ranging than PCs have been. This memo is about some of those changes and how they will affect a number of industries.

Many of the ideas described here spring from numerous conversations with people at Microsoft and elsewhere. I'd thank everybody individually, but alas, since my life isn't yet on line it's hard to enumerate them all. I would like to thank Cindy Wilson for confirming a number of facts for me and providing terrific library support.

A Computer On Every Desk


Personal computers have become incredibly useful tools - whether for individuals or organizations. The world writes with PCs - whether composing a simple letter, writing a novel, laying out Time magazine or drafting the blueprints for a new building. We make decisions with personal computers - analyzing an investment or creating a budget and asking "what if?". These are all very valuable tasks, but it is also important to keep some perspective on the scope of the changes. Word processing has replaced the typewriter - but outside of the typewriter business one could ask "so what?". Spreadsheets, one of the strongest application categories, have replaced the columnar pad and the adding machine. The spreadsheet implementation of a columnar pad is a lot more convenient and has broadened the user base to some people who weren't big columnar pad users, but the basic functionality is the same. I remember seeing those funny pads with all the columns in the back of stationery stores years ago. I recently checked, and found that they, along with all those delightfully bizarre forms of K+E graph paper, are still carried in a good stationery store (albeit at reduced volumes). Despite the enormity of the personal computer "revolution" within the parochial confines of the computer industry, it really hasn't been that much of a revolution in society as a whole. How radical can a revolution be if its rallying cry (at least implicitly) is "Death To Columnar Pads!"?

Another way to see this is to look at our own mission statement - A computer in every home and on every desk. This was a highly unconventional vision in the context of water cooled mainframes humming in the machine room, but it's really rather modest in the larger context of society as a whole. In actual practice, the mission that we and the rest of the industry have actually delivered is somewhat shorter than the way we normally phrase it - it's really A computer on every desk. We have been pretty good about getting computers onto desks, whether they are in the office or in the den at home, but desks are still our primary strength. We've done a bit better in some areas than others. Laptops (like the Omnibook I'm using right now on United 871 to JFK) have allowed people to take their desk activities with them. This is balanced by the fact that there are many real desks, such as those in the school classroom, that haven't been populated with computers.

The metaphor of the desk top which inspired researchers at Xerox PARC has largely been realized - personal computers and the software they run are primarily dedicated to serving the activities that people do at a desk. Most of the product development done at Microsoft is focused on making incremental improvements to this basic mission, or on integrating the last remaining desk top activities. Our Microsoft AtWork campaign will unify PCs with the telephone, FAX and copier, closing the last big gaps. However challenging (and profitable) this may be, we must not forget that our industry has grown up in a rather restricted environment. It is as if we lived in a world where the only furniture was a desk, and columnar pads were on every bestseller list.

The Importance of Being Exponential


I was recently interviewed by a guy doing an article to commemorate the 40th anniversary of Playboy magazine. He wanted to know what computing would be like 40 years hence, in the year 2033. This kind of extrapolation is clearly fraught with difficulty, but that doesn't mean that we can't try our best. In the last 20 years the overall improvement in the price/performance ratio of computing has been about a factor of one million. There is every reason to believe that this will continue for the next 20 years; in fact, the technological road map appears reasonably clear. This will yield another factor of one million by 2013. It is hard to predict what technology will be promising then, but I'm optimistic enough to believe that the trend will continue.
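To make the rate concrete, here is a quick back-of-envelope check, sketched in Python for convenience; the smooth exponential is an assumption, and the factor of one million over 20 years comes from the paragraph above.

import math

factor, years = 1_000_000, 20
annual = factor ** (1 / years)            # ~2.0, i.e. roughly a doubling every year
time_for_1000x = math.log(1000, annual)   # ~10 years for a factor of 1000
print(annual, time_for_1000x)

In other words, a factor of one million in 20 years amounts to roughly a doubling every year, which is the figure behind the arithmetic later in this section.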

Laboratories are already operating "ballistic" transistors which have switching times on the order of a femtosecond. That is 10^-15 of a second, or about 10 million times faster than the transistors in this year's fastest microprocessor. The trick is simple enough in principle - reduce the size of the semiconductor component and the current flow so that the electrons don't bump into each other or the semiconductor atoms. In addition to being much faster, this dramatically reduces power drain and heat dissipation. The next stage is more of the same - people are currently experimenting with the "single electron transistor", in which a single bit is represented by a lone electron. This is not only very fast, it is also the ultimate in low power computing!

This raises a couple of other amusing points. Every advance in speed makes computers physically smaller. Switching speed in semiconductors is directly related to their size, and at another scale, the delay or latency caused by the time for a signal to travel from one part of the computer to the next is limited by transit time at the speed of light. At the speeds contemplated above this is extremely significant. If you have a computer with a femtosecond cycle time, then it takes about 1 million CPU cycles for a signal to travel one foot. As a point of comparison, a hot processor of 1993 with a 100 MHz clock rate (10 nanosecond cycle time) would have a similar relative wait time in terms of clock cycles if it were sending a signal about 1860 miles. The latency associated with going across the country today will occur in moving across your desk in the future. The amazingly fast computers of 2033 will of necessity be amazingly small because you can't build them any other way. They are also likely to be very cheap, because almost any method for manufacturing things this small involves replicating them like crazy and making many at the same time.
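The arithmetic behind that comparison is a rough sketch using only the speed of light and unit conversions; the "about 1860 miles" figure falls out of the same calculation.

C = 3.0e8                                   # speed of light, m/s
FOOT, MILE = 0.3048, 1609.34                # meters per foot and per mile

cycles_per_foot = (FOOT / C) / 1e-15        # femtosecond machine: ~1e6 cycles per foot
wait = cycles_per_foot * 10e-9              # the same wait counted in 10 ns (100 MHz) cycles
print(cycles_per_foot, C * wait / MILE)     # ~1e6 cycles, roughly 1,900 miles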

Optical computing offers an interesting and orthogonal set of tricks and techniques. Note that in the cases I am describing here the invention that must be done is primarily engineering rather than scientific in nature. The basic phenomena exist in the lab today, so mother nature has, in effect, already signed off on the designs. The remaining steps are just to learn how to manufacture these devices in quantity, integrate them at a large scale and bring them to market. People are pretty good at doing this sort of thing, so I think that there is as much reason to believe that breakthroughs will accelerate the pace of development as there is to doubt that the trend will continue.

Assuming that the price/performance trend in computing does continue, the computers of 20 years from now will be a million times faster, and 40 years hence a trillion times faster than the fastest computers available today. In order to put this into perspective, a factor of one million reduces a year of computing time to just 30 seconds. A factor of a trillion takes one million years into the same 30 seconds. Attempting to extrapolate what we could do with a CPU year of 1993 level computing is hard, and a million CPU years is nearly impossible to imagine.
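The time compression is easy to verify - a two-line check, with nothing assumed beyond the factors above.

seconds_per_year = 365.25 * 24 * 3600       # ~3.16e7 seconds in a year
print(seconds_per_year / 1e6)               # one CPU-year at 1,000,000x: ~32 seconds
print(1e6 * seconds_per_year / 1e12)        # a million CPU-years at 1e12x: ~32 seconds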

Note that this is only the estimate for a single CPU and the standard serial "von Neumann" architecture. Multiprocessors will become increasingly common, and will give us a huge range of performance above and beyond this. The figures above were meant to be a comparison at fixed cost - say for a typical desktop computer. There will always be specialized problem areas in science, engineering and elsewhere that will justify machines that are 1000 to 10,000 times more expensive than a desktop user of today can afford. The supercomputer of 2033 could easily be 4 quadrillion times faster than a computer of 1993 - which means that it could do in 30 seconds what the fastest PCs of today could accomplish in 4 billion years - the age of the earth.

Storage capacity is also increasing at an exponential rate. There are a whole host of interesting storage technologies on the horizon. Current hard disk technology will continue to drop in price - earlier this year we priced hard disks at about $1 per megabyte. This fall a new series of disks will come in at about 30 cents per meg, and I believe that this trend will continue for a while. We have talked to people at disk companies who are planning to make gigabyte capacity drives down to the 1.8 inch form factor, and then surface mount hundreds of them to cards, as if they were chip modules, to increase the density. The total bandwidth will be very high because the disks will be configured as an array. Various forms of optical storage will also appear to challenge this standard magnetic based technology. The most exotic of these is holographic memories, which appear to be able to store up to 10^12 bytes - a terabyte - per cubic centimeter. This isn't even close to the theoretical limit, which is far higher.

These forms of secondary storage will face some interesting competition - semiconductor RAM has increased in density (and decreased in cost) by 4X every 18 months for the last twenty years. The various technologies discussed above for CPUs will also affect mass storage - RAM based on single electron transistors will be very dense. Various random factors (such as the recent explosion at a Japanese epoxy plant) can perturb the pricing, but in the long run I expect these pricing curves to be maintained. This is far faster than the price of mechanical mass storage is dropping, and suggests that by the year 2000, RAM will cost about $1-$2 per gigabyte.

I expect that the typical desktop PC around the turn of the century will have over 100 gigabytes of storage - whether RAM or a mixture of RAM and some other sort of mass storage - and a typical LAN server here at Microsoft will have a few terabytes. To put this in perspective, the American Airlines SABRE reservation system is a little over 2 terabytes, so it would fit on only 20 PCs worth of storage. If you bought that capacity today with PC industry components the cost would be just under $700,000, a tiny fraction of what it cost to build SABRE out of 112 IBM 3390 disk drives. By the year 2000, I expect that amount of disk space to cost a few thousand dollars. In fact, it might cost even less, because video on demand systems used to replace Blockbuster and other video rental stores will dramatically increase the market for storage and should drive the price down the learning curve.
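For what it's worth, the $700,000 figure falls straight out of the drive pricing quoted earlier; this sketch assumes the 30-35 cents per megabyte range mentioned above.

sabre_mb = 2_000_000                        # a little over 2 terabytes, in megabytes
for price_per_mb in (0.30, 0.35):
    print(price_per_mb, sabre_mb * price_per_mb)   # roughly $600,000 - $700,000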

The computational load used by SABRE is even less of a problem than the storage requirement. The existing reservation system uses processors (IBM 3090 architecture) rated at 423 million instructions per second. This is equivalent to four MIPS R4400s, or six to eight Pentiums. Another way to look at this is the traffic load. The peak ever recorded on SABRE was 3595 transactions per second. A dual processor MIPS R4400 machine over in the NT group recently benchmarked at about 300 transactions per second with NT and SQL Server, so you'd need a dozen of these machines to do the whole thing. The reason for the factor of six difference between this and the raw number of compute cycles is that SABRE uses a specialized transaction processing operating system (IBM Transaction Processing Facility) rather than a general purpose system like NT and SQL Server, which is far less efficient (but more flexible) than TPF.
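The same numbers, worked through as a sketch; the ~100 MIPS per R4400 figure is implied by the four-processor equivalence above rather than taken from a vendor spec sheet.

sabre_mips, r4400_mips = 423, 100
raw_cpus = sabre_mips / r4400_mips            # ~4 R4400-class CPUs of raw compute

peak_tps, nt_box_tps = 3595, 300              # SABRE peak vs. dual-CPU NT/SQL Server box
boxes = peak_tps / nt_box_tps                 # ~12 dual-processor machines
print(raw_cpus, boxes, boxes * 2 / raw_cpus)  # the last ratio is the ~6x TPF advantage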

In fact, with a new generation of transaction and database software based on direct use of 64 bit addressing and massive RAM memory, a single microprocessor such as the MIPS T5, Intel P7 or next generation Alpha or PowerPC should be able to handle the entire SABRE load. Although SABRE represents one of the most challenging real time data processing tasks of 1993, hardware technology will pass it by in the next couple of years. There is still a major task in creating the software which will enable this, but my assumption is that we and/or others will recognize this opportunity and rise to this challenge.

One way to consume these resources is to switch to ever richer data types. We have seen how the move to GUI consumed many more CPU cycles, RAM and disk space than character mode, so it is relatively safe to suggest that more of the same will occur with rich multimedia data types. Audio is already within our grasp, because it is fairly low bandwidth. Film and video represent a major step upwards. A feature film compressed with MPEG or similar technology will be about 4 gigabytes. Advances in compression will probably take this down to about 600 megabytes and increase the quality but this is still a major step up from text. If we compare the size of a novel or movie script to the film itself, we'll see roughly a factor of 1000 increase.
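The factor of 1000 is just the ratio of a compressed feature film to its script; the sizes below are assumed for illustration - a couple of hours of MPEG-class video against a few hundred kilobytes of text.

film_bytes = 2 * 3600 * 4.5e6 / 8           # ~2 hours at an assumed ~4.5 Mbit/s: about 4 GB
script_bytes = 0.5e6                        # a movie script or novel as plain text (assumed)
print(film_bytes / 1e9)                     # ~4 GB today
print(600e6 / script_bytes)                 # a future 600 MB film vs. the script: ~1000x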

A factor of 1000 is quite a bit by normal standards, and much of the work that the PC industry will be doing in the next several years is gearing up to this challenge. It is quite clear that video and other multimedia data types will become as commonplace as text is today. This will cause major shifts in usage and will have all sorts of other consequences, some of which are treated elsewhere in this memo. Nevertheless, this isn't a very full answer to the question of what we'll do with all of those computing cycles.

Remember that a factor of 1000 improvement takes only ten years, so at best the transition to video will only be a temporary pause in the onslaught of computing. Other media types won't really help much because there is a more fundamental limitation, which may be somewhat surprising - human sensory organs aren't all that complex!

Audio and video are already well understood, but we can make some estimates for the other senses. Taste and smell are actually variations of the same sense (you can't "taste" many things when your nose is plugged) which maps a set of chemical sensors in the nose and mouth to a signal. Clinical tests similar to the "Pepsi Challenge" show that the taste/smell resolution is not very complex - at least in humans. The sometimes hilarious vocabulary used by oenophiles to describe the taste of a wine is another example. Without understanding the output device it is hard to say in detail, but I doubt that the combined bandwidth would be more than audio.

Touch is more interesting. Apart from our fingertips, lips and a few other regions, our sense of touch is actually not very good. The basic unit of visual displays is the pixel, short for picture element, and in a similar vein we can talk about the touch element or "touchel". Even parts of our anatomy which we think of as being quite sensitive often have poor spatial resolution and thus require few touchels to achieve the full effect. Poke yourself with various shapes of the same basic size and you'll find that in most places you can't tell the difference - sensitivity as we normally think of it is actually about resolution in amplitude, which affects the number of bits per touchel rather than their density. Even the highest resolution parts of your body don't need more than 100 touchels per linear inch, and the total surface area which requires that much is pretty small. An estimate of the total touchel bandwidth is probably something like one million touchels with 8 bits of amplitude per touchel, updated between 30 and 100 times a second. The total bandwidth is therefore equal to, or possibly a bit less than, that for video.
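Putting the estimate side by side with uncompressed video makes the comparison concrete; both figures are rough guesses, and the 640x480, 24-bit, 30 frame per second video format is assumed purely for comparison.

touch_bps = 1_000_000 * 8 * 30              # a million touchels, 8 bits, 30 Hz: ~240 Mbit/s
video_bps = 640 * 480 * 24 * 30             # uncompressed video at the assumed format: ~221 Mbit/s
print(touch_bps / 1e6, video_bps / 1e6)     # the same order of magnitude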

There are clearly some major engineering problems to be solved before we all have touchel based body suits and taste output devices in our mouths. Better yet, we could have a direct connection to our nervous system so we can "jack in" to our PCs in the manner that William Gibson and others have described in cyberpunk science fiction. Since this is primarily a biological and mechanical engineering problem it is not subject to the same exponential technology rules which govern computing. There are reasons to be optimistic, but it is not clear to me whether we'll have the full range of human senses as output devices in twenty years, or even in forty. Regardless of the I/O issues, it is clear that computing resources will not be the bottleneck in solving this ultimate human interface problem. In fact, the estimates above suggest that the total bandwidth that can be absorbed by a human is only a modest constant factor larger than video, and thus it will take only a few more years for the price/performance curve to surmount it.

Given the amazing growth of computing, there are always a foolish few who contend that we will run out of demand, so we'll never need more powerful machines. The counterexample is that there are many simple problems which would require more computing than a 1993, or even a 2033, level supercomputer could do between now and the recollapse or heat death of the universe. Computer scientists classify problems of this sort as "NP hard", and there are many examples. In actual practice some NP hard problems can be efficiently approximated, but many cannot, and this will lead to a never ending stream of problems for computers of the future, no matter how fast.

Here is the simplest example of why NP hard problems are hard. Consider trying to enumerate all possible orderings of N unique objects (characters or whatever). The number of orderings is N factorial, usually written as N!. 3! is only six, so I can list all of the combinations here: 123, 132, 213, 231, 312, 321. 10! is about 3.6 million, which is relatively small, but 59! is about 10^80 and 100! is just under 10^158. These are really big numbers! For reference's sake, cosmologists usually estimate that there are about 10^80 elementary particles (protons, neutrons etc.), and about 10^160 photons in the entire universe, so even with extreme cleverness about how we stored the resulting list, we'd need to use all of the matter and most of the energy in the universe just to write down 59!, much less 100!. As output problems go, touchel suits seem mundane by comparison! This particular example isn't very interesting, but there are plenty of NP hard problems, such as the traveling salesman problem and many others which are just as easy to state and just as hard to solve.
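The factorials quoted above are easy to check directly - computing N! is trivial; it is enumerating the N! orderings that blows up.

import math

for n in (3, 10, 59, 100):
    print(n, math.factorial(n))
# 3! = 6, 10! = 3,628,800, 59! ~ 1.4e80, 100! ~ 9.3e157 (just under 10^158)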

Many open problems in computing - including some classified as "artificial intelligence" - fall into a grey area: they are likely to be NP hard in principle, but there may be efficient approximations which allow us to get arbitrarily good results with bounded computing resources. Other problems, such as simulation, recognition based input (speech, handwriting...), virtual reality and most scientific programs are clearly not NP hard, so they will fall to computing.

The key issue behind this point is that computing is on a very fast exponential growth curve. Anything which isn't exponential in growth, or which is exponential but with a slower growth rate, will quickly and inexorably be overwhelmed. NP hard problems are an example that can easily scale beyond any computer. Mainframes and minicomputers did have exponential growth, but at a slower rate than microprocessors, so they succumbed. The human interface only scales up to the point where our nervous system is saturated, which as discussed above will be reached within the next decade or so.

The SABRE airline reservation system mentioned above is one which doesn't scale. The size of SABRE depends on the number of travelers, flights, travel agents and how fast they type. None of these factors is growing anywhere near as fast as computing itself. Even a constant factor of 1000 by using video, or of perhaps 10,000 to have a full virtual reality (thus obviating the flight altogether!) does not matter because any constant factor is rapidly absorbed by exponential growth.

This is a fundamental lesson for us. It is extraordinarily difficult for people to really grasp the power of exponential growth. No experience in our everyday life prepares us for it. The numbers become so astronomically large so quickly, as the projections above show, that it is easy to either dismiss them outright, or mentally glaze over and become numb to their meaning. It is incredibly easy to fool oneself into thinking that you do understand it, but usually this just means that you've mentally done a linear extrapolation from the recent past. This works for a little while, but then rapidly becomes out of date.

Success in the next decade of the computing industry will hinge on being very smart about recognizing which things scale with or faster than computing and which things do not. This will not be obvious in the early stages, but that is where the value lies, so this is the challenge to which we must rise. The "MIPS to the moon" vision speech is very easy to make - the hard part is to really believe it. Fortunately, much of the competition will be a bunch of entrenched linear extrapolators who won't know what hit them.

At its essence, this is the secret of Microsoft and the personal computer industry. Despite the incredible advances that had occurred in mainframes and minicomputers, the people involved - many of them quite brilliant - could not grasp the fact that the advances would continue and that this would stretch their business models to the breaking point. If the mainframe folks had stopped to make the exponential extrapolation - and acted upon it - they would have seen what naturally follows: that microprocessor based systems would deliver computing to the masses, that they would ultimately surpass mainframes and minis, that hardware should be decoupled from software because the driving forces are different, and finally that software would be a central locus of value.

The last point is quite interesting. If you are in the hardware business you must be very agile indeed to keep your footing. The implementation technologies change so fast that you are always at an incredible risk of becoming obsolete. It is very hard to change a manufacturing business fast enough because of the large investment in tooling up for what will soon become yesterday's technology. The only hardware companies that have ever made significant money are those that managed to create an asset - the hardware architecture - which was above the fray of individual implementations and thus could enjoy a longer life span. Software is able to do the same trick in an even better fashion. Like a hardware architecture it lives for a long time - more than a dozen years so far for MS DOS - but the tooling cost is far lower. The software business is still quite tricky but it is fundamentally better suited to long term growth in an exponential market than hardware is.

The growth curve of Microsoft is unprecedented in the annals of business, and seems quite miraculous until you realize that what we have done is ride the exponential growth curve of computer price/performance. That is the true driving force behind our success. As long as computing hardware increases its price/performance, there is a proportionate opportunity for software to harness that power to do things for end users. This occurs because the machines get cheaper, user interface techniques like GUI allow us to make computing more appealing to a wider audience, raw power enables new application categories, old customers can upgrade and get substantially better features... All of these factors combine to increase the opportunity for software. I made a chart a couple of years ago plotting the number of lines of code in several of our products over time. These also followed an exponential curve with approximately the same growth rate.

Our feat, and believe me I do not mean to belittle it, has been to ride this wave of technology and maintain or increase our relative position. The correct way to measure this position, and thus our market share, is as a fraction of worldwide CPU cycles consumed by our products. (As an aside, the evolution of the information highway will cause this market share metric to evolve as well. Rather than measuring the fraction of the world's computing cycles executed by our software, we will have to look at share of both CPU cycles and total data transmitted.)

Maintaining CPU cycle share is very hard to do, because the right mix of products and technology to do this at various points in time changes a lot. Companies that have point products or which only extrapolate linearly will fail. So far, we have often been the beneficiary because we have been able to reinvent the company at each point along the line.

The bad news about this is that we will have to continue to do so, and at a rate which continues to increase. It will be impossible for us to maintain our historical growth curves unless we maintain our CPU cycle share, and that means that we must do two things: continue to bring new technology to our existing products, and at the same time create new product lines to track the emergence of computing in new mass markets.

One aspect of the price/performance trend discussed above is that a PC class machine will get amazingly powerful, but an equal consequence is that extremely cheap consumer computing devices will emerge with the same or higher computing power than today's PCs, but with far higher volume. The bulk of the world's computing cycles comes from the low end, high volume part of the market. Any software company that wants to maintain its relative share of total CPU cycles must have products that are relevant to the high volume segment of the market. If you don't, then you are vulnerable to a software company that does establish a position there and then rides the technology curve up to the mainstream. This is what the PC industry did to mainframes and minicomputers, and if we in the PC industry are not careful this fate will befall us as well.

The basic dynamics of this situation come from the relative lifetimes of software and hardware. An operating system architecture can easily last 15 to 20 years or possibly even more, as MS DOS and UNIX both demonstrate. Over that period of time the computing hardware that the OS runs on will increase in price/performance by a factor of 30,000 to a million, taking the combined platform into new realms and application areas. If each segment of the computing industry scaled at the same rate, then the relative standings would be safe, but this isn't the way it usually works. Technology trends will favor some segments over others. Microprocessors beat out discrete components, and more recently RISC based microprocessors have been increasing their price/performance at a faster rate than their CISC based rivals. Business trends also matter - a high volume platform will have more ISVs, and probably a more competitive hardware market.

Microsoft started out as the Basic company - well, as it so happens we still are the leading supplier of Basic, but that is hardly the way to characterize us at this point. Growth, profitability and the focus of the company have historically shifted from one area to another, and this will continue past our current product lines. The day will come when people will say "hey, didn't Microsoft use to be the company that made office software?" and the answer will be "yes, and as a matter of fact they still do, but that isn't what they're known for these days".


