Integrated circuit from an EPROM memory microchip showing the memory blocks and supporting circuitry.

What Is the Future of Computers?
Natalie Wolchover, Life’s Little Mysteries Staff
10 September 2012 Time: 09:41 PM ET

In 1958, a Texas Instruments engineer named Jack Kilby cast a pattern onto the surface of an 11-millimeter-long “chip” of semiconducting germanium, creating the first ever integrated circuit. Because the circuit contained a single transistor — a sort of miniature switch — the chip could hold one “bit” of data: either a 1 or a 0, depending on the transistor’s configuration.

Since then, and with unflagging consistency, engineers have managed to double the number of transistors they can fit on computer chips every two years. They do it by regularly shrinking the size of transistors. Today, after dozens of iterations of this doubling-and-shrinking rule, transistors measure just tens of nanometers across, and a typical computer chip holds about 9 million of them per square millimeter. Computers with more transistors can perform more computations per second, because more switches are available to do the work, and are therefore more powerful. The doubling of computing power every two years is known as “Moore’s law,” after Gordon Moore, the Intel engineer who first noticed the trend in 1965.
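The arithmetic behind that trend is simple compounding. A minimal sketch, treating Kilby’s one-transistor 1958 chip as an illustrative baseline (not an exact industry figure) and assuming a strict doubling every two years:

```python
def transistors(year, start_year=1958, start_count=1, doubling_period=2):
    """Projected transistor count under a strict doubling-every-two-years rule.

    This is an idealized model of Moore's law, not real product data.
    """
    doublings = (year - start_year) // doubling_period
    return start_count * 2 ** doublings

# Each two-year step multiplies the count by 2, so growth is exponential:
for year in (1958, 1978, 1998, 2012):
    print(year, transistors(year))
# 1958 -> 1, 1978 -> 1,024, 1998 -> ~1 million, 2012 -> ~134 million
```

Twenty-seven doublings turn a single transistor into more than a hundred million, which is why the curve looks flat for decades and then seems to explode.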

Moore’s law renders last year’s laptop models defunct, and it will undoubtedly make next year’s tech devices breathtakingly small and fast compared to today’s. But consumerism aside, where is the exponential growth in computing power ultimately headed? Will computers eventually outsmart humans? And will they ever stop becoming more powerful?

Continue reading: What Is the Future of Computers? | Moore's Law | LiveScience.
