For 50 years, the increasing speed of computers has been as much a constant as the outstretched arm of the tax collector. While that good fortune is bound to come to an end eventually, teams of scientists and engineers continue to produce faster and faster machines. They accomplish that by a mixture of clever new designs and an old-fashioned devotion to cleanliness.
None of the dire predictions about the end of innovation have come true yet, said Robert Dreyer, a former Intel chip designer who is now studying at Harvard. “The end is always 10 years away,” he said. “As you get closer to the end, the engineers find new ways to push it back.”
Computer chips are enormously complicated machines, but their operation can be explained by a few simple principles. All the decision-making about where the pop-up windows will be on the screen, which monster will explode or how the modem will squawk depends upon the movement of packets of electrons between transistors, the switches that direct the flow of current on a chip.
A full packet means one thing to a transistor, and an empty packet (or, more precisely, the absence of a packet) means the opposite. Each chip has millions of tiny packets moving among millions of transistors at the same time, and somehow they all manage to do the right thing.
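A loose way to picture that encoding is to treat the presence of a charge packet as a 1 and its absence as a 0 (a toy sketch, not a description of how any real chip is specified):

```python
# Model each transistor's input as a boolean: True if a charge
# packet is present, False if it is absent.
def bits_to_number(packets):
    """Interpret a sequence of charge packets as a binary number."""
    value = 0
    for present in packets:
        value = (value << 1) | (1 if present else 0)
    return value

# Four "wires": packet, no packet, packet, no packet -> binary 1010
print(bits_to_number([True, False, True, False]))  # prints 10
```

Everything a chip does, from drawing a pop-up window to blowing up a monster, reduces to operations on numbers represented this way.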
The chip designers make computers faster with two basic tricks. First, they arrange for the packets to get where they’re going just a bit sooner. That is usually accomplished by making the chip smaller. If the wires are shorter, the electrons will arrive earlier.
That has been the best and most consistent trick for computer designers, but it requires a huge investment in the plants for manufacturing the chips. Today, many of the cutting-edge chips are made with wires 0.18 microns wide. The next generation will be 0.15 microns wide. A human hair is about 100 microns wide. Specks of dirt or any other contaminants can foul the chips, so the plants must be kept very clean.
Occasionally, chip designers stumble upon an entirely new chemical process for building chips that also accelerates the movement of the electrons. Recently, IBM discovered how to build the wires out of copper instead of aluminum. In the past, copper was not used because no one could prevent it from dissolving into the silicon foundation of the chip and creating short circuits.
Now the better conductivity of copper ensures that the electrons get there with less effort, making it possible for the whole chip to run faster. That process is used in many of IBM’s chips and is also licensed to Motorola for the new G4 chips it makes for Apple computers.
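The size of the gain from copper can be put in rough numbers. This is a back-of-the-envelope sketch using standard textbook resistivity values, not figures from IBM's actual process:

```python
# Room-temperature resistivities in ohm-meters (standard textbook values).
RHO_ALUMINUM = 2.65e-8
RHO_COPPER = 1.68e-8

# A wire's resistance scales with resistivity (R = rho * length / area),
# so for identical geometry, switching to copper cuts resistance by:
improvement = (RHO_ALUMINUM - RHO_COPPER) / RHO_ALUMINUM
print(f"Copper cuts wire resistance by about {improvement:.0%}")  # about 37%
```

Lower resistance means the wires can be charged and discharged faster, which is where the extra speed comes from.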
Getting the packets of electrons to their destinations faster is only half the battle. Chip designers also try to maximize the work chips can do by packing each chip with more transistors. Building a chip with smaller wires means that there’s more space for transistors. If these transistors can be used intelligently, more work can be done.
Ideally, twice as many transistors means doing twice as many multiplications and additions, and that means finishing each calculation in half the time.
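That ideal scaling is simple arithmetic, sketched here as a toy model (real chips fall short of it for the reasons described next):

```python
import math

def time_to_finish(operations, parallel_units, seconds_per_op=1e-9):
    """Toy model: operations divided evenly across identical parallel units."""
    return math.ceil(operations / parallel_units) * seconds_per_op

base = time_to_finish(1_000_000, parallel_units=2)
doubled = time_to_finish(1_000_000, parallel_units=4)
print(doubled / base)  # 0.5: twice the units, half the time
```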
In reality, chip designers have many restrictions that prevent them from using all the new transistors. In some cases, the designers are constrained by the desire to keep their new chips compatible with their old ones.
One of Intel’s strengths, for example, is that different generations of its chips are compatible, but that is also a liability.
Intel’s chip designers are forced to use many transistors to make sure that the company’s latest chips can still run software that is more than 20 years old. Comparable chips from Motorola used in Apple machines are often one-third the size of Intel’s chips, and that makes them easier to manufacture.
The goal of backward compatibility also limits the flexibility of designers.