The limits to Moore's Law

Here's a fantastic article from Futurity writer Danny Bradbury on the limits to Moore's Law...

Moore's Law was first formulated in 1965 to describe the cost and density effects of miniaturisation. The law will reach its 50th birthday intact, but keeping it going much beyond that will take a lot of innovation.

It was only a few words, but it set the pace for the development of microprocessor technology for decades to come - and for other things besides. Moore's Law was coined by Gordon Moore, later a co-founder of Intel, while he was working at Fairchild Semiconductor in 1965. "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year," he wrote in his groundbreaking article, "Cramming more components onto integrated circuits". In 1975 he revised the doubling period to every two years, citing the increasing complexity of components.
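Moore's observation boils down to a simple doubling formula: the transistor count at a given year is the count at some reference year, multiplied by two for every doubling period elapsed. A minimal sketch in Python, using the Intel 4004's roughly 2,300 transistors (1971) purely as an illustrative starting point:

```python
def projected_transistors(n0, year0, year, doubling_years=2.0):
    """Moore's Law as a doubling formula: n0 doubles every doubling_years."""
    return n0 * 2 ** ((year - year0) / doubling_years)

# Illustrative starting point: the Intel 4004 (1971), roughly 2,300 transistors.
print(f"{projected_transistors(2300, 1971, 2011):,.0f}")  # ~2.4 billion
```

Forty years at a two-year doubling period is twenty doublings - a factor of about a million, which is broadly in line with the billions of transistors on chips of the early 2010s.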

Limits to Moore's Law

Moore's Law was originally intended purely to describe the number of transistors that could be put on a chip at minimal cost. The problem for chip designers is that Moore's Law depends on transistors shrinking and, eventually, the laws of physics intervene. In particular, electron tunnelling prevents the length of a gate - the part of a transistor that switches the flow of electrons on or off - from shrinking below about 5 nm. The other problem hindering smaller transistors is heat extraction. The more transistors there are on a chip, the more heat it generates and the greater the chance of a malfunction, so new methods must be developed to remove that heat from the chip.
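To see how quickly that 5 nm wall arrives, consider a rough projection. Assuming, purely for illustration, that each process generation shrinks linear dimensions by about 0.7x (the historical rule of thumb that halves transistor area), a sketch of how many generations remain from a given starting gate length:

```python
def generations_until_limit(start_nm, limit_nm=5.0, shrink=0.7):
    """Generations of ~0.7x linear shrink before the gate hits the tunnelling limit.

    The 0.7x-per-generation shrink is an assumption for illustration only.
    """
    length, generations = start_nm, 0
    while length * shrink >= limit_nm:
        length *= shrink
        generations += 1
    return generations, length

gens, final_nm = generations_until_limit(45.0)  # hypothetical 45 nm starting gate
print(gens, round(final_nm, 1))  # 6 generations, ending near 5.3 nm
```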

Intel researchers published a paper in 2003 called "Limits to Binary Logic Switch Scaling - A Gedanken Model". The paper anticipated that the industry would reach the limits of Moore's Law, and argued that a trade-off between density and speed would be necessary to keep extending it.

Hard drive storage has suffered from problems similar to those of transistors on chips. The devices store information magnetically, as a series of ones and zeros encoded in grains of magnetic material. Storage vendors have continued to increase the areal density of hard drives by making those grains smaller. However, as density approaches 100 Gbit per square inch, the physical effect of superparamagnetism looms: once the grains are small enough, thermal energy can flip their magnetic state unpredictably, switching ones to zeros and vice versa.
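The geometry behind that figure is straightforward. At 100 Gbit per square inch, each bit cell occupies only a few thousand square nanometres. A back-of-the-envelope sketch, treating bit cells as squares purely for illustration:

```python
IN2_TO_NM2 = (2.54e7) ** 2  # one square inch expressed in square nanometres

def bit_cell_side_nm(density_gbit_per_in2):
    """Side length of a square bit cell at a given areal density (toy model)."""
    bits_per_in2 = density_gbit_per_in2 * 1e9
    return (IN2_TO_NM2 / bits_per_in2) ** 0.5

print(round(bit_cell_side_nm(100), 1))  # roughly 80 nm per side
```

Since each bit is typically spread across tens of grains for reliable readback, the grains themselves end up only a few nanometres across - which is where thermal instability sets in.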

From horizontal to vertical

The use of carbon nanotubes and silicon-germanium nanowires could extend the performance of transistors to some extent, although their size would remain roughly the same. Another potential solution is the use of 3D chips, in which layers of transistors are stacked on top of each other. This would maintain the horizontal footprint of the chip while drastically increasing its transistor count. In 2008, researchers at the University of Rochester created three-dimensional circuitry running at 1.4 GHz. That chip optimised the way components interact vertically, rather than simply layering banks of regular transistors on top of each other without letting the layers communicate.
On the storage side, companies such as IBM and HP have been working on storage and computing systems that operate at the molecular level. Layers of molecular strands, laid out in a grid, could also form the basis for a microprocessor.

While we wait: virtualisation

Until significant technological advances in hardware appear, the innovations must come in software. Virtualisation technology enables us to use more of each processor's capacity by separating the software processes running on it into isolated virtual machines, ensuring that they do not interfere with each other. This can increase processor utilisation from 10-15% to 80-90%.
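Those utilisation figures imply the consolidation arithmetic directly: a host run at 80% can absorb roughly eight workloads that each idle along at 10%. A minimal sketch, assuming CPU utilisation is the only constraint:

```python
def consolidation_ratio(standalone_util, target_util):
    """Workloads per virtualised host, assuming CPU utilisation is the only limit."""
    return round(target_util / standalone_util)

print(consolidation_ratio(0.10, 0.80))  # about 8 virtual machines per host
```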

Virtualisation can also help us maximise our storage capacity. In traditional dedicated storage environments, where one physical disk drive is allocated to a particular application, much of the capacity goes unused. Instead, we can virtualise our storage into storage area networks, in which any disk on a high-speed network can hold data for any application. This lets us spread data more evenly across many disk drives, minimising unused space.
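Under the hood, spreading data evenly is often just block-level striping: successive blocks of a virtual volume land on successive disks. A toy sketch of round-robin placement (the names and structure are illustrative, not any vendor's API):

```python
def stripe_blocks(blocks, disk_count):
    """Place successive blocks on successive disks, round-robin."""
    placement = {disk: [] for disk in range(disk_count)}
    for index, block in enumerate(blocks):
        placement[index % disk_count].append(block)
    return placement

# Ten blocks across four disks: no disk holds more than three of them.
print(stripe_blocks(list(range(10)), 4))
```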

Stewart Baines

I've been writing about technology for nearly 20 years, including editing the industry magazines Connect and Communications International. In 2002 I co-founded Futurity Media with Anthony Plewes. My focus at Futurity Media is on emerging technologies, social media and future gazing. As a graduate of philosophy & science, I have studied futurology & foresight to postgraduate level.