Moore’s Law is Coming to an End: What That Really Means for Technology

The definition of Moore’s Law, according to the Oxford Dictionary, is “the principle that the speed and capability of computers can be expected to double every two years, as a result of increases in the number of transistors a microchip can contain.” To put that in physical terms: the first commercial disk drive was introduced by IBM in 1956. It could hold 5 MB of data – roughly one photo taken on an iPhone today – and it weighed a metric ton. A typical iPhone now ships with 64 gigabytes of storage, 12,800 times as much, and weighs around 150 grams.
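As a quick sanity check on that comparison, here is a back-of-the-envelope calculation of the ratio. It assumes decimal units (1 GB = 1,000 MB); the figure shifts slightly if you count in binary gigabytes:

```python
# Back-of-the-envelope check of the storage comparison above.
# Assumes decimal units: 1 GB = 1,000 MB.
ramac_capacity_mb = 5            # IBM's 1956 disk unit: 5 MB
iphone_capacity_mb = 64 * 1_000  # a 64 GB iPhone, expressed in MB

ratio = iphone_capacity_mb / ramac_capacity_mb
print(f"{ratio:,.0f}x the storage")  # prints 12,800x the storage
```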

IBM hard-disk drive being loaded onto a plane, 1956

In the 21st century, silicon wafer technology has enabled extraordinarily small transistors – so small that the features carrying electricity in your devices span only a few nanometers, a matter of dozens of atoms. The gaps between them have become so narrow that if we pack them any closer, electrons begin to leak through the barriers meant to contain them (a quantum effect known as tunneling), corrupting the computer’s output. Combined with a number of other physical constraints – heat, reliability, cost – this means the industry cannot keep doubling the number of transistors on a microchip for much longer. But our need for computing power will continue to grow exponentially, and a race is already under way to fulfill that need.
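To make the two-year doubling in the Oxford definition concrete, here is a minimal sketch of the idealized curve it describes. The 1971 Intel 4004 (roughly 2,300 transistors) is used purely as an illustrative baseline, not as a benchmark from the article:

```python
# Idealized Moore's Law: transistor counts double every two years.
# Baseline: Intel 4004 (1971), ~2,300 transistors -- illustrative only.
def projected_transistors(year, base_year=1971, base_count=2_300, doubling_period=2):
    """Transistor count the ideal two-year doubling curve would predict."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (1971, 1991, 2011, 2021):
    print(year, f"{projected_transistors(year):,.0f}")
# By 2021 the ideal curve exceeds 70 billion transistors -- roughly the
# scale of today's largest chips, which is why further doubling is so hard.
```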

Producing these cutting-edge chips is no easy feat, and the necessary brainpower demands deep pockets only a handful of companies have. The R&D cost of developing a new microchip alone runs into the billions of dollars – and even that is dwarfed by the expense of building and operating the sterile fabrication plants. The executives at NVIDIA, AMD, and other chipmakers have seen the writing on the wall for some time – and they are all attacking the problem from different angles to carve out their share of this competitive market.


Some chipmakers have focused on the low end of the market: increasingly “smart” home appliances need these inexpensive, simple chips to control a few narrow functions. At the other end, highly specialized application-specific integrated circuits (ASICs) are being used to mine cryptocurrency and run deep-learning workloads – accelerating specific computations to their physical limits.

In the middle of this spectrum sits the majority of consumer electronics as we think of them – computers, mobile devices, and smart TVs. Increasingly the solution has been to move that computational work and storage off our devices and somewhere else – into the cloud, in data centers owned by service providers. 5G will enable connection speeds so fast that even less computation needs to happen on your devices, and the chips inside them will increasingly be designed to optimize transmitting data to and receiving results from those outside sources.
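As a rough illustration of that shift, the sketch below offloads a heavy task from the device to a remote service instead of running it locally. The endpoint URL and payload are hypothetical placeholders, standing in for whatever API a real cloud provider exposes:

```python
# Minimal sketch of cloud offloading: the device only sends and receives data.
# The endpoint and payload format are hypothetical placeholders.
import requests

def enhance_photo_on_device(photo_bytes: bytes) -> bytes:
    # Running this locally would demand a far more powerful (and power-hungry) chip.
    raise NotImplementedError("too expensive to run on this device")

def enhance_photo_in_cloud(photo_bytes: bytes) -> bytes:
    # The heavy lifting happens in a provider's data center; the device
    # just transmits the request and receives the result.
    response = requests.post(
        "https://example.com/api/enhance",  # hypothetical service endpoint
        data=photo_bytes,
        timeout=30,
    )
    response.raise_for_status()
    return response.content
```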

Raising the stakes in this race has been the recent chip shortage, which leaves companies torn between innovating to stay ahead and servicing existing demand. Amid all these pressures, the semiconductor landscape is likely to look vastly different in a few short years.
