Markets column
Chris Woodcock monitors the markets, keeping a careful eye on the biggest technology stocks in the world. Every fortnight he brings you his latest insights on market movements and more.
After years of rapid development, Chris asks if we’ve finally reached the end of Moore’s Law.
If anyone ever suggests that working in financial markets is glamorous – well, you almost certainly didn’t like them already and now you know they are liars too.
The impending death of Moore’s Law
Or maybe I’m just not doing it right. Either way, I spend a good deal of my time poring over financial statements and talking about capital allocation. So when two meetings in the space of a month mention the impending death of Moore’s Law, a totem of the tech industry for fifty years, it is as racy as it gets.
For the uninitiated, Moore’s Law is the observation that the number of components on a computer processing chip doubles every two years. It was made by Intel co-founder Gordon Moore in a 1965 paper and has become a self-fulfilling prophecy, driving the industry with surprising consistency ever since.
In everyday life, its effect has been profound. The Apollo Guidance Computer used 2 kB of memory to fly Armstrong et al to the moon, compared to 1,048,576 kB in the iPhone 5.
To understand how it works, take a deep breath. Transistors, the building blocks of all things compute, are measured in nanometres (nm). If the industry shifts from transistors that are 170nm wide to 120nm, that is a 30% shrinkage. Because area scales with the square of that width, the footprint of the new transistor is roughly half that of the previous generation, so twice as many chips can be produced out of the same factory. As long as your costs do not more than double to achieve that, it makes sense to do it. More sense, in fact, than not doing it.
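For anyone who wants to see the halving worked through, here is the back-of-the-envelope arithmetic using the purely illustrative 170nm and 120nm figures above:

\[
\left(\frac{120\ \mathrm{nm}}{170\ \mathrm{nm}}\right)^{2} \approx 0.71^{2} \approx 0.50
\]

A roughly 30% reduction in each dimension of something two-dimensional is exactly what delivers a halving of its footprint.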
Smaller, cheaper, faster, better
Costs tend to rise only around 30% at each transition, and chips with more tightly packed transistors are also quicker and use less power. Smaller, cheaper, faster, more efficient. It is really quite amazing.
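Taking that rough 30% figure at face value, the per-transistor economics of a transition look something like this (an illustrative sketch, not anyone’s actual accounting):

\[
\frac{\text{new cost per transistor}}{\text{old cost per transistor}} \approx \frac{1.3}{2} = 0.65
\]

In other words, each step has historically knocked roughly a third off the cost of a transistor, which is why the industry has kept taking it.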
The latest generation of chips is at 22nm, and Intel tells us it is good for another two steps, to 10nm.
Thereafter, however, it starts to get more complicated. Under normal circumstances it is just a case of building better machines to manufacture transistors, but below 10nm the material itself, silicon, reaches the limits of its capabilities. To progress beyond 10nm requires new materials like graphene, as well as a complete redesign of the transistor structure itself. All this costs money and takes time, potentially ruining the Moore’s Law cadence.
Enough of the science part; the best question to ask might be: so what?
The largest noticeable change will be the pace at which we upgrade our technology. Without Moore’s Law, the latest gadgets might no longer be that much better than their predecessors. For your average consumer, that is hardly devastating. By reducing obsolescence, it is arguably a more sustainable approach.
Threat to consumer computing?
However, for the computing industry there is more to worry about.
Those that rely on Moore’s Law to sell their wares, like Apple and HP, could see their sales halve, or worse, if the desire to upgrade every two years or so diminishes. Imagine a world in which computers or phones are like TVs (or fridges).
Those at the bleeding edge of manufacturing, like Intel and its Taiwanese counterparts, will bear the brunt of the extra investment needed whilst also seeing their unit sales decline.
All that aside, perhaps the most exciting part for me was learning about the ultimate limit of computing miniaturisation. Based on Heisenberg’s Uncertainty Principle (oh yes), the precision with which an electron’s position – the bit of the future – can be pinned down is bounded at 0.00243nm. That is roughly 10,000 times smaller than today’s transistors. It just might take longer to get there.
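That 10,000 figure is simply the ratio of today’s transistor size to that quantum limit, a rough calculation using the 22nm figure above:

\[
\frac{22\ \mathrm{nm}}{0.00243\ \mathrm{nm}} \approx 9{,}000
\]

Call it ten thousand, give or take.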
image credit: flickr/OnInnovation