One of the original innovators behind the ARM platform discusses troubled waters ahead for the microprocessor industry - and it’s not simply a matter of the approaching limits of Moore’s law. The growing complexity of computing is making today’s “hard” problems far harder than the “hard” problems of the past. At the same time, growing specialization in the market is actually shrinking the number of players who can tackle these sorts of problems and pushing out the incentive to do so.
ARM’s employee number 16 has witnessed a steady stream of technological advances since he joined that chip-design company in 1991, but he now sees major turbulence on the horizon.
"I don’t think the future is going to be quite like the past," Simon Segars, EVP and head of ARM’s Physical IP Division, told his keynote audience on Thursday at the Hot Chips conference at Stanford University, just north of Silicon Valley.
"There may be trouble ahead."
The microprocessor industry has enjoyed an almost unbroken streak of improvements, Segars said, citing advances in silicon manufacturing techniques, power reduction, and gadget-size and gadget-cost shrinkage – he brought along a 1983, $3,995 Motorola DynaTAC as a prop.
But the landscape is changing. The low-hanging fruit has been picked, and a new way of thinking will be needed to provide the world with the squillions of low-cost, low-power microprocessors that the increasingly mobile computing ecosystem requires – not to mention the everything-connected world described by the current buzz-phrase: “The internet of things”.
Harkening back to when he joined ARM, Segars said: “2G, back in the early 90s, was a hard problem. It was solved with a general-purpose processor, DSP, and a bit of control logic, but essentially it was a programmable thing. It was hard then – but by today’s standards that was a complete walk in the park.”
He wasn’t merely indulging in “Hey you kids, get off my lawn!” old-guy nostalgia. He had a point to make about increasing silicon complexity – and he had figures to back it up: “A 4G modem,” he said, “which is going to deliver about 100X the bandwidth … is going to be about 500 times more complex than a 2G solution.”
The way that the 4G-modem problem will be solved, Segars said, will be by throwing a ton of dedicated DSP processing engines at it – which will, of course, require a lot of silicon real estate.
"But that’s not so bad," he said, "because silicon’s being scaled the whole time. But it’s going to eat a lot of power, and power is the real problem."
ARM is a mobile-processor company, and mobile processors run on batteries – and Segars said that the power required to juice increasingly complex silicon is a system-level challenge. “The reason for that,” he said, “is because batteries are pretty rubbish, really.”
As silicon technologies have improved in comparative leaps and bounds, batteries haven’t. “Historically,” Segars said, “battery power has grown about 10 or 11 per cent per year, which unfortunately is not very well-matched with Moore’s law.”
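A quick back-of-the-envelope sketch makes the mismatch concrete. Taking Segars’ ~10 per cent annual battery improvement at face value, and the classic Moore’s-law doubling of transistor budgets every two years, the gap after a decade is stark (the specific numbers here are illustrative, not from the keynote):

```python
# Compare compound battery-capacity growth (~10%/year, per Segars)
# with Moore's-law transistor scaling (doubling every ~2 years).

def growth(rate_per_year: float, years: int) -> float:
    """Compound growth factor after the given number of years."""
    return (1 + rate_per_year) ** years

years = 10
battery_gain = growth(0.10, years)   # ~2.6x over a decade
moore_gain = 2 ** (years / 2)        # ~32x over the same decade

print(f"Battery capacity: {battery_gain:.1f}x in {years} years")
print(f"Transistor budget: {moore_gain:.0f}x in {years} years")
```

In other words, a decade of battery progress buys you less than a single Moore’s-law doubling cycle’s worth of silicon appetite – which is why Segars frames power as the real problem.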
Moore’s law on trial
But as big a problem as batteries are in the mobile market, there are much more fundamental challenges that will make the future of the microprocessor market different from its past.
For one, the complexity of chip design and the intricacies of the physics involved is increasing, making design a much riskier and more demanding process. “And you really, really need to worry about that,” Segars said.
The reason for that worry is risk and cost. “The cost of your tape-out is going to be astronomical,” he said. “When you’ve written that check for a million or two million dollars for your [chip-making lithography] masks, you want to hope that chip works. So the effort going into validating and verifying a design has gone up by orders of magnitude.”
Another challenge that Segars sees is caused by the increasing stratification of the semiconductor industry – a development that has brought many benefits, but which also has its downside.
"When people first started building semiconductor devices," he said, "they did everything themselves in fully vertically integrated companies. People had fabs, product design, manufacturing, case design – they did the whole thing themselves."
That has changed over the years – mostly for the better, from Segars’ point of view – and now the industry is filled with companies specializing in various areas such as design, IP, electronic design automation (EDA), packaging, chip-baking, and so on.
This specialization has been great for spreading the costs and risks around, and for taking advantage of the economies of scale – TSMC and GlobalFoundries, for example, produce silicon for many different fabless chip designers.
Dying arts and fading fabs
But with more companies focusing on design rather than manufacture, Segars sees a danger. “The skills you need to close out the timing at the transistor level are becoming a dying art,” he warned.
"As we go forward, and start worrying about very exotic processes that we’re going to have to deal with in the future, those transistor skills are going to need to become very, very important once again," he cautioned. "And as a designer, you’re going to have to worry about everything – from architecture down to transistors."
But no matter how “disaggregated” the semiconductor industry becomes, eventually somebody has to actually manufacture the chips themselves. Segars reminded his audience of the famous quote from T.J. Rodgers of Cypress Semiconductor that “Real men have fabs,” but then pointed out the obvious fact that there are far fewer fab companies around than there used to be – and that such shrinkage in the manufacturing base poses its own problems.
As chip-baking process sizes shrink, Segars said, “We’ve seen the cost of developing processes go up and up and up, and now it costs you billions of dollars – as I’m sure everybody knows – to develop a new process. It costs you billions of dollars to buy all the equipment for it, and so fewer people are doing it.”
From a customer’s point of view, a small number of strong, efficient, advanced fabs is not a problem – in fact, the same economies of scale that make industry disaggregation a good thing make fewer, busier fabs a good thing.
Four customers does not a market make
But there is one potentially troubling problem, Segars said – and it’s not for the fabs or their customers, it’s for equipment suppliers. “The physics problems that you have to solve when you go to smaller and smaller geometries are getting harder and harder to solve, and so the cost of the equipment that you need for the next generation process goes up and up.”
That price inflation is not in and of itself the real market problem, though. “The problem is that the supply chain that builds that equipment for the foundries has a bit of trouble dealing with its return on investment.”
Simply put, the foundry-equipment market is shrinking. “When the world moved from 200-millimeter wafers to 300-millimeter wafers, if you were [fab-equipment manufacturer] ASML or somebody like that, you had a whole lot of customers that you could go and sell that equipment to.”
Now, however, the move to 450-millimeter wafers is the new hotness – which is great for economies of scale at the foundry level, but not so great for foundry-equipment makers. “There’s only going to be about four guys who are going to build those size wafers,” Segars said, “so if you’re doing all the R&D and your customer size is four, that is a bit of a problem.”
Moore’s law repealed
Finally, there’s the looming problem of the future of silicon process-size shrinkage: it can’t go on forever. One obvious limit, as Segars pointed out, is the 0.27-nanometer diameter of the silicon atom itself – as process sizes shrink down to, say, 14nm and below, you’re only talking about dozens of atoms per transistor gate.
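The “dozens of atoms” claim is easy to check against the 0.27nm figure above – divide the feature size by the atomic diameter (the process nodes chosen here are just illustrative examples):

```python
# How many silicon atoms (~0.27nm across, per the figure above)
# span a feature at a few recent process sizes.
SILICON_ATOM_NM = 0.27

for feature_nm in (28, 20, 14):
    atoms = feature_nm / SILICON_ATOM_NM
    print(f"{feature_nm}nm feature ≈ {atoms:.0f} atoms wide")
```

At 14nm you’re down to roughly 50 atoms – comfortably within “fingers and toes” territory after only a couple more shrinks.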
But there are plenty of other challenges to be met before you’ll count the number of silicon atoms in a gate on your fingers and toes – namely, what lithography techniques can take you well below 20nm?
At the 20nm level, Segars said, “the problem is that you need to introduce double patterning.” Using two sets of masks to accomplish what one set could do at larger process sizes not only increases mask costs, but also slows down the throughput of the manufacturing.
If you want to keep the throughput at the same rate, you have to buy more equipment – which might make the ASMLs of the world happier, but driving up chip costs is not a good thing in a world that includes developing nations whose citizens are hankering to join the mobile world.
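The double-patterning penalty can be sketched with simple arithmetic: two exposure passes halve a scanner’s effective throughput on that layer, so holding fab output steady means buying more scanners. All figures here are assumed for illustration, not real fab numbers:

```python
# Illustrative double-patterning math: two exposure passes halve a
# scanner's effective layer throughput, so maintaining output means
# buying more scanners. Numbers are assumed, not real fab figures.
import math

single_pass_wph = 120   # wafers/hour, single exposure (assumed)
target_wph = 120        # the fab wants to keep the same output
passes = 2              # double patterning

effective_wph = single_pass_wph / passes
scanners_needed = math.ceil(target_wph / effective_wph)
print(f"Effective throughput: {effective_wph:.0f} wafers/hour")
print(f"Scanners to hold {target_wph} wafers/hour: {scanners_needed}")
```

Doubling up on some of the most expensive equipment in the fab is exactly the kind of cost inflation Segars warns about.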
There’s one long-sought technology about which Segars remains a bit sceptical. “At 14 [nanometers] and below, what you really want is EUV,” he said, referring to extreme ultraviolet lithography, which has long been seen as a possible solution to the process-shrinking problem. EUV’s promise comes from the fact that it’s based on 13.5nm wavelength light – one hell of a lot more precise than the 193nm light that Segars said is used in today’s deep-ultraviolet lithography.
"The problem is," he said, "that [EUV] is really, really hard to make. You’ve got to make a plasma out of tin atoms, and then shoot it with a laser, and some light comes out – but the light’s really weak, and it gets absorbed by everything. So generating enough of it to economically build chips is very, very hard."
After the endgame: core teamwork
But that’s not to say that there’s a dead-end on the road we’re travelling. Segars’ vision of the future jibes with the one described by fellow ARMian Jem Davies, the company’s vice president of technology, when speaking at AMD’s Fusion Summit this June – namely, that heterogeneous computing systems are the Next Big Thing.
Simply put, heterogeneous computing systems distribute a workload to various and sundry specialized compute engines – CPU, GPU, video, encryption, baseband, whatever – so that individual sub-tasks are completed efficiently by dedicated hardware best suited to them.
"I think the future of processing is heterogeneous multiprocessing," Segars said, "… dedicated engines arranged in various clusters with a software layer that can understand the underlying hardware, and make sure that if it’s not needed, it’s shut off, it’s not leaking, to preserve that battery."
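The scheduling idea Segars describes – route each sub-task to the engine best suited to it, and gate off anything idle to preserve the battery – can be caricatured in a few lines. The engine names and dispatch table below are purely illustrative, not any real ARM software interface:

```python
# Toy sketch of heterogeneous dispatch: send each sub-task to the
# engine best suited to it, and report which idle engines can be
# power-gated. Engine names and mappings are illustrative only.

TASK_TO_ENGINE = {
    "render": "gpu",
    "decode_video": "video",
    "encrypt": "crypto",
    "general": "cpu",
}

def dispatch(tasks):
    """Return (engines to power on, engines that can be gated off)."""
    active = {TASK_TO_ENGINE.get(t, "cpu") for t in tasks}
    idle = set(TASK_TO_ENGINE.values()) - active
    return active, idle

active, idle = dispatch(["render", "encrypt"])
print("powered:", sorted(active))  # the GPU and crypto engine stay on
print("gated:", sorted(idle))      # the CPU and video engine shut off
```

The hard part, of course, is the “software layer that can understand the underlying hardware” – doing this transparently, across vendors, for real workloads.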
There are a host of challenges to achieving the holy heterogeneous grail, of course – not the least of which being keeping all the various cores in close communication, and optimally data-coherent.
To that end, ARM’s upcoming Cortex-A15 compute core – which will likely appear in early 2013 – will introduce a cache coherent interconnect that will enable full coherency among multiple CPU clusters. Segars also projects that by 2015, coherency in ARM-based SoCs and systems will not be limited to CPUs, but will also allow full “where’s that data?” transparency among CPUs, GPUs, and specialized engines.
Full coherence, however, brings with it its own set of challenges, such as unwanted latency when far-flung cores and engines need to share the same data, but ARM, AMD, and Intel are all looking into how different approaches to coherency can help – or hinder – heterogeneity.
A lot has changed in the microprocessor world since the Intel 4004 appeared 40 years ago this November. By and large, the arc of improvement has been relatively straightforward, with improvements in process size, processing power, and miniaturization being fairly regular – achieved through one hell of a lot of work, to be sure, but regular nonetheless.
There’s been a lot of talk recently about the “post PC era”. From Segars’ point of view, however, we may also soon be talking about the “post–Moore’s Law" era – a time when computing advances are no longer measured in transistor counts per square millimeter, but rather in how quickly, intelligently, and cooperatively different cores and engines can communicate.