Computing after Moore’s Law

theworldismine13

God Emperor of SOHH

Computing after Moore’s Law
http://www.scientificamerican.com/a...puting-after-moores-law/?WT.mc_id=SA_Facebook

Fifty years ago this month Gordon Moore published a historic paper with an amusingly casual title: “Cramming More Components onto Integrated Circuits.” The document was Moore’s first articulation of a principle that, after a little revision, became elevated to a law: Every two years the number of transistors on a computer chip will double.
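
To see what that doubling implies, here is some back-of-the-envelope Python (mine, not the article's); the Intel 4004's roughly 2,300 transistors in 1971 is a common reference point, and the numbers are only meant to illustrate the arithmetic:

```python
# Back-of-the-envelope Moore's law arithmetic: the transistor count
# doubles every two years. Starting point (Intel 4004, ~2,300
# transistors, 1971) is a common reference; everything is illustrative.

def projected_transistors(start_count, start_year, year, period=2.0):
    """Return start_count doubled once per `period` years up to `year`."""
    return start_count * 2 ** ((year - start_year) / period)

for year in (1971, 1991, 2011, 2015):
    print(year, f"{projected_transistors(2300, 1971, year):,.0f}")
```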

As anyone with even a casual interest in computing knows, Moore’s law is responsible for the information age. “Integrated circuits make computers work,” writes John Pavlus in “The Search for a New Machine” in the May Scientific American, “but Moore’s law makes computers evolve.” People have been predicting the end of Moore’s law for decades and engineers have always come up with a way to keep the pace of progress alive. But there is reason to believe those engineers will soon run up against insurmountable obstacles. “Since 2000 chip engineers faced with these obstacles have been developing clever workarounds,” Pavlus writes, “but these stopgaps will not change the fact that silicon scaling has less than a decade left to live.”

Faced with this deadline, chip manufacturers are investing billions to study and develop new computing technologies. In his article Pavlus takes us on a tour of this research and development frenzy. Although it’s impossible to know which technology will supplant silicon—and there’s good reason to believe it will be a combination of technologies rather than any one breakthrough—we can take a look at the contenders. Here’s a quick survey.

Graphene
One of the more radical moves a manufacturer of silicon computer chips could make would be to ditch silicon altogether. It’s not likely to happen soon, but last year IBM did announce that it was spending $3 billion to look for alternatives. The most obvious candidate is—what else?—graphene, single-atom sheets of carbon. “Like silicon,” Pavlus writes, “graphene has electronically useful properties that remain stable under a wide range of temperatures. Even better, electrons zoom through it at relativistic speeds. And most crucially, it scales—at least in the laboratory. Graphene transistors have been built that can operate hundreds or even thousands of times faster than the top-performing silicon devices, at reasonable power density, even below the five-nanometer threshold in which silicon goes quantum.” A significant problem, however, is that graphene doesn’t have a band gap—the quantum property that makes it possible to turn a transistor from on to off.

Carbon Nanotubes
Roll a single-atom sheet of carbon into a cylinder and the situation improves: carbon nanotubes develop a band gap and, along with it, some semiconducting properties. But Pavlus found that even the researchers charged with developing carbon nanotube–based computing had their doubts. “Carbon nanotubes are delicate structures,” he writes. “If a nanotube’s diameter or chirality—the angle at which its carbon atoms are ‘rolled’—varies by even a tiny amount, its band gap may vanish, rendering it useless as a digital circuit element. Engineers must also be able to place nanotubes by the billions into neat rows just a few nanometers apart, using the same technology that silicon fabs rely on now.”
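
That sensitivity to chirality has a crisp textbook form: in the standard tight-binding picture, a nanotube with chirality indices (n, m) is metallic, with no band gap at all, when n - m is divisible by 3, and semiconducting otherwise. A small sketch of the rule (mine, not from the article):

```python
# Standard tight-binding rule: a carbon nanotube with chirality indices
# (n, m) is metallic (no band gap) when n - m is divisible by 3, and
# semiconducting otherwise. The example tubes are illustrative.

def is_semiconducting(n: int, m: int) -> bool:
    """True if an (n, m) nanotube has a band gap."""
    return (n - m) % 3 != 0

# Nearly identical "rolls" behave completely differently:
for n, m in [(10, 0), (9, 0), (6, 5), (6, 6)]:
    kind = "semiconducting" if is_semiconducting(n, m) else "metallic"
    print(f"({n},{m}): {kind}")
```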

Memristors
Hewlett-Packard is developing chips based on an entirely new type of electronic component: the memristor. Predicted in 1971 but only demonstrated in 2008, memristors—the term is a portmanteau combining “memory” and “resistor”—possess the strange ability to “remember” how much current previously flowed through them. As Pavlus explains, memristors make it possible to combine storage and random-access memory. “The common metaphor of the CPU as a computer’s ‘brain’ would become more accurate with memristors instead of transistors because the former actually work more like neurons—they transmit and encode information as well as store it,” he writes.
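
The “memory” is easy to simulate with the linear ion-drift model HP published alongside its 2008 device (Strukov et al.); the parameter values and time step below are illustrative, picked so the effect shows up in a few iterations:

```python
# Linear ion-drift memristor model in the spirit of HP's 2008 device
# (Strukov et al., Nature). Resistance depends on the normalized width x
# of the doped region; current flow moves x, and x stays put when the
# current stops. All parameter values here are illustrative.

R_ON, R_OFF = 100.0, 16_000.0  # ohms: fully doped vs. undoped limits
K = 1e4                        # lumped drift constant (mu_v * R_ON / D^2)
DT = 0.1                       # seconds per step

def step(x, v):
    """One Euler step: compute memristance, pass current, drift x."""
    m = R_ON * x + R_OFF * (1.0 - x)  # instantaneous memristance
    i = v / m                         # current through the device
    x = min(max(x + K * i * DT, 0.0), 1.0)
    return x, m

x = 0.1
# Drive with +1 V, then remove the voltage: the resistance drops while
# current flows, then holds its new value.
for v in [1.0] * 5 + [0.0] * 3:
    x, m = step(x, v)
    print(f"v = {v:+.1f} V   M = {m:8.1f} ohms")
```

Notice that the resistance keeps its new value once the voltage is removed; that persistence is what lets a memristor double as storage.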

Cognitive Computers
To build chips “at least as ‘smart’ [as a] housefly,” researchers in IBM’s cognitive computing group are exploring processors that ditch the calculator-like von Neumann architecture. Instead, as Pavlus explains, they “mimic cortical columns in the mammalian brain, which process, transmit and store information in the same structure, with no bus bottlenecking the connection.” The result is IBM’s TrueNorth chip, in which five billion transistors model a million neurons linked by 256 million synaptic connections. “What that arrangement buys,” Pavlus writes, “is real-time pattern-matching performance on the energy budget of a laser pointer.”
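
IBM’s papers describe TrueNorth’s digital neurons as an augmented leaky integrate-and-fire design; here is a bare-bones, illustrative version of that basic model, far simpler than the real chip’s neurons:

```python
# Bare-bones leaky integrate-and-fire neuron. The real TrueNorth neuron
# adds many features; every constant below is illustrative.

LEAK = 1.0         # membrane potential lost per tick
THRESHOLD = 10.0   # fire when the potential reaches this level
WEIGHT = 3.0       # contribution of one incoming spike

def run(input_spikes):
    """Return the ticks at which the neuron fires an output spike."""
    v, fired = 0.0, []
    for tick, spikes in enumerate(input_spikes):
        v += WEIGHT * spikes      # integrate synaptic input
        v = max(v - LEAK, 0.0)    # leak toward rest
        if v >= THRESHOLD:        # threshold crossed: spike and reset
            fired.append(tick)
            v = 0.0
    return fired

# A dense burst drives the neuron to fire; sparse input just leaks away.
print(run([1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1]))  # -> [4]
```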
 

Claudex

Lord have mercy!

Thanks for the article breh, I got another one to add to this one. So I'ma just do it here instead of starting a new thread.

Intel’s former chief architect: Moore’s law will be dead within a decade
Intel’s former chief architect Bob Colwell delivered the keynote address at the Hot Chips conference on Sunday, in a speech I personally wish I’d been able to attend. Colwell, who served as a senior designer and project leader at Intel from 1990 to 2000, was critical to the development of the Pentium Pro, Pentium II, P3, and P4 processors before departing the company. I’ve had the opportunity to speak with him before, and his speeches on processor technology and the evolution of Intel’s most successful designs are always fascinating. The Pentium Pro’s architecture (also known as the P6) is arguably the most successful design in the history of microprocessors — echoes of its design principles persist to this day in the latest Haswell CPUs.

Today, Colwell heads up DARPA’s Microsystems Technology Office, where he works on developing new cutting-edge technologies across a variety of fields. In his talk at Hot Chips, he faced up to a blunt truth that engineers acknowledge but marketing people will dodge at every opportunity: Moore’s law is headed for a cliff. According to Colwell, the maximum extension of the law, in which transistor densities continue doubling every 18-24 months, will be hit in 2020 or 2022, around 7nm or 5nm.

“For planning horizons, I pick 2020 as the earliest date we could call [Moore's law] dead,” Colwell said. “You could talk me into 2022, but whether it will come at 7 or 5nm, it’s a big deal.”
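
Those dates line up with simple node arithmetic. Assuming a ~0.7x linear shrink per generation, a new generation roughly every two years, and the 22 nm chips shipping in 2013 as the starting point (assumptions mine, not Colwell’s):

```python
# Rough node arithmetic behind the 2020-2022 window. Assumptions: ~0.7x
# linear shrink per generation, one generation every ~2 years, starting
# from the 22 nm node shipping in 2013.

node, year = 22.0, 2013
while node > 5.5:
    node *= 0.7   # ~0.7x linear shrink per generation
    year += 2     # one generation every ~2 years
    print(f"{year}: ~{node:.1f} nm")
```

On those assumptions the roadmap passes roughly 7 nm around 2019 and 5 nm around 2021, squarely in Colwell’s window.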

Dennard and Moore
It’s important to realize, I think, just how odd semiconductor scaling has been compared to everything else in human history. People often talk about Moore’s law as if it’s the semiconductor equivalent of gravity, but in reality, nothing else we’ve ever discovered has scaled like semiconductor design. From mud huts to skyscrapers, we’ve never built a structure that’s thousands of times smaller, thousands of times faster, and thousands of times more power efficient, at the same time, within a handful of decades.



Once you recognize just how unusual this has been, it’s easier to accept that it’s also coming to an end. With Dennard scaling having stopped in 2005 (Dennard scaling deals with the switching speeds and other physical characteristics of transistors, and thus with heat dissipation and maximum clock speeds), the ability to cram ever more silicon into tiny areas is of diminishing value. The explosion of accelerators and integrated components in SoCs is partly about fighting the growth of “dark silicon” (silicon you can’t afford to turn on without blowing your power budget) by building specialized functions that can be offloaded to cores that only fire up on demand.
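
A few lines of arithmetic show why dark silicon falls straight out of the end of Dennard scaling; the scaling factors below are the textbook ones, with the ~0.7x shrink as an assumption:

```python
# Classic Dennard scaling shrank linear dimensions by S (~0.7x), cut
# voltage by the same factor, and raised frequency by 1/S, so dynamic
# power per gate (~ C * V^2 * f) fell fast enough to hold power per
# unit area flat. Once voltage stopped scaling (~2005), it didn't.

S = 0.7  # linear shrink per generation

def power_density_ratio(volt_scale):
    """Power per unit area after one shrink, relative to before.
    C scales with S, f with 1/S, gates per area with 1/S^2."""
    per_gate = S * volt_scale**2 * (1 / S)  # C * V^2 * f
    return per_gate / S**2                  # times gate density

print("Dennard era (V scales too):", round(power_density_ratio(S), 2))   # ~1.0
print("Post-Dennard (V fixed):   ", round(power_density_ratio(1.0), 2))  # ~2.04
```

Flat power density was what let every new transistor be switched on; power density doubling each generation is exactly the budget problem that dark silicon and on-demand accelerators work around.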

That Moore’s law will continue until 7nm or 5nm is actually extremely reasonable. I’ve heard other engineers speak of being dubious about 10nm and below. But the problem is simple enough: with Dennard scaling gone and the benefits of new nodes shrinking every generation, the impetus to pay the huge costs of building at the next node is just too small. It might be possible to build sub-5nm chips, but the expense, and the degree of duplication needed at key areas to ensure proper circuit functionality, are going to nuke any potential benefits.

What’s striking about Colwell’s talk is that it echoes what a lot of really smart people have been saying for years, though the message has yet to filter into general discourse. This isn’t something we’re going to just find a way around. DARPA continues working on cutting-edge technology, but Colwell believes the gains will be strictly incremental, with performance edging up perhaps 30x in the next 50 years. Of the 30+ alternatives to CMOS that DARPA has investigated, only two or three show long-term promise, and even there he describes the promise as “not very promising.”
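
For a sense of scale, 30x over 50 years is a far slower compounding rate than a doubling every two years:

```python
# Annualized growth rates implied by the two claims:
moore_annual = 2 ** (1 / 2)      # doubling every 2 years -> ~41% per year
colwell_annual = 30 ** (1 / 50)  # 30x over 50 years -> ~7% per year
print(f"{moore_annual - 1:.0%}/year vs. {colwell_annual - 1:.0%}/year")
```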

Innovation is still going to happen. There are technologies that are going to continue to improve our underlying level of ability; a 30x advance in 50 years is still significant. But the old way — the old promise — of a perpetually improving technology stretching into infinity? That’s gone. And no one seriously thinks graphene, III-V semiconductors, or carbon nanotubes are going to bring it back, even if those technologies eventually become common.
 