How Chip Designers Like Nvidia ($NVDA) Are “Bending” Moore’s Law

NVIDIA Corporation [stckqut]NVDA[/stckqut] is driven by “specialized computing”: turning specific software tasks into dedicated silicon chips instead of relying on an ever-faster do-it-all CPU, or central processing unit. The approach has existed in one form or another for decades, but it has lately become the driving force behind pretty much everything cool in technology, from artificial intelligence to self-driving cars. Why? Because CPUs aren’t getting faster at the pace they once were. Moore’s Law is dying.

Moore’s Law is the notion that, every two years or so, the number of transistors in a chip doubles. Its popular conception is that computers keep getting faster, smaller and more power-efficient. That isn’t happening the way it used to. “It’s not like Moore’s Law is going to hit a brick wall — it’s going to kind of sputter to an end,” says Daniel Reed, chair of computational science and bioinformatics at the University of Iowa.
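As a back-of-the-envelope sketch of what that doubling rate implies, the snippet below projects a transistor count forward in time; the 2-billion starting count and the ten-year horizon are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope: transistor count doubling every two years (Moore's Law).
# The 2-billion starting count and the time horizon are illustrative assumptions.

def projected_transistors(start_count: int, years: int, doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, assuming one doubling per period."""
    return start_count * 2 ** (years / doubling_period)

if __name__ == "__main__":
    start = 2_000_000_000  # a hypothetical 2-billion-transistor chip today
    for year in range(0, 12, 2):
        print(f"year {year:2d}: ~{projected_transistors(start, year):,.0f} transistors")
```

Run for ten years, the count grows roughly 32-fold, which is why even a modest slowdown in that cadence is felt across the whole industry.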

As Intel and the other chip foundries spend fortunes to keep the wheel turning, chip designers across the industry are finding creative ways to keep improving performance at the old Moore’s Law pace, and in many cases even faster.

“Most of the advances today come from [chip] design and software,” says Nvidia chief scientist William Dally. “For us it’s been a challenge because we feel under a lot of pressure to constantly deliver twice the performance per generation,” he adds. So far, Nvidia has maintained that cadence even in generations where the size of the elements on the chip doesn’t shrink and the only thing that changes is its design, or “architecture.”

Here’s a far-from-exhaustive list of applications to which the principle of specialized computing has been applied: artificial intelligence, image recognition, self-driving cars, virtual reality, bitcoin mining, drones, data centers, even photography. Pretty much every technology company that makes hardware or supplies it — including Apple, Samsung, Amazon, Qualcomm, Nvidia, Broadcom, Intel, Huawei and Xiaomi — is exploiting this phenomenon. Even companies that produce chips only for their own use, including Microsoft, Google, and Facebook, are doing it.

Many years ago, almost all computing was done on the CPU, one task after another in sequence, says Keith Kressin, a senior vice president at Qualcomm. Gradually, frequently used but processor-intensive tasks were diverted to specialized chips, which handled them in parallel while the CPU did only what was absolutely required.
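A minimal sketch of that shift, assuming NumPy is available: the first function works the sequential, CPU-style way, one element after another, while the second hands the whole batch to a single array call that can be executed in parallel. GPU array libraries such as CuPy expose the same NumPy-style interface, which is how the identical line of code ends up running on a specialized chip; the “brighten” task itself is just an illustrative stand-in for any bulk data operation.

```python
import numpy as np

# Sequential, CPU-style processing: one element after another.
def brighten_sequential(pixels: list[float], gain: float) -> list[float]:
    out = []
    for p in pixels:                      # each step waits for the previous one
        out.append(min(p * gain, 1.0))    # scale, then clamp to 1.0
    return out

# Batched processing: one call over the whole array. Plain NumPy runs this as
# optimized vector code on the CPU; swapping `np` for a GPU array library with
# the same interface (e.g. CuPy) would push the same work onto a specialized chip.
def brighten_batched(pixels: np.ndarray, gain: float) -> np.ndarray:
    return np.minimum(pixels * gain, 1.0)

if __name__ == "__main__":
    data = np.random.rand(100_000)
    assert np.allclose(brighten_sequential(data.tolist(), 1.2),
                       brighten_batched(data, 1.2))
```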

These task-focused chips come in a wide variety, reflecting the breadth of their uses, and the lines between them can be blurry. One kind, the graphics processing unit — think Nvidia and gamers — found wider use for tasks to which it’s uniquely suited, including artificial intelligence. Later on, the rise of smartphones created a gigantic need for another type, digital signal processing chips, designed to enhance photography, for example.
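A rough illustration of why the graphics chip turned out to be uniquely suited to artificial intelligence (my example, not the article’s): most neural-network work boils down to matrix multiplication, and every output cell in the loop below can be computed independently of the others, which is exactly the kind of work a GPU spreads across thousands of small cores.

```python
import numpy as np

# Matrix multiplication, the workhorse of neural networks. Each output cell
# C[i, j] depends only on row i of A and column j of B, so all the cells can
# be computed at the same time: the parallelism GPUs are built for.
def matmul_naive(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(n):          # the iterations of these two loops do not
        for j in range(m):      # depend on each other: ideal for parallel cores
            C[i, j] = float(np.dot(A[i, :], B[:, j]))
    return C

if __name__ == "__main__":
    A, B = np.random.rand(64, 32), np.random.rand(32, 16)
    assert np.allclose(matmul_naive(A, B), A @ B)
```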

Source: How Chip Designers Are Breaking Moore’s Law – WSJ
