
Artificial Intelligence Uses a Computer Chip Designed for Video Games. Does That Matter?

As AI and machine learning become increasingly widespread in the global economy, attention is turning to the hardware that drives them. Currently, nearly all AI systems run on a type of chip, the GPU, that was originally designed for video gaming.

Are current chip designs fit for purpose in an AI future, or is a new type of chip needed? The answer could have profound consequences for the IT sector. Hodan Omaar is a policy analyst at the Information Technology and Innovation Foundation.

BRINK: First of all, can you explain the difference between a GPU chip and the more well-known CPU chip that is used in most of our computers?

Omaar: Central processing units (CPUs) and graphics processing units (GPUs) have a lot in common, but they have different architectures and are built for different purposes. CPUs have a small number of processing cores, which provide the power a CPU needs to perform certain tasks or computations. 

A CPU can focus these cores on getting individual tasks done quickly, but it does these tasks serially, meaning one at a time. As a result, CPUs are better suited to computations where speed, or low latency, is what matters.

The Difference Between a Sports Car and a Truck

GPUs are made up of many smaller and more specialized cores that work together to deliver massive performance on processing tasks that can be easily divided up and processed across many cores. This makes GPUs better suited to tasks where bandwidth, rather than speed, is important.

To see this, imagine a CPU is a sports car and a GPU is a semi-truck, and their task is to move a house full of boxes from one place to another. The sports car will move the boxes more quickly, but it will have to keep making the journey back and forth, whereas the semi-truck will carry a much greater load, but will travel more slowly. 
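To make the sports car versus semi-truck contrast concrete, here is a minimal sketch of the two programming patterns involved: handling items one at a time versus issuing one bulk operation over all of them, which is the pattern GPU-style hardware is built to exploit. The NumPy code below runs on a CPU and the numbers are arbitrary; it only illustrates the pattern, not actual GPU execution.

```python
import numpy as np

boxes = np.random.rand(1_000_000)  # one million "boxes" to move

# Serial, CPU-style: handle each element individually, one trip at a time.
def move_serially(values):
    out = np.empty_like(values)
    for i, v in enumerate(values):
        out[i] = v * 2.0 + 1.0
    return out

# Parallel-friendly, GPU-style: express the same work as a single bulk
# operation that can be spread across many cores at once.
def move_in_bulk(values):
    return values * 2.0 + 1.0

# Both produce the same result; only the structure of the work differs.
assert np.allclose(move_serially(boxes[:1000]), move_in_bulk(boxes[:1000]))
```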

BRINK: Why do GPUs appear to work much better in AI than CPUs?

Omaar: AI applications, such as machine learning, deep learning and autonomous driving, involve highly parallelizable, predictable computations. Consider the computations involved in training an AI system to recommend pizzas based on factors like the weather, order history and general location. While the values of the factors themselves might differ across regions, the arithmetic operations the system performs in each are the same. It is, therefore, more efficient to use GPUs, which can execute the operations simultaneously in parallel, than CPUs, which do them sequentially.
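A hedged illustration of the pizza example: the same scoring arithmetic applied to many regions at once. The feature names and weights below are invented for illustration; only the pattern matters — identical operations over different inputs, batched into one computation rather than looped over one region at a time.

```python
import numpy as np

# Each row is a region: [temperature, past_orders, distance_to_store]
regions = np.array([
    [30.0, 12.0, 1.5],
    [18.0,  4.0, 6.0],
    [25.0,  9.0, 3.2],
])
weights = np.array([-0.1, 0.8, -0.5])  # hypothetical model weights

# One matrix-vector product scores every region simultaneously.
scores = regions @ weights
print(scores)
```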

In general, AI chips don’t just execute more calculations in parallel; they also have a number of features optimized for AI workloads. For instance, many AI chips are designed for low-precision computing — which trades off the numerical precision of a calculation for speed and efficiency — because AI applications don’t need highly precise values to accurately encode and manipulate the data used to train AI algorithms. They can also be built to work with specific programming languages that execute AI code more efficiently.
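A small sketch of the low-precision trade-off mentioned above: storing the same values as 16-bit rather than 32-bit floats halves the memory footprint at the cost of some rounding error. AI accelerators support formats like these in hardware; this snippet only demonstrates the precision-for-size trade-off itself, not accelerator behavior.

```python
import numpy as np

x32 = np.random.rand(1_000_000).astype(np.float32)
x16 = x32.astype(np.float16)  # same values, half the bits per number

print("bytes at float32:", x32.nbytes)   # 4,000,000 bytes
print("bytes at float16:", x16.nbytes)   # 2,000,000 bytes

# The cost: small rounding errors introduced by the narrower format.
print("max rounding error:", np.max(np.abs(x32 - x16.astype(np.float32))))
```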


BRINK: GPUs were designed for gaming, so are they the best type of chip for handling AI?

Omaar: It’s true that GPUs were originally designed for the video gaming industry because they are particularly good at matrix arithmetic, a major mathematical tool used to construct and manipulate realistic images. Many of the independent and identical operations I mentioned that AI systems do for training and inference are also matrix multiplication operations, making them ideal for GPUs.
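A minimal sketch of why the same hardware serves both uses: a graphics transform and a neural-network layer are each just matrix multiplications. The rotation angle, layer sizes and weights below are arbitrary examples, not taken from any real system.

```python
import numpy as np

# Graphics: rotate a batch of 2D points by 30 degrees.
theta = np.radians(30)
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
points = np.random.rand(1000, 2)
rotated = points @ rotation.T

# Neural network: apply one dense layer to a batch of inputs.
inputs = np.random.rand(1000, 64)
weights = np.random.rand(64, 32)
activations = np.maximum(inputs @ weights, 0.0)  # matmul + ReLU

print(rotated.shape, activations.shape)
```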

But a new generation of AI chips, specialized for different tasks, is starting to emerge. This is partly because improvements in CPUs are slowing, as the ability to pack more transistors onto a single processor approaches its physical limits.

A New Generation of AI Chips Is On the Horizon

The market for specialized AI chips divides broadly into three categories. The first is GPUs, which are mostly used to train and develop AI algorithms. The second is field-programmable gate arrays (FPGAs), which are mostly used to apply trained AI algorithms to new data inputs — a task known as inference. FPGAs differ from other AI chips in that their architecture can be modified by programmers after fabrication.

The third group of AI chips are “application-specific integrated circuits” (ASICs), which can be used for either training or inference tasks. ASICs have hardware that is customized for a specific algorithm and typically provide more efficiency than FPGAs. But because they are so narrow in their application, they grow obsolete more quickly as new AI algorithms are created. 

BRINK: Some say that the United States is losing its competitiveness in this market. How much of a concern is that?

Omaar: Developing state-of-the-art AI chips is important to ensure AI developers and users can remain competitive in AI R&D and deployment. The United States is still the world leader in designing chips for AI systems. In our 2021 report looking at AI competitiveness, we found that at least 62 U.S. firms are developing AI chips, compared with 29 firms in China and 14 in the European Union. 

The United States has many advantages for AI chip production, including high-quality infrastructure and logistics, innovation clusters, leading universities and a history of leadership in the field. 

Continued Leadership Is Not Guaranteed

China has targeted the industry for a global competitive advantage, as detailed in a number of government plans, including “Made in China 2025.” And while some of its policy actions are fair and legitimate, many seek to unfairly benefit Chinese firms at the expense of more-innovative foreign firms.

The U.S. government needs to bolster long-term U.S. strength in the production of AI chips to stay competitive in AI. It can do this in part by expanding domestic chip manufacturing activity. For instance, Congress and the Biden administration should fully appropriate the funds needed to enact the programs articulated in the CHIPS Act legislation, which passed Congress as part of the National Defense Authorization Act at the end of 2020.

The legislation includes a combination of elements that would advance both U.S.-based chip innovation and manufacturing, including, for instance, $7 billion over five years for chip-focused R&D funding. The legislation also introduces a 40% investment tax credit for chip equipment and facilities to attract and incentivize more domestic chip manufacturing.

Hodan Omaar

Policy Analyst at The Center for Data Innovation @hodanomaar

Hodan Omaar is a policy analyst at the Center for Data Innovation, a nonprofit, non-partisan think tank. Hodan’s work covers U.S. policy in artificial intelligence across sectors such as healthcare, education, and government and she speaks and writes on a variety of issues related to high-performance and quantum computing.
