The AI chip market is set to exceed US$250 billion by 2033.

AI Chips 2023-2033

Technology analysis and market forecasts for global AI chip sales by geography, processing type, architecture, packaging, application, and industry vertical, together with calculations of the costs associated with leading-edge AI chips.


The global AI chips market will grow to US$257.6 billion by 2033, with the three largest industry verticals at that time being IT & Telecoms; Banking, Financial Services and Insurance (BFSI); and Consumer Electronics. Artificial Intelligence is transforming the world as we know it. From the victory of DeepMind's AlphaGo over Go world champion Lee Sedol in 2016 to the robust predictive abilities of OpenAI's ChatGPT, the complexity of AI training algorithms is growing at a startlingly fast pace: the amount of compute needed to run newly developed training algorithms appears to be doubling roughly every four months. To keep pace with this growth, hardware for AI applications must be not only scalable - allowing for longevity as new algorithms are introduced while keeping operational overheads low - but also able to handle increasingly complex models at a point close to the end user. A two-pronged approach, handling AI both in the cloud and at the edge, is required to fully realize an effective Internet of Things.
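The compounding effect of that doubling rate can be made concrete with a short calculation. The sketch below is illustrative only: it simply applies the "doubling roughly every four months" growth rate quoted above; the function name and inputs are our own.

```python
# Illustrative only: growth in training-compute demand if it doubles
# every ~4 months, as described above.
def compute_growth(months: float, doubling_period: float = 4.0) -> float:
    """Return the multiplicative growth factor after `months` months."""
    return 2.0 ** (months / doubling_period)

# Over one year: 2^(12/4) = 8x the compute.
print(compute_growth(12))   # 8.0
# Over two years: 2^(24/4) = 64x.
print(compute_growth(24))   # 64.0
```

At this rate, hardware performance gains alone cannot keep up, which is why the report emphasizes scalability and edge/cloud division of labour.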
 
Following a period of dedicated research by expert analysts, IDTechEx has published a report that offers unique insights into the global AI chip technology landscape and corresponding markets. The report contains a comprehensive analysis of 19 players involved in AI chip design, as well as an account of 10 design start-ups and the most prominent semiconductor manufacturers globally. This includes a detailed assessment of technology innovations and market dynamics. The market analysis and forecasts cover total revenue under three scopes (all-inclusive; excluding multi-purpose chips; and excluding both multi-purpose chips and cloud-based offerings), with granular forecasts disaggregated by geography (Europe, APAC, and North America), processing type (edge and cloud), chip architecture (GPU, CPU, ASIC, and FPGA), packaging type (System-on-Chip, Multi-Chip Module, and 2.5D+), application (language, computer vision, predictive, and other), and industry vertical (industrial, healthcare, automotive, retail, media & advertising, BFSI, consumer electronics, IT & telecoms, and other).
 
In addition, this report contains rigorous calculations of the design, manufacture, assembly, test & packaging, and operational costs of AI chips at nodes from 90 nm down to 3 nm. Forecasts are presented for design costs and manufacture costs (investment per wafer) as semiconductor manufacturers move to nodes beyond 3 nm. The report presents an unbiased analysis of primary data gathered via our interviews with key players, and it builds on our expertise in the semiconductor and electronics sectors.
 
This research delivers valuable insights for:
  • Companies that require AI-capable hardware.
  • Companies that design/manufacture AI chips and/or AI-capable embedded systems.
  • Companies that supply components used in AI-capable embedded systems.
  • Companies that invest in AI and/or semiconductor design, manufacture, and packaging.
  • Companies that develop other technologies for machine learning workloads.
 
 
The rise of intelligent hardware
 
The notion of designing hardware to fulfil a certain function - particularly to accelerate certain types of computation by taking control of them away from the main (host) processor - is not a new one. In the early days of computing, CPUs (Central Processing Units) were paired with mathematical coprocessors known as Floating-Point Units (FPUs), whose purpose was to offload complex floating-point operations from the CPU to a special-purpose chip that could handle them more efficiently, freeing the CPU to focus on other tasks. As markets and technology developed, so too did workloads, and new pieces of hardware were needed to handle them. A particularly noteworthy example of such a specialized workload is the production of computer graphics, where the accelerator in question has become something of a household name: the Graphics Processing Unit (GPU).
 
Just as computer graphics required a different type of chip architecture, the emergence of machine learning has brought about demand for another type of accelerator: one capable of efficiently handling machine learning workloads. This report details the differences between CPU, GPU, and Field Programmable Gate Array (FPGA) architectures, and their relative effectiveness at handling machine learning workloads. Application-Specific Integrated Circuits (ASICs) can be designed to handle specific workloads effectively, and the architectures of several of the world's leading designers of ASICs for AI are analyzed in this report. The need for chips capable of handling ML workloads will only increase as the benefits for consumers (increased functionality in consumer electronics, more accurate image classification and object detection in security cameras, and low-latency, high-precision inference in autonomous vehicles, for example) are realized. This is reflected in the forecast compound annual growth rate (CAGR) of 24.4% for AI chips (including those used for other purposes in addition to ML workloads, as well as chips accessible through a cloud service) between 2023 and 2033.
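A CAGR compounds annually: end value = start value × (1 + rate)^years. As a minimal sketch, the snippet below back-calculates the 2023 base value implied by the stated US$257.6 billion 2033 figure and 24.4% CAGR; the 2023 value is not stated in this excerpt, so the result is purely an illustrative consistency check, not a figure from the report.

```python
# CAGR relates a start value, end value, and number of years:
#   end = start * (1 + rate) ** years
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by start/end values."""
    return (end / start) ** (1 / years) - 1

def implied_start(end: float, rate: float, years: int) -> float:
    """Back out the starting value from an end value and a CAGR."""
    return end / (1 + rate) ** years

# US$257.6bn by 2033 at 24.4% CAGR over 2023-2033 (10 years) implies a
# 2023 base of roughly US$29bn (illustrative back-calculation only).
base_2023 = implied_start(257.6, 0.244, 10)
print(f"Implied 2023 market size: US${base_2023:.1f}bn")
```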
 
Compound Annual Growth Rates for each of the three main forecasts in this report, between the years 2023 and 2033. Source: IDTechEx
 
AI is on the global agenda
 
AI's capabilities in natural language processing (understanding textual data not just linguistically but also contextually), speech recognition (deciphering spoken language and converting it to text in the same language, or translating it into another), recommendation (sending personalized adverts and suggestions to consumers based on their interactions with service items), reinforcement learning (making predictions based on observation and exploration, as used when training agents to play a game), object detection, and image classification (distinguishing objects from an environment and deciding what each object is) are so significant to the efficacy of certain products (such as autonomous vehicles and industrial robots) and to models of national governance that the development of AI hardware and software has motivated national and regional funding initiatives across the globe. Because AI-capable processors and accelerators depend on semiconductor manufacturers, and those capable of producing the advanced nodes needed for data centre chips are concentrated in the Asia-Pacific region (particularly Taiwan and South Korea), the ability to manufacture AI chips depends on the possible supply from a select few companies. For edge devices, leading-edge node technology is less necessary, given that these chips are typically used for low-power inference; the fact remains, however, that the global supply chain is heavily indebted to a specific geographic region.
 
The risk of relying on manufacturing capabilities concentrated in a specific geographic region was realized in 2020, when a number of compounding factors (the COVID-19 pandemic, the rise of data mining, a Taiwanese drought, fabrication facility fires, and neon procurement difficulties) led to a global chip shortage, with demand for semiconductor chips exceeding supply. Since then, the largest stakeholders in the semiconductor value chain (the US, the EU, South Korea, Taiwan, Japan, and China) have sought to reduce their exposure to a manufacturing deficit, should another set of circumstances result in an even more severe shortage. National and regional government initiatives have been put in place to incentivize semiconductor manufacturers to expand operations or build new facilities. These initiatives are discussed in the report, where the funding is broken down and the reasons behind them - and what they mean for other stakeholders, such as the restrictions imposed on China by the US and how China can build a national semiconductor supply chain around those restrictions - are detailed. In addition, the private investments in semiconductor manufacture announced since 2021 are outlined, along with companies' current manufacturing capabilities, particularly in relation to AI.
 
Shown here are the proposed and confirmed investments into semiconductor facilities by manufacturers since 2021. Where currencies have been listed in anything but US$, these have been converted to US$ as of publication date. Source: IDTechEx
 
The cost of progress
 
Machine learning is the process by which computer programs use data to make predictions based on a model, then optimize the model to better fit the data by adjusting its weightings. Computation therefore involves two stages: training and inference. The first stage of implementing an AI algorithm is training, where data is fed into the model and the model adjusts its weights until it fits appropriately with the provided data. The second stage is inference, where the trained AI algorithm is executed and new data (not provided during training) is classified in a manner consistent with the training data. Of the two, training is the more computationally intensive stage, given that it involves performing the same computation millions of times (training some leading AI algorithms can take days to complete). This poses the question: how much does it cost to train AI algorithms?
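The two stages above can be sketched with a deliberately tiny model. This is a minimal illustration, not the report's methodology: a one-parameter linear model y = w·x is trained by gradient descent on a few points, then used for inference on an unseen input. All data and names are invented for the example.

```python
# Minimal sketch of the two ML stages: training then inference.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]          # underlying relationship: y = 2x

# Training: repeatedly adjust the weight to reduce mean squared error.
w, lr = 0.0, 0.01
for _ in range(1000):
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

# Inference: apply the trained model to data not seen during training.
print(round(w, 3))        # weight converges to ~2.0
print(round(w * 5.0, 2))  # prediction for the unseen input x = 5
```

Even this toy loop shows why training dominates cost: the same gradient computation runs a thousand times here, and millions to billions of times for production-scale models, while inference is a single forward pass.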
 
In an effort to quantify this, IDTechEx has rigorously calculated the design, manufacture, assembly, test & packaging, and operational costs of AI chips from 90 nm down to 3 nm. Because a 3 nm chip with a given transistor density will have a smaller area than a more mature node chip with the same transistor density, the cost of deploying a leading-edge chip for a given AI algorithm can be compared with that of a trailing-edge chip capable of similar performance on the same algorithm. For example, should a 3 nm chip with a given area and transistor density be used continuously for five years, the cost incurred will be 45.4X less than that incurred by running a 90 nm chip with the same transistor density continuously for five years, based on the 3 nm chip model that we employ. This comparison includes the initial production costs of the respective chips, and can be used to determine whether it is worthwhile to upgrade from a more mature node to a more advanced one, depending on how long the chip is to remain in service.
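The structure of such a comparison can be sketched in a few lines. All dollar figures below are hypothetical placeholders, not the report's data, and the function is our own; only the shape of the calculation (upfront production cost plus continuous operating cost over a service life) mirrors the analysis described above.

```python
# Illustrative comparison of lifetime chip costs at two nodes.
# All cost inputs are hypothetical placeholders, NOT figures from the report.
def lifetime_cost(production_cost: float, hourly_operating_cost: float,
                  years: float) -> float:
    """Production cost plus the cost of running the chip continuously."""
    hours = years * 365 * 24
    return production_cost + hourly_operating_cost * hours

# A leading-edge chip may cost more to produce but less to operate, because a
# smaller die at the same transistor density draws less power per computation.
cost_3nm = lifetime_cost(production_cost=500.0, hourly_operating_cost=0.02, years=5)
cost_90nm = lifetime_cost(production_cost=50.0, hourly_operating_cost=1.00, years=5)
ratio = cost_90nm / cost_3nm
print(f"Mature node costs {ratio:.1f}x more over 5 years (hypothetical inputs)")
```

With real per-node production and power figures, the same calculation yields break-even service lifetimes, which is how a "worthwhile to upgrade" decision can be framed.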
 
The costs associated with producing and operating a chip at each of the given nodes over the course of 5 years, based on our model of a 3 nm chip used for AI purposes. Source: IDTechEx
 
Market developments and roadmaps
 
IDTechEx's model of the global AI chips market considers architectural trends, developments in packaging, the dispersion/concentration of funding and investments, historical financial data, and geographically-localized ecosystems to give an accurate representation of the evolving market value over the next ten years.
Report Metrics
Historic Data: 2019 - 2022
CAGR: The global market for AI chips will reach US$257.6 billion by 2033. This represents a CAGR of 24.4% over the forecast period (2023 to 2033).
Forecast Period: 2023 - 2033
Regions Covered: Worldwide, Asia-Pacific, Europe, North America (USA + Canada)
Segments Covered: Geography (North America, APAC, Europe, Rest of World), processing type (edge, cloud), architecture (GPU, CPU, ASIC, FPGA), packaging (SoC, MCM, 2.5D+), application (language, computer vision, predictive, other), and industry vertical (industrial, healthcare, automotive, retail, media & advertising, BFSI, consumer electronics, IT & telecoms, other)
Analyst access from IDTechEx
All report purchases include up to 30 minutes of telephone time with an expert analyst, who will help you link key findings in the report to the business issues you're addressing. This time must be used within three months of purchasing the report.
Further information
If you have any questions about this report, please do not hesitate to contact our report team at research@IDTechEx.com or call one of our sales managers:

AMERICAS (USA): +1 617 577 7890
ASIA (Japan): +81 3 3216 7209
EUROPE (UK): +44 1223 812300
Table of Contents
1.EXECUTIVE SUMMARY
1.1.What is an AI chip?
1.2.AI acceleration
1.3.AI chip capabilities
1.4.AI chip applications
1.5.Edge AI
1.6.Advantages and disadvantages of edge AI
1.7.The AI chip landscape - overview
1.8.The AI chip landscape - key hardware players
1.9.The AI chip landscape - hardware start-ups
1.10.The AI chip landscape - other than hardware
1.11.AI landscape - geographic split: China
1.12.AI landscape - geographic split: USA
1.13.AI landscape - geographic split: Rest of World
1.14.TSMC - the foremost AI chip manufacturer
1.15.Semiconductor foundry node roadmap
1.16.Roadmap for advanced nodes
1.17.Traditional supply chain
1.18.IDM fabrication capabilities
1.19.Foundry capabilities
1.20.Map of proposed and confirmed funding
1.21.Proposed government funding
1.22.Chip transistor density
1.23.TSMC transistor densities
1.24.Chip design costs
1.25.Summary of chip costs
1.26.Analysis: production costs vs operating costs
1.27.Analysis: cost effectiveness of nodes
1.28.Analysis: cost to create new leading node chips
1.29.Future chip design costs
1.30.Future capital investment per wafer
1.31.Capital investment for leading-edge nodes
1.32.All-inclusive AI chip market forecast
1.33.AI chip (excluding multi-purpose) market forecast
1.34.Edge vs cloud computing
1.35.Growth rates and analysis
2.FORECASTS
2.1.Leading-edge node design, manufacturing, ATP, and operational costs
2.1.1.Overview
2.1.2.Design costs
2.1.3.Operational costs
2.1.4.Fabrication costs
2.1.5.Assembly, test and packaging costs
2.1.6.Comparison and analysis
2.2.Market forecasts
2.2.1.AI chip forecast 2023 - 2033
2.2.2.Disaggregated forecasts
3.AI HARDWARE - TECHNOLOGY OVERVIEW
3.1.Introduction to AI chips
3.1.1.What is an AI chip?
3.1.2.AI acceleration
3.1.3.Why AI acceleration is needed
3.1.4.The interaction between hardware and software
3.1.5.AI chip capabilities
3.1.6.AI chip applications
3.1.7.AI in robotics
3.1.8.AI in vehicles
3.1.9.Edge AI
3.1.10.Advantages and disadvantages of edge AI
3.1.11.The AI chip landscape - overview
3.1.12.The AI chip landscape - key hardware players
3.1.13.The AI chip landscape - hardware start-ups
3.1.14.The AI chip landscape - other than hardware
3.1.15.AI landscape - geographic split: China
3.1.16.AI landscape - geographic split: USA
3.1.17.AI landscape - geographic split: Rest of World
3.1.18.TSMC - the foremost AI chip manufacturer
3.1.19.Integrated circuits explained
3.1.20.The need for specialized chips
3.1.21.AI chip basics
3.1.22.AI chip types
3.1.23.Deep neural networks
3.1.24.Training and inference
3.1.25.AI chip capabilities
3.1.26.Parallel computing
3.1.27.Low-precision computing
3.1.28.Major players
3.1.29.Emerging technologies: neuromorphic photonic architectures
3.1.30.Components of a neural network
3.1.31.Photonic processing systems
3.2.Number representation
3.2.1.Fixed-point representation
3.2.2.Floating-point representation - example
3.2.3.Floating-point representation - range
3.2.4.Floating-point representation - rounding
3.2.5.The IEEE standards
3.2.6.Denormalized numbers
3.2.7.Quantization
3.3.Transistor Technology
3.3.1.How transistors operate: p-n junctions
3.3.2.How transistors operate: electron shells
3.3.3.How transistors operate: valence electrons
3.3.4.How transistors work: back to p-n junctions
3.3.5.How transistors work: connecting a battery
3.3.6.How transistors work: PNP operation
3.3.7.How transistors work: PNP
3.3.8.How transistors switch
3.3.9.From p-n junctions to FETs
3.3.10.How FETs work
3.3.11.Moore's law
3.3.12.Gate length reductions
3.3.13.FinFET
3.3.14.GAAFET, MBCFET, RibbonFET
3.3.15.Process nodes
3.3.16.Device architecture roadmap
3.3.17.Evolution of transistor device architectures
3.3.18.Carbon nanotubes for transistors
3.3.19.CNTFET designs
3.3.20.Semiconductor foundry node roadmap
3.3.21.Roadmap for advanced nodes
3.4.GPU architecture
3.4.1.Core count
3.4.2.Memory
3.4.3.Threads
3.4.4.Nvidia and AMD - performance
3.4.5.Nvidia and AMD - adoption
3.4.6.Tensor mathematics
3.4.7.Tensor cores
3.5.AI performance benchmarking
3.5.1.MLPerf and MLCommons
3.5.2.MLPerf - Training overview
3.5.3.MLPerf - Training benchmarks
3.5.4.MLPerf - Training HPC
3.5.5.MLPerf - Inference
4.AI CHIP FABRICATION - PLAYER CAPABILITIES AND INVESTMENTS
4.1.Supply chain and player capabilities
4.1.1.Semiconductor supply chain players
4.1.2.Traditional supply chain
4.1.3.IDM fabrication capabilities
4.1.4.Foundry capabilities
4.2.Recently announced player investments and news
4.2.1.Intel into the "angstrom era": Roadmap
4.2.2.Intel: 2022 investments in European fab capabilities
4.2.3.Intel: 2022 investments in US fab capabilities
4.2.4.Samsung: 2022 investments in US fab capabilities
4.2.5.TSMC: 2022 investments in fab capabilities
4.2.6.GlobalFoundries: Fabrication investments
4.2.7.Texas Instruments: Fabrication investments
4.2.8.UMC: Fabrication investments
4.2.9.SMIC and Hua Hong Semiconductor: Fabrication investments
4.2.10.Rapidus: Japan's quest for 2 nm
4.2.11.Map of proposed and confirmed funding
4.2.12.Investments in semiconductor manufacturing proposed since 2021
4.2.13.Proposed government funding
4.3.The US CHIPS and Science Act of 2022
4.3.1.Introduction to the US CHIPS Act
4.3.2.Components of Division A
4.3.3.Components of Division B
4.3.4.Funding at a glance
4.3.5.Funding timeline for Division A
4.3.6.CHIPS for America Fund
4.3.7.CHIPS Program - Application priorities
4.3.8.Nine areas of action for the CHIPS Program
4.3.9.Motivations and background
4.3.10.The US-China trade war
4.3.11.The AI battlefield
4.3.12.Big business: TSMC
4.3.13.Good for IDMs, not so much for Fabless
4.3.14.Summary of announced investments
4.4.The European Chips Act of 2022
4.4.1.Motivation and goals
4.4.2.The eight provisions of the European Chips Act
4.4.3.Timescale for achieving goals
4.4.4.The three pillars of financing
4.4.5.Funding allocations
4.4.6.Funding at a glance
4.4.7.Analysis of funding
4.4.8.Pillar 1: The Chips for Europe Initiative
4.4.9.Pillar 2: Ensuring security of supply
4.4.10.Addressing the EU's semiconductor weaknesses
4.4.11.Investment plans for Germany
4.4.12.Investment plans for France
4.4.13.Investment plans for Spain
4.4.14.Investment plans for Italy
4.4.15.Summary of European investments
4.5.Chinese semiconductor investments
4.5.1.A response to US restrictions
4.5.2.Made in China 2025
4.5.3.Funding generated for Made in China 2025
4.5.4.Funding recipients
4.5.5.Results of Made in China 2025
4.5.6.New investments: 2022 and beyond
4.5.7.Short-term difficulties, long-term opportunities
4.5.8.AI acceleration in China
4.5.9.AI adoption in China
4.5.10.Summary of announced investments
4.6.South Korea semiconductor investments
4.6.1.National High-tech Industry Development Strategy
4.6.2.Six objectives that comprise the strategy
4.6.3.Building the world's largest semiconductor cluster
4.6.4.Growing of the domestic supply chain
4.6.5.K-Semiconductor industry targets
5.SUPPLY CHAIN PLAYERS
5.1.Nvidia
5.1.1.Nvidia V100
5.1.2.Nvidia A100
5.1.3.Nvidia H100
5.1.4.MLPerf results - Training (H100)
5.1.5.MLPerf results - Training: HPC
5.1.6.MLPerf results - Inference: Data Center
5.1.7.Grace Hopper Superchip
5.1.8.Grace Hopper architecture
5.2.Intel
5.2.1.Intel's AI hardware portfolio
5.2.2.Habana Gaudi
5.2.3.Habana Gaudi2
5.2.4.Habana Greco
5.2.5.Xeon Scalable Processor architecture
5.2.6.4th Gen Xeon Scalable Processor performance
5.3.Advanced Micro Devices (AMD) Xilinx
5.3.1.AMD Radeon Instinct
5.3.2.AMD Ryzen 7040
5.3.3.Alveo V70
5.3.4.AMD Xilinx ACAP
5.3.5.Versal AI
5.4.Google
5.4.1.Google TPU
5.4.2.Cloud TPU and Edge TPU
5.4.3.Pixel Neural Core and Pixel Tensor
5.5.Qualcomm
5.5.1.Qualcomm - Overview
5.5.2.Cloud AI 100
5.5.3.Qualcomm AI core
5.5.4.Qualcomm AI performance results
5.5.5.MLPerf results - Inference: Datacenter
5.5.6.MLPerf results - Inference: Edge
5.5.7.MLPerf results - Inference: Mobile and Tiny
5.5.8.Mobile AI
5.6.IBM
5.6.1.IBM Telum processor
5.6.2.IBM Artificial Intelligence Unit
5.7.Amazon Web Services (AWS)
5.7.1.AWS Inferentia
5.7.2.Inferentia and Inferentia2 architectures
5.7.3.NeuronCore
5.7.4.AWS Trainium
5.8.NXP Semiconductors
5.8.1.NXP Semiconductors: Introduction
5.8.2.MCX N
5.8.3.i.MX 95 and NPU
5.9.Huawei
5.9.1.Huawei Ascend and Kirin chipsets
5.9.2.Da Vinci architecture
5.10.Tesla
5.10.1.Tesla D1 chip
5.10.2.Tesla FSD
5.11.Apple
5.11.1.Apple's Neural Engine
5.11.2.The ANE's capabilities and shortcomings
5.12.Cambricon
5.12.1.Siyuan series
5.13.NationalChip
5.13.1.GX series
5.13.2.GX8002 and gxNPU
5.14.Ambarella
5.14.1.CV3-AD685 for automotive applications
5.14.2.CVflow architecture
5.15.MediaTek
5.15.1.MediaTek Dimensity and APU
5.16.Efinix
5.16.1.Efinix Quantum architecture
5.16.2.Titanium and Trion FPGAs
5.17.Graphcore
5.17.1.IPU
5.17.2.Bow IPU and Pods
5.17.3.Benchmarking results
5.18.Tencent
5.18.1.Zixiao
5.19.Baidu
5.19.1.Kunlun and XPU
5.20.Start-ups and New Players
5.20.1.Lightmatter
5.20.2.Lightelligence
5.20.3.Perceive
5.20.4.Enflame
5.20.5.SambaNova
5.20.6.Cerebras
5.20.7.Groq
5.20.8.Mythic
5.20.9.Hailo
5.20.10.Blaize
 

Ordering Information

AI Chips 2023-2033

License type                              GBP         EUR         USD         JPY          CNY
Electronic (1-5 users)                    £5,650.00   €6,400.00   $7,000.00   ¥990,000     元50,000.00
Electronic (6-10 users)                   £8,050.00   €9,100.00   $10,000.00  ¥1,406,000   元72,000.00
Electronic and 1 Hardcopy (1-5 users)     £6,450.00   €7,310.00   $7,975.00   ¥1,140,000   元58,000.00
Electronic and 1 Hardcopy (6-10 users)    £8,850.00   €10,010.00  $10,975.00  ¥1,556,000   元80,000.00
Click here to enquire about additional licenses.
If you are a reseller/distributor please contact us before ordering.
For enquiries, quotations, and invoices, please contact m.murakoshi@idtechex.com.

Report Statistics

Slides: 344
Companies: 29
Forecasts to: 2033
ISBN: 9781915514653
 

Preview Content

Webinar Slides: Key Semiconductor Trends (PDF)
Webinar Slides (PDF)
Sample pages (PDF)
 
 
 
 
