Specialized hardware for artificial intelligence is used to execute artificial intelligence programs faster, such as Lisp machines, neuromorphic engineering, event cameras, and physical neural networks.
## Lisp machines

Main article: Lisp machine

## Neural network hardware

See also: artificial neural network

### Physical neural networks

Main article: physical neural network

## Component hardware

### AI accelerators

Main article: AI accelerator

Since the 2010s, advances in computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[1] By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI.[2] OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.[3][4]

## Sources

1. Research, AI (23 October 2015). "Deep Neural Networks for Acoustic Modeling in Speech Recognition". airesearch.com. Retrieved 23 October 2015.
2. "GPUs Continue to Dominate the AI Accelerator Market for Now". InformationWeek. December 2019. Retrieved 11 June 2020.
3. Ray, Tiernan (2019). "AI is changing the entire nature of compute". ZDNet. Retrieved 11 June 2020.
4. "AI and Compute". OpenAI. 16 May 2018. Retrieved 11 June 2020.
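The 3.4-month doubling-time trendline reported by OpenAI can be sanity-checked with a short calculation; this is a minimal sketch (the function names are illustrative, not from any cited source) showing that a 300,000-fold increase corresponds to roughly 18 doublings, which at 3.4 months per doubling spans about five years, consistent with the 2012 to 2017 window of the estimate:

```python
import math

def doublings(factor):
    """Number of doublings needed to reach a given growth factor."""
    return math.log2(factor)

def months_for_factor(factor, doubling_time_months=3.4):
    """Months implied by a fixed doubling time to reach the factor."""
    return doublings(factor) * doubling_time_months

# OpenAI's reported figures: a 300,000-fold increase in training compute
# between AlexNet (2012) and AlphaZero (2017), with compute doubling
# every 3.4 months.
d = doublings(300_000)          # about 18.2 doublings
m = months_for_factor(300_000)  # about 62 months, roughly 5 years
print(f"{d:.1f} doublings over {m / 12:.1f} years")
```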