GPU
A GPU (Graphics Processing Unit) is a specialized processor initially intended for fast image processing. GPUs may offer more raw computing power than general-purpose CPUs but require a specialized, parallel way of programming. Leela Chess Zero has proven that a best-first Monte-Carlo Tree Search (MCTS) with deep learning methodology will work with GPU architectures.
History
In the 1970s and 1980s RAM was expensive, and home computers used custom graphics chips that operated directly on registers/memory without a dedicated frame buffer or texture buffer, like the TIA in the Atari VCS gaming system, GTIA+ANTIC in the Atari 400/800 series, or Denise+Agnus in the Commodore Amiga series. The 1990s made 3D graphics and 3D modeling more popular, especially for video games, and cards specifically designed to accelerate 3D math emerged, such as SGI Impact (1995) in 3D graphics workstations or 3dfx Voodoo (1996) for playing 3D games on PCs. Some game engines could instead use the SIMD capabilities of CPUs, such as Intel's MMX instruction set or AMD's 3DNow!, for real-time rendering. Sony's 3D-capable GTE chip in the PlayStation (1994) and Nvidia's 2D/3D combi chips like the NV1 (1995) coined the term GPU for 3D graphics hardware acceleration. With the advent of the unified shader architecture, as in Nvidia Tesla (2006), ATI/AMD TeraScale (2007), or Intel GMA X3000 (2006), GPGPU frameworks like CUDA and OpenCL emerged and gained in popularity.
GPU in Computer Chess
There are four main ways to use a GPU for chess:
- As an accelerator in Lc0: run a neural network for position evaluation on GPU
- Offload the search in Zeta: run a parallel game tree search with move generation and position evaluation on GPU
- As a hybrid in perft_gpu: expand the game tree to a certain degree on CPU and offload to GPU to compute the sub-tree
- Neural network training such as Stockfish NNUE trainer in Pytorch[2] or Lc0 TensorFlow Training
GPU Chess Engines
GPGPU
Early efforts to leverage a GPU for general-purpose computing required reformulating computational problems in terms of graphics primitives via graphics APIs like OpenGL or DirectX. These were followed by the first GPGPU frameworks such as Sh/RapidMind and Brook, and finally CUDA and OpenCL[3].
Khronos OpenCL
OpenCL, specified by the Khronos Group, is widely adopted across all kinds of hardware accelerators from different vendors.
AMD
AMD supports language frontends like OpenCL, HIP, and C++ AMP, as well as OpenMP offload directives. With ROCm it offers its own parallel compute platform.
- AMD OpenCL Developer Community
- AMD ROCm™ documentation
- AMD OpenCL Programming Guide
- AMD OpenCL Optimization Guide
- AMD GPU ISA documentation
Apple
Since macOS 10.14 Mojave, Apple recommends a transition from OpenCL to Metal.
- Apple OpenCL Developer
- Apple Metal Developer
- Apple Metal Programming Guide
- Metal Shading Language Specification
Intel
Intel supports OpenCL with implementations like BEIGNET and NEO for different GPU architectures, and the oneAPI platform with DPC++ as its frontend language.
Nvidia
CUDA is the parallel computing platform by Nvidia. It supports language frontends like C, C++, Fortran, OpenCL and offload directives via OpenACC and OpenMP.
- Nvidia CUDA Zone
- Nvidia PTX ISA
- Nvidia CUDA Toolkit Documentation
- Nvidia CUDA C++ Programming Guide
- Nvidia CUDA C++ Best Practices Guide
Further
- Vulkan (OpenGL successor of the Khronos Group)
- DirectCompute (Microsoft)
- C++ AMP (Microsoft)
- OpenACC (offload directives)
- OpenMP (offload directives)
Hardware Model
A common scheme on GPUs with unified shader architecture is to run multiple threads in SIMT fashion, and a multitude of SIMT waves on the same SIMD unit, to hide memory latencies. Multiple processing elements (GPU cores) are members of a SIMD unit, multiple SIMD units are coupled into a compute unit, and up to hundreds of compute units are present on a discrete GPU. The actual SIMD units may have architecture-dependent numbers of cores (SIMD8, SIMD16, SIMD32) and different computation abilities - floating-point and/or integer, with specific bit-widths of the FPU/ALU and registers. There is a difference between a vector processor with variable bit-width and SIMD units with fixed-bit-width cores. Architecture white papers from different vendors leave room for speculation about the concrete underlying hardware implementation and its classification as a hardware architecture. Scalar units present in the compute unit perform special functions the SIMD units are not capable of, and MMAC units (matrix-multiply-accumulate units) are used to speed up neural networks further.
| AMD Terminology | Nvidia Terminology |
| --- | --- |
| Compute Unit | Streaming Multiprocessor |
| Stream Core | CUDA Core |
| Wavefront | Warp |
Hardware Examples
Nvidia GeForce GTX 580 (Fermi) [4][5]
- 512 CUDA cores @1.544GHz
- 16 SMs - Streaming Multiprocessors
- organized in 2x16 CUDA cores per SM
- Warp size of 32 threads
AMD Radeon HD 7970 (GCN)[6][7]
- 2048 Stream cores @0.925GHz
- 32 Compute Units
- organized in 4xSIMD16, each SIMT4, per Compute Unit
- Wavefront size of 64 work-items
Wavefront and Warp
Generalized, the Wavefront (AMD) or Warp (Nvidia) size is the number of threads executed in SIMT fashion on a GPU with unified shader architecture.
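These properties can be queried at runtime. A minimal CUDA sketch (an illustrative addition, not part of the original page) that prints the Streaming Multiprocessor count, warp size, and resident-thread limit via the CUDA runtime API:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: %s\n", dev, prop.name);
        printf("  Streaming Multiprocessors: %d\n", prop.multiProcessorCount);
        printf("  Warp size:                 %d\n", prop.warpSize);
        printf("  Max threads per SM:        %d\n", prop.maxThreadsPerMultiProcessor);
    }
    return 0;
}
```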
Programming Model
A parallel programming model for GPGPU can be data-parallel, task-parallel, a mixture of both, or, with libraries and offload directives, implicitly parallel. Single GPU threads (work-items in OpenCL) contain the kernel to be computed and are coupled into a work-group; one or multiple work-groups form the NDRange to be executed on the GPU device. The members of a work-group execute the same kernel, can usually be synchronized, and have access to the same scratch-pad memory, with architecture limits on how many work-items a work-group can hold and how many threads can run concurrently on the device in total. A minimal CUDA sketch of this hierarchy follows the thread examples below.
| OpenCL Terminology | CUDA Terminology |
| --- | --- |
| Kernel | Kernel |
| Compute Unit | Streaming Multiprocessor |
| Processing Element | CUDA Core |
| Work-Item | Thread |
| Work-Group | Block |
| NDRange | Grid |
Thread Examples
Nvidia GeForce GTX 580 (Fermi, CC2) [8]
- Warp size: 32
- Maximum number of threads per block: 1024
- Maximum number of resident blocks per multiprocessor: 8
- Maximum number of resident warps per multiprocessor: 48
- Maximum number of resident threads per multiprocessor: 1536
AMD Radeon HD 7970 (GCN) [9]
- Wavefront size: 64
- Maximum number of work-items per work-group: 1024
- Maximum number of work-groups per compute unit: 40
- Maximum number of Wavefronts per compute unit: 40
- Maximum number of work-items per compute unit: 2560
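As announced above, here is a minimal CUDA sketch of the thread hierarchy (an illustrative addition; the names vecAdd and launchVecAdd are hypothetical): each thread (work-item) computes one element, threads are grouped into blocks (work-groups), and blocks form the grid (NDRange).

```cpp
#include <cuda_runtime.h>

// Each thread (work-item) handles one array element.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global work-item index
    if (i < n) c[i] = a[i] + b[i];
}

// a, b, c are device pointers (e.g. allocated with cudaMalloc).
void launchVecAdd(const float* a, const float* b, float* c, int n) {
    int block = 256;                      // work-group size, within device limits
    int grid  = (n + block - 1) / block;  // number of work-groups in the NDRange
    vecAdd<<<grid, block>>>(a, b, c, n);
    cudaDeviceSynchronize();
}
```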
Memory Model
OpenCL offers the following memory model for the programmer:
- __private - usually registers, accessible only by a single work-item (thread).
- __local - scratch-pad memory shared across the work-items of a work-group (threads of a block).
- __constant - read-only memory.
- __global - usually VRAM, accessible by all work-items (threads).
| OpenCL Terminology | CUDA Terminology |
| --- | --- |
| Private Memory | Registers |
| Local Memory | Shared Memory |
| Constant Memory | Constant Memory |
| Global Memory | Global Memory |
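In CUDA source code these memory spaces appear as annotations. A hedged sketch (illustrative only; the kernel name scaleShared and the fixed block size of 256 are assumptions):

```cpp
__constant__ float coeff[16];                 // __constant - read-only memory

__global__ void scaleShared(const float* in, // __global - VRAM, visible to all threads
                            float* out, int n) {
    __shared__ float tile[256];              // __local - per work-group scratch-pad
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                         // work-group barrier
    float v = tile[threadIdx.x] * coeff[0];  // __private - a register per thread
    if (i < n) out[i] = v;
}
```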
Memory Examples
Nvidia GeForce GTX 580 (Fermi) [10]
- 128 KiB private memory per compute unit
- 48 KiB (16 KiB) local memory per compute unit (configurable)
- 64 KiB constant memory
- 8 KiB constant cache per compute unit
- 16 KiB (48 KiB) L1 cache per compute unit (configurable)
- 768 KiB L2 cache in total
- 1.5 GiB to 3 GiB global memory
AMD Radeon HD 7970 (GCN) [11]
- 256 KiB private memory per compute unit
- 64 KiB local memory per compute unit
- 64 KiB constant memory
- 16 KiB constant cache per four compute units
- 16 KiB L1 cache per compute unit
- 768 KiB L2 cache in total
- 3 GiB to 6 GiB global memory
Unified Memory
Usually data has to be copied between a CPU host and a discrete GPU device. However, depending on architecture, vendor, framework, and operating system, a unified and accessible address space between CPU and GPU may be offered.
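A hedged CUDA sketch of the two styles (the kernel is a placeholder; unified memory requires driver and OS support):

```cpp
#include <cuda_runtime.h>

__global__ void kernel(float* data, int n);   // placeholder, defined elsewhere

// Discrete GPU: explicit copies between host and device memory.
void explicitCopies(float* host, int n) {
    float* dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);
    kernel<<<(n + 255) / 256, 256>>>(dev, n);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
}

// Unified (managed) memory: one pointer visible to CPU and GPU.
void unifiedMemory(int n) {
    float* data;
    cudaMallocManaged(&data, n * sizeof(float));
    kernel<<<(n + 255) / 256, 256>>>(data, n);
    cudaDeviceSynchronize();                  // before the CPU touches data again
    cudaFree(data);
}
```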
Instruction Throughput
GPUs are used in HPC environments because of their good FLOP/Watt ratio. The instruction throughput in general depends on the architecture (like Nvidia's Tesla, Fermi, Kepler, Maxwell or AMD's TeraScale, GCN, RDNA), the brand (like Nvidia GeForce, Quadro, Tesla or AMD Radeon, Radeon Pro, Radeon Instinct) and the specific model.
Integer Instruction Throughput
- INT32
- Depending on architecture and operation, 32-bit integer throughput can be lower than 32-bit floating-point or 24-bit integer throughput.
- INT64
- In general, the registers and vector ALUs of consumer-brand GPUs are 32-bit wide and have to emulate 64-bit integer operations.
- INT8
- Some architectures offer higher throughput with lower precision, quadrupling INT8 or octupling INT4 throughput compared to INT32 (see the sketch after this list).
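For example, Nvidia GPUs since compute capability 6.1 expose packed INT8 dot products via the __dp4a intrinsic. A minimal sketch (the kernel name dot8 is hypothetical; compile for sm_61 or newer):

```cpp
// Each 32-bit int packs four signed 8-bit values; __dp4a performs four
// multiplies plus an accumulate in a single instruction.
__global__ void dot8(const int* a, const int* b, int* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __dp4a(a[i], b[i], 0);
}
```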
Floating-Point Instruction Throughput
- FP32
- Consumer GPU performance is measured usually in single-precision (32-bit) floating-point FMA (fused-multiply-add) throughput.
- FP64
- Consumer GPUs in general have lower double-precision (64-bit) floating-point throughput relative to FP32 than server-brand GPUs.
- FP16
- Some GPGPU architectures offer half-precision (16-bit) floating-point operation throughput with an FP32:FP16 ratio of 1:2.
Throughput Examples
Nvidia GeForce GTX 580 (Fermi, CC 2.0) - 32-bit integer operations/clock cycle per compute unit [12]
| MAD | MUL | ADD | Bit-shift | Bitwise XOR |
| --- | --- | --- | --- | --- |
| 16 | 16 | 32 | 16 | 32 |
Maximum theoretical ADD operation throughput: 32 Ops x 16 CUs x 1544 MHz = 790.528 GigaOps/sec
AMD Radeon HD 7970 (GCN 1.0) - 32-bit integer operations/clock cycle per processing element [13]
| MAD | MUL | ADD | Bit-shift | Bitwise XOR |
| --- | --- | --- | --- | --- |
| 1/4 | 1/4 | 1 | 1 | 1 |
Maximum theoretical ADD operation throughput: 1 Op x 2048 PEs x 925 MHz = 1894.4 GigaOps/sec
Tensors
MMAC (matrix-multiply-accumulate) units are used in consumer-brand GPUs for neural network based upsampling of video game resolutions, in professional brands for upsampling of images and videos, and in server-brand GPUs for accelerating convolutional neural networks in general. Convolutions can be implemented as a series of matrix multiplications via Winograd transformations [14]. Mobile SoCs usually have a dedicated neural network engine as their MMAC unit.
Nvidia TensorCores
- With the Nvidia Volta series, TensorCores were introduced: FP16xFP16+FP32 matrix-multiply-accumulate units used to accelerate neural networks.[15] Turing's 2nd-gen TensorCores add FP16, INT8, and INT4 optimized computation.[16] Ampere's 3rd gen adds support for BF16, TF32, FP64, and sparsity acceleration.[17] Ada Lovelace's 4th gen adds support for FP8.[18]
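A hedged sketch of Nvidia's CUDA WMMA API (compute capability 7.0 or higher), in which a single warp computes a 16x16x16 FP16 matrix product with FP32 accumulation on TensorCores; the kernel name wmma16 is hypothetical:

```cpp
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

__global__ void wmma16(const half* A, const half* B, float* C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c;
    wmma::fill_fragment(c, 0.0f);          // C := 0
    wmma::load_matrix_sync(a, A, 16);      // leading dimension 16
    wmma::load_matrix_sync(b, B, 16);
    wmma::mma_sync(c, a, b, c);            // C := A*B + C on TensorCores
    wmma::store_matrix_sync(C, c, 16, wmma::mem_row_major);
}
// launch with one warp: wmma16<<<1, 32>>>(dA, dB, dC);
```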
AMD Matrix Cores
- In 2020 AMD released its server-class CDNA architecture with Matrix Cores which support MFMA (matrix-fused-multiply-add) operations on various data types like INT8, FP16, BF16, and FP32. AMD's CDNA 2 architecture adds FP64-optimized throughput for matrix operations. AMD's RDNA 3 architecture features dedicated AI tensor operation acceleration. AMD's CDNA 3 architecture adds support for FP8 and sparse matrix data (sparsity).
Intel XMX Cores
- Intel added XMX, Xe Matrix eXtensions, cores to some of the Intel Xe GPU series, like Arc Alchemist and Intel Data Center GPU Max Series.
Host-Device Latencies
One reason GPUs are not used as accelerators for chess engines is the host-device latency, a.k.a. kernel-launch overhead. Nvidia and AMD have not published official numbers, but in practice there is a measurable latency for null kernels of 5 microseconds [19] up to 100s of microseconds [20]. One solution to overcome this limitation is to couple tasks into batches to be executed in one run [21].
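A sketch of how such numbers can be estimated (illustrative, assuming a CUDA device): timing a stream of null-kernel launches with CUDA events. Because launches are asynchronous, this measures sustained launch overhead rather than a single round trip:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

__global__ void nullKernel() {}            // does nothing; cost is pure overhead

int main() {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    const int iters = 1000;
    cudaEventRecord(start);
    for (int i = 0; i < iters; ++i)
        nullKernel<<<1, 1>>>();            // each launch pays the overhead
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("avg launch overhead: %.1f us\n", 1000.0f * ms / iters);
    return 0;
}
```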
Deep Learning
GPUs are much better suited than CPUs to implement and train Convolutional Neural Networks (CNN), and were therefore also responsible for the deep learning boom. This also affected game-playing programs combining CNN with MCTS, as pioneered by Google DeepMind's AlphaGo and AlphaZero in Go, Shogi, and Chess using TPUs, and by the open source project Leela Zero, headed by Gian-Carlo Pascutto, for Go, with its Leela Chess Zero adaptation.
Architectures
The market is split into two categories, integrated and discrete GPUs, the first being the most important by quantity, the second by performance. Discrete GPUs are divided into consumer brands for playing 3D games, professional brands for CAD/CGI programs, and server brands for big-data and number-crunching workloads. Each brand offers different feature sets in drivers, VRAM, or computation abilities.
AMD
AMD's line of discrete GPUs is branded as Radeon for consumer, Radeon Pro for professional, and Radeon Instinct for server.
CDNA3
The CDNA3 HPC architecture was unveiled in December 2023 with the MI300A APU model (CPU+GPU+HBM) and the MI300X GPU model, both of multi-chip-module design. It features Matrix Cores supporting a broad range of precisions (INT8, FP8, BF16, FP16, TF32, FP32, FP64) as well as sparse matrix data (sparsity), and is supported by AMD's ROCm open software stack for AMD Instinct accelerators.
RDNA3
The RDNA3 architecture in the Radeon RX 7000 series was announced on November 3, 2022, featuring dedicated AI tensor operation acceleration.
CDNA2
CDNA2 architecture in MI200 HPC-GPU with optimized FP64 throughput (matrix and vector), multi-chip-module design and Infinity Fabric was unveiled in November, 2021.
CDNA
CDNA architecture in MI100 HPC-GPU with Matrix Cores was unveiled in November, 2020.
RDNA2
RDNA2 cards were unveiled on October 28, 2020.
RDNA
RDNA cards were unveiled on July 7, 2019.
Vega GCN 5th gen
Vega cards were unveiled on August 14, 2017.
Polaris GCN 4th gen
Polaris cards were first released in 2016.
Southern Islands GCN 1st gen
Southern Islands cards introduced the GCN architecture in 2012.
- AMD Radeon HD 7000 on Wikipedia
- Southern Islands Programming Guide
- Southern Islands Instruction Set Architecture
Apple
M series
Apple released its M series SoC (system on a chip) with integrated GPU for desktops and notebooks in 2020.
ARM
The ARM Mali GPU variants can be found on various systems on chips (SoCs) from different vendors. Since Midgard (2012), with its unified shader model, OpenCL support is offered.
Valhall (2019)
Bifrost (2016)
Midgard (2012)
Intel
Xe
Intel's Xe line of GPUs (released since 2020) is divided into Xe-LP (low-power), Xe-HPG (high-performance gaming), Xe-HP (high-performance), and Xe-HPC (high-performance computing).
Nvidia
Nvidia's line of discrete GPUs is branded as GeForce for consumer, Quadro for professional, and Tesla for server.
Grace Hopper Superchip
The Nvidia GH200 Grace Hopper Superchip was unveiled in August 2023 and combines the Nvidia Grace CPU (ARM v9) and Nvidia Hopper GPU architectures via NVLink to deliver a CPU+GPU coherent memory model for accelerated AI and HPC applications.
Ada Lovelace Architecture
The Ada Lovelace microarchitecture was announced on September 20, 2022, featuring 4th-generation Tensor Cores with FP8, FP16, BF16, TF32 and sparsity acceleration.
Hopper Architecture
The Hopper GPU datacenter microarchitecture was announced on March 22, 2022, featuring Transformer Engines for large language models.
Ampere Architecture
The Ampere microarchitecture was announced on May 14, 2020 [22]. The Nvidia A100 GPU based on the Ampere architecture delivers a generational leap in accelerated computing in conjunction with CUDA 11 [23].
Turing Architecture
Turing cards were first released in 2018. They are the first consumer cards to launch with RTX raytracing features. These are also the first consumer cards to launch with TensorCores, used for matrix multiplications to accelerate convolutional neural networks. The Turing GTX line of chips does not offer RTX or TensorCores.
Volta Architecture
Volta cards were released in 2017. They were the first cards to launch with TensorCores, supporting matrix multiplications to accelerate convolutional neural networks.
Pascal Architecture
Pascal cards were first released in 2016.
Maxwell Architecture
Maxwell cards were first released in 2014.
PowerVR
PowerVR (Imagination Technologies) licenses IP to third parties (most notably Apple) for system on a chip (SoC) designs. Since the Series5 SGX, OpenCL support is available via licensees.
PowerVR
IMG
Qualcomm
Qualcomm offers Adreno GPUs in various types as a component of their Snapdragon SoCs. Since the Adreno 300 series, OpenCL support is offered.
Adreno
Vivante Corporation
Vivante licenses IP to third parties for embedded systems; the GC series offers optional OpenCL support.
GC-Series
See also
- Deep Learning
- FPGA
- Graphics Programming
- Monte-Carlo Tree Search
- Parallel Search
- Perft(15)
- SIMD and SWAR Techniques
- Thread
Publications
1986
- W. Daniel Hillis, Guy L. Steele, Jr. (1986). Data parallel algorithms. Communications of the ACM, Vol. 29, No. 12, Special Issue on Parallelism
2008 ...
- Vlad Stamate (2008). Real Time Photon Mapping Approximation on the GPU. in ShaderX6 - Advanced Rendering Techniques [24]
- Ren Wu, Bin Zhang, Meichun Hsu (2009). Clustering billions of data points using GPUs. ACM International Conference on Computing Frontiers
- Mark Govett, Craig Tierney, Jacques Middlecoff, Tom Henderson (2009). Using Graphical Processing Units (GPUs) for Next Generation Weather and Climate Prediction Models. CAS2K9 Workshop
- Hank Dietz, Bobby Dalton Young (2009). MIMD Interpretation on a GPU. LCPC 2009, pdf, slides.pdf
- Sander van der Maar, Joost Batenburg, Jan Sijbers (2009). Experiences with Cell-BE and GPU for Tomography. SAMOS 2009 [25]
2010...
- Avi Bleiweiss (2010). Playing Zero-Sum Games on the GPU. NVIDIA Corporation, GPU Technology Conference 2010, slides as pdf
- Mark Govett, Jacques Middlecoff, Tom Henderson (2010). Running the NIM Next-Generation Weather Model on GPUs. CCGRID 2010
- John Nickolls, William J. Dally (2010). The GPU Computing Era. IEEE Micro.
2011
- Mark Govett, Jacques Middlecoff, Tom Henderson, Jim Rosinski, Craig Tierney (2011). Parallelization of the NIM Dynamical Core for GPUs. slides as pdf
- Ľubomír Lackovič (2011). Parallel Game Tree Search Using GPU. Institute of Informatics and Software Engineering, Faculty of Informatics and Information Technologies, Slovak University of Technology in Bratislava, pdf
- Dan Anthony Feliciano Alcantara (2011). Efficient Hash Tables on the GPU. Ph. D. thesis, University of California, Davis, pdf » Hash Table
- Damian Sulewski (2011). Large-Scale Parallel State Space Search Utilizing Graphics Processing Units and Solid State Disks. Ph.D. thesis, University of Dortmund, pdf
- Damjan Strnad, Nikola Guid (2011). Parallel Alpha-Beta Algorithm on the GPU. CIT. Journal of Computing and Information Technology, Vol. 19, No. 4 » Parallel Search, Reversi
- Balázs Jákó (2011). Fast Hydraulic and Thermal Erosion on GPU. M.Sc. thesis, Supervisor Balázs Tóth, Eurographics 2011, pdf
2012
- Liang Li, Hong Liu, Peiyu Liu, Taoying Liu, Wei Li, Hao Wang (2012). A Node-based Parallel Game Tree Algorithm Using GPUs. CLUSTER 2012 » Parallel Search
2013
- S. Ali Mirsoleimani, Ali Karami, Farshad Khunjush (2013). A parallel memetic algorithm on GPU to solve the task scheduling problem in heterogeneous environments. GECCO '13
- Ali Karami, S. Ali Mirsoleimani, Farshad Khunjush (2013). A statistical performance prediction model for OpenCL kernels on NVIDIA GPUs. CADS 2013
- Diego Rodríguez-Losada, Pablo San Segundo, Miguel Hernando, Paloma de la Puente, Alberto Valero-Gomez (2013). GPU-Mapping: Robotic Map Building with Graphical Multiprocessors. IEEE Robotics & Automation Magazine, Vol. 20, No. 2, pdf
- David Williams, Valeriu Codreanu, Po Yang, Baoquan Liu, Feng Dong, Burhan Yasar, Babak Mahdian, Alessandro Chiarini, Xia Zhao, Jos Roerdink (2013). Evaluation of Autoparallelization Toolkits for Commodity GPUs. PPAM 2013
2014
- Qingqing Dang, Shengen Yan, Ren Wu (2014). A fast integral image generation algorithm on GPUs. ICPADS 2014
- S. Ali Mirsoleimani, Ali Karami, Farshad Khunjush (2014). A Two-Tier Design Space Exploration Algorithm to Construct a GPU Performance Predictor. ARCS 2014, Lecture Notes in Computer Science, Vol. 8350, Springer
- Steinar H. Gunderson (2014). Movit: High-speed, high-quality video filters on the GPU. FOSDEM 2014, pdf
- Baoquan Liu, Alexandru Telea, Jos Roerdink, Gordon Clapworthy, David Williams, Po Yang, Feng Dong, Valeriu Codreanu, Alessandro Chiarini (2014). Parallel centerline extraction on the GPU. Computers & Graphics, Vol. 41, pdf
2015 ...
- Peter H. Jin, Kurt Keutzer (2015). Convolutional Monte Carlo Rollouts in Go. arXiv:1512.03375 » Deep Learning, Go, MCTS
- Liang Li, Hong Liu, Hao Wang, Taoying Liu, Wei Li (2015). A Parallel Algorithm for Game Tree Search Using GPGPU. IEEE Transactions on Parallel and Distributed Systems, Vol. 26, No. 8 » Parallel Search
- Simon Portegies Zwart, Jeroen Bédorf (2015). Using GPUs to Enable Simulation with Computational Gravitational Dynamics in Astrophysics. IEEE Computer, Vol. 48, No. 11
2016
- Sean Sheen (2016). Astro - A Low-Cost, Low-Power Cluster for CPU-GPU Hybrid Computing using the Jetson TK1. Master's thesis, California Polytechnic State University, pdf [26] [27]
- Jingyue Wu, Artem Belevich, Eli Bendersky, Mark Heffernan, Chris Leary, Jacques Pienaar, Bjarke Roune, Rob Springer, Xuetian Weng, Robert Hundt (2016). gpucc: an open-source GPGPU compiler. CGO 2016
- David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, Demis Hassabis (2016). Mastering the game of Go with deep neural networks and tree search. Nature, Vol. 529 » AlphaGo
- Balázs Jákó (2016). Hardware accelerated hybrid rendering on PowerVR GPUs. [28] IEEE 20th Jubilee International Conference on Intelligent Engineering Systems
- Diogo R. Ferreira, Rui M. Santos (2016). Parallelization of Transition Counting for Process Mining on Multi-core CPUs and GPUs. BPM 2016
- Ole Schütt, Peter Messmer, Jürg Hutter, Joost VandeVondele (2016). GPU Accelerated Sparse Matrix–Matrix Multiplication for Linear Scaling Density Functional Theory. pdf [29]
- Chapter 8 in Ross C. Walker, Andreas W. Götz (2016). Electronic Structure Calculations on Graphics Processing Units: From Quantum Chemistry to Condensed Matter Physics. John Wiley & Sons
2017
- David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. arXiv:1712.01815 » AlphaZero
- Tristan Cazenave (2017). Residual Networks for Computer Go. IEEE Transactions on Computational Intelligence and AI in Games, Vol. PP, No. 99, pdf
- Jayvant Anantpur, Nagendra Gulur Dwarakanath, Shivaram Kalyanakrishnan, Shalabh Bhatnagar, R. Govindarajan (2017). RLWS: A Reinforcement Learning based GPU Warp Scheduler. arXiv:1712.04303
2018
- David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, Vol. 362, No. 6419
Forum Posts
2005 ...
- Hardware assist by Nicolai Czempin, Winboard Forum, August 27, 2006
- Monte carlo on a NVIDIA GPU ? by Marco Costalba, CCC, August 01, 2008
2010 ...
- Using the GPU by Louis Zulli, CCC, February 19, 2010
2011
- GPGPU and computer chess by Wim Sjoho, CCC, February 09, 2011
- Possible Board Presentation and Move Generation for GPUs? by Srdja Matovic, CCC, March 19, 2011
- Re: Possible Board Presentation and Move Generation for GPUs by Steffan Westcott, CCC, March 20, 2011
- Zeta plays chess on a gpu by Srdja Matovic, CCC, June 23, 2011 » Zeta
- GPU Search Methods by Joshua Haglund, CCC, July 04, 2011
2012
- Possible Search Algorithms for GPUs? by Srdja Matovic, CCC, January 07, 2012 [30] [31]
- uct on gpu by Daniel Shawul, CCC, February 24, 2012 » UCT
- Is there such a thing as branchless move generation? by John Hamlen, CCC, June 07, 2012 » Move Generation
- Choosing a GPU platform: AMD and Nvidia by John Hamlen, CCC, June 10, 2012
- Nvidias K20 with Recursion by Srdja Matovic, CCC, December 04, 2012 [32]
2013
- Kogge Stone, Vector Based by Srdja Matovic, CCC, January 22, 2013 » Kogge-Stone Algorithm [33] [34]
- GPU chess engine by Samuel Siltanen, CCC, February 27, 2013
- Fast perft on GPU (upto 20 Billion nps w/o hashing) by Ankan Banerjee, CCC, June 22, 2013 » Perft, Kogge-Stone Algorithm [35]
2015 ...
- GPU chess update, local memory... by Srdja Matovic, CCC, June 06, 2016
- Jetson GPU architecture by Dann Corbit, CCC, October 18, 2016 » Astro
- Pigeon is now running on the GPU by Stuart Riffle, CCC, November 02, 2016 » Pigeon
2017
- Back to the basics, generating moves on gpu in parallel... by Srdja Matovic, CCC, March 05, 2017 » Move Generation
- Re: Perft(15): comparison of estimates with Ankan's result by Ankan Banerjee, CCC, August 26, 2017 » Perft(15)
- Chess Engine and GPU by Fishpov, Rybka Forum, October 09, 2017
- To TPU or not to TPU... by Srdja Matovic, CCC, December 16, 2017 » Deep Learning [36]
2018
- Announcing lczero by Gary, CCC, January 09, 2018 » Leela Chess Zero
- GPU ANN, how to deal with host-device latencies? by Srdja Matovic, CCC, May 06, 2018 » Neural Networks
- GPU contention by Ian Kennedy, CCC, May 07, 2018 » Leela Chess Zero
- How good is the RTX 2080 Ti for Leela? by Hai, September 15, 2018 » Leela Chess Zero [37]
- Re: How good is the RTX 2080 Ti for Leela? by Ankan Banerjee, CCC, September 16, 2018
- My non-OC RTX 2070 is very fast with Lc0 by Kai Laskos, CCC, November 19, 2018 » Leela Chess Zero
- LC0 using 4 x 2080 Ti GPU's on Chess.com tourney? by M. Ansari, CCC, December 28, 2018 » Leela Chess Zero
2019
- Generate EGTB with graphics cards? by Nguyen Pham, CCC, January 01, 2019 » Endgame Tablebases
- LCZero FAQ is missing one important fact by Jouni Uski, CCC, January 01, 2019 » Leela Chess Zero
- Michael Larabel benches lc0 on various GPUs by Warren D. Smith, LCZero Forum, January 14, 2019 » Lc0 [38]
- Using LC0 with one or two GPUs - a guide by Srdja Matovic, CCC, March 30, 2019 » Lc0
- Wouldn't it be nice if C++ GPU by Chris Whittington, CCC, April 25, 2019 » C++
- Lazy-evaluation of futures for parallel work-efficient Alpha-Beta search by Percival Tiglao, CCC, June 06, 2019
- My home-made CUDA kernel for convolutions by Rémi Coulom, Game-AI Forum, November 09, 2019 » Deep Learning
- GPU rumors 2020 by Srdja Matovic, CCC, November 13, 2019
2020 ...
- AB search with NN on GPU... by Srdja Matovic, CCC, August 13, 2020 » Neural Networks [39]
- I stumbled upon this article on the new Nvidia RTX GPUs by Kai Laskos, CCC, September 10, 2020
- Will AMD RDNA2 based Radeon RX 6000 series kick butt with Lc0? by Srdja Matovic, CCC, November 01, 2020
- Zeta with NNUE on GPU? by Srdja Matovic, CCC, March 31, 2021 » Zeta, NNUE
- GPU rumors 2021 by Srdja Matovic, CCC, April 16, 2021
- Comparison of all known Sliding lookup algorithms [CUDA] by Daniel Infuehr, CCC, January 08, 2022 » Sliding Piece Attacks
- Re: China boosts in silicon... by Srdja Matovic, CCC, September 10, 2024
External Links
- Graphics processing unit from Wikipedia
- Video card from Wikipedia
- Heterogeneous System Architecture from Wikipedia
- Tensor processing unit from Wikipedia
- General-purpose computing on graphics processing units (GPGPU) from Wikipedia
- List of AMD graphics processing units from Wikipedia
- List of Intel graphics processing units from Wikipedia
- List of Nvidia graphics processing units from Wikipedia
- NVIDIA Developer
- NVIDIA GPU Programming Guide
OpenCL
- OpenCL from Wikipedia
- Part 1: OpenCL™ – Portable Parallelism - CodeProject
- Part 2: OpenCL™ – Memory Spaces - CodeProject
CUDA
- CUDA from Wikipedia
- CUDA Zone | NVIDIA Developer
- Nvidia CUDA Compiler (NVCC) from Wikipedia
- Compiling CUDA with clang — LLVM Clang documentation
- CppCon 2016: “Bringing Clang and C++ to GPUs: An Open-Source, CUDA-Compatible GPU C++ Compiler" by Justin Lebar, YouTube Video [40]
Deep Learning
- Deep Learning | NVIDIA Developer » Deep Learning
- NVIDIA cuDNN | NVIDIA Developer
- Efficient mapping of the training of Convolutional Neural Networks to a CUDA-based cluster
- Deep Learning in a Nutshell: Core Concepts by Tim Dettmers, Parallel Forall, November 3, 2015
- Deep Learning in a Nutshell: History and Training by Tim Dettmers, Parallel Forall, December 16, 2015
- Deep Learning in a Nutshell: Sequence Learning by Tim Dettmers, Parallel Forall, March 7, 2016
- Deep Learning in a Nutshell: Reinforcement Learning by Tim Dettmers, Parallel Forall, September 8, 2016
- Faster deep learning with GPUs and Theano
- Theano (software) from Wikipedia
- TensorFlow from Wikipedia
Game Programming
- Advanced game programming | Session 5 - GPGPU programming by Andy Thomason
- Leela Zero by Gian-Carlo Pascutto » Leela Zero
Chess Programming
- Chess on a GPGPU
- GPU Chess Blog
- ankan-ban/perft_gpu · GitHub » Perft [41]
- LCZero · GitHub » Leela Chess Zero
- GitHub - StuartRiffle/Jaglavak: Corvid Chess Engine » Jaglavak
- Zeta OpenCL Chess » Zeta
References
- ↑ Image by Mahogny, February 09, 2008, Wikimedia Commons
- ↑ Pytorch NNUE training by Gary Linscott, CCC, November 08, 2020
- ↑ Wikipedia contributors (2024, June 30). General-purpose computing on graphics processing units. In Wikipedia, The Free Encyclopedia. Retrieved 13:27, July 7, 2024
- ↑ Fermi white paper from Nvidia
- ↑ GeForce 500 series on Wikipedia
- ↑ Graphics Core Next on Wikipedia
- ↑ Radeon HD 7000 series on Wikipedia
- ↑ CUDA Technical Specification on Wikipedia
- ↑ AMD GPU Hardware Basics
- ↑ CUDA C Programming Guide v7.0, Appendix G.COMPUTE CAPABILITIES
- ↑ AMD Accelerated Parallel Processing OpenCL Programming Guide rev2.7, Appendix D Device Parameters, Table D.1 Parameters for 7xxx Devices
- ↑ CUDA C Programming Guide v7.0, Chapter 5.4.1. Arithmetic Instructions
- ↑ AMD_OpenCL_Programming_Optimization_Guide.pdf 3.0beta, Chapter 2.7.1 Instruction Bandwidths
- ↑ Re: To TPU or not to TPU... by Rémi Coulom, CCC, December 16, 2017
- ↑ INSIDE VOLTA
- ↑ AnandTech - Nvidia Turing Deep Dive page 6
- ↑ Wikipedia - Ampere microarchitecture
- ↑ Wikipedia - Ada Lovelace microarchitecture
- ↑ host-device latencies? by Srdja Matovic, Nvidia CUDA ZONE, Feb 28, 2019
- ↑ host-device latencies? by Srdja Matovic, AMD Developer Community, Feb 28, 2019
- ↑ Re: GPU ANN, how to deal with host-device latencies? by Milos Stanisavljevic, CCC, May 06, 2018
- ↑ NVIDIA Ampere Architecture In-Depth | NVIDIA Developer Blog by Ronny Krashinsky, Olivier Giroux, Stephen Jones, Nick Stam and Sridhar Ramaswamy, May 14, 2020
- ↑ CUDA 11 Features Revealed | NVIDIA Developer Blog by Pramod Ramarao, May 14, 2020
- ↑ Photon mapping from Wikipedia
- ↑ Cell (microprocessor) from Wikipedia
- ↑ Jetson TK1 Embedded Development Kit | NVIDIA
- ↑ Jetson GPU architecture by Dann Corbit, CCC, October 18, 2016
- ↑ PowerVR from Wikipedia
- ↑ Density functional theory from Wikipedia
- ↑ Yaron Shoham, Sivan Toledo (2002). Parallel Randomized Best-First Minimax Search. Artificial Intelligence, Vol. 137, Nos. 1-2
- ↑ Alberto Maria Segre, Sean Forman, Giovanni Resta, Andrew Wildenberg (2002). Nagging: A Scalable Fault-Tolerant Paradigm for Distributed Search. Artificial Intelligence, Vol. 140, Nos. 1-2
- ↑ Tesla K20 GPU Compute Processor Specifications Released | techPowerUp
- ↑ Parallel Thread Execution from Wikipedia
- ↑ NVIDIA Compute PTX: Parallel Thread Execution, ISA Version 1.4, March 31, 2009, pdf
- ↑ ankan-ban/perft_gpu · GitHub
- ↑ Tensor processing unit from Wikipedia
- ↑ GeForce 20 series from Wikipedia
- ↑ Phoronix Test Suite from Wikipedia
- ↑ kernel launch latency - CUDA / CUDA Programming and Performance - NVIDIA Developer Forums by LukeCuda, June 18, 2018
- ↑ Re: Generate EGTB with graphics cards? by Graham Jones, CCC, January 01, 2019
- ↑ Fast perft on GPU (upto 20 Billion nps w/o hashing) by Ankan Banerjee, CCC, June 22, 2013