Learn a little bit of AI knowledge every day: Learn about GPGPU and CUDA, and Huawei's CANN is also very powerful!

author:Charlie who loves programming

Today I learned a new concept, GPGPU, to share with you.

General-purpose computing on graphics processing units (GPGPU), also known as GPGPU computing, refers to using a GPU for computations beyond its traditional role in computer graphics — that is, for general-purpose workloads that have historically been handled by the central processing unit (CPU).

Using the GPU for general-purpose computing complements the CPU: the GPU accelerates the parallelizable parts of an application while the rest continues to run on the CPU, and combining the two kinds of processing power yields a faster, higher-performing application overall.

GPU vs. GPGPU

Essentially all modern GPUs are GPGPUs. A GPU is a programmable processor with thousands of processing cores running simultaneously in massive parallelism, each tuned for efficient computation, which enables real-time processing and analysis of large data sets.

The graphics card is used for graphics rendering

While GPUs were originally designed primarily to render images, GPGPUs can now also be programmed to apply that processing power to scientific computing.

If a graphics card supports a framework that exposes it for general-purpose computing, it is a GPGPU. The key distinction is that the GPU is a hardware component, while GPGPU is essentially a software concept: specialized programming and device design make massively parallel processing of non-graphics workloads possible.

What is GPGPU Acceleration?

A graphics processing unit (GPU) is a specialized processor designed for the computationally intensive demands of real-time, high-resolution 3D graphics.

By 2012, GPUs had evolved into highly parallel, multi-core systems that could efficiently process large amounts of data. This design is more efficient than general-purpose central processing units (CPUs) for algorithms that process large blocks of data in parallel, such as cryptographic hash functions, machine learning, and so on.

The architecture of a GPU is different from that of a CPU. CPUs are designed for single-threaded performance, while GPUs are designed to handle a large number of parallel tasks. The parallel architecture of the GPU enables it to execute thousands of threads simultaneously, making it ideal for data processing tasks that require high throughput. In addition, modern GPUs are equipped with specialized hardware to perform tasks such as matrix multiplication, which is a critical operation in many data processing tasks.
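To make the thread model above concrete, here is a minimal sketch of a CUDA kernel; the names `vecAdd`, `a`, `b`, `c`, and `n` are illustrative. Each of the thousands of launched threads computes exactly one output element.

```cuda
#include <cuda_runtime.h>

// Each GPU thread computes one output element; with a large n,
// thousands of these threads execute the body concurrently.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    // Global index of this thread within the whole launch grid.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                  // guard: the grid may overshoot n
        c[i] = a[i] + b[i];
}

// Launch configuration: enough 256-thread blocks to cover n elements.
// vecAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);
```

The launch syntax in the comment shows how the grid/block decomposition is specified at the call site; the CPU would choose the block count to match the data size.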

GPGPU acceleration refers to an accelerated-computing approach in which the computationally intensive portion of an application is offloaded to the GPU while the rest remains on the CPU, providing supercomputing-level parallelism. While the GPU works through the highly parallel calculations, the CPU can run the sequential parts of the program concurrently.

A distinguishing feature of GPGPU designs is bidirectional information transfer between the GPU and the CPU. Data throughput is typically high in both directions, which has a multiplying effect on the speed of heavily used algorithms.

GPGPU pipelines can dramatically increase the speed of data processing, especially for very large datasets or for workloads involving 2D or 3D images. They are used in complex graphics pipelines as well as in scientific computing — particularly in fields with large data sets, such as genome mapping, and wherever 2D or 3D analysis is useful, notably in biomolecular analysis, protein research, and other areas of complex organic chemistry.

How to use GPGPU

Writing GPU-enabled applications requires a parallel computing platform and application programming interface (API) that lets software developers and engineers adapt their applications and map compute-intensive kernels onto the GPU. GPGPU hardware also exposes several memory types in a hierarchy so that programmers can optimize their programs.
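As a hedged sketch of using one level of that memory hierarchy, the following CUDA kernel sums values within a block through on-chip `__shared__` memory; the names `blockSum` and `partial` are illustrative, and the kernel assumes it is launched with 256 threads per block.

```cuda
#include <cuda_runtime.h>

// Illustrative block-level sum using on-chip __shared__ memory, one
// level of the GPU memory hierarchy (registers, shared, global).
// Assumes blockDim.x == 256 (a power of two).
__global__ void blockSum(const float *in, float *out, int n) {
    __shared__ float partial[256];            // fast per-block scratchpad
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    partial[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                          // wait for all loads

    // Tree reduction within the block, halving active threads each step.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            partial[threadIdx.x] += partial[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        out[blockIdx.x] = partial[0];         // one partial sum per block
}
```

Shared memory is far faster than global (off-chip) memory, so staging data in it before the reduction is the kind of hierarchy-aware optimization the paragraph above refers to.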

Any language that allows code running on the CPU to poll the GPU shader for a return value can create a GPGPU framework. Programming standards for parallel computing include OpenCL (vendor independent), OpenACC, OpenMP, and OpenHMPP.

OpenCL stands for Open Computing Language, an open, royalty-free standard defined by the Khronos Group. It provides a cross-platform GPGPU framework that supports data-parallel computing across CPUs, GPUs, and other accelerators, enabling cross-platform parallel programming on supercomputers, cloud servers, personal computers, mobile devices, and embedded platforms.

OpenCL is actively supported on Intel, AMD, Nvidia, and ARM platforms. It is a tool for performing parallel general-purpose computing on GPUs or other compatible hardware accelerators: moving data in, doing the actual work, and retrieving the results.

As of 2016, OpenCL was the dominant open general-purpose GPU computing language.

NVIDIA introduced the Compute Unified Device Architecture (CUDA) in 2006: a software development kit (SDK) and application programming interface (API) for accelerated parallel computing that lets algorithms be written in the C programming language and run on GeForce 8 series and later GPUs. CUDA enables parallel processing by breaking a task down into thousands of smaller "threads" that execute independently.

CUDA API and its runtime: The CUDA API is an extension of the C programming language that adds the ability to specify thread-level parallelism, as well as GPU device-specific operations such as moving data between the CPU and the GPU.
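A minimal host-side sketch of that CPU-GPU data movement might look like the following; the kernel `scale` and the buffer names are illustrative, and error checking is omitted for brevity.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Trivial illustrative kernel: multiply every element by s.
__global__ void scale(float *x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *h = (float *)malloc(bytes);          // host (CPU) buffer
    for (int i = 0; i < n; i++) h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, bytes);                      // device (GPU) buffer
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // CPU -> GPU

    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);       // run on the GPU
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // GPU -> CPU

    printf("h[0] = %f\n", h[0]);                // 2.0 after the round trip
    cudaFree(d);
    free(h);
    return 0;
}
```

The pattern — allocate on the device, copy in, launch, copy out — is the basic shape of nearly every CUDA program, however sophisticated the kernel.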

CUDA is a software layer that provides direct access to the GPU's virtual instruction set and parallel computing elements to execute the compute core. In addition to drivers and runtime kernels, the CUDA platform includes compilers, libraries, and developer tools to help programmers accelerate their applications.

CUDA is designed to work with programming languages such as C, C++, Fortran, and Python. This accessibility makes it easier for parallel programming experts to work with GPU resources, whereas previous APIs, such as Direct3D and OpenGL, required advanced graphics programming skills. CUDA-based GPUs also support programming frameworks such as OpenMP, OpenACC, and OpenCL.

ROCm is AMD's software technology stack for graphics processing unit (GPU) programming, introduced in 2016. It includes HIP (GPU kernel-based programming), OpenMP (directive-based programming) with the Message Passing Interface (MPI), and OpenCL. ROCm spans several domains: general-purpose computing on GPUs (GPGPU), high-performance computing (HPC), and heterogeneous computing.

ROCm runs on AMD graphics cards.

Metal is Apple's low-level graphics programming API for iOS and macOS; it can also be used for general-purpose computing on those devices.

GPGPU in CUDA

The CUDA platform is a software layer that provides direct access to the GPU's virtual instruction set and parallel computing elements in order to execute compute kernels. Designed to work with programming languages such as C, C++, and Fortran, CUDA is easily accessible and does not require advanced graphics programming skills; software developers can use it through CUDA-accelerated libraries and compiler directives. CUDA-enabled devices typically connect to a host CPU, which handles data transfer and kernel launches on the CUDA device.

CUDA-based GPGPU accelerates a wide range of applications, including AI, computational science, image processing, numerical analysis, and deep learning. The CUDA Toolkit includes GPU-accelerated libraries, a compiler, programming guides, API references, and the CUDA runtime.
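As an illustrative sketch of using one of those GPU-accelerated libraries, a SAXPY (`y = alpha*x + y`) call through cuBLAS might look like the following; error checking is omitted and the buffer names are illustrative.

```cuda
#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <stdio.h>

int main(void) {
    const int n = 1024;
    float h_x[1024], h_y[1024];
    for (int i = 0; i < n; i++) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

    float *d_x, *d_y;
    cudaMalloc(&d_x, n * sizeof(float));
    cudaMalloc(&d_y, n * sizeof(float));
    cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    // y = alpha * x + y, computed on the GPU by the library --
    // no hand-written kernel required.
    const float alpha = 3.0f;
    cublasSaxpy(handle, n, &alpha, d_x, 1, d_y, 1);

    cudaMemcpy(h_y, d_y, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", h_y[0]);    // 1*3 + 2 = 5

    cublasDestroy(handle);
    cudaFree(d_x);
    cudaFree(d_y);
    return 0;
}
```

Libraries like cuBLAS are often the easiest entry point to GPGPU: the application keeps its familiar BLAS-style calls while the library maps them onto the GPU.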

In addition to CUDA, is there a Chinese domestic GPGPU framework?

Yes: CANN, a domestic alternative to CUDA, produced by Huawei!

CANN is a heterogeneous computing architecture launched by Huawei for AI scenarios. It supports multiple AI frameworks upward — including MindSpore, PyTorch, and TensorFlow — while interfacing downward with the Ascend AI processors, playing a key role in improving their computing efficiency. It also provides multi-level programming interfaces for diverse application scenarios, allowing users to quickly build AI applications and services on the Ascend platform.
