
How to Understand the Intelligent Computing Center

An intelligent computing center is a data center focused on artificial intelligence computing tasks.
 
Data centers generally fall into three categories. Besides intelligent computing centers, the other two are general computing centers, which focus on general-purpose computing tasks, and supercomputing centers, which focus on supercomputing tasks.
 
Since 2023, AIGC models represented by ChatGPT and Sora have emerged, sparking a global wave of AI.
 
To gain an advantage in this AI wave, strong AI computing power is essential. As the core infrastructure for AI computing power, the intelligent computing center has gradually become a focus of attention and a key construction target in the industry.
 
According to available data, more than 20 Chinese cities, including Wuhan, Chengdu, Changsha, Nanjing, and Hohhot, have already built intelligent computing centers, and the number of domestic intelligent computing centers is expected to exceed 50 by 2025.
 
These intelligent computing centers use specialized AI computing hardware suited to running AI algorithms efficiently. They can be applied in fields such as computer vision, natural language processing, and machine learning, handling tasks such as image recognition, speech recognition, text analysis, and model training and inference.
 
What makes intelligent computing servers different?
The intelligent computing server is the main computing hardware of the intelligent computing center. The biggest difference between it and a traditional general-purpose server lies in the computing chips.
 
Traditional general-purpose servers use the CPU as the main chip. Some are equipped with GPU (Graphics Processing Unit) cards and some are not; even when they are, the quantity is small (1-2 cards), and the GPUs are mainly used for traditional graphics tasks such as 3D rendering.
 
The intelligent computing server is also equipped with CPUs to run the operating system. However, to better handle AI computing tasks, it is configured with more computing chips such as GPUs, NPUs (Neural Processing Units), and TPUs (Tensor Processing Units), typically 4 or 8 cards, and the computing power delivered by these chips is the main focus.
 
This "CPU+GPU" and "CPU+NPU" architecture, also known as the "heterogeneous computing" architecture, can fully leverage the advantages of different computing chips in terms of performance, cost, and energy consumption.
 
GPUs, NPUs, and TPUs have a large number of cores and excel at parallel computing. AI algorithms involve huge numbers of simple matrix operations, which demand exactly this kind of massive parallelism.
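To make this concrete, here is a minimal sketch of the "CPU + GPU" heterogeneous pattern: the CPU orchestrates the work while the GPU runs the parallel matrix multiplication. It assumes PyTorch with CUDA support is installed; the matrix sizes are purely illustrative.

```python
# Minimal sketch of heterogeneous "CPU + GPU" computing.
# Assumes PyTorch with CUDA support; matrix sizes are illustrative only.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# Matrix multiplication: the simple, massively parallel operation
# that AI workloads are largely built from.
t0 = time.time()
c_cpu = a @ b
print(f"CPU matmul: {time.time() - t0:.3f} s")

if torch.cuda.is_available():
    # Move the data to the GPU and run the same operation across thousands of cores.
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    t0 = time.time()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
    print(f"GPU matmul: {time.time() - t0:.3f} s")
```

On typical hardware the GPU version finishes an order of magnitude (or more) faster, which is why AI servers stack as many such accelerator cards as possible.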
 
In practice, GPUs, NPUs, and TPUs are packaged as boards (cards) and inserted into the slots of the intelligent computing server. After the server is powered on, they execute computing tasks as scheduled.
 
In addition to the chips, AI servers have also been strengthened in architecture, storage, heat dissipation, topology, and other aspects, in order to fully unleash performance and ensure stable operation.
 
For example, the DRAM capacity of an intelligent computing server is usually about 8 times that of a regular server, and its NAND capacity about 3 times. Even its PCB has significantly more layers than that of a traditional server.
 
Such aggressive stacking of components inevitably creates a large cost gap between the two: an intelligent computing server can cost tens of times more than a traditional general-purpose server.
 
Not long ago, China Mobile announced the bidding results for its 2024-2025 centralized procurement of new intelligent computing centers: a total of 8,054 intelligent computing servers, with a total bid amount of approximately 19.104 billion yuan (excluding tax). That works out to an average of about 2.372 million yuan per unit. By comparison, the price of a general-purpose server varies by brand and configuration, ranging roughly from 10,000 to 100,000 yuan.
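The per-unit average can be checked directly from the two procurement figures quoted above; a quick back-of-the-envelope calculation:

```python
# Rough check of the per-unit price implied by the China Mobile procurement
# figures quoted above (numbers taken from the article, tax excluded).
total_bid_yuan = 19.104e9   # ~19.104 billion yuan
servers = 8054
print(f"Average price per server: {total_bid_yuan / servers:,.0f} yuan")
# -> roughly 2,372,000 yuan, i.e. about 2.372 million yuan per unit
```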
Because of these computing power boards, the power consumption of intelligent computing servers is also significantly higher than that of general-purpose servers.
 
Taking Nvidia GPUs as an example, the A100 has a per-card power consumption of 400 W, and the H100 700 W. An intelligent computing server equipped with 8 GPUs therefore carries a thermal load of 3.2-5.6 kW from the GPUs alone. A traditional general-purpose server, by contrast, draws only about 0.3-0.5 kW in total.
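The 3.2-5.6 kW figure follows directly from multiplying the per-card power draw by the card count; a short sketch of that arithmetic, using the TDP values quoted above:

```python
# Back-of-the-envelope GPU thermal load for an 8-GPU server, using the
# per-card power figures quoted above (A100 ~400 W, H100 ~700 W).
gpus_per_server = 8
for model, tdp_w in {"A100": 400, "H100": 700}.items():
    total_kw = gpus_per_server * tdp_w / 1000
    print(f"{model}: {gpus_per_server} x {tdp_w} W = {total_kw:.1f} kW from GPUs alone")
# -> 3.2 kW (A100) and 5.6 kW (H100), versus ~0.3-0.5 kW for a whole general-purpose server
```

This gap is why intelligent computing servers also need much stronger cooling and power delivery than ordinary servers.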
Externally, there is not much difference between an intelligent computing server and a general-purpose server. Both use standard form factors and fit into a standard 42U rack. With more AI computing boards inside, an intelligent computing server may be somewhat taller, reaching 4U, 5U, or even 10U.
 
It should be noted that intelligent computing servers can also be divided by workload into training servers, inference servers, and integrated training-and-inference servers. These may differ somewhat in architecture and size; generally speaking, a training server is larger than an inference server, since it carries more AI computing boards.
