Nvidia announced on Monday (12) a new processor for data centers and artificial intelligence applications. The chip is named “Grace” in honor of Grace Hopper, a pioneer of computing.
According to the company, which has released few technical specifications, the chip will deliver “ten times the performance of today’s fastest servers in the most complex AI and high-performance computing workloads”.
Organizations such as the Swiss National Center for Supercomputing (CSCS) and the Los Alamos National Laboratory in the US have already expressed interest in building machines with the new chip.
The Grace processor will build on a future iteration of the Neoverse architecture from ARM, a company Nvidia is in talks to acquire for $40 billion. The deal is under investigation by the European Union over monopoly concerns, and faces protests from Chinese manufacturers as well as from giants like Microsoft, Qualcomm and Google.
Nvidia already has a presence in the high-performance computing and artificial intelligence markets with its specialized Ampere-family GPUs. But not every task is GPU-bound, and at least one CPU is needed to “orchestrate” running processes. From this point of view, building an optimized CPU makes sense.
Desktop Data Center
Last November, Nvidia announced the DGX Station A100, a workstation that concentrates “the power of a data center” in a single machine, designed for high-performance tasks such as biological simulations and artificial intelligence research.
Equipped with a 64-core AMD processor (probably a Ryzen Threadripper 3990X CPU), the DGX Station has 512 GB of RAM and a 7.68 TB NVMe SSD. But the stars are the four A100 GPUs based on the Ampere architecture, with 80 GB of memory each, for a total of 320 GB.
According to Nvidia, the DGX Station supports Multi-Instance GPU (MIG) technology, which allows the machine’s GPUs to be partitioned and virtualized as up to 28 separate instances (seven per GPU) that can be accessed by multiple users or allocated to parallel processing tasks.
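For readers curious how MIG partitioning works in practice: on MIG-capable GPUs it is managed with Nvidia’s `nvidia-smi` tool. The commands below are a sketch for a single A100, not the DGX Station’s exact configuration; the profile ID for the smallest slice (19 here) and the profile sizes vary by GPU model, so `nvidia-smi mig -lgip` should be consulted first.

```shell
# Enable MIG mode on GPU 0 (requires root; the GPU may need a reset afterwards)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this card supports,
# to confirm the profile ID for the slice size you want
sudo nvidia-smi mig -lgip

# Create seven of the smallest GPU instances on GPU 0
# (profile ID assumed to be 19) and their compute instances in one step (-C)
sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# Verify the resulting MIG devices, which can then be
# assigned to different users or jobs
nvidia-smi -L
```

Repeating the creation step on each of the DGX Station’s four GPUs is what yields the 28 instances the article mentions.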
Source: The Verge