AMD Unveils Radeon Instinct MI60, MI50 Accelerators

SAN FRANCISCO, CA, Nov 7, 2018 – AMD announced the AMD Radeon Instinct MI60 and MI50 accelerators, the world’s first 7nm datacenter GPUs, designed to deliver the compute performance required for next-generation deep learning, HPC, cloud computing and rendering applications. Researchers, scientists and developers will use AMD Radeon Instinct accelerators to solve tough and interesting challenges, including large-scale simulations, climate change, computational biology, disease prevention and more.

AMD Radeon Instinct MI60 accelerator

“Legacy GPU architectures limit IT managers from effectively addressing the constantly evolving demands of processing and analyzing huge datasets for modern cloud datacenter workloads,” said David Wang, senior vice president of engineering, Radeon Technologies Group at AMD. “Combining world-class performance and a flexible architecture with a robust software platform and the industry’s leading-edge ROCm open software ecosystem, the new AMD Radeon Instinct accelerators provide the critical components needed to solve the most difficult cloud computing challenges today and into the future.”

The AMD Radeon Instinct MI60 and MI50 accelerators feature flexible mixed-precision capabilities, powered by high-performance compute units that expand the types of workloads these accelerators can address, including a range of HPC and deep learning applications. The new AMD Radeon Instinct MI60 and MI50 accelerators were designed to efficiently process workloads such as rapidly training complex neural networks, delivering higher levels of floating-point performance, greater efficiencies and new features for datacenter and departmental deployments¹.

The AMD Radeon Instinct MI60 and MI50 accelerators provide ultra-fast floating-point performance and hyper-fast HBM2 (second-generation High-Bandwidth Memory) with up to 1 TB/s memory bandwidth speeds. They are also the first GPUs capable of supporting next-generation PCIe 4.0² interconnect, which is up to 2X faster than other x86 CPU-to-GPU interconnect technologies³, and feature AMD Infinity Fabric Link GPU interconnect technology that enables GPU-to-GPU communications that are up to 6X faster than PCIe Gen 3 interconnect speeds⁴.
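As a rough check on the 1 TB/s figure, the sketch below multiplies an assumed 4,096-bit HBM2 interface (four stacks) by an assumed 2.0 Gbps per-pin data rate; both parameters are assumptions for illustration rather than figures taken from this announcement.

```python
# Minimal sketch: peak HBM2 bandwidth from assumed interface parameters.
# Assumptions (not stated in this announcement): four HBM2 stacks with a
# 1,024-bit interface each (4,096 bits total) running at 2.0 Gbps per pin.
bus_width_bits = 4 * 1024     # assumed total HBM2 bus width in bits
pin_rate_gbps = 2.0           # assumed per-pin data rate in Gbps

peak_bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8  # bits -> bytes
print(f"Peak HBM2 bandwidth: {peak_bandwidth_gbs:.0f} GB/s")  # ~1024 GB/s, i.e. about 1 TB/s
```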

AMD also announced a new version of the ROCm open software platform for accelerated computing that supports the architectural features of the new accelerators, including optimized deep learning operations (DLOPS) and the AMD Infinity Fabric Link GPU interconnect technology. Designed for scale, ROCm allows customers to deploy high-performance, energy-efficient heterogeneous computing systems in an open environment.

“Google believes that open source is good for everyone,” said Rajat Monga, engineering director, TensorFlow, Google. “We’ve seen how helpful it can be to open source machine learning technology, and we’re glad to see AMD embracing it. With the ROCm open software platform, TensorFlow users will benefit from GPU acceleration and a more robust open source machine learning ecosystem.”

Key features of the AMD Radeon Instinct MI60 and MI50 accelerators include:

  • Optimized Deep Learning Operations: Provides flexible mixed-precision FP16, FP32 and INT4/INT8 capabilities to meet growing demand for dynamic and ever-changing workloads, from training complex neural networks to running inference against those trained networks.
  • World’s Fastest Double Precision PCIe² Accelerator⁵: The AMD Radeon Instinct MI60 is the world’s fastest double precision PCIe 4.0 capable accelerator, delivering up to 7.4 TFLOPS peak FP64 performance⁵, allowing scientists and researchers to more efficiently process HPC applications across a range of industries including life sciences, energy, finance, automotive, aerospace, academics, government, defense and more. The AMD Radeon Instinct MI50 delivers up to 6.7 TFLOPS FP64 peak performance¹, while providing an efficient, cost-effective solution for a variety of deep learning workloads, as well as enabling high reuse in Virtual Desktop Infrastructure (VDI), Desktop-as-a-Service (DaaS) and cloud environments.
  • Up to 6X Faster Data Transfer: Two Infinity Fabric Links per GPU deliver up to 200 GB/s of peer-to-peer bandwidth – up to 6X faster than PCIe 3.0 alone⁴ – and enable the connection of up to 4 GPUs in a hive ring configuration (2 hives in 8 GPU servers).
  • Ultra-Fast HBM2 Memory: The AMD Radeon Instinct MI60 provides 32GB of HBM2 error-correcting code (ECC) memory⁶, and the Radeon Instinct MI50 provides 16GB of HBM2 ECC memory. Both GPUs provide full-chip ECC and Reliability, Availability and Serviceability (RAS)⁷ technologies, which are critical for delivering more accurate compute results in large-scale HPC deployments.
  • Secure Virtualized Workload Support: AMD MxGPU Technology, the industry’s only hardware-based GPU virtualization solution, which is based on the industry-standard SR-IOV (Single Root I/O Virtualization) technology, makes it difficult for hackers to attack at the hardware level, helping provide security for virtualized cloud deployments.

Updated ROCm Open Software Platform

AMD today also announced a new version of its ROCm open software platform designed to speed development of high-performance, energy-efficient heterogeneous computing systems. In addition to support for the new Radeon Instinct accelerators, ROCm software version 2.0 provides updated math libraries for the new DLOPS; support for 64-bit Linux operating systems including CentOS, RHEL and Ubuntu; optimizations of existing components; and support for the latest versions of the most popular deep learning frameworks, including TensorFlow 1.11, PyTorch (Caffe2) and others. Learn more about ROCm 2.0 software here.
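For readers who want to confirm that a ROCm-enabled TensorFlow build actually sees the GPU, a minimal sanity check along the following lines should work; it uses only standard TensorFlow 1.x APIs and assumes the tensorflow-rocm package is installed on a supported system.

```python
# Minimal sketch: verify a ROCm-enabled TensorFlow 1.x build can see a GPU.
# Assumes the tensorflow-rocm package is installed on a system with a
# supported Radeon Instinct GPU; the APIs used are standard TF 1.x calls.
import tensorflow as tf

print("GPU available:", tf.test.is_gpu_available())

# Run a small FP16 matrix multiply with device placement logged, so the
# console output shows whether the op was placed on the GPU.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    a = tf.random_normal([1024, 1024], dtype=tf.float16)
    b = tf.random_normal([1024, 1024], dtype=tf.float16)
    sess.run(tf.matmul(a, b))
```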

Availability 

The AMD Radeon Instinct MI60 accelerator is expected to ship to datacenter customers by the end of 2018. The AMD Radeon Instinct MI50 accelerator is expected to begin shipping to datacenter customers by the end of Q1 2019. The ROCm 2.0 open software platform is expected to be available by the end of 2018.

Supporting Resources

  • Visit the AMD Next Horizon event webpage to get the event materials
  • Learn more about AMD Radeon Instinct MI60 and MI50 accelerators
  • Learn more about AMD 7nm technology here
  • Learn more about the ROCm 2.0 open software platform here
  • Learn more about ROCm & MIOpen Docker Hub here
  • Become a fan of AMD on Facebook
  • Follow AMD Radeon Instinct on Twitter

About AMD

For more than 45 years AMD has driven innovation in high-performance computing, graphics and visualization technologies ― the building blocks for gaming, immersive platforms and the datacenter. Hundreds of millions of consumers, leading Fortune 500 businesses and cutting-edge scientific research facilities around the world rely on AMD technology daily to improve how they live, work and play. AMD employees around the world are focused on building great products that push the boundaries of what is possible.

For more information about how AMD is enabling today and inspiring tomorrow, visit www.amd.com.

 

1 As of Oct 22, 2018. The results calculated for the Radeon Instinct MI60, designed with Vega 7nm FinFET process technology, resulted in 29.5 TFLOPS half precision (FP16), 14.8 TFLOPS single precision (FP32) and 7.4 TFLOPS double precision (FP64) peak theoretical floating-point performance. This performance is achieved with an increased transistor count of 13.2 billion on a 331.46mm² die, smaller than previous-generation MI25 GPU products, within the same 300W power envelope.

The results calculated for the Radeon Instinct MI50, designed with Vega 7nm FinFET process technology, resulted in 26.8 TFLOPS peak half precision (FP16), 13.4 TFLOPS peak single precision (FP32) and 6.7 TFLOPS peak double precision (FP64) floating-point performance. This performance is achieved with an increased transistor count of 13.2 billion on a 331.46mm² die, smaller than previous-generation MI25 GPU products, within the same 300W power envelope.

The results calculated for the Radeon Instinct MI25 GPU, based on the “Vega10” architecture, resulted in 24.6 TFLOPS peak half precision (FP16), 12.3 TFLOPS peak single precision (FP32) and 768 GFLOPS peak double precision (FP64) floating-point performance. This performance is achieved with a transistor count of 12.5 billion on a die size of 494.8mm² within a 300W power envelope.

AMD TFLOPS calculations conducted with the following equation for Radeon Instinct MI25, MI50, and MI60 GPUs: FLOPS calculations are performed by taking the engine clock from the highest DPM state and multiplying it by xx CUs per GPU, then by the xx stream processors in each CU, and then by 2 FLOPS per clock for FP32 or 4 FLOPS per clock for FP16. To calculate the FP64 TFLOPS rate, a 1/2 rate (relative to FP32) is used for the Vega 7nm products MI50 and MI60, and a 1/16th rate is used for the “Vega10” architecture-based MI25.
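The methodology above can be written as a short function; the CU count, stream processors per CU, and peak engine clock used in the example are illustrative placeholders (the text above leaves them as “xx”), not values taken from this footnote.

```python
# Sketch of the peak-TFLOPS methodology described above. The CU count,
# stream processors per CU, and engine clock below are illustrative
# assumptions only; the footnote leaves the actual values as "xx".
def peak_tflops(cus, sps_per_cu, clock_ghz, flops_per_clock):
    """Peak theoretical TFLOPS = CUs * SPs per CU * clock (GHz) * FLOPS per clock / 1000."""
    return cus * sps_per_cu * clock_ghz * flops_per_clock / 1000.0

cus, sps, clk = 64, 64, 1.8   # assumed example values, not from this document
print("FP16:", peak_tflops(cus, sps, clk, 4))  # 4 FLOPS per clock for FP16
print("FP32:", peak_tflops(cus, sps, clk, 2))  # 2 FLOPS per clock for FP32
print("FP64:", peak_tflops(cus, sps, clk, 1))  # 1/2 the FP32 rate -> 1 FLOP per clock
```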

TFLOP calculations for MI50 and MI60 GPUs can be found at https://www.amd.com/en/products/professional-graphics/instinct-mi50 and https://www.amd.com/en/products/professional-graphics/instinct-mi60

TFLOPS per Watt (at a 300W power envelope):

       MI25    MI50    MI60
FP16   0.082   0.089   0.098
FP32   0.041   0.045   0.049
FP64   0.003   0.022   0.025
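The per-watt figures above follow directly from the peak TFLOPS values in footnote 1 divided by the stated 300W power envelope; a minimal check:

```python
# Per-watt figures derived from the peak TFLOPS values in footnote 1 and
# the stated 300W power envelope for each card.
peak_tflops = {
    "MI25": {"FP16": 24.6, "FP32": 12.3, "FP64": 0.768},
    "MI50": {"FP16": 26.8, "FP32": 13.4, "FP64": 6.7},
    "MI60": {"FP16": 29.5, "FP32": 14.8, "FP64": 7.4},
}
power_watts = 300

for gpu, precisions in peak_tflops.items():
    for precision, tflops in precisions.items():
        print(f"{gpu} {precision}: {tflops / power_watts:.3f} TFLOPS per Watt")
```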

Industry supporting documents / web pages:
http://www.tsmc.com/english/dedicatedFoundry/technology/7nm.htm 
https://www.globalfoundries.com/sites/default/files/product-briefs/product-brief-7lp-7nm-finfet-technology.pdf

AMD has not independently tested or verified external/third party results/data and bears no responsibility for any errors or omissions therein.
RIV-2

2 Pending

3 As of October 22, 2018. Radeon Instinct MI50 and MI60 “Vega 7nm” technology-based accelerators are PCIe Gen 4.0 capable, providing up to 64 GB/s peak bandwidth per GPU card with PCIe Gen 4.0 x16 certified servers. Peak theoretical transport rate performance guidelines are estimated only and may vary. Previous Gen Radeon Instinct compute GPU cards are based on PCIe Gen 3.0, providing up to 32 GB/s peak theoretical transport rate bandwidth performance.

Peak theoretical transport rate performance is calculated by Baud Rate * width in bytes * # directions = GB/s  
PCIe Gen 3: 8 * 2 * 2 = 32 GB/s
PCIe Gen 4: 16 * 2 * 2 = 64 GB/s
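The same peak transport-rate formula can be written out as a quick check; nothing below goes beyond the equation stated above.

```python
# Peak theoretical transport rate = baud rate (GT/s) * link width in bytes
# (a x16 link moves 2 bytes per transfer) * number of directions.
def peak_transport_gbs(baud_rate_gt_s, width_bytes=2, directions=2):
    return baud_rate_gt_s * width_bytes * directions

print("PCIe Gen 3 x16:", peak_transport_gbs(8), "GB/s")   # 32 GB/s
print("PCIe Gen 4 x16:", peak_transport_gbs(16), "GB/s")  # 64 GB/s
```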

Refer to the server manufacturer’s PCIe Gen 4.0 compatibility and performance guidelines for potential peak performance of the specified server models. Server manufacturers may vary configuration offerings, yielding different results.

https://pcisig.com/ 
https://www.chipestimate.com/PCI-Express-Gen-4-a-Big-Pipe-for-Big-Data/Cadence/Technical-Article/2014/04/15 
https://www.tomshardware.com/news/pcie-4.0-power-speed-express,32525.html

AMD has not independently tested or verified external/third party results/data and bears no responsibility for any errors or omissions therein.
RIV-5

4 As of Oct 22, 2018. Radeon Instinct™ MI50 and MI60 “Vega 7nm” technology-based accelerators are PCIe® Gen 4.0 capable, providing up to 64 GB/s peak theoretical transport data bandwidth from CPU to GPU per card with PCIe Gen 4.0 x16 certified servers.
Previous Gen Radeon Instinct compute GPU cards are based on PCIe Gen 3.0 providing up to 32 GB/s peak theoretical transport rate bandwidth performance.

Peak theoretical transport rate performance is calculated by Baud Rate * width in bytes * # directions = GB/s per card
PCIe Gen3: 8 * 2 * 2 = 32 GB/s
PCIe Gen4: 16 * 2 * 2 = 64 GB/s
Vega20 to Vega20 xGMI = 25 * 2 * 2 = 100 GB/s * 2 links per GPU = 200 GB/s

xGMI (also known as Infinity Fabric Link) vs. PCIe Gen3: 200/32 = 6.25x

Radeon Instinct™ MI50 and MI60 “Vega 7nm” technology-based accelerators include dual Infinity Fabric™ Links providing up to 200 GB/s peak theoretical GPU-to-GPU or Peer-to-Peer (P2P) transport rate bandwidth performance per GPU card. Combined with PCIe Gen 4 compatibility, this provides an aggregate GPU card I/O peak bandwidth of up to 264 GB/s.

Performance guidelines are estimated only and may vary. Previous Gen Radeon Instinct compute GPU cards provide up to 32 GB/s peak PCIe Gen 3.0 bandwidth performance.

Infinity Fabric Link technology peak theoretical transport rate performance is calculated by Baud Rate * width in bytes * # directions * # links = GB/s per card

Infinity Fabric Link: 25 * 2 * 2 = 100 GB/s

MI50 and MI60 each have two links:
100 GB/s * 2 links per GPU = 200 GB/s
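Putting the figures in this footnote together, the 6.25x comparison and the 264 GB/s aggregate card I/O fall out of the same formula:

```python
# Combines the Infinity Fabric Link (xGMI) and PCIe figures from this footnote:
# peak rate = baud rate (GT/s) * width in bytes * directions * links.
def peak_transport_gbs(baud_rate_gt_s, width_bytes=2, directions=2, links=1):
    return baud_rate_gt_s * width_bytes * directions * links

xgmi_per_card = peak_transport_gbs(25, links=2)  # 25 * 2 * 2 * 2 = 200 GB/s
pcie_gen3 = peak_transport_gbs(8)                # 32 GB/s
pcie_gen4 = peak_transport_gbs(16)               # 64 GB/s

print("Infinity Fabric Link per card:", xgmi_per_card, "GB/s")
print("Speedup vs PCIe Gen 3:", xgmi_per_card / pcie_gen3, "x")          # 6.25x
print("Aggregate card I/O (xGMI + PCIe Gen 4):", xgmi_per_card + pcie_gen4, "GB/s")  # 264 GB/s
```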

Refer to the server manufacturer’s PCIe Gen 4.0 compatibility and performance guidelines for potential peak performance of the specified server model numbers. Server manufacturers may vary configuration offerings, yielding different results.
https://pcisig.com/ 
https://www.chipestimate.com/PCI-Express-Gen-4-a-Big-Pipe-for-Big-Data/Cadence/Technical-Article/2014/04/15 
https://www.tomshardware.com/news/pcie-4.0-power-speed-express,32525.html

AMD has not independently tested or verified external/third party results/data and bears no responsibility for any errors or omissions therein.
RIV-4

5 Calculated on Oct 22, 2018, the Radeon Instinct MI60 GPU resulted in 7.4 TFLOPS peak theoretical double precision floating-point (FP64) performance. AMD TFLOPS calculations conducted with the following equation: FLOPS calculations are performed by taking the engine clock from the highest DPM state and multiplying it by xx CUs per GPU, then by the xx stream processors in each CU, and then by the 1/2 rate (relative to FP32) used for FP64. TFLOP calculations for MI60 can be found at https://www.amd.com/en/products/professional-graphics/instinct-mi60. External results on the NVIDIA Tesla V100 (16GB card) GPU accelerator resulted in 7 TFLOPS peak double precision (FP64) floating-point performance. Results found at: https://images.nvidia.com/content/technologies/volta/pdf/437317-Volta-V100-DS-NV-US-WEB.pdf. AMD has not independently tested or verified external/third party results/data and bears no responsibility for any errors or omissions therein.

6 ECC support on 2nd Gen Radeon Instinct GPU cards, based on the “Vega 7nm” technology, has been extended to full-chip ECC, including HBM2 memory and internal GPU structures.

7 Expanded RAS (Reliability, Availability and Serviceability) attributes have been added to AMD’s 2nd Gen Radeon Instinct “Vega 7nm” technology-based GPU cards and their supporting ecosystem, including software, firmware and system-level features. AMD’s remote manageability capabilities using advanced out-of-band circuitry allow for easier GPU monitoring via I2C, regardless of the GPU state. For full system RAS capabilities, refer to the system manufacturer’s guidelines for specific system models.
