Customers around the world rely on Microsoft Azure to drive innovations related to our environment, public health, energy sustainability, weather modeling, economic growth, and more. Finding solutions to these important challenges requires huge amounts of focused computing power. Customers are increasingly finding the best way to access such high-performance computing (HPC) through the agility, scale, security, and leading-edge performance of Azure’s purpose-built HPC and AI cloud services.
Azure’s market-leading vision for HPC and AI is built on a core of genuine, recognized HPC expertise, using proven HPC technology and design principles enhanced with the best features of the cloud. The result is a capability that delivers performance, scale, and value unlike any other cloud. It means applications scaling 12 times higher than on other public clouds. It means higher application performance per node. It means powering AI workloads for one customer with a supercomputer fit to rank among the top five in the world. It also means delivering massive compute power into the hands of medical researchers over a weekend to prove out life-saving innovations in the fight against COVID-19.
This year at NVIDIA GTC 2021, we’re spotlighting some of the most transformational applications powered by NVIDIA accelerated computing, which highlight our commitment to edge, on-premises, and cloud computing. Registration is free, so sign up to learn how Microsoft is powering transformation.
AI and supercomputing scale
The AI and machine learning space continues to be one of the most inspiring areas of technical evolution since the internet. The trend toward using massive AI models to power a large number of tasks is changing how AI is built. At Microsoft Build 2020, we shared our vision for AI at Scale utilizing state-of-the-art AI supercomputing in Azure and a new class of large-scale AI models enabling next-generation AI. The advantage of large-scale models is that they only need to be trained once with massive amounts of data using AI supercomputing, enabling them to then be “fine-tuned” for different tasks and domains with much smaller datasets and resources.
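The pretrain-once, fine-tune-many-times idea described above can be sketched in miniature. The toy code below is purely illustrative and is not Azure’s training stack: a frozen "backbone" function stands in for a large pretrained model, and only a single task-head parameter is updated on a small task-specific dataset.

```python
import math
import random

random.seed(0)

# Frozen "backbone": a stand-in for a large pretrained model whose weights
# are NOT updated during fine-tuning. (Hypothetical, for illustration only.)
def backbone(x):
    return math.tanh(0.7 * x)

# Small task-specific dataset: the "fine-tuning" data. The true task is a
# simple scaling of the backbone's output by 2.5.
xs = [random.uniform(-3, 3) for _ in range(32)]
data = [(x, 2.5 * backbone(x)) for x in xs]

# Fine-tune only the single task-head parameter on the small dataset.
head = 0.0
lr = 0.1
for _ in range(300):
    # Gradient of mean squared error with respect to the head only.
    grad = sum((head * backbone(x) - y) * backbone(x) for x, y in data) / len(data)
    head -= lr * grad

print(round(head, 3))  # converges to roughly 2.5
```

Because the backbone stays fixed, fine-tuning touches only a tiny fraction of the parameters, which is why it needs far less data and compute than the original large-scale training run.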
Training models at this scale requires large clusters of hundreds of machines with specialized AI accelerators interconnected by high-bandwidth networks inside and across the machines. We have been building such clusters in Azure to enable new natural language generation and understanding capabilities across Microsoft products.
The work we have done on large-scale compute clusters, leading network design, and the software stack that manages them, including Azure Machine Learning, ONNX Runtime, and other Azure AI services, is directly aligned with our AI at Scale strategy.
Machine learning at the edge
Microsoft’s intelligent edge portfolio provides solutions that let customers run machine learning not only in the cloud but also at the edge. These solutions include Azure Stack Hub, Azure Stack Edge, and IoT Edge.
Whether you are capturing sensor data and running inference at the edge, or performing end-to-end processing with model training in Azure and deploying the trained models to the edge for enhanced inference, Microsoft can support your needs however and wherever you need.
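The train-in-the-cloud, infer-at-the-edge flow can be sketched as follows. This is a hypothetical stand-in, not the actual Azure IoT Edge workflow: real deployments typically export models in a format such as ONNX and run them with ONNX Runtime inside an edge module, whereas here a tiny linear model is serialized as JSON to show the shape of the handoff.

```python
import json

# --- "Cloud" side: train a model (here, simply hard-coded) and export it
# as a portable artifact. The weights and bias below are made up.
model = {"weights": [0.5, -0.2, 0.1], "bias": 0.3}
artifact = json.dumps(model)  # in practice, pushed down to the edge device

# --- "Edge" side: load the artifact and run inference locally, with no
# round trip to the cloud for each prediction.
loaded = json.loads(artifact)

def infer(sample):
    # Simple linear model: dot product of weights and inputs, plus bias.
    return sum(w * x for w, x in zip(loaded["weights"], sample)) + loaded["bias"]

print(infer([1.0, 2.0, 3.0]))  # 0.5 - 0.4 + 0.3 + 0.3 = 0.7
```

The key design point is that only the compact model artifact crosses the network; raw sensor data can stay on the device, which keeps inference latency low and bandwidth needs small.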
Visualization and GPU workstations
Azure enables a wide range of visualization workloads, which are critical for desktop virtualization as well as professional graphics such as computer-aided design, content creation, and interactive rendering. Visualization workloads on Azure are powered by NVIDIA’s world-class graphics processing units (GPUs) and RTX technology, the world’s preeminent visual computing platform.
With access to graphics workstations in the Azure cloud, artists, designers, and technical professionals can work remotely from anywhere, on any connected device. See our NV-Series virtual machines (VMs) for Windows and Linux.
- We are proud to announce a new high-memory variant coming to our GPU supercomputing portfolio, featuring the latest NVIDIA A100 80GB SXM GPUs built on the same NVIDIA InfiniBand HDR and PCIe Gen4-based building block we offer today, with a few adjustments so that customer workloads can take full advantage of the new chips. Like the A100 40GB GPU instances, these will be available to customers on demand, at massive scale, without any specific commitment. Please fill out this form to request access.
- NVIDIA and Microsoft Azure are raising the bar for XR streaming. Announced today, the NVIDIA CloudXR platform will be available on Azure NCv3 and NCasT4_v3 instances.
Join us at the NVIDIA GTC 2021 conference
Microsoft Azure is sponsoring NVIDIA GTC 2021 conference workshops and training. The NVIDIA Deep Learning Institute (DLI) offers hands-on training in AI, accelerated computing, and accelerated data science to help developers, data scientists, and other professionals solve their most challenging problems. These in-depth workshops are taught by experts in their respective fields, delivering industry-leading technical knowledge to drive breakthrough results for individuals and organizations.
On-demand Microsoft sessions at GTC
Microsoft session recordings will be available on the GTC site starting April 12, 2021. You can find a list of the Microsoft digital sessions, along with corresponding links, in the Microsoft Tech Community blog.