HPC Top 5 Stories: Dec. 12, 2016

Weekly insights into the world of High Performance Computing

Transcript of HPC Top 5 Stories: Dec. 12, 2016


Missed Supercomputing 16? Catch up on the highlights…


Another way to catch up on the news: follow #HPCMeetsAI and #SC16 on Twitter.


Here are the “Top Five” stories highlighting what’s hot in High Performance Computing.


The Power of the GPU in Scientific Computing, Data Centers, and Deep Learning

The importance of such chips for developing and training new AI algorithms quickly cannot be overstated, according to some AI researchers. "Instead of months, it could be days," Nvidia CEO Jen-Hsun Huang said in a November earnings call, discussing the time required to train a computer to do a new task. "It's essentially like having a time machine."

While Nvidia is primarily associated with video cards that help gamers play the latest first-person shooters at the highest resolution possible, the company has also been focusing on adapting its graphics processing unit chips, or GPUs, to serious scientific computation and data center number crunching.

"In the last 10 years, we’ve actually brought our GPU technology outside of graphics, made it more general purpose," says Ian Buck, vice president and general manager of Nvidia's accelerated computing business unit.

LEARN MORE


Microsoft and Cray Collaborate with NVIDIA Tesla P100s

READ MORE

Cray believes that its work with Microsoft and CSCS may have solved this problem, the long time it takes to train deep neural networks, by applying supercomputing architectures to accelerate the training process.

The three worked together to scale the Microsoft Cognitive Toolkit on a Cray XC50 supercomputer at CSCS nicknamed “Piz Daint”.

According to the supercomputer manufacturer, deep learning problems share algorithmic similarities with applications traditionally run on massively parallel supercomputers. So by optimizing inter-node communication using the Cray XC Aries network and a high-performance MPI library, each training job is said to be able to leverage more compute resources and therefore finish in less time.
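The article doesn't include code, but the general pattern it describes is data-parallel training: each node computes a gradient on its own mini-batch, and the gradients are then combined across nodes with an MPI collective. The sketch below illustrates that pattern only; it is not CNTK's or Cray's implementation, and the model size and the compute_local_gradient stub are hypothetical placeholders.

/*
 * Illustrative sketch: data-parallel SGD where each rank computes a local
 * gradient and the gradients are summed across nodes with MPI_Allreduce.
 * Generic pattern only -- not CNTK's or Cray's actual code.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NPARAMS (1 << 20)   /* hypothetical model size: about 1M parameters */

/* Hypothetical stand-in for the real forward/backward pass on local data. */
static void compute_local_gradient(float *grad, int n, int rank)
{
    for (int i = 0; i < n; ++i)
        grad[i] = (float)rank;   /* dummy values, just to keep the sketch runnable */
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    float *grad = malloc(NPARAMS * sizeof *grad);

    for (int step = 0; step < 10; ++step) {
        compute_local_gradient(grad, NPARAMS, rank);

        /* This collective is the inter-node communication step that a fast
         * interconnect and a tuned MPI library are meant to accelerate. */
        MPI_Allreduce(MPI_IN_PLACE, grad, NPARAMS, MPI_FLOAT,
                      MPI_SUM, MPI_COMM_WORLD);

        for (int i = 0; i < NPARAMS; ++i)
            grad[i] /= (float)nranks;   /* average the gradients */
        /* parameter update omitted for brevity */
    }

    if (rank == 0)
        printf("ran 10 steps on %d ranks\n", nranks);

    free(grad);
    MPI_Finalize();
    return 0;
}

Run with, for example, mpicc sketch.c -o sketch and mpirun -n 8 ./sketch. As node counts grow, the per-step cost is increasingly dominated by the all-reduce, which is why a low-latency network and an optimized MPI library matter for scaling training jobs.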


NVIDIA Training at CU: Deep Learning and OpenACC Programming

READ MORE

NVIDIA, the pioneer of GPU-accelerated computing, and CU Boulder's Research Computing are pleased to host a two-day training event Wednesday, Jan. 11, and Thursday, Jan. 12, with a focus on high-performance computing, deep learning and OpenACC programming.

Days one and two will consist of hands-on tutorials on OpenACC and deep learning, respectively. On the second day, in lieu of attending the deep learning session, select participants may engage in an OpenACC hackathon. 

Why attend? NVIDIA GPUs are the world’s fastest and most efficient accelerators. This workshop will teach attendees how to accelerate applications across a diverse set of domains using OpenACC and demonstrate use of GPUs for deep learning.
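For readers who haven't seen OpenACC, the minimal sketch below shows the basic idea the tutorial covers: a single directive asks the compiler to offload a loop to the GPU, without rewriting the code in a GPU-specific language. This is a generic SAXPY example, not material from the CU Boulder workshop; compile with an OpenACC-capable compiler (for example, pgcc -acc).

/* Minimal, illustrative OpenACC example: offload a SAXPY loop to the GPU. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N], y[N];
    const float a = 2.0f;

    for (int i = 0; i < N; ++i) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    /* The directive asks the compiler to parallelize the loop on the
     * accelerator, copying x to the device and y both ways. */
    #pragma acc parallel loop copyin(x) copy(y)
    for (int i = 0; i < N; ++i)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);   /* expect 4.0 */
    return 0;
}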


GPUs & Deep Learning in the Spotlight for NVIDIA at SC16

Deep learning is the fastest-growing field in artificial intelligence, helping computers make sense of vast amounts of data in the form of images, sound, and text. Using multiple levels of neural networks, computers now have the capacity to see, learn, and react to complex situations as well as or better than humans. This is leading to a profoundly different way of thinking about your data, your technology, and the products and services you deliver.

In this video from SC16, Roy Kim from NVIDIA describes how the company is bringing in a new age of AI with accelerated computing for Deep Learning applications. “Come join NVIDIA at SC16 to learn how AI supercomputing is breaking open a world of limitless possibilities. This is an era of multigenerational discoveries taking place in a single lifetime. See how other leaders in the field are advancing computational science across domains, get free hands-on training with the newest GPU-accelerated solutions, and connect with NVIDIA experts.”

READ MORE


HPE Apollo 6500 for Deep Learning Contains NVIDIA Tesla K80s

In this video from SC16, Greg Schmidt from Hewlett Packard Enterprise describes how the HPE Apollo 6500 high density GPU server is ideal for Deep Learning applications.

“With up to eight high performance NVIDIA GPUs designed for maximum transfer bandwidth, the HPE Apollo 6500 is purpose-built for HPC and deep learning applications. Its high ratio of GPUs to CPUs, dense 4U form factor and efficient design enable organizations to run deep learning recommendation algorithms faster and more efficiently, significantly reducing model training time and accelerating the delivery of real-time results, all while controlling costs.”

READ MORE

Stay tuned for weekly HPC Top 5 Stories