
HPC Newsletter
New Initiatives and Products from SC15
HPC: The Next Chapter
This year, SC15 felt as if HPC were entering a new era. As additional industry sectors and research communities embrace high-performance computing, the buzz continues to grow. Although there were fewer major vendor announcements than in recent years, the show was extremely busy and welcomed more attendees than ever before.

HPC matters, HPC transforms and HPC can do more work for you!

-Eliot Eshelman
Vice President, Strategic Accounts & HPC Initiatives

 
NVIDIA Acceleration: The Future of HPC
A new NVIDIA Tesla M40 GPU Accelerator, optimized for single-precision workloads and delivering up to 7 TFLOPS, is now available. This GPU will be particularly effective for machine learning applications; applications that require both single- and double-precision floating point should continue using the Tesla K80 GPU.
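
If you are weighing these accelerators for an existing system, one quick way to check which GPUs (and which architecture generation) a node contains is to query the CUDA runtime. Below is a minimal C sketch of our own (not NVIDIA sample code); the Tesla M40 reports compute capability 5.2 (Maxwell), while the Tesla K80 reports 3.7 (Kepler):

    /* List the CUDA devices in a system along with their compute capability.
     * Compile with nvcc, or with a host compiler linked against -lcudart. */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaGetDeviceCount(&count);

        for (int dev = 0; dev < count; dev++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            printf("GPU %d: %s (compute capability %d.%d, %d SMs, %.1f GB)\n",
                   dev, prop.name, prop.major, prop.minor,
                   prop.multiProcessorCount,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }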

In the latest Top500 list, one-third of the compute power is provided by NVIDIA GPUs. HPC clusters with accelerators are demonstrably more effective. Take advantage of our proven expertise when you design your next HPC resource - Microway is an NVIDIA Elite Solution Provider.

 
Deep Learning with Microway
Interest in deep learning increases alongside the growing demand to analyze big data in real time. The technology spans industries, from auto manufacturing to social media to life sciences and research. Stay up to date with frameworks, tutorials, software and products with our HPC Tech Tips Blog.

Keras and Theano Deep Learning Frameworks

Caffe Deep Learning Tutorial using NVIDIA DIGITS on Tesla K80 & K40 GPUs


Contact us to set up a Deep Learning Test Drive on our cluster equipped with NVIDIA Tesla M40 or NVIDIA Tesla K80 GPU Accelerators.
OpenPOWER Gains Momentum Within The HPC Community
The OpenPOWER Foundation continues to add members, and several new pieces of OpenPOWER hardware were on display at SC15. With the U.S. government already signed up to receive two large HPC clusters built with IBM POWER CPUs and NVIDIA Tesla GPUs, we expect to see more OpenPOWER growth and development in 2016.
Intel® Omni-Path Interconnect
The official launch of Intel's new Omni-Path interconnect took place at the start of the show. It promises high throughput, low latency and a 48-port switch which changes the dynamics of larger fabrics. Each Omni-Path port runs at 100Gbps, delivering up to 25GB/sec of bi-directional throughput (12.5GB/sec in each direction), which approaches the limit of the PCI-Express bus in modern servers.
Data Matters
HPC applications have always made heavy demands on both memory and I/O subsystems. With recent advances in non-volatile memory and flash storage, data will continue to move closer to the compute. Intel's storage hierarchy diagrams demonstrate how data will be moving one level closer to the processors.
 
It's worth noting that SLURM already features support for Burst Buffers on high-end Cray systems. We expect to see support for other architectures in the near future.
Intel Scalable System Framework (SSF)
Intel's Scalable System Framework provides a scalable and balanced design for small and large HPC systems. Many of the advancements announced by Intel at SC15 are part of SSF. It combines next-generation Intel® Xeon processors, Intel Xeon Phi™ processors, Intel Omni-Path, Intel Lustre and Intel Scalable Software Stack.
Follow the Progress of OpenHPC
Building, managing & using HPC clusters has always been challenging. The OpenHPC community effort was recently launched to work on these problems. It is still a work in progress, but we expect OpenHPC to be running on clusters by mid-2016.

You can follow the progress of Microway's OpenHPC efforts on GitHub:
https://github.com/Microway/MCMS-OpenHPC-Recipe
Simplify with Spack

Spack is a versatile package management tool from the Livermore Computing facility at Lawrence Livermore National Laboratory.

Spack promises to make installing HPC software easier by simplifying the management of many different configurations of installed packages. Spack offers a recursive specification syntax for creating parametric builds of packages and their dependencies, so it is possible to keep combinatorial varieties of builds of the same package on the same system. Spack also ensures that each package finds its correct dependencies, independent of the user's environment. If you manage a cluster with scientific software and need a way to simplify installing multiple builds of the same software, you will probably benefit from using Spack. Be sure to look into Spack at
https://github.com/LLNL/spack

OpenMP

The anticipated release of OpenMP 4.1 was instead announced as OpenMP 4.5, and it was received with great interest. The new version introduces changes significant enough to merit the larger jump in version number.

Prominent new features include reductions for C/C++ arrays, support for Fortran 2003, support for doacross loops, division of loops into tasks with the taskloop construct, and improved support for C++ reference types. 

Other refinements include setting task priorities, SIMD extensions, thread affinity policies, unstructured data mapping, asynchronous execution, support for device pointers, and a new "link" feature for mapping global variables.
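
To give a flavor of two of these additions, here is a minimal C sketch of our own (not taken from the OpenMP examples document) showing the taskloop construct and an array-section reduction; it assumes a compiler with OpenMP 4.5 support, such as a recent GCC or Intel compiler:

    #include <stdio.h>

    #define N 1000

    int main(void)
    {
        static double a[N], b[N], c[N];
        double sum[4] = {0.0, 0.0, 0.0, 0.0};

        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

        /* OpenMP 4.5 taskloop: the loop is divided into tasks, which the
         * runtime schedules across the team of threads. */
        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp taskloop grainsize(100)
            for (int i = 0; i < N; i++)
                c[i] = a[i] + b[i];
        }

        /* OpenMP 4.5 array-section reduction: each element of sum[] is
         * reduced across threads (previously limited to scalars in C/C++). */
        #pragma omp parallel for reduction(+: sum[0:4])
        for (int i = 0; i < N; i++)
            sum[i % 4] += c[i];

        printf("c[%d] = %.1f   sum[0] = %.1f\n", N - 1, c[N - 1], sum[0]);
        return 0;
    }

Compilers are still rolling out 4.5 support, so check your compiler's release notes before relying on these constructs in production.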

OpenMPI
OpenMPI introduced version 1.10.1, which features support for Intel's much-anticipated Omni-Path interconnect. Also new with this release are libfabric support for usNIC and the Mellanox "yalla" PML.
 

A roadmap to v2.x includes plans for MPI-3.1 compliance, with v2.0.0 to be released by Q1 2016. Take note that IBM Platform MPI will be based upon OpenMPI v2.x, and many of the Platform MPI features will be contributed back to OpenMPI.

Features to be dropped in v2.x include support for Myrinet/MX, Cray XT, VampirTrace, checkpoint/restart, and the legacy collective module (ml). New v2.x features will include improved Cray XE/XC support, Unified Communication X (UCX) integration, and PMIx (Process Management Interface for Exascale). PMIx supports rapid startup of applications at exascale and beyond, and will be delivered as a standalone library. OpenMPI v2.x will also support CUDA GPUDirect.
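
As one illustration of what CUDA GPUDirect support means in practice, here is a minimal, hypothetical C sketch of a CUDA-aware MPI transfer (the buffer name and message size are our own). With a CUDA-aware OpenMPI build, the device pointer is handed directly to MPI and the library moves the data without a manual copy through host memory:

    #include <stdio.h>
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        const int count = 1 << 20;              /* 1M doubles per message */
        int rank;
        double *d_buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Allocate the message buffer in GPU memory */
        cudaMalloc((void **)&d_buf, count * sizeof(double));
        cudaMemset(d_buf, 0, count * sizeof(double));

        if (rank == 0) {
            /* The device pointer goes straight to MPI_Send -- no cudaMemcpy
             * to a host staging buffer is needed when MPI is CUDA-aware. */
            MPI_Send(d_buf, count, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, count, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Rank 1 received %d doubles into GPU memory\n", count);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }

Run it with at least two ranks (for example, mpirun -np 2) where each rank can see a GPU; if the MPI library is not CUDA-aware, the same transfer requires explicit cudaMemcpy staging through host buffers.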
 
We strive to bring you the most detailed information on the exciting progress and products in the HPC world.

As always, if you have any questions, Microway's experts are happy to offer advice and share their technical expertise.  Feel free to contact us when you design your next cluster or WhisperStation.
Contact Us
sales@microway.com
(508) 746-7341
GSA Schedule GS-35F-0431N
Eliot Eshelman (508) 732-5534
Ed Hinkel (508) 732-5523
John Murphy (508) 732-5542
Samantha Wheeler (508) 732-5526
Copyright © 2015 Microway Inc., All rights reserved.

