

Compute accelerators - whether they be GPUs, Intel Xeon Phi, or FPGAs - are increasingly common in HPC, so it is important that we assess the use of such technologies from within VMware vSphere® as part of our efforts to virtualize research computing and other HPC environments. Last year, when I tested Intel Xeon Phi in passthrough mode (VM Direct Path I/O) with VMware ESX® 5.5, we found that it didn’t work due to some limitations in our passthrough implementation. However, using an engineering build of ESX, I was able to successfully configure the device in passthrough mode, run Intel’s bundled Phi performance tests, and demonstrate good performance. While this wasn’t of practical use to customers since I was not testing with a released version of ESX, it did validate that with appropriate engineering changes it would be possible to use Intel Xeon Phi with ESX and achieve good performance. With the release of ESX 6.0, Na Zhang recently validated that 1) ESX 6.0 does now correctly allow access to Intel Xeon Phi in passthrough mode, and 2) performance is generally very good. This blog entry shares our performance results and explains how to expose Intel Xeon Phi in passthrough mode, which is a bit more involved than merely adding a PCI device to the guest.
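As a sketch of the extra step, assuming the device has already been marked for passthrough on the host and attached to the VM through the vSphere client: the Phi exposes very large memory-mapped regions, so the VM’s .vmx file typically also needs 64-bit MMIO enabled, roughly as follows.
```
# Added by the vSphere client when the PCI device is attached to the VM:
pciPassthru0.present = "TRUE"
# Manual addition - the Phi's large BARs will not map without 64-bit MMIO:
pciPassthru.use64bitMMIO = "TRUE"
```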
We ran the Intel micperf tests on an Intel-supplied prototype machine, comparing bare-metal and virtualized performance in passthrough mode. All tests were run on RedHat 6.4 and MPSS 3.1.2, a now-old version of the Intel Manycore Platform Software Stack. The micperf utility reports our board SKU as C0-7120 P/A/X/D (Knights Corner). Power management was disabled on the host as well as on the Phi card to generate the best and most stable performance.
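For reference, micperf ships with a driver script, micprun, that runs the bundled benchmarks (STREAM, LINPACK, SGEMM/DGEMM, SHOC) against a card; invoking it with no options runs the default suite. The kernel-selection flag below is recalled from MPSS 3.x documentation and may differ in other releases.
```
micprun                 # run the full default benchmark suite on card 0
micprun -k dgemm        # run a single kernel (flag assumed from MPSS 3.x docs)
```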
In the graphs below, the data series are each described with two descriptors. The first descriptor (“pragma”, “native”, or “scif”) refers to the Phi programming mode used for the test. The second descriptor (“baremetal”, “virtual_6.0”) refers to whether the data were generated on un-virtualized RedHat or on RedHat running in a VM on ESX 6.0. The virtual STREAM, LINPACK, and DGEMM (and SGEMM - not shown) results are essentially the same as their un-virtualized counterparts. Since STREAM measures local (Phi) memory bandwidth and the execution of both LINPACK and DGEMM is dominated by local computation on the Phi card, these results are not surprising. SHOC Download and Readback measure data transfer speeds between the host and Phi card over a range of message sizes.
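The three programming modes deserve a word of explanation: “native” binaries run entirely on the card, “scif” programs use the low-level Symmetric Communication Interface for explicit host-card transfers, and “pragma” programs offload annotated regions from host code across the PCIe bus. As a minimal illustrative sketch of the pragma (offload) style - not taken from the micperf sources - the following uses Intel’s compiler offload extensions with made-up array names:
```
#include <stdio.h>

#define N 1000000

static float a[N], b[N];

int main(void)
{
    for (int i = 0; i < N; i++)
        a[i] = 1.0f;

    /* Offload the loop to the first Phi card; the in/out clauses
       describe what gets copied to and from the card. */
    #pragma offload target(mic:0) in(a) out(b)
    for (int i = 0; i < N; i++)
        b[i] = 2.0f * a[i];

    printf("b[0] = %f\n", b[0]);
    return 0;
}
```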
