This post is part of the following series:
- NVIDIA GRID K2 vGPU with Citrix XenServer part 1 – Background Information
- NVIDIA GRID K2 vGPU with Citrix XenServer part 2 – XenServer Installation
- NVIDIA GRID K2 vGPU with Citrix XenServer part 3 – VM Deployment and Driver Installation
- NVIDIA GRID K2 vGPU with Citrix XenServer part 4 – Benchmarking
One of my favorite projects for this year: virtualization of CAD workstations running CATIA using NVIDIA GRID cards and Citrix technologies. The biggest possible benefit we can achieve is money saved on infrastructure in branch offices (no need for expensive storage arrays, workstations, or backup solutions). But wait a moment, there is more:
BENEFITS OF NVIDIA GRID FOR IT:
- Leverage industry-leading virtualization solutions, including Citrix, Microsoft and VMware
- Ability to add your most graphics-intensive users to your virtual solutions
- Improved productivity and mobility for all of your users
- Enhanced data and IP security for models and assets
BENEFITS OF NVIDIA GRID FOR USERS:
- Access to all critical applications, including the most 3D-intensive
- Highly responsive, rich multimedia experiences
- Access from anywhere, on any device
Let’s start with a little background information.
NVIDIA GRID vGPU enables multiple virtual machines (VMs) to have simultaneous, direct access to a single physical GPU, using the same NVIDIA graphics drivers that are deployed on non-virtualized operating systems.
Under the control of NVIDIA’s GRID Virtual GPU Manager, which runs in XenServer’s Control Domain (dom0), GRID physical GPUs are capable of supporting multiple virtual GPU devices (vGPUs) that can be assigned directly to VMs.
Guest VMs use GRID virtual GPUs in the same manner as a physical GPU that has been passed through by the hypervisor; an NVIDIA driver loaded in the guest VM provides direct access to the GPU for performance critical fast paths, and a paravirtualized interface to the GRID Virtual GPU Manager.
We can choose between GRID K1 and K2 cards, but before we do that, we should familiarize ourselves with their main differences.
GRID vGPU enables up to eight users to share each physical GPU, assigning the graphics resources of the available GPUs to virtual machines in a balanced approach. Each NVIDIA GRID K1 card has four GPUs, allowing 32 users to share a single card. Each NVIDIA GRID K2 card has two GPUs, allowing 16 users to share a single card.
The GRID K1 is built from 4 entry-level Kepler GPUs and has 16 GB of DDR3 memory (4 GB/GPU). The equivalent Quadro card is the K600 (entry level). We can use GRID K1 cards to serve up to 32 “knowledge workers” with the K120Q vGPU (512 MB) and 2 displays, or up to 4 power users with the K180Q vGPU (4 GB) and 4 displays.
The GRID K2 is built from 2 high-end Kepler GPUs and has 8 GB of GDDR5 memory (4 GB/GPU). The equivalent Quadro card is the K5000 (high end). We can use GRID K2 cards to serve up to 16 “power users” configured with the K220Q vGPU (512 MB) and 2 displays, or up to 2 “designers” with the K280Q vGPU (4 GB) and 4 displays.
Below you can see the full vGPU profile table for the K1 and K2 cards:
Keep in mind that you cannot mix different vGPU types on a single physical GPU. However, this restriction doesn’t extend across the physical GPUs on the same card: each physical GPU on a K1 or K2 may host a different type of virtual GPU at the same time.
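As a quick sanity check on the per-card numbers above, here is a minimal sketch (Python, using a hypothetical `users_per_card` helper of my own, not any NVIDIA tool) that reproduces the user counts from the per-GPU frame buffer and the profile size:

```python
# Sketch: reproduce the per-card user counts quoted above.
# Card specs come from this post; the helper is illustrative only.
# GRID vGPU caps sharing at 8 users per physical GPU, so the
# frame-buffer division is clamped to 8.

def users_per_card(gpus_per_card, fb_per_gpu_mb, profile_fb_mb):
    """Max vGPUs of one type = frame buffer per GPU / profile size,
    capped at 8 per physical GPU, summed over the card's GPUs."""
    per_gpu = min(fb_per_gpu_mb // profile_fb_mb, 8)
    return gpus_per_card * per_gpu

# GRID K1: 4 GPUs x 4 GB, K120Q profile (512 MB) -> knowledge workers
print(users_per_card(4, 4096, 512))    # 32

# GRID K2: 2 GPUs x 4 GB, K220Q profile (512 MB) -> power users
print(users_per_card(2, 4096, 512))    # 16

# GRID K2: K280Q profile (4 GB) -> designers
print(users_per_card(2, 4096, 4096))   # 2
```

The same division explains the whole profile table: halving the profile’s frame buffer doubles the users per GPU, up to the 8-per-GPU ceiling.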
My goal is to use CATIA on virtual desktops, so I’m going to use the GRID K2 to make sure that performance is good enough to serve at least 2 users on a single graphics board.
On a daily basis I work with HP servers, and for this project I’ll do the same. Below you can find a list of HP solutions that support NVIDIA GRID. More information can be found under the following URL: GRID HP DATASHEET…
Or if you are looking for other vendors, take a look at: http://www.nvidia.com/object/grid-enterprise-resources.html#datasheet
Different server models support different numbers of GRID boards, which means you can easily calculate which server is the best choice for you. In my case, I want to run CATIA workstations using (most likely) K280Q vGPUs, so I’m going to need a single GRID K2 card to handle 2 designers. If it’s just for two designers (as in my test scenario), we can take the HP ProLiant WS460c Gen8 with a WS460 expansion blade containing a single GRID K2 card. Such a blade server uses 2 bays in your enclosure, so in total you can have 8 of them (8 GRID K2 cards = 16 designers, or up to 8 * 16 “knowledge workers” – it all depends on the type of vGPU you want to assign).
Another nice option for bigger environments is the HP ProLiant SL270s Gen8 SE, which can be fitted with up to 4 K1 or K2 boards! Using the “smallest” vGPU, we can handle up to 4 * (4 * 8) = 128 “knowledge workers” on K1 cards in a single server!
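The same back-of-the-envelope server sizing can be sketched in a couple of lines (again a hypothetical helper of mine, not a vendor sizing tool):

```python
# Sketch: users per server = boards per server x GPUs per board x vGPUs per GPU.
def users_per_server(boards, gpus_per_board, vgpus_per_gpu):
    return boards * gpus_per_board * vgpus_per_gpu

# SL270s Gen8 SE with 4 GRID K1 boards, smallest vGPU (8 per GPU):
print(users_per_server(4, 4, 8))  # 128 knowledge workers

# Eight WS460c blades, each with one GRID K2, K280Q (1 vGPU per GPU):
print(users_per_server(8, 2, 1))  # 16 designers
```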
The next part of this article will appear soon…