
VSPEX Proven Infrastructure Guide

EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7 and Microsoft Hyper-V Server 2012 for up to 2,000 Virtual Desktops Enabled by EMC Next-Generation VNX and EMC Backup

Abstract

This document describes the EMC® VSPEX® end-user computing solution with Citrix XenDesktop, Microsoft Hyper-V Server 2012, and EMC Next-Generation VNX® for up to 2,000 virtual desktops.

December 2013

Copyright © 2013 EMC Corporation. All rights reserved. Published in the USA.

Published December 2013

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

EMC VSPEX End-User Computing with Citrix XenDesktop 7 and Microsoft Hyper-V Server 2012 for up to 2,000 Virtual Desktops
Proven Infrastructure Guide

Part Number H11973.1


Contents

Chapter 1  Executive Summary
    Introduction
    Audience
    Purpose of this guide
    Business needs

Chapter 2  Solution Overview
    Solution overview
    Desktop broker
    Virtualization
    Compute
    Network
    Backup
    Storage

Chapter 3  Solution Technology Overview
    Solution technology
    Summary of key components
    Desktop virtualization
        Citrix
        XenDesktop 7
        Machine Creation Services
        Citrix Provisioning Services
        Citrix Personal vDisk
        Citrix Profile Management
    Virtualization
        Microsoft Hyper-V Server 2012
        Microsoft System Center Virtual Machine Manager
        Hyper-V High Availability
        EMC Storage Integrator for Windows
    Compute
    Network
    Storage
        EMC VNX Snapshots
        EMC VNX SnapSure
        EMC VNX Virtual Provisioning
        VNX FAST Cache
        VNX FAST VP (optional)
        VNX file shares
        ROBO
    Backup and recovery
        EMC Avamar
    ShareFile
        ShareFile StorageZones
        ShareFile StorageZone Architecture
        Using ShareFile StorageZone with VSPEX architectures

Chapter 4  Solution Overview
    Solution overview
    Solution architecture
        Logical architecture
        Key components
        Hardware resources
        Software resources
        Sizing for validated configuration
    Server configuration guidelines
        Microsoft Hyper-V memory virtualization for VSPEX
        Memory configuration guidelines
    Network configuration guidelines
        VLAN
        Enable jumbo frames
        Link aggregation
    Storage configuration guidelines
        Hyper-V storage virtualization for VSPEX
        VSPEX storage building block
        VSPEX end-user computing validated maximums
        Storage layout for 500 virtual desktops
        Storage layout for 1,000 virtual desktops
        Storage layout for 2,000 virtual desktops
    High availability and failover
        Virtualization layer
        Compute layer
        Network layer
        Storage layer
    Validation test profile
    Backup environment configuration guidelines
        Backup characteristics
        Backup layout
    Sizing guidelines
    Reference workload
        Defining the reference workload
    Applying the reference workload
    Implementing the reference architectures
        Resource types
        Backup resources
        Expanding existing VSPEX EUC environments
        Implementation summary
    Quick assessment
        CPU requirements
        Memory requirements
        Storage performance requirements
        Storage capacity requirements
        Determining equivalent reference virtual desktops
        Fine-tuning

Chapter 5  VSPEX Configuration Guidelines
    Overview
    Pre-deployment tasks
        Deployment prerequisites
    Customer configuration data
    Preparing switches, connecting the network, and configuring switches
        Preparing network switches
        Configuring infrastructure network
        Configuring VLANs
        Completing network cabling
    Preparing and configuring the storage array
        Configuring VNX
        Provisioning core data storage
        Provisioning optional storage for user data
        Provisioning optional storage for infrastructure virtual machines
    Installing and configuring Microsoft Hyper-V hosts
        Installing Windows hosts
        Installing Hyper-V and configuring failover clustering
        Configuring Windows host networking
        Installing PowerPath on Windows servers
        Enabling jumbo frames
        Planning virtual machine memory allocations
    Installing and configuring SQL Server database
        Creating a virtual machine for Microsoft SQL Server
        Installing Microsoft Windows on the virtual machine
        Installing SQL Server
        Configuring database for Microsoft SCVMM
    Deploying System Center Virtual Machine Manager server
        Creating a SCVMM host virtual machine
        Installing the SCVMM guest OS
        Installing the SCVMM server
        Installing the SCVMM Management Console
        Installing the SCVMM agent locally on a host
        Adding a Hyper-V cluster into SCVMM
        Adding file share storage to SCVMM (file variant only)
        Creating a virtual machine in SCVMM
        Creating a template virtual machine
        Deploying virtual machines from the template virtual machine
    Installing and configuring XenDesktop controller
        Installing server-side components of XenDesktop
        Configuring a site
        Adding a second controller
        Installing Citrix Studio
        Preparing master virtual machine
        Provisioning virtual desktops
    Installing and configuring Provisioning Services (PVS only)
        Configuring a PVS server farm
        Adding a second PVS server
        Create a PVS store
        Configuring inbound communication
        Configuring a bootstrap file
        Setting up a TFTP server on VNX
        Configuring boot options 66 and 67 on DHCP server
        Preparing the master virtual machine
        Provisioning the virtual desktops
    Setting up EMC Avamar
        GPO additions for EMC Avamar
        Preparing the master image for EMC Avamar
        Defining datasets
        Defining schedules
        Adjusting the maintenance window schedule
        Defining retention policies
        Creating groups and group policy
        EMC Avamar Enterprise Manager: activating clients
    Summary

Chapter 6  Validating the Solution
    Overview
    Post-installation checklist
    Deploying and testing a single virtual desktop
    Verifying the redundancy of the solution components

Appendix A  Bills of Materials
    Bill of materials for 500 virtual desktops
    Bill of materials for 1,000 virtual desktops
    Bill of materials for 2,000 virtual desktops

Appendix B  Customer Configuration Data Sheet
    Customer configuration data sheets

Appendix C  References
    References
        EMC documentation
        Other documentation

Appendix D  About VSPEX
    About VSPEX


Figures


Figure 1.   Next-Generation VNX with multicore optimization
Figure 2.   Active/active processors increase performance, resiliency, and efficiency
Figure 3.   Latest Unisphere Management Suite
Figure 4.   Solution components
Figure 5.   XenDesktop 7 architecture components
Figure 6.   Compute layer flexibility
Figure 7.   Example of highly available network design
Figure 8.   Storage pool rebalance progress
Figure 9.   Thin LUN space utilization
Figure 10.  Examining storage pool space utilization
Figure 11.  Defining storage pool utilization thresholds
Figure 12.  Defining automated notifications for block
Figure 13.  ShareFile high-level architecture
Figure 14.  Logical architecture: VSPEX end-user computing for Citrix XenDesktop with ShareFile StorageZone
Figure 15.  Logical architecture for SMB variant
Figure 16.  Logical architecture for FC variant
Figure 17.  Hypervisor memory consumption
Figure 18.  Required networks
Figure 19.  Hyper-V virtual disk types
Figure 20.  Core storage layout with PVS provisioning for 500 virtual desktops
Figure 21.  Core storage layout with MCS provisioning for 500 virtual desktops
Figure 22.  Optional storage layout for 500 virtual desktops
Figure 23.  Core storage layout with PVS provisioning for 1,000 virtual desktops
Figure 24.  Core storage layout with MCS provisioning for 1,000 virtual desktops
Figure 25.  Optional storage layout for 1,000 virtual desktops
Figure 26.  Core storage layout with PVS provisioning for 2,000 virtual desktops
Figure 27.  Core storage layout with MCS provisioning for 2,000 virtual desktops
Figure 28.  Optional storage layout for 2,000 virtual desktops
Figure 29.  High availability at the virtualization layer
Figure 30.  Redundant power supplies
Figure 31.  Network layer high availability
Figure 32.  VNX series high availability
Figure 33.  Sample network architecture: SMB variant
Figure 34.  Sample network architecture: FC variant
Figure 35.  Set nthread parameter
Figure 36.  Storage System Properties dialog box
Figure 37.  Create FAST Cache dialog box
Figure 38.  Advanced tab in the Create Storage Pool dialog box
Figure 39.  Advanced tab in the Storage Pool Properties dialog box
Figure 40.  Storage Pool Properties window
Figure 41.  Manage Auto-Tiering dialog box
Figure 42.  LUN Properties window
Figure 43.  Configure Bootstrap dialog box
Figure 44.  Configuring Windows Folder Redirection
Figure 45.  Create a Windows network drive mapping for user files
Figure 46.  Configure drive mapping settings
Figure 47.  Configure drive mapping common settings
Figure 48.  Create a Windows network drive mapping for user profile data
Figure 49.  Avamar tools menu
Figure 50.  Avamar Manage All Datasets dialog box
Figure 51.  Avamar New Dataset dialog box
Figure 52.  Configure Avamar Dataset settings
Figure 53.  User Profile data dataset
Figure 54.  User Profile data dataset Exclusion settings
Figure 55.  User Profile data dataset Options settings
Figure 56.  User Profile data dataset Advanced Options settings
Figure 57.  Avamar default Backup/Maintenance Windows schedule
Figure 58.  Avamar modified Backup/Maintenance Windows schedule
Figure 59.  Create new Avamar backup group
Figure 60.  New backup group settings
Figure 61.  Select backup group dataset
Figure 62.  Select backup group schedule
Figure 63.  Select backup group retention policy
Figure 64.  Avamar Enterprise Manager
Figure 65.  Avamar Client Manager
Figure 66.  Avamar Activate Client dialog box
Figure 67.  Avamar Activate Client menu
Figure 68.  Avamar Directory Service configuration
Figure 69.  Avamar Client Manager: post configuration
Figure 70.  Avamar Client Manager: virtual desktop clients
Figure 71.  Avamar Client Manager: select virtual desktop clients
Figure 72.  Select Avamar groups
Figure 73.  Activate Avamar clients
Figure 74.  Commit Avamar client activation
Figure 75.  Avamar client activation informational prompt one
Figure 76.  Avamar client activation informational prompt two
Figure 77.  Avamar Client Manager: activated clients

Tables


Table 1. Thresholds and settings under VNX OE Block Release 33 .................... 37
Table 2. Minimum hardware resources to support ShareFile StorageZone with Storage Center ............. 42
Table 3. Recommended EMC VNX storage needed for ShareFile StorageZone CIFS share ....................... 43
Table 4. Solution hardware ............................................................................... 50
Table 5. Solution software ................................................................................ 54
Table 6. Configurations that support this solution ............................................ 55
Table 7. Server hardware .................................................................................. 57
Table 8. Hardware resources for network .......................................................... 60
Table 9. Storage hardware ................................................................................ 63
Table 10. Number of disks required for various numbers of virtual desktops ...... 67
Table 11. Validated environment profile ............................................................. 85
Table 12. Backup profile characteristics ............................................................. 86
Table 13. Virtual desktop characteristics ............................................................ 88
Table 14. Blank worksheet row ........................................................................... 92
Table 15. Reference virtual desktop resources .................................................... 93
Table 16. Example worksheet row ....................................................................... 94
Table 17. Example applications .......................................................................... 94
Table 18. Server resource component totals ....................................................... 95
Table 19. Blank customer worksheet .................................................................. 96
Table 20. Deployment process overview ............................................................. 98
Table 21. Tasks for pre-deployment .................................................................... 99
Table 22. Deployment prerequisites checklist ..................................................... 99
Table 23. Tasks for switch and network configuration .......................................101
Table 24. Tasks for storage configuration..........................................................104
Table 25. Tasks for server installation ...............................................................114
Table 26. Tasks for SQL Server database setup .................................................116
Table 27. Tasks for SCVMM configuration .........................................................118
Table 28. Tasks for XenDesktop controller setup ...............................................120
Table 29. Tasks for XenDesktop controller setup ...............................................123
Table 30. Tasks for Avamar integration .............................................................128
Table 31. Tasks for testing the installation ........................................................151
Table 32. List of components used in the VSPEX solution for 500 virtual desktops ...........................................................................................154
Table 33. List of components used in the VSPEX solution for 1,000 virtual desktops ...........................................................................................156
Table 34. List of components used in the VSPEX solution for 2,000 virtual desktops ...........................................................................................158
Table 35. Common server information ..............................................................161
Table 36. Hyper-V server information ................................................................161
Table 37. Array information...............................................................................162
Table 38. Network infrastructure information ....................................................162
Table 39. VLAN information ..............................................................................162
Table 40. Service accounts ...............................................................................163



Chapter 1

Executive Summary

This chapter presents the following topics:
Introduction ........................................................................................... 13
Audience ............................................................................................... 13
Purpose of this guide ............................................................................. 13
Business needs ...................................................................................... 14


Introduction

EMC® VSPEX® validated and modular architectures are built with proven technologies to create complete virtualization solutions that enable you to make informed decisions about the hypervisor, compute, and networking layers. VSPEX eliminates server virtualization planning and configuration burdens. When you are embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, more choices, greater efficiency, and lower risk.

This document is intended to be a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces. Customers are free to select any server and networking hardware that meets or exceeds the stated minimums.

Audience

This guide assumes you have the necessary training and background to install and configure an end-user computing solution based on Citrix XenDesktop with Microsoft Hyper-V as the hypervisor, EMC VNX® series storage systems, and the associated infrastructure required by this implementation. External references are provided where applicable, and you should be familiar with those documents. You should also be familiar with the infrastructure and database security policies of the customer installation.

Individuals focused on selling and sizing a VSPEX end-user computing solution for Citrix XenDesktop should pay particular attention to the first four chapters of this document. Implementers of the solution should focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.

Purpose of this guide

This guide presents an initial introduction to the VSPEX end-user computing architecture, an explanation of how to modify the architecture for specific engagements, and instructions for effectively deploying the system.

The VSPEX end-user computing architecture provides the customer with a modern system capable of hosting a large number of virtual desktops at a consistent performance level. This solution runs on a Microsoft Hyper-V virtualization layer backed by the highly available VNX storage family, with Citrix XenDesktop as the desktop broker. The compute and network components, while vendor-definable, are designed to be redundant and sufficiently powerful to handle the processing and data needs of a large virtual machine environment.

The 500, 1,000, and 2,000 virtual desktop environments discussed are based on a defined desktop workload. While not every virtual desktop has the same


requirements, this document provides adjustment methods and guidance for deploying a cost-effective system.

An end-user computing or virtual desktop architecture is a complex system offering. This document facilitates setup by providing up-front software and hardware material lists, step-by-step sizing guidance and worksheets, and verified deployment steps. Validation tests are provided to ensure that your system is up and running properly after the last component has been installed. Follow the guidelines in this document to ensure an efficient and painless desktop deployment.
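The sizing worksheets mentioned above reduce each user's requirements to a number of "reference virtual desktops" and total them per resource. As a rough illustration of that arithmetic (the per-desktop reference values and user populations below are placeholders for illustration, not the validated figures from the worksheets):

```python
# Hypothetical sizing-worksheet arithmetic: the total is driven by whichever
# resource (vCPU, memory, or IOPS) demands the most reference desktops per user.
import math

# Placeholder reference-desktop definition, not the validated VSPEX values.
REFERENCE_DESKTOP = {"vcpus": 1, "memory_gb": 2, "iops": 8}

def equivalent_reference_desktops(user_type):
    """Express one user's requirements as a count of reference desktops."""
    return max(
        math.ceil(user_type["vcpus"] / REFERENCE_DESKTOP["vcpus"]),
        math.ceil(user_type["memory_gb"] / REFERENCE_DESKTOP["memory_gb"]),
        math.ceil(user_type["iops"] / REFERENCE_DESKTOP["iops"]),
    )

# Example user population (illustrative numbers only).
population = [
    {"count": 300, "vcpus": 1, "memory_gb": 2, "iops": 8},   # task workers
    {"count": 200, "vcpus": 2, "memory_gb": 4, "iops": 16},  # power users
]

total = sum(u["count"] * equivalent_reference_desktops(u) for u in population)
print(total)  # 300*1 + 200*2 = 700 reference desktops
```

The completed total is then compared against the 500, 1,000, or 2,000 reference-desktop configuration points to pick an appropriately sized deployment.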

Business needs

Business applications are increasingly deployed on consolidated compute, network, and storage environments. Using Citrix for EMC VSPEX end-user computing reduces the complexity of configuring the components of a traditional deployment model. It simplifies integration management while maintaining application design and implementation options. Citrix unifies administration while enabling the control and monitoring of process separation.

The VSPEX end-user computing solution for Citrix addresses the following business needs:
 Provides an end-to-end virtualization solution to utilize the capabilities of the unified infrastructure components
 Provides a solution for efficiently virtualizing 500, 1,000, or 2,000 virtual desktops for varied customer use cases
 Provides a reliable, flexible, and scalable reference design


Chapter 2

Solution Overview

This chapter presents the following topics:
Solution overview .................................................................................. 16
Desktop broker ...................................................................................... 16
Virtualization ......................................................................................... 16
Compute ................................................................................................ 16
Network ................................................................................................. 17
Backup .................................................................................................. 17
Storage .................................................................................................. 17


Solution overview

The EMC VSPEX end-user computing solution for Citrix XenDesktop on Microsoft Hyper-V Server 2012 provides a complete system architecture capable of supporting and protecting up to 2,000 virtual desktops with a redundant server and network topology, highly available storage, and trusted EMC backup solutions. The core components of this solution are the desktop broker, virtualization, storage, network, and compute layers.

Desktop broker

XenDesktop is the virtual desktop solution from Citrix that allows virtual desktops to run in the Microsoft Hyper-V virtualization environment. It centralizes desktop management and provides increased control for IT organizations. XenDesktop allows end users to connect to their desktops from multiple devices across a network connection.

Virtualization

Microsoft Hyper-V is a virtualization platform that provides flexibility and cost savings by enabling the consolidation of large, inefficient server farms into nimble, reliable cloud infrastructures. The core Microsoft virtualization components are the Hyper-V hypervisor and Microsoft System Center Virtual Machine Manager (SCVMM) for system management.

The Hyper-V hypervisor runs on a dedicated server and allows multiple operating systems to execute on the system simultaneously as virtual machines. Microsoft failover clustering allows multiple Hyper-V servers to operate in a clustered configuration. The Hyper-V cluster is managed as a larger resource pool through SCVMM, allowing dynamic allocation of CPU, memory, and storage across the cluster.

High-availability features of Microsoft Hyper-V Server 2012, such as Live Migration and Storage Migration, enable seamless migration of virtual machines and stored files from one Hyper-V server to another with minimal or no performance impact.

Compute

VSPEX allows flexibility in the design and implementation of the vendor's choice of server components. The infrastructure must conform to the following attributes:
 Sufficient CPU cores and memory to support the required number and types of virtual machines
 Sufficient network connections to enable redundant connectivity to the system switches
 Excess capacity to support failover after a server failure in the environment
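The excess-capacity attribute is commonly met with N+1 sizing: provision enough hosts that the full desktop load still fits after one server fails. A minimal sketch of that calculation (the per-host desktop density here is an illustrative assumption, not a VSPEX minimum):

```python
# N+X failover sizing sketch. The desktops-per-host figure is an assumption
# for illustration; actual density depends on the chosen server hardware.
import math

def hosts_required(desktops, desktops_per_host, failures_tolerated=1):
    """Enough hosts that the workload still fits after X host failures."""
    base = math.ceil(desktops / desktops_per_host)
    return base + failures_tolerated

# e.g. 2,000 desktops at an assumed 125 desktops per host:
print(hosts_required(2000, 125))  # 16 hosts for the load + 1 for failover = 17
```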


Network

VSPEX allows flexibility in the design and implementation of the vendor's choice of network components. The infrastructure must conform to the following attributes:
 Redundant network links for the hosts, switches, and storage
 Support for link aggregation
 Traffic isolation based on industry-accepted best practices

Backup

EMC Avamar® delivers the protection and efficiency needed to accelerate the deployment of a VSPEX end-user computing solution. Avamar enables administrators to centrally back up and manage the policies and end-user computing infrastructure components, while allowing end users to efficiently recover their own files from a simple and intuitive web-based interface.

Avamar moves only new, unique sub-file data segments, resulting in fast daily full backups. This yields up to a 90 percent reduction in backup times, can reduce the required daily network bandwidth by up to 99 percent, and can reduce the required backup storage by 10 to 30 times.
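The bandwidth figure follows directly from moving only unique segments. A back-of-the-envelope model (the client count, image size, and daily change rate below are illustrative assumptions, not measured Avamar results):

```python
# Illustrative deduplicated-backup arithmetic; all inputs are assumptions.
clients = 2000
full_image_gb = 20      # assumed per-desktop image size
daily_change_pct = 1    # assume ~1% of segments are new or unique each day

# Traditional daily fulls move every byte of every client image.
full_backup_gb = clients * full_image_gb

# Source-side deduplication moves only the new, unique segments.
dedup_backup_gb = full_backup_gb * daily_change_pct // 100

print(full_backup_gb)   # 40000 GB moved by traditional daily fulls
print(dedup_backup_gb)  # 400 GB moved when only unique segments travel (99% less)
```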

Storage

The EMC Next-Generation VNX storage series provides both file and block access with a broad feature set, making it an ideal choice for any end-user computing implementation. VNX storage includes the following components, sized for the stated reference architecture workload:
 Host adapter ports (for block)—Provide host connectivity through fabric to the array
 Data Movers (for file)—Front-end appliances that provide file services to hosts (required only when providing CIFS/SMB or NFS services)
 Storage processors (SPs)—The compute components of the storage array, used for all aspects of data moving into, out of, and between arrays
 Disk drives—Disk spindles and solid-state drives (SSDs) that contain the host/application data, and their enclosures

Note: The term Data Mover refers to a VNX hardware component, which has a CPU, memory, and I/O ports. It enables CIFS (SMB) and NFS protocols on the VNX.

The desktop solutions described in this document are based on the EMC VNX5400 and EMC VNX5600 storage arrays. The VNX5400 can support a maximum of 250 drives and the VNX5600 can host up to 500 drives.


The EMC VNX series supports a wide range of business-class features that are ideal for the end-user computing environment, including:
 EMC Fully Automated Storage Tiering for Virtual Pools (FAST™ VP)
 EMC FAST Cache
 File-level data deduplication and compression
 Block deduplication
 Thin provisioning
 Replication
 Snapshots and checkpoints
 File-level retention
 Quota management

Features and enhancements

The EMC VNX flash-optimized unified storage platform delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, and easy-to-use solution. Ideal for mixed workloads in physical or virtual environments, VNX combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's virtualized application environments.

The next-generation VNX series includes many features and enhancements designed and built upon the first generation's success, including:
 More capacity, with multicore optimization through Multicore Cache, Multicore RAID, and Multicore FAST Cache (MCx™)
 Greater efficiency, with a flash-optimized hybrid array
 Better protection, by increasing application availability with active/active storage processors
 Easier administration and deployment, by increasing productivity with the new Unisphere® Management Suite

VSPEX is built with the next-generation VNX to deliver even greater efficiency, performance, and scale than ever before.

Flash-optimized hybrid array

VNX is a flash-optimized hybrid array that provides automated tiering to deliver the best performance to your critical data, while intelligently moving less frequently accessed data to lower-cost disks. In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS. A flash-optimized VNX takes full advantage of the low latency of flash to deliver cost-saving optimization and high-performance scalability. The EMC Fully Automated Storage Tiering Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives and boosts


the most active data to the flash drives, ensuring that customers never have to make concessions for cost or performance. New data tends to be accessed more frequently than older data, so it is stored on flash drives to provide the best performance. As data ages and becomes less active, FAST VP automatically tiers the data from high-performance drives to high-capacity drives, based on customer-defined policies. This functionality has been enhanced to provide four times better efficiency with new FAST VP solid-state disks (SSDs) that are based on enterprise multi-level cell (eMLC) technology, lowering the cost per gigabyte. FAST Cache dynamically absorbs unpredicted spikes in system workloads.

All VSPEX use cases benefit from the increased efficiency. VSPEX Proven Infrastructures deliver private cloud, end-user computing, and virtualized application solutions. With VNX, customers can realize an even greater return on their investment. VNX provides out-of-band, block-based deduplication that can dramatically lower the costs of the flash tier.

VNX Intel MCx code path optimization

The advent of flash technology has been a catalyst in significantly changing the requirements of midrange storage systems. EMC redesigned the midrange storage platform to efficiently optimize multicore CPUs and provide the highest-performing storage system at the lowest cost in the market.

MCx distributes all VNX data services across all cores, as shown in Figure 1. The VNX series with MCx dramatically improves the file performance for transactional applications like databases and virtual machines over network-attached storage (NAS).

Figure 1. Next-Generation VNX with multicore optimization
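The promote/demote behavior that FAST VP applies to data slices can be sketched conceptually. The following is a toy model of policy-based tiering under simplifying assumptions (slice names, heat counts, and slot counts are invented for illustration; this is not EMC's implementation):

```python
# Toy model of heat-based storage tiering: the most-accessed slices are kept
# on a limited number of flash slots, everything else on capacity drives.
def place_slices(access_counts, flash_slots):
    """Return (flash, capacity) slice IDs, keeping the hottest on flash."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    return ranked[:flash_slots], ranked[flash_slots:]

# Hypothetical access counts for four data slices over a policy window.
heat = {"boot_image": 900, "profiles": 120, "archive": 3, "temp": 45}

flash, capacity = place_slices(heat, flash_slots=2)
print(flash)     # ['boot_image', 'profiles']
print(capacity)  # ['temp', 'archive']
```

Re-running the placement with fresh access counts at each policy interval mirrors the way aging data drifts down to high-capacity drives while newly hot data is promoted.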


Multicore Cache

The cache is the most valuable asset in the storage subsystem; its efficient use is fundamental to the overall efficiency of the platform in handling variable and changing workloads. The cache engine has been modularized to take advantage of all the cores available in the system.

Multicore RAID

Another important part of the MCx redesign is the handling of I/O to the permanent back-end storage—hard disk drives (HDDs) and SSDs. Much of the increased performance in VNX comes from the modularization of the back-end data management processing, which enables MCx to seamlessly scale across all processors.

VNX performance

VNX storage, enabled with the MCx architecture, is optimized for FLASH 1st and provides unprecedented overall performance. It optimizes the system for transaction performance (cost per IOPS) and bandwidth performance (cost per GB/s) with low latency, and provides optimal capacity efficiency (cost per GB). VNX provides the following performance improvements:
 Up to four times more file transactions than dual-controller arrays
 Up to three times greater file performance for transactional applications (for example, Microsoft Exchange on VMware over NFS), with a 60 percent better response time
 Up to four times more Oracle and Microsoft SQL Server OLTP transactions
 Up to six times more virtual machines

Active/active array storage processors

The new VNX architecture provides active/active array storage processors, as shown in Figure 2. Because both paths actively serve I/O, application timeouts during path failover are eliminated. Load balancing is also improved, and applications can achieve up to two times better performance. Active/active for block is ideal for applications that require the highest levels of availability and performance but do not require tiering or efficiency services such as compression, deduplication, or snapshots.

With this VNX release, VSPEX customers can use virtual Data Movers (VDMs) and VNX Replicator to perform automated, high-speed file-system migrations between systems. This process migrates all checkpoints and settings automatically and enables clients to continue operating during the migration.


Figure 2. Active/active processors increase performance, resiliency, and efficiency

Unisphere management

The latest Unisphere Management Suite extends Unisphere's easy-to-use interface to include VNX Monitoring and Reporting for validating performance and anticipating capacity requirements. As shown in Figure 3, the suite also includes Unisphere Remote for centrally managing up to thousands of VNX and VNXe® systems, with added support for XtremSW™ Cache.

Figure 3. Latest Unisphere Management Suite


Chapter 3

Solution Technology Overview

This chapter presents the following topics:
Solution technology ............................................................................... 23
Summary of key components ................................................................. 24
Desktop virtualization ............................................................................ 25
Virtualization ......................................................................................... 27
Compute ................................................................................................ 28
Network ................................................................................................. 30
Storage .................................................................................................. 31
Backup and recovery .............................................................................. 38
ShareFile ............................................................................................... 38


Solution technology

This VSPEX solution uses EMC VNX5400 (for up to 1,000 virtual desktops) or VNX5600 (for up to 2,000 virtual desktops) storage arrays and Microsoft Hyper-V Server 2012 to provide the storage and compute resources for a Citrix XenDesktop 7 environment of Windows 7 virtual desktops, which are provisioned by Provisioning Services (PVS) or Machine Creation Services (MCS). Figure 4 shows the components of the solution.

Figure 4. Solution components

Planning and designing the storage infrastructure for Citrix XenDesktop is a critical step, because the shared storage must be able to absorb the large bursts of input/output (I/O) that occur in some use cases, such as when many desktops boot at the beginning of a workday or when required patches are applied. These large I/O bursts can lead to periods of erratic and unpredictable virtual desktop performance. If planning does not take these use cases into account, users can quickly become frustrated by unpredictable performance.

To provide predictable performance for an end-user computing environment, the storage must be able to handle the peak I/O load from clients while still providing fast response times. The design for this type of workload typically involves deploying many disks to handle brief periods of extreme I/O pressure, which is expensive to implement. This solution instead uses EMC VNX FAST Cache, allowing for a reduction in the number of disks required.

EMC's next-generation backup enables protection of user data and end-user recoverability by using EMC Avamar and its desktop client within the desktop image.
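The gap between steady-state and boot-storm I/O is what drives this design decision. A quick comparison at scale (the per-desktop IOPS figures below are common planning assumptions, not this solution's validated numbers):

```python
# Rough boot-storm arithmetic with assumed per-desktop IOPS figures.
desktops = 2000
steady_iops_per_desktop = 8   # assumed steady-state workload
boot_iops_per_desktop = 60    # assumed burst while Windows boots

steady_total = desktops * steady_iops_per_desktop
boot_total = desktops * boot_iops_per_desktop

print(steady_total)  # 16000 IOPS at steady state
print(boot_total)    # 120000 IOPS if all desktops boot at once
```

Sizing spinning disks for the boot peak rather than the steady state multiplies the spindle count several times over, which is why absorbing the burst in FAST Cache flash is the more economical approach.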


Summary of key components

This section describes the key components of this solution.

Desktop virtualization
The desktop virtualization broker manages the provisioning, allocation, maintenance, and eventual removal of the virtual desktop images that are provided to users of the system. This software enables on-demand creation of desktop images, allows maintenance of the image without affecting user productivity, and prevents the environment from growing in an unconstrained way.

Virtualization
The virtualization layer decouples physical resources from the applications that use them. This allows applications to use resources that are not directly tied to hardware, enabling many key features for end-user computing.

Compute
The compute layer provides memory and processing resources for the virtualization layer software and the applications running in the infrastructure. The VSPEX program defines the minimum amount of compute layer resources required and allows the customer to choose any compute hardware that meets the requirements.

Network
The network layer connects the users of the environment to the resources they need and connects the storage layer to the compute layer. The VSPEX program defines the minimum number of network ports required for the solution and provides general guidance on network architecture; the customer can implement the requirements using any network hardware that meets them.

Storage
The storage layer is a critical resource for the implementation of the end-user computing environment. Because of the way desktops are used, the storage layer must be able to absorb large bursts of transient activity without unduly impacting the user experience. This solution uses EMC VNX FAST Cache to handle this workload efficiently.
Backup and recovery
The optional backup and recovery component of the solution provides data protection in the event that the data in the primary system is deleted, damaged, or otherwise becomes unusable.

ShareFile
Citrix ShareFile is a follow-me data solution for secure enterprise file sharing and synchronization. The optional ShareFile StorageZone component of this solution lets customers keep ShareFile data on-premises on EMC VNX storage.

Solution architecture provides details about the components that make up the reference architecture.


Desktop virtualization

Desktop virtualization encapsulates and delivers the user desktop to a remote client device, which can be a thin client, zero client, smartphone, or tablet. It allows subscribers in different locations to access virtual desktops hosted on centralized computing resources at remote data centers.

In this solution, Citrix XenDesktop is used to provision, manage, broker, and monitor the desktop virtualization environment. Under the XenDesktop 7 architecture, management and delivery components are shared between XenDesktop and XenApp to give administrators a unified management experience. Figure 5 shows the XenDesktop 7 architecture components.

Figure 5. XenDesktop 7 architecture components

The XenDesktop 7 architecture components are described as follows:
 Receiver: Installed on user devices, Citrix Receiver provides users with quick, secure, self-service access to documents, applications, and desktops from any of the user's devices, including smartphones, tablets, and PCs. Receiver provides on-demand access to Windows, web, and Software-as-a-Service (SaaS) applications.
 StoreFront: StoreFront authenticates users to sites hosting resources and manages stores of desktops and applications that users access.
 Studio: Studio is the management console that enables you to configure and manage the deployment, eliminating the need for separate management consoles for managing delivery of applications and desktops. Studio provides various wizards to guide you through the process of setting up your environment, creating your workloads to host applications and desktops, and assigning applications and desktops to users.
 Delivery Controller: Installed on servers in the data center, the Delivery Controller consists of services that communicate with the hypervisor to distribute applications and desktops, authenticate and manage user access, and broker connections between users and their virtual desktops and applications. The controller manages the state of the desktops, starting and stopping them based on demand and administrative configuration. In some editions, the controller allows you to install Profile Management to manage user personalization settings in virtualized or physical Windows environments. Each site has one or more Delivery Controllers.
 Virtual Delivery Agent (VDA): Installed on server or workstation operating systems, the VDA enables connections for desktops and applications. For Remote PC Access, install the VDA on the office PC.
 Server OS machines: Virtual or physical machines based on Windows Server operating systems, used for delivering applications or hosted shared desktops (HSDs) to users.
 Desktop OS machines: Virtual or physical machines based on Windows desktop operating systems, used for delivering personalized desktops to users, or applications from desktop operating systems.
 Remote PC Access: User devices that are included on a whitelist enable users to access resources on their office PCs remotely from any device running Citrix Receiver.

Machine Creation Services

Machine Creation Services (MCS) is a provisioning mechanism integrated with the XenDesktop management interface, Citrix Studio, to provision, manage, and decommission desktops throughout the desktop lifecycle from a centralized point of management. MCS allows the management of several types of machines within a catalog in Citrix Studio. Desktop customization is persistent for machines that use Personal vDisk, while non-Personal vDisk machines are appropriate if desktop changes are to be discarded when the user logs off.

Desktops provisioned with MCS share a common base image within a catalog. Because of this, the base image is accessed often enough to benefit from EMC VNX FAST Cache, which promotes frequently accessed data to flash drives to provide optimal I/O response time with fewer physical disks.

Citrix Provisioning Services


Citrix Provisioning Services (PVS) takes a different approach from traditional desktop imaging solutions by fundamentally changing the relationship between the hardware and the software that runs on it. By streaming a single shared disk image (vDisk) instead of copying images to individual machines, PVS enables organizations to reduce the number of disk images that they manage. As the number of machines continues to grow, PVS provides the efficiency of centralized management with the benefits of distributed processing.

EMC VSPEX End-User Computing Citrix XenDesktop 7 and Microsoft Hyper-V Server 2012 for up to 2,000 Virtual Desktops

Proven Infrastructure Guide

Chapter 3: Solution Technology Overview

Because machines stream the disk data dynamically in real time from a single shared image, consistency of the machine image is ensured. In addition, the configuration, applications, and even the operating system of large pools of machines can change completely during a restart operation. In this solution, PVS provisions 500, 1,000, or 2,000 virtual desktops running Windows 7 or Windows 8. The desktops are deployed from a single vDisk image.

Citrix Personal vDisk

The Citrix Personal vDisk (PvDisk or PvD) feature was introduced in Citrix XenDesktop 5.6. With Personal vDisk, users can preserve customization settings and user-installed applications in a pooled desktop. This capability is accomplished by redirecting the changes from the user's pooled virtual machine to a separate disk called the Personal vDisk. During runtime, the content of the Personal vDisk is blended with the content of the base virtual machine to provide a unified experience to the end user. The Personal vDisk data is preserved during restart and refresh operations. In this solution, PVS provisions 500, 1,000, or 2,000 virtual desktops running Windows 7. The desktops are deployed from a single vDisk image.

Citrix Profile Management

Citrix Profile Management preserves user profiles and dynamically synchronizes them with a remote profile repository. Citrix Profile Management ensures that personal settings are applied to desktops and applications, regardless of the user’s login location or client device. The combination of Citrix Profile Management and pooled desktops provides the experience of a dedicated desktop while potentially minimizing the amount of storage required in an organization. Citrix Profile Management dynamically downloads a user’s remote profile when the user logs in to a Citrix XenDesktop. Profile Management downloads user profile information only when the user needs it.

Virtualization

The virtualization layer is a key component of any end-user computing solution. It allows the application resource requirements to be decoupled from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and allows the physical capability of the system to change without affecting the hosted applications.

Microsoft Hyper-V Server 2012

Microsoft Hyper-V Server 2012 is used to build the virtualization layer for this solution. Microsoft Hyper-V transforms a computer's physical resources by virtualizing the CPU, memory, storage, and network. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications just like physical computers. High-availability features of Microsoft Hyper-V, such as Live Migration and Storage Migration, enable seamless migration of virtual machines and stored files from one Hyper-V server to another with minimal or no performance impact.


Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager is a centralized management platform for the Microsoft Hyper-V infrastructure. It provides administrators with a single interface that can be accessed from multiple devices for all aspects of monitoring, managing, and maintaining the virtual infrastructure.

Hyper-V High Availability

The Microsoft Hyper-V Cluster High Availability feature allows the virtualization layer to automatically restart virtual machines in various failure conditions. If the physical hardware has an error, the impacted virtual machines can be restarted automatically on other servers in the cluster.

Note: For Microsoft Hyper-V Cluster High Availability to restart virtual machines on different hardware, those servers must have resources available. The Compute section provides specific recommendations to enable this functionality.

Microsoft Hyper-V Cluster allows you to configure policies to determine which machines are restarted automatically and under what conditions these operations should be performed.

EMC Storage Integrator for Windows

EMC Storage Integrator (ESI) 3.0 for Windows is a management interface that provides the ability to view and provision block and file storage for Windows environments. ESI simplifies the steps involved in creating and provisioning storage to Hyper-V servers as a local disk or a mapped share. ESI also supports storage discovery and provisioning through PowerShell. The ESI for Windows product guides that are available on EMC Online Support provide more information.

Compute

The choice of a server platform for an EMC VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and other factors. For these reasons, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a given number of servers with a specific set of requirements, VSPEX documents the requirements for the number of processor cores and the amount of RAM. The same VSPEX solution can be implemented with 2 servers or with 20. For example, assume that the compute layer requirements for a given implementation are 25 processor cores and 200 GB of RAM. One customer wants to use white-box servers containing 16 processor cores and 64 GB of RAM, while another customer chooses a higher-end server with 20 processor cores and 144 GB of RAM.


In this example, the first customer needs four servers while the second customer needs two, as shown in Figure 6.

Figure 6. Compute layer flexibility
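The server-count arithmetic behind Figure 6 can be sketched in a few lines (an illustrative calculation; the function name is ours, not part of VSPEX):

```python
import math

def servers_needed(req_cores, req_ram_gb, cores_per_server, ram_per_server_gb):
    """Return the number of identical servers needed to satisfy both
    the core requirement and the RAM requirement (whichever dominates)."""
    by_cores = math.ceil(req_cores / cores_per_server)
    by_ram = math.ceil(req_ram_gb / ram_per_server_gb)
    return max(by_cores, by_ram)

# Requirements from the example: 25 processor cores and 200 GB of RAM
print(servers_needed(25, 200, 16, 64))   # white-box servers -> 4
print(servers_needed(25, 200, 20, 144))  # higher-end servers -> 2
```

Note that RAM, not cores, drives the white-box count: 200 GB across 64 GB servers requires four servers even though two would satisfy the core count.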

Note: To enable high availability at the compute layer, each customer needs one additional server with sufficient capacity to provide a failover platform in the event of a hardware outage.

In the compute layer, observe the following best practices:

 Use a number of identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies that might require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.

 If you are implementing hypervisor-layer high availability, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

 Implement the high-availability features available in the virtualization layer to ensure that the compute layer has sufficient resources to accommodate at least single-server failures. This allows you to implement minimal-downtime upgrades and tolerate single-unit failures.


Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX is flexible enough to meet your specific needs. The key constraint is the provision of sufficient processor cores and RAM per core to meet the needs of the target environment.

Network

The infrastructure network requires redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth, and is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution. An example of this kind of highly available network topology is depicted in Figure 7.

Note: The example is for IP-based networks, but the same underlying principles regarding multiple connections and elimination of single points of failure also apply to Fibre Channel-based networks.

Figure 7. Example of a highly available network design


This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security. EMC unified storage platforms provide network high availability or redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNX, combining multiple Ethernet ports into a single virtual device. If one port loses its link, traffic fails over to another port in the group, and all network traffic is distributed across the active links.

Storage

The storage layer is a key component of any cloud infrastructure solution, providing storage efficiency, management flexibility, and reduced total cost of ownership. This VSPEX solution uses the EMC VNX series to provide virtualization at the storage layer.

EMC VNX Snapshots

VNX Snapshots is a software feature that creates point-in-time data copies. VNX Snapshots can be used for data backups, software development and testing, repurposing, data validation, and local rapid restores. VNX Snapshots improves on the existing EMC VNX SnapView™ snapshot functionality by integrating with storage pools.

Note: LUNs created on physical RAID groups, also called RAID LUNs, support only SnapView snapshots. This limitation exists because VNX Snapshots requires pool space as part of its technology.

VNX Snapshots supports 256 writeable snapshots per pool LUN. It supports branching, also called "Snap of a Snap," as long as the total number of snapshots for any primary LUN is less than 256, which is a hard limit. VNX Snapshots uses redirect-on-write (ROW) technology. ROW redirects new writes destined for the primary LUN to a new location in the storage pool. This implementation differs from the copy-on-first-write (CoFW) approach used in SnapView, which holds writes to the primary LUN until the original data is copied to the reserved LUN pool to preserve a snapshot.

VNX Snapshots also supports consistency groups (CGs). Several pool LUNs can be combined into a CG and snapped concurrently. When a snapshot of a CG is initiated, all writes to the member LUNs are held until the snapshots have been created. Typically, CGs are used for LUNs that belong to the same application.

EMC VNX SnapSure

EMC VNX SnapSure™ is an EMC VNX file software feature that enables you to create and manage checkpoints that are point-in-time, logical images of a production file system (PFS). SnapSure uses a copy-on-first-modify principle. A PFS consists of blocks. When a block within the PFS is modified, a copy containing the block's original contents is saved to a separate volume called the SavVol.


Subsequent changes made to the same block in the PFS are not copied into the SavVol. SnapSure reads the original blocks from the PFS in the SavVol and the unchanged PFS blocks remaining in the PFS according to a bitmap and block map data-tracking structure. These blocks combine to provide a complete point-in-time image called a checkpoint. A checkpoint reflects the state of a PFS at the time the checkpoint was created. SnapSure supports these types of checkpoints:

 Read-only checkpoints—Read-only file systems created from a PFS

 Writeable checkpoints—Read/write file systems created from a read-only checkpoint

SnapSure can maintain a maximum of 96 read-only checkpoints and 16 writeable checkpoints per PFS, while allowing PFS applications continued access to real-time data.

Note: Each writeable checkpoint is associated with a read-only checkpoint, referred to as its baseline checkpoint. Each baseline checkpoint can have only one associated writeable checkpoint.

The document Using VNX SnapSure provides more detailed information.

EMC VNX Virtual Provisioning

EMC VNX Virtual Provisioning™ enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies to reduce power and cooling requirements and reduce capital expenditures.

Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes the utilization of your storage by allocating storage only as needed. Thick LUNs provide predictable high performance for your applications. Both types of LUNs benefit from the ease-of-use features of pool-based provisioning. Pools and pool LUNs are also the building blocks for advanced data services such as FAST VP, VNX Snapshots, and compression. Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and the User Capacity Threshold setting.

Virtual Provisioning allows you to expand the capacity of a storage pool from the Unisphere GUI after disks are physically attached to the system. VNX systems can rebalance allocated data elements across all member drives to use new drives after the pool is expanded. The rebalance function starts automatically and runs in the background after an expand action. You can monitor the progress of a rebalance operation from the General tab of the Pool Properties window in Unisphere, as shown in Figure 8.


Figure 8. Storage pool rebalance progress

LUN expansion

Use pool LUN expansion to increase the capacity of existing LUNs. It allows for provisioning larger capacity as business needs grow. The VNX family can expand a pool LUN without disrupting user access. You can expand a pool LUN with a few simple clicks, and the expanded capacity is immediately available. However, you cannot expand a pool LUN if it is part of a data-protection or LUN-migration operation. For example, snapshot LUNs or migrating LUNs cannot be expanded.

LUN shrink

Use LUN shrink to reduce the capacity of existing thin LUNs. VNX can shrink a pool LUN. This capability is available only for LUNs served to Windows Server 2008 and later. The shrinking process involves these steps:

1. Shrink the file system from Windows Disk Management.

2. Shrink the pool LUN using a command window and the DISKRAID utility. The DISKRAID utility is available through the VDS Provider, which is part of the EMC Solutions Enabler package.

The new LUN size appears as soon as the shrink process is complete. A background task reclaims the deleted or shrunk space and returns it to the storage pool. Once the task is complete, any other LUN in that pool can use the reclaimed space. For more detailed information on LUN expansion and shrink, refer to the EMC VNX Virtual Provisioning Applied Technology White Paper.


Alerting the user through the Capacity Threshold setting

Configure proactive alerts when using thin-provisioned file systems or storage pools. Monitor these resources so that storage is available for provisioning when needed and capacity shortages are avoided. Figure 9 demonstrates why provisioning with thin pools requires monitoring.

Figure 9. Thin LUN space utilization

Monitor the following values for thin pool utilization:

 Total capacity is the total physical capacity available to all LUNs in the pool.

 Total allocation is the total physical capacity currently assigned to all pool LUNs.

 Subscribed capacity is the total host-reported capacity supported by the pool.

 Over-subscribed capacity is the amount of user capacity configured for LUNs that exceeds the physical capacity in the pool.

Total allocation must never exceed the total capacity, but if it nears that point, add storage to the pool proactively before reaching a hard limit.
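The relationships among these values can be sketched with a small calculation (an illustrative model; the function and field names are ours, not Unisphere's):

```python
def pool_utilization(total_capacity_gb, lun_user_capacities_gb, lun_allocations_gb,
                     percent_full_threshold=70):
    """Compute thin-pool metrics and a threshold alert for a pool.
    percent_full_threshold mirrors the default user-settable 70% value."""
    subscribed = sum(lun_user_capacities_gb)           # host-reported capacity
    allocated = sum(lun_allocations_gb)                # physically assigned
    oversubscribed = max(0, subscribed - total_capacity_gb)
    percent_full = 100 * allocated / total_capacity_gb
    alert = percent_full >= percent_full_threshold
    return subscribed, oversubscribed, percent_full, alert

# Example: a 1,000 GB pool hosting three thin LUNs of 500 GB user capacity each,
# with 250 GB actually allocated on each
print(pool_utilization(1000, [500, 500, 500], [250, 250, 250]))
# -> (1500, 500, 75.0, True): the pool is oversubscribed by 500 GB and
#    75% full, which trips the 70% warning threshold
```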


Figure 10 shows the Storage Pool Properties dialog box in Unisphere, which displays physical-capacity parameters such as Free, Percent Full, Total Allocation, and Total Subscription, and virtual-capacity parameters such as Percent Subscribed and Oversubscribed By.

Figure 10. Examining storage pool space utilization

When storage pool capacity becomes exhausted, any requests for additional space allocation on thin-provisioned LUNs fail. Applications attempting to write data to these LUNs usually fail as well, and an outage is the likely result. To avoid this situation, monitor pool utilization so you are alerted when thresholds are reached; set the Percentage Full Threshold to allow enough buffer to correct the situation before an outage occurs. Edit this setting by clicking Advanced in the Storage Pool Properties dialog box, as shown in Figure 11. This alert is active only if there are one or more thin LUNs in the pool, because thin LUNs are the only way to oversubscribe a pool. If the pool contains only thick LUNs, the alert is not active because there is no risk of running out of space due to oversubscription. You can also specify the value for Percent Full Threshold, which equals Total Allocation/Total Capacity, when a pool is created.

Figure 11. Defining storage pool utilization thresholds

View alerts by clicking Alert in Unisphere. Figure 12 shows the Unisphere Event Monitor Wizard, where you can also select the option of receiving alerts through email, a paging service, or an SNMP trap.

Figure 12. Defining automated notifications for block


Table 1 lists information about thresholds and their settings.

Table 1. Thresholds and settings under VNX OE Block Release 33

Threshold type   Threshold range   Threshold default   Alert severity   Side effect
User settable    1%-84%            70%                 Warning          None
Built-in         N/A               85%                 Critical         Clears user settable alert

Allowing total allocation to exceed 90 percent of total capacity puts you at risk of running out of space and affecting all applications that use thin LUNs in the pool.

VNX FAST Cache

VNX FAST Cache, a part of the VNX FAST Suite, enables the use of flash drives as an expanded cache layer for the array. FAST Cache is array-wide, non-disruptive cache, available for both file and block storage. Frequently accessed data is copied to the FAST Cache in 64 KB increments. Subsequent reads and writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to flash drives. This dramatically improves the response times for the active data and reduces data hot spots that can occur within the LUN.
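The promotion behavior described above can be illustrated with a toy model (the promotion threshold, data structures, and function names here are our assumptions for illustration, not EMC's actual caching policy):

```python
from collections import Counter

CHUNK = 64 * 1024  # FAST Cache tracks and promotes data in 64 KB increments
PROMOTE_AFTER = 3  # assumed access count before promotion (illustrative only)

access_counts = Counter()
fast_cache = set()

def on_io(offset):
    """Track I/O per 64 KB chunk and promote frequently accessed chunks.
    Once a chunk is promoted, subsequent reads and writes are served
    from the flash-backed cache."""
    chunk = offset // CHUNK
    if chunk in fast_cache:
        return "hit"
    access_counts[chunk] += 1
    if access_counts[chunk] >= PROMOTE_AFTER:
        fast_cache.add(chunk)  # copy the hot chunk to flash
        return "promoted"
    return "miss"

for _ in range(3):
    result = on_io(128 * 1024)  # repeated access to the same chunk
print(result)  # the repeated accesses eventually trigger promotion
```

The point of the model is that promotion is driven by observed access frequency at 64 KB granularity, so a hot base image is served from flash without moving entire LUNs.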

VNX FAST VP (optional)

VNX FAST VP, a part of the VNX FAST Suite, enables you to automatically tier data across multiple types of drives to balance differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 256 MB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is done as part of a regularly scheduled maintenance operation.
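The slice-relocation idea can be sketched as a ranking problem (a simplified model; the tier sizes, ranking rule, and function names are illustrative assumptions, not FAST VP's actual algorithm):

```python
SLICE_MB = 256  # FAST VP relocates data in 256 MB slices

def relocate(slice_access_counts, tier_capacities_in_slices):
    """Rank slices by access frequency and fill tiers from fastest to
    slowest, modeling a scheduled relocation pass.
    slice_access_counts: {slice_id: io_count}
    tier_capacities_in_slices: capacities ordered fastest tier first.
    Returns {slice_id: tier_index}."""
    placement = {}
    ranked = sorted(slice_access_counts, key=slice_access_counts.get, reverse=True)
    tier, free = 0, tier_capacities_in_slices[0]
    for slice_id in ranked:
        while free == 0:          # fastest tier full: spill to the next tier
            tier += 1
            free = tier_capacities_in_slices[tier]
        placement[slice_id] = tier
        free -= 1
    return placement

counts = {"a": 900, "b": 50, "c": 700, "d": 5}
print(relocate(counts, [2, 2]))  # the two hottest slices land on tier 0
```

A real relocation pass also weighs recency and tiering policy per LUN, but the core mechanism is the same: hot 256 MB slices migrate up, cold slices migrate down.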

VNX file shares

In many environments it is important to have a common location to store files accessed by many different individuals. This is implemented as CIFS or NFS file shares from a file server. VNX storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency improvement features. Configuring and Managing CIFS on VNX provides more information.

ROBO

Organizations with remote offices and branch offices (ROBO) often prefer to locate data and applications close to the users in order to provide better performance and lower latency. In these environments, IT departments must balance the benefits of local support with the need to maintain central control. Local systems and storage should be easy for local personnel to administer, but should also support remote management and flexible aggregation tools that minimize the demands on those local resources. With VSPEX, you can accelerate the deployment of applications at remote offices and branch offices. Customers can also use Unisphere Remote to consolidate the monitoring, system alerts, and reporting of hundreds of locations while maintaining simplicity of operation and unified storage functionality for local managers.


Backup and recovery

Backup and recovery provides data protection by backing up data files or volumes according to defined schedules and restoring data from backup if recovery is needed after a disaster. In this VSPEX solution, EMC Avamar provides backup and recovery for up to 2,000 virtual machines.

EMC Avamar

EMC Avamar provides methods to back up virtual desktops using either image-level or guest-based operations. Avamar runs the deduplication engine at the virtual machine disk (VHDX) level for image backups and at the file level for guest-based backups.

Image-level protection enables backup clients to make a copy of all the virtual disks and configuration files associated with a particular virtual desktop in the event of hardware failure, corruption, or accidental deletion of the virtual desktop. Avamar significantly reduces the backup and recovery time of the virtual desktop by using changed block tracking (CBT) on both backup and recovery.

Guest-based protection runs like traditional backup solutions. Guest-based backup can be used on any virtual machine running an operating system for which an Avamar backup client is available. It enables detailed control over the content and over inclusion and exclusion patterns. This can be used to prevent data loss due to user errors, such as accidental file deletion. Installing the desktop/laptop agent on the system to be protected enables self-service recoverability of user data.

ShareFile

ShareFile is a cloud-based file-sharing and storage service built for enterprise-class storage and security. ShareFile enables users to securely share documents with other users. ShareFile users include employees and users who are outside of the enterprise directory (referred to as clients).

ShareFile StorageZones

ShareFile StorageZones allow businesses to share files across the company while addressing compliance and regulatory concerns. StorageZones allow customers to keep their data on storage systems that are onsite, enable the sharing of large files with full encryption, and provide the ability to synchronize files across multiple devices. By keeping data onsite and closer to users than data residing in the public cloud, StorageZones can provide improved performance as well as improved security. ShareFile StorageZones allow you to:

 Use a StorageZone with, or instead of, the ShareFile-managed cloud storage.

 Configure Citrix CloudGateway Enterprise to integrate ShareFile services with Citrix Receiver for user authentication and user provisioning.

 Take advantage of automated reconciliation between the ShareFile cloud and a company's StorageZone deployment.

 Enable automated antivirus scans of uploaded files.


 Enable file recovery from the Storage Center backup (the server component of a StorageZone is called a Storage Center). You can browse the file records for a particular date and time and tag any files and folders to restore from the Storage Center backup.

ShareFile StorageZone Architecture

Figure 13 shows the ShareFile high-level architecture.

Figure 13. ShareFile high-level architecture

ShareFile consists of three components:

 Client—accesses the ShareFile service through one of the native tools, such as a browser or Citrix Receiver, or directly through the application programming interface (API).

 Control Plane—performs functions such as storing files, folders, and account information, access control, reporting, and various other brokering functions. The Control Plane resides in multiple Citrix data centers located worldwide.

 StorageZone—defines the locations where data is stored.

The server component of a StorageZone is called Storage Center. High availability requires at least two Storage Centers per StorageZone, and a StorageZone must use a single file share for all of its Storage Centers. ShareFile Storage Center extends the ShareFile Software-as-a-Service (SaaS) cloud storage by providing the ShareFile account with on-premises private storage, referred to as a StorageZone. The ShareFile on-premises storage differs from cloud storage as follows:

 ShareFile-managed cloud storage is a public multi-tenant storage system maintained by Citrix.

 A ShareFile Storage Center is a private single-tenant storage system maintained by the customer that can be used only by approved customer accounts.

By default, ShareFile stores data in the secure ShareFile-managed cloud storage. The ShareFile Storage Center feature enables you to configure a private, onsite StorageZone. A StorageZone defines the locations where data is stored and enables performance optimization by locating data storage close to users. Determine the number of StorageZones and their locations based on the organization's performance and compliance requirements. In general, assigning users to the StorageZone that is geographically closest to them is the best practice for optimizing performance.

Storage Center is a web service that handles all HTTPS operations from end users and the ShareFile control subsystem. The ShareFile control subsystem handles all operations not related to file contents, such as authentication, authorization, file browsing, configuration, metadata, sending and requesting files, and load balancing. The control subsystem also performs Storage Center health checks and prevents requests from being sent to offline servers. The ShareFile control subsystem is maintained in Citrix Online data centers.

The ShareFile storage subsystem handles operations related to file contents, such as uploads, downloads, and antivirus verification. When you create a StorageZone, you create a private storage subsystem for your ShareFile data.

For a production deployment of ShareFile, the recommended best practice is to use at least two servers with Storage Center installed for high availability. When you install Storage Center, you create a StorageZone. You can then install Storage Center on another server and join it to the same StorageZone. Storage Centers that belong to the same StorageZone must use the same file share for storage.


Using ShareFile StorageZone with VSPEX architectures

Figure 14 illustrates the VSPEX end-user computing for Citrix XenDesktop environment with added infrastructure to support ShareFile StorageZone with Storage Center. Server capacity is specified in generic terms for required minimums of CPU and memory. The customer is free to select the server and networking hardware that meets or exceeds the stated minimums. The recommended storage delivers a highly available architecture for the ShareFile StorageZone deployment.


Figure 14. Logical architecture: VSPEX end-user computing for Citrix XenDesktop with ShareFile StorageZone


Server

A high-availability production environment requires a minimum of two servers (virtual machines) with Storage Center installed. Table 2 summarizes the CPU and memory requirements for implementing ShareFile StorageZone with Storage Center.

Table 2. Minimum hardware resources to support ShareFile StorageZone with Storage Center

Component        CPU (cores)   Memory (GB)   Reference
Storage Center   2             4             Storage Center system requirements on Citrix eDocs

Network

Provide sufficient network ports to support the two additional Storage Center servers. You can implement the networking components using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Storage

ShareFile StorageZone requires a CIFS share to provide private data storage for Storage Center. The EMC VNX storage family provides both file and block access with a broad feature set, making it an ideal choice for a ShareFile StorageZone storage implementation. The VNX series supports a wide range of business-class features that are ideal for ShareFile StorageZone storage, including:


• Fully Automated Storage Tiering for Virtual Pools (FAST VP)
• FAST Cache
• Data compression and file deduplication
• Thin provisioning
• Replication
• Checkpoints
• File-level retention
• Quota management


Table 3 lists the recommended EMC VNX storage for the ShareFile StorageZone CIFS share.

Table 3. Recommended EMC VNX storage for the ShareFile StorageZone CIFS share

Storage: CIFS share
Notes: The configuration assumes that each user utilizes 10 GB of private storage space.

Configuration:
For 500 users:
• 2 x Data Movers (active/standby, CIFS variant only)
• 8 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks
For 1,000 users:
• 2 x Data Movers (active/standby, CIFS variant only)
• 16 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks
For 2,000 users:
• 2 x Data Movers (active/standby, CIFS variant only)
• 24 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks
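As a quick sanity check of the Table 3 configurations against the 10 GB-per-user assumption, the sketch below compares the capacity the user population needs with the raw capacity of each disk set. Raw-to-usable conversion depends on the RAID scheme chosen on the VNX, so the raw figure only bounds the usable capacity; treat this as an illustration, not a sizing tool.

```python
# Sketch: compare per-user capacity needs (10 GB/user, per Table 3)
# against the raw capacity of the NL-SAS disk sets in Table 3.
DISK_TB = 2.0                # 2 TB NL-SAS drives from Table 3
GB_PER_USER = 10             # private storage assumption from Table 3

CONFIGS = {500: 8, 1000: 16, 2000: 24}   # users -> NL-SAS disk count

def required_tb(users, gb_per_user=GB_PER_USER):
    """Capacity the user population needs, in TB (1 TB = 1024 GB here)."""
    return users * gb_per_user / 1024

def raw_tb(disks, disk_tb=DISK_TB):
    """Raw (pre-RAID) capacity of the disk set, in TB."""
    return disks * disk_tb

if __name__ == "__main__":
    for users, disks in CONFIGS.items():
        print(f"{users:>5} users: need {required_tb(users):6.2f} TB, "
              f"raw {raw_tb(disks):5.1f} TB")
```

Each configuration leaves ample headroom between required and raw capacity, which is consistent with some of the raw space being consumed by RAID protection and file-system overhead.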


Chapter 4

Solution Overview

This chapter presents the following topics:

Solution overview ..................................................... 45
Solution architecture ................................................. 45
Server configuration guidelines ....................................... 57
Network configuration guidelines ...................................... 60
Storage configuration guidelines ...................................... 62
High availability and failover ........................................ 82
Validation test profile ............................................... 85
Backup environment configuration guidelines ........................... 86
Sizing guidelines ..................................................... 87
Reference workload .................................................... 87
Applying the reference workload ....................................... 88
Implementing the reference architectures .............................. 89
Quick assessment ...................................................... 92


Solution overview

This chapter provides a comprehensive guide to the major aspects of this solution. Server capacity is specified in generic terms for required minimums of CPU, memory, and network interfaces. You can select any server and networking hardware that meets or exceeds the stated minimums. EMC has validated the specified storage architecture, along with a system meeting the server and network requirements outlined, to provide high levels of performance while delivering a highly available architecture for your end-user computing deployment.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual desktops and has been validated by EMC. In practice, each virtual desktop type has its own set of requirements that rarely fit a predefined idea of what a virtual desktop should be. Any discussion of end-user computing must therefore begin by defining a reference workload. Not all desktops perform the same tasks, and building a reference that takes into account every possible combination of workload characteristics is impractical.

Note: VSPEX uses the concept of a reference workload to describe and define a virtual machine. Therefore, one physical or virtual desktop in an existing environment might not be equal to one virtual desktop in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. Applying the reference workload provides a detailed description.
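The evaluation described in the note (expressing an existing desktop in units of the reference workload) can be sketched as follows. The reference resource values below are illustrative assumptions only, not the validated figures from Table 13; substitute the actual reference-desktop characteristics for real sizing.

```python
# Sketch of "applying the reference workload": express an existing
# desktop in units of a reference virtual desktop by taking the largest
# per-resource ratio, so no single resource is undersized.
# REFERENCE holds assumed, illustrative values -- not the Table 13 figures.
import math

REFERENCE = {"vcpus": 1, "ram_gb": 2, "iops": 10}  # assumption, for illustration

def equivalent_reference_desktops(vcpus, ram_gb, iops, ref=REFERENCE):
    """Number of reference desktops an existing desktop is equivalent to."""
    ratios = (vcpus / ref["vcpus"], ram_gb / ref["ram_gb"], iops / ref["iops"])
    return math.ceil(max(ratios))

# Example: a power-user desktop with 2 vCPUs, 8 GB RAM, and 25 IOPS counts
# as 4 reference desktops, driven by the RAM ratio (8 / 2 = 4).
```

This mirrors the guidance that one existing desktop might not equal one VSPEX virtual desktop: the resource that is most oversized relative to the reference determines how many reference units the desktop consumes.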

Solution architecture

We¹ validated the VSPEX end-user computing solution with EMC VNX at three different points of scale. These defined configurations, expressed in terms of the reference workload, form the basis for creating a custom solution.

Logical architecture

The architecture diagrams in this section show the layout of the major components of the solution for the two storage variants: SMB and FC.

¹ In this Proven Infrastructure Guide, "we" refers to the EMC Solutions engineering team that validated the solution.


Figure 15 depicts the logical architecture of the SMB variant, where 10 GbE carries all network traffic.


Figure 15. Logical architecture for SMB variant

Note: You can implement the networking components of the solution using 1 Gb/s or 10 Gb/s IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.


Figure 16 depicts the logical architecture of the FC variant, wherein an FC SAN carries storage traffic and 10 GbE carries management and application traffic.


Figure 16. Logical architecture for FC variant

Note: You can implement the networking components of the solution using 1 Gb/s or 10 Gb/s IP networks, provided that bandwidth and redundancy are sufficient to meet the listed requirements.

Key components

Citrix XenDesktop 7 delivery controller
We used two Citrix XenDesktop controllers to provide redundant virtual desktop delivery, authenticate users, manage the assembly of users' virtual desktop environments, and broker connections between users and their virtual desktops. In this reference architecture, the controllers are installed on Windows Server 2012 and hosted as virtual machines on Hyper-V Server 2012.

Citrix Provisioning Services server
We used two Citrix Provisioning Services (PVS) servers to provide redundant streaming services that stream desktop images from vDisks, as needed, to target devices. In this reference architecture, vDisks are stored on a CIFS share that is hosted by the VNX storage system.


Virtual desktops
We provisioned virtual desktops running Windows 7 or Windows 8 using MCS and PVS.

Microsoft Hyper-V Server 2012
Microsoft Hyper-V provides a common virtualization layer to host a server environment. Table 13 on page 93 lists the specific characteristics of the validated environment. Microsoft Hyper-V Server 2012 provides a highly available infrastructure through features such as the following:
• Live Migration—Provides live migration of virtual machines within clustered and non-clustered servers with no virtual machine downtime or service disruption
• Storage Live Migration—Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption

Microsoft System Center Virtual Machine Manager 2012 SP1
Microsoft System Center Virtual Machine Manager provides a scalable and extensible platform that forms the foundation of virtualization management for the Microsoft Hyper-V cluster. It manages all Hyper-V hosts and their virtual machines.

SQL Server
Microsoft System Center Virtual Machine Manager and the XenDesktop controllers require a database service to store configuration and monitoring details. A Microsoft SQL Server 2012 instance running on Windows Server 2012 is used for this purpose.

Active Directory server
Active Directory (AD) services are required for the various solution components to function properly. The Microsoft AD Directory Service running on a Windows Server 2012 server is used for this purpose.

DHCP server
The DHCP server centrally manages the IP address scheme for the virtual desktops. This service is hosted on the same virtual machine as the domain controller and DNS server. The Microsoft DHCP Service running on a Windows Server 2012 server is used for this purpose.

DNS server
DNS services are required for the various solution components to perform name resolution. The Microsoft DNS Service running on a Windows Server 2012 server is used for this purpose.

EMC SMI-S Provider for Microsoft System Center Virtual Machine Manager 2012 SP1
EMC SMI-S Provider for Microsoft System Center Virtual Machine Manager is a plug-in to Microsoft System Center Virtual Machine Manager that provides storage management for EMC arrays directly from the client, helping to provide a unified management interface.


IP/storage networks
All network traffic is carried by a standard Ethernet network with redundant cabling and switching. User and management traffic is carried over a shared network, while SMB storage traffic is carried over a private, non-routable subnet.

IP network
The Ethernet network infrastructure provides IP connectivity between virtual desktops, Hyper-V clusters, and VNX storage. For the SMB variant, the IP infrastructure enables Hyper-V servers to access CIFS shares on the VNX, and desktop streaming from PVS servers, with high bandwidth and low latency. It also enables desktop users to redirect their user profiles and home directories to the centrally maintained CIFS shares on the VNX.

Fibre Channel (FC) network
For the FC variant, storage traffic between all Hyper-V hosts and the VNX storage system is carried over an FC network. All other traffic is carried over the IP network.

EMC VNX5400 array
A VNX5400 array provides storage by presenting SMB/FC storage to Hyper-V hosts for up to 1,000 virtual desktops.

EMC VNX5600 array
A VNX5600 array provides storage by presenting SMB/FC storage to Hyper-V hosts for up to 2,000 virtual desktops.

VNX family storage arrays include the following components:
• Storage processors (SPs) support block data with UltraFlex I/O technology that supports the Fibre Channel, iSCSI, and Fibre Channel over Ethernet (FCoE) protocols. The SPs provide access for all external hosts and for the file side of the VNX array.
• The Disk-Processor Enclosure (DPE) is 3U in size and houses each storage processor as well as the first tray of disks. This form factor is used in the VNX5300 and VNX5500.
• X-Blades (or Data Movers) access data from the back end and provide host access using the same UltraFlex I/O technology, supporting the NFS, CIFS, MPFS, and pNFS protocols. The X-Blades in each array are scalable and provide redundancy to ensure that no single point of failure exists.
• The Data Mover Enclosure (DME) is 2U in size and houses the Data Movers (X-Blades). The DME is similar in form to the SPE and is used on all VNX models that support file protocols.
• Standby power supplies are 1U in size and provide enough power to each storage processor to ensure that any data in flight is de-staged to the vault area in the event of a power failure, so that no writes are lost. Upon restart of the array, the pending writes are reconciled and persisted.
• Control Stations are 1U in size and provide management functions to the file-side components referred to as X-Blades. The Control Station is responsible for X-Blade failover and can optionally be paired with a matching secondary Control Station to ensure redundancy on the VNX array.
• Disk-Array Enclosures (DAEs) house the drives used in the array.

EMC Avamar
Avamar software provides the platform for protecting virtual machines. This protection strategy uses persistent virtual desktops. It also enables image protection and end-user recoveries.

Hardware resources

Table 4 lists the hardware used in this solution.

Table 4. Solution hardware

Servers for virtual desktops
Notes: Total server capacity required to host virtual desktops.
Memory:
• Desktop OS: 2 GB RAM per desktop
  - 1 TB RAM across all servers for 500 virtual desktops
  - 2 TB RAM across all servers for 1,000 virtual desktops
  - 4 TB RAM across all servers for 2,000 virtual desktops
• Server OS: 0.6 GB RAM per desktop
  - 300 GB RAM across all servers for 500 virtual desktops
  - 600 GB RAM across all servers for 1,000 virtual desktops
  - 1.2 TB RAM across all servers for 2,000 virtual desktops
CPU:
• Desktop OS: 1 vCPU per desktop (8 desktops per core)
  - 63 cores across all servers for 500 virtual desktops
  - 125 cores across all servers for 1,000 virtual desktops
  - 250 cores across all servers for 2,000 virtual desktops
• Server OS: 0.2 vCPU per desktop (5 desktops per core)
  - 100 cores across all servers for 500 virtual desktops
  - 200 cores across all servers for 1,000 virtual desktops
  - 400 cores across all servers for 2,000 virtual desktops
Network:
• 6 x 1 GbE NICs per standalone server for 500 virtual desktops
• 3 x 10 GbE NICs per blade chassis or 6 x 1 GbE NICs per standalone server for 1,000/2,000 virtual desktops

Network infrastructure
Minimum switching capability for the SMB variant (redundant LAN configuration):
• Two physical switches
• 6 x 1 GbE ports per Hyper-V server or three 10 GbE ports per blade chassis
• 1 x 1 GbE port per Control Station for management
• 2 x 10 GbE ports per Data Mover for data
Minimum switching capability for the FC variant (redundant LAN/SAN configuration):
• 2 x 1 GbE ports per Hyper-V server
• 4 x 4/8 Gb FC ports for the VNX back end
• 2 x 4/8 Gb FC ports per Hyper-V server

Storage
Common:
• 2 x 10 GbE interfaces per Data Mover
• 2 x 8 Gb FC ports per storage processor (FC variant only)

VNX shared storage for virtual desktops:
For 500 virtual desktops:
• 2 x Data Movers (active/standby, SMB variant only)
• 600 GB 15k rpm 3.5-inch SAS disks:
    Drive count   PvD   Non-PvD   HSD
    PVS           16    8         8
    MCS           13    10        10
• 3 x 100 GB 3.5-inch flash drives
For 1,000 virtual desktops:
• 2 x Data Movers (active/standby, SMB variant only)
• 600 GB 15k rpm 3.5-inch SAS disks:
    Drive count   PvD   Non-PvD   HSD
    PVS           32    16        16
    MCS           26    20        20
• 3 x 100 GB 3.5-inch flash drives
For 2,000 virtual desktops:
• 2 x Data Movers (active/standby, SMB variant only)
• 600 GB 15k rpm 3.5-inch SAS disks:
    Drive count   PvD   Non-PvD   HSD
    PVS           64    32        32
    MCS           26    40        40
• 5 x 100 GB 3.5-inch flash drives

Optional for user data:
• 16 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks for 500 virtual desktops
• 24 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks for 1,000 virtual desktops
• 48 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks for 2,000 virtual desktops

Optional for infrastructure storage:
• 5 x 600 GB 15k rpm 3.5-inch SAS disks for 500, 1,000, or 2,000 virtual desktops

Shared infrastructure
In most cases, a customer environment already has infrastructure services such as Active Directory and DNS configured; the setup of these services is beyond the scope of this document. If this solution is implemented with no existing infrastructure, the following minimum additional servers are required (services can be migrated into VSPEX post-deployment but must exist before VSPEX can be deployed):
• 2 x physical servers
• 20 GB RAM per server
• 4 x processor cores per server
• 2 x 1 GbE ports per server

EMC next-generation backup
Avamar:
• 1 x Gen4 utility node
• 1 x Gen4 3.9 TB spare node
• 3 x Gen4 3.9 TB storage nodes

Servers for customer infrastructure
Minimum number required (servers and the roles they fulfill might already exist in the customer environment):
• 2 x physical servers
• 20 GB RAM per server
• 4 x processor cores per server
• 2 x 1 GbE ports per server


Software resources

Table 5 lists the software used in this solution.

Table 5. Solution software

VNX5400 or VNX5600 (shared storage, file systems)
  VNX Operating Environment (OE) for file: Release 8.1.0-34746
  VNX OE for block: Release 33 (05.33.000.3.746)
  ESI for Windows: Version 3.0

XenDesktop desktop virtualization
  Citrix XenDesktop Controller: Version 7, Platinum Edition
  Operating system for XenDesktop Controller: Windows Server 2012 Standard Edition
  Microsoft SQL Server: Version 2012 Standard Edition

Next-generation backup
  Avamar: 7.0

Microsoft Hyper-V
  Hyper-V Server: Hyper-V Server 2012
  System Center Virtual Machine Manager: 2012 SP1
  Operating system for System Center Virtual Machine Manager: Windows Server 2012 Standard
  PowerPath (FC variant only): 5.7

Virtual desktops
Note: Other than the base OS, this software was used for solution validation and is not required.
  Base operating system: Microsoft Windows 7 Enterprise (32-bit) SP1; Windows Server 2008 R2 SP1 Standard Edition
  Microsoft Office: Office Enterprise 2007 SP3
  Internet Explorer: 8.0.7601.17514
  Adobe Reader: 9.1
  Adobe Flash Player: 11.4.402.287
  Bullzip PDF Printer: 9.1.1454
  FreeMind: 0.8.1


Sizing for validated configuration

When selecting servers for this solution, ensure that the processor core meets or exceeds the performance of the Intel Nehalem family at 2.66 GHz. As servers with greater processor speeds, performance, and higher core density become available, you can consolidate servers as long as the required total core and memory count is met and enough servers are incorporated to support the necessary level of high availability. As with servers, you can also consolidate network interface card (NIC) speed and quantity, as long as you maintain the overall bandwidth requirements for this solution and sufficient redundancy to support high availability.

Table 6 shows the configurations of the servers that support this solution. Each server has two sockets of four cores and 128 GB of RAM, plus two 10 GbE ports for each blade chassis.

Table 6. Configurations that support this solution

Desktop type   No. of servers   No. of virtual desktops   Total cores   Total RAM
Desktop OS     8                500                       64            1 TB
Desktop OS     16               1,000                     128           2 TB
Desktop OS     32               2,000                     256           4 TB
Server OS      13               500                       100           300 GB
Server OS      25               1,000                     200           600 GB
Server OS      50               2,000                     400           1.2 TB

As shown in Table 13 on page 88, supporting eight virtual desktops requires at least one core and a minimum of 2 GB of RAM per desktop. Consider the correct balance of memory and cores required for the number of virtual desktops a server is to support. For example, a server that supports 24 virtual desktops requires a minimum of three cores and a minimum of 48 GB of RAM.

IP network switches used to implement this reference architecture must have a minimum non-blocking backplane capacity of 96 Gb/s (for 500 virtual desktops), 192 Gb/s (for 1,000 virtual desktops), or 320 Gb/s (for 2,000 virtual desktops) and must support the following features:

• IEEE 802.3x Ethernet flow control
• IEEE 802.1Q VLAN tagging
• Ethernet link aggregation using the IEEE 802.1AX (formerly IEEE 802.3ad) Link Aggregation Control Protocol
• SNMP management capability
• Jumbo frames
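The core and memory balance described above can be sketched as a small calculator. It assumes the Desktop OS figures used in this chapter (8 desktops per core, 2 GB RAM per desktop) plus the 2 GB Hyper-V parent-partition reservation discussed later; treat it as an illustration rather than a sizing tool.

```python
# Sketch: minimum cores and RAM for a server hosting Desktop OS desktops,
# per the ratios in this chapter. The 2 GB host reservation is the
# Hyper-V parent-partition guideline from the memory section.
import math

DESKTOPS_PER_CORE = 8
RAM_GB_PER_DESKTOP = 2
HOST_RESERVATION_GB = 2  # Hyper-V parent partition

def server_minimums(desktops_per_server):
    """Return (min_cores, min_ram_gb) for one host, including the
    parent-partition reservation."""
    cores = math.ceil(desktops_per_server / DESKTOPS_PER_CORE)
    ram_gb = desktops_per_server * RAM_GB_PER_DESKTOP + HOST_RESERVATION_GB
    return cores, ram_gb

# The worked example from the text: 24 desktops per server needs a minimum
# of 3 cores and 48 GB of desktop RAM (50 GB including the reservation).
```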


Choose the number and type of switches required to support high availability and choose a network vendor that can provide easily available parts, good service, and optimal support contracts. The network configuration should include the following: 

• A minimum of two switches to support redundancy
• Redundant power supplies
• A minimum of 40 x 1 GbE ports (for 500 virtual desktops), two 1 GbE and fourteen 10 GbE ports (for 1,000 virtual desktops), or two 1 GbE and twenty-two 10 GbE ports (for 2,000 virtual desktops), distributed for high availability
• The appropriate uplink ports for customer connectivity

Although the use of 10 GbE ports should align with the ports on the server and storage, keep in mind the overall network requirements for the solution and the level of redundancy required to support high availability. Consider additional server NICs and storage connections for specific implementation requirements. The management infrastructure (Active Directory, DNS, DHCP, and SQL Server) can be supported on two servers similar to those previously defined, but requires a minimum of only 20 GB of RAM instead of 128 GB.


Server configuration guidelines

When you design and order the compute/server layer of the VSPEX solution, consider several factors that might alter the final purchase. From a virtualization perspective, if you fully understand the system's workload, features like dynamic memory can reduce the aggregate memory requirement. If the virtual desktop pool does not have a high level of peak or concurrent usage, you can reduce the number of vCPUs. Conversely, if the applications being deployed are highly computational in nature, you might need to increase the number of CPUs and the amount of memory purchased.

Table 7 provides configuration details for the virtual desktop servers and network hardware.

Table 7. Server hardware

Servers for virtual desktops

CPU:
• Desktop OS: 1 vCPU per desktop (8 desktops per core)
  - 63 cores across all servers for 500 virtual desktops
  - 125 cores across all servers for 1,000 virtual desktops
  - 250 cores across all servers for 2,000 virtual desktops
• Server OS: 0.2 vCPU per desktop (5 desktops per core)
  - 100 cores across all servers for 500 virtual desktops
  - 200 cores across all servers for 1,000 virtual desktops
  - 400 cores across all servers for 2,000 virtual desktops

Memory:
• Desktop OS: 2 GB RAM per desktop, plus a 2 GB RAM reservation per Hyper-V host
  - 1 TB RAM across all servers for 500 virtual desktops
  - 2 TB RAM across all servers for 1,000 virtual desktops
  - 4 TB RAM across all servers for 2,000 virtual desktops
• Server OS: 0.6 GB RAM per desktop, plus a 2 GB RAM reservation per Hyper-V host
  - 300 GB RAM across all servers for 500 virtual desktops
  - 600 GB RAM across all servers for 1,000 virtual desktops
  - 1.2 TB RAM across all servers for 2,000 virtual desktops

Network:
• 6 x 1 GbE NICs per server for 500 virtual desktops
• 3 x 10 GbE NICs per blade chassis or 6 x 1 GbE NICs per standalone server for 1,000 or 2,000 virtual desktops


Microsoft Hyper-V memory virtualization for VSPEX

Microsoft Hyper-V has a number of advanced features that help to maximize performance and overall resource utilization. The most important of these pertain to memory management. This section describes some of these features and the items you must consider when using them in the environment. In general, you can consider virtual machines on a single hypervisor consuming memory as a pool of resources. Figure 17 shows an example of memory consumption at the hypervisor level.

Figure 17. Hypervisor memory consumption


Dynamic Memory
Dynamic Memory, introduced in Windows Server 2008 R2 SP1, increases physical memory efficiency by treating memory as a shared resource and allocating it to virtual machines dynamically. The actual consumed memory of each virtual machine is adjusted on demand. Dynamic Memory enables more virtual machines to run by reclaiming unused memory from idle virtual machines. In Windows Server 2012, Dynamic Memory also enables a dynamic increase of the maximum memory available to virtual machines.

Non-Uniform Memory Access
Non-Uniform Memory Access (NUMA) is a multi-node computer technology that enables a CPU to access remote-node memory. This type of memory access is costly in terms of performance, so Windows Server 2012 employs process affinity that strives to keep threads pinned to a particular CPU to avoid remote-node memory access. In previous versions of Windows, this feature was available only to the host; Windows Server 2012 extends the functionality to virtual machines, where it improves performance.

Smart Paging
With Dynamic Memory, Hyper-V allows virtual machines to exceed the available physical memory, so there is likely a gap between a virtual machine's minimum memory and its startup memory. Smart Paging is a memory management technique that uses disk resources as a temporary memory replacement: it swaps less-used memory out to disk storage and swaps it back in when needed. The drawback is that this can degrade performance. Hyper-V continues to use guest paging when host memory is oversubscribed because it is more efficient than Smart Paging.

Memory configuration guidelines

This section provides guidelines for allocating memory to virtual machines. The guidelines take into account Hyper-V memory overhead and the virtual machine memory settings.

Hyper-V memory overhead
The virtualization of memory resources incurs associated overhead, including the memory consumed by Hyper-V, the parent partition, and additional overhead for each virtual machine. For this solution, leave at least 2 GB of memory for the Hyper-V parent partition.

Allocating memory to virtual machines
The proper sizing of memory for a virtual machine in VSPEX architectures is based on many factors. With the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing, and making adjustments, as discussed later in this paper. Table 13 on page 88 outlines the resources used by a single virtual machine.
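As a rough illustration of how the 2 GB parent-partition guideline combines with per-desktop memory to yield a host count, consider the sketch below. The 128 GB-per-server figure follows Table 6; reserving one extra host as an HA spare is an assumption of this sketch, not a validated requirement.

```python
# Sketch: hosts needed for a desktop population, reserving 2 GB per host
# for the Hyper-V parent partition. Defaults follow the Desktop OS
# figures in this guide (2 GB/desktop, 128 GB hosts); the HA spare is an
# illustrative assumption.
import math

def hosts_needed(desktops, host_ram_gb=128, ram_per_desktop_gb=2,
                 parent_partition_gb=2, ha_spare=1):
    usable = host_ram_gb - parent_partition_gb      # RAM left for guests
    per_host = usable // ram_per_desktop_gb         # desktops per host
    return math.ceil(desktops / per_host) + ha_spare

# 500 desktops: 63 desktops fit in each 128 GB host after the 2 GB
# reservation, so 8 hosts carry the load, plus the assumed spare.
```

Note that the 8-host result for 500 desktops matches the Desktop OS row of Table 6; the spare is on top of that validated figure.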


Network configuration guidelines

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines take into account jumbo frames, VLANs, and LACP on EMC unified storage. Table 4 on page 50 provides detailed network resource requirements.

Table 8. Hardware resources for network

Network infrastructure (minimum switching capacity):

Block:
• 2 physical switches
• 2 x 10 GbE ports per Microsoft Hyper-V server
• 1 x 1 GbE port per Control Station for management
• 2 x FC/CEE/10 GbE ports per Microsoft Hyper-V server, for the storage network
• 2 x FC/CEE/10 GbE ports per SP, for desktop data
• 2 x 10 GbE ports per Data Mover for user data

File:
• 2 physical switches
• 4 x 10 GbE ports per Microsoft Hyper-V server
• 1 x 1 GbE port per Control Station for management
• 2 x 10 GbE ports per Data Mover for data

Note: The solution can use 1 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.

VLAN


It is a best practice to isolate network traffic so that host-to-storage traffic, host-to-client traffic, and management traffic all move over isolated networks. In some cases, physical isolation might be required for regulatory or policy compliance reasons, but in many cases logical isolation with VLANs is sufficient. This solution calls for a minimum of three VLANs:
• Client access
• Storage
• Management


The VLANs are illustrated in Figure 18.

Figure 18. Required networks

Note: The diagram demonstrates the network connectivity requirements for a VNX array using 10 GbE network connections. A similar topology should be created for an array using 1 GbE network connections.

The client access network is for users of the system (clients) to communicate with the infrastructure. The storage network is used for communication between the compute layer and the storage layer. The management network gives administrators a dedicated way to access the management connections on the storage array, network switches, and hosts.

Notes:
• Some best practices call for additional network isolation for cluster traffic, virtualization-layer communication, and other features. These additional networks can be implemented, but they are not required.
• If you choose the Fibre Channel storage network option for the deployment, similar best practices and design principles apply.


Enable jumbo frames

This EMC VSPEX end-user computing solution recommends setting the MTU to 9,000 bytes (jumbo frames) for efficient storage and migration traffic.
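The benefit of jumbo frames comes from amortizing per-frame overhead across a larger payload. The following illustrative Python sketch compares payload efficiency at the two common MTU values; the overhead figures (40 bytes of TCP/IPv4 headers, 38 bytes of Ethernet framing including preamble, header, FCS, and inter-frame gap) are general networking assumptions, not values from this guide:

```python
# Rough payload-efficiency comparison for standard vs. jumbo frames.
# Overhead figures are illustrative assumptions: 40 B of TCP/IPv4
# headers plus 38 B of Ethernet framing per frame on the wire.
ETH_OVERHEAD = 38
TCPIP_HEADERS = 40

def payload_efficiency(mtu: int) -> float:
    """Fraction of on-wire bytes that carry application payload."""
    payload = mtu - TCPIP_HEADERS
    on_wire = mtu + ETH_OVERHEAD
    return payload / on_wire

for mtu in (1500, 9000):
    # ~94.9% at MTU 1500 vs ~99.1% at MTU 9000
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} payload efficiency")
```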

Link aggregation

A link aggregation resembles an Ethernet channel but uses the IEEE 802.3ad Link Aggregation Control Protocol (LACP) standard. The IEEE 802.3ad standard supports link aggregations of two or more ports; all ports in the aggregation must run at the same speed in full-duplex mode. In this solution, LACP is configured on the VNX to combine multiple Ethernet ports into a single virtual device. If one port loses its link, traffic fails over to the remaining ports, and all network traffic is distributed across the active links.
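LACP-style aggregation typically pins each flow to one member port by hashing packet header fields, so a single flow never exceeds one link's bandwidth while many flows spread across the aggregate. A minimal sketch of that idea follows; the hash inputs, function name, and two-port default are illustrative assumptions, not the VNX's actual hardware algorithm:

```python
# Illustrative flow-to-port pinning as used by LACP-style link aggregation.
# Real devices hash MAC/IP/port tuples in hardware; this sketch only shows
# that a given flow always maps to the same member port of the aggregate.
import hashlib

def member_port(src_ip: str, dst_ip: str, num_ports: int = 2) -> int:
    """Deterministically map a flow to one port of the aggregate."""
    digest = hashlib.sha256(f"{src_ip}->{dst_ip}".encode()).digest()
    return digest[0] % num_ports

flows = [("10.0.0.1", "10.0.1.1"), ("10.0.0.2", "10.0.1.1"),
         ("10.0.0.3", "10.0.1.1")]
for src, dst in flows:
    print(src, "->", dst, "uses port", member_port(src, dst))
```

Because the mapping is deterministic, packets of one flow stay in order on one link; throughput scaling comes from having many concurrent flows.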

Storage configuration guidelines

Hyper-V allows more than one method of using storage when hosting virtual machines. We tested the solutions described in this section and in Table 9 using SMB, and the storage layout described adheres to current best practices. Customers and architects can make modifications based on their understanding of the systems' usage and load, if required.

This solution used Login VSI to simulate a user load against the desktops. Login VSI provides guidance for gauging the maximum number of users a desktop environment can support. The Login VSI medium workload was selected for this testing. The storage layouts for 500, 1,000, and 2,000 desktops are defined such that the Login VSI average response time stays below the dynamically calculated maximum threshold, known as VSImax Dynamic. Login VSI has two ways of defining the maximum threshold: classic and dynamic VSImax. The classic VSImax threshold is fixed at 4,000 milliseconds, whereas the dynamic VSImax threshold is calculated from the initial response time of the user activities.
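The relationship between the two thresholds can be sketched as follows. The dynamic formula used here (baseline average × 1.25 + 3,000 ms) is an assumption based on Login VSI 3.x documentation of VSImax Dynamic, not a value stated in this guide:

```python
# Sketch of the classic vs. dynamic VSImax thresholds described above.
# The dynamic formula (baseline * 1.25 + 3000 ms) is an assumption from
# Login VSI 3.x documentation, not a figure taken from this guide.
CLASSIC_THRESHOLD_MS = 4000  # fixed classic VSImax threshold

def dynamic_threshold_ms(baseline_avg_ms: float) -> float:
    """Dynamic VSImax threshold derived from the unloaded baseline."""
    return baseline_avg_ms * 1.25 + 3000

# Example: a desktop whose initial (unloaded) response time averages 800 ms
# gets a dynamic threshold equal to the classic 4,000 ms value.
print("classic:", CLASSIC_THRESHOLD_MS, "ms")
print("dynamic:", dynamic_threshold_ms(800), "ms")
```

A slower baseline raises the dynamic threshold proportionally, which is why the dynamic method adapts to the environment where the classic 4,000 ms cutoff does not.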


Table 9. Storage hardware

Hardware: Storage — VNX shared storage for virtual desktops

Configuration, common to all scale points:

 2 x 10 GbE interfaces per Data Mover

 2 x 8 Gb FC ports per storage processor (FC variant only)

For 500 virtual desktops:

 2 Data Movers (active/standby, SMB variant only)

 600 GB 15k rpm 3.5-inch SAS disks:

   Drive count   PvD   Non-PvD   HSD
   PVS           16    8         8
   MCS           13    10        10

 3 x 100 GB 3.5-inch flash drives

For 1,000 virtual desktops:

 2 Data Movers (active/standby, SMB variant only)

 600 GB 15k rpm 3.5-inch SAS disks:

   Drive count   PvD   Non-PvD   HSD
   PVS           32    16        16
   MCS           26    20        20

 3 x 100 GB 3.5-inch flash drives

For 2,000 virtual desktops:

 2 Data Movers (active/standby, SMB variant only)

 600 GB 15k rpm 3.5-inch SAS disks:

   Drive count   PvD   Non-PvD   HSD
   PVS           64    32        32
   MCS           52    40        40

 5 x 100 GB 3.5-inch flash drives

Optional for user data:

 For 500 virtual desktops: 16 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks

 For 1,000 virtual desktops: 24 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks

 For 2,000 virtual desktops: 48 x 2 TB 7,200 rpm 3.5-inch NL-SAS disks

Optional for infrastructure storage:

 For 500 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks

 For 1,000 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks

 For 2,000 virtual desktops: 5 x 600 GB 15k rpm 3.5-inch SAS disks

Hyper-V storage virtualization for VSPEX

This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance. Windows Server 2012 Hyper-V and Failover Clustering use the Cluster Shared Volumes (CSV) v2 and new virtual hard disk format (VHDX) features to virtualize storage presented from an external shared storage system to host virtual machines. In Figure 19, the storage array presents either block-based LUNs (as CSVs) or file-based CIFS shares (as SMB shares) to the Windows hosts that run the virtual machines.

Figure 19. Hyper-V virtual disk types

CIFS

Windows Server 2012 supports using CIFS (SMB 3.0) file shares as shared storage for Hyper-V virtual machines.

CSV

A Cluster Shared Volume (CSV) is a shared disk containing an NTFS volume that is made accessible to all nodes of a Windows failover cluster. It can be deployed over any SCSI-based local or network storage.

Pass-through disks

Windows Server 2012 also supports pass-through disks, which allow a virtual machine to access a physical disk that is mapped to the host but does not have a volume configured.


SMB 3.0 (file-based storage only)

The SMB protocol is the file-sharing protocol used by default in Windows environments. Windows Server 2012 introduces an updated version of the protocol (SMB 3.0) with a broad set of new features. Some of the key features available with Windows Server 2012 SMB 3.0 are:

 SMB Transparent Failover

 SMB Scale Out

 SMB Multichannel

 SMB Direct

 SMB Encryption

 VSS for SMB file shares

 SMB Directory Leasing

 SMB PowerShell

With these new features, SMB 3.0 offers richer capabilities that, when combined, give organizations a high-performance storage alternative to traditional Fibre Channel storage solutions at a lower cost.

Note: SMB is also known as Common Internet File System (CIFS). For more details about SMB 3.0, refer to EMC VNX Series: Introduction to SMB 3.0 Support.

ODX (block-based storage only)

Offloaded Data Transfer (ODX) is a feature of the storage stack in Microsoft Windows Server 2012 that lets you use your investment in external storage arrays to offload data transfers from the server to the arrays. When used with storage hardware that supports ODX, file copy operations are initiated by the host but performed by the storage device. ODX eliminates the data transfer between the storage and the Hyper-V hosts by using a token-based mechanism for reading and writing data within or between storage arrays, which reduces the load on your network and hosts.

Using ODX helps enable rapid cloning and migration of virtual machines. Because the file transfer is offloaded to the storage array, host resource usage, such as CPU and network, is significantly reduced. By maximizing the use of the storage array, ODX minimizes latencies and improves the transfer speed of large files, such as database or video files. When ODX-supported file operations are performed, data transfers are automatically offloaded to the storage array and are transparent to users. ODX is enabled by default in Windows Server 2012.


New virtual hard disk format

Hyper-V in Windows Server 2012 contains an update to the VHD format, called VHDX, which has a much larger capacity and built-in resiliency. The main new features of the VHDX format are:

 Support for virtual hard disk storage with a capacity of up to 64 TB

 Additional protection against data corruption during power failures, by logging updates to the VHDX metadata structures

 Optimal structure alignment of the virtual hard disk format to suit large-sector disks

The VHDX format also has the following features:

 Larger block sizes for dynamic and differencing disks, which enables the disks to meet the needs of the workload

 A 4 KB logical-sector virtual disk that enables increased performance when used by applications and workloads designed for 4 KB sectors

 The ability to store custom metadata about the file that the user might want to record, such as the operating system version or applied updates

 Space reclamation features that can result in smaller file sizes and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached storage or SCSI disks and TRIM-compatible hardware)

VSPEX storage building block

Sizing the storage system to meet virtual server IOPS requirements is a complicated process. When I/O reaches the storage array, several components serve that I/O: the Data Mover (for file-based storage), the storage processors, back-end dynamic random access memory (DRAM) cache, FAST Cache (if used), and the disks. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications.

VSPEX uses a building block approach to reduce complexity. A building block is a set of disk spindles that can support a certain number of virtual desktops in the VSPEX architecture. Each building block combines several disk spindles to create a storage pool that supports the needs of the end-user computing environment. Three building blocks (500, 1,000, and 2,000 desktops) are currently verified on the VNX series and provide a flexible solution for VSPEX sizing. Table 10 lists the disks required to support each scale of configuration, excluding hot spares.

Note: If a configuration is started with the 500-desktop building block for MCS, it can be expanded to the 1,000-desktop building block by adding ten matching SAS drives and allowing the pool to restripe. For details about pool expansion and restriping, refer to the EMC VNX Virtual Provisioning Applied Technology White Paper.
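The building-block sizing in Table 10 lends itself to a simple lookup. The sketch below copies its data directly from Table 10; the function name and data structure are illustrative conveniences, not part of any VSPEX tooling:

```python
# Drive counts per validated VSPEX building block, copied from Table 10.
# Keys of the inner "sas" dict are (provisioning, personal_vdisk) pairs.
BUILDING_BLOCKS = {
    500:  {"platform": "VNX5400", "flash": 2,
           "sas": {("PVS", False): 13, ("PVS", True): 21,
                   ("MCS", False): 10, ("MCS", True): 13}},
    1000: {"platform": "VNX5400", "flash": 2,
           "sas": {("PVS", False): 21, ("PVS", True): 37,
                   ("MCS", False): 20, ("MCS", True): 26}},
    2000: {"platform": "VNX5600", "flash": 4,
           "sas": {("PVS", False): 37, ("PVS", True): 69,
                   ("MCS", False): 40, ("MCS", True): 52}},
}

def drives_for(desktops: int, provisioning: str, personal_vdisk: bool):
    """Return (platform, FAST Cache flash drives, SAS drives)."""
    block = BUILDING_BLOCKS[desktops]  # KeyError for unvalidated counts
    return (block["platform"], block["flash"],
            block["sas"][(provisioning, personal_vdisk)])

print(drives_for(1000, "MCS", True))  # ('VNX5400', 2, 26)
```

Only the three validated scale points appear as keys, which mirrors the guide's position that intermediate counts are sized by choosing the next building block up.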


Table 10. Number of disks required for various numbers of virtual desktops

Virtual    VNX       Flash drives   SAS drives      SAS drives   SAS drives      SAS drives
desktops   platform  (FAST Cache)   (PVS/Non-PvD)   (PVS/PvD)    (MCS/Non-PvD)   (MCS/PvD)
500        VNX5400   2              13              21           10              13
1,000      VNX5400   2              21              37           20              26
2,000      VNX5600   4              37              69           40              52

VSPEX end-user computing validated maximums

VSPEX end-user computing configurations are validated on the VNX5400 and VNX5600 platforms. Each platform has different capabilities in terms of processors, memory, and disks. For each array, there is a recommended maximum VSPEX end-user computing configuration. As outlined in Table 10, the recommended maximum for a VNX5400 is 1,000 desktops and the recommended maximum for a VNX5600 is 2,000 desktops.

Storage layout for 500 virtual desktops

Core storage layout with PVS provisioning

Figure 20 illustrates the layout of the disks that are required to store 500 virtual desktops with PVS provisioning. This layout can be used with the random, static, personal vDisk, and hosted shared desktop provisioning options. This layout does not include space for user profile data.

Figure 20. Core storage layout with PVS provisioning for 500 virtual desktops

Core storage layout with PVS provisioning overview

The following core configuration is used in the reference architecture for 500 virtual desktops:

 Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX Operating Environment (OE).

 The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4 and 1_1_5 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

 Eight SAS disks (shown here as 1_0_7 to 1_0_14) in the RAID 10 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool.

 For NAS, ten LUNs of 200 GB each are provisioned from the pool to provide the storage required to create two CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares.

 For FC, two LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as CSVs.

 Two flash drives (shown here as 1_0_5 and 1_0_6) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

 Five SAS disks (1_1_0 to 1_1_4) in the RAID 5 storage pool 2 are used to store PVS vDisks and TFTP images. FAST Cache is enabled for the entire pool.

 Disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 1_1_6 to 1_1_14 are unused. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms might give sub-optimal results.
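The LUN sizes above imply a per-desktop allocation for the PVS write cache. The quick check below is pure arithmetic on the figures in this layout (ten 200 GB LUNs for 500 desktops) and ignores file-system and thin-provisioning overhead, so treat the result as an upper bound rather than guaranteed usable space:

```python
# Implied per-desktop write-cache capacity for the 500-desktop PVS layout.
# Ignores file-system overhead, so this is a ceiling, not usable space.
luns, lun_gb, desktops = 10, 200, 500
total_gb = luns * lun_gb
per_desktop_gb = total_gb / desktops
print(f"{total_gb} GB provisioned -> {per_desktop_gb:.0f} GB per desktop")
```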

Core storage layout with MCS provisioning

Figure 21 illustrates the layout of the disks that are required to store 500 virtual desktops with MCS provisioning. This layout can be used with the random, static, personal vDisk, and hosted shared desktop provisioning options. This layout does not include space for user profile data.

Figure 21. Core storage layout with MCS provisioning for 500 virtual desktops

Core storage layout with MCS provisioning overview

The following core configuration is used in the reference architecture for 500 virtual desktops:

 Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.

 The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4 and 1_1_2 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

 Ten SAS disks (shown here as 1_0_5 to 1_0_14) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool.

 For NAS, ten LUNs of 200 GB each are provisioned from the pool to provide the storage required to create two CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares.

 For FC, two LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as CSVs.

Note: If personal vDisk is implemented, half the drives (five SAS disks for 500 desktops) are sufficient to satisfy the performance requirement, but desktop capacity is reduced by 50 percent. If your environment's capacity requirement is still met, implement personal vDisk with MCS provisioning using five SAS drives for 500 desktops.

 Two flash drives (shown here as 1_1_0 and 1_1_1) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

 Disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 1_1_3 to 1_1_14 are unused. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms might give sub-optimal results.


Optional storage layout

In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 22. This storage is in addition to the core storage shown in Figure 21. If storage for user data exists elsewhere in the production environment, this storage is not required.

Figure 22. Optional storage layout for 500 virtual desktops

Optional storage layout overview

The optional storage layout is used to store the infrastructure servers, user profiles and home directories, and Personal vDisks. The following optional configuration is used in the reference architecture for 500 virtual desktops:

 The EMC VNX Series does not require a dedicated hot spare drive. The disk shown here as 0_2_14 is an unbound disk that can be used as a hot spare when needed. This disk is marked as a hot spare in the storage layout diagram.

 Five SAS disks (shown here as 0_2_0 to 0_2_4) in the RAID 5 storage pool 6 are used to store the infrastructure virtual machines. A 1 TB LUN is provisioned from the pool and presented to the Hyper-V servers as a CSV.

 Sixteen NL-SAS disks (shown here as 0_2_5 to 0_2_13 and 1_2_0 to 1_2_6) in the RAID 6 storage pool 4 are used to store user data and roaming profiles. Ten LUNs of 500 GB each are provisioned from the pool to provide the storage required to create two CIFS file systems.

If multiple drive types have been implemented, FAST VP can be enabled to automatically tier data to balance differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 256 MB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop storage, but it can provide performance improvements when implemented for user data and roaming profiles.
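The user-data pool sizing above implies a per-user allocation for profiles and home directories. The arithmetic below uses only the figures in this layout (ten 500 GB LUNs for 500 users) and ignores RAID 6 parity and file-system overhead, so the real usable figure is somewhat lower:

```python
# Implied per-user allocation for the 500-desktop user-data pool:
# ten 500 GB LUNs carved from the 16-disk RAID 6 NL-SAS pool.
# Parity and file-system overhead are ignored; treat this as a ceiling.
luns, lun_gb, users = 10, 500, 500
per_user_gb = luns * lun_gb / users
print(f"{per_user_gb:.0f} GB per user for profiles and home directories")
```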

 Eight SAS disks (shown here as 1_2_7 to 1_2_14) in the RAID 10 storage pool 5 are used to store the Personal vDisks. FAST Cache is enabled for the entire pool.

 For NAS, ten LUNs of 200 GB each are provisioned from the pool to provide the storage required to create two CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares.

 For FC, two LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as CSVs.

Storage layout for 1,000 virtual desktops

Core storage layout with PVS provisioning

Figure 23 illustrates the layout of the disks that are required to store 1,000 virtual desktops with PVS provisioning. This layout can be used with the random, static, personal vDisk, and hosted shared desktop provisioning options. This layout does not include space for user profile data.

Figure 23. Core storage layout with PVS provisioning for 1,000 virtual desktops

Core storage layout with PVS provisioning overview

The following core configuration is used in the reference architecture for 1,000 virtual desktops:

 Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.

 The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4 and 1_0_7 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

 Sixteen SAS disks (shown here as 1_0_8 to 1_0_14 and 1_1_0 to 1_1_8) in the RAID 10 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool.

 For NAS, ten LUNs of 400 GB each are provisioned from the pool to provide the storage required to create four CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares.

 For FC, four LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.

 Two flash drives (shown here as 1_0_5 and 1_0_6) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

 Five SAS disks (1_1_9 to 1_1_13) in the RAID 5 storage pool 2 are used to store PVS vDisks and TFTP images. FAST Cache is enabled for the entire pool.

 Disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 1_1_14 are unused. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms might give sub-optimal results.

Core storage layout with MCS provisioning

Figure 24 illustrates the layout of the disks that are required to store 1,000 virtual desktops with MCS provisioning. This layout can be used with the random, static, personal vDisk, and hosted shared desktop provisioning options. This layout does not include space for user profile data.

Figure 24. Core storage layout with MCS provisioning for 1,000 virtual desktops

Core storage layout with MCS provisioning overview

The following core configuration is used in the reference architecture for 1,000 virtual desktops:

 Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.

 The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4 and 1_1_2 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

 Twenty SAS disks (shown here as 1_0_5 to 1_0_14 and 1_1_3 to 1_1_12) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool.

 For NAS, ten LUNs of 800 GB each are provisioned from the pool to provide the storage required to create four CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares.

 For FC, four LUNs of 2 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.

Note: If personal vDisk is implemented, half the drives (ten SAS disks for 1,000 desktops) are sufficient to satisfy the performance requirement, but desktop capacity is reduced by 50 percent. If your environment's capacity requirement is still met, implement personal vDisk with MCS provisioning using ten SAS drives for 1,000 desktops.

 Two flash drives (shown here as 1_1_0 and 1_1_1) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

 Disks shown here as 0_0_4 to 0_0_24 and 1_1_13 to 1_1_14 are unused. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms might give sub-optimal results.


Optional storage layout

In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 25. This storage is in addition to the core storage shown in Figure 24. If storage for user data exists elsewhere in the production environment, this storage is not required.

Figure 25. Optional storage layout for 1,000 virtual desktops

Optional storage layout overview

The optional storage layout is used to store the infrastructure servers, user profiles and home directories, and Personal vDisks. The following optional configuration is used in the reference architecture for 1,000 virtual desktops:

 The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 0_2_14 and 0_3_14 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

 Five SAS disks (shown here as 0_2_0 to 0_2_4) in the RAID 5 storage pool 6 are used to store the infrastructure virtual machines. A 1 TB LUN is provisioned from the pool and presented to the Hyper-V servers as a CSV.

 Twenty-four NL-SAS disks (shown here as 0_2_5 to 0_2_13 and 1_2_0 to 1_2_14) in the RAID 6 storage pool 4 are used to store user data and roaming profiles. Ten LUNs of 1 TB each are provisioned from the pool to provide the storage required to create two CIFS file systems.

If you have implemented multiple drive types, you can enable FAST VP to automatically tier data to balance differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 256 MB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop storage, but it can provide performance improvements when implemented for user data and roaming profiles.

 Sixteen SAS disks (shown here as 0_3_0 to 0_3_13 and 1_3_0 to 1_3_1) in the RAID 10 storage pool 5 are used to store the Personal vDisks. FAST Cache is enabled for the entire pool.

 For NAS, ten LUNs of 400 GB each are provisioned from the pool to provide the storage required to create four CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares.

 For FC, four LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as four CSVs.

 Disks shown here as 1_3_2 to 1_3_14 are unused. They were not used for testing this solution.


Storage layout for 2,000 virtual desktops

Core storage layout with PVS provisioning

Figure 26 illustrates the layout of the disks that are required to store 2,000 virtual desktops with PVS provisioning. This layout can be used with the random, static, personal vDisk, and hosted shared desktop provisioning options. This layout does not include space for user profile data.

Figure 26. Core storage layout with PVS provisioning for 2,000 virtual desktops

Core storage layout with PVS provisioning overview

The following core configuration is used in the reference architecture for 2,000 virtual desktops:

 Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.

 The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4, 1_1_14, and 0_2_2 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

 Thirty-two SAS disks (shown here as 1_0_5 to 1_0_14, 0_1_0 to 0_1_14, and 1_1_0 to 1_1_6) in the RAID 10 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool.

 For NAS, ten LUNs of 800 GB each are provisioned from the pool to provide the storage required to create eight CIFS file systems. The file systems are presented to the Hyper-V servers as SMB shares.

 For FC, eight LUNs of 1 TB each are provisioned from the pool and presented to the Hyper-V servers as CSVs.

 Four flash drives (shown here as 1_1_12 to 1_1_13 and 0_2_0 to 0_2_1) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

 Five SAS disks (1_1_7 to 1_1_11) in the RAID 5 storage pool 2 are used to store PVS vDisks and TFTP images. FAST Cache is enabled for the entire pool.

 Disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 0_2_3 to 0_2_14 are unused. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms might give sub-optimal results.


Core storage layout with MCS provisioning

Figure 27 illustrates the layout of the disks that are required to store 2,000 virtual desktops with MCS provisioning. This layout can be used with the random, static, personal vDisk, and hosted shared desktop provisioning options. This layout does not include space for user profile data.

Figure 27. Core storage layout with MCS provisioning for 2,000 virtual desktops

Core storage layout with MCS provisioning overview

The following core configuration is used in the reference architecture for 2,000 desktop virtual machines:

• Four SAS disks (shown here as 0_0_0 to 0_0_3) are used for the VNX OE.

• The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_0_4, 0_1_2, and 0_2_5 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.

• Forty SAS disks (shown here as 1_0_5 to 1_0_14, 0_1_3 to 0_1_14, 1_1_2 to 1_1_14, and 0_2_0 to 0_2_4) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled for the entire pool.

  - For NAS, ten LUNs of 1,600 GB each are provisioned from the pool to provide the storage required to create eight CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares.


  - For FC, eight LUNs of 2 TB each are provisioned from the pool to present to the Hyper-V servers as four CSVs.

  Note: If personal vDisk is implemented, half the drives (twenty SAS disks for 2,000 desktops) are sufficient to satisfy the performance requirement. However, the desktop capacity will be reduced by 50 percent. If your environment's capacity requirement is still met, you can implement personal vDisk with MCS provisioning with 20 SAS drives per 1,000 desktops.

• Four Flash drives (shown here as 0_1_0 to 0_1_1 and 1_1_0 to 1_1_1) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives.

• Disks shown here as 0_0_4 to 0_0_24, 1_0_0 to 1_0_3, and 0_2_6 to 0_2_14 are unbound. They were not used for testing this solution.

Note: Larger drives can be substituted to provide more capacity. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If drives of different sizes are used, storage layout algorithms might produce sub-optimal results.


Optional storage layout

In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 28. This storage is in addition to the core storage shown in Figure 27. If storage for user data exists elsewhere in the production environment, this storage is not required.

Figure 28. Optional storage layout for 2,000 virtual desktops

Optional storage layout overview

The optional storage layout is used to store the infrastructure servers, user profiles and home directories, and personal vDisks. The following optional configuration is used in the reference architecture for 2,000 virtual desktops:

• The EMC VNX Series does not require a dedicated hot spare drive. The disks shown here as 1_2_14, 0_4_9, 0_5_12, and 0_5_13 are unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the storage layout diagram.


• Five SAS disks (shown here as 1_2_0 to 1_2_4) in the RAID 5 storage pool 6 are used to store the infrastructure virtual machines. A 1 TB LUN is provisioned from the pool to present to the Hyper-V servers as a CSV.

• Forty-eight NL-SAS disks (shown here as 1_2_5 to 1_2_13, 0_3_0 to 0_3_14, 1_3_0 to 1_3_14, and 0_4_0 to 0_4_8) in the RAID 6 storage pool 4 are used to store user data and roaming profiles. Ten LUNs of 2 TB each are provisioned from the pool to provide the storage required to create two CIFS file systems.

  If multiple drive types have been implemented, FAST VP can be enabled to automatically tier data to balance differences in performance and capacity. FAST VP is applied at the block storage pool level and automatically adjusts where data is stored based on how frequently it is accessed. Frequently accessed data is promoted to higher tiers of storage in 256 MB increments, while infrequently accessed data can be migrated to a lower tier for cost efficiency. This rebalancing of 256 MB data units, or slices, is part of a regularly scheduled maintenance operation. FAST VP is not recommended for virtual desktop storage, but it can provide performance improvements when implemented for user data and roaming profiles.

• Thirty-two SAS disks (shown here as 0_4_10 to 0_4_14, 1_4_0 to 1_4_14, and 0_5_0 to 0_5_11) in the RAID 10 storage pool 5 are used to store the personal vDisks. FAST Cache is enabled for the entire pool.

  - For NAS, ten LUNs of 800 GB each are provisioned from the pool to provide the storage required to create eight CIFS file systems. The file systems are presented to the Hyper-V servers as four SMB shares.

  - For FC, eight LUNs of 1 TB each are provisioned from the pool to present to the Hyper-V servers as four CSVs.


High availability and failover

This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive most single-unit failures with minimal to no impact on business operations.

Virtualization layer

As indicated earlier, configuring high availability in the virtualization layer and allowing the hypervisor to automatically restart virtual machines that fail is recommended. Figure 29 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 29. High availability at the virtualization layer

Implementing high availability at the virtualization layer ensures that, even in the event of a hardware failure, the infrastructure will attempt to keep as many services running as possible.

Compute layer

While this solution offers flexibility in the type of servers to be used in the compute layer, we recommend that you use enterprise-class servers designed for the data center. Connect these servers, with redundant power supplies, to separate power distribution units (PDUs) in accordance with your server vendor's best practices.

Figure 30. Redundant power supplies


Configuring high availability in the virtualization layer is also recommended. This means that the compute layer must be configured with enough resources so that the total number of available resources meets the needs of the environment, even with a server failure, as demonstrated in Figure 30.

Network layer

The advanced networking features of the VNX family provide protection against network connection failures at the array. Each Hyper-V host has multiple connections to user and storage Ethernet networks to guard against link failures. Spread these connections across multiple Ethernet switches to guard against component failure in the network, as shown in Figure 31.

Figure 31. Network layer high availability

By designing the network with no single points of failure, you can ensure that the compute layer is able to access storage and communicate with users even if a component fails.


Storage layer

The VNX family is designed for proven five 9s (99.999 percent) availability by using redundant components throughout the array. All of the array components are capable of continued operation in the event of hardware failure. The RAID disk configuration on the array provides protection against data loss caused by individual disk failures, and the available hot spare drives can be allocated dynamically to replace a failing disk, as shown in Figure 32.

Figure 32. VNX series high availability

EMC storage arrays are designed to be highly available by default. When they are configured according to the directions in their installation guides, no single unit failures result in data loss or unavailability.


Validation test profile

The VSPEX solution was validated with the environment profile characteristics shown in Table 11.

Table 11. Validated environment profile

• Number of virtual desktops: 500 for 500 virtual desktops; 1,000 for 1,000 virtual desktops; 2,000 for 2,000 virtual desktops
• Virtual desktop OS: Windows 7 Enterprise (32-bit) SP1 (desktop OS); Windows Server 2008 R2 SP1 (server OS)
• CPU per virtual desktop: 1 vCPU (desktop OS); 0.2 vCPU (server OS)
• Number of virtual desktops per CPU core: 1 vCPU (desktop OS); 0.2 vCPU (server OS)
• RAM per virtual desktop: 2 GB (desktop OS); 0.6 GB (server OS)
• Desktop provisioning method: Provisioning Services (PVS); Machine Creation Services (MCS)
• Average storage available for each virtual desktop: 4 GB (PVS); 8 GB (MCS)
• Average IOPS per virtual desktop at steady state: 8 IOPS
• Average peak IOPS per virtual desktop during boot storm: 60 IOPS (MCS/NFS variant); 8 IOPS (PVS/NFS variant); 116 IOPS (MCS/FC variant); 14 IOPS (PVS/FC variant)
• Number of datastores to store virtual desktops: 2 for 500 virtual desktops; 4 for 1,000 virtual desktops; 8 for 2,000 virtual desktops
• Number of virtual desktops per datastore: 250
• Disk and RAID type for datastores: RAID 5, 600 GB, 15k rpm, 3.5-inch SAS disks
• Disk and RAID type for CIFS shares to host roaming user profiles and home directories (optional for user data): RAID 6, 2 TB, 7,200 rpm, 3.5-inch NL-SAS disks


Note: The average IOPS per virtual desktop at steady state is measured when the Login VSI medium profile workload is simulated on the 500, 1,000, and 2,000 desktop configurations. On each configuration, the Login VSImax is below the dynamic VSImax threshold.

Backup environment configuration guidelines

This section provides guidelines for setting up the backup and recovery environment for this VSPEX solution.

Backup characteristics

Table 12 shows how the backup environment profile in this VSPEX solution was sized for the three configuration stacks.

Table 12. Backup profile characteristics

• User data: 5 TB for 500 virtual desktops; 10 TB for 1,000 virtual desktops; 20 TB for 2,000 virtual desktops (10.0 GB per desktop)
• Daily change rate for user data: 2%
• Retention per data type: 30 daily; 4 weekly; 1 monthly

Backup layout

Avamar provides various deployment options for specific use cases and recovery requirements. In this case, the solution is deployed with an Avamar Data Store. This enables the unstructured user data to be backed up directly to the Avamar system for simple file-level recovery. This backup solution unifies the backup process with industry-leading deduplication backup software and systems and achieves the highest levels of performance and efficiency.


Sizing guidelines

The following sections define the reference workload used to size and implement the VSPEX architectures discussed in this document. They provide guidance on how to correlate the reference workload to actual customer workloads, and how that correlation can change the final delivered configuration from the server and network perspective. You can modify the storage definition by adding drives for greater capacity and performance and by adding features such as FAST Cache for desktops and FAST VP for improved user data performance. The disk layouts were created to support the specified number of virtual desktops at the defined performance level. Decreasing the number of recommended drives or stepping down to a lower-performing array type can result in lower IOPS per desktop and a reduced user experience because of higher response times.

Reference workload

Each VSPEX Proven Infrastructure implements the storage, network, and compute resources needed for a set number of virtual machines that have been validated by EMC. In practice, each virtual machine has its own set of requirements, which rarely fit a pre-defined idea of what a virtual machine should be. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics. To simplify the discussion, we have defined a representative reference workload. By comparing actual customer needs to this reference workload, you can extrapolate which reference architecture to choose.

Defining the reference workload

For the VSPEX end-user computing solution, the reference workload is defined as a single virtual desktop that can be deployed using a desktop or server OS. With a desktop OS, each user accesses a dedicated virtual machine that is allocated 1 vCPU and 2 GB of RAM. With a server OS, each virtual machine is allocated 4 vCPUs and 12 GB of RAM and is shared among 20 virtual desktop sessions. Table 13 shows the characteristics of the virtual desktop.


Table 13. Virtual desktop characteristics

• Virtual desktop operating system: Microsoft Windows 7 Enterprise Edition (32-bit) SP1 (desktop OS); Windows Server 2008 R2 SP1 (server OS)
• Virtual processors per virtual desktop: 1 vCPU (desktop OS); 0.2 vCPU (server OS)
• RAM per virtual desktop: 2 GB (desktop OS); 0.6 GB (server OS)
• Available storage capacity per virtual desktop: 4 GB (PVS); 8 GB (MCS)
• Average IOPS per virtual desktop at steady state: 8

This desktop definition is based on user data that resides on shared storage. The I/O profile is defined using a test framework that runs all desktops concurrently with a steady load generated by the constant use of office-based applications like browsers, office productivity software, and other standard task utilities.
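The server OS figures in the reference definition follow directly from the shared-VM allocation (4 vCPUs and 12 GB of RAM shared among 20 sessions). A minimal sketch of that arithmetic, with illustrative variable names not taken from the guide:

```python
# Derive the per-session figures for the server OS reference desktop:
# each server OS VM has 4 vCPUs and 12 GB of RAM shared by 20 sessions.

SERVER_OS_VM_VCPUS = 4
SERVER_OS_VM_RAM_GB = 12
SESSIONS_PER_VM = 20

vcpu_per_session = SERVER_OS_VM_VCPUS / SESSIONS_PER_VM    # 0.2 vCPU
ram_gb_per_session = SERVER_OS_VM_RAM_GB / SESSIONS_PER_VM  # 0.6 GB

print(vcpu_per_session, ram_gb_per_session)  # -> 0.2 0.6, matching Table 13
```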

Applying the reference workload

In addition to the supported desktop numbers (500, 1,000, and 2,000), consider the following factors when deciding which end-user computing solution to deploy.

Concurrency

The workloads used to validate VSPEX solutions assume that all desktop users are active at all times. In other words, the 1,000-desktop architecture was tested with 1,000 desktops, all generating workload in parallel, all booted at the same time, and so on. If your customer expects to have 1,200 users but only 50 percent of them will be logged on at any given time because of time zone differences or alternate shifts, the 600 active users out of the total 1,200 users can be supported by the 1,000-desktop architecture.

Heavier desktop workloads

The workload defined in Table 13 and used to test these VSPEX end-user computing configurations is considered a typical office worker load. However, some customers have users with a more active profile. If a company has 800 users and, because of custom corporate applications, each user generates 12 IOPS as compared to the 8 IOPS used in the VSPEX workload, the solution needs 9,600 IOPS (800 users × 12 IOPS per desktop). The 1,000-desktop configuration would be insufficient in this case because it is rated for 8,000 IOPS (1,000 desktops × 8 IOPS per desktop). This customer should move up to the 2,000-desktop solution.
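The IOPS check above can be expressed as a small sizing helper. This is an illustrative sketch, not part of the solution; only the 8 IOPS reference rating and the 500/1,000/2,000 pool sizes come from this guide:

```python
# Pick the smallest validated pool whose rated IOPS covers the customer load.

REFERENCE_IOPS_PER_DESKTOP = 8   # reference workload rating from this guide
POOL_SIZES = (500, 1000, 2000)   # validated VSPEX pool sizes

def smallest_sufficient_pool(users, iops_per_user):
    required_iops = users * iops_per_user
    for pool in POOL_SIZES:
        if pool * REFERENCE_IOPS_PER_DESKTOP >= required_iops:
            return pool
    return None  # load exceeds the largest validated configuration

# 800 users at 12 IOPS each need 9,600 IOPS; the 1,000-desktop pool is rated
# for only 8,000 IOPS, so the 2,000-desktop configuration is required.
print(smallest_sufficient_pool(800, 12))  # -> 2000
```

Note that this checks the IOPS dimension only; concurrency and the other worksheet resources described later still apply.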

Implementing the reference architectures

The reference architectures require a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are presented as general requirements that are independent of any particular implementation. This section describes some considerations for implementing the requirements.

Resource types

The reference architectures define the hardware requirements for the solution in terms of four basic types of resources:

• CPU resources
• Memory resources
• Network resources
• Storage resources

This section describes the resource types, how they are used in the reference architectures, and key considerations for implementing them in a customer environment.

CPU resources

The architectures define the number of CPU cores that are required, but not a specific type or configuration. It is assumed that new deployments use recent revisions of common processor technologies, which will perform as well as, or better than, the systems used to validate the solution. In any running system, it is important to monitor the utilization of resources and adapt as needed. The reference virtual desktop and the required hardware resources in the reference architectures assume that there will be no more than eight virtual CPUs for each physical processor core (an 8:1 ratio) when a desktop OS is used. In most cases, this provides an appropriate level of resources for the hosted virtual desktops. In cases where this ratio might not be appropriate, monitor the CPU utilization at the hypervisor layer to determine if more resources are required.

Memory resources

Each virtual desktop in the reference architecture is defined as having 2 GB of memory dedicated to a single instance of the desktop operating system. In a virtual environment, because of budget constraints, it is not uncommon to provision virtual desktops with more memory than the hypervisor physically has. This memory overcommitment technique takes advantage of the fact that each virtual desktop does not fully utilize the amount of memory allocated to it. Oversubscribing memory usage to some degree can make business sense, but the administrator is responsible for proactively monitoring the oversubscription rate so that the bottleneck does not shift away from the server and become a burden on the storage subsystem. This solution was validated with statically assigned memory and no over-commitment of memory resources. If memory over-commitment is used in a real-world environment, regularly monitor the system memory utilization and associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results.

Network resources

The reference architectures outline the minimum needs of the system. If additional bandwidth is needed, it is important to add capability to both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports and provide the option of adding ports using EMC FLEX I/O modules. For reference purposes, the validated environment assumes that each virtual desktop generates 8 I/Os per second with an average size of 4 KB, so each virtual desktop generates at least 32 KB/s of traffic on the storage network. For an environment rated for 500 virtual desktops, this equates to a minimum of approximately 16 MB/s. This is well within the bounds of modern networks. However, this does not take into account other operations. For example, additional bandwidth is needed for:

• User network traffic
• Virtual desktop migration
• Administrative and management operations

The requirements for each of these vary depending on how the environment is being used, so it is not practical to provide concrete numbers in this context. However, the network described in the reference architecture for each solution should be sufficient to handle average workloads for the described use cases. Regardless of the network traffic requirements, always have at least two physical network connections shared by a logical network so that a single link failure does not affect the availability of the system. Design the network so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload.

Storage resources

The reference architectures contain layouts for the disks used in the validation of the system. Each layout balances the available storage capacity with the performance capability of the drives. There are a few layers to consider when examining storage sizing. Specifically, the array has a collection of disks that are assigned to a storage pool. From that storage pool, you can provision storage to the Microsoft Hyper-V cluster. Each layer has a specific configuration that is defined for the solution and documented in Chapter 5. It is generally acceptable to replace drive types with a type that has more capacity and the same performance characteristics, or with drives that have higher performance characteristics and the same capacity. Similarly, it is acceptable to change the placement of drives in the drive shelves to comply with updated or new drive shelf arrangements.
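The storage-network bandwidth estimate above reduces to a one-line calculation. The following sketch uses the guide's reference figures (8 IOPS per desktop, 4 KB average I/O size); the function name is illustrative:

```python
# Minimum steady-state storage network traffic for a desktop pool.

IOPS_PER_DESKTOP = 8  # reference workload IOPS rating
IO_SIZE_KB = 4        # average I/O size in KB

def min_storage_bandwidth_mb_s(desktops):
    """Traffic in MB/s, taking 1 MB = 1,000 KB for a rough network estimate."""
    return desktops * IOPS_PER_DESKTOP * IO_SIZE_KB / 1000

print(min_storage_bandwidth_mb_s(500))  # -> 16.0 MB/s, as stated in the text
```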


In other cases, where there is a need to deviate from the proposed number and type of drives or from the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system.

Backup resources

The solution outlines both the initial and growth backup storage and retention needs of the system. You can gather additional information to further size Avamar, including tape-out needs, RPO and RTO specifics, and multi-site environment replication needs.

Expanding existing VSPEX EUC environments

The EMC VSPEX EUC solution supports a flexible implementation model that enables you to easily expand your environment as the needs of the business change. You can combine the building block configurations presented in this solution to form larger implementations. For example, you can build the 1,000-desktop configuration all at once, or you can start with the 500-desktop configuration and expand it as needed. In the same way, you can implement the 2,000-desktop configuration all at once or gradually, by expanding the storage resources as they are needed.

Implementation summary

The requirements stated in the reference architectures are what EMC considers the minimum set of resources to handle the workloads, based on the stated definition of a reference virtual desktop. In any customer implementation, the load of a system varies over time as users interact with it. However, if the customer's virtual desktops differ significantly from the reference definition, and differ consistently in the same resource group, you might need to add more of that resource to the system.


Quick assessment

An assessment of the customer environment helps ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment.

First, summarize the user types that you plan to migrate into the VSPEX end-user computing environment. For each group, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual desktops required from the resource pool. Applying the reference workload provides examples of this process. Fill out a row in the worksheet for each application, as shown in Table 14.

Table 14. Blank worksheet row

User type         | Row                           | CPU (virtual CPUs) | Memory (GB) | IOPS | Equivalent reference virtual desktops | Number of users | Total reference desktops
Example user type | Resource requirements         |                    |             |      | ---                                   | ---             | ---
Example user type | Equivalent reference desktops |                    |             |      |                                       |                 |

Fill out the resource requirements for the user type. The row requires input on three different resources: CPU, memory, and IOPS.

CPU requirements

The reference virtual desktop assumes most desktop applications are optimized for a single CPU in a desktop OS deployment. If one type of user requires a desktop with multiple virtual CPUs, modify the proposed virtual desktop count to provide for the additional resources. For example, if you virtualize 100 desktops, but 20 users require two CPUs instead of one, then consider that your pool needs to provide 120 virtual desktops of capability.

Memory requirements

Memory plays a key role in ensuring application functionality and performance. Therefore, each group of desktops will have different targets for the acceptable amount of available memory. Like the CPU calculation, if a group of users requires additional memory resources, simply adjust the number of desktops you are planning for to accommodate the additional resource requirements. For example, if you have 200 desktops that will be virtualized using desktop OS, but each one needs 4 GB of memory instead of the 2 GB that is provided in the reference, plan for 400 virtual desktops.
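Both adjustments above are simple proportional scaling. A sketch using the guide's example numbers (the variable names are illustrative):

```python
# CPU example: 100 desktops, 20 of which need 2 vCPUs instead of the
# 1 vCPU reference, so the pool must provide 120 reference desktops.
cpu_equivalent = (100 - 20) * 1 + 20 * 2
print(cpu_equivalent)  # -> 120

# Memory example: 200 desktops needing 4 GB each against the 2 GB reference.
memory_equivalent = 200 * 4 // 2
print(memory_equivalent)  # -> 400
```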


Storage performance requirements

The storage performance requirements for desktops are usually the least understood aspect of performance. The reference virtual desktop uses a workload generated by an industry-recognized tool to execute a wide variety of office productivity applications that should be representative of the majority of virtual desktop implementations.

Storage capacity requirements

The storage capacity requirements for a desktop can vary widely depending on the types of applications in use and specific customer policies. The virtual desktops presented in this solution rely on additional shared storage for user profile data and user documents. This requirement is covered as an optional component that can be met with the addition of specific storage hardware from the reference architecture or with existing file shares in the environment.

Determining equivalent reference virtual desktops

With all of the resources defined, determine an appropriate value for the "Equivalent reference virtual desktops" rows in Table 14 by using the relationships in Table 15. Round all values up to the nearest whole number.

Table 15. Reference virtual desktop resources

Desktop type | Resource | Value for reference virtual desktop | Relationship between requirements and equivalent reference virtual desktops
Desktop OS   | CPU      | 1   | Equivalent reference virtual desktops = resource requirements
Desktop OS   | Memory   | 2   | Equivalent reference virtual desktops = (resource requirements) / 2
Desktop OS   | IOPS     | 8   | Equivalent reference virtual desktops = (resource requirements) / 8
Server OS    | CPU      | 0.2 | Equivalent reference virtual desktops = (resource requirements) / 0.2
Server OS    | Memory   | 0.6 | Equivalent reference virtual desktops = (resource requirements) / 0.6
Server OS    | IOPS     | 8   | Equivalent reference virtual desktops = (resource requirements) / 8

For example, if a group of 100 users needs two virtual CPUs and 12 IOPS per desktop in a desktop OS deployment, along with 8 GB of memory, describe them as needing two reference desktops of CPU, four reference desktops of memory, and two reference desktops of IOPS, based on the virtual desktop characteristics in Table 13. These figures go in the "Equivalent reference virtual desktops" row, as shown in Table 16. Use the maximum value in the row to complete the "Equivalent reference virtual desktops" column. Multiply the number of equivalent reference virtual desktops by the number of users to arrive at the total resource needs for that type of user.
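The Table 15 relationships for a desktop OS deployment can be sketched as follows (function and dictionary names are illustrative, not from the guide):

```python
import math

DESKTOP_OS_REFERENCE = {"cpu": 1, "memory_gb": 2, "iops": 8}

def equivalent_reference_desktops(cpu, memory_gb, iops):
    """Round each ratio up, then take the maximum across the three resources."""
    return max(
        math.ceil(cpu / DESKTOP_OS_REFERENCE["cpu"]),
        math.ceil(memory_gb / DESKTOP_OS_REFERENCE["memory_gb"]),
        math.ceil(iops / DESKTOP_OS_REFERENCE["iops"]),
    )

# Heavy users: 2 vCPUs, 8 GB, 12 IOPS -> ratios (2, 4, 2), maximum 4.
per_user = equivalent_reference_desktops(cpu=2, memory_gb=8, iops=12)
print(per_user, per_user * 100)  # -> 4 400, matching the worked example
```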


Table 16. Example worksheet row

User type   | Row                                   | CPU (virtual CPUs) | Memory (GB) | IOPS | Equivalent reference virtual desktops | Number of users | Total reference desktops
Heavy users | Resource requirements                 | 2                  | 8           | 12   | ---                                   | ---             | ---
Heavy users | Equivalent reference virtual desktops | 2                  | 4           | 2    | 4                                     | 100             | 400

After completing the worksheet for each user type to be migrated into the virtual infrastructure, compute the total number of reference virtual desktops required in the pool by computing the sum of the "Total" column on the right side of the worksheet, as shown in Table 17.

Table 17. Example applications

User type   | Row                                   | CPU (virtual CPUs) | Memory (GB) | IOPS | Equivalent reference virtual desktops | Number of users | Total reference desktops
Heavy users | Resource requirements                 | 2                  | 8           | 12   | ---                                   | ---             | ---
Heavy users | Equivalent reference virtual desktops | 2                  | 4           | 2    | 4                                     | 100             | 400
            | Resource requirements                 | 2                  | 4           | 8    | ---                                   | ---             | ---
            | Equivalent reference virtual desktops | 2                  | 2           | 1    | 2                                     | 100             | 200
            | Resource requirements                 | 1                  | 2           | 8    | ---                                   | ---             | ---
            | Equivalent reference virtual desktops | 1                  | 1           | 1    | 1                                     | 300             | 300
Total       |                                       |                    |             |      |                                       |                 | 900

The VSPEX end-user computing solutions define discrete resource pool sizes. For this solution set, the pool sizes are 500, 1,000, and 2,000 virtual desktops. In the example in Table 17, the customer requires 900 virtual desktops of capability from the pool, so the resource pool of 1,000 virtual desktops provides sufficient resources for current needs as well as room for growth.
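Selecting a pool is simply a matter of picking the smallest validated scale point that covers the worksheet total. A minimal sketch (the function name is ours; the pool sizes come from the solution definition above):

```python
# Validated VSPEX resource pool sizes for this solution set.
POOL_SIZES = (500, 1000, 2000)

def select_pool(total_reference_desktops):
    """Return the smallest validated pool that covers the requirement."""
    for size in POOL_SIZES:
        if size >= total_reference_desktops:
            return size
    raise ValueError("Requirement exceeds the largest validated pool")

print(select_pool(900))  # 1000, matching the example in Table 17
```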


Fine-tuning

In most cases, the recommended server and storage hardware can be sized appropriately using the process described in the previous section. In some cases, however, further customization of the available hardware resources might be desirable. A complete description of system architecture is beyond the scope of this document, but additional customization can be done at this point.

Storage resources

In some applications, it might be necessary to separate some storage workloads from others. The storage layouts in the VSPEX architectures put all of the virtual desktops in a single resource pool. To achieve workload separation, purchase additional disk drives for each group that needs workload isolation and add them to a dedicated pool. Do not reduce the size or capability of the main storage resource pool to support isolation without additional guidance beyond this document. The storage layouts presented in this guide are designed to balance many factors, including high availability, performance, and data protection. Changing the components of the pool can have significant and difficult-to-predict effects on other areas of the system.

Server resources

In the VSPEX end-user computing solution, it is possible to customize the server hardware resources more effectively. To do this, first total the resource requirements for the server components, as shown in Table 18. Note the addition of the "Total CPU resources" and "Total memory resources" columns on the right side of the table.

Table 18. Server resource component totals

User type | | CPU (virtual CPUs) | Memory (GB) | Number of users | Total CPU resources | Total memory resources
Heavy users | Resource requirements | 2 | 8 | 100 | 200 | 800
Moderate users | Resource requirements | 2 | 4 | 100 | 200 | 400
Typical users | Resource requirements | 1 | 2 | 300 | 300 | 600
Total | | | | | 700 | 1,800

In this example, the target architecture requires 700 virtual CPUs and 1,800 GB of memory. With the stated assumptions of eight desktops per physical processor core in a desktop OS deployment and no memory over-provisioning, this translates to 88 physical processor cores and 1,800 GB of memory. In contrast, the 1,000-virtual-desktop resource pool documented in the reference architecture calls for 2,000 GB of memory and at least 125 physical processor cores. In this environment, the solution can be implemented effectively with fewer server resources.

Note: Keep high-availability requirements in mind when customizing the resource pool hardware.
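The core-count arithmetic above can be reproduced directly; this sketch assumes only the stated 8:1 vCPU-to-core ratio, and the helper name is ours:

```python
import math

# Stated consolidation assumption for desktop OS deployments:
# eight virtual CPUs per physical processor core.
VCPUS_PER_CORE = 8

def physical_cores(total_vcpus):
    """Physical cores needed for a given total vCPU demand, rounded up."""
    return math.ceil(total_vcpus / VCPUS_PER_CORE)

print(physical_cores(700))   # 88 cores for the customized example
print(physical_cores(1000))  # 125 cores for the 1,000-desktop reference pool
```

The 700-vCPU example needs 88 cores (700/8 = 87.5, rounded up), versus 125 cores for the full 1,000-desktop reference pool, which is why the customized configuration needs fewer server resources.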


Table 19 is a blank worksheet for gathering customer information.

Table 19. Blank customer worksheet

User type | | CPU (virtual CPUs) | Memory (GB) | IOPS | Equivalent reference virtual desktops | Number of users | Total reference desktops
 | Resource requirements | | | | --- | --- | ---
 | Equivalent reference virtual desktops | | | | | |
 | Resource requirements | | | | --- | --- | ---
 | Equivalent reference virtual desktops | | | | | |
 | Resource requirements | | | | --- | --- | ---
 | Equivalent reference virtual desktops | | | | | |
 | Resource requirements | | | | --- | --- | ---
 | Equivalent reference virtual desktops | | | | | |
 | Resource requirements | | | | --- | --- | ---
 | Equivalent reference virtual desktops | | | | | |
Total | | | | | | |


Chapter 5

VSPEX Configuration Guidelines

This chapter presents the following topics:

Overview (page 98)
Pre-deployment tasks (page 99)
Customer configuration data (page 101)
Preparing switches, connecting the network, and configuring switches (page 101)
Preparing and configuring the storage array (page 104)
Installing and configuring Microsoft Hyper-V hosts (page 114)
Installing and configuring SQL Server database (page 116)
Deploying System Center Virtual Machine Manager server (page 118)
Installing and configuring XenDesktop controller (page 120)
Installing and configuring Provisioning Services (PVS only) (page 123)
Setting up EMC Avamar (page 127)
Summary (page 149)


Overview

Table 20 describes the stages of the solution deployment process. When the deployment is complete, the VSPEX infrastructure is ready for integration with the existing customer network and server infrastructure.

Table 20. Deployment process overview

Stage | Description | Reference
1 | Verify prerequisites. | Pre-deployment tasks
2 | Obtain the deployment tools. | Pre-deployment tasks
3 | Gather customer configuration data. | Pre-deployment tasks
4 | Rack and cable the components. | Vendor's documentation
5 | Configure the switches and networks; connect to the customer network. | Preparing switches, connecting the network, and configuring switches
6 | Install and configure the VNX. | Preparing and configuring the storage array
7 | Configure virtual machine storage. | Preparing and configuring the storage array
8 | Install and configure the servers. | Installing and configuring Microsoft Hyper-V hosts
9 | Set up SQL Server (used by SCVMM, PVS Server, and XenDesktop). | Installing and configuring SQL Server database
10 | Install and configure SCVMM. | Deploying System Center Virtual Machine Manager server
11 | Set up the XenDesktop controller. | Installing and configuring XenDesktop controller
12 | Test and install. | Validating the solution


Pre-deployment tasks

Pre-deployment tasks include procedures that do not directly relate to environment installation and configuration, but whose results are needed at installation time. Examples include collecting hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks, shown in Table 21, before the customer visit to decrease the time required onsite.

Table 21. Tasks for pre-deployment

Task | Description | Reference
Gather documents | Gather the related documents listed in the references. These are used throughout this document to provide detail on setup procedures and deployment best practices for the various components of the solution. | EMC documentation; other documentation
Gather tools | Gather the required and optional tools for the deployment. Use Table 22 to confirm that all equipment, software, and appropriate licenses are available before the deployment process. | Table 22
Gather data | Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information on the customer configuration data worksheet for reference during the deployment process. | Appendix B

Additionally, complete the VNX Block Configuration Worksheet for Fibre Channel, available on EMC Online Support, to provide the most comprehensive array-specific information.

Deployment prerequisites

Table 22 itemizes the hardware, software, and license requirements for the solution. For additional information, refer to the hardware and software tables in this guide.

Table 22. Deployment prerequisites checklist

Requirement | Description | Reference
Hardware | Physical servers to host the virtual desktops: sufficient physical server capacity to host desktops. Microsoft Hyper-V Server 2012 to host virtual infrastructure servers. (Note: This requirement might be covered by existing infrastructure.) Networking: switch port capacity and capabilities as required by the end-user computing solution. EMC VNX: multiprotocol storage array with the required disk layout. |
Software | Microsoft SCVMM 2012 SP1 installation media. Citrix XenDesktop 7 installation media. Citrix Provisioning Services 7 installation media. ESI for Microsoft. Microsoft Windows Server 2012 installation media (AD/DHCP/DNS/Hypervisor). Microsoft Windows 7 SP1 installation media. Microsoft SQL Server 2012 installation media. | EMC Online Support (ESI for Microsoft)
Software – FC variant only | EMC PowerPath installation media. | EMC Online Support
Licenses | Citrix XenDesktop 7 license files. Microsoft Windows Server 2012 Standard (or higher) license keys. Microsoft Windows 7 license keys. Microsoft SQL Server license key. SCVMM 2012 SP1 license keys. (Note: The Windows Server and Windows 7 keys might be covered by an existing Microsoft Key Management Server (KMS), and the SQL Server key by existing infrastructure.) |
Licenses – FC variant only | EMC PowerPath license files. | EMC Online Support


Customer configuration data To reduce the onsite time, information such as IP addresses and hostnames should be assembled as part of the planning process. Appendix B provides a table to maintain a record of relevant information. This form can be expanded or contracted as required, and information can be added, modified, and recorded as deployment progresses. Additionally, complete the VNX File and Unified Worksheet, available on EMC Online Support, to record the most comprehensive array-specific information.

Preparing switches, connecting the network, and configuring switches

This section provides the requirements for the network infrastructure to support this architecture. Table 23 summarizes the tasks to complete, with references for further information.

Table 23. Tasks for switch and network configuration

Task | Description | Reference
Configure infrastructure network | Configure the storage array and Windows host infrastructure networking as specified in Solution architecture on page 45. |
Configure storage network (FC variant) | Configure Fibre Channel switch ports, zoning for Hyper-V hosts, and the storage array. | Vendor's switch configuration guide
Configure VLANs | Configure private and public VLANs as required. | Vendor's switch configuration guide
Complete network cabling | Connect switch interconnect ports. Connect VNX ports. |

Preparing network switches

For validated levels of performance and high availability, this solution requires the switching capacity provided in the Solution hardware table on page 50. If the existing infrastructure meets the requirements, new hardware installation is not necessary.

Configuring infrastructure network

The infrastructure network requires redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This configuration is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.


Figure 33 and Figure 34 show a sample redundant network infrastructure for this solution. The diagrams illustrate the use of redundant switches and links to ensure that no single points of failure exist in network connectivity.

Figure 33. Sample network architecture—SMB variant


Figure 34. Sample network architecture—FC variant

Configuring VLANs

Ensure that you have an adequate number of switch ports for the storage array and Hyper-V hosts, configured with a minimum of three VLANs for:

- Virtual machine networking and Hyper-V management traffic (customer-facing networks, which can be separated if necessary)
- Storage networking (private network)
- Live Migration (private network)

Completing network cabling

Ensure that all solution servers, storage arrays, switch interconnects, and switch uplinks have redundant connections and are plugged into separate switching infrastructures. Ensure that there is complete connection to the existing customer network.

Note: At this point, the new equipment is being connected to the existing customer network. Take care to ensure that unforeseen interactions do not cause service issues on the customer network.


Preparing and configuring the storage array

This section provides resources and instructions for configuring and provisioning core storage and optional storage.

Configuring VNX

This section describes how to configure the VNX storage array. In this solution, the VNX series provides CIFS (SMB) file storage or FC SAN-connected block storage for the Hyper-V hosts. Table 24 shows the tasks for the storage configuration.

Table 24. Tasks for storage configuration

Task | Description
Set up initial VNX configuration | Configure the IP address information and other key parameters on the VNX.
Provision FC storage for Hyper-V (FC variant only) | Create FC LUNs to present to the Hyper-V servers as CSVs hosting the virtual desktops.
Provision optional storage for user data | Create storage to host user data, such as roaming profiles and home directories.

References for these tasks: VNX5400 Unified Installation Guide, VNX5600 Unified Installation Guide, VNX File and Unified Worksheet, Unisphere System Getting Started Guide, and your vendor's switch configuration guide.

Preparing VNX

The VNX5400 Unified Installation Guide provides instructions for assembling, racking, cabling, and powering the VNX. For 2,000 virtual desktops, refer to the VNX5600 Unified Installation Guide instead. There are no specific setup steps for this solution.

Setting up the initial VNX configuration

After completing the initial VNX setup, configure key information about the existing environment so that the storage array can communicate with the other solution components. Configure the following items in accordance with your IT datacenter policies and existing infrastructure information:

- DNS
- NTP
- Storage network interfaces
- Storage network IP address
- CIFS services and Active Directory domain membership

The reference documents listed in Table 24 provide more information about how to configure the VNX platform. Storage configuration guidelines provides more information about the disk layout.


Provisioning core data storage

Core data storage is the repository for the virtual desktops' operating system data and can be provisioned as either the FC variant or the SMB variant. Figure 20, Figure 24, Figure 26, Figure 27, Figure 29, and Figure 30 depict the target storage layouts for both Fibre Channel (FC) and SMB variants of the three solution stacks in this VSPEX solution. The following sections describe the provisioning steps for both variants.

Provisioning storage for the Hyper-V cluster (FC variant only)

Complete the following steps in the EMC Unisphere interface to configure the FC LUNs on the VNX that will store the virtual desktops:

1. Create a block-based RAID 5 storage pool that consists of 600 GB SAS drives (10 drives for 500 virtual desktops, 20 for 1,000, or 40 for 2,000) for the MCS/Non-PvD configuration; the LUNs in this pool will be presented to the Hyper-V servers as Cluster Shared Volumes (CSVs). For other MCS or PVS configurations, refer to Storage configuration guidelines to choose the appropriate LUN size. Enable FAST Cache for the storage pool.
   a. Log in to EMC Unisphere.
   b. Choose the array to be used in this solution.
   c. Go to Storage > Storage Configuration > Storage Pools.
   d. Select the Pools tab.
   e. Click Create.
2. In the block storage pool, create four LUNs (for 500 virtual desktops), eight LUNs (for 1,000), or sixteen LUNs (for 2,000), and present them to the Hyper-V servers as CSVs.
   a. Go to Storage > LUNs.
   b. Click Create.
   c. In the dialog box, choose the pool created in step 1, MAX for User Capacity, and 4, 8, or 16 for Number of LUNs to create. The LUNs are provisioned after this operation.
3. Configure a storage group to allow the Hyper-V servers access to the newly created LUNs.
   a. Go to Hosts > Storage Groups.
   b. Create a new storage group.
   c. Select the LUNs and Hyper-V hosts to add to this storage group.
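The drive and LUN counts in the steps above scale with the validated desktop count. A small lookup sketch (values taken directly from the steps above; the helper name and dictionary layout are ours, not an EMC tool):

```python
# MCS/Non-PvD FC variant: storage pool drive counts and CSV LUN counts
# per validated scale point, as listed in the provisioning steps above.
FC_LAYOUT = {
    500:  {"sas_600gb_drives": 10, "csv_luns": 4},
    1000: {"sas_600gb_drives": 20, "csv_luns": 8},
    2000: {"sas_600gb_drives": 40, "csv_luns": 16},
}

def fc_layout(desktops):
    """Return the drive and LUN counts for a validated scale point."""
    try:
        return FC_LAYOUT[desktops]
    except KeyError:
        raise ValueError(f"{desktops} is not a validated scale point") from None

print(fc_layout(1000))  # {'sas_600gb_drives': 20, 'csv_luns': 8}
```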


Provisioning storage for CIFS shares (SMB variant only)

Complete the following steps in EMC Unisphere to configure the CIFS file systems on the VNX that will store the virtual desktops:

1. Create a block-based RAID 5 storage pool that consists of 600 GB SAS drives (10 drives for 500 virtual desktops, 20 for 1,000, or 40 for 2,000) for the MCS/Non-PvD configuration. For other MCS or PVS configurations, refer to Storage configuration guidelines to choose the appropriate RAID type and disk count. Enable FAST Cache for the storage pool.
   a. Log in to EMC Unisphere.
   b. Choose the array to be used in this solution.
   c. Go to Storage > Storage Configuration > Storage Pools.
   d. Select the Pools tab.
   e. Click Create.
2. Create ten LUNs in the block storage pool and present them to the Data Mover as dvols in the system-defined NAS pool. For the MCS/Non-PvD configuration, each LUN should be 200 GB (for 500 virtual desktops), 400 GB (for 1,000), or 800 GB (for 2,000). For other MCS or PVS configurations, refer to Storage configuration guidelines to choose the appropriate LUN size.
   a. Go to Storage > LUNs.
   b. Click Create.
   c. In the dialog box, choose the pool created in step 1, MAX for User Capacity, and 10 for Number of LUNs to create.
      Note: Ten LUNs are created because EMC Performance Engineering recommends creating approximately one LUN for every four drives in the storage pool and creating LUNs in even multiples of ten. Refer to the EMC VNX Unified Best Practices for Performance Applied Best Practices Guide.
   d. Go to Hosts > Storage Groups.
   e. Choose filestorage.
   f. Click Connect LUNs.
   g. In the Available LUNs panel, choose the ten LUNs you just created. The LUNs immediately appear in the Selected LUNs panel.

   The Volume Manager automatically detects the new storage pool for file, or you can click Rescan Storage System under Storage Pool for File to scan for it immediately. Do not proceed until the new storage pool for file appears in the GUI.
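The sizing rule in the note above (roughly one LUN per four drives, created in even multiples of ten) can be sketched as follows. The helper name is ours, and this is an illustration of the stated rule of thumb, not an EMC utility; for the 10-, 20-, and 40-drive pools used here it always yields ten LUNs:

```python
import math

def nas_lun_count(drive_count):
    """Approximate the recommended NAS pool LUN count: about one LUN per
    four drives, rounded up to an even multiple of ten (minimum ten)."""
    approx = math.ceil(drive_count / 4)
    return max(10, math.ceil(approx / 10) * 10)

for drives in (10, 20, 40):
    print(drives, nas_lun_count(drives))  # each validated pool gets 10 LUNs
```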


3. For the MCS/Non-PvD configuration, create file systems of 500 GB each (four file systems for 500 virtual desktops, eight for 1,000, or sixteen for 2,000), and present them to the Hyper-V servers as SMB shares. For other MCS or PVS configurations, refer to Storage configuration guidelines to choose the appropriate file system size.
   a. Go to Storage > Storage Configuration > File Systems.
   b. Click Create.
   c. In the dialog box, choose Create from Storage Pool.
   d. Enter the Storage Capacity, for example, 500 GB.
   e. Accept the default values for all other parameters.
4. Export the file systems using CIFS.
   a. Go to Storage > Shared Folders > CIFS.
   b. Click Create.
5. Increase the number of active NFS threads on each Data Mover. The default number of threads dedicated to serving NFS requests is 384 per Data Mover on the VNX. Because this solution requires up to 2,000 desktop connections, increase the number of active NFS threads to a maximum of 1,024 (for 500 virtual desktops) or 2,048 (for 1,000 and 2,000 virtual desktops) on each Data Mover. In Unisphere:
   a. Click Settings > Data Mover Parameters to make changes to the Data Mover configuration.
   b. In the Set Parameters list, choose All Parameters.
   c. Scroll down to the nthreads parameter, as shown in Figure 35.
   d. Click Properties to update the setting.


Figure 35. Set nthread parameter

Configuring FAST Cache

To configure FAST Cache on the storage pools for this solution, complete the following steps:

1. Configure flash drives as FAST Cache:
   a. Click Properties (in the dashboard of the Unisphere window) or Manage Cache (in the left pane of the Unisphere window) to open the Storage System Properties dialog box.

   Figure 36. Storage System Properties dialog box

   b. Click the FAST Cache tab to view FAST Cache information.


   c. Click Create to open the Create FAST Cache dialog box.

   Figure 37. Create FAST Cache dialog box

   The RAID Type field displays RAID 1 when the FAST Cache has been created. You can also choose the number of flash drives. The bottom portion of the window shows the flash drives that will be used to create FAST Cache. To choose the drives manually, select the Manual option. Refer to Storage configuration guidelines to determine the number of flash drives used in this solution.

   Note: If a sufficient number of flash drives is not available, an error message is displayed and FAST Cache cannot be created.

2. Enable FAST Cache on the storage pool. FAST Cache is configured at the storage pool level, so all the LUNs created in a storage pool have FAST Cache enabled or disabled together.

3. To configure FAST Cache when creating a storage pool, use the Advanced tab in the Create Storage Pool dialog box; for an existing pool, use the Advanced tab in the Storage Pool Properties dialog box.


Figure 38. Advanced tab in the Create Storage Pool dialog box

After FAST Cache is installed in the VNX series, it is enabled by default when a storage pool is created.

Figure 39. Advanced tab in the Storage Pool Properties dialog box

Note: The FAST Cache feature on the VNX series array does not cause an instantaneous performance improvement. The system must collect data about access patterns and promote frequently used information into the cache. This process can take a few hours, during which the performance of the array steadily improves.

Provisioning optional storage for user data

If the storage required for user data (for example, roaming user profiles and home directories) does not already exist in the production environment and the optional user data disk pack has been purchased, complete the following steps in Unisphere to configure two CIFS file systems on the VNX:

1. Create a block-based RAID 6 storage pool that consists of 2 TB NL-SAS drives (sixteen drives for 500 virtual desktops, twenty-four for 1,000, or forty-eight for 2,000). Figure 22, Figure 28, and Figure 31 depict the target user data storage layout for the solution.


2. Create ten LUNs in the block storage pool, and present them to the Data Mover as dvols in the system-defined NAS pool. The capacity of each LUN should be 1 TB (for 500 virtual desktops), 2 TB (for 1,000), or 4 TB (for 2,000).

3. Create two file systems from the system-defined NAS pool containing the ten new LUNs. Export the file systems as CIFS shares.

Configuring FAST VP (optional)

Optionally, you can configure FAST VP to automate data movement between storage tiers. You can configure FAST VP at the pool level or at the LUN level.

Configuring FAST VP at the pool level

1. Select a storage pool and click Properties to open the Storage Pool Properties dialog box. Figure 40 shows the tiering information for a FAST VP-enabled pool.

   Figure 40. Storage Pool Properties window

   The Tier Status box displays FAST VP relocation information for the selected pool.

2. In the Auto-Tiering list, select Manual or Automatic for the Relocation Schedule. The Tier Details panel displays the exact distribution of the data.

3. Click Relocation Schedule to open the Manage Auto-Tiering dialog box.


Figure 41. Manage Auto-Tiering dialog box

4. Optionally, from the Manage Auto-Tiering dialog box, change the Data Relocation Rate. The default rate is Medium so as not to significantly affect host I/O.

5. Click OK to save your changes.

Note: FAST VP is a completely automated tool that schedules relocations to occur automatically. Schedule relocations during non-peak hours to minimize the potential impact on performance.

Configuring FAST VP at the LUN level

Some FAST VP properties are managed at the LUN level.

1. Click Properties for a specific LUN.
2. Select the Tiering tab to view the tiering information for the LUN.


Figure 42. LUN Properties window

   The Tier Details section displays the current distribution of slices within the LUN.

3. Use the Tiering Policy list to select the tiering policy for the LUN.
4. Click OK to save your changes.

Provisioning optional storage for infrastructure virtual machines

If the storage required for infrastructure virtual machines (that is, SQL Server, domain controller, SCVMM server, and/or XenDesktop controllers) does not already exist in the production environment and the optional disk pack has been purchased, configure a CIFS file system on the VNX to be used as an SMB share on which the infrastructure virtual machines reside. Repeat the configuration steps in Provisioning storage for CIFS shares (SMB variant only) to provision the optional storage, taking into account the smaller number of drives.


Installing and configuring Microsoft Hyper-V hosts

This section provides the requirements for the installation and configuration of the Windows hosts and infrastructure servers to support the architecture. Table 25 describes the required tasks.

Table 25. Tasks for server installation

Task | Description | Reference
Install Windows hosts | Install Windows Server 2012 on the physical servers for the solution. | http://technet.microsoft.com
Install Hyper-V and configure Failover Clustering | 1. Add the Hyper-V server role. 2. Add the Failover Clustering feature. 3. Create and configure the Hyper-V cluster. | http://technet.microsoft.com
Configure Windows hosts networking | Configure Windows hosts networking, including NIC teaming and the virtual switch network. | http://technet.microsoft.com
Install PowerPath on Windows servers | Install and configure PowerPath to manage multipathing for VNX LUNs. | PowerPath and PowerPath/VE for Windows Installation and Administration Guide

Installing Windows hosts

Follow Microsoft best practices to install Windows Server 2012 and the Hyper-V role on the physical servers for this solution.

Installing Hyper-V and configuring Failover Clustering

To install and configure Failover Clustering, complete the following steps:

1. On each Windows host, install Windows Server 2012 and patches.
2. Configure the Hyper-V role and the Failover Clustering feature.
3. Install the HBA drivers, or configure iSCSI initiators on each Windows host. For details, refer to the EMC Host Connectivity Guide for Windows.

Table 25 provides the steps and references to accomplish the configuration tasks.

Configuring Windows host networking

To ensure performance and availability, the following network interface cards (NICs) are required:

- At least one NIC for virtual machine networking and management (can be separated by network or VLAN if necessary)
- At least two 10 GbE NICs for the storage network
- At least one NIC for Live Migration

Note: Enable jumbo frames for NICs that transfer SMB data. Set the MTU to 9,000. Consult the NIC vendor's configuration guide for instructions.


Installing PowerPath on Windows servers

Install PowerPath on Windows Servers to improve and enhance the performance and capabilities of the VNX storage array. For detailed installation steps, refer to PowerPath and PowerPath/VE for Windows Installation and Administration Guide.

Enabling jumbo frames

A jumbo frame is an Ethernet frame with a payload greater than 1,500 bytes and up to 9,000 bytes; this payload limit is known as the Maximum Transmission Unit (MTU). The generally accepted maximum size for a jumbo frame is 9,000 bytes. Processing overhead is proportional to the number of frames, so enabling jumbo frames reduces processing overhead by reducing the number of frames to be sent, which increases network throughput. Jumbo frames must be enabled end-to-end, including on the network switches and VNX interfaces.

To enable jumbo frames on the VNX:

1. In Unisphere, select Settings > Network > Settings for File.

2. Select the appropriate network interface under the Interfaces tab.

3. Select Properties.

4. Set the MTU size to 9000.

5. Select OK to apply the changes.
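The frame-count reduction behind this recommendation can be estimated with a quick sketch (illustrative only; real traffic includes protocol headers and partially filled frames):

```python
import math

def frames_needed(payload_bytes: int, mtu: int) -> int:
    """Number of Ethernet frames needed to carry a payload at a given MTU."""
    return math.ceil(payload_bytes / mtu)

# Moving 1 GB of SMB data:
gb = 1_000_000_000
standard = frames_needed(gb, 1500)  # standard 1,500-byte frames
jumbo = frames_needed(gb, 9000)     # 9,000-byte jumbo frames
print(standard, jumbo)              # jumbo needs roughly one-sixth the frames
```

Because per-frame processing cost dominates, cutting the frame count by roughly six times is where the throughput gain comes from.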

You might need to enable jumbo frames on each network switch. Consult your switch configuration guide for instructions.

Planning virtual machine memory allocations

Server capacity serves two purposes in the solution:

- Supports the new virtualized desktop infrastructure.
- Supports the required infrastructure services, such as authentication/authorization, DNS, and database services.

For information on minimum infrastructure service hosting requirements, refer to Table 5. If existing infrastructure services meet the requirements, the hardware listed for infrastructure services is not required.

Configuring memory

Take care to properly size and configure the server memory for this solution. This section provides an overview of memory management in a Hyper-V environment.

Memory virtualization techniques, such as Dynamic Memory, enable the hypervisor to abstract physical host resources to provide resource isolation across multiple virtual machines and avoid resource exhaustion. With advanced processors (such as Intel processors with EPT support), this abstraction takes place within the CPU; otherwise, it occurs within the hypervisor itself.

Many techniques are available within the hypervisor to maximize the use of system resources such as memory. Do not substantially overcommit resources, because this can lead to poor system performance. The exact implications of memory overcommitment in a real-world environment are difficult to predict, but performance degradation due to resource exhaustion increases with the amount of memory overcommitted.
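As a rough aid to the sizing guidance above, the degree of overcommitment can be expressed as the ratio of assigned virtual machine memory to physical host memory. The numbers below are hypothetical examples, not sizing recommendations from this guide:

```python
def overcommit_ratio(vm_memory_gb, host_physical_gb):
    """Ratio of total assigned VM memory to physical host memory.
    Values above 1.0 indicate memory overcommitment."""
    return sum(vm_memory_gb) / host_physical_gb

# Hypothetical host with 256 GB RAM running 100 desktops at 2 GB each
# plus 4 infrastructure VMs at 16 GB each:
ratio = overcommit_ratio([2] * 100 + [16] * 4, 256)
print(f"overcommit ratio: {ratio:.2f}")  # slightly above 1.0
```

Keeping this ratio at or near 1.0 avoids the resource-exhaustion degradation described above.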

Installing and configuring SQL Server database

This section describes how to set up and configure a SQL Server database for the solution. At the end of this section, you will have Microsoft SQL Server installed on a virtual machine, with the databases required by Microsoft SCVMM, Citrix Provisioning Services, and Citrix XenDesktop configured for use. Table 26 identifies the tasks for the SQL Server database setup.

Table 26. Tasks for SQL Server database setup

- Create a virtual machine for Microsoft SQL Server: Create a virtual machine to host SQL Server. Verify that the virtual server meets the hardware and software requirements. (Reference: http://msdn.microsoft.com)
- Install Microsoft Windows on the virtual machine: Install Microsoft Windows Server 2012 Standard Edition on the virtual machine created to host SQL Server. (Reference: http://technet.microsoft.com)
- Install Microsoft SQL Server: Install Microsoft SQL Server on the virtual machine designated for that purpose. (Reference: http://technet.microsoft.com)
- Configure database for Microsoft SCVMM: Create the database required for the SCVMM server on the appropriate datastore.
- Configure XenDesktop database permissions: Configure the database server with appropriate permissions for the XenDesktop installer. (Reference: Database Access and Permissions for XenDesktop 7)

Note: The customer environment might already contain a SQL Server designated for this role. In that case, refer to Configuring database for Microsoft SCVMM.

Creating a virtual machine for Microsoft SQL Server

The requirements for processor, memory, and operating system vary for different versions of SQL Server. The virtual machine should be created on one of the Hyper-V servers designated for infrastructure virtual machines, and it should use the CSV designated for the shared infrastructure.

Installing Microsoft Windows on the virtual machine

The SQL Server service must run on Microsoft Windows. Install Windows on the virtual machine and select the appropriate network, time, and authentication settings.


Installing SQL Server

Install SQL Server on the virtual machine from the SQL Server installation media. The Microsoft TechNet website provides information on how to install SQL Server.

One of the components in the SQL Server installer is SQL Server Management Studio (SSMS). You can install this component on the SQL Server directly as well as on an administrator's console. Be sure to install SSMS on at least one system.

In many implementations, you might want to store data files in locations other than the default path. To change the default path, right-click the server object in SSMS and select Database Properties. This action opens a properties interface from which you can change the default data and log directories for new databases created on the server.

Note: For high availability, SQL Server can be installed in a Microsoft Failover Cluster.

Configuring database for Microsoft SCVMM

To use Microsoft SCVMM in this solution, you must create a database for the service to use.

Note: Do not use the Microsoft SQL Server Express-based database option for this solution.

It is a best practice to create individual login accounts for each service accessing a database on SQL Server.


Deploying System Center Virtual Machine Manager server

This section provides information on how to configure SCVMM. Complete the tasks in Table 27.

Table 27. Tasks for SCVMM configuration

- Create the SCVMM host virtual machine: Create a virtual machine for the SCVMM server.
- Install the SCVMM guest OS: Install Windows Server 2012 Datacenter Edition on the SCVMM host virtual machine.
- Install the SCVMM server: Install an SCVMM server. (Reference: http://technet.microsoft.com)
- Install the SCVMM Management Console: Install an SCVMM Management Console. (Reference: http://technet.microsoft.com)
- Install the SCVMM agent locally on the hosts: Install an SCVMM agent locally on the hosts SCVMM manages. (Reference: http://technet.microsoft.com)
- Add a Hyper-V cluster into SCVMM: Add the Hyper-V cluster into SCVMM. (Reference: http://technet.microsoft.com)
- Add file share storage in SCVMM (file variant only): Add SMB file share storage to a Hyper-V cluster in SCVMM. (Reference: http://technet.microsoft.com)
- Create a virtual machine in SCVMM: Create a virtual machine in SCVMM. (Reference: http://technet.microsoft.com)
- Create a template virtual machine: Create a template virtual machine from the existing virtual machine. Create the hardware profile and Guest Operating System profile at this time. (Reference: http://technet.microsoft.com)
- Deploy virtual machines from the template virtual machine: Deploy the virtual machines from the template virtual machine. (Reference: http://technet.microsoft.com)

Creating a SCVMM host virtual machine

To deploy the Microsoft Hyper-V server as a virtual machine on a Hyper-V server that is installed as part of this solution, connect directly to an infrastructure Hyper-V server by using Hyper-V Manager. Create a virtual machine on the Microsoft Hyper-V server with the customer guest OS configuration by using an infrastructure server datastore presented from the storage array. The memory and processor requirements for the SCVMM server depend on the number of Hyper-V hosts and virtual machines that SCVMM must manage.

Installing the SCVMM guest OS

Install the guest OS on the SCVMM host virtual machine. Install the required Windows Server version on the virtual machine and select the appropriate network, time, and authentication settings.


Installing the SCVMM server

Set up the VMM database and the default library server, and then install the SCVMM server. Refer to the article, Installing the VMM Server, to install the SCVMM server.

Installing the SCVMM Management Console

SCVMM Management Console is a client tool used to manage the SCVMM server. Install the VMM Management Console on the same computer as the VMM server. Refer to the article, Installing the VMM Administrator Console, to install the SCVMM Management Console.

Installing the SCVMM agent locally on a host

If the hosts must be managed on a perimeter network, install an SCVMM agent locally on the host before adding it to VMM. Optionally, install a VMM agent locally on a host in a domain before adding the host to VMM. Refer to the article, Installing a VMM Agent Locally, to install a VMM agent locally on a host.

Adding a Hyper-V cluster into SCVMM

Add the deployed Microsoft Hyper-V cluster to SCVMM, which then manages the cluster. Refer to the article, How to Add a Host Cluster to VMM, to add the Hyper-V cluster.

Adding file share storage to SCVMM (file variant only)

To add file share storage to SCVMM, complete the following steps:

1. Open the VMs and Services workspace.

2. In the VMs and Services pane, right-click the Hyper-V cluster name.

3. Click Properties.

4. In the Properties window, click File Share Storage.

5. Click Add, and then add the file share storage to SCVMM.

Creating a virtual machine in SCVMM

Create a virtual machine in SCVMM to use as a virtual machine template. Install the virtual machine, then install the software, and change the Windows and application settings. Refer to How to Create a Virtual Machine with a Blank Virtual Hard Disk to create a virtual machine.

Creating a template virtual machine

Converting a virtual machine into a template removes the virtual machine itself, so back up the virtual machine first, because it could be destroyed during template creation. Create a hardware profile and a Guest Operating System profile when creating the template, and use these profiles to deploy the virtual machines. Refer to How to Create a Template from a Virtual Machine to create the template.


Deploying virtual machines from the template virtual machine

Refer to How to Deploy a Virtual Machine to deploy the virtual machines. The deployment wizard allows you to save the PowerShell scripts and reuse them to deploy other virtual machines with the same configuration.

Installing and configuring XenDesktop controller

This section provides information on how to set up and configure Citrix XenDesktop controllers for the solution. For a new installation of XenDesktop, Citrix recommends that you complete the tasks in Table 28 in the order shown.

Table 28. Tasks for XenDesktop controller setup

- Create virtual machines for XenDesktop controllers: Create four virtual machines in Hyper-V. Two of the virtual machines are used as XenDesktop delivery controllers.
- Install the guest operating system for XenDesktop controllers and PVS servers: Install the Windows Server 2008 R2 or Windows Server 2012 guest operating system.
- Install server-side components of XenDesktop: Install the XenDesktop server components on the first delivery controller.
- Install Citrix Studio: Install Citrix Studio to manage the XenDesktop deployment remotely.
- Configure a site: Configure a site in Citrix Studio.
- Add a second XenDesktop delivery controller: Install an additional delivery controller for high availability.
- Prepare a master virtual machine: Create a master virtual machine as the base image for the virtual desktops.

Reference for all tasks: www.citrix.com

Installing server-side components of XenDesktop

Install the following server-side components of XenDesktop on the first controller:

- Delivery Controller: Distributes applications and desktops, manages user access, and optimizes connections
- Citrix Studio: Allows you to create, configure, and manage infrastructure components, applications, and desktops
- Citrix Director: Enables you to monitor performance and troubleshoot problems
- License server: Manages product licenses
- Citrix StoreFront: Provides authentication and resource delivery services for Citrix Receiver

Note: Citrix supports installation of XenDesktop components only through the procedures described in the Citrix documentation.


Configuring a site

Start Citrix Studio and configure a site as follows:

1. License the site and specify which edition of XenDesktop to use.

2. Set up the site database using a designated login credential for SQL Server.

3. Provide information about your virtual infrastructure, including the Microsoft SCVMM path that the controller will use to establish a connection to the Hyper-V infrastructure.

Adding a second controller

After you have configured a site, you can add a second controller to provide high availability. The server-side components of XenDesktop required for the second controller are:

- Delivery Controller
- Citrix Studio
- Citrix Director
- Citrix StoreFront

Do not install the license server component on the second controller, because licensing is centrally managed on the first controller.

Installing Citrix Studio

Install Citrix Studio on the appropriate administrator consoles to manage your XenDesktop deployment remotely.

Preparing master virtual machine

Optimize the master virtual machine to avoid unnecessary background services that generate extraneous I/O operations and adversely affect the overall performance of the storage array. Complete the following steps to prepare the master virtual machine:

1. Install the Windows 7 guest operating system.

2. Install the appropriate integration tools, such as Hyper-V Integration Services.

3. Optimize the operating system settings by referring to the Citrix Windows 7 Optimization Guide for Desktop Virtualization.

4. Install the Virtual Delivery Agent.

5. Install third-party tools or applications relevant to your environment, such as Microsoft Office.


Provisioning virtual desktops

Complete the following steps to deploy virtual desktops using Machine Creation Services (MCS) in Citrix Studio:

1. Create a machine catalog using the master virtual machine as the base image. MCS allows the creation of a machine catalog that contains various types of desktops. The following desktop types were tested in this solution:

   - Windows Desktop OS, Random: users connect to a new (random) desktop each time they log on.
   - Windows Desktop OS, Personal vDisk: users connect to the same (static) desktop each time they log on. Changes are saved on a separate Personal vDisk.
   - Windows Server OS: provides hosted shared desktops for a large-scale deployment of standardized machines.

2. Add the machines created in the catalog to a delivery group so that the virtual desktops are available to the end users.


Installing and configuring Provisioning Services (PVS only)

This section provides information on how to set up and configure Citrix Provisioning Services for the solution. For a new installation of Provisioning Services, Citrix recommends that you complete the tasks in Table 29 in the order shown.

Table 29. Tasks for Provisioning Services setup

- Create virtual machines for PVS servers: Create two virtual machines in Hyper-V Server. These virtual machines are used as PVS servers.
- Install the guest operating system for PVS servers: Install the Windows Server 2008 R2 or Windows Server 2012 guest operating system.
- Install server-side components of Provisioning Services: Install the PVS server components and console on the PVS server.
- Configure a PVS server farm: Run the Provisioning Services Configuration Wizard to create a PVS server farm.
- Add a second PVS server: Install the PVS server components and console on the second server and join it to the existing server farm.
- Create a PVS store: Specify the store path where the vDisks will reside.
- Configure inbound communication: Adjust the total number of threads used to communicate with each virtual desktop.
- Configure a bootstrap file: Update the bootstrap image to use both PVS servers to provide streaming services.
- Set up a TFTP server on VNX: Copy the bootstrap image to the TFTP server hosted on VNX.
- Configure boot options 66 and 67 on the DHCP server: Specify the TFTP server IP address and the name of the bootstrap image used for the Preboot eXecution Environment (PXE) boot.
- Prepare a master virtual machine: Create a master virtual machine as the base image for the virtual desktops.
- Provision virtual desktops: Provision desktops using PVS.

Reference for these tasks: Citrix website


Configuring a PVS server farm

After the PVS server components are installed on the PVS server, start the Provisioning Services Configuration Wizard and configure a new server farm using the following options:

1. Specify the DHCP service to be run on another computer.

2. Specify the PXE service to be run on another computer.

3. Select Create Farm to create a new PVS server farm using a designated SQL database instance.

4. When creating a new server farm, you must create a site. Provide an appropriate name for the new site and target device collection.

5. Select the license server that is running on the XenDesktop controller.

6. If you choose to run the TFTP service on VNX, do not use the TFTP service hosted on the PVS server: clear the Use the Provisioning Services TFTP service option.

Adding a second PVS server

After you have configured a PVS server farm, you can add a second PVS server to provide high availability. Install the PVS server components and console on the second PVS server and run the Provisioning Services Configuration Wizard to join the second server to the existing server farm.

Creating a PVS store

A PVS store is a logical container for vDisks. PVS supports the use of a CIFS share as the storage target of a store. When creating a PVS store, set the default store path to the universal naming convention (UNC) path of a CIFS share that is hosted on the VNX storage. In the Provisioning Services console, right-click the store, select Properties, and then select Validate to confirm that all PVS servers in the server farm can access the CIFS share.

Configuring inbound communication

Each PVS server maintains a range of User Datagram Protocol (UDP) ports to manage all inbound communications from virtual desktops. Ideally, you should dedicate one thread for each desktop session. The total number of threads supported by a PVS server is calculated as:

Total threads = Number of UDP ports x Threads per port x Number of network adapters

Adjust the thread count accordingly to match the number of deployed virtual desktops.
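The formula above can be checked with a short sketch. The port and thread counts here are hypothetical examples; use the values configured on your own PVS servers:

```python
def pvs_total_threads(udp_ports: int, threads_per_port: int, nics: int) -> int:
    """Total threads = UDP ports x threads per port x network adapters."""
    return udp_ports * threads_per_port * nics

# Hypothetical example: 20 UDP ports, 8 threads per port, 1 NIC
threads = pvs_total_threads(20, 8, 1)
desktops = 500
if threads < desktops:
    # Fewer threads than desktop sessions: widen the port range or
    # raise threads per port to reach one thread per desktop.
    print(f"{threads} threads < {desktops} desktops: increase ports or threads")
```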


Configuring a bootstrap file

To update the bootstrap file required for the virtual desktops to PXE boot, complete the following steps:

1. In the Provisioning Services console, navigate to Farm > Sites > Site-name > Servers.

2. Right-click a server and select Configure Bootstrap.

Figure 43. Configure Bootstrap dialog box

3. In the Configure Bootstrap dialog box, update the bootstrap image to reflect the IP addresses used for all PVS servers that provide streaming services in a round-robin fashion. Select Read Servers from Database to obtain a list of PVS servers automatically, or select Add to add the server information manually.

4. After modifying the configuration, click OK to update the ARDBP32.BIN bootstrap file, which is located at C:\ProgramData\Citrix\Provisioning Services\Tftpboot.

5. Navigate to the folder and examine the timestamp of the bootstrap file to ensure that it is updated on the intended PVS server.

Setting up a TFTP server on VNX

In addition to the NFS/CIFS server, the VNX platform is also used as a TFTP server that provides a bootstrap image when the virtual desktops PXE boot. To configure the VNX TFTP server, complete the following steps:

1. Enable the TFTP service by using the following command syntax:

   server_tftp -service -start

2. Use the following command syntax to set the TFTP working directory and enable read/write access for file transfer:

   server_tftp -set -path -readaccess all -writeaccess all


3. Use a TFTP client to upload the ARDBP32.BIN bootstrap file from C:\ProgramData\Citrix\Provisioning Services\Tftpboot on the PVS server to the VNX TFTP server.

4. Use the following syntax to set the TFTP working directory access to read-only to prevent accidental modification of the bootstrap file:

   server_tftp -set -path -writeaccess none

Configuring boot options 66 and 67 on DHCP server

To PXE boot the virtual desktops successfully from the bootstrap image supplied by the PVS servers, set boot options 066 and 067 on the DHCP server. Complete the following steps to configure the boot options on the Microsoft DHCP server:

1. From the DHCP management interface of the Microsoft DHCP server, right-click Scope Options, and then select Configure Options.

2. Select 066 Boot Server Host Name.

3. In String Value, type the IP address of the Data Mover configured as the TFTP server.

4. Similarly, select 067 Bootfile Name, and then type ARDBP32.BIN in the String value box. The ARDBP32.BIN bootstrap image is loaded on a virtual desktop before the vDisk image is streamed from the PVS servers.
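On the wire, options 66 and 67 are carried as DHCP option fields encoded as a code byte, a length byte, and an ASCII value (per RFC 2132); the DHCP console fills these in for you. A minimal sketch of that encoding, with a placeholder TFTP server address:

```python
def encode_dhcp_option(code: int, value: str) -> bytes:
    """Encode a DHCP string option as: code byte + length byte + ASCII value."""
    data = value.encode("ascii")
    if len(data) > 255:
        raise ValueError("DHCP option value too long")
    return bytes([code, len(data)]) + data

# Option 66: Boot Server Host Name (placeholder Data Mover IP address)
opt66 = encode_dhcp_option(66, "192.168.1.10")
# Option 67: Bootfile Name
opt67 = encode_dhcp_option(67, "ARDBP32.BIN")
```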

Preparing the master virtual machine

Optimize the master virtual machine to avoid unnecessary background services generating inessential I/O operations that adversely affect the overall performance of the storage array. Complete the following steps to prepare the master virtual machine:

1. Install the appropriate integration tools.

2. Optimize the operating system settings by referring to the Citrix Windows 7 Optimization Guide for Desktop Virtualization.

3. Install the Virtual Delivery Agent.

4. Install third-party tools or applications, such as Microsoft Office, relevant to your environment.

5. Install the PVS target device software on the master virtual machine.

6. Modify the BIOS of the master virtual machine so that the network adapter is at the top of the boot order to ensure PXE boot of the PVS bootstrap image.


Provisioning the virtual desktops

Complete the following steps to deploy PVS-based virtual desktops:

1. Run the PVS Imaging Wizard to clone the master image onto a vDisk.

2. When the cloning is complete, shut down the master virtual machine and modify the following vDisk properties:

   - Access mode: Standard Image
   - Cache type: Cache on device hard drive

3. Prepare a virtual machine template to be used by the XenDesktop Setup Wizard in the next step.

4. Run the XenDesktop Setup Wizard in the PVS console to create a machine catalog that contains the specified number of virtual desktops.

5. Add the virtual desktops created in the catalog to a delivery group so that the virtual desktops are available to the end users.

Setting up EMC Avamar

This section provides information about the installation and configuration of Avamar required to support "in-guest" backup of user files. There are other methods for backing up user files with Avamar; however, this method provides end-user restore capabilities using a common GUI. For this configuration, we assume that only a user's files and profile are being backed up. Table 30 describes the tasks you must complete.

Note: Regular backups of the data center infrastructure components required by Citrix XenDesktop virtual desktops should supplement the backups produced by the procedure described here. A full disaster recovery plan requires the ability to restore the Citrix XenDesktop end-user computing infrastructure as well as the Citrix XenDesktop desktop user data and files.

Table 30. Tasks for Avamar integration

Microsoft Active Directory preparation:

- GPO Additions for EMC Avamar: Create and configure a Group Policy Object (GPO) to enable EMC Avamar backups of user files and profiles.

Citrix XenDesktop (master) image preparation:

- Master Image Preparation for EMC Avamar: Install and configure the EMC Avamar client to run in user mode.

EMC Avamar preparation:

- Defining Datasets: Create and configure EMC Avamar datasets to support user files and profiles.
- Defining Schedules: Create and configure the EMC Avamar backup schedule to support virtual desktop backups.
- Adjust Maintenance Window Schedule: Modify the maintenance window schedule to support virtual desktop backups.
- Defining Retention Policies: Create and configure the EMC Avamar retention policy.
- Group and Group Policy Creation: Create and configure the EMC Avamar group and group policy.

Reference for the preceding tasks: EMC Avamar 7.0 Administrator Guide and EMC Avamar 7.0 Operational Best Practices

Post desktop deployment:

- Activate Clients (Desktops): Activate Citrix XenDesktop virtual desktops using EMC Avamar Enterprise Manager. (Reference: EMC Avamar 7.0 Administrator Guide)

GPO additions for EMC Avamar

You must use mapped drives to reduce the management burden and because of current EMC Avamar limitations (such as no support for client-side variables like %username%). Configure Windows Folder Redirection to create the UNC paths required for the mapped drives. You must create a new GPO.

Folder redirection

To configure Windows Folder Redirection:

1. Edit the GPO by navigating to User Configuration > Policies > Windows Settings > Folder Redirection.

2. Right-click Documents.

3. Select Properties.

4. In the Settings list, select Basic - Redirect everyone's folder to the same location.

5. In the Root Path box, type \\CIFS_server\folder, as shown in Figure 44, and then click OK.

Figure 44. Configuring Windows Folder Redirection


Mapped drives

Create two mapped drive configurations: one for the user's files and one for the user's profile. Repeat the following procedure twice, changing three variables each time (Location, Label as, and Drive Letter used) to create the two mapped drives.

To configure drive mappings:

1. Edit the GPO, and then navigate to User Configuration > Preferences > Windows Settings > Drive Maps.

2. Right-click the blank area in the right pane.

3. In the context menu, select New > Mapped Drive, as shown in Figure 45.

Figure 45. Create a Windows network drive mapping for user files

4. In the mapped drive properties dialog box, set the following items, as shown in Figure 46, to create the User_Files mapped drive:

   a. In the Action list, select Create.

   b. In Location, type \\cifs_server\folder\%username%.

   c. Select Reconnect.

   d. In Label as, type User_Files.

   e. In the Drive Letter box, select Use, and then select U.

   f. In Hide/Show this drive, select Hide this drive.
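Because Avamar cannot expand client-side variables such as %username%, Group Policy expands the Location value per user at logon. The effect can be sketched as follows; the server and share names are placeholders matching the example paths above:

```python
def expand_location(template: str, username: str) -> str:
    """Mimic Group Policy expansion of %username% in a drive-map Location."""
    return template.replace("%username%", username)

# Each user gets a distinct UNC path on the same CIFS share:
files_path = expand_location(r"\\cifs_server\folder\%username%", "jsmith")
# files_path is now \\cifs_server\folder\jsmith, mapped to drive U:
```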


Figure 46. Configure drive mapping settings

5. At the top of the Properties window, select the Common tab, and then select Run in logged-on user's security context (user policy option), as shown in Figure 47.

Figure 47. Configure drive mapping common settings

Repeat steps 1 through 5 to create a User_Profile mapped drive using the following variables:

a. In Location, type \\cifs_server\folder\%username%.domain.V2, where domain is the Active Directory domain name.

b. In Label as, type User_Profile.

c. In the Drive Letter box, select Use, and then select P.


Figure 48 shows a sample configuration.

Figure 48. Create a Windows network drive mapping for user profile data

6. Close the Group Policy Editor to ensure that the changes are saved.

Preparing the master image for EMC Avamar

This section provides information about using the Avamar Client for Windows to provide backup and restore support for Citrix XenDesktop virtual desktops that store user-generated files in EMC VNX home directories.

The Avamar Client for Windows installs and runs as a Windows service called Backup Agent, which provides the backup and restore capabilities. Windows security limits services logged on with the Local System account to local resources only. In its default configuration, the Backup Agent logs on using the Local System account, so it cannot access network resources, including the Citrix XenDesktop user profile and data file shares. To access these shares, the Backup Agent must run as the currently logged-on user. You can accomplish this by using a batch file that starts the Backup Agent and logs it on as the user when the user logs in.

Note: The commands in this batch file assume that the drive letter of the user data disk for the redirected Avamar Client for Windows var directory is D. When a different drive letter is assigned, replace D in all instances of D:\ with the correct letter. Redirection of the var directory is described in Re-direct the Avamar Client for Windows var directory.

Replace D with P using the configuration steps in Mapped drives. Modify the vardir path value within the avamar.cmd file located in C:\Program Files\avs\var to --vardir=P:\avs\var.
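If many master images need the same change, the vardir edit can be scripted. This is a minimal sketch, assuming the flag appears exactly as --vardir=D:\avs\var in avamar.cmd; verify the flag format against your own Avamar installation before use:

```python
from pathlib import Path

def redirect_vardir(cmd_path: str, old_drive: str = "D", new_drive: str = "P") -> str:
    """Rewrite the --vardir flag in avamar.cmd to point at the mapped drive."""
    path = Path(cmd_path)
    text = path.read_text()
    updated = text.replace(f"--vardir={old_drive}:\\avs\\var",
                           f"--vardir={new_drive}:\\avs\\var")
    path.write_text(updated)
    return updated

# Example, using the path named in the document:
# redirect_vardir(r"C:\Program Files\avs\var\avamar.cmd")
```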


Defining datasets

The following sections assume that the Avamar grid is up and functional and that you have logged in to Avamar Administrator. Refer to the EMC Avamar 7.0 Administration Guide for information about accessing Avamar Administrator.

Avamar datasets are lists of directories and files to back up from a client. Assigning a dataset to a client or group enables you to save backup selections. Refer to the EMC Avamar 7.0 Administration Guide for additional information about datasets.

This section provides procedures to configure the Citrix XenDesktop virtual desktop datasets that are required to ensure successful backups of user files and user profiles. Create two datasets, one for the user files and one for the user profile, using the following procedures.
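Conceptually, each dataset is an include list plus an exclusion list evaluated against every client path. The toy Python function below (our illustration, not Avamar code) mirrors the selection logic of the two datasets created in the following procedures:

```python
def in_dataset(path, includes, excludes=()):
    """Return True if path falls under an include root and under no exclude root."""
    norm = path.lower().rstrip("\\")

    def under(root):
        root = root.lower().rstrip("\\")
        return norm == root or norm.startswith(root + "\\")

    return any(under(r) for r in includes) and not any(under(r) for r in excludes)

# User Files dataset: everything on the mapped U: drive.
print(in_dataset("U:\\docs\\report.docx", includes=["U:\\"]))          # True
# User Profile dataset: P:\ included, P:\avs (the var directory) excluded.
print(in_dataset("P:\\avs\\var\\avagent.log", ["P:\\"], ["P:\\avs"]))  # False
```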

Creating the User Files dataset

1. In Avamar Administrator, select Tools > Manage Datasets.

Figure 49. Avamar tools menu

2. In the Manage All Datasets window, click New.

Figure 50. Avamar Manage All Datasets dialog box


The New Dataset window appears.

Figure 51. Avamar New Dataset dialog box

3. Select each plug-in and click remove (–) to remove all plug-ins from the list.

4. In the Name field, type View-User-Files.

5. Select Enter Explicitly.

6. In the Select Plug-in Type list, select Windows File System.

7. In Select Files and/or Folders, type U:\, and then click add (+).

Figure 52. Configure Avamar Dataset settings

8. Click OK to save the dataset.


Creating the User Profile dataset

To create a new dataset for user profile data, complete the following steps:

1. Complete the steps in Creating the User Files dataset using the following values:

 Name: View-User-Profile

 Select Files and/or Folders: P:\

Figure 53. User Profile data dataset

2. Select the Exclusions tab.

3. In the Select Plug-in Type list, select Windows File System.

4. In Select Files and/or Folders, type P:\avs, and then click add (+).

Figure 54. User Profile data dataset Exclusion settings

5. Click the Options tab.


6. In the Select Plug-in Type list, select Windows File System.

7. Select Show Advanced Options.

Figure 55. User Profile data dataset Options settings

8. Scroll down the list of options until you locate the Volume Freezing Options section.

9. In the Method to freeze volumes list, select None.

10. Click OK to save the dataset.

Figure 56. User Profile data dataset Advanced Options settings


Defining schedules

Avamar schedules are reusable objects that control when group backups and custom notifications occur. Define a recurring schedule that satisfies your recovery point objective (RPO). Refer to the EMC Avamar 7.0 Administration Guide for additional information about schedules.

Adjusting the maintenance window schedule

Avamar server maintenance includes three essential activities:

 Checkpoint—a snapshot of the Avamar server, taken for the express purpose of facilitating server rollbacks.

 Checkpoint validation—an internal operation that validates the integrity of a specific checkpoint. Once a checkpoint passes validation, it is considered reliable enough to be used for a server rollback.

 Garbage collection—an internal operation that recovers storage space from deleted or expired backups.

Each 24-hour day is divided into three operational windows during which various system activities are performed:

 Backup window

 Blackout window

 Maintenance window

Figure 57 illustrates the default Avamar backup, blackout, and maintenance windows.

Figure 57. Avamar default Backup/Maintenance Windows schedule

 The backup window is the portion of each day reserved for normal scheduled backups. No maintenance activities are performed during the backup window.

 The blackout window is the portion of each day reserved for server maintenance activities, primarily garbage collection, that require unrestricted access to the server. No backup or administrative activities are allowed during the blackout window; however, you can perform restores.

 The maintenance window is the portion of each day reserved for routine server maintenance activities, primarily checkpoint creation and validation.
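Because the three windows simply partition the 24-hour day, the mapping from clock hour to window can be sketched as a small function. The boundaries below are illustrative only (a 12-hour backup window followed by blackout and maintenance); the actual defaults are shown in Figure 57:

```python
def window_for(hour, backup_start=20, backup_hours=12, blackout_hours=8):
    """Classify an hour of the day (0-23) into one of the three windows.

    Illustrative boundaries: the backup window runs backup_hours from
    backup_start, blackout follows, and maintenance fills the rest.
    """
    offset = (hour - backup_start) % 24
    if offset < backup_hours:
        return "backup"
    if offset < backup_hours + blackout_hours:
        return "blackout"
    return "maintenance"

print(window_for(22))  # backup (two hours into the default 8 PM start)
print(window_for(9))   # blackout
print(window_for(17))  # maintenance
```

With the modified schedule described below, the same helper gives window_for(9, backup_start=8) == "backup".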

User files and profile data should not be backed up during the day while users are logged on to their virtual desktops. Adjust the backup window start time to prevent backups from occurring during that time. Figure 58 illustrates modified backup, blackout, and maintenance windows for backing up Citrix XenDesktop virtual desktops.

Figure 58. Avamar modified Backup/Maintenance Windows schedule

To adjust the schedule to appear as shown above, change the Backup Window Start Time from 8:00 PM to 8:00 AM, and then click OK to save the changes. Refer to the EMC Avamar 7.0 Administration Guide for additional information about Avamar server maintenance activities.

Defining retention policies

Avamar backup retention policies enable you to specify how long to keep a backup in the system. A retention policy is assigned to each backup when the backup occurs. Specify a custom retention policy to perform an on-demand backup, or create a retention policy that is assigned automatically to a group of clients during a scheduled backup. When the retention period for a backup expires, the backup is automatically marked for deletion. The deletion occurs in batches during times of low system activity.


Refer to the EMC Avamar 7.0 Administration Guide for additional information about defining retention policies.

Creating groups and group policy

Avamar uses groups to implement various policies, to automate backups, and to enforce consistent rules and system behavior across an entire segment, or group, of the user community. Group members are client machines that have been added to a particular group to perform scheduled backups. In addition to specifying which clients belong to a group, groups also specify:

 Datasets

 Schedules

 Retention policies

These three objects comprise the “group policy.” The group policy controls backup behavior for all members of the group unless you override these settings at the client level. Refer to the EMC Avamar 7.0 Administration Guide for additional information about groups and group policies.

This section provides the group configuration information that is required to ensure proper backups of user files and user profiles. Create two groups and their respective group policies, one for the user files and one for the user profile. Repeat the following procedure twice, changing two values each time: the group name and the dataset used.
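The relationship described above, in which a group binds its member clients to exactly one dataset, one schedule, and one retention policy, can be modeled in a few lines. The class and names below are our illustration, not an Avamar API:

```python
from dataclasses import dataclass, field

@dataclass
class BackupGroup:
    """A group: member clients plus the three objects forming its policy."""
    name: str
    dataset: str
    schedule: str
    retention: str
    members: list = field(default_factory=list)

    def policy(self):
        # Dataset, schedule, and retention together comprise the group policy.
        return {"dataset": self.dataset,
                "schedule": self.schedule,
                "retention": self.retention}

user_files = BackupGroup("View_User_Data", dataset="View-User-Files",
                         schedule="Default Schedule",
                         retention="Default Retention")
print(user_files.policy()["dataset"])  # View-User-Files
```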

Creating the User File Group

1. From the Actions menu, select New Group.

Figure 59. Create new Avamar backup group


The New Group dialog box appears.

Figure 60. New backup group settings

2. In Name, type View_User_Data.

3. Ensure that Disabled is cleared.

4. Click Next.

5. In the Select An Existing Dataset list, select the dataset created earlier (for this group, View-User-Files).

Figure 61. Select backup group dataset

6. Click Next.


7. In the Select An Existing Schedule list, select a schedule.

Figure 62. Select backup group schedule

8. Click Next.

9. In the Select An Existing Retention Policy list, select a retention policy.

Figure 63. Select backup group retention policy

10. Click Finish.

Note: If you click Next, you can select the clients to add to the group. This step is unnecessary because clients are added to the group during activation.


EMC Avamar Enterprise Manager: activating clients

Avamar Enterprise Manager is a web-based, multi-system management console that provides centralized Avamar system administration capabilities, including the ability to add and activate Avamar clients in bulk. This section assumes that you know how to log in to Avamar Enterprise Manager (EM) and that the Citrix XenDesktop desktops have been created. After you log in to Avamar Enterprise Manager, the dashboard appears.

Figure 64. Avamar Enterprise Manager

1. Click Client Manager.

2. In the EMC Avamar Client Manager window, click Activate.

Figure 65. Avamar Client Manager

3. Click the Client Information list arrow.


Figure 66. Avamar Activate Client dialog box

4. From the Client Information list, select Directory Service.

Figure 67. Avamar Activate Client menu

5. In the Directory Service dialog box, enter the required user credentials, as shown in Figure 68. This step assumes that an Active Directory service has been configured in Avamar; refer to the EMC Avamar 7.0 Administration Guide for additional information about enabling LDAP management.

a. In the User Domain list, select a directory service domain.

b. In User Name and Password, type the user name and password required for directory service authentication.

c. In Directory Domain, select a directory domain to query for client information, and then click OK.


Figure 68. Avamar Directory Service configuration

The Active Directory information appears in the left pane of the EMC Avamar Client Manager window.

Figure 69. Avamar Client Manager – post configuration

6. In the Client Information directory tree, locate the Citrix XenDesktop virtual desktops.


In this example, an OU named VSPEX was created.

Figure 70. Avamar Client Manager – virtual desktop clients

7. Select the virtual machine desktops that you want to add to the Avamar server. Figure 71 shows a selected list in the Client Information pane and the target domain in the Server Information pane.

Figure 71. Avamar Client Manager – select virtual desktop clients

8. Drag and drop the selected list onto an existing Avamar domain in the Server Information pane.


The Select Groups window appears.

Figure 72. Select Avamar groups

9. Under Group Name, select the groups to which you want to add these desktops, and then click Add.

The EMC Avamar Client Manager window reappears.

10. Click the Avamar domain to which you just added the XenDesktop desktops, and then click Activate.

Figure 73. Activate Avamar clients

The Show Clients for Activation window appears.


11. Click Commit.

Figure 74. Commit Avamar client activation

An alert appears, indicating that the client activation will be performed as a background process.

12. Click OK.

Figure 75. Avamar client activation informational prompt one

A second alert indicates that the activation process has been initiated and that you should check the logs for status.

13. Click OK.


Figure 76. Avamar client activation informational prompt two

The EMC Avamar Client Manager window reappears and displays the activated clients.

Figure 77. Avamar Client Manager – activated clients

14. Log out from EMC Avamar Enterprise Manager.


Summary

This chapter presented the steps required to deploy and configure the physical and logical components of the VSPEX solution. At this point, you should have a fully functional VSPEX solution. The next chapter covers post-installation and validation activities.


Chapter 6

Validating the Solution

This chapter presents the following topics: Overview ................................................................................................................151 Post-installation checklist .....................................................................................151 Deploying and testing a single virtual desktop ......................................................152 Verifying the redundancy of the solution components ...........................................152


Overview

This section provides a list of items to review after the solution has been configured. The goal of this section is to verify the configuration and functionality of specific aspects of the solution, and to ensure that the configuration supports core availability requirements. Table 31 describes the tasks that must be completed.

Table 31. Tasks for testing the installation

Post-install checklist
 Verify that sufficient virtual ports exist on each Hyper-V host virtual switch. (Reference: http://blogs.technet.com/b/gavinmcshera/archive/2011/03/27/3416313.aspx)
 Verify that each Hyper-V host has access to the required datastores and VLANs. (Reference: http://social.technet.microsoft.com/wiki/contents/articles/151.hyper-v-virtual-networking-survival-guide-en-us.aspx)
 Verify that interfaces are configured correctly on all Microsoft Windows Hyper-V hosts.

Deploy and test a single virtual server
 Deploy a single virtual machine by using the System Center Virtual Machine Manager (SCVMM) interface. (Reference: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/VIR310)

Verify redundancy of the solution components
 Perform a reboot of each storage processor in turn and ensure that LUN connectivity is maintained. (Steps are shown below.)
 Disable each of the redundant switches in turn and verify that the Hyper-V host, virtual machine, and storage array connectivity remains intact. (Refer to the vendor’s documentation.)
 On a Hyper-V host that contains at least one virtual machine, restart the host and verify that the virtual machine can successfully migrate to an alternate host. (Reference: http://technet.microsoft.com/en-us/library/gg610576.aspx)

Post-installation checklist

Before deployment into production, verify the following configuration items on each Windows Server, because they are critical to the solution functionality:

 The VLAN for virtual machine networking is configured correctly.

 The storage networking is configured correctly.

 Each server can access the required Cluster Shared Volumes/Hyper-V SMB shares.

 A network interface is configured correctly for Live Migration.


Deploying and testing a single virtual desktop

To verify the operation of the solution, deploy a virtual machine and verify that the procedure completes as expected. Then verify that the virtual machine joins the applicable domain, has access to the expected networks, and that a user can log in.

Verifying the redundancy of the solution components

To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failure.

Restart each VNX storage processor in turn and verify that connectivity to Hyper-V storage is maintained throughout each restart, as follows:

1. Log on to the Control Station with administrator rights.

2. Navigate to /nas/sbin.

3. Restart SPA with the command ./navicli spa rebootsp.

4. During the restart cycle, check for the presence of storage on the Hyper-V hosts.

5. When the cycle completes, restart SPB with the command ./navicli spb rebootsp.

Perform a failover of each VNX Data Mover in turn and verify that connectivity to Hyper-V storage is maintained and that connections to CIFS file systems are re-established. For simplicity, use the following approach for each Data Mover (a restart can also be performed through the Unisphere interface): from the Control Station $ prompt, run the command server_cpu <movername> reboot, where <movername> is the name of the Data Mover.

To verify that network redundancy features function as expected, disable each of the redundant switching infrastructures in turn. While each switching infrastructure is disabled, verify that all components of the solution maintain connectivity to each other and to any existing client infrastructure.

To verify that high-availability features function as expected for the XenDesktop delivery controllers, StoreFront servers, and Provisioning Services servers, disable each of the redundant servers in turn and verify that the virtual desktops remain accessible.

On a Hyper-V host that contains at least one virtual machine, enable maintenance mode and verify that the virtual machine can successfully migrate to an alternate host.
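Step 4 of the storage-processor test amounts to repeatedly probing desktop storage while an SP reboots. A minimal sketch of such a probe loop follows (Python; the share path in the usage comment is hypothetical, and a real check would target your cluster's CSV or SMB path):

```python
import os
import time

def poll_storage(path, interval_s=5.0, duration_s=30.0):
    """Probe a storage path at a fixed interval and record availability.

    Returns a list of (elapsed_seconds, reachable) samples; any False
    samples mark a window in which the storage was unreachable.
    """
    samples = []
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        elapsed = round(time.monotonic() - start, 1)
        samples.append((elapsed, os.path.exists(path)))
        time.sleep(interval_s)
    return samples

# Hypothetical usage while rebooting SPA (run from a Hyper-V host):
# samples = poll_storage(r"\\vnx-cifs\desktops", interval_s=5, duration_s=600)
# assert all(ok for _, ok in samples), "storage dropped during the SP reboot"
```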


Appendix A

Bills of Materials

This appendix presents the following topics: Bill of materials for 500 virtual desktops ...............................................................154 Bill of materials for 1,000 virtual desktops ............................................................156 Bill of materials for 2,000 virtual desktops ............................................................158


Bill of materials for 500 virtual desktops

Table 32. List of components used in the VSPEX solution for 500 virtual desktops

Microsoft Hyper-V servers

CPU
Desktop OS
 1 x vCPU per virtual desktop
 8 x virtual desktops per physical core
 500 x vCPUs
 Minimum of 63 physical cores
Server OS
 0.2 x vCPU per virtual desktop
 5 x virtual desktops per physical core
 100 x vCPUs
 Minimum of 100 physical cores

Memory
Desktop OS
 2 GB RAM per desktop
 Minimum of 1 TB RAM
Server OS
 0.6 GB RAM per desktop
 Minimum of 300 GB RAM

Network – FC option

2 x 4/8 Gb FC HBAs per server

Network – 1Gb option

6 x 1 GbE NICs per server

Note: To implement Microsoft Cluster Services functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.
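The core and memory minimums in this table (and in the 1,000- and 2,000-desktop tables that follow) fall straight out of the per-desktop ratios. A quick Python check of the arithmetic (the helper name is ours):

```python
import math

def vspex_sizing(desktops):
    """Compute (desktop-OS cores, server-OS cores, desktop RAM GB, server RAM GB)
    from the stated per-desktop ratios."""
    desktop_cores = math.ceil(desktops / 8)  # 1 vCPU per desktop, 8 desktops per core
    server_cores = math.ceil(desktops / 5)   # 5 desktops per physical core
    desktop_ram_gb = desktops * 2            # 2 GB RAM per desktop
    server_ram_gb = desktops * 0.6           # 0.6 GB RAM per desktop
    return desktop_cores, server_cores, desktop_ram_gb, server_ram_gb

for n in (500, 1000, 2000):
    print(n, vspex_sizing(n))
# 500  -> (63, 100, 1000, 300.0)   i.e. 1 TB / 300 GB of RAM
# 1000 -> (125, 200, 2000, 600.0)
# 2000 -> (250, 400, 4000, 1200.0)
```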

Network infrastructure

Fibre Channel

 2 x physical switches  2 x 1 GbE ports per Hyper-V server  4 x 4/8 Gb FC ports for VNX back end (Two per SP)  2 x 4/8 Gb FC ports per Hyper-V server

1 Gb network

 2 x physical switches  1 x 1 GbE port per Control Station for management  6 x 1 GbE ports per Hyper-V server

10 Gb network

 2 x physical switches  1 x 1 GbE port per Control Station for management  2 x 10 GbE ports per data mover for data


Note: When choosing the Fibre Channel option for storage, you must still choose one of the IP network options to have full connectivity.

EMC next-generation backup

Avamar
 1 x Gen4 utility node
 1 x Gen4 3.9 TB spare node
 3 x Gen4 3.9 TB storage nodes

EMC VNX series storage array

Common
 EMC VNX5400
 2 x Data Movers (active/standby)
 600 GB, 15k rpm 3.5-inch SAS drives – Core Desktops

Drive count:
 PVS: 26 (PvD), 18 (Non-PvD), 18 (HSD)
 MCS: 18 (PvD), 15 (Non-PvD), 15 (HSD)

 3 x 100 GB, 3.5-inch flash drives – FAST Cache
 17 x 2 TB, 3.5-inch NL-SAS drives (optional) – User Data

FC option

2 x 8 Gb FC ports per Storage Processor

1 Gb Network option

4 x 1 Gb I/O module for each Data Mover (each module includes four ports)

10 Gb Network option

2 x 10 Gb I/O module for each Data Mover (each module includes two ports)


Bill of materials for 1,000 virtual desktops

Table 33. List of components used in the VSPEX solution for 1,000 virtual desktops

Microsoft Hyper-V servers

CPU
Desktop OS
 1 x vCPU per virtual desktop
 8 x virtual desktops per physical core
 1,000 x vCPUs
 Minimum of 125 physical cores
Server OS
 0.2 x vCPU per virtual desktop
 5 x virtual desktops per physical core
 200 x vCPUs
 Minimum of 200 physical cores

Memory
Desktop OS
 2 GB RAM per desktop
 Minimum of 2 TB RAM
Server OS
 0.6 GB RAM per desktop
 Minimum of 600 GB RAM

Network – FC option

2 x 4/8 Gb FC HBAs per server

Network – 1Gb option

6 x 1 GbE NICs per blade chassis

Network – 10 Gb option

3 x 10 GbE NICs per blade chassis

Note: To implement Microsoft Cluster Services functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure

Fibre Channel

 2 x physical switches  2 x 1 GbE ports per Hyper-V server  4 x 4/8 Gb FC ports for VNX back end (two per SP)  2 x 4/8 Gb FC ports per Hyper-V server


1 Gb network

 2 x physical switches  1 x 1 GbE port per Control Station for management  6 x 1 GbE ports per Hyper-V server  2 x 10 GbE ports per data mover for data

10 Gb network

 2 x physical switches  1 x 1 GbE port per Control Station for management  3 x 10 GbE ports per blade chassis  2 x 10 GbE ports per data mover for data

Note: When choosing the Fibre Channel option for storage, you still must choose one of the IP network options to have full connectivity.

EMC next-generation backup

Avamar
 1 x Gen4 utility node
 1 x Gen4 3.9 TB spare node
 3 x Gen4 3.9 TB storage nodes

EMC VNX series storage array

Common
 EMC VNX5400
 2 x Data Movers (active/standby)
 600 GB, 15k rpm 3.5-inch SAS drives – Core Desktops

Drive count:
 PVS: 26 (PvD), 18 (Non-PvD), 18 (HSD)
 MCS: 18 (PvD), 15 (Non-PvD), 15 (HSD)

 3 x 100 GB, 3.5-inch flash drives – FAST Cache
 25 x 2 TB, 3.5-inch NL-SAS drives (optional) – User Data

FC option

2 x 8 Gb FC ports per Storage Processor

1 Gb Network option
4 x 1 Gb I/O module for each Data Mover (each module includes four ports)

10 Gb Network option
2 x 10 Gb I/O module for each Data Mover (each module includes two ports)


Bill of materials for 2,000 virtual desktops

Table 34. List of components used in the VSPEX solution for 2,000 virtual desktops

Microsoft Hyper-V servers

CPU
Desktop OS
 1 x vCPU per virtual desktop
 8 x virtual desktops per physical core
 2,000 x vCPUs
 Minimum of 250 physical cores
Server OS
 0.2 x vCPU per virtual desktop
 5 x virtual desktops per physical core
 400 x vCPUs
 Minimum of 400 physical cores

Memory
Desktop OS
 2 GB RAM per desktop
 Minimum of 4 TB RAM
Server OS
 0.6 GB RAM per desktop
 Minimum of 1.2 TB RAM

Network – FC option

2 x 4/8 Gb FC HBAs per server

Network – 1 Gb option

6 x 1 GbE NICs per server

Network – 10 Gb option

3 x 10 GbE NICs per blade chassis

Note: To implement Microsoft Cluster Services functionality and to meet the listed minimums, the infrastructure should have at least one additional server beyond the number needed to meet the minimum requirements.

Network infrastructure

Fibre Channel

 2 x physical switches  2 x 1 GbE ports per Hyper-V server  4 x 4/8 Gb FC ports for VNX back end (two per SP)  2 x 4/8 Gb FC ports per Hyper-V server


1 Gb network option

 2 x physical switches  1 x 1 GbE port per Control Station for management  6 x 1 GbE ports per Hyper-V server  2 x 10 GbE ports per data mover for data

10 Gb network option

 2 x physical switches  1 x 1 GbE port per Control Station for management  3 x 10 GbE ports per blade chassis  2 x 10 GbE ports per data mover for data

Note: When choosing the Fibre Channel option for storage, you must still choose one of the IP network options to have full connectivity.

EMC next-generation backup

Avamar
 1 x Gen4 utility node
 1 x Gen4 3.9 TB spare node
 3 x Gen4 3.9 TB storage nodes

EMC VNX series storage array

Common
 EMC VNX5600
 2 x Data Movers (active/standby)
 600 GB, 15k rpm 3.5-inch SAS drives – Core Desktops

Drive count:
 PVS: 76 (PvD), 43 (Non-PvD), 43 (HSD)
 MCS: 58 (PvD), 46 (Non-PvD), 46 (HSD)

 5 x 100 GB, 3.5-inch flash drives – FAST Cache
 50 x 2 TB, 3.5-inch NL-SAS drives (optional) – User Data

FC option

2 x 8 Gb FC ports per Storage Processor

1 Gb network option
4 x 1 Gb I/O module for each Data Mover (each module includes four ports)

10 Gb network option
2 x 10 Gb I/O module for each Data Mover (each module includes two ports)


Appendix B

Customer Configuration Data Sheet

This appendix presents the following topics: Customer configuration data sheets ......................................................................161


Customer configuration data sheets

Before you start the configuration, gather customer-specific network and host configuration information. The following tables help you assemble the required network and host address, numbering, and naming information. This worksheet can also be used as a “leave-behind” document for future reference. Cross-reference the VNX File and Unified Worksheet to confirm customer information.

Table 35. Common server information

Server Name

Purpose

Primary IP

Domain Controller
DNS Primary
DNS Secondary
DHCP
NTP
SMTP
SNMP
SCVMM Console
XenDesktop Console
Provisioning Services Console
SQL Server

Table 36. Hyper-V server information

Server Name

Purpose

Primary IP

Private Net (storage) Addresses

Hyper-V Host 1
Hyper-V Host 2
…


Table 37. Array information

Field

Value

Array name
Admin account
Management IP
Storage pool name
Datastore name
CIFS server IP

Table 38. Network infrastructure information

Name

Purpose

IP Address

Subnet Mask

Default Gateway

Ethernet Switch 1
Ethernet Switch 2
…

Table 39. VLAN information

Name

Network Purpose

VLAN ID

Allowed Subnets

Virtual Machine Networking
Hyper-V Management
CIFS Storage Network
Live Migration


Table 40. Service accounts

Account
Purpose
Password (optional, secure appropriately)

Windows Server administrator
Hyper-V administrator
root
Array root
Array administrator
SCVMM administrator
XenDesktop administrator
SQL Server administrator


Appendix C

References

This appendix presents the following topics: References .............................................................................................................165


References

The following references provide additional and relevant information.

EMC documentation

The following documents are located on EMC Online Support. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative:

 EMC Infrastructure for Citrix XenDesktop 7, EMC VNX Series (NFS and FC), Citrix XenDesktop 7, VMware vSphere 5.1 Reference Architecture

 EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (FC), VMware vSphere 4.1, and Citrix XenDesktop 5 Proven Solution Guide

 EMC Infrastructure for Virtual Desktops Enabled by EMC VNX Series (NFS), VMware vSphere 4.1, and Citrix XenDesktop 5 Proven Solution Guide

 EMC Performance Optimization for Microsoft Windows XP for the Virtual Desktop Infrastructure Applied Best Practices

 EMC VNX Unified Best Practices for Performance Applied Best Practices Guide

 VNX FAST Cache: A Detailed Review

 Sizing EMC VNX Series for VDI Workload

 EMC Infrastructure for Citrix XenDesktop 5.6, EMC VNX Series (NFS), VMware vSphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Reference Architecture

 EMC Infrastructure for Citrix XenDesktop 5.6: EMC VNX Series (NFS), VMware vSphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide

 EMC Infrastructure for Citrix XenDesktop 5.5 (PVS): EMC VNX Series (NFS), Citrix XenDesktop 5.5 (PVS), XenApp 6.5, and XenServer 6 Reference Architecture

 EMC Infrastructure for Citrix XenDesktop 5.5 (PVS): EMC VNX Series (NFS), Citrix XenDesktop 5.5 (PVS), XenApp 6.5, and XenServer 6 Proven Solution Guide

 EMC Infrastructure for Citrix XenDesktop 5.5: EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5, XenApp 6.5, and XenServer 6 Reference Architecture

 EMC Infrastructure for Citrix XenDesktop 5.5: EMC VNX Series (NFS), Cisco UCS, Citrix XenDesktop 5.5, XenApp 6.5, and XenServer 6 Proven Solution Guide

Other documentation

For Citrix or Microsoft documentation, refer to the Citrix and Microsoft websites at www.citrix.com and www.microsoft.com.

Appendix D

About VSPEX

This appendix presents the following topic: About VSPEX.

About VSPEX

EMC has joined forces with the industry's leading providers of IT infrastructure to create a complete virtualization solution that accelerates deployment of cloud infrastructure. Built with best-in-class technologies, VSPEX enables faster deployment, greater simplicity, more choice, higher efficiency, and lower risk. Validation by EMC ensures predictable performance and enables customers to select technology that uses their existing IT infrastructure while eliminating planning, sizing, and configuration burdens. VSPEX provides a proven infrastructure for customers who want the simplicity characteristic of truly converged infrastructures along with more choice in individual stack components.

VSPEX solutions are proven by EMC and packaged and sold exclusively by EMC channel partners. For channel partners, VSPEX provides more opportunity, a faster sales cycle, and end-to-end enablement. By working even more closely together, EMC and its channel partners can deliver infrastructures that accelerate the journey to the cloud for even more customers.
