In the previous post of our VCAP7-DTM design study guide, we discussed sections 1 and 2 of the blueprint.
These were the logical and conceptual designs of the VMware methodology. Based on these, we will now continue our design and start with the physical part of the design process.
Once more, designing a solution for a customer is an iterative process: while deciding on all our physical components, other solutions might turn out to be more interesting.
Example: we designed a Horizon Cloud Pod architecture because the customer needs to scale in the future. What about VMware Cloud on AWS? Could this not help us solve the scalability issue? Such a decision can impact some of the previously made design choices. If we didn’t foresee this, we need to start over and validate our logical design again.
If you found the first part a bit too high-level, don’t worry! From here on out, it will become more technical and in-depth.
Section 3 – Create a Physical Design for vSphere and Horizon Components
Sections 3 to 8 are seen as the “WHAT” phase of our design: what components and building blocks are we going to use to satisfy all our previously created use cases? In section 2, we already covered some high-level solutions that we could use. In the following steps, we will take these high-level ideas and turn them into a physical design.
Section 3 will only be used to create an architectural concept of the design. Example: how many sites, how many pods, or how many blocks?
Horizon Building blocks:
When designing a Horizon pod and block architecture, it is key that you understand how the individual building blocks are made up.
The Horizon Cloud Pod Architecture allows multiple pods to be linked together. These can be multiple pods in the same logical datacenter or even spread across multiple datacenters. The VIPA (View InterPod API) protocol is used to communicate between the different pods and sites.
Each pod will have its own local ADAM database instance that contains all local entitlements, as well as a global ADAM database. This global ADAM database is replicated across all members (pods) in the Cloud Pod federation, using VIPA.
The Java Message Service (JMS) is used to replicate the ADAM database (LDS instances) between Connection Servers. This has its own limitations in terms of tolerable latency and a maximum of 7 Connection Servers in the same pod. A good rule of thumb that I got from a VMware colleague was “arm’s reach”: if the Connection Server VMs are in the physical vicinity of each other, they can be seen as the same pod. Of course, this is just a rule of thumb to give you an idea. So, if you have a second data room/datacenter on the other side of the building or campus, a second pod should be used, as the VIPA protocol is built to handle more latency.
With the Cloud Pod Architecture explained, the individual pod itself is made up of two kinds of blocks: a management block and one or more resource blocks.
Each pod will have a maximum of one management block and up to five resource blocks.
The management block is a central vSphere cluster that will host all backend services for the Horizon pod: management vCenter, Connection Servers, App Volumes Managers, UAGs, database servers, file servers, DHCP, DNS, NTP, AD servers, …
The vCenters of all resource blocks (up to five) will also be deployed on top of this cluster.
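The pod structure described above can be modeled as a small data structure. This is purely an illustrative sketch (not VMware tooling); the pod name and block names are made up, and the limits come from the figures in this section:

```python
# Illustrative sketch only: a minimal model of a Horizon pod's block layout.
# Limits from the text: exactly one management block, up to five resource blocks.

MAX_RESOURCE_BLOCKS = 5

class HorizonPod:
    def __init__(self, name):
        self.name = name
        self.management_block = "management"  # exactly one per pod
        self.resource_blocks = []

    def add_resource_block(self, block_name):
        if len(self.resource_blocks) >= MAX_RESOURCE_BLOCKS:
            raise ValueError("a pod supports at most 5 resource blocks")
        self.resource_blocks.append(block_name)

pod = HorizonPod("pod-example-01")  # hypothetical pod name
for i in range(5):
    pod.add_resource_block(f"resource-block-{i + 1}")
```

Attempting to add a sixth resource block raises an error, mirroring the design constraint: a sixth block means a second pod.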
VMware vSAN is a perfect fit to ensure that the management cluster is highly available from both a compute and a storage perspective.
The Connection Servers in the pod can be scaled out to a maximum of seven. As mentioned above, this is due to possible latency issues with JMS as the number of Connection Servers grows. The maximum number of sessions a Connection Server can handle is 4,000, with a best practice of 2,000; with the N+1 ratio taken into account, a pod can theoretically handle 12,000 sessions from a Connection Server perspective.
Here, VMware best practice also dictates that the 10,000-session limit per pod should not be crossed. If more sessions are required, additional pods should be deployed.
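The arithmetic above can be sketched as a quick capacity check. The numbers are the figures from this section; the function name is my own, not an official formula:

```python
# Quick capacity check for Connection Servers in a single pod.
# Figures from the text: max 7 Connection Servers per pod, a best practice
# of 2,000 sessions per server, and a 10,000-session best-practice ceiling
# per pod.

MAX_CONNECTION_SERVERS = 7
SESSIONS_BEST_PRACTICE = 2000
POD_SESSION_CEILING = 10000

def pod_session_capacity(servers=MAX_CONNECTION_SERVERS):
    # N+1: one server is reserved as headroom for failure/maintenance.
    active = servers - 1
    theoretical = active * SESSIONS_BEST_PRACTICE
    # VMware best practice still caps the pod at 10,000 sessions.
    return min(theoretical, POD_SESSION_CEILING), theoretical

capped, theoretical = pod_session_capacity()
print(theoretical, capped)  # 12000 10000
```

So even though 6 active servers × 2,000 sessions gives 12,000 in theory, the best-practice ceiling of 10,000 is what you should design against.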
The resource block will be used to host all virtual machines (VDI and/or RDSH VMs). Different cloning technologies (instant clones, linked clones, or persistent full clones) have an impact on the maximum number of VMs a single vCenter, i.e. the resource block, can host.
With Horizon 7.2, the sizing recommendations for CPA can be found here: https://kb.vmware.com/s/article/2150348
This means that each of the vCenters in our resource blocks has the following limits:
- A maximum of 4000 Virtual Machines for full clones and linked clones.
- A maximum of 8000 Virtual Machines for instant clones.
If, for example, a deployment of 3,000 VMs is needed to provide adequate resources to the Horizon deployment, we could go with a single vCenter to host all VMs. Keep in mind that we would be creating a single point of failure by doing this.
As we would have only one vCenter managing all VMs, in the case of clones, the unavailability of that vCenter would stop all future provisioning.
So, it is better to split the 3,000 VMs over two vCenters to ensure a more redundant setup.
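A minimal sizing sketch of this reasoning, using the per-vCenter limits from the KB figures above (the function is illustrative, not an official formula):

```python
import math

# Per-vCenter VM limits from the Horizon 7.2 CPA sizing guidance quoted above.
VC_LIMITS = {"full": 4000, "linked": 4000, "instant": 8000}

def vcenters_needed(vm_count, clone_type, redundant=True):
    # Minimum number of vCenters to stay under the per-vCenter limit.
    needed = math.ceil(vm_count / VC_LIMITS[clone_type])
    # Use at least two vCenters so a single vCenter outage does not
    # halt all provisioning (the single-point-of-failure argument above).
    return max(needed, 2) if redundant else needed

print(vcenters_needed(3000, "linked"))         # 2: split for redundancy
print(vcenters_needed(3000, "linked", False))  # 1: fits, but a SPOF
```

Note that the redundant answer for 3,000 linked clones is two vCenters even though one would fit, which is exactly the trade-off described above.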
The only thing that has to be taken into account is the overhead of additional resource requirements on the management block.
Licensing-wise, when using “vSphere for Desktop” licenses, we can deploy an unlimited number of vCenter instances as long as they are used solely for the Horizon deployment.
Again, creating multiple resource blocks may introduce other design changes, such as additional networking requirements. Make sure to validate the design against these!
Additional Horizon components:
Other components can be added to our design, each with its own prerequisites. I will list a set of possible additional components and their requirements. As this is still section 3, this remains a high-level definition of all the necessary building blocks.
- App Volumes
Additional virtual machines need to be provisioned to host the App Volumes installation.
The necessary resources need to be accounted for on the management block, as well as the necessary storage for the writable volumes and AppStacks that will be provisioned by the customer. When using a multi-site deployment, an additional datastore (NFS) needs to be available to allow replication of the AppStacks between sites.
The App Volumes Manager will also need to communicate with all vCenters that are hosting resource blocks.
- User Environment Manager (UEM)
UEM has a dependency on an available file server to host all the necessary configuration and user files/folders. This file server needs to be created, configured, and sized according to the size and setup of the environment.
In a multi-site environment, an active/passive hub-spoke DFS-R setup can be used to replicate all UEM user files between the two sites. This is the only supported method to avoid profile corruption, as DFS-R has no built-in conflict resolution. The configuration files fully support a DFS-R hub-spoke setup. More information on the supported configurations can be found in the following VMware KB article.
- vRealize Operations (vROps)
The vRealize Operations appliance can be installed and used for vSphere and/or Horizon integration. The necessary sizing and number of appliances need to be calculated in reference to the design.
- Identity Manager (IDM)
When using the on-premises deployment of IDM, the necessary resources need to be provisioned to host all the additional components. Depending on the HA/DR requirements, the number of components can double and the complexity of the environment rises. An HA on-premises IDM deployment will run in an active/hot-standby setup. This applies to IDM using Postgres, Oracle, and Microsoft SQL (all flavors). The VMware best practice is to disable the passive side on the global load balancer, as well as to configure specific services in read-only mode. This will guarantee that no writes/changes will be done from the passive side. The following article describes the multi-site IDM deployment. The SaaS deployment mitigates this drawback, as the HA/DR and overall maintenance are managed by VMware itself, but certain criteria may force you to go with the on-premises solution.
Design vSphere to support Horizon
Based on the number of resources needed to host the Horizon environment (X resource blocks plus a management block), we need to design a vSphere layer that is suited to host all the VMs and deliver the needed performance (CPU, IOPS, …).
- vSphere Limitations
First of all, it is important to understand the limitations of the vSphere components.
The VCAP7-DTM design exam was based on vSphere 6.5, so be sure that you know the maximums of this version for both vSphere and vSAN. Some examples are:
– A vSphere cluster can have a maximum of 64 ESXi hosts.
– An ESXi host running vSAN can have a maximum of 5 disk groups.
– A Tiny vCenter deployment type has a recommended maximum of 10 ESXi hosts.
The following VMware website has detailed information on the maximums of all versions of VMware products: https://configmax.vmware.com
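As a sketch, a design can be sanity-checked against such maximums in a simple script. The keys and limits below are just the vSphere 6.5 examples from this section, not an exhaustive list; always verify the current figures at configmax.vmware.com:

```python
# Sanity-check a cluster design against the vSphere 6.5 example maximums
# listed above. Illustrative only; verify current figures at
# https://configmax.vmware.com.

MAXIMUMS = {
    "hosts_per_cluster": 64,
    "vsan_disk_groups_per_host": 5,
    "hosts_per_tiny_vcenter": 10,
}

def validate_design(design):
    violations = []
    for key, value in design.items():
        limit = MAXIMUMS.get(key)
        if limit is not None and value > limit:
            violations.append(f"{key}: {value} exceeds maximum of {limit}")
    return violations

# 70 hosts in one cluster violates the 64-host maximum; 5 disk groups is fine.
print(validate_design({"hosts_per_cluster": 70, "vsan_disk_groups_per_host": 5}))
```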
- HA and DRS
When designing vSphere clusters for Horizon, some HA and DRS settings need to be fine-tuned or adjusted.
For example, creating anti-affinity rules for Connection Servers, UAGs, etc., to make sure they are not running on the same hosts.
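Conceptually, an anti-affinity rule just asserts that the members of a group never share a host. A tiny illustrative check of that property (not PowerCLI or the vSphere API; the VM and host names are made up):

```python
from collections import Counter

# Illustrative check that VMs in an anti-affinity group (e.g. the
# Connection Servers) are all placed on different hosts.

def violates_anti_affinity(placement, group):
    # placement maps VM name -> host; a rule is violated when two
    # group members land on the same host.
    hosts = Counter(placement[vm] for vm in group)
    return any(count > 1 for count in hosts.values())

placement = {"cs-01": "esx-01", "cs-02": "esx-02", "cs-03": "esx-01"}
print(violates_anti_affinity(placement, ["cs-01", "cs-02"]))  # False: separate hosts
print(violates_anti_affinity(placement, ["cs-01", "cs-03"]))  # True: both on esx-01
```

In a real environment, DRS enforces this placement for you; the point is simply that losing one host must never take down two members of the same redundant group.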
Required backend services
A Workspace ONE or even a plain vSphere environment requires some services to be available in order to function correctly. Think about services like:
- Active Directory / LDAP
- PKI (Certificate Authority)
It is important that you know which individual components in a Workspace ONE environment are dependent on which services.
By mapping these dependencies, you get a better overview of all backend services and how to avoid a possible single point of failure (SPOF).
As we finish section 3 of the VMware methodology, you should have a good understanding of all the building blocks that make up a Workspace ONE/Horizon deployment, together with the dependencies and requirements they have on the underlying infrastructure (vSphere and backend services).
Continue reading section 4 here (Work-in-progress) in part 3 of the VCAP7-DTM Design study guide.