Resource Pools are often misunderstood, disliked, and untrusted by vSphere Administrators who have been burned by the unexpected results of improperly configured resource pools. Symptoms of such results include the inability to power on required virtual machines (VMs), VMs unexpectedly prevented from accessing available processor and memory resources, and high priority being granted to VMs that were intended to receive low priority. Because of such results, many administrators, perhaps most, avoid using resource pools. However, resource pools can be very useful tools for administrators who want to apply resource management controls without having to configure each VM individually, which makes it worth exploring how to use them properly.
This white paper examines several scenarios based on actual customer implementations in which resource pools were expected to be useful. Some scenarios describe poorly configured pools; these include a description of the undesired results and recommendations for correcting the situation. Other scenarios describe resource pools that were well configured to obtain a desired result. First, here is a brief explanation of how Shares, Limits, and Reservations settings on a resource pool are applied.
Resource pools are a type of container that can be used to organize VMs, much like folders. What makes resource pools unique is that they can also be used to implement resource controls, including Shares, Limits, and Reservations on CPU and RAM usage. Limits establish a hard cap on resource usage. For example, a resource pool whose CPU Limit is set to 2 GHz restricts the concurrent CPU usage of all VMs in the pool to a maximum of 2 GHz collectively, even if some physical CPU capacity remains unused. Reservations establish a guaranteed minimum of resource availability. For example, a resource pool whose RAM Reservation is set to 2 GB guarantees that the VMs in the pool can collectively consume at least 2 GB of RAM, regardless of the demand from VMs outside the resource pool. Shares establish a relative priority on resource usage that is applied only during periods of resource contention. For example, suppose one VM is configured with 500 CPU Shares and another is configured with four times as many, or 2000, CPU Shares, as shown in Figure 1. These settings are ignored unless CPU contention occurs. During contention, the VM with 2000 CPU Shares is granted four times as many of the available CPU cycles as the VM with 500 CPU Shares.
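To make the Shares arithmetic concrete, here is a minimal Python sketch of how a contended resource is split in proportion to Shares. The function and the 5000 MHz figure are illustrative assumptions, not a VMware API; only the 500 and 2000 share values come from the Figure 1 example.

```python
# Illustrative sketch: splitting a contended CPU resource in proportion to Shares.
# The helper function and the 5000 MHz capacity are assumptions for illustration.

def allocate_under_contention(available_mhz, shares_by_vm):
    """Split the available CPU (in MHz) proportionally to each VM's Shares."""
    total_shares = sum(shares_by_vm.values())
    return {vm: available_mhz * s / total_shares for vm, s in shares_by_vm.items()}

if __name__ == "__main__":
    # Two VMs contending for 5000 MHz of CPU, configured with 500 and 2000 Shares.
    split = allocate_under_contention(5000, {"vm-a": 500, "vm-b": 2000})
    print(split)  # {'vm-a': 1000.0, 'vm-b': 4000.0} -- a 1:4 ratio
```

Note that when there is no contention, this calculation never comes into play; each VM simply consumes what it demands, subject to any Limits.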
For more details on controlling resources using Shares, Limits, and Reservations, see Chapter 2, "Configuring Resource Allocations," in the vSphere Resource Management Guide, which is available for download from the vSphere 5.1 Documentation Center web portal (http://pubs.vmware.com/vsphere-51/index.jsp).
One recurring issue that has plagued many administrators involves attempting to assign higher CPU or RAM priority to a set of VMs. For example, an administrator created two resource pools, one named "High Priority" and one named "Low Priority," and configured CPU and RAM Shares on each pool to correspond to its name: the Shares on the "High Priority" pool were set to High, and the Shares on the "Low Priority" pool were set to Low. The administrator understood that Shares apply a priority only when the corresponding resources are under contention, so he expected that under normal conditions all VMs would get the CPU and RAM resources they request. If processor or memory contention occurred, however, he expected each VM in the High Priority pool to receive a greater share of the resources than each VM in the Low Priority pool; during contention, the performance of the High Priority VMs might remain fairly steady, while the Low Priority VMs might noticeably slow down. Unfortunately, the opposite result was realized: the VMs in the Low Priority pool actually began to run faster than the VMs in the High Priority pool.
The real cause of this problem was that CPU and RAM shares were configured for the resource pools without the administrator fully understanding the impact of shares. In this case, the administrator assumed that setting High CPU Shares on a resource pool containing 50 VMs is equivalent to setting High CPU Shares on each VM individually. But this assumption is incorrect. To understand how Shares were truly applied, consider the following example.
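As a rough numerical sketch of the kind of imbalance involved (this is not the white paper's own worked example): a pool's Shares are divided among its members, so a crowded "High Priority" pool can leave each of its VMs with a smaller effective entitlement than a VM in a sparsely populated "Low Priority" pool. The VM counts (50 and 5) and the pool share values (8000 for High, 2000 for Low) below are assumptions chosen for illustration.

```python
# Hedged sketch: per-VM entitlement when a pool's Shares are divided among its
# members. VM counts and pool share values are hypothetical.

def effective_shares_per_vm(pool_shares, vm_count):
    """Rough per-VM portion of the pool's entitlement, assuming equal VM shares."""
    return pool_shares / vm_count

high_per_vm = effective_shares_per_vm(8000, 50)  # "High Priority" pool with 50 VMs
low_per_vm = effective_shares_per_vm(2000, 5)    # "Low Priority" pool with 5 VMs

print(f"High Priority pool, per VM: {high_per_vm}")  # 160.0
print(f"Low Priority pool,  per VM: {low_per_vm}")   # 400.0
# Under contention, each Low Priority VM ends up with more than twice the
# entitlement of each High Priority VM -- the inversion described above.
```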