Chapter 2 ESX and Virtual Machines
For nearly all workloads, custom hyper-threading settings are not necessary. In cases of unusual
workloads that interact badly with hyper-threading, however, choosing the None or Internal
hyper-threading option might help performance. For example, an application with cache-thrashing
problems might slow down another application sharing its physical CPU core. In this case, configuring
the virtual machine running the problem application with the None hyper-threading option might help
isolate it from other virtual machines.
The trade-offs of configuring None or Internal should also be considered. With either of these settings,
there can be cases where there is no core to which a descheduled virtual machine can be migrated, even
though one or more logical processors (hyper-threads) are idle. As a result, virtual machines with
hyper-threading set to None or Internal can experience performance degradation, especially on systems
with a limited number of CPU cores.
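For reference, this per-virtual-machine mode is set in the vSphere Client under the virtual machine's
Resources tab (Advanced CPU > Hyperthreaded Core Sharing). The following is a minimal sketch of the
equivalent .vmx entry, assuming the sched.cpu.htsharing option name and values; verify both against the
VMware Resource Management Guide for your release:

    # Hyperthreaded core sharing mode for this virtual machine.
    # Assumed values: "any" (the default), "none", or "internal".
    sched.cpu.htsharing = "none"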
Non-Uniform Memory Access (NUMA)
IBM (X-Architecture), AMD (Opteron-based), and Intel (Nehalem) non-uniform memory access (NUMA)
systems are supported in ESX. On AMD Opteron-based systems, such as the HP ProLiant DL585 Server,
BIOS settings for node interleaving determine whether the system behaves like a NUMA system or like a
uniform memory access (UMA) system. For more information, refer to your server's documentation.
If node interleaving is disabled, ESX detects the system as NUMA and applies NUMA optimizations. If
node interleaving (also known as interleaved memory) is enabled, ESX does not detect the system as
NUMA.
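As a quick check of how ESX classified the system, an interactive esxtop session can be used (a sketch;
the exact field names vary by release):

    # From the ESX service console (or an SSH session), start esxtop
    # and press "m" to switch to the memory screen. On a host that ESX
    # detected as NUMA, the header includes a per-node memory line
    # (for example "NUMA /MB"); on a UMA host that line is absent.
    esxtop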
The intelligent, adaptive NUMA scheduling and memory placement policies in ESX can manage all
virtual machines transparently, so that administrators do not need to deal with the complexity of
balancing virtual machines between nodes by hand. However, manual override controls are available,
and advanced administrators may prefer to control the memory placement (through the Memory
Affinity option) and processor utilization (through the Only Use Processors option).
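For illustration, these two overrides correspond to per-virtual-machine scheduler options in the .vmx
file. The sketch below assumes the sched.cpu.affinity and sched.mem.affinity option names and a host
whose NUMA node 0 contains CPUs 0 through 3; check the names and your host's topology before using them:

    # "Only Use Processors": restrict this virtual machine's vCPUs to CPUs 0-3.
    sched.cpu.affinity = "0,1,2,3"
    # "Memory Affinity": allocate this virtual machine's memory from NUMA node 0.
    sched.mem.affinity = "0"

Setting one of these without the other can reintroduce remote memory accesses, so the two are normally
set together and pointed at the same node.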
By default, ESX NUMA scheduling and related optimizations are enabled only on systems with a total of
at least four CPU cores and with at least two CPU cores per NUMA node. On such systems, virtual
machines can be separated into the following categories:
• Virtual machines with a number of vCPUs equal to or less than the number of cores in each NUMA
node. These virtual machines are managed by the NUMA scheduler and have the best performance.
• Virtual machines with more vCPUs than the number of cores in a NUMA node. These virtual machines
are not managed by the NUMA scheduler. They still run correctly, but they do not benefit from the
ESX NUMA optimizations.
For example, on a host with two NUMA nodes of four cores each, a 4-vCPU virtual machine fits within a
node and is NUMA-managed, while a 6-vCPU virtual machine does not fit and is not.
NOTE More information about using NUMA systems with ESX can be found in the “Advanced Attributes
and What They Do” and “Using NUMA Systems with ESX Server” sections of the VMware Resource
Management Guide, listed in “Related Publications” on page 8.
Configuring ESX for Hardware-Assisted Virtualization
For a description of hardware-assisted virtualization, see “Hardware-Assisted Virtualization” on page 11.
Hardware-assisted CPU virtualization has been supported since ESX 3.0 (VT-x) and ESX 3.5 (AMD-V). On
processors that support hardware-assisted CPU virtualization, but not hardware-assisted MMU
virtualization, ESX 4.0 by default chooses between the binary translation (BT) virtual machine monitor (VMM)
and a hardware virtualization (HV) VMM based on the processor model and the guest operating system,
providing the best performance in the majority of cases (see http://communities.vmware.com/docs/DOC-9882
for a detailed list). If desired, however, this behavior can be changed, as described below.
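For illustration, this per-virtual-machine override is exposed in the vSphere Client (Edit Settings >
Options > CPU/MMU Virtualization) and corresponds to monitor options in the .vmx file. A minimal sketch,
assuming the monitor.virtual_exec and monitor.virtual_mmu option names and values; verify them against
the documentation for your release:

    # Force the hardware virtualization (HV) monitor; "software" selects the
    # binary translation (BT) monitor, and omitting the line leaves the
    # default automatic selection in place.
    monitor.virtual_exec = "hardware"
    # Hardware-assisted MMU virtualization (EPT/RVI), where the CPU supports it.
    monitor.virtual_mmu = "hardware"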