Performance Best Practices for VMware vSphere 4.0
ESX Storage Considerations
This subsection provides guidance regarding storage considerations in ESX.
ESX supports raw device mapping (RDM), which allows management and access of raw SCSI disks or
LUNs as VMFS files. An RDM is a special file on a VMFS volume that acts as a proxy for a raw device.
The RDM file contains metadata used to manage and redirect disk accesses to the physical device.
Ordinary VMFS is recommended for most virtual disk storage, but raw disks may be desirable in some
cases.
ESX supports three virtual disk modes: Independent persistent, Independent nonpersistent, and
Snapshot. These modes have the following characteristics:
Independent persistent – Changes are immediately written to the disk, so this mode provides the
best performance.
Independent nonpersistent – Changes to the disk are discarded when you power off or revert to a
snapshot. In this mode disk writes are appended to a redo log. When a virtual machine reads from
disk, ESX first checks the redo log (by looking at a directory of disk blocks contained in the redo log)
and, if the relevant blocks are listed, reads that information. Otherwise, the read goes to the base disk
for the virtual machine. These redo logs, which track the changes in a virtual machine’s file system
and allow you to commit changes or revert to a prior point in time, can incur a performance penalty.
Snapshot – A snapshot captures the entire state of the virtual machine. This includes the memory and
disk states as well as the virtual machine settings. When you revert to a snapshot, you return all these
items to their previous states. Like the independent nonpersistent disks described above, snapshots
use redo logs and can incur a performance penalty.
ESX supports multiple disk types:
Thick – Thick disks, which have all their space allocated at creation time, are further divided into two
types: eager zeroed and lazy zeroed.
Eager-zeroed – An eager-zeroed thick disk has all space allocated and zeroed out at the time of
creation. This extends the time it takes to create the disk, but results in the best performance,
even on first write to each block.
Lazy-zeroed – A lazy-zeroed thick disk has all space allocated at the time of creation, but each
block is only zeroed on first write. This results in a shorter creation time, but reduced
performance the first time a block is written to. Subsequent writes, however, have the same
performance as an eager-zeroed thick disk.
Thin – Space required for a thin-provisioned virtual disk is allocated and zeroed upon demand, as
opposed to upon creation. There is a higher I/O penalty during the first write to an unwritten file
block, but the same performance as an eager-zeroed thick disk on subsequent writes.
Virtual machine disks created through the vSphere Client (whether connected directly to the ESX host or
through vCenter) can be lazy-zeroed thick disks (thus incurring the first-write performance penalty
described above) or thin disks. Eager-zeroed thick disks can be created from the console command line
using vmkfstools. For more details refer to the vmkfstools man page.
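For example, an eager-zeroed thick disk might be created from the console as follows. The 10 GB size and the datastore path are illustrative placeholders, not values from this guide; consult the vmkfstools man page for the authoritative option list.

```shell
# Sketch: create a 10 GB eager-zeroed thick virtual disk with vmkfstools.
# -c creates a virtual disk of the given size; -d selects the disk format
# (eagerzeroedthick, zeroedthick, or thin). Path and size are examples only.
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/myvm/myvm.vmdk
```

Creating the disk as eagerzeroedthick trades a longer creation time for avoiding the zero-on-first-write penalty described above.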
The alignment of your file system partitions can impact performance. VMware makes the following
recommendations for VMFS partitions:
Like other disk-based file systems, VMFS suffers a penalty when the partition is unaligned. Using the
vSphere Client to create VMFS partitions avoids this problem since it automatically aligns the
partitions along the 64KB boundary.
To manually align your VMFS partitions, check your storage vendor's recommendations for the
partition starting block. If your storage vendor makes no specific recommendation, use a starting
block that is a multiple of 8KB.
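The arithmetic behind these boundaries can be sketched as follows. This assumes 512-byte sectors, which is typical for the disks of this era but is an assumption not stated in this guide.

```shell
# Illustrative alignment arithmetic (assumes 512-byte sectors).
SECTOR_SIZE=512
BOUNDARY=$((64 * 1024))                    # 64KB boundary used by the vSphere Client
START_SECTOR=$((BOUNDARY / SECTOR_SIZE))   # first sector on a 64KB boundary
echo "64KB-aligned partition start: sector ${START_SECTOR}"

# The fallback recommendation (a starting block that is a multiple of 8KB)
# means any starting sector that is a multiple of 8KB / 512 B = 16 sectors.
STEP=$((8 * 1024 / SECTOR_SIZE))
echo "8KB-multiple step: ${STEP} sectors"
```

Note that a 64KB-aligned start (sector 128) is automatically also a multiple of 8KB, so the vSphere Client default satisfies the fallback recommendation.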