
Thick and Thin Provisioning with VMware vSphere 5

Overview
A majority of VMs have a virtual hard drive attached that the guest uses as persistent storage. Some VMs have multiple virtual hard drives, some have just one small disk, while others may have a single monolithic virtual disk. When you create a new virtual hard drive for a VM in the vSphere world, you end up creating a virtual machine disk (VMDK) as two associated files: a descriptor (pointer) file and a flat file that contains the data. The data file can be created in two different formats, thick or thin. To add to it, thick has two subtypes: thick lazy zeroed and thick eager zeroed. Which type you use in your environment can have an impact on performance and is a major consideration in how you manage storage in a vSphere environment. The details of VMDKs have changed with pretty much every iteration of VMware's enterprise virtualization platform and it can be difficult to find accurate information. Here I attempt to pull together several resources from VMware and discuss VMDK types in vSphere 5 and VMFS-5.

Thick Provisioning Eager Zero
Starting with thick provisioning of VMDKs, we'll discuss what happens with the two types at creation and at write time. When a thick provisioned eager zeroed disk is created, the maximum size of the disk is allocated to the VMDK and all of that space is zeroed out. You will notice that a new disk takes a while to be created when it is thick provisioned eager zeroed; this is because of the zeroing process. If you create an 80GB VMDK that is thick provisioned eager zeroed, it will allocate 80GB and write 80GB worth of zeros to the SAN. VMDKs with this format have the best performance because when a write operation occurs to the VMDK, the location on the disk is determined and then the write is performed.[1]
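
To make the difference between creation time and write time concrete, below is a minimal Python sketch that models an eager zeroed thick disk as a plain object. The block size, class, and method names are purely illustrative; this is a conceptual model, not VMware code or the actual VMFS on-disk layout.

    BLOCK_SIZE = 1024 * 1024  # toy 1MB block size, illustrative only

    class EagerZeroedThickDisk:
        """Toy model: all space is allocated and zeroed when the disk is created."""

        def __init__(self, size_bytes):
            num_blocks = size_bytes // BLOCK_SIZE
            # Creation is the slow part: every block is allocated and filled
            # with zeros right now, before the guest writes anything.
            self.blocks = [bytearray(BLOCK_SIZE) for _ in range(num_blocks)]

        def write(self, block_index, data):
            # Cheapest write path of the three formats: locate the block and
            # write the data. No zeroing and no allocation are needed.
            self.blocks[block_index][:len(data)] = data

    # Scaled-down stand-in for an 80GB eager zeroed VMDK.
    disk = EagerZeroedThickDisk(16 * BLOCK_SIZE)
    disk.write(3, b"guest data")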

Thick Provisioning Lazy Zero
A thick provisioned lazy zeroed VMDK is similar to the eager zeroed format except that the zeroing operation is performed just before a write, not at creation. The space is still allocated to the VMDK, so after creating a VMDK with this format the datastore will show that the space is no longer available, but there is the additional overhead of zeroing at write time. If an 80GB VMDK is created in the thick provisioned lazy zeroed format, 80GB is allocated on the datastore but nothing else occurs until data is written to the VMDK. Each time a write goes to a new block, the block is zeroed out first and then the data is written. This means each new write operation carries an overhead that is not present with thick provisioned eager zeroed. Performance of lazy zeroed disks is not as good as eager zeroed but is better than thin provisioned.
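
Continuing the same toy model (again, purely illustrative and not VMware code), a lazy zeroed disk reserves its full size up front but defers zeroing each block until the first write that touches it:

    BLOCK_SIZE = 1024 * 1024  # same toy 1MB block size as the previous sketch

    class LazyZeroedThickDisk:
        """Toy model: full size is reserved at creation, blocks are zeroed on first write."""

        def __init__(self, size_bytes):
            self.reserved_bytes = size_bytes           # the datastore sees this as used immediately
            self.num_blocks = size_bytes // BLOCK_SIZE
            self.zeroed = [False] * self.num_blocks    # but no block has been zeroed yet
            self.blocks = {}

        def write(self, block_index, data):
            if not self.zeroed[block_index]:
                # The first write to each block pays an extra cost:
                # zero the block out, then perform the write.
                self.blocks[block_index] = bytearray(BLOCK_SIZE)
                self.zeroed[block_index] = True
            self.blocks[block_index][:len(data)] = data

    disk = LazyZeroedThickDisk(16 * BLOCK_SIZE)
    disk.write(3, b"first write to a block pays the zeroing cost")
    disk.write(3, b"later writes to the same block do not")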

Thin Provisioning
Thin provisioned disks are the third type of VMDK format. Thin provisioned VMDKs do not allocate or zero out space when they are created; they do it only at write time. When an 80GB VMDK is created thin provisioned, only a little bit of metadata is written to the datastore. The 80GB does not show up in the datastore as in use like it does with thick provisioning. Instead, a thin provisioned VMDK only takes up space when data is actually written. At write time, space is allocated on the datastore, the metadata of the VMDK is updated, then the block or blocks are zeroed out, and finally the data is written. Because of all this overhead at write time, thin provisioned VMDKs have the lowest performance of the three disk formats. This overhead, though, is very small and most environments will not notice it until they have very write-intensive VMs.
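
In the same toy model (illustrative only), a thin disk writes almost nothing at creation and pays for allocation, metadata updates, and zeroing on the write path:

    BLOCK_SIZE = 1024 * 1024  # same toy 1MB block size as the earlier sketches

    class ThinDisk:
        """Toy model: creation writes only metadata; everything else happens at write time."""

        def __init__(self, size_bytes):
            # Creation is nearly free: just a little metadata, no space reserved.
            self.metadata = {"capacity": size_bytes, "allocated_blocks": []}
            self.blocks = {}

        def write(self, block_index, data):
            if block_index not in self.blocks:
                # Most expensive write path of the three formats:
                # 1. allocate new space on the datastore
                self.blocks[block_index] = bytearray(BLOCK_SIZE)
                # 2. update the VMDK's metadata to record the new block
                self.metadata["allocated_blocks"].append(block_index)
                # 3. zero the new block (the fresh bytearray above is already zero-filled)
            # 4. finally, write the guest's data
            self.blocks[block_index][:len(data)] = data

    disk = ThinDisk(16 * BLOCK_SIZE)
    disk.write(0, b"only now does the disk start consuming datastore space")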

Thin provisioning of VMDKs allows you to overcommit storage to VMs on a datastore: you may have a 100GB datastore with 15 VMs, each with a 50GB VMDK attached, and still have room on the datastore. The catch is that the sum total of all data written by the VMs on the 100GB datastore cannot go above 100GB.[2] Thin provisioning allows administrators to use space on datastores that would otherwise be unavailable with thick provisioning, possibly reducing costs and administrative overhead. There is a less obvious catch to thin provisioning, though, that is important for datastore management and is often overlooked.
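
The overcommitment in the example above is simple arithmetic. Here is a short sketch using the numbers from this paragraph (the per-VM usage figures are hypothetical):

    datastore_gb = 100
    vm_count = 15
    vmdk_size_gb = 50

    provisioned_gb = vm_count * vmdk_size_gb             # 750GB promised to the guests
    overcommit_ratio = provisioned_gb / datastore_gb      # 7.5x overcommitted

    # The datastore only fills up as the guests actually write data, so the real
    # limit is on written data, not on provisioned capacity.
    written_gb_per_vm = [2, 5, 1, 3] + [4] * 11           # hypothetical usage, 55GB total
    assert sum(written_gb_per_vm) <= datastore_gb, "the datastore would be out of space"

    print(f"{provisioned_gb}GB provisioned on a {datastore_gb}GB datastore "
          f"({overcommit_ratio:.1f}x overcommit), {sum(written_gb_per_vm)}GB actually written")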

When a thin provisioned VMDK has a write to perform, it must allocate new space to the VMDK and update the metadata of the disk to include the location of that new space. At this moment the host the VM is running on must be able to use the datastore exclusively for a split second. Traditionally this lock was taken on the datastore using a SCSI reservation. In and of itself this is no problem, but if you have several VMs on the datastore with thin provisioned VMDKs, they may all perform writes at the same time, multiple SCSI reservation commands will be sent, and performance suffers while the hosts try to update the metadata. A datastore can be brought to its knees when this happens because only one host at a time has write access to the datastore, causing all the others to pause their write operations.
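
One rough way to picture the problem: a SCSI reservation locks the whole device, so unrelated thin VMDKs growing on the same datastore end up waiting on each other. Below is a sketch using a single Python lock to stand in for the reservation; it is a conceptual model only, not how ESXi is actually implemented.

    import threading

    # A single lock stands in for a SCSI reservation: it covers the entire
    # datastore, not just the VMDK whose metadata is being changed.
    datastore_reservation = threading.Lock()

    def grow_thin_vmdk(vmdk_name):
        # Any host that needs to allocate a new block for any thin VMDK on this
        # datastore must take the same device-wide lock, one at a time.
        with datastore_reservation:
            print(f"{vmdk_name}: allocating a new block and updating metadata")

    # Ten thin disks growing at the same time all contend for one reservation,
    # so the updates are fully serialized even though they touch different VMDKs.
    threads = [threading.Thread(target=grow_thin_vmdk, args=(f"vm{i:02d}.vmdk",))
               for i in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()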

VMware has worked to reduce this issue, and with VMFS-5 and vSphere 5 it introduced the vStorage APIs for Array Integration, or VAAI. VAAI allows storage operations to be offloaded to the storage array instead of being performed by an ESXi host. One of the operations introduced with VAAI is called Atomic Test & Set (ATS), which is a replacement for SCSI reservations. It allows the metadata to be updated without locking out all the other hosts. Great news if you have a VAAI supported storage array. Not so much for non-VAAI arrays.
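
ATS behaves more like a compare-and-swap against a single small lock record, so only updates that touch the same piece of metadata contend, rather than every host contending for the whole device. Here is a conceptual sketch; the function, names, and values are purely illustrative and not the real VMFS locking structures.

    def atomic_test_and_set(lock_records, key, expected, new_value):
        """Conceptual ATS: change one lock record only if it still holds the expected value."""
        if lock_records.get(key) == expected:
            lock_records[key] = new_value
            return True      # this host now owns just that piece of metadata
        return False         # another host won the race; retry, nobody else is blocked

    lock_records = {"vm07.vmdk": "free", "vm08.vmdk": "free"}

    # Two hosts growing two different thin disks no longer contend with each other:
    assert atomic_test_and_set(lock_records, "vm07.vmdk", "free", "locked-by-host-a")
    assert atomic_test_and_set(lock_records, "vm08.vmdk", "free", "locked-by-host-b")

    # Only a second attempt against the *same* record fails and has to retry.
    assert not atomic_test_and_set(lock_records, "vm07.vmdk", "free", "locked-by-host-a")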

Conclusion
Used correctly, thin and thick provisioning can provide an administrator better performance and utilization on their datastores and storage arrays. Deciding which format to use can be difficult at times, especially if the underlying storage array also performs thin provisioning, has a large cache, or offers other advanced features. Because there are so many variables, remember there is no substitute for testing before committing to production. For further reading I have included links to all of my sources from VMware and highly recommend all of the blog posts by Cormac Hogan. If you have any feedback on this article feel free to contact me; contact information is on the right.

Footnotes
  1. This is a vast oversimplification of how the write process works inside of vSphere. For details on the process, take it from the source.
  2. For simplicity I am ignoring log files, config files, swap files and other metadata that would also take space in the datastore.


References


*disclaimer*
The information contained in this article is accurate to the best of my ability. It is possible I made a mistake and my data is wrong. Before making any decisions in your environment be sure to verify all information with your vendor, as I cannot be held liable for anything bad that comes of the information provided.
At the time of this writing I was employed by a company that sells storage devices and I provided consulting services for VMware's products. This article was created of my own accord and does not represent any viewpoint or official statement from either company.



Written by Eric Wamsley
Posted: May 7th, 2013 8:24pm
Topic: VMware vSphere 5 VMDK provisioning
Tags: vSphere, VMFS, thick_provisioning, thin_provisioning, storage, virtualization


 ©Eric Wamsley - ewams.net