ESXi Scratch Partition

This is not something you will deal with during normal day-to-day vSphere operations, but you can run into problems if you don’t have a scratch partition.

This happened to me while upgrading some ESXi hosts with VMware Update Manager, but it can also happen when you are trying to generate log files for VMware Support. And what’s more annoying than having system log generation problems while trying to deal with another error in your vSphere environment?

What is the ESXi scratch partition?

The scratch partition is a 4 GB VFAT partition that is created on a target device that has sufficient space available and is considered “Local” by ESXi.

The scratch partition is used for a couple of things:

* Store vm-support output (if the scratch partition is unavailable, the output is stored in memory)
* During first boot the partition is configured through syslog for retaining log files
* Userworld swapfile (when enabled)

The scratch partition is created and configured during ESXi installation or the autoconfiguration phase when the ESXi server first boots.

ESXi selects one of these scratch locations during startup in order of preference:

  1. The location configured in the /etc/vmware/locker.conf configuration file, set by the ScratchConfig.ConfiguredScratchLocation configuration option
  2. A Fat16 filesystem of at least 4 GB on the Local Boot device.
  3. A Fat16 filesystem of at least 4 GB on a Local device.
  4. A VMFS Datastore on a Local device, in a .locker/ directory.
  5. A ramdisk at /tmp/scratch/
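As a sketch, you can inspect which location won this selection, and override it, from Tech Support Mode on the ESXi host using the vim-cmd advanced-options interface (the datastore and directory names below are examples for illustration):

```shell
# Show the scratch location currently in use and the one configured:
vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation
vim-cmd hostsvc/advopt/view ScratchConfig.ConfiguredScratchLocation

# Point scratch at a directory on a persistent VMFS datastore
# (create the directory first; names here are examples):
mkdir -p /vmfs/volumes/datastore1/.locker-esx01
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation \
    string /vmfs/volumes/datastore1/.locker-esx01

# A reboot is required before the new scratch location takes effect.
```

If scratch is currently on a ramdisk, CurrentScratchLocation will show /tmp/scratch until the host is rebooted with the new setting.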

There are two cases where scratch space may not be automatically defined on persistent storage. In both cases, the temporary scratch location will be configured on a ramdisk.

  1. ESXi deployed on a Flash or SD device, including a USB key. Scratch partitions are not created on Flash or SD storage devices even if connected during install, due to the potentially limited read/write cycles available.
  2. ESXi deployed in a Boot from SAN configuration or to a SAS device. A Boot from SAN or SAS LUN is considered Remote, and could potentially be shared among multiple ESXi hosts. Remote devices are not used for scratch to avoid collisions between multiple ESXi hosts.

So for the record: a scratch partition isn’t always created and isn’t even required to run ESXi, but its absence can result in strange problems during vSphere operations. Better safe than sorry, so create your scratch partition in advance.

Always have a scratch partition

The reason for this article is that I ran into trouble while updating my ESXi hosts with VMware Update Manager. My servers had SAS disks, so no scratch partition was configured. This led me to this VMware KB article, which explains how to configure your scratch partition by setting the advanced option ScratchConfig.ConfiguredScratchLocation.

This solved my problem, and I’ve made it a best practice to configure the scratch partition, by setting ScratchConfig.ConfiguredScratchLocation, in my kickstart script. I place the scratch partition on the local VMFS datastore of the server. After all, ESXi creates this local VMFS datastore from the disk space that isn’t otherwise used (when dealing with servers with local disks), and this remaining disk space is more than enough to host the scratch partition. This way the scratch partition is persistent and is always created, even in the case of local SAS disks.
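A minimal sketch of how such a kickstart fragment could look, assuming a local datastore named datastore1 (the datastore name and directory are assumptions for your own environment):

```shell
# %firstboot section of an ESXi kickstart script (sketch)
%firstboot --unsupported --interpreter=busybox

# Create a persistent scratch directory on the local VMFS datastore
mkdir -p /vmfs/volumes/datastore1/.locker

# Configure ESXi to use it as the scratch location; this takes
# effect after the reboot at the end of the kickstart installation
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation \
    string /vmfs/volumes/datastore1/.locker
```

Because this runs at first boot, the host comes up with a persistent scratch location straight out of the installation, without any manual configuration afterwards.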

For more information, and the sources for this article, have a look at the following links:

About the Scratch Partition – page 56 – ESXi Installable and vCenter Server Setup Guide

VMware KB article 1033696 : Creating a persistent scratch location for ESXi

VMware KB article 1014953 : Identifying disks when working with VMware ESX

VMware ESXi Chronicles : Ops changes part 5 – Scratch Partition

VMware taking it to the next level!

VMware will be presenting a webcast named “Raising the Bar, Part V” on July 12th. This is the official announcement on the registration page of the event:


Raising the Bar, Part V
Date: Tuesday, July 12, 2011
Time: 9:00 AM (PT) | 12:00 PM (ET)

Please join VMware executives
Paul Maritz, CEO and Steve Herrod, CTO for the unveiling of the next major step forward in Cloud infrastructure.
Paul & Steve’s 45 minute live webcast will be followed by additional online sessions where you can learn more about the industry’s most trusted virtualization and cloud infrastructure products and services.
Join us and experience how the virtualization journey is helping transform IT and ushering in the era of Cloud Computing

Probably the announcement we have been waiting for over the last couple of months. So don’t miss it and sign up for the webcast over here.

Bug: Host Profiles forgets AD OU structure

Lately I’ve been playing around with VMware vSphere Host Profiles. This feature creates a baseline configuration for your ESX/ESXi hosts based on a reference host you configured in advance.

One of the configuration settings in Host Profiles is the Active Directory (AD) configuration. As you can read in this post, you can add your ESXi host to AD, so ESXi can use AD for authentication.

The bug

And here is where my problem starts. As you can see in my post, I want my ESXi host to be added to the directory “OU=Servers,OU=ESXi” in my domain “DEVTEST.LOCAL”. When creating a host profile from this ESXi host’s configuration, Host Profiles will only add the value “DEVTEST.LOCAL” to the “Configuration Details” of the Host Profile.

Applying the newly created Host Profile to an unconfigured host will then result in an error stating that the host cannot be joined to AD (unless you have Domain Admin rights for the domain and can add computers to the Computers OU). This is because Host Profiles does not add the specific directory structure to the “Configuration Details” when taking a snapshot of the ESXi host’s configuration.


How to solve this problem? Well, the solution is actually very simple: add the AD OU directory structure, in my case /Servers/ESXi, to the “Configuration Details” of the Host Profile. This can be done by manually editing the “Configuration Details” of the Domain Name under Active Directory Configuration: you just append the directory structure to the domain name. This is also described in the note of this post.

Solution results in error: this will result in the ESXi host being added to the domain in the correct OU structure, but as a result the ESXi host will never reach the status Compliant. This is because the Host Profile configuration states “DEVTEST.LOCAL/Servers/ESXi”, while the ESXi host presents its configuration as “DEVTEST.LOCAL”, which results in an incorrect Non-Compliant status.

Hope this will be solved in the next release of VMware vSphere.

Citrix best practices for virtualizing XenApp

VMware already published a document about the best practices of virtualizing XenApp servers on top of VMware vSphere. You can read about it and download the document over here.

Now Citrix has published a best practice document with their point of view on virtualizing XenApp servers: the XenApp Planning Guide: Virtualization Best Practices. Note that this document not only covers virtualizing XenApp on top of VMware vSphere, but also takes the hypervisors from Microsoft (Hyper-V) and Citrix (XenServer) into account.

This document also contains a lot of useful recommendations, so I would recommend reading both the VMware document and the Citrix document carefully when designing your virtual environment.

Overview of the Citrix document :

Desktop virtualization comprises many different types of virtual desktops. One option is to use a Hosted Shared Desktop model, which consists of a published desktop running on a Citrix XenApp server.

One of the goals when creating a design for Hosted Shared Desktops is to try and maximize scalability while still providing an adequate user experience. Hosted Shared Desktops provide an advantage over other desktop virtualization techniques by only requiring the use of a single operating system, which significantly reduces user resource requirements and helps improve scalability numbers.

However, in order to get the most users, making correct design decisions as to the resource allocation is important. Creating too many virtual machines or too few might negatively impact scalability.

This planning guide provides resource allocation recommendations for users running on a Hosted Shared Desktop on either Windows Server 2003 or Windows Server 2008.

Note: Even though these best practices are based on the Hosted Shared Desktop model, they are still relevant in a non-desktop model where users only connect to published applications without the desktop interface.

The XenApp Planning Guide: Virtualization Best Practices can be found here.