ESXi Scratch Partition

Not something you will be dealing with during normal day-to-day vSphere operations, but you can bump into problems if you don’t have a scratch partition.

This happened to me while upgrading some ESXi hosts with VMware Update Manager, but it can also happen when you are trying to generate log files for VMware Support. And what’s more annoying than having system log generation problems while you’re trying to deal with another error in your vSphere environment?

What is the ESXi scratch partition?

The scratch partition is a 4 GB VFAT partition that is created on a target device that has sufficient space available and is considered “Local” by ESXi.

The scratch partition is used for a couple of things:

* Store vm-support output (if the scratch partition is unavailable, the output is stored in memory)
* During first boot the partition is configured through syslog for retaining log files
* Userworld swapfile (when enabled)

The scratch partition is created and configured during ESXi installation or the autoconfiguration phase when the ESXi server first boots.

ESXi selects one of these scratch locations during startup in order of preference:

  1. The location configured in the /etc/vmware/locker.conf configuration file, set by the ScratchConfig.ConfiguredScratchLocation configuration option
  2. A FAT16 filesystem of at least 4 GB on the Local boot device.
  3. A FAT16 filesystem of at least 4 GB on a Local device.
  4. A VMFS datastore on a Local device, in a .locker/ directory.
  5. A ramdisk at /tmp/scratch/
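
If you want to verify which location a host actually ended up with, you can query the ScratchConfig advanced options from the local console or an SSH session. A minimal sketch (these are the same option names you see in the vSphere Client advanced settings):

    # Scratch location that will be used after the next reboot
    vim-cmd hostsvc/advopt/view ScratchConfig.ConfiguredScratchLocation
    # Scratch location the host is using right now
    vim-cmd hostsvc/advopt/view ScratchConfig.CurrentScratchLocation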

There are two cases where scratch space may not be automatically defined on persistent storage. In each case, the temporary scratch location will be configured on a ramdisk.

  1. ESXi deployed on a Flash or SD device, including a USB key. Scratch partitions are not created on Flash or SD storage devices, even if connected during install, due to the potentially limited read/write cycles available.
  2. ESXi deployed in a Boot from SAN configuration or to a SAS device. A Boot from SAN or SAS LUN is considered Remote, and could potentially be shared among multiple ESXi hosts. Remote devices are not used for scratch to avoid collisions between multiple ESXi hosts.
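
A quick way to spot the second case on a running host: /scratch is a symlink to the active scratch location, so if it points into /tmp/scratch you know the host is running on a non-persistent ramdisk. A minimal check from the console:

    # If this shows /scratch -> /tmp/scratch, scratch lives on a ramdisk
    # and anything stored there will not survive a reboot
    ls -l /scratch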

So for the record: a scratch partition isn’t always created, and isn’t even required to run ESXi, but its absence can result in strange problems during vSphere operations. So better safe than sorry: create your scratch partition in advance.

Always have a scratch partition

The reason for this article is the fact that I ran into trouble while updating my ESXi hosts with VMware Update Manager. This was due to the fact that my servers had SAS disks and no scratch partition was configured. This led me to this VMware KB article, which explains how to configure your scratch partition by setting the advanced option ScratchConfig.ConfiguredScratchLocation.

This solved my problem, and I’ve made it a best practice to configure the scratch partition, by setting ScratchConfig.ConfiguredScratchLocation, in my kickstart script. I locate the scratch partition on the local VMFS datastore of the server; after all, ESXi creates this local VMFS datastore from the disk space that isn’t used (when dealing with servers with local disks), and this remaining disk space is more than enough to host the scratch partition. This way the scratch partition is persistent and is always created, even in the case of local SAS disks.
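
For illustration, this is roughly what that piece of my kickstart script looks like. Treat it as a minimal sketch: the datastore name datastore1 and the directory name .locker-scratch are just placeholders for your own local VMFS datastore and directory.

    # %firstboot section of the ESXi kickstart file, runs once after the first boot
    %firstboot --interpreter=busybox
    # Create a dedicated scratch directory on the local VMFS datastore
    mkdir -p /vmfs/volumes/datastore1/.locker-scratch
    # Point the host at it, the same way the VMware KB article describes
    vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/datastore1/.locker-scratch
    # The new scratch location is picked up at the next reboot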

For more information, and the sources of all my information for this article, have a look at the following links:

About the Scratch Partition – page 56 – ESXi Installable and vCenter Server Setup Guide

VMware KB article 1033696: Creating a persistent scratch location for ESXi

VMware KB article 1014953: Identifying disks when working with VMware ESX

VMware ESXi Chronicles: Ops changes part 5 – Scratch Partition

VMware taking it to the next level!

VMware will be presenting a webcast named “Raising the Bar, Part V” on July 12th. This is the official announcement on the registration page of this event:


Raising the Bar, Part V
Date: Tuesday, July 12, 2011
Time: 9:00 AM (PT) | 12:00 PM (ET)

Please join VMware executives Paul Maritz, CEO, and Steve Herrod, CTO, for the unveiling of the next major step forward in Cloud infrastructure.
Paul & Steve’s 45 minute live webcast will be followed by additional online sessions where you can learn more about the industry’s most trusted virtualization and cloud infrastructure products and services.
Join us and experience how the virtualization journey is helping transform IT and ushering in the era of Cloud Computing.

This is probably the big announcement we’ve been waiting for over the last couple of months. So don’t miss it and sign up for the webcast over here.

Citrix best practices for virtualizing XenApp

VMware already published a document about the best practices of virtualizing XenApp servers on top of VMware vSphere. You can read about it and download the document over here.

Now Citrix has published a best practice document of its own, the XenApp Planning Guide: Virtualization Best Practices, with its point of view on virtualizing XenApp servers. Note that this document not only focuses on virtualizing XenApp on top of VMware vSphere; the hypervisors from Microsoft (Hyper-V) and Citrix (XenServer) are taken into account as well.

This document also contains a lot of useful recommendations, so I would recommend reading both the VMware document and the Citrix document carefully when designing your virtual environment.

Overview of the Citrix document:

Desktop virtualization comprises many different types of virtual desktops. One option is to use a Hosted Shared Desktop model, which consists of a published desktop running on a Citrix XenApp server.

One of the goals when creating a design for Hosted Shared Desktops is to try and maximize scalability while still providing an adequate user experience. Hosted Shared Desktops provide an advantage over other desktop virtualization techniques by only requiring the use of a single operating system, which significantly reduces user resource requirements and helps improve scalability numbers.

However, in order to get the most users, making correct design decisions as to the resource allocation is important. Creating too many virtual machines or too few might negatively impact scalability.

This planning guide provides resource allocation recommendations for users running on a Hosted Shared Desktop on either Windows Server 2003 or Windows Server 2008.

Note: Even though these best practices are based on the Hosted Shared Desktop model, they are still relevant in a non-desktop model where users only connect to published applications without the desktop interface.

The XenApp Planning Guide: Virtualization Best Practices can be found here.

What ports does vSphere use?

Ok, I have some knowledge about VMware vSphere, but I can’t remember everything. Good thing there are some people out there who have good ideas about reference material. One of them is VMware Technical Account Manager Dudley Smith, who created a nice diagram of all the ports used within a vSphere environment.

Check out the blog post over here and download the nice ports diagram in PDF format.

Update: Also check this KB article by VMware for ports used by VMware products.

PXE Manager for vCenter

VMware Labs has again released a fine piece of work which should make installing ESXi a lot easier: PXE Manager for vCenter. I’m a fan of automation, especially when it comes down to the installation of ESXi. The first installation is fun, the second is nice, but from that point on it gets boring.

Until now you always had to resort to a third-party tool to do the automated installation for you. Ok, VMware provided the automated installation through kickstart, but you still needed a third-party tool to PXE boot, install, and configure your ESX(i) server.

VMware has now introduced the PXE Manager for vCenter as a fling (so do not use it in your production environment 😉 ). The rumors were already there that this functionality would be implemented in vSphere 4.1, but unfortunately it didn’t make the cut. Good to see that it wasn’t just a rumor after all and that VMware does indeed have an install/management solution for deploying ESXi onto your servers.

PXE Manager for vCenter enables ESXi host state (firmware) management and provisioning. Specifically, it allows:

* Automated provisioning of new ESXi hosts stateless and stateful (no ESX)
* ESXi host state (firmware) backup, restore, and archiving with retention
* ESXi builds repository management (stateless and stateful)
* ESXi patch management
* Multi vCenter support
* Multi network support with agents (Linux CentOS virtual appliance will be available later)
* Wake on LAN
* Host memtest
* vCenter plugin
* Deploy directly to VMware Cloud Director
* Deploy to Cisco UCS blades

See for yourself on the VMware Labs page over here.

VMware vCloud Reference Architecture

Cloud here, cloud there, cloud is everywhere at the moment, and private VMware vClouds are being deployed at customers all over the world. But as with all great things, it starts with a design. And before you can design a nice solution to fit your needs, you need to understand what vCloud is and what it’s capable of.

For this reason VMware created the vCloud Reference Architecture: a document that helps you design a private vCloud and understand all of its components. It will help you in the creation process, with building your vCloud, with sizing it for the needs of your organization, and it gives you pointers on how to manage it.

You can download “Architecting a vCloud” over here.

VMware vCenter Update Manager Utility

Today I was looking into the replacement of SSL certificates for vSphere 4.1 U1. I came across the blog post by Derek Seaman about VMware VUM 4.1 U1 SSL Certificate Replacement. His post mentions a new tool for replacing the SSL certificate: the VMware vCenter Update Manager Utility.

I looked it up in the VMware Update Manager 4.1 U1 release notes here, and there it was:

The Update Manager 4.1 Update 1 release includes the VMware vCenter Update Manager Utility that helps users reconfigure the setup of Update Manager, change the database password and proxy authentication, re-register Update Manager with vCenter Server, and replace the SSL certificates for Update Manager

With this utility you can reconfigure the following settings in VMware Update Manager:

Proxy settings

When you install the Update Manager server or the UMDS, you specify the proxy settings. If these settings change after installation, you must reconfigure Update Manager or UMDS to use the newly configured proxy.

Database user name and password

If the database user name and password change after you install the Update Manager server or UMDS, you can reconfigure Update Manager and UMDS without the need to reinstall them.

vCenter Server IP address

When you install the Update Manager server, you register it with the vCenter Server system with which Update Manager will work. Every time the vCenter Server IP address is requested, you must provide the IP address of the vCenter Server system with which Update Manager is registered. If the IP address of the vCenter Server system or Update Manager changes, you must re-register the Update Manager server with the vCenter Server system.

SSL certificate

You can replace the default Update Manager SSL certificates with either self-signed certificates or certificates signed by a commercial Certificate Authority (CA). You can replace only the SSL certificates that Update Manager uses for communication between the Update Manager server and client components. You cannot replace the SSL certificates that Update Manager uses when importing offline bundles or upgrade release files.

So a useful tool when you want to reconfigure your VMware Update Manager installation after you’ve installed it. For the complete guide by VMware click here.

Advancing The Foundation For Cloud Computing

VMware has released the long awaited vSphere 4.1. This update of the current vSphere product line includes some great new features. The notes on “What’s new” can be found here.

Some new features include:

Network I/O Control. Traffic-management controls allow flexible partitioning of physical NIC bandwidth between different traffic types, including virtual machine, vMotion, FT, and IP storage traffic (vNetwork Distributed Switch only).

Memory Compression. Compressed memory is a new level of the memory hierarchy, between RAM and disk. Slower than memory, but much faster than disk, compressed memory improves the performance of virtual machines when memory is under contention, because less virtual memory is swapped to disk.
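
Memory compression is enabled out of the box, but if you want to check or tune it on a host you can look at the Mem.MemZipEnable and Mem.MemZipMaxPct advanced settings. A minimal sketch from the console; I’m assuming the esxcfg-advcfg paths below match your build, so double-check them in the vSphere Client advanced settings first:

    # Is memory compression enabled? (1 = enabled, which is the default)
    esxcfg-advcfg -g /Mem/MemZipEnable
    # Maximum size of the compression cache, as a percentage of VM memory (default is 10)
    esxcfg-advcfg -g /Mem/MemZipMaxPct
    # Example: lower the compression cache to 5 percent of VM memory
    esxcfg-advcfg -s 5 /Mem/MemZipMaxPct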

Storage I/O Control. This feature provides quality-of-service capabilities for storage I/O in the form of I/O shares and limits that are enforced across all virtual machines accessing a datastore, regardless of which host they are running on. Using Storage I/O Control, vSphere administrators can ensure that the most important virtual machines get adequate I/O resources even in times of congestion.

ESX/ESXi Active Directory Integration. Integration with Microsoft Active Directory allows seamless user authentication for ESX/ESXi. You can maintain users and groups in Active Directory for centralized user management and you can assign privileges to users or groups on ESX/ESXi hosts. In vSphere 4.1, integration with Active Directory allows you to roll out permission rules to hosts by using Host Profiles.

Also a nice note is included under Install and Deployment: “Future major releases of VMware vSphere will include only the VMware ESXi architecture.” This means that we won’t see ESX anymore in future releases, and that you will eventually need to upgrade to ESXi. Not a bad thing, as I already explained in an earlier post here. For more information on how to migrate to ESXi, look at this whitepaper written by VMware.

You can download the new release here. Next thing: update to the new release and play with the new features 😉

UPDATE: For more information on vSphere 4.1, go to this link page of Eric Siebert. An excellent resource for all the vSphere 4.1 information out there.

vSphere management GOing to the cloud?

Last week VMware launched its new product: VMware Go. This is a product that is specifically targeted at the SMB market. A clever move by VMware to expand its market share of virtualization in the SMB segment. VMware is already the market leader in virtualization when it comes to enterprise companies, but in the SMB segment it has competitors like Microsoft’s Hyper-V, Citrix XenServer, and Red Hat’s KVM.

Cost is not the only factor that stops SMB companies from entering the path of virtualization; most SMB companies also lack the resources and knowledge needed for it. With Go, VMware tries to simplify the process of virtualization. It provides a management interface to VMware ESXi from the Go cloud.

Eric Sloof over at NTPRO.NL points to a YouTube video where Dave McCrory, founder and CTO of Hyper9, explains how VMware Go works.

The picture above shows the same explanation of VMware Go that Dave McCrory gives in his video. It shows that management takes place, through a web interface, from the workstation where the administrator is located. Everything is managed from the VMware Go cloud. The ESXi hosts are connected to the Go cloud by installing a proxy admin desktop. This desktop serves the Go cloud as a management interface for the ESXi host.

This is a rather new concept for managing servers. Normally a client-server management model is applied to this kind of infrastructure service. VMware vCenter, the current management tool for vSphere infrastructures, is an example of this type of management model.

The question is: is this the first step toward moving vSphere management into the cloud?

This may seem like a far-fetched idea, but is it? We are now living in the world of cloud computing. Let’s look at the same picture as above, but introduce the vCloud concept into the equation.

Here you can see the same concept as in the picture above. The proxy desktop has been replaced by a VMware Go Proxy appliance, which manages the ESXi hosts in your (local) private vSphere cloud. There is a connection between the private vSphere cloud and the vCloud(s) provided by the various VMware hosting partners. All of this can be managed from a central point: the VMware Go cloud.

Whether the name will stay the same isn’t important; call it vCenter Cloud Edition (CE), it doesn’t matter. What does matter is the fact that you now have a central point of management to control your hybrid cloud. Not only can you manage your private cloud, but from the same interface you can manage your various vCloud partner (or even non-VMware) cloud services. This makes the VMware vCenter Cloud Edition a cloud broker to manage all your IaaS cloud services, maybe even with integration to manage PaaS or SaaS solutions. One cloud to rule them all 😉

Will this become reality? Only time will tell.

My personal opinion: I like the idea of cloud brokers. I don’t think that one (cloud) provider or solution can serve all the cloud services needed by a company, so in my opinion cloud brokers will become the next battleground in cloud land. That’s why I like the idea of a central management cloud broker solution, and that’s why I like the idea of a vSphere vCenter Cloud Edition.

What do you think?