VMware’s road to the heterogeneous cloud

VMware acquired DynamicOps yesterday. Not the biggest news of all time; VMware regularly acquires virtualization and cloud software products that are then integrated into the overall portfolio. But DynamicOps is a little bit different, in my opinion. DynamicOps is a software company that builds cloud automation solutions enabling provisioning and management of IT services across heterogeneous clouds. That last part is the most interesting: “… across heterogeneous clouds”.

This sets the solution apart from the other acquisitions. The product not only manages VMware products, but is also capable of managing cloud solutions from other vendors, making it possible to move services between cloud products of different vendors.

This is a change of direction for VMware. Before this acquisition, VMware had no management solutions for products other than its own. The acquisition of DynamicOps changes this and brings huge benefits to customers of VMware products: they can now easily manage their IT services with one product. DynamicOps’s multi-platform management and service integration will help deliver the best IT solutions to the business easily and effectively.

For VMware this means taking the path of delivering heterogeneous cloud solutions to its end customers. In my opinion, a positive direction: a new way of delivering IT solutions that helps customers deliver IT in a flexible and easy way, bringing more agility to the business.

For more information see:

VMware press release
Ramin Sayar’s blog
Leslie Muller’s blog (CTO & Founder, DynamicOps)

Powertool for your vCloud: vCloud Connector

Lately I’ve been playing around with vCloud and all the bells and whistles that come along with it. One of the tools that really got my attention was vCloud Connector. Although it might seem like a simplistic tool, it is actually pretty powerful, especially when you take a look at its use cases. That is where it shows its real value: being the interconnect between vClouds, the hybrid cloud facilitator.

The construct of vCloud Connector

To get a better understanding of vCloud Connector, we first have to look at its construct. The following picture gives a good representation of how vCloud Connector is set up.

[Image: vCloud Connector architecture]

vCloud Connector is built on a server-node principle. One vCloud Connector Server (vCCS) is needed; this is the central access point and is responsible for managing the nodes. The nodes are vCloud Connector Nodes (vCCN): per vCloud or vSphere instance a node has to be installed and then attached to the Connector Server. Both the vCloud Connector Server and the vCloud Connector Node can be downloaded from the VMware site.

The Connector Server is controlled through the User Interface (UI), which is available as a vSphere Client plugin or via the web portal http://vcloud.vmware.com. This is where the nodes can be attached, and after that the fun can start.

Use Cases for vCloud Connector

Fun being no more than copying or moving workloads between vSphere and/or vCloud instances. Simple, but so effective. I’ve defined the five use cases I see. Bear in mind that workloads need to be powered off; it’s not a (long-distance) vMotion yet, but it’s a start. Maybe in the future online migration will become a reality… Who knows!

#1 Hybrid Cloud; Probably the most frequently cited use case: moving workloads from the private, internal cloud to a vCloud instance provided by a VMware vCloud enabled partner; a public cloud. Drag and drop, and the workloads will be moved or copied to their new home.

#2 Moving between external providers; Nobody likes to be stuck with a provider. At a certain point the decision is made to move your workloads from provider A to provider B. Maybe it’s cheaper, or the new service provider offers better service levels. Whatever the reason, there is always the task of moving from A to B. vCloud Connector makes this as easy as copy-and-paste in Windows: just shut down the vApps and move the workloads to the new vCloud enabled provider.

#3 Migrating to the vCloud; One of the first questions I always get is how to migrate from vSphere to vCloud. vCloud Connector is the way to do this: it connects to the vCenter server and gives the option to move or copy virtual machines and templates to a vCloud Director Organization vDC (Org vDC). Easy and simple.

#4 Moving vApps (Templates) between Org vDCs in different organizations; Normally an organization is a boundary within vCloud Director, but vCloud Connector can be set up to move or copy vApps (and vApp templates) between Org vDCs in different organizations.

#5 vCenter to vCenter; Maybe not the first use case that comes to mind, but you can set up vCloud Connector to copy or move workloads between vCenter instances. This can be done in other ways, I know; we’ve been doing that for years. But vCloud Connector really makes this an easy task, exposing the ability through the vSphere Client plugin.

Hopefully this gives a little more insight into how vCloud Connector can be used. I would advise everybody to install and configure it within their vSphere infrastructure. A powertool to move worlds, at least VM worlds!

Understanding vSphere stretched clustering and SRM

Stretched clustering is one of the challenging topics that comes up when meeting with customers. Many customers think that stretched clustering is the ultimate disaster recovery solution and that it makes SRM obsolete. This is because people assume that HA will solve all their problems when it comes to DR, while they still have the advantage of vMotion for workload mobility between two data centers.

This, however, isn’t always true, and there are some catches to implementing stretched clusters. In some cases, depending on the customer’s requirements, it is even better to have an SRM implementation.

The table below gives a graphical representation of which solution best suits which requirement.

[Image: comparison of stretched clustering and SRM per requirement]

So in the end it’s up to the customer to decide which solution best suits their requirements. To help make this decision, VMware Tech Marketing created a whitepaper on making the right choice. A must-read for everybody involved with disaster recovery and availability within a vSphere infrastructure.

The whitepaper can be found here.

Raising my own bar. Joining #VMware

At some point in life you look back on your career and think, “What can make my working life more interesting and more challenging?” Well, over the last year I’ve been asking myself this question, searching for the answer, talking to a lot of people, and finally I found it.

And today, the 31st of August 2011, is the day I can announce to everybody what my answer is:

On the 1st of October I will be joining VMware (PSO) as a Senior Consultant!

I’m really looking forward to working with the great minds in the field of virtualization and cloud computing.

And I would like to thank everybody who was involved in my move to VMware. Thanks for the effort and the support!

vSphere 5 Cool Stuff: Storage DRS

Looking into the new features of vSphere 5, Storage Distributed Resource Scheduler (SDRS) is probably one of the best and coolest new features. DRS is one of the standard features in vSphere and is used to distribute the workload evenly over all the ESXi hosts within a cluster.

In vSphere 4.1 and earlier versions, this was based on the CPU and memory load generated by the virtual machines hosted on top of vSphere. With vSphere 5, VMware has added Storage DRS, which extends DRS into the storage stack.

Storage DRS provides smart virtual machine placement and load balancing mechanisms based on I/O and space capacity. Overall this comes down to four nice features that Storage DRS provides:

1. Smart initial placement of the VMDK on the LUNs
2. Migration recommendations (manual and automated)
3. Affinity and anti-affinity rules
4. Storage maintenance mode

Basically these features are similar to those of “normal” DRS. Storage DRS adds a whole new dimension to vSphere, makes it more flexible, and strengthens vSphere’s role as a cloud operating system. In my opinion this will make life a lot easier, and in the end it will save time and money in operating your virtual infrastructure.
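
To make this a little more concrete, here is a minimal PowerCLI sketch of how a datastore cluster with Storage DRS could be set up. This assumes a PowerCLI release that includes the datastore cluster cmdlets; the vCenter, datacenter, and datastore names are examples, not part of any real environment.

# Hedged PowerCLI sketch: group datastores into a datastore cluster and
# enable Storage DRS. Cmdlet parameters assumed; all names are examples.
Connect-VIServer vcenter.lab.local

# Create a new datastore cluster in an existing datacenter.
$dsc = New-DatastoreCluster -Name "DSC-Gold" -Location (Get-Datacenter "DC01")

# Add two existing datastores to the cluster.
Get-Datastore "LUN01", "LUN02" | Move-Datastore -Destination $dsc

# Enable fully automated Storage DRS, balancing on both space usage and I/O latency.
Set-DatastoreCluster -DatastoreCluster $dsc `
    -SdrsAutomationLevel FullyAutomated `
    -IOLoadBalanceEnabled:$true `
    -IOLatencyThresholdMillisecond 15 `
    -SpaceUtilizationThresholdPercent 80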

To see Storage DRS in action, have a closer look at the following video created by VMware:

For more information on the topic of Storage DRS see:

VMware product page
What’s New in vSphere 5.0 – Storage

And if you really want to dive into the new features of DRS, including Storage DRS, you have to take a look at the book written by Duncan Epping and Frank Denneman: VMware vSphere 5 Clustering Technical Deepdive.

For more information on the book see the following pages on Yellow-Bricks.com and FrankDenneman.nl.

Set VMware VUM to use HTTP download

VMware vCenter Update Manager (VUM) comes preconfigured with a couple of download locations for your ESX(i) patches. By default, VUM uses HTTPS to connect to these locations. This is secure, but some companies want to inspect and/or scan everything that is downloaded, and therefore require you to download your patches over HTTP.

To configure VUM to use HTTP instead of HTTPS, you’ll need to edit the following XML file (please create a copy before manually editing the file):

<install location VUM>\VMware\Infrastructure\Update Manager\vci-integrity.xml

This file contains the host configuration settings for VUM. Change the following entries from https to http:

<ESX3xUpdateUrl>https://www.vmware.com/PatchManagementSystem/patchmanagement</ESX3xUpdateUrl>

<ESX4xUpdateUrl>https://hostupdate.vmware.com/software/VUM/PRODUCTION/index.xml</ESX4xUpdateUrl>

<ESXThirdPartyUpdateUrl>https://hostupdate.vmware.com/software/VUM/PRODUCTION/csco-main-index.xml</ESXThirdPartyUpdateUrl>

After you have saved vci-integrity.xml, restart the VUM service on the server where VUM is installed.
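
If you manage multiple VUM servers, the edit can also be scripted. Below is a minimal PowerShell sketch under stated assumptions: the install path and the service display name are examples and may differ per installation.

# Hedged PowerShell sketch; adjust $file to your VUM install location.
$file = "C:\Program Files\VMware\Infrastructure\Update Manager\vci-integrity.xml"

# Keep a backup copy before editing, as advised above.
Copy-Item $file "$file.bak"

# Rewrite only the patch download URLs from https to http.
(Get-Content $file) `
    -replace 'https://www\.vmware\.com/PatchManagementSystem', 'http://www.vmware.com/PatchManagementSystem' `
    -replace 'https://hostupdate\.vmware\.com', 'http://hostupdate.vmware.com' |
    Set-Content $file

# Restart the VUM service so it picks up the change (display name assumed).
Get-Service -DisplayName "*Update Manager*" | Restart-Service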

VMware still leader in x86 Infrastructure Virtualization

Gartner has published its Magic Quadrant for x86 Server Virtualization Infrastructure again, and VMware once more came out as the number one company in infrastructure virtualization.

The number of installed server VMs and containers has nearly doubled in the past year as competition improves, virtualization adoption expands, the midmarket heats up, desktop virtualization drives more workloads to servers and workloads are deployed by cloud computing providers.

The Magic Quadrant by Gartner shows VMware as the number one company in x86 infrastructure virtualization, followed by Microsoft and Citrix.

[Image: Gartner Magic Quadrant for x86 Server Virtualization Infrastructure]

As of mid-2011, at least 40% of x86 architecture workloads have been virtualized on servers; furthermore, the installed base is expected to grow five-fold from 2010 through 2015 (as both the number of workloads in the marketplace grow and as penetration grows to more than 75%). A rapidly growing number of midmarket enterprises are virtualizing for the first time, and have several strong alternatives from which to choose.

Virtual machine (VM) and operating system (OS) software container technologies are being used as the foundational elements for infrastructure-as-a-service (IaaS) cloud computing offerings and for private cloud deployments. x86 server virtualization infrastructure is not a commodity market. While migration from one technology to another is certainly possible, the earlier that choice is made, the better, in terms of cost, skills and processes.

Although virtualization can offer an immediate and tactical return on investment (ROI), virtualization is an extremely strategic foundation for infrastructure modernization, improving the speed and quality of IT services, and migrating to hybrid and public cloud computing.

For the entire article go to the Gartner post over here.

ESXi Scratch Partition

Not something you will be dealing with during normal day-to-day vSphere operations, but you can bump into problems if you don’t have a scratch partition.

This happened to me while upgrading some ESXi hosts with VMware Update Manager, but it can also happen when you are trying to generate log files for VMware Support. And what’s more annoying than having system log generation problems while trying to deal with another error in your vSphere environment?

What is the ESXi scratch partition?

The scratch partition is a 4 GB VFAT partition that is created on a target device that has sufficient space available and is considered “Local” by ESXi.

The scratch partition is used for a couple of things:

* Store vm-support output (if the scratch partition is unavailable, the output is stored in memory)
* During first boot the partition is configured through syslog for retaining log files
* Userworld swapfile (when enabled)

The scratch partition is created and configured during ESXi installation or the autoconfiguration phase when the ESXi server first boots.

ESXi selects one of these scratch locations during startup in order of preference:

  1. The location configured in the /etc/vmware/locker.conf configuration file, set by the ScratchConfig.ConfiguredScratchLocation configuration option
  2. A FAT16 filesystem of at least 4 GB on the local boot device
  3. A FAT16 filesystem of at least 4 GB on a local device
  4. A VMFS datastore on a local device, in a .locker/ directory
  5. A ramdisk at /tmp/scratch/
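
As a quick way to verify which location a host actually ended up with, you can query the advanced settings. A minimal PowerCLI sketch, assuming an existing Connect-VIServer session; the host name is an example:

# Hedged PowerCLI sketch: show the configured and current scratch locations.
$esx = Get-VMHost "esx01.devtest.local"
Get-AdvancedSetting -Entity $esx -Name "ScratchConfig.*" | Select-Object Name, Value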

There are two cases where scratch space may not be automatically defined on persistent storage. In each case, a temporary scratch location will be configured on a ramdisk.

  1. ESXi deployed on a Flash or SD device, including a USB key. Scratch partitions are not created on Flash or SD storage devices, even if connected during install, due to the potentially limited read/write cycles available.
  2. ESXi deployed in a Boot from SAN configuration or to a SAS device. A Boot from SAN or SAS LUN is considered Remote and could potentially be shared among multiple ESXi hosts. Remote devices are not used for scratch to avoid collisions between multiple ESXi hosts.

So for the record: a scratch partition isn’t always created and isn’t even required to run ESXi, but its absence can result in strange problems during vSphere operations. Better safe than sorry, so create your scratch partition in advance.

Always have a scratch partition

The reason for this article is that I ran into trouble while updating my ESXi hosts with VMware Update Manager. My servers had SAS disks, so no scratch partition was configured. This led me to this VMware KB article, which explains how to configure your scratch partition by setting the advanced option ScratchConfig.ConfiguredScratchLocation.

This solved my problem, and I’ve made it a best practice to configure the scratch partition, by setting ScratchConfig.ConfiguredScratchLocation, in my kickstart script. I locate the scratch partition on the server’s local VMFS datastore; after all, ESXi creates this local VMFS datastore from the disk space that isn’t otherwise used (on servers with local disks), and that remaining space is more than enough to host the scratch location. This way the scratch location is persistent and is always created, even in the case of local SAS disks.
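
For completeness, the same configuration can be done from PowerCLI instead of a kickstart script. A minimal sketch, assuming an existing Connect-VIServer session; the host, datacenter, and datastore names are examples, and the new location only takes effect after a host reboot:

# Hedged PowerCLI sketch: point scratch at a directory on the local VMFS datastore.
$esx = Get-VMHost "esx01.devtest.local"

# Create a dedicated .locker directory on the local datastore first
# (vmstore: paths are datacenter\datastore relative).
New-Item -Path "vmstore:\DC01\datastore1\.locker-esx01" -ItemType Directory

# Set the advanced option the KB article refers to; reboot the host to activate.
Get-AdvancedSetting -Entity $esx -Name "ScratchConfig.ConfiguredScratchLocation" |
    Set-AdvancedSetting -Value "/vmfs/volumes/datastore1/.locker-esx01" -Confirm:$false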

For more information, and the sources of all my information for this article, have a look at the following links:

About the Scratch Partition – page 56 – ESXi Installable and vCenter Server Setup Guide

VMware KB article 1033696: Creating a persistent scratch location for ESXi

VMware KB article 1014953: Identifying disks when working with VMware ESX

VMware ESXi Chronicles: Ops changes part 5 – Scratch Partition

VMware taking it to the next level!

VMware will be presenting a webcast titled “Raising the Bar, Part V” on July 12th. This is the official announcement on the registration page for this event:



Raising the Bar, Part V
Date: Tuesday, July 12, 2011
Time: 9:00 AM (PT) | 12:00 PM (ET)

Please join VMware executives Paul Maritz, CEO, and Steve Herrod, CTO, for the unveiling of the next major step forward in cloud infrastructure. Paul & Steve’s 45-minute live webcast will be followed by additional online sessions where you can learn more about the industry’s most trusted virtualization and cloud infrastructure products and services. Join us and experience how the virtualization journey is helping transform IT and ushering in the era of Cloud Computing.

Probably the big announcement we’ve been waiting for these last couple of months. So don’t miss it and sign up for the webcast over here.

Bug: Host Profiles forgets AD OU structure

Lately I’ve been playing around with VMware vSphere Host Profiles. This feature creates baseline configurations for your ESX/ESXi hosts based on a reference host you configured in advance.

One of the configuration settings in Host Profiles is the Active Directory (AD) configuration. As you can read in this post, you can add your ESXi host to AD so that it can use AD for authentication.

The bug

And here is where my problem starts. As you can see in my post, I want my ESXi host to be added to the directory “OU=Servers,OU=ESXi” in my domain “DEVTEST.LOCAL”. When creating a host profile from this ESXi host’s configuration, Host Profiles will only add the value “DEVTEST.LOCAL” to the “Configuration Details” of the Host Profile.

Applying the newly created Host Profile to an unconfigured host will then result in an error stating that the host cannot be joined to AD (unless you have Domain Admin rights for the domain and can add computers to the default Computers container). This is because the specific directory structure isn’t added to the “Configuration Details” by Host Profiles when it takes a snapshot of the ESXi host’s configuration.

Solution

How to solve this problem? Well, actually the solution is very simple: add the AD OU directory structure, in my case /Servers/ESXi, to the “Configuration Details” of the Host Profile. This can be done by manually editing the “Configuration Details” of the Domain Name under Active Directory Configuration; you just append the directory structure to the domain name. This is also described in the note of this post.

Solution results in error: this solution will result in the ESXi host being added to the domain in the correct OU structure, but as a side effect the host will never reach the status Compliant. This is because the Host Profile configuration states “DEVTEST.LOCAL/Servers/ESXi”, while the ESXi host reports its configuration as “DEVTEST.LOCAL”, which incorrectly results in a Non-Compliant status.

Hope this will be solved in the next release of VMware vSphere.