vCenter XVP Manager and Converter

The battle for the hypervisor continues. VMware is still ahead of its competitors, but Microsoft and Citrix are gaining market share. From the start these vendors have had tools to convert virtual machines from VMware ESX / ESXi to their own hypervisors and to manage VMware ESX machines.

VMware has its own VMware Labs, where flings are presented to the public for beta testing. These are applications that you can download and test within your own environment, and that may one day be incorporated into vSphere. Until then, flings are not supported by VMware, so use them at your own risk.

VMware has now also created a tool to manage third-party hypervisors and convert VMs from a competing hypervisor platform to VMware ESX / ESXi:

VMware XVP Manager and Converter

VMware vCenter XVP Manager and Converter provides basic virtualization management capabilities for non-vSphere hypervisor platforms, enabling centralized visibility and control across heterogeneous virtual infrastructures. It also simplifies migrations of virtual machines from non-vSphere virtualization platforms to VMware vSphere.

Features

Management of the following Microsoft Hyper-V platforms:

  • Microsoft Hyper-V Server 2008
  • Microsoft Windows Server 2008 (64-bit) with Hyper-V role enabled
  • Microsoft Hyper-V Server 2008 R2
  • Microsoft Windows Server 2008 R2 with Hyper-V role enabled

Familiar vCenter Server graphical user interface for navigating through and managing non-vSphere inventory

Ease of virtual machine migrations from non-vSphere hosts to vSphere inventory

Compatible with VMware vCenter Server 4.0 & 4.1

Scalable up to management of 50 non-vSphere hosts

You can get your own copy at http://labs.vmware.com/flings/xvp

 

[Screenshot: Guest VM Operations inside Hyper-V]

 

Troubleshooting: ESXi to vCenter connection error

Install your new ESXi with your brand new installation process. Check! Verify that all your custom settings for ESXi are correct. Check! Install your vCenter server. Check! Configure vCenter and create a cluster. Check! Add ESXi host to vCenter. ERROR! *argh*

Troubleshooting. Always fun. You learn new stuff by exploring what you are doing wrong.

But OK. Then what went wrong?

Adding a host to vCenter

When you try to add your ESXi host to your vCenter server, the vCenter agent (vpxa) is installed locally on the ESXi host. This agent is what the vCenter server uses to communicate with the ESXi host.

During the installation of the vCenter agent the following steps are taken:

1. Upload vCenter agent to the ESXi host.
2. Install vCenter agent on the ESXi host and start the daemon.
3. Verify that the vCenter agent is running and vCenter is able to communicate with it.
4. Retrieve host configuration and set configuration settings (if necessary).
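If you want to see from the ESXi side how far the agent installation got, you can check it from the console or via SSH. A minimal sketch, assuming the agent lands in /opt/vmware/vpxa (the location referenced in the installation log further on):

# Was the vCenter agent uploaded and installed on the host?
ls /opt/vmware/vpxa

# Is the vpxa daemon actually running?
ps | grep vpxa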

Everything went OK until step 3. At that point the process stalled, and eventually vCenter came back with an error.

Troubleshooting

OK, so whenever I have a problem with vSphere I turn to my good old friend Google to solve all my problems. Because, face it, the events in vCenter / ESXi aren't always that clear about what's going on. They give you a starting point, but from that point on it's either Google or a deep dive into the log files of vCenter or ESXi.

But Google returned no results that satisfied my requirements. Well, then it's off to the command line to view some logs.

But first enable the Remote Tech Support Mode to be able to log in via SSH. Read about it over here.

The logs for VMware on the ESXi host are located in /var/log/vmware.

There you will find the installation log for the vpxa daemon, vpx-iupgrade.log. Tail the log with the following command: tail -f vpx-iupgrade.log

[242692] 2011-02-11 10:16:50: exec /opt/vmware/vpxa/vpx/install.sh
Starting vmware-vpxa:failed

This shows that starting the vpxa daemon fails after the installation has completed, which was also what vCenter showed during the installation: the daemon was installed, but could not be started.

Now turn to vpxa.log in the /var/log/vmware/vpxa directory to find out what the problem is. There the following error is shown:

[2011-02-14 10:18:02.842 FFDF8B10 error ‘App’] [VpxdCertificate] Failed: unrecognized file format: /etc/vmware/ssl/rui.crt

Bingo! So now we know the problem lies with the custom certificate file that was uploaded during the installation of ESXi. Apparently something is wrong with that file.
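You can double-check this from the command line as well. The quick sketch below assumes openssl is available in the ESXi shell (it normally is); a valid PEM certificate starts with a BEGIN CERTIFICATE header and openssl should be able to parse it, while a wrongly formatted file throws an error just like vpxa does:

# A PEM certificate should start with "-----BEGIN CERTIFICATE-----"
head -1 /etc/vmware/ssl/rui.crt

# Let openssl try to parse the certificate; an error here confirms the format problem
openssl x509 -in /etc/vmware/ssl/rui.crt -noout -text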

Resolution

The problem is in fact with the custom SSL files that ESXi and vCenter use to communicate with one another securely. For more information see my previous post here. My custom SSL certificate file was not recognized, so I decided to re-generate the SSL certificate and private key.

Normally this is done during the first boot after installation, but you can also execute the script yourself from the command line:

/sbin/generate-certificates.sh

This will generate new SSL certificate files and put them in the default location /etc/vmware/ssl.

Afterwards, restart the host daemon to load the new SSL certificates:

/etc/init.d/hostd restart
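If you want to verify that hostd actually picked up the new certificate before going back to vCenter, a quick check is to look at the certificate the host now serves on port 443. This is just a sketch and assumes the openssl build on the host includes s_client:

# Show the subject and validity dates of the certificate served on 443;
# these should match the freshly generated rui.crt
echo | openssl s_client -connect 127.0.0.1:443 2>/dev/null | openssl x509 -noout -subject -dates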

Now add the host to vCenter again and you'll see that the ESXi host is added correctly.

Conclusion

The reason for this post isn't only to give you a solution to my problem, but also to show a path for troubleshooting your VMware problems. I think every VMware administrator should possess these skills and should be able to look beyond the events that are created in vCenter. As you can see in the post above, it only takes analytic skills, common sense and the internet / Google to solve your problems. And no, the command line isn't something to be afraid of, even for a Windows sysadmin ;-)

Some good resources on troubleshooting can be found here:

Trainsignal – VMware vSphere Troubleshooting Training

VMware Education – VMware vSphere: Troubleshooting [V4.x]

Youtube – VMware Videos

Building a hybrid vCloud

VMware announced that it is going to release the VMware vCloud Connector. With this connector you will be able to connect to public vCloud solutions provided by service providers like Bluelock and Colt, and in the near future Verizon (currently in beta).

Over the last couple of months these service providers have been building public vClouds based on VMware vCloud technology. The VMware vCloud Connector is the missing piece for linking your private vCloud (a.k.a. vSphere) to one of the public vClouds of these service providers.

The following link gives a graphical representation of how a hybrid vCloud is created using the VMware vCloud Connector.

The VMware vCloud Connector is a virtual appliance running in your own private vCloud. Through a plugin in your vSphere Client you can use the vCloud Connector to connect to public vClouds provided by service providers that have built vClouds accessible through the vCloud API.

By using your vSphere Client together with the vCloud Connector you get a “single pane of glass” management console for managing both your private vCloud and public vCloud resources.

This creates a hybrid cloud management interface with the following capabilities:

  • Visualize workloads and templates across vSphere and private/public vClouds
  • Migrate workloads and templates between vSphere and vClouds
  • Perform basic power and deployment operations on workloads and templates
  • Access the console of vApps in vClouds

For more information on the VMware vCloud Connector, have a look at the post created by VMware vCloud Architect Massimo Re Ferre’.

The blog post by VMware can be located here.

Advancing The Foundation For Cloud Computing

VMware has released the long-awaited vSphere 4.1. This update of the vSphere product line includes some great new features. The “What’s New” notes can be found here.

Some new features include:

Network I/O Control. Traffic-management controls allow flexible partitioning of physical NIC bandwidth between different traffic types, including virtual machine, vMotion, FT, and IP storage traffic (vNetwork Distributed Switch only).

Memory Compression. Compressed memory is a new level of the memory hierarchy, between RAM and disk. Slower than memory, but much faster than disk, compressed memory improves the performance of virtual machines when memory is under contention, because less virtual memory is swapped to disk.

Storage I/O Control. This feature provides quality-of-service capabilities for storage I/O in the form of I/O shares and limits that are enforced across all virtual machines accessing a datastore, regardless of which host they are running on. Using Storage I/O Control, vSphere administrators can ensure that the most important virtual machines get adequate I/O resources even in times of congestion.

ESX/ESXi Active Directory Integration. Integration with Microsoft Active Directory allows seamless user authentication for ESX/ESXi. You can maintain users and groups in Active Directory for centralized user management and you can assign privileges to users or groups on ESX/ESXi hosts. In vSphere 4.1, integration with Active Directory allows you to roll out permission rules to hosts by using Host Profiles.

Also a nice note is included under Install and Deployment: “Future major releases of VMware vSphere will include only the VMware ESXi architecture.” This means we won’t see ESX anymore in future releases, and that you will eventually need to upgrade to ESXi. Not a bad thing, as I already explained in an earlier post here. For more information on how to migrate to ESXi, look at this whitepaper written by VMware.

You can download the new release here. Next thing: update to the new release and play with the new features 😉

UPDATE: For more information on vSphere 4.1, go to this links page by Eric Siebert. It's an excellent resource for all the vSphere 4.1 information out there.

HDS SRA 2.0

If you want VMware SRM to work with your storage array, it needs to be able to communicate with that array. For this reason each storage vendor has created a Storage Replication Adapter (SRA) which plugs into SRM. You can download the SRA for each vendor here. (Note: only download from the VMware website. Why?)

HDS also provides an SRA to connect to its storage arrays. But installing the SRA alone won't get the storage array recognized by SRM; a few more components are needed to let SRM communicate with the HDS storage array.


The HDS SRA 2.0 needs an instance of the HDS Command Control Interface (CCI) to communicate with the HDS storage array. The HDS CCI is storage management software provided by HDS, and it can be installed on the SRM server next to SRM and the SRA.

To create an instance of the HDS CCI, a Hitachi Online Remote Copy Manager (HORCM) service is defined manually on the Windows host. The HDS CCI manages the storage array(s) through the defined control LUNs on each storage array. The HORCM service is configured in the HORCM file, which defines which LUNs are replicated between the protected and the recovery site. These are the LUNs that SRM can see and manage for disaster recovery and testing purposes.
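To give an idea of what such a HORCM file looks like, here is a purely illustrative sketch of a CCI configuration on the protected site. All group names, device names, ports, serial and LDEV numbers below are made up; the exact syntax, and how to define the command device for your array, are described in the HDS CCI documentation and the deployment guide mentioned below.

HORCM_MON
#ip_address      service   poll(10ms)  timeout(10ms)
127.0.0.1        11000     1000        3000

HORCM_CMD
#dev_name (the command device presented to this Windows host)
\\.\PhysicalDrive2

HORCM_LDEV
#dev_group   dev_name     Serial#    CU:LDEV(LDEV#)  MU#
SRM_VMFS     VMFS_LUN01   85011234   00:10           0

HORCM_INST
#dev_group   ip_address (remote CCI host)   service
SRM_VMFS     recovery-site-srm              11001

The instance is then typically started with the CCI horcmstart command, after which a command such as pairdisplay can be used to verify that the replicated LUNs and their pair status are visible to CCI, and therefore to the SRA.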

During configuration the HDS SRA is pointed to the HORCM instance which manages the storage array(s). This provides the HDS SRA with the information it passes on to the SRM server.

This creates the connection between the SRM server and the storage array that is necessary for SRM to work. For more information look at the VMware vCenter Site Recovery Manager deployment guide created by HDS. Follow its step-by-step instructions carefully, as they are essential to getting HDS to work with SRM.

VMware vSphere & SRM with Hitachi Data Systems (HDS)

This post is about Site Recovery Manager (SRM) in combination with the storage systems of Hitachi Data Systems (HDS), which I've been working with over the last couple of months. This is a braindump of my knowledge about the product.

HDS has a complete portfolio of storage solutions available; every type of VMware environment can find an HDS solution suited to its needs. I've been working with the AMS2500, a SAS / SATA based storage array. It is a suitable solution for midrange-size companies, but can also be used by enterprise-size companies as a 2nd-tier SAN. Next to this storage array, HDS also provides an enterprise-class storage array with its Universal Storage Platform.

For both types of storage HDS provides best practices for VMware which can be found here for the AMS2000 series and here for the Universal Storage Platform VM.

Like all major storage vendors, HDS is a VMware partner when it comes to SRM. They have committed to supporting their storage systems with SRM through the HDS SRA 2.0.

For more information on how to set up VMware SRM with HDS storage arrays, take a look at the deployment guide here that HDS created. It's a document that explains in detail how to set up your HDS storage array and the HDS Storage Replication Adapter (SRA) when building your SRM environment.

For more information on HDS with VMware look at this resource page.

Cloud in your Pocket

Wyse has created a VMware View / RDP client app for the iPhone, giving users full control over their virtual desktop through their iPhone. The result: the ability to access the cloud from your pocket through your mobile device.

The concept isn't new, but Wyse made such an incredible app that it's really easy to perform actions while connected to your desktop. Easy to use, even on such a small screen!

Watch the video created by Richard Garsthagen made at VMworld to see it in action.

http://www.youtube.com/watch?v=UZ24A5kE6XM

Long Distance vMotion by Cisco & VMware

Cisco and VMware are currently working on a new technology called Long Distance vMotion, which makes it possible to move application workloads between multiple datacenters without any downtime. vMotion technology is already available within VMware vSphere: it is used to migrate a VM from one host to another, or with Storage vMotion to move a VM's data from one storage location to another, all while the machine remains fully operational and available to the end user.


The changing model of data center management and provisioning allows VMware VMotion to be used for several purposes without violating the application SLAs.

  • Data center maintenance without downtime: Applications on a server or data center infrastructure requiring maintenance can be migrated offsite without downtime.
  • Disaster avoidance: Data centers in the path of natural calamities (such as hurricanes) can proactively migrate the mission-critical application environments to another data center.
  • Data center migration or consolidation: Migrate applications from one data center to another without business downtime as part of a data center migration or consolidation effort.
  • Data center expansion: Migrate virtual machines to a secondary data center as part of data center expansion to address power, cooling, and space constraints in the primary data center.
  • Workload balancing across multiple sites: Migrate virtual machines between data centers to provide compute power from data centers closer to the clients (“follow the sun”) or to load-balance across multiple sites. Enterprises with multiple sites can also conserve power and reduce cooling costs by dynamically consolidating virtual machines into fewer data centers (automated by VMware Distributed Power Management [DPM]), another feature enabling the green data center of the future.

In these cases the secondary cloud can be provided by a service provider through a “virtual private cloud” connected to your “internal cloud”. This brings down the TCO of your server infrastructure, because you use capacity in the secondary datacenter only when you need it and pay per use for the consumed capacity. So this technology is a real cloud enabler!

More information about this technology can be found here, in a post written by Omar Sultan.

Read the paper on this subject created by Cisco and VMware here.