Remote USB for virtual machines

Server virtualization decouples a server from the hardware it runs on. This is one of the key selling points of virtualization, but it also means you can't connect external hardware devices directly to your virtual machine.

Especially for USB devices this can be a problem. There are still application vendors out there that have implemented some sort of USB dongle for licensing their software. To connect your USB dongle to a virtual machine, an implementation of remote USB is needed to connect the USB devices to your virtual machine.

The following picture gives a graphical representation of how this works.  


The concept of remote USB is based on the client-server model. All USB devices are centralized on one remote USB server. A software USB device driver (the client) in the virtual machine makes a connection with this remote USB server.

Through management on the remote USB server, USB devices can be allocated to a particular virtual machine. This allows virtual machines to make use of USB devices without those devices being connected to the hardware the virtual machine runs on. All USB data is now sent over the network from the virtual machine to the remote USB server and vice versa.
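
The core idea, shuttling USB request data over the network between the client driver and the remote USB server, can be sketched as a simple message-framing exercise. This is a toy illustration, not any vendor's actual protocol; all field names below are hypothetical.

```python
import struct

# Toy framing for USB requests sent over TCP.
# Real products use their own (proprietary) wire protocols.

def pack_urb(device_id: int, endpoint: int, payload: bytes) -> bytes:
    """Frame one USB request: payload length, device id, endpoint, then data."""
    header = struct.pack("!IHB", len(payload), device_id, endpoint)
    return header + payload

def unpack_urb(frame: bytes):
    """Parse a frame produced by pack_urb back into its fields."""
    length, device_id, endpoint = struct.unpack("!IHB", frame[:7])
    payload = frame[7:7 + length]
    return device_id, endpoint, payload

# The client driver in the VM would pack requests like this and send
# them over a socket; the remote USB server unpacks and forwards them
# to the physical device, then sends the response back the same way.
frame = pack_urb(device_id=3, endpoint=1, payload=b"\x12\x34")
print(unpack_urb(frame))
```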

There are several vendors on the market with a remote USB solution. All deliver the same result : access to USB devices over the network. The only difference is that some offer a software-based solution while others offer a hardware-based solution.

When using a software-based solution, a server is needed that acts as the remote USB host. The server must have enough USB ports available to connect all the USB devices. The hardware-based solution is a network device which acts as the end point in the remote USB solution. All USB devices are connected to this network device.

Examples :

Note : Remote USB isn’t needed for VDI implementations. Most VDI vendors have USB redirection incorporated into their solution. In case you are using thin clients, USB redirection can also be implemented on the thin client policy server.

Advancing The Foundation For Cloud Computing

VMware released the long-awaited vSphere 4.1. This update of the vSphere product line includes some great new features. The notes on "What's new" can be found here.

Some new features include :

Network I/O Control. Traffic-management controls allow flexible partitioning of physical NIC bandwidth between different traffic types, including virtual machine, vMotion, FT, and IP storage traffic (vNetwork Distributed Switch only).

Memory Compression. Compressed memory is a new level of the memory hierarchy, between RAM and disk. Slower than memory, but much faster than disk, compressed memory improves the performance of virtual machines when memory is under contention, because less virtual memory is swapped to disk.

Storage I/O Control. This feature provides quality-of-service capabilities for storage I/O in the form of I/O shares and limits that are enforced across all virtual machines accessing a datastore, regardless of which host they are running on. Using Storage I/O Control, vSphere administrators can ensure that the most important virtual machines get adequate I/O resources even in times of congestion.
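
The shares-and-limits mechanism behind Storage I/O Control can be illustrated with a small sketch: under congestion, each VM gets I/O proportional to its shares, capped by its optional limit. This is a simplified model for illustration, not VMware's actual scheduler.

```python
def allocate_iops(total_iops, vms):
    """Split a congested datastore's IOPS across VMs proportionally to
    their shares, honoring each VM's optional hard limit (None = no limit).
    vms maps name -> (shares, limit). Simplified proportional-share model."""
    alloc = {}
    remaining = dict(vms)
    pool = total_iops
    while remaining:
        total_shares = sum(shares for shares, _ in remaining.values())
        capped = False
        for name, (shares, limit) in list(remaining.items()):
            fair = pool * shares / total_shares
            if limit is not None and fair > limit:
                # VM's limit is below its fair share: give it the limit,
                # return the surplus to the pool for the other VMs.
                alloc[name] = limit
                pool -= limit
                del remaining[name]
                capped = True
        if not capped:
            for name, (shares, limit) in remaining.items():
                alloc[name] = pool * shares / total_shares
            break
    return alloc

# Datastore delivers 1000 IOPS under congestion; the "test" VM has a
# 200 IOPS limit, so the important VMs absorb what it cannot use.
vms = {"critical": (2000, None), "normal": (1000, None), "test": (1000, 200)}
print(allocate_iops(1000, vms))
```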


ESX/ESXi Active Directory Integration. Integration with Microsoft Active Directory allows seamless user authentication for ESX/ESXi. You can maintain users and groups in Active Directory for centralized user management and you can assign privileges to users or groups on ESX/ESXi hosts. In vSphere 4.1, integration with Active Directory allows you to roll out permission rules to hosts by using Host Profiles.

Also, a nice note is included under Install and Deployment : "Future major releases of VMware vSphere will include only the VMware ESXi architecture." This means we won't see ESX anymore in future releases, and you will eventually need to upgrade to ESXi. Not a bad thing, as I already explained in an earlier post here. For more information on how to migrate to ESXi, look at this whitepaper written by VMware.

You can download the new release here. Next thing : update to new release and play with new features 😉

UPDATE : For more information on vSphere 4.1, go to this link page of Eric Siebert. It's an excellent resource for all the vSphere 4.1 information out there.

HDS SRA 2.0

If you want VMware SRM to work with your storage array, it needs to communicate with the storage array. For this reason each storage vendor has created a Storage Replication Adapter (SRA) which plugs into SRM. You can download these SRAs for each vendor here. (Note : only download from the VMware website. Why?)

HDS also provides an SRA to connect to its storage arrays. But installing the SRA alone won't get the storage array recognized by SRM. The picture below gives a graphical representation of the components needed to let SRM communicate with the HDS storage array.


The HDS SRA 2.0 needs an instance of the HDS Command Control Interface (CCI) to communicate with the HDS storage array. The HDS CCI is storage management software provided by HDS. It can be installed on the SRM server alongside SRM and the SRA.

To create an instance of the HDS CCI, a Hitachi Online Remote Copy Manager (HORCM) service is defined manually on the Windows host. The HDS CCI manages the storage array(s) through the defined control LUNs on each storage array. The HORCM service is configured in the HORCM file. This file defines which LUNs are replicated between the protected and the recovery site. These are the LUNs that SRM can see and manage for disaster recovery and testing purposes.
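
To give an idea of what such a HORCM file looks like, here is an illustrative sketch. All IP addresses, the command device path, serial numbers, and group names are made-up placeholders; consult the HDS CCI documentation for the exact fields your array requires.

```
# horcm0.conf - illustrative only; all addresses and IDs are placeholders
HORCM_MON
# ip_address   service  poll(10ms)  timeout(10ms)
192.168.0.10   horcm0   1000        3000

HORCM_CMD
# command device (control LUN) presented by the storage array
\\.\PhysicalDrive2

HORCM_LDEV
# dev_group  dev_name    serial#  CU:LDEV(LDEV#)  MU#
SRM_VMFS     datastore1  85011    00:10           0

HORCM_INST
# dev_group  remote_ip     service
SRM_VMFS     192.168.1.10  horcm1
```

A matching horcm1.conf on the recovery site points back at this host, so both HORCM instances know each other's replicated LUN pairs.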

During configuration the HDS SRA is pointed to the HORCM instance which manages the storage array(s). This provides the HDS SRA with the information it passes on to the SRM server.

This creates the connection between the SRM server and the storage array necessary for SRM to work. For more information, look at the VMware vCenter Site Recovery Manager Deployment guide created by HDS. Follow its step-by-step instructions carefully, as they are essential to getting HDS to work with SRM.

VMware vSphere & SRM with Hitachi Data Systems (HDS)

This post is about Site Recovery Manager (SRM) in combination with the storage systems of Hitachi Data Systems (HDS), which I've been working with over the last couple of months. This is a braindump of my knowledge about the product.

HDS has a complete portfolio of storage solutions available. Every type of VMware environment can find an HDS solution suited to its needs. I've been working with the AMS2500, a SAS / SATA based storage array. It is a suitable solution for midrange companies, but can also be used by enterprise-size companies as a 2nd-tier SAN. Next to this storage array HDS also provides an enterprise-class storage array with its Universal Storage Platform.

For both types of storage HDS provides best practices for VMware which can be found here for the AMS2000 series and here for the Universal Storage Platform VM.

Like all major storage vendors, HDS is a VMware partner when it comes to SRM. They have committed themselves to supporting their storage systems with SRM through the HDS SRA 2.0.

For more information on how to set up VMware SRM with HDS storage arrays, take a look at the deployment guide here that HDS created. It explains in detail how to set up your HDS storage array and HDS Storage Replication Adapter (SRA) for the creation of your SRM environment.

For more information on HDS with VMware look at this resource page.

vSphere management GOing to the cloud?

Last week VMware launched its new product: VMware Go. This product is specifically targeted at the SMB market. A clever move by VMware to expand its virtualization market share in the SMB segment. VMware is already the market leader in virtualization when it comes to enterprise companies, but in the SMB segment it has competitors like Microsoft's Hyper-V, Citrix XenServer, and Red Hat's KVM.

Cost isn't the only factor that stops SMB companies from entering the path of virtualization; most also lack the resources and knowledge needed. With Go, VMware tries to simplify the process of virtualization. It provides a management interface to VMware ESXi from the Go cloud.

Eric Sloof over at NTPRO.NL points to a YouTube video where Dave McCrory, founder and CTO of Hyper9, explains how VMware Go works.

The picture above shows the same explanation of VMware Go that Dave McCrory gives in his video. It shows that management takes place, through a web interface, from the workstation where the administrator is located. Everything is managed from the VMware Go cloud. The ESXi hosts are connected to the Go cloud by installing a proxy on an admin desktop. This desktop provides the Go cloud with a management interface to the ESXi host.

This is a rather new concept for managing servers. Normally a client-server management model is applied to this kind of infrastructure service. VMware vCenter, the current management tool for vSphere infrastructures, is an example of this type of management model.

The question is : is this the first step in moving vSphere management into the cloud?

This may seem like a far-fetched idea, but is it? We are now living in the world of cloud computing. Let's look at the same picture as above, but introduce the vCloud concept into the equation.

Here you can see the same concept as in the picture above. The proxy desktop has been replaced by a VMware Go Proxy appliance which manages the ESXi hosts in your (local) private vSphere cloud. There is a connection between the private vSphere cloud and the vCloud(s) provided by the various VMware hosting partners. All of this can be managed from a central point : the VMware Go cloud.

Whether the name stays the same isn't important; call it vCenter Cloud Edition (CE), it doesn't matter. What does matter is that you now have a central point of management to control your hybrid cloud. Not only can you manage your private cloud, but from the same interface you can manage your various vCloud partner (or even non-VMware) cloud services. This makes the VMware vCenter Cloud Edition a cloud broker to manage all your IaaS cloud services, maybe even with integration to manage PaaS or SaaS solutions. One cloud to rule them all 😉

Will this become reality? Only time will tell.

My personal opinion : I like the idea of cloud brokers. I don't think one (cloud) provider or solution can serve all the cloud services a company needs, so in my opinion cloud brokers will become the next battleground in cloud land. That's why I like the idea of a central management cloud broker solution like a vSphere vCenter Cloud Edition.

What do you think?

Cloud from an end-user perspective


“I don’t want to care” is probably one of the main reasons end-users want to move to cloud services besides of course IT costs.

Over the last couple of decades IT has become more and more entangled with our daily lives. At work, at home, in the streets; IT is everywhere. We are more dependent on IT services than we think.

The thing is, we don't want to care about IT. IT should just be there, like electricity, tap water, or the mailman dropping the "oldskool snailmail" in the mailbox. All are examples of services that we take for granted and don't think about. They are delivered when we expect them, either on demand or on a predefined schedule. How these services are organized or how they work is something most end-users don't care about.

The same goes for cloud services. End-users don't want to care about IT, they just want to consume it. End-users in this context can be anybody, corporate or personal, as long as they use the cloud service. The technology behind these cloud services is of no interest to them. If the technology isn't important to the end-user, what is?

The things that end-users look for in IT cloud services can be brought down to 3 points :

  1. Performance; whether it's a local software program on their personal computer or a cloud service doesn't matter, as long as it performs to the expectations of the end-users.
  2. Availability; if you buy a service, you want to use it whenever you need it. A big frustration is not being able to use a service at the moment you need it. A cloud app can have 99.9% uptime, but the one hour the cloud service was down at the moment users needed it the most will result in a negative experience for the end-user.
  3. Security; data is the new oil in this information era, and personal data of end-users is at the top of the list. End-users want to be sure that whatever data they put into the cloud doesn't leave the cloud without their permission. They want full control over their data.
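
To put point 2 in perspective, the gap between an uptime percentage and real downtime is easy to compute:

```python
def yearly_downtime_hours(uptime_percent: float) -> float:
    """Hours per year a service may be down at a given uptime level."""
    hours_per_year = 365 * 24  # 8760
    return hours_per_year * (1 - uptime_percent / 100)

# 99.9% uptime still allows almost 9 hours of downtime per year,
# and the SLA says nothing about *when* those hours occur.
print(round(yearly_downtime_hours(99.9), 2))   # → 8.76
print(round(yearly_downtime_hours(99.99), 2))  # → 0.88
```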

So whenever you think about cloud computing and what matters, take into account the end-user and the 3 points above that matter to them!

Cloud in your Pocket

Wyse created a VMware View / RDP client app for the iPhone, giving the user full control over his or her virtual desktop. The result : the ability to access the cloud from your pocket through your mobile device.

 This concept isn’t new, but Wyse made such an incredible app that it’s really easy to perform actions while connected to your desktop. Easy to use also on such a small screen!

Watch the video Richard Garsthagen made at VMworld to see it in action.

http://www.youtube.com/watch?v=UZ24A5kE6XM

Amazon Virtual Private Cloud

Amazon has managed to create a solution that lets your private cloud connect directly to their public cloud via a service called Virtual Private Cloud. Through gateways on both ends a secure VPN connection is set up, enabling the connection between the two clouds.


The concept behind this new service is great. It really lets you extend your private cloud with all the nice features that the public cloud has to offer, but as a private section of the Amazon public EC2 cloud.

This opens up the possibilities of cloud extension such as :

  • Cloud burst; Being able to add extra capacity to your cloud without having to add more hardware to your private cloud.
  • Lab cloud; Instead of using capacity in your private cloud for testing purposes, enable this in a Virtual Private Cloud.
  • Business continuity cloud; In case of a disaster or failure of your private cloud, continue your services in a Virtual Private Cloud.

As you can see there are a lot of possibilities for adding a "public private cloud" to your private cloud. The thing is that it needs to be easy to connect to, which in my opinion Amazon achieved by supporting commonly used gateways (Check Point, Juniper, Vyatta, etc.). What worries me is the bandwidth needed between the internal private cloud and Amazon's Virtual Private Cloud. Ah well, it's a first step. Let's see what the future brings us.

For more information on the Virtual Private Cloud by Amazon go here.

Red Hat launching Broker for Cloud Computing

Red Hat has launched a new open-source project related to cloud computing : Deltacloud. It can be seen as a cloud broker between the end-user access device and the various cloud computing solutions available today. Currently each cloud provider is introducing its own API to interact with. Red Hat's Deltacloud tries to fill this gap by introducing an open-source project which will act as a doorway to multiple clouds. Public clouds like Amazon's EC2 and private clouds based on Red Hat Enterprise Virtualization are currently supported. Support for VMware-based private clouds and Rackspace will be added soon.

For more information about the subject go to the Deltacloud project site here.


The purpose of Deltacloud is to protect applications against "cloud API changes and incompatibilities". It should create an ecosystem of developers, tools, scripts, and applications that can interoperate across the public and private clouds available today. For example : you can start building applications through the Deltacloud API in your private cloud and, without any changes, move them to a public cloud. Interoperability in your hands!
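
The broker idea — one client-side interface in front of several provider APIs — can be sketched like this. Note that the class and method names are invented for illustration; Deltacloud's real API is REST-based and looks different.

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """One driver per provider; applications only see this interface.
    Hypothetical sketch of the broker pattern, not the Deltacloud API."""
    @abstractmethod
    def start_instance(self, image: str) -> str: ...

class EC2Driver(CloudDriver):
    def start_instance(self, image: str) -> str:
        # a real driver would call the EC2 API here
        return f"ec2:{image}"

class RHEVDriver(CloudDriver):
    def start_instance(self, image: str) -> str:
        # a real driver would call the RHEV API here
        return f"rhev:{image}"

def launch(driver: CloudDriver, image: str) -> str:
    # application code stays provider-agnostic: swap the driver,
    # and the same call runs against a different cloud
    return driver.start_instance(image)

print(launch(EC2Driver(), "web-frontend"))   # → ec2:web-frontend
print(launch(RHEVDriver(), "web-frontend"))  # → rhev:web-frontend
```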