Get vSphere network info using PowerCLI & CDP

PowerCLI is powerful stuff. It can be used to set specific configurations in your vSphere environment, but it can also be used to collect information. This post goes into detail on how to get network information using VMware PowerCLI and the Cisco Discovery Protocol (CDP).

Cisco Discovery Protocol (CDP)

CDP is used to share information about other directly-connected Cisco networking equipment, such as upstream physical switches. CDP allows ESX and ESXi administrators to determine which Cisco switch port is connected to a given vSwitch. When CDP is enabled for a particular vSwitch, properties of the Cisco switch, such as device ID, software version, and timeout, may be viewed from the vSphere Client. This information is useful when troubleshooting network connectivity issues related to VLAN tagging methods on virtual and physical port settings.

CDP under VMware vSphere

By default, ESX(i) has CDP configured on the vSwitch in Listen mode. This enables you to view all relevant information with regard to your Cisco network.

To show this information in the vSphere Client, you have to enable CDP on the ESX(i) host and on the Cisco switch. This KB article by VMware is a good reference on how to enable CDP. You can also consult the configuration guide of ESX or ESXi.
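
Since CDP is configured per vSwitch, you can also use PowerCLI to check which mode each standard vSwitch is in and, if needed, change it. Below is a minimal sketch against the vSphere API; the host name is just an example and the part that actually changes the mode is commented out on purpose.

# Check (and optionally change) the CDP mode of the standard vSwitches on a host
# (the host name is just an example)
$esx = Get-View (Get-VMHost "esx01.lab.local").ID
$netSys = Get-View $esx.ConfigManager.NetworkSystem

foreach ($vSwitch in $netSys.NetworkInfo.Vswitch) {
    # The link discovery settings live on the vSwitch bridge,
    # which is only present when uplinks are attached
    $ldp = $vSwitch.Spec.Bridge.LinkDiscoveryProtocolConfig
    Write-Host $vSwitch.Name $ldp.Protocol $ldp.Operation

    # Uncomment to set the vSwitch to 'both' (listen and advertise):
    # $vSwitch.Spec.Bridge.LinkDiscoveryProtocolConfig = New-Object VMware.Vim.LinkDiscoveryProtocolConfig
    # $vSwitch.Spec.Bridge.LinkDiscoveryProtocolConfig.Protocol = "cdp"
    # $vSwitch.Spec.Bridge.LinkDiscoveryProtocolConfig.Operation = "both"
    # $netSys.UpdateVirtualSwitch($vSwitch.Name, $vSwitch.Spec)
}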

If all works fine, you can view the network information by clicking the info icon next to the vSwitch:

For more information on CDP under vSphere see this KB article.

PowerCLI & CDP

When CDP is enabled you can use PowerCLI to generate network information output for your vSphere environment. This is very useful information, which you can use for troubleshooting or, in my case, to prepare for a migration.

The KB article by VMware already gives a good PowerCLI script to get network information of all the connected NICs on a specific ESX(i) host.

Get-VMHost | Where-Object { $_.State -eq "Connected" } |
    ForEach-Object { Get-View $_.ID } |
    ForEach-Object { $esxname = $_.Name; Get-View $_.ConfigManager.NetworkSystem } |
    ForEach-Object {
        foreach ($physnic in $_.NetworkInfo.Pnic) {
            # QueryNetworkHint returns what the pNIC sees on the wire, including CDP data
            $pnicInfo = $_.QueryNetworkHint($physnic.Device)
            foreach ($hint in $pnicInfo) {
                Write-Host $esxname $physnic.Device
                if ($hint.ConnectedSwitchPort) {
                    $hint.ConnectedSwitchPort
                } else {
                    Write-Host "No CDP information available."
                    Write-Host
                }
            }
        }
    }
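
Because I needed this information to prepare for a migration, it was handy to collect the same CDP hints into objects and export them to a CSV file. Below is a minimal sketch of that idea; the selected properties and the output path are just examples.

# Collect the CDP hints per host/NIC and export them to CSV (output path is an example)
$report = foreach ($vmhost in Get-VMHost | Where-Object { $_.State -eq "Connected" }) {
    $netSys = Get-View (Get-View $vmhost.ID).ConfigManager.NetworkSystem
    foreach ($physnic in $netSys.NetworkInfo.Pnic) {
        foreach ($hint in $netSys.QueryNetworkHint($physnic.Device)) {
            New-Object PSObject -Property @{
                Host       = $vmhost.Name
                Nic        = $physnic.Device
                SwitchName = $hint.ConnectedSwitchPort.DevId
                PortId     = $hint.ConnectedSwitchPort.PortId
                Address    = $hint.ConnectedSwitchPort.Address
            }
        }
    }
}
$report | Export-Csv -Path "C:\temp\cdp-info.csv" -NoTypeInformation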

If you're not a script kiddie, you can also use PowerGUI and import the VMware Community PowerPack to do the work for you. When you are connected to a vCenter host, this will provide you with a list of all the NICs in your ESX(i) hosts and their network information.

Relevant links

More information on CDP & ESXi: VirtualClouds.info – Configure Cisco CDP on ESX(i)

Long Distance vMotion by Cisco & VMware

Cisco and VMware are currently working on a new technology called Long Distance vMotion. This makes it possible to move application workloads between multiple datacenters without any downtime. The vMotion technology is already available within VMware vSphere. It is used to migrate a VM from one host to another, or, with Storage vMotion, to move the VM's data from one storage location to another, all while the machine remains fully operational and available to the end user.
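
Within a single datacenter both types of migration can already be kicked off from PowerCLI today. A minimal sketch; the VM, host and datastore names are just examples.

# vMotion: move a running VM to another host, the VM stays on the same storage (names are examples)
Move-VM -VM (Get-VM "web01") -Destination (Get-VMHost "esx02.lab.local")

# Storage vMotion: move the VM's files to another datastore while it keeps running
Move-VM -VM (Get-VM "web01") -Datastore (Get-Datastore "datastore02")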

The changing model of data center management and provisioning allows VMware VMotion to be used for several purposes without violating the application SLAs.

Data center maintenance without downtime: Applications on a server or data center infrastructure requiring maintenance can be migrated offsite without downtime.
Disaster avoidance: Data centers in the path of natural calamities (such as hurricanes) can proactively migrate the mission-critical application environments to another data center.
Data center migration or consolidation: Migrate applications from one data center to another without business downtime as part of a data center migration or consolidation effort.
Data center expansion: Migrate virtual machines to a secondary data center as part of data center expansion to address power, cooling, and space constraints in the primary data center.
Workload balancing across multiple sites: Migrate virtual machines between data centers to provide compute power from data centers closer to the clients (“follow the sun”) or to load-balance across multiple sites. Enterprises with multiple sites can also conserve power and reduce cooling costs by dynamically consolidating virtual machines into fewer data centers (automated by VMware Dynamic Power Management [DPM]), another feature enabling the green data center of the future.

In these cases the secondary cloud can be provided by a service provider through a “virtual private cloud” connected to your “internal cloud”. This brings down the TCO of your server infrastructure, because you use capacity in the secondary datacenter only when you need it and pay per use for the consumed capacity. So this technology is a real cloud enabler!

More information about this technology can be found here, in a post written by Omar Sultan.

Read the paper on this subject created by Cisco and VMware here.

Power over vSwitch back to where it belongs

With the upcoming new version of ESX on the horizon, Cisco is publishing more and more detail on the Nexus 1000V. The Nexus 1000V is Cisco’s virtual switch, which integrates directly with ESX, creating one distributed switch across all your ESX hosts. Besides this distributed switch, Cisco also integrates it into their management software. This gives network administrators the possibility to manage all switches, physical and virtual, giving the power of networking back to where it belongs: with the network admins.

Cisco Nexus 1000v with policy based VM connectivity

More information about the Cisco Nexus 1000V can be found here. A nice video can be found here.
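
The nice thing for the VMware admin is that the port profiles the network team defines on the Nexus 1000V show up in vCenter as port groups, so connecting a VM to such a policy is the familiar action of assigning its network adapter to a port group. A minimal PowerCLI sketch; the VM name and the port group name are just examples.

# Attach the VM's network adapters to a port group backed by a Nexus 1000V port profile
# (VM and port group names are examples)
Get-VM "web01" | Get-NetworkAdapter |
    Set-NetworkAdapter -NetworkName "n1kv-WebTier" -Confirm:$false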

But during a presentation I attended, Cisco also explained the Unified I/O concept. That was new to me: it’s going to be possible to combine network and storage traffic over one connection, a.k.a. Unified I/O. Wow! That’s great. That would result in only two cables going into my server. But how does it work? Currently 10 Gbit is available, but in the next year 40/100 Gbit will be introduced.

Combined with the ever-growing capacity of CPU and RAM in servers, this will result in VM host monsters. But how are all these new technologies going to integrate with one another? Thankfully Brad Hedlund, a Consulting System Engineer with Cisco and a CCIE, has written an article to explain this in detail. You can read about it here.

And as always, a picture says more than a thousand words:

Image: Converged Network Adapter (CNA) with the Nexus 1000V (click on the picture for more detail).