The Perfect Storm – My thoughts on the VMware NSX launch

Last week VMware launched its long-awaited NSX product line: software that, in my eyes, is going to help revolutionize the way we think about networking. But is it all that new? I don't think so.

It is, however, a perfect storm of things coming together. Over the last decade server virtualization has been the hot topic, enabling IT infrastructure teams to create more flexible workloads that could keep up with the demands of the business. No longer did we have to wait until the iron was rolled in; we could just provision new "virtual hardware" on the spot. At first, server and storage hardware vendors did not build their hardware to facilitate virtualization and its benefits. Over time, synergy occurred between virtualization and server/storage hardware, resulting in a more symbiotic approach to delivering IT infrastructure to the business. With the addition of automated, self-service management software came the "cloud revolution".

What always remained at the status quo was the network. The network was always "external" to the virtualization software; network components in the virtualization software were always an add-on to the existing network stack. Network intelligence such as routing, firewalling, and load balancing remained in the physical world, not the virtual world.

Still, it is networking software running on top of a network hardware box, but up until this point nobody had come up with a good way to decouple them. Network software has always been tied to the hardware: you bought the hardware together with the software. It may be integrated into some overall management tooling, but all network intelligence was dependent on, and limited by, the physical box it was running on.

But now the time has come to split the software from the hardware. It's time to break down the barriers that networking has defined and pull the functionality of networking into the virtual world. The reason I'm calling it a perfect storm is that everything is in place to take networking to the next level. It's the missing piece of the puzzle for the software-defined datacenter.

The product – what does it do?

NSX is an overlay product. It's not a replacement for the current network: you still need a physical infrastructure to connect the different components with one another, just like you need physical servers to create virtual machines. But the purpose of the physical network will change. It will become a generic transport layer. All network intelligence, such as switching, routing, firewalling, and load balancing, will be provided by NSX.
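To make the "generic transport layer" idea a bit more concrete: NSX builds its logical networks as overlays, typically using VXLAN-style encapsulation (RFC 7348), where each logical network is identified by a 24-bit VXLAN Network Identifier (VNI) and the virtual machine's Ethernet frames are wrapped in a small header before crossing the physical fabric. The following is a minimal illustrative sketch of that encapsulation in Python, not NSX code:

```python
import struct

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend a VXLAN header (RFC 7348) to an inner Ethernet frame.

    Header layout (8 bytes total): 1 byte of flags (0x08 = "VNI valid" bit),
    3 reserved bytes, a 3-byte VNI, and 1 more reserved byte.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!B3s3sB", 0x08, b"\x00\x00\x00",
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

def vxlan_decap(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    vni = int.from_bytes(packet[4:7], "big")
    return vni, packet[8:]
```

The physical network only ever sees the outer UDP/IP packet carrying this payload, which is exactly why it can stay a dumb, generic transport while all the per-tenant intelligence lives in software.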

And that's where we can compare it to server virtualization again: it's abstracting functionality from the hardware layer. Within server virtualization we abstract the functionality of the server hardware and provide it as a virtual machine within the virtualization layer. By making it software, it can be easily copied and can therefore be created on the fly on top of the underlying hardware.

The same goes for NSX. All the intelligence, the network functionality, is abstracted and put into software. The only difference between the two is that NSX does not run on top of the network hardware. It runs alongside the server virtualization / cloud software and integrates directly with the virtual network components. This is why the networking infrastructure can remain as-is and does not need modification. However, as with server virtualization, we'll see hardware evolve over time to facilitate the needs of network virtualization.

Change happens for a reason: to innovate. Together.

Change is always disruptive. If change didn't happen, we would all still be walking around in a bear skin, hunting down squirrels with a bat. Every once in a while somebody has to come up with a new idea and drop the pebble into the pond. Nicira did this, and with VMware they are now trying to bring this innovation to the rest of the world. Fact remains that the pond is still a pond: we're still talking about networking here.

And of course there are those who want to leave things as they are. If I were the biggest vendor of networking hardware in the world, I wouldn't be the first in line to change things. However, I don't believe in FUD when it comes down to technology. We can have fun discussing what the solution is still missing, talk about what it can do, and work out how to make the next step. It all comes down to jointly working towards a better IT infrastructure world.

That sounds nice and sweet, of course, and in this case it's true. Network folks and virtualization techies need to work together to define the next steps in this thing we call software-defined networking. It needs to be open and backwards compatible. In my opinion NSX fits that profile: a product that can merge the physical world of networking with the virtual world of innovation.

Conclusion

I think we are at the beginning of a new era. I keep comparing these times to the early period of server virtualization: not knowing what was going to come or what the exact impact of the technology would be. It's the same with NSX. It's new, we can see the potential, but we still have a long way to go. But one thing I'm sure of: in 10 years' time we'll look back on this period with a smile and think, "That was a whole different world!"

Note: I work for VMware, so I'm biased with regard to new products that my company delivers. However, I've been working with the virtualization / cloud stack over the last few years and have seen the limitations the network creates when defining an IT infrastructure that meets business demands. In this case I'm not only pro-VMware, I'm pro network (r)evolution!

New storage books for designing cloud infra

When creating a design for your cloud environment, you always have to take the physical components, such as compute, network, and storage, into account. These components are the foundation that your cloud environment will be built on. A good design of these components is crucial for your overall design and for the performance and resilience of your solution. Fact remains that you can't know it all, but when you do want to know it, the best way is to learn it from the experts.

Now we have the chance to do so. Three experts in the field of storage have released two books about storage in relation to virtual cloud environments.

Mostafa Khalil from VMware released the book "Storage Implementation in vSphere 5.0":

“The more important VMware virtualized infrastructure becomes, the more important virtualization storage becomes. Virtualization storage planning and management is complex, and it’s been almost impossible to find authoritative guidance – until now. In Storage Implementation in vSphere 5.0, one of VMware’s leading experts completely demystifies the “black box” of vSphere storage, and provides illustrated, step-by-step procedures for performing virtually every task associated with it. Mostafa Khalil brings together detailed techniques and guidelines, insights for better architectural design, planning and management best practices, common configuration details, and deep dives into both vSphere and external storage-related technologies. He gives technical professionals the deep understanding they need to make better choices, solve problems, and keep problems from occurring in the first place. This book answers crucial, ground-level questions such as: How do you configure storage array from “Vendor X” to support vSphere “Feature Y”? How do you know you’ve configured it correctly? What happens if you misconfigure it? How can you tell from logs and other tools that you have a problem – and how do you fix it? Most of the author’s troubleshooting techniques are based on his own personal experience as a senior VMware support engineer helping customers troubleshoot their own vSphere production environments – experience that nobody else has.”

At the same time Vaughn Stewart and Mike Slisinger from NetApp released the book “Virtualization Changes Everything: Storage Strategies for VMware vSphere & Cloud Computing”:

“Storage is a foundational component in the support of virtualization and cloud computing – and it is dynamically evolving. It is an aspect of the datacenter that is all-too-often overlooked, but without storage, there is no data, and without data, there is no cloud. Virtualization Changes Everything, by Vaughn Stewart and Mike Slisinger, examines the evolutionary influence of host virtualization and cloud computing in breaking storage deployment out of outdated silo models and into a dynamic, flexible hosting environment. Virtualization Changes Everything reviews common goals and challenges associated with providing storage service with cloud computing, and addresses each through the application of advanced storage technologies designed to scale in order to support the ever-expanding storage needs of the future. The examples within the book are pulled from real-world experience, and often involve the integration of multiple innovative technologies. If you are looking for measured guidance on high availability, efficiency, integration and performance for the storage in your cloud, then this book is for you!”

Both are excellent books on the topic of storage and the impact it has on your virtual cloud environment. A must-read for everybody who wants to gain more knowledge in this area.