The rise of the manager of managers

“One ring to rule them all…” The phrase from The Lord of the Rings describes the one ring that can control everything, including the other rings, with its magic power. Kind of a nerdy intro, but it’s a good analogy for what is currently happening in the space of IT infrastructure automation.

A few years ago every vendor had its own little product portfolio in which it excelled and made most of its money. Microsoft had Windows / Office, Red Hat had Linux, VMware had virtualization, etc. But as cloud popped up, the game changed and everybody started to move into the same space: management and control of the IT infrastructure.

With that move every vendor needed (or is going) to expand its capabilities into terrain that was not its area of expertise. Every vendor is moving up or down the stack to get the most control over the IT infrastructure. It’s all about controlling the resources within the IT infrastructure and being the manager of those resources.

So with each vendor creating its own “manager” for its part of the stack, and making that manager capable of managing “other” stuff in the IT infrastructure, the question arises: “Which manager should control my IT infrastructure?”

And as with all evolution, it’s not the strongest, nor the smartest, that will rise to the top. It is the one that can adapt to its environment. As the data center is comprised of products from multiple vendors, it needs to be a product that can integrate with all of them, old and new.

VMware’s flagship in automation and orchestration is vRealize Automation (vRA). But the engine that really makes this manager adaptable is the synergy it has with vRealize Orchestrator (vRO).

vRO is the “glue” that makes it possible to connect all the data center components together and integrate them into vRA. vRA will then orchestrate whatever process (i.e. use case) needs to be automated. vRA and vRO are the tools that link everything together.

This does not mean that vRA/vRO replaces the orchestration or management tooling of other vendors. vRA/vRO simply becomes the central entity to govern, orchestrate and automate everything within the data center: one central tool to make sure that all your policies are applied across the IT infrastructure. It uses the capabilities of all the other managers to orchestrate the workflows that create IT services. In other words, it becomes the manager of managers.
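To give an idea of what that glue looks like in practice: vRO exposes a REST API through which its workflows can be started programmatically, so any tool that can make an HTTP call can plug into it. Below is a minimal sketch in Python; the host, workflow ID, credentials and input parameter name are assumptions for illustration, not a definitive integration.

    import requests

    # Hypothetical vRO host and workflow ID; replace with your own environment
    VRO = "https://vro.example.com:8281/vco/api"
    WORKFLOW_ID = "1234-5678-90ab-cdef"

    # vRO workflow inputs are passed as typed parameters
    body = {
        "parameters": [
            {
                "name": "vmName",  # hypothetical workflow input
                "type": "string",
                "value": {"string": {"value": "web-server-01"}},
            }
        ]
    }

    # Start an execution of the workflow (basic auth is an assumption;
    # production setups typically use SSO tokens)
    resp = requests.post(
        f"{VRO}/workflows/{WORKFLOW_ID}/executions",
        json=body,
        auth=("vcoadmin", "password"),
        verify=False,  # lab only: skips certificate validation
    )
    resp.raise_for_status()
    print("Execution started:", resp.headers.get("Location"))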

Below you’ll find a picture of the integration of vRealize Automation with vRealize Orchestrator, and how integration takes place with all the other components within the data center.

In the end it all comes down to integration: connecting all IT infrastructure services within the data center. vRealize Automation is the tool that provides that functionality and makes sure you can build a software-defined data center that can run any application.

Project Photon and Lightwave, the start of a new VMware era

VMware Cloud-Native Apps released its first open-source projects with the announcement of Project Lightwave and Project Photon. This is a new step in the path forward for VMware. VMware has always been closed source while being supportive of other open-source projects, but this is the first time that VMware is taking the lead and releasing code for its own projects as open source.

A new step, and it suits the approach of making “developers first class citizens of the datacenter”. I’ve been working with VMware products for some years now and have seen this trend slowly building up. There is a shift happening. No longer are applications the turf that belongs only to developers, nor is IT infrastructure the turf that belongs only to the IT operations guys. Call it evolution, call it “DevOps”, but more and more organisations see the benefit of making application and IT operations teams work closely together to get the best out of both worlds: a platform that can run any application, legacy or cloud-native.

In my opinion it is a good move for VMware to follow this trend and to transform itself from an IT infrastructure company into a company that acknowledges the needs of both the developers and the IT ops guys. VMware is one of the thought leaders in the space of virtualization and cloud computing and has experience introducing complex software concepts into enterprise environments. Server virtualization was the start, with the Software-Defined Data Center being the vision that builds on the advantages that virtualization provides.

VMware Cloud-Native Apps is a new era. A new step forward in supporting the evolution of applications into the cloud. And in my opinion it was only natural to choose the path of open source. If you want to treat developers as “first class citizens”, you need to make them part of the VMware application development lifecycle.

This is the start of more things to come. I hope we’ll see more projects targeted at the next generation of applications, with lots of community involvement and the opportunity to be part of something great. VMware ❤️ Developers!

For more information on Project Photon & Lightwave go to http://vmware.github.io/

Positioning OpenStack within the VMware SDDC

OpenStack is the leading open-source platform for deploying virtual machines in data centers. It allows IT infrastructure teams to deploy virtual machines and other IT infrastructure components, either through the self-service portal or through the API that comes with OpenStack.
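To make that concrete, here is a minimal sketch of deploying a virtual machine through the OpenStack API using the Python openstacksdk. The cloud name, image, flavor and network names are assumptions for illustration.

    import openstack

    # Credentials are read from clouds.yaml; the cloud name is an assumption
    conn = openstack.connect(cloud="mycloud")

    # Look up the building blocks for the VM (names are illustrative)
    image = conn.compute.find_image("ubuntu-cloud")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    # Boot the virtual machine and wait until it is up
    server = conn.compute.create_server(
        name="demo-vm",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)

The same result can of course be achieved by clicking through the Horizon portal; the API is what makes OpenStack attractive for automation.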

The discussion that I have with most customers around OpenStack comes down to the fact that they think the functionality of OpenStack and VMware vRealize Automation (vRA) is the same.

In a sense those customers are right: vRA does offer the functionality that OpenStack has to offer, but vRA is much more than an Infrastructure-as-a-Service (IaaS) platform. To define the positioning, I have plotted OpenStack in the VMware SDDC solution offering below.

[Figure: OpenStack plotted within the VMware SDDC solution offering]

vRA (Cloud Automation) at its core is a self-service portal that can deploy virtual machines. It consumes the resources that are provided to it by the compute, network and storage layers in order to create virtual machines that can host applications. This is the same functionality that OpenStack offers.

However, vRA and the rest of the vRealize suite can do a lot more than provision infrastructure resources. Providing IaaS is just the first step of automation. The end goal is to provide full management capabilities to manage and monitor all the data center resources needed to deliver virtual machines and application resources. Integration of all the IT management components is crucial for the creation of a Software-Defined Data Center.

And that’s where the big difference is: OpenStack in its essence is an IaaS tool, while vRealize Automation is an automation & orchestration engine to create an SDDC (which also includes IaaS).

SDDC is not a VMware-only stack. SDDC is a term for the automation, orchestration and integration of all IT components in the data center. It needs to work with all the IT solutions you already have inside your data center. So it could well be that you have a VMware estate next to an OpenStack estate, servicing different workloads within your datacenter. Whichever flavour of OpenStack the customer chooses, VMware vRA can connect via the OpenStack APIs to manage the resources in the OpenStack layer.
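Because the integration point is the standard OpenStack API rather than any particular distribution, any tool that can authenticate against Keystone can drive that estate. Here is a minimal sketch of requesting a token via the Keystone v3 API; the endpoint, user, project and password are hypothetical.

    import requests

    # Hypothetical Keystone v3 endpoint and credentials
    KEYSTONE = "https://openstack.example.com:5000/v3"

    payload = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "vra-service",
                        "domain": {"id": "default"},
                        "password": "s3cret",
                    }
                },
            },
            "scope": {
                "project": {"name": "demo", "domain": {"id": "default"}}
            },
        }
    }

    resp = requests.post(f"{KEYSTONE}/auth/tokens", json=payload)
    resp.raise_for_status()

    # The token is returned in a response header and is sent as the
    # X-Auth-Token header on subsequent calls to Nova, Neutron, Cinder, etc.
    token = resp.headers["X-Subject-Token"]
    print("Got token:", token[:12], "...")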

VMware also offers an OpenStack flavour: VMware Integrated OpenStack (VIO). This is a distribution for those companies that want an enterprise-grade version of OpenStack: a predefined installation of OpenStack that is supported and maintained by VMware.

So the conclusion is that OpenStack can be one of the building blocks within the SDDC to host the application workloads in your datacenter. It fully integrates, and the result is the best of both worlds.

API Coolness = Real Life Service Mashups


Just read about this, and in my opinion it’s pretty cool: Uber and Spotify are joining forces to give you the option to listen to your favorite music on Spotify while taking your Uber cab to your next destination. How cool is that?

The question that came to my mind was: is this a trend that will be the next cool thing in 2015?

There are probably loads of examples out there, but this does seem to be becoming a trend: connecting one service that makes your life more comfortable to other services of a different type, where the two combined create a better consumer experience.

True, this has existed for years in the digitized world of software, but as our real-life services get digitized more and more, the possibilities grow with it. Digitization and consumerization create a drive for innovation and the exploration of new ways to take the consumer experience to the next level.

And as with all software, APIs are the way to connect services together. This ability to mash up real-life services is really a level up in consumerization. Today it’s Uber and Spotify connecting. I wonder what the future will hold. One thing’s for sure: in the end it will deliver a better experience for both me and you.
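For those wondering what such a mashup looks like in code: at its simplest, it is one service calling another over REST. The sketch below is purely hypothetical; both APIs, their endpoints and their fields are invented for illustration and do not represent Uber’s or Spotify’s actual APIs.

    import requests

    # Both services below are invented for illustration
    RIDE_API = "https://api.ride.example.com/v1"
    MUSIC_API = "https://api.music.example.com/v1"

    def ride_with_my_playlist(token: str, playlist_id: str) -> None:
        """Request a ride, then point the car's audio at the rider's playlist."""
        ride = requests.post(
            f"{RIDE_API}/rides",
            headers={"Authorization": f"Bearer {token}"},
            json={"pickup": "home", "dropoff": "downtown"},
        ).json()

        # Hand the rider's playlist to the vehicle for the duration of the trip
        requests.put(
            f"{MUSIC_API}/vehicles/{ride['vehicle_id']}/playlist",
            headers={"Authorization": f"Bearer {token}"},
            json={"playlist_id": playlist_id},
        )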

“All things are created twice”: Basics of IT Infrastructure Design

“All things are created twice” is one of the principles that immediately comes to mind when thinking of designing IT infrastructures. It’s the principle by Stephen Covey that says that we first create things in our mind, before we even produce anything in the physical world.

Think about that for a sec. We think first before we do something. It’s the process we do unconsciously all day. We do it every moment of the day, over and over again.

So the same goes for designing new IT infrastructures. First think about it, write it down in design documents, and then build the IT infrastructure as defined in those documents.

Compare it to building a house. Nobody goes out, buys bricks and mortar and then starts building something without a design. The same goes for building a new IT infrastructure, or whatever it is that needs to be thought out before it is created. You don’t go out and randomly install software hoping it will result in the optimal IT infrastructure that suits your needs.

Or better yet, the needs of your customer / company. Because most of the time you don’t design according to what you think is best. You design the infrastructure to suit the needs and requirements of somebody else. Just like with building a house, you are the architect, trying to figure out those needs and requirements of your customer. How big must it be? How many people are going to live in it? How should the plumbing / electricity be installed? And last but not least, what is the total amount of money that can be spent?

But we’re not building a house, we are building an IT infrastructure. The variables change, but the design methodology is the same. First think of what you want to create, then go out and build it.

And maybe this is, in a nutshell, what the VCDX program is all about. The program is not about magical sorcery. It’s about showing you can architect an IT infrastructure that suits the needs of your customer / company. As I always say: “There is no general perfect design; the perfect design is the design that meets the requirements of your customer while taking the constraints into account.”

That’s what is looked for in the VCDX program: people that can show that skill and are able to present and defend it to the rest of the world. Or in the case of the program: the panel. So step up to the plate and show that you are an IT infrastructure designer. Good luck!

Click on the link for more information on the book by Stephen Covey, “The 7 Habits of Highly Effective People: Powerful Lessons in Personal Change”.

Trust your ESXi hypervisor!

When it comes to security there are always concerns about the security of the ESXi hypervisor. It’s always the hypervisor that is nominated as the layer that can’t be trusted within the IT infrastructure. The whitepaper by Mike Foley tries to give you more insight into how the VMware ESXi hypervisor works from a security perspective and what to look at when securing the hypervisor.

The topics covered in the white paper are:

  • Secure Virtual Machine Isolation in Virtualization
  • Network Isolation
  • Virtualized Storage
  • Secure Management
  • Platform Integrity Protection
  • VMware’s Secure Development Lifecycle

The document can be downloaded here.

The Perfect Storm – My thoughts on the VMware NSX launch

Last week VMware launched its long-awaited NSX product line. Software that, in my eyes, is going to help revolutionise the way we think about networking. But is it all that new? I don’t think so.

It is, however, a perfect storm of things coming together. Over the last decade server virtualization has been the hot topic that enabled the IT infrastructure to create more flexible workloads that could keep up with the demands of the business. No longer did we have to wait until the iron was rolled in; we could just provision new “virtual hardware” on the spot. At first, server and storage hardware vendors did not build their hardware to facilitate virtualization and its benefits. Over time synergy occurred between virtualization and server / storage hardware, resulting in a more symbiotic approach to delivering IT infrastructure to the business. With the addition of automated, self-service management software came the “cloud revolution”.

What always remained the status quo was the network. The network was always “external” to the virtualization software. Network components in the virtualization software were always an add-on to the existing network stack. Network intelligence such as routing, firewalling, load balancing, etc. always remained in the physical world and not in the virtual world.

Still, it is networking software running on top of a network hardware box, but up until this point nobody came up with a good idea to decouple them. Network software has always been tied to the hardware. You bought the hardware with the software. It may be integrated into some overall management tooling, but all network intelligence was dependent on, and limited by, the physical box it was running on.

But now the time has come to split the software from the hardware. It’s time to break the barriers that have been defined by networking and pull the functionality of networking into the virtual world. The reason I’m calling it a perfect storm is the fact that everything is in place to take networking to the next level. It’s the missing piece in the puzzle for the software-defined datacenter.

The product – what does it do?

NSX is an overlay product. It’s not a replacement for the current network. You still need a physical infrastructure to connect the different components with one another, just like you need physical servers to create virtual machines. But the purpose of the physical network will change: it will become a generic transport layer. All network intelligence such as switching, routing, firewalling and load balancing will be provided by NSX.

And that’s where we can compare it to server virtualization again. It’s abstracting the functionality from the hardware layer. Within server virtualization we abstract functionality of the server hardware and provide it as a virtual machine within the virtualization layer. By making it software, it can be easily copied and therefore created on-the-fly on top of the underlying hardware.

The same goes for NSX. All the intelligence, the network functionality, is abstracted and put into software. The only difference between the two is that NSX does not run on top of the network hardware. It runs alongside the server virtualization / cloud software and integrates directly with the virtual network components. This is the reason the networking infrastructure can remain as-is and does not need modification. However, as with server virtualization, we’ll see hardware evolve over time to facilitate the needs of network virtualization.

Change happens for a reason, to innovate. Together. 

Change is always disruptive. If change didn’t happen, we would all still be walking around in a bear skin, hunting down squirrels with a bat. Every once in a while somebody has to come up with a new idea and drop the pebble into the pond. Nicira did this, and together with VMware they are now trying to bring this innovation to the rest of the world. Fact remains that the pond is still a pond. We are still talking about networking here.

And of course there are those who want to leave things as they are. If I were the biggest vendor of networking hardware in the world, I wouldn’t be the first in line to change things. However, I don’t believe in FUD when it comes down to technology. We can have fun discussing what the solution is still missing, what it can do, and how we can make the next step. It all comes down to jointly working towards a better IT infrastructure world.

That sounds nice and sweet of course, and in this case it’s true. Network folks and virtualization techies need to work together to define the next steps in this thing we call software-defined networking. It needs to be open and backwards compatible. In my opinion NSX fits that profile: a product that can merge the physical world of networking with the virtual world of innovation.

Conclusion

I think we are at the beginning of a new era. I keep comparing these times to the early period of server virtualization: not knowing what was going to come and what the exact impact of the technology would be. It’s the same with NSX. It’s new, we can see the potential, but we still have a long way to go. But one thing I’m sure of: in 10 years’ time we’ll look back at this period with a smile and think “That was a whole different world!”

Note: I work for VMware, so I’m biased with regards to new products that my company delivers. However, I’ve been working with the virtualization / cloud stack over the last years and have seen the limitations that the network creates when defining an IT infrastructure that meets business demands. In this case I’m not only pro-VMware, I’m pro network (r)evolution!

Network virtualiSation eXplained

Evolution takes place every day, but sometimes a revolution is needed to kick it up to the next level. And that’s what’s happening in the network world at the moment. Those living in the “old world” will deny that a shift is currently happening and will tell you it’s just evolution, but what is currently happening in the network world is a revolution that will create a paradigm shift in the way we think about networks (and network virtualization).

But where do we stand today? What is the current status of networks within the enterprise? Welcome to the world of networking 1.0!


The intelligence of the network, the software of the network components, is always coupled to the hardware that it is running on. In most enterprise environments the network intelligence is governed through a central management tool that makes sure all devices can be managed from one central location. But this still means that you need to manage and configure all entities individually to create the desired network layout. The network design is more or less embedded into these networking devices. Whether it is switching, routing, firewalling or load balancing, it all needs to be managed and configured individually, and each function has its own hardware that it runs on. From a flexibility and scalability perspective this has always been a challenge: it always results in the need for more hardware if you want to achieve the expansion required to deliver on the business needs.

So that’s what we’ve been doing over the last decades: trying to evolve a system that is limited by nature. Software and hardware tightly coupled, creating a monolithic building block that is inflexible because it needs to be configured and managed individually on a per-device basis.

The key to creating flexibility and agility is decoupling the software and the hardware. That’s the basic definition of what we call virtualization. Virtualization is a common word within IT today, but in general it’s used for server virtualization, where compute power (processing and memory) is abstracted from the server hardware. That hardware has become commodity and is now used within server virtualization to create one big pool of processing and memory resources.

The same needs to be done with the network resources. Hardware network devices need to become commodity, and network resources need to be abstracted from the hardware layer. To achieve this, network devices need to do one thing: transport network packets from point A to point B. Nothing more, nothing less. That’s what the hardware should do, and it should be done in the fastest, easiest and most efficient way possible. In other words, the hardware should just become a transport layer underneath the layout of your virtual world.

But how about all the intelligence? Intelligence is in the software. Software is the key to flexibility and efficiency. Software is needed to run and create a virtual world to build your network design in. This is where the network virtualization layer comes into play. Network virtualization is a piece of software that creates that virtual playground for you and allows you to build network designs in a virtual world.

It abstracts the network intelligence from the hardware devices and makes that functionality available in the software layer. This does require tight integration with the virtualization software of the compute resources. Please keep in mind that the network hardware devices themselves are not virtualized. Network virtualization software integrates with the compute virtualization layer and therefore requires a compute virtualization platform such as vSphere, KVM, Xen, etc.
 
But the result is that you no longer need physical hardware to provide switching, firewalling, routing or load balancing functionality in your network design. It can now all be created in your virtual world, the same virtual world that also hosts your virtual machine workloads. Look at it as if it were your own personal network Lego world. You just use the building blocks as you please and create your own networks according to your network design specifications, but without having to buy those hardware devices.

Virtually everything is possible. This (r)evolution will set a new course in the world we know as networking. I’m already looking forward to the developments in the next couple of years!

There are more excellent resources out there that you should read if you want to catch up on network virtualization:

http://bradhedlund.com/2013/05/28/what-is-network-virtualization/
http://networkheresy.com/category/network-virtualization/
http://blogs.vmware.com/networkvirtualization/