“All things are created twice” : Basics to IT Infrastructure Design

“All things are created twice” is one of the principles that immediately comes to mind when thinking about designing IT infrastructures. It’s the principle from Stephen Covey that says we first create things in our mind, before we ever produce anything in the physical world.

Think about that for a sec. We think first, before we do something. It’s a process we go through unconsciously, every moment of the day, over and over again.

The same goes for designing new IT infrastructures. First think about it, write it down in design documents, and then build the IT infrastructure as defined in those design documents.

Compare it to building a house. Nobody goes out, buys bricks and mortar and then starts building without a design. The same goes for building a new IT infrastructure, or anything else that needs to be thought out before it is created. You don’t go out and randomly install software hoping it will result in the optimal IT infrastructure that suits your needs.

Or better yet, the needs of your customer or company. Because most of the time you don’t design according to what you think is best; you design the infrastructure to suit the needs and requirements of somebody else. Just like with building a house, you are the architect, trying to figure out the needs and requirements of your customer. How big must it be? How many people are going to live in it? How should the plumbing and electricity be installed? And last but not least: how much money can be spent?

But we’re not building a house, we are building an IT infrastructure. The variables change, but the design methodology is the same. First think of what you want to create, then go out and build it.

And maybe that is, in a nutshell, what the VCDX program is all about. The program isn’t about magical sorcery. It’s about showing you can architect an IT infrastructure that suits the needs of your customer or company. As I always say: “There is no general perfect design; the perfect design is the design that meets the requirements of your customer while taking the constraints into account.”

That’s what is looked for in the VCDX program: people who can show that skill and are able to present and defend it to the rest of the world, or in the case of the program, the panel. So step up to the plate and show that you are an IT infrastructure designer. Good luck!

Click on the link for more information on the book by Stephen Covey, “The 7 Habits of Highly Effective People: Powerful Lessons in Personal Change”.

Trust your ESXi hypervisor!

When it comes to security there are always concerns about the security of the ESXi hypervisor. It’s always the hypervisor that is nominated as the layer that can’t be trusted within the IT infrastructure. The whitepaper by Mike Foley gives you more insight into how the VMware ESXi hypervisor works from a security perspective and what to look at when securing the hypervisor.

The topics covered in the white paper are:

  • Secure Virtual Machine Isolation in Virtualization
  • Network Isolation
  • Virtualized Storage
  • Secure Management
  • Platform Integrity Protection
  • VMware’s Secure Development Lifecycle

The document can be downloaded here.

The Perfect Storm – My thoughts on the VMware NSX launch

Last week VMware launched its long-awaited NSX product line. Software that, in my eyes, is going to help revolutionise the way we think about networking. But is it all that new? I don’t think so.

It is however a perfect storm of things coming together. Over the last decade server virtualization has been the hot topic that enabled the IT infrastructure to create more flexible workloads that could keep up with the demands of the business. No longer did we have to wait until the iron was rolled in; we could just provision new “virtual hardware” on the spot. At first, both server and storage hardware vendors did not build their hardware to facilitate virtualization and its benefits. Over time, synergy occurred between virtualization and server / storage hardware, resulting in a more symbiotic approach to delivering IT infrastructure to the business. With the addition of automation and self-service management software came the “cloud revolution”.

What always remained the status quo was the network. The network was always “external” to the virtualization software. Network components in the virtualization software were always an add-on to the existing network stack. Network intelligence such as routing, firewalling, load balancing, etc. always remained in the physical world, not in the virtual world.

Still, it is networking software running on top of a network hardware box, but up until this point nobody came up with a good way to decouple them. Network software has always been tied to the hardware. You bought the hardware with the software. It may be integrated into some overall management tooling, but all network intelligence was dependent on and limited by the physical box it was running on.

But now the time has come to split the software from the hardware. It’s time to break the barriers that have been defined by networking and pull the functionality of networking into the virtual world. The reason I’m calling it a perfect storm is the fact that everything is in place to take networking to the next level. It’s the missing piece in the puzzle for the software defined datacenter.

The product – what does it do?

NSX is an overlay product. It’s not a replacement for the current network. You still need to have a physical infrastructure to be able to connect the different components with one another, just like you need to have physical servers to create virtual machines. But the purpose of the physical network will change. It will become a generic transport layer. All network intelligence such as switching, routing, firewalling and load balancing will be provided by NSX.

And that’s where we can compare it to server virtualization again. It’s abstracting the functionality from the hardware layer. Within server virtualization we abstract the functionality of the server hardware and provide it as a virtual machine within the virtualization layer. By making it software, it can be easily copied and can therefore be created on-the-fly on top of the underlying hardware.

The same goes for NSX. All the intelligence, the network functionality, is abstracted and put into software. The only difference between the two is that NSX does not run on top of the network hardware. It runs alongside the server virtualization / cloud software and integrates directly with the virtual network components. This is the reason the networking infrastructure can remain as-is and does not need modification. However, as with server virtualization, we’ll see hardware evolve over time to facilitate the needs of network virtualization.
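To make the “generic transport layer” idea a bit more concrete: NSX for vSphere uses VXLAN as an overlay encapsulation, where the inner (virtual) frame is wrapped in an outer header that the physical network simply transports from host to host. As a rough sketch only (this is the VXLAN header layout from RFC 7348, not NSX’s actual implementation, and the frame bytes are made up):

```python
import struct

VXLAN_FLAGS = 0x08  # "valid VNI" flag bit, per RFC 7348

def vxlan_encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend an 8-byte VXLAN header to an inner Ethernet frame.

    Header layout (RFC 7348): 1 byte flags, 3 reserved bytes,
    3-byte VXLAN Network Identifier (VNI), 1 reserved byte.
    """
    header = struct.pack("!B3s3sB", VXLAN_FLAGS, b"\x00\x00\x00",
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

def vxlan_decapsulate(packet: bytes) -> tuple[int, bytes]:
    """Split a VXLAN packet back into (vni, inner_frame)."""
    flags, _, vni_bytes, _ = struct.unpack("!B3s3sB", packet[:8])
    assert flags & VXLAN_FLAGS, "VNI flag not set"
    return int.from_bytes(vni_bytes, "big"), packet[8:]

# Two tenants can reuse the same inner addresses; the VNI keeps them apart,
# while the physical network only ever sees the outer packet.
frame = b"\xff" * 12 + b"\x08\x00" + b"payload"  # toy Ethernet frame
pkt = vxlan_encapsulate(5001, frame)
vni, inner = vxlan_decapsulate(pkt)
```

The point of the sketch is that the physical switches and routers never need to understand the inner frame; they just move the outer packet, which is exactly what “become a generic transport layer” means.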

Change happens for a reason, to innovate. Together. 

Change is always disruptive. If change didn’t happen, we would all still be walking around in a bear skin, hunting down squirrels with a bat. Every once in a while somebody has to come up with a new idea and drop the pebble into the pond. Nicira did this, and together with VMware they are now trying to bring this innovation to the rest of the world. Fact remains that the pond is still a pond. We are still talking about networking here.

And of course there are those who want to leave things as they are. If I were the biggest vendor of networking hardware in the world, I wouldn’t be the first in line to change things. However, I don’t believe in FUD when it comes down to technology. We can have fun discussing what the solution is still missing, and we can talk about what it can do and how we can make the next step. It all comes down to jointly working towards a better IT infrastructure world.

That sounds nice and sweet of course, and in this case it’s true. Network folks and virtualization techies need to work together to define the next steps in this thing we call software defined networking. It needs to be open and backwards compatible. In my opinion NSX fits that profile: a product that can merge the physical world of networking with the virtual world of innovation.

Conclusion

I think we are at the beginning of a new era. I keep comparing these times to the early period of server virtualization: not knowing what was going to come and what the exact impact of the technology would be. It’s the same with NSX. It’s new, we can see the potential, but we still have a long way to go. But one thing I’m sure of: in 10 years’ time we’ll look back at this period with a smile and think “That was a whole different world!”

Note: I work for VMware, so I’m biased with regard to new products that my company delivers. However, I’ve been working with the virtualization / cloud stack over the last few years and have seen the limitations that the network creates when defining an IT infrastructure that meets business demands. In this case I’m not only pro-VMware, I’m pro network (r)evolution!

Network virtualiSation eXplained

Evolution takes place every day, but sometimes revolution is needed to kick it up to the next level. And that’s what’s happening in the network world at the moment. Those living in the “old world” will deny that a shift is happening and tell you it’s just an evolution, but what is currently happening in the network world is a revolution that will create a paradigm shift in the way we think about network (virtualization).

But where do we stand today? What is the current status of networking within the enterprise? Welcome to the world of networking 1.0!


The intelligence of the network, the software of the network components, has always been coupled to the hardware it runs on. In most enterprise environments the network intelligence is governed through a central management tool that makes sure all devices can be managed from one central location. But you still need to manage and configure all entities individually to create the desired network layout. The network design is more or less embedded in these networking devices. Whether it’s switching, routing, firewalling or load balancing, it all needs to be managed and configured individually, and it all has its own hardware to run on. From a flexibility and scalability perspective this has always been a challenge: expanding to meet business needs always results in the need for more hardware.

So that’s what we’ve been doing over the last decades: trying to evolve a system that is limited by nature. Software and hardware tightly coupled, creating a monolithic building block that is inflexible because it needs to be configured and managed individually, on a per-device basis.

The key to creating flexibility and agility is decoupling the software from the hardware. That’s the basic definition of what we call virtualization. Virtualization is a common word within IT today, but in general it’s used for server virtualization, where compute power (processing and memory) is abstracted from server hardware that has become commodity, creating one big pool of processing and memory resources.

The same needs to be done with the network resources. Hardware network devices need to become commodity, and network resources need to be abstracted from the hardware layer. To do this, network devices need to do one thing: transport network packets from point A to point B. Nothing more, nothing less. That’s what the hardware should do, and it should do it in the fastest, easiest and most efficient way possible. In other words, the hardware should just become a transport layer within the layout of your virtual world.

But how about all the intelligence? The intelligence is in the software. Software is the key to flexibility and efficiency. Software is needed to run and create a virtual world to build your network design in. This is where the network virtualization layer comes into play. Network virtualization is a piece of software that creates that virtual playground for you and allows you to build network designs in a virtual world.

It abstracts the network intelligence from the hardware devices and makes that functionality available in the software layer. This does require tight integration with the virtualization software of the compute resources. Please keep in mind that the network hardware devices themselves are not virtualized. Network virtualization software integrates with the compute virtualization layer and therefore requires a compute virtualization platform such as vSphere, KVM, Xen, etc.
 
But the result is that you no longer need physical hardware to provide switching, firewalling, routing or load balancing functionality in your network design. It can now all be created in your virtual world, the same virtual world that also hosts your virtual machine workloads. Look at it as your own personal network Lego world. You just use the building blocks as you please and create your own networks according to your network design specifications, but without having to buy those hardware devices.
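That Lego idea can be sketched in a few lines of code. This is purely conceptual (the classes and names below are my own invention, not the NSX API or any real product), but it shows what it means to compose a network design out of software building blocks instead of hardware boxes:

```python
from dataclasses import dataclass, field

# Conceptual sketch only: network functions modeled as plain software
# objects you can compose at will, without buying any hardware.

@dataclass
class Segment:
    """A virtual L2 segment, identified by an overlay network ID."""
    name: str
    vni: int

@dataclass
class FirewallRule:
    """A software firewall rule between two segments."""
    src: str
    dst: str
    port: int
    action: str = "allow"

@dataclass
class VirtualNetwork:
    """A network design built entirely in software."""
    segments: list = field(default_factory=list)
    firewall: list = field(default_factory=list)

    def add_segment(self, name: str, vni: int) -> "VirtualNetwork":
        self.segments.append(Segment(name, vni))
        return self

    def allow(self, src: str, dst: str, port: int) -> "VirtualNetwork":
        self.firewall.append(FirewallRule(src, dst, port))
        return self

# Build a three-tier design entirely in software.
net = (VirtualNetwork()
       .add_segment("web", 5001)
       .add_segment("app", 5002)
       .add_segment("db", 5003)
       .allow("web", "app", 8080)
       .allow("app", "db", 3306))
```

Because the whole design is just data, copying it, versioning it, or stamping out another tenant’s copy is trivial, which is exactly the flexibility that hardware-bound networking could never offer.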

Virtually everything is possible. This (r)evolution will set a new course in the world we know as networking. I’m already looking forward to the developments in the next couple of years!

There are more excellent resources out there that you should read if you want to catch up on network virtualization:

http://bradhedlund.com/2013/05/28/what-is-network-virtualization/
http://networkheresy.com/category/network-virtualization/
http://blogs.vmware.com/networkvirtualization/

VMware Troubleshooting – Time Is On My Side

Lately I’ve been hitting some strange issues in vSphere and vCloud installations. First it was SSO not being able to connect, and then the VMRC console in vCloud started giving weird “invalid ticket” errors that resulted in the vCloud VMRC console being accessible… or not!

Both issues seemed unrelated, but the solution was the same: incorrect time settings on one of the vSphere / vCloud components.

So from a troubleshooting perspective we can add another check to the default checklist:

1. Check firewall.

2. Check time (NTP) settings!!!

It may be a simple solution, but it is something to keep in mind while troubleshooting. It can save you a lot of frustration.

Some resources regarding time and vSphere / vCloud:

VMware KB 2012069

VMware KB 2033880

Gotcha: NTP Can Affect Load Balanced vCloud VMRC

DMZ Design with vCloud Networking and Security

“If you can create it with physical devices, you can build it in your own vCloud.” That’s something I always tell my customers when advising on VMware vCloud. The same goes for VMware vCloud Networking and Security, which in my opinion hasn’t shown its full potential to customers yet. Thankfully Shubha Bheemarao and Ranga Maddipudi have created an excellent whitepaper on implementing vCloud Networking and Security for a DMZ zone.

Summary of the paper:

This paper highlights how securing a virtual DMZ environment using vCloud Networking and Security can be a strategic enabler to your organization as it helps you to reduce your capital expenditure and increase agility, while building a cloud ready, secure and scalable environment for business applications. The paper also highlights the different design approaches to securing business critical applications and enables you to make the choice that is most suited to your organization in the cloud journey. Further, it gives prescriptive configuration guidance to help you get started with the deployment of your preferred approach.

 

For more information on vCloud Networking and Security follow @vCloudNetSec on Twitter.

Source can be found here.