I work as an evangelist & strategist for the Cloud Management business unit within VMware. In this role I talk a lot to customers about what Cloud Management is and how it fits their needs when it comes to providing IT resources to the consumers of IT.
Depending on the maturity of the customer I talk about the why of cloud management, but in general most of them are in the early stages of looking into cloud management. And the perception of most customers is that cloud management is “just automation”.
In reality, automation is just a part of the whole cloud management solution. Automation may be the first step because the transition to cloud requires things to be delivered “as-a-service”. For that we need to take as much of the manual labour out of delivery as possible and “automate the manual process”.
If automation is just the first step, then what is the end goal? Well, that’s where we need to return to the name. It’s all about managing clouds. And if you see clouds as collections of services that you can consume, then cloud management is the management of all those services.
And in today’s world customers are not consuming services from just one cloud. One cloud could of course be a starting point. Today most enterprise customers have some form of private cloud; this is a starting point, as most customers have evolved their IT infrastructure to a state where consumers can get their IT resources as a service.
But most customers I talk to foresee a world that embraces multiple clouds. Some customers are already using services from AWS or from Azure. Next to that you see other parties trailing these leaders in public cloud, such as Google, Alicloud and Oracle. And let’s not forget the thousands of service providers out there that deliver services that need to be consumed. All of these provide services that need to be governed and that need to comply with the policies and regulations that the business needs to follow.
This is where we need a model that describes all the functionalities of a Cloud Management Platform. A description that covers all aspects of managing multiple clouds and the applications that sit on top of them.
Luckily, Gartner has created such a model. Below you will find a graphical representation of the model.
Courtesy of Gartner; for more info, click on the picture.
The model envisions all aspects that a cloud management platform needs to address. Of course, not everything needs to be in place when starting with cloud management. It’s a journey where most companies start small, but over time all areas will be addressed when moving into a multi-cloud, app-centric world.
The goal is to provide a cloud management system that enables internal IT departments to become a service broker to the business. It should not be a problem that the business wants to consume applications and infrastructure resources from multiple providers. Both internal private cloud resources and public cloud resources should be consumable, as long as IT can govern those resources. In the end it’s all about staying in control: making sure that everything happens in accordance with the requirements and needs of the company, and that costs remain within budget. That’s the purpose of a Cloud Management Platform.
Digital transformation is all around us. Over the last couple of years we’ve moved into the age of digital experience. We’re more and more getting used to consuming our products as-a-service and consuming them digitally through apps on our computers and mobile phones.
This forces companies to change and rethink their current strategies. Innovation has to take place to keep competitors from taking more market share. Even worse is the possibility of being disrupted by either a new startup or a multi-billion dollar company that enters the market with an innovative approach and gains huge market share overnight. We all know the examples of Netflix, Uber, Amazon, etc.: companies that disrupt markets in new, innovative ways and have gained a solid position in the consumer market today.
All of this of course has an impact on how we consume IT infrastructure. The question then becomes: How do we relate this new way of doing business to IT infrastructure?
After all, IT infrastructure is usually not the core business of most companies. But still we need to transform our infrastructure to accommodate digital transformation. It doesn’t matter what business you are in: banking, retail, government, whatever.
Every type of business is going through this digital transformation. Some industries are already deep into it, others are just starting. Either way, consumers expect it and it will happen at some point in time.
The business consumes applications to deliver its services, and applications run on infrastructure. Nothing new here, but as the introduction already said, things are changing rapidly and we are now entering the era of digital transformation. The application becomes the central piece that makes this happen, fundamentally making the business “go digital”.
Crucial here is the speed with which applications can be built, changed and implemented in production. Application developers started to use more agile ways of working, but that of course had an impact on infrastructure. A need to work more closely together led to the DevOps movement: a cultural shift to align people and processes to provide developers with the right resources to get applications to production faster while keeping quality the same or better.
To accommodate all of this we also need to transform the infrastructure. The infrastructure needs to become more agile to accommodate the needs of the applications that run on it, taking into account that the infrastructure needs to be capable of running newly created applications as well as existing (sometimes legacy) applications.
Infrastructure agility is becoming a key piece of the digital transformation journey. Changing infrastructure capability demands force IT operations to rethink their strategy and to become more app-centric.
VMware has been working on integrating container technology into its core products. The decision was made to do this in an open-source project. This of course is a different strategy for VMware, as everything had been closed source until recently. The open source project has made progress over the last year and we are now getting to the point where it is actually becoming a product that can be used in production by vSphere customers.
During VMworld 2015 VMware announced vSphere Integrated Containers. At the time this was just the project that came out of Project Bonneville: the integration of Docker containers within vSphere. Today VMware announced that it is not only the integration of containers in vSphere, which is now known as the vSphere Integrated Containers Engine (VIC Engine), but also a Container Management Portal (Project Admiral) and a Container Registry (Project Harbor). Together these are now known as vSphere Integrated Containers, which provides vSphere administrators with a full set of tools that can be used to provide containers to developers and container users.
In this post I’ll explain what vSphere Integrated Containers (VIC) is and what the components do that make up VIC.
So as you can see in the marchitecture above, vSphere Integrated Containers consists of three components:
- VIC Engine
- Container Management Portal (Project Admiral)
- Container Registry (Project Harbor)
Paradigm shifts in thinking come with changes in technology. The same goes for the shift that Docker initiated by making containers mainstream. It’s nice to run a container next to your virtual machines with vSphere Integrated Containers, but when you really want to achieve massive scale and speed, you need to rethink your architecture.
This has been the thought process behind Photon Platform at VMware: a platform created for these new types of applications, native to the cloud. When the application wants to make full use of the capabilities that the infrastructure has to offer, there needs to be some form of “knowledge” and interaction between the two.
Most of these new applications use either Platform-as-a-Service (with infrastructure orchestration included) or some kind of container management system, or both. And this is specifically the use case for which Photon Platform was designed.
Photon Platform is not a new type of “container management system”, but rather a platform to host these container management systems and PaaS deployments on. With that in mind, there are two major features in Photon Platform:
- API-first Model: Photon Platform has been created to integrate with other software applications, and for that we need an API. One of the design fundamentals has been that everything should be controllable through an API. The platform is focused on the automation of infrastructure consumption and operations using simple RESTful APIs, SDKs and CLI tooling, all fully multi-tenant.
- Fast, Scale-Out Control Plane: The platform has been created for applications that are massively scalable, and for that you need a control plane that scales the same way. Photon Platform has a built-from-scratch infrastructure control plane optimized for massive scale and speed, allowing the creation of thousands of new VM-isolated workloads per minute and supporting hundreds of thousands of total simultaneous workloads.
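To make the API-first model a bit more concrete, here is a sketch of how an operator or a tool might consume Photon Platform through its `photon` CLI. The endpoint address and the tenant/project names are placeholders I made up, and the exact command names and flags may differ per release, so treat this as illustrative rather than authoritative:

```shell
# Point the CLI at the Photon Controller endpoint (address is a placeholder)
photon target set http://photon-controller.example.com:28080

# Carve out a multi-tenant slice of the platform: a tenant and a project
# (names are made up for this example)
photon tenant create cloud-dev
photon project create --tenant cloud-dev container-hosts

# From here, the same API surface is used to provision the VM-isolated
# workloads that host a container management system or PaaS
```

Because everything goes through the same RESTful API, the identical operations can be scripted or driven from an SDK instead of the CLI.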
This post will explain the Photon Platform architecture. Below you’ll find a picture showing the architecture of Photon Platform.
I’ll describe the different components that make up Photon Platform. But first: Photon Platform has been built on the same foundation as vSphere. ESXi is currently the hypervisor used to host the workloads. This means that all the components you know that work with vSphere also work with Photon Platform. Photon Platform has native integration with VSAN and NSX, and these components can therefore be used to facilitate storage and networking within Photon Platform.
The challenge with each new technology is to integrate the new stuff with the old stuff. The same goes for containers. Over the last year the popularity of containers amongst developers has skyrocketed and every developer wants to use them. The question of course is how you manage them from an IT ops perspective. There is a natural fit between virtual machines and containers, as I explained in my previous blog post.
For this reason VMware is developing VMware Integrated Containers (VIC): an out-of-the-box integration between containers and the vSphere virtualization platform you know. It provides IT operations with the ability to deliver containers at the speed that developers want. This in turn gives developers the flexibility, portability and speed to deliver their code and applications at the pace the business needs. And of course that’s what it is all about in the end: delivering applications to the business to provide business value.
VMware Integrated Containers is built with vSphere as its foundation. This leverages all existing investments in VMware technology, people and processes.
Management tools that are familiar to IT Operations, such as vMotion, HA and DRS, can still be used. There is no need to introduce new tooling or application technology. VIC runs on the vSphere infrastructure platform that customers already have. This makes it possible for IT Operations to provide the same security, data persistence, networking and management capabilities for containers as they can for virtual machines.
Besides the existing vSphere capabilities it can also make use of new vSphere software-defined technology for storage virtualization (VSAN / VVOLs) and network virtualization (NSX). There is no need to re-architect the existing infrastructure: it can be fully enabled for the software-defined world of tomorrow without major design changes.
VMware Integrated Containers (VIC) leverages existing vSphere technology to create a container hosting platform out of standard vSphere components. This makes it possible to create an environment that adds functionality such as networking and security without compromise.
For each VIC instance a resource pool and a Virtual Container Host (VCH) will be created.
The Virtual Container Host is the central entity within the VIC instance. It can be seen as the manager of all the containers that run within the VIC instance. Besides managing them, it also provides network capabilities to the container instances by routing the traffic from within the VIC instance to the outside world. Application traffic and the container resources (API requests, images, etc.) are all handled by the VCH.
Please be aware that containers do not run on the VCH itself. Each container instance is provided with a new virtual machine the moment it is started. By using instant cloning, vSphere can provide a virtual machine instantly for container usage. Instant cloning creates a virtual machine copy in-memory, which makes it possible to provide the foundation for the container that needs to be run in a fast and efficient way.
The VCH then manages the power state of the virtual machine that hosts the container and makes sure that all container API requests are handled in correspondence with the virtual machine’s life cycle.
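As a rough sketch of how this comes together in practice, a vSphere admin deploys a VCH with the `vic-machine` utility and then hands developers a Docker API endpoint. All names and addresses below are placeholders for your environment, the `--no-tlsverify` setting is only suitable for a lab, and flags may vary per VIC version:

```shell
# Deploy a Virtual Container Host into a cluster (values are placeholders)
vic-machine-linux create \
  --target vcenter.example.com \
  --user 'administrator@vsphere.local' \
  --compute-resource cluster-01 \
  --image-store datastore-01 \
  --bridge-network vic-bridge \
  --name vch-01 \
  --no-tlsverify

# Developers then point a standard Docker client at the VCH endpoint;
# each container they start is backed by its own freshly cloned VM
docker -H vch-01.example.com:2375 run -d nginx
```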
All of the VIC virtual machines are hosted within the vSphere resource pool that makes up the VIC instance. A resource pool makes sure that resources can be managed per VIC instance. Each container having its own virtual machine makes it possible to guarantee resources on a per-container basis. This granular way of managing resources gives containers the ability to be managed and monitored through the same mechanisms that are used for virtual machines. Next to resource management it also guarantees security through isolation: no container can influence another instance, as there is a one-to-one container-to-virtual-machine relationship.
And last but not least, all data needs to be stored in a persistent way. For this, each VIC instance will be provided with datastore space. These datastores host the VMDKs of the virtual machines in the VIC instance. If a new container instance is launched within the VIC instance, a new VMDK is created. That VMDK holds all the images (layers) and volumes for the new container instance.
All in all, VMware Integrated Containers is the perfect fit for running containers on top of your vSphere platform, running cloud-immigrant and cloud-native apps alongside one another.
A couple of months ago VMware released AppCatalyst. This is a slimmed-down version of the popular desktop product VMware Fusion. It gives you the ability to run virtual machines on top of your Mac, bringing you a “datacenter-like” experience on your laptop.
The difference between Fusion and AppCatalyst is that AppCatalyst provides you with the virtualization engine that you control through the command line and APIs, not a graphical interface. So you get a more limited set of controls, but it is free to use.
AppCatalyst has been specifically created to accommodate developers. Most developers work with APIs and command line tools in order to get the job done. AppCatalyst provides this functionality, and included with AppCatalyst is the Linux operating system from project Photon. This Linux platform can be used to host containers (Docker, rkt, Garden, etc.) which nowadays are used by developers for application development. The whole purpose is to create a “datacenter-like” experience on a developer desktop / laptop.
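As an illustration of that command-line workflow, the tech preview exposes roughly the following operations. The command names here are based on the preview's documentation and the daemon port is an assumption on my part, so verify both against your installed version:

```shell
# Create a VM from the bundled Photon OS template, power it on,
# and query its IP address (command names may differ per preview build)
appcatalyst vm create photon-dev
appcatalyst vm power on photon-dev
appcatalyst guest getip photon-dev

# The same operations are also exposed through a local REST API
# served by the AppCatalyst daemon (port shown is an assumption)
curl http://localhost:8080/api/vms
```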
The first tech preview was limited in the number of features, but the team developing AppCatalyst has put a lot of effort into expanding the functionality. So now Tech Preview 2 (TP2) is available with the newly added stuff:
- AppCatalyst now allows VMs to run any version of Linux, BSD and Solaris.
- The AppCatalyst location is automatically added to a user’s path (was a manual step)
- A bug that incorrectly reported the suspend state has been fixed.
- The new Photon OS template VM with updated VMware tools has been bundled with AppCatalyst.
- Cross-ported a couple of critical Fusion bug fixes into AppCatalyst.
- AppCatalyst now allows users to run ESXi VMs (this is an experimental feature)
AppCatalyst Tech Preview 2 can be downloaded here.
A lot of new stuff is being released by VMware, and of course it’s all cool, but sometimes something is released that you’ve been waiting for for quite some time. And in my case that’s a web console to control ESXi directly.
I’m using a Mac and it has always been a hassle to get access to a user interface. There isn’t a client available for Mac and I first had to install the Web Client in order to get access.
Now some great engineers at VMware have released a Fling that gives you direct access to the ESXi host through an HTML5 web interface that can be installed through a VIB file. It’s a simple process that can be found on the Fling page over here. It’s straightforward to set up; it took me 2 minutes following the instructions.
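For reference, the install boils down to copying the VIB to the host and installing it with esxcli. The file name and host address below are placeholders; grab the current VIB from the Fling page:

```shell
# Copy the Fling's VIB to the ESXi host (names are placeholders)
scp esxui.vib root@esxi-01.example.com:/tmp/

# Install the VIB on the host
ssh root@esxi-01.example.com esxcli software vib install -v /tmp/esxui.vib

# Then browse to the host client UI:
#   https://esxi-01.example.com/ui/
```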
Be aware that all Flings are tech previews, so not everything works out-of-the-box. But these features work for sure:
- VM operations (Power on, off, reset, suspend, etc).
- Creating a new VM, from scratch or from OVF/OVA (limited OVA support)
- Configuring NTP on a host
- Displaying summaries, events, tasks and notifications/alerts
- Providing a console to VMs
- Configuring host networking
- Configuring host advanced settings
- Configuring host services
Since containers got traction within the IT community, especially with developers, there have been discussions about whether they would make virtual machines obsolete.
“VMs (and with them hypervisors) are consumers of resources that could be used to run applications” is the general thought that comes to mind when you only take containers into account. Which is true if you are a developer and only care about your applications: containers on bare metal is the way to go!
But you do need to take into account that applications need to be maintained and managed by IT infrastructure operations teams. For them it’s about creating a stable environment and making sure that the infrastructure delivers its services in a resilient way, while keeping everything manageable. For this reason server virtualization and virtual machines have been a huge success over the last 10 years: they abstracted the compute functionality by creating virtual machines, made infrastructure management easy and optimized resource management.
Now containers come along, all those great features of server virtualization are forgotten, and basically Ops is being told: bare metal is the way forward.
In my opinion this all comes down to a definition of responsibilities: containers solve a Dev problem, virtual machines solve an Ops problem.
Better yet, virtual machines create a foundation for the challenges that containers have. The biggest problem with containers today is that they don’t provide strong isolation, which has been a feature of VMs since the beginning. So the shortcomings of containers are solved by virtual machines.
The picture below gives a graphical representation of a container on top of a virtual machine.
Don’t get me wrong: I love containers and the problems that they solve for developers, but I don’t think we need to throw away best practices on how to operate an IT infrastructure just because we can. Virtual machines have proven to be a good foundation within the IT infrastructures of today, and tests have already shown that the resource impact of running containers on top of hypervisors is minimal, even negligible.
So containers on top of virtual machines: the best of both worlds & bringing Dev and Ops closer together.
Lately I’ve been having discussions with customers around the topic of Cloud-Native Apps. It’s cool to talk about these new developments, but it also raises a lot of questions with my attendees: they want to know what my opinion / definition of Cloud-Native is.
Most of the time I refer to the analogy of Digital Natives vs. Digital Immigrants. This term was first coined by Marc Prensky in his article Digital Natives, Digital Immigrants, in which he describes the failure of American educators to understand the needs of modern students. Or, to take a broader perspective: you have people (Digital Natives) that grew up with modern technology like computers, smartphones, tablets, etc. and people (Digital Immigrants) that learned (or did not learn) these new technologies later in life and have not grown up with them. It shows how differently these types of people consume technology today and how they work with it.
And that’s where the analogy can be made to cloud-native vs. cloud-immigrant applications. Cloud, in my opinion, is a convergence of multiple technologies at the same time that makes things possible that were not possible 5 – 10 years ago. But applications have been around since the start of the mainframe and boomed when we got to the client-server era. These applications nowadays reside on virtualized platforms; platforms that are now converted to private clouds. The question however is whether these applications make full use of the capabilities of a cloud environment. They were not designed with cloud in mind and are still very dependent on the capabilities that the infrastructure has to offer, even if it’s all software-defined. They live in the cloud, but as they were not designed for it, they can be called “cloud immigrants”.
This of course is different from the applications that developers create today. If given the opportunity to design an application from the start, developers choose a resilient and scalable architecture and make use of architecture patterns such as microservices. Everything is loosely coupled and can be replicated all over the cloud (or even across clouds). This makes these applications “cloud native”, and they make full use of all the benefits that a cloud architecture has to offer.
So both types of applications can run on a cloud platform, but they have different characteristics. Below is a table showing the difference in some of the characteristics of “cloud immigrant” and “cloud native” applications.
There is no right or wrong when looking at the characteristics of the two different application structures; it just depends on what the requirements are with regards to your applications. “Cloud immigrants” have served us well over the last decades. The majority of the applications today are still “cloud immigrants”, and for the years to come we’ll still need to support them and run them in our clouds. Migrating “cloud immigrants” to “cloud native” is not an easy task, and to illustrate that we just have to look into the past: we’re still running mainframes today; wasn’t that all supposed to be migrated to the client-server model?
However, “cloud native” is the way forward and IT departments need to prepare themselves to host these applications on top of their cloud infrastructures. The question then becomes: how do you run “cloud immigrants” and “cloud natives” jointly?
“One ring to rule them all…” The phrase from The Lord of the Rings describes the one ring that can control everything, including the other rings of power. Kind of a nerdy intro, but it’s a good analogy for what is currently happening in the space of IT infrastructure automation.
A few years ago every vendor had its own little product portfolio in which it excelled and made most of its money: Microsoft had Windows / Office, Red Hat had Linux, VMware had virtualization, etc. But as cloud popped up, the game changed and everybody started to move into the same space: management and control of the IT infrastructure.
With that move everybody needed (or is trying) to expand their capabilities into terrain that was not their area of expertise. Every vendor is moving up or down the stack to get the most control over the IT infrastructure. It’s all about control of the resources within the IT infrastructure and being the manager of those resources.
So with each vendor creating its own “manager” for its part of the stack, and making that manager capable of managing “other” stuff in the IT infrastructure, the question becomes: “Which manager should control my IT infrastructure?”
And as with all evolution, it’s not the strongest, nor the smartest, that will rise to the top; it is the one that can best adapt to its environment. As the data center is comprised of products from multiple vendors, it needs to be a product that can integrate with all of them, old and new.
VMware’s flagship in automation and orchestration is vRealize Automation (vRA). But the engine that really makes this manager adaptable is the synergy it has with vRealize Orchestrator (vRO).
vRO is the “glue” that makes it possible to connect all the data center components together and integrate them into vRA. vRA will then orchestrate whatever process (i.e. use case) needs to be automated. vRA and vRO are the tools that link everything together.
This does not mean that vRA/vRO replaces the orchestration or management tooling of other vendors. vRA/vRO just becomes the central entity to govern, orchestrate and automate everything within the data center: one central tool to make sure that all your policies are applied within the IT infrastructure. It uses the capabilities of all the other managers to orchestrate the workflows that create IT services. In other words, it becomes the manager of managers.
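To give a feel for that “manager of managers” role, here is a sketch of driving vRA itself through its REST API: request an authentication token, then list the catalog items a user is entitled to request. The host, credentials and tenant are placeholders, and the endpoint paths follow the vRA 7.x API, so check the documentation for your version:

```shell
# Authenticate against vRA (all values are placeholders)
TOKEN=$(curl -sk https://vra.example.com/identity/api/tokens \
  -H 'Content-Type: application/json' \
  -d '{"username":"devuser","password":"secret","tenant":"vsphere.local"}' \
  | python -c 'import json,sys; print(json.load(sys.stdin)["id"])')

# List the entitled catalog items: the IT services vRA can broker
curl -sk https://vra.example.com/catalog-service/api/consumer/entitledCatalogItems \
  -H "Authorization: Bearer $TOKEN"
```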
Below you’ll find a picture of the integration of vRealize Automation with vRealize Orchestrator and how integration takes place with all the other components within the data center.
In the end it all comes down to integration and connecting all IT infrastructure services within the data center. vRealize Automation is the tool to provide that functionality and make sure that you can build a software-defined data center that can run any application.