VMware has been working on integrating container technology into its core products. The decision was made to do this as an open-source project, which is a different strategy for VMware, as everything had been closed source until recently. The open-source project has made progress over the last year, and we are now getting to the point where it is actually becoming a product that vSphere customers can use in production.
During VMworld 2015 VMware announced vSphere Integrated Containers. At the time this was just the project that came out of Project Bonneville: the integration of Docker containers within vSphere. Today VMware announced that it is not only the integration of containers in vSphere, now known as the vSphere Integrated Containers Engine (VIC Engine), but also a Container Management Portal (Project Admiral) and a Container Registry (Project Harbor). Together these are now known as vSphere Integrated Containers, providing vSphere administrators with a full set of tools to offer containers to developers and container users.
In this post I’ll explain what vSphere Integrated Containers (VIC) is and what each of its components does.
So as you can see in the marchitecture above, vSphere Integrated Containers consists of three components:
- VIC Engine
- Container Management Portal (Project Admiral)
- Container Registry (Project Harbor)
Paradigm shifts in thinking come with changes in technology. The same goes for the shift that Docker initiated by making containers mainstream. It’s nice to run a container next to your virtual machines with vSphere Integrated Containers, but when you really want to achieve massive scale and speed, you need to rethink your architecture.
This has been the thought process behind Photon Platform at VMware: a platform created for these new types of applications, native to the cloud. When an application wants to make full use of the capabilities that the infrastructure has to offer, there needs to be some form of “knowledge” and interaction between the two.
Most of these new applications use either Platform-as-a-Service (with infrastructure orchestration included) or some kind of container management system, or both. And this is specifically the use case for which Photon Platform was designed.
Photon Platform is not a new type of “container management system”, but rather a platform to host these container management systems and PaaS deployments on. With that in mind, there are two major features in Photon Platform:
- API-first Model; Photon Platform has been created to integrate with other software applications, and for that you need an API. One of the design fundamentals has been that everything should be controllable through an API. The platform is focused on the automation of infrastructure consumption and operations using simple RESTful APIs, SDKs and CLI tooling, all fully multi-tenant.
- Fast, Scale-Out Control Plane; The platform has been created for applications that are massively scalable, and that requires a control plane that scales the same way. Photon Platform has a built-from-scratch infrastructure control plane optimized for massive scale and speed, allowing the creation of thousands of new VM-isolated workloads per minute and supporting hundreds of thousands of total simultaneous workloads.
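To make the API-first idea concrete: every infrastructure operation maps to a plain REST call that a script, SDK or CLI can issue. The Python sketch below builds such a call for a multi-tenant control plane; the endpoint paths and field names are my own illustrative assumptions, not the actual Photon Platform API.

```python
import json

def create_vm_request(tenant, project, flavor, image_id):
    """Build the HTTP request for a hypothetical 'create VM' call.

    In an API-first, multi-tenant design the tenant and project are part
    of the resource path, so every call is scoped and scriptable.
    (Paths and fields here are illustrative, NOT the real Photon API.)
    """
    return {
        "method": "POST",
        "path": f"/v1/tenants/{tenant}/projects/{project}/vms",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"flavor": flavor, "sourceImageId": image_id}),
    }

req = create_vm_request("engineering", "web-tier", "cloud-vm-small", "img-42")
print(req["method"], req["path"])
```

The same request shape is what an SDK or CLI would generate under the covers, which is why "everything through the API" is a design fundamental rather than an afterthought.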
This post will explain the Photon Platform architecture. Below you’ll find a picture showing the architecture of Photon Platform.
I’ll describe the different components that make up Photon Platform. First, though: Photon Platform has been built on the same foundation as vSphere. ESXi is currently the hypervisor used to host the workloads, which means that the components you know from vSphere also work with Photon Platform. Photon Platform has native integration with VSAN and NSX, and these components will therefore be used to facilitate storage and networking within Photon Platform.
The challenge with each new technology is integrating the new stuff with the old stuff. The same goes for containers. Over the last year the popularity of containers amongst developers has skyrocketed, and every developer wants to use them. The question, of course, is how you manage them from an IT ops perspective. There is a natural fit between virtual machines and containers, as I explained in my previous blog post.
For this reason VMware is developing VMware Integrated Containers (VIC): an out-of-the-box integration between containers and the vSphere virtualization platform you know. It provides IT operations with the ability to deliver containers at the speed that developers want. This in turn gives developers the flexibility, portability and speed to deliver their code and applications as the business needs them. And of course that’s what it is all about in the end: delivering applications that provide business value.
VMware Integrated Containers is built with vSphere as its foundation. This leverages all existing investments in VMware technology, people and processes.
Management tools that are familiar to IT operations, such as vMotion, HA and DRS, can still be used. There is no need to introduce new tooling or application technology. VIC runs on the vSphere infrastructure platform that customers already have, which makes it possible for IT operations to provide the same security, data persistence, networking and management capabilities for containers as for virtual machines.
Besides the existing vSphere capabilities, it can also make use of the new vSphere software-defined technologies for storage virtualization (VSAN / VVols) and network virtualization (NSX). There is no need to re-architect the existing infrastructure: it can be fully enabled for the software-defined world of tomorrow without major design changes.
VMware Integrated Containers (VIC) leverages existing vSphere technology to create a container hosting platform out of standard vSphere components. This makes it possible to create an environment that adds functionality such as networking and security without compromise.
For each VIC instance a resource pool and a Virtual Container Host (VCH) will be created.
The Virtual Container Host is the central entity within the VIC instance. It can be seen as the manager of all the containers that run within the VIC instance. Besides managing, it also provides network capabilities to the container instances by routing traffic from within the VIC instance to the outside world. Application traffic and the container resources (API requests, images, etc.) are all handled by the VCH.
Please be aware that containers do not run on the VCH itself. Each container instance is provided with a new virtual machine the moment it is started. By using instant cloning, vSphere can provide a virtual machine instantly for container usage. Instant cloning creates a virtual machine copy in memory, which makes it possible to provide the foundation for the container that needs to be run in a fast and effective way.
The VCH then manages the power state of the virtual machine that hosts the container and makes sure that all container API requests are handled in correspondence with the virtual machine’s lifecycle.
All of the VIC virtual machines are hosted within a vSphere resource pool that makes up the VIC instance. The resource pool makes sure that resources can be managed per VIC instance, and each container having its own virtual machine makes it possible to guarantee resources on a per-container basis. This granular way of managing resources gives containers the ability to be managed and monitored through the same mechanisms used for virtual machines. Next to resource management, it also guarantees security through isolation: no container can influence another instance, as there is a one-to-one container-to-virtual-machine relationship.
And last but not least, all data needs to be stored in a persistent way. For this, each VIC instance is provided with datastore space. These datastores host the VMDKs of the virtual machines in the VIC instance. If a new container instance is launched within the VIC, a new VMDK is created that holds all the images (layers) and volumes for the new container instance.
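The flow above can be sketched as a toy model: each container request maps to its own instantly-cloned VM with a backing VMDK, and container lifecycle operations map onto VM power operations. This is plain Python as a conceptual illustration, not the real VIC implementation, and all names are made up.

```python
# Toy model of a Virtual Container Host (VCH). Conceptual sketch only:
# one VM per container (isolation), one VMDK per container (persistence),
# container lifecycle mapped to VM power state.

class VirtualContainerHost:
    def __init__(self, name):
        self.name = name
        self.containers = {}  # container id -> backing VM record

    def run_container(self, image):
        """Handle a 'run' request: instant-clone a VM and power it on."""
        cid = f"c{len(self.containers) + 1}"
        self.containers[cid] = {
            "image": image,
            "vm": f"{self.name}-vm-{cid}",      # one VM per container
            "vmdk": f"{self.name}-{cid}.vmdk",  # image layers + volumes
            "power": "on",
        }
        return cid

    def stop_container(self, cid):
        """A container 'stop' is simply a VM power operation."""
        self.containers[cid]["power"] = "off"

vch = VirtualContainerHost("vch01")
cid = vch.run_container("nginx:latest")
vch.stop_container(cid)
```

The point of the model is the one-to-one mapping: because every container is its own VM inside the resource pool, the familiar vSphere mechanisms for resources, isolation and storage apply per container.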
So all in all, VMware Integrated Containers is a perfect fit for running containers on top of your vSphere platform, running cloud-immigrant and cloud-native apps alongside one another.
A couple of months ago VMware released AppCatalyst. This is a slimmed-down version of the popular desktop product VMware Fusion. It gives you the ability to run virtual machines on top of your Mac, bringing a “datacenter-like” experience to your laptop.
The difference between Fusion and AppCatalyst is that AppCatalyst provides you with the virtualization engine that you control through the command line and APIs, not a graphical interface. So it offers a more limited set of controls, but it is free to use.
AppCatalyst has been specifically created to accommodate developers. Most developers work with APIs and command-line tools to get the job done. AppCatalyst provides this functionality, and included with it is Project Photon, a Linux operating system that can be used to host the containers (Docker, rkt, Garden, etc.) that developers use for application development nowadays. The whole purpose is to create a “datacenter-like” experience on a developer desktop or laptop.
The first tech preview was limited in the number of features, but the team developing AppCatalyst has put a lot of effort into expanding the functionality. Tech Preview 2 (TP2) is now available with the following additions:
- AppCatalyst now allows VMs to run any version of Linux, BSD and Solaris.
- The AppCatalyst location is automatically added to a user’s path (was a manual step)
- A bug that incorrectly reported the suspend state has been fixed.
- The new Photon OS template VM with updated VMware tools has been bundled with AppCatalyst.
- Cross-ported a couple of critical Fusion bug fixes into AppCatalyst.
- AppCatalyst now allows users to run ESXi VMs (this is an experimental feature).
AppCatalyst Tech Preview 2 can be downloaded here.
A lot of new stuff is being released by VMware, and of course it’s all cool, but sometimes something is released that you’ve been waiting for for quite some time. In my case that’s a web console to control ESXi directly.
I’m using a Mac, and it has always been a hassle to get access to a user interface: there isn’t a client available for Mac, so I first had to install the Web Client to get access.
Now some great engineers at VMware have released a Fling that gives you direct access to the ESXi host through an HTML5 web interface, installed through a VIB file. It’s a simple process that can be found on the Fling page over here, and it’s straightforward to set up: it took me two minutes following the instructions.
Be aware that all Flings are tech previews, so not everything works out of the box. But these features work for sure:
- VM operations (Power on, off, reset, suspend, etc).
- Creating a new VM, from scratch or from OVF/OVA (limited OVA support)
- Configuring NTP on a host
- Displaying summaries, events, tasks and notifications/alerts
- Providing a console to VMs
- Configuring host networking
- Configuring host advanced settings
- Configuring host services
Since containers got traction within the IT community, especially with developers, there have been discussions suggesting they would make virtual machines obsolete.
“VMs (and with them, hypervisors) are consumers of resources that could be used to run applications” is the general thought that comes to mind when you only take containers into account. Which is true if you are a developer and care only about your applications: containers on bare metal is the way to go!
But you do need to take into account that applications need to be maintained and managed by IT infrastructure operations guys. For them it’s about creating a stable environment and making sure that the infrastructure delivers its services in a resilient way, while keeping everything manageable. For this reason server virtualization and virtual machines have been a huge success over the last 10 years: they abstracted the compute functionality into virtual machines, made infrastructure management easy and optimized resource management.
Now containers come along, all those great features of server virtualization are forgotten, and Ops is basically being told: bare metal is the way forward.
In my opinion this all comes down to a definition of responsibility: containers solve a Dev problem, virtual machines solve an Ops problem.
Better yet, virtual machines create a foundation for the challenges that containers have. The biggest problem with containers today is that they don’t provide strong isolation, which has been a feature of VMs since the beginning. So the shortcomings of containers are solved by virtual machines.
The picture below gives a graphical representation of a container on top of a virtual machine.
Don’t get me wrong: I love containers and the problems that they solve for developers, but I don’t think we need to throw away best practices on how to operate an IT infrastructure just because we can. Virtual machines have proven to be a good foundation within the IT infrastructures of today, and tests have already shown that the resource impact of running containers on top of hypervisors is minimal, even negligible.
So containers on top of virtual machines: the best of both worlds & bringing Dev and Ops closer together.
Lately I’ve been having discussions with customers around the topic of Cloud-Native Apps. It’s cool to talk about these new developments, but it also raises a lot of questions with my audience, and they want to know what my opinion / definition of Cloud-Native is.
Most of the time I refer to the analogy of Digital Natives vs. Digital Immigrants. The term was coined by Marc Prensky in his article Digital Natives, Digital Immigrants, in which he describes the failure of American educators to understand the needs of modern students. To put it in a broader perspective: you have people (Digital Natives) who grew up with modern technology like computers, smartphones and tablets, and people (Digital Immigrants) who learned (or didn’t learn) these new technologies later in life and did not grow up with them. It shows how differently these two types of people consume and work with technology today.
And that’s where the analogy can be made to cloud-native vs. cloud-immigrant applications. Cloud, in my opinion, is a convergence of multiple technologies at the same time that makes things possible that were not possible 5–10 years ago. But applications have been around since the start of the mainframe and boomed when we got to the client-server era. These applications nowadays reside on virtualized platforms, platforms that are now converted to private clouds. The question, however, is whether these applications make full use of the capabilities of a cloud environment. They were not designed with cloud in mind and are still very dependent on the capabilities that the infrastructure has to offer, even if it’s all software-defined. They live in the cloud, but as they were not designed for it, they can be called “cloud immigrants”.
This of course is different from the applications that developers create today. Given the opportunity to design an application from the start, developers choose a resilient and scalable architecture and make use of design patterns such as microservices. Everything is loosely coupled and can be replicated all over the cloud (or even across clouds). This makes these applications “cloud native”: they make full use of all the benefits that a cloud architecture has to offer.
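As a small illustration of what that loose coupling looks like in practice, the sketch below shows one common cloud-native trait: a stateless service that takes its configuration from the environment, so any number of identical replicas can be started, replaced or moved anywhere in the cloud. All names and variables here are hypothetical examples, not a prescription.

```python
import os

def make_handler():
    """Build a request handler whose only configuration comes from the
    environment (a hypothetical ORDERS_BACKEND_URL variable)."""
    backend = os.environ.get("ORDERS_BACKEND_URL", "http://orders.internal")

    def handle(request_path):
        # No local state is kept between requests: every replica answers
        # identically, so instances are disposable and can be scaled out
        # or rescheduled freely by the platform.
        return f"proxying {request_path} to {backend}"

    return handle

# Each environment (dev, test, prod, another cloud) injects its own config.
os.environ["ORDERS_BACKEND_URL"] = "http://orders.svc.cluster.local"
handler = make_handler()
print(handler("/orders/7"))
```

A cloud-immigrant application typically inverts this: configuration baked into the machine and state held locally, which is exactly what ties it to one place in the infrastructure.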
So both types of applications can run on a cloud platform, but they have different characteristics. Below is a table showing the differences in some of the characteristics of “cloud immigrant” and “cloud native” applications.
There is no right or wrong when looking at the characteristics of the two different application structures; it just depends on the requirements of your applications. “Cloud immigrants” have served us well over the last decades, and the majority of applications today are still “cloud immigrants”. For years to come we’ll still need to support them and run them in our clouds. Migrating “cloud immigrants” to “cloud native” is not an easy task, and for an example we just have to look into the past: we’re still running mainframes today, and wasn’t that supposed to have been migrated to the client-server model?
However, “cloud native” is the way forward, and IT departments need to prepare themselves to host these applications on top of their cloud infrastructures. The question then becomes: how do you run “cloud immigrants” and “cloud natives” side by side?
“One ring to rule them all…” The phrase from Lord of the Rings describes the one ring that can control everything, including the other rings, with magic power. Kind of a nerd intro, but it’s a good analogy for what is currently happening in the space of IT infrastructure automation.
A few years ago every vendor had its own little product portfolio in which they excelled and made most of their money: Microsoft had Windows / Office, Red Hat had Linux, VMware had virtualization, etc. But as cloud popped up, the game changed and everybody started to move into the same space: management and control of the IT infrastructure.
With that move, everybody needed (or is going) to expand their capabilities into terrain that was not their area of expertise. Every vendor is moving up or down the stack to get the most control over the IT infrastructure. It’s all about controlling the resources within the IT infrastructure and being the manager of those resources.
So with each vendor creating their own “manager” for their part of the stack, and making that manager capable of managing “other” stuff in the IT infrastructure, the question arises: “Which manager should control my IT infrastructure?”
And as with all evolution, it’s not the strongest, nor the smartest, that will rise to the top; it is the one that can best adapt to its environment. As the data center is comprised of products from multiple vendors, it needs to be a product that can integrate with all of them, old and new.
VMware’s flagship in automation and orchestration is vRealize Automation (vRA). But the engine that really makes this manager adaptable is its synergy with vRealize Orchestrator (vRO).
vRO is the “glue” that makes it possible to connect all the data center components together and integrate them into vRA. vRA then orchestrates whatever process (i.e. use case) needs to be automated. vRA and vRO are the tools that link everything together.
This does not mean that vRA/vRO replaces the orchestration or management tooling of other vendors. vRA/vRO just becomes the central entity to govern, orchestrate and automate everything within the data center: one central tool to make sure that all your policies are applied across the IT infrastructure. It uses the capabilities of all the other managers to orchestrate the workflows that create IT services. In other words, it becomes the manager of managers.
Below you’ll find a picture of the integration of vRealize Automation with vRealize Orchestrator and how integration takes place with all the other components within the data center.
In the end it all comes down to integration and connecting all IT infrastructure services within the data center. vRealize Automation is the tool to provide that functionality and make sure that you can build a software-defined data center that can run any application.
VMware Cloud-Native Apps released their first open-source projects with the announcement and release of Project Lightwave and Project Photon. This is a new step on the path forward for VMware. VMware has always been closed source while supporting other open-source projects, but this is the first time that VMware is taking the lead and releasing code as open source for its own projects.
A new step, and it suits the approach of making “developers first-class citizens of the datacenter”. I’ve been working with VMware products for some years now and have seen this trend slowly building up. There is a shift happening: no longer are applications turf that belongs only to developers, nor is IT infrastructure turf that belongs only to the IT operations guys. Call it evolution, call it “DevOps”, but more and more organisations see the benefit of making application and IT operations teams work closely together to get the best of both worlds: a platform that can run any application, legacy or cloud-native.
In my opinion it is a good move for VMware to follow this trend and to transform itself from an IT infrastructure company into a company that acknowledges the needs of both developers and IT ops. VMware is one of the thought leaders in the space of virtualization and cloud computing and has experience introducing complex software concepts into enterprise environments. Server virtualization was the start, with the Software-Defined Data Center being the vision that builds on the advantages that virtualization provides.
VMware Cloud-Native Apps is a new era: a new step forward in continuing to support the application evolution into the cloud. And in my opinion it was only natural to choose the path of open source. If you want to treat developers as “first-class citizens”, you need to make them part of the VMware application development lifecycle.
This is the start of more things to come. I hope we’ll see more projects targeted at the next generation of applications, with lots of community involvement and the opportunity to be part of something great. VMware ❤️ Developers!
For more information on Project Photon & Lightwave, go to http://vmware.github.io/
OpenStack is the leading open-source platform for deploying virtual machines in data centers. It allows IT infrastructure teams to deploy virtual machines and other IT infrastructure components, either through the service portal or through the API that comes with OpenStack.
The discussion that I have with most customers around OpenStack is the fact that they think the functionality of OpenStack and VMware vRealize Automation (vRA) is the same.
In fact, customers are right: we do offer the same functionality that OpenStack has to offer, but vRA is much more than an Infrastructure-as-a-Service (IaaS) platform. To define the positioning, I have plotted OpenStack against the VMware SDDC solution offering below.
vRA (Cloud Automation) at its core is a self-service portal that can deploy virtual machines. It consumes the resources that are provided to it from the compute, network and storage layers in order to create virtual machines that can host applications. This is the same functionality that OpenStack offers.
However, vRA and the rest of the vRealize suite can do a lot more than provision infrastructure resources. Providing IaaS is just the first step of automation. The end goal is to provide full management capabilities to manage and monitor all the data center resources in order to provide virtual machines and application resources. Integration of all the IT management components is crucial for the creation of a Software-Defined Data Center.
And that’s where the big difference is: OpenStack in its essence is an IaaS tool, while vRealize Automation is an automation & orchestration engine to create an SDDC (which also includes IaaS).
SDDC is not a VMware-only stack. SDDC is a term for the automation, orchestration and integration of all IT components in the data center, and it needs to work with all the IT solutions you already have inside your data center. So it could well be that you have a VMware estate next to an OpenStack estate to service different workloads within your datacenter; whatever flavour of OpenStack is the customer’s choice. VMware vRA can connect via the OpenStack APIs to manage the resources in the OpenStack layer.
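To give a feel for what connecting via the OpenStack APIs involves: any client (vRA included) first authenticates against Keystone, OpenStack’s identity service, before calling the other services. The sketch below builds the standard Identity v3 password-authentication body; the credentials and project names are placeholders, no request is actually sent, and this is an illustration of the OpenStack API rather than vRA’s actual integration code.

```python
import json

def keystone_auth_body(username, password, project, domain="default"):
    """Build the Identity v3 password-auth request body that a client
    POSTs to /v3/auth/tokens to obtain a scoped token."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"id": domain},
                        "password": password,
                    }
                },
            },
            # Scoping to a project yields a token usable against Nova,
            # Neutron, Cinder, etc. within that project.
            "scope": {
                "project": {"name": project, "domain": {"id": domain}}
            },
        }
    }

# Placeholder credentials for a hypothetical automation service account.
body = keystone_auth_body("vra-svc", "s3cret", "demo")
print(json.dumps(body, indent=2))
```

Once the token comes back, every subsequent call to the OpenStack layer carries it in an X-Auth-Token header, which is how an external manager can drive OpenStack resources without replacing OpenStack itself.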
VMware also offers an OpenStack flavour: VMware Integrated OpenStack (VIO). This is a distribution for companies that want an enterprise-grade version of OpenStack: a predefined installation of OpenStack that is supported and maintained by VMware.
So the conclusion is that OpenStack can be one of the building blocks within the SDDC to host the application workloads in your datacenter. It fully integrates, and the result is the best of both worlds.