Cloud Automation Services

Introducing the VMware Cloud Automation Services

During VMworld 2018 in the US, VMware launched the Cloud Automation Services. A couple of weeks ago these finally became available to everybody. These are new automation services that are delivered “as-a-service” and can be consumed through the VMware Cloud Services offerings.

This is a major shift towards a Cloud Management platform that customers can consume “as-a-service”, in contrast to the vRealize product line, which customers still need to install, manage and update themselves. These services provide a set of automation capabilities that are app-centric and multi-cloud, all of which is managed and maintained by VMware.

Answering the needs of our customers

This fits the direction in which most companies are heading. Over the last couple of years there has been a shift within Enterprise IT: most companies not only want to consume infrastructure resources from their own datacenter, but also want to consume services that are provided “as-a-service”.

The best examples are of course the public clouds from AWS, Azure, and Google. But companies also want to consume services from other parties, such as service providers, telcos and edge locations. All of these need to be connected, and services need to be orchestrated across the board.

Next to that there is a shift towards focusing on what the business actually consumes: the application. In the age of digital transformation, IT departments need to deliver the applications that make a difference.

Key to that are the developers and application operators. They are the ones that need to do a good job in order to deliver the applications to the business. Next to multi-cloud consumption, they want to be able to consume infrastructure resources in a programmatic way and to run any type of application framework. This means that the platform needs to support virtual machines, containers or any other type of application framework to meet the needs of the developer.

In other words, the IT infrastructure department needs to provide a platform that is multi-cloud and app-centric to address the needs of the developers and the application operators. And ultimately the “Lines of Business”: the consumers of IT.

The Three Cloud Automation Services

And this is where the three new Cloud Automation Services come into play. VMware already delivered cloud services that provide insight into what is happening in the different clouds.

Now with the Cloud Automation Services IT departments have the ability to automate and orchestrate services across clouds and to streamline the delivery of applications and infrastructure services to the consumers.

To make this possible VMware has created three separate Cloud Automation Services. All three can be used individually, but they have of course been architected to work as a combined set of tools to accommodate the DevOps way of working.

Cloud Assembly

Orchestrates and expedites infrastructure and application delivery in line with DevOps principles. Cloud Assembly supports provisioning to AWS, Azure and vSphere, and since it supports vSphere, it can also deploy to VMware Cloud on AWS.

Cloud Assembly provides project teams with the ability to collaborate on infrastructure and application blueprints. They can easily collaborate through the web interface, command line and API. All of this can be versioned and works in an iterative fashion: changes to a blueprint can be pushed to deployments that already exist on the infrastructure.

Developers can easily consume the resources using declarative “infrastructure-as-code” or an API to quickly deliver code and applications.
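As a rough illustration, a declarative blueprint could look something like the sketch below. This is a hypothetical example only; the resource type and property names are assumptions for illustration and may not match the actual Cloud Assembly blueprint schema.

```yaml
# Hypothetical sketch of a declarative "infrastructure-as-code" blueprint;
# names are illustrative, not the actual Cloud Assembly schema.
inputs:
  environment:
    type: string
    default: dev
resources:
  web-server:
    type: Cloud.Machine
    properties:
      image: ubuntu-16.04
      flavor: small
      constraints:
        - tag: 'env:${input.environment}'
```

Because the blueprint is declarative, pushing a changed version means the platform reconciles the deployment with the new desired state, rather than the developer scripting the change imperatively.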

Service Broker

Service Broker is the “one-stop-shop” service portal where services can be published to authorised consumers. It aggregates the service blueprints that have been created in Cloud Assembly alongside the native content from multiple clouds and application platforms.

It is accessible to the consumer through a self-service graphical UI and API. It gives IT Operations the tooling to offer pre-defined environments to end users, running on VMware-based private or hybrid clouds or on the native AWS and Azure public clouds.

The Service Broker catalog can host services defined as VMware Cloud Assembly blueprints and AWS CloudFormation templates, with additional services coming soon. Service Broker can govern resource access and use with different policy types, down to the project level, across the supported clouds.
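To make the governance idea concrete, a project-scoped policy can be thought of as a small data structure that only applies to deployments in its own project. The class and field names below are hypothetical illustrations, not the actual Service Broker schema.

```python
# Hypothetical sketch of a project-scoped lease policy; field names are
# illustrative only, not the actual Service Broker API.
from dataclasses import dataclass

@dataclass
class LeasePolicy:
    name: str
    project: str            # scope: governs only deployments in this project
    max_lease_days: int     # deployments expire after this many days
    grace_period_days: int  # time before an expired deployment is destroyed

    def applies_to(self, deployment_project: str) -> bool:
        """A project-scoped policy only governs its own project's deployments."""
        return deployment_project == self.project

policy = LeasePolicy(name="dev-lease", project="dev-team",
                     max_lease_days=7, grace_period_days=3)
print(policy.applies_to("dev-team"))   # True
print(policy.applies_to("prod-team"))  # False
```

The point of the sketch is the scoping: the same policy engine can evaluate different policy types (leases, approvals, quotas) against the project a deployment belongs to, regardless of which cloud it lands on.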

Code Stream

Code Stream provides a Continuous Deployment (CD) tool for modelling release pipelines to get new applications and features into production faster.

Digital transformation drives up the demand for these new features and applications. Code Stream helps with rapid pipeline modelling, enabling developers and DevOps teams to deliver software to production faster and more frequently.

An advanced analytics dashboard makes the high-level status of all deployments visible and enables in-depth troubleshooting and trend analysis of code release models to reduce the time to deploy code.

Code Stream provides many integrations with various Continuous Integration (CI) tools, creating a seamless CI/CD flow that enables developers to deliver high-quality software from idea to production in a fast and agile way.
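The essence of pipeline modelling can be sketched in a few lines: a release is an ordered list of stages, each containing tasks, and a failing task halts the release. This is a conceptual illustration only, not the Code Stream API.

```python
# Minimal sketch of release-pipeline modelling: ordered stages with tasks,
# where a failing task halts the pipeline. Illustrative only.
from typing import Callable, List, Tuple

Stage = Tuple[str, List[Callable[[], bool]]]  # (stage name, tasks returning success)

def run_pipeline(stages: List[Stage]) -> List[str]:
    """Execute stages in order; stop at the first failing task."""
    completed = []
    for name, tasks in stages:
        if not all(task() for task in tasks):
            break  # halt the release on failure, as a CD pipeline would
        completed.append(name)
    return completed

stages = [
    ("build",  [lambda: True]),            # e.g. triggered by a CI job
    ("test",   [lambda: True, lambda: True]),
    ("deploy", [lambda: False]),           # simulated deployment failure
]
print(run_pipeline(stages))  # ['build', 'test']
```

A real CD tool adds approvals, rollbacks and integrations on top, but the stage/task model with fail-fast semantics is the core idea.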

Moving towards a Cloud Management Platform “as-a-service”

The reason VMware launched the Cloud Automation Services is that a lot of customers want a Cloud Management Platform delivered “as-a-service”. Over the last couple of years we have seen customers wanting to consume resources from public clouds such as AWS, Azure and Google.

To follow this trend VMware is now providing its customers with a full automation suite to manage both private and public cloud resources. This is an addition to the Cloud Services that were announced over the last year. Services such as Network Insight, Log Intelligence and Cost Insight helped customers gain operational insight into their clouds.

Combining these operational services with the automation services from the Cloud Automation Services is another step towards a Cloud Management Platform “as-a-service”. Going forward VMware will evolve these services with new features and functionality to govern resources even better across clouds. Next to that VMware will also deliver new services to address the growing need to manage resources across clouds.

In the end VMware will provide the Cloud Services that will empower the IT Operations of its customers. They will be able to deliver cloud resources to their consumers while staying in control.


Cloud Management 101

I work as an evangelist & strategist for the Cloud Management business unit within VMware. In this role I talk a lot to customers about what Cloud Management is and how it fits customers’ needs when it comes to providing IT resources to the consumers of IT.

Depending on the maturity of the customer I talk about the why of cloud management, but in general most of them are at the early stages of looking into cloud management. And the perception of most customers is that cloud management is “just automation”.

Actually, automation is just one part of the whole cloud management solution. Automation may be the first step because the transition to cloud requires things to be delivered “as-a-service”. For that we need to take as much of the manual labour out of delivery as possible and “automate the manual process”.

If automation is just the first step, then what is the end goal? Well, that’s where we need to return to the name. It’s all about managing clouds. And if you see clouds as collections of services that you can consume, then cloud management is the management of all those services.

And in today’s world customers are not consuming services from just one cloud. One cloud could of course be a starting point. Today most enterprise customers have some form of private cloud. This is a starting point, as most customers have evolved their IT infrastructure to a state where consumers can get their IT resources as a service.

But most customers I talk to foresee a world that embraces multiple clouds. Some customers are already using services from AWS or from Azure. Next to that you see other parties trailing these leaders in public cloud, such as Google, Alicloud and Oracle. And let’s not forget the thousands of service providers out there that deliver services that need to be consumed. All of these provide services that need to be governed and that need to comply with the policies and regulations that the business needs to follow.

This is where we need a model that describes all the functionalities of a Cloud Management Platform. A description that covers all aspects of managing multiple clouds and the applications that sit on top of them.

Luckily Gartner has created such a model. Below you will find a graphical representation of the model.


Courtesy of Gartner.

The model shows all the aspects that a cloud management platform needs to address. Of course not everything needs to be in place when starting with cloud management. It’s a journey where most companies start small, but over time all areas will be addressed when moving into a multi-cloud, app-centric world.

The goal is to provide a cloud management system that enables internal IT departments to become a service broker to the business. It should not be a problem that the business wants to consume applications and infrastructure resources from multiple providers. Both internal, private cloud resources and public cloud resources should be consumable, as long as IT can govern those resources. In the end it’s all about staying in control and making sure that everything happens in accordance with the requirements and needs of the company, and that costs remain within budget. That’s the purpose of a Cloud Management Platform.

An IT Infrastructure Perspective on Digital Transformation

Digital transformation is all around us. Over the last couple of years we’ve moved into the age of digital experience. We’re getting more and more used to consuming products as-a-service, digitally, through apps on our computers and mobile phones.

This forces companies to change and rethink their current strategies. Innovation has to take place to keep competitors from taking more market share. Even worse is the possibility of being disrupted, by either a new startup or a multi-billion dollar company that comes up with a new, innovative way to disrupt the market and gain a huge market share overnight. We all know the examples of Netflix, Uber, Amazon, etc.: companies that disrupted markets in new, innovative ways and have gained a solid position in the consumer market today.

All of this of course has an impact on how we consume IT infrastructure. The question then becomes: How do we relate this new way of doing business to IT infrastructure?

After all, IT infrastructure is usually not the core business of most companies. But still we need to transform our infrastructure to accommodate digital transformation. It doesn’t matter what business you are in: banking, retail, government, whatever.

Every type of business is going through this digital transformation. Some industries are already in the middle of it, others are just starting. Either way consumers expect it and it will happen at some point in time.

I use the picture above to illustrate how infrastructure relates to digital transformation. It’s a simplification of what happens in the world of IT today.

The business consumes applications to deliver its services, and applications run on infrastructure. Nothing new here, but as the introduction already said, things are changing rapidly and we are now entering the era of digital transformation. The application becomes the central piece that makes this happen, fundamentally making business “go digital”.

Crucial here is how quickly applications can be built, changed and put into production. Application developers started to use more agile ways of development, which of course had an impact on infrastructure. The need to work more closely together led to the DevOps movement: a cultural shift to align people and processes to provide developers with the right resources to get applications to production faster while keeping quality the same or better.

To accommodate all of this we also need to transform the infrastructure. The infrastructure needs to become more agile to meet the needs of the applications that run on it, taking into account that it must be capable of running newly created applications, but also the existing (sometimes legacy) applications.

Infrastructure agility is becoming a key piece of the digital transformation journey. Changing infrastructure capability demands force IT operations to rethink their strategy and become more app-centric.

vSphere Integrated Containers – The Next Step

VMware has been working on integrating container technology into its core products. For this the decision was made to do it as an open-source project. This of course is a different strategy for VMware, as everything had been closed source until recently. The open-source project has made progress over the last year, and we are now getting to the point where it is actually becoming a product that can be used in production by vSphere customers.

During VMworld 2015 VMware announced vSphere Integrated Containers. At the time this was just the project that came out of Project Bonneville: the integration of Docker containers within vSphere. Today VMware announced that it is not only the integration of containers in vSphere, now known as the vSphere Integrated Containers Engine (VIC Engine), but also a Container Management Portal (Project Admiral) and a Container Registry (Project Harbor). Together these are now known as vSphere Integrated Containers, providing vSphere administrators with a full set of tools that can be used to provide containers to developers and container users.

In this post I’ll explain what vSphere Integrated Containers (VIC) is and what the components do that make up VIC.

So as you can see in the architecture above, vSphere Integrated Containers consists of three components:

  • VIC Engine
  • Container Management Portal (Project Admiral)
  • Container Registry (Project Harbor)


Photon Platform Architecture

Paradigm shifts in thinking come with changes in technology. The same goes for the shift that Docker initiated by making containers mainstream. It’s nice to run a container next to your virtual machines with vSphere Integrated Containers, but when you really want to achieve massive scale and speed, you need to rethink your architecture.

This has been the thought process behind Photon Platform at VMware: a platform created for these new types of applications, native to the cloud. When the application wants to make full use of the capabilities that the infrastructure has to offer, there needs to be some form of “knowledge” and interaction between the two.

Most of these new applications either use Platform-as-a-Service (with infrastructure orchestration included) or some kind of container management system, or both. And this is specifically the use case for which Photon Platform was designed.

Photon Platform is not a new type of “container management system”, but rather a platform to host these “container management systems” and PaaS deployments on. With that in mind there are two major features in Photon Platform:

  1. API-first Model; Photon Platform has been created to integrate with other software applications. For that we need an API. One of the design fundamentals has been that everything should be controllable through an API. It is focused on the automation of infrastructure consumption and operations using simple RESTful APIs, SDKs and CLI tooling, all fully multi-tenant.
  2. Fast, Scale-Out Control Plane; The platform has been created for applications that are massively scalable, and for that you need a control plane that scales the same way. Photon Platform has a built-from-scratch infrastructure control plane optimized for massive scale and speed, allowing the creation of thousands of new VM-isolated workloads per minute and supporting hundreds of thousands of total simultaneous workloads.
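To give a feel for the API-first, multi-tenant model, the sketch below composes tenant- and project-scoped REST URLs the way such a control plane might expose them. The host and endpoint paths are hypothetical, not the actual Photon Platform API.

```python
# Sketch of "API-first, fully multi-tenant" consumption: every operation is
# a REST call scoped to a tenant and project. Paths are hypothetical.
class PhotonClient:
    def __init__(self, host: str, tenant: str):
        self.base = f"https://{host}/v1/tenants/{tenant}"

    def vm_url(self, project: str, vm_id: str = "") -> str:
        """Build the tenant- and project-scoped URL for VM operations."""
        url = f"{self.base}/projects/{project}/vms"
        return f"{url}/{vm_id}" if vm_id else url

client = PhotonClient("photon.example.com", "engineering")
print(client.vm_url("web"))
# https://photon.example.com/v1/tenants/engineering/projects/web/vms
```

Because tenancy is encoded in every call rather than bolted on, any SDK or CLI built on top of the API inherits multi-tenancy for free; that is the practical payoff of an API-first design.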

This post will explain the Photon Platform architecture. Below you’ll find a picture showing the architecture of Photon Platform.

I’ll describe the different components that make up Photon Platform. But first: Photon Platform has been built on the same foundation as vSphere. ESXi currently is the hypervisor that is used to host the workloads. This means that all components that you know work with vSphere also work with Photon Platform. Photon Platform has native integration with VSAN and NSX; these components are therefore used to facilitate storage and networking within Photon Platform.


VMware Integrated Containers (Tech Preview)

The challenge with each new technology is to integrate the new stuff with the old stuff. The same goes for containers. Over the last year the popularity of containers amongst developers has skyrocketed and every developer wants to use them. The question of course is how you manage them from an IT ops perspective. There is a natural fit between virtual machines and containers, as I explained in my previous blogpost.

For this reason VMware is developing VMware Integrated Containers (VIC): an out-of-the-box integration between containers and the vSphere virtualization platform you know. It provides IT operations with the ability to deliver containers at the speed that developers want. This in turn gives developers the flexibility, portability and speed to deliver their code and applications at the pace the business needs. And of course that’s what it is all about in the end: delivering applications to the business to provide business value.

VMware Integrated Containers is built with vSphere as its foundation. This leverages all existing investments in VMware technology, people and processes.

Management tools that are familiar to IT Operations, such as vMotion, HA and DRS, can still be used. There is no need to introduce new tooling or application technology. VIC runs on the vSphere infrastructure platform that customers already have. This makes it possible for IT Operations to provide the same security, data persistence, networking and management capabilities for containers as they do for virtual machines.

Besides the existing vSphere capabilities it can also make use of new vSphere software-defined technology for storage virtualization (VSAN / VVOLs) and network virtualization (NSX). There is no need to re-architect the existing infrastructure; it can be fully enabled for the software-defined world of tomorrow without major design changes.

VMware Integrated Containers (VIC) leverages existing vSphere technology to create a container hosting platform out of standard vSphere components. This makes it possible to create an environment that adds functionality such as networking and security without compromise.

For each VIC instance a resource pool and a Virtual Container Host (VCH) will be created.

The Virtual Container Host is the central entity within the VIC instance. It can be seen as the manager of all the containers that run within the VIC instance. Besides managing them, it also provides network capabilities to the container instances by routing traffic from within the VIC instance to the outside world. Application traffic and container resource requests (API requests, images, etc.) are all handled by the VCH.

Please be aware that containers do not run on the VCH itself. Each container instance is provided with a new virtual machine the moment it is started. By using instant cloning, vSphere can provide a virtual machine instantly for container usage. Instant cloning creates a virtual machine copy in memory, which makes it possible to provide the foundation for the container that needs to run in a fast and efficient way.

The VCH then manages the power state of the virtual machine that hosts the container and makes sure that all container API requests are handled in correspondence with the virtual machine’s lifecycle.
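The one-container-per-VM model described above can be sketched as follows: each container start clones a fresh VM, and container lifecycle calls map onto VM power operations. The class and method names are illustrative only, not the actual VIC implementation.

```python
# Conceptual sketch of VIC's one-container-per-VM model; names are
# illustrative, not the real VIC code.
class ContainerVM:
    def __init__(self, image: str):
        self.image = image
        self.powered_on = False

class VirtualContainerHost:
    def __init__(self):
        self.containers = {}  # container id -> its dedicated VM

    def run(self, cid: str, image: str) -> ContainerVM:
        """'docker run' maps to instant-cloning a new VM and powering it on."""
        vm = ContainerVM(image)
        vm.powered_on = True
        self.containers[cid] = vm
        return vm

    def stop(self, cid: str) -> None:
        """'docker stop' maps to powering off the container's VM."""
        self.containers[cid].powered_on = False

vch = VirtualContainerHost()
vch.run("web1", "nginx")
vch.run("web2", "nginx")
vch.stop("web1")
print(vch.containers["web1"].powered_on)  # False
print(vch.containers["web2"].powered_on)  # True
```

The key property the sketch shows is the strict one-to-one mapping: stopping one container only touches its own VM, which is exactly where the isolation guarantee discussed below comes from.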

All of the VIC virtual machines are hosted within a vSphere resource pool that makes up the VIC instance. The resource pool makes sure that resources can be managed per VIC instance. Each container having its own virtual machine makes it possible to guarantee resources on a per-container basis. This granular way of managing resources gives containers the ability to be managed and monitored through the same mechanisms used for virtual machines. Next to resource management it also guarantees security through isolation: no container can influence another instance, as there is a one-to-one container-to-virtual-machine relationship.

And last but not least, all data needs to be stored in a persistent way. For this each VIC instance is provided with datastore space. These datastores host the VMDKs of the virtual machines in the VIC instance. If a new container instance is launched within the VIC instance, a new VMDK is created. That VMDK holds all the images (layers) and volumes for the new container instance.

So all in all, VMware Integrated Containers is the perfect fit for running containers on top of your vSphere platform, running cloud-immigrant and cloud-native apps alongside one another.

Datacenter-like experience on your laptop : VMware AppCatalyst (TP 2)

A couple of months ago VMware released AppCatalyst. This is a slimmed-down version of the popular desktop product VMware Fusion. It gives you the ability to run virtual machines on top of your Mac, bringing a “datacenter-like” experience to your laptop.

The difference between Fusion and AppCatalyst is that AppCatalyst provides you with the virtualization engine, which you control through the command line and APIs, not a graphical interface. So a more limited set of controls, but it is free to use.

AppCatalyst has been specifically created to accommodate developers. Most developers work with APIs and command-line tools in order to get the job done. AppCatalyst provides this functionality, and included with AppCatalyst is Project Photon, a Linux operating system that can be used to host containers (Docker, rkt, Garden, etc.), which nowadays are used by developers for application development. The whole purpose is to create a “datacenter-like” experience on a developer’s desktop or laptop.

The first tech preview was limited in the number of features, but the team developing AppCatalyst has put a lot of effort into expanding the functionality. So now Tech Preview 2 (TP2) is available with the following newly added features:

  • AppCatalyst now allows VMs to run any version of Linux, BSD and Solaris.
  • The AppCatalyst location is automatically added to a user’s path (this was a manual step).
  • A bug that incorrectly reported the suspend state has been fixed.
  • The new Photon OS template VM with updated VMware Tools has been bundled with AppCatalyst.
  • A couple of critical Fusion bug fixes have been cross-ported into AppCatalyst.
  • AppCatalyst now allows users to run ESXi VMs (this is an experimental feature).

AppCatalyst Tech Preview 2 can be downloaded here.

Cool Fling : Web UI for ESXi

A lot of new stuff is released by VMware and of course everything is cool, but sometimes something is released that you’ve been waiting for for quite some time. In my case that’s a web console to control ESXi directly.

I’m using a Mac, and it has always been a hassle to get access to a user interface. There isn’t a client available for Mac, and previously I had to install the Web Client first in order to get access.

Now some great engineers at VMware have released a Fling that gives you direct access to the ESXi host through an HTML5 web interface that can be installed through a VIB file. It’s a simple process that can be found on the Fling page over here. It’s straightforward to set up; it took me 2 minutes following the instructions.

Be aware that all Flings are tech previews, so not everything works out-of-the-box. But these features work for sure:

  • VM operations (Power on, off, reset, suspend, etc).
  • Creating a new VM, from scratch or from OVF/OVA (limited OVA support)
  • Configuring NTP on a host
  • Displaying summaries, events, tasks and notifications/alerts
  • Providing a console to VMs
  • Configuring host networking
  • Configuring host advanced settings
  • Configuring host services

For more information also have a look at the blog of William Lam here. He has more details on the project.

The virtual machine as the foundation for containers

Since containers got traction within the IT community, especially with developers, there have been discussions about whether they would make virtual machines obsolete.

“VMs (and with them hypervisors) are consumers of resources that could otherwise be used to run applications” is the general thought that comes to mind when you only take containers into account. Which is true if you are a developer and care only about your applications: containers on bare metal is the way to go!

But you do need to take into account that applications need to be maintained and managed by the IT infrastructure operations guys. For them it’s about creating a stable environment and making sure that the infrastructure delivers its services in a resilient way, while keeping everything manageable. For this reason server virtualization and virtual machines have been a huge success over the last 10 years: they abstracted the compute functionality into virtual machines, made infrastructure management easy and optimized resource management.

Now containers come along, all those great features of server virtualization are forgotten, and Ops is basically being told: bare metal is the way forward.

In my opinion this all comes down to a definition of responsibility: containers solve a Dev problem, virtual machines solve an Ops problem.

Better yet, virtual machines create a foundation for the challenges that containers have. The biggest problem with containers today is that they don’t provide strong isolation, which has been a feature of VMs since the beginning. So the shortcomings of containers are solved by virtual machines.

The picture below gives a graphical representation of a container on top of a virtual machine.

Don’t get me wrong: I love containers and the problems they solve for developers, but I don’t think we need to throw away best practices on how to operate an IT infrastructure just because we can. Virtual machines have proven to be a good foundation within the IT infrastructures of today, and tests have already shown that the resource impact of running containers on top of hypervisors is minimal, even negligible.

So containers on top of virtual machines: the best of both worlds & bringing Dev and Ops closer together.

Cloud Immigrant vs. Cloud Native Applications

Lately I’ve been having discussions with customers around the topic of Cloud-Native Apps. It’s cool to talk about these new developments, but it also raises a lot of questions with my attendees, and they want to know what my opinion / definition of Cloud-Native is.

Most of the time I refer to the analogy of Digital Natives vs. Digital Immigrants. This term was first coined by Marc Prensky in his article Digital Natives, Digital Immigrants, in which he describes the failure of American educators to understand the needs of modern students. Or, in a broader perspective: you have people (Digital Natives) that grew up with modern technology like computers, smartphones, tablets, etc., and people (Digital Immigrants) that learned (or didn’t learn) these new technologies later in life and did not grow up with them. It shows how differently these types of people consume technology today and how they work with it.

And that’s where the analogy can be made to cloud-native vs. cloud-immigrant applications. Cloud in my opinion is a convergence of multiple technologies at the same time, making things possible that were not possible 5 to 10 years ago. But applications have been around since the start of the mainframe and boomed when we got to the client-server era. These applications nowadays reside on virtualized platforms, platforms that are now converted to private clouds. The question however is whether these applications make full use of the capabilities of a cloud environment. They were not designed with cloud in mind and are still very dependent on the capabilities that the infrastructure has to offer, even if it’s all software-defined. They live in the cloud, but as they were not designed for it, they can be called “cloud immigrants”.

This of course is different from the applications that developers create today. Given the opportunity to design an application from scratch, developers choose a resilient and scalable architecture and make use of architectural patterns such as microservices. Everything is loosely coupled and can be replicated all over the cloud (or even across clouds). This makes these applications “cloud native”: they make full use of all the benefits that a cloud architecture has to offer.

So both types of applications can run on a cloud platform, but they have different characteristics. Below is a table showing the difference in some of the characteristics of “cloud immigrant” and “cloud native” applications.

There is no right or wrong when looking at the characteristics of the two different application structures; it just depends on the requirements you have for your applications. “Cloud immigrants” have served us well over the last decades, and the majority of applications today are still “cloud immigrants”. For the years to come we’ll still need to support them and run them in our clouds. Migrating “cloud immigrants” to “cloud native” is not an easy task, and for an example of that we just have to look into the past: we’re still running mainframes today; wasn’t that supposed to be migrated to the client-server model?

However, “cloud native” is the way forward and IT departments need to prepare themselves to host these applications on top of their cloud infrastructures. The question then becomes: how do you run “cloud immigrants” and “cloud natives” jointly together?