Cool Fling : Web UI for ESXi

VMware releases a lot of new stuff, and of course it’s all cool, but sometimes something comes out that you’ve been waiting on for quite some time. In my case that’s a web console to manage ESXi directly.

I’m using a Mac, and it has always been a hassle to get access to a user interface. There isn’t a client available for Mac, so I first had to install the Web Client in order to get access.

Now some great engineers at VMware have released a Fling that gives you direct access to the ESXi host through an HTML5 web interface that can be installed through a VIB file. It’s a simple process that can be found on the Fling page over here. It’s straightforward to set up; it took me 2 minutes following the instructions.
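For those who want to script it, here’s a minimal sketch (using Python and paramiko, assuming SSH is enabled on the host) of running the install command remotely. The host name, credentials and VIB URL below are placeholders, so use the exact location given on the Fling page:

    # Minimal sketch: install the Host Client VIB on an ESXi host over SSH.
    # Assumes SSH is enabled; host, credentials and VIB URL are placeholders.
    import paramiko

    HOST = "esxi01.lab.local"                         # hypothetical ESXi host
    USER = "root"
    PASSWORD = "changeme"                             # placeholder credentials
    VIB_URL = "http://example.com/esxui-signed.vib"   # use the URL from the Fling page

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(HOST, username=USER, password=PASSWORD)

    # esxcli can install the VIB straight from a URL (or from a local path after scp).
    _, stdout, stderr = client.exec_command(f"esxcli software vib install -v {VIB_URL}")
    print(stdout.read().decode())
    print(stderr.read().decode())
    client.close()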

Be aware that all Flings are tech previews, so not everything works out-of-the-box. But these features work for sure:

  • VM operations (Power on, off, reset, suspend, etc).
  • Creating a new VM, from scratch or from OVF/OVA (limited OVA support)
  • Configuring NTP on a host
  • Displaying summaries, events, tasks and notifications/alerts
  • Providing a console to VMs
  • Configuring host networking
  • Configuring host advanced settings
  • Configuring host services

For more information, also have a look at the blog of William Lam here. He has more details on the project.


The virtual machine as the foundation for containers

Since containers gained traction within the IT community, especially with developers, there have been discussions that they would make virtual machines obsolete.

“VMs (and with them hypervisors) are consumers of resources that could be used to run applications” is the general thought that comes to mind when you only take containers into account. Which is true if you are a developer and only care about your applications: containers on bare metal is the way to go!

But you do need to take into account that applications have to be maintained and managed by the IT infrastructure operations guys. For them it’s about creating a stable environment and making sure that the infrastructure delivers its services in a resilient way, while keeping everything manageable. For this reason server virtualization and virtual machines have been a huge success over the last 10 years: abstracting compute functionality into virtual machines made infrastructure management easy and optimized resource management.

Now containers come along, all those great features of server virtualization are forgotten, and basically Ops is being told: bare metal is the way forward.

In my opinion this all comes down to a responsibility definition : Containers solve a Dev problem, virtual machines solve an Ops problem.

Better yet, virtual machines provide a foundation for the challenges that containers have. The biggest problem with containers today is that they don’t provide strong isolation, something that has been a feature of VMs since the beginning. So the shortcomings of containers are solved by virtual machines.

The picture below gives a graphical representation of a container on top of a virtual machine.

Don’t get me wrong: I love containers and the problems that they solve for developers, but I don’t think we need to throw away best practices on how to operate an IT infrastructure just because we can. Virtual machines have proven to be a good foundation within the IT infrastructures of today, and tests have already shown that the resource impact of running containers on top of a hypervisor is minimal, even negligible.

So containers on top of virtual machines: the best of both worlds & bringing Dev and Ops closer together.


Cloud Immigrant vs. Cloud Native Applications

Lately I’ve been having discussions with customers around the topic of Cloud-Native Apps. It’s cool to talk about these new developments, but it also raises a lot of questions with my attendees, and they want to know what my opinion / definition of Cloud-Native is.

Most of the time I refer to the analogy of Digital Natives vs. Digital Immigrants. The term was coined by Marc Prensky in his article Digital Natives, Digital Immigrants, in which he describes the failure of American educators to understand the needs of modern students. Or, to put it in a broader perspective, you have people (Digital Natives) that grew up with modern technology like computers, smartphones, tablets, etc. and people (Digital Immigrants) that learned (or didn’t learn) these technologies later in life and did not grow up with them. It shows how different types of people consume technology today and how they work with it.

And that’s where the analogy can be made to cloud-native vs. cloud-immigrant applications. Cloud, in my opinion, is a convergence of multiple technologies at the same time that makes things possible that were not possible 5–10 years ago. But applications have been around since the start of the mainframe and boomed when we got to the client-server era. These applications nowadays reside on virtualized platforms, platforms that are now converted to private clouds. The question, however, is whether these applications make full use of the capabilities of a cloud environment. They were not designed with cloud in mind and are still very dependent on the capabilities that the infrastructure has to offer, even if it’s all software-defined. They live in the cloud, but as they were not designed for it, they can be called “cloud immigrants”.

This of course is different from the applications that developers create today. Given the opportunity to design an application from scratch, developers choose a resilient and scalable architecture and make use of architectural patterns such as microservices. Everything is loosely coupled and can be replicated all over the cloud (or even across clouds). This makes these applications “cloud native”, and they make full use of all the benefits that a cloud architecture has to offer.

So both types of applications can run on a cloud platform, but they have different characteristics. Below is a table showing the difference in some of the characteristics of “cloud immigrant” and “cloud native” applications.

There is no right or wrong when looking at the characteristics of the two different application structures; it just depends on the requirements of your applications. “Cloud immigrants” have served us well over the last decades, and the majority of applications today are still “cloud immigrants”. For the years to come we’ll still need to support them and run them in our clouds. Migrating “cloud immigrants” to “cloud native” is not an easy task, and for an example we just have to look at the past: we’re still running mainframes today; weren’t they supposed to be migrated to the client-server model?

However, “cloud native” is the way forward, and IT departments need to prepare themselves to host these applications on top of their cloud infrastructures. The question then becomes: how do you run “cloud immigrants” and “cloud natives” together?


The rise of the manager of managers

“One ring to rule them all…” The phrase from The Lord of the Rings describes the one ring with the magic power to control everything, including the other rings. Kind of a nerdy intro, but it’s a good analogy for what is currently happening in the space of IT infrastructure automation.

A few years ago every vendor had its own little product portfolio in which it excelled and made most of its money: Microsoft has Windows / Office, Red Hat has Linux, VMware has virtualization, etc. But as cloud popped up, the game changed and everybody started to move into the same space: management and control of the IT infrastructure.

With that move every vendor needed (or is going) to expand its capabilities into terrain that was not its area of expertise. Every vendor is moving up or down the stack to get the most control over the IT infrastructure. It’s all about controlling the resources within the IT infrastructure and being the manager of those resources.

So each vendor creating its own “manager” for its part of the stack, and making that manager capable of managing “other” stuff in the IT infrastructure, raises the question: “Which manager should control my IT infrastructure?”

And as with all evolution, it’s not the strongest, nor the smartest, that will rise to the top; it is the one that can adapt to its environment. Since the data center is not comprised of a single vendor’s products, it needs to be a product that can integrate with all of them, old and new.

VMware’s flagship in automation and orchestration is vRealize Automation (vRA). But the engine that really makes this manager adaptable is the synergy it has with vRealize Orchestrator (vRO).

vRO is the “glue” that makes it possible to connect all the data center components together and integrate them into vRA. vRA will then orchestrate whatever process (i.e. use case) needs to be automated. vRA and vRO are the tools that link everything together.

This does not mean that vRA/vRO replaces the orchestration and management tooling of other vendors. vRA/vRO just becomes the central entity to govern, orchestrate and automate everything within the data center: one central tool to make sure that all your policies are applied across the IT infrastructure. It uses the capabilities of all the other managers to orchestrate the workflows that create IT services. In other words, it becomes the manager of managers.

Below you’ll find a picture of the integration of vRealize Automation with vRealize Orchestrator and how integration takes place with all the other components within the data center.

In the end it all comes down to integration and connecting all IT infrastructure services within the data center. vRealize Automation is the tool to provide that functionality and make sure that you can build a software-defined data center that can run any application.


Project Photon and Lightwave, the start of a new VMware era

VMware Cloud-Native Apps released its first open-source projects with the announcement of Project Lightwave and Project Photon. This is a new step in the path forward for VMware. VMware has always been closed source, while supportive of other open-source projects, but this is the first time that VMware is taking the lead and releasing the code of its own projects as open source.

A new step, and it suits the approach of making “developers first class citizens of the datacenter”. I’ve been working with VMware products for some years now and have seen this trend slowly building up. There is a shift happening: no longer are applications the turf that belongs only to developers, nor is IT infrastructure the turf that belongs only to the IT operations guys. Call it evolution, call it “DevOps”, but more and more organisations see the benefit of making applications and IT operations work closely together to get the best out of both worlds: a platform that can run any application, legacy or cloud-native.

In my opinion it is a good move for VMware to follow this trend and to transform itself from an IT infrastructure company into a company that acknowledges the needs of both the developers and the IT ops guys. VMware is one of the thought leaders in the space of virtualization and cloud computing and has experience introducing complex software concepts into enterprise environments. Server virtualization was the start, with the Software-Defined Data Center being the vision that builds on the advantages that virtualization provides.

VMware Cloud-Native Apps is a new era, a new step forward in continuing to support the evolution of applications into the cloud. And in my opinion it was only natural to choose the path of open source: if you want to treat developers as “first class citizens”, you need to make them part of the VMware application development lifecycle.

This is the start of more things to come. I hope we’ll see more projects targeted at the next generation of applications, with lots of community involvement and the opportunity to be part of something great. VMware ❤️ Developers!

For more information on Project Photon & Lightwave go to http://vmware.github.io/


VMware support for CoreOS

As of today VMware provides support for CoreOS on both vSphere and vCloud Air. This again marks VMware’s effort to support the containerized world.

CoreOS is one of the lightweight Linux distributions that is ideal for containers. It is a minimal-footprint OS and is designed to run apps that can benefit from a distributed architecture.

It is this distributed architecture that makes it possible to run services at scale with high resilience.

Bringing CoreOS to vSphere and vCloud Air really creates the best of both worlds: running an OS tailored for Cloud-Native Apps on an infrastructure platform that is built to provide the best performance and resilience from an infrastructure perspective.

The CoreOS OVA has the open-vm-tools natively installed, so the OS gets the full benefit of everything VMware has to offer.

More on the announcement can be found at VMware and CoreOS.


Positioning Openstack within the VMware SDDC

Openstack is the leading open-source platform for deploying virtual machines in data centers. It allows IT infrastructure teams to deploy virtual machines and other IT infrastructure components, either through the service portal or through the APIs that come with Openstack.

The discussion that I have with most customers around Openstack is that they think the functionality of Openstack and VMware vRealize Automation (vRA) is the same.

In fact customers are right. We do offer the same functionality that Openstack has to offer, but vRA is much more than an Infrastructure-as-a-Service (IaaS) platform. To define the positioning I have plotted Openstack in the VMware SDDC solution offering below.


vRA (Cloud Automation) at its core is a self-service portal that can deploy virtual machines. It consumes the resources that are provided to it by the compute, network and storage layers in order to create virtual machines that can host applications. This is the same functionality that Openstack offers.

However, vRA and the rest of the vRealize suite can do a lot more than provision infrastructure resources. Providing IaaS is just the first step of automation. The end goal is to provide full capabilities to manage and monitor all the data center resources that deliver virtual machines and application resources. Integration of all the IT management components is crucial for the creation of a Software-Defined Data Center.

And that’s where the big difference is: Openstack in its essence is an IaaS tool, while vRealize Automation is an automation & orchestration engine to create an SDDC (and also includes IaaS).

SDDC is not a VMware-only stack. SDDC is a term for the automation, orchestration and integration of all IT components in the data center, and it needs to work with all the IT solutions you already have inside your data center. So it could well be that you have a VMware estate next to an Openstack estate to service different workloads within your data center. Whichever flavour of Openstack is used is the choice of the customer; VMware vRA can connect via the Openstack APIs to manage the resources in the Openstack layer.
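To make that a bit more concrete, below is a minimal sketch (not vRA’s actual integration code) of what driving the Openstack APIs looks like, using the openstacksdk Python library. The Keystone endpoint, credentials, project, image, flavour and network names are all placeholders:

    # Minimal sketch of talking to the Openstack APIs with the openstacksdk library.
    # Endpoint, credentials, project, image, flavor and network names are placeholders.
    import openstack

    conn = openstack.connect(
        auth_url="https://openstack.example.com:5000/v3",   # hypothetical Keystone endpoint
        project_name="demo",
        username="demo-user",
        password="secret",
        user_domain_name="Default",
        project_domain_name="Default",
    )

    # List the instances already running in the Openstack estate...
    for server in conn.compute.servers():
        print(server.name, server.status)

    # ...or provision a new one, the way a portal like vRA would on behalf of a user.
    server = conn.compute.create_server(
        name="demo-vm",
        image_id=conn.compute.find_image("ubuntu-14.04").id,
        flavor_id=conn.compute.find_flavor("m1.small").id,
        networks=[{"uuid": conn.network.find_network("private").id}],
    )
    conn.compute.wait_for_server(server)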

VMware also offers an Openstack flavour of its own: VMware Integrated Openstack (VIO). This is a distribution for those companies that want an enterprise-grade version of Openstack: a predefined installation of Openstack that is supported and maintained by VMware.

So the conclusion is that Openstack can be one of the building blocks within the SDDC to host the application workloads in your data center. It fully integrates, and the result is the best of both worlds.


API Coolness = Real Life Service Mashups

I just read about this and in my opinion it’s pretty cool. Uber and Spotify will join forces and give you the option to listen to your favorite music on Spotify while taking your Uber cab to your next destination. How cool is that?

The question that came to my mind was: is this a trend that will be the next cool thing in 2015?

There are probably loads of examples out there, but this does seem to be becoming a trend: connecting a service that makes your life more comfortable to multiple other services that each provide something different, where the combination creates a better consumer experience.

True, this has existed for years in the digitized world of software, but as our real-life services get digitized more and more, the possibilities grow with them. Digitization and consumerization drive innovation and the exploration of new ways to take the consumer experience to the next level.

And with everything being software, APIs are the way to connect services together. This ability to mash up real-life services is really a level up in consumerization. Today it’s Uber and Spotify connecting; I wonder what the future will hold. One thing’s for sure: in the end it will deliver a better experience for both me and you.
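As a toy illustration of such a mashup (the endpoints below are completely made up, not Uber’s or Spotify’s real APIs), combining two REST services often comes down to little more than this:

    # Toy mashup sketch: combine two hypothetical REST APIs into one experience.
    # The endpoints are made up for illustration; real services need authentication, etc.
    import requests

    # Service 1: a hypothetical ride-hailing API that knows about the current trip.
    ride = requests.get("https://api.ride.example.com/v1/trips/current").json()

    # Service 2: a hypothetical music API that can start a playlist in the car.
    requests.post(
        "https://api.music.example.com/v1/playback",
        json={
            "playlist": ride.get("favorite_playlist", "road-trip-classics"),
            "device": ride["vehicle_id"],
        },
    )
    print("Playing music for trip", ride["trip_id"])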


“All things are created twice” : Basics to IT Infrastructure Design

“All things are created twice” is one of the principles that immediately comes to mind when thinking of designing IT infrastructures. It’s the principle by Stephen Covey that says that we first create things in our mind, before we even produce anything in the physical world.

Think about that for a sec. We think first before we do something. It’s the process we do unconsciously all day. We do it every moment of the day, over and over again.

So the same goes for designing new IT infrastructures. First think about it, write that down into design documents and then build the IT infrastructure as defined in your design documents.

Compare it to building a house. Nobody goes out, buys bricks and mortar and then starts building something without a design. Same goes for building a new IT infrastructure or whatever it is that needs to be thought out before it is created. You don’t go out and randomly install software hoping it will result in the optimal IT infrastructure that suits your needs.

Or better yet, the needs of your customer / company. Because most of the time you don’t design according to what you think is best; you design the infrastructure to suit the needs and requirements of somebody else. Just like with building a house, you are the architect, trying to figure out the needs and requirements of your customer. How big must it be? How many people are going to live in it? How should the plumbing / electricity be installed? And last but not least, what is the total amount of money that can be spent?

But we’re not building a house, we are building an IT infrastructure. The variables change, but the design methodology is the same. First think of what you want to create, then go out and build it.

And maybe this is, in a nutshell, what the VCDX program is all about. The program is not about magical sorcery; it’s about showing that you can architect an IT infrastructure that suits the needs of your customer / company. As I always say: “There is no general perfect design; the perfect design is the design that meets the requirements of your customer while taking the constraints into account.”

That’s what is looked for in the VCDX program: people that can show that skill and are able to present and defend it to the rest of the world, or in the case of the program, the panel. So step up to the plate and show that you are an IT infrastructure designer. Good luck!

Click on the link for more information on the book by Stephen Covey, “The 7 Habits of Highly Effective People: Powerful Lessons in Personal Change”.


vExpert 2014

I’m grateful to be awarded the vExpert award once again in 2014. The VMware vExpert program acknowledges the people within the community that have contributed to evangelizing virtualization as a whole. I’m proud to be part of that group, and in my new role as Solutions Consultant I will hopefully contribute even more to the community than before.

For the complete list of vExperts 2014 see: https://blogs.vmware.com/vmtn/2014/04/vexpert-2014-announcement.html

If you think you are vExpert material then apply here: https://blogs.vmware.com/vmtn/2014/04/vexpert-2014-q2-applications-open.html

