Upcoming New Website and Collaboration with TheVirtualist.org


Guest Post by Marek Ďuriš, TheVirtualist.org

I am very happy that some members of our community have stepped into a closer cooperation with virtualizationmatrix.com.

This site is well known in the broad technical community and used by many technical specialists to find the best match for the design, use and tuning of their virtualized environments – all with an impartial technical look at each of those technologies.

Our experienced members were welcomed into a team working on a significant enhancement of the site's capabilities, following a simple objective: extend coverage across the virtualization technology landscape and enable other industry experts (like us) to publish their expertise.

I believe we can soon expect a brand new matrix website that will be of even more help to all industry practitioners, consultants and fans. The new website will span most currently existing solutions: from the hypervisors of VMware, Microsoft, Citrix, Red Hat and Oracle, through IBM Power technology, to various products for backup, storage, networking, performance & capacity management and security that may be used in the modern software-defined datacentre.

All this software will be structured with a high level of detail and displayed in a holistic view for simpler decision making, and interesting insights and reviews will be brought to you by TheVirtualist.org.

Keep watching this space and our website – you'll find out more soon.

I’m looking forward to that!

Comment (VirtualizationMatrix): TheVirtualist.org comprises a team of experienced industry experts, including 1 VCDX, 13 vExperts, 2 VCIs, 9 VCAP5-DCDs and 11 VCAP5-DCAs; all members are VCPs, and the team includes specialists certified in Hyper-V and Citrix Xen.

Deploying VMware Infrastructure on SoftLayer (public cloud) – Technical Considerations and FAQs


This article will discuss SoftLayer’s unique “private cloud” capabilities and provide a technical Q&A for deploying VMware infrastructures on the SoftLayer public cloud.

After my first fair share of architectural engagements, it became clear that SoftLayer's bare metal capabilities resonate incredibly well with our clients.
Please let me stress that bare metal is just "one tool in our architectural tool box" (see picture) … we also offer public virtual servers (and even private ones, where you don't share the underlying physical host with other tenants, e.g. for compliance or performance reasons), so use those approaches where appropriate!

Actually, that's exactly the point: SoftLayer can provide the appropriate technology for your specific workload and deployment scenario; we do not "force-fit" your workload onto a restricted set of (virtual-only) compute instances.

Bare-metal systems open the door to workloads typically not associated with the public (virtual) cloud, like HPC, gaming and big data … but many "Enterprise customers" are also looking for the capability to create flexible private clouds within SoftLayer.
There are many scenarios where this can make sense. Imagine a client that wants to move an on-premise IT solution "as-is" into the public cloud, but the solution has dependencies on a specific hypervisor. One of my current customers, for example, wants to move their e-learning platform into the cloud, but their deployment method for individual "classes" relies on interacting with the physical ESX hosts.

Can I host VMware hypervisors on SoftLayer? Will I have access to the physical host? How is VMware deployed and licensed? What VMware features are available?

So what are your options when you have a dependency on a particular virtualization technology? For some scenarios you could select the public cloud of that particular hypervisor vendor. Yes, there is the risk that you lock yourself further into the ecosystem of a particular virtualization vendor or ignore how (potentially poorly) other workloads and hypervisor requirements might be facilitated. This approach is also unlikely to be sufficient if you need direct interaction with the physical hypervisor host (e.g. for custom deployments), as physical hosts are typically not exposed in these clouds.
Instead, you could consider a cloud vendor that allows you to deploy your workload on ANY hypervisor of your choice (and has also just been named "#1" in IDC's Enterprise Cloud Customer Survey) … – hint ;)

So what VMware features are available? How is it licensed? Where can you find detailed information?

I have summarized the most frequently asked questions in this technical Q&A:


Can I really create an isolated private VMware cloud in SoftLayer?

Yes, you can deploy bare metal systems in SoftLayer, install any supported hypervisor (including VMware ESX) on these hosts and deploy virtual machines using the native management tools. SoftLayer systems are deployed by default in VLANs for segregation, and various networking components (like gateways, routers and firewalls) can be used to create almost any topology.

How would I deploy VMware? (Can I deploy VMware components directly from the SoftLayer portal?)

You have two options:

1) Select and deploy the ESX hypervisor automatically with a monthly bare metal system (see picture). You can also deploy vCenter management automatically with a virtual machine or bare metal system (Windows only).
(Go to https://store.softlayer.com/configure to see all bare metal configuration options.)

2) Alternatively, you can deploy a bare metal host (e.g. with a free operating system like CentOS) and subsequently install ESX manually (e.g. using the Remote Console and virtual media access of the host). You could then install vCenter Server manually or deploy the Linux-based VMware vCenter Server Appliance.

New: You can now also specify “No Operating System” when deploying a bare metal system (see pic)



How is VMware licensed in a SoftLayer environment?

Again, you have two options; essentially, the licensing approach is tied to the deployment mechanism listed above.

1) When deploying ESX from the portal, SoftLayer will automatically enable VMware Service Provider Program licensing (VSPP).
On deployment, a default user "vmadmin" is added to the ESX server for data collection (do not delete it). VSPP charges for the RAM reserved/used by all "powered on" virtual machines (not "per socket" like a standard host license).

2) When deploying ESX manually, customers can utilize the “Bring Your Own License” approach (BYOL). That means they can apply their standard licenses to these hosts.
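To make the difference between the two models concrete, here is a minimal Python sketch comparing an illustrative RAM-based VSPP charge with a socket-based (BYOL) cost. The rates are invented purely for illustration – they are not SoftLayer's or VMware's actual prices.

```python
# Hypothetical cost sketch: VSPP charges per GB of RAM reserved/used by
# powered-on VMs, while BYOL ("Bring Your Own License") is per CPU socket.
# Both rates below are made-up placeholders, NOT real pricing.

def vspp_monthly_cost(powered_on_vm_ram_gb, rate_per_gb=3.0):
    """RAM-based VSPP charge: only powered-on VMs count."""
    return sum(powered_on_vm_ram_gb) * rate_per_gb

def byol_monthly_cost(sockets_per_host, hosts, cost_per_socket=290.0):
    """Socket-based license cost (illustrative monthly figure)."""
    return sockets_per_host * hosts * cost_per_socket

# Example: 10 powered-on VMs with 8 GB each vs. two dual-socket hosts
vspp = vspp_monthly_cost([8] * 10)   # 80 GB * 3.0 = 240.0
byol = byol_monthly_cost(2, 2)       # 4 sockets * 290.0 = 1160.0
print(vspp, byol)
```

The point of the sketch is simply that VSPP scales with actual powered-on consumption while BYOL is fixed per host – which model is cheaper depends entirely on your VM density and real rates.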

I have a client that is a service provider – can they use their own VSPP licensing for hosts that they rent in SoftLayer?

No, customers can either use SoftLayer's VSPP or "BYOL" for socket-based licensing. They cannot utilize their own VSPP agreement.

You state that SoftLayer's VSPP licensing approach is automatically enabled when deploying VMware from the portal – can I use my socket-based license (BYOL) for these systems?

No, SoftLayer currently does not support this approach. When deployed from the portal, SoftLayer's VSPP is expected to be enabled.

Do I have to deploy a vCenter Server instance for VSPP to work?

No, if no vCenter Server is deployed, usage information for licensing is collected directly from the ESX hosts.

What license level is enabled when deploying vSphere from the SL portal?

Enterprise Plus (the highest vSphere license level).


Can I create ESX clusters and use features like vMotion, DRS and VMware HA?

Yes, physical servers deployed with VMware can be configured in a cluster with all associated features enabled. This typically requires shared storage and the appropriate VMware license level (when deploying VMware from the SL portal, Enterprise Plus is enabled).

Can I segregate my VMware traffic in SoftLayer similarly to an on-premise deployment (separate VLANs for VM traffic, storage and management)?

Yes, you can segregate traffic using SoftLayer's native VLAN capabilities. Multiple private networks can be 'trunked' to the ESX servers, allowing the virtual switches to apply VLAN tagging at the port group layer for layer-2 isolation. You can use SoftLayer network devices like the Vyatta gateway to route traffic as desired.
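As a sketch of what this looks like on the ESX side, the snippet below generates the standard-vSwitch esxcli commands for one VLAN-tagged port group per traffic type. The VLAN plan and vSwitch name are hypothetical examples; verify the exact esxcli syntax against your ESXi version before use.

```python
# Hypothetical VLAN plan for segregating traffic types (IDs are examples)
TRAFFIC_VLANS = {
    "Management": 10,
    "vMotion": 20,
    "Storage": 30,
    "VM-Traffic": 40,
}

def portgroup_commands(vswitch="vSwitch0"):
    """Build esxcli commands creating one VLAN-tagged port group per
    traffic type on a standard vSwitch (syntax per vSphere 5.x esxcli)."""
    cmds = []
    for name, vlan in TRAFFIC_VLANS.items():
        cmds.append(f"esxcli network vswitch standard portgroup add "
                    f"--vswitch-name={vswitch} --portgroup-name={name}")
        cmds.append(f"esxcli network vswitch standard portgroup set "
                    f"--portgroup-name={name} --vlan-id={vlan}")
    return cmds

for cmd in portgroup_commands():
    print(cmd)
```

The VLAN IDs assigned at the port group level must of course match the VLANs actually trunked to the host uplinks on the SoftLayer side.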

Could I use a SL DataCenter to provide a failover site for my VMware Site Recovery Manager (SRM)?

That scenario can be achieved with SRM and "vSphere Replication" (software-based replication).


What storage solution would you recommend for private VMware clouds in SoftLayer (to store virtual machines)?

SoftLayer's bare-metal mass storage servers are ideally suited to provide cost-effective, high-performing shared storage for virtual machines.
You can e.g. configure a server with the appropriate mix of SATA, SAS and SSD drives, deploy it with QuantaStor software from the SoftLayer portal, then carve up LUNs according to your specific requirements and make them available via iSCSI or NFS.

Can I use VMware’s iSCSI initiator with MPIO (multi-pathing) in a SoftLayer environment?

Yes, you can. There are however some considerations.

By default, SoftLayer places the host uplink (NIC) ports on the infrastructure switches into an LACP pair. Do not configure iSCSI MPIO on ports configured with LACP (MPIO expects "unbundled" ports). You can have the LACP ports ungrouped by submitting a SoftLayer support ticket.
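A toy illustration of the consideration above: given a (hypothetical) description of host uplinks, flag the ports that are still LACP-bundled before iSCSI port binding is configured on them.

```python
# Illustrative check for the LACP/MPIO conflict described above.
# The uplink descriptions are invented for the example.

def mpio_conflicts(uplinks):
    """Return the uplinks that must be un-bundled (via a SoftLayer
    support ticket) before configuring iSCSI MPIO on them."""
    return [u["name"] for u in uplinks if u["iscsi"] and u["lacp"]]

uplinks = [
    {"name": "eth0", "lacp": True,  "iscsi": False},  # public traffic, fine
    {"name": "eth1", "lacp": True,  "iscsi": True},   # conflict: un-bundle first
    {"name": "eth2", "lacp": False, "iscsi": True},   # ready for MPIO
]
print(mpio_conflicts(uplinks))  # -> ['eth1']
```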

Can I use SoftLayer's (portal-deployed) NAS volumes to provide storage for VMware hosts?

No, SoftLayer's NAS storage provides CIFS- and FTP-based access. VMware requires NFS connectivity for (NAS) datastores hosting virtual machines.

Where can I find considerations for deploying VMware VSAN in SoftLayer?

While it is technically possible to deploy, VSAN is currently not a fully certified offering in SoftLayer. Merlin Glynn's (aka virtualMerlin) blog contains useful background information on VSAN and its deployment considerations (as well as other related topics – have a look!).


I was told that SoftLayer is “Xen-based” – how can you offer VMware environments?

We need to distinguish between SoftLayer's native (portal-deployed) virtual servers and virtual machines created on private clouds using bare metal hosts …

SoftLayer's native virtual machines are Xen-based, but this is of little relevance as the hypervisor is not exposed to the "consumer". You order a VM and deploy a workload to it.
When creating private clouds in SoftLayer, a bare metal host with a hypervisor of your choice is deployed, and you interact with that hypervisor directly (e.g. through vCenter) in order to create virtual machines.

Why would I not always use a “private cloud” approach in SoftLayer?

If a virtual machine to host a general workload is all you need, then the native public or private virtual servers available from SoftLayer are typically the most cost effective “out of the box” choice. SoftLayer manages the physical systems these VMs reside on for you, including hypervisor and virtual server deployment.

Bear in mind that when deploying private clouds for specific requirements (performance, compliance, hypervisor interaction etc.) you will have the responsibility to manage the virtualization platform and associated components.

What migration options are available to me to move VMs and VM templates into SoftLayer?

In addition to SoftLayer's data transfer service and migration partners (like Racemi), there are a variety of VMware tools and approaches available depending on your use case, including:

OVFTOOL, VMware Converter, vCloud Connector (VCC), vSphere Replication (e.g. with Riverbed Virtual SteelHead WAN optimization), SRM and vSphere cold migration. See details HERE.
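As one hedged example of the tooling above, the sketch below builds OVFTOOL command lines to export a VM from an on-prem vCenter and deploy the resulting OVA to an ESXi host in SoftLayer. All hostnames, inventory paths and datastore names are placeholders; check your ovftool version's documentation for the exact `vi://` locator syntax.

```python
# Sketch: assemble ovftool command lines for an export/import migration.
# Hostnames, datacenter and VM names below are placeholders.

def export_cmd(vcenter, datacenter, vm, out_ova):
    """Export a vSphere VM (via its vCenter inventory path) to a local OVA."""
    source = f"vi://{vcenter}/{datacenter}/vm/{vm}"
    return ["ovftool", source, out_ova]

def deploy_cmd(ova, esx_host, datastore):
    """Deploy a local OVA onto a target ESXi host and datastore."""
    return ["ovftool", f"--datastore={datastore}", ova, f"vi://{esx_host}"]

print(" ".join(export_cmd("vcenter.local", "DC1", "app01", "app01.ova")))
print(" ".join(deploy_cmd("app01.ova", "esx01.example.softlayer.local", "datastore1")))
```

In practice you would also pass credentials (or embed them in the locator) and likely run the deploy step over the SoftLayer VPN or from a jump host inside the private network.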

Where can I find more detailed information on VMware deployments in SoftLayer?

This VMware@SoftLayer document in the KnowledgeLayer contains links to very useful cookbooks and technical documents.

Note: All of the above information is subject to change – always check for up-to-date information on the "KnowledgeLayer" or via your formal IBM/SoftLayer support mechanisms.

Update 15/07/14: Big shout-out to Adem Yetim from vmware.pro for translating this article into Turkish at:

Yes, the (Public) Cloud can be a Scary World!


… let me add … for “traditional IT”. And before you stop reading as you expect another dogmatic downpour of “the public cloud is unequivocally great” – my background is actually “traditional IT” (having architected “Enterprise” virtualization since early 2002).

The success of Enterprise virtualization empowered IT departments – many have carved out infrastructure-services-based revenue streams totally unrelated to the company's core business.

When I started to pitch public and hybrid cloud aspects, the reaction of my “audience” changed – while e.g. every new vSphere feature had been accompanied by gasps of excitement, public cloud enhancements were dismissed as marketing hype, and objections on “security” and “compliance” thrown back at me instantly.

Let me be clear, these objections are legitimate concerns. The cloud does not negate our responsibility as architects to determine the functional and operational requirements for our services and ultimately the suitability of the platform (which now includes private vs public/hybrid cloud).

What I am however increasingly worried about is the number of (larger) IT departments that see strategic “rescue from the public cloud” in creating their own “public” IaaS offerings. They not only aim to provide infrastructure internally but also to “their clients”.
I'm not just talking about the usual (legitimate) suspects – Service Providers, Integrators and Telcos – but financial institutions, rental companies, … hell, even an airline – where the IT departments had carved out infrastructure-services-based revenue streams that were totally unrelated to the company's core business.

So now IT departments (and I really don’t care how large they are) are trying to protect this revenue stream and compete against the largest public IaaS providers – with ‘economies of scale’ they are unlikely to ever reach … offering infrastructure (not even SaaS or higher value layers) … diluting focus on the core business of the organization …? Does that really make sense?

It is (more than ever) the responsibility of the core business line to validate internal IT strategy – consider a public offering to provide the commoditizing IaaS layer and focus on differentiating layers instead.

We shouldn't be surprised though; the success of virtualization in "Enterprise IT" literally empowered our IT departments over the last years … cost savings and automation often brought them to the top of the business agenda. Public cloud, however, is (rightly or wrongly) associated with a loss of control, relevance and ultimately power, often resulting in upfront rejection of the public cloud and the natural urge to provide it "yourself".

So do we really expect the horse to tell us that the car has been invented …?
It is therefore (more than ever) the responsibility of the core business line to validate internal IT strategies and evaluate alternatives. Again, "public cloud" is not always the right answer, but most organizations will (and should aim to) benefit from it.

Also (you probably already picked up on that), I am NOT referring to service providers that provide differentiating value on top of the infrastructure layer through e.g. SaaS or managed service offerings.
It is however key for these providers to stay one step ahead of the ever commoditizing IT landscape and constantly innovate to keep an (uncommoditized) “niche” at the core of their offering.
So consider this seriously … what is a better way to remain price-competitive, flexible and agile, than using a public IaaS offering to provide this commoditized layer and focus on the differentiating layer instead …?

And the very same applies to our "traditional IT" departments. Take advantage of the public cloud and embrace it as an enabler (yes, where appropriate) – the reality is that people have worked and will work around you if you don't. The stealth use of public cloud is a reality, and I genuinely believe IT is fighting a losing battle if it chooses to ignore it (e.g. a recent study shows that 80% of us admit to using non-approved SaaS software!).

So yes, the public cloud can be a scary world! But ignore it at your own peril …!

And let me be clear – I am not using terms like “flexible” and “agile” as marketing buzzwords – the simplicity and speed of deployment in the public cloud can really be stunning.

When I first got access to a SoftLayer account – yes, shameless plug here ;) – I deployed a physical host pre-configured with the ESX hypervisor, alongside a virtual compute instance fully pre-configured with OS and vCenter, and then decided to securely connect it via the (provided) VPN to my existing (on-prem) vCenter environment for a test run (see pic above) … all in the space of 3 hours – no kidding!

Yes, lots of time left to focus on the important stuff … ;)


PS If you want more (technical) information on how to configure VMware in a SoftLayer environment, see the following link: Deploy VMware @ SoftLayer

GPU for “3D VDI” – Vendor Comparison: Soft / Shared / Dedicated / vGPU


A high-level 'graphics acceleration' comparison section has been added to the Desktop Virtualization category of the 'Matrix' (in addition to the previously covered aspects, all listed below).

Which vendor can provide software GPU, shared GPU or ‘GPU pass-through’ capabilities? What levels of OpenGL or DirectX are supported?
The comparison includes e.g. VMware’s vSGA, vDGA; Citrix XenDesktop’s HDX 3D Pro capabilities including vGPU support for NVIDIA’s Grid GPUs (K1 and K2) as well as Microsoft‘s RemoteFX capabilities.

To go directly to the “VDI Comparison View” on Virtualization Matrix click HERE:

Update (01/03/14): Quick shout-out to "HDX Master" Thomas Poppelgaard from http://www.poppelgaard.com for pointing out the existence of the Citrix OpenGL Accelerator (his site contains tons of useful details in this area!).

Update2 (08/03/14): Thanks to Rachel Berry from Citrix for pointing out this new blog article on clarifying the DirectX 9 software rasterizer in XD7.x in the standard VDA as well as the OpenGL Accelerator available for VDI-in-a-box, XenDesktop and XenApp.

Sample Screenshot  (subset only) – the matrix allows you to click on each field for further details
(Note that XenDesktop 7.5 is announced but the release date is expected to be in March 2014)

List of supported GPUs (AMD and NVIDIA)
(also contained in the "details popup" in the matrix):

VMware: Hardware and software requirements for running AMD and NVIDIA GPUs in vSphere 5.5

Microsoft: GPU Requirements for RemoteFX on Windows Server 2012 R2

Citrix: GPU Pass-Through HCL   and     Virtual GPU HCL

The section now compares the following high-level Desktop Virtualization “features” for Microsoft (Remote Desktop Services – RDS), Citrix (XenDesktop / XenApp), VMware (Horizon Suite) and Red Hat (VDI as part of Red Hat Enterprise Virtualization):

  • Overview – High-level summary of the vendor’s desktop virtualization solution capabilities and packaging
  • Market Position – Vendor’s market position for VDI (either according to analysts or personal evaluation)
  • Solution Scalability – How large are typical deployments, what scalability limitations are documented
  • Complexity – How complex is the setup and management of the solution (impact on required skills)
  • Display Protocol(s) – What are the supported display protocols and associated capabilities?
  • “VDI” – Does the vendor provide capability to run hosted virtual desktops (HVD)
  • Hosted Sessions – Ability to provide Terminal Services-like capabilities (SBC, RDS etc.)
  • Graphics – General graphics related capabilities, hardware offload/redirection or other protocol capabilities
  • Software GPU – Ability to emulate GPU capabilities with specialized virtual hardware and drivers in the virtual machine – without the use of a fully capable physical graphics adapter (GPU) in the host system.
  • Shared GPU – The ability to share physical graphics adapters (GPUs) in the host to achieve advanced graphics
  • Dedicated GPU – The ability to dedicate a physical GPU to a user/vm
  • Endpoint Platforms – Supported platforms to run the client applications to connect to virtual desktops / Apps
  • Storage Optimization – Integrated storage technologies that e.g. reduce IOPS requirements for VDI
  • User Portal – Central (enterprise-class) portal capabilities that give users a single point of access
  • Persona & Layering – Integrated capabilities to manage aspects of user persona (profile management) and advanced layering technologies (compartmentalization of images into OS, app & persona layers)
  • DaaS – Desktop as a service (DaaS) capabilities enabled / offered by the vendor

“Detail Pop-up” sample (click on feature field in matrix)

The curse of mobility … and what do you really want from your future “personal computing device”?


Think about it … Why do we REALLY carry 2, 3 or more devices with us? What is it that you want from your “personal computing device”, whether PC, tablet or Smartphone?

Today I stumbled across a note that I started to write in Sept 2012 – essentially thoughts on the future of the "desktop". As they still hold true, I thought I'd better post them before they become just another "blog idea" in my "Evernote drawer" …

A great way to create a “vision” and identify future trends is to play the “if you could create anything – what would it be …” scenario – essentially a “wish-list” without boundaries.

Same OS across all my devices …  switching seamlessly between “consumer” and “productivity” mode … same set of applications … same data  … image replicated to a DaaS instance … all enabled through next-gen I/O peripherals

So what’s my main gripe with today’s “End-User Computing” …?

The fact that I have multiple devices for “portability” reasons (not to do different things with them), all with varying sets of applications, data and usability attributes (screen size, touch vs click etc.).

Spreadsheets and PowerPoint on my tablet without a mouse and MS Office? No thank you! (Yes, I could plug an MS ad here ;)

Quickly replying to a mail in a crowded airport shuttle on my full-sized laptop? No thank you!
If I reverse over my laptop with my car (yes, I have seriously managed that), do I immediately have a replacement system with the same apps and data on it? No!
So what are my “wishes” (as “productivity” user)? It really comes down to one fundamental objective:
I want to be able to do "the same" with any device! (Ultimately, to just have "one" device.)

“Imagine you sit in a plane and pull out your fold-able screen or switch on your projection glasses that give your “phone” a massive HD screen and project a virtual keyboard / mouse. Would you still carry a laptop?”

So on a practical level – what does that mean and what would a possible approach be?

  • Ability to run the same set of applications on all of my devices (phone, tablet, laptop, PC, VDI)
    • (Initially): same OS across all devices 
    • Evolution: OS independent app delivery (HTML5, virtual app streaming etc.)
    • Managed by a cross device (cloud) “App Manager” for “Install Once – use on All” capability
      • Install app on e.g. Laptop and have it automatically installed (or e.g. streaming enabled) on all devices (e.g. popup asking “install on all or local only”)
      • Policies to prevent this for selected apps / devices (e.g. work / compliance)
      • Or … better … composite vs monolithic image – a filter mechanism to "hide" non-compliant (but installed!) apps/components depending on which network the device is attached to – work vs personal
      • Enable central cross-device app license management
  • Ability to use the same devices for my “consumer” (touch) and “productivity” (mouse) mode 
    • Dual mode OS –  ability for the OS to switch (seamlessly) between the “consumer” and “precision” (mouse) mode – you can argue we see the beginnings today with Win8
    • Where required, develop “dual-mode” applications (touch/precision) that automatically adjust to OS mode (e.g. tiles to icons, touch to mouse)

In addition, I want the content on my devices to be identical, so that regardless of which device I'm carrying I can access the same data, survive device failure without data loss and even access data without any of "my" devices, online or offline.

  • “Enterprise-class” replication service needs to be seamlessly integrated in “OS”
  • Similar "library" structure on all devices in all modes (I don't want to be looking in different places on different devices – I really want to stop caring which device I carry)
  • Folder-level replication or block-level "data layer" (e.g. VMware Mirage-like central image repository) synchronization across all devices (remember, we now have the same OS and the same apps ("hidden" or not) installed on all devices)
  • Facilitate private / business confidential bubble layer that never gets replicated
  • And yes, have all of this seamlessly synchronized into a DaaS-hosted virtual desktop for access when I have broken, lost or simply forgotten my "mobile PC".

A fundamental evolution of the way we input data or display it is required to enable this approach …

In my view, a fundamental evolution of the I/O peripherals (the way we input data or display it) is required for this to happen. We should realize that THIS is the ultimate driver for the proliferation of devices due to “mobility” restrictions.

If we had a device the size/weight of a phone but with the "screen size" of a TV and the equivalent of a full-size keyboard and mouse as input devices … why would we carry 2, 3 or more devices? (Sufficient compute power and the application and data equivalency above assumed.)

    • (Initially) I'd like all tablets / hybrids to have a non-obtrusive trackpoint / touchpad and pull-out / fold-out keyboards "built in" to facilitate both "touch" and "precision" mode
    • Smaller devices (e.g. phones) should all have standardized (yes, I said, I’m dreaming without boundaries) “cradles” that allow you to connect them to the larger form factor I/O devices
      • OS dimensions and mode should adjust automatically: set your phone in a cradle that is connected to a full-size screen and keyboard/mouse -> the look and feel switches from "mobile phone OS" to "workstation OS"; same for apps
    • (Future) Use projection glasses, flexible “fold-out” screens, mini projectors, laser keyboards, gesture/voice recognition devices to overcome I/O device size/portability issues. 

Imagine you sit in a plane (yes your “favorite middle-seat in “Economy”).
Now pull out your fold-able screen or switch on your evolved projection glasses (Google Glass or Oculus Rift-like) that at a touch of a button connect to your “phone” and give you the vision of a massive HD screen AND project a 3d virtual keyboard / mouse.
The glasses could have Kinect-like fingertip tracking built in to allow you to "virto-type" …

Suddenly your “phone” can do it all … no need to carry other devices.

Yes, if it breaks, use one of your other devices (at least it will now have identical set of applications and data) or connect from a terminal to your virtual desktop, that is (again) “identical”.

Let me snap out of that dream, I know we are facing the reality of increasing concerns around privacy and data security, lack of standards, corporate policies, legacy dependencies and more … but remember, this is a “vision”, a wish list – not a technology-feasibility study, some technologies arguably already exist, some may be further out, some probably already superseded by better ideas … happy dreaming …

Who will Service Providers choose? Hybrid Cloud Management – Key Battle for VMware and Microsoft


The Virtualization Matrix has been updated with vSphere 5.5, Windows Server / System Center 2012 R2 and related products (Cloud, Desktop Virtualization, Network Virtualization etc.)

The fight for the virtualization-vendor "top spot" is certainly not fading away; if anything, it has intensified and quickly expanded into strategic peripheral parts of the ecosystem.

Recent product releases from both market leader VMware and challenger Microsoft make it clear that (while both are rapidly updating fundamental hypervisor and hypervisor-management capabilities) the focus has shifted into the areas of software-defined networking/storage and (above all) cloud management – aiming to gain control of the quickly emerging cloud ecosystem and address the needs of the "new age" Service Providers.

The trend is clear – IT resources will increasingly be provided from public cloud resources.
VMware and Microsoft will focus on convincing Service Providers to choose their hybrid cloud management layer – in order to make it the “default gateway” into this future resource pool, the vendor’s own public cloud services …

Both are asking Service Providers to become "brokers" of services; to succeed, they have to provide Service Providers with a vision of a sustainable and profitable business model in this future ecosystem …

Both companies recognize that virtualized infrastructure and services will increasingly be provided / consumed from off-premise resources. While this is a gradual process, the trend is clear, ultimately reducing future revenue scope for these vendors with classic on-premise “enterprise virtualization” footprint. This is forcing both to provide not only leading virtual infrastructure & private cloud management but aggressively accelerate their ability to enable public and hybrid cloud capabilities.

Microsoft and VMware make impressive progress but also face similar challenges.


With vSphere 5.5 VMware made a number of enhancements to its “platform” capabilities (see separate blog summary) to defend its Enterprise virtualization leadership.

But let’s look at VMware’s more strategic battle grounds.
Driving the vision of the “Software Defined Datacenter” 
VMware launched its second endeavor to provide "virtual SAN" capabilities with the promising VSAN (integrated at kernel level into vSphere, rather than a virtual storage appliance as with the VSA). VMware is also successfully developing "network virtualization" mind-share around its VMware NSX platform (combining Nicira and vCloud Networking and Security).

Still, the recent announcements at VMworld Barcelona actually mirror VMware’s core focus – securing the hybrid cloud management control point.
The “Cloud Management Launch” focused on enhancements to vCloud Automation Center (now integrating with VMware’s vCloud Hybrid Service and Red Hat Open Stack) and the integration with the new VMware IT Business Management Suite (providing insight into cost and utilization of shared resources).
vCOps Suite 5.8 ties into this by providing enhanced hybrid insight, with performance monitoring for Microsoft applications and deeper visibility into applications running on Microsoft Hyper-V and Amazon Web Services.

VMware will have to clarify its product strategy around vCloud Director (vCD) and vCloud Automation Center (vCAC)

But it’s not yet all perfectly lined up, VMware will for instance have to provide more clarity around its vCloud Director (vCD) and vCloud Automation Center (vCAC) strategy.
With vCAC arguably being a "victim" of its own success, Service Providers are expressing concerns about the future of vCloud Director. VMware's blog clarifying the general direction – vCD = Service Providers, vCAC = Enterprise customers – was necessary positioning but (from personal feedback) did not give many SPs the needed "warm feeling" of a secure vCD roadmap and clear integration plans with vCAC.


Microsoft had caught up with most “must have” virtualization features with its Windows Server / System Center 2012 release.
The R2 release provides essentially none of the usual scalability updates and focuses instead on storage and network virtualization enhancements.

The new tiering feature of Storage Spaces and the Hyper-V Network Virtualization enhancements (Site-to-Site VPN, improved interoperability with 3rd party virtualization and hybrid forwarding) are welcome additions.
While they are provided “free” (as part of WS2012 R2) rather than as fee-based products (like NSX or VSAN), they provide varying levels of functionality compared to VMware (e.g. Storage Spaces does not provide “shared storage” through aggregation of inexpensive local storage the way VSAN does).

Like VMware, Microsoft’s real focus however is to start translating its “Cloud OS” vision (a single, common platform for “classic” datacenters, service provider datacenters and Microsoft’s public cloud, Azure) into tangible product functionality. Different terminology – same focus – securing the control point of the (hybrid) cloud management layer.
A prime example here is the release of the Windows Azure Pack, essentially a collection of Windows Azure technologies that sits on top of System Center and Windows Server datacenter instances – allowing Service Providers and Enterprise customers to deliver a “Windows Azure-like” experience when making these datacenter resources available to their “consumers” in the form of self-service, multi-tenant clouds.

Challenge is the true unification of the various virtualization and cloud management components

Similar to VMware, the challenge for Microsoft will be the true unification of its management components, which – although now under a single, simplified System Center Suite – in reality consist of a collection of products and interfaces with varying degrees of integration and overlap.

There is one important added dimension that both companies will have to handle with sensitivity. Given the competitive pressure in the space of providing public cloud resources (if VMware and Microsoft don’t, then “others” happily will), both are forced to aggressively expand their own public cloud offerings, indirectly competing with their own “route to market”. VMware’s recent announcements on expanding its Hybrid Cloud Service into Europe, as well as Microsoft’s rapid expansion of Azure-based offerings (now offering IaaS and Big Data among many other services), will squeeze the scope for competitive, self-hosted Service Provider offerings. (For a comparison of Azure and vCHS, incl. “Enterprise” vs “cloud-enabled workloads”, see Marcel van den Berg’s blog.) [edit] I missed this great article from Massimo Re Ferre on “cloud spectrum” trends covering these workloads – a “must read”. [edit end]

VMware’s recent acquisition of Desktone and Microsoft’s rumored “Mohoro” project are both examples of how quickly a profitable offering space (“Desktop as a Service” in this case) could become commoditized if the big players offer large scale public services (although e.g. Brian Madden puts a different spin on the potential use of Desktone).

Service Providers won’t be able to reach the same economies of scale for “main-stream” IaaS offerings, focus needs to be on providing differentiated offerings with additional value

This is not to say that there is no scope for Service Providers – quite the opposite – but simply put, SPs won’t be able to reach the same economies of scale for “mainstream” (e.g. IaaS) offerings. SPs will need to focus on providing differentiated offerings with additional value (e.g. additional SaaS layers, or regional / compliance / performance niches) – arguably associated with higher effort and forcing them to continuously innovate to stay one step ahead of “commoditized plays”.
Additionally, they will increasingly be asked to act as “brokers” to, for example, the vendors’ public cloud offerings.
Picking the vendor that minimizes the risk of this transition, eases the integration of “on-premise” and “off-premise” resources, and helps them enable differentiated offerings with appropriate margins are all important considerations; after all, adopting a vendor’s “cloud management” layer and APIs ties you further to that vendor.

VMware’s Ramin Sayar was spot-on and straightforward when stating: “… IT needs to evolve from being just a builder to a broker of services. Don’t be a bottleneck … Instead, provide self-service access to critical services by strategically sourcing them either internally or from the public cloud.”
Clearly both of the two vendors will try to make sure that it is THEIR cloud management layer that will be adopted to provide the default conduit from on-premise resources based on THEIR virtual infrastructure into THEIR (public) cloud services.

You can draw your own conclusion from the above and while I’m certain that both VMware and Microsoft value (and need) their partner community they will have to provide Service Providers with a vision for a sustainable and profitable business model in this (challenging) future ecosystem.

Without it, Service Providers might be drawn to (e.g. open source, multi-vendor) cloud management alternatives like OpenStack or CloudStack, that (rightly or wrongly) promise less vendor dependency, something that both Microsoft and VMware will clearly want to avoid. 


VMware vSphere 5.5 – What’s New and Related Updates


Below is a summary of new features introduced with (and around) vSphere 5.5 – the Virtualization Matrix has been updated to reflect these and other enhancements (new features are flagged as “new” in the matrix).

The focus of this article is to summarize the vSphere platform enhancements. For a view on the aligned announcements and competitive challenges around cloud management (including vCAC, vCOPS and vCloud Hybrid Service), feel free to read “Who will you choose? Cloud Management – Strategic Battle Ground for VMware and Microsoft”.

New (aligned) VMware products:

VSAN – Allows the creation of a virtual ‘shared’ SAN through clustering of direct-attached SSD drives (High-IOPS caching) and HDDs (persistent data) – VSAN is kernel-level (no virtual appliance), details HERE

VMware NSX – Convergence of the Nicira NVP network virtualization platform and vCloud Networking and Security; the network virtualization is essentially an ‘overlay’ virtual network built on top of an existing (physical) network – allowing users to create and provision virtual networks in software, managed independently of the underlying hardware – details HERE

vSphere ESXi Hypervisor Enhancements

– Logical CPUs per host – doubled from 160 to 320
– NUMA nodes per host – doubled from 8 to 16
– Virtual CPUs per host – doubled from 2048 to 4096
– RAM per host – doubled from 2TB to 4TB

– Hot-Pluggable SSD PCI Express (PCIe) Devices – hot-add / hot-remove SSD devices while a vSphere host is running
– Support for Reliable Memory Technology – utilizes a CPU hardware feature through which the ESXi hypervisor runs in a more “reliable” memory region
– Enhancements for CPU C-States – the deep processor power state (C-state) is now also used
– New Virtual Machine ‘Hardware’ – virtual HW v10 – LSI SAS support for Solaris 11, support for new CPUs, new Advanced Host Controller Interface (AHCI)
– Expanded vGPU Support – support for AMD GPUs (in addition to NVIDIA); automatic (switches between SW and HW rendering), hardware and software rendering modes
– Graphics Acceleration for Linux Guests – graphics acceleration is now possible for Linux guest OSs – Ubuntu 12.04+, Fedora 17+, RHEL 7

VMware vCenter Server Enhancements

– vCenter Single Sign-On – simplified deployment (single deployment model), native AD support with cross-domain authentication, and a completely new architecture addressing previous issues
– vSphere Web Client – full Mac OS X support, improved usability with “drag and drop”, filtered views and “recent items” navigation
– vCenter Server Appliance – configuration maximum increases (100 hosts)
– App HA – restarts an application service when an issue is detected (vFabric Hyperic and vSphere App HA virtual appliances plus guest agents required)
– HA and DRS Affinity Rules Enhancements – vSphere HA has been enhanced to conform with virtual machine-to-virtual machine anti-affinity rules
– Big Data Extensions (BDE) – web client plugin to deploy and manage Hadoop clusters on vSphere (Project Serengeti)

vSphere Storage Enhancements

– Larger virtual disks – support for 62TB VMDKs, up from 2TB
– MSCS Updates – support for Windows 2012, iSCSI and FCoE for shared storage, and the Round Robin path policy
– 16Gb end-to-end support – now full 16Gb end-to-end FC support (removing the previous throttle-down limitation between switch and array)
– PDL AutoRemove – automatically removes a device from a host when it enters a “Permanent Device Loss” state
– vSphere Replication Interoperability – replicated virtual machines can now be moved between datastores (Storage vMotion or Storage DRS) without incurring a penalty on the replication
– vSphere Replication Multi-Point-in-Time Snapshot Retention – redo logs are retained and cleaned up on a schedule according to the MPIT retention policy
– vSphere Flash Read Cache – hypervisor-level integrated read cache created by pooling multiple flash-based devices into a single consumable “vSphere Flash Resource” (replaces the Swap to SSD feature)
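The Flash Read Cache entry above describes a host-side read cache sitting in front of the datastore. As a rough illustration of the general mechanism only (not VMware’s implementation), here is a minimal LRU read-cache sketch in Python; the block granularity, eviction policy and the `backing` store are illustrative assumptions:

```python
from collections import OrderedDict

class ReadCache:
    """Minimal host-side read-cache sketch: serve repeat reads from a
    fixed-size flash tier, fall back to the 'backing' datastore on a miss.
    Illustrative only -- not VMware's vSphere Flash Read Cache code."""

    def __init__(self, capacity_blocks, backing):
        self.capacity = capacity_blocks
        self.backing = backing          # dict: block id -> data ("slow" tier)
        self.cache = OrderedDict()      # LRU order, oldest entry first
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # mark most recently used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]          # "slow" datastore read
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return data

backing = {i: f"data-{i}" for i in range(10)}
cache = ReadCache(capacity_blocks=3, backing=backing)
for b in [0, 1, 2, 0, 1, 3, 0]:
    cache.read(b)
print(cache.hits, cache.misses)  # 3 4
```

A hot working set that fits in the flash tier is served entirely from cache after the first pass; only cold blocks pay the datastore round-trip.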

vSphere Networking Enhancements

– Link Aggregation Control Protocol Enhancements – provides 22 new hashing algorithms and increases the limit on the number of link aggregation groups (to 64)
– Traffic Filtering – additional port security through the ability to filter packets based on various parameters of the packet header (ACLs)
– Quality of Service Tagging – prioritizes traffic at layer 3 by enabling users to insert tags in the IP header to increase QoS
– SR-IOV Enhancements – simplified workflow for configuring SR-IOV-enabled NICs, plus the ability to communicate the defined port group properties to the virtual functions
– Enhanced Host-Level Packet Capture – equivalent to the command-line tcpdump tool available on Linux (captures traffic on VSS and VDS at the uplink, virtual switch port or vNIC)
– 40Gb NIC support – support for Mellanox ConnectX-3 VPI adapters configured in Ethernet mode
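The Quality of Service Tagging entry above refers to inserting priority tags into the IP header; in standard terms this is the DSCP field in the IPv4 ToS / IPv6 Traffic Class byte. A small sketch of how a DSCP value maps into that byte (the helper function is mine, not a VMware API):

```python
def tos_byte(dscp, ecn=0):
    """Build the IPv4 ToS / IPv6 Traffic Class byte from a DSCP value.
    DSCP occupies the upper 6 bits, ECN the lower 2 (per RFC 2474/3168)."""
    if not 0 <= dscp <= 63 or not 0 <= ecn <= 3:
        raise ValueError("DSCP is 6 bits, ECN is 2 bits")
    return (dscp << 2) | ecn

# Expedited Forwarding (EF, DSCP 46) -> ToS byte 0xb8
print(hex(tos_byte(46)))  # 0xb8
```

Upstream routers and switches read this byte to place the tagged traffic into the appropriate priority queue.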

Matrix Updates: Create and Print your (new & free) Custom Comparison


The content of this article has been added to the help section, additional new help topics are available here.

The enhanced “Custom Analysis” allows you to create and print a professional comparison report that documents your virtualization / cloud evaluation.


This functionality is completely self-service driven (and free), regardless of whether you are using it for an internal product evaluation (e.g. a purchase decision) or to provide consultancy to clients.

A sample output can be found HERE: Virtualization Matrix – Custom Report Sample

  • Use the default report to quickly print a basic analysis and the matrix content.
  • Use the custom report to create an evaluation that considers your priorities, allows you to change evaluations and add features, creates your individual matrix score and identifies potential “show-stoppers“.


The following is a step-by-step example for the creation of a custom report:

Initial Wizard

1. Click the “Custom Report” button


2. A simple wizard will welcome you and guide you through the options.

Select “Custom Report”
Note: The default report will be created using the default matrix values (no prioritization, changes or identification of “show-stoppers”), but it is a quick way of printing the matrix content with some additional analysis.


3. Select all categories that you want to include in the evaluation

Note: The more categories you select, the more features you will be asked to review.


4. Decide whether you want to prioritize the importance of features for your environment

Note: This is one of the core benefits of the customized report so we strongly recommend enabling this feature.
It will allow you to create an evaluation that reflects the requirements of your environment (allows you to specify “high-priority” features or exclude features that you don’t care about).



5. Decide whether you want to change default evaluations

Note: This will allow you to change evaluations from e.g. “supported” (green) to “unsupported” (red).
This feature is only recommended if you are an expert or spotted a mistake in the matrix



6. Choose whether you want to add features that are currently NOT listed in the matrix

  • This feature allows you to add ANY custom feature to the matrix. You will have to provide content and evaluations for these features yourself.
  • When enabled the wizard will automatically add a number of suggested features (e.g. “existing licenses”).
  • You can delete any of the suggested features and add “unlimited” custom features.


7. Review the window listing “next steps” and click “close” when done


The evaluation page will now be displayed.


The evaluation page is structured to guide the user through the simple (but potentially time-consuming) process with step-by-step sections (horizontal bars) and help comments.

Note: The following steps will customize the evaluation and create a custom matrix score for you.
Prioritizing features and changing evaluations will change the matrix score by adjusting it to your specific environment.
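As an illustration of how prioritization and changed evaluations can shift a score, here is a hypothetical weighted-scoring sketch. VirtualizationMatrix.com does not publish its scoring formula; the rating values, weights and show-stopper rule below are assumptions for illustration only:

```python
# Hypothetical sketch of how a prioritized matrix score could be derived.
# VirtualizationMatrix.com does not publish its formula; these ratings,
# weights and the show-stopper rule are illustrative assumptions.

RATING = {"green": 2, "amber": 1, "red": 0}    # assumed evaluation values
WEIGHT = {"high": 3, "normal": 1, "exclude": 0}

def matrix_score(features):
    """features: list of (name, evaluation, priority) tuples.
    Returns (score in percent, show-stoppers), where a show-stopper
    is assumed to be a 'red' evaluation on a high-priority feature."""
    earned = possible = 0
    show_stoppers = []
    for name, evaluation, priority in features:
        w = WEIGHT[priority]
        earned += w * RATING[evaluation]
        possible += w * RATING["green"]
        if priority == "high" and evaluation == "red":
            show_stoppers.append(name)
    pct = 100 * earned / possible if possible else 0
    return round(pct, 1), show_stoppers

score, stoppers = matrix_score([
    ("Live Migration", "green", "high"),
    ("Storage QoS", "amber", "normal"),
    ("GPU pass-through", "red", "high"),
    ("Legacy OS support", "red", "exclude"),  # excluded: no score impact
])
print(score, stoppers)  # 50.0 ['GPU pass-through']
```

Note how excluding a feature removes it from both the earned and the possible points, so only the features you actually care about shape the result.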

1. Fill in reference details and review menu options


    • You can review and change some of the options selected during the wizard setup
    • Fill in reference details (they will appear as a header in your final report; no information will be used or published by VirtualizationMatrix.com)

Note: You can ‘Save’ or ‘Load’ your progress locally any time to avoid data loss!
2. Add and evaluate additional (custom) features

Note: You can disable this section from the main menu


    • Several custom features are suggested, either evaluate them or delete them (“delete custom row”)
    • Provide content (e.g. yes/no) and evaluate the feature (red, amber, green)
    • Tick the “Done” box for a visual progress indication
    • Optionally add any additional feature using the “Add Custom Row” function

3. Prioritize all features and adjust evaluations

Depending on the number of categories and customization, this step can take time. Save your progress.


    • Specify the importance of each feature in your environment (use the drop-down box)
    • Disagree with an evaluation? Change it! (“Change Evaluation” must be enabled)
    • Enter comments where applicable
    • Tick the “Done” check-box for visual progress indication

4. All Done? Then Save and Click “Finish and Show Results” (top or bottom of the page)


5. Review your report and click “Print”


A sample output can be found HERE: Virtualization Matrix – Custom Report Sample

What’s new in RHEV 3.2 and Red Hat’s Cloud Infrastructure … and what will Red Hat’s “Niche” be …?


(The Virtualization Matrix has been updated with RHEV 3.2 and new cloud related content)

Following public announcements at its summit in June, Red Hat made significant updates to its cloud portfolio as well as its Enterprise Virtualization platform (RHEV) available in June and July 2013.

The two predominant improvements in RHEV were the full support for live storage migration and a new plugin framework that allows third parties to provide new features and actions directly into the RHEV management user interface (a full list of new features follows below).

Red Hat’s cloud portfolio had a major overhaul, announcing the new “Red Hat Cloud Infrastructure”.
This is a welcome move after Red Hat’s initial release of its CloudForms IaaS offering in 2011 (re-positioned in 2012 as a hybrid cloud offering) did not have the desired impact in the Enterprise (competing against a mature VMware vCloud offering and upcoming open-source-based cloud offerings).

Additionally, Red Hat had to react to OpenStack’s increasing influence and decide whether to compete against or benefit from the OpenStack momentum. Red Hat indicated early on its commitment to facilitate an OpenStack-based approach and announced a commercial Red Hat Enterprise Linux OpenStack Platform in June 2013.

Core to Red Hat’s cloud revamp was the acquisition of ManageIQ in January 2013, adding significant management capabilities to Red Hat’s cloud platform.

RHCI sets out to unify RHEV and the RHEL OpenStack Platform under a common CloudForms management layer

As a result, the new Red Hat Cloud Infrastructure (RHCI) sets out to unify RHEV and the RHEL OpenStack Platform under a common CloudForms management layer that features the enhanced ManageIQ-based capabilities, in order to provide an open hybrid cloud approach (the new structure is explained below).

VMware will leverage its maturity and market share in the enterprise, Microsoft will compete on price, Citrix targets Service Provider clouds …

What will Red Hat’s “niche” be?

At the same time (my personal view), Red Hat is facing a challenge of “identity” and “niche” in this market due to the recent dynamics in the industry. While VMware will continue to leverage its established market share and product maturity in the enterprise, Microsoft will aggressively compete on price with a quickly evolving “all-inclusive” virtualization and cloud portfolio around its strong Windows Server / System Center ecosystem. Even Citrix is successfully carving out a niche for its mobile, desktop and networking services on CloudPlatform, targeting Service Providers.

Red Hat is facing the challenge of an increasing ecosystem trying to capture mind share with the same “open cloud” message

While it will continue to successfully carry the message of the “open cloud”, Red Hat faces the challenge of a growing ecosystem trying to capture mind share with the same message, based on OpenStack and other emerging OSS cloud technologies. When it comes to OpenStack, Red Hat will compete against a large number of OEMs and ISVs developing management stacks around the same community-developed cloud foundation that Red Hat is now betting on.

The management capabilities added through the ManageIQ acquisition will be a massive help. As long as Red Hat can successfully integrate ManageIQ into the CloudForms management layer, eliminate overlapping functionality and merge OpenStack and RHEV capabilities seamlessly under this layer, it will be well positioned to compete in this space – after all, “openness” is Red Hat’s natural domain.

OK, so what’s new:

RHCI is a single-subscription offering that bundles and integrates the following products:

  • RHEV: Datacenter virtualization hypervisor and management for traditional Enterprise workloads
  • Red Hat OpenStack: Scalable, fault-tolerant platform for developing a managed private or public cloud for cloud-enabled workloads
  • Red Hat CloudForms: Cloud management and orchestration across multiple hypervisors and public cloud providers
    CloudForms 2.0 is essentially a rebranded version of ManageIQ EVM Version 5 (product documentation here)

If you are not entirely clear on the relevance of differentiating between “Enterprise” and “cloud-enabled” workloads, I can only suggest reading Massimo Re Ferre’s (as always) excellent article on the subject (even if the comparison there is OpenStack vs vCloud Director).

RHEV 3.2 – What’s New

While the two predominant improvements were the full support of live storage migration and a framework for third-party plugins, there were several additional enhancements:

  • Licensing:
    • New ‘Per Socket’ Licensing
      • Red Hat Enterprise Virtualization Premium: $1,498/socket-pair/year
      • Red Hat Enterprise Virtualization Standard: $998/socket-pair/year
    • Desktop Virtualization now included in RHEV subscription (no additional cost)
  • Internationalization: 
    • Localization of Web admin portal, user portal, documentation and landing page
    • Automatically detected from browser preferences
    • Supports manual selection to override browser default
  • Framework for 3rd party UI plugins for RHEV Manager
    • Enables third parties to integrate new features and actions directly into the Red Hat Enterprise Virtualization management user interface.
    • New menu items, panes, and dialog boxes allow users to access the new functionality the same way they use Red Hat Enterprise Virtualization’s native functionality
    • Early examples are: NetApp’s Virtual Storage Console (VSC), Symantec’s Veritas Cluster Server and HP’s Insight Control for Red Hat Enterprise Virtualization
  • Networking
    • New top level network management UI
    • Network ACLs: Apply Permissions / ACLs on logical networks, new role “NetworkUser”
    • Hot Switch: Switch the virtual/logical network on a running vNIC
    • Statistics: Enhanced statistics and configuration collection from guests; report all guest network interfaces; report IPv6 addresses in addition to IPv4
    • Support for the E1000 NIC in Windows VMs (in addition to VirtIO & RTL8139)
    • VDSM Hook for hot-plug events
  • Storage
    • Full support for live storage migration (from tech preview in 3.1)
    • Support migrating multiple disks from same virtual machine
    • Scan storage domain for new (orphaned) images, Import images into storage domain (API only)
    • Remove VM without deleting virtual disks
  • SLA/QoS
    • Support host CPU pass through (delivers optimal performance at the expense of migration)
    • Ability to define handling of Hyperthreads (count threads as cores)
    • Quota in the User Portal (all users see a breakdown of quota consumption in the self-service portal)
  • Operational
    • Delete Protection – Allow admin to set ‘do not delete’ on virtual machine, prevents accidental deletion through UI and API
  • VDI
    • UI / API support for Smartcard (CAC & PIV)
    • Dynamically change guest resolution by resizing client window
    • Ability to configure a proxy server for the SPICE protocol
    • Set per-device settings for console (VNC, Spice & RDP)
  • Platform
    • New reports including storage inventory, cloud provider utilization & VDI
    • Support for multiple Power Management (fencing) agents per host, Configure proxy selection for Power Management at DC/Cluster level


What’s new in XenServer 6.2 … and has Citrix finally given up on its hypervisor?


(The Virtualization Matrix has been updated with XenServer 6.2 and associated content)

I often hear that “XenServer is on its way out”, dead even … and a look at the 6.2 release seems to support this theory. Rather than a barrage of new features, the biggest part of the release notes is taken up by “Retired” and “Deprecated” features. Combine that with Citrix’s announcement to move XenServer (including XenCenter) to a fully open source model with the 6.2 release, essentially making all features available for free, and you could get the impression that Citrix has “given up” on XenServer.


And it wouldn’t come as a massive surprise given the increasing pressure from its open source cousin KVM, although many would argue that despite enormous interest and industry support, enterprise adoption of KVM-based technologies hasn’t been overwhelming so far.
Add Citrix’s symbiotic (but competitively challenging) relationship with Microsoft – adding pressure to help spearhead Hyper-V in this new world of multi-hypervisor environments – and the case seems closed.

So is XenServer really at a dead-end … and Citrix in trouble?

Personally, I think it is far from it. As a matter of fact, I think Citrix made a smart move, and a look behind the scenes supports this picture.
Companies like VMware currently seem to struggle with the diversification of their portfolio: spreading resources thin through the demand to constantly open new revenue streams, expanding into new areas (software-defined networking, analytics, end user computing, PaaS, hybrid and public services etc.), trying to develop functionality “in-house” that is already covered by a thriving 3rd-party ecosystem, and creating new competitive battle fronts on a daily basis – all while trying to maintain the stronghold of fundamental hypervisor management.

Citrix on the other hand seems to have found its “niche” and drives this with focus.

XenServer emerges as streamlined hypervisor for cloud services and desktop virtualization

In my view, XenServer emerges as a streamlined and somewhat re-positioned hypervisor that facilitates primarily two (connected) target use cases: a) a hypervisor to enable mobility and desktop virtualization, and b) a “commodity” hypervisor for its CloudPlatform suite.
Combine the two to give Citrix a future end-to-end capability to deliver hybrid cloud services around its core expertise and recent focus areas: enterprise mobility (XenMobile), application and desktop delivery (e.g. XenDesktop 7), collaboration (GoToxxx, ShareFile etc.) and the optimization of networking for those services (NetScaler / CloudBridge). What XenServer quite frankly isn’t trying to be anymore is a contender in the increasingly commoditized (general purpose) Enterprise hypervisor market.

Of course both Microsoft and VMware are after the same opportunities. While VMware successfully leverages its install base and the maturity of its hypervisor/cloud platform, Microsoft is in a great position to promote Hyper-V as a “free” hypervisor in the (still) primarily Windows-based End User Computing segment (alongside its much-improved Remote Desktop Services). But again, for both companies this particular combination is just one of dozens of focus areas they need to cover.

Citrix removed and deprecated what some might consider critical (Enterprise) capabilities

Yes, Citrix removed and deprecated what some might consider critical (Enterprise) capabilities: workload balancing, integrated VM backup, P2V capability, (deprecated) SCVMM support, StorageLink and the Distributed Virtual Switch. But look closer and you see that Citrix is not abandoning but “offloading” this functionality, actively promoting 3rd parties to provide it instead (SDN partners, VMTurbo, PlateSpin etc.). In addition, it aims to provide the automation functionality through its CloudPlatform management layer (rather than the base virtualization management layer).

Clever? A bold move, not without risk – but I believe yes.

But consider what Citrix has done: creating an extended collaborative development environment by making the Xen platform an official Linux Foundation project (a model that has proven successful), encouraging proliferation and adoption by making XenServer 6.2 available for free while maintaining a commercial version for the risk-averse clientele, and offloading the development of peripheral functionality to 3rd parties or providing it in its own higher-layer management stack – all while increasing its ability to focus development on its core strategic areas for the rapidly emerging Service Provider market …

I’ll let you be the judge …

So what’s changed with XenServer 6.2 (summary, details here):


  • XenServer 6.2 is available as a free open source virtualization platform, including XenCenter and features previously only available with fee-based offerings
  • A single commercial edition of XenServer 6.2 – replacing the previous XenServer Free, Advanced, Enterprise, and Platinum editions – provides (over the free version):
    • Citrix Premier 24×7 worldwide support
    • Commercially packaged and certified product
    • Simplified patching and updating via XenCenter
    • Indemnification and license protection
    • Citrix knowledgebase & My Account Portal
  •  Socket-based licensing

(While scalability improvements with other hypervisors have recently been received with less excitement in the industry, XenServer really had to catch up to remain competitive).

  • Reduction in the amount of traffic between a VM and the Control Domain (Dom0).
  • Automatic scaling of Dom0 memory and vCPUs based on physical memory and CPU capacity on the host.

resulting in:

  • ‘VMs per host’ increases to 500 VMs (Windows) and 650 (Linux) – from the previous limit of 150 VMs!


  • The XenServer 6.1.0 Performance and Monitoring Supplemental Pack is now fully integrated and extended for XenServer 6.2 (providing detailed monitoring of performance metrics, including CPU, memory, disk, network, C-state/P-state information, and storage; new system alerts can be seen in XenCenter and XenDesktop Director and optionally sent by e-mail)

Clone on boot:

  • This feature supports Machine Creation Services (MCS) which is shipped as part of XenDesktop. Clone on boot allows rapid deployment of hundreds of transient desktop images from a single source, with the images being automatically destroyed and their disk space freed on exit.
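Conceptually, clone-on-boot behaves like a copy-on-write overlay on a shared, read-only golden image, with the overlay discarded at shutdown. A minimal Python sketch of that idea (illustrative only, not XenServer’s actual implementation; the class and block names are mine):

```python
class GoldenImage:
    """Read-only source image that transient clones boot from."""
    def __init__(self, name, blocks):
        self.name, self.blocks = name, blocks

class TransientClone:
    """Sketch of a clone-on-boot desktop: a copy-on-write delta over a
    shared golden image, discarded (with its disk space) on shutdown.
    Conceptual only -- not XenServer's implementation."""
    def __init__(self, base):
        self.base = base
        self.delta = {}            # copy-on-write overlay

    def read(self, block):
        return self.delta.get(block, self.base.blocks[block])

    def write(self, block, data):
        self.delta[block] = data   # base image is never modified

    def shutdown(self):
        self.delta.clear()         # overlay space freed on exit

gold = GoldenImage("desktop-image", {0: "boot", 1: "apps"})
vm = TransientClone(gold)
vm.write(1, "user-changes")
print(vm.read(1), gold.blocks[1])  # user-changes apps
vm.shutdown()
print(vm.read(1))  # apps
```

Because every clone shares the same unmodified base blocks, hundreds of transient desktops can be spun up from one source without duplicating the full image on disk.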

Retired Features (not available anymore in XenServer 6.2):

  • Workload Balancing and associated functionality (e.g. power-consumption based consolidation)
  • XenServer plug-in for Microsoft’s System Center Operations Manager
  • Virtual Machine Protection and Recovery (VMPR)
  • Web Self Service
  • XenConvert (P2V)

Deprecated Features (no further development; removal in a future release):

  • Microsoft System Center Virtual Machine Manager (SCVMM) support
  • Integrated StorageLink (iSL)
  • Distributed Virtual Switch (vSwitch) Controller (DVSC). The Open vSwitch remains fully supported and developed