At Softcat, we love server virtualisation – it’s been great for our customers in terms of cost savings, agility and increased availability. It’s been great for our services and storage business as well. I do think sometimes that virtualisation is considered a panacea – if we virtualise, everything will be OK and the infrastructure will just take care of itself. Out of the box, however, that’s not strictly true.
The virtualisation vendors do a cracking job of protecting workloads from an outage of the immediate hardware on which those workloads are running. No longer is your application tied to a particular server; instead the virtualisation layer will ensure that the VM running that app restarts somewhere else with the minimum of disruption. Don’t get me wrong, this is fantastic, and loads better than what we had before. It’s just that the virtualisation layer is by default blissfully unaware of what is going on outside of the areas it controls.
I blogged about this before in connection with Neverfail’s vApp HA product, and I’m pleased to note that Symantec have released their version, ApplicationHA, which I think validates my view that the infrastructure needs to know what is happening at the application level and react accordingly.
Yesterday, I spent a couple of hours with APC looking at their InfraStruxure Central software. Rather than looking up into the application, this software looks down into the facilities element of your datacentre. This enables it to tell your chosen virtualisation platform what is going on, so that informed decisions can be made about the placement of virtual machines.
Take for example a rack where a UPS has died, or a power feed is unavailable. That might not immediately affect the workloads running there, but it might breach SLA in terms of redundancy, or increase risk to an unacceptable level. At a simpler level, it would mean that the VM layer could spread workloads out to less power-constrained racks, or those running cooler – something you would really have to do by guesswork today. Think about initiating DR arrangements before power fails entirely, or automatically when the UPS has to kick in. To take it a stage further, this could automate the movement of workloads around the globe according to power availability and, of course, power costs. ‘Follow the moon computing’ anyone?
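To make the idea concrete, here’s a minimal sketch of the kind of placement logic I’m describing – facilities telemetry driving VM placement. The rack names, fields and thresholds are all hypothetical, purely for illustration; this isn’t the API of InfraStruxure Central or any real product.

```python
# Hypothetical sketch: pick a migration target for VMs in an at-risk
# rack, using facilities telemetry. All names and numbers are invented.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Rack:
    name: str
    ups_healthy: bool          # UPS / power feed in good shape
    power_headroom_kw: float   # spare power capacity in the rack
    inlet_temp_c: float        # current inlet temperature

def needs_evacuation(rack: Rack) -> bool:
    """A rack with a failed UPS breaches our (example) redundancy SLA."""
    return not rack.ups_healthy

def best_target(racks: List[Rack], required_kw: float) -> Optional[Rack]:
    """Prefer healthy racks with the most power headroom, then the coolest."""
    candidates = [r for r in racks
                  if r.ups_healthy and r.power_headroom_kw >= required_kw]
    if not candidates:
        return None
    return max(candidates,
               key=lambda r: (r.power_headroom_kw, -r.inlet_temp_c))

racks = [
    Rack("rack-a", ups_healthy=False, power_headroom_kw=2.0, inlet_temp_c=24.0),
    Rack("rack-b", ups_healthy=True,  power_headroom_kw=5.5, inlet_temp_c=22.0),
    Rack("rack-c", ups_healthy=True,  power_headroom_kw=1.0, inlet_temp_c=20.0),
]

at_risk = [r for r in racks if needs_evacuation(r)]
target = best_target(racks, required_kw=1.5)
print(at_risk[0].name, "->", target.name)  # rack-a -> rack-b
```

In practice the telemetry would come from the facilities layer and the actual migration would be handed off to something like DRS or a PowerShell/vCenter workflow – the point is simply that the decision uses power and thermal data the hypervisor doesn’t normally see.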
For me this could elevate the status of System Center or vCenter to the controller for the whole of your infrastructure – taking a feed from apps, infrastructure and facilities and making intelligent decisions about the placement and movement of workloads, without any human involvement beyond defining the parameters at setup time.
Sounds ominously like a cloud, doesn’t it?