Archive for the ‘Virtualisation’ Category

Is the midmarket better than enterprise at ‘Private Cloud’?

September 13, 2010

I heard a great story the other day about a large organisation that had embraced virtualisation to a fairly high degree. As part of this, they managed to reduce the time taken to deploy a new workload from three months to six weeks for a virtual machine. Of course, it can be done significantly more quickly than that, but their processes put the brakes on what the technology could deliver.

What do I mean by ‘Private Cloud’? I guess I am talking about what HP refer to as ‘Converged Infrastructure’: a heavily virtualised environment in which resources can be provisioned and reallocated more or less dynamically to meet the needs of the business, and in which the traditional silos of storage, networking and compute are managed together to ensure that the appropriate resources are available to a given workload.

Whilst midmarket organisations don’t perhaps have access to the most cutting-edge (and most expensive!) technologies, it feels to me as if smaller companies have embraced virtualisation to a much deeper level than larger ones. That, I think, puts them closer to the likely future of computing than the stuck-in-the-mud big businesses that can’t change so quickly – and so can’t make the most of what the technology will enable them to do.

So why are SMEs better at this than the bigger boys?

– Typically, SMEs seem more likely to go for a 100% virtualised estate – or at the very least, a ‘virtualise by default’ policy.

– The IT teams within SMEs are often multi-disciplinary, which means that the traditional silos of storage, networking and servers can be overcome rapidly.

– Application owners are frequently also infrastructure owners – meaning that, by default, the infrastructure is designed to suit the needs of the business.

– The layers of red tape that can unfortunately exist within larger organisations (often with good reason) frequently don’t have such an impact in a smaller, more agile business.

What do you think? How far down the line towards a ‘private cloud’ style architecture has your company got?


Technology@Work Takeaways

May 5, 2010

It’s nearly a week since I got back from HP’s Technology at Work event in Frankfurt, and I have just about caught up, so I thought it was time to summarise my experience. First off, I very much enjoyed it – the organisation was very slick, Frankfurt was friendly and the content was superb. Now that HP have cancelled their tablet plans (or at least delayed them pending the integration of Palm), I feel a little less guilty about using an iPad throughout the event…

My first comment would be that getting involved in the social media scene at events like this is a good idea. Following #HPTAW and tweeting with folks like @StorageGuy and @JezatHP (follow both of these guys for good insider info on HP Storage and Networking respectively) made me feel really engaged with the event. It’s a really useful back-channel for commenting on what is going on and pointing people towards anything interesting. Needless to say, I was delighted to end up as Mayor of HP Technology at Work on Foursquare.

Now, on to the technology… My primary takeaway was the emphasis from Gartner (who were out in force) on a simple statistic: in their research, 70% of IT budget is spent on ‘keeping the lights on’ and only 30% on innovation – finding technology solutions to move the business forward. That’s pretty scary, and I would welcome feedback from our customers as to whether that matches their experience; it feels about right to me. Almost every keynote referred back to this statistic, and HP are focusing their efforts on remedying the situation. A lot of the conversations were about reducing cost: power and cooling, management and so on. An interesting point, though, was that CIOs with responsibility for the power bills refresh their infrastructure more often, deriving ROI from the refresh – and in the process giving their organisation access to the latest technology and making the business more agile.

I’m sure we all hope we are on the way out of recession, and certainly the feedback from Gartner was that organisations are investing. In fact, the focus appears to be moving away from out-and-out cost-cutting, although budget growth will remain low or nonexistent. The emphasis now is on balancing value with risk and innovation – the goal is to enable the business to be flexible and adapt to change. The legacy of the economic situation will leave organisations in a state of flux: M&A, emerging markets, volatility and the need to advance into and retreat from areas of the business will mean that IT must be able to react to and enable this constant change. No wonder virtualisation, cloud and Web 2.0 are the top three technologies on CIOs’ watch-lists. HP are working hard on the concept of ‘converged infrastructure’ (separate post to follow) to provide a cohesive platform from which to deliver services that meet the needs of your organisation.

End-user computing was another area of focus, and one which has interested me for some time. The story echoed my post on Next Generation Desktops, but suffice it to say that the computing environment of tomorrow will be a little different. Virtualisation of the desktop in its various forms will become commonplace, and management by separation of the various layers (hardware, OS, applications, profile and so on) will be the norm. At the same time, IT departments will need to get used to delivering mobile, lightweight apps to laptops, netbooks, smartphones and (dare I say it) the iPad.

Lastly, there was a fair amount on the networking front. HP contend that the acquisition of 3Com gives them, for the first time, an end-to-end networking portfolio from the core to the edge, including security. Networking is key to the Converged Infrastructure message, so keep an eye on this (we will announce an event soon). There was also a lot of talk about unified communications, in particular the integration of Microsoft Exchange and OCS with ProCurve and ProLiant.

Whilst I don’t recall Cisco being mentioned by name throughout the entire event, it’s fairly evident that they have a fight on their hands here!

Great news on VDI licensing

March 19, 2010

Yesterday, Microsoft made an important announcement regarding virtual desktops. Currently, if you want to use any form of Virtual Desktop Infrastructure, you need to pay an additional fee for a Virtual Enterprise Centralized Desktop (VECD) licence. The headline news is that the replacement for VECD, Virtual Desktop Access (VDA), will be a Software Assurance benefit as of 1st July – so effectively free of charge to customers who maintain Software Assurance on their desktop operating system licences. If you don’t have SA, VDA will be slightly cheaper than VECD.

This is really good news for anyone operating in the desktop virtualisation space, as it removes one of the blockers to adoption of this technology. It’s fantastic news for our customers, as it means they can make a decision on which technology to use independently of the licensing costs. There are now seriously compelling reasons for ensuring that you have SA on your desktop operating systems, with VDA included and access to MDOP for a pittance – which gives you App-V for application virtualisation, among other things. It looks like a very cost-effective way of gaining access to some of the most exciting recent technology developments around desktop delivery.

It is also interesting to see Microsoft taking the fight to VMware’s PCoIP with their forthcoming RemoteFX 3D graphics acceleration technology, which will be extended by the impressive Citrix HDX. Microsoft are certainly serious about the partnership with Citrix, and the two are equally serious in their desire to compete with VMware for the virtual desktop.

Whichever vendor you favour, the real winners here are current and potential customers of VDI technology. Not only is your choice increased, but the licensing terms will be considerably more favourable come 1st July!

Coming soon: client hypervisors

February 8, 2010

I’ve usually got a technology area or two to bang on about as my ‘next big thing’. My current obsession is the advent, theoretically within the next six months, of ‘client hypervisors’. As I mentioned in my earlier post about ‘Next Generation Desktops’, we’re seeing a lot of interest from people looking to change the way they ‘do’ desktops, and it seems to me that client hypervisors will be an interesting option.

The basic premise is a bare-metal hypervisor installed on the local desktop or laptop, enabling one or more VMs to run locally on that hardware. Management tools should allow the IT department to deploy, secure, back up and maintain these VMs from a central point. At this stage it appears that the primary management route will be through the major VDI platforms, so this will effectively be ‘offline VDI’. Of course, the main benefit of a bare-metal, Type 1 hypervisor over VMware Player or Microsoft Virtual PC is performance, through direct access to the hardware.

I can see a few benefits of this model to IT departments:

Ease of management and deployment: truly driver-independent images (the image only needs to support the hypervisor), and a backup copy of the VM, possibly synchronised at regular intervals, can be kept and re-issued in the event of hardware failure or VM corruption. If you’re going down the thin-client route with VMware View or Citrix, you’ll be able to manage your offline desktops through the same tool.

Contractors: Assuming your contractors have appropriate hardware (hey, you could even demand this!), you can issue a secure, time-limited VM for them to use whilst doing work on your behalf.

Consumerisation of IT: My personal favourite – and something we are talking about internally at Softcat: give your users a laptop allowance, let them buy whatever suits them within certain parameters, and deliver them a secure, managed VM on which to do their work. IT no longer have any hardware to worry about, and hopefully you have a happier user community – who, perhaps, take more care of ‘their’ equipment.

I know that Intel, VMware and Citrix are hard at work on this stuff, and my understanding is that we should be seeing some commercial availability around the middle of 2010. Exciting stuff? I think it is…

Neverfail vAppHA

January 6, 2010

I was fortunate enough, just before Christmas, to be invited to play with a new product coming soon from Neverfail, vAppHA. I’ve been hearing about this for a while, and it was great to see it actually working.

Neverfail have had a great partnership with VMware for some time (VMware’s vCenter Heartbeat is actually Neverfail for vCenter!), and vAppHA looks like it will extend this. We’ve been deploying VMware for about four years now, typically as much to improve availability as to consolidate physical servers. We’re great fans of the built-in high-availability features within VMware.

However, as good as VMware HA, vMotion and FT are, they don’t have very much in the way of application awareness. This is where vAppHA steps in: it uses Neverfail’s application-aware technology and uses the information garnered from that monitoring to trigger VMware events – making HA, for example, a much better bet for application availability.

Envisage, for example, a SQL Server instance running in a virtual machine. If a service stops, VMware will unfortunately be blissfully unaware. vAppHA, however, recognises this and will then trigger an event of your choosing – for example, the restart of that virtual machine on another host. This provides a layer of ‘self-healing’ within your virtual infrastructure, right up to the application layer, that didn’t exist before.
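To make the pattern concrete, here is a minimal, purely illustrative sketch of the kind of monitor-and-react loop a product like vAppHA automates – this is not vAppHA’s own code or API. It polls the SQL Server port from outside the guest and, if the service stops responding, asks vCenter to reset the VM; where the VM ends up running afterwards is left to HA/DRS, so the ‘restart on another host’ step is simplified here. It uses VMware’s Python SDK (pyVmomi), and the vCenter address, credentials and VM name are hypothetical placeholders.

# Illustrative sketch only – not vAppHA. Poll a guest service and, if it stops
# responding, reset the VM via the vSphere API. All names below are placeholders.
import socket
import time

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.local"   # hypothetical vCenter server
VM_NAME = "SQL01"                   # hypothetical SQL Server VM
SQL_HOST = "sql01.example.local"    # the guest's network name
SQL_PORT = 1433                     # default SQL Server TCP port


def service_is_up(host, port, timeout=5):
    """Crude application-level check: can we open a TCP connection to SQL Server?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def find_vm(content, name):
    """Search the vCenter inventory for a virtual machine by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    return next((vm for vm in view.view if vm.name == name), None)


# Note: depending on your environment you may need to handle SSL certificate
# verification when connecting to vCenter.
si = SmartConnect(host=VCENTER, user="administrator", pwd="********")
try:
    content = si.RetrieveContent()
    while True:
        if not service_is_up(SQL_HOST, SQL_PORT):
            vm = find_vm(content, VM_NAME)
            if vm is not None:
                # Hard reset of the VM; placement afterwards is up to HA/DRS.
                vm.ResetVM_Task()
        time.sleep(60)  # poll once a minute
finally:
    Disconnect(si)

The point of a product like vAppHA, of course, is that you get this logic – and far more sophisticated application checks – packaged, supported and integrated with vCenter, rather than scripted by hand.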

Coupled with an intelligent storage layer such as HP LeftHand, which enables you to stretch your VMware cluster across two sites on the same campus, this could be even more useful. It’s certainly something we’re looking forward to having in our kit-bag once it’s released to the world. For what it’s worth, the pricing looks pretty fair too.

UPDATE (1/3/2010): Free trial now available here: http://www.neverfailgroup.com/virtualization/vapphatrial.html