Archive for the ‘Virtualisation’ Category

Lunch with the CEO of VMware…

June 27, 2013

Had lunch the other day with a chap called Pat Gelsinger – he’s the CEO of VMware. Pleasant chap – quite understated, not your usual flamboyant leader, but a real techy underneath. He spent years at Intel, where he ran the design groups for most of the major processor developments, so he kinda knows his stuff. Unfortunately we only had an hour with him – we were scheduled to have two, but apparently he got stuck in traffic caused by the celebrations for the 60th anniversary of the Queen’s coronation. I did enjoy winding the VMware guys up that it had only been in the diary for 60 years… 🙂


Anyway – there were just a couple of things on their strategy that I thought were worth sharing, which some or all of you may be interested in:


Cloud:

VMware recently announced vCloud Hybrid Service (vCHS – a public-ish cloud service based, no surprises, on VMware). See http://www.theregister.co.uk/2013/03/13/vmware_vcloud_hybrid_cloud_service/ for a bit more detail. They recognise that they can’t compete head-on with Amazon etc, but they think there is a real opportunity here for VMware partners (this will be a channel offering), for the following reasons:


1: It is entirely compatible with on-premise workloads that run on VMware. Yes, Amazon etc will import workloads that are currently .vmdk, but transferring from VMware on-premise to VMware cloud (or partner cloud #cloudsoftcat) brings with it many attributes such as security and networking configurations.


2: Corporate workloads are certified to run on VMware in a way that they aren’t with Amazon. The big example here is that VMware have an agreement with SAP – SAP is fully supported on VMware, on or off prem. In fact SAP HANA (big data stuff) on vCHS is the only way to consume HANA as a service for production.


3: VMware had a load of tech that they bought that formed a Platform as a Service package called Cloud Foundry. Needless to say this stack runs on VMware… on or off prem. This means that they will have the only platform where a customer can develop and test an app ‘in the cloud’ and then move it onto their own servers for production (for security, data residency reasons, whatever) – and vice versa. The Cloud Foundry stuff has been spun out (with some other bits) into a company called Pivotal, run by VMware’s previous CEO Paul Maritz. (Even geekier side-note: you know the term Wintel? Well, that was kinda driven by Paul Maritz and Pat Gelsinger. Paul was Mr Win (he ran the Windows platform business at MS for years) and Pat was Mr Tel (Intel chips). Looks like the stack has moved up a little!)


Positioning:

Pat had just come from meeting the CIO at a large media company. The CIO surprised him by saying that this was the first time anyone at VMware had tried to meet him. VMware recognise that they have been selling infrastructure stuff to the head of infrastructure, and that the stuff they have now is potentially more strategic. They are really trying to train their sales guys in ‘value selling’ and get them confident in front of CIOs. They are making a real effort to make their literature etc more business-centric and less tech-centric. I’m hoping they will be sharing this stuff with the channel, as I am sure we can use it as we gradually creep up our customers’ organisations…

VMware are well aware that as servers get bigger, customers will need fewer of them, so to keep revenues going in the right direction they have to be able to sell more than just vSphere – hence their three focus areas: Hybrid Cloud, Software Defined Datacentre and End User Computing.


Networking:

Who’s the largest networking vendor by port count? If you count virtual ports, it’s VMware – 60 million (virtual) ports worldwide. They consider themselves the largest networking vendor no one talks about. They believe there is a real opportunity with virtual networking to increase efficiency and reduce latency – if a database server has to talk to a middleware server and they are on the same host, the traffic never needs to leave the host, rather than going on a big loop (VM > virtual NIC > physical NIC > rack/blade chassis networking > top-of-rack switch > core switch etc, and back). They recently bought a company called Nicira, who apparently have the most advanced distributed control plane – and they already had, in vCNS, the most advanced network insertion. These will combine at VMworld into a single product called NSX. Interestingly they are looking at doing something similar for firewalls, which they expect to be a big channel opportunity…
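To put that big loop into a toy model – and I stress this is purely illustrative, with invented paths and an invented per-hop cost, nothing to do with any real VMware product – the hop-count argument looks like this:

```python
# Toy model of the hop-count argument for virtual networking.
# The paths and per-hop cost below are invented for illustration only.

SAME_HOST_PATH = ["vm-a", "virtual-nic", "virtual-switch", "virtual-nic", "vm-b"]

CROSS_HOST_PATH = [
    "vm-a", "virtual-nic", "physical-nic", "chassis-switch",
    "top-of-rack-switch", "core-switch", "top-of-rack-switch",
    "chassis-switch", "physical-nic", "virtual-nic", "vm-b",
]

def estimated_latency_us(path, per_hop_us=5.0):
    """Very crude latency estimate: a fixed cost per hop traversed."""
    return per_hop_us * (len(path) - 1)

print(f"Same host:  {estimated_latency_us(SAME_HOST_PATH):.0f} us")
print(f"Cross host: {estimated_latency_us(CROSS_HOST_PATH):.0f} us")
```

Even with made-up numbers, keeping the traffic on the host cuts the path by more than half – that’s the efficiency play in a nutshell.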

Needless to say this is bringing them into some ‘interesting’ conversations with Cisco, but they have agreed to work together, as they feel the opportunity is bigger that way. VMware need hooks into the hardware, in the same way they do with server processors etc…

I hope that was interesting – please feel free to share your thoughts in the comments!


Helping HP to deliver their new baby

December 7, 2012

I’m sure, if you are reading this post, you have heard the happy news about a new baby on the way. No, not Will and Kate’s impending prince or princess – rather HP’s announcement of two new versions of their 3PAR StoreServ array aimed at the mid-market, priced from €20k for the dual-controller version and €30k for the quad-controller.

We were absolutely delighted to be part of the launch, over in Frankfurt just before HP’s annual Discover tech show. I joined in a customer panel at the official unveiling for the press the day before Discover started. I talked about our experiences using 3PAR as the storage platform for our CloudSoftcat infrastructure-as-a-service platform, which has rapidly scaled to more than 200 customers, including all of our own IT. I also presented a breakout session entitled ‘From Zero to Cloud’ later on in the event.

It’s great news that customers with serious IT needs but less serious budgets can now get access to this technology, which is in use at four of the top five largest hosting providers. Of course if you deploy 3PAR in your primary infrastructure, but don’t want to spend CAPEX on a DR site, you could replicate to our 3PAR-powered CloudSoftcat platform.

If you are interested in hearing more about Softcat and HP at Discover 2012, I have collected together the following links:

Main launch site for HP 3PAR StoreServ: http://h17007.www1.hp.com/us/en/storage/nextera/index.aspx

Launch presentation from Dave Donatelli and David Scott of HP, and customer panel featuring yours truly: http://h17007.www1.hp.com/us/en/storage/nextera/live.aspx

Me being interviewed live on http://siliconangle.tv with John Furrier and David Vellante of Wikibon:
http://siliconangle.com/blog/2012/12/06/softcats-sam-routledge-gives-the-service-provider-some-cloud-perspective/

My podcast recorded with the legendary @HPStorageGuy Calvin Zito on the floor at Discover: http://h30507.www3.hp.com/t5/Around-the-Storage-Block-Blog/Softcat-utilizing-HP-3PAR-to-provide-managed-services/ba-p/128777

The Softcat tech team have a demo unit of the new HP 3PAR StoreServ 7000 on order, so if you want to learn more, give us a shout!

My predictions for 2012…

January 3, 2012

I normally do this the other way round, but this time I decided to publish on the Softcat.com news site first – but for those of you who follow me here (thank you!), here are my predictions for IT, plus some suggestions for how to address and take advantage of these trends. Happy New Year!

Happy New Year from all at Softcat! I thought we would start the year off with a few predictions for what we expect to happen over the course of the year. Rather than this being simply some predictions in isolation, we’ve made a few suggestions as to how you might be able to respond to, or take advantage of, these trends:

Desktop – if you haven’t already, you will come under increasing pressure to migrate to Windows 7. We expect IT to need to reduce desktop management costs, while maintaining or improving service levels to the business. Windows 8 is looking likely to ship towards the end of the year – it’s looking good so far and could be interesting when your PC, phone and tablet run basically the same OS. We think Windows 8 will be the last revision of the ‘desktop’ operating system as we know it – Windows won’t go away, but will increasingly be a platform across a huge range of devices.

What to do?

Get your desktop house in order – look at management toolsets such as System Center to reduce the pain of migrating to Windows 7 and of ongoing maintenance, and reconsider server-based computing (VDI, Terminal Services etc) where this makes sense.

BYOD – more and more people within your organisation will demand, and even expect, to be able to use their own devices – particularly phones and tablets – for work purposes. You need to set out your strategy here, as it is something that you can harness to improve productivity, employee satisfaction and business continuity.

What to do?

Evaluate the risk inherent in having data on these devices, versus the demand and potential gains. Technologies such as Good can encrypt data on those endpoints, or you can use centralised computing to deliver Windows apps to those devices without any data ending up on the device. Over time, you may want to consider rearchitecting your line-of-business applications, or at least having a transport mechanism that enables delivery to handheld devices.

Communications – we think there will be two main trends in communications. The first will be mobility-related – replacing desktop handsets by running telephony clients on mobile devices, and increasingly implementing VoIP over GSM or WiFi. The second will be an increase in the use of casual, desktop-based video conferencing and application sharing. Both of these trends will improve collaboration across geographical boundaries and bring disparate work-forces closer together.

What to do?

We think the increasing interest in Microsoft Lync will drive both of these areas, so you could do worse than start there, especially if you have an Enterprise Agreement which gives you at least some Lync entitlement. There’s a healthy ecosystem of products to improve and extend the capabilities of Lync, as well.

Datacentre – virtualisation will continue to be the prevailing trend. If you haven’t, you really should – and if you have, you should probably do more! We’ll see more management technologies, both from the virtualisation vendors and from the ecosystem, which will help to push up the percentage of applications you virtualise, as well as to migrate towards the nirvana of a ‘private cloud’. No doubt your demand for storage will accelerate as well, so you will need to be clever about the way in which you use disk to keep on top of costs in that area while maintaining the performance your applications need. Oh, and you’ll need to improve your ability to recover in the event of a disaster, too.

What to do?

Review your strategy for virtualisation – are you trading physical server sprawl for the virtual equivalent? Would tighter management enable you to drive up utilisation? Review your storage strategy as well – can you make use of deduplication, for example – or employ a combination of SSDs for performance and cheaper, slower disk for data? The combination of clever use of storage and increased virtualisation can make it far easier – and cheaper – to implement a robust DR plan.
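To illustrate the tiering idea mentioned above – and this is a toy sketch only, with an invented threshold; real vendors’ tiering algorithms are far more sophisticated – hot blocks get promoted to SSD and cold ones demoted to cheaper, slower disk:

```python
# Illustrative sketch of automated data tiering - not any vendor's algorithm.
# Blocks accessed more often than a (made-up) threshold move to the SSD tier.

from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    accesses_last_hour: int
    tier: str = "nearline"  # "ssd" or "nearline"

def retier(blocks, hot_threshold=100):
    """Promote hot blocks to SSD, demote cold ones to nearline disk."""
    for b in blocks:
        b.tier = "ssd" if b.accesses_last_hour >= hot_threshold else "nearline"
    return blocks

blocks = [Block(1, 500), Block(2, 3), Block(3, 120)]
for b in retier(blocks):
    print(b.block_id, b.tier)
```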

Cloud – this is the year that we will see ‘Cloud’ descend into Gartner’s ‘trough of disillusionment’. It wouldn’t surprise us if we saw the first serious security breach in the cloud space, and doubtless another large-scale outage. Having said that, losing some of the hype will make Cloud a more realistic proposition for businesses when done properly. We think that localised, friendly cloud providers will be increasingly important – credit-card-cloud will be reserved for development and test.

What to do?

Don’t write cloud off – but look to providers where you have or can build a relationship and where doing your own due diligence is possible. Look to extend the capabilities of your infrastructure for capacity or DR by partnering with a provider of infrastructure as a service, and if your business needs a new application, consider cloud options against on-premise.

Security – Firstly, 2012 is the year we expect security technologies to start to catch up with virtualisation. Up until now the available technologies have been somewhat immature, and the new model of ‘introspection’ via APIs will improve this. Secondly, this will be the year the business tells you to stop blocking Facebook, YouTube and Twitter, as people are now genuinely using them as business tools. You will no longer be able to mitigate the risks by turning them off! We’ll also see more focus than ever on keeping data safe – something which is increasingly difficult in this mobile world.

What to do?

Firstly, if your legislative or threat landscape demands it, look seriously at Intrusion Prevention technologies which will work across your virtual environment. Secondly, make sure that your Web Security Policy is up-to-date, and supplement it with a powerful filtering technology that enables control down to application-level within social networking sites, rather than taking a simple allow-or-deny approach. Thirdly, gain an understanding of the way in which data moves around your organisation and make sure your access policy limits data misuse, especially through social media and email. Encryption of remote access is a sensible step, and you may, if your position requires it, like to give consideration to a data-loss-prevention tool.

Is your IT a customer satisfaction issue?

October 12, 2011

I’ve been meaning to put some thoughts down about this for a while. The catalyst for actually getting round to it was being asked to deliver a presentation on Softcat’s unique brand of employee and customer satisfaction at the recent Call Centre Focus conference. We were asked to present through our partners QGate, who support and develop our CRM platform. They know how seriously we take employee satisfaction and customer satisfaction, and felt we had something to share.

I spoke for most of the time about the crazy place that Softcat is – how we give everybody breakfast, arrange for their ironing to be done, and generally have a great time (whilst working really hard of course) – and how because our employees feel valued, empowered and rewarded they will absolutely go the extra mile for our customers.

One point I did make relates to a conversation I have been having more and more with IT guys recently. Many IT teams are concerned that lack of recent investment in infrastructure, coupled with the growth of the importance of, and demands on, IT, has led to systems that are under-performing and not delivering the user experience which their internal customers demand. I am convinced that this has a direct effect on customer service, in that customers don’t get the information or response that they need quickly enough.

Equally importantly, I think this has a serious impact on employee satisfaction; if your people can’t get their systems working quickly enough to do their job, they are going to be less happy in their work. Doubtless this will affect their usual sunny disposition in dealing with customers! We’re currently making a huge investment in our own infrastructure to ensure that we can deliver to our staff and our customers the performance that they need – I’ll tell you more about the details of what we are doing in a future post.

In the meantime, a few things you might like to think about relating to performance and user experience (by no means an exhaustive list, and I would be very interested in your feedback in the comments):

  • Look into virtualisation as a cost-effective (and cost-avoiding!) way of upgrading ageing hardware and of allowing you to move applications around your infrastructure to free up space
  • Look into storage – SSD is becoming more and more prevalent, and, needless to say, can assist where your applications are constrained by I/O. Some storage vendors offer data tiering, which means that ‘hot’ data will automatically be moved to faster disk…
  • Think about how your applications are delivered – a ‘chatty’ application might be better deployed on Citrix or the equivalent, close to the servers on which it depends, rather than on PCs where it has to pull a lot of data over the network
  • Think about your network – is it time to look at 10GbE in the server room?
  • Consider your cloud options – you might not be ready to go full-bore into the public cloud, but pushing out some of your less critical or less security-dependent applications might free up some space for the rest – and get you used to cloud for the future.
Any other suggestions?

Total Value of Ownership and ‘Martini Computing’

June 30, 2011

After the event I hosted with HP’s CTO, I was fortunate to be invited to an event in London with the President and CEO of Citrix, Mark Templeton. I was quite impressed with the guy, if I am honest – a really down-to-earth chap, which you don’t expect at the higher echelons of a large organisation. After the main event, which was a presentation and a Q&A, he was more than happy to have a chat over a cup of coffee, which was enlightening.

Some of the stuff he talked about resonated with me, as it was similar to the way we try to shape our own business. He was saying that while they are a public company and have a duty to their shareholders, they are more interested in building something interesting and powerful which will deliver benefits to their customers – whilst having fun – than focusing short-term on the needs of those shareholders. We try to do the same – focus on employee satisfaction and customer satisfaction, in the belief that the former drives the latter. If we treat our customers right, the rest will follow – I’m sure that’s the right standpoint. We’re privately owned, but I’m sure Mark’s attitude will deliver for his shareholders in the long term.

Anyway, back to business… Mark was talking about the way in which IT calculates the return on spending money – or rather, how the business calculates the return on money spent on IT. This is generally based on TCO – total cost of ownership – and whether a lower TCO will generate a return on the investment over a given time period.

The suggestion was that in this day and age, we should be looking at Total Value of Ownership – TVO – instead. This means concentrating more on the benefits an IT solution will deliver to the business rather than the pure costs.

I guess this is particularly appropriate in the virtual desktop space where Citrix play – it’s difficult to justify a VDI project on capital grounds against the cost of simply replacing PCs, but the benefits come in terms of, yes, lowered operational expenditure, but also flexibility of workspace, device-independent computing (letting users choose their access device), home working, employee experience… We’re likely to roll out VDI for our own users, and, as an employer which likes to be nice to its staff, we see such benefits as really important. I look forward to the day when all users can access IT systems securely using the device of their choice, wherever and whenever they want. Maybe we should call it Martini computing….
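To put some numbers on the difference – and I should stress these are entirely invented figures, just to show the shape of the calculation – a TCO-only view and a TVO view of a hypothetical VDI project might look like this:

```python
# Hypothetical worked example of TCO versus TVO for a VDI project.
# Every number below is invented purely for illustration.

CAPEX_VDI = 300_000            # upfront cost of the VDI infrastructure
CAPEX_PC_REFRESH = 200_000     # cost of simply replacing the ageing PCs
OPEX_SAVING_PER_YEAR = 25_000  # reduced desktop management cost with VDI
VALUE_PER_YEAR = 60_000        # flexibility, home working, employee experience...
YEARS = 3

# Pure TCO view: VDI still looks more expensive than a straight PC refresh.
extra_cost = CAPEX_VDI - CAPEX_PC_REFRESH - OPEX_SAVING_PER_YEAR * YEARS

# TVO view: count the business value delivered as well as the costs.
tvo_balance = extra_cost - VALUE_PER_YEAR * YEARS

print(f"TCO view over {YEARS} years: VDI costs {extra_cost:+,} vs a refresh")
print(f"TVO view over {YEARS} years: net position {tvo_balance:+,}")
```

On pure cost the project looks marginal; once the value side is counted it’s a clear win – which is exactly Mark’s point.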

VM as a transport layer

May 4, 2011

At Softcat, we love server virtualisation – it’s been great for our customers in terms of cost savings, agility and increased availability. It’s been great for our services and storage business as well. I do think sometimes that virtualisation is considered a panacea – if we virtualise, everything will be OK and the infrastructure will just take care of itself. Out of the box, however, that’s not strictly true.

The virtualisation vendors do a cracking job of protecting workloads from an outage of the immediate hardware on which those workloads are running. No longer is your application tied to a particular server; instead the virtualisation layer will ensure that the VM running that app restarts somewhere else with the minimum of disruption. Don’t get me wrong, this is fantastic, and loads better than what we had before. It’s just that the virtualisation layer is, by default, blissfully unaware of what is going on outside of the areas it controls.

I blogged about this before in connection with Neverfail’s vApp HA product, and I’m pleased to note that Symantec have released their version, ApplicationHA, which I think validates my view that the infrastructure needs to know what is happening at the application level and react accordingly.
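As a sketch of the concept – and only the concept: this is not how Neverfail or Symantec actually implement it, and both functions below are hypothetical stand-ins, simulated here so the demo runs, for whatever your monitoring agent and virtualisation API really provide:

```python
# Minimal sketch of application-aware HA. The hypervisor sees a healthy VM;
# only an application-level probe knows the app inside it has died.

import random
import time

def check_app_health(vm_name: str) -> bool:
    # Simulated probe; in reality this would hit a health endpoint or
    # query a service inside the guest.
    return random.random() > 0.2

def restart_vm(vm_name: str) -> None:
    # Simulated action; in reality this would call into the hypervisor.
    print(f"Restarting {vm_name}: app stopped responding")

def watch(vm_name: str, strikes_allowed: int = 3, interval_s: float = 0.1) -> None:
    """Restart the VM only after several consecutive failed app probes."""
    strikes = 0
    for _ in range(50):  # bounded loop so the demo terminates
        if check_app_health(vm_name):
            strikes = 0
        else:
            strikes += 1
            if strikes >= strikes_allowed:
                restart_vm(vm_name)
                strikes = 0
        time.sleep(interval_s)

watch("db-server-01")
```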

Yesterday, I spent a couple of hours with APC looking at their InfraStruxure Central software. Rather than looking up into the application, this software looks down into the facilities element of your datacentre. This enables it to inform your virtualisation platform of choice about what is going on, so that decisions can be made on the placement of virtual machines.

Take for example a rack where a UPS has died, or a power feed is unavailable. That might not immediately affect the workloads running there, but it might breach an SLA in terms of redundancy, or increase risk to an unacceptable level. At a simpler level, it would mean that the VM layer could spread workloads out to less power-constrained racks, or those running cooler – something you would really have to do by guesswork today. Think about initiating DR arrangements before power fails entirely, or automatically when the UPS has to kick in. To take it a stage further, this could automate the movement of workloads around the globe according to power availability and, of course, power costs. ‘Follow the moon’ computing, anyone?
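In toy form, the sort of decision I’m describing might look like the sketch below – all the fields, weights and numbers are invented for illustration, and this is emphatically not APC’s or VMware’s actual logic:

```python
# Toy sketch of facilities-aware VM placement. Fields and weights invented.

from dataclasses import dataclass

@dataclass
class Rack:
    name: str
    ups_redundant: bool      # is the rack still on a redundant power feed?
    inlet_temp_c: float      # current inlet temperature
    power_cost_kwh: float    # local electricity cost

def placement_score(rack: Rack) -> float:
    """Lower is better; racks that have lost redundancy are heavily penalised."""
    score = rack.inlet_temp_c + rack.power_cost_kwh * 100
    if not rack.ups_redundant:
        score += 1000  # breaches the redundancy SLA: avoid if at all possible
    return score

racks = [
    Rack("rack-a", ups_redundant=False, inlet_temp_c=24.0, power_cost_kwh=0.12),
    Rack("rack-b", ups_redundant=True, inlet_temp_c=27.0, power_cost_kwh=0.12),
    Rack("rack-c", ups_redundant=True, inlet_temp_c=22.0, power_cost_kwh=0.15),
]

best = min(racks, key=placement_score)
print(f"Place new workload on {best.name}")
```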

For me this could elevate System Center or vCenter to the status of controller for the whole of your infrastructure – taking a feed from apps, infrastructure and facilities, and making intelligent decisions about the placement and movement of workloads, without any human involvement beyond setting the parameters up front.

Sounds ominously like a cloud, doesn’t it?

Useful note on Outlook in VDI

April 15, 2011

Just a short one this morning: I had to share this excellent article on setting up Outlook in VDI. Our customers are frequently looking to solve roaming profile issues, particularly involving sizeable Outlook caches, in hot-desking environments by using VDI so it is really handy to have such a well-written summary of the issues. Great post!

Checking out Tintri

April 13, 2011

I had a look yesterday evening at a webinar on Tintri, a start-up storage company founded by an ex-VMware guy, amongst others. The concept is VM-aware storage – no LUNs, just one big datastore per box which delivers VM storage. Everything seems to be managed by vCenter, or at least a very vCenter-like interface. It’s a single unit which scales out by adding boxes, in the manner of P4000 or EqualLogic, but with dedupe and flash storage like a NetApp box with PAM/Flash Cache. $65k for 8.5TB usable.

I love the concept – the theory that your VM administrators can manage the storage layer without needing to be a storage expert will I’m sure play well in the mid-market. Storage can be a bit of a black art for those who have never had a SAN before. Tintri is also VM storage only (VMware today, other hypervisors under consideration), and we seem to be pretty good at getting our customers to 100% virtualised.

There are a few things missing – no single view of multiple boxes, no replication, as far as I can see no VSS snapshots (although integration with VMware snapshots helps), and no VAAI (as that is block-level only – it will be supported when VMware adds NFS support in the next release). But it looks strong on thin provisioning, dedupe etc, and it sounds like this stuff will be coming soon.

Sounds like they will hit the UK/Europe towards the end of the year – I will look forward to seeing them land, and to learning more, as it sounds like a concept with potential for our customers.

Microsoft updates cloud licensing

March 30, 2011

The ‘cloud’ industry had some great news today – Microsoft are making their licensing rules significantly more cloud-service friendly.

Up until now, organisations taking cloud services using Microsoft software had to have their licensing covered through something called the SPLA model – the Service Provider Licensing Agreement. This specifically permits, in the EULA, delivery of services to a third party – something which is precluded under ‘traditional’ licensing. There were two issues with this – firstly that customers had frequently already invested in ‘on-premise’ licensing and were therefore having to double-purchase, and secondly the complexities around managing two models of licensing.

As of the 1st of July, Microsoft will issue updated Product Use Rights, which will grant customers with active Software Assurance (SA) the right to deploy certain server workloads (including Exchange, Lync, SQL, SharePoint and CRM) either on their own infrastructure or in ‘the cloud’. This means that the choice of how a customer consumes Microsoft technology is no longer constrained by a customer’s existing licensing investment (assuming they have SA, of course!).

An increase in customer choice can only be seen as an advantage – both for those customers, and for cloud providers who can now novate customers’ existing investments to their platform. Well done Microsoft!

The only downsides I can see are that desktop appears not to be included – so no hosted full desktop as a service (such services are available, but typically based on session-based desktops) – and of course that access to these ‘license mobility enhancements’ is restricted to Software Assurance customers. But despite those two small downsides I applaud Microsoft for a forward-thinking move (and it is one more incentive to include SA).

I can see our Software Asset Management Team will have to introduce a Cloud Licensing Mobility Readiness Assessment in fairly short order!

Gosh what a lot of cores: Intel E7 boxes coming

March 29, 2011

Today, the Softcat team are out in force at the HP Gold Partner event in Gaydon. There’s an Intel stand here, and I had to share what they were showing off:

[Photo: the processor graphic from Intel’s stand]
So what is it? It’s the processor graphic for the latest Intel server processor – the E7 series – which comes out next week. 10 cores, and it will go in the four-socket boxes (DL580 and DL980 in HP parlance). This means it delivers 40 cores in a box – plus hyperthreading, so 80 threads in a single box.
Not only that, but it will support 2TB of memory – importantly, with no speed drop. There’ll also be built-in encryption with next to no overhead.
Think about it – VDI, OLTP, gaming, heavy virtualisation, finance where the database needs encrypting… Impressive stuff.

New lab box anyone?