Archive for the ‘DR/BC/HA’ Category

Helping HP to deliver their new baby

December 7, 2012

I’m sure, if you are reading this post, you have heard the happy news about a new baby on the way. No, not Will and Kate’s impending prince or princess – rather HP’s announcement of two new versions of their 3PAR StoreServ array aimed at the mid market, and priced from €20k for the dual controller version and €30k for the quad controller.

We were absolutely delighted to be part of the launch, over in Frankfurt just before HP’s annual Discover tech show. I joined a customer panel at the official unveiling for the press the day before Discover started, where I talked about our experiences of using 3PAR as the storage platform for our CloudSoftcat infrastructure-as-a-service platform, which has rapidly scaled to more than 200 customers and also runs all of our own IT. I also presented a breakout session entitled ‘From Zero to Cloud’ later on in the event.

It’s great news that customers with serious IT needs but less serious budgets can now get access to this technology, which is in use at four of the top five largest hosting providers. Of course if you deploy 3PAR in your primary infrastructure, but don’t want to spend CAPEX on a DR site, you could replicate to our 3PAR-powered CloudSoftcat platform.

If you are interested in hearing more about Softcat and HP at Discover 2012, I have collected together the following links:

Main launch site for HP 3PAR StoreServ:

Launch presentation from Dave Donatelli and David Scott of HP, and customer panel featuring yours truly:

Me being interviewed live with John Furrier and David Vellante of Wikibon:

My podcast recorded with the legendary @HPStorageGuy Calvin Zito on the floor at Discover:

The Softcat tech team have a demo unit of the new HP 3PAR StoreServ 7000 on order, so if you want to learn more, give us a shout!


My predictions for 2012…

January 3, 2012

I normally do this the other way round, but this time I decided to publish on the news site first. For those of you who follow me here (thank you!), here are my predictions for IT, plus some suggestions for how to address and take advantage of these trends. Happy New Year!

Happy New Year from all at Softcat! I thought we would start the year off with a few predictions for what we expect to happen over the course of the year. Rather than this being simply some predictions in isolation, we’ve made a few suggestions as to how you might be able to respond to, or take advantage of, these trends:

Desktop – if you haven’t already, you will come under increasing pressure to migrate to Windows 7. We expect IT to need to reduce desktop management costs, while maintaining or improving service levels to the business. Windows 8 is looking likely to ship towards the end of the year – it’s looking good so far and could be interesting when your PC, phone and tablet run basically the same OS. We think Windows 8 will be the last revision of the ‘desktop’ operating system as we know it – Windows won’t go away, but will increasingly be a platform across a huge range of devices.

What to do?

Get your desktop house in order – look at management toolsets such as System Center to reduce the pain of migrating to Windows 7 and of ongoing maintenance, and reconsider server-based computing (VDI, Terminal Services etc) where this makes sense.

BYOD – more and more people within your organisation will demand, and even expect, to be able to use their own devices – particularly phones and tablets – for work purposes. You need to set out your strategy here, as it is something that you can harness to improve productivity, employee satisfaction and business continuity.

What to do?

Evaluate the risk inherent in having data on these devices, versus the demand and potential gains. Technologies such as GOOD can encrypt data on those end-points, or you can use centralised computing to deliver Windows apps to those devices without any data getting on to the device. Over time, you may want to consider rearchitecting your line-of-business applications or at least having a transport mechanism that enables delivery to handheld devices.

Communications – we think there will be two main trends in communications. The first will be mobility-related – replacing desktop handsets by running telephony clients on mobile devices, and increasingly implementing VoIP over GSM or WiFi. The second will be an increase in the use of casual, desktop-based video conferencing and application sharing. Both of these trends will improve collaboration across geographical boundaries and bring disparate workforces closer together.

What to do?

We think the increasing interest in Microsoft Lync will drive both of these areas, so you could do worse than start there, especially if you have an Enterprise Agreement which gives you at least some Lync entitlement. There’s a healthy ecosystem of products to improve and extend the capabilities of Lync, as well.

Datacentre – virtualisation will continue to be the prevailing trend. If you haven’t, you really should – and if you have, you should probably do more! We’ll see more management technologies, both from the virtualisation vendors and from the ecosystem, which will help to push up the percentage of applications you virtualise, as well as to migrate towards the nirvana of a ‘private cloud’. No doubt your demand for storage will accelerate as well, so you will need to be clever about the way in which you use disk to keep on top of costs in that area while maintaining the performance your applications need. Oh, and you’ll need to improve your ability to recover in the event of a disaster, too.

What to do?

Review your strategy for virtualisation – are you trading physical server sprawl for the virtual equivalent? Would tighter management enable you to drive up utilisation? Review your storage strategy as well – can you make use of deduplication, for example – or employ a combination of SSDs for performance and cheaper, slower disk for data? The combination of clever use of storage and increased virtualisation can make it far easier – and cheaper – to implement a robust DR plan.
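To make the tiering idea concrete, here’s a minimal sketch of the sort of hot/cold placement decision involved – the thresholds, function name and dataset figures are all illustrative assumptions, not taken from any particular storage product:

```python
# Illustrative hot/cold tiering decision: keep frequently accessed,
# reasonably sized datasets on SSD and everything else on cheaper,
# slower disk. The thresholds below are made-up for the example.

def choose_tier(accesses_per_day, capacity_gb,
                hot_threshold=100, ssd_limit_gb=500):
    """Return 'ssd' for hot, SSD-sized datasets, otherwise 'hdd'."""
    if accesses_per_day >= hot_threshold and capacity_gb <= ssd_limit_gb:
        return "ssd"
    return "hdd"

# A busy database lands on SSD; a big, cold archive stays on cheap disk
print(choose_tier(5000, 200))  # ssd
print(choose_tier(2, 4000))    # hdd
```

Real arrays make this call automatically and at a much finer granularity, but the trade-off – performance where it pays, cheap capacity everywhere else – is the same one described above.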

Cloud – this is the year that we will see ‘Cloud’ descend into Gartner’s ‘trough of disillusionment’. It wouldn’t surprise us if we saw the first serious security breach in the cloud space, and doubtless another large-scale outage. Having said that, losing some of the hype will make Cloud a more realistic proposition for businesses when done properly. We think that localised, friendly cloud providers will be increasingly important – credit-card-cloud will be reserved for development and test.

What to do?

Don’t write cloud off – but look to providers where you have or can build a relationship and where doing your own due diligence is possible. Look to extend the capabilities of your infrastructure for capacity or DR by partnering with a provider of infrastructure as a service, and if your business needs a new application, consider cloud options against on-premise.

Security – Firstly, 2012 is the year we expect security technologies to start to catch up with virtualisation. Up until now the available technologies have been somewhat immature, and the new model of ‘introspection’ via APIs will improve this. Secondly, this will be the year the business tells you to stop blocking Facebook, YouTube and Twitter, as people are now genuinely using them as business tools. You will no longer be able to mitigate the risks by turning them off! We’ll also see more focus than ever on keeping data safe – something which is increasingly difficult in this mobile world.

What to do?

Firstly, if your legislative or threat landscape demands it, look seriously at Intrusion Prevention technologies which will work across your virtual environment. Secondly, make sure that your Web Security Policy is up-to-date, and supplement it with a powerful filtering technology that enables control down to application-level within social networking sites, rather than taking a simple allow-or-deny approach. Thirdly, gain an understanding of the way in which data moves around your organisation and make sure your access policy limits data misuse, especially through social media and email. Encryption of remote access is a sensible step, and you may, if your position requires it, like to give consideration to a data-loss-prevention tool.

VM as a transport layer

May 4, 2011

At Softcat, we love server virtualisation – it’s been great for our customers in terms of cost savings, agility and increased availability, and it’s been great for our services and storage business as well. I do think, though, that virtualisation is sometimes considered a panacea – if we virtualise, everything will be OK and the infrastructure will just take care of itself. Out of the box, however, that’s not strictly true.

The virtualisation vendors do a cracking job of protecting workloads from an outage of the immediate hardware on which those workloads are running. No longer is your application tied to a particular server; instead the virtualisation layer will ensure that the VM running that app restarts somewhere else with the minimum of disruption. Don’t get me wrong, this is fantastic, and loads better than what we had before. It’s just that the virtualisation layer is by default blissfully unaware of what is going on outside of the areas it controls.

I blogged about this before in connection with Neverfail’s vApp HA product, and I’m pleased to note that Symantec have released their version, ApplicationHA, which I think validates my view that the infrastructure needs to know what is happening at the application level and react accordingly.

Yesterday, I spent a couple of hours with APC looking at their InfraStruxure Central software. Rather than looking up into the application, this software looks down into the facilities element of your datacentre. This enables it to inform your choice of virtualisation vendor of what is going on so that decisions can be made on the placement of virtual machines.

Take for example a rack where a UPS has died, or a power feed is unavailable. That might not immediately affect the workloads running there, but it might breach SLA in terms of redundancy, or increase risk to an unacceptable level. At a simpler level, it would mean that the VM layer could spread workloads out to less power-constrained racks, or those running cooler – something you can really only do by guesswork today. Think about initiating DR arrangements before power fails entirely, or automatically when the UPS has to kick in. To take it a stage further, this could automate the movement of workloads around the globe according to power availability and, of course, power costs. ‘Follow the moon’ computing, anyone?
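As a sketch of how such a placement decision might look – the rack data, field names and power figures below are entirely hypothetical, and nothing here reflects InfraStruxure Central’s actual interfaces:

```python
# Illustrative power-aware VM placement: exclude racks with a failed
# UPS or insufficient power headroom, then prefer the rack with the
# most spare power (and the lowest temperature as a tie-breaker).

def best_rack(racks, vm_power_w):
    """Pick the name of the safest rack for a VM drawing vm_power_w watts."""
    candidates = [r for r in racks
                  if r["ups_ok"] and r["capacity_w"] - r["load_w"] >= vm_power_w]
    if not candidates:
        return None  # nowhere safe to place it - alert a human instead
    return max(candidates,
               key=lambda r: (r["capacity_w"] - r["load_w"], -r["temp_c"]))["name"]

racks = [
    {"name": "rack-a", "ups_ok": False, "capacity_w": 8000, "load_w": 2000, "temp_c": 22},
    {"name": "rack-b", "ups_ok": True,  "capacity_w": 8000, "load_w": 7600, "temp_c": 24},
    {"name": "rack-c", "ups_ok": True,  "capacity_w": 8000, "load_w": 3000, "temp_c": 21},
]
# rack-a is excluded (UPS down), rack-b has too little headroom, so rack-c wins
print(best_rack(racks, 500))  # rack-c
```

The point is that once the facilities layer exposes this data, the virtualisation layer can make the decision instead of a human guessing.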

For me this could elevate the status of System Center or vCenter to the controller for the whole of your infrastructure – taking a feed from apps, infrastructure and facilities and making intelligent decisions about the placement and movement of workloads, without any human involvement other than setting the parameters when it is set up.

Sounds ominously like a cloud, doesn’t it?

Silly Snow Season is here again!

November 30, 2010

Tube strikes on Monday; snow and ice today; I can’t be alone in having had meetings cancelled this week. Looking at the news, I’m quite glad I’m not travelling to Newcastle tonight – even though it is one of my favourite cities, it doesn’t look much fun at the minute.

Watching BBC News this morning (on the iPad of course, while I was making my daughter’s sandwiches!), they were again talking about people working from home – not just during periods of snow, but generally to save on transport infrastructure, time etc. Back-to-back with that, there was an article on how Hogg Robinson, a corporate travel company, had doubled their profit as travel increases. (I’m assuming this isn’t just down to the ridiculous costs of train fares!)

We’ve got a video conferencing system at Softcat – it’s really useful for talking to the Manchester team without a three hour drive. We use Office Communications Server, so we can communicate across offices and see where everyone is at a glance. We have remote access systems powered by Citrix, or just OWA for those that don’t need business systems. Last year in the snow, I really think we picked up a load of business as other organisations just weren’t there. We had people working from home, from our offices, from wherever – and we got the job done!

Isn’t it time to look at your unified communications and business continuity strategy?

Snow business?

January 11, 2010

As a technology company, we’re as guilty as anyone of thinking of ‘disaster recovery’ as being all about ensuring that IT systems keep running in any eventuality. The recent snow puts a different perspective on things, though, as almost the reverse was true: our systems were unaffected, but quite a few people couldn’t get into the office. Whether it’s snow, swine flu or something else, the conversation needs to be about ‘business continuity’ rather than DR.

Believe it or not, despite the disruption, we had a really busy and successful week. We’re well-equipped with remote systems access through a combination of Terminal Services and Citrix, and of course good old Outlook Web Access – so everyone had some level of ability to continue to work.

One area which we are looking to improve on is the phone side of things. Our ageing Nortel phone system is scheduled for replacement in February. We’re moving to a Cisco Call Manager platform which will enable us to ‘log in’ to our phones from anywhere and use ‘single number reach’ to ensure we are available even if we aren’t in the office. In the event, about 40 people made it in so the phones were well covered – but this will be much improved next time round.

As usual, the snow has occasioned a fairly hefty degree of interest in remote access solutions and unified communications. As you can see we are in the process of ‘eating our own dogfood’! If you want to know more, we’re running events on both subjects, and on broader business continuity, in January and February. See for dates.

Neverfail vAppHA

January 6, 2010

I was fortunate enough, just before Christmas, to be invited to play with a new product coming soon from Neverfail, vAppHA. I’ve been hearing about this for a while, and it was great to see it actually working.

Neverfail have had a great partnership with VMware for some time (VMware’s vCenter Heartbeat is actually Neverfail for vCenter!), and vAppHA looks like it will extend this. We’ve been deploying VMware for about four years now, typically as much to improve availability as to consolidate physical servers. We’re great fans of the built-in high-availability features within VMware.

However, as good as VMware HA, vMotion and FT are, they don’t have very much in the way of application awareness. This is where vAppHA steps in. It uses Neverfail’s application-aware technology and the information garnered from it to trigger VMware events – making HA, for example, a much better bet for application availability.

Envisage, for example, a SQL server running as a virtual machine. If a service stops, VMware unfortunately will be blissfully unaware. However, vAppHA recognises this and will then trigger an event of your choosing – for example, the restart of that virtual machine on another host. This provides a layer of ‘self-healing’ within your virtual infrastructure, up to the application layer, that didn’t exist before.
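The logic is simple enough to sketch. This is only an illustration of the idea – vAppHA’s actual checks and actions are its own – but the shape is: poll the service, and after repeated consecutive failures, fire the configured recovery event:

```python
# Illustrative application-aware HA trigger: given a series of up/down
# health-check samples, decide at which points a recovery action (e.g.
# restarting the VM on another host) should fire. Three consecutive
# failures is an assumed threshold, not a real product default.

def decide_actions(health_samples, max_failures=3):
    """Return the sample indices at which a recovery action fires."""
    actions, failures = [], 0
    for i, up in enumerate(health_samples):
        if up:
            failures = 0
        else:
            failures += 1
            if failures >= max_failures:
                actions.append(i)  # trigger the chosen event here
                failures = 0       # reset after acting
    return actions

# The SQL service drops at sample 4; the restart fires at sample 6
print(decide_actions([True, True, True, True, False, False, False, True]))  # [6]
```

Requiring several consecutive failures before acting is the usual way to avoid restarting a VM over a single missed heartbeat.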

Coupled with an intelligent storage layer such as HP LeftHand which will enable you to stretch your VMware cluster over two sites on the same campus, this could be even more useful. It’s certainly something we’re looking forward to having in our kit-bag once it’s released to the world. For what it’s worth the pricing looks pretty fair too.

UPDATE (1/3/2010): Free trial now available here: