Archive for the ‘Converged Infrastructure’ Category

Helping HP to deliver their new baby

December 7, 2012

I’m sure, if you are reading this post, you have heard the happy news about a new baby on the way. No, not Will and Kate’s impending prince or princess – rather HP’s announcement of two new versions of their 3PAR StoreServ array aimed at the mid market, and priced from €20k for the dual controller version and €30k for the quad controller.

We were absolutely delighted to be part of the launch, over in Frankfurt just before HP’s annual Discover tech show. I joined a customer panel at the official unveiling for the press the day before Discover started. I talked about our experiences of using 3PAR as the storage platform for our CloudSoftcat infrastructure as a service platform, which has rapidly scaled to more than 200 customers, including all of our own IT. I also presented a breakout session entitled ‘From Zero to Cloud’ later on in the event.

It’s great news that customers with serious IT needs but less serious budgets can now get access to this technology, which is in use at four of the top five largest hosting providers. Of course if you deploy 3PAR in your primary infrastructure, but don’t want to spend CAPEX on a DR site, you could replicate to our 3PAR-powered CloudSoftcat platform.

If you are interested in hearing more about Softcat and HP at Discover 2012, I have collected the following links:

Main launch site for HP 3PAR StoreServ: http://h17007.www1.hp.com/us/en/storage/nextera/index.aspx

Launch presentation from Dave Donatelli and David Scott of HP, and customer panel featuring yours truly: http://h17007.www1.hp.com/us/en/storage/nextera/live.aspx

Me being interviewed live on http://siliconangle.tv with John Furrier and David Vellante of Wikibon:
http://siliconangle.com/blog/2012/12/06/softcats-sam-routledge-gives-the-service-provider-some-cloud-perspective/

My podcast recorded with the legendary @HPStorageGuy Calvin Zito on the floor at Discover: http://h30507.www3.hp.com/t5/Around-the-Storage-Block-Blog/Softcat-utilizing-HP-3PAR-to-provide-managed-services/ba-p/128777

The Softcat tech team have a demo unit of the new HP 3PAR StoreServ 7000 on order, so if you want to learn more, give us a shout!

Vendor swag part 1: ‘Clouds’ of smoke

February 17, 2012

This is part 1 in an occasional series I have been meaning to kick off for a while, showcasing the more interesting pieces of vendor merchandise at conferences and so on. I have to say I am not sure we will better the first one!

This week, I have been away at HP’s Global Partner Summit. One evening I attended a reception for the storage business. The marketing guys had gone crazy, as the drinks on offer were branded! I went for a CloudAgile Whiskey Sour, but quite a few people were tucking in to 3PARtinis… At the end of the reception, they handed out a gift bag. Inside was an HP Storage-branded cigar, a CloudAgile-branded lighter and a cigar cutter. Amazing! Anyone seen anything cooler than that from a vendor?

Clouds of smoke

Not something I will make use of myself, but if any of our wonderful customers want it, drop me a line!

Is your IT a customer satisfaction issue?

October 12, 2011

I’ve been meaning to put some thoughts down about this for a while. The catalyst that actually got me round to it was being asked to deliver a presentation on Softcat’s unique brand of employee and customer satisfaction at the recent Call Centre Focus conference. We were asked to present through our partners QGate, who support and develop our CRM platform. They know how seriously we take employee and customer satisfaction, and felt we had something to share.

I spoke for most of the time about the crazy place that Softcat is – how we give everybody breakfast, arrange for their ironing to be done, and generally have a great time (whilst working really hard of course) – and how because our employees feel valued, empowered and rewarded they will absolutely go the extra mile for our customers.

One point I did make relates to a conversation I have been having more and more with IT guys recently. Many IT teams are concerned that a lack of recent investment in infrastructure, coupled with the growing importance of, and demands on, IT, has led to systems that are under-performing and not delivering the user experience their internal customers demand. I am convinced that this has a direct effect on customer service: customers don’t get the information or response they need quickly enough.

Equally importantly, I think this has a serious impact on employee satisfaction; if your people can’t get their systems working quickly enough to do their job, they are going to be less happy in their work. Doubtless this will affect their usual sunny disposition in dealing with customers! We’re currently making a huge investment in our own infrastructure to ensure that we can deliver to our staff and our customers the performance that they need – I’ll tell you more about the details of what we are doing in a future post.

In the meantime, a few things you might like to think about relating to performance and user experience (by no means an exhaustive list, and I would be very interested in your feedback in the comments):

  • Look into virtualisation as a cost-effective (and cost-avoiding!) way of upgrading ageing hardware and of allowing you to move applications around your infrastructure to free up space
  • Look into storage – SSD is becoming more and more prevalent, and, needless to say, can assist where your applications are constrained by IO. Some storage vendors offer data tiering which means that ‘hot’ data will automatically be moved to faster disk…
  • Think about how your applications are delivered – a ‘chatty’ application might be better deployed on Citrix or the equivalent, close to the servers on which it depends, rather than on PCs where it has to pull a lot of data over the network
  • Think about your network – is it time to look at 10GbE in the server room?
  • Consider your cloud options – you might not be ready to go full-bore into the public cloud, but pushing out some of your less critical or less security-dependent applications might free up some space for the rest – and get you used to cloud for the future.
Any other suggestions?
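As a toy illustration of the data-tiering point above, here is a sketch of how ‘hot’ data gets promoted to faster disk. This is a hypothetical model with made-up thresholds, not any storage vendor’s actual algorithm:

```python
# Toy model of automated storage tiering: promote frequently accessed
# ("hot") blocks to SSD, demote cold ones back to spinning disk.
# Thresholds and tier names are illustrative, not any vendor's defaults.

HOT_THRESHOLD = 100   # accesses per sampling window
COLD_THRESHOLD = 5

def retier(blocks):
    """blocks: dict of block_id -> {'tier': 'ssd'|'hdd', 'accesses': int}"""
    moves = []
    for block_id, stats in blocks.items():
        if stats['tier'] == 'hdd' and stats['accesses'] >= HOT_THRESHOLD:
            stats['tier'] = 'ssd'             # hot block: promote to SSD
            moves.append((block_id, 'ssd'))
        elif stats['tier'] == 'ssd' and stats['accesses'] <= COLD_THRESHOLD:
            stats['tier'] = 'hdd'             # cold block: free up SSD space
            moves.append((block_id, 'hdd'))
    return moves

blocks = {
    'b1': {'tier': 'hdd', 'accesses': 250},   # hot -> promoted
    'b2': {'tier': 'ssd', 'accesses': 2},     # cold -> demoted
    'b3': {'tier': 'hdd', 'accesses': 40},    # lukewarm -> stays put
}
print(retier(blocks))  # [('b1', 'ssd'), ('b2', 'hdd')]
```

Real arrays do this at sub-LUN granularity on a schedule, but the principle is the same: IO-constrained applications get the fast media without you buying SSD for everything.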

VM as a transport layer

May 4, 2011

At Softcat, we love server virtualisation – it’s been great for our customers in terms of cost savings, agility and increased availability. It’s been great for our services and storage business as well. I do think sometimes that virtualisation is considered a panacea: if we virtualise, everything will be OK and the infrastructure will just take care of itself. Out of the box, however, that’s not strictly true.

The virtualisation vendors do a cracking job of protecting workloads from an outage of the immediate hardware on which those workloads are running. No longer is your application tied to a particular server; instead the virtualisation layer will ensure that the VM running that app restarts somewhere else with the minimum of disruption. Don’t get me wrong, this is fantastic, and loads better than what we had before. It’s just that the virtualisation layer is by default blissfully unaware of what is going on outside of the areas it controls.

I blogged about this before in connection with Neverfail’s vApp HA product, and I’m pleased to note that Symantec have released their version, ApplicationHA, which I think validates my view that the infrastructure needs to know what is happening at the application level and react accordingly.

Yesterday, I spent a couple of hours with APC looking at their InfraStruxure Central software. Rather than looking up into the application, this software looks down into the facilities element of your datacentre. This enables it to feed what is going on up to your virtualisation platform, so that decisions can be made on the placement of virtual machines.

Take for example a rack where a UPS has died, or a power feed is unavailable. That might not immediately affect the workloads running there, but it might breach an SLA in terms of redundancy, or increase risk to an unacceptable level. At a simpler level, it would mean that the VM layer could spread workloads out to less power-constrained racks, or those running cooler – something you would really have to do by guesswork today. Think about initiating DR arrangements before power fails entirely, or automatically when the UPS has to kick in. To take it a stage further, this could automate the movement of workloads around the globe according to power availability and, of course, power costs. ‘Follow the moon’ computing, anyone?
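To make the idea concrete, here is a sketch of what that facilities-to-virtualisation feedback loop might do. The rack names, VM names and the 0.5 kW-per-VM figure are all invented for illustration – this is not APC’s or any hypervisor vendor’s actual API:

```python
# Hypothetical power-aware placement loop: evacuate VMs from racks whose
# power redundancy is degraded (e.g. a failed UPS) to the healthy rack
# with the most spare power capacity. All names/figures are invented.

def evacuate_degraded(racks):
    """racks: list of dicts with 'name', 'ups_ok', 'headroom_kw', 'vms'."""
    healthy = [r for r in racks if r['ups_ok']]
    plan = []
    for rack in racks:
        if rack['ups_ok']:
            continue
        for vm in list(rack['vms']):
            # Pick the healthy rack with the most power headroom.
            target = max(healthy, key=lambda r: r['headroom_kw'])
            plan.append((vm, rack['name'], target['name']))
            rack['vms'].remove(vm)
            target['vms'].append(vm)
            target['headroom_kw'] -= 0.5  # assume ~0.5 kW per VM
    return plan

racks = [
    {'name': 'rack-a', 'ups_ok': False, 'headroom_kw': 1.0, 'vms': ['vm1', 'vm2']},
    {'name': 'rack-b', 'ups_ok': True,  'headroom_kw': 3.0, 'vms': []},
    {'name': 'rack-c', 'ups_ok': True,  'headroom_kw': 2.0, 'vms': ['vm3']},
]
print(evacuate_degraded(racks))
# [('vm1', 'rack-a', 'rack-b'), ('vm2', 'rack-a', 'rack-b')]
```

In practice you would replace the dictionaries with live feeds from the facilities layer and hand the resulting plan to the hypervisor’s migration engine, rather than mutating state yourself.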

For me this could elevate the status of System Center or vCenter to the controller for the whole of your infrastructure – taking a feed from apps, infrastructure and facilities and making intelligent decisions about the placement and movement of workloads, without any human involvement other than setting the parameters when it is set up.

Sounds ominously like a cloud, doesn’t it?

HP’s E5000 messaging appliance

April 11, 2011

After seeing the sales pitch a while back, I spent an hour or so this afternoon looking under the covers of the HP E5000 messaging appliances. These are part of the HP/Microsoft Frontline partnership, under which the two organisations have invested $250m in working together. The premise of a drop-in appliance which runs Exchange, sized by mailbox number and size, is a strong one, I think – especially with a lot of organisations still on Exchange 2003 (74%, apparently!).

Does anyone remember the old HP DL380 packaged cluster that was available up to maybe 4-5 years ago? I used to love those things… Great for SQL, Exchange and TS clusters. Well, the E5000 is basically a modern version of that: two blades and a chunk of shared storage, in a custom enclosure. The ‘secret sauce’ is a wizard-driven interface which enables you to set up Exchange considerably more quickly. Needless to say you still need to migrate the mailboxes, but if you are an IT team needing to get Exchange 2010 up and running, this may just help you out. And since Exchange 2010 is tuned for local rather than SAN storage, and requires far less in the way of IOPS, you are not taking up expensive space on your SAN.

Whilst the major benefit in my mind is the speed of deployment, it’s also space and power efficient, being based on blades. I reckon it will cut down on consultancy costs as well if you don’t have any Exchange experts on staff.

I’ve got some links to further information coming shortly, and I’ll add them here when they arrive.

Gosh what a lot of cores: Intel E7000 boxes coming

March 29, 2011

Today, the Softcat team are out in force at the HP Gold Partner event in Gaydon. There’s an Intel stand here, and I had to share what they were showing off:

(photo: the Intel processor graphic on the stand)
So what is it? It’s the processor graphic for Intel’s latest server processor – the Xeon E7 series – which comes out next week. Ten cores, and it will go in the four-socket boxes (the DL580 and DL980 in HP parlance). That means 40 cores in a box, plus hyperthreading – so 80 threads in a single server.
Not only that, but it will support 2 TB of memory – importantly, with no speed drop. There’s also built-in encryption with next to no overhead.
Think about it: VDI, OLTP, gaming, heavy virtualisation, finance where the database needs encrypting… Impressive stuff.
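For anyone checking my maths, the headline numbers work out like this (a trivial sanity check using the figures quoted above):

```python
# Sanity-check the quoted figures for a four-socket Xeon E7 box.
cores_per_socket = 10
sockets = 4
threads_per_core = 2  # hyperthreading doubles the thread count

total_cores = cores_per_socket * sockets        # 40 physical cores
total_threads = total_cores * threads_per_core  # 80 hardware threads
print(total_cores, total_threads)  # 40 80
```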

New lab box anyone?

Is the midmarket better than enterprise at ‘Private Cloud’?

September 13, 2010

I heard a great story the other day about a large organisation that embraced virtualisation to a fairly high degree. As part of this they managed to reduce the time taken to deploy a new workload from three months to six weeks for a virtual machine. Of course, it can be done significantly quicker than that, but their processes put the brakes on what the technology could deliver.

What do I mean by ‘Private Cloud’? I guess I am talking about what HP refer to as ‘Converged Infrastructure’: a heavily virtualised environment in which resources can be provisioned and reallocated more or less dynamically to meet the needs of the business, and in which the traditional silos of storage, networking and compute are managed together to ensure that the appropriate resource is available to a given workload.

Whilst midmarket organisations don’t perhaps have access to the most cutting-edge (and most expensive!) technologies, it feels to me as if smaller companies have embraced virtualisation to a much deeper level than larger ones. That means, I think, that they are closer to the likely future of computing than those stuck-in-the-mud big businesses who can’t change so quickly – and can’t make the most of what technology will enable them to do.

So why are SMEs better at this than the bigger boys?

  • Typically, SMEs seem more likely to go 100% virtualised – or at the very least to adopt a ‘virtualise by default’ policy.
  • IT teams within SMEs are often multi-discipline, which means the traditional silos of storage vs networking vs server can rapidly be overcome.
  • Application owners are frequently also infrastructure owners – meaning that by default the infrastructure is designed to suit the needs of the business.
  • The layers of red tape that can unfortunately exist within larger organisations (often with good reason) frequently don’t have such an impact in a smaller, more agile business.

What do you think? How far down the line towards a ‘private cloud’ style architecture has your company got?

Technology@Work Takeaways

May 5, 2010

It’s nearly a week since I got back from HP’s Technology at Work event in Frankfurt, and I have just about caught up, so I thought it was time to summarise my experience. First off, I very much enjoyed it – the organisation was very slick, Frankfurt was friendly and the content was superb. I feel less guilty now that HP have cancelled their tablet plans (or at least delayed them pending the integration of Palm) for my use of an iPad throughout the event…

My first comment would be that getting involved in the social media scene at events like this is a good idea. It made me feel really engaged with the event to be following #HPTAW and tweeting with folks like @StorageGuy and @JezatHP (follow both of these guys for good insider info on HP storage and networking respectively). It’s a really useful back-channel to comment on what is going on and point people in the direction of stuff that is interesting. Needless to say, I was delighted to end up as Mayor of HP Technology at Work on Foursquare.

Now, on to the technology… My primary takeaway was the emphasis from Gartner (who were out in force) on a simple statistic: in their research, 70% of IT budget is spent on ‘keeping the lights on’, and only 30% on innovation – finding technology solutions to move the business forwards. That’s pretty scary, and I would welcome feedback from our customers as to whether that’s a common experience. It feels about right to me. Almost every keynote referred back to this statistic, and HP are focusing their efforts on remedying the situation. A lot of the conversations were around reducing cost: power and cooling, management, etc. An interesting point, however, was that CIOs with responsibility for the power bills refresh infrastructure more often, deriving ROI from the refresh – and in the process giving their organisation access to the latest technology, making the business more agile.

I’m sure we all hope we are on the way out of recession. Certainly the feedback from Gartner was that organisations are investing. In fact, the focus appears to be moving away from out-and-out cost, although budget growth will remain low or nonexistent. The focus is on balancing value with risk and innovation – the goal is to enable the business to be flexible and adapt to change. The legacy of the economic situation will leave organisations in a state of flux – M&A, emerging markets, volatility and a need to advance into, and retreat from, markets will mean that IT must be able to react to and enable this constant change. No wonder virtualisation, cloud and Web 2.0 are the top three technologies on CIOs’ watch-lists. HP are working hard on the concept of ‘converged infrastructure’ (separate post to follow) to provide a cohesive platform from which to deliver services to meet the needs of your organisation.

End-user computing was another area, and one which has interested me for some time. The story echoed my post on Next Generation Desktops, but suffice it to say that the computing environment of tomorrow will be a little different. Virtualisation of the desktop in its various forms will become commonplace, and management by separation of the various layers (hardware, OS, apps, profile etc) will be the norm. At the same time, IT departments will need to get used to delivering mobile, lightweight apps to laptops, netbooks, smartphones and (dare I say it) the iPad.

Lastly, there was a fair amount on the networking front. HP contend that the acquisition of 3Com gives them an end-to-end networking portfolio, from the core to the edge, including security, for the first time. Networking is key to the Converged Infrastructure message, so keep an eye on this (we will announce an event soon). There was a lot of talk about unified communications, in particular integration of Microsoft Exchange and OCS with ProCurve and ProLiant.

Whilst I don’t think I recall Cisco being mentioned by name throughout the entire event, it’s fairly evident that they have a fight on their hands here!