Archive for the ‘Storage’ Category

#geekbands #storagebands

February 7, 2013

Had some geekery fun at work the other evening with some of the Softcat tech crew, including Lodz and Phillbert. The game started off as #storagebands – musical acts whose names could be punned into something related to storage technology – and later morphed into #geekbands – just anything vaguely technological.

I thought I would share a list of some of the best ones – please feel free to chip in below in the comments section!

Tenacious SSD (my favourite)

Linux Richie

Blood Sweat and Tiers

Michael BUEble (pretty tenuous – BUE equals Backup Exec)

Grandmaster Flash

Johnny Cache

NickelBackup

Fat32Boy Slim

Tiers for Fears

3PARamore

Portisheadcrash

(Kernel) Panic at the Disco

Queens of the Tape Age

The RAMones

The Wide Stripes (this triggered it all off)

Any more for any more?


Helping HP to deliver their new baby

December 7, 2012

I’m sure, if you are reading this post, you have heard the happy news about a new baby on the way. No, not Will and Kate’s impending prince or princess – rather HP’s announcement of two new versions of their 3PAR StoreServ array aimed at the mid-market, priced from €20k for the dual-controller version and €30k for the quad-controller.

We were absolutely delighted to be part of the launch, over in Frankfurt just before HP’s annual Discover tech show. I took part in a customer panel at the official unveiling for the press the day before Discover started, talking about our experience of using 3PAR as the storage platform for our CloudSoftcat infrastructure-as-a-service platform, which has rapidly scaled to more than 200 customers and also runs all of our own IT. I also presented a breakout session entitled ‘From Zero to Cloud’ later on in the event.

It’s great news that customers with serious IT needs but less serious budgets can now get access to this technology, which is in use at four of the top five largest hosting providers. Of course, if you deploy 3PAR in your primary infrastructure but don’t want to spend CAPEX on a DR site, you could replicate to our 3PAR-powered CloudSoftcat platform.

If you are interested in hearing more about Softcat and HP at Discover 2012, I have collected the following links:

Main launch site for HP 3PAR StoreServ: http://h17007.www1.hp.com/us/en/storage/nextera/index.aspx

Launch presentation from Dave Donatelli and David Scott of HP, and customer panel featuring yours truly: http://h17007.www1.hp.com/us/en/storage/nextera/live.aspx

Me being interviewed live on http://siliconangle.tv with John Furrier and David Vellante of Wikibon:
http://siliconangle.com/blog/2012/12/06/softcats-sam-routledge-gives-the-service-provider-some-cloud-perspective/

My podcast recorded with the legendary @HPStorageGuy Calvin Zito on the floor at Discover: http://h30507.www3.hp.com/t5/Around-the-Storage-Block-Blog/Softcat-utilizing-HP-3PAR-to-provide-managed-services/ba-p/128777

The Softcat tech team have a demo unit of the new HP 3PAR StoreServ 7000 on order, so if you want to learn more, give us a shout!

Some interesting background articles on Big Data

January 10, 2012

There’s been a lot of fuss around the concept of ‘Big Data’ recently. The basic idea is that there is more data than ever to collect – from social media, from sensors in the environment, from all sorts of sources – and that we now have the wherewithal to store it all and, importantly, do something with it. This is not about big storage – it is about big analytics: how we can derive ‘actionable insight’ from this morass of data. I guess you could say ‘it’s not the size of your data, it’s what you do with it’!

Obviously you do need a fair amount of storage and processing power – or access to cloud services to burst to – in order to render down the raw data into something you can use for business insight. This is getting the vendors fairly excited, as, if a customer can see value in pursuing this route, somebody somewhere, whether end user or cloud provider, is going to have to buy some stuff! This appears to be very much a scale-out play rather than a scale-up one, and a lot of the focus is on storage platforms like EMC’s Isilon, as a lot of this data is unstructured – i.e. not in a database.
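
To make ‘rendering down’ a little less abstract: the canonical toy example is counting occurrences across piles of unstructured text – exactly the kind of job that Hadoop-style map/reduce platforms spread across many cheap, scale-out nodes. Here is a minimal single-machine sketch of the shape of such a job (illustrative only – the data and function names are mine, not any particular platform’s API):

```python
# A toy map/reduce-shaped job: counting words across unstructured text.
# Big Data platforms run the map and reduce steps in parallel across many
# scale-out nodes; the overall shape of the work is the same.

from collections import Counter
from itertools import chain

documents = [
    "customer tweeted about a late delivery",
    "sensor flagged line three as late again",
    "customer praised the delivery speed",
]

def map_step(doc):
    # Emit one token per word - a real job might extract sentiment
    # or sensor readings here instead.
    return doc.lower().split()

def reduce_step(mapped):
    # Fold all the emitted tokens down into counts.
    return Counter(chain.from_iterable(mapped))

counts = reduce_step(map_step(d) for d in documents)
print(counts.most_common(3))   # a very small 'actionable insight'
```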

I came across a couple of articles over the last week which explain the approach and some of the concepts well – I hope you find these useful:

Freeform Dynamics: How big is Big Data?

InformationWeek: Big Data: Why all the fuss?

My predictions for 2012…

January 3, 2012

I normally do this the other way round, but this time I decided to publish on the Softcat.com news site first. For those of you who follow me here (thank you!), here are my predictions for IT, plus some suggestions for how to address and take advantage of these trends. Happy New Year!

Happy New Year from all at Softcat! I thought we would start the year off with a few predictions for what we expect to happen over the course of the year. Rather than this being simply some predictions in isolation, we’ve made a few suggestions as to how you might be able to respond to, or take advantage of, these trends:

Desktop – if you haven’t already, you will come under increasing pressure to migrate to Windows 7. We expect IT to need to reduce desktop management costs while maintaining or improving service levels to the business. Windows 8 looks likely to ship towards the end of the year – it’s promising so far, and could be interesting when your PC, phone and tablet all run basically the same OS. We think Windows 8 will be the last revision of the ‘desktop’ operating system as we know it – Windows won’t go away, but it will increasingly be a platform across a huge range of devices.

What to do?

Get your desktop house in order – look at management toolsets such as System Center to reduce the pain of migrating to Windows 7 and of ongoing maintenance, and reconsider server-based computing (VDI, Terminal Services etc) where this makes sense.

BYOD – more and more people within your organisation will demand, and even expect, to be able to use their own devices – particularly phones and tablets – for work purposes. You need to set out your strategy here, as it is something that you can harness to improve productivity, employee satisfaction and business continuity.

What to do?

Evaluate the risk inherent in having data on these devices versus the demand and potential gains. Technologies such as GOOD can encrypt data on those endpoints, or you can use centralised computing to deliver Windows apps without any data getting on to the device. Over time, you may want to consider rearchitecting your line-of-business applications, or at least having a transport mechanism that enables delivery to handheld devices.

Communications – we think there will be two main trends in communications. The first will be mobility-related – replacing desktop handsets by running telephony clients on mobile devices, and increasingly implementing VoIP over GSM or WiFi. The second will be an increase in the use of casual, desktop-based video conferencing and application sharing. Both of these trends will improve collaboration across geographical boundaries and bring disparate workforces closer together.

What to do?

We think the increasing interest in Microsoft Lync will drive both of these areas, so you could do worse than start there, especially if you have an Enterprise Agreement which gives you at least some Lync entitlement. There’s a healthy ecosystem of products to improve and extend the capabilities of Lync, as well.

Datacentre – virtualisation will continue to be the prevailing trend. If you haven’t, you really should – and if you have, you should probably do more! We’ll see more management technologies, both from the virtualisation vendors and from the ecosystem, which will help to push up the percentage of applications you virtualise, as well as to migrate towards the nirvana of a ‘private cloud’. No doubt your demand for storage will accelerate as well, so you will need to be clever about the way in which you use disk to keep on top of costs in that area while maintaining the performance your applications need. Oh, and you’ll need to improve your ability to recover in the event of a disaster, too.

What to do?

Review your strategy for virtualisation – are you trading physical server sprawl for the virtual equivalent? Would tighter management enable you to drive up utilisation? Review your storage strategy as well – can you make use of deduplication, for example – or employ a combination of SSDs for performance and cheaper, slower disk for data? The combination of clever use of storage and increased virtualisation can make it far easier – and cheaper – to implement a robust DR plan.

Cloud – this is the year that we will see ‘Cloud’ descend into Gartner’s ‘trough of disillusionment’. It wouldn’t surprise us if we saw the first serious security breach in the cloud space, and doubtless another large-scale outage. Having said that, losing some of the hype will make Cloud a more realistic proposition for businesses when done properly. We think that localised, friendly cloud providers will be increasingly important – credit-card-cloud will be reserved for development and test.

What to do?

Don’t write cloud off – but look to providers where you have or can build a relationship and where doing your own due diligence is possible. Look to extend the capabilities of your infrastructure for capacity or DR by partnering with a provider of infrastructure as a service, and if your business needs a new application, consider cloud options against on-premise.

Security – firstly, 2012 is the year we expect security technologies to start to catch up with virtualisation. Up until now the available technologies have been somewhat immature, and the new model of ‘introspection’ via APIs will improve this. Secondly, this will be the year the business tells you to stop blocking Facebook, YouTube and Twitter, as people are now genuinely using them as business tools. You will no longer be able to mitigate the risks by turning them off! We’ll also see more focus than ever on keeping data safe – something which is increasingly difficult in this mobile world.

What to do?

Firstly, if your legislative or threat landscape demands it, look seriously at intrusion prevention technologies that work across your virtual environment. Secondly, make sure that your web security policy is up to date, and supplement it with a filtering technology that enables control down to application level within social networking sites, rather than taking a simple allow-or-deny approach. Thirdly, gain an understanding of the way data moves around your organisation and make sure your access policy limits data misuse, especially through social media and email. Encryption of remote access is a sensible step, and, if your position requires it, you may want to consider a data-loss-prevention tool.

Is your IT a customer satisfaction issue?

October 12, 2011

I’ve been meaning to put some thoughts down about this for a while. The catalyst that finally got me round to it was being asked to deliver a presentation on Softcat’s unique brand of employee and customer satisfaction at the recent Call Centre Focus conference. We were asked to present through our partners QGate, who support and develop our CRM platform; they know how seriously we take employee and customer satisfaction, and felt we had something to share.

I spoke for most of the time about the crazy place that Softcat is – how we give everybody breakfast, arrange for their ironing to be done, and generally have a great time (whilst working really hard of course) – and how because our employees feel valued, empowered and rewarded they will absolutely go the extra mile for our customers.

One point I did make relates to a conversation I have been having more and more with IT guys recently. Many IT teams are concerned that a lack of recent investment in infrastructure, coupled with the growing importance of, and demands on, IT, has led to systems that are under-performing and not delivering the user experience their internal customers demand. I am convinced that this has a direct effect on customer service, in that customers don’t get the information or response they need quickly enough.

Equally importantly, I think this has a serious impact on employee satisfaction; if your people can’t get their systems working quickly enough to do their job, they are going to be less happy in their work. Doubtless this will affect their usual sunny disposition in dealing with customers! We’re currently making a huge investment in our own infrastructure to ensure that we can deliver to our staff and our customers the performance that they need – I’ll tell you more about the details of what we are doing in a future post.

In the meantime, a few things you might like to think about relating to performance and user experience (by no means an exhaustive list, and I would be very interested in your feedback in the comments):

  • Look into virtualisation as a cost-effective (and cost-avoiding!) way of upgrading ageing hardware and of allowing you to move applications around your infrastructure to free up space
  • Look into storage – SSD is becoming more and more prevalent and, needless to say, can assist where your applications are constrained by IO. Some storage vendors offer data tiering, which means that ‘hot’ data will automatically be moved to faster disk (there’s a toy sketch of the idea after this list)…
  • Think about how your applications are delivered – a ‘chatty’ application might be better deployed on Citrix or the equivalent, close to the servers on which it depends, rather than on PCs where it has to pull a lot of data over the network
  • Think about your network – is it time to look at 10GbE in the server room?
  • Consider your cloud options – you might not be ready to go full-bore into the public cloud, but pushing out some of your less critical or less security-dependent applications might free up some space for the rest – and get you used to cloud for the future.
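
On the tiering point above, here is a toy illustration of the core idea – rank blocks by recent IO and promote the busiest until the fast tier is full. Real arrays (3PAR Adaptive Optimization or EMC FAST, for example) tier sub-LUN chunks using far more sophisticated heuristics; the names and numbers below are mine, purely for the sketch:

```python
# Toy sketch of automated data tiering: promote the 'hottest' blocks to SSD.
# Real arrays work on sub-LUN chunks with far cleverer heuristics; this just
# shows the core idea of ranking by recent IO activity.

def plan_tiering(io_counts, ssd_slots):
    """io_counts: block id -> IOs observed in the sampling window.
    Returns a placement decision for every block."""
    hot = set(sorted(io_counts, key=io_counts.get, reverse=True)[:ssd_slots])
    return {blk: ("ssd" if blk in hot else "nearline") for blk in io_counts}

io_counts = {"blk1": 9500, "blk2": 12, "blk3": 4100, "blk4": 3}
print(plan_tiering(io_counts, ssd_slots=2))
# -> {'blk1': 'ssd', 'blk2': 'nearline', 'blk3': 'ssd', 'blk4': 'nearline'}
```
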
Any other suggestions?

VM as a transport layer

May 4, 2011

At Softcat, we love server virtualisation – it’s been great for our customers in terms of cost savings, agility and increased availability, and it’s been great for our services and storage business as well. I do think, though, that virtualisation is sometimes considered a panacea – if we virtualise, everything will be OK and the infrastructure will just take care of itself. Out of the box, however, that’s not strictly true.

The virtualisation vendors do a cracking job of protecting workloads from an outage of the immediate hardware on which those workloads are running. No longer is your application tied to a particular server; instead the virtualisation layer will ensure that the VM running that app restarts somewhere else with the minimum of disruption. Don’t get me wrong, this is fantastic, and loads better than what we had before. It’s just that the virtualisation layer is, by default, blissfully unaware of what is going on outside the areas it controls.

I blogged about this before in connection with Neverfail’s vApp HA product, and I’m pleased to note that Symantec have released their version, ApplicationHA, which I think validates my view that the infrastructure needs to know what is happening at the application level and react accordingly.

Yesterday, I spent a couple of hours with APC looking at their InfraStruxure Central software. Rather than looking up into the application, this software looks down into the facilities element of your datacentre. That enables it to feed what it sees to your virtualisation layer – whichever vendor you have chosen – so that decisions can be made about the placement of virtual machines.

Take, for example, a rack where a UPS has died or a power feed is unavailable. That might not immediately affect the workloads running there, but it might breach an SLA in terms of redundancy, or increase risk to an unacceptable level. More simply, it would mean the VM layer could spread workloads out to less power-constrained racks, or to those running cooler – something you can really only do by guesswork today. Think about initiating DR arrangements before power fails entirely, or automatically when the UPS has to kick in. To take it a stage further, this could automate the movement of workloads around the globe according to power availability and, of course, power costs. ‘Follow the moon’ computing, anyone? A sketch of the sort of policy loop I have in mind follows.
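
This is purely illustrative – every name below is hypothetical, standing in for whatever a facilities tool like InfraStruxure Central and a virtualisation manager would actually expose – but it shows the shape of the decision-making I have in mind:

```python
# Hypothetical facilities-aware placement loop - illustrative only.
# The Rack fields stand in for a facilities feed (e.g. from a tool like
# InfraStruxure Central); migrate() stands in for a vMotion or Live
# Migration triggered through the virtualisation layer.

from dataclasses import dataclass, field

@dataclass
class Rack:
    name: str
    ups_on_battery: bool = False
    redundant_feed: bool = True
    inlet_temp_c: float = 22.0
    vms: list = field(default_factory=list)

def rack_health(rack: Rack) -> float:
    """Lower is worse: degraded power or a hot rack scores badly."""
    score = 100.0
    if rack.ups_on_battery:
        score -= 50          # act before the battery runs out
    if not rack.redundant_feed:
        score -= 25          # redundancy SLA at risk even while VMs still run
    score -= max(0.0, rack.inlet_temp_c - 25.0) * 2   # penalise hot racks
    return score

def migrate(vm: str, source: Rack, target: Rack) -> None:
    source.vms.remove(vm)
    target.vms.append(vm)
    print(f"moving {vm}: {source.name} -> {target.name}")

def rebalance(racks: list, threshold: float = 50.0) -> None:
    """Evacuate any rack whose health drops below the threshold."""
    for rack in racks:
        if rack_health(rack) < threshold and rack.vms:
            target = max((r for r in racks if r is not rack), key=rack_health)
            for vm in list(rack.vms):
                migrate(vm, rack, target)

racks = [Rack("A", ups_on_battery=True, redundant_feed=False,
              vms=["crm-01", "mail-02"]),
         Rack("B", inlet_temp_c=21.0)]
rebalance(racks)   # crm-01 and mail-02 evacuate rack A before power fails
```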

For me this could elevate System Center or vCenter to the controller for the whole of your infrastructure – taking a feed from apps, infrastructure and facilities, and making intelligent decisions about the placement and movement of workloads with no human involvement beyond setting the parameters up front.

Sounds ominously like a cloud, doesn’t it?

Checking out Tintri

April 13, 2011

I had a look yesterday evening at a webinar on Tintri, a start-up storage company founded by an ex-VMware guy, amongst others. The concept is VM-aware storage – no LUNs, just one big datastore per box which delivers VM storage. Everything seems to be managed by vCenter, or at least a very vCenter-like interface. It’s a single unit which scales out by adding boxes, in the manner of P4000 or EqualLogic, but with dedupe and flash storage like a NetApp box with PAM/Flash Cache. $65k for 8.5TB usable.

I love the concept – the theory that your VM administrators can manage the storage layer without needing to be storage experts will, I’m sure, play well in the mid-market. Storage can be a bit of a black art for those who have never had a SAN before. Tintri is also VM storage only (VMware today, other hypervisors under consideration), and we seem to be pretty good at getting our customers to 100% virtualised.

There are a few things missing – no single view of multiple boxes, no replication, as far as I can see no VSS snapshots (although integration with VMware snapshots helps), and no VAAI (as that is block-level only – it will be supported alongside VMware’s NFS support in the next release). But it looks strong on thin provisioning, dedupe and the like, and it sounds like the rest will be coming soon.

It sounds like they will hit the UK and Europe towards the end of the year – I look forward to seeing them land and to learning more, as this is a concept with potential for our customers.

Speed and ‘Stuff’: on SSD and Scale-out

April 12, 2011

The trend in the storage market over the last few years has been towards ‘unified’ storage architectures – one box from which any manner of storage can be delivered. The background to this is that, some time back, storage was generally acquired either as ‘block-based’ – DAS (direct attached) or SAN (storage area network) – to support applications, or as ‘file-based’ – NAS (network attached) – to store and share files. Combining both arrangements into one box, delivering both file and block storage over multiple protocols (iSCSI, Fibre Channel, FCoE – maybe even FCoTR!), has delivered savings, particularly in terms of management. Often such a box will have different types of hard drive for different workloads – small, fast drives for applications and slower, cheaper, higher-capacity drives for file data and snapshots.

I certainly think this approach has legs – that it will suit many organisations for some considerable time to come. However, I’m just starting to see signs that, for some companies, a different approach might make more sense. Two trends might just take us in a different direction:

Firstly, the need for speed. While storage capacities have increased by an unbelievable amount, the physical speed with which drives can read and write data hasn’t kept pace. Physics gets in the way, here, as hard drives are mechanical devices. I’m guessing that if you built a hard drive faster than 15k RPM, it might well spin itself apart (or at least the MTBF would be shortened). Enter the SSD, or solid-state drive. As this is based on non-mechanical storage, given the right environment, data can be delivered much faster. Databases, VDI, that sort of thing – these workloads drive IOPS like never before and SSD could be a part of a solution. The downside, of course, is that it is expensive and capacities are limited.
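
To put rough numbers on the physics (back-of-envelope figures, not any vendor’s spec sheet):

```python
# Why a 15k RPM drive tops out at a couple of hundred random IOPS:
rpm = 15000
rotational_latency_ms = (60_000 / rpm) / 2   # wait half a revolution on average = 2 ms
seek_ms = 3.5                                # a typical average seek for a 15k drive
iops = 1000 / (rotational_latency_ms + seek_ms)
print(f"~{iops:.0f} random IOPS per spindle")   # ~182, bounded by mechanics
# An SSD has no seek or rotation to wait for, hence thousands to
# tens of thousands of IOPS from a single device.
```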

The second trend is what EMC are calling Big Data (I love the lack of buzzwords and acronyms in that phrase!). Files, videos, images… everything is getting bigger, and companies are crunching more and more data – and needing to do it quicker – than ever before. Just look at Apple’s recent purchase of 12 petabytes of storage, presumably for iTunes. That’s extreme, and not every organisation needs anywhere near that amount of ‘stuff’, but there is a need in some cases for large amounts of file-based storage that can scale dramatically, with minimal management overhead and high levels of availability. Scale-out NAS platforms fit this bill – EMC’s Isilon is one example, and others are available, including HP’s IBRIX acquisition from 2009, now known as X9000.

I think that, based on these two trends, we will soon start to see companies run their applications off SSD (or other solid-state devices), and use cost-effective, designed-for-purpose capacity from a scale-out NAS architecture for their ‘stuff’ – all the unstructured data that exists outside a database, which is growing far faster than structured data.

Anybody else share this view?