The trend in the storage market over the last few years has been towards ‘unified’ storage architectures: one box from which any manner of storage can be delivered. The background is that, not so long ago, storage was generally acquired either as ‘block-based’ (DAS, direct-attached, or SAN, storage area network) to support applications, or as ‘file-based’ (NAS, network-attached storage) to store and share files. Combining both arrangements into one box, delivering file and block storage over multiple protocols (iSCSI, Fibre Channel, FCoE – maybe even FCoTR!), has delivered savings, particularly in management. Often such a box carries different types of hard drive for different workloads: small, fast drives for applications, and slower, cheaper, higher-capacity drives for file data and snapshots.
I certainly think this approach has legs, and that it will suit many organisations for a considerable time to come. However, I’m starting to see signs that a different approach might suit some companies better. There are two trends that might just take us in a different direction:
Firstly, the need for speed. While storage capacities have increased by an unbelievable amount, the physical speed with which drives can read and write data hasn’t kept pace. Physics gets in the way here, as hard drives are mechanical devices. I’m guessing that if you built a hard drive faster than 15k RPM, it might well spin itself apart (or at least the MTBF would be shortened). Enter the SSD, or solid-state drive. With no moving parts, and given the right environment, it can deliver data much faster. Databases, VDI, that sort of thing – these workloads drive IOPS like never before, and SSDs could be part of the solution. The downside, of course, is that SSDs are expensive and capacities are limited.
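For the curious, here’s a back-of-the-envelope sketch of why spindle speed caps random I/O. The seek times are assumptions for illustration, not any particular drive’s spec sheet:

```python
# Why mechanical drives hit an IOPS wall: each random I/O pays an average seek
# plus half a rotation. Seek figures below are assumed, for illustration only.

def hdd_random_iops(rpm, avg_seek_ms):
    """Rough random IOPS for a spinning drive."""
    rotational_latency_ms = 0.5 * 60_000 / rpm   # average wait: half a rotation
    return 1_000 / (avg_seek_ms + rotational_latency_ms)

print(round(hdd_random_iops(rpm=7_200, avg_seek_ms=8.5)))    # ~79 IOPS
print(round(hdd_random_iops(rpm=15_000, avg_seek_ms=3.5)))   # ~182 IOPS
# An SSD has no seek or rotation to wait for, so a single device can serve
# thousands to tens of thousands of random IOPS.
```

Doubling the spindle speed doesn’t even double the IOPS, which is why flash changes the picture so dramatically for workloads like databases and VDI.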
The second trend is what EMC are calling Big Data (I love the lack of buzzwords and acronyms in that phrase!). Files, videos, images… everything is getting bigger, and companies are crunching more and more data, and needing to do it more quickly, than ever before. Just look at Apple’s recent purchase of 12 petabytes of storage, presumably for iTunes. That’s extreme, and not every organisation needs anywhere near that amount of ‘stuff’, but there is a need in some cases for large amounts of file-based storage that can scale dramatically, with minimal management overhead and high levels of availability. EMC’s own scale-out NAS, Isilon, is presumably the platform they have in mind here, but other scale-out NAS platforms are available, of course, including HP’s IBRIX acquisition from 2009, which is now sold as the X9000.
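To put that Apple number in perspective, here’s a quick, purely illustrative sum, assuming 2 TB drives and ignoring any RAID or replication overhead:

```python
# Rough scale of 12 PB in spindles. Illustrative assumptions only:
# 2 TB drives, decimal units, no allowance for RAID, replication or hot spares.
petabytes = 12
drive_capacity_tb = 2

raw_drives = petabytes * 1_000 / drive_capacity_tb
print(raw_drives)  # 6000.0 drives before any protection overhead
```

At that sort of drive count, ‘minimal management overhead’ stops being a nice-to-have and becomes the whole point of the architecture.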
Based on these two trends, I think we will soon start to see companies run their applications off SSDs (or other solid-state devices), and use cost-effective, designed-for-purpose capacity from scale-out NAS architectures for their ‘stuff’: all the unstructured data that lives outside a database, and which is growing far faster than structured data (databases and the like).
Anybody else share this view?