There was a great post a couple of weeks ago, with Tom Coughlin as a contributing editor, on Forbes’ news site about the floods that hit Thailand and how they will affect the disk drive market. The great thing about the article is that it truly highlights that necessity is the mother of invention. What do I mean by that? Over the past few years, “storage efficiency” has been a big topic with vendors. Helping customers “do more with less”, especially in these stringent economic times, is key to the vitality of a number of businesses. Technologies such as storage virtualization and thin provisioning have helped customers slow their storage spend and get better utilization out of their existing storage. But once customers have moved their utilization rates from 35% to 65% or 70%, there comes a time when new storage needs to be acquired to keep up with the growth of data. The issue arises when there are no more disk drives to be acquired. Due to the floods in Thailand, analysts predict that the storage industry could be 50 to 60 million units shy of demand this quarter. This does two things:

1) It drives the price of disk higher, at a time when the expectation is to spend less on disk

2) It forces IT to get more creative about how they use and deploy their storage

It is the latter that I want to focus on, as paying more for disk is not necessarily the best option. It is important to note that data grows for one reason: business does not stop. It needs to keep going, and that is what drives the demand for data.

In the Forbes piece Tom talks about “a surge in new technologies because of this disk shortage”, but he doesn’t cover some of the most innovative technologies that are available to help customers. I would agree with Tom that we “could” see a surge in SSD, though that would be short-lived due to both supply and cost, and a surge in tape as well, but these aren’t really “new technologies”.

New technologies for primary storage optimization can and will play a key role in helping IT be more productive with their existing capacity. A technology such as Real-time Compression can help customers get back up to 80% of their existing storage capacity without losing any of their current capabilities or changing any of their data management processes. It integrates seamlessly into your storage environment and compresses your data 50% to 80% (depending upon data type), and it fits into IT’s existing data management practices without requiring any changes: no change is required to any of the applications, snapshots stay the same, replication stays the same, and even backup works without having to change anything in the environment. And while some vendors may say “you can’t deduplicate compressed data”, you actually can deduplicate data written with Real-time Compression.

Real-time Compression is truly a “new” technology that can be expected to surge in this environment. IT can deploy it and expect:

  1. Up to 80% compression on their primary storage
      1. This means they can defer adding new capacity until the HDD market comes back and disk prices stabilize
  2. Up to 80% optimization in each of their downstream processes that use disk (each of these processes consumes disk, so there is tremendous savings from compressing just the primary copy of the data)
      1. Meaning up to 80% less capacity for snapshots
      2. Meaning up to 80% less capacity for replication
      3. Meaning up to 80% less capacity for backups
  3. The technology will be transparent to their existing infrastructure
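To make the cascade concrete, here is a minimal sketch of the arithmetic behind the list above: compressing the primary copy shrinks every downstream copy (snapshot, replica, backup) that stores the same data. The function name and the figures are my own illustrative assumptions, not vendor numbers.

```python
# Hypothetical sketch: compressing the primary copy cascades to every
# downstream process that keeps a copy of the same data.

def total_footprint(primary_tb: float, copies: int, saved_fraction: float) -> float:
    """Total disk consumed by the primary plus `copies` downstream copies
    (snapshots, replicas, backups), after compression saves `saved_fraction`
    of the space (0.8 means an 80% reduction)."""
    compressed_tb = primary_tb * (1 - saved_fraction)
    return compressed_tb * (1 + copies)

# Example: 100 TB primary with 3 downstream copies (snapshot, replica, backup)
uncompressed = total_footprint(100.0, 3, 0.0)  # 400 TB of total disk
compressed = total_footprint(100.0, 3, 0.8)    # roughly 80 TB of total disk
print(uncompressed, compressed)
```

The point of the sketch is that the 80% saving applies once per copy, so the absolute capacity reclaimed grows with every downstream process that uses disk.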

In addition, Real-time Compression can cut your cost per TB by a factor equal to your compression ratio (50% compression is a 2:1 reduction in your $/TB cost). And if you are looking to SSD for performance, the new cost model means you can now afford to spend some money on SSD, or more money on SSD than you could before.
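The $/TB arithmetic above can be sketched as follows; the function name and the $100/TB figure are assumptions for illustration only.

```python
# Hypothetical illustration: effective $/TB after compression.
# 50% space savings = 2:1 ratio = half the effective cost per TB.

def effective_cost_per_tb(raw_cost_per_tb: float, saved_fraction: float) -> float:
    """Return effective $/TB when compression saves `saved_fraction` of space
    (0.5 means data shrinks by 50%, i.e. a 2:1 compression ratio)."""
    if not 0 <= saved_fraction < 1:
        raise ValueError("saved_fraction must be in [0, 1)")
    return raw_cost_per_tb * (1 - saved_fraction)

# Example: at $100/TB raw, 50% compression yields an effective $50/TB (2:1),
# and 80% compression yields roughly $20/TB (5:1).
print(effective_cost_per_tb(100.0, 0.5))
print(effective_cost_per_tb(100.0, 0.8))
```

This is the budget headroom the post refers to: the gap between raw and effective $/TB is what could be redirected toward SSD.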

Now, the “new” technology does need to be efficient and fit into a customer’s existing infrastructure seamlessly, or it isn’t really useful. Asking IT to change their processes can be just as costly as purchasing new capacity in the long run. I mention this because, in a related story, NetApp is also fearful about what the HDD shortage will do to their business. I find this ironic. On a recent panel I was on at SNW with Larry Freeman of NetApp, he told the audience that NetApp filers have these “new” technologies “built in” to their WAFL file system; in fact, they have 10 “storage efficiency” features built into WAFL. He went on to say that on a weekly basis they get reports from a number of systems in the field that “report in” on how customers are using their systems. On average, customers use only 3 of the 10 features. When we polled the audience to ask why, they said that while a feature may help them save space, it impacts other areas of their operation: maybe it impacts system performance, maybe it impacts backup, so they can’t use the feature.

The moral of the story is that I do believe new technologies are going to “surge” (as Tom states in his piece), because IT will need alternatives to the shortage of available disk drives and the higher prices. This will also force IT to look at their environment and identify how to be more efficient with their storage, since events like the flood could happen again and affect the supply and demand of HDD. But the answer to the challenge needs to be the right technologies, ones that help not only with storage capacity but also with data growth. The best technologies fit into IT’s existing infrastructure and make it more efficient overall.

Tags:

Compression, data, Data Deduplication, disk, flooding, HDD, IBM, NetApp, NTAP, real-time compression, SSD, Storage, Thailand, virtualization