Today I read a very well-written blog post by The SANMan. The only issue is that you can't comment on his blog - it's the first technology blog I have seen like that. So I will have to post my thoughts here.

In his post "NetApp Takes the 'Primary' Lead for Data Reduction" - which seems more like theory and a commercial for NTAP than reality (see the comments at The Register) - the SANMan states:

"Yes, Ocarina and Storwize have appliances that compress and uncompress data as it’s alternatively stored and read but what performance overhead do such technologies have when hundreds of end users concurrently access the same email attachment? As for Oracle’s Solaris ZFS file system sub level deduplication which is yet to see the light of day one wonders how much hot water it will get Oracle into should it turn out to be a direct rip off of the NetApp model."

I have two comments:

1) You are right - you CAN'T do deduplication on primary storage if you affect performance. All indications from customers are that they cannot use NTAP deduplication or even compression in-line, as the performance hit is just too severe, so everything has to be done post-process. (A rough sketch of the in-line vs. post-process difference is below, after point 2.)

2) I direct your attention to the Wikibon blog post on CORE, "Dedupe Rates Matter...Just Not as Much as You Think" - Storwize can do in-line data optimization without any performance degradation. So the question is: if customers can 'Optimize without Compromise,' why wouldn't they? (The second sketch below shows what an in-line compression path looks like in principle.)
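
To make the in-line vs. post-process distinction concrete, here is a minimal Python sketch of block-level deduplication. It is purely my own illustration - the names (BlockStore, BLOCK_SIZE, the "raw-" placeholder keys) are invented for this example and say nothing about how NetApp or anyone else actually implements it. The point is simply where the cost lands: the in-line path pays the fingerprint-and-lookup work on every write, while the post-process path accepts the write immediately and leaves duplicate elimination to a later scan.

```python
import hashlib

BLOCK_SIZE = 4096  # hypothetical fixed block size for this sketch


class BlockStore:
    """Toy content-addressed block store shared by both styles below."""

    def __init__(self):
        self.blocks = {}   # key -> block data (ideally stored once)
        self.volume = []   # logical volume: ordered list of keys

    @staticmethod
    def fingerprint(block: bytes) -> str:
        return hashlib.sha256(block).hexdigest()


def write_inline(store: BlockStore, block: bytes) -> None:
    """In-line dedupe: fingerprinting and the index lookup sit in the write
    path, so every single write pays that cost before it is acknowledged."""
    fp = BlockStore.fingerprint(block)
    if fp not in store.blocks:
        store.blocks[fp] = block
    store.volume.append(fp)


def write_post_process(store: BlockStore, block: bytes) -> None:
    """Post-process style: land the data immediately with no extra work on
    the write path; duplicate elimination is deferred to a background scan."""
    key = "raw-%d" % len(store.volume)   # placeholder key, not content-based
    store.blocks[key] = block
    store.volume.append(key)


def post_process_pass(store: BlockStore) -> None:
    """Background job: fingerprint everything and collapse duplicate copies."""
    for i, key in enumerate(store.volume):
        if key.startswith("raw-"):
            data = store.blocks.pop(key)
            fp = BlockStore.fingerprint(data)
            store.blocks.setdefault(fp, data)  # keep only one physical copy
            store.volume[i] = fp


if __name__ == "__main__":
    store = BlockStore()
    attachment = b"x" * BLOCK_SIZE          # the same block written three times
    for _ in range(3):
        write_post_process(store, attachment)
    print("physical blocks before pass:", len(store.blocks))   # 3
    post_process_pass(store)
    print("physical blocks after pass: ", len(store.blocks))   # 1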
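For contrast, here is an equally minimal sketch of an in-line (real-time) compression path - compress in the write path, decompress transparently in the read path - using Python's zlib. Again, this is only an illustration of the concept; the CompressedVolume class is invented here and implies nothing about how Storwize's product actually works or what its overhead is.

```python
import zlib


class CompressedVolume:
    """Toy in-line (real-time) compression: data is compressed before it
    ever reaches the backing store and decompressed on every read."""

    def __init__(self, level: int = 6):
        self.level = level
        self.chunks = []   # backing "disk": compressed chunks only

    def write(self, data: bytes) -> int:
        """Compress in the write path; returns a chunk id."""
        self.chunks.append(zlib.compress(data, self.level))
        return len(self.chunks) - 1

    def read(self, chunk_id: int) -> bytes:
        """Decompress in the read path; callers always see the original bytes."""
        return zlib.decompress(self.chunks[chunk_id])


if __name__ == "__main__":
    vol = CompressedVolume()
    original = b"the same email attachment, over and over " * 500
    cid = vol.write(original)
    assert vol.read(cid) == original
    print(len(original), "logical bytes ->", len(vol.chunks[cid]), "bytes on 'disk'")
```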

Updated 6/7/2010 - Oh, one quick question: how does the SANMan get away with the graphics he uses? I would think Walt Disney & Pixar would get a bit upset about the use of the character Carl Fredricksen, no?

Tags:

Capacity Optimization, Compression, data compression, Data Deduplication, Dedupe, Deduplication, NetApp, NTAP, real-time compression, Storage, Storwize