A Simplification Proposal

Having worked in various organizations over the years, performing infrastructure audits, creating documentation, and exploring and explaining how the elements of a SAN infrastructure are connected, how they relate to each other, and where they are physically located, one thought keeps coming back to me: does it really have to be this complicated?

Enter Virtualization.

Well, SAN and storage are still lagging, years after VMware and its like emerged. Storage and SAN people are still struggling with WWNs, switch ports, SAN zones, host groups, LUNs, RAID groups and cables.

What if there were a way to drastically simplify the whole stack of dependencies and elements that make up the chain, all the way from the RAID group, the LUN, the host group, the SAN zones, the SAN ports and the HBAs down to the HBA firmware release level and the HBA OS driver release level? Everything! Just get rid of it once and for all.

That is the thought I have been contemplating for some time now.

The fact to honor is the original idea that paved the way for Storage Area Networks: higher utilization of the storage volume acquired. Remember when each server had its own storage device attached, using perhaps 20% of the capacity, from both a volume and a bandwidth standpoint? That is why we have SANs today; that was the original driver back in the day, and it still holds true. We must not forget that.
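The utilization argument can be illustrated with a little arithmetic. The server names and capacity figures below are invented for illustration only, not measurements from any real environment:

```python
# Hypothetical illustration: per-server direct-attached storage vs. a shared pool.
# All names and numbers here are made-up example figures.

servers = {
    "app01": {"capacity_gb": 500, "used_gb": 100},
    "db01":  {"capacity_gb": 500, "used_gb": 120},
    "web01": {"capacity_gb": 500, "used_gb": 80},
}

def utilization(used_gb: float, capacity_gb: float) -> float:
    """Fraction of acquired capacity actually in use."""
    return used_gb / capacity_gb

# Direct-attached: each server owns its own disks, so free space is stranded.
per_server = [utilization(s["used_gb"], s["capacity_gb"]) for s in servers.values()]
avg_das = sum(per_server) / len(per_server)

# Pooled: the same total demand served from shared capacity sized with headroom.
total_used_gb = sum(s["used_gb"] for s in servers.values())
pooled_capacity_gb = total_used_gb * 1.25  # 25% headroom instead of 80% waste
pooled = utilization(total_used_gb, pooled_capacity_gb)

print(f"average DAS utilization: {avg_das:.0%}")   # → 20%
print(f"pooled utilization:      {pooled:.0%}")    # → 80%
```

The exact numbers do not matter; the point is that stranded free space on many small islands becomes usable headroom once capacity is pooled, which was the SAN's original promise.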

When you think of storage, don't think of it only in terms of capacity and cost. That is probably the most common mistake. Think about the value of the data you are storing and how it is being protected. Think about how long you need to keep it, and how often and how fast you need access to it over longer periods. Depending on what you store, it is most likely subject to some regulatory requirements. Look for a storage solution that is sustainable, scalable, secure and tamper-proof.

So, data is created by a computing element and then stored. That’s the easy part.

After that point you need an infrastructure that allows you to restore data that was deleted by mistake; to retrieve data from an archive designed specifically for long-term storage, again based on regulatory requirements; and finally to recover your systems and data after logical faults, caused by the system itself or by a virus, or physical faults caused by flood or fire. Retrieve, Restore, Recover.

That’s it folks. That is all you need.

But there is one more thing… All of the above has little or no value unless the infrastructure allows you to practice disaster recovery on real, relevant, active data sets, simply and frequently. The key to making that possible is the CDP element: Continuous Data Protection, an integral part of the HyperScale architecture.

So what am I selling?

I am simply proposing a new vantage point: examining the infrastructure as a whole, based on the thoughts outlined above, and implementing it on the unique Scale Computing HC3 platform, which removes the whole SAN/storage complex and forges virtualization of machines and storage management into one comprehensive, manageable unit in an unprecedented fashion.

Once this is implemented, the whole stack of complexity, risk and cost associated with HBAs, SAN, storage and cabling goes away, without losing the original idea that underpinned the advent of the SAN many moons ago.

Start by identifying candidate systems in your current, complex infrastructure and create a plan for migrating them into the new, simple HyperScale infrastructure.

You might decide that most new systems will go in here, or perhaps that systems of a certain class qualify as the first candidates. Eventually you will have 80% of your systems in the HyperScale; the rest, systems that for one reason or another cannot or should not be migrated, will remain in the old infrastructure. By then, however, the remaining traditional platform will be much less complex. Fewer ports, fewer zones, fewer host groups, fewer LUNs, fewer storage devices to manage; less time, fewer worries, less money, less risk.
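To make the selection step concrete, here is a rough sketch of how candidate filtering could be expressed. The system names, fields and criteria are hypothetical examples of my own, not part of any Scale Computing tooling:

```python
# Hypothetical candidate selection for migration. Every name, field and
# threshold below is illustrative; adapt the criteria to your own estate.

systems = [
    {"name": "intranet", "uses_raw_lun": False, "criticality": "low"},
    {"name": "erp-db",   "uses_raw_lun": True,  "criticality": "high"},
    {"name": "file-srv", "uses_raw_lun": False, "criticality": "medium"},
]

def is_candidate(system: dict) -> bool:
    """A system qualifies for early migration if nothing ties it to the SAN
    (no raw LUN mappings) and a failed first attempt would not be critical."""
    return not system["uses_raw_lun"] and system["criticality"] != "high"

candidates = [s["name"] for s in systems if is_candidate(s)]
remaining  = [s["name"] for s in systems if not is_candidate(s)]

print("migrate first:", candidates)   # → ['intranet', 'file-srv']
print("stay for now: ", remaining)    # → ['erp-db']
```

Whatever criteria you choose, the point is to start with the systems whose move is cheap and low-risk, and let the hard cases wait until the simplified platform has proven itself.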

You win!

The HyperScale Architecture

Scale Computing HC3 example deployment

If you are contemplating these matters, if you think that what I am saying makes sense, or if you have a budget and would like to explore the HyperScale solution, feel free to contact me at pm@dataresilience.com.

Good luck!

Stockholm August 4th 2012


More information on Scale HC3 can be found here.