What is Scale Computing HC3?

Scale Computing HC3 is all about lowering cost without messing things up.

By cost we mean cost on many levels: electricity, cooling, rack space, and the knowledge and accreditation needed to keep staff up to date on hypervisors, SAN storage, SAN fabrics, switch operating systems, patch cables and documentation; the list goes on.

Scale HC3 is a turnkey infrastructure for your applications. Integrated, Elegant, built for Everyman IT. Servers, Storage and Virtualization; all in one package.

Simple, Robust, Affordable. Built for the Small and Medium Business, or for larger distributed corporations that need to keep things simple for local staff in remote locations.

Share the Ride

How do you explain something like Virtualization and the benefit of Scale Computing HC3 to someone without ANY technical words or concepts?

One way would be to say that it is about sharing the ride.

Like carpooling. Like the story of Noah and his boat. Not really a new concept. But proven.

Check this old classic out and you will get the spirit of Scale HC3 and Virtualization:

http://www.youtube.com/watch?v=2yY_9D8d0rk&feature=youtu.be

Your Applications. Always there.

Scale HC3 was designed and built for mid-sized companies. This does not exclude large companies as beneficiaries of the technology: they too would save money from the ease of use and simplicity offered by the Scale solution. Only more so.

The argument is that small and mid-sized companies cannot afford the cost of managing the infrastructure complexity of traditional SAN-based virtualized infrastructures. This is true, and it is true for a company of any size; it is just more true for the mid-sized company. There, it is notoriously difficult for the often thinly spread, single IT resource to stay on top of everything from operating systems, hypervisors and clusters to SAN components. It is impossible for one individual to be an expert in all these separate fields, and this poses a real risk to the mid-sized company.

Scale offers a nice alternative: simple, elegant, scalable, ready-to-use building blocks that are incredibly easy to deploy. Scale HC3 offers virtually all the features big companies expect and use every day, but with less complexity, and as a result a significantly lower cost to manage.

Scale HC3 offers the shortest way possible from concept to a fully functional production platform underpinning your applications. And that is what it is all about. Your applications, always there, always online.

Easy Like Sunday Morning.

I had a really nice installation experience at a customer site this Friday that I’d like to share with you. I found the hands-on experience of using Scale in a real-life situation to be really positive. I am biased, partial and all that, but the feeling remains: this is easy like Sunday morning compared to other systems I have come across over the years.

Of course, without careful planning and preparation you are bound to fail regardless of system or architecture, and I did arrive well prepared at the site, leaving nothing to chance. Cables were labeled (blue cables for the public IP network and yellow for the backplane), and the system was staged and checked before shipping it out to the customer.

I have never, though, at any point in my career, been able to leave a site in just under two hours having delivered a complete data center infrastructure: servers, virtualization and storage. Unboxing, racking, connecting and powering up the four nodes and the two switches was done in two hours.

And the result? A complete virtual infrastructure, ready to be populated by virtual machines and any combination of operating systems, be it Windows 2008 R2, Windows 7, Windows 8, RedHat Linux 6.2, Ubuntu 12, whatever your flavor; just drop ’em in. It takes about 15 minutes to have a working Windows server up and running from an ISO.

And, mind you, this first Windows 2008 R2 server was instantly protected by a resilient, redundant infrastructure that is incredibly easy to understand and operate.

Installation of the Scale HC3 as a whole was a straightforward experience, and the result is a very neat and clean installation that can easily be expanded in the future.

The picture below shows the 4-node Scale HC3 demo environment after installation.


Robust, Cost-Effective and Easy to Manage.

Are you frustrated by the complexity that characterizes modern IT operations, and by the costs that a traditional move to a virtualized environment brings?

Imagine an environment where you can focus entirely on your applications, without having to deal with the complexity that separate storage systems, server management, virtualization and clustering bring.

That is exactly what Scale Computing offers with its HC3: a turnkey system complete with virtualization, storage and high availability, brought together in a scalable and cost-effective platform that is very easy to use. Scale thus greatly reduces the uncertainty and risk always associated with establishing a virtual platform, since the system is delivered as a working whole, with all of its components tuned, tested and packaged.

When you need to add resources, simply add another node, containing compute capacity, memory, storage and network capacity, to the resource pool.

The system is built on an architecture that gives each VM (virtual machine) you create access to a highly available, redundant environment. If one of the underlying nodes is lost, the VMs running on that node are automatically moved over and continue to run on one of the remaining nodes, entirely without manual intervention.

Since neither an investment in virtualization software nor external storage is required, Scale Computing HC3 offers a low starting cost for getting the complete environment up and running, and since the complexity of the environment is greatly reduced, the costs of managing the system also drop sharply.

HC3 makes both deployment and management of the platform as easy to administer as a single server.

HC3 was created primarily with mid-sized companies and organizations in mind, especially those that have not yet taken on virtualization: companies that, because of the cost threshold and complexity such a deployment involves, have not had access to the level of availability normally offered through virtualization and various kinds of SAN solutions.

Scale Computing HC3 offers an all-inclusive, complete system with servers, storage and virtualization in an easy-to-manage and robust platform.

Scale Computing HC3: a complete data center that fits in a VW Golf.

Data Centre In A Box

Are you frustrated by the complexity of the modern data centre and the price of making the move to Virtualization?

Imagine a data centre where you could focus only on your applications, without the complexity of separate storage systems, server management, virtualization and clustering.

That is the experience delivered by Scale Computing’s HC3: a turn-key, data-centre-in-a-box with virtualization, storage and high availability seamlessly integrated into a scalable and cost-effective system that is easy to manage.

When your environment needs to grow, simply add HC3 nodes, bringing additional compute and storage capacity, caching and network connectivity to the resource pool.

The system is architected so that every VM created on HC3 is automatically made highly available. In the event of a node failure, VMs on that node will automatically fail over to the other nodes in the system without manual intervention.
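
To make the failover logic concrete, here is a minimal conceptual sketch in Python of the decision a cluster makes when a node is lost. This is not Scale’s actual implementation; the node names, VM names and memory figures are hypothetical, and a real system would also weigh CPU, storage and placement rules.

```python
# Conceptual HA failover sketch; NOT Scale's actual implementation.
# Node names, VM names and memory figures are hypothetical.

def fail_over(cluster, failed_node):
    """Restart every VM from a failed node on the surviving nodes."""
    survivors = [n for n in cluster if n is not failed_node]
    for vm in failed_node["vms"]:
        # Pick the surviving node with the most free memory.
        target = max(survivors, key=lambda n: n["free_mem_gb"])
        if target["free_mem_gb"] < vm["mem_gb"]:
            raise RuntimeError(f"no capacity left to restart {vm['name']}")
        target["vms"].append(vm)
        target["free_mem_gb"] -= vm["mem_gb"]
        print(f"restarted {vm['name']} on {target['name']}")

cluster = [
    {"name": "node1", "free_mem_gb": 16, "vms": [{"name": "web01", "mem_gb": 8}]},
    {"name": "node2", "free_mem_gb": 24, "vms": []},
    {"name": "node3", "free_mem_gb": 8,  "vms": []},
]
fail_over(cluster, cluster[0])  # node1 dies; web01 restarts on node2
```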

With no virtualization software to license and no external storage to buy, HC3 lowers out-of-pocket costs and radically simplifies the infrastructure needed to keep your applications running.

HC3 makes the deployment and management of a highly available and scalable infrastructure as easy as managing a single server.

Designed specifically for mid-sized companies, HC3 is ideal for those who have not yet adopted virtualization due to its cost and complexity, enabling them to run highly available applications.

Scale Computing HC3, integrating Servers, Storage and Virtualization. No Sweat.


Scale Computing Hyper Convergence, what does it even mean?

Keep It Simple, Sugar. Because simple is beautiful. Perhaps I am just lazy, but I think of Hyper Convergence as something beautiful. Geeky, but undoubtedly beautiful. – Per-Ola Mård, Data Resilience AB

So how is Hyper Convergence achieved, and what is it?

Simple.

Through its unique architecture, Scale eliminates storage provisioning complexity for its customers, greatly reducing the time and effort required to configure the virtual infrastructure, or even just to deploy storage to virtual machines.

Agile.

To drive up utilization and eliminate complex planning, data is accessible from all compute nodes, so virtual machines can be easily and swiftly moved and rebalanced across physical resources. Out of the box: no complex cluster or resource pool configuration is required for this inherent feature.

Resilient.

The shared, highly protected platform can tolerate a component failure anywhere; all that recovery requires is the restart of a virtual machine on another physical node, and it happens automatically. Data remains available despite the component failure, and the platform redistributes data to maintain sufficient protection against additional failures.
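
To illustrate the redistribution idea, here is a simplified sketch in Python of re-replicating data blocks after a node loss. It is not Scale’s actual data layout; the block IDs, node names and the replica count of two are assumptions for the example.

```python
# Simplified re-protection sketch; not Scale's actual data layout.
# A replica count of 2 is assumed for the example.

REPLICAS = 2

def reprotect(placement, failed_node, nodes):
    """placement maps block id -> set of node names holding a copy."""
    for block, holders in placement.items():
        holders.discard(failed_node)
        while len(holders) < REPLICAS:
            # Copy the block to a surviving node that lacks it.
            target = next(n for n in nodes
                          if n != failed_node and n not in holders)
            holders.add(target)
            print(f"re-replicating block {block} to {target}")

nodes = ["node1", "node2", "node3", "node4"]
placement = {"b1": {"node1", "node2"}, "b2": {"node1", "node3"}}
reprotect(placement, "node1", nodes)  # b1 and b2 each regain a second copy
```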

Expandable.

Expanding the system along the performance or capacity axis takes no more than adding two more paired nodes to the existing platform. That is distinctively simpler than planning the expansion of storage pools, data stores and traditional host clusters, which may all be subject to limitations requiring a complex planning exercise just to figure out how to add more capacity or performance.

Retirable.

As hardware ages, new hardware can be introduced into the existing platform and older physical elements can be phased out at the right time. A rolling upgrade process means more timely data migration at a lower price point.

You win.

///pm

A Simplification Proposal

Having worked in various organizations over the years performing infrastructure audits, creating documentation, and exploring and explaining how things are connected, how different elements in the SAN infrastructure relate to each other, and where they are physically located, one thought keeps coming back to me: does it really have to be this complicated?

Enter Virtualization.

Well, SAN and storage are still lagging, years after VMware and the like emerged. Storage and SAN people are still struggling with WWNs, switch ports, SAN zones, host groups, LUNs, RAID groups and cables.

What if there was a way to drastically simplify the whole stack of dependencies and elements that make up the chain, all the way from the RAID group, the LUN, the host group, the SAN zones, the SAN ports and the HBAs to the HBA firmware release level and the HBA OS driver release level? Everything! Just get rid of it once and for all!

That is the thought I have been contemplating for some time now.

The fact to honor is the original thought that paved the way for Storage Area Networks: higher utilization of the storage volume acquired. Remember when each server had its own storage device connected to it, using perhaps 20% of the capacity from both a volume and a bandwidth standpoint? That is why we have SANs today; that was the original driver back in the day, and it still stands true. We must not forget that.

When you think of storage, don’t think of it just in terms of capacity and cost. That is probably the most common mistake. Think about the value of the data you are storing and how it is being protected. Think about how long you need to keep it, and how often and how fast you need access to it over longer time periods. Depending on what you store, it is most likely subject to some regulatory requirements. Look for a storage solution that is sustainable and scalable, secure and tamper-proof.

So, data is created by a computing element and then stored. That’s the easy part.

After that point you need an infrastructure that allows you to restore data that is deleted by mistake, and to retrieve data from an archive designed specifically for storing data for a long time, again based on regulatory requirements. Finally, you need to be able to recover your systems and data after logical faults created by the system itself or by a virus, or after physical faults caused by flood or fire. Retrieve, Restore, Recover.

That’s it folks. That is all you need.

But there is one more thing… All of the above really has little or no value unless the infrastructure allows you to practice disaster recovery on real, relevant, active data sets, simply and frequently. The key to making that possible is the CDP element: Continuous Data Protection is an integral part of the HyperScale architecture.

So what am I selling?

I am simply proposing a new vantage point: examining the infrastructure as a whole, based on the thoughts outlined above, and implementing it on the unique Scale Computing HC3 platform, which removes the whole SAN/storage complex and forges machine virtualization and storage management into one comprehensive, manageable unit in an unprecedented fashion.

Once implemented, the whole stack of complexity, risk and cost associated with HBA/SAN/storage/cabling will go away, without losing the original thought underpinning the advent of the SAN many moons ago.

Start by identifying candidate systems in your current complex infrastructure and create a plan for how to migrate to the new, simple HyperScale infrastructure.

You might decide that most new systems will go in here, or perhaps systems of a certain class qualify as the first candidates? Eventually you will have 80% of your systems in the HyperScale, and the rest, systems that for one reason or another cannot or should not be migrated, will remain in the old infrastructure. By that time, however, the remaining traditional platform will be much less complex. Fewer ports, fewer zones, fewer host groups, fewer LUNs, fewer storage devices to manage; less time, less worry, less money, less risk.

You win!

The HyperScale Architecture

Scale Computing HC3 example deployment


If you are contemplating these matters, think that what I am saying makes sense, and perhaps have a budget and would like to explore my HyperScale solution, feel free to contact me at pm@dataresilience.com.

Good luck!

Stockholm August 4th 2012

//pm

More information on Scale HC3 can be found here.


Where do I begin?

What is the real value of your data? What is the hourly cost for your business when your servers are down?

When was the last time you verified your backups? How old was the data you verified? Was it in fact relevant to your business?

Once the recovery process is completed, how many hours will pass before the business is operational, and what happens to incoming data streams during the outage?

If the DR process takes 8 hours to complete, you effectively have an 8-hour “hole” in every database and system that needs mending: all hands on deck! The mending process is often manual, lengthy and exhausting for the staff, and therefore riddled with risk of human error.
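
To put rough numbers on that hole, here is an illustrative back-of-the-envelope calculation. All figures are assumptions for the sake of the example, not measurements from any customer.

```python
# Illustrative cost of an 8-hour DR "hole"; all figures are assumptions.

outage_hours = 8          # time until the DR process completes
tx_per_hour = 1200        # transactions that would normally arrive
cost_per_hour = 5000      # revenue/productivity lost per hour (EUR)
minutes_to_reenter = 2    # manual mending time per missed transaction

missed_tx = outage_hours * tx_per_hour
mending_hours = missed_tx * minutes_to_reenter / 60
direct_cost = outage_hours * cost_per_hour

print(f"{missed_tx} transactions fall into the hole")          # 9600
print(f"~{mending_hours:.0f} staff-hours of manual mending")   # 320
print(f"EUR {direct_cost} in direct outage cost")              # 40000
```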

The InMage:Scout software platform offers unprecedented Recovery Point Objectives and Recovery Time Objectives (RPO/RTO).

What does this even mean?

InMage:Scout is designed to give your organization new-found business agility and ways to manage LOGICAL protection of your business-vital systems in a way that is simply impossible with traditional backup/restore schemes.

So what is logical protection? How is it different from physical protection of data? Well, physical protection will protect you from water and fire. Logical protection will protect you from things like cyber-attacks and unfortunate mishaps in the normal systems maintenance process, such as a patch to a database program that mistakenly destroys the content of the database.

This is where the time element comes into play: how often do you make a copy of your systems and your data? Once the data is gone or corrupted, how recently you made that copy becomes extremely important.

With InMage:Scout, every 8K of data written to a disk is protected. All the time. Continuously. This allows the sysadmin, once disaster has happened, to return to virtually any point in time for each disk that has been assigned a protection scheme, mount it, and simply start the system from that point in time.
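
To make the mechanics concrete, here is a toy sketch in Python of the continuous-journaling idea behind CDP. It is not InMage:Scout’s actual implementation; only the fine write granularity comes from the text above, and every name in this sketch is invented for illustration.

```python
# Toy Continuous Data Protection sketch: every write is journaled with a
# timestamp, so a disk can be rebuilt as of any point in time.
# NOT InMage:Scout's implementation; this API is invented for illustration.
import itertools

_clock = itertools.count(1)  # logical timestamps; a real CDP uses wall time
journal = []                 # ordered (timestamp, offset, data) entries

def protected_write(disk, offset, data):
    """Apply a write to the disk and record it in the CDP journal."""
    journal.append((next(_clock), offset, data))
    disk[offset] = data

def restore_as_of(ts):
    """Replay journaled writes up to timestamp ts to rebuild the disk."""
    disk = {}
    for when, offset, data in journal:
        if when > ts:
            break  # ignore everything after ts (e.g. the corruption)
        disk[offset] = data
    return disk

disk = {}
protected_write(disk, 0, b"good data")
checkpoint = journal[-1][0]              # restore point: after the good write
protected_write(disk, 0, b"corrupted!")  # the "disaster"
recovered = restore_as_of(checkpoint)
assert recovered[0] == b"good data"      # rolled back to just before it
```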

This is a fantastic evolutionary step in the world of data protection. Why? Because it allows for practice: you do not have to wait for a disaster to practice using this platform; you could evaluate and verify it every hour of every day if you wanted to. Perhaps this is the most significant value of InMage:Scout: it allows for practice without affecting the running production environment. No more guesswork. You will know that your protection works, simply because verifying it is easy to do, so there is a real chance it will get done on a regular basis.

Ask yourself: when will I be back in business?

Once you realize all of the above and think “hey, this all makes a lot of sense to me!”, where do you begin your CDP journey?

I normally start out with the following three simple steps to kick-start the process:

  1. Understand the parameters of the existing environment: architecture, hardware and software elements, data volume and change rate (a back-of-the-envelope sizing sketch follows below).
  2. Define and document what the customer might expect from a CDP solution such as InMage:Scout. What can and cannot be achieved given the current infrastructure elements?
  3. Confirm that there is a budget for what steps 1 and 2 render in terms of a specific Scout configuration: in other words, would investing in a Scout project be financially meaningful for you?
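
For step 1, a rough calculation like the one below helps translate data volume and change rate into journal size and replication bandwidth. All figures are illustrative assumptions, not InMage:Scout defaults or recommendations.

```python
# Back-of-the-envelope CDP sizing from data volume and change rate.
# All figures are illustrative assumptions, not InMage:Scout defaults.

protected_tb = 4.0        # total protected data volume (TB)
daily_change_rate = 0.05  # fraction of the data rewritten per day
retention_days = 7        # how far back you want to be able to roll

changed_gb_per_day = protected_tb * 1024 * daily_change_rate
journal_gb = changed_gb_per_day * retention_days
# Average bandwidth needed to ship the change stream off-site:
mbit_per_s = changed_gb_per_day * 8 * 1024 / (24 * 3600)

print(f"~{changed_gb_per_day:.0f} GB of changes per day")            # ~205
print(f"~{journal_gb:.0f} GB of journal for {retention_days} days")  # ~1434
print(f"~{mbit_per_s:.1f} Mbit/s average replication bandwidth")     # ~19.4
```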

Would you be open to a chat on this with respect to your business?

Data Resilience is getting ready to Scale

Today Data Resilience passed the Scale Technical Training exam. Happy Days!

Contact Data Resilience today for a smoother, more flexible and affordable storage solution designed to save you both time and money.

Start small, grow large. Start with iSCSI, add NFS and CIFS. Start simple, add replication and snapshots: it’s all included in one smooth, scalable package.

Why wait?

Data Resilience AB passed the Scale Technical Training exam. 03AUG11