Scale Computing Hyper Convergence: what does it even mean?

Keep It Simple Sugar. Because simple is beautiful. Perhaps I am just lazy, but I think of Hyper Convergence as something beautiful. Geeky, but undoubtedly beautiful. – Per-Ola Mård, Data Resilience AB

So how is Hyper Convergence achieved, and what is it?


Through its unique architecture, Scale eliminates storage provisioning complexity for its customers, greatly reducing the time and effort required to configure the virtual infrastructure, or even just to deploy storage to virtual machines.


Because data is accessible from all compute systems, virtual machines can be moved and rebalanced easily and swiftly across physical resources, driving up utilization and eliminating complex planning. Out of the box. No complex cluster or resource pool configuration is required for this inherent feature.
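To make the idea concrete, here is a minimal sketch, emphatically not Scale's actual code: because every node can reach all data, "moving" a VM is just changing which node runs it, so a rebalancing pass only has to balance compute resources. The node names, VM sizes, and the `rebalance` helper below are all hypothetical.

```python
# Illustrative sketch (not Scale's implementation): greedy rebalancing of VMs
# across nodes. Since all nodes see all data, a "move" is only a change of
# which node runs the VM; no storage migration is required.

def rebalance(vms, nodes):
    """Place each VM (name, memory_gb) on the node with the most free memory."""
    free = dict(nodes)                 # node -> remaining memory (GB)
    placement = {}
    # Place the largest VMs first, each on the currently least-loaded node.
    for name, mem in sorted(vms, key=lambda vm: vm[1], reverse=True):
        target = max(free, key=free.get)
        placement[name] = target
        free[target] -= mem
    return placement

vms = [("web", 8), ("db", 32), ("cache", 16)]
nodes = {"node1": 64, "node2": 64, "node3": 64}
placement = rebalance(vms, nodes)
```

With three equally sized nodes, the greedy pass naturally spreads the three VMs across all three nodes.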


The shared, highly protected platform can tolerate a component failure anywhere; all that is required for recovery is the restart of a virtual machine on a new physical node. Automatically. Data remains available despite the component failure, and the platform redistributes data to maintain sufficient protection against additional failures.
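The redistribution step can be sketched roughly like this, assuming a simple mirrored layout with two copies of every block (the `heal` helper and node names are illustrative, not Scale's actual mechanism): when a node fails, every block that lost a copy is re-replicated onto a surviving node, restoring protection against the next failure.

```python
# Illustrative sketch (not Scale's implementation): data is mirrored across
# nodes; after a node failure, surviving copies are re-replicated onto
# healthy nodes so every block regains two copies.

from itertools import cycle

def heal(replicas, failed, healthy):
    """replicas: block -> set of nodes holding a copy (replication factor 2)."""
    targets = cycle(healthy)           # round-robin over surviving nodes
    for block, holders in replicas.items():
        holders.discard(failed)        # drop the lost copy
        while len(holders) < 2:        # restore the replication factor
            candidate = next(targets)
            if candidate not in holders:
                holders.add(candidate)
    return replicas

replicas = {"b1": {"node1", "node2"},
            "b2": {"node2", "node3"},
            "b3": {"node1", "node3"}}
healed = heal(replicas, failed="node2", healthy=["node1", "node3", "node4"])
```

The key property is that healing needs only the surviving copies and spare space on healthy nodes; no operator intervention or dedicated spare node is involved.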


Expanding the system along the performance or capacity axis takes no more than adding two more paired nodes to the existing platform. That is distinctly simpler than planning the expansion of storage pools, data stores, and traditional host clusters, all of which may be subject to limitations that force a complex planning exercise just to figure out how to add more capacity or performance.
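The arithmetic behind scale-out expansion is simple enough to show directly. The per-node figures below are hypothetical, not vendor specifications; the point is only that usable capacity and aggregate performance grow roughly linearly with node count.

```python
# Illustrative arithmetic (hypothetical per-node figures, not vendor specs):
# in a scale-out cluster, usable capacity and aggregate performance grow
# roughly linearly as paired nodes are added.

NODE_TB = 4          # raw TB per node (assumed)
NODE_IOPS = 20_000   # IOPS per node (assumed)
REPLICATION = 2      # two copies of all data

def cluster_resources(node_count):
    usable_tb = node_count * NODE_TB / REPLICATION
    iops = node_count * NODE_IOPS
    return usable_tb, iops

before = cluster_resources(4)   # existing four-node cluster
after = cluster_resources(6)    # after adding two paired nodes
```

Adding the node pair grows both axes by the same 50% without any repartitioning of existing pools.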


As hardware ages, new hardware can be introduced to the existing platform and older physical elements phased out at the right time. A rolling upgrade process means more timely data migration at a lower price point.

You win.