CDP explained

The challenge is this: how do you reduce your business risk by improving your application's recovery point and recovery time objectives (RPO and RTO) while keeping the impact on application performance to an absolute minimum?

CDP, short for Continuous Data Protection, is a term used in computer science for a method of recovering from logical errors much faster than traditional backup/restore methods allow.

Although the principle is very simple, many implementations exist on the market, with varying results. The steps below describe one example of such a process.

  • Data is written to a locally attached disk in the server that is to be protected.
  • Data blocks of e.g. 8 KB are passed on to an off-load engine and stored at the target, or replica, on so-called journal volumes located locally and/or remotely.
  • Some of the written 8 KB data blocks are flagged as “consistent” by the application that originally created them: this is very important. Only the application knows when its data is consistent; not the operating system, not the storage device. (A code sketch of this journaling flow follows the list.)
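
To make the flow above concrete, here is a minimal sketch in Python of how a CDP journal might record block writes and application consistency bookmarks. All names here (JournalEntry, CdpJournal, record_write, bookmark_consistent) are illustrative assumptions for this article, not the API of InMage Scout or any other product.

```python
import time
from dataclasses import dataclass, field

BLOCK_SIZE = 8 * 1024  # e.g. 8 KB blocks, as in the list above


@dataclass
class JournalEntry:
    """One block write captured by the off-load engine."""
    timestamp: float
    offset: int               # byte offset on the protected volume
    data: bytes               # the block exactly as written
    consistent: bool = False  # set only by the application, never by the OS


@dataclass
class CdpJournal:
    """Append-only journal volume (kept locally and/or remotely)."""
    entries: list = field(default_factory=list)

    def record_write(self, offset: int, data: bytes) -> JournalEntry:
        # Every write is appended to the journal; nothing is
        # applied directly to the target device.
        entry = JournalEntry(time.time(), offset, data)
        self.entries.append(entry)
        return entry

    def bookmark_consistent(self) -> None:
        # Called by the application when it knows its on-disk
        # state is consistent (e.g. after a committed transaction).
        if self.entries:
            self.entries[-1].consistent = True
```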

The key here is that data is _not_ written to the target device, but to a log. This makes it possible to “roll back” very quickly to any point in history, mount a shadow volume representing the most recent point in time before the logical error occurred, and continue from there. This is something entirely different from the traditional approach, where data is copied to tape (or, more recently, to a cloud service) once every 24 hours: restore times are bound to be very long, and the data you do get back will be several hours old.
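
Continuing the illustrative sketch above, a rollback amounts to replaying journal entries up to the most recent consistency bookmark recorded before the logical error, and materializing the result as a shadow volume. Again, this is a simplified, assumption-laden model of the technique, not the implementation of any specific product.

```python
def build_shadow_volume(journal: CdpJournal, before: float) -> dict:
    """Replay the journal up to the last consistent point recorded
    before the given timestamp (e.g. just before a logical error
    occurred) and return the volume image as offset -> block."""
    # Find the most recent consistency bookmark before the error.
    cutoff = None
    for entry in journal.entries:
        if entry.consistent and entry.timestamp < before:
            cutoff = entry.timestamp
    if cutoff is None:
        raise ValueError("no consistent point recorded before that time")

    # Replay every write up to and including the cutoff; later
    # writes (including the erroneous ones) are simply ignored.
    shadow = {}
    for entry in journal.entries:
        if entry.timestamp <= cutoff:
            shadow[entry.offset] = entry.data
    return shadow
```

Because the journal is append-only, nothing is destroyed by the rollback: the erroneous writes remain in the log, so any point in history stays reachable.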

There are many benefits to keeping the CDP software and process separate from the operating system and/or hypervisor. Doing so enables easy migration between hypervisors of different origin (VMware/Xen/KVM) and limits the lock-in effect that most customers try to avoid. Software such as InMage Scout is often immediately appreciated by systems administrators, as it reduces the time spent on stressful, unreliable, time-consuming and cumbersome restore tasks.

DS4: The Hot Rod of Computer Design

What exactly is the DS4 Micro Data Center? Well, one way to think of it is as the hot rod of computer design. Clean, smooth, powerful. A bit like Boyd Coddington's unique designs.

The DS4 represents a carefully designed and tailored computer platform. It is symmetrical. It is easy to comprehend, and therefore less error-prone. Each element is carefully selected, assembled and optimized.

Boyd Coddington's beautiful creation: the Boydster

A lot of thought went into creating the well-balanced architecture: high performance, low power consumption, a small footprint, and unprecedented scalability. The DS4 is elegant by design.

Aside from its elegance, the DS4 is designed to be secure, tailored to protect your valuable data resources. The logical protection offered by the architecture is unprecedented. The DS4 represents the latest in computer production architecture, powered by its unique CDP (Continuous Data Protection) software element, which removes the limitations of contemporary data protection methods such as snapshots and traditional backup solutions.

The DS4 paves the way to a better solution to a very old problem.

When will you be back?

Contact Data Resilience today to find out more.

Mission Statement

Data Resilience AB offers SMBs a comprehensible and affordable remote backup and business recovery platform.

The platform is based on a transparent design and a stringent deployment protocol: contemporary virtualization and a unique CDP* technology offering unprecedented recovery times and recovery precision.

The platform can be configured as a mobile unit, installed in the customer's own Data Centre, and/or run as a service in the Data Resilience DC facilities.

*) Continuous Data Protection