REDUXIO NORESTORE AND CEPH


For more information, refer to the Reduxio Systems Inc. website at http://www.reduxio.com.

If you have comments about this documentation, submit your feedback to [email protected].

© 2017 Reduxio Systems Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of Reduxio Systems Inc.

Reduxio®, the Reduxio logo, NoDup®, BackDating™, Tier-X™, StorSense™, NoRestore®, TimeOS™, and NoMigrate™ are trademarks or registered trademarks of Reduxio in the United States and/or other countries.

Linux is a registered trademark of Linus Torvalds.

Windows is a registered trademark of Microsoft Corporation.

UNIX is a registered trademark of The Open Group.

ESX and VMware are registered trademarks of VMware, Inc.

Amazon, Amazon Web Services, and AWS are trademarks of Amazon Services LLC and/or its affiliates.

All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.

The Reduxio system hardware, software, user interface and/or information contained herein are Reduxio Systems Inc. proprietary and confidential. Any and all rights, including all intellectual property rights associated therewith, are reserved and shall remain with Reduxio Systems Inc. Rights to use, if any, shall be subject to the acceptance of the End User License Agreement provided with the system.

Information in this document is subject to change without notice.

Reduxio Systems Inc.

111 Pine Avenue

South San Francisco, CA 94080

United States

www.reduxio.com


Table of Contents

Ceph Overview
Test Environment
Ceph Installation
Initial Ceph Installation
Create a Ceph Deploy User
Clone Servers
Create Ceph Cluster
Configure iSCSI
Disable CHAP
Identify the Initiator IQNs
Add the Initiator Host
NoRestore and Ceph
Introduction to NoRestore™
How Does NoRestore Work
NoRestore Components
NoRestore Repository Limitations
NoRestore Repository Size Recommendation
Estimating the Required Repository Size
Repository Size Considerations
Creating NoRestore Repository: Third-Party iSCSI Storage


Ceph Overview

Ceph is an open-source, unified, distributed storage system providing the ability to aggregate storage resources and deliver file, block and object services.

Reduxio HX Series can be integrated with Ceph to deliver a cost-effective disaster recovery and data protection solution, combining NoRestore, Reduxio's unique built-in instant data protection technology, with Ceph's scalable storage and iSCSI support.

Test Environment

This document was prepared using the following test environment:

• Operating system: Ubuntu 16.04
• Ceph release: Luminous (12.2.0)


Ceph Installation

Installing Ceph requires some basic configuration work prior to deploying a Ceph Storage Cluster. Once the basic requirements are configured, the Ceph Storage Cluster can be deployed, the Ceph Block Device created, and iSCSI target services configured.

For more information on Ceph installation, refer to the Ceph installation page.

Initial Ceph Installation

First, add the Ceph release key:

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

Now add the Ceph packages to your repository:

echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

For Luminous:

echo deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

Update your repository and install ceph-deploy:

sudo apt update
sudo apt install ceph-deploy

Now install NTP client and SSH server:

sudo apt install ntp
sudo apt install openssh-server


Create a Ceph Deploy User

Add user and password:

sudo useradd -d /home/<user_name> -m <user_name>
sudo passwd <user_name>

For example:

sudo useradd -d /home/cephadmin -m cephadmin
sudo passwd cephadmin

Enable the user to run sudo without a password:

echo "<user_name> ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/<user_name>sudo chmod 0440 /etc/sudoers.d/<user_name>

For example:

echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadminsudo chmod 0440 /etc/sudoers.d/cephadmin

Clone Servers

To enable password-less SSH between the nodes, run ssh-keygen and use all the default options:

ssh-keygen

For each node / client run:

ssh-copy-id <ceph-user>@<node_name>

Examples:

ssh-copy-id cephadmin@Node1
ssh-copy-id cephadmin@Node2
ssh-copy-id cephadmin@Node3
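
As an optional sanity check (node and user names follow the examples above), verify that each node is now reachable without a password prompt:

# should print the remote host name without asking for a password
ssh cephadmin@Node1 hostname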


Create Ceph Cluster

On the admin node:

mkdir <my-cluster>
cd <my-cluster>

From the <my-cluster> folder run:

ceph-deploy new <initial-monitor-node(s)>

For example:

ceph-deploy new cephNode1 cephNode2 cephNode3

If you have more than one network interface, add the public network setting under the [global] section of your Ceph configuration file. See the Network Configuration Reference for details.

public network = {ip-address}/{bits}

For example:

public network = 172.18.4.0/24

Install Ceph packages on all nodes:

ceph-deploy install <--release luminous> <ceph-node> [...]

For example:

ceph-deploy install --release luminous cephNode1 cephNode2 cephNode3

Deploy the initial monitor(s) and gather the keys:

ceph-deploy mon create-initial

Copy the configuration file and admin key to your admin node and your Ceph Nodes:


ceph-deploy admin <node_name>

For example:

ceph-deploy admin cephNode1 cephNode2 cephNode3

Deploy a manager daemon:

ceph-deploy mgr create <node_name>
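
The command above is shown in its generic form only; a hypothetical example, assuming cephNode1 (from the earlier examples) is chosen to host the manager daemon:

# example only: deploy a mgr daemon on cephNode1
ceph-deploy mgr create cephNode1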

Add monitor nodes:

ceph-deploy mon add <node_name>

Modify the ~/.ssh/config file of your ceph-deploy admin node so that ceph-deploy can log in to Ceph nodes as the user you created, without requiring you to specify --username {username} each time you execute ceph-deploy. This has the added benefit of streamlining ssh and scp usage. Replace {username} with the user name you created:

Host <node_name1>
  Hostname <node_name1>
  User <user-name>
Host <node_name2>
  Hostname <node_name2>
  User <user-name>
Host <node_name3>
  Hostname <node_name3>
  User <user-name>

On each node run:

sudo chmod 644 /etc/ceph/ceph.client.admin.keyring

Add storage to the cluster:

ceph-deploy osd create cephNode1:sdb cephNode2:sdb cephNode3:sdb
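
At this point it can be useful to verify the cluster state; an optional check, not part of the original procedure:

# show overall cluster health, monitors, mgr and OSDs
sudo ceph -s
# show the OSD tree with the sdb-backed OSDs
sudo ceph osd tree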


Configure iSCSI

Create a new pool:

ceph osd pool create <pool-name> <pg_num>

For example:

ceph osd pool create rbd 128
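
On Luminous, a newly created pool typically also needs to be initialized for RBD use before images are created in it. A hedged sketch (skip this if your Ceph release or tooling handles it automatically):

# tag and initialize the pool for use by RBD
rbd pool init rbd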

From the Admin node:

ssh-copy-id <ceph-user>@<node_name>

For example:

ssh-copy-id cephadmin@cephiSCSI

Install and configure the Ceph command-line interface:

ceph-deploy install <--release luminous> <ceph-node> [...]

For example:

ceph-deploy install --release luminous cephiSCSI
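
The steps below assume an RBD image named iSCSI2 already exists in the rbd pool. If it does not, a minimal sketch for creating it (the 100 GB size is an arbitrary placeholder; size the image to match your planned NoRestore repository):

# create a 100 GB image named iSCSI2 in the rbd pool (size is illustrative only)
rbd create iSCSI2 --size 102400 --pool rbd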

Disable the RBD image features that are not supported by the kernel RBD client:

rbd feature disable iSCSI2 fast-diff object-map exclusive-lock deep-flatten

Map the RBD image as a block device:

sudo rbd map iSCSI2 --name client.admin
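
To confirm the mapping succeeded (an optional check), list the mapped RBD devices:

# should list iSCSI2 mapped under a /dev/rbd device
sudo rbd showmapped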

Make the map persistent:

sudo vi /etc/ceph/rbdmap

For example:


rbd/iSCSI2 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

sudo systemctl enable rbdmap.service
sudo mkfs.ext4 -m0 /dev/rbd/rbd/iSCSI2

The drive should not be mounted!
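
Optionally, confirm that the rbdmap service is enabled and that the device path used later by targetcli exists (paths follow the example above):

# verify the rbdmap unit and the persistent device node
systemctl is-enabled rbdmap.service
ls -l /dev/rbd/rbd/iSCSI2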

Enable iSCSI on the iSCSI gateway:

sudo apt-get install --no-install-recommends targetcli

Video: https://www.youtube.com/watch?v=Bar9E2b8dqk

sudo targetcli
/> cd backstores/iblock
/backstores/iblock> create dev=<path_to_the_block_device> <object-name>

For example:

create dev=/dev/rbd/rbd/iSCSI2 iSCSI2

/backstores/iblock> cd /iscsi
/iscsi> create
/iscsi> cd <iqn-that-was-created>/tpg1/

For example:

cd iqn.2003-01.org.linux-iscsi.cephiscsi.x8664:sn.af8f14772c54/tpg1/

Disable CHAP

/iscsi/iqn.20...14772c54/tpg1> set attribute authentication=0
/iscsi/iqn.20...14772c54/tpg1> cd luns
/iscsi/iqn.20...c54/tpg1/luns> create <block-device>

For example:

create /backstores/iblock/iSCSI2


/iscsi/iqn.20...c54/tpg1/luns> cd ../portals

/iscsi/iqn.20.../tpg1/portals> create <ip-address-of-iscsi>

For example:

create 10.18.1.62

/iscsi/iqn.20.../tpg1/portals> cd ../acls

Identify the Initiator IQNs

To identify the iSCSI initiator IQNs of the Reduxio system:

1. Connect to the Reduxio storage system.

2. Click the Settings icon from the icon bar to open the Settings screen.

3. From the left navigation bar, click SYSTEM and look for the ISCSI INITIATORS section.

4. Copy the iSCSI initiator IQNs of controller 1 and controller 2.

Add the Initiator Host

/iscsi/iqn.20...cfb/tpg1/acls> create <initiator-iqn>

For Reduxio, we need to add both initiators.

For example:

create iqn.2013-12.com.reduxio:af4032f0003a000e.0
create iqn.2013-12.com.reduxio:af4032f0003a000e.1

/iscsi/iqn.20...cfb/tpg1/acls> cd /
/> saveconfig
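
After saving, the resulting target configuration can be reviewed as an optional check:

# print the full targetcli object tree (backstores, LUNs, portals, ACLs)
sudo targetcli ls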


Enabling iSCSI


NoRestore and Ceph

Now that the Ceph cluster is up and running, and the Reduxio iSCSI initiator IQNs are configured, the following steps must be taken to complete the NoRestore setup:

• Configure remote storage
• Create NoRestore repository

For more information on NoRestore configuration, refer to the Reduxio TimeOS Administration Guide and the NoRestore Configuration Guide.

Introduction to NoRestore™

Reduxio NoRestore is an advanced data protection and data mobility technology built into the Reduxio TimeOS operating system. By leveraging the virtualization of data location and the multi-tiering nature of TimeOS, NoRestore enables users to recover or transfer large data sets in a very short time.

NoRestore provides the following capabilities:

• Near-zero RTO data protection: Protect data stored in a Reduxio system and instantly recover it. NoRestore provides an almost immediate restore capability (within minutes), even for a system with more than a hundred terabytes.

• Granular management: Protect a single volume or a group of volumes using a Data Protection Group.

Unlike other storage-side data protection mechanisms, NoRestore is an open solution, providing a wide choice of backup destinations:

• Backing up Reduxio volumes to another Reduxio storage system.
• Backing up Reduxio volumes to a third-party iSCSI storage system.
• Backing up Reduxio volumes to an object store destination (private or public clouds supporting the S3 protocol).

NoRestore provides resource grouping and policy-based management to simplify data protection management.


How Does NoRestore Work

NoRestore is a smart data protection technology. By leveraging Tier-X, Reduxio's built-in multi-tiering engine, it is able to copy volume data from a Reduxio system to a remote storage destination. Thereafter, when restoring that volume, the volume itself can be created immediately. A restore process copies back the missing data blocks from the remote storage. If a host tries to read data that is still missing, it is fetched on demand from the remote side. This effectively provides the user with an instant restoration of data, without the need to wait for the full restore to finish, hence the name "NoRestore".

NoRestore Components

A NoRestore solution consists of multiple components and configuration options:

• Source Storage System: A Reduxio HX Series storage system.

• Remote Storage: A remote storage system or cloud storage service used to hold NoRestore backup data.

• Backup Repository: A remote storage LUN or cloud storage service bucket holding NoRestore backup data. Because NoRestore is a one-to-one solution, there can be only one active backup repository at a time. Once the backup repository is detached and a new one is attached instead, you cannot continue making backups to the repository in the previous location.

• Restore Repository: When accessed for data restore, a repository is referred to as a restore repository. Up to ten restore repositories can be accessed concurrently.

• Data Protection Group: A set of volumes used for group-based data protection management. A data protection group can be backed up together and configured with a single backup policy.

• Backup/Restore Job: A single job, either to back up or restore a single volume.

• Backup Point: A single volume backup at a specific timestamp.

• Backup Policy: A set of definitions for the frequency, times, and retention of backup points.

• Data Transfer Window: The time window in which data is transferred to and from the NoRestore repositories. Data transfers are either skipped or pending outside of this window. By default, data transfers can occur 24/7.

• Data Capture Window: The time window in which backup points are captured.

NoRestore Repository Limitations

NoRestore is a one-to-one solution: there can be only one active backup repository at a time. Switching from one backup repository to another does not allow the user to switch back to the previous repository and continue backing up to it.

NoRestore Repository Size Recommendation

A NoRestore repository can keep data from only one Reduxio system. The data is stored in deduplicated and compressed format to save space.

The dedup ratio of a repository may vary from the dedup ratio of the source, since the data set in the repository differs from that of the source. We recommend using the source system's savings ratio as an estimate of the savings for the repository.

It is important to understand that the savings ratio from the source system is used as an estimate only. The actual savings ratio in a repository can be as low as no savings (1:1). It is crucial to monitor the repository usage and increase the size if needed.

Estimating the Required Repository Size

• Savings Ratio: The savings ratio reported by Reduxio System Manager.

• Effective Data: The amount of data, before dedup and compression, that will be protected using NoRestore.

• Change Rate: The daily change rate.

• Retention: The retention period in days, according to the backup policies. For example, if the INTERVAL is every six hours (four times a day) and the RETENTION is 28, the retention period is seven days (28 / 4).

The recommended repository size is the effective amount of data, plus the daily change multiplied by the retention (in days), divided by the savings ratio. Or:

Repository size = (Effective data + Effective data x Change rate x Retention) / Savings ratio

Repository Size Calculation Example:

• Savings ratio: 4:1
• Effective data (of all volumes that will be protected): 20 TB
• Daily change rate: 5%
• Retention: protecting twice a day with a RETENTION of 120 = 60 days of retention
• Estimated repository size = 20 / 4 + (20 x 0.05 x 60) / 4 = 5 + 15 = 20 TB
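
The calculation in the example above can be reproduced on the command line; a sketch only, using bc (which may need to be installed):

# (effective data + effective data x daily change rate x retention days) / savings ratio
echo "scale=1; (20 + 20 * 0.05 * 60) / 4" | bc
# prints 20.0 (TB)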

Repository Size Considerations

• There are two main reasons why the savings ratio in the repository may vary from the savings ratio in the source machine:

  • Each data set ends up with a different savings ratio. Volumes with data that dedupes well will provide a better savings ratio, both in the source system and in the NoRestore repository.

  • Reduxio uses global dedup. This means that all blocks of data, from all the volumes, form part of the deduplication. When protecting a subset of the volumes using NoRestore, not all blocks are sent to the repository, and as a result the dedup ratio may change.

• Data in the repository is saved in a deduplicated and compressed format. Even if the storage hosting the repository has a data reduction mechanism (such as dedup or compression), you should still use the recommended sizing, since it will most likely not be able to reduce the capacity of the data again.


• It is important to monitor the repository usage on the storage that hosts the repository, which can be done using the Reduxio Storage Manager, especially until the repository reaches a stable stage: when all data that has been backed up reaches retention.

• When sizing the repository, make sure that the storage that hosts the repository supports the recommended capacity, and that it can be expanded if needed.

Creating NoRestore Repository: Third-Party iSCSI Storage

To create a NoRestore repository on the third-party iSCSI storage system:

1. Select the Settings icon from the icon bar to open the Settings screen.

2. From the left menu, navigate to NORESTORE.

3. Click CONFIGURE FOR BACKUP.

4. Select the relevant remote storage from the remote storage drop-down list.

5. Click the repository volume previously created and select FORMAT REPOSITORY.

6. Verify that the new repository volume was added as a backup repository.