
Designing High Availability Disk Volumes Using Red Hat Storage Clusters

Zoltan Porkolab, IT Consultant

RHCE

Version 2.3

August 2013


Table of Contents

1 Executive Summary

2 Introduction

2.1 Audience

2.2 Acronyms

3 System Requirements

3.1 Hardware Requirements

3.2 Supported Client OS Platforms

3.3 Software Components

3.3.1 Red Hat Storage Software Appliance

3.3.2 Samba Cluster

3.3.3 Microsoft Active Directory

4 Reference Architecture Environment

5 Red Hat Storage Installation

5.1 Network Configuration

5.2 Host Configuration

6 RHS Configuration and Volume Extension

6.1 Terms

6.2 Partitioning

6.3 Trusted Storage Pool

6.4 Volume Management

6.5 Extend RHS Network and Volumes

6.5.1 Extend Trusted Pool with Servers

6.5.2 Extend Volumes with Bricks

7 Client Access

7.1 RHS Native Client

7.2 NFS Client

7.3 CIFS Client

8 Implementing High Availability Volumes

redhatsolutions.wordpress.com ii Enterprise IT Solutions


8.1 Automated IP Failover

8.2 Samba Cluster Configuration

9 Failover and Failback Test

10 Configuring Microsoft Active Directory Authentication

10.1 Setup and Configuration

10.1.1 Configuring Kerberos

10.1.2 Winbind

10.1.3 CTDB

10.1.4 Samba

10.2 Setup AD authentication for RHS Volumes

11 Conclusion

Appendix A: Gluster Commands

Appendix B: References

Appendix C: Revision History


1 Introduction

Red Hat Storage gives customers freedom of choice, allowing them to deploy a cost-effective, highly available storage solution with online data protection and third-party authentication.

This technical paper details the deployment of a highly available, protected file storage environment on Red Hat Storage (RHS) servers. After an introduction to the basic concepts, system requirements, and installation steps, the document covers the following:

• Creating and configuring RHS Volumes

• Extending Volumes without downtime

• Configuring Red Hat Storage clients

• Implementing Samba Cluster (CTDB) on RHS Volumes

• Running failover/failback tests on the CTDB cluster

• Setting up Microsoft Active Directory (AD) authentication for access to the RHS Volumes


2 System Requirements

2.1 Hardware Requirements

• Servers must be on the Red Hat Hardware Compatibility List for Red Hat Enterprise Linux 6.0 or later

• RAID 6 support in a hardware RAID controller. The RAID controller card must be flash-backed or battery-backed. All data disks configured in groups of 12 drives in a RAID 6 configuration

• Must be 2-socket (4-core or 6-core) servers (no 1-socket, 4-socket, or 8-socket servers)

• 2 X 10G Ethernet (copper or optical) preferred. 2 X 1G Ethernet also supported between RHS servers

• Red Hat Storage Server runs on 64-bit environments only

NOTE: The Red Hat Hardware Compatibility List is available at https://hardware.redhat.com

High Performance Computing use-case:

• 2u/24 CPU

• 15,000 RPM 600GB drives (2.5" SAS)

• Minimum RAM 48 GB

General Purpose File Serving use-case:

• 2u/12 CPU

• 7200 or 10000 RPM 2/3 TB drives (3.5" SAS or SATA)

• Minimum RAM 32 GB

Archival use-case:

• 4u/36 CPU

• 7200 or 10000 RPM 2/3 TB drives (3.5" SAS or SATA)

• Minimum RAM 16 GB

Supported Virtual Platforms: (Minimum 4 vCPU and minimum 16 GB RAM)


• Red Hat Enterprise Virtualization 3.0

• VMware vSphere 5.x

• VMware ESXi 5.x

2.2 Supported Client OS Platforms

For NFS/CIFS connections:

• Fedora and Debian based distributions.

• UNIX (Solaris 10+)

• Microsoft Windows Server 2008, Windows 7.

With GlusterFS FUSE clients:

• Red Hat Enterprise Linux 5.8 and beyond.

• Red Hat Enterprise Linux 6.0 and beyond.

2.3 Software Components

2.3.1 Red Hat Storage Software Appliance

Red Hat Storage Software Appliance 3.2 (RHSSA) is a Red Hat Enterprise Linux Server with basic server functionality, including the following package groups: Core, Gluster File System, Red Hat Storage Software Appliance Tools, and Scalable File Systems. It can be deployed in a couple of minutes for scalable, high-performance storage in your datacenter or on-premises private cloud.

2.3.2 Samba Cluster

Samba Cluster (Cluster Trivial Database, CTDB) is a clustered implementation of the TDB database used by Samba and other projects to store temporary data. CTDB provides a failover solution with a flexible virtual IP address. If an application already uses TDB for temporary data, it is very easy to make that application cluster-aware by using CTDB instead.

2.3.3 Microsoft Active Directory

Microsoft Active Directory (AD) is a directory service created by Microsoft for Windows domain networks. It is included in most Windows Server operating systems. Active Directory provides a central location for network administration and security. Server computers that run Active Directory are called domain controllers. An AD domain controller authenticates and authorizes all users and computers in a Windows domain-type network, assigning and enforcing security policies for all computers and installing or updating software.


3 Reference Architecture Environment

To demonstrate the RHS functionality, this reference architecture environment consists of four Red Hat Storage nodes and multiple types of clients. The Trusted Pool is first installed with two RHS nodes and afterwards extended with two more RHS nodes.

Figure 3.1: RHS Reference Architecture Overview

Servers:

• “rhs-test-01” - Red Hat Storage Server (master)

• “rhs-test-02” - Red Hat Storage Server

• “rhs-test-03” - Red Hat Storage Server

• “rhs-test-04” - Red Hat Storage Server

Clients:

• Red Hat Enterprise Linux 6.3 x86-64

• Debian Linux 6.0.6 (other Linux clients)

• Windows 7 Ultimate (Windows clients)

Table 3.1: Servers and Clients


4 Red Hat Storage Installation

The Red Hat Storage Software Appliance installation media is downloadable from Red Hat Network. The RHS node installation process is simple and quick. The RHS installer skips a couple of traditional OS configuration steps, such as language and keyboard selection and network configuration. The default keyboard is US English and the network is configured with a DHCP client, but all of these can be modified later.

The basic installation steps are the following:

• Root password setup

• Partitioning

• Package installations

If partitioning is left at the default and not configured manually during the installation, the installer creates the following partitions:

• One 512MB disk partition for /boot, and

• One physical volume with a Volume Group (VolGroup) including:

◦ SWAP partition (size is dependent on the physical memory)

◦ Logical Volume “root” with a 50G size and ext4 file system.

◦ Logical Volume “home” with all remaining disk space.

After the installation we have a basic RHS system with an IP address on the first network device (eth0) provided by DHCP. The host name is “localhost.localdomain”, SELinux operates in “Enforcing” mode, and the firewall accepts all incoming/outgoing communications.

4.1 Network Configuration

It is recommended to set up static IP addresses for servers. A static IP provides the most secure and stable connection with other RHS nodes in the Trusted Storage Pool.

It is also recommended to use separate networks for the public network and the internal RHS communication. If the RHS unit has sufficient network adapters it is useful to bond them in pairs. Bonding provides high availability on all routes. More articles about network bonding configuration are available in the “Red Hat Enterprise Linux 6 Deployment Guide” or on the Red Hat Customer Support portal.

The following example describes how to set a static IP address on the network adapter (eth0).

NOTE: Make sure that Network Manager is switched off if network bonding is configured.

Edit the “/etc/sysconfig/network-scripts/ifcfg-eth0” file.


DHCP-based eth0 settings:

DEVICE="eth0"
HWADDR="00:13:21:B4:98:F6"
NM_CONTROLLED="yes"
ONBOOT="yes"
BOOTPROTO="dhcp"

eth0 example with a fixed IP:

DEVICE="eth0"
HWADDR="00:13:21:B4:98:F6"
NM_CONTROLLED="yes"
ONBOOT="yes"
BOOTPROTO="none"
TYPE="Ethernet"
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
IPADDR=10.0.69.127
NETMASK=255.255.252.0
GATEWAY=10.0.69.252
DNS1=10.0.69.252

Table 4.1.1: Network Configuration

4.2 Host Configuration

Change the host name from “localhost.localdomain”.

# hostname rhs-test-01.testenv.co.uk

Update the new host name in “/etc/sysconfig/network”. To provide stable name resolution, it is recommended to extend the “/etc/hosts” file with the names and IP addresses of all RHS nodes.

The next step is to register the server on the Red Hat Network or a Satellite server. Run the “rhn_register” command and follow the instructions. Once a node is registered, update it to get the latest software versions.

# rhn_register
# yum update -y

NOTE: More information about the RHEL registration process is in the “Red Hat Network Satellite Reference Guide”.


5 RHS Configuration and Volume Extension

5.1 Terms

• Storage server: The machine which hosts the file system in which data will be stored.

• Storage client: The machine which mounts the RHS Volume (this may also be a server).

• Brick: A disk partition with a storage file system that has been assigned to a Volume.

• RHS Volume: The logical collection of bricks.

5.2 Partitioning

By default the system has two partitions: a boot partition and an LVM physical volume. The LVM has one VG including three logical volumes: “lv_root”, “lv_swap”, and “lv_home”. The first one is the root LV with an ext4 file system and 50G of disk space. The home partition is also an ext4 partition and is mounted to the /home directory; it takes all of the remaining disk space.

RHS Volumes require XFS partitions, so the next step is to set up the file systems. The following example shows how to use the command line tools to change the default partitions.

Check the active Logical Volumes

# lvscan
  ACTIVE '/dev/VG/lv_root' [50.00 GiB] inherit
  ACTIVE '/dev/VG/lv_swap' [15.91 GiB] inherit
  ACTIVE '/dev/VG/lv_home' [1023.78 GiB] inherit

Delete the “lv_home” partition

# umount /home
# lvremove /dev/VG/lv_home

NOTE: Do not forget to update the “/etc/fstab” once a partition is no longer available.

Create a new 200GB partition for the XFS brick.

# lvcreate -L200G --name lv_storage1 VG

Setup XFS file system on the new partition.


# mkfs.xfs /dev/VG/lv_storage1

Create a “/storage1” mount point and mount the new partition.

# mkdir /storage1
# mount /dev/VG/lv_storage1 /storage1

Extend “/etc/fstab” with the new partition details.

/dev/VG/lv_storage1 /storage1 xfs defaults 0 0

Now the partition/brick is ready to join an RHS Volume.

5.3 Trusted Storage Pool

A Trusted Pool is comprised of a group of peer Red Hat Storage Servers. Peers are nodes that have been configured to recognize each other's participation in a Trusted Pool. The master peer is the first configured peer. This section describes how to set up a Trusted Pool with two RHS nodes.

TCP ports 24007 and 24008 are required for communication between all RHS nodes, and each brick requires an additional TCP port starting at 24009.

This example configures the firewall for Gluster communication, including one brick.

# iptables -I INPUT -m tcp -m multiport -p tcp --dports 24007:24008 -j ACCEPT
# iptables -I INPUT -m tcp -p tcp --dport 24009 -j ACCEPT

NOTE: The firewall accepts all communications by default on RHS nodes.
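The port scheme above reduces to a small calculation. The following is an illustrative sketch only, not an RHS tool; the helper name `rhs_ports` is invented:

```python
# Hypothetical helper (not part of RHS): list the TCP ports an RHS node must
# open under the scheme above -- 24007/24008 for inter-node management
# traffic, plus one port per local brick starting at 24009.
def rhs_ports(brick_count):
    management = [24007, 24008]
    bricks = [24009 + i for i in range(brick_count)]
    return management + bricks

print(rhs_ports(1))  # [24007, 24008, 24009] -- matches the iptables rules above
print(rhs_ports(2))  # [24007, 24008, 24009, 24010] -- a second brick needs 24010
```

This also explains why a later section opens TCP 24010 when a second brick is added to each server.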

Use the first RHS unit as master peer and add another RHS server to the trusted pool.

# gluster peer probe rhs-test-02.testenv.co.uk
Probe successful

Now the Trusted Pool has two RHS nodes. Once a Trusted Pool is created, the configuration is available for review in the “/etc/glusterd/” directory. In newer GlusterFS releases, this configuration directory is found under the “/var/lib/glusterd” path.

Check peer status.

# gluster peer status
Number of Peers: 1


Hostname: rhs-test-02.testenv.co.uk

Uuid: 8062f05b-0375-46d5-a754-f863a45dc052

State: Peer in Cluster (Connected)

5.4 Volume Management

An RHS Volume uses the Gluster File System and is a logical collection of XFS bricks. GlusterFS is a network/cluster file system. It takes a layered approach to the file system, where features are added or removed as required. Though GlusterFS is a file system, it uses other file systems, such as ext3, ext4, or XFS, to store the data. It can easily scale up to petabytes of storage, available to users under a single mount point.

Available RHS Volume types:

• Distributed (for maximum space)

• Replicated (for high availability)

• Striped (for large files)

• Distributed and Replicated

• Distributed and Striped

• Replicated and Striped

• Distributed, Replicated and Striped

Volume space calculations:

• Distributed: 1G + 1G = 2G

• Replicated: 1G + 1G = 1G

• Striped: 1G + 1G = 2G

• Distributed-replicated: (1G+1G) + (1G+1G) = 2G
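The space calculations above can be sketched as code. This is an illustrative helper (the name `usable_capacity` is invented, and it assumes equally sized bricks), not part of any Gluster tooling:

```python
# Illustrative sketch of the volume space calculations above (sizes in GB).
def usable_capacity(brick_sizes_gb, volume_type, replica=2):
    total = sum(brick_sizes_gb)
    if volume_type in ("distributed", "striped"):
        return total             # every brick adds usable space
    if volume_type in ("replicated", "distributed-replicated"):
        return total // replica  # each file is stored `replica` times
    raise ValueError("unknown volume type: " + volume_type)

print(usable_capacity([1, 1], "distributed"))                   # 2 (1G + 1G = 2G)
print(usable_capacity([1, 1], "replicated"))                    # 1 (1G + 1G = 1G)
print(usable_capacity([1, 1, 1, 1], "distributed-replicated"))  # 2
```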

Below are some “gluster” command examples for Volume management. More examples are available in Appendix A.

Gluster command syntax for new Volume:

gluster volume create NEW-VOLNAME [replica COUNT] [transport tcp | rdma | tcp,rdma] NEW-BRICK...

Create a replicated Volume with two bricks. Each RHS unit in the Trusted Pool contributes one brick of the same size.

# gluster volume create glustervol1 replica 2 transport tcp rhs-test-01.testenv.co.uk:/storage1 rhs-test-02.testenv.co.uk:/storage1
Creation of volume glustervol1 has been successful. Please start the volume to access data.

If a Volume has been created with four or more bricks and the replication level is two, then the brick order is important: the first two bricks and the second two bricks are replicated to each other, and the two pairs of replicated bricks become a distributed-replicated volume. It is recommended to use replicated volumes in environments where high availability and high reliability are important. Replicated volumes create copies of files across multiple bricks in the volume.
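The brick-ordering rule can be modelled as a short sketch. The helper name `replica_sets` is invented and this is only an illustration of the grouping logic, not Gluster code:

```python
# Sketch of how brick order determines replica pairs: consecutive bricks in
# the `gluster volume create` argument list form one replica set, and the
# resulting sets are then distributed.
def replica_sets(bricks, replica=2):
    if len(bricks) % replica != 0:
        raise ValueError("brick count must be a multiple of the replica count")
    return [bricks[i:i + replica] for i in range(0, len(bricks), replica)]

bricks = ["rhs-test-01:/storage1", "rhs-test-02:/storage1",
          "rhs-test-01:/storage2", "rhs-test-02:/storage2"]
print(replica_sets(bricks))
# Each /storage1 brick mirrors the other node's /storage1, each /storage2
# mirrors the other node's /storage2, and the two pairs are distributed.
```

Note that the example ordering keeps each replica pair on two different servers, which is what makes the volume survive a node failure.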


# gluster volume start glustervol1
Starting volume glustervol1 has been successful

Once a Volume is started it is accessible via CIFS, NFS, and the Gluster Native Client.

Display Volume information:

# gluster volume info all

Volume Name: glustervol1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: rhs-test-01.testenv.co.uk:/storage1
Brick2: rhs-test-02.testenv.co.uk:/storage1

Mount RHS Volume to a directory and check it.

# mount -t glusterfs rhs-test-01:glustervol1 /mnt/glustervol1
# mount | grep glusterfs
rhs-test-01:glustervol1 on /mnt/glustervol1 type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)

Make the glusterfs mount persistent in the fstab.

rhs-test-01:glustervol1 /mnt/glustervol1 glusterfs defaults,_netdev 0 0

Mount the Volume via NFS.

# mount -t nfs -o vers=3,mountproto=tcp rhs-test-01.testenv.co.uk:glustervol1 /mnt/gluster-nfs

5.5 Extend RHS Network and Volumes

5.5.1 Extend Trusted Pool with Servers

The Trusted Pool can be extended with new Red Hat Storage nodes. To probe for the new servers, use the following command on the first-created server in the RHS network (the master server).

Syntax:

gluster peer probe HOSTNAME


Trusted Pool extension example with two more RHS nodes:

# gluster peer probe rhs-test-03.testenv.co.uk
Probe successful
# gluster peer probe rhs-test-04.testenv.co.uk
Probe successful

5.5.2 Extend Volumes with Bricks

An RHS Volume can be extended online with a number of bricks that is a multiple of the replica or stripe count.

Syntax:

gluster volume add-brick VOLNAME NEW-BRICK...

Volume size extension example with new bricks.

# gluster volume add-brick glustervol1 rhs-test-01.testenv.co.uk:/storage2 rhs-test-02.testenv.co.uk:/storage2

Add Brick successful

These new XFS bricks are already mounted at /storage2 on both RHS servers. The XFS brick creation steps are described in the Partitioning section. Remember to open TCP port 24010 for the second (new) brick, and do not forget to extend “/etc/fstab” with the new XFS partition.

/dev/VG/lv_storage2 /storage2 xfs defaults 0 0

If the Volume type is replicated then the size of the new brick must be the same on each node. After the Volume is extended it has four bricks and becomes distributed and replicated.

Check the port number for a brick:

# cd /etc/glusterd/vols/glustervol1/bricks/
# cat ./rhs-test-01.testenv.co.uk\:-storage2
hostname=rhs-test-01.testenv.co.uk
path=/storage2
listen-port=24010
rdma.listen-port=0

In the following example the “glustervol1” volume will be extended with four bricks. The new bricks are created on the new RHS nodes.

# gluster volume add-brick glustervol1 rhs-test-03.testenv.co.uk:/storage1 rhs-test-04.testenv.co.uk:/storage1 rhs-test-03.testenv.co.uk:/storage2 rhs-test-04.testenv.co.uk:/storage2
Add Brick successful

When expanding a distributed-replicated or distributed-striped volume, you must add a number of bricks that is a multiple of the replica or stripe count. For example, to expand a distributed-replicated volume with a replica count of 2, add bricks in multiples of 2.
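The expansion rule above amounts to a divisibility check. This is an invented illustrative helper, not part of the gluster CLI:

```python
# Illustrative check for the expansion rule: new bricks must arrive in whole
# replica/stripe sets, i.e. in multiples of replica * stripe.
def valid_extension(new_brick_count, replica=1, stripe=1):
    return new_brick_count % (replica * stripe) == 0

print(valid_extension(4, replica=2))  # True  -- the four-brick example above
print(valid_extension(3, replica=2))  # False -- gluster would reject this
```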

Check the updated Volume:

# gluster volume info all

Volume Name: glustervol1

Type: Distributed-Replicate

Status: Started

Number of Bricks: 4 x 2 = 8

Transport-type: tcp

Bricks:

Brick1: rhs-test-01.testenv.co.uk:/storage1

Brick2: rhs-test-02.testenv.co.uk:/storage1

Brick3: rhs-test-01.testenv.co.uk:/storage2

Brick4: rhs-test-02.testenv.co.uk:/storage2

Brick5: rhs-test-03.testenv.co.uk:/storage1

Brick6: rhs-test-04.testenv.co.uk:/storage1

Brick7: rhs-test-03.testenv.co.uk:/storage2

Brick8: rhs-test-04.testenv.co.uk:/storage2

On replicated volume environments, rebalance the volume to ensure that files are distributed to the new bricks. The rebalancing time depends on the number of files in the volume and their sizes.

A series of volume-spec files is created in the /etc/glusterd/vols directory.

# gluster volume rebalance glustervol1 start

starting rebalance on volume glustervol1 has been successful

# gluster volume rebalance glustervol1 status

rebalance completed: rebalanced 1 files of size 29 (total files scanned 2)


6 Client Access

Clients can connect to the volumes via the Red Hat Storage Native Client from other RHEL5/RHEL6 (x64) servers, via NFS (v3) from other Linux clients, or via CIFS from Linux or Windows clients. The supported clients are described in the System Requirements section.

6.1 RHS Native Client

This solution is available on RHEL 5.8 (x64) or beyond versions only.

Install the following packages on the client: fuse, fuse-libs, libibverbs, glusterfs, glusterfs-fuse, glusterfs-rdma, glusterfs-devel, glusterfs-debuginfo. The package versions must be the same as on the server. The packages are available on the RHS installation media or on the Red Hat Network. Once the packages are installed, the RHEL client is ready to connect to the RHS volumes via the RHS Native Client.

Red Hat Network Channels for RHS Client:

• “Red Hat Storage Native Client (RHEL v. 6 for 64-bit AMD64 / Intel64)”

• “Red Hat Storage Software Appliance 3.2 (6.1.z for x86_64)”

NOTE: The latest GlusterFS client RPMs for RHEL are also available to download from the gluster.org website.

Mount the Volume into a folder (/mnt/gluster-client) on the client and check the connection:

# mount -t glusterfs rhs-test-01.testenv.co.uk:glustervol1 /mnt/gluster-client

# mount | grep gluster
rhs-test-01.testenv.co.uk:glustervol1 on /mnt/gluster-client type fuse.glusterfs (rw,allow_other,default_permissions,max_read=131072)

Extend the fstab with the Glusterfs mount.

rhs-test-01.testenv.co.uk:glustervol1 /mnt/gluster-client glusterfs defaults,_netdev 0 0

NOTE: The _netdev mount option delays mounting until the network devices are up.

6.2 NFS Client

Prerequisites on the client: the nfs-utils and nfs-utils-lib packages. Red Hat Storage Server supports NFS v3 with the TCP transport type.

Create a mount point (/mnt/gluster-nfs) on the Linux client and mount the Volume:


# mkdir /mnt/gluster-nfs

# mount -t nfs -o vers=3,mountproto=tcp rhs-test-02.testenv.co.uk:glustervol1 /mnt/gluster-nfs

Add the Volume mount point to the “fstab”.

rhs-test-01.testenv.co.uk:/glustervol1 /mnt/gluster-nfs nfs defaults,vers=3,user,auto,noatime,_netdev,intr 0 0

6.3 CIFS Client

Connecting to an RHS Volume via CIFS requires a Samba server on all RHS nodes in the Trusted Pool. Samba is installed and started by default on the RHS servers.

A newly created and started RHS Volume is automatically exported through Samba on all Red Hat Storage servers. The volume is mounted using the RHS Native Client at the /mnt/samba/<Volume_name> directory and exported automatically in the Samba configuration file as “gluster-<Volume_name>”.

Mount the RHS Volume as the Z: drive on a Windows client.

c:\>net use Z: \\rhs-test-01.testenv.co.uk\gluster-glustervol1
The command completed successfully.

Remove the Volume from Windows.

c:\>net use Z: /delete

Z: was deleted successfully.

Mount the Volume to the /mnt/gluster-cifs directory on a Linux client.

# mount -t cifs //rhs-test-01.testenv.co.uk/gluster-glustervol1 -o username=nobody,password=nobody /mnt/gluster-cifs


7 Implementing High Availability Volumes

7.1 Automated IP Failover

Failover is typically an integral part of mission-critical systems that must be constantly available. It is a backup operational mode in which the functions of a system component are assumed by secondary components when the primary component becomes unavailable through failure or scheduled downtime. IP failover is switching to a standby network upon the failure or abnormal termination of the previously active network, and is recommended to ensure high availability.

Clustered Samba is a reliable solution for fully automated IP failover on replicated RHS Volumes. Cluster Trivial Database (CTDB) can be used in replicated or distributed-replicated volume environments for NFS and CIFS clients. CTDB adds a virtual IP (VIP) address to the Red Hat Storage Servers. The virtual IP is started on the master RHS node, and clients connect to the RHS volumes through the VIP. If the master node crashes, CTDB enables a different node to take over the IP address of the failed node. When the master node becomes available again, the VIP automatically moves back to it.
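The failover/failback behaviour described above can be captured in a toy model. CTDB's real recovery-master election is more involved, so treat this invented `vip_holder` function only as an illustration of the policy, not of CTDB internals:

```python
# Toy model of the VIP behaviour described above: the master holds the VIP
# while healthy; otherwise a healthy node takes over; the VIP fails back
# once the master recovers.
def vip_holder(health, master):
    """health maps node name -> True (up) / False (down)."""
    if health.get(master):
        return master          # normal operation, and failback on recovery
    for node, up in health.items():
        if up:
            return node        # failover to a healthy node
    return None                # the whole cluster is down

cluster = {"rhs-test-01": True, "rhs-test-02": True}
print(vip_holder(cluster, "rhs-test-01"))  # rhs-test-01 (master holds the VIP)
cluster["rhs-test-01"] = False
print(vip_holder(cluster, "rhs-test-01"))  # rhs-test-02 (failover)
cluster["rhs-test-01"] = True
print(vip_holder(cluster, "rhs-test-01"))  # rhs-test-01 (automatic failback)
```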

7.2 Samba Cluster Configuration

CTDB needs a “lock” volume: a small replicated RHS Volume for the CTDB management files. This Volume contains a lock file that specifies which node is acting as the recovery master. The “ctdb lock” Volume is mounted on all RHS nodes in the Trusted Pool.

Main implementation steps:

• New XFS brick creation on all RHS nodes

• Replicated RHS Volume creation for the CTDB files

• CTDB configuration

RHS Volume creation is described in the RHS Volume Management section and the XFS partition creation steps are available in the Partitioning section.

This example creates a new XFS brick and mounts it to the “/storage3” folder.

# mkdir /storage3
# lvcreate -L2G -n lv_storage3 VG
# mkfs.xfs /dev/VG/lv_storage3
# mount /dev/VG/lv_storage3 /storage3

Add the new partition to the “/etc/fstab”.

/dev/VG/lv_storage3 /storage3 xfs defaults 0 0

In the next example we create a new replicated RHS Volume for the CTDB files and mount it at the “/mnt/gluster-ctdb” directory on all four nodes.

Enterprise IT Solutions 15 redhatsolutions.wordpress.com


# gluster volume create gluster-ctdb replica 4 transport tcp rhs-test-01.testenv.co.uk:/storage3 rhs-test-02.testenv.co.uk:/storage3 rhs-test-03.testenv.co.uk:/storage3 rhs-test-04.testenv.co.uk:/storage3

Creation of volume gluster-ctdb has been successful. Please start the volume to access data.
# gluster volume start gluster-ctdb
Starting volume gluster-ctdb has been successful
# mkdir /mnt/gluster-ctdb
# mount -t glusterfs rhs-test-01:gluster-ctdb /mnt/gluster-ctdb

Once the new RHS volume is created, started, and mounted, add the following lines to the [global] section in the smb.conf file.

clustering = yes
idmap backend = tdb2

Create and configure cluster management files on the new Volume:

• “public_addresses” (/mnt/gluster-ctdb/public_addresses)

• “nodes” (/mnt/gluster-ctdb/nodes)

• “ctdb” (/mnt/gluster-ctdb/ctdb)

“public_addresses” describes the public virtual IP addresses and the network interfaces on which the VIP will be available.

10.0.69.140/22 eth0

The “nodes” file has IP address information of all RHS nodes.

10.0.69.131
10.0.69.132
10.0.69.133
10.0.69.134

“ctdb” is the cluster configuration file with all necessary parameters.

CTDB_RECOVERY_LOCK=/mnt/gluster-ctdb/lockfile
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_NODES=/etc/ctdb/nodes
CTDB_MANAGES_SAMBA=yes

Next rename the original “ctdb” file and create the following soft links.



# mv /etc/sysconfig/ctdb /etc/sysconfig/ctdb.bak
# ln -s /mnt/gluster-ctdb/nodes /etc/ctdb/nodes
# ln -s /mnt/gluster-ctdb/ctdb /etc/sysconfig/ctdb
# ln -s /mnt/gluster-ctdb/public_addresses /etc/ctdb/public_addresses

Finally start CTDB and ensure that the service starts automatically after reboot.

# chkconfig ctdb on
# service ctdb start
Starting ctdbd service: [ OK ]

Make sure that the dedicated RHS Volume is mounted and available before the CTDB service starts.
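
One way to enforce this ordering is a small wrapper. The following sketch is an illustration, not part of the original setup: it refuses to start CTDB while the lock volume is unmounted. The mount point matches the example above.

```shell
# Hypothetical guard: start CTDB only if the CTDB lock volume is mounted.
# Checking /proc/self/mounts avoids depending on the "mountpoint" utility.
start_ctdb_if_mounted() {
    dir="/mnt/gluster-ctdb"
    if grep -qs " $dir " /proc/self/mounts; then
        # Lock volume present; safe to bring up CTDB
        service ctdb start
    else
        echo "$dir is not mounted; refusing to start CTDB" >&2
        return 1
    fi
}
```

Calling this function instead of “service ctdb start” directly prevents CTDB from coming up against a missing lock file.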

Verify the CTDB health and the VIP:

# ctdb status
Number of nodes:4

pnn:0 10.0.69.131 OK (THIS NODE)

pnn:1 10.0.69.132 OK

pnn:2 10.0.69.133 OK

pnn:3 10.0.69.134 OK

Generation:1566261758

Size:4

hash:0 lmaster:0

hash:1 lmaster:1

hash:2 lmaster:2

hash:3 lmaster:3

Recovery mode: NORMAL (0)

Recovery master:0

# ctdb ip

Public IPs on node 0

10.0.69.140 0

Now the RHS Volume is available for client access via the CTDB Virtual IP, and the file share is part of the failover cluster.

All CTDB variables can be displayed with the “ctdb getvar” command and tuned with “ctdb setvar <name> <value>”.

NOTE: The CTDB log location is /var/log/log.ctdb.



8 Failover and Failback Test

Test scenario:

During a copy to the file share, the master server holding the VIP crashes. This situation provides a good opportunity to check the automatic CTDB failover and failback as well as the high availability of the file share.

Steps:

• Connect to the distributed-replicated RHS Volume from a Windows client via the CTDB VIP and mount it as the “Z:” drive

• Check the CTDB VIP address availability with “ping” utility

• Copy a large file to the file share and crash the RHS node holding the CTDB VIP during the copy process

• Check the automatic failover and the share availability

• Reboot the crashed RHS node and check the automatic failback
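
As an illustrative aid, not part of the original test plan, the downtime during failover can be measured with a small polling helper. The probe command and the CTDB VIP used below are assumptions taken from this document's examples:

```shell
# Hypothetical helper: run "probe" once per second until it succeeds,
# and print how many seconds the target was observed down.
wait_for_share() {
    probe=$1    # a command that exits 0 when the share/VIP is reachable
    limit=$2    # give up after this many seconds
    waited=0
    while ! $probe >/dev/null 2>&1; do
        waited=$((waited + 1))
        if [ "$waited" -ge "$limit" ]; then
            echo "still down after ${waited}s" >&2
            return 1
        fi
        sleep 1
    done
    echo "$waited"
}

# Example: poll the CTDB VIP from this document's test environment
# wait_for_share "ping -c 1 -W 1 10.0.69.140" 60
```

Running this during the crash test reports roughly how long the VIP stayed unreachable before the failover completed.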

Result:

After the simulated server crash the master CTDB node (rhs-test-01) became unavailable and the node’s “ctdb status” changed to “DISCONNECTED|UNHEALTHY”. The other CTDB nodes remained OK. The ping process returned “Destination Host Unreachable”. CTDB realized that the node had crashed and the automated VIP failover process began. Less than thirty seconds later the VIP address restarted automatically on the secondary CTDB node, and the file share became available again without human intervention. The mounted “Z:” drive on the Windows desktop kept working without errors or manual remounting.

# ctdb status

Number of nodes:4

pnn:0 10.0.69.141 DISCONNECTED|UNHEALTHY

pnn:1 10.0.69.142 OK (THIS NODE)

pnn:2 10.0.69.143 OK

pnn:3 10.0.69.144 OK

The copy process was interrupted and the client received a “The network path was not found” error message from the failed copy; the file can be resent after the CTDB failover completes. The copy process failed because the replicated RHS Volume provides real-time synchronization between the Volume bricks: replication starts immediately during the copy, and the replicated data is “read only” on the other bricks of the Volume while the real-time replication is in progress. Once the crashed master node is working again, the VIP moves back and the synchronization between the replicated volume bricks starts automatically.

The automated failover and failback processes are completed successfully.



9 Configuring Microsoft Active Directory Authentication

NTP must be configured, started, and synchronized on all members of the Trusted Pool. It is recommended to use the Microsoft Domain Controllers as the time servers. All RHS nodes must have a host “A” record on the DNS server, and the hostname should match the DNS host record.
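
As a sketch (the domain controller name below is the one used in the Kerberos example later in this section), the domain controller can be set as the time source in “/etc/ntp.conf” on each RHS node:

```
# /etc/ntp.conf - use the Microsoft Domain Controller as the time source
server ab-pdc.ab-domain.local iburst
```

Enable and start the service with “chkconfig ntpd on” and “service ntpd start”, then verify synchronization with “ntpq -p”.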

Implementing Microsoft AD authentication on RHS Volumes involves the following main steps.

• Kerberos configuration

• Winbind authentication configuration

• CTDB configuration extension

• AD authenticated Volume access configuration

The following example shows how to configure a Windows domain in Kerberos and Winbind. The name of the test domain is “ab-domain.local” and the PDC server name is “ab-pdc.ab-domain.local”.

9.1 Setup and Configuration

Install the required packages: samba, samba-client, samba-common, samba-winbind, samba-winbind-clients, krb5-libs, krb5-server, krb5-workstation, pam_krb5.

9.1.1 Configuring Kerberos

To configure Kerberos, add the correct domain information to “krb5.conf”. Here is a configuration example for the “ab-domain.local” domain and the “ab-pdc.ab-domain.local” PDC.

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = AB-DOMAIN.LOCAL
 dns_lookup_realm = true
 dns_lookup_kdc = true
 ticket_lifetime = 24h
 forwardable = true

[realms]
 AB-DOMAIN.LOCAL = {
  kdc = AB-PDC.AB-DOMAIN.LOCAL
  admin_server = AB-PDC.AB-DOMAIN.LOCAL
  default_domain = ab-domain.local
 }

[domain_realm]



 .ab-domain.local = AB-DOMAIN.LOCAL
 ab-domain.local = AB-DOMAIN.LOCAL

[appdefaults]
 pam = {
  debug = false
  ticket_lifetime = 36000
  renew_lifetime = 36000
  forwardable = true
  krb4_convert = false
 }

Test how Kerberos works:

# host -t srv _kerberos._tcp.ab-domain.local
_kerberos._tcp.ab-domain.local has SRV record 0 100 88 ab-pdc.ab-domain.local.
# kinit [email protected]
Password for [email protected]:
#

9.1.2 Winbind

To configure Winbind authentication, install the “X Window System” package group on the RHS nodes. The “X Window System” package group is part of the “rhel-x86_64-server-6” RHN channel and is also available on the RHEL 6 installation media.

# yum groupinstall "X Window System"

Create a new folder in the /home directory with the same name as the domain (for example “/home/AB-DOMAIN”). When a new user logs in to the system for the first time, the system generates the user’s home directory automatically under the /home/AB-DOMAIN/ folder.
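
For example (the directory name matches this document's test domain):

```shell
# Parent directory for automatically created domain-user home directories
mkdir -p /home/AB-DOMAIN
```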

Connect to the nodes from another graphical Linux client via SSH and use the “system-config-authentication” graphical tool to set up Winbind.

[root@linux-desktop ~]# ssh -X root@rhs-test-02
[root@rhs-test-02 ~]# system-config-authentication

The following example details how to configure Winbind using the “system-config-authentication” tool.

On the “Identity & Authentication” window, under the “User Account Configuration” menu, select “Winbind” as the “User Account Database” and fill in the forms with the correct domain information.



The Security Model is “ads”. Choose the correct “Template Shell” for Winbind users. Enable “Create home directories on the first logon” in the “Advanced Options” menu and finally click “Join Domain”.

Figure 9.1.2.1: Winbind configuration

9.1.3 CTDB

CTDB can manage Winbind, so extend the CTDB base configuration “/mnt/gluster-ctdb/ctdb” with the last row shown below (CTDB_MANAGES_WINBIND).

CTDB_RECOVERY_LOCK=/mnt/gluster-ctdb/lockfile
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_NODES=/etc/ctdb/nodes
CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_WINBIND=yes

9.1.4 Samba

Winbind authentication makes some modifications in smb.conf, but the file still needs to be extended manually. The following example shows how the “[global]” section looks in the Samba configuration file.

[global]
 workgroup = AB-DOMAIN
 realm = AB-DOMAIN.LOCAL
 server string = Samba Server Version %v
 security = ADS
 password server = ab-pdc.AB-DOMAIN.LOCAL



log file = /var/log/samba/log.%m

max log size = 50

clustering = Yes

idmap backend = tdb2

idmap uid = 600 - 1000000

idmap gid = 600 - 1000000

template homedir = /home/AB-DOMAIN/%U

template shell = /bin/bash

winbind separator = +

cups options = raw

Test Winbind configuration.

# net ads testjoin

Join is OK

9.2 Setup AD authentication for RHS Volumes

Create a user named “glustertest” in the Microsoft Active Directory, then create an AD group named “Volume1 users” for testing how AD authentication works. The members of this group are permitted to access the replicated RHS Volumes via CIFS. The “glustertest” AD user is a member of the “Volume1 users” group.

Here is a Windows command line example which creates a global security AD group in the “AB-DOMAIN.LOCAL/Test-Domain/users/Access_Groups” Organizational Unit.

Figure 9.2.1: Create “Volume1 users” AD group

NOTE: To add one or more members to this group, extend this command with “-members <username1, username2, etc.>”, e.g. “-members glustertest”.
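
The exact command is shown in Figure 9.2.1. As a hedged sketch only (the distinguished name below is inferred from the OU path above and must be verified against your directory), such a group could be created with Microsoft's “dsadd” tool:

```
rem Hypothetical example: create a global security group in the inferred OU
dsadd group "CN=Volume1 users,OU=Access_Groups,OU=users,OU=Test-Domain,DC=ab-domain,DC=local" -secgrp yes -scope g
```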

“wbinfo” is a command for testing the Active Directory communication. Check the AD user and the AD group availability from the RHS nodes.

# wbinfo --user-info=AB-DOMAIN+glustertest

AB-DOMAIN+glustertest:*:600:602::/home/AB-DOMAIN/glustertest:/bin/bash

# wbinfo --group-info="AB-DOMAIN+volume1 users"

AB-DOMAIN+volume1 users:*:603:AB-DOMAIN+glustertest



Check the “glustertest” user login.

# wbinfo -a AB-DOMAIN+glustertest

Enter AB-DOMAIN+glustertest's password:

plaintext password authentication succeeded

Enter AB-DOMAIN+glustertest's password:

challenge/response password authentication succeeded

Now Winbind is tested and an AD user is able to access the RHS nodes. Next, change the permissions on the RHS Volume.

# chgrp -R "AB-DOMAIN+volume1 users" /mnt/samba/glustervol1/

The Samba reconfiguration is the last step. The smb.conf must be the same on all RHS nodes. The following smb.conf example shows how to configure access for the “Volume1 users” AD group to the “glustervol1” RHS Volume.

[gluster-glustervol1]

comment = RHS Volume

path = /mnt/samba/glustervol1

create mask = 0660

directory mask = 770

writeable = yes

browseable = yes

valid users = +"AB-DOMAIN+volume1 users"

guest ok = no

Test the “glustertest” user access to the Volume:

# smbclient //rhs-test-ctdb.AB-DOMAIN.LOCAL/gluster-glustervol1 -U glustertest

Enter glustertest's password:

Domain=[AB-DOMAIN] OS=[Unix] Server=[Samba 3.5.10-125.el6]

smb: \>

Now the distributed-replicated Volume works with integrated Microsoft AD authentication, and Microsoft AD users are able to log in to the RHS nodes.



10 Conclusion

The glusterfs-based Red Hat Storage technology is easier to understand, design, and implement than many other storage solutions, and little specialist storage knowledge is needed to manage it.

The RHS environment provides a highly available, scalable, protected, and fault-tolerant file storage solution with failover clustering and third-party authentication support. A Red Hat Storage Volume can be used with an IP failover cluster without extra cluster software or licenses. The data stored on glusterfs volumes is protected by real-time replication between volume bricks. The RHS environment can be expanded online with more bricks (disks) or more RHS servers without any downtime.

This storage solution is highly recommended for enterprises to whom a professional-level data storage solution is extremely important.



Appendix A: Gluster Commands

Command usage:

# gluster

gluster> peer status

Number of Peers: 3

Hostname: rhs-test-02.testenv.co.uk

Uuid: 8062f05b-0375-46d5-a754-f863a45dc052

State: Peer in Cluster (Connected)

Hostname: rhs-test-03.testenv.co.uk

Uuid: 694fc65c-077f-4637-b5cc-33c4db4433a5

State: Peer in Cluster (Connected)

Hostname: rhs-test-04.testenv.co.uk

Uuid: 7f4df2c1-7da2-4b4b-8055-16c676b88f61

State: Peer in Cluster (Connected)

gluster>

gluster> volume info all

Volume Name: glustervol1

Type: Distributed-Replicate

Status: Started

Number of Bricks: 4 x 2 = 8

Transport-type: tcp

Bricks:

Brick1: rhs-test-01.testenv.co.uk:/storage1

Brick2: rhs-test-02.testenv.co.uk:/storage1

Brick3: rhs-test-01.testenv.co.uk:/storage2

Brick4: rhs-test-02.testenv.co.uk:/storage2

Brick5: rhs-test-03.testenv.co.uk:/storage1

Brick6: rhs-test-04.testenv.co.uk:/storage1

Brick7: rhs-test-03.testenv.co.uk:/storage2

Brick8: rhs-test-04.testenv.co.uk:/storage2

Volume Name: gluster-ctdb

Type: Replicate

Status: Started

Number of Bricks: 4

Transport-type: tcp

Bricks:

Brick1: rhs-test-01.testenv.co.uk:/storage3

Brick2: rhs-test-02.testenv.co.uk:/storage3

Brick3: rhs-test-03.testenv.co.uk:/storage3

Brick4: rhs-test-04.testenv.co.uk:/storage3

gluster> quit

#



Command and Syntax Description

volume info [all|<VOLNAME>] List information of all volumes

volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ...

Create a new volume of specified type with mentioned bricks

volume delete <VOLNAME> Delete volume specified by <VOLNAME>

volume start <VOLNAME> [force] Start volume specified by <VOLNAME>

volume stop <VOLNAME> [force] Stop volume specified by <VOLNAME>

volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK>

Add brick to volume <VOLNAME>

volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK>

Remove brick from volume <VOLNAME>

volume rebalance <VOLNAME> [fix-layout] {start|stop|status} [force]

Rebalance operations

volume replace-brick <VOLNAME> <BRICK><NEW-BRICK> {start|pause|abort|status|commit [force]}

Replace-brick operations

volume set <VOLNAME> <KEY> <VALUE> Set options for volume <VOLNAME>

volume help Display help for the volume command

volume log rotate <VOLNAME> [BRICK] Rotate the log file for corresponding volume/brick

volume sync <HOSTNAME> [all|<VOLNAME>]

Sync the volume information from a peer

volume reset <VOLNAME> [option] [force] Reset all the reconfigured options

volume geo-replication [<VOLNAME>] [<SLAVE-URL>] {start|stop|config|status|log-rotate} [options...]

Geo-sync operations

volume profile <VOLNAME> {start|info|stop} [nfs]

Volume profile operations

volume quota <VOLNAME> <enable|disable|limit-usage|list|remove> [path] [value]

Quota translator specific operations

volume status [all | <VOLNAME> [nfs|shd|<BRICK>]] [detail|clients|mem|inode|fd|callpool]

Display status of all or specified volume(s)/brick

volume heal <VOLNAME> [{full | info {healed | heal-failed | split-brain}}]

Self-heal commands on volume specified by <VOLNAME>

volume statedump <VOLNAME> [nfs] [all|mem|iobuf|callpool|priv|fd|inode|history]..

Perform statedump on bricks



volume list List all volumes in cluster

volume clear-locks <VOLNAME> <path> kind{blocked|granted|all}{inode [range]|entry [basename]|posix [range]}

Clear locks held on path

peer probe <HOSTNAME> Probe peer specified by <HOSTNAME>

peer detach <HOSTNAME> [force] Detach peer specified by <HOSTNAME>

peer status List status of peers

peer help Help command for peer

Quit Quit

Help Display command options

Exit Exit



Appendix B: References

Red Hat Storage 2.0 Installation Guide
This guide describes the prerequisites and provides step-by-step instructions to install Red Hat Storage using different methods.

Red Hat Storage 2.0 Administration Guide
This guide introduces Red Hat Storage, describes the minimum requirements, and provides step-by-step instructions to install the software and manage your storage environment.

Red Hat Storage Software Appliance 3.2 User Guide
This guide introduces Red Hat Storage Software Appliance, describes the minimum requirements, and provides step-by-step instructions to install the software and manage your cluster environment.

Red Hat Enterprise Linux 6 Deployment Guide
The Deployment Guide documents relevant information regarding the deployment, configuration and administration of Red Hat Enterprise Linux 6. It is oriented towards system administrators with a basic understanding of the system.

Gluster 3.1 Filesystem Installation and Configuration Guide
This document introduces GlusterFS, describes the minimum hardware requirements, and shows how to prepare and install the software in your environment.

Gluster 3.1 Filesystem Administration Guide
This document introduces GlusterFS management and explains how to perform the most common GlusterFS operations.

Samba CTDB documentation
This collection of documentation introduces the Cluster Trivial Database, describes the requirements, and provides the CTDB Wiki and best practices to manage your cluster environment.

Microsoft Active Directory
This Wikipedia page provides more information about Microsoft Active Directory.



Appendix C: Revision History

Revision 2.1 Monday September 2, 2013 Zoltan Porkolab

Updated cover

Revision 2.0 Friday August 30, 2013 Zoltan Porkolab

Updated document style
Updated Executive Summary section
Added table to Reference Architecture Environment section
Updated Network Configuration section
Extended High Availability Implementation section
Added Revision History section
Updated Table of Contents

Revision 1.0 Tuesday August 20, 2013 Zoltan Porkolab

Initial Release
