Reference Architecture

Deployment Guide for KVM and Red Hat Enterprise Linux on NetApp Storage

Jon Benedict, NetApp
September 2010 | RA-0004-0810

TABLE OF CONTENTS

1 PURPOSE OF THIS DEPLOYMENT GUIDE
  1.1 INTENDED AUDIENCE
  1.2 TERMINOLOGY

2 SYSTEM REQUIREMENTS
  2.1 MINIMUM SYSTEM REQUIREMENTS
  2.2 RECOMMENDED SYSTEM REQUIREMENTS
  2.3 NETWORK REQUIREMENTS
  2.4 KVM REQUIREMENTS
  2.5 STORAGE REQUIREMENTS
  2.6 SUPPORTED GUEST OPERATING SYSTEMS
  2.7 KVM HARDWARE LIMITATIONS
  2.8 NETWORK ARCHITECTURE

3 BASIC INFRASTRUCTURE DESCRIPTION

4 NETAPP CONFIGURATION
  4.1 BASE CONFIGURATION OF NETAPP FAS CONTROLLER
  4.2 INSTALLING LICENSES
  4.3 CONFIGURE SSH
  4.4 CONFIGURE DISK SPACE ON THE NETAPP FAS CONTROLLER
  4.5 NETWORK CONFIGURATION FOR THE NETAPP FAS CONTROLLER

5 CONFIGURING SHARED STORAGE ON THE NETAPP FAS CONTROLLER
  5.1 CONFIGURE NETAPP FAS3170 FOR NFS
  5.2 CONFIGURE NETAPP FAS3170 FOR ISCSI OR FCP
  5.3 CONFIGURE FIBRE HBAS ON THE NETAPP FAS CONTROLLER

6 INSTALLATION AND BASE CONFIGURATION OF HOST NODES
  6.1 BIOS CONFIGURATION
  6.2 INSTALLATION OF RED HAT ENTERPRISE LINUX 5.4
  6.3 DISK LAYOUT
  6.4 REGISTER WITH RED HAT NETWORK
  6.5 HOST SECURITY
  6.6 DISABLE UNNECESSARY AND INSECURE SERVICES
  6.7 SECURE REMOTE ACCESS TO HOST NODES (SSH KEYS)
  6.8 NETWORK CONFIGURATION OF HOST NODES

7 CONFIGURE A REMOTE ADMINISTRATION HOST (OPTIONAL)
  7.1 BASIC REMOTE HOST CONFIGURATION
  7.2 CONFIGURE SECURITY
  7.3 CONFIRM NTP IS RUNNING AND STARTS ON BOOT
  7.4 CONFIGURE NFS ACCESS TO THE NETAPP FAS CONTROLLER
  7.5 INSTALL THE PACKAGES NEEDED TO ADMINISTER KVM REMOTELY
  7.6 CONFIGURE THE SSH KEY PAIR

8 INSTALL AND CONFIGURE KVM
  8.1 INSTALL THE REQUIRED PACKAGES

9 SHARED STORAGE
  9.1 CONFIGURE NFS-BASED SHARED STORAGE ON THE HOST NODES
  9.2 CONFIGURE ISCSI-BASED SHARED STORAGE ON THE HOST NODES
  9.3 CONFIGURE FCP-BASED SHARED STORAGE ON THE HOST NODES

10 CONFIGURE MULTIPATHING ON THE HOST NODES

11 CONFIGURE GFS2-BASED SHARED STORAGE
  11.1 CONFIGURE THE HOST NODES
  11.2 CONFIGURE THE CLUSTER MANAGER
  11.3 CONFIGURE THE CLUSTER
  11.4 CREATE FENCING DEVICES
  11.5 CONFIGURE GFS2

12 SELINUX CONSIDERATIONS

13 CREATE A GOLDEN IMAGE OR TEMPLATE
  13.1 CREATE AND ALIGN A DISK IMAGE FOR VIRTUAL GUESTS
  13.2 PREPARE THE GOLDEN IMAGE FOR CLONING

14 CLONE VIRTUAL SERVERS

15 LIVE MIGRATION OF VIRTUAL SERVERS
  15.1 LIVE MIGRATION USING VIRTUAL MACHINE MANAGER
  15.2 LIVE MIGRATION FROM COMMAND LINE

16 CONFIGURE DATA RESILIENCY AND EFFICIENCY
  16.1 THIN PROVISIONING
  16.2 DEDUPLICATION
  16.3 SNAPSHOT

APPENDIXES
  APPENDIX A: CONFIGURE HARDWARE-BASED ISCSI INITIATOR
  APPENDIX B: CHANNEL BONDING MODES
  APPENDIX C: SAMPLE FIREWALL FOR HOST NODES
  APPENDIX D: SAMPLE SNAPSHOT SCRIPT
  APPENDIX E: SAMPLE KICKSTART FILE FOR A PROPERLY ALIGNED VIRTUAL SERVER

REFERENCES


1 PURPOSE OF THIS DEPLOYMENT GUIDE

This deployment guide discusses tested best practices for setting up a virtual server environment based on the Kernel-based Virtual Machine (KVM) hypervisor and NetApp® storage. It provides instructions for deploying a stable and efficient virtual server environment that serves as a solid foundation for many different applications and workloads.

1.1 INTENDED AUDIENCE

This guide is written for system architects, system administrators, and storage administrators who deploy the KVM hypervisor in a data center where NetApp is the intended back-end storage.

A level of expertise is expected in Linux®, virtualization, and storage, preferably with a focus on Red Hat Enterprise Linux and NetApp.

Additional expertise in IP and switched fabric networks (if using Fibre Channel) is also required. Setting up an IP or switched fabric network is not covered in this guide; expertise in these areas, however, is necessary to deploy certain elements of the KVM virtual environment.

1.2 TERMINOLOGY

The following terms are used in this guide:

• Channel bond: Red Hat’s naming convention for bonding two or more physical NICs for purposes of redundancy or aggregation.
• Cluster: A group of related host nodes that support the same virtual servers.
• Host node: The physical server or servers that host one or more virtual servers.
• KVM environment: A general term that encompasses KVM, Red Hat Enterprise Linux (RHEL), network, and NetApp storage as described in this guide.
• Shared storage: A common pool of disk space, file or logical unit number (LUN) based, simultaneously available to two or more host nodes.
• Virtual interface (VIF): A means of bonding two or more physical network interface cards (NICs) for purposes of redundancy or aggregation.
• Virtual local area network (VLAN): Useful at Layer 2 switching to segregate broadcast domains and to ease the physical elements of managing a network.
• Virtual server: A guest instance that resides on a host node.


2 SYSTEM REQUIREMENTS

Requirements to launch the hypervisor are conservative; however, overall system performance depends on the nature of the workload.

2.1 MINIMUM SYSTEM REQUIREMENTS

The following list specifies the minimum system requirements:

• 6GB free disk space per host node
• 2GB RAM

2.2 RECOMMENDED SYSTEM REQUIREMENTS

Although not required, NetApp strongly recommends the system considerations described in the following list:

• One processor core or hyperthread for each virtualized CPU and one for the hypervisor
• 2GB RAM plus additional RAM for virtualized guests
• Some type of out-of-band management (IBM RSA, HP iLO, Dell DRAC, and so on)
• Multiple sets of at least 1GbE NICs to separate traffic and allow for bonding, or one pair of 10GbE NICs to be bonded to carry all traffic
• Fibre Channel or iSCSI host bus adapters (HBAs) (if using hardware initiators and LUN-based storage)
• Redundant power

2.3 NETWORK REQUIREMENTS

The following list specifies the network requirements:

• Switches capable of VLAN segmentation
• Gigabit Ethernet (GbE), or 10GbE if available
• Multiple switches for channel bonding

2.4 KVM REQUIREMENTS

The KVM hypervisor requires a 64-bit Intel® processor with the Intel VT extensions or a 64-bit AMD processor with the AMD-V extensions. It might be necessary first to enable the hardware virtualization support from the system BIOS.

Run the command shown in Figure 1 from within Linux to verify that the CPU virtualization extensions are available.

Figure 1) Verify availability of CPU virtualization extensions.
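
The figure is not reproduced here; the check is a one-line search of /proc/cpuinfo (a minimal sketch):

# Prints CPU flag lines only if hardware virtualization is advertised
egrep '(vmx|svm)' /proc/cpuinfo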


If the output includes vmx (Intel) or svm (AMD), the proper extensions are present and enabled.

2.5 STORAGE REQUIREMENTS

Whether there are one or many host nodes hosting virtual machines, KVM requires a flexible way to store virtual systems. KVM supports the following storage types:

• Directly attached storage
• iSCSI or Fibre Channel LUNs, which may be shared in GFS or GFS2 configurations
• Network File System (NFS)-mounted file systems

2.6 SUPPORTED GUEST OPERATING SYSTEMS

The following guest operating systems are supported:

• RHEL 3, 4, and 5 (32-bit and 64-bit)
• Windows® 2003 Server and Windows 2008 Server (32-bit and 64-bit)
• Windows XP

2.7 KVM HARDWARE LIMITATIONS

The following limitations apply to KVM:

• 256 CPUs per host node
• 16 virtual CPUs per guest
• 8 virtual NICs per guest
• 1TB RAM per host node
• 256GB RAM per guest

2.8 NETWORK ARCHITECTURE

Although this deployment guide does not discuss the specifics of setting up an Ethernet or switched fabric, certain items, such as configuration of VLANs, need to be addressed. Deployed switches must support:

• VLAN segregation
• Link Aggregation Control Protocol (LACP) (if deploying LACP-mode channel bonds or VIFs)
• At least 1Gbps line rate (10Gbps is preferred)

Additional considerations include:

• Multiple switches must be configured for redundancy.
• Fibre Channel fabric switches must also be redundant, with appropriate zoning.


3 BASIC INFRASTRUCTURE DESCRIPTION

Table 1 describes the major components of the KVM virtualization environment that are used in this guide.

Table 1) Major components of the KVM virtualization environment.

| Function | Name | Description | Notes |
|---|---|---|---|
| Host node 1 | chzbrgr | Dual quad-core Intel-VT, RHEL 5.4 x86_64 | QLogic dual-port iSCSI HBA, QLogic dual-port Fibre HBA |
| Host node 2 | hmbrgr | Dual quad-core Intel-VT, RHEL 5.4 x86_64 | QLogic dual-port iSCSI HBA, QLogic dual-port Fibre HBA |
| Network switch 1 | n/a | Cisco Catalyst 4948 | Used for primary traffic (primary in context of channel bond and VIF); also, separate VLAN for management traffic |
| Network switch 2 | n/a | Cisco Catalyst 4948 | Used for secondary traffic (secondary in context of channel bond and VIF) |
| Remote host | taco | Nondescript host running RHEL 5.4 | Used for remote administration of NetApp FAS controller, KVM, and GFS2 cluster |
| Storage | ice3170-3a | NetApp FAS3170, Data ONTAP® 7.3.2 | Back-end storage providing Fibre Channel Protocol (FCP), iSCSI, and NFS-based storage |
| Fibre switch | icefc-4 | Brocade Silkworm 410 | Fibre switch for FCP connectivity |
| Repository | Red Hat Network | Subscription-compliant means of getting Red Hat packages and updates | Web-accessible means of managing subscriptions and packages |


4 NETAPP CONFIGURATION

4.1 BASE CONFIGURATION OF NETAPP FAS CONTROLLER

This deployment guide makes the following assumptions regarding the setup of the NetApp FAS controller:

• The NetApp FAS controller is deployed according to existing best practices.
• The latest version of Data ONTAP is installed (7.3.2 minimum).
• The management interface or serial console is set up.
• Licenses for NFS, FCP, and iSCSI are installed. (See section 4.2.)

4.2 INSTALLING LICENSES

The quickest method of installing licenses is from the console of the FAS controller, using the command:

ice3170-3a> license add <license_number>

where <license_number> is the code provided for the requested feature. You can add multiple licenses at one time.

4.3 CONFIGURE SSH

With the possible exception of a purely private terminal server, all traffic to the NetApp FAS controller should be encrypted. Perform the commands in Figure 2 to confirm that Secure Shell (SSH) is enabled and that Remote Shell (RSH) and telnet are disabled.

Figure 2) Confirm SSH is enabled and RSH and telnet are disabled.
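
The figure’s console session is not reproduced here. On a Data ONTAP 7-mode controller, a sketch of the equivalent commands (option names are an assumption based on Data ONTAP 7.3; run secureadmin setup ssh first if SSH has never been initialized):

ice3170-3a> options ssh.enable on
ice3170-3a> options rsh.enable off
ice3170-3a> options telnet.enable off

Querying an option without a value (for example, options ssh.enable) prints its current setting, which is a quick way to confirm the state before changing it.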


4.4 CONFIGURE DISK SPACE ON THE NETAPP FAS CONTROLLER

CREATE A DISK AGGREGATE

If the NetApp FAS controller is new, there should be only one existing disk aggregate, containing three disks and a single volume, vol0. This volume stores the Data ONTAP operating system and should never contain user data.

To prepare the NetApp FAS controller for shared storage, you must configure at least one additional disk aggregate. If such an aggregate is already present, skip to the next section, “Create a Volume.”

1. From the FilerView® Web console, select Aggregates→Manage to view the one existing aggregate, aggr0.

2. Under Aggregates, select Add to launch the Aggregate wizard, as shown in the following window.

3. For most of the items in the Aggregate wizard, choose the defaults, such as double parity and RAID group size of 16.

4. When the wizard gets to number of disks, choose as many as possible.

5. Select Commit.

6. As shown in the following window, return to the Aggregates→Manage menu to place the aggregate online and confirm the configuration.

CREATE A VOLUME

The next activity, common to all types of storage, is to create a flexible volume (FlexVol® volume). FlexVol volumes will contain the NFS export or LUNs that the host nodes use for shared storage. In the following procedure, ISOs are stored in the same volume as virtual server data, but this is not required.

1. Log into the Web interface.


2. From the menu on the left, select Volumes→Add to launch the Volume wizard, as shown in the following window.

3. Following the prompts, choose the following:

   a. Flexible (for a FlexVol volume).
   b. Type kvm_nfs or another arbitrary but meaningful name for the volume name, along with POSIX. Leave UTF-8 unchecked.
   c. Choose aggr1 for the containing aggregate. Never use aggr0 for user data.
   d. Set Space Guarantee to none in order to maintain the concept of thin provisioning across volumes, LUNs, and disk images.
   e. For size, choose according to the initial needs, taking into consideration the following:
      i.   Number and size of ISO images (DVD ISOs are from 3GB to 4.5GB).
      ii.  Number and size of guest disk images.
      iii. Number and size of guest template (golden) images.
      iv.  Whether or not deduplication is to be used. If using NetApp deduplication on a volume that stores a LUN, size the volume at two times the size of the LUN, and enable thin provisioning on the volume by setting the Space Guarantee to none. No other considerations need to be made for volumes that will store NFS exports.

   In this procedure, a 100GB volume was used for each of the shared storage protocols (NFS, iSCSI, and FCP), with 20% Snapshot™ reserve, leaving 80GB of usable space for each.

4. Click Commit.

5. Select Volumes→Manage, as shown in the following window, to view the new volumes created in this procedure.

By default, a volume is automatically exported as an NFS share when it is created, regardless of how it will be used. For volumes that will be used for LUNs, go to NFS→Manage Exports and delete the unwanted exports.
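
For reference, the same volume can be created from the controller console instead of FilerView; a minimal sketch using this guide’s names and sizes (verify against your own sizing):

ice3170-3a> vol create kvm_vol -s none aggr1 100g
ice3170-3a> snap reserve kvm_vol 20

The -s none flag sets the space guarantee to none (thin provisioning); snap reserve sets the 20% Snapshot reserve used in this procedure.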


4.5 NETWORK CONFIGURATION FOR THE NETAPP FAS CONTROLLER

CONFIGURE VLANS TO SEGREGATE DATA TRAFFIC

Create unique VLANs for each role to separate public, management, and storage traffic. In addition to providing important security benefits, creating unique VLANs for each role allows for the configuration of jumbo frames for NFS and iSCSI traffic. Even if you expect to use FCP to access the storage, complete this configuration to separate public and management traffic. For VLANs with jumbo frames, you must configure the maximum transmission unit (MTU) setting end to end, from storage to host NIC and every switch port in between.

Table 2 shows the VLANs used in this document.

Table 2) VLANs used in this document.

| Description | VLAN | Subnet |
|---|---|---|
| Public traffic | 186 | 10.61.186.0/24 |
| Management traffic | 185 | 10.61.185.0/24 |
| iSCSI traffic | 3029 | 192.168.1.0/24 |
| NFS traffic | 3027 | 192.168.27.0/24 |

The configuration procedure is the same for public, iSCSI, and NFS traffic; simply substitute the proper VLAN and subnet information as appropriate. Follow three main steps to create the VLANs on the NetApp FAS controller:

1. Configure a VIF on the NetApp FAS controller.
2. Configure a VLAN on the NetApp FAS controller.
3. Assign an IP address to the VLAN.


CONFIGURE A VIF ON THE NETAPP FAS CONTROLLER

A VIF is necessary to facilitate redundancy for the physical network interfaces. To configure a VIF using interfaces that are already in use, access the NetApp FAS controller from the management console or serial console. In the following procedure, the onboard interfaces e0a and e0b are used for the VIF.

1. Log into the management console or serial port of the NetApp FAS controller.

2. Bring down the physical interfaces that will be used in the VIF, then create the VIF:

ice3170-3a> ifconfig e0a down

ice3170-3a> ifconfig e0b down

ice3170-3a> vif create lacp vif1 -b ip e0a e0b

You can use the Web console to configure the VIF if you are creating it from NICs on the NetApp FAS controller that are not already in use.

Note: NICs must be down in order to be configured for use within a VIF.

After creating the VIF, you can configure one or more VLANs for use with that VIF. IP addresses are then assigned to the individual VLANs.

CONFIGURE A VLAN ON THE NETAPP FAS CONTROLLER

Use the following procedure to configure a VLAN on the NetApp FAS controller.

1. From the menu on the left of the Web console, select Network→Manage Interfaces.

2. Select Add a New VLAN.

3. As shown in the following window, select the VIF created in the preceding procedure for the physical interface. Also, select the VLAN tag. In this procedure, VLAN 3027 has been created for private NFS traffic.

You must also configure all switches between the host’s private interface and the NetApp FAS controller to forward VLAN traffic. Trunk ports must be configured on the switches that allow specific VLAN traffic to pass.

ASSIGN AN IP ADDRESS TO THE VLAN

Use the following procedure to configure an IP address for the VLAN.

1. From the menu on the left of the Web console, select Network→Manage Interfaces.

2. As shown in the following window, select Modify on the line that contains the VIF that you just created (vif1-3027).


3. Populate the fields to match the network, as shown in the following window. In the case of the iSCSI or NFS nonrouted VLANs, the IP assignments are arbitrary. As long as the IP addresses of the host nodes’ private NICs are on the same subnet and the switches are configured properly, the VLAN is complete.

If jumbo frames are being used in the KVM environment, increase the default MTU size of 1500 to 9198. Every port along the way needs to support and be configured for jumbo frames. This includes the private interface of the host nodes. The host node configuration of jumbo frames is discussed in section 6.8, “Network Configuration of Host Nodes.”
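
A console sketch of the VLAN and IP assignment steps, assuming the vif1 VIF and VLAN 3027 from this procedure (the host address is illustrative, and the mtusize argument is needed only if jumbo frames are in use):

ice3170-3a> vlan create vif1 3027
ice3170-3a> ifconfig vif1-3027 192.168.27.5 netmask 255.255.255.0 mtusize 9198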


5 CONFIGURING SHARED STORAGE ON THE NETAPP FAS CONTROLLER

Shared storage may be based on:

• NFS
• iSCSI
• FCP

The following sections cover all three types of shared storage configuration; however, unless you are configuring multiple virtualization environments on the same NetApp FAS controller, you need to configure only one type of shared storage.

5.1 CONFIGURE NETAPP FAS3170 FOR NFS

The following list contains prerequisites for NFS-based shared storage:

• License the controller for NFS.
• Create a private VLAN to segregate NFS traffic.
• Create a volume.

See section 4, “NetApp Configuration,” for more details on these prerequisites.

CREATE A QTREE (OPTIONAL)

Use the following procedure to create a qtree.

Note: Qtrees are not required for an NFS share.

1. Log into the Web console.

2. As shown in the following window, select Volumes→QTrees→Add from the menu on the left of the Web console to launch the QTree wizard.

3. For Volume, select the volume previously created. (Never use vol0 for user data.)

4. For QTree Name, choose an arbitrary but meaningful name, such as kvm_q.

5. For Security Style, choose Unix.

6. Leave Oplocks checked.

7. Click Add. The qtree is created.


CREATE THE EXPORT

Use the following procedure to create the export.

1. Log into the Web console.

2. From the menu on the left of the Web console, select NFS→Add Export to launch the NFS Export wizard.

3. On the first window, check Read-Write Access, Root Access, and Security.

4. For Export Path, choose the previously created volume, /vol/kvm_vol. If a qtree was created, type the path of the qtree, such as /vol/kvm_vol/kvm_q.

5. For read-write hosts, add the individual IP addresses of the private interfaces that will be used for NFS traffic.

6. As seen in the following window, do the same for the Root Access window.

7. For Security, select UNIX® style.

8. Click Commit.

9. As shown in the following window, click Manage Exports to review the NFS share.
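
The same export can be written persistently from the console with exportfs -p; a sketch assuming the host nodes’ private NFS addresses are 192.168.27.11 and 192.168.27.12 (substitute your own):

ice3170-3a> exportfs -p rw=192.168.27.11:192.168.27.12,root=192.168.27.11:192.168.27.12 /vol/kvm_vol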


5.2 CONFIGURE NETAPP FAS3170 FOR ISCSI OR FCP

The procedures for configuring LUNs on the NetApp FAS controller for iSCSI and FCP are almost identical. Any differences are noted in the following prerequisite list and subsequent procedure.

Following are prerequisites for iSCSI- and FCP-based shared storage:

• License the controller for iSCSI or FCP.
• Set up a VLAN for private iSCSI traffic or set up a switched fabric network for FCP traffic.
• Create a volume.

CONFIGURE LUNS ON THE NETAPP FAS CONTROLLER FOR ISCSI AND FCP

Use the following procedure to configure LUNs on the NetApp FAS controller for iSCSI and FCP.

This procedure uses the two volumes (kvm_iscsi and kvm_vol, sized 80GB and 66GB, respectively) that were created earlier in this deployment guide. See the following window.

1. Create a LUN inside that volume by selecting LUNs→Add.

The path needs to include the created volume as well as the name of the LUN. The LUN protocol type needs to match the operating system, and you should include a brief description, as shown in the following window.


2. Create an initiator group (igroup) by selecting Initiator Groups→Add.

   The igroup enables the host nodes to access the LUN being created.

   The group name should be easily recognizable. In the following window, the name of the host gaining access is used for part of the name. The operating system is Linux.

3. Select a Type for the igroup.


For iSCSI-based storage, select iSCSI. The initiators must match either the contents of /etc/iscsi/initiatorname.iscsi on the host node (for software iSCSI) or the initiator name configured in the iSCSI HBA BIOS (for hardware iSCSI). In the preceding window, the names from the iSCSI HBAs are used. Although a separate igroup was created for the other host node, in practice, all host nodes attached to the same storage can use the same igroup. In an environment with many host nodes, it is easier to manage a single igroup than to create a new igroup each time you add a host node to the infrastructure.

For FCP-based storage, select FCP. The initiators need to be the WWPN(s) of each HBA. The WWPN, or port name, can be found in /sys/class/fc_host/hostX/port_name on the host nodes, where X is the host ID of the HBA. For more information, see section 9.3, “Configure FCP-based Shared Storage on the Host Nodes.”

4. For both FCP and iSCSI, complete the procedure to configure the LUN on the NetApp FAS controller by mapping the LUN to the igroup. As shown in the following window, return to LUNs→Manage and click the No Maps link on the right.

5. On the subsequent window, click Add Groups to Map.

6. On the LUN Map Add Groups window following, select the igroups that are to have access and click Add.

7. On the final window, click Apply.
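
A console sketch of the LUN, igroup, and mapping steps; the LUN size and the iSCSI initiator name are placeholders to replace with your own values (use igroup create -f with WWPNs for FCP):

ice3170-3a> lun create -s 60g -t linux /vol/kvm_iscsi/kvm_lun0
ice3170-3a> igroup create -i -t linux ig_chzbrgr iqn.1994-05.com.redhat:chzbrgr
ice3170-3a> lun map /vol/kvm_iscsi/kvm_lun0 ig_chzbrgr 0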

If using iSCSI, the final step is to restrict which network interfaces have access to the iSCSI LUNs. See the following section, “Restrict iSCSI Traffic to VLAN.”

If using FCP, the final step on the NetApp FAS controller is to make sure that the HBAs are configured. See section 5.3, “Configure Fibre HBAs on the NetApp FAS Controller.”


RESTRICT ISCSI TRAFFIC TO VLAN

For security purposes, the network interfaces that have access to the iSCSI LUNs should be restricted. Use the following procedure to restrict iSCSI traffic to the VLAN.

1. Select LUNs→iSCSI→Manage Interfaces. See the following window for an example of the Manage iSCSI Interfaces window.

2. Check each interface that should not have access.

3. Click Disable. In the following window, only the VLAN created for iSCSI traffic is enabled.

5.3 CONFIGURE FIBRE HBAS ON THE NETAPP FAS CONTROLLER

Use the fcadmin command on the NetApp FAS controller to view and alter the configuration of the onboard fibre HBAs. In Figure 3, adapters 0b and 0d are configured as targets. As targets, they can receive and handle requests related to FCP-based LUNs. (The initiators are attached to the disk shelves.)

To configure an onboard fibre HBA as a target, issue the command:

fcadmin config -t target <adapter>

HBA add-on cards are generally preconfigured as target ports.


Figure 3) View and alter the configuration of the onboard fibre HBAs.
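
The commands behind the figure can be sketched as follows; the adapter names 0b and 0d follow this guide’s example. fcadmin config with no arguments lists the adapters and their current roles, and fcp status reports the service state:

ice3170-3a> fcadmin config
ice3170-3a> fcadmin config -t target 0b
ice3170-3a> fcadmin config -t target 0d
ice3170-3a> fcp status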

Next, confirm that the FCP service is running. If it is not, issue the following command on the NetApp FAS controller:

fcp start


6 INSTALLATION AND BASE CONFIGURATION OF HOST NODES

This deployment guide provides the procedures for setting up two hosts to host a number of guest virtual machines. The two hosts use shared storage to facilitate live migration of virtual machines from one host to the other. You can add hosts as needed.

The host node installations should be very basic and should use the 64-bit version of RHEL 5.4. For security and performance reasons, configure the minimum set of services. Also, the host nodes should be identical except for naming and IP information. Identical information should include mountpoints, packages, layout, and security settings.

6.1 BIOS CONFIGURATION

To take advantage of the Intel VT or AMD-V virtualization enhancements, you might need to toggle them in the server BIOS. In addition, if the servers are to be configured with Red Hat Cluster Suite, disable ACPI in the BIOS, if possible.

6.2 INSTALLATION OF RED HAT ENTERPRISE LINUX 5.4

All typical means of RHEL installation (CD-ROM, HTTP, FTP, NFS, and PXE) are available; however, a minimal install is preferable. For example, the package and package group listing for the servers used in this guide includes:

• @admin-tools
• @base
• @core
• @editors
• @text-internet
• device-mapper-multipath

You can choose the KVM-related packages at install time, but for the sake of example, they will be installed manually postinstall. If using Kickstart to install the packages, add the following packages to the package list (a sample %packages section appears at the end of this section):

• kvm
• libvirt
• libvirt-python
• python-virtinst
• virt-manager
• virt-viewer

If you are not installing graphical packages in the KVM environment, you can omit the packages virt-manager and virt-viewer.
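
Taken together, a Kickstart %packages section for these hosts might look like the following sketch (drop virt-manager and virt-viewer on nongraphical hosts):

%packages
@admin-tools
@base
@core
@editors
@text-internet
device-mapper-multipath
kvm
libvirt
libvirt-python
python-virtinst
virt-manager
virt-viewer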

6.3 DISK LAYOUT

The disk layout should follow the needs of the data center, provided that the layout meets Red Hat best practices, for example, providing at least 6GB of space plus the recommended swap. Table 3 provides the basic layouts of the servers used in this document.

Table 3) Basic server layout.

| Partition | Size | LVM |
|---|---|---|
| /boot | 100MB | n/a |
| / | 10,240MB | VolGroup00/LogVol02 |
| /var | 1024MB | VolGroup00/LogVol00 |
| swap | 8192MB (based on Table 4) | VolGroup00/LogVol01 |

You may deploy any partition layout that provides at least 6GB of root storage and at least the swap space recommended by Red Hat. Table 4 shows the recommended swap space.

Table 4) Swap space recommended by Red Hat.

| Amount of Physical RAM | Recommended Swap |
|---|---|
| 4GB or less | At least 2GB |
| 4GB to 16GB | At least 4GB |
| 16GB to 64GB | At least 8GB |
| 64GB to 256GB | At least 16GB |
| 256GB to 512GB | At least 32GB |

6.4 REGISTER WITH RED HAT NETWORK

If not already done as part of the base configuration, register the host nodes with Red Hat Network (RHN). Repeat the registration for each host node. This ensures compliance with Red Hat subscription requirements and provides access to all of the necessary packages.

In the following window, rhn_register was run from one of the host nodes. Except for the account login and password, defaults are chosen.


After properly registering the hosts, subscribe them to the virtualization child channel.

1. Log in to RHN. For the remaining procedure, refer to the following window.

2. Click the Systems tab.


3. Select the system to be managed.

4. Click the Software tab.

5. Click the Software Channels tab.

6. Check the RHEL Virtualization Channel Entitlement.

7. Click the Change Subscriptions button at the lower right.

6.5 HOST SECURITY

SELinux, a security enhancement for Linux, is enabled by default. Leave it enabled unless you are using Red Hat Cluster Suite, which requires that SELinux be disabled. As will be explained in section 11, “Configure GFS2-based Shared Storage,” Red Hat Cluster Suite adds a required layer of data integrity for the LUN-based shared storage needed by GFS2.

The iptables firewall should be enabled and configured to allow the ports shown in Table 5.

Table 5) Service and KVM-related ports.

| Port | Protocol | Description |
|---|---|---|
| 22 | TCP | SSH |
| 53 | TCP, UDP | DNS |
| 111 | TCP, UDP | Portmap |
| 123 | UDP | NTP |
| 3260 | TCP, UDP | iSCSI (only if using a software iSCSI initiator) |
| 5353 | TCP, UDP | mDNS |
| 54321 | TCP | KVM interhost communication |
| 32803, 662 | TCP | NFS (only if using NFS; also requires additional configuration) |
| 49152-49216 | TCP | KVM migration |
| 5900-5910 | TCP | Virtual consoles (extend the range for additional consoles) |
| 67, 68 | TCP, UDP | DHCP |
| n/a | n/a | ICMP |
| n/a | n/a | Public virtual bridge (see section 6.8, “Network Configuration of Host Nodes”) |

You must perform additional configuration for NFS to use consistent ports. See section 9.1, “Configure NFS-Based Shared Storage on the Host Nodes,” for instructions on how to perform this configuration.

If you use Red Hat Cluster Suite, GFS, GFS2, or any combination of these, you should open the additional ports listed in Table 6. The services listed in Table 6 are all components of the Red Hat Cluster Suite, such as the cluster manager, the luci and ricci agents, distributed lock manager, and the daemon that monitors configuration consistency between host nodes.

Table 6) Cluster-related ports.

| Port | Protocol | Description |
|---|---|---|
| 5404, 5405 | UDP | cman |
| 8084 | TCP | luci |
| 11111 | TCP | ricci |
| 16851 | TCP | modclusterd |
| 21064 | TCP | dlm |
| 50007 | UDP | ccsd |
| 50006, 50008, 50009 | TCP | ccsd |

“Appendix C: Sample Firewall for Host Nodes” contains an example of one way to set up the iptables firewall.

6.6 DISABLE UNNECESSARY AND INSECURE SERVICES

You should disable all unnecessary services on the host server. For example, you should stop and disable services such as avahi-daemon, bluetooth, cups, hplip, and pcscd, as sketched below. Perform this task on all host nodes. In the interest of security, no unnecessary packages should be installed on the host nodes.
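
A minimal sketch of stopping and disabling those services (the exact service list is an assumption; adjust it to what is actually installed on your hosts):

# Stop each unneeded service now and keep it from starting at boot
for svc in avahi-daemon bluetooth cups hplip pcscd; do
    service $svc stop
    chkconfig $svc off
done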

SELinux is enabled by default; however, confirm that it is enabled, as shown in Figure 4.

Figure 4) Confirm that SELinux is enabled.
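
The check in the figure can be reproduced with either of the following commands; the first shows the configured mode, the second the running mode:

grep ^SELINUX= /etc/selinux/config
getenforce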

If SELINUX=enforcing is listed, SELinux is enabled.

Note: If using Red Hat Cluster Suite, SELinux must be disabled.

Insecure protocols such as FTP, TFTP, RSH, and Telnet should never be used. Secure equivalents such as SSH, SCP, and SFTP should be used instead; these services are provided by the SSH daemon and are enabled by default.

6.7 SECURE REMOTE ACCESS TO HOST NODES (SSH KEYS)

SSH KEY PAIRS AND TLS

Securing access to the host nodes is critical to the security of the virtual environment as well as to secure live migration. SSH and TLS are two primary methods for securing administrative communication between nodes. This guide covers setting up SSH key pairs. For information on using TLS, see section 20.2 of the “Red Hat Enterprise Linux 5 Virtualization Guide.”

CREATE SSH KEY PAIRS

Set up SSH keys on every host that is expected to run the Virtual Machine Manager (virt-manager). For example, consider the following two basic scenarios:

• There are two or more nodes hosting guests. There is no remote access server. Node 1 is selected to be the host running virt-manager. An SSH key pair is created on node 1. The public key is distributed to each of the other nodes, thereby allowing an encrypted means of communicating without the use of a password.

• There are two or more host nodes with a remote system that is used to manage the virtual environment. The SSH key pair is created on the remote node. The public key is then distributed to each of the host nodes.

In either scenario, begin by creating the key pair. Figure 5 shows the key pair being created from a remote administration node.


Figure 5) Create the key pair.
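
The command behind the figure is a single ssh-keygen invocation; a minimal sketch:

ssh-keygen -t rsa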

Accept the defaults and leave the passphrase empty. This creates two files:

• A private key (id_rsa)
• A public key (id_rsa.pub)

DISTRIBUTE THE PUBLIC KEY

1. Copy the public key to each host node.

In the following screenshot, the remote administration host taco has the key pair. In the process of copying the file to chzbrgr, the file name is changed to track the key’s origin.

After copying the public key to host node chzbrgr, configure it on host node chzbrgr.

2. Log in to the host.

3. Make sure that the .ssh directory has permissions set to 700.

4. Create the authorized_keys file with permissions set to 600.

5. Test the key by logging out of the host node, then logging back in. If SSH does not prompt for a password, the key is working properly, as seen in the following screenshot.


6. Distribute the public key and test it on each host node.
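
A consolidated sketch of steps 1 through 4, run from the remote host taco against host node chzbrgr (the renamed key file is illustrative):

# Copy the public key to the host node, renaming it to track its origin
scp ~/.ssh/id_rsa.pub root@chzbrgr:/root/taco_rsa.pub

# Then, on chzbrgr: lock down /root/.ssh and authorize the key
mkdir -p /root/.ssh && chmod 700 /root/.ssh
cat /root/taco_rsa.pub >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys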

6.8 NETWORK CONFIGURATION OF HOST NODES

As seen in Table 7, the IP network configuration of the host nodes configured for this example deployment accounts for:

• Access to the host node
• A private network for NFS
• A virtual Ethernet bridge allowing two-way traffic for the virtual guests

In addition, all of the interfaces have been configured for redundancy with channel bonds. The two Ethernet devices bonded for the private network can be used for a software-based iSCSI initiator as well. Table 7 contains the specifics of the host node IP configuration.

Table 7) Host node IP configuration.

| Interface | Channel Bond | Subnet | Note |
|---|---|---|---|
| eth0, eth1 | bond0 | 10.61.186.0/24 | Public traffic to host nodes |
| eth2, eth3 | bond1 | 192.168.27.0/24 | Private traffic for NFS (or iSCSI) |
| eth4, eth5 | bond2 | 10.61.186.0/24 | Public traffic for virtual guests by way of the virtual Ethernet bridge |

The host nodes in this deployment guide also have hardware-based iSCSI initiators installed that are seen by the operating system as eth6 and eth7. While there are configuration files in the network-scripts directory that were created at install time, their configuration is handled at the PCI BIOS layer. See “Appendix A: Configure Hardware-Based iSCSI Initiator” for more information.

PUBLIC INTERFACE

As noted, all of the connections to the host nodes are accomplished through channel-bonded interfaces for redundancy. In the following examples, the standard configuration files (ifcfg-ethX, ifcfg-bondX) are listed consecutively for brevity. In practice, the lines under ETH0 belong in the ifcfg-eth0 file. Figure 6 shows the proper configuration.


Figure 6) Channel bond for public host node traffic.
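
The figure’s configuration files can be reconstructed roughly as follows (the addresses are illustrative; ifcfg-eth1 mirrors ifcfg-eth0 with DEVICE=eth1):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.61.186.11
NETMASK=255.255.255.0
GATEWAY=10.61.186.1
USERCTL=no
BONDING_OPTS="mode=1 miimon=100"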

Before the bonded interface can be brought up, you must configure the bonding module in /etc/modprobe.conf. Figure 7 shows three alias lines as well as two option lines added to the file. The max_bonds parameter is necessary if more than one bonded interface is to be configured. The second options line is unnecessary if the BONDING_OPTS line is used as in Figure 6.

Figure 7) Configure the bonding module.
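
Per the description (three alias lines and two options lines), /etc/modprobe.conf would contain roughly the following sketch; drop the second options line if BONDING_OPTS is set in the ifcfg files as in Figure 6:

alias bond0 bonding
alias bond1 bonding
alias bond2 bonding
options bonding max_bonds=3
options bonding mode=1 miimon=100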

After the configuration files and modules are configured, restart networking with the command:

service network restart

Channel bonding mode 1 (active-passive) was chosen for this environment but might not be appropriate for every environment. See “Appendix B: Channel Bonding Modes” for a brief description of the different modes available.

PRIVATE INTERFACE

Except for the IP and subnet information, configure the channel bond for the private NFS traffic identically to the public bond. Figure 8 provides an example.


Figure 8) Channel bond for private NFS traffic.

Like the public channel bond, restart networking to test. Alternatively, you can use the ifup command to bring up only the newly configured channel bond.

If you configure jumbo frames on an interface, add the line MTU=9000 to its ifcfg-ethX file.

CREATE A VIRTUAL ETHERNET BRIDGE

The default virtual bridge in KVM uses NAT to forward traffic from the virtual servers to the outside network. It also allows the virtual servers to communicate with each other, but there is no path back to the virtual servers from the outside network. The only way to deploy new virtual servers and golden images is from ISO images.

To circumvent this limitation, an additional bridge that binds to a physical Ethernet NIC or channel bond is configured. This allows two-way traffic to the virtual guests on the public network. It also opens up the possibility of network installations of guests.

For the virtual servers to have two-way access to the network outside of the host nodes, you need to create at least one virtual Ethernet bridge on each host node. With the exception of the MAC addresses, the bridges must have the identical configuration on all nodes.

Like the other interfaces on the host node, the bridge is configured for redundancy by way of channel-bonded interfaces. Figure 9 shows the contents of the relevant configuration files in the /etc/sysconfig/network-scripts directory. Interfaces eth4 and eth5 are first bonded into bond2, then bond2 is configured with the entry BRIDGE=br0 line, and finally, the bridge (br0) is configured.


Figure 9) Channel bond for public virtual guest traffic through virtual Ethernet bridge.
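
A rough reconstruction of the figure’s files (the slave files for eth4 and eth5 follow the same pattern as the other bonds; the bridge address is illustrative):

# /etc/sysconfig/network-scripts/ifcfg-bond2
DEVICE=bond2
BOOTPROTO=none
ONBOOT=yes
BRIDGE=br0
BONDING_OPTS="mode=1 miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.61.186.12
NETMASK=255.255.255.0
DELAY=0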

To test the bonded bridge immediately, use the ifup command to bring up the channel bond, followed by the bridge.

As shown in Figure 10, check the status of the newly created bridge.


Figure 10) Status of the newly created virtual Ethernet bridge.
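
The status check is typically done with brctl from the bridge-utils package; br0 should list bond2 as its attached interface:

brctl show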

ALLOW TWO-WAY TRAFFIC FOR VIRTUAL ETHERNET BRIDGES

After configuring the bonded bridge, configure the host node to allow inbound traffic back through the host node. There are two methods for configuring the host node; both methods are equally acceptable.

Method 1

Use iptables. As shown in Figure 11, add a rule that allows traffic bound for the Ethernet bridge and save the configuration:

Figure 11) Using iptables to allow two-way traffic for virtual Ethernet bridges.
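
A sketch of the rule; this matches the bridge-forwarding rule documented in the Red Hat Enterprise Linux 5 Virtualization Guide:

# Accept any traffic traversing the bridge, then persist the rule
iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
service iptables save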

Repeat this configuration on all host nodes.

Method 2

Use kernel tunable parameters in /etc/sysctl.conf. The keys in Figure 12 tell iptables not to filter bridge traffic:

Figure 12) Using kernel tunable parameters in /etc/sysctl.conf to allow two-way traffic for virtual Ethernet bridges.
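
The three parameters in question are the bridge-netfilter toggles; in /etc/sysctl.conf they are set to 0:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0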

The three net.bridge parameters listed in Figure 12 need to be disabled (switched to 0) so that bridged traffic is forwarded successfully back to the virtual servers. Repeat this configuration on all host nodes.

After disabling the keys, run the command shown in Figure 13 to enact the changes.


Figure 13) Enact the configuration changes.
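
That command reloads the kernel parameters from the file:

sysctl -p /etc/sysctl.conf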

CONFIGURE NTP

Configure the host nodes for use with Network Time Protocol (NTP) to keep the time synchronized. In Figure 14, the time is given an initial sync with the ntpdate command. The ntp.conf file then is backed up, copied, and edited. Finally, NTP is configured to start on boot and started.

Figure 14) Configure host nodes for use with NTP.
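
A sketch of the sequence described, with a placeholder time server (substitute your site’s NTP source):

# One-time clock sync before starting the daemon
ntpdate ntp.example.com
# Back up ntp.conf before editing the server entries
cp -p /etc/ntp.conf /etc/ntp.conf.orig
# Start NTP at boot, then start it now
chkconfig ntpd on
service ntpd start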


7 CONFIGURE A REMOTE ADMINISTRATION HOST (OPTIONAL)

There are two primary options for performing administration tasks on the virtual environment:

• Graphical Virtual Machine Manager tool
• Command-line tools provided by libvirt and kvm-qemu-img (virsh, qemu-img, and so on)

Virtual Machine Manager can accomplish many common administration tasks. The libvirt command-line tools provide a detailed interface, while the libvirt API allows for the development of integration and automation. This deployment guide does not cover specific integration.

Strongly consider making use of a remote administration host whether using the graphical Virtual Machine Manager or text-mode virsh tool. You can use the remote administration host to manage the host nodes; manage the NetApp FAS controller; and, in the case of GFS2, manage the Red Hat Cluster Suite. Otherwise, one or more host nodes must be configured for these duties.

While you can manage the KVM environment from any RHEL desktop, that is not the best way to manage the cluster or the NetApp FAS controller. The remote administration host provides a more centralized way of managing the different aspects of the virtual environment.

If not using the graphical tools, there is no need to install any desktop, X Window System, or virt-manager packages on the host nodes. This helps to limit the number of packages installed on the host nodes.

7.1 BASIC REMOTE HOST CONFIGURATION

Basic remote host configuration can be performed from any server or workstation running RHEL 5.4 or later. If virt-manager is installed, the GNOME desktop environment provides an optional GUI. The host must be registered to RHN or to a local Yellowdog Updater, Modified (YUM) repository containing the proper packages.

7.2 CONFIGURE SECURITY

Table 8 lists the ports that should be enabled in iptables on the remote host.

Table 8) Remote host ports to be enabled on the remote host.

| Port | Protocol | Description |
|---|---|---|
| 22 | TCP | SSH |
| 53 | TCP, UDP | DNS |
| 123 | UDP | NTP |
| n/a | n/a | ICMP |
| 8084 | TCP | luci (if using Cluster Suite and/or GFS) |
| 11111 | TCP | ricci (if using Cluster Suite and/or GFS) |

Also enable SELinux.

7.3 CONFIRM NTP IS RUNNING AND STARTS ON BOOT

Refer to the previous section, “Installation and Base Configuration of Host Nodes,” for information on how to configure NTP.

7.4 CONFIGURE NFS ACCESS TO THE NETAPP FAS CONTROLLER

If using the remote administration host to manage the NetApp FAS controller, its NFS exports must be amended to provide access to the administration host.


From the Web console of the NetApp FAS controller, select NFS→Manage Exports, as seen in Figure 15. View the Path column for /vol/vol0. This volume contains all of the configuration files for the Data ONTAP operating system. In the corresponding Options column, the Read-Write and Root access should be configured for the IP address of the remote administration host.

Figure 15) Manage NFS export.

After the access is granted, create a mountpoint on the remote administration host, as shown in Figure 16.

Figure 16) Create a mountpoint on the remote administration host.
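A sketch of the steps in Figures 15 and 16; the controller hostname fas1 is hypothetical, while the /na_fas mountpoint matches the one used in section 7.6:

# Create a mountpoint and mount the controller's root volume
# (controller hostname "fas1" is an example)
mkdir /na_fas
mount fas1:/vol/vol0 /na_fas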

7.5 INSTALL THE PACKAGES NEEDED TO ADMINISTER KVM REMOTELY

Note: Install virt-manager only if using the graphical Virtual Machine Manager.

To install libvirt and its associated command-line tools, use the following command:

# yum install libvirt

To include the graphical Virtual Machine Manager, append virt-manager to the command:

# yum install libvirt virt-manager

A number of dependencies will also be installed.


7.6 CONFIGURE THE SSH KEY PAIR

See section 6.7, "Secure Remote Access to Host Nodes (SSH Keys)," for the configuration of SSH key pairs. You must distribute the public key for the remote administration host to each of the host nodes.

If you are using the remote administration host to manage the NetApp FAS controller, you must create an additional SSH key pair. Creation of this additional SSH key pair is described in section 6.7, “Secure Remote Access to Host Nodes (SSH Keys)”; however, there are two important differences:

• The key pair for the NetApp FAS controller must be of type dsa instead of rsa.
• The DSA public key must be distributed to the NetApp FAS controller.

After you configure the second SSH key pair, use the NFS mount of /vol/vol0 as described in section 7.4, "Configure NFS Access to the NetApp FAS Controller," to copy the public key to the appropriate file on the NetApp FAS controller.

In Figure 17, vol0 is NFS mounted under /na_fas on a remote administration host. The /etc/sshd directory on the storage controller was created automatically, but the root/.ssh subdirectories needed to be added for this process, as did the file authorized_keys.

Figure 17) NFS mount vol0 under /na_fas.

The public key is then appended to the authorized_keys file.
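A sketch of the key generation and distribution, assuming vol0 is mounted under /na_fas as in Figure 17 and that the default key file locations are used:

# Generate the DSA key pair for controller access (default paths assumed)
ssh-keygen -t dsa
# Create the required directories on the controller's root volume
mkdir -p /na_fas/etc/sshd/root/.ssh
# Append the DSA public key to the controller's authorized_keys file
cat ~/.ssh/id_dsa.pub >> /na_fas/etc/sshd/root/.ssh/authorized_keys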


8 INSTALL AND CONFIGURE KVM

Before continuing, be sure that the host nodes are registered to the Red Hat network (including subscription to the virtualization child channel), as described in section 6.4, “Register with Red Hat Network.” If the host nodes are being deployed in a secure environment that does not have access to the Internet, you need to deploy and configure a YUM repository. The “Red Hat Enterprise Linux 5 Deployment Guide” contains additional information on repositories.

8.1 INSTALL THE REQUIRED PACKAGES

Run the command shown in Figure 18 to install the KVM-related packages.

Figure 18) Install the KVM-related packages.

Note: You can also install virt-viewer. It is optional but adds functionality in a graphical environment.

There are many packages necessary to run KVM, but because YUM resolves and retrieves packages based on dependencies, all required packages are installed. Currently, running the yum install kvm command installs 24 packages.

After installing the KVM-related packages, start libvirtd, as shown in Figure 19, and make sure that it starts automatically on boot.

Figure 19) Start libvirtd.
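A consolidated sketch of the install and service setup shown in Figures 18 and 19, using the yum install kvm command referenced above:

# Install KVM and its dependencies (add virt-viewer if wanted)
yum install kvm
# Start libvirtd now and enable it at every boot
service libvirtd start
chkconfig libvirtd on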

Install the KVM-related packages on each host node.


9 SHARED STORAGE

Shared storage is the keystone of a flexible and scalable virtual environment such as KVM. The ability to migrate a virtual server from one host node to another, without downtime, requires that both host nodes see the same storage in the same manner.

The primary media for shared storage in a KVM virtual environment are NFS, iSCSI, and FCP. The following subsections discuss each. Note that a KVM and NetApp infrastructure supports multiple environments and that all three storage media can be used simultaneously; however, each environment should use only one medium.

For example, assume a cluster of 5 host nodes using NFS and a cluster of 10 host nodes using FCP. The shared storage for both environments can be maintained on the same NetApp FAS controller. However, the servers in the same cluster have to use the same shared storage medium. If additional host nodes are added to the first cluster of 5, the new host nodes also must use NFS.

9.1 CONFIGURE NFS-BASED SHARED STORAGE ON THE HOST NODES

NFS-based shared storage is very straightforward in a KVM and NetApp virtual environment. It typically involves the following tasks:

• Configure private bonded interfaces for NFS traffic
• Specify predictable NFS client ports
• Mount the NFS export
• Tune the number of concurrent I/Os
• Configure SELinux

CONFIGURE PRIVATE BONDED INTERFACES FOR NFS TRAFFIC

If the private bonded interfaces for NFS traffic have not yet been configured, refer to section 6.8, "Network Configuration of Host Nodes." In this KVM environment, bond1 was set up on both host nodes on a private 192.168.27.0/24 network. This corresponds to a VIF on the NetApp storage that is tied to VLAN 3027. The switches between the host nodes and the NetApp storage have been configured to deliver traffic on VLAN 3027 to the VIF designated for private NFS traffic.

SPECIFY PREDICTABLE NFS CLIENT PORTS

In /etc/sysconfig/nfs, uncomment LOCKD_TCPPORT=32803 and STATD_PORT=662, as shown in Figure 20. This forces NFS to use those ports instead of random defaults, allowing iptables to lock down the ports.

Figure 20) Specify predictable NFS client ports.
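The relevant lines in /etc/sysconfig/nfs after editing:

# /etc/sysconfig/nfs -- pin the NFS client ports so iptables can allow them
LOCKD_TCPPORT=32803
STATD_PORT=662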

After making sure that the proper ports are opened in the firewall and editing /etc/sysconfig/nfs, restart the host node. This makes certain that the newly configured NFS ports are in use. Section 6.5, “Host Security,” provides more information on firewall ports to be opened on the host nodes in the KVM environment.

MOUNT THE NFS EXPORT

The NFS export must be created on the NetApp FAS controller before moving forward.


Figure 21 shows the NFS mount entry in /etc/fstab. Note the use of the _netdev option. This makes sure that the NFS mount is not attempted until networking is up and running.

Test the NFS mount entry in /etc/fstab with the mount -a command; then run mount to list out the newly mounted file system.

Figure 21) Test the NFS mount entry.

Running mount -a mounts anything listed in /etc/fstab that is not already mounted. A lack of output from the command indicates that there are no errors. Then run the nfsstat -m command to show the mount options that are in effect when using the defaults mount option.
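A sketch of the fstab entry and its verification; the controller address and export name are placeholders on the confirmed 192.168.27.0/24 private NFS network:

# /etc/fstab entry (192.168.27.10 and /vol/kvm_nfs are example values);
# _netdev defers the mount until networking is up
192.168.27.10:/vol/kvm_nfs /var/lib/libvirt/images nfs defaults,_netdev 0 0

# Mount everything in /etc/fstab that is not already mounted
mount -a
# Display the effective mount options
nfsstat -m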

TUNE THE NUMBER OF CONCURRENT I/OS

The default number of concurrent I/Os submitted to the NetApp FAS controller is 16. The relevant key in /etc/sysctl.conf must be raised to a value of 128. Figure 22 shows the key changed on the live system and then made permanent by appending it to the /etc/sysctl.conf file.

Figure 22) Making the key permanent.
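A sketch of the change; the guide does not name the key, so the standard RHEL 5 RPC slot-table tunable, sunrpc.tcp_slot_table_entries, is assumed here:

# Raise the concurrent I/O (RPC slot) limit on the live system (key name assumed)
sysctl -w sunrpc.tcp_slot_table_entries=128
# Make the change permanent across reboots
echo "sunrpc.tcp_slot_table_entries = 128" >> /etc/sysctl.conf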

Note: There is also a key in /etc/sysctl.conf for the corresponding User Datagram Protocol (UDP) slot table, but it can be ignored because NFS traffic uses only Transmission Control Protocol (TCP) in this environment.

Next, create any wanted subdirectories under /var/lib/libvirt/images and upload any ISO images. For instance, some environments might need separate subdirectories for each operating system, release, ISO image, and golden image.

CONFIGURE SELINUX

It is very important to configure the images directory and all of its contents for use with SELinux. If done improperly, the KVM virtual servers fail to operate. Section 12, "SELinux Considerations," contains details on configuring the images directory.

9.2 CONFIGURE ISCSI-BASED SHARED STORAGE ON THE HOST NODES

The process of configuring iSCSI-based shared storage includes four major steps:

• Configuring an iSCSI initiator
• Rescanning or discovery on the SCSI bus
• Configuring multipathing
• Configuring GFS2


For iSCSI, there is the concept of the initiator and the target. The initiator is the hardware or software device that makes requests of the storage, which is also known as the target. The KVM and NetApp environment set up for this deployment guide made use of Qlogic dual-port iSCSI HBAs (hardware); however, instructions for both hardware- and software-based initiators are included in the guide.

This section contains instructions for the Red Hat supplied software initiator. Appendix A, "Configure Hardware-Based iSCSI Initiator," contains instructions for the hardware initiator.

The iscsi-initiator-utils package is required to configure the software-based iSCSI initiator. Before configuring the initiator, configure a separate VLAN on the NetApp FAS controller for iSCSI traffic, as well as a private interface or bond on the host nodes. Then complete the following procedure. (A consolidated command sketch follows the procedure.)

1. Identify the initiator name. The initiator name will need to be entered as part of an igroup on the NetApp FAS controller.

As shown in the following screenshot, the initiator name is created automatically when the iscsi-initiator-utils package is installed.

2. Be sure that the private VLAN on the NetApp FAS controller that handles the iSCSI traffic can be pinged from the host.

3. Use the iscsiadm command to discover the iSCSI target.

The IP addresses for the private iSCSI network exist on the 192.168.29.0/24 network, on VLAN 3029.

4. After the discovery process returns with the target address (in the following screenshot, iqn.1992-08.com.netapp:sn.151732002), restart the iscsi service. This information is saved automatically.

5. Confirm that you can see the new partitions.

In the following screenshot, two devices are seen because there are two paths to the same device.


6. Repeat this procedure exactly on each host node.

It is also imperative that each host node assign the same device names. In this KVM and NetApp environment, as seen in the preceding screenshot, each host node sees the devices as sda and sdb.
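A consolidated sketch of the preceding procedure; the target address 192.168.29.5 is a placeholder on the confirmed 192.168.29.0/24 iSCSI network:

# 1. Identify the initiator name (created when iscsi-initiator-utils is installed)
cat /etc/iscsi/initiatorname.iscsi
# 2. Verify connectivity to the controller's private iSCSI VLAN (example address)
ping -c 3 192.168.29.5
# 3. Discover the iSCSI target
iscsiadm -m discovery -t sendtargets -p 192.168.29.5
# 4. Restart the iscsi service; the discovered target information is saved
service iscsi restart
# 5. Confirm that the new devices are visible
cat /proc/partitions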

There are two final configurations:

• Configure multipathing
• Configure GFS2

These configurations are covered in separate sections because they also apply to FCP-based shared storage. Section 10 covers multipathing configuration, and section 11 covers GFS2 configuration.

DISK ALIGNMENT FOR ISCSI SHARED STORAGE

The initial requirements for aligning an iSCSI-based LUN are satisfied by properly configuring the LUN and igroup when you first create the LUN, including the proper operating system and type. The remaining requirements are satisfied by following the instructions in section 11.5, "Configure GFS2."

The steps to align a raw disk image properly are included in section 13, “Create a Golden Image or Template.”

9.3 CONFIGURE FCP-BASED SHARED STORAGE ON THE HOST NODES

The configuration of the Fibre Channel HBAs is done automatically at install time and requires little additional configuration. The major steps are directed more toward information gathering than configuration; a command sketch follows the procedure. The major steps include:

• Capture the HBA host IDs
• Capture the port names of the HBAs
• Rediscover the fabric

Note: Other steps, which are outside the scope of this deployment guide, involve proper zoning in the fabric by way of the Fibre Channel switch.

1. Determine how the operating system defines the HBAs. At this point, it is important to know the make and model of all HBAs installed on the host nodes. For example, the host nodes in this KVM and NetApp environment have both iSCSI and FCP HBAs installed, all made by Qlogic.

The following screenshot, based on the models listed, shows that the first two HBAs are iSCSI and that the second two are the fibre HBAs.


2. Determine the port names needed to configure the igroup on the NetApp FAS controller as well as the zoning on the fabric switch.

The following screenshot shows the command used to identify the port names.

3. After the LUN and igroup are configured and mapped on the NetApp FAS controller, rediscover the fabric. In the following screenshot, devices sdc through sdf are actually the same device.


Because of the redundant paths as well as how the device was zoned on the fabric switch, it shows up as four devices. As a result, the next task is to configure multipathing. Section 10, “Configure Multipathing on the Host Nodes,” explains multipathing on the host nodes.
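A sketch of the information-gathering steps, assuming the HBAs are exposed through sysfs as on RHEL 5 (host numbers are examples):

# 1. Identify the installed HBAs by make and model
lspci | grep -i qlogic
# 2. Capture the worldwide port names for igroup and zoning configuration
cat /sys/class/fc_host/host*/port_name
# 3. After the LUN is mapped, rediscover the fabric and rescan (host3 is an example)
echo 1 > /sys/class/fc_host/host3/issue_lip
echo "- - -" > /sys/class/scsi_host/host3/scan
# Confirm that the new devices are visible
cat /proc/partitions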

DISK ALIGNMENT FOR FCP SHARED STORAGE

The initial requirements for aligning an FCP-based LUN are satisfied by properly configuring the LUN and igroup when you first create the LUN, including the proper operating system and type. The remaining requirements are satisfied by following the instructions in section 11.5, "Configure GFS2."

The steps to align a raw disk image properly are included in section 13, “Create a Golden Image or Template.”


10 CONFIGURE MULTIPATHING ON THE HOST NODES

Redundant paths running to the shared storage require multipathing to manage the paths. Without multipathing software, the operating system does not recognize that the device at the end of each path is the same device.

Red Hat Device Mapper Multipath is included with Red Hat Enterprise Linux and is configured in the following procedure to manage the multiple storage paths in preparation for use with GFS2. The configuration for multipathing is identical for iSCSI and FCP. (A consolidated command sketch follows the procedure.)

1. Confirm that the multipath package is installed.

2. If the package needs to be installed, use the command shown in the following screenshot.

3. Edit the /etc/multipath.conf file to comment out the blacklist block near the top of the file.

4. Load the multipath kernel module.

In the following screenshot, the configuration is completed by starting the service, running the multipath -v2 command to configure the paths, and running multipath -ll to list the configured paths.

Device mpath0 was created. This device will be used by GFS2 to complete the configuration of the LUN-based iSCSI or FCP shared storage.


5. Configure multipath to start automatically at boot time.

The following screenshot shows the multipath configuration for the FCP LUN that was created earlier with four paths. The multipath device mpath1 was created with the four devices.
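A consolidated sketch of the multipath configuration, assuming the RHEL 5 device-mapper-multipath package:

# 1-2. Confirm the package is present, or install it
rpm -q device-mapper-multipath || yum install device-mapper-multipath
# 3. Edit /etc/multipath.conf to comment out the default blacklist block, then:
# 4. Load the module and start the service
modprobe dm-multipath
service multipathd start
# Build the path maps and list them
multipath -v2
multipath -ll
# 5. Start multipathd automatically at boot
chkconfig multipathd on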

The “Red Hat Enterprise Linux 5 DM Multipath” guide provides additional information on multipath.

Continue to section 11, “Configure GFS2-based Shared Storage.”


11 CONFIGURE GFS2-BASED SHARED STORAGE

Note: SELinux is not supported for use with Red Hat Cluster Suite in RHEL 5.4 and must be disabled prior to configuration of the cluster or GFS2. SELinux will be supported in RHEL 6, which is scheduled to be released in late 2010.

A clustered file system is required to provide the file locking needed to prevent data corruption when two or more hosts have read and write access to the same LUN-based file system. Red Hat Enterprise Linux 5.4 AP ships with both GFS and GFS2. While both satisfy the clustered file system requirement, this deployment guide covers only the GFS2 configuration.

Prior to the GFS2 configuration, set up a basic cluster that includes all of the host nodes that will access the same shared storage device. Red Hat Cluster Suite, also included in RHEL 5.4 AP, provides a configurable means of delivering high availability to various services. However, for the purposes of shared storage, the configuration is quite basic: it is enough to fence a host node properly and to satisfy the cluster requirement of the GFS2 file system.

Fencing refers to the process by which one host node forces another host node out of the cluster (reboot) or triggers an action that cuts data access to another host node. This happens when a host node is hung or otherwise fails to respond to a heartbeat. When fenced, the hung host node is forced to release any locks on the file system and is prevented from writing dirty, old, or corrupted data. In the case of a fencing action that forces a reboot, the offending host node rejoins the cluster when it comes back up.

The “Red Hat Enterprise Linux 5 Cluster Administration” guide provides more information on fencing.

Note: Although it is not documented in this deployment guide, Red Hat Cluster Suite also supports configuring virtual systems as cluster nodes to provide high availability to the virtual environment.

11.1 CONFIGURE THE HOST NODES

Subscribe the host nodes to the cluster and cluster storage child channels in Red Hat Network. Section 6.4, "Register with Red Hat Network," contains additional information on adding subscriptions. These child channels contain the GFS2 and Red Hat Cluster Suite packages required to continue host node configuration.

After updating the channel subscriptions, install ricci on all host nodes, as shown in Figure 23.

Figure 23) Install ricci.

Installing the ricci package also installs several dependencies, as shown in Figure 24.


Figure 24) Installing dependencies.
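A minimal sketch of the ricci setup; the cluster creation step in section 11.3 expects ricci to be running on every host node:

# Install and start the ricci cluster agent on each host node
yum install ricci
service ricci start
chkconfig ricci on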

11.2 CONFIGURE THE CLUSTER MANAGER

If using a remote administration host, install the luci package on the host, as shown in Figure 25. If not using a remote administration host, choose a host node that will also serve as the management node and install luci on that host node.

Figure 25) Install luci.

Initialize luci. This triggers a password prompt for the interface, as shown in Figure 26.


Figure 26) Initialize luci.
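A sketch of the luci setup, assuming the RHEL 5 Conga tooling in which luci_admin performs the initialization:

# Install and initialize luci; init prompts for the admin password
yum install luci
luci_admin init
# Restart luci before using the web interface, and enable it at boot
service luci restart
chkconfig luci on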

11.3 CONFIGURE THE CLUSTER

Begin configuring the cluster after restarting luci.

1. To complete the cluster configuration, open a Web browser and go to https://name_of_luci_host:8084, where name_of_luci_host is the hostname or IP address of the host running luci.

2. Log in to Red Hat cluster and storage systems.

3. Click the Cluster tab, then click Create a New Cluster, as shown in the following window.


4. Populate the fields with the appropriate host and password information and click Submit.

This creates the cluster as well as automatically installs the required cluster packages on the host nodes. If there are errors, be sure that the ricci service is started on the host nodes.

The following window appears when the cluster is being built.


Next, create and configure fencing devices.

11.4 CREATE FENCING DEVICES

When a failure occurs, a fencing device enables the cluster to remove a node in order to prevent data corruption. The cluster in this deployment guide uses the management card included in each host.

If you have not already done so, review the “Red Hat Enterprise Linux 5 Cluster Administration” guide for information on other supported fencing devices.

1. Click the Cluster tab; then click Nodes→Configure, as shown in the following window.

2. Click the first host node.

3. Under Main Fencing Method, click Add a fence device to this level.


4. Enter the information as appropriate on the Failover Domain Membership window. The following window shows the HP iLO management card being set up to power the host on and off as required.


5. Disable ACPI with the following command: chkconfig acpid off; service acpid stop

This keeps ACPI from interfering with the fencing of a node.

6. Repeat this procedure for each host node prior to configuring GFS2.

11.5 CONFIGURE GFS2

CREATE THE GFS2 FILE SYSTEM USING LVM ON THE ENTIRE LUN

A consolidated command sketch follows this procedure.

1. Make sure that the GFS2 kernel module, GFS2 utilities, and clustered LVM packages are installed on each host node.

2. Start the clustered LVM service and configure it to start automatically at boot time.

Clustered LVM uses the same commands and options as LVM; it is simply a cluster-aware version. It does not conflict with the existing LVM package or configuration.

3. Initialize the multipath device for use with LVM and then create a volume group. Run the pvdisplay command to capture the number of free physical extents. This number, highlighted in the following screenshot, is used in the next step.

Note: In the following screenshot, the mpath0 multipath device, which was created in section 10, "Configure Multipathing on the Host Nodes," is used for creating the GFS2 file system.

4. Using all available free physical extents, create a logical volume, or partition, within the volume group.


5. Scan the volumes from the other host nodes. Because LVM is running in clustered mode, the volume group and logical volume need to be created from only one node.

6. Create one journal for each host node; you can create additional journals if more host nodes will be added later. The locking protocol, lock_dlm, is required for use in a cluster. In the following window, the locking table is specific to the cluster in this example. The locking table shown, sharstor:kvm_data, consists of the cluster name plus an arbitrary name for the file system. In this example, the cluster name, which was created previously, is sharstor; the arbitrary file system name is kvm_data. Four journals are created, and the file system is created on the lv-kvm logical volume.

Like the clustered LVM volumes, the GFS2 file system is created on one host node only.


7. Create an entry in /etc/fstab and test the entry.

Note that the options are noatime and _netdev. The noatime option improves performance, and the _netdev option prevents mounting the file system until networking is up.

8. Apply the /etc/fstab entry and mount the file system on all host nodes.
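A consolidated sketch of the procedure; mpath0, lv-kvm, lock_dlm, the sharstor:kvm_data locking table, and the four journals are taken from the example above, while the volume group name vg_kvm and the mountpoint are assumptions:

# 2. Start clustered LVM now and at every boot
service clvmd start
chkconfig clvmd on
# 3. Initialize the multipath device and create a volume group
pvcreate /dev/mapper/mpath0
vgcreate vg_kvm /dev/mapper/mpath0
pvdisplay   # note the number of free physical extents
# 4. Create a logical volume using all free extents (substitute the PE count)
lvcreate -l <free_extents> -n lv-kvm vg_kvm
# 5. Rescan from the other host nodes
vgscan
# 6. Create the GFS2 file system with four journals and the cluster locking table
mkfs.gfs2 -p lock_dlm -t sharstor:kvm_data -j 4 /dev/vg_kvm/lv-kvm
# 7. /etc/fstab entry (mountpoint is an example):
#    /dev/vg_kvm/lv-kvm /var/lib/libvirt/images gfs2 noatime,_netdev 0 0
# 8. Mount on every host node
mount -a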

The "References" section provides links to documents containing additional information on Red Hat Cluster Suite and GFS2.

If LVM is used either on the entire LUN or on a partition that is properly offset, the LUN itself will be aligned properly. The default sizes for LVM physical extents and GFS2 block size are both evenly divisible by 8.

The “Best Practices for File System Alignment in Virtual Environments” guide provides a full explanation of disk alignment.

USE LVM ON A PARTITION WITH THE PROPER OFFSET

The preceding procedure explained how to create the GFS2 file system using LVM on the entire LUN. This procedure explains how to use LVM on a partition with the proper offset; a command sketch follows the steps. The following screenshot shows the commands used and the system output.

1. Run the parted command to create a disk label on mpath1.

2. Create a single partition starting at sector 64 and ending when it runs out of space. This results in the creation of the partition mpath1p1.

3. The partition is given the type LVM, and the LVM-related commands are run against the new partition.

4. Run the fdisk command to show the details of the newly created partition.

5. Run the pvcreate command to label the disk.

6. Run the vgcreate command to create the volume group.

All other commands are the same.
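A sketch of the partition-based variant; exact parted syntax varies by version, and the kpartx step to surface the multipath partition device is an assumption:

# 1-3. Label the disk, create one partition starting at sector 64, and type it LVM
parted /dev/mapper/mpath1 mklabel msdos
parted /dev/mapper/mpath1 unit s mkpart primary 64 100%
parted /dev/mapper/mpath1 set 1 lvm on
# Surface the mpath1p1 device node if it does not appear automatically
kpartx -a /dev/mapper/mpath1
# 4. Show the details of the new partition
fdisk -lu /dev/mapper/mpath1
# 5-6. Label the partition for LVM and create the volume group
pvcreate /dev/mapper/mpath1p1
vgcreate vg_kvm /dev/mapper/mpath1p1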


Each procedure affects performance equally, whether using the entire disk or creating a partition first; however, creating a partition first uses a few more commands, and the first 64 sectors remain unused.


12 SELINUX CONSIDERATIONS

Note: SELinux must be disabled if using Cluster Suite in RHEL 5.4. SELinux and Red Hat Cluster Suite will be supported for use together in late 2010 when RHEL 6 is scheduled for release.

This section explains how to make sure that KVM works properly with SELinux for NFS-based shared storage.

SELinux is a key layer in securing the host nodes. In Figure 27, three subdirectories have been created in the images directory. Other environments might have more or fewer directories or might even have a nondefault location for the images directory. The instructions in this section apply to all of the use cases.

Configure the security context for the subdirectories. If the images directory is moved to a nondefault location, it too needs to be configured so KVM is able to use its contents.

In this scenario, the ls -Z command lists the directories’ SELinux context prior to having their context corrected. The semanage command is used first to update the images directory, to which it replies that it is already defined. (Normally, you do not need to run the semanage command on the default directory because it already has the proper security context. In this example, it was run to show what to do if a nondefault directory is used.) Finally, the restorecon command is used recursively to update the SELinux context on everything under the images directory.

Figure 27) Configure the security context for the subdirectories.
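A sketch of the commands described above; the semanage file-context form shown is the standard RHEL 5 syntax, and the default images directory is assumed:

# Inspect the current SELinux contexts
ls -Z /var/lib/libvirt/images
# Register the virt_image_t context for the images directory
semanage fcontext -a -t virt_image_t "/var/lib/libvirt/images(/.*)?"
# Recursively apply the registered context to the directory and its contents
restorecon -R -v /var/lib/libvirt/images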

In Figure 28, the newly updated SELinux context (virt_image_t) is listed. An empty file is then created to show that created files will inherit the proper context.

Figure 28) Newly created files inheriting the proper context.


KVM will now work properly with SELinux for NFS-based shared storage.


13 CREATE A GOLDEN IMAGE OR TEMPLATE

Creating a base system to work from makes good sense for many reasons, including:

• Automation
• Consistency
• Predictability
• Faster deployment of virtual servers

After creating a template or golden image, you can clone virtual servers in a fraction of the time. In this deployment guide, the words template and golden image are used interchangeably.

Creating a template image follows this general process:

1. Choose an operating system.
2. Determine sizing, layout, and package requirements.
3. Make an ISO image, DVD, or PXE environment available to a host node.
4. Create and align a disk image.
5. Build a virtual server using the disk image.
6. Install the operating system.
7. Reboot the virtual server and make it generic.
8. Shut down the virtual server.
9. Using the template, make one or more clones.

Templates are created based on the different types of servers being deployed in the environment. In this deployment guide, the template is created for a very basic Web server based on RHEL 5.4.

13.1 CREATE AND ALIGN A DISK IMAGE FOR VIRTUAL GUESTS

In the KVM virtual environment, you can create a disk image automatically during the process of creating a virtual server, or you can create one as a separate process. In this deployment guide, the disk image is created as a separate process using the qemu-img command.

As documented in NetApp "Best Practices for File System Alignment in Virtual Environments," it is necessary to properly align each layer of storage between the virtual server and the underlying storage. In the context of a KVM disk image, this means that each partition needs to start at a sector number that is cleanly divisible by 8. For example, the legacy default starting sector is 63, but an aligned first partition starts at 64 or 128.

For the purposes of this deployment guide, a simple script was written to automate the creation of a disk image, create the virtual server, and then use the virtual Ethernet bridge to access a Kickstart server. Figure 29 shows the script that was used.


Figure 29) Script to automate creating a disk image, creating the virtual server, and using the virtual Ethernet bridge to access a Kickstart server.

The two primary commands are qemu-img and virt-install. The qemu-img command in this script creates a raw disk image 8GB in size. The new image is named when the script is run. For example, if using the script in Figure 29, ./build_me.sh dbserver01 creates a new virtual server named dbserver01.

The virt-install command in Figure 29 specifies that a virtual server is to be created on the local host using KVM (hvm), the virtual bridge, and 1GB of memory, and is pointed to the Kickstart server found at 10.61.186.248. Note that --file-size=8 is not actually needed here, because the disk image was already created by the qemu-img command earlier in the script; that option creates the disk image automatically when the virtual server is created.
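A sketch of what build_me.sh plausibly contains, reconstructed from the description above; the bridge name br0, the install tree URL, and the Kickstart file path are assumptions:

#!/bin/bash
# build_me.sh -- create an aligned raw disk image and kickstart a guest
# Usage: ./build_me.sh <guest_name>
NAME=$1
IMG=/var/lib/libvirt/images/${NAME}.img

# Create an 8GB raw (thin) disk image
qemu-img create -f raw ${IMG} 8G

# Build the guest: KVM (hvm), 1GB of memory, the virtual bridge, a VNC
# console, and a Kickstart install served from 10.61.186.248
virt-install --name ${NAME} --ram 1024 --hvm \
  --file ${IMG} \
  --network bridge:br0 \
  --vnc \
  --location http://10.61.186.248/rhel54 \
  --extra-args "ks=http://10.61.186.248/ks.cfg"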

Appendix E, "Sample Kickstart File for a Properly Aligned Virtual Server," lists the referenced Kickstart file in its entirety. It is very basic in that it specifies only a few packages and a basic disk layout. However, it is very important to note two sections of the Kickstart file as they relate to disk alignment:

• %pre section
• Disk layout section

Figure 30 shows the %pre section, which executes prior to the rest of the install.

The parted tool creates two partitions on the disk image that is seen by the virtual server as device /dev/hda. Each of the partitions is started on a sector that is cleanly divisible by 8.

Figure 30) Kickstart file, %pre section.
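A hedged sketch of what the %pre section plausibly contains (Appendix E holds the actual file); the sector boundaries are examples chosen to be divisible by 8:

%pre
# Create two aligned partitions on the guest disk; each starts on a
# sector that is cleanly divisible by 8 (boundary values are examples)
parted -s /dev/hda mklabel msdos
parted -s /dev/hda mkpart primary ext3 64s 208064s
parted -s /dev/hda mkpart primary 208072s 100%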


Figure 31 shows the disk layout section.

Note the two lines that start with the part directive. They dictate that the /boot directory and LVM are used, respectively, on the partitions created in the %pre section, thereby preserving the properly aligned partitions. Any remaining partitions are created within the LVM-managed partition. Do not use the clearpart directive in this Kickstart scenario, because it wipes out the partitions created in the %pre section.

Figure 31) Kickstart file, disk layout section.
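A hedged sketch of the disk layout section (again, Appendix E is authoritative); the --onpart directives reuse the partitions created in %pre rather than recreating them, and the volume group and logical volume names are examples:

# Reuse the pre-created, aligned partitions; do not use clearpart
part /boot --fstype ext3 --onpart hda1
part pv.01 --onpart hda2
volgroup vg00 pv.01
logvol swap --vgname=vg00 --name=lv_swap --size=1024
logvol / --fstype ext3 --vgname=vg00 --name=lv_root --size=1024 --grow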

For more information on Kickstart, see the “Red Hat Enterprise Linux 5 Deployment Guide.”

Note: The creation of a golden image does not require Kickstart; however, Kickstart automates the alignment process. DVD or ISO images are the default means for creating virtual servers. The use of a virtual bridge opens the possibility of using Kickstart and PXE for installation.

Figure 32 shows the build_me.sh script launched to create a golden image for a RHEL 5.4 Web server. The Virt Viewer window opens automatically because the virt-install command in the build_me script called for VNC.


Figure 32) Create a golden image.

The virtual server rhel54_web_gimage is created.

13.2 PREPARE THE GOLDEN IMAGE FOR CLONING

Next, configure the newly created virtual server to be a golden image. This involves adding any third-party software, configuring extra security settings, disabling unnecessary services, and anything else that contributes to the automation of cloning the golden image or template. At a minimum, the hostname, IP addresses, and MAC addresses for any network interfaces need to be removed. The ultimate purpose is the ability to clone servers on demand that require little or no interaction to put them into production.

In this particular scenario, only the hostname, IP address, and MAC address are unconfigured. Later, you can assign a MAC address during the cloning process.


Figure 33) Generic network configuration.

Shut down the golden image once it has all of its software configured and is made generic. The golden image is not meant to be a running server; it is only a template to be cloned.

Also note that both raw disk images and cloned raw disk images are thin by default. That is, while a raw disk image might be 8GB in size, it might take up only 2GB of space. Figure 34 shows this situation. The raw disk image webserv01.img was created as 8GB in size, but the leftmost column shows that it takes up only 1.2GB of space. As more data is stored on the disk image, that number grows.

Figure 34) Raw disk image size.

Section 16, “Configure Data Resiliency and Efficiency,” illustrates the means of taking advantage of this thin provisioning on the NetApp FAS controller.


14 CLONE VIRTUAL SERVERS

After you create a golden image or template, two different types of files are created. The raw disk image is created to provide a logical abstraction that the virtual server sees as a physical disk. In addition, an XML file, which stores the metadata for the virtual server, is created in the /etc/libvirt/qemu directory. The metadata includes where to locate the disk image, how it is connected to the network, and all of the hardware resources it has.

During cloning, both the original disk image and the XML file are referenced to create the new virtual server. The virt-clone command is used to clone a template. Figure 35 shows a rudimentary script that has been created to automate the process and is based on virt-clone.

Figure 35) Script created to automate the process.

The virt-clone command is the core of the script. The virsh command is used to start the newly cloned server. These commands, in addition to having a virtual server to clone, are all that is needed to create a new server based on a golden image. Also note that you could specify a predetermined MAC address as an option to the virt-clone command.

Figure 36 shows a script being called to clone the template rhel54_web_gimage. The newly cloned virtual server is websrv01.


Figure 36) Cloning a template.
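A sketch of the core of such a clone script, using the template and clone names from Figure 36; the image path is an assumption:

#!/bin/bash
# clone_me.sh -- clone a golden image and start the new guest
# Usage: ./clone_me.sh <template> <new_guest>
virt-clone --original $1 --name $2 \
  --file /var/lib/libvirt/images/$2.img
virsh start $2

# Example: ./clone_me.sh rhel54_web_gimage websrv01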

Note that the automatically assigned MAC address is listed as part of the script output. In this example, it is assigned by the virt-clone command, but it also could have been specified in the script or as an option to the virt-clone command.

In addition, note the use of the Virtual Machine Manager, or virt-manager. This is a graphical tool that can be used interactively to monitor and manage the KVM virtual environment.


15 LIVE MIGRATION OF VIRTUAL SERVERS

The process for moving a virtual server from one physical server to another is a cornerstone of any virtual environment, not just KVM. Live migration on KVM and NetApp is straightforward. The main requirement is that the source and target host nodes have access to the same shared storage.

If you have not already created and distributed the SSH keys, stop and perform this task now. In the KVM and NetApp environment depicted in this deployment guide, a remote administration host is used to perform many of the tasks rather than perform them directly from one of the host nodes. Therefore, the remote host, taco, has its public SSH key distributed to the host nodes chzbrgr and hmbrgr.

You can initiate the live migration from the command line or from the Virtual Machine Manager. Both are illustrated in this deployment guide. Regardless of the method, if the virtual server has never run on the target host node, the disk image and the XML file are both copied over automatically.

15.1 LIVE MIGRATION USING VIRTUAL MACHINE MANAGER

1. Run the virt-manager & command from the console to launch the Virtual Machine Manager.

The first task is to establish a connection between the remote administration host and the host nodes.

2. Select File→Add Connection, as shown in the following window.

The Add Connection dialogue box opens.

3. As shown in the following window, select QEMU from the Hypervisor drop-down menu and Remote tunnel over SSH from the Connection drop-down menu. Also, enter the hostname that needs to be connected to.

4. Repeat these last two steps for each host node.

Note: If not using a remote administration host, choose a host node to host the Virtual Machine Manager in addition to virtual guests.


5. After you enter all of the host nodes, right-click the virtual server that needs to be migrated.

6. Select Migrate→<destination host>.

In the following window, there is only one other host node configured; therefore, there is only one choice of host nodes.

Note: You can configure the Virtual Machine Manager to connect to all host nodes; however, the Virtual Machine Manager is not aware of what nodes share the same storage and which ones do not. The migration will fail when attempting to migrate a virtual server between physical servers that do not share the same storage.


The following window shows that virtual server websrv01 has been migrated successfully from host node chzbrgr to host node hmbrgr.

The Virtual Machine Manager lists all virtual servers, regardless of whether they are active, shut off, or paused.

15.2 LIVE MIGRATION FROM COMMAND LINE

Initiating a live migration from the command line involves running the virsh command. Figure 37 shows the virsh command being run from a remote administration host.

The first command lists the running virtual servers on host node hmbrgr.

The second command uses the migrate command with the --live switch to initiate the migration from one of the host nodes.

The third command shows that the virtual server is no longer running on the source host.

The final command confirms that the virtual server was migrated successfully.


Figure 37) Initiating a live migration.
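A sketch of the four commands in Figure 37 as run from the remote administration host, using the hostnames hmbrgr (source) and chzbrgr (target) from this environment:

# List the guests running on the source host node
virsh --connect qemu+ssh://hmbrgr/system list
# Live-migrate websrv01 from hmbrgr to chzbrgr
virsh --connect qemu+ssh://hmbrgr/system \
  migrate --live websrv01 qemu+ssh://chzbrgr/system
# Confirm that the guest is no longer on the source ...
virsh --connect qemu+ssh://hmbrgr/system list
# ... and that it is running on the target
virsh --connect qemu+ssh://chzbrgr/system list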

Note: If initiating the migration from the source host node instead of a remote host, the --connect option and initial uniform resource identifier (URI) are not necessary.

For example, running the following command from host node hmbrgr successfully migrates virtual server websrv01 to host node chzbrgr:

virsh migrate --live websrv01 qemu+ssh://chzbrgr/system


16 CONFIGURE DATA RESILIENCY AND EFFICIENCY

16.1 THIN PROVISIONING

Thin provisioning is a way to allocate space without reserving all of it at once; only written sectors are reserved. In contrast, thick provisioning reserves all space at creation time. You can enable thin provisioning at the volume, LUN, and disk image layers.

If you enable thin provisioning at one layer, it is important to enable it at all layers, or the space efficiency will not be fully realized. For example, thin provisioning a LUN on a thick-provisioned volume provides little or no benefit because the volume still reserves all allocated space.

THIN-PROVISIONED VOLUME

As shown in Figure 38, enable thin provisioning on a volume at creation time by setting the Space Guarantee to none.

Figure 38) Enabling thin provisioning on a volume.

THIN-PROVISIONED LUN

As shown in Figure 39, enable thin provisioning on a LUN at creation time by leaving Space Reserved unchecked.


Figure 39) Enabling thin provisioning on a LUN.

If using NetApp deduplication on a volume serving LUNs, you should thin provision the volume at two times the size of the LUN or LUNs.

THIN-PROVISIONED DISK IMAGE

As shown in Figure 40, raw disk images in KVM are thin by default. Creating a 10GB raw disk image with no other options results in a file that allocates 10GB of space but reserves only 12K.

Also, note the difference between allocated space and reserved space in other disk image files. For example, the disk image align.img is 8GB in size, but only takes up 2.7GB.

Figure 40) Enabling thin provisioning on a disk image.
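A sketch of the comparison shown in Figure 40; the file name is an example:

# Create a 10GB raw image; raw images are sparse (thin) by default
qemu-img create -f raw test.img 10G
# Allocated size versus space actually reserved on disk
ls -lh test.img   # reports the full 10GB allocated size
du -sh test.img   # reports only the blocks actually written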

16.2 DEDUPLICATION

Because most virtual servers are cloned from a golden image or template, they share many of the same data blocks. NetApp deduplication folds the identical blocks from the different virtual server images into a single instance on the NetApp FAS controller. Deduplication is enabled at the volume level and requires a license.


There is no graphical tool for NetApp deduplication. As shown in Figure 41, it must be enabled from the command line.

In this example, the sis command enables and starts deduplication on the volume kvm_fcp. You should configure a schedule to automate the deduplication at a regular interval. For more information on NetApp deduplication, see the “NetApp Deduplication for FAS and V-Series Deployment and Implementation Guide.”

Figure 41) Enabling deduplication.
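A sketch of the commands in Figure 41, run on the NetApp FAS controller console against the kvm_fcp volume named above:

sis on /vol/kvm_fcp
sis start -s /vol/kvm_fcp
sis status /vol/kvm_fcp

A recurring schedule can then be set with sis config -s; see the deduplication guide for the schedule syntax.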

16.3 SNAPSHOT Snapshot technology provides a read-only, point-in-time copy of a volume. It takes very little space and usually takes less than a second to create. Once a Snapshot copy is created, you can restore entire volumes or individual files. If a virtual server configuration is altered or data on a virtual server is deleted, you can restore the virtual server from the Snapshot copy. In addition, you can back up Snapshot copies to tape or replicate them to another site using NetApp SnapVault® or SnapMirror®.

Snapshot technology requires very little configuration, but the license for SnapManager® must be installed on the NetApp FAS controller.

When creating a Snapshot copy of a volume that contains active virtual servers, you must quiesce the active virtual servers first to get the most accurate view of the servers and data. This means that if there are virtual servers distributed among three host nodes, you must quiesce all of the virtual servers because they all reside on the same volume.

Figure 42 shows remote commands being run to list the active virtual servers and then suspending (quiescing) the active virtual servers.

After the active virtual servers are quiesced, a remote command is issued to the NetApp FAS controller to create a Snapshot copy of the volume kvm_iscsi, named kvmsnap. The second virsh list command is run to demonstrate the paused state of the virtual server before it is put back into active status.


Figure 42) Listing and suspending the active virtual servers.
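A sketch of the quiesce, Snapshot, and resume sequence from a remote administration host, using the kvm_iscsi volume and kvmsnap Snapshot name from this example; the controller hostname fas1 and the guest and host names are illustrative:

# Suspend (quiesce) each running guest on each host node
virsh --connect qemu+ssh://chzbrgr/system suspend websrv01
# Trigger the Snapshot copy on the controller over SSH
ssh fas1 snap create kvm_iscsi kvmsnap
# Resume the guests
virsh --connect qemu+ssh://chzbrgr/system resume websrv01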

Appendix D, "Sample Snapshot Script," contains a basic script to automate the process of quiescing the running virtual servers, triggering the Snapshot copy, and resuming the virtual servers.

Because all running virtual machines must be quiesced before a Snapshot copy is created, the normal Snapshot schedule should be disabled. Instead, a Snapshot copy should be triggered by a Linux or UNIX style cron job, such as a cron job on a remote administration host that first quiesces the virtual servers.

Figure 43 shows how to disable the scheduled Snapshot copy from the NetApp FAS controller. Select Volumes→Snapshots→Configure. From the Volume drop-down menu, select the shared storage volume. In Figure 43, the kvm_iscsi volume is chosen, and the Scheduled Snapshots check box is unchecked, thereby disabling the default schedule.

The hourly Snapshot schedule is in effect only if the Scheduled Snapshots check box is checked.


Figure 43) Disable the scheduled Snapshot copy from the NetApp FAS controller.


APPENDIXES

APPENDIX A: CONFIGURE HARDWARE-BASED ISCSI INITIATOR

A hardware-based iSCSI initiator was used for the purposes of this deployment guide. It is a Qlogic QLE4062C iSCSI HBA, based on the ISP4032 chip.

To configure the HBA, you must reboot the host and trigger the Qlogic BIOS configuration.

1. Press Ctrl-Q after the words Press <CTRL-Q> for Fast!UTIL appear on the console. The Select Host Adapter window appears. Because the HBA has two ports, it appears as two separate adapters in the BIOS. You must configure each adapter separately.

2. Select the first adapter and press Enter.

3. On the Fast!UTIL Options window, select Configuration Settings and press Enter.

4. On the Configuration Settings window, select Host Adapter Settings and press Enter.


5. On the Host Adapter Settings window, select Initiator IP Settings.

6. Enter the IP and Netmask that match the private VLAN for iSCSI traffic that was set up during the base configuration.

7. Select Initiator iSCSI Name and edit the iSCSI name after the colon so it is more meaningful and easier to remember.

8. Return to the Configuration Settings window by pressing Esc (Escape).

9. Select iSCSI Boot Settings and press Enter.

10. In the iSCSI Boot Settings window, select Manual for Adapter Boot Mode.

11. Select Primary Boot Device Settings.


12. In the Primary Boot Device Settings window, for the Target IP, enter the IP address on the private iSCSI VLAN for the NetApp FAS controller. Do not edit the values for the following fields, which keep their defaults:

• Use IPv4 or IPv6
• Target Port
• Boot LUN

13. Edit the iSCSI Name field on the Primary Boot Device Settings window.

You can find the edited iSCSI node name on the Web console by selecting LUNs→iSCSI→Manage Names.

14. Repeat this configuration procedure for each HBA, on each host node.

You must configure multipathing and GFS2 to complete the process.

APPENDIX B: CHANNEL BONDING MODES

The following information on channel bonding modes is from Red Hat Knowledge Base Article 16008.

Table 9) Channel bonding modes.

Channel Bonding Modes Description

Balance-rr or 0 Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.


Active-backup or 1 Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on one port only (network adapter) to avoid confusing the switch. In bonding version 2.6.2 or later, when a failover occurs in active-backup mode, bonding issues one or more gratuitous Address Resolution Protocols (ARPs) on the newly active slave. One gratuitous ARP is issued for the bonding master interface and each VLAN interface configured above it, provided that the interface has at least one IP address configured. Gratuitous ARPs issued for VLAN interfaces are tagged with the appropriate VLAN ID. This mode provides fault tolerance. The primary option affects the behavior of this mode.

Balance-xor or 2 XOR policy: Transmit based on the selected transmit hash policy. The default policy is a simple [(source MAC address XOR'd with destination MAC address) modulo slave count]. Alternate transmit policies may be selected by use of the xmit_hash_policy option, described below. This mode provides load balancing and fault tolerance.

Broadcast or 3 Broadcast policy: Transmits everything on all slave interfaces. This mode provides fault tolerance.

802.3ad or 4 IEEE 802.3ad Dynamic link aggregation: Creates aggregation groups that share the same speed and duplex settings. Uses all slaves in the active aggregator according to the 802.3ad specification. Slave selection for outgoing traffic is done according to the transmit hash policy, which may be changed from the default simple XOR policy by use of the xmit_hash_policy option. Note that not all transmit policies may be 802.3ad compliant, particularly in regard to the packet misordering requirements of section 43.2.4 of the 802.3ad standard. Differing peer implementations have varying tolerances for noncompliance. Prerequisites:
• Ethtool support in the base drivers for retrieving the speed and duplex of each slave.
• A switch that supports IEEE 802.3ad Dynamic link aggregation.
Note: Most switches require some type of configuration to enable 802.3ad mode.

Balance-tlb or 5 Adaptive transmit load balancing: Channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave. Prerequisite: Ethtool support in the base drivers for retrieving the speed of each slave.

Page 78: Reference Architecture Deployment Guide for KVM and Red Hat … · 2018-08-31 · 6 Deployment Guide for KVM and Red Hat Enterprise Linux on NetApp Storage NetApp Public If the output

Deployment Guide for KVM and Red Hat Enterprise Linux on NetApp Storage NetApp Public 78

Channel Bonding Modes Description

Balance-alb or 6 Adaptive load balancing: Includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond, such that different peers use different hardware addresses for the server. Receive traffic from connections created by the server is also balanced. When the local system sends an ARP request, the bonding driver copies and saves the peer's IP information from the ARP packet. When the ARP reply arrives from the peer, its hardware address is retrieved and the bonding driver initiates an ARP reply to this peer, assigning it to one of the slaves in the bond. A problematic outcome of using ARP negotiation for balancing is that each time that an ARP request is broadcast, it uses the hardware address of the bond. Hence, peers learn the hardware address of the bond and the balancing of receive traffic collapses to the current slave. This is handled by sending updates (ARP replies) to all the peers with their individually assigned hardware address such that the traffic is redistributed. Receive traffic is also redistributed when a new slave is added to the bond and when an inactive slave is reactivated. The receive load is distributed sequentially (round robin) among the group of highest speed slaves in the bond. When a link is reconnected or a new slave joins the bond, the receive traffic is redistributed among all active slaves in the bond by initiating ARP replies with the selected MAC address to each of the clients. The updelay parameter must be set to a value equal to or greater than the switch's forwarding delay so that the ARP replies sent to the peers will not be blocked by the switch. Prerequisites: • Ethtool support in the base drivers for retrieving the speed of each slave. • Base driver support for setting the hardware address of a device while it is open.

This is required so that there will always be one slave in the team using the bond hardware address (the curr_active_slave) while having a unique hardware address for each slave in the bond. If the curr_active_slave fails, its hardware address is swapped with the new curr_active_slave that was chosen.
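Bonding modes are selected through the bonding driver's module options. The following is a minimal sketch of an active-backup (mode 1) bond on RHEL 5; the interface names, IP address, and netmask are placeholders, and miimon=100 (link monitoring every 100 ms) is an assumed but common setting.

# /etc/modprobe.conf -- load the bonding driver for bond0 and set the mode.
alias bond0 bonding
options bond0 mode=1 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface itself.
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- enslave eth0 (repeat for eth1).
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

After creating the files, restart networking (service network restart) and check /proc/net/bonding/bond0 to confirm which slave is active.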


APPENDIX C: SAMPLE FIREWALL FOR HOST NODES

Figure 44 is an example of one method used to set up the iptables firewall; it is not the only method.

Figure 44) Set up the iptables firewall.
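As a supplement to Figure 44, the following is a minimal sketch of such a script. It assumes SSH administration, NFS served from the NetApp FAS controller, and a default-deny inbound policy; the port list is an assumption and should be adjusted to the services actually in use on the host node.

#!/bin/bash
# Minimal iptables setup for a KVM host node (illustrative sketch).
# Flush existing rules, then default-deny inbound traffic.
iptables -F
iptables -P INPUT DROP
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT

# Allow loopback traffic and replies to established connections.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# SSH for remote administration.
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# NFS and the portmapper (mountd/statd ports must also be pinned in
# /etc/sysconfig/nfs and opened if the host exports file systems).
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
iptables -A INPUT -p tcp --dport 111 -j ACCEPT
iptables -A INPUT -p udp --dport 111 -j ACCEPT

# iSCSI initiator traffic is outbound to port 3260, so no inbound rule is needed.

# Save the rules so that they persist across reboots (RHEL 5).
service iptables save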

APPENDIX D: SAMPLE SNAPSHOT SCRIPT

Figure 45 shows how to script Snapshot copies and run them from a remote administration host or from a host node.


Figure 45) Script Snapshot copies.

The script requires that DSA SSH keys be set up on the hosts expected to run the script and that the public key be distributed to the NetApp FAS controller. This is described in section 6.7, “Secure Remote Access to Host Nodes (SSH Keys).”
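As a supplement to Figure 45, the following is a minimal sketch of such a script using the Data ONTAP 7-mode snap commands over SSH; the controller hostname, volume name, and Snapshot naming scheme are placeholders.

#!/bin/bash
# Create a dated Snapshot copy of a FlexVol over SSH (illustrative sketch).
# Requires passwordless DSA SSH keys to the controller, per section 6.7.
FILER=filer1                      # placeholder FAS controller hostname
VOLUME=kvm_images                 # placeholder FlexVol name
SNAPNAME=nightly.$(date +%Y%m%d)

# Remove a same-named copy if one already exists, then create the new copy.
ssh root@${FILER} "snap delete ${VOLUME} ${SNAPNAME}" 2>/dev/null
ssh root@${FILER} "snap create ${VOLUME} ${SNAPNAME}"

# List the volume's Snapshot copies to confirm the new copy exists.
ssh root@${FILER} "snap list ${VOLUME}"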


APPENDIX E: SAMPLE KICKSTART FILE FOR A PROPERLY ALIGNED VIRTUAL SERVER

Figure 46 provides an example of a Kickstart file for a properly aligned virtual server.

Figure 46) Kickstart file.

The %pre section creates properly aligned partitions, and the disk layout section references the newly created partitions. If you are using the virtio drivers, use device vda instead of hda.
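As a supplement to Figure 46, the following is a minimal sketch of the relevant portions of such a Kickstart file. Partition sizes, the volume group name, and the logical volume sizes are assumptions; the key point is that each partition starts on a sector divisible by 8, so that it aligns with the 4KB WAFL blocks described in TR-3747.

# Disk layout (command section): references the partitions created in %pre.
part /boot --fstype ext3 --onpart hda1
part pv.01 --onpart hda2
volgroup vg0 pv.01
logvol / --vgname=vg0 --name=root --fstype ext3 --size=8192
logvol swap --vgname=vg0 --name=swap --fstype swap --size=1024

%pre
# Wipe the existing partition table, then create aligned partitions with
# sfdisk in sector units (-uS). Start sectors 64 and 208832 are divisible
# by 8, so both partitions align with 4KB WAFL blocks. With the virtio
# drivers, substitute vda for hda throughout.
dd if=/dev/zero of=/dev/hda bs=512 count=1
sfdisk -uS --force /dev/hda << 'EOF'
64,208768,83
208832,,8e
EOF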


REFERENCES

Home Page for KVM

www.linux-kvm.org

Red Hat Enterprise Linux and Microsoft Windows Virtualization Interoperability

www.redhat.com/promo/svvp/

KVM: Kernel-Based Virtual Machine

www.redhat.com/f/pdf/rhev/DOC-KVM.pdf

Red Hat Enterprise Linux 5 Virtualization Guide

www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Virtualization_Guide/index.html

Red Hat Enterprise Linux 5 Deployment Guide

www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Deployment_Guide/index.html

Red Hat Enterprise Linux 5 Installation Guide

www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Installation_Guide/index.html

Red Hat Enterprise Linux 5.5 Online Storage Guide

www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/html/Online_Storage_Reconfiguration_Guide/index.html

Red Hat Enterprise Linux 5 DM Multipath

www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/DM_Multipath/index.html

Best Practices for File System Alignment in Virtual Environments

www.netapp.com/us/library/technical-reports/tr-3747.html

Technical Report: Using the Linux NFS Client with Network Appliance Storage

www.netapp.com/us/library/technical-reports/tr-3183.html

Storage Best Practices and Resiliency Guide

http://media.netapp.com/documents/tr-3437.pdf

KVM Known Issues

www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Technical_Notes/Known_Issues-kvm.html

NetApp Deduplication for FAS and V-Series Deployment and Implementation Guide

www.netapp.com/us/library/technical-reports/tr-3505.html

Red Hat Enterprise Linux 5 Global File System 2

www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Global_File_System_2/index.html

SnapMirror Async Overview and Best Practices Guide

http://media.netapp.com/documents/tr-3446.pdf

SnapVault Best Practices Guide

http://media.netapp.com/documents/tr-3487.pdf

Data ONTAP 7.3 Data Protection Online Backup and Recovery Guide (available on NOW™)


Red Hat Enterprise Linux 5 Cluster Administration

www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Cluster_Administration/index.html

NetApp provides no representations or warranties regarding the accuracy, reliability or serviceability of any information or recommendations provided in this publication, or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS, and the use of this information or the implementation of any recommendations or techniques herein is a customer’s responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.

© Copyright 2010 NetApp, Inc. All rights reserved. No portions of this document may be reproduced without prior written consent of NetApp, Inc. NetApp, the NetApp logo, Go further, faster, Data ONTAP, FilerView, FlexVol, Network Appliance, NOW, SnapManager, SnapMirror, Snapshot, and SnapVault are trademarks or registered trademarks of NetApp, Inc. in the United States and/or other countries. Intel is a registered trademark of Intel Corporation. Linux is a registered trademark of Linus Torvalds. UNIX is a registered trademark of The Open Group. Windows is a registered trademark of Microsoft Corporation. All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such. RA-0004-0810