Dell EMC Ready Architecture for VMware vCloud NFV 3.2.1
vCloud Director 9.7.0.3 Core and Edge NFV Data Center Architecture Guide

Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2020 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners.

2020 - 03

Rev. A00

Contents

Overview
    Introduction
    Intended audience
    Acronyms and definitions
1 Deployment architecture for vCloud NFV
    General topology
    Solution bundle terminology
        Core data center
        Core Management domain
        Regional Management domain
        Regional Edge domain
    Solution bundle network topology
        Solution bundle physical network design and topology
        Solution bundle virtual network design and topology
        VDS DvPort group mapping with VLAN ID and related ESXi VMNIC
    NFVi pod
        Management pod
        Edge pod
        Resource pod
        Backup pod
2 Solution hardware
    Hardware installation and configuration
        Unpack and install equipment
        Power on equipment
    Tested BIOS and firmware
    Supported configuration
    List of components
3 Deployment server
    Configure networks and datastore on deployment server
    Deployment VM
4 Install and configure ESXi on each rack server
5 Installation of auxiliary components
    Deploying the auxiliary components prerequisites
    Configuring the NTP
    Installation of AD-DNS
6 VMware vCenter Server installation and configuration
7 Configure the virtual network
8 Configure VMware vSAN cluster
    Configuring the VMFS datastore on Backup cluster
9 Configure VMware vCenter High Availability
10 Deployment and configuration of NSX-T
    Deployment of the NSX-T Manager
    Deployment and configuration of NSX-T nodes
    Configuring the NSX-T Manager components
    NSX-T Edge
    Configure logical switches
    Deploy the logical router
        Configure NSX-T Tier-1 router
        Configure NSX-T Tier-0 router
        Create and configure VCD Tier-1 router
        Create and configure load balancer
11 VMware vCloud Director deployment and configuration
    Installation of the NFS server
    Installation and configuration of the vCloud Director
    Configure the vCloud Director to use vCenter Single Sign On
    Integration of the vCloud Director with other components
    Working with vCloud Director APIs to create Provider VDC
        Creating a session token for vCloud Director
        Retrieve VIM server details
        Update VIM server
        Retrieve the list of available resource pools
        Retrieve NSX-T Manager instance details
        Create a provider VDC
    Working with VMware vCloud Director
    Multisite VCD configuration
        Provider site pairing
        Organization site pairing
12 VMware vRealize Log Insight deployment and configuration
    Integration of vRLI with NFV components
13 VMware vRealize Orchestrator deployment and configuration
    vRealize Orchestrator integration with vCenter Server
    Configure vRealize Orchestrator to forward logs to vRLI
14 VMware vRealize Operations Manager deployment and configuration
    Integration of vROps with NFV components
15 VMware vSphere Replication deployment and configuration
16 Set up anti-affinity rules
    Enable vSphere DRS
    Enable vSphere availability
17 Forwarding logs to vRLI
    Forwarding vROps logs to vRLI
    Forwarding logs from vCD to vRLI
    Configure syslog server for NSX-T
        Log Message IDs
18 Data protection
    Data protection architecture
    Backup operation in data protection
    Replication operation in data protection
    Deployment and configuration of Data Domain
        Configuration of DD VE
    Deployment and configuration of Avamar
        Installing Avamar Administrator software
        Import vCenter certificate in Avamar UI
        Add vCenter as an Avamar client in AUI
        Avamar Proxy installation and configuration
        Configure MCS support
        Configure Avamar Client for Guest Backup
    Data Domain integration with Avamar Server
        Adding the Data Domain system to Avamar
    Replication
        Add an Avamar system as a replication destination
        Add a replication policy and create a replication group
    Integration of Avamar with VMware vRealize Operations Manager
    Integration of Avamar with VMware vRealize Log Insight
    vCloud Director Data Protection Extension
        vCD DPE installation
        Data Protection in the vCloud Director Tenant Portal UI
        Replication of vApps
A Bill of materials for Dell EMC Networking
B Bill of materials for Dell EMC PowerEdge servers
C Reference documentation


Overview

Introduction

The Dell EMC Ready Solution bundle is designed to consolidate and deliver the networking components that support a fully virtualized infrastructure. The components include virtual servers, storage, and networks. It uses standard IT virtualization technologies that run on high-volume servers, switches, and storage hardware to virtualize network functions.

This guide provides the configuration steps required to set up a Dell EMC vCloud NFV environment based on the vCloud NFV Edge Reference Architecture.

Intended audience

The information in this guide is intended for use by system administrators who are responsible for the installation, configuration, and maintenance of Dell EMC 14G technology along with the suite of VMware applications.

Acronyms and definitions

The Dell EMC Ready Solution bundle uses a specific set of acronyms that apply to NFV technology.

Table 1. Acronyms and definitions

| Acronym | Description |
|---|---|
| AVE | Avamar Virtual Edition |
| DD OS | Data Domain Operating System |
| DD VE | Data Domain Virtual Edition |
| DPDK | Data Plane Development Kit, an Intel-led packet processing acceleration technology |
| iDRAC | integrated Dell Remote Access Controller |
| NFVI | Network Functions Virtualization Infrastructure |
| NFV-OI | NFV Operational Intelligence |
| N-VDS (E) | Enhanced mode of the NSX-T Data Center N-VDS logical switch, which enables DPDK for workload acceleration |
| N-VDS (S) | Standard mode of the NSX-T Data Center N-VDS logical switch |
| ToR | Top-of-Rack |
| vCD | vCloud Director |
| VIM | Virtualized Infrastructure Manager |
| VNF | Virtual Network Function running in a virtual machine |
| VR | vSphere Replication |
| vRLI | VMware vRealize Log Insight |
| vRO | vRealize Orchestrator |
| vROps | VMware vRealize Operations |


Deployment architecture for vCloud NFV

This deployment is based on the VMware vCloud NFV Edge Reference Architecture. It follows specific design principles and topology to successfully set up the Dell EMC vCloud NFV Edge environment.

For more information, see:

• General topology
• Network topology
• Topology terminology and nomenclature

General topology

This deployment uses the concept of the Core Management domain, Regional Management domain, and Regional Edge site from the VMware vCloud NFV Edge Reference Architecture to set up the Dell EMC vCloud NFV environment.

Using this concept, you can set up the environment in the following configurations:

• A Core Management domain, a Regional Management domain, and a Regional Edge domain configuration
• A Core Management domain, a Regional Management domain, and multiple Regional Edge domains configuration
• A Core Management domain, multiple Regional Management domains, and multiple Regional Edge domains configuration

This guide provides the configuration steps to set up the environment using a Core Management domain, a Regional Management domain, and a Regional Edge domain configuration.

The Core Management domain, Regional Management domain, and Regional Edge site are connected to a single Telco router that establishes the connectivity between the management and edge components. This helps to manage the VNFs and workloads of edge sites from both region sites and the core data center. This deployment consists of three primary clusters: the Management, Resource, and Edge clusters. Each of these clusters has its own VMware vSAN datastore. You can scale up these clusters by adding more hosts, and you can scale up the pods by adding more clusters to them. This allows you to set up the environment as per the requirements of the specific functions on each pod.

Also within the Dell EMC vCloud NFV environment, a Backup cluster is created and is backed by the VMFS datastore. This Backup cluster helps to create a backup, restore the created backup, and replicate the backup data.

The following figure displays the architecture diagram that is used to deploy the Dell EMC Ready Solution vCloud NFV Edge.


Figure 1. Architecture overview

Solution bundle terminology

As described in the previous section, the Dell EMC vCloud NFV solution is created using a Core Management domain, a Regional Management domain, and an Edge site.

Based on the component placement and architecture that is used to successfully set up the Dell EMC vCloud NFV Edge environment, the following terminology is used:

• Core data center
    • Core Management domain
    • Regional Management domain
• Regional Edge domain

Core data center

In the Dell EMC vCloud NFV Edge environment, the Core data center is a combination of the Core Management domain and the Regional Management domain. Both can be present at the same or in different geographical locations.

The Core Management domain, Regional Management domain, and Regional Edge domain components are connected to a Telco router to establish the network connectivity between them.


Core Management domain

The Core Management domain follows the three-pod architecture design that is used in the Dell EMC vCloud NFV environment. The Core Management domain consists of the Management, Edge, and Resource pods. The Management pod hosts all the management and analytical components that manage its local Edge and compute pods as well as remote Edge domains or sites.

Also, a data protection system is configured using Avamar Virtual Edition (AVE) and Data Domain Virtual Edition (DD VE) on the Core Management domain to create backups, restore the created backups, and replicate the backup data.

The Core Management domain hosts the four pods that are used in this deployment:

• Management pod
• Resource pod
• Edge pod
• Backup pod

Figure 2. Core Management domain

Regional Management domain

The Regional Management domain consists of the Management pod only. It hosts all the management components that manage the Edge domains or sites at the remote location.


Figure 3. Regional Management domain

Regional Edge domain

The Regional Edge domain consists of the Edge and Resource pods. The Edge pod hosts all the NSX Edge VMs on which all the Edge services run. The Resource pod hosts all the Telco workloads, VMs, and VNFs. The Regional Edge domain is managed by both the Core Management and Regional Management domains.

Figure 4. Regional Edge domain

Solution bundle network topology

In this deployment, the Core Management domain, Regional Management domain, and Edge site are connected to a common Telco router. This router establishes the connectivity between all the sites. This allows the Core Management domain and Regional Management domain to manage the workloads and VNFs of Edge sites remotely. Also, it enables the Management domain to manage Regional sites.


For more information, see:

• Solution bundle physical network design and topology
• Solution bundle virtual network design and topology

Solution bundle physical network design and topology

The following figure displays the physical network topology that is used in this deployment. In this deployment, a Dell EMC Z9264 switch is used as a Telco router. This router is connected to the leaf switches installed in each domain to establish the network connectivity between them.

A pair of leaf switches is installed and configured in each domain. The Dell EMC S4048 top-of-rack (ToR) switch is installed on the Core data center and Regional Edge domain. The deployment server is connected to the ToR and leaf switches. The ToR switch provides the Out-of-Band (OOB) connectivity to the environment. The following figure also displays the networks that are created for this deployment. The leaf switches are configured as a Virtual Link Trunking (VLT) pair, which enables all the connections to be active while providing fault tolerance. The Telco router is connected to the leaf switches using VXLAN with BGP EVPN.

Figure 5. Physical network topology

Network connectivity to server ports

To ensure the network connectivity to server ports, use the information that is provided in the following tables. This information ensures that the network cables are connected correctly to the servers. The tables also provide the port-mapping information for the VMware VMNIC port references. The installation process requires that the PCIe expansion card slot (riser 1) is used for network connectivity.

NOTE: For more information, see Reference documentation to download the appropriate Dell EMC PowerEdge Owner's Manual and reference the Expansion card installation section.

The configuration process requires that the NIC ports that are integrated on the network adapter card, or NDC, are connected as outlined in the following tables.


NOTE: For more information, see Reference documentation to download the appropriate Dell EMC PowerEdge Owner's Manual and reference the Technical specifications section.

Table 2. Network connectivity configuration for Core Management domain (Management ESXi hosts)

| | LOM/NDC port 1 | LOM/NDC port 2 | LOM/NDC port 3 | LOM/NDC port 4 | Slot 1 port 1 | Slot 1 port 2 | Slot 2 port 1 | Slot 2 port 2 |
|---|---|---|---|---|---|---|---|---|
| VMware VMNIC port reference | vmnic0 | vmnic1 | vmnic2 | vmnic3 | vmnic4 | vmnic5 | vmnic6 | vmnic7 |
| R640/R740 servers | – | – | – | – | 25G | 25G | 25G | 25G |
| Switch | – | – | – | – | Leaf1 | Leaf1 | Leaf2 | Leaf2 |

Table 3. Network connectivity configuration for Core Management domain (Resource ESXi hosts)

| | LOM/NDC port 1 | LOM/NDC port 2 | LOM/NDC port 3 | LOM/NDC port 4 | Slot 1 port 1 | Slot 1 port 2 | Slot 3 port 1 | Slot 3 port 2 | Slot 4 port 1 | Slot 4 port 2 |
|---|---|---|---|---|---|---|---|---|---|---|
| VMware VMNIC port reference | vmnic0 | vmnic1 | vmnic2 | vmnic3 | vmnic4 | vmnic5 | vmnic6 | vmnic7 | vmnic8 | vmnic9 |
| R740xd servers | – | – | – | – | 25G | 25G | 25G | 25G | 25G | 25G |
| Switch | – | – | – | – | Leaf1 | Leaf1 | Leaf2 | Leaf2 | Leaf1 | Leaf2 |

Table 4. Network connectivity configuration for Core Management domain (Edge ESXi hosts)

| | LOM/NDC port 1 | LOM/NDC port 2 | LOM/NDC port 3 | LOM/NDC port 4 | Slot 1 port 1 | Slot 1 port 2 | Slot 3 port 1 | Slot 3 port 2 |
|---|---|---|---|---|---|---|---|---|
| VMware VMNIC port reference | vmnic0 | vmnic1 | vmnic2 | vmnic3 | vmnic4 | vmnic5 | vmnic6 | vmnic7 |
| R740xd servers | – | – | – | – | 25G | 25G | 25G | 25G |
| Switch | – | – | – | – | Leaf1 | Leaf1 | Leaf2 | Leaf2 |

Table 5. Network connectivity configuration for Core Management center deployment server

| | LOM/NDC port 1 | LOM/NDC port 2 | LOM/NDC port 3 | LOM/NDC port 4 | Slot 1 port 1 | Slot 1 port 2 |
|---|---|---|---|---|---|---|
| VMware VMNIC port reference | vmnic0 | vmnic1 | vmnic2 | vmnic3 | vmnic4 | vmnic5 |
| R640/R740 servers | – | – | 10G | – | 10G | 10G |
| Switch | – | – | ToR | – | Leaf1 | Leaf2 |

Table 6. Network connectivity configuration for Core Management domain (Backup)

| | LOM/NDC port 1 | LOM/NDC port 2 | LOM/NDC port 3 | LOM/NDC port 4 | Slot 1 port 1 | Slot 1 port 2 | Slot 3 port 1 | Slot 3 port 2 |
|---|---|---|---|---|---|---|---|---|
| VMware VMNIC port reference | vmnic0 | vmnic1 | vmnic2 | vmnic3 | vmnic4 | vmnic5 | vmnic6 | vmnic7 |
| R740xd servers | 10G | – | – | – | 25G | 25G | 25G | 25G |
| Switch | – | – | – | – | Leaf1 | Leaf1 | Leaf2 | Leaf2 |

Table 7. Network connectivity configuration for Regional Management domain (Management ESXi host)

| | LOM/NDC port 1 | LOM/NDC port 2 | LOM/NDC port 3 | LOM/NDC port 4 | Slot 1 port 1 | Slot 1 port 2 | Slot 2 port 1 | Slot 2 port 2 |
|---|---|---|---|---|---|---|---|---|
| VMware VMNIC port reference | vmnic0 | vmnic1 | vmnic2 | vmnic3 | vmnic4 | vmnic5 | vmnic6 | vmnic7 |
| R640/R740 servers | – | – | – | – | 25G | 25G | 25G | 25G |
| Switch | – | – | – | – | Leaf1 | Leaf1 | Leaf2 | Leaf2 |

Table 8. Network connectivity configuration for Edge Management domain (Resource ESXi host)

| | LOM/NDC port 1 | LOM/NDC port 2 | LOM/NDC port 3 | LOM/NDC port 4 | Slot 1 port 1 | Slot 1 port 2 | Slot 3 port 1 | Slot 3 port 2 | Slot 4 port 1 | Slot 4 port 2 |
|---|---|---|---|---|---|---|---|---|---|---|
| VMware VMNIC port reference | vmnic0 | vmnic1 | vmnic2 | vmnic3 | vmnic4 | vmnic5 | vmnic6 | vmnic7 | vmnic8 | vmnic9 |
| R740xd servers | – | – | – | – | 25G | 25G | 25G | 25G | 25G | 25G |
| Switch | – | – | – | – | Leaf1 | Leaf1 | Leaf2 | Leaf2 | Leaf1 | Leaf2 |

Table 9. Network connectivity configuration for Edge Management domain (Edge ESXi host)

| | LOM/NDC port 1 | LOM/NDC port 2 | LOM/NDC port 3 | LOM/NDC port 4 | Slot 1 port 1 | Slot 1 port 2 | Slot 3 port 1 | Slot 3 port 2 |
|---|---|---|---|---|---|---|---|---|
| VMware VMNIC port reference | vmnic0 | vmnic1 | vmnic2 | vmnic3 | vmnic4 | vmnic5 | vmnic6 | vmnic7 |
| R740xd server | – | – | – | – | 25G | 25G | 25G | 25G |
| Switch | – | – | – | – | Leaf1 | Leaf1 | Leaf2 | Leaf2 |

Solution bundle virtual network design and topology

The vCloud NFV platform consists of two networks:

• Infrastructure network
• Virtual Machine (VM) network

Infrastructure networks are host-level networks that are used to connect hypervisors with the physical networks. Each ESXi host has multiple port groups that are configured on each infrastructure network.

The VMware vSphere Distributed Switch (VDS) is configured on the hosts in each pod. This configuration provides a similar network configuration across multiple hosts. One VDS is used to manage the infrastructure network and another is used to manage the VM networks. Also, N-VDS is used to manage the traffic between:

• Components that are running on the transport node
• Internal components and the physical network

The ESXi hypervisor uses the infrastructure network for Edge overlay, vMotion, and vSAN traffic. The VMs use the VM network to communicate with each other. In this configuration, two distributed switches are used to create a separation: one switch is used for the infrastructure network, and the second switch is used for the VM network.

Each distributed switch has a separate uplink connection to the physical data center network, which separates uplink traffic from other network traffic. The uplinks are mapped to a pair of physical NICs on each ESXi host for best performance and resiliency.

NSX-T creates the VLAN-backed logical switches, which provide the connectivity to VNF components and VMs. On the ESXi hosts, physical NICs act as uplinks to connect the host virtual switches to the physical switch.

The following infrastructure networks are used in the pods:

• ESXi management network – ESXi host management traffic
• vMotion network – VMware vSphere vMotion traffic
• vSAN network – vSAN shared storage traffic
• Replication network – vSphere Replication storage traffic
• Backup network – Avamar and Data Domain traffic used to create backups, restore backups, and replicate backup data

The following VM networks are used and created in this deployment:

• Management network: management component communication
• VCSA HA network: VMware vCenter Server High Availability traffic
• Overlay network: NSX Edge overlay Tunnel End Point connectivity
• External network: Telco workload and VNF connectivity to the vCloud NFV management components
• Virtual network topology of the Management pod: Management pod networking consists of the infrastructure and VM networks, as shown in the following figure:

Figure 6. Management pod virtual network topology

• Virtual network topology of the Edge pod: The virtual network of the Edge pod depends on the network topology that is required for VNF workloads. In general, the Edge pod has the infrastructure networks, networks for management, and networks for the workloads.


Figure 7. Edge pod virtual network topology

• Virtual network topology of the Resource pod: The Resource pod virtual network depends on the network topology that is required to deploy tenants, as a specific tenant has a certain set of networking requirements.


Figure 8. Resource pod virtual network topology

• Virtual network topology for backup: The virtual network of the Backup cluster depends on the network topology that is required to create a backup, restore the backup, and replicate the backup.

Figure 9. Backup virtual network topology


VDS DvPort group mapping with VLAN ID and related ESXi VMNIC

The tables in this section display the VDS DvPort group/VSS port group mappings with VLAN IDs and the corresponding VMNICs present on each ESXi host, which are assigned as uplinks to the VDS/VSS.

For example, the VDS named Infrastructure Management VDS with the DvPort group ESXi_Mgmt_Network is configured with VLAN ID 100 and uses a pair of VMNICs (vmnic4 and vmnic6) as uplinks for the VDS.
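This mapping can also be applied programmatically. The following minimal sketch, assuming a reachable vCenter Server and an existing VDS named Core_Mgmt_Infra_VDS from Table 10, shows how one such DvPort group could be created with pyVmomi; the host name, credentials, and port count are illustrative assumptions, not values from this guide.

```python
# Hedged sketch: create the Core_Mgmt_Esxi_NW DvPort group (VLAN 100) on an
# existing VDS with pyVmomi. Address, credentials, and numPorts are assumed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcsa.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type and name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, vimtype, True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vds = find_by_name([vim.DistributedVirtualSwitch], "Core_Mgmt_Infra_VDS")

# DvPort group spec: name and VLAN ID taken from Table 10; the uplinks
# (vmnic4 and vmnic6) are already bound to the VDS itself.
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
spec.name = "Core_Mgmt_Esxi_NW"
spec.type = "earlyBinding"
spec.numPorts = 8
port_policy = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_policy.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(vlanId=100)
spec.defaultPortConfig = port_policy

vds.AddDVPortgroup_Task(spec=[spec])
Disconnect(si)
```

The same pattern, with the names and VLAN IDs changed, covers every VDS-backed row in Tables 10 through 12; the N-VDS entries are managed from NSX-T instead.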

Table 10. Core Management domain port groups

| VDS type | VDS name | Port group | VLAN ID | Uplink NICs | Switch |
|---|---|---|---|---|---|
| Management cluster | | | | | |
| VDS (infrastructure) | Core_Mgmt_Infra_VDS | Core_Mgmt_Esxi_NW | 100 | vmnic4 and vmnic6 | Leaf1 and Leaf2 |
| | | Core_Mgmt_vSAN_NW | 300 | | |
| | | Core_Mgmt_vMotion_NW | 200 | | |
| | | Core_Mgmt_vRep_NW | 500 | | |
| | | Core_Mgmt_Back_NW | 400 | | |
| VDS (virtual machine network) | Core_Mgmt_VM_VDS | Core_Mgmt_VM_NW | 20 | vmnic5 and vmnic7 | Leaf1 and Leaf2 |
| | | Core_Mgmt_VCSA_HA_NW | 30 | | |
| Resource cluster | | | | | |
| VDS (infrastructure) | Core_Res_Infra_VDS | Core_Res_Esxi_NW | 100 | vmnic4 and vmnic6 | Leaf1 and Leaf2 |
| | | Core_Res_vSAN_NW | 300 | | |
| | | Core_Res_vMotion_NW | 200 | | |
| | | Core_Res_VM_NW | 20 | | |
| | | Core_Res_Backup_NW | 400 | | |
| Edge network | NVDS_Overlay | Core_Res_Overlay_NW | 700 | vmnic5 and vmnic7 | Leaf1 and Leaf2 |
| Edge network | NVDS_DPDK | Core_Res_DPDK_NW | 40 | vmnic8 and vmnic9 | Leaf1 and Leaf2 |
| Edge cluster | | | | | |
| VDS (infrastructure) | Core_Edge_Infra_VDS | Core_Edge_Esxi_NW | 100 | vmnic4 and vmnic6 | Leaf1 and Leaf2 |
| | | Core_Edge_vSAN_NW | 300 | | |
| | | Core_Edge_VM_NW | 20 | | |
| | | Core_Edge_vMotion_NW | 200 | | |
| | | Core_Edge_Backup_NW | 400 | | |
| VDS (virtual Edge network) | Core_Edge_External_VDS | Core_Edge_Overlay_NW | 0–4094 | vmnic5 and vmnic7 | Leaf1 and Leaf2 |
| | | Core_Edge_External_NW | | | |
| Backup cluster | | | | | |
| VDS (infrastructure) | Core_Backup_VDS | Core_Bck_Esxi_NW | 100 | vmnic4 and vmnic6 | Leaf1 and Leaf2 |
| | | Core_Bck_VM_NW | 20 | | |
| | | Core_Bck_DD_Back_NW | 400 | | |
| VDS (public) | Public_VDS | Public_NW | – | vmnic0 | ToR |

Table 11. Regional Management domain port groups

| VDS type | VDS name | Port group | VLAN ID | Uplink NICs | Switch |
|---|---|---|---|---|---|
| Management cluster | | | | | |
| VDS (infrastructure) | Reg_Mgmt_Infra_VDS | Reg_Mgmt_Esxi_NW | 100 | vmnic4 and vmnic6 | Leaf1 and Leaf2 |
| | | Reg_Mgmt_vSAN_NW | 300 | | |
| | | Reg_Mgmt_vMotion_NW | 200 | | |
| | | Reg_Mgmt_Backup_NW | 400 | | |
| | | Reg_Mgmt_vRep_NW | 500 | | |
| VDS (virtual machine network) | Reg_Mgmt_VM_VDS | Reg_Mgmt_VM_NW | 20 | vmnic5 and vmnic7 | Leaf1 and Leaf2 |
| | | Reg_Mgmt_VCSA_HA_NW | 30 | | |

Table 12. Regional Edge Management domain port groups

| VDS type | VDS name | Port group | VLAN ID | Uplink NICs | Switch |
|---|---|---|---|---|---|
| Edge cluster | | | | | |
| VDS (infrastructure) | Reg_Res_Infra_VDS | Reg_Edge_Esxi_NW | 100 | vmnic4 and vmnic6 | Leaf1 and Leaf2 |
| | | Reg_Edge_vSAN_NW | 300 | | |
| | | Reg_Edge_vMotion_NW | 200 | | |
| | | Reg_Edge_VM_NW | 20 | | |
| | | Reg_Edge_Backup_NW | 400 | | |
| Edge network | Reg_Res_External_VDS | Reg_Res_Overlay_NW | 0–4094 | vmnic5 and vmnic7 | Leaf1 and Leaf2 |
| | | Reg_Res_External_NW | | | |
| Resource cluster | | | | | |
| VDS (infrastructure) | Core_Res_Infra_VDS | Reg_Res_Esxi_NW | 100 | vmnic4 and vmnic6 | Leaf1 and Leaf2 |
| | | Reg_Res_vSAN_NW | 300 | | |
| | | Reg_Res_vMotion_NW | 200 | | |
| | | Reg_Res_VM_NW | 20 | | |
| | | Reg_Res_Backup_NW | 400 | | |
| N-VDS (E) | N-VDS (E) | Reg_Res_Overlay_NW | 700 | vmnic5 and vmnic7 | Leaf1 and Leaf2 |
| | | Reg_Res_DPDK_NW | 40 | vmnic8 and vmnic9 | Leaf1 and Leaf2 |

NFVi pod

In this deployment, a pod is used to streamline the NFV environment operations and other roles. This deployment architecture illustrates a three-pod configuration, where three vSphere clusters are deployed to create the following clusters within the pods:

• Management pod
• Edge pod
• Resource pod

Also, a Backup pod is created for data protection. Clusters are the vSphere objects that are used to access the virtual domain resources and manage the resource allocation.

During the initial deployment, Dell EMC recommends:

• A minimum of four Dell EMC PowerEdge vSAN Ready Node servers in the Management pod
• A minimum of four Dell EMC PowerEdge vSAN Ready Node servers in the Edge pod
• A minimum of four Dell EMC PowerEdge vSAN Ready Node servers in the Resource pod
• A minimum of two Dell EMC PowerEdge vSAN Ready Node servers in the Backup pod


NOTE: A maximum of 64 servers can be added to each pod to scale up the deployment.

Management pod

The Management pod hosts and manages the following NFV management components:

• AD-DNS
• Network Time Protocol (NTP)
• VMware vCenter Server Appliance
• VMware NSX-T Manager
• VMware vCloud Director

Analytics components such as vRealize Operations (vROps) Manager and vRealize Log Insight (vRLI) are also deployed in the Management cluster, as are Avamar Virtual Edition (AVE), VMware vRealize Orchestrator (vRO), and VMware vSphere Replication.

Edge pod

The Edge pod hosts the NSX-T Edge as a Virtual Machine (VM) and manages all the connectivity to the physical domain within the architecture. The Edge pod also creates different logical networks between VNFs and external networks. The NSX-T Edge nodes work as NSX-T Data Center network components. Edge nodes participate in east-west traffic forwarding and provide connectivity to the physical infrastructure for north-south traffic management and capabilities.

Resource pod

The Resource cluster provides the virtualized runtime environment (mainly compute, network, and storage) to fulfill the Telco workload requirements.

Backup pod

In the Dell EMC vCloud NFV Edge deployment, two additional ESXi servers are used to create a Backup cluster. On each ESXi server, a Data Domain Virtual Edition (DD VE) instance is deployed for backup and replication operations.

One DD VE instance acts as the primary backup system, while the other DD VE instance is used for replication if the primary fails or crashes. Each DD VE instance integrates with its respective Avamar Virtual Edition (AVE) instance.

Figure 10. Backup pod


Solution hardware

Hardware installation and configuration

Servers, storage, and other networking components are required to install and configure the Dell EMC Ready Solution bundle deployment.

The following server solution is used in this deployment:

• Dell EMC PowerEdge R640 or R740 servers
• Dell EMC PowerEdge R740xd servers

This configuration uses the following switches:

• Dell EMC Networking S4048-ON, with one switch used as a ToR
• Dell EMC Networking S5232-ON, used as leaf switches
• Dell EMC Networking Z9264F-ON, used as a Telco switch

This deployment also uses the iDRAC9 to improve the overall availability of Dell systems.

Unpack and install equipment

After performing all standard industry safety precautions, proceed with the following steps:

1. Unpack and install the racks.
2. Unpack and install the server hardware.
3. Unpack and install the switch hardware.
4. Unpack and install the network cabling.
5. Connect each individual machine to both power bus installations.
6. Apply power to the racks.

NOTE: The Dell EMC EDT team usually performs these steps.

Power on equipment

To test the installation of the equipment, perform the following steps:

1. Power on each server node individually.
2. Wait for the internal system diagnostic procedures to complete.
3. Power up the network switches.
4. Wait for the internal system diagnostic procedures to complete on each of the switches.

NOTE: The Dell EMC EDT team usually performs these steps.

Tested BIOS and firmware

CAUTION: Ensure that the firmware on all servers, storage devices, and switches is up to date, as outdated firmware may cause unexpected results.

The server BIOS and firmware versions that are tested for the Dell EMC Ready Bundle for NFV platform are as follows:

Table 13. Dell EMC PowerEdge R640/R740 and R740xd tested BIOS and firmware versions

| Product | Firmware version |
|---|---|
| BIOS | 2.4.8 |
| iDRAC with Lifecycle Controller | 4.0 |
| rNDC – Intel(R) 4P X550-t | 19.0.12 |
| PCIe – Intel 25G 2P XXC710/10G X550 | 19.0.12 |
| HBA330 ADP/Mini storage controller | 16.17.00.05 |
| BP14G R640/R740 | 4.32 |
| BP14G R740XD | 2.41 |
| CPLD firmware for R740XD/R740 | 1.1.3 |

The switch firmware versions that are tested for the Dell EMC Ready Bundle for NFV platform are as follows:

Table 14. Dell Networking tested firmware and OS versions

| Product | Version |
|---|---|
| S4048T-ON firmware (1), out-of-band management OS | 10.5.0.3P1 |
| S5232-ON firmware (2), leaf-switch OS | 10.5.0.3P1 |
| Z9264F-ON firmware (1), telco-switch OS | 10.5.0.3P1 |
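The CAUTION above can be checked programmatically before deployment. The following minimal sketch, assuming an iDRAC9 reachable at an example address with example credentials, reads the firmware inventory through the iDRAC Redfish API and compares a subset of entries against the tested baseline in Table 13; the address, credentials, and name-matching rules are assumptions for illustration.

```python
# Hedged sketch: list installed firmware from iDRAC9 via Redfish and flag
# components that differ from the tested baseline. Values are illustrative.
import requests
import urllib3

urllib3.disable_warnings()  # lab only: iDRAC usually has a self-signed cert

IDRAC = "https://192.0.2.120"               # assumed iDRAC address
AUTH = ("root", "calvin")                   # assumed credentials
TESTED = {"BIOS": "2.4.8", "iDRAC": "4.0"}  # subset of Table 13

inventory = requests.get(f"{IDRAC}/redfish/v1/UpdateService/FirmwareInventory",
                         auth=AUTH, verify=False)
inventory.raise_for_status()

for member in inventory.json()["Members"]:
    uri = member["@odata.id"]
    if "Installed" not in uri:              # skip rollback/previous versions
        continue
    item = requests.get(IDRAC + uri, auth=AUTH, verify=False).json()
    name, version = item.get("Name", ""), item.get("Version", "")
    for component, expected in TESTED.items():
        if component in name:
            state = "OK" if version == expected else f"MISMATCH (expected {expected})"
            print(f"{name}: {version} -> {state}")
```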

Supported configuration

The following table provides the list of VMware components and their supported versions that are used and verified for this deployment.

Table 15. VMware vCloud NFV product inventory list

| Product | Version |
|---|---|
| ESXi | 6.7 U3 |
| VMware vCenter Server | 6.7 U3 |
| VMware NSX-T | 2.5.1 |
| VMware vSAN | 6.7 U3 |
| VMware vRealize Log Insight | 4.8 |
| VMware vRealize Operations Manager | 7.5 |
| VMware vCloud Director | 9.7.0.3 |
| vSphere Replication | 8.2.0.2 |
| vRealize Orchestrator | 7.6 |

List of components

Various software products are used to create the NFVI environment. The following table displays the components and the domains in which their instances are deployed.

Table 16. NFVI components

| Product | Core Management domain | Regional Management | Regional Edge |
|---|---|---|---|
| ESXi | Yes | Yes | Yes |
| AD-DNS | Yes | No | No |
| NTP | Yes | No | No |
| VMware vCenter Server | Yes | Yes | Yes |
| VMware vSAN | Yes | Yes | Yes |
| VMware NSX-T Manager | Yes | Yes | No |
| VMware NSX-T Edge | Yes | No | Yes |
| VMware vRealize Log Insight | Yes | Yes | No |
| VMware vRealize Operations Manager | Yes | Yes | No |
| VMware vCloud Director | Yes | Yes | No |
| vSphere Replication | Yes | Yes | No |
| vRealize Orchestrator | Yes | Yes | No |
| Avamar Virtual Edition | Yes | No | No |
| Avamar Proxy | Yes | Yes | Yes |
| Data Domain Virtual Edition | Yes | No | No |

Deployment server

About this task

This section begins the manual deployment process for creating a Dell EMC vCloud NFV environment. To create this environment, a deployment server is used that hosts a Virtual Machine (VM). This VM has all the required software, ISO/OVA files, and licenses.

Prerequisites

The following requirements must be satisfied before beginning the manual deployment of the Dell EMC VMware vCloud NFV platform:

NOTE: All compute nodes must have identical hard drives, RAM, and NIC configurations.

• The required hardware must be installed and configured as indicated in the Solution hardware section.
• Once the systems are configured as described in the Solution hardware section, power on the systems.
• Ensure that there is Internet access, including but not limited to the deployment server.
• Verify that the deployment server that is used to deploy the solution can hold the required VMware software appliance files.
• Verify that iDRAC is configured and accessible.
• Verify that the ESXi 6.7 U3 ISO file is available on the local machine.
• Verify that the ToR switch is configured.
• Verify that the Dell PowerEdge server controllers are set to HBA mode.

Steps

1. Install the ESXi on the deployment server using the iDRAC console.

2. Configure the ESXi system settings as follows:

a) Management configuration: Change IPv4 configuration, configure DNS.

Configure networks and datastore on deployment server

About this task

Once ESXi is installed and configured on the deployment server, you must create the vSwitch, port groups, and a datastore on the deployment server.

Table 17. vSwitch details

| vSwitch name | Uplink | MTU (bytes) | Link discovery | Security |
|---|---|---|---|---|
| vSwitch0 | vmnic2 | 1500 | Listen/CDP | For Promiscuous mode and Forged transmits, enable the Reject radio button. |
| vSwitch1 | vmnic4, vmnic5 | 9000 | Listen/CDP | For Promiscuous mode and Forged transmits, enable the Accept radio button. The NIC teaming policy is Route based on IP hash. |

Table 18. Port group details

| Port group name | Description | VLAN ID | Virtual switch | Security |
|---|---|---|---|---|
| VM Network | For iDRAC (OOB management network) | 0 | vSwitch0 | For Promiscuous mode and Forged transmits, enable the Inherit from vSwitch radio button. |
| PG-100-ESXi | For rack ESXi servers (ESXi management network) | 100 | vSwitch1 | For Promiscuous mode and Forged transmits, enable the Inherit from vSwitch radio button. |
| PG-20-VM-Mgmt | For rack server VMs | 20 | vSwitch1 | For Promiscuous mode and Forged transmits, enable the Inherit from vSwitch radio button. |

Steps

1. Open the web browser, and log in to the deployment server.
2. Configure the vSwitches (a scripted sketch of steps 2 and 3 follows this list):
   a) By default, vSwitch0 is available on the deployment server. Configure it as specified in the vSwitch details table.
   b) Create and configure a new vSwitch as specified in the vSwitch details table.
3. Create port groups:
   a) By default, a VM Network port group with VLAN ID 0 is created on the deployment server, configured as specified in the Port group details table.
   b) Create the remaining port groups on the deployment server as specified in the Port group details table.
4. Create a datastore on the deployment server.
5. Add an ESXi management adapter to access the ESXi server from the deployment server.
6. Configure the VM network within the network mapping for management purposes.
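The following minimal sketch, assuming direct access to the deployment ESXi host with example credentials, creates vSwitch1 and the PG-100-ESXi port group from the tables above with pyVmomi; the remaining port groups follow the same pattern. The host address and credentials are assumptions.

```python
# Hedged sketch: create vSwitch1 (vmnic4/vmnic5, MTU 9000) and the PG-100-ESXi
# port group (VLAN 100) on the deployment host. Address/credentials assumed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="deploy-esxi.example.local", user="root",
                  pwd="password", sslContext=ctx)
# Connecting directly to ESXi exposes a single datacenter/compute resource.
host = (si.RetrieveContent().rootFolder.childEntity[0]
        .hostFolder.childEntity[0].host[0])
netsys = host.configManager.networkSystem

# vSwitch1 per the vSwitch details table
vss_spec = vim.host.VirtualSwitch.Specification()
vss_spec.numPorts = 128
vss_spec.mtu = 9000
vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic4", "vmnic5"])
netsys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

# PG-100-ESXi per the Port group details table
pg_spec = vim.host.PortGroup.Specification()
pg_spec.name = "PG-100-ESXi"
pg_spec.vlanId = 100
pg_spec.vswitchName = "vSwitch1"
pg_spec.policy = vim.host.NetworkPolicy()  # inherit security policy from vSwitch
netsys.AddPortGroup(portgrp=pg_spec)
Disconnect(si)
```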

Deployment VM

About this task

In this deployment, a deployment VM is created to deploy the solution. The deployment VM contains the required VMware software, ISO files, licenses, and any other software required for the deployment.

NOTE: To deploy the VM, ensure that the Dell 14G servers and network are accessible.

The CentOS deployment VM is used in this guide as a base operating system platform for the deployment of the NFV Infrastructure (NFVI). The deployment VM performs all the steps involving installation, configuration, and verification of the VMware software stack.

NOTE: Before initiating the deployment, ensure that the necessary software is copied or downloaded to the deployment VM.

This document provides the steps necessary to install the following applications:

• VMware-VMvisor-Installer-6.7.0.update03
• Microsoft Windows Server 2016 ISO for AD-DNS
• CentOS 8.0 ISO for NTP
• VMware-VCSA-all-6.7U3
• VMware-vRealize-Log-Insight-4.8
• vRealize-Operations-Manager-Appliance-7.5
• NSX-T Manager 2.5.1
• VMware-vCloud-Director-9.7.0.3
• Avamar-virtual-instance
• Data-Domain-virtual-instance
• VMware-vRealize-Orchestrator
• VMware-vRealize-Replication

Prerequisites

• VMware ESXi Server 6.7 U3 is installed on the deployment server.
• The CentOS 7.6 (or later) ISO file is available.
• Three network adapters are available:
    • vnic1: For the management network (vSwitch0 – vmnic2)
    • vnic2: For the VM management network (vSwitch1 – vmnic4 and vmnic5)
    • vnic3: For the management ESXi network (vSwitch1 – vmnic4 and vmnic5)
• Available disk storage is greater than 150 GB.

24 Deployment server

Page 25: Dell EMC Ready Architecture for VMware vCloud NFV 3.2.1 vCloud … · VNF Virtual Network Function running in a virtual machine VR vSphere Replication vRLI VMware vRealize Log Insight

Steps

1. Using a browser, open the deployment server ESXi host.

2. Create a VM using the CentOS 8.0 (or above) ISO file.

3. Once the VM is created, power on the VM, and then complete the installation of CentOS.

4. Configure the network settings, including the IP address, netmask, and gateway IP, for the deployment VM (steps 4 through 8 are sketched as a script after this list).

5. Once the network settings are configured, restart the network.

6. Enable the automatic connectivity for networks.

7. Configure the NTP settings in the deployment VM.

8. Set the CentOS time zone to UTC in the deployment VM.

9. Disable the DHCP script from adding entries to the resolv.conf file during the boot process.

10. Disable the auto-mount option on CentOS to avoid multiple mounts of the ISO files.

11. Install Google Chrome on the deployment server.

12. Install and configure the VMware OVF tool.

13. Add the network adapters created using the Port group details table.

14. Assign IPs to each network adapter.

15. Install the VMRC application to open the VM console on a remote host.
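Steps 4 through 8 lend themselves to scripting on the CentOS deployment VM. The following minimal sketch wraps standard nmcli, chrony, and timedatectl commands; the connection name ens192, the addresses, and the NTP server are assumptions for illustration.

```python
# Hedged sketch: configure networking, NTP, and the UTC time zone on the
# CentOS deployment VM. Connection name and addresses are illustrative.
import subprocess

def run(cmd):
    """Run a command, echoing it first and failing loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Steps 4-6: static addressing, autoconnect at boot, then restart networking
run(["nmcli", "con", "mod", "ens192",
     "ipv4.addresses", "192.0.2.50/24",
     "ipv4.gateway", "192.0.2.1",
     "ipv4.dns", "192.0.2.10",
     "ipv4.method", "manual"])
run(["nmcli", "con", "mod", "ens192", "connection.autoconnect", "yes"])
run(["systemctl", "restart", "NetworkManager"])

# Step 7: point chrony at the NTP server built in the auxiliary components chapter
with open("/etc/chrony.conf", "a") as conf:
    conf.write("\nserver 192.0.2.11 iburst\n")  # assumed NTP server address
run(["systemctl", "enable", "--now", "chronyd"])

# Step 8: set the time zone to UTC
run(["timedatectl", "set-timezone", "UTC"])
```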


Install and configure ESXi on each rack server

About this task

The creation of an NFV infrastructure requires you to install and configure ESXi on the PowerEdge R640, PowerEdge R740, and PowerEdge R740xd servers based on the vSAN Ready Node.

Follow a naming convention that makes it easy to identify the placement and purpose of each component.

Table 19. Core Management domain ESXi distribution

| Management cluster | Resource cluster | Edge cluster | Backup cluster |
|---|---|---|---|
| Core-MGMT-ESXi-01 | Core-RES-ESXi-01 | Core-Edge-ESXi-01 | Core-Back-ESXi-01 |
| Core-MGMT-ESXi-02 | Core-RES-ESXi-02 | Core-Edge-ESXi-02 | Core-Back-ESXi-02 |
| Core-MGMT-ESXi-03 | Core-RES-ESXi-03 | Core-Edge-ESXi-03 | |
| Core-MGMT-ESXi-04 | Core-RES-ESXi-04 | Core-Edge-ESXi-04 | |

Table 20. Regional Management and Regional Edge domain ESXi distribution

| Management cluster (Regional Management) | Resource cluster (Regional Edge) | Edge cluster (Regional Edge) |
|---|---|---|
| Reg-MGMT-ESXi-01 | Reg-RES-ESXi-01 | Reg-Edge-ESXi-01 |
| Reg-MGMT-ESXi-02 | Reg-RES-ESXi-02 | Reg-Edge-ESXi-02 |
| Reg-MGMT-ESXi-03 | Reg-RES-ESXi-03 | Reg-Edge-ESXi-03 |
| Reg-MGMT-ESXi-04 | Reg-RES-ESXi-04 | Reg-Edge-ESXi-04 |

Prerequisites

• Verify that the minimum required hardware firmware versions are installed on the servers, as described in the Dell EMC PowerEdge R640/R740 and R740xd tested BIOS and firmware versions table.
• The ESXi Installer 6.7 U3 or later ISO file is available.
• The iDRAC with at least a 16 GB SD card is enabled.

Steps

1. Install the ESXi on each rack server using the iDRAC console.

2. From the management configuration option, configure the management network, set DNS, set the DNS suffix, and set the VLAN ID.

3. From the troubleshooting options, enable the SSH and shell for the ESXi servers (see the scripted sketch after these steps).

4. Disable the firewall.

5. Assign a license to each ESXi host.

6. Set the SSH policy for each ESXi host.

7. On the Resource pod, install the DPDK drivers to each Resource pod ESXi host.
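Steps 3 and 6 can be repeated across many hosts with a short script. The following minimal sketch, assuming root access to each rack ESXi host, enables the SSH service and sets its startup policy with pyVmomi; the host names and credentials are illustrative.

```python
# Hedged sketch: enable SSH ('TSM-SSH') and set its policy to start with the
# host on each rack ESXi server. Host list and credentials are assumed.
import ssl
from pyVim.connect import SmartConnect, Disconnect

HOSTS = ["core-mgmt-esxi-01.example.local",
         "core-mgmt-esxi-02.example.local"]  # extend per Tables 19 and 20

for name in HOSTS:
    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host=name, user="root", pwd="password", sslContext=ctx)
    host = (si.RetrieveContent().rootFolder.childEntity[0]
            .hostFolder.childEntity[0].host[0])
    services = host.configManager.serviceSystem
    services.UpdateServicePolicy(id="TSM-SSH", policy="on")  # start with host
    services.StartService(id="TSM-SSH")                      # start now
    print(f"SSH enabled on {name}")
    Disconnect(si)
```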


Installation of auxiliary components

About this task

The Dell EMC Ready Solution bundle requires the installation of several auxiliary components: Network Time Protocol (NTP), Active Directory (AD), and Domain Name System (DNS).

• AD provides a centralized authentication source for management components.
• DNS provides forward and reverse lookup services to all platform components.
• NTP provides a time synchronization source to all components.

In this deployment, a single instance of AD-DNS and NTP is created for the entire deployment. It is integrated with all the required components.

Deploy the AD-DNS and NTP virtual machines on the first management ESXi server.

Deploying the auxiliary components prerequisites

About this task

To install the auxiliary components successfully, you must create a datastore, a standard vSwitch, and a port group on the first management ESXi server.

The datastore created in this section is a temporary datastore. This datastore will be merged with the vSAN datastore once the vSAN is configured.

Table 21. vSwitch details

vSwitch name: vSwitch1
Uplink: vmnic5
MTU (bytes): 9000
Link discovery: Listen/CDP
Security:
• For Promiscuous mode and Forged transmits, select the Reject radio button.
• For MAC address changes, select the Accept radio button.

Table 22. Port group details

Port group name: Appliance_Network
Description: For rack ESXi servers (VM management network)
VLAN ID: 20
Virtual switch: vSwitch1
Security:
• For Promiscuous mode and Forged transmits, select the Reject radio button.
• For MAC address changes, select the Reject radio button.

Steps

1. Open a web browser and log in to the first management ESXi server.

2. Create a datastore on the first management ESXi server.

3. Create a standard vSwitch on the first management ESXi server using the information specified in the vSwitch details table.

NOTE: By default, vSwitch0 exists on the first ESXi server. Create the other virtual switches using the vSwitch details table.

4. Create the port groups on the first management ESXi server using the information specified in the Port group details table.

NOTE: By default, the VM network port group with VLAN ID 0 exists on the ESXi server. Create and assign the VLAN and vSwitch information to the port group as specified in the Port group details table.


Configuring the NTP

About this task

Install NTP on the Core data center.

Prerequisites

• A Linux CentOS 8.0 or higher VM is installed, configured, and running for network use.
• Verify that the VM has Internet connectivity.

NOTE: The Linux VM acts as the NTP server.

Steps

1. Install the Chrony NTP package from the CentOS repository.

2. Set the time zone to UTC and verify that the system and hardware clocks are synchronized.

NOTE: Set the NTP client and NTP server to have the same time zone.

3. Set Chrony to act as an NTP server for a local network.

4. Add the firewall rules.

5. Activate the Chrony NTP services and ensure that NTP is running correctly.

6. Synchronize each ESXi server clock with NTP.
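A minimal sketch of steps 1 through 5 on the CentOS VM follows; the management subnet 192.168.20.0/24 is an example value, not part of the validated configuration:

yum install -y chrony

timedatectl set-timezone UTC                      # step 2: UTC time zone
hwclock --systohc                                 # step 2: sync the hardware clock

echo "allow 192.168.20.0/24" >> /etc/chrony.conf  # step 3: serve NTP to the local network (example subnet)
echo "local stratum 10" >> /etc/chrony.conf       # step 3: keep serving time if upstream is unreachable

firewall-cmd --permanent --add-service=ntp        # step 4: open the NTP port
firewall-cmd --reload

systemctl enable --now chronyd                    # step 5: activate Chrony
chronyc sources -v                                # step 5: verify synchronization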

Installation of AD-DNS

About this task

In this deployment, Microsoft Windows Server 2016 is used for AD and DNS auxiliary services.

Prerequisites

• Minimum 1.4-GHz 64-bit processor
• Minimum 4 GB memory
• 40 GB or more disk space
• ESXi datastore
• VM with connectivity to the Internet

Steps

1. Create a VM using the Microsoft Windows Server 2016 ISO file.

2. Install and configure the Microsoft Windows Server 2016.

3. Configure the network properties for Microsoft Windows Server 2016.

4. Install the VMware tools on this VM.

5. Update the computer name of the Windows VM.

6. Add roles and features to configure the AD.

7. Configure the DNS properties.

8. Configure the reverse lookup zone.

9. Configure the forward lookup zone.

NOTE: Create a reverse and forward lookup zone entry for each component before deploying the respective component.

10. Disable the Windows firewall for the network settings.

11. Add a self-signed certificate to the Active Directory.

12. Configure the NTP client in the Microsoft Windows VM.


VMware vCenter Server installation and configuration

About this task

In this deployment, four instances of a vCenter Server with Embedded PSC are deployed and configured.

Two vCenter Servers are deployed on the Core Management domain:

• One vCenter Server manages the Management pod and Backup cluster.
• One vCenter Server manages the Resource and Edge pod of the Core Management domain.

Two are deployed on the Regional Management domain:

• One vCenter Server manages the Management pod.
• One vCenter Server manages the Regional Edge site.

All four vCenter Servers are deployed on the Management pod of the Core Management domain and Regional Management domain, and all four are implemented as a cluster.

In this configuration, the vCenter with Platform Services Controller (PSC) is deployed and has the PSC configured with common infrastructure security services, such as the following:

• VMware vCenter Single Sign-On
• VMware Certificate Authority
• Licensing
• Service registration
• Certificate management services

For each instance of VCSA, the deployment is completed in two stages. The first stage deploys the vCenter Server Appliance with embedded PSC from the ISO file, and the second stage sets up the vCenter Server with embedded PSC.

Table 23. Naming convention for VCSA

Core Management domain Regional Management domain

Core_Mgmt_VCSA Reg_Mgmt_VCSA

Core_Res_VCSA Reg_Res_VCSA

Prerequisites

• ESXi 6.7 U3 is configured and running.
• AD-DNS and NTP are running.
• Forward and reverse lookup entries for all VCSA instances are created manually on the DNS server before deployment.

Steps

1. For the Core Management domain, deploy the management VCSA with embedded PSC.

NOTE: The vSAN should be enabled, and management VCSA should be stored in the vSAN datastore.

2. Once the management VCSA is deployed, log in to the core management VCSA web client.

3. Update the vSAN default storage policy.

4. For the Core Management domain, deploy the resource VCSA with embedded PSC.

NOTE: Resource VCSA should be stored in the vSAN datastore.

5. Once the resource VCSA is deployed, log in to the core resource VCSA web client.

6. Update the vSAN default storage policy.

7. Add AD authentication for both Core_Mgmt_VCSA and Core_Res_VCSA.

8. Assign license to VCSA.


9. Create a data center, Resource cluster, and Edge cluster on the resource vCenter.

NOTE: The management data center and cluster should be created while the management VCSA is deployed.

10. Create the Backup cluster on the management vCenter.

11. On the management vCenter, add hosts to Management and Backup clusters.

12. On the resource vCenter, add hosts to the Resource and Edge clusters.

13. Enable the VMware EVC mode for each VCSA cluster.

14. Repeat the steps provided in this section to deploy and set up the VCSA on the Regional Management domain.
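The two-stage deployment can be driven from the GUI installer or unattended with the vcsa-deploy CLI that ships on the VCSA 6.7 ISO. The following is a minimal sketch run from the deployment VM, assuming the ISO is mounted at /mnt/vcsa; the template values must be edited for your environment, and the flag set reflects the 6.7 CLI installer:

cp /mnt/vcsa/vcsa-cli-installer/templates/install/embedded_vCSA_on_ESXi.json core_mgmt_vcsa.json
# edit core_mgmt_vcsa.json (target ESXi host, appliance name, network, SSO settings)

/mnt/vcsa/vcsa-cli-installer/lin64/vcsa-deploy install --accept-eula --acknowledge-ceip --verify-template-only core_mgmt_vcsa.json
/mnt/vcsa/vcsa-cli-installer/lin64/vcsa-deploy install --accept-eula --acknowledge-ceip core_mgmt_vcsa.json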


Configure the virtual network

About this task

After deploying and configuring the VCSA, configure the virtual networks for the environment. Figure 6 through Figure 9 display the underlying virtual distributed switch, or VDS, for the Management, Edge, and Resource clusters. Different VLAN IDs can be used in the physical environment.

Different virtual networks are configured on the Core Management domain and Regional Management domain to establish networking in the environment.

Table 24. Core Management domain VDS

Distributed switch name | Version | Number of uplinks | Network I/O control | Discovery protocol type/operation | MTU (bytes)

Management cluster
Core_Mgmt_Infra_VDS | 6.6.0 | 2 | Enabled | CDP/Both | 9000
Core_Mgmt_VM_VDS | 6.6.0 | 2 | Enabled | CDP/Both | 9000

Edge cluster
Core_Edge_Infra_VDS | 6.6.0 | 2 | Enabled | CDP/Both | 9000
Core_Edge_External_VDS | 6.6.0 | 2 | Enabled | CDP/Both | 9000

Resource cluster
Core_Res_Infra_VDS | 6.6.0 | 2 | Enabled | CDP/Both | 9000

Backup cluster
Core_Backup_VDS | 6.6.0 | 2 | Enabled | CDP/Both | 9000
Public_VDS | 6.6.0 | 2 | Enabled | CDP/Both | 9000

Table 25. Regional Management and Regional Edge domain VDS

Distributed switch name | Version | Number of uplinks | Network I/O control | Discovery protocol type/operation | MTU (bytes)

Management cluster
Reg_Mgmt_Infra_VDS | 6.6.0 | 2 | Enabled | CDP/Both | 9000
Reg_Mgmt_VM_VDS | 6.6.0 | 2 | Enabled | CDP/Both | 9000

Edge cluster
Reg_Edge_Infra_VDS | 6.6.0 | 2 | Enabled | CDP/Both | 9000
Reg_Edge_External_VDS | 6.6.0 | 2 | Enabled | CDP/Both | 9000

Resource cluster
Reg_Res_Infra_VDS | 6.6.0 | 2 | Enabled | CDP/Both | 9000

Table 26. Core Management domain VDS - LAG settings

Distributed switch | LAG name | Number of ports | Mode | Load balancing mode

Management cluster

Core_Mgmt_Infra_VDS Lag1 2 Active Source and destination IP address, TCP/UDP port, and VLAN

Core_Mgmt_VM_VDS Lag1 2 Active Source and destination IP address, TCP/UDP port, and VLAN

Edge cluster


Core_Edge_Infra_VDS Lag1 2 Active Source and destination IP address, TCP/UDP port, and VLAN

Core_Edge_External_VDS Lag1 2 Active Source and destination IP address, TCP/UDP port, and VLAN

Resource cluster

Core_Res_Infra_VDS Lag1 2 Active Source and destination IP address, TCP/UDP port, and VLAN

Backup cluster

Core_Backup_VDS Lag1 2 Active Source and destination IP address, TCP/UDP port, and VLAN

Table 27. Regional Management domain VDS - LAG settings

Distributed switch | LAG name | Number of ports | Mode | Load balancing mode

Management cluster

Reg_Mgmt_Infra_VDS Lag1 2 Active Source and destination IP address, TCP/UDP port, and VLAN

Reg_Mgmt_VM_VDS Lag1 2 Active Source and destination IP address, TCP/UDP port, and VLAN

Edge cluster

Reg_Edge_Infra_VDS Lag1 2 Active Source and destination IP address, TCP/UDP port, and VLAN

Reg_Edge_External_VDS Lag1 2 Active Source and destination IP address, TCP/UDP port, and VLAN

Resource cluster

Reg_Res_Infra_VDS Lag1 2 Active Source and destination IP address, TCP/UDP port, and VLAN

Table 28. Core Management domain port groups

Management cluster
• VDS (infrastructure) Core_Mgmt_Infra_VDS; uplink NICs: vmnic4 and vmnic6; switch: Leaf1 and Leaf2. Port groups: Core_Mgmt_Esxi_NW (VLAN 100), Core_Mgmt_vSAN_NW (VLAN 300), Core_Mgmt_vMotion_NW (VLAN 200), Core_Mgmt_vRep_NW (VLAN 500), Core_Mgmt_Back_NW (VLAN 400).
• VDS (virtual machine network) Core_Mgmt_VM_VDS; uplink NICs: vmnic5 and vmnic7; switch: Leaf1 and Leaf2. Port groups: Core_Mgmt_VM_NW (VLAN 20), Core_Mgmt_VCSA_HA_NW (VLAN 30).

Resource cluster
• VDS (infrastructure) Core_Res_Infra_VDS; uplink NICs: vmnic4 and vmnic6; switch: Leaf1 and Leaf2. Port groups: Core_Res_Esxi_NW (VLAN 100), Core_Res_vSAN_NW (VLAN 300), Core_Res_vMotion_NW (VLAN 200), Core_Res_VM_NW (VLAN 20), Core_Res_Backup_NW (VLAN 400).

Edge cluster
• VDS (infrastructure) Core_Edge_Infra_VDS; uplink NICs: vmnic4 and vmnic6; switch: Leaf1 and Leaf2. Port groups: Core_Edge_Esxi_NW (VLAN 100), Core_Edge_vSAN_NW (VLAN 300), Core_Edge_VM_NW (VLAN 20), Core_Edge_vMotion_NW (VLAN 200), Core_Edge_Backup_NW (VLAN 400).
• VDS (virtual machine network) Core_Edge_External_VDS; uplink NICs: vmnic5 and vmnic7; switch: Leaf1 and Leaf2. Port groups: Core_Edge_Overlay_NW (VLAN 0–4094), Core_Edge_External_NW.

Backup cluster
• VDS (infrastructure) Core_Backup_VDS; uplink NICs: vmnic4 and vmnic6; switch: Leaf1 and Leaf2. Port groups: Core_Bck_Esxi_NW (VLAN 100), Core_Bck_VM_NW (VLAN 20), Core_Bck_DD_Back_NW (VLAN 400).
• VDS (public) Public_VDS; uplink NIC: vmnic0; switch: ToR. Port group: Public_NW (no VLAN).

Table 29. Regional Management domain port groups

Management cluster
• VDS (infrastructure) Reg_Mgmt_Infra_VDS; uplink NICs: vmnic4 and vmnic6; switch: Leaf1 and Leaf2. Port groups: Reg_Mgmt_Esxi_NW (VLAN 100), Reg_Mgmt_vSAN_NW (VLAN 300), Reg_Mgmt_vMotion_NW (VLAN 200), Reg_Mgmt_Backup_NW (VLAN 400), Reg_Mgmt_vRep_NW (VLAN 500).
• VDS (virtual machine network) Reg_Mgmt_VM_VDS; uplink NICs: vmnic5 and vmnic7; switch: Leaf1 and Leaf2. Port groups: Reg_Mgmt_VM_NW (VLAN 20), Reg_Mgmt_VCSA_HA_NW (VLAN 30).

Edge cluster
• VDS (infrastructure) Reg_Edge_Infra_VDS; uplink NICs: vmnic4 and vmnic6; switch: Leaf1 and Leaf2. Port groups: Reg_Edge_Esxi_NW (VLAN 100), Reg_Edge_vSAN_NW (VLAN 300), Reg_Edge_vMotion_NW (VLAN 200), Reg_Edge_VM_NW (VLAN 20), Reg_Edge_Backup_NW (VLAN 400).
• VDS (virtual machine network) Reg_Edge_External_VDS; uplink NICs: vmnic5 and vmnic7; switch: Leaf1 and Leaf2. Port groups: Reg_Edge_Overlay_NW (VLAN 0–4094), Reg_Edge_External_NW.

Resource cluster
• VDS (infrastructure) Reg_Res_Infra_VDS; uplink NICs: vmnic4 and vmnic6; switch: Leaf1 and Leaf2. Port groups: Reg_Res_Esxi_NW (VLAN 100), Reg_Res_vSAN_NW (VLAN 300), Reg_Res_vMotion_NW (VLAN 200), Reg_Res_VM_NW (VLAN 20), Reg_Res_Backup_NW (VLAN 400).

Prerequisites

• All of the VCSA instances must be deployed and configured.
• Forward and reverse lookup entries for all VCSA instances are created manually on the DNS server before deployment.

Steps

1. In the VMware vSphere Web Client, log in to the Management vCenter (Core_Mgmt_VCSA) of the Core Management domain.

2. Create the Virtual Distributed Switch (VDS) for the Management cluster using the Core Management domain VDS table.

3. Create the LAG for the Management cluster VDS using the Core Management domain VDS - LAG settings table.

4. Create port groups for the Management cluster VDS using the Core Management domain port groups table.

5. Add hosts to the Management infra VDS (Core_Mgmt_Infra_VDS).

6. Configure the vSAN datastore on the Management cluster.

7. Add hosts to the Management VM VDS (Core_Mgmt_VM_VDS). Do not select a host that is associated with any VM.

8. Migrate the AD-DNS and NTP to the vSAN datastore.


9. Migrate the VMs to hosts that were added to VDS (Core_Mgmt_VM_VDS) in step 7.

10. Migrate the management VCSA VM to hosts that were added to VDS (Core_Mgmt_VM_VDS) in step 7.

11. Migrate the resource VCSA VM to hosts that were added to VDS (Core_Mgmt_VM_VDS) in step 7.

NOTE: Once the VMs are migrated from the vSphere Standard Switch (vSwitch) to the VDS, you must add the remaining ESXi hosts to the VDS.

12. Create VDS for the Resource and Edge clusters using the Core Management domain VDS table.

13. Create the LAG for the Resource and Edge clusters VDS using the Core Management domain VDS - LAG settings table.

14. Create port groups for the Resource and Edge VDS using the Core Management domain port groups table.

15. Add hosts to the Edge infra VDS (Core_Edge_Infra_VDS).

16. Add hosts to the Edge VM VDS (Core_Edge_External_VDS).

17. Add hosts to the Resource infra VDS (Core_Res_Infra_VDS).

18. Create the VDS for the Backup cluster using the Core Management domain VDS table.

19. Create the LAG for the Backup cluster VDS using the Core Management domain VDS - LAG settings table.

20. Create the port group for the Backup VDS using the Core Management domain port groups table.

21. Add hosts to the Backup infra VDS (Core_Backup_VDS).

22. Repeat the steps provided in this section and use the information provided in the Regional Management and Regional Edge domain VDS, Regional Management domain VDS - LAG settings, and Regional Management domain port groups tables to configure virtual networks for the Regional Management and Regional Edge domains.
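Because every VDS in this design uses an MTU of 9000 bytes, it is worth verifying jumbo frames end to end after the hosts are added. A minimal sketch from an ESXi shell follows; the vmkernel interface vmk2 and the peer vSAN address 192.168.30.12 are example values:

vmkping -I vmk2 -d -s 8972 192.168.30.12   # 8972-byte payload with don't-fragment set; success confirms the jumbo path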


Configure VMware vSAN cluster

About this task

The VMware Virtual SAN (vSAN) is a distributed layer of software that runs natively as a part of the ESXi hypervisor. vSAN aggregates local or direct-attached capacity devices of a host cluster and creates a single storage pool that is shared across all hosts in the vSAN cluster.

Each server has a hybrid mix of SSD and HDD drives for storage deployment. For vSAN, solid-state disks are required for the cache tier, while the spinning disks make up the capacity tier. Each Dell EMC server is configured for Host Bus Adapter (HBA) in non-RAID or pass-through mode since the vSAN software handles the redundancy and storage cluster information.

The vSAN should be enabled at the time of management VCSA deployment on both the Core Management and Regional Management domains, and the management VCSA should be stored on the vSAN datastore. In this section, vSAN is configured for the Edge and Resource clusters of the Core Management and Regional Management domains.

Steps

1. Log in to the Management VCSA of the Core Management domain (Core_Mgmt_VCSA).

2. Configure the vSAN on the remaining ESXi hosts attached to the Management cluster.

3. Assign the vSAN license key to vSAN of the Management cluster.

4. Update the vSAN HCL database for management vSAN.

5. Update the vSAN release catalog for management vSAN.

6. Enable the vSAN performance service for management vSAN.

7. Log in to the Resource VCSA of Core Management domain (Core_Res_VCSA).

8. Configure the vSAN for both Edge and Resource clusters.

9. Assign the vSAN license key to both Edge and Resource vSAN.

10. Update the vSAN HCL database for both Edge and Resource vSAN.

11. Update the vSAN release catalog for both Edge and Resource vSAN.

12. Enable the vSAN performance service for both Edge and Resource vSAN.

13. Repeat the steps provided in this section to configure the vSAN for the Management, Resource, and Edge clusters of the Regional Management and Regional Edge domains.
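After each cluster is configured, the vSAN state can be spot-checked from any member host's ESXi shell. This is a minimal sketch using standard esxcli namespaces:

esxcli vsan cluster get     # confirm the host has joined the vSAN cluster and the agent state is healthy
esxcli vsan storage list    # list the cache- and capacity-tier disks claimed by vSAN on this host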

Configuring the VMFS datastore on Backup cluster

About this task

On the Core Management domain, two standalone VMFS datastores will be created, one on each ESXi host in the Backup cluster.

Steps

1. Open the ESXi host client in your browser, and log in with valid credentials.

2. From the Home window, click Storage > New datastore.

3. In the Select Creation Type window, select Create new VMFS datastore then click Next.

4. Enter the necessary information as required and click Finish.

5. Repeat the steps provided in this section to create a second VMFS datastore on the second ESXi host.


Configure VMware vCenter High Availability

About this task

The VCSA-HA feature works as a cluster of three VMs. The three-node cluster contains Active, Passive, and Witness nodes. A different configuration path might be available, depending on your current configuration.

The vCenter Server Appliances (VCSA) are embedded with PSC and related services. In this configuration, vCenter HA ensures the availability of the VCSA. The vCenter HA cluster consists of the following nodes:

• Active node - serves the client requests.
• Passive node - acts as a backup node if the Active node crashes.
• Witness node - provides the quorum that protects the cluster against split-brain conditions.

The dedicated vCenter High Availability (vCHA) network replicates the vCenter Server Appliance data between nodes and ensures that the data is continuously synchronized and up to date. The configuration of the anti-affinity rules ensures that all of the nodes are on different hosts and protects against host failures.

NOTE: As part of this solution deployment, VCSA-HA is configured differently for the Management and Resource vCenter clusters.

The Management cluster VCSA-HA for both the Core Management and Regional Management domains is configured using the Basic option. The Basic option allows the vCenter HA wizard to create and configure a second network adapter on the VCSA. The vCenter HA wizard also clones the Passive and Witness nodes from the Active node and configures the vCenter HA network.

The Resource cluster VCSA-HA for both the Core Management and Regional Management domains is configured using the Advanced option. Using the Advanced option to configure the vCenter HA cluster, you are responsible for adding a second NIC to the VCSA, cloning the Active node to the Passive and Witness nodes, and configuring the clones.

NOTE: Each HA node is configured to reside on a different host. Verify that the IPv4 addressing was not mixed when networking was configured on the nodes. Ensure that a gateway for the HA network was not specified when configuring the nodes.

Configuring the Resource vCenter HA for both the Core Management and Regional Management domains requires the manual addition of a second network card to the VCSA and the cloning (two times) of the appliance. Also, the creation of the Passive and Witness node clones must be done halfway through the Resource vCenter HA configuration process.

NOTE: Each VCSA HA node requires its own ESXi host. In the installation process, set the Inventory Layout accordingly to identify and select the ESXi host where the VCSA appliance and HA instances are deployed.

Steps

1. Configuring the management VCSA HA on the Core Management domain:

a) In the VMware vSphere Web Client, log in to the management vCenter (Core_Mgmt_VCSA).
b) Configure the management vCenter HA settings for the Active, Passive, and Witness nodes using the Basic option.

2. Configuring the resource VCSA HA on the Core Management domain:

a) In the VMware vSphere Web Client, log in to the resource vCenter (Core_Res_VCSA).
b) Configure the resource vCenter HA settings for the Active, Passive, and Witness nodes using the Advanced option.

3. Repeat these steps to configure the VCSA HA on the Regional Management domain.


Deployment and configuration of NSX-T

The NSX-T Data Center is the software-defined networking component for the vCloud NFV platform. It allows you to create, delete, and manage software-based virtual networks.

In this deployment, one NSX-T manager VM and two NSX-T manager node VMs are deployed on the Core Management domain and the Regional Management domain.

Prerequisites

• Review the hardware requirements for NSX-T specified in Installing NSX Edge.
• NSX-T 2.5.1 OVA is present in the deployment VM.
• VMware vCenter Server is installed and configured.
• VMware ESXi 6.7 U3 is installed.
• Forward and reverse lookup entries for all NSX-T instances are created manually on the DNS server before deployment.
• For client and user access, consider the following:
  • For ESXi hosts added to the vSphere inventory by name, ensure that forward and reverse name resolution is working; otherwise, the NSX-T Manager cannot resolve the IP addresses.
  • Permissions are provided to add and power on virtual machines.
  • The VMware Client Integration plug-in must be installed.
  • A web browser that is supported for the version of vSphere Web Client that you are using.
  • IPv4 addresses are used; IPv6 is not supported in the previously mentioned version of NSX-T.

Deployment of the NSX-T Manager

The NSX-T Manager provides a Graphical User Interface (GUI) and REST API for the creation, configuration, and monitoring of NSX-T components such as logical switches, logical routers, and firewalls. The NSX-T Manager provides an aggregated system view, and it is the centralized network management component of NSX. The NSX-T Manager is installed as a virtual appliance on any ESXi host in the vCenter environment.

The NSX-T Manager virtual machine is packaged as an OVA file, which allows for the use of the vSphere Web Client to import the NSX-T Manager into the datastore and virtual machine inventory.

One instance of the NSX-T Manager will be deployed on the Core Management and Regional Management domains. When the NSX-T Manager is deployed on an ESXi host, the vSphere high availability (HA) feature can be used to ensure the availability of the NSX-T Manager.

NOTE: The NSX-T Manager virtual machine installation includes VMware Tools. Do not attempt to upgrade or remove VMware Tools on the NSX-T Manager.

Table 30. Naming convention

Core Management domain Regional Management domain

Core_NSX_Manager-1 Reg_NSX_Manager-1

Steps

1. Log in to the management vCenter Server of the Core Management domain using the vSphere Web Client.
2. Deploy the NSX-T manager on the Management cluster of the Core Management domain. For this deployment, the vSAN datastore is selected as the datastore and Core_Mgmt_VM_NW is selected as the network port group.
3. Once the NSX-T manager is deployed, perform the following steps:
   a. Power on the NSX-T Manager VM.
   b. After the NSX-T Manager boots, connect to the NSX-T manager GUI at: https://<IP/FQDN of NSX-T Manager>.
   c. Review the EULA, and if you agree to the terms, check the I understand and accept the terms of the license agreement box and click CONTINUE.
   d. Click Save to finish.

4. Add the license key to the NSX-T manager.


5. Add management VCSA as a compute manager:

a. Navigate to: System > Fabric > Compute Managers > Add.
b. Fill in the information using the following table to create a compute manager.

Table 31. Compute manager

Field Description

Name Name of Management VCSA

Domain name/IP address FQDN for Management VCSA

Type vCenter

Username Administrator user name of Management VCSA

Password Administrator password of Management VCSA

c. Repeat the above steps and add the resource VCSA as a compute manager.
6. Repeat the steps provided in this section to deploy the NSX-T manager on the Regional Management domain.
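Compute manager registration can also be scripted against the NSX-T Manager REST API. A minimal sketch follows; the hostname nsx-mgr.example.com, the credentials, and the vCenter certificate thumbprint are example placeholders:

curl -k -u 'admin:<nsx-password>' \
  -H 'Content-Type: application/json' \
  -X POST https://nsx-mgr.example.com/api/v1/fabric/compute-managers \
  -d '{
        "display_name": "Core_Mgmt_VCSA",
        "server": "core-mgmt-vcsa.example.com",
        "origin_type": "vCenter",
        "credential": {
          "credential_type": "UsernamePasswordLoginCredential",
          "username": "administrator@vsphere.local",
          "password": "<vcsa-password>",
          "thumbprint": "<vcenter-sha256-thumbprint>"
        }
      }'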

Deployment and configuration of NSX-T nodes

Deploy NSX-T nodes using the NSX-T Manager on vSphere ESXi hosts that are managed by a vCenter Server. In this deployment, two NSX-T nodes are deployed on the Core Management domain and two NSX-T nodes are deployed on the Regional Management domain.

Table 32. Naming convention for NSX-T nodes

Core Management domain Regional Management domain

Core_NSX_Manager-2 Reg_NSX_Manager-2

Core_NSX_Manager-3 Reg_NSX_Manager-3

Prerequisites

• The NSX-T Manager is deployed.
• vCenter Server is deployed and configured.
• VMware ESXi 6.7 U3 is installed.
• Register the vSphere ESXi host to the vCenter Server.
• The VMware vSphere ESXi host has the necessary CPU, memory, and hard disk resources to support 12 vCPUs, 48 GB of RAM, and 360 GB of storage.

Steps

1. Log in to the NSX-T Manager GUI of the Core Management domain using administrator credentials at: https://<IP/FQDN of NSX-T Manager>.
2. Navigate to: System > Overview > Add Nodes.
3. Deploy the NSX-T node 1 on the NSX-T manager of the Core Management domain.
4. Deploy the NSX-T node 2 on the NSX-T manager of the Core Management domain.
5. Verify that the NSX-T manager and NSX-T nodes are updated and the NSX-T manager cluster is stable and green.
6. Add a virtual IP to the NSX-T manager cluster.
7. Repeat the steps provided in this section to configure the NSX-T nodes for the Regional Management domain.

Configuring the NSX-T Manager components

Once the NSX-T Manager and NSX-T nodes are deployed successfully, configure the NSX-T manager components such as logical switches, transport zones, transport nodes, uplink profiles, IP pools, and routers.

Table 33. Transport Zone details

Name N-VDS name Host membership criteria Traffic type

Core Management domain

Core-Overlay-TZ nvds-overlay Enhanced Datapath Overlay

Core-Dpdk-TZ nvds-dpdk Enhanced Datapath VLAN


Core-Vlan-TZ nvds-vlan Standard VLAN

Regional Management domain

Reg-Overlay-TZ nvds-overlay Enhanced Datapath Overlay

Reg-Dpdk-TZ nvds-dpdk Enhanced Datapath VLAN

Reg-Vlan-TZ nvds-vlan Standard VLAN

Table 34. Uplink profile details

Name | LAG name | LACP mode | LACP load balancing | Uplinks | LACP timeout | Teaming policy | Active uplinks | Transport VLAN ID | MTU

Core Management domain
Core-edge-overlay-uplink-profile | NA | NA | NA | NA | NA | Failover Order | Uplink1 | 700 | 1600
Core-edge-vm-uplink-profile | NA | NA | NA | NA | NA | Failover Order | Uplink1 | 40 | 1600
Core-edge-vlan-uplink-profile | NA | NA | NA | NA | NA | Failover Order | Uplink1 | 20 | 1600
Core-host-overlay-uplink-profile | LAG1 | Active | Source MAC address | 2 | FAST | LOADBALANCE_SRC_MAC | LAG1 | 700 | 1600
Core-host-dpdk-uplink-profile | LAG1 | Active | Source MAC address | 2 | FAST | LOADBALANCE_SRC_MAC | LAG1 | 40 | 1600

Regional Management domain
Reg-edge-overlay-uplink-profile | NA | NA | NA | NA | NA | Failover Order | Uplink1 | 700 | 1600
Reg-edge-vm-uplink-profile | NA | NA | NA | NA | NA | Failover Order | Uplink1 | 40 | 1600
Reg-edge-vlan-uplink-profile | NA | NA | NA | NA | NA | Failover Order | Uplink1 | 20 | 1600
Reg-host-overlay-uplink-profile | LAG1 | Active | Source MAC address | 2 | FAST | LOADBALANCE_SRC_MAC | LAG1 | 700 | 1600
Reg-host-dpdk-uplink-profile | LAG1 | Active | Source MAC address | 2 | FAST | LOADBALANCE_SRC_MAC | LAG1 | 40 | 1600

Table 35. IP Pool details

Fields Description

Name IP Pool name

IP ranges IP allocation ranges for pool

Gateway Gateway IP address of IP pool

CIDR Network address in a CIDR notation

DNS servers DNS server IP address

DNS suffix Domain name


Table 36. Core Management domain host transport nodes details

Host | Transport zone | N-VDS | Uplink profile | LLDP profile | IP assignment | IP pool | Physical NICs

Core-RES-ESXi-01 | Core-Overlay-TZ | nvds-overlay | Core-host-overlay-uplink-profile | LLDP [Send packet enabled] | Use IP Pool | Core-TEP-IP-Pool | vmnic5-LAG0, vmnic7-LAG1
Core-RES-ESXi-01 | Core-Dpdk-TZ | nvds-dpdk | Core-host-dpdk-uplink-profile | LLDP [Send packet enabled] | NA | NA | vmnic8-LAG0, vmnic9-LAG1
Core-RES-ESXi-02 | Core-Overlay-TZ | nvds-overlay | Core-host-overlay-uplink-profile | LLDP [Send packet enabled] | Use IP Pool | Core-TEP-IP-Pool | vmnic5-LAG0, vmnic7-LAG1
Core-RES-ESXi-02 | Core-Dpdk-TZ | nvds-dpdk | Core-host-dpdk-uplink-profile | LLDP [Send packet enabled] | NA | NA | vmnic8-LAG0, vmnic9-LAG1
Core-RES-ESXi-03 | Core-Overlay-TZ | nvds-overlay | Core-host-overlay-uplink-profile | LLDP [Send packet enabled] | Use IP Pool | Core-TEP-IP-Pool | vmnic5-LAG0, vmnic7-LAG1
Core-RES-ESXi-03 | Core-Dpdk-TZ | nvds-dpdk | Core-host-dpdk-uplink-profile | LLDP [Send packet enabled] | NA | NA | vmnic8-LAG0, vmnic9-LAG1
Core-RES-ESXi-04 | Core-Overlay-TZ | nvds-overlay | Core-host-overlay-uplink-profile | LLDP [Send packet enabled] | Use IP Pool | Core-TEP-IP-Pool | vmnic5-LAG0, vmnic7-LAG1
Core-RES-ESXi-04 | Core-Dpdk-TZ | nvds-dpdk | Core-host-dpdk-uplink-profile | LLDP [Send packet enabled] | NA | NA | vmnic8-LAG0, vmnic9-LAG1

Table 37. Regional Management domain host transport nodes details

Host | Transport zone | N-VDS | Uplink profile | LLDP profile | IP assignment | IP pool | Physical NICs

Reg-RES-ESXi-01 | Reg-Overlay-TZ | nvds-overlay | Reg-host-overlay-uplink-profile | LLDP [Send packet enabled] | Use IP Pool | Reg-TEP-IP-Pool | vmnic5-LAG0, vmnic7-LAG1
Reg-RES-ESXi-01 | Reg-Dpdk-TZ | nvds-dpdk | Reg-host-dpdk-uplink-profile | LLDP [Send packet enabled] | NA | NA | vmnic8-LAG0, vmnic9-LAG1
Reg-RES-ESXi-02 | Reg-Overlay-TZ | nvds-overlay | Reg-host-overlay-uplink-profile | LLDP [Send packet enabled] | Use IP Pool | Reg-TEP-IP-Pool | vmnic5-LAG0, vmnic7-LAG1
Reg-RES-ESXi-02 | Reg-Dpdk-TZ | nvds-dpdk | Reg-host-dpdk-uplink-profile | LLDP [Send packet enabled] | NA | NA | vmnic8-LAG0, vmnic9-LAG1
Reg-RES-ESXi-03 | Reg-Overlay-TZ | nvds-overlay | Reg-host-overlay-uplink-profile | LLDP [Send packet enabled] | Use IP Pool | Reg-TEP-IP-Pool | vmnic5-LAG0, vmnic7-LAG1
Reg-RES-ESXi-03 | Reg-Dpdk-TZ | nvds-dpdk | Reg-host-dpdk-uplink-profile | LLDP [Send packet enabled] | NA | NA | vmnic8-LAG0, vmnic9-LAG1
Reg-RES-ESXi-04 | Reg-Overlay-TZ | nvds-overlay | Reg-host-overlay-uplink-profile | LLDP [Send packet enabled] | Use IP Pool | Reg-TEP-IP-Pool | vmnic5-LAG0, vmnic7-LAG1
Reg-RES-ESXi-04 | Reg-Dpdk-TZ | nvds-dpdk | Reg-host-dpdk-uplink-profile | LLDP [Send packet enabled] | NA | NA | vmnic8-LAG0, vmnic9-LAG1

Prerequisites

• The NSX-T manager is deployed successfully. See Deployment of the NSX-T Manager.
• The NSX-T nodes are deployed successfully. See Deployment and configuration of NSX-T nodes.

Steps

1. Log in to the NSX-T Manager GUI of the Core Management domain using administrator credentials at: https://<IP/FQDN of NSX-T Manager>.

2. Create the transport zones:

a. Navigate to: System > Fabric > Transport Zones > Add.
b. Use the information provided in the Transport Zone details table and create the transport zones.

3. Create the uplink profiles:

a. Navigate to: System > Fabric > Profiles > Uplink Profiles > Add.
b. Use the information provided in the Uplink profile details table and create the uplink profiles.

4. Navigate to: Advanced Networking & Security > Inventory > Groups > IP Pools and create IP pools for the overlay VLAN networks using Table 35.

NOTE: Do not use the same IP pool for Regional Management Domain NSX-T.

5. Navigate to: System > Fabric > Nodes > Host Transport Nodes and use the Core Management domain host transport nodes details table to add all Resource cluster hosts as transport nodes by adding the hosts to the transport zones.

6. Repeat the steps provided in this section and use the information provided in the Transport Zone details, Uplink profile details, IP pool details, and Regional Management domain host transport nodes details tables to configure the NSX-T Manager components for the Regional Management domain.
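Transport zone creation can also be done through the NSX-T 2.5 Manager REST API. A minimal sketch for the Core-Overlay-TZ from the Transport Zone details table follows; the hostname and credentials are example placeholders:

curl -k -u 'admin:<password>' \
  -H 'Content-Type: application/json' \
  -X POST https://nsx-mgr.example.com/api/v1/transport-zones \
  -d '{
        "display_name": "Core-Overlay-TZ",
        "host_switch_name": "nvds-overlay",
        "host_switch_mode": "ENS",
        "transport_type": "OVERLAY"
      }'

The host_switch_mode value ENS corresponds to the Enhanced Datapath host membership criteria in the table; the VLAN transport zones use transport_type VLAN instead.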

NSX-T Edge

NSX-T edge provides connectivity to the internal and external networks. In this deployment, four edge nodes (Edge01, Edge02, Edge03, and Edge04) are created on both the Core Management domain and the Regional Management domain.

Using the four edge nodes, two edge clusters will be created: one NSX-Edge cluster for NSX-T and one NSX-Edge cluster for vCD.

Table 38. Edge nodes

Core Management Domain Regional Management Domain

Core-Edge-01 Reg-Edge-01

Core-Edge-02 Reg-Edge-02

Core-Edge-03 Reg-Edge-03

Core-Edge-04 Reg-Edge-04


Table 39. Core Management domain edge transport nodes details

Edge node | Transport zone | Edge switch name | Uplink profile | IP assignment | IP pool | DPDK Fastpath interface (uplink: connected to)

Core-Edge-01 | Core-Overlay-TZ | nvds-overlay | Core-edge-overlay-uplink-profile | Use IP Pool | Core-TEP-IP-Pool | Uplink1: fp-eth0 (Core_Edge_Overlay_NW)
Core-Edge-01 | Core-Dpdk-TZ | nvds-dpdk | Core-edge-vm-uplink-profile | - | - | Uplink1: fp-eth1 (Core_Edge_External_NW)
Core-Edge-02 | Core-Overlay-TZ | nvds-overlay | Core-edge-overlay-uplink-profile | Use IP Pool | Core-TEP-IP-Pool | Uplink1: fp-eth0 (Core_Edge_Overlay_NW)
Core-Edge-02 | Core-Dpdk-TZ | nvds-dpdk | Core-edge-vm-uplink-profile | - | - | Uplink1: fp-eth1 (Core_Edge_External_NW)
Core-Edge-03 | Core-Overlay-TZ | nvds-overlay | Core-edge-overlay-uplink-profile | Use IP Pool | Core-TEP-IP-Pool | Uplink1: fp-eth0 (Core_Edge_Overlay_NW)
Core-Edge-03 | Core-Vlan-TZ | nvds-vlan | Core-edge-vlan-uplink-profile | - | - | Uplink1: fp-eth1 (Core_Edge_External_NW)
Core-Edge-04 | Core-Overlay-TZ | nvds-overlay | Core-edge-overlay-uplink-profile | Use IP Pool | Core-TEP-IP-Pool | Uplink1: fp-eth0 (Core_Edge_Overlay_NW)
Core-Edge-04 | Core-Vlan-TZ | nvds-vlan | Core-edge-vlan-uplink-profile | - | - | Uplink1: fp-eth1 (Core_Edge_External_NW)

Table 40. Regional Management domain edge transport nodes details

Edge node | Transport zone | Edge switch name | Uplink profile | IP assignment | IP pool | DPDK Fastpath interface (uplink: connected to)

Reg-Edge-01 | Reg-Overlay-TZ | nvds-overlay | Reg-edge-overlay-uplink-profile | Use IP Pool | Reg-TEP-IP-Pool | Uplink1: fp-eth0 (Reg_Edge_Overlay_NW)
Reg-Edge-01 | Reg-Dpdk-TZ | nvds-dpdk | Reg-edge-vm-uplink-profile | - | - | Uplink1: fp-eth1 (Reg_Edge_External_NW)
Reg-Edge-02 | Reg-Overlay-TZ | nvds-overlay | Reg-edge-overlay-uplink-profile | Use IP Pool | Reg-TEP-IP-Pool | Uplink1: fp-eth0 (Reg_Edge_Overlay_NW)
Reg-Edge-02 | Reg-Dpdk-TZ | nvds-dpdk | Reg-edge-vm-uplink-profile | - | - | Uplink1: fp-eth1 (Reg_Edge_External_NW)
Reg-Edge-03 | Reg-Overlay-TZ | nvds-overlay | Reg-edge-overlay-uplink-profile | Use IP Pool | Reg-TEP-IP-Pool | Uplink1: fp-eth0 (Reg_Edge_Overlay_NW)
Reg-Edge-03 | Reg-Vlan-TZ | nvds-vlan | Reg-edge-vlan-uplink-profile | - | - | Uplink1: fp-eth1 (Reg_Edge_External_NW)
Reg-Edge-04 | Reg-Overlay-TZ | nvds-overlay | Reg-edge-overlay-uplink-profile | Use IP Pool | Reg-TEP-IP-Pool | Uplink1: fp-eth0 (Reg_Edge_Overlay_NW)
Reg-Edge-04 | Reg-Vlan-TZ | nvds-vlan | Reg-edge-vlan-uplink-profile | - | - | Uplink1: fp-eth1 (Reg_Edge_External_NW)

Table 41. Edge clusters and their participating edge nodes

Domain Cluster name Participating edge nodes

Core Management domain Core-NSX-edge-Cluster Core-Edge-01 and Core-Edge-02

Core-VCD-edge-Cluster Core-Edge-03 and Core-Edge-04

Regional Management domain Reg-NSX-edge-Cluster Reg-Edge-01 and Reg-Edge-02

Reg-VCD-edge-Cluster Reg-Edge-03 and Reg-Edge-04

Prerequisites

• Review the NSX-T Edge network requirements in Installing NSX Edge.
• The Management and Resource vCenter is registered as a compute manager.
• The vCenter Server vSAN datastore has a minimum of 120 GB of storage or disk space available.
• The vCenter Server cluster or host has access to the specified networks and vSAN datastore in the configuration.
• Transport zones are configured.
• Uplink profiles are configured.
• The IP pool is configured.
• One unused physical NIC is available on the host or NSX-T edge node.

Steps

1. Log in to the NSX-T Manager GUI of the Core management domain using administrator credentials at: https://nsx-manager-ip-address.

2. Navigate to: System > Fabric > Nodes.
3. Deploy the first NSX edge node using the information provided in the Edge nodes and Core Management domain edge transport nodes details tables.
4. Deploy the second NSX edge node using the information provided in the Edge nodes and Core Management domain edge transport nodes details tables.
5. Deploy the third NSX edge node using the information provided in the Edge nodes and Core Management domain edge transport nodes details tables.
6. Deploy the fourth NSX edge node using the information provided in the Edge nodes and Core Management domain edge transport nodes details tables.
7. Create the NSX edge cluster for NSX-T using the information provided in the Edge clusters and their participating edge nodes table.
8. Create the NSX edge cluster for vCD using the information provided in the Edge clusters and their participating edge nodes table.
9. Repeat the steps provided in this section and use the information provided in the Edge nodes, Regional Management domain edge transport nodes details, and Edge clusters and their participating edge nodes tables to deploy NSX edge nodes on the Regional Management domain.

Configure logical switches

Logical switches attach to single or multiple VMs in the network. The VMs connected to a logical switch communicate with each other using the tunnels between hypervisors.

In this deployment, four logical switches are created:

• One ENS VLAN-backed logical switch using the VLAN ID of the External Network
• Two ENS overlay-backed logical switches using the overlay transport zone
• One standard VLAN-backed logical switch using the VLAN ID of the management network

NOTE: QLogic drivers do not support the N-VDS Enhanced data path feature.


Table 42. Logical switches

Logical switch name Transport zone VLAN

Core Management domain

Core-External_LS Core-Dpdk-TZ 40

Core-LS_1 Core-Overlay-TZ -

Core-LS_2 Core-Overlay-TZ -

Core-VCD_LS Core-Vlan-TZ 20

Regional Management domain

Reg-External_LS Reg-Dpdk-TZ 40

Reg-LS_1 Reg-Overlay-TZ -

Reg-LS_2 Reg-Overlay-TZ -

Reg-VCD_LS Reg-Vlan-TZ 20

Prerequisites

• NSX-T Manager is installed and configured.
• The transport zone is configured.
• Fabric nodes are successfully connected to the NSX-T Management Plane Agent (MPA) and NSX-T Local Control Plane (LCP).
• Transport nodes are added to the transport zone.
• ESXi hosts are added to the NSX-T fabric and VMs are hosted on these ESXi hosts.
• The NSX-T Manager cluster is stable.
• DPDK drivers are installed on the Intel NIC.

NOTE: QLogic NIC does not support the N-VDS Enhanced data path feature.

Steps

1. Log in to the NSX-T Manager GUI of the Core Management domain using administrator credentials at: https://nsx-manager-ip-address.
2. Navigate to: Advanced Networking & Security > Networking > Switching.
3. Create one external logical switch using the information provided in the Logical switches table.
4. Create two overlay-backed logical switches using the information provided in the Logical switches table.
5. Create one vCD logical switch using the information provided in the Logical switches table.
6. Repeat the steps provided in this section to create logical switches using the information provided in the Logical switches table for the Regional Management domain.

Deploy the logical router

An NSX-T logical router creates logical routing in the vCloud NFV virtual environment that is completely decoupled from the underlying hardware. In this deployment, Tier-1 and Tier-0 routers are created on both the Core Management domain and the Regional Management domain.

Configure NSX-T Tier-1 router

The Tier-1 logical routers have downlink ports to connect to NSX-T Data Center logical switches and uplink ports to connect to NSX-T Data Center Tier-0 logical routers.

One NSX Edge Tier-1 router is configured in this deployment for the NSX-Edge cluster on both the Core Management and Regional Management domains.

Table 43. NSX Edge Tier 1 router

Core Management domain Regional Management domain

Core-NSX-Tier-1 Reg-NSX-Tier-1


Table 44. Tier-1 logical router port details

Name | Type | Logical switch | Logical switch port | Subnet IP address | Prefix

Core Management domain
Core-RP_1 | Downlink | Core-LS_1 | Attach to new switch port | IP address for router port | 24
Core-RP_2 | Downlink | Core-LS_2 | Attach to new switch port | IP address for router port | 24

Regional Management domain
Reg-RP_1 | Downlink | Reg-LS_1 | Attach to new switch port | IP address for router port | 24
Reg-RP_2 | Downlink | Reg-LS_2 | Attach to new switch port | IP address for router port | 24

Table 45. Route advertisement on routers for Core Management and Regional Management domain

Field Value

Status Enabled

Advertise All connected routes Yes

Advertise All NAT routes Yes

Advertise All Static routes Yes

Advertise All LB VIP routes Yes

Advertise All LB SNAT IP Routes Yes

Advertise All DNS Forwarder Routes No

Prerequisites

• NSX-T Edge is installed and configured.
• The NSX-T Manager cluster is stable.
• The edge cluster is configured.

Steps

1. Log in to the NSX-T Manager GUI of the Core management domain using administrator credentials at: https://nsx-manager-ip-address.

2. Navigate to: Advanced Networking & Security > Networking > Routers.
3. Create the NSX-T Tier-1 router using the information provided in the NSX Edge Tier 1 router table.
4. Create the two router ports using the information provided in the Tier-1 logical router port details table.
5. Configure the route advertisement on the Tier-1 router using the Route advertisement on routers for Core Management and Regional Management domain table.
6. Repeat the steps provided in this section to create and configure the NSX-T Tier-1 router using the information provided in the NSX Edge Tier 1 router, Tier-1 logical router port details, and Route advertisement on routers for Core Management and Regional Management domain tables for the Regional Management domain.

Configure NSX-T Tier-0 router

An NSX-Edge cluster can be backed by multiple Tier-0 logical routers. Tier-0 routers support the BGP dynamic routing protocol. When adding a Tier-0 logical router, it is important to map out the networking topology that you are building. In this deployment, one Tier-0 router will be deployed on the Core Management domain and the Regional Management domain.

Table 46. NSX Edge Tier-0 router

Core Management domain Regional Management domain

Core-NSX-Tier-0 Reg-NSX-Tier-0


Table 47. Tier-0 logical router port group details

Name | Type | MTU | Transport node | Logical switch | Logical switch port | Subnet IP address | Prefix

Core Management domain
Core-External_RP | Uplink | 1600 | Select the NSX Edge node | Core-External_LS | Attach to new switch port | IP address for router port | 24

Regional Management domain
Reg-External_RP | Uplink | 1600 | Select the NSX Edge node | Reg-External_LS | Attach to new switch port | IP address for router port | 24

Table 48. BGP details

Field Core Management domain Regional Management domain

Status Enabled Enabled

ECMP Enabled Enabled

Inter SR Routing Enabled Enabled

Local As 65001 65002

Table 49. Neighbor details

Field Neighbor-1 Neighbor-2

Core Management domain

Neighbor tab

Admin Status Enabled Enabled

IP | The IP address of Leaf1 external VLAN of the Core Management domain | The IP address of Leaf2 external VLAN of the Core Management domain

Remote AS | AS configured on Leaf1 of the Core Management domain | AS configured on Leaf2 of the Core Management domain

Maximum Hop Limit 2 2

Keep alive time (in seconds) 60 60

Hold down time (in seconds) 180 180

Local address tab

Type | Uplink, then from the Available column, move the uplink to the Selected column | Uplink, then from the Available column, move the uplink to the Selected column

Address families tab

Type IPv4_Unicast IPv4_Unicast

State Enabled Enabled

Regional Management domain

Neighbor tab

Admin Status Enabled Enabled

IP | The IP address of Leaf1 external VLAN of the Regional Management domain | The IP address of Leaf2 external VLAN of the Regional Management domain

Remote AS | AS configured on Leaf1 of the Regional Management domain | AS configured on Leaf2 of the Regional Management domain

Maximum Hop Limit 2 2

Keep alive time (in seconds) 60 60

Hold down time (in seconds) 180 180


Local address tab

Type | Uplink, then from the Available column, move the uplink to the Selected column | Uplink, then from the Available column, move the uplink to the Selected column

Address families tab

Type IPv4_Unicast IPv4_Unicast

State Enabled Enabled

Prerequisites

• NSX-T Edge is installed and configured.
• The NSX-T Manager cluster is stable.
• The edge cluster is configured.
• BGP is configured on the Leaf switches to add neighbors.

Steps

1. Log in to the NSX-T Manager GUI of the Core management domain using administrator credentials at: https://nsx-manager-ip-address.

2. Navigate to: Advanced Networking & Security > Networking > Routers.
3. Create the NSX-T Tier-0 router using the NSX Edge Tier-0 router table.
4. Navigate to: Advanced Networking & Security > Networking > Routers > NSX Tier-0 router > Configuration.
5. Connect the NSX Tier-1 router with the NSX-T Tier-0 router.
6. Create the logical router port group using the Tier-0 logical router port group details table.
7. Enable the route redistribution configuration on the Tier-0 router.
8. Create a new Route Redistribution on the NSX-Tier-0 router with all sources checked.
9. Configure BGP on the NSX-Tier-0 router using the BGP details table.
10. Add one BGP neighbor for Leaf1 using the Neighbor details table.
11. Add a second BGP neighbor for Leaf2 using the Neighbor details table.
12. Repeat the steps provided in this section to create and configure the NSX-T Tier-0 router using the information provided in the NSX Edge Tier-0 router, Tier-0 logical router port group details, BGP details, and Neighbor details tables on the Regional Management domain.

Create and configure VCD Tier-1 router

The VCD Tier-1 logical router is a stand-alone router, and it does not have any downlink or connection with the Tier-0 router. It has a service router but no distributed router. The VCD Tier-1 logical router has a centralized service port (CSP) to connect with a load balancer.

Table 50. VCD Tier-1 router

Core Management domain Regional Management domain

Core-VCD-Tier-1 Reg-VCD-Tier-1

Table 51. Tier-1 logical router port group details

Name | Type | MTU | Transport node | Logical switch | Logical switch port | Subnet IP address | Prefix

Core Management domain
Core-VCD_RP | Centralized | 1600 | Select the NSX Edge node | Core-VCD_LS | Attach to new switch port | IP address for router port | 24

Regional Management domain
Reg-VCD_RP | Centralized | 1600 | Select the NSX Edge node | Reg-VCD_LS | Attach to new switch port | IP address for router port | 24


Table 52. Route advertisement on routers for Core Management and Regional Management domain

Field Value

Status Enabled

Advertise All connected routes Yes

Advertise All NAT routes Yes

Advertise All Static routes Yes

Advertise All LB VIP routes Yes

Advertise All LB SNAT IP Routes Yes

Advertise All DNS Forwarder Routes Yes

Prerequisites

• NSX-T Edge is installed and configured.
• The NSX-T Manager cluster is stable.

Steps

1. Log in to the NSX-T Manager GUI of the Core Management domain using administrator credentials at: https://nsx-manager-ip-address.

2. Navigate to: Advanced Networking & Security > Networking > Routers.
3. Create a VCD Tier-1 router using the information provided in the VCD Tier-1 router table.
4. Navigate to: Advanced Networking & Security > Networking > Routers > VCD Tier-1 router > Configuration.
5. Create the logical router port group for the VCD Tier-1 router using the information provided in the Tier-1 logical router port group details table.
6. Configure the route advertisement on the VCD Tier-1 router using the information provided in the Route advertisement on routers for Core Management and Regional Management domain table.
7. Repeat the steps provided in this section to create and configure the VCD Tier-1 router using the information provided in the VCD Tier-1 router, Tier-1 logical router port group details, and Route advertisement on routers for Core Management and Regional Management domain tables for the Regional Management domain.

Create and configure load balancer

The NSX-T logical load balancer provides high-availability services and distributes the network traffic load between the servers. Only the Tier-1 router supports the NSX-T load balancer. One load balancer can be linked with only one Tier-1 logical router.

Table 53. Load balancer

Core Management domain Regional Management domain

Core-VCD_LB Reg-VCD_LB

Table 54. Health monitor

Fields Core Management domain Regional Management domain

Name Core-TCP_VCD Reg-TCP_VCD

Health Check Protocol LbTcpMonitor LbTcpMonitor

Port 443 443

Monitoring Interval (sec) Keep the default Keep the default

Fall Count Keep the default Keep the default

Rise Count Keep the default Keep the default

Timeout Period (sec) Keep the default Keep the default

Table 55. Server pool

Fields Core Management domain Regional Management domain

General properties


Name Core-VCD_IP Reg-VCD_IP

Load balancing Algorithm ROUND_ROBIN ROUND_ROBIN

TCP Multiplexing Keep the default Keep the default

Maximum Multiplexing Connections Keep the default Keep the default

SNAT translations

Translation Mode Auto Map Auto Map

Port Overload Keep the default Keep the default

Overload Factor Keep the default Keep the default

Pool members

Membership Types | Static | Static

Pool Members:
• Core Management domain: add all Core Management domain vCD cells as pool members, each with State set to Enabled: Core-VCD-Cell-1 (enter the Core-VCD-Cell-1 IP address), Core-VCD-Cell-2 (enter the Core-VCD-Cell-2 IP address), and Core-VCD-Cell-3 (enter the Core-VCD-Cell-3 IP address).
• Regional Management domain: add all Regional Management domain vCD cells as pool members, each with State set to Enabled: Reg-VCD-Cell-1 (enter the Reg-VCD-Cell-1 IP address), Reg-VCD-Cell-2 (enter the Reg-VCD-Cell-2 IP address), and Reg-VCD-Cell-3 (enter the Reg-VCD-Cell-3 IP address).

Health monitors

Minimum Active Members 1 1

Active Health Monitor Core-TCP_VCD Reg-TCP_VCD

Prerequisites

• The edge cluster is configured.
• A Tier-1 router for vCloud Director is created. See Create and configure VCD Tier-1 router.
• All the vCloud Director cells are deployed. See VMware vCloud Director deployment and configuration.

Steps

1. Log in to the NSX-T Manager GUI of the Core management domain using administrator credentials at: https://nsx-manager-ip-address.

2. Navigate to: Advanced Networking & Security > Networking > Routers > VCD Tier-1 router > Configuration.
3. Create and configure a load balancer for the VCD Tier-1 router using the information provided in the Load balancer table.
4. Attach the load balancer to the VCD Tier-1 router.
5. Create a health monitor for the load balancer using the information provided in the Health monitor table.
6. Add a server pool for the load balancer using the information provided in the Server pool table.

NOTE: To create the server pool, all the vCloud Director cells must be deployed. See VMware vCloud Director deployment and configuration.

7. Create a virtual server, then attach the virtual server to the load balancer.
8. Repeat the steps provided in this section to create and configure the load balancer using the information provided in the Load balancer, Health monitor, and Server pool tables for the Regional Management domain.
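The resulting configuration can be spot-checked against the NSX-T Manager REST API. A minimal sketch, with example hostname and credentials:

curl -k -u 'admin:<password>' https://nsx-mgr.example.com/api/v1/loadbalancer/services   # the Core-VCD_LB service should be listed
curl -k -u 'admin:<password>' https://nsx-mgr.example.com/api/v1/loadbalancer/pools      # the Core-VCD_IP pool and its three members should be listed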


VMware vCloud Director deployment and configuration

VMware vCloud Director (vCD) is a VIM component and works on top of other VIM components, the vCenter Server and the NSX-T Manager. The vCloud Director is deployed on the Management pod.

vCloud Director connects with:

• The vCenter Server to manage the workloads
• The NSX Manager associated with tenant networking

The vCloud Director server group is created by deploying three vCD instances as vCD cells: one vCD cell is used as the Primary cell, and the remaining two are used as Standby cells. These cells are attached to the NSX-T load balancer for high availability. An NFS server instance is created to provide temporary storage to upload or download the catalog items that are published externally.

Installation of the NFS server

On the Core Management and Regional Management domains, deploy and configure an NFS server accessible to all the servers of the vCloud Director server group.

Prerequisites

• A virtual machine running CentOS 8 with the following configuration:
  • 8 GB memory
  • 1 TB disk space
  • vCPU: 1
  • vNIC: 1
• The Management pod is configured with internet connectivity.
• Forward and reverse lookup entries for all the vCloud Director cell instances are created manually on the DNS server before deployment.

Steps

1. Log in as the root user to the Core Management CentOS 8 VM and open a terminal.
2. Install the nfs-utils package using the following command:

yum install nfs-utils

3. Start the NFS-related services using the following command:

systemctl start nfs-server

4. Make an export directory using the following command:

mkdir /opt/vcd-share

5. Append the following line to the /etc/exports file:

/opt/vcd-share *(rw,sync,no_root_squash)

6. Restart the NFS server so that the new export takes effect, using the following command:

systemctl restart nfs-server

7. Stop the firewall, then turn it off using the following commands:

systemctl stop firewalld
chkconfig firewalld off


NOTE: Verify that the NFS server is correctly configured by running the following command. This path is required during the deployment of the vCD cells to mount the NFS server:

# showmount -e <NFS_IP>
Export list for <NFS_IP>:
/opt/vcd-share *

8. Repeat the steps provided in this section to install the NFS server in the Regional Management domain.
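For repeatable installs, the commands above can be collected into one script. This is a minimal sketch, assuming CentOS 8, the /opt/vcd-share export path used in this guide, and that disabling firewalld is acceptable in your environment:

#!/bin/bash
# Sketch: one-shot NFS server setup for the vCD server group (run as root).
set -e
yum install -y nfs-utils
systemctl start nfs-server
mkdir -p /opt/vcd-share
echo '/opt/vcd-share *(rw,sync,no_root_squash)' >> /etc/exports
systemctl restart nfs-server
systemctl stop firewalld
chkconfig firewalld off
# Verify that the export is visible:
showmount -e localhost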

Installation and configuration of the vCloud Director

The vCloud Director server group consists of one or more vCloud Director cells, which are created by deploying vCloud Director appliances. Each server in the group runs a collection of services called a vCloud Director cell. The cells share a common database and connect with the vCenter Server, the ESXi hosts, and the NSX-T Manager.

In this deployment, three vCD cells are created on both the Core Management vCenter Server and the Regional Management vCenter Server. Cell01 is used as the primary cell, and Cell02 and Cell03 are used as the standby cells. The deployment size for both the primary and standby cells is the same.

For example, it is possible to use one primary-small and two standby-small cells, or one primary-large and two standby-large cells for an HA cluster. For this deployment, one primary-large and two standby-large cells are used.

Table 56. Naming convention

vCD cells Core Management domain Regional Management domain

Cell 1 Core-VCD-Cell01 Reg-VCD-Cell01

Cell 2 Core-VCD-Cell02 Reg-VCD-Cell02

Cell 3 Core-VCD-Cell03 Reg-VCD-Cell03

Prerequisites

• The NFS server is correctly configured.
• The vCD 9.7.0.3 OVA is present in the deployment VM.
• The vCenter Server is up and running.
• The AD-DNS and the NTP server are up and running.
• Forward and reverse lookup entries for all the vCD cell instances are created manually on the DNS server before deployment.
• The DRS Automation option on the vCenter Server cluster used for vCD deployment is set to Fully Automated.

NOTE: See the Enable vSphere DRS section for information about setting the DRS automation.

Steps

1. In a web browser, log in to the Management vCenter Server of the Core Management domain using the vSphere Web Client.
2. Deploy the Core-VCD-Cell01 in the Management cluster of the Core Management domain. For this deployment, the vSAN datastore is selected as the datastore and Core_Mgmt_VM_NW as the network for eth0 and eth1.
3. Assign a license to Core-VCD-Cell01.
4. Deploy the Core-VCD-Cell02 in the Management cluster of the Core Management domain, using the same datastore and network selections.
5. Deploy the Core-VCD-Cell03 in the Management cluster of the Core Management domain, using the same datastore and network selections.
6. Repeat the steps provided in this section to deploy the vCloud Director cells in the Regional Management domain.

Configure the vCloud Director to use vCenter Single Sign On

After the deployment of vCloud Director, configure Single Sign On (SSO) using the Management vCenter Server on both the Core and Regional vCloud Director.

Steps


1. Log in to the vCloud Director Cloud page using administrator credentials at: https://<IP-VCD>/cloud.
2. Click the Administration tab and click Federation in the left pane.
3. Click Register.
4. Enter the Management vCenter Lookup Service URL.
5. Enter the user name of the vSphere SSO user with administrator privileges.
6. Enter the vSphere SSO password for the user name entered above.
7. Click OK.
8. Select Use vSphere Single Sign-On and click Apply.

NOTE: System administrators are asked for vCenter SSO credentials to log in to the vCloud Director.

9. Repeat the steps provided in this section to configure SSO for the Regional Management domain.

Integration of the vCloud Director with other components

VMware vCloud Director is integrated with:

• The Resource vCenter Server to manage the workloads
• The NSX Manager associated with tenant networking

Prerequisites

• The vCenter Server is up and running.
• The NSX-T Manager is up and running. See Deployment and configuration of NSX-T.
• All the vCloud Director cells are deployed.

Steps

1. In a web browser, log in to the Core-VCD-Cell01 of the Core Management domain at: https://<<VCD-Cell-01-fqdn/IP>>/Provider.
2. Add vCenter to the vCloud Director:

a. Navigate to: Main Menu > vSphere Resources > vCenters > Add.
b. Provide the required information using the following table:

Table 57. vCenter details

Field Description

Name Resource vCenter name

URL https://<Resource-vCenter-FQDN/IP>

Username Administrator username of resource vCenter

Password Administrator password of resource vCenter

c. Select the vSphere Web Client URL radio button, and then enter the FQDN of the Resource vCenter. Click Next.
d. On the NSX Manager page, turn off the Configure Settings toggle, and click Next.
e. On the Ready to Complete page, review the provided information and click Finish.

3. Add NSX-T to the vCloud Director:

a. Navigate to: Main Menu > vSphere Resources > NSX-T Managers > Add.
b. Provide the required information using the following table:

Table 58. NSX-T details

Field Description

Name NSX-T Manager name

URL https://<NSX-Manager-IP>

Username Administrator user name of NSX-T Manager

Password Administrator password of NSX-T Manager

c. Save the provided information.
4. Repeat the steps provided in this section to integrate vCloud Director with other components in the Regional Management domain.


Working with vCloud Director APIs to create Provider VDC

Repeat the steps provided in this section on both the Core and Regional vCloud Director.

Prerequisites

• Download and install Postman on the deployment VM. For more information, see the Postman Documentation.
• Open the Postman application, go to Settings, and turn off SSL Certificate Verification.

Creating a session token for vCloud Director

Generate a vCD session token to integrate vCD with the vCenter Server and the NSX-T Manager. For more information, refer to the vCloud API Programming Guide for Service Providers document from VMware.

Steps

1. On the deployment VM, open the Postman application.
2. POST a request to the vCD login URL and enter the vCD administrator credentials into the Authorization header of the request:

url = https://<IP>/api/sessions
Method = POST
Authorization Type = Basic Auth
HEADERS
Key        VALUE
Accept     application/*+xml;version=32.0;

NOTE: The values provided in this section are only for reference purposes. Update the values as per your requirement.

Update the value for the above parameter using the following table:

Table 59. Parameter description

Parameter Description

IP Enter the IP address for vCloud Director.

A response status of 200 OK means that the session token was generated successfully.
3. In the headers section of the response, note the value of the x-vcloud-authorization field. This value is used as the session token in all other API calls going forward.
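The same request can be issued with curl instead of Postman. A minimal sketch, assuming a provider-level account of the form administrator@system and a self-signed certificate (hence -k):

# -i prints the response headers; copy the x-vcloud-authorization value
# from them for use as the session token in the requests that follow.
curl -ki -X POST 'https://<IP>/api/sessions' \
  -u 'administrator@system:<password>' \
  -H 'Accept: application/*+xml;version=32.0'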

Retrieve VIM server details

Send a GET request to vCD to retrieve the VIM server details. For more information, refer to the vCloud API Programming Guide for Service Providers document from VMware.

Steps

1. On the deployment VM, open the Postman application.
2. Paste the following parameters in the Postman Headers:

url = https://<IP>/api/admin/extension/vimServerReferences
Method = GET
HEADERS
x-vcloud-authorization    Use the value fetched from the session API.
Accept                    application/*+xml;version=32.0;

3. Update the values for the parameters above using the following table:


Table 60. Parameter descriptions

Parameter Description

IP Enter the IP address for vCloud Director.

x-vcloud-authorization Enter the session ID received from the Creating a session token for vCloud Director section.

NOTE: The values provided in this section are only for reference purposes. Update the values as per your requirement.

The response status 200 OK displays.
4. From the received response, make note of the href, name, and id of the vCenter Server, as shown in the following example. This information is required when creating the provider VDC.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<vmext:VMWVimServerReferences xmlns="http://www.vmware.com/vcloud/v1.5"
    xmlns:vmext="http://www.vmware.com/vcloud/extension/v1.5"
    xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1"
    xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData"
    xmlns:common="http://schemas.dmtf.org/wbem/wscim/1/common"
    xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData"
    xmlns:vmw="http://www.vmware.com/schema/ovf"
    xmlns:ovfenv="http://schemas.dmtf.org/ovf/environment/1"
    xmlns:ns9="http://www.vmware.com/vcloud/versions"
    type="application/vnd.vmware.admin.vmwVimServerReferences+xml">
  <Link rel="up" href="https://192.168.20.122/api/admin/extension"
      type="application/vnd.vmware.admin.vmwExtension+xml"/>
  <vmext:VimServerReference
      href="https://192.168.20.122/api/admin/extension/vimServer/3cd1ac67-88de-4c4b-8e1a-d171e322d8d1"
      id="urn:vcloud:vimserver:3cd1ac67-88de-4c4b-8e1a-d171e322d8d1"
      name="Core_Res_VCSA"
      type="application/vnd.vmware.admin.vmwvirtualcenter+xml"/>
</vmext:VMWVimServerReferences>
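For reference, a curl sketch of the same GET request. The identical pattern, with only the URL changed, applies to the other GET requests in this chapter (resource pool list, NSX-T Manager references, and local association data):

# <session-token> is the x-vcloud-authorization value from the sessions API.
curl -ks 'https://<IP>/api/admin/extension/vimServerReferences' \
  -H 'x-vcloud-authorization: <session-token>' \
  -H 'Accept: application/*+xml;version=32.0'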

Update VIM server

Create a PUT request in Postman to update the VIM server. For more information, refer to the vCloud API Programming Guide for Service Providers document from VMware.

Steps

1. On the deployment server, open the Postman application.
2. Paste the following parameters in the Postman Headers:

url = https://<IP>/api/admin/extension/vimServer/{ID}
Method = PUT
Authorization Type = Basic Auth
HEADERS
Key                       VALUES
Accept                    application/*+xml;version=32.0;
Content-Type              application/vnd.vmware.admin.vmwvirtualcenter+xml;version=32.0;
Content-Length            596
x-vcloud-authorization    Use the value fetched from the sessions API.

Update the values for the above parameters using the following table:

Table 61. Parameter descriptions

Parameter Description

IP Enter the IP address for vCloud Director.

ID Enter the VIM Server ID received from the response of Retrieve VIM server details.

Content-Length Enter the total number of characters available in the Body section.

x-vcloud-authorization Enter the session ID received from the Creating a session token for vCloud Director section.


NOTE: The values provided in this section are only for reference purposes. Update the values as per your requirement.

3. In the Body tab, select the raw radio button, and paste the following parameters to create a PUT request to register the VIM server:

<?xml version="1.0" encoding="UTF-8"?>
<vmext:VimServer xmlns:vcloud="http://www.vmware.com/vcloud/v1.5"
    xmlns:vmext="http://www.vmware.com/vcloud/extension/v1.5"
    name="Core_Res_VCSA">
  <vmext:Username>[email protected]</vmext:Username>
  <vmext:Password>*********</vmext:Password>
  <vmext:Url>https://FQDN:443</vmext:Url>
  <vmext:IsEnabled>true</vmext:IsEnabled>
  <vmext:IsConnected>true</vmext:IsConnected>
  <vmext:UseVsphereService>true</vmext:UseVsphereService>
</vmext:VimServer>

Update the values for the above parameters using the following table:

Table 62. Parameter descriptions

Parameter Description

name Enter the resource vCenter server name.

Username Enter the resource vCenter server administrator user name.

Password Enter the password of provided user name.

URL https://<Resource-vCenter-FQDN/IP>.

NOTE: Any change in the Body section requires an update to the Content-Length in the Header section. Depending on the number of characters added or deleted in the Body section, update the Content-Length in the Header section by the same amount.

4. Keep the remaining parameters at their defaults and send the PUT request.

NOTE: A response status of 202 Accepted means that the vCenter Server was updated successfully.
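A curl sketch of the same update, with the XML body saved to a local file (vimserver.xml is a hypothetical file name). curl computes the Content-Length header from the file automatically, so the manual character count that Postman requires is not needed here:

curl -ks -X PUT 'https://<IP>/api/admin/extension/vimServer/<ID>' \
  -H 'x-vcloud-authorization: <session-token>' \
  -H 'Accept: application/*+xml;version=32.0' \
  -H 'Content-Type: application/vnd.vmware.admin.vmwvirtualcenter+xml;version=32.0' \
  --data-binary @vimserver.xml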

Retrieve the list of available resource pools

Retrieve the list of resource pools available on the vCenter Server to create a provider VDC. To retrieve the list, create a GET request. For more information, refer to the vCloud API Programming Guide for Service Providers document from VMware.

Steps

1. On the deployment server, open the Postman application.
2. Paste the following parameters to create a GET request to retrieve the list of available resource pools:

url = https://<IP>/api/admin/extension/vimServer/<ID>/resourcePoolList
Method = GET
HEADERS
Key                       VALUES
Accept                    application/*+xml;version=32.0;
Content-Type              application/vnd.vmware.admin.resourcePoolList+xml;
x-vcloud-authorization    Use the value fetched from the sessions API.

3. Update the values for the parameters above using the following table:

Table 63. Parameter descriptions

Parameter Description

IP Enter the IP address for the vCloud Director.

ID Enter the VIM Server ID received from the response of Retrieve VIM server details.

x-vcloud-authorization Enter the session ID received from the Creating a session token for vCloud Director section.


NOTE: The values provided in this section are only for reference purposes. Update the values as per your requirement.

A response status of 200 OK displays, along with the list of available resource pools.

Retrieve NSX-T Manager instance details

Send a GET request to vCD to retrieve the NSX-T Manager details. For more information, refer to the vCloud API Programming Guide for Service Providers document from VMware.

Steps

1. On the deployment VM, open the Postman application.
2. Paste the following parameters in the Postman Headers:

url = https://<IP>/api/admin/extension/nsxtManagers
Method = GET
HEADERS
x-vcloud-authorization    Use the value fetched from the session API.
Accept                    application/*+xml;version=32.0;

3. Update the required values using the following table:

Table 64. Parameter descriptions

Parameter Description

IP Enter the IP address for the vCloud Director.

x-vcloud-authorization Enter the session ID received from the Creating a session token for vCloud Director section.

NOTE: The values provided in this section are only for reference purposes. Update the values as per your requirement.

The response status 200 OK displays.
4. From the received response, make note of the name, href, and id of the NSX-T Manager, as shown in the following example. This information is required when creating the provider VDC:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<vmext:NsxTManagers xmlns="http://www.vmware.com/vcloud/v1.5"
    xmlns:vmext="http://www.vmware.com/vcloud/extension/v1.5"
    xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1"
    xmlns:vssd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData"
    xmlns:common="http://schemas.dmtf.org/wbem/wscim/1/common"
    xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData"
    xmlns:vmw="http://www.vmware.com/schema/ovf"
    xmlns:ovfenv="http://schemas.dmtf.org/ovf/environment/1"
    xmlns:ns9="http://www.vmware.com/vcloud/versions">
  <Link rel="add" href="https://192.168.20.122/api/admin/extension/nsxtManagers"
      type="application/vnd.vmware.admin.nsxTmanager+xml"/>
  <Link rel="up" href="https://192.168.20.122/api/admin/extension"
      type="application/vnd.vmware.admin.vmwExtension+xml"/>
  <vmext:NsxTManager name="Core_NSX_Manager-1"
      id="urn:vcloud:nsxtmanager:c9ae8923-1b4e-49f6-beff-afff65518c8d"
      href="https://192.168.20.122/api/admin/extension/nsxtManagers/c9ae8923-1b4e-49f6-beff-afff65518c8d"
      type="application/vnd.vmware.admin.nsxTmanager+xml">
    <Description>NSX-T Manager</Description>
    <vmext:Username>admin</vmext:Username>
    <vmext:Url>https://192.168.20.104</vmext:Url>
  </vmext:NsxTManager>
</vmext:NsxTManagers>

Create a provider VDC

A provider VDC is a collection of compute, memory, and storage resources from a vCenter Server instance. For network resources, a provider VDC uses NSX-T Data Center. A provider VDC provides resources to organization VDCs. For more information, refer to the vCloud API Programming Guide for Service Providers document from VMware.

Prerequisite


• A vCenter Server instance is available to provide a resource pool and storage information to the provider VDC.

Steps

1. On the deployment VM, open the Postman application.
2. Paste the following parameters in the Postman Headers:

url = https://<IP>/api/admin/extension/providervdcsparams
Method = POST
Authorization Type = Basic Auth
HEADERS
Key                       VALUES
Accept                    application/*+xml;version=32.0;
Content-Type              application/vnd.vmware.admin.createProviderVdcParams+xml;
x-vcloud-authorization    Use the value fetched from the sessions API.

3. Update the values for the above parameters using the following table:

Table 65. Parameter descriptions

Parameter Description

IP Enter the IP address for the vCloud Director.

x-vcloud-authorization Enter the session ID received from the Creating a session token for vCloud Director section.

NOTE: The values provided in this section are only for reference purposes. Update the values as per your requirement.

4. In the Body tab, select the raw radio button and paste the following parameters to create a POST request to create the provider VDC:

<?xml version="1.0" encoding="UTF-8"?>
<vmext:VMWProviderVdcParams
    xmlns="http://www.vmware.com/vcloud/v1.5"
    xmlns:vmext="http://www.vmware.com/vcloud/extension/v1.5"
    name="CorePvdc1">
  <vmext:ResourcePoolRefs>
    <vmext:VimObjectRef>
      <vmext:VimServerRef href="https://192.168.20.122/api/admin/extension/vimServer/3cd1ac67-88de-4c4b-8e1a-d171e322d8d1"/>
      <vmext:MoRef>resgroup-10</vmext:MoRef>
      <vmext:VimObjectType>RESOURCE_POOL</vmext:VimObjectType>
    </vmext:VimObjectRef>
  </vmext:ResourcePoolRefs>
  <vmext:VimServer
      href="https://192.168.20.122/api/admin/extension/vimServer/3cd1ac67-88de-4c4b-8e1a-d171e322d8d1"
      id="urn:vcloud:vimserver:3cd1ac67-88de-4c4b-8e1a-d171e322d8d1"
      name="Core_Res_VCSA"
      type="application/vnd.vmware.admin.vmwvirtualcenter+xml"/>
  <vmext:NsxTManagerReference
      href="https://192.168.20.122/api/admin/extension/nsxtManagers/c9ae8923-1b4e-49f6-beff-afff65518c8d"
      id="urn:vcloud:nsxtmanager:c9ae8923-1b4e-49f6-beff-afff65518c8d"
      name="Core_NSX_Manager-1"
      type="application/vnd.vmware.admin.nsxTmanager+xml"/>
  <vmext:HighestSupportedHardwareVersion>vmx-14</vmext:HighestSupportedHardwareVersion>
  <vmext:IsEnabled>true</vmext:IsEnabled>
  <vmext:StorageProfile>*</vmext:StorageProfile>
</vmext:VMWProviderVdcParams>

5. Update the values for the above parameters using the following table:

Table 66. Parameter details

Parameter Details

name Enter the name of the provider VDC.


For ResourcePoolRefs

VimServerRef href Provide the VimServerRef hyperlink received from the Retrieve VIM server details response.

MoRef Provide the MoRef value received from the Retrieve the list of available resource pools response.

VimObjectType Provide the VimObjectType value received from the Retrieve the list of available resource pools response.

For VimServer

VimServerRef href Provide the VimServerRef hyperlink received from the Retrieve VIM server details response.

id Provide the resource vCenter Server ID received from the Retrieve VIM server details response.

name Enter the resource vCenter Server name received from the Retrieve VIM server details response.

For NsxTManagerReference

name Enter the NSX-T Manager name received from the Retrieve NSX-T Manager instance details response.

href Provide the NSX-T Manager link received from the Retrieve NSX-T Manager instance details response.

id Provide the NSX-T Manager ID received from the Retrieve NSX-T Manager instance details response.

HighestSupportedHardwareVersion Enter the supported VMX hardware version.

6. Keep the remaining parameters at their defaults and POST the request.

A response status of 201 means that the provider VDC was created successfully.

NOTE: Repeat the steps provided in Working with vCloud Director APIs to create Provider VDC on both the Core Management domain and the Regional Management domain.
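For reference, a curl sketch of the provider VDC creation request, with the XML body from step 4 saved to a local file (pvdc.xml is a hypothetical file name):

curl -ks -X POST 'https://<IP>/api/admin/extension/providervdcsparams' \
  -H 'x-vcloud-authorization: <session-token>' \
  -H 'Accept: application/*+xml;version=32.0' \
  -H 'Content-Type: application/vnd.vmware.admin.createProviderVdcParams+xml' \
  --data-binary @pvdc.xml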

Working with VMware vCloud Director

This section provides information to create organizations, organization VDCs, catalog items, vApp templates, vApps, and virtual machines for vApp templates and vApps. It also provides information to add networks to organization VDCs, vApps, and virtual machines.

Prerequisites

• The NSX-T Manager cluster is stable. See Deployment and configuration of NSX-T.
• VMware vCloud Director cells are up and running.
• A provider VDC is created. See Working with vCloud Director APIs to create Provider VDC.

Steps

1. Log in to the vCloud Director of the Core Management domain at: https://<VCD-FQDN-or-IP>/provider.
2. Create organizations:

a. Navigate to: Organization > Add.
b. Enter the required information to create the organization.

3. Create a new organization VDC:

a. Navigate to: Organization VDC > New.
b. Enter the required information to create the organization VDC.

4. Create the catalogs:

a. On the Organization VDC page, click the name of the organization VDC to view the details, then click Open in Tenant Portal.
b. On the Tenant Portal, navigate to: Main Menu > Libraries > Catalogs > New.
c. Enter the required information to create the catalog.

5. Create the vApp templates:


NOTE: Verify that you have all the OVF/OVA files for the vApp Template, and that these files do not have any network adapter attached to them while creating the OVF/OVA files.

a. On the Tenant Portal, navigate to: Main Menu > Libraries > vApp Templates > Add.
b. Provide the required information to create the vApp templates.

6. Create the vApps:

a. On the Tenant Portal, navigate to: Main Menu > Datacenters > vApp > New vApp.
b. Provide the required information to create the vApp.

7. Import networks from NSX-T to the organization VDC:

a. On the Tenant Portal, navigate to: Main Menu > Datacenters > Networks > Import.
b. Provide the required information to import the network to the organization VDC.

8. Add an organization VDC network to a vApp:

a. On the Tenant Portal, navigate to: Main Menu > Datacenters > vApp.
b. Select the desired vApp, then from the Actions drop-down list, select Add Network.
c. Select the Org VDC Network radio button, then select the desired network to add to the vApp.

9. Add a network to a virtual machine:

a. On the Tenant Portal, navigate to: Main Menu > Datacenters > Virtual Machines.
b. Select the desired virtual machine to view the details.
c. In the Hardware section, in NICs, click Add.
d. Provide the required information to add the network to the virtual machine.

10. Add a virtual machine to a vApp:

a. On the Tenant Portal, navigate to: Main Menu > Datacenters > vApps.
b. Select the desired vApp, then from the Actions drop-down list, select Add VM.
c. Enter the required information to add a VM.

11. Repeat the steps provided in this section on the vCloud Director of the Regional Management domain.

Multisite VCD Configuration

The vCloud Director Multisite feature enables users to manage and monitor multiple vCloud Director instances as a single entity. When you associate the vCloud Directors of the Core Management and Regional Management domains, the administration of these two vCloud Directors is enabled as a single entity. This also enables organizations at these vCloud Directors to form associations with each other.

When an organization is a member of an association, organization users can use the vCloud Director Tenant Portal to access organization assets at any member site, although each member organization and its assets are local to the site it occupies.

The multisite configuration of vCloud Director is a two-step process:

• Provider site pairing
• Organization site pairing

Provider site pairing

For the provider site pairing of the Core and Regional vCloud Director, the site association data is retrieved from each vCloud Director, and the retrieved data is then uploaded to the other vCloud Director. That is, the site association data of the Regional vCloud Director is added to the Core vCloud Director, and the site association data of the Core vCloud Director is added to the Regional vCloud Director. For more information, refer to the vCloud API Programming Guide for Service Providers document from VMware.

Prerequisites

• The provider VDC (PVDC) on the Regional and Core vCloud Director is created.
• An organization on the Regional and Core vCloud Director is created.
• An organization VDC on the Regional and Core vCloud Director is created.

Steps

1. On the deployment VM, open the Postman application.
2. Generate a session token for the Core and Regional vCloud Director using Creating a session token for vCloud Director.


3. Paste the following parameters in Postman to create a GET request for the local association data of the Core Management domain vCloud Director:

url = https://<IP_CoreVCD>/api/site/associations/localAssociationData
Method = GET
HEADERS
x-vcloud-authorization    Use the value fetched from the session API.
Accept                    application/*+xml;version=32.0;

4. Update the values for the parameters above using the following table:

Table 67. Parameter descriptions

Parameter Description

IP_CoreVCD Enter the IP address for the Core vCloud Director.

x-vcloud-authorization Enter the session ID received from the Creating a session token for vCloud Director section.

5. Paste the following parameters in Postman to create a GET request for the local association data of the Regional Management domain vCloud Director:

url = https://<IP_RegionalVCD>/api/site/associations/localAssociationData
Method = GET
HEADERS
x-vcloud-authorization    Use the value fetched from the session API.
Accept                    application/*+xml;version=32.0;

6. Update the values for the parameters above using the following table:

Table 68. Parameter descriptions

Parameter Description

IP_RegionalVCD Enter the IP address for the Regional vCloud Director.

x-vcloud-authorization Enter the session ID received from the Creating a session token for vCloud Director section.

7. Site Association on Regional vCloud Director:

a. Paste the following parameters in the Postman to create a POST request:

url = https://<IP_RegionalVCD>/api/site/associations
Method = POST
Authorization = No Authorization
HEADERS
Content-Type              application/vnd.vmware.admin.siteAssociation+xml
Accept                    application/*+xml;version=32.0;
x-vcloud-authorization    Use the value fetched from the session API.

b. Update the values for the parameters above using the following table:

Table 69. Parameter descriptions

Parameter Description

IP_RegionalVCD Enter the IP address for the Regional vCloud Director.

x-vcloud-authorization Enter the session ID received from the Creating a session token for vCloud Director section.

c. In the Body tab, select the raw radio button and paste the response of Core vCD local association data retrieved from step 3.

Response of:

https://<IP_CoreVCD>/api/site/associations/localAssociationData

8. Site Association on Core vCloud Director:

a. Paste the following parameters in the Postman to create a POST request:

url = https://<IP_CoreVCD>/api/site/associations
Method = POST
Authorization = No Authorization
HEADERS
Content-Type              application/vnd.vmware.admin.siteAssociation+xml
Accept                    application/*+xml;version=32.0;
x-vcloud-authorization    Use the value fetched from the session API.

b. Update the values for the parameters above using the following table:

Table 70. Parameter descriptions

Parameter Description

IP_CoreVCD Enter the IP address for the Core vCloud Director.

x-vcloud-authorization Enter the session ID received from the Creating a session token for vCloud Director section.

c. In the Body tab, select the raw radio button and paste the response of the Regional vCloud Director local association data retrieved in step 5.

Response of:

https://<IP_RegionalVCD>/api/site/associations/localAssociationData
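For reference, a curl sketch of one direction of the pairing exchange (core_assoc.xml is a hypothetical file name; the session tokens are fetched as in Creating a session token for vCloud Director):

# Fetch the Core vCD local association data...
curl -ks 'https://<IP_CoreVCD>/api/site/associations/localAssociationData' \
  -H 'x-vcloud-authorization: <core-session-token>' \
  -H 'Accept: application/*+xml;version=32.0' > core_assoc.xml

# ...and POST it to the Regional vCD. Repeat in the opposite direction
# to complete the association.
curl -ks -X POST 'https://<IP_RegionalVCD>/api/site/associations' \
  -H 'x-vcloud-authorization: <regional-session-token>' \
  -H 'Accept: application/*+xml;version=32.0' \
  -H 'Content-Type: application/vnd.vmware.admin.siteAssociation+xml' \
  --data-binary @core_assoc.xml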

Organization site pairing

After site association is complete, organization administrators can associate their organizations. For more information, refer to the vCloud API Programming Guide for Service Providers document from VMware.

Steps

1. Log in to the Regional vCloud Director tenant portal using administrator credentials at: https://<IP_RegionalVCD>/tenant/<organization name>.

2. Navigate to: Main Menu > Administrator > Settings > Multisite.
3. On the Multisite page, click Export Local Association Data to download the association data XML (assoc_data.xml) file.
4. Log in to the Core vCloud Director tenant portal using administrator credentials at: https://<IP_CoreVCD>/tenant/<organization name>.
5. Navigate to: Main Menu > Administrator > Settings > Multisite.
6. On the Multisite page, click Create New Organization Association and upload the Regional VCD association data XML (assoc_data.xml) file downloaded in step 3.
7. After the file is uploaded, click Export Local Association Data to download the association data XML (assoc_data.xml) file for the Core VCD.
8. On the Regional VCD Multisite page, click Create New Organization Association and upload the Core VCD association data XML (assoc_data.xml) file downloaded in step 7.
9. After a few minutes, verify the following:

a. On the Core VCD Multisite page, the associated Regional VCD organization is in the Active state.

Figure 11. Multisite vCloud Director on Core Management domain

b. On the Regional VCD Multisite page, the associated Core VCD organization is in the Active state.


Figure 12. Multisite vCloud Director on Regional Management domain


VMware vRealize Log Insight deployment and configuration

About this task

The Dell EMC Ready Solution bundle uses VMware vRealize Log Insight (vRLI) to collect log data from the ESXi hosts. It also connects with the vCenter Servers to collect log data about server events, tasks, and alarms.

In this deployment, vRLI is deployed in a single-cluster configuration on both the Core Management and Regional Management domains, and it consists of three nodes:

• Master
• Worker
• Witness

Table 71. Naming convention

vRLI nodes Core Management domain Regional Management domain

Master Core-vRLiMaster Reg-vRLiMaster

Worker Core-vRLiWorker1 Reg-vRLiWorker1

Witness Core-vRLiWorker2 Reg-vRLiWorker2

Prerequisites

• The ESXi 6.7 U3 server is up and running.
• AD-DNS and NTP are up and running.
• The Management and Resource VCSAs are up and running.
• Forward and reverse lookup entries for all vRealize Log Insight instances are created manually on the DNS server prior to deployment.
• The vRLI OVA file is available in the deployment VM.

Steps

1. Log in to the Management vCenter Server of the Core Management domain using the vSphere web client.

2. Deploy the Core-vRLiMaster node on the Management cluster.

3. Deploy the Core-vRLiWorker1 node on the Management cluster.

4. Deploy the Core-vRLiWorker2 node on the Management cluster.

5. Configure the root SSH password for vRLI virtual appliance.

6. Log in to the vRLI master UI at: https://<Core-vRLiMaster-IP/FQDN>.

7. From the Setup screen, click Next.

8. On the Choose Deployment Type, select Start New Deployment, and configure the vRLI master node.

9. Log in to the vRLI worker1 UI at: https://<Core-vRLiWorker1-IP/FQDN>.

10. From the Setup screen, click Next.

11. On the Choose Deployment Type, select Join Existing Deployment, and configure the vRLI Worker 1.

12. Log in to the vRLI worker2 UI at: https://<Core-vRLiWorker2-IP/FQDN>.

13. From the Setup screen, click Next.

14. On the Choose Deployment Type, select Join Existing Deployment, and configure the vRLI Worker 2.

15. Repeat the steps provided in this section to install and configure vRLI on the Regional Management domain.


Integration of vRLI with NFV components

About this task

The vRLI is integrated with the following components and receives various logs from them:

• Active Directory
• VMware vCenter
• VMware vRealize Operation Manager
• VMware vRealize Orchestrator
• VMware vCloud Director

Prerequisites

• Verify that all vRLI nodes and the specified Integrated Load Balancer IP address are on the same network.
• Forward and reverse lookup entries for all vRealize Log Insight instances are created manually on the DNS server prior to deployment.
• Content packs are imported for vSAN, vRO, NSX-T, vCD, vROps, and vSphere.

Steps

1. Log in to the vRLI master UI of the Core Management domain at: https://<Core-vRLiMaster-IP/FQDN>.

2. Add an Integrated Load Balancer (ILB) to balance incoming traffic fairly between the available vRLI nodes.

3. To integrate vRLI with AD:

a) On the vRLI master home page, go to: Administration > Authentication > Active Directory.
b) Configure the parameters as described in the following table:

Table 72. AD parameters

Parameter Description

Enable Active Directory Support Slide toggle to ON

Default domain Domain name

Username Admin user name

Password Admin user password

Connection Type Standard or can be set to Custom for testing specific ports

Require SSL Check box if SSL is required.

c) Validate the connection and save it.

4. To integrate vRLI with VMware vCenter:

a) On the vRLI master home page, go to: Administration > vSphere > + Add vCenter Server.
b) Enter the required details for the Management vCenter.
c) Validate the connection and save it.
d) Click + Add vCenter Server and add the Resource vCenter.

5. To configure vRLI with vROps Manager to send alert notifications:

a) On the vRLI master home page, go to: Administration > vRealize Operations.
b) Enter the required details for the vROps Manager.
c) Validate the connection and save it.

6. To integrate vRLI with vRO:

a) On the vRLI master home page, go to: Administration > Agents, then select vRealize Orchestrator from the drop-down menu.
b) Click Copy Template, then enter the required information and click Copy.
c) Specify the filters, then click Save New Group. The Agent Group is created.

7. To integrate vRLI with vCD:

a) On the vRLI master home page, go to: Administration > Agents, then select Cloud Director Cell Servers from the drop-down menu.
b) Click Copy Template, then enter the required information and click Copy.
c) Specify the filters, then click Save New Group. The Agent Group is created.
d) Download the Log Insight agent (LinuxRPM file) on each vCD cell.
e) Copy the LinuxRPM file of the Log Insight agent to the /tmp folder on each vCD cell.


f) SSH to the vCD cell as the root user and run the following command to install the LinuxRPM file: rpm -i VMware-Log-Insight-Agent-4.8.0-13020979.noarch_<vrli-IP>.rpm

g) Once the installation is complete, configure the installed Log Insight agent by editing its configuration file: run vi /etc/liagent.ini or vi /var/lib/loginsight-agent/liagent.ini.
h) Because the agent was downloaded from Log Insight, the Log Insight hostname should already be present; verify it, and add the hostname if it is not. Uncomment proto=cfapi.
i) Add the vRLI agent configuration (a minimal liagent.ini sketch follows these steps):

1. On the vRLI master node, click Installed Content Packs > VMware vCloud Director > Agent Groups.
2. Copy the configuration.
3. In the vCD cell SSH session, append the configuration to the liagent.ini file.

j) Restart the Log Insight agent service using the following command: service liagentd restart
k) Restart the vCD services using one of the following commands: vmware-vcd restart or service vmware-vcd restart
l) Repeat steps 7 (e) through 7 (j) for all the vCD cells.

8. Repeat the steps provided in this section to integrate the Reg-vRLI Master with NFV components on the Regional Managementdomain.
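After step 7 (i), the appended agent section of liagent.ini typically resembles the following. This is a minimal sketch, assuming the agent's standard [server] section and the default non-SSL cfapi port; verify it against the configuration copied from the vRLI agent group:

; liagent.ini (sketch; values are assumptions for illustration)
[server]
hostname=<Core-vRLiMaster-FQDN>
proto=cfapi
port=9000
ssl=no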


VMware vRealize Orchestrator deployment and configuration

About this task

The vRealize Orchestrator (vRO) is a development and process-automation platform that contains a workflow library and a workflow engine. It allows administrators to create and run workflows to automate orchestration processes. Orchestrator provides a standard set of plug-ins, including plug-ins for vCenter Server and vRealize Automation, that allow you to orchestrate tasks in different environments.

In this deployment, one instance of vRO is deployed on both the Core Management domain and the Regional Management domain.

Prerequisites

• The ESXi 6.7 U3 server is up and running.
• AD-DNS and NTP are up and running.
• The Management and Resource VCSAs are up and running.
• Forward and reverse lookup entries for the vRealize Orchestrator instances are created manually on the DNS server prior to deployment.
• The vRealize Orchestrator OVA file is available in the deployment VM.

Steps

1. Log in to the Management vCenter of the Core Management domain of the VMware vSphere Web Client.

2. Deploy the OVA file of the vRealize Orchestrator for the Core Management domain.

3. Configure NTP in vRO:

a) Log in to the vRO Appliance Configuration page with administrator credentials at: https://<vRO-IP>:5480/.
b) Go to: Admin > Time Settings.
c) In the Time Server field, enter the IP address of the NTP server and save the settings.

4. Configure vSphere authentication in vRealize Orchestrator:

a) Log in to the Orchestrator Control Center with administrator credentials at: https://<<orchestrator_server_IP_or_DNS_name>>:8283/vco-controlcenter.
b) On the Host Settings page, click Change.
c) Configure the fields using the following table:

Table 73. vSphere authentication

Fields Description

Host name Host name or FQDN of vRO VM

Authentication mode vSphere

Host address Host name of resource vCenter

User name Admin user name of resource vCenter

Password Admin user password of resource vCenter

Default tenant Resource vCenter tenant name, for example: resvsphere.local

Admin group Search for admin group, for example: resvsphere.local\ComponentManager.Administrators

d) Validate the connection, then save the settings.

5. (Optional) Update the vRO using the ISO file.

6. Repeat the steps provided in this section to deploy and configure the vRO in the Regional Management domain.


vRealize Orchestrator integration with vCenter Server

About this task

The vRO is integrated with the Resource vCenter Server instance of both the Core Management domain and the Regional Management domain.

Steps

1. Add a vCenter Server instance to vRealize Orchestrator:

a) Log in to the vRealize Orchestrator using administrator credentials at: https://<<vRo-FQDN>>.
b) Select Start the Orchestrator Client and go to: Library > Workflows.
c) In the Search field, search for the Add a vCenter Server instance workflow, then open it.
d) On the Set the vCenter Server instance properties tab, configure the fields using the following table:

Table 74. Set the vCenter Server instance properties

Fields Description

IP or host name of the vCenter Server instance to add Host name or FQDN for resource vCenter

HTTPS port of the vCenter Server instance 443

Location of the SDK that you use to connect to the vCenter Server instance /sdk

Will you orchestrate this instance? Check this box

Do you want to ignore certificate warnings? Check this box to ignore the certificate warnings and add a vCenter Server

e) On the Set the connection properties tab, configure the fields using the following table:

Table 75. Set the connection properties

Fields Description

Do you want to use a session per user method to manage user access to the vCenter Server system? Check the box to create a new vCenter Server session

User name Administrator user name of resource vCenter Server

Password Administrator password of resource vCenter Server

Domain name Domain name for Orchestrator

f) Validate the connection and save it.

2. Register the vRealize Orchestrator as a vCenter Server extension:

a) Log in to the vRealize Orchestrator using administrator credentials at: https://<<vRo-FQDN>>.
b) Select Start the Orchestrator Client and go to: Library > Workflows.
c) In the Search field, search for the Register vCenter Orchestrator as a vCenter workflow, then open it.
d) In the vCenter Server instance to register Orchestrator with field, select the Resource vCenter.
e) Click Run. Once the workflow is completed, reboot the Resource vCenter Server integrated with vRO.
f) Log in again to the Resource vCenter Server and verify that the vRO plug-in is present by clicking Home > Inventories.

3. Repeat the steps provided in this section for the vRealize Orchestrator in the Regional Management domain.

Configure vRealize Orchestrator to forward logs to vRLI

About this task

Users can configure each vRealize Orchestrator to forward logs to the vRealize Log Insight.


Steps

1. Log in to the Orchestrator Control Center with administrator credentials at: https://<<orchestrator_server_IP_or_DNS_name>>:8283/vco-controlcenter.

2. Go to: Home > Log > Logging Integration.

3. Configure the fields using the following table:

Table 76. Logging integration

Fields Description

Enable logging to a remote log server Turn on the slider

Type Select Use Log Insight Agent

Host Core Management domain vRLI master host name

Port 9000

Protocol cfapi

4. Click Save.

5. Repeat the steps provided in this section to forward vRealize Orchestrator logs to vRLI in the Regional Management domain.


VMware vRealize Operation Manager deployment and configuration

About this task

The vRealize Operations (vROps) Manager delivers intelligent operations management with application-to-storage visibility across physical, virtual, and cloud infrastructures. Using policy-based automation, operations teams can automate key processes and improve IT efficiency.

In this deployment, three nodes will be deployed as part of the vROps Manager deployment on both the Core Management and the Regional Management domains:

• Master
• Data
• Data used as a master replica node

NOTE: The vROps deployment covered in this document is for a Medium configuration. The vROps OVA template deployment is done in three nodes: data, master, and replica. The vRealize Operations Manager UI can be used to add the Management vCenter and the Resources vCenter of the Core Management and Regional Management domains.

Table 77. Naming conventions for vROps

vROps nodes Core Management domain Regional Management domain

Master Core-vROpsMaster Reg-vROpsMaster

Data Core-vROpsData Reg-vROpsData

Replica Core-vROpsReplica Reg-vROpsReplica

Prerequisites

• The ESXi 6.7 U3 server is up and running.
• AD-DNS and NTP are up and running.
• Forward and reverse lookup entries for all vROps instances are created manually on the DNS server prior to deployment.
• The vROps OVA file is available in the deployment VM.

Steps

1. Log in to the Management vCenter Server of the Core Management domain using the vSphere web client.

2. Deploy the Core-vROpsMaster node using the OVA file on the Management cluster.

3. Deploy the Core-vROpsData node using the OVA file on the Management cluster.

4. Deploy the Core-vROpsReplica node using the OVA file on the Management cluster.

5. Log in to the vROpsMaster node UI at: https://<Core- vROpsMaster-IP/FQDN>.

6. Click New Installation and provide the required information to configure the vROps Master node.

7. Log in to the vROps data node UI at: https://<Core- vROpsData-IP/FQDN>.

8. Click Expand an Existing Installation and provide the required information to configure the vROps data node.

9. Log in to the vROps replica node UI at: https://<Core- vROpsReplica-IP/FQDN>.

10. Click Expand an Existing Installation and provide the required information to configure the vROps replica node.

11. From the System Status screen, enable the high availability mode for vROps cluster.

NOTE: While performing the cluster configuration, if the process stops responding at the Waiting for Analytics screen, perform the following steps:

a. Do not reboot any of the vROps nodes or stop the cluster configuration process.

b. Make sure that the NTP server is running and that all of the vROps nodes are configured to use the same NTP server.


c. Synchronize time between all the vROps nodes and the NTP server.

d. Ensure that the time difference between vROps nodes is not greater than 30 seconds.

e. After a few seconds, the cluster configuration proceeds automatically.

12. On the System Status screen, click the Start vRealize Operation Manager button to start the vROps Manager.

13. Add the product license key for the vROps cluster.

14. Repeat the steps provided in this section to deploy and configure vROps in the Regional Management Domain.

Integration of vROps with NFV components

About this task

The vROps Manager is integrated with the following components:

• Active Directory
• VMware vSAN
• VMware vCenter
• VMware vRealize Log Insight
• VMware vRealize Orchestrator
• VMware vCloud Director
• VMware NSX-T

Prerequisites

• All the vROps nodes are deployed and configured.
• Forward and reverse lookup entries for all vROps instances are created manually on the DNS server prior to deployment.
• Management packs are available for vSAN, vRO, NSX-T, vCD, vRLI, and vSphere.

Steps

1. For the Core Management domain, log in to the vRealize Operations Manager using administrator credentials.

2. Navigate to: Administration > Solutions > Repository.

3. Install and activate the vSAN, vSphere, and vRLI packages.

4. Integrate vROps with vCenter Servers:

a) Navigate to: Administration > Solutions > Configuration.
b) On the Solutions screen, select vSphere Management solutions, then click the Configure icon.
c) Provide the required information for the Management vCenter and save the settings.
d) Repeat the steps and integrate the Resource vCenter with vROps.

5. Integrate vROps with AD:

a) Navigate to: Administration > Authentication Sources.
b) Click Add and provide the required information to integrate AD with vROps.

6. Import users from AD to vROps:

a) Navigate to: Administration > Access Control > User Accounts > Import User.
b) Provide the required information and save the settings.
c) Log in to vROps from one of the imported accounts and verify that the selected permissions are accessible.

7. Integrate vROps with vRLI:

a) Navigate to: Administration > Solutions > Configuration.
b) In the Solution window, select the Management pack for vRLI, and then click the Configure icon.
c) Provide the required information for vRLI and save the settings.

8. Integrate vROps with NSX-T:

a) Navigate to: Administration > Solutions > Repository.
b) Click Add and upload the NSX-T Management pack to vROps.
c) In the Solution window, select the NSX-T Management pack, and then click the Configure icon.
d) Provide the required information for NSX-T and save the settings.

9. Integrate vROps with vSAN:

a) Navigate to: Administration > Solutions > Configuration.
b) In the Solution window, select the Management pack for vSAN, and then click the Configure icon.


c) Provide the required information for the Management vSAN and save the settings.
d) Repeat the steps provided in this section to integrate the Resource and Edge vSAN with vROps.

10. Integrate vROps with vCD:

a) Navigate to: Administration > Solutions > Repository.
b) Click Add and upload the vCD Management pack to vROps.
c) In the Solution window, select the vCD Management pack, and then click the Configure icon.
d) Provide the required information for vCD Cell 1, then save the settings.
e) On the left pane, click + Add and provide the required information for vCD Cell 2, then save the settings.
f) On the left pane, click + Add and provide the required information for vCD Cell 3, then save the settings.

11. Integrate vROps with vRealize Orchestrator:

a) Navigate to: Administration > Solutions > Repository.
b) Click Add and upload the vRealize Orchestrator Management pack to vROps.
c) In the Solution window, select the vRealize Orchestrator Management pack, and then click the Configure icon.
d) Provide the required information for the vRealize Orchestrator and then save the settings.


VMware vSphere Replication deployment and configuration

About this task

The VMware vSphere Replication is an alternative to storage-based replication. It protects virtual machines from partial or complete site failures by replicating the virtual machines between the following sites:

• From a source site to a target site
• Within a single site from one cluster to another
• From multiple source sites to a shared remote target site

In this deployment, one instance of vSphere Replication is deployed and configured on both the Core Management and Regional Management domains.

Table 78. Naming convention for vSphere Replication

Core Management domain Regional Management domain

Core-vRep Reg-vRep

Prerequisites

• The ESXi 6.7 U3 server is up and running.
• The vSphere Replication ISO image file is downloaded and mounted on the system.
• The Management pod is up and running.
• Forward and reverse lookup entries for all vSphere Replication instances are created manually on the DNS server prior to deployment.
• The replication network port group is created in mgmt-infra-VDS on the Core Management and Regional Management domains.

Steps

1. Log in to the Management vCenter Server of the Core Management domain using the vSphere web client.

2. Deploy the Core-vRep VM on the Management cluster.

3. Log in to the vSphere Replication Web Interface using the root credentials at: https://<<vSphere-Replication-FQDN/IP>>:5480.

4. Configure the network settings for vSphere Replication:

a) Navigate to: Network > Address.
b) Configure the eth1 network settings, then save the settings.
c) Navigate to: VR > Configuration.
d) In the IP Address for Incoming Storage Traffic field, enter the IP address of the network adapter used by the Replication Server for incoming replication data.
e) Click Apply Network Setting. The VR network settings changed successfully message displays.
f) Enter the required information using the following table:

Table 79. Replication network settings

Fields Description

LookupService Address FQDN of the Management vCenter Server on the Core Management domain

SSO Administrator Management vCenter SSO administrator user name

Password Management vCenter SSO administrator password

g) Click Save and Restart Service.
h) Review the SSL certificate, then click Accept. Log in again to the vSphere Replication and Management vCenter to save the changes.

5. Configure vSphere Replication connection for Site Recovery:

a) Log in to the Management vCenter of the Core Management domain using the VMware vSphere Web Client.


b) In the Navigator tree, select the vSphere Replication VM (Core-vRep). From the Menu drop-down list, select Site Recovery.
c) Verify that the vSphere Replication status is OK.
d) Click Open Site Recovery to open the site recovery in a new page.
e) On the Site Recovery page, from the Menu drop-down list, select Replications within the same vCenter Server, then click the FQDN/host name of the vCenter.
f) On the Replications tab, click + New.
g) Provide the required information and configure the replication.

6. Repeat the steps provided in this section to configure vSphere Replication in the Regional Management domain.


Set up anti-affinity rules

About this task

An affinity rule establishes a relationship between two or more VMware VMs and hosts. Anti-affinity rules enable the vSphere hypervisor platform to keep virtual entities separated.

Anti-affinity rules are created to keep two or more VMs separated on different hosts.

NOTE: Anti-affinity rules can be created for Management and Edge cluster VMs. Creating a separate anti-affinity rule for the VMs listed is recommended for the respective cluster. Dell Technologies recommends performing this section only after every component has been deployed and configured.

To create anti-affinity rules for management and edge pod components on both Core Management and Regional Management domains, see the Create an anti-affinity rule section.

The following table displays a list of Management cluster VMs that must be kept on different hosts using an anti-affinity rule. For example, create an anti-affinity rule for the Management VCSA to always keep its three VMs (active, passive, and witness node VMs) on different hosts.

Table 80. Management VM cluster list

Rule name Create rule for virtual machines

Mgmt VCSA: VCSA-Mgmt-Active, VCSA-Mgmt-Passive, VCSA-Mgmt-witness
Resource VCSA: VCSA-Res-Active, VCSA-Res-Passive, VCSA-Res-witness
vRLI_Rule: vRLI-master, vRLI-worker1, vRLI-worker2
vROPS_Rule: vROPS-master, vROPS-data, vROPS-replica
NSX_Rule: Nsx-Manager, Nsx-1, Nsx-2
vCD_Rule: vCD_Cell1, vCD_Cell2, vCD_Cell3

The following table lists edge cluster VMs that must be kept on different hosts using an anti-affinity rule:

Table 81. Edge cluster list

Rule name Create rule for VMs

Edge_Rule01: Edge01, Edge02
Edge_Rule02: Edge03, Edge04

Prerequisites

• All components of the management cluster are deployed.
• All components of the resource cluster are deployed.

Steps

1. Log in to the desired vCenter Server GUI.

2. Navigate to: Configure > Configuration > VM/Host Rules > Add.

3. In the Create VM/Host Rule dialog box, enter a Name for the rule, for example Mgmt VCSA, and check the Enable rule box.

4. From the Type drop-down menu, select Separate Virtual Machines, then click Add.

5. From the Add Rule Member window, select the virtual machines to keep on different hosts then click OK.

6. Click OK to create the rule.

7. Repeat the steps provided in this section to create the anti-affinity rule for the Management, Resource, and Edge cluster VMs on both the Core Management and Regional Management domains.
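These rules can also be scripted. The following is a minimal sketch using the open-source govc CLI, which is not part of this solution bundle and is shown only as an illustrative alternative to the vSphere Client. It assumes govc is installed, the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD environment variables point at the target vCenter, and the cluster name placeholder and the VM names (taken from Table 80) match your environment:

# keep the three Management VCSA nodes on different hosts
govc cluster.rule.create -cluster <Management-cluster-name> -name "Mgmt VCSA" -enable -anti-affinity VCSA-Mgmt-Active VCSA-Mgmt-Passive VCSA-Mgmt-witness

# repeat per rule, for example for the vRLI nodes
govc cluster.rule.create -cluster <Management-cluster-name> -name vRLI_Rule -enable -anti-affinity vRLI-master vRLI-worker1 vRLI-worker2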


Enable vSphere DRS

About this task

This section provides steps to enable vSphere DRS on the Management cluster, Resource cluster, and Edge cluster.

Prerequisite

• Anti-affinity rules are created for VMs.

Steps

1. Log in to the desired vCenter Server GUI.

2. Navigate to: Configure > Services > vSphere DRS > Edit.

3. On the Edit cluster window, check the Turn ON vSphere DRS box.

4. Set the DRS Automation to Fully Automated.

5. Set the Power Management to Off, and then set the Advanced Options to None. Click OK.

6. Repeat the steps provided within this section to enable DRS for the Management, Resource, and Edge cluster VMs on both the Core Management and Regional Management domains.
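As with the anti-affinity rules, DRS can also be enabled from the command line. A minimal govc sketch, under the same assumptions as the previous section and with a placeholder inventory path:

# enable DRS in fully automated mode on one cluster
govc cluster.change -drs-enabled -drs-mode fullyAutomated /<datacenter>/host/<cluster-name>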

Enable vSphere availability

About this task

Perform the following steps on Management, Resource, and Edge clusters to enable vSphere availability.

Steps

1. Log in to the desired vCenter Server GUI.

2. Navigate to Configure > Services > vSphere Availability > Edit.

3. Check the Turn ON vSphere HA and Proactive HA boxes.

4. In the Failures and Responses section, select Enable Host Monitoring, then click OK.

5. Repeat the steps provided within this section to enable vSphere availability for the Management, Resource, and Edge cluster VMs on both the Core Management and Regional Management domains.
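A matching govc sketch for this section, under the same assumptions; note that this toggles vSphere HA only, so Proactive HA and host monitoring may still need to be enabled in the vSphere Client as described above:

govc cluster.change -ha-enabled /<datacenter>/host/<cluster-name>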


Forwarding logs to vRLI

Forwarding vROps logs to vRLI

About this task

Follow this procedure to configure the vROps VM to send log messages to a remote logging server.

Prerequisites

• Configure the vRLI server to receive the logs.
• The vRLI integration with vROps is completed. See Integration of vRLI with NFV components.

Steps

1. Log in to the vROps master VM.

2. Navigate to: Administration > Management > Log Forwarding.

3. Select the Output logs to external log server box.

4. For Log Insight Servers, select Other and enter the following:

• Host: FQDN of the vRLI master
• Port: 9000
• Protocol: cfapi

5. Repeat the steps provided in this section on both the Core Management and the Regional Management domains.

Forwarding logs from vCD to vRLI

About this task

Prerequisite

• The vCloud Director agents are created on the vRLI. See Integration of vRLI with NFV components.

Steps

1. Open your browser and log in to vCD Cell1 with administrator credentials.

2. From the Administrator screen, locate the System Settings section and click General.

3. Enter the vRLI Master node IP in Syslog server 1.

4. Enter the vRLI virtual IP in Syslog server 2.

5. Click Apply to save the change.

6. Repeat the steps provided in this section on both the Core Management and the Regional Management domains.

Configure syslog server for NSX-T

About this task

Follow this procedure to configure NSX-T appliances to send log messages to a remote logging server.

Prerequisite

• Configure the vRLI server to receive the logs.


Steps

1. From the CLI, log in to the NSX-T Manager using administrator credentials.

2. Run the following command to configure a log server and the types of messages to send to the log server.

NOTE: Multiple facilities or message IDs can be specified as a comma-delimited list, without spaces. The Log Message IDs table displays the list of available log Message IDs.

set logging-server <hostname-or-ip-address[:port]> proto <proto> level <level> [facility <facility>] [messageid <messageid>] [certificate <filename>] [structured-data <structureddata>]

NOTE: For more information about this command, see the NSX-T CLI Reference section.

The command can be run multiple times to add multiple logging server configurations. For example:

nsx> set logging-server 192.168.20.80 proto udp level info facility syslog messageid SYSTEM,FABRIC structured-data audit=true
nsx> set logging-server 192.168.20.80 proto udp level info facility auth,user

3. To view the logging configuration, enter the get logging-servers command. For example:

nsx-manager> get logging-servers
192.168.20.80 proto udp level info facility syslog messageid SYSTEM,FIREWALL structured-data audit="true"
192.168.20.80 proto udp level info facility syslog messageid SYSTEM,MONITORING structured-data audit="true"
192.168.20.80 proto udp level info facility syslog messageid SYSTEM,DHCP structured-data audit="true"
192.168.20.80 proto udp level info facility syslog messageid SYSTEM,ROUTING structured-data audit="true"
192.168.20.80 proto udp level info facility syslog messageid SYSTEM,SWITCHING structured-data audit="true"
192.168.20.80 proto udp level info facility syslog messageid SYSTEM,FIREWALL-PKTLOG structured-data audit="true"
192.168.20.80 proto udp level info facility syslog messageid SYSTEM,- structured-data audit="true"
192.168.20.80 proto udp level info facility syslog messageid SYSTEM,SYSTEM structured-data audit="true"
192.168.20.80 proto udp level info facility syslog messageid SYSTEM,GROUPING structured-data audit="true"

4. Repeat the steps in this section for the NSX-T VMs to configure remote logging on each node individually on both the Core Management and the Regional Management domains.

Log Message IDs

About this task

In a log message, the message ID field identifies the type of message. You can use the messageid parameter in the set logging-server command to filter which log messages are sent to a logging server. The following table displays the list of available log Message IDs:

Table 82. Log Message IDs

Message ID Examples

FABRIC
• Host node
• Host preparation
• Edge node
• Transport zone
• Transport node
• Uplink profiles
• Cluster profiles
• Edge cluster
• Bridge clusters and endpoints


SWITCHING
• Logical switch
• Logical switch ports
• Switching profiles
• Switch security features

ROUTING
• Logical router
• Logical router ports
• Static routing
• Dynamic routing

FIREWALL
• Firewall rules
• Firewall connection logs

FIREWALL-PKTLOG
• Firewall connection logs
• Firewall packet logs

GROUPING
• IP sets
• MAC sets
• NSGroups
• NSServices
• NSService groups
• VNI Pool
• IP Pool

DHCP
• DHCP relay

SYSTEM
• Appliance management (remote syslog, ntp, and so on)
• Cluster management
• Trust management
• Licensing
• User and roles
• Task management
• Install (NSX-T Manager, NSX-T Controller)
• Upgrade (NSX-T Manager, NSX-T Controller, NSX-T Edge, and host-packages upgrades)
• Realization
• Tags

MONITORING
• SNMP
• Port connection
• Traceflow
• All other log messages


Data protection

About this task

The Dell EMC Ready Solution Bundle for vCloud NFV Edge has a backup cluster for data protection. It helps to create a backup, restore the created backup, and replicate the backup data.

Avamar Virtual Edition (AVE) and Data Domain Virtual Edition (DD VE) are integrated together to protect the data in the NFV Edge environment.

Data protection architecture

About this task

A Data Domain (DD) system performs deduplication through the DD OS software. Avamar source-based deduplication to a Data Domain system is facilitated through the use of the Data Domain Boost library.

Avamar and Data Domain integration allows you to specify whether specific datasets in an Avamar backup policy can target an Avamarserver or a Data Domain system.

Avamar uses the DD Boost library through API-based integration to access and manipulate directories and files on the DD file system.

This integration allows Avamar to control backup images stored on DD, and it also enables Avamar to manage maintenance activities and control replication on a remote DD.

Figure 13. Backup cluster

Backup operation in data protection

About this task

When an Avamar server is selected as the backup target, the Avamar client performs deduplication segment processing. Data and metadata are stored on the Avamar server.

When a Data Domain system is selected as the backup target, backup data is transferred to the Data Domain system. The related metadata generated by the Avamar client is simultaneously sent to the Avamar server for storage. The metadata enables the Avamar management system to perform restore operations directly from the Data Domain system.

Replication operation in data protection

About this task

The Avamar replication feature transfers data from a source Avamar server to a target Avamar server. When DD is integrated with Avamar, the replication process transfers the Avamar data from the source DD to the target DD. If the source Avamar server is configured with a Data Domain system, then the target server must have a corresponding configured Data Domain system.


Replication fails for backups on the source DD if the target Avamar is not configured with a target DD.

The data protection deployment and configuration is completed in three steps:

• Deployment and configuration of Avamar
• Deployment and configuration of Data Domain
• Integration of Data Domain with Avamar

Deployment and configuration of Data Domain

About this task

In this deployment, two DD VE instances are deployed on two separate ESXi servers in the backup pod using vCenter. Each DD VE instance has its own VMFS datastore. One DD VE instance serves as the primary backup, and the other is used for replication in case the primary crashes. The DD VE instances are only deployed on the Core Management domain. For more information, see the Dell EMC Data Domain Virtual Edition - Installation and Administration Guide.

NOTE: For best performance, use a dedicated datastore not shared with any other virtual machines.

Prerequisites

• The Management pod is configured and has internet connectivity.
• Forward and reverse lookup entries are manually created for all DD instances on the DNS server prior to deployment.
• NTP is up and running.
• The management network and backup network are configured.
• The DD VE OVA file is available on the deployment VM.
• Two VMFS datastores on each ESXi are configured. See Configuring the VMFS datastore on the backup cluster.

Steps

1. Log in to the Core Management vCenter using the VMware vSphere Web Client.

2. Deploy the first DD VE instance on the VMFS datastore. For this deployment, Core_Bck_DD_Bck_NW and Core_Bck_VM_NW are selected as network port groups.

3. Deploy the second DD VE instance on the VMFS datastore. For this deployment, Core_Bck_DD_Bck_NW and Core_Bck_VM_NW are selected as network port groups.

4. Add the virtual disk:

a) Right-click the DD VM and select Edit Settings.
b) On the Virtual Hardware tab, select New Hard Disk from the New Device drop-down list, then click Add.
c) Enter the required disk capacity for the newly added disk.
d) Click OK to save.
e) Repeat the steps provided in this section to add disks to the other DD instances.

Configuration of DD VE

About this task

After the DD VMs are deployed, you must create and enable the file system and DD Boost on the Data Domain VMs.

Prerequisites

• Forward and reverse lookup entries are manually created for all DD VE instances on the DNS server prior to deployment.
• A virtual disk is added to the DD VMs.

Steps

1. Log in to the Management vCenter using the VMware vSphere Web Client.

2. Select the DD VM, then power it on.

3. Log in to the DD VM console using the following credentials:

a) Username: sysadmin
b) Password: changeme

4. Press any key, then press Enter to acknowledge the receipt of the EULA information.


5. Set a new password for sysadmin.

6. Type yes to configure the system using the GUI wizard and press Enter.

7. Type yes to configure the system for network connectivity and press Enter.

8. Type no to decline DHCP and configure the network parameters manually, then press Enter.

9. Type the FQDN for the host name and press Enter.

10. Type the DNS name and press Enter.

11. Enable then configure each Ethernet interface:

a) Type No for Use DHCP on ethernet port ethV0 and press Enter.
b) IP for ethV0: enter the IP for Core_Bck_VM_NW and press Enter.
c) Type the Netmask for ethV0 and press Enter.

12. Configure ethV1 for Core_Bck_DD_Bck_NW (the backup network) as configured in step 11 above.

13. Enter the IP address of the default routing gateway and press Enter.

14. Enter the IP address of DNS server and press Enter. You can add up to three DNS servers.

15. Verify that the provided information is correct, then save.

16. Log in to the DD System Manager GUI at: https://<<DD FQDN or IP>>/.

17. On the Apply Your License window, provide the license information and click Apply.

18. On the Network page, click Yes to verify the configured network settings.

19. On the Summary page, verify the information provided then click Submit.

20. On the File System page, click Yes to configure the File System for DD:

a) On the Configure Active Tier page, in the Addable storage section, select the virtual disk.
b) Click Add to Tier. The hard disk is added to the active tier. Click Next.
c) On the Configure Cloud Tier page, keep the defaults and click Next.
d) On the Start Deployment Assessment page, click Using Only DD Boost for Backup. The deployment assessment begins.
e) After the deployment assessment is completed, click Next.
f) On the Summary page, select the Enable file system after creation check box, then click Submit.
g) The File System is created and enabled on the DD VE. Click OK.

21. Click Yes to configure the system settings:

a) (Optional) On the Administrator page, in the Admin Email field, enter the admin email address. Click Next.

NOTE: To configure email notifications and alerts, check the required box on this page.

b) On the Email / Location page, click Next.

NOTE: To configure email notifications and alerts, enter the required information on this page.

c) On the Summary page, verify the information provided, then click Submit.
d) Once the system settings are configured, click OK.

22. Click Yes to configure the DD Boost Protocol:

a) Enter the storage unit name.
b) From the select or create user drop-down list, select sysadmin.
c) Click Next.
d) On the Summary page, verify the information provided, then click Submit.
e) Once DD Boost is enabled and the storage unit is created, click OK.

23. On the CIFS protocol page, keep the defaults then click No.

24. On the NFS protocol page, keep the defaults then click No.

25. Configure SNMP Community settings:

a) Navigate to: Administration > Settings > SNMP.
b) In the SNMP v2C Configuration section, click Create to create communities.
c) On the Create SNMP v2c Community page, enter the community string value.
d) In the Access field, select the read-write radio button.
e) Click OK.

26. Set passphrase:

a) Navigate to: Administration > Access > Administrator Access > Set Passphrase.
b) Enter the passphrase value in the New Passphrase and Confirm Passphrase fields.
c) Click Next.
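After the wizard completes, the file system and DD Boost state can also be spot-checked from the DD OS CLI. A minimal sketch, assuming an SSH session to the DD VE as sysadmin (exact output varies by DD OS release):

ssh sysadmin@<DD-FQDN-or-IP>
filesys status              # file system should be enabled and running
ddboost status              # DD Boost should be enabled
ddboost storage-unit show   # the storage unit created above should be listed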


Deployment and configuration of Avamar

About this task

In the Dell EMC vCloud NFV environment, Avamar Virtual Edition integrates the Avamar software with SUSE Linux as a virtual machine. Avamar has two ways to protect data on the virtual machines (VMs): image backup and guest backup.

In the Dell EMC vCloud NFV deployment, we configure both image backup and guest backup. Avamar proxy virtual machines must be deployed on the vCenter to create backups and then restore them.

For more information, see the Avamar Virtual Edition - Version 19.2 - Installation and Upgrade Guide.

In this deployment, two AVE instances are deployed:

• Source AVE: Creates the backup.
• Destination AVE: Replicates the backup of the source AVE.

Prerequisites

• The Management pod is configured.
• Forward and reverse lookup entries are manually created for all AVE instances on the DNS server prior to deployment. Add two DNS entries for Avamar using the same host name:
  • One for the VM-Management network (Core_Bck_VM_NW)
  • One for the backup network (Core_Bck_DD_Bck_NW)
• NTP is up and running.
• The management and backup networks are configured.
• The AVE OVA file is available on the deployment VM.

Steps

1. Log in to the Core Management vCenter using the VMware vSphere Web Client at: https://vCenter-server-IP-or-FQDN/.

2. Deploy the first AVE instance on the vSAN datastore. For this deployment, Core_Bck_VM_NW is selected as the network port group.

3. If you are deploying a 2 TB or larger AVE configuration, remove the existing 250 GB virtual disks that the template created.

NOTE: Do not perform this step for 0.5 TB or 1 TB AVE configurations.

4. Update the hardware properties as per your AVE configuration.

5. Create additional virtual hard disks for a 1 TB or larger AVE configuration.

6. Power on the AVE virtual machine and install the Avamar software:

a) Log in to the Avamar Installation Manager using the following credentials at: https://Avamar-server-IP-or-FQDN:7543/avi.

• User name: root
• Password: changeme

b) Navigate to: Main Menu > SW Releases > Package List > ave-config workflow package.
c) Click Install.
d) On the Installation Setup page, enter the required information in each tab, then click Continue to install.

7. Configure the network settings for eth1:

a) Log in to the Core Management vCenter using the VMware vSphere Web Client at: https://vCenter-server-IP-or-FQDN/.
b) Right-click the AVE VM and select Edit Settings.
c) On the Virtual Hardware tab, from the New devices drop-down list, select Network, then click Add.
d) Select the Backup Network (Core_Bck_DD_Bck_NW) for the newly added network device and click OK.
e) Log in to the AVE VM console using administrator credentials.
f) Shut down the Avamar server services using the following command: dpnctl stop
g) Switch to the root user using the following command: su
h) Change directory to /etc/sysconfig/network.
i) Create a new ifcfg-eth1 file from the existing ifcfg-eth0 using the following command (or the scp tool): cp -p -r -f "ifcfg-eth0" "/etc/sysconfig/network/ifcfg-eth1"
j) Open the ifcfg-eth1 file and update the IPADDR field with the new IP address to be assigned, then save the file.
k) Update probe.xml:

1. Go to the path /usr/local/avamar/var and update the probe.xml file to add the properties for the newly configured eth1 vNIC.


Figure 14. Probe.xml

2. Save the file.

l) Restart the network using the following command: service network restart

m) Reboot the AVE VM server by using: reboot
n) Once the VM is up, check whether the Avamar services are running (if not, start the services): dpnctl status / dpnctl start
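For reference, steps f through n condense into the following shell sketch, run as root on the AVE VM. The IP address is a placeholder, the IPADDR key format should be checked against the actual ifcfg file, and probe.xml still needs the eth1 entry added by hand as shown in Figure 14:

dpnctl stop                                    # stop the Avamar server services
cd /etc/sysconfig/network
cp -p ifcfg-eth0 ifcfg-eth1                    # clone the eth0 definition for eth1
sed -i "s/^IPADDR=.*/IPADDR='192.0.2.45'/" ifcfg-eth1   # placeholder backup-network IP
vi /usr/local/avamar/var/probe.xml             # add the eth1 properties manually (Figure 14)
service network restart
reboot
# after the VM is back up:
dpnctl status                                  # run 'dpnctl start' if the services are down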

8. Repeat the steps provided in this section to deploy and configure a second AVE instance on the Core Management domain.

Installing Avamar Administrator software

About this task

This section provides instructions to install the Avamar Administrator software on a Microsoft Windows virtual machine.

Steps

1. Open a web browser and type the following URL: https://<Avamar FQDN or IP>/dtlt/home.html.

2. Click Downloads.

3. Navigate to Windows (64-bit) > Microsoft Windows Vista, 7, 8, 8.1, 10 and Microsoft Windows Server 2008, 2008 R2, 2012, 2012 R2 (Console), then download AvamarConsoleMultiple-windows-x86-version.exe.

4. Run the downloaded .exe file and complete the Avamar Administrator software installation.

Import vCenter certificate in Avamar UI

About this task

This section provides instructions to import the VMware vCenter server certificate to Avamar.

Steps

1. Log in to the Core Management vCenter using the VMware vSphere Web Client at: https://vCenter-server-IP-or-FQDN/.

2. On the Getting Started page, click the Download trusted root CA certificates option in the bottom-right of the page.


Figure 15. Download trusted root CA certificates

3. The certificate displays on a new tab. Right-click on it and save the certificate zip file.

4. Extract the certificate from the zip file.

5. Log in to the Avamar UI using the root credentials at: https://<Avamar FQDN or IP>/aui.

6. Navigate to: Administration > System > Certificate > Trust Certificate > Import certificate.

7. On the Import Certificate page, enter the alias name for vCenter.

8. Click the Browse button and navigate to the security certificate for the applicable OS.

9. On the Validation page:

a) Check the Validate the Certificate box.
b) Enter the FQDN or IP of vCenter.
c) Click Validate.

10. Once the validation is successful, click Finish.

11. Repeat the steps provided in this section for all the vCenter Server instances to be added to Avamar.

Add vCenter as an Avamar client in AUI

About this task

This section provides steps to add the VMware vCenter server into Avamar User Interface (AUI) as an Avamar Client.

Prerequisite

• The vCenter certificate is imported to AUI. See Import vCenter certificate in Avamar UI.

Steps

1. Log in to the Avamar AUI using root credentials at: https://<Avamar FQDN or IP>/aui.

2. Navigate to: left pane > Asset Management > Domain.

3. Select domain as root(/).

4. Click Add Client, and then select Add VMware vCenter.

5. In the New vCenter Client window, in the New Client Name or IP field, enter the FQDN or IP of vCenter, then click Next.

6. On the vCenter Information page, provide the required information for the vCenter, then click Next.

7. On the Advanced page, keep the defaults and click Next.


8. On the Optional information page, keep the defaults and click Next.

9. Review the client summary information, and then click ADD.

10. Click Finish.

11. Repeat the steps provided in this section for all the vCenter Server instances that are required to be added in Avamar.

Avamar Proxy installation and configuration

About this task

You must deploy and configure one proxy VM for each vCenter Server to perform image backup and restore operations on that vCenter. In this deployment, a proxy VM is also deployed for each AVE instance.

Prerequisites

• Add DNS entries for each proxy you intend to deploy.
• Add the vCenter as a vCenter client in Avamar. See Add vCenter as an Avamar client in AUI.
• Add two DNS entries for each AVE using the same host name: one for VM-management (Core_Bck_VM_NW) and one for the backup network (Core_Bck_DD_Bck_NW).

Steps

1. Download the proxy appliance template file:

a) Log in to the Avamar Web Restore page at: https://<<Avamar-server-IP-or-FQDN>>.
b) Click Downloads, then expand the folders as: VMware vSphere > EMC Avamar VMware Image Backup > FLR Appliance
c) Click the AvamarCombinedProxy-linux-sles12sp1-x86_64-version.ova file.
d) Save AvamarCombinedProxy-linux-sles12sp1-x86_64-version.ova to a temporary folder, such as C:\Temp, or the desktop.

2. Deploy the proxy appliance in the vCenter:

a) Log in to the Management vCenter of the Core Management domain at: https://<<vCenter-server-IP-or-FQDN:9443>>/.
b) Using the proxy appliance template OVA file, deploy the proxy VMs.

3. Register the proxy with Avamar server:

a) Log in to the Management vCenter of the Core Management domain at: https://<<vCenter-server-IP-or-FQDN>>/.
b) Power on the proxy VM, then open the console.
c) On the Main Menu, type 1, and then press Enter.
d) Enter the IP address of the Avamar server, then press Enter. For this deployment, the backup network IP address of Avamar is provided here.

NOTE: If the Avamar proxy is unable to access the Avamar server, fix the connectivity issue and run the following command from the proxy VM console to register the proxy VM: /usr/local/avamarclient/etc/initproxyappliance.sh register

e) Enter the Avamar server domain name, and then press Enter. The default domain is clients.
f) The registration process starts. After the registration is completed, the Avamar services initialize.
g) Enter Y when prompted by the following text: Do you want this proxy to be managed by PDM in AVE?

OR

If not prompted, run the following command: /usr/local/avamarclient/etc/initproxyappliance.sh manageby

h) Enter the vCenter server FQDN.
i) Enter the vCenter administrator user name.
j) Enter the vCenter administrator password.

4. Configure Proxy settings in Avamar UI:

a) Log in to the Avamar AUI at: https://<<Avamar-server-IP-or-FQDN>>/AUI/.
b) Navigate to: Left pane > Asset Management.
c) From the Domain column, select Clients.
d) Select the proxy VM, then click more actions > Edit Client > VMware.

1. On the Datastore tab, select all the vCenter datastores of the hosts that have VMs you want to protect using this proxy.
2. On the Groups tab, select the check box next to each group to assign this proxy.
3. Click Update to save.


NOTE: In Asset Management, the Enabled and Activated status for the proxy VM should be TRUE.

5. Repeat the steps provided in this section to deploy and configure more proxies for each Source and Destination AVE instance.

Configure MCS support

About this task

In order to support using both image and guest backup to protect the same virtual machine, you must configure the Avamar MCS to allow duplicate client names.

Steps

1. Open the command shell and log in to the Avamar server using admin credentials.

2. Run the following command to stop the MCS: dpnctl stop mcs

3. Open /usr/local/avamar/var/mc/server_data/prefs/mcserver.xml in a text editor.

4. Set the allow_duplicate_client_names entry to true: <entry key="allow_duplicate_client_names" value="true" />

5. Save your changes and close the mcserver.xml file.

6. Run the following command to start the MCS: dpnctl start mcs

7. Run the following command to start the scheduler: dpnctl start sched
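The same change can be scripted. A minimal sketch, run from the admin shell on the Avamar server; the sed expression assumes the entry already exists as value="false" with the spacing shown, otherwise edit the file manually as described above:

dpnctl stop mcs
sed -i 's/key="allow_duplicate_client_names" value="false"/key="allow_duplicate_client_names" value="true"/' /usr/local/avamar/var/mc/server_data/prefs/mcserver.xml
dpnctl start mcs
dpnctl start sched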

Configure Avamar Client for Guest Backup

About this task

Guest backup protects virtual machine data by installing Avamar client software on the virtual machine, then registering and activating that client with an Avamar server. Avamar clients can be configured on both Windows and Linux platforms.

For more information, refer to:

• Dell EMC Avamar Backup Clients User Guide for Linux
• Dell EMC Avamar for Windows Servers User Guide for Windows

Prerequisite

• The Avamar server instance is accessible from the client virtual machine.

Configure Avamar Client for Linux

About this task

This section provides steps to download, install, and register the Avamar Client for a Linux VM. For more information, refer to the Dell EMC Avamar Backup Clients User Guide.

Prerequisites

• Download the UpgradeClientDownloads package from the EMC repository to the Avamar server. After downloading the file, you must install it on the server for the client installation packages to be available. For more information, see the Dell EMC Avamar Backup Clients User Guide.

• Configure Avamar Client for Windows.

Steps

1. Log in to the Avamar Web Restore page at: https://<<AVE-FQDN-OR-IP>>/dtlt/home.html.

2. Download the Avamar client package for Linux then copy the downloaded packages to a temporary folder in Linux client VM.

3. Open the command shell in the Linux client VM, then log in to it using root credentials.

4. Change the directory to the temporary folder where the Avamar client package is copied.

5. Run the following command to install the client software in the directory: rpm -ih package

NOTE: package is the file name of the Avamar client install package.


6. Register and activate the Avamar client with the Avamar server using the following command: /usr/local/avamar/bin/avregister

The client registration and activation script starts.

7. Provide the required information to complete the registration and activation.
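For reference, steps 3 through 6 condense into the following sketch, run as root on the Linux client VM; the package file name is a placeholder that depends on the client platform and Avamar release:

cd /tmp                                     # temporary folder holding the copied package
rpm -ih AvamarClient-linux-<version>.rpm    # placeholder file name; use the actual package
/usr/local/avamar/bin/avregister            # interactive registration and activation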

Configure Avamar Client on Windows

About this task

This section provides steps to download, install, and register the Avamar Client for a Windows VM. For more information, refer to the Dell EMC Avamar for Windows Servers User Guide.

Steps

1. Log in to the Avamar Web Restore page at: https://<<AVE-FQDN-OR-IP>>/dtlt/home.html.

2. Download the Avamar client package, then copy the downloaded packages to a temporary folder in the Windows client VM.

NOTE: You must download both the Avamar client installation package and the Avamar Config Checker installation package and save them to a temporary folder in the Windows client VM.

3. Verify the environment:

a) On the Windows Avamar client VM, unzip the Avamar Config Checker installation packages.
b) Run the setup file to install the software.
c) To start the Config Checker, navigate to: Start > Avamar Config Checker.
d) Provide the required information, then click Run Tests.
e) Save the results in HTML format, then click Finish.
f) Review the HTML result file and correct all the failed checks, if any.
g) Run the Config Checker again to ensure that all the checks are successful.

4. Install and register the Avamar client package:

a) On the Windows Avamar client VM, navigate to the downloaded Avamar client installation package, then double-click it to run.
b) Provide the required information and click Next.
c) On the Please Enter Server Information page, enter the Avamar server IP address in the MC server field.
d) In the MC domain field, enter the Avamar server DNS name, and click Next.
e) Verify the provided information and click Install.
f) Once the installation completes, click Finish.

Data Domain integration with Avamar Server

About this task

This section provides information about how to integrate the Data Domain virtual edition with the Avamar virtual edition.

Prerequisites

• Data Domain is installed and configured. See Deployment and configuration of Data Domain.
• Avamar is installed and configured. See Deployment and configuration of Avamar.
• Ensure forward and reverse DNS lookups work between the following systems:
  • Avamar server
  • Data Domain system
  • Backup and restore clients
• NTP is configured on the Avamar server and Data Domain.
• On the Data Domain, verify that:
  • DD Boost is enabled
  • The PASSPHRASE is set
  • SNMP is configured


Adding the Data Domain System to Avamar

About this task

You can add a Data Domain system to Avamar by authenticating the Data Domain system with credentials. This integration allows you tostore backups on Data Domain and restore the created backups from them. For Replication, see Replication.

Steps

1. Log in to the Avamar Administrator UI at: https://Avamar-server-IP-or-FQDN/AUI.

2. Navigate to: Left pane > Administration > System > Data Domain > Add.The Add Data Domain window opens.

3. On the System page:

a) In the Data Domain System Name field, enter the IP address of the Data Domain system. For this deployment, the backup network IP address is provided in this field.
b) In the User Name (DDBoost) field, enter the DD Boost user name. For this deployment, sysadmin is used.
c) In the Password field, enter the DD Boost user password.
d) In the Verify Password field, enter the password again for verification.
e) In the Misc section:

1. To use this Data Domain as the default replication storage, check the Use system as default Replication storage box.
2. To store checkpoints for a single-node Avamar server or AVE server on the Data Domain system instead of the Avamar server, check the Use as target for Avamar Checkpoint Backups box.

f) Click Validate, then click Next.

4. On the SNMP page:

a) In the Getter/Setter Port Number field, enter the port number on the Data Domain system to receive and set SNMP objects. The default value is 161.
b) In the SNMP Community String box, enter the community string of the Data Domain. See Configuration of DD VE.
c) In the Trap Port Number field, enter the trap port number on the Avamar server. The default value is 163.
d) Click Next.

5. On the Tiering tab, click Finish.

A progress message appears.

6. When the operation completes, click Close.

7. Repeat the steps provided in this section to add more Data Domain systems to Avamar and to integrate the destination Data Domain with Avamar.

Replication

About this task

Replication jobs copy client backups from the source Data Domain to the destination Data Domain. Avamar replication uses DD Boost to copy backups from the original Data Domain system and to create replicas on another Data Domain system.

On a destination Avamar system, replicas are available in the REPLICATE domain. The following details apply to Avamar replication with Data Domain systems:

• Data transfer during replication is between the Data Domain systems, without intermediate staging.
• Replication uses DD Boost to copy backups and to write replicas.
• Replication requires a Data Domain replication license.
• Replication does not use Data Domain replication.
• Replication is configured and monitored on the Avamar server.
• Replication task scheduling uses Avamar replication schedules only.
• Data Domain administration tools are not used.


Add an Avamar system as a replication destination

About this task

This section provides steps to add an Avamar system as a replication destination.

Prerequisite

• The replication destination has a minimum of one DD system attached.

Steps

1. Log in to the Avamar UI at: https://<Avamar FQDN or IP>/aui.

2. Navigate to left pane > Administration > System > Replication Destination > + Add.

3. In the Add New Replication Destination window, in the Name field, enter a reference name for the destination Avamar system.

4. From the Encryption drop-down list, select encryption level.

5. In the Target Server Address field, enter the FQDN of destination Avamar.

6. Enter the root credentials of the destination Avamar system in User ID on target server and Password on target server fields.

7. Click Validate.

8. After the validation, click OK.

Add a replication policy and create a replication group

About this task

This section provides information to create a replication policy and a replication group.

Prerequisite

• Add a destination Avamar server to the configuration on the source Avamar server.

Steps

1. Log in to the Avamar UI at: https://<Avamar FQDN or IP>/aui.

2. Navigate to left pane > Policy > Replication Policy > + Add.

3. On the Properties page, in the Name field, enter a name for replication policy.

4. If pool-based replication is used to enable multiple parallel replication backups from a Data Domain source to a Data Domain destination, select Replicate client backups in parallel. Otherwise, select Replicate client backups in alphabetical order.

For the Replication order of client backups, select one of the following:

• Oldest to Newest begins replication with the oldest backup first.
• Newest to Oldest begins replication with the newest backup first.

5. Click Next.

6. On the Members page, select the Choose Membership radio button to select specific domains or clients.

7. Expand the list of domains/clients, and then select or unselect the checkbox next to a domain/client.

8. Select the members to be added.

9. Click Next.

10. On the Backup Filters page, select the Replicate all backups radio button.

11. On the Schedule and Retention page, keep the default values and click Next.

12. On the Destination page, select an existing destination server and click Next.

13. On the Summary page, verify the provided information and click Finish.


Integration of Avamar with VMware vRealize Operations Manager

About this task

The Avamar Virtual Edition can be integrated with VMware vRealize Operations Manager to:

• Monitor the health of the Avamar and proxy VMs
• Send early warnings and alerts

If the AVE is correctly integrated with the DD VE, vROps also displays the DD VE information on the Avamar dashboard.

Prerequisites

• Download the vROps Management pack for Avamar (Dell EMC Storage Analytics) from the VMware Marketplace.
• All the Avamar and proxy VMs are deployed and configured. See Deployment and configuration of Avamar.

Steps

1. Log in to the vRealize Operations Manager with Administrator credentials.

2. Click Add and upload the Avamar management pack to vROps.

3. In the Solution window, select Avamar (Dell EMC Storage Analytics) Management pack, then complete the installation.

4. In the Solution window, select Avamar Management pack, and then click the Configure icon.

5. Provide the required information for Avamar and then save the settings.

6. Add more AVE instances to vROps using the steps provided in this section.

Integration of Avamar with VMware vRealize Log Insight

About this task

You can configure Avamar with VMware vRealize Log Insight to view the Avamar logs on vRLI. You need to create an Avamar event profile to forward the logs to vRLI.

Prerequisite

• All the Avamar and proxy VMs are deployed and configured. See Deployment and configuration of Avamar.

Steps

1. Log in to the Avamar console as a root user.

2. Configure the log central reporting services:

a) Navigate to the /usr/local/emc-lcrs/etc/ directory.
b) Open the lcrs.ini file in a text editor and update:

1. forward.server: Enter the IP address of the Core Management vRLI master.
2. upload.forward: Set to true to forward the logs to vRLI.

c) Save and close the lcrs.ini file.
d) Repeat the steps provided in this section on the second instance of AVE.
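A minimal sketch of the lcrs.ini edit in step 2, run as root on the AVE; it assumes the file uses key=value lines, and the IP address is a placeholder for the Core Management vRLI master:

cd /usr/local/emc-lcrs/etc
sed -i 's/^forward.server=.*/forward.server=192.0.2.60/' lcrs.ini   # placeholder vRLI master IP
sed -i 's/^upload.forward=.*/upload.forward=true/' lcrs.ini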

3. Configure the Log Forwarding Agent on Avamar Proxy:

a) Log in to the Avamar Proxy server as a root user.
b) Navigate to the /usr/local/avamarclient/etc directory.
c) Open the console and run the following command: ./proxylfa_setup.sh
d) On the Main Menu, four options are displayed.
e) Enter 1 to set up the LCRS IP address, then enter the IP address of the Avamar utility node or AVE running the Log Central Reporting Services.
f) Enter 2 to enable the LFA cron job. The cron job forwards logs from the proxy to the LCRS every 10 minutes.
g) Enter 4 to quit.


h) Repeat the steps provided in this section on all the proxy VMs.

4. Open the Interactive Analytics view in the vRealize Log Insight UI to view Avamar events.

vCloud Director Data Protection Extension

About this task

This section provides information to install and configure the vCloud Director Data Protection Extension (vCD DPE) in the Dell EMC vCloud NFV Edge environment. This extension provides data protection services in vCloud Director. The vCD DPE provides image-based data protection for the workloads running on vCloud Director.

The vCD DPE consists of two types of administrator:

• Backup Administrator or Cloud Administrator (vCD SA) to define and provide data protection services to its tenants.
• Organization Administrator or Tenant Administrator (vCD OA) to manage data protection self-services, including backup, restore, and replication operations.

For more information, see:

• Dell EMC vCloud Director Data Protection Extension Installation and Upgrade Guide
• Dell EMC vCloud Director Data Protection Extension Administration and User Guide

Prerequisites

• Verify all the prerequisites for the following components are satisfied, as specified in the Dell EMC vCloud Director Data Protection Extension Installation and Upgrade Guide:
  • Hardware requirements
  • Software requirements
  • vCloud Director
  • vSphere requirements
• Management and Resource vCenters are added in the AUI as an Avamar Client. See Add vCenter as an Avamar client in AUI.
• The Image proxies are deployed in the resource vCenter. See Avamar Proxy installation and configuration.
• In the mcserver.xml file, the allow_duplicate_client_names entry is set to true. See Configure MCS support.
• Forward and reverse lookup entries are manually created for all instances on the DNS server prior to deployment.
• Management and Resource VCSA clusters, vCD Cells, and Avamar instances are up and running.

vCD DPE installation

About this task

The vCD DPE requires deployment of multiple VMs in the cloud infrastructure, where a VM or a group of VMs is configured with a specific application payload (cell, backup gateway, utility node (RabbitMQ and PostgreSQL), UI server, reporting server, and FLR UI server). Instead of delivering specific OVAs for each application, the vCD DPE comes as a single OVA (the vPA) which acts as a deployment manager. With the Virtual Provisioning Appliance (vPA), all components of the vCD DPE are deployed automatically with a single click.

For more information, refer to the Dell EMC vCloud Director Data Protection Extension Installation and Upgrade Guide.

Deployment of vPA on management vCenter

About this task

The installation of the vCD DPE begins with the deployment of the vPA. After the vPA is deployed, the management tool deploys, upgrades, migrates, and configures the VMs from the vPA. The management tool uses the deploy_plan.conf file for the information required to deploy the VMs.

Steps

1. Log in to the management vCenter of the Core Management domain using VMware vSphere Web Client.

2. Deploy the vPA on the management cluster of the Core Management domain. For this deployment, the vSAN datastore is selected as the datastore and Core_Mgmt_VM_NW as the network port group.

3. After the deployment completes, log in to vPA using root credentials.


NOTE: The first-time password for the root user is changeme.

4. Open the console and run the following command to check the network settings: hostname

5. Verify that the hostname is the same as the FQDN. If it is not, the network settings are not correct. Check the network settings and redeploy the vPA.

Install the VMware components

About this task

After the vPA is deployed, install the VMware OVF tool and VMware vSphere CLI on vPA.

Prerequisites

• Verify that the following packages are available in the /root directory on the vPA:
  • VMware-ovftool-4.3.0-12320924-lin.x86_64.bundle
  • VMware-vSphere-CLI-6.7.0-8156551.x86_64.tar.gz

Steps

1. Log in to vPA using the root credentials.

2. In the /root directory, run the following command to install the VMware OVF tool: VMware-ovftool-4.3.0-12320924-lin.x86_64.bundle

3. Install the VMware vSphere CLI:

a) Extract the tar file using the following command: tar -xzvf VMware-vSphere-CLI-6.7.0-8156551.x86_64.tar.gz
b) Run the following command to change the directory: cd vmware-vsphere-cli-distrib
c) Run the installer file using the following command: ./vmware-install.pl
d) Review and accept the terms in the EULA for the VMware vSphere CLI.
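For reference, steps 2 and 3 condense into the following sketch, run as root on the vPA. Invoking the .bundle through sh is an assumption; the installer can also be made executable and run directly:

cd /root
sh VMware-ovftool-4.3.0-12320924-lin.x86_64.bundle          # OVF tool installer
tar -xzvf VMware-vSphere-CLI-6.7.0-8156551.x86_64.tar.gz
cd vmware-vsphere-cli-distrib
./vmware-install.pl                                         # accept the EULA when prompted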

Deploy the vCD DPE nodes

About this task

This section provides information to deploy vCD DPE nodes in the Dell EMC NFV Edge environment. The management tool (vcp-management-tool) enables you to deploy multiple VMs based on the requirements of your backup environment. The following vCD DPE nodes are deployed:

• Cell
• Backup gateway
• Utility node (RabbitMQ and PostgreSQL)
• UI server
• (Optional) FLR UI server
• (Optional) Reporting server

For more information, refer to the Dell EMC vCloud Director Data Protection Extension Installation and Upgrade Guide.

Prerequisite

• Prepare the deployment plan and provide the values for the deployment plan parameters in the deploy_plan.conf file using the Dell EMC vCloud Director Data Protection Extension Installation and Upgrade Guide.

Steps

1. Log in to vPA using the root credentials.

2. Run the following command to start the deployment process: vcp-management-tool --deploy

3. Configure the AMQP settings for the RabbitMQ server in the vCloud Director UI:

a) Log in to the vCloud Director of the Core Management domain.
b) Navigate to: System > Administration > System Settings > Extensibility.
c) Configure the AMQP settings.


Figure 16. AMQP settings

4. Restart the services:

a) Log in to the cell node of the vPA using the root user credentials.
b) Open the console and run the following commands: systemctl stop vcpsrv and systemctl start vcpsrv
c) Log in to the UI server node using the root user credentials.
d) Run the following commands: systemctl stop vcpui and systemctl start vcpui

NOTE: The deployment process creates a folder with the name truststore within the /root/deploy_plan directory. Do not delete this folder or any files within it. The all-in-one deployment method automatically installs the vCD DPE UI plug-in on the vCloud Director.

Configuration of vCD DPE

About this task

The vCD DPE allows Cloud Administrators to configure and manage backup appliances to map vCD resources on an Avamar backup store (including Avamar with Data Domain).

A backup appliance provides a representation layer to the Avamar server. This enables the backup and recovery operations in vCloud Director. It exposes the backup capacities of Avamar and integrated DD systems to vCloud Director as backup stores.

The Cloud Administrator can use backup repositories to associate an Org VDC with a backup appliance. The backup repositories are logical entities that are mapped to a backup store on a backup appliance with an Org VDC. An Org VDC can map to multiple backup repositories, but only one repository can be made "active" for backups. Non-active repositories are in restore-only mode. This logical construct also enables advanced restore use cases that use filters to look up backups for protected vApps that are deleted or are from another cloud.

After VM or vApp backups have been created and replicated, the administrator (vCD SA or vCD OA) can browse and restore the backups of vApps and individual VMs to their original location or to a new location.


Add a backup appliance

About this task

A backup appliance represents an Avamar server. Before you can configure the vCD DPE to perform backups, the vCD SA must add a backup appliance through the vCD DPE UI.

Steps

1. Log in to the vCD DPE UI using vCD SA credentials at: https://<UI-server-IP-or-FQDN>/vcp-ui-server/vcp-ui/.

2. Navigate to: main menu > Configure > Backup Appliances > Add.

3. Provide the required information to add a backup appliance.

Configure organizations and repositories

About this task

Register an organization with the vCD DPE to associate any of its Org VDCs with a backup appliance, then assign the backup policies to it.

Steps

1. Log in to the vCD DPE UI using vCD SA credentials at: https://<UI-server-IP-or-FQDN>/vcp-ui-server/vcp-ui/.

2. Navigate to: main menu > Configure > Organizations.

3. Select the organization from the list and click Register. After the registration completes, a green check mark appears.

4. To add a repository:

a) On the registered organization, select the desired Org VDC, then on the Repositories tab, click Add.
b) Provide the required information and select the backup appliances from the Select Backup Store section, then click Add.

Configure the backup policy templates

About this task

A catalog holds a collection of backup policy templates. A backup policy template is a combination of a backup schedule, a retention period, and an option set. Backup policies are created using policy templates and control when vApps are backed up, how long to keep the backups, and which, if any, options are invoked during the backup process.

For more information, refer to the Dell EMC vCloud Director Data Protection Extension Administration and User Guide.

Steps

1. Log in to the vCD DPE UI using vCD SA credentials.

2. Create a backup schedule.

3. Create a backup retention period.

4. Create a backup option set.

5. Create a catalog of backup policy templates.

Create a backup policy for an Org VDC

About this task

This section provides the information to create a backup policy and assign it to an Org VDC. For more information, refer to the Dell EMC vCloud Director Data Protection Extension Administration and User Guide.

Steps

1. Log in to the vCD DPE UI using vCD SA credentials at: https://<UI-server-IP-or-FQDN>/vcp-ui-server/vcp-ui/.

2. Navigate to: main menu > Configure > Organizations.

3. In the Organization pane, select the desired organization that the backup policy will be assigned to.

4. On the Policies tab, click Add.


5. Provide the required information to create a new backup policy.

NOTE: It is recommended to set a default backup policy.

Best Practice for vCD DPE

About this task

The following best practices apply to performing backups using the vCD DPE:

• The vCD DPE does not support the backup or restore of fast-provisioned VMs. If you back up vApps and VMs that have been fast-provisioned, restore attempts fail.

• The vCD DPE does not support using the Avamar management console or the AUI to manage backups and backup policies.
• Use the vCloud Director tenant portal UI to perform stand-alone VM backup. The vCD DPE does not support using the vCD DPE UI to perform this task.

Data Protection in the vCloud Director Tenant Portal UI

About this task

In the vCloud Director Tenant portal, the Data Protection option is enabled to perform backup and restore operations for the VMware cloud. The organization or tenant administrator can perform this task and view tenant protection policies, modify policies and monitor policy quotas, and perform virtual machine and vApp backups and recoveries.

The vCloud Director Tenant Portal UI's File Level Restore wizard enables you to restore specific files and folders contained within a vApp backup from a source VM to a destination VM.

To access the vCloud Director tenant portal UI, log in to the vCloud Director using vCD OA credentials at: https://vCD_IP_OR_FQDN/tenant/tenant_name.

For more information, refer to the Dell EMC vCloud Director Data Protection Extension Administration and User Guide.

Figure 17. Data protection on vCloud Director tenant portal

NOTE: You cannot add or modify policies by using the vCloud Director tenant portal UI.


Replication of vApps

About this task

The vCD Data Protection Extension provides the capability to replicate the vApp backups that it has created to a destination Avamar server. To replicate backups, you must create a replication policy and then apply the policy to the vApps whose backups you want to replicate.

For more information, refer to the Dell EMC vCloud Director Data Protection Extension Administration and User Guide.


Bill of materials for Dell EMC Networking

Table 83. Bill of materials - Dell EMC Networking S4048T-ON (three units)

SKU Description Quantity

210-AHMR S4048T-ON, IO-PSU 3

450-AASX 250V,12A,2MTR,C13/C14 6

343-BBEM Doc Kit, S4048T-ON 3

332-1286 US Order 3

973-2426 INFO Declined Remote Consulting Service 3

900-9997 ONSITE INSTL DECLINED 3

Table 84. Bill of materials - Dell EMC Networking Z9264F-ON

SKU Description Quantity

619-AGYQ NW Software NO-OS Installed 1

343-BBKB Dell EMC Z9264 Series User Guide 1

332-1286 US Order 1

210-APGE Z9264F-ON,PSU to IO,2xPSU,No OS 1

817-8294 HW WRTY-SVC NW Z9264F 1

817-8295 RTD PARTS NW Z9264F 1YR 1

817-8296 SW WARRANTY NW Z9264F 90D 1

900-9997 ONSITE INSTL DECLINED 1

450-AASX 250V,12A,2MTR,C13/C14 2

Table 85. Bill of materials - Dell EMC Networking S5232-ON (six units)

SKU Description Quantity

634-BRUO OS10 Enterprise, S5232F-ON 6

343-BBLP Dell EMC S52XX-ON User Guide 6

332-1286 US Order 6

210-APHN S5232F-ON, PSU to IO air, 2x PSU, OS10 6

818-5110 HW WRTY-SVC NW S5232F-ON 6

818-5111 RTD PARTS NW S5232F-ON 1YR 6

818-5112 SW WARRANTY NW S5232F-ON 90D 6

470-ABOV CBL,Q28-Q28,100 G,1M Direct Attach 6

470-ABOR Cbl,100 GbE QSFP28 Pssv, 1 Meter 32

470-ABOS Cbl, 100 GbE QSFP28 Pssv, 2 Meter (optional) 32

470-AAGE Dell, QSFP+ Breakout Cable DAC, 3 m 6

997-6306 Info 3rd Party O/S Warranted by Vendor 6


900-9997 ONSITE INSTL DECLINED 6

450-AASX 250V,12A,2MTR,C13/C14 12

470-AAGE Dell, QSFP+ Breakout Cable DAC, 3m 2


Bill of materials for Dell EMC PowerEdge servers

The default kit consists of eight Dell EMC PowerEdge R740 servers for the management clusters (four servers for each of the Core and Regional Management domains) and sixteen PowerEdge R740xd servers: four are allocated to the Resource cluster and four to the Edge cluster of the Core Management domain, and the Regional Edge site likewise contains four servers for its Resource cluster and four for its Edge cluster. All servers are based on the vSAN Ready Node configuration. Components can be changed according to your needs; however, the primary objective is high availability (HA), with the ability to grow based on VNF use case requirements. Each cluster must have identical node configurations and be vSAN Ready certified by VMware.

NOTE: A minimum of four servers is required for each cluster.
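A quick tally, a sketch assuming only the allocations stated above, confirms the default kit sizes and the four-server-per-cluster minimum:

    # Cluster sizes as described for the default kit (assumed labels)
    MIN_NODES = 4

    clusters = {
        "Core Management (R740)": 4,
        "Regional Management (R740)": 4,
        "Core Resource (R740xd)": 4,
        "Core Edge (R740xd)": 4,
        "Regional Edge site - Resource (R740xd)": 4,
        "Regional Edge site - Edge (R740xd)": 4,
    }

    # Every cluster must meet the minimum node count
    for name, nodes in clusters.items():
        assert nodes >= MIN_NODES, f"{name} is below the {MIN_NODES}-node minimum"

    r740 = sum(n for name, n in clusters.items() if "(R740)" in name)
    r740xd = sum(n for name, n in clusters.items() if "R740xd" in name)
    print(f"PowerEdge R740: {r740} (management), R740xd: {r740xd} (edge/resource)")
    # Expected: 8 R740 and 16 R740xd, matching Tables 86 and 87.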

Table 86. Bill of materials - Dell EMC PowerEdge R740xd (sixteen units) for edge and resource clusters

SKU Description Quantity

210-AKZR PowerEdge R740XD Server 16

461-AADZ No Trusted Platform Module 16

340-BLBE PowerEdge R740XD Shipping 16

343-BBFU PE R740 Shp Mtl 16

370-AAIP Performance Optimized 16

619-ABVR No Operating System 16

421-5736 No Media Required 16

385-BBKT iDrac9, Enterprise 16

528-BBWT OME Server ConfigMgmt 16

379-BCQV iDRAC Group Manager, Enabled 16

384-BBPZ 6 PerfFans forR740/740XD 16

631-AACK No Systems Docs, No OM DVD Kit 16

332-1286 US Order 16

973-2426 INFO Declined Remote Consulting Service 16

800-BBDM UEFI BIOS Boot Mode with GPT Partition 16

450-ADWM Dual, Redundant, Hot-plug PS, 1100 W 16

350-BBJV No Quick Sync 16

770-BBBQ Slide RdyRL, No CMA 16

900-9997 ONSITE INSTL DECLINED 16

379-BCSG iDRAC, Legacy Password 16

384-BBBL Performance BIOS Settings 16

385-BBLE IDSDM and Combo Card Reader 16

385-BBCF Redundant SD Cards Enabled 16

385-BBKG 16 GB microSDHC/SDXC Card 16

385-BBKG 16 GB microSDHC/SDXC Card 16

330-BBHH Riser Config 4, 3x8, 4x16 slots 16


321-BCPY 2.5" Chassis up to 24HD, 2P 16

412-AAIR Standard 2U Heatsink 16

412-AAIR Standard 2U Heatsink 16

780-BCDI No RAID 16

405-AANK HBA330 CTRL Card Adapter, LP 16

400-ASGV 900GB, HDD 15K SAS, 12 Gb, 512n, 2.5, HP 128

540-BBUY X550 QP 10 GbE, Base-T, rNDC 16

813-6236 HW WRTY + SVC,PE R740XD,BZ 16

813-6276 PSP NBD OS,PE R740XD,3YR,BZ 16

813-6277 PSP TECH SPT,PE R740XD,3YR,BZ 16

972-0500 INFO,PSP TECH SPT CONTACT,ENT,BZ 16

350-BBBW No Bezel 16

389-BTTO PE R740XD Luggage Tag 16

492-BBDH C13-C14, PDU, 12 A, 2 ft, 0.6 m, NA 32

400-BCQN 960GB SSD SAS 12 Gbps 512e 2.5 in Hot-Plug 32

338-BSGN Gold 6240 2.6 G, 24.75 M, 150 W 16

329-BEIK PowerEdge R740 MLK Motherboard 16

338-BSGN Gold 6240 2.6 G, 24.75 M, 150 W 16

379-BDCO Additional Processor Selected 16

370-AEPP 2933 MT/s RDIMMs 16

370-AERE 32 GB RDIMM, 2933 MT/s, DR, BCC 96

540-BBIV X710 DP, 10 Gb DA/SFP+, CvNwAd, FH (optional) 40

540-BCDH XXV710 DP 25 GbE PCIe, FH 40

Table 87. Bill of materials - Optional Dell EMC PowerEdge R740 (eight units) for management clusters

SKU Description Quantity

210-AKXJ PowerEdge R740 Server 8

461-AADZ No Trusted Platform Module 8

321-BCSM 2.5" up to 8 SAS/SATA HD, 2P 8

340-BLKS PowerEdge R740 Shipping 8

343-BBFU PE R740 Shp Mtl 8

370-AAIP Performance Optimized 8

780-BCDS Unconfigured RAID 8

619-ABVR No Operating System 8

421-5736 No Media Required 8

385-BBKT iDrac9, Enterprise 8

528-BBWT OME Server ConfigMgmt 8

379-BCQV iDRAC Group Manager, Enabled 8

379-BCSG iDRAC, Legacy Password 8

384-BBPY 6 Standard Fans for R740/740XD 8


450-ADWS Dual, Redundant, Hot-plug PS, 750 W 8

325-BCHU PowerEdge 2U Standard Bezel 8

350-BBKG Dell EMC Luggage Tag 8

631-AACK No Systems Docs, No OM DVD Kit 8

332-1286 US Order 8

412-AAIR Standard 2U Heatsink 8

412-AAIR Standard 2U Heatsink 8

492-BBDH C13-C14, PDU, 12 A, 2 ft, 0.6 m, NA 16

385-BBLE IDSDM and Combo Card Reader 8

770-BBBQ Slide RdyRL, No CMA 8

384-BBBL Performance BIOS Settings 8

350-BBJV No Quick Sync 8

911-0418 ONSITE INSTL DECLINED LA/BZ 8

400-ASGV 900GB, HDD 15K SAS, 12 Gb, 512n, 2.5, HP 64

540-BBUY X550 QP 10 GbE, Base-T, rNDC 8

800-BBDM UEFI BIOS Boot Mode with GPT Partition 8

429-ABBJ No Internal Optical Drive 8

385-BBCF Redundant SD Cards Enabled 8

385-BBKG 16 GB microSDHC/SDXC Card 8

385-BBKG 16 GB microSDHC/SDXC Card 8

400-BCQN 960GB SSD SAS 12 Gbps 512e 2.5 in Hot-Plug 8

405-AANP PERC H330 RAID Adapter, LP 8

338-BSGN Gold 6240 2.6 G, 24.75 M, 150 W 8

329-BEIK PowerEdge R740 MLK Motherboard 8

338-BSGN Gold 6240 2.6 G, 24.75 M, 150 W 8

379-BDCO Additional Processor Selected 8

370-AEPP 2933 MT/s RDIMMs 8

370-AEQH 32GB RDIMM, 2933 MT/s, Dual Rank 48

330-BBGZ Riser Config 1, 4x8 slots 8

540-BBIV X710 DP, 10 Gb DA/SFP+, CvNwAd, FH (optional) 16

540-BCDH XXV710 DP 25 GbE PCIe, FH 16

Table 88. Bill of materials - Dell EMC PowerEdge R640 deployment server

SKU Description Quantity

210-AKWU PowerEdge R640 Server 1

461-AADZ No Trusted Platform Module 1

340-BKNE PowerEdge R640 Shipping 1

370-ABWE DIMM Blanks for System with 2 Processors 1

412-AAIQ Standard 1U Heatsink 1

412-AAIQ Standard 1U Heatsink 1


370-AAIP Performance Optimized 1

619-ABVR No Operating System 1

421-5736 No Media Required 1

385-BBKT iDrac9, Enterprise 1

528-BBWT OME Server ConfigMgmt 1

379-BCQV iDRAC Group Manager, Enabled 1

384-BBQJ 8 Standard Fans for R640 1

450-ADWS Dual, Redundant, Hot-plug PS, 750 W 1

631-AACK No Systems Docs, No OM DVD Kit 1

332-1286 US Order 1

379-BCSG iDRAC, Legacy Password 1

350-BBKB No Quick Sync 1

770-BBBC Slide RdyRL, No CMA 1

900-9997 ONSITE INSTL DECLINED 1

400-ASGV 900GB, HDD 15K SAS, 12 Gb, 512n, 2.5, HP 7

385-BBLE IDSDM and Combo Card Reader 1

492-BBDI C13-C14, PDU, 12 A, 6.5 ft, 2 m, NA 2

800-BBDM UEFI BIOS Boot Mode with GPT Partition 1

330-BBGN Riser Config 2, 3x 16 LP 1

384-BBBL Performance BIOS Settings 1

540-BBUY X550 QP 10 GbE, Base-T, rNDC 1

385-BBCF Redundant SD Cards Enabled 1

385-BBKG 16 GB microSDHC/SDXC Card 1

385-BBKG 16 GB microSDHC/SDXC Card 1

780-BCDI No RAID 1

405-AAJU HBA330 12 Gbps Cntrllr Mncrd 1

400-ASFI 960G SSD SAS, MU, 12, 2.5, HP, PX05SV 1

321-BCQJ 2.5" Chassis with up to 8 HD, 3 PCIe 1

343-BBEV PE R640 x8 Drive Shp Mtl 1

429-ABBF No Internal Optical Drive x4, x8 Chassis 1

350-BBBW No Bezel 1

350-BBJS Dell EMC Luggage Tag 1

555-BCKN X710 DP, 10 Gb DA/SFP+, CvNwAd, LP 1

338-BSGN Gold 6240 2.6 G, 24.75 M, 150 W 1

329-BEIJ PowerEdge R640 MLK Motherboard 1

338-BSGN Gold 6240 2.6 G, 24.75 M, 150 W 1

379-BDCO Additional Processor Selected 1

370-AEPP 2933 MT/s RDIMMs 1

370-AEQH 32 GB RDIMM, 2933 MT/s, Dual Rank 6


Avamar/Data Domain Data Protection

Avamar and Data Domain Data Protection utilize two Dell EMC PowerEdge R740xd servers, one for backup and another for replication. The Bill of materials - Dell EMC PowerEdge R740xd (sixteen units) for edge and resource clusters table (Table 86) can be used to order the two servers; however, only two PCIe NICs are required for each server.


Reference documentation

For additional system information, see the following documents:

Dell EMC PowerEdge R640 Installation and Service Manual

Dell EMC PowerEdge R740 Installation and Service Manual

Dell EMC PowerEdge R740xd Owner's Manual

Dell EMC Avamar for VMware

VMware vCloud NFV Reference Architecture

vCloud Director Installation, Configuration, and Upgrade Guide

VMware NSX Data Center for vSphere Documentation

VMware vSphere Documentation

Postman Documentation

Avamar Virtual Edition 19.2 Installation and Upgrade Guide

Data Domain Virtual Edition Installation and Administration Guide
