Setup Guide

NOS 3.5

26-Sep-2013


Notice

Copyright

Copyright 2013 Nutanix, Inc.

Nutanix, Inc.
1740 Technology Drive, Suite 400
San Jose, CA 95110

All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

Conventions

Convention            Description
variable_value        The action depends on a value that is unique to your environment.
ncli> command         The commands are executed in the Nutanix nCLI.
user@host$ command    The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@host# command    The commands are executed as the root user in the hypervisor host (vSphere or KVM) shell.
output                The information is displayed as output from a command or in a log file.

Default Cluster Credentials

Interface                        Target                   Username    Password
Nutanix web console              Nutanix Controller VM    admin       admin
vSphere client                   ESXi host                root        nutanix/4u
SSH client or console            ESXi host                root        nutanix/4u
SSH client or console            KVM host                 root        nutanix/4u
SSH client                       Nutanix Controller VM    nutanix     nutanix/4u
IPMI web interface or ipmitool   Nutanix node             ADMIN       ADMIN
IPMI web interface or ipmitool   Nutanix node (NX-3000)   admin       admin

Version

Last modified: September 26, 2013 (2013-09-26-13:03 GMT-7)


Contents

Overview .............................................................. 4
    Setup Checklist ................................................... 4
    Nonconfigurable Components ........................................ 4
    Reserved IP Addresses ............................................. 5
    Network Information ............................................... 5
    Product Mixing Restrictions ....................................... 6
    Three Node Cluster Considerations ................................. 7

1: IP Address Configuration ........................................... 8
    To Configure the Cluster .......................................... 8
    To Configure the Cluster in a VLAN-Segmented Network .............. 11
        To Assign VLAN Tags to Nutanix Nodes .......................... 11
        To Configure VLANs (KVM) ...................................... 12

2: Storage Configuration .............................................. 14
    To Create the Datastore ........................................... 14

3: vCenter Configuration .............................................. 16
    To Create a Nutanix Cluster in vCenter ............................ 16
    To Add a Nutanix Node to vCenter .................................. 19
    vSphere Cluster Settings .......................................... 21

4: Final Configuration ................................................ 23
    To Set the Timezone of the Cluster ................................ 23
    To Make Optional Settings ......................................... 23
    Diagnostics VMs ................................................... 24
        To Run a Test Using the Diagnostics VMs ....................... 24
        Diagnostics Output ............................................ 25
    To Test Email Alerts .............................................. 26
    To Check the Status of Cluster Services ........................... 27

Appendix A: Manual IP Address Configuration ........................... 28
    To Verify IPv6 Link-Local Connectivity ............................ 28
    To Configure the Cluster (Manual) ................................. 29
    Remote Console IP Address Configuration ........................... 31
        To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000) ... 31
        To Configure the Remote Console IP Address (NX-3000) .......... 32
        To Configure the Remote Console IP Address (NX-2000) .......... 32
        To Configure the Remote Console IP Address (command line) ..... 33
    To Configure Host Networking ...................................... 34
    To Configure Host Networking (KVM) ................................ 35
    To Configure the Controller VM IP Address ......................... 36


Overview

This guide provides step-by-step instructions on the post-shipment configuration of a Nutanix Virtual Computing Platform.

Nutanix support recommends that you review field advisories on the support portal before installing a cluster.

Setup Checklist

• Confirm network settings with customer. (Network Information on page 5)
• Unpack and rack cluster hardware. (Refer to the Physical Installation Guide for your hardware model.)
• Connect network and power cables. (Refer to the Physical Installation Guide for your hardware model.)
• Assign IP addresses to all nodes in the cluster. (IP Address Configuration on page 8)
• Configure storage for the cluster. (Storage Configuration on page 14)
• (vSphere only) Add the vSphere hosts to the customer vCenter. (vCenter Configuration on page 16)
• Set the timezone of the cluster. (To Set the Timezone of the Cluster on page 23)
• Make optional configurations. (To Make Optional Settings on page 23)
• Run a performance diagnostic. (Diagnostics VMs on page 24)
• Test email alerts. (To Test Email Alerts on page 26)
• Confirm that the cluster is running. (To Check the Status of Cluster Services on page 27)

Nonconfigurable Components

The components listed here are configured by the Nutanix manufacturing process. Do not modify any of these components except under the direction of Nutanix support.


Caution: Modifying any of the settings listed here may render your cluster inoperable.

In particular, do not under any circumstances use the Reset System Configuration option of ESXi or delete the Nutanix Controller VM.

Nutanix Software

• Local datastore name
• Settings and contents of any Controller VM, including the name

Important: If you create vSphere resource pools, Nutanix Controller VMs must have the top share.

ESXi Settings

• NFS settings
• VM swapfile location
• VM startup/shutdown order
• iSCSI software adapter settings
• vSwitchNutanix standard virtual switch
• vmk0 interface in port group "Management Network"
• SSH enabled
• Firewall disabled

KVM Settings

• iSCSI settings
• Open vSwitch settings

Reserved IP Addresses

The Nutanix cluster uses the following IP addresses for internal communication:

• 192.168.5.1
• 192.168.5.2
• 192.168.5.254

Important: The ESXi and CVM interfaces on vSwitch0 cannot use IP addresses in any subnets that overlap with subnet 192.168.5.0/24.

Network Information

Confirm the following network information with the customer before the new block or blocks are connected to the customer network.

• 10 Gbps Ethernet ports [NX-3000, NX-3050, NX-6000: 2 per node/8 per block] [NX-2000: 1 per node/4 per block]
• (optional) 1 Gbps Ethernet ports [1-2 per node/4-8 per block]
• 10/100 Mbps Ethernet ports [1 per node/4 per block]
• Default gateway
• Subnet mask
• (optional) VLAN ID
• NTP servers
• DNS domain
• DNS servers
• Host server IP addresses (remote console) [1 per node/4 per block]
• Host server IP addresses (hypervisor management) [1 per node/4 per block]
• Nutanix Controller VM IP addresses [1 per node/4 per block]
• Reverse SSH port (outgoing connection to nsc.nutanix.com) [default 80]
• (optional) HTTP proxy for reverse SSH port

Product Mixing Restrictions

While a Nutanix cluster can include different products, there are some restrictions.

Caution: Do not configure a cluster that violates any of the following rules.

Compatibility Matrix

             NX-1000  NX-2000  NX-2050  NX-3000  NX-3050  NX-6000
NX-1000 (1)     •        •        •        •        •        •
NX-2000         •        •        •        •        •
NX-2050         •        •        •        •        •        •
NX-3000         •        •        •        •        •        •
NX-3050         •        •        •        •        •        •
NX-6000 (2)     •                 •        •        • (3)    •

1. NX-1000 nodes can be mixed with other products in the same cluster only when they are running 10 GbE networking; they cannot be mixed when running 1 GbE networking. If NX-1000 nodes are using the 1 GbE interface, the maximum cluster size is 8 nodes. If the nodes are using the 10 GbE interface, the cluster has no limits other than the maximum supported cluster size that applies to all products.

2. NX-6000 nodes cannot be mixed with NX-2000 nodes in the same cluster.

3. Because it has a larger Flash tier, the NX-3050 is recommended over other products for mixing with the NX-6000.

• Any combination of NX-2000, NX-2050, NX-3000, and NX-3050 nodes can be mixed in the same cluster.
• All nodes in a cluster must be the same hypervisor type (ESXi or KVM).
• All Controller VMs in a cluster must have the same NOS version.
• Mixed Nutanix clusters comprising NX-2000 nodes and other products are supported as specified above. However, because the NX-2000 processor architecture differs from other models, vSphere does not support enhanced/live vMotion of VMs from one type of node to another unless Enhanced vMotion Capability (EVC) is enabled. For more information about EVC, see the vSphere 5 documentation and the following VMware knowledge base articles:

  • Enhanced vMotion Compatibility (EVC) processor support [1003212]
  • EVC and CPU Compatibility FAQ [1005764]

Overview | Setup Guide | NOS 3.5 | 7

Three Node Cluster Considerations

A Nutanix cluster must have at least three nodes. Minimum configuration (three node) clusters provide the same protections as larger clusters, and a three node cluster can continue normally after a node failure. However, one condition applies to three node clusters only.

When a node failure occurs in a cluster containing four or more nodes, you can dynamically remove that node to bring the cluster back into full health. The newly configured cluster will still have at least three nodes, so the cluster is fully protected. You can then replace the failed hardware for that node as needed and add the node back into the cluster as a new node. However, when the cluster has just three nodes, the failed node cannot be dynamically removed from the cluster. The cluster continues running without interruption on two healthy nodes and one failed node, but the failed node cannot be removed when there are only two healthy nodes. Therefore, the cluster is not fully protected until you fix the problem (such as replacing a bad root disk) on the failed node.


1: IP Address Configuration

NOS includes a web-based configuration tool that automates the modification of Controller VM IP addresses and configures the cluster to use these new addresses. Other cluster components must be modified manually.

Requirements

The web-based configuration tool requires that IPv6 link-local be enabled on the subnet. If IPv6 link-local is not available, you must configure the Controller VM IP addresses and the cluster manually. The web-based configuration tool also requires that the Controller VMs be able to communicate with each other.

All Controller VMs and hypervisor hosts must be on the same subnet. If the IPMI interfaces are connected, Nutanix recommends that they be on the same subnet as the Controller VMs and hypervisor hosts.

Guest VMs can be on a different subnet.

To Configure the Cluster

Before you begin.

• Confirm that the system you are using to configure the cluster meets the following requirements:

  • IPv6 link-local enabled.
  • Windows 7, Vista, or Mac OS.
  • (Windows only) Bonjour installed (included with iTunes or downloadable from http://support.apple.com/kb/DL999).

• Determine the IPv6 service name of any Controller VM in the cluster.

IPv6 service names are uniquely generated at the factory and have the following form (note the final period):

NTNX-block_serial_number-node_location-CVM.local.

On the right side of the block toward the front is a label that has the block_serial_number (for example, 12AM3K520060). The node_location is a number 1-4 for NX-3000, a letter A-D for NX-1000/NX-2000/NX-3050, or a letter A-B for NX-6000.
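For example, node 1 of the block with serial number 12AM3K520060 would advertise the service name NTNX-12AM3K520060-1-CVM.local. (the same example values used in the URL later in this procedure). As a quick sanity check, assuming Bonjour/mDNS name resolution is working on your workstation, you can ping the service name directly:

user@host$ ping NTNX-12AM3K520060-1-CVM.local.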


If you need to confirm whether IPv6 link-local is enabled on the network, or if you do not have access to get the node serial number, see the Nutanix support knowledge base for alternative methods.

1. Open a web browser.

Nutanix recommends using Internet Explorer 9 for Windows and Safari for Mac OS.

Note: Internet Explorer requires protected mode to be disabled. Go to Tools > Internet Options > Security, clear the Enable Protected Mode check box, and restart the browser.

2. Navigate to http://cvm_host_name:2100/cluster_init.html.

Replace cvm_host_name with the IPv6 service name of any Controller VM that will be added to the cluster.


Following is an example URL to access the cluster creation page on a Controller VM:

http://NTNX-12AM3K520060-1-CVM.local.:2100/cluster_init.html

If the cluster_init.html page is blank, then the Controller VM is already part of a cluster. Connect to a Controller VM that is not part of a cluster.

3. Type a meaningful value in the Cluster Name field.

This value is appended to all automated communication between the cluster and Nutanix support. It should include the customer's name and, if necessary, a modifier that differentiates this cluster from any other clusters that the customer might have.

Note: This entity has the following naming restrictions:

• The maximum length is 75 characters.
• Allowed characters are uppercase and lowercase standard Latin letters (A-Z and a-z), decimal digits (0-9), dots (.), hyphens (-), and underscores (_).
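As an illustrative aside (not part of the official procedure), the restrictions above amount to a simple character-class check that you can run in any shell before typing the name into the form:

user@host$ # prints the name only if it satisfies the naming restrictions
user@host$ echo "AcmeCorp-Cluster_1" | grep -E '^[A-Za-z0-9._-]{1,75}$'
AcmeCorp-Cluster_1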

4. Type the appropriate DNS and NTP addresses in the respective fields.

5. Type the appropriate subnet masks in the Subnet Mask row.

6. Type the appropriate default gateway IP addresses in the Default Gateway row.

7. Select the check box next to each node that you want to add to the cluster.

All unconfigured nodes on the current network are presented on this web page. If you will be configuring multiple clusters, be sure that you select only the nodes that should be part of the current cluster.

8. Provide an IP address for all components in the cluster.

Note: The unconfigured nodes are not listed according to their position in the block. Ensure that you assign the intended IP address to each node.

9. Click Create.

Wait until the Log Messages section of the page reports that the cluster has been successfully configured.

Output similar to the following indicates successful cluster configuration.

Configuring IP addresses on node 12AM2K420010/A...
Configuring IP addresses on node 12AM2K420010/B...
Configuring IP addresses on node 12AM2K420010/C...
Configuring IP addresses on node 12AM2K420010/D...
Configuring Zeus on node 12AM2K420010/A...
Configuring Zeus on node 12AM2K420010/B...
Configuring Zeus on node 12AM2K420010/C...
Configuring Zeus on node 12AM2K420010/D...
Initializing cluster...
Cluster successfully initialized!
Initializing the cluster DNS and NTP servers...
Successfully updated the cluster NTP and DNS server list

10. Log on to any Controller VM in the cluster with SSH.

11. Start the Nutanix cluster.

nutanix@cvm$ cluster start

If the cluster starts properly, output similar to the following is displayed for each node in the cluster:

CVM: 172.16.8.167 Up, ZeusLeader
    Zeus                UP  [3148, 3161, 3162, 3163, 3170, 3180]
    Scavenger           UP  [3333, 3345, 3346, 11997]
    ConnectionSplicer   UP  [3379, 3392]
    Hyperint            UP  [3394, 3407, 3408, 3429, 3440, 3447]
    Medusa              UP  [3488, 3501, 3502, 3523, 3569]
    DynamicRingChanger  UP  [4592, 4609, 4610, 4640]
    Pithos              UP  [4613, 4625, 4626, 4678]
    Stargate            UP  [4628, 4647, 4648, 4709]
    Cerebro             UP  [4890, 4903, 4904, 4979]
    Chronos             UP  [4906, 4918, 4919, 4968]
    Curator             UP  [4922, 4934, 4935, 5064]
    Prism               UP  [4939, 4951, 4952, 4978]
    AlertManager        UP  [4954, 4966, 4967, 5022]
    StatsAggregator     UP  [5017, 5039, 5040, 5091]
    SysStatCollector    UP  [5046, 5061, 5062, 5098]

To Configure the Cluster in a VLAN-Segmented Network

The automated IP address and cluster configuration utilities depend on Controller VMs being able to communicate with each other. If the customer network is segmented using VLANs, that communication is not possible until the Controller VMs are assigned to a valid VLAN.

Note: The web-based configuration tool requires that IPv6 link-local be enabled on the subnet. If IPv6 link-local is not available, see To Configure the Cluster (Manual) on page 29.

1. Configure the IPMI IP addresses by following the procedure for your hardware model.

→ To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000) on page 31
→ To Configure the Remote Console IP Address (NX-3000) on page 32
→ To Configure the Remote Console IP Address (NX-2000) on page 32

Alternatively, you can set the IPMI IP address using a command-line utility by following To Configure the Remote Console IP Address (command line) on page 33.

2. Configure the hypervisor host IP addresses.

→ vSphere: To Configure Host Networking on page 34
→ KVM: To Configure Host Networking (KVM) on page 35

3. Assign VLAN tags to the hypervisor hosts and Controller VMs by following the procedure for your hypervisor.

→ vSphere: To Assign VLAN Tags to Nutanix Nodes on page 11
→ KVM: To Configure VLANs (KVM) on page 12

4. Configure the Controller VM IP addresses and the cluster using the automated utilities by following To Configure the Cluster on page 8.

To Assign VLAN Tags to Nutanix Nodes

1. Assign the ESXi hosts to the pre-defined host VLAN.

a. Access the ESXi host console.

b. Press F2 and then provide the ESXi host logon credentials.

c. Press the down arrow key until Configure Management Network is highlighted and then press Enter.

d. Select VLAN (optional) and press Enter.

IP Address Configuration | Setup Guide | NOS 3.5 | 12

e. Type the VLAN ID specified by the customer and press Enter.

f. Press Esc and then Y to apply all changes and restart the management network.

g. Repeat this process for all remaining ESXi hosts.

2. Assign the Controller VMs to the pre-defined virtual machine VLAN.

a. Log on to an ESXi host with the vSphere client.

b. Select the host and then click the Configuration tab.

c. Click Networking.

d. Click the Properties link above vSwitch0.

e. Select VM Network and then click Edit.

f. Type the VLAN ID specified by the customer and click OK.

g. Click Close.

h. Repeat this process for all remaining ESXi hosts.

To Configure VLANs (KVM)

In an environment with separate VLANs for hosts and guest VMs, VLAN tagging is configured differently for each type of VLAN.

Perform these steps on every KVM host in the cluster.

1. Log on to the KVM host with SSH.

2. Configure VLAN tagging on the host interface.

a. Set the tag for the bridge.

root@kvm# ovs-vsctl set port br0 tag=host_vlan_tag

Replace host_vlan_tag with the VLAN tag for hosts.

b. Confirm VLAN tagging on the interface.

root@kvm# ovs-vsctl list port br0

Check the value of the tag parameter that is shown.
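For example, if the customer's host VLAN tag is 10 (a placeholder value), the command and the check would look like this:

root@kvm# ovs-vsctl set port br0 tag=10
root@kvm# ovs-vsctl list port br0 | grep tag
tag                 : 10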

3. Configure VLAN tagging for guest VMs.

a. Copy the existing network configuration and open the configuration file.

root@kvm# virsh net-dumpxml VM-Network > /tmp/network.xml
root@kvm# vi /tmp/network.xml

b. Update the configuration file to describe the new network.

• Delete the uuid and mac parameters.
• Change the name and portgroup name parameters.
• Add a vlan tag element with the ID for the guest VM VLAN.

IP Address Configuration | Setup Guide | NOS 3.5 | 13

The resulting configuration file should look like this.

<network connections='1'>
  <name>new_network_name</name>
  <forward mode='bridge'/>
  <bridge name='br0'/>
  <virtualport type='openvswitch'/>
  <portgroup name='new_network_name' default='yes'>
  </portgroup>
  <vlan>
    <tag id='vm_vlan_tag'/>
  </vlan>
</network>

• Replace new_network_name with the desired name for the network. Ensure that both instances of this parameter match.
• Replace vm_vlan_tag with the VLAN tag for guest VMs.

c. Create the new network.

root@kvm# virsh net-define /tmp/network.xml

d. Start the new network.

root@kvm# virsh net-start new_network_name
root@kvm# virsh net-autostart new_network_name

e. Confirm that the new network is running.

root@kvm# virsh net-list --all

To create a VM on this VLAN, specify new_network_name instead of VM-Network.
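As a hedged illustration, assuming the network was named vlan20-net and an existing guest is named testvm (both hypothetical values), you could attach an interface on that network with virsh:

root@kvm# virsh attach-interface testvm network vlan20-net --model virtio --config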


2: Storage Configuration

As part of the setup process, you need to create the following entities:

• 1 storage pool that comprises all physical disks in the cluster.
• 1 container that uses all available storage capacity in the pool.
• 1 NFS datastore that is mounted from all hosts in the cluster.

A single datastore comprising all available cluster storage will suit the needs of most customers. If the customer requests additional NFS datastores during setup, you can create the necessary containers, and then mount them as datastores. For future datastore needs, refer the customer to the Nutanix Administration Guide.

To Create the Datastore

1. Sign in to the Nutanix web console.

2. In the Storage dashboard, click the Storage Pool button.
The Create Storage Pool dialog box appears.

3. In the Name field, enter a name for the storage pool.

• For vSphere clusters, name the storage pool sp1.
• For KVM clusters, name the storage pool default.

4. In the Capacity field, check the box to use the available unallocated capacity for this storage pool.

5. When all the field entries are correct, click the Save button.


6. In the Storage dashboard, click the Container button.
The Create Container dialog box appears.

7. Create the container.

Do the following in the indicated fields:

a. Name: Enter a name for the container.

• For vSphere clusters, name the container nfs-ctr.
• For KVM clusters, name the container default.

b. Storage Pool: Select the sp1 (vSphere) or default (KVM) storage pool from the drop-down list.

The following field, Max Capacity, displays the amount of free space available in the selected storage pool.

c. NFS Datastore: Select the Mount on all hosts button to mount the container on all hosts.

d. When all the field entries are correct, click the Save button.


3: vCenter Configuration

VMware vCenter enables the centralized management of multiple ESXi hosts. The Nutanix cluster in vCenter must be configured according to Nutanix best practices.

While most customers prefer to use an existing vCenter, Nutanix provides a vCenter OVF, which is on the Controller VMs in /home/nutanix/data/images/vcenter. You can deploy the OVF using the standard procedures for vSphere.

To Create a Nutanix Cluster in vCenter

1. Log on to vCenter with the vSphere client.

2. If you want the Nutanix cluster to be in its own datacenter or if there is no datacenter, click File > New > Datacenter and type a meaningful name for the datacenter, such as NTNX-DC. Otherwise, proceed to the next step.

You can also create the Nutanix cluster within an existing datacenter.

3. Right-click the datacenter node and select New Cluster.

4. Type a meaningful name for the cluster in the Name field, such as NTNX-Cluster.

5. Select the Turn on vSphere HA check box and click Next.

6. Select Admission Control > Enable.

7. Select Admission Control Policy > Percentage of cluster resources reserved as failover spare capacity and enter the percentage appropriate for the number of Nutanix nodes in the cluster, then click Next.

Hosts (N+1)  Percentage   Hosts (N+2)  Percentage   Hosts (N+3)  Percentage   Hosts (N+4)  Percentage
     1       N/A               9       23%              17       18%              25       16%
     2       N/A              10       20%              18       17%              26       15%
     3       33%              11       18%              19       16%              27       15%
     4       25%              12       17%              20       15%              28       14%
     5       20%              13       15%              21       14%              29       14%
     6       18%              14       14%              22       14%              30       13%
     7       15%              15       13%              23       13%              31       13%
     8       13%              16       13%              24       13%              32       13%

8. Click Next on the following three pages to accept the default values.

• Virtual Machine Options
• VM monitoring
• VMware EVC


9. Verify that Store the swapfile in the same directory as the virtual machine (recommended) is selected and click Next.

10. Review the settings and then click Finish.

11. Add all Nutanix nodes to the vCenter cluster inventory.

See To Add a Nutanix Node to vCenter on page 19.

12. Right-click the Nutanix cluster node and select Edit Settings.

13. If vSphere HA and DRS are not enabled, select them on the Cluster Features page. Otherwise, proceed to the next step.

Note: vSphere HA and DRS must be configured even if the customer does not plan to use the features. The settings will be preserved within the vSphere cluster configuration, so if the customer later decides to enable the feature, it will be pre-configured based on Nutanix best practices.

14. Configure vSphere HA.

a. Select vSphere HA > Virtual Machine Options.

b. Change the VM restart priority of all Controller VMs to Disabled.

Tip: Controller VMs include the phrase CVM in their names. It may be necessary to expand the Virtual Machine column to view the entire VM name.

c. Change the Host Isolation Response setting of all Controller VMs to Leave Powered On.


d. Select vSphere HA > VM Monitoring.

e. Change the VM Monitoring setting for all Controller VMs to Disabled.

f. Select vSphere HA > Datastore Heartbeating.

g. Click Select only from my preferred datastores and select the Nutanix datastore (NTNX-NFS).

h. If the cluster has only one datastore as recommended, click Advanced Options, add an Option named das.ignoreInsufficientHbDatastore with Value of true, and click OK.

i. If the cluster does not use vSphere HA, disable it on the Cluster Features page. Otherwise, proceed to the next step.

15. Configure vSphere DRS.

a. Select vSphere DRS > Virtual Machine Options.

b. Change the Automation Level setting of all Controller VMs to Disabled.


c. Select vSphere DRS > Power Management.

d. Confirm that Off is selected as the default power management for the cluster.

e. If the cluster does not use vSphere DRS, disable it on the Cluster Features page. Otherwise, proceed to the next step.

16. Click OK to close the cluster settings window.

To Add a Nutanix Node to vCenter

The cluster must be configured according to Nutanix specifications given in vSphere Cluster Settings on page 21.

Tip: Refer to Default Cluster Credentials on page 2 for the default credentials of all cluster components.

1. Log on to vCenter with the vSphere client.

2. Right-click the cluster and select Add Host.

3. Type the IP address of the ESXi host in the Host field.

4. Enter the ESXi host logon credentials in the Username and Password fields.

5. Click Next.

If a security or duplicate management alert appears, click Yes.

6. Review the Host Summary page and click Next.

7. Select a license to assign to the ESXi host and click Next.

8. Ensure that the Enable Lockdown Mode check box is left unselected and click Next.

Lockdown mode is not supported.

9. Click Finish.

10. Select the ESXi host and click the Configuration tab.

11. Configure DNS servers.

a. Click DNS and Routing > Properties.

b. Select Use the following DNS server address.


c. Type DNS server addresses in the Preferred DNS Server and Alternate DNS Server fields and click OK.

12. Configure NTP servers.

a. Click Time Configuration > Properties > Options > NTP Settings > Add.

b. Type the NTP server address.

Add multiple NTP servers if required.

c. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows.

d. Click Time Configuration > Properties > Options > General.

e. Select Start automatically under Startup Policy.

f. Click Start.

g. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows.

13. Click Storage and confirm that NFS datastores are mounted.

14. Set the Controller VM to start automatically when the ESXi host is powered on.

a. Click the Configuration tab.

b. Click Virtual Machine Startup/Shutdown in the Software frame.

c. Select the Controller VM and click Properties.

d. Ensure that the Allow virtual machines to start and stop automatically with the system check box is selected.

e. If the Controller VM is listed in Manual Startup, click Move Up to move the Controller VM into the Automatic Startup section.


f. Click OK.

15. (NX-2000 only) Click Host Cache Configuration and confirm that the host cache is stored on the local datastore.

If it is not correct, click Properties to update the location.

vSphere Cluster Settings

Certain vSphere cluster settings are required for Nutanix clusters.

vSphere HA and DRS must be configured even if the customer does not plan to use the feature. The settings will be preserved within the vSphere cluster configuration, so if the customer later decides to enable the feature, it will be pre-configured based on Nutanix best practices.

vSphere HA Settings

Enable host monitoring.

Enable admission control and use the percentage-based policy with a value based on the number of nodes in the cluster.

Set the VM Restart Priority of all Controller VMs to Disabled.

Set the Host Isolation Response of all Controller VMs to Leave Powered On.

Disable VM Monitoring for all Controller VMs.

Enable Datastore Heartbeating by clicking Select only from my preferred datastores and choosing the Nutanix NFS datastore.

If the cluster has only one datastore, add an advanced option das.ignoreInsufficientHbDatastore=true.


vSphere DRS Settings

Disable automation on all Controller VMs.

Leave power management disabled (set to Off).

Other Cluster Settings

Store VM swapfiles in the same directory as the virtual machine.

(NX-2000 only) Store host cache on the local datastore.

Failover Reservation Percentages

Hosts (N+1)  Percentage   Hosts (N+2)  Percentage   Hosts (N+3)  Percentage   Hosts (N+4)  Percentage
     1       N/A               9       23%              17       18%              25       16%
     2       N/A              10       20%              18       17%              26       15%
     3       33%              11       18%              19       16%              27       15%
     4       25%              12       17%              20       15%              28       14%
     5       20%              13       15%              21       14%              29       14%
     6       18%              14       14%              22       14%              30       13%
     7       15%              15       13%              23       13%              31       13%
     8       13%              16       13%              24       13%              32       13%


4: Final Configuration

The final steps in the Nutanix block setup are to set the timezone, test email alerts, and confirm that the cluster is running.

To Set the Timezone of the Cluster

1. Log on to any Controller VM in the cluster with SSH.

2. Locate the timezone template for the customer site.

nutanix@cvm$ ls /usr/share/zoneinfo/*

The timezone templates of some common timezones are shown below.

Location        Timezone Template
US East Coast   /usr/share/zoneinfo/US/Eastern
England         /usr/share/zoneinfo/Europe/London
Japan           /usr/share/zoneinfo/Asia/Tokyo

3. Copy the timezone template to all Controller VMs in the cluster.

nutanix@cvm$ for i in `svmips`; do echo $i; ssh $i "sudo cp template_path /etc/localtime; date"; done

Replace template_path with the location of the desired timezone template.

If a host authenticity warning is displayed, type yes to continue connecting.
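For example, for a US East Coast site, using the template path from the table above, the command would be:

nutanix@cvm$ for i in `svmips`; do echo $i; ssh $i "sudo cp /usr/share/zoneinfo/US/Eastern /etc/localtime; date"; done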

The expected output is the IP address of each Controller VM and the current time in the desired timezone, for example:

192.168.1.200
Fri Jan 25 19:43:32 GMT 2013

To Make Optional Settings

You can make one or more of the following settings if necessary to meet customer requirements.

• Add customer email addresses to alerts.

Web Console > Alert Email Contacts

nCLI ncli> cluster add-to-email-contacts email-addresses="customer_email"

Replace customer_email with a comma-separated list of customer email addresses to receive alert messages.

Final Configuration | Setup Guide | NOS 3.5 | 24

• Specify an outgoing SMTP server.

Web Console > SMTP Server

nCLI ncli> cluster set-smtp-server address="smtp_address"

Replace smtp_address with the IP address or name of the SMTP server to use foralert messages.

• If the site security policy allows the remote support tunnel, enable it.

Warning: Failing to enable remote support prevents Nutanix support from directly addressing cluster issues. Nutanix recommends that all customers allow email alerts at minimum because it allows proactive support of customer issues.

Web Console > Remote Support Services > Enable

nCLI ncli> cluster start-remote-support

• If the site security policy does not allow email alerting, disable it.

Web Console > Email Alert Services > Disable

nCLI ncli> cluster stop-email-alerts

Diagnostics VMs

Nutanix provides a diagnostics capability to allow partners and customers to run performance tests on the cluster. It is a useful tool in pre-sales demonstrations of the cluster and in identifying the source of performance issues in a production cluster. Diagnostics should also be run as part of setup to ensure that the cluster is running properly before the customer takes ownership of it.

The diagnostic utility deploys a VM on each node in the cluster. The Controller VMs control the diagnostic VM on their hosts and report the results back to a single system.

The diagnostics tests provide the following data:

• Sequential write bandwidth
• Sequential read bandwidth
• Random read IOPS
• Random write IOPS

Because the test creates new cluster entities, it is necessary to run a cleanup script when you are finished.

To Run a Test Using the Diagnostics VMs

Before you begin. Ensure that the 10 GbE ports are active on the ESXi hosts using esxtop or vCenter. The tests run very slowly if the nodes are not using the 10 GbE ports. For more information about this known issue with ESXi 5.0 Update 1, see VMware KB article 2030006.

1. Log on to any Controller VM in the cluster with SSH.

2. Set up the diagnostics test.

→ vSphere

nutanix@cvm$ ~/diagnostics/diagnostics.py cleanup

Final Configuration | Setup Guide | NOS 3.5 | 25

In vCenter, right-click any diagnostic VMs labeled as "orphaned", select Remove from Inventory, and click Yes to confirm removal.

→ KVM

nutanix@cvm$ ~/diagnostics/setup_diagnostics_kvm.py --force

3. Start the diagnostics test.

→ vSphere

nutanix@cvm$ ~/diagnostics/diagnostics.py run

→ KVM

nutanix@cvm$ ~/diagnostics/diagnostics.py --hypervisor kvm --skip_setup run

Include the parameter --default_ncli_password='admin_password' if the Nutanix admin user password has been changed from the default.

If the command fails with ERROR:root:Zookeeper host port list is not set, refresh the environment by running source /etc/profile or bash -l and run the command again.

The diagnostic may take up to 15 minutes to complete.

The script performs the following tasks:

1. Installs a diagnostic VM on each node.
2. Creates cluster entities to support the test, if necessary.
3. Runs four performance tests, using the Linux fio utility.
4. Reports the results.

4. Review the results.

5. Remove the entities from this diagnostic.

→ vSphere

nutanix@cvm$ ~/diagnostics/diagnostics.py cleanup

In vCenter, right-click any diagnostic VMs labeled as "orphaned", select Remove from Inventory, and click Yes to confirm removal.

→ KVM

nutanix@cvm$ ~/diagnostics/setup_diagnostics_kvm.py --cleanup_ctr

Perform these steps for each KVM host.

a. Log on to the KVM host with SSH.

b. Get the diagnostics VM name.

root@kvm# virsh list | grep -i diagnostics

c. Destroy the diagnostics VM.

root@kvm# virsh destroy diagnostics_vm_name

Replace diagnostics_vm_name with the VM name found in the previous step.

Diagnostics Output

System output similar to the following indicates a successful test.


Checking if an existing storage pool can be used ...
Using storage pool sp1 for the tests.
Checking if the diagnostics container exists ... does not exist.
Creating a new container NTNX-diagnostics-ctr for the runs ... done.
Mounting NFS datastore 'NTNX-diagnostics-ctr' on each host ... done.
Deploying the diagnostics UVM on host 172.16.8.170 ... done.
Preparing the UVM on host 172.16.8.170 ... done.
Deploying the diagnostics UVM on host 172.16.8.171 ... done.
Preparing the UVM on host 172.16.8.171 ... done.
Deploying the diagnostics UVM on host 172.16.8.172 ... done.
Preparing the UVM on host 172.16.8.172 ... done.
Deploying the diagnostics UVM on host 172.16.8.173 ... done.
Preparing the UVM on host 172.16.8.173 ... done.
VM on host 172.16.8.170 has booted. 3 remaining.
VM on host 172.16.8.171 has booted. 2 remaining.
VM on host 172.16.8.172 has booted. 1 remaining.
VM on host 172.16.8.173 has booted. 0 remaining.
Waiting for the hot cache to flush ... done.
Running test 'Prepare disks' ... done.
Waiting for the hot cache to flush ... done.
Running test 'Sequential write bandwidth (using fio)' ... bandwidth MBps
Waiting for the hot cache to flush ... done.
Running test 'Sequential read bandwidth (using fio)' ... bandwidth MBps
Waiting for the hot cache to flush ... done.
Running test 'Random read IOPS (using fio)' ... operations IOPS
Waiting for the hot cache to flush ... done.
Running test 'Random write IOPS (using fio)' ... operations IOPS
Tests done.

Note:

• Expected results vary based on the specific NOS version and hardware model used. Refer to the Release Notes for the values appropriate for your environment.
• The IOPS values reported by the diagnostics script will be higher than the values reported by the Nutanix management interfaces. This difference is because the diagnostics script reports physical disk I/O, and the management interfaces show IOPS reported by the hypervisor.
• If the reported values are lower than expected, the 10 GbE ports may not be active. For more information about this known issue with ESXi 5.0 Update 1, see VMware KB article 2030006.

To Test Email Alerts

1. Log on to any Controller VM in the cluster with SSH.

2. Send a test email.

nutanix@cvm$ ~/serviceability/bin/email-alerts \
 --to_addresses="[email protected], customer_email" \
 --subject="[alert test] `ncli cluster get-params`"

Replace customer_email with a customer email address that receives alerts.

3. Confirm with Nutanix support that the email was received.


To Check the Status of Cluster Services

Verify that all services are up on all Controller VMs.

nutanix@cvm$ cluster status

If the cluster is running properly, output similar to the following is displayed for each node in the cluster:

CVM: 172.16.8.167 Up, ZeusLeader
    Zeus                UP  [3148, 3161, 3162, 3163, 3170, 3180]
    Scavenger           UP  [3333, 3345, 3346, 11997]
    ConnectionSplicer   UP  [3379, 3392]
    Hyperint            UP  [3394, 3407, 3408, 3429, 3440, 3447]
    Medusa              UP  [3488, 3501, 3502, 3523, 3569]
    DynamicRingChanger  UP  [4592, 4609, 4610, 4640]
    Pithos              UP  [4613, 4625, 4626, 4678]
    Stargate            UP  [4628, 4647, 4648, 4709]
    Cerebro             UP  [4890, 4903, 4904, 4979]
    Chronos             UP  [4906, 4918, 4919, 4968]
    Curator             UP  [4922, 4934, 4935, 5064]
    Prism               UP  [4939, 4951, 4952, 4978]
    AlertManager        UP  [4954, 4966, 4967, 5022]
    StatsAggregator     UP  [5017, 5039, 5040, 5091]
    SysStatCollector    UP  [5046, 5061, 5062, 5098]


Appendix A: Manual IP Address Configuration

To Verify IPv6 Link-Local Connectivity

The automated IP address and cluster configuration utilities depend on IPv6 link-local addresses, which are enabled on most networks. Use this procedure to verify that IPv6 link-local is enabled.

1. Connect two Windows, Linux, or Apple laptops to the switch to be used.

2. Disable any firewalls on the laptops.

3. Verify that each laptop has an IPv6 link-local address.

→ Windows (Control Panel)

Start > Control Panel > View network status and tasks > Change adapter settings > Local Area Connection > Details

→ Windows (command-line interface)

> ipconfig

Ethernet adapter Local Area Connection:

Connection-specific DNS Suffix . : corp.example.com
Link-local IPv6 Address . . . . . : fe80::ed67:9a32:7fc4:3be1%12
IPv4 Address. . . . . . . . . . . : 172.16.21.11
Subnet Mask . . . . . . . . . . . : 255.240.0.0


Default Gateway . . . . . . . . . : 172.16.0.1

→ Linux

$ ifconfig eth0

eth0      Link encap:Ethernet  HWaddr 00:0c:29:dd:e3:0b
          inet addr:10.2.100.180  Bcast:10.2.103.255  Mask:255.255.252.0
          inet6 addr: fe80::20c:29ff:fedd:e30b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2895385616 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3063794864 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2569454555254 (2.5 TB)  TX bytes:2795005996728 (2.7 TB)

→ Mac OS

$ ifconfig en0

en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        ether 70:56:81:ae:a7:47
        inet6 fe80::7256:81ff:feae:a747%en0 prefixlen 64 scopeid 0x4
        inet 172.16.21.208 netmask 0xfff00000 broadcast 172.31.255.255
        media: autoselect
        status: active

Note the IPv6 link-local addresses, which always begin with fe80. Omit the / character and anything following.

4. From one of the laptops, ping the other laptop.

→ Windows

> ping -6 ipv6_linklocal_addr%interface

→ Linux/Mac OS

$ ping6 ipv6_linklocal_addr%interface

• Replace ipv6_linklocal_addr with the IPv6 link-local address of the other laptop.
• Replace interface with the interface identifier on the laptop you are pinging from (for example, 12 for Windows, eth0 for Linux, or en0 for Mac OS).
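For example, using the sample outputs above to ping the Windows laptop from the Linux laptop (whose own interface is eth0), the command would be:

$ ping6 fe80::ed67:9a32:7fc4:3be1%eth0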

If the ping packets are answered by the remote host, IPv6 link-local is enabled on the subnet. If the ping packets are not answered, ensure that firewalls are disabled on both laptops and try again before concluding that IPv6 link-local is not enabled.

5. Reenable the firewalls on the laptops and disconnect them from the network.

Results.

• If IPv6 link-local is enabled on the subnet, you can use the automated IP address and cluster configuration utilities.

• If IPv6 link-local is not enabled on the subnet, you have to manually create the cluster by following To Configure the Cluster (Manual) on page 29, which includes manually setting IP addresses.

To Configure the Cluster (Manual)

Use this procedure if IPv6 link-local is not enabled on the subnet.

1. Configure the IPMI IP addresses by following the procedure for your hardware model.


→ To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000) on page 31
→ To Configure the Remote Console IP Address (NX-3000) on page 32
→ To Configure the Remote Console IP Address (NX-2000) on page 32

Alternatively, you can set the IPMI IP address using a command-line utility by following To Configure the Remote Console IP Address (command line) on page 33.

2. Configure networking on the node by following the hypervisor-specific procedure.

→ vSphere: To Configure Host Networking on page 34
→ KVM: To Configure Host Networking (KVM) on page 35

3. Configure the Controller VM IP addresses by following To Configure the Controller VM IP Address on page 36.

4. Log on to any Controller VM in the cluster with SSH.

5. Create the cluster.

nutanix@cvm$ cluster -s cvm_ip_addrs create

Replace cvm_ip_addrs with a comma-separated list of Controller VM IP addresses. Include all Controller VMs that will be part of the cluster.

For example, if the new cluster should comprise all four nodes in a block, include the IP addresses of all four Controller VMs.
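A full command for a four-node cluster, using placeholder addresses in the 172.16.8.x range (the range used by the sample outputs in this guide), would look like this:

nutanix@cvm$ cluster -s 172.16.8.167,172.16.8.168,172.16.8.169,172.16.8.170 create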

6. Start the Nutanix cluster.

nutanix@cvm$ cluster start

If the cluster starts properly, output similar to the following is displayed for each node in the cluster:

CVM: 172.16.8.167 Up, ZeusLeader
    Zeus                UP  [3148, 3161, 3162, 3163, 3170, 3180]
    Scavenger           UP  [3333, 3345, 3346, 11997]
    ConnectionSplicer   UP  [3379, 3392]
    Hyperint            UP  [3394, 3407, 3408, 3429, 3440, 3447]
    Medusa              UP  [3488, 3501, 3502, 3523, 3569]
    DynamicRingChanger  UP  [4592, 4609, 4610, 4640]
    Pithos              UP  [4613, 4625, 4626, 4678]
    Stargate            UP  [4628, 4647, 4648, 4709]
    Cerebro             UP  [4890, 4903, 4904, 4979]
    Chronos             UP  [4906, 4918, 4919, 4968]
    Curator             UP  [4922, 4934, 4935, 5064]
    Prism               UP  [4939, 4951, 4952, 4978]
    AlertManager        UP  [4954, 4966, 4967, 5022]
    StatsAggregator     UP  [5017, 5039, 5040, 5091]
    SysStatCollector    UP  [5046, 5061, 5062, 5098]

7. Set the name of the cluster.

nutanix@cvm$ ncli cluster edit-params new-name=cluster_name

Replace cluster_name with a name for the cluster chosen by the customer.

8. Configure the DNS servers.

nutanix@cvm$ ncli cluster add-to-name-servers servers="dns_server"

Replace dns_server with the IP address of a single DNS server or with a comma-separated list of DNS server IP addresses.
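For example, with two placeholder DNS server addresses:

nutanix@cvm$ ncli cluster add-to-name-servers servers="10.1.1.10,10.1.1.11"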


9. Configure the NTP servers.

nutanix@cvm$ ncli cluster add-to-ntp-servers servers="ntp_server"

Replace ntp_server with the IP address or host name of a single NTP server or with a comma-separated list of NTP server IP addresses or host names.

Remote Console IP Address Configuration

The Intelligent Platform Management Interface (IPMI) is a standardized interface used to manage a host and monitor its operation. To enable remote access to the console of each host, you must configure the IPMI settings within the BIOS.

The Nutanix cluster provides a Java application to remotely view the console of each node, or host server. You can use this console to configure additional IP addresses in the cluster.

The procedure for configuring the remote console IP address is slightly different for each hardware platform.

To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000)

1. Connect a keyboard and monitor to a node in the Nutanix block.

2. Restart the node and press Delete to enter the BIOS setup utility.
You will have a limited amount of time to enter the BIOS before the host completes the restart process.

3. Press the right arrow key to select the IPMI tab.

4. Press the down arrow key until BMC network configuration is highlighted and then press Enter.

5. Select Configuration Address source and press Enter.

6. Select Static and press Enter.

7. Assign the Station IP address, Subnet mask, and Router IP address.


8. Review the BIOS settings and press F4 to save the configuration changes and exit the BIOS setup utility.
The node restarts.

To Configure the Remote Console IP Address (NX-3000)

1. Connect a keyboard and monitor to a node in the Nutanix block.

2. Restart the node and press Delete to enter the BIOS setup utility.
You will have a limited amount of time to enter the BIOS before the host completes the restart process.

3. Press the right arrow key to select the Server Mgmt tab.

4. Press the down arrow key until BMC network configuration is highlighted and then press Enter.

5. Select Configuration source and press Enter.

6. Select Static on next reset and press Enter.

7. Assign the Station IP address, Subnet mask, and Router IP address.

8. Press F10 to save the configuration changes.

9. Review the settings and then press Enter.
The node restarts.

To Configure the Remote Console IP Address (NX-2000)

1. Connect a keyboard and monitor to a node in the Nutanix block.

2. Restart the node and press Delete to enter the BIOS setup utility.
You will have a limited amount of time to enter the BIOS before the host completes the restart process.

3. Press the right arrow key to select the Advanced tab.


4. Press the down arrow key until IPMI Configuration is highlighted and then press Enter.

5. Select Set LAN Configuration and press Enter.

6. Select Static to assign an IP address, subnet mask, and gateway address.

7. Press F10 to save the configuration changes.

8. Review the settings and then press Enter.

9. Restart the node.

To Configure the Remote Console IP Address (command line)

You can configure the management interface from the hypervisor host on the same node.

Perform these steps once from each hypervisor host in the cluster where the management network configuration needs to be changed.

1. Log on to the hypervisor host with SSH or the IPMI remote console.

2. Set the networking parameters.

root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway

root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway
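For example, on a KVM host, assigning the placeholder address 10.1.1.21 with a /24 netmask and gateway 10.1.1.1 would look like this:

root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr 10.1.1.21
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 netmask 255.255.255.0
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr 10.1.1.1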

3. Show current settings.

root@esx# /ipmitool -v -U ADMIN -P ADMIN lan print 1

root@kvm# ipmitool -v -U ADMIN -P ADMIN lan print 1


Confirm that the parameters are set to the correct values.

To Configure Host Networking

You can access the ESXi console either through IPMI or by attaching a keyboard and monitor to the node.

1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.

2. Press the down arrow key until Configure Management Network is highlighted and then press Enter.

3. Select Network Adapters and press Enter.

4. Ensure that the connected network adapters are selected.

If they are not selected, press Space to select them and press Enter to return to the previous screen.

5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press Enter. In the dialog box, provide the VLAN ID and press Enter.

6. Select IP Configuration and press Enter.

7. If necessary, highlight the Set static IP address and network configuration option and press Space to update the setting.

8. Provide values for the IP Address, Subnet Mask, and Default Gateway fields based on your environment and then press Enter.

9. Select DNS Configuration and press Enter.

10. If necessary, highlight the Use the following DNS server addresses and hostname option and press Space to update the setting.

11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your environment and then press Enter.

12. Press Esc and then Y to apply all changes and restart the management network.

13. Select Test Management Network and press Enter.

14. Press Enter to start the network ping test.


15. Verify that the default gateway and DNS servers reported by the ping test match those that you specified earlier in the procedure and then press Enter.

Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP addresses are configured.

Press Enter to close the test window.

16. Press Esc to log out.

To Configure Host Networking (KVM)

You can access the hypervisor host console either through IPMI or by attaching a keyboard and monitor to the node.

1. Log on to the host as root.

2. Open the network interface configuration file.

root@kvm# vi /etc/sysconfig/network-scripts/ifcfg-br0

3. Press A to edit values in the file.

4. Update entries for netmask, gateway, and address.

The block should look like this:

ONBOOT="yes" NM_CONTROLLED="no" NETMASK="subnet_mask" IPADDR="host_ip_addr" DEVICE="eth0" TYPE="ethernet" GATEWAY="gateway_ip_addr" BOOTPROTO="none"

• Replace host_ip_addr with the IP address for the hypervisor host.
• Replace subnet_mask with the subnet mask for host_ip_addr.
• Replace gateway_ip_addr with the gateway address for host_ip_addr.

5. Press Esc.

6. Type :wq and press Enter to save your changes.

7. Open the name services configuration file.

root@kvm# vi /etc/resolv.conf

8. Update the values for the nameserver parameter, then save and close the file.


9. Restart networking.

root@kvm# /etc/init.d/network restart

To Configure the Controller VM IP Address

1. Log on to the hypervisor host with SSH or the IPMI remote console.

2. Log on to the Controller VM with SSH.

root@host# ssh nutanix@192.168.5.254

Enter the Controller VM nutanix password.

3. Change the network interface configuration.

a. Open the network interface configuration file.

nutanix@cvm$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0

Enter the nutanix password.

b. Press A to edit values in the file.

c. Update entries for netmask, gateway, and address.

The block should look like this:

ONBOOT="yes" NM_CONTROLLED="no" NETMASK="subnet_mask" IPADDR="cvm_ip_addr" DEVICE="eth0" TYPE="ethernet" GATEWAY="gateway_ip_addr" BOOTPROTO="none"

• Replace cvm_ip_addr with the IP address for the Controller VM.
• Replace subnet_mask with the subnet mask for cvm_ip_addr.
• Replace gateway_ip_addr with the gateway address for cvm_ip_addr.
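A filled-in example, using the placeholder address 10.1.1.31 with a 255.255.255.0 subnet mask and gateway 10.1.1.1, would look like this:

ONBOOT="yes"
NM_CONTROLLED="no"
NETMASK="255.255.255.0"
IPADDR="10.1.1.31"
DEVICE="eth0"
TYPE="ethernet"
GATEWAY="10.1.1.1"
BOOTPROTO="none"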

d. Press Esc.

e. Type :wq and press Enter to save your changes.

4. Restart the Controller VM.

nutanix@cvm$ sudo reboot

Enter the nutanix password if prompted. Wait to proceed until the Controller VM has finished starting, which takes approximately 5 minutes.