VMware vSphere Networking deep dive


Transcript of VMware vSphere Networking deep dive

Page 1: VMware vSphere Networking deep dive

vSphere Networking vSphere 6.0

VEPSUN TECHNOLOGIES

Page 2: VMware vSphere Networking deep dive

Introduction to vSphere Standard Switches

Types of Virtual Switch Connections

A virtual switch provides two connection types to hosts and virtual machines:

Connecting virtual machines to the physical network.

Connecting VMkernel services to the physical network. VMkernel services include access to IP storage,

such as NFS or iSCSI, VMware vSphere vMotion migrations, and access to the management network.

The VMware ESXi management network port is used to connect to network or remote services, including

VMware vSphere Web Client. Each ESXi management network port and each VMkernel port must be configured

with its own IP address, netmask, and gateway.

To help configure virtual switches, you can create port groups. A port group is a template that stores

configuration information to create virtual switch ports on a virtual switch. Virtual machine port groups are

used to connect virtual machines to one another with common networking properties.

Virtual machine port groups and VMkernel ports connect to the outside world through the physical Ethernet

adapters that are connected to the virtual switch uplink ports.
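
As a quick sketch of how these pieces fit together, you could build a standard switch, attach an uplink, and add a virtual machine port group from the command line (the names vSwitch1, vmnic1, and Production are hypothetical; ESXCLI is covered in more detail in the troubleshooting pages later):

~ esxcli network vswitch standard add -v vSwitch1

~ esxcli network vswitch standard uplink add -u vmnic1 -v vSwitch1

~ esxcli network vswitch standard portgroup add -p "Production" -v vSwitch1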

Virtual Switch Connection Examples

When you are designing your networking environment, VMware vSphere enables you to place all your networks

on a single virtual switch. Or you can opt for multiple virtual switches, each with a separate network. The

decision partly depends on the layout of your physical networks. For example, you might not have enough

network adapters to create a separate virtual switch for each network. Instead, you might team your network

adapters in a single virtual switch and isolate the networks by using VLANs.

Page 3: VMware vSphere Networking deep dive

Because physical NICs are assigned at the virtual switch level, all ports and port groups that are defined for a

particular switch share the same hardware.

Types of Virtual Switches

A standard switch is a virtual switch configuration at the host level.

A distributed switch has components similar to those of a standard switch. A distributed switch

functions as a single virtual switch across all associated hosts. A distributed switch is configured in

VMware vCenter Server at the data center level.

Standard Switch Components

Page 4: VMware vSphere Networking deep dive

The image shows five standard switches, each devoted to a different purpose. From left to right, the switches

are in numerical order:

1. A standard switch with a single outbound adapter. This switch is used only by VM1.

2. An internal-only standard switch, which enables virtual machines in a single ESXi host to communicate

directly with other virtual machines connected to the same standard switch. VM2 and VM3 can use this

switch to communicate with each other.

3. A standard switch with teamed NICs. A NIC team provides automatic distribution of packets and failover.

4. A standard switch that is used by the VMkernel for accessing iSCSI- or NAS-based storage.

5. A standard switch that is used by the VMkernel to enable remote management capabilities.

Viewing the Standard Switch Configuration

The image shows the standard switch vSwitch1 on an ESXi host. By default, the ESXi installation creates a virtual

machine port group named VM Network and a VMkernel port named Management Network.

A good practice is to remove the VM Network virtual machine port group and keep virtual machine networks

and management networks separated for performance and security reasons.

To remove a standard switch, click the red X next to the switch to be deleted. To display virtual switch

properties, click the pencil icon above the virtual switch.

Properties can be displayed for a port or port group. If applicable, Cisco Discovery Protocol (CDP)

information can be shown for a physical adapter.

CDP enables ESXi administrators to determine which Cisco switch port is connected to a given virtual switch.

When CDP is enabled for a particular virtual switch, you can view properties of the Cisco switch from the

vSphere Web Client. Properties include device type, Port ID, hardware capabilities, and so on.

Page 5: VMware vSphere Networking deep dive

About VLANs

VLANs provide for logical groupings of switch ports, enabling communications as if all virtual machines or ports

in a VLAN were on the same physical LAN segment. A VLAN is a software-configured broadcast domain. Using a

VLAN has the following benefits:

Creation of logical networks that are not based on the physical topology

Improved performance by confining broadcast traffic to a subset of ports on a switch

Cost savings by partitioning the network without the overhead of deploying new routers

VLANs can be configured at the port group level. The ESXi host provides VLAN support through virtual switch

tagging, which is enabled by assigning a VLAN ID to a port group. Assigning a VLAN ID is optional. The VMkernel then

takes care of all tagging and untagging as the packets pass through the virtual switch.

The port on a physical switch to which an ESXi host is connected must be defined as a static trunk port. A trunk

port is a port on a physical Ethernet switch that is configured to send and receive packets tagged with a VLAN

ID. No VLAN configuration is required in the virtual machine. In fact, the virtual machine does not know that it is

connected to a VLAN.
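
For example, virtual switch tagging can be configured from the command line by assigning a VLAN ID to a port group (the port group name Production is hypothetical; the same command appears again in the VLAN troubleshooting section later):

~ esxcli network vswitch standard portgroup set -p "Production" --vlan-id 10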

Network Adapter Properties

You can change the connection speed and duplex of a physical adapter to transfer data in compliance with the

traffic rate.

If the physical adapter supports SR-IOV, you can enable it and configure the number of virtual functions to use

for virtual machine networking.

Page 6: VMware vSphere Networking deep dive

Network Switch and Port Policies

Network security policy provides protection against MAC address impersonation and unwanted port scanning.

Traffic shaping is useful when you want to limit the amount of traffic to a virtual machine or a group of virtual

machines. You do traffic shaping to either protect a virtual machine or traffic in an oversubscribed network.

Use the teaming and failover policy to determine how the network traffic of virtual machines and VMkernel

adapters that are connected to the switch is distributed between physical adapters, and how the traffic should be

rerouted if an adapter fails.

These policies are defined for the entire standard switch and can be defined for a VMkernel port or a virtual

machine port group. When a policy is defined for an individual port group, the policy at this level overrides the

default policies defined for the standard switch.

Configuring Security Policies

Page 7: VMware vSphere Networking deep dive

For a vSphere standard switch, you can configure security policy to reject MAC address and promiscuous mode

changes in the guest operating system of a virtual machine.

The network security policy contains the following exceptions:

Promiscuous mode: Promiscuous mode allows a virtual switch or port group to forward all traffic

regardless of destination. Default is Reject.

MAC address changes: When set to Reject, if the guest attempts to change the MAC address assigned to the

virtual NIC, it stops receiving frames. Default is Accept.

Forged Transmits: A frame's source address field may be altered by the guest, and contain a MAC

address other than the assigned virtual NIC MAC address. You can set the Forged transmits parameter to

accept or reject such frames. Default is Accept.

In general, these policies give you the option of disallowing certain behaviors that might compromise security.

For example, a hacker might use a promiscuous mode device to capture network traffic for unscrupulous

activities. Or someone might impersonate a node and gain unauthorized access by spoofing its MAC address.

Set Promiscuous mode to Accept to use an application in a virtual machine that analyzes or sniffs packets, such

as a network-based intrusion detection system.

Set MAC address changes and Forged transmits to Reject to help protect against certain attacks launched by a

rogue guest operating system.

Leave MAC address changes and Forged transmits at their default values (Accept) if your applications change

the mapped MAC address, as do some operating system-based firewalls.
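
These settings can also be inspected and changed from the command line. A minimal sketch, assuming a switch named vSwitch0 and the option names of the ESXCLI security policy namespace:

~ esxcli network vswitch standard policy security get -v vSwitch0

~ esxcli network vswitch standard policy security set -v vSwitch0 --allow-mac-change=false --allow-forged-transmits=false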

Traffic-Shaping Policy

Page 8: VMware vSphere Networking deep dive

A virtual machine's network bandwidth can be controlled by enabling the network traffic shaper. The network

traffic shaper, when used on a standard switch, shapes only outbound network traffic. To control inbound

traffic, use a load-balancing system, or turn on rate-limiting features on your physical router.

Configuring Traffic Shaping

The ESXi host shapes only outbound traffic by establishing parameters for the following traffic characteristics:

Average bandwidth (Kbps): Establishes the number of kilobits per second to allow across a port, averaged

over time. The average bandwidth is the allowed average load.

Peak Bandwidth (Kbps): The maximum number of kilobits per second to allow across a port when it is

sending a burst of traffic. This number caps the bandwidth that is used by a port whenever the port is

using its burst bonus.

Burst size (KB): The maximum number of kilobytes to allow in a burst. If this parameter is set, a port

might gain a burst bonus if it does not use all its allocated bandwidth. Whenever the port needs more

bandwidth than specified in Average bandwidth, the port might be allowed to temporarily transmit

data at a higher speed if a burst bonus is available. This parameter caps the number of kilobytes that

can accumulate in the burst bonus and thus transfer at the higher speed.

Network traffic shaping is off by default.

To configure traffic shaping in the vSphere Web Client:

1. Navigate to the host.

2. On the Manage tab, click Networking, and select Virtual switches.

3. Navigate to the traffic shaping policy on the standard switch or port group, and configure the traffic

shaping policy by setting the average bandwidth, peak bandwidth, and burst size values.
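
Traffic shaping can also be configured at the switch level from the command line. A minimal sketch, assuming vSwitch0 and the ESXCLI shaping policy option names (average and peak bandwidth in Kbps, burst size in KB):

~ esxcli network vswitch standard policy shaping set -v vSwitch0 --enabled=true --avg-bandwidth=100000 --peak-bandwidth=200000 --burst-size=150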

Page 9: VMware vSphere Networking deep dive

Although you can establish a traffic-shaping policy at either the virtual switch level or the port group level,

settings at the port group level override settings at the virtual switch level.

NIC Teaming and Failover Policies

NIC teaming and failover policies enable you to determine how network traffic is distributed between

adapters and how to reroute traffic in the event of an adapter failure. NIC teaming policies include load-

balancing and failover settings. Default NIC teaming and failover policies are set for the entire standard switch.

These default settings can be overridden at the port group level. The policies show what is inherited from the

settings at the switch level.

To edit teaming and failover settings in the vSphere Web Client:

1. Navigate to the host.

2. On the Manage tab, click Networking, and select Virtual Switches.

3. Select a switch from the list.

4. In the topology diagram of the switch, click the port group name.

5. Click Edit under the topology diagram title and select Teaming and failover.

6. Configure settings in this section.
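
The same policy can be viewed and changed from the command line. The get command below also appears in the troubleshooting section; the set command is a sketch, assuming vmnic0 and vmnic1 are the team members:

~ esxcli network vswitch standard policy failover get -v vSwitch0

~ esxcli network vswitch standard policy failover set -v vSwitch0 --active-uplinks vmnic0,vmnic1 --failback true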

Page 10: VMware vSphere Networking deep dive

Load-Balancing Method: Originating Virtual Port ID

With this method, a virtual machine's outbound traffic is mapped to a specific physical NIC. The NIC is

determined by the ID of the virtual port to which this virtual machine is connected. This method is simple and

fast and does not require the VMkernel to examine the frame for the necessary information.

When the load is distributed in the NIC team using the Port-based method, no single-NIC virtual machine gets

more bandwidth than can be provided by a single physical adapter.

Load-Balancing Method: Source MAC Hash

Page 11: VMware vSphere Networking deep dive

In this load-balancing method, each virtual machine's outbound traffic is mapped to a specific physical NIC that

is based on the virtual NIC's MAC address.

This method has low overhead and is compatible with all switches, but it might not spread traffic evenly across

all the physical NICs.

When the load is distributed in the NIC team using the MAC-based method, no single-NIC virtual machine gets

more bandwidth than can be provided by a single physical adapter.

You can also balance your traffic based on the current traffic loads of the physical NICs. The NIC with the lower load is

more likely to be chosen.

Load-Balancing Method: Source and Destination IP Hash

In this load-balancing method, a NIC for each outbound packet is selected based on its source and destination

IP addresses.

The IP-based method requires 802.3ad link aggregation support or EtherChannel on the switch. The Link

Aggregation Control Protocol (LACP) is a method to control the bundling of several physical ports to form a single

logical channel. LACP is part of the IEEE 802.3ad specification.

EtherChannel is a port trunking technology that is used primarily on Cisco switches. This technology enables

grouping several physical Ethernet links to create one logical Ethernet link for providing fault tolerance and high-

speed links between switches, routers, and servers.

When the load is distributed in the NIC team using the IP-based method, a single NIC virtual machine might use

the bandwidth of multiple physical adapters.
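
To select this method from the command line, the load-balancing option of the failover policy is set to iphash. A sketch, assuming vSwitch0 and that the physical switch ports are already configured for EtherChannel:

~ esxcli network vswitch standard policy failover set -v vSwitch0 --load-balancing iphash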

Page 12: VMware vSphere Networking deep dive

The IP-based load-balancing method only affects outbound traffic. For example, a virtual machine might choose

a particular NIC to communicate with a particular destination virtual machine. The return traffic might not arrive

on the same NIC as the outbound traffic. The return traffic might arrive on another NIC in the same NIC team.

Detecting and Handling Network Failure

Monitoring the link status provided by the network adapter detects failures like cable pulls and physical switch

power failures. This monitoring does not detect configuration errors, such as a physical switch port being

blocked by the Spanning Tree Protocol or misconfigured VLAN membership. This method also cannot detect

failures on upstream physical switches or cables that are not directly connected to the host.

Beaconing introduces a small load: a 62-byte packet approximately every second per physical NIC. When

beaconing is activated, the VMkernel sends out and listens for probe packets on all NICs in the team. This

technique can detect failures that link-status monitoring alone cannot. Consult your switch manufacturer to

confirm the support of beaconing in your environment.

A physical switch can be notified by the VMkernel whenever a virtual NIC is connected to a virtual switch. A

physical switch can also be notified whenever a failover event causes a virtual NIC's traffic to be routed over a

different physical NIC. The notification is sent out over the network to update the lookup tables on physical

switches. In most cases, this notification process is desirable because otherwise virtual machines would

experience greater latency after failovers and vSphere vMotion operations. But do not set this option when the

virtual machines connected to the port group are running unicast-mode Microsoft Network Load Balancing

(NLB). NLB in multicast mode is unaffected.

When using explicit failover order, always use the highest order uplink from the list of active adapters that pass

failover-detection criteria.

Page 13: VMware vSphere Networking deep dive

The Failback option determines how a physical adapter is returned to active duty after recovering from a failure.

If Failback is set to Yes, the failed adapter is returned to active duty immediately upon recovery, displacing the

standby adapter that took its place at the time of failure.

If Failback is set to No, a failed adapter is left inactive even after recovery, until another currently active adapter

fails, requiring its replacement.

Page 14: VMware vSphere Networking deep dive

Introduction to vSphere Distributed Switches

Benefits of vSphere Distributed Switches

Standard switch and distributed switch features comparison

VMware vCenter Server owns the configuration of the distributed switch. The configuration is consistent across

all hosts that use the distributed switch.

Page 15: VMware vSphere Networking deep dive

Distributed switch architecture

Distributed switch components move network management to the data center level.

A distributed switch is configured entirely in vCenter Server. The distributed switch abstracts a set of

standard switches that are configured on each associated host. vCenter Server manages the configuration of

distributed switches, and all configuration is consistent across all hosts. Consider a distributed switch as a template

for the network configuration on each ESXi host.

Each distributed switch includes distributed ports. You can connect any networking entity, such as a virtual

machine or a VMkernel interface, to a distributed port.

vCenter Server stores the state of distributed ports in the vCenter Server database. Networking statistics and

policies migrate with virtual machines when the virtual machines are moved from host to host.

A distributed port group enables you to logically group distributed ports to simplify configuration. A distributed

port group specifies port configuration options for each member port on the distributed switch. Distributed port

groups define how a connection is made through a distributed switch to a network. Ports can also exist without a

port group.

An uplink is an abstraction that associates vmnics on multiple hosts with a single distributed switch. An uplink in a

distributed switch performs a function similar to that of a vmnic on a standard switch. Two virtual machines on

different hosts can communicate with each other only if both virtual machines have uplinks in the same

broadcast domain.

The distributed switch architecture consists of two planes: the control plane and the I/O plane.

The control plane resides in vCenter Server. The control plane is responsible for configuring distributed switches,

distributed port groups, distributed ports, uplinks, NIC teaming, and so on. The control plane also coordinates

Page 16: VMware vSphere Networking deep dive

the migration of the ports and is responsible for the switch configuration. For example, if a conflict arises in the

assignment of a distributed port, the control plane acts as the deciding authority to resolve the conflict.

The I/O plane is implemented as a hidden virtual switch in the VMkernel of each ESXi host. The I/O plane manages

the I/O hardware on the host and is responsible for forwarding packets.

Distributed Switch Example

Viewing Distributed Switch

Page 17: VMware vSphere Networking deep dive

Consider the following points when you configure distributed switch settings:

Uplink ports connect the distributed switch to physical NICs on associated hosts. The number of uplink

ports is the maximum number of allowed physical connections to the distributed switch per host.

By using VMware vSphere Network I/O Control, you can prioritize the access to network resources for

certain types of infrastructure and workload traffic according to the requirements of your deployment.

Network I/O Control continuously monitors the I/O load over the network and dynamically allocates

available resources.

After you add the distributed switch, if your system has custom port group requirements, create

distributed port groups that meet those requirements.

Creating a Distributed Switch

Editing General and Advanced Distributed Switch Properties

vSphere Distributed Switch supports basic and snooping models for filtering of multicast packets that are related

to individual multicast groups. Choose a model according to the number of multicast groups to which the VMs

on the switch subscribe.

Page 18: VMware vSphere Networking deep dive

The distributed switch supports the default basic mode for filtering multicast traffic. The distributed switch also

supports multicast snooping that forwards multicast traffic in a more precise way based on the Internet Group

Management Protocol (IGMP) and Multicast Listener Discovery (MLD) messages from VMs.

Basic Multicast Filtering

In basic multicast filtering mode, a VM sends out IGMP join requests through the network, indicating the VM’s

intention of joining a particular multicast group. A standard switch or a distributed switch forwards multicast

traffic for VMs according to the destination MAC address of the multicast group. The switch saves the mapping

between the port and the destination multicast MAC address in a local forwarding table.

The switch does not interpret the IGMP messages that a VM sends. The switch forwards them to the local

multicast router. The router interprets the IGMP messages and joins the VM to a multicast group, or removes it

from a multicast group.

The basic mode has the following restrictions:

A virtual machine might receive packets from groups that it is not subscribed to because the switch

forwards packets according to the destination MAC address of a multicast group, which can be mapped

to up to 32 IP multicast groups.

A virtual machine that is subscribed to traffic from more than 32 multicast MAC addresses receives

packets that it is not subscribed to because of a limitation in the forwarding model.

The switch does not filter packets according to source address as defined in IGMP version 3.

Page 19: VMware vSphere Networking deep dive

Multicast Snooping

In multicast snooping mode, a distributed switch provides IGMP and MLD snooping according to RFC 4541. The

switch dispatches multicast traffic more precisely by using IP addresses. This mode supports IGMPv1, IGMPv2,

and IGMPv3 for IPv4 multicast group addresses, and MLDv1 and MLDv2 for IPv6 multicast group addresses.

The switch dynamically detects the membership of a virtual machine. When a virtual machine sends a packet

that contains IGMP or MLD membership information through a switch port, the switch snoops the packet and

creates a mapping entry. This entry records the destination IP address of the multicast group, and for IGMPv3,

the source IP address from which the VM prefers to receive multicast traffic. If a VM does not renew its

multicast group membership within a certain period of time, the switch removes the previously recorded entry

from its mapping records.

In the multicast snooping mode of a distributed switch, a virtual machine can receive multicast traffic on a single

switch port from up to 256 groups and 10 sources.

Migrating Network Adapters to a Distributed Switch

You can migrate physical NICs, VMkernel adapters, and virtual machine network adapters at the same time.

If you want to migrate virtual machine network adapters or VMkernel adapters, ensure that the destination

distributed port groups have at least one active uplink and that the uplink is connected to a physical NIC on this

host. Alternatively, migrate physical NICs, virtual network adapters, and VMkernel adapters at once.

If you want to migrate physical NICs, ensure that the source port groups on the standard switch have at least

one physical NIC to handle their traffic. For example, if you migrate a physical NIC that is assigned to a port

group for virtual machine networking, ensure that the port group is connected to at least one physical NIC.

Otherwise, the virtual machines on the VLAN on the standard switch will have connectivity between each other

but not to the external network.

Page 20: VMware vSphere Networking deep dive

Assigning a physical NIC of a host to a distributed switch

You can assign physical NICs of a host that is associated with a distributed switch to an uplink port on the host

proxy switch.

Connecting Virtual Machines to a Distributed Switch

Connect virtual machines to a distributed switch either by configuring an individual virtual machine NIC or by

migrating groups of virtual machines to the distributed switch.

Page 21: VMware vSphere Networking deep dive

Connect virtual machines to distributed switches by connecting their associated virtual network adapters to

distributed port groups. For an individual virtual machine, you must modify the virtual machine’s network

adapter configuration. For a group of virtual machines, you must migrate virtual machines from a virtual

network to a vSphere distributed switch.

Editing Distributed Port Group General Properties

A distributed port group specifies port configuration options for each member port on a distributed switch. You

can edit the distributed port group settings to define how a connection is made to a network.

Port binding options include static, dynamic and ephemeral.

Page 22: VMware vSphere Networking deep dive

Editing Distributed Port Group Advanced Properties

You can also enable the reset of any configuration that is set per port when a distributed port disconnects from a

virtual machine.

Page 23: VMware vSphere Networking deep dive

About the VMkernel Networking Level

Consider the following key points about TCP/IP stacks at the VMkernel level:

Default TCP/IP stack: Provides networking support for the management traffic between vCenter Server

and ESXi hosts and for system traffic such as vSphere vMotion, IP storage, and vSphere FT.

vMotion TCP/IP stack: Supports the traffic for live migration of virtual machines. Use the vMotion TCP/IP

stack to provide better isolation for the vSphere vMotion traffic. After you create a VMkernel adapter in

the vMotion TCP/IP stack, you can use only this stack for vSphere vMotion migration of this host. The

VMkernel adapters on the default TCP/IP stack are disabled for the vSphere vMotion Service. If a live

migration uses the default TCP/IP stack while you configure VMkernel adapters with the vMotion TCP/IP

stack, the migration completes successfully. However, the involved VMkernel adapters on the default TCP/IP

stack are disabled for future vSphere vMotion sessions.

Provisioning TCP/IP stack: Supports the traffic for virtual machine cold migration, cloning, and snapshot

creation. You can use the provisioning TCP/IP stack to handle NFC traffic during long-distance vSphere

vMotion migration. VMkernel adapters configured with the provisioning TCP/IP stack handle the traffic

from cloning operations on a separate gateway. After you configure a VMkernel adapter with the

provisioning TCP/IP stack, all adapters on the default TCP/IP stack are disabled for the provisioning

traffic.

Custom TCP/IP stacks: You can add custom TCP/IP stacks at the VMkernel level to handle networking

traffic of custom applications.
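
As a sketch, a custom stack can be created and listed from the command line (the stack name customStack is hypothetical):

~ esxcli network ip netstack add -N "customStack"

~ esxcli network ip netstack list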

Page 24: VMware vSphere Networking deep dive

Creating a VMkernel Adapter on a Host Associated with a Distributed Switch

Consider the following important points when creating a VMkernel adapter on a host that is associated with a

distributed switch:

You should dedicate a single distributed port group per VMkernel adapter

For better isolation, you should configure one VMkernel adapter with one traffic type.
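
Traffic types can also be assigned to a VMkernel adapter from the command line. A hedged sketch, assuming vmk1 exists and the VMotion tag name of the ESXCLI interface tag namespace:

~ esxcli network ip interface tag add -i vmk1 -t VMotion

~ esxcli network ip interface tag get -i vmk1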

Page 25: VMware vSphere Networking deep dive

MAC Address Management

MAC addresses are used in the Layer 2 (Data Link Layer) of the network protocol stack to transmit frames to a

recipient. In vSphere, vCenter Server generates MAC addresses for virtual machine adapters and VMkernel adapters, or

you can assign addresses manually.

Each network adapter manufacturer is assigned a unique three-byte prefix called an Organizationally

Unique Identifier (OUI), which it can use to generate unique MAC addresses.

VMware supports several address allocation mechanisms, each of them with a separate OUI:

Generated MAC addresses

o Assigned by vCenter Server

o Assigned by the ESXi host

Manually set MAC addresses

Generated for legacy virtual machines, but no longer used with ESXi


MAC Address Assignment from vCenter Server

vSphere 5.1 and later provides several schemes for automatic allocation of MAC addresses in

vCenter Server. You can select the scheme that best suits your requirements for MAC address duplication,

OUI requirements for locally administered or universally administered addresses, and so on.

The following schemes of MAC address generation are available in vCenter Server:

VMware OUI allocation, default allocation

Prefix-based allocation

Range-based allocation

After the MAC address is generated, it does not change unless the virtual machine's MAC address conflicts with that of

another registered virtual machine. The MAC address is saved in the configuration file of the virtual machine.

Preventing MAC Address Conflicts

The MAC address of a powered off virtual machine is not checked against the addresses of running or suspended

virtual machines.

When a virtual machine is powered on again, it might acquire a different MAC address. The change might

be caused by an address conflict with another virtual machine: while this virtual machine was powered

off, its MAC address was assigned to another virtual machine that was powered on.

Page 26: VMware vSphere Networking deep dive

If you reconfigure the network adapter of a powered off virtual machine, for example, by changing the automatic MAC

address allocation type or setting a static MAC address, vCenter Server resolves MAC address conflicts before the

adapter reconfiguration takes effect.

VMware OUI Allocation

VMware Organizationally Unique Identifier (OUI) allocation assigns MAC addresses based on the default

VMware OUI 00:50:56 and the vCenter Server ID.

VMware OUI allocation is the default MAC address assignment model for virtual machines. The allocation works with

up to 64 vCenter Server instances, and each vCenter Server can assign up to 64000 unique MAC addresses. The

VMware OUI allocation scheme is suitable for small scale deployments.

MAC Address Format

According to the VMware OUI allocation scheme, a MAC address has the format 00:50:56:XX:YY:ZZ where

00:50:56 represents the VMware OUI, XX is calculated as (80 + vCenter Server ID), and YY and ZZ are

random two-digit hexadecimal numbers.

The addresses created through the VMware OUI allocation are in the range 00:50:56:80:YY:ZZ -

00:50:56:BF:YY:ZZ.
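
For example, assuming a hypothetical vCenter Server ID of 5, XX = 80 + 5 = 85 in hexadecimal, so that vCenter Server instance generates addresses of the form 00:50:56:85:YY:ZZ.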

Prefix-Based MAC Address Allocation

On ESXi hosts 5.1 and later, you can use prefix-based allocation to specify an OUI other than the default

VMware OUI 00:50:56, or to introduce Locally Administered MAC Addresses (LAA) for a larger address

space.

Prefix-based MAC address allocation overcomes the limits of the default VMware allocation to provide unique

addresses in larger scale deployments. Introducing an LAA prefix leads to a very large MAC address space (2 to the

power of 46 addresses) instead of a universally administered OUI, which can provide only about 16 million MAC addresses.

Verify that the prefixes that you provide for different vCenter Server instances in the same network are unique.

vCenter Server relies on the prefixes to avoid MAC address duplication issues.

Range-Based MAC Address Allocation

On ESXi hosts 5.1 and later you can use range-based allocation to include or exclude ranges of Locally

Administered Addresses (LAA).

You specify one or more ranges using starting and ending MAC addresses, for example, (02:50:68:00:00:02,

02:50:68:00:00:FF). MAC addresses are generated only from within the specified range.

Page 27: VMware vSphere Networking deep dive

You can specify multiple ranges of LAA, and vCenter Server tracks the number of used addresses for each range.

vCenter Server allocates MAC addresses from the first range that still has addresses available. vCenter Server checks for

MAC address conflicts within its ranges.

When using range-based allocation, you must provide different instances of vCenter Server with ranges that do not

overlap. vCenter Server does not detect ranges that might be in conflict with other vCenter Server instances. See the

vSphere Troubleshooting documentation for more information about resolving issues with duplicate MAC addresses.

Assigning a MAC Address

Use the vSphere Web Client to enable prefix-based or range-based MAC address allocation and to adjust the allocation

parameters.

If you are changing from one type of allocation to another, for example changing from the VMware OUI allocation to a

range-based allocation, use the vSphere Web Client. However, when a scheme is prefix-based or range-based and you

want to change to a different allocation scheme, you must edit the vpxd.cfg file manually and restart vCenter Server.

Change to or Adjust Range- or Prefix-Based Allocations in the vSphere Web Client

By switching from the default VMware OUI to range- or prefix-based MAC address allocation through the vSphere

Web Client, you can avoid and resolve MAC address duplication conflicts in vSphere deployments.

Change the allocation scheme from the default VMware OUI to range- or prefix-based allocation by using the

Advanced Settings available for the vCenter Server instance in the vSphere Web Client.

To switch from range- or prefix-based allocation back to VMware OUI allocation, or between range- and prefix-

based allocation, edit the vpxd.cfg file manually.

Set or Change Allocation Type

If you are changing from range- or prefix-based allocation to the VMware OUI allocation, you must set the allocation

type in the vpxd.cfg file and restart vCenter Server.

Procedure

1 On the host machine of vCenter Server, navigate to the directory that contains the configuration file:

On a Windows Server operating system, the location of the directory is vCenter Server home

directory\Application Data\VMware\VMware VirtualCenter.

On the vCenter Server Appliance, the location of the directory is /etc/vmware-vpx.

2 Open the vpxd.cfg file.

3 Decide on an allocation type to use and enter the corresponding XML code in the file to configure the

allocation type.

Page 28: VMware vSphere Networking deep dive

The following are examples of XML code to use.

VMware OUI allocation

<vpxd>
   <macAllocScheme>
      <VMwareOUI>true</VMwareOUI>
   </macAllocScheme>
</vpxd>

Prefix-based allocation

<vpxd>
   <macAllocScheme>
      <prefixScheme>
         <prefix>005026</prefix>
         <prefixLength>23</prefixLength>
      </prefixScheme>
   </macAllocScheme>
</vpxd>

Range-based allocation

<vpxd>
   <macAllocScheme>
      <rangeScheme>
         <range id="0">
            <begin>005067000001</begin>
            <end>005067000001</end>
         </range>
      </rangeScheme>
   </macAllocScheme>
</vpxd>

4 Save the vpxd.cfg file.

5 Restart the vCenter Server host.

MAC Address Generation on ESXi Hosts

An ESXi host generates the MAC address for a virtual machine adapter when the host is not connected to

vCenter Server. Such addresses have a separate VMware OUI to avoid conflicts.

The ESXi host generates the MAC address for a virtual machine adapter in one of the following cases:

The host is not connected to vCenter Server.

The virtual machine configuration file does not contain the MAC address and information about the

MAC address allocation type.

Page 29: VMware vSphere Networking deep dive

MAC Address Format

The host generates MAC addresses that consist of the VMware OUI 00:0C:29 and the last three octets in hexadecimal

format of the virtual machine UUID. The virtual machine UUID is based on a hash calculated by using the UUID of the

ESXi physical machine and the path to the configuration file (.vmx) of the virtual machine.

Preventing MAC Address Conflicts

All MAC addresses that have been assigned to network adapters of running and suspended virtual machines on a given

physical machine are tracked for conflicts.

If you import a virtual machine with a host-generated MAC address from one vCenter Server to another, select the I

Copied It option when you power on the virtual machine to regenerate the address and avoid potential conflicts in the

target vCenter Server or between the vCenter Server systems.

Setting a Static MAC Address to a Virtual Machine

In most network deployments, generated MAC addresses are a good approach. However, you might need to set a static

MAC address for a virtual machine adapter with a unique value.

The following cases show when you might set a static MAC address:

1 Virtual machine adapters on different physical hosts share the same subnet and are assigned the same

MAC address, causing a conflict.

2 To ensure that a virtual machine adapter always has the same MAC address.

By default, VMware uses the Organizationally Unique Identifier (OUI) 00:50:56 for manually generated addresses, but

all unique manually generated addresses are supported.

Assign a Static MAC Address with the vSphere Web Client

You can assign static MAC addresses to the virtual NIC of a powered off virtual machine by using the vSphere Web

Client.

Procedure

1 Locate the virtual machine in the vSphere Web Client.

a. Select a datacenter, folder, cluster, resource pool, or host and click the Related Objects tab.

b. Click Virtual Machines and select the virtual machine from the list.

2 Power off the virtual machine.

3 On the Manage tab of the virtual machine, select Settings > VM Hardware.

4 Click Edit and select the Virtual Hardware tab.

5 In the Virtual Hardware tab, expand the network adapter section.

6 Under MAC Address, select Manual from the drop-down menu.

7 Type the static MAC address, and click OK.

8 Power on the virtual machine.

Page 30: VMware vSphere Networking deep dive

Assign a Static MAC Address in the Virtual Machine Configuration File

To set a static MAC address for a virtual machine, you can edit the configuration file of the virtual machine by using the

vSphere Web Client.

Procedure

1 Locate the virtual machine in the vSphere Web Client.

a. Select a datacenter, folder, cluster, resource pool, or host and click the Related Objects tab.

b. Click Virtual Machines and select the virtual machine from the list.

2 Power off the virtual machine.

3 On the Manage tab of the virtual machine, select Settings.

4 On the VM Options tab, expand Advanced.

5 Click Edit Configuration.

6 To assign a static MAC address, add or edit parameters as required.

ethernetX.addressType = "static"

ethernetX.address = "MAC_address_of_the_virtual_NIC"

X next to ethernet stands for the sequence number of the virtual NIC in the virtual machine.

For example, 0 in ethernet0 represents the settings of the first virtual NIC device added to the virtual

machine.

7 Click OK.

8 Power on the virtual machine.

Page 31: VMware vSphere Networking deep dive

Choosing a network adapter for your virtual machine

Network adapter choices depend on the virtual machine hardware version and the guest operating system running on the virtual machine.

Vlance: This is an emulated version of the AMD 79C970 PCnet32 LANCE NIC, an older 10 Mbps NIC with drivers available in most 32-bit guest operating systems except Windows Vista and later. A virtual machine configured with this network adapter can use its network immediately.

VMXNET: The VMXNET virtual network adapter has no physical counterpart. VMXNET is optimized for performance in a virtual machine. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET network adapter available.

Flexible: The Flexible network adapter identifies itself as a Vlance adapter when a virtual machine boots, but initializes itself and functions as either a Vlance or a VMXNET adapter, depending on which driver initializes it. With VMware Tools installed, the VMXNET driver changes the Vlance adapter to the higher performance VMXNET adapter.

E1000: An emulated version of the Intel 82545EM Gigabit Ethernet NIC. A driver for this NIC is not included with all guest operating systems. Typically Linux versions 2.4.19 and later, Windows XP Professional x64 Edition and later, and Windows Server 2003 (32-bit) and later include the E1000 driver. Note: E1000 does not support jumbo frames prior to ESXi/ESX 4.1.

E1000e: This feature emulates a newer model of Intel Gigabit NIC (number 82574) in the virtual hardware. This is known as the "e1000e" vNIC. e1000e is available only on hardware version 8 (and newer) virtual machines in vSphere 5. It is the default vNIC for Windows 8 and newer (Windows) guest operating systems. For Linux guests, e1000e is not available from the UI (e1000, flexible vmxnet, enhanced vmxnet, and vmxnet3 are available for Linux).

VMXNET 2 (Enhanced): The VMXNET 2 adapter is based on the VMXNET adapter but provides some high-performance features commonly used on modern networks, such as jumbo frames and hardware offloads. This virtual network adapter is available only for some guest operating systems on ESXi/ESX 3.5 and later. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET 2 network adapter available.

VMXNET 3: The VMXNET 3 adapter is the next generation of a para-virtualized NIC designed for performance, and is not related to VMXNET or VMXNET 2. It offers all the features available in VMXNET 2, and adds several new features like multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. Because OS vendors do not provide built-in drivers for this card, you must install VMware Tools or open-vm-tools to have a driver for the VMXNET 3 network adapter available. Note: VMXNET 3 is supported only for virtual machines version 7 and later, with a limited set of guest operating systems.
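
The adapter type is recorded in the virtual machine's .vmx configuration file. As a minimal sketch, the following parameter selects VMXNET 3 for the first virtual NIC (edit it while the virtual machine is powered off):

ethernet0.virtualDev = "vmxnet3"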

Page 32: VMware vSphere Networking deep dive

vSphere CLI Commands to Troubleshoot ESXi Network Configurations

To troubleshoot networking configurations from the ESXi command line, ESXCLI is the tool to use. You can run it locally in the ESXi Shell or over an SSH session (for example, using PuTTY).

There are a number of options available when running ‘esxcli’ in terms of network settings:

~ esxcli network

We’ll go through some of the options that these namespaces offer.

Listing vSwitch Configuration

You can list the vSwitches configured on an ESXi host by running:

~ esxcli network vswitch standard list

To list distributed vSwitches instead, swap ‘standard’ for ‘dvs’ in the command. Following on from the output above, you can also dig down to look at the policy and port group settings for the vSwitch:

~ esxcli network vswitch standard

Page 33: VMware vSphere Networking deep dive

For example, to display the failover settings for vSwitch0, the following command can be run:

~ esxcli network vswitch standard policy failover get -v vSwitch0

Listing VMKernel Interfaces

To list the VMkernel ports on a host you can run:

~ esxcli network ip interface list

The command will display the interface name, MAC address, and which vSwitch and Portgroup it belongs to. To list the IP address configuration for the VMkernel ports:

~ esxcli network ip interface ipv4 get

Page 34: VMware vSphere Networking deep dive

Listing Connections and Neighbors

To list established connections on your host you can run:

~ esxcli network ip connection list

This is the equivalent of running ‘netstat’ on a Windows machine. To list the host’s ARP cache (or neighbors table) you can run:

~ esxcli network ip neighbor list

This can be useful when troubleshooting connectivity, for example, when a host is failing to connect to another over the vMotion network.

You can list the host’s routing table by running:

~ esxcli network ip route ipv4 list
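
The same namespace can also add static routes. For example, to add a route to a hypothetical 10.20.0.0/24 network via gateway 10.0.0.1:

~ esxcli network ip route ipv4 add --network 10.20.0.0/24 --gateway 10.0.0.1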

Page 35: VMware vSphere Networking deep dive

Troubleshooting Network Connectivity Using Netcat

Netcat can be used to test connectivity to and from your ESXi host.

You can view the options available by running:

~ nc -h

The basic syntax is:

~ nc -z <DestinationIP> <Destination Port>

For example, to test connectivity to port 80 on 192.168.1.15, you could run:

~ nc -z 192.168.1.15 80

You can also use netcat to test a range of ports on a remote host:

~ nc -w 1 -z 192.168.1.15 80-85

Netcat will report back with the ports it has found to be open within the specified range.

Page 36: VMware vSphere Networking deep dive

Troubleshooting network connectivity with ping and vmkping

You can test connectivity to a remote ESXi host using the ping and vmkping utilities. Using vmkping to test connectivity via vMotion interfaces is a common practice. For example:

~ vmkping 192.168.1.20

If you use vmkping with the ‘-D’ switch, you can test the host’s IP stack as the command will automatically test configured IP addresses:

~ vmkping -D

Page 37: VMware vSphere Networking deep dive

Troubleshooting SSL port connectivity with openssl

You can use the OpenSSL client present on an ESXi host to test connectivity to an SSL port – for example, to vCenter Server or to another host. To do so:

~ openssl s_client -connect 192.168.1.100:443

The output will also contain details about the certificate, which can be useful when troubleshooting certificate problems.

Page 38: VMware vSphere Networking deep dive

Capturing Traffic with tcpdump-uw

To display packets on interface vmk0 you can run:

~ tcpdump-uw -i vmk0 | more

To output the traffic capture to a file you can run:

~ tcpdump-uw -i vmk0 -s 1514 -w /vmfs/volumes/datastore1/traffic.pcap

You can then open the resulting .pcap file in a tool such as Wireshark for analysis.

Page 39: VMware vSphere Networking deep dive

Viewing Physical NIC Configuration

You can list the physical NICs installed in the host by using:

~ esxcli network nic list

Using the ‘esxcli network nic’ namespace you can also bring interfaces up and down (which is useful for testing), and you can view interface statistics.

~ esxcli network nic <cmd> <cmd options>
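
For example, to take a hypothetical vmnic1 down for testing and bring it back up:

~ esxcli network nic down -n vmnic1

~ esxcli network nic up -n vmnic1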

An example of how to display interface statistics is shown here:

~ esxcli network nic stats get -n vmnic0

Page 40: VMware vSphere Networking deep dive

Configuring the speed and duplex of an ESXi or ESX host network adapter

ESXi/ESX recommended settings for Gigabit-Ethernet speed and duplex while connecting to a physical switch port:

Auto Negotiate <-> Auto Negotiate (For 1 Gbps)

Auto Negotiate <-> Auto Negotiate (For 10 Gbps, supported only by ESXi/ESX 3.5 Update 1 and above)

Note: Many drivers do not support forced 1000 Mbps or 10000 Mbps speeds and require auto-negotiation for this to work correctly. Auto-negotiation is considered the normal and official way that both Gigabit and 10-Gigabit networking is designed to function. For more information, see the IEEE 802.3ab, 802.3an, and 802.3ae standards. Many drivers do not allow forced 1000 Mbps or 10000 Mbps because it is not officially supported by the IEEE standards.

When working with 10 Gb Fibre Channel over Ethernet (FCoE) configurations, Auto Negotiate may or may not be supported or recommended. For more information, consult your networking equipment vendor or administrator.

1000 MB / Full Duplex <-> 1000 MB / Full Duplex

VMware does not recommend mixing a hard-coded setting with Auto Negotiate.

Fast Ethernet – 100 MB / Full Duplex <-> 100 MB / Full Duplex

Duplex Mismatch

A common issue with speed/duplex is when the duplex settings are mismatched between two switches, between a switch and a router or between the switch and a workstation or server.

This can occur when manually hard coding the speed and duplex or from auto negotiation issues between the two devices.

The advantages of utilizing auto negotiation on Gigabit-Ethernet Interfaces:

Auto negotiation is highly recommended on ESXi/ESX Gigabit-Ethernet Interface cards and physical Gigabit switch ports for these reasons:

o Although hard coding the speed and duplex will work and is in the documentation, in some cases there are performance issues after an upgrade to ESXi/ESX 3.5 – setting the configuration to Auto Negotiate seems to resolve these performance issues.

o It resolves issues with iSCSI, vMotion, network performance, and related network issues.

o Duplex settings: While Cisco devices only support full duplex, the IEEE 802.3z standard does have support for half duplex Gigabit-Ethernet. Because of this, duplex is negotiated between Gigabit-Ethernet devices.

o Flow Control: Because of the amount of traffic that can be generated by Gigabit-Ethernet, there is a PAUSE functionality built into Gigabit-Ethernet.

Page 41: VMware vSphere Networking deep dive

Note: The PAUSE frame is a packet that tells the far-end device to stop the transmission of packets until the receiver is able to handle all the traffic and clear its buffers. The PAUSE frame has a timer included, which tells the far-end device when to start to send packets again. If that timer expires without getting another PAUSE frame, the far-end device can then send packets again. Flow Control is an optional item and must be negotiated. Devices can be capable of sending or responding to a PAUSE frame, and it is possible they will not agree to the flow-control request of the far-end.

Fast Ethernet – 100 / Full <-> 100 / Full: VMware recommends forcing the network adapter on the ESX server host and the physical switch port to which it connects to 100 / Full when using 100 MB links with an ESX server host.

Configuring the speed and duplex of the ESXi/ESX server network adapter using the vSphere Client

1. Log in to the ESXi/ESX host using the vSphere Client as the root user or a user with equivalent permissions.

2. Highlight the ESXi/ESX server host and click the Configuration tab.

3. Click the Networking link.

4. Click Properties next to the appropriate virtual switch.

5. Click the Network Adapters tab.

6. Highlight the desired network adapter, and click Edit.

7. Select the appropriate speed and duplex from the dropdown.

About the esxcfg-nics command, which is used to configure Network Interface Cards

~ esxcfg-nics <options> [nic]

The esxcfg-nics command provides information about the physical NICs in use by the VMkernel.

This prints the VMkernel name for the NIC, its PCI ID, driver, link state, speed, duplex, and a short PCI description of the card. It also allows users to set speed and duplex settings for a specific NIC.
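
For example, to list all physical NICs with their current link state, speed, and duplex:

~ esxcfg-nics -l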

Page 42: VMware vSphere Networking deep dive

The ESXi host below has two NICs running at 1 Gbps.

Here, we force vmnic0 to use Fast Ethernet (100 Mbps) at half duplex:

~ esxcfg-nics -s 100 -d half vmnic0

The changes will be reflected in the GUI.

Page 43: VMware vSphere Networking deep dive

Verifying the integrity of the physical network adapter on an ESX/ESXi host

If ESXi does not recognize the NIC, then as part of troubleshooting networking issues related to the physical network adapter, it is necessary to rule out a hardware problem. The following steps assume that the network adapter is not seen within the vSphere Client.

To determine whether the issue is caused by a hardware problem:

~ lspci -p

In this example, there are two physical Broadcom NICs and two Intel NICs recognized by the host. Under the Module column, it shows the name of the driver that has been loaded for each card. Under the Name column, it specifies the vmnic that has been assigned to each card. If there is no driver or vmnic designation associated with a network adapter, the host detects the card in the slot, but does not recognize the brand/model.

If the lines do not exist at all for the card that has been added to the server, then steps need to be taken to rule out faulty hardware. To rule out faulty hardware:

If it is an add-in network adapter, reseat it or move it to an alternate PCI slot on the server’s motherboard.

Try an alternate network card.

Update the BIOS of the server to the latest version recommended by the manufacturer.

Run hardware diagnostics to identify any potential hardware issues.

Page 44: VMware vSphere Networking deep dive

Packet Tracing

The basic command to view the data flow is esxtop:

~ esxtop

The pktcap-uw tool is an enhanced packet capture and analysis tool that can be used in place of the legacy tcpdump-uw tool. The pktcap-uw tool is included by default in ESXi 5.5.

Get help and Syntax information:

~ pktcap-uw -h |more

Page 45: VMware vSphere Networking deep dive

To view a live capture of VMkernel interface traffic:

~ pktcap-uw --vmk vmk0

Page 46: VMware vSphere Networking deep dive

To capture the output to a file, use the -o option:

~ pktcap-uw --vmk vmk0 -o /tmp/vmk0capture.pcap

You can limit the data being captured using the ‘-c’ option, which allows you to specify the number of packets you wish to capture:

~ pktcap-uw --vmk vmk0 -c 1

Page 47: VMware vSphere Networking deep dive

To capture traffic of a specific physical network card (vmnic) on the ESXi host:

~ pktcap-uw --uplink vmnic0

To capture traffic from a virtual switch port on a dvSwitch:

~ pktcap-uw --switchport <Switch-Portnumber>

For example:

~ pktcap-uw --switchport 33554433

To get the switch port ID:

~ esxtop -> Press n -> PORT-ID

To capture packets at multiple points simultaneously (for example, on both a switch port and a physical adapter at the same time), use the following command:

~ pktcap-uw --switchport 33554433 -o /tmp/33554433.pcap & pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap &

Page 48: VMware vSphere Networking deep dive

Stop pktcap-uw tracing with the kill command:

~ kill $(lsof | grep pktcap-uw | awk '{print $1}' | sort -u)

To check that all pktcap-uw traces are stopped:

~ lsof | grep pktcap-uw | awk '{print $1}' | sort -u

Captured packets can be viewed in sniffer tools such as Wireshark.

First, transfer the capture files to your workstation using a tool such as WinSCP.

Page 49: VMware vSphere Networking deep dive

Troubleshoot ESXi Host DNS and Routing Related Issues

It is important that name resolution is configured correctly on your ESXi hosts, as many vSphere features depend on it. Your hosts and vCenter servers should be able to do lookups against their configured DNS servers, and there should be both forward and reverse DNS records created for vCenter and each of the hosts. Using the vSphere client, the DNS settings can be found under the host’s Configuration tab, in the DNS and Routing section:

You can also list the configured DNS servers using the CLI. To do so, use the ‘esxcli network ip dns’ namespace. For example:

~ esxcli network ip dns server list

You can also add and remove name servers using commands under this namespace. To list, and modify, the host’s search domains, you can use:

~ esxcli network ip dns search list

Of course, having these parameters set correctly will only be of use if you can communicate with the configured DNS servers.

Page 50: VMware vSphere Networking deep dive

You can use Netcat to test connectivity:

~ nc -z 192.168.1.15 53

And you can use NSLOOKUP to confirm that you can perform queries against the DNS server:

~ nslookup <FQDN>

As an alternative to esxcli, you can also use the vicfg-dns command from the vMA or vSphere CLI. Running the command without any parameters will display a host’s DNS configuration:

vi-admin@vma:~> vifptarget --set 192.168.88.134

vi-admin@vma:~[192.168.88.134]> vicfg-dns

DNS Configuration

Host Name esxi1

Domain Name vmlab.local

DHCP false

DNS Servers 10.0.0.1

You can use vicfg-dns to change a host’s DNS configuration. Running ‘vicfg-dns --help’ will display the available options. As an example, to change the DNS server(s) that a host will use, you can run:

vi-admin@vma:~[192.168.88.134]> vicfg-dns -D 10.0.0.10,10.0.0.11

Updated Host DNS network configuration successfully.

Page 51: VMware vSphere Networking deep dive

Checking the configuration again, there are now two DNS servers configured:

vi-admin@vma:~[192.168.88.134]> vicfg-dns

DNS Configuration

Host Name esxi1

Domain Name vmlab.local

DHCP false

DNS Servers 10.0.0.10

10.0.0.11

Troubleshooting ESXi Routing Configuration

There are a number of ways to display and configure a host’s routing configuration. As I ended the last section talking about vicfg-dns, I’ll start this one by looking at vicfg-route.

To display the host’s routing table, use:

vi-admin@vma:~[192.168.88.134]> vicfg-route -l

VMkernel Routes:

Network         Netmask          Gateway        Interface

default         0.0.0.0          10.0.0.1       vmk3

10.0.0.0        255.255.255.0    Local Subnet   vmk3

10.10.20.0      255.255.255.0    Local Subnet   vmk1

192.168.88.0    255.255.255.0    Local Subnet   vmk0

The vicfg-route command can be used to add and remove static routes. To set the default VMkernel gateway:

vi-admin@vma:~[192.168.88.134]> vicfg-route 10.0.0.2

vi-admin@vma:~[192.168.88.134]> vicfg-route -l

VMkernel Routes:

Network Netmask Gateway Interface

default 0.0.0.0 10.0.0.2 vmk3

10.0.0.0 255.255.255.0 Local Subnet vmk3

10.10.20.0 255.255.255.0 Local Subnet vmk1

192.168.88.0 255.255.255.0 Local Subnet vmk0
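The same command can add and remove static routes. A hedged sketch, using 192.168.100.0/24 via 10.0.0.254 as placeholder values (check 'vicfg-route --help' for the exact syntax on your version):

vi-admin@vma:~[192.168.88.134]> vicfg-route -a 192.168.100.0/24 10.0.0.254

vi-admin@vma:~[192.168.88.134]> vicfg-route -d 192.168.100.0/24 10.0.0.254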


Troubleshoot VMKernel Related Network Configuration Issues

VMkernel interfaces are used for a number of functions, such as management traffic, vMotion, Fault Tolerance logging, iSCSI and NFS. VMkernel interfaces are named 'vmkX', where X is a number assigned in the order the interfaces were created on the host. You can see the VMkernel interfaces clearly when looking at a host's networking configuration:

Looking at the properties of the VMkernel portgroup you can see what type of traffic the vmk interface is being used for:

Testing the Management Network using the DCUI

You can test management network connectivity using the ‘Test Management Network’ option available in the DCUI.

This can be useful to test management connectivity after installing ESXi. Common issues that may cause these tests to fail include setting an incorrect VLAN for the management network (or not setting one at all), specifying incorrect DNS or default gateway IPs, and selecting the wrong physical adapter for management traffic.


Troubleshooting VMkernel Issues using the CLI

There are a number of CLI commands available to help you in troubleshooting VMkernel issues. To list the VMkernel ports on an ESXi host you can run:

~ esxcli network ip interface list

If you wanted to add a new VMKernel port you can do so using the following:

~ esxcli network ip interface add --interface-name=vmk6 --portgroup-name="FT"

You can then assign it an IP address using:

~ esxcli network ip interface ipv4 set --ipv4=10.20.10.1 --netmask=255.255.255.0 --type=static --interface-name=vmk6
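On vSphere 5.1 and later you should also be able to tag the new interface for its intended traffic type straight from esxcli. A hedged example for the FT port group created above (the tag name is an assumption; check 'esxcli network ip interface tag add --help' for the valid values):

~ esxcli network ip interface tag add --interface-name=vmk6 --tagname=faultToleranceLogging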

To remove a VMK interface, you can use:

~ esxcli network ip interface remove --interface-name=vmk6

Testing VMKernel Interface Connectivity

You can test VMkernel connectivity from the CLI. For example, to test connectivity to another host's management IP you could run:

~ esxcli network diag ping -H 192.168.1.10
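You can also use vmkping to source the test from a specific VMkernel interface, which is handy for vMotion or storage networks (the interface and target IP here are placeholders):

~ vmkping -I vmk1 10.10.20.2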


Troubleshooting ESXi VLAN Configurations using Command Line Tools

There are a few ways to troubleshoot network/VLAN configuration from the command line, using tools such as ESXCLI and vicfg/esxcfg, either locally on the ESXi host or remotely using the vCLI or the vMA. Here, I'll aim to give examples using esxcfg commands run from the vMA, and ESXCLI commands executed locally on the ESXi host.

Listing Port Group Settings:

~ esxcli network vswitch standard portgroup list

Configuring VLAN Tags:

~ esxcli network vswitch standard portgroup set -p VMNetwork --vlan-id 10

Adding and Removing Port Groups:

~ esxcli network vswitch standard portgroup add -p testpg -v vSwitch0

~ esxcli network vswitch standard portgroup set -p testpg --vlan-id 7


If you check in the GUI:

To delete a port group:

~ esxcli network vswitch standard portgroup remove -p testpg -v vSwitch0
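As an alternative to ESXCLI, the legacy esxcfg commands can show the whole standard switch layout, including port groups and their VLAN IDs, in one table:

~ esxcfg-vswitch -l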


Ghosted Adapters

After a P2V, some VMs may lose network connectivity because of a NIC driver mismatch. The stale network adapters are known as ghosted adapters and need to be removed from the VM. To do so:

On the VM, go to Start > Run, type CMD, press Enter, and then type:

> set devmgr_show_nonpresent_devices=1

While still in the command prompt window type:

devmgmt.msc

Device Manager opens; on the menu, go to View > Show Hidden Devices. You should then see the ghosted devices – they appear grayed out, and you can safely remove them from Device Manager.


LUN Masking

Alright – here we go – the push is on. 8 weeks to cover some random, sparingly used topics off of the VCAP5-DCA blueprint. Today, let's tackle an item out of the very first objective on the blueprint: LUN masking.

LUN masking is essentially a process that will mask away LUNs, or make those LUNs inaccessible to certain ESXi hosts. You know when you go into your backend array and say which hosts can have access to which LUNs – yeah, that's basically LUN masking. However, for the sake of this exam, it's performed on the host itself through something called claimrules. That said, it's much harder, but explained below…

So first off, we need to decide on a LUN that we want to mask. There are many ways to list all of your LUNs/datastores through the CLI and through the vSphere Client, so pick your beast. What we need to get is the LUN's identifier – the long string of characters that ESXi uses to uniquely identify the LUN. Since the claimrule is created within the CLI we might as well find these numbers inside the CLI as well, since you may be pressed for time on the exam. So, let's first list our LUNs, showing each identifier.

esxcli storage core device list | less

As you can see I piped the output to less. If we don't do this and there are a lot of LUNs attached to your host, then you may get a little overwhelmed with the output. "esxcfg-scsidevs -m" will also give you some great information here, which may be a little more compact than the esxcli command. Choose your weapon, so long as you can get the identifier. The LUN shown in the above image has an identifier of "naa.6006048c6fc141bb051adb5eaa0c60a9" – this is the one I'm targeting.

So now that we have our identifier, it's time to do some masking. We have some decisions to make at this point. We can mask by path (removing individual path visibility), by vendor (this will mask all LUNs to a specific vendor), or by storage transport (yeah, like all iSCSI or all FC). If we look at the currently defined claimrules we can see most types are utilized. To do so, use the following command

esxcli storage core claimrule list


For our sake here we will go ahead and perform our masking by path. I will note below, though, where the setup would differ if you were to choose vendor or transport. So, in order to do it by path, we need to see all of the paths associated with our identifier. To do so, we can use the following command along with grepping for our identifier.

esxcfg-mpath -m | grep naa.6006048c6fc141bb051adb5eaa0c60a9

Alright, so you can see we have 2 paths. That means in order to completely mask away this LUN we will need to do all of the following twice: once using the vmhba32:C1:T0:L0 path and once using vmhba32:C0:T0:L0.

Now, time to begin constructing our claimrule! First off we will need an ID number. Certainly don't use one that is already taken (remember "esxcli storage core claimrule list"), or you can use "-u" to autoassign a number. I like to have control over this stuff so I'm picking 200. Also to note is the -t option – this specifies the type of claimrule (remember when I said we could mask by vendor). Our -t to do a path will be location; however, this could be vendor or transport as well. ** Running "esxcli storage core claimrule add" with no arguments will output a bunch of examples. ** So, in order to mask by location we will specify the -A, -C, -T, and -L parameters referencing our path, and the -P states we want to use the MASK_PATH plugin. The command should look like the one below.

esxcli storage core claimrule add -r 200 -t location -A vmhba32 -C 1 -T 0 -L 0 -P MASK_PATH

and for our second path – don't forget to put a new rule ID

esxcli storage core claimrule add -r 201 -t location -A vmhba32 -C 0 -T 0 -L 0 -P MASK_PATH

Running "esxcli storage core claimrule list" will now show our newly created rules, however they

haven't been applied yet. Basically they are running in "file" – we need them to be in "runtime" This

is as as easy as running

esxcli storage core claimrule load


Now we are all set to go – kinda. They are in runtime, but the rules will not be applied until that device is reclaimed. So, a reboot would work here – or, a more ideal solution, we can run a reclaim on our device. To do so we will need that device identifier again, and the command to run is…

esxcli storage core claiming reclaim -d naa.6006048c6fc141bb051adb5eaa0c60a9

And done! And guess what – that LUN is gonzo!!! Congrats Master Masker!

HEY! Wait! I needed that LUN

Oh SNAP! This is my lab environment and I need that LUN back. Well, here's how we can undo everything we just did!

First off, let's get rid of those claimrules we just added

esxcli storage core claimrule remove -r 200

esxcli storage core claimrule remove -r 201

Listing them out will only show them in runtime now; they should no longer be in file. Let's get them out of runtime by loading our claimrule list again.

esxcli storage core claimrule load

Now a couple of unclaim commands on our paths. This will allow them to be reclaimed by the default plugin.

esxcli storage core claiming unclaim -t location -A vmhba32 -C 0 -T 0 -L 0

esxcli storage core claiming unclaim -t location -A vmhba32 -C 1 -T 0 -L 0

A rescan of your vmhba and voila! Your LUN should be back! Just as with Image Builder I feel like this would be a good thing to know for the exam. Again, it's something that can easily be marked and tracked and very specific! Happy studying!

iSCSI Port Binding

My plan is to go over all the skills in Objective 1.3, but before we get into PSA commands and whatnot let's first configure iSCSI port binding – this way we will have a datastore with multiple paths that we can fiddle around with.

First off, iSCSI port binding basically takes two separate paths to an iSCSI target (the paths are defined by VMkernel ports) and binds them together. So, we need two VMkernel ports. They can be on the same switch or separate switches, but the key is that each can have only one network adapter assigned to it. Meaning the vSwitch can contain multiple NICs, but you need to ensure that the config is overridden at the VMkernel port level to leave only one NIC active. Let's have a look at this. Below you will see the current setup of my VMkernel ports (IPStore1 and IPStore2).


As you can see, my configuration here is actually wrong and needs to be adjusted – remember, one NIC per VMkernel port. So, with a little click magic we can turn it into what you see below. Basically, for IPStore1 I have overridden the default switch config on the VMkernel port, setting vmnic0 as active and vmnic1 as unused. For IPStore2 we will do the same except the opposite – override, but this time set vmnic1 as active and vmnic0 as unused. This way we are left with two VMkernel ports, each utilizing a different NIC.


Now that we have the requirements set up and configured, we can go ahead and get started on binding the VMkernel ports together. This is not a hard thing to do! What we are going to want to do is right-click on our software iSCSI initiator and select 'Properties'. From there we can browse to the 'Network Configuration' tab and simply click 'Add'. We should now see something similar to below.

As you can see above, our VMkernel adapters are listed. If they weren't, that would indicate that they are not compatible to be bound, meaning we haven't met the requirements outlined earlier. By selecting IPStore1 and then going back in and selecting IPStore2 (I know, you can't do it at the same time), then selecting OK, then performing the recommended rescan, you will have completed the task. We can now see below, inside the 'Manage Paths' section for a datastore that has been mounted with our iSCSI initiator, that we have some nifty multipath options. First, we have an additional channel and path listed; as well, we are able to switch our PSP to things like Round Robin!
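For what it's worth, the same binding can also be done from the CLI. A hedged sketch, assuming the software iSCSI adapter is vmhba33 and our two VMkernel ports are vmk1 and vmk2:

esxcli iscsi networkportal add -A vmhba33 -n vmk1

esxcli iscsi networkportal add -A vmhba33 -n vmk2

esxcli iscsi networkportal list -A vmhba33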


Random Storage Scenarios (Section 1 – Part 1)

Scenario 1

Let's say we've been tasked with the following. We have an iSCSI datastore (iSCSI2) which utilizes iSCSI port binding to provide multiple paths to our array. We want to change the default PSP for iSCSI2 from MRU to Fixed, and set the preferred path to travel down C0:T1:L0 – only one problem, C0:T1:L0 doesn't seem to be available at the moment. Fix the issues with C0:T1:L0, change the PSP on iSCSI2, and set the preferred path.

Alright, so to start this one off let's have a look at why we can't see that second path to our datastore. If, browsing through the GUI, you aren't seeing the path at all, the first place I would look is the claimrules (now how did I know that) to make sure that the path isn't masked away – remember the LUN Masking section. So SSH into your host and run the following command.

esxcli storage core claimrule list


As you can see from my output, LUN masking is most certainly the cause of why we can't see the path. Rule 5001 loads the MASK_PATH plugin on the exact path that is in question. So, do you remember from the LUN Masking post how we get rid of it? If not, we are going to go ahead and do it here again.

First step, we need to remove that rule. That's done using the following command.

esxcli storage core claimrule remove -r 5001

Now that it's gone we can load the current list into runtime with the following command

esxcli storage core claimrule load

But we aren't done yet! Instead of waiting for the next reclaim to happen or the next reboot, let's go ahead and unclaim that path from the MASK_PATH plugin. Again, we use esxcli to do so

esxcli storage core claiming unclaim -t location -A vmhba33 -C 0 -T 1 -L 0

And rescan that hba in question – why not just do it via command line since we are already there…

esxcfg-rescan vmhba33

And voila – flip back into your Manage Paths section of iSCSI2 and you should see both paths are now available. Now we can move on to the next task, which is switching the PSP on iSCSI2 from MRU to Fixed. We will be doing this sort of thing via the command line a bit later; since you likely went into the GUI to check your path status anyway, and since we are only doing it on one LUN, we can probably get away with simply changing this via the vSphere Client. Honestly, it's all about just selecting a dropdown at this point – see below.

I circled the 'Change' button on this screenshot because it's pretty easy to simply select from the dropdown and then hit Close. Nothing will happen until you actually press 'Change', so don't forget that. Also, remember, the PSP is set on a per-host basis. So if you have more than one host and the VCAP didn't specify to do it on only one host, you will have to duplicate everything you did on the other host. Oh, and setting the preferred path is as easy as right-clicking the desired path and marking it as preferred. And, this scenario is completed!


Scenario 2

The storage team thanks you very much for doing that, but requirements have changed and they now wish for all of the iSCSI datastores, both current and newly added, to utilize the Round Robin PSP. How real life is that, people changing their minds…

No problem, you might say! We can simply change the PSP on each and every iSCSI datastore – not a big deal, there are only three of them. Well, you could do this, but the question specifically mentions that we need to have the PSP set to Round Robin on all newly added iSCSI datastores as well, so there's a bit of command line work we have to do. And, since we used the vSphere Client to set the PSP in the last scenario, we'll do it via the command line in this one.

First up, let's switch over our existing iSCSI datastores (iSCSI1, iSCSI2, iSCSI3). To do this we will need their identifiers, which we could get from the GUI; however, since we are doing the work inside the CLI, why not utilize it to do the mappings. To have a look at identifiers and their corresponding datastore names we can run the following

esxcfg-scsidevs -m

As you can see there are three datastores we will be targeting here. The identifier that we need will be the first string field listed, beginning with t10 and ending with :1 (although we don't need the :1). Once you have the string identifier of the device we want to alter, we can change its PSP with the following command.

esxcli storage nmp device set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec010_________________ -P VMW_PSP_RR

So, just do this three times, once for each datastore. Now, to have any newly added datastores default to Round Robin, we need to first figure out which SATP the iSCSI datastores are utilizing, then associate the VMW_PSP_RR PSP with it. We can use the following command to see which SATP is associated with our devices.

esxcli storage nmp device list


As you can see, our iSCSI datastores are being claimed by the VMW_SATP_DEFAULT_AA SATP. So, our next step would be to associate the VMW_PSP_RR PSP with this SATP – I know, crazy acronyms! To do that we can use the following command.

esxcli storage nmp satp set -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR

This command will ensure that any newly added iSCSI datastores claimed by the default AA SATP will get the Round Robin PSP.

At this point we are done with this scenario, but while I was doing it I realized there might be a quicker way to change those PSPs on our existing LUNs. If we associate our SATP with our PSP first, then we can simply utilize the following command on each of our datastores to force them to change their PSP back to the default (which will be RR since we just changed it).

esxcli storage nmp device set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec010_________________ -E

Of course we have to run this on each datastore as well – oh, and on every host.

Scenario 3

Big Joe, your coworker, just finished reading a ton of vSphere related material because his poor little SQL server on his iSCSI datastore just isn't cutting it in terms of performance. He read some best practices which stated that the max IOPS for the Round Robin policy should be changed to 1. He requested that you do so for his datastore (iSCSI1). The storage team has given you the go-ahead, but said not to touch any of the other datastores or you're fired.

Nice, so there is really only one thing to do in this scenario – change the default max IOPS setting for the iSCSI1 device. So, first off, let's get our identifier for iSCSI1

esxcfg-scsidevs -m

Once we have our identifier we can take a look at the Round Robin settings for that device with the following command

esxcli storage nmp psp roundrobin deviceconfig get -d t10.FreeBSD_iSCSI_Disk______000c299f1aec000_________________


As we can see, the IOOperation Limit is 1000, meaning it will send 1000 IOPS down each path before switching to the next. The storage team is pretty adamant we switch this to 1, so let's go ahead and do that with the following command.

esxcli storage nmp psp roundrobin deviceconfig set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec000_________________ -t iops -I 1

Basically what we define with the above command is that we will change that 1000 to 1, and specify that the type of switching we will use is iops (-t). This could also be set with -t bytes, entering the number of bytes to send down a path before switching.
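For instance, a hedged sketch of byte-based switching, using 8388608 bytes (8MB) as an arbitrary value:

esxcli storage nmp psp roundrobin deviceconfig set -d t10.FreeBSD_iSCSI_Disk______000c299f1aec000_________________ -t bytes -B 8388608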

So, that's basically it for this post! Let me know if you like the scenario based posts over me just rambling on about how to do a certain task! I've still got lots more to cover so I'd rather put it out there in a format that you all prefer! Use the comments box below! Good Luck!

Storage Scenarios (Section 1 – Part 2)

Hopefully you all enjoyed the last scenario based post, because you are about to get another one. Kind of a different take on covering the remaining skills from the storage section, section 1. So, here we go!

Scenario 1

A coworker has come to you complaining that every time he performs storage related functions from within the vSphere Client, VMware kicks off these long-running rescan operations. He's downright sick of seeing them and wants them to stop, saying he will rescan when he feels the need to, rather than having vSphere decide when to do it. Make it happen!

So, quite the guy, your coworker, thinking he's smarter than the inner workings of vSphere, but luckily we have a way we can help him. And the functions we are going to perform are part of the VCAP blueprint as well – coincidence? Either way, the answer to our coworker's prayers is something called vCenter Server storage filters, and there are 4 of them, explained below…

RDM Filter (config.vpxd.filter.rdmFilter) – filters out LUNs that are already mapped as an RDM

VMFS Filter (config.vpxd.filter.vmfsFilter) – filters out LUNs that are already used as a VMFS datastore

Same Hosts and Transports Filter (config.vpxd.filter.sameHostsAndTransportsFilter) – filters out LUNs that cannot be used as a datastore extent

Host Rescan Filter (config.vpxd.filter.hostRescanFilter) – automatically rescans storage adapters after storage-related management functions are performed

As you might have concluded, it's the Host Rescan Filter that we will need to set up. Also, you may have concluded that these are advanced vCenter Server settings, judging by the config.vpxd prefixes. What is conclusive is that all of these settings are enabled by default – so if we need to disable one, such as the Host Rescan Filter, we will need to set the corresponding key to false. Another funny thing is that we won't see these set up by default; basically they are silently enabled. Anyways, let's get on to solving our coworker's issue.

Head into the advanced settings of vCenter Server (Home->vCenter Server Settings->Advanced Options). From here, disabling the Host Rescan Filter is as easy as adding the config.vpxd.filter.hostRescanFilter key with a value of false to the text boxes near the bottom of the screen and clicking 'Add' – see below.

And voila! That coworker of yours should no longer have to put up with those pesky storage rescans after he's done performing his storage related functions.

Scenario 2

You work for the mayor's office in the largest city in Canada. The mayor himself has told you that he installed some SSD into a host last night and it is showing as mpx.vmhba1:C0:T0:L0 – but not being picked up as SSD! You mention that you think those are simply SAS disks, but he insists they aren't (what is this guy on, crack :)). Either way, you are asked if there is anything you can do to somehow 'trick' vSphere into thinking that this is in fact an SSD.


Ok, so this one isn't that bad really, a whole lot of words for one task. Although most SSD devices will be tagged as SSD by default, there are times when they aren't. Obviously this datastore isn't an SSD device, but the thing is we can tag it as SSD if we want to. To start, we need to find the identifier of the device we wish to tag. This time I'm going to run esxcfg-scsidevs to do so (with -c to show a compact display).

esxcfg-scsidevs -c

From there I'll grab the UUID of the device I wish to tag, in my case mpx.vmhba1:C0:T0:L0 (crazy Rob Ford). Now if I have a look at that device with the esxcli command I can see that it is most certainly not SSD.

esxcli storage core device list -d mpx.vmhba1:C0:T0:L0

So, our first step is to find out which SATP is claiming this device. The following command will let us do just that

esxcli storage nmp device list -d mpx.vmhba1:C0:T0:L0

Alright, so now that we know the SATP we can go ahead and define a SATP rule that states this is SSD

esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T0:L0 -o enable_ssd

And from here we need to reclaim the device

esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T0:L0

And, another look at our listing of the device should now show us that we are dealing with a device that is SSD.


esxcli storage core device list -d mpx.vmhba1:C0:T0:L0

So there you go Mr. Ford, I mean Mr. Mayor – it's now SSD!!!!

Migrating to vSphere Distributed Switches

Before we get too involved in the details I'll go through a few key pieces of information. As you can see below, there are a lot of port groups that I will need to migrate. These are in fact the port groups that are set up by default with AutoLab. Also, I'm assuming you have redundant NICs set up on all your vSwitches. This will allow us to migrate all of our VM networks and port groups without incurring any downtime. As stated before, there are many blog posts around this subject and many different ways to do the migration. This is just the way I've done it in the past; I'm sure you could probably do this in a smaller number of steps, but this is the process I've followed.


Step 1 – Create the shell

So the first step is to create our distributed switch. This is pretty simple! Just head into your

network view and select 'New vSphere Distributed Switch', Follow the wizard, it's not that hard.

Pay attention to the number of uplinks you allow as you need to be sure that you have as many

uplinks in your distributed switch as you have physical adapters assigned to your standard switches.

Also, I usually add my hosts into the distributed switch during this process, just not importing and

physical NICs. Basically we're left with a distributed switch containing our hosts with no uplinks

assigned. Once we have our switch we need to duplicate all of the por tgroups we wish to migrate

(Management, vMotion, FT, VM, etc.) If you are following along with the autolab you should end up

with something similar to the following (ignore my PVLAN port groups – that's another blog post).


One note about the uplinks that you can't see in the image above: I've gone into each of my port groups and set up the teaming/failover to mimic that of the standard switches. So, for the port groups that were assigned to vSwitch0, I've set dvUplink1 and 2 as active, and 3/4 as unused. For those on vSwitch1, 3/4 are active and 1/2 are unused. This provides us with the same connectivity as the standard switches and allows us to segregate the traffic the exact same way that the standard switches did. This can be done by editing the settings of your port group and modifying the Teaming and Failover section. See below.

Step 2 – Split up your NICs

Alright! Now that we have a shell of a distributed switch configured we can begin the migration process. This is the process I was mentioning at the beginning of the post that can be performed a million and one ways! This is how I like to do it. From the host's networking configuration page, be sure you have switched to the vSphere Distributed Switch context. The first thing we will do is assign a physical adapter from every vSwitch on each host to an uplink on our dvSwitch. Now, we have redundant NICs on both vSwitches so we are able to do this without affecting our connectivity (hopefully). To do this, select the 'Manage Physical Adapters' link in the top right hand corner of the screen. This will display our uplinks with the ability to add NICs to each one.

Basically, we want to add vmnic0 to dvUplink1 and vmnic2 to dvUplink3. This is because we want one NIC from each standard switch in each of the active/unused configurations that we set up previously. It's hard to explain, but once you start doing it you should understand. To do this, just click the 'Click to Add NIC' links on dvUplink1 and 3 and assign the proper NICs. You will get a warning letting you know that you are removing a NIC from one switch and adding it to another. Be sure you repeat the NIC additions on each host you have, paying close attention to the uplinks you are assigning them to.


Step 3 – Migrate our vmkernel port groups

Once we have a couple of NICs assigned to our dvSwitch we can now begin to migrate our

vmkernel interfaces. To do this task, switch to the networking inventory view, right click on our

dvSwitch and select 'Manage Hosts'. Select the hosts we want to migrate from (usually all of them

in the cluster). The NICs that we just added should already be selected the in 'Select Physical

Adapters' dialog. Leave this as default, we will come back and grab the other NICs once we have

successfully moved our vmkernel interfaces and virtual machine networking, it's the next screen, the

'Network Connectivity' dialog which we will perform most the work. This is where we say what

source port group should be migrated to what destination port group. An easy step, simply

adjusting all of the dropdowns beside each port group does the trick. See below. When your done,

skip the VM Networking for now and click 'Finish'.

After a little bit of time we should now have all of our vmkernel interfaces migrated to our distributed

switch. This can be confirmed by looking at our standard switches and ensuring we see no

vmkernel interfaces What you might still see though is VMs attached to Virtual Machine port groups

on the standard switches. This is what we will move next.

Step 4 – Move your virtual machine port groups

Again, this is done through the Networking Inventory and is very simple. Right-click your dvSwitch

and select 'Migrate Virtual Machine Networking'. Set the VM network you wish to migrate as your

source, and the one you created for it in your dvSwtich as your destination (see below). When you

click next you will be presented with a list of VMs on that network, and whether or not the

destination network is accessible or not. If we have done everything right up to this point it should

be. Select all your VMs and complete the migration wizard.


This process will have to be done for each and every virtual machine port group you wish to migrate – in the case of AutoLab, Servers and Workstations. Once we are done with this we have successfully migrated all of our port groups to our distributed switch.

Step 5 – Pick up the trash!

The only thing left to do at this point is to go back to the hosts' view of the distributed switch, select 'Manage Physical Adapters' and assign the remaining two NICs from our standard switches to the proper uplinks in our dvSwitch.

Netflow, SNMP, and Port Mirroring

Objective 2.1 covers off some other components in regards to distributed switches, so I thought I would just group them all together in this post since there isn't a whole lot to getting them set up.

First up, SNMP

Remember a week or so ago when we went over how to manage hosts with the vSphere Management Assistant? Well, I hope you paid attention, as we will need to have our hosts connected to the vMA in order to configure SNMP (technically you could do it with any instance of the vSphere CLI, but the vMA is already there for you on the exam so you might as well use it). We will need to use a command called vicfg-snmp in order to set up a trap target on our hosts. So to start off, let's set the target host with the following command

vifptarget -s host1.lab.local

Once our host is set as the target host we can start to configure SNMP. First off, let's specify our target server, port, and community name. For a target server of 192.168.199.5 on the default port of 162 and a community name of Public, we can use the following command

vicfg-snmp -t 192.168.199.5@162/Public

Now, simply enable SNMP on the host with -E

vicfg-snmp -E

You know what, you're done! Want to test it? Use -T. Check your SNMP server to be sure you have received the trap!

vicfg-snmp -T

I would definitely recommend exploring the rest of the options with vicfg-snmp. You can do so by browsing the help of the command. Look at things like multiple communities (-c), how to reset the settings to default (-r), and how to list out the current configuration (-s), etc…

vicfg-snmp --help

Also, don't forget you need to do this on all of your hosts! Keep in mind that vCenter also has SNMP settings. These are configured in the vCenter Server Settings under the SNMP section. There is a complete GUI around this so I'm not going to go over how to configure it.

NetFlow

NetFlow is configured in the settings of your dvSwitch (Right-click dvSwitch->Edit Settings) on the NetFlow tab. There are a number of items we can configure here. First off, our collector IP and port. This is the IP and port of the actual NetFlow collector where we are sending the data to. To allow all of your traffic to appear as coming from a single source, rather than multiple ESX management networks, you can specify an IP address for the dvSwitch here as well. This doesn't actually live on your network, it just shows up in your NetFlow collector.


There are a few other settings here as well: Active Flow Export Timeout and Idle Flow Export Timeout handle timeouts for the flows, whereas the sampling rate determines what portion of the data to collect. I.e., a sampling rate of 2 will collect every other packet, 5 every fifth packet, and so on. 'Process internal flows only' will collect only data between VMs on the same host. That's really it for NetFlow, not that hard to configure.

Port Mirroring

I suppose you may be asked to mirror a certain port to an uplink or VM on the exam, so it's probably best to go over this. First off, if you were asked to mirror traffic from VMA to VMB, then you need to determine which ports these VMs are attached to. You can see this on the Ports tab of the dvSwitch. Just sort by the 'Connectee' column and find their corresponding Port IDs. For the sake of this example let's say VMA is on port 150 and VMB is on 200.

To do the actual mirroring we need to be on the Port Mirroring tab of the dvSwitch's settings. Here we can click 'Add' to set up the mirror. As shown, we give our session a name and description, and there are a few settings regarding encapsulating VLANs and the maximum length of packet to capture.


The next couple of steps simply set up our source and destination for the mirror. To follow our example we can use port 150 for the source, and port 200 for the destination. Unless we explicitly check the 'Enable' box when completing the setup, all port mirrors are disabled by default. They can be enabled by going back into the session and explicitly enabling it.

CDP and LLDP

Well, 8 weeks of VCAP has dwindled down to a serious 8 days of VCAP – and for now, how about a little bit of random information from the Networking section of the blueprint.

First up, CDP and LLDP

These are relatively easy to configure; however, there are a few different modes that they can run in, therefore I thought it would be best to write them down in hopes that maybe I'll remember them if any scenarios require me to configure them.

Basically the functionality of the two protocols is identical – they both provide discovery of ports connected to a virtual switch. CDP, however, supports just Cisco physical switches, whereas LLDP supports any switch supporting LLDP. Another note: CDP can be enabled on both vSphere Standard Switches and vSphere Distributed Switches – LLDP, dvSwitch only!

So let's have a look at the dvSwitch config first. Like I mentioned earlier, it's pretty simple. From the properties tab of a vSphere Distributed Switch select 'Advanced'. From here it's as simple as setting the status to Enabled, the type to either CDP or LLDP, and the operation mode (explained below).

Listen – ESXi detects and displays information from the associated physical switch port, but information about the virtual switch is not made available to the physical switch.

Advertise – ESXi makes information about the virtual switch available to the physical switch, but doesn't detect any information about the physical switch port.

Both – does both advertise and listen.


Now that we are enabled we can view what information we receive inside the Networking section of a host's configuration tab. To do so, simply expand out your physical uplinks and click the information icon (shown below).

And that's all there is for that – with the distributed switch anyways. To get CDP working on a standard switch we are once again back in the command line interface. Probably good to brush up on these commands anyways, since they are also mentioned in the blueprint. So, let's say we wanted to configure CDP on a vSphere Standard Switch called vSwitch0 with a value of Both. We could use the following command

esxcli network vswitch standard set -v vSwitch0 -c both

And that's all there is to that – valid options for -c would be both, listen, advertise or down. To view the information received we could use the same process as above.
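To confirm the setting from the CLI, the standard switch listing should include a CDP Status field:

esxcli network vswitch standard list -v vSwitch0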

Private VLANS

While we are on the topic of vSphere Distributed Switches, why not just cover Private VLANs. Private VLANs are something I've never used in production, thus the reason I'm covering them in this series. Honestly, this lazy Sunday night is the first time I've even touched them, and they are very, very easy to configure technically, so long as you understand the concepts first.

What is a PVLAN?

A Private VLAN is essentially a VLAN within a VLAN! Can somebody say inception!! Basically they allow us to take one VLAN and split it into three different private VLANs, each containing restrictions in regards to connectivity to the others. As far as use cases go, the most common I can see is a DMZ type scenario where lots of restrictions and security are in place. The three types are promiscuous, community, and isolated, and they are explained below.

Promiscuous PVLAN

A promiscuous PVLAN has the same VLAN ID as your main VLAN. Meaning if you wanted to set up some Private VLANs on VLAN 200, the promiscuous PVLAN would have an ID of 200. VMs attached to the promiscuous PVLAN can see all other VMs on other PVLANs, and all other VMs on the PVLAN can see any VMs on the promiscuous PVLAN. In the DMZ scenario, firewalls and network devices are normally placed on the promiscuous PVLAN, as all VMs normally need to see them.

Community PVLAN

VMs that are members of the community PVLAN can see each other, as well as VMs in the promiscuous PVLAN. They cannot see any VMs in the isolated PVLAN. Again, in the DMZ scenario a community PVLAN could house VMs that need interconnectivity to each other, such as a web and a database server.

Isolated PVLAN

VMs in an isolated PVLAN are just that: isolated! The only other VMs they are able to communicate with are those in the promiscuous PVLAN. They cannot see any VMs that are in the community PVLAN, nor can they see any other VMs that might be in the isolated PVLAN. A good spot to put a service that only needs connectivity to the firewall and nothing else.

PVLANs in vSphere


PVLANs can be implemented within vSphere only on a vSphere Distributed Switch. Before we can assign a VM to a PVLAN there is a little leg work that needs to be done on the switch itself in terms of configuring the PVLAN. To do so, right-click your dvSwitch and select 'Edit Settings'. The Private VLAN tab (shown below) is where you initially set up your PVLAN. As you can see, I've set up my main private VLAN ID as 200, therefore my promiscuous PVLAN is also 200. Then, I have an isolated and a community PVLAN configured with IDs of 201 and 202 respectively.

Now our Private VLAN is set up to be consumed. The only thing left to do is create some port groups that contain the Private VLAN. We need the port groups in order to assign VMs to the respective networks. Again, right-click your dvSwitch and select 'New Port Group'. Give your port group a name, and set the VLAN type to Private VLAN. Once this happens you will see another box appear where we can select either the Promiscuous, Isolated, or Community entry of our PVLAN. Go ahead and make three port groups, each one being assigned to 200, 201, or 202.


Now it is as simple as attaching your VMs' network adapters to the desired port group. For my testing I created 4 small Linux instances: a firewall, a web server, a database server and a video streaming server. Trying to recreate a DMZ type scenario, I assigned the web and database servers to the community PVLAN as they needed to communicate with each other. I assigned the video streaming server to the isolated PVLAN as it has no need to communicate with either the web or db server. And I assigned the firewall to the promiscuous PVLAN, as all VMs need to be able to communicate with it in order to gain access to the outside world. After 'much a pinging' I found that everything was working as expected. So try it out for yourself. Try reassigning VMs to different port groups and watch how the ping responses stop. Like I said, these are very easy to set up technically; just understand the implications of what happens when VMs do not belong to the proper PVLAN. Good Luck!

The rest of Section 2 – Port Binding, CLI, and DPIO

Section 2 of the blueprint is a pretty big one, and some of the pieces warranted their own posts – however there are a lot of small little skills that don't really require a complete tutorial, so I thought I would just slam them all in here!

Determine use cases for and apply Port Binding settings

vSphere offers three types of port binding in the vSwitch settings (Distributed Virtual Switch only) – all of which are explained below.


Static – the port is assigned immediately on connection to the vSwitch. The VM stays connected to this port even when it's powered off. The only way to free up the port is to explicitly remove the NIC from the VM. Static ports are managed through vCenter Server.

Dynamic – the port is connected when the VM is powered on and disconnected when the VM is powered off. Dynamic ports are managed through vCenter Server. This method has been deprecated in vSphere 5.x.

Ephemeral – both static and dynamic port binding have a set number of ports; with ephemeral, the ports are created and destroyed on the VM power on/power off events, therefore requiring a bit more overhead. That said, these are managed by the host, so networking can still be connected/disconnected in the event that vCenter Server is unavailable.

Choosing a port binding method is pretty easy – right click on your port group, choose edit settings and it should be front and centre in the General section.

As far as use-cases go, ephemeral really only needs to be used for recovery purposes, since it is a bit more demanding in terms of overhead. Also, ephemeral does not maintain port-level permissions and controls when a VM is rebooted, since the port will be destroyed and recreated. For the most part it's best to use static port binding – and since 5.0 offers an auto-expand feature to dynamically grow the number of ports by a specified interval, you shouldn't have to worry about running out of ports.


Command Line goodness

The networking section references the ability to use command line tools to manage both standard and distributed virtual switches. Obviously I can't go over every command and every switch. Just be sure to know how to use esxcfg-vswitch, esxcfg-vmknic, esxcfg-route, the networking namespaces in esxcli, as well as some of the PowerCLI cmdlets around networking (Get-VirtualSwitch, Get-NetworkAdapter, Get-VMHostNetwork, etc.).

Hint – for the PowerShell command line stuff you can quickly find the PowerCLI commands associated with networking (or anything for that matter) by utilizing the Get-VICommand cmdlet and passing a search string. I.e., to return all cmdlets containing 'Net' you can use the following

Get-VICommand -Name *Net*

Determine use cases for and apply VMware DirectPath I/O

I've never used DPIO – that said, there it is on the blueprint so I'd better figure it out. As for use cases, honestly I haven't seen many. For the most part the virtualized hardware seems to perform well enough, but if you need the tiny bit of performance improvement it claims to provide, there are a couple of steps to get it running.

First up we need to configure pass-through on the host itself. This is done on the Configuration tab under 'Advanced Settings'. Simply select 'Configure Pass-through' and select the device you want to present to a VM.


Once you are done with this you will need to restart the host in order to complete the next step, so go ahead and do that.

As for presenting the pass-through device to the VM, this is done just as you would add any other piece of hardware (in 'Edit Settings' of a VM). Simply select PCI Device as your hardware and follow the wizard. You should see the device that you set up for pass-through earlier in the dropdown box, as shown below.

From here you will need to ensure that your guest OS has the correct drivers in order to install this hardware, as it is presented directly to the VM. Aside from requiring a memory reservation on your VM, there are also a ton of features that are unavailable when you utilize DPIO. Things such as vMotion, HA, DRS, snapshots, hot add and Fault Tolerance are all not supported – probably why there is such low adoption.

And I think that should just about wrap up networking. There is some teaming information mentioned, but honestly I find this to be VCP level knowledge and I'm just going to assume you already know it. Good Luck!

vSphere Network I/O Control

Alright – here we go, Network I/O Control – Objective 2.4 of the blueprint lists this as a skill you must know. Honestly, I've never used this before writing this post… thankfully, it's a very, very easy thing to configure. Unless I'm missing something, in which case I'm in for some trouble come exam time.


First up, let's have a look at the requirements.

Enterprise Plus licensing – since you need a distributed switch to use NIOC, in turn you need Ent+ licenses.

OK, maybe I should have said requirement – not plural. I can't seem to find any other requirements for using NIOC. Anyways, the first step in getting NIOC set up is to enable it, and this in itself is a matter of checking a box. From within the Networking inventory view, on the Resource Allocation tab, select 'Properties' and check the box.

System Network Resource Pools

Easy enough, right! Now on to our network resource pools. As you can see, there are some default system network resource pools already set up within NIOC.

Fault Tolerance

iSCSI

Management Traffic

Virtual Machine Traffic

vMotion

vSphere Replication

I'll leave it to your imagination as to what traffic these represent. Basically these resource pools are automatically applied to their corresponding traffic type when we enable NIOC. NIOC utilizes the same type of sharing mechanism that resource pools utilize. Meaning each resource pool is assigned a share value, one that applies relative to the other pools during network contention. Thus, going by the example in the Networking guide, if we assign FT and iSCSI a share value of 100, with all other resource pools having 50 shares, iSCSI and FT would each get 25% of the available bandwidth while the remaining resource pools would receive 12.5% each (during contention). The table below should help with that formula.

Resource Pool    Shares    Total Shares    Percentage

iSCSI            100       400             25%

FT               100       400             25%

Management       50        400             12.5%

VM               50        400             12.5%

vMotion          50        400             12.5%

Replication      50        400             12.5%

What if I want to further segregate my VM traffic?

A valid question. To resolve this, NIOC allows us to create our own user-defined network resource pools. Again, this is a very easy process. Selecting 'New Network Resource Pool' will open the dialog box that we need. See below…

As you can see, we can create our own resource pool and assign it either a predefined share value (high, normal, low) or a custom number, as well as a QoS priority tag if we need to tag outbound QoS from our virtual switch. Just a note: we can change the values and QoS tags on our system-defined resource pools as well if need be.

Now that we have our resource pool created there's only one final step: applying it. Using the 'Manage Port Groups' link we can assign our newly created resource pool to one of our dvPortGroups. Below I've done just that by assigning 'My Server Traffic' to dvServers.


And that's all there is to NIOC really. Again, not too hard, but something I've never touched before now. Also, something that could have caught me off guard on the exam – the last thing I want to do is spend time reading documentation! Good luck studying!

Network Scenario

Your company leverages the full Enterprise Plus licensing and has set up a Distributed vSwitch. Recently, the number of ports needed on a particular portgroup exceeded the number configured. You are tasked with creating a new portgroup, called DvS_ProductionNetwork, which only connects the running VMs and also functions when vCenter is down.

Off we go again. So, let’s recall. There are 3 different options of port binding on a DvS.

Static binding – creates a port group with a manually set number of ports. A port is assigned whenever a vNIC is added to a VM. You can connect a vNIC with static binding only through vCenter.

Dynamic binding (deprecated in vSphere 5.0!) – a port is assigned to a vNIC when the VM is powered on and its vNIC is in a connected state. You can connect with dynamic binding only through vCenter.

Ephemeral binding – a port is assigned to a vNIC when the VM is powered on and its vNIC is in a connected state. This binding method allows the bypass of vCenter, letting you manage virtual machine networking when vCenter is down.

So, that's the one we need! Ephemeral binding! Luckily, it's quite simple to configure. Hop over to the networking inventory (Ctrl + Shift + N) and create the new port group. Give it a name and leave the number of ports at the default of 128.


Now edit the settings of this port group, and select Ephemeral binding under the port binding dropdown. Also note that the number of ports is greyed out now.

More Networking Scenarios

Your recent work on the new portgroup was top notch! Now, the network administrators have some new requirements. You currently use one pNIC for the DvS. A second pNIC has been connected to the network and you have been tasked with adding it to the DvS. Also ensure that the DvS_StorageNetwork port group only uses the new pNIC and does VLAN tagging on VLAN ID 20.

Another networking objective. Whoohoo! Alright, let us first check out the current network adapters available on the host:

Alright, so vmnic2 is the one that we can add to DvS_AMS01. Go over to the networking view (Ctrl + Shift + N) and edit the settings of your DvS. We first need to check if the DvS allows for 2 uplinks, instead of just 1.


And check this out! It's still set to 1. This is a good one to remember for the exam: on the DvS object itself, you configure the maximum number of physical adapters (also called uplink ports) per host. So set that one to 2 and let's continue with adding vmnic2 to the DvS.

Since the host is already connected to the DvS, click the DvS and select Manage Hosts. You will find your host, and you can add the second NIC. You could also do this from the Hosts and Clusters view; do whatever works for you.

Now that we have added that pNIC to the DvS, we need to create the DvS_StorageNetwork port group. Remember that we need to do VLAN tagging on VLAN ID 20 here. Create the new port group now; its settings should look like this:

Now, for the last part: as ESXi does load balancing by default (originating port ID based), we will now have load balancing on DvS_ProductionNetwork, which is great, but not what we need for the storage network. Open up the settings of that port group and go to the Teaming and Failover section.

Both uplink ports are now under Active Uplinks. Let's review real quick what the options are:

Active Uplinks – actively being used for traffic flow

Standby Uplinks – will only become active when a failure occurs on one of the active uplinks

Unused Uplinks – this adapter will never be used for this port group

We need to ensure that this port group never uses the old uplink, so move dvUplink1 over to Unused Uplinks. It should then look like this:


Host Cache Scenario

You recently acquired some SSD drives for your hosts. You're not running vSphere 5.5 yet, so vFRC is not an option. You read something about swap to host cache, and you think it might be wise to configure your SSD drive for usage as host cache.

Well, the process of configuring this isn't that hard. Swap to host cache will be used as a last resort, as a replacement for swapping to "disk". Remember that vSphere has 4 main memory management techniques:

1) Transparent page sharing: eliminates redundant copies of memory pages by removing them from memory and creating a reference instead.

2) Memory ballooning: in times of contention, the balloon driver (which comes with VMware Tools) will ask the guest OS for unused memory and return it to vSphere.

3) Memory compression: after ballooning runs out, try compressing the memory (basically gzipping it).

4) Swap to disk / host cache: swap memory to a disk of some sort.

So, the swapping itself comes last in the memory management process. While it's still not wanted, swapping to an SSD is still better than swapping to shared storage or slow local storage. You configure this by offering up a (portion of an) SSD-tagged datastore as host cache. Go to Configuration -> Host Cache Configuration.


All devices that are recognized as SSD drives will show up here. You can right click the datastores and set the amount of disk space that you are willing to spend on host cache. If you haven't formatted a datastore yet, but do have an SSD in place, you can use the Add Storage wizard mentioned above.

Once you've configured this, you can browse the datastore which you have (partially) allocated to host cache. On your datastore, you will find a hashed folder, and in that folder a folder named hostCache. Something like this: 5241d252-0687-cf96-f89a-10ddb1eabcf5/hostCache. In this folder, you will find as many .vswp files as the number of GBs that you have allocated to host cache.


HA

Although High Availability is something I've been configuring for many years now, I thought it might be a good idea to go over the whole process again. This became especially evident after watching the HA section of Jason Nash's TrainSignal/PluralSight course, as I quickly realized there are a lot of HA advanced settings that I've never modified or tested – with that said, here's the HA post.

First off, I'm not going to go over the basic configuration of HA – honestly, it's a checkbox, right – I think we can all handle that. I will give a brief description of a few of the HA bullet points that are listed within the blueprint and point out where we can manage them.

First up, Admission Control

When an HA event occurs in our cluster, we need to ensure that enough resources are available to successfully fail over our infrastructure – admission control dictates just how many resources we will set aside for this event. If our admission control policies are violated, no more VMs can be powered on inside of our cluster – yikes! There are three types…

Specify Failover Host – ugly! Basically you assign a host as the one that will be used in the event of an HA failover. The aftermath of an HA event is the only time that this host will have VMs running on it – all other times, it sits there wasting money.

Host failures cluster tolerates – This is perhaps the most complicated policy. Essentially a slot size is calculated for CPU and memory; the cluster then does some math to determine how many slots are available, and reserves enough failover slots to ensure that a certain number of host failures can be tolerated. There will be much more on slot size later in this post, so don’t worry if that doesn’t make sense yet.

Percentage of cluster resources reserved – This is probably the one I use most often. It lets you reserve a percentage of both CPU and memory for VM restarts; for example, reserving 25% on a four-host cluster is roughly one host’s worth of capacity.

So, back to slot size – a slot is made up of two components: memory and CPU. HA takes the largest memory reservation of any powered-on VM in your environment and uses that as its memory slot size. So even if you have 200 VMs with only 2 GB of RAM each, placing a reservation of, say, 8 GB on just one VM makes your memory slot size 8 GB. If you do not have any reservations set, the slot size is deemed to be 0 MB + memory overhead.

As for CPU, the same rules apply – the slot size is the largest reservation set on a powered-on VM. If no reservations are used, the slot size is deemed to be 32 MHz. Both the CPU and memory slot sizes can be controlled by a couple of HA advanced settings – das.slotCpuInMhz and das.slotMemInMb. (Note – all HA advanced settings start with das. – so if you are doing the test and can’t remember one, simply open the Availability doc and search for das – you’ll find them.) These do not change the default slot size values, but rather specify an upper limit on how large a slot can be.

So let’s have a look at these settings and slot size – first up, we can see our current slot size by selecting the ‘Advanced Runtime Info’ link on a cluster’s Summary tab. As shown below, my current slot size is 500 MHz for CPU and 32 MB for memory; I also have 16 total slots, 4 of which have been taken.

So let’s now set das.slotCpuInMhz to something lower than 500 – say we only ever want our CPU slot size to be 64 MHz. Within the cluster’s HA settings (right-click cluster -> Edit Settings -> vSphere HA) you will see an Advanced Options button; select that and set das.slotCpuInMhz to 64 as shown below.

Now we have essentially stated that HA should use the smaller of either the largest VM CPU reservation or the value of das.slotCpuInMhz as our CPU slot size. A quick check of our runtime settings reflects the change we just made. Also, if you look, you will see that we have increased our total available slots to 128, since we are now using a CPU slot size of 64 MHz rather than 500.
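Just to sanity-check that math with hypothetical numbers (assume the cluster has roughly 8,192 MHz of CPU available for failover and memory is not the constraining resource):

slots at a 500 MHz CPU slot size: floor(8192 / 500) = 16
slots at a 64 MHz CPU slot size: floor(8192 / 64) = 128

which lines up with the jump from 16 to 128 total slots shown in the runtime info.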

So that’s admission control and slot sizes in a nutshell. It seems like a plausible exam task to have you limit or change some slot sizes. I’m also not sure how much troubleshooting needs to be performed on the exam, but if you’re presented with any ‘VMs failing to power on’ scenarios, slot sizes and admission control could definitely be the answer.

More Advanced Settings

As you may have seen in the earlier screenshots, a few other of those das. advanced settings were shown. Here are a few that you may need to know for the exam – maybe, maybe not – either way, good to know…

das.heartbeatDsPerHost – used to increase the number of heartbeat datastores used – the default is 2, but it can be raised to a maximum of 5. Requires a complete reconfiguration of HA on the hosts.

das.vmMemoryMinMb – value to use for the memory slot size if no reservation is present – defaults to 0.

das.slotMemInMb – upper bound on the memory slot size – meaning we can limit how large the slot size can get by using this value.

das.vmCpuMinMhz – value to use for the CPU slot size if no reservations are present – defaults to 32.

das.slotCpuInMhz – upper bound on the CPU slot size – meaning we can limit how large the slot size can get by using this value.

das.isolationAddress – can be used to change the IP address that HA pings when determining isolation – by default this is the default gateway.

das.isolationAddressX – can be used to add additional IPs to ping – X can be any number between 0 and 9.

das.useDefaultIsolationAddress – can be used to specify whether HA should even attempt to use the default isolation address. For a hypothetical pairing, see the example after this list.
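As a quick, entirely hypothetical example – a common pairing when the default gateway isn’t a reliable ping target, set through the same Advanced Options dialog as the slot sizes:

das.useDefaultIsolationAddress = false
das.isolationAddress0 = 10.0.0.11
das.isolationAddress1 = 10.0.0.12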

Anyways, those are the most commonly used settings – again, any others will be listed in the Availability guide, so use that if needed to find them on the exam – but remember, having to open those PDFs will eat up valuable time.

Other random things

Just a few notes on some other parts of HA that I haven’t used that often. The first is VM Monitoring. VM Monitoring watches for heartbeats and I/O activity from the VMware Tools service inside your virtual machines. If it doesn’t detect activity from the VM, it determines that the VM has failed and can proceed with a reboot of that VM. vSphere has a few options for VM monitoring that we can use to help prevent false positives and unneeded VM reboots.

Failure Interval – Amount of time in seconds to check for heartbeats and I/O activity.

Minimum Uptime – The amount of time in seconds that VM monitoring will wait after a power on or

restart before it starts to poll.

Maximum Per VM Resets – the number of times that a VM can be reset in a given time period

(Reset Time Window)

Reset Time Window – used for the Maximum Per VM Resets – specified in hours

The blueprint also mentions heartbeat datastore dependencies and preferences. Quickly: vSphere will choose which datastores to use as HA heartbeat datastores automatically, depending on a number of things like storage transport, number of hosts connected, etc. We can change this in our options as well. We can instruct vSphere to choose only from our preferred list (so by selecting just 2 datastores we effectively dictate which datastores are used), or we can tell it to use our preferred datastores if possible, but to pick others if it can’t.

Most of the settings we set as defaults, such as isolation response and restart priority, can also be set on a per-VM basis. This is pretty easy so I won’t explain it, but I wanted to mention that it can be done.

I’d say that’s enough for HA – it’s not a hard item to administer. That said, lab it, lab all of it!

Practice Practice Practice.

Fault Tolerance

You might know VMware Fault Tolerance already, since the VCAP exam builds on VCP knowledge. But still, it is in the blueprint, so it might be wise to go over it.

Fault Tolerance, often abbreviated as FT, is a technique in which a shadow VM of a running VM is kept in lockstep with the primary. This basically means that all memory and CPU operations on the primary VM are also executed on the secondary VM.

In case of a host failure, a VM with Fault Tolerance enabled can switch over from the primary to the secondary VM in a matter of seconds, picking up right where the primary stopped. This allows for better uptime of that VM and avoids the VM restart that HA would perform.

There are a few host requirements for running FT:

-> You need a cluster with HA enabled

-> All hosts need access to the same (shared) datastores

-> There needs to be physical processor support

-> VMkernel ports need to be configured for vMotion and FT logging

There are also some VM requirements for running FT:

-> The VM can have only one (1) vCPU, so no vSMP

-> The VM’s disks need to be eager-zeroed thick provisioned

-> No non-replayable devices (CD-ROM, USB devices, etc.)

-> No snapshots

Configuring the VMkernel port for FT logging

Conforming to VMware best practices for FT, it is wise to use a dedicated NIC for FT logging (preferably even 10 Gigabit), but configuring FT logging is as easy as selecting a checkbox on a VMkernel port:

Enabling FT on a VM

Enabling FT is rather simple: right-click the VM -> Fault Tolerance -> Turn On Fault Tolerance. You might get a popup saying that a memory reservation will be created for the full memory allocation of this VM, and that the disk will be eager-zeroed out.

After it walks through the process of enabling fault tolerance, you get a nice blue icon in your

inventory:

After powering on the FT VM, on the summary page, you also see some info about the FT status:

Testing VMware FT

Now that we have a running FT VM, we might as well test it. We have two options for testing:

Test failover – The primary VM fails over to the secondary VM, and then a new secondary VM is spawned.

Test restart secondary – The secondary VM is re-spawned and the FT configuration is protected again.

After doing a failover of the primary VM, a new secondary VM will be spawned, so the status after the failover might look like this:

Troubleshooting VMware FT

So, all is happy, but since we’re doing the VCAP exam, we might expect some troubleshooting.

On the summary page of the host, you can see whether the host is configured and ready for FT. If it isn’t, the reason why will also be shown:

In the image above, there isn’t a VMkernel port configured for FT logging. So go into your networking and check that FT logging box.

Also, when the VM shows something like the below, the secondary VM is not running, so do a restart or migrate of the secondary:

NetFlow, SNMP, and Port Mirroring

First up, SNMP

Remember a week or so ago when we went over how to manage hosts with the vSphere Management Assistant? Well, I hope you paid attention, as we will need to have our hosts connected to the vMA in order to configure SNMP (technically you could do it with any instance of the vSphere CLI, but the vMA is already there for you on the exam, so you might as well use it). We will need to use a command called vicfg-snmp to set up a trap target on our hosts. So to start off, let's set a host target with the following command:

vifptarget -s host1.lab.local

Once our host is set as the target we can start to configure SNMP. First off, let's specify our target server, port, and community name. For a target server of 192.168.199.5 on the default port of 162 and a community name of Public, we can use the following command:

vicfg-snmp -t 192.168.199.5@162/Public

Now, simply enable SNMP on the host with -E

vicfg-snmp -E

You know what, you're done! Want to test it? Use -T, then check your SNMP server to be sure you have received the trap!

vicfg-snmp -T

I would definitely recommend exploring the rest of the options with vicfg-snmp. You can do so by browsing the help of the command. Look at things like multiple communities (-c), how to reset the settings to default (-r), and how to list out the current configuration (-s), etc.:

vicfg-snmp --help

Also, don't forget you need to do this on all of your hosts! Keep in mind that vCenter also has SNMP settings, configured in the vCenter Server Settings under the SNMP section. There is a complete GUI around those, so I'm not going to go over how to configure them.
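Back on the hosts: if you'd rather skip vicfg-snmp, I believe the esxcli snmp namespace can do the same job – a hedged equivalent using the same target and community as above:

esxcli system snmp set --communities Public
esxcli system snmp set --targets 192.168.199.5@162/Public
esxcli system snmp set --enable true
esxcli system snmp test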

NetFlow

NetFlow is configured in the settings of your dvSwitch (right-click dvSwitch -> Edit Settings) on the NetFlow tab. There are a number of items we can configure here. First off, our collector IP and port – this is the IP and port of the actual NetFlow collector where we are sending the data to. To allow all of your traffic to appear as coming from a single source, rather than from multiple ESXi management networks, you can specify an IP address for the dvSwitch here as well. This address doesn't actually live on your network; it just shows up in your NetFlow collector.

There are a few other settings here as well. Active Flow Export Timeout and Idle Flow Export Timeout handle timeouts for the flows, whereas the sampling rate determines what portion of data to collect – e.g., a sampling rate of 2 will collect every other packet; 5, every fifth packet; and so on. 'Process internal flows only' will collect only data between VMs on the same host. That's really it for NetFlow – not that hard to configure.

Port Mirroring

I suppose you may be asked to mirror a certain port to an uplink or VM on the exam, so it's probably best to go over this. First off, if you were asked to mirror traffic from VMA to VMB, then you need to determine what ports these VMs are attached to. You can see this on the Ports tab of the dvSwitch. Just sort by the 'Connectee' column and find their corresponding port IDs. For the sake of this example, let's say VMA is on port 150 and VMB is on 200.

To do the actual mirroring we need to be on the Port Mirroring tab of the dvSwitch's settings. Here we can click 'Add' to set up the mirror. As shown, we give our session a name and description, and there are a few settings regarding encapsulation VLANs and the maximum length of packet to capture.

The next couple of steps simply set up the source and destination for our mirror. To follow our example, we can use port 150 for the source and port 200 for the destination. Unless we explicitly check the 'Enable' box when completing the setup, all port mirrors are disabled by default. They can be enabled by going back into the session and explicitly enabling it.

I'm going to practice setting these up until I can do it with my eyes closed. They are something I don't use in my day-to-day operations, but I also recognize that the VCAP may ask you to do them, as they can easily be scored.

The ESXi Firewall

Alright, continuing in the realm of security, let's have a look at the built-in firewall on ESXi. This post relates directly to Objective 7.2 on the blueprint! Basically, a lot of this work can be done in either the GUI or the CLI, so choose what you are most comfortable with. I'll be jumping back and forth between both! Some things are just easier in the GUI, I find… anyways, I only have about 4 weeks to go, so let's get going…

First up, enable/disable preconfigured services

Easy peasy! Hit up 'Security Profile' on a host's Configuration tab and select 'Properties' in the 'Services' section. You should see something similar to the below.

As far as enabling/disabling goes, you would simply stop the service and set it to manual automation.

Speaking of automation, that's the second skill

As you can see above, we have a few options in regards to automation behavior. We can Start/Stop with the host (basically on startup and shutdown), Start/Stop manually (we go in and do it ourselves), or Start automatically when… (I have no idea what this means, sorry – let me know in the comments; my best guess is it ties the service to whether any of its firewall ports are open). Anyways, that's all there is to this!

We are flying through this – Open/Close Ports

Same spot as above, just hit the 'Properties' link on the Firewall section this time. Again, this is just as easy – check/uncheck the boxes beside the service containing the port you want to open or close! Have a look below – it's pretty simple!

Another relevant spot here is the 'Firewall' button at the bottom. Aside from opening and closing a port, we can also specify which networks are able to get through if our port is open. Below I'm allowing access only from the 192.168.1.0/24 network.

Again, this can be done within the CLI, but I find it much easier to accomplish inside the GUI. That's a personal preference though, so pick your poison!

That's what I get for talking about the CLI – custom services!

Aha! Too much talk of the CLI leads us to a task that can only be completed via the CLI: custom services. Basically, if you have a service that utilizes ports that aren't covered by the default services, you need to create your own spiffy little service so you can enable/disable it, open/close those ports, and allow access to it. So, off to the CLI we go…

The services in the ESXi firewall are defined by XML files located in /etc/vmware/firewall. The service.xml file contains the bulk of them and you can define yours in there, or you can simply add any XML file to the directory and it will be picked up (so long as it is defined properly). If you have enabled HA you are in luck – you will see an fdm.xml file there. Since the VCAP is time sensitive, this might be your quickest way out, as you can just copy that file, rename it for your service, and modify it as needed. If not, then you will have to get into service.xml and copy text out of there. I'm going to assume HA is enabled and go the copy/modify route.

So, copy fdm.xml to your service name

cp fdm.xml mynewservice.xml

Before modifying mynewservice.xml you will need to give root access to write to it; use the following to do so…

chmod o+w mynewservice.xml

Now vi mynewservice.xml – if you don't know how to use 'vi', well, you'd better just learn; go find a site. Let's say we have a requirement to open up tcp/udp 8000 inbound and tcp/udp 8001 outbound. We would make the file look as follows, simply replacing the name and ports and setting the enabled flag.
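Something along these lines – a sketch modeled on the entries in the stock service.xml (the service and rule id attributes just need to be unique; 'mynewservice' is our made-up name):

<!-- /etc/vmware/firewall/mynewservice.xml – hypothetical custom service -->
<ConfigRoot>
  <service id="0100">
    <id>mynewservice</id>
    <rule id="0000">
      <direction>inbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>8000</port>
    </rule>
    <rule id="0001">
      <direction>inbound</direction>
      <protocol>udp</protocol>
      <porttype>dst</porttype>
      <port>8000</port>
    </rule>
    <rule id="0002">
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>8001</port>
    </rule>
    <rule id="0003">
      <direction>outbound</direction>
      <protocol>udp</protocol>
      <porttype>dst</porttype>
      <port>8001</port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>
</ConfigRoot>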

Syslog scenario

Company policy states that every syslog-capable device or server should send its logs to an appropriate syslog collector. Your colleague has already set up the VMware Syslog Collector on a separate machine, located at 10.10.20.45. You have been tasked with setting up the syslog clients on the ESXi hosts and ensuring that the logs arrive on the syslog server.

To configure the syslog client on the ESXi hosts, we will be using the esxcli system syslog namespace. This lets us set different options regarding the local and, more importantly for us, the remote syslog.

Let’s review the default config first by using the following command:

~ # esxcli system syslog config get

Default Rotation Size: 1024

Default Rotations: 8

Log Output: /scratch/log

Log To Unique Subdirectory: false

Remote Host: <none>

We see that no remote syslog is being used. Let’s configure one, using this command:

~ # esxcli system syslog config set --loghost=10.10.20.45

Now that we have configured a remote loghost, we need to reload the syslog daemon to apply the configuration changes. esxcli can help us once again:

~ # esxcli system syslog reload

You might think that we’re ready now, but when we check our syslog server, we don’t see any logs yet. Bummer! For this problem, I’ll refer to the ESXi firewall post (http://blog.mwpreston.net/2013/11/19/8-weeks-of-vcap-the-esxi-firewall/): with the default security level, this outgoing traffic will be dropped. We need to enable the firewall ruleset for syslog (udp/514, tcp/1514).

~ # esxcli network firewall ruleset set -r syslog -e true

And reload our changes:

~ # esxcli network firewall refresh

And now we see our host logs coming in. The VMware Syslog Collector stores its logs by default in C:\ProgramData\VMware\VMware Syslog Collector\Data.

Back to our custom service – save that bad boy, and it's probably a good idea to run 'chmod o-w mynewservice.xml' to take that write permission away again. If you go and look at your services, or simply run 'esxcli network firewall ruleset list', you might say, "hey, where's my new service?" Well, it won't show up until you refresh the firewall – to do so, use the following command:

esxcli network firewall refresh

Now you can go check in the GUI or do the following to list out your services…

esxcli network firewall ruleset list

Woot! Woot! It's there!

But wait, it's disabled. No biggie – we can go ahead and enable it just as we did the others earlier in this post – or, hey, since we are in the CLI, let's just do it now:

esxcli network firewall ruleset set -r mynewservice -e true

And that's that! You are done! If asked to set the allowedIP information, I'd probably just jump back to the GUI and do that (though there is a CLI route – see below)!
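That said, if you do want the CLI route for allowed IPs, I believe it goes roughly like this (flags from memory – verify with --help; the subnet is just an example):

esxcli network firewall ruleset set -r mynewservice --allowed-all false
esxcli network firewall ruleset allowedip add -r mynewservice -i 192.168.1.0/24
esxcli network firewall ruleset allowedip list -r mynewservice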

Set firewall security level – more CLI goodness

Well, before we can set the firewall security level, let's first understand what security levels are available to us. ESXi gives us three…

High – This is the default – the firewall blocks all incoming and outgoing ports except for the essential ports it needs to run.

Medium – All incoming traffic is blocked, except for any port you open – outgoing is a free-for-all.

Low – Nada – have at it, everything is open.

Anyway, we can get the default action by running:

esxcli network firewall get

and to change it we have a few options… Passing '-d false' sets us to DROP (the default high security level), passing '-d true' sets us up to PASS traffic (I think this would be medium security), and setting '-e false' disables the firewall completely (the low setting). So, to switch to medium we could do the following:

esxcli network firewall set -d true

I could be wrong here, so if I am just let me know and I'll update it
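To keep the three levels straight in one place (same caveat – this mapping is my interpretation):

High (the default – block unless explicitly opened): esxcli network firewall set -e true -d false

Medium (pass by default, firewall still on): esxcli network firewall set -e true -d true

Low (firewall off entirely): esxcli network firewall set -e false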

And guess what? We are done with the firewall! I would practice this stuff, as it's easily measurable and it can be quickly identified whether you're doing something right or wrong – I'd bet this will be on the exam in one way or another. Good luck!

Security

Just as I said I'm going to hop around from topic to topic, so without further ado we move from HA to

security. This post will be pretty much all of objective 7 on the blueprint – some things I may graze

over while focusing heavily on others.

So first up is Objective 7.1 – there is a lot of information in here, and I'll just pull out what's most important in my opinion, as well as the tasks I don't commonly perform. That said, I'm going to leave out users, groups, lockdown mode, and AD authentication – these are pretty simple to configure anyway. Also, this whole authentication proxy thing – I'm just going to hope for the best that it isn't on the exam. So, let's get started on this beast of an objective.

SSH

Yeah, we all enable it, right – and we all suppress that warning with that advanced setting. The point is, SSH is something that is near and dear to all our hearts, and we like having CLI access in case the GUI or vCenter or something is down. So with that said, let's have a look at what the blueprint states in regards to SSH customization. Aside from enabling and disabling it, which is quite easy so I won't go over it, I'm not sure what the blueprint is getting at. I've seen lots of sites referencing the timeout setting, so we can show that. Simply change the value in a host's Advanced Settings to the desired time in seconds (UserVars > ESXiShellTimeOut) as shown below.

As far as 'Customize SSH settings for increased security' goes, I'm not sure what else you can enable/disable or tweak. If you are familiar with sshd, I suppose you could prevent root from logging in and simply utilize SSH with a local user account.
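For what it's worth, the same timeout can be pushed from the CLI as well – a hedged one-liner (600 seconds is just an example value):

esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 600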

Certificates and SSL

The blueprint mentions the enabling and disabling of certificate checking. This is simply done by

checking/unchecking a checkbox in the SSL section of the vCenter Server settings.

The blueprint also calls out the generation of ESXi host certs. Before doing any sort of certificate generation or crazy SSL administration, always back your original certs up. They are located in /etc/vmware/ssl – just copy them somewhere. To regenerate certs, simply shell into ESXi and run generate-certificates – this will create new certs and keys (ignore the error regarding the config file). After doing this you will need to restart your management agents (/etc/init.d/hostd restart) and quite possibly reconnect your host to vCenter.

To deploy a CA-signed cert, you can simply copy your certs to the same directory (/etc/vmware/ssl), making sure they are named rui.crt and rui.key, and restart hostd the same as above.
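Putting that whole dance together as a sketch (paths as above – back up before you regenerate!):

cp /etc/vmware/ssl/rui.crt /etc/vmware/ssl/rui.crt.bak
cp /etc/vmware/ssl/rui.key /etc/vmware/ssl/rui.key.bak
generate-certificates
/etc/init.d/hostd restart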

As for SSL timeouts, I couldn't find them in any of the recommended tools for this objective – they're actually in the Security guide (which makes sense, right – we are doing the security objective… #fail). Either way, you need to edit the /etc/vmware/hostd/config.xml file and add the following two entries to modify the SSL read and handshake timeout values respectively (remember, they are in milliseconds):

<readTimeoutMs>15000</readTimeoutMs>

<handshakeTimeoutMs>15000</handshakeTimeoutMs>

Once again you will need to restart hostd after doing this!

Password policies

Yikes! If you want to get confused, try to understand the PAM password policies. I'll do my best to explain – keep in mind it will be high level. This is in the blueprint, however I'm not sure whether they will have you doing this on the exam. Either way, it's good to know… Honestly, I don't think I'm going to memorize this; if you work with it daily then you might, but me, no! I'll just remember that it is also in the Security guide (search for PAM). Anyways, here's the directive:

password requisite /lib/security/$ISA/pam_passwdqc.so retry=N min=N0,N1,N2,N3,N4

Wow! So what the hell does that mean? Well, first off, the Ns represent numbers (N = retry attempts, N0 = minimum length if using only one character class, N1 = minimum length if using two character classes, N2 = minimum length of words inside passphrases, N3 = minimum length if using three character classes, N4 = minimum length if using all four character classes). Character classes are basically lower case, upper case, numbers, and special characters. They also confuse things by slamming the passphrase setting right in the middle – nice! Either way, this is the example from the Security guide:

password requisite /lib/security/$ISA/pam_passwdqc.so retry=3 min=12,9,8,7,6

This translates into three retry attempts, a 12-character minimum if using only one class, a 9-character minimum if using two classes, a 7-character minimum if using three classes, and a 6-character minimum if using all four classes. As well, passphrases are required to have words that are at least 8 characters long.

No way can I remember this – I'm just going to remember Security Guide + CTRL+F + PAM.

I'm going to cut this post off here and give the ESXi firewall its own post – my head hurts!!!!

vSphere Management Assistant (vMA)

So, first off, let's get started with installing and configuring the vMA. Installation really doesn't even need to be described – it comes as an OVF and it's as simple as just importing that…

Configuration can get a bit tricky, especially if you haven't used IP Pools before. We will cover IP Pools in another blog post, so I'll just leave it at that. For the moment, I just went into the vMA VM settings and disabled all of the vApp options!

Anyways, once you finally get the appliance booted up you will be prompted to enter some network information – pretty simple stuff, menu driven – and then prompted to change the default password for vi-admin. Easy stuff thus far. Speaking of authentication, the vMA utilizes 'sudo' to execute commands. This basically allows vi-admin to execute commands under the root user account – a bit of a security and safeguard mechanism utilized in some Linux OSes.

Alright, so we are now up and running, so let's just go over some common tasks we might perform with the vSphere Management Assistant. It's probably a good idea to know all of these for the exam, as vMA has its very own objective and is referenced in many others.

vMA and your domain!

Certainly we may want to join the appliance to our domain. This gives us plenty of security benefits, the biggest being that we will not have to store any of our target hosts' passwords within the vMA credential store – so long as the hosts are members of the domain as well. The commands related to vMA and domains are as follows…

To join vMA to a domain (obviously substituting your domain name and credentials; this requires a restart of the appliance afterwards):

sudo domainjoin-cli join FQDN user_with_privileges

And to remove the vMA from the domain, it's the same command with different parameters:

sudo domainjoin-cli leave

And to view domain information:

sudo domainjoin-cli query

So, as mentioned above, we can do some unattended Active Directory authentication to our hosts. This is a pretty long, drawn-out process, so I doubt it will be asked – but then again I'm wrong 100% of 50% of the time – I'd just know where this information is in the vSphere Management Assistant user guide (HINT: page 15).

Host Targets

Before we can use the vMA to execute commands on hosts we need to, well, add hosts to our vMA. In vMA terms, our hosts are called targets: targets on which we can execute commands. When adding hosts we have to provide the hostname and some credentials, so we have a couple of options in regards to how we authenticate: adauth or fpauth (the default). Examples of adding a host with both authentication types are below, along with some other host options. The whole flow is sketched after the list.

Using local ESXi credentials

vifp addserver HOSTNAME

Using AD credentials

vifp addserver HOSTNAME --authpolicy adauth

Viewing the hosts we have added

vifp listservers

Removing a server

vifp removeserver HOSTNAME

Set a host as the target server – meaning set it up so you can run a command on the host without

authentication

vifptarget -s HOSTNAME

To clear the current target

vifptarget -c
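A typical end-to-end flow, with a made-up hostname (once a target is set, commands like esxcli run against it without prompting for credentials):

vifp addserver esx01.lab.local
vifptarget -s esx01.lab.local
esxcli system version get
vifptarget -c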

Security and user-related functions

The vMA also has a few commands we can run to help better secure our systems. When you add a host to vMA, it actually creates vi-admin and vi-user accounts on your ESXi host. You can tell vMA to rotate these passwords using the following command:

vifp rotatepassword (--now, --never or --days #)

vMA also has a local vi-user account, which is disabled by default since it has no password. This account can be used to run commands on an ESXi host that do not require administrative privileges. Enabling this account is as easy as setting a password on it using the following:

sudo passwd vi-user

For now that's it – that's all I can think of that is vMA related. We will be using it for some other components in the future, like setting up SNMP and different things, but I wanted to keep this post strictly about vMA-specific commands. Happy studying!