Hyper-V Networking
Aidan Finn
About Aidan Finn
Technical Sales Lead at MicroWarehouse (Dublin)
Working in IT since 1996
MVP (Virtual Machine)
Experienced with Windows Server/Desktop, System Center, virtualisation, and IT infrastructure
@joe_elway
http://www.aidanfinn.com
http://www.petri.co.il/author/aidan-finn
Published author/contributor of several books
Books
System Center 2012 VMM
Windows Server 2012 Hyper-V
Networking Basics
Hyper-V Networking Basics
[Diagram: a Hyper-V host with the management OS and virtual machines connected through the virtual switch to a VLAN trunk on the physical network; virtual NICs are tagged with VLAN ID 101 and VLAN ID 102]
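A minimal PowerShell sketch of the layout above, assuming two VMs named VM01 and VM02 and an external switch named External1 (all names are hypothetical); the physical switch port carries both VLANs as a trunk, and each virtual NIC is tagged with its own VLAN ID:
# Connect each VM's virtual NIC to the external virtual switch
Connect-VMNetworkAdapter -VMName VM01 -SwitchName External1
Connect-VMNetworkAdapter -VMName VM02 -SwitchName External1
# Tag each virtual NIC with its VLAN ID
Set-VMNetworkAdapterVlan -VMName VM01 -Access -VlanId 101
Set-VMNetworkAdapterVlan -VMName VM02 -Access -VlanId 102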
Virtual NICs
Generation 1 VMs can have:
(Synthetic) network adapter
Requires drivers (Hyper-V integration components/services)
Does not do PXE boot
Best performance
Legacy network adapter
Emulated - does not require Hyper-V drivers
Does offer PXE
Bad performance
Generation 2 VMs have synthetic network adapters with PXE
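As a quick sketch, assuming a Generation 1 VM named VM01 connected to a switch named External1 (both hypothetical), the two adapter types are added like this:
# Synthetic adapter: best performance, needs integration components, no PXE on Generation 1
Add-VMNetworkAdapter -VMName VM01 -SwitchName External1 -Name "Synthetic NIC"
# Legacy (emulated) adapter: no Hyper-V drivers needed, supports PXE, poor performance
Add-VMNetworkAdapter -VMName VM01 -SwitchName External1 -Name "Legacy NIC" -IsLegacy $true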
Hyper-V Extensible Switch
Replaces Virtual Network
Handles network traffic between:
Virtual machines
The physical network
The management OS
Layer-2 virtual interface
Programmatically managed
Extensible
NIC = network adapter
Hyper-V Extensible Switch
Virtual Switch Types
External:
Allow VMs to talk to each other, the physical network, and the host
Normally used
Internal
Allow VMs to talk to each other and the host
VMs cannot communicate with VMs on another host
Normally only ever seen in a lab
Private
Allow VMs to talk to each other
VMs cannot communicate with VMs on another host
Sometimes seen but replaced by Hyper-V network virtualization or VLANs
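A short sketch of creating each switch type with PowerShell (switch and adapter names are examples):
# External: bound to a physical NIC; VMs reach each other, the host, and the physical network
New-VMSwitch -Name External1 -NetAdapterName pNIC1 -AllowManagementOS $true
# Internal: VMs and the management OS only
New-VMSwitch -Name Internal1 -SwitchType Internal
# Private: VMs on this host only
New-VMSwitch -Name Private1 -SwitchType Private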
Extension Types
Capturing
Monitoring
Example: InMon sFlow
Filtering
Packet monitoring/security
Example: 5nine Security
Forwarding
Does all the above & more
Example: Cisco Nexus 1000V
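A sketch of listing and enabling extensions on a switch; the switch name is hypothetical, and the built-in Microsoft NDIS Capture extension is used as the example (third-party extensions appear the same way once installed):
# List the extensions bound to a virtual switch and their state
Get-VMSwitchExtension -VMSwitchName External1
# Enable a capturing extension on that switch
Enable-VMSwitchExtension -VMSwitchName External1 -Name "Microsoft NDIS Capture"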
Switch Extensibility
NIC Teaming
Provides load balancing and failover (LBFO)
Load balancing:
Spread traffic across multiple physical NICs.
This provides link aggregation, not necessarily a single virtual pipe.
Failover:
If one physical path (NIC or top-of-rack switch) fails, traffic is automatically moved to another NIC in the team.
Built-in and fully supported for Hyper-V and Failover Clustering since WS2012
NIC Teaming
Microsoft supported: no more calls to NIC vendors for teaming support or getting told to turn off teaming
Vendor agnostic: can mix NIC manufacturers in a single team
Up to:
32 NICs at same speed in physical machines
2 virtual NICs at same speed in a VM
Configure teams to meet server needs
Team management is easy!
Server Manager, LBFOADMIN.EXE, VMM, or PowerShell
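For example, a minimal sketch of creating and inspecting a team in PowerShell (team and NIC names are hypothetical):
# Switch-independent team with Dynamic load balancing (WS2012 R2) from two physical NICs
New-NetLbfoTeam -Name Team1 -TeamMembers "pNIC1","pNIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# Inspect the team and its team interface (tNIC)
Get-NetLbfoTeam -Name Team1
Get-NetLbfoTeamNic -Team Team1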
NIC Teaming Features
Terminology
[Diagram: a team is made up of team members (or network adapters); on top of the team sit one or more team interfaces, also called team NICs or tNICs]
Switch Independent mode
Doesn't require any configuration of the physical switch
Protects against adjacent switch failures
Allows Standby NIC
Switch dependent modes
1. Static Teaming
Configured on switch
2. LACP Teaming
Also known as IEEE 802.1ax or 802.3ad
Requires configuration of the adjacent switch
[Diagram: a switch-dependent team vs. a switch-independent team]
Connection Modes
1. Address Hash comes in 3 flavors
4-tuple hash: (default distribution mode) uses the RSS hash if available, otherwise hashes the TCP/UDP ports and the IP addresses. If ports are not available, uses the 2-tuple hash instead.
2-tuple hash: hashes the IP addresses. If the traffic is not IP, uses the MAC address hash instead.
MAC address hash: hashes the MAC addresses.
2. Hyper-V Port
Hashes the port number on the Hyper-V switch that the traffic is coming from. Normally this equates to per-VM traffic. Best if using DVMQ.
3. Dynamic (added in WS2012 R2)
Spreads a single stream of data across team members using flowlets. The default option in WS2012 R2 (see the PowerShell sketch below).
Load Distribution Modes
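A minimal sketch of selecting these modes when creating or changing a team (names are hypothetical); the connection mode must match the physical switch configuration:
# Switch-dependent LACP team using Hyper-V Port distribution
New-NetLbfoTeam -Name HostTeam -TeamMembers "pNIC1","pNIC2" -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort
# Change an existing team to Dynamic distribution (WS2012 R2)
Set-NetLbfoTeam -Name HostTeam -LoadBalancingAlgorithm Dynamic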
Choose the team connection mode that is required by your switches
Choose either Hyper-V Port or Dynamic (WS2012 R2) load distribution
Hyper-V Port provides predictable incoming paths and DVMQ acceleration.
Dynamic enables a single virtual NIC to spread traffic across multiple team members at once.
NIC Teaming: Virtual Switch
[Diagram: a Hyper-V virtual switch bound to a NIC team]
Choose the team connection mode that is required by your switches
Choose either Address Hash or Dynamic load distribution
Address Hash will isolate a single stream of traffic on one physical NIC.
Dynamic enables a single stream of traffic to spread across multiple team members at once.
NIC Teaming: Physical NICs
[Diagram: the networking (protocol) stack bound to a NIC team of physical NICs]
Can be configured in the guest OS of a WS2012 or later VM.
Teams the VM's virtual NICs.
Configuration is locked.
You must allow NIC teaming in the advanced properties of the virtual NIC in the VM settings:
Set-VMNetworkAdapter -VMName VM01 -AllowTeaming On|Off
NIC Teaming: Virtual Machines
[Diagram: a NIC team built inside a virtual machine from its virtual NICs]
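A sketch of the guest teaming just described, assuming a VM named VM01 with two virtual NICs (all names are hypothetical):
# On the host: allow teaming on the VM's virtual NICs
Set-VMNetworkAdapter -VMName VM01 -AllowTeaming On
# Inside the guest OS: team the two virtual NICs (switch independent, address hash)
New-NetLbfoTeam -Name GuestTeam -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts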
Demo: NIC Teaming
Hardware Offloads
RSS
[Diagram: a host with two CPUs, 12 cores each (24 logical processors with hyperthreading); management OS traffic (Management, Live Migration, Cluster, SMB 3.0, Backup) arrives via rNIC1 and rNIC2, and RSS spreads the inbound processing across multiple cores instead of leaving a single core 100% utilized]
DVMQ
[Diagram: the same host, now showing virtual machine traffic through the virtual switch and NIC team; DVMQ spreads inbound processing for VM traffic across multiple cores instead of leaving a single core 100% utilized]
RSS and DVMQ
Consult your network card/server manufacturer
Can use Get-NetAdapterRss / Set-NetAdapterRss to configure.
Don't change anything unless you need to
RSS and DVMQ are incompatible on the same NIC, so design hosts accordingly
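A sketch of inspecting and adjusting these offloads; the NIC name and processor numbers are only examples, and the defaults should stay unless you have a reason to change them:
# View RSS settings for a physical NIC
Get-NetAdapterRss -Name pNIC1
# Example: start RSS at logical processor 2 and use at most 4 processors,
# so it does not overlap with processors reserved for DVMQ on other NICs
Set-NetAdapterRss -Name pNIC1 -BaseProcessorNumber 2 -MaxProcessors 4
# View DVMQ (VMQ) settings for the NICs used by the virtual switch
Get-NetAdapterVmq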
vRSS (added in WS2012 R2)
RSS provides extra processing capacity for inbound traffic to a physical server
Using cores beyond Core 0.
vRSS does the same thing in the guest OS of a VM
Using additional virtual processors.
Allows inbound networking to a VM to scale out.
Obviously requires VMs with additional virtual processors.
The physical NICs used by the virtual switch must support DVMQ.
Enable RSS in the advanced NIC properties in the VM's guest OS
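A minimal sketch of the guest-OS side, assuming the virtual NIC is called Ethernet inside the VM:
# Inside the VM's guest OS: enable and verify RSS on the virtual NIC
Enable-NetAdapterRss -Name Ethernet
Get-NetAdapterRss -Name Ethernet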
vRSS
[Diagram: the same host, with a VM that has 8 virtual processors (CPU 0 to CPU 7); vRSS spreads inbound processing inside the guest OS across those virtual processors instead of leaving a single one 100% utilized]
Demo: vRSS
Single Root I/O Virtualization (SR-IOV)
Virtual function on a capable NIC presented directly to the VM
Bypasses user mode in Management OS
Network stack
Virtual Switch (logical connection present)
Cannot team NICs in the management OS; can team NICs in the VM
Super low latency virtual networking, less h/w usage
Requires SR-IOV ready:
Motherboard
BIOS
NIC
Windows Server 2012/Hyper-V Server 2012 (or later) host
Can Live Migrate to/from capable/incapable hosts
SR-IOV Illustrated
[Diagram: without SR-IOV, the network I/O path runs from the VM's virtual NIC through the Hyper-V switch in the root partition (routing, VLAN filtering, data copy) to the physical NIC; with SR-IOV, the VM uses a virtual function on the SR-IOV physical NIC, which handles routing, VLAN filtering, and data copy directly]
Implementing SR-IOV
All management OS networking features are bypassed
You must create SR-IOV virtual switches to begin with: New-VMSwitch -Name IOVSwitch1 -NetAdapterName pNIC1 -EnableIov $true
Install the Virtual Function driver in the guest OS
To get teaming: create 2 virtual switches
Enable guest OS teaming in the vNIC advanced settings
Team in the guest OS
[Diagram: two SR-IOV enabled virtual switches, each bound to its own physical NIC, with the VM's two virtual NICs teamed inside the guest OS]
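A sketch of the layout above in PowerShell, assuming a VM named VM01 and physical NICs pNIC1 and pNIC2 (all names hypothetical):
# Two SR-IOV enabled virtual switches, one per physical NIC
New-VMSwitch -Name IOVSwitch1 -NetAdapterName pNIC1 -EnableIov $true
New-VMSwitch -Name IOVSwitch2 -NetAdapterName pNIC2 -EnableIov $true
# Give the VM's virtual NICs an IOV weight and allow teaming in the guest OS
Set-VMNetworkAdapter -VMName VM01 -IovWeight 100 -AllowTeaming On
The team itself is then created inside the guest OS, as on the previous slide.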
The Real World: SR-IOV
Not cloud or admin friendly:
Requires customization in the guest OS
How many hosting or end users can you trust with admin rights over in-guest NIC teams?
In reality:
SR-IOV is intended for huge hosts or few VMs with low latency requirements
You might never implement SR-IOV outside of a lab
IPsec Task Offload (IPsecTO)
IPsec encrypts/decrypts traffic between a client and server.
Done automatically based on some rule.
Can be implemented by a tenant independently of the cloud administrators
It uses processor resources; in a cloud this could have a significant impact.
Using IPsecOffloadV2-enabled NICs, Hyper-V can offload IPsec processing from VMs to the host's NIC(s).
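A sketch of checking and tuning the offload; Get-NetAdapterIPsecOffload reports the host NICs' capability, the VM name is hypothetical, and the per-VM security association limit shown is only an example value:
# Check IPsec task offload support/state on the host's physical NICs
Get-NetAdapterIPsecOffload
# Cap the number of offloaded security associations for a VM's virtual NICs
Set-VMNetworkAdapter -VMName VM01 -IPsecOffloadMaximumSecurityAssociation 512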
Consistent Device Naming (CDN)
Every Windows admin hates Local Area Connection, Local Area Connection 2, etc.
Network devices are randomly named based on the order of PnP discovery
Modern servers (Dell 12th gen, HP Gen8) can store network port device names
WS2012 and later can detect these names
Uses the device name to name network connections:
Port 1
Port 2
Slot 1 1
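A small sketch of what this looks like from PowerShell (the rename example assumes older hardware without CDN):
# With CDN-capable hardware the connection names already match the physical ports
Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, Status
# On older hardware you can still rename connections yourself
Rename-NetAdapter -Name "Ethernet" -NewName "Port 1"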
Converging Networks
Not a new concept from hardware vendors
Introduced as a software solution in WS2012
Will cover this topic in the High Availability session
SMB 3.0
No longer just a file & print protocol
Learn more in the SMB 3.0 and Scale-Out File Server session
Thank You!
Aidan Finn
@joe_elway
www.aidanfinn.com
Petri IT Knowledgebase