Intel- OpenStack Summit 2016/Red Hat NFV Mini Summit


Page 1: Intel- OpenStack Summit 2016/Red Hat NFV Mini Summit

INTEL CONFIDENTIAL

Accelerating NFV Deployment

Edel Curley, James Chapman, Oct 2016

Page 2: Intel- OpenStack Summit 2016/Red Hat NFV Mini Summit


Intel Contributions

Havana: host CPU feature request, PCI pass-through & SR-IOV support

Juno: NUMA awareness and placement

Kilo: hugepage support, CPU pinning, NUMA locality of PCI devices, OVS+DPDK (separate agent)

Mitaka: CPU thread policies, security groups for OVS+DPDK (stateless), telemetry capture (via collectd), OVS+DPDK (merged with OVS agent), OVS+DPDK controlled by ODL, OVF metadata import

Page 3: Intel- OpenStack Summit 2016/Red Hat NFV Mini Summit


CPU pinning

By default, guest vCPUs are allowed to float freely across host pCPUs.

CPU pinning binds each vCPU used by the guest to a specific pCPU.

The guest gets dedicated pCPUs, for more deterministic performance.

The Kilo release of OpenStack added CPU pinning capability.

http://openstack-in-production.blogspot.com/2015/08/numa-and-cpu-pinning-in-high-throughput.html
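In Kilo and later, pinning is requested through Nova flavor extra specs. A minimal sketch using the OpenStack CLI (the flavor name and sizing are illustrative):

```shell
# Create a flavor and request a dedicated pCPU for each guest vCPU.
# hw:cpu_policy=dedicated is the Nova extra spec that enables CPU pinning.
openstack flavor create nfv.pinned --vcpus 4 --ram 4096 --disk 20
openstack flavor set nfv.pinned --property hw:cpu_policy=dedicated
```

Instances booted from this flavor then land only on hosts configured for pinned guests, with each vCPU bound to its own pCPU.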

Page 4: Intel- OpenStack Summit 2016/Red Hat NFV Mini Summit


CPU thread policy 1/2

When running workloads on SMT hosts, it is important to be aware of the impact that thread siblings can have.

Thread siblings share a number of core resources (such as execution units and caches), and contention on these components can impact performance.
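On Linux, the thread siblings of a CPU can be read from sysfs (e.g. /sys/devices/system/cpu/cpu0/topology/thread_siblings_list), which reports them in CPU-list notation. A minimal sketch of a parser for that format:

```python
def parse_cpu_list(s: str) -> list[int]:
    """Parse Linux sysfs CPU-list notation, e.g. '0,16' or '0-3,8-11'."""
    cpus = []
    for part in s.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

# On a live SMT host, the sibling list would come from e.g.:
#   /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
print(parse_cpu_list("0,16"))  # → [0, 16]
```

On a 2-way SMT host, each CPU's list contains itself and its hyper-thread sibling; scheduling two busy vCPUs onto such a pair is where the contention described above arises.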

Page 5: Intel- OpenStack Summit 2016/Red Hat NFV Mini Summit


CPU thread policy 2/2

Prefer (the default): place vCPUs on thread siblings when the host has SMT, but fall back to other pCPUs if it does not.

Isolate: place each vCPU on its own core, leaving that core's thread siblings unused.

Require: place vCPUs only on thread siblings; the host must have SMT.
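These policies are selected with the hw:cpu_thread_policy flavor extra spec, which only takes effect alongside hw:cpu_policy=dedicated. A sketch using the OpenStack CLI (the flavor names are illustrative):

```shell
# Pinned vCPUs that must avoid sharing a core (siblings left idle):
openstack flavor set nfv.isolate \
    --property hw:cpu_policy=dedicated \
    --property hw:cpu_thread_policy=isolate

# Pinned vCPUs that must be packed onto thread siblings (SMT host required):
openstack flavor set nfv.require \
    --property hw:cpu_policy=dedicated \
    --property hw:cpu_thread_policy=require
```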

Page 6: Intel- OpenStack Summit 2016/Red Hat NFV Mini Summit


Summary

Isolate model
• The most powerful in terms of predictable compute capacity increase: 2x cores really is 2x more cores.
• The highest-performance apps may need this to help with SLA compliance.
• Possibly lower effective compute capacity.

Prefer model
• Drives up platform density by aiming to pack CPUs.
• Good for platform utilisation rates.
• Unpredictable in terms of the additional compute capacity that will be added.
• SLAs to be rigorously monitored via telemetry.

Require model
• Almost as good as Prefer at driving up platform density by aiming to pack CPUs.
• Good for platform utilisation rates.
• More predictable (than Prefer) in terms of the additional compute capacity that will be added.
• SLAs to be rigorously monitored via telemetry.
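The capacity trade-off summarised above can be sketched with a toy model (an assumed simplification, not a Nova calculation: isolate consumes a whole core per pinned vCPU, while require — or prefer when siblings get packed — can use every hardware thread):

```python
def dedicated_vcpu_capacity(cores: int, threads_per_core: int, policy: str) -> int:
    """Rough pinned-vCPU capacity of one SMT host under each thread policy.

    Simplified model: 'isolate' claims a whole core per vCPU (siblings idle);
    'require' and a fully packed 'prefer' can use every hardware thread.
    """
    if policy == "isolate":
        return cores
    if policy in ("require", "prefer"):
        return cores * threads_per_core
    raise ValueError(f"unknown policy: {policy}")

# A 16-core host with 2-way SMT:
print(dedicated_vcpu_capacity(16, 2, "isolate"))  # 16 pinned vCPUs
print(dedicated_vcpu_capacity(16, 2, "require"))  # 32 pinned vCPUs
```

This is the "2x cores is really 2x more cores" point: isolate doubles per-vCPU performance predictability at the cost of half the host's thread capacity, while require/prefer double density but make the marginal capacity of each added vCPU less predictable.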


Page 7: Intel- OpenStack Summit 2016/Red Hat NFV Mini Summit


Backup
