HPE Helion OpenStack Carrier Grade Enabling communications service providers to deploy NFV applications on open source software platforms

Contents

NFV—A CSP reality ................................................. 2
OpenStack, NFV, and SDN ........................................... 2
HPE NFV Reference Architecture .................................... 2
HPE Helion OpenStack Carrier Grade ................................ 4
Key new capabilities in HPE Helion OpenStack Carrier Grade 2.0 .... 4
HPE Helion OpenStack Carrier Grade components overview ............ 5
Carrier grade Linux and OpenStack features ........................ 6
OpenStack ......................................................... 6
Carrier grade Linux ............................................... 7
Carrier grade KVM—Open virtualization platform .................... 8
Carrier grade virtual switch ...................................... 8
Configuration assumptions ......................................... 9
Management and middleware ........................................ 10
VNF guest SDK .................................................... 11
HPE Helion Lifecycle Management .................................. 12
Why DCN integration is important to HPE Helion Carrier Grade ..... 12
Hardware support ................................................. 14
Conclusion ....................................................... 14

Technical white paper


NFV—A CSP reality

Network functions virtualization (NFV) introduces revolutionary implementation concepts in deploying telecommunication infrastructure. Communications service providers (CSPs) are looking at NFV as a vehicle to introduce new services, protect top-line revenue, minimize OPEX, gain market share through accelerated deployment of new services, and maximize flexibility through adoption of open source solutions.

A core structural change in the way telecommunications infrastructure gets deployed, NFV in turn brings significant changes to the way applications are delivered to service providers. By disaggregating the traditional roles and technology involved in telecommunications applications, NFV enables innovation in the telecommunications industry infrastructure and applications, and creates greater efficiencies and time-to-market improvements.

While implementing NFV, CSPs must also overcome technical challenges such as the integration of service automation and management services with existing operational support system (OSS)/business support system (BSS) infrastructure. In addition, CSPs must address carrier grade performance, service-level agreements (SLAs), and security measures.

OpenStack®, NFV, and SDN

OpenStack, the open source project, has become the de facto open source cloud operating system, enabling enterprise and telecom companies to build private and public clouds by virtualizing compute, storage, and networking. The OpenStack Foundation is backed by all major enterprise IT and telecom vendors, CSPs, and many startup companies, fueling a revolution in cloud infrastructure management.

Software-defined networking (SDN) enables the emerging software-based networks that allow cloud operators to apply business logic directly and dynamically to introduce new services faster, at lower management costs and complexity. SDN also commoditizes network functions, thereby reducing CAPEX. SDN is an enabling technology that challenges current practices by decoupling the control plane from the data forwarding mechanisms.

HPE NFV Reference Architecture

The European Telecommunications Standards Institute (ETSI) has been chosen as the home of the industry specification group (ISG) for NFV. Membership of the ISG for NFV has grown to more than 260 individual companies, including 37 of the world's major service providers as well as representatives from both telecom and IT vendors. In the context of NFV, ETSI has published a reference architecture (Figure 1) that is regarded as the standard for NFV implementation.


Figure 1. ETSI reference architecture

Following the ETSI model, the HPE NFV Reference Architecture (Figure 2) is based on a layered approach that allows customers to incrementally add new functionality and also gives them the openness and flexibility to work with platforms and solutions from different vendors at each layer.

Figure 2. HPE NFV Reference Architecture

[Figures 1 and 2 show: OSS/BSS and EMS 1–3 at the top; VNF 1–3 running on the NFVI (virtual computing, virtual storage, and virtual network, provided by a virtualization layer over computing, storage, and network hardware resources); and the NFV management and orchestration block (Orchestrator, VNF manager(s), and virtualized infrastructure manager(s)) alongside the service, VNF, and infrastructure description. Reference points include Os-Ma, Se-Ma, Ve-Vnfm, Or-Vnfm, Or-Vi, Vi-Vnfm, Nf-Vi, and VI-Ha, with the diagrams distinguishing main NFV reference points, execution reference points, and other reference points.]


HPE Helion OpenStack Carrier Grade

One key component of the ETSI NFV Reference Architecture is the virtualization infrastructure manager (VIM), which orchestrates the resources of a virtualized pool of physical infrastructure (servers, storage, and networking) to host virtual network functions (VNFs) on an NFV Infrastructure-as-a-Service (NFVIaaS) platform.

An open source project with significant industry support, OpenStack has become the preferred open source cloud management system of CSPs. However, OpenStack is still in its early stages of technology maturity and is primarily focused on enterprise IT and public cloud use cases. Within the OpenStack community, there’s an emerging trend to understand the NFV use cases and develop features and capabilities to meet CSPs’ needs for NFV. The interest in NFV is in its early stage; it will likely take some time before it delivers on a critical mass of capabilities required for NFV deployments.

Deploying a production-ready VIM to manage an NFV platform requires the successful integration of many core technologies that are outside the realm of the OpenStack project. Some key examples include the host operating system (for example, Linux®), server virtualization (KVM), network virtualization (vSwitch and SDN controllers), an installer and cloud lifecycle management framework, fault and performance management, and high-availability framework for the OpenStack control plane and the NFVI layer.

In addition, some of the previously mentioned open source software components beyond OpenStack are not optimized for NFV use cases. For example, standard KVM is not suitable for virtualizing a server running VNFs like a vIMS or a virtualized packet gateway (vPGW) that cannot tolerate jitter or high latency. Similarly, a standard open virtual switch (OVS) does not have the performance to deliver packets of all sizes at a high throughput to VNFs that require handling multiple gigabits per second of bearer traffic.

To enable CSPs to deploy NFV applications on open source software platforms, Hewlett Packard Enterprise created the VIM plus compute and network virtualization offering for NFV. Known as HPE Helion OpenStack Carrier Grade (HPE Helion Carrier Grade), it’s built on the same foundation as the HPE Helion OpenStack Enterprise Edition, and adds significant enhancements in three key areas, providing carrier grade features for manageability, availability, and performance (Figure 3).

Figure 3. Key enhancements in HPE Helion Carrier Grade

Key new capabilities in HPE Helion OpenStack Carrier Grade 2.0

The current release of HPE Helion Carrier Grade 2.0 brings certain additional key capabilities:

• OpenStack Kilo release—HPE HCG 2.0 is built on top of HPE Helion OpenStack Enterprise Ed. 2.0, which is in turn based on OpenStack Kilo release.

• Multiregion support:

– High performance KVM region

– VMware vSphere®/VMware® ESXi™ region

– Bare metal region

[Figure 3 shows CSP requirements for NFV platforms grouped under three pillars: carrier grade availability (availability and reliability: advanced self-healing; no single point of failure), carrier grade performance (near-line rate networking throughput; bare metal compute performance), and carrier grade manageability (in-service upgrade capabilities; enhanced security; advanced resource scheduling and orchestration capabilities).]

Enhancements and defect fixes made in the OpenStack projects included in HPE Helion Carrier Grade will be submitted to the community for inclusion in upstream open source. To deliver on the three main areas mentioned previously, HPE Helion Carrier Grade includes other open source and non-open source components as well.


• SDN—Integration with HPE Distributed Cloud Networking (DCN) enabling

– Policy-driven network automation

– DPDK-Accelerated VRS virtual switch

– Network function chaining

– Inter-region and across data centers

• Headless topology—KVM compute hosts connected to HCG controller over WAN

– Increased scalability: Up to 50 KVM compute nodes

HPE Helion OpenStack Carrier Grade components overview

The main components of HPE Helion Carrier Grade are:

• OpenStack—Core services include Horizon, Nova, Neutron, Cinder, Glance, Swift, Keystone, Ceilometer, and Heat. Other services will be added as needed in the future

• Compute node software package including

– Carrier grade Linux (host operating system [OS] running on the compute nodes)

– Carrier grade KVM (for virtualization of the compute node)

– Carrier grade accelerated virtual switch (AVS and AVRS), a Data Plane Development Kit (DPDK)-enabled high-performance virtual switch

• Management and middleware—Composed of high-availability (HA) management; operations, administration, and maintenance (OAM) software image management; and fault and performance management

• Guest software development kit (SDK) for inclusion in the guest OS of the VNF for performance acceleration and improving the HA framework above and beyond standard vIMS

• HPE Helion Lifecycle Manager (HLM)—Cloud installer and lifecycle manager (not shown in Figure 4)

Note: AVRS (Accelerated Virtual Routing Switch) is not available at GA.


Figure 4. Key functional components of HPE Helion Carrier Grade

Carrier grade Linux and OpenStack features

OpenStack

HPE Helion OpenStack Carrier Grade services include Keystone, Horizon, Cinder, Glance, Swift, Nova, Neutron, Ceilometer, and Heat; more services will likely be added in the future. These services are based on the Kilo community version with enhancements in some of the key services. All enhancements and defect fixes made in these OpenStack components are planned to be submitted to the OpenStack community.

In addition to these enhancements, some of these services provide plug-ins to integrate with third-party software and hardware. For example, HPE Helion Carrier Grade provides Keystone plug-ins for Lightweight Directory Access Protocol (LDAP) and SQL Servers, Cinder plug-ins for HPE 3PAR, and virtual storage appliance (VSA) drivers for storage. HPE Helion Carrier Grade provides embedded Swift service.

Nova in HPE Helion Carrier Grade has enhancements to provision single-root input/output virtualization (SR-IOV) and PCI-passthrough, as well as enable NUMA node affinity, CPU core pinning, large memory page assignment to virtual machines (VMs, 1 GB/2 MB), vCPU scaling, network performance-based scheduling, CPU type, and NUMA node aware scheduling. These features are all important to ensure a VNF has the resources needed for high performance. In addition to the Nova features that enable performance, HPE Helion Carrier Grade has features that enable high availability of VNFs like live migration and VM evacuation. HPE Helion Carrier Grade’s Nova supports live migration for VNFs using DPDK and only when they’re using external shared storage (not ephemeral or local disk) and when connected to the Accelerated Virtual Switch (AVS, not to an SR-IOV VF or using PCI-passthrough).
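Features such as CPU core pinning, NUMA affinity, and large page assignment all reduce to the scheduler checking per-NUMA-node resource constraints before placing a VM. The sketch below is not Nova's actual scheduler code; `NumaNode` and `pick_numa_node` are illustrative names for the kind of check these features imply:

```python
from dataclasses import dataclass

@dataclass
class NumaNode:
    free_pcpus: int         # cores free for dedicated (pinned) vCPUs
    free_hugepages_1g: int  # free 1 GB huge pages

def pick_numa_node(nodes, vcpus, mem_gb):
    """Return the index of the first NUMA node with enough free pinned
    cores and 1 GB huge pages for the requested VM, or None if none fit."""
    for i, node in enumerate(nodes):
        if node.free_pcpus >= vcpus and node.free_hugepages_1g >= mem_gb:
            return i
    return None

# A host with two NUMA nodes; only node 1 has room for a 4-vCPU, 16 GB VM.
host = [NumaNode(free_pcpus=2, free_hugepages_1g=8),
        NumaNode(free_pcpus=8, free_hugepages_1g=32)]
print(pick_numa_node(host, vcpus=4, mem_gb=16))  # -> 1
```

Keeping a VM's vCPUs and memory on one NUMA node avoids cross-socket memory access, which is a large part of why these features matter for VNF performance.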

Neutron, a key service in HPE Helion Carrier Grade, provides virtual network orchestration for VNFs and provides tenant separation using provider virtual local area networks (VLANs), and virtual extensible local area networks (VXLANs), distributed virtual routing (DVR) functionality, security, and quality of service (QoS) policy. Neutron in HPE Helion Carrier Grade provides a modular layer 2 (ML2) plug-in mechanism to control and program the AVS and AVRS, as well as the ability to send VLAN-tagged packets to and from VNFs. This feature preserves the existing implementation of VNFs where traffic to and from VNFs is expected to be tagged with 802.1Q VLANs.
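The 802.1Q tagging preserved here is a 4-byte insertion after the source MAC: a TPID of 0x8100 followed by a 16-bit TCI carrying the priority bits and the 12-bit VLAN ID. A minimal sketch (not Neutron or AVS code) of tagging a raw Ethernet frame:

```python
def add_vlan_tag(frame: bytes, vid: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q header (TPID 0x8100 + 16-bit TCI) into a raw
    Ethernet frame, right after the 6-byte destination and source MACs."""
    if not 0 <= vid < 4096:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (pcp << 13) | vid  # 3-bit priority, 1-bit DEI (0), 12-bit VLAN ID
    tag = bytes([0x81, 0x00, tci >> 8, tci & 0xFF])
    return frame[:12] + tag + frame[12:]

# An untagged IPv4 frame: dst MAC, src MAC, EtherType 0x0800, payload.
frame = bytes(6) + bytes(6) + b"\x08\x00" + b"payload"
tagged = add_vlan_tag(frame, vid=100, pcp=5)
print(tagged[12:16].hex())  # -> 8100a064
```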


Heat is an important service in HPE Helion Carrier Grade as it's used to automate the orchestration of a VNF through its lifecycle, for example, onboarding and auto-scaling.

HPE Helion Carrier Grade includes key additional features, enhancements, and defect fixes to Heat; some of the more notable features and benefits include:

• Lifecycle management of a composite application (Stack)

• Ability to define a set of resources (VMs, networks, Cinder volumes, Swift databases, and so on) that make up a larger, more complex application

• File- and template-based definitions

• Support for lifecycle commands such as start, modify, and stop

• VM instance bootstrapping/configuration through cloud-init and cloud-formation mechanisms

• Application auto-scaling (with low-scale load balancing)

• Integration of protection groups

• Improved usability of templates

• Improved detection time of failed entities

• Improved usability of Heat’s Horizon dashboards
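Auto-scaling in practice pairs telemetry alarms with a scaling policy that adjusts the group size within configured bounds. Reduced to a sketch (the thresholds and function name are illustrative, not Heat's API):

```python
def scaling_decision(current_size, avg_cpu_pct,
                     scale_out_at=80.0, scale_in_at=20.0,
                     min_size=1, max_size=5):
    """Return the size adjustment (+1, -1, or 0) an auto-scaling policy
    would apply, keeping the group within its configured bounds."""
    if avg_cpu_pct > scale_out_at and current_size < max_size:
        return +1
    if avg_cpu_pct < scale_in_at and current_size > min_size:
        return -1
    return 0

print(scaling_decision(2, 91.0))  # -> 1 (scale out)
print(scaling_decision(5, 91.0))  # -> 0 (already at max_size)
```

In Heat the equivalent logic is declared in a template rather than coded: a scaling policy resource reacts to alarm signals and respects the group's min/max size.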

Ceilometer is a service in HPE Helion Carrier Grade that provides monitoring and statistics gathering for physical and virtual elements on a server. These statistics help in the performance management aspects of HPE Helion Carrier Grade’s middleware. In addition, Ceilometer’s enhanced metrics such as network statistics can be used by the Nova scheduler to make intelligent choices in VNF placement. Other enhancements in Ceilometer enable northbound telco type interfaces in the form of a secure file transfer protocol (SFTP) pull interface of a comma-separated value (CSV) format file with performance metrics.

The Horizon dashboard provides the graphical user interface (GUI) for HPE Helion Carrier Grade users. HPE Helion Carrier Grade's Horizon also provides a detailed system-level inventory of the compute nodes in the cloud, along with dynamically updated dashboard webpages displaying the active alarm list and the historical alarm list. In addition to Horizon, command-line interface (CLI) and Representational State Transfer (REST) application programming interfaces (APIs) are also provided for the OpenStack services.
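The REST APIs are the standard OpenStack ones; for example, any scripted interaction starts by requesting a token from Keystone. A sketch of building the Identity v3 password-authentication request body (the user, project, and domain names are placeholders):

```python
import json

def keystone_v3_auth_body(username, password, project, domain="Default"):
    """Build the request body for POST /v3/auth/tokens (password method,
    project-scoped), following the Identity v3 API structure."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": domain},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {"name": project, "domain": {"name": domain}}
            },
        }
    }

body = keystone_v3_auth_body("admin", "s3cret", "vnf-project")
print(json.dumps(body, indent=2)[:30])
```

The token returned by Keystone is then passed in the `X-Auth-Token` header to the other service endpoints (Nova, Neutron, Heat, and so on).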

HPE Helion Carrier Grade also provides integration capabilities through standards-based and open APIs with other layers and components of the ETSI architecture, especially the service orchestration and OSS components of the management and orchestration (MANO) layer, such as the HPE NFV Director (service orchestration and monitoring), HPE Operations Manager i (OMi), Network Node Manager i (NNMi), SiteScope for OSS/BSS, and third-party VNF managers.

Carrier grade Linux

Carrier grade Linux is a set of specifications detailing the availability, scalability, manageability, and service response characteristics that a Linux kernel-based operating system must meet to be considered carrier grade (that is, ready for use within the telecommunications industry). The term is particularly applicable as telecom converges technically with data networks and commercial off-the-shelf hardware. The latest version of the carrier grade Linux specification is 5.0.

The HPE Helion Carrier Grade distribution includes the Wind River carrier grade Linux package as the host operating system that runs VNFs, which is itself compliant with the carrier grade Linux 5.0 registration specifications.


Carrier grade KVM—Open virtualization platform

High-performance VNFs like vIMS, voice over LTE (VoLTE), and vPGW are required to perform at deterministic levels. Deterministic performance is not just about bandwidth but also about latency, and it's critical that the latency induced by the underlying virtualization layer be very low and consistent.

The HPE Helion Carrier Grade compute node software package also includes a carrier grade KVM with real-time kernel extensions to provide low-latency throughput, resiliency, availability, and security optimized for NFV workloads. It adds kernel preemption support to the industry-standard kernel-based virtual machine (KVM) hypervisor, with a 40X decrease in jitter and a 74% decrease in average latency, providing deterministic and predictable performance for VNFs.

Carrier grade virtual switch

A key component in the data plane of a VNF is the virtual switch (vSwitch). The throughput and efficiency of the vSwitch are critical for many VNFs to handle multiple Gbps of bearer plane traffic. Since the vSwitch is a software element, it must also use CPU resources efficiently.

HPE Helion Carrier Grade compute node software includes the Accelerated vSwitch, a high-performance user space vSwitch based on an enhanced version of the Intel® Data Plane Development Kit (Intel DPDK) that enables high-performance VM-to-VM communication without the need to use the Linux kernel OVS, as well as high-performance packet processing from the server network interface card (NIC) to applications in VMs.

The AVS uses only two cores per CPU to provide 20 Gbps, and when compared to the standard Linux kernel-based OVS, is about 6.7X more efficient in its CPU utilization. This efficient utilization of the CPU by the AVS while delivering high bandwidth enables an increase in the density of high-performance VNFs on a CPU, thereby improving the economics of NFV (Figure 5).

What is a virtual network interface card (vNIC)?

• Supports VNFs written for OVS and native Linux

• Virtual NIC driver provides 10X better performance than native Linux

• Optimized for Intel DPDK—Even better performance for DPDK apps

• Supports multiple guest OSs

• Application not aware that network interfaces are virtual


Figure 5. AVS supportability matrix and performance at different packet sizes

Table 1. AVS efficiency improving VNF density: increase performance, capacity, and scalability

vSwitch analysis                          | Standard OVS | Wind River AVS
Switching bandwidth for first core        | 0.3 mpps     | 12 mpps
Switching bandwidth for subsequent cores  | 0.3 mpps     | 12 mpps
Total switching bandwidth required        | 6 mpps       | 40 mpps
Number of cores required for vSwitch      | 20           | 4
Number of cores available for VMs         | 3            | 20
Unused cores                              | 1            | 0
VM density improvement                    |              | 6.7X

Configuration assumptions

Cores: 24; VMs/core: 1; Bandwidth: 2 mpps

Besides higher performance, customers achieve increased capacity, meaning they can have more payloads with fewer servers.
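Table 1's density figures follow from a simple core budget: each VM occupies one core plus its share of vSwitch cores, where a standard OVS core switches about 0.3 mpps versus roughly 12 mpps for an AVS core. A quick check of the arithmetic under the table's stated assumptions (24 cores, 1 VM per core, 2 mpps per VM):

```python
import math

def max_vms(total_cores, mpps_per_vm, mpps_per_vswitch_core):
    """Largest number of 1-core VMs that fit on the server once the
    vSwitch's own core consumption is accounted for."""
    best = 0
    for vms in range(total_cores + 1):
        vswitch_cores = math.ceil(vms * mpps_per_vm / mpps_per_vswitch_core)
        if vms + vswitch_cores <= total_cores:
            best = vms
    return best

ovs = max_vms(24, 2, 0.3)  # standard OVS: ~0.3 mpps per core -> 3 VMs
avs = max_vms(24, 2, 12)   # AVS: ~12 mpps per core -> 20 VMs
print(ovs, avs, round(avs / ovs, 1))  # -> 3 20 6.7
```

This reproduces the table: 3 VMs on standard OVS (20 cores consumed by switching) versus 20 VMs on AVS (4 switching cores), a 6.7X density improvement.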


Management and middleware

The management and middleware layer of HPE Helion Carrier Grade provides performance, fault, high-availability, and software management features. Performance management in HPE Helion Carrier Grade relies on plug-ins to the Ceilometer service. Performance metrics are collected for both the compute server and VMs. Enhanced statistics beyond basic OpenStack, such as per-vNIC bandwidth, aggregate bandwidth, and TX/RX frames, bytes, and errors, are collected. HPE Helion Carrier Grade provides access to performance measurement samples in the form of CSV files, which offer the following benefits:

• Offline permanent storage of large sample history

• The use of offline tools for data analysis, accounting, and archiving

The CSV files are expected to be retrieved from the controllers using any suitable file transfer client, such as SFTP or SCP. This can be done manually by the system administrator, or automatically by an OSS/BSS configured with the appropriate login credentials.
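Once pulled, the CSV samples can be processed with ordinary offline tooling. A sketch of aggregating per-vNIC byte counters; the column names here are hypothetical and may differ from the actual exported file layout:

```python
import csv
import io
from collections import defaultdict

# Hypothetical layout; the actual exported column names may differ.
SAMPLE = """\
timestamp,vnic,rx_bytes,tx_bytes
2015-06-01T10:00:00,eth0,1000,500
2015-06-01T10:00:00,eth1,2000,700
2015-06-01T10:01:00,eth0,1500,600
"""

def totals_per_vnic(csv_text):
    """Accumulate [rx_bytes, tx_bytes] totals per vNIC from a CSV file."""
    totals = defaultdict(lambda: [0, 0])
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["vnic"]][0] += int(row["rx_bytes"])
        totals[row["vnic"]][1] += int(row["tx_bytes"])
    return dict(totals)

print(totals_per_vnic(SAMPLE))  # -> {'eth0': [2500, 1100], 'eth1': [2000, 700]}
```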

Fault management in HPE Helion Carrier Grade makes available information about faults and other events related to physical and virtual elements in the form of the following alerts:

SNMPv2c:

• Mandatory management information bases (MIBs)/tables (system, simple network management protocol [SNMP], community)

• Traps

• Notifications MIB (active alarm list and historical alarm list)

Hypertext transfer protocol (HTTP) to GUI:

• Horizon dashboard webpage to display active alarm list (dynamically updated)

• Horizon dashboard webpage to display historical alarm list (dynamically updated)

Secure shell (SSH) to CLI:

• Command(s) to display active alarm list

• Command(s) to display historical alarm list

• Command(s) to enable streaming of alarms as they occur to CLI session
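
The alarm model behind these three interfaces can be sketched as a small store that tracks the active alarm list and appends every state transition to the historical list. The record fields below are illustrative assumptions, not the product's actual alarm schema.

```python
class AlarmStore:
    """Toy model of the active and historical alarm lists exposed
    over SNMP, the Horizon dashboard, and the CLI."""

    def __init__(self):
        self.active = {}    # alarm_id -> current alarm record
        self.history = []   # every raise/clear transition, in order

    def raise_alarm(self, alarm_id, severity, entity):
        record = {"id": alarm_id, "severity": severity,
                  "entity": entity, "state": "set"}
        self.active[alarm_id] = record
        self.history.append(record)

    def clear_alarm(self, alarm_id):
        record = self.active.pop(alarm_id, None)
        if record is not None:
            self.history.append({**record, "state": "clear"})
```

Streaming alarms to a CLI session, as in the last bullet above, would amount to emitting each appended history record as it occurs.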

The high-availability management aspect of HPE Helion Carrier Grade’s middleware is responsible for ensuring the high availability of VNFs in the midst of failures or planned outages in the physical or virtual elements of a compute server. Key aspects of this HA manager include:

• Automatic VM recovery on failure of a host compute node (node failure detection in seconds rather than minutes)

• Automatic VM recovery on failure of a VM (VM failure detection 60X faster than standard IT grade)

• Live migration of VMs using Intel DPDK (not available in IT-grade OpenStack)

• Controller node redundancy and automatic failover (not available in IT-grade OpenStack)

• VM monitoring tied into application health checks within the guest

• VM protection groups (ensuring VMs of the same group are created on different compute nodes)
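
The VM protection groups described above map naturally onto OpenStack anti-affinity server groups. As a minimal sketch (resource and image names are illustrative, not from the product documentation), the following Heat template fragment places two VMs of the same group on different compute nodes:

```yaml
heat_template_version: 2015-04-30

resources:
  vnf_protection_group:
    type: OS::Nova::ServerGroup
    properties:
      name: vnf-protection-group
      policies: [anti-affinity]   # members land on different compute nodes

  vnf_vm_1:
    type: OS::Nova::Server
    properties:
      image: vnf-image            # illustrative name
      flavor: m1.large
      scheduler_hints:
        group: { get_resource: vnf_protection_group }

  vnf_vm_2:
    type: OS::Nova::Server
    properties:
      image: vnf-image
      flavor: m1.large
      scheduler_hints:
        group: { get_resource: vnf_protection_group }
```

With the anti-affinity policy in force, the loss of one compute node takes down at most one member of the group.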


Table 2 illustrates how HPE Helion Carrier Grade’s virtualization meets carrier grade requirements compared to standard and enterprise OpenStack.

Table 2. Meeting CSP requirements

Attribute                                      HPE Helion OpenStack Carrier Grade   Standard OpenStack and Enterprise Linux
Detection of failed VM                         500 ms                               > 1 minute
Compute node failure detection                 Sub-second                           1 minute or longer
Controller node failure recovery               Sub 10 seconds                       Requires custom development to enable
Live migration of DPDK apps                    Fully supported                      Not possible
Network failure detection for compute nodes    50 ms                                Depends on Linux distribution

VNF guest SDK

On top of HPE Helion Carrier Grade, any VNF running on any industry-standard Linux guest operating system can use either the standard Virtio-based Linux drivers or a special AVP vNIC driver for higher throughput (6X to 10X better performance than Virtio) and per-vNIC queuing. This enhanced networking capability is enabled simply by loading a kernel module on the guest VM, without any code compilation.

The guest VMs can also take advantage of an SDK to participate in application health checks (as part of HPE Helion Carrier Grade’s high-availability management), enabling VM and application health monitoring. VM protection groups place similar VMs on different nodes and give them a channel to exchange state-change information with their peer VMs. The guest SDK also lets a VM listen for shutdown-request messages from HPE Helion Carrier Grade’s HA manager, so that the VM can be gracefully shut down before evacuation, or a live migration can be accelerated when the VM is carrying high bearer traffic loads. See Table 3 for guest SDK information.
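
The heartbeat interaction described above can be sketched as a guest-side agent that answers health checks and acknowledges shutdown requests. The message names and reply values here are hypothetical; the real HPE Helion Carrier Grade guest SDK defines its own API.

```python
def app_healthy():
    """Placeholder for an application-specific health check."""
    return True

class HeartbeatAgent:
    """Toy guest-side agent for the HA manager's heartbeat pattern."""

    def __init__(self, health_check):
        self.health_check = health_check
        self.shutdown_requested = False

    def handle_message(self, message):
        """Return the reply the agent would send to the HA manager."""
        if message == "heartbeat":
            return "healthy" if self.health_check() else "unhealthy"
        if message == "shutdown-request":
            # A VM may acknowledge (graceful stop) or reject the request.
            self.shutdown_requested = True
            return "ack"
        return "ignored"
```

Wiring `health_check` to a real application probe is what lets the control plane distinguish a healthy VM from one whose application has failed.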

Table 3. Benefits that require SDK inclusion in VNFs

Functionality: High-performance I/O (AVP)*
SDK module: AVP-KMOD
Benefits: 7X I/O performance
Inclusion method: Compiled with kernel distribution

Functionality: High-performance DPDK driver (PMD)
SDK module: AVP-PMD
Benefits: 40X I/O performance
Inclusion method: Compiled as a component of the Intel DPDK driver with the application

Functionality: vCPU scaling
SDK module: guest-scale
Benefits: Allows the guest to add or remove vCPUs
Inclusion method: Requires the modules to be compiled in the guest OS instance

Functionality: Guest application control
SDK module: heartbeat
Benefits: Allows the HCG control plane to monitor health and send messages to VMs to shut down, quicken live migration, and make auto-evacuation more graceful**
Inclusion method: Application requires recompilation to take full advantage of this feature

Functionality: Server group messaging
SDK module: server-group
Benefits: Provides a group of peer VMs a simple messaging channel to exchange state-change information
Inclusion method: Application requires recompilation to take full advantage of this feature

* If VNFs do not include AVP, they can use the standard Linux Virtio driver, a lower-performance I/O option. SR-IOV has no dependency upon inclusion of this driver.
** VMs have the ability to reject shutdown requests. Live migration and auto-evacuation will be supported without inclusion of this SDK.


To include these SDKs in a VNF, the guest OS has to be of a certain version. See Table 4 for supported guest OS versions.

Table 4. HPE Helion Carrier Grade guest OS support matrix

Distribution                       OS (kernel) version   SDK support date      Supported SDK components
Fedora                             19 (3.9)              15 June 2015          All
openSUSE                           12.3 (3.7)            Under investigation   All
Wind River                         6 (3.10)              29 May 2015           All
Wind River                         5 (3.4)               29 May 2015           All
Red Hat Enterprise Linux (RHEL)    6.5 (2.6)             Under investigation   Guest-scale, server-group, heartbeat, heat-template
CentOS                             6.4 (2.6)             Under investigation   Guest-scale, server-group, heartbeat, heat-template
RHEL                               7 (3.10)***           Under investigation   All
RHEL                               6.4 (2.6)***          Under investigation   Guest-scale, server-group, heartbeat, heat-template

*** These operating systems are still being tested, and full support is not yet verified.

HPE Helion Lifecycle Management

HPE Helion Lifecycle Management (HLM) is responsible for software management in OpenStack and the other components within HPE Helion Carrier Grade, using a model-based, runbook-automation approach to cloud lifecycle management. This method enables the software to be installed as packages and Python virtual environments. Lifecycle operations (install, reconfigure, scale up, upgrade) are implemented as Ansible runbooks using a pluggable OpenStack installation layer. HLM provides a default service configuration but doesn’t constrain the customer to it. It updates and upgrades via selective software (re)installation and reboots the system only for kernel changes. It doesn’t need a dedicated deployment node; the deployer is transient and doesn’t need to keep running after deployment. The network is defined by the user and instantiated only during installation. HLM also ensures that the HPE Helion Carrier Grade control plane is deployed with no single point of failure that could bring down cloud operations.
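
The runbook-driven operations described above can be pictured with a minimal Ansible playbook fragment. The task and variable names here are illustrative, not HLM’s actual runbooks; the fragment shows the selective-reinstall pattern with a reboot only on kernel changes.

```yaml
# Illustrative sketch in the spirit of HLM's runbook automation.
- hosts: controllers
  tasks:
    - name: Selectively reinstall updated service packages
      package:
        name: "{{ updated_packages }}"
        state: latest

    - name: Restart only the affected OpenStack services
      service:
        name: "{{ item }}"
        state: restarted
      loop: "{{ affected_services }}"

    - name: Reboot only when the kernel changed
      reboot:
      when: kernel_updated | bool
```

Because only affected services are touched, most updates complete without interrupting the running VMs.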

Simplified installation and management in HPE Helion Carrier Grade handles redundancy and fast recovery from control plane failures in a highly efficient manner. The dual redundant control node architecture ensures the high availability of the components. With automatic control and compute node failure detection and recovery, the state of the OpenStack controllers is preserved without any loss and without any effect on the VMs. During maintenance, only the individual OpenStack services are restarted, a more granular approach than in non-carrier-grade OpenStack versions.

Why DCN integration is important to HPE Helion Carrier Grade

Bridging the data centers of today and tomorrow for CSPs requires networks that evolve at the speed of software. To reap maximum business value from NFV and cloud deployments, integration with SDN is therefore crucial for CSPs. Integrating the HPE DCN SDN solution with HPE Helion OpenStack Carrier Grade 2.0 gives CSPs a choice of SDN solution at the VIM layer: the native Accelerated vSwitch (AVS), AVRS (Accelerated Virtual Service Router, part of HPE DCN), or the whole DCN SDN suite. This combination delivers the benefits described below and accelerates service velocity with complete compute and network virtualization.

Multi-data center and hybrid environments: HPE DCN supports deployment across multiple data centers and facilitates hybrid environments, using local domain routing for intra-data center connectivity and VXLAN for multi-domain data center connectivity. It allows:

• BGP Federation

• Scale

• Disaster recovery


Application policy: Automating and simplifying network provisioning in the same way as for cloud-native apps, and making network objects portable without reconfiguration through self-provisioning, are crucial for faster network deployments within NFV:

• Zero Touch Provisioning

• Workload mobility without reconfiguration

• End user self-provisioning—Within a defined sandbox

Service chaining: Service chaining is the interconnection of a set of network services, such as load balancers or packet forwarders, to deliver a virtual network function (VNF). With NFV and SDN, service chaining can be greatly accelerated by moving management functions out of proprietary hardware into network controller software running on a virtual machine. A common, standards-based communication mechanism between SDN controllers and network devices makes provisioning and reconfiguration of service chains much faster, and the software-based controller’s overall view of the network greatly reduces configuration errors. DCN allows:

• Open Integration Framework

• Service fabric for integration of third-party network service—physical or virtual (for example, Palo Alto Network firewall, F5 Load Balancer)
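
As a toy illustration of the chaining idea, the following Python sketch models a service chain as an ordered list of functions a controller could reorder in software, without touching hardware. The function names and packet fields are illustrative only.

```python
# Toy model of service chaining: a packet traverses an ordered list
# of network functions chosen by the controller.

def firewall(packet):
    packet = dict(packet)
    packet["allowed"] = packet.get("port") != 23  # e.g., drop telnet
    return packet

def load_balancer(packet):
    packet = dict(packet)
    # Pick a backend pool deterministically from the source address.
    packet["backend"] = "pool-{}".format(int(packet["src"].split(".")[-1]) % 2)
    return packet

def apply_chain(packet, chain):
    """Apply each function in order; stop if one drops the packet."""
    for vnf in chain:
        packet = vnf(packet)
        if packet.get("allowed") is False:
            break
    return packet
```

Reordering or extending `chain` is the software analogue of reprovisioning a service chain through the SDN controller.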

Deployability: Given the complexity and hybrid nature of service provider networks, an SDN solution must handle more than one scenario and support the implementation of multiple network services. HPE DCN allows this through:

• High performance L2/L3 gateways

• Standards-based integration with HPE 5930 gateways

• Operational tool set—Underlay/overlay management

Hybrid environments: HPE DCN shines in hybrid environments that mix different hypervisors (KVM and VMware ESXi/vSphere) and bare-metal servers.

Figure 6. DCN architecture

The Distributed Cloud Networking (DCN) architecture comprises three components. The Virtualized Services Directory (VSD) is the network policy engine; it abstracts complexity and provides service templates and analytics, operating in the cloud service management plane. The Virtualized Services Controller (VSC) is the SDN controller that programs the network with a rich routing feature set, operating in the data center control plane and federated over MP-BGP. Virtualized Routing & Switching (VRS) is a distributed switch/router that applies L2–L4 rules and integrates bare-metal assets, operating in the data center data plane on the hypervisors. These components run over an IP fabric with an HPE 5930 hardware gateway within each data center zone.


HPE DCN comprises three key components: VSD (network policy engine), VSC (SDN controller), and VRS (distributed switch/router). Of these, VRS is an OpenFlow-based L2–L4 vSwitch providing VXLAN and MPLS-over-GRE tunnel encapsulation.

Figure 7. DCN VRS features and components

Hardware support

The HPE Helion Carrier Grade control plane can be installed on HPE ProLiant DL360 Gen9 servers, and the compute nodes can be ProLiant DL360 Gen9 servers or HPE c7000 BladeSystem Gen9 enclosures with 6125XLG switches and FlexFabric VC modules. HPE Helion Carrier Grade supports HPE StoreVirtual VSA and the HPE 3PAR StoreServ Storage system as shared storage for the OpenStack Cinder volume service.

Conclusion

As an NFV-ready cloud management system, HPE Helion OpenStack Carrier Grade addresses carrier grade requirements for management, performance, and reliability. Built on top of open source components by a leading contributor to OpenStack, the solution enables greater innovation in the telecom industry infrastructure and applications, leading to efficiencies and time-to-market improvements that can ultimately create a competitive edge.

Virtual Routing & Switching (VRS) is an Open vSwitch-based L2–L4 virtual switch. It provides VXLAN and MPLS-over-GRE tunnel encapsulation, is programmed through OpenFlow from the VSC, and automatically detects VM instantiation and teardown. In the DCN architecture shown in Figure 7, the VSD and VSC communicate over XMPP, and the VRS instances on the hypervisors carry IP traffic. VRS comes in three variants: VRS-K for KVM hypervisors, VRS-V for VMware vSphere hypervisors, and VRS-G as a gateway for bare-metal servers, connecting at L2 or L3 (VLAN, VXLAN, GRE).


© Copyright 2015–2017 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Intel is a trademark of Intel Corporation in the U.S. and other countries. Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries. The OpenStack Word Mark and the OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and is used with the OpenStack Foundation’s permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation or the OpenStack community. Pivotal and Cloud Foundry are trademarks and/or registered trademarks of Pivotal Software, Inc. in the United States and/or other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. VMware vSphere, VMware ESXi, and the VMware logo are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other third-party trademark(s) is/are property of their respective owner(s).

4AA5-9581ENW, May 2017, Rev. 4

Learn more at hpe.com/dsp/infrastructure