Network Functions Implementation and Testing - Interim

Deliverable D5.31

Editor: P. Paglierani (ITALTEL)

Contributors: P. Comi, M. Arnaboldi (ITALTEL), B. Parreira (PTINS), Y. Rebahi (FOKUS), A. Abujoda (LUH), F. Pedersini (UNIMI), N. Herbaut (VIOTECH), A. Kourtis, G. Xilouris (NCSRD), G. Dimosthenous (PTL)

Version: 1.0

Date: November 30th, 2015

Distribution: PUBLIC (PU)


Executive Summary

This deliverable provides general guidelines and some specific information that can be used by Function Developers to develop Virtual Network Functions (VNFs) within the T-NOVA framework. Also, it contains the description of six VNFs that are currently under development in T-NOVA. The VNFs are the following:

• Security Appliance
• Session Border Controller
• Video Transcoding Unit
• Traffic Classifier
• Home Gateway
• Proxy as a Service.

The VNFs developed in T-NOVA span a very wide area of the Network Function domain, and can thus represent, from a developer's perspective, a set of highly significant implementation use cases, in which many problems related to network function virtualization have been faced and solved. Also, different technologies have been adopted by T-NOVA developers in the VNFs presented in this document. Most VNFs, in fact, take advantage of various contributions coming from the open source community, such as [SNORT], or exploit recent technological advances, such as [DPDK], SR-IOV [Walters], or general-purpose Graphical Processing Units [CUDA].

Some practical information that can be useful to function developers is also provided in the first part of the document, related to the main implementation issues encountered in the development phase. Finally, some preliminary results obtained in the tests that have been carried out are also reported and briefly discussed.


Table of Contents

1. INTRODUCTION
2. NETWORK FUNCTIONS IMPLEMENTATION
   2.1. Common to all VNFs
      2.1.1. VNF general architecture
      2.1.2. VNF Descriptor
      2.1.3. VNFD and VNF Instantiation
      2.1.4. VNFC Networking
      2.1.5. VNF internal structure
      2.1.6. VNF management interface
      2.1.7. VNF Descriptor (VNFD)
      2.1.8. Virtual Deployment Unit (VDU)
      2.1.9. Instantiation parameters
      2.1.10. Interaction with configuration middleware
      2.1.11. Monitoring
   2.2. Virtual Security Appliance
      2.2.1. Introduction
      2.2.2. Architecture
      2.2.3. Functional description
      2.2.4. Interfaces
      2.2.5. Technologies
      2.2.6. Dimensioning and Performance
      2.2.7. Future Work
   2.3. Session Border Controller
      2.3.1. Introduction
      2.3.2. Architecture
      2.3.3. Functional description
      2.3.4. Interfaces
      2.3.5. Technologies
      2.3.6. Dimensioning and Performance
      2.3.7. Future Work
   2.4. Video Transcoding Unit
      2.4.1. Introduction
      2.4.2. Architecture
      2.4.3. Interfaces
      2.4.4. Technologies
      2.4.5. Dimensioning and Performance
      2.4.6. Future Work
   2.5. Traffic Classifier
      2.5.1. Introduction
      2.5.2. Architecture
      2.5.3. Functional Description
      2.5.4. Interfaces
      2.5.5. Technologies
      2.5.6. Dimensioning and Performance
      2.5.7. Deployment Details
      2.5.8. Future Steps
   2.6. Virtual Home Gateway (VIO)
      2.6.1. Introduction
      2.6.2. Architecture
      2.6.3. Sequence diagrams
      2.6.4. Technology
      2.6.5. Dimensioning and Performances
      2.6.6. Future Work
   2.7. Proxy as a Service VNF (PXaaS)
      2.7.1. Introduction
      2.7.2. Requirements
      2.7.3. Architecture
      2.7.4. Functional description
      2.7.5. Interfaces
      2.7.6. Technologies
      2.7.7. Dimensioning and Performance
3. CONCLUSIONS AND FUTURE WORK
   3.1. Conclusions
   3.2. Future Work
4. REFERENCES
5. LIST OF ACRONYMS


Index of Figures

Figure 1. VNF internal components
Figure 2. VNF composed by many VNFCs
Figure 3. VNF/VNFC Virtual Links and Connection Points
Figure 4. OpenStack overview of the T-NOVA VNF deployment networks
Figure 5. Multi-VNFC composed VNF
Figure 6. Virtual Link VNFC interconnection example
Figure 7. VNF lifecycle
Figure 8. Two-phase VNF configuration
Figure 9. Monitoring data handling (via SNMP) in case of vSBC
Figure 10. vSA high-level architecture
Figure 11. A sample of code of the FW Controller
Figure 12. Throughput without firewall
Figure 13. RTT without firewall
Figure 14. Firewall comparison without rules (Throughput)
Figure 15. Firewall comparison with 10 rules (RTT)
Figure 16. Basic vSBC internal architecture
Figure 17. Enhanced vSBC internal architecture (with DPS component)
Figure 18. Functional description of the vTU
Figure 19. vTU low level interfaces
Figure 20. XML structure of the vTU job description message
Figure 21. Virtual Traffic Classifier VNF internal VNFC topology
Figure 22. vTC performance comparison between DPDK, PF_RING and Docker
Figure 23. Example overview of the vTC OpenStack network topology
Figure 24. ETSI home virtualization functionality
Figure 25. Low level architecture diagram for transcode/stream VNF
Figure 26. Sequence diagram for the transcode/stream VNF example
Figure 27. Swiftstack Region/Zone/Node
Figure 28. vCDN implementation for Software Configuration Management
Figure 29. Evaluation of the benefits of the vHG+vCDN system
Figure 30. PXaaS high level architecture
Figure 31. Proxy authentication
Figure 32. PXaaS Dashboard
Figure 33. PXaaS in OpenStack
Figure 34. Bandwidth limitation
Figure 35. Access denied to www.facebook.com
Figure 36. Squid's logs
Figure 37. Results taken from http://ip.my-proxy.com/ without user anonymity
Figure 38. Results taken from http://ip.my-proxy.com/ with user anonymity


1. INTRODUCTION

This document contains a description of the Virtual Network Functions (VNFs) developed in the T-NOVA project. Those VNFs have been developed in order to demonstrate, with real-life applications, the capabilities offered by the overall T-NOVA framework. Moreover, other function providers who want to develop new VNFs within T-NOVA can use them as an example. To this aim, general guidelines are briefly summarized and discussed in the first part of the document. In particular, some main concepts related to the VNF internal topology, the lifecycle management, the VNF Descriptor and the interaction with the monitoring framework are discussed, and information of practical interest to VNF developers is provided. Then, the specific VNFs currently being developed in T-NOVA are described. In particular, six different VNFs are discussed, covering a wide range of applications, which are:

• Virtual Security Appliance;
• Virtual Session Border Controller;
• Virtual Transcoding Unit;
• Traffic Classifier;
• Virtual Home Gateway;
• Proxy as a Service.

For each VNF, architectural and functional descriptions are provided, along with the technologies used and the internal/external interfaces. In addition, some preliminary test results that were obtained are summarized and briefly discussed.


2. NETWORK FUNCTIONS IMPLEMENTATION

This section describes properties, rules, and best practices applicable to any VNF. Many of the VNF's properties should be described in the VNF Descriptor, or simply the VNFD.

2.1. Common to all VNFs

2.1.1. VNF general architecture

In the T-NOVA framework, a VNF is defined as a group of Virtual Network Function Components (VNFCs); also, one VNFC consists of one single Virtual Machine. Each VNF shall support the T-NOVA VNF lifecycle (e.g. start, stop, pause, scaling, etc.) under the control of the VNFM.

With the exception of some mandatory blocks (described in the following), the internal implementation of a VNF is left to the VNF Developer. Nonetheless, it is suggested to adopt a common structure for the VNF internal components, as depicted in Figure 1. VNF developers who aim to develop new VNFs should follow the common practices introduced in the T-NOVA framework.

Figure 1. VNF internal components

In this architecture the VNF Controller is the internal component devoted to the support of the VNF lifecycle. The Init Configuration component is responsible for the initialization of the VNF, which happens at the beginning of the VNF execution. The Monitoring Agent component transmits application-level monitoring data towards the Monitoring System. An extensive description of the VNF architecture and its specifications can be found in [D2.41].

All the VNF internal components are optional, except the VNF Controller. The VNF Controller often acts as the VNF Master Function, and is responsible for the internal organisation of the VNFCs into a single VNF entity [NFVSWA]. It must be present in each VNF because it is in charge of supporting the T-Ve-Vnfm interface towards the VNFM. VNF developers are free to develop the VNF internal components in any way they prefer, as long as they comply with the VNF lifecycle management interface T-Ve-Vnfm, defined in [D2.21]. The previous case applies when the T-NOVA generic VNFM (VNFM-G) is used. Conversely, if the VNF is supplied with a specific VNFM (VNFM-S), then the implementation of the control plane within the VNF is entirely up to the developer.

In the case of a VNF composed of more than one VNFC, the VNF developer is free to arrange the internal VNF components. The minimal mandatory requirement is that a unique VNF Controller must be present in each VNF, i.e. the VNF Controller shall be installed in just one VNFC. The remaining components, i.e. Init Configuration and Monitoring Agent, can be freely allocated in different VNFCs. In Figure 2 we provide an example of a VNF composed of two VNFCs. In this case the Init Configuration component is allocated in both VNFCs, while there is only one Monitoring Agent. Of course, different configurations are also possible depending on the particularities of the VNF.

Figure 2. VNF composed by many VNFCs

2.1.2. VNF Descriptor

Aligned to the ETSI specification for VNF Descriptors [NFVMAN], T-NOVA proposes a simplified version for the first sprint of the T-NOVA platform implementation. As defined in [NFVMAN], a VNF Descriptor (VNFD) is "a deployment template which describes a VNF in terms of deployment and operational behaviour requirements".

As the VNFs are participating in the networking path, the VNFD also contains information on connectivity, interfaces and KPI requirements. The latter is critical for the correct deployment of the VNF, as it is used by the NFVO in order to establish appropriate Virtual Links within the NFVI between VNFC instances, or between a VNF instance and the endpoint interface to other Network Functions.

It should be noted that third-party developers implementing VNFs to be used in the T-NOVA platform need to follow some basic guidelines in relation to the anticipated VNF internal architecture and deployment, as well as for the VNFM-VNF communication, as illustrated in the previous subsection (Section 2.1.1); this is applicable in case the generic VNFM (VNFM-G) is being used.

Prior to presenting the VNFD template as it is introduced in T-NOVA, the following subsection elaborates on the structure and connectivity domains that are expected by T-NOVA.


2.1.3. VNFD and VNF Instantiation

The instantiation of a VNF is bound to the type and complexity of the VNF; a non-exhaustive list of factors that play a role in this includes:

• VNF complexity (e.g. VNF consisting of a single VNFC)
• VNF internal networking topology (e.g. virtual networks required to interconnect the VNFCs)
• Restrictions on VNFC booting order
• Existence of a VNF-specific VNFM
• Use of Element Management (EM)

An example scenario for the instantiation of a VNF is explored below. A VNF comprised of 3 VNFCs is considered. Each VNFC is described in the VNFD, where the required resources for each VNFC are declared. The VNFM instantiates the three VNFCs by signalling to the NFVO, and through that to the VIM, the creation and instantiation of the three virtualisation containers that will host the three VNFCs. At this point the VNFCs may be seen as independent entities not yet organised as a VNF. However, the VNFCs are interconnected using the internal networks and interfaces stated in the VNFD. The L2/L3 information, vital for the next steps, is retrieved by the VNFM, which in turn communicates with the VNF Controller in order to proceed with the configuration and organisation of those VNFCs into the VNF.

The next subsections provide more information on the T-NOVA pre-selected network domains foreseen for all the VNFs in order to simplify the deployment and instantiation process.

2.1.4. VNFC Networking

According to the T-NOVA architecture, each VNFC should have four separate network interfaces, each one bound to a separate isolated network segment. The networks related to any VNFC are: management, datapath, monitoring and storage. The figure below (Figure 3) illustrates this arrangement. In various cases this rule might not be followed, especially where there is no real requirement for a particular network/interface, e.g. a VNFC that is not using persistent storage on the storage array of the NFVI.


Figure 3. VNF/VNFC Virtual Links and Connection Points

The management network provides the connection and communication with the VNFM and passes lifecycle-related events seamlessly to the VNF. The management network conveys the information passed over from the VNFM to the VNF Controller (T-Vnfm-Vnf interface). The VNFM can control each particular VNFC through the management interface, and provide the overall stable functionality of the VNF.

The datapath network provides the networking for the VNF to accept and send the data traffic related to the network function it supports. For example, a virtual Traffic Classifier would receive the traffic under classification from this interface. The datapath network can consist of more than one network interface that can receive and send data traffic. For example, in the case where the function of the VNFC is an L2 function (e.g. a bridge), the anticipated interfaces are two, one for the ingress and another one for the egress. In some cases, those interfaces might be mapped on different physical interfaces too. The number of interfaces for the datapath and their particular use is decided by the VNF provider. Additionally, the data traffic that needs to be shared among different VNFCs uses the datapath network.

The monitoring network provides the communication with the monitoring framework. Each VNF has a monitoring agent installed on each VNFC, which collects VNF-specific monitoring data and signals them to the Monitoring Framework (see [D4.01]) or to the VNFM/EM, depending on the situation. The flow of the monitoring data is from the VNF service to the monitoring agent and finally to the monitoring framework, which collects data from all VNFs. A separate monitoring network has been introduced in T-NOVA to cope with the amount of traffic generated in large-scale deployments. However, the monitoring traffic can easily be aggregated with the management traffic into a single network instance, if this solution is adequate for the specific application.

The storage network is intended for supporting the communication of the VNFCs with the storage infrastructure provided at each NFVI. This applies to the case where a VNFC will utilize persistent storage. In many NFVI deployment scenarios the physical interface that handles the storage signaling (e.g. iSCSI) on each compute node is separated from the other network segments. This network segment is considered optional and only applicable to the above use cases.

The resulting deployment in an OpenStack environment is illustrated below in Figure 4. In this example a deployment of two VNFCs that comprise a single VNF is illustrated.

Figure 4. OpenStack overview of the T-NOVA VNF deployment networks.

2.1.5. VNF internal structure

A VNF may be composed of one or multiple components, namely the VNFCs. In this perspective, each VNFC is defined as a software entity deployed in a virtualization container. As discussed in the previous section, Figure 3 presents the case where a VNF is comprised of one VNFC, whereas in Figure 5 an example of a VNF comprised of 3 VNFCs is given. The internal structure of the VNF is entirely up to the VNF developer and should be seamless to the T-NOVA SP. It should also be noted that, for the same NF that is being virtualized, different developers might choose different implementation methods and provide the same NF with a different number of components. The VNF internal structure is described in the VNFD as a graph. Inside the VNFD, the nodes (vertices) of the graph are the interfaces of each VNFC, and the internal Virtual Links interconnecting the VNFCs are the edges of the graph.


Figure 5. Multi-VNFC composed VNF

The topology includes the connections between the different network interfaces of the different network components. From a VNF perspective, this topology is translated into the different connections over the virtual links (VLs) that connect the different VNFCs. An example of inter-VNFC topology is shown in Figure 6. The monitoring network is considered as multiplexed with the management network.

Figure 6. Virtual Link VNFC interconnection example.

In this example we allocate two datapath network segments. Datapath 1 is used to allow incoming traffic to both VNFCs at the same time (using traffic mirroring). Datapath 2 is used for the traffic exiting the VNF. The management network is used for allowing the external communication of the VNF with the VNFM, and also to allow VNFC1 (the controller) to control and coordinate VNFC2. It is apparent that other variants of the above networking example could be possible. The developer will use the VNFD in order to specify the exact internal structure and connectivity of the VNF.

2.1.6. VNF management interface

Each VNF shall have a management interface, named the T-Ve-Vnfm interface, that is used to support the VNF lifecycle. The VNF lifecycle is shown in Figure 7.

Figure 7. VNF lifecycle

A general discussion on the VNF lifecycle can be found in [NFVMAN]. Also, the implementation of the VNF lifecycle as envisioned in the T-NOVA project is discussed in [D2.41], section 3.3. Some relevant aspects encountered by developers in the VNF development process are explained below. Some guidelines to clarify the activities that must be carried out in order to produce VNFs that can be used in the T-NOVA framework are also provided.

In T-NOVA, the VNF management interface is technically implemented by only one of the VNFCs constituting the VNF. Such VNFC is in charge of distributing the lifecycle-related information to all the other VNFCs constituting a VNF.

In accordance with the ETSI MANO specification, the interaction between the VNFM and the VNF, implementing the VNF lifecycle, is thoroughly described in the VNF Descriptor. In particular, the VNFD contains a section called "lifecycle_event", which provides all the details to allow the VNFM to interact with the VNF in a fully automated way.

To this aim, each VNF needs to be able to declare various lifecycle events (e.g. start, stop, pause...). For each of those events, the information needed to configure the VNF can be very different. Moreover, the command to trigger the re-configuration of the VNF can change between events. To support the different events, each one needs to be detailed in the VNFD.

The information related to the VNF lifecycle is inserted in the "lifecycle_event" section of the VNFD. In particular, in such a section the following information is available:

• Driver: the protocol used for the connection between the VNF Manager and the controlling VNFC. In T-NOVA, two protocols can be used: the Secure Shell (SSH) protocol and the HTTP protocol.

• Authentication: these fields specify the type of authentication that must be used for the connection, and some specific data required by the authentication process (e.g. a link to the private key injected by the VNFM at startup).

• Template File Format: specifies the format of the file that contains the information about the specific lifecycle event, and that must be transferred once the command is run.

• Template File: includes the name and the location of the Template File.


• VNF Container: specifies the location of the Template File.

• Command: defines the command to be executed.
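To make the interaction more concrete, the sketch below shows how a VNF Controller could expose such lifecycle events over HTTP. It is a minimal illustration assuming the Flask library; the endpoint paths and status codes follow the HTTP conventions summarized later in Section 2.1.10 (POST for "start", PUT for other events, DELETE for "destroy", multipart file upload), but they are illustrative placeholders rather than the normative T-NOVA interface.

# Minimal sketch of a VNF Controller exposing lifecycle events over HTTP
# (assuming the Flask library). Endpoint paths are illustrative only.
from flask import Flask, request

app = Flask(__name__)

def apply_lifecycle_event(event, config_path=None):
    # Propagate the event (and the template file, if any) to the other VNFCs.
    print("applying lifecycle event: %s (config: %s)" % (event, config_path))

def save_template_if_present():
    # Template file uploaded as multipart/form-data together with the trigger.
    if "file" in request.files:
        f = request.files["file"]
        path = "/tmp/" + f.filename
        f.save(path)
        return path
    return None

@app.route("/lifecycle/start", methods=["POST"])
def start():
    apply_lifecycle_event("start", save_template_if_present())
    return "", 202

@app.route("/lifecycle/<event>", methods=["PUT"])
def other_event(event):  # e.g. stop, pause, resume
    apply_lifecycle_event(event, save_template_if_present())
    return "", 202

@app.route("/lifecycle", methods=["DELETE"])
def destroy():
    apply_lifecycle_event("destroy")
    return "", 202

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)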

2.1.7. VNF Descriptor (VNFD)

As explored in the above sections, the VNFD plays an important role in the proper deployment of a particular VNF in the NFVI by the NFVO, as well as in the portability of the VNF to NFVI variants. This section attempts a specification of the VNFD as currently used by T-NOVA. The refined final version will be published in D5.32.

2.1.7.1. VNFD Preamble

The VNFD preamble provides the necessary information for the release, id, creation, provider, etc. T-NOVA extends this information with Marketplace-related information such as trading and billing.

2.1.8. Virtual Deployment Unit (VDU)

The vdu VNFD segment provides information about the required resources that will be utilised in order to instantiate the VNFC. The configuration of this part may be extremely detailed and complex depending on the platform-specific options that are provided by the developer. However, it should be noted that the more specific the requirements stated here, the less portable the VNF might be, depending on the NFVO policies and the SLA specifications. It is assumed that each vdu effectively describes the resources required for the virtualisation of one VNFC.

release: "T-NOVA v0.2"   # Release information
id: "52439e7c-c85c-4bae-88c4-8ee8da4c5485"   # NFStore provides the id, not known prior to uploading and cannot be referenced at time zero
provider: "NCSRD"   # Provider (T-NOVA) - Vendor (ETSI)
provider_id: 23   # FP given an ID by the Marketplace
description: "The function identifies, classifies and forwards network traffic according to policies"
descriptor_version: "0.1"
version: "0.22"
manifest_file_md5: "fa8773350c4c236268f0bd7807c8a3b2"   # Calculated by the NFStore during upload
type: "TC"
date_created: "2015-09-14T16:10:00Z"   # Auto inserted at the Marketplace
date_modified: "2015-09-14T16:10:00Z"   # Auto inserted at the Marketplace (to be checked)
trade: true
billing_model:
  model: "PAYG"   # Valid options are PAYG and Revenue Sharing (RS)
  period: "P1W"   # Per 1 week, e.g. P1D per 1 day
  price:
    unit: "EUR"
    min_per_period: 5
    max_per_period: 10
    setup: 0

Listing 1. VNFD Preamble

The listing below (Listing 2) presents a snippet of the vdu section of the VNFD, focusing on the IT resources and platform-related information. Other fields may also be noted, such as: i) lifecycle events, where the drivers for interfacing with the VNF Controller are defined, as well as the appropriate commands allocated to each lifecycle event; ii) scaling, defining the thresholds for scaling in-out; and iii) the VNFC-related subsection, where the networking and inter-VNFC Virtual Links are defined.


<snip>
vdu:
  - id: "vdu0"   # VDUs numbered in sequence (TBD)
    vm_image: "http://store.t-nova.eu/NCSRDv/TC_ncsrd.v.022.qcow"   # URL that contains the VM image
    vm_image_md5: "a5e4533d63f71395bdc7debd0724f433"   # generated by the NFStore
    # VDU instantiation resources
    resource_requirements:
      vcpus: 2
      cpu_support_accelerator: "AES-NI"   # Opt. if accelerators are required
      memory: 2   # default: GB
      memory_unit: "GB"   # MB/GB (optional)
      memory_parameters:   # Optional
        large_pages_required: ""
        numa_allocation_policy: ""
      storage:
        size: 20   # default: GB
        size_unit: "GB"   # MB/GB/TB (optional)
        persistence: false   # Storage persistence true/false
      hypervisor_parameters:
        type: "QEMU-KVM"
        version: "10002|12001|2.6.32-358.el6.x86_64"
      platform_pcie_parameters:   # Opt. required if SR-IOV is used for graphics acceleration
        device_pass_through: true
        SR-IOV: true
      network_interface_card_capabilities:
        mirroring: false
        SR-IOV: true
      network_interface_bandwidth: "10Gbps"
      data_processing_acceleration_library: "eg DPDK v1.0"
      vswitch_capabilities:
        type: "ovs"
        version: "2.0"
        overlay_tunnel: "GRE"
      networking_resources:
        average: "7Mbps"
        peak: "10Mbps"
        burst: "200KBytes"
    # VDU lifecycle events
    vdu_lifecycle_events:
    # VDU scaling config
    scale_in_out:   # number of instances for scaling allowed
      minimum: 1   # min number of instances
      maximum: 5   # max number of instances; if max=min then no scaling is allowed
    # VNFC specific networking config
    vnfc:

Listing 2. VNFD vdu section

2.1.8.1. VNFC subsection


This section of the VNFD is related to the internal structure of the VNF and describes the connection points of the vdu (name, id, type) and the virtual links where they are connected. This information is illustrated in the VNFD listing below (Listing 3).

vnfc:
  id: "vdu0:vnfc0"   # incremental reference of VNFCs in the VDU
  networking:   # Start of T-NOVA networking section
    - connection_point_id: "mngt0"   # for monitoring and management purposes
      vitual_link_reference: "mngt"
      type: "floating"   # The type of interface, i.e. floating_ip, vnic, tap
      bandwidth: "100Mbps"
    - connection_point_id: "data0"   # datapath interface
      vitual_link_reference: "data"
      type: "vnic"   # The type of interface, i.e. floating_ip, vnic, tap
      bandwidth: "100Mbps"
...snipped...
vlink:   # T-NOVA VL section: relative to resources:networks in the HEAT template; subnet and DNS information should be filled in by the Orchestrator and is therefore not put here
  - vl_id: "mngt"
    connectivity_type: "E-Line"
    connection_points_reference: ["vdu0:mngt0", "vnf0:mngt0"]
    root_requirement: "10Gbps"
    leaf_requirement: "10Gbps"   # Used only for E-Tree
    qos: "BE"
    test_access: "active"   # Active: use ext IP to be reached; Passive: L2 access via steering; None: no access
    net_segment: "192.168.1.0/24"   # a-priori allocation by the system during approval of the VNFD
    dhcp: "TRUE"
...snipped...

Listing 3. VNFC section of the VNFD

The above information is parsed and translated to a HEAT template, which the NFVI VIM (based on OpenStack) is able to parse in order to provide the required networks accordingly.

2.1.9. Instantiation parameters

The VNFD enables VNF developers to thoroughly describe the network function in terms of static and instantiation-specific parameters, or, more simply, instantiation parameters. The latter usually apply to the configuration of the VNF application, and their values can only be known at runtime (e.g. IP addresses). Nevertheless, VNF developers still need to describe which of the runtime parameters are needed and where they are needed. For these cases, VNF developers can use one of two available mechanisms:

• Get Attributes Function: cloud orchestration templates usually provide a set of functions that enable users to programmatically define actions to be performed during instantiation. For example, in HOT, the "get_attr" function allows the retrieval of a resource attribute value at runtime;


• Cloud Init: cloud-init is a library supported by multiple cloud providers that allows developers to configure services to be run within VMs at boot time. In traditional IaaS environments it is commonly used to upload user-data and public keys to running instances.

Although both mechanisms can be used to upload deployment-specific parameters to VNFs, they are intrinsically different, and it is up to the VNF developers to decide which one best suits their VNFs. If needed, developers can also decide to use both mechanisms. The main functional differences between these two are summarized in Table 2.

Table 2-A: Methods for instantiation parameter upload

Method: "get_attr"
When: At any moment between the Deployment and Termination VNF lifecycle events
How: The VIM provides the information to the Orchestrator, which forwards it to the VNFM, and this to the VNF by using the middleware API

Method: Cloud Init + Configuration Management Software
When: At VM boot time
How: The VIM provides the information through the OpenStack Metadata API, which is collected by the VM after boot

Get Attributes

This method is based on the "get_attr" function defined in HOT. This function works by referencing in the template the logical name of the resource and the resource-specific attribute. The attribute value will be resolved at runtime. The syntax of the function can be found at [GetA] and is as follows:

{ get_attr: [<resource_name>, <attribute_name>, <key/index 1>, <key/index 2>, …] }

In which:

• resource_name: the resource name for which the attribute needs to be resolved;

• attribute_name: the attribute name to be resolved. If the attribute returns a complex data structure such as a list or a map, then subsequent keys or indexes can be specified. These additional parameters are used to navigate the data structure to return the desired value;

• key/index i: …

2.1.9.1. Cloud Init + Configuration Management Software

Another, complementary, approach is to dissociate the actual initialization of the VM from its configuration through the middleware API. We delay the configuration of the VNFC until a command is received at the VNF Controller level. Following an Infrastructure-as-Code approach, the controller, when receiving a lifecycle event from the mAPI, is in charge of enforcing the compliance of the infrastructure with the state received from the Orchestrator through the mAPI.


Figure 8. Two-phase VNF configuration

According to this technique, the allocation of a VNF is carried out in two different phases. The first one, the Bootstrapping Phase, handled by cloud-init, connects the VMs together by sharing some very limited information revolving around networking (sharing of IPs). The second one, the Configuration Phase, needs configuration management software installed on the various VNFCs that "glues" them together and allows configuration to be propagated from the VNF Controller down to the other VNFCs.

When the VNF Controller receives a lifecycle event (for instance, start) from the Middleware API, it will implement the order using the configuration management software on the other VNFCs.

For an example of an implementation of this method, please refer to Section 2.6.4.6.

This approach brings more complexity, as it imposes the use of configuration management software, and the VNF is responsible for applying the configuration. However, it becomes valuable when considering complex VNFs with several VNFCs and scaling.

2.1.10. Interaction with configuration middleware

The main advantage of having a middleware to transmit commands to the VNFs is being able to express those commands in a technology-agnostic way in the VNFD.

The middleware API supports multiple technologies to send instantiation information and trigger state changes in VNFs. Currently two technologies are supported:

• SSH
• HTTP REST-based interface

VNF developers can use the VNFD to define the templates that hold the information (e.g. JSON files) and the commands that will trigger the configuration of the VNFs.

The specification for SSH is summarized below:

• Authentication:
  o Private-Key: a private key is available in the T-NOVA platform that enables authentication in virtual machines when using the SSH protocol. When deploying the virtual machines, this private key is injected in the VM by the VIM.


• Operations:
  o Copy file to host: this operation allows the middleware API to send the configuration file to the VM. It enables the exchange of configuration parameters with the VNFs;
  o Run remote command: this operation is used to trigger a lifecycle event/reconfiguration of the VNF. With this operation VNF developers can specify not only commands that should be run inside the VM but also scripts.

The specification for HTTP is summarized below:

• Authentication:
  o Basic access authentication: for T-NOVA, when using HTTP, only basic authentication will be supported. This authentication is realized by sending a username and password via an HTTP header when making a request;

• Operations:
  o File upload: a simple HTTP file upload operation using the "multipart/form-data" encoding type. POST and PUT operations must be supported to enable a single-step operation (upload file and trigger reconfiguration);
  o Send POST request: the POST request only applies to the start operation, following RESTful architecture best practices;
  o Send PUT request: PUT requests are used for all lifecycle events with the exception of the "start" event;
  o Send DELETE request: this request maps to the "destroy" lifecycle event.

2.1.11. Monitoring

Collecting monitoring data between the VNF and the T-NOVA architecture is a major task in T-NOVA. Both system metrics, which are not VNF-specific, and software metrics, which are VNF-specific, are collected in the same database in the T-NOVA platform. In this section, the functionality that each VNF should implement in order to push monitoring metrics into the monitoring database is described.

2.1.11.1. System Monitoring

Monitoring information about the system is currently collected using the collectd agent [Collectd] that is installed and run on each VNFC. Metrics such as CPU, RAM and I/O are collected on a per-VM level and pushed to the collectd server.

Other options, including the use of OpenStack Ceilometer to collect basic system information, are currently being considered by Task 4.4.

2.1.11.2. VNF Specific Monitoring

In this section, two methods for the interaction with the T-NOVA monitoring framework are presented.

Simple Monitoring

The VNF developer should perform some monitoring-specific developments during the implementation phase of the VNF in order to be T-NOVA compatible. Specifically, the following steps should be followed by the VNF developer:

• Installation of Python 2;


• Copy of the small Python script provided by T-NOVA, to be called when monitoring data is to be sent;

• The monitoring data should be sent to the monitoring system regularly (for example, through the use of a cron job);

• Finally, the Python 2 script should be called to push the metrics into the T-NOVA database, in a key-value format.
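The T-NOVA push script itself is not reproduced here; the sketch below only illustrates the pattern described above (collect application-level metrics as key-value pairs and hand them to the push script, typically from a cron job). The script path and its command-line arguments are hypothetical placeholders and may differ from the actual T-NOVA asset.

import subprocess

def collect_metrics():
    # VNF-specific logic, e.g. parsing log files or querying XML files with XPath.
    return {"active_sessions": 42, "transcoded_streams": 3}

def push(metrics, push_script="/opt/tnova/push_metric.py"):
    for name, value in metrics.items():
        # One key-value pair per invocation (hypothetical calling convention).
        subprocess.call(["python2", push_script, name, str(value)])

if __name__ == "__main__":
    push(collect_metrics())   # typically scheduled periodically via cron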

The main advantages of this approach are:

1. It doesn't require any specific installation as long as Python 2 is already installed, which is usually the case by default on most Linux distributions;

2. It allows VNF developers to implement their own way of collecting metrics from their application. For example, among the currently developed VNFs, one developer queries XML files with XPath to output tuples (metric name, metric value), and the Python script is called with those metrics. Another developer may parse log files and extract relevant data with some custom technology.

The main drawback is the necessity to support the Python script as an external asset, which is not acceptable for professional developers if it is updated regularly, as it must be audited before integration into the final solution.

2.1.11.3. SNMP Monitoring

The monitoring data can be exchanged with the Monitoring Manager via the SNMP protocol. This kind of interaction requires the installation of the SNMP collectd agent in the Virtual Machine. The Monitoring Manager requests metrics by sending an SNMP GetRequest to the Monitoring Agent. Typically, UDP is used as the transport protocol. In this case, the Agent receives Requests on UDP port 161, while the Manager sends Requests from any available source port. The Agent replies with an SNMP GetResponse to the source port of the Manager.

An example of these procedures is depicted in Figure 9, which refers to the vSBC. In this case the monitoring data are collected by the O&M component, acting as an SNMP Monitoring Agent.


Figure 9. Monitoring data handling (via SNMP) in case of vSBC

Monitoring data is described by a Management Information Base (MIB). This MIB includes the VNF/VDU identifiers and uses a hierarchical namespace containing object identifiers (OIDs). Each OID identifies a variable that can be read via the SNMP protocol (see RFC 2578). The SNMP GetRequest contains the OIDs whose values are to be sent to the Monitoring Manager, and the SNMP GetResponse contains the values of the collected data.
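For illustration, the following sketch shows the Manager side of such a GetRequest, assuming the pysnmp library and SNMPv2c; the agent address, community string and OID are hypothetical placeholders (the real OIDs are defined in the VNF's MIB).

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData("public", mpModel=1),        # SNMPv2c community
           UdpTransportTarget(("10.0.0.5", 161)),     # Monitoring Agent (e.g. vSBC O&M)
           ContextData(),
           ObjectType(ObjectIdentity("1.3.6.1.4.1.99999.1.1.0"))))  # hypothetical metric OID

if error_indication or error_status:
    print("SNMP GET failed:", error_indication or error_status.prettyPrint())
else:
    for oid, value in var_binds:
        print("%s = %s" % (oid.prettyPrint(), value.prettyPrint()))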

Usually the time intervals between two consecutive collections of monitoring metrics are not synchronized with the SNMP Requests coming from the Manager. For this reason, unless the examined metric is an instantaneous value, the VNF returns to the Manager all data evaluated during the previous monitoring periods.

As an example, if the SNMP GetRequest is received by the Monitoring Agent at 17:27 and the VNF monitoring period is 5 minutes, the monitoring data sent to the Manager belong to the time interval 17:25 -> 17:30.

We also assumed that:

• The generation of the monitoring data is active by default inside the VNF;

• The monitoring period of each metric is configured inside the VNF components (i.e. IBCF and BGF in the case of the vSBC). If the frequency of the SNMP GetRequest is greater than the VNF monitoring period, the same value is sent more than once to the requesting Monitoring Manager.


2.2. Virtual Security Appliance

2.2.1. Introduction

A Security Appliance (SA) is a "device" designed to protect computer networks from unwanted traffic. This device can be active and block unwanted traffic; this is the case, for instance, of firewalls and content filters. A Security Appliance can also be passive: here, its role is simply detection and reporting. Intrusion Detection Systems are a good example. A virtual Security Appliance (vSA) is an SA that runs in a virtual environment.

In the context of T-NOVA, we have suggested a virtual Security Appliance (vSA) composed of a firewall, an Intrusion Detection System (IDS) and a controller that links the activities of the firewall and the IDS. The vSA high-level architecture was discussed in detail in [D5.01].

2.2.2. Architecture

The idea behind the vSA is to let the IDS analyze the traffic targeting the service; if some traffic looks suspicious, the controller takes a decision by, for instance, revising the rules in the firewall and blocking this traffic.

The architecture of this appliance is depicted in Figure 10 and includes the following main components.

Figure 10. vSA high-level architecture.

2.2.3. Functional description

The components of the architecture are the following:

• Firewall: this component is in charge of filtering the traffic towards the service.

• Intrusion Detection System (IDS): in order to improve attack detection, a combination of a packet-filtering firewall and an intrusion detection system using both signatures and anomaly detection is considered. In fact, anomaly-detection IDSs have the advantage over signature-based IDSs of detecting novel attacks for which signatures do not exist. Unfortunately, anomaly-detection IDSs suffer from a high false-positive detection rate. It is expected that combining both kinds of detection will improve detection and reduce the number of false alarms. In T-NOVA, the open source signature-based IDS [SNORT] is being used and will be extended to support anomaly detection as well. The mode of operation of the IDS component was also discussed in deliverable [D5.01];

• FW Controller: this application looks into the IDS "alerts repository" and, based on the related information, the rules of the firewall are revised. Figure 11 depicts a part of the FW Controller code; an illustrative sketch of this control loop is also given after this list.

Figure 11. A sample of code of the FW Controller

• Monitoring Agent: this is a script that reports the status of the VNF to the monitoring server through some metrics, such as the number of errors coming in/going out of the WAN/LAN interface of pfSense, the number of bytes coming in/going out of the WAN/LAN interface of pfSense, the CPU usage of Snort, the percentage of packets dropped by Snort, etc.;

• vSA Controller: this is the application in charge of the vSA lifecycle (for more details please refer to Section 2.1.1).
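The FW Controller code shown in Figure 11 is not reproduced in this transcript; purely as an illustration of the control loop described above, the sketch below tails a Snort alert file and asks the firewall to block offending source addresses. The alert parsing and the pfSense easyrule invocation are placeholders and do not mirror the actual FW Controller implementation.

import re
import subprocess
import time

ALERT_FILE = "/var/log/snort/alert"                  # hypothetical Snort alert file
SRC_IP = re.compile(r"(\d+\.\d+\.\d+\.\d+)\S* ->")   # source address in fast-alert lines

def block(ip):
    # Placeholder rule insertion, e.g. via the pfSense easyrule shell command.
    subprocess.call(["easyrule", "block", "wan", ip])

def follow(path):
    with open(path) as f:
        f.seek(0, 2)                                  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(1.0)
                continue
            yield line

blocked = set()
for alert in follow(ALERT_FILE):
    match = SRC_IP.search(alert)
    if match and match.group(1) not in blocked:
        block(match.group(1))
        blocked.add(match.group(1))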


2.2.4. Interfaces

The different components of the architecture interact in the following way:

1. data packets are first of all filtered by the firewall (ingress interface) before being forwarded to the service (egress interface);

2. filtered data packets are sniffed by the IDS for further inspection (internal interface). The IDS will monitor and analyze all the services passing through the network;

3. data packets go through a signature-based procedure. This will help in efficiently detecting well-known attacks such as port scan attacks and TCP SYN flood attacks;

4. if an attack is detected at this stage, an alarm is generated and the firewall is instructed to revise its rules (internal interface);

5. if no attack is detected, no further action is required.

In addition to that, there are two extra interfaces: the first one is in charge of the vSA lifecycle management, and the second one monitors the status of the vSA and sends the related information to the monitoring server.

2.2.5. Technologies

As performance is one of the main issues when deploying software versions of security appliances, we started by providing a short evaluation of firewall software that can run in virtual environments. The idea was not to go through all the relevant existing software, but just the most popular packages that could be extended to fulfill the use case requirements. This evaluation was described in [D5.01]. It turns out that the open source firewalls that are richer and more complete are Vyatta VyOS and pfSense (please refer to [D5.01] for more details). In addition to that, VyOS seems to support REST APIs for configuration, which are important for the integration with the rest of the T-NOVA framework.

These two options were also evaluated from the performance point of view, and the results are discussed in Section 2.2.6. Based on this assessment, the pfSense firewall seems to be the best option to be used within the vSA.

2.2.6. Dimensioning and Performance

To study the performance of firewalls, appropriate metrics are needed. Although the activities in this area are very scarce, we described in [D5.01] potential metrics that could be used. These include throughput, latency, jitter, and goodput.

2.2.6.1. Testbed setup

For simplicity reasons, we have used Iperf [IPERF] for generating IP traffic in our tests. In fact, other IP traffic generators such as D-ITG [DITG], Ostinato [Ostinato] and IPTraf [IPTR] could have also been utilized. Iperf mainly generates TCP and UDP traffic at different rates. Diverse loads (light, medium, heavy) and different packet sizes are also considered. For analyzing IP traffic, we used tcpdump for capturing it and tcptrace to analyse it and generate statistics. As for the virtualization, VirtualBox [VBOX] was used.

To run our tests, we decided to use two hosts. On the first one, we installed the Iperf client and server, and on the second one, we set up the firewall under test. This setup is in fact in line with the recommendations provided in RFC 2647.


2.2.6.2. Testing scenarios

The undertaken tests are based on three main scenarios:

1. No firewall: here, we configure and check the connectivity between the Iperf client and server without a firewall in between. This enables us to test the capacity of the communication channel;

2. TCP traffic with firewall and no rules: here, we check whether the introduction of a firewall (running on a virtual machine in between) generates extra delay. We also test the capacity of the firewall in this context;

3. With firewall and increasing number of rules: the objective of this scenario is to study the effect of introducing rules into the firewall. To achieve this scenario, scripts for both pfSense and VyOS were implemented to generate rules in an automatic way. The scripts are shell scripts using specific API commands and generate blocking rules for random source IP addresses (excluding those used in the test setup) on the WAN interface. For pfSense, the easyrule function is extended, and for VyOS, the "configure" environment (set of commands) is used. In this scenario, some tests are also performed using UDP instead of TCP.
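The project's rule-generation scripts are shell scripts, as noted above; the sketch below is only an equivalent illustration of the same idea in Python. It emits blocking rules for random source IP addresses on the WAN interface; the pfSense easyrule and VyOS set-command forms are indicative, and the excluded prefixes, rule name and numbering are placeholders.

import random

TESTBED_PREFIXES = ("192.168.56.", "10.0.2.")   # addresses used by the test setup (placeholders)

def random_ip():
    while True:
        ip = ".".join(str(random.randint(1, 254)) for _ in range(4))
        if not ip.startswith(TESTBED_PREFIXES):
            return ip

def pfsense_rule(ip):
    # pfSense: blocking rule on the WAN interface via easyrule.
    return "easyrule block wan %s" % ip

def vyos_rule(number, ip):
    # VyOS: blocking rule inside the "configure" environment.
    return "set firewall name WAN_IN rule %d source address %s" % (number, ip)

if __name__ == "__main__":
    for n in range(1, 11):            # generate 10 rules for each firewall
        print(pfsense_rule(random_ip()))
        print(vyos_rule(n, random_ip()))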

2.2.6.3. Test results

When no firewall is used between the Iperf client and server, one can note that the throughput of the communication remains good (700 Mbit/s) as long as the number of parallel connections does not exceed 7. When the number of connections goes beyond this value, the throughput decreases very fast, reaching 0 when 20 connections are opened (Figure 12, Figure 14). One can also notice that the Round Trip Time (RTT) is severely affected when increasing the number of connections between the Iperf client and server (Figure 13, Figure 15).

Figure 12. Throughput without firewall


Figure 13. RTT without firewall

Figure 14. Firewall comparison without rules (Throughput)


Figure 15. Firewall comparison with 10 rules (RTT)

With a firewall (pfSense or VyOS) placed between the Iperf client and server, the variation of the throughput and of the RTT is depicted in Figure 14 and Figure 15, respectively. One can note that pfSense, in both cases, presents a more stable behavior when the number of connections increases.

2.2.7. Future Work

The next steps are the following:

• Move all components to one single VM, as using an OVS in a separate VM for IDS integration is not feasible due to OpenStack limitations;

• Complete the monitoring integration;

• Create the Heat template;

• Create the VNF descriptor.

2.3. Session Border Controller

2.3.1. Introduction

A Session Border Controller (SBC) is typically a standalone device providing network interconnection and security services between two IP networks. It operates at the edge of these networks and is used whenever a multimedia session involves two different IP domains. It performs:

• session control on the "control" plane, adopting SIP as the signalling protocol;

• several functions on the "media" plane (e.g. transcoding, transrating, NAT, etc.), adopting the Real-time Transport Protocol (RTP) for multimedia content delivery.

The vSBC is the VNF implementing the SBC service in the T-NOVA virtualized environment, and it is a prototyped version of the commercial SBC that Italtel is developing for the NFV market. General requirements for vSBCs comprise both essential features (such as IP-to-IP network interconnection, SIP signalling proxy, media flow NAT, RTP media support) and also advanced requirements (such as SIP signalling manipulation, real-time audio and/or video transcoding, topology hiding, security gateway, IPv4-IPv6 gateway, generation of metrics, etc.). For the objectives of the T-NOVA project, we focus on all essential features and a subset of the advanced requirements (i.e. IPv4-IPv6 gateway, real-time audio and/or video transcoding for mobile and fixed networks, metrics generation, etc.).

2.3.2. Architecture

The basic architecture of the virtualized SBC is depicted in Figure 16.

Figure 16. Basic vSBC internal architecture.

The basic vSBC consists mainly of:

• four Virtual Network Function Components (VNFCs): LB, IBCF, BGF and O&M (described in detail below);

• four NICs;

• one Management interface (T-Ve-Vnfm): it transports the HTTP commands of the T-NOVA lifecycle from the VNFM to the O&M component;

• one Monitoring interface: the monitoring data, produced by the internal VNFCs (i.e. IBCF and BGF), are collected by the O&M and are cyclically sent (via SNMP) to the T-NOVA Monitoring framework. See also Task 4.4 (Monitoring and Maintenance) for further details.

The enhanced architecture of the virtualized SBC also comprises the DPS component. This additional component, described in detail in the following paragraphs, is based on the DPDK acceleration technology and can provide high speed in processing the addressing information in the header of IP packets. Figure 17 depicts the enhanced architecture of the vSBC.


Figure 17. Enhanced vSBC internal architecture (with DPS component).

The enhanced vSBC architecture consists of:

• five Virtual Network Function Components (VNFCs): DPS, LB, IBCF, BGF and O&M (described in detail below);
• two NICs, in case the incoming and outgoing signalling and media are handled by the DPS with the same NIC;
• one Management interface (T-Ve-Vnfm), as described in the basic architecture;
• one Monitoring interface, as described in the basic architecture.

2.3.3. Functional description

• Load Balancer (LB): it balances the incoming SIP messages, forwarding them to the appropriate IBCF instance.

• Interconnection Border Control Function (IBCF): it implements the control function of the SBC. It analyzes the incoming SIP messages and handles the communication between disparate SIP end-point applications. The IBCF extracts from the incoming SIP messages the information about the media streams associated with the SIP dialog, and instructs the media-plane components (DPS and/or BGF) to process them. It can also provide:

o SIP message adaptation or modification, enabling in this way the interoperability between the interconnected domains;
o topology hiding (the IBCF hides all incoming topological information from the remote network);
o monitoring data (the IBCF can send this information to the T-NOVA monitoring agent);


o other security features.

• Border Gateway Function (BGF): it processes media streams, applying transcoding and transrating algorithms when needed (transcoding changes the algorithm used for coding the media stream, while transrating changes the sending rate of the IP packets carrying media content). This feature is used whenever the endpoints of the media connection support different codecs, and it is an ancillary function for an SBC because, in common network deployments, only a limited subset of the media streams processed by the SBC needs to be transcoded. The BGF is controlled by the IBCF using the internal BG ctrl interface (see Figure 16). If the transcoding/transrating function is implemented by a pure software transcoder, its performance dramatically decreases unless GPU (Graphical Processing Unit) hardware acceleration is available in the virtual environment; otherwise this issue can be mitigated by running more BGF instances. The BGF component can also provide metrics to the T-NOVA monitoring agent;

• Operating and Maintenance (O&M): it supervises the operating and maintenance functions of the VNF. In a cloud environment, the O&M module extends the traditional management operations handled by the Orchestrator (e.g. scaling). The O&M component interacts (via HTTP) with the VNF Manager, using the T-Ve-Vnfm interface depicted in Figure 16, to apply the T-NOVA lifecycle;

• Data Plane Switch (DPS): it is the (optional) front-end component of the vSBC, based on the DPDK acceleration technology available in Intel x86 architectures, used to reach a performance level comparable to the hardware-based version adopting HW acceleration technologies. This component can use the same IP address, as ingress or egress point, both for signalling and media flows. Its goal is to provide high-speed processing of the addressing information in the header of the IP packet, leaving the payload untouched. The DPS is instructed how to manage the IP packets by the IBCF component, which acts as an external controller using an internal dedicated DPS ctrl interface (see Figure 17). The DPS component can:

o either forward the packets to the BGF (in case of transcoding),
o or apply a local Network Address Translation (NAT) / port translation.

Note: the DPS is an optional component of the vSBC. When it is not present, the LB component acts as the front-end of the VNF, of course with lower performance.

2.3.4. Interfaces

The most relevant internal and external interfaces depicted in Figure 16 and Figure 17 are:

1. DPS ctrl: this (enhanced) internal interface instructs the Data Plane Switch (DPS) on how to process the incoming media packets (forwarding them to the BGF, or applying NAT). It is used only in case of the enhanced vSBC;

2. BG ctrl: this internal interface instructs the Border Gateway Function (BGF) to perform media stream transcoding/transrating between two codecs;

3. Management interface (T-Ve-Vnfm): it is used to transport the HTTP commands of the T-NOVA lifecycle (e.g. start, stop, destroy, scale in/out). It is supported by the O&M component (a minimal client-side sketch is given after this list);


4. Monitoring interface: the monitoring data, produced by the internal VNFCs (i.e. IBCF and BGF), are collected by the O&M and cyclically sent (via SNMP) to the T-NOVA Monitoring Manager.
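As an illustration of how a VNFM-side client could drive the lifecycle events over the T-Ve-Vnfm interface, the following minimal Python sketch issues the start/stop/destroy commands with basic access authentication, as planned for the vSBC testing (Section 2.3.6.1). The endpoint address, resource paths and payloads are illustrative assumptions, not the actual vSBC API.

import requests

O_AND_M = "http://192.0.2.10:8080/vnf"      # hypothetical O&M endpoint of the vSBC
AUTH = ("vnfm", "secret")                    # basic access authentication (assumed credentials)

def start():
    # "start" lifecycle event, carried as an HTTP POST
    return requests.post(O_AND_M + "/start", auth=AUTH, json={}, timeout=10)

def stop(graceful=True):
    # "stop" lifecycle event, carried as an HTTP PUT; immediate vs. graceful is
    # decided by the vSBC according to its internal data
    return requests.put(O_AND_M + "/stop", auth=AUTH, json={"graceful": graceful}, timeout=10)

def destroy():
    # "destroy" lifecycle event, carried as an HTTP DELETE
    return requests.delete(O_AND_M + "/destroy", auth=AUTH, timeout=10)

if __name__ == "__main__":
    print(start().status_code)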

2.3.5. Technologies

The development of the vSBC is based on the use of:

• the Linux operating system;
• the KVM hypervisor.

Services provided by telecommunication networks greatly differ from standard IT applications running in the cloud in terms of carrier-grade availability and high processing throughput. For this reason:

• the pure software implementation, in some scenarios, shall be complemented by acceleration technologies also available in cloud environments. For example, the DPS component uses the DPDK acceleration technology (available in Intel x86 architectures) to provide high-speed processing of the addressing information in the header of the IP packet;

• the transcoding/transrating feature, provided by the BGF component, benefits from GPU acceleration, so that real-time transcoding and a fixed video rate can be guaranteed during the whole audio/video session. The GPU encoding algorithms are able to efficiently exploit the potential of all available cores. The overall system consists of a commercial GPU board [Nvidia], hosted in a PCIe bay. The transcoding will be performed and tested between domains using the most common codecs (i.e. Google's VP8 and ITU-T H.264).

2.3.6. Dimensioning and Performance

The vSBC performance can be monitored using the metrics generated by its internal VNFCs (see also [D4.41] for further information), for example:

1. monitoring data related to the control plane: total number of SIP sessions/transactions, number of failed SIP sessions/transactions due to vSBC internal problems;

2. monitoring data related to the media plane: incoming/outgoing RTP data throughput, RTP frame loss, latency, inter-arrival jitter, number of transcoding/transrating procedures, number of failed transcoding/transrating procedures due to vSBC internal problems;

3. base monitoring data: percentage of memory consumption, percentage of CPU utilization.

These monitoring data are strongly affected by:

• packet sizes;
• kind of call (i.e. audio or video calls);
• audio/video codecs (e.g. H.264, VP8, …);
• transport protocols (i.e. UDP and TCP).

At the moment we do not have data related to the performance of the vSBC. Nevertheless, we have target requirements about the expected behaviour of its internal VNFCs. For example, the IBCF and BGF components provide market-sensitive performance: the IBCF shall support up to a certain number of simultaneous SIP sessions (e.g. 10, 25, 50, 100, 500, 1000), while the BGF


shall support up to a certain number of simultaneous transcoding/transrating operations (e.g. 20). For this reason, in the commercial product, each component size will be associated with a licence fee.

2.3.6.1. vSBC testing

The following tests/implementations have been carried out so far:

• creation of a JSON descriptor for the vSBC;
• creation of a HEAT template (HOT) for the vSBC;
• instantiation of the vSBC, using the HEAT template, in an Italtel testbed.

The following further scenarios will be tested in the next development steps:

• Support of the following T-NOVA lifecycle events (using an HTTP REST-based interface and basic access authentication):

o Start (via HTTP POST)
o Stop (via HTTP PUT), both immediate and graceful (1)
o Destroy (via HTTP DELETE)

• Basic audio sessions (without transcoding), using the most common audio codecs (e.g. G.711, G.729, …);

• Basic video sessions (without transcoding), using the most common video codecs (e.g. H.264, VP8, …);

• Audio sessions with transcoding (for example G.711 <-> G.729);

• Video sessions with transcoding (for example H.264 <-> VP8):

o without GPU
o with GPU (NVIDIA family).

The requested transcoding may be mono-directional (e.g. audio/video stream distribution) or bidirectional (e.g. videoconferencing applications). The supported video resolutions may be:

o PAL (576x720)
o 720p (1280x720)
o HD (1920x1080)
o 4k (3840x2160)

We will check whether the introduction of a GPU accelerator increases the number of simultaneous video transcoding sessions offered by the vSBC. The encoding/decoding procedures must be handled in real time (at least 30 fps) and with a fixed frame rate during the whole video session.

• Insertion, inside the basic vSBC architecture, of the new DPS component in order to optimize NAT scenarios without transcoding. The aim is to test and compare the performance of the basic architecture with that offered by the new DPS component.

(1) Note: this lifecycle event will be handled in an immediate or graceful mode according to specific vSBC internal data.


• Test the proper evaluation of the monitoring parameters, both for the media and control plane, generated by the internal VNFCs in all the scenarios previously described;

• Scale-in scenario;
• Scale-out scenario.

2.3.7. Future Work

Future work objectives:

1. to complete the harmonization of the vSBC interfaces with the other components of the T-NOVA project (e.g. the VNF Manager);
2. to insert a GPU accelerator for transcoding/transrating procedures;
3. to test the two most innovative components of the vSBC (DPS and BGF) in deployment scenarios with and without hardware/software accelerators available;
4. to complete the implementation of the monitoring parameters, used both for scaling procedures (in/out) and for SLA analysis;
5. to clarify the scale in/out scenarios.

2.4. Video Transcoding Unit

2.4.1. Introduction

The vTU provides a transcoding function that can be exploited by many other VNFs to create enhanced services.

2.4.2. Architecture

2.4.2.1. Functional description

The core task of the Video Transcoding Unit is to convert video streams from one video format to another. Depending on the application, the source video stream could originate from a file within a storage facility, as well as come in as a packetized network stream from another VNF. Moreover, the requested transcoding could be mono-directional, as in applications like video stream distribution, or bi-directional, as in videoconferencing applications.

Figure 18. Functional description of the vTU


Having this kind of applications in mind, it is clear that the most convenient overall architecture for this unit is a modular one, in which the necessary encoding and decoding functionalities are deployed as plug-ins within a "container" unit taking care of all the communication, synchronization and interfacing aspects. In order to find a convenient approach for the development of the vTU architecture, an investigation has been carried out on the state of the art of the available software frameworks that could be usefully employed as a starting point for this architecture. This investigation has identified avconv, the open-source audio-video tool available under Linux environments (https://libav.org/avconv.html), as the best choice for the basic platform of the vTU, as it is open source, modular and customizable, and it contains most of the encoding/decoding plug-ins that this VNF could need.

In order to define the functionalities that best fit the needs of the target applications for the vTU, a survey has been carried out, searching for the most widespread video formats that should therefore be present as encoding/decoding facilities in the vTU. This analysis has shown that the following video formats should be primarily considered:

• ITU-T H.264 (aka AVC)
• Google's VP8

and the following ones would also be highly desirable, especially in the future:

• ITU-T H.265 (aka HEVC)
• Google's VP9.

Once the video formats of interest were defined, the whole panorama of the available codecs was considered and evaluated, in order to identify tools which could be successfully employed in the unit and those which could possibly be used as a development basis. A synthesis of the available panorama is shown in the following table:

The analysis evidenced that the choice of avconv as the starting development framework is the most appropriate in terms of already-available codecs.

From the point of view of performance, however, avconv could be unable to fulfil the needs of the vTU, as the encoders/decoders provided therein do not make any use of HW acceleration facilities. Performance, in terms of encoding/decoding speed, is of crucial importance in the vTU because, as in all online applications, a fixed video frame rate must be guaranteed during the whole video session.

For this reason, a performance analysis of all the available codecs has been carried out. Several of them, in fact, are able to exploit hardware acceleration facilities, like GPUs or MMX/SSEx


instructions, whenever they are available. The tests have been carried out considering a typical scenario for the underlying hardware infrastructure: a Xeon server with 8 cores (2 x 4-core Xeon E5-2620 v3), equipped with a GPU facility (1 NVIDIA GeForce GTX 980). Several video test sequences have been considered, at the most common resolutions. The obtained results, in terms of achieved encoded/decoded frames per second, are summarized in the following tables, for PAL (576x720 pixel), 720p (1280x720), HD (1920x1080) and 4k (3840x2160) resolutions:

Based on the obtained results, the following observations can be drawn:

• As expected, encoding is much more time-consuming than decoding. On average, decoding is approximately 20 times faster than encoding, for the same format. The consequence is that the encoders represent the performance bottleneck in a vTU. As far as decoding is concerned, the tested tools have performed faster than 30 fps in all situations, at least for the H.264 and VP8 standards, which are those currently used;

• Hardware-accelerated tools not only provide much better performance than CPU-based ones, as visible in the tables for all resolutions, but are necessary in some cases, e.g. for 4k resolutions, where standard algorithms are not able to reach 30 fps encoding speed and therefore could not support a real-time transcoding session;

• As highlighted in the first table, different hardware accelerators can be successfully exploited to speed up the encoding process, not only GPUs. In particular, x264


performs significantly better using assembly-level optimizations which exploit SIMD instructions (MMX/SSE) than delegating the computation to GPU cores.

This is due to the fact that encoding/decoding algorithms cannot be massively parallelized, for two main reasons: a) there are strong sequential correlations and many spatial/temporal dependencies within the computation, and b) the extent of parallelism is limited in all situations where data parallelism could be applied (e.g. computing the DCT/IDCT for a macroblock). Nevertheless, the huge computing power of modern GPUs makes it reasonable to focus the research effort on the development of GPU-accelerated encoding algorithms able to efficiently exploit the potential of all available cores. Therefore, as shown in the table above, the first goal on which we focus is the development of a GPU-accelerated encoder for the VP8 standard video format.

2.4.3. Interfaces

As described in Section 2.4.2.1, the Video Transcoding Unit (vTU) is a VNF that, during its normal operation, receives an input video stream, transcodes it and generates an output video stream in the new format. For each transcoding job, the vTU also needs to receive a proper job description, in which all the necessary information is provided, such as information on the video format of the input stream and on the desired video format of the output stream, and the identification and definition of the input/output data channels (e.g. IP addresses and ports, in case of network streams, or file IDs within a storage facility, for file-generated streams).

For these reasons, the vTU needs, at its inner level, to communicate through three interfaces, as Figure 19 shows:

• Input port, receiving the video stream to transcode;
• Output port, producing the transcoded video stream;
• Control port, receiving the job descriptor and, implicitly, the command to start the job.

Figure 19. vTU low-level interfaces

Through the Control interface, the vTU receives the job descriptor, which contains all the necessary information to start the requested transcoding task. The start command is implicit in the reception of the job description message: when such a message is received on the Control port, the vTU starts listening at the Input port and begins the transcoding task, according to the received description, as soon as the first stream packets are received.

The format of the job description message is XML based. The general structure of the message is shown in Figure 20. This format allows the definition of all the necessary parameters, such as


the desired input and output video formats and the I/O stream channels (files, in this case, but they could as well identify network channels sending/receiving RTP packets).

Figure 20. XML structure of the vTU job description message.
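To make the use of this message format concrete, the short Python sketch below assembles a job description following the structure of Figure 20 and pushes it to the vTU Control port, whose reception implicitly starts the job. The Control-port address, the TCP transport and the reduced element set are illustrative assumptions; only the overall XML layout follows the figure.

import socket
import xml.etree.ElementTree as ET

def build_job(input_file, output_file, vcodec):
    # build a <vTU><in>...</in><out>...</out></vTU> descriptor for a local file job
    vtu = ET.Element("vTU")
    in_el = ET.SubElement(vtu, "in")
    ET.SubElement(ET.SubElement(in_el, "local"), "stream").text = input_file
    out_el = ET.SubElement(vtu, "out")
    local_out = ET.SubElement(out_el, "local")
    ET.SubElement(local_out, "overwrite").text = "y"
    ET.SubElement(local_out, "stream").text = output_file
    ET.SubElement(ET.SubElement(out_el, "codec"), "vcodec").text = vcodec
    return ET.tostring(vtu)

def send_job(descriptor, host="127.0.0.1", port=5000):
    # receiving the descriptor on the Control port implicitly starts the transcoding job
    with socket.create_connection((host, port)) as s:
        s.sendall(descriptor)

if __name__ == "__main__":
    send_job(build_job("test.y4m", "out_test.h264", "h264"))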

2.4.4. Technologies

In the virtualization context, the problem of virtualizing a GPU is now well known, and can be stated as follows: a guest Virtual Machine (VM), running on a hardware platform provided with GPU-based accelerators, must be able to concurrently and independently access the GPUs, without incurring security issues [Walters], [Maurice]. Many techniques to achieve GPU virtualization have been presented. However, all the proposed methods can be divided into two main categories, which are usually referred to as API remoting [Walters] (also known as split driver model or driver paravirtualization) and PCI pass-through [Walters] (also known as direct device assignment [Maurice]), respectively. In the vTU, the pass-through approach has been adopted. For the sake of clarity, a brief review of this technology is given in the next paragraph.

Pass-through techniques are based on the pass-through mode made available by the PCI-Express channel [Walters], [Maurice]. To perform PCI pass-through, an Input/Output Memory Management Unit (IOMMU) is used. The IOMMU acts like a traditional Memory Management Unit, i.e. it maps the I/O address space into the CPU virtual memory space, so enabling the access of the CPU to peripheral devices through Direct Memory Access channels. The IOMMU is a hardware device which provides, besides I/O address translations, also device isolation functionalities, thus guaranteeing secure access to the external devices [Walters]. Currently, two IOMMU implementations exist, one by Intel (VT-d) and one by AMD (AMD-Vi). To adopt the pass-through approach, this technology must also be supported by the adopted hypervisor.

The listing below reproduces the content of Figure 20, i.e. the XML structure of a vTU job description (here, a local file-to-file transcoding job towards H.264):

<vTU>
  <in>
    <local>
      <stream>test.y4m</stream>
    </local>
    <rstp>
      <ip/><port/><stream/><timeout/>
    </rstp>
    <codec>
      <vcodec/><acodec/>
    </codec>
  </in>
  <out>
    <local>
      <overwrite>y</overwrite>
      <stream>out_test.h264</stream>
    </local>
    <rstp>
      <ip/><port/><stream/><timeout/>
    </rstp>
    <codec>
      <vcodec>h264</vcodec><acodec/>
    </codec>
  </out>
</vTU>


Nonetheless, XenServer, open-source Xen, VMware ESXi, KVM and also Linux containers can support pass-through, thus allowing VMs to access external devices such as accelerators in a secure way [Walters]. The performance that can be achieved by the pass-through approach is usually higher than the one offered by API remoting [Walters], [Maurice]. Also, the pass-through method gives immediate access to the latest GPU drivers and development tools [Walters]. A comparison of the performance achievable using different hypervisors (including Linux containers) is given in [Walters], where it is shown that pass-through virtualization of GPUs can be achieved at low overhead, with the performance of KVM and of Linux containers very close to the one achievable without virtualization. One major drawback of pass-through is that it can only assign the entire physical GPU accelerator to one single VM. Thus, the only way to share the GPU is to assign it to the different VMs one after the other, in a sort of "time sharing" approach [Walters]. This limitation can be overcome by a technique also known as Direct Device Assignment with SR-IOV (Single Root I/O Virtualization) [Walters]. A single SR-IOV-capable device can expose itself as multiple, independent devices, thus allowing concurrent hardware multiplexing of the physical resources. This way, the hypervisor can assign an isolated portion of the physical device to a VM; thus, the physical GPU resources can be concurrently shared among different tenants. However, to the best of the authors' knowledge, the only GPUs enabled for this functionality belong to the recently launched NVIDIA GRID family [Maurice], [Walters]; also, the only hypervisors which can currently support this type of hardware virtualization are VMware vSphere and Citrix XenServer 6.2. However, since KVM can now also support SR-IOV, there is a path towards the use of GPU hardware virtualization also with this hypervisor [Walters].
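As a practical complement to the discussion above, the following hedged Python sketch shows the kind of host-side pre-check a developer can run before attempting GPU pass-through: it verifies that an IOMMU (Intel VT-d or AMD-Vi) is active and lists the devices sharing the GPU's IOMMU group, which must all be assigned together. The paths are standard Linux sysfs locations; the PCI address of the GPU is an illustrative placeholder.

import os

def iommu_enabled():
    # when the IOMMU is active, the kernel populates /sys/kernel/iommu_groups
    groups = "/sys/kernel/iommu_groups"
    return os.path.isdir(groups) and len(os.listdir(groups)) > 0

def iommu_group_devices(pci_addr="0000:03:00.0"):
    # every device in the same IOMMU group must be passed through to the same VM
    group = os.path.realpath("/sys/bus/pci/devices/%s/iommu_group" % pci_addr)
    return os.listdir(os.path.join(group, "devices"))

if __name__ == "__main__":
    print("IOMMU enabled:", iommu_enabled())
    print("Devices in the GPU's IOMMU group:", iommu_group_devices())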

2.4.5. Dimensioning and Performance

In order to obtain a realistic assessment of the performance of the vTU, as if it were embedded as a VNF in the T-NOVA framework, it is necessary to perform the tests on a virtualized platform resembling the T-NOVA infrastructure as closely as possible.

The performance results presented in the table of Section 2.4.2.1 were obtained by the vTU running natively on physical computation resources. For a VNF like the vTU, however, the actual performance achievable in the T-NOVA environment could be quite different from that obtained running on the physical infrastructure. This is mainly due to the following reasons:

• CPU virtualization overheads (vCPUs switching over physical CPUs, at the hypervisor level);

• GPU virtualization strategies (e.g. multiple vGPUs associated to the same physical GPU);

• vCPU-vGPU communication overheads (switching overheads in managing time-sharing policies on the PCI-Express bus).

These reasons lead one to expect a possible performance loss when running in a virtualized environment, especially in the case of vTU tasks which exploit the GPU resources.

For this reason, in order to obtain a realistic evaluation of the encoding/decoding computation speeds in the actual T-NOVA environment, the performance tests presented in Section 2.4.2.1 have been carried out on a virtualized environment. In order to get a significant comparison, the VM running the vTU has been equipped with the same amount of CPU and GPU cores as in the native tests. The following table presents the obtained results, in terms of computation speed (frames/sec), compared to the speed obtained on physical resources, for the same task.


As the table shows, for almost all the considered tasks there is no significant performance loss with respect to the same task running on physical resources, even for the tasks running mainly on the GPU (like H.264 encoding using NVENC). This encouraging result is mainly due to the high efficiency of the adopted GPU virtualization strategy, GPU pass-through, which assigns a physical GPU exclusively to a virtual machine, thus allowing any overhead in the GPU-CPU communication to be bypassed. The cost of this efficiency, however, is paid in terms of the difficulty of sharing a physical GPU resource among multiple VMs.

2.4.6. Future Work

Two main steps are foreseen for the vTU. A first activity will focus on scaling mechanisms for this VNF. Also, the vTU will be combined with other VNFs developed within T-Nova in order to create new services with a wider scope.


2.5. Traffic Classifier

2.5.1. Introduction

The Traffic Classifier (TC) VNF comprises two Virtual Network Function Components (VNFCs), namely the Traffic Inspection engine and the Classification and Forwarding function. The two VNFCs are implemented in respective VMs. The proposed Traffic Classification solution is based upon a Deep Packet Inspection (DPI) approach, which is used to analyze a small number of initial packets from a flow in order to identify the flow type. After the flow identification step no further packets are inspected. The Traffic Classifier follows the Packet Based per Flow State (PBFS) approach in order to track the respective flows. This method uses a table to track each session, based on the 5-tuple (source address, destination address, source port, destination port and transport protocol) that is maintained for each flow.
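The following minimal Python sketch illustrates the PBFS idea described above: a flow table keyed by the 5-tuple, in which only the first few packets of a flow are handed to the inspection engine, after which the stored verdict is reused. The classify() stub stands in for a real DPI engine such as nDPI, and the threshold of five packets is an assumption for illustration.

MAX_INSPECTED = 5          # packets inspected per flow before the verdict is frozen
flow_table = {}            # 5-tuple -> {"pkts": int, "app": str or None}

def classify(payload):
    # placeholder for the DPI engine; returns an application label or None
    return "http" if payload.startswith(b"GET ") else None

def handle_packet(src_ip, dst_ip, src_port, dst_port, proto, payload):
    key = (src_ip, dst_ip, src_port, dst_port, proto)
    entry = flow_table.setdefault(key, {"pkts": 0, "app": None})
    entry["pkts"] += 1
    # only the initial packets of the flow are inspected; later packets reuse the result
    if entry["app"] is None and entry["pkts"] <= MAX_INSPECTED:
        entry["app"] = classify(payload)
    return entry["app"]    # the forwarding component applies its policy based on this label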

2.5.2. Architecture

Both VNFCs can run independently from one another, but in order for the VNF to have the expected behaviour and outcome, the two VNFCs are required to operate in parallel.

Figure 21. Virtual Traffic Classifier VNF internal VNFC topology

Furthermore, in order to achieve the parallel processing of the two VNFCs, the traffic must be mirrored towards them, so that the two VNFCs receive identical traffic. The two VNFCs are inter-connected internally with an internal virtual link, which transfers the information extracted by the Traffic Inspection VNFC and transmits it to the Traffic Forwarding VNFC in order to apply the pre-defined rules.

2.5.3. Functional Description

The Traffic Inspection VNFC is the most processing-intensive component of the VNF. It implements the filtering and packet-matching algorithms in order to support the enhanced traffic forwarding capability of the VNF. The component supports a flow table (exploiting hashing algorithms for fast indexing of flows) and an inspection engine for traffic classification.

The Traffic Forwarding VNFC component is responsible for routing and packet forwarding. It accepts incoming network traffic, consults the flow table for classification information for each incoming flow and then applies pre-defined policies (i.e. TOS/DSCP (Type of


Service/Differentiated Services Code Point) marking for prioritizing multimedia traffic) to the forwarded traffic. It is assumed that the traffic is forwarded using the default policy until it is identified and new policies are enforced. The expected response delay is considered to be negligible, as only a small number of packets is required to achieve the identification. In a scenario where the VNFCs are not deployed on the same compute node, traffic mirroring may introduce additional overhead.

2.5.4. Interfaces

The virtual Traffic Classifier VNF is based upon the T-NOVA network architecture but, from the advised set of network interfaces (management, datapath, monitoring and storage), it uses the management, datapath and monitoring ones. The storage interface is not particularly essential to the vTC, as the computational and packet processing tasks mostly utilize CPU and memory resources. The VNF requires intensive CPU tasks and a large number of in-memory I/Os for traffic analysis, manipulation and forwarding. The storage interface would add unnecessary overhead to an already intensive process, and it was decided to exclude it in favour of optimal performance.

2.5.5. Technologies

The vTC utilizes various technologies in order to offer a stable and high-performance VNF compliant with the high standards of legacy physical network functions. The implementation of the traffic inspection used for these experiments is based upon the open-source nDPI library [REFnDPI]. The packet capturing mechanism is implemented using various technologies in order to investigate the trade-off between performance and modularity. The various packet handling/forwarding technologies are:

• PF_RING: PF_RING is a set of library drivers and kernel modules which enable high-throughput packet capture and sampling. For the needs of the vTC the PF_RING kernel module library is used, which polls the packets through the Linux NAPI. The packets are copied from the kernel to the PF_RING buffer and then analyzed using the nDPI library.

• Docker: Docker is a platform using container virtualization technology to run applications. In order to investigate the pros and cons of the container technology, the vTC is also developed as an independent container application. The forwarding and the inspection of the traffic also use PF_RING and nDPI as technologies, but they are modified to fit and function in a container environment.

• DPDK: DPDK comprises a set of libraries that support efficient implementations of network functions through access to the system's network interface card (NIC). DPDK offers to network function developers a set of tools to build high-speed data plane applications. DPDK operates in polling mode for packet processing, instead of the default interrupt mode. The polling mode operation adopts the busy-wait technique, continuously checking for state changes in the network interface, and provides libraries for packet manipulation across different cores. A novel DPDK-enabled vTC has been implemented in this test case in order to optimize the packet handling and processing for the inspected and forwarded traffic, by bypassing the kernel space. The analysis and forwarding functions are performed entirely in user space, which enhances the vTC performance.

The various technologies used generate a great variety of test-case scenarios and exhibit a rich VNF test case. The PF_RING and Docker cases have the capability of keeping the NIC driver, and so the VNFC maintains connectivity with the attached OpenStack network. On the contrary, in the case of DPDK the NIC is unloaded of the Linux kernel


driver and loaded with the DPDK one. The DPDK driver causes the VNFC to lose network connectivity with the attached network; the compensation is the significantly higher performance, as shown in the next section.

2.5.6. Dimensioning and Performance

The results include a comparison of the traffic inspection and forwarding performance of the vTC using PF_RING, Docker and DPDK.

Figure 22. vTC performance comparison between DPDK, PF_RING and Docker

As can be seen from the evaluation results, among the various approaches used for the vTC the DPDK approach performs significantly better than the other two options. Especially when it is combined with SR-IOV connectivity, it can achieve nearly 8 Gbps of throughput. However, the DPDK version, as already mentioned, has an impact on connectivity with the OpenStack network, as the kernel stack is removed from the NIC. Although the PF_RING and Docker versions maintain connectivity with the network, their performance is clearly degraded compared to DPDK's.

The dimensioning of the vTC is, due to its architecture, driven by the infrastructure: the vTC performance depends on whether SR-IOV is available on the running infrastructure.

2.5.7. Deployment Details

The vTC was developed in order to be deployed and run in an OpenStack environment; the OS of the virtualized environment was Ubuntu 14.04 LTS. The selection of the OS version assures the maintenance and continuous development of the VNF. In order to conform to the T-NOVA framework, a Rundeck job-oriented service functionality was implemented.

The vTC lifecycle is performed via the Rundeck framework in order to facilitate the seamless functionality of the VNF. In Rundeck, we have created different jobs to describe the different lifecycle events. Each event has a description and is part of a workflow.

An example workflow:

If a step fails: stop at the failed step.


Strategy: Step-oriented.

We add a step of type "Command". The command differs according to the operation we want to implement. The operations we implemented are described below:

1. VM Configuration – Command: "~/rundeck_jobs/build.sh"
2. Start Service – Command: "~/rundeck_jobs/start.sh"
3. Stop Service – Command: "~/rundeck_jobs/stop.sh"

In terms of the data traffic required to test the vTC, several changes and modifications had to be made in order to fit the desired traffic-mirroring scenario under which it was tested. Detailed information about this subject is discussed in the sections below.

2.5.7.1. Traffic Mirroring – Normal Networking

In order to support direct traffic forwarding, meaning that the virtual network interface of one Virtual Network Function Component (VNFC) is directly connected to another VNFC's virtual network interface, a modification of Neutron's OVS needs to be applied. Each virtual network interface of a VNFC is reflected in one TAP virtual network kernel device, a virtual port on Neutron's OVS, and a virtual bridge connecting them. This way, packets travel from the VNFC to Neutron's OVS through the Linux kernel. The virtual kernel bridges of the two VNFCs need to be shut down and removed, and then an OVSDB rule needs to be applied at the Neutron OVS, applying an all-forwarding policy between the OVS ports of the corresponding VNFCs. The detailed OpenStack network topology is shown in Figure 23.

Figure 23. Example overview of the vTC OpenStack network topology.

First option: unbind the interfaces from the OpenStack networking and connect them directly via OVS:

* Remove from br-ex the qvo virtual interfaces
* Remove from the qbr Linux bridge the qvb and the tap virtual interfaces


* Add the tap interfaces on the OVS directly and add a flow forwarding the traffic to them.

This option has been tested, as shown in the results section for the normal network setup cases. A sketch of these steps is given below.
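A hedged sketch of the re-wiring steps listed above is shown here, driving the standard ovs-vsctl/ovs-ofctl and bridge tools from Python. All interface names, bridge names and OpenFlow port numbers are placeholders to be read from the actual compute node; this is a sketch of the procedure, not the production script.

import subprocess

def sh(cmd):
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

# placeholders: qvo-/qvb-/tap-/qbr- names of the first VNFC port (repeat for the second one)
sh("ovs-vsctl del-port br-ex qvo-aaaa")                     # 1. remove the qvo interface from the OVS
sh("brctl delif qbr-aaaa qvb-aaaa")                         # 2. detach qvb and tap from the qbr bridge
sh("brctl delif qbr-aaaa tap-aaaa")
sh("ip link set qbr-aaaa down && brctl delbr qbr-aaaa")
sh("ovs-vsctl add-port br-ex tap-aaaa")                     # 3. plug the tap device directly into the OVS

# finally, mirror all traffic between the two tap ports (OpenFlow port numbers assumed)
sh("ovs-ofctl add-flow br-ex in_port=10,actions=output:11")
sh("ovs-ofctl add-flow br-ex in_port=11,actions=output:10")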

2.5.7.2. Traffic Mirroring – SR-IOV

Single Root I/O Virtualization (SR-IOV) in networking is a very useful and powerful feature for virtualized network deployments. SR-IOV is a specification that allows a PCI device, for example a NIC or a graphics card, to share access to its resources among various PCI hardware functions:

the Physical Function (PF), i.e. the real physical device, from which a number of one or more Virtual Functions (VFs) are generated. Suppose we have one NIC and we want to share its resources among various Virtual Machines or, in NFV terms, the various VNFCs of a VNF. We can split the PF into numerous VFs and distribute each one to a different VM. The routing and forwarding of the packets is done through L2 routing, where the packets are forwarded to the VF with the matching MAC address. In order to perform our mirroring and send all traffic both ways, we need to change the MAC address both on the VM and on the VF and disable the spoof check.
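The VF preparation just described can be performed with standard iproute2 commands, as in the short sketch below; the PF name, VF index and MAC address are illustrative placeholders.

import subprocess

PF, VF_INDEX, MAC = "enp5s0f0", 0, "fa:16:3e:00:00:01"   # assumed PF/VF/MAC values

# give the VF the MAC address expected by the VM and disable the spoof check, so that
# mirrored traffic carrying a foreign source MAC is not dropped by the NIC
subprocess.run(["ip", "link", "set", "dev", PF, "vf", str(VF_INDEX),
                "mac", MAC, "spoofchk", "off"], check=True)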

2.5.8. Future Steps

Future steps include the implementation of an automated way to apply the direct connection of the VNFCs. This step will be included in a HEAT deployment.

• Benchmarking all options and comparing them.

Another option to be tested is to add a TCP/IP stack on top of DPDK and so maintain the connectivity of the VNFCs. These alternatives include:

• A kernel with a TCP/IP stack in user space, the DPDK rump kernel – https://github.com/rumpkernel/drv-netif-dpdk
• The DPDK FreeBSD TCP/IP stack porting – https://github.com/opendp/dpdk-odp

2.6. Virtual Home Gateway (VIO)

2.6.1. Introduction

Another VNF that T-NOVA aims to produce is currently known in the research and industry world under various names, notably Virtual Home Gateway (VHG), Virtual Residential Gateway, Virtual Set-Top Box or Virtual Customer Premises Equipment.

We will see how the initial need has been expanded to cover some aspects of Content Delivery Network virtualization as well.

The following sections aim to provide a brief description of the proposed virtual function along with the requirements, the architecture design, the functional description and the technology.

In T-NOVA, we will focus on the bottleneck points usually found in resource-constrained physical gateways, like media delivery, streaming and caching, media adaptation and context-


awareness. In fact, some previous research proposals like [Nafaa2008] or [Chellouche2012] include the Home Gateways to assist the content distribution. By using a Peer-to-Peer approach, the idea is to offload the main networks and provide an "Assisted Content Delivery" by using a mix of server delivery and peer delivery.

When virtualizing the Home Gateway, this approach can lead, to some extent, to the creation of a Virtual CDN, or vCDN, as a VNF.

Particular attention will be given to real-world deployment issues, like coexistence with legacy hardware and infrastructure, compatibility with existing user premises equipment and security aspects.

2.6.2. Architecture

2.6.2.1. High level

Next, we present how the box and the server will connect to perform network operations. Figure 24 shows the high-level architecture as presented by ETSI in [NFV001].

Figure 24. ETSI home virtualization functionality

We aim at supporting a subset of the vHG NFs as well as some vCDN content caching and orchestration related NFs.

2.6.2.2. Low Level

The vHG + vCDN VNFs are composed of:

• vHG: 1 VM per user that acts as a transparent proxy and can redirect user requests to caches deployed in an NFVI-PoP;

• vCDN/Streamer VNF: an arbitrary number of VMs (at least 3) that store and deliver content to the end user;

• vCDN/Transcoding VNF: an arbitrary number of VMs that ingest content from the content provider and provision a transcoded version to a Streamer VNF;

• vCDN/Caching and Transcoding Orchestrator VNF: a VM that deploys redirect rules to the vHGs and triggers transcoding jobs.



To illustrate the low-level aspects of the VNFs that will be deployed, we will take the specific example of a VNF which caches, transcodes and streams the videos most requested by gateway users.

Figure 25. Low-level architecture diagram for the transcode/stream VNF

Figure 25 illustrates a modular gateway which acts as an HTTP proxy, notifying the content frontend when a video is consumed by the end user. Having this information allows the content frontend to trigger the download from the content provider's network to the VNF. Once the video has entered the VNF, it is transcoded and moved to the storage shared by the streamers.

Once the video resource is available to the end user, the gateway redirects the user's requests to the streamer, ensuring the best QoE possible.

2.6.3. Sequence diagrams

The sequence diagram presented in Figure 26 is associated with the example presented in Section 2.6.2.2.


Figure 26. Sequence diagram for the transcode/stream VNF example.

2.6.4. Technology

2.6.4.1. Netty: a Java Non-Blocking Network Framework

Netty is an asynchronous event-driven network application framework [Netty] for the rapid development of maintainable, high-performance protocol servers and clients.

One of the most striking features of Netty is that it can access resources in a non-blocking way, meaning that data is available as soon as it gets into the program. This avoids wasting system resources while waiting for the content to become available; instead, a callback is triggered whenever data is available. This also saves system resources by having only one thread for resource monitoring.

Netty is one of the building blocks to be used to implement the OSGi Proxy bundle composing the vHG.

2.6.4.2. RESTful architecture

End-user applications, gateways and the front-end need to interact through secured connections over the Internet.

A Java RESTful architecture can be implemented for the following reasons:

• The architecture is stateless, which means that the servers that expose their resources do not need to store any session for the client. This greatly eases scaling up, since no real-time session replication needs to be performed; a new server can therefore be deployed for load-balancing purposes.

• The architecture is standard and well supported by the industry, allowing us to leverage tools for service discovery and reconfiguration.

• Authentication methods are well documented and widespread among web browsers and servers.

Regarding the technical details, we will consider the standards of the Java SDK, by using JAX-RS and its reference implementation, Jersey. This framework can be integrated in any servlet container, JEE container or lightweight NIO HTTP server like Grizzly, which is used on the vHG.


2.6.4.3. Transcoding workers

One of the key features of cloud computing is its ability to produce on-demand compute power at a small cost. To take advantage of this feature, we plan to implement the most computing-intensive tasks as a network of workers using a Python framework called Celery. Celery is an asynchronous task queue/job queue based on distributed message passing.

Every Celery worker is a stand-alone application able to perform one or more tasks in a parallelized manner. To achieve this goal, a general transcoding workflow has been designed to be applied to a remote video file.

Having a network of workers allows us to scale up or scale down the overall compute power simply by turning a virtual machine up or down. Once a worker is up, it connects to the message broker and picks up the first task available on the queue. Frequent feedback messages are pushed to the message broker, allowing us to present the results on the gateway as soon as they are available on the storage.

If the compute capacity is above the required level, active workers are decommissioned, leaving the pool as their host virtual machine turns off.

Note that the workers only carry out software transcoding, leaving room for optimization through the use of hardware. The virtual Transcoding Unit (vTU) is an excellent drop-in replacement for the transcoding VNF. However, as hardware transcoding may not be available everywhere, we keep the slower software transcoding as a fall-back option.
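As a sketch of how such a worker could look, the Python fragment below defines a Celery task that takes the broker address from the CELERY_BROKER_URL environment variable used in Code listing 1, reports progress back through the broker and runs a software transcoding step with avconv. The task name, the avconv options and the paths are assumptions, not the exact production workflow.

import os
import subprocess
from celery import Celery

app = Celery("vcdn", broker=os.environ.get("CELERY_BROKER_URL", "amqp://guest@localhost//"))

@app.task(bind=True)
def transcode(self, source_path, target_path, vcodec="libx264"):
    # frequent feedback so the gateway can show results as soon as they are available
    self.update_state(state="STARTED", meta={"source": source_path})
    subprocess.run(["avconv", "-y", "-i", source_path, "-c:v", vcodec, target_path],
                   check=True)
    # in the real workflow the output would then be pushed to the shared Swift storage
    return {"output": target_path}

Each VM in the worker pool would then simply run a Celery worker process (for example, celery -A <module> worker) pointing at the same broker, so that scaling the pool up or down is only a matter of starting or stopping VMs.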

2.6.4.4. Scalable Storage

We need caches able to store the massive amount of data needed by a CDN. These caches can be spread among several data centres and must be tolerant to failure. They also need to scale, and must support adding or removing storage nodes as defined by the scaling policy.

To implement that, we decided to deploy [Swiftstack], which proposes creating a cluster of storage nodes to support scalable object storage with high availability, partition tolerance and eventual consistency.

Storage nodes are accessed by external users through a Swift proxy that handles the read and write operations. Swift has abstractions whereby nodes are grouped inside zones and regions, as illustrated in Figure 27. We detail the mapping between the Swift abstractions and T-NOVA in Table 2-B.

Swift | T-NOVA | Meaning
Region | NFVI-PoP | Parts of the cluster that are physically separated
Zone | Compute Node | Zones to be configured to isolate failures
Node | VNFC | Server that runs one or more Swift processes

Table 2-B. Mapping between Swift and T-NOVA abstractions


Figure 27. SwiftStack Region/Zone/Node.

We use the Swift abstractions to provide a reliable storage solution. For example, our vCDN spans multiple data centres to provide good connectivity; each PoP is associated with a region. With the same approach, we can have several compute nodes hosting our VNFCs. For reliability reasons, we do not want all our nodes hosted on the same compute node, so that if a compute node goes down, part of the service will still be available. Finally, each VNFC hosts a Swift node.

Even if Swift is an object storage, it allows users to access and push data over a standard HTTP API. This means that the Streamer VNF feature can be implemented using Swift as well.
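A minimal sketch of this HTTP access is shown below, using the TempAuth v1.0 endpoint and credentials that also appear in Code listing 1 (ST_AUTH, ST_USER, ST_KEY); the proxy address, container and object names are placeholders, and error handling is reduced to the bare minimum.

import requests

AUTH_URL = "http://127.0.0.1:8080/auth/v1.0"    # Swift proxy TempAuth endpoint (assumed address)
USER, KEY = "admin:admin", "admin"

def get_token():
    r = requests.get(AUTH_URL, headers={"X-Auth-User": USER, "X-Auth-Key": KEY})
    r.raise_for_status()
    return r.headers["X-Storage-Url"], r.headers["X-Auth-Token"]

def put_object(container, name, data):
    storage_url, token = get_token()
    requests.put("%s/%s/%s" % (storage_url, container, name),
                 data=data, headers={"X-Auth-Token": token}).raise_for_status()

def get_object(container, name):
    storage_url, token = get_token()
    r = requests.get("%s/%s/%s" % (storage_url, container, name),
                     headers={"X-Auth-Token": token}, stream=True)
    r.raise_for_status()
    return r.iter_content(chunk_size=65536)    # chunked reads, suitable for streaming video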

2.6.4.5. Using Docker to provide safe, reliable and powerful application deployment

We decided to use [Docker] to support the implementation. Docker is an OS virtualization technology that runs segregated applications and libraries on a common Linux kernel.

Docker can be run on major Linux distributions like Debian or Fedora, but it can also run on smaller, custom distributions that provide an execution environment for containers. CoreOS produces, maintains and utilizes open-source software for Linux containers and distributed systems. Its projects are designed to be composable and to complement each other in order to run container-ready infrastructure.2

The applications we build are based on vendor technologies (for example, the Java Docker image maintained by Oracle) that are kept updated on a regular basis. We implemented continuous deployment, meaning that whenever an upstream dependency gets updated, we re-package our software with the new image and run tests to discover potential regressions.

Our approach is safer than the traditional installation of a package on an OS, since every container is walled off from the others and the OS has the sole responsibility of maintaining the container execution environment. Vendors usually provide a shorter delay to update their Docker images than the Linux distributions.

2 https://coreos.com/docs/


Our approach is reliable in the sense that if a T-NOVA virtual machine goes down (except the VNF Controller, which is not highly available for the moment), we are able to redeploy the containers of the cluster on another available machine.

We also do not have to upload new VNFD + VNF images every time we have a security update. All we need to do is push the new release to our Docker registry and the new image will be picked up automatically when configuring the VMs.

2.6.4.6. Orchestration and scaling

In order to ease the deployment of our VNFs, we use a configuration management tool named SaltStack [Salt]. The necessity of using such a tool is developed in the next paragraphs; we then explain why we chose Salt and finally conclude with an overview of the mechanisms we implemented.

Figure 28. vCDN implementation for software configuration management.

(a) Why a configuration management tool?

As mentioned in 2.1.1, only one VNFC is able to receive configuration commands from the Orchestrator to support the whole VNF lifecycle. This means that the information received on the configuration interface must be propagated to the other VNFCs.

When the VNF starts, some configuration needs to be carried out to initialize the software components. For example, the storage nodes must be initialized with the DHT from the proxy, some block storage must be allocated to the node, and so on. These non-trivial configuration tasks must be carried out after the VM has booted, but also when scaling out or in. These tasks may fail, but the consistency of the whole system should be kept intact.

For those reasons, we decided to use an orchestration tool that creates an abstraction level over the system to manage the software deployment, system configuration, middleware installation and service configuration with ease.


(b) Why Salt?

The SaltStack platform, or Salt, is a Python-based open-source configuration management software and remote execution engine. Supporting the "infrastructure-as-code" approach to deployment and cloud management, it competes primarily with Puppet, Chef and Ansible.3

SaltStack was preferred over the other alternatives due to its scalability, ease of deployment and good support for Docker and Python source code. We do not claim that what we designed would not have been possible with the other alternatives, but Salt was the solution we felt most comfortable with in the end.

(c) Implementation of our configuration management

We implemented the configuration management in two phases, as illustrated in Figure 28.

First, during the bootstrap phase, each virtual machine is injected via cloud-init with the following data and programs:

• the IP address of the Salt master;
• certificates to ensure a secure connection with the Salt master;
• its role in the system;
• the salt-master or salt-minion service, installed and launched;
• the "recipes", or desired infrastructure code, deployed on the Salt master.

Once the bootstrapping phase is over, we have a system comprised of VMs securely connected on the data network, ready to take orders from the master. Note that the OS could be pre-bundled with software in order to speed up the next phase, but this is not mandatory.

The second phase is launched when the start lifecycle event from TeNOR is received through the middleware API. This processes the infrastructure code and verifies the compliance of each minion with the desired infrastructure.

As we can see in Code listing 1, the YAML DSL used with Salt describes how the infrastructure should be configured. Salt allows us to "synchronise" the infrastructure code in YAML with the real infrastructure simply by calling the Salt API. This synchronisation process installs, copies, configures and downloads the required missing software components and can even configure lower-level aspects.

Providing the possibility for the system to scale in is straightforward when the infrastructure is described as code. Installing, configuring and ramping up a new VM is just a matter of "synching" the infrastructure state with the new resources available.
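A minimal sketch of this synchronisation step is shown below, using Salt's Python client API to apply the desired state to every minion when the start (or a scaling) event is received; the target pattern and timeout are assumptions, and in the actual VNF this call is wrapped by the middleware that receives the lifecycle events.

import salt.client

def sync_infrastructure(target="*"):
    client = salt.client.LocalClient()
    # state.apply (the highstate when no state name is given) installs, copies and
    # configures whatever is missing on each minion to match the infrastructure code
    return client.cmd(target, "state.apply", timeout=600)

if __name__ == "__main__":
    for minion, result in sync_infrastructure().items():
        print(minion, "ok" if result else "no response")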

Our implementation uses the Ubuntu OS for un-dockerized software (the storage) and the CoreOS distribution to host applications that are already dockerized (the rest of the software).

3 https://en.wikipedia.org/wiki/Salt_%28software%29


2.6.5. Dimensioning and Performance

The results reported here are not derived from the Pilot testbed, which is not yet available to VNF developers for performance purposes. We plan to perform the same tests on the real pilot and update our results in the next deliverable. We nevertheless thought it would be valuable to give a first view of some results through our related work [Herbaut2015].

We described the role of the vHG + vCDN deployed on the server-side infrastructure composed of various VNFs: Streamer VNFs deployed in regional PoPs, and the Caching and Transcoding Orchestrator and Transcoding VNFs deployed in regular data centres.

As our proposal aims at showing how the vHG can play a role in a VNF architecture, our first focus for the evaluation is devoted to assessing vHG-side performance, i.e. CPU and memory

# here we make sure that the latest worker docker image is present on the system
nherbaut/worker:
  # this command is equivalent to docker pull
  docker.pulled:
    # always use the latest version from our continuous build system
    - tag: latest
    - require:
        # make sure that docker is installed before pulling the image
        - sls: docker
        # make sure that docker daemon is running
        - service.running: docker

# this set of jinja2 template directives is here to provide the broker's IP address
{%- set minealias = salt['pillar.get']('hostsfile:alias', 'network.ip_addrs') %}
{%- set addrs = salt['mine.get']('roles:broker', minealias, "grain") %}
{%- set broker_ip = addrs.items()[0][1][0] %}

# this set of instructions is there to provide the swift proxy ip address
{%- set addrs = salt['mine.get']('roles:swift_proxy', minealias, "grain") %}
{%- set swift_proxy_ip = addrs.items()[0][1][0] %}

# now we are ready to cook our docker image
core-worker-container:
  docker.installed:
    - name: core-worker-container
    - image: nherbaut/worker:latest
    - environment:
        - "CELERY_BROKER_URL": "amqp://guest@{{ broker_ip }}"
        - "ST_AUTH": "http://{{ swift_proxy_ip }}:8080/auth/v1.0"
        - "ST_USER": "admin:admin"
        - "ST_KEY": "admin"
    - watch:
        # trigger this event whenever the image is done being pulled
        - docker: nherbaut/worker

Code listing 1. An example of infrastructure code


footprint. Next, to evaluate the benefits of the overall vHG + vCDN system in the above-cited use case, we made extensive simulations with hypotheses conforming to the network of a French Service Provider.

2.6.5.1. vHG

Table 2-C. vHG performances

We deployed the vHG as an OSGi bundle in the Apache Karaf OSGi runtime on a VM running Debian Jessie. It uses 2 GB of RAM and 1 vCore. The gateway connects the test operator and a PC-based file server.

JMeter was used to capture the network metrics of our solution. While generating HTTP requests, it reports on specific performance metrics. Each experiment consisted of 10 agents continuously downloading target resources from the HTTP file server, 1000 times.

We considered two different validation use cases: Web Traffic and File Transfer. First, the agents had to download a 192 MB video file, then a single HTML page which linked to 171 static resources composed of Javascript files, CSS and images of average size 16 KB.

We evaluated our solution with two different routing-rule settings deployed in the vHG. In the first one, we did not deploy any rule, assessing only the overhead linked to the application network framework. In the second one, however, we deployed 10,000 rules, causing the vHG to process both requests and responses with respect to those rules, thereby assessing its ability to perform pattern matching in a timely manner. Both cases reflect the no-operation scenario of our use case.

We decided to assess the overhead caused by the vHG by comparing it to the well-known Squid 3 HTTP proxy with its cache deactivated. To get a better grasp of the amount of resources consumed by the vHG, we also report CPU and memory consumption for each setting.

From Table 2-C, we can see that the performance in terms of throughput is globally the same across all settings, with the maximum deviation from the baseline Setting 1 (simple IP forwarding) being less than 0.3%. In Settings 2-4, we also see that a significant share of CPU power is dedicated to processing the requests, for both Squid 3 and the vHG, with the vHG consuming up to 25% extra CPU time in Setting 4 with respect to Setting 1, but with no drop in throughput. This can be explained by the fact that the vHG runs on top of a JVM, while Squid is a native application with limited overhead compared to a Java application.

This experiment does not intend to mimic real-life Internet usage but to stress the system up to its limits. We conclude that, even with the extra CPU involved, our solution does not significantly penalize the end user, validating the possible deployment of the vHG.

2.6.5.2. vCDN

We simulated the network deployment presented in Figure 28, with the hypotheses presented in Table 2-D.


Table 2-D. Hypotheses used for simulation

The objective is first to assess the feasibility of the vHG + VNF approach and second to highlight its benefits. The use case considers the deployment of the vHG and vCDN in the SP's regional PoPs, for video delivery. Let us thus investigate the potential of such a solution.

Streamed video needs a good bitrate to avoid re-buffering and improve QoE. That is the reason why we defined an SLA violation as the failure to deliver the proper average bitrate on time to a client. Our goal in this simulation is to reduce the SLA violations over time.

The regional PoP being located near the end user, its latency is reduced with respect to the CP network (backed by a CDN), hence a possibly higher throughput for HTTP traffic like video streaming. For our simulation, we included two types of patterns. The first one is composed of video requests emitted regularly by the clients, which generate the cruising-phase traffic. The second consists of peak traffic at 25 s, which is characterized by a greater request arrival rate as well as a more concentrated distribution of videos. Consumption peaks usually occur when a viral video is posted, most of the time on the landing page of the Content Provider. Being able to cache this kind of video and to serve it as close as possible to the users is a key indicator of success for the VNF.


Figure 29. Evaluation of the benefits of the vHG + vCDN system.

Figure 29 depicts two scenarios. In (A), we only rely on the CP network to deliver the media, while in (B) a single regional PoP is added to the solution. Note that the global bandwidth remains the same, as we took some bandwidth from the CP to allocate it to the regional PoP. This is also essential for comparing, at the end, the solutions in terms of bandwidth cost for the CP.

We can see that the cruising phase does not generate any SLA violation and the CP alone is able to handle the traffic load. However, when the peak occurs, SLA violations increase dramatically, causing a lot of requests to be dropped. In scenario B, however, the presence of the regional PoP as an alternative, low-latency data source mitigates the peak effect and reduces the SLA violations by up to 70% over the overall simulation period.

Having a regional PoP with lower network latency to serve highly redundant requests benefits both the end user and the content provider. As the former sees an increase in QoE, the latter reduces its costs by avoiding the over-provisioning of network capabilities. It is important, however, to reserve the regional PoP bandwidth to serve only highly popular videos, so as to maximize its benefits, while keeping the mean latency between clients and regional PoPs low by spreading the PoPs over the territory.

2.6.6. Future Work

The next step for the vHG + vCDN in WP5 is to deploy on a fully functional testbed, assess the performance of the overall solution and implement scaling based on metric analysis.

We may also consider a use case where hardware acceleration is used through the vTU in order to accelerate the availability of cached content on the vStreamer.


2.7. Proxy as a Service VNF (PXaaS)

2.7.1. Introduction

A Proxy server is amiddleware between clients and servers. It handles requests, such asconnectingtoawebsiteorserviceandfetchingafile,sentfromaclienttoaserver. Inthemost cases a proxy acts as a web proxy allowing or restricting access to content on theWorldWideWeb. In addition, it allows clients to surf theWeb anonymously by changingtheirIPaddresstotheProxy’sIPaddress.

A proxy server can protect a network by filtering traffic. For instance, a company's policies may require that its employees are restricted from accessing some specific websites, such as Facebook, during working hours but are allowed to access them during break times, or are restricted from accessing adult-content sites at all times. Furthermore, a proxy server can improve response times by caching frequently used web content, and can introduce bandwidth limitations for a group of users or for individuals. Traditionally, proxy software resides inside users' LANs (behind NAT or a Gateway). It is deployed on a physical machine, and all local devices can connect to the Internet through the proxy by changing their browser's settings accordingly. However, a device can bypass the proxy. A stronger alternative deployment is to configure the proxy to act as a transparent proxy server, so that all web requests are forced to go through it. In this scenario the gateway/router should be configured to forward all web requests to the proxy server.

The Proxy as a Service VNF (PXaaS VNF) aims to provide proxy services on demand to a Service Provider's subscribers (either home users, e.g. ADSL subscribers, or corporate users such as company subscribers). The idea behind the PXaaS VNF is to move the proxy from the LAN to the cloud in order to be used "as a service". Therefore, a subscriber (e.g. a LAN administrator) will be able to configure the proxy according to their needs from a web-based, user-friendly dashboard, so that it can be applied to the devices within the LAN.

2.7.2. Requirements

The table below provides the major requirements that the VNF will need to fulfill.

Table 2-E: PXaaS VNF requirements

Requirement ID | Requirement name | Description | Priority level
1 | Web caching | The PXaaS VNF should be able to cache web content. | High
2 | User anonymity | The PXaaS VNF should allow for hiding the user's IP address when accessing web pages. The proxy VNF's IP should be shown instead of the user's real IP. | High
3 | Bandwidth rate limitation per user | The PXaaS VNF should allow for setting bandwidth rate limitations on a group of users or individual users by creating ACLs based on their account. | High
4 | Bandwidth rate limitation per service | The PXaaS VNF should allow for setting bandwidth rate limitations on a group of services or individual services. For example, the PXaaS VNF should limit the bandwidth used for torrents. | Low
5 | Bandwidth throttling on huge downloads | The PXaaS VNF should allow for reducing the bandwidth rate when huge downloads are detected. It could be applied to all users, a group of users or individuals. | High
6 | Web access control | The PXaaS VNF should allow for blocking specific websites for the users. | High
7 | Web access control (time) | The PXaaS VNF should allow for blocking or allowing access to specific websites for the users based on the current time. | Medium
8 | User Control and Management | The user should be able to configure the PXaaS VNF using a dashboard. The dashboard should be responsive, in order to be accessible from multiple devices, and easy to use. | High
9 | Service availability | The Proxy VNF should be available as soon as the user sets the configuration parameters on the dashboard. Each time a user changes the configuration, the service should be available immediately. | High
10 | Service accessibility | The connection with the proxy should be transparent (transparent proxy). Users do not need to set the proxy's IP in their browser. The traffic should be redirected from the user's LAN to the proxy VNF. | Low
11 | Service – user authentication | Only subscribed PXaaS VNF users should be able to access the service. | High
12 | Monitoring | The proxy VNF should provide metrics to T-NOVA's monitoring agent. | High
13 | Service provisioning | The proxy VNF should expose an API to be used by T-NOVA's middleware for service provisioning. | High

2.7.3. Architecture

The PXaaS VNF consists of one VNFC. The VNFC implements both the proxy server software and the web server software. The figure below provides a high-level topology of the PXaaS VNF. The VNFC is located at the PoP, which sits between the user's LAN and the Operator's backbone. Once a user is subscribed to the PXaaS VNF, the traffic from the user's LAN is redirected to the PoP and then passes through the PXaaS VNF. The traffic might pass through some other VNFs according to service function chaining policies. Finally, the proxy handles the requests accordingly and forwards the traffic to the Internet. The user is able to configure the proxy through an easy-to-use, web-based dashboard, which is served by the web server. The web server communicates with the proxy server in order to set up the configuration parameters which have been defined by the user.

Figure 30. PXaaS high level architecture

2.7.4. Functional description

2.7.4.1. Squid Proxy server

Squid is a caching and web proxy server. Some of its major features include:

• Web caching;
• Anonymous Internet access;
• Bandwidth control. It introduces bandwidth rate limitations or throttling for a group of users or for individuals. For example, it allows "normal users" to share some amount of traffic, while allowing "admin users" to use a dedicated amount of traffic;
• Web access restrictions, e.g. allow a company's employees to access Facebook during lunch time only, and deny access to some specific websites.

Bandwidth limitation examples

a) Bandwidth restrictions based on IP

The example below creates an Access Control List (ACL) with the name "regular_users", to which a range of IP addresses is assigned. Requests coming from those IPs are restricted to 500 KB/s of bandwidth.

acl regular_users src 192.168.1.10-192.168.1.20/32   # acl list based on IPs
delay_pools 1
delay_class 1 1
delay_parameters 1 500000/500000   # 500 KB/s
delay_access 1 allow regular_users

The limitation of this configuration is that Squid should be located inside the LAN in order to understand the private IP address space.


b) Bandwidth restrictions based on user

The following scenario enforces the same bandwidth restrictions as the previous one, except that the ACL is based on user accounts. Squid supports various authentication mechanisms, such as LDAP, Radius and a MySQL database. We consider the MySQL database for authenticating with the PXaaS VNF.

acl regular_users proxy_auth george savvas   # acl list based on usernames
delay_pools 1
delay_class 1 1
delay_parameters 1 500000/500000   # 500 KB/s
delay_access 1 allow regular_users

The limitation of this configuration is that users must authenticate with the Proxy the first time they open their browser. In this case the proxy is not considered a transparent proxy. However, with this scenario, Squid can be deployed in the cloud and can handle devices behind NAT, as long as they authenticate with the proxy.

Figure 31. Proxy authentication

2.7.4.2. Apache Web Server

The Apache web server is used to serve the dashboard to the clients. The dashboard allows users to configure and manage the Squid proxy; therefore, Apache should have write permissions on Squid's configuration file. In addition, the LAN administrator is able to create user accounts, which are stored in the MySQL database. The LAN administrator is responsible for assigning the user accounts to each device in order to achieve the limitations he envisions using the PXaaS. A sketch of how a dashboard change can be applied to Squid is given below.
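
Purely as an illustration of this flow (the actual dashboard is implemented in PHP with the Yii framework, see Section 2.7.6), the Python sketch below shows how a change submitted from the dashboard could be turned into Squid directives and applied with "squid -k reconfigure". The configuration path, the group name and the rate value are assumptions.

import subprocess

SQUID_CONF = "/etc/squid/squid.conf"              # assumed location of the config file

def render_bandwidth_acl(group, users, rate_bytes_per_s):
    """Render a Squid delay-pool snippet limiting a group of authenticated users."""
    return "\n".join([
        f"acl {group} proxy_auth {' '.join(users)}",
        "delay_pools 1",
        "delay_class 1 1",
        f"delay_parameters 1 {rate_bytes_per_s}/{rate_bytes_per_s}",
        f"delay_access 1 allow {group}",
        "",
    ])

def apply_config(snippet):
    """Write the directives and ask Squid to reload its configuration."""
    # A real dashboard would rewrite its managed section of squid.conf rather
    # than blindly append, to avoid duplicated delay-pool definitions.
    with open(SQUID_CONF, "a") as conf:           # Apache needs write permission here
        conf.write(snippet)
    subprocess.run(["squid", "-k", "reconfigure"], check=True)

# 64000 bytes/s corresponds to roughly 512 Kbit/s for the "research" group
apply_config(render_bandwidth_acl("research", ["george", "savvas"], 64000))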

The figure below presents the first version of the dashboard (version 1); in particular, the home page of the dashboard is shown. The current version supports the following features:

• User management: User accounts can be created with a username and password. Those accounts are used to access the proxy services;
• Access control: Users must enter their credentials in their browsers in order to surf the web;
• Bandwidth limitations: Groups of users can be created with a shared amount of bandwidth. In this case bandwidth limitations can be introduced for a group of users;
• Website filtering: Groups of users can be created with restricted access to a list of websites. Pre-defined lists with URLs are provided;
• Web caching: Web caching can be enabled in order to cache web content and improve response time;
• User anonymity: Users can surf the web anonymously.


Figure 32. PXaaS Dashboard

2.7.4.3. MySQL Database server

The MySQL Database server maintains a list of user accounts that can be used for proxy authentication in the browser. In addition, it stores all the data needed by the dashboard.

2.7.4.4. SquidGuard

SquidGuard is used on top of Squid in order to block URLs for a group of users, based on pre-defined blacklists.

2.7.4.5. Monitoring Agent

The Monitoring Agent is responsible for collecting and sending monitoring metrics to the T-NOVA Monitoring component.

2.7.5. Interfaces

The figure below shows the VNFC in an OpenStack environment. It has three interfaces connected to three networks.


Figure 33. PXaaS in OpenStack

• eth0: This is the data interface. A floating IP is associated with this interface in order to send and receive data to/from the Public network.

• eth1: This is the monitoring interface, which will be used to send metrics periodically to the Monitoring component.

• eth2: This is the management interface, which will be used to communicate with the middleware API.

2.7.6. Technologies

The development environment used for the implementation and testing of the PXaaS is Vagrant with VirtualBox on an Ubuntu 14.04 Desktop machine. The VM itself runs the Ubuntu 14.04 Server OS.

As described in the Functional description section, Squid Proxy, SquidGuard, the Apache Web server and the MySQL Database server are used. Specifically, the exact versions are:

• Squid Proxy 3.5.5
• SquidGuard 1.5
• Apache 2.4.7
• MySQL 5.5.44-0ubuntu0.14.04.1


The Dashboard has been developed with the Yii framework (a PHP framework) on the server side, while CSS, HTML and jQuery have been used on the client side.

As regards the monitoring agent, two different components have been used:

1. Collectd. It periodically collects system performance statistics, such as CPU and memory utilization.

2. A Python script which collects PXaaS VNF-specific metrics, such as the number of HTTP requests received by the proxy and the cache-hit percentage. The script analyses the output of squidclient, a tool which provides Squid's statistics, and periodically sends the results to the T-NOVA Monitoring component, as sketched below.
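
The sketch below is illustrative only: the exact labels in the "squidclient mgr:info" output and the URL of the Monitoring component are deployment-specific and are used here as placeholders.

import re
import subprocess
import requests

MONITORING_URL = "http://monitoring.example/api/measurements"   # placeholder endpoint

def squid_stats():
    """Run squidclient and extract the request count and the 5-minute hit ratio."""
    out = subprocess.run(["squidclient", "mgr:info"],
                         capture_output=True, text=True, check=True).stdout
    req = re.search(r"Number of HTTP requests received:\s+(\d+)", out)
    hits = re.search(r"Hits as % of all requests:\s+5min:\s+([\d.]+)%", out)
    return {
        "http_requests": int(req.group(1)) if req else None,
        "cache_hit_pct_5min": float(hits.group(1)) if hits else None,
    }

def push_metrics():
    """Send the collected metrics to the (placeholder) monitoring endpoint."""
    requests.post(MONITORING_URL, json=squid_stats(), timeout=5)

if __name__ == "__main__":
    push_metrics()   # in practice this would run periodically, e.g. from cron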

Mozilla Firefox is used for accessing the Web through the proxy.

2.7.7. Dimensioning and Performance

Some preliminary tests were performed in order to verify whether the expected behaviour is achieved. We assume that access to the PXaaS Dashboard is given to a user who acts as the administrator of his LAN in a home scenario. The "administrator" therefore sets up the Proxy service for his LAN via the dashboard and creates user accounts in order to allow other users/devices to access the Web via the Proxy. Specifically, the current version of the Dashboard was tested against the following test scenarios:

a) Testing web access and bandwidth control. This scenario aims to test whether a newly created user is able to access the Web using their credentials and whether the bandwidth limitation is enforced.

Execution: The administrator creates a new user by providing a username and password. Then he adds the newly created user to the "research" group (the group was previously created by the administrator), which is restricted to 512 Kbps of bandwidth. The new user authenticates with the proxy from the browser and downloads a big file.

b) Testing website filtering. This scenario tests whether a user is restricted from accessing some websites.

Execution: The administrator adds the user to the group "social_networks" (the group was previously created by the administrator and a pre-defined list of social networking websites was assigned to it), in which all social networking websites are denied.

c) Testing web caching. This scenario tests whether web caching works properly.

Execution: Two different users access the same websites from different computers. For example, "user1" accesses www.primetel.com.cy and then "user2" accesses the same website.

d) Testing user anonymity. This scenario checks whether a user is able to access the Web anonymously. In order to test this scenario and get meaningful results, we deployed the PXaaS VNF on a server with a public IP.

Execution: The administrator enables the user anonymity feature for a user. The http://ip.my-proxy.com/ website is used in order to check whether the user's real IP is publicly visible. This check can also be automated, as sketched below.
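
The Python sketch below fetches a header-echo page through the proxy and inspects the forwarding-related headers seen by the remote server; the proxy address, the credentials and the echo URL are placeholders used for illustration only.

import requests

PROXY = {"http": "http://george:secret@203.0.113.10:3128"}   # placeholder proxy and credentials
ECHO_URL = "http://httpbin.org/headers"                      # any header-echo service will do

def anonymity_leaks():
    """Return the forwarding-related headers that the remote server received."""
    seen = requests.get(ECHO_URL, proxies=PROXY, timeout=10).json()["headers"]
    return {k: v for k, v in seen.items()
            if k.lower() in ("x-forwarded-for", "via")}

leaks = anonymity_leaks()
print("proxy reveals client information:" if leaks else "user anonymity OK", leaks)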

2.7.7.1. Test results

The results obtained by executing the test scenarios are presented below.

a) Once a user is authenticated with the Proxy, he is able to access the Web. He then starts to download an ISO file. As we can see from the image below, the download speed is restricted to 61.8 KB/s, which corresponds to roughly 500 Kb/s (61.8 × 8 ≈ 494 Kb/s), as expected. If another user starts to download a big file as well, then both users will share the 512 Kb/s bandwidth.

Figure 34. Bandwidth limitation

b) A user tries to access www.facebook.com without success. The proxy denies access to the particular website.

Figure 35. Access denied to www.facebook.com

c) The figure below shows Squid's logs. In particular, it shows all the HTTP requests received by Squid from the clients and whether those requests result in cache hits. It can be observed that the particular requests were served from Squid's cache; "TCP_MEM_HIT" shows that a request was served from Squid's memory cache (i.e. from RAM).


Figure 36. Squid's logs

d) Figure 37 shows the results from http://ip.my-proxy.com/ when a user accesses the Web without having the user anonymity feature enabled. The most important fields are:

1. "HTTP_X_FORWARDED_FOR". It shows the user's public IP address (e.g. 217.27.32.7) along with the Proxy's IP (e.g. 217.27.59.141);
2. "HTTP_VIA". It shows the proxy's version (e.g. squid 3.5.5);
3. "HTTP_USER_AGENT". It shows the user's browser information (e.g. Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:41.0) Gecko/20100101 Firefox/41.0).

Figure 37. Results taken from http://ip.my-proxy.com/ without user anonymity


Figure 38 shows the results when a user accesses the Web anonymously. It can be observed that the user's real IP is hidden and the Proxy's IP is shown instead. In addition, the information about the proxy and about the user's browser is hidden.

Figure 38. Results taken from http://ip.my-proxy.com/ with user anonymity


3. CONCLUSIONS AND FUTURE WORK

3.1. Conclusions

In this document, general guidelines for the development of Virtual Network Functions in the T-Nova framework have been provided. Specific descriptions of the Virtual Network Functions under development in the project have been given, together with some preliminary test results. The six VNFs under development have been described in detail, and the different technologies, as well as the contributions coming from the open source community used in the development, have been discussed. The obtained results clearly show how VNF development can greatly benefit both from open source software and from the latest hardware developments. Also, some computationally intensive functions have been implemented without any significant loss in performance due to the adoption of virtualization technologies.

The presented VNFs cover a wide spectrum of applications, and can thus provide useful guidelines to new developers who are willing to create new functions to be used in the T-Nova framework. To this end, the initial sections of this document provide general information that can be useful to new developers throughout all the development phases of a new VNF.

3.2. Future work

In the next steps, the integration of the developed Virtual Network Functions has to be finalized, considering aspects such as scaling and high availability. The combination of different Virtual Network Functions in order to create new, attractive Network Services will also be addressed.



4. LIST OF ACRONYMS

Acronym   Explanation
3GPP      Third Generation Partnership Project
API       Application Programming Interface
BGF       Border Gateway Function
CAPEX     Capital Expenditures
CLI       Command Line Interface
CRUD      Create, Read, Update, Delete
DDoS      Distributed Denial of Service
DFA       Deterministic Finite Automaton
DHCP      Dynamic Host Configuration Protocol
DoS       Denial of Service
DPDK      Data Plane Development Kit
DPI       Deep Packet Inspection
DPS       Data Plane Switch
DSP       Digital Signal Processor
DUT       Device Under Test
EMS       Element Management System
ETSI      European Telecommunications Standards Institute
FW        Firewall
HGI       Home Gateway Initiative
HPC       High Performance Computing
HTTP      Hypertext Transfer Protocol
IBCF      Interconnection Border Control Function
IDS       Intrusion Detection System
IETF      Internet Engineering Task Force
IOMMU     I/O Memory Management Unit
ITU-T     International Telecommunication Union – Telecommunication Standardization Sector
JSON      JavaScript Object Notation
KVM       Kernel-based Virtual Machine
LB        Load Balancer
MANO      Management and Orchestration
MIB       Management Information Base
NAT       Network Address Translation
NDVR      Network Digital Video Recorder
NETCONF   Network Configuration Protocol
NF        Network Function
NFaaS     Network Function as a Service
NFVI      Network Function Virtualization Infrastructure
NIO       Non-blocking I/O
NN        Neural Network
NPU       Network Processor Unit
NSIS      Next Steps In Signaling
O&M       Operating and Maintenance
OPEX      Operational Expenditures
OSGI      Open Service Gateway Initiative
OTT       Over-The-Top
PCI       Peripheral Component Interconnect
PCIe      Peripheral Component Interconnect Express
POC       Proof of Concept
PSNR      Peak Signal-to-Noise Ratio
QoE       Quality of Experience
RAID      Redundant Array of Independent Disks
REST      Representational State Transfer
RFC       Request For Comments
RGW       Residential Gateway
RTP       Real-time Transport Protocol
SA        Security Appliance
SBC       Session Border Controller
SIMCO     Simple Middlebox Configuration
SIP       Session Initiation Protocol
SNMP      Simple Network Management Protocol
SOM       Self Organizing Maps
SQL       Structured Query Language
SR-IOV    Single Root I/O Virtualization
SSH       Secure Shell
STB       Set Top Box
TSTV      Time Shifted TV
UTM       Unified Threat Management
vCPE      Virtualized Customer Premises Equipment
VF        Virtual Firewall
vHG       Virtual Home Gateway
VIM       Virtual Infrastructure Manager
VM        Virtual Machine
VNF       Virtual Network Function
VOD       Video On Demand
VQM       Video Quality Metric
vSA       Virtual Security Appliance
vSBC      Virtual Session Border Controller
XML       Extensible Markup Language