WANSpeak Musings - Volume V



    Copyright Quocirca 2014

Clive Longbottom, Quocirca Ltd. Tel: +44 118 948 3360. Email: [email protected]

Bernt Ostergaard, Quocirca Ltd. Tel: +45 45 50 51 00. Email: [email protected]

WAN Speak Musings Volume V

Over the last months, Quocirca has been blogging for Silver Peak Systems' independent blog site, http://www.WANSpeak.com. Here, the blog pieces are brought together as a single report.

    January 2014

This report continues the series of aggregated WAN Speak blog articles from the Quocirca team, covering a range of topics.


Why The Future Needs Local CDN

It is pointless moving vast amounts of static data around the internet, which is why content distribution networks (CDNs) have evolved. However, the same technology can, and probably should, be used within an organisation's own network.

Our Chief Weapon Is…

Understanding the dependencies between the various parts of a technology platform (the server, storage and network components) is important, so as to make sure that solving one problem doesn't just reappear as a different problem elsewhere.

It's Here: The Mid-Range Enterprise Storage Tsunami

EMC made a raft of announcements around its storage systems, and there was plenty of focus on mid-range systems. However, it looks like something may have been omitted…

Can Vendors Help Your Technical Strategy?

Vendors get a lot of stick (including from me) for misleading their prospects and customers. Maybe it is time that vendors upped their game and worked with them to help define a suitable future strategy that works for all concerned?

SDN: An Apology

SDN has been touted as the silver bullet to end all silver bullets. However, as implementations start to be seen, issues have come to the fore, and it is time to apologise for being a full-on SDN supporter. Possibly.

Three Pitfalls to Cloud On-Ramping

You've decided that cloud is for you. All your apps are now virtualised, and you are ready to flip the switch. Except for that pesky problem of all that data…

The Future For CoLo

Co-location data centres seem to be doing well, which shouldn't be much of a surprise. However, not all data centres are the same, and the future will see the evolution and maturation of a new beast: the cloud broker and aggregator.

The Future Is Crisper And Clearer

The good news is that computer and consumer displays are getting better, with higher definitions. The bad news is that this could have major impacts on an organisation's networks. How? If the quality is there, users will utilise it.

Wires? So Last Year, Darling!

As EE launches super-high-speed wireless broadband in the UK, does this start to challenge all that copper and fibre that has been laid down? Could super-high-speed wireless be the future, at least for specific situations?

NFV & SDN: You Can Go Your Own Way?

As problems with the idea of software defined networks (SDN) come to the surface, service providers have decided that they need something a little more focused on their needs. Hence network function virtualisation (NFV). Is this a binary battle to the death?

How the Dark Side is Creating the Notwork

Spam email may appear to be an easily dealt with problem, where device- or server-based software can eliminate a large percentage of it. However, nipping spam in the bud, closer to the point of creation, could have major beneficial impacts on the internet itself.

Hub a Dub Dub: 3 Things in a Tub?

The Internet of Things (IoT) wants to break free of constrained pilot projects, but could find itself a victim of its own success. Three things need to be considered first; by getting these right, the IoT stands a much better chance of success.


    Why The Future Needs Local CDN

Content delivery (or distribution) networks (CDNs) are generally perceived as being of concern only for wide area networks (WANs). The idea is that relatively static content can be distributed to be held as close to a user as possible, so as to reduce the stress on a single server that would result from videos, music, and other files all being streamed from a single place. The CDN provides stores of the files in multiple places around the world, leading to a more balanced network load with less latency for those needing to access the files.

However, there is nothing to stop a CDN being implemented in a more local environment, and indeed, this may be a good idea.

Although local area network (LAN) bandwidths are growing at a reasonably fast rate, the traffic traversing these networks is growing at a faster rate. High-definition video conferencing, high-definition voice over IP (VoIP), increasing use of images in documents, and other large single-file activities have moved traffic volumes up by orders of magnitude in some organisations. The future will bring more data as we move toward an Internet of Things, with machine-to-machine (M2M) and other data migrating from proprietary networks to the standardised realm of the corporate IP network. Combine this with the need to deal with the world of Big Data, using sources not only within the organisation but also from along its extended value chain and information sources directly from the web, and it is apparent that LAN bandwidth may remain insufficient to support the business's actual needs.

Sure, priority and quality of service based on packet labelling can help, but they require a lot of forethought and continuous monitoring to get right. Much of the M2M and other Internet of Things data will be small packets that are pushed through the network at frequent intervals, and this sort of chatter can cause problems. Some of this data will need to be near real time; some can be more asynchronous. The key is to ensure that it does not impact the traffic that has to be as close to real time as possible, such as the video conferencing and voice traffic.

A large proportion of an organisation's data is relatively unchanging. Reports, contracts, standard operating procedures, handbooks, user guides, and other information used on a relatively constant basis will change only occasionally, and this is ideal for a CDN to deal with. These files can be pushed out once to a data store close to the user, so that access does not result in a long chain of data requests across the LAN and WAN to the detriment of other, more changeable or dynamic data assets. Only when the original file changes will a user's request trigger the download of the file from the central repository, at which point it is placed in the local store again, such that the next user requesting the file gets that copy.

Many of the WAN acceleration vendors already have the equivalent of a CDN built in to their systems through the use of intelligent caching. In many cases, there is little to do to set up the capability: the systems are self-learning and understand what changes and what doesn't. In some cases, there may be a need for some simple rules to be created, but this should be a once-only activity.

Extending the use of WAN acceleration into the LAN could bring solid benefits to organisations that are looking to move towards a more inclusive network, and now may be the best time to investigate implementing such an approach, before it gets too late.
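As a rough sketch of the caching behaviour described above (a local store, a cheap freshness check against the origin, and a one-off re-fetch when the original changes), consider the following Python fragment. The Origin and LocalCDNCache names, and the use of a content hash as a version tag, are illustrative assumptions rather than any vendor's actual mechanism:

```python
import hashlib

class Origin:
    """Stand-in for the central repository holding the master copies."""
    def __init__(self, files):
        self.files = files  # path -> bytes
    def etag(self, path):
        # Cheap version check: a hash of the content, not a full download.
        return hashlib.sha1(self.files[path]).hexdigest()
    def fetch(self, path):
        return self.files[path]

class LocalCDNCache:
    """Serves files from a store close to the user; consults the origin
    only to check freshness, and re-fetches only when the master changed."""
    def __init__(self, origin):
        self.origin = origin
        self.store = {}  # path -> (etag, content)

    def get(self, path):
        current = self.origin.etag(path)
        cached = self.store.get(path)
        if cached and cached[0] == current:
            return cached[1]                      # local hit: no bulk transfer
        content = self.origin.fetch(path)         # first access, or file changed
        self.store[path] = (current, content)
        return content

origin = Origin({"/handbook.pdf": b"v1 contents"})
cache = LocalCDNCache(origin)
cache.get("/handbook.pdf")                        # pulled once from the origin
cache.get("/handbook.pdf")                        # served from the local store
origin.files["/handbook.pdf"] = b"v2 contents"
cache.get("/handbook.pdf")                        # change detected, re-fetched
```

This is exactly the self-learning behaviour described: the bulk transfer happens once per change, and every other access stays local.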


Our Chief Weapon Is…

Monty Python, that most British of eccentric comedies, had a sketch about nobody expecting the Spanish Inquisition. A scarlet-befrocked cardinal bursts in through the door and declares: "Our chief weapon is surprise. Fear and surprise. Our two main weapons are fear and surprise… and ruthless efficiency. Err… amongst our chief weapons are…"

This seems to me a pretty good analogy for what we are seeing in the IT industry these days. The role of specialists in a world where everything is dependent on everything else leads to some interesting and often unexpected results.

IT is still being pushed to the buyer using speeds and feeds: this server can do more GFlops than that one; this storage array can do more IOPS than that one; this network can carry more bits per second than that one. However, this is meaningless where the world's fastest server is attached to a wax cylinder for storage, or where wet string and aluminium cans are being used as networks. The world's latest sub-photonic multi-lambda network is of no use at all if all your storage can manage is to free up data at a few bytes per second.

Users are getting confused: they identify the root cause of a performance issue and invest in solving it, only to find that the solution doesn't give the benefits they expected. All that has happened is that the problem has been moved from one place to another. The paucity of disk performance has been solved, but the server CPU can't keep up with the data; the servers have been updated, but the network is now jittering all over the place.

Even worse is where the three main variables of server, storage, and networks are all dealt with, and everything looks great. Green markers on all sysadmin screens; everything is pointing toward absolutely fantastic performance from the IT platform. The IT manager is preparing a shelf at their home for the "Employee of the Millennium" award that is surely coming their way.

Except that the help desk manager has red flags on their screens. Users are calling in with application performance issues, unintelligible VoIP calls, and connectivity drop-outs. Sure, the data centre is working like a super-charged V12 engine; the problem is that the wide area network connectivity is working far more like a 1940s John Deere tractor.

To get that special award, the IT manager has to take a more holistic view of the entire ITC (IT and comms) environment. As virtualisation and cloud continue to become the norm, focusing on specialisms will not provide the overall optimisation of an organisation's IT platform that is required to make sure that IT does what it is there for: supporting the business.

End-to-end application performance monitoring is needed, along with the requisite tools to identify the root cause of an issue, alongside tools that can then play the "what if?" scenarios. If I solve the problem of a slow server here, what does this do to the storage there? If the storage is upgraded, how will my LAN and WAN deal with the increased traffic?

Only by understanding the contextual dependencies between the constituent parts of the total platform can IT be sure of its part in the future of the organisation's structure.

However, dressing up as a cardinal and charging into management meetings shouting "Nobody expects the effective IT manager!" may at least get you noticed…


It's Here: The Mid-Range Enterprise Storage Tsunami

I went to the virtual races with Lotus Renault in Milan in early September to partake in EMC's global mid-range storage tech announcements, and you just have to give EMC an A for effort: making storage racks look sexy really requires determination and deep marketing pockets. The event was located in a hangar-size television studio, which the upward of 500 participants entered through an umbilical cord tunnel embellished with race track motifs, accompanied by the roar of high-octane racing cars, one of which was parked in black and gold Lotus colours to greet the guests as they emerged from the tunnel.

This was an all-day, worldwide event going out to some twenty EMC event sites in Asia, the US and several other locations in Europe. In the centre of the studio was a news desk where a TV host duo was continuously interviewing launch luminaries. The news desk fronted the stage and product presentation area, which was flanked by multiple 6-foot cabinets hidden under satin drapes, waiting to be unwrapped.

David Goulden, EMC president and COO, presently took the stage and launched into the story of EMC and the four major trends: mobile, cloud, big data, and social media. Together, these trends are driving WAN data volumes along an exponential growth curve. It took EMC 20 years to sell one exabyte (that's 1 followed by 18 zeroes) of data storage. The following year, in 2010, EMC sold one exabyte in a single year, and earlier this year the company sold one exabyte of storage in a single month. And it's not because the boxes have gotten bigger. Memory's physical size is shrinking, just as the systems' capacity to store and retrieve blocks, files and objects continues to speed up, and flash storage keeps the latency issues at bay.

So, in a roundabout way, our capacity to store and retrieve data is expanding to keep pace with our growing need to generate data that needs storage. Correspondingly, the physical footprint of our storage facilities remains the same, and what we pay for our enhanced storage capabilities is also pretty stable, with the 2013 corporate storage budget delivering 5 times more storage capacity than it did just a year ago.

Data centre virtualisation and the ascent to private/hybrid/public cloud environments is another innovation growth zone that EMC wants to play in, and the customer hook, apart from storage density and price, is ease of use. EMC wants to make private cloud solutions as easy and flexible as public cloud offerings. This is at the heart of the Nile project: an elastic cloud storage device for private clouds and online service providers. It provides business users with a simple interface to their corporate cloud store and lets them pay by the drink. They can select block, file or object storage type, then set performance parameters and capacity, and finally the amount of elastic block storage required. In the mid-range VNX environment, the monthly price for 500GB of storage is typically $25. Nile solutions will be on the market in early 2014.

An interesting omission from all the goodies announced in the mid-range storage market, where EMC reigns supreme, is built-in anti-virus solutions. This is an often-requested feature, which EMC, with its RSA security subsidiary, should be eminently well positioned to deliver. Given the multiple entry points into corporate networks, the multitude of employee BYOD devices, and the value that stored data represents, security needs to reside on the platforms as well as in the applications.

    Need another box with that order, sir?


    Can Vendors Help Your Technical Strategy?

Talking with end-user organisations, it is becoming apparent that the world I grew up in, the world of companies having 5-year plans, is changing. No one really saw the financial collapse coming, and the impact of speculators on world markets can change situations on what seems to be a day-to-day basis. This has made corporate strategising difficult, and it is apparent that organisations are struggling to figure out much true planning beyond a few months, and in many cases just to the 12-week cyclical horizon forced upon them by their shareholders.

At the IT level, this is compounded by the pace of change in today's technology. Few saw cloud computing, fabric networks, converged computing, or big data coming along at an early enough stage to make them part of their strategy, and many are only now beginning to move some of these technologies into mainstream use.

If the end-user organisation is struggling to make longer-term strategic decisions, then can it make shorter-term ones that will still contribute toward a valid vision of a future state?

My belief is that it can, but at the technology level it will need help, and this help should come from those who are responsible for technology changes: the vendors. The vendors should be able to present not only what they are doing now, but also provide a longer-term vision for their customers and prospects to evaluate and buy into, if they believe in the proposition.

Figure 1: Quocirca's Hi-Lo Road Map

OK, it is equally difficult for a vendor to see the future with any clarity as it is for an end-user organisation, but they can at least provide a view on what they believe could happen. Figure 1 shows Quocirca's approach to creating such a vision, which we call a Hi-Lo Road Map.

This works in the following way. If an end-user asks a vendor what functionality they have right now, the vendor can only offer what they have on their books at the moment. This gives a base point of functionality against time to work from, or point (0,0) on the graph. However, in six months' time, the vendor has the capability to change the functionality of their products. Depending on the financial climate, the vendor may have little money to invest in R&D; should there be more money available and more pull from customers, they could invest more. This leads to a possible spread in functionality, shown by the dark blue segment of the graph. As time goes further out, the possible spread of functionality becomes larger.

However, the vendor is at least providing limits to their vision and giving the customer (or prospective customer) a vision to buy into. The model allows the vendor to assure an organisation that the functionality it provides will never drop below a certain point, as falling below it would show that the vendor was being too cautious in its approach. Neither will it push too hard into being overly ambitious and cause the customer to back away toward something more mainstream.
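To make the shape of the model concrete, here is a minimal numeric sketch of a Hi-Lo envelope. The growth rates and the six-month review period are invented for illustration, not Quocirca's actual figures:

```python
def hilo_roadmap(f0, lo_rate, hi_rate, horizon_months, period=6):
    """Return (month, floor, ceiling) points for a Hi-Lo Road Map:
    the committed floor grows at lo_rate per period, the ambition
    ceiling at hi_rate, so the spread widens the further out you look."""
    points = []
    for month in range(0, horizon_months + 1, period):
        periods = month / period
        floor = f0 + lo_rate * periods    # "functionality will never drop below this"
        ceiling = f0 + hi_rate * periods  # "we will not over-reach beyond this"
        points.append((month, floor, ceiling))
    return points

# Starting from today's functionality (point 0,0 in the text):
for month, lo, hi in hilo_roadmap(f0=0, lo_rate=1, hi_rate=3, horizon_months=24):
    print(f"month {month:2d}: functionality between {lo:.0f} and {hi:.0f}")
```

The widening gap between floor and ceiling at each step is the dark blue segment of the graph described above.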

Recently, I attended a couple of events held by essentially similar vendors. One was completely open about where it was going and its vision of the future. It fell easily into this model, and the end-user organisations present, along with the partners, were firmly bought in to its approach. The other peppered its presentations with non-disclosure warnings. Therefore, I cannot state to its prospects or customers exactly what that vendor sees as the future, even though it has a good portfolio of products.

Which would you choose as a partner? The one with the open, long-term vision, or the one who says "we can help you, but we can't tell you where we're going"?

SDN: An Apology

OK, I admit it. When I first started looking into SDN, I fell for it. Hook, line, sinker; in fact, the fishing rod and the fisherman as well. The simplicity of it all was so dazzling that I suspended my normal cynicism and went for it. SDN was going to change the world: Cisco, Juniper, and other intelligent switch manufacturers were dead, and we'd all be using $50 boxes within months as all the intelligence went to the software layer.

Well, hopefully I wasn't that bad, and I have raised plenty of issues along the way, but I did miss one tiny little problem. I'm OK with the idea of multiple planes, and I'm OK with two of them being as defined within the SDN environment. However, there is one that still niggles: the control plane.

The data plane has to stay down at the hardware level. If it wasn't there, then the data couldn't be moved around. This could be filed under "a statement of the obvious".

The management layer can, and should, be moved to the software layer. The rules and policies covering how data should be dealt with are better served by being up in the software layer, as a lot of the actions are dependent on what is happening at that level.

Great: we have two layers sorted. Now, the thorny issue of the control plane. With SDN, this is meant to be abstracted to the software layer too, but it just won't work unless there is a great deal of intelligence at the hardware layer. However, the hardware layer is meant to have little to no intelligence in an SDN environment.

The problem is that if every packet has to be dealt with via the control plane, then it needs to jump from the hardware to the software layer and back down again, introducing large amounts of latency into the system. This may not be a problem for many commercial data centres, but it is a complete no-no for service providers.

So, to get around the latency issue, only those packets that need action taken on them should be sent up to the software layer. This could work, but something has to decide which packets should go up. This would be something along the lines of, oh, I don't know, how about a switch operating system and some clever ASICs or FPGAs? And while we're at it, we may as well get rid of that last problem of SDN latency by not sending any of the packets to the software layer and doing everything at the hardware layer, as it is far more efficient and effective. In other words: a switch as we already know it.
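The fast-path/slow-path split being described can be shown in a few lines. This toy Python sketch is a generic illustration of the idea (match in hardware where a rule exists, punt to the controller where one doesn't); the table layout and field names are invented, not OpenFlow's actual wire format:

```python
class Controller:
    """Stand-in for the software layer's policy engine."""
    def decide(self, packet):
        return "forward:port1"  # placeholder policy decision

# Rules the control plane has already installed (held in ASIC tables in a real switch).
flow_table = {
    ("10.0.1.0/24", "tcp"): "forward:port2",
    ("10.0.2.0/24", "udp"): "forward:port5",
}

def handle(packet, controller):
    key = (packet["dst_subnet"], packet["proto"])
    action = flow_table.get(key)
    if action is not None:
        return action                  # fast path: handled entirely in hardware
    # Slow path: no matching rule, so this packet is "punted" up to the software
    # layer, which decides and installs a rule so later packets stay down below.
    action = controller.decide(packet)
    flow_table[key] = action
    return action

pkt = {"dst_subnet": "10.0.3.0/24", "proto": "tcp"}
print(handle(pkt, Controller()))       # first packet takes the latency hit
print(handle(pkt, Controller()))       # subsequent packets match in "hardware"
```

The author's point is precisely this: once the punt rate has to approach zero, the "dumb" device has quietly re-acquired a switch operating system and clever silicon.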

Three Pitfalls to Cloud On-Ramping

[…]

Now, we have moved closer to our end result. However, there has been a change to the source data again: the hour or so that it took to move the delta data across has resulted in a further 10GB of new data being created.

The iterations could go on for a long time, but at the 10GB level you're probably at the point where the last synchronisation of data can be carried out while the planned switchover of the application is carried out, with minimum impact to the business. The key is to plan how many iterations will be required to get to a point where a final synchronisation can be done more easily.
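That planning step is a simple convergence calculation: each pass copies the current delta, but new data accumulates while the copy runs. A minimal Python sketch, with wholly illustrative numbers:

```python
def plan_sync_iterations(initial_gb, change_gb_per_hr, transfer_gb_per_hr,
                         cutover_threshold_gb):
    """Count the delta-sync passes needed before the remaining delta is small
    enough for a final offline synchronisation. Converges only if the link
    moves data faster than the business creates it."""
    assert transfer_gb_per_hr > change_gb_per_hr, "the delta would never shrink"
    delta, passes = initial_gb, 0
    while delta > cutover_threshold_gb:
        hours = delta / transfer_gb_per_hr   # time to copy the current delta
        delta = change_gb_per_hr * hours     # new data created in the meantime
        passes += 1
    return passes

# e.g. a 5TB initial data set, 10GB/hr of change, 500GB/hr of effective WAN
# throughput, and a 5GB delta deemed small enough for the switchover window:
print(plan_sync_iterations(5000, 10, 500, cutover_threshold_gb=5))  # -> 2
```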

So there are three main areas to deal with: data volume can be dealt with through cleansing, data deduplication, and wide area network optimisation technologies; network issues can be dealt with through virtual data links and/or prioritisation; and data synchronisation can be handled through planned iterations of synchronisation, followed by a final offline synchronisation.

Or you could use a logistics company…

The Future For CoLo

Times are looking good for co-location (co-lo) facilities. As commercial entities realise that building yet another data centre for their own use is not exactly cost-effective, they look towards the value of using an external one for some or all of their needs.

For some, the use of the XaaS (whatever-it-may-be-as-a-service) model will be the way that they go. However, there will always be the server huggers: those who realise that owning the facility is now counter-productive, but who still want to own the hardware and software stack within the facility. For these people, letting someone else worry about power smoothing and distribution, environmental monitoring, auxiliary power generation, facility cooling, and connectivity makes a lot of sense.

And then, of course, there are those who want to be in the XaaS market, but as a provider, not as a user.

Providing services to the general public and commercial organisations can be fraught with danger, particularly if you are new to market with a relatively new idea. Will everyone go for The Next Great Idea? Going it alone and building a full data centre requires a crystal ball of the largest, clearest magnificence: get it wrong one way and too many customers appear, and you find yourself needing to splash out on another facility far too early; get it wrong the other way, and not enough customers come to you, and you find yourself trying to keep a vast, empty data centre going on low cash flows.

No, better to go for co-lo and start at the size you need, with the knowledge that you can grow (or shrink) as your needs change. The relatively small amount of money needed up front to get off the ground means that cash flows can come through the hockey-stick curve rapidly, and the positive, profitable cash flow can then be used to fund the incremental increases in space that the service provider needs as customer volumes increase.

    So far, hopefully all pretty obvious.

Co-lo does, however, offer both users and service providers further advantages. As there will be many customers within a single facility, they are all capable of interacting at data centre speed. Therefore, a service provider housed in the same co-lo facility as its own customer can pretty much forget about any discussions on data latency: core network speeds will mean that this will be down in the microseconds for a well-architected platform, rather than the milliseconds.

Even where the service provider is in a different physical facility than the customer, but with the same co-lo provider, the interconnectivity between the facilities will keep latency far below what an organisation could hope to get through the use of its own WAN connectivity. Even between co-lo data centres owned by different companies, the amount and quality of connectivity in place between the two facilities will still outperform the vast majority of in-house data centres. Combine all of this with judicious use of WAN acceleration from the co-lo facility to the end users in the headquarters and/or remote offices of the end customer, and a pretty well-performing system should be possible.

For co-lo providers like Interxion and Equinix, this points to a need to be not just a facility, nor even a fully managed, intelligent facility, but what Interxion calls a "community" of customers, partners and suppliers. By working as a neutral party between all three, the provider can help advise all its customers on how best to approach putting together a suitable platform. It can also advise on which of its partners can provide services that could make life a lot easier for a customer who may (through no fault of their own, apart from a lack of time to check into everything available out there) be intent on re-inventing the wheel. In some cases, this may also mean advising on services provided from outside its facilities, such as the use of Google or Bing Maps in mash-ups, rather than buying in mapping capabilities.

This breeds a new type of professional service: the co-lo provider which does not provide technology services per se over and beyond the facility itself, nor systems integration services, but does provide the skills to mix and match the right customers with the right service providers.

Backing this up with higher-level advice on how to approach architecting the end customer's own equipment, to make the most of the data centre interconnect speed and minimise latencies for the best possible end-user experience, should be of real value to all co-lo customers, particularly commercial companies struggling to fully understand the complexities of next-generation platforms.

If you are looking for co-lo, Quocirca heavily recommends that you ask prospective providers whether they plan to offer such a neutral brokerage service, and if not, walk away.

The Future Is Crisper And Clearer

I've attended a few events lately with vendors in the PC, laptop, tablet, and display markets. In their attempts to drive a desire in users to upgrade or change their devices, these vendors are finding it a struggle to push the speeds and feeds as they have of old. Whereas there are still the technogeeks who will pay for faster processing, better graphics, and a humungous great fast hard disk drive, the majority of users are now more like magpies: if it catches their eye, then it's a good start. They are looking for something that looks nice and enables them to make a personal statement in the bling stakes.

Therefore, more vendors are making greater style statements in their new offerings. Ultrabooks are thinner and more stylish; tablets are lighter and sleeker; smartphones are glossier.

However, the biggest move seems to be on the display front. With Apple having pushed the Retina display for a while now, others have also gone for increasing pixel density to match or exceed standard HD (1920×1080 pixels). However, there seems to be an increasing push now to go for either the ultra-high-definition video standard, 4K (3840×2160 pixels), or 4Q (2560×1440 pixels). On top of this is deep colour depth, generally 8- or 10-bit per channel, the latter giving around 1.07 billion possible colours, far more than the human eye can actually perceive.

On the face of it, this is a pretty simple evolution that is becoming more of a requirement. Many screens are getting larger: some professionals in the media markets will be using 30" or 40" screens for their work, and at that size standard HD can look a little grainy. Cameras and video cameras can now create images of multiple tens of millions of pixels, so even a 4K display's 8.3 million pixels will only be showing a cut-down version of the real image. However, for a smartphone or a 10" tablet, it does seem a little on the overkill side.

The problem for many, though, will be the impact it could have on the network. Textual documents using TrueType fonts will not be a problem: their size will stay the same. However, with pin-sharp resolution on their screens, many content creators will now move from 72dpi graphics to 300 or even 600dpi images, with a massive impact on document size.

Images, even those being posted on Facebook, will be sized to impress those with such high-resolution screens.

    Videos will be streamed using the higher resolutions by those with the capability to show them on their screens.

This will have an impact on the underlying network, unsurprisingly. A reasonably well-compressed, visually lossless HD film currently requires around 3.5Mb/s of bandwidth. Move this to 4K, and you are looking at 14Mb/s: a good means of bringing a network to a halt if ever I have seen one.
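That 4× jump follows directly from the pixel counts: a 4K frame carries four times the pixels of a 1080p frame, so at comparable codec efficiency the bitrate scales accordingly. A quick check in Python:

```python
# Scaling the quoted HD bitrate by the ratio of pixels per frame,
# assuming the same codec efficiency at both resolutions.
hd_pixels = 1920 * 1080            # 2,073,600
uhd_pixels = 3840 * 2160           # 8,294,400 (the 8.3 million quoted earlier)
hd_bitrate_mbps = 3.5              # visually lossless HD figure from the text

uhd_bitrate_mbps = hd_bitrate_mbps * uhd_pixels / hd_pixels
print(f"{uhd_bitrate_mbps:.1f} Mb/s")   # 14.0 Mb/s
```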

Sure, an organisation could prevent such large files from being accessed, but such a negative approach will only be putting off the inevitable. Technologies will be required that allow such high definitions and larger files to be embraced and encouraged. More efficient compression codecs; incremental viewing of files, using a low-resolution first pass that builds up to high resolution; content delivery networks; and other caching and network acceleration techniques will all have a part to play.

From the noises I'm hearing from the likes of Dell, Fujitsu, Lenovo, ViewSonic, and Iiyama, 2014 will be the year of introducing 4K/4Q displays. This will lead to an increasing network load of higher-definition files through 2015 and beyond; maybe it is time to start planning for this now.

Wires? So Last Year, Darling!

EE, a mobile operator in the UK, has just started trialling a small, controlled LTE-A 4G network capable of providing bandwidth speeds of up to 300Mb/s. This is possible through aggregating different spectrum, with 20MHz of 1800MHz and 20MHz of 2.6GHz spectrum being used to provide a combined ultrafast network speed.

EE has also chosen its first roll-out location carefully. It would have been easy to choose some low-traffic environment in a quiet backwater somewhere. Instead, EE has decided to carry out the trial in the UK's Tech City, the centre for technology start-ups and support companies in London's East End. Full of techies stuffed to the gills with gadgets and demanding the latest and greatest of everything, Tech City will be a lightning rod for any problems of bandwidth contention, packet jitter, and collisions. With many of the companies in the area combining data with voice and video, the testing of this ultrafast service should be severe, and it will be interesting to see how EE manages to deal with it all.

300Mb/s is a great deal more than the average business connection speed in the UK at the moment, with the majority of small and medium businesses using ADSL or ADSL2 connections giving connection speeds averaging out around the 14Mb/s level for downloads, with much smaller capacity on upload speeds.

However, there are headline speeds and there are realistic speeds. Although Virgin Media states that 66% of its customers can expect just over 94Mb/s from its "up to 100Mb/s" fibre to the home (FTTH), an independent site says its research, polled from real-world measurements of people's service, shows that 100Mb fibre tends to give closer to 40Mb/s overall.

When other connectivity methods from other providers are taken into account, such as fibre to the cabinet (FTTC) and copper ADSL/ADSL2/ADSL2+/SDSL, the UK's overall average connection speed is just shy of 15Mb/s. The problem here is that a lot of wired systems (including optical "wire", i.e. fibre) are very dependent on the distance the signal has to travel within the wire. If the FTTC cabinet happens to be 2 metres from the exchange, you will get blazingly fast speeds; if it is a mile from the exchange, then your speeds will be mediocre in comparison. Then there is contention: having multiple users on the same wire at the same time will affect just how much of your data can be carried at any one time. At 3am, you may find that you are the only one on the line, and everything zips along. At 10am, as other businesses and consumers start downloading songs, videoconferencing, and generally clogging up the system, problems will occur.


And you will find it very difficult if you live out in the sticks: the investment is only there to provide fast services where there is money to be made. If you are not deemed worthy, then you will have the technical equivalent of a wet piece of string to send your data down.

4G does away with the wires and makes it easier to provision connectivity in out-of-the-way places. What it doesn't necessarily do is provide unlimited bandwidth, and it still suffers from proximity issues. The EE pilot will identify what it can do around the first issue, and should give pointers as to how dense the radio masts will have to be to give adequate cover for an area without creating too much of a noise level.

However, should 4G LTE-A prove successful, it will offer another tool in the consumer and business connectivity toolbox, and for many it could prove to be the force behind dropping the need for a wire at all.

NFV & SDN: You Can Go Your Own Way?

SDN is maturing: many network vendors now offer variations on a theme of OpenFlow or other SDN-based devices. However, the latest launches from Cisco, via its Insieme group, show how the market may well evolve, with no big surprises.

Insieme is nominally an SDN-compliant architecture, with interoperability with OpenStack cloud platforms. However, as is Cisco's wont, it only provides its best capabilities in a Cisco-only environment, as the Application Policy Infrastructure Controller (APIC) is dependent on there being specific Nexus switches in place. And Cisco isn't the only one causing problems; more on this later. So SDN starts to get a bit dirty, with only parts of the promised abstraction of the control layer being possible in a heterogeneous environment.

As SDN and OpenFlow mature and hit some problems, service providers have formed a different group to try to meet their own needs. Network Function Virtualisation (NFV) tries to create a standardised manner of dealing with network functions within a service provider's environment, aimed at controlling the tail-chasing that providers have to do in trying to keep up with the continuous changes from the technical and vendor markets.

    From the original NFV whitepaper, we can see what the service providers are trying to do:

"Network Functions Virtualisation aims to address these problems (ranging from acquisition of skills, increasing heterogeneity of platform, need to find space and power for new equipment, the capital investment required and the speed to end-of-life of hardware appliances) by leveraging standard IT virtualisation technology to consolidate many network equipment types onto industry standard high volume servers, switches and storage, which could be located in Datacentres, Network Nodes and in the end user premises. We believe Network Functions Virtualisation is applicable to any data plane packet processing and control plane function in fixed and mobile network infrastructures."

This sounds an admirable and suitable aim, but the mention of data and control planes seems to place this squarely in the face of SDN. Are we seeing a battle to the death between one standard founded in academia and pushed by the vendors (SDN) and one founded and championed by those having to deal with all the problems at the coal face (NFV)?

Probably not. NFV is aimed at dealing with specific cases, and is really looking at how a service provider can collapse certain data functions down into a more standardised, flexible, and longer-lived environment. SDN is looking at layering on top of this a more complex and rich set of data functions, aimed at providing applications and users with a better overall data experience.

The two can, and should, work together in harmony, to each other's benefit. However, there will be an ongoing need to ensure that there is not a bifurcation of aims, that there remain adequate touch points between each approach, and that the standards put in place by each group work well with each other.


This then brings us back to the original part of the piece: although a bifurcation of SDN/NFV would be bad enough, a forking of SDN by the vendors would be worse. Cisco's (and other vendors') oft-trodden path of providing nominal support for a standard, but in such a way that tries to tie people to their own kit, is not good for SDN, nor for the end user. Other vendors making similar noises with their own SDN projects and software, such as Juniper with Contrail, Alcatel-Lucent with Nuage, and VMware with Nicira, may have better intentions and a more open approach, but the overall messaging does mean that we are entering a fairly typical vendor market, where a good idea is becoming bogged down in efforts to ensure that no vendor has to cannibalise its existing approach too much.

In actuality, as organisations become more aware of the need for data agnosticism, and of how the world of hybrid cloud is, by its very nature, one of heterogeneity, this could rebound (and looks like it already is rebounding) on some of these vendors. For Cisco in particular, the outlook for the future, presented by CEO John Chambers, is poor. Predicting a revenue drop of between 8% and 10% in this quarter, based on year-on-year figures, the company sees emerging markets showing a marked collapse in orders. With many of these markets being a prime target for approaches such as SDN and NFV, due to many projects being green-field or full replacements of old kit ill-suited to new markets, it looks like Cisco is being bypassed in favour of those who can offer a more standardised approach to a heterogeneous environment.

Alongside APIC, Cisco also has its Open Network Environment (ONE) platform, and is heavily involved in the OpenDaylight project with the Linux Foundation, to which it has donated part of its ONE code. If it is going to be an ongoing force in the new network markets, it will need to provide a stronger message in both the completely open SDN and NFV markets.

It is to be hoped that the vendors do not strangle this market before it has really taken off. SDN and NFV need to work together: the vendors need to create this new network market first, and then fight over who does what best in a heterogeneous environment. Only through this will the end user be the winner.

How the Dark Side is Creating the Notwork

Despite the massive strides being made in the bandwidth available to users through improved consumer and business connectivity, the internet often just doesn't seem responsive enough. Sure, poorly crafted web sites with poor code can be part of the problem, but just a small part of a larger overall problem.

According to Kaspersky Labs, averaged across 2012, spam accounted for 72.1% of all emails, of which around 5% contained a malicious attachment. Depending on which source you trust, between 150 and 300 billion emails are sent every day, or up to 110 trillion emails per year, of which around 80 trillion will be essentially just taking up bandwidth (and people's time, if the messages get through to them). Although spam volumes are falling, as standard advertising emailers find it less worth their while to use email for these activities, the organised, more malicious spammers are taking over, with more worrying possible outcomes.
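The yearly figures follow from the upper daily estimate and Kaspersky's spam share; a quick sanity check:

```python
# Reproducing the volumes quoted above from the upper daily estimate.
emails_per_day = 300e9                    # ~300 billion per day
emails_per_year = emails_per_day * 365    # ~1.1e14, i.e. ~110 trillion
spam_per_year = emails_per_year * 0.721   # ~7.9e13, i.e. ~80 trillion

print(f"per year: {emails_per_year:.3g}, of which spam: {spam_per_year:.3g}")
```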

For the organised blackhat using phishing (the sending of what looks like a valid email, with a targeted message and links to external code or other means of getting a user hooked), or using emails as a means of introducing a Trojan or other payload, email is still a tool of choice. These messages are often harder to pick up as spam, as they are more targeted, but tools are there to try to identify them based on behaviour modelling and pattern matching. With the rise of emails introducing ransomware code such as CryptoLocker, more people and organisations are finding that spam is not just time-consuming, but that it can also be very expensive to deal with.

The network hit of a single email is relatively low; however, unless the masses of spam and malicious emails are stopped at source, millions of such messages will take up a horrendous amount of overall bandwidth. With the growth in image- and even video-based spam, the average email size is increasing, slowing down the internet to a point…

Hub a Dub Dub: 3 Things in a Tub?

[…]

The main thing holding the IoT back has been a mix of cost (who wants to embed a $10 sensor into a $5 piece of equipment?) and standards (how should data be formatted so that the billions of connected items can all speak a lingua franca?).

However, costs are falling, and XML and other data formatting standards are chipping away at the latter issue. Could we finally be ready for the IoT to become a reality?

Certainly, with the likes of Google Glass and other wearable technology, we seem to be well on the way to the IoT being here in some form today. Is there something that will prevent it from accelerating and becoming ubiquitous?

Here's what Quocirca's analysts believe are the three most pressing issues:

1) Chattiness: consider a smart electrical grid network. Every electrical item within a house or business premises is part of the IoT, reporting usage information back to utility providers. This data can also be used by remote monitors, who can advise when an item is about to break down (based on monitoring current draw, for example), or by the house or business owner, who wants to see just what power is being drawn at any one time and apply controls. This is great, except for the volumes of small data packets flying around all over the place. Such high-volume chatter could bring networks to their knees.

2) Security: opening up so many connected devices to the internet could provide more vectors of attack for blackhats. You probably wouldn't be that bothered if someone broke into the data from your fridge and found that you had a secret stash of chocolate bars, but you may be a bit more worried if the data from your CCTV security systems was compromised.

3) Value: although pretty much anything can be connected to the internet, is there any real value in doing so? Lighting, heating, entertainment systems, cookers and so on may make sense, where a householder can get them set just as needed for when they return from work; an internet-connected toilet seat (as can be obtained in Japan) may not offer quite so much obvious value.

If the IoT is to be implemented as an anything-connected-to-anything mesh, we can expect it to be a complete mess, with problems occurring all over the place. If instead we "hub" the IoT, with a house having intelligent machines that aggregate and filter the data to identify what is a real event and what is just noise, then far less traffic will need to be transported over the public internet, and security can be applied at a more value-based level, against only the data that needs it.
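A home hub of this kind is, at heart, a local aggregate-and-filter loop. The following Python sketch illustrates the idea; the device names, the power-draw example, and the spike threshold are all invented for illustration:

```python
from statistics import mean

class HomeHub:
    """Keeps device chatter local; forwards upstream only readings that
    look like real events against the device's own recent baseline."""
    def __init__(self, window=60, spike_factor=3.0):
        self.window = window              # readings kept per device
        self.spike_factor = spike_factor  # deviation that counts as an event
        self.history = {}                 # device_id -> recent readings

    def ingest(self, device_id, value):
        """Return an event dict to forward upstream, or None for noise."""
        readings = self.history.setdefault(device_id, [])
        if len(readings) >= self.window:
            baseline = mean(readings)
            if baseline and value > baseline * self.spike_factor:
                return {"device": device_id, "value": value,
                        "baseline": round(baseline, 1)}
        readings.append(value)
        del readings[:-self.window]       # keep only the recent window
        return None

hub = HomeHub()
for watts in [60, 62, 59, 61] * 20:       # normal fridge draw: stays in the house
    hub.ingest("fridge", watts)
print(hub.ingest("fridge", 400))          # abnormal draw: let out through the plug
```

Only the final, anomalous reading would leave the house; the thousands of routine readings stay in the "tub".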

This may involve the use of programmable micro-computers such as the Raspberry Pi, and will also require embedded data filtering along the lines of data leak prevention (DLP), as seen from the likes of Symantec and CA; contextually aware security from the likes of LogRhythm and EMC/RSA can also help in dealing with IoT data in a more local environment.

Combine this with intelligent routing and WAN compression, and the data volumes start to look a little more controllable.

If left uncontrolled, the IoT will be a case of pouring more data into a sea of similar data and trying to make sense of it. By applying intelligent hubs, the data is more akin to being added to a small tub: this is easier to deal with, and only what is really important then needs to be let out through the plug into the greater internet sea.


    About Silver Peak Systems

Silver Peak software accelerates data between data centres, branch offices and the cloud. The company's software-defined acceleration solves network quality, capacity and distance challenges to provide fast and reliable access to data anywhere in the world. Leveraging its leadership in data centre class wide area network (WAN) optimisation, Silver Peak is a key enabler for strategic IT projects like virtualisation, disaster recovery and cloud computing.

Download Silver Peak software today at http://marketplace.silver-peak.com.


    About Quocirca

Quocirca is a primary research and analysis company specialising in the business impact of information technology and communications (ITC). With worldwide, native-language reach, Quocirca provides in-depth insights into the views of buyers and influencers in large, mid-sized and small organisations. Its analyst team is made up of real-world practitioners with first-hand experience of ITC delivery who continuously research and track the industry and its real usage in the markets.

Through researching perceptions, Quocirca uncovers the real hurdles to technology adoption: the personal and political aspects of an organisation's environment, and the pressure of the need for demonstrable business value in any implementation. This capability to uncover and report back on end-user perceptions in the market enables Quocirca to provide advice on the realities of technology adoption, not the promises.

Quocirca research is always pragmatic, business orientated and conducted in the context of the bigger picture. ITC has the ability to transform businesses and the processes that drive them, but often fails to do so. Quocirca's mission is to help organisations improve their success rate in process enablement through better levels of understanding and the adoption of the correct technologies at the correct time.

Quocirca has a pro-active primary research programme, regularly surveying users, purchasers and resellers of ITC products and services on emerging, evolving and maturing technologies. Over time, Quocirca has built a picture of long-term investment trends, providing invaluable information for the whole of the ITC community.

Quocirca works with global and local providers of ITC products and services to help them deliver on the promise that ITC holds for business. Quocirca's clients include Oracle, IBM, CA, O2, T-Mobile, HP, Xerox, Ricoh and Symantec, along with other large and medium-sized vendors, service providers and more specialist firms.

Details of Quocirca's work and the services it offers can be found at http://www.quocirca.com.

Disclaimer: This report has been written independently by Quocirca Ltd. During the preparation of this report, Quocirca may have used a number of sources for the information and views provided. Although Quocirca has attempted wherever possible to validate the information received from each vendor, Quocirca cannot be held responsible for any errors in information received in this manner.

Although Quocirca has taken what steps it can to ensure that the information provided in this report is true and reflects real market conditions, Quocirca cannot take any responsibility for the ultimate reliability of the details presented. Therefore, Quocirca expressly disclaims all warranties and claims as to the validity of the data presented here, including any and all consequential losses incurred by any organisation or individual taking any action based on such data and advice.

    All brand and product names are recognised and acknowledged as trademarks or service marks of their respectiveholders.

REPORT NOTE: This report has been written independently by Quocirca Ltd to provide an overview of the issues facing organisations seeking to maximise the effectiveness of today's dynamic workforce.

The report draws on Quocirca's extensive knowledge of the technology and business arenas, and provides advice on the approach that organisations should take to create a more effective and efficient environment for future growth.
