WANSpeak Musings - Volume IV


Over the last months, Quocirca has been blogging for Silver Peak Systems' independent blog site, http://www.WANSpeak.com. Here, the blog pieces are brought together as a single report.

Living in the Material World With Prying Eyes
The impact of Edward Snowden's leaking of how the NSA monitors data continues, but what does this really mean to individuals and organisations?

When Acquisitions Bite
Buying in technology is a common way for a vendor to plug holes in its portfolio. However, without the right due diligence (which is rarely carried out), functional redundancy can occur and a bad case of acquisitional indigestion can set in.

Retro Networking: An Argument For Analogue
Computers live or die through the use of binary code. However, modern technology may now make it far more effective for networks to run in octal, or decimal, or hexadecimal.

Don't Let Vendor Hype Cloud Your Organisation
Cloud is being abused as a term, and vendors are not helping by trying to shoe-horn their existing portfolios into the cloud arena. Don't fall for vendor hype: make sure that you are aware of how cloud measures up for your organisation.

Killing of the Cloud: Was It A Contract Killing?
With too much hype around the term "cloud", true cloud runs the risk of being strangled at birth by poor implementations failing to meet the expectations of users. There is a need for a common taxonomy in how cloud systems talk to each other and therefore interoperate.

Oxymoronic Disaster Recovery
How important is disaster recovery to you? How much more important is business continuity? Disaster recovery is merely an insurance policy, one which may not pay out during lean economic times. Business continuity should be the focus.

Is It A Car? An Exchange? No, It's An Abstract Plane!
"Software defined" has moved on from the network through servers and storage, and now seems to be applied to pretty much anything and everything out there. The whole idea is to abstract functions away from proprietary hardware to more flexible and agnostic software.

The 5 Keys To MDM
Mobile device management (MDM) may seem like the first place to go when looking at managing the burgeoning world of bring your own device (BYOD). However, an MDM-free approach is actually far better.

Chasing Labour Arbitrage's Tail
Some parts of the world have well-qualified people at far lower resource cost than others, and this has led to countries like India and Russia evolving large labour-arbitrage markets. However, history shows that savings can be ephemeral, and that outsourcing for the right reasons will create greater savings.

Is Your Data Centre Getting Sucked Into The Cloud?
It is time to review just how your data centre operates and where cloud fits into this. Although many claim that cloud is not on their radar, existing data centres run the risk of being sucked into the cloud. It is far better to proactively plan for such an occurrence.

The Internet of Things vs. The Network
The internet of things (IoT) seems to be a topic du jour. Like most topics du jour, it is likely that initial implementations will get things more wrong than right, and managing the traffic created by the billions of items that could be connected to the IoT needs careful consideration.

The Future May Be A Strange, But Familiar, Place
Are you happy with your internet speed? If not, are you any less happy than you were 2, 5 or 15 years ago? As WAN speeds increase, the traffic seems to grow exponentially to fill it. Will the future look any different?


Living in the Material World With Prying Eyes

It's been a thrilling month on the Full Monty information disclosure front. We have been told, by a former NSA insider, that every phone call and every item of stored data passing through major US ISPs, mobile carriers and cloud service providers is screened by deep packet inspector Narus on behalf of the US National Security Agency.

The US government, helped by subcontractors, is apparently sifting through all this data looking for terrorist-related information, though it claims to stay well away from prying into commercially valuable information; this, it accuses the Chinese of doing. China, the behemoth behind its Great Wall-of-Fire, is also screening all its content carriers, but instead of seeking terrorists, it is looking for dissident political utterances, while at the same time claiming to be the innocent spying victim in a networked world: a world where the US has defined the technologies and the traffic rules that govern the Internet, as well as hosting most of the world's great data gatherers and cloud service providers.

One comforting claim for the European audience is that the US screening is not directly aimed at monitoring communication between parties that are outside the US. However, we know that the US, Canada, UK, Australia and New Zealand operate the Five Eyes program (formerly known as Echelon) that monitors vast amounts of international communication over any electronic channel, so perhaps all lines of electronic communication and all cloud data storage facilities are considered fair game for national security agencies to spy on.

In addition, the use of advanced persistent threat (APT) techniques to penetrate private, public and commercial organisations that are suspected of harbouring a threat to any nation's national security is considered fair game. Then there are the million strains of malware developed to snoop on our machines' data, steal our identity or block access to our sites: malware either controlled by hackers that are out to loot us, or angry hacktivists out for gory renown. National security ops, cybercriminals and hacktivists target any entity that has value, with a lot of collateral damage along the way.

So, does all this make a mockery of our attempts to maintain the confidentiality and integrity of our personal and corporate data? Actually, no: we just have to live with it, like we live with the millions of bacteria and viruses which surround us. Sterile environments can be just as dangerous for life as we know it as the organic and digital soup we wade around in every day.

Of course, we need to know our enemies; we need to know what we have to protect. We need to be educated, and need to be dressed for the occasion. Going overboard with paranoia and suspicion will only slow us down and make life miserable. And while security companies like Symantec, Sophos, Check Point, F-Secure and McAfee clearly have an interest in selling us their protection, and thus use fear, uncertainty and doubt rhetoric, they are also providing best-practice assessments based on heuristic security incident analysis.

The great change in IT strategies that has shifted us from in-house servers and data centres towards more and more remote cloud services clearly increases the ease of access to our data. While cloud providers may have better security than we have in our own data centre, we do need to be more sophisticated in our use of encryption, both for communication and for data storage purposes. We need to be more aware of the metadata footprint we leave in cyberspace, be it location-specific information, travel patterns or knowledge access routines. And we need to educate our users on the threats stalking us in our everyday digital lives.

But first and foremost, we must recognise that the significant improvements in overall productivity and responsiveness provided by the new communication and data access paradigms come at a price. We decided long ago to pay that price, not out of fear, but out of hope that we can do more with less. That hope remains, if we can keep our wits about us.


    When Acquisitions Bite

Many of the vendors I talk to have been acquisitive: rather than develop their own technologies, they see it as easier and more cost-effective to buy in any capability by acquiring another vendor and then integrating that technology into their own.

However, so many seem to struggle with this integration, and it often leads to problems with existing customers. For example, if a vendor already has some capability in one area, but recognises that it has holes in its overall offering, it may go out and buy another vendor; that then produces functional overlap between two systems, where the original vendor's system does some things almost the same as the acquired vendor's system (note the key word: almost).

Received wisdom is to take what is best from the two systems and create a new offering to the market. However, existing customers can then get upset: they may have chosen the acquired vendor over the acquiring vendor, and may have built up skills or written additional code that is dependent on the specific area being changed. Presenting them with a new system may just push them into a position where they start to review their total use of the existing software, something vendors always want to avoid, as this raises the possibility of the customer going elsewhere. An alternative approach is to use application programming interfaces (APIs) to cobble some form of interoperability together, still leaving the problem of supporting multiple different systems over an extended period of time.

Some vendors then choose to run the systems as stand-alone options, which I believe can be confusing to the user. "Here's our best offering in this area. Or you can have this, which is also a very good offering, but doesn't do these bits that the other offering does. Or maybe this one?" No, a vendor has to bite the bullet and get things sorted; but how to do this without upsetting the customer?

Surely, with service oriented architectures (SOA) and web services (WS) technologies being proven, the answer lies here?

For example, take System A and System B, which are now both in a vendor's portfolio through acquisition. Functions 1, 2, 3, 4 and 5 are facilitated by System A; Functions 4, 5, 6, 7 and 8 by System B. As you can see, there is redundancy in functionality in areas 4 and 5 between the two systems. Existing customer bases for both systems are wary of change, but supporting two products is a drain on the vendor's capabilities.

By identifying exactly what the functional overlap is, a web service or series of web services could be written which provides the combined functionality. In the next release of System A and System B, customers will still have the same look and feel to their beloved chosen system, but will now be using a shared capability through calling the same web services in these cross-functional areas. Going forwards, new customers can then buy an overall system (System AB, let's call it) which is seamless and has no functional redundancy. Over time, existing customers can be drip-fed the extra functions in the same manner, through a set of web services operating through SOA architectures, resulting in a common overall system with a veneer of a different front end being the only difference.

Cloud computing can also be utilised to make this simpler: the new functions can be provided from a public cloud environment, with the on-premise software being changed to point to the new functions as needed, meaning that a single instance of the new functions is maintained and managed, and more of the software management is brought back under the vendor's control.
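To make the idea concrete, here is a minimal sketch in Python of one shared web service standing in for an overlapping function; the endpoint name, the "address normalisation" task and the use of the Flask framework are all illustrative assumptions, not anything from the original piece. Both the System A and System B front ends would call the same endpoint, so each keeps its own look and feel while the logic lives in one place.

    # A hypothetical shared web service for an overlapping function
    # (call it Function 4); Flask is used purely for illustration.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/shared/v1/function4", methods=["POST"])
    def function4():
        # Single implementation shared by both legacy front ends.
        payload = request.get_json()
        return jsonify({"normalised": payload.get("address", "").strip().upper()})

    if __name__ == "__main__":
        app.run()

Each legacy product then swaps its internal call for a call to this endpoint, which is exactly the drip-feed migration described above.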

    A functional approach to how systems work can stop acquisitive vendors from running into systems overload.


Retro Networking: An Argument For Analogue

Data, a collection of 1s and 0s, is pretty well understood these days, with the idea being to send as many of these 1s and 0s down a given transport mechanism (copper, fibre or wireless) as possible.

But, just maybe, we're looking at this in the wrong way; maybe an analogue signal could do things so much better.

"This guy's off his rocker!" I feel many of you thinking, and don't worry, many would agree with you. But please bear with me, and I will explain.

In a standard data stream, there are only two states the data can be in: an off state (a zero) or an on state (a one). This was done in the first place because sensing the difference between an off and an on was pretty easy to do, but sensing differences between on states was not accurate enough to be able to do anything with. Therefore, an 8-bit piece of data needs 8 pulses of on/off to define it in order for a computer to understand it. That's your elementary bit of polishing up what you learned at school for today.

OK, so now to more college-style stuff. Electronics have, unsurprisingly, evolved over the years, and today's application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) are far more capable than they were. Therefore, a more nuanced signal can be sent through and understood.

In figure 1, we see a saw-tooth signal that shows a pretty standard digital signal, in this case just showing a simple 01010101010101 data pattern. Eight of these pulses are required to create a single meaningful piece of information, as 8 bits equals 1 byte. Therefore, the first 16 pulses of this equate to 2 bytes of 01010101, being 85 in decimal. The maximum that a byte can signify is therefore 11111111, or 255 in decimal.

    Figure 1
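As a quick sanity check of that arithmetic, in Python:

    # 01010101 read as binary is 85; the all-ones byte is the maximum, 255.
    assert int("01010101", 2) == 85
    assert int("11111111", 2) == 255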

However, if we use an analogue approach instead, then each pulse could be re-sized, using an eighth of a pulse height as the basis for deciding what the pulse holds as information.


Figure 2 shows this in more detail. As an analogue data stream, we do not have the stark differences of the digital on/off stream; we have a far smoother signal that covers 8 different states, with each pulse now carrying a full octal digit of data. An ASIC or FPGA capable of measuring these states could then be in a position to convert them into standard numbers from the 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6 and 0.7 states.

    In the figure, the first eight data points (our new byte) are therefore 0, 2, 0, 7, 2, 3, 4 and 5.

    Figure 2

By then using octal as the computer numbering rather than binary, an octal byte of information could have a maximum value of 77777777, or 16,777,215 in decimal. The first byte in figure 2 is therefore octal 02072345, decimal 554,213.

We have managed to push the possible theoretical data in a stream up by 65,793 times; pretty impressive, eh?

OK, a computer can really only operate on binary numeric operations. However, there is no reason why network equipment should not move from binary to octal in how it deals with the data stream, or even to decimal, hexadecimal or any other base, in order to give even greater data throughput capability. Accuracy in measuring the various states of the data stream is an imperative, though: measuring a 0.1 as a 0.2 would be slightly catastrophic. Noise would need to be removed as far as possible from the signal; any impact on the signal would make such accuracy impossible. I do believe, however, that we have reached the theoretical point where the above should be capable of being done.
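The decode step described above is easy to sketch in Python, using the figure 2 levels (quantised in steps of 0.1) as example input:

    # Eight analogue levels from figure 2, quantised to octal digits and
    # combined into one "octal byte".
    levels = [0.0, 0.2, 0.0, 0.7, 0.2, 0.3, 0.4, 0.5]
    digits = [round(level * 10) for level in levels]    # 0.7 -> 7, etc.
    octal_string = "".join(str(d) for d in digits)      # "02072345"
    print(octal_string, int(octal_string, 8))           # prints: 02072345 554213

A real receiver would be doing this quantisation in an ASIC or FPGA against a noisy waveform, which is exactly where the accuracy concern above bites.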

Just two very small issues: this would be incompatible with every piece of networking equipment that has gone before, and there are just a few vested interests within the existing vendor community. But why let these get in the way, eh?

    Anyway, just a bit of blue-sky thinking to brighten up your day.


Don't Let Vendor Hype Cloud Your Organisation

We are still in the midst of cloud hype. I still see far too many vendors taking their existing product range and creating a cloud message around it, much as when VendorA with ProductB became VendorA.com with eProductB during the dotcom boom.

However, just messaging an existing product as cloud does not make it so; much of what is there is just hosted software, not SaaS; virtualisation, not IaaS; statically resourced, not elastic. Who are prospective cloud customers to turn to for advice that is a bit more independent?

A little bit of motherhood and apple pie: cloud is a journey, not a destination. Many vendors take the archetypal approach of "if you want to get there, I wouldn't start from here", looking to use fork-lift upgrades from everything that you have in place to something new, which is generally, much to the surprise of the customer, the vendor's products.

No, what is really needed is someone who can look at what you already have, along with figuring out all the dependencies between the various applications and the hardware they reside on. Then, based on understanding the organisation's own risk profile, this chosen independent partner can help define what the desired end result should look like, and the steps that are required to move from what you already have to get there.

Through this mechanism, there is less emphasis on change for change's sake; more on optimising existing investments where they make sense, and on proceeding at a pace that makes sense for you, the customer.

The approach that I recommend is one where the initial asset discovery stage looks at the hardware and software, and (as stated above) looks at identifying the dependencies between them. Based on the process flows already in place within the organisation, gaps in how well optimised these flows are can be relatively easily identified at this stage.
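In code terms, the core artefact of that discovery stage is little more than a dependency map. A toy sketch in Python, with invented application and host names, shows why it is so useful: it answers "what breaks if we touch this?" instantly.

    # Invented example data: which applications depend on which hardware.
    deps = {
        "order-entry": ["mainframe-01"],
        "reporting": ["db-host-02", "web-host-03"],
        "intranet": ["web-host-03"],
    }

    def affected_apps(host):
        """Applications disturbed if this piece of hardware is changed."""
        return [app for app, hosts in deps.items() if host in hosts]

    print(affected_apps("web-host-03"))  # ['reporting', 'intranet']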

In many cases, a more independent partner will be able to look at applications and understand that, whereas a vendor with an agenda may just be of the mind of "Surely not still using a mainframe? With COBOL????", the fact that the application is doing just what the organisation needs, is well supported and managed, and looks like it will continue to do so for a good few years means that this is not a legacy environment: it is a valid, valuable environment.

There may be other applications which are just about OK: they will definitely need changing in the next 2-3 years, but at the moment they are not a constraint on the organisation's business. These can be flagged, and a strategic plan put in place for how to update them over a period of time.

Then there are the main priorities: applications that are already a constraint on the business. Surprisingly, many of these turn out to be relatively modern; applications that have been created to deal with bandwagons as they go past: mobile applications that only work on iPads, business intelligence solutions that are still dependent on the IT department creating new reports for users, and so on.

The partner can then work with the business to understand the real feeling about cloud: is there a real perceived concern about data security and the cloud? If so, even though the perception may be wrong, it may be difficult to sell a cloud solution to the business at this time. Where cloud is an option, the partner can make sure that it is the right platform, taking into account latency, availability, ongoing costs and so on.

However, any decisions that are made to change any part of the IT platform require the long-term view to be maintained, and cloud should not be completely ruled out.


Through an approach that is not tied to any particular vendor and looks at the existing and required future states, a reasonable gap analysis can be carried out and suitable advice provided to the business, which can then be taken or left as the business sees fit.

By taking a technology-agnostic view of its customers, a good service partner can provide advice that is more open and provides better bang for the buck. From my point of view, this approach is the best way to make sure that cloud works for an organisation, and that vendor hype does not cloud reality.

Killing of the Cloud: Was It A Contract Killing?

The forensic team are deployed, bunched around the body on the floor, trying to determine the cause of death. The body is that of Cloud; a lot had hung on Cloud's success. Over in the corner, Chief Tech E Nerd makes a pronouncement: "This was a contract killing."

Sharp intakes of breath all round: how can E be so certain? What are the tell-tale signs that make this a seemingly open-and-shut case? Let's bring in the TV trick of a shimmer to go back in time.

Cloud had been brought into the organisation to sweep away the old guard. The previous incumbent had been holding the business to ransom for years. It demanded ever higher payments, but never seemed to deliver on its promises. It had been time for a new play, and Cloud was seen as the one to do this.

In a rush of good feeling, the old was replaced with the new: a leaner operation, with less bloat; a more modern approach to how things should be done.

At first, everything seemed to go well. The organisation found that it could get on with its day-to-day tasks without the IT platform stopping it. Those wanting to try things out found that resources were available to them: they could try something, and if it worked, it could go live. If not, it could be moved to sleeping with the fishes, and no-one need know anything more about it.

Slowly, though, Cloud had shown his true colours. He brought in his friends from outside: Public Cloud and Hybrid Cloud joined what the organisation now knew was only one part of the group; the Cloud they knew was really Private Cloud. The three Clouds started to take over and make demands.

Things slowed down. Any request to carry out an action had to go through a decision-making process. The three Clouds argued as to whether the task should be carried out by their teams. Often, Public Cloud would win the argument, and yet his team was the slowest of the three; and with Private Cloud claiming ownership of the organisation's data, performance was poor, and the organisation suffered.

The organisation was back where it started: the great hope of a silver bullet had got bogged down in the inertia of a mixed environment, with little intelligence in where things should best be done. Governance had taken a big step backwards, because discipline was an early casualty.

Shimmer over; back to the present.

E saw the signs of this descent into Cloud chaos. Around the body, monitors were lit up with red; trouble tickets stamped "unresolved" littered the ground. How could it all have gone so badly wrong?

E recognised that the issue was not with the three Clouds, but with how they should work together, harmoniously. Without a common language around each other's strengths and weaknesses, the Cloud family was always doomed to be a failure. Public Cloud was certain that his way was always going to be the answer, whereas Private Cloud believed that only he could cover security; Hybrid Cloud stood little chance of brokering deals between them.

If only they had agreed contracts with each other beforehand, and been able to rapidly and effectively ask questions, then, if the organisation needed a task or process to be done, they would have been able to figure out the best mix of Clouds to make this happen. It wasn't quite as E had said: it wasn't a contract killing, it was the lack of contracts that did it for Cloud.

To make cloud computing work well for an organisation, some means of managing workloads in an intelligent and effective manner is required. Companies such as Egenera, with its PAN Manager, and VMTurbo, with its Operations Manager, are working towards providing such tools. IBM, with its Pure Systems, and EMC, with its ViPR concept, are also looking at how the right workload can be in the right place at the right time.
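The "contracts" idea reduces, at its simplest, to each platform declaring its capabilities and each workload declaring its needs. The Python sketch below is an invented illustration of that matching step, not a description of how any of the products above actually work.

    # Hypothetical capability declarations for two of the Clouds.
    PLATFORMS = {
        "private": {"sensitive_data_ok": True, "elastic": False},
        "public": {"sensitive_data_ok": False, "elastic": True},
    }

    def place(workload):
        """Pick a platform whose declared capabilities meet the workload's needs."""
        for name, caps in PLATFORMS.items():
            if workload["sensitive_data"] and not caps["sensitive_data_ok"]:
                continue
            if workload["needs_burst"] and not caps["elastic"]:
                continue
            return name
        return "hybrid"  # broker between the two when neither fits alone

    print(place({"sensitive_data": True, "needs_burst": False}))   # private
    print(place({"sensitive_data": False, "needs_burst": True}))   # public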

When bringing the Clouds into your organisation, make sure that you give them the right tools. We don't want another Cloud killing.

Oxymoronic Disaster Recovery

I am still hearing a lot from vendors about how good their disaster recovery is. Whether it is based on in-data centre approaches or through off-site activities, the cry seems to be "Choose us: our solution can give you a vague chance of survival!"

This seems slightly wrong to me. With all the technologies we have available to us these days, the real call should be "Choose us: we'll stop disasters happening!"

This is not just arguing around semantics. There is a wealth of difference between trying to rescue an organisation from the mire of a massive problem with its IT platform and trying to create a platform where any such problems can be effectively managed on the fly.

Let's get this clear: disaster recovery is an attempt to stop your organisation from going bust, and business continuity is an attempt to keep everything working, at least to some extent. Disaster recovery means that the problem has become an issue to everyone; the organisation cannot carry out the activities it needs to do. Business continuity may involve IT people running around screaming incoherently, but as long as this is just happening down at the data centre level, the business can continue working.

OK, business continuity used to be only for those who had immensely deep pockets. It involved data centre mirroring and mass replication of environments for it to work. Banks, government and a few others were using it; everyone else was implementing faster backup and recovery software on the premise that getting an organisation up and running again within one working day was better than four.

What can't have passed anyone by is that the world is busy virtualising. This gives the capability for cost-effective business continuity to be looked at.

Application images can be used within the data centre; if the data centre fails, then a cold image held in an external hosted data centre can be rapidly spun up. This will need the data to be replicated in real time, but that is becoming easier to do in the majority of cases where massive transactional throughput is not an issue. Even with large transactional volumes, the use of store-and-forward message buses (such as IBM WebSphere) can ensure that transactional state is maintained and few transactions are lost on any failure.
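The store-and-forward point is easy to demonstrate. The Python sketch below uses RabbitMQ via the pika library, a different bus from the IBM WebSphere example in the text, chosen only because it fits in a few lines: a durable queue plus persistent messages means a transaction accepted here survives a broker restart while the failed site is being brought back up elsewhere.

    import json
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # A durable queue survives a broker restart...
    channel.queue_declare(queue="transactions", durable=True)

    # ...and persistent messages survive along with it, so in-flight
    # transactions are stored until the recovery site can consume them.
    channel.basic_publish(
        exchange="",
        routing_key="transactions",
        body=json.dumps({"order_id": 1234, "amount": 99.95}),
        properties=pika.BasicProperties(delivery_mode=2),  # persistent
    )
    connection.close()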

The cost of maintaining cold images is very low; essentially, it should be storage costs only. The real cost kicks in when the image has to be spun up and used: resources have to be applied and used at this point. However, through


Unfortunately, though, many are too wedded to their appliances, while others do not have the smarts to be able to move to an intelligent SDX model in the timescales available. New players will emerge who are not constrained by earlier technology; these will play directly in the SDX model and may unseat some of the incumbents.

The big incumbents, such as Cisco, Juniper, IBM, HP and Dell, will have their choices: they can invent (not a good option) or acquire. In the big data dash, the big players of IBM, Oracle and SAP got it wrong by picking up the old guard of Cognos, Hyperion and Business Objects respectively, rather than looking at the interesting stuff the new guys were doing. I trust that lessons have been learnt, and that for the SDX world, those looking to acquire will be a tad more careful in their due diligence.

Even with the issues that there are with SDX, it is the only pragmatic way forward to create a smooth virtual platform on top of increasing physical complexity.

We live in interesting times; may you be able to live alongside them.

The 5 Keys To MDM

Mobile devices can be a real pain. They may have been brought in by the user through BYOD, with the organisation having little control over them. And even where the organisation has provided them, users have a propensity for losing the devices on a pretty regular basis. Topping it all off is the ease with which users can download apps for their own use, and then expect that these non-enterprise apps will work in their enterprise environment.

What steps can be taken to give a degree of enterprise control over what is essentially a consumer device? Some of the following can be used in combination; some stand-alone:

1) Centralise everything. Server-based computing means that not only the desktop but all the applications and data can be held by the organisation, not by the user on the device. As everything has to go through the centre, other tools such as data leak prevention (DLP) and VPNs can be implemented for additional security. Sure, it means that a good online connection is needed, but it is a solution, turning the device into a lump of silicon, metals and plastics used just as a window on the enterprise world.

2) Sandbox wherever possible. Using a sandbox on the device gives greater granular control over data use. For example, the user may be prevented from cutting and pasting data, or forwarding emails. Sandboxing also means that even if the user trawls through every rogue web site on the planet and ends up with a massively infected device, the worms, Trojans and other malware cannot pass through the walls of the sandbox, keeping the organisation clean.

3) Encrypt. Encrypt data on the move and at rest, so that even if data does manage to get out from the centralised system and the sandbox, it is just a random collection of 1s and 0s that will need a load of compute power thrown at it to try to read it (see the sketch after this list). Encryption of data on the move is just as important, to stop man-in-the-middle attacks and someone taking the data in transit.

4) Offer enterprise-quality apps. Users are a little like magpies: they see something shiny and want it, immediately. App stores are full of shiny stuff. If you look at a user's device, it will typically have tens to hundreds of apps on it, many of which have been paid for, most of which are unused after the first two or three times. Some will, however, endure: identify these, find enterprise-quality apps that give equivalent or better functionality with equal or better ease of use, and make them available to the users directly via a corporate portal. Make it known that the other apps are non-preferred, and that any data loss caused through the use of non-preferred apps will be a disciplinary issue. Apps will result in the use of the device as more than a window on the world, so make sure that the apps do give good levels of security and central manageability.

5) Use secure streaming. An alternative to using device-specific apps, secure application streaming from a company such as Numecent means that standard enterprise applications can be rapidly, effectively and securely run natively on the mobile device without any major changes to the application. The device's own compute power can be directly harnessed, and data can be temporarily stored on the device in a non-persistent manner, so that the user gets the benefits of local compute speed, rather than everything happening over the mobile or WiFi network.
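As promised under point 3, here is a minimal sketch of at-rest encryption using the symmetric Fernet scheme from Python's cryptography package. It is purely illustrative: a real mobile deployment would keep the key in a central or platform key store, never alongside the data.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice: held centrally, not on-device
    cipher = Fernet(key)

    token = cipher.encrypt(b"customer list ...")
    print(token)                  # random-looking bytes if the device is lost

    print(cipher.decrypt(token))  # the original data, only with the key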

The one thing that you may have noticed here is that although this piece is nominally about mobile device management, there has been no mention of MDM tools. Unfortunately, MDM tools as a means of controlling enterprise data tend not to work, as you need to have a degree of control over the device that the user may not want you to have. Therefore, by following the five tips above, you could provide a data management environment that has no need for a device management component.

And that is how it should be: the device may be worth a few hundred dollars, and managing the data effectively will always leave the device as being worth just that much. Trying to apply rules and controls over data that is stored persistently on the device means that the device could be worth millions or even billions of dollars in the wrong hands, if they can get to the data held on it.

    Chasing Labour Arbitrages Tail

The world is shrinking. Network bandwidth and performance around the globe are making the choice of placement of a data centre, or of access to a cloud-served function, more a matter of politics than of response time. This makes many look at how they can save money on how their IT services are provided, which, in itself, is no bad thing.

When looking at how much IT costs, the actual hardware and software licensing is, in reality, a relatively small part of the overall cost. Energy costs are a pretty major concern, due to how variable they tend to be, combined with an inexorable trend upward. Most organisations are addressing this through greater use of virtualisation and consolidation, along with newer approaches to cooling, so as to minimise energy usage.

The really painful expenditure comes from more localised costs. For example, a data centre facility in Manhattan is likely to cost a little bit more than one in Marrakesh; the cost of a skilled workforce is going to be more in Tokyo than it is in Timbuktu. The two tend to be interlinked, and some organisations have been tempted to chase the money and look to countries where property prices and the cost of labour are cheap. From an HR point of view this is known as labour arbitrage; arbitrage means taking advantage of a difference in price for an item across two locations. However, such an approach can be fraught with danger.

Firstly, consider the late 1990s. At this point, the main countries for labour arbitrage were the Philippines, the Netherlands and Ireland, all based around these countries being used for contact centres. The Philippines started to lose out as the Netherlands and Ireland pumped money into their respective markets through advantageous tax positions funded through the EU. Ireland, a country of 4 million people, was soon saturated and had to encourage non-Irish inward migration by offering high salaries, which sort of defeated the object.

Then India came in. With minimal government funding, the massive difference in salaries between an EU employee and an Indian one meant that the cream of Indian university graduates could be paid to work in contact centres, whereas such a position in the West was seen as a job, not a career.

However, many Western companies looking to India for a cheaper contact centre then fell into the misperception that, as there were close to a billion people in India, there was an inexhaustible pot of skill to draw from. Unfortunately, a large proportion of that billion have barely seen any education, never mind university, and the available resources were soon used up. This led to salaries rising rapidly; in some regions (such as Mumbai and Pune), salaries rose by up to 85% per annum. In combination with the emerging middle classes in India driving up property costs, India became less of an arbitrage target.

This left Indian companies with two choices: look elsewhere around the globe to maintain arbitrage, or go for the next level down in Indian skills. The former has led to Indian companies such as TCS setting up offices in other countries to maintain a level of labour arbitrage. The latter has led to a collapse in the quality of Indian call centres.


As this happened, other countries were pitching for business: Kenya is a well (internet-)connected country with good skills; Egypt made a bid for being the next country on the map. However, political issues with many arbitrage options have led to companies finding that they have to enact a Plan B to extricate themselves and redeploy elsewhere.

Ultimately, doing things effectively will result in cost savings, and it may be effective to move certain functions to a different part of the globe to make use of available skills, or for political or other reasons. The fact remains that chasing cost savings through arbitrage is the wrong way of doing things.

What should not be surprising is how cyclical everything is: the Philippines has made a reappearance on the radar for many US-based companies, while Ireland has reappeared in the EU as its economy has tumbled, resulting in a collapse in the cost of human resources.

Chasing labour arbitrage is a fool's errand; just do it right, through investing in a flexible IT platform from the start.

Is Your Data Centre Getting Sucked Into The Cloud?

Remember all that talk about the $20m server? You know, the one server that tipped you over the edge into having to build a new data centre facility, since the existing one was now too small? Seems to be so far in the past, doesn't it?

As we've gone through the steps of application rationalisation and consolidation, and on to a more virtualised IT platform, it's far more likely that you are looking at a small pile of IT equipment sitting pathetically in the middle of an over-sized data centre facility. Sure, you may be wondering how you are going to get enough power to that pile of ultra-dense equipment, but space is probably not that high on your priority list any longer.

Indeed, as you find that the march of shadow IT across your organisation is leading to more of the platform disappearing into the public cloud, you may be wondering why you bother at all. Was Nicholas Carr right: is IT dead?

Well, no, of course not. IT is morphing into a different beast (which is what Carr was on about anyway). While the degree to which business depends on IT will continue to increase, it is now going to be implemented and used in different ways. This means that any facility that houses IT equipment owned by the organisation will need to be designed and managed differently.

For anyone looking to build a new data centre for their own use, this may be a major problem. Historically, there was only one thing that seemed certain about a data centre facility: at some stage, you would grow out of it. Now that is no longer the case, so trying to create a magic facility that can grow and shrink with the amount of equipment inside it may be beyond the capabilities of your organisation's facilities management team.

Far better to take a long, hard look at what you are running in your existing data centre and figure out what you will need to run in an owned facility for the foreseeable future, even if that foreseeable future is only around 12 months. You also need to ask yourselves: "What else do we want to run on owned server, storage and network equipment, but does not need to be put into an owned facility?" Great: that can all go to a co-location facility. This provides the grow-and-shrink flexibility that you will need. And in addition to the flexibility factor, it's no longer your problem: someone else has to build the facility, implement connectivity, power distribution, cooling and auxiliary generation, negotiate energy prices, and do all the rest of the grunt work that goes with owning a data centre. You get a nice clean area where you can put your equipment, and off you go.

Set up in this way, when you need extra functionality it simply becomes a case of deciding whether it should be provided from within the owned facility, from the colo facility, or whether that wonderful public cloud thingummy is a better place to source that function from.


house. The grid makes a request for the freezer to be turned off, and the home monitors the temperature of the freezer itself, turning the power back on again when required, without the need to send more data over the network.

This can be done with security systems, and with certain types of data, where a meshed Pi network could act as a content delivery network and cache certain types of data for the house as a whole. It can be programmed to carry out advanced actions, such that a single, small data packet sent from outside kicks off a far more complex sequence of events in the house. For example, I use my mobile to say that I'm on my way home: a single command goes to the house, and the mesh of controllers turns on certain lights for me, sets the temperature to a predetermined level, turns on the oven (or makes that slice of toast) and runs a bath for me; whatever I have decided should be done.
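That "one small packet in, complex sequence inside" pattern is simple to sketch. The Python below is an invented illustration: a controller on the home mesh expands a single inbound command into local actions, so the chatter never has to cross the WAN.

    # Hypothetical scene definitions held by the home controller.
    SCENES = {
        "coming_home": [
            ("lights", "on"),
            ("heating", "set 21C"),
            ("oven", "preheat"),
            ("bath", "run"),
        ],
    }

    def handle_packet(command):
        """Expand one small inbound command into a local sequence of actions."""
        for device, action in SCENES.get(command, []):
            print(device, "->", action)  # stand-in for a local mesh message

    handle_packet("coming_home")  # one packet from my mobile triggers it all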

Without the intelligent home, the internet of things could swamp existing networks with large amounts of small but chatty data. Hopefully, the emergence of cheap, powerful systems such as the Pi will help to put in place the control that will be so desperately needed.

    The Future May Be A Strange, But Familiar,

    Place In the space battle years of the 1950s and 1960s, the magazines of the day took to looking to what the future may belike by the year 2000. Flying cars, jetpacks, living on Mars, and various other things were put forwards as being withinreach. Here we are in 2013, and although there have been examples of the first two, and people are now puttingthemselves forward for the last one, those views from around 40 or so years ago seem a tad quaint to us now.

But I have lived through massive change. My first interaction with a computer was at secondary school, where I time-shared a mainframe sited at a local university over an acoustic modem. I then got my first computer, a Sinclair ZX80, where programs had to be loaded from audio cassettes. My first "proper" computer, a BBC Micro, had 32KB of memory, could render 16 colours and connected to the world via a 1200/75 half-duplex modem.

30 years on, and I sit at a PC with 64GB of memory and full 32-bit colour depth, and am permanently connected to the internet at 24Mb/s, with access to a world of information and people.

    And it is still far too slow for me.

This is likely to remain the problem. Whereas back in the late 1970s just being able to connect to another system somewhere else on the planet seemed like the deepest magic, now it is being put forward as a basic human right, and the things we do with the internet are continually pushing it.

Uploading a picture was slow, so we did thumbnails of just selected scanned photos that were a bit blurry, but were impressive enough for people to get the idea. Now, we take thousands of already-electronic pictures on cameras with 10, 20 or even 40 million pixels and move them directly into the cloud for storage. The capability to download video at 640x320 pixel density with mono sound has moved to a need for full HD with surround sound. Real-time video conferencing with relatives on the other side of the planet has moved from "please don't move too much or your face smears" to "You're looking tired, dear; the wrinkles are more pronounced."

The future will undoubtedly bring a lot more of this kind of activity on the internet. 4K TV and films are already on the horizon, and the internet will want to be at the forefront of this as a delivery mechanism. The Internet of Things will increase the chattiness of the internet, and this low-level, small-sized but high-volume traffic will need to be adequately managed in order to allow the internet to work effectively with all the other traffic going over it.

Undoubtedly, bandwidth will continue to grow. Today's copper is being replaced with fibre, either to the cabinet or directly to the home. As fibre to the home becomes more of the norm, bandwidth will have the capability to grow to the Gb/s and above levels. In the battle to drive revenues, though, service providers are likely to remain wedded to the use of contended connections, meaning that a single connection will be shared between multiple users, resulting in a far slower actual connection. However, in the same time periods, the average private network (which will include consumer home networks) will have gone from 1Gb/s to 10Gb/s and then on to 40Gb/s or beyond. It is highly unlikely that wide area network speeds will catch up to any great extent with local area network speeds.
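For a rough feel of what contention does to a headline speed, here is a worked example; the 50:1 ratio is an illustrative consumer-grade figure, not one quoted in the text:

    # Worst-case per-user rate on a contended line: the headline rate
    # divided by the contention ratio, when every sharer is active at once.
    line_rate_mbps = 1000   # a 1Gb/s fibre service
    contention_ratio = 50   # 50 subscribers share the backhaul

    print(line_rate_mbps / contention_ratio, "Mb/s per user at full load")  # 20.0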

And for the average user, this means that in 30 years' time, they will likely still be sat there bemoaning the fact that the internet is just far too slow.


    About Silver Peak Systems

Silver Peak software accelerates data between data centres, branch offices and the cloud. The company's software-defined acceleration solves network quality, capacity and distance challenges to provide fast and reliable access to data anywhere in the world. Leveraging its leadership in data centre class wide area network (WAN) optimisation, Silver Peak is a key enabler for strategic IT projects like virtualisation, disaster recovery and cloud computing.

Download Silver Peak software today at http://marketplace.silver-peak.com.


    About Quocirca

Quocirca is a primary research and analysis company specialising in the business impact of information technology and communications (ITC). With worldwide, native-language reach, Quocirca provides in-depth insights into the views of buyers and influencers in large, mid-sized and small organisations. Its analyst team is made up of real-world practitioners with first-hand experience of ITC delivery who continuously research and track the industry and its real usage in the markets.

Through researching perceptions, Quocirca uncovers the real hurdles to technology adoption: the personal and political aspects of an organisation's environment, and the pressures of the need for demonstrable business value in any implementation. This capability to uncover and report back on end-user perceptions in the market enables Quocirca to provide advice on the realities of technology adoption, not the promises.

Quocirca research is always pragmatic, business-orientated and conducted in the context of the bigger picture. ITC has the ability to transform businesses and the processes that drive them, but often fails to do so. Quocirca's mission is to help organisations improve their success rate in process enablement through better levels of understanding and the adoption of the correct technologies at the correct time.

Quocirca has a pro-active primary research programme, regularly surveying users, purchasers and resellers of ITC products and services on emerging, evolving and maturing technologies. Over time, Quocirca has built a picture of long-term investment trends, providing invaluable information for the whole of the ITC community.

Quocirca works with global and local providers of ITC products and services to help them deliver on the promise that ITC holds for business. Quocirca's clients include Oracle, IBM, CA, O2, T-Mobile, HP, Xerox, Ricoh and Symantec, along with other large and medium-sized vendors, service providers and more specialist firms.

Details of Quocirca's work and the services it offers can be found at http://www.quocirca.com.

Disclaimer: This report has been written independently by Quocirca Ltd. During the preparation of this report, Quocirca may have used a number of sources for the information and views provided. Although Quocirca has attempted wherever possible to validate the information received from each vendor, Quocirca cannot be held responsible for any errors in information received in this manner.

Although Quocirca has taken what steps it can to ensure that the information provided in this report is true and reflects real market conditions, Quocirca cannot take any responsibility for the ultimate reliability of the details presented. Therefore, Quocirca expressly disclaims all warranties and claims as to the validity of the data presented here, including any and all consequential losses incurred by any organisation or individual taking any action based on such data and advice.

    All brand and product names are recognised and acknowledged as trademarks or service marks of their respectiveholders.

REPORT NOTE: This report has been written independently by Quocirca Ltd to provide an overview of the issues facing organisations seeking to maximise the effectiveness of today's dynamic workforce.

The report draws on Quocirca's extensive knowledge of the technology and business arenas, and provides advice on the approach that organisations should take to create a more effective and efficient environment for future growth.
