Akamai Cloud Computing Perspective


8/8/2019 Akamai Cloud Computing Perspective

    White Paper

Akamai and Cloud Computing: A Perspective from the Edge of the Cloud
by Tom Leighton, Co-Founder and Chief Scientist, Akamai Technologies


Table of Contents

Introduction
Understanding the Cloud
    The Cloud Computing Framework
        Virtualization
        Infrastructure-as-a-Service
        Platform-as-a-Service
        Software-as-a-Service
        Cloud Optimization Services
    Public Clouds and Private Clouds
Anatomy of a Cloud
    The Middle Mile Conundrum
        Peering Point Problems
        Routing Vulnerabilities
        Inefficient Communications Protocols
        Network Outages
    Cloud Computing Architectures
        Centralized Datacenters: New Opportunity, Old Approach
        Highly Distributed Networks: Getting Close to End Users
Akamai's EdgePlatform: Optimizing the Cloud
    Accelerating Cloud Computing Applications
        Route Optimization
        Communications Optimization
        Application Optimization
    Distributing Application Components to the Edge
    Securing Cloud Applications and Platforms
    Ensuring Site and Application Availability
Conclusion
About Akamai


Introduction

As one of the hottest concepts in IT today, cloud computing proposes to transform the way IT is consumed and managed, with promises of improved cost efficiencies, accelerated innovation, faster time-to-market, and the ability to scale applications on demand. While the market is abundant with hype and confusion, the underlying potential is real and is beginning to be realized.

In particular, SaaS applications and public cloud platforms have already gained traction with small and startup businesses. These offerings enable companies to gain fast, easy, low-cost access to systems that would otherwise cost them millions of dollars to build. At the same time, cloud computing has drawn the cautious but serious interest of larger enterprises in search of its benefits of efficiency and flexibility.

However, as companies begin to implement cloud solutions, the reality of the cloud itself comes to bear. Most cloud computing services are accessed over the Internet, and thus fundamentally rely on an inherently unpredictable and insecure medium. In order for companies to realize the potential of cloud computing, they will need to overcome the performance, reliability, and scalability challenges the Internet presents.

This whitepaper provides a framework for understanding the cloud computing marketplace by exploring its enabling technologies and current offerings, as well as the challenges it faces given its reliance on the Internet. With an understanding of these challenges, we will examine Akamai's unique role as provider of the critical optimization services that will help cloud computing fulfill its promise to deliver efficient, on-demand, business-critical infrastructure for the enterprise.

Understanding the Cloud

Simply defined, cloud computing refers to computational resources (computing) made accessible as scalable, on-demand services over a network (the cloud). And yet, cloud computing is far from simple. It embraces a confluence of concepts (virtualization, service-orientation, elasticity, multi-tenancy, and pay-as-you-go) manifesting as a broad range of cloud services, technologies, and approaches in today's marketplace. To facilitate our discussion of this diverse marketplace, we first lay out a framework that gives structure to the different offerings in the cloud computing space. We will also explore the role of public and private clouds in the marketplace.

The Cloud Computing Framework

Our cloud computing framework has five key components. The first, virtualization technology, can be thought of as an underpinning of cloud computing. By abstracting software from its underlying hardware, virtualization lays the foundation for enabling pooled, shareable, just-in-time infrastructure. On top of this technology base, cloud computing's principal offerings can be categorized into three main groups: Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service. Cloud optimization is the final, critical piece of the framework, encompassing the solutions that enable cloud computing to scale and to deliver the levels of performance and reliability required for it to become part of a business's core infrastructure. We will now look at each of these framework components in more detail.

[Figure: Cloud Computing Framework]


Virtualization

Virtualization is the technology that gave birth to the current cloud computing frenzy and is arguably the trend having the highest impact on the evolution of infrastructure. By abstracting server software from the underlying hardware, server virtualization improves the efficiency and availability of resources and applications running on that server. It is generally acknowledged that roughly 80% to 90% of enterprise computing capacity is unused at any given time. Virtualization enables these once-idle CPU cycles to be used.

Taking the concept of server virtualization to the cloud means extending it beyond the more efficient use of a single physical machine or cluster to the aggregation of computing resources across multiple data centers, applications, and tenants, allowing each to scale up or down on demand. This enables cloud providers to efficiently manage and offer on-demand storage, server, and software resources for many different customers simultaneously.

Significant cloud virtualization technologies include:

• Microsoft (Hyper-V)
• VMware (ESX, as well as multiple related VMware offerings)
• Xen (open source hypervisor, used by Amazon EC2 and in Citrix XenServer)

Infrastructure-as-a-Service

Infrastructure-as-a-Service (IaaS) describes the category of cloud computing offerings that make basic computational resources (such as storage, disk space, and servers) available as on-demand services. Rather than using physical machines, IaaS customers get access to virtual servers on which they deploy their own software, generally from the operating system on up.

IaaS offers cost savings and risk reduction by eliminating the substantial capital expenditures required when deploying infrastructure or large-scale applications in-house. Cloud providers generally offer a pay-as-you-go business model that allows companies to scale up and down in response to real-time business needs, rather than having to pay up front for infrastructure that may or may not get used, or having to overprovision resources to address occasional peaks in demand. To date, IaaS has seen heaviest adoption among small to mid-sized ISVs and businesses that don't have the resources or economies of scale to build out large IT infrastructures.

Examples of cloud IaaS offerings include:

• Akamai (NetStorage and CDN services)
• Amazon (Elastic Compute Cloud/EC2 and Simple Storage Service/S3)
• GoGrid (Cloud Servers and Cloud Storage)
• Joyent (Accelerator)

Platform-as-a-Service

A fast-growing category of cloud computing offerings is Platform-as-a-Service (PaaS), which consists of offerings that enable easy development and deployment of scalable Web applications without the need to invest in or manage any underlying infrastructure. By providing higher-level services than IaaS, such as an application framework and development tools, PaaS generally provides the quickest way to build and deploy applications, with the trade-off being less flexibility and potentially greater vendor lock-in than with IaaS.

The PaaS landscape is broad and includes vendors such as:

• Akamai (EdgeComputing)
• Elastra and RightScale (platform environments for Amazon's EC2 infrastructure)
• Google (App Engine)
• Microsoft (Azure)
• Oracle (SaaS Platform)

Software-as-a-Service

The best enterprise-ready examples of cloud computing are in the Software-as-a-Service (SaaS) category, where complete end-user applications are deployed, managed, and delivered over the Web. SaaS continues the cloud paradigm of low-cost, off-premise systems and on-demand, pay-per-use models, while further eliminating development costs and lag time. This gives organizations the agility to bring services to market quickly and frees them from dependence on internal IT cycles. The speed and ease with which SaaS applications are purchased and consumed has made this category of cloud computing offerings the most widely adopted today.

Important cloud SaaS vendors and services include:

• Adobe Web Connect, Cisco WebEx, Google Mail, Hotmail, Yahoo! Mail (communications applications)
• Demandware (e-Commerce)
• NetSuite (Accounting, ERP, CRM, and e-Commerce)
• SAP Business ByDesign (HR, Finance, and other ERP applications)
• Workday (HR, Finance, and Payroll)

Cloud Optimization Services

The final piece of the cloud computing framework, cloud optimization services provide performance, scale, and reliability for all of the previously described components of cloud computing. They enable cloud offerings to operate across an unpredictable and unreliable Internet while delivering the robust levels of service required by enterprises.


The value of cloud optimization services can be understood as a direct function of application adoption, speed, uptime, and security. Without optimization services, cloud offerings are at the mercy of the Internet and its many bottlenecks, and the resulting poor performance has a direct impact on the bottom line. For example, a site leveraging IaaS components that fail to scale for a flash crowd will lose customers and revenue. Likewise, a SaaS application that is slow or unresponsive will suffer from poor adoption. Thus, cloud optimization is essential for cloud computing services to be able to meet enterprise computing requirements.

In the Anatomy of a Cloud section below, we will take a closer look at the root causes of the Internet's bottlenecks. This will lay the foundation for understanding why Akamai, with its highly distributed network of servers, is uniquely positioned to provide the critical optimization services that can transform the Internet into a high-performance platform for the successful delivery of cloud computing services.

Public Clouds and Private Clouds

Most of the early spend and traction for cloud computing (including, for example, the IaaS, PaaS, and SaaS vendors mentioned above) have been focused on public cloud services: those that are accessed over the public Internet. Public cloud offerings embody the economies of scale and the flexible, pay-as-you-go benefits that have driven the cloud computing hype.

More recently, the concept of private clouds (or internal clouds) has emerged as a way for enterprises to achieve some of the efficiencies of cloud computing with an infrastructure internal to their organization, thus increasing perceived security and control. By implementing cloud computing technologies behind their firewall, enterprise IT teams can enable pooling and sharing of compute resources across different applications, departments, or business units within their company.

Private clouds require significant up-front development costs, ongoing maintenance, and internal expertise, and therefore provide a much different benefit profile compared to public clouds. Private clouds are most attractive to enterprises that are large enough to achieve economies of scale in-house and where the ability to maintain internal control over data, applications, and infrastructure is paramount. Even private clouds, however, often have at least partial dependence on the public Internet, as these large enterprises must support workers in dispersed geographic locations as well as telecommuting or mobile employees.

In reality, most enterprise cloud infrastructures will be hybrid in nature, where even a single application can run across a combination of public, private, and non-cloud environments. For example, a company may run highly sensitive components strictly on-premise (in a non-cloud environment) while leveraging public cloud offerings for other application components to achieve cost-effective scalability.

So regardless of the path that enterprise adoption of cloud computing takes, the public cloud (that is, the Internet) will play a vital role. And the taming of that cloud, with its inherent performance, security, and reliability challenges, is an element essential to cloud computing's success.

Anatomy of a Cloud

When wrapped up in the hype of cloud computing, it is easy to forget the reality that cloud computing's reliance on the Internet is a double-edged sword. On one hand, the Internet's broad reach helps enable the cost-effective, global, on-demand accessibility that makes cloud computing so attractive. On the other hand, the naked Internet is an inherently unreliable platform fraught with inefficiencies that adversely impact the performance, reliability, and scalability of applications and services running on top of it.

We now take a closer look at the causes of these bottlenecks and the impact that different cloud computing architectures can have in addressing them.[1]

The Middle Mile Conundrum

The infrastructure that supports any Web-based application, including cloud computing services, can be split into three basic parts: the first mile, or origin infrastructure; the last mile, meaning the end user's connectivity to the Internet; and the middle mile, or the paths over which data travels back and forth across the Internet, between origin server and end user. Each of these components contributes in different ways to performance and reliability problems for Web-based applications and services.

A decade ago, the last mile of the Internet was likely to be one of the biggest bottlenecks, as end users struggled with slow dial-up modems. Today, however, high levels of global broadband penetration (over 400 million subscribers worldwide), as well as continually increasing broadband speeds, have not only made the last-mile bottleneck history, they have also increased pressure on the rest of the Internet infrastructure to keep pace.[2]

First mile bottlenecks are fairly well understood and, more importantly, fall within the origin provider's control. Perhaps the biggest first mile challenge lies in the ability to scale the origin infrastructure to meet variable levels of demand. Not only is it difficult to accurately predict and provision for demand, but it is costly to have to overprovision for occasional peaks in demand, resulting in infrastructure that is underutilized most of the time. Cloud computing promises to alleviate this to some degree, as cloud providers can pool resources among multiple customers so that variability in demand is smoothed out somewhat across the group.
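This smoothing effect can be illustrated with a short simulation. The demand model below is made up purely for illustration (independent, uniformly distributed per-tenant demand): as tenants are pooled, the peak-to-mean ratio of total demand shrinks, so a shared provider needs far less headroom per tenant than each tenant would need on its own.

```python
import random
import statistics

def peak_to_mean(num_tenants: int, samples: int = 2000) -> float:
    """Peak-to-mean ratio of total demand across independent tenants."""
    totals = [sum(random.uniform(0, 100) for _ in range(num_tenants))
              for _ in range(samples)]
    return max(totals) / statistics.mean(totals)

random.seed(7)
for n in (1, 10, 100):
    # A lone tenant must provision for roughly twice its average demand;
    # a 100-tenant pool needs only a small margin above its average.
    print(f"{n:3d} tenants -> peak/mean ~ {peak_to_mean(n):.2f}")
```

The exact numbers depend on the invented demand distribution, but the trend (peak/mean falling toward 1 as the pool grows) is the statistical-multiplexing argument behind shared cloud infrastructure.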


This leaves the middle mile, the mass of infrastructure that comprises the Internet's core. Indeed, the term middle mile is itself a misnomer in that it refers to a heterogeneous infrastructure that is owned by many competing entities and typically spans hundreds or thousands of miles. While we often refer to the Internet as a single entity, it is actually composed of 13,000 different networks, joined in fragile co-opetition, each providing access to some small subset of end users. The largest single network accounts for only about 8% of end user access traffic, and per-network share of access traffic drops off dramatically from there, to be spread out over a very long tail. This means the performance of any centrally hosted Web application, including cloud computing applications, is inextricably tied to the performance of the Internet as a whole, including its thousands of disparate networks and the tens of thousands of connection points between them.

    Given this complex and fragile web, there are many

    opportunities for things to go wrong. We now take a

    closer look at four of the key causes of Internet middle-mile

    performance problems.

Peering Point Problems

Internet capacity has evolved over the years, shaped by market economics. Money flows into the networks from the first and last miles, as companies pay for hosting and end users pay for access. First- and last-mile capacity has grown 20- and 50-fold, respectively, over the past five to 10 years. On the other hand, the Internet's middle mile, made up of the peering and transit points where networks trade traffic, is literally a no man's land. Here, economically, there is very little incentive to build out capacity. If anything, networks want to minimize traffic coming into their networks that they don't get paid for.

As a result, peering points are often overburdened, causing packet loss and service degradation, and, in turn, slow and uneven performance for cloud-based applications. The further away a cloud service is from its end customers, the greater the impact of Internet congestion. For enterprises that are accustomed to LAN-based speeds, this performance bottleneck can seriously affect the adoption of cloud computing.

The fragile economic model of peering can have even more serious consequences. For example, major network provider Cogent de-peered for several days with Telia and Sprint in March and October 2008, respectively, over peering-related business disputes. In both cases, the de-peering partitioned the Internet. This means, for example, that users on the Sprint network, as well as any other networks single-homed to Sprint, would not have been able to reach any cloud services or applications hosted on Cogent (or any network single-homed to Cogent). According to the Internet analyst firm Renesys, the Cogent-Sprint de-peering left more than 3,500 networks with significantly impaired connectivity.[3]

Routing Vulnerabilities

BGP, or Border Gateway Protocol, is the Internet's inter-network routing algorithm: the protocol that determines how data packets travel from one network to another within the cloud. While BGP is simple and scalable, it was designed neither for performance nor efficiency, and thus has a number of well-documented limitations.

For example, BGP is vulnerable to foul play as well as human error. This was widely evidenced in February 2008, when Pakistan accidentally caused a global YouTube blackout by broadcasting a more specific BGP route for YouTube.[4]
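The mechanics of that incident can be sketched with Python's standard ipaddress module. This is an illustrative model, not a BGP implementation: it shows only the longest-prefix-match rule by which a more specific announcement (here a /24, using the prefixes widely reported in the YouTube incident) captures traffic away from the legitimate, less specific /22.

```python
from ipaddress import ip_address, ip_network

# Illustrative forwarding table: the legitimate /22 alongside a
# mistakenly announced, more specific /24 covering part of it.
routes = {
    ip_network("208.65.152.0/22"): "legitimate origin",
    ip_network("208.65.153.0/24"): "hijacked next hop",
}

def next_hop(dst: str) -> str:
    """Pick the route with the longest (most specific) matching prefix,
    as Internet forwarding does, with no notion of legitimacy."""
    matches = [net for net in routes if ip_address(dst) in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("208.65.153.238"))  # inside the /24 -> "hijacked next hop"
print(next_hop("208.65.152.10"))   # only in the /22 -> "legitimate origin"
```

Because forwarding simply prefers the most specific prefix, any network that accepts the erroneous /24 sends that slice of traffic to the wrong place, which is exactly how a single misconfiguration blacked out a global service.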

Although BGP does respond to major changes in network congestion, it is not as good at making fine distinctions between the traffic on multiple routes, and it reacts more slowly to changes in traffic levels. Thus, while BGP may work reasonably well when only best-efforts delivery is needed, requirements for enterprise cloud computing are typically far more stringent, demanding greater performance and reliability than BGP alone can deliver.

Inefficient Communications Protocols

Architected for reliability rather than efficiency, TCP, or Transmission Control Protocol, the Internet's primary communications protocol, is another source of drag on the Internet. TCP requires multiple round-trips (between the two communicating parties) to set up and tear down connections, uses a conservative initial rate of data exchange, and recovers slowly from packet loss. This overhead is especially detrimental to the performance of SaaS- and PaaS-based enterprise applications, as these applications tend to be chatty, requiring many small, quick, back-and-forth communications.

Another surprising effect of the way TCP works is that long distances between communicating parties can lead to very low throughputs and very high download times, an effect that becomes increasingly pronounced as file sizes grow larger. This is because TCP allows only small amounts of data to be sent at a time before pausing and waiting for acknowledgments from the receiving end. Thus, even a very small network latency (the time it takes a single data packet to travel across the network) can translate into a huge delay for large files. As latency is directly tied to distance (i.e., it is lower-bounded by the speed of light), we see that for large data transfers such as media files, end user download times are limited not by network capacity or last mile bandwidth, but by the distance between server and end user. This is a critical issue for those considering IaaS storage solutions, among others.
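This window-and-acknowledgment behavior caps throughput at roughly one window per round trip, independent of link capacity. A quick back-of-the-envelope calculation (the 64 KB window and the RTT figures below are illustrative assumptions, not measurements) shows how sharply distance alone limits download speed:

```python
def max_throughput_mbps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on TCP throughput: one full window per round trip."""
    return window_bytes * 8 / rtt_s / 1e6

WINDOW = 64 * 1024  # 64 KB, a historically common TCP window size

for label, rtt_s in [("same city", 0.005),
                     ("cross-country", 0.050),
                     ("intercontinental", 0.200)]:
    mbps = max_throughput_mbps(WINDOW, rtt_s)
    print(f"{label:16s} RTT {rtt_s * 1000:5.0f} ms -> at most {mbps:6.2f} Mbps")
```

At a 200 ms round trip the ceiling is about 2.6 Mbps no matter how fat the pipe, which is why download times for large files are governed by server-to-user distance rather than bandwidth.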

Network congestion further complicates the problem, since TCP requires transmitters to back off and send even less data before waiting for acknowledgment if packet loss is detected. This means that the interplay between different Internet bottlenecks can further exacerbate an already difficult situation.


Network Outages

As enterprises shift their computing from on-premises systems to the cloud, ensuring the reliability of their platform suddenly becomes far more complex. Internet failures can happen on several different levels, from a single router malfunction to a data center blackout to an entire network going offline. Unfortunately, large scale outages happen more often than one might expect. With causes that vary from trans-oceanic cable cuts and power outages to DDoS attacks and natural disasters, wide-scale network problems can severely disrupt communications across large regions of the globe.

Over the last year, for example, undersea cable cuts severely impacted communications in Southeast Asia and the Middle East on two different occasions. According to TeleGeography, the first incident, in January 2008, reduced bandwidth connectivity between Europe and the Middle East by 75%.[5] The second incident, in December 2008, caused extensive outages for large numbers of networks in Egypt and India, according to Renesys.[6] In both cases the disruptions lasted for multiple days.

Cloud Computing Architectures

The middle mile bottlenecks described above create an environment that is difficult to rely on for business-critical transactions. However, by avoiding the middle mile as much as possible, applications and services can be delivered over the Internet with much greater security, speed, and reliability. Thus, as we begin to see a shift in focus from the hype around cloud computing's potential benefits to the reality of implementing and using cloud-based solutions, the questions that have been conveniently abstracted away thus far (e.g., Where exactly are these cloud services running? What do these cloud architectures look like?) now become critically relevant.

Despite the broad variety in cloud computing offerings, their underlying deployment infrastructures can be categorized into two basic architectures: centralized versus highly distributed. These two network architectures have existed long before the cloud computing phenomenon; they are in fact the same architectures that underlie all Web-based infrastructures. So while cloud computing may revolutionize the way infrastructure and applications are consumed, its underlying deployment infrastructure is nothing new.

Centralized Datacenters: New Opportunity, Old Approach

As with traditionally architected Web sites, SaaS, PaaS, and IaaS providers typically host their applications and services in a single location or a small number of datacenters. For example, Amazon hosts EC2 in just three US datacenters and a single European datacenter, and private clouds, whether collocated or on-premises, often run from a single location.

This approach is adequate when application users are very close to the application host location. For example, a single location can be adequate for a private cloud serving an on-premises group of employees. However, for applications with distributed users or highly variable demand, the centralized datacenter approach is insufficient, as it results in an end user experience that suffers at the mercy of the Internet's middle mile. This means network outages, peering point congestion, routing inefficiencies, and other middle mile bottlenecks will frequently cause application performance and reliability to fall short of expectations.

Highly Distributed Networks: Getting Close to End Users

By locating cloud computing infrastructure in a highly distributed manner, it is possible to overcome the challenges posed by the Internet's middle mile. A highly distributed architecture, where servers are located at the edge of the Internet, close to end users (e.g., directly within the end user's ISP, in the end user's city), avoids the middle mile bottlenecks we've mentioned, enabling the delivery of LAN-like responsiveness for applications running over the global Internet.

Akamai is unique in taking this approach. While a number of other large cloud providers, including content delivery networks, do run multi-datacenter operations, these are fundamentally different from the highly distributed network Akamai uses. With a centralized or multi-datacenter infrastructure, the cloud provider's servers are still far away from most users and must still deliver content from the wrong side of the middle mile bottlenecks.

It may seem counterintuitive that having a presence in a couple dozen major backbones isn't enough to achieve commercial-grade performance. However, even the largest of those networks controls very little end-user access traffic. For example, the top 10 networks combined deliver less than one third of end-user traffic, and it drops off quickly from there, with a very long tail distribution over the Internet's 13,000 networks. Even with direct connectivity to all of the biggest backbones, the cloud application must travel through the morass of the middle mile to reach most of the Internet's 1.4 billion users. Only a highly distributed architecture can overcome the middle mile challenge.[7]


Akamai's EdgePlatform: Optimizing the Cloud

As computing moves into the public cloud, and as private clouds scale to provide global access, Cloud Optimization Services will be required to deliver the rigorous level of performance and scalability needed for enterprises to realize the promise of cloud computing. These services must go well beyond CDN caching technologies in order to remove the cloud-based barriers to successful enterprise cloud computing.

Akamai has built the world's largest distributed cloud optimization network, comprised of more than 48,000 servers in 1,500 locations, across nearly 1,000 networks worldwide. With its distinctive cloud optimization capabilities, Akamai's network will become the enabler that drives the success of the cloud computing movement. The network leverages Akamai's proprietary technologies that overcome inefficiencies in application, transport, and routing layer protocols, along with unique solutions that address the security and continuity of cloud services, together transforming the Internet into a reliable, high-performance platform for cloud services, just as it has done for thousands of online businesses and Web sites over the last ten years.

Accelerating Cloud Computing Applications

As enterprises think about shifting from an on-premises solution to a cloud-based offering, application performance becomes a key consideration. Unfortunately, just like traditional on-premises or hosted Web applications, cloud-based applications are typically deployed in a limited number of datacenters. This results in slow and uneven application performance for the reasons we've already described. Although caching services can help with origin offload and performance for static Web sites, it takes substantially more capabilities than a CDN to meet the enterprise requirements for cloud computing performance.

First, unlike limited-footprint CDNs, Akamai's highly distributed network enables delivery of content from the true edge of the Internet, close to end users, in order to avoid as many middle mile bottlenecks as possible.

In addition, Akamai leverages unique routing, communications, and application optimization technologies to accelerate IaaS, PaaS, and SaaS services across the cloud. We now take a closer look at each of these technologies.

Route Optimization

Akamai's SureRoute technology monitors real-time Internet conditions to identify alternate paths over the Internet that are faster than default BGP-defined routes. In addition to accelerating the long-haul Internet communications that are necessary for dynamic cloud applications and uncacheable content, SureRoute also improves the reliability of these communications by routing around trouble spots, finding alternative paths that optimize connectivity.
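The idea can be sketched as a comparison between the default path and candidate two-hop overlay paths. This is a toy model with invented latencies, not Akamai's algorithm: it only illustrates how relaying through a well-placed intermediate server can beat the direct BGP route.

```python
# Measured latencies in milliseconds (all values invented for illustration).
direct_ms = 280.0  # default BGP path, origin -> end user

# Candidate overlay relays: (origin -> relay, relay -> end user)
relays = {
    "relay-a": (40.0, 90.0),
    "relay-b": (150.0, 60.0),
    "relay-c": (70.0, 250.0),
}

def pick_path() -> tuple[str, float]:
    """Choose the fastest of the direct path and all two-hop overlay paths."""
    candidates = {"direct": direct_ms}
    candidates.update({name: up + down for name, (up, down) in relays.items()})
    best = min(candidates, key=candidates.get)
    return best, candidates[best]

print(pick_path())  # -> ('relay-a', 130.0): the overlay path beats direct
```

Refreshing the measurements continuously and re-running the selection is what lets an overlay route around congestion or outages in real time; doing so usefully requires relays in many places, which is why this technique depends on a large, highly distributed server network.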

When the Middle East cable cuts occurred in December 2008, for example, a Reuters article noted that the severed cables were the most direct route for traffic between Western Europe and the Middle East, and that, in the wake of the damaged cables, Verizon had re-routed some of its traffic, sending it from Europe across the Atlantic, the United States, the Pacific, and then finally onto the Middle East.8 In situations like these, when outages occur or when alternate routes force traffic across continents and oceans, Akamai's services enable consistent, responsive performance. The graph below illustrates the results of a performance test for a customer portal hosted in Europe, as seen by Akamai's Asia-Pacific measurement agents. These agents measured a significant degradation in performance when trying to retrieve content from the origin during the two days before repairs began. However, portal performance as delivered by Akamai remained consistent throughout the duration of the cable cut, as Akamai was able to identify and send traffic over alternative network paths on a real-time basis.

It is worth noting that this type of optimization requires routing via an overlay server network and thus is only possible with a massive and highly-distributed network like Akamai's.
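The path selection at the heart of such an overlay can be sketched as a shortest-path choice over measured latencies. The Python sketch below is a simplified illustration of the general principle, not Akamai's implementation; the function name and data layout are our own assumptions. Given measured link latencies, it chooses between the direct (BGP-default) path and a faster one-hop detour through an overlay server:

```python
def best_overlay_path(latency_ms, src, dst, relays):
    """Pick the fastest path from src to dst: either the direct route
    or a one-hop detour through an overlay relay server.

    latency_ms maps (a, b) pairs to measured latency in milliseconds;
    a missing entry means the link is currently unusable.
    """
    INF = float("inf")

    def lat(a, b):
        return latency_ms.get((a, b), INF)

    # Start with the direct (default) route as the candidate to beat.
    best = (lat(src, dst), [src, dst])
    for relay in relays:
        detour = lat(src, relay) + lat(relay, dst)
        if detour < best[0]:
            best = (detour, [src, relay, dst])
    return best
```

With the direct cable severed (the (src, dst) entry removed from the measurements), the same selection transparently falls back to a working detour, mirroring the real-time re-routing behavior described above.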

AKAMAI HAS A DRAMATIC IMPACT ON CUSTOMER PORTALS

[Figure: packet loss percentage (0 to 100) measured from 1/27 through 2/16 for content retrieved from the origin versus delivered by Akamai. Akamai protected this customer from packet loss performance issues after the December 2008 Middle East cable cuts.]


Communications Optimization

Akamai also streamlines data delivery between its servers by using a proprietary transport-layer protocol that overcomes TCP's and HTTP's inefficiencies. The Akamai Protocol leverages techniques such as:

• Using persistent connections to eliminate the overhead in connection establishment and teardown
• Eliminating TCP slow-start and using optimal communications window sizing based on knowledge of real-time network latency conditions
• Enabling intelligent retransmission after packet loss by leveraging network latency information, rather than relying on the standard TCP timeout and retransmission protocols
• Allowing multiple requests to be pipelined over a single connection (without the need to wait for a response between each request)
• Simultaneously using multiple routes when warranted to ensure the fastest and most reliable communications possible.

While Akamai's route optimization reduces the round trip time (or latency) of server-to-server communications, these transport-layer optimizations reduce the total number of round trips required for a given communication. In fact, these two optimizations work in synergy. TCP overhead is in large part a result of a conservative approach that guarantees reliability in the face of unknown network conditions. Because Akamai's route optimization delivers high-performance, congestion-free paths, it allows for a much more aggressive and efficient approach to transport-layer optimizations. Thus, these two approaches work symbiotically to boost long-haul performance and improve server efficiency.
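To see why these techniques matter, it helps to count round trips. The toy model below is our own illustration of the general principle, not the Akamai Protocol itself: it estimates how many round trips are needed to fetch a batch of objects with and without persistent, pipelined connections.

```python
def round_trips(n_requests, persistent=False, pipelined=False):
    """Rough round-trip count for fetching n_requests objects.

    Simplified model: opening a TCP connection costs one round trip
    (the handshake), and each serial request/response costs one more.
    Pipelining sends all requests at once, so the whole batch shares
    a single request/response round trip.
    """
    if not persistent:
        # A fresh connection per object: handshake plus request each time.
        return 2 * n_requests
    if not pipelined:
        # One connection reused, but each request still waits for its response.
        return 1 + n_requests
    # One connection, one pipelined batch of requests.
    return 2
```

For ten objects the model gives 20 round trips unoptimized, 11 with a persistent connection, and 2 with pipelining; on a 100 ms path, that is the difference between roughly two seconds and a fifth of a second.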

Application Optimization

Akamai's application-layer acceleration techniques include intelligent prefetching and content compression. Compression can greatly reduce the amount of data that must be transferred, particularly for text objects like JavaScript and HTML. Prefetching is a just-in-time caching technology that gives dynamic content the same high levels of end user performance that edge caching provides for static content. With prefetching, Akamai's edge servers retrieve the embedded objects in an HTML page before the browser requests them. The objects are then delivered to the user from a local Akamai edge server's memory cache, providing the end user with an optimal experience. Based on customer-configured business rules, Akamai can also prefetch hyperlinked content (such as a document or video) before a user clicks on it, enabling fast, local delivery of dynamically-generated content.
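The core of prefetching is discovering a page's embedded objects before the browser asks for them. The sketch below is our own illustration; the class, the `fetch` callback, and the cache layout are assumptions, not Akamai's implementation. It parses an HTML page and warms a cache with every image, script, and stylesheet the page references:

```python
from html.parser import HTMLParser

class EmbeddedObjectFinder(HTMLParser):
    """Collect the URLs a browser would request after parsing a page:
    image and script sources plus stylesheet links."""

    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and attrs.get("src"):
            self.urls.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.urls.append(attrs["href"])

def prefetch(html, fetch):
    """Fetch every embedded object in `html` up front and return a
    URL-to-bytes cache, so later browser requests can be answered
    from the edge server's memory."""
    finder = EmbeddedObjectFinder()
    finder.feed(html)
    return {url: fetch(url) for url in finder.urls}
```

In practice the `fetch` callback would go to the origin (or a nearby cache) over the optimized routes described earlier; here it can be any callable that maps a URL to its content.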

Akamai's cloud acceleration solutions leverage the optimizations mentioned above to dramatically improve the end user experience and accelerate cloud application adoption. In the figure below, for example, we see an illustration of the type of performance gain achieved when using Akamai's Web Application Accelerator service for an application running on Amazon's EC2.

Application response times were at least twice as fast in Asia, Europe, and Australia with Akamai, compared to Amazon's EC2 cloud infrastructure alone.

GLOBAL SAAS APPLICATION RUNNING ON AMAZON EC2 INFRASTRUCTURE, ACCELERATED BY AKAMAI'S WEB APPLICATION ACCELERATOR

Improvement in Performance by Continent (Global): average response time, Akamai vs. origin

Australia: 12.35s vs. 29.45s
Asia: 14.71s vs. 59s
Europe: 7.98s vs. 15.9s
North America: 5.54s vs. 7.93s

[Figure: end users around the world reach a SaaS or custom application running on Amazon EC2 datacenters through distributed Akamai edge servers, with the application accelerated over the public cloud by Akamai.]


Distributing Application Components to the Edge

The greatest possible application performance and scalability are achieved when the application itself can be distributed to the edge of the cloud, close to the end users. Akamai introduced this ability nearly a decade ago with its EdgeComputing offering that enables companies to deploy and execute J2EE applications or application components on Akamai's edge servers. Application instances are automatically created in different cities and regions based on real-time demand, something that cloud services such as Amazon EC2 and Google App Engine cannot do. This allows EdgeComputing customers to enjoy truly maintenance-free scalability in addition to unparalleled end user performance.

EdgeComputing is designed to work seamlessly within a hybrid cloud environment. By deploying content-centric application components, such as site search, surveys and contests, or page assembly, at the edge of the cloud, while running sensitive or transaction-oriented application components at the origin infrastructure, the application can be scaled and the end user experience can be optimized, while meeting the different business requirements of each application component.

Securing Cloud Applications and Platforms

Because of their reliance on Web infrastructure, SaaS and other applications running on public cloud platforms are as vulnerable to Internet threats and service attacks as traditional Web sites and applications. Akamai's network acts as a secure perimeter that eliminates public entry points to cloud infrastructures, helping to keep malicious DDoS attacks, Internet worms, hacker threats, and attacks on application vulnerabilities outside the origin data center.

Akamai also applies technologies such as DNS security, IP-layer protection and access control, HTTP origin cloaking, and application request checking. As an additional layer of security, Akamai's SiteShield can completely cloak a Web site from the public Internet by effectively removing the origin from the Internet-accessible IP address space, or Akamai's in-cloud Web Application Firewall can identify attacks in HTTP and SSL traffic before they get to application servers, protecting cloud services right from the edge of the cloud.

Ensuring Site and Application Availability

The Internet is an inherently unreliable medium, with failures constantly occurring at all different levels, from machine, to data center, to network. In fact, this is one of the key reasons cloud optimization services are so necessary: to provide resiliency from the many potential pitfalls preventing the successful delivery of cloud services to end users.

Akamai delivers this resiliency by starting with a network design philosophy that embraces failure as a common occurrence, something the network must automatically and seamlessly recover from. Indeed, Akamai's network is so fault-tolerant that it takes only an average of eight to 12 network operations personnel at any given time to manage the more than 50,000 network devices (servers, routers, switches, etc.) worldwide that deliver approximately 20% of the world's Web content every day.

Built on this zero-downtime infrastructure, Akamai's cloud optimization services include Site Failover, offering multiple options for enterprise business continuity in case of origin or cloud server failure. In addition, while most of Akamai's solutions enable cloud providers to leverage the Akamai network as a virtual extension of their own, Akamai also offers service options that allow cloud providers access to the Akamai network's intelligence. Akamai's Global Traffic Management is a cloud-based, highly-scalable, on-demand service that allows an enterprise to balance traffic between multiple entities based on a variety of business policy and Internet performance factors. Providers can leverage this network intelligence to boost the performance and reliability of their own multi-site infrastructures, whether or not they choose to augment that infrastructure with Akamai's massive network.
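A traffic-balancing decision of this kind can be sketched as a scoring function over candidate sites. The Python below is a hypothetical illustration; the scoring rule and data layout are our assumptions, not Global Traffic Management's actual policy engine. Each datacenter carries a business-policy weight, and traffic is sent to the healthy site with the best weight-to-latency ratio:

```python
def pick_datacenter(sites):
    """Choose a datacenter for the next request.

    `sites` maps name -> (weight, response_time_s, healthy): weight
    encodes business policy (e.g. a capacity share), response time is
    a recent performance measurement, and unhealthy sites are skipped.
    """
    scores = {
        name: weight / rtt
        for name, (weight, rtt, healthy) in sites.items()
        if healthy
    }
    if not scores:
        raise RuntimeError("no healthy datacenter available")
    # Highest weight-per-second-of-latency wins.
    return max(scores, key=scores.get)
```

In a real deployment the weights and measurements would be refreshed continuously, so the decision tracks both business policy and live Internet conditions.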

These service options, combined with a 100% uptime SLA, enable enterprises to leverage cloud computing in any form they wish while maintaining the rock-solid availability their businesses demand.

Conclusion

As one of today's hottest IT topics, cloud computing is covered daily across the press, from academic journals to technology blogs, and even the travel section of the New York Times.9 Most of the hype has focused on offerings in the public cloud, where centralized architectures are currently commonplace. The drawbacks of this type of architecture have already begun to surface, as many of the major cloud vendors have suffered widely-reported outages and downtime over the last year. Now, as cloud computing moves out of hype and experimentation mode into more mainstream adoption, businesses running applications on cloud platforms will rely on Akamai's cloud optimization services to make the cloud responsive, scalable, and secure.

Cloud computing won't have a single vendor or a single cloud answer; its incarnations will be as varied as the applications and services it supports. As The Economist recently stated, "The computing sky will probably always be cloudy, meaning that there will be many private and public clouds, and they will come in all shapes and sizes. And most of them will be interconnected."10

But regardless of the path the cloud computing evolution takes, Akamai's cloud optimization services will play a critical role in driving its growth, with innovative solutions that enable success for both cloud computing providers and the enterprises that use them.


    The Akamai Difference

©2010 Akamai Technologies, Inc. All Rights Reserved. Reproduction in whole or in part in any form or medium without express written permission is prohibited. Akamai and the Akamai wave logo are registered trademarks. Other trademarks contained herein are the property of their respective owners. Akamai believes that the information in this publication is accurate as of its publication date; such information is subject to change without notice.

Akamai provides market-leading managed services for powering rich media, dynamic transactions, and enterprise applications online. Having pioneered the content delivery market one decade ago, Akamai's services have been adopted by the world's most recognized brands across diverse industries. The alternative to centralized Web infrastructure, Akamai's global network of tens of thousands of distributed servers provides the scale, reliability, insight and performance for businesses to succeed online. Akamai has transformed the Internet into a more viable place to inform, entertain, advertise, interact, and collaborate. To experience The Akamai Difference, visit www.akamai.com.

International Offices

    Unterfoehring, Germany

    Paris, France

    Milan, Italy

    London, England

    Madrid, Spain

    Stockholm, Sweden

    Bangalore, India

    Sydney, Australia

    Beijing, China

    Tokyo, Japan

    Seoul, Korea

    Singapore

    Akamai Technologies, Inc.

    U.S. Headquarters

    8 Cambridge Center

    Cambridge, MA 02142

    Tel 617.444.3000

    Fax 617.444.3001

    U.S. toll-free 877.4AKAMAI

    (877.425.2624)


1 Significant portions of the content in Section 3 of this white paper are taken from this author's article, "Improving Performance on the Internet," published in the February 2008 issue of Communications of the ACM.

    2 http://www.broadband-forum.org/news/download/pressreleeases/2008/400million.pdf

    3 http://www.renesys.com/blog/2008/10/wrestling-with-the-zombie-spri.shtml

    4 http://news.cnet.com/8301-10784_3-9878655-7.html

    5 http://www.telegeography.com/cu/article.php?article_id=21528

    6 http://www.renesys.com/blog/2008/12/deja-vu-all-over-again-cables.shtml

7 For a more in-depth examination of Internet bottlenecks and the benefits of highly-distributed networks, please see this author's article, "Improving Performance on the Internet," published in the February 2008 issue of Communications of the ACM.

    8 http://www.reuters.com/article/internetNews/idUSTRE4BJ0FV20081220

9 Cohen, Billie, "In the Cottage, Yet Industrious," New York Times, April 16, 2009.

10 "Gathering Clouds," The Economist, March 19, 2009.