Dapper, a Large-Scale Distributed System Tracing Infrastructure


Dapper, a Large-Scale Distributed System Tracing Infrastructure
Google Technical Report, 2010
Authors: B. H. Sigelman, L. A. Barroso, M. Burrows, P. Stephenson, M. Plakal, D. Beaver, S. Jaspan, C. Shanbhag
Presenter: Lei Jinjiang

Background

Modern Internet services are often implemented as complex, large-scale distributed systems. These applications are constructed from collections of software modules that may be developed by different teams, perhaps in different programming languages, and could span many thousands of machines across multiple physical facilities.

Background

Imagine a single search request coursing through Google’s massive infrastructure. A single request can run across thousands of machines and involve hundreds of different subsystems. And, on top of that, this system processes more requests per second than any other in the world.

Problem

• How do you debug such a system?
• How do you figure out where the problems are?
• How do you determine if programmers are coding correctly?
• How do you keep sensitive data secret and safe?
• How do you ensure products don’t use more resources than they are assigned?
• How do you store all the data?
• How do you make use of it?

That is where Dapper comes in!

Dapper

Dapper is Google's tracing system, originally created to understand system behavior from the perspective of a search request. Today, Google's production clusters generate more than 1 terabyte of sampled trace data per day.

Requirements and Design Goals

• Requirements: (1) ubiquitous deployment, (2) continuous monitoring
• Design goals: (1) low overhead, (2) application-level transparency, (3) scalability

Distributed Tracing in Dapper

Two classes of solutions: black-box vs. annotation-based. Black-box schemes assume no information beyond the message records themselves and infer associations using statistical regression techniques, while annotation-based schemes rely on applications or middleware to explicitly tag every record with a global identifier that links it back to the originating request. Dapper uses an annotation-based scheme.

Trace trees and spans

The causal and temporal relationships between five spans in a Dapper trace tree

Trees and spans

A detailed view of a single span from the previous figure
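
To make the structure concrete, here is a minimal sketch of what a span record might contain, based on the fields the paper describes (trace id, span id, parent id, name, timing, and annotations); the names and types are illustrative assumptions, not Dapper's actual format.

// Java: an illustrative Dapper-style span record (field names are assumptions).
import java.util.ArrayList;
import java.util.List;

class Span {
    final long traceId;   // shared by every span in the same trace tree
    final long spanId;    // probabilistically unique id for this span
    final long parentId;  // spanId of the parent; 0 marks a root span here
    final String name;    // human-readable span name, e.g. an RPC method
    long startMicros, endMicros;                         // span timing
    final List<String> annotations = new ArrayList<>();  // timestamped notes

    Span(long traceId, long spanId, long parentId, String name) {
        this.traceId = traceId;
        this.spanId = spanId;
        this.parentId = parentId;
        this.name = name;
    }
}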

Instrumentation points

• When a thread handles a traced control path, Dapper attaches a trace context to thread-local storage.

• Most Google developers use a common control flow library to construct callbacks. Dapper ensures that all such callbacks store the trace context (see the sketch after this list).
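
Both points can be illustrated together: keep the trace context in thread-local storage, and wrap each callback so the context follows the work onto whichever thread eventually runs it. The class below is a sketch under those assumptions; Dapper's real control flow library is internal to Google.

// Java: a sketch of trace-context propagation across threads and callbacks.
final class TraceContext {
    final long traceId;
    final long spanId;
    TraceContext(long traceId, long spanId) {
        this.traceId = traceId;
        this.spanId = spanId;
    }

    // The context for the control path the current thread is handling.
    private static final ThreadLocal<TraceContext> CURRENT = new ThreadLocal<>();
    static void set(TraceContext ctx) { CURRENT.set(ctx); }
    static TraceContext get() { return CURRENT.get(); }

    // Capture the caller's context now; restore it around the callback later.
    static Runnable wrap(Runnable callback) {
        final TraceContext captured = get();
        return () -> {
            TraceContext previous = get();
            set(captured);
            try {
                callback.run();
            } finally {
                set(previous);
            }
        };
    }
}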

Callback

In computer programming, a callback is a reference to executable code, or a piece of executable code, that is passed as an argument to other code. This allows a lower-level software layer to call a subroutine (or function) defined in a higher-level layer.
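
A generic example, unrelated to Dapper's own APIs:

// Java: the callback idea in miniature.
import java.util.function.Consumer;

class CallbackDemo {
    // Lower-level layer: does some work, then calls back into its caller.
    static void fetchAsync(String key, Consumer<String> callback) {
        String value = "value-for-" + key;  // stand-in for real work
        callback.accept(value);             // invoke the caller-supplied code
    }

    public static void main(String[] args) {
        fetchAsync("user42", v -> System.out.println("got " + v));
    }
}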

Annotations

// C++:
const string& request = ...;
if (HitCache())
  TRACEPRINTF("cache hit for %s", request.c_str());
else
  TRACEPRINTF("cache miss for %s", request.c_str());

// Java:
Tracer t = Tracer.getCurrentTracer();
String request = ...;
if (hitCache())
  t.record("cache hit for " + request);
else
  t.record("cache miss for " + request);

Sampling

Low overhead was a key design goal for Dapper, since service operators would be understandably reluctant to deploy a new tool of yet unproven value if it had any significant impact on performance… Therefore, we further control overhead by recording only a fraction of all traces.
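
A minimal sketch of that per-trace coin flip, assuming (as the paper describes) that the decision is made once when a trace starts and is then inherited by every span in the trace:

// Java: uniform trace sampling, decided once at the root span.
import java.util.concurrent.ThreadLocalRandom;

class Sampler {
    private final double rate;  // e.g. 1.0 / 1024 for high-traffic services

    Sampler(double rate) { this.rate = rate; }

    // Called only when a new trace begins; child spans reuse the decision.
    boolean sampleNewTrace() {
        return ThreadLocalRandom.current().nextDouble() < rate;
    }
}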

Trace collection

Out-of-band trace collection

• First, in-band trace data would dwarf the application data and bias the results of subsequent analyses.

• Second, many middleware systems return a result to their caller before all of their own backends have returned a final result, so an in-band scheme could not account for such non-nested execution patterns.
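
As a rough sketch of the out-of-band approach, a completed span can be appended to a local log file, decoupled from the RPC reply; a separate collection daemon (not shown) would later ship these logs to the central repository. The SpanLogger class is purely illustrative.

// Java: out-of-band span emission to a local log file.
import java.io.FileWriter;
import java.io.IOException;

class SpanLogger {
    private final FileWriter log;

    SpanLogger(String path) throws IOException {
        this.log = new FileWriter(path, /*append=*/ true);
    }

    // Called when a span completes; the RPC reply may already be long gone.
    synchronized void emit(long traceId, long spanId, String name)
            throws IOException {
        log.write(traceId + "\t" + spanId + "\t" + name + "\n");
        log.flush();
    }
}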

Security and privacy considerations

Production coverage

• Given how ubiquitous Dapper-instrumented libraries are, we estimate that nearly every Google production process supports tracing.

• There are cases where Dapper is unable to follow the control path correctly. These typically stem from the use of non-standard control-flow primitives.

• Dapper tracing can be turned off as a production safety measure.

Use of trace annotations

• Currently, 70% of all Dapper spans and 90% of all Dapper traces have at least one application-specified annotation.

• 41 Java and 68 C++ applications have added custom application annotations in order to better understand intra-span activity in their services.

Trace collection overhead

• The daemon never uses more than 0.3% of one core of a production machine during collection, and has a very small memory footprint.

• The Dapper daemon is restricted to the lowest possible priority in the kernel scheduler.

• Each span averages 426 bytes; trace data accounts for less than 0.01% of the network traffic in Google’s production environment.

Process count (per host)   Data rate (per process)   Daemon CPU usage (one core)
25                         10K/sec                   0.125%
10                         200K/sec                  0.267%
50                         2K/sec                    0.130%

CPU resource usage for the Dapper daemon during load testing

Trace collection overhead

Sampling frequency   Avg. latency (% change)   Avg. throughput (% change)
1/1                  16.3%                     -1.48%
1/2                  9.40%                     -0.73%
1/4                  6.38%                     -0.30%
1/8                  4.12%                     -0.23%
1/16                 2.12%                     -0.08%
1/1024               -0.20%                    -0.06%

The effect of different [non-adaptive] Dapper sampling frequencies on the latency and throughput of a Web search cluster. The experimental errors for these latency and throughput measurements are 2.5% and 0.15%, respectively.

Adaptive sampling

• Lower-traffic workloads may miss important events at such a low sampling rate (1/1024).

• Workloads with low traffic automatically increase their sampling rate, while those with very high traffic lower it, so that overheads remain under control; see the sketch after this list.

• Reliability …
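
The paper parameterizes adaptive sampling by a desired rate of sampled traces per unit time rather than a uniform probability. Below is a minimal sketch of that feedback loop, with the specific adjustment policy being an assumption of this example:

// Java: adapt the sampling probability toward a target sampled-trace rate.
class AdaptiveSampler {
    private final double targetTracesPerSec;  // desired rate of sampled traces
    private double rate = 1.0 / 1024;         // current sampling probability

    AdaptiveSampler(double targetTracesPerSec) {
        this.targetTracesPerSec = targetTracesPerSec;
    }

    // Re-fit the rate to observed traffic: low-traffic workloads raise it,
    // very high-traffic workloads lower it, keeping overhead under control.
    void adjust(double observedTracesPerSec) {
        if (observedTracesPerSec > 0) {
            rate = Math.min(1.0, targetTracesPerSec / observedTracesPerSec);
        }
    }

    double currentRate() { return rate; }
}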

Additional sampling during collection

• The Dapper team also needs to control the total size of data written to its central repositories, and thus incorporates a second round of sampling for that purpose.

• For each span seen in the collection system, we hash the associated trace id into a scalar z, where 0 ≤ z ≤ 1. If z is less than our collection sampling coefficient, we keep the span and write it to Bigtable; otherwise, we discard it.
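
A minimal sketch of this second sampling step; the bit-mixing constant is an arbitrary illustrative choice, not Dapper's actual hash. Because the decision depends only on the trace id, all spans of a given trace are kept or discarded together.

// Java: collection-time sampling by hashing the trace id into [0, 1).
class CollectionSampler {
    private final double coefficient;  // global collection sampling coefficient

    CollectionSampler(double coefficient) { this.coefficient = coefficient; }

    boolean keep(long traceId) {
        long h = traceId * 0x9E3779B97F4A7C15L;       // scramble the id
        double z = (h >>> 11) / (double) (1L << 53);  // top 53 bits -> [0, 1)
        return z < coefficient;                       // keep iff z below cutoff
    }
}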

The Dapper Depot API

• Access by trace id.

• Bulk access: access to billions of Dapper traces in parallel.

• Indexed access: the index maps from commonly requested trace features (host machine, service name) to distinct Dapper traces.
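
The shapes below are hypothetical, since the talk gives no real signatures for the Depot API; they merely mirror the three access patterns listed above.

// Java: a hypothetical interface mirroring the three access patterns.
import java.util.Iterator;
import java.util.List;

class Trace { /* container for the spans of one trace tree */ }

interface DapperDepot {
    Trace getTrace(long traceId);                         // access by trace id
    Iterator<Trace> scanAll();                            // bulk, parallel access
    List<Trace> findByIndex(String host, String service); // indexed access
}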

User interface

Experiences

• Using Dapper during development (integration with exception monitoring)
• Addressing long-tail latency
• Inferring service dependencies
• Network usage of different services
• Layered and shared storage systems (e.g. GFS)

Other Lessons Learned

• Coalescing effect
• Tracing batch workloads
• Finding a root cause
• Logging kernel-level information