
arXiv:2010.05127v2 [cs.DS] 3 Nov 2020

Approximation Algorithms for Stochastic Minimum Norm

Combinatorial Optimization*

Sharat Ibrahimpur† Chaitanya Swamy†

Abstract

Motivated by the need for, and growing interest in, modeling uncertainty in data, we introduce and

study stochastic minimum-norm optimization. We have an underlying combinatorial optimization prob-

lem where the costs involved are random variables with given distributions; each feasible solution in-

duces a random multidimensional cost vector, and given a certain objective function, the goal is to find

a solution (that does not depend on the realizations of the costs) that minimizes the expected objective

value. For instance, in stochastic load balancing, jobs with random processing times need to be assigned

to machines, and the induced cost vector is the machine-load vector. The choice of objective is typically

the maximum- or sum- of the entries of the cost vector, or in some cases some other ℓp norm of the cost

vector. Recently, in the deterministic setting, Chakrabarty and Swamy [8] considered a much broader

suite of objectives, wherein we seek to minimize the f -norm of the cost vector under a given arbitrary

monotone, symmetric norm f . In stochastic minimum-norm optimization, we work with this broad class

of objectives, and seek a solution that minimizes the expected f -norm of the induced cost vector.

The class of monotone, symmetric norms is versatile and includes ℓp-norms, and Topℓ-norms (sum

of ℓ largest coordinates in absolute value), and enjoys various closure properties; in particular, it can be

used to incorporate multiple norm budget constraints fℓ(x) ≤ Bℓ, ℓ = 1, . . . , k.

We give a general framework for devising algorithms for stochastic minimum-norm combinatorial

optimization, using which we obtain approximation algorithms for the stochastic minimum-norm ver-

sions of the load balancing and spanning tree problems. We obtain the following concrete results.

• An O(1)-approximation for stochastic minimum-norm load balancing on unrelated machines with:

(i) arbitrary monotone symmetric norms and job sizes that are Bernoulli random variables; and (ii)

Topℓ norms and arbitrary job-size distributions.

• An O(log m/ log log m)-approximation for the general stochastic minimum-norm load balancing

problem, where m is the number of machines.

• An O(1)-approximation for stochastic minimum-norm spanning tree with arbitrary monotone

symmetric norms and distributions; this guarantee extends to the stochastic minimum-norm ma-

troid basis problem.

Two key technical contributions of this work are: (1) a structural result of independent interest con-

necting stochastic minimum-norm optimization to the simultaneous optimization of a (small) collection

of expected Topℓ-norms; and (2) showing how to tackle expected Topℓ-norm minimization by leveraging

techniques used to deal with minimizing the expected maximum, circumventing the difficulties posed by

the non-separable nature of Topℓ norms.

1 Introduction

Uncertainty is a facet of many real-world decision environments, and a thriving and growing area of opti-

mization, stochastic optimization, deals with optimization under uncertainty. Stochastic load-balancing and

*An extended abstract is to appear in the Proceedings of the 61st FOCS, 2020.
†{sharat.ibrahimpur,cswamy}@uwaterloo.ca. Dept. of Combinatorics and Optimization, Univ. Waterloo, Water-

loo, ON N2L 3G1. Supported in part by NSERC grant 327620-09 and an NSERC Discovery Accelerator Supplement Award.


scheduling problems, where we have uncertain job sizes (a.k.a processing times), constitute a prominent and

well-studied class of stochastic-optimization problems (see, e.g., [16, 10, 11, 27, 24, 15, 13]). In stochastic

load balancing, we are given job-size distributions and we need to fix a job-to-machine assignment without

knowing the actual processing-time realizations. This assignment induces a random load vector, and its

quality is assessed by evaluating the expected objective value of this load vector, which one seeks to mini-

mize; the objectives typically considered are: makespan—i.e., maximum load-vector entry—which leads to

the stochastic makespan minimization problem [16, 10, 11], and the ℓp-norm of the load vector [25]. More

generally, in a generic stochastic-optimization problem, we have an underlying combinatorial optimization

problem and the costs involved are described by random variables with given distributions. We need to take

decisions, i.e., find a feasible solution, given only the distributional information, and without knowing the

realizations of the costs. (This is sometimes called one-stage stochastic optimization.) Each feasible solu-

tion induces a random cost vector, we have an objective that seeks to quantitatively measure the quality of

the solution by aggregating the entries of the cost vector, and the goal is to find a feasible solution that min-

imizes the expected objective value of the induced cost vector. As another example in this setup, consider

the stochastic spanning tree problem, which is the following basic stochastic network-design problem: we

have a graph with random edge costs and we seek a spanning tree of low expected objective value, where

the objective is applied to the cost vector that consists of the costs of edges in the tree.

The two most-commonly considered objectives in such settings (as also for deterministic problems) are:

(a) the maximum cost-vector entry (which adopts an egalitarian view); and (b) the sum of the cost-vector

entries (i.e., a utilitarian view that considers the total cost incurred). These objectives give rise to various

classical problems: besides the makespan minimization problem in load balancing, other examples include

(deterministic/stochastic) bottleneck spanning tree (max- objective), minimum spanning tree (sum- objec-

tive), and the k-center (max- objective) and k-median (sum- objective) clustering problems. Recognizing

that the max- and sum- objectives tend to skew solutions in different directions, other ℓp-norms of the cost

vector (e.g., ℓ2 norm) have also been considered in certain settings as a means of interpolating between, or

trading off, the max- and sum- objectives. For instance, ℓp-norms have been investigated for both determin-

istic and stochastic load balancing [4, 5, 23, 25] and for deterministic k-clustering [12]. Furthermore, very

recently, Chakrabarty and Swamy [8] introduced a rather general model to unify these various problems

(including minimizing ℓp-norms), that they call minimum-norm optimization: given an arbitrary monotone,

symmetric norm f , find a solution that minimizes the f -norm of the induced cost vector.

Our contributions. In this work, we introduce and study stochastic minimum-norm optimization, which

is the stochastic version of min-norm optimization: given a stochastic-optimization problem and an arbi-

trary monotone, symmetric norm f , we seek a feasible solution that minimizes the expected f -norm of the

induced cost vector. As a model, this combines the versatility of (deterministic) min-norm optimization

with the more realistic setting of uncertain data, thereby giving us a unified way of dealing with the various

objective functions typically considered for (deterministic and stochastic) optimization problems in the face

of uncertain data. We consider problems where there is a certain degree of independence in the underly-

ing costs, so that the components of the underlying cost vector are always independent (and nonnegative)

random variables.

Our chief contribution is a framework that we develop for designing algorithms for stochastic minimum-

norm combinatorial optimization problems, using which we devise approximation algorithms for the stochas-

tic minimum-norm versions of load balancing (Theorems 6.3 and 6.2) and spanning trees (Theorem 7.1). We

only assume that we have a value oracle for the norm f ; we remark that this is weaker than the optimization-

oracle and first-order-oracle access required to f in [8] and [9] respectively.

Stochastic minimum-norm optimization can be motivated from two distinct perspectives. The class

of monotone, symmetric norms is quite rich and broad. In particular, it contains all ℓp-norms, as also

another fundamental class of norms called Topℓ-norms: Topℓ(x) is the sum of the ℓ largest coordinates of


x (in absolute value). Notice that Topℓ norms provide another means of interpolating between the min-

max (Top1) and min-sum (Topm) problems (where m is the number of coordinates). One of the motivating

factors for [8] to consider the (far-reaching) generalization of arbitrary monotone, symmetric norms (in

the deterministic setting) was that it allows one to capture the optimization problems associated with these

various objectives under one umbrella, and thereby come up with a unified set of techniques for handling

these optimization problems. These same benefits also apply in the stochastic setting, making stochastic

minimum-norm optimization an appealing model to study.

Another motivation comes from the fact that the class of monotone, symmetric norms is closed un-

der various operations, including taking nonnegative combinations, and taking the maximum of any finite

collection of monotone, symmetric norms. A noteworthy and non-evident consequence of these closure

properties is that they allow us to incorporate budget constraints fℓ(x) ≤ Bℓ, ℓ = 1, . . . , k involving multi-

ple monotone, symmetric norms f1, . . . , fk using the min-norm optimization model: we can simply define

another (monotone, symmetric) norm g(x) := max{fℓ(x)/Bℓ : ℓ = 1, . . . , k}, and the (single) budget

constraint g(x) ≤ 1 can be captured by the problem of minimizing g(x). Multiple norm budget constraints

may arise, or be useful, for example, when no single norm may be a clear choice for assessing the solu-

tion quality.1 Moreover, such constraints, and, in particular, the above means of capturing them, can be

especially useful in stochastic settings, as they can provide us with more fine-grained control of the un-

derlying random cost vector, which can help offset the risk associated with uncertainty; e.g., the constraint

E[max{Topℓ(Y)/Bℓ : ℓ = 1, . . . , k}] ≤ 1 enforces a fair bit of control on the random cost vector Y and

provides safeguards against the costs being too high.
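As a small illustration (ours, not from the paper), the following Python sketch encodes two hypothetical Topℓ-budget constraints as a single monotone, symmetric norm g by taking the maximum of the scaled constituent norms; g(x) ≤ 1 holds exactly when every budget constraint is satisfied.

```python
import numpy as np

def top_ell(x, ell):
    # Top_ell norm: sum of the ell largest coordinates (in absolute value).
    return np.sort(np.abs(x))[::-1][:ell].sum()

def budget_norm(x, budgets):
    # g(x) = max_ell Top_ell(x) / B_ell; g(x) <= 1 iff every budget constraint holds.
    return max(top_ell(x, ell) / B for ell, B in budgets.items())

budgets = {1: 10.0, 3: 18.0}          # hypothetical budgets B_1, B_3 on Top_1 and Top_3
x = np.array([7.0, 5.0, 4.0, 1.0])
print(budget_norm(x, budgets))        # ~0.889 <= 1, so both budget constraints hold
```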

To elaborate on our framework and results, it is useful to highlight and appreciate two distinct high-

level challenges that arise in dealing with stochastic min-norm optimization. We delve into more details in

Section 2. First, how do we reason about the expectation of an arbitrary monotone, symmetric norm? [8]

prove the useful structural result that any monotone symmetric norm f can be expressed as the maximum

of a collection of ordered norms, where an ordered norm is simply a nonnegative combination of Topℓ-

norms (Theorem 3.4). While this does give a more concrete way of thinking about the expected f -norm, the

challenge nevertheless in the stochastic setting is that one now needs to reason about the expectation of the

maximum of a collection of random variables, where each random variable is the ordered norm of our cost

vector Y . One of our chief insights is that the expectation of the maximum of a collection of ordered norms

is within an O(1)-factor of the maximum of the expected ordered norms (Theorem 5.1), i.e., interchanging

the expectation and maximum operators only loses an O(1) factor! The crucial consequence is that this

provides us with a ready means for reasoning about E[f(Y)], namely by controlling E[Topℓ(Y)] for all

indices ℓ (see Theorem 5.2). We believe that this structural result about the expectation of a monotone,

symmetric norm is of independent interest.

This brings us to the second challenge: how do we deal with a specific norm f , such as the Topℓ norm?

To our knowledge, there is no prior work on any stochastic Topℓ-norm minimization problem, and we need

to control all expected Topℓ norms. Our approach is based on carefully identifying certain statistics of the

random vector Y that provide a convenient handle on E[Topℓ(Y)] (see Section 4). (These statistics also

play a role in establishing our above result on the expectation of a monotone, symmetric norm.) For a spe-

cific application (e.g., stochastic load balancing, spanning trees), we formulate an LP encoding (loosely

speaking) that the statistics of our random cost vector match the statistics of the cost vector of an optimal

solution. The main technical component is then to devise a rounding algorithm that rounds the LP solution

while losing only a small factor in these statistics, and we utilize iterative-rounding ideas to achieve this.

Combining the above ingredients leads to our approximation guarantees for stochastic min-norm load

balancing, spanning trees. Our strongest and most-sophisticated results are for stochastic min-norm load

1Such considerations arise, for instance, when considering semi-supervised learning on graphs using ℓp-norm based Laplacian

regularization, where different choices of p can lead to good solutions on different instances; see e.g., [2].


balancing with: (i) arbitrary monotone symmetric norms and Bernoulli job sizes (Theorem 6.2); and (ii)

Topℓ norms and arbitrary job-size distributions (Theorem 6.1); in both cases, we obtain constant-factor

approximations. (We emphasize that we have not attempted to optimize constants, and instead chosen

to keep exposition simple and clean.) We also obtain an O(log m/ log log m)-approximation for general

stochastic min-norm load balancing (Theorem 6.3). We remark that dealing with Bernoulli distributions is

often considered to be a stepping stone towards handling general distributions (see, e.g., [16]), and so we

believe that our techniques will eventually lead to a constant-factor approximation for (general) stochastic

min-norm load balancing. For stochastic spanning trees, wherein edge costs are random, we obtain an

O(1)-approximation for arbitrary monotone symmetric norms and arbitrary distributions (Theorem 7.1).

Related work. As mentioned earlier, stochastic load balancing is a prominent combinatorial-optimization

problem that has been investigated in the stochastic setting under various ℓp norms. Kleinberg, Rabani,

and Tardos [16] investigated stochastic makespan minimization (i.e., minimize expected maximum load)

in the setting of identical machines (i.e., the processing time of a job is the same across all machines), and

were the first to obtain an O(1)-approximation for this problem. We utilize the tools that they developed

to reason about the expected makespan. Their results were improved for specific job-size distributions

by [10]. Almost two decades later, Gupta et al. [11] obtained the first O(1)-approximation for stochastic makespan minimization on unrelated machines, and an O(p/ log p)-approximation for minimizing the expected

ℓp-norm of the load vector. Our O(1)-approximation result for all Topℓ-norms substantially generalizes the

makespan-minimization (i.e., Top1-norm) result of [11]. The latter guarantee was improved to a constant by

Molinaro [25] via a careful and somewhat involved use of the so-called L-function method of [18].

Our results and techniques are incomparable to those of [25]. At a high level, Molinaro argues that

E[‖Y‖p], where Y is the m-dimensional machine-load vector, can be bounded by controlling the quantity (∑_{i∈[m]} E[Y_i^p])^{1/p}, and uses a notion of effective size due to Latala [18] to obtain a handle on E[Y_i^p] in terms of the Xij random variables of the jobs assigned to machine i. Essentially, "effective size" of

a random variable maps the random variable to a deterministic quantity that one can work with instead

(but its definition and utility depends on a certain scale parameter). Previously, for stochastic makespan

minimization, Kleinberg et al. [16] and Gupta et al. [11] utilize a notion of effective size due to Hui [14],

which helps in controlling tail bounds of the machine loads. Molinaro leverages the full power of Latala’s

notion of effective size by applying it at multiple scales, which allows him to obtain an O(1)-approximation

for ℓp norms. A pertinent question that perhaps arises is: given the success yielded by the various notions

of effective sizes for ℓ∞ and other ℓp norms, can one come up with a notion of effective size that one can

use for a general monotone symmetric norm f? This however seems unlikely. A concrete reason for this

can be gleaned from the modeling power of general monotone, symmetric norms that arises due to their

closure properties. Recall that one can encode multiple monotone, symmetric-norm budget constraints via

one monotone, symmetric norm f (by taking a maximum of the scaled constituent norms). A notion of

effective size for f would (remarkably) translate to one deterministic quantity that simultaneously yields

some control for all the norms involved in the budget constraints; this is unreasonable to expect, even when

the constituent norms are ℓp norms.

Examples of other well-known combinatorial optimization problems that have been investigated in the

stochastic setting include stochastic knapsack and bin packing [16, 10, 19], and stochastic shortest paths [19].

The works of [19, 21, 20] consider expected-utility-maximization versions of various combinatorial opti-

mization problems. In a sense, this can be viewed as a counterpart of stochastic min-norm optimization,

where we have a concave utility function, and we seek to maximize the expected utility of the underlying

random value vector induced by our solution. Their results are obtained by a clever discretization of the

probability space; this does not seem to apply to stochastic minimization problems.

Topℓ- and ordered-norms have been proposed in the location-theory literature, as a means of interpo-

lating between the k-center and k-median clustering problems, and have been studied in the Operations


Research literature [26, 17], but largely from a modeling perspective. Recently they have received much in-

terest in the algorithms and optimization communities—partly, because Topℓ norms yield an alternative (to

ℓp norms) natural means of interpolating between the ℓ∞- and ℓ1- objectives—and this work has led to strong

algorithmic results for Topℓ-norm- and ordered-norm- minimization in deterministic settings [3, 1, 6, 7, 8, 9].

2 Technical overview and organization

We discuss here the various challenges that arise in stochastic min-norm optimization, and give an overview

of the technical ideas we develop to overcome these challenges with pointers to the relevant sections for

more details. We conclude in Section 9 with a discussion of some open questions.

As noted in [8], even for deterministic min-norm optimization, the simple approach of minimizing f(~v), where ~v ranges over the cost of fractional solutions (e.g., convex combinations of integer solutions), fails

badly since this convex program often has large integrality gap. In the stochastic setting, a further problem

that arises with this approach is that the random variable f(Y ) will typically have exponential-size support

(note that Y follows a product distribution), making it computationally challenging to evaluate E[f(Y)].

As noted earlier, we face two types of challenges in tackling stochastic min-norm optimization. The

first is posed by the generality of an arbitrary monotone, symmetric norm. While the work of [8] shows that

Topℓ-norms are fundamental building blocks of monotone symmetric norms, and suggests a way forward for

dealing with stochastic min-norm optimization, as we elaborate below, the stochastic nature of the problem

throws up various new issues, that create significant difficulties with leveraging the tools in [8] to reason

about the expectation of a monotone symmetric norm.

Second, stochastic min-norm optimization is complicated even for various specific norms. As mentioned

earlier, O(1)-approximation algorithms were obtained only quite recently for stochastic min-norm load

balancing under the ℓ∞ norm (i.e., stochastic makespan-minimization) [11], and other ℓp norms [25]; there is

no prior work on stochastic Topℓ-load balancing (or any other stochastic Topℓ-norm optimization problem).

The difficulties arise again due to the underlying stochasticity; even with the ℓ∞ norm, which is the most

specialized of the aforementioned norms, one needs to bound the expectation of the maximum of a collection

of random variables, a quantity that is not convenient to control. The importance of Topℓ norms in the

deterministic setting indicates that stochastic Topℓ-norm optimization is a key special case that one needs to

understand, but the non-separable nature of Topℓ norms adds another layer of difficulty.

Expectation of a monotone, symmetric norm (Section 5). Recall that Chakrabarty and Swamy [8] show

that for any monotone, symmetric norm f : R^m → R_{≥0}, there is a collection C ⊆ R^m_{≥0} of weight vectors with non-increasing coordinates such that f(x) = max_{w∈C} w^T x↓ for any x ∈ R^m_{≥0} (see Theorem 3.4), where x↓ is the vector x with coordinates sorted in non-increasing order. The quantity w^T x↓ is called a (w-) ordered norm, and can be expressed as a nonnegative combination ∑_{ℓ=1}^m (wℓ − wℓ+1) Topℓ(x) of Topℓ norms (where w_{m+1} := 0). This structural result is quite useful for deterministic min-norm optimization,

since it yields an easier-to-work-with objective, and more significantly, it immediately shows that control-

ling all Topℓ norms suffices to control f(x); these properties are leveraged by [8] to devise approximation

algorithms for deterministic min-norm optimization.

However, it is rather unclear how this structural result helps with the stochastic problem. If Y is the

random cost vector of our solution (with independently-distributed coordinates) one can now rewrite the ob-

jective function E[f(Y)] as E[sup_{w∈C} w^T Y↓], but dealing with the latter quantity entails reasoning about

the expectation of the maximum of a collection of (positively correlated) random variables, which is a po-

tentially onerous task.2 Taking cue from the deterministic setting, a natural result to aim for in the stochastic

2 The positive correlation between the random variables w^T Y↓ for w ∈ C renders the techniques used in stochastic makespan minimization inapplicable. In the latter problem, one needs to bound the expectation of the maximum of a collection of independent random variables, and the underlying techniques heavily exploit this independence.


setting is that bounding all expected Topℓ norms enables one to bound E[f(Y)]. But unlike the deterministic setting, reformulating the objective as E[sup_{w∈C} w^T Y↓] does not yield any apparent dividends, and it is not at all clear that such a result is actually true. The issue again is that it is difficult to reason about E[sup . . .], and interchanging the expectation and sup terms is not usually a viable option (it is not hard to see that E[max{Z1, . . . , Zk}] may in general be Ω(k) times max{E[Z1], . . . , E[Zk]}).
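A minimal numerical illustration of the Ω(k) gap mentioned above (our sketch, with hypothetical parameters): for k independent Bernoulli(1/k) variables, max_i E[Zi] = 1/k while E[max_i Zi] = 1 − (1 − 1/k)^k ≈ 1 − 1/e.

```python
import numpy as np

rng = np.random.default_rng(0)
k, trials = 100, 200_000
# Z_1, ..., Z_k i.i.d. Bernoulli(1/k): max_i E[Z_i] = 1/k, whereas
# E[max_i Z_i] = 1 - (1 - 1/k)^k -> 1 - 1/e, i.e., an Omega(k) gap.
Z = rng.random((trials, k)) < 1.0 / k
print(Z.any(axis=1).mean(), 1 - (1 - 1 / k) ** k, 1.0 / k)
```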

One of our chief contributions is to prove that the analogue mentioned above for the stochastic setting

does indeed hold, i.e., controlling E[Topℓ(Y)] for all ℓ ∈ [m] allows one to control E[f(Y)]. The key to this, and our main technical result here, is that, somewhat surprisingly and intriguingly, E[f(Y)] = E[sup_{w∈C} w^T Y↓] is at most a constant factor larger than sup_{w∈C} E[w^T Y↓] (Theorem 5.1). The quantity sup_{w∈C} E[w^T Y↓] = sup_{w∈C} w^T E[Y↓] has a nice interpretation: it is simply f(E[Y↓]), and so this can be restated as E[f(Y)] = O(1)·f(E[Y↓]). This result provides us with the same mileage that [8] obtain in the deterministic setting. Since E[w^T Y↓] is ∑_{ℓ=1}^m (wℓ − wℓ+1) E[Topℓ(Y)], as in the deterministic setting, this immediately implies that controlling E[Topℓ(Y)] for all ℓ ∈ [m] allows us to control E[f(Y)], thereby providing a foothold for reasoning about the fairly general stochastic min-norm optimization problem. In particular, we infer that if E[Topℓ(Y)] ≤ α · E[Topℓ(W)] for all ℓ ∈ [m] (and we can restrict to ℓs that are powers of 2 here), then E[f(Y)] = O(α) · E[f(W)] (Theorem 5.2). We believe that our structural

result for the expectation of a monotone, symmetric norm is of independent interest, and should also find

application in other stochastic settings involving monotone, symmetric norms.

Our structural result showing that E[f(Y)] = O(1)·f(E[Y↓]) is obtained by carefully exploiting the structure of monotone, symmetric norms. A key component of this is identifying suitable statistics of the random vector Y, indexed by ℓ ∈ [m], such that: (a) the statistics for index ℓ lead to a convenient proxy function for estimating E[Topℓ(Y)] within O(1) factors; (b) the statistics are related to the expectations of some random variables that are tightly concentrated around their means; and (c) Pr[f(Y) > σ·f(E[Y↓])] is governed by the probability that these random variables deviate from their means. Together these properties imply the desired bound on E[f(Y)]. Next, we elaborate on these statistics.

Proxy functions and statistics (Section 4). Since f is a symmetric function, it is not hard to see that

f(Y) depends only on the "histogram" {N_{>θ}(Y)}_{θ∈R≥0}, where N_{>θ}(Y) is the number of coordinates of Y larger than θ. But this dependence is quite non-linear; its precise form is given by the structural result for monotone, symmetric norms, and by noting that we can write Topℓ(Y) = ∫_0^∞ min{ℓ, N_{>θ}(Y)} dθ. Despite these non-linearities, we show that the expected histogram curve {E[N_{>θ}(Y)]}_{θ∈R≥0} (see Fig. 1 in Section 4) controls E[f(Y)].

To show this, and also compress {E[N_{>θ}(Y)]}_{θ∈R≥0} to a finite, manageable number of statistics, we consider first the Topℓ-norm. While E[Topℓ(Y)] = ∫_0^∞ E[min{ℓ, N_{>θ}(Y)}] dθ, interestingly, we prove that this is only a constant-factor smaller than γℓ(Y) := ∫_0^∞ min{ℓ, E[N_{>θ}(Y)]} dθ (Theorem 4.3); i.e., interchanging E and min only leads to an O(1)-factor loss. Defining τℓ(Y) to be the smallest θ such that E[N_{>θ}(Y)] < ℓ, which can be viewed as an estimate of the ℓ-th largest entry of Y, we can more compactly write γℓ(Y) = ℓτℓ(Y) + ∫_{τℓ(Y)}^∞ E[N_{>θ}(Y)] dθ. The statistics of interest to us are then the quantities τℓ = τℓ(Y) (and implicitly E[N_{>τℓ}(Y)]) for ℓ = 1, . . . , m. They enable us to bound the tail probability Pr[f(Y) > σ·f(E[Y↓])] by exp(−Ω(σ)), which implies that E[f(Y)] = O(1)·f(E[Y↓]). Roughly speaking, this follows because we show (by exploiting the proxy function γℓ(Y)) that Pr[f(Y) > σ·f(E[Y↓])] is at most the probability that N_{>τℓ}(Y) > Ω(σ)·ℓ for some ℓ ∈ [m], and Chernoff bounds show that N_{>τℓ}(Y) = ∑_{i=1}^m 1{Yi > τℓ} is tightly concentrated around its mean E[N_{>τℓ}(Y)] = ∑_{i=1}^m Pr[Yi > τℓ] < ℓ (see Lemma 5.7).

Approximation algorithms for stochastic minimum-norm optimization: load-balancing (Section 6)

and spanning tree (Section 7). In Sections 6 and 7, we apply our framework to design the first approx-


imation algorithms for the stochastic minimum-norm versions of load balancing, and spanning tree (and

matroid basis) problems respectively. These sections can be read independently of each other.

Let O∗ denote the random cost vector resulting from an optimal solution. Applying our framework

entails bounding E[Topℓ(Y)] in terms of E[Topℓ(O∗)] for all ℓ ∈ [m]. For algorithmic tractability, we only work with indices ℓ ∈ POS = POSm := {2^i : i = 0, 1, . . . , ⌊log2 m⌋}: since Topℓ(~v) ≤ Topℓ′(~v) ≤ 2Topℓ(~v) holds for any ~v ∈ R^m_{≥0} and any ℓ ≤ ℓ′ ≤ 2ℓ, it is easy to see that with a factor-2 loss, this still yields a bound on E[Topℓ(Y)]/E[Topℓ(O∗)] for all ℓ ∈ [m]. At a high level, we "guess" within, say a factor of 2, E[Topℓ(O∗)] or certain associated quantities such as τℓ(O∗), for all ℓ ∈ POS. This guessing takes polynomial time since it involves enumerating a vector with O(log m) monotone (i.e., non-increasing or non-decreasing) coordinates, each of which lies in a logarithmically bounded range. We then write an LP to obtain a fractional solution whose associated cost-vector Y, roughly speaking, satisfies E[Topℓ(Y)] ≤ O(E[Topℓ(O∗)]) for all ℓ ∈ POS. The chief technical ingredient is to devise a rounding procedure to obtain an integer solution while preserving the E[Topℓ(·)]-costs (up to some factor) for all ℓ ∈ POS. We

exploit iterative rounding to achieve this, by capitalizing on the fact that (loosely speaking) our matrix of

tight constraints has bounded column sums (see Theorem 3.6).

For spanning trees (Section 7)—edge costs are random and we seek to minimize the expected norm of

the edge-cost vector of the spanning tree—the implementation of the above plan is quite direct. We guess

τ∗ℓ := τℓ(O∗) for all ℓ ∈ POS, and our LP imposes the constraints E[N_{>τ∗ℓ}(Y)] ≤ ℓ for all ℓ ∈ POS.

Since the coordinates of Y correspond to individual edge costs, and we know their distributions, it is easy

to impose the above constraint (see (Tree(~t))). Iterative rounding works out quite nicely here since after

normalizing the above constraints to obtain unit right-hand-sides, each column sum is O(1). Thus, we

obtain a constant-factor approximation; also, everything extends to the setting of general matroids.

Our results for load balancing (Section 6) are the most technically sophisticated results in the paper. We

obtain constant-factor approximations for: (i) arbitrary norms with Bernoulli job processing times; and (ii)

Topℓ-norm with arbitrary distributions. We also obtain an O(log m/ log log m)-approximation for the most

general setting, where both the monotone, symmetric norm and the job-processing-time distributions may

be arbitrary.

The cost-vector Y in load balancing corresponds to the (random) loads on the machines. Each compo-

nent Yi is thus an aggregate of some random variables: Yi = ∑_{j assigned to i} Xij, where Xij is the (random) processing time of job j on machine i. The complication that this creates is that we do not have direct access to (the distribution for) the random variable Yi, making it difficult to calculate (or estimate) Pr[Yi > θ] (and hence E[N_{>θ}(Y)]). (In fact, this is #P-hard, even for Bernoulli Xijs (see [16]), but if we know the jobs assigned to i, then we can use a dynamic program to obtain a (1 + ε)-approximation of Pr[Yi > θ].)

We circumvent these difficulties by leveraging some tools from the work of [16, 11] on stochastic makespan minimization, in conjunction with an alternate proxy that we develop for E[Topℓ(Y)]. We show that, for a suitable choice of θ, ∑_{i=1}^m E[Y_i^{≥θ}] is a good proxy for E[Topℓ(Y)] (see Lemmas 4.1 and 4.2), where Y_i^{≥θ} is the random variable that is 0 if Yi < θ and Yi otherwise. Complementing this, the insight we glean from [16, 11] is that we can estimate E[Y_i^{≥θ}] within O(1) factors using quantities that can be obtained from the distributions of the Xij random variables (see Section 3.1). This involves analyzing the contribution of job j (to E[Y_i^{≥θ}]) differently based on whether Xij is "small" (truncated) or "large" (exceptional), and utilizing the notion of effective size of a random variable [14, 16] to bound the contribution from small jobs. We guess the θ values—call them t∗ℓ—corresponding to E[Topℓ(O∗)] for all

ℓ ∈ POS, and write an LP for finding a fractional assignment where we enforce constraints encoding that

these t∗ℓ values are compatible with the correct guesses (see (LP(~t)) in Section 6.2).


3 Preliminaries

For an integer m ≥ 0, we use [m] to denote the set {1, . . . , m}. For any integer m ≥ 0, we define POSm = {2^i : i ∈ Z≥0, 2^i ≤ m}; we drop the subscript m when it is clear from the context. For x ∈ R, define x^+ := max{x, 0}. For any vector x ∈ R^m_{≥0}, we use x↓ to denote the vector x with its coordinates sorted in non-increasing order.

Throughout, we use the symbols Y and W to denote random vectors. The coordinates of these random vectors are always independent, nonnegative random variables. We denote this by saying that the random vector follows a product distribution. For an event E, let 1_E be 1 if E happens, and 0 otherwise. We reserve Z to denote a scalar nonnegative random variable. Given Z and θ ∈ R≥0, define the truncated random variable Z^{<θ} := Z·1_{Z<θ}, which has support in [0, θ). Analogously, define the exceptional random variable Z^{≥θ} := Z·1_{Z≥θ}, whose support lies in {0} ∪ [θ, ∞). The following Chernoff bound will be useful.

Lemma 3.1 (Chernoff bound). Let Z1, . . . , Zk be independent [0, 1] random variables, and µ ≥ ∑_{j∈[k]} E[Zj]. For any ε > 0, we have Pr[ ∑_{j∈[k]} Zj > (1 + ε)µ ] ≤ ( e^ε/(1 + ε)^{1+ε} )^µ. If ε > 1, then we also have the simpler bound Pr[ ∑_{j∈[k]} Zj > (1 + ε)µ ] ≤ e^{−εµ/3}.

Monotone, symmetric norms. A function f : R^m → R_{≥0} is a norm if it satisfies the following: (a) f(x) = 0 iff x = 0; (b) (homogeneity) f(λx) = |λ|f(x), for all λ ∈ R, x ∈ R^m; and (c) (triangle inequality) f(x + y) ≤ f(x) + f(y) for all x, y ∈ R^m. Since our cost vectors are always nonnegative, we only consider nonnegative vectors in the sequel. A monotone, symmetric norm f is a norm that satisfies: f(x) ≤ f(y) for all 0 ≤ x ≤ y (monotonicity); and f(x) = f(x↓) for all x ∈ R^m_{≥0} (symmetry). In the

sequel, whenever we say norm, we always mean a monotone, symmetric norm. We will often assume that

f is normalized, i.e., f(1, 0, . . . , 0) = 1. We will use the following simple claim to obtain bounds on the

optimal value.

Claim 3.2. For any x ∈ R^m_{≥0}, we have max_{i∈[m]} xi ≤ f(x) ≤ ∑_{i∈[m]} xi.

Proof. For the lower bound, observe that for any i, f(x) ≥ f(0, . . . , 0, xi, 0, . . . , 0) = xi. Here we used that f is monotone, symmetric, homogeneous, and normalized. Next, for the upper bound we use the triangle inequality: f(x) ≤ ∑_{i∈[m]} f(0, . . . , 0, xi, 0, . . . , 0) = ∑_{i∈[m]} xi.

The following two types of monotone, symmetric norms will be especially important to us.

Definition 3.3. For any ℓ ∈ [m], the Topℓ norm is defined as follows: for x ∈ R^m_{≥0}, Topℓ(x) is the sum of the ℓ largest coordinates of x, i.e., Topℓ(x) = ∑_{i=1}^ℓ x↓_i.

More generally, for any w ∈ R^m_{≥0} satisfying w1 ≥ w2 ≥ · · · ≥ wm ≥ 0—we call such a w a non-increasing vector—the w-ordered norm (or simply ordered norm) of a vector x ∈ R^m_{≥0} is defined as ‖x‖w := w^T x↓. Observe that ‖x‖w = ∑_{ℓ=1}^m (wℓ − wℓ+1) Topℓ(x), where wℓ := 0 for ℓ > m.
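As a quick sanity check (our sketch, not part of the paper), the following Python snippet computes Topℓ norms and a w-ordered norm, and verifies the decomposition ‖x‖w = ∑_{ℓ=1}^m (wℓ − wℓ+1) Topℓ(x) on a random instance.

```python
import numpy as np

def top_ell(x, ell):
    # Top_ell(x): sum of the ell largest coordinates of a nonnegative vector x.
    return np.sort(x)[::-1][:ell].sum()

def ordered_norm(x, w):
    # ||x||_w = w^T x_sorted for a non-increasing weight vector w.
    return np.dot(w, np.sort(x)[::-1])

m = 5
rng = np.random.default_rng(1)
x = rng.random(m)
w = np.sort(rng.random(m))[::-1]          # non-increasing weights
w_pad = np.append(w, 0.0)                 # w_{m+1} := 0
via_top = sum((w_pad[l - 1] - w_pad[l]) * top_ell(x, l) for l in range(1, m + 1))
print(np.isclose(ordered_norm(x, w), via_top))  # True: identity from Definition 3.3
```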

Topℓ-norm minimization yields a natural way of interpolating between the min-max (Top1) and min-

sum (Topm) problems. The following result by Chakrabarty and Swamy [8], which gives a foothold for

working with an arbitrary monotone, symmetric norm, further highlights their importance.

Theorem 3.4 (Structural result for monotone, symmetric norms [8]). Let f : R^m → R_{≥0} be a monotone, symmetric norm.

(a) There is a collection C ⊆ R^m_{≥0} of non-increasing vectors such that f(x) = sup_{w∈C} w^T x↓ for all x ∈ R^m_{≥0}. Hence, we have sup_{w∈C} w1 = f(1, 0, . . . , 0).


(b) If x, y ∈ R^m_{≥0} are such that Topℓ(x) ≤ α·Topℓ(y) + β for all ℓ ∈ [m], where α, β ≥ 0, then f(x) ≤ α · f(y) + β · f(1, 0, . . . , 0).

Proof. Part (a) is quoted from [8] (see Lemma 5.2 in [8]). Part (b) follows easily from part (a). Let C ⊆ R^m be the collection of non-increasing weight vectors given by part (a) for f. We have

f(x) = sup_{w∈C} w^T x↓ = sup_{w∈C} ∑_{ℓ∈[m]} (wℓ − wℓ+1) Topℓ(x)
     ≤ sup_{w∈C} ∑_{ℓ∈[m]} (wℓ − wℓ+1)(α · Topℓ(y) + β)
     ≤ sup_{w∈C} ∑_{ℓ∈[m]} (wℓ − wℓ+1) α · Topℓ(y) + sup_{w∈C} ∑_{ℓ∈[m]} (wℓ − wℓ+1) β
     = α · f(y) + sup_{w∈C} w1 · β = α · f(y) + β · f(1, 0, . . . , 0).

We will often need to enumerate vectors with monotone integer coordinates. We use the following

standard result, lifted from [8], to obtain a polynomial bound on the number of such vectors.

Claim 3.5. There are at most (2e)^{max{M,k}} non-increasing sequences of k integers chosen from {0, . . . , M}.

Proof. We reproduce the argument from [8]. A non-increasing sequence a1 ≥ a2 ≥ . . . ≥ ak, where ai ∈ {0} ∪ [M] for all i ∈ [k], can be mapped bijectively to a sequence of k + 1 integers M − a1, a1 − a2, . . . , ak−1 − ak, ak from {0} ∪ [M] that add up to M. The number of such sequences of k + 1 integers is equal to the coefficient of x^M in the generating function (1 + x + . . . + x^M)^{k+1}. This is equal to the coefficient of x^M in (1 − x)^{−(k+1)}, which is (M+k choose M) using the binomial expansion. Let U = max{M, k}. We have (M+k choose M) = (M+k choose U) ≤ (e(M + k)/U)^U ≤ (2e)^U.
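The count in Claim 3.5 can be checked by brute force on small (hypothetical) values of M and k; the sketch below enumerates non-increasing sequences as multisets and compares against the binomial count and the (2e)^{max{M,k}} bound.

```python
from itertools import combinations_with_replacement
from math import comb, e

M, k = 6, 4
# Each multiset of size k from {0, ..., M} corresponds to exactly one non-increasing sequence.
count = sum(1 for _ in combinations_with_replacement(range(M + 1), k))
print(count, comb(M + k, k), count <= (2 * e) ** max(M, k))   # 210 210 True
```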

Iterative rounding. Our algorithms are based on rounding fractional solutions to LP-relaxations that we

formulate for the stochastic min-norm versions of load balancing and spanning trees. The rounding algo-

rithm needs to ensure that the various budget constraints that we include in our LP to control quantities

associated with expected Topℓ norms (for multiple indices ℓ) are roughly preserved. The main technical tool

involved in achieving this is iterative rounding, as expressed by the following theorem, which follows from

a result in Linhares et al. [22].

Theorem 3.6 (Follows from Corollary 11 in [22]). Let M = (U, I) be a matroid with rank function r (specified via a value oracle), and Q := {z ∈ R^U_{≥0} : z(U) = r(U), z(F) ≤ r(F) ∀F ⊆ U} be its base polytope. Let z be a feasible solution to the following multi-budgeted matroid LP:

min c^T z s.t. Az ≤ b, z ∈ Q. (Budg-LP)

where A ∈ R^{k×U}_{≥0}. Suppose that for any e ∈ supp(z), we have ∑_{i∈[k]} A_{i,e} ≤ ν for some parameter ν. We can round z in polynomial time to obtain a basis B of M satisfying: (a) c(B) ≤ c^T z; (b) Aχ_B ≤ b + ν·1, where 1 is the vector of all 1s; and (c) B is contained in the support of z.

Proof. We first consider a new instance where we move to the support of z, which will automatically take

care of (c). More precisely, let J = supp(z). For a vector v ∈ R^U, let vJ := (v_e)_{e∈J} denote the restriction of v to the coordinates in J. Let MJ = (J, IJ), with rank function rJ, be the restriction of the matroid M to J. Let AJ be A restricted to the columns corresponding to J. Note that rJ(J) = r(J) ≥ z(J) = z(U) = r(U); so we have r(J) = z(J) = r(U), and therefore a basis of MJ is also a basis of M. It is easy to see now

that zJ is a feasible solution to (Budg-LP) where we replace c and A by cJ and AJ respectively, and replace

9

Page 10: Approximation Algorithms for Stochastic Minimum Norm ...

Q by the base polytope of MJ . It suffices to show how to round zJ to obtain a basis B of MJ satisfying

properties (a) and (b) (i.e., cJ(B) ≤ cTJ zJ and AJχB ≤ b + ν1), since, by construction, we have B ⊆ J .

Note that each column of AJ sums to at most ν.

We now describe how Corollary 11 in [22] yields the desired rounding of zJ . This result pertains to a

more general setting, where we have a fractional point in the base polytope of one matroid that satisfies

matroid-independence constraints for some other matroids, and some knapsack constraints. Translating

Corollary 11 to our setting above, where we have only one matroid, yields the following:

Corollary 11 in [22] in our setting: Let A′ be obtained from AJ by scaling each row so that max_{e∈J} A′_{ie} = 1 for all i ∈ [k]. Let p1, . . . , pk ≥ 0 be such that ∑_{i∈[k]} A′_{ie}/p_i ≤ 1 for all e ∈ J.3 Then, we can round zJ to obtain a basis B of MJ such that cJ(B) ≤ c_J^T z_J, and ∑_{e∈B} A_{ie} ≤ b_i + p_i · max_{e∈J} A_{ie} for all i ∈ [k].

Setting p_i = ν/max_{e∈J} A_{ie} for all i ∈ [k] satisfies the conditions above, since ∑_{i∈[k]} A′_{ie}/p_i = ∑_{i∈[k]} A_{ie}/ν ≤ 1 for all e ∈ J, and yields the desired rounding of zJ.

3.1 Bounding E[S^{≥θ}] when S is the sum of independent random variables

In stochastic load balancing, each coordinate of the random cost vector is a “composite” random variable

that is the sum of independent random variables. In such settings, where we do not have direct access to

the distributions of individual coordinates, we develop a proxy function for estimating E[Topℓ(Y)] that involves computing E[Y_i^{>θ}] for i ∈ [m]. We discuss here how to compute E[S^{>θ}] for a composite random variable S = ∑_{j∈[k]} Zj, where Z1, . . . , Zk ≥ 0 are independent random variables. By gleaning suitable insights from [16, 11], we show how to estimate this quantity given access to the distributions of the Zj random variables. First, observe that for any j ∈ [k], if Zj ≥ θ then we also have S ≥ θ. So after some simple manipulations, we can show that

E[S^{≥θ}] = Θ( ∑_{j∈[k]} E[Z_j^{≥θ}] + E[ ( ∑_{j∈[k]} Z_j^{<θ} )^{≥θ} ] ).    (1)

The first term above is easily computed from the Zj-distributions. So to get a handle on E[S^{≥θ}], it suffices to control the second term, which measures the contribution from the sum of the truncated random variables Z_j^{<θ} to E[S^{≥θ}]. This requires a nuanced notion called effective size, a concept that originated in queuing

theory [14].

Definition 3.7. For a nonnegative random variable Z and a parameter λ > 1, the λ-effective size βλ(Z) of

Z is defined as log_λ E[λ^Z]. We also define β1(Z) := E[Z].
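The following small sketch (ours, with a hypothetical discrete distribution) computes the λ-effective size of Definition 3.7 and checks the tail bound of Lemma 3.8 below on that example.

```python
import numpy as np

def effective_size(values, probs, lam):
    # beta_lambda(Z) = log_lambda E[lambda^Z] for a discrete distribution (Definition 3.7).
    if lam == 1.0:
        return float(np.dot(values, probs))            # beta_1(Z) := E[Z]
    return float(np.log(np.dot(np.power(lam, values), probs)) / np.log(lam))

# Hypothetical Z with Pr[Z = 0.5] = 0.2 and Pr[Z = 0] = 0.8, at scale lambda = 4.
vals, prs, lam = np.array([0.0, 0.5]), np.array([0.8, 0.2]), 4.0
b = effective_size(vals, prs, lam)
c = 0.5 - b
# Lemma 3.8 tail bound: Pr[Z >= b + c] <= lam^{-c}; here {Z >= b + c} is exactly {Z = 0.5}.
print(b, prs[1], lam ** (-c))   # Pr[Z = 0.5] = 0.2 is at most lam^{-c} (~0.60)
```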

The usefulness of effective sizes follows from Lemmas 3.8 and 3.9, which indicate that E[(∑_j Z_j^{<θ})^{≥θ}] can be estimated by controlling the effective sizes of some random variables related to the Z_j^{<θ} random

variables.

Lemma 3.8. Let Z be a nonnegative random variable and λ ≥ 1. If βλ(Z) ≤ b, then for any c ≥ 0, we

have Pr[Z ≥ b + c] ≤ λ^{−c}. Furthermore, if λ ≥ 2, then E[Z^{≥βλ(Z)+1}] ≤ (βλ(Z) + 3)/λ.

Proof. The first part of the lemma trivially holds for λ = 1 so assume that λ > 1. From the above definition,

λ^{βλ(Z)} = E[λ^Z]. By Markov's inequality,

Pr[Z ≥ b + c] = Pr[λ^Z ≥ λ^{b+c}] ≤ E[λ^Z]/λ^{b+c} ≤ λ^{−c}.

3In [22], the pis are stated to be positive integers, but the proof of Corollary 11 shows that this is not actually needed.


For the second part, we have E[Z^{≥βλ(Z)+1}] = (βλ(Z) + 1)·Pr[Z ≥ βλ(Z) + 1] + ∫_{βλ(Z)+1}^∞ Pr[Z > σ] dσ. Using the first part, the first term is at most (βλ(Z) + 1)/λ, and the second term is ∫_1^∞ Pr[Z > βλ(Z) + c] dc ≤ ∫_1^∞ λ^{−c} dc ≤ 1/(λ ln λ) ≤ 2/λ, where the last inequality is because λ ≥ 2.

Noting that βλ(Z + Z′) = βλ(Z) + βλ(Z′) for independent random variables Z and Z′, we can infer from Lemma 3.8 that if ∑_{j∈[k]} βλ(Z_j^{<θ}/θ) = O(1), then E[(∑_{j∈[k]} Z_j^{<θ}/θ)^{>Ω(1)}] ≤ O(1)/λ, or equivalently E[(∑_{j∈[k]} Z_j^{<θ})^{>Ω(θ)}] ≤ O(θ)/λ.

A key contribution of Kleinberg et al. [16] is encapsulated by Lemma 3.9 below, which is obtained

by combining various results from their work, and complements the above upper bound. It lower bounds

E[S^{≥θ}], where S is the sum of independent, bounded random variables (e.g., truncated random variables),

in terms of the sum of the λ-effective sizes of these random variables. The proof of this lemma is somewhat

long and technical and is deferred to Section 8.

Lemma 3.9. Let S = ∑_{j∈[k]} Zj, where the Zjs are independent [0, θ]-bounded random variables. Let λ ≥ 1 be an integer. Then,

E[S^{≥θ}] ≥ θ · ( ∑_{j∈[k]} βλ(Zj/4θ) − 6 ).

4 Proxy functions and statistics for controlling E[Topℓ(Y)] and E[f(Y)]

In this section, we identify various statistics of a random vector Y following a product distribution on Rm,

which lead to two proxy functions that provide a convenient handle on E[Topℓ(Y)] (within O(1) factors). The first proxy function uses ∑_{i∈[m]} E[Y_i^{≥θ}] as a means to control E[Topℓ(Y)]. Roughly speaking, if θ is such that ∑_{i∈[m]} E[Y_i^{≥θ}] = ℓθ, then we argue that ℓθ is a good proxy for E[Topℓ(Y)]; Lemmas 4.1 and 4.2 make this statement precise. This proxy is helpful in settings where each Yi is a sum of independent random variables, such as stochastic min-norm load balancing (Section 6), wherein Yi denotes the load on machine i. The second proxy function is based on a statistic that we denote τℓ(Y), which aims to capture the (expected) ℓ-th largest entry of Y. This statistic is defined using the expected histogram curve {E[N_{>θ}(Y)]}_{θ∈R≥0} (see Fig. 1), where N_{>θ}(Y) is the number of coordinates of Y that are larger than θ, and we show that this leads to an effective proxy γℓ(Y) for E[Topℓ(Y)] (Theorem 4.3). Collectively, these statistics and the γℓ(Y) proxies (for all ℓ ∈ [m]) are instrumental in showing that the expected histogram curve controls E[f(Y)], and hence that E[f(Y)] can be bounded in terms of the expected Topℓ norms of Y. This follows because the N_{>τℓ(Y)}(Y) random variables enjoy nice concentration properties, and their

deviations from the mean govern the tail probability of f(Y ) (see Lemma 5.7). Also, these statistics are

quite convenient to work with in designing algorithms for problems where the Yis are “atomic” random

variables with known distributions, such as stochastic min-norm spanning trees (see Section 7).

Proxy function based on E[Y_i^{≥θ}] statistics.

Lemma 4.1. For any θ ≥ 0, if ∑_{i∈[m]} E[Y_i^{≥θ}] ≤ ℓθ holds, then E[Topℓ(Y)] ≤ 2ℓθ.

Proof. We have Yi = Y_i^{<θ} + Y_i^{≥θ} for all i ∈ [m]. So

E[Topℓ(Y)] ≤ E[Topℓ(Y_1^{<θ}, . . . , Y_m^{<θ})] + E[Topℓ(Y_1^{≥θ}, . . . , Y_m^{≥θ})] ≤ ℓθ + ∑_{i∈[m]} E[Y_i^{≥θ}] ≤ 2ℓθ.

The first inequality is due to the triangle inequality; the second is because each Y_i^{<θ} is at most θ.


Lemma 4.2. For any θ ≥ 0, if ∑_{i∈[m]} E[Y_i^{≥θ}] > ℓθ holds, then E[Topℓ(Y)] > ℓθ/2.

Proof. The proof is by induction on ℓ + m. The base case is when ℓ = m = 1, where clearly E[Top1(Y1)] = E[Y1] ≥ E[Y_1^{≥θ}] > θ. Another base case that we consider is when m ≤ ℓ: here the Topℓ-norm is simply the sum of all the Yis and thus,

E[Topℓ(Y1, . . . , Ym)] = E[Y1 + · · · + Ym] ≥ ∑_{i∈[m]} E[Y_i^{≥θ}] > ℓθ > ℓθ/2.

Now consider the general case with m ≥ ℓ + 1. Our induction strategy will be the following. If Ym is at least θ, then we include Ym's contribution towards the Topℓ-norm and collect the expected Topℓ−1-norm from the remaining coordinates. Otherwise, we simply collect the expected Topℓ-norm from the remaining coordinates. We use Y−m to denote the vector (Y1, . . . , Ym−1).

We handle some easy cases separately. The case E[Y_m^{≥θ}] > ℓθ causes a hindrance to applying the induction hypothesis. But this case is quite easy: we have

E[Topℓ(Y1, . . . , Ym)] ≥ E[Ym] ≥ E[Y_m^{≥θ}] > ℓθ > ℓθ/2.

So assume that E[Y_m^{≥θ}] ≤ ℓθ. Let q := Pr[Ym ≥ θ]. Another easy case that we handle separately is when q = 0. Here, we have ∑_{i∈[m−1]} E[Y_i^{≥θ}] = ∑_{i∈[m]} E[Y_i^{≥θ}], and we have E[Topℓ(Y1, . . . , Ym)] ≥ E[Topℓ(Y−m)] > ℓθ/2, where the last inequality is due to the induction hypothesis.

So we are left with the case where m ≥ ℓ + 1, E[Y_m^{≥θ}] ≤ ℓθ, and q = Pr[Ym ≥ θ] > 0. Let s := E[Ym | Ym ≥ θ], which is well defined, and is at least θ. Note that qs = E[Y_m^{≥θ}]. We define two thresholds θ1, θ2 ∈ (0, θ] to apply the induction hypothesis to smaller cases:

θ1 := (ℓθ − E[Y_m^{≥θ}])/ℓ and θ2 := min{ θ, (ℓθ − E[Y_m^{≥θ}])/(ℓ − 1) }.

Noting that E[Y_i^{≥t}] is a non-increasing function of t, observe that:

(C1) ∑_{i∈[m−1]} E[Y_i^{≥θ2}] > (ℓ − 1)θ2: since θ2 ≤ θ, we have ∑_{i∈[m−1]} E[Y_i^{≥θ2}] ≥ ∑_{i∈[m−1]} E[Y_i^{≥θ}] > (ℓθ − E[Y_m^{≥θ}]) ≥ (ℓ − 1)θ2.

(C2) ∑_{i∈[m−1]} E[Y_i^{≥θ1}] > ℓθ1: since θ1 ≤ θ, we have ∑_{i∈[m−1]} E[Y_i^{≥θ1}] ≥ ∑_{i∈[m−1]} E[Y_i^{≥θ}] > ℓθ − E[Y_m^{≥θ}] = ℓθ1.

We now have the following chain of inequalities.

E[Topℓ(Y1, . . . , Ym)] ≥ q( s + E[Topℓ−1(Y1, . . . , Ym−1)] ) + (1 − q) E[Topℓ(Y1, . . . , Ym−1)]
  > q( s + (ℓ − 1)θ2/2 ) + (1 − q) ℓθ1/2    (induction hypothesis, using (C1) and (C2))
  = qs + (q/2)·min{ (ℓ − 1)θ, ℓθ − E[Y_m^{≥θ}] } + ((1 − q)/2)( ℓθ − E[Y_m^{≥θ}] )
  = ℓθ/2 + qs/2 + (q/2)·min{ qs − θ, 0 }    (since qs = E[Y_m^{≥θ}])
  ≥ ℓθ/2 + qs/2 − qθ/2 ≥ ℓθ/2.    (since s ≥ θ)
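A Monte Carlo sanity check of Lemmas 4.1 and 4.2 (our sketch, on a hypothetical instance with scaled-Bernoulli coordinates): we locate the threshold θ where ∑_i E[Y_i^{≥θ}] crosses ℓθ, and verify that E[Topℓ(Y)] lies between ℓθ/2 and 2ℓθ.

```python
import numpy as np

rng = np.random.default_rng(2)
m, ell = 8, 3
sizes = rng.uniform(1.0, 5.0, m)     # Y_i = sizes[i] * Bernoulli(probs[i]), independent
probs = rng.uniform(0.2, 0.9, m)

def sum_exceptional(theta):
    # sum_i E[Y_i^{>= theta}] for scaled-Bernoulli coordinates.
    return float(np.sum(np.where(sizes >= theta, sizes * probs, 0.0)))

# Binary-search the threshold where sum_i E[Y_i^{>= theta}] crosses ell * theta.
lo, hi = 0.0, sizes.max()
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if sum_exceptional(mid) > ell * mid else (lo, mid)
theta = hi

Y = sizes * (rng.random((200_000, m)) < probs)          # Monte Carlo samples of Y
exp_top_ell = np.sort(Y, axis=1)[:, -ell:].sum(axis=1).mean()
print(theta, exp_top_ell, ell * theta / 2 <= exp_top_ell <= 2 * ell * theta)
```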


Proxy function modeling the ℓth largest coordinate. Our second proxy function is derived by the simple,

but quite useful, observation that for any x ∈ R^m_{≥0}, we have Topℓ(x) = ∫_0^∞ min{ℓ, N_{>θ}(x)} dθ, where N_{>θ}(x) is the number of coordinates of x that are greater than θ. Noting that E[min{ℓ, N_{>θ}(Y)}] ≤ min{ℓ, E[N_{>θ}(Y)]}, we therefore obtain that E[Topℓ(Y)] ≤ γℓ(Y) := ∫_0^∞ min{ℓ, E[N_{>θ}(Y)]} dθ (see Fig. 1). Interestingly, we show that γℓ(Y) = O(1) · E[Topℓ(Y)].

[Figure: the expected histogram curve E[N_{>θ}(Y)] plotted against θ, with the thresholds τ1, . . . , τℓ−1, τℓ marked on the θ-axis, the levels 1, . . . , ℓ − 1, m on the vertical axis, and the areas γℓ−1, γℓ, and ∑_i E[(Yi − τ1)^+] indicated.]

Figure 1: The expected histogram curve, {E[N_{>θ}(Y)]}_{θ∈R≥0}

Theorem 4.3. For any ℓ ∈ [m], we have E[Topℓ(Y)] ≤ γℓ(Y) ≤ 4 · E[Topℓ(Y)], where γℓ(Y) := ∫_0^∞ min{ℓ, E[N_{>θ}(Y)]} dθ.

The proof of the second inequality above relies on a rephrasing of γℓ(Y ) that makes it amenable to relate

it to Lemma 4.2. For any ℓ ∈ [m], define τℓ(Y) := inf{θ ∈ R≥0 : E[N_{>θ}(Y)] < ℓ}. This infimum is attained because Pr[Yi ≤ θ] is a right-continuous function of θ,4 and E[N_{>θ}(Y)] = m − ∑_{i∈[m]} Pr[Yi ≤ θ]. Also, define τ0(Y) := ∞ and τℓ(Y) := 0 for ℓ > m. Since E[N_{>θ}(Y)] is a non-increasing function of θ, we then have γℓ(Y) = ℓτℓ(Y) + ∫_{τℓ(Y)}^∞ E[N_{>θ}(Y)] dθ. We can further rewrite this in a way that quite nicely brings out the similarities between γℓ(Y) and the (exact) proxy function for Topℓ(x) used by [8] in the deterministic setting. Claim 4.4 gives a convenient way of casting the integral in the second term, which leads to the expression for γℓ(Y) stated in Lemma 4.5.

Claim 4.4. For any t ∈ R≥0, we have ∫_t^∞ E[N_{>θ}(Y)] dθ = ∑_{i∈[m]} E[(Yi − t)^+].

Proof. We have

E[(Yi − t)^+] = ∫_0^∞ Pr[(Yi − t)^+ > θ] dθ = ∫_0^∞ Pr[Yi > t + θ] dθ = ∫_t^∞ Pr[Yi > θ] dθ.

Therefore, ∑_{i=1}^m E[(Yi − t)^+] = ∫_t^∞ ∑_{i=1}^m Pr[Yi > θ] dθ, which is precisely ∫_t^∞ E[N_{>θ}(Y)] dθ.

Lemma 4.5. Consider any ℓ ∈ [m]. We have γℓ(Y) = ℓτℓ(Y) + ∑_{i∈[m]} E[(Yi − τℓ(Y))^+] and γℓ(Y) = min_{t∈R≥0} ( ℓt + ∑_{i∈[m]} E[(Yi − t)^+] ).

Proof. Figure 1 yields a nice pictorial proof. Algebraically, the first equality follows from the expression

γℓ(Y) = ℓτℓ(Y) + ∫_{τℓ(Y)}^∞ E[N_{>θ}(Y)] dθ (see Fig. 1) and Claim 4.4. The second equality follows again from Claim 4.4 because we have γℓ(Y) ≤ ℓt + ∫_t^∞ E[N_{>θ}(Y)] dθ for any t ∈ R≥0.

4 If θ1, θ2, . . . is a decreasing sequence converging to θ, then {Yi ≤ θ} = ⋂_{n=1}^∞ {Yi ≤ θn}, so Pr[Yi ≤ θ] = lim_{n→∞} Pr[Yi ≤ θn].


Remark 1. The expression in Lemma 4.5 and τℓ(Y ) nicely mirror similar quantities in the determinis-

tic setting. When Y is deterministic: (i) τℓ(Y) is precisely Y↓_ℓ, the ℓ-th largest coordinate of Y; and (ii) Lemma 4.5 translates to γℓ(Y) = min_{t≥0} ( ℓt + ∑_{i∈[m]} (Yi − t)^+ ); the RHS is exactly Topℓ(Y) [8] and is minimized at Y↓_ℓ.

Proof of Theorem 4.3. Let ρ := inf{θ : ∑_{i∈[m]} E[Y_i^{≥θ}] ≤ ℓθ}. Note that ∑_{i∈[m]} E[Y_i^{>ρ}] ≤ ℓρ. By Lemma 4.5, we have γℓ(Y) ≤ ℓρ + ∑_{i∈[m]} E[(Yi − ρ)^+] ≤ ℓρ + ∑_{i∈[m]} E[Y_i^{>ρ}] ≤ 2ℓρ. For any θ < ρ, Lemma 4.2 implies that E[Topℓ(Y)] > ℓθ/2. Thus, E[Topℓ(Y)] ≥ ℓρ/2.
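The second proxy can also be checked numerically (our sketch, hypothetical instance): compute the expected histogram curve exactly for scaled-Bernoulli coordinates, read off τℓ(Y) and γℓ(Y), and compare against a Monte Carlo estimate of E[Topℓ(Y)] as in Theorem 4.3.

```python
import numpy as np

rng = np.random.default_rng(3)
m, ell = 8, 3
sizes = rng.uniform(1.0, 5.0, m)        # Y_i = sizes[i] * Bernoulli(probs[i]), independent
probs = rng.uniform(0.2, 0.9, m)

def expected_hist(theta):
    # E[N_{>theta}(Y)] = sum_i Pr[Y_i > theta] for scaled-Bernoulli coordinates.
    return float(np.sum(np.where(sizes > theta, probs, 0.0)))

# gamma_ell(Y) = integral of min{ell, E[N_{>theta}(Y)]}, approximated on a fine grid,
# and tau_ell(Y) = smallest theta with E[N_{>theta}(Y)] < ell.
grid = np.linspace(0.0, sizes.max(), 20_001)
curve = np.array([expected_hist(t) for t in grid])
gamma = float(np.sum(np.minimum(ell, curve[:-1]) * np.diff(grid)))
tau = grid[np.argmax(curve < ell)]

Y = sizes * (rng.random((200_000, m)) < probs)
exp_top_ell = np.sort(Y, axis=1)[:, -ell:].sum(axis=1).mean()
print(tau, gamma, exp_top_ell <= gamma <= 4 * exp_top_ell)   # Theorem 4.3 bracketing
```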

The following result will be useful in Section 5.

Lemma 4.6. For any ℓ ∈ {2, . . . , m}, we have τℓ−1(Y) ≥ γℓ(Y) − γℓ−1(Y) ≥ τℓ(Y).

Proof. Both inequalities are easily inferred from Fig. 1. Algebraically, we have

γℓ(Y) − γℓ−1(Y) = ∫_0^∞ ( min{ℓ, E[N_{>θ}(Y)]} − min{ℓ − 1, E[N_{>θ}(Y)]} ) dθ.

For θ ≥ τℓ−1(Y), the integrand is 0 since E[N_{>θ}(Y)] < ℓ − 1; for θ < τℓ(Y), the integrand is 1 since E[N_{>θ}(Y)] ≥ ℓ; and for τℓ(Y) ≤ θ < τℓ−1(Y), the integrand is at most 1 since ℓ − 1 ≤ E[N_{>θ}(Y)] < ℓ. It follows that τℓ(Y) ≤ γℓ(Y) − γℓ−1(Y) ≤ τℓ−1(Y).

5 Expectation of a monotone, symmetric norm

Let Y follow a product distribution on R^m_{≥0}, and f : R^m → R_{≥0} be an arbitrary monotone, symmetric norm. By Theorem 3.4 (a), there is a collection C ⊆ R^m_{≥0} of weight vectors with non-increasing coordinates such that f(x) = sup_{w∈C} w^T x↓ for all x ∈ R^m_{≥0}. For w ∈ C, recall that we define wℓ := 0 whenever ℓ > m. We now prove one of our main technical results: the expectation of a supremum of ordered norms is within a constant factor of the supremum of the expectation of ordered norms. Formally, it is clear that E[f(Y)] = E[sup_{w∈C} w^T Y↓] ≥ sup_{w∈C} E[w^T Y↓] = sup_{w∈C} w^T E[Y↓] = f(E[Y↓]); we show that an inequality in the opposite direction also holds. Note that Y↓ is the vector of order statistics of Y, from max to min: i.e., Y↓_1 is the maximum entry of Y, Y↓_2 is the second-maximum entry, and so on.

Theorem 5.1. Let Y follow a product distribution on R^m_{≥0}, and f : R^m → R≥0 be a monotone, symmetric norm. Then f(E[Y↓]) ≤ E[f(Y)] ≤ 28 · f(E[Y↓]).

Theorem 5.1 is the backbone of our framework for stochastic minimum-norm optimization. Since f(E[Y↓]) = sup_{w∈C} ∑_{ℓ∈[m]} (wℓ − wℓ+1) E[Topℓ(Y)], Theorem 5.1 gives us a concrete way of bounding E[f(Y)], namely, by bounding all expected Topℓ norms. In particular, we obtain the following corollary, which we call approximate stochastic majorization. Recall that POSm = {2^i : i ∈ Z≥0, 2^i ≤ m}.

Theorem 5.2 (Approximate stochastic majorization). Let Y and W follow product distributions on R^m_{≥0}. Let f be a monotone, symmetric norm on R^m.
(a) If E[Topℓ(Y)] ≤ α · E[Topℓ(W)] for all ℓ ∈ [m], then E[f(Y)] ≤ 28α · E[f(W)].
(b) If E[Topℓ(Y)] ≤ α · E[Topℓ(W)] for all ℓ ∈ POSm, then E[f(Y)] ≤ 2 · 28α · E[f(W)].

Proof. Let C ⊆ R^m be such that f(x) = sup_{w∈C} w^T x↓ for x ∈ R^m_{≥0}. Part (a) follows directly from Theorem 5.1, since for each random variable R ∈ {Y, W}, f(E[R↓]) depends only on the E[Topℓ(R)] quantities. We have

E[f(Y)] ≤ 28 · f(E[Y↓]) = 28 · sup_{w∈C} ∑_{ℓ∈[m]} (wℓ − wℓ+1) E[Topℓ(Y)]
 ≤ 28α · sup_{w∈C} ∑_{ℓ∈[m]} (wℓ − wℓ+1) E[Topℓ(W)] = 28α · f(E[W↓]) ≤ 28α · E[f(W)].

Part (b) follows from part (a). For any ℓ ∈ [m], let ℓ′ be the largest index in POSm that is at most ℓ. Then ℓ′ ≤ ℓ ≤ 2ℓ′, and so Topℓ(Y) ≤ 2Topℓ′(Y) and Topℓ′(W) ≤ Topℓ(W) pointwise. It follows that E[Topℓ(Y)] ≤ 2E[Topℓ′(Y)] ≤ 2α · E[Topℓ′(W)] ≤ 2α · E[Topℓ(W)] for all ℓ ∈ [m].
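The quantities E[Topℓ(Y)] for ℓ ∈ POSm are the handles our framework works with throughout. The following Python sketch (our own illustration; the exponential coordinate distribution and the sample count are arbitrary choices) estimates them by Monte Carlo sampling; by Theorem 5.2 (b), dominating these few quantities, up to a factor α, pins down E[f(Y)] up to a 2·28α factor for every monotone, symmetric norm f.

import numpy as np

rng = np.random.default_rng(0)

def expected_top_norms(sampler, m, trials=20000):
    # Monte Carlo estimates of E[Top_ell(Y)] for all ell in POS_m.
    # sampler() returns one sample of the m-dimensional vector Y (coordinates are
    # assumed independent, as in the product-distribution setting).
    pos = [2 ** i for i in range(m.bit_length()) if 2 ** i <= m]
    acc = {ell: 0.0 for ell in pos}
    for _ in range(trials):
        y = np.sort(sampler())[::-1]          # Y^down: order statistics, max to min
        for ell in pos:
            acc[ell] += y[:ell].sum()         # Top_ell(Y) = sum of the ell largest entries
    return {ell: acc[ell] / trials for ell in pos}

m = 8
print(expected_top_norms(lambda: rng.exponential(scale=1.0, size=m), m))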

Remark 2. As should be evident from the proof above, the upper bounds on the ratio E[f(Y)]/f(E[Y↓]) in Theorem 5.1, and the ratio E[f(Y)]/(α · E[f(W)]) in Theorem 5.2, are closely related. Let κf be the supremum of E[f(Y)]/f(E[Y↓]) over random vectors Y that follow a product distribution on R^m_{≥0}. Let ζf be the supremum of E[f(W^(1))]/E[f(W^(2))] over random vectors W^(1), W^(2) that follow product distributions on R^m_{≥0} and satisfy E[Topℓ(W^(1))] ≤ E[Topℓ(W^(2))] for all ℓ ∈ [m]. The above proof shows that ζf ≤ κf. It is worth noting that κf ≤ ζf as well. To see this, take W^(1) = Y, and W^(2) to be the deterministic vector E[Y↓], which trivially follows a product distribution. By definition, we have E[Topℓ(Y)] = Topℓ(W^(2)) for all ℓ ∈ [m]. So we have that E[f(Y)] ≤ ζf · f(E[Y↓]).

Theorem 5.2 (b) has immediate applicability for a given stochastic minimum-norm optimization problem such as stochastic min-norm load balancing. In our algorithms, we enumerate (say in powers of 2) all possible sequences {Bℓ}ℓ∈POSm of estimates of {E[Topℓ(O∗)]}ℓ∈POSm, where O∗ is the cost vector arising from an optimal solution, and find a solution (if one exists) whose cost vector satisfies (roughly speaking) these expected-Topℓ-norm estimates. A final step is to identify which of the solutions so obtained is a near-optimal solution. While a probabilistic guarantee follows easily since one can provide a randomized oracle for evaluating the objective E[f(Y)] (as f(Y) enjoys good concentration properties), one can do better and deterministically identify a good solution. To this end, we show below (Corollary 5.4) that if {Bℓ}ℓ∈POSm is a sequence that term-by-term well-estimates {E[Topℓ(Y)]}ℓ∈POSm, either from below or from above, then we can define a deterministic vector ~b ∈ R^m_{≥0} such that f(~b) well-estimates E[f(Y)], from below or above respectively. We will apply parts (a) and (b) of Corollary 5.4 with the cost vectors arising from our solution and an optimal solution respectively, to argue the near-optimality of our solution.

Corollary 5.4 utilizes a slightly technical result stated in Lemma 5.3. We need the following definition. Let K := 2^⌊log2 m⌋ be the largest index in POSm. Given a non-decreasing, nonnegative sequence {Bℓ}ℓ∈POSm, its upper envelope curve b : [0, m] → R≥0 is defined by b(x) := max{y : (x, y) ∈ conv(S)}, where S = {(ℓ, Bℓ) : ℓ ∈ POSm} ∪ {(0, 0), (m, BK)} and conv(S) denotes the convex hull of S.
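The upper envelope curve is just the upper boundary of conv(S), so it can be computed with a standard upper-hull routine. The Python sketch below (our own illustration, with our own helper name) constructs b and the vector ~b = (b(i) − b(i−1))_{i∈[m]} from a sequence {Bℓ}ℓ∈POSm.

def envelope_vector(B, m):
    # B: dict {ell in POS_m: B_ell}. Returns the vector with i-th entry b(i) - b(i-1),
    # where b is the upper envelope curve of S = {(ell, B_ell)} ∪ {(0,0), (m, B_K)}.
    pos = sorted(B)
    K = pos[-1]
    pts = sorted(set([(0.0, 0.0)] + [(float(l), float(B[l])) for l in pos] + [(float(m), float(B[K]))]))
    hull = []                                   # upper hull, monotone-chain style
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or below the segment hull[-2] -- p
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    def b(x):
        # piecewise-linear interpolation along the upper hull
        for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
            if x1 <= x <= x2:
                return y1 if x2 == x1 else y1 + (y2 - y1) * (x - x1) / (x2 - x1)
        return hull[-1][1]
    return [b(i) - b(i - 1) for i in range(1, m + 1)]

# Example with m = 8 and B_ell = sqrt(ell), which satisfies B_ell <= 2*B_{ell/2}.
print(envelope_vector({1: 1.0, 2: 2 ** 0.5, 4: 2.0, 8: 8 ** 0.5}, 8))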

Lemma 5.3. Let f be a monotone, symmetric norm on R^m_{≥0}, and y ∈ R^m_{≥0} be a non-increasing vector. Let {Bℓ}ℓ∈POSm be a non-decreasing, nonnegative sequence such that Bℓ ≤ 2B_{ℓ/2} for all ℓ ∈ POSm, ℓ > 1. Let b : [0, m] → R≥0 be the upper envelope curve of {Bℓ}ℓ∈POSm. Define ~b := (b(i) − b(i−1))_{i∈[m]}.
(a) If Topℓ(y) ≤ Bℓ for all ℓ ∈ POSm, then f(y) ≤ 2f(~b).
(b) If Topℓ(y) ≥ Bℓ for all ℓ ∈ POSm, then f(y) ≥ f(~b)/3.

Corollary 5.4. Let Y follow a product distribution on R^m_{≥0}, and f be a monotone, symmetric norm on R^m. Let {Bℓ}ℓ∈POSm be a non-decreasing sequence such that Bℓ ≤ 2B_{ℓ/2} for all ℓ ∈ POSm, ℓ > 1. Let ~b ∈ R^m_{≥0} be the vector given by Lemma 5.3 for the sequence {Bℓ}ℓ∈POSm.
(a) If E[Topℓ(Y)] ≤ αBℓ for all ℓ ∈ POSm (where α > 0), then E[f(Y)] ≤ 2 · 28α · f(~b).
(b) If Bℓ ≤ 2 · E[Topℓ(Y)] for all ℓ ∈ POSm, then f(~b) ≤ 6 · E[f(Y)].


To avoid delaying the proof of Theorem 5.1, we defer the proofs of Lemma 5.3 and Corollary 5.4 to Section 5.1. The proof of Theorem 5.1 will also show that E[f(Y)] can be controlled by controlling the τℓ(Y) statistics. This is particularly useful in settings where the Yi's are “atomic” random variables, and we have direct access to their distributions. We discuss this in Section 5.2.

Proof of Theorem 5.1

We now delve into the proof of the second (and main) inequality of Theorem 5.1. Since Y and f are fixed throughout, we drop the dependence on these in most items of notation in the sequel. We may assume without loss of generality that f is normalized (i.e., f(1, 0, . . . , 0) = 1), as scaling f to normalize it scales both E[f(Y)] and f(E[Y↓]) by the same factor. It will be easier to work with the proxy function γℓ(Y) for E[Topℓ(Y)] defined in Section 4. Recall that γℓ = ℓτℓ + ∫_{τℓ}^∞ E[N>θ(Y)] dθ, where τℓ is the smallest θ such that E[N>θ(Y)] < ℓ. Define τ0 := ∞ and γ0 := 0 for notational convenience. Define LB′ := sup_{w∈C} ∑_{ℓ∈[m]} (wℓ − wℓ+1)γℓ. Given Theorem 4.3, it suffices to show that E[f(Y)] ≤ 7 · LB′.

The intuition and the roadmap of the proof are as follows. We have f(Y) = sup_{w∈C} ∑_{ℓ∈[m]} (wℓ − wℓ+1)Topℓ(Y). Plugging in Topℓ(Y) = ∫_0^∞ min{ℓ, N>θ(Y)} dθ ≤ ℓτℓ + ∫_{τℓ}^∞ N>θ(Y) dθ, we obtain that

f(Y) ≤ sup_{w∈C} [ ∑_{ℓ∈[m]} (wℓ − wℓ+1)( ℓτℓ + ∫_{τℓ}^∞ N>θ(Y) dθ ) ].   (2)

Comparing (2) and LB′ (after expanding out the γℓ terms), syntactically, the only difference is that the N>θ(Y) terms appearing in (2) are replaced by their expectations in LB′; however, the dependence of f(Y) on the N>θ(Y) terms is quite non-linear, due to the sup operator. The chief insight is that the N>θ(Y) terms that really matter are those for θ = τℓ for ℓ ∈ [m], and that the N>τℓ(Y) quantities are tightly concentrated around their expectations. This allows us to, in essence, replace the N>θ(Y) terms for θ = τℓ, ℓ ∈ [m] with their expectations (roughly speaking) when we consider E[f(Y)], incurring a constant-factor loss, and thereby argue that E[f(Y)] = O(LB′).

To elaborate, since N>θ(Y) is non-increasing in θ, we can upper bound each ∫_{τℓ}^∞ N>θ(Y) dθ expression in terms of N>τi(Y), for i = 2, . . . , ℓ, and ∫_{τ1}^∞ N>θ(Y) dθ. Consequently, the RHS of (2) can be upper bounded in terms of the N>τℓ(Y) quantities for ℓ = 2, . . . , m, and a term that depends on ∫_{τ1}^∞ N>θ(Y) dθ = ∑_{i∈[m]} (Yi − τ1)+. It is easy to charge the expectation of the latter term directly to LB′. We use f−1(Y) to denote (an upper bound on) the contribution from the N>τℓ(Y) quantities, ℓ = 2, . . . , m (see Lemma 5.6). We bound E[f−1(Y)] (and hence E[f(Y)]) by O(LB′) by proving that the tail probability Pr[f−1(Y) > σ · LB′] decays exponentially with σ. The crucial observation here is that Pr[f−1(Y) > σ · LB′] is at most the probability that N>τℓ(Y) > Ω(σ) · ℓ for some index ℓ = 2, . . . , m (see Lemma 5.7 (a)). Each N>τℓ(Y) is tightly concentrated around its expectation, which is at most ℓ, and so this probability is O(e^{−Ω(σ)}) (see Lemma 5.7 (b)).

We begin by proving convenient lower and upper bounds on LB′ that will also relate LB′ to the f-norm of the deterministic vector ~τ := (τℓ)ℓ∈[m]. Define ~τ−1 := (τℓ)ℓ∈{2,...,m}. Recall that f is normalized, and so sup_{w∈C} w1 = 1.

Lemma 5.5. (a) LB′ ≥ f(∑_{i∈[m]} E[(Yi − τ1)+] + τ1, ~τ−1); (b) LB′ ≤ f(∑_{i∈[m]} E[(Yi − τ1)+] + 2τ1, ~τ−1).

Proof. We have LB′ = sup_{w∈C} ∑_{ℓ∈[m]} (wℓ − wℓ+1)γℓ = sup_{w∈C} ∑_{ℓ∈[m]} wℓ(γℓ − γℓ−1). Let ~v be the vector (γℓ − γℓ−1)ℓ∈[m]. (Recall that γ0 := 0, so v1 = γ1.) Lemma 4.6, combined with the fact that γ1 ≥ τ1, shows that ~v has non-increasing coordinates, and so we have LB′ = f(~v). The bounds on LB′ now easily follow from the bounds on γℓ − γℓ−1 in Lemma 4.6.

Part (a) follows immediately from the lower bound on γℓ − γℓ−1 in Lemma 4.6, since it shows that ~v is coordinate-wise larger than the vector (γ1, τ2, . . . , τm) = (∑_{i∈[m]} E[(Yi − τ1)+] + τ1, ~τ−1).


For part (b), the upper bound on γℓ − γℓ−1 from Lemma 4.6 shows that ~v is coordinate-wise smaller than the vector ~u := (γ1, τ1, τ2, . . . , τm−1). Therefore, LB′ = f(~v) ≤ f(~u). Let ~r denote the vector (γ1 + τ1, ~τ−1). Finally, due to Theorem 3.4 (b), note that f(~u) ≤ f(~r), since it is easy to see that Topℓ(~u) ≤ Topℓ(~r) for all ℓ ∈ [m].

For a given w ∈ C, the term ∑_{ℓ∈[m]} (wℓ − wℓ+1)ℓτℓ will figure frequently in our expressions, so to prevent cumbersome notation, we denote this by A(w) in the sequel. Define the quantity

f−1(Y) := sup_{w∈C} ( A(w) + ∑_{ℓ=2}^m wℓ(τℓ−1 − τℓ)N>τℓ(Y) ).

Lemma 5.6. We have f(Y) ≤ ∑_{i∈[m]} (Yi − τ1)+ + f−1(Y).

Proof. We have f(Y) = sup_{w∈C} ∑_{ℓ∈[m]} (wℓ − wℓ+1)Topℓ(Y). Recall that τ0 := ∞. Plugging in the inequality Topℓ(Y) ≤ ℓτℓ + ∫_{τℓ}^∞ N>θ(Y) dθ for all ℓ ∈ [m], we obtain that

f(Y) ≤ sup_{w∈C} ( A(w) + ∑_{ℓ∈[m]} (wℓ − wℓ+1) ∫_{τℓ}^∞ N>θ(Y) dθ )
 = sup_{w∈C} ( A(w) + ∑_{ℓ∈[m]} wℓ ( ∫_{τℓ}^∞ N>θ(Y) dθ − ∫_{τℓ−1}^∞ N>θ(Y) dθ ) )
 = sup_{w∈C} ( A(w) + w1 ∫_{τ1}^∞ N>θ(Y) dθ + ∑_{ℓ=2}^m wℓ ∫_{τℓ}^{τℓ−1} N>θ(Y) dθ )
 ≤ ( sup_{w∈C} w1 ) ∑_{i∈[m]} (Yi − τ1)+ + sup_{w∈C} ( A(w) + ∑_{ℓ=2}^m wℓ(τℓ−1 − τℓ)N>τℓ(Y) )
 = ∑_{i∈[m]} (Yi − τ1)+ + f−1(Y).

The final inequality follows since ∫_t^∞ N>θ(Y) dθ = ∑_{i∈[m]} (Yi − t)+ for any t ≥ 0, and since N>θ(Y) is a non-increasing function of θ.

Lemma 5.7. Let σ ≥ 3. (a) Pr[f−1(Y) > σ · LB′] is at most Pr[∃ℓ ∈ {2, . . . , m} s.t. N>τℓ(Y) > σ(ℓ−1)]; and (b) the latter probability is at most 2.55 · e^{−σ/3}.

Proof. For part (a), we show that if N>τℓ(Y) ≤ σ(ℓ − 1) for all ℓ = 2, . . . , m, then f−1(Y) ≤ σf(~τ) ≤ σ · LB′. The upper bounds on N>τℓ(Y) (and since A(w) ≥ 0 for all w ∈ C) imply that f−1(Y) is at most σ · sup_{w∈C} ( A(w) + ∑_{ℓ=2}^m wℓ(τℓ−1 − τℓ)(ℓ − 1) ). Expanding A(w) = ∑_{ℓ∈[m]} (wℓ − wℓ+1)ℓτℓ, this bound simplifies to σ · sup_{w∈C} ∑_{ℓ∈[m]} wℓτℓ = σ · f(~τ).

For part (b), we note that E[N>τℓ(Y)] < ℓ by the choice of τℓ, and N>τℓ(Y) = ∑_{i∈[m]} 1_{Yi>τℓ}, where the 1_{Yi>τℓ} are independent random variables. Noting that ℓ − 1 ≥ ℓ/2 (as ℓ ≥ 2), by Chernoff bounds (Lemma 3.1), we have Pr[N>τℓ(Y) > σ(ℓ − 1)] ≤ e^{−σℓ/6}. So

Pr[∃ℓ ∈ {2, . . . , m} s.t. N>τℓ(Y) > σ(ℓ − 1)] ≤ ∑_{ℓ=2}^m e^{−σℓ/6} ≤ e^{−σ/3}/(1 − e^{−σ/6}) ≤ 2.55 · e^{−σ/3}.

The final inequality follows because σ ≥ 3.

Finishing up the proof of Theorem 5.1. We have

E[f−1(Y)] = ∫_0^∞ Pr[f−1(Y) > θ] dθ ≤ 3 · LB′ + ∫_{3LB′}^∞ Pr[f−1(Y) > θ] dθ
 ≤ 3 · LB′ + LB′ · ∫_3^∞ Pr[f−1(Y) > σ · LB′] dσ,

where the final inequality follows by the variable-change σ = θ/LB′. Using Lemma 5.7, the factor multiplying LB′ in the second term is at most ∫_3^∞ 2.55e^{−σ/3} dσ = 3 · 2.55 · e^{−1} ≤ 3. Combining this with Lemmas 5.6 and 5.5, we obtain that E[f(Y)] ≤ 7 · LB′. Hence, by Theorem 4.3, E[f(Y)] ≤ 28 · f(E[Y↓]).

5.1 Proofs of Lemma 5.3 and Corollary 5.4

Lemma 5.3 (restated). Let f be a monotone, symmetric norm on R^m_{≥0}, and y ∈ R^m_{≥0} be a non-increasing vector. Let {Bℓ}ℓ∈POSm be a non-decreasing, nonnegative sequence such that Bℓ ≤ 2B_{ℓ/2} for all ℓ ∈ POSm, ℓ > 1. Let b : [0, m] → R≥0 be the upper envelope curve of the sequence {Bℓ}ℓ∈POSm. Define ~b := (b(i) − b(i−1))_{i∈[m]}.
(a) If Topℓ(y) ≤ Bℓ for all ℓ ∈ POSm, then f(y) ≤ 2f(~b).
(b) If Topℓ(y) ≥ Bℓ for all ℓ ∈ POSm, then f(y) ≥ f(~b)/3.

Proof. Recall that the upper envelope curve b : [0, m] → R≥0 of {Bℓ}ℓ∈POSm is given by b(x) := max{y : (x, y) ∈ conv(S)}, where S = {(ℓ, Bℓ) : ℓ ∈ POSm} ∪ {(0, 0), (m, BK)} and conv(S) denotes the convex hull of S. Define B0 := 0, and Bℓ := BK for all ℓ > K, for notational convenience.

We may assume that B1 > 0, as otherwise b(x) = 0 for all x ∈ [0, m]; so ~b = ~0, and in part (a), we have y = ~0, so both parts hold trivially. By Theorem 3.4 (b), it suffices to show that Topi(y) ≤ 2Topi(~b) for all i ∈ [m] for part (a), and to show that Topi(y) ≥ Topi(~b)/3 for all i ∈ [m] for part (b).

It is easy to see that b is a non-decreasing, concave function: consider 0 ≤ x < x′ ≤ m. By Caratheodory's theorem, there are at most two points (ℓ1, Bℓ1), (ℓ2, Bℓ2) in S such that (x, b(x)) lies in the convex hull of these two points, which is a line segment L. Without loss of generality, suppose ℓ1 ≤ x ≤ ℓ2, and so Bℓ1 ≤ b(x) ≤ Bℓ2. If x′ ≤ ℓ2, then the point (x′, ·) on L has y-coordinate at least b(x). If x′ > ℓ2, then b(x′) is at least the y-coordinate of the point (x′, ·) lying on the line segment joining (ℓ2, Bℓ2) and (m, BK); this value is at least Bℓ2 ≥ b(x). Concavity follows simply from the fact that for any x1, x2 ∈ [0, m] and λ ∈ [0, 1], since the points (x1, b(x1)) and (x2, b(x2)) lie in conv(S), the point (λx1 + (1 − λ)x2, λb(x1) + (1 − λ)b(x2)) is also in conv(S). Hence, b(λx1 + (1 − λ)x2) ≥ λb(x1) + (1 − λ)b(x2). It follows that ~b is a non-increasing vector, and Topi(~b) = ∑_{j=1}^i ~bj = b(i) for all i ∈ [m].

Part (a) now follows easily. Consider any i ∈ [m], and let ℓ be the largest index in POSm that is at most i. We have

Topi(~b) = b(i) ≥ b(ℓ) ≥ Topℓ(y) ≥ Topi(y)/2.

The first inequality is due to the monotonicity of b; the second follows from our assumptions; and the last inequality is because i ≤ 2ℓ.

For part (b), consider any i ∈ [m], and again let ℓ be the largest index in POSm that is at most i. If (i, b(i)) is an extreme point of conv(S), then it must be that i ∈ POSm ∪ {m}, and b(i) = Bi. Also, we have Bi ≤ Bℓ ≤ Topℓ(y) ≤ Topi(y). So suppose (i, b(i)) is not an extreme point of conv(S). Then there are two points (ℓ1, Bℓ1), (ℓ2, Bℓ2) in S, which are extreme points of conv(S), such that (i, b(i)) lies on the line segment L joining these two points. Let ℓ1 ≤ i ≤ ℓ2. The slope of L is s := (Bℓ2 − Bℓ1)/(ℓ2 − ℓ1). Noting that b(x) = Bx for x ∈ {ℓ1, ℓ2}, we have b(i) = Bℓ1 + (i − ℓ1)s. We argue that s ≤ Bℓ/ℓ, which implies that

b(i) ≤ Bℓ1 + (i − ℓ1) · Bℓ/ℓ ≤ 3Bℓ ≤ 3Topℓ(y) ≤ 3Topi(y).

Since Bℓ′ ≤ 2B_{ℓ′/2} for all ℓ′ ∈ POSm, ℓ′ > 1, it follows that {Bℓ′/ℓ′}ℓ′∈POSm is a non-increasing sequence. It follows that Bℓ1 ≥ ℓ1 · Bℓ2/ℓ2, and therefore s ≤ Bℓ2/ℓ2 ≤ Bℓ/ℓ.


Corollary 5.4 (restated). Let Y follow a product distribution on R^m_{≥0}, and f be a monotone, symmetric norm on R^m. Let {Bℓ}ℓ∈POSm be a non-decreasing sequence such that Bℓ ≤ 2B_{ℓ/2} for all ℓ ∈ POSm, ℓ > 1. Let ~b ∈ R^m_{≥0} be the vector given by Lemma 5.3 for the sequence {Bℓ}ℓ∈POSm.
(a) If E[Topℓ(Y)] ≤ αBℓ for all ℓ ∈ POSm (where α > 0), then E[f(Y)] ≤ 2 · 28α · f(~b).
(b) If Bℓ ≤ 2 · E[Topℓ(Y)] for all ℓ ∈ POSm, then f(~b) ≤ 6 · E[f(Y)].

Proof. The proof follows by combining Theorem 5.1 and Lemma 5.3. Consider the vector y = E[Y↓]. By Theorem 5.1, we have E[f(Y)] ≤ 28f(y). For part (a), we apply Lemma 5.3 (a) with the vector y and the sequence {αBℓ}ℓ∈POSm. Note that the vector given by Lemma 5.3 for {αBℓ}ℓ∈POSm is exactly α~b. Thus, we obtain that f(y) ≤ 2f(α~b) = 2α · f(~b).

For part (b), we have E[f(Y)] ≥ f(y). Applying Lemma 5.3 (b) with the vector y and the sequence {Bℓ/2}ℓ∈POSm, we obtain that f(y) ≥ f(~b/2)/3 = f(~b)/6.

5.2 Controlling E[f(Y)] using the τℓ(Y) statistics

It is implicit from the proof of Theorem 5.1 that the τℓ statistics control E[f(Y)]. We now make this explicit, so that we can utilize this in settings where we have direct access to the distributions of the Yi random variables. As before, we drop the dependence on Y and f in most items of notation. The proof of Theorem 5.1, coupled with Lemma 5.5 and the fact that LB′ = Θ(f(E[Y↓])), yields the following useful expression involving the τℓ's for estimating E[f(Y)]. As before, ~τ(Y) is the vector (τℓ(Y))ℓ∈[m].

Lemma 5.8. Let Y follow a product distribution on R^m_{≥0}. Let f be a normalized, monotone, symmetric norm on R^m. We have (1/14) · E[f(Y)] ≤ ∑_{i∈[m]} E[(Yi − τ1(Y))+] + f(~τ(Y)) ≤ 8 · E[f(Y)].

Proof. We abbreviate τℓ(Y) to τℓ for all ℓ ∈ [m], and ~τ(Y) to ~τ. We have E[f(Y)] ≥ f(E[Y↓]) ≥ LB′/4 ≥ (1/8) · (∑_{i∈[m]} E[(Yi − τ1)+] + f(~τ)). The second inequality is due to Theorem 4.3, and the third follows from Lemma 5.5 (a).

We also have E[f(Y)] ≤ 7 · LB′ ≤ 2 · 7 · (∑_{i∈[m]} E[(Yi − τ1)+] + f(~τ)). The first inequality is what we show in the proof of Theorem 5.1; the second inequality follows from Lemma 5.5 (b).

In utilizing Lemma 5.8 in an application, we work with estimates {tℓ}ℓ∈POSm of the τ∗ℓ values, for ℓ ∈ POSm, of the cost vector arising from an optimal solution, and seek a solution whose cost vector Y minimizes ∑_{i∈[m]} E[(Yi − t1)+] subject to the constraint that (roughly speaking) the τℓ(Y)'s are bounded by the corresponding tℓ values, for all ℓ ∈ POSm. Lemma 5.8 then indicates that if the tℓ's are good estimates of the τ∗ℓ values, then our solution is a near-optimal solution. We state a more-robust such result below that incorporates various approximation factors, which will be particularly convenient to apply since such approximation factors inevitably arise in the process of finding Y. We need the following notation: given a non-increasing sequence {vℓ}ℓ∈POSm, we define its expansion to be the vector v′ ∈ R^m given by v′i := v_{2^⌊log2 i⌋} for all i ∈ [m].

Theorem 5.9. Let Y follow a product distribution on R^m_{≥0}. Let f be a normalized, monotone, symmetric norm on R^m. Let {tℓ}ℓ∈POSm be a nonnegative, non-increasing sequence, and ~t′ be its expansion.
(a) Suppose that τ_{βℓ}(Y) ≤ αtℓ for all ℓ ∈ POSm, where α, β ≥ 1. Then, we have that
E[f(Y)] ≤ 2 · 7 · (α + 2)β · (∑_{i∈[m]} E[(Yi − αt1)+] + f(~t′)).
(b) Suppose that τℓ(Y) ≤ tℓ ≤ 2τℓ(Y) + κ for all ℓ ∈ POSm, where κ ≥ 0. Then, we have that
∑_{i∈[m]} E[(Yi − t1)+] + f(~t′) ≤ 32 · E[f(Y)] + mκ.

Proof. We abbreviate POSm to POS. We abbreviate τℓ(Y ) to τℓ for all ℓ ∈ [m]. Define τ0 := ∞ for

notational convenience. Let K := 2⌊log2 m⌋ be the largest index in POS. Let ~τ = (τ1, τ2, . . . , τm).


Part (a). We begin with the bound E[f(Y)] ≤ 2·7·(∑_{i∈[m]} E[(Yi − τ1)+] + f(~τ)) given by Lemma 5.8. We proceed to bound the RHS expression in terms of the quantities in the stated upper bound.

By Lemma 4.5, we have

τ1 + ∑_{i∈[m]} E[(Yi − τ1)+] = γ1(Y) ≤ αt1 + ∑_{i∈[m]} E[(Yi − αt1)+].   (3)

To compare f(~τ) and f(~t′), by Theorem 3.4 (b), it suffices to compare the Topℓ-norms of ~τ and ~t′. Consider i ∈ [m] with i ≥ β. Note that 2^⌊log2(⌊i/β⌋)⌋ ∈ POS. We have

τi ≤ τ_{β·⌊i/β⌋} ≤ τ_{β·2^⌊log2(⌊i/β⌋)⌋} ≤ αt_{2^⌊log2(⌊i/β⌋)⌋}.

Therefore, for any ℓ ∈ [m], we have that

Topℓ(~τ) = ∑_{i=1}^ℓ τi ≤ βτ1 + ∑_{i=⌈β⌉}^ℓ τi ≤ βτ1 + ∑_{i=⌈β⌉}^ℓ αt_{2^⌊log2(⌊i/β⌋)⌋}.

The tj terms that appear in the second summation are for indices j ∈ POS, j ≤ ℓ/β, and each such tj term appears at most βj times. Therefore, we have Topℓ(~τ) ≤ βτ1 + β∑_{j∈POS: j≤ℓ/β} jtj. Observe that in ~t′, each t′j term for j ∈ POS \ {K} appears j times, and ∑_{j∈POS: j≤ℓ/β} j ≤ 2ℓ/β ≤ 2ℓ. This implies that ∑_{j∈POS: j≤ℓ/β} jtj is at most 2 · Topℓ(~t′), and we obtain that Topℓ(~τ) ≤ βτ1 + 2β · Topℓ(~t′).

By Theorem 3.4 (b) (and since f is normalized), it follows that f(~τ) ≤ βτ1 + 2β · f(~t′). Adding this to β times (3) (recall that β ≥ 1), noting that t1 ≤ f(~t′), and simplifying gives (∑_{i∈[m]} E[(Yi − τ1)+] + f(~τ)) ≤ (α + 2)β (∑_{i∈[m]} E[(Yi − αt1)+] + f(~t′)). Combining this with the upper bound on E[f(Y)] mentioned at the beginning proves part (a).

Part (b). We now start with the bound ∑_{i∈[m]} E[(Yi − τ1)+] + f(~τ) ≤ 8 · E[f(Y)] given by Lemma 5.8, and proceed to upper bound ∑_{i∈[m]} E[(Yi − t1)+] + f(~t′) in terms of the LHS of this inequality.

Since t1 ≥ τ1, it is immediate that ∑_{i∈[m]} E[(Yi − t1)+] ≤ ∑_{i∈[m]} E[(Yi − τ1)+]. As should be routine by now, we compare f(~t′) and f(~τ) by comparing the Topℓ-norms of ~t′ and ~τ. First, note that for any i ∈ [m], we have

t′i = t_{2^⌊log2 i⌋} ≤ 2τ_{2^⌊log2 i⌋} + κ ≤ 2τ_{⌈i/2⌉} + κ,

where the last inequality follows since 2^⌊log2 i⌋ > i/2. Now consider any ℓ ∈ [m]. We have

Topℓ(~t′) = ∑_{i=1}^ℓ t′i ≤ ∑_{i=1}^ℓ (2τ_{⌈i/2⌉} + κ) ≤ 4∑_{j=1}^{⌈ℓ/2⌉} τj + mκ ≤ 4Topℓ(~τ) + mκ.

It follows that f(~t′) ≤ 4 · f(~τ) + mκ, and hence we have

∑_{i∈[m]} E[(Yi − t1)+] + f(~t′) ≤ 4(∑_{i∈[m]} E[(Yi − τ1)+] + f(~τ)) + mκ ≤ 32 · E[f(Y)] + mκ.
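To make the bookkeeping around Theorem 5.9 concrete, the Python sketch below (our own illustration; the Bernoulli coordinates and the Top2 norm are arbitrary stand-ins) builds the expansion ~t′ of a sequence {tℓ}ℓ∈POSm and evaluates the estimator ∑_i E[(Yi − t1)+] + f(~t′).

import math

def expansion(t, m):
    # Expansion of a non-increasing sequence {t_ell}_{ell in POS_m} (a dict keyed
    # by powers of 2): t'_i := t_{2^floor(log2 i)} for i = 1..m.
    return [t[2 ** int(math.log2(i))] for i in range(1, m + 1)]

def estimator(jobs, t, m, f):
    # sum_i E[(Y_i - t_1)^+] + f(t') from Theorem 5.9, for independent Bernoulli
    # coordinates Y_i (value s_i w.p. q_i, else 0); f maps a vector to its norm value.
    t_prime = expansion(t, m)
    excess = sum(q * max(s - t[1], 0.0) for s, q in jobs)
    return excess + f(t_prime)

jobs = [(3.0, 0.5), (2.0, 0.9), (1.0, 0.25), (4.0, 0.1)]
t = {1: 2.0, 2: 1.0, 4: 1.0}                   # a non-increasing guess sequence
top2 = lambda v: sum(sorted(v, reverse=True)[:2])
print(estimator(jobs, t, 4, top2))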


6 Load Balancing

We now apply our framework to devise approximation algorithms for stochastic minimum-norm load balancing on unrelated machines. We are given n stochastic jobs that need to be assigned to m unrelated machines. Throughout, we use [n] and [m] to denote the set of jobs and machines respectively; we use j to index jobs, and i to index machines. For each i ∈ [m], j ∈ [n], we have a nonnegative r.v. Xij that denotes the processing time of job j on machine i, whose distribution is specified in the input. Jobs are independent, so Xij and Xi′j′ are independent whenever j ≠ j′; however, Xij and Xi′j could be correlated. A feasible solution is an assignment σ : [n] → [m] of jobs to machines. This induces a random load vector loadσ, where loadσ(i) := ∑_{j:σ(j)=i} Xij for each i ∈ [m]; note that loadσ follows a product distribution on R^m_{≥0}. The goal is to find an assignment σ that minimizes E[f(loadσ)] for a given monotone symmetric norm f. We often use j ↦ i as a shorthand for denoting σ(j) = i, when σ is clear from the context. We use σ∗ to denote an optimal solution, and OPT := E[f(loadσ∗)] to denote the optimal value. Let POS = POSm := {1, 2, 4, . . . , 2^⌊log2 m⌋}.

Overview. Recall that Theorem 5.2 underlying our framework shows that in order to obtain an O(α)-approximation for stochastic f-norm load balancing, it suffices to find an assignment σ that satisfies E[Topℓ(loadσ)] ≤ αE[Topℓ(loadσ∗)] for all ℓ ∈ POS. First, in Section 6.1, we consider the simpler problem where we only have one expected-Topℓ budget constraint, or equivalently, where f is a Topℓ-norm, and obtain an O(1)-approximation algorithm in this case.

Theorem 6.1. There is a constant-factor approximation algorithm for stochastic Topℓ-norm load balancing

on unrelated machines with arbitrary job size distributions.

Section 6.1 will introduce many of the techniques that we build upon and refine in Section 6.2, where

we deal with an arbitrary monotone, symmetric norm. In Section 6.2.1, we devise a constant-factor ap-

proximation when job sizes are Bernoulli random variables. Observe that since a deterministic job size

can also be viewed as a trivial Bernoulli random variable, modulo constant factors, this result strictly gen-

eralizes the O(1)-approximation for deterministic min-norm load balancing obtained by [8, 9]. In Sec-

tion 6.2.2, we consider the most general setting, (i.e., arbitrary norm and arbitrary distributions), and give

an O(logm/ log logm)-approximation.

Theorem 6.2. There is a constant-factor approximation algorithm for stochastic minimum-norm load bal-

ancing on unrelated machines with an arbitrary monotone, symmetric norm and Bernoulli job-size

distributions.

Theorem 6.3. There is an O (logm/ log logm)-approximation algorithm for the general stochastic minimum-

norm load balancing problem on unrelated machines, where the underlying monotone, symmetric norm and

job-size distributions are arbitrary.

6.1 Stochastic Topℓ-norm load balancing

In this section, we prove Theorem 6.1 and devise an O(1)-approximation for stochastic Topℓ-norm load balancing. The key to our approach is Lemmas 4.1 and 4.2, which together imply that, for t chosen suitably, ∑_i E[loadσ(i)^{≥t}] acts as a convenient proxy for E[Topℓ(loadσ)]: in particular, we have E[Topℓ(loadσ)] = O(ℓt) if and only if ∑_i E[loadσ(i)^{≥t}] = O(ℓt). Lemma 4.2 shows that if t ≥ 2OPT/ℓ, then ∑_i E[loadσ∗(i)^{≥t}] ≤ ℓt. We write a linear program, (LP(ℓ, t)), to find such an assignment (roughly speaking), and round its solution to obtain an assignment σ such that ∑_i E[loadσ(i)^{≥t}] = O(ℓt); by Lemma 4.1, this implies that E[Topℓ(loadσ)] = O(ℓt). Hence, if we work with t = O(OPT/ℓ) such that (LP(ℓ, t)) is feasible—which we can find via binary search—then we obtain an O(1)-approximation.


LP relaxation. Let t ≥ 0 be a given parameter. Our LP seeks a fractional assignment satisfying ∑_i E[load(i)^{≥t}] = O(ℓt). As usual, we have zij variables indicating if job j is assigned to machine i, so z belongs to the assignment polytope Qasgn := {z ∈ R^{m×n}_{≥0} : ∑_{i∈[m]} zij = 1 ∀j ∈ [n]}.

As alluded to in Section 3.1, E[load(i)^{≥t}] can be controlled by separately handling the contribution from exceptional jobs Xij^{≥t} and truncated jobs Xij^{<t}. Our LP enforces that both these contributions (across all machines) are at most ℓt, thereby ensuring that ∑_i E[load(i)^{≥t}] = O(ℓt) (due to (1)). Constraint (4) directly encodes that the total contribution from exceptional jobs is at most ℓt. Handling the contribution from truncated jobs is more complicated. We utilize Lemma 3.9 here, which uses the notion of effective sizes. For each machine i, let Li := ∑_{j:j↦i} Xij^{<t}/t denote the scaled load on machine i due to the truncated jobs assigned to it. We use an auxiliary variable ξi to model E[Li^{≥1}], so that tξi models E[(∑_{j:j↦i} Xij^{<t})^{≥t}]. Since Li is a sum of independent [0, 1]-random variables, Lemma 3.9 yields various lower bounds on E[Li^{≥1}]; these are incorporated by constraints (5). A priori, we do not know which is the right choice of λ in Lemma 3.9, so we simply include constraints for a sufficiently large collection of λ values so that one of them is close enough to the right choice. Finally, constraint (6) ensures that ∑_i E[(∑_{j:j↦i} Xij^{<t})^{≥t}] ≤ ℓt, thereby bounding the contribution from the truncated jobs. We obtain the following feasibility program.

(LP(ℓ, t))

∑_{i∈[m], j∈[n]} E[Xij^{≥t}] zij ≤ ℓt   (4)

(∑_{j∈[n]} βλ(Xij^{<t}/4t) zij − 6)/(4λ) ≤ ξi   ∀i ∈ [m], ∀λ ∈ {1, . . . , 100m}   (5)

∑_{i∈[m]} ξi ≤ ℓ   (6)

ξ ≥ 0, z ∈ Qasgn.   (7)
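The following Python sketch (our own illustration) sets up (LP(ℓ, t)) as a feasibility LP with scipy; the coefficient arrays exc[i, j] ≈ E[Xij^{≥t}] and beta[λ][i, j] ≈ βλ(Xij^{<t}/4t) are assumed to be precomputed from the input distributions (the effective-size computation of Definition 3.7 is not reproduced here).

import numpy as np
from scipy.optimize import linprog

def lp_feasible(exc, beta, ell, t, lambdas):
    # Feasibility check for (LP(ell, t)). Variables: z_ij (fractional assignment),
    # followed by xi_i. exc and beta are precomputed coefficient arrays (assumed given).
    m, n = exc.shape
    nz, nvar = m * n, m * n + m
    A_ub, b_ub = [], []

    row = np.zeros(nvar); row[:nz] = exc.reshape(-1)            # constraint (4)
    A_ub.append(row); b_ub.append(ell * t)

    for lam in lambdas:                                         # constraints (5)
        for i in range(m):
            row = np.zeros(nvar)
            row[i * n:(i + 1) * n] = beta[lam][i] / (4.0 * lam)
            row[nz + i] = -1.0
            A_ub.append(row); b_ub.append(6.0 / (4.0 * lam))

    row = np.zeros(nvar); row[nz:] = 1.0                        # constraint (6)
    A_ub.append(row); b_ub.append(float(ell))

    A_eq = np.zeros((n, nvar))                                  # z in Q^asgn
    for j in range(n):
        A_eq[j, [i * n + j for i in range(m)]] = 1.0
    b_eq = np.ones(n)

    res = linprog(c=np.zeros(nvar), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * nvar, method="highs")
    return res.status == 0, res.x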

Claim 6.4. (LP(ℓ, t)) is feasible for any t satisfying E[Topℓ(loadσ∗)] ≤ ℓt/2.

Proof. Consider the solution (z∗, ξ∗) induced by the optimal solution σ∗, where z∗ is the indicator vector of σ∗, and ξ∗i := E[Li^{≥1}] for any i ∈ [m], with Li := ∑_{j:σ∗(j)=i} Xij^{<t}/t. The choice of t implies that, by Lemma 4.2, we have ∑_i E[loadσ∗(i)^{≥t}] ≤ ℓt. Notice that loadσ∗(i)^{≥t} ≥ ∑_{j:j↦i} Xij^{≥t} + (∑_{j:j↦i} Xij^{<t})^{≥t}. Therefore, ℓt ≥ ∑_i E[loadσ∗(i)^{≥t}] ≥ ∑_i ∑_{j:j↦i} E[Xij^{≥t}], showing that (4) holds. Next, Lemma 3.9 applied to the composite r.v. Li, which is a sum of independent [0, 1]-bounded r.v.s, shows that constraint (5) holds for any integral λ ≥ 1. Finally, tξ∗i = E[(∑_{j:j↦i} Xij^{<t})^{≥t}] for each i, and the upper bound of ℓt on ∑_i E[loadσ∗(i)^{≥t}] implies that ∑_i ξ∗i ≤ ℓ.

Rounding algorithm. We assume now that (LP(ℓ, t)) is feasible. Let (z, ξ) be a feasible fractional solution to this LP. For each machine i, we carefully choose a suitable budget constraint from among the constraints (5), and round the fractional assignment z to obtain an assignment σ such that: (i) these budget constraints for the machines are satisfied approximately; and (ii) the total contribution from the exceptional jobs (across all machines) remains at most ℓt. The rounding step amounts to rounding a fractional solution to an instance of the generalized assignment problem (GAP), for which we can utilize the algorithm of [28], or use the iterative-rounding result from Theorem 3.6.

The budget constraint that we include for a machine is tailored to ensure that the total βλ(Xij^{<t}/4t)-effective load on a machine under the assignment σ is not too large; via Lemma 3.8, this will imply a suitable bound on E[Li^{≥Ω(1)}], where Li = ∑_{j:j↦i} Xij^{<t}/4t. Ideally, for each machine i we would like to choose constraint (5) for λi = 1/ξi. This yields ∑_j βλi(Xij^{<t}/4t) zij ≤ 4λiξi + 6 = 10. So if this budget constraint is approximately satisfied in the rounded solution, say with RHS equal to some constant b, then Lemma 3.8 roughly gives us E[Li^{≥b+1}] ≤ (b + 3)/λi = (b + 3)ξi. This in turn implies that

∑_i E[(∑_{j:j↦i} Xij^{<t})^{≥4(b+1)t}] = 4t · ∑_i E[Li^{≥b+1}] ≤ 4t(b + 3)∑_i ξi ≤ 4(b + 3)ℓt,

where the last inequality follows due to (6). The upshot is that ∑_i E[(∑_{j:j↦i} Xij^{<t})^{≥Ω(t)}] = O(ℓt); coupled with the fact that ∑_j E[X_{σ(j),j}^{≥t}] ≤ ℓt, we obtain that ∑_i E[loadσ(i)^{≥Ω(t)}] = O(ℓt), and hence E[Topℓ(loadσ)] = O(ℓt). The slight complication is that 1/ξi need not be an integer in [100m], so we modify the choice of λi's appropriately to deal with this.

We remark that, whereas we work with a more general norm than ℓ∞, our entire approach—polynomial-size LP-relaxation, rounding algorithm, and analysis—is in fact simpler and cleaner than the one used in [11] for the special case of the Top1 norm. Our savings can be traced to the fact that we leverage the notion of effective size in a more powerful way, by utilizing it at multiple scales to obtain lower bounds on E[(∑_{j↦i} Xij^{<t}/t)^{≥1}] (Lemma 3.9). Our rounding algorithm is summarized below.

T1. Define Mlow := {i ∈ [m] : ξi < 1/2}, and let Mhi := [m] \ Mlow. For every i ∈ [m], define λi := min(100m, ⌊1/ξi⌋) ∈ {2, . . . , 100m} if i ∈ Mlow, and λi := 1 (so βλi(Xij^{<t}/4t) = E[Xij^{<t}/4t]) otherwise.

T2. Consider the following LP that has one budget constraint for each machine i corresponding to constraint (5) (slightly simplified) for parameter λi.

(B-LP)   min ∑_{i∈[m], j∈[n]} E[Xij^{≥t}] ηij

s.t.   ∑_{j∈[n]} βλi(Xij^{<t}/4t) ηij ≤ 10   ∀ i ∈ Mlow   (8)

∑_{j∈[n]} E[Xij^{<t}/4t] ηij ≤ 4ξi + 6   ∀ i ∈ Mhi   (9)

η ∈ Qasgn.

Clearly, z is a feasible solution to (B-LP). Observe that Qasgn is the base polytope of the partition matroid encoding that each job is assigned to at most one machine. We round z to obtain an integral assignment σ, either by using GAP rounding, or by invoking Theorem 3.6.

Analysis. We now show that E[Topℓ(loadσ)] = O(ℓt). We first note that Theorem 3.6 directly shows that σ satisfies constraints (8) and (9) with an additive violation of at most 1, and the total contribution from exceptional jobs is at most ℓt.

Claim 6.5. The assignment σ satisfies: (a) ∑_j E[X_{σ(j),j}^{≥t}] ≤ ℓt; (b) ∑_{j:j↦i} βλi(Xij^{<t}/4t) ≤ 11 for all i ∈ Mlow; and (c) ∑_{j:j↦i} E[Xij^{<t}/4t] ≤ 4ξi + 7 for all i ∈ Mhi.

Proof. This follows directly from Theorem 3.6 by noting that the parameter ν, denoting an upper bound on

the column sum of a variable, is at most 1 (in fact at most 1/4), and since z is a feasible solution to (B-LP)

of objective value at most ℓt.

Next, we bound E[Topℓ(loadσ)] by bounding the expected Topℓ-norm of the load induced by three different sources: exceptional jobs, truncated jobs in Mlow, and truncated jobs in Mhi. Observe that loadσ = Y^low + Y^hi + Y^excep, where

Y^low_i := ∑_{j:j↦i} Xij^{<t} if i ∈ Mlow, and 0 otherwise;   Y^hi_i := ∑_{j:j↦i} Xij^{<t} if i ∈ Mhi, and 0 otherwise;   Y^excep_i := ∑_{j:j↦i} Xij^{≥t}.

All three random vectors follow a product distribution on R^m_{≥0}. By the triangle inequality, it suffices to bound the expected Topℓ norm of each vector by O(ℓt). It is easy to bound even the expected Topm norms of Y^hi and Y^excep (Lemma 6.6); to bound E[Topℓ(Y^low)] (Lemma 6.7), we utilize properties of effective sizes.

Lemma 6.6. We have (i) E[Topℓ(Y^excep)] ≤ ℓt, and (ii) E[Topℓ(Y^hi)] ≤ 72 ℓt.

Proof. Part (i) follows immediately from Claim 6.5 (a) since E[Topℓ(Y^excep)] ≤ E[Topm(Y^excep)] = ∑_j E[X_{σ(j),j}^{≥t}].

For part (ii), we utilize Claim 6.5 (c), which gives ∑_{j:j↦i} E[Xij^{<t}/4t] ≤ 4ξi + 7 ≤ 18ξi for every i ∈ Mhi, where the last inequality follows since ξi ≥ 1/2 as i ∈ Mhi. It follows that E[Y^hi_i] = ∑_{j:j↦i} E[Xij^{<t}] ≤ 72 t ξi. Therefore, E[Topℓ(Y^hi)] ≤ E[Topm(Y^hi)] ≤ 72 t ∑_{i∈Mhi} ξi ≤ 72 ℓt.

Lemma 6.7. E[Topℓ(Y^low)] ≤ 232 ℓt.

Proof. Let W = Y^low/4t. It suffices to show that ∑_i E[Wi^{≥29}] = ∑_{i∈Mlow} E[Wi^{≥29}] ≤ 29ℓ, since then, by Lemma 4.1, we have E[Topℓ(W)] ≤ 58 ℓ, or equivalently, E[Topℓ(Y^low)] ≤ 232 ℓt. By Claim 6.5 (b), we have βλi(Wi) ≤ 11, where λi = min(100m, ⌊1/ξi⌋). Using Lemma 3.8,

E[Wi^{≥12}] ≤ 14/λi = 14 max(1/(100m), 1/⌊1/ξi⌋) ≤ 28ξi + 14/(100m),

where the last inequality is because ξi < 1/2. Summing over all machines in Mlow gives ∑_{i∈Mlow} E[Wi^{≥12}] ≤ 29ℓ, since ∑_i ξi ≤ ℓ.

Combining the two lemmas above yields the following result.

Theorem 6.8. The assignment σ satisfies E[Topℓ(loadσ)] ≤ 305 ℓt.

Finishing up the proof of Theorem 6.1. Given Theorem 6.8 and Claim 6.4, it is clear that if we work with t = O(OPT/ℓ) such that (LP(ℓ, t)) is feasible and run our algorithm, then we obtain an O(1)-approximation. As is standard, we can find such a t, within a (1+ε)-factor, via binary search. To perform this binary search, we show that we can come up with an upper bound UB such that UB/m ≤ OPT ≤ UB. We show this even in the general setting where we have an arbitrary monotone, symmetric norm f.

Lemma 6.9. Let f be a normalized, monotone, symmetric norm, and let OPTf be the optimal value of the stochastic f-norm load balancing problem. Define UB := ∑_j min_i E[Xij], which can be easily computed from the input data. We have UB/m ≤ OPTf ≤ UB.

Proof. Notice that UB is the optimal value of stochastic Topm-norm load balancing, i.e., it is the objective value of the assignment that minimizes the sum of the expected machine loads. For any assignment σ′ : [n] → [m], by Lemma 3.2, we have

E[Topm(loadσ′)]/m ≤ E[Top1(loadσ′)] ≤ E[f(loadσ′)] ≤ E[Topm(loadσ′)].

Taking the minimum over all assignments, and plugging in that UB = min_{σ′:[n]→[m]} E[Topm(loadσ′)], as noted above, it follows that

UB/m ≤ min_{σ′:[n]→[m]} E[f(loadσ′)] = OPTf ≤ UB.


Thus, if we binary search in the interval [0, 2UB/ℓ], for any ε > 0, we can find in poly(log(m/ε)) iterations a value t ≤ 2OPT/ℓ + εUB/m² ≤ (2 + ε) · OPT/ℓ such that (LP(ℓ, t)) is feasible. By Theorem 6.8, we obtain an assignment whose expected Topℓ norm is at most 305 ℓt ≤ (610 + O(ε)) OPT.
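A Python sketch of this binary search (our own illustration; lp_feasible is a stand-in for any feasibility oracle for (LP(ℓ, t)), e.g., the one sketched after the LP above):

def smallest_feasible_t(lp_feasible, ell, UB, m, eps=0.01):
    # Binary search for (approximately) the smallest t with (LP(ell, t)) feasible.
    # lp_feasible(ell, t) is an assumed black-box feasibility oracle.
    # By Claim 6.4 and Lemma 6.9, t = 2*UB/ell is always feasible.
    lo, hi = 0.0, 2.0 * UB / ell
    while hi - lo > eps * UB / (m * m):        # additive accuracy eps*UB/m^2
        mid = (lo + hi) / 2.0
        if lp_feasible(ell, mid):
            hi = mid
        else:
            lo = mid
    return hi                                  # feasible, and at most (2 + eps)*OPT/ell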

6.2 Stochastic f -norm load balancing

We now focus on stochastic load balancing when f is a general monotone symmetric norm. Recall that σ∗

denotes an optimal solution, and OPT = OPTf denotes the optimal value.

Overview. As noted earlier, Theorem 5.2 guides our strategy: we seek an assignment σ that simultaneously satisfies E[Topℓ(loadσ)] = O(αE[Topℓ(loadσ∗)]) for all ℓ ∈ POSm, for some small factor α. It is therefore natural to leverage the insights gained in Section 6.1 from the study of the Topℓ-norm problem. Since we need to simultaneously work with all Topℓ norms, we now work with a guess tℓ of the quantity 2E[Topℓ(loadσ∗)]/ℓ, for every ℓ ∈ POS. For each ~t = (tℓ)ℓ∈POS vector, we write an LP-relaxation (LP(~t)) that generalizes (LP(ℓ, t)), and if it is feasible, we round its feasible solution to obtain an assignment of jobs to machines. We argue that one can limit the number of ~t vectors to consider to a polynomial-size set, so this yields a polynomial number of candidate solutions. We remark that, interestingly, (LP(~t)), the rounding algorithm, and the resulting set of solutions generated are independent of the norm f: they depend only on the underlying ~t vector. The norm f is used only in the final step to select one of the candidate solutions as the desired near-optimal solution, utilizing Corollary 5.4.

The LP-relaxation we work with is an easy generalization of (LP(ℓ, t)). We have the usual zij variables encoding a fractional assignment. For each ℓ ∈ POS, there is a different definition of truncated r.v. Xij^{<tℓ} and exceptional r.v. Xij^{≥tℓ}. Correspondingly, for each index ℓ ∈ POS, we have a separate set of constraints (4)–(6) involving the zij's, a variable ξi,ℓ (that represents ξi for the index ℓ, i.e., E[(∑_{j:j↦i} Xij^{<tℓ}/tℓ)^{≥1}]), and the guess tℓ. For technical reasons that will become clear when we discuss the rounding algorithm (see Claim 6.11), we include additional constraints (13), which enforce that a job j cannot be assigned to a machine i if E[Xij^{≥t1}] > t1; observe that this is valid for the optimal integral solution whenever t1 ≥ 2E[Top1(loadσ∗)]. This yields the following LP relaxation.

(LP(~t))

∑_{i∈[m], j∈[n]} E[Xij^{≥tℓ}] zij ≤ ℓtℓ   ∀ℓ ∈ POS   (10)

(∑_{j∈[n]} βλ(Xij^{<tℓ}/4tℓ) zij − 6)/(4λ) ≤ ξi,ℓ   ∀i ∈ [m], ∀λ ∈ {1, . . . , 100m}, ∀ℓ ∈ POS   (11)

∑_{i∈[m]} ξi,ℓ ≤ ℓ   ∀ℓ ∈ POS   (12)

zij = 0   ∀i ∈ [m], j ∈ [n] with E[Xij^{≥t1}] > t1   (13)

ξ ≥ 0, z ∈ Qasgn.

Claim 6.4 easily generalizes to the following.

Claim 6.10. Let ~t be such that tℓ ≥ 2E[Topℓ(loadσ∗)]/ℓ for all ℓ ∈ POS. Then, (LP(~t)) is feasible.

Designing an LP-rounding algorithm is substantially more challenging now. In the Topℓ-norm case, we set up an auxiliary LP (B-LP) by extracting a single budget constraint for each machine that served to bound the contribution from the truncated jobs on that machine. This LP was quite easy to round (e.g., using Theorem 3.6) because each zij variable participated in exactly one constraint (thereby trivially yielding an O(1) bound on the column sum for each variable). Since we now have to simultaneously control multiple Topℓ-norms, for each machine i, we will now need to include a budget constraint for every index ℓ ∈ POS so as to bound the contribution from the truncated jobs for index ℓ (i.e., E[(∑_{j:j↦i} Xij^{<tℓ})^{≥Ω(tℓ)}]). Additionally, unlike (B-LP), wherein the contribution from the exceptional jobs was bounded by incorporating it in the objective, we will now need, for each ℓ ∈ POS, a separate constraint to bound the total contribution from the exceptional jobs for index ℓ.

Thus, while we can set up an auxiliary LP similar to (B-LP) containing these various budget constraints (see (15) and (16)), rounding a fractional solution to this LP to obtain an assignment that approximately satisfies these various budget constraints presents a significant technical hurdle. As alluded to above, every zij variable now participates in multiple budget constraints, which makes it difficult to argue a bounded-column-sum property for this variable and thereby leverage Theorem 3.6. (The multiple budget constraints included for each machine i to bound the contribution from the truncated jobs for each ℓ ∈ POS present the main difficulty; one can show that the column sum, if we only consider the constraints bounding the contribution from the exceptional jobs, is O(1) (see Claim 6.11).)

In Sections 6.2.1 and 6.2.2, we show how to partly overcome this obstacle. Section 6.2.1 considers

the setting where job sizes are Bernoulli random variables. Here, we show that an auxiliary LP as above

(see (Ber-LP)) yields a constraint matrix with O(1)-bounded column sums. Consequently, this auxiliary LP

can be rounded with an O(1) violation of all budget constraints (using Theorem 3.6), which then (with the

right choice of ~t) leads to an O(1)-approximation algorithm (Theorem 6.2). In Section 6.2.2, we consider

the general case. We argue that we can set up an auxiliary LP that imposes a weaker form of budget

constraints involving expected truncated job sizes, and does have O(1) column sums. Via a suitable use of

Chernoff bounds, this then leads to an O(logm/ log logm)-approximation for general stochastic f -norm

load balancing (Theorem 6.3).

We do not know how to overcome the impediment discussed above in setting up the auxiliary LP for the general setting. Specifically, we do not know how to set up an auxiliary LP with a suitable choice of budget constraints for each machine i and ℓ ∈ POS that: (a) imposes the desired bound on the contribution from the truncated jobs for index ℓ within O(1) factors; and (b) yields a constraint matrix with O(1)-bounded column sums. We leave the question of determining the integrality gap of (LP(~t)), as also the technical question of setting up a suitable auxiliary LP, in the general case, as intriguing open problems. We believe, however, that our LP-relaxation (LP(~t)) is in fact better than what we have accounted for. In particular, we remark that if the Komlos conjecture in discrepancy theory is true, then it is quite likely that the auxiliary LP (Ber-LP) that we formulate in the Bernoulli case can be rounded with at most an O(√logm)-violation of the budget constraints, which would yield an O(√logm) upper bound on the integrality gap of (LP(~t)).

The final step involved is choosing the “right” ~t vector, and this is the only place where we use (a value oracle for) the norm f. Clearly, binary search is not an option now. Noting that the vector ~t∗ corresponding to the optimal solution (so t∗ℓ is supposed to be a guess of 2·E[Topℓ(loadσ∗)]/ℓ) is a vector with non-increasing coordinates, and the bounds in Lemma 6.9 yield an O(logm) search space for each t∗ℓ (by considering powers of 2), there are only a polynomial number of ~t vectors to consider (see Claim 6.17). We can give a randomized value oracle for evaluating E[f(Y)] for the cost vector Y resulting from each ~t vector (for which (LP(~t)) is feasible), and therefore arrive at the best solution computed; this yields a randomized approximation guarantee.5 We can do better and obtain a deterministic (algorithm and) approximation guarantee by utilizing Corollary 5.4: this shows that for each ~t (for which (LP(~t)) is feasible), we can define a corresponding vector ~b = ~b(~t) ∈ R^m_{≥0} such that f(~b(~t)) acts as a good proxy for the objective value E[f(Y)] of the solution computed for ~t, provided that E[Topℓ(Y)] = O(Topℓ(~b)) for all ℓ ∈ POS. It follows that the solution corresponding to the smallest f(~b(~t)) is a near-optimal solution.

5We would obtain an approximation guarantee that holds with high probability, or for the expected cost of the solution, where

the randomness is now over the random choices involved in the randomized value oracle.


6.2.1 Rounding algorithm and analysis: Bernoulli jobs

Assume that ~t ∈ R^POS_{≥0} is such that (LP(~t)) is feasible. We will further assume that tℓ is a power of 2 (possibly smaller than 1) for all ℓ ∈ POS, and tℓ/t2ℓ ∈ {1, 2} for all ℓ such that ℓ, 2ℓ ∈ POS. The rationale here is that these properties are satisfied when tℓ is the smallest power of 2 that is at least 2E[Topℓ(loadσ∗)]/ℓ. Let (z, ξ) be a feasible fractional solution to this LP. We show how to round z to obtain an assignment σ satisfying E[Topℓ(loadσ)] = O(1) · ℓtℓ for all ℓ ∈ POS.

Our strategy is similar to that in Section 6.1. As discussed earlier, we set up an auxiliary LP (Ber-LP) where we include a budget constraint for: (i) each machine i and index ℓ ∈ POS from (11), to bound the contribution from the truncated jobs for index ℓ on machine i; and (ii) each ℓ ∈ POS, to bound the total contribution from the exceptional jobs for index ℓ. We actually refine the above set of constraints to drop various redundant budget constraints (i.e., constraints implied by other included constraints), so as to enable us to prove an O(1) column sum for the resulting constraint matrix. The fractional assignment z yields a feasible solution to (Ber-LP). Now we really need to utilize the full power of Theorem 3.6 to round z to obtain an assignment that approximately satisfies these constraints. In the analysis, we show that approximately satisfying the budget constraints of (Ber-LP) ensures that E[Topℓ(loadσ)] = O(ℓtℓ). This argument is slightly more complicated now, due to the fact that we drop some redundant constraints, but it is along the same lines as that in Section 6.1. We now delve into the details.

B1. For each ℓ ∈ POS, define the following quantities. Define M^ℓ_low := {i ∈ [m] : ξi,ℓ < 1/2}, and let M^ℓ_hi := [m] \ M^ℓ_low. For every i ∈ [m], set λi,ℓ := min(100m, ⌊1/ξi,ℓ⌋) ∈ {2, . . . , 100m} if i ∈ M^ℓ_low, and λi,ℓ := 1 otherwise.

Define POS< := {ℓ ∈ POS : ℓ = 1 or tℓ = t_{ℓ/2}/2}, and POS= := {ℓ ∈ POS : 2ℓ ∉ POS or tℓ = t2ℓ}.

B2. The auxiliary LP will enforce constraints (10), and constraint (11) for parameter λi,ℓ, for every machine i and ℓ ∈ POS, but we remove various redundant constraints. Notice that if ℓ ∉ POS< (so tℓ = t_{ℓ/2}), then constraints (11) for ℓ and ℓ/2 have the same LHS. We may therefore assume that ξi,ℓ/2 = ξi,ℓ for every machine i, and hence constraint (12) for index ℓ is implied by (12) for index ℓ/2. Similarly, (10) for index ℓ is implied by (10) for index ℓ/2. Also, if ℓ ∉ POS= (so tℓ = 2t2ℓ), then constraint (10) for index ℓ is implied by this constraint for index 2ℓ. The auxiliary LP is therefore as follows.

(Ber-LP)

∑_{i∈[m], j∈[n]} (E[Xij^{≥tℓ}]/(ℓtℓ)) ηij ≤ 1   ∀ℓ ∈ POS< ∩ POS=   (14)

∑_{j∈[n]} βλi,ℓ(Xij^{<tℓ}/4tℓ) ηij ≤ 10   ∀ℓ ∈ POS<, ∀i ∈ M^ℓ_low   (15)

∑_{j∈[n]} E[Xij^{<tℓ}/4tℓ] ηij ≤ 4ξi,ℓ + 6   ∀ℓ ∈ POS<, ∀i ∈ M^ℓ_hi   (16)

η ∈ Qasgn.

We have scaled constraints (10) for the exceptional jobs to reflect the fact that we can afford to incur an O(ℓtℓ) error in the constraint for index ℓ.

B3. The fractional assignment z is a feasible solution to (Ber-LP). We apply Theorem 3.6 with the above

system to round z and obtain an integral assignment σ.
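A small Python sketch (our own illustration) of the bookkeeping in steps B1–B2: given the guess vector ~t and the LP values ξ, it computes POS<, POS=, and the parameters λi,ℓ that determine which budget constraints enter (Ber-LP).

def berlp_bookkeeping(t, xi, m):
    # t:  dict {ell in POS_m: t_ell}, with t_ell a power of 2 and t_ell/t_{2ell} in {1, 2}.
    # xi: dict {(i, ell): xi_{i,ell}} from a feasible solution to (LP(t)).
    # Returns (POS_lt, POS_eq, lam) where lam[(i, ell)] is lambda_{i,ell} from step B1.
    pos = sorted(t)
    pos_lt = [l for l in pos if l == 1 or t[l] == t[l // 2] / 2]      # POS_<
    pos_eq = [l for l in pos if 2 * l not in t or t[l] == t[2 * l]]   # POS_=
    lam = {}
    for l in pos:
        for i in range(m):
            if xi[(i, l)] < 0.5:                                      # i in M^ell_low
                lam[(i, l)] = 100 * m if xi[(i, l)] == 0 else min(100 * m, int(1.0 / xi[(i, l)]))
            else:                                                     # i in M^ell_hi
                lam[(i, l)] = 1
    return pos_lt, pos_eq, lam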

Analysis. We first show that the constraint matrix of (Ber-LP) has O(1)-bounded column sums for the

support of z. Claim 6.11 shows this for the budget constraints for the exceptional jobs; this bound holds for

any distribution.


Claim 6.11. Let i, j be such that zij > 0. Then ∑_{ℓ∈POS<∩POS=} (E[Xij^{≥tℓ}]/(ℓtℓ)) ≤ 4.

Proof. Crucially, we first note that E[Xij^{≥t1}] ≤ t1 due to (13). So, E[Xij^{≥tℓ}] ≤ E[Xij^{≥t1}] + E[Xij^{<t1}] ≤ 2t1. Observe that if ℓ ∈ POS< ∩ POS=, then 2ℓ ∉ POS<. Also, tℓ drops by a factor of exactly 2 as ℓ steps over the indices in POS<. So, ℓtℓ increases by a factor of at least 2 as ℓ steps over the indices in POS< ∩ POS=. Thus,

∑_{ℓ∈POS<∩POS=} E[Xij^{≥tℓ}]/(ℓtℓ) ≤ 2t1 ( 1/t1 + 1/(2t1) + 1/(4t1) + . . . ) ≤ 4.

Claim 6.12 shows the O(1) column sum for constraints (15) and (16). The O(1) column sum for constraints (16) in fact holds for any distribution (see Claim 6.18); the O(1) column sum for constraints (15) relies on the fact that the job sizes are Bernoulli random variables. Let aℓij denote the coefficient of ηij in constraints (15) and (16).

Claim 6.12. When the Xij's are Bernoulli random variables, we have ∑_{ℓ∈POS<} aℓij ≤ 1 for any i, j.

Proof. Fix some machine i and job j. For any [0, θ]-bounded r.v. Z and a parameter λ ∈ R≥1, we have βλ(Z) ≤ θ (see Definition 3.7). Suppose that Xij is a Bernoulli random variable that takes value s with probability q, and value 0 with probability 1 − q. For any ℓ ∈ POS<, we have aℓij ≤ s·1(s<tℓ)/(4tℓ); recall that 1E ∈ {0, 1} is 1 if and only if the event E happens. By definition of POS<, the tℓ's decrease geometrically over ℓ ∈ POS<, so

∑_{ℓ∈POS<} aℓij ≤ ∑_{ℓ∈POS<} s·1(s<tℓ)/(4tℓ) = (1/4) ∑_{ℓ∈POS<} (s/tℓ)·1(s/tℓ<1) ≤ (1 + 1/2 + 1/4 + . . .)/4 ≤ 1/2.

As before, we analyze E[Topℓ(loadσ)] by bounding the expected Topℓ norm of the exceptional jobs for index ℓ, the truncated jobs for index ℓ in M^ℓ_low, and the truncated jobs for index ℓ in M^ℓ_hi. Define Y^{excep,ℓ}_i := ∑_{j:j↦i} Xij^{≥tℓ}, and

Y^{low,ℓ}_i := ∑_{j:j↦i} Xij^{<tℓ} if i ∈ M^ℓ_low, and 0 otherwise;   Y^{hi,ℓ}_i := ∑_{j:j↦i} Xij^{<tℓ} if i ∈ M^ℓ_hi, and 0 otherwise.

We have loadσ = Y^{low,ℓ} + Y^{hi,ℓ} + Y^{excep,ℓ}. Given the above claims, it follows from Theorem 3.6 (b) that the assignment σ satisfies the constraints of (Ber-LP) with an additive violation of at most 5. This will enable us to bound E[Topℓ(Z)], where Z ∈ {Y^{low,ℓ}, Y^{hi,ℓ}, Y^{excep,ℓ}}, by O(ℓtℓ) (Lemmas 6.13–6.15), and hence show that E[Topℓ(loadσ)] = O(ℓtℓ) (Lemma 6.16). The proofs follow the same template as those in Section 6.1, but we need to also account for the redundant constraints that were dropped. Recall that POS< := {ℓ ∈ POS : ℓ = 1 or tℓ = t_{ℓ/2}/2}, and POS= := {ℓ ∈ POS : 2ℓ ∉ POS or tℓ = t2ℓ}.

Lemma 6.13. We have E[Topℓ(Y^{excep,ℓ})] ≤ 6 ℓtℓ for all ℓ ∈ POS.

Proof. Fix ℓ ∈ POS. We upper bound E[Topm(Y^{excep,ℓ})] = ∑_j E[X_{σ(j),j}^{≥tℓ}] by 6ℓtℓ. If ℓ ∈ POS< ∩ POS=, then by Theorem 3.6 (b), we have ∑_j E[X_{σ(j),j}^{≥tℓ}] ≤ 6ℓtℓ, since the column sum for each ηij variable with zij > 0 is at most 5 by Claims 6.11 and 6.12.

Next, suppose that ℓ ∈ POS \ POS<, so tℓ = t_{ℓ/2}. Let ℓ′ be the largest index in POS< that is at most ℓ. Then, ℓ′ ≤ ℓ/2 and tℓ = tℓ′. So Y^{excep,ℓ} = Y^{excep,ℓ′}. Also, ℓ′ ∈ POS=, since tℓ ≤ t_{2ℓ′} ≤ tℓ′ = tℓ. Therefore, E[Topm(Y^{excep,ℓ})] = E[Topm(Y^{excep,ℓ′})] ≤ 6ℓ′tℓ′ ≤ 6ℓtℓ.

Finally, suppose ℓ ∈ POS \ POS=, so tℓ = 2t2ℓ. Now, let ℓ′ be the smallest index in POS= that is at least ℓ. Then, ℓ′ ≥ 2ℓ, and t_{ℓ′/2} > tℓ′ (since ℓ′/2 ∉ POS=), so ℓ′ ∈ POS<. We claim that ℓ′tℓ′ = ℓtℓ. This is because for every ℓ′′ ∈ POS with ℓ ≤ ℓ′′ < ℓ′, we have tℓ′′ = 2t2ℓ′′, and so ℓ′′tℓ′′ = 2ℓ′′t2ℓ′′. Hence, we have E[Topm(Y^{excep,ℓ})] ≤ E[Topm(Y^{excep,ℓ′})] ≤ 6ℓ′tℓ′ = 6ℓtℓ.

Lemma 6.14. We have E[Topℓ(Y^{hi,ℓ})] ≤ 104 ℓtℓ for all ℓ ∈ POS.

Proof. Fix ℓ ∈ POS. We again show that in fact E[Topm(Y^{hi,ℓ})] is at most 104 ℓtℓ. Note that we only need to show this for ℓ ∈ POS<: if ℓ ∈ POS \ POS<, then if we consider the largest index ℓ′ ∈ POS< that is at most ℓ, we have tℓ′ = tℓ, and so Y^{hi,ℓ′} = Y^{hi,ℓ}; thus, the bound on E[Topm(Y^{hi,ℓ})] follows from that on E[Topm(Y^{hi,ℓ′})]. So suppose ℓ ∈ POS<. We have

E[Y^{hi,ℓ}_i] = ∑_{j:j↦i} E[Xij^{<tℓ}] ≤ 4tℓ(4ξi,ℓ + 11) ≤ 4tℓ · 26ξi,ℓ.

The first inequality follows from Theorem 3.6 (b). Summing over all i, since ∑_i ξi,ℓ ≤ ℓ, we obtain that E[Topm(Y^{hi,ℓ})] ≤ 104 ℓtℓ.

Lemma 6.15. We have E[Topℓ(Y^{low,ℓ})] ≤ 296 ℓtℓ for all ℓ ∈ POS.

Proof. As with Lemma 6.14, we only need to consider ℓ ∈ POS<. So fix ℓ ∈ POS<. Let W = Y^{low,ℓ}/4tℓ. Mimicking the proof of Lemma 6.7, it suffices to show that ∑_{i∈M^ℓ_low} E[Wi^{≥37}] ≤ 37ℓ, as Lemma 4.1 then implies that E[Topℓ(W)] ≤ 74 ℓ, or equivalently E[Topℓ(Y^{low,ℓ})] ≤ 296 ℓtℓ.

By Theorem 3.6 (b), for any i ∈ M^ℓ_low, we have βλi,ℓ(Wi) = ∑_{j:j↦i} βλi,ℓ(Xij^{<tℓ}/4tℓ) ≤ 15. So by Lemma 3.8, we have E[Wi^{≥16}] ≤ 18/λi,ℓ ≤ 18 max(1/(100m), 1/⌊1/ξi,ℓ⌋) ≤ 36ξi,ℓ + 18/(100m). This implies that ∑_{i∈M^ℓ_low} E[Wi^{≥16}] ≤ 37ℓ.

Lemma 6.16. The assignment σ satisfies E[Topℓ(loadσ)] ≤ 406 ℓtℓ.

Finishing up the proof of Theorem 6.2

For ℓ ∈ POS, let t∗ℓ be the smallest number of the form 2^k, where k ∈ Z (and could be negative), such that t∗ℓ ≥ 2E[Topℓ(loadσ∗)]/ℓ. Given Claim 6.10 and Lemma 6.16, we know that the solution σ computed for ~t∗ will be an O(1)-approximate solution. We argue below that we can identify a poly(m)-size set T ⊆ R^POS_{≥0} containing ~t∗, and that ~t∗ satisfies the assumptions made at the beginning of Section 6.2.1, namely t∗ℓ/t∗2ℓ ∈ {1, 2} for all ℓ such that ℓ, 2ℓ ∈ POS. While we cannot necessarily identify ~t∗, we use Corollary 5.4 to infer that if (LP(~t)) is feasible, then we can use ~t to come up with a good estimate of the expected f-norm of the solution computed for ~t. This will imply that the “best” vector ~t ∈ T yields an O(1)-approximate solution.

By Lemma 6.9, we have a bound on OPT and t∗1: UB/m ≤ E[Top1(loadσ∗)] ≤ E[f(loadσ∗)] ≤ UB. Observe that E[Topℓ(loadσ∗)]/ℓ does not increase as ℓ increases. It follows that 2UB/m² ≤ 2E[Topℓ(loadσ∗)]/ℓ ≤ 2UB for all ℓ ∈ POS. Define

T := { ~t ∈ R^POS_{≥0} : ∀ℓ ∈ POS, 2UB/m² ≤ tℓ < 4UB, tℓ is a power of 2, tℓ/t2ℓ ∈ {1, 2} whenever 2ℓ ∈ POS }.   (17)

Claim 6.17. The vector ~t∗ belongs to T, and |T| = O(m logm).

Proof. Note that each vector ~t ∈ T is completely defined by specifying t1, and the set of “breakpoint” indices ℓ ∈ POS for which tℓ/t2ℓ = 2. There are O(logm) choices for t1, and at most 2^|POS| ≤ m choices for the set of breakpoint indices; hence, |T| = O(m logm). To see that ~t∗ ∈ T, note that, by definition, each t∗ℓ is a power of 2. The bounds shown above on 2E[Topℓ(loadσ∗)]/ℓ show that t∗ℓ lies within the stated bounds. The only nontrivial condition to check is t∗ℓ/t∗2ℓ ∈ {1, 2} whenever 2ℓ ∈ POS. We have 2ℓt∗2ℓ/2 ≥ E[Top2ℓ(loadσ∗)] ≥ E[Topℓ(loadσ∗)] > ℓt∗ℓ/4, which implies t∗ℓ < 4t∗2ℓ. Next, by Top2ℓ(·) ≤ 2Topℓ(·), we have ℓt∗ℓ/2 ≥ E[Topℓ(loadσ∗)] ≥ (1/2)E[Top2ℓ(loadσ∗)] > 2ℓt∗2ℓ/8, which implies t∗ℓ > (1/2)t∗2ℓ. Since the t∗ℓ's are powers of 2, this implies t∗ℓ/t∗2ℓ ∈ {1, 2}.
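The enumeration behind Claim 6.17 is straightforward to implement. The Python sketch below (our own illustration) generates T from UB and m by choosing t1 and a set of breakpoint indices, exactly as in the proof.

import itertools, math

def candidate_guess_vectors(UB, m):
    # Enumerate the set T from (17): each vector is determined by t_1 (a power of 2)
    # and the breakpoint indices ell in POS with t_ell / t_{2ell} = 2.
    pos = [2 ** i for i in range(m.bit_length()) if 2 ** i <= m]
    lo_exp = math.ceil(math.log2(2 * UB / (m * m)))
    hi_exp = math.floor(math.log2(4 * UB))
    inner = pos[:-1]                       # indices ell with 2*ell also in POS
    for k in range(lo_exp, hi_exp + 1):
        t1 = 2.0 ** k
        for breaks in itertools.chain.from_iterable(
                itertools.combinations(inner, r) for r in range(len(inner) + 1)):
            t, cur = {}, t1
            for ell in pos:
                t[ell] = cur
                if ell in breaks:          # t_{2*ell} = t_ell / 2
                    cur /= 2.0
            # keep only vectors whose entries stay in the allowed range
            if all(2 * UB / (m * m) <= t[ell] < 4 * UB for ell in pos):
                yield dict(t)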

We enumerate over all guess vectors ~t ∈ T and check if (LP(~t)) is feasible. For each ~t ∈ T and ℓ ∈ POS, define Bℓ(~t) := ℓtℓ. Let ~b(~t) ∈ R^m_{≥0} be the vector given by Lemma 5.3 for the sequence (Bℓ(~t))ℓ∈POS. Among all feasible ~t vectors, let ~t̄ be a vector that minimizes f(~b(~t)). Let σ be the assignment computed by our algorithm by rounding a feasible solution to (LP(~t̄)).

Then, we have f(~b(~t̄)) ≤ f(~b(~t∗)). Since Bℓ(~t∗) = ℓt∗ℓ ≤ 4E[Topℓ(loadσ∗)] for all ℓ ∈ POS, Corollary 5.4 (b) shows that f(~b(~t∗)) ≤ 12E[f(loadσ∗)]; hence, we have f(~b(~t̄)) ≤ 12 OPT.

Our rounding algorithm ensures that E[Topℓ(loadσ)] ≤ 406Bℓ(~t̄) for all ℓ ∈ POS (Lemma 6.16). Hence, by Corollary 5.4 (a), we have E[f(loadσ)] ≤ 2 · 28 · 406 · f(~b(~t̄)) = O(1) · OPT.

6.2.2 Rounding algorithm and analysis: general distributions

As in Section 6.2.1, we assume that ~t ∈ R^POS_{≥0} is such that (LP(~t)) is feasible, tℓ is a power of 2 (possibly smaller than 1) for all ℓ ∈ POS, and tℓ/t2ℓ ∈ {1, 2} for all ℓ such that 2ℓ ∈ POS. Let (z, ξ) be a feasible fractional solution to this LP. We show how to round z to obtain an assignment σ satisfying E[Topℓ(loadσ)] = O(logm/ log logm) ℓtℓ for all ℓ ∈ POS.

Our strategy is simpler than the one used in Section 6.2.1. We set up an auxiliary LP which is a specialization of (Ber-LP): for each machine i and index ℓ ∈ POS<, we let λi,ℓ := 1, or equivalently, we let M^ℓ_low = ∅ and M^ℓ_hi = [m]. For each ℓ ∈ POS, the constraint (14) bounding the total contribution from exceptional jobs is retained as is. Formally, for the truncated jobs at any index ℓ ∈ POS<, we are only bounding their expected contribution to the load vector. The fractional assignment z yields a feasible solution to this auxiliary LP. We invoke Theorem 3.6 with this simpler auxiliary LP to round z and obtain an assignment σ.

Analysis. We first show that the constraint matrix of the above auxiliary LP has O(1)-bounded column sums for the support of z. Recall from Claim 6.11 that, for any job-size distributions, the budget constraints for the exceptional jobs have O(1)-bounded column sums. Claim 6.18 shows the O(1) column sum for constraints (16) in (Ber-LP). Note that there are no constraints of the type (15) since M^ℓ_low = ∅.

Claim 6.18. Let the Xij's be arbitrary random variables. Then, ∑_{ℓ∈POS<} E[Xij^{<tℓ}/4tℓ] ≤ 1 for any i, j.

Proof. Fix some machine i and job j. Recall that POS< = {ℓ ∈ POS : ℓ = 1 or tℓ = t_{ℓ/2}/2}. So, tℓ drops by a factor of 2 as ℓ steps over the indices in POS<. For the sake of simplicity, suppose that Xij is a discrete random variable; the same argument can be extended to continuous distributions. For s ∈ supp(Xij), let qs = Pr[Xij = s]. Then,

∑_{ℓ∈POS<} E[Xij^{<tℓ}/4tℓ] = ∑_{s∈supp(Xij)} (qs/4) · ( ∑_{ℓ∈POS<: tℓ>s} s/tℓ ) ≤ ∑_{s∈supp(Xij)} (qs/4)(1 + 1/2 + 1/4 + . . .) ≤ 1/2.

Claims 6.11 and 6.18 imply that the column-sums across the budget constraints are bounded by 5.
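As a quick numerical sanity check of the geometric-series step in the proof of Claim 6.18 (an illustrative snippet, not part of the argument), one can evaluate the sum for an arbitrary discrete distribution and a doubling sequence of thresholds:

def truncated_contribution(support_probs, thresholds):
    """Evaluate sum over thresholds t of E[X * 1{X < t}] / (4t) for a discrete r.v.
    given as {value: probability}; 'thresholds' plays the role of the distinct
    t_ell values along POS_<, which double from one index to the next.
    Illustrative check of Claim 6.18 only."""
    total = 0.0
    for t in thresholds:
        total += sum(s * q for s, q in support_probs.items() if s < t) / (4 * t)
    return total

# Example: doubling thresholds, an arbitrary bounded job-size distribution.
X = {0.3: 0.5, 0.9: 0.3, 2.5: 0.2}
T = [2 ** i for i in range(-3, 12)]
print(truncated_contribution(X, T))   # stays below 1/2, as in the claim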


Theorem 6.19. Let σ denote the assignment obtained by invoking Theorem 3.6 for rounding the solution z with the parameter ν = 5. Then,

(i) for any index ℓ ∈ POS, ∑_j E[X^{≥tℓ}_{σ(j),j}] ≤ 6ℓtℓ;

(ii) for any machine i and index ℓ ∈ POS, ∑_{j: j↦i} E[X^{<tℓ}_{ij}] ≤ (16ξ_{i,ℓ} + 44)tℓ;

(iii) for any index ℓ ∈ POS, E[Topℓ(load_σ)] = O(log m/ log log m)·ℓtℓ.

Proof. By Theorem 3.6 (b), part (i) holds for any ℓ ∈ POS_< ∩ POS_=, and repeating the argument from Lemma 6.13 implies the bound for the remaining indices in POS. Again, Theorem 3.6 (b) yields part (ii) for any index ℓ ∈ POS_<; the bound also holds for the remaining ℓ ∈ POS since tℓ = t_{ℓ/2}.

We prove part (iii) by separately bounding the Topℓ norm of the load vectors induced by the truncated jobs and the exceptional jobs, for any ℓ ∈ POS. Let Y^{bd,ℓ} and Y^{excep,ℓ} be defined as follows: for each i, Y^{bd,ℓ}_i := ∑_{j: j↦i} X^{<tℓ}_{ij}, and Y^{excep,ℓ}_i := ∑_{j: j↦i} X^{≥tℓ}_{ij}. By construction, load_σ = Y^{bd,ℓ} + Y^{excep,ℓ}. As before, the load induced by the exceptional jobs is easy to handle: part (i) implies a bound on the Topm norm of Y^{excep,ℓ}, namely E[Topm(Y^{excep,ℓ})] = ∑_j E[X^{≥tℓ}_{σ(j),j}] ≤ 6ℓtℓ, and the same bound extends to the (possibly smaller) Topℓ norm of Y^{excep,ℓ}.

Since each Y^{bd,ℓ}_i is a sum of independent [0, tℓ)-bounded r.v.s, we can use standard Chernoff-style concentration inequalities to bound E[Topℓ(Y^{bd,ℓ})] by O(log m/ log log m)·ℓtℓ; this suffices, since the triangle inequality then yields the bound in (iii). To this end, fix ℓ ∈ POS. The following fact will be useful. Let θ > 0 be a scalar and y ∈ R^m_{≥0} be some vector. Suppose that yi ≤ θ(16ξ_{i,ℓ} + 44)tℓ holds for each i ∈ [m]. Then

Topℓ(y) = max_{S⊆[m], |S|=ℓ} ∑_{i∈S} yi ≤ θ·( 16tℓ·∑_i ξ_{i,ℓ} + 44ℓtℓ ) ≤ 60θℓtℓ,

where ∑_i ξ_{i,ℓ} ≤ ℓ follows from the feasibility of (z, ξ) for (LP(~t)).

For any θ ∈ R_{≥0}, let q(θ) denote the probability of the event {∃i s.t. Y^{bd,ℓ}_i > θ(16ξ_{i,ℓ} + 44)tℓ}. By the above fact, Pr[Topℓ(Y^{bd,ℓ}) > 60θℓtℓ] is at most q(θ). Therefore

E[Topℓ(Y^{bd,ℓ})] = ∫_0^∞ Pr[Topℓ(Y^{bd,ℓ}) > α] dα = (60ℓtℓ)·∫_0^∞ Pr[Topℓ(Y^{bd,ℓ}) > 60θℓtℓ] dθ ≤ ( ∫_0^∞ q(θ) dθ )·O(ℓtℓ).

The second equality follows by the change of variable α = 60θℓtℓ. We finish the proof by arguing that ∫_0^∞ q(θ) dθ = O(log m/ log log m).

By Chernoff bounds (see Lemma 3.1), for any machine i and any θ ≥ e², we have

Pr[Y^{bd,ℓ}_i > θ(16ξ_{i,ℓ} + 44)tℓ] ≤ (e^{θ−1}/θ^θ)^{16ξ_{i,ℓ}+44} ≤ e^{−(1/2)θ ln θ·(16ξ_{i,ℓ}+44)} ≤ e^{−22θ ln θ}.

For θ = α ln m/ ln ln m, where α ≥ 1, we therefore obtain that Pr[Y^{bd,ℓ}_i > θ(16ξ_{i,ℓ} + 44)tℓ] is at most e^{−11α ln m} (since θ ln θ ≥ α ln m/2). By the union bound, it follows that q(α ln m/ ln ln m) ≤ e^{−10α ln m} for all α ≥ 1. Thus,

∫_0^∞ q(θ) dθ ≤ ln m/ln ln m + ∫_{ln m/ln ln m}^∞ q(θ) dθ = (ln m/ln ln m)·[ 1 + ∫_1^∞ q(α ln m/ln ln m) dα ] ≤ (ln m/ln ln m)·[ 1 + ∫_1^∞ e^{−10α ln m} dα ] = ln m/ln ln m + m^{−10}/(10 ln ln m) = O(log m/ log log m).
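To get a feel for where the log m/ log log m factor comes from, the following Monte Carlo sketch (purely illustrative; it is not the analysis above) uses Poisson(1) machine loads as a stand-in for sums of independent [0, tℓ)-bounded r.v.s with O(1) expected load; the expected maximum of m such loads is known to be Θ(log m/ log log m).

import random, math

def poisson(mu):
    # Knuth's method for sampling a Poisson random variable.
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def expected_top_ell(m, ell=1, mu=1.0, trials=20):
    """Monte Carlo estimate of E[Top_ell] when each of m machines independently
    receives a Poisson(mu) load.  Illustrative sketch only."""
    total = 0.0
    for _ in range(trials):
        loads = sorted((poisson(mu) for _ in range(m)), reverse=True)
        total += sum(loads[:ell])
    return total / trials

for m in [100, 10_000, 100_000]:
    print(m, round(expected_top_ell(m), 2), round(math.log(m) / math.log(math.log(m)), 2))
# The estimates grow slowly with m, tracking log m / log log m up to constants.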


Finishing up the proof of Theorem 6.3. We finish up the proof by finding a suitable ~t vector, as in the proof of Theorem 6.2, and returning the solution found for this vector. As before, for each ~t ∈ T (see (17) for the definition of T) and ℓ ∈ POS, define Bℓ(~t) := ℓtℓ. Let ~b(~t) ∈ R^m_{≥0} be the vector given by Lemma 5.3 for the sequence (Bℓ(~t))_{ℓ∈POS}. Among all feasible ~t vectors, let ~t̄ be a vector that minimizes f(~b(~t)); recall that f(~b(~t̄)) = O(OPT). Let σ̄ be the assignment computed by our algorithm by rounding a feasible solution to (LP(~t̄)). By Theorem 6.19, for each ℓ ∈ POS, σ̄ satisfies E[Topℓ(load_σ̄)] = O(log m/ log log m)·Bℓ(~t̄). Hence, by Corollary 5.4 (a), we have E[f(load_σ̄)] = O(log m/ log log m)·f(~b(~t̄)) = O(log m/ log log m)·OPT.

7 Spanning Trees

In this section, we apply our framework to devise an approximation algorithm for stochastic minimum-norm spanning tree. We are given an undirected graph G = (V, E) with stochastic edge weights, and we are interested in connecting the vertices of this graph to each other. For an edge e ∈ E, we have a nonnegative r.v. Xe that denotes its weight. Edge weights are independent. A feasible solution is a spanning tree T ⊆ E of G. This induces a random weight vector Y^T = (Xe)_{e∈T}; note that Y^T follows a product distribution on R^{n−1}_{≥0}, where n := |V|. The goal is to find a spanning tree T that minimizes E[f(Y^T)] for a given monotone symmetric norm f. (As an aside, we note that the deterministic version of this problem, wherein each edge e has a fixed weight we ≥ 0, can be solved optimally: an MST T′ simultaneously minimizes ∑_{e∈T}(we − θ)^+ among all spanning trees T of G, for all θ ≥ 0. Since Topℓ(x) = min_{θ∈R_{≥0}} ( ℓθ + ∑_i (xi − θ)^+ ), it follows that T′ minimizes every Topℓ-norm, and so by Theorem 3.4, it is simultaneously optimal for all monotone, symmetric norms.)
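The variational identity for Topℓ used in this aside is easy to verify numerically; in the following illustrative snippet it suffices to try θ = 0 and θ equal to each coordinate, since the objective is piecewise linear and convex in θ.

def top_ell(x, ell):
    """Sum of the ell largest coordinates."""
    return sum(sorted(x, reverse=True)[:ell])

def top_ell_variational(x, ell):
    """min over theta >= 0 of (ell*theta + sum_i (x_i - theta)^+); the objective is
    piecewise linear and convex, so checking theta = 0 and each coordinate suffices."""
    candidates = [0.0] + list(x)
    return min(ell * th + sum(max(xi - th, 0.0) for xi in x) for th in candidates)

x = [3.0, 1.5, 4.0, 0.5, 2.0]
for ell in range(1, len(x) + 1):
    assert abs(top_ell(x, ell) - top_ell_variational(x, ell)) < 1e-9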

Theorem 7.1. There is an O(1)-approximation algorithm for stochastic f-norm spanning tree with arbitrary edge-weight distributions.

Overview. We drop the superscript T in Y^T when the tree T is clear from the context. We use Ye to denote the coordinate in Y corresponding to edge e; we will ensure that Ye is used only when e ∈ T. Let T* denote an optimal solution. Let Y* := Y^{T*} and OPT := E[f(Y*)] denote the optimal value.

The cost vector Y^T is inherently less complex than the load vector in load balancing, in that each coordinate Y^T_e is an "atomic" random variable whose distribution we can directly access. Thus, our approach is guided by Lemma 5.8 and Theorem 5.9, which show that to obtain an approximation guarantee for stochastic f-norm spanning tree, it suffices to find a spanning tree T such that the τℓ statistics of Y^T are "comparable" to those of Y*.

We write an LP relaxation (Tree(~t)) that works with a non-increasing vector ~t ∈ R^{POS_{n−1}}_{≥0} that is intended to be a guess of (τℓ(Y*))_{ℓ∈POS}. By Lemma 5.8, E[f(Y)] = Θ( ∑_e E[(Ye − τ1(Y))^+] + f(~τ(Y)) ). So our LP seeks a (fractional) spanning tree whose cost vector Y minimizes ∑_e E[(Xe − t1)^+] subject to the constraint that τℓ(Y) ≤ tℓ for all ℓ ∈ POS. We round an optimal solution to this LP using Theorem 3.6, and argue that this solution has expected cost at most O( Tree-OPT(~t) + f(~t′) ), where Tree-OPT(~t) is the optimal value of (Tree(~t)) and ~t′ ∈ R^{n−1}_{≥0} is the expansion of ~t. (Recall that ~t′ is defined by setting t′_i := t_{2^{⌊log2 i⌋}} for all i ∈ [n − 1].) Finally, we show that we can find a suitable vector ~t for which Tree-OPT(~t) + f(~t′) is O(OPT).
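For concreteness, the expansion ~t ↦ ~t′ recalled above can be computed as follows (an illustrative sketch; the guess vector is represented as a dictionary keyed by POS = {1, 2, 4, . . .}).

def expand(t, n):
    """Expansion of a guess vector t (a dict keyed by the powers of 2 in POS)
    into an (n-1)-dimensional vector with t'_i = t[2^floor(log2 i)].
    Illustrative sketch of the definition recalled above."""
    return [t[1 << (i.bit_length() - 1)] for i in range(1, n)]

# Example with POS = {1, 2, 4} and n - 1 = 5 coordinates:
print(expand({1: 8.0, 2: 4.0, 4: 4.0}, 6))   # [8.0, 4.0, 4.0, 4.0, 4.0]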


7.1 LP relaxation

Let ~t ∈ R^{POS_{n−1}}_{≥0} be a non-increasing vector. We have ze variables indicating if an edge e belongs to the spanning tree. So z belongs to the polytope

Qtree := { z ∈ R^E_{≥0} : z(E) = |V| − 1, z(A) ≤ n − comp(A) ∀A ⊆ E },

where comp(A) denotes the number of connected components of (V, A), and z(A) denotes ∑_{e∈A} ze. It is well known that the above polytope is the spanning-tree polytope of G, i.e., it is the convex hull of indicator vectors of spanning trees of G. We use the matroid base-polytope characterization for two reasons: (i) our arguments can be generalized verbatim to the more general setting with an arbitrary matroid; and (ii) we can directly invoke Theorem 3.6 to round fractional LP solutions to integral solutions. The constraints of our LP encode that z ∈ Qtree, and that the τℓ-statistics of the cost vector induced by z are bounded by ~t. The objective function accounts for the contribution from the coordinates larger than τ1 to the expected f-norm.

min ∑_{e∈E} E[(Xe − t1)^+]·ze   (Tree(~t))
s.t. ∑_{e∈E} Pr[Xe > tℓ]·ze ≤ ℓ   ∀ℓ ∈ POS   (18)
     z ∈ Qtree.   (19)

The following claim is immediate.

Claim 7.2. (Tree(~t)) is feasible whenever tℓ ≥ τℓ(Y*) for all ℓ ∈ POS.

Proof. Consider the solution z* induced by the optimal solution T*, i.e., z* is the indicator vector of T*. For each ℓ ∈ POS, constraint (18) is satisfied since ∑_{e∈T*} Pr[Xe > tℓ] ≤ ∑_{e∈T*} Pr[Xe > τℓ(Y*)] ≤ ℓ, where the first inequality uses tℓ ≥ τℓ(Y*) and the second follows from the definition of τℓ.
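The following illustrative sketch makes this feasibility check concrete. It assumes the characterization of τℓ that is used implicitly in Claim 7.2 and in Theorem 7.3 (ii), namely that τℓ(Y) is the smallest threshold t with ∑_e Pr[Ye > t] ≤ ℓ; this characterization, and all helper names, are assumptions made here purely for illustration.

def tau_statistic(tail_prob, edges, ell, thresholds):
    """Smallest candidate threshold t with sum over e in edges of Pr[X_e > t] <= ell
    (the characterization of tau_ell assumed here for illustration).
    tail_prob(e, t) should return Pr[X_e > t]."""
    for t in sorted(thresholds):
        if sum(tail_prob(e, t) for e in edges) <= ell:
            return t
    return float("inf")

def satisfies_18(tail_prob, tree_edges, t_guess):
    """Check constraint (18) for an integral tree: sum_e Pr[X_e > t_ell] <= ell
    for every ell in POS; t_guess is a dict keyed by POS."""
    return all(sum(tail_prob(e, t_guess[ell]) for e in tree_edges) <= ell
               for ell in t_guess)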

7.2 Rounding algorithm and analysis

Assume that ~t ∈ R^POS_{≥0} is a non-increasing vector such that (Tree(~t)) is feasible. Let z be an optimal fractional spanning-tree solution to (Tree(~t)), and let Tree-OPT(~t) be the optimal value of (Tree(~t)). Define value(~t) := Tree-OPT(~t) + f(~t′), where ~t′ ∈ R^{n−1}_{≥0} denotes the expansion of ~t. We show how to round z to obtain a spanning tree T satisfying E[f(Y)] = O(value(~t)). The expression for value(~t) is motivated by Lemma 5.8, which shows that E[f(Y)] = Θ( ∑_e E[(Ye − τ1)^+] + f(~τ) ).

Note that scaling constraint (18) by 1/ℓ yields O(1)-bounded column sums in the resulting constraint matrix, and an additive O(1) violation of the scaled constraint translates to an additive O(ℓ) violation of constraint (18) for index ℓ. We invoke Theorem 3.6 with the parameter ν := ∑_{ℓ∈POS} 1/ℓ < 2 to round z to an integral spanning tree T.

Theorem 7.3. The weight vector Y = Y^T satisfies:

(i) ∑_{e∈T} E[(Ye − t1)^+] ≤ Tree-OPT(~t);

(ii) for each ℓ ∈ POS, τ_{3ℓ}(Y) ≤ tℓ; and

(iii) E[f(Y)] ≤ 126·value(~t).

Proof. Parts (i) and (ii) follow from parts (a) and (b) of Theorem 3.6, respectively. Part (i) is immediate from part (a). For part (ii), since we invoke Theorem 3.6 with ν < 2, by part (b), we have that ∑_{e∈T} Pr[Ye > tℓ] < 3ℓ, and hence τ_{3ℓ}(Y) ≤ tℓ. For part (iii), observe that Y and ~t satisfy the assumptions of Theorem 5.9 (a) with α = 1 and β = 3, so we obtain that E[f(Y)] ≤ 126( ∑_{e∈T} E[(Ye − t1)^+] + f(~t′) ), and the RHS expression is at most 126·value(~t) by part (i).


Finishing up the proof of Theorem 7.1

For ℓ ∈ POS, let t*ℓ be the largest number of the form 2^k, where k ∈ Z (possibly negative), such that t*ℓ ≤ 2τℓ(Y*) + δ; here δ > 0 is a small constant that we will set later. Given Claim 7.2 and Theorem 7.3, we know that the solution T computed by rounding an optimal fractional solution z* to (Tree(~t*)) satisfies E[f(Y^T)] = O(value(~t*)). We argue below that we can identify a poly(m)-sized set T ⊆ R^POS_{≥0} containing ~t*. While we cannot necessarily identify ~t*, we use Theorem 5.9 (b) to infer that if (Tree(~t)) is feasible, then value(~t) acts as a proxy for the expected f-norm of the solution computed for ~t. This will imply that the "best" vector ~t ∈ T yields an O(1)-approximate solution.

As in the case of load balancing, Claim 3.2 immediately yields lower and upper bounds on OPT that differ by a multiplicative factor of n − 1.

Lemma 7.4. Define UB to be the weight of a minimum-weight spanning tree in G with edge weights given by (the deterministic quantities) E[Xe]. Then UB/(n − 1) ≤ E[Top1(Y*)] ≤ E[f(Y*)] ≤ UB.

Claim 7.5. τ1(Y*) ≤ 4UB.

Proof. Using Lemma 4.5 and Theorem 4.3, we get: τ1(Y*) ≤ γ1(Y*) ≤ 4E[Top1(Y*)] ≤ 4UB.

Define δ := UB/n². Let

T := { ~t ∈ R^POS_{≥0} : ~t is non-increasing, and ∀ℓ ∈ POS, δ/2 < tℓ ≤ 8UB + δ and tℓ is a power of 2 }.

Claim 7.6. The vector ~t* belongs to T, and |T| = poly(m).

Proof. The polynomial bound follows from Claim 3.5, since each ~t ∈ T is a non-increasing vector and log2 tℓ can take only O(log m) values. To see that ~t* ∈ T: by definition, t*ℓ is the largest power of 2 satisfying t*ℓ ≤ 2τℓ(Y*) + δ. Since t*ℓ is the largest such power of 2, we get t*ℓ > δ/2 (as 2t*ℓ > 2τℓ(Y*) + δ ≥ δ). And by Claim 7.5, we get that t*ℓ ≤ 8UB + δ. Thus, ~t* ∈ T.

We enumerate over all guess vectors ~t ∈ T and check if (Tree(~t)) is feasible. Among all feasible ~t vectors, let ~t̄ be a vector that minimizes value(~t); recall that value(~t) = Tree-OPT(~t) + f(~t′), where Tree-OPT(~t) is the optimal value of (Tree(~t)) and ~t′ ∈ R^{n−1}_{≥0} is the expansion of ~t. Let T̄ be the spanning tree computed by our algorithm by rounding an optimal fractional solution to (Tree(~t̄)). By Theorem 7.3 (iii), we have E[f(Y^{T̄})] = O(value(~t̄)), and by Claim 7.6, we know that value(~t̄) ≤ value(~t*).

To finish the proof, we argue that value(~t*) = O(OPT). Recall that for each ℓ ∈ POS, t*ℓ is the largest power of 2 satisfying t*ℓ ≤ 2τℓ(Y*) + δ. It follows that τℓ(Y*) ≤ t*ℓ ≤ 2τℓ(Y*) + δ for all ℓ ∈ POS. Thus, by Claim 7.2, (Tree(~t*)) is feasible; in particular, the indicator vector of T* is a feasible solution to (Tree(~t*)). It follows that

value(~t*) ≤ ∑_{e∈T*} E[(Y*_e − t*_1)^+] + f(~t*′) ≤ 32·E[f(Y*)] + (n − 1)δ = O(OPT),

where the second inequality is due to Theorem 5.9 (b).

7.3 Extension: stochastic minimum-norm matroid basis

Our results extend quite seamlessly to the stochastic minimum-norm matroid basis problem, which is the generalization of stochastic minimum-norm spanning tree where we replace spanning trees by bases of an arbitrary given matroid. More precisely, the setup is that we are given a matroid M = (U, F) specified via its rank function r, and a monotone symmetric norm f on R^{r(U)}. Each element e ∈ U has a random weight Xe. The goal is to find a basis of M whose induced random weight vector has minimum expected f-norm. The only change in the algorithm is that we replace the spanning-tree polytope Qtree in (19) by the base polytope of M, and we of course invoke Theorem 3.6 with the same base polytope. The analysis is identical to the spanning-tree setting.

8 Proof of Lemma 3.9

We restate the lemma below for convenient reference.

Lemma 3.9 (restated). Let S = ∑_{j∈[k]} Zj, where the Zj's are independent [0, θ]-bounded random variables. Let λ ≥ 1 be an integer. Then,

E[S^{≥θ}] ≥ θ · ( ∑_{j∈[k]} βλ(Zj/(4θ)) − 6 ) / (4λ).

It suffices to prove the result for θ = 1. This is because if we set Z′j = Zj/θ for all j ∈ [k], then the Z′j's are [0, 1]-bounded random variables, and S′ = S/θ; so applying the result for θ = 1 yields the inequality E[S′^{≥1}] ≥ ( ∑_{j∈[k]} βλ(Z′j/4) − 6 )/(4λ), and we have E[S′^{≥1}] = E[S^{≥θ}]/θ. So we work with θ = 1 from now on.

To keep notation simple, we always reserve j to index over the set [k]. We will actually prove the following "volume" inequality, which holds for all integers λ ≥ 1:

∑_{j∈[k]} βλ(Zj/4) ≤ (3λ + 2)·E[S^{≥1}] + 6.   (20)

The above inequality holds trivially for λ = 1, since ∑_j βλ(Zj/4) = (1/4)·E[S] ≤ (1/4)·( E[S^{≥1}] + 1 ). (In fact, the inequality, and the lemma, is very weak for λ = 1.) In the sequel, λ ≥ 2; note that 3λ + 2 ≤ 4λ for λ ≥ 2, so (20) implies the lemma.

At a high level, we combine and adapt the proofs of Lemmas 3.2 and 3.4 from [16] to obtain our result. We say that a Bernoulli trial is of type (q, s) if it takes size s with probability q, and size 0 with probability 1 − q. For a Bernoulli trial B of type (q, s), Kleinberg et al. [16] define a modified notion of effective size: β′λ(B) := min(s, s·q·λ^s). The following claim will be useful.

Claim 8.1 (Proposition 2.5 from [16]). βλ(B) ≤ β′λ(B).

Roughly speaking, inequality (20) states that each unit of λ-effective size contributes "λ^{−1} units" towards E[S^{≥1}]. This is indeed what we show, but we first reduce to the setting of Bernoulli trials.

The proof will involve various transformations of the Zj random variables, and the notion of stochastic dominance will be convenient for comparing the various random variables so obtained. A random variable R stochastically dominates another random variable B, denoted B ⪯_sd R, if Pr[R ≥ t] ≥ Pr[B ≥ t] for all t ∈ R. We will use the following well-known facts about stochastic dominance.

(F1) If B ⪯_sd R, then for any non-decreasing function u : R → R, we have E[u(B)] ≤ E[u(R)].

(F2) If Bi ⪯_sd Ri for all i = 1, . . . , k (where the Bi's are independent, and likewise the Ri's), then ( ∑_{i=1}^k Bi ) ⪯_sd ( ∑_{i=1}^k Ri ).

All the random variables encountered in the proof will be nonnegative, and we will often omit stating this explicitly.


8.1 Bernoulli decomposition: reducing to Bernoulli trials

We utilize a result of [16] that shows how to replace an arbitrary random variable R with a sum of Bernoulli trials that is "close" to R in terms of stochastic dominance. This allows us to reduce to the case where all random variables are Bernoulli trials (Lemma 8.4).

We use supp(R) to denote the support of a random variable R; if R is a discrete random variable, this is the set {x ∈ R : Pr[R = x] > 0}. Following [16], we say that a random variable is geometric if its support is contained in {0} ∪ {2^r : r ∈ Z}.

Lemma 8.2 (Lemma 3.10 from [16]). Let R be a geometric random variable. Then there exists a set of independent Bernoulli trials B1, . . . , Bp such that B = B1 + · · · + Bp satisfies Pr[R = t] = Pr[t ≤ B < 2t] for all t ∈ supp(R). Furthermore, the support of each Bi is contained in the support of R.

Corollary 8.3. Let R be a geometric random variable, and B be the corresponding sum of Bernoulli trials given by Lemma 8.2. Then, we have B/2 ⪯_sd R ⪯_sd B.

Proof. To see that R ⪯_sd B, consider any t ≥ 0. We have

Pr[R ≥ t] = ∑_{t′∈supp(R): t′≥t} Pr[R = t′] = ∑_{t′∈supp(R): t′≥t} Pr[t′ ≤ B < 2t′] ≤ Pr[B ≥ t],

where the second equality is due to Lemma 8.2, and the last inequality follows since the intervals [t′, 2t′) are disjoint for t′ ∈ supp(R), as R is a geometric random variable.

To show that B/2 ⪯_sd R, we argue that Pr[R < t] ≤ Pr[B < 2t] for all t ∈ R. This follows from a very similar argument as above. We have

Pr[R < t] = ∑_{t′∈supp(R): t′<t} Pr[R = t′] = ∑_{t′∈supp(R): t′<t} Pr[t′ ≤ B < 2t′] ≤ Pr[B < 2t].

The last inequality again follows because the intervals [t′, 2t′) are disjoint for t′ ∈ supp(R), as R is a geometric random variable.

We intend to apply Lemma 8.2 to the Zj's to obtain a collection of Bernoulli trials, but first we need to convert them to geometric random variables. For technical reasons that will be clear soon, we first scale our random variables by a factor of 4; then, we convert each scaled random variable to a geometric random variable by rounding up the non-zero values in its support to the closest power of 2. Formally, for each j, let Z^rd_j denote the geometric random variable obtained by rounding up each non-zero value in the support of Zj/4 to the closest power of 2. (So, for instance, 0.22 would get rounded up to 0.25, whereas 1/8 would stay the same.) Note that Z^rd_j is a geometric [0, 1/4]-bounded random variable.
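The rounding-up step is elementary; a small illustrative snippet:

import math

def round_up_support(values):
    """Round each non-zero support value of Z/4 up to the closest power of 2
    (e.g. 0.22 -> 0.25, while 1/8 stays 1/8), as in the construction of Z^rd.
    Illustrative sketch of the transformation described above."""
    return [0.0 if v == 0 else 2.0 ** math.ceil(math.log2(v)) for v in values]

print(round_up_support([0.22, 0.125, 0.0, 0.07]))   # [0.25, 0.125, 0.0, 0.125]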

We now apply Lemma 8.2 to each Z^rd_j to obtain a collection {Z^ber_ℓ}_{ℓ∈Fj} of independent Bernoulli trials. Let F be the disjoint union of the Fj's, so F is the collection of all the Bernoulli variables so obtained. Define S^ber := ∑_{ℓ∈F} Z^ber_ℓ. For ℓ ∈ F, let Z^ber_ℓ be a Bernoulli trial of type (qℓ, sℓ); from Lemma 8.2, we have that sℓ ∈ [0, 1/4] and is an (inverse) power of 2. We now argue that it suffices to prove (20) for the Bernoulli trials {Z^ber_ℓ}_{ℓ∈F}.

Lemma 8.4. Inequality (20) follows from the inequality ∑_{ℓ∈F} βλ(Z^ber_ℓ) ≤ (3λ + 2)·E[(S^ber)^{≥1}] + 6. By Claim 8.1, this in turn is implied by the inequality

∑_{ℓ∈F} β′λ(Z^ber_ℓ) ≤ (3λ + 2)·E[(S^ber)^{≥1}] + 6.   (21)


Proof. Fix any j ∈ [k]. By Corollary 8.3, we have Z^rd_j ⪯_sd ∑_{ℓ∈Fj} Z^ber_ℓ. We also have that Zj/4 ≤ Z^rd_j, and so Zj/4 ⪯_sd ∑_{ℓ∈Fj} Z^ber_ℓ. Therefore, since λ^x is an increasing function of x, we have βλ(Zj/4) ≤ βλ( ∑_{ℓ∈Fj} Z^ber_ℓ ) = ∑_{ℓ∈Fj} βλ(Z^ber_ℓ). Summing over all j, we obtain that ∑_{j∈[k]} βλ(Zj/4) ≤ ∑_{ℓ∈F} βλ(Z^ber_ℓ).

Corollary 8.3 also yields that ∑_{ℓ∈Fj} Z^ber_ℓ ⪯_sd 2·Z^rd_j for all j. We also have Z^rd_j ≤ 2(Zj/4), and therefore ∑_{ℓ∈Fj} Z^ber_ℓ ⪯_sd Zj for all j. Using Fact (F2), this implies that S^ber ⪯_sd S, and hence, by Fact (F1), we have E[(S^ber)^{≥1}] ≤ E[S^{≥1}].

To summarize, we have shown that ∑_{j∈[k]} βλ(Zj/4) is at most ∑_{ℓ∈F} βλ(Z^ber_ℓ), which is at most the LHS of (21) (by Claim 8.1), and the RHS of (20) is at least the RHS of (21).

8.2 Proof of inequality (21)

We now focus on proving inequality (21). Let F^sml := {ℓ ∈ F : λ^{sℓ} ≤ 2} index the set of small Bernoulli trials, and let F^lg := F \ F^sml index the remaining large Bernoulli trials.

It is easy to show that the total modified effective size of the small Bernoulli trials is at most 2E[S^ber] ≤ 2E[(S^ber)^{≥1}] + 2 (see Claim 8.5 and inequality (vol-small)). Bounding the modified effective size of the large Bernoulli trials is more involved. Roughly speaking, we first consolidate these random variables by replacing them with "simpler" Bernoulli trials, and then show that each unit of total modified effective size of these simpler Bernoulli trials makes a contribution of λ^{−1} towards E[(S^ber)^{≥1}]. The constant 6 in (21) arises for two reasons: (i) because we bound the modified effective size of the small Bernoulli trials by O(E[S^ber]) (as opposed to O(E[(S^ber)^{≥1}])); and (ii) because we lose some modified effective size in the consolidation of large Bernoulli trials.⁶

Claim 8.5 (Shown in Lemma 3.4 in [16]). Let B be a Bernoulli trial of type (q, s) with λ^s ≤ 2. Then β′λ(B) ≤ 2E[B].

Proof. By definition, β′λ(B) = min(s, s·q·λ^s) ≤ s·q·λ^s ≤ 2qs = 2E[B].

By the above claim, we get the following volume inequality for the small Bernoulli trials:

∑_{ℓ∈F^sml} β′λ(Z^ber_ℓ) ≤ 2·∑_{ℓ∈F^sml} E[Z^ber_ℓ] ≤ 2E[S^ber] ≤ 2E[(S^ber)^{≥1}] + 2.   (vol-small)

We now handle the Bernoulli trials in F^lg. For each ℓ ∈ F^lg, we set q′ℓ := min(qℓ, λ^{−sℓ}). Observe that this operation does not change the modified effective size, and we have β′λ((qℓ, sℓ)) = β′λ((q′ℓ, sℓ)) = sℓ·q′ℓ·λ^{sℓ}. The following claim from [16] is useful in consolidating Bernoulli trials of the same size.

Claim 8.6 (Claim 3.1 from [16]). Let E1, . . . , Ep be independent events, with Pr[Ei] = pi. Let E′ be the event that at least one of these events occurs. Then Pr[E′] ≥ (1/2)·min(1, ∑_i pi).

Consolidation for a fixed size. For each s that is an inverse power of 2, we define F^s := {ℓ ∈ F^lg : sℓ = s}, so that {F^s}_s is a partition of F^lg. Next, we further partition F^s into sets P^s_1, . . . , P^s_{ns} such that, for all i = 1, . . . , ns − 1, we have 2λ^{−s} ≤ ∑_{ℓ∈P^s_i} q′ℓ < 3λ^{−s}, and ∑_{ℓ∈P^s_{ns}} q′ℓ < 2λ^{−s}. Such a partitioning always exists since for each ℓ ∈ F^s we have q′ℓ ≤ λ^{−s} by definition. We now apply Claim 8.6 to "consolidate" each P^s_i: by this, we mean that for i = 1, . . . , ns − 1, we think of representing P^s_i by the "simpler" Bernoulli trial B^s_i of type (λ^{−s}, s), and using this to replace the individual random variables in P^s_i.

⁶ Note that some constant must unavoidably appear on the RHS of inequalities (20) and (21); that is, we cannot bound the total effective size (or modified effective size) by a purely multiplicative factor of E[S^{≥1}], even for Bernoulli trials. This is because if λ = 1, and say we have only one (Bernoulli) random variable Z that is strictly less than 1, then its effective size (as also its modified effective size) is simply E[Z], whereas S^{≥1} = Z^{≥1} = 0.


By Claim 8.6, for any i = 1, . . . , ns − 1, we have Pr[ ∑_{ℓ∈P^s_i} Z^ber_ℓ ≥ s ] ≥ λ^{−s} = Pr[B^s_i = s] (note that this only works for large Bernoulli trials, since 2λ^{−s} ≤ 1); hence, it follows that B^s_i ⪯_sd ∑_{ℓ∈P^s_i} Z^ber_ℓ. Note that

∑_{ℓ∈P^s_i} β′λ(Z^ber_ℓ) = sλ^s·∑_{ℓ∈P^s_i} q′ℓ < 3s if i ∈ {1, . . . , ns − 1}, and < 2s if i = ns.

Also, β′λ(B^s_i) = min(s, s·λ^{−s}·λ^s) = s. Putting everything together,

(ns − 1)s = ∑_{i=1}^{ns−1} β′λ(B^s_i) ≥ ( ∑_{ℓ∈F^s} β′λ(Z^ber_ℓ) − 2s ) / 3.
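The greedy grouping underlying this consolidation can be sketched as follows (illustrative only); the input is the list of capped probabilities q′ℓ of the trials of a fixed size s.

def consolidate(q_values, s, lam):
    """Greedily partition the trials of a fixed size s (given by their capped
    probabilities q' <= lam**(-s)) into groups whose total probability lies in
    [2*lam**(-s), 3*lam**(-s)), plus possibly one final group with total < 2*lam**(-s).
    Each full group is then represented by a single Bernoulli trial of type
    (lam**(-s), s).  Illustrative sketch of the consolidation step."""
    target = lam ** (-s)
    groups, current, total = [], [], 0.0
    for q in q_values:
        current.append(q)
        total += q
        if total >= 2 * target:        # group is full: 2*lam^{-s} <= total < 3*lam^{-s}
            groups.append(current)
            current, total = [], 0.0
    if current:
        groups.append(current)         # leftover group with total < 2*lam^{-s}
    return groups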

Consolidation across different sizes. Summing the above inequality over all s (note that each s is an inverse power of 2 and at most 1/4), we obtain that

∑_s ∑_{i=1}^{ns−1} β′λ(B^s_i) ≥ ( ∑_{ℓ∈F^lg} β′λ(Z^ber_ℓ) − 1 ) / 3.   (22)

Let vol denote the RHS of (22), and let m := ⌊vol⌋. (Note that m could be 0; but if m = 0, then vol < 1, and (21) trivially holds.) Since each B^s_i is a Bernoulli trial of type (λ^{−s}, s), where s is an inverse power of 2, we can obtain m disjoint subsets A1, . . . , Am of (s, i) pairs from the entire collection {B^s_i}_{s,i} of Bernoulli trials, such that ∑_{(s,i)∈Au} β′λ(B^s_i) = ∑_{(s,i)∈Au} s = 1 for each u ∈ [m].⁷ For each subset Au,

Pr[ ∑_{(s,i)∈Au} B^s_i = 1 ] = ∏_{(s,i)∈Au} Pr[B^s_i = s] = ∏_{(s,i)∈Au} λ^{−s} = λ^{−1}.

Finishing up the proof of inequality (21). For any nonnegative random variables R1, R2, we have E[(R1 + R2)^{≥1}] ≥ E[R1^{≥1}] + E[R2^{≥1}]. So,

E[ ( ∑_s ∑_{i=1}^{ns−1} B^s_i )^{≥1} ] ≥ ∑_{u=1}^{m} E[ ( ∑_{(s,i)∈Au} B^s_i )^{≥1} ] = m/λ ≥ ( ∑_{ℓ∈F^lg} β′λ(Z^ber_ℓ) − 4 ) / (3λ).

As noted earlier, we have B^s_i ⪯_sd ∑_{ℓ∈P^s_i} Z^ber_ℓ for all s, and all i = 1, . . . , ns − 1. By Fact (F2), it follows that ( ∑_s ∑_{i=1}^{ns−1} B^s_i ) ⪯_sd ( ∑_s ∑_{i=1}^{ns−1} ∑_{ℓ∈P^s_i} Z^ber_ℓ ). Also,

∑_s ∑_{i=1}^{ns−1} ∑_{ℓ∈P^s_i} Z^ber_ℓ ≤ ∑_{ℓ∈F^lg} Z^ber_ℓ ≤ ∑_{ℓ∈F} Z^ber_ℓ = S^ber,

and combining the above with Fact (F1), we obtain that E[ ( ∑_s ∑_{i=1}^{ns−1} B^s_i )^{≥1} ] ≤ E[(S^ber)^{≥1}]. Thus, we have shown that

∑_{ℓ∈F^lg} β′λ(Z^ber_ℓ) ≤ 3λ·E[(S^ber)^{≥1}] + 4.   (vol-large)

⁷ To justify this statement, it suffices to show the following. Suppose we have created some r sets A1, . . . , Ar, where r < m, and let I be the set of (s, i) pairs indexing the Bernoulli trials that are not in A1, . . . , Ar; then we can find a subset I′ ⊆ I such that ∑_{(s,i)∈I′} s = 1. To see this, first, since r < m, we have ∑_{(s,i)∈I} s ≥ 1. We sort the (s, i) pairs in I in non-increasing order of s; to avoid excessive notation, let I denote this sorted list. Now, since each s is an inverse power of 2, it is easy to see by induction that if J is a prefix of I such that ∑_{(s,i)∈J} s < 1, then 1 − ∑_{(s,i)∈J} s is at least as large as the s-value of the pair in I appearing immediately after J. Coupled with the fact that ∑_{(s,i)∈I} s ≥ 1, this implies that there is a prefix I′ such that ∑_{(s,i)∈I′} s = 1.
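The prefix argument in this footnote is constructive; a tiny illustrative sketch:

def unit_prefix(sizes):
    """Given s-values that are inverse powers of 2 with total at least 1, sort them
    in non-increasing order and return the prefix whose values sum to exactly 1
    (such a prefix exists by the induction argument above).  Illustrative only."""
    sizes = sorted(sizes, reverse=True)
    prefix, total = [], 0.0
    for s in sizes:
        prefix.append(s)
        total += s
        if total == 1.0:
            return prefix
    raise ValueError("total size is less than 1")

print(unit_prefix([0.25, 0.125, 0.5, 0.125, 0.25]))   # [0.5, 0.25, 0.25]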


Adding (vol-small) and (vol-large) gives

∑_{ℓ∈F} β′λ(Z^ber_ℓ) ≤ (3λ + 2)·E[(S^ber)^{≥1}] + 6.

This completes the proof of inequality (21), and hence the lemma.

9 Conclusions and discussion

We introduce stochastic minimum-norm optimization, and present a framework for designing algorithms for stochastic minimum-norm optimization problems. A key component of our framework is a structural result showing that if f is a monotone symmetric norm, and Y ∈ R^m_{≥0} is a nonnegative random vector with independent coordinates, then E[f(Y)] = Θ( f(E[Y↓]) ); in particular, this shows that E[f(Y)] can be controlled by controlling E[Topℓ(Y)] for all ℓ ∈ [m] (or all ℓ ∈ {1, 2, 4, . . . , 2^{⌊log2 m⌋}}). En route to proving this result, we develop various deterministic proxies to reason about expected Topℓ-norms, which also yield a deterministic proxy for E[f(Y)]. We utilize our framework to develop approximation algorithms for stochastic minimum-norm load balancing on unrelated machines and stochastic minimum-norm spanning tree (and stochastic minimum-norm matroid basis). We obtain O(1)-approximation algorithms for spanning tree, and for load balancing with (i) arbitrary monotone symmetric norms and Bernoulli job sizes, and (ii) Topℓ norms and arbitrary job-size distributions.

The most pressing question left open by our work is developing a constant-factor approximation algorithm for the general case of stochastic min-norm load balancing, where both the monotone symmetric norm and the job-size distributions are arbitrary; currently, we only have an O(log m/ log log m)-approximation.

Another interesting question is to obtain significantly improved approximation factors. As mentioned in the Introduction, we have nowhere sought to optimize constants, and it is indeed possible to identify some places where one could tighten constants. However, even with such optimizations, the approximation factor that we obtain for load balancing (in settings (i) and (ii) above) is likely to be in the thousands (this is true even of prior work on minimizing the expected makespan [11]), and the approximation factor for spanning tree is likely to be in the hundreds. It would be very interesting to obtain substantial improvements in these approximation factors, and in particular, to obtain approximation factors that are close to those known for the deterministic problem. This will most likely require new insights. It would also be interesting to know if the stochastic problems are strictly harder than their deterministic counterparts.

In this context, we highlight one particularly mathematically appealing question, namely, that of proving tight(er) bounds on the ratio E[f(Y)]/f(E[Y↓]). We feel that this question, which is an analysis question bereft of any computational concerns, is a fundamental question about monotone symmetric norms and independent random variables that is of independent interest. We prove an upper bound of 28 (which can be improved by a factor of roughly 2), but we do not know if a much smaller constant upper bound (say, even 2) is possible. Of course, any improvement in the upper bound would also translate to improved approximation factors. Proving lower bounds on the above ratio would also be illuminating, especially if one is looking to establish tight bounds.

References

[1] S. Alamdari and D. B. Shmoys. A bicriteria approximation algorithm for the k-center and k-median problems. In Proceedings of the 15th WAOA, pages 66–75, 2017.

[2] A. E. Alaoui, X. Cheng, A. Ramdas, M. J. Wainwright, and M. I. Jordan. Asymptotic behavior of ℓp-based Laplacian regularization in semi-supervised learning. In Proceedings of the 29th COLT, pages 879–906, 2016.

[3] A. Aouad and D. Segev. The ordered k-median problem: Surrogate models and approximation algorithms. Mathematical Programming, 177:55–83, 2019.

[4] B. Awerbuch, Y. Azar, E. F. Grove, M. Kao, P. Krishnan, and J. S. Vitter. Load balancing in the Lp norm. In Proceedings of the 36th FOCS, pages 383–391, 1995.

[5] Y. Azar and A. Epstein. Convex programming for scheduling unrelated parallel machines. In Proceedings of the 37th STOC, pages 331–337, 2005.

[6] J. Byrka, K. Sornat, and J. Spoerhase. Constant-factor approximation for ordered k-median. In Proceedings of the 50th STOC, pages 620–631, 2018.

[7] D. Chakrabarty and C. Swamy. Interpolating between k-median and k-center: Approximation algorithms for ordered k-median. In Proceedings of the 45th ICALP, pages 29:1–29:14, 2018.

[8] D. Chakrabarty and C. Swamy. Approximation algorithms for minimum norm and ordered optimization problems. In Proceedings of the 51st STOC, pages 126–137, 2019.

[9] D. Chakrabarty and C. Swamy. Simpler and better algorithms for minimum-norm load balancing. In Proceedings of the 27th ESA, pages 27:1–27:12, 2019.

[10] A. Goel and P. Indyk. Stochastic load balancing and related problems. In Proceedings of the 40th FOCS, pages 579–586, 1999.

[11] A. Gupta, A. Kumar, V. Nagarajan, and X. Shen. Stochastic load balancing on unrelated machines. In Proceedings of the 29th SODA, pages 1274–1285, 2018.

[12] A. Gupta and K. Tangwongsan. Simpler analyses of local search algorithms for facility location. CoRR, abs/0809.2554, 2008.

[13] V. Gupta, B. Moseley, M. Uetz, and Q. Xie. Greed works—online algorithms for unrelated machine stochastic scheduling. Mathematics of Operations Research, 45(2):497–516, 2020.

[14] J. Y. Hui. Resource allocation for broadband networks. IEEE Journal on Selected Areas in Communications, 6(9):1598–1608, 1988.

[15] S. Im, B. Moseley, and K. Pruhs. Stochastic scheduling of heavy-tailed jobs. In Proceedings of the 32nd STACS, pages 474–486, 2015.

[16] J. M. Kleinberg, Y. Rabani, and E. Tardos. Allocating bandwidth for bursty connections. SIAM J. Comput., 30(1):191–217, 2000.

[17] G. Laporte, S. Nickel, and F. S. da Gama. Location science. Springer, 2019.

[18] R. Latała. Estimation of moments of sums of independent real random variables. The Annals of Probability, 25:1502–1513, 1997.

[19] J. Li and A. Deshpande. Maximizing expected utility for stochastic combinatorial optimization problems. Mathematics of Operations Research, 44(1):354–375, 2019.

[20] J. Li and Y. Liu. Approximation algorithms for stochastic combinatorial optimization problems. Journal of the Operations Research Society of China, 4(1):1–47, 2016.

[21] J. Li and W. Yuan. Stochastic combinatorial optimization via Poisson approximation. In Proceedings of the 45th STOC, pages 971–980, 2013.

[22] A. Linhares, N. Olver, C. Swamy, and R. Zenklusen. Approximate multi-matroid intersection via iterative refinement. Math. Program., 183(1):397–418, 2020.

[23] K. Makarychev and M. Sviridenko. Solving optimization problems with diseconomies of scale via decoupling. J. ACM, 65(6):42:1–42:27, 2018.

[24] R. H. Mohring, A. S. Schulz, and M. Uetz. Approximation in stochastic scheduling: the power of LP-based priority policies. J. ACM, 46(6):924–942, 1999.

[25] M. Molinaro. Stochastic ℓp load balancing and moment problems via the L-function method. In Proceedings of the 30th SODA, pages 343–354, 2019.

[26] S. Nickel and J. Puerto. Location Theory: A Unified Approach. Springer Berlin Heidelberg, 2006.

[27] M. Pinedo. Offline deterministic scheduling, stochastic scheduling, and online deterministic scheduling. In Handbook of Scheduling. Chapman and Hall/CRC, 2004.

[28] D. B. Shmoys and E. Tardos. An approximation algorithm for the generalized assignment problem. Math. Program., 62:461–474, 1993.
