The Complexity of Contracts ∗

Paul Dütting† Tim Roughgarden‡ Inbal Talgam-Cohen§

Abstract

We initiate the study of computing (near-)optimal contracts in succinctly representable principal-agent settings. Here optimality means maximizing the principal's expected payoff over all incentive-compatible contracts—known in economics as "second-best" solutions. We also study a natural relaxation to approximately incentive-compatible contracts.

We focus on principal-agent settings with succinctly described (and exponentially large) outcome spaces. We show that the computational complexity of computing a near-optimal contract depends fundamentally on the number of agent actions. For settings with a constant number of actions, we present a fully polynomial-time approximation scheme (FPTAS) for the separation oracle of the dual of the problem of minimizing the principal's payment to the agent, and use this subroutine to efficiently compute a δ-incentive-compatible (δ-IC) contract whose expected payoff matches or surpasses that of the optimal IC contract.

With an arbitrary number of actions, we prove that the problem is hard to approximate within any constant c. This inapproximability result holds even for δ-IC contracts where δ is a sufficiently rapidly-decaying function of c. On the positive side, we show that simple linear δ-IC contracts with constant δ are sufficient to achieve a constant-factor approximation of the "first-best" (full-welfare-extracting) solution, and that such a contract can be computed in polynomial time.

1 Introduction

Economic theory distinguishes three fundamentally different problems involving asymmetric information and incentives. In the first—known as mechanism design

∗Supported by BA/Leverhulme Small Research Grant SRG1819\191601, NSF Award CCF-1813188, ARO grant W911NF1910294, and the Israel Science Foundation (Grant No. 336/18). Part of the work of the first author was done while visiting Google Research. The third author is a Taub Fellow (supported by the Taub Family Foundation). The authors thank the anonymous referees for their helpful feedback.

†Department of Mathematics, London School of Economics, Houghton Street, London WC2A 2AE, UK. Email: [email protected].

‡Department of Computer Science, Columbia University, 500 West 120th Street, New York, NY 10027, USA. Email: [email protected].

§Department of Computer Science, Technion, Israel Institute of Technology, Haifa, Israel 3200003. Email: [email protected].

(or screening)—the less informed party has to make a decision. A canonical example is Myerson's optimal auction design problem [Myerson, 1981], in which a seller wants to maximize the revenue from selling an item, having only incomplete information about the buyers' willingness to pay. The second problem is known as signalling (or Bayesian persuasion). Here, as in the first case, information is hidden, but this time the more informed party is the active party. A canonical example is Akerlof's "market for lemons" [Akerlof, 1970]. In this example, sellers are better informed about the quality of the products they sell, and may benefit by sharing (some of) their information with the buyers.

Both of these basic incentive problems have been studied very successfully and extensively from a computational perspective, see, e.g., [Cai et al., 2012a,b, 2013; Babaioff et al., 2014; Cai et al., 2016; Babaioff et al., 2017; Gonczarowski, 2018; Gonczarowski and Weinberg, 2018] and [Dughmi, 2014; Dughmi et al., 2014; Cheng et al., 2015; Dughmi and Xu, 2016].

The third basic problem, the agency problem in contract theory, has received far less attention from the theoretical computer science community, despite being regarded as equally important in economic theory (see, e.g., the scientific background on the 2016 Nobel Prize for Hart and Holmström [Royal Swedish Academy of Sciences, 2016]). (A notable exception is [Babaioff et al., 2012], which we will discuss with further related work in more detail below.)

The basic scenario of contract theory is captured by the following hidden-action principal-agent problem [Grossman and Hart, 1983]: There is one principal and one agent. The agent can take one of n actions ai ∈ An. Each action ai is associated with a distribution Fi over m outcomes xj ∈ R≥0, and has a cost ci ∈ R≥0. The principal designs a contract p that specifies a payment p(xj) for each outcome xj. The agent chooses an action ai that maximizes expected payment minus cost, i.e., ∑j Fi,j p(xj) − ci. The principal seeks to set up the contract so that the chosen action maximizes expected outcome minus expected payment, i.e., ∑j Fi,j xj − ∑j Fi,j p(xj).

Copyright © 2020 by SIAM. Unauthorized reproduction of this article is prohibited.

The principal-agent problem is quite different from mechanism design and signalling, where the basic difficulty is the information asymmetry and that part of the information is hidden. In the principal-agent problem the issue is one of moral hazard: in and by itself the agent has no intrinsic interest in the expected outcome to the principal.

It is straightforward to see that the optimal contract can be found in time polynomial in n and m by solving n linear programs (LPs). For each action, the corresponding LP gives the smallest expected payment at which this action can be implemented. The action that yields the highest expected reward minus payment gives the optimal payoff to the principal, and the LP for this action gives the optimal contract.
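To make this reduction concrete, here is a minimal Python sketch (our own illustration, not from the paper) of the evaluation step in the classic model: given a candidate contract, compute the agent's best response with principal-favoring tie-breaking and the resulting principal payoff. The per-action minimum payments themselves would come from the n LPs described above; the function name and the tie-breaking tolerance are ours.

```python
def agent_best_response(F, x, p, c):
    """Agent's chosen action under contract p in the classic model.
    F[i][j] = Pr[outcome j | action i], x[j] = reward of outcome j,
    p[j] = payment for outcome j, c[i] = cost of action i.
    Among utility-maximizing actions, ties break in the principal's favor."""
    n = len(c)
    # agent's expected payment for each action: sum_j F[i][j] * p[j]
    expected_pay = [sum(Fij * pj for Fij, pj in zip(F[i], p)) for i in range(n)]
    utility = [expected_pay[i] - c[i] for i in range(n)]
    best_u = max(utility)
    tied = [i for i in range(n) if abs(utility[i] - best_u) < 1e-12]
    # principal's payoff from action i: expected reward minus expected payment
    payoff = lambda i: sum(Fij * xj for Fij, xj in zip(F[i], x)) - expected_pay[i]
    i_star = max(tied, key=payoff)
    return i_star, payoff(i_star)
```

For instance, with two actions, two outcomes, and a contract paying 0.4 for the high outcome, the agent picks the costlier high-effort action whenever its expected payment advantage exceeds the cost difference.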

Succinct principal-agent problems. This linear-programming-based algorithm for computing an optimal contract has several analogs in algorithmic game theory:

1. Mechanism design. For many basic mechanism design problems, the optimal (randomized) mechanism is the solution of a linear program with size polynomial in that of the players' joint type space.

2. Signalling. For many computational problems in signalling, the optimal signalling scheme is the solution to a linear program with size polynomial in the number of receiver actions and possible states of nature.

3. Correlated equilibria. In finite games, a correlated equilibrium can be computed using a linear program with size polynomial in the number of game outcomes.

These linear-programming-based solutions are unsatisfactory when their size is exponential in some parameter of interest. For example, in the mechanism design and correlated equilibria examples, the size of the LP is exponential in the number of players. A major contribution of theoretical computer science to game theory and economics has been the articulation of natural classes of succinctly representable settings and a thorough study of the computational complexity of optimal design problems in such settings. Examples include work on multi-dimensional mechanism design that has emphasized succinct type distributions [Cai et al., 2012a,b, 2013, 2016], succinct signalling schemes with an exponential number of states of nature [Dughmi and Xu, 2016], and the efficient computation of correlated equilibria in succinctly representable multi-player games [Papadimitriou and Roughgarden, 2008; Jiang and Leyton-Brown, 2015]. The goal of this paper is to initiate an analogous line of work for succinctly described agency problems in contract theory.

We focus on principal-agent settings with succinctly described (and exponentially large) outcome spaces, along with a reward function that supports value queries and a distribution for each action with polynomial description. While there are many such settings one can study, we focus on what is arguably the most natural one from a theoretical computer science perspective, where outcomes correspond to vertices of the hypercube, the reward function is additive, and the distributions are product distributions. (Cf., work on computing revenue-maximizing multi-item auctions with product distributions over additive valuations, e.g. [Cai et al., 2012a,b].)

For example, outcomes could correspond to sets of items, where items are sold separately using posted prices. Actions could correspond to different marketing strategies (with different costs), which lead to different (independent) probabilities of sales of various items. Or, imagine that a firm (principal) uses a headhunter (agent) to hire an employee (action). Dimensions could correspond to tasks or skills. Actions correspond to types of employees, costs correspond to the difficulty of recruiting an employee of a given type, and for each employee type there is some likelihood that they will possess each skill (or be able to complete some task). The firm wants to motivate the headhunter to put in enough effort to recruit an employee who is likely to have useful skills for the firm, without actually running extensive interviews to find out the employee's type.

In our model, as in the classic model, there is a principal and an agent. The agent can take one of n actions ai ∈ An, and each action has a cost ci ∈ R≥0. Unlike in the original model, we are given a set of items M, with |M| = m. Outcomes correspond to subsets of items S ∈ 2^M. Each item has a reward rj, and the reward of a set S of items is ∑j∈S rj. Every action ai comes with probabilities qi,j for each item j. If action ai is chosen, each item j is included in the outcome independently with probability qi,j. A contract specifies a payment pS for each outcome S ∈ 2^M. The goal is to compute a contract that maximizes (perhaps approximately) the principal's payoff in running time polynomial in n and m (which is logarithmic in the size |2^M| of the outcome space). Note that if we wish to describe the contract by its nonzero payments, the running time requirement forces us to use only contracts with polynomial description length, that is, zero payments for all but polynomially-many S ∈ 2^M.
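The quantities in this model are easy to compute directly from the succinct description. The following sketch (function names ours; assumes Python 3.8+ for math.prod) computes outcome probabilities under a product distribution, the expected reward via linearity of expectation, and the expected payment of a sparsely supported contract.

```python
from math import prod

def outcome_prob(q_i, S):
    """Probability that action a_i realizes exactly the item set S
    under the product distribution with marginals q_i."""
    return prod(q_i[j] if j in S else 1 - q_i[j] for j in range(len(q_i)))

def expected_reward(q_i, r):
    """R_i = sum_j q_{i,j} r_j, by linearity of expectation."""
    return sum(qj * rj for qj, rj in zip(q_i, r))

def expected_payment(q_i, sparse_p):
    """Expected payment of a contract given as a dict mapping
    frozenset outcomes to payments; all other outcomes pay zero."""
    return sum(outcome_prob(q_i, S) * pS for S, pS in sparse_p.items())
```

Note that expected_payment runs in time proportional to the contract's support size, which is exactly why restricting to polynomially many nonzero payments matters.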

A notion of approximate IC for contracts. The classic approach in contract theory is to require that the agent is incentivized exactly, i.e., he (weakly) prefers the chosen action over every other action. We refer to such contracts as incentive compatible or just IC contracts. Motivated in part by our hardness results for IC contracts (see the next section) and inspired by the success of notions of approximate incentive compatibility in mechanism design (see [Cai, 2013; Weinberg, 2014; Cai et al., 2016], hereafter referred to as the CDW framework), we introduce a notion of approximate incentive compatibility that is suitable for contracts.

Our notion of δ-incentive compatibility (or δ-IC) is that the agent utility of the approximately incentivized action ai is at least that of any other action ai′, less δ. (See Section 2 and the full version for details and discussion.) This notion is natural for several reasons. First, it coincides with the usual notion of ε-IC in "normalized" mechanism design settings (with all valuations between 0 and 1), as in [Cai, 2013; Weinberg, 2014]. A second reason is behavioral. There is an increasing body of work in economics on behavioral biases in contract theory [Koszegi, 2014], including strong empirical evidence that such biases play an important role in practice—for example, that agents "gift" effort to the principals employing them [Akerlof, 1982]. The notion of δ-IC offers a mathematical formulation of an agent's bias. Along similar lines, Carroll [2013] advocates generally for approximate IC constraints in settings where the designer can propose their "preferred action" to agents, in which case an agent may be biased against deviating due to the complexities involved in determining the agent-optimal action or the psychological costs of deviating. See also [Englmaier and Leider, 2012] for related discussion in the context of contract theory.
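As a small illustration (ours, not from the paper), the actions an agent might plausibly take under a δ-IC contract are exactly those whose expected utility is within δ of the best achievable:

```python
def delta_ic_actions(expected_pay, costs, delta):
    """Indices i whose utility u_i = expected_pay[i] - costs[i] satisfies
    u_i >= max_i' u_i' - delta, i.e., the delta-IC condition
    (a small tolerance absorbs floating-point error)."""
    utils = [p - c for p, c in zip(expected_pay, costs)]
    best = max(utils)
    return [i for i, u in enumerate(utils) if u >= best - delta - 1e-12]
```

With δ = 0 this recovers the exact-IC best-response set.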

1.1 Our contribution and techniques We prove several positive and negative algorithmic results for computing near-optimal contracts in succinctly described principal-agent settings. Our work reveals a fundamental dichotomy between settings with a constant number of actions and those with an arbitrary number of actions.

Constant number of actions. For a constant number of actions, we prove in Section 3 that while it is NP-hard to compute an optimal IC contract, there is an FPTAS that computes a δ-IC contract with expected principal surplus at least that of the optimal IC contract; the running time is polynomial in m and 1/δ.

Theorem (See Theorem 3.1, Corollary 3.2). For every constant n ≥ 1 and δ > 0, there is an algorithm that computes a δ-IC contract with expected principal surplus at least that of an optimal IC contract in time polynomial in m and 1/δ.

The starting point of our algorithm is a linear programming formulation of the problem of incentivizing a given action with the lowest possible expected payment. Our formulation has a polynomial number of constraints (one per action other than the to-be-incentivized one) but an exponential number of variables (one per outcome). A natural idea is to then solve the dual linear program using the ellipsoid method. The dual separation oracle is: given a weighted mixture of n − 1 product distributions (over the m items) and a reference product distribution q∗, minimize the ratio of the probability of outcome S in the mixture distribution and that in the reference distribution. Unfortunately, as we show, this is an NP-hard problem, even when there are only n = 3 actions. On the other hand, we provide an FPTAS for the separation oracle in the case of a constant number of actions, based on a delicate multi-dimensional bucketing approach. The standard method of translating an FPTAS for a separation oracle into an FPTAS for the corresponding linear program relies on a scale-invariance property that is absent in our problem. We proceed instead via a strengthened version of our dual linear program, to which our FPTAS separation oracle still applies, and show how to extract from an approximately optimal dual solution a δ-IC contract with objective function value at least that of the optimal solution to the original linear program.
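For intuition, the separation oracle's objective can be evaluated by brute force over all 2^m outcomes. The sketch below is our own illustration (exponential in m, so only viable for tiny instances; the paper's FPTAS replaces this enumeration with multi-dimensional bucketing). Here lams are the mixture weights on the n − 1 product distributions qs, and qstar is the reference distribution.

```python
from itertools import chain, combinations
from math import prod

def subsets(m):
    """All subsets of {0, ..., m-1}, as tuples."""
    items = range(m)
    return chain.from_iterable(combinations(items, k) for k in range(m + 1))

def prod_prob(q, S):
    """Probability of realizing exactly set S under marginals q."""
    Sset = set(S)
    return prod(q[j] if j in Sset else 1 - q[j] for j in range(len(q)))

def separation_ratio(lams, qs, qstar, m):
    """Brute-force the oracle's objective:
    min over outcomes S of (sum_i lam_i * Pr_{q_i}[S]) / Pr_{q*}[S]."""
    best_S, best_ratio = None, float('inf')
    for S in subsets(m):
        denom = prod_prob(qstar, S)
        if denom == 0:
            continue  # outcome impossible under the reference distribution
        num = sum(l * prod_prob(q, S) for l, q in zip(lams, qs))
        if num / denom < best_ratio:
            best_ratio, best_S = num / denom, set(S)
    return best_ratio, best_S
```

Even for a single item, the minimizing outcome depends delicately on how the mixture and reference marginals compare, which hints at why the general problem is hard.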

Arbitrary number of actions. The restriction to a constant number of actions is essential for the positive results above (assuming P ≠ NP). Specifically, we prove in Section 4 that computing the IC contract that maximizes the expected payoff to the principal is NP-hard, even to approximate to within any constant c. This hardness of approximation result persists even if we relax from exact IC to δ-IC contracts, provided δ is sufficiently small as a function of c.

Theorem (See Theorem 4.1, Corollary 4.2). For every constant c ∈ R, c ≥ 1, it is NP-hard to find an IC contract that approximates the optimal expected payoff achievable by an IC contract to within a multiplicative factor of c.

Theorem (See Theorem 4.1, Corollary 4.3). For any constant c ∈ R, c ≥ 5 and δ ≤ (1/(4c))^c, it is NP-hard to find a δ-IC contract that guarantees > OPT/(2c), where OPT is the optimal expected payoff achievable by an IC contract.

We prove these hardness of approximation results by reduction from MAX-3SAT, using the fact that it is NP-hard to distinguish between a satisfiable MAX-3SAT instance and one in which there is no assignment satisfying more than a 7/8 + α fraction of the clauses, where α is some arbitrarily small constant [Håstad, 2001]. Our reduction utilizes the gap between "first-best" (full-welfare-extracting) and "second-best" solutions in contract design settings, where satisfiable instances of MAX-3SAT map to instances where there is no gap between first and second best, and instances of MAX-3SAT in which no more than a 7/8 + α fraction of the clauses can be satisfied map to instances where there is a constant-factor multiplicative gap between the first-best and second-best solutions.

On the positive side, we prove that for every constant δ there is a simple (in fact, linear1) contract that achieves a cδ-approximation, where cδ is a constant that depends on δ. This approximation guarantee is with respect to the strongest possible benchmark, the first-best solution.2

Theorem (See Theorem 5.1). For every constant δ > 0 there exists a constant cδ and a polynomial-time (in n and m) computable δ-IC contract that obtains a multiplicative cδ-approximation to the optimal welfare.

Our proof of this result, in Section 5, shows that the optimal social welfare can be upper bounded by a sum of (constantly many in δ) expected payoffs achievable by δ-IC contracts. The best such contract thus obtains a constant approximation to the optimal welfare.

Black-box distributions. Product distributions are a rich and natural class of succinctly representable distributions to study, but one could also consider other classes. Perhaps the strongest-imaginable positive result would be an efficient algorithm for computing a near-optimal contract that works with no assumptions about each action's probability distribution over outcomes, other than the ability to sample from them efficiently. (Positive examples of this sort in signalling problems include Dughmi and Xu [2016], and in mechanism design include Hartline and Lucier [2015] and its many follow-ups.) Interestingly, the principal-agent problem poses unique challenges to such "black-box" positive results. The moral reason for this is explained, for example, in Salanie [2005]: Rewards play a dual role in contract settings, both defining the surplus from the joint project to be shared between the principal and agent and providing a signal to the principal of the agent's action. For this reason, in optimal contracts, the payment to the agent in a given outcome is governed both by the outcome's reward and by its "informativeness," and the latter is highly sensitive to the precise probabilities in the outcome distributions associated with each action. In Section 6 we translate this intuition into an information-theoretic impossibility result for the black-box model, showing that positive results are possible only under strong assumptions on the distributions (e.g., that the minimum non-zero probability is bounded away from 0).

1A linear contract is defined by a single parameter α ∈ [0, 1], and sets the payment pS for any set S ∈ 2^M to pS = α · ∑j∈S rj. Linear contracts correspond to a simple percentage commission, and are arguably among the most frequently used contracts in practice. See [Carroll, 2015] and [Dutting et al., 2019] for recent work in economics and computer science in support of linear contracts.

2Note that the principal's objective function (reward minus payment to the agent) is a mixed-sign objective; such functions are generally challenging for relative approximation results.

1.2 Related work The study of computational aspects of contract theory was pioneered by Babaioff, Feldman and Nisan [Babaioff et al., 2012] (see also their subsequent works, notably [Emek and Feldman, 2012] and [Babaioff and Winter, 2014]). This line of work studies a problem referred to as combinatorial agency, in which combinations of agents replace the single agent in the classic principal-agent model. The challenge in the new model stems from the need to incentivize multiple agents, while the action structure of each agent is kept simple (effort/no effort). The focus of this line of work is on complex combinations of agents' efforts influencing the outcomes, and how these determine the subsets of agents to contract with. The resulting computational problems are very different from the computational problems in our model.3

A second direction of highly related work is [Azar and Micali, 2018]. This work considers a principal-agent model in which the agent action space is exponentially sized but compactly represented, and argues that in such settings indirect (interactive) mechanisms can be better than one-shot mechanisms. Our focus is more algorithmic, and instead of a compactly represented action space we consider a compactly represented outcome space.

A third direction of related work considers a bandit-style model for contract design [Ho et al., 2016]. In their model each arm corresponds to a contract, and they present a procedure that starts out with a discretization of the contract space, which is adaptively refined, and which achieves sublinear regret in the time horizon. Again the result is quite different from our work, where the complexity comes from the compactly represented outcome space, and our result on the black-box model sheds a more negative light on the learning approach.

Further related work comes from Kleinberg and Kleinberg [2018], who consider the problem of delegating a task to an agent in a setting where (unlike in our model) monetary compensation is not an option. Although payments are not available, they show through an elegant reduction to the prophet-inequality problem that constant competitive solutions are possible.

3For example, several of the key computational questions in their problem turn out to be #P-hard, while all of the problems we consider are in NP.



A final related line of work was initiated by Carroll [2015], who—working in the classic model (where computational complexity is not an issue)—shows a sense in which linear contracts are max-min optimal (see also the recent work of [Walton and Carroll, 2019]). Dutting et al. [2019] show an alternative such sense, and also provide tight approximation guarantees for linear contracts.

2 Preliminaries

2.1 Succinct principal-agent settings Let n and m be parameters. A principal-agent setting is composed of the following: n actions An among which the agent can choose, and their costs 0 = c1 ≤ · · · ≤ cn for the agent; outcomes which the actions can lead to, and their rewards for the principal; and a mapping from actions to distributions over outcomes. Crucially, the agent's choice of action is hidden from the principal, who observes only the action's realized outcome. Our goal is to study succinct principal-agent settings with description size polynomial in n and m; the (implicit) outcome space can have size exponential in m. Throughout, unless stated otherwise, all principal-agent settings we consider are succinct. We focus on arguably one of the most natural models of succinctly-described settings, namely those with additive rewards and product distributions.

In more detail, let M = {1, 2, . . . ,m}, where M is referred to as the item set. Let the outcome space be {0, 1}^M, that is, every outcome is an item subset S ⊆ M. For every item j ∈ M, the principal gets an additive reward rj if the realized outcome includes j, so the principal's reward for outcome S is rS = ∑j∈S rj. Every action ai ∈ An is associated with probabilities qi,1, . . . , qi,m ∈ [0, 1] for the items. We denote the corresponding product distribution by qi. When the agent takes action ai, item j is included in the realized outcome independently with probability qi,j. The probability of outcome S is thus qi,S = (∏j∈S qi,j)(∏j∉S (1 − qi,j)). By linearity of expectation, the principal's expected reward given action ai is Ri = ∑S qi,S rS = ∑j qi,j rj. Action ai's expected welfare is Ri − ci, and we assume Ri − ci ≥ 0 for every i ∈ [n].

Example 2.1 (Succinct principal-agent setting). A company (principal) hires an agent to sell its m products. The agent may succeed in selling any subset of the m items, depending on his effort level, where the ith level leads to sale of item j with probability qi,j. Reward rj from selling item j is the profit margin of product j for the company.

Representation. A succinct principal-agent setting is described by an n-vector of costs c, an m-vector of rewards r, and an n × m matrix Q where entry (i, j) is equal to probability qi,j (and we assume for simplicity that the number of bits of precision for all values is poly(n,m)).

Assumptions. Unless stated otherwise, we assume that all principal-agent settings are normalized, i.e., Ri ≤ 1 for every ai ∈ An (and thus also ci ≤ 1). Normalization amounts to a simple change of "currency", i.e., of the units in which rewards and costs are measured. It is a standard assumption in the context of approximate incentive compatibility—see Section 2.2 (similar assumptions appear in both the CDW framework and in [Carroll, 2013]). We also assume no dominated actions: every two actions ai, ai′ have distinct expected rewards Ri ≠ Ri′, and Ri′ < Ri implies ci′ < ci. That is, we assume away any action that simultaneously costs more for the agent and has lower expected reward for the principal than some (dominating) alternative action.
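These assumptions are straightforward to verify from the succinct description. A quick validity check (our own helper, not part of the paper's algorithms) given the vector of expected rewards R and costs c:

```python
def valid_setting(R, c, eps=1e-12):
    """Check normalization (R_i <= 1, c_i <= 1), distinct expected
    rewards, and no dominated actions (R_k < R_i implies c_k < c_i)."""
    n = len(R)
    if any(Ri > 1 + eps for Ri in R) or any(ci > 1 + eps for ci in c):
        return False
    for i in range(n):
        for k in range(n):
            if i == k:
                continue
            if abs(R[i] - R[k]) <= eps:
                return False  # expected rewards must be distinct
            if R[k] < R[i] and not c[k] < c[i]:
                return False  # a_k is dominated: lower reward, no lower cost
    return True
```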

Contracts and incentives. A contract p is a vector of payments from the principal to the agent. Payments are non-negative; this is known as limited liability of the agent.4 The contractual payments are contingent on the outcomes and not actions, as the actions are not directly observable by the principal. A contract p can potentially specify a payment pS ≥ 0 for every outcome S, but by linear programming (LP) considerations detailed below, we can focus on contracts for which the support size of the vector p is polynomial in n. We sometimes denote by pi the expected payment ∑S⊆M qi,S pS to the agent for choosing action ai, and without loss of generality restrict attention to contracts for which pi ≤ Ri for every ai ∈ An (in particular, pi ≤ 1 by normalization).

Given contract p, the agent's expected utility from choosing action ai is pi − ci. The principal's expected payoff is then Ri − pi. The agent wishes to maximize his expected utility over all actions and over an outside option with utility normalized to zero ("individual rationality" or IR). Since by assumption the cost c1 of action a1 is 0, the outside opportunity is always dominated by action a1 and so we can omit the outside option from consideration. Therefore, the incentive constraints for the agent to choose action ai are: pi − ci ≥ pi′ − ci′ for every i′ ≠ i. If these constraints hold we say ai is incentive compatible (IC) (and as discussed, in our model IC implies IR). The standard tie-breaking assumption in the contract design literature is that among several IC actions the agent tie-breaks in favor of the principal, i.e. chooses the IC action that maximizes the principal's expected payoff.5 We say contract p implements or incentivizes action ai if given p the agent chooses ai (namely ai is IC and survives tie-breaking). If there exists such a contract for action ai we say ai is implementable, and slightly abusing notation we sometimes refer to the implementing contract as an IC contract.

4Limited liability plays a similar role in the contract literature as risk-averseness of the agent. Both reflect the typical situation in which the principal has "deeper pockets" than the agent and is thus the better bearer of expenses/risks.

Simple contracts. In a linear contract, the payment scheme is a linear function of the rewards, i.e., pS = αrS for every outcome S. We refer to α ∈ [0, 1] as the linear contract's parameter, and it serves as a succinct representation of the contract. Linear contracts have an alternative succinct representation by an m-vector of item payments pj = αrj for every j ∈ M, which induce additive payments pS = ∑_{j∈S} pj. A natural generalization is separable contracts, the payments of which can also be separated over the m items and represented by an m-vector of non-negative payments (not necessarily linear). It is straightforward to see that the optimal linear (resp., separable) contract can be found in polynomial time (see full version for details). We return to linear contracts in Section 5 and provide additional discussion of linear and separable contracts in the full version.
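To illustrate why the optimal linear contract is easy to find: under parameter α the agent's utility from action ai is αRi − ci, so the agent's best response can only change at finitely many "indifference" values of α. The following sketch (with hypothetical helper names; the setting is summarized by expected rewards R and costs c, not part of the paper's pseudocode) enumerates these critical values:

```python
from itertools import combinations

def best_response(alpha, R, c):
    """Agent's best action under the linear contract with parameter alpha:
    utility alpha*R[i] - c[i], ties broken in favor of the principal."""
    n = len(R)
    best_u = max(alpha * R[i] - c[i] for i in range(n))
    tied = [i for i in range(n) if alpha * R[i] - c[i] >= best_u - 1e-9]
    return max(tied, key=lambda i: (1 - alpha) * R[i])

def optimal_linear_contract(R, c):
    """Enumerate the critical alphas (pairwise indifference points) where
    the agent's best response can change, and keep the best one."""
    n = len(R)
    candidates = {0.0, 1.0}
    for i, j in combinations(range(n), 2):
        if R[i] != R[j]:
            a = (c[i] - c[j]) / (R[i] - R[j])
            if 0.0 <= a <= 1.0:
                candidates.add(a)
    return max(candidates, key=lambda a: (1 - a) * R[best_response(a, R, c)])
```

With two actions of expected rewards 0.5 and 1.0 and costs 0 and 0.2, the single indifference point α = 0.4 incentivizes the high action and is optimal for the principal.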

2.2 Contract design and relaxations The goal of contract design is to maximize the principal's expected payoff from the action chosen by the agent, subject to IC constraints. A corresponding computational problem is OPT-CONTRACT: the input is a succinct principal-agent setting, and the output is the principal's expected payoff from the optimal IC contract. A related problem is MIN-PAYMENT: the input is a succinct principal-agent setting and an action ai, and the output is the minimum expected payment p∗i with which ai can be implemented (up to tie-breaking). OPT-CONTRACT reduces to solving n instances of MIN-PAYMENT to find p∗i for every action ai, and returning the maximum expected payoff to the principal, max_{i∈[n]} {Ri − p∗i}. Observe that MIN-PAYMENT can be formulated as an exponentially-sized LP with 2^m variables {pS} (one for each set S ⊆ M) and n − 1 constraints:

min ∑_{S⊆M} qi,S pS    (2.1)
s.t. ∑_{S⊆M} qi,S pS − ci ≥ ∑_{S⊆M} qi′,S pS − ci′    ∀i′ ≠ i, i′ ∈ [n],
pS ≥ 0    ∀S ⊆ M.

5 The idea is that one could perturb the payment schedule slightly to make the desired action uniquely optimal for the agent. For further discussion see [Caillaud and Hermalin, 2000, p. 8].

The dual LP has n − 1 nonnegative variables {λi′} (one for every action i′ other than i), and exponentially many constraints:

max ∑_{i′≠i} λi′ (ci − ci′)    (2.2)
s.t. (∑_{i′≠i} λi′) − 1 ≤ ∑_{i′≠i} λi′ qi′,S / qi,S    ∀S ⊆ M with qi,S > 0,
λi′ ≥ 0    ∀i′ ≠ i, i′ ∈ [n].

Standard duality considerations imply that there always exists a succinct optimal contract with n − 1 nonzero payments. However, the ellipsoid method cannot be applied to solve the dual LP in polynomial time, because the required separation oracle turns out to be NP-hard except for n = 2.

Proposition 2.1. Solving the separation oracle of dual LP (2.2) is NP-hard for n ≥ 3.

A proof of this result can be found in the full version. We return to LP (2.1) and to its dual LP (2.2) in Section 3.

Relaxed IC. Contract design, like auction design, is ultimately an optimization problem subject to IC constraints. The state of the art in optimal auction design requires a relaxation of IC constraints to ε-IC. In the CDW framework, the ε loss factor is additive and applies to normalized auction settings. The framework enables polytime computation of an ε-IC auction with expected revenue approximating that of the optimal IC auction.6 Appropriate ε-IC relaxations are also studied in multiple additional contexts: see [Carroll, 2013] and references within for voting, matching, and competitive equilibrium, and [Papadimitriou, 2006] for Nash equilibrium. We wish to achieve similar results in the context of optimal contracts. For completeness we include the definition of ε-IC, cast in the language of contracts:

Definition 2.2 (δ-IC action). Consider a (normalized) contract setting. For δ ≥ 0, an action ai is δ-IC given a contract p if the agent loses no more than an additive δ in expected utility by choosing ai, i.e.: pi − ci ≥ pi′ − ci′ − δ for every action ai′ ≠ ai.
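A minimal sketch of this definition, assuming the expected payments pi have already been computed (e.g., from a succinct contract); `is_delta_ic` is a hypothetical helper name, not from the paper:

```python
def is_delta_ic(i, p_exp, c, delta):
    """Definition 2.2: action i is delta-IC if the agent loses at most an
    additive delta by choosing i over any alternative action."""
    n = len(c)
    return all(p_exp[i] - c[i] >= p_exp[j] - c[j] - delta
               for j in range(n) if j != i)
```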

(Notation-wise, we will sometimes replace δ by ∆ and refer to ∆-IC actions.) As in the IC case, we often slightly abuse notation and refer to the contract p itself as δ-IC. By this we mean a contract p with an (implicit) action ai that is δ-IC given p (if there are several such δ-IC actions, by our tie-breaking assumption the agent chooses the one that maximizes the principal's expected payoff). We also say the contract δ-implements or δ-incentivizes action ai. Finally, if there exists such a contract for ai, we say this action is δ-implementable. Interestingly, by LP duality, any action can be δ-implemented up to tie-breaking even for arbitrarily small δ (see full version for details). We denote by δ-OPT-CONTRACT and δ-MIN-PAYMENT the above computational problems with IC replaced by δ-IC (e.g., the input to δ-OPT-CONTRACT is a succinct principal-agent setting and a parameter δ, and the output is the principal's expected payoff from the optimal δ-IC contract).

6 To be precise, the CDW framework focuses on Bayesian IC (BIC) and ε-BIC auctions.

In the full version we study the relation between optimal IC and δ-IC contracts in more detail. In particular, we show that for every δ-IC contract there is an IC contract with approximately the same expected payoff to the principal, up to small (and necessary) multiplicative and additive losses. Thus relaxing IC to δ-IC increases the expected payoff of the principal only to a certain extent. Similar results are known in the context of auctions (see [Hartline et al., 2015; Dughmi et al., 2017] for welfare maximization and [Daskalakis and Weinberg, 2012] for revenue maximization).7

Proposition 2.3. Fix a principal-agent setting and δ > 0. Let p be a contract that δ-incentivizes action ai. Denote by ℓ_{α=1} the linear contract with parameter α = 1 (which transfers the full reward from principal to agent). Then the IC contract p′ = (1 − √δ)p + √δ·ℓ_{α=1} achieves for the principal an expected payoff of at least (1 − √δ)(Ri − pi) − (√δ − δ), where Ri − pi is the expected payoff of contract p.
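The mixing step of Proposition 2.3 can be sketched as follows. For illustration only, contracts and rewards are represented as small dictionaries keyed by outcome (a simplification, since the actual outcome space is exponential), and the function names are ours, not the paper's:

```python
import math

def ic_from_delta_ic(p, rewards, delta):
    """Proposition 2.3's mixture: pay (1 - sqrt(delta)) * p_S plus
    sqrt(delta) * r_S on every outcome S."""
    s = math.sqrt(delta)
    return {S: (1 - s) * p.get(S, 0.0) + s * r for S, r in rewards.items()}

def payoff_lower_bound(R_i, p_i, delta):
    """The principal's guaranteed expected payoff after the mixture:
    (1 - sqrt(delta)) * (R_i - p_i) - (sqrt(delta) - delta)."""
    s = math.sqrt(delta)
    return (1 - s) * (R_i - p_i) - (s - delta)
```

For example, with δ = 0.04 (so √δ = 0.2) and a contract earning the principal Ri − pi = 0.5, the bound guarantees at least 0.8 · 0.5 − 0.16 = 0.24.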

We also show that an additive loss is necessary: even for tiny δ there can be a multiplicative constant-factor gap between the expected payoff of an IC contract and a δ-IC one.

Relaxed IC with exact IR. In our model, IC implies IR due to the existence of a zero-cost action a1, but this is no longer the case for δ-IC. What if we are willing to relax IC to δ-IC due to the considerations above, but do not want to give up on IR? Suppose we enforce IR by assuming that the agent chooses a δ-IC action only if it has expected utility ≥ 0. The following lemma shows that this has only a small additive effect on the principal's expected payoff, allowing us from now on to focus on δ-IC contracts (IR can later be enforced by applying the lemma):

7 We thank an anonymous reviewer for pointing us to these references.

Lemma 2.4. For every δ-IC contract p that achieves an expected payoff of Π for the principal, there exists a δ-IC and IR contract p′ that achieves an expected payoff of ≥ Π − δ.

Proof. Fix a principal-agent setting. Let ai be the action δ-incentivized by contract p, and assume ai is not IR. Observe that the agent's expected utility from ai is ≥ −δ (otherwise ai would not be δ-IC with respect to a1, which has expected utility ≥ 0 for the agent). First, if Π > δ, then let p′ be identical to p except for an additional δ payment for every outcome. Contract p′ still δ-incentivizes action ai, but now the agent's expected utility from ai is ≥ 0, as required. Otherwise, if Π ≤ δ, let p′ be the contract with all-zero payments. The expected payoff to the principal is zero, which is at most an additive δ loss compared to Π.
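The repair in this proof can be sketched directly. The contract is again a small dictionary of outcome payments for illustration (the real outcome space is exponential), and `enforce_ir` is our hypothetical name:

```python
def enforce_ir(p, payoff, delta):
    """Lemma 2.4's repair: if the principal's payoff Pi exceeds delta, add a
    flat delta to the payment on every outcome (costing the principal delta
    in expectation, since it is paid regardless of outcome); otherwise
    switch to the all-zero contract, whose payoff 0 is within delta of Pi."""
    if payoff > delta:
        return {S: pS + delta for S, pS in p.items()}, payoff - delta
    return {S: 0.0 for S in p}, 0.0
```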

3 Constant number of actions

In this section we begin our exploration of the computational problems OPT-CONTRACT and MIN-PAYMENT by considering principal-agent settings with a constant number n of actions. For every constant n ≥ 3 these problems are NP-hard, and this holds even if the IC requirement is relaxed to δ-IC (see full version for details). As our main positive result, we establish the tractability of finding a δ-IC contract that matches the expected payoff of the optimal IC contract. In Section 4 we show this result is too strong to hold for non-constant values of n (under standard complexity assumptions), and in Section 5 we provide an approximation result for general settings.

To state our results more formally, fix a principal-agent setting and action ai; let OPTi be the solution to MIN-PAYMENT for ai (or ∞ if ai cannot be implemented up to tie-breaking without loss to the principal); and let OPT be the solution to OPT-CONTRACT. Observe that OPT = max_{i∈[n]} {Ri − OPTi}. Our main results in this section are the following:

Theorem 3.1 (MIN-PAYMENT). There exists an algorithm that receives as input a (succinct) principal-agent setting with a constant number of actions and m items, an action ai, and a parameter δ > 0, and returns in time poly(m, 1/δ) a contract that δ-incentivizes ai with expected payment ≤ OPTi to the agent.

Corollary 3.2 (OPT-CONTRACT). There exists an algorithm that receives as input a (succinct) principal-agent setting with a constant number of actions and m items, and a parameter δ > 0, and returns in time poly(m, 1/δ) a δ-IC contract with expected payoff ≥ OPT to the principal.

Proof. Apply the algorithm from Theorem 3.1 once per action ai to get a contract that δ-incentivizes ai with expected payoff at least Ri − OPTi to the principal. Maximizing over the actions, we get a δ-IC contract with expected payoff ≥ OPT to the principal.

Corollary 3.2 shows how to achieve OPT with a δ-IC contract rather than an IC one, in the same vein as the CDW results for auctions. A similar result does not hold for general n unless P = NP (Corollary 4.3). Note that the δ-IC contract can be transformed into an IR one with an additive δ loss by applying Lemma 2.4, and into a fully IC one with slightly more loss by Proposition 2.3, where δ can be an arbitrarily small inverse polynomial in m.

In the rest of the section we prove Theorem 3.1; omitted proofs appear in the full version.

An FPTAS for the separation oracle. We begin by stating the separation oracle problem. LP (2.1) formulates MIN-PAYMENT for action ai. Its dual LP (2.2) has constraints of the form:

(3.3)  (∑_{i′≠i} λi′) − 1 ≤ ∑_{i′≠i} λi′ qi′,S / qi,S.

We can rewrite (3.3) as 1 − 1/(∑_{i′≠i} λi′) ≤ (1/qi,S) · ∑_{i′≠i} (λi′/(∑_{i′≠i} λi′)) · qi′,S. Thus the separation oracle problem for dual LP (2.2) is in fact the following problem: Let n be a constant and m a parameter. The input is n − 1 nonnegative weights {αi′} that sum to 1; n − 1 product distributions {qi′}; and a product distribution qi, where all product distributions are over m items M. The goal is to minimize the likelihood ratio (∑_{i′} αi′ qi′,S)/qi,S over all outcomes S ⊆ M, where the numerator is the likelihood given S of the weighted combination distribution ∑_{i′} αi′ qi′, and the denominator is the likelihood given S of distribution qi.

Denote the optimal solution (i.e., the minimum likelihood ratio) by ρ∗. Solving the separation oracle problem is NP-hard (Proposition 2.1),8 but we show an FPTAS (Lemma 3.3). Lemma 3.4 gives the guarantee from applying this FPTAS as a separation oracle for dual LP (2.2).

Lemma 3.3 (FPTAS). There is an algorithm for the separation oracle problem that returns an outcome S with likelihood ratio ≤ (1 + δ)ρ∗ in time polynomial in m, 1/δ.
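For intuition, the problem that Lemma 3.3 approximates can be solved exactly by exhaustive search over all 2^m outcomes. The following reference sketch (exponential in m, so only usable on small instances; function names are ours) makes the likelihood-ratio objective concrete:

```python
from itertools import combinations

def outcome_prob(q, S, m):
    """Probability that product distribution q (per-item success
    probabilities) realizes exactly the item set S."""
    prob = 1.0
    for j in range(m):
        prob *= q[j] if j in S else 1.0 - q[j]
    return prob

def min_likelihood_ratio(weights, q_others, q_i, m):
    """Exhaustively minimize (sum_k w_k * q_k(S)) / q_i(S) over all
    outcomes S with q_i(S) > 0."""
    best_S, best_ratio = None, float("inf")
    for k in range(m + 1):
        for S in map(frozenset, combinations(range(m), k)):
            denom = outcome_prob(q_i, S, m)
            if denom <= 0.0:
                continue  # constraints with q_{i,S} = 0 are excluded
            num = sum(w * outcome_prob(q, S, m)
                      for w, q in zip(weights, q_others))
            if num / denom < best_ratio:
                best_S, best_ratio = S, num / denom
    return best_S, best_ratio
```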

Lemma 3.4. If the separation oracle FPTAS with parameter δ does not find a violated constraint of dual LP (2.2), then for every S the inequality in (3.3) holds approximately up to (1 + δ):

(∑_{i′≠i} λi′) − 1 ≤ (1 + δ) ∑_{i′≠i} λi′ qi′,S / qi,S.

8 In fact the problem is strongly NP-hard; but because it involves products of the form qi,S = (∏_{j∈S} qi,j)(∏_{j∉S} (1 − qi,j)), the strong NP-hardness does not rule out an FPTAS [Papadimitriou and Steiglitz, 1982, Theorem 17.12].

Proof. Assume there exists S such that (∑_{i′≠i} λi′) − 1 > (1 + δ) ∑_{i′≠i} λi′ qi′,S / qi,S. Then dividing by (∑_{i′} λi′) and using the definition of ρ∗ as the minimum likelihood ratio, we get 1 − 1/(∑_{i′} λi′) > (1 + δ)ρ∗. Combining this with the guarantee of Lemma 3.3, the FPTAS returns S′ with likelihood ratio < 1 − 1/(∑_{i′} λi′), thus identifying a violated constraint. This completes the proof.

Applying the separation oracle FPTAS: The standard method. Given an FPTAS with parameter δ for the separation oracle of a dual LP, for many problems it is possible to find in polynomial time an approximately-optimal, feasible solution to the primal; see, e.g., [Karmarkar and Karp, 1982; Carr and Vempala, 2002; Jain et al., 2003; Nutov et al., 2006; Fleischer et al., 2011; Feldman et al., 2012]. We first describe a fairly standard approach in the literature to utilizing a separation oracle FPTAS, which we refer to as the standard method, and explain where we must deviate from this approach. The proof of Theorem 3.1 then applies an appropriately modified approach.

The standard method works as follows: let OPTi be the optimal value of the primal (minimization) LP. For a benchmark value Γ, add to the (maximization) dual LP a constraint that requires its objective to be at least Γ, and attempt to solve the dual by running the ellipsoid algorithm with the separation oracle FPTAS.

Assume first that the ellipsoid algorithm returns a solution with value Γ. Since the separation oracle applies the FPTAS, it may wrongly conclude that some solution is feasible despite a slight violation of one or more of the constraints. For example, if we were to apply the FPTAS separation oracle from Lemma 3.3 to solve dual LP (2.2), we could possibly get a solution for which there exists S such that:

∑_{i′≠i} λi′ qi′,S / qi,S < (∑_{i′≠i} λi′) − 1 ≤ (1 + δ) ∑_{i′≠i} λi′ qi′,S / qi,S,

where the second inequality is by Lemma 3.4. Clearly, the value Γ of an approximately-feasible solution may be higher than OPTi. In the standard method, the approximately-feasible solution can be scaled by 1/(1 + δ) to regain feasibility while maintaining a value of Γ/(1 + δ).

Scaling thus establishes that Γ/(1 + δ) ≤ OPTi. Now assume that for some (larger) value of Γ, the ellipsoid algorithm identifies that the dual LP is infeasible. In this case we can be certain that OPTi < Γ, and we can also find in polynomial time a primal feasible solution with value < Γ (more details in the proof of Theorem 3.1 below).

Using binary search (in our case over the range [ci, Ri] ⊆ [0, 1], since Ri is the maximum the principal can pay without losing money), the standard method finds the smallest Γ∗ for which the dual is identified to be infeasible, up to a negligible binary search error ε. This gives a primal feasible solution that achieves value Γ∗ + ε, and at the same time establishes that (Γ∗)−/(1 + δ) ≤ OPTi by the scaling argument.9 So the standard method has found an approximately-optimal, feasible solution to the primal.
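The binary search over Γ can be sketched as a generic skeleton, where `dual_is_feasible` stands in for a run of the ellipsoid algorithm with the FPTAS separation oracle (an assumption for illustration, not part of the paper's pseudocode):

```python
def smallest_infeasible_gamma(lo, hi, dual_is_feasible, eps):
    """Binary search for the threshold benchmark Gamma* at which the
    dual flips from (approximately) feasible to infeasible."""
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if dual_is_feasible(mid):
            lo = mid  # dual still feasible: the threshold is higher
        else:
            hi = mid  # dual infeasible: the threshold is at most mid
    return hi
```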

Applying the separation oracle FPTAS: Our method. The issue with applying the standard method to solve MIN-PAYMENT is that the scaling argument does not hold. To see this, consider an approximately-feasible dual solution for which (∑_{i′≠i} λi′) − 1 ≤ (1 + δ) ∑_{i′≠i} λi′ qi′,S / qi,S for every S, and notice that scaling the values {λi′} does not achieve feasibility. We therefore turn to an alternative method to prove Theorem 3.1.

Proof of Theorem 3.1. We apply the standard method using the FPTAS with parameter δ (see Lemma 3.3) as separation oracle to the following strengthened version of dual LP (2.2),10 where the extra (1 + δ) multiplicative factor in the constraints makes them harder to satisfy:

max ∑_{i′≠i} λi′ (ci − ci′)    (3.4)
s.t. (1 + δ)((∑_{i′≠i} λi′) − 1) ≤ ∑_{i′≠i} λi′ qi′,S / qi,S    ∀S ⊆ M with qi,S > 0,
λi′ ≥ 0    ∀i′ ≠ i, i′ ∈ [n].

Let Γ∗ be the infimum value for which dual LP (3.4) would be identified as infeasible. The ellipsoid algorithm is thus able to find an approximately-feasible solution to dual LP (3.4) with objective (Γ∗)−. The key observation is that this solution is fully feasible with respect to the original dual LP (2.2). This is because if the separation oracle FPTAS does not find a violated constraint of dual LP (3.4), then for every S it holds that (∑_{i′≠i} λi′) − 1 ≤ ∑_{i′≠i} λi′ qi′,S / qi,S (by the same argument as in the proof of Lemma 3.4). From the key observation it follows that

(3.5)  (Γ∗)− ≤ OPTi

(despite the fact that the scaling argument does not hold).

9 The notation (Γ∗)− means any number smaller than Γ∗.
10 Strengthened duals appear, e.g., in [Nutov et al., 2006; Feldman et al., 2012].

Now let Γ∗ + ε be the smallest value for which the binary search runs the ellipsoid algorithm for dual LP (3.4) and identifies its infeasibility. During its run for Γ∗ + ε, the ellipsoid algorithm identifies polynomially many separating hyperplanes that constrain the objective to < Γ∗ + ε. Formulate a "small" primal LP with variables corresponding exactly to these hyperplanes. By duality, the small primal LP has a solution with objective < Γ∗ + ε; moreover, since the number of variables and constraints is polynomial, we can find such a solution p∗ in polynomial time. Observe that p∗ is also a feasible solution to the primal LP corresponding to dual (3.4) (the only difference from the small LP is more variables):

min (1 + δ) ∑_{S⊆M} qi,S pS    (3.6)
s.t. (1 + δ)(∑_{S⊆M} qi,S pS) − ci ≥ ∑_{S⊆M} qi′,S pS − ci′    ∀i′ ≠ i, i′ ∈ [n],
pS ≥ 0    ∀S ⊆ M.

We have thus obtained a contract p∗ that is a feasible solution to LP (3.6) with objective (1 + δ) ∑_{S⊆M} qi,S pS < Γ∗ + ε. For action ai, this contract pays the agent an expected transfer of ∑_{S⊆M} qi,S pS < (Γ∗ + ε)/(1 + δ). We have the following chain of inequalities: ∑_{S⊆M} qi,S pS ≤ ((Γ∗)− + ε)/(1 + δ) ≤ (OPTi + ε)/(1 + δ) ≤ OPTi, where the second inequality is by (3.5), and the last inequality is by taking the binary search error to be sufficiently small.11 To complete the proof we must show that p∗ is δ-IC. This holds since the constraints of LP (3.6) ensure that for every action ai′ ≠ ai, using the notation pi = ∑_{S⊆M} qi,S pS, we have pi′ − ci′ ≤ (1 + δ)pi − ci ≤ pi − ci + δpi ≤ pi − ci + δ (the last inequality uses that pi ≤ Ri ≤ 1 by normalization).

4 Hardness of approximation

In this section, unlike the previous one, the number of actions is no longer assumed to be constant. We show a hardness of approximation result for optimal contracts, based on the known hardness of approximation for MAX-3SAT. In his landmark paper, Hastad [2001] shows that it is NP-hard to distinguish between a satisfiable MAX-3SAT instance and one in which there is no assignment satisfying more than 7/8 + α of the clauses, where α is an arbitrarily small constant (Theorems 5.6 and 8.3 in [Hastad, 2001]). We build upon this to prove our main technical contribution, stated in Theorem 4.1, which immediately leads to our main results for this section in Corollaries 4.2-4.3.

11 We use here that OPTi ≥ ci and that the number of bits of precision is polynomial.

Theorem 4.1. Let c ∈ Z, c ≥ 3, be an (arbitrarily large) constant integer. Let ε, ∆ ∈ R, ε > 0, ∆ ∈ [0, 1/20^c], be such that (ε − 2∆^{1/c})/3 ∈ (0, 1/20] and ((ε − 2∆^{1/c})/3)^c is an (arbitrarily small) constant. Then it is NP-hard to determine whether a principal-agent setting has an IC contract extracting the full expected welfare, or whether there is no ∆-IC contract extracting > 1/c + ε of the expected welfare.

We present two direct implications of Theorem 4.1. First, Corollary 4.2 applies to the OPT-CONTRACT problem, and states hardness of approximation within any constant of the optimal expected payoff by an IC contract. (A similar result can be shown for MIN-PAYMENT; see full version.)

Corollary 4.2. For any constant c ∈ R, c ≥ 1, it is NP-hard to approximate the optimal expected payoff achievable by an IC contract to within a multiplicative factor c.

Corollary 4.2 suggests that in order to achieve positive results, we may want to follow the approach of the CDW framework and relax IC to ∆-IC. That is, instead of trying to compute in polynomial time an approximately-optimal IC contract, we should try to compute in polynomial time a ∆-IC contract with expected payoff that is guaranteed to approximately exceed that of the optimal IC contract. The next corollary establishes a computational limitation on this approach: Corollary 4.3 fixes a constant approximation factor c, and derives ∆ for which a c-approximation by a ∆-IC contract is NP-hard to find. (It is also possible to reverse the roles: fix ∆ and derive a constant approximation factor for which NP-hardness holds.) We shall complement this limitation with a positive result in Section 5.

Corollary 4.3. For any constant c ∈ R, c ≥ 5, and ∆ ≤ (1/(4c))^c, it is NP-hard to find a ∆-IC contract that guarantees > (2/c)·OPT, where OPT is the optimal expected payoff achievable by an IC contract.12

Proof. The corollary follows from Theorem 4.1 by setting ε = 1/c.

It also follows from Theorem 4.1 and Corollary 4.3 that for every c, ∆ as specified, it is NP-hard to approximate the optimal expected payoff achievable by a ∆-IC contract to within a multiplicative factor c/2. That is, hardness of approximation also holds for δ-OPT-CONTRACT.

12 The relevant hardness notion is more accurately FNP-hardness.

In the remainder of the section we prove Theorem 4.1. After a brief overview, Section 4.2 sets up some tools for the proof; in Section 4.3 we focus on the special case of c = 2, and in Section 4.4 we prove the more general statement for any constant c. Missing proofs appear in the full version.

4.1 Proof overview It will be instructive to consider first a version of Theorem 4.1 for the case of c = 2:

Theorem 4.4. Let ε, ∆ ∈ R, ε > 0, ∆ ∈ [0, 1/20²], be such that (ε − 2∆^{1/2})/3 ∈ (0, 1/20] and ((ε − 2∆^{1/2})/3)² is an (arbitrarily small) constant. Then it is NP-hard to determine whether a principal-agent setting has an IC contract extracting the full expected welfare, or whether there is no ∆-IC contract extracting > 1/2 + ε of the expected welfare.

This theorem is already interesting, as it shows that even relaxing IC to ∆-IC where ∆ ≫ 0, approximating the optimal expected payoff within 65% is computationally hard:

Corollary 4.5. For any ∆ ≤ 1/20², it is NP-hard to find a ∆-IC contract that guarantees > 0.65 · OPT, where OPT is the optimal expected payoff achievable by an IC contract.

Proof. The corollary follows from Theorem 4.4 by setting ε = 3/20.

To establish Theorem 4.4 we present a gap-preserving reduction from any MAX-3SAT instance ϕ to a principal-agent setting that we call the "product setting" (the reduction appears in Algorithm 2 and is analyzed in Proposition 4.15). The product setting encompasses a 2-action principal-agent "gap setting", in which any δ-IC contract for sufficiently small δ cannot extract much more than 1/2 of the expected welfare (Proposition 4.8).

The special case of c = 2 captures most ideas behind the proof of the more general Theorem 4.1, but the analysis is simplified by the fact that to extract more than roughly 1/2 of the expected welfare in the 2-action gap setting, there is a single action that the contract could potentially incentivize. The more general case involves gap settings with more actions (the reduction appears in Algorithm 3 and is analyzed in Proposition 4.17). To extract more than ≈ 1/c of the expected welfare, the contract could potentially incentivize almost any one of these actions (Proposition 4.9).

Barrier to going beyond constant c. Our techniques for establishing Theorem 4.1 do not generalize beyond constant values of c (the approximation factor). The reason for this is that we do not know of (c, ε, f)-gap settings (Definition 4.6) where f(c, ε) = o(ε^c). As long as f(c, ε) is of order ε^c, the gap in the MAX-3SAT instance we reduce from must be between 7/8 + ε^c and 1, and this gap problem is known to be NP-hard only for constant c. As Hastad [2001] notes, significantly stronger complexity assumptions may lead to hardness for slightly (but not significantly) larger values of c.

4.2 Key ingredients In this section we formalize the notions of "gap" and "SAT" principal-agent settings, as well as the notion of an "average action", which will be useful in proving Theorems 4.1 and 4.4. The term "gap setting" reflects the gap between the first-best solution (i.e., the expected welfare) and the second-best solution (i.e., the expected payoff to the principal from the optimal contract). It will be convenient not to normalize gap settings (and thus also the product settings encompassing them). This makes our negative results only stronger, as we show next.

Unnormalized settings and a stronger δ-IC notion. Before proceeding we must define what we mean by a δ-IC contract in an unnormalized setting. Moreover, we show that if Theorem 4.1 or 4.4 holds for unnormalized settings with the new δ-IC notion, then it also holds for normalized settings with the standard δ-IC notion.

Recall that in a normalized setting, an action ai that is δ-incentivized by the contract must satisfy δ-IC constraints of the form pi − ci + δ ≥ pi′ − ci′ for every i′ ≠ i. In an unnormalized setting, an additive δ-deviation from optimality is too weak a requirement; we instead require that ai satisfy δ-IC constraints of the form

(4.7)  (1 + δ)pi − ci ≥ pi′ − ci′    ∀i′ ≠ i.

Two key observations are: (i) the constraints in (4.7) imply the standard δ-IC constraints if pi ≤ 1, as is the case if the setting is normalized; (ii) the constraints in (4.7) are invariant to scaling of the setting and contract (i.e., to a change of currency of the rewards, costs, and payments). By these observations, a δ-IC contract according to the new notion in an unnormalized setting becomes a standard δ-IC contract after normalization of the setting and payments, with the same fraction of the optimal expected welfare extracted as payoff to the principal.

Assume a negative result holds for unnormalized settings, i.e., it is NP-hard to distinguish between the two cases stated in Theorem 4.1 (or Theorem 4.4). Assume for contradiction that this does not hold for normalized settings. Then given an unnormalized setting, we can simply scale the expected rewards and costs to normalize it, and then determine whether or not there is an IC contract extracting the full expected welfare. If such a contract exists, it is also IC and full-welfare-extracting in the unnormalized setting after scaling back the payments. On the other hand, by the discussion above, if there is no standard-notion ∆-IC contract extracting a given fraction of the expected welfare in the normalized setting, there can also be no such contract with the new ∆-IC notion in any scaling of the setting. We have thus reached a contradiction to NP-hardness. We conclude that proving our negative results for unnormalized settings only strengthens these results.

Gap settings and their construction. We now turn to the definition of gap settings.

Definition 4.6 (Unstructured gap setting). Let f(c, ε) ∈ R≥0 be an increasing function, where c ∈ Z>0 and ε ∈ R>0. An unstructured (c, ε, f)-gap setting is a principal-agent setting such that for every 0 ≤ δ ≤ f(c, ε), the optimal δ-IC contract can extract no more than 1/c + ε of the expected welfare as the principal's expected payoff.

For convenience we focus on (structured) gap settings, as follows.

Definition 4.7 (Gap setting). A (c, ε, f)-gap setting is a setting as in Definition 4.6 with the following structure: there is a single item and c actions; the first action has zero cost; the last action has probability 1 for the item and maximum expected welfare among all actions.

To construct a gap setting, we construct a principal-agent setting with a single item, c actions, and a parameter γ ∈ R>0, γ < 1. The construction is similar to [Dutting et al., 2019], but requires a different analysis. For every i ∈ [c], set the probability of action ai for the item to γ^{c−i}, and set ai's cost to ci = (1/γ^{i−1}) − i + (i − 1)γ. Set the reward for the item to 1/γ^{c−1}. Observe that the expected welfare of action ai is i − (i − 1)γ, so the last action has the maximum expected welfare c − (c − 1)γ. This establishes the structural requirements of a gap setting (Definition 4.7). Propositions 4.8 and 4.9 establish the gap requirements of a gap setting (Definition 4.6) for c = 2 and c ≥ 3, respectively (the separation between these cases is for clarity of presentation). We use the former in Section 4.3, in which we show hardness for the c = 2 case; the latter is a generalization to arbitrarily large constant c.
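The construction above is simple enough to write out and check numerically; this sketch (function names are ours) builds the probabilities, costs, and reward, and verifies the welfare formula i − (i − 1)γ:

```python
def gap_setting(c, gamma):
    """The single-item gap construction: action a_i (1-indexed) gets the
    item with probability gamma**(c - i), costs 1/gamma**(i - 1) - i
    + (i - 1)*gamma, and the item's reward is 1/gamma**(c - 1)."""
    reward = 1.0 / gamma ** (c - 1)
    probs = [gamma ** (c - i) for i in range(1, c + 1)]
    costs = [1.0 / gamma ** (i - 1) - i + (i - 1) * gamma
             for i in range(1, c + 1)]
    return reward, probs, costs

def expected_welfares(reward, probs, costs):
    """Expected welfare of each action; equals i - (i - 1)*gamma."""
    return [p * reward - k for p, k in zip(probs, costs)]
```

For c = 3 and γ = 1/2 the welfares are 1, 1.5, 2, the first action is free, and the last action gets the item with probability 1, as Definition 4.7 requires.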

Proposition 4.8 (2-action gap settings). For every ε ∈ (0, 1/4], there exists a (2, ε, ε²)-gap setting.


Proposition 4.9 (c-action gap settings). For every c ≥ 3 and ε ∈ (0, 1/4], there exists a (c, ε, ε^c)-gap setting.

Average actions and SAT settings. The motivation for the next definition is that given a contract, for an action to be IC or δ-IC it must yield higher expected utility for the agent in comparison to the "average action". Average actions are thus a useful tool for analyzing contracts.

Definition 4.10 (Average action). Given a principal-agent setting and a subset of actions, by the average action we refer to a hypothetical action with the average of the subset's distributions, and average cost. (If a particular subset is not specified, the average is taken over all actions in the setting.)

Definition 4.11 (SAT setting). A SAT principal-agent setting corresponds to a MAX-3SAT instance ϕ. If ϕ has n clauses and m variables, then the SAT setting has n actions and m items. Two conditions hold: (1) ϕ is satisfiable if and only if there is an item set in the SAT setting that the average action leads to with zero probability; (2) if every assignment to ϕ satisfies at most 7/8 + α of the clauses, then for every item set S the average action leads to S with probability at least (1 − 8α)/2^m.

Proposition 4.12. For every ϕ, the reduction in Algorithm 1 runs in polynomial time on input ϕ and returns a SAT setting corresponding to ϕ.

ALGORITHM 1: SAT setting construction in polytime

Input: A MAX-3SAT instance ϕ with n clauses and m variables.

Output: A principal-agent SAT setting (Definition 4.11) corresponding to ϕ.

begin
    Given ϕ, construct a principal-agent setting in which every clause corresponds to an action with a product distribution, and for every variable there is a corresponding item. If variable j appears in clause i of ϕ as a positive literal, then let item j's probability in the ith product distribution be 0, and if it appears as a negative literal then let item j's probability be 1. Set all other probabilities to be 1/2. Set the costs of all actions and the rewards for all items to be 0.
end
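The construction above can be sketched in a few lines. The following is our own illustration (not the paper's code), representing a clause as a list of signed, 1-based variable indices, DIMACS-style:

```python
# Sketch (ours) of Algorithm 1: a MAX-3SAT instance as a list of clauses,
# each clause a list of signed variable indices (DIMACS-style, 1-based).

def sat_setting(clauses, m):
    """Row i = product distribution of action a_i over the m items."""
    Q = []
    for clause in clauses:
        row = [0.5] * m                    # default probability 1/2
        for lit in clause:
            j = abs(lit) - 1
            row[j] = 0.0 if lit > 0 else 1.0
        Q.append(row)
    return Q                               # all costs and rewards are 0

def avg_prob_of_set(Q, S):
    """Probability that the average action produces exactly the item set S."""
    n, m = len(Q), len(Q[0])
    total = 0.0
    for row in Q:
        p = 1.0
        for j in range(m):
            p *= row[j] if j in S else 1.0 - row[j]
        total += p
    return total / n

# phi = (x1 or x2 or x3) and (~x1 or x2 or ~x3), satisfied by x2 = true.
Q = sat_setting([[1, 2, 3], [-1, 2, -3]], m=3)
assert avg_prob_of_set(Q, {1}) == 0.0      # S = {x2}: zero probability
```

The final assertion illustrates condition (1) of Definition 4.11: the item set corresponding to a satisfying assignment is reached by the average action with probability zero.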

4.3 The c = 2 case: Proof of Theorem 4.4. In this section we present a polynomial-time reduction from MAX-3SAT to a product setting, which combines gap and SAT settings. The reduction appears in Algorithm 2. We then analyze the guarantees of the reduction and use them to prove Theorem 4.4. Most of the analysis appears in Proposition 4.15, which shows that the reduction in Algorithm 2 is gap-preserving. Some of the results are formulated in general terms so they can be reused in the next section (Section 4.4).

Before turning to Proposition 4.15, we begin with two simple observations about the product setting resulting from the reduction.

ALGORITHM 2: Polytime reduction from MAX-3SAT to principal-agent

Input: A MAX-3SAT instance ϕ with n clauses and m variables; a parameter ε ∈ R≥0.

Output: A principal-agent product setting combining a SAT setting and a gap setting.

begin
    Combine the SAT setting corresponding to ϕ (attainable in polytime by Proposition 4.12) with a poly-sized (2, ε, ε^2)-gap setting (exists by Proposition 4.8) to get the product setting, as follows:
    • The product setting has n + 1 actions and m + 1 items: m "SAT items" correspond to the SAT setting items, and the last "gap item" corresponds to the gap setting item.
    • The upper-left block of the product setting's (n + 1) × (m + 1) matrix of probabilities is the SAT setting's n × m matrix of probabilities. The entire lower-left 1 × m block is set to 1/2. The entire upper-right n × 1 block is set to the probability that action a_1 in the gap setting results in the item. The remaining lower-right 1 × 1 block is set to the probability that the last action in the gap setting results in the item.
    • In the product setting, the rewards for the m SAT items are set to 0, and the reward for the gap item is set as in the gap setting.
    • The costs of the first n actions in the product setting are the cost of action a_1 in the gap setting; the cost of the last action in the product setting is the cost of the last action in the gap setting.
end
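The block structure of the probability matrix can be sketched as follows (our illustration; in the 2-action gap setting with parameter γ, action a_1 reaches the item with probability γ and the last action with probability 1):

```python
# Sketch (ours) of the block structure in Algorithm 2: combine an n x m SAT
# probability matrix with a 2-action gap setting over a single gap item.

def product_matrix(sat_Q, gap_probs):
    """(n+1) x (m+1) matrix: SAT block upper-left, gap-item column on the right."""
    n, m = len(sat_Q), len(sat_Q[0])
    g1, g_last = gap_probs                 # Pr[gap item | a_1], Pr[gap item | last]
    rows = [sat_row + [g1] for sat_row in sat_Q]       # upper blocks
    rows.append([0.5] * m + [g_last])                  # last action: all 1/2, then gap
    return rows

sat_Q = [[0.0, 0.5], [1.0, 0.5]]           # toy 2 x 2 SAT block
M = product_matrix(sat_Q, gap_probs=(0.2, 1.0))        # gamma = 0.2
assert len(M) == 3 and len(M[0]) == 3      # n+1 actions, m+1 items
assert M[0][-1] == 0.2 and M[-1] == [0.5, 0.5, 1.0]
```

The toy SAT block and γ = 0.2 are illustrative choices only.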

Observation 4.13. Partition all actions of the product setting but the last one into blocks of n actions each (if the number of actions in the gap setting is 2, there is a single such block). Every action in the ith block has the same expected reward for the principal as action a_i in the gap setting, and the last action in the product setting has the same expected reward as the last action in the gap setting.

Corollary 4.14. The optimal expected welfares of the product and gap settings are the same, and are determined by their respective last actions.



Proposition 4.15 (Gap preservation by Algorithm 2). Let ϕ be a MAX-3SAT instance for which either there is a satisfying assignment, or every assignment satisfies at most 7/8 + α of the clauses for α ≤ (0.05)^2. Let ∆ ≤ (0.05)^2. Consider the product setting resulting from the reduction in Algorithm 2 run on input ϕ, ε = 3α^{1/2} + 2∆^{1/2} ≤ 1/4. Then:

1. If ϕ has a satisfying assignment, the product setting has an IC contract that extracts full expected welfare;

2. If every assignment to ϕ satisfies at most 7/8 + α of the clauses, the optimal ∆-IC contract can extract no more than 1/2 + ε of the expected welfare.

Proof. First, if ϕ has a satisfying assignment, then there is a subset of SAT items that has zero probability according to every one of the first n actions. Consider the outcome S* combining this subset together with the gap item. We construct a full-welfare-extracting contract: the contract's payment for S* is the cost of the last action in the product setting multiplied by 2^m (since the probability of S* according to the last action is 1/2^m), and all other payments are set to zero. It is not hard to see that the resulting contract makes the agent indifferent among all actions, so by tie-breaking in favor of the principal, the principal receives the full expected welfare as her payoff.
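The indifference claim can be verified numerically. A quick check (ours; γ = 0.2 and m = 3 are illustrative, and in the 2-action gap setting c_1 = 0 while the last action's cost is C = 1/γ − 2 + γ):

```python
# Quick numeric check (ours): the contract pays C * 2^m for S* and 0 otherwise,
# so every action gives the agent expected utility exactly zero.
gamma, m = 0.2, 3
C = 1 / gamma - 2 + gamma           # cost of the last action (2-action gap setting)
p_star = C * 2 ** m                 # payment for the outcome S*
u_first = 0.0 * p_star - 0.0        # first n actions: Pr[S*] = 0, cost c_1 = 0
u_last = (1 / 2 ** m) * p_star - C  # last action: Pr[S*] = 1/2^m, cost C
assert abs(u_first) < 1e-12 and abs(u_last) < 1e-12
```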

Now consider the case that every assignment to ϕ satisfies at most 7/8 + α of the clauses, and assume for contradiction that there is a ∆-IC contract p for the product setting that extracts more than 1/2 + ε of the expected welfare. We derive from p a δ-IC contract p′ for the (2, ε, ε^2)-gap setting where δ ≤ ε^2, which extracts more than 1/2 + ε of the expected welfare. This is a contradiction to the properties of the gap setting (Definition 4.6).

It remains to specify and analyze contract p′. For brevity we denote the singleton containing the gap item by M′, and define

p′(S′) = ((1 − 8α)/2^m) · Σ_{S⊆[m]} p(S ∪ S′)    ∀S′ ⊆ M′,    (4.8)

where S′ is either the singleton containing the gap item or the empty set. The starting point of the analysis is the observation that to extract > 1/2 + ε of the expected welfare in the product setting, contract p must ∆-incentivize the last action (this follows since the expected rewards and costs of the actions are as in the gap setting by Observation 4.13, and so the same argument as in the proof of Proposition 4.8 holds).

Claim 4.16 below establishes that if contract p ∆-incentivizes the last action in the product setting, then contract p′ δ-incentivizes the last action in the gap setting for δ = (8α + ∆)/(1 − 8α). So indeed

δ = 8α/(1 − 8α) + ∆/(1 − 8α)
  ≤ 9α + 4∆
  = (3α^{1/2})^2 + (2∆^{1/2})^2
  ≤ (3α^{1/2} + 2∆^{1/2})^2 = ε^2,

using that α, ∆ ≤ (0.05)^2 for the first inequality (the last inequality holds since the cross term is nonnegative).

Now observe that the expected payoff to the principal from contract p′ δ-incentivizing the last gap setting action is at least that of contract p ∆-incentivizing the last product setting action: the payments of p′ as defined in (4.8) are the average payments of p lowered by a factor of (1 − 8α), and the expected rewards in the two settings are the same (Observation 4.13). The expected welfares in the two settings are also equal (Corollary 4.14). We conclude that like contract p in the product setting, contract p′ guarantees extraction of > 1/2 + ε of the expected welfare in the gap setting. This leads to a contradiction and completes the proof of Proposition 4.15 (up to Claim 4.16, proved below).

The next claim is formulated in general terms so that it can also be used in Section 4.4. It references the contract p′ defined in (4.8).

Claim 4.16. Assume every assignment to the MAX-3SAT instance ϕ satisfies at most 7/8 + α of its clauses where α < 1/8, and consider the product and gap settings returned by the reduction in Algorithm 2 (resp., Algorithm 3). If in the product setting the last action is ∆-incentivized by contract p, then in the gap setting the last action is δ-incentivized by contract p′ for δ = (8α + ∆)/(1 − 8α).

Proof. Let g_i denote the distribution of action a_i in the gap setting and let c be the number of actions in this setting. In the product setting, by construction its last action assigns probability g_c(S′)/2^m to every set S ∪ S′ such that S contains SAT items and S′ ⊆ M′. Thus the expected payment for the last action given contract p is

Σ_{S⊆[m]} Σ_{S′⊆M′} (g_c(S′)/2^m) p(S ∪ S′) = (1/(1 − 8α)) Σ_{S′⊆M′} g_c(S′) p′(S′),    (4.9)

where the equality follows from the definition of p′ in (4.8). Note that the resulting expression in (4.9) is precisely the expected payment for the last action in the gap setting given contract p′, multiplied by a factor of 1/(1 − 8α).

Similarly, for every i ∈ [c] consider the average action over the ith block of n actions in the product setting (if c = 2 there is a single such block). Again by construction, the probability this ith average action assigns to S ∪ S′ is ≥ g_i(S′)(1 − 8α)/2^m, where we use that the average action of the SAT setting has probability ≥ (1 − 8α)/2^m for S (Definition 4.11). Thus the expected payment for the ith average action given contract p is at least

Σ_{S⊆[m]} Σ_{S′⊆M′} (g_i(S′)(1 − 8α)/2^m) p(S ∪ S′) = Σ_{S′⊆M′} g_i(S′) p′(S′)    ∀i ∈ [c],    (4.10)

where again the equality follows from (4.8). Note that the resulting expression in (4.10) is precisely the expected payment for action a_i in the gap setting given contract p′.

We now use the assumption that in the product setting, contract p ∆-incentivizes the last action. This means the agent ∆-prefers the last action to the ith average action, which has cost zero. Combining (4.9) and (4.10) we get

(1/(1 − 8α)) Σ_{S′⊆M′} g_c(S′) p′(S′) − C + ∆ ≥ Σ_{S′⊆M′} g_i(S′) p′(S′)    ∀i ∈ [c],    (4.11)

where C denotes the cost of the last action in the product and gap settings. By definition of δ-IC, Inequality (4.11) immediately implies that in the gap setting, the last action is δ-IC given contract p′ where δ = (8α + ∆)/(1 − 8α), thus completing the proof of Claim 4.16.

We can now use Proposition 4.15 to prove Theorem 4.4.

Proof of Theorem 4.4. Recall that (ε − 2∆^{1/2})^2/9 is a constant ≤ (0.05)^2. Assume a polynomial-time algorithm for determining whether a principal-agent setting has a (fully-IC) contract that extracts the full expected welfare, or whether no ∆-IC contract can extract more than 1/2 + ε. Then given a MAX-3SAT instance ϕ for which either there is a satisfying assignment or every assignment satisfies at most 7/8 + (ε − 2∆^{1/2})^2/9 of the clauses, by Proposition 4.15 the product setting (constructed in polynomial time) either has a full-welfare-extracting contract or has no ∆-IC contract that can extract more than 1/2 + ε. Since the algorithm can determine among these two cases, it can solve the MAX-3SAT instance ϕ. But by Håstad [2001] and since (ε − 2∆^{1/2})^2/9 is a constant, we know that there is no polynomial-time algorithm for solving such MAX-3SAT instances unless P = NP. This completes the proof of Theorem 4.4.

4.4 The general case: Proof of Theorem 4.1. In this section we formulate and analyze the guarantees of the reduction in Algorithm 3.

ALGORITHM 3: Generalized polytime reduction from MAX-3SAT to principal-agent

Input: A MAX-3SAT instance ϕ with n clauses and m variables; parameters ε ∈ R≥0 and c ∈ Z>0 where c ≥ 3.

Output: A principal-agent product setting combining copies of a SAT setting and a gap setting.

begin
    Combine multiple copies of the SAT setting corresponding to ϕ (attainable in polytime by Proposition 4.12) with a poly-sized (c, ε, ε^c)-gap setting (exists by Proposition 4.9) to get the product setting, as follows:
    • The product setting has cn + 1 actions and m + 1 items: m "SAT items" correspond to the SAT setting items, and the last "gap item" corresponds to the gap setting item.
    • For every i ∈ [c], consider the ith block of n rows of the product setting's (cn + 1) × (m + 1) matrix of probabilities. The ith block consists of row (i − 1) · n + 1 to row i · n and forms a submatrix of size n × (m + 1). The first m columns of the submatrix are set to a copy of the SAT setting's n × m matrix of probabilities, and the entire last column is set to the probability that action a_i in the gap setting results in the item. Finally, the first m entries of the last row of the product setting's matrix (i.e., row cn + 1) are set to 1/2, and the last entry (the lower-right corner of the matrix) is set to the probability that the last action in the gap setting results in the item.
    • In the product setting, the rewards for the m SAT items are set to 0, and the reward for the gap item is set as in the gap setting.
    • For every i ∈ [c], the costs of the n actions in block i are the cost of action a_i in the gap setting; the cost of the last action in the product setting is the cost of the last action in the gap setting.
end

Proposition 4.17 (Gap preservation by Algorithm 3). Let c ∈ Z, c ≥ 3. Let ϕ be a MAX-3SAT instance for which either there is a satisfying assignment, or every assignment satisfies at most 7/8 + α of the clauses for α ≤ (0.05)^c. Let ∆ ≤ (0.05)^c. Consider the product setting resulting from the reduction in Algorithm 3 run on input ϕ, c, ε = 3α^{1/c} + 2∆^{1/c} ≤ 1/4. Then:

1. If ϕ has a satisfying assignment, the product setting has an IC contract that extracts full expected welfare;

2. If every assignment to ϕ satisfies at most 7/8 + α of the clauses, the optimal ∆-IC contract can extract no more than 1/c + ε of the expected welfare.

Proof. First, if ϕ has a satisfying assignment, then there is a subset of SAT items that has zero probability according to every one of the actions in the product setting except for the last action, and so we can construct a full-welfare-extracting contract as in the proof of Proposition 4.15. From now on consider the case that every assignment to ϕ satisfies at most 7/8 + α of the clauses, and assume for contradiction there is a ∆-IC contract p for the product setting that extracts more than 1/c + ε of the expected welfare.

Consider the case that p ∆-incentivizes the last action in the product setting. Then we can derive from it a δ-IC contract p′ for the (c, ε, ε^c)-gap setting where δ ≤ ε^c, which extracts more than 1/c + ε of the expected welfare. This is a contradiction to the properties of the gap setting (Definition 4.6). The construction of p′ and its analysis are as in the proof of Proposition 4.15 (where Equation (4.8) defines p′), and so are omitted here except for the following verification: we must verify that indeed δ ≤ ε^c. We know from Claim 4.16 that δ = (8α + ∆)/(1 − 8α). As in the proof of Proposition 4.15 this is ≤ 9α + 4∆, and it is not hard to see that

9α + 4∆ ≤ (3α^{1/c})^c + (2∆^{1/c})^c ≤ (3α^{1/c} + 2∆^{1/c})^c = ε^c,

where the first inequality uses that c ≥ 3.

In the remaining case, p ∆-incentivizes an action a_{i*k} in the product setting which is the kth action in block i* ∈ [c] (recall each block has n actions). We derive from p a contract p′_k (depending on k) for the gap setting that ∆-incentivizes a_{i*} at the same expected payment. As in the proof of Proposition 4.15, this means that p′_k extracts > 1/c + ε of the expected welfare in the gap setting. Since ∆ ≤ δ = (8α + ∆)/(1 − 8α), it follows from the argument above that ∆ ≤ ε^c, and so we have reached a contradiction to the properties of the gap setting (Definition 4.6).

We define p′_k as follows: Let s_k denote the distribution of action a_k in the SAT setting. For every subset S′ ⊆ M′ of gap items,

p′_k(S′) = Σ_{S⊆[m]} s_k(S) p(S ∪ S′)    ∀S′ ⊆ M′,    (4.12)

where S′ is either the singleton containing the gap item or the empty set.

For the analysis, let g_i denote the distribution of action a_i in the gap setting. In the product setting, for every i ∈ [c], k ≤ n the expected payment for action a_{ik} by contract p is

Σ_{S⊆[m]} Σ_{S′⊆M′} s_k(S) g_i(S′) p(S ∪ S′).    (4.13)

In the gap setting, the expected payment for a_i by contract p′_k is Σ_{S′⊆M′} g_i(S′) p′_k(S′), and by definition of p′_k in (4.12) this coincides with the expected payment in (4.13). We know that contract p ∆-incentivizes a_{i*k} in the product setting, in particular against any action a_{ik} where i ∈ [c] \ {i*} (i.e., against actions in the same position k but in different blocks). This implies that contract p′_k ∆-incentivizes a_{i*} in the gap setting against any action a_i, completing the proof.

We can now use Proposition 4.17 to prove Theorem 4.1. The proof is identical to that of Theorem 4.4 and so is omitted here.

5 Approximation guarantees

In this section we show that for any constant δ there is a simple, namely linear, δ-IC contract that extracts as expected payoff for the principal a c_δ-fraction of the optimal welfare, where c_δ is a constant that depends only on δ. Recall that a linear contract is defined by a parameter α ∈ [0, 1], and pays the agent p_S = α Σ_{j∈S} r_j for every outcome S ⊆ M.

Theorem 5.1. Consider a principal-agent setting with n actions. For every γ ∈ (0, 1) and every δ > 0 there is a δ-IC linear contract with expected payoff ALG where

ALG ≥ (1 − γ) · (1/(⌈log_{1+δ}(1/γ)⌉ + 1)) · max_{i∈[n]} {R_i − c_i}.

An immediate corollary of Theorem 5.1 is that we can compute a δ-IC linear contract that achieves the claimed constant-factor approximation in polynomial time. By Corollary 4.2 we cannot achieve a similar result for IC (rather than δ-IC) contracts unless P = NP. In fact, an even stronger lower bound holds for the class of exactly IC linear (or, more generally, separable) contracts: these contracts cannot achieve an approximation ratio better than n (see [Dütting et al., 2019] and the full version for details).

Geometric understanding of linear contracts. To prove Theorem 5.1 we will rely on the following geometric understanding of linear contracts developed in [Dütting et al., 2019]. Fix a principal-agent setting. For a linear contract with parameter α ∈ [0, 1] and an action a_i, the expected reward R_i = Σ_S q_{i,S} r_S is split between the principal and the agent, leaving the principal with (1 − α)R_i in expected utility and the agent with αR_i − c_i (the sum of the players' expected utilities is action a_i's expected welfare). The agent's expected utility for choosing action a_i as a function of α is thus a line from −c_i (for α = 0) to R_i − c_i (for α = 1). Drawing these lines for each of the n actions, we trace the agent's utility for his best action as α goes from 0 to 1. This gives us the upper envelope diagram for linear contracts in the given principal-agent setting.

We now analyze some properties of the actions along the upper envelope diagram (i.e., as the linear contract gives an increasingly higher fraction of the rewards to the agent). Let I_N be the subset of N ≤ n actions implementable by some linear contract. We subdivide the interval [0, 1] into N ≤ n intervals T_1 = [ℓ_1, r_1), T_2 = [ℓ_2, r_2), . . . , T_N = [ℓ_N, r_N], with ℓ_1 = 0, ℓ_i = r_{i−1} for 2 ≤ i ≤ N, and r_N = 1. The subdivision is such that there is a bijection τ between the indices of actions a_i ∈ I_N and those of intervals T_{τ(i)}, with the following properties:

1. For every i ∈ [N] and α in the ith interval T_i, the linear contract with parameter α incentivizes action a_{τ^{−1}(i)}. It follows that every action a_i ∈ I_N appears on the upper envelope once, and the smallest α that incentivizes it is the left endpoint ℓ_{τ(i)} of this action's interval.

2. For every i ∈ [N] it holds that c_{τ^{−1}(i)} ≤ c_{τ^{−1}(i+1)}, R_{τ^{−1}(i)} ≤ R_{τ^{−1}(i+1)}, and R_{τ^{−1}(i)} − c_{τ^{−1}(i)} ≤ R_{τ^{−1}(i+1)} − c_{τ^{−1}(i+1)}.
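The upper envelope and the interval endpoints ℓ_{τ(i)} can be computed directly from the (R_i, c_i) pairs. A minimal sketch (ours, not the paper's algorithm), which evaluates the agent's utility lines at all pairwise intersection points and breaks ties in favor of the principal:

```python
# Sketch (ours) of the upper-envelope computation for linear contracts:
# each action a_i is a line alpha * R[i] - c[i]; we find, for each
# implementable action, the smallest alpha at which it is the best response.

def upper_envelope(R, c, eps=1e-12):
    n = len(R)
    # Candidate breakpoints: 0, 1, and all pairwise line intersections.
    alphas = {0.0, 1.0}
    for i in range(n):
        for j in range(i + 1, n):
            if R[i] != R[j]:
                a = (c[j] - c[i]) / (R[j] - R[i])
                if 0.0 <= a <= 1.0:
                    alphas.add(a)
    best_at = {}                      # action -> smallest alpha incentivizing it
    for a in sorted(alphas):
        utils = [a * R[i] - c[i] for i in range(n)]
        top = max(utils)
        # Tie-break in favor of the principal: highest expected reward.
        i_star = max((i for i in range(n) if utils[i] >= top - eps),
                     key=lambda i: R[i])
        best_at.setdefault(i_star, a)
    return best_at

R, c = [0.3, 0.6, 1.0], [0.0, 0.1, 0.4]
env = upper_envelope(R, c)
# Costs, rewards, and welfares are non-decreasing along the envelope.
acts = sorted(env, key=env.get)
assert all(R[a] <= R[b] and c[a] <= c[b] for a, b in zip(acts, acts[1:]))
```

The closing assertion checks the second property above on a toy instance.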

Notation. Our proof of Theorem 5.1 uses the following notation:

• We renumber the actions as they appear on the upper envelope from left to right. By the second property above, we get actions sorted by increasing cost, increasing expected reward and increasing expected welfare. In particular, the final action (action a_N after renaming) must be the action in A_n with the highest expected welfare. (This is easy to see for α = 1, for which the full reward is transferred to the agent, who also bears the cost and so picks the welfare-maximizing action.)

• For every linearly-implementable action a_i ∈ I_N, we denote by α_i the smallest parameter α of a linear contract that incentivizes a_i (i.e., the left endpoint of a_i's corresponding interval).

Approximation guarantee proof. With these definitions at hand, we are now ready to prove the theorem.

Proof of Theorem 5.1. Given γ ∈ (0, 1) and δ > 0, set κ = ⌈log_{1+δ}(1/γ)⌉. Subdivide the range [0, 1] of α-parameters into κ + 1 intervals:

[0, γ(1 + δ)^0), [γ(1 + δ)^0, γ(1 + δ)^1), [γ(1 + δ)^1, γ(1 + δ)^2), . . . , [γ(1 + δ)^{κ−1}, 1].

For each interval k ∈ [κ + 1], denote by a_{h(k)} the action a_i with the highest expected reward for which α_i falls into this interval (for simplicity of presentation we assume without loss of generality that such an action exists for each interval).

Note that h(k) < h(k + 1) due to renaming and because actions appear on the upper envelope in non-decreasing order of expected reward. We require the following definition: For k ≥ 2, define

α_{h(k−1),h(k)} = (c_{h(k)} − c_{h(k−1)})/(R_{h(k)} − R_{h(k−1)}),

i.e., α_{h(k−1),h(k)} is the α that makes the agent indifferent between actions h(k − 1) and h(k). For k = 1 define α_{h(k−1),h(k)} = 0.

The proof now proceeds by two claims. The first claim derives an upper bound on max_{i∈[n]}(R_i − c_i) = R_N − c_N. The proof appears in the full version.

Claim 5.2. max_{i∈[n]}(R_i − c_i) = R_N − c_N ≤ Σ_{k=1}^{κ+1} (1 − α_{h(k−1),h(k)}) R_{h(k)}.

The second crucial observation is that while α_{h(k−1),h(k)} is generally smaller than α_{h(k)} and thus does not incentivize action h(k), it still δ-incentivizes it.

Claim 5.3. For k = 2, . . . , κ + 1, the linear contract with α = α_{h(k−1),h(k)} ensures that αR_{h(k)} − c_{h(k)} + δ ≥ αR_i − c_i for every i ∈ [n].

Proof of Claim 5.3. The lines αR_{h(k)} − c_{h(k)} and αR_{h(k−1)} − c_{h(k−1)} intersect at α_{h(k−1),h(k)}. By construction, this intersection must fall between, on the one hand, the left endpoint γ(1 + δ)^{k−2} of the interval in which α_{h(k)} falls, and α_{h(k)} on the other hand. This shows that (1 + δ)α_{h(k−1),h(k)} ≥ (1 + δ)γ(1 + δ)^{k−2} = γ(1 + δ)^{k−1} ≥ α_{h(k)}. Combining this with the fact that a_{h(k)} is incentivized exactly at α_{h(k)}, we obtain that α_{h(k−1),h(k)}R_{h(k)} − c_{h(k)} + δ ≥ (1 + δ)α_{h(k−1),h(k)}R_{h(k)} − c_{h(k)} ≥ α_{h(k)}R_{h(k)} − c_{h(k)} ≥ α_{h(k)}R_i − c_i for all i ∈ [n], where the first inequality holds since R_{h(k)} ≤ 1 by normalization. This completes the proof of Claim 5.3.

Using Claims 5.2 and 5.3, the theorem follows from the fact that to obtain a δ-IC linear contract we can either incentivize a_{h(1)} at α = α_{h(1)} or δ-incentivize one of the actions a_{h(2)}, . . . , a_{h(κ+1)} at α = α_{h(k−1),h(k)} with 2 ≤ k ≤ κ + 1. Namely,

ALG ≥ max{(1 − α_{h(1)})R_{h(1)}, (1 − α_{h(1),h(2)})R_{h(2)}, . . . , (1 − α_{h(κ),h(κ+1)})R_{h(κ+1)}}
    ≥ (1 − γ) max{(1 − α_{h(0),h(1)})R_{h(1)}, (1 − α_{h(1),h(2)})R_{h(2)}, . . . , (1 − α_{h(κ),h(κ+1)})R_{h(κ+1)}}
    ≥ (1 − γ) · (1/(κ + 1)) · Σ_{k=1}^{κ+1} (1 − α_{h(k−1),h(k)}) R_{h(k)}
    ≥ (1 − γ) · (1/(κ + 1)) · OPT,

where for the first inequality applied to ALG we use Claim 5.3, for the second inequality we use α_{h(1)} ≤ γ and α_{h(0),h(1)} ≥ 0, for the third inequality we lower bound the maximum by the average, and for the final inequality we use Claim 5.2.
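The proof is constructive, and the resulting contract can be sketched as follows (our illustration, not the paper's code). It assumes the upper envelope is given as a sorted list of triples (α_i, R_i, c_i), one per implementable action:

```python
import math

# Sketch (ours) of the delta-IC linear contract from the proof of Theorem 5.1.
# env: sorted list of (alpha_i, R_i, c_i) for the implementable actions, where
# alpha_i is the smallest parameter incentivizing the action.

def delta_ic_linear_contract(env, gamma, delta):
    """Return (alpha, expected payoff) of the chosen delta-IC linear contract."""
    kappa = math.ceil(math.log(1 / gamma, 1 + delta))
    # Interval edges [0, g), [g, g(1+d)), ..., [g(1+d)^(kappa-1), 1].
    edges = [0.0] + [gamma * (1 + delta) ** t for t in range(kappa)] + [1.0]
    reps = []                              # a_{h(k)}: best action per interval
    for lo, hi in zip(edges, edges[1:]):
        in_int = [e for e in env
                  if lo <= e[0] < hi or (hi == 1.0 and e[0] == 1.0)]
        if in_int:
            reps.append(max(in_int, key=lambda e: e[1]))   # highest R
    best = (reps[0][0], (1 - reps[0][0]) * reps[0][1])     # incentivize h(1)
    for (a0, R0, c0), (a1, R1, c1) in zip(reps, reps[1:]):
        a_ind = (c1 - c0) / (R1 - R0)      # indifference point alpha_{h(k-1),h(k)}
        best = max(best, (a_ind, (1 - a_ind) * R1), key=lambda t: t[1])
    return best

env = [(0.0, 0.3, 0.0), (1 / 3, 0.6, 0.1), (0.75, 1.0, 0.4)]
alpha, payoff = delta_ic_linear_contract(env, gamma=0.25, delta=0.1)
assert 0.0 <= alpha <= 1.0 and payoff > 0.0
```

On the toy envelope shown, the contract at the indifference point α = 1/3 achieves payoff 0.4.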

6 Black-box model

We conclude by considering a black-box model which concerns not-necessarily-succinct principal-agent settings. In this model, the principal knows the set of actions A_n, the cost c_i of each action a_i ∈ A_n, the set of items M and the reward r_j for each item j ∈ M, but does not know the probabilities q_{i,S} that action a_i assigns to outcome S ⊆ M. Instead, the principal has query access to the distributions {q_i}: upon querying distribution q_i of action a_i, a random set S is returned, where S is selected with probability q_{i,S}. Our goal is to study how well a δ-IC contract in this model can approximate the optimal IC contract if limited to a polynomial number of queries (where the guarantees should hold with high probability over the random samples). Black-box models have been studied in other algorithmic game theory contexts such as signaling; see [Dughmi and Xu, 2016] for a successful example.

Let η = min{q_{i,S} | i ∈ [n], S ⊆ M, q_{i,S} ≠ 0} be the minimum non-zero probability of any set of items under any of the actions. Note that then either q_{i,S} = 0 or q_{i,S} ≥ η for every S. In Section 6.1 we address the case in which η is inverse super-polynomial and obtain a negative result; in Section 6.2 we show a positive result for the case of inverse polynomial η.

6.1 Inverse super-polynomial probabilities. We show a negative result for the case where the minimum probability η is inverse super-polynomial, by showing that poly(1/√η) samples are required to obtain a constant-factor multiplicative approximation better than ≈ 1.15. The negative result holds even for succinct settings, in which the unknown distributions are product distributions.

Theorem 6.1. Assume η ≤ η_0 = 1/625 and δ ≤ δ_0 = 1/100. Even with n = 2 actions and m = 2 items, achieving a multiplicative ≤ 1.15 approximation to the optimal IC contract through a δ-IC contract, where the approximation guarantee is required to hold with probability at least 1 − γ, may require at least s ≥ − log(γ)/(9√η) queries.

Proof. We consider a scenario with two settings, both of which have n = 2 actions and m = 2 items, and which differ only in the probabilities of the items given the second action. Let τ be some constant > 2 (to be fixed later), and let µ = √η/τ. Let β = (1 + 1/τ^2)^{−1} and note that β < 1.

Setting I:
         r_1 = β/(τ^2µ)   r_2 = β/(τ^2µ)
    a_1:     τµ               τµ           c_1 = 0
    a_2:     τ^2µ             µ            c_2 = ((τ − 1)/τ^3) · β/(1 − µ)

Setting II:
         r_1 = β/(τ^2µ)   r_2 = β/(τ^2µ)
    a_1:     τµ               τµ           c_1 = 0
    a_2:     µ                τ^2µ         c_2 = ((τ − 1)/τ^3) · β/(1 − µ)

Note further that the minimum probability of any set of items in both settings is q_{2,{1,2}} = τ^2µ^2 = η, as required by the definition of η.

The expected reward achieved by the two actions in the two settings is R_1 = 2β/τ < 1 and R_2 = (1 + 1/τ^2)β = 1. Moreover, the cost of action 2 is c_2 ≤ β/τ^2. So the welfare achieved by the two actions is R_1 − c_1 < β and R_2 − c_2 ≥ β.

In both settings the optimal IC contract incentivizes action 2, by paying only for the set of items that maximizes the likelihood ratio. In Setting I this is {1}, in Setting II it is {2}. The payment for this set in both cases is c_2/(τ^2µ(1 − µ) − τµ(1 − τµ)) = c_2/(τ^2µ − τµ). This leads to an expected payment of τ^2µ(1 − µ) · c_2/(τ^2µ − τµ) = β/τ^2. The resulting payoff (and our benchmark) is therefore R_2 − β/τ^2 = β.

We now argue that if we cannot distinguish between the two settings, then we can only achieve a ≈ 1.1568 approximation. Of course, we can always pay nothing and incentivize action 1, but this only yields a payoff of 2β/τ. We can also try to δ-incentivize action 2 in both settings, by paying for outcomes {1} and {2}. But (as we show below) the payoff that we can achieve this way is (for δ → 0 and µ → 0) at most (1 + 1/τ^2 − (τ^2 + 1)/((τ − 1)τ^3))β. Now max{2/τ, 1 + 1/τ^2 − (τ^2 + 1)/((τ − 1)τ^3)} is minimized at τ = 1 + √2, where it is 2/(1 + √2) ≈ 0.8284. The upper bound on the payoff from action 2 for this choice of τ is actually increasing in both µ and δ, and is ≈ 0.8644 · β at the upper bounds µ_0 = √η_0/4 = 1/100 and δ_0 = 1/100, implying that the best we can achieve without knowing the setting is a ≈ 1/0.8644 ≈ 1.1568 approximation.

So if we want to achieve a ≤ 1.15 approximation with probability at least 1 − γ, then we need to be able to distinguish between the two settings with at least this probability. A necessary condition for being able to distinguish between the two settings is that we see at least some item in one of our queries to action 2. So,

1 − γ ≤ 1 − (1 − τ^2µ)^{2s},

which implies that s ≥ log(γ)/(2 log(1 − τ^2µ)) ≥ − log(γ)/(2µτ^2) ≥ − log(γ)/(18µ). Plugging in µ = √η/τ we get s ≥ −τ log(γ)/(18√η) > − log(γ)/(9√η).

We still need to prove our claims regarding the payoff that we can achieve if we want to δ-incentivize action 2 in both settings. To this end consider the IC constraints for δ-incentivizing action 2 over action 1 in Setting I and Setting II, respectively:

τ^2µ(1 − µ)p_{1} + (1 − τ^2µ)µp_{2} − c_2 ≥ τµ(1 − τµ)p_{1} + (1 − τµ)τµp_{2} − δ,
(1 − τ^2µ)µp_{1} + τ^2µ(1 − µ)p_{2} − c_2 ≥ τµ(1 − τµ)p_{1} + (1 − τµ)τµp_{2} − δ.

Adding up these constraints yields

(τ^2µ(1 − µ) + (1 − τ^2µ)µ − 2τµ(1 − τµ)) · (p_{1} + p_{2}) ≥ 2c_2 − 2δ.

We maximize the minimum performance across the two settings by choosing p_{1} = p_{2}. Letting p = p_{1} = p_{2} we thus obtain

(τ^2µ(1 − µ) + (1 − τ^2µ)µ − 2τµ(1 − τµ))p ≥ c_2 − δ.

It follows that

p ≥ (c_2 − δ)/(τ^2µ + µ − 2τµ).

The optimal contract that δ-incentivizes action 2 in both settings thus achieves an expected payoff of

R_2 − (τ^2µ(1 − µ) + (1 − τ^2µ)µ) · (c_2 − δ)/(τ^2µ + µ − 2τµ) = R_2 − ((τ^2(1 − 2µ) + 1)/(τ − 1)^2) · (c_2 − δ).

Plugging in R_2 and c_2 and letting δ → 0 and µ → 0 we obtain the aforementioned (1 + 1/τ^2 − (τ^2 + 1)/((τ − 1)τ^3))β. Finally, to see that the expected payoff evaluated at τ = 1 + √2 > 2 is increasing in both δ and µ, observe that the derivative in δ is simply the probability term (τ^2(1 − 2µ) + 1)/(τ − 1)^2, which is positive, and that the product of this probability term with the cost c_2 is decreasing in µ, implying that as µ increases we subtract less.

6.2 Inverse polynomial probabilities. We show a positive result for the case where the minimum probability η is inverse polynomial. Namely, let OPT denote the expected payoff of the optimal IC contract; then with poly(n, m, 1/η, 1/ε, 1/γ) queries it is possible to find with probability at least 1 − γ a 4ε-IC contract with payoff at least OPT − 5ε. Formally:

Theorem 6.2. Fix ε > 0, and assume ε ≤ 1/2. Fix distributions Q such that q_{i,S} ≥ η for all i ∈ [n] and S ⊆ M. Denote the expected payoff of the optimal IC contract for distributions Q by OPT. Then there is an algorithm that, with s = (3 log(2n/(ηγ)))/(ηε^2) queries to each action and probability at least 1 − γ, computes a contract p̂ which (i) is 4ε-IC on the actual distributions Q; and (ii) has expected payoff Π on the actual distributions satisfying Π ≥ OPT − 5ε.

To prove Theorem 6.2, we first prove a series of lemmas (Lemmas 6.3 to 6.7). Proofs appear in the full version.

Lemma 6.3. Consider the algorithm that issues s queries to each action i ∈ [n], and sets q̂_{i,S} to be the empirical probability of set S under action i. With s = (3 log(2n/(ηγ)))/(ηε^2) queries to each action, with probability at least 1 − γ, for all i ∈ [n] and S ⊆ M,

(1 − ε)q_{i,S} ≤ q̂_{i,S} ≤ (1 + ε)q_{i,S}.
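The estimator in Lemma 6.3 is plain empirical frequency estimation. A minimal sketch (ours; the two-outcome toy action and its probabilities are illustrative assumptions, not from the paper):

```python
import random
from collections import Counter

# Sketch (ours) of the estimator in Lemma 6.3: query an action s times and
# use empirical frequencies as the estimated outcome distribution.

def estimate_distribution(sample_outcome, s, seed=0):
    """sample_outcome(rng) returns one outcome set (a frozenset) per query."""
    rng = random.Random(seed)
    counts = Counter(sample_outcome(rng) for _ in range(s))
    return {S: k / s for S, k in counts.items()}

# Toy action over two items: outcome {0} w.p. 0.7, outcome {1} w.p. 0.3.
def toy(rng):
    return frozenset({0}) if rng.random() < 0.7 else frozenset({1})

q_hat = estimate_distribution(toy, s=200000)
assert abs(q_hat[frozenset({0})] - 0.7) < 0.01   # within epsilon w.h.p.
```

With the stated number of queries, a Chernoff bound gives the multiplicative (1 ± ε) guarantee simultaneously for all actions and all sets of probability at least η.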

Lemma 6.4. Suppose that (1 − ε)q_{i,S} ≤ q̂_{i,S} ≤ (1 + ε)q_{i,S} for all i ∈ [n] and S ⊆ M. Consider contract p. If a_i is the action that is incentivized by this contract under the actual probabilities Q, then the payoff of a_i under the empirical distributions Q̂ is at least as high as that of any other action up to an additive term of 2ε.

Lemma 6.5. Suppose that (1 − ε)q_{i,S} ≤ q̂_{i,S} ≤ (1 + ε)q_{i,S} for all i ∈ [n] and S ⊆ M. Consider contract p. If a_i is the action that is δ-incentivized by this contract under the empirical probabilities Q̂, then the payoff of a_i under the actual distributions is at least as high as that of any other action up to an additive term of δ + 2ε.

Lemma 6.6. Suppose that (1 − ε)q_{i,S} ≤ q̂_{i,S} ≤ (1 + ε)q_{i,S} for all i ∈ [n] and S ⊆ M. If action a_i achieves payoff Π̂ under contract p when evaluated on the empirical distributions Q̂, then it achieves payoff Π ≥ Π̂ − 2ε when evaluated on the actual distributions Q.

Lemma 6.7. Assume ε ≤ 1/2. Suppose that (1 − ε)q_{i,S} ≤ q̂_{i,S} ≤ (1 + ε)q_{i,S} for all i ∈ [n] and S ⊆ M. If action a_i achieves payoff P under contract p when evaluated on the actual distributions Q, then it achieves payoff P̂ ≥ P − 3ε when evaluated on the empirical distributions Q̂.

We are now ready to prove the theorem.

Proof of Theorem 6.2. Compute empirical probabilities Q̃ by querying each action s times. By Lemma 6.3, with probability at least 1 − γ, the empirical probabilities obtained in this way will satisfy (1 − ε)q_{i,S} ≤ q̃_{i,S} ≤ (1 + ε)q_{i,S} for all i ∈ [n] and S ⊆ M.

Suppose we compute the optimal 2ε-IC contract p̃ on the empirical distributions Q̃. Denote the expected payoff achieved by this contract on Q̃ by Π̃, and the expected payoff it achieves on Q by Π. Likewise, consider the optimal IC contract p on the actual distributions Q. Denote the expected payoff OPT achieved by this contract on the actual distributions Q by P, and the expected payoff it achieves on Q̃ by P̃.

Note that by Lemma 6.5, contract p̃, which is 2ε-IC on Q̃, is 4ε-IC on Q. Furthermore, by Lemma 6.4, contract p, which is IC on Q, is 2ε-IC on Q̃. Since p̃ is optimal among 2ε-IC contracts on Q̃, this implies that Π̃ ≥ P̃. Together with Lemma 6.6 and Lemma 6.7 we obtain

Π ≥ Π̃ − 2ε ≥ P̃ − 2ε ≥ P − 5ε = OPT − 5ε,

which proves the theorem.
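For readability, here is the same chain with each inequality labeled by the fact that justifies it (tildes mark quantities evaluated on the empirical distributions; this restates the displayed inequality, adding no new content):

```latex
\[
\Pi
  \;\overset{\text{Lem.~6.6}}{\geq}\; \tilde{\Pi} - 2\varepsilon
  \;\overset{\tilde{\Pi} \geq \tilde{P}}{\geq}\; \tilde{P} - 2\varepsilon
  \;\overset{\text{Lem.~6.7}}{\geq}\; (P - 3\varepsilon) - 2\varepsilon
  \;=\; \mathrm{OPT} - 5\varepsilon,
\]
```

where the middle step holds because the computed contract is optimal among 2ε-IC contracts on the empirical distributions, while the optimal IC contract is 2ε-IC on them by Lemma 6.4.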

References

G. A. Akerlof. The market for “lemons”: Quality uncertainty and the market mechanism. The Quarterly Journal of Economics, 84(3):488–500, 1970.

G. A. Akerlof. Labor contracts as partial gift exchange. The Quarterly Journal of Economics, 97:543–569, 1982.

P. D. Azar and S. Micali. Computational principal-agent problems. Theoretical Economics, 13:553–578, 2018.

M. Babaioff and E. Winter. Contract complexity. In EC’14, page 911, 2014.

M. Babaioff, M. Feldman, N. Nisan, and E. Winter. Combinatorial agency. Journal of Economic Theory, 147(3):999–1034, 2012.

M. Babaioff, N. Immorlica, B. Lucier, and S. M. Weinberg. A simple and approximately optimal mechanism for an additive buyer. In FOCS’14, pages 21–30, 2014.

M. Babaioff, Y. A. Gonczarowski, and N. Nisan. The menu-size complexity of revenue approximation. In STOC’17, pages 869–877, 2017.

Y. Cai. Mechanism design: A new algorithmic framework. PhD thesis, Massachusetts Institute of Technology (MIT), 2013.

Y. Cai, C. Daskalakis, and S. M. Weinberg. An algorithmic characterization of multi-dimensional mechanisms. In STOC’12, pages 459–478, 2012a.

Y. Cai, C. Daskalakis, and S. M. Weinberg. Optimal multi-dimensional mechanism design: Reducing revenue to welfare maximization. In FOCS’12, pages 130–139, 2012b.

Y. Cai, C. Daskalakis, and S. M. Weinberg. Understanding incentives: Mechanism design becomes algorithm design. In FOCS’13, pages 618–627, 2013.

Y. Cai, N. R. Devanur, and S. M. Weinberg. A duality based unified approach to Bayesian mechanism design. In STOC’16, pages 926–939, 2016.

B. Caillaud and B. E. Hermalin. Hidden-information agency. Lecture notes available from http://faculty.haas.berkeley.edu/hermalin/mechread.pdf, 2000.

R. D. Carr and S. Vempala. Randomized metarounding. Random Structures and Algorithms, 20(3):343–352, 2002.

G. Carroll. A quantitative approach to incentives: Application to voting rules. Working paper, 2013.

G. Carroll. Robustness and linear contracts. American Economic Review, 105(2):536–563, 2015.

Y. Cheng, H. Y. Cheung, S. Dughmi, E. Emamjomeh-Zadeh, L. Han, and S. Teng. Mixture selection, mechanism design, and signaling. In FOCS’15, pages 1426–1445, 2015.

C. Daskalakis and S. M. Weinberg. Symmetries and optimal multi-dimensional mechanism design. In EC’12, pages 370–387, 2012.

S. Dughmi. On the hardness of signaling. In FOCS’14, pages 354–363, 2014.

S. Dughmi and H. Xu. Algorithmic Bayesian persuasion. In STOC’16, pages 412–425, 2016.


S. Dughmi, N. Immorlica, and A. Roth. Constrained signaling in auction design. In SODA’14, pages 1341–1357, 2014.

S. Dughmi, J. D. Hartline, R. Kleinberg, and R. Niazadeh. Bernoulli factories and black-box reductions in mechanism design. In STOC’17, pages 158–169, 2017.

P. Dütting, T. Roughgarden, and I. Talgam-Cohen. Simple versus optimal contracts. In EC’19, pages 369–387, 2019.

Y. Emek and M. Feldman. Computing optimal contracts in combinatorial agencies. Theoretical Computer Science, 452:56–74, 2012.

F. Englmaier and S. Leider. Contractual and organizational structure with reciprocal agents. American Economic Journal: Microeconomics, 4(2):146–183, 2012.

M. Feldman, G. Kortsarz, and Z. Nutov. Improved approximation algorithms for directed Steiner forest. Journal of Computer and System Sciences, 78(1):279–292, 2012.

L. Fleischer, M. X. Goemans, V. S. Mirrokni, and M. Sviridenko. Tight approximation algorithms for maximum separable assignment problems. Mathematics of Operations Research, 36(3):416–431, 2011.

Y. A. Gonczarowski. Bounding the menu-size of approximately optimal auctions via optimal-transport duality. In STOC’18, pages 123–131, 2018.

Y. A. Gonczarowski and S. M. Weinberg. The sample complexity of up-to-ε multi-dimensional revenue maximization. In FOCS’18, pages 416–426, 2018.

S. J. Grossman and O. D. Hart. An analysis of the principal-agent problem. Econometrica, 51(1):7–45, 1983.

J. D. Hartline and B. Lucier. Non-optimal mechanism design. American Economic Review, 105(10):3102–3124, 2015.

J. D. Hartline, R. Kleinberg, and A. Malekian. Bayesian incentive compatibility via matchings. Games and Economic Behavior, 92:401–429, 2015.

J. Håstad. Some optimal inapproximability results. Journal of the ACM, 48(4):798–859, 2001.

C. Ho, A. Slivkins, and J. W. Vaughan. Adaptive contract design for crowdsourcing markets: Bandit algorithms for repeated principal-agent problems. Journal of Artificial Intelligence Research, 55:317–359, 2016.

K. Jain, M. Mahdian, and M. R. Salavatipour. Packing Steiner trees. In SODA’03, pages 266–274, 2003.

A. X. Jiang and K. Leyton-Brown. Polynomial-time computation of exact correlated equilibrium in compact games. Games and Economic Behavior, 91:347–359, 2015.

N. Karmarkar and R. M. Karp. An efficient approximation scheme for the one-dimensional bin-packing problem. In FOCS’82, pages 312–320, 1982.

J. M. Kleinberg and R. Kleinberg. Delegated search approximates efficient search. In EC’18, pages 287–302, 2018.

B. Kőszegi. Behavioral contract theory. Journal of Economic Literature, 52(4):1075–1118, 2014.

R. B. Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58–73, 1981.

Z. Nutov, I. Beniaminy, and R. Yuster. A (1−1/e)-approximation algorithm for the generalized assignment problem. Operations Research Letters, 34(3):283–288, 2006.

C. H. Papadimitriou. The complexity of finding Nash equilibria. In N. Nisan, T. Roughgarden, É. Tardos, and V. V. Vazirani, editors, Algorithmic Game Theory, chapter 2, pages 29–51. Cambridge University Press, 2006.

C. H. Papadimitriou and T. Roughgarden. Computing correlated equilibria in multi-player games. Journal of the ACM, 55(3):14:1–14:29, 2008.

C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, 1982.

Royal Swedish Academy of Sciences. Scientific background on the 2016 Nobel Prize in Economic Sciences, 2016.

B. Salanié. The Economics of Contracts: A Primer. MIT Press, 2005.

D. Walton and G. Carroll. When are robust contracts linear? Working paper, 2019.

S. M. Weinberg. Algorithms for strategic agents. PhD thesis, Massachusetts Institute of Technology (MIT), 2014.
