
A technical framework for light-handed regulation of cognitive radios

Anant Sahai and Kristen Ann Woyach
Electrical Engineering and Computer Sciences

University of California, Berkeley, CA
Emails: {sahai,kwoyach}@eecs.berkeley.edu

George Atia and Venkatesh Saligrama
Electrical and Computer Engineering

Boston University, Boston, MA
Emails: {geokamal,srv}@bu.edu

I. INTRODUCTION

Governments around the world have to decide on what regulation is going to look like for the next generation of wireless devices. The current regulatory model – often called “command-and-control” – in which spectrum is parceled and allocated to specific uses and companies, was designed for one-to-many broadcast systems such as TV and AM/FM radio. This centralized solution is easy to enforce, but has difficulty managing allocations on the heterogeneous usage scales of interest. It leaves “holes” in both time and space where valuable spectrum is being wasted [3]. In common language, both the need to get lengthy government approvals and the wasted spectrum are often viewed as problems of regulatory overhead.

Legal scholars and economists have debated how to solve this problem. While all agree that decentralized and more “light-handed” regulation is desirable, the form of this regulation is contested. Spectrum privatization advocates rely on market forces to determine who will be allowed to transmit. In this model, government regulation certifies devices, monitors market transactions, and resolves disputes as civil offenses through the courts. Spectrum commons advocates note that with current technological advances, a simpler approach is possible that puts the burden of regulation entirely on equipment — any certified device may transmit.

Regardless of the policy approach, the looming introduction of frequency-agile and software-defined radios poses a major challenge. Cognitive radios are autonomous and possibly adaptive, allowing them to adjust their transmission patterns according to local observations [4]. This forces us to confront the wireless version of an age-old philosophical question: for autonomous beings, is the freedom to do good distinguishable a priori from the freedom to do evil? In this perspective, frequency-agility runs the risk of being the wireless equivalent of Plato’s Ring of Gyges that conferred invisibility and hence unaccountability to its wearer. Faulhaber raises this specter through his discussion of “hit and run radios” that are virtually uncatchable because they turn on, use the spectrum for a period of time, and turn off without a trace [5].

The knee-jerk response to this prospect is to just ban frequency-agility altogether. But in the age of an ever increasing number of wireless interfaces on portable devices, the potential cost and power savings enabled by radio unification through frequency-agility are hard to ignore. Besides, precluding frequency-agility would eliminate the long-term prospects for dynamic spectrum access (whether by markets or commons!) to reduce the regulatory overhead of wasted spectrum.

So the core question that the wireless community faces is how to readjust the balance to exploit frequency-agile devices for reducing regulatory overhead. It is tempting to wish for a crisp way to certify the safety of wireless protocols involving frequency-agility and then to lock these down at device certification time. Besides the obvious problem that Gödel and Turing tell us that automatically proving correctness of general programs is impossible and engineering bug-free software is hard even in deterministic settings, Hatfield has pointed out that wireless environments often lead to unpredictable interactions [6] that are hard to model a priori. Such a detailed code-level certification is likely to be costly and thus represents a barrier to entry that effectively reduces the freedom to innovate at the wireless transport level. The real world of politics dictates that such complex barriers will provide many opportunities for manipulation by parties interested in blocking competitors [7].

Some of this material has appeared previously in [1], [2], and [3].

From the societal perspective, the value of wireless freedom really depends on the synergies between content/applications and transport. If all desirable future wireless services with all device lifetimes can be served using a few stable interfaces, then freedom to innovate at the spectrum access level is not necessarily very valuable. On the surface though, this sounds like the wireless analog of the apocryphal quote from the pre-digital-revolution days “I think there is a world market for maybe five computers” or the pre-Internet-revolution worldview that the “information superhighway” would just consist of audio/video-on-demand, home-shopping, multiplayer-gaming, digital libraries and maybe some distance learning. Meanwhile, multiuser information theory is still revealing innovative ways of doing wireless communication, so the question of potential synergies is still open [8]. For now, it seems reasonable to come down on the side that freedom is important.

If entirely a priori enforcement is difficult, it seems natural to follow the example of crime in human society and have a role for a posteriori spectrum rule enforcement that uses incentives to deter bad behavior rather than precluding all bad behavior by design. The role of a priori certification is then limited to maintaining the incentives. Certification is required since the existing game-theoretic literature says that while a pair of equal users can self-enforce (see, e.g. [9]) to a range of stable and fair equilibria, this breaks down when users are unequal. Consider a case where the first user can cause very little interference to the second while the second can cause a great deal of interference to the first. The first has neither defense nor ammunition. Without a possibly external force to which the second is vulnerable, the first cannot reasonably believe that the second will follow sharing rules. Indeed, vulnerability is the mother of trust. Furthermore, robust identity is needed to avoid the “ring of Gyges” problem when there are more than two users since without identity, there is no threat of being held accountable.

In this paper, we consider how to give radios an identity in a way that is easy to certify, easy to implement, and does not presume much about the kinds of waveforms that the radio system can implement. Perhaps more importantly, this approach to radio identity allows harmful interference to be causally attributed with great confidence to the guilty radio(s) without imposing a significant PHY burden on the victims. This is done by giving each radio its own spectral fingerprint of time-frequency slots that it is forbidden to use. The proportion of taboo slots quantifies the spectrum overhead of such an identity system.

To understand how to set the parameters, we then sketch out a simple system of punishment for misbehaving radios that involves sending them to “spectrum jail” for finite amounts of time. This system is explained in the context of a toy real-time spectrum market where the overhead imposed is the proportion of time that innocent systems spend in jail. Somewhat surprisingly, this gives us a spectral analog of human criminal law’s Blackstone’s ratio (“Better that ten guilty persons escape than that one innocent suffer”) [10]. Overall, we see that while light-handed regulation is possible, some significant spectral overhead seems unavoidable.

II. IDENTITY

There are many potential approaches to ‘identity.’ In the most straightforward approach, identity is explicitly transmitted by the physical layer as a separate signal in a mandated format. If a victim experiences harmful interference, then it merely has to decode this signal to learn the identities of all the potential interferers. However, while this approach is conceptually simple, it has three major shortcomings:

1) It forces us to mandate a standard PHY-layer waveform for transmission of this identity information. This adds complexity to systems that need different waveforms for their own signals.

2) It imposes an additional decoder PHY burden either on the part of specially deployed enforcement radios, or on the potential victims of interference.

3) A broadcast identity does not distinguish between the guilty and innocent bystanders. Thus it reduces the incentive to deploy innovative approaches to reduce interference.

This last issue is particularly significant when we wish to punish only users that are actually causing harmful interference, as opposed to punishing any user who is transmitting without authorization. The “no harm no foul” principle is attractive in the context of light-handed regulation, but the explicit identity beacon approach does not distinguish between harmful interference and unfortunate fading or bad luck.

A second approach to identity can be developed (see e.g. [11]) where idiosyncrasies of the radio front-ends are used to identify devices. While this “accidental identity” approach deals with the first objection above, the others remain. Furthermore, such accidental identities provide no way of having multiple coordinates associated with a single transmission. For example, an offending transmission might originate from a particular device that is in a particular network and being used by a particular human user. An explicit beacon could just concatenate bit-fields to transmit all three identities but there is no way to do this with accidental identities. For example, contrast “tall female, short blond hair, slim build, wearing a purple bodysuit” as a description with a more engineered identity like “Seven of Nine, Tertiary Adjunct to Unimatrix Zero-One.”

Fig. 1. Taboo-based identities, demonstrated here as the composition of three levels: network, user, and device. The taboo times can be different in different bands to enable intelligent frequency hopping to maintain steady low-latency links.

Fig. 2. The tradeoffs involved in the taboo approach to identity for the specific case of 90% probability of detection and 0.5% probability of false alarm. The first two panels consider a single isolated user while the last two panels consider tradeoffs relevant to multiple users.

Stepping back, the use of accidental identities is very much like the use of cyclostationary signal features to detect and distinguish legacy primary users. It turns out that much better performance can be obtained if we design these signal features ourselves [12]. We were inspired by how “Geographic Profiling” of criminals exploits the fact that serial killers tend to maintain a taboo buffer zone around their homes wherein they do not kill anyone [13]. Fig. 1 shows the wireless equivalent – an engineered identity-specific code that specifies which time-slots are taboo for this “temporal profile.” The length of the time-slots should be significantly longer than the delay spread of the relevant channels as well as longer than the useful length of packets. Something like 1–10 ms seems reasonable.

This temporal taboo can easily be certified since it only requires a circuit that disables transmission. Different identities can also be stacked by giving each code a veto over using timeslots that are taboo to it. This avoids all the problems above: no separate PHY is needed, there is no additional decoder burden on victims since they just need to record the pattern of harm, and there is the hope that only the guilty parties will be convicted.

A technical analysis of why these codes work is given in [1], but the basic idea can be understood by considering a randomized code wherein each user’s temporal profile is chosen randomly by tossing an independent biased coin. If it comes up heads, the slot is taboo. This is like a randomized trial in clinical medicine: the taboo periods act as “experimental controls” for the efficient testing of the hypothesis that this user is causing interference.
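As a concrete illustration of this randomized-control view, the following minimal Python sketch draws a taboo pattern with an independent biased coin per slot and convicts a user when the harm rate observed outside its taboo slots significantly exceeds the rate inside them. The parameter values and the simple two-sample threshold test are illustrative assumptions, not the analysis of [1].

    import numpy as np

    rng = np.random.default_rng(0)

    def make_taboo_code(n_slots, gamma):
        # A slot is taboo (True) with probability gamma, independently.
        return rng.random(n_slots) < gamma

    def observe_harm(taboo, guilty, theta0, delta):
        # Victim sees background harm theta0 everywhere, plus delta extra harm
        # in the slots where a guilty user is allowed to (and does) transmit.
        p = np.where(~taboo & guilty, theta0 + delta, theta0)
        return rng.random(len(taboo)) < p

    def convict(taboo, harm, z_threshold=2.6):
        # Compare the harm rate outside vs. inside this user's taboo slots.
        p_on, p_off = harm[~taboo].mean(), harm[taboo].mean()
        pooled = harm.mean()
        se = np.sqrt(pooled * (1 - pooled) * (1 / (~taboo).sum() + 1 / taboo.sum()))
        return (p_on - p_off) / se > z_threshold

    n_slots, gamma, theta0, delta = 20000, 0.1, 0.05, 0.05
    taboo = make_taboo_code(n_slots, gamma)
    print("guilty convicted:  ", convict(taboo, observe_harm(taboo, True, theta0, delta)))
    print("innocent convicted:", convict(taboo, observe_harm(taboo, False, theta0, delta)))

Running the sketch convicts the guilty user while (with high probability) acquitting the innocent one, which is exactly the control-group effect described above.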

As a hypothesis testing problem for each user, the usual tradeoffs apply and are depicted in Fig. 2. It turns out that there are three critical parameters:

Tc: The time-till-conviction is the number of slots that must be observed before a decision can be made to the requisite probability of false alarm and missed detection.

∆: The additional probability of bad performance induced by the presence of the harmful interferer. This represents the degree to which the guilty user is harming the victim. It plays a role analogous to the channel gain in wireless communication. The first panel in Fig. 2 shows that more disruptive interferers are easier to convict.

γ: The fraction of slots that are taboo for a user. This represents the time-frequency overhead of this approach to identity since even when users are honest, they are unable to use these opportunities. This parameter plays a role analogous to the signal strength in traditional wireless communication. The second panel in Fig. 2 shows that higher overhead makes it easier to convict a guilty party.

The key tradeoff equation obtained using the central limit theorem is [1]

    Tc ≈ [ √(θ(1 − θ)/γ) zf + √(θ1(1 − θ1) + θ0(1 − θ0)(1 − γ)/γ) zm ]² / ((1 − γ)∆²)        (1)

where θ0 is the true background level of harm without the added interference, θ1 = θ0 + ∆ is the net level of harm with the interference, and θ = (1 − γ)θ1 + γθ0 is the overall level of observed harm. zf = Φ⁻¹(1 − Pfa) (similarly for zm using instead the target probability of missed detection) with Φ⁻¹(·) denoting the inverse CDF of a standard normal Gaussian distribution.
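As a sanity check on equation (1), the short sketch below evaluates Tc at the Fig. 2 operating point of 90% detection and 0.5% false alarm for a few illustrative values of γ and ∆. It assumes the reconstruction of the formula shown above; the exact form should be checked against [1].

    import numpy as np
    from scipy.stats import norm

    def time_to_conviction(gamma, theta0, delta, p_fa=0.005, p_md=0.10):
        # Approximate number of slots needed to convict, per equation (1).
        theta1 = theta0 + delta
        theta = (1 - gamma) * theta1 + gamma * theta0
        z_f, z_m = norm.ppf(1 - p_fa), norm.ppf(1 - p_md)
        num = (np.sqrt(theta * (1 - theta) / gamma) * z_f
               + np.sqrt(theta1 * (1 - theta1)
                         + theta0 * (1 - theta0) * (1 - gamma) / gamma) * z_m) ** 2
        return num / ((1 - gamma) * delta ** 2)

    for gamma in (0.05, 0.10, 0.25):
        for delta in (0.02, 0.05, 0.10):
            Tc = time_to_conviction(gamma, theta0=0.05, delta=delta)
            print(f"gamma={gamma:.2f}  delta={delta:.2f}  Tc ~ {Tc:.0f} slots")

For these illustrative numbers the required observation window ranges from hundreds of slots to tens of thousands of slots, shrinking as either the overhead γ or the inflicted harm ∆ grows, consistent with the first two panels of Fig. 2.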

Notice that this formulation is agnostic to what the underlying mechanism of the harm is — it could be raising the noise floor, intermodulation, receiver desensitization, or even disrupting the MAC layer. It does not matter what it is, as long as the victim can note when it is experiencing low performance. However, the first two panels in Fig. 2 reveal that the nature of the victim matters. If a victim is like Hans Christian Andersen’s folk-tale princess and has a low tolerance for background loss θ0, then it is easier to catch those introducing small amounts (a figurative “pea”) of additional disruption. The worst cases for conviction are victims whose acceptable background levels of loss are much higher.

This particular approach to identity does not demand strict synchronization. Suppose that the criminal and the victim clocks were offset by up to 10 timeslots in either direction. Then, each user effectively has 20 identity codes — all shifts of each other. If any one of them is convicted, the user should be punished. So the effect of imperfect synchronization is just a proportional change in the required probability of false alarm.
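A back-of-the-envelope illustration (a union bound over the shifted codes, an assumption for this sketch rather than a computation from [1]) shows how modest the adjustment is: running each individual test at 1/20th of the target false-alarm probability only raises the required zf from about 2.58 to about 3.48.

    from scipy.stats import norm

    target_p_fa, shifts = 0.005, 20
    z_aligned = norm.ppf(1 - target_p_fa)           # single synchronized test
    z_shifted = norm.ppf(1 - target_p_fa / shifts)  # union bound over 20 shifts
    print(f"z_f, synchronized:      {z_aligned:.2f}")   # ~2.58
    print(f"z_f, +/-10 slot offset: {z_shifted:.2f}")   # ~3.48

By equation (1), this enters Tc only through the zf term, so the cost of loose synchronization is a moderate increase in the time-till-conviction rather than any change to the identity codes themselves.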

The more subtle issue is how to deal with multiple users. It might be that more than one user is guilty. It is important to deal with this to avoid the cognitive-radio equivalent of looting where the presence of one criminal induces others to join in. As long as each criminal is imposing sufficient additional harm on its own, such additional criminals will be detected under their own hypothesis tests. The harm caused by the other guilty users will just effectively raise the level of background losses and hence make it take longer to catch the weaker criminals.

We can also search for pairs or triples of users together to catch small conspiracies. However, the third panel of Fig. 2 shows that this comes at a cost since the effective overhead for the group is greatly reduced — the group can transmit if anyone within the group can transmit. This in turn increases the time required to catch the conspirators. To be able to catch even small conspiracies in a timely manner requires an identity overhead that is substantial — more than 25%. However, this same plot says that from a societal perspective, this full overhead is only experienced when there is only a single legitimate user of the channel. If a channel can legitimately be oversold to many different users, then the effective societal overhead is much less.
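The collapse in group overhead can be quantified under the same independent random-code model used above (an illustrative assumption): a slot is taboo for the whole group only if it is taboo for every member, so the effective overhead of a group of size k is γ^k.

    for gamma in (0.05, 0.25, 0.35):
        for group_size in (1, 2, 3):
            print(f"gamma={gamma:.2f}, group of {group_size}: "
                  f"effective overhead {gamma ** group_size:.4f}")

For example, two conspirators whose individual overhead is 25% jointly look like a single user with only 6.25% overhead, which by equation (1) sharply increases the time required to convict them.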

A final cost of additional overhead is shown in the fourth panel of Fig. 2. Increased overhead makes it harder for groups of radios to find times to coordinate transmissions for adaptive beamforming or other such purposes. They have to find times that are not taboo for any of them and this may reduce their utility. The critical open question here is how many radios will need to simultaneously transmit in the realistic systems of the future.

The fourth panel of Fig. 2 can also be interpreted in a different context — a single criminal might decide to maliciously “frame” other innocent users by voluntarily deciding not to transmit in the taboo slots that correspond to another identity. However, it does so at a cost to itself (its own utility is reduced) and a benefit to the potential victims who suffer less harm. Even with only 5% overhead in the codes, trying to frame a hundred different identities reduces the harm to less than 0.6% of slots. How to avoid being framed is discussed at the end of the next section.
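A quick calculation under the same independent random-code assumption shows why framing is self-defeating: to implicate k other identities, the framer must stay silent in every slot that is taboo for any of them, leaving only a (1 − γ)^k fraction of slots in which it can cause harm.

    gamma = 0.05
    for framed in (1, 10, 100):
        usable = (1 - gamma) ** framed
        print(f"framing {framed:3d} identities leaves {usable:.4f} of slots usable")
    # framing 100 identities leaves ~0.0059 of the slots, i.e. under 0.6%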

Fig. 3. A jail-based deterrence system for punishing cognitive radios. The plots show the constraints on expansion and the size of purchased home bands when the utilities of the players are considered.

Fig. 4. When all parameters are optimized, we can see the maximal expansion possible and the overhead necessary to achieve it. Notice that while Pcatch affects the expansion and overhead, the more critical parameter is in fact Pwrong, the probability of being wrongfully punished.

III. DETERRENCE AND THE NEED FOR SOMETHING TO LOSE

To enable light-handed regulation, the key idea is that in addition to mandating the identity code, each radio is certified to obey a go-to-jail command from the spectrum regulator. (A monetary fine could serve a similar purpose but it is much harder to certify that a device will pay a fine than it is to certify that it goes to jail.) The reason is that even in a real-time spectrum market, the actual waveform design within a band might be done in software (consider an OFDM-based system) and it is therefore hard to certify that a radio will only use those channels that it has paid for. Figs. 3 and 4 explore the jail-based deterrence system to understand the important parameters for encouraging cognitive users to follow spectrum sharing rules. For more information, see [2]. Although the treatment there is in the context of opportunistic spectrum use by cognitive radios, we see here that most of the same arguments apply to cognitive radios participating in a dynamic spectrum market.

The first panel of Fig. 3 shows the setup. There are B channels that are available for sale. Some of these may be occupied for a time by priority users willing to pay more than you. The cognitive radio is supposed to pay before using a channel, but it is technically capable of transmitting at will when it is not in jail. If the cognitive user is caught introducing interference, we model it as receiving a go-to-jail command with probability Pcatch. At that point, it is sent to jail where it is not allowed to use any of the channels, including any channels that it might have actually paid for or any unlicensed bands. The length of the jail sentence is determined by Ppen. Since all systems of wireless identity will have some level of false accusations, a radio can also be wrongfully sent to jail with probability Pwrong.

The market operator is concerned about the case illustrated in the second panel of Fig. 3 for B = 1. If this priority user is very active and the jail sentence is not harsh enough, a rational cognitive user that wants to maximize its access to channels will cheat because jail is not a big enough threat. The problem is that radio certification (including checking Ppen compliance) occurs in advance while the attractiveness of cheating varies based on local market and propagation conditions. So, the regulator must make jail painful enough to deter cheating even in the most attractive case — when there are simply no more channels available for sale or non-interfering use.

The somewhat surprising consequence is that for deterrence to be effective, a cognitive radio always has to have something to lose. One way is to have exclusive access to at least one paid priority channel. The prospect of temporarily losing access to the unlicensed bands might also provide such a deterrent, but we will not explore this here. Let β be the number of channels on which the cognitive radio already has highest priority. The important quantity is the “expansion factor” (B − β)/β representing the ratio of the number of additional channels that could potentially be illegally/legally accessed to the number of channels on which the radio already has priority access. As the expansion factor increases, the jail sentences must lengthen in order to balance the added temptation to cheat. This is illustrated in the top half of the third panel of Fig. 3.

It is at this point that the prospect of wrongful conviction must be considered. As the jail sentences lengthen to deter cheating, the honest radios also suffer from being occasionally sent to jail for longer and longer periods of time. The green curve in the bottom half of the third panel of Fig. 3 depicts this. The fraction of time spent in jail by an innocent radio can be viewed as the overhead imposed by the enforcement scheme since usable spectrum is being wasted.

Meanwhile, the additional benefits to a cognitive radio from participating in a real-time market depend on the fraction of channels that are available for use/purchase (not being used by a higher-priority user). The blue curve shows that wrongful convictions really take their toll on the extra utility — with 15 extra channels of which 6.75 on average ((1 − 0.55) × 15) are available, the actual net extra benefit is less than 3 because of utility lost to being in jail!
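The mechanism behind this loss can be illustrated with a toy Monte Carlo in the spirit of [2]. All modeling choices below are illustrative assumptions rather than the exact model behind the figures: jail sentences are geometric with a release probability p_release standing in for the effect of Ppen, each extra channel is independently occupied by a higher-priority user with probability PTX, and utility is simply the number of channels used per slot.

    import numpy as np

    rng = np.random.default_rng(1)

    def long_run_utility(cheat, beta=1, extra=15, p_tx=0.55, p_catch=0.6,
                         p_wrong=0.01, p_release=0.02, n_slots=100_000):
        # Average channels used per slot (zero while in jail).
        in_jail, used = False, 0.0
        for _ in range(n_slots):
            if in_jail:
                in_jail = rng.random() >= p_release   # released w.p. p_release
                continue
            free = rng.random(extra) >= p_tx          # extra channels with no priority user
            if cheat:
                used += beta + extra                  # grab every channel
                caught = (~free).any() and rng.random() < p_catch
            else:
                used += beta + free.sum()             # buy only the free channels
                caught = False
            if caught or rng.random() < p_wrong:      # true or wrongful conviction
                in_jail = True
        return used / n_slots

    for cheat in (False, True):
        u = long_run_utility(cheat)
        print(f"cheat={cheat}: avg channels/slot {u:.2f} "
              f"(net extra benefit over the single owned channel: {u - 1:.2f})")

With these made-up numbers, cheating is deterred (the cheater spends nearly all of its time in jail), but the honest radio also loses a substantial fraction of its potential benefit to wrongful convictions, which is exactly why Pwrong emerges as the critical parameter below.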

The full benefits of dynamic spectrum access are only obtained when the expansion factor is high — because that is when statistical multiplexing best allows radios to exploit local spectrum. After all, an expansion factor of 0 just means that all the channels you can access are already preassigned to you – static spectrum access. The first panel in Fig. 4 shows the maximal expansion factor that a cognitive radio will tolerate (when the blue curve in the last panel of Fig. 3 starts to dip) as a function of Pwrong and Pcatch. The cutout shows the interesting dependence on PTX — participating in a spectrum market becomes less and less appealing the more you believe that other users will have higher priority (willingness to pay) than you. You risk jail for less benefit.

A more insightful picture is obtained in the last two panels of Fig. 4. These concentrate on the overhead – the time innocent users spend in jail. Notice that Pwrong is the critical parameter. Although the last panel shows that Pcatch has an effect on the expansion and overhead, the requirements on Pwrong are much stricter. In order to get good expansion with low overhead, Pwrong must be very small.

What does all this tell us about the identity system? Perhaps the most important consequence is that the identity code system must support many distinct identities — probably at least in the tens of thousands. Suppose that there were a thousand identities that were shared among individual radios. This way, if one radio commits a violation, any other individual radio only has a 1/1000 chance of sharing an identity-code (and hence jail sentence) with that guilty radio. It takes ten thousand distinct identities in the system to bring the probability of wrongful conviction down to 1% if we also fear that a miscreant is capable of framing a hundred others.
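A rough count (assuming identities are assigned to radios uniformly at random and that a single incident can implicate the violator’s own code plus up to a hundred framed codes) makes the required scale concrete:

    def p_wrongly_punished(n_identities, n_framed=100):
        # Chance that an uninvolved radio's code is among the punished codes.
        return (1 + n_framed) / n_identities

    for n in (1_000, 10_000, 100_000):
        print(f"{n:7d} identities -> wrongful-punishment chance ~ {p_wrongly_punished(n):.3%}")

With a thousand identities the chance is roughly 10%, and it takes on the order of ten thousand identities to push it down to the 1% regime discussed above.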

Too few identity codewords would result in too high a chance of wrongful conviction. This pretty much rules out relying on accidental identity as a viable way forward. In addition, a radio should have its own identity-code assignment randomly change as time goes on so that it is not always subject to collective punishment with the same miscreant. Notice that this changing identity does not have to be registered with the regulator and can remain private and thus preserve user anonymity. All that is required for the incentives to work is that the radio knows its own identity and is certified to obey go-to-jail messages directed to it.

IV. CONCLUSIONS

“Light-handed regulation” is desirable to allow cognitive radios to be deployed in a way that encourages innovation. Doing this requires imposing only minimalist certification requirements that do not restrict technological innovation while also not imposing a large regulatory overhead. The requirements need to be such that users have some faith that others will follow the rules.

Surprisingly, in this perspective, whether cognitive radios are permitted to use spectrum opportunistically or whether they must engage in “Coasian bargains” using markets turns out not to matter much in terms of the broad outline of what is required. Either way, there has to be a system of identity for radios so that violators can be caught. Identity systems must be able to reliably assign blame for harmful interference even when the victim is much older than the culprit. The spirit of light-handed regulation suggests that the identity of the culprit should be reliably inferable from the pattern of interference itself, and this leads naturally to the fingerprint metaphor for identity.

The challenge in fingerprint design is to manage the overhead of the fingerprints. One overhead is easy to understand: the degrees of freedom and spectral resources left unavailable for productive use because they are dedicated instead to the fingerprints. The other is more nebulous: the extent to which the fingerprint discriminates in favor of or against certain approaches to spectrum use as well as how hard it is to certify the fingerprints. The fingerprints discussed here are largely agnostic towards how spectrum is used and are easy to certify. An initial analysis was done using random codes and it reveals that the “overhead” of the code (the proportion of taboo timeslots) plays a role analogous to the transmit signal power in traditional wireless communication. It has to be high enough or it will not be possible to meet target enforcement quality-of-service parameters.

Going forward, it will be important to see how well new or existing error-correcting code families can be adapted to this task. List decoding, rateless fountain codes, and jamming-oriented adversarial models are all likely to be important here. More subtly, this problem is closer to Ahlswede’s appropriately named information-theoretic problem of identification [14] than it is to Shannon’s classical sense of communication.

Identity on its own is insufficient — the incentives must be there to encourage good behavior for cognitive radios. This paper has explored a model of deterrence in which radios that have been convicted of misbehaving are sentenced to finite jail terms. Two surprising things have been found. First, in order for a lightly certified cognitive radio to be trustworthy, it must have something to lose. This suggests that cognitive radios are subject to their own “Matthew Effect” [15] — it will be easiest for systems that already have licensed spectrum to deploy cognitive radios. Second, overall system performance is much more dependent on the probability of wrongful conviction than on the probability of successfully punishing an actual wrongdoer. Going forward, incentives must be understood in contexts beyond the simple delay-insensitive, infinitely bandwidth-hungry cognitive users implicitly considered here. For example, power-sensitive and delay-sensitive cognitive users might respond differently.

This article has sketched out a new paradigm for light-handed spectrum regulation, but a great deal of technical work remains to be done before the viability of this approach can be established. Intuitively, the two overheads (identity and wrongful convictions) must be balanced appropriately to find the sweet spot of maximal regulatory efficiency. However, it might be that qualitatively different applications having very different wireless requirements will require different balances — suggesting that some form of centralized spectrum zoning will remain with us. The advantage of this new paradigm is that such questions might be answerable by theorems rather than mere rhetoric.

REFERENCES

[1] G. Atia, A. Sahai, and V. Saligrama, “Spectrum enforcement and liability assignment in cognitive radio systems,” in Proceedings of the Third IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, (Chicago, IL), Oct. 2008.

[2] K. A. Woyach, A. Sahai, G. Atia, and V. Saligrama, “Crime and punishment for cognitive radios,” in Proceedings of the forty-sixth Allerton Conference on Communication, Control, and Computing, (Monticello, IL), Sept. 2008.

[3] A. Sahai, S. M. Mishra, R. Tandra, and K. A. Woyach, “DSP Applications: Cognitive radios for spectrum sharing,” IEEE Signal Processing Magazine, Jan. 2009.

[4] J. Mitola, Cognitive Radio: an integrated agent architecture for software defined radio. Ph.D. thesis, KTH Royal Inst. of Tech., Stockholm, Sweden, 2000.

[5] G. R. Faulhaber, “The future of wireless telecommunications: spectrum as a critical resource,” Information Economics and Policy, vol. 18, pp. 256–271, Sept. 2006.

[6] D. Hatfield and P. Weiser, “Toward property rights in spectrum: The difficult policy choices ahead,” CATO Institute, Aug. 2006.

[7] N. Isaacs, “Barrier activities and the courts: A study in anti-competitive law,” Law and Contemporary Problems, vol. 8, no. 2, pp. 382–390, 1941.

[8] J. Andrews, S. Shakkottai, R. Heath, N. Jindal, M. Haenggi, R. Berry, D. Guo, M. Neely, S. Weber, S. A. Jafar, and A. Yener, “Rethinking information theory for mobile ad hoc networks,” IEEE Communications Magazine, Dec. 2008.

[9] R. Etkin, A. Parekh, and D. Tse, “Spectrum sharing for unlicensed bands,” in First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, (Baltimore, MD), Nov. 2005.

[10] A. Volokh, “n guilty men,” University of Pennsylvania Law Review, vol. 146, no. 1, pp. 173–216, 1997.

[11] V. Brik, S. Banerjee, M. Gruteser, and S. Oh, “PARADIS: Physical 802.11 device identification with radiometric signatures,” in Proceedings of ACM Mobicom, (Burlingame, CA), Sept. 2008.

[12] R. Tandra and A. Sahai, “Overcoming SNR walls through macroscale features,” in Proceedings of the forty-sixth Allerton Conference on Communication, Control, and Computing, (Monticello, IL), Sept. 2008.

[13] D. K. Rossmo, Geographic profiling: target patterns of serial murderers. Ph.D. thesis, Simon Fraser University, 1995.

[14] R. Ahlswede and G. Dueck, “Identification via channels,” IEEE Trans. Inf. Theory, vol. 35, no. 1, pp. 15–29, 1989.

[15] R. K. Merton, “The Matthew Effect in Science,” Science, vol. 159, pp. 56–63, Jan. 1968.