The Place of Care in an Experimental Society


Abstract

The implicit social contract between technoscience and society that produces technological societies is threatened with being undermined by tensions that arise from this social contract itself. Scientific knowledge feeds innovation, but is also used reflexively in attempting to anticipate the consequences of employing innovations ‘in the wild’ (Beck, 1992). The tension arises from within this reflexive application of science, as a result of novelty: the future created as a result of deploying technologies may be very different to the realities studied within the ‘secluded laboratory’ (Callon, Lascoumes, & Barthe, 2009). Technological, market societies in which innovations are rapidly disseminated are thus inevitably experimental societies. The insecurities that haunt such societies may create novel forms of political antagonism, ones which, it has been suggested, may even be reducible to a central antagonism between institutional and social cultures of ‘proaction’ and ones of ‘precaution’ (Fuller, 2012). A response to such antagonisms might begin by deconstructing the assumptions on which they are based. Such a strategy forms part of recent work on a future-oriented ethics and politics of uncertainty based on a concept of care that weaves together the phenomenological and ethical usages of this concept (Groves, 2009, 2011). The centrality of care to an experimental society has been recently affirmed by discussions of the necessity of ‘care for the future’ as the foundation of ‘responsible innovation’ (Owen, Macnaghten, & Stilgoe, 2012). However, what exactly concepts of care may contribute to the idea of responsible innovation within an experimental society remains far from clear. In this paper, I explore its usefulness in this context, firstly by outlining how a ‘thick’ concept such as care can be of normative and not just descriptive significance, and secondly, by distinguishing care as an ethical concept from both ‘proaction’ and ‘precaution’.



Christopher Groves

School of Social Sciences, Cardiff University, UK
[email protected]
https://cardiff.academia.edu/ChristopherGroves

Introduction

Technological societies depend upon an implicit social contract between technoscience and wider society. However, there is a growing sense that the reciprocal expectations on which this contract is based are threatened by the very activities that are undertaken to meet them. For many, the legitimacy of technoscience and the authority of the forms of knowledge on which it relies are threatened by unwanted rebound effects associated with its products (Tenner, 1997), and the inability of scientific knowledge to adequately understand the world technoscience has created. For Ulrich Beck, this represents an epoch-defining form of consciousness that defines ‘risk society’ (Beck, 1992). Whether such a view is accurate or not, it is true that technoscientific innovation or novelty is a double-edged sword: the future new technologies help create may be very different to the realities studied within the ‘secluded laboratory’ (Callon, Lascoumes, & Barthe, 2009). The continued production of novelty has market value in itself: innovations are rapidly commercialised, disseminated and then made to enter obsolescence. Technological societies are thus also inevitably experimental societies. Unforeseen interactions between rapidly disseminated technologies and natural systems are the cause of eroding trust in the social contract between society and technoscience.

Other commentators, however, note that expectations of material progress remain a key part of society’s view of the future (even if cast in reflexive terms, as getting rid of the ‘bad’ elements of modernity, like fossil fuels or iatrogenic disease), or that innovation has effects on identity, agency and the sense of what it means to be human, enlarging our sense of who we are. A major characteristic of experimental societies is thus tensions relating to who we take ourselves to be and how we should behave, and the views of science, technology and nature shaped by such values. These tensions are, for some commentators, reflected in what they take to be central political antagonisms around how to live with uncertainty that characterise experimental societies. For example, one recent perspective identifies a central antagonism between institutional and social cultures of ‘proaction’ and ones of ‘precaution’ that, it is claimed, will during the 21st century replace industrial-era divisions between political ‘left’ and ‘right’ (Fuller, 2012).

If such tensions are real, and stem from different ways of trying to make uncertainty about the future meaningful, then they are shaped by a concern which has the honour of being one of the few universal human material and symbolic needs (Jackson, 1989). Beck and others insist that the character of uncertainty changes with the advent of technological societies – as uncertainty becomes reflexive or ‘iatrogenic’. The implicit social contract between the social institutions of technoscience and wider society finds it hard to make sense of this transformation. This is because technoscience is simultaneously a catalyst of global transformations, and part of the governance architecture for the transformations it produces. To understand the relationship between these two roles, we can turn to some much-quoted remarks made in 2002, possibly by Karl Rove, then senior advisor to President G. W. Bush. Along with Donald Rumsfeld’s often-quoted taxonomy of knowns and unknowns, they confirm that the Bush presidency was one of our foremost contemporary sources of insights into the existential significance of uncertainty and risk.

The aide said that guys like me were ''in what we call the reality-based community,'' which he defined as people who ''believe that solutions emerge from your judicious study of discernible reality.'' I nodded and murmured something about enlightenment principles and empiricism. He cut me off. ''That's not the way the world really works anymore,'' he continued. ''We're an empire now, and when we act, we create our own reality. And while you're studying that reality -- judiciously, as you will -- we'll act again, creating other new realities, which you can study too, and that's how things will sort out. We're history's actors . . . and you, all of you, will be left to just study what we do.'' (Suskind, 2004)

The capacity of technoscience to change the world is enormous. Its capacity to foresee the results of these changes is often very limited. This mismatch was a key theme of the work of Hans Jonas (1984, originally published in German in 1979) and has recently been re-examined as part of attempts to outline an integrated sociology and ethics of the future (Adam & Groves, 2007). Technoscience enactors are, to take a phrase from Shelley, the ‘unacknowledged legislators’ of the world, pre-empting the future by helping to create particular developmental paths around which processes of social change coalesce and turn, giving rise to a ‘timescape’ (Adam, 1998) of slower and faster tempos of interacting transformations.

Scientific knowledge, in the service of governance, is part of the ‘reality-based community’ in trying to predict what might happen if things go wrong, through the activity of risk assessment. Risk assessment is a formalisation and extension of techniques first developed to understand the failure modes of closed engineered systems (Wynne, 1992). But innovation is, as noted above, the production of novelty. With such novelties in it, the world is perhaps not what it was. In the wild, as it were, the artefact enters into anticipated and unanticipated interactions with natural entities and other artefacts, producing side-effects and ‘interference effects’ (Hacking, 1986). Transformative technoscience therefore produces knowledge, but also produces ignorance (Schummer, 2001). The aforementioned tension between recommendations for living with uncertainty arises from the ethical commitments with which these recommendations attempt to deal with the existential fact that our foresight about the results of our transformative efforts is hobbled by constraints native to the world we are trying to transform. These recommendations include distinct suggestions for renegotiating the social contract between technoscience and society.

Each recommendation – proaction or precaution – aims to respond to uncertainty by developing and defending criteria for making decisions that may fall short of being decision rules (such as ‘always maximize benefits’) but express definite normative commitments. Each sets out to reconsider how we collectively decide whether a given risk is worth running, whether particular nagging uncertainties are worth living with, or whether our inevitable ignorance of stubbornly indeterminate outcomes should be set aside in any given case. Precautionary approaches identify criteria that define classes of harm that are socially unacceptable and which, in the face of significant uncertainty, should require extensive risk minimisation (up to and including, say, the possibility of a moratorium on the further development of a technology). Precaution emphasizes concern about the dangers implicit in how innovation speeds up the pre-emption of possible futures, and aims to put on the brakes, thus creating opportunities for reflection and for identifying avenues for additional research by the risk governance arm of technoscience before releases into the wild. Proactionary arguments, by contrast, suggest that attempts to eliminate the possibility of particular unacceptable risks threaten to ‘slippery-slope’ us into generalised attitudes of risk-aversion, and are also unable to adjudicate between particular risks or uncertainties (such as the risks associated with developing a transgenic crop versus the risk of increased hunger that may be occasioned by not developing it). Proaction recommends urgency, even the acceleration, of innovation if the benefits sought are of sufficient importance, in aggregate, to humanity (More, 2005).
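The divergence between these two decision criteria can be made vivid with a deliberately simplified sketch. The following is a caricature, not a real governance tool: the outcomes, probabilities, thresholds and function names are all hypothetical, and the two functions stand in, respectively, for a rule-utilitarian risk-cost-benefit calculation and a threshold-based precautionary rule of the kind described above.

```python
# Hypothetical illustration: how proactionary and precautionary decision
# criteria can diverge when applied to the same body of evidence.
# All numbers below are invented for the example.

# Each possible outcome of deploying a technology: (probability, net benefit).
# A negative benefit represents harm; probabilities are best-guess estimates.
outcomes = [
    (0.70,  100.0),   # likely: substantial social benefit
    (0.25,  -20.0),   # possible: moderate harm
    (0.05, -500.0),   # unlikely but severe: e.g. irreversible ecosystem damage
]

def proactionary(outcomes):
    """Rule-utilitarian RCBA: act iff expected net benefit is positive."""
    expected = sum(p * v for p, v in outcomes)
    return expected > 0

def precautionary(outcomes, harm_threshold=-100.0, p_floor=0.01):
    """Veto action if any non-negligible outcome crosses the
    unacceptable-harm threshold, regardless of expected value."""
    return not any(p >= p_floor and v <= harm_threshold for p, v in outcomes)

print(proactionary(outcomes))    # True: expected value = 70 - 5 - 25 = 40 > 0
print(precautionary(outcomes))   # False: the 5% / -500 outcome triggers a veto
```

On the same evidence the proactionary rule licenses action while the precautionary rule forbids it, which is the point made in the text: the disagreement lies not in the data but in the normative commitments built into the criteria themselves.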

Many dominant articulations of either position tend to subscribe to traditional liberal views of regulation – namely, that regulation is a way of aiming to prevent harm to individuals, but not a way of trying to promote particular versions of the good life that place restrictions on individual preferences (Jensen, 2002). Both, in John Rawls’ terms, represent attempts to specify mid-level principles for settling questions of justice (here, concerning exposure to risk) on which all reasonable people should in principle agree, without entailing belief in any ‘comprehensive doctrine’ that prescribes particular visions of the good, rooted in metaphysical beliefs (Rawls, 1972). These mid-level principles would concern how best to prevent risks to individuals ex ante. The liberal harm principle implies that, in determining whether to allow a risk to be taken, one must weigh up how strong the right to take a risk is in comparison with the need to protect others from the harm it may impose on them.

Nonetheless, both positions contain elements of normative ethics that go beyond liberal determinations of harm and gesture in the direction of a ‘good life’ ethics. Proaction is a rule-utilitarian approach that recommends risk assessment based on the current corpus of science as the best way to identify potential hazards, and to weigh up whether they are worth taking through risk-cost-benefit analysis (RCBA). Innovation, subjected to regulatory constraints established using non-precautionary approaches, is socially rational insofar as it provides the best chance of maximising the social benefits of technology. To concern ourselves with possibilities of harm beyond quantifiable risks carries its own reflexive risk: namely, reducing the possibility of finding suitable technological solutions to problems that require them. However, it also identifies innovation as expressing a disposition – risk-taking – that is intrinsically valuable to a worthwhile human life.

When technological progress is halted, people lose an essential freedom and the accompanying opportunities to learn through diverse experiments. We already suffer from an undeveloped capacity for rational decision making. Prohibiting technological change will only stunt that capacity further. (More, 2005)

What needs to be avoided, then, is not only the risk of harm to individuals but also the erosion of certain fundamental values. The best way to avoid both problems is to commit institutions to properly informed RCBA. And if there is no evidence of actual harm (but only suspicion that an action may entail it), then whether or not to act should be entirely up to the individual (of course, perhaps with one eye on relevant laws regarding liability). The law should not interfere. Precaution, on the other hand, views harm to individuals as important, but where action entails particular kinds of uncertainties and ignorance, then potential impacts on wider conditions of individual and group welfare should be given additional moral weight. This means that precaution exceeds the bounds of liberal attitudes to risk by suggesting that certain kinds of harms to classes of entity (ecosystems or future people, for example), even if highly uncertain, are more important than any individual right to take risks. This goes beyond issues of harm to individuals, either in the direction of identifying something like the natural conditions of a worthwhile human life in general (now and for future generations) or non-human entities (e.g. individuals, species, ecosystems) that have to be protected because of their intrinsic value. What exactly should be included in these considerations, and where thresholds of risk should be set, are underdetermined, as they are essentially political decisions that have to be based on what matters to people (Jensen, 2002).

We can trace, then, behind these distinct positions equally distinct fundamental values – ones connected to different ‘comprehensive doctrines’ about nature and human subjectivity. Cultural theories of risk, as outlined by, for example, Mary Douglas and Aaron Wildavsky (1982), suggest that founding and unquestioned beliefs (‘myths’) about human agency and nature, and about the uncertainties associated with them, determine attitudes to how to live with uncertainty. Both approaches rely on particular ‘myths’ documented by other researchers in the cultural theory of risk that help to define how to act rationally in the face of uncertainty (Schwarz & Thompson, 1990). Proaction assumes that natural processes may be beneficial or harmful, but that natural systems are ‘tolerant’ of insults, and therefore argues that taking risks is both (a) socially rational insofar as it is instrumentally valuable for achieving particular future outcomes, and (b) also eudaimonistic behaviour insofar as it is valuable in itself. Precaution assumes that the natural order is fragile or ephemeral and prone to being destabilised through human action. As such, uncertainties about how human action may interact with particular ecosystems provide a reason for restraining action.

Each attitude provides a different basis for re-casting the social contract between technoscience and wider society – (very) broadly speaking, entrepreneurial or bureaucratic – reflecting the distinct constructions of power and agency associated with entrepreneurialism or strong, top-down governance. However, if the coherence of the recommendations emerging from each attitude is rooted mainly in its founding myth, then the basis for adopting one rather than the other can only be a choice made without further rational grounds, and a choice, moreover, that must then rigidify itself into institutional and regulatory design (Fuller, 2012) – illegitimately, without appeal to mid-level principles such as the harm principle.

Is there an alternative way of shaping innovation in the face of uncertainty that avoids this decisionistic choice between different worldviews? One possible alternative is hinted at within the emergent paradigm of responsible research and innovation (RRI), which speaks of the need to make ex ante ‘care for the future’ the normative keystone of a social contract with technoscience (Stilgoe, Owen, & Macnaghten, 2013). Here, a concern with risk is represented as a subordinate element of public concern about the uncertainties built into the experimental society, one which may exacerbate anxieties about uncertainty rather than address them, a point implicit in Mary Douglas’ observation that:

The modern concept of risk, parsed now as danger, is invoked to protect individuals against the encroachment of others. It is part of the system of thought that upholds the type of individualist culture, which sustains an expanding industrial system. (Douglas, 1994, p. 28).

The use of ‘care’ here evokes commonplace and legally-inflected usages (as in ‘duty of care’), and the mention of ‘the future’ evokes ideas of sustainability. But cashing out what ‘care’ could mean in the context of RRI can also draw on feminist and other uses of the term. Feminist ethics defines care as a specific kind of activity, with its own particular hoped-for outcomes, indispensable virtues and dispositions and modes of rationality (Gilligan, 1982; Ruddick, 1989). Political applications of care add to this conceptions of justice embedded in caring and accounts of the proper role of private and public institutions in promoting care (Engster, 2007; Tronto, 1993). How to respond to uncertainty about the future – and specifically, to human bodily, emotional and existential vulnerability – is a key concern of such ethical and political approaches. Here a link can be made with phenomenological uses of the idea of care, as for example by Heidegger (1998), Jonas (1982, 1984) and Levinas (1979), in which uncertainty and how it shapes yet also threatens subjectivity is a central theme. Both feminist and phenomenological approaches see uncertainty as having particular significance in human life. Building stable expectations in the face of uncertainty is a key element of care ethics and some phenomenological treatments of the role of trust or solidarity in ethics, as given by e.g. Martin Buber (2002) or Knud Løgstrup (1997).

Some political interpretations of care view it as an activity required by the interdependency of moral subjects. The dependence of moral subjects on each other generates an obligation to care that is prior to the need to avoid harming others, for example (Engster, 2007, pp. 36-40). The nature of this interdependence is not just material, however – it is also arguably emotional or existential. Making uncertainty liveable, insofar as it is part of all social practices, is not an end fulfilled by giving people the cognitive expectation that their general needs will continue to be met in the future. It is fulfilled through processes of emotional attachment, beginning in childhood, which are constitutive of selfhood and the development of one’s sense of oneself as an agent (Groves, 2011). Needs-satisfaction is not about continually plugging holes in the subject that exist because of the generic biological and psychological requirements of an organism, but about creating meaning, through which an individual’s sense of identity and world are shaped. As object-relations psychology suggests, the satisfaction of needs ideally creates self and world as poles of an internalized ‘secure space’ through which the subject is able to explore the world without being paralyzed by anxiety (Bretherton, 1992). As the sociologist Peter Marris has written,

The meaning of our lives cannot, therefore, be understood as a search to satisfy generalizable needs for food, shelter, sex, company and so on, as if our particular relationships were simply how we had provided for them. It is more the other way round: without attachments we lose our appetite for life. (Marris, 1996, p. 45).

4

Page 5: The Place of Care in an Experimental Society

Attachment in this sense can be to a variety of objects: people and non-humans, places, objects, institutions and ideals. But for any given individual, their styles of dealing with their attachments vary – in ways shaped by experience (e.g. instances of loss) but also by culturally and social-structurally ingrained habits. As Marris (1992, 1996) has argued, the importance of attachment can help explain how people make sense of uncertainty – and often in different ways, valuing strategies of autonomy, withdrawal or solidarity in ways that echo the division between Carol Gilligan’s ‘Kantian’ and ‘Humean’ high-school boys and girls, or the distinctions made in cultural theories of risk between fatalists, egalitarians, hierarchists and individualists. The experiences of individuals lead them towards some styles of dealing with uncertainty, of caring for attachments that predominate over others – and thereby to different ‘myths’ about nature and human agency.

But focusing on the importance of attachment does not just help explain how people deal differently with uncertainty. It also provides normative guidance. As Marris (1996) argues, strategies for dealing with attachment are not simply distinct. Solidarity, for example, is morally preferable to either autonomy or withdrawal (although it can ‘go bad’, as in racism or other forms of discrimination). If we use attachment in this way to help specify what care means, then it is evident we have travelled some distance from the vague allusion to a ‘duty of care’ in RRI’s call for ‘care for the future’. Taking into account this understanding of care, it becomes possible to view care as a class of practices, but also a kind of disposition – the disposition to look after the singular, uncertain futures of the things that matter to us (individually and collectively).

How might a perspective on what matters to people that focuses on attachment and care be useful in thinking about living with technological uncertainty? The concept of technoscience evokes, in contrast with localised attachments, large-scale economic, scientific and political processes but also ‘secluded’ research undertaken by technical experts in widely distributed research and development establishments. Yet technologies also intimately intrude into the fabric of daily life and reshape it: innovation is ‘a machine for changing the life of laypersons, but without really involving them in the conception or implementation of this change’ (Callon, Lascoumes, & Barthe, 2009, p. 70). Technological artefacts are mediators of relationships and meanings, as much as they are functional, instrumental objects (Verbeek, 2011). For example, they reshape and create new dependencies, and in the process, modify or undermine existing attachments, and may even become objects of attachment themselves (as with mobile phones or vinyl records). They can also reflect and reinforce generalised patterns of dealing with uncertainty, encouraging withdrawal, autonomy or solidarity for example. The primary question regarding how to live with the uncertainties of experimental society is therefore: which of the potential shifts a technology may occasion are desirable and which are not?

This takes us back to RRI. The demand to ‘care for the future’ may be too much motherhood and apple pie for some tastes. But its accompanying commitments to specific dimensions of technology assessment make it more than just an affirmation that good things are, indeed, good. For example, RRI requires processes in which multiple viewpoints on social priorities (and potential hazards) may be represented. It also requires responsiveness on the part of institutions (enactors and selectors) engaged with innovation to the priorities such processes may identify, and to citizens’ concerns (Stilgoe et al., 2013). RRI would thus be a move in the direction of what Callon, Lascoumes and Barthe have called ‘technical democracy’, where the primary shaping of innovation is performed neither by generalised imperatives, nor abstract definitions of need, nor technical processes of standardization, but the concrete commitments of citizens and the groups to which they belong. RRI thus takes its direction from STS-based investigations of citizens’ concerns about living in an experimental society that find these concerns typically emphasize not risk to individuals, but rather the social priorities addressed by innovation and the trustworthiness of those involved in it (Jensen, 2006). What matters to people are the concrete values they wish to see respected and promoted through social practices, where such values include particular ways of life, forms of relationship, and other foci of emotional attachment (Sayer, 2011). As Callon, Lascoumes and Barthe put it with respect to the epistemological advantages conferred in the face of uncertainty by multiple perspectives,

5

Page 6: The Place of Care in an Experimental Society

There is a reversal of priorities in comparison with simple secluded research: what matters is not the construction of a universal through standardization, and so by elimination of local specificities, but the construction of a universal through the recognition and successive reorganisation of these specificities. (Callon et al., 2009)

Translating this into an ethical idiom appropriate for the experimental society, we could say that, when determining ‘what to do’ (the universal), we should not proceed by trying to produce a generic decision procedure or criterion for decision making without both (a) interrogating the basic commitments that are in play among innovation actors and (b) introducing to the scene the commitments of wider sets of actors. Viewed from the care perspective I have outlined above (and which is set out in more detail in Groves (2009, 2011)), proaction and precaution can be thought of as rooted in opposed attachments, ones that dictate different general strategies for dealing with uncertainty (and which are relatively rigid because of their unacknowledged emotional and existential dimension). As such, each can serve the political interest of different social actors (say, entrepreneurs on the one hand and regulators on the other) in maintaining autonomy and freedom to act.

However, a care perspective is interested not only in exploring the nature and usefulness of different value-orientations as ways of living with uncertainty, but also in exploring the nature of and prospects for solidarity (as contrasted with autonomy) as a normatively valorised orientation towards uncertainty. It therefore demands that we cope with uncertainty by exploring the field of relevant commitments and the potential meanings of technologies for them. Care understood as the disposition to look after and look out for the vulnerabilities of what matters to humans, as beings universally concerned with uncertainty, is in itself neither proactionary nor precautionary. While its sensitivity to vulnerability naturally leads a care perspective to a normative preference for the kinds of reciprocal commitments that come with solidarity, it recognises too the need to sometimes take risks in the service of providing for the kinds of values citizens prioritise. Such a view of risk has been part of feminist care ethics, for example, since its early development (Gilligan, 1982). At the same time, care demands that what is at stake, and the hopes and fears of citizens, be understood as fully as possible as part of responsible research and innovation, and that the kinds of values that innovation puts at stake be fully acknowledged. Caring is itself an experiment conducted under conditions of uncertainty, enlivened with hope but also – ideally – with as complete as possible an acknowledgement of the vulnerabilities of whoever is cared for and whoever is caring.

References

Adam, Barbara. (1998). Timescapes of Modernity: The Environment and Invisible Hazards. London: Routledge.
Adam, Barbara, & Groves, Chris. (2007). Future Matters: Action, Knowledge, Ethics. Leiden: Brill.
Bretherton, I. (1992). The origins of attachment theory: John Bowlby and Mary Ainsworth. Developmental Psychology, 28(5), 759-775.
Buber, M. (2002). Between Man and Man (R. G. Smith, Trans.). London: Routledge.
Callon, M., Lascoumes, Pierre, & Barthe, Yannick. (2009). Acting in an Uncertain World. Cambridge, MA: MIT Press.
Douglas, M. (1994). Risk and Blame: Essays in Cultural Theory. London: Routledge.
Douglas, Mary, & Wildavsky, Aaron. (1982). Risk and Culture: An Essay on the Selection of Technological and Environmental Dangers. Berkeley: University of California Press.
Engster, Daniel. (2007). The Heart of Justice: Care Ethics and Political Theory. Oxford: Oxford University Press.
Fuller, Steve. (2012). Precautionary and Proactionary as the New Right and the New Left of the Twenty-First Century Ideological Spectrum. International Journal of Politics, Culture and Society, 25(4), 157-174.
Gilligan, C. (1982). In a Different Voice. Cambridge, MA: Harvard University Press.
Groves, C. (2009). Future ethics: risk, care and non-reciprocal responsibility. Journal of Global Ethics, 5(1), 17-31. Online at https://cardiff.academia.edu/ChristopherGroves
Groves, C. (2011). The Political Imaginary of Care: Generic versus Singular Futures. Journal of International Political Theory, 7(2), 165-189. Online at https://cardiff.academia.edu/ChristopherGroves
Hacking, Ian. (1986). Culpable Ignorance of Interference Effects. In D. MacLean (Ed.), Values at Risk (pp. 136-154). Totowa, NJ: Rowman and Allanheld.
Heidegger, Martin. (1998). Being and Time. Oxford: Blackwell.
Jackson, M. (1989). Paths Toward a Clearing: Radical Empiricism and Ethnographic Inquiry. Bloomington, IN: Indiana University Press.
Jensen, Karsten Klint. (2002). The Moral Foundation of the Precautionary Principle. Journal of Agricultural and Environmental Ethics, 15(1), 39-55.
Jensen, Karsten Klint. (2006). Conflict over Risks in Food Production: A Challenge for Democracy. Journal of Agricultural and Environmental Ethics, 19(3), 269-283.
Jonas, Hans. (1982). The Phenomenon of Life. Chicago and London: University of Chicago Press.
Jonas, Hans. (1984). The Imperative of Responsibility: In Search of an Ethics for the Technological Age. Chicago and London: University of Chicago Press.
Levinas, E. (1979). Totality and Infinity: An Essay on Exteriority. Dordrecht: Nijhoff.
Løgstrup, K. E. (1997). The Ethical Demand. Notre Dame, IN: University of Notre Dame Press.
Marris, Peter. (1996). The Politics of Uncertainty: Attachment in Private and Public Life. London; New York: Routledge.
More, Max. (2005). The Proactionary Principle. Retrieved 7 June, 2012, from http://www.extropy.org/proactionaryprinciple.htm
Owen, Richard, Macnaghten, Phil, & Stilgoe, Jack. (2012). Responsible research and innovation: From science in society to science for society, with society. Science and Public Policy, 39(6), 751-760.
Rawls, John. (1972). A Theory of Justice. Oxford: Clarendon Press.
Ruddick, S. (1989). Maternal Thinking: Toward a Politics of Peace. New York: Beacon Press.
Sayer, Andrew. (2011). Why Things Matter to People: Social Science, Values and Ethical Life. Cambridge; New York: Cambridge University Press.
Schummer, J. (2001). Ethics of Chemical Synthesis. Hyle, 7(2), 103-124.
Schwarz, M., & Thompson, M. (1990). Divided We Stand: Redefining Politics, Technology and Social Choice. Philadelphia: University of Pennsylvania Press.
Stilgoe, Jack, Owen, Richard, & Macnaghten, Phil. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568-1580.
Suskind, Ron. (2004, 17 October). Faith, Certainty and the Presidency of George W. Bush. New York Times. Retrieved from http://www.nytimes.com/2004/10/17/magazine/17BUSH.html
Tenner, E. (1997). Why Things Bite Back: Technology and the Revenge of Unintended Consequences. New York: Vintage Books.
Tronto, Joan C. (1993). Moral Boundaries: A Political Argument for an Ethic of Care. New York: Routledge.
Verbeek, P. P. (2011). Moralizing Technology: Understanding and Designing the Morality of Things. Chicago: University of Chicago Press.
Wynne, B. (1992). Uncertainty and Environmental Learning: Reconceiving Science and Policy in the Preventive Paradigm. Global Environmental Change, 2(2), 111-127.
