Moral Framing
Brad Jones
22 October 2013
Note for APW readers: These are my first steps down what I imagine to be a long road
(and hopefully there will be something at the end of it). At this early stage, I would most
appreciate feedback on the setup of the method. Do you buy my measure of moral fram-
ing? How might you do it differently? With respect to the results, are there interesting
comparisons that you think I am missing? Do you have other ideas for how to interpret
the findings – especially those dealing with any patterns you might be able to pick out of
the results broken out by issue? Thanks!
1 Introduction
As others have noted, ”Words do the work of politics” (Graham, Haidt and Nosek, 2009;
Clifford and Jerit, 2013). Walter Lippmann put it like this, ”The analyst of public opinion
must begin [...] by recognizing the triangular relationship between the scene of the action,
the human picture of that scene, and the human response to that picture working itself
out upon the scene of action” (1997, 11). Almost all individuals in the mass public are
only able to access a ”picture” of the political world. The ways in which these pictures
are painted deserve our careful attention.
It is true that the words that do the work of constructing these pictures are reaching an
increasingly narrow segment of the population (Prior, 2007), and they most often reach
the public only after being filtered through reporters and other media. However, the well-
spring of these words—the politicians and other elite actors interested in making a case to
the public—devote a great deal of time and energy to crafting persuasive appeals (Lakoff,
2004; Luntz, 2007; Westen, 2008). In other words, a large part of the work of politics is
a kind of rhetorical ”heresthetics” (Riker, 1986) known as issue framing in the scholarly
literature.
Social scientists have thoroughly documented the existence and importance of fram-
ing effects. In a range of different issue areas (Tversky and Kahneman, 1981; Gamson and
Modigliani, 1989; Iyengar, 1990; Nelson, Clawson and Oxley, 1997)—with a few caveats
(McCaffrey and Keys, 2000; Druckman, 2001; Druckman and Nelson, 2003; Druckman,
2004; Chong and Druckman, 2007; Hopkins, 2012)—how an issue is framed seems to powerfully affect how the public thinks about and reacts to it. However, studies of framing
have spent less time attempting to understand how frames are actually generated and
employed ”in the wild.”1 Survey experiments (the workhorse of most framing research)
are well-suited to help us understand the size and importance of framing effects, but they
have limitations (Barabas and Jerit, 2010). Less research has been devoted to exploring the
processes that generate frames in the first place. Experimentation has greatly advanced
our understanding of certain aspects of this important political phenomenon, but there
are many elements of framing that cannot be studied in the lab.
In this paper, I set out to chart one element of political framing that seems especially
aimed at mobilization. I focus on the moral content of political speech. Politics has al-
ways been infused with a moral element, and some have suggested that the polarization
in contemporary politics has been accompanied by an increase in emotionally charged
and moralistic language (Sobieraj and Berry, 2011). As the parties have transformed and
activists have taken a central role (Polsby, 1983; Abramowitz, 2010), it is perhaps unsurprising that officeholders increasingly employ emotional appeals to connect with their constituencies (Jerit, 2004). However, we know very little about the ways in which political elites use frames in their daily communication.
1 There are a few notable exceptions. For example, Kellstedt (2000) studies different ways of framing racial issues with a careful combination of observational and experimental work.
In addition to the substantive focus of this paper, I also employ novel tools in the study
of political-text-as-data. Political scientists are beginning to pay more serious attention to
the words spoken or otherwise produced by politicians (Benoit, Laver and Mikhaylov,
2009; Grimmer, 2010; Grimmer and Stewart, 2013), but there remains considerable slip-
page between our theories about the rhetorical connection between individuals and elites
and our operationalizations of it. In this paper, I develop a method of textual analysis
that retains theoretically important relationships between words that are often lost in the
most common content-analytic techniques.
I find important differences in the content of the moral frames constructed by politi-
cians. The data show that members of the minority party are substantially more likely
to use moralized language. There is also a significant and substantively large role of ide-
ological extremism. The most extreme members of both parties are also most likely to
employ moral rhetoric.
The paper proceeds as follows. Section 2 begins by defining the basic terms that I will
use throughout the paper and reviewing some of the relevant literature on morality and
elite rhetoric. Section 3 describes the data, outlines the methodology, and provides a brief
overview of the tools used. Section 4 reports on the results of my analysis, and Section 5
concludes the paper with some discussion of the implications of my findings and possible
extensions in future work.
2 Theory and Literature Review
A rigorous study of language and politics must begin with careful definitions of the terms
employed. I begin by defining what I mean by ”morality” (a term that often goes undefined in political science research). I then review the literature on framing and lay out
a careful definition. Finally, I synthesize the two definitions to define what I mean by
”moral frame.”
Morality
The burgeoning field of moral psychology has demonstrated the important role of moral
appeals in structuring our social world. ”Morality,” as one theorist in this tradition puts
it, ”binds and blinds” (Haidt, 2013); it knits us together into groups while simultaneously
closing us off from outsiders (Ryan, 2012). Because moral judgement is so closely related
to emotion, it is also a mobilizing force (Pagano and Huo, 2007; Valentino et al., 2011;
Leonard et al., 2011).
The political science literature is speckled with references to ”morality,” but for the
most part when we talk about it at all, our discipline equates ”moral” with religious or
behavioral norms (Ben Nun, 2010). The term is often used as a catch-all category for issues
that don’t fit well into other categories. Morality is not often studied explicitly nor defined
carefully, so my own usage of the term will borrow heavily from moral psychology.
In a rather provocative piece, Gray, Young and Waytz (2012) suggest that moral action
can be identified by three elements: the moral agent, the moral action, and the moral
patient. A moral agent (defined as an individual capable of intentional action) performs a
moral act (defined as an intentional action which affects another either for good or ill) on a moral
patient (the object of the moral action that is either deserving of good treatment or at
least undeserving of bad treatment). A key component of identifying moral action is the
normative evaluation that is either implicit or explicit in the description of the moral act.2
Gray, Young and Waytz were criticized for too narrow a focus on harmful behavior (Carnes and Janoff-Bulman, 2012; Bauman, Wisneski and Skitka, 2012; Koleva and Haidt, 2012), but their focus on actors will prove especially helpful in the study of moral framing.
2 Gray et al. focus exclusively on immoral action, but their framework is easily extended to moral action. One need only reverse the polarity, if you will, of the moral action. Moral agents who act for the good of a moral patient would be examples of positive moral behavior.
As Harold Lasswell put it, ”Politics is about who gets what, when, and how.” If this is
true, the question of deservingness becomes fundamental, and the task of moral framing is
to cast certain groups and individuals as either deserving or undeserving of help or harm.
Framing
The most influential definitions in the framing literature all relate to the ways in which
frames are designed to highlight the important parts of an issue and raise the salience
of certain considerations. In an influential article that preceded (and helped to precipi-
tate) the flood of framing research in the 1990s and 2000s, Gamson and Modigliani define
framing as an integral part of what they call media ”packages.” They write, ”media dis-
course can be conceived of as a set of interpretive packages that give meaning to an issue.
A package has an internal structure. At its core is a central organizing idea, or frame, for
making sense of relevant events, suggesting what is at issue” (1989, pg. 3). Along similar lines, Entman, in an influential article a few years later subtitled ”Toward Clarification of a Fractured Paradigm,” writes:
Framing essentially involves selection and salience. To frame is to select some
aspects of a perceived reality and make them more salient in a communicating text,
in such a way as to promote a particular problem definition, causal interpretation,
moral evaluation, and/or treatment recommendation for the item described. Typical
frames diagnose, evaluate, and prescribe (1993, 52, emphasis in original).
This was perhaps most formally restated by Chong and Druckman (2007), who–
drawing on earlier work on attitude strength and importance–write down a mathematical
representation of framing effects as increasing the weight given to certain dimensions of
a political issue.
A slightly different approach to framing, and the one that I adopt in this paper, comes out of the study of language and meaning-making itself. The linguistic perspective is espe-
cially well-suited to the study of the content and construction of frames, as it pays close
attention to the structure of language and the process of conveying meaning. Charles Fill-
more’s theoretical work on ”frame-semantics” provides a useful perspective from which
to study moral framing. In a seminal paper, Fillmore writes:
A frame is a kind of outline figure with not necessarily all of the details filled
in.... Comprehension can be thought of as an active process during which the
comprehender—to the degree that it interests him—seeks to fill in the details
of the frames that have been introduced, either by looking for the needed infor-
mation in the rest of the text, by filling it in from his awareness of the current
situation, or from his own system of beliefs, or by asking his interlocutor to
say more (1976, 29).
According to Fillmore’s view, key words in the appropriate context evoke a culturally
constructed frame in the hearer’s mind. This line of work was eventually systematized
in the FrameNet database (Baker, Fillmore and Lowe, 1998). The FrameNet project has
cataloged and tagged more than 170,000 example sentences that fit into more than 1,100
frames.
For example, the ”Assessing” frame is evoked by the words ”appraisal,” ”assess,”
”evaluate,” ”rank,” and ”value.” The FrameNet entry describes the frame in these terms:
”An Assessor examines a Phenomenon to figure out its Value according to some Fea-
ture of the Phenomenon. This Value is a factor in determining the acceptability of the
Phenomenon. In some cases, a Method (implicitly involving an Assessor) is used to
determine the Phenomenon’s Value.” The bolded terms in the text refer to the Frame Ele-
ments. Given the sentence, ”Each company is then evaluated for their earning potential,”
the key word ”evaluate” activates the Assessing Frame. We can then classify the phrase,
”each company,” as the Phenomenon being assessed, ”for their earning potential” as the
Feature of ”each company” being assessed, and we can see that the Assessor is not ex-
plicitly mentioned in the frame nor is a specific Value judgement pronounced. It might
be helpful to link the ambiguous ”each company” to more specific information contained
elsewhere in the text. In a similar way, the other ambiguous elements of the sentence can
be resolved by appealing to the surrounding text and context (e.g., who or what is doing
the assessment? what is the specific method of evaluation? etc.).
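To make the mechanics concrete, frame evocation can be sketched as a keyword lookup. The Python sketch below uses an invented mini-lexicon (not the actual FrameNet data) purely to illustrate the idea that particular lexical units evoke particular frames:

```python
# A toy illustration of frame evocation: certain key words ("lexical units")
# evoke a frame, and the rest of the sentence fills in the frame elements.
# This mini-lexicon is invented for illustration; the real FrameNet database
# maps thousands of lexical units to more than 1,100 frames.

FRAME_LEXICON = {
    "evaluate": "Assessing",
    "assess": "Assessing",
    "appraise": "Assessing",
    "protect": "Protecting",
    "defend": "Protecting",
}

def evoked_frames(sentence):
    """Return the set of frames evoked by key words in the sentence."""
    tokens = [t.strip(".,;!?").lower() for t in sentence.split()]
    frames = set()
    for tok in tokens:
        for lexical_unit, frame in FRAME_LEXICON.items():
            # Crude prefix match so "evaluated" still hits "evaluate".
            if tok.startswith(lexical_unit):
                frames.add(frame)
    return frames

print(evoked_frames("Each company is then evaluated for their earning potential."))
# {'Assessing'}
```

Resolving the frame elements themselves (the Assessor, the Value, and so on) is of course the hard part; that is what a full frame-semantic parse provides.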
The frame semantic approach provides us with a disciplined way of decomposing
frames into their constituent parts. With these analytical tools at our disposal, we are
prepared to approach the topic of moral framing in a systematic and thorough way.
Moral Framing
As is probably apparent, Fillmore’s perspective on frames fits well with an agent-focused
definition of morality. In this paper, I build from Gray et al.’s concise outline and use
it to identify moral frames–”moral dyads” in their terms–in political speech. Similar to
the example frame used above, we can formalize the definitions of moral agent, moral
action, and moral patient. A moral sentence is composed of, or at least implies, all three
elements. This degree of formalization will facilitate automated coding (as discussed in
Section 3 below).
A converging body of literature is showing that liberals and conservatives are sep-
arated by a deep moral divide (Graham, Haidt and Nosek, 2009; Janoff-Bulman, 2009;
Inbar et al., 2012; Jones, 2011). One particularly useful theoretical framework for the
study of moral framing is Janoff-Bulman’s work on the relationship between basic moti-
vational systems (approach/avoid or activation/inhibition) and morality (Janoff-Bulman
and Carnes, 2013; Janoff-Bulman, 2009). Janoff-Bulman’s work has shown that conser-
vatives tend to focus more on an avoidance- or protection-based morality (proscriptive)
whereas liberal morality is based on the approach systems and is prescriptive. Her re-
search in this area leads us to expect to see a greater proportion of proscriptive arguments
coming from Republicans, and a greater proportion of prescriptive arguments coming
from Democrats.
Proscriptive Moral Frames
The prototypical proscriptive moral frame casts the moral hero as protecting someone or
something that is worth saving or in danger. Conversely, a moral villain in a proscriptive
frame is threatening or endangering something worth protecting. We can identify pro-
scriptive moral frames by the verbs that they use (”imperil,” ”risk,” ”expose,” etc. on the
negative side and ”defend,” ”guard,” ”preserve,” etc. on the positive side; a complete list
of the words used can be found in the appendix).
Prescriptive Moral Frames
A prescriptive moral frame casts the moral hero as promoting or caring for someone or
something in need. Prescriptive moral villains, on the other hand, are guilty of taking
away something deserved by the patient or neglecting to attend to its needs. Prescrip-
tive moral frames are also evoked by the particular verbs they use (”abandon,” ”refuse,”
”spurn,” etc. on the negative side and ”nurture,” ”provide,” ”care” on the positive side).3
Variation across issues
In a competitive partisan environment, parties have incentives to cultivate reputations
in various issue areas. The issue ownership literature has shown that Democrats and
3 There is a parallel here to Lakoff’s (2002) idea of ”strict father” (proscriptive) and ”nurturant mother”
(prescriptive) morality that has received some attention in the political science literature (Barker and Tin-
nick, 2006).
Republicans are differentially trusted in distinct policy domains (Petrocik, 1996; Petrocik,
Benoit and Hansen, 2003). Given the issue reputations, we might expect to see variation
in the quantity of moral framing by issue. I expect a higher proportion of moral framing on a given issue from partisans whose party ”owns” the issue. For example, I would expect a relatively higher proportion of moral framing from Democrats speaking about education issues than from Republicans.
Variation across legislators
We might expect that legislators who are more extreme ideologically would be more likely to
use moralized rhetoric. This would be consistent with the conventional wisdom that tells
us that as the parties have polarized they have also become more moralized.
Hypotheses
1. Conservatives and Republicans will be more likely to use a proscriptive focus
2. Liberals and Democrats will be more likely to use a prescriptive focus
3. Partisans will use relatively more moral language in the issues owned by their par-
ties
4. Legislators who are more extreme ideologically will be more likely to use moral
language
3 Data and Methods
The data for this study is drawn primarily from the Congressional Record for the 101st
through the 112th Congresses.4 I formatted the Record into a dataset with rows corre-
sponding to individual speeches. This yielded nearly 220,000 speeches overall (117,000 made by representatives in the House and 105,000 by senators).5
Semantic Role Labeling
A wide range of tools are becoming available from the computational linguistics commu-
nity for natural language processing. Great strides have been made in processing natural
language for applications ranging from speech recognition to generating summaries of
texts and extracting meaningful relationships from large bodies of scientific literature. In
this paper, I rely on a program designed for efficient ”semantic role labeling” (SRL) called
SENNA (Collobert et al., 2011). While a full review of the methods and development of
SRL is beyond the scope of this paper, it might be helpful to briefly review the process at
a conceptual level, as it is likely unfamiliar to most readers.
Semantic role labeling is the process of identifying the agents, objects, and their re-
lationships in natural language. The process is somewhat analogous to the grade-school
exercise of diagramming a sentence, and it relies upon a suite of lower-level tasks (pri-
marily part of speech tagging to identify the key verbs and noun phrases in a sentence).
For example, given the sentence ”The Republican Party protects the lives of the unborn,” the software would tell us that the noun phrase ”The Republican Party” is the subject of the verb ”protect,” and the thing that it is protecting is the noun phrase ”the lives of the unborn.” In principle, the same results would be returned for the equivalent (but more
4 It is surprisingly difficult to get an electronic version of the Congressional Record. For the purposes of this paper, I scraped the text from the online THOMAS database myself.
5 Wherever feasible, I excluded parliamentary language and other non-substantive speeches.
awkward) alternate construction: ”The lives of the unborn are protected by the Republi-
can Party.”
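To show what this buys us, the sketch below (in Python, with hand-labeled role records standing in for SENNA's actual column-formatted output) reduces each labeled sentence to a (verb, agent, patient) record, so that the active and passive constructions above collapse to the same representation:

```python
# A simplified picture of SRL output: each sentence reduces to a record of
# the operative verb, its agent (A0), and its patient (A1). SENNA's actual
# output is one token per line with PropBank-style column labels; these
# records are hand-labeled stand-ins for illustration.
from collections import namedtuple

Roles = namedtuple("Roles", ["verb", "agent", "patient"])

# "The Republican Party protects the lives of the unborn."
active = Roles(verb="protect",
               agent="The Republican Party",
               patient="the lives of the unborn")

# "The lives of the unborn are protected by the Republican Party."
passive = Roles(verb="protect",
               agent="The Republican Party",
               patient="the lives of the unborn")

# The surface forms differ, but the semantic roles are identical.
assert active == passive
print(active.verb, "|", active.agent, "|", active.patient)
```

This role structure, rather than the surface word order, is what the moral-frame coding described below operates on.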
For my purposes, the results of the SRL process enable me to quickly6 identify the
moral frames as defined above by identifying all the sentences which hinge around one
of the key verbs that I defined as ”moral” (see the Appendix for the complete list).7 My
dictionary of moral words was compiled by using a thesaurus to find as many verbs as possible that relate to positive and negative versions of the proscriptive and prescriptive moralities defined by Janoff-Bulman (2009). This yielded a four-way typology of moral verbs: positive-
proscriptive, negative-proscriptive, positive-prescriptive, and negative-prescriptive.
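The lookup step itself is then straightforward. In the Python sketch below, the short verb lists are illustrative stand-ins for the full thesaurus-built dictionary reported in the Appendix:

```python
# A sketch of the dictionary step: each sentence's operative verb (recovered
# by SRL) is looked up in the four-way typology of moral verbs. The short
# word lists are illustrative stand-ins for the complete dictionary.

MORAL_VERBS = {
    ("proscriptive", "positive"): {"protect", "defend", "guard", "preserve"},
    ("proscriptive", "negative"): {"imperil", "risk", "expose", "threaten"},
    ("prescriptive", "positive"): {"nurture", "provide", "care"},
    ("prescriptive", "negative"): {"abandon", "refuse", "spurn", "neglect"},
}

def classify_verb(verb):
    """Return the (morality, valence) cell for a verb, or None if non-moral."""
    for cell, verbs in MORAL_VERBS.items():
        if verb.lower() in verbs:
            return cell
    return None

print(classify_verb("protect"))   # ('proscriptive', 'positive')
print(classify_verb("abandon"))   # ('prescriptive', 'negative')
print(classify_verb("discuss"))   # None -- not a moral frame
```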
Topic Classification
I classified each speech into issue categories using information from the speeches that mentioned specific bills. For speeches that mentioned specific bills, I matched the bill to its record in the Congressional Bills Project data (Adler and Wilkerson, 2013) and assigned the speech the ”major topic” code that was assigned to the bill, on the assumption that speeches that mention specific bills are discussing the contents of those bills.
About 10 percent of the speeches I identified mention a specific bill that could
6 ”Quick” in the sense that it was much quicker than reading the entire Congressional Record myself or training human coders to do it for me. Ultimately, the job required nearly 4 days of computing time to process the entire body of text. SENNA is implemented in C++ and is actually much quicker than some of the other software that is available to do roughly the same task. In this paper, I used the default model included with the v. 2.0 download (available at http://ml.nec-labs.com/senna/), but SENNA allows the user to train a new model.
7 It would certainly be possible to expand the parameters of the search to include a greater range of linguistic constructions in the category ”moral.” For example, it might prove fruitful to look at the valence and meaning of adjectives in addition to verbs. For the purposes of this paper, I chose to stick with a more narrowly defined criterion that identifies accounts of moral action.
be matched to the Adler and Wilkerson (2013) data. I used these 22,000 speeches as a
training set to classify the remaining speeches into the same set of topic codes. Using a
relatively simple, naive Bayes text classification algorithm (Yang and Liu, 1999),8 I clas-
sified the remaining speeches into the same set of topic categories (see the Appendix for
more details on the text classification process).
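Conceptually, the classifier works as in the following minimal multinomial naive Bayes sketch. The tiny training set and topic labels are invented, standing in for the 22,000 bill-matched speeches and their Congressional Bills Project major-topic codes:

```python
# A minimal multinomial naive Bayes sketch of the topic-classification step,
# with add-one (Laplace) smoothing. Training speeches and topic labels are
# invented for illustration.
import math
from collections import Counter, defaultdict

train = [
    ("we must protect social security benefits for seniors", "social welfare"),
    ("medicare and medicaid provide care for families", "social welfare"),
    ("the defense budget funds our troops and the navy", "defense"),
    ("military readiness and national security spending", "defense"),
]

# Count words per topic to estimate the class-conditional word probabilities.
word_counts = defaultdict(Counter)
topic_counts = Counter()
for text, topic in train:
    topic_counts[topic] += 1
    word_counts[topic].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Return the topic with the highest posterior log-probability."""
    scores = {}
    for topic in topic_counts:
        total = sum(word_counts[topic].values())
        score = math.log(topic_counts[topic] / len(train))  # log prior
        for w in text.split():
            score += math.log((word_counts[topic][w] + 1) / (total + len(vocab)))
        scores[topic] = score
    return max(scores, key=scores.get)

print(classify("protect medicare benefits"))  # social welfare
print(classify("funding for the navy"))       # defense
```

The real implementation differs mainly in scale and in using the bill-matched speeches as the training set, but the probability model is the same.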
Legislator Data
For comparisons of moral language use across legislators, I matched first dimension DW-
NOMINATE scores (Poole and Rosenthal, 2013) to the names of legislators. It was then
straightforward to come up with a proportion of moral speech by dividing the total num-
ber of moral sentences used by a particular legislator by the total number of sentences
spoken.
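The calculation amounts to the following; the legislator names and sentence counts below are invented for illustration (in the paper, each legislator is additionally matched to a first-dimension DW-NOMINATE score):

```python
# Sketch of the legislator-level measure: the share of a member's sentences
# that contain a moral frame, aggregated over all of that member's speeches.
# Records and names are invented for illustration.

speeches = [
    {"legislator": "Smith", "moral_sentences": 12, "total_sentences": 100},
    {"legislator": "Smith", "moral_sentences": 8,  "total_sentences": 60},
    {"legislator": "Jones", "moral_sentences": 5,  "total_sentences": 200},
]

def moral_proportion(records, name):
    """Total moral sentences divided by total sentences for one legislator."""
    moral = sum(r["moral_sentences"] for r in records if r["legislator"] == name)
    total = sum(r["total_sentences"] for r in records if r["legislator"] == name)
    return moral / total

print(round(moral_proportion(speeches, "Smith"), 3))  # 20/160 = 0.125
print(round(moral_proportion(speeches, "Jones"), 3))  # 5/200 = 0.025
```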
4 Analysis and Results
In all, I identified over 83,000 speeches that contained at least one ”moral frame” by the definition outlined above. This is more than 37 percent of the total speeches in the Congressional Record (the rate was higher in the House, 42%, and lower in the Senate, 32%).9
8 While more sophisticated approaches exist, the naive Bayes classifier has several advantages in this case, not the least of which is the relatively light computational burden. In comparisons against other more technically demanding models of text classification, naive Bayes performs remarkably well.
9 It is worth asking at this point if the method I propose in this paper produces substantially different results from a more straightforward ”bag-of-words” approach. If it is the case that a method that ignores the relationships between words produces roughly similar results as my much more computationally intensive alternative, it would not make sense to proceed. The typical ”bag-of-words” approach yields about twice as many hits for moral words as does my more constrained classification. For example, one of the positive-prescriptive words is ”care.” The SRL technique distinguishes between ”care” as it is used in a noun phrase (e.g., health care) and ”care” as it is used as a verb (e.g., care for the poor). A simpler but more naive approach would result in too many false-positives.
To give the reader a feel for the kinds of sentences we are talking about, I selected a few example sentences that highlight the ”moral frames” defined above.
Positive-Proscriptive
”We can have a good voluntary program to set up to protect and preserve Social Security.”10 ”We must enforce the laws that are on the books so we can save lives....”11 ”[W]e in Congress will continue to watch and ensure that the Navy not only adheres, but is committed to the programs and changes it has implemented to eradicate all forms of sexual harassment in the Armed Forces.”12
Negative-Proscriptive
”The chairman of the House Interior Committee was pushing a bill through the House that would devastate those timber communities and destroy the wood products industry of Washington, Oregon, and northern California.”13 ”A growing American economic capability is the only way we can do such things as fight our war on terrorism....”14 ”They are destroying our jobs, and they must be reined in.”15
Positive-Prescriptive
”That is why I introduced legislation last year to help streamline the permitting process for new energy facilities.”16 ”This program does not provide charity; it provides a chance.”17
10 Speech made by Rep. Kingston (R), July 12, 2000.
11 Speech made by Rep. McCarthy (D), September 6, 2006.
12 Speech made by Rep. Snowe (R), April 27, 1993.
13 Speech made by Sen. Gorton (R), June 23, 1992.
14 Speech made by Sen. Voinovich (R), March 5, 2002.
15 Speech made by Rep. Scott (R-GA), March 29, 2011.
16 Speech made by Sen. Voinovich (R), March 5, 2002.
Negative-Prescriptive
”Again, they refuse to deal with the overwhelming problem of school construction that
we need help in constructing more classrooms.”18 ”To do this, they exploit our banks and
business”19
What is worth ”protecting”?
The resulting data retain important relationships between the concepts. For example, using the results of the coding, we can identify all of the noun phrases that appear as objects in sentences where the operative verb is ”protect.” I identified nearly 16,000 speeches that contained ”protect” sentences. Of these, 8,600 were made by Democrats and nearly 7,200
by Republicans. Table 1 shows the top ranking noun phrases that were the objects of the
verb ”protect” for Democrats and Republicans. The columns on the left side of the vertical
line show the rankings for the top twenty ranked words for Democrats compared to the
same words’ rankings for Republicans. The columns on the right side of the vertical line
show the top twenty words for Republicans.
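The construction behind Table 1 can be sketched as follows; the (party, object) records below are invented stand-ins for the noun phrases extracted from the roughly 16,000 ”protect” speeches:

```python
# Sketch of how Table 1 is built: from the SRL output, collect the noun
# phrases appearing as the object of "protect" and rank them by frequency
# within each party. Records are invented for illustration.
from collections import Counter

protect_objects = [
    ("D", "social security"), ("D", "the environment"), ("D", "social security"),
    ("R", "the border"),      ("R", "freedom"),         ("R", "the border"),
]

def ranked_objects(records, party):
    """Return one party's protected objects, most frequent first."""
    counts = Counter(obj for p, obj in records if p == party)
    # most_common() orders by frequency, giving the party's ranking.
    return [obj for obj, _ in counts.most_common()]

print(ranked_objects(protect_objects, "D"))  # ['social security', 'the environment']
print(ranked_objects(protect_objects, "R"))  # ['the border', 'freedom']
```

Comparing the two ranked lists side by side, with each party's ranks cross-referenced against the other's, yields the layout of Table 1.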
Quick inspection of the table confirms most of our thinking about the different par-
ties. When Democrats talk about ”protecting” they are more likely to be talking about
government programs and social welfare issues (the higher rankings for ”social security,”
”health,” and ”communities”). Republicans, on the other hand, ranked words like ”free-
dom”, ”border” and ”borders”, and ”flag” much higher than the Democrats. The table
also reveals important rhetorical similarities between the two parties. ”People,” ”rights,”
and ”environment” are ranked in the top five positions for both parties. Each of the parties talked about protecting ”Americans,” the ”American people,” and the ”country” with relatively similar frequencies.20
17 Speech made by Rep. Clayton (D), March 19, 1996.
18 Speech made by Rep. Owens (D), June 16, 1998.
19 Speech made by Sen. Grassley (R), September 25, 1996.
Dem. Words D R Rep. Words R D
environment 1 4 people 1 3
rights 2 3 country 2 5
people 3 1 rights 3 2
children 4 9 environment 4 1
country 5 2 united states 5 13
citizens 6 7 nation 6 8
american people 7 8 citizens 7 6
nation 8 6 american people 8 7
social security 9 15 children 9 4
families 10 24 borders 10 44
communities 11 23 freedom 11 27
interests 12 13 flag 12 22
united states 13 5 interests 13 12
right 14 17 lives 14 16
americans 15 18 social security 15 9
lives 16 14 border 16 529
health 17 41 right 17 14
public health 18 40 americans 18 15
troops 19 25 freedoms 19 32
national security 20 28 homeland 20 39
Table 1: What’s worth protecting?
The table also reveals why some caution needs to be taken with the method. For example, on the Republicans’ side, the word ”interests” is listed 11th. This word could (and almost certainly does) appear on the list for two reasons. First, Republicans could be talking about protecting the national interest, and this sense of the word ”interest” would qualify as something that the Republicans want to ”protect.” On the other hand, they also use this construction to accuse the Democrats of protecting ”special” interests.
Morality and Partisanship
My first two hypotheses relate to the expected relationship between moral language use
and partisanship. The literature on morality would lead us to expect that Republicans
would be more likely to employ proscriptive moral frames and Democrats would be
more likely to employ prescriptive frames. It is relatively straightforward to compare
the rates of moral framing between Republicans and Democrats. The first two columns
of Table 2 show the average proportion of speeches containing proscriptive moral frames
by Democrats and Republicans respectively. The third column reports the p-value for a
simple difference of means test.
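The test itself can be sketched as follows, treating each speech as a 0/1 indicator for containing a proscriptive frame; the toy samples below are invented and, unlike the actual data, far too small to yield significance:

```python
# Sketch of the difference-of-means test behind Table 2, using Welch's
# two-sample t statistic. Each observation is a speech coded 1 if it
# contains a proscriptive frame and 0 otherwise; the samples are invented.
import math

def welch_t(x, y):
    """Welch's t statistic for the difference in means of two samples."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

dem = [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]   # mean 0.6
rep = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]   # mean 0.3

print(round(welch_t(dem, rep), 2))  # 1.34
```

With samples the size of the actual data (roughly 220,000 speeches), even a modest difference in party means produces a very large t statistic and a p-value near zero, which is the pattern the tables report.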
In all cases, it was actually the Democrats who were more likely to use proscriptive
framing. The differences between the parties are not particularly large, but the large sam-
ple (220,000 speeches) leads to significant results with these modest differences.
Table 3 shows the results for prescriptive moral language. Hypothesis 2 predicts that
Democrats will be more likely to use prescriptive moral language. Again, the differences
20 It might be interesting to repeat this exercise with a different focus. For example, we could examine the ways the parties differentially refer to immigrants by finding all of the noun phrases in the Record that contained some version of ”immigrant” (”immigrant,” ”migrant,” ”alien,” etc.) and exploring the different verbs or adjectives associated with the terms.
Democrats Republicans p-value
House 30.52 28.19 0.00
Senate 25.17 21.56 0.00
Combined 27.86 25.24 0.00
Table 2: Proscriptive Frames (Protect)
Democrats Republicans p-value
House 36.13 32.88 0.00
Senate 29.07 26.01 0.00
Combined 32.61 29.82 0.00
Table 3: Prescriptive Frames (Provide)
are not large, but in this case they run in the expected direction. Democrats were more
likely to use prescriptive moral language in their speeches than Republicans in the period
under study.
An obvious potential confounding factor here is majority status. There is most likely
an interaction between majority status and moral language. In Table 4, I break out the
use of proscriptive moral frames by Congress. The cells of the table show the differences
between Democrats and Republicans. For example, the first row shows that Republicans
used slightly more proscriptive moral frames than Democrats in the 101st Congress. In
this particular instance the difference is not statistically significant.
Examining the differences over time reveals the interaction between majority status
and moral framing. It seems that partisanship is less important than minority status in
this case. Parties in the minority are significantly more likely to use proscriptive moral
frames than parties in the majority. The signs of differences track the changing majority
status exactly in both chambers.
Table 5 does the same for prescriptive moral frames. The story is consistently the same:
House p-value Senate p-value
101 -0.62 0.52 -3.13 0.00
102 -1.38 0.13 -6.33 0.00
103 -4.06 0.00 -8.34 0.00
104 5.18 0.00 6.28 0.00
105 3.99 0.00 20.64 0.00
106 10.47 0.00 23.43 0.00
107 11.74 0.00 -12.61 0.00
108 10.05 0.00 20.88 0.00
109 7.72 0.00 27.63 0.00
110 -4.91 0.00 -13.19 0.00
111 -6.26 0.00 -6.11 0.00
112 6.57 0.00 -12.13 0.00
Table 4: Difference in Proscriptive Frames by Congress (D - R)
House p-value Senate p-value
101 -9.23 0.00 -11.44 0.00
102 -1.19 0.21 -10.78 0.00
103 -3.32 0.00 -12.45 0.00
104 7.44 0.00 7.26 0.00
105 5.38 0.00 22.79 0.00
106 11.05 0.00 24.27 0.00
107 12.46 0.00 -8.19 0.00
108 12.82 0.00 25.26 0.00
109 10.13 0.00 30.38 0.00
110 -1.85 0.05 -13.06 0.00
111 -4.79 0.00 -8.92 0.00
112 6.61 0.00 -16.23 0.00
Table 5: Difference in Prescriptive Frames by Congress (D - R)
minority parties are significantly more likely to use prescriptive moral frames as well.
Being a member of the minority, it would seem, is associated with a higher likelihood of
using any moral frame.
As a final exploration of the relationship between partisanship and the type of moral
rhetoric, we can control in some measure for the effect of minority status by looking at the
breakdown of the kinds of moral frames used as a proportion of all moral frames used
(rather than a proportion of speeches containing moral frames).
Since my classification of moral frames is mutually exclusive and exhaustive, it is only
necessary to look at one of the two (proscriptive or prescriptive) when examining the
difference in the composition of moral frames by party. Table 6 shows these differences
by Congress for prescriptive moral frames as a proportion of total moral frames. In the
House for all Congresses, Democrats used relatively more prescriptive frames than Republicans (and by definition Republicans used more proscriptive frames).

Congress House p-value Senate p-value
101 0.92 0.00 0.08 0.00
102 0.35 0.09 -0.29 0.00
103 4.01 0.02 0.10 0.01
104 4.78 0.03 -1.71 0.38
105 3.46 0.24 -2.36 0.03
106 1.85 1.00 -0.98 0.00
107 4.33 0.79 2.07 0.00
108 4.02 0.02 -2.62 0.02
109 2.61 0.00 0.26 0.36
110 5.44 0.00 0.26 0.01
111 4.82 0.00 -0.21 0.14
112 1.52 0.77 -1.76 0.25
Table 6: Difference in Proportion of Prescriptive Moral Frames (D - R)

The results
from the Senate are less consistent and nearly always of lesser magnitude. The data give
limited support for the first and second hypotheses.
Morality and Issue Content
My third hypothesis concerns differences between the parties by issue. Table 7 breaks
out the proportion of moral language use by issue area, separately for Congresses with
Democratic majorities (on the left) and Republican majorities (on the right). The overall
pattern, in which parties in the minority are more likely to use moral language than
parties in the majority, holds here as well. Underneath that overall trend, there is
substantial variation by issue area. Even when in the majority, Democrats are more likely than Republicans to
use moral frames when talking about economic issues and social welfare issues.
Morality and Extremity
My last hypothesis considers differences in language use by ideological extremity. To
test for a relationship between ideological extremity and moral language use, I ran a
regression with the number of moral frames in a speech as the dependent variable and
nominate score, party, and their interaction as independent variables. The unit of
analysis is the speech. As speeches are clustered by legislator, I used a clustered
bootstrapping procedure to estimate the standard errors.21 The regression controlled for
the topic of the speech (as different topics are more readily ”moralized”) and the length
of the speech (longer speeches have more opportunities for a larger number of moral frames).
Figure 1 visualizes the results. The line on the left-hand side of the figure shows the
relationship for Democrats (moving from the most extreme on the left to more moderate
on the right). The line on the right-hand side shows the results for Republicans.
For both Republicans and Democrats, increasing ideological extremity is associated with
increasing use of moral frames. This result appears to be strongest for Democrats.22
21 I estimated the standard errors by first drawing a sample of legislators (with replacement) equal to the number of legislators in the data (1,355). I then sampled a number of speeches from each sampled legislator (repeating the process for legislators selected into the bootstrapped sample more than once) equal to the number of speeches he or she gave in the data set. This produces a distribution of regression coefficients from which I can construct the confidence intervals.
22 The relationship does not seem to vary significantly by Congress or chamber. This suggests that at least some of the disparity related to majority status has to do with the kinds of representatives who make speeches while their party is in the minority.
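The clustered bootstrap described in footnote 21 can be sketched as follows. This is an illustrative Python version, not the author's code (the analysis was presumably run in R), and the regression is replaced by a simple mean for brevity; the toy data are invented.

```python
import random
from statistics import mean

def cluster_bootstrap(speeches_by_legislator, estimate, n_boot=1000, seed=0):
    """Cluster bootstrap: resample legislators with replacement, then resample
    each sampled legislator's speeches with replacement. `estimate` maps a
    list of speech-level values to a statistic (a stand-in for the regression)."""
    rng = random.Random(seed)
    legislators = list(speeches_by_legislator)
    draws = []
    for _ in range(n_boot):
        sample = []
        for leg in rng.choices(legislators, k=len(legislators)):
            speeches = speeches_by_legislator[leg]
            sample.extend(rng.choices(speeches, k=len(speeches)))
        draws.append(estimate(sample))
    draws.sort()
    # 95% percentile confidence interval from the bootstrap distribution
    return draws[int(0.025 * n_boot)], draws[int(0.975 * n_boot)]

# Toy data: moral-frame counts per speech for three hypothetical legislators.
data = {"A": [0, 1, 2], "B": [1, 1, 3], "C": [0, 2, 2]}
lo, hi = cluster_bootstrap(data, mean, n_boot=2000)
```

Resampling whole legislators first preserves the within-legislator clustering of speeches that motivates the procedure.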
Issue Dem. Majorities p-value Rep. Majorities p-value
Agriculture 3.94 0.00 6.95 0.00
Banking/Finance -9.66 0.00 12.46 0.00
Civil Rights -2.29 0.07 1.49 0.22
Community/Housing -22.96 0.00 20.01 0.00
Defense -13.68 0.00 5.12 0.00
Education -0.14 0.93 9.94 0.00
Energy -2.25 0.24 11.35 0.00
Environment -5.38 0.02 15.16 0.00
Health -0.64 0.60 4.42 0.00
International Affairs/Aid -0.81 0.47 7.32 0.00
Labor/Immigration -2.57 0.18 10.03 0.00
Law/Family -11.39 0.00 10.89 0.00
Macroeconomics 6.82 0.00 8.85 0.00
Public Lands -5.52 0.00 5.50 0.00
Science/Communications -0.03 0.99 13.89 0.00
Social Welfare 7.79 0.00 8.12 0.00
Trade -17.14 0.00 18.07 0.00
Transportation -2.58 0.00 2.84 0.00
Table 7: Differences (D - R) in overall moral framing by issue and majority status
Coef. Bootstrapped SE
Intercept -13.63 0.80
Nominate Score -2.48 0.68
Republican 0.30 0.29
Nom.*Rep. 3.87 0.84
log(Words in Speech) 2.52 0.12
n 215,752
R-squared 0.26
Table 8: Ideological Extremity and Morality
Figure 1: Ideological Extremity and Moral Rhetoric
[Plot titled ”Ideological Extremity and Moral Language Use”: predicted number of moral frames per speech (y-axis, roughly 2 to 6) against Nominate Score (x-axis, -1.0 to 1.0).]
5 Discussion and Conclusion
Theories of the connections between the political elite and the mass public are at the heart of
democratic theory. In this paper, I have argued that our methods for analyzing speech—
the primary link between citizens and their representatives—are wanting in several re-
spects. Too often, we throw out much of what makes communication meaningful. By
paying more careful attention to the structure of natural language and the most basic
units of communication (subjects, predicates, and their objects), we can more systemati-
cally study political communication.
In this paper, I briefly introduced one potential application of natural language pro-
cessing to the study of political communication. This approach forces us to carefully de-
fine our concepts and think through what we mean by a ”frame.” Provided that we can
define our terms appropriately, the tools coming out of the natural language processing
community allow us to relatively quickly process texts with minimal human intervention.
This greatly reduces the costs of content analysis and has the potential to dramatically in-
crease its reliability and replicability.
References
Abramowitz, Alan I. 2010. The disappearing center: Engaged citizens, polarization, and Amer-
ican democracy. Yale University Press.
Adler, E. Scott and John Wilkerson. 2013. “Congressional Bills Project.”.
Baker, Collin F, Charles J Fillmore and John B Lowe. 1998. The Berkeley FrameNet project.
In Proceedings of the 17th international conference on Computational linguistics-Volume 1.
Association for Computational Linguistics pp. 86–90.
Barabas, Jason and Jennifer Jerit. 2010. “Are Survey Experiments Externally Valid?” Amer-
ican Political Science Review 104(2):226–242.
Barker, David C and James D Tinnick. 2006. “Competing visions of parental roles and
ideological constraint.” American Political Science Review 100(2):249.
Bauman, Christopher W, Daniel C Wisneski and Linda J Skitka. 2012. “Cubist consequen-
tialism: The pros and cons of an agent–patient template for morality.” Psychological
Inquiry 23(2):129–133.
Ben Nun, Pazit. 2010. The Moral Public: Moral Judgment and Political Attitudes. PhD
thesis, Stony Brook University.
Benoit, Kenneth, Michael Laver and Slava Mikhaylov. 2009. “Treating words as data with
error: Uncertainty in text statements of policy positions.” American Journal of Political
Science 53(2):495–513.
Carnes, Nate and Ronnie Janoff-Bulman. 2012. “Harm, help, and the nature of (im) moral
(in) action.” Psychological Inquiry 23(2):137–142.
Chong, Dennis and James N Druckman. 2007. “A theory of framing and opinion forma-
tion in competitive elite environments.” Journal of Communication 57(1):99–118.
Clifford, Scott and Jennifer Jerit. 2013. “How words do the work of politics: Moral Foun-
dations Theory and the debate over stem-cell research.” Journal of Politics .
Collobert, Ronan, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu and
Pavel Kuksa. 2011. “Natural language processing (almost) from scratch.” The Journal of
Machine Learning Research 12:2493–2537.
Druckman, James N. 2001. “On the limits of framing effects: who can frame?” Journal of
Politics 63(4):1041–1066.
Druckman, James N. 2004. “Political preference formation: Competition, deliberation,
and the (ir) relevance of framing effects.” American Political Science Review 98(4):671–
686.
Druckman, James N and Kjersten R Nelson. 2003. “Framing and deliberation: How citi-
zens’ conversations limit elite influence.” American Journal of Political Science 47(4):729–
745.
Entman, Robert M. 1993. “Framing: Toward clarification of a fractured paradigm.” Journal
of communication 43(4):51–58.
Feinerer, Ingo, Kurt Hornik and David Meyer. 2008. “Text mining infrastructure in R.”
Journal of Statistical Software 25(5):1–54.
Fillmore, Charles J. 1976. “Frame semantics and the nature of language.” Annals of the
New York Academy of Sciences 280(1):20–32.
Gamson, William A and Andre Modigliani. 1989. “Media discourse and public opinion
on nuclear power: A constructionist approach.” American journal of sociology pp. 1–37.
Graham, Jesse, Jonathan Haidt and Brian A Nosek. 2009. “Liberals and conservatives
rely on different sets of moral foundations.” Journal of personality and social psychology
96(5):1029.
Gray, Kurt, Liane Young and Adam Waytz. 2012. “Mind perception is the essence of
morality.” Psychological Inquiry 23(2):101–124.
Grimmer, Justin. 2010. “A Bayesian hierarchical topic model for political texts: Measuring
expressed agendas in Senate press releases.” Political Analysis 18(1):1–35.
Grimmer, Justin and Brandon M Stewart. 2013. “Text as data: The promise and pitfalls of
automatic content analysis methods for political texts.” Political Analysis .
Haidt, Jonathan. 2013. The righteous mind: Why good people are divided by politics and religion.
Random House Digital, Inc.
Hopkins, Daniel. 2012. “The Exaggerated Life of Death Panels: The Limits of Framing
Effects in the 2009-2012 Health Care Debate.” Available at SSRN 2163769 .
Inbar, Yoel, David Pizarro, Ravi Iyer and Jonathan Haidt. 2012. “Disgust sensitivity, polit-
ical conservatism, and voting.” Social Psychological and Personality Science 3(5):537–544.
Iyengar, Shanto. 1990. “Framing responsibility for political issues: The case of poverty.”
Political behavior 12(1):19–40.
Janoff-Bulman, Ronnie. 2009. “To provide or protect: Motivational bases of political lib-
eralism and conservatism.” Psychological Inquiry 20(2-3):120–128.
Janoff-Bulman, Ronnie and Nate C Carnes. 2013. “Surveying the Moral Landscape Moral
Motives and Group-Based Moralities.” Personality and Social Psychology Review .
Jerit, Jennifer. 2004. “Survival of the fittest: Rhetoric during the course of an election
campaign.” Political Psychology 25(4):563–575.
Jones, Bradley. 2011. Intuitive Politics: How Moral Intuitions Shape Political Identities
and Behavior. In APSA 2011 Annual Meeting Paper.
Kellstedt, Paul M. 2000. “Media framing and the dynamics of racial policy preferences.”
American Journal of Political Science pp. 245–260.
Koleva, Spassena and Jonathan Haidt. 2012. “Let’s Use Einstein’s Safety Razor, Not Oc-
cam’s Swiss Army Knife or Occam’s Chainsaw.” Psychological Inquiry 23(2):175–178.
Lakoff, George. 2002. Moral politics: How liberals and conservatives think. University of
Chicago Press.
Lakoff, George. 2004. Don’t think of an elephant: Know your values and frame the debate.
Chelsea Green Publishing.
Leonard, Diana J, Wesley G Moons, Diane M Mackie and Eliot R Smith. 2011. “We’re
mad as hell and we’re not going to take it anymore: Anger self-stereotyping and collective
action.” Group Processes & Intergroup Relations 14(1):99–111.
Lippmann, Walter. 1997. Public opinion. Simon and Schuster.
Luntz, Frank. 2007. Words that work: it’s not what you say, it’s what people hear. Hyperion
Books.
McCaffrey, Dawn and Jennifer Keys. 2000. “Competitive Framing Processes in the Abor-
tion Debate: Polarization-Vilification, Frame Saving, and Frame Debunking.” Sociologi-
cal Quarterly pp. 41–61.
Nelson, Thomas E, Rosalee A Clawson and Zoe M Oxley. 1997. “Media framing of a civil
liberties conflict and its effect on tolerance.” American Political Science Review pp. 567–
583.
Pagano, Sabrina J and Yuen J Huo. 2007. “The Role of Moral Emotions in Predicting
Support for Political Actions in Post-War Iraq.” Political Psychology 28(2):227–255.
Petrocik, John R. 1996. “Issue ownership in presidential elections, with a 1980 case study.”
American Journal of Political Science pp. 825–850.
Petrocik, John R, William L Benoit and Glenn J Hansen. 2003. “Issue ownership and
presidential campaigning, 1952–2000.” Political Science Quarterly 118(4):599–626.
Polsby, Nelson W. 1983. Consequences of party reform. Oxford University Press.
Poole, Keith and Howard Rosenthal. 2013. “DW-NOMINATE.”.
Prior, Markus. 2007. Post-broadcast democracy: How media choice increases inequality in polit-
ical involvement and polarizes elections. Cambridge University Press.
Riker, William H. 1986. The art of political manipulation. Yale University Press.
Ryan, Timothy J. 2012. Flipping the Moral Switch: The Divisive Core of Moral Frames.
Midwest Political Science Association.
Sobieraj, Sarah and Jeffrey M Berry. 2011. “From incivility to outrage: Political discourse
in blogs, talk radio, and cable news.” Political Communication 28(1):19–41.
Tversky, Amos and D Kahneman. 1981. “The framing of decisions and the psychology of choice.”
Science 211:453–458.
Valentino, Nicholas A, Ted Brader, Eric W Groenendyk, Krysha Gregorowicz and Vin-
cent L Hutchings. 2011. “Election night’s alright for fighting: The role of emotions in
political participation.” Journal of Politics 73(1):156–170.
Westen, Drew. 2008. The Political Brain: The Role of Emotion in Deciding the Fate of the Nation.
PublicAffairs.
Yang, Yiming and Xin Liu. 1999. A re-examination of text categorization methods. In Pro-
ceedings of the 22nd annual international ACM SIGIR conference on Research and development
in information retrieval. ACM pp. 42–49.
Protect-positive Protect-negative Provide-positive Provide-negative
1 assure [4884] attack [5086] accommodate [1582] abandon [2372]
2 conserve [699] destroy [7346] administer [2300] abuse [1371]
3 cover [9159] endanger [1728] aid [1324] brutalize [144]
4 cushion [65] expose [2228] alleviate [816] desert [149]
5 defend [8108] fight [18534] assist [4980] discard [306]
6 fend [8194] harm [2085] attend to [0] disregard [600]
7 fortify [128] hazard [16] bestow [477] dump [1196]
8 guard [1501] hurt [4989] care [7839] exploit [1127]
9 hold [27896] imperil [163] commiserate [6] fail [15552]
10 insulate [202] injure [2105] cultivate [292] forsake [76]
11 maintain [10011] kill [16019] empathize [38] maltreat [6]
12 preserve [6135] maim [187] furnish [549] mistreat [122]
13 prevent [12788] menace [19] grant [5996] neglect [1009]
14 protect [29297] peril [165] help [55657] overlook [973]
15 redeem [145] risk [2251] impart [130] prey [194]
16 rescue [783] terrorize [376] minister [2386] ravage [342]
17 safeguard [742] threaten [7407] mother [6] refuse [6889]
18 save [14957] torment [55] nourish [116] slight [22]
19 secure [4904] wound [1639] nurse [41] spurn [27]
20 shelter [170] nurture [521] take [85472]
21 shield [379] provide [73773] torture [868]
22 uphold [1909] relieve [844] trample [256]
23 watch [7098] reprieve [1] traumatize [86]
24 succor [4] trouble [1322]
25 sustain [3123] withdraw [2918]
26 sympathize [110] withhold [989]
B Appendix: Text Classification
The Naive Bayes text classification algorithm is one of the oldest and most intuitive tools
in the machine learning community. The model looks like this:
P(\text{Topic}_j \mid w_{j,1}, w_{j,2}, \ldots, w_{j,n_j}) \propto P(\text{Topic}_j) \prod_{i=1}^{n_j} P(w_{j,i} \mid \text{Topic}_j)
The model uses a set of texts that have been tagged into topic categories as a ”training
set.” From the training set data we calculate the conditional probabilities of each word in
the collection of texts given the topic. We then make the (assuredly wrong but blessedly
inconsequential) assumption that words are independent of one another and calculate
the posterior probability of each topic given the words we see in the speech. We
can also use the overall proportion of documents in the training set as an estimate of the
prior probability of the topic. For my purposes, I used a uniform prior across topics since
I was not confident that the training data I used (speeches that mention specific bills) was
a representative sample of all speeches. Given the size of my training set, the data should
swamp the prior in any case.
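As a toy illustration of the model above, here is a minimal Naive Bayes classifier in Python with add-one smoothing and, as in the paper, a uniform prior over topics. The documents, topics, and vocabulary below are invented for the example; they are not the paper's training data.

```python
import math
from collections import Counter

def train_nb(docs):
    """Estimate per-topic word log-probabilities with add-one smoothing.
    `docs` is a list of (topic, tokens) pairs. With a uniform prior over
    topics, the prior term drops out of the comparison at prediction time."""
    counts, vocab = {}, set()
    for topic, tokens in docs:
        counts.setdefault(topic, Counter()).update(tokens)
        vocab.update(tokens)
    V = len(vocab)
    return {topic: {w: math.log((c[w] + 1) / (sum(c.values()) + V)) for w in vocab}
            for topic, c in counts.items()}

def classify(logprob, tokens):
    """Pick the topic maximizing the summed word log-likelihoods.
    Out-of-vocabulary words are ignored for simplicity."""
    scores = {topic: sum(lp[w] for w in tokens if w in lp)
              for topic, lp in logprob.items()}
    return max(scores, key=scores.get)

# Invented mini training set of (topic, tokens) pairs.
docs = [("health", ["medicare", "hospital", "care"]),
        ("defense", ["military", "troops", "war"]),
        ("health", ["hospital", "insurance", "care"])]
model = train_nb(docs)
pred = classify(model, ["hospital", "care"])
```

Summing log-probabilities rather than multiplying raw probabilities avoids numerical underflow on long speeches.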
The Naive Bayes model has been shown to be sensitive to the choice of features (in
my case, words in the document). I cleaned up the features in several ways. First, I
removed ”stop words” (very common articles and conjunctions that don’t carry any
substantive meaning). Second, I removed numbers and converted all of the words to
lowercase. Finally, I ”stemmed” the words using the stemming algorithm contained in
the ”tm” package in R (Feinerer, Hornik and Meyer, 2008). I also supplemented the fea-
tures data with a few additional features to improve its performance (an identifier for the
member of congress who made the speech, an identifier for the party of the member, and
an identifier for the Congress in which the speech was made).
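The preprocessing steps just described can be sketched in Python. This is an illustrative stand-in, not the actual pipeline (the paper used R's ”tm” package): the stop-word list is a tiny invented sample, and `crude_stem` is a toy suffix-stripper in place of tm's Porter stemmer.

```python
import re

# Tiny illustrative stop-word list; real lists run to hundreds of words.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "that"}

def crude_stem(word):
    """Toy suffix-stripper standing in for the Porter stemmer in R's tm package."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def clean_features(text):
    """Lowercase, drop numbers and stop words, and stem the remaining tokens."""
    tokens = re.findall(r"[a-z]+", text.lower())  # [a-z]+ also discards digits
    return [crude_stem(t) for t in tokens if t not in STOP_WORDS]

features = clean_features("The Senators debated 3 amendments protecting farms")
# features -> ["senator", "debat", "amendment", "protect", "farm"]
```

Stemming collapses inflected variants ("protecting", "protects") onto one feature, which shrinks the vocabulary the classifier must estimate.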
To validate the performance of the algorithm, I partitioned my training data into ten
equally sized groups. I then ran the algorithm ten times. In each iteration, I used nine of
the partitions as training data and the other as test data. This allows us to examine the
performance of the classifier against data where the topics are known. The table below
reports the overall accuracy (the proportion of documents correctly classified), precision
(of the documents assigned to a topic, the proportion that actually belong to it), and recall
(of the documents belonging to a topic, the proportion assigned to it), averaged across the
ten partitions of the data.
In general, given the difficult task of correctly classifying 18 different topics, the model
performs well.
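The ten-fold partitioning and the per-topic metrics can be sketched as follows. This is illustrative Python with made-up labels, not the paper's evaluation code.

```python
def fold_metrics(true_labels, pred_labels, topic):
    """Per-topic precision and recall from parallel lists of true and predicted topics."""
    pairs = list(zip(true_labels, pred_labels))
    tp = sum(t == topic and p == topic for t, p in pairs)  # correctly assigned
    fp = sum(t != topic and p == topic for t, p in pairs)  # wrongly assigned
    fn = sum(t == topic and p != topic for t, p in pairs)  # missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def k_folds(items, k=10):
    """Split items into k roughly equal folds; each fold serves once as test data."""
    return [items[i::k] for i in range(k)]

# Invented labels for five test documents in one fold.
true = ["health", "health", "defense", "trade", "health"]
pred = ["health", "defense", "defense", "health", "health"]
prec, rec = fold_metrics(true, pred, "health")
```

Averaging these fold-level numbers over the ten train/test splits yields the figures reported in the table below.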
Topic Accuracy Precision Recall
Macroeconomics 0.95 0.41 0.31
Civil Rights 0.97 0.52 0.36
Health 0.93 0.78 0.41
Agriculture 0.95 0.26 0.4
Labor/Immigration 0.95 0.69 0.29
Education 0.96 0.67 0.32
Environment 0.94 0.4 0.3
Energy 0.97 0.63 0.29
Transportation 0.88 0.21 0.48
Law/Family 0.94 0.66 0.33
Social Welfare 0.97 0.52 0.37
Community/Housing 0.98 0.43 0.21
Banking/Finance 0.92 0.71 0.22
Defense 0.85 0.37 0.48
Science/Communications 0.97 0.55 0.28
Trade 0.97 0.67 0.26
International Affairs/Aid 0.96 0.56 0.27
Public Lands 0.73 0.25 0.76
Overall 0.93 0.52 0.35