Political Diversity Will Improve Social Psychological Science
José L. Duarte, Arizona State University
Jarret T. Crawford, The College of New Jersey
Charlotta Stern, Stockholm University
Jonathan Haidt, New York University—Stern School of Business
Lee Jussim, Rutgers University
Philip E. Tetlock, University of Pennsylvania1
July 11, 2014
In press at Behavioral and Brain Sciences
(minor changes likely in final published version)
Corresponding Author: Jonathan Haidt, email@example.com
Acknowledgements: We thank Bill von Hippel, Michael Huemer, Jon Krosnick, Greg Mitchell, Richard Nisbett, and Bobbie Spellman for their comments on earlier drafts of this manuscript, although they do not necessarily endorse views expressed in this paper.
Psychologists have demonstrated the value of diversity—particularly diversity of viewpoints—for enhancing creativity, discovery, and problem solving. But one key type of viewpoint diversity is lacking in academic psychology in general and social psychology in particular: political diversity. This article reviews the available evidence and finds support for four claims: 1) Academic psychology once had considerable political diversity, but has lost nearly all of it in the last 50 years; 2) This lack of political diversity can undermine the validity of social psychological science via mechanisms such as the embedding of liberal values into research questions and methods, steering researchers away from important but politically unpalatable research topics, and producing conclusions that mischaracterize liberals and conservatives alike; 3) Increased political diversity would improve social psychological science by reducing the impact of bias mechanisms such as confirmation bias, and by empowering dissenting minorities to improve the quality of the majority’s thinking; and 4) The underrepresentation of non-liberals in social psychology is most likely due to a combination of self-selection, hostile climate, and discrimination. We close with recommendations for increasing political diversity in social psychology.
Political Diversity Will Improve Social Psychological Science
“He who knows only his own side of the case, knows little of that."
–John Stuart Mill, On Liberty (1859/2002)
In the last few years, social psychology has faced a series of challenges to the validity of its research, including a few high-profile replication failures, a handful of fraud cases, and several articles on questionable research practices and inflated effect sizes (John, Loewenstein, & Prelec, 2012; Simmons, Nelson, & Simonsohn, 2011). In response, the Society for Personality and Social Psychology (SPSP) convened a Task Force on Publication and Research Practices, which provided a set of statistical, methodological, and practical recommendations intended both to limit integrity failures and to broadly increase the robustness and validity of social psychology (Funder et al., 2014, p. 18). In this article we suggest that one largely overlooked cause of failure is a lack of political diversity. We review evidence suggesting that political diversity and dissent would improve the reliability and validity of social psychological science.
We are not the first to make this point. Tetlock (1994) identified ways in which moral-political values led to unjustified conclusions about nuclear deterrence and prejudice, and Redding (2001) showed how the lack of political diversity across psychology’s subfields threatens the validity of the conclusions of psychological science. Unfortunately, these concerns have gone largely unheeded. As we shall show, the reasons for concern are even greater now than when Tetlock and Redding published their critiques.
This article makes five distinct contributions to the scientific literature, each corresponding to a separate section of the paper. Section two shows that although psychology once had considerable political diversity, the trend over the last four decades has been toward political homogeneity. Section three identifies three risk points where the lack of political diversity can undermine the validity of scientific research claims. Section four draws on findings from organizational psychology to show how increasing political diversity can improve social psychological science. Section five examines possible sources of political homogeneity in social psychology today, including differences between liberals and non-liberals in ability and interest, hostility toward non-liberal views, and discrimination against non-liberals. In section six, we offer recommendations for how social psychologists can increase political diversity within their own ranks and reduce the harmful effects of political homogeneity on their research.
Some comments on terminology are needed before we begin. First, we use the term “social psychology” to also include personality psychology because the two fields are closely intertwined and because it is awkward to refer repeatedly to “social and personality psychological science.” We focus on social psychology because it is the subfield of psychology that most directly examines ideologically controversial topics, and is thus most in need of political diversity. Second, we focus on conservatives as an under-represented group because the data on the prevalence in psychology of different ideological groups are best for the liberal-conservative contrast—and the departure from the proportion of liberals and conservatives in the U.S. population is so dramatic. However, we argue that the field needs more non-liberals, however they specifically self-identify (e.g., libertarian, moderate). Third, it is important to recognize that conservatism is not monolithic—indeed, self-identified conservatives may be more diverse in their political beliefs than are liberals (Feldman & Johnston, 2014; Klein & Stern, 2005; Stenner, 2009). Fourth, we note for the curious reader that the collaborators on this article include one liberal, one centrist, two libertarians, one whose politics defy a simple left/right categorization, and one neo-positivist contrarian who favors a don't-ask-don't-tell policy in which scholarship should be judged on its merits. None identifies as conservative or Republican.
A final preparatory comment: the lack of political diversity is not a threat to the validity of specific studies in many and perhaps most areas of research in social psychology. The lack of diversity causes problems for the scientific process primarily in areas related to the political concerns of the left—areas such as race, gender, stereotyping, environmentalism, power, and inequality—as well as in areas where conservatives themselves are studied, such as in moral and political psychology. And even in those areas, we are not suggesting that most of the studies are flawed or erroneous. Rather, we argue that the collective efforts of researchers in politically charged areas may fail to converge upon the truth when there are few or no non-liberal researchers to raise questions and frame hypotheses in alternative ways. We do not intend this article to be an attack on social psychology—a field that has a long track record of producing research that is vital to understanding and improving the human condition (see examples in Zimbardo, 2004). We are proud to be social psychologists, and we believe that our field can—and will—embrace some relatively simple methods of using diversity to improve itself as a science.
2. Psychology is Less Politically Diverse than Ever
There are many academic fields in which surveys find self-identified conservatives to be about as numerous as self-identified liberals—typically business, computer science, engineering, health sciences, and technical/vocational fields (Zipp & Fenwick, 2006; Gross & Simmons, 2007)2. In the social sciences and humanities, however, there is a stronger imbalance. For instance, recent surveys find that 58 - 66 percent of social science professors in the United States identify as liberals, while only 5 - 8 percent identify as conservatives, and that self-identified Democrats outnumber Republicans by ratios of at least 8 to 1 (Gross & Simmons, 2007; Klein & Stern, 2009; Rothman & Lichter, 2008). A similar situation is found in the humanities, where surveys find that 52 - 77 percent of humanities professors identify as liberals, while only 4 - 8 percent identify as conservatives, and that self-identified Democrats outnumber Republicans by ratios of at least 5:1 (Gross & Simmons, 2007; Rothman & Lichter, 2008). In psychology the imbalance is slightly stronger: 84 percent identify as liberal while only 8 percent identify as conservative (Gross & Simmons, 2007; Rothman & Lichter, 2008). That is a ratio of 10.5 to 1. In the United States as a whole, the ratio of liberals to conservatives is roughly 1 to 2 (Gallup, 2010).
Has academic psychology always tilted so far left? The existing data are imperfect, as the only data we could find that date back beyond a few decades examined party identification (Democrat vs. Republican; McClintock, Spaulding, & Turner, 1965), not ideological self-placement. Before the 1980s, party identification did not correlate with the left-right dimension as strongly as it does today (Barber & McCarty, 2013). There used to be substantial minorities of liberal Republicans and conservative Democrats. Nonetheless, since the early 20th century, the Democratic Party has been the left-leaning party and the Republican Party has been the right-leaning party (Levendusky, 2009). In Figure 1, we have plotted all available data points on the political identity of psychologists at American colleges and universities, including both party identification (diamonds) and liberal-conservative identification (circles). Both sets of measures show a strong leftward movement. Psychology professors were as likely to report voting Republican as Democrat in presidential contests in the 1920s. From the 1930s through 1960, they were more likely to report voting for Democrats, but substantial minorities voted for Willkie, Eisenhower, and Nixon (in 1960). By 2006, however, the ratio of Democrats to Republicans had climbed to more than 11:1 (Gross & Simmons, 2007; Rothman & Lichter, 2008).
Is social psychology less politically diverse than academic psychology as a whole? There has never been an extensive or representative survey of the political attitudes of social psychologists, but we do have two imperfect sources of evidence. One of the largest gatherings of social psychologists is the presidential symposium at SPSP’s annual meeting. At the 2011 meeting in San Antonio, Texas, Jonathan Haidt asked the roughly 1,000 attendees to identify themselves politically with a show of hands. He counted the exact number of hands raised for the options “conservative or on the right” (3 hands), “moderate or centrist” (20 hands), and “libertarian” (12 hands). For the option “liberal or on the left,” it was not possible to count, but he estimated that approximately 80% of the audience raised a hand (i.e., roughly 800 liberals). The corresponding liberal-conservative ratio of 267:1 is surely an overestimate; in this non-anonymous survey, many conservatives may have been reluctant to raise their hands. But if conservatives were disproportionately reluctant to self-identify, that reluctance itself illustrates the problem we are raising.
The other piece of evidence we have comes from an anonymous internet survey conducted by Inbar and Lammers (2012), who set out to test Haidt’s claim that there were hardly any conservatives in social psychology. They sent an email invitation to the entire SPSP discussion list, from which 292 individuals participated3. Inbar and Lammers found that 85 percent of these respondents declared themselves liberal, 9 percent moderate, and only 6 percent conservative4 (a ratio of 14:1). Furthermore, the trend toward political homogeneity seems to be continuing: whereas 10% of faculty respondents self-identified as conservative, only 2% of graduate students and postdocs did so (Inbar, 2013, personal communication). This pattern is consistent with the broader trends throughout psychology illustrated in Figure 1: the field is shifting leftward, the ratio of liberals to conservatives is now greater than 10:1, and there are hardly any conservative students in the pipeline.
3. Three Ways That the Lack of Diversity Undermines Social Psychology
If left unchecked, an academic field can become a cohesive moral community, creating a shared reality (Hardin & Higgins, 1996) that subsequently blinds its members to morally or ideologically undesirable hypotheses and unanswered but important scientific questions (Haidt, 2012). The sociologist Christian Smith (2003) has studied such moral communities within the academy and has identified a set of moral narratives that link researchers’ conceptions of history to their conceptions of their research. Smith describes the left-leaning field of sociology as sharing what he calls the “liberal progress narrative.”
Once upon a time, the vast majority of human persons suffered in societies and social institutions that were unjust, unhealthy, repressive, and oppressive. These traditional societies were reprehensible because of their deep-rooted inequality, exploitation, and irrational traditionalism ... But the noble human aspiration for autonomy, equality, and prosperity struggled mightily against the forces of misery and oppression, and eventually succeeded in establishing modern, liberal, democratic… welfare societies. While modern social conditions hold the potential to maximize the individual freedom and pleasure of all, there is much work to be done to dismantle the powerful vestiges of inequality, exploitation, and repression. This struggle for the good society in which individuals are equal and free to pursue their self-defined happiness is the one mission truly worth dedicating one’s life to achieving. (Smith, 2003, p. 82)
Although Smith wrote this narrative for sociology, it is a plausible shared narrative for social psychology—a field that has produced copious research on racism, sexism, stereotypes, and the baneful effects of power and obedience to authority. Given the political homogeneity demonstrated in section 2 of this paper, the field of social psychology is at risk of becoming a cohesive moral community. Might a shared moral-historical narrative in a politically homogeneous field undermine the self-correction processes on which good science depends? We think so, and present three risk points—three ways in which political homogeneity can threaten the validity of social psychological science—and examples from the extant literature illustrating each point.
3.1 Risk Point #1: Liberal values and assumptions can become embedded into theory and method
Political values can become embedded into research questions in ways that make some constructs unobservable and unmeasurable, thereby invalidating attempts at hypothesis testing (Sniderman & Tetlock, 1986; Tetlock & Mitchell, 1993; Tetlock, 1994). The embedding of values occurs when value statements or ideological claims are wrongly treated as objective truth, and observed deviation from that truth is treated as error.
Example 1: Denial of environmental realities. Feygina, Jost, and Goldsmith (2010) sought to explain the “denial of environmental realities” using system justification theory (Jost & Banaji, 1994). In operationalizing such denial, the authors assessed the four constructs listed below, with example items in parentheses:
Construct 1: Denial of the possibility of an ecological crisis (“If things continue on their present course, we will soon experience a major environmental catastrophe,” reverse scored).
Construct 2: Denial of limits to growth (“The earth has plenty of natural resources if we just learn how to develop them.”)
Construct 3: Denial of the need to abide by the constraints of nature (“Humans will eventually learn enough about how nature works to be able to control it.”)
Construct 4: Denial of the danger of disrupting balance in nature (“The balance of nature is strong enough to cope with the impacts of modern industrial nations.”)
The core problem with this research is that it misrepresents those who merely disagree with environmentalist values and slogans as being in “denial.” Indeed, the papers Feygina et al. (2010) cited in support of their “denial” questions never used the terms “deny” or “denial” to describe these measures. Clark, Kotchen, and Moore (2003) referred to the items as assessing “attitudes” and Dunlap, Van Liere, Mertig, and Jones (2000) characterized the items as tapping “primitive beliefs” (p. 439) about the environment.
The term “denial” implies that 1) the claim being denied is a “reality” – that is, a descriptive fact, and that 2) anyone who fails to endorse the pro-environmental side of these claims is engaged in a psychological process of denial. We next describe why both claims are false, and why the measures, however good they are at assessing attitudes or primitive beliefs, fail to assess denial.
Construct 1 refers to a “possibility,” so denial would be the belief that an ecological crisis is impossible. This was not assessed, and the measure that supposedly tapped this construct refers to no descriptive fact. Without defining “soon,” “major,” or “crisis,” the statement cannot be a fact. And because it is not a statement of fact, disagreeing with it does not, and cannot, represent denial.
Similar problems plague Construct 2 and its measurement. Denial of the limits of growth could be measured by agreement with an alternative statement, such as “The Earth’s natural resources are infinite.” Agreement could be considered a form of denial of the limits of growth. However, this was not assessed. Absent a definition of “plenty,” it is not clear how this item could be refuted or confirmed. If it cannot be refuted or confirmed, it cannot be a descriptive fact. If it is not a fact, it can be agreed or disagreed with, but there is no “denial.” Even strongly agreeing with this statement does not necessarily imply denying that there are limits to growth. “Plenty” does not imply “unlimited.” Moreover, the supposed reality being denied is, in fact, heavily disputed by scholars, and affirming the Earth’s resources as plentiful for human needs, given human ingenuity, was a winning strategy in a famous scientific bet (Sabin, 2013).
Construct 3 is an injunction that we need to abide by the constraints of nature. Again “constraints of nature” is a vague and undefined term. Further, the construct is not a descriptive fact – it is a philosophical/ideological prescription, and the item is a prophecy about the future, which can never be a fact. Thus, this construct might capture some attitude towards environmentalism, but it does not capture denial of anything. It would be just as unjustified to label those who disagree with the item as being in denial about human creativity, innovation, and intelligence.
Construct 4 is similarly problematic. “Balance in nature” is another vague term, and the item assessing this construct is another vague prediction. One can agree or disagree with the item. And such differences may indeed be psychologically important. Disagreement, however, is not the same construct as denial.
Whether some people deny actual environmental realities, and if so, why, remains an interesting and potentially scientifically tractable question. For example, one might assess “environmental denial” by showing people a time-lapse video taken over several years showing ocean levels rising over an island, and asking people if sea levels were rising. There would be a prima facie case for identifying those who answered “no” to such a question as “denying environmental realities.” However, Feygina et al. (2010) did not perform such studies. Instead, they simply measured support for primitive environmentalist beliefs and values, called low levels of such support denial, and regressed it on system justification scores and other measures (a third, experimental study did not assess denial). None of Feygina et al.’s (2010) measures refer to environmental realities. Thus, the studies were not capable of producing scientific evidence of denial of environmental realities.
Vague environmentalist philosophical slogans and values are unjustifiably converted to scientific truths even though no data could ever tell us whether humans should “abide by the constraints of nature.” It is not just that people have different environmental attitudes; the problem is the presumption that one set of attitudes is right and those who disagree are in denial. This conversion of a widely shared political ideology into “reality,” and its concomitant treatment of dissent as denial, testifies to the power of embedded values to distort science within a cohesive moral community.
Example 2: Ideology and unethical behavior. Son Hing, Bobocel, Zanna, and McBride (2007) found that: 1) people high in social dominance orientation (SDO) were more likely to make unethical decisions, 2) people high in right-wing authoritarianism (RWA) were more likely to go along with the unethical decisions of leaders, and 3) dyads with high SDO leaders and high RWA followers made more unethical decisions than dyads with alternative arrangements (e.g., low SDO—low RWA dyads).
Yet consider the decisions they defined as unethical: not formally taking a female colleague’s side in her sexual harassment complaint against her subordinate (given little information about the case), and a worker placing the well-being of his or her company above unspecified harms to the environment attributed to the company’s operations. Liberal values of feminism and environmentalism were embedded directly into the operationalization of ethics, even to the extent that participants were expected to endorse those values in vignettes that lacked the information one would need to make a considered judgment.
How to recognize and avoid embedded values biases. The appearance of certain words that imply pernicious motives (e.g., deny, legitimize, rationalize, justify, defend, trivialize) may be particularly indicative of research tainted by embedded values. Such terms imply, for example, that the view being denied is objectively valid and the view being “justified” is objectively invalid. In some cases, this may be scientifically tenable, as when a researcher is interested in the denial of some objective fact. Rationalization can be empirically demonstrated, but doing so requires more than declaring some beliefs to be rationalizations, as in Napier and Jost (2008), where endorsement of the efficacy of hard work – on one item – was labeled rationalization of inequality.
Turnabout tests often constitute a simple tool for identifying and avoiding embedded values bias (Tetlock, 1994). Imagine a counterfactual social psychology field in which conservative political views were treated as “scientific facts” and disagreements with conservative views treated as denial or error. In this field, scholars might regularly publish studies on "the denial of the benefits of free market capitalism” or “the denial of the benefits of a strong military” or “the denial of the benefits of church attendance.” Or, they might publish studies showing that people low in RWA and SDO (i.e., liberals) are more unethical because they are more willing to disrespect authority, disregard private property, and restrict voluntary individual choice in the marketplace. Embedding any type of ideological values into measures is dangerous to science. Later in this paper we review evidence suggesting that this is much more likely to happen – and to go unchallenged by dissenters – in a politically homogeneous field.
3.2 Risk Point #2: Researchers may concentrate on topics that validate the liberal progress narrative and avoid topics that contest that narrative
Since the Enlightenment, scientists have thought of themselves as spreading light and pushing back the darkness. The metaphor is apt, but in a politically homogeneous field, a larger-than-optimal number of scientists shine their flashlights on ideologically important regions of the terrain. Doing so leaves many areas unexplored. Even worse, some areas become walled off, and inquisitive researchers risk ostracism if they venture in (see Redding 2013 for a discussion of a recent example in sociology). Political homogeneity in social psychology can restrict the range of possible research programs or questions. It may also deprive us of tools and research findings we need to address pressing social issues. Two examples below illustrate this threat.
Example 1: Stereotype accuracy. Since the 1930s, social psychologists have been proclaiming the inaccuracy of social stereotypes, despite lacking evidence of such inaccuracy. Evidence has seemed unnecessary because stereotypes have been, in effect, stereotyped as inherently nasty and inaccurate (see Jussim, 2012a for a review).
Some group stereotypes are indeed hopelessly crude and untestable. But some may rest on valid empiricism—and represent subjective estimates of population characteristics (e.g., the proportion of people who drop out of high school, are victims of crime, or endorse policies that support women at work; see Jussim, 2012a; Ryan, 2002, for reviews). In this context, it is not surprising that the rigorous empirical study of the accuracy of factual stereotypes was initiated by one of the very few self-avowed conservatives in social psychology—Clark McCauley (McCauley & Stitt, 1978). Since then, dozens of studies by independent researchers have yielded evidence that stereotype accuracy (of all sorts of stereotypes) is one of the most robust effects in all of social psychology (Jussim, 2012a). Here is a clear example of the value of political diversity: a conservative social psychologist asked a question nobody else thought (or dared) to ask, and found results that continue to make many social psychologists uncomfortable. McCauley’s willingness to put the assumption of stereotype inaccuracy to an empirical test led to the correction of one of social psychology’s most longstanding errors.
Example 2: The scope and direction of prejudice. Prejudice and intolerance have long been considered the province of the political right (e.g., Adorno, Frenkel-Brunswik, Levinson, & Sanford, 1950; Duckitt, 2001; Lindner & Nosek, 2009). Indeed, since Allport (1954), social psychologists have suspected that there is a personality type associated with generalized prejudice toward a variety of social groups (Akrami, Ekehammar, & Bergh, 2011), which they have linked to political conservatism (Roets & van Hiel, 2011). More recently, however, several scholars have noted that the groups typically considered targets of prejudice in such research programs are usually low status and often left-leaning (e.g., African-Americans and Communists; for more examples and further arguments, see Chambers, Schlenker & Collisson, 2013 and Crawford & Pilanski, 2013). Using research designs that include both left-leaning and right-leaning targets, and using nationally representative as well as student and community samples, these researchers have demonstrated that prejudice is potent on both the left and right. Conservatives are prejudiced against stereotypically left-leaning targets (e.g., African-Americans), whereas liberals are prejudiced against stereotypically right-leaning targets (e.g., religious Christians; see Chambers et al., 2013; Crawford & Pilanski, 2013; Wetherell, Brandt, & Reyna, 2013).
Summarizing these recent findings, Brandt, Reyna, Chambers, Crawford, and Wetherell (2014) put forward the ideological conflict hypothesis, which posits that people across the political spectrum are prejudiced against ideologically dissimilar others. Once again, the shared moral narrative of social psychology seems to have restricted the range of research: the investigation of prejudice was long limited to prejudice against the targets that liberals care most about. But the presence of a non-liberal researcher (John Chambers is a libertarian) contributed to an expansion of the range of targets, which might, over time, lead the entire field to a more nuanced view of the relationship between politics and prejudice.
How to avoid a narrow emphasis on topics that advance liberal narratives. When researchers primarily focus on addressing questions that advance liberal narratives, or systematically ignore research inconsistent with liberal narratives, the risk of political bias increases. Instead of assuming that stereotypes are inaccurate without citing evidence, ask, “How (in)accurate are stereotypes? What has empirical research found?” Instead of asking, “Why are conservatives so prejudiced and politically intolerant?” (Hodson & Busseri, 2012; Lindner & Nosek, 2009), ask, “Which groups are targets of prejudice and intolerance across the political spectrum, and why?” (Brandt et al., 2014). One does not need to be politically conservative to ask the latter questions. Indeed, one of the authors of the ideological conflict hypothesis (Crawford) self-describes as liberal. Thus, simply having an ideology does not inevitably lead to biased research, even on politicized topics. Nonetheless, as we show later in this paper, having a greater number of non-liberal scientists would likely reduce the time it takes for social psychology to correct longstanding errors on politicized topics.
3.3 Risk Point #3: Negative attitudes regarding conservatives can produce a psychological science that mischaracterizes their traits and attributes
A long-standing view in social-political psychology is that the right is more dogmatic and intolerant of ambiguity than the left, a view Tetlock (1983) dubbed the rigidity-of-the-right hypothesis. Altemeyer (1996; 1998) argued that a consequence of this asymmetry in rigidity is that those on the right (specifically, people high in RWA) should be more prone to making biased political judgments than those on the left. For example, Altemeyer (1996) found that people high in RWA were biased in favor of Christian over Muslim mandatory school prayer in American and Arab public schools, respectively, whereas people low in RWA opposed mandatory school prayer regardless of the religious target group. On the basis of these and other results, Altemeyer (1996) characterized people high in RWA (who tend to be socially conservative) as hypocritical and rigid, and people low in RWA (who tend to be socially liberal) as consistent and fair-minded. Others have relied on this evidence to make similar arguments (e.g., Peterson, Duncan, & Pang, 2002). But had social psychologists studied a broad enough range of situations to justify these broad conclusions? Recent evidence suggests not. The ideologically objectionable premise model (IOPM; Crawford, 2012) posits that people on the political left and right are equally likely to approach political judgments with their ideological blinders on. That said, they will only do so when the premise of a political judgment is ideologically acceptable. If it’s objectionable, any preferences for one group over another will be short-circuited, and biases won’t emerge. The IOPM thus allows for biases to emerge only among liberals, only among conservatives, or among both liberals and conservatives, depending on the situation.
For example, reinterpreting Altemeyer’s mandatory school prayer results, Crawford (2012) argued that for people low in RWA who value individual freedom and autonomy, mandatory school prayer is objectionable; thus, the very nature of the judgment should shut off any biases in favor of one target over the other. However, for people high in RWA who value society-wide conformity to traditional morals and values, mandating school prayer is acceptable; this acceptable premise then allows for people high in RWA to express a bias in favor of Christian over Muslim school prayer. Crawford (2012, Study 1) replaced mandatory prayer with voluntary prayer, which would be acceptable to both people high and low in RWA. In line with the IOPM, people high in RWA were still biased in favor of Christian over Muslim prayer, while people low in RWA now showed a bias in favor of Muslim over Christian voluntary prayer. Hypocrisy is therefore not necessarily a special province of the right. In another study, Crawford (2012, Study 2) reasoned that the left typically finds it acceptable to criticize and question authority. Therefore, a scenario involving a subordinate criticizing an authority figure would permit people low in RWA to punish a subordinate who criticizes an ideologically similar leader (e.g., President Barack Obama) more harshly than one who criticizes an ideologically dissimilar leader (e.g., President George W. Bush). However, such criticism of authority represents an objectionable premise for people high in RWA—thus, they should punish the subordinate equally, regardless of the leader’s identity. Consistent with the IOPM, people low in RWA more harshly punished a military general who criticized Obama than one who criticized Bush, whereas people high in RWA punished the general equally regardless of the target leader’s identity. Thus, this scenario shows the reversal of Altemeyer’s findings—biases emerged among the left, but not the right. 
Results from seven scenarios have supported the ideologically objectionable premise model (see Crawford, 2012; Crawford & Xhambazi, 2013) and indicate that biased political judgments are not predicted by ideological orientation (as per Altemeyer), but rather by the qualities of the judgment scenarios used in the research.
These examples illustrate the threats to truth-seeking that emerge when members of a politically homogeneous intellectual community are motivated to cast their perceived outgroup (i.e., the ones who violate the liberal progress narrative) in a negative light. If there were more social psychologists who were motivated to question the design and interpretation of studies biased toward liberal values during peer review, or if there were more researchers running their own studies using different methods, social psychologists could be more confident in the validity of their characterizations of conservatives (and liberals).