Mixed methods research has evolved from its formative years a half-century ago into an increasingly accepted methodology, especially among pragmatists. The competing epistemologies underpinning the evaluation community’s paradigm wars have been brought into common practice through emerging typologies and terminology. Although many purists, positivists and interpretivists alike, still decry the potential pitfalls of mixed methods research, the incompatibility thesis appears not to have limited the recent growth of mixed methods research within publicly funded research projects. The purposes of this paper are to provide an overview of mixed methods research, to summarize the viewpoints of some of the more prominently published authors in the field, and to highlight the major problems and controversies surrounding mixed methods.
Mixed Methods Research:
An Emerging Paradigm?
Mixed methods research has garnered increasing attention from the research and evaluation community during the past decade. Throughout the 1980s and 1990s, while debate raged over the supremacy of quantitative versus qualitative research paradigms, pragmatists developed a third paradigm that supports the combination of quantitative and qualitative methodologies within a single research project: mixed methods. Following a description of how this paper was developed, the discussion outlines quantitative and qualitative research paradigms, mono-method and mixed method research, the intellectual underpinnings of mixed methods in pragmatism, and problems and controversies surrounding the advent of the mixed methods paradigm. Conclusions regarding the adoption of mixed methods in contemporary research and evaluation follow.
Sources for this article were identified through an electronic search of the term ‘mixed methods research’ in a series of electronic databases: the Google and Yahoo! search engines; Google Scholar; the University of Victoria’s library web-based search engine; and Canadian and American-based evaluation journals.i
This initial search yielded an unwieldy number of articles and other sources (upwards of 35 million). To narrow the sources, the researchers identified articles or other sources that were highly ranked in two or more search returns. These articles were reviewed for content and citations of other (second tier) articles. Based on this initial review, a general outline was drafted and research team members were assigned responsibility for the various sections based on mutual agreement.
The research team then electronically searched for cited articles and for additional information to inform the drafted outline, starting with the second tier articles and then with others highly ranked in a single search engine. Each subsequent article was reviewed for content and citation until there was sufficient information to address the issues identified in the draft outline. The draft sections were shared amongst the research team for review then combined and edited by all research team members.
Quantitative and Qualitative Paradigms
For decades the debate regarding research methodology, or paradigm wars, has been waged by academics around the world. Understanding the debate begins with an understanding of the two traditional methodologies – quantitative and qualitative.
Quantitative research focuses on deductive or ‘top-down’ logic, in which the researcher tests hypotheses and theories through the collection and analysis of numerical data (Johnson & Onwuegbuzie, 2004). Data collection in quantitative research is based on defined variables and uses precise measurement with validated instruments, aimed at drawing general conclusions confirmed via statistical analysis. Quantitative research can be experimental, quasi-experimental, or non-experimental. The randomized controlled trial is a well-known example of experimental quantitative research, while the single-group post-test design is an example of non-experimental quantitative research (O’Sullivan, Rassel & Berner, 2008).
By comparison, qualitative research is an inductive or ‘bottom-up’ methodology that focuses on researcher-generated hypotheses arising from information gathered in the field (Frankel & Devers, 2000). Data collection in qualitative research is based on observation, field notes, and interviews collected by researchers. Investigators search for patterns and themes in the collected information and present them in narrative report form. Qualitative research is typically less focused on providing generalized insight and more focused on descriptive understanding. The results from multiple studies can, however, be used to provide a more generalized understanding or to spur particular action (Frankel & Devers, 2000). Examples of qualitative research include ethnography, historical research, and case study research.
The singular use of quantitative or qualitative research paradigms constitutes what is considered mono-method research. Mono-methodology follows a narrow framework in which the research objectives, data collection, and analysis all adhere to a single paradigm, and no boundaries between quantitative and qualitative methods are crossed (for a graphic example of mono-method research paths, see Figure 1). For more than a century, advocates on either side of the methodological debate have argued the superiority of one research philosophy over the other (Johnson & Onwuegbuzie, 2004). Proponents of pure mono-methods argue that the quantitative and qualitative paradigms are absolutely incompatible, cannot be mixed, and that research strictly following one method is appropriate (Onwuegbuzie & Leech, 2005). More moderate advocates maintain the separation between paradigms while recognizing that both methodologies have merits and drawbacks and are, as a result, more or less applicable depending on the needs of a particular research question (Onwuegbuzie & Leech, 2005). Peace in the paradigm wars appears to have been achieved (Bryman, 2006), leading to a general understanding that quantitative and qualitative research can be usefully integrated into a third research paradigm: mixed methods.
Figure 1. Six possible categories or designs of mixed method research (adapted from Johnson & Onwuegbuzie, 2004).
Mixed Methods Research
Mixed methods research is known by many different names, including Multitrait-Multimethod Research, Integrated Research, Combined Research, Methodological Triangulation, and Mixed Methodology (Creswell & Plano Clark, 2007). Not only are there many synonyms for mixed methods, but Bazeley (2004) claims that “there is no one mixed methods methodology, and the term can be applied to widely divergent approaches to research” (p. 142). That said, Johnson and Onwuegbuzie (2004) categorize mixed methods research into mixed-method designs and mixed-model designs. The main difference between the two is that the former refers to the inclusion of sequential quantitative and qualitative phases in a single research study, whereas the latter refers to the concurrent use of traditional methods across the stages of research (Johnson & Onwuegbuzie, 2004).
Mixed methods research experienced its formative years in the late 1950s, when Campbell and Fiske began using multiple quantitative methods in single research studies (Creswell & Plano Clark, 2007). As researchers began combining quantitative and qualitative methods in the late 1970s, debate intensified as to the propriety of mixed methods, and the following twenty years saw the development of conflicting stances and typologies (Creswell & Plano Clark, 2007). In the last decade, advocates of mixed methods research have been working diligently to position it as complementary to traditional mono-method approaches (Creswell & Plano Clark, 2007). This advocacy may be yielding results: as Creswell (2008) notes, the documented use of mixed methods in funded National Institutes of Health research projects rose dramatically between 2004 and 2008.
As outlined by Johnson & Onwuegbuzie (2004), mixed methods research typically serves five broad purposes: triangulation, complementarity, initiation, development, and expansion. Triangulation and complementarity refer mainly to the corroboration or enhancement of findings across quantitative and qualitative methods. Initiation, development, and expansion generally refer to new avenues of research arising from conflicting or contributory findings across traditional methods (Johnson & Onwuegbuzie, 2004).
Proponents of mixed methods research maintain that although quantitative methods may lack contextual understanding and qualitative methods may contain significant sources of bias, combining the two methods can mitigate the inherent mono-method weaknesses (Creswell & Plano Clark, 2007). This viewpoint, among others, has stemmed from a philosophy known as pragmatism.
The theoretical underpinnings for mixed methods research lie in pragmatism (Johnson & Onwuegbuzie, 2004). Pragmatism is a deliberate stepping back from the strict epistemologies of quantitative or qualitative mono-method research. In 1980, Heilman focused attention on the users of research findings, suggesting that “the most useful way of contrasting the two approaches [qualitative and quantitative] is in terms of when and for what audiences they are most appropriate, rather than in terms of their incorporating contradictory research philosophies” (p. 706). McConney, Rudd, and Ayres (2002) built on this idea, suggesting that “…pragmatists are more concerned with informing stakeholders and policy makers by using whatever type of data or method best answers [the research] questions” (p. 122). They go on to describe the pragmatists’ belief that “the combined use of qualitative and quantitative data may strengthen evaluations by offsetting the limitations and biases of any one method.” Johnson and Onwuegbuzie (2004) see pragmatism as a pluralist position able to improve communication among researchers from different paradigms and as a means of fruitfully mixing research approaches. Mixed methodology is thus a tool for researchers who focus on how information will be used, by whom it will be used, and under what conditions. However, mixed methods research is not without problems and controversies.
Problems and Controversies
The primary problem associated with mixed methods research lies in its definition, interpretation, and application. As noted previously, Johnson and Onwuegbuzie (2004) take a broad perspective of mixed methods, defining it as a category of research that brings together the techniques, styles, concepts, or vernacular of both quantitative and qualitative research in one study. However, many researchers have become so entrenched in their methods that they believe them to be superior to and incompatible with other methods (Guba, 1987). Similarly, because researchers may be confident in only one form of research (e.g., quantitative), corruption of methods or violation of assumptions may occur, leading to questionable analysis (Bazeley, 2004).
The Paradigm Wars and the Incompatibility Thesis
Each field or discipline has an accepted way of conducting research. These ways are built on values, beliefs, and assumptions that collectively form its research culture or paradigm (Johnson & Onwuegbuzie, 2004). The physical and biological disciplines are often associated with researchers who can be considered quantitative purists (e.g., positivists). This group insists on precisely measured, objective observations. Conversely, the social sciences and humanities are often associated with qualitative researchers (e.g., interpretivists), who work exclusively with interpretive and subjective experiences (Bazeley, 2004; Howe, 1988).
Herein lies the incompatibility. The purists in each methodological camp believe that their methods are incompatible with the other’s. The positivists argue that the observer is separate from the object being measured. The interpretivists argue that the observer and the object are connected because, without the act of observing, there is no context for the observation. It is very much an “If a tree falls in a forest…?” type of question. Guba (1987) describes the incompatibility with the analogy that one’s belief that the world is round prevents one from believing that the world is flat.
Each methodology has specific strengths and weaknesses; it is the focus on the differences, however, that prolongs the incompatibility debate.
Conclusions and Assessment
As an alternative to the purist approaches, we find compatibilists or pragmatists (Johnson & Onwuegbuzie, 2004), who advocate the mixing and matching of quantitative and qualitative approaches. Johnson and Onwuegbuzie (2004) describe the application of quantitative or qualitative methods to three stages of research (i.e., objective development, data collection, and data analysis). With this model, they show how various designs of mixed research can be classified based on the stage at which the research is mixed. Figure 1 illustrates the continuum of designs from purely quantitative to purely qualitative. Although useful as a diagrammatic simplification, the number of possible mixed methods permutations is far greater, since the type of and degree to which quantitative and qualitative approaches are combined are limited only by the researcher’s imagination. That being said, it is important to note that not all studies should be mixed methods designs. However, the mixed methods approach appears to have recently become more functional and widely used in complex interdisciplinary research. In a review of new research projects funded by the National Institutes of Health from 2003-2007, Creswell (2008) found that use of mixed methods approaches had more than doubled each year since 2004. So while the paradigm wars may not be over, it does appear that a new group (e.g., pragmatists) can claim an important victory.
Recommended Sources for Further Exploration
Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches. Applied Social Research Methods Series (Vol. 46). Thousand Oaks, CA: Sage.
Johnson, R.B., Onwuegbuzie, A.J., & Turner, L.A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1(2), 112-133. DOI: 10.1177/1558689806298224
Bazeley, P. (2004). Issues in mixing qualitative and quantitative approaches to research. In R. Buber, J. Gadner, & L. Richards (Eds.), Applying qualitative methods to marketing management research (pp. 141-156). UK: Palgrave Macmillan. Retrieved February 3, 2009 from http://www.researchsupport.com.au/MMIssues.pdf
Creswell, J. (2008). A current assessment of how mixed methods has developed. Keynote address presented at the 4th Annual Mixed Methods Conference, Cambridge University, UK. Retrieved February 8, 2009 from www.mixedmethods.leeds.ac.uk/downloads/Cambridge2008Keynote.ppt
Creswell, J. & Plano Clark, V. (2007). Understanding mixed methods research. In J. Creswell (Ed.), Designing and conducting mixed methods research (pp. 1-19). Thousand Oaks, CA: Sage.
Frankel, R., & Devers, K. (2000, March). Qualitative research: A consumer's guide. Education for Health: Change in Learning & Practice, 13(1), 113-123.
Guba, E. (1987). What have we learned about naturalistic evaluation? Evaluation Practice, 8(1), 23-43.
Heilman, J.G. (1980). Paradigmatic choices in evaluation methodology. Evaluation Review, 4(5), 693-712. DOI: 10.1177/0193841X8000400510. Retrieved February 5, 2009, from http://erx.sagepub.com/cgi/content/abstract/4/5/693
Howe, K.R. (1988). Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Educational Researcher, 17(8):10-16. Retrieved February 3, 2009 from http://epicpolicy.org/files/Howe_Against_the_Quant_Qual_Incompatibility_Thesis.pdf
Johnson, R. & Onwuegbuzie, A. (2004). Mixed methods research: a research paradigm whose time has come. Educational Researcher, 33(7), 14–26. Retrieved February 2, 2009 from http://educ.queensu.ca/graduate/news/documents/Johnson2004ONMIXEDMETHODS.pdf
McConney, A., Rudd, A., & Ayres, R. (2002). Getting to the bottom line: A method for synthesizing findings within mixed-method program evaluations. American Journal of Evaluation, 23(2), 121-140. DOI: 10.1177/109821400202300202. Retrieved February 5, 2009, from http://aje.sagepub.com/cgi/content/abstract/23/2/121
Onwuegbuzie, A., & Leech, N. (2005, December). On becoming a pragmatic researcher: The importance of combining quantitative and qualitative research methodologies. International Journal of Social Research Methodology, 8(5), 375-387.
O’Sullivan, E., Rassel, G., & Berner, M. (2008). Research methods for public administrators (5th ed.). Pearson Longman.
i. Evaluation journals were searched separately for two reasons: (i) the course has an emphasis on evaluation and (ii) one of the researchers had access outside of the University to these journals.