As incidents of domestic and sexual violence have become increasingly publicised, the body of research on domestic and sexual violence perpetrator intervention programs has also grown rapidly, particularly since the 1990s.
Research has sought to inform the design and delivery of intervention programs by:
developing and refining theoretical frameworks for understanding the causes of domestic and sexual violence
evaluating the effectiveness of intervention programs in reducing recidivism and identifying methodological issues for assessing program effectiveness
identifying elements of good practice both for intervention programs (i.e., effective program components) and for evaluation research.
Theoretical frameworks for domestic and sexual violence play a critical role in shaping intervention programs for perpetrators, who are predominantly men (e.g., Chung, O’Leary & Hand, 2006; Day, Chung, O’Leary & Carson, 2009). The conceptualisation of domestic and sexual violence as behaviour caused by psychological dysfunction or other individual or socio-demographic characteristics, for example, removes the responsibility of violence from the perpetrator, and tends to support a psychotherapeutic approach to intervention. Understanding domestic and sexual violence as the result of social constructions about masculinity, gender identities and power relations, on the other hand, supports a gendered and educational approach to intervention, and points to the need to address social structures that reinforce men’s violence against women.
Theoretical frameworks also differ in the emphasis they place on intervention programs as the best way to address domestic violence and sexual assault. The social constructionist or feminist perspective on men’s violence against women has played a particularly important role in emphasising the principle of responsibility or accountability, which has been widely incorporated into the design of intervention programs across theoretical frameworks. However, considerable debate remains about whether and how theoretical frameworks can be effectively translated into program delivery, and which aspects of which theoretical frameworks (and hence which program components) have the greatest impact on reducing recidivism.
Rather than being underpinned by a single theoretical framework, intervention programs for domestic and sexual violence perpetrators are increasingly drawing on multiple (and sometimes contradictory) theoretical perspectives. This amalgamation of theoretical approaches poses challenges for evaluation research in terms of attributing overall program effectiveness to specific theoretical approaches and ascertaining the relative contribution of different program components to perpetrator and victim outcomes. While an increasing number of evaluation studies have been undertaken on domestic and sexual violence perpetrator intervention programs, there are also a number of methodological challenges that affect the capacity of evaluation research to determine program effectiveness.
In this section of the literature review, we first outline key evaluation approaches to domestic and sexual violence perpetrator intervention programs and associated methodological issues. Major intervention approaches and the evidence on program effectiveness for domestic violence intervention programs and sexual assault intervention programs are then discussed in separate sections.
3.2 Evaluation research methodology and issues
While an increasing number of evaluation studies have been undertaken to determine the effectiveness of domestic and sexual violence perpetrator intervention programs, few studies are considered to be of high quality from an empirical perspective, which limits the capacity of this body of research to establish program effectiveness (see e.g., Babcock, Green & Robie, 2004; Eckhardt, Murphy, Black & Suhr, 2006). There are also ongoing debates among researchers about the design and quality of evaluation studies of domestic and sexual violence perpetrator intervention programs, in particular, what constitutes a high quality study and how effectiveness should be determined (e.g., Ashcroft, Daniels & Hart, 2003; Day et al., 2009; Eckhardt et al., 2006; Gondolf, 2004, 2009, 2010).
A set of guidelines for evaluating sex offender treatment was developed by the Collaborative Outcome Data Committee (CODC; Beech et al., 2007). These guidelines were developed in light of the lack of consensus regarding the characteristics of a high quality or credible study. The CODC recognises that both experimental and quasi-experimental studies have merit and limitations, and that different approaches are needed to address different research questions in different contexts. Given the complexity of intervention programs for sex offenders, the CODC argued that multiple research methods and small studies are needed to contribute to cumulative knowledge about the effectiveness of different intervention programs.
The guidelines address seven elements: administrative control of the independent variable, experimenter expectancies, sample size, attrition, equivalence of groups, outcome variables and correct comparison conducted. These seven elements are assessed via 20 items to produce a categorisation of the study quality as strong, good, weak or rejected. A strong study is well designed and implemented, has minimal bias in its estimate of treatment effectiveness, and produces convincing results; any problems are likely to be minor and unlikely to change the findings.
Using the CODC guidelines, a recent meta-analysis of studies on sexual offender intervention programs by Hanson, Bourgon, Helmus and Hodgson (2009) categorised only five studies (out of 129 studies) as good, with 19 as weak, and 104 as rejected.
Evaluation research on perpetrator intervention programs typically involves randomised controlled trials (RCTs), that is, a ‘true’ experimental design, or quasi-experimental designs. Each design has its own challenges in terms of evaluating the effectiveness of domestic and sexual violence intervention programs. The two research designs and their methodological challenges are described below. We recognise that in addition to experimental and quasi-experimental studies, qualitative studies based on interviews or focus groups have also been undertaken (and are sometimes the preferred method of feminist researchers) to determine the effectiveness of intervention programs. The empirical reliability and validity of qualitative studies, however, can often be more difficult to establish, and qualitative studies are generally not included in meta-analyses of domestic and sexual violence perpetrator intervention programs.
A true experiment is conducted in highly controlled conditions in order to isolate the effects of an intervention program, so that changes in the outcome measure(s) can be attributed to the intervention program. In doing so, a true experiment involves randomly assigning participants (i.e., offenders) to either the intervention or no intervention (control) condition, and matching participants in the two groups to ensure equivalence on a wide range of relevant variables such as age, employment status, psychological indicators (e.g., depression, personality disorders), and criminal history. The implementation of the intervention program is tightly controlled and monitored in an experiment, with training and program manuals provided to program facilitators to ensure consistency and integrity in program delivery.
While considered to be the ‘gold standard’ in evaluation research, true experimental designs pose significant implementation challenges (e.g., see Day et al., 2009; Gondolf, 2004, 2009, 2010). Random assignment to the intervention or no intervention group is not always feasible owing to ethical and practical concerns (e.g., not providing intervention to offenders who are perceived to have a high need for intervention or who are motivated to change). Random assignment and identifying a matched control group can also be time and resource intensive, posing further challenges for evaluation.
Conducting a highly controlled experiment requires close collaboration between the researcher and service provider and places additional resource pressures on the service provider (e.g., staff training). Factors such as the organisation’s research readiness and staff turnover, which are beyond the control of the researcher, can impact on the integrity of program implementation, and therefore the quality of the research and validity of the findings. Further, while a true experiment might have high internal validity, it can lack external validity – that is, its findings may not generalise to real-world program implementation contexts.
Given the challenges associated with conducting experimental studies, quasi-experimental designs are most commonly employed in evaluations of domestic and sexual violence offender intervention programs (Gondolf, 2004, 2009).
Evaluation studies using quasi-experimental designs typically determine the effectiveness of intervention programs by comparing offenders who complete the program (intervention group) to those who drop out of the program or ‘no-shows’. This comparison is problematic because program completers and non-completers can differ systematically on a range of individual variables (e.g., criminal history, comorbidity, education). Consequently, differences in recidivism between program completers and non-completers may be caused by pre-existing differences between the two groups rather than by the effects of the intervention program. These differences have typically been addressed by statistically controlling for the effects of variables such as criminal history, alcohol use, and age prior to assessing the effects of the intervention.
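The confounding problem described above can be made concrete with a small numerical sketch. The data, variable names, and figures below are entirely hypothetical (not drawn from any cited study); the sketch uses simple stratification as a crude stand-in for the regression-based statistical control used in actual evaluation studies.

```python
# Illustrative sketch (hypothetical data): why a naive completer-vs-dropout
# comparison can mislead, and how adjusting for a pre-existing covariate
# (here, prior criminal history) changes the picture.

def recid_rate(group):
    """Proportion of offenders in `group` who reoffended."""
    return sum(p["recidivated"] for p in group) / len(group)

# Hypothetical offenders: completers tend to have fewer prior offences.
completers = (
    [{"priors": 0, "recidivated": r} for r in [0] * 16 + [1] * 4] +  # 20 low-risk
    [{"priors": 1, "recidivated": r} for r in [0] * 3 + [1] * 2]     # 5 high-risk
)
dropouts = (
    [{"priors": 0, "recidivated": r} for r in [0] * 4 + [1] * 1] +   # 5 low-risk
    [{"priors": 1, "recidivated": r} for r in [0] * 12 + [1] * 8]    # 20 high-risk
)

# Naive comparison confounds the program effect with group composition.
naive_gap = recid_rate(dropouts) - recid_rate(completers)

# Stratified comparison: compare within each level of prior history,
# then average the per-stratum gaps.
strata_gaps = []
for level in (0, 1):
    c = [p for p in completers if p["priors"] == level]
    d = [p for p in dropouts if p["priors"] == level]
    strata_gaps.append(recid_rate(d) - recid_rate(c))
adjusted_gap = sum(strata_gaps) / len(strata_gaps)
```

In this contrived example the naive comparison shows a 12 percentage-point difference in recidivism favouring completers, yet within each level of prior history the two groups reoffend at identical rates: the entire apparent program effect is an artefact of group composition.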
In addition, it is worth noting that ‘program dropouts’ are not defined in a consistent way. They may include those who are removed from a program as a result of poor compliance or those who self-withdraw from the program. While quasi-experimental designs are easier to implement and have greater external validity, the findings of such studies are considered to be less valid because there are fewer controls to eliminate alternative explanations of findings.
A multi-site approach to program evaluation is increasingly endorsed by researchers in order to take into account the impact of the program context on the effectiveness of intervention programs (e.g., Day et al., 2009; Gondolf, 2004, 2009). In other words, rather than evaluating only a single program at one location, program (internal) and contextual (external) factors affecting the effectiveness of a program can be more adequately discerned when the evaluation includes multiple programs at multiple locations.
Despite the challenges associated with implementing well-designed experimental and quasi-experimental studies of domestic and sexual violence offender intervention programs, the accumulation of evaluation studies over time has provided valuable information about, and points for further investigation on, the nature and effectiveness of intervention programs.