This is an excerpt from Advances in Motivation in Sport and Exercise, 3rd Edition.
Intervention Planning and Evaluation
As highlighted in previously cited reviews (Kahn et al., 2002; Ogilvie et al., 2007), there is a need for enhanced evidence of what works in terms of promoting physical activity in the real world, that is, intervention effectiveness. There are several reasons for the current limitations in the evidence base and for the disappointing effects of some commonly used interventions (Blamey & Mutrie, 2004). These reasons include limitations in planning and implementation, such as
- a lack of theory-driven interventions (those informed by psychological or behavioral theory that are appropriately tailored and targeted) and
- intervention plans that do not explicitly detail the following:
  - the anticipated steps between the selected intervention activities and the long-term outcomes (e.g., the short-term and interim outcomes), or the actual mechanisms, psychological concepts, mediators, or correlates of behavior change that the planned activities are expected to trigger or change in the target population;
  - the likely reach of the intervention and the levels of exposure that targeted participants will experience;
  - the evidence upon which the intervention activities are based; and
  - issues related to implementation failure, such as a lack of targeting and tailoring, or failure to deliver the intervention according to the agreed evidence-informed plans or consistently across multiple sites.
The gaps in evidence are also partly explained by evaluation issues, such as the lack of an evaluative culture in many public sector agencies tasked with promoting physical activity, the poor quality of many of the evaluations conducted, and a tendency to present evaluation findings without reflecting on why programs have been successful for some participants but not others (e.g., differential motivations and mediators for various target groups).
In an attempt to address these issues and improve the planning, implementation, and evaluation of social interventions, increasing emphasis has been placed on outcome-focused planning, improved process evaluation, and evaluation approaches that attempt to enhance attribution in complex real-life interventions (where controlled experiments are more difficult to conduct). The latter approaches are often referred to as theory based. Exemplified by theories of change and realistic evaluation (Blamey & Mackenzie, 2007; Fulbright-Anderson, Kubisch, & Connell, 1998; Pawson & Tilley, 1997), they attempt to uncover both the program theories (e.g., the prescriptive theory, or the program activities and their postulated links to outcomes) and the more descriptive theories (e.g., the likely causal mechanisms that will motivate behavior change, such as reducing known barriers to physical activity like cost or time, or changing psychological concepts or mediators) (Chen, 1990).
Outcome-focused planning encourages planning from right to left: plans first detail the specific long-term outcomes and the interim and short-term outcomes needed to achieve them, and these outcomes then drive the selection of activities. The activities and interventions are, in turn, informed by evidence (evaluative learning, review evidence, and tacit experience) of their likely effect on the agreed outcomes and for specific target groups. In reality, planning processes in many public agencies are more influenced by left-to-right thinking: in other words, what can we achieve through the activities that we currently offer?
Evaluation approaches linked to theory, more so than traditional evaluation approaches, seek to understand the prescriptive and descriptive theory of an intervention (Chen, 1990; Weiss, 1998) by explicating the detailed program plans and their underlying assumptions about the psychological concepts and mediators that the planned activities are trying to change. As highlighted earlier, the uncovered theories are then used to drive the design of the subsequent evaluation and the methods it will use. These approaches attempt, where feasible, to forge explicit links between process and outcome evaluation data so that changes in longer-term outcomes might be more convincingly explained by the detailed process evaluation. For example, changes in participants' levels of fitness and their disease risk factors, such as reduced hypertension or cholesterol, would more convincingly be attributed to their participation in an exercise referral program if detailed information were available about their attendance and adherence.
Theory-based evaluations would also ideally try to strengthen the underlying descriptive theory of the program by testing which key mediators had changed in those showing positive outcomes compared with those who did not. This might involve analyzing changes in mediators (e.g., self-efficacy or attitudes) for these two groups, along with participants' explanations for these changes or their exposure to particular aspects of the intervention (e.g., access to accurate knowledge, or support from significant others and changed social norms because of family or peer support and approval) (see the next section on mediation analysis). The limitations in planning processes detailed earlier and subsequently, however, often restrict the extent to which evaluations are actually used to refine and enhance descriptive theory.
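The mediator comparison described above can be sketched in a few lines of analysis code. The following is a minimal illustration only, using synthetic change scores and a hand-computed Welch t statistic; the variable names, data, and grouping are assumptions for the example, not part of any published evaluation framework.

```python
from statistics import mean, variance

# Synthetic pre-to-post change scores in a hypothesized mediator
# (e.g., self-efficacy) for participants who achieved the long-term
# outcome ("responders") and those who did not ("non-responders").
responders = [1.2, 0.8, 1.5, 0.9, 1.1, 1.4, 0.7, 1.0]
non_responders = [0.2, -0.1, 0.4, 0.0, 0.3, 0.1, -0.2, 0.2]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

t = welch_t(responders, non_responders)
print(f"mean mediator change (responders):     {mean(responders):.2f}")
print(f"mean mediator change (non-responders): {mean(non_responders):.2f}")
print(f"Welch t statistic:                     {t:.2f}")
```

A markedly larger mediator change among responders is consistent with, though not proof of, the program's descriptive theory; formal mediation analysis (discussed in the next section) would be needed to test the causal pathway.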
Both outcome-focused planning and theory-driven evaluations often use tools such as logic modeling (W.K. Kellogg Foundation, 2001) and the RE-AIM framework (Estabrooks & Gyurcsik, 2003; Glasgow, Vogt, & Boles, 1999) to enhance implementation plans. Where detailed plans and theories are not already available, theory-based evaluators often use such tools to develop an evaluation framework, identify key evaluation questions, and focus the subsequent design and methods. These approaches encourage right-to-left thinking so that the outcomes drive the selection of activities. They can help bring evaluative thinking into program design and thus test the linkages between activities and short-term, interim, and long-term outcomes through reference to available evidence and tacit professional or participant knowledge. Together, such tools and approaches encourage greater consideration during planning of the prescriptive and descriptive theory, or the how (which intervention activities) and why (by changing which mediators or barriers) of behavior change.
If our existing evidence base for physical activity promotion is to be enhanced, those designing, planning, implementing, and evaluating interventions need to use the tools and approaches encouraged in outcome-focused planning and theory-driven evaluation and to consider more closely how different types of theory (prescriptive and descriptive) influence behavior change.