
Introduction

This chapter describes evaluation studies that generate evidence to determine whether a discrete intervention works; this is stage 3 of the stages of evaluation model presented in Chapter 2. The focus of this chapter is on the research methods used to establish whether an intervention works. There is a need to balance ‘scientific’ rigour against public health pragmatism in research design, sample selection and measurement.

Chapters 3 and 4 described the formative and process evaluation methods that support the development and implementation of a health promotion project. This chapter focuses on the evaluation designs and research methods that test the efficacy and effectiveness of a health promotion intervention. ‘Efficacy’ is an assessment of the outcomes of an intervention in ideal circumstances, where there is optimal delivery and a high degree of control over the intervention. ‘Effectiveness’ is an assessment of the success of a health promotion intervention under ‘real-world’ or ‘field’ conditions, where there is less control over the conditions that might influence success or failure.

Efficacy studies, and some effectiveness studies, are often smaller in scale and usually conducted in selected populations of volunteers. They are often evaluations of discrete, well-defined projects, usually using a single intervention strategy; for example, a social media intervention to encourage healthy eating, a school curriculum to teach young people about HIV risk, or a behaviour change intervention guided by social cognitive theory to support regular smokers to quit. Chapter 6 addresses issues relating to the evaluation of more complex, multi-component health promotion programs, and Chapter 7 deals with research issues around the evaluation of scaled-up studies.

5.1 Evaluation designs for health promotion projects

The term ‘evaluation design’ describes the set of tasks used to systematically examine the effects of a health promotion intervention. A well-conducted evaluation can give decision-makers confidence that the intervention caused the observed effects, rather than these occurring by chance or through other factors or influences. To achieve this, we need to ensure that:

  • the program was optimally developed and planned (formative evaluation), implemented as intended, and reached the target audience (process evaluation)

  • the processes of recruiting people into the intervention are described (e.g. who they were and how they were selected)

  • the most reliable and valid measurements available were used to assess the impact and outcomes of the intervention (the results)

  • the best possible research design was used to assess the effects of the intervention

  • alternative explanations for the results can be ruled out, so that we can be confident that the observed results are attributable to the intervention

  • further research may be conducted to identify how and why the program worked (or did not work) for the whole, or for subsets of, the target group.

The research process in an evidence-generating intervention

Figure 5.1 shows some of the key features of good ...
