
This chapter summarises some of the issues that impede or facilitate the translation and application of research in health promotion and prevention. As part of this analysis we also consider the skills needed for the critical appraisal of evidence, enabling practitioners to make good decisions based on the strength and usefulness of published evidence.

9.1 Getting evidence into practice

Improving the quality and effectiveness of health promotion interventions ultimately depends on our ability to optimally use the evidence generated through research and evaluation. The central purpose of evaluation is to guide improvements in practice through the adoption of interventions that have a known and quantifiable impact on health and quality of life.

The use of evidence to guide decision-making in health promotion varies considerably, and there are several reasons why this is the case. In some circumstances sufficient ‘evidence’ of effectiveness does not exist, or the evidence available may be insufficient to reach a conclusion in a timely way. For example, when a new public health threat emerges (such as COVID-19), rapid public health responses are needed in advance of the optimal level of evidence from careful research. In other cases where new public health challenges (such as e-cigarette use) emerge, there may be disagreement on the best mix of interventions to address the problem on a population basis (advocacy, education, social media campaigns, regulation and so on). For policymakers and practitioners, doing nothing in the absence of conclusive evidence is often not an option in the face of a public health emergency or community pressure for action.

When clear evidence of effectiveness is lacking, those responsible for taking action (policymakers and practitioners) have to locate and prioritise the best available evidence from existing relevant studies, and draw on information from practitioners and consumers. Responses have to be practical to implement and have potential for high population reach. Where an intervention has been initiated with limited relevant evidence, the case for evaluation is strong. This not only allows for immediate learning from an experimental intervention (and enables adjustment to the intervention as it is implemented) but is also important for public accountability.

A model illustrating differences in the use of evidence in health promotion programs is provided in Figure 9.1. One stage does not necessarily lead to another, but the proposed hierarchy suggests that better practice will be informed by theory and research evidence.

Figure 9.1

Variation in the use of evidence in health promotion: planned, responsive and reactive practice (adapted from Nutbeam 1996)

The planned approach is exemplified by the use of planning frameworks and logic models, as described in Chapters 1 and 2. This approach is based on rational, systematic assessment of the best available evidence of population health needs, effective interventions and the organisational ...
