Abstract
Background: Evidence-based approaches are requisite in evaluating public health programmes. Nowhere are they more necessary than in physical activity interventions, where evidence of effectiveness is often poor, especially within hard-to-reach groups. Our study reports on the quality of the evaluation of a government-funded walking programme in five ‘Walking Cities’ in England. Cities were required to undertake a simple but robust evaluation using the Standard Evaluation Framework (SEF) for physical activity interventions to enable high-quality, consistent evaluation. Our aim was not to evaluate the outcomes of this programme but to evaluate whether the evaluation process had been effective in generating new and reliable evidence on intervention design and what had worked in ‘real world’ circumstances.
Methods: Funding applications and final reports produced by the funder and the five walking cities were obtained. These totalled 16 documents, which were systematically analysed against the 52 criteria in the SEF. Data were cross-checked between the documents at the bid and reporting stages with reference to the SEF guidance notes.
Results: Generally, the SEF reporting requirements were not followed well. The rationale for the interventions was poorly described, the target population was not precisely specified, and neither was the method of recruitment. Demographics of individual participants, including socio-economic status, were poorly reported, despite being a key criterion for funding.
Conclusions: Our study of the evaluations demonstrated a missed opportunity to confidently establish what worked and what did not work in walking programmes with particular populations. This limited the potential for evidence synthesis and for highlighting innovative practice warranting further investigation. Our findings suggest a mandate for evaluability assessment. Used at the planning stage, this may have ensured the development of realistic objectives and, crucially, may have identified innovative practice to implement and evaluate. Logic models may also have helped in the development of the intervention and its means of capturing evidence prior to implementation. Research-practice partnerships between universities and practitioners could enhance this process. A lack of conceptual clarity means that replicability and scaling-up of effective interventions is difficult and the opportunity to learn from failure is lost.
Original language | English |
---|---|
Article number | 674 |
Journal | BMC Public Health |
Volume | 17 |
DOIs | |
Publication status | Published - 22 Aug 2017 |
Keywords
- Evaluation
- Physical activity
- Public health
- Evidence based medicine
Projects
- 1 Finished
- Centre for Diet and Activity Research (CEDAR)
Jones, A., Wareham, N., Battersby, J., Benjamin-Neelon, S., Brayne, C., Cambridge, D., Griffin, S., Lakshman, R. & Monsivais, P.
1/10/13 → 30/09/18
Project: Research