Analysis of a normative framework for evaluating public engagement exercises: Reliability, validity and limitations

Gene Rowe, Tom Horlick-Jones, John Walls, Wouter Poortinga, Nick F. Pidgeon

Research output: Contribution to journal › Article › peer-review

72 Citations (Scopus)

Abstract

Over recent years, many policy-makers and academics have come to the view that involving the public in policy setting and decision-making (or "public engagement") is desirable. The theorized benefits of engagement (over traditional approaches) include the attainment of more satisfactory and easier decisions, greater trust in decision-makers, and the enhancement of public and organizational knowledge. Empirical support for these advantages is, however, scant. Engagement processes are rarely evaluated, and when they are, the quality of evidence is generally poor. The absence of standard effectiveness criteria, and instruments to measure performance against these, hinders evaluation, comparison, generalization and the accumulation of knowledge. In this paper one normative framework for evaluating engagement processes is considered. This framework was operationalized and used as part of the evaluation of a recent major UK public engagement initiative: the 2003 GM Nation? debate. The evaluation criteria and processes are described, and their validity and limitations are analyzed. Results suggest the chosen evaluation criteria have some validity, though they do not exhaustively cover all appropriate criteria by which engagement exercises ought to be evaluated.
Original language: English
Pages (from-to): 419-441
Number of pages: 23
Journal: Public Understanding of Science
Volume: 17
Issue number: 4
DOIs
Publication status: Published - Oct 2008
