Abstract
Background: National clinical guideline developers, such as the UK’s National Institute for Health and Clinical Excellence (NICE), produce high-quality guidelines, yet primary care practitioners (PCPs) may question the relevance of the evidence and recommendations to a primary care (PC) population.
Objectives: To evaluate PCPs’ views about the relevance of NICE clinical guidelines to PC.
Methods: An online Delphi panel of 28 PCPs, recruited regionally and nationally, reviewed 14 guideline recommendations: 8 supported by PC-relevant evidence and 6 by evidence from elsewhere. Panellists scored recommendations twice on a scale of 1–9 (9 = highly relevant for PC): once before and once after reading a summary of the evidence, including study setting and population. They also commented on factors influencing guideline validity and PC implementability.
Results: 25 PCPs (89%) completed the Delphi. Overall mean scores were 7.4 (range 6.2–8.2) before reading the evidence summary and 6.6 (range 4.6–8.3) after. Mean scores for the 8 recommendations supported by PC evidence were 7.4 before and 7.2 after (change −0.2). Mean scores for the 6 with evidence from elsewhere were 7.4 before and 5.8 after (change −1.6). Factors perceived to influence implementation included clarity, brevity, and relevance to PC.
Discussion: PCPs’ ratings of PC guideline validity dropped when they became aware that substantial supporting evidence for the guidelines had come from non-PC settings. The relevance of the evidence to PC patients was important.
Implications for Guideline Developers/Users: Developers should explicitly describe the relevance of available evidence for PCPs and their patients.
| Original language | English |
|---|---|
| Article number | A21 |
| Journal | BMJ Quality & Safety |
| Volume | 22 |
| Issue number | Suppl 1 |
| DOIs | |
| Publication status | Published - 2013 |