McCaughey Centre, Melbourne School of Population Health, University of Melbourne, Carlton, Vic., 3053, Australia
Public health decision-makers are often overwhelmed with large quantities of data, evidence, reviews and summaries. As the volume of information increases, the need for trusted sources of synthesis becomes greater.1
If we recognize the need for good methods of summarizing research that address policy-makers' information needs in a reliable and timely manner, then how do we agree on what those methods are? This is a lively debate, reminiscent of earlier debates on alternative designs for evaluating interventions, which were polarized for many years around the merits or otherwise of the randomized controlled trial.2 There is a danger that an equally polarized debate occurs here, around the merits or otherwise of systematic reviews.
Using knowledge to improve the effectiveness and efficiency of public health policy requires strengthening the links between the synthesis, generation and translation of that knowledge. We need to understand the strengths and limitations of the knowledge we have, identify and fill gaps in the time available, and work in partnerships that bring together users and generators across all the relevant sectors.3,4 Doing so requires transparent and reproducible methods of research synthesis, where the strength of evidence can be more easily judged. This is much more difficult when opaque review methods are used, where strength of opinion, rhetoric or resources can be the main determinants of a review's findings and impact.
It is generally acknowledged that systematic reviews that apply highly restrictive inclusion criteria, based on the internal validity of the research design, may be of limited value in public health. Such reviews may be based only on randomized trials of weak interventions, while more promising interventions are omitted because of the study designs used to evaluate them. Similarly, unsystematic reviews, which include a biased selection of evaluation studies that are then synthesized in an uneven and subjective manner, will be of limited value because they risk reflecting the views of the author rather than the strength of the underlying evidence. However, focusing on these extremes, with their inherent weaknesses, blocks progress towards a more useful middle position.
The key characteristic of a systematic review is its reproducibility, which results from the application of systematic search and synthesis methods. The search may be restricted to randomized controlled trials where this is appropriate, but equally it may include any research method and define broad inclusion criteria such as research question, population or setting. Synthesis can take the form of quantitative meta-analysis, but can also use other methods such as meta-ethnography or realist review.
However, both the search and selection of included studies and the synthesis method need to be defined and reproducible. By applying search and synthesis methods in a systematic manner, the review can be reproduced by others, can be readily updated, and can be read and interpreted with confidence. It is not a defining characteristic of a systematic review that it only includes randomized trials, or even that it only focuses on questions of effectiveness.
Recent developments in public health systematic reviews have provided new methods and infrastructure.5-7 For example, the Cochrane Public Health Review Group has developed specific expertise in searching for studies with a wider range of research designs to ensure better coverage of low- and middle-income countries. By including a wider range of research designs, public health reviews can become more useful for policy makers.8,9 If this approach continues to evolve it should serve low- and middle-income countries well by profiling the importance of equity and context when translating evidence from one situation to another.9-12
1. Petticrew M, Platt S, McCollam A, Wilson S, Thomas S. "We're not short of people telling us what the problems are. We're short of people telling us what to do": an appraisal of public policy and mental health. BMC Public Health 2008;8:314. PMID:18793414 doi:10.1186/1471-2458-8-314
2. McQueen DV. The evidence debate. J Epidemiol Community Health 2002;56:83-4. PMID:11812803 doi:10.1136/jech.56.2.83
3. Roberts H, Arai L, Roen K, Popay J. It might work in a trial, but how do we make it work round here? In: Kelly M et al., eds. Evidence at the crossroads: new directions in changing the health of the public: a manual. Oxford: Oxford University Press; 2006.
4. Lavis JN, Lomas J, Hamid M, Sewankambo NK. Assessing country-level efforts to link research to action. Bull World Health Organ 2006;84:620-8. PMID:16917649 doi:10.2471/BLT.06.030312
5. Oliver S, Harden A, Rees R, Shepherd J, Brunton G, Garcia J, et al. An emerging framework for including different types of evidence in systematic reviews for public policy. Evaluation: The International Journal of Theory, Research and Practice 2005;11:428-46.
6. Petticrew M, Roberts H. Systematic reviews in the social sciences: a practical guide. Oxford: Blackwell Publishing; 2006.
7. Higgins J, Green S. Cochrane handbook for systematic reviews of interventions. The Cochrane Collaboration; 2008.
8. Petticrew M, Roberts H. Evidence, hierarchies, and typologies: horses for courses. J Epidemiol Community Health 2003;57:527-9. PMID:12821702 doi:10.1136/jech.57.7.527
9. Tugwell P, Petticrew M, Robinson V, Kristjansson E, Maxwell L. Cochrane and Campbell Collaborations, and health equity. Lancet 2006;367:1128-30. PMID:16616547 doi:10.1016/S0140-6736(06)68490-0
10. Lavis J. Evaluating knowledge translation platforms in low- and middle-income countries. Canada; 2008.
11. Pang T. Evidence to action in the developing world: what evidence is needed? Bull World Health Organ 2007;85:247. PMID:17546299 doi:10.2471/BLT.07.040824
12. Wang S, Moss JR, Hiller JE. Applicability and transferability of interventions in evidence-based public health. Health Promot Int 2005;21:76-83. PMID:16249192 doi:10.1093/heapro/dai025