DEBATE
Debate on the paper by Celia Almeida & Ernesto Báscolo
James A. Trostle
Trinity College, Hartford, U.S.A. James.Trostle@trincoll.edu
This is, in general, a nice summation and assessment of the theoretical work being done on the use (and lack of use) of scientific evidence to improve health policy and health systems. However, at times the authors summarized excessively, and at other times they missed opportunities to critically compare the various approaches they reviewed.
For example, the historical growth of political science theorizing was painted too broadly. It is difficult to interpret the conclusion that there was a "confusion between research and the operations approach" that led to a "differentiation (and separation) of functions between scientists and 'consultants'" (Is this a critique of the field of operations research? What does it mean to separate scientists from consultants in this way?).
I would have liked to see more explicit and detailed comparisons and evaluations of the formulations of people like Kirkhart or Patton or Forss or Walt & Gilson. Are they all compatible with one another? If not, which approaches make the most sense for which circumstances? Answering these questions would have helped advance the authors' expressed goal of "formulating and developing analytical and explanatory frameworks that perhaps offer more promise". I also wanted to see more follow-up of the authors' point that in Spanish and Portuguese the same word, "política", refers both to the content of policy and to the policy-making process itself. What implications might this and other regional differences have for the generalizability in Latin America of research and theorizing based in the United States and Europe?
Whether referring to Northern or Central or Southern America, or elsewhere, I think the authors are quite correct in emphasizing the dynamic and nonlinear relationship between research and policy. It is important that they acknowledge that much of the literature now being generated on evidence-based policy-making has a rather naïve sense of optimism about it, despite warnings decades ago that policy-making has many irrational inputs and may not be particularly open to evidence. One need look no farther than my own country, the United States, for some recent, major (and somehow shocking, even to this jaded observer) examples of the willful neglect and manipulation of scientific evidence to fit policy agendas rather than to shape them. These range from ignoring evidence for policies the Bush administration opposes (recommendations that emergency contraception be made readily available, or that greenhouse gases are an important cause of global warming); to refusing to collect scientific evidence for policies Bush promotes (abstinence-only interventions for reproductive health are funded without evidence of their effectiveness); to willfully manipulating scientific evidence to reframe issues in the Bush administration's favor (a national report on health disparities is censored so thoroughly that instead of calling such disparities a national problem it emphasizes ways that ethnic minorities are healthier than the general U.S. population; and government websites are altered to contradict accepted scientific data reviewing condom effectiveness or abortion risks). (Many of these abuses and others are documented at: http://www.democrats.reform.house.gov/index.asp.) One wonders at times whether theorizing about improving the use of research in policy is even useful in the absence of political change.
If it is true, as the authors suggest, that there is "a certain consensus" among analysts with respect to the barriers that impede the use of research in decision-making, how do they explain this consensus, given the many competing theoretical formulations of the research-to-policy process? That is, if a theorist like Patton thinks more about use processes than products, while a theorist like Kirkhart talks more about influence than use, why wouldn't these different formulations lead to the identification of quite different types of barriers and a consequent lack of consensus?
I would have liked to see the authors pay more attention to, and review more of, the empirical research they call for in their last paragraph. Such attention would have benefited readers considering how to design that research. Nonetheless, I think this summary piece provides a good introduction to a broad array of important theories, concepts, and definitions related to research and policy-making, and I look forward to the authors' eventual review of the empirical research they urge.