DEBATE
Use of research results in policy decision-making, formulation, and implementation: a review of the literature
La utilización de los resultados de la investigación en el proceso de decisión, formulación e implementación de políticas: una revisión de la literatura
Celia Almeida I; Ernesto Báscolo II
I Escola Nacional de Saúde Pública Sergio Arouca, Fundação Oswaldo Cruz, Rio de Janeiro, Brasil
II Instituto de la Salud Juan Lazarte, Rosario, Argentina
ABSTRACT
This paper offers a critical review of the theoretical literature on the relationship between the production of scientific knowledge and its use in policy formulation and implementation. Extensive academic output, using a diversity of approaches and analytical frameworks, has sought to systematize knowledge transfer categories and strategies with a view to improving the application of scientific knowledge. A considerable part of this thinking addresses the problem from a more traditional perspective, which (explicitly or implicitly) regards research results as an "accumulable product", depicts the decision-making process simplistically and linearly, and thus restricts strategies to the suiting of research endeavors to needs. Newer approaches place greater importance on the complexity of policymaking and the knowledge production process, which are integrated into and explained in particular political and institutional settings. Although the application of knowledge transfer ideas to health policy and systems research does generate some interesting contributions, a long process of theoretical thinking lies ahead.
Health Policy; Policy Making; Use of Scientific Information for Health Decision Making
RESUMEN
Este texto presenta una revisión crítica de la producción teórica dedicada a la reflexión de la relación entre la producción del conocimiento científico y su utilización en el proceso de formulación e implementación de políticas. Una extensa producción académica ha procurado una sistematización de categorías y estrategias de transferencia de conocimiento con el propósito de mejorar la aplicación del conocimiento científico, desde diversas perspectivas y distintos marcos de análisis. Parte importante de esa reflexión aborda el problema desde una perspectiva más tradicional, que asume (explícita o implícitamente) una concepción de los resultados de la investigación como un "producto acumulable", mientras el proceso decisorio se presenta en forma simplista y lineal, restringiendo las estrategias a la adecuación entre necesidades y los esfuerzos de la investigación. Con nuevos abordajes, se revaloriza la complejidad de la creación de políticas y del proceso de producción del conocimiento para la acción, integrados y explicados en un marco político e institucional particular. A pesar de que la aplicación de las ideas de transferencia de conocimiento aplicada al campo de la investigación en políticas y sistemas de salud genera algunos aportes interesantes, un largo camino de reflexión teórica queda pendiente.
Política de Salud; Formulación de Políticas; Uso de la Información Científica en la Toma de Decisiones en Salud
INTRODUCTION
Although the debate on the use of research results for policy decision-making and implementation processes is not new and its features have changed over time, the issue has gained greater prominence in recent decades following the major processes of world change that increasingly call for concrete evidence to support or challenge the innovations that are implemented in a variety of contexts, including health policies and systems.
The background to the debate dates to the mid-20th century and overlaps with the following: (1) the field of public policy analysis, and thus the formation of political science as a specific discipline within the social sciences; and (2) discussions of social science approaches and methods as applied to various fields of knowledge, including health services research.
The literature on the analysis of health and health systems public policy does not provide a clear, single definition of either health policy or health services and systems. However, there is a clear consensus that what is being analyzed is public policy, and thus state policy; there is less agreement (particularly more recently) as to the scope of such policy and whether health policies (or at least part of their content) belong among social policies.
Importantly, unlike the English-language literature, which distinguishes between "policy" (policy content, a framework of guidelines for action) and "politics" (the political decision-making process, involving a range of loci and actors), Latin American literature (in Spanish or Portuguese) uses a single term (política), with various uses and meanings that encompass both senses.
Public policy analysis emerged, particularly in the United States, as a science of action, a contribution by experts (analysts) to government decision-making processes. The central concern was to direct research in such a way as to be relevant, useful for action. That pragmatic view of political science came, in turn, as a reaction to a prevailing formalism 1 and against the abstract rationalism of Homo economicus 2.
Underlying such an approach was obviously a naïve conception that tended to see a simplistic relationship between improved understanding of action and better government performance. In the late 1950s and early 60s it was believed that this association between experts (policy analysts) and policymakers would facilitate solutions to society's problems. This helped focus attention on crafting tools to be made available to politicians and decision-makers, while theoretical considerations were relegated to secondary importance.
This trend was extremely strong in the United States in the 1960s and 70s. According to some authors, it led to the production of "practical" knowledge. This submissive, atheoretical view of political science was challenged, thus sparking interest in other concerns more fundamental to this debate and making it possible to break out of the vicious circle that threatened to confine public policy analysis to the "function of a decision-making aid, [placing experts in the role of] 'consultant' and prince's helper" 2 (p. 45). This confusion between research and the operations approach led to a differentiation (and separation) of functions between scientists and "consultants".
Paradoxically, however, the resurgence of theory during the subsequent decades returned to the same theoretical models that had been rejected initially, because public policy theories are not particularly innovative. Whether for or against, they are rooted in political philosophy and economic thinking, with the important, original difference that they address what was then a little-explored field, i.e., they are based on empirical research, but do not disregard the implications for action, thus surmounting both the confusion between "practical usefulness" and "recipe for government" and the false, radical opposition between theory and practice.
Writing in the context of the process summarized above and concerned with the history and development of health services research as a field of inquiry, Greenberg & Choi 3 suggest, based on Anderson 4, that the development of public policy consensus on a particular issue or set of issues establishes the basic framework for policy-related social and economic research, within which research priorities are set.
As Greenberg & Choi 3 (p. 4) emphasize, "two key words, 'systematic' and 'instrumental', emerge from Anderson's thesis, which is not arguing that no research or data collection occurs prior to policy consensus, but rather that large-scale, systematic collection and analysis efforts only emerge (and are funded) after the establishment of consensus on issues to be addressed by particular policies (...) [and] the resulting research efforts become primarily instrumental in nature. The term 'instrumental' refers to work that is oriented to the solution of specific management, program, or policy problems and tends to be constrained by very short time frames". Seen in this way, health policy and services research becomes a problem-oriented field and, thus understood, is instrumental.
It is important to remember that these authors are discussing the health services research field, which was developing as part of a much larger trend towards using applied social science research to improve public decision-making. Many historians of health policy and health services research argue that this field, as a distinct field of inquiry, really began in the 1960s 5,6. Before then, the primary focus of health policy had been to develop a delivery system that would provide equitable access to care, and the perceived role of federal government was to channel resources for that purpose. The 1970s witnessed a shift in the policy consensus away from access to health care and expansion of coverage and toward cost containment and "shrinking the system" 3,5,7,8,9,10. The issue of health sector reform has been on the agenda of almost all countries ever since, and new models of health systems and services organization have come into play. New policy instruments have become necessary, and these have generated great demand for new knowledge about how to control health systems more effectively.
As part of this process, the health services research field took on new life in the 1980s, but received heavy criticism for its apparent failure to perform as a source of instrumental findings; the central issue in debate came to be the "use of research results". As a result, there is no general agreement on how to distinguish between applied social science research in health delivery systems, health services research, and health policy analysis. Such criticism also raises the issue of emphasis: whether towards theoretical development or problem-solving approaches 3.
In order to help ensure that research would be "problem-solving oriented" and would focus on desired policy issues, the sponsors and financers (in many cases mainly the state, i.e., the federal government, and international organizations) began to rely more heavily on the "call for proposals" approach to award grants, in contrast to the researcher-oriented approach of the past 3,11. The issue of the diffusion of service innovations gained prominence, as did the search for evidence for public health and health policy. Thus, the problems posed by the difficulty in transferring scientific information or research results to decision-making have become a subject for academic thinking.
As many authors point out, whether research results or scientific evidence have contributed significantly to public policy depends largely on how one defines public policy and the policy development process. Opposing approaches can be identified in the recent past.
One view of the policy-making and policy-formation processes sees them as sets of explicit, authoritative decisions by sets of identifiable public officials. The other view argues that a more complex, general political process is involved, strongly influenced by values, opinions, and actions which move decisions in certain directions in the political, social, and economic systems. In the first approach, the impact of research results tends to be evaluated in terms of the direct effects on such decisions; in the second, "research performance" should be evaluated in terms of its overall ability to shape debate and action. Accordingly, one should not expect any specific result of any specific study to play a central role in any specific decision, but should look toward the nature of the issues being raised and the debate surrounding these issues. Greenberg & Choi 3 feel one should at least be able to associate past and ongoing research with current policy issues and debates.
In the wake of these discussions and changes in direction, analysis of the health policy process has gained prominence, and efforts are under way to equip and formalize the health sector decision-making process by developing "facilitator" tools.
Generally speaking, the literature warns that the fields of knowledge production and policy formulation and implementation are very different: their goals and methods for working and evaluating results are completely different and not easily interchangeable. Meanwhile, this lack of clarity makes it difficult to establish strategies to effectively draw these two activities closer together on the basis of greater cooperation and visible, concrete results. In addition, much of the frustration surrounding attempts to apply research results to policy stems from mistaken expectations as to what such application means 12, as well as to a lack of any clear perception of what the decision-making process is really like 13. There is also no single way of viewing that process, and depending on which analytical perspective is used, the "entry points" for research in the policy process also shift, as do the variables that interact in it 14.
Terms such as "informed choice", "considered decision", "rational policy", "evidence-based policy", "strategic research", and "essential national research" have been used to express the belief in the need to build a "bridge" between research and policy. National and international seminars, congresses, and scientific meetings have focused on the "research-to-policy" issue. A number of moves have attempted to facilitate the use of research results in policies, including the preparation of tool kits for defining research priorities, demand for (and utilization of) health policy and systems research, and capacity-building (Training Materials and Tools, Alliance for Health Policy and System Research, http://www.alliance-hpsr.org); or "political maps" designed to facilitate an understanding of the decision-making process 15 and mainly involving the national and international financing agencies. Increasingly, such agencies are requiring that research protocols state this link explicitly and discuss specific strategies for this purpose. Still, little is known about where, how, and by whom such a bridge is to be built 12.
The health field provides abundant examples of the complex relationship between research results and policy formulation and of the ways by which scientific evidence influences specific causes of disease or individual behavior changes. Taking prevention as an example (e.g., smoking and related health damage, or environmental measures), the evidence is that the accumulated scientific knowledge on a given issue is not what determines significant policy or even behavioral change 16. The interests involved in these issues are far more varied and extend far beyond the health sector 3,14.
The purpose of this article is to offer a literature review on this subject, based on selected authors, and to discuss their arguments and some of the main theoretical approaches to explain or back the relationship between the production of scientific knowledge and its use in policy formulation and implementation. The first section presents several analytical models designed to explain these relationships. The second reviews the issue of the use of research results and policy-making. The third analyzes the literature on the interaction between researchers and policymakers. The fourth reviews issues related to the spread of knowledge, knowledge transfer, and evidence base in health policy and practice. The initial hypothesis is that there is an excessive formalization of instruments and pragmatic simplification in the proposals designed to draw the two fields (research and policy) closer, while the importance of formulating and developing analytical and explanatory frameworks that perhaps offer more promise in this process is underestimated.
Some explanatory models for use of research in policy
The term "knowledge transfer" is increasingly used to describe a series of activities contained in the process of generating knowledge based on user needs, disseminating it, building capacity for its uptake by decision-makers, and finally tracking its application in specific contexts. While there is increasing interest in knowledge transfer for health policy and practice, there is still no dominant explanatory model to guide efforts in this area, and there is little empirical research on what has worked in specific contexts.
Weiss 17 is cited in the literature as having pioneered the identification and description of seven models to illustrate how research is used in policy formulation or how it functions as a guide for the decision-making process. Building on those models, other authors have defined analytical categories that extend this knowledge.
Trostle et al. 12 summarize these models in three basic approaches: (1) the rational approach includes the models that Weiss calls "knowledge-driven" and "problem-solving", representing the conventional manner of thinking about this relationship: the policy process is inherently rational, with research results being used when they exist and decision-makers calling for research when it is needed; (2) the strategic approach groups the models Weiss calls "political" and "tactical" and views research as a kind of ammunition in support of or in opposition to certain positions, prompting or delaying policy action; and (3) the enlightenment or diffusion approach, comprising Weiss' three remaining models ("interactive", "enlightenment", and "intellectual enterprise") and stressing that both the research and decision-making processes take place in parallel with a number of other social processes and thus play several different roles.
The knowledge-driven model assumes that basic research leads to applied research, to development, and finally to application of the results, while the problem-solving model starts with a problem that needs solving, in turn requiring research, the results of which lead to action being taken. The political and tactical models connect directly to executive action; and the last three (interactive, enlightenment, and intellectual enterprise) relate to the production of scientific knowledge in a given line of research, fostering a build-up of knowledge that (it is believed) gradually informs action 13.
In the late 1980s, the concept of use expanded to encompass at least three different types of meaning: (1) instrumental, as an input to decision-making; (2) conceptual, contributing to improved understanding of the subject matter, the related problems, or the political interventions under study; and (3) strategic, serving to persuade other actors or as a means to attain certain aims 18.
With regard to evaluation of study results, Kirkhart 19 criticized the traditional concept of use as being unidirectional, episodic, intended, and instrumental: it failed to adequately describe types of impact deriving from sources other than the results of evaluation, or unintended results, or gradual, incremental impact over time. Kirkhart addressed issues such as the ways by which the results of an evaluation study are used, including an examination of the ways (how and to what extent) the evaluator participates, affects, supports, and proposes mechanisms that foster behavior changes in people and systems. This places new importance on the impact of evaluation itself, rather than on any immediate use of its output 20. From this perspective, it is supposed to be easier to measure the extent of changes in a program or the views of strategic actors than to measure the use of study results.
In a similar approach, Rich 21 characterizes the use of research results in the following stages, viewed from a process perspective: information pick-up, processing, and application.
Kirkhart thus proposes the development of an integrated theory of influence, enabling the impact of research or evaluation to be assessed, by replacing the category "use" (with its more restricted meaning) with "influence", defined as the "capacity or power of persons or things to produce effects on others by intangible or indirect means" 19 (p. 7).
The integrated theory of influence rests on three dimensions: source, intention, and timing. Source relates to the initial cause of a process of change. Sources can arise either in the evaluation process (process-based) or the results (results-based). The former are oriented towards increasing understanding among the actors, changing their sense of values, and developing new relationships, dialogues, and networks. The categories used are: (i) cognitive: changes in understanding stimulated by the discussion, reflection, and problem analysis in an evaluation study; (ii) affective: changes in the individual and collective feelings of worth and value resulting from involvement in a study; and (iii) political: changes resulting from the role of evaluation in creating new dialogues, drawing attention to social problems, or influencing power relationships. Evaluation of results is seen in terms of the more conventional categories of instrumental, conceptual, symbolic, and strategic influence.
A second dimension of influence identified by Kirkhart 19 (p. 11) is intention, defined as "the extent to which evaluation influence is purposefully directed, consciously recognized and planfully anticipated" and characterized by the type, target, and sources of influence (people, processes, and findings). Other complementary features are dimension (intended/unintended), explicitness (manifest/latent), orientation (results-oriented/process-oriented), and direction (positive/negative). Lastly, influence is seen in terms of three categories of temporal dimension: immediate (during the study), end-of-cycle, and long-term.
Patton 22 identifies a typology of use that stresses processes rather than products: (1) enhancing shared understandings, especially about results; (2) supporting and reinforcing the program through intervention-oriented evaluation; (3) increasing participants' engagement, sense of ownership, and self-determination; and (4) program or organizational development.
Meanwhile, Forss et al. 23 expand on the previous proposal to identify five types of process-related use: learning to learn; developing networks; creating shared understanding; strengthening the project; and boosting morale.
Despite the extensive literature (the majority of studies have not been cited here), the state of the art regarding the most effective and efficient knowledge transfer strategies is still in its infancy.
Use of research results and policy-making in health
These models, and the thinking behind them, are useful for understanding the various ways by which research results are used in policy-making. However, Walt & Gilson 13 emphasize that many rest on the underlying assumption that both research and policy-making are logical, rational processes: researchers ask the right questions, plan and conduct their studies rigorously, and circulate their results appropriately, while decision-makers read research reports, understand the results and their implications, and act to correct their course in the direction indicated. Even admitting to a specific rationality in each of these processes, the real world is not so linear: new knowledge and information do in fact penetrate the policy sphere and become part of decision-makers' argumentation and thinking, but in a much more diffuse way that depends on the accumulation of scientific information on a given issue, the political environment, and the conjuncture, among numerous other variables. The search to find (and accumulate) evidence thus becomes the other important part of this equation.
In order to capture this dynamic, Walt & Gilson 13 suggested the use of other analytical categories to understand the influence of research on specific concrete cases, namely policy content, social actors, decision-making process, and context.
These are basic political science categories. The formulation, implementation, and evaluation of social policies are heavily guided by the values and concepts of social realities shared by the leading actors at the various levels of the process, or by bureaucratic elites. These values and concepts provide the "terms of the debate" on policies, delimiting and circumscribing the policy agenda at any given moment 24. Meanwhile, the political, economic, and institutional context of the decision-making process shapes the range of available options and affects decision-makers' choices 25, i.e., "the ways in which the evidence is used in the policy process are largely determined by beliefs and values of policymakers, as well as by considerations of timing, economics, costs, and politics" 26 (p. 601). Moreover, the policy formulation process is completely different from the implementation process, so that a proposal for change rarely retains its original characteristics when implemented, because it alters the status quo and mobilizes actors to defend their interests. Overall, the central category that emerges from such discussions is power, with its innumerable facets and dimensions.
Brown 14 (p. 20) asserts that the numerous generalizations on the role of health systems and services research in the political process are not particularly useful, because as a power resource, knowledge plays innumerable roles, which change with place, time, and circumstances. Each concrete case also involves different explanatory variables for policy change, and each variable implies that the research has played a specific role. Some variables are particularly important, as Brown points out on the basis of case studies in the United States. Trostle et al. 12 find the same in their Mexican case studies on the same subject.
There is a certain consensus among authors concerning various barriers that hinder or prevent research from being used in the decision-making process. These include:
a) Ideological problems that constrain political rhetoric and the formulation of reform agendas, in addition to a lack of political "will" or an inability to formulate and implement more integrated, interactive policies;
b) Historical separation between researchers, policymakers, service providers, administrators, managers, etc., allied to a mutual intellectual disdain 12;
c) "Uncertainty" caused by scientific divergences among researchers on any given problem (conceptual confusion, methodological problems); by the very changes that scientific and technological progress foster in explanations of any given phenomenon, meaning that the scientific "truths" of today are challenged tomorrow; and because the information is fragmented, with different definitions of the important problems;
d) Different conceptions of risk at the individual or collective level, and in the same sector or among various sectors, particularly in multi-sector actions (e.g. environmental control, tobacco use, alcohol, diet, and nutrition);
e) Media interference, which can both confuse the issue by publicizing results inappropriately and exploit divergences rather than clarifying them;
f) Marketing and circulation of research: researchers using impenetrable language; inappropriate means restricted to "peer" forums and publications; and
g) Research process timeframes out of step with those of the decision-making process.
Trostle et al. 12, analyzing the Mexican case studies, identify both barriers and factors that promote and facilitate the use of research results in policy in each of the analytical categories suggested by Walt & Gilson, which are generally specific to each policy area under consideration. However, little headway has been made on proposals to surmount these barriers; the question is whether simply surmounting them will solve the impasses or whether this approach (although necessary) is really sufficient to promote greater use of research in policy formulation and implementation.
Brown 14 argues that some variables are fundamental, although they should not be considered absolute. In addition, the degree to which research actually influences policy varies inversely with the complexity of the issue under study. The first health services research activity that he identifies as having an important influence on policy formulation and implementation (and which is not exactly research, but relates to it) is documentation: the gathering, cataloguing, and correlating of facts depicting the state of affairs that policy-makers wish to change. In addition, compiling statistical data makes it possible to establish temporal and spatial correlations.
However, Brown cautions that quantitative and qualitative databases and statistical indicators cannot be considered research results per se; rather, they are the raw material on which research is shaped and without which it cannot be conducted. He also emphasizes that "documentation is not always a step toward action; sometimes it stultifies it" 14 (p. 28). When the right predisposition or political and material conditions exist, information can become a powerful weapon to spur the shift from political rhetoric to concrete action. However, information alone cannot create such a predisposition. Likewise, Bardach 27 states that policy analysis theory regards evidence as information that affects the existing beliefs held by important persons about significant features of the problem under study and how it might be solved or mitigated.
A second role of health service research in policy design, according to Brown 14, is analytical, i.e. demonstrating what works or does not, and explaining why. In a conflictive reform context, research that raises doubts about the omnipresent list of alternatives for government intervention can prompt policy-makers to take action. Yet it also is a two-edged sword, since it can call attention to aspects unforeseen by policymakers and thus discredit both the policy as formulated and the proposed alternatives.
Bowen & Zwi 26 (p. 601) also argue that "the way in which research evidence is combined with other forms of information is key to understanding the meaning and use of evidence in policy development and practice. (...) A major challenge to contextualizing evidence for policymaking is recognition that a broad information base is required [and] (...) considering the evidence within the context in which it will be used is critical for effective policymaking and practice".
A final role that Brown highlights for health systems and services research is prescription. The political force of empirical evaluation and analytical constructs resides in the "scientific" character they lend to reform proposals, reinforcing decision-makers' prior positions.
Research can thus play a variety of roles in policy. The task of prescription differs greatly from those of documentation and analysis: demonstrating how and why the system functions is itself a contribution to the production of knowledge, which is the essence of research activity, but on its own that contribution will not "teach" decision-makers how to change the system. On the other hand, more narrowly focused studies, directed exclusively to problem-solving, have no broader implications and do not contribute to the build-up of knowledge achieved by pursuing a line of research that lasts over time.
Prescription requires understanding not just how and why actors and institutions behave as they do under given conditions, but also (and more importantly) how such behavior can be altered under new conditions; and here not even the most complete documentation or analysis is enough to ensure any result. In addition, in order to "prescribe" solutions, researchers would have to shed their academic guise and explore avenues not entirely authorized by their theoretical and methodological frameworks, thus running the risk of being discredited by actual developments. Seen thus, health systems and services research has been very erratic as a prescriptive guide and, in the endeavor, has tended to deviate from the field of research and to confuse it with others, such as technocratic and planning activities. In practice, however, these fields are very different and operate with distinct conceptual universes.
In summary, Brown 14 regards research as most valuable to policy in supporting the documentation that describes the system; most erratic in analyzing how policy functions and explaining what works, and how and why; and considerably limited in its prescriptive capacity. Thus, the potential contribution of research to the decision-making process has less to do with offering definitive solutions to the problematic issues in debate and more with improving the quality of the terms of the debate. In this sense, the ability to change the nature of public debate on a given issue is an important form of power, because bringing ideas, proposals, and interests into confrontation is an important force in changing the balance of power among the various contesting groups. Other authors, such as Weiss 17 and Majone 28, reinforce this argument.
The contradiction here is that as the research field becomes more developed and promising, the range of results, explanations, arguments, and recommendations tends to expand, and the field may thus appear more "chaotic" to decision-makers. In other words, an active, highly-skilled scientific community in the health systems and services research field, closely attuned to institutional niches in the state apparatus and civil society and with a secure place in the decision-making arena (such as the "policy networks" 29,30 or the "epistemic communities" 31,32) is certain to foster better debate, but not necessarily better policy results.
Also distant from the "rational" approach and closer to Brown and Walt, some authors recognize an "incremental" dimension to the use of knowledge 26,33,34, placing new importance on the complexity of the decision-making process; this dimension identifies conditioning factors complementary to scientific knowledge, such as interests, values, established institutional positions, and personal ambitions. This view includes a political and institutional approach to the decision-making process, where identifying and characterizing the actors and interrelations is a key dimension to understanding the process and the use of research results in a framework of political negotiation, rather than restricted to criteria sustained by scientific evidence.
Interaction between researchers and decision-makers as an explanatory dimension conditioning the use of research results
Interrelations between researchers and decision-makers have been considered a prime factor in analyzing knowledge transfer processes. Analysis of the weaves (or models) of interrelations between researchers and decision-makers is relevant when one realizes that the use of scientific knowledge depends largely on certain characteristics of the actors (researchers' behavior and decision-makers' receptiveness). Various authors have examined and promoted different ways of improving interrelations between researchers and decision-makers, such as collaborative or "allied research" 35, or constructivist approaches to evaluative research 36, including strategies to improve the knowledge output for decision-making.
This approach fosters a political and institutional analysis of relations between actors and organizations in the interconnections between research and policy formulation and implementation processes, as a conditioning (or independent) variable in the use of knowledge or its impact on decision-making processes (dependent variable). Kothari et al. 37 evaluate the implications of different interrelations between researchers and decision-makers for the use of research results in health policies and programs. Their conceptual framework for such interactions is based on Rich 21, viewing possible opportunities for contact at the following research stages: (i) defining the research questions; (ii) conducting the research and producing findings; (iii) circulating results; and (iv) research utilization, defined as the ways by which research findings influence decision-makers. As already mentioned, use is characterized by dimensions in which several stages play out from a process perspective, summarized as receiving and reading; information processing; and application.
Viewed in a less linear light, interaction could be defined as a number of "disordered interactions" that take place between researchers and users, more than a sequence starting with the needs of researchers or decision-makers.
Lomas 38 has built on this literature and used the term "push and pull" to describe specific potential actions to promote knowledge transfer. Push strategies involve transforming the message according to the needs of specific audiences. This represents a departure from "one-size-fits-all" dissemination strategies. Pull strategies include efforts to build capacity at the user end, such as workshops to explain how to access information and evaluate its scientific rigor.
Meanwhile, Lomas 39, Lavis et al. 40, and Landry et al. 41 draw on the literature of action science and participatory research to suggest that these contact situations or interfaces must be leveraged from the beginning of the research process to promote activities that create "linkages". Such activities include providing opportunities for researchers and users to jointly define the research question, maintain contact during the research process, and (following completion) discuss findings and the potential implications for policy and practice.
Landry et al. 42 analyzed research use as the dependent variable and its association with a wide range of independent determinants, including various knowledge transfer activities. Research use was measured on a scale of one to six using Knott & Wildavsky's 43 self-reported index, as follows:
1) Reception: received research pertinent to one's work;
2) Cognition: read and understood it;
3) Discussion: discussed the research in meetings;
4) Reference: cited the research in reports/presentations;
5) Effort: made an effort to favor the use of research; and
6) Influence: the research influenced decisions.
In their study of 833 Canadian policy-makers, Landry et al. 42 grouped independent variables by conceptual paradigm and implicit assumptions about factors that increase research use. They found that scholarly research was as likely to be used as applied research, and that highly theoretical qualitative research was slightly less likely to be used than quantitative studies. Their conclusions can be summarized as follows:
Factors deriving from the "two-communities" approach, such as whether there had been specific efforts to tailor research for users (push) and to build capacity among decision-makers to use research (pull), were found to be highly related to use;
Similarly, the "linkages" approach, which posits that more interaction with researchers throughout the research cycle improves use, proved to be a significant determinant of use;
"Engineering solutions", cited by those who believe research can "fix" policy problems, propose that the type of research (applied versus scholarly or qualitative versus quantitative) is the most important determinant of use;
"Organizational factors" like structure, size, culture, and policy domain are often posited as important determinants, but were not found to be associated with research use;
Among the "individual factors" often assigned importance, like educational level and position within an institution, education was indeed found to be associated with higher levels of research use, although position within an organization was not.
Looking at the knowledge production and decision-making processes suggests several points where contact between these two dimensions and logics could be useful. The identification and analysis of these points of contact or interfaces charts another process: the interrelations between researchers and decision-makers.
Trostle et al. 12 offer a graphic representation of the dynamic relationship between research and policy formulation, highlighting that although the processes are usually independent of each other, there may be connections between them. They call these contacts "moments of opportunity", since the actors in one process learn from or contribute to those in the other. They also emphasize that the main challenge for applying research results to policy is to learn to create or recognize such moments of opportunity, and then to act effectively to take advantage of them. Starting with the left side of a conceptual figure and moving counter-clockwise, the research process includes the stages of idea generation, design, data gathering, analysis, and application (an interrelationship represented by arrows). Following the arrows, the research results can lead to new ideas and research projects, or may also be applied.
The policy process is similarly represented, starting on the left side of the conceptual figure and moving clockwise. When the needs or problems that arise can be addressed on the basis of specific policies, there is an endeavor to gather information on them from a variety of sources. Interest groups may lobby at various moments and thereby influence definition of priorities among the needs, decisions, or policies. As in the previous process, some arrows return to the information-gathering stage, before policy is designed. Some decisions generate a search for additional information and further negotiations, while others produce policies. New policies can also lead to new interest groups and new political challenges.
Other authors 44 formulate concepts similar to the "moments of opportunity", containing the idea that knowledge can be useful in generating change only when a "window of opportunity" appears.
It is important that analysis examines the actors in the process and evaluates the sources of information they trust, the type of information that interests them, how they evaluate information, what motivates them to make decisions, and the actors with whom they interact, compete, and form alliances.
Trostle et al. 12 and Bronfman et al. 43 also offer a graphic representation of relations between actors and context. In their figure, two larger intersecting circles represent civil society and the state. Other smaller circles represent the interest groups' real or potential impact on policies. Researchers are one such group. Also shown graphically are the mutual influences between interest groups and decision-makers. Public policies (the focus of their study) are formulated by public decision-makers and thus stand at the intersection between state and civil society. Actors are located in one sphere or another, or at their intersections.
Other studies propose a variant on stakeholder analysis, featuring researchers as a group with the ability to intervene in the decision-making process 46 and in the policy formulation process directly or through other key stakeholders over whom they have influence. This interpretation sees the decision-making process as susceptible to influence not only from the knowledge generated by research, but also from the research process itself. This notion allows analyzing the influence of the knowledge produced separately from the influence of the process by which such knowledge is produced.
Similarly, the position to be taken towards the problem at hand cannot be interpreted separately from the context and stakeholder analysis. In some situations, objective variables predominate and an instrumental approach may be sufficient. In others, subjective variables predominate, making it possible to opt for a conceptual or symbolic approach.
Diffusion, knowledge transfer, and evidence base in health policy and practice
Many proponents of research use in health care have drawn on the theory of diffusion of innovation 47. As Bowen & Zwi 26 (p. 600) also point out, "the contemporary public health effort sees much debate about the concepts of 'evidence' and 'the evidence base', and the usefulness and relevance of such terms to both policy-making and practice". A key challenge would be to better contextualize evidence for more effective policy-making and practice.
Greenhalgh et al. 48 conducted a systematic review (commissioned by the UK Department of Health) of the diffusion of service innovations. Drawing on Rogers' overview 47 and other empirical work, they formulated a unifying conceptual model that, instead of serving as a prescriptive formula, intended to be mainly a mnemonic aid for considering the different aspects of a complex situation and their many interactions. These authors define "'innovation' in service delivery and organizations as a novel set of behaviors, routines, and ways of working that are directed at improving health outcomes, administrative efficiency, cost effectiveness, or users' experience and that are implemented by planned and coordinated actions". The authors distinguish between "'diffusion' (passive spread), 'dissemination' (active and planned efforts to persuade target groups to adopt an innovation), 'implementation' (active and planned efforts to mainstream an innovation within an organization), and 'sustainability' (making an innovation routine until it reaches obsolescence)" 48 (p. 582).
For the purposes of this study, we highlight some issues emphasized by the authors. Their view is that "the various influences that help spread the innovation can be thought of as lying on a continuum". In pure diffusion the spread of innovation is unplanned, informal, decentralized, and largely horizontal or peer-mediated, while active dissemination is planned, formal, often centralized, and likely to occur more through vertical hierarchies. Whereas mass media and other impersonal channels may create awareness of an innovation, interpersonal influence through social networks is the dominant mechanism for diffusion. "The adoption of innovation by individuals is powerfully influenced by the structure and quality of their social networks, and different social networks also have different uses for different types of influence" 48 (p. 601-2).
According to their review, the literature on diffusion of innovation in health care organizations is vast and complex and contains many well-described themes, such as the useful list of attributes of innovation that predict (but do not guarantee) successful adoption, and the importance of social influence and the networks through which it operates. They also exposed the lack of empirical evidence for the widely cited "adopters' traits", the limited ability to generalize from the empirical work, and the near-absence of studies focusing primarily on the sustainability of complex service innovations. Finally, they emphasize the multiple and often unpredictable interactions arising in particular contexts and settings, which determine the success or failure of a dissemination initiative, and "clear knowledge gaps" where further research on diffusion of innovation in service organizations should focus. One of the study's most important outputs, resulting from their analysis of the literature, was a parsimonious, evidence-based model for considering innovation diffusion in this area 48.
Based on a thorough literature review, Bowen & Zwi 26 (p. 600) propose an "evidence-informed policy and practice pathway to help researchers and policy actors navigate the use of evidence".
The pathway involves three active stages of progression, influenced by the policy context: sourcing, using, and implementing the evidence. It also involves decision-making factors and a process they call "adopt, adapt, and act" 26 (p. 600).
The authors argue that the term evidence-based policy is used largely for one type of evidence (research), and that using the term evidence-influenced or evidence-informed would reflect the need to be context-sensitive and concerned with everyday circumstances. They also argue that the types of evidence that inform the policy process can be grouped as research, knowledge/information, ideas/interests, politics, and economics.
In their attempt to construct a more comprehensive framework to understand the use of evidence for policy, Bowen & Zwi 26 (p. 602) define the categories of "relative advantage, compatibility with values and past experience, cost and flexibility, trialability, and reversibility, policy environment" and others, besides conducting a critical review of the literature on knowledge transfer, diffusion, and innovation.
Meanwhile, they emphasize that "it is difficult for evidence to remain intact through the process given the policy context, decision-making factors, and the need to adapt", indicating that "evidence interacts with 'context' before it is fully adopted in policy and practice, and/or that different types of evidence are useful at different times in the policy process". Therefore, "effective knowledge transfer is not a 'one off' event, rather it is a powerful and continuous process in which knowledge accumulates and influences thinking over time". Thus, "the ability to sustain this process and a focus on human interaction is essential. Differences in conceptual understanding, scientific uncertainty, timing, and confusion influence the response to evidence". In summary, "understanding knowledge utilization in policymaking requires an understanding of what drives policy..." and "determining capacity to act on evidence is a neglected area of policy analysis and research efforts to date" 26 (p. 603-4). In other words, understanding how evidence informs policy and practice is critical to promoting effective and sustained health policy.
Some final thoughts and remarks
While there is considerable production seeking to systematize knowledge transfer categories and strategies to improve the application of scientific knowledge, there is still a wide diversity of conceptions and analytical frameworks.
According to the traditional view, scientific knowledge is regarded as an "accumulable product" which decision-makers can resort to according to their needs. This conception is generally allied with a simplified view of the decision-making process 49, assuming policy formulation and implementation as a linear process comprising a chain of rational decisions made by privileged actors. On this basis the problem is seen to lie in the difficulties involved in making the right information available to decision-makers at the right moment.
Other studies explore the subject in greater depth and highlight the specific features and complexity of policy-making, where the actors (researchers, policy-makers, members of the state structure or civil society) and decisive, specific conjunctures are all fundamental categories for addressing the problems of knowledge transfer. Likewise, researchers and their production process are considered to be actors integrated into a particular political and institutional context, thus reinterpreting the relationship between the subject and the object of study. In this regard, integration between researchers and decision-makers is assigned greater value as a potential factor conditioning the ways by which research results are used in policies, while the actors' organization in "social networks" is regarded as another fundamental variable for facilitating such interaction and guaranteeing that specific innovations are incorporated at given conjunctures.
The various models and analytical categories discussed in this study describe important aspects and dimensions of the interaction between research and policy-making, revealing its complexity and helping us understand it. However, the problem lies in the "prescriptive intention" of some proposals, since the decision-making process on any given policy involves numerous intervening variables that can combine in highly random ways (all of which is proper to the political process), making it hard to predict the outcome of such interaction.
Research results are seen as having highly varied roles in policy formulation, the most effective perhaps being to change the "terms of the debate" on a given issue, depending on the actors' political power of persuasion and their ability (using politics and lobbying) to keep the specific issue on the policy agenda over time and to implement the intended changes, as well as the issue's importance to a given society at a specific moment.
The ability to transform technical and scientific proposals into policy changes to be implemented or in progress (or reform policies) involves much more than the actors' will or even the technical quality of the scientific information recommending such a change. Ideological, political, and conjunctural factors are decisive for the proposal's formulation and the courses of action chosen to implement it 50,51, and the production of scientific knowledge moves forward precisely by questioning earlier "scientific truths", while concrete realities demand that the directions of any policy for change be permanently reviewed and reformulated. Some authors thus warn that it may be easier to measure the extent of the changes produced, or the views of the strategic actors, than the use that was actually made of the research results underpinning the change. They also propose that the category "use" should be replaced by "influence", i.e., actors' ability or power to produce effects in given areas.
In addition, programs or processes of change in the public health field or in health services cannot be evaluated using the same technologies normally used to evaluate medical care (e.g., clinical trials), which are the basis for the "evidence-based medicine" so in vogue in recent decades. That is, evaluation of evidence in public health depends on other analytical categories, which Habicht et al. 52 and Victora et al. 53 call adequacy, plausibility, and probability, described as "three types of scientific inference that are often used for making policy decisions in the fields of health" 53 (p. 400), because of the complexity of certain interventions and the ways by which changes influence the characteristics of the institutions or populations under study, or vice-versa.
In summary, the literature review presented here confirmed our initial impression that there is an excessive formalization of instruments and pragmatic simplification in both processes (knowledge production and policymaking) in the health sector. Nonetheless, interesting progress can be seen in considering the use of research results for policy implementation, particularly regarding the application of knowledge transfer-related ideas (diffusion, dissemination, and innovation) in the area of health policy and systems research from a political science perspective. Even so, there is still a long road ahead in this theoretical reflection, and there seems to be a need for greater investment in empirical research, which, although not reproducible, does bring to bear elements of the concrete reality that help decipher the acknowledged complexity in the issue of the use of scientific knowledge for policy formulation and implementation.
Contributors
The two authors jointly developed this paper's underlying ideas and structure. C. Almeida conducted the literature inventory and review, summarized the material, examined the theories and ideas, conceived the study structure, wrote the paper, and revised the final draft. E. Báscolo also searched the literature, summarized material, examined the theories and ideas, commented on and wrote in each successive version, and assisted in shaping the paper for publication.
Acknowledgements
The authors thank Patricia Pittman for her support in surveying the literature and for valuable suggestions on the first draft of the paper. Obviously, the authors assume sole responsibility for the analysis and interpretation of the arguments presented here.
REFERENCES
1. White K, editor. Investigaciones sobre servicios de salud: una antología. Washington DC: Organización Panamericana de la Salud; 1992. (Publicación Científica, 534).
2. Meny I, Thoenig J-C. El marco conceptual. In: Meny I, Thoenig J-C, editors. Las políticas públicas. Barcelona: Editorial Ariel; 1992. p. 88-108.
3. Greenberg JN, Choi T. The role of the social sciences in the health services research: an overview. In: Choi T, Greenberg JN, editors. Social science approaches to health services research. Ann Arbor: Health Administration Press; 1982. p. 2-20.
4. Anderson OW. Influences of social and economic research on public policy in the health field: a review. Milbank Mem Fund Q 1966; 44:1-11.
5. Bice T. Social science and health services research: contributions to public policy. Milbank Mem Fund Q Health Soc 1980; 58:173-200.
6. Mechanic D. Prospects and problems in health services research. Milbank Mem Fund Q Health Soc 1978; 56:127-39.
7. Arellano OL. La selectividad en la política de salud. In: Laurell AC, editor. Nuevas tendencias y alternativas en el sector salud. México DF: Casa Abierta al Tiempo/Fundación Friedrich Ebert Stiftung; 1994. p. 33-60.
8. Almeida CM. As reformas sanitárias nos anos 80: crise ou transição? [Tese de Doutorado]. Rio de Janeiro: Escola Nacional de Saúde Pública, Fundação Oswaldo Cruz; 1995.
9. Chernichovsky D. Health system reforms in industrialized democracies: an emerging paradigm. Milbank Q 1995; 73:339-72.
10. Lee K, Fustukian S, Buse K. An introduction to global health policy. In: Lee K, Buse K, Fustukian S, editors. Health policy in a globalising world. Cambridge: Cambridge University Press; 2002. p. 3-17.
11. Kralewski JE, Greene BR. Health services research and the evolving health systems. In: Levey S, McCarthy T, editors. Health management for tomorrow. Philadelphia: J. B. Lippincott; 1980. p. 293-307.
12. Trostle J, Bronfman M, Langer A. How do researchers influence decision-makers? Case studies of Mexican policies. Health Policy Plan 1999; 14:103-14.
13. Walt G, Gilson L. Reforming the health sector in developing countries: the central role of policy analysis. Health Policy Plan 1994; 9:353-70.
14. Brown L. Knowledge and power: health services research as a political resource. In: Ginzberg E, editor. Health services research: key to health policy. Cambridge: Harvard University Press; 1991. p. 20-45.
15. Reich M. The politics of health sector reform in developing countries: three cases of pharmaceutical policies. Boston: Harvard School of Public Health; 1994. (Working Paper, 10).
16. Barreto ML. O conhecimento científico e tecnológico como evidência para políticas e atividades regulatórias em saúde. Ciênc Saúde Coletiva 2004; 9:329-38.
17. Weiss C. The many meanings of research utilization. Public Adm Rev 1979; 39:429-31.
18. Shadish WR, Cook TD, Leviton LC. Foundations of program evaluation: theories of practice. Thousand Oaks: Sage Publications; 1990.
19. Kirkhart KE. Reconceptualizing evaluation use: an integrated theory of influence. In: Caracelli VJ, Preskill H, editors. The expanding scope of evaluation use. San Francisco: Jossey-Bass; 2000. p. 5-24. (New Directions for Evaluation, 88).
20. Henry GT, Rog DJ. A realist theory and analysis of utilization. In: Henry GT, Julnes G, Mark MM, editors. Realist evaluation: an emerging theory in support of practice. San Francisco: Jossey-Bass; 1998. p. 89-102. (New Directions for Evaluation, 78).
21. Rich RF. Measuring knowledge utilization: processes and outcomes. Knowledge and Policy: The International Journal of Knowledge Transfer and Utilization 1997; 10:11-24.
22. Patton M. Utilization-focused evaluation. Newbury Park: Sage Publications; 1997.
23. Forss K, Rebien CC, Carlsson J. Process use of evaluations: types of use that precede lessons learned and feedback. Evaluation 2002; 8:29-45.
24. Melo MA. As sete vidas da agenda pública brasileira. In: Rico EM, organizador. Avaliação de políticas sociais: uma questão em debate. São Paulo: Cortez Editora/Instituto de Estudos Especiais; 1998. p. 11-28.
25. Lennarson-Greer A. Advances in the study of diffusion of innovation in health care organisations. Milbank Mem Fund Q Health Soc 1977; Fall: 505-32.
26. Bowen S, Zwi AB. Pathways to "evidence-informed" policy and practice: a framework for action. PLoS Medicine 2005; 2:600-5.
27. Bardach E. A practical guide for policy analysis: the eightfold path to more effective problem solving. New York: Seven Bridges Press; 2000.
28. Majone G. Evidence, argument and persuasion in the policy process. New Haven: Yale University Press; 1989.
29. Davies H, Nutley S, Smith P, editors. What works? Evidence-based policy and practice in public services. Bristol: The Policy Press; 2000.
30. Nutley S, Webb J. Evidence and the policy process. In: Davies H, Nutley S, Smith P, editors. What works? Evidence-based policy and practice in public services. Bristol: The Policy Press; 2000. p. 13-4.
31. Haas PM. Introduction: epistemic communities and international policy coordination. Int Organ 1992; 46:1-21.
32. Adler E, Haas PM. Conclusion: epistemic communities, world order, and the creation of a reflective research program. Int Organ 1992; 46:390.
33. Albaeck E. Why all this evaluation? Theoretical notes and empirical observations on the functions and growth of evaluation, with Denmark as an illustration case. Canadian Journal of Program Evaluation 1996; 11:1-34.
34. Hanney S, Gonzalez-Block M, Buxton M, Kogan M. The utilisation of health research in policy-making: concepts, examples and methods of assessment. Health Res Policy Syst 2003; 1:2.
35. Pittman P. Allied research: experimenting with structures and processes to increase the use of research in health policy. In: Global Forum for Health Research final documents [CD-ROM]. México DF: Global Forum for Health Research; 2004.
36. Furtado JP. Um método construtivista para a avaliação em saúde. Ciênc Saúde Coletiva 2001; 6:165-81.
37. Kothari A, Birch S, Charles C. Interaction and research utilization in health policies and programs: does it work? Health Policy 2005; 71:117-25.
38. Lomas J. Improving research dissemination and uptake in the health sector: beyond the sound of one hand clapping. Ontario: Centre for Health Economics and Policy Analysis, McMaster University; 1997.
39. Lomas J. Using "linkage and exchange" to move research into policy at a Canadian foundation. Health Aff (Millwood) 2000; 19:236-40.
40. Lavis JN, Ross SE, Hurley JE. Examining the role of health services research in public policymaking. Milbank Q 2002; 80:125-54.
41. Landry R, Amara N, Lamari M. Climbing the ladder of research utilization. Sci Commun 2001; 9:171-82.
42. Landry R, Lamari M, Amara N. The extent and determinants of utilization of university research in government agencies. Public Adm Rev 2003; 63:192-205.
43. Knott J, Wildavsky A. If dissemination is the solution, what is the problem? Knowledge: Creation, Diffusion, Utilization 1980; 1:537-78.
44. Tangcharoensathien V, Wibulpholprasert S, Nitayaramphong S. Knowledge-based changes to health systems: the Thai experience in policy development. Bull World Health Organ 2004; 82:750-6.
45. Bronfman M, Trostle J, Langer A. De la investigación en salud a la política: la difícil traducción. México DF: Instituto Nacional de Salud Pública/Editorial El Manual Moderno; 2000.
46. Sauerborn R, Nitayarumphong S, Gerhardus A. Strategies to enhance the use of health systems research for health sector reform. Trop Med Int Health 1999; 4:827-35.
47. Rogers E. Diffusion of innovations. 3rd Ed. New York: Free Press; 1988.
48. Greenhalgh T, Robert G, MacFarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q 2004; 82:581-629.
49. Pellegrini Filho A. Ciencia en pro de la salud. Notas sobre la organización de la actividad científica para el desarrollo de la salud en América Latina y el Caribe. Washington DC: Organización Panamericana de la Salud; 2000. (Publicación Científica y Técnica, 578).
50. Pêgo RA, Almeida CM. Teoría y práctica de las reformas en los sistemas de salud: los casos de Brasil y México. Cad Saúde Pública 2002; 18:971-89.
51. Pêgo RA, Almeida C. Ámbito y papel de los especialistas en las reformas en los sistemas de salud: los casos de Brasil y México. Notre Dame: The Helen Kellogg Institute for International Studies/University of Notre Dame; 2002. (Working Paper, 299).
52. Habicht JP, Victora CG, Vaughan JP. Evaluation designs for adequacy, plausibility and probability of public health programmes performance and impact. Int J Epidemiol 1999; 28:10-8.
53. Victora CG, Habicht JP, Bryce J. Evidence-based public health: moving beyond randomized trials. Am J Public Health 2004; 94:400-5.
Correspondence
C. Almeida
Departamento de Administração e Planejamento em Saúde
Escola Nacional de Saúde Pública Sergio Arouca, Fundação Oswaldo Cruz
Rua Leopoldo Bulhões 1480
Rio de Janeiro, RJ 21041-210, Brasil
calmeida@ensp.fiocruz.br
Submitted on 16/Mar/2006
Approved on 20/Mar/2006