DEBATE

Zulmira M. A. Hartz 1


Institutionalizing the evaluation of health programs and policies in France: cuisine internationale over fast food and sur mesure over ready-made

Institucionalizando a avaliação de programas e políticas de saúde: culinária e corte-costura nas lições francesas


1 Departamento de Epidemiologia, Escola Nacional de Saúde Pública, Fundação Oswaldo Cruz. Rua Leopoldo Bulhões 1480, 8o andar, Rio de Janeiro, RJ 21041-210, Brasil.

Abstract  The purpose of this article is to describe several chronological milestones in institutionalizing the evaluation of public programs and policies in France, from a governmental perspective and in the health sector, situating such references in the international context. The institutional nature of evaluation implies integrating it into an action-oriented model, linking analytical activities to management, thus constituting the formulation of an evaluation policy for policy evaluation. The study focuses on issues related to the structure, practice, and utilization of evaluation results, as well as other characteristics providing the French model with a certain resistance to traditional "fast-food" or "ready-made" methodological approaches. The institutionalization of sectorial evaluation appears more promising than that of the government's centralized channel, despite the work developed by a Scientific Evaluation Council, and suggests avenues for reflection and debate pertaining to the Brazilian Unified Health System.
Key words Program Evaluation; Health Policy; Health System; Health Planning

Resumo Este texto tem o propósito de descrever alguns marcos cronológicos de institucionalização da avaliação de programas/políticas públicas da França, na perspectiva governamental e no setor saúde, contextualizando-os no âmbito internacional. O caráter institucional da avaliação supõe integrá-la em um modelo orientado para ação, ligando atividades analíticas às de gestão, constituindo assim uma formulação da política de avaliação para avaliação de políticas. São focalizadas questões relacionadas à estrutura, à prática e à utilização dos resultados da avaliação, bem como outras características que conferem ao modelo francês uma certa resistência ao "fast food" ou "prêt-à-porter" das abordagens metodológicas tradicionais. A avaliação setorial se revela mais promissora em sua institucionalização do que o dispositivo centralizado do governo, apesar do trabalho desenvolvido por um Conselho Científico de Avaliação, sugerindo pistas de reflexão e debate na perspectiva do SUS.

Palavras-chave Avaliação de Programas; Política de Saúde; Sistema de Saúde; Planejamento em Saúde

"One important question then, when one compares experiences from one country to another, is what lessons can be learnt...there is always a question about learning from the errors of others. Does one have to go through the same path, to be convinced through personal experience of what is right or wrong, even though one has been told, or can one skip stages to come directly to what seem the right points?" (Pouvourville, 1997:170).


Introduction


While the evaluation of public programs and policies appears to enjoy consensus, given the need to know the effects of such interventions, the models used in its institutionalization, that is, the structures, agencies in charge, objectives, methods, and utilization of results, vary from one country to another. In Brazil and other South American countries, the practice of evaluation as a public function is rare, although it has occupied increasing space in the legal and technical-scientific literature (Hartz & Pouvourville, 1998). I believe, however, that one can take advantage of the lessons learned from the countries more advanced in evaluation programs for the evaluation of programs, as in the case of the US model for agencies in charge of public health interventions (PHS, 1996), or from the difficulties experienced by those who have more recently begun to build an evaluation policy for the evaluation of policies. France is one of the latter, with the advantage of being one of the European countries with a major influence on Brazil's state model for public administration, besides displaying greater similarity to our university and scientific research infrastructure, as highlighted by Novaes (1992), who cites it as a reference for understanding the determinants of the incorporation and dissemination of new medical technologies by the health sector.

My objective here is to present the characteristics of the French experience based on a review of the literature, beginning with a chronological summary of events related to the evaluation of public policies in general and then concentrating on the health sector, where I supplemented the bibliographical information with a site visit and interviews with the people in charge of the French evaluation units at the ministerial level, adapting the interview protocol used by Love (1996). This account is accompanied by comments on other international experiences with institutionalization, drawing on common denominators or on a certain specificity of the French context, thereby contributing to reflection and debate on adaptable or avoidable pathways and shortcuts for a sectorial evaluation policy in Brazil.

I begin with some information about France and its health system. This developed country, with nearly 60 million inhabitants, is a parliamentary democratic republic whose Constitution defines health as a fundamental right of all citizens, specifically ensuring health protection for mothers, children, and the elderly. The government (Council of Ministers) executes the parliamentary laws and is responsible for the programs and policies that have been approved. Local and regional governments are responsible for administering local services under the aegis of the Ministries, as in the case of the public hospital network (two-thirds of the 247,813 beds). The central government is in charge of health planning, establishing the mechanisms for regulation and control (Weill, 1995). France spent 9.8% of its GNP (Gross National Product) on health care in 1993, the highest share among the European countries. The value per capita was $1,835 (in purchasing power parities), compared with $3,299 (USA), $1,213 (UK), and $1,815 (Germany). Coverage against the financial costs of illness is nearly universal, with less than 1% of the population still without insurance (Tonnelier & Lucas, 1996). The country's level of industrial development is similar to that of such other nations as Canada and the United States, and it faces the same challenges of restructuring its health system to meet new demands from the elderly and other population groups (Battista et al., 1995).

Before presenting a chronology of the events related to the history of the evaluation of public policies in France, it is important to define several terms whose concepts can vary from one author to another or across different fields. The word "evaluation", as part of a prêt-à-penser discourse, a buzzword or mot passe-partout, can be "dangerous" if left undefined (Gremy et al., 1995). Our study treats evaluation as an institutional activity intended to become part of public management and the functioning of the political system, based on (but not limited to) evaluative research. The purpose of evaluation is to foster an improved value judgment, but also to enhance the practical implementation of a policy or the functioning of a service (CSE, 1996). It may cover an intervention as a whole or any one of its components (Contandriopoulos et al., 1997). In the public domain, the fundamental idea is an action program whose frame of reference is the reality on which one intends to act (Henrard, 1996a). Jorjani (1994:69) speaks of the "evaluation of public programs", proposing a conceptual framework that considers the intersectorial aspects of interventions, representing "new approaches to governance that are changing the basic thinking of modern politics".

The focus on the evaluation of programs is a priority all over the world, including in the academic literature on the subject, since a program is the better-defined object of approach, favoring an empirical evaluation of results analogous to a scientific test of the validity of theories, verifying hypotheses about the association between the means employed and the effects obtained (Perret, 1995). In France, what is generally mentioned is the evaluation of public policies, but this usually pertains to the evaluation of programs, particularly with regard to transfers from the economic to the social sector. In England, a paper on policy evaluation deals simultaneously with programs and policies, where the notion of policy is similar to the French logic of the means employed to achieve objectives (Perret, 1994). The Inter-American Institute for Social Development (Indes), in its course for Latin American and Caribbean leaders, combines the design and management of social policies and programs in a common training objective. Summing up, in France, as elsewhere, the distinction between policies and programs is not always clear in evaluative research (Perret, 1995), as in the cases of Germany (Wollmann, 1997), Sweden (Furubo & Sandahl, 1996), and Spain (Ramos, 1996; Ballart, 1997), which justifies this preference for treating them jointly from the point of view of institutionalization.

To institutionalize evaluation, in the sense employed here, means to integrate it into an organizational system whose behavior it is capable of influencing, i.e., an action-oriented model necessarily linking analytical and management activities (Mayne, 1992). The institutional nature of evaluation also presupposes a formal definition of the respective responsibilities of the commissioners (those who commission the evaluation) and the evaluators, so that in principle the results of the knowledge produced can be appropriated and integrated into their own view of reality (Perret, 1995). According to Mayne (1992), the decision to institutionalize evaluation at the federal government level requires a national definition of a minimum set of policy guidelines, incorporated into our discussion as:

• purposes and resources attributed to the evaluation (structure);

• location and methodological approaches of the tier(s) in the evaluation (practice);

• relations established with management and decision-making (utilization).


Evaluation of public policies in France (Table 1)


Bion (1994) links the beginning of the French institutionalization process to the publication of the Rapport Deleau (1986), but the founding of the Office Parlementaire d'Évaluation des Choix Scientifiques et Technologiques (1983) merits equal attention, having been inspired by the Office of Technology Assessment, linked to the US Congress and an important milestone in the institutionalization of Congressional evaluative practices. The objective of the Office Parlementaire is to inform Parliament in such a way as to orient decision-making (Maury, 1997). It consists of eight Deputies and eight Senators, with no hierarchy between them, assisted by a 15-member scientific board. Demands for evaluation may come from either of the two Chambers, the chairman of a political party, a group of Senators (minimum of 40) or Deputies (minimum of 60), or from the Office's own initiative. A feasibility analysis based on state-of-the-art knowledge of the issue at hand precedes all studies, and one or more rapporteurs are chosen and provided with the necessary means to perform their tasks (including the possibility of conducting audits in all state agencies, except on issues of national defense). Despite all these facilities, Maury (1997) points to the limits of its contribution: the work is concentrated on problems rather than on public policy per se; the autonomy and power of the Scientific Council are limited and subject to delegation without proper methodological adjustments; and no report has been produced nor has the Board met since 1994. In short, le parlement a, un peu comme M. Jourdain fait de la prose, toujours fait de l'évaluation (Maury, 1997:2).

The Rapport Deleau (1986) presents evaluation in the French political and administrative context as a means to identify and measure the effects of public policies with methodological rigor, adopting an experimental or cognitive perspective and utilizing objective data and external evaluators, based on the principle that on ne peut pas être juge et partie (Bion, 1994). The Rapport Viveret (1988), in turn, criticizes the preceding report, assuming a political approach in which evaluation is no longer considered a measurement tool. In this view, elected representatives are the ones who can legitimately pass judgment on the value of policies, given their inherent subjectivity. Evaluation is thus seen as necessarily contradictory, ensuring the plurality of points of view and adding a tribunicienne function (Bion, 1994).

In 1989 the Prime Minister's Memorandum, considered the backbone of the renewal of public administration, took on the task of promoting policy evaluation, resulting in 1990 in the Decree on the Evaluation of Public Policies. Institutionalization was thus formalized under the aegis of the Executive Branch and considered essentially relevant for the central government. The official text drew on tendencies from the previous reports (Perret, 1995) and created the Interministerial Evaluation Committee (Cime) and the Scientific Evaluation Council (CSE).

The Cime is chaired by the Prime Minister, having as permanent members the Ministers of Planning, Economy, Budget, Interior, and Public Affairs, but with no representation from Parliament, thus making it impossible for the latter to convene it (Maury, 1997). An evaluation by the Cime provides the respective Minister with the possibility of accessing information on programs, thus in a sense cutting across various administrative echelons.

The CSE's members are named by the Prime Minister for a term of six years; they are chosen on the basis of proven competence and are commissioned to define a deontology for evaluation with regard to form and methods, but not the bien fondé of a given project. Its autonomy is intended to be guaranteed by the non-renewability of its term and by the fact that it is not limited to demands raised by the Cime. It is also supposed to gather and publish information pertaining to evaluation and to contribute to the training of specialists and the development of evaluative research. Priority projects for the Cime are expected to be analyzed by the CSE as a condition for funding from the FNDE (Fonds National de Développement de l'Évaluation). The Council's roles also include methodological assistance (CSE, 1996), and its final opinion on a study is based on the coherence between the report, other studies on the same theme, and the recommendations and analysis. These attributions allow the Council to capitalize on experiences and to become familiar with the limits and facilities experienced by evaluators (CSE, 1996). The Economic and Social Council, a third constitutional assembly with representatives mainly from among the social partners, may use up to 20% of the FNDE on the condition that the projects be submitted to the CSE (Perret, 1995). The Commissariat Général du Plan prepares the rulings by the Cime, oversees the enforcement of decisions, manages the FNDE, and raises and organizes the demand for evaluation from the Ministries.

The Memorandum on State-Regional Contractualization (1994-1998) provides for the evaluation of certain priority programs, and projects are expected to comply with the above-mentioned guidelines. According to Monnier (1995), even while the practice of contractualization in a context of co-responsibility runs the risk of being limited to a simple regulation, it is interesting to note that certain administrations have permanent institutionally established mechanisms, and these local devices have generally imitated the institutional architecture adopted by the Prime Minister.

The year 1996 witnessed the creation of another Office Parlementaire, specific to the evaluation of public policies, with the chair alternating once a year between the heads of the two chambers of Parliament. The partners in evaluations were the same as those of the 1983 ruling, but with more limited powers (Maury, 1997). Perret (1995), a member of the CSE, also recalls that other agencies such as the CNE (Comité National d'Évaluation des Établissements Publics à Caractère Scientifique, Culturel et Professionnel), the CNER (Comité National d'Évaluation de la Recherche), and the Cour des Comptes may also pass judgment on the efficacy of public actions. Nevertheless, Maury (1997) observes that the legislation is not precise about the objectives of the new parliamentary office and sees other issues as problematic: the absence of a Scientific Council (turned down by the Senate) and the lack of a dedicated staff or fixed budget.


The international context

To get a better understanding of the trend, it is relevant to add a few remarks on other experiences around the world. From a less restricted historical perspective, the roots of the evaluation of social policies or programs date far back, but it was only formally expressed in the West in the 1960s, through the Planning Programming Budgeting System (PPBS) logic, beginning in the United States and followed by Canada, Germany, and Sweden. During the 1980s there was an explosive trend in its implementation in the Netherlands, Denmark, and Switzerland (Albaeck, 1996). During that same period a new trend also emerged in the United States and Canada which has recently taken hold in the European countries, i.e., the development and growing salience of performance audits by the Supreme Audit Institutions: efficiency, effectiveness, good management, and quality of services have become the focus of attention, contrary to the emphasis previously placed on legal and procedural requirements in government expenditures (Pollit, 1997). This same trend has been observed in other countries, where auditing spheres have pushed the development of evaluation to the point of fomenting the creation of European societies of evaluators since 1994. One thus understands why the General Accounting Office (GAO) and its Program Evaluation Methodology Division (PEMD-GAO), created in 1980, play such an important role in responding to demands from Congress and from different countries, making their studies and guidelines available. A good example in the biomedical field is the publication Cross Design Synthesis (GAO, 1992). As part of a top agency for verifying public accounts (similar to France's Cour des Comptes) and linked to the US Congress, this GAO division specialized in evaluation has a hundred employees (Perret, 1995).

With regard to the structure of institutionalization models, most governments that have implemented relatively successful evaluation policies (even though with different approaches) have involved the interest of the budget sectors, as in the cases of Canada, the United States, Australia, and Great Britain (Mayne, 1992). According to the classification proposed by Monnier (1995), France would be included in this group of countries with greater management efficacy: scientific criteria as the basis for legitimacy; problem-solving; modernization of public services; and the use of independent evaluators with no direct link to the budget.

Hudson & Mayne (1992) compared the Canadian and US experiences and found that despite the heavy influence from the United States, considered the pioneer in the evaluation of social programs, the two countries differ in that evaluation in the United States is part of a logic of social experimentation to judge the effectiveness of social programs a priori, while in Canada the implementation of such programs is primarily conditioned by the political debate. As for specific financial resources, although there are few references in the literature, Canada spent US$28.5 million in 1991-1992 (CES, 1994), while the United States passed a law in the 1970s authorizing the use of 1% of the funds allocated to health programs for evaluation, a total of US$100 million (GAO, 1993).

With regard to the specific issue of the decentralization of evaluation, i.e., the relationship in actual practice between the Legislative and Executive Branches, or the attributions of decision-makers, evaluators, and quality controllers in evaluation (that is, the central, departmental, and program levels), Canada encouraged a decentralized approach. Thus, as early as the 1960s, the central government provided funds for evaluation studies at the provincial government level. The absence of a satisfactory response to this initiative by the Provinces was interpreted by Marceau et al. (1992), analyzing the situation in Québec, as a refusal of the prevailing PPBS logic, which created difficulties between politicians and administrators, since all information on performance and real program costs was centralized at the single level where allocation decisions were made. In 1978-1979 the creation of an Office of the Controller General (OCG) launched a process of institutionalization per se as a policy guideline, in which the central level kept its role of supervision, control, and technical support, elaborating evaluation guidelines. The principle of ministerial responsibility determines the practice of program evaluation, as it determines other practices in the administration of the Canadian parliamentary system, where evaluation as a process is more sectorial than central (Marceau et al., 1992).

According to Hudson & Mayne (1992), the efficacy of this Canadian institutional evaluation device hinged on an official mechanism for the external assessment of internally performed evaluations. One indicator of the success of this new guideline could be the incentive for professionalization expressed by the Canadian Evaluation Society (CES), founded in 1981, which had 1,400 members by 1991. On the other hand, the system was never all-inclusive. A 1993 report showed that only one-fourth of public expenditures had been evaluated from 1985 to 1992, and that the evaluations themselves had been subject to problems, concentrating only on small observational units without analyzing joint effects or ever questioning the overall cost or existence of the respective program (CES, 1994).

The closing of the OCG, as part of the reorganization of the evaluation system conducted by the Treasury Council, suggests that a good formula had still not been reached, a fact anticipated by Marceau et al. (1992) when they stated that the policy-making and administrative machinery still faced problems in adjusting to this "foreign body", i.e., evaluation. Under the new institutionalization model, evaluation and verification were subordinated directly to the Secretariat of the Treasury Council, although their execution remains a responsibility of the Ministries, which now have an independent professional in charge (vis-à-vis the activities evaluated), organizational visibility, and direct access to the Assistant Minister (Turgeon, 1997). The Secretariat is limited to supervising the actual implementation of the new policy, and the evaluators (based on their role with the managers and program heads) attempt to facilitate quality improvement and efficiency in procedures (the measures employed must include assessment by users of the respective public services).

According to Muller-Clemm & Barnes (1997), this change since 1993 resulted from the inability of the OCG to reduce the gap between promises and performance in government activities. It also increased the risk of a throwback to "hard facts" by constituting a single governmental sector for auditing and evaluation, marginalizing the so-called "soft" products of program evaluation. The issue remaining for the authors was "whether or not this order is too tall for the evaluator to fill".

In my opinion this change ascribes a role to the Treasury Council similar to that of the Finance Ministry in the Australian model. A six-year inventory of the Australian system by MacKay (1994) notes excellent performance, to the point of 290 evaluations having been performed, a merit attributed particularly to the catalyst/coordinator role played by the Finance Ministry. According to the author, "...finance's advocacy of evaluation as a worthwhile activity is more likely to be influential with portfolios than if evaluation had been the responsibility of a stand-alone specialist organization that is perceived as tangential to mainstream government activities", with the Canadian model fitting into this latter category. In Australia there has been a relaxation of controls, offering methodological support when requested and favoring the construction of networks of evaluators, which "helps the managers manage". Without getting involved directly in the process, they complement the work philosophy with the slogan "letting the manager manage". Awards are provided for achieving the planned results, and a three-year evaluation plan is mandatory for each program, the report on which is usually published. This publication is believed to foster the utilization of the data and quality improvement in the approaches employed, in addition to avoiding "[steering] the evaluation away from the difficult questions or suspect areas" (MacKay, 1994).

Reviewing the action-oriented logic discussed by Mayne (1992) and adopted as a reference in this analysis of institutionalization, the legitimacy of the process lies in its capacity to influence decision-making through the utilization of evaluation data. Still, there is no consensus as to the facilitating factors in this process, which vary according to whether one works with the rational or the political approach (Albaeck, 1996). Suffice it to say that the rational approach captures the most stable components of organizational behavior but fails to consider power conflicts except as indicators of failure. According to the author, without denying a certain rationality of behavior, organizations constitute themselves as political arenas for negotiation and bargaining, meaning that "the very choice to utilize evaluation research is politics". These two theories involve different research methods or paradigms (positivist and constructivist, respectively) and were appropriately labeled by Turgeon (1997) as prêt-à-porter and sur mesure. The former focused on experimental models in summative evaluations, whose essential characteristic was to know the program's impact, relying on the use of quantitative data. It corresponded to a model such as the Civic Textbook View of the Role of Science and Evaluation Research, criticized by Sabatier (1997) for supposing that scientists and evaluators are neutral and that their data are presented in "unbiased fashion to policy makers". The latter, sur mesure, identified with the constructivist paradigm and of a more qualitative nature, placed the evaluator in a position of listening to the political arena, seeking greater utilization of the results through the permanent involvement of the various players influencing the decision-making process. This characterized the "fourth stage of evaluation" identified by Guba & Lincoln (1990, apud Contandriopoulos et al., 1997), which, although encompassing the accumulated knowledge of previous stages (validity of measures, program effectiveness, and value judgments), focused on the negotiation between players. It was also analogous to "empowerment evaluation", which could be translated as évaluation axée sur l'autonomie, providing for active participation by program heads (and ideally users, too) in the evaluation process and the resulting decisions (Rowe, 1997).

According to Chen (1997), who formulated the concept of "theory-driven evaluation", it is crucial to reconcile quantitative and qualitative analytical techniques, which are not conflicting but rather constitute different possibilities available to evaluators for dealing with the specificity of the problems entrusted to them. This focus is becoming a consensus amongst the various institutionalization models studied. Thus, one already notes in the Canadian guidelines from 1991 that evaluations were to be designed with multiple lines of evidence, including carefully selected quantitative and qualitative data (McQueen, 1992). In Switzerland, with a political structure typified by intensive direct democratic participation, data from evaluations are used extensively for consensus-building at the federal and cantonal levels to shed light on problems and foster solutions, employing multiple approaches: "more positivist-more constructivist; quantitative-qualitative; distant-participant; prospective-retrospective" (Bussmann, 1996).

Contrary to this intensive use of evaluation reported in Switzerland, a preliminary inventory of the first five years of the French governmental ruling presented by Perret (1995) showed that only 17 projects were analyzed by the CSE, and Weill (1995) pointed out that health policies were not among even this small group. On the positive side, Perret identified the open nature of discussions within evaluation spheres, allowing for a broad grasp of the evaluation issue and its results by political players, although the effects on the decision-making process were insufficient. Leca (1997), chairman of the CSE, is somewhat skeptical of the ruling, noting that "evaluation appears to be a sacred cow": despite the lofty inauguration of the CSE (including even a seat for the President of the French Republic), interest in submitting projects waned during the period from 1991 to 1996 (with 10, 7, 6, 4, 1, and 0 projects per year, respectively).

Analyzing the institutional "import-export" flow of administrative reforms or modernization schemes in France, Rouban (1993) found that they have always functioned as loans or adaptations of organizational formats originating either from abroad or from the private sector. Resistance to evaluation, or the ascription to it of what might appear to be a minor role, might be better explained by the fact that evaluation is not limited to a merely administrative adaptation. It challenges the primacy of decision-making in the hands of public employees by fostering intervention by diverse players, including representatives of political pressure groups, to change the work of the public sector, as in the case of the United States. The author emphasizes the importance of evaluation as a possibility for solving the historical impasse of the bicephalous system (President versus Congress) in US public administration through administrative and budget control (accountability), which for years impeded reforms in US democracy. Thus the GAO, in its capacity as an auditing and evaluating agency with a pedagogical mandate, is symbolic of the capacity to give Congress a pouvoir de bourse, situating itself at the crossroads between the two branches of government. Taking advantage of a program-based cost structure, clearly identified at the budget and policy levels, evaluation has acted to counterbalance the separation of powers, regulating the political game between the various players. While the author recognizes that evaluation cannot be done without institutionalization (training of top personnel, adequate organization, and financial management), as shown by the experiences discussed herein, evaluation cannot turn politics into economics. The success of the United States is partially due to a public administration rooted in the acknowledgment of citizens' rights to control the details of public activity and morals. Thus, without pretending to turn the "democratic scene into statistical democracy", which would destroy the principle of imputation, for evaluation to emerge from its modest space of management control and achieve a general recomposition of the relations between administration and politics presupposes an answer to the question of what form of democracy is conceived for the French political system (Rouban, 1993).


Chronology of sectorial evaluation in health (Table 2)


The perinatal program implemented in the 1970s was a milestone in sectorial institutionalization, illustrating the relations between decision-making and the technoscientific knowledge generated by evaluative research. According to Chapalain (1996a), analysis and evaluation methodology was used for the first time in public health at the national level. An economic analysis (ex-ante) and epidemiological surveys conducted by the Inserm (Institut National de la Santé et de la Recherche Médicale) supported the policy choices expressed as administrative and regulatory measures and budget credits (Rationalisation des Choix Budgétaires, RCB). This policy was based on the PPBS, a methodological legacy of the US defense sector used in the renewal of public administration and in the institutionalized evaluation practices mentioned above. On the other hand, it did not involve the same US ambitions of maximizing the expected benefits as a function of costs, incorporating from the beginning the critical view of American theoreticians who, in the late 1960s, called attention to the importance of replacing the "optimization" of public policy results with the search for a "satisfactory" character in the improvement of this same policy. Thus, the perinatal studies were always considered instruments for negotiation between evaluators and décideurs, with a strong relationship developed right from the beginning of the study (Chapalain, 1996a).

Another initiative that deserves highlighting involves the studies by the Groupe de Réflexion sur l'Économie des Transports Urbains (Ministère des Transports/École Nationale Supérieure des Mines de Paris, 1974-1978), focusing on the country's highway safety program, which can be viewed as avant-gardistes and close to "empowerment evaluation" or sur mesure evaluation, underscoring the mechanisms for negotiation and identifying the evaluator's limits as l'homme d'étude, at the crossroads between intellectual and political endeavor. In addition, they conceived of and provided an excellent exercise in the role of the pilot committee, defined by them as a group activity facilitating the work of the commanditaire in turning a vague question into an object of research, accompanying its construction, and providing technical support to the researchers in their methodological and operational options.

The year 1982 witnessed the founding of the Cedit (Comité d'Évaluation et de Diffusion des Innovations Technologiques), linked to the Parisian public hospital system (Assistance Publique-Hôpitaux de Paris, AP-HP). Its objective was to summarize scientific data and conduct studies prior to the dissemination of technological innovations, thereby helping improve the sectorial institutionalization model. Some of its studies showed the practical importance of the evaluators' role in mediating between scientific knowledge and decision-making, both from the medical point of view (efficacy and risks) and from the organizational or economic point of view (impact on functioning and budget), as well as the challenges posed by such performance in terms of professional training (Pouvourville & Minvielle, 1995).

In 1985, the conclusions of the Report by the Ministry of Health emphasized the deficiencies in the evaluation of medical technology, with the Cedit appearing as the only specialized agency available in France. The report recommended, at a consensus meeting, the creation of a specialized autonomous foundation, a recommendation only fulfilled with the creation of the Andem (Agence Nationale pour le Développement de l'Évaluation Médicale), made possible when the Socialists regained the majority in Parliament (Weill, 1995). However, I feel that two intervening factors contributed to pushing forward and implementing the report's recommendations. The first was the creation of the Sofestec (Société Française pour l'Évaluation des Soins et Technologies), considered a French version of the International Society for Quality Assurance, with the main objective of bringing together experts from various institutions to disseminate methods and techniques whose results had been evaluated. The second was the naming of the Comité National pour l'Évaluation Médicale des Soins de Santé (CNEM), commissioned to discuss ethical problems and methodological issues in institutional evaluation with a view towards defining national priorities; it brought together leaders and authorities from the health sector, but had no budget or formal timetable of its own (Weill, 1995).

The creation of the Andem in 1989, with the autonomy recommended in the 1985 report, fostered the dissemination of evidence-based knowledge in medical practice and helped define methods for technological evaluation. It also served as a scientific consulting body for the National Health Insurance Fund (CNAMTS) and the physicians' unions. The original budget of US$1.5 million had increased to US$5 million by 1992. Evaluation themes are proposed by its Board (representatives from the Ministries of Health, Education, Research, and Agriculture, the CNAMTS, the CNEM, etc.). Evaluation of medical technologies has formal status as a national project: "The emphasis on technology assessment must be placed in the wider context of the French government's concern about lack of evaluation of public programs in general during a time of economic difficulties. The need to assess public policy and programs was indicated by several reports as a much-needed goal" (Weill, 1995). From 1992 on there was close collaboration with the Agency for Health Care Policy and Research (AHCPR), and by 1996 over 100 consensus meetings had been held (Durieux & Ravaud, 1997).

The field of public health was enhanced through the Special Committee for Research in Prevention and Evaluation, created by Inserm, originally as an ad hoc committee, with funds coming from the national health insurance system. This committee has been an important catalyst, since epidemiologists, economists, and social scientists are now much more involved than before in evaluation projects (Weill, 1995).

The 1990 administrative reform, recommending the decentralization of services through the Schémas Régionaux des Organisations Sanitaires (SROSS), was a strong argument for the Hospital Act of 1991 in the sphere of health care system reform. The new regulatory framework reviewed the principles of the 1970 Act, which had organized medical care only on the basis of structural indicators of the beds/inhabitants type, attempting to adapt them to the health objectives established by the SROSS. It was an attempt to move from an administrative type of logic to one of opportunity oriented by contractualization (Guers-Guilhot, 1997) and to build a "new public health" rather than merely reproduce the central model, demanding the adaptation of intervention instruments consistent with local needs (Henrard, 1996). The law provides for "the need for evaluation, respect for patients' rights, and the concept of universal health care. Evaluation, an important yet undefined concept, has become through this law a leading channel for health care regulation, management and planning in France" (Weill, 1995). A study by Michel et al. (1997) gives concrete examples of the process of evaluation of professional practices and of a quality assurance program evolving towards a regional health policy. In order to implement this new sectorial evaluation policy, the legislation provides for the creation of two new mechanisms:

1) Regional Committees for the Evaluation of Medical and Hospital Care (Cremes).

Interdisciplinary teams (2 hospital physicians, 1 clinician, 1 midwife (sage-femme), 1 hospital director, 2 biomedical engineers, and 2 professionals representing the Andem), named by the regional prefects and commissioned to provide the necessary methodological support to the local level, where each public or private hospital should evaluate its activity in delivering quality care. These committees, not being permanent organizations and having no human resources or budgets of their own, leave doubts as to their efficacy. The Andem remains the support for the entire system, since it is in charge of validating the methods used in the planning process (Weill, 1995).

2) Bureau for Evaluation in the Hospital Department of the Ministry of Health.

Commissioned to function as a consulting body for issues related to the new hospital mission of evaluating care, the bureau has the broad responsibility of defining adequate, well-adapted methods for evaluating services as part of policies, under the objectives of public health and system performance at the local, regional, and national levels. In describing this new structure, Weill (1995) was concerned that its staff was limited at the time to one public health physician as a permanent member. The unit's main goal in its first two-year period was to consolidate the mise-en-place of a quality assurance policy, with actual compliance by the hospital system, in which evaluation was seen as one of the fundamental instruments. A special national budget allocated in 1994 as an incentive to hospitals that joined, through projects for the development of measures in qualité et sécurité sanitaire, is viewed as an indicator of the success of this strategy.

The bureau currently has a staff of four full-time professionals and a technical board (made up of physicians and nurses) with a small weekly time allotment, which has allowed for the implementation of the bureau and its initial proposal. An example of its activity was the utilization of "tracer conditions" for quality evaluation, applied so as to prioritize health problems amongst various population groups and reorganize health care resources, incorporated into a participant methodology in the form of a conférence de consensus (similar to the procedures adopted by the Andem), which proved to be a facilitating factor for the regionalization process of the health reform.

Under the ordonnances of 1996, the Regional Hospital Agencies (according to public health objectives) are directly responsible for contractualization of goals, evaluation of results obtained by various establishments in accordance with the guidelines of the Schèmas Régionaux des Organisations Sanitaires (SROSS) and the National Agency for Accreditation and Evaluation (ANAES, formerly Andem). Authorizations for initial and on-going functioning of services remain under the bureau, which is also in charge of constructing the evaluation model for monitoring the set of new measures as prevailing public health policy. One question demanding an immediate answer and summarizing this level's concern is how to evaluate the new territorial organization of health care.

The regulatory machinery for sectorial institutionalization was expanded in 1992 with the Act on the RMOs (Références Médicales Opposables). These mandatory medical guidelines were drafted by the Andem for out-patient care, bolstering the logic of a medicine based on scientific evidence (Pouvourville, 1997). The use of financial incentives and the articulation with the pharmaceutical industry at the time the guidelines were drafted indicate the factors expected to favor their implementation, to be renewed permanently based on contributions from evaluative research; the attempt to transpose this scheme to the hospital area, however, appeared impossible (Durieux & Ravaud, 1997).

With the decree on the mission of decentralized services and given the law's imperative wording, the DRASS (Directions Régionales des Affaires Sanitaires et Sociales) were to be involved in the evaluation of public policies, and each echelon, in coordination with other institutional partners, was to focus on the evaluation of programs and actions (Catinchi, 1995). The state was to be the "foreman" of evaluation, and although it was difficult to determine whether only one level (like the Comité technique régional et interdépartemental) could commission, coordinate, and/or perform evaluations, it was up to the state to ensure the coexistence of inspection-as-control and inspection-as-evaluation and of negotiated and validated norms and methodologies. Another French author (Geffroy, 1994) had already identified the combination of evaluation and regulation (distinguishing between but not opposing the domains of verification and evaluation, where the latter allowed for the validation of references for regulation) as the only approach capable of reconciling ethics, quality of care, and economics. According to Schaetzel & Marchand (1996), who analyzed some of these experimental regional projects, oriented by the PSAS (Programmation Stratégique des Actions de Santé), evaluation began to actually draw on the participation of local players, giving it an "unavoidable character". The following phenomena were already observed:

• consistency with the Haut Comité de Santé Publique in the quantification of objectives pertaining to the improvement of health status;

• emergence of programs oriented by a population approach, in a break with the preponderant concerns of the organization and management of health care structures and activities;

• affirmation of a common desire on the part of the DGS (Direction Générale de la Santé) and the ENSP (École Nationale de Santé Publique) of the Ministry of Health that evaluation be contemplated systematically in the training of career professionals; and

• raised awareness of a context subject to budget constraints, in which it is necessary to be accountable and argue for resource allocation.

Examples of the role that evaluation can play in the regionalization process are covered in the work of Abballeia & Jourdain (1996), on the emergency medical services in Bourgogne, and in the utilization of quality tracers for planning the SROSS in Lorraine (Garin et al., 1995).

The Bureau for Health Evaluation and Economics of the DGS, established in 1993 but only actually implemented in 1994, is in charge of defining the policy objectives for the evaluation of medical practices, with the broader challenge of institutionalizing the legal, financial, and organizational aspects of evaluation in the health field (Weill, 1995). It fell to this ministerial unit to conclude the regulatory framework for health reform in relation to the generalization and expansion of the evaluation mechanisms developed experimentally by the Bureau for Hospital Administration. With a staff of just one public health medical inspector and two economists, backed by a secretariat, the bureau clearly sought to maintain a non-executive profile vis-à-vis evaluation, the overall execution of which was entrusted to the ANAES. It is interesting to highlight the bureau's concern with maintaining a clear distinction between inspection and evaluation, despite the discourse of the auditing bodies being oriented increasingly towards the evaluation of results. Illustrating this issue, the physicians in charge of accreditation were not inspectors but visiting physicians, acting directly in hospital care. What remains to be decided, where irregularities are found, is how they articulate with the Public Inspection area.

The relationship between different bodies involved in health evaluation can be illustrated by the National Program for the Prevention of Breast Cancer, whose operational responsibility and funding come from the national health insurance system (CNAMTS). The Cahier des Charges (implementation guide) was drafted under the coordination of the Andem and standardized by the National Pilot Program Committee of the DGS, to orient the monitoring and evaluation of the program's impact by the CNAMTS with the same partnerships. In short, the role of the bureau appears to concentrate essentially on issues proper to institutionalization, i.e.: to introduce and maintain evaluation at the center of technical/policy decisions depending on the state apparatus and not only with the role of encouraging or facilitating under the aegis of professional bodies (with the exception of out-patient evaluation or médecine de ville). Its role as catalyst was illustrated, for example, by the special issue on l'Évaluation et Santé published by the Haut Comité de la Santé Publique, the editor of which was the head of the bureau, and the regulatory decrees (Ordonnances) in the health care system reform that made possible the creation of the ANAES (National Agency for Accreditation and Evaluation in Health).

The new laws or Ordonnances, whose main themes involved the maîtrise médicalisée of medical care expenses (96-345) and the reform of public and private hospitalization (96-346), substantially altered the public health and health insurance codes. They are consistent with the stance taken by Geffroy (1994): following the failure of accounting-based cost reduction, only regulation based on evaluation would provide legitimacy for cost reduction, beyond mere rationing, while allowing for an increase in the quality of care. Going beyond the limits of treatment practices and protocols to encompass organizational practices focused on solving problems at the population level, the main thrusts are the following:

• planning focused on priorities defined yearly in national and regional health conferences, duly backed by the Haut Comité de Santé Publique;

• adaptation of initial medical training and incentives for continuing medical education;

• promotion of experimental coordination models for the out-patient and in-hospital care systems over a five-year period, under the responsibility of regional agencies;

• transformation of the Andem into the ANAES, an executive evaluation unit per se coordinating a national and local network of experts, intended, among other things, to articulate the activity of the two ministerial evaluation units.

The most pertinent point for our analysis is the creation of the ANAES, as regulated by Decree 97-311 of April 7, 1997, whose budget is set by the Ministers of Health, Social Security, and Planning. Among the general provisions we find that the mission of the ANAES is "favoriser tant au sein des établissements de santé publics et privés que dans le cadre de l'exercice libéral, le développement de l'évaluation des soins et des pratiques professionnelles et mettre en oeuvre la procédure d'accréditation...". Note the agency's ability to acquire goods and property and to allocate, from its own budget, "subventions, prêts ou avances à des personnes publiques ou privées qui réalisent des études, recherches, travaux ou équipements concourant à l'application de ses missions". In addition, it can cooperate with individuals and corporations, either French or foreign, and "notamment avec des établissements d'enseignement, de recherche ou de santé qui ont des missions identiques ou complémentaires des siennes".

With regard to evaluation, the agency's annual and pluriannual program was to focus especially on the epidemiological profile, including health problems and their respective risk factors, evolution of available technologies (de la prévention à la réanimation), iatrogenic accidents, hospital infections, and materials and equipment still not validated in the health field. Administrative and Scientific Councils were to share technical and financial responsibilities. The scientific board has a single chairperson and two specific sections (evaluation and accreditation) whose members are not remunerated except for frais de déplacement et de séjour. A college of accreditation and a réseau national et local d'experts participate in the ANAES missions. Members of the college may be remunerated, while the wording does not mention this point for the réseau.

By way of concluding this brief description of sectorial initiatives in health-related evaluation, we should highlight that although in France there is still no certified (agréé) professional milieu for the evaluation of programs and policies (CSE, 1996), an important evaluation market is opening up to researchers. Thus, private consulting firms specializing in the evaluation of health services, medical technologies, and hospital management, like the CNEH (Centre National de l'Équipement Hospitalier, a semi-public organization until 1990), "establish databases, audit hospitals, and report medical projects for establishments made legal by the 1991 law" (Weill, 1995). Another indicator comes from the graduate training sector, where more than 20 courses are listed in the Annuaire des Formations à l'Évaluation Médicale en France (Andem, 1997).

Articles by Frossard & Jourdain (1997) and Parayra (1997) identify a promising scenario for the practice of collective learning by regional players at a moment when cost-effectiveness criteria are being reconceptualized and information systems restructured, exercising new regulation models that enhance the local decision-making process. This dynamism values planning and validation of theoretical models at the local level, since "les fonctionnaires ne doivent pas s'interdire de théoriser" (Basset & Haury, 1995).

Before contextualizing the French trend in the international scenario I should reiterate my justification for having used an integrated approach to the evaluation of programs and policies, in some cases involving medical care practices and technologies intrinsically related to them. Although I agree that these dimensions can be treated separately, whenever analyzing the effectiveness of public health interventions a systemic conceptual framework is necessary to understand these dimensions as observation units necessarily linked to the evaluation process. Thus, a change in a population's state of health cannot be limited to that of an individual, for whom an evaluation of the clinical efficacy of medical practices and technologies may be sufficient. It requires organized programmatic action, demanding collective choices proper to the field of public policies. Meanwhile, the implementation of a program derives from the evolution of available practices, which in turn are influenced by the development of medical technologies, organizational structures, and political priorities formulated for a collective body (Hartz et al., 1997). According to Schraiber (1997), "the program/plan" in response to technically (i.e., epidemiologically) defined health needs is part of the public policy field, at least in Brazil. In the case of France, this relationship can be illustrated by the evaluation of the breast cancer screening program formulated as a government policy, involving as crucial issues for collective intervention the strengthening of professional organization of radiologists and the multiplication of medical tests using types of equipment with varying degrees of performance (Gremy et al., 1995).


The international context

Focusing on the international context, the first common denominator one finds in the objectives and the structural and practical characteristics of evaluation is that it becomes imperative given the uncertainty surrounding health interventions and the results observed in individuals, a phenomenon that increases as one moves to objectives at the population level. Evaluation thus emerges as the best way to obtain information on the efficacy of a health system (Contandriopoulos et al., 1997). Recourse to evaluation is justified as an essential practice for the rationalization of medical activity and of decisions concerning resource allocation (Pouvourville & Minvielle, 1995).

Considering the importance of cost control in medical care and the worldwide crisis in social security systems, we are not surprised that the health sector was among the first to benefit from the PPBS logic, as in the case of the RAND (Research and Development) Corporation, the result of an American strategic research project during World War II, which in 1968 launched its Health Science Program (still one of the most important investments in civilian research) when Medicare and Medicaid were created (Gerbaud, 1995).

The health sector is also one of the most important areas for the PEMD-GAO, to the point of its having performed studies commissioned by the AHCPR, created by the US Congress in 1989 to analyze the effectiveness of medical interventions as public policy (GAO, 1992). The GAO led a study on evaluations performed in the 1988-1992 fiscal period, a sort of meta-evaluation of the work of the various levels of the Public Health Services Program Evaluation, concluding that the evaluations had been insufficient as a source of information for Congress. Another, more recent report (GAO, 1995) deals with an evaluation of the decentralization of grants, concluding in favor of greater flexibility in the frequency of reports required of the States so that they can concentrate on results; Congress only becomes more prescriptive in cases involving inadequate information systems on funded programs.

According to Myers (1996), while the United States and Canada have their specificities, both have evolved under the same Continuous Quality Improvement model, meaning that evaluators incorporate scientifically credible indicators reflecting patient satisfaction. This and other changes in evaluative practice speak in favor of plurality as the rallying cry for institutionalization models. The Programme d'Action Communautaire pour les Enfants (PACE), adopted under the new Canadian evaluation policy with an annual budget of US$33 million (10% of which is allocated to evaluation), is a good example of combining a national prêt-à-porter with a provincial sur mesure (Turgeon, 1997). Pettigrew (1996), in England, observes great interest both in experimental models extrapolated from clinical trials and in the American theories of "empowerment and democratic evaluation" that inspire evaluators engaged at the regional or local level.

A current program in Québec, Simad (Services Intensifs de Maintien à Domicile), illustrates the proper utilization of evaluation results in the (re)formulation of sectorial policies. It also shows the complementarity of an initial (quantitative) survey from 1983, which indicated the apparent failure of a previous program focusing on the elderly, and a subsequent qualitative case study, which suggested that the main problem was one of conceptualization and degree of implementation. Cabatoff (1996) summarized the latter's critical reaction to the former: "how could the author suppose ...that persons living at home with relatively severe disabilities ...might be influenced by short-term and sporadic services which amount on average to less than one hour per week?" The question also points to the ambiguity of the term "home care" as used by the program, meaning either medical and/or home services. The Simad proved to be a new intervention modality, evaluated in a pilot study that addressed the gaps in prior knowledge and demonstrated that "program evaluation methods are only better in the sense that they are better adapted to policy-making situations that they are attempting to influence" (Cabatoff, 1996).

Defending the importance of adjusting lines of research to the planned utilization of results does not mean reducing research to precise and immediate application. On the contrary, evaluative research should be encouraged on an ongoing basis, since its "timing" cannot be dictated by the urgency of the decision. The main objective of the National Health Forum, recently created by the Canadian government with the participation of experts in the field, is to understand how research can help the government build consensus on the means to maintain the system's efficiency while respecting les temps de la recherche (Champagne, 1996). A pathway worth exploring was suggested by Pouvourville (1992): organizing a regular flow of the knowledge generated by evaluations so that it is available to the Ministry of Health and the other agencies responsible for related activities.

While France institutionalizes evaluation as a mechanism for regulating the health system along the same lines described internationally, evaluation there is structured on a biomedical model (i.e., professional logic) rather than on epidemiology (as a response to population needs), a choice criticized by Henrard (1996b). In the former case, health policy is reduced to the sum total of practices, and research priorities are defined through new management instruments and the standardization of medical procedures, whereas the vast majority of health problems require integrating the two approaches, in keeping with our conceptualization. According to the author, the PSAS approach is promising in attempting to shift the scale of program interventions to population groups.

Conclusions

According to Leca (1997), the "official creed" of many public players reveals the mistakes of "évaluation à la française", i.e.: évaluation de premier secours (first-aid evaluation), which searches for an acceptable solution; aide au moral des troupes (boosting troop morale), which involves uncertain or dissatisfied players; and évaluation inutile (useless evaluation), in which the study is commissioned and attention then moves elsewhere.

One notes, however, that these representations recur at the international and intersectorial levels, since it is not a matter of mistakes, but of the many expectations proper to a field linked to political life. The report by the Ministry of Transportation/École des Mines (1978) had already indicated that one of the "universally employed" reactions when the established powers are questioned is to say, "...une étude va être engagée sur ce point... une étude sur ce point précis a montré que..." ("a study will be commissioned on this point... a study on this precise point has shown that..."). Mayne (1992), while defending the advantages of institutionalization, does not fail to call attention to the fact that "the most significant problem in institutionalizing evaluation in government is to reconcile people's different and competing expectations of evaluation... This may be the price to pay to be part of the complex yet intriguing web of politics and bureaucracy".

One can conclude that over the last decade evaluation has gained prominence in French public sector reforms, despite a drop in the demands made on the CSE. We agree with Perret (1994) when he points out that the advantages of the Council's overseer model are offset by the lack of a doctrine for employing evaluation, but this appears insufficient to us: we believe the issue cannot be resolved without the active voices of the Ministry of Finance and Parliament, as Nioche (1993) contends. Nioche identifies other ambiguities in the 1990 provision, including the following: its isolation from other ministerial evaluation practices, like those of the CNEU (Comité d'Évaluation des Universités), the CNER, or Andem; and the requirement of a prior expert opinion from the CSE, which must simultaneously draft non-restrictive recommendations and limiting conditions, a duty that, besides hampering its role, can in a sense make it co-responsible for the final assessment. The arguments of Nioche (1993) support those of Rouban (1993) on the French state: on the one hand, as is common under socialist governments, evaluation is a pluralistic space in which public authority enters the debate and is accountable to the public; on the other, it is difficult to renounce the comfort offered by an État gaullien, in which the administration is little accustomed to the principle of accountability and Parliament remains the poor cousin of evaluation.

France does not display what Pettigrew (1996) observes in England, where "evaluative activity is institutionalised as routine practice... the whole process of checking afterwards how far policy objectives have been achieved and how efficiently and economically". This model of British rationality has not become universal in France, but there is no lack of competence in the social or applied sciences, indicating that the country has at its disposal the fashionable elements of evaluation: knowing how to reconcile prêt-à-porter and sur mesure in this world of political and scientific "fashion design". Perret (1994) sums up the French model's specificity as a greater indefiniteness in the goals of evaluation compared to the Anglo-Saxon model, since the former ascribes great value to the "game" of the players in the production of knowledge. According to the same author, another specificity lies in the role of scientific and deontological regulation played by the CSE. Nioche (1993) feels that the original side of the French model is its resistance to "fast food": it prefers an international cuisine, but one in which "les cuisiniers, le fonctionnement des cuisines, le rapport avec la salle semblent bien français" ("the cooks, the workings of the kitchens, and the relationship with the dining room seem quite French").

With regard to the health sector, which is certainly influenced but not exclusively determined by the overall institutionalization framework, one notes a growing institutionalization process, from the perinatal program of the 1970s to the ordonnances of 1996, including the recent evaluation of the experimental model for nationwide implementation of the breast cancer prevention program and the work of the ministerial evaluation divisions. This process shares features with the international context: partial transfer of Executive attributions to the peripheral level or to semi-public agencies; regulation of health care practices and structures; and greater accountability to taxpayers. The structure of the scientific councils in ANAES is also promising in that it allows for the exercise of a "meta-evaluator" role (evaluation of evaluation), since, as Hudson & Mayne (1992) recall, effectiveness audits are indispensable at the various evaluation levels.

Finally, it is important to return to the questions raised by Pouvourville (1997) and quoted at the beginning of this article, which frame the limits of this work: its intention was not to teach, but to expound on lessons that I believe I have learned from what France and other countries have attempted to do. It seems obvious that while there is no single road to institutionalization, institutionalization itself is fundamentally important, and the various experiences have reasonable potential for generalization due to their common denominators. In Brazil, knowing that in 1997 the Chamber of Deputies passed a bill providing for the Council on Higher Studies and Technological Evaluation and that the Ministry of Health formally created its Department of Policy Evaluation, I feel that such initiatives should not be seen as a déjà vu of others (not always successful); on the contrary, they are a stimulus for leapfrogging stages, offering insights for a regulatory and organizational framework that draws us closer to what "seem the right points". The multiplicity of isolated studies in health shows that the field is developing through attempts to professionalize, enhance, and disseminate knowledge and methodologies in evaluation (Hartz & Camacho, 1996). Nevertheless, without an effort at institutionalization by political and government structures to introduce technical and financial incentives and encourage a culture of evaluation for decision-making and program budget allocation, all this knowledge will remain an academic exercise, powerless to help solve the problems identified. An editorial by the chairman of the Société Française de Santé Publique (Brodin, 1997) leaves us with one final lesson of international scope: a reform only makes sense if the choices (both explicit and implicit) of health or social policies and programs actually seek to reduce the inequalities that laisser faire, laisser aller cannot help but maintain or increase.

References

ABBALLEIA, P. & JOURDAIN, A., 1995. Secours vers le futur: un exercice de prospective dans le SROS des urgences de Bourgogne. Santé Publique, 7: 169-183.         

ALBAECK, E., 1996. Why all this evaluation? Theoretical notes and empirical observations on the functions and growth of evaluation, with Denmark as an illustration case. Canadian Journal of Program Evaluation, 11:1-34.

BALLART, X., 1997. Spanish Evaluation Practice vs Program Evaluation Theory: Cases from Five Policy Areas. Paris: Ed. ENS/CNRS.         

BASSET, B. & HAURY, B., 1995. Planification hospitalière: un modèle théorique à l'épreuve des faits et du temps. Santé Publique, 7:199-207.         

BATTISTA, R.; BANTA, H. D.; JONSSON, E.; HODGE, M. & GELBAND, H., 1995. Lessons from eight countries. In: Health Care Technology and its Assessment in Eight Countries (Office of Technology Assessment, ed.), pp. 335-354, Washington: United States Congress.

BION, J.-Y., 1994. L'évaluation en France: à une forte prescription d'évaluation correspond un faible essor des pratiques. Canadian Journal of Program Evaluation, 9:151-163.         

BLUM-BOISGARD, C.; GAILLOT-MANGIN, J.; CHABAUD, F. & MATILLON, Y., 1996. Évaluation en santé publique. Actualité et Dossier en Santé Publique, 17:18-22.         

BRIANÇON, S.; PRESIOSI, P.; CAO, M. M.; GALAN, P.; LEPAUX, D.-J.; COLLIN, J.-F.; PAUL-DAUPHIN, A. & HERCBERG, S., 1996. Évaluation de la prévention. Actualité et Dossier en Santé Publique, 17:22-25.         

BRODIN, M., 1997. Mauvaise querelle et vraies questions. Santé et Société, 10:1.

BUSSMANN, W., 1996. Democracy and evaluation contribution to negotiation, empowerment and information: some findings from Swiss democratic experience. Evaluation, 2:307-320.         

CABATOFF, K. A., 1996. Getting on and off the policy agenda: a dualistic theory of program evaluation utilization. Canadian Journal of Program Evaluation, 11:35-60.         

CATINCHI, A., 1995. Rapport sur la Fonction d'Inspection en Services Déconcentrés. Ministère des Affaires Sociales, de la Santé et de la Ville.

CEPE (Centre d'Études des Programmes Économiques), 1992. La prévention, les soins, leurs moyens, leurs resultats et leur financement. Compte-rendus de l'atelier 4. In: Actes du Colloque Le Malade, les Actes, les Couts. Paris: Ed. Cepe.         

CES (Canadian Evaluation Society), 1994. Vérificateur général du Canada - entrevue avec Denis Desautels. Bulletin de Liaison, 14:1-5.

CHAMPAGNE, F., 1996. Technocratie et recherche: la chimère de la prise de décisions fondées sur des données probantes. Ruptures, 3:114-116.         

CHAPALAIN, M.-T., 1996a. La politique de périnatalité en France entre 1970 et 1980. In: Les Dix Ans du Centre d'Études des Programmes Économiques (Direction Générale de la Santé du Ministère du Travail et des Affaires Sociales, ed.), pp. 217-220, Paris: Ed. Cepe.

CHAPALAIN, M.-T., 1996b. L'Analyse de Système couplé avec l'analyse économique. In: Les Dix Ans du Centre d'Études des Programmes Économiques (Direction Générale de la Santé du Ministère du Travail et des Affaires Sociales, ed.), pp. 157-181, Paris: Ed. Cepe.         

CHEN, H.-T., 1997. Applying Mixed Methods Under the Framework of Theory-Driven Evaluations. New Directions for Evaluation, 74:61-73.         

CONTANDRIOPOULOS, A.-P.; CHAMPAGNE, F.; DENIS, J.-L. & PINEAULT, R., 1997. A avaliação na área da saúde: conceitos e métodos. In: Avaliação em Saúde: Dos Modelos Conceituais à Prática na Análise da Implantação de Programas (Z. M. A. Hartz, org.), pp. 29-48, Rio de Janeiro: Editora Fiocruz.

CSE (Conseil Scientifique d'Évaluation), 1996. Petit Guide de l'Évaluation des Politiques Publiques. Paris: La Documentation Française.         

DURIEUX, P. & RAVAUD, P., 1997. From clinical guidelines to quality assurance: the experience of assistance publique-hôpitaux de Paris. International Journal for Quality in Health Care, 9:215-219.         

FROSSARD, M. & JOURDAIN, A., 1997. De l'observation à la décision. La régulation régionale du système de santé: ni plan ni marché. Actualité et Dossier en Santé Publique, 19:21-23.

FURUBO, J.-E. & SANDAHL, R., 1996. Some notes on evaluation in Sweden. European Evaluation Society Newsletter, 1:3-4.

GAO (General Accounting Office), 1992. Cross Design Synthesis. Washington: GAO/Program Evaluation Methodology Division.         

GAO (General Accounting Office), 1993. Public Health Service: Evaluation Has Not Realized its Potential to Inform the Congress. Washington: GAO/ Program Evaluation Methodology Division.         

GAO (General Accounting Office), 1995. Block grants: characteristics, experience and lessons learned. Nation's Health, April:6.

GARDEUR, P., 1996. L'évaluation en santé. Actualité et Dossier en Santé Publique, 17:1-3.         

GARIN, H.; BAUBEAU, D.; MANCIAUX, C. & CAILLER, I., 1995. Le concept de pathologies traceuses est-il opérationnel dans une démarche de planification? Santé Publique, 7:157-167.         

GEFFROY, L., 1994. Évaluation et régulation du système de santé en France. In: L'Évaluation Médicale: du Concept à la Pratique (Y. Matillon & P. Durieux, ed.), pp. 154-161, Paris: Ed. Flammarion.

GERBAUD, L., 1995. La RAND en 1994. Journal d'Économie Médicale, 13:209-214.         

GREMY, F.; MANDERSHEID, J. C. & PENOCHET, J.-C., 1995. Évaluation de la qualité dans le domaine de la santé. Santé Publique et Territoires. Nancy: Ed. CNFPT/ENSP/SFSP.

GUERS-GUILLHOT, J., 1997. L'hôpital et sa tutelle: interactions dialogiques. Ruptures, 4:252-267.         

HARTZ, Z. M. & CAMACHO, L. A. B., 1996. Formação de recursos humanos em epidemiologia e avaliação dos programas de saúde. Cadernos de Saúde Pública, 12 (Sup. 2):13-20.         

HARTZ, Z. M. A.; CHAMPAGNE, F.; LEAL, M. C. & CONTANDRIOPOULOS, A.-P., 1997. Avaliação do programa materno-infantil: análise de implantação em sistemas locais de saúde no nordeste do Brasil. In: Avaliação em Saúde: Dos Modelos Conceituais à Prática na Análise da Implantação de Programas (Z. M. A. Hartz, org.), pp. 19-28, Rio de Janeiro: Editora Fiocruz.

HARTZ, Z. M. A. & POUVOURVILLE, G., 1998. Avaliação da eficiência em saúde: a eficiência em questão. Ciência e Saúde Coletiva, 3:68-82.

HENRARD, J.-C., 1996a. L'évaluation. Des mythes aux réalités. Actualité et Dossier en Santé Publique, 17:36-37.         

HENRARD, J.-C., 1996b. Politiques locales de santé. In: Systèmes et Politiques de Santé (J.-C. Henrard, J. Ankri & F. Bertolotto, eds.), pp. 165-182, Rennes: Ed. ENSP.         

HUDSON, J. & MAYNE, J., 1992. Auditing effectiveness evaluation. In: Action Oriented Evaluation in Organizations (J. Hudson, J. Mayne & R. Thomlison, eds.), pp. 196-209, Toronto: Wall & Emerson Inc.         

JORJANI, H., 1994. The holistic perspective in the evaluation of public programs: a conceptual framework. Canadian Journal of Program Evaluation, 9:71-92.         

LECA, J., 1997. L'évaluation comme intervention. In: Actes du Colloque International "l'Évaluation des Politiques Publiques". Paris: Ed. ENS/CNRS.         

LOVE, A. J., 1996. Visits to the world of practice. Evaluation, 2:349-361.         

MacKAY, K., 1994. The Australian Government's Evaluation Strategy: a perspective from the Center. Canadian Journal of Program Evaluation, 9:15-30.

MARCEAU, R.; SIMARD, P. & ORTIS, D., 1992. Program evaluation in the government of Québec. In: Action Oriented Evaluation in Organizations (J. Hudson, J. Mayne & R. Thomlison, eds.), pp. 48-63, Toronto: Wall & Emerson Inc.

MAURY, E., 1997. Le parlement Français face au défi de l'évaluation des politiques publiques. In: Actes du Colloque International "l'Évaluation des Politiques Publiques". Paris: Ed. ENS/CNRS.         

MAYNE, J., 1992. Institutionalizing program evaluation. In: Action Oriented Evaluation in Organizations (J. Hudson, J. Mayne & R. Thomlison, eds.), pp. 21-27, Toronto: Wall & Emerson Inc.

McQUEEN, C., 1992. Program evaluation in the Canadian Federal Government. In: Action Oriented Evaluation in Organizations (J. Hudson, J. Mayne & R. Thomlison, eds.), pp. 28-47, Toronto: Wall & Emerson Inc.         

MICHEL, P.; CAPDENAT, E.; RAYMOND, J. M.; MAURETTE, P.; DAUBECH, L.; SALOMON, R. & AMOURETTI, M., 1997. Regional experience of evaluation of professional practice and quality assurance implementation in Aquitaine. International Journal for Quality in Health Care, 9:221-223.         

MINISTÈRE DES TRANSPORTS/ÉCOLE NATIONALE SUPÉRIEURE DES MINES DE PARIS, 1974-1978. Les Études et les Décisions; les Études et les Négociations; les Études et les Institutions. Paris: Groupe de Réflexion sur l'Économie des Transports Urbains.

MONNIER, E., 1992. Évaluation de l'Action des Pouvoirs Publics. Paris: Ed. Economica.

MONNIER, E., 1995. L'Évaluation au sein des régions Françaises et de la commission européenne: son rôle dans le cadre du partage des responsabilités. Canadian Journal of Program Evaluation, 10:135-149.         

MULLER-CLEMM, W. J. & BARNES, M. P., 1997. A historical perspective on federal program evaluation in Canada. Canadian Journal of Program Evaluation, 12:47-70.         

MYERS, A. M., 1996. Coming to grips with changing Canadian Health Care Organizations: challenges for evaluation. Canadian Journal of Program Evaluation, 11:127-147.         

NIOCHE, J.-P., 1993. L'évaluation des politiques publiques en France: "fast food", recettes du terroir ou cuisine internationale? Revue Française d'Administration Publique, 66:209-220.         

NOVAES, H. M. D., 1992. Processus de Développement Scientifique et Technologique: Technologies Médicales en France, 1970-1990. Relatório de pós-doutorado ao CNPq. Brasília: CNPq. (mimeo.)

PARAYRA, C., 1997. De l'observation à la décision. Politiques régionales et système d'information. Actualité et Dossier en Santé Publique, 19:23-26.

PERRET, B., 1994. Le contexte français de l'évaluation: approche comparative. Canadian Journal of Program Evaluation, 9:93-114.

PERRET, B., 1995. Principes et objectifs généraux de l'évaluation de politiques publiques. Santé Publique et Territoires. Nancy: Ed. CNFPT/ENSP/SFSP.

PERRIN, B., 1995. Conférence Inaugurale de la Société Européenne d'Évaluation. Bulletin de Liaison de la Société Canadienne d'Évaluation, 15:6-7.

PETTIGREW, M., 1996. Evaluation in the United Kingdom. European Evaluation Society Newsletter, 3:6-7.

PHS (Public Health Service), 1996. Public Health Service. September 1996. http://aspe.os.dhhs.gov/ProgSys/phsrept/chap-03.htm.

POLLITT, C. & SUMMA, H., 1997. Evaluation and the work of Supreme Audit Institutions: an uneasy relationship? In: Actes du Colloque International "l'Évaluation des Politiques Publiques". Paris: Ed. ENS/CNRS.

POUVOURVILLE, G., 1997. Quality of care initiatives in French context. International Journal for Quality in Health Care, 9:163-170.         

POUVOURVILLE, G. & MINVIELLE, E., 1995. Connaissances scientifiques et aide à la décision: la diffusion des innovations en santé. In: Des Savoirs en Action (F. Charue-Duboc, dir.), pp. 89-137, Paris: L'Harmattan.

POUVOURVILLE, G., 1992. Les décideurs ont-ils réellement besoin d'évaluations? La diffusion de la lithotritie en France. In: Évaluation des Innovations Technologiques et Décisions en Santé Publique (J.-P. Moatti & C. Mawas, orgs.), pp. 75-78, Paris: Ed. Inserm.         

RAMOS, A., 1996. Evaluation in Spain. European Evaluation Society Newsletter, 2:6-7.         

ROUBAN, L., 1993. L'évaluation, nouvel avatar de la rationalisation administrative? Les limites de l'import-export institutionnel. Revue Française d'Administration Publique, 66:197-208.

ROWE, A., 1997. L'évaluation axée sur l'autonomie. Bulletin CES, 217:1-3.         

SABATIER, P. A., 1997. The Political Context of Evaluation Research: an advocacy coalition perspective. In: Actes du Colloque International "l'Évaluation des Politiques Publiques". Paris: Ed. ENS/CNRS.         

SCHRAIBER, L., 1997. Prefácio. In: Avaliação em Saúde: Dos Modelos Conceituais à Prática na Análise da Implantação de Programas (Z. M. A. Hartz, org.), pp. 19-28, Rio de Janeiro: Editora Fiocruz.         

SCHAETZEL, F. & MARCHAND, A.-C., 1996. L'Évaluation, mirage ou virage? Échanges Santé-Social, 83:9-11.         

TONNELIER, F. & LUCAS, V., 1996. Health Care Reforms in France: Between Centralism and Local Authorities. Paris: Credes.         

TURGEON, J., 1997. Le Programme d'action communautaire pour les enfants (PACE): nouvelle tendance dans l'évaluation des politiques publiques au Canada? In: Actes du Colloque International "l'Évaluation des Politiques Publiques". Paris: Ed. ENS/CNRS.         

WARD, P. & BARRADOS, M., 1994. Rapport du Bureau du Vérificateur Général. Points saillants portant sur l'évaluation de programmes. Bulletin de Liaison de la Société Canadienne d'Évaluation, 14:6-7.

WEILL, C., 1995. Health Care Technology in France. In: Health Care Technology and its Assessment in Eight Countries (Office of Technology Assessment, ed.), pp. 103-135, Washington: United States Congress.

WOLLMANN, H., 1997. Evaluation in Germany. European Evaluation Society Newsletter, 3:4-5.

WYE, C. G. & SONNICHSEN, R. C., 1992. Evaluation in Federal Government: changes, trends and opportunities. New Directions for Program Evaluation, 55, Fall:1-10.
