LETTERS

 

 

On scientific misconduct


 

Birgitta Forsman


Department of Medical Ethics, Lund University. Stora Gråbrödersgatan 16, S-222 22, Lund, Sweden. birgitta.forsman@medetik.lu.se

 

 

It was with great satisfaction that I read the leading article "Plagiarism in Science" by Carlos E.A. Coimbra Jr. in Cadernos de Saúde Pública. I have written about the problem myself and found it very complicated (Forsman, 1996). Scientific integrity and misconduct are important issues to deal with. The problem was first officially recognized in the United States, where several organizations were established during the 1980s to handle allegations of misconduct. The best known is the Office of Research Integrity (ORI) under the Department of Health and Human Services. Other countries, including the Scandinavian ones, have also set up bodies to deal with the problem. Sweden was the last of the Scandinavian countries to form an "expert group" to handle allegations of misconduct in medical research. The group has 11 members, most of whom are medical professors; however, the chairman is a prominent jurist, and there are also lay people (politicians) in the group.

Some of the best-known cases discussed in this context involve rather famous senior scientists in Great Britain, the United States, and Australia. Most of the cases concern medical research. A book by Stephen Lock and Frank Wells, Fraud and Misconduct in Medical Research, gives an interesting survey of the field (Lock & Wells, 1996).

Of course, the fact that most cases have been detected in medical research in English-speaking countries does not mean that this is the only area in which problems exist; it simply happens to be the area that has been most thoroughly investigated. The book by Lock & Wells (1996) contains indications that scientists in some countries are more eager to hide possible misconduct. Two French authors, Lagarde & Maisonneuve (1996), claim that in France there is little interest in revealing such negative news. And a German author, Stegemann-Boehl (1996), reports that "there is a fairly high percentage of undetected cases in Germany. Almost everyone knows of more or less serious cases from their immediate environment – cases that have never been systematically cleared up or made public".

The issue of scientific misconduct is, of course, extremely important, since we should be able to trust research results. However, there are a great many problems in the area. The very definition of misconduct involves many intricate questions. The ORI's definition reads as follows:

Fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community for proposing, conducting, or reporting research. It does not include honest error or honest differences in interpretations or judgments of data. (ORI, 1993)

In late 1995 an American commission led by Dr. Kenneth Ryan of Harvard Medical School published a report suggesting a new and much more extensive definition. Its main elements are "Misappropriation", "Interference", and "Misrepresentation" (CRI, 1995). Misappropriation means plagiarism and the use of material one does not have the right to use. Interference means, among other things, destroying the material and/or property of others. Misrepresentation means, for example, falsely presenting something as a fact or omitting facts in order to give a more favorable picture.

These suggestions have still not been implemented, and I am not certain whether they ever will be.

There has been much discussion about the proper definition of scientific misconduct and where its borders should be drawn. One of the most active participants in this debate is Donald Buzzelli, head of the Office of Inspector General at the National Science Foundation in Washington, D.C. (Buzzelli, 1992; 1993).

However, even if there is an accepted definition of misconduct, it can be very hard to determine whether a given case falls under it. Of course, there are clear-cut cases, such as that of William Summerlin, who painted patches of white mouse skin black and pretended to have transplanted black skin onto white mice. But many cases require a much more subjective judgment. How much may one borrow from another person's intellectual property before it counts as plagiarism? How important must a fabricated detail be before it is judged to be scientific misconduct?

The well-known case of Thereza Imanishi-Kari in the United States illustrates the problems. In 1986 an article was published in Cell on the immune system of a certain strain of transgenic mice. A junior researcher at the same laboratory, Dr. Margot O'Toole, reported that Thereza Imanishi-Kari had tampered with the laboratory data. The case took many turns before the ORI found her guilty of scientific misconduct on 19 counts in 1994. Imanishi-Kari appealed the ruling to the Departmental Appeals Board (DAB), retaining a very prominent lawyer, Joseph Onek. The DAB found Imanishi-Kari not guilty, holding that the challenged data either were not used in the Cell article or were unimportant and peripheral (Kaiser & Marshall, 1996).

There are also great problems in detecting misconduct. It has often been said that peer review, in journals and at funding agencies, will sort out everything that is not reliable. Many well-informed people doubt this, however, among them Frank Wells (Wells, 1994).

One judgment is the following: peer reviewers necessarily assume that authors are truthful. Because the central issue to be resolved in a misconduct case is whether an individual has been truthful, a system like peer review is inappropriate, since it lacks the investigative, adjudicatory, and due process mechanisms to evaluate that issue accurately and fairly (Goldman-Herman et al., 1994).

One of the most prominent Danish players in this debate, Povl Riis, writes: "Editors have few opportunities both in theory and in practice to detect and to prevent scientific fraud. [...] Readers often credit editors with far more power and competence for detecting scientific dishonesty than they can ever exert." (Riis, 1994)

On the whole, there is not much evidence that the self-policing systems of the scientific community work. Rather, most reported cases come from whistleblowers. And these people live dangerously. (See for example Devine & Aplin, 1988; Stewart et al., 1989.)

I have given some glimpses of the problems surrounding scientific misconduct. They indicate that it is far from easy to determine how common the problem is, and there are also challenges in deciding what to do about it. Buzzelli and Ryan have both said that universities must play a more active part in fighting misconduct, while also noting that universities have not done very well so far. There have been many heated articles about who should deal with allegations: internalists say that this must be done by scientists themselves, while externalists say that scientists cannot be trusted and that there must be transparency toward the surrounding society.

However, the undeniable fact that the problem of scientific misconduct is highly complex should not prevent us from trying to handle it.

The following could be a worthwhile strategy:

• Identify what should be regarded as scientific misconduct, through both theoretical considerations and empirical investigation, for example by asking active researchers of different ages, sexes, and positions how they would judge certain cases.

• Conduct an anonymous questionnaire survey to find out how common scientific misconduct is in the country (according to the chosen definition).

• Organize courses in research ethics. I myself have started such courses in the medical school at Lund University in Sweden, and I find this work very promising.

• Organize a national body to handle allegations of scientific misconduct. There is much wisdom to be gained here, not least from Denmark and the United States.

 

 


BUZZELLI, D. E., 1992. Measurement of misconduct. Knowledge, 14:205-211.

BUZZELLI, D. E., 1993. The definition of misconduct in science: a view from NSF. Science, 259:584-585, 647-648.

CRI (Commission on Research Integrity), 1995. Integrity and Misconduct in Research: Report of the Commission on Research Integrity. Rockville: Dept. of Health and Human Services.

DEVINE, T. M., & APLIN, D. G., 1988. Whistleblower protection: the gap between the law and reality. Howard Law Journal, 31:223-239.

FORSMAN, B., 1996. Forskningsfusk och Vetenskaplig Oredlighet: Forskarnas Ansvar och Samhällets Insyn. Lund: Lund University, Dept. of Medical Ethics.

GOLDMAN-HERMAN, K.; SUNSHINE, P. L.; FISHER, M. K.; ZWOLENIK, J. J. & HERZ, C. H., 1994. Investigating misconduct in science: The National Science Foundation model. Journal of Higher Education, 65:384-400.

KAISER, J. & MARSHALL, E., 1996. Imanishi-Kari ruling slams ORI. Science, 272:1864-1865.

LAGARDE, D. & MAISONNEUVE, H., 1996. Fraud in clinical research from the original idea to publication: the French scene. In: Fraud and Misconduct in Medical Research. (S. Lock & F. Wells, eds.) pp. 180-188. London: BMJ Publishing Group.

LOCK, S. & WELLS, F., 1996. Fraud and Misconduct in Medical Research. London: BMJ Publishing Group.

ORI (Office of Research Integrity), 1993. Office of Research Integrity: An Introduction. Rockville: ORI.

RIIS, P., 1994. Prevention and management of fraud – in theory. Journal of Internal Medicine, 235:107-113.

STEGEMANN-BOEHL, S., 1996. Some legal aspects of misconduct in science: a German view. In: Fraud and Misconduct in Medical Research. (S. Lock & F. Wells, eds.) pp. 189-205. London: BMJ Publishing Group.

STEWART, J., DEVINE, T., & RASOR, D., 1989. Courage Without Martyrdom: A Survival Guide for Whistleblowers. Washington, DC: Government Accountability Project.

WELLS, F. O., 1994. Management of research misconduct – in practice. Journal of Internal Medicine, 235:115-121.
