FORUM
Mixing journal, article, and author citations, and other pitfalls in the bibliographic impact factor
Miquel Porta I,II; José L. Copete II; Esteve Fernandez III,IV; Joan Alguacil I,V; Janeth Murillo I,II
I Institut Municipal d'Investigació Mèdica. Carrer del Dr. Aiguader 80, E-08003 Barcelona, Spain. mporta@imim.es
II Universitat Autònoma de Barcelona, Spain
III Institut Català d'Oncologia. Gran Vía s/n, km 2,7, 08907 L'Hospitalet, Barcelona, Spain
IV Universitat de Barcelona. Feixa Llarga s/n, E-08907 L'Hospitalet, Spain
V Division of Cancer Epidemiology & Genetics, National Cancer Institute. 6120 Executive Boulevard, EPS Room 8070, Bethesda, Maryland 20892, USA
ABSTRACT
News of the death of biomedical journals seems premature. Revamped traditional scientific journals remain highly valued sources and vehicles of information, critical debate, and knowledge. Some analyses seem to place a disproportionate emphasis on technological and formal issues, as compared to the importance ascribed to matters of power. Not all journals must necessarily have a large circulation. There are many examples of efficient, high-quality journals with a great impact on relatively small audiences for whom the journal is thought-provoking, useful, and pleasant to read. How can we achieve a better understanding of an article's spectrum of impacts? A certain mixing of three distinct entities (journals, articles, and authors) has often pervaded judgments. Data used by the Institute for Scientific Information show weaknesses in accuracy. The two-year limit for citations to count towards the bibliographic impact factor favors "fast-moving", "basic" biomedical disciplines and is less appropriate for public health studies. Increasing attention is given to the specific number of citations received by each individual article. It is possible to make progress towards more valid, accurate, fair, and relevant assessments.
Key words: Journal Article; Impact Factor; Periodicals
Introduction
During the past decade, things may have changed considerably in the global communication scenarios or, as you may prefer to say, in "the" scenario. Whether you are reading this article on paper, on your computer screen, or somewhere in between, as in the now-popular PDF format (again, on some sort of screen or some sort of paper), we trust that most formal and substantive changes in the publishing world are obvious to you. Exactly where all such changes are leading is another matter, and no one knows for certain. Exactly who is in charge and who owns what after so many mergers of publishing companies is also rather difficult to determine.
Still, in the early years of the 21st century, news of the death of biomedical journals seems premature: more or less revamped traditional scientific journals remain highly valued sources and vehicles of information, critical debate, and knowledge (Coimbra Jr., 1999; Fernandez, 1998; Garfield, 1972, 2003; LaPorte et al., 1995, 2002; Porta, 1993; Seglen, 1991, 1992, 1997). We thus contend that the main question is not really whether one or another type of journal or format will become more or less prominent in the future. In fact, we suspect that some analyses place a disproportionate emphasis on technological and formal issues, as compared to the importance they confer to matters of power: for instance, who owns and manages the publishing medium. At some point it is advisable to analyze power issues explicitly and rigorously.
Therefore, the most crucial questions are still the following:
How valid and relevant are a journal's published contents?
What kind of impact does the publication have? and
Who has the power to influence such outcomes?
Naturally, some journals do die (Anonymous, 1987; Gunn, 2000). When a journal dies (i.e., it ceases publication; again, it matters little in what format), either a little or a lot may be lost. A lot will probably be lost if the journal's objectives were consistent with the following "obituary" (Gunn, 2000):
"The announced cessation of publication of [the journal] CRNA: The Clinical Forum for Nurse Anesthetists brings into focus the problems that confront smaller subscription-based professional journals in a print or paper mode. We are in an age in which there is a high level of competition for peoples time and attention, in an environment where there is a danger of information overload. Electronic media are competing with print media, and there are advantages and disadvantages to both. Unfortunately, the revelations in the past few years of the poor state of published research in our print journals, despite peer review, makes it more difficult to advocate them in a climate where the shift may well be toward electronic journals. Some of the journals that have been created as online journals and have their articles peer reviewed have not been in existence long enough to examine the extent to which the peer-review system is any better than for print journals. It is exceedingly important that research and its conclusions that make its way into either print or online journals are reliable and valid before we apply them to our practice. Relying on individual readers to make that determination is problematic, because most readers are not that astute in sophisticated research methods and statistics. With the loss of this journal, CRNA opportunities for research and commentary publication are lost. It will produce new challenges to interested CRNAs who choose to balance this loss with new opportunities".
Whether abrupt or long-feared (or both), the death of a journal may cause "great dismay and anger" (Anonymous, 1987). How long such sorrow lasts is harder to know, just as in other, more intimate life events. However, contrary to the experience of common mortals, journals may have several lives: they quit lifelong partners, change names, and resuscitate. For instance, The Journal of the Irish Free State Medical Union was published from July, 1937 to August, 1941; it was continued by or "as" The Journal of the Medical Association of Eire (September, 1941 to December, 1950), which was continued as The Journal of the Irish Medical Association (January, 1951 to June, 1974), which was continued as the Irish Medical Journal, whose "abrupt" death in 1987 at the "golden" age of 50 was sympathetically chronicled by The Lancet. But this was nothing compared to the Dublin Journal of Medical and Chemical Science (1832-1836), which was continued as the Dublin Journal of Medical Science (1836-1845), itself continued "under the title of" Dublin Quarterly Journal of Medical Science (1846-1871), which (if we may forget the Dublin Quarterly Journal of Science, 1861-1866) reassumed the name of Dublin Journal of Medical Science (1872-1920), later to be continued as the Irish Journal of Medical Science, today still in good shape in spite of a 2001 bibliographic "impact factor" (BIF) of 0.336 and ranking 84th in the "Medicine, General and Internal" subject category of the Science Citation Index (SCI). Whoever said that journals die failed to witness even a fraction of the wars that these journal(s) survived.
Achievable ideals and the mind-numbing fad
In other words, a lot is lost if the defunct journal was a source of valid and relevant knowledge. This judgment is barely altered if the readership was small. Who says that journals must have a large circulation? Of course, some people believe this. But there are countless examples of tremendously efficient journals of high scientific quality and with a high impact on relatively small audiences: constituencies and readers for whom the journal is thought-provoking, useful, and pleasant; journals that give a lot to readers (Porta, 1998).
Needless to say, the above judgments do not change if the journal has a low BIF or, for that matter, no BIF at all. Such a low or absent profile is absolutely compatible with the journal being well-read: read with pleasure, and in that reflective way that favors the internalization of contents and hence change, from change in professional attitudes and practice to change in "pure" knowledge. This is the sort of impact that is most needed. How much do we all agree on this? Because if many of us do agree, then we are freer to focus on achieving those ideals and less prey to the "impact factor fad", which is not even fashionable anymore (for better or for worse, there is an emerging fashion: counting individual citations, as we comment below). Please note the extent to which, thus far, the journal has been the object of our comments: this will also turn out to be important later.
Because the BIF produced by the Institute for Scientific Information (ISI) focuses heavily on citations in the academic literature, debates on its virtues and flaws often obscure the many dimensions of practice and knowledge that a scientific journal or paper may impact upon. Even when the focus is on uses of scientometric indicators for academic evaluation, we still overlook a major portion of the problem: how can we achieve a better grasp of an article's complete spectrum of impacts?
As researchers, academics, and medical or public health practitioners, we may choose to neglect how much goes on after a paper is published. But this attitude is mistaken. Intuitively, we know for certain that a lot goes on, yet it is uncommon to venture beyond merely illustrative stories (Figure 1).
Some features of the bibliographic impact factor
A basic understanding of some features of the BIF is useful before one gets too philosophical (Porta, 1993, 1996). The BIF is produced and published every year as part of the Journal Citation Reports (JCR) by the Institute for Scientific Information (currently part of the Thomson Scientific company) (see http://www.thomson.com and http://www.isinet.com/isi). Impact factors are available for many journals, but not for all journals that publish valid and relevant articles; many of the latter merely aspire to having their own impact factor. Although it is possible to compute some kind of impact factor for any journal (by counting citations received and papers published, as discussed below), not all journals have the impact factor (i.e., the one by ISI) computed and published in the JCR. Publishers and editors of journals that do not "have the impact factor" frequently feel they suffer a number of disadvantages, although the journal's visibility (through paper and electronic dissemination) is often good enough (Fernandez & Plasencia, 2002).
The impact factor is the average number of citations received in a given year by the articles a journal published in the two preceding years. It results from a ratio: in the numerator, the number of citations received in a given year by items the journal published in the two previous years; in the denominator, the number of "citable" items published in those two years (Egghe & Rousseau, 1990; Porta, 1996). Citations do not necessarily refer to citable items (we comment further on this below).
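In symbols (a mere restatement of the definition just given, using a hypothetical reference year), the 2001 impact factor of a journal would read:

$$\mathrm{BIF}_{2001} = \frac{C_{2001}(1999) + C_{2001}(2000)}{N_{1999} + N_{2000}}$$

where $C_{2001}(y)$ is the number of citations received in 2001 by items the journal published in year $y$, and $N_y$ is the number of "citable" items it published in year $y$.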
An impact factor is sometimes regarded incorrectly as a quality indicator for journals and is used by some journals in their advertising (Fernandez, 1995; Seglen, 1997), although significant limitations of the impact factor have been highlighted (Adam, 2002; Coimbra Jr., 1999; Decker et al., in press; Moed & van Leeuwen, 1995; Porta, 1996; Porta et al., 1994; Seglen, 1997; van Leeuwen et al., 1999). Such limitations concern the validity of the impact factor itself, its uses and misuses, and other problems.
Knowledge of the impact factor itself and the process yielding its values is important in order to focus the debate on what the factor really measures. For example, it is warranted to recall that all the data stem from citations produced in the scholarly process of scientific communication, which is a social process (Coimbra Jr., 1999; Cronin, 1984; Porta et al., 1994). Even within this perspective there is debate, with various positions in recent decades. For instance, from the field of sociology, Cronin refers to the many "ambiguities in level and interpretation of measurement" and to "meaningless numerology" (Cronin, 1984). We also have the wisdom of Eugene Garfield (1972 to 2003), frequently worried that citation indexing might be unfairly dismissed simply because of improper uses of citation databases by some professionals and policymakers (see http://www.garfield.library.upenn.edu). ISI openly cautions that the JCR "are intended to complement, not replace, traditional qualitative and subjective inputs, such as peer surveys and specialist opinions", although at other times it has claimed that the JCR provide "a view that is unobtrusive, quantitative, objective, unique", or that "JCR tells you what are the hottest journals" (ISI, 1994; Porta, 1996) (Table 1). The logical counterpart to that claim must be that the JCR also tell us who the "losers" are (Table 2). Journals whose main focus is occupational, environmental, and public health have much lower impact factors than purely biomedical journals, and the gap may be widening (Table 3; Figure 2).
Data in Table 3 reflect some of the current trends in epidemiology and public health research. Remember that what is published now is the result of what was funded and investigated three, four, or more years ago. The journal with the highest impact factor in 2000 was the Milbank Quarterly (4.568), which appears only in the Social Sciences Citation Index of the JCR. Next comes the Annual Review of Public Health (4.524), a journal devoted to review articles and thus frequently cited, despite (but also partly because of) the relatively low number of articles it publishes. It was followed by Cancer Epidemiology, Biomarkers and Prevention (impact factor: 4.354). No other journal of epidemiology or public health had a factor over 4. Only four other journals in this field had impact factors over 3: American Journal of Epidemiology (3.870), Epidemiology (3.632), American Journal of Public Health (3.269), and Environmental Health Perspectives (3.033). Comparison with the figures in Table 1 requires few comments: the five leading medical journals according to their impact factors (New England Journal of Medicine, JAMA, The Lancet, Annual Review of Medicine, and Annals of Internal Medicine) (not to be confused with the so-called "big five", which include the British Medical Journal instead of the Annual Review of Medicine) have an average impact factor of over 9; nonetheless, this figure is substantially lower than that of the top journals in the overall "classification" of the SCI (Table 1).
Moreover, Figure 2 focuses on the "top five" journals in internal medicine and in public health, i.e., the five journals with the highest impact factors in each year in the SCI. In the figure, "Internal Medicine" refers to ISI's subject category named "Medicine, General & Internal", while "Public health" refers to the category "Public, Environmental & Occupational Health" (Fernandez, 1995). As shown, the impact factor of the "top five" doubled over the 20-year period, probably because more references are used per article and because the number of journals increased. The increase was slightly higher for public health journals than for medical journals (116% and 96%, respectively). However, 20 years ago the difference was already large, and in absolute numbers the gap has widened. The most likely explanation is that from 1980 to 2000 more new journals were launched in medicine than in public health, hence increasing the citation chances for articles published in medical journals (if we used the Spanish term for reference or quote, "cita", we could play on the notion of articles getting a cita, a rendezvous or "date", a not entirely inaccurate concept).
It is useful to analyze the underlying forces that move the citation process, among other reasons because they strongly shape cognitive aspects of many scholarly disciplines. It might be expected, perhaps naively, that the use of citations as a basis for value judgments implies or requires widely recognized conventions for the citation process among authors. However, this hardly seems to be the case, and the process actually displays a remarkable resistance to standardization. At times the citation process is healthily plural and subjective, as well as luckily quite inhospitable to homogeneity. Unfortunately, at other times it is rather random, parochial, or (less commonly) even sectarian. It occasionally reflects quite a picturesque variety of philias, phobias, ignorance, myths, and rituals. The main avowable reasons we all generally adhere to when citing have been summarized by Cronin (1984) (Table 4).
Some of these reasons are not directly related to a cited article's relative contribution to scientific knowledge. For example, articles are often cited merely because of conventional technical, methodological, or topical aspects; because it is scientifically or culturally "correct" to do so; because the cited authors have some power within the scientific area; as a placebo; or just incorrectly (authors choose the cited reference on the basis of a superficial reading, and the ensuing citation is inadequate). How many cited articles have actually been read by the authors citing them? How many articles have been cited merely because they were cited in other articles?
Seglen (1997) aptly summarizes various critiques of (mis)uses of the BIF and of related issues:
The journal's impact factor is not necessarily representative of the individual journal articles.
Authors use many criteria other than impact when submitting to journals.
Citations to "non-citable" items are erroneously included in ISIs database.
Reviewers or editors seldom correct for authors self-citations.
Review articles are heavily cited and inflate the impact factor of some journals.
Long articles collect many citations and produce a high journal impact factor.
A short publication lag allows many short-term journal self-citations and hence gives a higher impact factor.
Citations in the journal's national language are preferred by the journal's authors.
Selective journal self-citation: articles tend to preferentially cite other articles from the same journal.
Coverage of ISI's database is not complete, and not all excluded journals are irrelevant.
Books are not included in the database as a source for citations.
Database has an English-language bias.
Database is dominated by American publications.
Journals included in the database may vary from year to year.
The impact factor is highly dependent on the number of references per article in the research field.
Research fields with literature that rapidly becomes obsolete are favored.
The impact factor depends on dynamics (expansion or contraction) of the research field.
Small research fields tend to lack journals with high impact factor but may publish articles with high scientific or practical impact.
Relations between fields (clinical vs. basic research, for instance) strongly determine the impact factor.
The citation rate of articles influences the journal impact, but not vice versa.
Since it would make little sense to try to address all these issues, we shall focus on just two related questions.
Mixing citations "of" or "to" a journal, an article, and an author
We believe that a certain "mix-up" of three different entities (a journal, an article, and an author) has occasionally pervaded the otherwise judicious assessments of ISI. The following is a recent example from the man who invented the ISI impact factor, Eugene Garfield (2003). He writes [our comments added in brackets]:
"If circulation [number of copies published] were the determining factor in journal impact then JAMA should have the highest impact factor and journals like NEJM with lower circulation would not." [To this we say: fine. And by the way, rather than a high average impact factor, the widest possible circulation tout court is what you may wish to pursue when deciding to which journal you submit your paper].
"Impact is primarily a measure of the use (value?) by the research community of the article in question". [Garfields allusion to the distinction between "use" and "value" is remarkable, both crisp and controversial, and no major flaw is apparent in the reasoning; the main problem we see is how easily ideas about the journal impact slip or creep into the article: but the article has no "impact", i.e., it has no impact factor; what an article has is a unique, "non-transferable" number of citations].
"If an author or journal is cited significantly above the average then we say the authors work has been influential albeit sometimes controversial." [Strictly speaking, an unflawed argument; however, a certain mixing is again there: now, what gets mixed up is not the journal and the article, as in the previous paragraph, but the journal and the author].
"It is true that quality like beauty is often in the eyes of the beholder, but if peer judgments are taken as a potential source of quality judgment then citation frequency is well correlated with e.g. Nobel and other awards. It is extremely rare for a Nobel class scientist not to have published one or more citation classics. Indeed in 1967 we determined that Nobel scientists publish five to six times as often as the average author and their work is cited thirty to fifty times as often." [Now, Garfield is not talking about citations received by a journal or an article, but about citations received by an individual scientist; or by a given set of articles authored or co-authored by an individual scientist. Hence, Garfield is in no way using the famous and infamous "impact factor" of individual journals: he is not adding the average (or the median or any other summary statistic) of the impact factor of all journals where the individuals articles appeared; rather, he is rightly using the specific, concrete, unique number of citations that the authors papers received. With the necessary caveats, this is fine with us ].
Watch your denominator, think which measure suits you best
Though well-meant, it would be mistaken to claim that a journal's impact factor is seldom representative of the impact factor of the individual articles published in it: articles do not have an impact factor; they have a specific number of citations, as we just said. The number of citations actually received by each article is seldom close to the impact factor. For the journal's factor to be reasonably representative of its articles, citations to articles should follow a narrow distribution around the mean value of the population of all articles published in the journal. This is seldom the case (Seglen, 1997). On the contrary, many published articles are never cited again, whereas a few are cited well above the journal's average impact factor.
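A minimal simulation may make the skewness tangible; the figures below are invented, but the long right tail mirrors the pattern Seglen describes:

```python
import random

# Hypothetical journal: per-article citation counts drawn from a heavy-tailed
# (Pareto) distribution: many rarely cited papers, a few heavily cited ones.
random.seed(1)
citations = [int(random.paretovariate(1.2)) - 1 for _ in range(500)]

mean = sum(citations) / len(citations)            # what an impact-factor-like average sees
median = sorted(citations)[len(citations) // 2]   # what the typical article experiences
never_cited = sum(1 for c in citations if c == 0) / len(citations)

print(f"mean (IF-like): {mean:.2f}")
print(f"median article: {median}")
print(f"share never cited: {never_cited:.0%}")
# The mean sits well above the median: most articles receive far fewer
# citations than the journal-level average suggests.
```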
A related issue was recently raised by Joseph (2003). He first showed that while JAMA's impact factor increased from 4.8 in 1989 to 17.6 in 2001, the number of items (generally, articles) that according to ISI data JAMA published (its publication volume) declined steadily, from a high of 656 items in 1990 to 389 in 2001. By contrast, and again based on ISI data, The Lancet's publication volume increased from 469 items in 1989 to 1,108 in 1999. Meanwhile, The Lancet's impact factor increased from 14.4 in 1989 to 17.9 in 1996; it then dropped sharply to 10.2 in 1999. The respective mirror images of JAMA and The Lancet are truly striking (Joseph, 2003).
Of particular interest is the finding that among the five leading journals in internal medicine, the number of items published in the previous two years was inversely related to the impact factor (r = -0.45, p < 0.001). In other words, one message for journal editors is: don't publish too much, or at least watch how much you publish; unless, of course, you are the leader of the pack and receive the best manuscripts in your subject area.
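Readers wishing to run this kind of check on their own data can start from the sketch below; the journal names and figures are placeholders, not the actual data behind Joseph's r = -0.45 (note that statistics.correlation requires Python 3.10 or later):

```python
import statistics

# Placeholder data: (journal, items published in the previous two years,
# impact factor). These are invented values for illustration only.
data = [
    ("Journal A", 600, 9.5),
    ("Journal B", 1100, 6.0),
    ("Journal C", 400, 14.2),
    ("Journal D", 900, 7.1),
    ("Journal E", 350, 16.8),
]
items = [volume for _, volume, _ in data]
factors = [bif for _, _, bif in data]

# Pearson correlation between publication volume and impact factor.
r = statistics.correlation(items, factors)
print(f"r = {r:.2f}")  # negative: the more items published, the lower the factor
```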
But the story does not end here, because Joseph also performed a hand count of the number of items published by JAMA in 1989 and 1990: in those two years, JAMA published 376 and 397 items, respectively. Surprisingly, however, according to the ISI JCR, JAMA had published 627 and 656 items, respectively. In fact, JAMA did not significantly change the number of items it published over 20 years. This kind of error has been made by ISI on other occasions. Joseph also noted that a similar error, concerning the labeling of news articles as substantive items, was identified by the Canadian Medical Association Journal; it led to a significant change in that journal's impact factor (Joseph, 2003; notice the title of the paper).
What criteria does ISI really apply to decide whether an "item" published by a journal is "citable" and thus whether it is added to the impact factor's denominator? And how consistently are these criteria applied to the thousands of journals constantly screened by ISI? We believe few people know for sure. But the task cannot be an easy one, since journals luckily strive to publish a large diversity of editorial formats. Remember also that once a citation is received by an article published in a given journal, ISI will count it "in favor" of the journal no matter whether the cited item was included in the denominator as "citable". Another fundamental decision solely in the hands of ISI (as it should be) is whether a journal is considered as "citing" or as "cited-only"; we have addressed this in some detail previously (Coimbra Jr., 1999; Porta, 1996).
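The asymmetry just described is easy to demonstrate with invented numbers: every citation received counts in the numerator, while the denominator moves with the classification of items:

```python
# Invented two-year figures for a hypothetical journal.
citations_received = 7000   # all of these enter the numerator...
research_items = 780        # ...but only items deemed "citable" enter the denominator
news_items = 500            # news stories, letters, and other formats

# Impact factor under two classification decisions:
bif_news_excluded = citations_received / research_items
bif_news_included = citations_received / (research_items + news_items)

print(f"news excluded from denominator: {bif_news_excluded:.1f}")
print(f"news counted as citable:        {bif_news_included:.1f}")
# Reclassifying non-research items moves the factor by several points:
# the kind of shift Joseph documented for JAMA and the CMAJ.
```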
We believe that the results of the thoughtful analysis by Joseph add to the evidence on weaknesses in the accuracy of the data extracted from journals by ISI. Although ISI has struggled to avoid data collection errors, the vast amount of data needed to create its products (it processes over 10 million citations per year) highlights the importance of quality checks. Such controls are usually impossible for users of the JCR and related products to perform, since most of us lack access to the original raw data, for instance, on which citable articles were counted in the impact factor's denominator (Porta, 1996, 2003).
Besides other caveats, Eugene Garfield (1972, 1976, 1977, 1979, 1983a, 1983b, 1985, 1986) himself has long emphasized (almost from his first writings, fifty years ago!) that the impact factor is often not the scientometric indicator of choice. Specifically, our view is that if one wishes to know a journal's true impact (if the journal is really one's unit of analysis and interest), then one should begin by considering the total number of citations it has received, either over its lifetime or in the past two or more years (Porta, 1996). In passing, this is likely to avoid the pitfall identified by Joseph: the total number of citations is scarcely influenced by the number of citable items chosen to compute the impact factor. This may appear odd, but it is a reality, and not by chance.
Figures 3 and 4 compare the impact factor and the total number of citations received by journals. As shown in Figure 4, when the latter is considered, journals like the American Journal of Public Health, Environmental Health Perspectives, and Medical Care outperform others like Cancer Epidemiology, Biomarkers & Prevention (CEBP), which has a higher impact factor than any of them. This is probably due not only to the fact that CEBP publishes fewer citable items than the other three, but mostly to the two-year limit for a citation to count towards the impact factor (remember that the impact factor's numerator is the number of citations received by papers published during the two previous years); the two-year period is more appropriate for "fast-moving" biomedical disciplines than for "quietly maturing" public health studies. CEBP receives numerous citations from journals specializing in molecular biology and basic cancer research, which often conduct studies and experiments at a much faster pace or "tempo" than studies in epidemiology and public health. Although CEBP does publish epidemiologic studies, the contrast shown in Figure 4 suggests the importance of "who" cites you. So, do you want your journal's BIF to increase? Get papers that interest molecular biology, genomics, and related "basic" disciplines.
Although the American Journal of Epidemiology was cited 18,191 times in 2000 and the American Journal of Public Health received 14,167 citations, as far as the impact factor is concerned, what counts is the number of citations to recent papers; according to ISI, "recent papers" are those published in the last two years.
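The contrast between the two indicators can be made concrete with a small sketch; the impact factors and the two citation totals are the 2000 figures quoted above, while the CEBP total is a placeholder (it is not given in the text):

```python
# 2000 figures quoted in the text; the CEBP citation total is a placeholder.
journals = {
    "Am J Epidemiol":                   {"impact_factor": 3.870, "total_citations": 18191},
    "Am J Public Health":               {"impact_factor": 3.269, "total_citations": 14167},
    "Cancer Epidemiol Biomarkers Prev": {"impact_factor": 4.354, "total_citations": 6000},
}

rank_by_if = sorted(journals, key=lambda j: journals[j]["impact_factor"], reverse=True)
rank_by_total = sorted(journals, key=lambda j: journals[j]["total_citations"], reverse=True)

print("by impact factor:  ", rank_by_if)
print("by total citations:", rank_by_total)
# The orderings disagree: a journal can lead on the two-year average
# while trailing badly on overall scholarly use.
```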
With increasing Internet access to ISI data (by subscription to its "Web of Knowledge", etc.), attention is turning to the specific number of citations received by each individual article, as mentioned above and elsewhere (Porta, 2003). This could help avoid another intrinsic weakness of the impact factor, for which no one is to blame: as we noted before, the impact factor is just the ("misleading") average of a highly skewed distribution; often, 85% of the citations received "by a journal" (i.e., by the articles published in a journal) are actually received by about 15% of the articles it publishes (Porta, 1996; Seglen, 1991, 1992). Although much of the impact factor's appeal may stem precisely from the fact that an average is such a simple measurement, as scientists we can surely venture beyond it. An example of such progress is found at the Medical School of the University of Münster and other medical schools in North Rhine-Westphalia, Germany, which use an interesting system for evaluating the publication output of all institutes and clinics. Their bibliometric evaluation system was developed by the Universities of Bielefeld (Germany) and Leiden (Netherlands). The system emphasizes the individual article and counts its individual citations. Therefore, it does not matter much whether a particular article appeared in The Lancet, the NEJM, or another journal; what matters is how often each specific article was cited. The Bielefeld/Leiden system also allows national and international comparisons of individual researchers, research groups, and institutions with the international community of specialists in any discipline (U. Keil, personal communication, February 7, 2003).
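We do not know the internals of the Bielefeld/Leiden system, so the following is only a minimal sketch of the underlying idea (each article keeps its own citation count, which can then be aggregated per author or institution), with invented records:

```python
from collections import defaultdict

# Invented records: (article_id, authors, citations received by that article).
articles = [
    ("a1", ["Author A", "Author B"], 42),
    ("a2", ["Author A"], 3),
    ("a3", ["Author B"], 0),
]

# The unit of analysis is the article: its citation count travels with it,
# regardless of the journal in which it appeared.
per_author = defaultdict(list)
for _, authors, cites in articles:
    for author in authors:
        per_author[author].append(cites)

for author, counts in per_author.items():
    print(f"{author}: {len(counts)} articles, {sum(counts)} citations in total")
```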
In summary, we believe that a journal's bibliographic, scholarly impact in the field of public health is better reflected by the total number of citations received than by the impact factor. The two-year time window for a citation to count towards the impact factor favors "fast-moving", "basic" biomedical disciplines and seems less appropriate for public health output. Currently, increasing attention is given to the specific number of citations received by each individual article, or by the articles from a specific author or research group. Despite the widespread, habitual, and "scientific" misuse of the impact factor for evaluative purposes in academia, progress is possible towards more valid, accurate, fair, and relevant scientometric assessments.
References
ADAM, D., 2002. The counting house. Nature, 415: 726-729.
ANONYMOUS, 1987. Death of a journal. Lancet, 2:1442.
COIMBRA Jr., C. E. A., 1999. Produção científica em saúde pública e as bases bibliográficas internacionais. Cadernos de Saúde Pública, 15:883-888.
CRONIN, B., 1984. The Citation Process. London: Taylor Graham.
DECKER, O.; BEUTEL, M. E. & BRÄHLER, E., in press. Deep impact. Evaluation in the sciences. Sozial- und Präventivmedizin.
EGGHE, L. & ROUSSEAU, R., 1990. Introduction to Informetrics. Amsterdam: Elsevier.
FERNANDEZ, E., 1995. Factor de impacto bibliográfico 1993. Salud pública y epidemiología. Gaceta Sanitaria, 9:213-214.
FERNANDEZ, E., 1998. Internet y salud pública. Gaceta Sanitaria, 12:176-181.
FERNANDEZ, E. & PLASENCIA, A., 2002. Contamos contigo ¿Contamos también con tus citas? Gaceta Sanitaria, 16:288-290.
GARFIELD, E., 1972. Citation analysis as a tool in journal evaluation. Science, 178:471-479.
GARFIELD, E., 1976. Significant journals of science. Nature, 264:609-615.
GARFIELD, E., 1977. Caution urged in use of citation analyses. Trends in Biochemical Sciences, 2:84.
GARFIELD, E., 1979. Is citation analysis a legitimate evaluation tool? Scientometrics, 1:359-375.
GARFIELD, E., 1983a. How to use citation analysis for faculty evaluations and when is it relevant? Part 1. Current Contents Life Sciences, 44:5-13.
GARFIELD, E., 1983b. How to use citation analysis for faculty evaluations and when is it relevant? Part 2. Current Contents Life Sciences, 45:5-14.
GARFIELD, E., 1985. Uses and misuses of citation frequency. Current Contents, 43:3-9.
GARFIELD, E., 1986. Which medical journals have the greatest impact? Annals of Internal Medicine, 10:313-320.
GARFIELD, E., 2003. Re: Quality of impact factors of general medical journals: PRAVDA wins hands down. eLetter to BMJ Editor. 19 February 2003 <http://bmj.com/cgi/eletters/326/7383/283#29761>.
GUNN, I. P., 2000. Death of a journal: Lost opportunities, new challenges, or both? CRNA, 11:197-201.
ISI (Institute for Scientific Information), 1994. 1993 Science Citation Index Journal Citation Reports.
JOSEPH, K. S., 2003. Quality of impact factors of general medical journals. BMJ, 326:283.
LaPORTE, R. E.; LINKOV, F.; VILLASEÑOR, T.; SAUER, F.; GAMBOA, C.; LOVALEKAR, M.; SHUBNIKOV, E.; SEKIKAWA, A. & SA, E. R., 2002. Papyrus to PowerPoint (P 2 P): Metamorphosis of scientific communication. BMJ, 325:1478-1481.
LaPORTE, R. E.; MARLER, E.; AKAZAWA, S.; SAUER, F.; GAMBOA, C.; SHENTON, C.; GLOSSER, C.; VILLASEÑOR, A. & MacLURE, M., 1995. The death of biomedical journals. BMJ, 310:1387-1390.
MOED, H. F. & van LEEUWEN, T. N., 1995. Improving the accuracy of Institute for Scientific Information's journal impact factors. Journal of the American Society for Information Science, 46:461-467.
PLASENCIA, A., 2002. Lo que Gaceta Sanitaria hace para ti y lo que tú puedes hacer para Gaceta Sanitaria. Gaceta Sanitaria, 16:212-213.
PORTA, M., 1993. Factor de impacto bibliográfico (Science Citation Index y Social Sciences Citation Index) de las principales revistas de medicina preventiva, salud pública y biomedicina. Algunas cifras, algunas impresiones. In: Revisiones en Salud Pública, v. 3, pp. 313-347, Barcelona: Masson.
PORTA, M., 1996. The bibliographic "impact factor" of the Institute for Scientific Information, Inc.: How relevant is it really for Public Health journals? Journal of Epidemiology and Community Health, 50:606-610.
PORTA, M., 1998. "Gaceta Sanitaria": Artesanía y complicidad en pro de una revista con interés científico y profesional. SEENota (Boletín de la Sociedad Española de Epidemiología), 13:8-9. 23 February 2003 <http://www.websee.org/documentacion/seenota/13-Enero-Abril-1998>.
PORTA, M., 2003. Quality matters And the choice of indicator, too. BMJ, 326:931.
PORTA, M.; BOLUMAR, F.; ALONSO, J. & ALVAREZ-DARDET, C., 1994. Encerrados con un solo juguete (sobre los usos de los indicadores bibliométricos). Medicina Clínica (Barcelona), 103: 716-717.
SEGLEN, P. O., 1991. Citation Frequency and Journal Impact: valid indicators of scientific quality? Journal of Internal Medicine, 229:109-111.
SEGLEN, P. O., 1992. How representative is the journal impact factor? Research Evaluation, 2:143-149.
SEGLEN, P. O., 1997. Why the impact factor of journals should not be used for evaluating research. BMJ, 314:498-502.
van LEEUWEN, T. N.; MOED, H. F. & REEDIJK, J., 1999. Critical comments on Institute for Scientific Information impact factors: A sample of inorganic molecular chemistry journals. Journal of Information Science, 25:489-498.
Submitted on April 11, 2003
Approved on June 12, 2003