With this opinion piece from astrophysicist Raymond Piccoli, European Scientist publishes the second part of its tour of research financing across Europe. We want to give decision-makers and scientists the opportunity to lay out the visions and policies of their respective countries in the field of research and development. You are welcome to submit your own contributions.
Without money, resources, or teamwork, it is impossible to carry out scientific research worthy of the name. To obtain funding, you have to publish and have published, and to publish, you have to carry out research, and therefore have funding… This is a convoluted equation which is sometimes hard to solve…
Over the last quarter century, French research has undergone two major revolutions.
The first, at the beginning of the 1990s, saw the creation of Mixed Research Units (UMR), designed to create laboratory-based groups of researchers and teaching-researchers attached to several supervisory bodies – universities, EPST [Etablissement Public à caractère Scientifique et Technologique – state-owned science and technology establishments] and EPIC [Etablissement Public à Caractère Industriel et Commercial – state-controlled entities of an industrial or commercial nature]. Setting up these new laboratories addressed a need for concentration (reaching a critical mass): bringing together human and material resources which had previously been widely dispersed, and improving international visibility. The second, in the 2000s, introduced calls for tenders (from the ANR, the French national research agency, and from European agencies) as the main method of financing research work, replacing the recurring credits previously granted directly to laboratories. There was a dual rationale: concentrating resources on targeted projects, and centralising decisions rather than relying on local decision-making of doubtful efficiency. The reforms were therefore built around a set of managerial watchwords: competition, openness to exchange, productivity, human resources management, and so on.
But despite this quite sensible rationale, these reforms have in fact, against all expectations, had disastrous, largely retrograde effects on scientific production and the level of research in France. To spell out the problems: managerial and accounting reforms have fatally compromised researchers’ independence, essential to the exercise of their profession, and their freedom of thought. With scientific objectives being defined by political and economic decision-makers through calls for tenders, 90% of the former subject fields have been abandoned. Entire areas of competence and expertise, including in the applied sciences, have disappeared. Under the pretext of openness and competition, French research has been transformed into a hyper-centralised bureaucratic monster. This is not mere hyperbole [1]: it is putting in writing what everyone knows but no longer dares say. The heart of the mechanism is to be found in the relationship between funding and publication, which we will now examine.
Funding, publication, and evaluation of researchers
To carry out research work successfully, you have to be funded, and your work has to fall under the priority topics defined by the funders who, ironically, are most of the time not scientists. With sources of funding drying up all the time, and the growth of calls for tender, scientific practices have changed at a very fast pace.
These days researchers are required to “bag” calls for tender or risk no longer being able to carry out their activities. And when you read the titles of these calls for tender, their disconnect from the real issues facing science is obvious to anyone. The amateurishness of the people who “think” they are directing research would be laughable if it were not so tragic. Let us take a closer look at the principle of calls for tender. The absurdity of how tenders are awarded is obvious: to be one of the “chosen ones”, you have to present results before even starting the work. It is a surreal situation, because these days almost no funding is allocated to projects that carry even the smallest degree of uncertainty. To be rated, you have to be able to show evidence of multiple publications and a history of collaborative working. The more you have worked on a subject, the more likely you are to obtain funding on the same subject. This mechanism contributes to a considerable impoverishment of research, focused on a few fashionable themes which have the advantage of being popularly “readable”. Moreover, most national calls for tender are now systematically aligned with European projects, meaning they are merely a subset operating under the same conditions. In the research field, France has lost its national independence, to general indifference. Maintaining a diversity of projects across national and European calls for tender would, however, be a very simple measure to adopt. The two could be perfectly complementary.
It is estimated that researchers spend a third of their time preparing responses to calls for tender, 90% of which are unsuccessful. Taking these figures at face value, every grant awarded consumes, on average, the application effort behind ten written proposals. This is a colossal financial loss. Our researchers’ brains are being wasted on endless form-filling, the products of which, more often than not, are filed away with no outcome… It is a mess. Calls for tender give pride of place to manufacturers’ expectations and encourage public finance to take account of private interests. They also favour kowtowing to the political-ideological concerns of the moment.
In short, many genuine research proposals do not tick the right boxes and are abandoned. For science, the results are disastrous: a lot of innovative research is not funded because it is not immediately bankable. Genuinely innovative, original research is crushed. So, in a system like this, how can we give the impression that we are still able to “do science”? What kind of gloss can we put on this? The answer lies in publication. It all starts with the evaluation of researchers. How can we evaluate scientists’ thinking when their activities, by definition, are often incomprehensible to laypeople? Evaluation has to be based on a simple principle, on one single measurable criterion, elevated to the ultimate reference: the sacrosanct publication.
The first harmful effect is that this obligation to publish (which amounts, for researchers, to an obligation to obtain results!) feeds colossal bibliographic databases entirely in the hands of private anglophone companies, who are obviously not averse to selling the products of this work, the articles, for several dozen or sometimes several hundred euros apiece. French journals are notably absent from this flagrant profit-making mechanism.
In the researcher evaluation process, the engine of career progression, none of the “peers” appointed to commissions will read a single article. The titles and covers of the journals suffice, as if moviegoers were to limit their passion for cinema to looking at film posters. What is actually written or demonstrated in the article does not matter; an editorial statement is enough. A co-author among many, with ten publications, can thus be considered an active and successful researcher. The result is an incredible paradox: Laboratory Directors are people who publish a great deal, while they spend most of their time doing administration. They no longer devote any time to scientific work, yet they continue to publish nonetheless. Quite an achievement! This is possible because their dominant position allows them to insist on inclusion in ongoing research programmes. Your name is on the cover, so you are a great scientist. The dice of the publication system are loaded. Laboratory Directors are thus regularly invited to be lead author, leaving the last positions to the lower ranks: lesser-known researchers or students.
The second effect, perversely, is downstream and is responsible for a profound transformation in research. These days, anyone who publishes, publishes, and publishes again is considered a good researcher.
We’re developing chronic compulsive writing syndrome. To keep pace, to be ranked among the best, quantity largely takes precedence over quality.
One example, among others, of a very fashionable ploy for publishing copiously: have many authors, as many as several dozen, for a paper only a few pages long. Each author can then repeat the process with other “volunteers”, over and over again. It’s magic: everyone becomes instantly productive. The phenomenon is exponential. Groups of authors structure and co-opt themselves without even rereading the papers signed in turn by other members of the group. Some researchers spend their time “co-publishing”, so much so that their endless lists of publications would, in a normal world, attract the attention of any halfway scrupulous observer. This system is not risk-free: who actually wrote the paper, and who can guarantee its accuracy? The scientific authorship of publications is completely diluted. This has also caused real failures in researcher recruitment in recent years, where unfortunate people have found themselves in positions at a much higher level than their actual experience should have permitted. A few scandals of this kind have come out recently: researchers, Laboratory Directors, and even senior researchers have been sued for plagiarism – proof of how cutting and pasting becomes a habit. So much harm done to science, so much credibility sacrificed; a credibility absolutely fundamental to scientific practice, disappearing in a puff of smoke on the altar of vanity.
The plagiarism phenomenon has received little media coverage. Attention most often focuses on students, known to be given to copying whole chunks of text, but little information comes out about their “masters”. University websites will happily issue strict warnings to students, and even go so far as to run papers and files through plagiarism-checking software – with very limited success. Yet the inventiveness of many professors and researchers “publishing” in this field is quite astounding. Admittedly, techniques vary considerably from one discipline to another. But plagiarism rests on simple techniques that are difficult to detect by conventional means, especially for non-specialists.
The most common practice in research is self-plagiarism, that is, borrowing ever more heavily from one’s own previous works, not forgetting the deftly used “self-citation” that automatically inflates the author’s h-index. Lindsay Waters, former humanities editor at Harvard University Press, wrote a short prophetic book on the subject, in the form of a pamphlet, entitled The Eclipse of Scholarship [2]. Multi-publication implies a fragmentation of thought, a dissipation, which is highly detrimental to scientific development. And with it, a corresponding collapse of reading! Articles are cited, shared, advertised, downloaded, but rarely read. In essence, the dilution of scientific content also means a phenomenal drop in quality. Hence a remarkable paradox of impact-factor journals: the citation is craftily set up as the sole marker of quality, while the scientific added value of each article has never been so low, precisely because of this dilution.
There are of course other plagiarism techniques, which involve third parties. Spying on your lab mate is not useless, but the most effective method is to use digital tools. The automatic translation built into our web browsers gives access to an infinity of scientific literature, perfectly reusable in its current form. All you have to do is tidy up the French or English to produce a text which is undetectable by plagiarism software, as the sketch below illustrates. “Nicking” ideas right, left and centre, including from non-scientific bodies of work, has never been easier. To assuage their consciences, most plagiarists adopt a defensive position of principle: knowledge must circulate. “Relaying” work – in other words, plagiarism – is, they would claim, integral to scientific practice. The process is nonetheless clearly illegal.
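To make the point concrete, here is a toy illustration – not the algorithm of any real detector, simply the common idea of comparing overlapping word n-grams (“shingles”), with invented example sentences – of why verbatim copying is flagged while a translated and reworded passage sails through:

```python
# A toy sketch of shingle-based plagiarism checking. Most conventional
# tools compare overlapping word n-grams, so verbatim copying scores
# high while a translated or reworded passage scores near zero.
# The example sentences below are invented for illustration.

def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """All overlapping word n-grams of the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(a: str, b: str) -> float:
    """Jaccard similarity of the two texts' word trigram sets."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

source     = "the results confirm a strong correlation between funding and output"
verbatim   = "the results confirm a strong correlation between funding and output"
paraphrase = "our data show that money and productivity are tightly linked"

print(overlap(source, verbatim))    # 1.0 -> flagged as plagiarism
print(overlap(source, paraphrase))  # 0.0 -> invisible to the checker
```

The same idea carries over a fortiori to translation: once the wording has changed language and back, not a single shingle survives, even though the ideas are identical.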
One might object that these practices are self-limiting. The most serious work escapes plagiarism where experiments and the production and processing of data happily prevent the mere recycling of ideas. Impostors should therefore be less numerous in the hard sciences than in the soft ones. Except that this kind of hard science has long been criticised for its inefficiency and is becoming increasingly rare. Experiments and the compilation of data are extremely time-consuming, require significant funding, and do not guarantee results – all criteria from which bibliometric evaluation conveniently liberates researchers. To circumvent the obstacles, “hard” scientists resort to stratagems other than plagiarism, such as data manipulation or even outright invention. These practices are very difficult to detect, as is “intelligent” plagiarism. And when the ruse is discovered by colleagues or rivals, most often by chance during experiments, institutions are often quick to cover up scandals likely to implicate the decision-makers themselves. The complete disappearance of single-authored publications and the compulsory recourse to collective authorship have clear implications: a dilution of responsibility (who was the plagiarist?), and therefore of legal risk; a dilution of competence (who actually wrote it?); a dilution of content (what is new about it?). I am endlessly surprised by this astonishing spectacle: the current evaluation criteria are the opposite of those of an honest, efficient, productive and inventive science.
Another example: how many private firms or lobbies pay – or, more insidiously, fund – researchers, providing them with turnkey articles to publish in so-called prestigious journals? Dummy operations of this kind have emerged, and some have even been unmasked. These behaviours and practices are the opposite of the ideals of science.
This is a real problem, because quality cannot be quantified. Is the prestige of a journal not a sufficient guarantee of the quality of a researcher’s work? Well, no, not any more. Scandals regularly appear in the press, such as articles written by bots and accepted by major journals. Bibliometric evaluation is ultimately a very strong incentive to commit fraud and plagiarism. This phenomenon, which is not unique to the scientific community, has now become mainstream; fortunately, fraud is regularly being uncovered [3].
No journal, even the most prestigious, can double-check what researchers are actually doing in their labs. Much high-quality research – discoveries, innovative ideas, new avenues of research – has seen the light of day in journals considered secondary, often in languages other than English. The scientific publishers’ interest is obvious: the mechanism validates their position and guarantees their income. They are the ultimate beneficiaries of publication inflation. In addition to the strategic advantage that control of the main publication organs gives anglophone publishers, consider that the major journals are mainstream journals controlled by scientific committees on which we find the multi-publishers mentioned earlier. Innovation rarely comes through them, yet discovery depends on innovation: a circle that cannot be squared. Here, as in the press, only the maintenance of a wide diversity of journals makes it possible to escape the monopoly of one prevailing school of thought.
The order in which authors appear is one of the variables on which authors and their evaluators rely. In principle, the first author is supposed to be the one who contributed most to the paper, and so on in decreasing order of importance. In reality, this order no longer expresses the authors’ level of involvement (their “paternity”) at all, but rather the power structure within the institutions. The circle is complete. This mechanism clearly demonstrates that occupying a senior post, a key position, makes it possible to become – or rather, to be classified as – a high-ranking scientist! We have come round to the idea that an effective researcher is someone who knows how to surround himself with a docile workforce, not someone labouring away at the coalface of the lab bench. Scientific prestige has become an outsourced operation.
Officially, there is the author ranking known as the h-index, a citation-based cousin of the journals’ Impact Factor. This index claims to quantify scientific productivity according to how often an author’s works are cited: an author has an h-index of h if h of their papers have each been cited at least h times. In short, the more an article is cited, the higher its authors score. This is the principle of audience ratings applied to research. It means nothing: the fact that a work is cited is in no way representative of its quality. This virtual system is about as valid as a social network where you count hundreds of clicks and believe you have that many “friends”… Is the quality of a TV programme guaranteed by its audience figures? You might well think the opposite is true. But no: in science, with the support of the highest authorities, we believe that a good researcher is a known, cited researcher. We find these “researchers” on TV, for better and sometimes for worse: the famous “experts” are often criticised, but always invited. In the end, intellectual navel-gazing takes hold, and with it the strength of self-conviction. The more you believe in yourself, the more likely you are to get yourself known and climb the hierarchy.
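For readers unfamiliar with the metric, here is a minimal sketch of how an h-index is computed from per-paper citation counts (the counts below are invented for illustration) – and of how little the resulting number knows about the content of the work:

```python
# A minimal sketch of the h-index computation: the largest h such that
# h of an author's papers have each received at least h citations.
# The citation counts used below are invented for illustration only.

def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Ten papers cited [50, 18, 9, 6, 5, 4, 2, 1, 0, 0] times give h = 5:
# five papers have at least five citations each. Note what the metric
# ignores: who actually wrote what, whether anyone read the papers,
# and whether the citations are self-citations.
print(h_index([50, 18, 9, 6, 5, 4, 2, 1, 0, 0]))  # -> 5
```

The arithmetic also shows why the self-citation and co-publishing ploys described above pay off: every extra citation, whatever its source, can only push h upwards.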
To sum up, the publication problem is closely linked to the institutional fragility observed in recent years within French research. Practices considered “borderline” from an ethical point of view have become endemic, even if it means flouting intellectual property law.
Participatory research, private funding and sponsorship
One of the alternatives to these abuses is perhaps – and this should be the subject of a really fundamental debate – the financing of science through individual donation or patronage, known as participatory research.
So, what does participatory research mean? It is quite simply the possibility for anyone to give a few euros [4] – a few dozen, a few hundred – for a scientific project close to their heart; for a company or a wealthy individual to endow a patronage; and perhaps even, for some very large-scale work, to offer their time [5], though in a carefully circumscribed way so as not to depart from rigorous scientific protocol. Many researchers between the 18th and 20th centuries benefited from patronage. It is to this, in part, that we owe the vigour of the historical development of the sciences, and of the arts as well. Patronage requires a generous donor to finance a researcher, a project, or an artist without expecting any return, simply out of intellectual curiosity – which presupposes cultivated financial elites interested in science and the arts.
In sponsorship, on the other hand, conflicts of interest can quickly arise: money is granted in exchange for visibility, legitimacy or publicity for the funder. Participatory scientific research, sometimes called contributory science, is most certainly the future of strong research – highly diversified research benefiting from wealth from unexpected sources – provided that strict frameworks are in place from the outset. Contributory science must be carefully designed so as not to confuse contribution with wholesale openness to looting. It cannot, of course, work for industrial research or work relating to national security. To the extent that we consider that science is not a sum of captive knowledge, this principle of participation, open to all who are favourably inclined towards it, is nothing new; it is in fact relatively old. It would allow new projects to emerge, or research to be carried out that is currently deemed irrelevant merely because it does not fit within general guidelines.
It is surprising to note that the scientific added value of work is often linked to original, marginal initiatives that are not recognised by the official research system. By definition, science is participatory in the sense that every scientist is the repository of previous knowledge willingly shared and transmitted by lecturers or professors.
Participation is coded into the very genome of science. It epitomises integration into the heart of a community which, if not social and contemporary, is at least intellectual and timeless. Participation implies transmission, and the privileged vehicle of this knowledge transfer is the written word, books. The major contemporary scientific publishers are in this respect custodians of an essential mission. But this transmission-participation now requires payment in advance – an unprecedented development in the history of science – because, once again, the major scientific publishers set a very high price for the distribution of publications. This gives them enormous power: by controlling the conditions of transmission, they make it highly elitist. Subscriptions to these publications take up a significant portion of research organisations’ budgets. In the sociological sense, participation is also the ability of people from wider society to contribute to the production of scientific knowledge through direct or indirect contributions: specific actions, material or intellectual participation, financial support, volunteering, and so on.
Finally, in the political sense, introducing a dose of participation into scientific development means widening democracy in a very hierarchical world, particularly in the extremely centralised French system. Participation here consists in giving free rein to projects that are heterodox and not dictated from above. This momentum is vital, because it refreshes scientific approaches, bringing a breath of fresh air essential for the renewal of paradigms, concepts and methods. Research organisations and universities in France are built on the opposite principle: relationships are exclusively vertical and top-down. Developing participatory research outside official institutions would be a way of escaping the crushing weight of the castes which have sat for too long at the top of research structures, and of their intellectual monopoly.
Participatory research is therefore an essential parallel to institutional research, whose capacity for discernment and initiative is constrained and limited by functional imperatives and murky economic interests.
_____________
1. This article is part of the publication “Réflexions sur la recherche française…” [“Reflections on French research…”], Raymond PICCOLI, Les Notes de l’Institut Diderot, 2018, ISBN 979-1093704-45-6. http://www.institutdiderot.fr/reflexions-sur-la-recherche-francaise/
2. “L’éclipse du savoir” [“The Eclipse of Scholarship”] by Lindsay Waters, an essay translated from English. http://lettres-scpo.asso.univ-poitiers.fr/spip.php?article353
4. An example of a transition from public to participatory science: the LISE project. This hypertelescope, intended for the detection and observation of exoplanets, was conceived while its designer, Antoine Labeyrie, held the Chair of Astrophysics at the Collège de France. The initial funding for the project therefore came from state institutes and agencies. Then, when he retired, the funding dried up. The LISE hypertelescope nevertheless continues to be developed and built in the Southern Alps thanks to a great team of highly motivated volunteers from different fields, organised in partnership. The subsidies that allow the adventure to continue come from donations and memberships. The example is exceptional insofar as the expected results are likely to exceed those of the largest public facilities, financed to the tune of several billion euros.
5. Since Pluto was downgraded from planet to dwarf planet, our solar system has had only eight official planets. Nevertheless, the possible existence of a “Planet X”, or ninth planet, arouses the curiosity of astronomers around the world. NASA recently came up with the idea of a contributory research programme that draws on the talents of enthusiastic Internet users to continue this quest. The project, entitled “Backyard Worlds: Planet 9”, is built around a participatory site on which any budding researcher can view images captured by the WISE exploration mission. The principle is simple: if a user spots moving objects in animations assembled over several years, they can report them to NASA scientists. All these additional pairs of eyes increase the probability of a discovery.