Beef, Carrot and Meta-Research
You might never have heard of, or never thought about, those in charge of keeping an eye on the police. In France they earned the nickname of “Beef-Carrots” (“bœuf-carottes”), for the way they stew corrupt cops, or presumed ones, during interrogations. Internal Affairs, the “Suits”, the police of the police. Although their role as controllers makes them unpleasant to most representatives of the law on this earth, there is no doubt that these eyes in the shadows are necessary, to watch the hands that hold power. Nowadays more than ever.
As a multi-millennial institution, Science wields power, and it too has its own watchmen.
As we all woke up this Tuesday, for our very first day of this common adventure, an oddly named article was published in the highly reputable Proceedings of the National Academy of Sciences of the USA: Meta-assessment of bias in science, written by Daniele Fanelli and his colleagues Rodrigo Costas and John Ioannidis.
Here they are. After a quick look at their bibliographies and previous publications, it became clear that these authors basically perform research on research, aiming to highlight where, and describe how, scientific research sometimes fails to reach exactitude in its results and thereby slips away from the truth of the world it attempts to explain. All of this goes under the name of “Meta-Research”.
This fairly recent field of study developed mainly in reaction to the “replication crisis”. This refers to a period in the 2010s when the scientific community suddenly became aware of the difficulty, or even impossibility, of replicating numerous scientific studies. This problem, which has likely existed for as long as science itself, raised an obvious concern about the reliability of research findings. While it mostly involved psychology, this so-called crisis expanded to all other fields, including the natural sciences, and thus meta-research articles landed in the pages of the most prestigious scientific journals. Here is a selection of a few titles from these works, to give an insight into the themes approached in meta-research:
Why current publication practices may distort science.
(Young NS, Ioannidis JPA, Al-Ubaydli O (2008) PLoS Med.)
Negative results are disappearing from most disciplines and countries.
(Fanelli D (2012) Scientometrics)
Are men more likely than women to commit scientific misconduct? Maybe, maybe not.
(Kaatz A, Vogelman PN, Carnes M (2013) mBio)
And an interesting answer to this crucial question, which was not long in coming:
Males are overrepresented among life science researchers committing scientific misconduct.
(Fang FC, Bennett JW, Casadevall A (2013) mBio).
Now back to us, and our dear Mr. Fanelli, author of the freshly published paper on bias mentioned earlier. There are a few ideas in this article worth picking out, which can be discussed in relation to this spring-school experience and to our position as marine sciences master’s students.
· Small studies tend to overestimate results.
Hoping no one will be offended by this statement, it is fair to say that all eight research projects that punctuate our days here are small-scale studies. Is there, then, a risk of overestimating our results? And more importantly, why would that be?
Daniele Fanelli offers one hypothesis: selective reporting. In other words, with fewer data collected in a small-scale study, it is easier to influence the final results by sneakily adapting the way we collect or interpret them. Fortunately, a second hypothesis from the article is genuine, unintentional overestimation caused by the restricted size of the study: it is not our fault if our study site happens to be a gold mine of lush biodiversity…
Anyway, here is a second assertion:
· Early-career researchers are more inclined to bias.
Being still (for most of us) embryo-career researchers, this statement will surely concern some of us in the near future. Some obvious arguments quickly come to mind: lack of experience, eagerness to prove one’s worth. For instance, by saving dolphins from ending up as steaks, or any other heroic quest.
However, it might be irritating for some to read such a peremptory sentence.
It is now time to reverse the light beam and ask, with some irony, what methods meta-researchers themselves use to assess bias in science. Couldn’t their work be equally biased? It surely is, as bias is inherent to our condition as living, thinking beings, which is precisely what allows us to do science. Before any further consideration, here is a short paragraph from the methods section of the article:
“Each individual bias or risk factor was first tested within each meta-analysis, using simple meta-regression. The meta-regression slopes thus obtained were then summarized by a second-order meta-analysis, weighting by the inverse square of their respective SEs, assuming random variance of effects across meta-analyses.”
It looks like it is written in English, even though this is not quite certain; it would make no difference to read it in Swedish (except for the superior minds to whom these words make sense, of course). A safe conclusion is that the methods used to assess bias here are statistical, with scientific articles as data. Does that make sense? Can we quantify bias, given that it is directly linked to highly subjective concepts such as personal appreciation, morals, or ethics?
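For the curious, the core of that quoted recipe is less arcane than it sounds. Below is a minimal sketch, my own illustration with invented numbers rather than the authors’ code, of the weighting step it describes: each meta-regression slope is weighted by the inverse square of its standard error, and the weighted slopes are pooled. For simplicity this shows the fixed-effect version; the paper additionally assumes random variance of effects across meta-analyses, which adds a between-study variance term.

```python
# Hypothetical slopes from four meta-regressions, with their standard errors.
# Values are invented for illustration only.
slopes = [0.12, -0.05, 0.30, 0.08]
ses = [0.04, 0.10, 0.15, 0.05]

# Weight each slope by the inverse square of its SE: precise estimates count more.
weights = [1 / se**2 for se in ses]

# Pooled slope: weighted average; its SE comes from the summed weights.
pooled = sum(w * b for w, b in zip(weights, slopes)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled slope = {pooled:.3f} ± {pooled_se:.3f}")
```

In words: a study measured with little uncertainty dominates the summary, while a noisy one barely moves it, which is the whole point of inverse-variance weighting.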
It would be pretentious to give an answer to these questions here. The aim is just to approach this complicated subject and to reflect on it as we learn to conduct research here in Tjärnö. At the end of these two weeks, our respective results will not reflect only the environment we have explored; they will also reflect ourselves, our behaviour, our perception of science. It is important to keep this idea in mind. Except when snorkeling in 6 °C water, I’ll grant you that.