
The Second Rule of Crowdfight: Improvise, Adapt, Overcome


When I tell strangers about my science background, I usually get surprised looks. The general feeling seems to be that neuroscience is unfathomably complicated.

If only these bystanders knew the not-so-glamorous life of research, they would probably be less impressed. It is not that neuroscience – or research in general – is simple. The trouble is that research is complicated for all the wrong reasons: precarious conditions driven by local governments and global expectations, inefficient sharing of knowledge, and unforgiving competition in a field where collaboration should be the rule. Such drawbacks make for an unhealthy work culture.

Improving these aspects of research was the central theme of the first Crowdfight Symposium on the Science of Collaboration.

Crowdfight was born out of the necessities of the COVID-19 pandemic, and it is expanding. More than helping scientists working on COVID-related research, Crowdfight now wants to change research culture.

What is research culture? 

Research culture includes everything related to the daily life of researchers in academia: career expectations, the evaluation process, and the merit and credit given to each contributor. The current culture is outdated and no longer matches the needs and expectations of modern professionals. The first Crowdfight Symposium eloquently pinpointed some of its biggest flaws and promptly presented solutions.

Crowdfight is about transparency. The symposium started by introducing collaborators from all levels of the research chain. The system proposed by Crowdfight is quite efficient. When a researcher or group of researchers submits a request, a preliminary team asks for further details. Once the specifics are provided, the request is transformed into an easily understood question for the volunteers. A mass email alerts volunteers that there is a project in need of assistance and gives them 24 hours to respond. Answers are evaluated, and the best candidate is presented to the requesting team. A coordinator is then assigned to the request and manages the communication between volunteers and researchers.
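For readers who like to see a workflow spelled out as code, here is a minimal sketch of that pipeline in Python. It is purely illustrative: Crowdfight runs this process through email and human coordinators, and every name below (Request, clarify, broadcast, match) is hypothetical rather than part of any real Crowdfight tooling.

from dataclasses import dataclass, field
from datetime import datetime, timedelta

# Hypothetical model of the request pipeline described above; none of
# these names come from Crowdfight itself.

RESPONSE_WINDOW = timedelta(hours=24)  # volunteers get 24 hours to respond

@dataclass
class Request:
    raw_description: str            # what the researchers initially submit
    clarified_question: str = ""    # rewritten by the preliminary team
    coordinator: str = ""           # assigned once a volunteer is matched
    responses: dict = field(default_factory=dict)  # volunteer -> suitability score

def clarify(request: Request, details: str) -> None:
    """The preliminary team turns the raw request into an easily understood question."""
    request.clarified_question = f"{request.raw_description} ({details})"

def broadcast(request: Request, mailing_list: list) -> datetime:
    """Mass email alerting volunteers; returns the 24-hour response deadline."""
    for volunteer in mailing_list:
        print(f"to {volunteer}: can you help with '{request.clarified_question}'?")
    return datetime.now() + RESPONSE_WINDOW

def match(request: Request, coordinator: str) -> str:
    """Evaluate the answers and pick the best candidate for the requesting team.
    'Best' is simplified here to the highest suitability score."""
    request.coordinator = coordinator  # manages all further communication
    return max(request.responses, key=request.responses.get)

The point of the sketch is only to show how few moving parts the pipeline has: one clarification step, one broadcast with a fixed deadline, and one matching step that hands the conversation over to a coordinator.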

Science is too competitive and there is limited collaboration and transparency.

After explaining how the initiative works, the organisers opened the floor to a debate on collaboration in science. There was a clear consensus among attendees: science is too competitive, and there is limited collaboration and transparency.

The responsibility was easily, and unsurprisingly, attributed to big publishing houses. We have explored this issue before in our article about Open Access Science. Big publishers, who do none of the work and take all of the profit, feed on the efforts of scientists worldwide and kindle the unhealthy flame of competition to a point where it is detrimental to scientific progress and accessibility. ‘Publish or perish’ is the mantra of researchers in academia, and it results in untapped knowledge, unreliable results, and a lot of bias.

Silver linings 

As Sandrine Coquille, a researcher at the University of Bayreuth and a Crowdfight volunteer, put it, science must revolve around collaboration, not competition.

One of the main problems in research culture, as dictated by big publishers, is that only the newest, freshest, most innovative results get published. In the race to publish or perish, the less exciting – but just as necessary – results get left behind.

Publishing negative results (the things that don’t work) and comments on protocols would benefit all scientists. Imagine you work in a lab and want to identify a protein in the cells you are growing. You start by combing the literature for ways to detect the protein. ‘Has this been attempted before?’ you ask. After a while, you conclude there is no published data on detecting that protein in these specific cell types. This can mean one of two things: either no one has tried it before, or it has been attempted and failed. Because failed attempts aren’t usually published, you have no way of knowing which.

So, you try to identify the protein in your cells, just in case, and you find it doesn’t work. This information falls into a black hole, where all negative results currently lie with no hope of escaping. Meanwhile, in another part of the globe, a separate group of scientists spends time and taxpayer money doing the exact same thing.

Later in the conference, Brian Nosek, co-founder and Executive Director of the Center for Open Science (COS), elaborated on the problems with the current publication system.

Brian believes that the publication system is, and will continue to be, a necessity in research, yet it should not be the only route to career advancement. He suggested alternative ways to credit researchers, such as credit for quality peer review or for sharing materials and data. He also added that the criteria for publication need to change, from an obsession with novelty to a focus on rigorous methodology and pertinent research questions. As Brian put it, this ‘won’t eliminate competition but will align it with good science’.

Peer Review

A recurrent topic during the round table was the peer review process. According to Stuart King, ‘peer review pits reviewers and researchers against each other.’ eLife (a selective, not-for-profit, peer-reviewed, open-access scientific journal) argues instead that a reviewer should be seen as a critical friend who provides meaningful, constructive criticism.

Brian Nosek agrees. COS is even investing in a different approach to peer review. Traditionally, reviewers read and critique the final research results. The review is done by other researchers in the same field, who aren’t paid for the hours they dedicate to it. These reviewers then submit comments to the authors of the paper, but because the project has already been carried out, the authors rarely have the opportunity to address negative feedback. This limitation leads to frustration and missed opportunities. On top of that, if a paper is rejected, the review has to be redone at another publisher, since previous comments aren’t publicly available. Many reviewers thus end up going over the same articles, which is not efficient at all.

COS wants to implement peer review during the project, when research questions and experimental procedures can still be analysed. Early review allows alterations to the experimental design, which is not only effective but also strengthens the relationship between researchers and reviewers. As Brian Nosek explained, ‘Feedback is no longer “look at all the terrible things about your research”.’ Stuart King added that peer review should also be open access and shared between publishers. On top of that, editors and reviewers should talk to each other before sending feedback to researchers. In an often confusing process (just check the hashtag #reviewer2 on Twitter), it is important to hold everyone accountable.

Pandelis Perakakis added another layer to the conversation: sharing peer review with the community through an open platform, for example. This would take back some of the control that big publishers exert over the research community.

The Credit System

Another problem afflicting research is the credit system, illustrated by the extensive author lists on research papers. There is a rigid hierarchy in research: the first author is the person who did most of the work. They designed the study, researched the topic, carried out the experiments, analysed the data, and wrote the manuscript. First authorship is a highly coveted position that translates into career advancement and funding. The subsequent authors are listed according to contribution, and the last author is usually the head of the lab.

We shouldn’t be measuring scientific success by numbers

For Stuart King, there are better ways to attribute credit. Publishers could provide more information about each author as a way to clarify contributions. By doing so, the pressure to accumulate first-author articles, one of the drivers of the highly competitive environment, would decrease. To illustrate, Pandelis Perakakis shared that during his PhD, students were expected to have published three or four papers by the end of the programme. Currently, there are students finishing their PhDs with 14 publications. He argues that we shouldn’t be measuring scientific success by numbers.

Brian Nosek believes there is a ‘universal sentiment in academia’: researchers hold transparency, openness, and collaboration as core values in science, yet the current system of incentives seems to bring mainly frustration.

Overall, getting change started at all seems to be the biggest hurdle. As Brian explained, ‘No actor in the system can change the system. If I tell my students we are going to start acting in a different way, they will be taking a risk. They will likely find themselves behind the competition because I don’t know if someone around us will also change’. For Brian, the solution lies with individual actors who move from research into other areas of science: ‘They shift the norm and help others realise that there are alternative ways to do something’.

Further Reading

Casci, T., & Adams, E. (2020). Research Culture: Setting the right tone. eLife. Retrieved 6 August 2021, from https://elifesciences.org/articles/55543

Venkatraman, V. (2010). Conventions of Scientific Authorship. Science | AAAS. Retrieved 6 August 2021, from https://www.sciencemag.org/careers/2010/04/conventions-scientific-authorship

Author

  • Antónia Fortunato

    Antónia was a biologist, once upon a time. She transitioned from the glamorous world of lab benches and international conferences to her one true love: science communication. Antónia works as a freelance science writer and also manages social media for THINK. In her free time, she pets street cats and educates people on Portuguese gastronomy, often against their will.

